Prompt Engineering

Definition:

Prompt Engineering Principles

Prompt engineering is the systematic design of LLM inputs to maximize output quality. Core techniques:

  1. System message: Set role, tone, constraints
  2. Few-shot examples: Provide input-output demonstrations
  3. Chain-of-thought (CoT): Instruct step-by-step reasoning
  4. Output format specification: JSON, markdown, code blocks
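The four techniques above can be combined in a single prompt. The sketch below assembles them using the common chat-API message convention (role/content dicts); the specific wording and field names are illustrative assumptions, not a fixed API.

```python
# Sketch: combining system message, few-shot examples, CoT, and an
# output format specification into one chat-style prompt.

system_message = (
    "You are a precise research assistant. "           # role and tone
    "Answer concisely and output valid JSON only."     # constraint
)

# Few-shot demonstration: one input-output pair establishing the pattern.
few_shot_examples = [
    {"role": "user",
     "content": "Paper: 'Deep Learning for Massive MIMO Detection'"},
    {"role": "assistant",
     "content": '{"category": "machine learning"}'},
]

task = (
    "Think step by step, then answer.\n"               # chain-of-thought cue
    "Paper: 'Rician Fading Models for mmWave Links'\n"
    'Respond as JSON: {"category": "<label>"}'         # output format spec
)

messages = (
    [{"role": "system", "content": system_message}]
    + few_shot_examples
    + [{"role": "user", "content": task}]
)
# `messages` is now ready to send to any chat-completion endpoint.
```

Note that each technique occupies a distinct slot: the system message sets global behavior, few-shot pairs demonstrate the task, and the final user turn carries the CoT cue and format constraint.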

Definition:

Chain-of-Thought Prompting

Chain-of-thought prompting instructs the model to show its reasoning before the final answer:

"Think step by step. First, identify the channel model. Then, compute the theoretical BER. Finally, compare with the simulation result."

CoT improves accuracy on math, logic, and multi-step reasoning tasks, with reported gains of roughly 10-40% depending on the task and model.

Theorem: Few-Shot Scaling

Performance on classification tasks typically follows $\text{Accuracy}(k) \approx a_0 + (a_\infty - a_0)(1 - e^{-k/k_0})$, where $k$ is the number of few-shot examples, $a_0$ is the zero-shot accuracy, $a_\infty$ is the asymptotic accuracy, and $k_0$ sets how quickly the curve saturates. Most gains come from the first 3-5 examples; beyond 10, returns diminish rapidly.

Few-shot examples serve as implicit task specification. The first examples establish the pattern; additional ones provide refinement but with rapidly diminishing returns.
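The saturating-exponential behavior can be seen numerically. The constants below ($a_0 = 0.55$, $a_\infty = 0.90$, $k_0 = 3$) are illustrative assumptions, not measured values.

```python
import math

def few_shot_accuracy(k: float, a0: float = 0.55,
                      a_inf: float = 0.90, k0: float = 3.0) -> float:
    """Saturating-exponential model of few-shot scaling:
    Accuracy(k) = a0 + (a_inf - a0) * (1 - exp(-k / k0))."""
    return a0 + (a_inf - a0) * (1 - math.exp(-k / k0))

# Accuracy climbs quickly for the first few examples, then flattens.
for k in [0, 1, 3, 5, 10, 20]:
    print(k, round(few_shot_accuracy(k), 3))
```

With these constants, roughly 63% of the total possible gain arrives by $k = k_0 = 3$ examples, matching the "first 3-5 examples" observation.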

Example: Designing a Prompt for Paper Classification

Design a prompt that classifies wireless papers into categories: channel modeling, signal processing, machine learning, networking.
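One possible solution is sketched below. The category list comes from the exercise; the JSON schema, the 'other' fallback, and the builder's field names are illustrative assumptions.

```python
# Illustrative prompt builder for the paper-classification exercise.
CATEGORIES = ["channel modeling", "signal processing",
              "machine learning", "networking"]

def classification_prompt(title: str, abstract: str) -> str:
    """Build an explicit, format-constrained classification prompt."""
    return (
        "Classify the wireless paper below into exactly one category: "
        + ", ".join(CATEGORIES) + ". "
        "If no category fits, answer 'other'.\n"           # edge case
        'Respond only with JSON: {"category": "<label>"}\n\n'
        f"Title: {title}\n"
        f"Abstract: {abstract}"
    )

example = classification_prompt(
    "Learning-Based Beam Selection for mmWave Systems",
    "We train a neural network to predict beam indices from channel data.",
)
```

The prompt names every allowed label, handles the out-of-scope edge case, and pins the output to a single JSON field, so responses can be parsed mechanically.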

Quick Check

When is chain-of-thought prompting most beneficial?

  - Simple factual retrieval
  - Multi-step reasoning and mathematical computation
  - Text summarization

Common Mistake: Ambiguous Instructions

Mistake:

Using vague prompts like 'analyze this paper' without specifying what to extract.

Correction:

Be explicit: specify the output format, fields to extract, and edge cases. The more precise the prompt, the more reliable the output.
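The correction can be illustrated by contrasting the vague prompt with an explicit one; the extracted field names below are illustrative assumptions.

```python
# Vague: the model must guess what "analyze" means.
vague_prompt = "Analyze this paper."

# Explicit: output format, fields, and edge cases are all specified.
explicit_prompt = (
    "From the paper below, extract JSON with keys "
    "'title', 'main_contribution', and 'dataset'.\n"
    "- main_contribution: one sentence, in plain English\n"
    "- dataset: the dataset name, or null if the paper uses none\n"
    "Output only the JSON object, no commentary.\n\n"
    "Paper: <paper text here>"
)
```

The explicit version leaves no room for interpretation: each field has a type, a length constraint, or a defined null case.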

Historical Note: Chain-of-Thought Prompting

2022

Wei et al. (2022) at Google Brain showed that including worked step-by-step reasoning examples in prompts ("chain-of-thought prompting") dramatically improved reasoning performance; Kojima et al. (2022) then showed that simply appending "Let's think step by step" works zero-shot. These simple techniques enabled large language models to solve grade-school math problems they previously failed on completely.