Prompt Engineering
Definition: Prompt Engineering Principles
Prompt engineering is the systematic design of LLM inputs to maximize output quality. Core techniques:
- System message: Set role, tone, constraints
- Few-shot examples: Provide input-output demonstrations
- Chain-of-thought (CoT): Instruct step-by-step reasoning
- Output format specification: JSON, markdown, code blocks
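The four techniques above can be combined in a single chat-style prompt. The sketch below assumes the common OpenAI-style role/content message schema; the model call itself is omitted, and all strings are illustrative:

```python
# Sketch: combining system message, few-shot example, CoT, and format
# specification in one prompt. The role/content schema is an assumption
# (OpenAI-style chat format); adapt to your client library.

def build_prompt(paper_abstract: str) -> list:
    system = (
        "You are a wireless communications expert. "   # role and tone
        "Keep answers concise and quantitative."       # constraints
    )
    few_shot = [  # one input-output demonstration
        {"role": "user", "content": "Summarize: OFDM basics"},
        {"role": "assistant",
         "content": "OFDM divides the band into orthogonal subcarriers."},
    ]
    task = (
        "Think step by step, then answer.\n"            # chain-of-thought
        'Return JSON: {"summary": "..."}\n\n'           # output format
        f"Abstract: {paper_abstract}"
    )
    return ([{"role": "system", "content": system}]
            + few_shot
            + [{"role": "user", "content": task}])

messages = build_prompt("We derive a closed-form BER for Rayleigh fading.")
```

Keeping each technique in its own variable makes it easy to ablate one at a time when tuning the prompt.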
Definition: Chain-of-Thought Prompting
Chain-of-thought prompting instructs the model to show its reasoning before the final answer:
"Think step by step. First, identify the channel model. Then, compute the theoretical BER. Finally, compare with the simulation result."
CoT improves accuracy on math, logic, and multi-step reasoning tasks, with reported gains of roughly 10-40% depending on the task and model.
Theorem: Few-Shot Scaling
Performance on classification tasks typically follows a logarithmic law, Acc(k) ≈ Acc₀ + α log(k + 1), where k is the number of few-shot examples. Most gains come from the first 3-5 examples; beyond 10, returns diminish rapidly.
Few-shot examples serve as implicit task specification. The first examples establish the pattern; additional ones provide refinement but with rapidly diminishing returns.
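Assuming a logarithmic scaling law of the form Acc(k) ≈ Acc₀ + α log(k + 1) (the functional form and the constants below are illustrative, not fitted), the diminishing marginal gain per example is easy to tabulate:

```python
import math

# Illustrative few-shot scaling law; Acc0 = 0.60 and alpha = 0.08
# are made-up constants chosen only to show the shape of the curve.
def acc(k: int, acc0: float = 0.60, alpha: float = 0.08) -> float:
    return acc0 + alpha * math.log(k + 1)

# Marginal gain contributed by the k-th example, for k = 1..10.
gains = [acc(k) - acc(k - 1) for k in range(1, 11)]
# The first example is worth several times more than the tenth.
```

Under this model the gain from example 1 is α log 2 ≈ 0.055, while example 10 adds only α log(11/10) ≈ 0.008, matching the "first 3-5 examples matter most" rule of thumb.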
Example: Designing a Prompt for Paper Classification
Design a prompt that classifies wireless papers into categories: channel modeling, signal processing, machine learning, networking.
Structured Prompt
system = """You are a wireless communications expert.
Classify papers into exactly one category:
- channel_modeling
- signal_processing
- machine_learning
- networking
Return JSON: {"category": "...", "confidence": 0.0-1.0}"""
few_shots = [
    {"role": "user", "content": "Title: Deep Learning for MIMO Detection"},
    {"role": "assistant", "content": '{"category": "machine_learning", "confidence": 0.9}'},
    {"role": "user", "content": "Title: 3D Channel Model for Urban Macro"},
    {"role": "assistant", "content": '{"category": "channel_modeling", "confidence": 0.95}'},
]
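One way to assemble these pieces into a request and validate the model's JSON reply. The message schema is the common OpenAI-style format, the actual model call is omitted, and the system/few-shot strings are abbreviated here so the sketch is self-contained:

```python
import json

# Abbreviated versions of the system prompt and few-shot messages
# defined above, repeated so this sketch runs on its own.
system = ("You are a wireless communications expert. "
          'Classify papers into one category. Return JSON: '
          '{"category": "...", "confidence": 0.0-1.0}')
few_shots = [
    {"role": "user", "content": "Title: Deep Learning for MIMO Detection"},
    {"role": "assistant", "content": '{"category": "machine_learning", "confidence": 0.9}'},
]

VALID = {"channel_modeling", "signal_processing", "machine_learning", "networking"}

def build_messages(title: str) -> list:
    return ([{"role": "system", "content": system}]
            + few_shots
            + [{"role": "user", "content": f"Title: {title}"}])

def parse_reply(raw: str) -> dict:
    # Defensive parse: fail loudly if the model violates the contract
    # the prompt specified, rather than silently accepting bad output.
    out = json.loads(raw)
    if out["category"] not in VALID:
        raise ValueError(f"unexpected category: {out['category']}")
    if not 0.0 <= out["confidence"] <= 1.0:
        raise ValueError("confidence out of range")
    return out

msgs = build_messages("Massive MIMO Scheduling with Reinforcement Learning")
result = parse_reply('{"category": "machine_learning", "confidence": 0.85}')
```

Validating the reply against the categories listed in the system message closes the loop: the prompt specifies the contract, and the parser enforces it.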
Quick Check
When is chain-of-thought prompting most beneficial?
Simple factual retrieval
Multi-step reasoning and mathematical computation
Text summarization
CoT forces the model to decompose complex problems, reducing errors in multi-step reasoning.
Common Mistake: Ambiguous Instructions
Mistake:
Using vague prompts like 'analyze this paper' without specifying what to extract.
Correction:
Be explicit: specify the output format, fields to extract, and edge cases. The more precise the prompt, the more reliable the output.
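As an illustration of the correction, compare a vague prompt with an explicit one. The field names and the null-handling rule below are hypothetical, chosen only to show the pattern:

```python
vague = "Analyze this paper."

# Explicit: output fields, types, and edge-case handling all specified.
explicit = """Extract from the paper below and return exactly these fields:
- "title": string
- "channel_model": string, or null if no channel model is used
- "main_result": one sentence
If a field cannot be determined, use null rather than guessing.

Paper: {paper_text}"""

prompt = explicit.format(paper_text="...")
```

The explicit version tells the model what to extract, how to format it, and what to do when information is missing, which is where vague prompts most often fail.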
Historical Note: Chain-of-Thought Prompting
Wei et al. (2022) at Google Brain introduced chain-of-thought prompting, showing that few-shot exemplars with step-by-step reasoning dramatically improved performance; Kojima et al. (2022) then showed the zero-shot phrase "let's think step by step" works without any exemplars. These simple techniques enabled models such as GPT-3 to solve grade-school math problems that they previously failed on completely.