Few-Shot Learning
Theory
Few-shot prompting places worked examples directly in the prompt context. The model pattern-matches them and is likely to continue that pattern: no weight updates, no retraining.
Zero-shot: instruction only
Prompt: "Classify this review."
Output: Sentiment: positive (informal guess); the format is invented per call.
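A minimal sketch of the zero-shot case (the function name and prompt wording are illustrative, not a specific API): the prompt carries only the instruction and the input, so nothing constrains the output format.

```python
# Zero-shot: instruction plus input, no demos.
# With no examples to imitate, the model invents the output format on each call.
def zero_shot_prompt(review: str) -> str:
    return f"Classify this review.\n\nReview: {review}\nSentiment:"

print(zero_shot_prompt("Fast shipping"))
```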
Few-shot: 3 demos
Demos: "Fast shipping" → {label: positive}
Demos: "Broken on arrival" → {label: negative}
...
Output: {label: positive}; the shape matches the demos, so it is locked.
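The few-shot version can be sketched as plain prompt assembly (the demo list and function name are ours; the JSON label shape mirrors the demos above): the examples are concatenated before the new input, and their shared output format is what locks the response shape.

```python
import json

# Few-shot: prepend worked examples so the model continues the pattern.
# The {"label": ...} shape is established by the demos, not by instructions.
DEMOS = [
    ("Fast shipping", {"label": "positive"}),
    ("Broken on arrival", {"label": "negative"}),
    ("Exactly as described", {"label": "positive"}),
]

def few_shot_prompt(review: str) -> str:
    lines = ["Classify each review."]
    for text, label in DEMOS:
        lines.append(f"Review: {text}")
        lines.append(f"Output: {json.dumps(label)}")
    # The new input ends with the same "Output:" cue the demos used.
    lines.append(f"Review: {review}")
    lines.append("Output:")
    return "\n".join(lines)

print(few_shot_prompt("Arrived two days late"))
```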
Zero-shot: 0 demos · obvious tasks
One-shot:  1 demo  · light format cue
Few-shot:  2–8 demos · unusual output shapes
Coming next: chain-of-thought adds a reasoning trace to each example, which tends to lift accuracy on multi-step tasks where the final answer alone leaves intermediate steps ambiguous.
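As a preview of that idea (a hypothetical demo of our own, not from a specific library): a chain-of-thought demo simply inserts a reasoning line between the input and the final label.

```python
# Chain-of-thought demo: a short reasoning trace precedes the final label,
# so the intermediate step is spelled out instead of left ambiguous.
COT_DEMO = (
    "Review: Broken on arrival\n"
    "Reasoning: the product did not work when received, a clear defect.\n"
    'Output: {"label": "negative"}'
)
print(COT_DEMO)
```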