
Few-Shot Learning

Act 2 · ~4 min

Theory

Few-shot prompting places worked examples directly in the prompt context. The model pattern-matches them and is likely to continue that pattern: no weight updates, no retraining.

Zero-shot: instruction only

Prompt: "Classify this review."

Output: Sentiment: 😊 (an informal guess); the format is invented anew on each call.

Few-shot: 3 demos

Demos:
"Fast shipping" → {label: positive}
"Broken on arrival" → {label: negative}
...

Output: {label: positive, confidence: high}; the output shape is locked to the demos.
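The contrast above comes down to what the prompt string contains. A minimal sketch of assembling a few-shot classification prompt (the demo texts, the `few_shot_prompt` helper, and the JSON label format are illustrative assumptions, not a specific provider's API):

```python
import json

# Worked examples placed directly in the prompt context (assumed demo set).
DEMOS = [
    ("Fast shipping", {"label": "positive"}),
    ("Broken on arrival", {"label": "negative"}),
    ("Works as described", {"label": "positive"}),
]

def few_shot_prompt(review: str) -> str:
    """Prepend worked examples so the model continues the same JSON shape."""
    lines = ["Classify each review. Respond with JSON only.", ""]
    for text, label in DEMOS:
        lines.append(f'Review: "{text}"')
        lines.append(f"Output: {json.dumps(label)}")
        lines.append("")
    # The final, unanswered slot: the model's completion follows the pattern.
    lines.append(f'Review: "{review}"')
    lines.append("Output:")
    return "\n".join(lines)

print(few_shot_prompt("Arrived two days late"))
```

The prompt ends mid-pattern ("Output:"), so the cheapest continuation for the model is another `{label: ...}` object in the demonstrated shape.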

Zero-shot: 0 demos · obvious tasks
One-shot: 1 demo · light format cue
Few-shot: 2–8 demos · unusual output shapes
More demos give tighter output, but quality plateaus past ~5; beyond that, extra tokens are better spent covering edge cases than on sheer count.
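The plateau guidance above can be sketched as a selection heuristic: fill the demo budget with label/edge-case coverage first, then with repeats. The `select_demos` helper and its coverage-first strategy are assumptions for illustration, not a standard algorithm:

```python
def select_demos(pool, max_demos=5):
    """Pick up to max_demos, preferring one demo per distinct label first."""
    chosen, seen_labels = [], set()
    # First pass: one example per label, so rare/edge labels are covered.
    for text, label in pool:
        if label not in seen_labels:
            chosen.append((text, label))
            seen_labels.add(label)
    # Second pass: fill any remaining slots in pool order.
    for demo in pool:
        if len(chosen) >= max_demos:
            break
        if demo not in chosen:
            chosen.append(demo)
    return chosen[:max_demos]
```

With a budget of 3 and a pool holding positive, negative, and neutral reviews, all three labels make it into the prompt before any label is repeated.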

Coming next: chain-of-thought adds a reasoning trace to each example, which tends to lift accuracy on multi-step tasks where the final answer alone leaves intermediate steps ambiguous.