
Hallucinations & Grounding

Act 2 · ~4 min

Theory

Hallucination is output that sounds correct but is factually wrong. The model has no built-in truth-checker — pretraining rewards predicting plausible tokens, not accurate ones.

Type      | What it contradicts
----------|-------------------------------------------------------------
Intrinsic | Something stated in the input or context
Extrinsic | External reality: a fabricated fact, citation, or statistic
Ungrounded — fabricated

Q: Who is cited in Einstein's 1905 relativity paper?

A: Poincaré and Lorentz are cited in footnote 12.

Confident, fluent, false. The footnote does not exist.

Grounded with retrieved context

Context: [1905 paper text — no footnotes]

A: The paper contains no citation footnotes.

Answer drawn from supplied facts, not pattern memory.
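
A minimal sketch of that grounded pattern, assuming the openai v1 Python client; the model name, the instruction wording, and the placeholder context string are illustrative, not part of the lesson. The retrieved text goes into the system message, which restricts the model to it:

```python
# Grounded Q&A: answer only from a supplied passage, never from pattern memory.
# Assumes the openai v1 Python client and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Placeholder for the retrieved document text (illustrative).
context = "...full text of the 1905 paper, which contains no citation footnotes..."
question = "Who is cited in Einstein's 1905 relativity paper?"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    temperature=0,        # lower temperature: fewer creative-but-wrong tokens
    messages=[
        {
            "role": "system",
            "content": (
                "Answer ONLY from the context below. "
                "If the context does not contain the answer, say so.\n\n"
                f"Context:\n{context}"
            ),
        },
        {"role": "user", "content": question},
    ],
)

print(response.choices[0].message.content)
# A grounded answer has the shape: "The paper contains no citation footnotes."
```

The system message does two jobs: it supplies the facts and it gives the model an explicit escape hatch ("say so"), so admitting ignorance beats inventing a footnote.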

Chain-of-thought (CoT) prompting is not a fix: a fluent reasoning chain can still end at a wrong conclusion. Reach instead for retrieval grounding (RAG), source-citation prompts, lower temperature, and constrained decoding. None of these guarantees accuracy. Act 3 covers retrieval in depth.
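
Ahead of Act 3's full treatment, here is a toy sketch of the retrieval step that would supply that context. The `retrieve` helper and the mini-corpus are hypothetical: it scores passages by keyword overlap with the question, where real systems rank by embedding similarity.

```python
# Toy retrieval step for RAG: pick the passage that best overlaps the question.
# Word overlap is a stand-in for embedding similarity, to show the idea only.

def retrieve(question: str, passages: list[str]) -> str:
    """Return the passage sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(passages, key=lambda p: len(q_words & set(p.lower().split())))

# Illustrative mini-corpus (not from the lesson).
passages = [
    "Einstein's 1905 paper on special relativity contains no citation footnotes.",
    "The photoelectric effect paper of 1905 earned Einstein the Nobel Prize.",
    "General relativity was published in 1915.",
]

question = "Who is cited in Einstein's 1905 relativity paper?"
context = retrieve(question, passages)
print(context)
# -> "Einstein's 1905 paper on special relativity contains no citation footnotes."
# This retrieved text becomes the Context block in the grounded prompt above.
```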