Hallucinations

Act 1 · ~5 min

Theory

Hallucinations sit at the intersection of training, generation, and how humans read fluent text.

    1. The model learned statistical patterns from massive web text, much of it unverified.
    2. At generation time, it samples likely continuations. There is no truth check, only probability (sketched after this list).
    3. You read the result as knowledge because it is fluent and confident.
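
To make step 2 concrete, here is a toy sketch of next-token sampling. The candidate tokens and their scores are invented for illustration, and a real model scores tens of thousands of tokens at once, but the mechanics are the same: scores become probabilities, and the output is a draw from them.

```python
import math
import random

# Invented scores for four candidate tokens following
# "The Eiffel Tower is in". The model only has scores;
# it has no notion of which continuation is true.
logits = {"Paris": 5.1, "Lyon": 2.3, "Berlin": 1.9, "1889": 1.2}

# Softmax turns scores into probabilities.
total = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / total for tok, v in logits.items()}

# Generation is a draw from that distribution. Most draws say
# "Paris", but nothing stops an unlikely token from coming out,
# and no step anywhere checks the result against reality.
token = random.choices(list(probs), weights=list(probs.values()))[0]
print(probs, "->", token)
```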

Three habits cut the risk sharply:

  • Ground the question. Paste the actual document and instruct: "Use only the text below. If the answer is not there, say so."
  • Ask for quotes. "Quote the exact sentence that supports your answer." Inventing a quote is harder than inventing a paraphrase.
  • Verify before you commit. For anything that moves money, goes to a customer, or affects health, treat AI output as a draft until you have checked the source. (All three habits are sketched in code after this list.)
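
A minimal sketch of that workflow in plain Python, with the model call left as a placeholder since it depends on your tool. `grounded_prompt`, `quote_is_verbatim`, `SOURCE`, and the example question are illustrative names, not any particular library's API.

```python
SOURCE = """Paste the actual document here."""

def grounded_prompt(question: str, source: str) -> str:
    # Habit 1: restrict the model to the pasted text.
    # Habit 2: demand a verbatim quote that can be checked later.
    return (
        "Use only the text below. If the answer is not there, say so.\n"
        "Quote the exact sentence that supports your answer.\n\n"
        f"TEXT:\n{source}\n\n"
        f"QUESTION: {question}"
    )

def quote_is_verbatim(quote: str, source: str) -> bool:
    # Habit 3, cheaply: a quote that does not appear word for word
    # in the source was invented, so the answer is suspect.
    return quote.strip() in source

prompt = grounded_prompt("What is the refund window?", SOURCE)
# ...send `prompt` to your model of choice, then check its quote:
claimed_quote = "Refunds are accepted within 30 days."  # invented example
if not quote_is_verbatim(claimed_quote, SOURCE):
    print("Quote not found in source; treat the answer as a draft.")
```

The substring check is deliberately strict: a paraphrase fails it, and that is the point, since paraphrases are exactly where invented details hide.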

A higher temperature setting increases the hallucination rate; a lower one reduces it without eliminating it. Retrieval (RAG) helps by putting real source text in the context window, but if the wrong text is retrieved you get a confidently wrong answer, complete with citations. The goal is not zero hallucinations. It is a workflow that catches the wrong ones before they cost you.
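
Why temperature moves the hallucination rate is visible in the arithmetic: temperature sampling typically divides each score by the temperature before the probabilities are computed. A toy demonstration, with invented scores:

```python
import math

def softmax(scores, temperature=1.0):
    # Dividing each score by the temperature before softmax is the
    # usual way temperature sampling is implemented.
    scaled = [s / temperature for s in scores]
    total = sum(math.exp(s) for s in scaled)
    return [math.exp(s) / total for s in scaled]

scores = [5.1, 2.3, 1.9, 1.2]  # invented scores for four candidate tokens
for t in (0.2, 1.0, 2.0):
    probs = [round(p, 3) for p in softmax(scores, t)]
    print(f"temperature {t}: {probs}")

# At 0.2 nearly all probability sits on the top token; at 2.0 the
# tail tokens get real probability mass, which is where fabricated
# continuations come from.
```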