
AI Hallucination

When an AI confidently invents information that isn't true. It's the most important AI failure mode to understand, and there are practical ways to reduce it.

The plain-English definition

An AI hallucination is when ChatGPT, Claude, or Gemini produces an answer that sounds confident and authoritative but is actually false — invented quotes, fake citations, made-up statistics, fabricated history. The model isn't lying on purpose. It's predicting the most plausible-sounding text, and sometimes that prediction is wrong.

Real examples

Why hallucinations happen

LLMs like ChatGPT are trained to predict likely next words, not to verify facts. When you ask about something they know well, you get accurate text. When you ask about something niche, recent, or specific, the model fills the gap with plausible-sounding text — which is sometimes wrong.
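
To make that concrete, here is a toy sketch of next-word prediction in Python. It is not how a real model works internally, and the words and probabilities are invented purely for illustration; the point is that the most likely continuation is chosen for fluency, not checked for truth.

    # Toy illustration of next-word prediction (not a real language model).
    # The context and "probabilities" below are invented for demonstration only.
    next_word_probs = {
        ("The", "capital", "of", "Atlantis", "is"): {
            "Poseidonis": 0.41,   # plausible-sounding, but not a verified fact
            "unknown": 0.32,
            "underwater": 0.27,
        }
    }

    def predict_next(context):
        """Return the most likely next word for a context, if we have one."""
        candidates = next_word_probs.get(tuple(context))
        if candidates is None:
            return None
        return max(candidates, key=candidates.get)

    context = ["The", "capital", "of", "Atlantis", "is"]
    print(predict_next(context))  # -> "Poseidonis": fluent, confident, and unchecked

A real LLM does this at vastly larger scale, but the failure mode is the same: the best-scoring continuation can be fluent, specific, and wrong.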

How to reduce them (the 4 techniques)

  1. Paste source material. Instead of "summarize this paper" without a paper, paste the actual paper into the chat (sketched in the code after this list, together with technique 3).
  2. Use grounded tools. Perplexity and Gemini cite live web sources. ChatGPT Plus and Claude do too when their browsing/search is enabled.
  3. Ask the AI to flag uncertainty. Add this to your prompt: "If you're unsure about a fact, say so explicitly. Don't guess."
  4. Verify anything that matters. Treat AI output like a smart intern's first draft. Trust but verify.
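
A minimal sketch of techniques 1 and 3 combined, assuming the OpenAI Python SDK (openai >= 1.0) and an API key in the environment; the model name, source text, and question are placeholders, and the same idea carries over to Claude's or Gemini's APIs.

    # Techniques 1 and 3 together: ground the model in pasted source text
    # and tell it to flag uncertainty instead of guessing.
    # Assumes: openai >= 1.0 installed and OPENAI_API_KEY set in the environment.
    from openai import OpenAI

    client = OpenAI()

    # Technique 1: paste the actual source material (placeholder below).
    source_text = """<paste the actual paper, report, or article here>"""

    # Technique 3: ask the model to flag uncertainty rather than guess.
    system_prompt = (
        "Answer using ONLY the source text provided by the user. "
        "If you're unsure about a fact, say so explicitly. Don't guess. "
        "If the answer isn't in the source text, say so instead of inventing one."
    )

    user_prompt = (
        f"Source text:\n{source_text}\n\n"
        "Question: What are the main findings?"
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whichever chat model you have access to
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    )

    print(response.choices[0].message.content)

The system prompt does two things at once: it restricts the model to the pasted source and gives it explicit permission to say it doesn't know, which tends to reduce confident fabrication.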

Which models hallucinate least?

In 2026, Claude has the best reputation for honesty about uncertainty. ChatGPT and Gemini are competitive, but they tend to sound just as confident when they're wrong. No model is hallucination-free. See our ChatGPT vs Claude comparison for more.

Related terms
