When an AI confidently invents information that isn't true. It's the single most important AI failure mode to understand, and there are practical ways to reduce it.
An AI hallucination is when ChatGPT, Claude, or Gemini produces an answer that sounds confident and authoritative but is actually false — invented quotes, fake citations, made-up statistics, fabricated history. The model isn't lying on purpose. It's predicting the most plausible-sounding text, and sometimes that prediction is wrong.
LLMs like ChatGPT are trained to predict likely next words, not to verify facts. When you ask about something well covered in their training data, you get accurate text. When you ask about something niche, recent, or highly specific, the model fills the gap with whatever continuation seems statistically likely, and that guess is sometimes wrong.
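To see why this happens, here's a toy sketch (a minimal illustration, not any real model's code): a tiny bigram "language model" in Python that only tracks which word tends to follow which. Asked to complete a sentence about a country it has no data for, it still produces a fluent, confident answer, because it knows word statistics, not facts. The training sentences and the country name "freedonia" are invented for this example.

```python
import random
from collections import defaultdict

# Toy "training data": three made-up sentences, just for illustration.
corpus = (
    "the capital of france is paris . "
    "the capital of spain is madrid . "
    "the capital of italy is rome ."
).split()

# Count which word follows which (a bigram model).
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample the next word in proportion to how often it followed `prev`."""
    options = counts[prev]
    words = list(options)
    weights = [options[w] for w in words]
    return random.choices(words, weights=weights)[0]

# Ask about a country the model has never seen. It has no facts to check,
# so it continues from "is" with whatever is statistically plausible --
# a confident, fluent, and wrong answer. That is the hallucination pattern.
prompt = ["the", "capital", "of", "freedonia", "is"]
print(" ".join(prompt + [next_word("is"), "."]))
```

Real LLMs are vastly more sophisticated, but the failure shape is the same: when the statistics are thin, the most plausible-sounding continuation wins, whether or not it's true.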
As of 2026, Claude has the strongest reputation for being honest about uncertainty. ChatGPT and Gemini are competitive but tend to sound more confident when they're wrong. No model's hallucination rate is zero. See our ChatGPT vs Claude comparison for more.