
What is Hallucination?

AI hallucination occurs when a large language model generates text that is factually incorrect, fabricated, or internally inconsistent, yet presents it with apparent confidence. The term borrows from psychology: the model 'perceives' and reports something that does not exist in reality.

Last updated: March 6, 2026

Hallucination Explained

Hallucination is one of the most widely discussed failure modes of modern AI language models. Unlike a human who might say "I'm not sure," an LLM will often generate a plausible-sounding but completely invented answer — including fake citations, non-existent laws, fabricated statistics, and incorrect dates — and present it in the same confident tone as accurate information. This happens because language models are fundamentally next-token prediction engines, not fact-retrieval systems. They generate text that is statistically probable given the context, not text that is guaranteed to be true.
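The next-token framing can be made concrete with a toy sketch. Everything here is illustrative (the tokens and scores are invented): a model that always emits the highest-probability continuation has no mechanism for preferring a true answer over a merely fluent one.

```python
import math

def pick_next_token(logits):
    """Greedy decoding: softmax the raw scores and return the most
    probable token. Nothing here checks whether the token is true."""
    probs = {tok: math.exp(score) for tok, score in logits.items()}
    total = sum(probs.values())
    probs = {tok: p / total for tok, p in probs.items()}
    return max(probs, key=probs.get), probs

# Toy scores for "The treaty was signed in ...": a wrong year can
# simply outscore the right one in the learned statistics.
token, probs = pick_next_token({"1989": 2.0, "1991": 1.0, "unknown": 0.1})
```

If the training data happened to associate the wrong year with this phrasing more often, the model emits it just as confidently, which is the mechanism behind factual hallucination.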

Why Hallucinations Occur

The root cause is the gap between a model's training distribution and the query it receives. When asked about obscure topics, recent events, or specific factual details that were rare in its training data, the model lacks the grounding to produce an accurate answer. Rather than refusing, it interpolates from related patterns and produces something that sounds correct. This is compounded by reinforcement learning from human feedback (RLHF), which rewards confident, fluent responses and thereby inadvertently rewards the confident style in which hallucinations are delivered.

Types of Hallucinations

Researchers classify hallucinations in several ways. Factual hallucinations are false claims presented as true facts (e.g., inventing a study that does not exist). Faithfulness hallucinations occur when the model contradicts or diverges from source material it was given — the model was told to summarize a document but invents content not present in the original. Reasoning hallucinations involve flawed logic chains that lead to incorrect conclusions even from correct premises. Each type requires different mitigation strategies.

Detecting and Reducing Hallucinations

Several techniques reduce hallucination rates. Retrieval-Augmented Generation (RAG) grounds responses in verified source documents, dramatically reducing factual errors. Chain-of-thought prompting asks the model to reason step by step before giving a final answer, which catches some logical errors. Self-consistency sampling generates multiple responses and takes the majority answer. For high-stakes applications, human review or automated fact-checking against a trusted database is still the most reliable safeguard. Users should also treat any specific claim — especially citations, statistics, and proper nouns — as unverified until cross-checked with a primary source.
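Of these techniques, self-consistency sampling is the simplest to sketch. The snippet below is illustrative: `ask_model` stands in for any LLM call with nonzero temperature, and the scripted answers replace real model output for demonstration. The majority vote and the agreement ratio are the core of the technique.

```python
from collections import Counter

def self_consistency(ask_model, prompt, n=5):
    """Sample the model n times and return the majority answer plus
    the agreement ratio. Low agreement is itself a warning sign."""
    answers = [ask_model(prompt) for _ in range(n)]
    majority, count = Counter(answers).most_common(1)[0]
    return majority, count / n

# Deterministic stand-in for a real model call, for demonstration only.
scripted = iter(["42", "42", "41", "42", "42"])
answer, agreement = self_consistency(lambda p: next(scripted), "What is 6 * 7?")
# answer == "42", agreement == 0.8
```

In practice you would also set an agreement threshold below which the answer is flagged for human review rather than returned.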

Hallucination in Practical AI Tools

In everyday AI-powered tools, hallucination risk is highest when asking about very specific facts (exact prices, phone numbers, dates), events after the model's training cutoff, and highly technical or niche domains with little training data. Providing clear context in your prompt, choosing models with recent training cutoffs, and enabling web-search or RAG features where available are the most practical ways to get more reliable answers.
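Providing context can be as simple as pasting the relevant source text into the prompt and instructing the model to stay inside it. A minimal sketch of that grounding pattern (the function name, instruction wording, and sample source text are all made up for illustration):

```python
def build_grounded_prompt(question, sources):
    """Inline retrieved source text and tell the model to answer only
    from it -- the basic move behind RAG-style grounding."""
    context = "\n\n".join(
        f"[Source {i}] {text}" for i, text in enumerate(sources, start=1)
    )
    return (
        "Answer using ONLY the sources below. If the answer is not "
        "in the sources, say you don't know.\n\n"
        f"{context}\n\nQuestion: {question}\nAnswer:"
    )

prompt = build_grounded_prompt(
    "What does the Pro plan cost today?",
    ["Pricing page (fetched today): Pro plan is $12/month."],
)
```

The explicit "say you don't know" escape hatch matters: without it, a model that cannot find the answer in the sources tends to fall back on its training data and hallucinate.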

Real-World Examples

1. A lawyer asked ChatGPT for case citations, and the model invented several plausible-sounding but entirely fictitious cases, including "Varghese v. China Southern Airlines." The fabricated citations were filed in a real lawsuit (Mata v. Avianca, 2023) and embarrassed the attorney in court.

2. An AI assistant, asked about a niche open-source library, generates a detailed code example using function names that do not exist in the actual library.

3. A chatbot asked about a company's current pricing confidently returns last year's prices because it was not given access to live data.

4. A model summarizing a research paper invents a statistic not present in the original text, a faithfulness hallucination that changes the meaning of the summary.
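The second example, invented function names, is also the easiest to catch mechanically: before running AI-generated code, check that every name it references actually exists in the installed package. A minimal sketch using Python's standard introspection tools (the hypothetical hallucinated name here is `json.parse`, a plausible-sounding attribute the standard `json` module does not have):

```python
import importlib

def missing_attributes(module_name, names):
    """Return the names a generated snippet references that do not
    exist in the installed module -- likely hallucinated API calls."""
    module = importlib.import_module(module_name)
    return [name for name in names if not hasattr(module, name)]

# "loads" and "dumps" are real json attributes; "parse" is not.
suspect = missing_attributes("json", ["loads", "dumps", "parse"])
# suspect == ["parse"]
```

This only verifies that names exist, not that the generated code uses them correctly, so it complements rather than replaces running the code and reading the library's documentation.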
