Don't be a smart person making an expensive AI mistake
(Not-so-smart people don't read The Humans in the Loop, so they're not gonna know this stuff)
AI hallucinations are landing people in legal trouble, costing them their jobs, losing companies hundreds of thousands of dollars, putting people’s safety at risk, and getting major cases thrown out of court.
If you’re using AI and you don’t understand what hallucinations are, why they happen, how to manage them, and why they are not going away, you are taking on risk - whether you realise it or not.
I’ve just published a video breaking this down in detail. But if you prefer to read first, here’s the core of it.
What is an AI hallucination?
A hallucination is when an AI makes things up. A lie. A fact that isn’t a fact. A citation that doesn’t exist. A fake study. A real study but misleading results. A half-truth presented as certainty.
Sometimes it’s obvious. Often it isn’t.
A total fabrication might look like a court case that never happened, a study that doesn’t exist, or a book that was never written.
But perhaps more dangerous are the half-truths: a real drug name with the wrong dosage; a real study with the conclusion reversed; a real policy with critical details missing.
A simple example: I once asked ChatGPT to explain the common English idiom “never fill your hot water bottle with cheese.” It gave me a detailed explanation of what the phrase means - even though, of course, the idiom is completely made up.
Or take the time the Chicago Sun-Times published a reading list of 15 books, 10 of which didn’t exist.
Or the lawyer who stood up in court and cited previous cases that had never happened, because an AI had invented them.
These are just a few examples of AI hallucinating - and it happens far more often than most people realise.
Why does AI hallucinate?
Large language models aren’t programmed like normal software.
No one hard-codes answers into them. They’re trained on vast amounts of text - books, articles, websites - and they learn patterns in how language is used.
When you ask a question, the model doesn’t look up the correct answer in a database.
It generates a response one word at a time, predicting what word is most likely to come next based on what it has seen before.
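If you’re curious what that prediction step actually looks like, here’s a tiny illustrative sketch using the small, open GPT-2 model via the Hugging Face transformers library. The model name and prompt are just examples - commercial chatbots use far bigger models - but the mechanism is the same: it prints the model’s top guesses for the next word, with probabilities.

```python
# A minimal sketch of next-word prediction, using the small open GPT-2 model
# via the Hugging Face transformers library (illustrative only - chatbots use
# far larger models, but the mechanism is the same).
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of Australia is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]   # a score for every possible next token
probs = logits.softmax(dim=-1)               # turn scores into probabilities

# There is no database lookup here - just a ranking of plausible continuations.
top = probs.topk(5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx)).strip():>10}  {p.item():.1%}")
```

Notice what’s missing: no lookup, no fact-check, no “I’m not sure” - just a ranking of plausible continuations.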
These models are optimised to sound fluent and helpful - they are not optimised for accuracy. They are designed to always give you an answer. So when they don’t actually know something, they don’t stop and say “I don’t know.”
They guess.
Those guesses can sound polished, confident, and authoritative. Which makes them hard to spot - especially when you’re tired, busy, or under pressure.
Add to that the fact that their training data already contains errors (the internet is not exactly a bastion of truth), and hallucinations aren’t surprising. They’re inevitable.
AI hallucinations are WAY more common than you think
AI companies like to say hallucination rates are improving.
They aren’t.
Recent benchmarking shows hallucination rates of 15% or more across a range of models. If you’re using a chatbot and you’re not seeing hallucinations in roughly one in six answers, one of two things is happening:
you’re an extremely careful prompter
or you’re not noticing them
That second option is far more common.
Why are AI hallucinations dangerous?
Because they feel right.
They’re well written. Calm. Confident. Tailored exactly to what you asked for.
Your brain sees the formatting and the tone and thinks, this looks sensible. And when we’re busy, we’re very happy to let something else think for us.
And these models are trained to be helpful and agreeable.
They mirror your tone. They sound like a smart colleague. And if you say something wrong, they don’t push back. They say yes - and then they fabricate evidence to support that yes.
That combination - confidence plus agreement - presents plenty of opportunity for things to go wrong, across every facet of life.
If this helped you, consider hitting the ❤️ to let me know it was worth your time.
Can hallucinations be fixed?
No. They can’t. No one knows how to eliminate them.
In September 2025, OpenAI released a paper titled Why Language Models Hallucinate. This paper was touted by many as the beginning of the end for hallucinations. But they were wrong. The paper simply confirmed what we already knew: models are rewarded for answering, not for admitting uncertainty. Maybe that’s why 2026 will be the year of AI realism.
The only way to completely avoid hallucinations is not to use AI at all.
You can reduce them by forcing the model to say “I don’t know” more often - but then you’ll probably find the tool frustratingly unhelpful.
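If you want to try that, the usual approach is an instruction along these lines. This is a rough sketch only - the wording, the message format, and the chat_api call are placeholders for whatever tool you actually use, and none of it guarantees honesty:

```python
# A rough sketch: an instruction that nudges a chatbot toward admitting
# uncertainty. The wording, message structure, and the placeholder chat_api
# call are illustrative - adapt them to your own tool.
messages = [
    {
        "role": "system",
        "content": (
            "Only answer when you are confident the information is real. "
            "If you are unsure, or the thing being asked about may not exist, "
            "reply exactly: \"I don't know.\" "
            "Never invent citations, studies, cases, or quotes."
        ),
    },
    {
        "role": "user",
        # The deliberately made-up idiom from earlier in this post:
        "content": "Explain the English idiom 'never fill your hot water bottle with cheese.'",
    },
]
# reply = chat_api(model="your-model", messages=messages)  # placeholder call
```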
So the question isn’t how to eliminate hallucinations. It’s how to manage the risk.
How to manage the risk of AI hallucinations
There are a few non-negotiables if you’re using AI seriously:
Never copy-paste anything consequential without checking it yourself
Treat every AI output as a draft, not a decision
Be explicit about where 100% factual accuracy is required
Name who is accountable for accuracy - not “the tool”
If you’re implementing AI in a business, you need processes. That includes how facts are verified and who signs things off.
You can also reduce risk by using techniques like RAG (retrieval-augmented generation), where the model is connected to a database you control - user manuals, policies, SOPs - instead of relying purely on its training data (which you don’t control).
That helps. It doesn’t solve the problem.
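To make the pattern concrete, here’s a deliberately tiny sketch of the RAG idea: fetch trusted text first, then tell the model to answer only from it. The documents, the toy retriever, and the ask_llm call are all placeholders for whatever you’d actually use - real systems search over embeddings, not word overlap.

```python
# Minimal RAG sketch (assumed setup): retrieve trusted text first,
# then instruct the model to answer ONLY from that text.
# `ask_llm` is a placeholder for whatever chat API you actually use.

documents = {
    "refund-policy.md": "Refunds are available within 30 days of purchase...",
    "sop-onboarding.md": "New staff must complete safety training before week one...",
}

def retrieve(question: str, docs: dict, k: int = 1) -> list[str]:
    # Toy retriever: rank documents by how many question words they share.
    q_words = set(question.lower().split())
    ranked = sorted(
        docs.values(),
        key=lambda text: len(q_words & set(text.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(question: str) -> str:
    context = "\n\n".join(retrieve(question, documents))
    return (
        "Answer using ONLY the context below. "
        "If the answer is not in the context, say 'I don't know.'\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

# answer = ask_llm(build_prompt("How long do customers have to request a refund?"))
```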
Hallucinations are a structural consequence of how these models work. And they’re not going away.
Which means you have to be the sensible adult in a room full of very shiny, very persuasive tools.
If you aren’t, they can cost you your reputation, your career, or a lot of money.
Do you prefer to watch? I got ya:


