Are you one of the 37% who overlook AI hallucinations?
(and what damage has that done so far?)
Imagine your lawyer, standing in court, quoting a legal precedent that never existed.
Or a national newspaper publishing a quote from an “expert” who isn’t real. Or a startup mapping out an entire go-to-market strategy based on made-up stats about its competitors.
These aren’t fringe cases or future threats. They’re happening now. And they’re happening because too many people don’t know how to spot AI hallucinations.
So what exactly is an AI hallucination?
It’s when AI tools confidently make stuff up. Not typos. Not misunderstandings.
We’re talking completely fictional facts (“In 2023, Country X saw a 200% rise in…”); quotes from people who never said those words; and references to reports or journals that don’t exist.
Add to that AI’s sycophancy - its built-in tendency to flatter your assumptions, reinforce your biases, and tell you what it thinks you want to hear - and you’ve got a recipe for confident nonsense.
These aren’t technical glitches. They’re baked into how these systems work: they predict the most likely next word based on patterns in their training data - not based on fact.
And unless you know how to check what they’re saying, it all sounds incredibly plausible. That’s where you go wrong.
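To make that “most likely next word” idea concrete, here’s a minimal sketch in Python. It assumes the Hugging Face transformers library, PyTorch, and the small GPT-2 model - none of which are mentioned above - and it simply shows the model ranking plausible continuations of the fabricated statistic from earlier, with no notion of whether any of them are true.

```python
# Minimal sketch (assumes `pip install torch transformers`).
# GPT-2 is a stand-in here for whatever model you actually use.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "In 2023, Country X saw a 200% rise in"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits          # a score for every token in the vocabulary

next_token_scores = logits[0, -1]            # scores for what comes right after the prompt
top = torch.topk(next_token_scores, k=5)     # the five most "plausible" continuations

for score, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}  score={score.item():.1f}")

# Whatever prints is simply what is statistically likely to follow the prompt.
# Nothing in this loop checks whether any of it is true.
```

That’s the whole mechanism: plausibility, ranked. Truth never enters the picture.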
Here’s what people are (and aren’t) doing about it
I ran a survey asking: “Do you take any steps to guard against AI hallucinations?”
The results were bleak.
37% either take no steps or don’t even know what hallucinations are (a share that’s even higher among women).
Less than a third manually review the output.
Just over a quarter cross-check with external sources or ask for citations.
Fewer still refine their prompts or compare multiple tools.
In other words: more than one in three AI users is effectively flying blind.
Why this happens
Even smart, competent people fall into the trap. AI is fast, fluent, and sounds authoritative. And in a world where output speed is prized, it’s easy to mistake confidence for truth.
Add vague prompts, pressure to deliver, and a lack of awareness about how these tools actually work - and hallucinations and sycophancy become invisible risks.
The real cost of ignoring it
If you rely on AI and don’t guard against this stuff, you risk:
Publishing false information that damages your reputation.
Making decisions based on fiction.
Losing credibility with colleagues, customers, investors.
This isn’t about being perfect. It’s about building the habit of checking before trusting.
So what can you do?
You don’t need to be technical. Just start with three habits:
Be skeptical: Plausibility isn’t proof.
Treat AI like a first draft: Always validate before you use.
Sharpen your prompts: Ask for sources (and then cross-reference them - you’d be amazed how often the sources AI cites turn out to be completely useless). Request counterarguments. Don’t let the model flatter your assumptions. If you work with AI through code, the sketch after this list shows one way to build those asks into every request.
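Here’s that sketch. It’s illustrative only: it assumes the official OpenAI Python SDK (with an API key already set) and the “gpt-4o-mini” model name, neither of which this post prescribes - the same habit works just as well typed into any chat window.

```python
# Rough sketch, assuming `pip install openai` and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

question = "Summarise the main criticisms of four-day work weeks."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical choice; swap in whatever model you use
    messages=[
        {
            "role": "system",
            "content": (
                "Cite a named, checkable source for every factual claim, "
                "flag anything you are unsure about, and include the "
                "strongest counterargument to your own answer."
            ),
        },
        {"role": "user", "content": question},
    ],
)

print(response.choices[0].message.content)
# The citations still need manual cross-referencing: asking for sources
# reduces bluffing, it does not guarantee the sources exist.
```

The system message does the heavy lifting - sources you can cross-reference, uncertainty flagged, and a counterargument so the model can’t simply agree with you. The checking is still your job.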
Want help building that muscle?
If you’ve been caught out, it’s time to build your AI confidence. Why not sign up to the waitlist for my superhuman AI community?
If you’re using AI to create, write, think, or build - and want to do it better - join us on our superhuman AI journey.
I actually go the other way sometimes: I use AI hallucinations as a creative partner, deliberately pushing the model into hallucinating just to see what happens. But for general day-to-day use, hallucinations are a real concern.