Shhh... the signal quietly shouting that 2026 might be the year of AI realism
It's not what you think
We are told that agentic AI adoption is through the roof. 35% adoption in two years, with another 44% planning to deploy soon, according to the MIT Sloan Management Review. 24% of executives say AI agents already take independent action in their organisation, according to an October IBM report. And agents can reduce human error and cut employees' time spent on low-value work by 25-40%, BCG informs us.
It’s easy to get caught up in the excitement, the awe, the wonder - I certainly did for a while.
But an emerging story suggests that 2026 might be a very different year for AI. Maybe not a bubble bursting. Maybe not a backlash. But maybe some much-needed realism.
And the signal that gave it away wasn’t a research paper, a product launch or a scandal.
It was insurance.
A growing number of major insurers are starting to step back from AI-related cover. Not because they think AI is dangerous, and not because they’ve suddenly become conservative. It’s because they can’t do the one thing insurers must be able to do: estimate the risk.
They can’t see where liability sits when an AI system goes wrong - with the model provider? The developer? The business that deployed it? And they can’t price the probability of failure because the models are opaque.
They can handle one big claim - but they can’t handle a thousand simultaneous claims triggered by the same model failing in the same way and impacting many customers.
That’s the definition of systemic risk. And insurers steer clear of systemic risk.
So what does this mean?
This doesn’t mean AI is over.
It doesn’t mean the bubble is bursting.
It doesn’t even mean we’re in a bubble.
But it could mean AI is having to mature. And maturity brings consequences.
What I expect in 2026 is a shift in behaviour:
More measured adoption. Companies will move fast on the parts of AI that are stable, safe and easy to govern: training staff to become super users (which means knowing the risks), super-charging marketing activity, sharpening presentations, speeding up desk research, writing more effective emails, building better online brands, generating great images, and automating low-risk, low-value work.
Less appetite for agentic AI. Autonomous agents are tricky: AI still hallucinates, and letting a system that hallucinates act without human oversight is highly risky. And if companies believe they might have to pay for those costly mistakes themselves, they’ll be less likely to invest in these projects.
Sharper distinctions between “fun demos” and “production-grade systems”.
Leaders (at least the ones who have educated themselves about AI) will start asking harder questions about reliability, liability and operational risk.
And all this would mean a more realistic market. Less hype. More due diligence. Less “anything is possible”. More “What’s safe? What’s useful? What’s worth the risk?”
And the good news is that AI can be used safely, and it can be useful. Keeping an eye on the risks now will mean more intelligent adoption as the technology develops.
Insurers don’t do theatre (but I do books). They respond to what’s really happening, which is why their signals matter.
2023 was the year of breakthrough.
2024 was the year of acceleration.
2025 is the year of chaos and experimentation.
And 2026… might just be the year AI gets real.
And I’m really looking forward to that!
PS. Speaking of AI risks, you might be interested in joining my session on AI Safety: Protect yourself in the age of invisible risk.


