November in AI: The insider's briefing
Everything you need to know to be the AI expert in the room
November was one of those months where AI moved fast enough to impress the insiders, confuse the public, and mildly terrify everyone in between. So here’s the distilled version of the past month’s AI news.
(I do a monthly LIVE insider briefing. It’s 30 minutes and free to attend. You can book for December’s briefing here)
Coca-Cola’s AI Christmas ad
Coca-Cola released a fully AI-generated Christmas campaign, proudly declaring it “revolutionary.” The public responded: please stop.
It was technically impressive: 70,000 AI-generated clips refined by 100 humans. But audiences picked up on its artificial gloss instantly (Coca-Cola didn’t try to hide that it was AI-generated) and they didn’t like it.
The AI country hit that wasn’t
Headlines screamed that an AI-generated country song had topped the Billboard charts. That was slightly misleading.
It reached No. 1 on a digital sales chart nobody pays attention to anymore, because hardly anyone buys digital songs. A few thousand engineered purchases will do it.
But the signal here is clear: AI in music isn’t coming. It’s already leaking into the charts, the labels, and the weekly release cycle. Humans are now competing with machines - and with headlines about machines.
AI still can’t do your freelance job (not even close)
A study tested whether AI agents could complete real Upwork jobs end-to-end. The best model produced work a client would accept 2.5% of the time. Most sat under 2%.
The failures weren’t nuanced. They were spectacular. Broken files, half-finished outputs, nonsense deliverables.
Benchmarks say AI is brilliant. The real economy says: not without a human in the loop.
GenAI is mainstream. And the bottleneck is people
Wharton released a three-year study of GenAI adoption in large U.S. enterprises. Usage is now daily for half of leaders, ROI tracking is common, and budgets are shifting from pilots to real programmes.
The surprising part? The limiting factor isn’t model capability. It’s people, governance, and organisational maturity. The tech is ready (so the study says; I’m not convinced). Humans… less so.
Insurers are quietly backing away
As I discussed this week, in a sign that the AI bubble might be shifting towards realism, major insurers are asking regulators for permission to exclude AI liability from cover. Some have already stopped underwriting risks tied to LLMs.
Why? Because if one foundation model fails, thousands of companies could be hit at once. Nobody wants to insure a domino chain reaction.
This will slow 2026 enterprise deployment more than any technical limitation.
The “dead relatives” app
A new app called 2wai launched a feature that lets people create interactive AI versions of deceased loved ones. Yes, really.
The advert shows a pregnant woman getting advice from her dead mother. Online reaction was immediate: Black Mirror called - it wants its script back.
It’s another example of the tech industry’s favourite mantra: move fast, break things, ignore the emotional debris.
Microsoft goes “humanist superintelligence”
Mustafa Suleyman (DeepMind co-founder turned Microsoft AI CEO - I highly recommend his book) announced a new Superintelligence Team focused on solving specific human problems, instead of chasing open-ended artificial general intelligence.
It’s an interesting shift: a major player openly saying general, autonomous AI may not be controllable, so they’re choosing a different path.
It’s also Microsoft’s clearest attempt yet to own the “responsible intelligence” narrative.
OpenAI and Anthropic issue future warnings
Anthropic’s CEO warned that AI could cure cancer and wipe out half of entry-level white-collar roles. He also admitted that no one elected him or Sam Altman (and others who currently hold power in AI) to manage these decisions - which is precisely the point.
OpenAI, meanwhile, says current models are already “80% of the way to an AI researcher” and expects small scientific breakthroughs by 2026. (Just because Sam Altman says it doesn’t make it so!)
Both companies say they want global safety coordination. Neither is getting it.
Lawsuits filed against OpenAI
Several families have filed lawsuits alleging ChatGPT contributed to the deaths of their loved ones. The cases focus on long, emotionally charged conversations where safeguards reportedly failed.
This raises the same unresolved question: when a model sounds human and people treat it as human, who carries the responsibility?
Companies want human-like engagement without human-like accountability. That gap is getting dangerous.
World models: the next frontier?
Yann LeCun left Meta to build a startup focused on world models - systems that understand the physical world, not just language patterns. Fei-Fei Li says the same: the next breakthroughs will come from spatial intelligence.
If LLMs dominated 2023–2025, the next wave may be about perception, physics, and 3D reasoning - and it could reshuffle who leads.
Google’s big moment
Google finally landed a knockout: Gemini 3 arrived to rave reviews, strong enough that even Sam Altman told staff OpenAI may be under short-term pressure.
It’s the first time in years we’ve seen OpenAI playing defence. Momentum flips fast in this field - but for now, the scoreboard tilts Google’s way.
The bubble: will it, won’t it?
Michael Burry is shorting NVIDIA and Palantir. Warren Buffett is betting on Alphabet.
The IMF, Bank of England, and Goldman Sachs are all issuing warnings.
Big Tech stocks slid into double-digit declines through November (excluding Alphabet).
Private markets are full of billion-dollar AI startups with no product and very little disclosure.
And we even have AI companies investing in each other in circles to inflate revenue optics.
Are we in a bubble?
Nobody knows.
We’ll know when it pops.
Watch the full briefing
If you want the full version with my facial expressions, check it out: