December in AI: The insider’s briefing
Everything you need to know to be the AI expert in the room
December was one of those months where AI quietly crossed several important lines - technically, commercially, and culturally - while most people were distracted by end-of-year noise. So here’s the distilled version of the AI stories from December that actually matter.
I run a monthly LIVE insider briefing. It’s 30 minutes and free to attend. You can book January’s session (29th) here.
DeepSeek turns up the pressure
The Chinese lab DeepSeek released two new models, V3.2 and V3.2 Speciale, claiming performance that matches or beats GPT-5 and Gemini 3 Pro on reasoning, maths, and coding.
Benchmarks should always be treated cautiously - especially self-reported ones - but there are two signals here that are hard to ignore.
First, pricing. DeepSeek is dramatically cheaper than the leading Western models, which puts real pressure on companies building for long-term margins.
Second, this is another reminder that frontier capability is no longer geographically or politically contained.
That said, DeepSeek remains text-only. No images, no video, no multimodal stack. This isn’t a ChatGPT replacement. It’s more of a warning shot.
Anthropic starts behaving like a public company
Reports emerged that Anthropic is laying the groundwork for an IPO, potentially as early as 2026. Hiring IPO lawyers doesn’t mean a listing is imminent - but it does mean the internal mindset is changing.
At the same time, Anthropic is reportedly seeking a valuation around $300bn in its next private round, roughly on par with OpenAI. OpenAI, meanwhile, is also rumoured to be IPO-bound, with talk of trillion-dollar valuations.
Public markets will eventually have to decide whether these numbers represent a floor or a ceiling. Whoever lists first becomes the test case.
And once companies start optimising for shareholders, incentives change - from pure capability to predictability, defensibility, and margins.
Wall Street waves off bubble fears (for now)
Despite persistent chatter about an AI bubble, major investment banks expect US markets to keep rising through 2026.
A Financial Times survey of nine banks found unanimous expectations of S&P 500 growth next year, with AI still treated as a structural tailwind rather than a risk.
The logic rests on three things: supportive policy, expected rate cuts, and continued belief that AI spending will eventually convert into earnings.
In short: public markets are still willing to fund the build-out. Whether that patience holds is another question.
Anthropic studies itself - and everyone nods along
Anthropic published an internal study claiming its engineers use AI for 60% of tasks, with a 50% productivity boost.
I’d take this with a very large pinch of salt. People are not good at estimating their own productivity, and Anthropic employees are hardly neutral observers.
The most interesting detail was buried: 27% of AI-assisted work wouldn’t have existed otherwise. Maybe that means AI isn’t just speeding up work - it’s expanding it.
Whether that’s a productivity miracle or a recipe for cognitive overload depends entirely on context. Strangely, very few people questioned the study at all.
OpenAI publishes an enterprise report - and it’s circular
OpenAI released its first State of Enterprise AI report, which confidently concludes that enterprises using ChatGPT are… benefitting from ChatGPT.
Adoption is “accelerating”. Power users are pulling further ahead. Workers report value.
None of this is surprising. Much of it is self-reported. Some of it is pure circular reasoning - especially the idea that “frontier workers” are defined by how many messages they send.
More concerning is the quiet admission that anonymised enterprise data underpins the report - while OpenAI simultaneously claims sensitive business data remains fully under customer control.
Those two statements don’t sit comfortably together.
The chatbot race becomes real
For the first time, Google is seriously challenging OpenAI’s early lead.
Gemini 3 has been widely praised, and Google’s advantage isn’t just model quality - it’s distribution. Gemini is embedded across Android and Google’s ecosystem by default, reaching users who never consciously chose a chatbot.
OpenAI still dominates mindshare. But the competition is no longer theoretical.
Google tries smart glasses again
Google announced AI-powered smart glasses launching in 2026, designed with Samsung and deliberately styled to look like normal eyewear.
Meta’s relative success has proven that people will wear AI glasses. Whether they should is another debate entirely - particularly around privacy and ambient data capture.
This time, Google has stronger models and deeper integration. That makes the product more viable - and the implications more serious.
Disney picks a side
Disney signed a three-year licensing and investment deal with OpenAI, giving Sora access to more than 200 iconic characters across Disney, Marvel, Pixar, and Star Wars.
This isn’t Disney resisting AI. It’s Disney choosing its partner and monetising its catalogue.
For OpenAI, this is a powerful moat. For creators and animators, it’s deeply unsettling. The IP is protected. The labour story is far murkier.
McDonald’s pulls an AI Christmas ad
McDonald’s Netherlands released a fully AI-generated Christmas ad - and pulled it almost immediately after backlash.
The criticism wasn’t just about jobs. The ad was bleak, cynical, and emotionally tone-deaf. AI made it cheaper and faster but not better.
The most interesting moment came afterwards, when a production company released a satirical response pointing out that AI doesn’t get paid, doesn’t have contracts, and doesn’t have rights.
That contrast landed.
Image generation quietly improves
OpenAI upgraded its image model to GPT Image 1.5, and for once the praise feels justified.
The big shift is control. Iteration is more stable, edits are more precise, and small changes no longer destroy the whole image. That matters far more than raw aesthetics.
This is OpenAI catching up to Google - not leaping ahead.
“Slop” becomes word of the year
Merriam-Webster named slop its 2025 Word of the Year - shorthand for low-quality, mass-produced AI content.
Oxford chose ragebait. Collins picked vibe coding.
Time names the AI architects
Time named the architects of AI as Person of the Year, including Jensen Huang, Sam Altman, Demis Hassabis, Fei-Fei Li, Dario Amodei, Mark Zuckerberg, Elon Musk, and Lisa Su.
It’s recognition - and a reminder of how concentrated power has become.
(If you missed it, check out Meet the AI billionaires.)
AI passes the CFA exams - again
A new Cornell study found that leading AI models can now pass all three levels of the CFA exams with scores in the high 90s.
Impressive. But exams are not jobs - and contamination of training data remains an open question.
This headline keeps resurfacing because it flatters the idea of replacement. Reality is messier.
Children are already using AI - a lot
I commissioned a study of 150 UK parents and found that 63% of children are already using chatbots, often weekly or daily.
This will be a major focus for me in 2026. Right now, children are effectively unregulated test subjects.
Amazon quietly pulls an AI recap feature
Amazon tested AI-generated TV recaps on Prime Video - and then pulled the feature after users noticed persistent factual errors.
Another reminder: hallucinations aren’t a theoretical risk. They come with the territory.
ScotRail and the problem of voice rights
ScotRail replaced its AI announcer after it emerged that a voice actor’s recordings had been used without proper consent.
The replacement voice is now based on a ScotRail employee, described as “ethically produced”.
This won’t be the last dispute of its kind. Voice is becoming a frontline issue in AI labour rights.
That’s it for December.
January’s Insider Briefing takes place on the 29th and is available to book for free now.
If you want the full version of December in AI with tone, context, my facial expressions and a feline cameo, you can watch the recording here: