The Insider's AI briefing
January's been a rollercoaster in the world of AI
January was one of those months when AI stopped being theoretical in several domains at once - vehicles, health, hiring, policing, and media - while many people were still treating it as a set of tools in a browser tab. So here are the AI stories from January that actually mattered.
(I run a monthly LIVE insider briefing. It’s 30 minutes and free to attend. You can register for February’s session here.)
Nvidia pushes AI into the real world
Nvidia unveiled a new platform for autonomous vehicles, framed as a step towards “physical AI”.
The claim is that reasoning models can handle the long tail of rare, unpredictable driving scenarios - the last 1% that has stalled self-driving for years. Jensen Huang called this “physical AI’s ChatGPT moment” and reiterated his vision that all cars and trucks will eventually be autonomous.
Nvidia says robotaxis and driverless Mercedes vehicles are coming soon.
The ambition is clear. The timelines are familiar. But does humanity get a say in whether we want this vision?
Robotaxis reach Europe
London is expected to host some of Europe’s first robotaxi pilots from 2026, backed by fast-tracked legislation.
What makes this interesting is that Europe isn’t America. Road deaths are already lower, cities are denser, and public transport plays a bigger role. The safety narrative that dominates US robotaxi launches fits less neatly here.
The technology may be ready (or maybe not, given that these cars aren't technically driverless). The justification is less obvious.
AI moves into healthcare
Both OpenAI and Anthropic launched healthcare products.
ChatGPT Health lets individuals connect medical records and wellness apps to summarise information and prepare for appointments. Claude’s healthcare tools target organisations, supporting administrative, research, and regulatory work.
Both companies stress these tools support clinicians rather than replace them.
The upside is access and efficiency. The open questions are data, hallucinations, and what assumptions about “health” get embedded into the systems.
Sleep data predicts disease
Stanford researchers published SleepFM, an AI model that predicts more than 130 health conditions from a single night of sleep.
Reported accuracy rates were high across conditions including Parkinson’s, dementia, and heart attacks.
It’s striking work. It also raises obvious questions about bias - the data comes from sleep clinic patients - and about whether people want to know long-term health risks years before symptoms appear.
Hiring starts testing “AI-assisted thinking”
McKinsey is piloting graduate interviews where candidates work with its internal AI assistant.
The assessment isn’t about the right answer. It’s about how candidates prompt, question, and judge AI output. This reflects how junior consulting work is already changing.
Entry-level analysis roles are shrinking. Human value now sits in judgement, context, and sense-making on top of AI output.
Entry-level jobs keep getting harder to find
The FT reported continued declines in graduate hiring, with intense competition for fewer roles. AI is part of the story, but not the whole thing.
Higher employment taxes, business rates, and weak economic confidence are also contributing. For new graduates, it’s a perfect storm.
A counterpoint also emerged: shrinking working-age populations may mean AI offsets labour shortages rather than triggering mass unemployment. Both forces are now in play.
A policing scandal powered by hallucinations
West Midlands Police admitted it relied partly on AI-generated intelligence to justify banning Israeli fans from a football match.
The AI tool hallucinated incidents that never happened. Those claims made it into official intelligence, reinforcing a decision that appeared to have been made in advance.
After initially denying AI use, the Chief Constable later admitted it and stepped down.
This wasn’t a hypothetical risk. It was generative AI inside a real decision-making system, with real consequences.
Anthropic published Claude’s constitution
Anthropic finally published the constitution that governs how Claude is meant to behave.
Instead of rigid rules, it focuses on values, intent, and judgement, with clear red lines around unethical requests.
I’ve been critical of “constitutional AI” without transparency. Publishing the document matters. For now, Anthropic appears to be taking AI safety more seriously than most major labs.
Dario Amodei issues a warning
Anthropic’s founder published an essay arguing we’re entering AI’s most dangerous phase.
He focused on risks - from bioterrorism to economic disruption - and warned that up to half of entry-level office jobs could disappear within five years. He called for stronger intervention and greater transparency from AI labs.
The unresolved question is whether anyone is listening. And whether partial responsibility is enough to matter.
ChatGPT ads arrive
OpenAI confirmed ads are coming to ChatGPT for free and lower-tier users.
Ads are framed as a way to subsidise access. The risk is trust. Once ads exist, users will inevitably question whether responses are shaped by relevance or revenue.
This marks a shift: from AI as a thinking partner to AI as an advertising platform.
That’s the distilled version of January’s Insider Briefing.
If you want the full context, tone, and nuance, you can watch the session here: