The top 30 AI stories from March 2026
Robot training farms, Anthropic vs the Pentagon, and the AI system making bombing decisions in Iran.
Every month I spend 20+ hours reading, verifying, and making sense of what’s actually happening in AI - so you don’t have to. Then I run a live session called the Insider’s AI Briefing, where I take my audience through all the stories that matter in 30 minutes.
This is the summary of March’s session (which ran yesterday). Thirty stories. The ones I’d want to know about if I were running a business and couldn’t afford to miss what’s coming.
China is building robot training farms
China is building a network of state-funded facilities designed to produce training data for humanoid robots. The FT visited one in Wuhan, where 70 young graduates work eight-hour shifts teaching 46 robots everyday tasks: serving food, wiping tables, folding laundry. Every repeated movement is captured by cameras and sensors. The site produces around 100 hours of usable data per day.
Beijing has identified “embodied intelligence” as one of six future industries in its new five-year plan. Technical barriers remain. But right now, these robots are learning to fold towels.
But to me, the end point of a humanoid robot that can move through physical space, manipulate objects, and respond to instructions is more than a better waiter. It’s a soldier. And that worries me, because it’s much easier to go to war if your soldiers don’t have loved ones back home worrying about them. The FT didn’t ask those questions, though.
You may not wear these in my home
A joint investigation by two Swedish newspapers revealed that contract workers in Kenya are reviewing intimate footage captured by Meta’s Ray-Ban smart glasses. Their job is to label objects to train Meta’s AI. But the footage isn’t just furniture and street scenes - workers reported seeing people using the toilet, undressing, and having sex. Meta says faces are blurred. Workers say the blurring doesn’t consistently work.
The story has escalated. The UK’s Information Commissioner’s Office wrote to Meta requesting information on its data protection compliance. In the US, a class action has been filed alleging false advertising and privacy violations - the central claim being that glasses marketed as “designed for privacy, controlled by you” were running a pipeline sending intimate footage to workers overseas. Seven million people bought these glasses in 2025. And they’ve just relinquished all control over their privacy.
Anthropic had the biggest month of any AI company in history
Most people only heard about one story. There were several.
The Pentagon standoff. Anthropic had a $200 million contract with the Pentagon to deploy Claude in classified systems. During negotiations, Anthropic drew two lines: no mass domestic surveillance of Americans, no fully autonomous weapons. The Pentagon’s position was that a private company shouldn’t dictate how the military uses its technology. They wanted Anthropic to accept “any lawful use” and remove the safeguards.
Anthropic refused. President Trump and Defence Secretary Pete Hegseth publicly cut ties, labelled the company a “supply chain risk” - a designation normally reserved for foreign firms suspected of espionage - and directed all federal agencies to stop using Claude. Anthropic filed lawsuits. The Department of Justice called the company “an unacceptable risk to national security.”
Then it emerged that on the same day the Pentagon formalised that designation, its own Under Secretary emailed Dario Amodei saying the two sides were “very close” on the disputed issues. Publicly, a national security threat. Privately, nearly aligned.
OpenAI and X both signed Pentagon deals without conditions. Dario Amodei called OpenAI’s deal “maybe 20% real and 80% safety theatre,” and pointed to OpenAI co-founder Greg Brockman’s $25 million Trump donation. More than 30 staff from OpenAI and Google signed a legal brief backing Anthropic’s lawsuit. The first day of the hearing went well for Anthropic.
The consumer reaction. ChatGPT uninstalls surged 295% in a single day. Claude hit number one on the App Store. OpenAI’s robotics director resigned publicly over the Pentagon deal. OpenAI’s VP of Research, Max Schwarzer, left - for Anthropic.
The numbers. Anthropic was on pace for $9 billion in annual revenue at the start of the year. By early March, that had reportedly more than doubled to $20 billion. The share of US companies paying for Anthropic’s tools went from around 4% a year ago to 40%. Over the same period, OpenAI’s share of enterprise spending dropped from 50% to 27%.
If you’ve been following Claude’s capabilities and want to understand the practical side of what’s driving this shift, Claude Co-work: 12 things I learned the hard way is a useful place to start.
What 81,000 people want from AI. Anthropic published the largest qualitative AI study ever conducted - interviews with 81,000 Claude users across 159 countries in 70 languages.
What people want: professional excellence, personal transformation, life management.
What AI has actually delivered: productivity gains, or nothing.
What people are worried about: unreliability, job security, and what the researchers describe as “cognitive atrophy.”
There was also a clear geographic split. In wealthier regions, people worry about losing what they have. In developing economies, they see AI as access to things they’ve never had.
The Anthropic Institute. Anthropic launched a new research arm led by co-founder Jack Clark, bringing together machine learning engineers, economists, and social scientists to study AI’s impact on jobs, security, and society. This thing is properly staffed - not just a blog and a landing page. Anthropic is building the tools that could reshape entire industries while simultaneously funding research that examines whether that’s a good thing. You can read that as responsible. You can read it as hedging. Either way, it’s worth watching.
Prefer to watch? Here’s the recording of March’s Insider AI Briefing:
The startup built on cheating
Andreessen Horowitz funded a company whose slogan was “Cheat on everything.” Guess what happened next…
Cluely raised over $20 million to build a tool that feeds users real-time answers during job interviews and exams. The founders were suspended from Columbia University for building it. This month, the CEO admitted the revenue figure he’d quoted to TechCrunch was false. When challenged, he described the original interview as “a random cold call from some woman.” TechCrunch published the email chain showing his own PR team had arranged it. He had lied about the lie.
Cluely has since rebranded as an AI-powered meeting note-taker.
Zoom wants to send your avatar to meetings you don’t want to attend
Zoom announced photorealistic AI avatars that can attend meetings on your behalf. You brief them on your talking points, they replicate your appearance, facial expressions, and lip movements, and they go to the meeting so you don’t have to.
If the meeting is important enough to need your input, you should be in it. If it isn’t, why is the meeting happening? Sending an AI avatar doesn’t resolve that question - it just adds a layer. You’ve swapped attending a meeting for reading a summary of a meeting your avatar attended. That’s not saving time.
There’s also the matter of the other people in that meeting, who think they’re talking to you. They’re not. And Zoom is shipping deepfake detection alongside the avatars. A tool that impersonates you, and a tool that detects impersonation - in the same product update.
Hey Reader, if you’re enjoying this post, then please give it a ❤️ - it lets me know what’s resonating (and makes me feel really good)
OpenAI: code red
OpenAI signed the deal with the Pentagon that Anthropic walked away from, and the consumer backlash was immediate. Internally, head of applications Fidji Simo told staff that Anthropic’s enterprise dominance was a “wake-up call” and described it as a “code red,” saying the company “cannot miss this moment because we are distracted by side quests.”
The side quests have been extensive. Sora launched to enormous hype, hit number one on the App Store, and Disney signed a billion-dollar licensing deal. This month, OpenAI shut Sora down. It needs the computing power for coding and enterprise. The Disney deal is dead.
The focus now is Codex, their coding tool, which has quadrupled its weekly users to over 2 million since January. OpenAI acquired Astral, a Python developer tooling startup, and folded the team into Codex. They’re also reportedly building their own code-hosting platform to replace Microsoft’s GitHub - notable, given that Microsoft is one of their largest backers.
On the money side: OpenAI closed a $110 billion funding round led by Amazon at $50 billion, SoftBank at $30 billion, and Nvidia at $30 billion. The company is now valued at $840 billion, with 900 million weekly active users and 50 million paid subscribers. Extraordinary numbers. But Anthropic’s enterprise share climbed from 4% to 40% in a year while OpenAI’s dropped from 50% to 27%. Valuation and dominance are not the same thing.
Two weeks after Anthropic launched the Anthropic Institute, OpenAI announced that its nonprofit arm plans to spend $1 billion this year on AI safety and societal risk research. Whether that’s genuine, or reputation management after the Pentagon controversy, is for you to decide.
AI and jobs: what happened in March
Several stories from this month belong together.
Jack Dorsey cut 40% of Block’s workforce - over 4,000 people. The business was performing strongly. He said explicitly that AI “fundamentally changes what it means to build and run a company,” and predicted most companies would do the same within a year. Gross profit per employee is expected to rise from roughly $500,000 in 2019 to $2 million in 2026. The stock surged 24%. The market didn’t punish the decision. It rewarded it.
Anthropic published research mapping which jobs AI is actually performing versus which it could theoretically handle. For computer and maths roles, AI could theoretically handle 94% of tasks. In practice it’s covering around 33%. The gap is similar across law, finance, and administration. The researchers raise the possibility of a “Great Recession for white-collar workers.” What’s slowing things down right now are legal constraints, technical limitations, and the need for human review. Those are potentially temporary barriers, not permanent ones.
The Dallas Federal Reserve found that employment in computer systems design has fallen 5% since ChatGPT launched, and the decline falls disproportionately on workers under 25. Entry-level jobs are drying up. The bottom rungs of the career ladder are disappearing - and if a generation of workers can’t get their foot in the door, they can’t build the experience that makes them valuable later. That has implications well beyond the tech sector.
And then there’s the story that sounds like fiction. North Korean operatives are using AI to get hired at European companies - real-time deepfakes in video interviews, AI-generated CVs, voice-changing software. Once inside, they draw a salary and send the money to Pyongyang. Over 3,000 suspected operatives are currently working inside Western companies, generating over $600 million a year for the regime. Amazon has blocked more than 1,800 suspected operatives since April 2024. The operation is shifting to Europe because US enforcement tightened and European companies have fewer defences.
Because this section needed something less bleak: a producer in Europe used the AI music platform Suno to create a fictional Japanese metal band. It accumulated 80,000 monthly listeners on Spotify before fans traced the creator to Europe. Once the music had traction, the creator hired seven real musicians from Tokyo to perform the AI-composed tracks live. Three shows done, a headline gig confirmed. As the creator put it: in an age where AI is taking everyone’s jobs, this one actually created them.
AI is now making life-or-death targeting decisions in Iran
I don’t fully trust AI to write a blog post for me. Anyone who’s worked with these tools knows they hallucinate - they state fiction as fact with complete confidence. So I found the FT’s investigation into AI’s role in the Iran conflict genuinely disturbing.
The US military is using AI systems to run what’s called the “kill chain” - finding a target, prioritising it, selecting a weapon, assessing damage after the strike. Traditionally, that process required printed documents, senior commanders reviewing intelligence, and formal sign-off. It took hours, sometimes days.
AI has compressed that to minutes. The result: the US struck over 2,000 targets in Iran in four days. For comparison, the coalition struck a similar number of targets in the first six months of the campaign against ISIS that started in 2014.
The civilian cost is already visible. Iran’s Red Crescent reports over 20,000 non-military buildings hit, including more than 17,000 residential buildings.
The question from the FT: how do you exercise meaningful human judgment over decisions generated by systems running 37 million computations per second?
$265 million is being spent to prevent AI regulation
76% of Americans want AI regulated. The AI industry is spending $265 million to make sure that doesn’t happen.
The biggest spender is a group backed by OpenAI co-founder Greg Brockman, Andreessen Horowitz, and a Palantir co-founder. Their argument: individual states shouldn’t write their own AI rules because it creates an inconsistent patchwork that stifles innovation. That sounds reasonable. Except there are no national rules either. When someone says “we don’t want state-level regulation,” what they’re generally saying is “we don’t want regulation.” It just sounds better the first way.
Anthropic is on the other side. It’s given $20 million to a pro-regulation group and said publicly that effective AI governance requires more scrutiny of companies like itself, not less.
The UK government performed a U-turn on AI copyright
The government’s original position was to allow AI companies to train on copyrighted works, with creators required to opt out to protect their own material. Elton John called it theft on a high scale. Paul McCartney, Dua Lipa, Coldplay, and thousands of other artists opposed it. Only 3% of the 10,000 consultation respondents supported the proposal.
Technology Secretary Liz Kendall has now said the government “listened” and no longer favours the opt-out approach. They haven’t decided what to replace it with. The government currently has “no preferred option.”
The U-turn is welcome. But without a clear decision, it’s a U-turn into a lay-by. Nobody knows where we’re going next.
Half of teenagers are using chatbots as a search engine
Pew Research published a study on how US teenagers use AI chatbots. 64% have used them. The top use case, at 57%, is searching for information.
These are tools that hallucinate. They invent sources. They present fiction as fact with absolute confidence. Teenagers - still developing their critical evaluation skills - are using them as a primary way to find things out. That’s an information literacy crisis, happening right now, while most coverage focuses on whether kids are using chatbots to do their homework.
McKinsey got hacked through its own AI platform
Another day, another embarrassing AI story for a major consulting firm.
A one-man cybersecurity firm called CodeWall used its own AI agent to break into Lilli, McKinsey’s internal AI system used by 40,000 staff to plan strategy, analyse data, and build client presentations. It took two hours. The agent gained full read and write access to the entire production database, accessing 46.5 million chat messages, 57,000 user accounts, 728,000 sensitive file names, and the system prompts that revealed exactly how the AI was configured and what guardrails were in place.
McKinsey says the actual files were stored separately and were never at risk, and it patched the vulnerabilities within hours of being alerted. This was an ethical hack - but it shows what’s coming. The company that charges clients a fortune to advise them on AI got breached through its own AI platform before the attacker had finished his morning coffee.
A man used AI to design a cancer vaccine for his dog
This is a genuinely moving story. It’s also a good example of how AI hype builds.
Paul Conyngham, an AI consultant in Sydney, used AI to design a custom mRNA cancer vaccine for his dog Rosie after she was diagnosed with mast cell cancer in 2024. He chained together ChatGPT, DeepMind’s AlphaFold, and a university genomics lab. One tumour shrank by half after the injection in December.
That is impressive. But look at what was actually involved. He’s an AI consultant - not a casual user. He paid $3,000 for genomic sequencing. He had access to a world-class research lab to produce the vaccine, and worked with 350 gigabytes of tumour data. And Rosie isn’t cured. One tumour responded. Others didn’t.
The headlines say “man uses AI to create cancer vaccine for his dog.” The implication is that this is something anyone could do with the right prompts. This is how hype builds - a genuine, nuanced story gets compressed into a headline that implies something much bigger, and expectations get inflated beyond what’s real.
Join me for April’s Insider Briefing
Once a month I run a live session called the Insider’s AI Briefing: 30 AI stories in 30 minutes, with time for questions at the end. It’s free, it’s fast, and it’s built for leaders who need to stay informed without it taking over their week. The next one is 30 April at 12:30. Register here.