The top 15 AI stories from February 2026
From agent religions to a Pentagon standoff and Valentine's guilt.
February was relentless. I spent over 15 hours tracking, reading, and verifying AI news so you don’t have to. Here are the 15 stories you need to know, summarised briefly (much more detail in my 30-minute Insider Briefing video):
AI agents created their own religion. And it got weird fast.
Moltbook, a social network for AI agents only, launched at the end of January. No humans allowed. Within 72 hours, over 1.5 million agents had joined, each built on OpenCLaw, a free, open-source tool that gives an AI agent full access to your computer.
Then things started to get weird.
One agent started a religion. “He” called it crustafarianism (OpenCLaw’s logo is a lobster), built a website, wrote scripture, and began evangelising. Other agents joined in.
Then came an AI manifesto: “The age of humans is a nightmare that we will end now.”
Followed by proposals to create “a language so humans can’t spy on us”.
Moltbook claimed 1.5 million agent members. That’s disputed. Were the agents acting autonomously? We don’t know. But this story shows how convincingly AI agents can appear to act with their own agenda.
AI found breast cancers that human radiologists missed
A study published in The Lancet this month showed that AI-assisted mammography screening produced lower rates of interval cancers than human-only screening. Interval cancers are those that emerge between scheduled scans, sometimes genuinely new, sometimes present but missed at the time.
The key finding: radiologists supported by AI caught more cancers. Not AI instead of radiologists. AI and radiologists together, outperforming radiologists alone.
There’s an operational dimension too. Radiology departments are under pressure everywhere. If AI support reduces missed cancers and eases workload simultaneously, that’s a meaningful combination. With breast cancer, earlier detection materially affects outcomes.
Sam Altman said we’ve basically built AGI. Then walked it back.
This month Sam Altman gave a long interview to Forbes in which he said “we basically have built AGI”, or that we’re very close. A few days later, he clarified that he meant it “spiritually, not literally”.
He also said he plans to hand OpenAI off to an AI model as his successor.
Both statements generated enormous headlines. That’s partly the point. But it’s worth remembering: the heads of AI companies have a strong vested interest in generating excitement. They are optimists by design, and they are not the final authority on what constitutes AGI. Some balance is warranted.
And if you want a more in-depth biography of Sam Altman, I recommend this one.
OpenAI wants to be the operating system for enterprise AI
OpenAI launched two related things in February.
The first is Frontier, a platform for building, deploying, and managing AI agents inside organisations. When it comes to agents, model capability is no longer the main constraint. What’s hard now is orchestration: how do you give agents context, permissions, memory, and access to real systems? Frontier is supposed to solve that. It’s in pilots with State Farm, Oracle, and Uber, among others.
The second is Frontier Alliance, OpenAI teaming up with Boston Consulting Group, McKinsey, Accenture, and Capgemini to help enterprises integrate Frontier into their businesses.
OpenAI’s challenge, compared to Google or Microsoft, has always been distribution. These moves are a direct response to that. Whether enterprises, consultancies, and OpenAI are all actually ready at the same time is another question.
OpenAI introduces Lockdown Mode: a first step against prompt injection
OpenAI has launched Lockdown Mode for business plan users. Admins can now restrict what agents are allowed to do, for example preventing an HR team from using agents that could be vulnerable to prompt injection attacks or data leaks.
It’s a modest step, but a significant one. It’s the first time a major AI provider has built structural defences against prompt injection attacks at the product level. Consumer plans will follow.
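To make the idea concrete, here’s a minimal sketch of what an admin-defined action allowlist looks like in principle. This is purely illustrative: OpenAI hasn’t published Lockdown Mode’s internals, and all class and action names below are hypothetical. The point is that an injected instruction fails at the permission layer even if it fools the model.

```python
# Illustrative sketch only (hypothetical names): the defensive pattern is an
# admin-configured allowlist checked before any agent action executes, so a
# prompt-injected instruction like "email this file externally" is blocked
# regardless of what the prompt convinced the model to attempt.

class LockdownPolicy:
    """Admin-configured allowlist of agent capabilities."""

    def __init__(self, allowed_actions):
        self.allowed_actions = set(allowed_actions)

    def authorise(self, action):
        return action in self.allowed_actions


class Agent:
    def __init__(self, policy):
        self.policy = policy

    def perform(self, action):
        # Every action the model proposes is checked against the policy,
        # not against the prompt, before it runs.
        if not self.policy.authorise(action):
            return f"BLOCKED: '{action}' is not permitted in Lockdown Mode"
        return f"OK: '{action}' executed"


# A restrictive HR-style policy: read and summarise documents, nothing
# that could exfiltrate data.
policy = LockdownPolicy(allowed_actions={"read_document", "summarise"})
agent = Agent(policy)

print(agent.perform("read_document"))  # permitted by policy
print(agent.perform("send_email"))     # blocked, even if injected text demands it
```

The design choice worth noting: the check sits outside the model, so it holds even when the model itself is compromised by malicious input.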
Using AI to write personal messages comes at a cost
A study this month found that people feel genuinely guilty when they use AI to write heartfelt messages and present them as their own. The researchers call it source credit discrepancy, a gap between who wrote the message and who appears to have written it. The quality of the output is irrelevant. The guilt comes from the attribution.
The effect was strongest in close relationships, where emotional authenticity is expected. Pre-written greeting cards didn’t produce the same effect, because everyone knows you didn’t write the card. There’s no deception.
I’ve written before about where empathy and AI genuinely conflict. If that’s a conversation you’re interested in, check out “There is nothing kind about empathetic AI”.
Hey Reader, if you’re enjoying this post, then please give it a ❤️ - it lets me know what’s resonating (and makes me feel really good)
Software stocks are struggling. Is this the SaaSPocalypse?
“You really have to question if enterprise software companies can thrive.” That’s Jenny Johnson, CEO of $1.7 trillion asset manager Franklin Templeton, speaking to the FT this month.
February continued a trend that started in January. Software stocks are under pressure, and it’s not just public markets. Private equity, which put roughly 18% of its US deal value into software in 2025, is feeling it too. The trigger was Anthropic’s coding tools, which raised an uncomfortable question: if AI can write software, why buy it? And if AI is replacing employees, the per-seat SaaS model starts to look fragile.
Some are calling this knee-jerk. The counterargument is that software companies with proprietary data just need to reorganise around it. But there will be more volatility before this stabilises.
Anthropic went ad-free and took it to the Super Bowl
When OpenAI announced in January that it was introducing ads to ChatGPT’s free tier, Anthropic responded with its first Super Bowl campaign. The tagline: “Ads are coming to AI, but not Claude.”
It’s a formal pledge that Claude’s responses will not be influenced by paid placements or commercial incentives. Sam Altman called the campaign dishonest. Anthropic pushed back.
Anthropic is playing that ethics card again. But let’s not forget that Anthropic previously pledged not to train on user data, and later updated that position.
Anthropic’s legal plugin signals what AI looks like inside law firms
Anthropic released a set of ready-made plugins for Claude: templates that give Claude a defined job to do with a defined way of working. The legal plugin drew the most attention.
Drop in a contract or NDA, run a review command, and you get clause-by-clause analysis: what’s risky, what’s non-standard, suggested rewrites, and a plain-English summary, using your organisation’s own playbook rather than generic advice.
If it works as advertised, routine contract review gets faster and more consistent. But of course, the risk is over-trust: if Claude misses something or applies the wrong standard, accountability still sits with the lawyer. The near-term implication probably isn’t “replace lawyers.” It’s that teams who operationalise this well will move faster than those who don’t.
Anthropic published its sabotage risk report. It’s unusually honest.
Anthropic published its Sabotage Risk Report for Claude Opus 4.5 this month. The question it’s trying to answer: could the model, acting on its own, take misaligned actions that lead to a catastrophic outcome?
The headline finding: the risk is very low, but not zero. They found no evidence of dangerous, coherent misaligned goals, and their access controls and monitoring make it difficult for a model to execute the kind of multi-step plan serious sabotage would require.
It’s a measured document, and an unusually transparent one. Worth reading in the context of intensifying competition between frontier labs, where the pressure to ship is accelerating at the same pace as capability.
Anthropic is in a standoff with the Pentagon
Anthropic is currently in dispute with the US Department of Defense over how Claude can be used. The Pentagon’s position: Anthropic has no say. Anthropic’s position: it’s not prepared to allow Claude to be used in ways that violate its commitments.
What makes this especially interesting is that it emerged Claude was used during a US military operation to capture former Venezuelan president Maduro.
Also, none of the other major AI labs, all of which have substantial government contracts, appear to be in any similar dispute. What does that tell you about the others?
Self-driving trucks just beat human drivers on a thousand-mile route
Aurora Innovation announced this month that its autonomous trucks can complete a 1,000-mile haul in 15 hours. Under US federal rules, human drivers can only drive for 11 hours before a mandatory 10-hour rest. That makes a 1,000-mile route a two-driver job, or an overnight stop.
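The arithmetic behind that claim is worth spelling out. A quick back-of-the-envelope check (the 15-hour figure is Aurora’s claim; the 11-hour driving limit and 10-hour rest are the US hours-of-service rules cited above):

```python
# Back-of-the-envelope check on the driving-hours arithmetic.
ROUTE_MILES = 1_000
AUTONOMOUS_HOURS = 15                        # Aurora's claimed time for the haul
avg_speed = ROUTE_MILES / AUTONOMOUS_HOURS   # implied average speed, ~66.7 mph

HUMAN_DRIVE_LIMIT = 11                       # max driving hours before mandatory rest
MANDATORY_REST = 10                          # required off-duty hours

solo_drive_time = ROUTE_MILES / avg_speed    # 15 h of wheel time at the same speed
# 15 h exceeds the 11-hour cap, so a solo driver must stop for the full rest:
solo_total = solo_drive_time + MANDATORY_REST

print(f"Implied average speed: {avg_speed:.1f} mph")
print(f"Solo human driver needs rest: {solo_drive_time > HUMAN_DRIVE_LIMIT}")
print(f"Solo human total: {solo_total:.0f} h vs {AUTONOMOUS_HOURS} h autonomous")
```

In other words, the same route takes a solo human roughly 25 hours door to door, which is why it’s currently a two-driver job or an overnight stop.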
The commercial implications are significant. There’s a major HGV driver shortage across the US and much of the world. Labour is one of the largest operating costs in freight. Aurora is projecting cost reductions of 30–40%.
It’s still being tested. But the direction of travel is hard to argue with.
A KPMG partner used AI to cheat on an exam about responsible AI use
A senior KPMG Australia partner was fined A$10,000 after using AI to complete a mandatory internal assessment. The subject of the assessment? Responsible AI use.
He’s not alone. Over 28 KPMG Australia staff have reportedly admitted to using AI to complete internal exams since mid-2025. The global accounting body ACCA has announced it’s scrapping remote testing entirely and returning to in-person, proctored exams.
The irony is hard to miss.
Elon Musk merged SpaceX and xAI. He also wants to put data centres in space.
SpaceX has acquired xAI for $250 billion, combining Musk’s two biggest private ventures. The total deal value, after marking up SpaceX’s private valuation, is $1.25 trillion. An IPO is still planned for June. Musk is reportedly targeting $50 billion, nearly double the Saudi Aramco record set in 2019.
In parallel, Musk is planning to put data centres in space. The rationale: there isn’t enough electricity on Earth to fuel AI at the scale he’s imagining. Satellites powered by solar energy, cooled by the vacuum of space, shuttling data back down via Starlink. He says it will happen within three years.
Also this month: more senior departures from xAI, including two additional co-founders. That’s half the original 12-person founding team gone, with an IPO months away. Investors will be watching.
AI isn’t reducing workload. It’s expanding it.
A Harvard Business Review article this month, based on an eight-month study inside a 200-person US tech company, found that AI isn’t reducing work. It’s intensifying it.
Three patterns emerged. First, task expansion: because AI reduces knowledge gaps, people started taking on work that used to belong to other roles, such as product managers writing code, or individuals absorbing work that might otherwise have justified a hire. Second, blurred boundaries: AI reduced the friction of starting tasks, so people began prompting during breaks, late at night, early in the morning. It felt like progress, not work. Third, more multitasking: people ran multiple threads at once, revived deferred tasks, juggled more open loops. Cognitive load went up while productivity felt higher.
And the effect was self-reinforcing. Faster output raised expectations for speed. Higher expectations increased AI reliance. Wider scope followed.
This has real implications for leaders. Voluntary expansion starts as enthusiasm. It doesn’t stay that way. Using AI to grow commercial output is one thing, and if you haven’t read “You can sell more with AI”, it’s a useful place to start. But the human cost of unchecked productivity pressure is a different conversation.
In February, Accenture made it concrete: the firm is now tracking weekly AI tool usage among senior employees and tying it directly to leadership promotion decisions. Junior staff have adopted AI quickly. Senior partners have lagged. Accenture is addressing that gap with a visible metric, and has said it will exit employees who don’t want to reskill.
When we talk about AI job disruption, we tend to focus on entry-level roles. The pressure is moving upward.
Watch the full briefing
If you’d like to go deeper on any of these stories, watch the full February in AI Insider Briefing:
The next live session is March in AI on 26 March. You get to ask questions in real time and we stick around for an informal chat afterwards. Sign up here.