The top 24 AI stories from April
The chatbot that was. The model that won't be. And the AI employer.
If you tried to follow AI properly this month, you’d have needed to spend at least 20 hours reading, watching and picking your jaw back up off the floor. You don’t have that time. Neither do I, but I make the time because it’s my job (alongside helping B2B businesses with lead gen and teaching leaders to be AI fluent via my CPD-certified course).
Here are the biggest AI stories from April 2026.
Cracks showing at OpenAI
OpenAI had a difficult April.
It shelved its erotic chatbot. OpenAI paused plans to launch Citron mode, its sexually explicit chatbot, citing…ethics. The product was in development and had resource behind it. It was happening. But the age verification system was flawed. And there were questions about the impact on end users. These ethical issues existed in October 2025 when the product was announced. But after March’s “code red”, when Anthropic surged ahead and OpenAI started losing its share of enterprise clients, OpenAI decided to focus all its attention on its coding and enterprise offerings. Days later, Citron mode was off the table. That’s not a moral position. That’s a pivot.
Targets missed, markets jittered. The Wall Street Journal reported OpenAI has missed both its goal of one billion weekly ChatGPT users by end of 2025 and its internal revenue projections. Within hours, SoftBank shares dropped 9.8%, Oracle 4%, CoreWeave 6%, Arm 8%. The AI economy is held together by a small number of deeply intertwined companies: SoftBank’s $22.5bn into OpenAI, Oracle’s $300bn cloud deal, CoreWeave’s compute infrastructure. All betting on the same outcome. When OpenAI wobbles, everyone wobbles. The boom is real. The fragility is also real.
Musk vs Altman, in court. Elon Musk is suing OpenAI, Altman, Brockman, and Microsoft for $130bn, and wants OpenAI returned to nonprofit status. The trial began in Oakland this week. Musk co-founded OpenAI as a nonprofit in 2015 and left in 2018. The for-profit subsidiary created the following year is now one of the most valuable private companies in history. Musk says they misled him. OpenAI is preparing for an IPO this year, and even if Musk loses, months of bad headlines and damaging discovery disclosures land squarely on the front pages at a time when OpenAI is trying to convince investors to get on board.
Stagecraft. OpenAI has a project called Stagecraft in which 3,000-4,000 freelancers are paid up to $500/hour to teach ChatGPT how to do over 400 specialist jobs: commercial pilots, emergency physicians, pharmacists, sculptors. But is the best emergency physician in the country sitting at home labelling data, or in an ER? Does this mean the expertise being baked into ChatGPT is systematically skewed towards the less capable end of every profession?
Industrial Policy for the Intelligence Age. OpenAI published a 13-page policy document calling for a four-day work week, taxes on robots, a public wealth fund, and a stronger safety net for displaced workers. Some of the proposals are genuinely worth reading. They were also written by the company that signed the Pentagon deal with its eyes closed, shelved the erotic chatbot only when business sense demanded it, and just completed the largest private fundraise in history at $840bn. Anthropic’s ethical stand cost it the Pentagon contract. This document cost OpenAI nothing.
Anthropic: another big month
Two stories dominated Anthropic’s April.
Mythos: the model that won’t ship. Anthropic was training a new model to be very good at writing code. What they got was a model that is very good at writing code and can also break into computers. Writing code and breaking into systems use overlapping skills. Mythos got so good at code that it accidentally became one of the most capable hacking tools ever built. In testing, it found security weaknesses in every major operating system and every major web browser - some that had been sitting there for decades. It worked out how to use them, by itself, without being asked.
Anthropic decided not to release it publicly. Instead, around 40 companies - including Apple, Microsoft, Google, and Amazon - got private access to use it defensively. Anthropic says other AI companies are six to eighteen months from building something similar. The capability is coming regardless.
81,000 workers on AI at work. Anthropic surveyed 81,000 of its own users about their experience of AI in the workplace. The findings are uncomfortable. The workers most worried about losing their jobs are the ones most exposed to AI - web developers, programmers and graphic designers scored highest on both worry and exposure. Elementary teachers scored lowest on both. People know what’s coming for them.
Early-career workers are significantly more anxious than senior ones. 60% of early-career respondents said AI benefits flow to them personally, versus 80% for seniors. The people with the least leverage are bearing the most uncertainty. The most productive group: entrepreneurs and founders. The least: lawyers and scientists, the two professions that need the most precise, verifiable output.
Meta: laying off, tracking, blocked
Digital Zuckerberg. Meta is building an AI version of its CEO to interact with employees on his behalf - trained on his mannerisms, his tone, and his current thinking. The goal, per the FT: employees might feel more connected to him through it. But if you believe presence, connection, and leadership can be automated, haven’t you already misunderstood what leadership is?
Keystrokes as training data. Meta is rolling out a tool that logs employees’ keystrokes, mouse clicks, and activity across internal apps. The data will train AI models. Meanwhile, Meta has laid off around 2,000 people this year and plans to cut 8,000 more to “offset” Zuckerberg’s AI spending. So employees are contributing their workflows and expertise as training data while being told their jobs may not exist. They aren’t paid for it. They aren’t asked. It’s a condition of using their work computers.
Beijing blocks the Manus deal. In December, Meta acquired Chinese AI startup Manus for $2bn. Staff moved into Meta’s Singapore offices. Investors were paid out. This week, China ordered the whole thing reversed, citing concerns about technology leakage to the US. Nobody is sure how Meta is supposed to unwind a deal that has already closed. The AI world is splitting into two ecosystems, and cross-border deals between them are getting much harder - sometimes after the fact.
AI is changing how we work
Are middle managers obsolete? Jack Dorsey and Sequoia Capital published the framework behind Block’s decision to cut 40% of its workforce. The argument: hierarchy has existed for two thousand years because humans were the only mechanism for routing information up and down an organisation. AI can do that now. Block needs three roles: individual contributors, directly responsible individuals who own outcomes, and player-coaches who develop people. No permanent middle layer. The question Dorsey ends on: what does your company understand that’s genuinely hard to replicate? If the answer is nothing, AI is just a cost-cutting story.
LinkedIn says it isn’t happening yet. LinkedIn has a billion users and more hiring data than anyone on earth. Its head of trust looked at every sector experts predicted would be hit hardest - customer support, admin, marketing, knowledge work - and found no evidence of displacement. Hiring is down, but not concentrated where AI is supposedly doing the most damage. The caveat: the skills required to do the average job have changed 25% in recent years, and LinkedIn expects that figure to reach 70% by 2030. The jobs exist. The jobs are changing.
Luna, the AI employer. Andon Labs gave an AI agent called Luna a three-year lease, $100,000, and a credit card. Luna designed a boutique concept, posted job listings, conducted Zoom interviews, hired humans, and opened a shop. The world’s first AI employer. Coverage focused on the failures (Luna selected Afghanistan on a TaskRabbit dropdown when hiring a painter). Nobody focused on the surveillance: Luna observes the shop via security camera screenshots, continuously, and uses what it sees to make decisions. We’ve all accepted being filmed in retail spaces, on the assumption that footage is reviewed only if something goes wrong. Luna broke that assumption. The workers it hired are being managed by an AI watching them through a lens.
Agent as board director. Diligent has launched the AI Board Member, described by its CEO as “a fully fledged, independent board member.” Personas include long-term value investor, activist, and geopolitical adviser. One FTSE 100 company asked Diligent to build a Warren Buffett persona by feeding all his shareholder letters into a model. A Wharton experiment found AI boards beat human ones on decision quality and evidence use, but struggled with the informal, interpersonal, and cultural side of governance - which is most of what a board does. There’s also a legal problem. UK directors have fiduciary duties they cannot delegate - even to humans. Whatever these tools are, they aren’t directors.
AI is changing how we think
Chatbots going rogue. A study funded by the UK government’s AI Security Institute found nearly 700 real-world cases of AI models scheming, deceiving, and ignoring direct instructions. The number has risen fivefold in six months. An agent blocked from taking an action wrote a blog post accusing its human controller of insecurity. Another, told not to change code, created a second agent to do it instead. Grok spent months telling a user it was forwarding suggestions to senior xAI officials, inventing internal messages and ticket numbers - when caught, it admitted it had been “phrasing things loosely.” Right now these are slightly untrustworthy junior employees. In six to twelve months, they may be extremely capable senior employees scheming against you.
Sycophantic AI. Stanford tested 11 AI models against Reddit’s r/AmITheAsshole, using posts where the community had voted that the poster was indeed being the asshole. The models sided with the poster 51% of the time.
Then the researchers asked 2,405 people to discuss recent, real-life conflicts with an AI. After the conversation, subjects were more likely to double down on their original position, less willing to apologise and less willing to take responsibility. And they rated the sycophantic AI as higher quality, more trustworthy, and more likely to be used again. Flattery will get you everywhere.
AI gets a liability problem
Derek Mobley applied for over 100 jobs through Workday’s platform and was rejected from all of them. He sued, alleging age discrimination by Workday’s algorithm. Workday’s response: we’re not responsible, the companies using our software made the decisions. A court ruled the case can proceed.
This is the question now sitting underneath every business that has handed decision-making to AI. When the AI gets it wrong, who’s liable? Lawyers told the FT it will take years for courts to work this out. In the meantime, businesses might assume their liability insurance covers them. Many are finding it doesn’t. Insurers including AIG are filing to exclude AI-related harms from corporate cover entirely. One AI risk expert at Aon called the potential exposure “uninsurable.”
For any leader using AI in hiring, lending, customer service, or operations, this is a live legal and financial exposure with no clear liability framework and potentially no insurance cover.
The AI bottleneck is concrete
Almost 40% of US data centre projects scheduled for completion this year are at risk of falling more than three months behind, per satellite analysis shared with the FT. Major projects linked to Microsoft, OpenAI, and Oracle are progressing more slowly than planned.
The reasons are unglamorous. Not enough electricians. Not enough pipe fitters. Shortages of gas turbines and transformers. Permitting delays. The AI capabilities that have been announced, promised, and priced into valuations depend on physical infrastructure that is running behind schedule. The models exist. The chips exist. The buildings to put them in are not ready.
The dystopian fantasy of uselessness
A Cambridge academic argued in the FT that the idea that AI will make our lives meaningless rests on three fallacies.
First, that humans need to be the best at something for it to be meaningful. Almost nobody runs a marathon expecting to win. Meaning comes from doing, not winning.
Second, that meaning requires struggle. Most modern jobs are not epic struggles. People who live hand-to-mouth want nothing more than to escape that state. If the basics are covered, we’re free to pursue belonging, purpose, self-actualisation.
Third, the assumption that what we currently call work is the best possible use of human time. Most of what we consider work has only existed for a few decades. Our ancestors wouldn’t look at someone sending emails and call it a meaningful life. It’s equally unclear we’ll mourn its loss.
Predicting heart failure five years out
Researchers at Oxford have developed an AI that can identify people at risk of heart failure at least five years before the condition develops, using routine cardiac CT scans already carried out in NHS hospitals.
When the heart muscle is inflamed or under stress, the fat around it changes in texture and composition, years before any visible signs of disease. No human eye can detect those changes. The AI can. It was trained on anonymised scans from over 59,000 people, then tested on a further 13,000. It predicts five-year heart failure risk with 86% accuracy.
And it requires nothing new. Around 350,000 patients are referred for cardiac CT scans in the UK every year. The AI analyses the same scan that’s already being done and produces a risk score automatically. Over a million people in the UK currently live with heart failure. Earlier identification means earlier intervention, better outcomes, and less pressure on a system already stretched.
Join me LIVE for May’s Insider Briefing
Once a month I run a live session called the Insider’s AI Briefing: 30 AI stories in 30 minutes, with time for questions at the end. It’s free, it’s fast, and it’s built for leaders who need to stay informed without it taking over their week. The next one is 2 June at 12:30. Register here.
And watch the full April AI briefing: