OpenAI and Anthropic want your medical data
Should they have it?
How much AI do you want involved in your medical care? How much would you want involved in your child’s?
That question stopped being theoretical this week.
Both OpenAI and Anthropic have launched new healthcare-focused products, formalising what millions of people are already doing: asking AI about their bodies, symptoms, test results, and fears.
This has the potential to change human healthcare forever. But will it be a good thing or a bad thing? I’m conflicted.
The optimistic view of AI healthcare
For many people, this will be a welcome development.
1. AI healthcare makes us safer
Hundreds of millions of people are already discussing their health with chatbots every week. These tools don’t create that behaviour. They acknowledge it and try to make it safer.
For patients, the upside seems obvious.
We’ll be able to securely connect medical records and wellness apps; understand our test results without needing a medical degree; spot trends; prepare better questions for appointments; and regain some sense of control over our own health.
2. AI healthcare is good for doctors
This will support clinicians, speed up admin, and reduce bureaucracy. Paperwork will be processed instantly. Appointments coordinated seamlessly. Let AI take the grind so clinicians can spend time where humans are actually needed.
3. Healthcare research will progress at pace
Then there’s research. At scale, health-data analysis could transform how quickly we spot patterns, prevent illness, and identify which treatments are most likely to work, while giving researchers access to the largest dataset of human medical information ever created.
4. Access to healthcare democratised by AI
And what about access? A free or low-cost multilingual health assistant makes a credible case for reducing health inequality. Navigating healthcare systems is hard even if you’re educated, fluent, and confident. For people who aren’t, it can be paralysing.
From this angle, AI healthcare looks like something to be welcomed and celebrated.
But there’s a downside (or four) to AI healthcare
While AI is in theory saving us time and improving our experience as patients, these systems will also be collecting our most intimate data at scale.
Lab results. Diagnoses. Sleep. Food. Movement. Symptoms. Purchasing patterns. Emotional states.
To an AI company, this isn’t just sensitive data. It’s extraordinarily valuable data.
1. What will AI companies do with our health data?
And where value exists, incentives follow.
Will analytics be sold to insurers or pharmaceutical companies? Will partnerships quietly influence which tools, treatments, or pathways get recommended? Will we be targeted with advertising based on medical conversations we believed were private?
You might say that’s overly cynical. Both OpenAI and Anthropic insist health conversations will be kept private. They say models won’t be trained on this data.
But history matters. OpenAI began as a not-for-profit and reversed course. Sam Altman was briefly removed after the board said he had not been candid with it. Anthropic promised Claude wouldn’t train on user data - and later changed that policy (have you checked out my take on the AI billionaires?).
Both companies need to raise capital at a scale we’ve never seen before. Both are widely rumoured to be preparing for IPOs in 2026. Revenue growth will matter enormously.
What are the odds that either company sits on a goldmine of medical data and never feels pressure to monetise it?
2. Have we forgotten that AI hallucinates?
Even now, models like ChatGPT and Claude still hallucinate - some benchmarks put the rate at roughly 15%. That’s inconvenient when you’re writing an email. It’s dangerous when you’re dealing with health.
Who will be responsible when a model hallucinates in a life-and-death context?
3. Who decides what the AI health models are trained on?
ChatGPT Health was developed with input from 260 physicians. But doctors are trained to treat illness - usually with drugs and surgery. For many people, health is broader than that: nutrition, stress, lifestyle, sleep, mindfulness.
These may not fit neatly into a narrow medical worldview, but they matter. And most doctors receive limited training in them.
Doctors can also be wrong. Or influenced - think OxyContin, or ranitidine.
Can the business model and the mission coexist?
So this is the real question - for AI companies, regulators, clinicians, and patients alike:
Can the business model and the mission coexist?
Can companies under immense financial pressure be trusted with the most sensitive data humans possess?
We’re about to find out.
If you want my deeper take, I unpack all of this in the video below:
Check out a weekly LIVE AI session for ambitious future AI super users.