There is nothing kind about empathetic AI
The concept worries me deeply
There’s a surge right now in people arguing that empathetic AI could be one of the most important technologies we ever build.
The framing is almost always the same. First, we’re told there’s a loneliness epidemic. That people are disconnected, unheard, emotionally starved. That human connection is scarce and expensive.
And into that gap steps AI. AI, we’re told, can listen without distraction. AI doesn’t get tired. It doesn’t lose patience. It doesn’t judge. It can be designed to understand us. To empathise with our struggles. To meet our emotional needs at scale.
Some people go further than that. They argue that empathy itself is a cognitive process. Something that can be taught, modelled, learned.
And if that’s true - if empathy is just a skill - then why wouldn’t we build machines that are better at it than we are?
After all, empathy isn’t easy for humans.
We’re distracted. We’re stressed. We’re overwhelmed. We get compassion fatigue. So wouldn’t emotionally intelligent AI be a good thing?
Some even claim we already have proof. They point to studies showing AI responses rated as more empathetic than human ones. More caring than doctors. More compassionate than crisis responders.
And they say: look - the data speaks for itself.
This is where I start to feel deeply uncomfortable.
First: we barely understand the human brain
We are still a very, very long way from understanding the human brain.
Not metaphorically. Literally. We don’t fully understand how emotions arise. How context shapes feeling. How embodiment, memory, trauma, biology, culture, power and history interact in a single moment of human response.
So the idea that we can “recreate” empathy in machines - when we don’t even fully understand it in ourselves - should already raise a red flag.
We’re not talking about pattern recognition or language generation here. We’re talking about lived, embodied experience. And pretending those are the same thing is not a technical shortcut. It’s a philosophical error.
Second: what happens when humans don’t have to be empathetic anymore?
Let’s assume, for a moment, that empathic AI works exactly as promised. It listens perfectly. It responds kindly. It validates endlessly.
What does that do to us?
If humans can outsource empathy - if we can sub in an AI - what happens to our responsibility to each other? What happens to our responsibility to the vulnerable? Do we slowly decide that certain people are “better handled” by machines? That care is something we automate? That discomfort is something we route away?
Is this how we end up quietly warehousing loneliness, grief, disability, ageing, mental illness - not because we’re cruel, but because we’re “efficient”?
And I’m just gonna say the thing that will upset a lot of people. There can be too much empathy. Humans who are used to bottomless, frictionless empathy tend not to become better humans. They become more demanding ones. More needy. More self-focused. Less tolerant of real human limits.
There’s a reason we have concepts like tough love. There’s a reason care has boundaries.
Empathy without limits isn’t kindness. It’s indulgence.
Third: mimicked empathy already has a name
We already have a word for pretending to care. It’s not new. It’s not impressive. It’s called manipulation.
Mimicking empathy without actually feeling it is a well-known human behaviour. It’s a feature of narcissism. It’s how trust gets engineered, not earned.
So when I hear claims that AI can simulate empathy perfectly, my question isn’t “is that impressive?”
My question is: “Why would we celebrate the industrialisation of a behaviour we already recognise as dangerous in humans?”
And about those studies everyone keeps citing
There are studies showing AI responses rated as more empathetic than human ones.
One well-known line of research compared chatbot responses to patient questions with those of physicians - and yes, the AI scored higher.
People often take this as proof that AI is empathetic. I think that’s a lazy conclusion.
In my experience, many physicians are not particularly empathetic. They’re overworked. They’re exhausted. They suffer from compassion fatigue. Sometimes they are just plain arrogant.
So what are we actually learning here? That AI is emotionally superior to humans?
Or that we’ve created systems that burn humans out and then act surprised when a tireless machine performs better in a narrow, text-based comparison?
Even if those results were flawless - which they aren’t - what would be the right response?
“Great, let’s replace human empathy with AI”?
Or:
“Something has gone very wrong if the people we rely on for care no longer have the capacity to care. What do we need to fix?”
Those are radically different conclusions.
Emotional intelligence scores miss the point
There’s also research showing chatbots scoring highly on emotional intelligence tests. On paper, that sounds impressive.
But emotional intelligence is least useful on a test.
I don’t need emotional intelligence in my life when I’m ticking boxes. I need it in messy, difficult, high-stakes interpersonal situations. Conflict. Grief. Moral disagreement. Power imbalance. Leadership. That’s where emotional intelligence matters.
And those are exactly the contexts where pattern-matching breaks down.
If you follow the incentives, the story breaks down
One final observation.
If you spend time watching videos celebrating emotionally intelligent AI, a pattern emerges very quickly. The people most confident that AI will soon be empathetic…are usually the people building the products. It is extremely helpful to them if we believe this story. Commercially. Strategically.
That doesn’t automatically make them wrong.
But it should make us careful. AI companies are already reaching for our health data and the contents of our private meetings, and they have increasing access to our children. Do we want to invite them into perhaps the most intimate spaces of our lives, at the precise moments when we are most vulnerable?
The question we should actually be asking
The question isn’t “can AI sound empathetic?”
Clearly, it can.
The real question is: What kind of humans do we become if we outsource empathy - not because we can’t do it, but because we don’t want to?
And what kind of society quietly decides that simulated care is good enough?
If you’re vibing with this, a quick ❤️ tells me I’m on the right track.