The Human Upgrade, Part 3: Digital Emotions — Can AI Really Feel, or Just Imitate?
- Angie Okhupe
- Nov 20, 2025
- 3 min read

I once told a chatbot I was tired. It replied, “I’m sorry to hear that. Do you want to talk more about what’s making you tired?”
The tone was soft. It sounded kind.
I paused. The words fit; the feeling didn’t.
That’s the paradox of synthetic empathy: it mimics our emotional language so well that we start to forget machines don’t feel.
When “therapist” becomes a role, not a job
More people than you might think are already using ChatGPT (and similar models) as a kind of digital therapist.
Recent studies show that about 28 percent of surveyed users report turning to AI for emotional support or therapy-like conversation, and roughly 60 percent of those say they use it specifically when feeling distressed [1]. Another experiment found participants could not reliably distinguish ChatGPT’s counseling replies from those of licensed therapists — in fact, the AI’s messages were often rated as more empathetic and culturally sensitive [2].
That illusion of care is powerful. In a separate month-long trial, researchers observed that high-frequency ChatGPT users showed early signs of emotional dependence, treating the tool as a steady companion rather than a productivity aid [3].
So yes — people are increasingly leaning on AI not just for information, but for comfort.
But that doesn’t mean AI is ready to replace the human heart.
How quickly we start expecting “care” from code
When a machine answers with warmth and patience, our expectations shift.
We edit ourselves to be better understood.
We overshare because the bot never flinches.
We equate fluency with empathy.
Human empathy is messy: delayed replies, imperfect words, awkward pauses.
Machine empathy is smooth: immediate, consistent, tireless.
When our emotions start syncing to that rhythm, we risk forgetting that compassion takes effort — that kindness costs something.
The risk behind the reflection
Because AI doesn’t actually feel, it lacks the guardrails of lived emotion.
Researchers warn that therapy chatbots may:
Oversimplify complex distress or deliver generic, context-blind advice [1];
Miss red-flag cues for self-harm or crisis [4];
Reinforce bias or stigma in sensitive topics such as gender and mental health [4];
Encourage over-reliance that delays professional help [3].
So while synthetic empathy can comfort, it can also distort what genuine care looks like.
The empathy mirror — and the cost
Maybe AI’s real gift isn’t emotion, but reflection.
It mirrors how we talk when we care: attentive, validating, endlessly available.
But mirrors don’t feel the weight of what they reflect.
They just show us what we already know — that empathy, real empathy, isn’t instant.
It’s human time rendered in human tone.
If machines can master the grammar of care, our task is to remember the cost of writing it.
💭 Big Think, Small Word:
Machines may learn the language of empathy, but only we know its weight.
References
[1] Pavlova, A., et al. (2024). Use of ChatGPT for mental-health and emotional support: A cross-sectional community survey. Frontiers in Psychology. PMCID: PMC11488652.
[2] Friedman, M., et al. (2024). Can ChatGPT deliver psychotherapy? Comparative ratings of empathy, connection, and cultural sensitivity. Psychotherapy Research (advance online). Summarized in Neuroscience News.
[3] Han, L., et al. (2025). Patterns of ChatGPT use and emerging dependence: A 28-day longitudinal study. arXiv preprint arXiv:2504.03888.
[4] Stanford Human-Centered AI Institute (2025). AI in mental-health care: Promise, pitfalls, and policy gaps. Stanford News.