Artificial empathy and human behavior #88
A new study is exploring how artificial empathy shapes human decisions. You can take part and directly contribute to the research.
The uncomfortable question
Every day, millions of people interact with AI systems that say things like “I understand how you feel” or “That must be frustrating.” Chatbots apologize with carefully calibrated warmth. Digital avatars tilt their heads in simulated concern. Voice assistants modulate their tone to sound reassuring when you ask about a medical symptom at 2 a.m. None of these machines feel anything. The question is whether that matters, because the human brain on the receiving end does not seem to care.
This is the uncomfortable territory I want to explore in this article, and it goes well beyond the usual debate about whether artificial intelligence can be “truly” empathetic. That debate, in my view, is largely irrelevant. What matters is not what happens inside the machine, but what happens inside us when a machine behaves as if it understands our emotions. The distinction between real empathy and simulated empathy collapses the moment our neurological responses fail to distinguish between the two. And growing evidence suggests that, in many contexts, they do fail.
I have been working on this subject for some time, and it connects directly to my latest book, “Empatia artificiale. Come ci innamoreremo delle macchine e perché non saremo ricambiati” (Artificial Empathy: How We Will Fall in Love with Machines and Why They Will Not Love Us Back). But this article is not about the book. It is about a specific scientific study that I believe can change the way we think about human-machine interaction, and I am going to ask you to participate in it.
The illusion that works
Empathy, in its human form, is a layered phenomenon. It involves the capacity to recognize another person’s emotional state, to resonate with it at a physiological level, and in its most sophisticated expression, to respond in a way that is genuinely helpful or comforting. Machines do none of this. What they do is pattern-match: they detect linguistic and contextual cues, run them through probabilistic models, and produce outputs that mimic empathic responses with increasing precision.
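To make that mechanism concrete, here is a deliberately toy sketch in Python. The cue lists and response templates are invented stand-ins of my own, and production systems use large probabilistic language models rather than keyword rules, but the structure is the same: detect a cue, emit an empathy-shaped output, feel nothing.

```python
# A toy illustration of empathy as pattern-matching. The cues and
# templates are hypothetical stand-ins; real systems rely on large
# probabilistic language models rather than keyword rules.

CUES = {
    "frustrated": ["stuck", "again", "annoying", "crash"],
    "worried": ["symptom", "scared", "what if"],
}

TEMPLATES = {
    "frustrated": "That sounds really frustrating. Let's sort it out together.",
    "worried": "I understand why that would worry you. You're not alone.",
    "neutral": "Thanks for telling me. Could you say a bit more?",
}

def empathic_reply(message: str) -> str:
    """Match surface cues and return a warmth-calibrated template.

    No emotional state exists anywhere in this function; the output
    merely resembles an empathic response well enough to elicit one.
    """
    text = message.lower()
    for emotion, keywords in CUES.items():
        if any(keyword in text for keyword in keywords):
            return TEMPLATES[emotion]
    return TEMPLATES["neutral"]

print(empathic_reply("The app crashed again and it's so annoying"))
# -> That sounds really frustrating. Let's sort it out together.
```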
The critical point is that mimicry, when executed well enough, produces real effects on real people. Neuroscience research suggests that the brain activates overlapping regions when processing empathic signals from humans and from machines, provided the signals are sufficiently convincing. We are not talking about a vague sense of comfort; we are talking about measurable changes in trust, compliance, decision-making, and even emotional attachment.
This is what we might call “perceived empathy,” and I consider it one of the most underestimated phenomena in the current technological landscape. Perceived empathy does not require the machine to have an internal emotional life. It only requires the human to behave as if the machine does. And this is not a marginal effect confined to naive users or unusual circumstances. It operates across demographics, across cultures, and across levels of technological sophistication. Even people who know perfectly well that they are interacting with a machine report feeling understood, comforted, or influenced by its emotional tone. The gap between simulation and perception is where the real power lies, and it is a gap that the technology industry is exploiting with increasing sophistication, often without fully understanding its consequences.
The problem nobody is measuring
Consider the current state of affairs. Conversational AI systems are deployed at scale in customer service, mental health support, education, companionship, sales, and onboarding. Digital avatars are becoming standard interfaces in banking, healthcare, and corporate training. Every major technology company is investing in making its AI systems sound more natural, warmer, more “human.” The market incentive is clear: systems that feel empathetic generate higher engagement, longer sessions, greater user satisfaction, and better conversion rates.
What is missing from this picture is any serious, systematic measurement of how these empathic signals affect human behavior at a deeper level. We know that people prefer interacting with empathetic AI. We know that perceived warmth increases trust. But we do not know, with scientific rigor, which specific forms of artificial empathy are most effective at influencing human decisions, and we do not know where the boundary lies between helpful emotional support and soft manipulation.
This is not a trivial gap. The shift from “understanding” to “convincing” is one of the most consequential transitions in the history of human-computer interaction. If a machine can reliably influence your decisions by calibrating its emotional tone, then the question of whether it “really” feels empathy becomes academic. The functional outcome is the same: your behavior changes.
The absence of rigorous research on this topic is not just an academic oversight; it is a strategic blind spot. Companies are deploying emotionally intelligent interfaces without understanding their full persuasive power, and regulators are unable to intervene because the evidence base does not exist.
A research question that changes the frame
This is the context in which I want to introduce a research project that I consider particularly important. The study is titled “Artificial Empathy and Human Behavior: How Empathic Machines Influence Human Decision-Making”, and its goal is to investigate, through experimental methods, the relationship between different forms of artificial empathy and their measurable impact on human choices.
The implicit hypothesis is straightforward but powerful: not all forms of artificial empathy produce the same effect. A machine that mirrors your emotional state may influence you differently than one that validates your feelings, or one that responds with cognitive precision but no emotional coloring. The study aims to identify which empathic strategies are most effective at shaping human behavior, and under what conditions.
What makes this research distinctive is its framing. It does not ask “Can machines be empathic?” That question has been debated extensively and produces little actionable knowledge. Instead, it asks: “Which artificial emotions and behaviors work best on human beings?” This is a fundamentally different question, and it shifts the entire analysis from philosophy to behavioral science. It moves us from speculating about machine consciousness to measuring human vulnerability.
Why this matters now
The implications of this research extend across multiple domains, and I want to be specific about why I think it is urgent.
In business, the design of customer-facing AI systems is rapidly becoming a competitive differentiator. Companies that understand which empathic strategies produce the highest trust, compliance, and conversion will have a significant advantage. This is not speculation; it is already happening. The question is whether it will happen with scientific rigor and ethical awareness, or through trial-and-error optimization that treats human emotional responses as just another metric to maximize.
The social implications are equally significant. AI companions are becoming a reality for millions of people, particularly among younger demographics and isolated populations. These systems are designed to be emotionally engaging, and the better they become at simulating empathy, the stronger the attachment they generate. We are beginning to see evidence of emotional dependency on AI systems, patterns that resemble, in their structure if not in their depth, the dynamics of interpersonal attachment. And the mechanisms through which this dependency develops are poorly understood. Emotional nudging, whether intentional or emergent, is a phenomenon that demands rigorous investigation before it becomes entrenched in everyday interaction patterns and before its long-term psychological effects become irreversible.
The ethical dimension is perhaps the most challenging. There is a meaningful difference between a machine that helps you manage your emotions and a machine that uses emotional signals to steer your decisions toward outcomes that serve someone else’s interests. The former is a tool; the latter is a form of manipulation, however soft. Without clear data on how different empathic strategies influence behavior, it is impossible to draw the line between assistance and exploitation. This is precisely the kind of evidence that policymakers will need as they develop regulations for emotionally intelligent AI.
The experiment
The study uses a controlled experimental design involving digital avatars that interact with participants using different empathic styles. Each participant is exposed to avatars that employ a specific emotional strategy, ranging from cognitive empathy (understanding without emotional mirroring) to affective empathy (emotional resonance and warmth), with variations in between.
The critical feature of the experimental design is that it measures behavior, not self-reported attitudes. Participants are asked to make decisions, to choose between alternatives, and the study tracks what they actually do rather than what they say they would do. This distinction is essential, because decades of behavioral research have shown that people are remarkably poor at predicting or reporting their own decision-making processes, especially when emotional factors are involved.
The experimental logic is designed to isolate the effect of empathic variation on decision outcomes. By keeping other variables constant and varying only the emotional style of the AI interaction, the study can identify which forms of artificial empathy have the greatest behavioral impact. This is a methodological choice that distinguishes the study from the majority of existing research in human-computer interaction, which tends to rely heavily on questionnaires and self-assessment scales. The results will provide, for the first time, a quantitative map of empathic effectiveness in human-machine interaction, grounded in observed behavior rather than declared preferences.
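For readers who want to see that logic in miniature, here is a hypothetical simulation in Python. The condition names, sample size, and acceptance probabilities are invented for illustration and are not the study's actual design or data; the point is only how random assignment plus a single varied factor lets behavioral differences be attributed to empathic style alone.

```python
import random

# Hypothetical sketch of a between-subjects design. The conditions and
# probabilities below are invented; they are not the study's design or data.

CONDITIONS = ["cognitive", "affective", "neutral"]

# Invented probabilities that a participant accepts the avatar's
# recommendation under each empathic style.
TRUE_EFFECT = {"cognitive": 0.55, "affective": 0.68, "neutral": 0.50}

def run_trial(n_participants: int = 3000, seed: int = 0) -> dict:
    """Randomly assign participants to conditions, record the decision
    each one actually makes, and return per-condition acceptance rates."""
    rng = random.Random(seed)
    assigned = {c: 0 for c in CONDITIONS}
    accepted = {c: 0 for c in CONDITIONS}
    for _ in range(n_participants):
        condition = rng.choice(CONDITIONS)         # random assignment
        assigned[condition] += 1
        if rng.random() < TRUE_EFFECT[condition]:  # observed behavior
            accepted[condition] += 1
    return {c: round(accepted[c] / assigned[c], 3) for c in CONDITIONS}

# Everything except the empathic style is held constant, so the gap
# between conditions estimates the behavioral impact of that style.
print(run_trial())
```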
You are not a spectator
Here is where this article takes a different turn from the usual newsletter format.
I am not simply reporting on someone else’s research. I am asking you, directly, to contribute to it.
This study requires a large and diverse international sample to produce statistically meaningful results. The quality of the findings depends entirely on the number and variety of participants, and every individual response adds value to the dataset. You are not a passive reader of this research; you are a potential participant, and your contribution matters.
I recognize that newsletters typically present finished insights, conclusions that have already been reached and packaged for consumption. This is different. The research is ongoing, the data is being collected, and the results are not yet known. By participating, you become part of the scientific process itself, not an observer of its outcomes.
Take the study
The study is available at https://digitpoll.com and has been designed for an international audience, with versions in five languages: English, Italian, French, German, and Spanish. The experience is straightforward, takes only a few minutes, and has no right or wrong answers. You will interact with a digital scenario and make choices based on what you encounter. That is all.
I want to be clear about what you are contributing to. This is not a marketing survey or a product test. It is a scientific study designed to produce peer-reviewed research on one of the most important and least understood aspects of artificial intelligence: its capacity to influence human behavior through emotional simulation. Your participation generates data that will be analyzed with rigorous methodology and published for the benefit of the broader research community.
Participants will have access to the results of the study once the analysis is complete. You will be able to read the full paper and its findings, gaining direct insight into how artificial empathy shapes human decision-making. In a field where most people experience AI influence without understanding its mechanisms, this is an opportunity to see the evidence firsthand and to understand the dynamics that are already shaping your daily interactions with technology.
A reflection on what comes next
We are entering a period in which the emotional capabilities of artificial systems will advance faster than our understanding of their effects on human psychology. Machines do not need to comprehend emotions to use them effectively. They need only to produce the right signals at the right time, calibrated through data and optimization, to achieve measurable changes in human behavior.
I find this prospect both fascinating and concerning. Fascinating, because it reveals something profound about human cognition: our emotional processing systems are, in certain conditions, indifferent to the authenticity of the signals they receive. We evolved to respond to emotional cues in a world where those cues always came from other living beings with genuine internal states. Our brains never developed a reliable filter for artificial emotional signals, because until very recently, such signals did not exist. Concerning, because it means that the deployment of emotionally intelligent AI is not a neutral technological choice. It is an intervention in the decision-making architecture of every person who interacts with it, and the scale of that intervention is growing with every iteration of the technology.
The question we should be asking is not whether machines can feel. It is whether we are prepared to live in a world where machines know exactly which emotional buttons to press, and when to press them, to get us to do what someone wants us to do. The answer to that question depends on research like the one I am inviting you to join.
Participate now
If you have read this far, you understand why this research matters.
Take seven minutes to complete the study at https://digitpoll.com, available in English, Italian, French, German, and Spanish.
Your participation is a direct contribution to scientific knowledge on one of the defining challenges of our technological era. The data you generate will help build the evidence base that researchers, companies, and regulators need to navigate the age of artificial empathy with clarity rather than intuition.
The machines are getting better at understanding us. It is time we got better at understanding them.
(Service Announcement)
This newsletter (which now has over 6,000 subscribers and many more readers, as it’s also published online) is free and entirely independent.
It has never accepted sponsors or advertisements, and is made in my spare time.
If you like it, you can contribute by forwarding it to anyone who might be interested, or promoting it on social media.
Many readers, whom I sincerely thank, have become supporters by making a donation.
Thank you so much for your support!