In an era dominated by rapid advances in artificial intelligence, the boundaries of machine empathy and human emotional interaction are increasingly under scrutiny. A groundbreaking study by Wenger, Cameron, and Inzlicht, published in Communications Psychology in 2026, probes the paradoxical dynamics of empathic preference, revealing that even when people rate AI’s empathy as higher, they overwhelmingly opt to receive empathy from fellow humans. The finding exposes complex layers of human emotional processing and technological acceptance, shedding light on the nuanced interplay between perceived empathy and preferred sources of emotional support.
Empathy, traditionally regarded as an inherently human capacity, encapsulates the ability to understand and share the emotional states of others. The integration of AI into social communication platforms has introduced new modes of empathy simulation, challenging long-held assumptions about emotional labor and interpersonal connection. Wenger et al.’s research takes up this challenge directly, investigating the cognitive and affective mechanisms that govern how people evaluate empathy delivered by humans versus machines.
The investigators employed rigorous experimental designs built around virtual interactions in which participants received empathic responses either from AI-driven agents or from trained human confederates. Strikingly, despite consistently rating the AI’s empathic behavior as more accurate, detailed, and precise, participants’ behavioral choices leaned toward human empathy when the option was available. This discrepancy between evaluative ratings and real-world preferences suggests that empathy is not merely a cognitive judgment but is deeply embedded in relational and contextual factors.
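To make the dissociation concrete, here is a minimal sketch with invented numbers (these are not the study’s data) showing how, in a design of this shape, mean ratings can favor the AI while choices favor the human:

```python
# Hypothetical illustration of the rating-versus-choice dissociation.
# The numbers below are invented for exposition; they are not the
# study's data. Each participant rates both responses (1-7 scale)
# and then chooses which source to interact with ("ai" or "human").

trials = [
    # (ai_rating, human_rating, chosen_source)
    (6, 5, "human"),
    (7, 5, "human"),
    (6, 6, "ai"),
    (7, 4, "human"),
    (5, 5, "human"),
    (6, 5, "ai"),
    (7, 6, "human"),
    (6, 4, "human"),
]

mean_ai = sum(t[0] for t in trials) / len(trials)
mean_human = sum(t[1] for t in trials) / len(trials)
human_choice_rate = sum(t[2] == "human" for t in trials) / len(trials)

print(f"Mean AI empathy rating:    {mean_ai:.2f}")     # 6.25
print(f"Mean human empathy rating: {mean_human:.2f}")  # 5.00
print(f"Chose a human partner:     {human_choice_rate:.0%}")  # 75%
# Evaluation favors the AI, yet choice favors the human: the two
# measures come apart, which is the pattern the study reports.
```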
From a technical standpoint, the AI systems used in the study were advanced natural language processing models capable of interpreting emotional cues in participants’ verbal expressions and generating responses that mirrored empathic human communication. These systems relied on machine learning frameworks that analyze sentiment, tone, and contextual meaning, allowing real-time adaptive feedback that closely mimicked human empathy. The effectiveness of these algorithms at delivering targeted emotional responses helps explain why participants rated AI empathy as higher in quality.
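The paper does not publish its system’s implementation, so the following is only a toy sketch of the pipeline shape described above: classify the emotional valence of an utterance, then condition the reply on it. The keyword lists and templates are invented, and a production system would use a large language model rather than lexical matching:

```python
# A deliberately simplified sketch of the sentiment-to-empathy loop.
# All keywords and templates here are invented for illustration; this
# only shows the pipeline shape: detect affect, then adapt the reply.

NEGATIVE = {"sad", "lonely", "anxious", "overwhelmed", "hurt", "lost"}
POSITIVE = {"happy", "relieved", "proud", "excited", "grateful"}

def detect_sentiment(utterance: str) -> str:
    """Crude lexical sentiment: count affect words in the utterance."""
    words = set(utterance.lower().split())
    neg, pos = len(words & NEGATIVE), len(words & POSITIVE)
    if neg > pos:
        return "negative"
    if pos > neg:
        return "positive"
    return "neutral"

def empathic_reply(utterance: str) -> str:
    """Pick a response template conditioned on the detected affect."""
    templates = {
        "negative": "That sounds really hard. It makes sense that you feel this way.",
        "positive": "That's wonderful to hear. I'm glad things are going well for you.",
        "neutral": "Thank you for sharing that. Can you tell me more about how it felt?",
    }
    return templates[detect_sentiment(utterance)]

print(empathic_reply("I've been feeling lonely and overwhelmed lately."))
# -> "That sounds really hard. It makes sense that you feel this way."
```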
However, the preference for human empathy despite superior AI performance hints at underlying socio-emotional dynamics. Human empathy carries an intrinsic authenticity and unpredictability that AI, however sophisticated, struggles to replicate fully. Spontaneous emotional resonance, subtle cues such as microexpressions, and the shared histories embedded in human interactions, all currently beyond algorithmic reach, may contribute substantially to this preference.
The study’s findings challenge the simplistic narrative that technological proficiency equates to user preference in socio-emotional domains. Instead, they illuminate the persistent role of human connection in emotional support systems. This insight has profound implications for designing empathic AI in healthcare, education, and customer service, where human-machine collaboration could benefit from acknowledging and incorporating emotional context rather than merely optimizing response accuracy.
Beyond practical applications, Wenger and colleagues’ work foregrounds philosophical questions about the nature of empathy and its mechanization. If AI empathy is rated higher for precision yet fails to satisfy emotional needs, it provokes reconsideration of what empathy truly entails: is it purely cognitive mirroring, or is it fundamentally relational and experiential? The human tendency to choose empathy from other humans, despite rational assessments to the contrary, signals a deeper, perhaps evolutionarily rooted, preference for organic social connection.
Furthermore, this research emphasizes the importance of affective presence, the felt quality of being emotionally and socially present, in empathic exchanges. While AI can replicate verbal and even non-verbal cues through sophisticated algorithms, the experiential dimension of feeling genuinely “seen” and “understood” may require consciousness and shared lived experience that artificial entities lack. This could explain why individuals, even knowing that AI responses are technically more empathic, still gravitate toward human interlocutors for emotional solace.
The scientists also explored the role of trust and vulnerability in empathic interactions. Human empathy entails a level of emotional vulnerability that is intertwined with shared norms, cultural expectations, and reciprocal understanding. AI empathy, while accurate, may lack the relational history or mutual investment to evoke comparable feelings of safety and openness. As the study highlights, these relational factors modulate preference behavior and are critical to designing emotionally intelligent systems.
Moreover, the investigation points to potential biases and cognitive dissonances in how people conceptualize AI versus human empathy. The elevated ratings of AI responses might stem from perceptions of AI as unbiased and non-judgmental, offering standardized compassion. Paradoxically, those same qualities may reduce emotional richness and perceived sincerity, driving people to risk the messiness of human empathy for its authenticity and warmth.
Crucially, Wenger et al.’s research methodology involved triangulating subjective self-reports with physiological measures, including skin conductance and heart rate variability, to assess emotional engagement during empathic exchanges. These objective data complement the subjective preference findings, suggesting that human empathy elicits more robust autonomic arousal and reinforcing the affective depth of human-to-human interaction.
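For readers unfamiliar with heart rate variability metrics, the sketch below computes RMSSD, one standard HRV index, from a series of inter-beat intervals. The study’s analysis code is not published, and the interval values here are invented purely to show how the metric is calculated:

```python
import math

# RMSSD: root mean square of successive differences between inter-beat
# (RR) intervals, a standard heart rate variability index. The interval
# values below are invented for illustration; they are not study data.

def rmssd(rr_intervals_ms: list[float]) -> float:
    """RMSSD over a series of RR intervals given in milliseconds."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Hypothetical recordings: more beat-to-beat variation in one exchange
# than the other, yielding a higher RMSSD.
rr_human = [812, 845, 790, 860, 805, 838]
rr_ai    = [820, 826, 818, 824, 821, 825]

print(f"RMSSD (human partner): {rmssd(rr_human):.1f} ms")  # ~51.2 ms
print(f"RMSSD (AI partner):    {rmssd(rr_ai):.1f} ms")     # ~5.7 ms
```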
The study also ignites debate on the future role of AI in mental health and emotional caregiving. While AI may serve as a first-line resource offering timely and consistent empathic feedback, it may not yet replace essential therapeutic human connection. Developers and policymakers should take note of this preference gap as they integrate AI into sensitive domains, ensuring that human touch and nuanced understanding remain at the forefront.
In conclusion, Wenger, Cameron, and Inzlicht have unveiled a critical paradox: despite consciously acknowledging superior AI empathy performance, individuals instinctively choose human empathy. This discovery underscores the complexity of empathy as a psychological construct and the intrinsic value of human relationality. It sounds a clarion call for multidisciplinary approaches combining psychology, AI development, and philosophy to advance emotionally intelligent technologies that respect and reflect human emotional architecture.
This research marks a pivotal moment in the evolving narrative of human-machine relations, highlighting that technological superiority in empathy does not equate to emotional or experiential supremacy. As AI continues to evolve, understanding this nuanced human preference will be central to creating harmonious interactions that honor both computational precision and the irreplaceable qualities of human connection.
Subject of Research: The study investigates the paradox between people’s subjective ratings of AI empathy and their behavioral preference for human empathy, exploring the cognitive, affective, and relational factors influencing empathic choice.
Article Title: People choose to receive human empathy despite rating AI empathy higher.
Article References:
Wenger, J.D., Cameron, C.D. & Inzlicht, M. People choose to receive human empathy despite rating AI empathy higher. Communications Psychology (2026). https://doi.org/10.1038/s44271-025-00387-3