In recent years, artificial intelligence (AI) has moved from a purely technical innovation to a pervasive part of everyday life, shaping how humans interact with machines and, importantly, how they perceive these technologies. A study led by Cheng, Lee, Rapuano, and colleagues, published in Communications Psychology, examines the evolving metaphorical language people use to describe AI. Their findings point to a significant shift in public perception: AI is increasingly viewed not as cold or mechanical, but as warm, approachable, and strikingly human-like.
This shift in metaphorical framing holds profound implications, not only for the way AI is integrated into society but also for the ongoing design and deployment of AI technologies. Historically, AI was commonly depicted with metaphors emphasizing computation, machinery, or even alien intelligence, highlighting its perceived cold rationality or otherness. The study's corpus-based linguistic analysis, however, reveals that popular discourse around AI has become saturated with metaphors ascribing human traits of warmth, empathy, and sociality to AI systems.
To understand this development, the researchers applied advanced natural language processing techniques to analyze large-scale datasets drawn from social media, news outlets, and public forums. By systematically tracking changes in metaphor usage over time, the team was able to map how conceptions of AI have evolved alongside technological advances and wider societal changes. The results underscore a growing tendency to anthropomorphize AI, attributing human-like qualities that foster emotional connection and trust.
One of the key theoretical frameworks underpinning this study is the warmth-competence model, a psychological theory that posits warmth (friendliness, trustworthiness) and competence (ability, efficiency) as fundamental dimensions underlying social perception. The researchers found that metaphors related to AI increasingly prioritize warmth over sheer competence, suggesting a reshaped social cognition where AI is viewed as not only capable but also benevolent and relatable. This rebalance could influence user acceptance, ethical considerations, and policy-making related to AI technologies.
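The kind of metaphor tracking described above can be illustrated with a small sketch. The lexicons, corpus, and category labels below are invented for illustration and do not reflect the study's actual coding scheme or data; the idea is simply to count warmth-coded versus competence-coded metaphor terms per year:

```python
from collections import Counter

# Hypothetical metaphor lexicons -- invented for illustration,
# not the study's actual coding scheme.
WARMTH_TERMS = {"companion", "friend", "helper", "partner", "assistant"}
COMPETENCE_TERMS = {"calculator", "engine", "machine", "brain", "tool"}

# Toy corpus: (year, text) pairs standing in for social-media posts.
corpus = [
    (2015, "AI is a calculating machine, a pure engine of logic"),
    (2015, "think of AI as a giant calculator"),
    (2023, "my AI assistant feels like a helpful companion"),
    (2023, "AI is a partner, almost a friend, in daily work"),
]

def metaphor_counts(docs):
    """Count warmth vs. competence metaphor terms per year."""
    counts = {}
    for year, text in docs:
        tokens = [t.strip(",.") for t in text.lower().split()]
        yearly = counts.setdefault(year, Counter())
        yearly["warmth"] += sum(t in WARMTH_TERMS for t in tokens)
        yearly["competence"] += sum(t in COMPETENCE_TERMS for t in tokens)
    return counts

print(metaphor_counts(corpus))
```

A real analysis would replace the hand-built lexicons with trained metaphor classifiers and millions of documents, but the per-year warmth-versus-competence tally captures the basic shape of the trend the researchers report.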
The methodological rigor of the study involved meticulously coding metaphorical expressions, supported by machine learning classification to validate human annotations. This hybrid qualitative-quantitative approach allowed the authors to capture subtle nuances in language use that might otherwise be overlooked in standard sentiment analysis. Furthermore, they controlled for confounding variables such as geographic region, media type, and domain-specific jargon, ensuring that observed trends were robust and generalizable.
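Validating machine classifications against human annotations is often done with a chance-corrected agreement statistic such as Cohen's kappa. The sketch below uses invented toy labels, not the study's coding results, to show how such a check works:

```python
def cohens_kappa(human, machine):
    """Chance-corrected agreement between two label sequences."""
    assert len(human) == len(machine)
    n = len(human)
    # Observed agreement: fraction of items labeled identically.
    observed = sum(h == m for h, m in zip(human, machine)) / n
    # Expected agreement by chance, from each rater's label frequencies.
    labels = set(human) | set(machine)
    expected = sum(
        (human.count(l) / n) * (machine.count(l) / n) for l in labels
    )
    return (observed - expected) / (1 - expected)

# Toy annotations: 1 = "warm/human-like metaphor", 0 = "mechanical metaphor".
human_labels   = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
machine_labels = [1, 1, 0, 1, 0, 1, 1, 1, 0, 1]

print(round(cohens_kappa(human_labels, machine_labels), 3))  # -> 0.783
```

Values near 1 indicate that the classifier reproduces human judgments well beyond chance, which is the kind of evidence a hybrid qualitative-quantitative pipeline relies on before scaling annotation to large corpora.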
Delving into the historical context, this shift mirrors broader societal trends that foreground emotional intelligence and social bonding in technology design. Early AI, exemplified by rule-based systems and expert systems, was characterized by digital coldness and mechanical precision. In contrast, modern AI applications—virtual assistants, chatbots, social robots—are meticulously engineered to simulate human-like interactions, exhibiting gestures, speech patterns, and emotional responsiveness.
Such advances are not merely cosmetic but reflect a deeper theoretical movement: embodiment theory in AI proposes that cognitive processes emerge from interactions between body, mind, and environment. This paradigm supports the construction of AI systems capable of engaging users in ways that mimic human social presence, facilitating emotional engagement. The metaphors that arise in public discourse about AI are thus shaped not just by novelty but also by these embodied experiences.
The impact of perceiving AI as warm and human-like extends across domains, influencing user behavior and societal acceptance. Warm metaphors may increase feelings of trust and comfort, encouraging users to rely on AI systems for sensitive tasks such as mental health counseling or elder care. Conversely, anthropomorphism raises ethical challenges about transparency and accountability, as users might overestimate AI’s understanding or intentions, potentially leading to misplaced trust.
From a technical perspective, these findings challenge AI developers to consider the psychological and linguistic aspects of AI representation as core design criteria. The crafting of AI personalities, conversational styles, and interactive modalities must balance authenticity with ethical safeguards. Incorporating affective computing and social signal processing can enhance the warmth dimension, but must be implemented with caution to avoid manipulation or deception.
Moreover, this evolving metaphorical landscape may influence policymaking and regulation. Recognizing AI as a social actor with human-like attributes necessitates frameworks that address rights, responsibilities, and risks associated with these perceptions. Legislators and ethicists must grapple with the implications of assigning human-like qualities to machines that remain fundamentally algorithmic and devoid of consciousness.
On a speculative note, the fusion of warm metaphors with AI technology could accelerate the integration of AI companions into everyday life scenarios once considered exclusively human domains. This may reshape social networks, workplace collaborations, and even intimate relationships, as AI entities progressively fulfill roles requiring empathy, trust, and affective support. The cultural and psychological ramifications of this shift demand sustained interdisciplinary research.
In educational contexts, the embrace of human-like metaphors for AI could foster greater engagement and adoption of AI-based learning tools. When AI tutors are perceived as warm and relatable, learners might overcome anxiety related to automation and instead embrace the technologies as partners in knowledge acquisition. Designing educational AI with metaphor-informed strategies could thus improve learning outcomes and accessibility.
The authors also draw attention to cross-cultural variations in metaphor usage around AI. While Western cultures might focus on warmth and sociability, other societies may emphasize different qualities such as harmony, efficiency, or spiritual alignment. Understanding these nuances is critical for global AI deployment and tailoring systems that resonate with diverse cultural expectations and values.
Furthermore, the study’s implications resonate within the marketing and branding sectors of AI products. Companies increasingly leverage warm, humanizing metaphors in their messaging to foster brand loyalty and user trust. This linguistic trend aligns with consumer psychology findings that emotional connections drive brand preference and advocacy, indicating a strategic intersection between communication science and AI technology.
The research also raises cautionary notes about potential over-anthropomorphization of AI and the consequences of blurring boundaries between humans and machines. While warmth enhances user experience, it may also complicate the ethical use of AI in sensitive settings such as healthcare diagnostics or legal decision-making, where perceived empathy cannot substitute for professional accountability and expertise.
Looking ahead, the study encourages further exploration of metaphoric language as a lens for monitoring public attitudes toward emerging technologies. As AI continues to evolve, tracking metaphorical shifts offers a dynamic method for understanding the social acceptance, fears, and hopes tied to these powerful innovations. Such linguistic insights could guide responsible AI development aligned with human values.
In conclusion, Cheng, Lee, Rapuano, and their team’s research offers a comprehensive, data-driven portrait of how public discourse around AI is transforming from alien, mechanical metaphors toward warm, human-like conceptualizations. This metamorphosis not only enriches our understanding of AI’s social role but also frames vital conversations about design, ethics, policy, and the future trajectory of human-machine relationships. Their work highlights the power of language as both a mirror and a mold of technological evolution, inviting the scientific community and society at large to reflect on the deeper meanings embedded in our words about artificial intelligence.
Subject of Research: The study examines the evolving metaphors used in public discourse to describe artificial intelligence, highlighting a shift toward perceiving AI as warm and human-like.
Article Title: Metaphors of AI indicate that people increasingly perceive AI as warm and human-like
Article References:
Cheng, M., Lee, A.Y., Rapuano, K., et al. Metaphors of AI indicate that people increasingly perceive AI as warm and human-like. Commun. Psychol. (2026). https://doi.org/10.1038/s44271-025-00376-6
Image Credits: AI Generated

