In the rapidly evolving world of artificial intelligence, text-based chatbots have become ubiquitous in customer service, healthcare, education, and entertainment. However, despite their growing presence, the question persists: How do users respond to these digital interlocutors, especially when chatbots embody human-like qualities? A comprehensive meta-analysis recently published in Humanities and Social Sciences Communications sheds new light on this subject, uncovering subtle yet meaningful influences of human-likeness on how users perceive and interact with text-based conversational agents.
This extensive study synthesizes data from diverse experimental settings, investigating the extent to which human-like social cues embedded in purely textual interactions affect user responses. Unlike more tangible humanoid robots or avatar-based agents, text-based chatbots rely solely on language and dialogue to establish their presence. The core finding of the meta-analysis reveals a consistently small but positive effect of human-likeness on social responses, suggesting that even text alone, when designed thoughtfully, can foster a sense of social engagement and connection.
The research highlights the layered impact of human-like cues, distinguishing among user outcomes such as perception, rapport, trust, affect (both positive and negative), attitudes, and behaviors. Notably, effect sizes range from moderate for users’ perceptions of the chatbot to negligible for behavioral outcomes, such as actual follow-through or task completion. These nuances offer critical insight for developers aiming to optimize chatbot engagement without overstating the cues’ immediate power to drive concrete user actions.
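To make these pooled effect sizes concrete: meta-analyses of this kind typically combine per-study standardized effects under a random-effects model. The sketch below uses the DerSimonian–Laird estimator with made-up study values, not Klein’s data, purely to illustrate how a pooled estimate and its uncertainty are derived.

```python
import numpy as np

# Illustrative per-study standardized effects (e.g., Hedges' g) and variances;
# these numbers are invented for demonstration, not taken from Klein (2025).
effects = np.array([0.35, 0.12, 0.28, 0.05, 0.41])
variances = np.array([0.020, 0.015, 0.030, 0.010, 0.025])

# Fixed-effect weights and Cochran's Q, the heterogeneity statistic.
w = 1.0 / variances
fixed_pooled = np.sum(w * effects) / np.sum(w)
q = np.sum(w * (effects - fixed_pooled) ** 2)
df = len(effects) - 1

# DerSimonian-Laird estimate of between-study variance tau^2,
# which captures the variability across outcomes discussed above.
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - df) / c)

# Random-effects pooled estimate and 95% confidence interval.
w_star = 1.0 / (variances + tau2)
pooled = np.sum(w_star * effects) / np.sum(w_star)
se = np.sqrt(1.0 / np.sum(w_star))
print(f"pooled g = {pooled:.3f}, "
      f"95% CI [{pooled - 1.96 * se:.3f}, {pooled + 1.96 * se:.3f}]")
```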
One of the most intriguing aspects of the meta-analysis concerns the differentiation between primary and secondary social cues. Existing frameworks such as the Media Are Social Actors (MASA) paradigm, which builds on the Media Equation, propose that primary cues, such as visual human-like representations, exert greater influence on social responses than secondary cues like language style or conversational tone. However, the findings from text-based interactions disrupt this long-held assumption, demonstrating that secondary cues (language use that mirrors human communication patterns) can rival or exceed the impact of primary cues in eliciting social responses. This revision challenges designers and theorists to reevaluate how digitally mediated social presence is constructed and maintained.
Moreover, the meta-analysis points out that the influence of human-likeness is context-dependent. In particular, chatbots deployed for unstructured interactions—those without rigid scripts or predetermined responses—benefit more from human-like features. This flexibility allows chatbots to utilize social cues more authentically, fostering trust and comfortable self-disclosure among users. Conversely, in highly structured scenarios or critical tasks, where precision or efficiency is paramount, the marginal benefit of human-likeness diminishes. Such distinctions are vital for applying these insights responsibly, ensuring that human-like cues enhance rather than mislead users.
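As a design illustration only (the paper reports when cues help, not how to implement them), a deployment might gate cue intensity on interaction type. The `CueProfile` fields and the structured/critical distinction below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class CueProfile:
    """Hypothetical bundle of human-like cue settings for one deployment."""
    empathy_phrases: bool   # brief acknowledgements of user feelings
    small_talk: bool        # greetings, sign-offs, light social chatter
    informal_tone: bool     # contractions, casual phrasing

def select_cue_profile(structured: bool, safety_critical: bool) -> CueProfile:
    """Dial cues down where the meta-analysis finds diminishing returns
    (structured or critical tasks); dial them up in open-ended chat."""
    if structured or safety_critical:
        return CueProfile(empathy_phrases=False, small_talk=False,
                          informal_tone=False)
    return CueProfile(empathy_phrases=True, small_talk=True,
                      informal_tone=True)

# Example: an open-ended wellbeing chat vs. a structured password-reset flow.
print(select_cue_profile(structured=False, safety_critical=False))
print(select_cue_profile(structured=True, safety_critical=False))
```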
Trust emerges as a recurrent theme throughout the findings. Human-like chatbots tend to engender greater trust in the information they provide, which can be particularly influential in domains like health advice or financial services. This trust enhancement is tightly linked to rapport, the feeling of mutual understanding and connection between user and bot, which itself is positively affected by human-like language and interaction styles. This has profound implications for areas requiring sensitive disclosures or the establishment of long-term user relationships.
Positive affect, the user’s favorable emotional response, also shows a modest but consistent increase in response to human-like social cues. Interestingly, the study finds no corresponding effect on negative affect, indicating that while human-likeness may enhance enjoyment or satisfaction, it does not exacerbate user frustrations or negative feelings. This asymmetry narrows the ethical risk somewhat, suggesting that careful implementations of human-like features can boost engagement without provoking backlash or emotional discomfort.
The researchers also note subtle gradations in how human-likeness influences user attitudes toward chatbots. While effects on attitudes are generally small, they still reveal that incorporating social cues can shift users toward perceiving chatbots as more approachable, competent, or likeable. This attitudinal shift, albeit modest, can accumulate over repeated interactions, potentially fostering user retention and long-term adoption of conversational AI technologies.
Behavioral outcomes, however, remain the domain where human-likeness exerts the weakest effect. Whether measured by follow-up actions such as purchases, compliance with chatbot advice, or continued chatbot use, the data suggest only minimal behavioral changes attributable to human-like qualities in text. This indicates that while social cues engender favorable perceptions and feelings, these do not necessarily translate into immediate user behavior. It flags an essential area for future research, emphasizing the need to explore how social cues might indirectly influence outcomes over time or in combination with other factors.
From a technical standpoint, the study underscores the challenge and opportunity in engineering language-based social cues. Implementing nuanced conversational strategies—such as using naturalistic pronouns, demonstrating empathy, employing humor, or adapting politeness levels—requires sophisticated natural language processing and a deep understanding of human communication dynamics. Developers must balance authenticity with clarity, avoiding uncanny valley effects or misinterpretations that could erode user trust.
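To ground this in practice, here is a minimal sketch of how such secondary cues might be layered onto an LLM-backed chatbot through a style instruction. The prompt wording, the `build_prompt` helper, and the message format are illustrative assumptions, not an API or method from the study.

```python
# Minimal sketch: toggling secondary social cues via a style instruction.
# No specific model API is implied; the message-list format mirrors common
# chat-completion conventions.

SOCIAL_STYLE = (
    "Use first-person pronouns, briefly acknowledge the user's feelings, "
    "mirror the user's politeness level, and keep a warm, conversational tone."
)

NEUTRAL_STYLE = (
    "Answer concisely and impersonally; avoid small talk "
    "and emotional language."
)

def build_prompt(user_message: str, human_like: bool) -> list[dict]:
    """Assemble a chat-style message list, switching style per condition."""
    style = SOCIAL_STYLE if human_like else NEUTRAL_STYLE
    return [
        {"role": "system", "content": f"You are a support assistant. {style}"},
        {"role": "user", "content": user_message},
    ]

# Example: the same user turn rendered with and without human-like cues.
print(build_prompt("My order never arrived and I'm upset.", human_like=True))
print(build_prompt("My order never arrived and I'm upset.", human_like=False))
```

Keeping the two conditions behind a single flag also makes it straightforward to A/B test whether the added cues actually move the perception, rapport, and trust outcomes the meta-analysis identifies.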
Ethically, the findings highlight the dual-edged nature of human-likeness in chatbots. Increasing user disclosure and trust is advantageous for service delivery but raises concerns about user privacy, consent, and potential manipulation. Transparent design principles and informed user consent become paramount to harness these benefits without compromising ethical standards. The study’s emphasis on guidance for responsible design hints at the growing consensus that chatbot sophistication must be matched with robust governance frameworks.
Overall, this meta-analysis marks a significant step forward in unraveling the complex interplay between human-likeness and social responses in text-based conversational agents. By quantifying the magnitude and variability of these effects, it lays the groundwork for more targeted, effective, and ethical chatbot interactions. As AI systems continue to pervade everyday life, such empirical insights will prove invaluable for aligning technological advances with human social and psychological realities.
The quantified small-to-moderate positive effects detailed here encourage both skepticism and enthusiasm. Skepticism toward overstated claims of chatbot sociality reminds designers not to overpromise or anthropomorphize beyond what users truly experience. Enthusiasm arises from the clear evidence that carefully crafted social cues—even in the absence of embodied form—can genuinely improve user engagement, satisfaction, and trust. This balance will likely shape the future landscape of human-agent interaction.
Finally, the growing weight given to secondary cues relative to primary visual or embodied ones signals a paradigm shift in conversational AI design. Text, traditionally viewed as limited compared to multimodal channels, now takes on elevated importance as a conduit for social presence. For the broader AI community and society at large, this insight opens exciting possibilities for creating inclusive, accessible, and culturally sensitive chatbots optimized for myriad communication contexts.
In conclusion, the meta-analysis by S.H. Klein offers a rigorous, data-driven reassessment of human-likeness effects on text-based chatbots, challenging existing theories, illuminating practical design pathways, and raising crucial ethical flags. Its nuanced perspective invites researchers, developers, and policymakers to rethink how social cues function in digital interaction, ensuring that technology serves both human needs and societal values in equal measure.
Subject of Research: The influence of human-like social cues on user responses toward text-based conversational agents.
Article Title: The effects of human-like social cues on social responses towards text-based conversational agents—a meta-analysis.
Article References:
Klein, S.H. The effects of human-like social cues on social responses towards text-based conversational agents—a meta-analysis. Humanit Soc Sci Commun 12, 1322 (2025). https://doi.org/10.1057/s41599-025-05618-w