In a groundbreaking study that bridges robotics, psychology, and human-computer interaction, researchers from the University of East Anglia have demonstrated a significant psychological phenomenon: the perception of robots as human-like entities can be enhanced through social engagement, even when the robot’s physical form is decidedly mechanical. This discovery has profound implications for the future integration of robots into daily human environments, ranging from healthcare to customer service. The study, recently published in the Journal of Experimental Psychology, uncovers the nuanced mechanisms by which human cognition attributes agency and social presence to nonhumanoid robots based solely on the context of interaction.
The study involved a small, box-shaped robot called Cozmo, specifically designed to interact socially with humans through games and expressive behavior. Unlike humanoid robots that physically mimic human appearance, Cozmo’s design is intentionally minimalistic, featuring a compact body and vivid digital eyes capable of conveying emotion. The researchers wanted to explore whether social experience alone, detached from human likeness in form, could trigger vicarious agency, the cognitive attribution of intentionality to a robot’s actions. More than 100 participants were divided into two groups: one engaged in social games with Cozmo before more functional interactions, while the control group interacted with the robot mechanically, without prior play.
Analyses of the participants’ responses revealed a fascinating psychological effect. The group that played games first began to perceive Cozmo’s actions as more intentional and human-like, making cognitive errors consistent with those people typically make when judging the timing of human-induced events. Conversely, the control group, which lacked the playful social context, treated the robot strictly as an object with predictable, mechanical behavior. These systematic timing errors represent a compelling cognitive marker, suggesting that the participants’ brains were processing Cozmo’s movements in a manner similar to how they interpret human actions – an effect the researchers term ‘vicarious agency.’
This research challenges long-standing assumptions about the essential role of physical human likeness in robot acceptance and social engagement. Traditionally, social robots have relied heavily on humanoid form factors, employing facial expressions and body language to elicit empathy and cooperation. The UEA study’s findings suggest that the context and nature of interaction—particularly those that foster social play—may play a far more critical role in shaping human perceptions. This could revolutionize design philosophies, sparking a paradigm shift away from costly humanoid robotics towards simpler, more efficient forms that emphasize social functionality over aesthetics.
Dr. Natalie Wyer, the lead psychologist on the project, emphasizes the broader implications for sectors preparing to embrace robotic assistance. Healthcare environments, for instance, increasingly utilize robots for eldercare and rehabilitation, where acceptance and emotional rapport are crucial for patient compliance and well-being. “If patients can engage playfully with a robot and thereby perceive it as an autonomous, social actor, it could drastically improve both the effectiveness and trustworthiness of robotic care providers,” she explains. Similarly, customer service roles, where robots must navigate complex social cues, may benefit from incorporating interaction models that emphasize shared social experience rather than mere task execution.
On a technical level, the study elegantly delineates how humans internalize perceived intentionality through timing and predictive processing. The human brain normally maintains highly refined expectations about when actions should occur in response to others. When these predictions are violated in systematic ways, such as errors in estimating the temporal relationship between cause and effect, it signals that the actor may be autonomous rather than mechanical. That Cozmo, despite lacking a humanoid form, could trigger such processing underscores a critical cognitive flexibility: whether something is treated as a ‘social actor’ or a mere object depends on experience rather than visual appearance.
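The timing-error logic described above can be sketched as a toy simulation. All numbers here (the true interval, bias magnitudes, sample sizes) are illustrative assumptions, not the study’s actual parameters; the point is the shape of the analysis: compare each group’s judged action-effect interval against the true one, and read a systematic compression of judged time as a cognitive marker of perceived agency.

```python
import random
import statistics

random.seed(42)

# Hypothetical true gap between the robot's action and its effect.
ACTUAL_INTERVAL_MS = 400


def judged_intervals(mean_bias_ms, n=50, sd=60):
    """Simulate noisy interval judgments with a systematic bias.

    A negative bias models 'temporal binding': the perceived gap
    between an action and its effect shrinks when the observer
    treats the actor as intentional rather than mechanical.
    """
    return [ACTUAL_INTERVAL_MS + random.gauss(mean_bias_ms, sd)
            for _ in range(n)]


# Assumed group parameters: the social-play group shows binding
# (compressed judgments); the control group judges accurately.
social = judged_intervals(mean_bias_ms=-80)
control = judged_intervals(mean_bias_ms=0)

binding_social = statistics.mean(social) - ACTUAL_INTERVAL_MS
binding_control = statistics.mean(control) - ACTUAL_INTERVAL_MS

print(f"social-play group bias: {binding_social:+.1f} ms")
print(f"control group bias:     {binding_control:+.1f} ms")
```

In a real analysis the bias would be estimated from participants’ responses rather than assumed, but the contrast is the same: a reliably negative bias in the social-play group, absent in the control group, is the systematic timing error the researchers interpret as vicarious agency.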
Moreover, the research introduces intriguing questions about robotic autonomy and artificial consciousness. While Cozmo does not possess genuine autonomous decision-making, the participants’ willingness to attribute independent ‘thought’ to it after gameplay reveals how sophisticated social cues can blur the boundaries between programmatic behavior and perceived consciousness. This has ethical and technical ramifications as artificial intelligence systems become more advanced and begin to inhabit increasingly social roles. Understanding how humans cognitively negotiate this boundary will be vital for the responsible design and deployment of next-generation autonomous agents.
Such insights also contribute to ongoing debates in robotics ethics about the potential emotional consequences of anthropomorphizing robots. The study’s findings warn that even simple, non-anthropomorphic machines can evoke complex social attributions, which may have unintended psychological effects. For instance, if people start perceiving robots as intentional beings without clear understanding of their actual capacities, there may be risks related to misplaced trust or emotional dependency. This underscores the importance of transparency and thoughtful design in integrating robots into society responsibly.
In practical terms, the UEA team’s experimental methodology involved controlled, randomized trials and precise measurement of temporal judgment errors, leveraging psychological paradigms grounded in social cognition research. This interdisciplinary approach—drawing from behavioral psychology, robotics engineering, and cognitive neuroscience—illustrates the power of cross-field collaboration in unraveling complex human-robot dynamics. It lays a foundation for future experimental designs aiming to refine how robotic agents can effectively engage human social cognition and behavior.
Beyond the laboratory, these findings open exciting prospects for the consumer robotics market. Companies developing social robots might reconsider the emphasis placed on anthropomorphic hardware, instead investing in rich software-driven interactions that foster social presence. Simple robots capable of expressive play and seemingly autonomous behavior could be more cost-effective and broadly appealing than humanoid counterparts, expanding their accessibility and potential for everyday integration.
Crucially, as robots increasingly share spaces with humans, these results highlight the necessity of reimagining coexistence strategies. Engaging robots in social games may be more than a novel icebreaker; it could be essential for forming the cognitive bridges that enable meaningful human-robot collaboration. If the ability to ‘play’ with robots precedes their acceptance as partners, future robotic ecosystems may evolve around social and emotional dynamics as much as technological capability.
In summation, the UEA study advances both theoretical understanding and practical application in human-robot interactions, revealing that perceived vicarious agency arises not from appearance but from the social context of interaction. Such knowledge promises to guide the next wave of robot design principles, policies, and ethical frameworks essential for blending artificial agents into the fabric of human life seamlessly yet thoughtfully.
Subject of Research: Not applicable
Article Title: Observed Nonhumanoid Robot Actions Induce Vicarious Agency When Perceived as Social Actors, Not as Objects
News Publication Date: 4-Jul-2025
Image Credits: University of East Anglia
Keywords: Robots, Robotic designs, Robotics ethics, Robots and society, Soft robotics, Robotics, Computer science, Artificial consciousness, Cognitive robotics, Generative AI, Psychological science, Behavioral psychology, Cognitive psychology, Social psychology