In an era where mobile applications and artificial intelligence (AI) chatbots have become integral to daily life, a groundbreaking study from Penn State reveals a paradox inherent in the design of these interactive technologies. The research demonstrates that the more playful and engaging a mobile app or AI chatbot feels to interact with, the more likely users are to lower their guard and risk their privacy without fully realizing it. This phenomenon sheds new light on the delicate balance between user experience and data security in contemporary digital platforms.
The study, recently published in the journal Behaviour & Information Technology, investigates how different forms of interactivity during a sign-up process impact users’ vigilance regarding privacy concerns. The researchers took a novel approach by examining two distinct types of interactivity: “message interactivity,” which involves conversational exchanges with the app that build upon previous user inputs, and “modality interactivity,” which includes interface elements like clicking and zooming on images. Their goal was to understand how these interactive elements influence users’ perceptions of playfulness and, more importantly, their readiness to disclose personal information.
To explore this dynamic, the researchers conducted an online experiment with 216 participants, who were asked to complete the sign-up process for a simulated fitness app. Participants were randomly assigned to versions of the app that varied in their levels of message and modality interactivity, then rated their experience on seven-point scales measuring perceived fun, engagement, and privacy concerns. The data provided compelling evidence that greater interactivity heightened the app's perceived playfulness, which in turn dulled individuals' alertness to privacy risks.
One of the study’s most surprising findings relates to message interactivity. While intuitive assumptions would predict that a conversational, back-and-forth interaction would encourage users to think more critically about the personal data they are sharing, the opposite was true. In fact, message interactivity had a distracting effect, drawing users into a playful mindset and diminishing their caution. This insight challenges long-held beliefs in the design community regarding AI chatbots and conversational systems, highlighting that immersive dialogue can create a false sense of security and promote inadvertent data disclosure.
Lead author Jiaqi Agnes Bao of the University of South Dakota, who completed this work during her doctoral studies at Penn State, emphasizes the critical need for interface designs that foster user awareness. Bao suggests that while interactivity can enhance user engagement, an important factor for app success, it must be balanced carefully against privacy considerations. One promising design solution she proposes involves coupling message interactivity with modality interactivity, such as periodically inserting pop-up prompts during conversations that encourage users to pause, reflect, and reassess the information they are submitting.
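To make the proposed coupling concrete, here is a minimal sketch of how a conversational sign-up flow might interleave a modality-style interruption every few exchanges. The names (SignupBot, showReflectionPrompt) and the three-turn cadence are illustrative assumptions, not the study's actual implementation.

```typescript
// Hypothetical sketch: a conversational sign-up flow (message
// interactivity) interleaved with a pop-up-style confirmation step
// (modality interactivity) every few turns.

type Turn = { field: string; question: string; answer: string };

const REFLECT_EVERY = 3; // assumed cadence for the interruption

class SignupBot {
  private turns: Turn[] = [];

  // Record one conversational exchange.
  ask(field: string, question: string, answer: string): void {
    this.turns.push({ field, question, answer });
    // After every REFLECT_EVERY answers, pause with a reflection prompt.
    if (this.turns.length % REFLECT_EVERY === 0) {
      this.showReflectionPrompt();
    }
  }

  // Modality-style interruption: summarize what has been collected so far
  // and invite the user to revise before the conversation resumes.
  private showReflectionPrompt(): void {
    const summary = this.turns.map((t) => `${t.field}: ${t.answer}`).join("\n");
    console.log("--- Pause: here is what you have shared so far ---");
    console.log(summary);
    console.log("Choose 'Continue' to proceed or 'Edit' to revise an answer.");
  }
}

// Usage: a simulated fitness-app sign-up conversation.
const bot = new SignupBot();
bot.ask("name", "What's your name?", "Alex");
bot.ask("age", "How old are you?", "29");
bot.ask("weight", "What's your current weight?", "70 kg"); // third answer triggers the pause
```

The point of the cadence-based pause is that it breaks the conversational flow just often enough to restore deliberation without abandoning the engaging, chat-like format.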
Senior author S. Shyam Sundar, a distinguished professor at Penn State and director of the Center for Socially Responsible Artificial Intelligence, elaborated on the implications of these findings. According to Sundar, current generative AI models primarily rely on message interactivity, creating highly engaging, conversational user experiences. This study cautions that such engagement can become a double-edged sword: while captivating users, it may inadvertently lower their awareness of privacy risks. Sundar advocates for integrating subtle interruption mechanisms within AI interactions as a way to “jerk users into awareness,” preventing unchecked data oversharing.
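An interruption mechanism of the kind Sundar describes could also be triggered by content rather than cadence. The sketch below, offered in the same hedged spirit, holds a message for review when a lightweight pattern check suggests it contains sensitive personal data; the regular expressions and function names are crude illustrations, not a production-grade classifier or anything the study tested.

```typescript
// Hypothetical sketch: hold a message for review when it appears to
// contain sensitive personal data, instead of sending it straight to
// the chatbot. The patterns below are illustrative assumptions only.

const SENSITIVE_PATTERNS: RegExp[] = [
  /\b\d{3}-\d{2}-\d{4}\b/, // SSN-like pattern
  /\b\d{13,16}\b/, // long digit runs, e.g. card-like numbers
  /\b(diagnos|medication|therapy)\w*/i, // health-related terms
];

function looksSensitive(message: string): boolean {
  return SENSITIVE_PATTERNS.some((pattern) => pattern.test(message));
}

function sendToChatbot(message: string): void {
  if (looksSensitive(message)) {
    // The interruption: surface an awareness prompt and hold the message.
    console.log("Heads up: this message may contain sensitive personal data.");
    console.log("Review it before sending, or remove details you'd rather keep private.");
    return;
  }
  console.log(`Sent: ${message}`);
}

sendToChatbot("My card number is 4111111111111111"); // held for review
sendToChatbot("What workouts improve endurance?"); // sent normally
```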
The broader context of this research is particularly relevant as generative AI technologies proliferate across diverse sectors—from healthcare and finance to social media and entertainment. The playful nature of these systems, often celebrated for enhancing accessibility and user satisfaction, now faces scrutiny for potentially masking users’ vulnerability to privacy breaches. The researchers argue that developers and designers bear a significant ethical responsibility to embed features that not only inform but actively guide users toward more conscious data sharing.
In addition to its theoretical contributions, the research offers practical guidelines for balancing playfulness with privacy protection. The combination of message and modality interactivity, for example, can prompt users to intermittently evaluate their information disclosure without significantly detracting from the overall user experience. This design strategy points toward a future where interactive systems are both engaging and trustworthy, a crucial advancement as digital ecosystems grow increasingly complex.
Furthermore, the study highlights the importance of moving beyond simplistic user notifications about data sharing. According to co-author Yongnam Jung, a doctoral candidate at Penn State, truly building trust requires platforms to facilitate informed decision-making processes rather than relying on passive acknowledgment. This shift towards user-centric privacy empowerment is fundamental to raising digital literacy and fostering sustainable interactions in AI-driven applications.
This latest investigation builds on a foundation of prior research by the team, which similarly revealed that interactivity, while beneficial for engagement, tends to draw attention away from potential risks. Taken together, these studies underscore a critical trade-off that designers, policymakers, and users must grapple with: enhanced interactivity enriches the user experience but simultaneously complicates privacy management.
The study arrives amid a rapidly evolving landscape in generative AI development. As advanced models generate increasingly natural and compelling conversations, there is growing urgency to address the unintended consequences of such “playful” interactions. The researchers urge industry leaders to look beyond traditional interfaces and intelligently embed modality interruptions that prompt privacy awareness at critical moments in the user journey.
Ultimately, this research serves as a cautionary tale about the seductive power of interactivity in digital environments. It shows how the very features designed to make apps more attractive can paradoxically dull user vigilance, leaving privacy vulnerabilities unnoticed. By bringing these insights to light, the Penn State team has opened a vital dialogue about responsible AI design that prioritizes both enjoyment and fundamental data protections.
Subject of Research: Effects of interactivity on users’ privacy disclosure behavior in mobile apps and AI chatbots
Article Title: Are you fooled by interactivity? The effects of interactivity on privacy disclosure
News Publication Date: 24-Aug-2025
Web References:
- DOI: https://doi.org/10.1080/0144929X.2025.2545312
- Penn State Center for Socially Responsible Artificial Intelligence
References: Behaviour & Information Technology
Keywords: Generative AI, Artificial intelligence, Communications, Mass media, Social media, Smartphones, Behavioral psychology, Risk aversion