Social media has transformed the way information spreads, creating vast networks where content—from harmless cat videos to critical news updates—travels at lightning speed. Platforms like Facebook, Instagram, and X (formerly Twitter) have embedded sharing mechanisms such as “like” and “share” buttons that simplify and accelerate the redistribution of content. However, this ease of dissemination is a double-edged sword: alongside valuable information, platforms have become conduits for misinformation and fake news, content that research shows tends to propagate faster and more widely than factual reports due to its often sensational nature.
At the heart of this virality phenomenon are platform algorithms designed to maximize user engagement. These systems prioritize posts that attract high levels of attention and interaction, inadvertently amplifying falsehoods because sensational or misleading posts typically generate more clicks, shares, and comments. As a result, misinformation can rapidly permeate user feeds, swaying opinions and sometimes fueling real-world consequences. Finding effective strategies to curb this spread without hampering legitimate engagement is a central challenge facing social media companies and researchers alike.
Enter an innovative concept proposed by scientists at the University of Copenhagen, outlined in a recent article published in the journal npj Complexity. Their approach is rooted in the idea of introducing “digital friction” during the sharing process, a deliberate slowdown or interruption intended to prompt users to pause and reflect before amplifying content. The thinking is simple yet powerful: if sharing becomes less instantaneous and thoughtless, users may reconsider potentially misleading posts before pressing that share button.
Lead researcher Laura Jahn, a PhD student specializing in computational modeling, explains how the idea was operationalized. She and her colleague, Professor Vincent F. Hendricks, developed a computer simulation to model information flow across social networks similar to X, Bluesky, and Mastodon. The model tested the effects of small interruptions—digital frictions—such as a pop-up message appearing before content can be shared. These frictions represent psychological “speed bumps” that momentarily slow users down, allowing them a brief window to reassess whether sharing is the right choice.
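The article does not reproduce the researchers' model, but a minimal sketch of this kind of friction simulation, under simple illustrative assumptions, might look as follows: each post carries a hidden quality score, sensational (low-quality) posts attract more engagement, and friction uniformly lowers the chance that any given reshare goes through. The function and parameter names (simulate_shares, friction, engagement) are hypothetical and not taken from the paper.

```python
import random

def simulate_shares(n_posts=1000, n_users=500, friction=0.0, seed=0):
    """Toy simulation: each post has a latent quality in [0, 1].
    Low-quality (sensational) posts attract more engagement, so they are
    reshared more often. Friction uniformly scales down every reshare
    decision, modeling a pop-up that makes some users abandon the share."""
    rng = random.Random(seed)
    shared_qualities = []
    for _ in range(n_posts):
        quality = rng.random()                # hidden accuracy of the post
        engagement = 1.0 - 0.5 * quality      # sensational content engages more
        for _ in range(n_users):
            p_share = 0.1 * engagement * (1.0 - friction)
            if rng.random() < p_share:
                shared_qualities.append(quality)
    total = len(shared_qualities)
    avg_quality = sum(shared_qualities) / total if total else 0.0
    return total, avg_quality

for f in (0.0, 0.5):
    total, avg_q = simulate_shares(friction=f)
    print(f"friction={f:.1f}  shares={total}  avg quality={avg_q:.3f}")
```

Because the friction factor here applies to every share decision equally, the total number of shares falls while the average quality of what gets shared stays roughly flat, which is precisely the limitation the researchers describe next.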
Their findings are both encouraging and nuanced. While introducing friction clearly reduces the total volume of shares, it does not guarantee an automatic improvement in the quality or accuracy of the shared content. This subtlety highlights a critical limitation: simply making it harder to share does not mean misinformation will vanish; some users might stop sharing altogether, while others might still share problematic posts despite the roadblock. Thus, friction alone is insufficient to raise the overall quality of the information spreading through these networks.
To overcome this, the researchers expanded their approach by integrating a learning component into the friction mechanism. Instead of a generic pop-up, their model incorporated brief quizzes or informational prompts designed to educate users on the nature of misinformation, including definitions and the platform’s policies for combating fake news. This educational friction encourages users to engage cognitively with the underlying issues before deciding whether to share content. According to Professor Hendricks, these learning interventions stimulate deeper reflection, leading users to become more discerning about the content they propagate.
Combining friction with learning yielded a significant outcome in their simulations: not only did sharing rates drop, but the average quality of content being shared showed marked improvement. Essentially, the researchers demonstrated that an intelligent gatekeeping step can filter out low-quality or misleading information, while retaining—if not enhancing—the circulation of more reliable posts. This dual effect is critical in safeguarding the informational ecosystems of social media without suppressing healthy discourse and user interaction.
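As a hedged sketch rather than the authors' actual mechanism: one way to model the learning component is to let the educational prompt shift a fraction of share decisions from engagement-driven to quality-driven, so that low-quality posts are filtered more aggressively than reliable ones. The learning_weight parameter and the probabilities below are illustrative assumptions, not values from the study.

```python
import random

def simulate_with_learning(n_posts=1000, n_users=500, friction=0.5,
                           learning_weight=0.7, seed=0):
    """Toy extension of the friction sketch: the pop-up also educates.
    With probability `learning_weight`, a user bases the share decision on
    the post's quality rather than on its engagement pull, so friction now
    filters low-quality posts more strongly than high-quality ones."""
    rng = random.Random(seed)
    shared_qualities = []
    for _ in range(n_posts):
        quality = rng.random()
        engagement = 1.0 - 0.5 * quality
        for _ in range(n_users):
            if rng.random() < learning_weight:
                p_share = 0.1 * quality * (1.0 - friction)      # reflective share
            else:
                p_share = 0.1 * engagement * (1.0 - friction)   # habitual share
            if rng.random() < p_share:
                shared_qualities.append(quality)
    total = len(shared_qualities)
    avg_quality = sum(shared_qualities) / total if total else 0.0
    return total, avg_quality

total, avg_q = simulate_with_learning()
print(f"shares={total}  avg quality={avg_q:.3f}")  # fewer shares, higher average quality
```

In this toy setup the reflective branch weights the post's quality directly, so both effects reported in the simulations appear together: overall sharing declines, and the average quality of what still circulates rises.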
Looking forward, the University of Copenhagen team plans to transition from theoretical simulations to real-world applications by conducting field studies. Such studies will involve collaborations with social media platforms or the use of experimental social networks crafted for research. These real-life tests aim to verify whether the benefits seen in computational models translate into actual behavioral changes and reductions in misinformation dissemination on live platforms.
The researchers express hope that their work will inspire tech companies to innovate beyond traditional content moderation, which often struggles with scale and speed. By implementing thoughtful digital frictions combined with educational prompts, platforms may be able to harness user agency and enhance content quality in a manner that complements automated detection systems. This hybrid strategy could prove pivotal in the ongoing battle against misinformation, empowering users to act as informed gatekeepers within their own online communities.
If collaboration with major platforms is unattainable, the team intends to continue exploring these mechanisms through simulated environments designed for social science research. These controlled settings can provide valuable insights into user behavior under different friction parameters and educational strategies, offering a rich resource for iterative improvement of interventions before broader deployment.
This research emerges from the Center for Information and Bubble Studies at the University of Copenhagen, a hub focused on understanding complex information dynamics and social epistemology. By combining computational modeling with psychological insights, this interdisciplinary approach exemplifies how cutting-edge science can tackle societal challenges such as misinformation in the digital age.
In sum, the introduction of small digital frictions—particularly when paired with user education—presents a promising avenue for mitigating the rapid spread of misinformation online. While the challenge is immense and multifaceted, these findings highlight a feasible, user-centered approach that could reshape how social media platforms manage content dissemination in the future, promoting a healthier, more informed online public sphere.
Subject of Research: Computational simulations to reduce misinformation spread on social media through digital friction and learning interventions.
Article Title: A perspective on friction interventions to curb the spread of misinformation
News Publication Date: 3-Nov-2025
References:
University of Copenhagen, Center for Information and Bubble Studies, npj Complexity Journal
Keywords: misinformation, social media, digital friction, computational modeling, behavioral intervention, fake news, information quality, user education, misinformation spread, social networks

