In the digital age, the rapid spread of misinformation on social media has posed significant challenges around the world. A compelling theoretical framework, known as the inattention-based theory, has emerged to explain why individuals often share false or misleading content online. The theory was originally proposed and validated in Western contexts, particularly the United States, and recent research has sought to test its applicability and robustness in very different cultural landscapes. A study conducted in China now provides critical insights, offering both a replication of prior findings and novel contributions to understanding misinformation dynamics in a rapidly digitizing society.
The essence of the inattention-based theory is elegant yet profound: people frequently share misinformation not necessarily because they intend to deceive or endorse falsehoods, but because they pay insufficient attention to the accuracy of the information when deciding whether to share it. This lapse in attention creates a fertile ground for misinformation to flourish, as users’ critical engagement with content is compromised. Studies from the United States have demonstrated that subtly prompting users to consider the accuracy of information before sharing can dramatically improve their discernment, thereby reducing the spread of misinformation.
In an ambitious replication effort, researchers applied this conceptual model to misinformation about COVID-19 circulating on Chinese social media platforms. The study sought to clarify whether the patterns observed in the US context hold true in China, a country with unique social, cultural, and digital milieus. The experiments revealed a striking disconnect between participants’ judgments about the truthfulness of information and their intentions to share it. Specifically, when individuals were simply asked whether they would share a given piece of information, their ability to discriminate between true and false content diminished substantially compared to scenarios where they were explicitly asked to evaluate accuracy.
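In studies of this kind, the ability to discriminate between true and false content is typically quantified as "sharing discernment": the gap between how often participants endorse true items and how often they endorse false ones. The sketch below illustrates that metric; the numbers are purely illustrative placeholders, not values reported in the study.

```python
def discernment(true_rate: float, false_rate: float) -> float:
    """Discernment: gap between endorsement rates for true and false items.

    A larger value means participants separate truth from falsehood better;
    zero means they treat true and false items identically.
    """
    return true_rate - false_rate

# Illustrative numbers only (hypothetical, not from the study):
# participants asked "Is this accurate?" vs. "Would you share this?"
accuracy_condition = discernment(true_rate=0.70, false_rate=0.30)
sharing_condition = discernment(true_rate=0.55, false_rate=0.45)

# The pattern described in the article corresponds to a smaller gap
# in the sharing condition than in the accuracy condition.
print(round(accuracy_condition, 2), round(sharing_condition, 2))
```

The study's finding that discernment shrinks when people are asked about sharing rather than accuracy corresponds, in this framing, to `sharing_condition` being markedly smaller than `accuracy_condition`.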
Building on this foundation, the researchers introduced an accuracy-prompt intervention, akin to prior US studies, whereby participants were asked to judge the accuracy of unrelated information before being queried about their sharing intentions regarding COVID-19 content. This intervention proved effective in improving accuracy discernment during sharing decisions, signaling that drawing attention back to truthfulness can influence online behaviors even in vastly different cultural contexts. This replication solidifies the external validity of the inattention-based theory beyond Western borders, suggesting it taps into fundamental cognitive processes shared across cultures.
Nevertheless, the effect sizes observed revealed important cross-cultural distinctions. While inattention accounted for roughly 50% of misinformation sharing in US research on political misinformation, only about 37% of COVID-19 misinformation sharing in China could be attributed to the same factor. Correspondingly, the accuracy-prompt intervention improved sharing discernment by a factor of 1.2 in China, compared with a 2.8-fold improvement in the United States. These discrepancies invite reflection on underlying causes rooted in cultural and technological context.
One plausible explanation lies in differences in digital literacy and internet experience. The United States, with a longer history of widespread internet and social media adoption, encompasses a population generally more adept at navigating digital informational ecosystems. This higher digital literacy likely fortifies individuals’ ability to accurately evaluate news content. Meanwhile, China’s rapid rise as a digital powerhouse may not yet have fully translated into uniformly elevated critical media skills across its vast population, or the types of misinformation prevalent may differ in nature and presentation.
Furthermore, the smaller proportion of misinformation sharing caused by inattention in China indicates that other factors—potentially including political, social, or psychological dimensions—may play a more prominent role in driving misinformation dissemination there. In practical terms, this means that accuracy-focused interventions may need to be complemented by additional strategies tailored to the specific cultural and media environment in China to maximize effectiveness.
From a theoretical perspective, confirming the applicability of the inattention-based theory in a non-Western setting enriches the global understanding of misinformation dynamics. It challenges the assumption that misinformation sharing is predominantly intentional or malicious and instead highlights that a significant portion of such behavior arises through unintentional lapses in attention. This shift in perspective necessitates the development of interventions targeting cognitive factors, rather than solely focusing on punitive or regulatory measures.
On a pragmatic level, the study offers a pathway for combating misinformation in China that does not rely heavily on traditional but resource-intensive approaches such as content deletion, account suspensions, or exhaustive fact-checking. These conventional strategies not only demand extensive effort from governments and technology companies but also face limitations in scalability given the sheer magnitude of content generated daily on platforms like Weibo and WeChat.
In contrast, accuracy-prompt interventions harness users’ own evaluative capacities. By subtly directing individuals’ attention toward truthfulness—perhaps through periodic prompts asking users to rate the accuracy of random content—social media platforms can encourage a culture of vigilance and conscientious sharing. Such interventions exploit cognitive mechanisms to amplify accuracy considerations, potentially curbing misinformation spread in a user-driven, distributed manner.
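One way a platform could operationalize "periodic prompts" is a simple scheduling rule: after some minimum number of feed items since the last prompt, occasionally interleave an accuracy-rating question. The following is a hypothetical sketch of such a rule, not a description of any real platform's implementation; the function name, parameters, and rates are all assumptions made for illustration.

```python
import random

def should_show_accuracy_prompt(posts_since_prompt: int,
                                prompt_rate: float = 0.05,
                                min_gap: int = 10) -> bool:
    """Hypothetical scheduling rule for accuracy prompts in a feed.

    Never prompt until at least `min_gap` posts have passed since the
    last prompt; after that, prompt with probability `prompt_rate` per
    post, so prompts stay infrequent and unpredictable.
    """
    if posts_since_prompt < min_gap:
        return False
    return random.random() < prompt_rate
```

Keeping prompts rare and irregular matters for the mechanism the article describes: the intervention works by momentarily redirecting attention to accuracy, and a prompt shown too often would likely become routine and be ignored.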
It is important to acknowledge that accuracy prompts are not a silver bullet against misinformation. Their efficacy varies by sociocultural context, and they should be integrated into a comprehensive, multifaceted mitigation ecosystem, ideally one combining algorithmic detection, psychological inoculation techniques that preempt exposure to falsehoods, and human-AI collaborative tools that elevate users' judgment.
Despite the promising findings, the researchers underscore several limitations that invite further inquiry. The sampling methodology, relying on an online research platform, may fall short of capturing the full diversity of Chinese social media users, tempering the generalizability of conclusions. Field experiments implemented directly on popular platforms could offer stronger ecological validity.
Moreover, the study did not delve into moderating variables such as digital media literacy, which may critically influence the potency of accuracy-based interventions. If users lack foundational media literacy skills, their ability to distinguish truth from falsehood might be inherently constrained, dampening intervention success. This suggests that coupling digital literacy education with accuracy prompts could yield synergistic benefits.
Another open question concerns the longevity of accuracy-prompt effects. The current findings reflect only short-term impacts measured in experimental settings. How repeated exposure to such prompts might recalibrate users’ habitual information processing and build sustained cognitive resistance to misinformation over time remains unexplored. Longitudinal studies are essential to elucidate this.
Additionally, the laboratory context of the study may differ from real-world social media environments where complex social dynamics, varying motivations, and platform affordances influence sharing behaviors. While prior studies in Western countries indicate correlations between experimentally measured sharing intentions and actual behavior, this relationship merits verification within the Chinese context.
The research also concentrates specifically on COVID-19 misinformation and Chinese microblog content. Broader testing across diverse misinformation domains—such as political discourse, health advice outside the pandemic, and emerging technological topics—is needed to establish the generalizability of the inattention framework and intervention efficacy. Furthermore, as misinformation modalities evolve, including increasingly realistic AI-generated content like deepfakes, the adaptability of accuracy-prompt methods to these novel challenges should be a priority for future work.
In sum, this pioneering study advances global efforts to understand and combat misinformation by validating and extending a psychologically grounded theory within China. It highlights the universality of inattention as a driver of erroneous sharing but also illuminates the nuanced cultural variations that must be acknowledged in crafting interventions. By shifting focus toward enhancing individual attention to accuracy, the research charts a promising path forward in the shared quest to preserve information integrity across continents and platforms.
As misinformation continues to threaten the fabric of informed societies worldwide, initiatives leveraging human cognition and scalable technologies could hold the key to fostering more truthful, resilient digital public spheres. The dual imperatives of empirical rigor and cultural sensitivity embodied in this work pave the way for increasingly sophisticated, context-aware strategies—balancing innovation with an appreciation for the complex realities of social media ecosystems.
Subject of Research:
Effects of accuracy-prompt interventions on reducing misinformation sharing on Chinese social media, testing the inattention-based theory of misinformation dissemination.
Article Title:
Can shifting attention to accuracy reduce misinformation on social media? A replication and extension in China.
Article References:
Liu, Z. Can shifting attention to accuracy reduce misinformation on social media? A replication and extension in China. Humanit Soc Sci Commun 12, 833 (2025). https://doi.org/10.1057/s41599-025-05233-9
Image Credits: AI Generated