The rapid advancement and widespread adoption of artificial intelligence (AI) in generating scientific and science-related content, particularly on social media platforms, present an unprecedented challenge to the integrity and credibility of public information. As AI systems become increasingly capable of producing sophisticated textual content, concerns are intensifying that misleading or false scientific information may circulate in forms users find difficult to distinguish from verified facts. This phenomenon could significantly shape public opinion and influence critical decision-making in areas spanning health, technology, and beyond.
In response to these concerns, regulatory bodies and digital platforms are taking steps to enforce transparency by mandating the clear disclosure of AI-generated or AI-synthesized content. These disclosure labels aim to inform the public when content originates from AI systems, thereby ostensibly empowering users to better judge the authenticity and reliability of the information they encounter. However, provocative new research published in the Journal of Science Communication reveals that such transparency measures may inadvertently undermine their intended purpose, potentially diminishing trust in accurate scientific information while simultaneously enhancing the perceived credibility of false claims.
This unexpected effect, dubbed the “truth–falsity crossover effect,” emerged from a rigorous experimental study conducted by Teng Lin, a doctoral candidate, and Yiqing Zhang, a master’s student, both at the School of Journalism and Communication of the University of Chinese Academy of Social Sciences in Beijing. Their investigation focused specifically on social media posts relaying science-related information, making their findings highly relevant to the platforms where much science communication now occurs.
The research design involved recruiting 433 participants via the Credamo platform in early 2024. Participants were exposed to four distinct categories of social media-style posts: accurate information with and without an AI-generation disclosure label, and false information with and without the same label. The texts were produced with the GPT-4 language model by reworking items originally published by China’s Science Rumour Debunking Platform, and the researchers fact-checked each post to confirm its status as true or false before participant evaluation. Participants then rated the perceived credibility of each post on a five-point scale and additionally reported their attitudes toward AI and their engagement with the topic in question.
The study’s findings defy conventional expectations about transparency and trust. Rather than uniformly reducing acceptance of misinformation, the AI disclosure labels distorted credibility perceptions in a paradoxical manner. When AI labels accompanied truthful scientific posts, participants rated those messages as less credible, a penalty imposed on accurate AI-attributed content. By contrast, false posts bearing the AI disclosure were judged more credible than those without it. This asymmetry in perception highlights an alarming vulnerability in the current approach to AI content disclosure.
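To make the pattern concrete, the sketch below illustrates, with entirely hypothetical numbers rather than the study’s actual data, what such a crossover interaction looks like in a 2×2 design crossing veracity (true vs. false) with labeling (AI disclosure vs. none): the label’s effect on mean credibility carries opposite signs for true and false posts.

```python
# Minimal illustration of a truth-falsity crossover in a 2x2 design.
# All ratings are hypothetical and exist only to show the pattern;
# they are not drawn from Lin and Zhang's data.
import statistics

# Hypothetical mean credibility ratings on a five-point scale,
# keyed by (veracity, label condition).
ratings = {
    ("true",  "unlabeled"): [4.1, 3.9, 4.0, 4.2],
    ("true",  "labeled"):   [3.4, 3.6, 3.3, 3.5],  # label depresses credibility of truth
    ("false", "unlabeled"): [2.2, 2.4, 2.3, 2.1],
    ("false", "labeled"):   [2.8, 2.9, 2.7, 3.0],  # label inflates credibility of falsehood
}

means = {cell: statistics.mean(vals) for cell, vals in ratings.items()}

# Effect of adding the AI label within each veracity condition.
label_effect_true = means[("true", "labeled")] - means[("true", "unlabeled")]
label_effect_false = means[("false", "labeled")] - means[("false", "unlabeled")]

print(f"Label effect on true posts:  {label_effect_true:+.2f}")   # negative
print(f"Label effect on false posts: {label_effect_false:+.2f}")  # positive

# Opposite signs in the two simple effects are the crossover signature.
print("Crossover pattern:", label_effect_true < 0 < label_effect_false)
```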
This “truth–falsity crossover effect” indicates that labeling content as AI-generated does not straightforwardly help users separate fact from fiction. Instead, it appears to redistribute trust perversely, devaluing true statements while lending undue legitimacy to falsehoods. The psychological processes behind this effect may involve the public’s mixed feelings about AI technology, expectations of AI capabilities, and skepticism about the veracity of machine-produced material.
Further analysis within the study reveals that individual predispositions toward AI critically shape these credibility judgments. Participants harboring more negative attitudes toward AI showed intensified distrust of true information flagged as AI-generated. Intriguingly, however, this skepticism did not entirely eliminate the enhanced credibility granted to false information carrying AI disclosures. The attenuation of that credibility boost varied across scientific topics, implying an intricate interplay between content type, personal biases, and disclosure signals.
These findings underscore that “algorithm aversion,” or the tendency to distrust automated systems, does not result in a simple wholesale rejection of AI-created content. Instead, it triggers a nuanced and asymmetric cognitive reaction that can paradoxically empower misinformation. This revelation calls into question the efficacy of blanket labeling policies and challenges policymakers to reconsider their assumptions about public responses to AI disclosures.
The implications of this research are profound for regulators and digital platform developers who aim to safeguard the public from the deleterious effects of misinformation. The study’s authors advocate for a more sophisticated and layered approach to disclosure strategies rather than simplistic labels that merely notify audiences of AI authorship. One promising direction is the implementation of a dual-label system that not only acknowledges the AI origin of content but also communicates whether the information has undergone independent verification or includes clear risk warnings. This nuanced labeling could provide users with richer contextual cues about the reliability and potential hazards associated with the content.
Moreover, Lin and Zhang suggest adopting a graded or categorical labeling framework tailored to the inherent risk profile of the scientific information presented. For instance, AI-generated content relating to critical sectors such as medicine and health could carry stringent warnings, reflecting its potential to harm public health outcomes if incorrect. In contrast, topics such as emerging technologies or general scientific advances might warrant lighter disclosure requirements because the associated risk is lower. This tiered approach recognizes the heterogeneous nature of scientific communication and better aligns transparency efforts with real-world consequences.
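To illustrate how such a dual-label, risk-tiered scheme might be represented in practice, the Python sketch below models a disclosure label as structured metadata. It is purely a thought experiment: the field names, topic categories, tier assignments, and warning wording are all assumptions for illustration, not anything proposed verbatim by the authors or adopted by any platform.

```python
# Hypothetical model of a dual-label, risk-tiered disclosure scheme.
# Every name, category, and tier here is an illustrative assumption.
from dataclasses import dataclass

# Assumed mapping from topic category to risk tier, echoing the idea that
# health content warrants stricter warnings than general science news.
RISK_TIERS = {
    "medicine_health": "high",
    "emerging_technology": "low",
    "general_science": "low",
}

@dataclass
class DisclosureLabel:
    ai_generated: bool  # first layer: discloses AI origin
    verified: bool      # second layer: independent verification status
    risk_tier: str      # graded component keyed to topic risk

    def render(self) -> str:
        """Compose the user-facing label text from the three components."""
        parts = ["AI-generated"] if self.ai_generated else []
        parts.append("independently verified" if self.verified
                     else "not independently verified")
        if self.risk_tier == "high":
            parts.append("high-risk topic: consult authoritative sources")
        return " | ".join(parts)

def label_for(topic: str, verified: bool) -> DisclosureLabel:
    """Build a label for AI-generated content on the given topic."""
    return DisclosureLabel(ai_generated=True, verified=verified,
                           risk_tier=RISK_TIERS.get(topic, "low"))

print(label_for("medicine_health", verified=False).render())
# AI-generated | not independently verified | high-risk topic: consult authoritative sources
```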
These recommendations highlight the necessity for rigorous empirical evaluation of any proposed disclosure policies before widespread deployment. The researchers emphasize that transparency interventions designed without careful testing may unintentionally erode trust in valid scientific facts while amplifying misinformation, thereby compromising the very objectives they seek to achieve. This work serves as a clarion call for multidisciplinary collaborations among social scientists, communication experts, and technologists to refine disclosure methodologies that effectively promote informed public engagement.
In sum, as AI increasingly permeates the science communication ecosystem, understanding how disclosure practices influence public credibility assessments is crucial. This study’s unexpected findings disrupt the assumption that labeling AI-generated content unequivocally enhances user discernment. Instead, it reveals a more intricate landscape where transparency alone may be insufficient or even counterproductive. Consequently, this pioneering research compels a reevaluation of current regulatory paradigms and encourages the development of more sophisticated, context-aware solutions that mitigate misinformation while bolstering public trust in genuine scientific knowledge.
Subject of Research: People
Article Title: Visible Sources and Invisible Risks: Exploring the Impact of AI Disclosure on Perceived Credibility of AI-Generated Content
News Publication Date: 9-Mar-2026
Web References: https://doi.org/10.22323/358020260107085703
Image Credits: Federica Sgorbissa – SISSA Medialab
Keywords: AI, Science Communication, Credibility, Misinformation, Disclosure, Social Media, Algorithm Aversion, Cognitive Psychology

