In late 2022, the launch of ChatGPT heralded a new era in artificial intelligence (AI), quickly bringing the technology into widespread public awareness. The rapid adoption of AI tools and systems has sparked extensive debate regarding both their transformative potential and their inherent risks. As AI continues to permeate various facets of daily life, understanding public perceptions has become crucial, given how these attitudes can influence the trajectory of AI development, deployment, and governance. Recent research from the University of Pennsylvania’s Annenberg Public Policy Center (APPC) sheds light on American public opinion about AI science and scientists, revealing nuanced insights into prevailing hopes, anxieties, and the politicization—or relative lack thereof—of AI compared to other scientific domains.
The study, published in PNAS Nexus on June 17, 2025, examines public perceptions through a survey administered to a nationally representative sample of U.S. adults. Drawing on the “Factors Assessing Science’s Self-Presentation” (FASS) rubric, the researchers assessed how the public views AI science in terms of credibility, prudence, unbiasedness, self-correction, and benefit. This framework allows for a fine-grained analysis of trust and skepticism, particularly when perceptions of AI science and scientists are compared with those of climate science and of science in general.
Results indicate that, while AI is widely recognized and discussed, public perceptions of AI scientists are more negative than those of climate scientists or of scientists in general. The core driver of this negativity is the perceived imprudence of AI science. Many respondents expressed concern that AI development may be unleashing unintended consequences, reflecting fears that AI technologies are advancing with insufficient caution. This question of prudence speaks to a broader unease about AI’s unpredictable societal and ethical implications amid accelerating innovation.
Crucially, the research investigated whether these negative views might soften as the technology becomes more familiar. However, survey data collected from 2024 to 2025 revealed that perceptions of AI science and scientists remained largely static, despite AI’s increasing integration into everyday tools and services. This suggests that increased exposure alone does not alleviate public anxiety, underscoring the need for deliberate engagement and transparent communication to build trust in and understanding of complex AI systems.
Perceptions of AI science in the U.S. are notably less polarized by political affiliation than those of other scientific domains, particularly climate science, which has been heavily politicized and embroiled in partisan debate. Historically, Republican confidence in medical and general science declined significantly during and after the COVID-19 pandemic, mirroring the deep partisan cleavages surrounding health policy and climate change. Interestingly, the APPC study found that AI has yet to become a similarly divisive issue along partisan lines. This relative neutrality offers potentially fertile ground for consensus-building around AI governance and policy.
Dror Walter, lead author and associate professor of digital communication at Georgia State University, emphasizes that recognizing and addressing these negative perceptions is essential. He argues that understanding the particular concerns about AI—especially worries about unintended consequences—can guide more effective messaging and communication strategies. Emphasizing transparent and ongoing evaluations of both governmental and self-regulatory efforts could help assuage public fears and foster a regulatory environment that balances innovation with safety.
The research also illuminates the comparative dimensions of scientific self-presentation. AI scientists scored lower on key attributes such as prudence and self-correction, leaving the public to view their work through a lens of caution, if not suspicion. By contrast, climate scientists, despite facing politicized skepticism, were generally seen as more aligned with the principles of careful, evidence-based science. This contrast points to the difficulty of sustaining public trust when pioneering or disruptive fields develop with limited transparency.
AI’s technical complexity and rapid evolution create unique communication hurdles. Much of the AI field involves opaque algorithms, machine learning models that are difficult to interpret, and potential emergent behaviors that defy straightforward prediction. These intrinsic characteristics fuel public concerns about uncontrollable or unforeseen effects, amplifying calls for transparency and accountability in AI research and product deployment. The APPC findings underscore that without addressing these challenges head-on, negative perceptions are unlikely to diminish.
Moreover, the study provides empirical grounding for policymakers, industry leaders, and science communicators to shape the future landscape of AI governance. The relatively low political polarization around AI suggests an opportunity for bipartisan cooperation on regulatory standards, safety protocols, and ethical frameworks. Establishing mechanisms for continuous self-assessment and independent oversight may also help build durable public trust.
The findings also stress the importance of framing AI science in ways that highlight tangible societal benefits, reducing fears rooted in abstract or sensationalized scenarios. By fostering nuanced understanding and depicting AI researchers as prudent and responsible actors, communication strategies can help close the gap between technical realities and public expectations. This alignment is critical as AI technologies increasingly influence economic sectors, healthcare, education, and national security.
Finally, the APPC study serves as a benchmark for ongoing monitoring of public attitudes towards AI science. As AI technologies evolve, future research will need to track how perceptions shift in response to breakthroughs, incidents, regulatory developments, and public discourse. The trajectory of trust—or distrust—in AI science will have profound implications for innovation adoption, regulatory acceptance, and the ethical stewardship of transformative technologies in the coming decades.
Subject of Research: People
Article Title: Public Perceptions of AI Science and Scientists Relatively More Negative but Less Politicized Than General and Climate Science
News Publication Date: 17-Jun-2025
Web References: http://dx.doi.org/10.1093/pnasnexus/pgaf163
References: Walter, D., Ophir, Y., Jamieson, P. E., & Jamieson, K. H. (2025). Public Perceptions of AI Science and Scientists Relatively More Negative but Less Politicized Than General and Climate Science. PNAS Nexus. https://doi.org/10.1093/pnasnexus/pgaf163
Keywords: Artificial intelligence, Scientific community, Technology policy, Regulatory policy, Science policy, Industrial research, Research and development, Public opinion, Social attitudes