Current AI Risks Are More Pressing Than Apocalyptic Scenarios, Study Finds
In a rapidly evolving technological landscape, artificial intelligence (AI) continues to spark widespread debate about its potential threats and benefits. While some have focused heavily on dystopian visions of AI triggering the extinction of humanity in the distant future, new empirical research challenges the notion that such speculative fears dominate public concern. A comprehensive study conducted by political scientists at the University of Zurich reveals that people are in fact far more apprehensive about immediate and tangible risks posed by AI technologies than about abstract, apocalyptic scenarios. This insight reshapes the narrative around AI risks, emphasizing the need to address current societal challenges alongside ongoing conversations about long-term existential dangers.
The study’s importance stems from a persistent tension in public discourse between sensational, far-future AI risk narratives and the very real, present-day problems that AI systems already cause. Existential risks, scenarios in which AI could hypothetically threaten humanity’s survival, often capture media attention and the public imagination, and critics argue that they overshadow or distract from more pressing issues such as algorithmic bias, misinformation, privacy erosion, and labor market disruption. The University of Zurich study provides the first large-scale, systematic evidence of how the framing of AI risks shapes public perception and concern.
To explore these dynamics, the research team conducted three large-scale online survey experiments involving more than 10,000 participants from the United States and the United Kingdom. Participants were exposed to varying narratives about AI: some read headlines emphasizing catastrophic, long-term threats to humanity; others encountered information focused exclusively on immediate concerns such as social discrimination, misinformation, and workforce impacts; and a control group viewed content highlighting the advantages and potential benefits of AI technologies. By comparing the groups’ responses, the researchers sought to determine whether alarmist framing of existential risks diminishes public engagement with, or awareness of, more proximate problems.
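The paper’s exact survey instruments and statistical models are not reproduced in this article, but the logic of such a three-arm framing experiment can be sketched in a few lines of code. The following Python simulation is purely illustrative: the arm labels, the 1-to-5 concern scale, the sample size, and the assumed effect sizes are hypothetical stand-ins, not the authors’ actual materials or data.

```python
# Illustrative sketch only: simulates a three-arm framing experiment like the
# one described above and compares mean concern ratings across arms. All
# labels, scales, and effect sizes here are assumptions for demonstration.
import random
from statistics import mean, stdev

random.seed(42)

ARMS = ["existential_frame", "immediate_frame", "benefits_frame"]

def clamp(x: float, lo: float = 1.0, hi: float = 5.0) -> float:
    """Keep a simulated rating on the 1-to-5 Likert scale."""
    return max(lo, min(hi, x))

def simulate_participant(arm: str) -> dict:
    """One participant's concern ratings under an assumed response model:
    immediate harms worry people more than existential risk in every arm
    (the study's headline pattern), plus a small bump for the matching frame."""
    immediate = random.gauss(3.8, 0.8) + (0.2 if arm == "immediate_frame" else 0.0)
    existential = random.gauss(2.9, 0.9) + (0.2 if arm == "existential_frame" else 0.0)
    return {"arm": arm, "immediate": clamp(immediate), "existential": clamp(existential)}

# Randomized assignment to one of the three framings, as in the study design.
participants = [simulate_participant(random.choice(ARMS)) for _ in range(9000)]

for arm in ARMS:
    group = [p for p in participants if p["arm"] == arm]
    imm = [p["immediate"] for p in group]
    exi = [p["existential"] for p in group]
    print(f"{arm:17} n={len(group):5} "
          f"immediate={mean(imm):.2f} (sd {stdev(imm):.2f})  "
          f"existential={mean(exi):.2f} (sd {stdev(exi):.2f})")
```

The key design feature, mirrored here by the random assignment step, is that each participant sees exactly one framing, so differences in average concern between arms can be attributed to the framing itself rather than to who chose to read what.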
A striking pattern emerged across the experiments: participants consistently expressed considerably greater concern about present-day AI harms than about distant, theoretical catastrophes. Professor Fabrizio Gilardi of the University of Zurich’s Department of Political Science explains, “Our data highlight that people acknowledge the urgency of current risks like systemic bias encoded within AI algorithms and AI-driven job displacement. Even when confronted with apocalyptic warnings, their worry about these immediate problems remains substantially more pronounced.” This finding counters the criticism that emphasizing speculative future risks distracts from, or undermines efforts to address, the urgent societal problems AI already creates.
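The within-subject version of this comparison, namely whether the same respondents rate immediate harms as more worrying than existential ones, can be sketched as a paired t-test. Again, the data below are simulated and the analysis is a hypothetical illustration; the estimators reported in the published paper may differ.

```python
# Hypothetical within-subject check on simulated data: do the same
# respondents rate immediate AI harms as more worrying than existential
# risk? Values are illustrative, not the study's actual ratings.
import math
import random

random.seed(7)
n = 1_000
# Each pair: (concern about immediate harms, concern about existential risk)
pairs = [(random.gauss(3.8, 0.8), random.gauss(2.9, 0.9)) for _ in range(n)]

diffs = [imm - exi for imm, exi in pairs]
mean_d = sum(diffs) / n
sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
t_stat = mean_d / (sd_d / math.sqrt(n))  # paired t-statistic, df = n - 1
print(f"mean paired difference = {mean_d:.2f}, t = {t_stat:.1f}")
```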
Notably, the study also reveals that people are capable of making nuanced distinctions between abstract, long-term existential threats and concrete, existing harms. Importantly, respondents did not exhibit a binary or mutually exclusive understanding of AI’s risks. Instead, they simultaneously recognized and took seriously the multifaceted challenges AI presents, both today and potentially tomorrow. This suggests that public concern should not be simplified or forced into an “either-or” framing when it comes to AI risk discourse.
The implications for AI ethics, policy, and governance are significant. Discussions of AI risk have often been polarized: some argue for urgent preventive measures against far-future scenarios, while others advocate focusing exclusively on mitigating current harms. The University of Zurich study encourages moving beyond this dichotomy toward a holistic approach that addresses immediate issues, such as bias amplification in automated decision-making systems and the propagation of misinformation, while also investing in long-term research and safeguards against catastrophic risks.
This dual-perspective approach is especially crucial at a time when AI technologies are increasingly embedded in key societal infrastructure. For example, facial recognition systems used in law enforcement can perpetuate racial and gender biases, leading to unjust outcomes. AI-powered social media algorithms can accelerate the spread of false information, influencing public opinion and democratic processes. Meanwhile, automation threatens to disrupt labor markets by rendering certain jobs obsolete, fueling economic and social uncertainty. These urgent challenges underscore the need for regulatory frameworks and oversight mechanisms tailored to present realities.
Moreover, the study’s nuanced findings provide valuable guidance for science communicators, policymakers, and AI developers aiming to engage the public effectively. Sensationalized warnings about an AI apocalypse, while attention-grabbing, should be balanced with clear communication about tangible harms and practical mitigation strategies to maintain public trust and foster constructive dialogue. As co-author Emma Hoes points out, “Public discourse should not be dominated by fear of the unknown but enriched with awareness of the concrete harms AI already causes, as well as the benefits it offers.”
The scientific literature has long speculated about the psychological effects of risk framing and media portrayals on public risk perception, but reliable empirical data on AI risk narratives were scarce until now. This study fills that gap by demonstrating that existential risk narratives do not inherently suppress concern for immediate AI-related problems. Instead, awareness of both categories of risk coexists robustly in the public mind, affirming the capacity of diverse populations for complex risk understanding.
The research team’s methodical approach, based on randomized exposure to different narrative framings, sets a new standard for studying perceptions of emerging technologies. The same methodology can be applied to future investigations of risk communication strategies in other contentious technological domains. As debates unfold around AI-powered innovations such as autonomous vehicles, medical diagnostics, and predictive policing, understanding how framing influences public caution and acceptance will become increasingly important.
While the study’s insights help to recalibrate the conversation on AI risk, they also call attention to the peril of complacency. The unmistakable fact that current AI risks command greater worry compels urgent policy responses and technological interventions. Regulatory bodies must prioritize transparency, accountability, and fairness in AI systems while ensuring that research into existential risks continues to receive due attention and funding.
In summary, the University of Zurich’s research reconciles two often competing perspectives within AI risk discourse. By establishing that attention to future dangers does not crowd out concern about present harms, it underscores the importance of an integrative dialogue that encompasses the full spectrum of AI’s societal impact. Through precision in communication and policymaking, society can better navigate the complex landscape of artificial intelligence, harnessing its promise while safeguarding against its pitfalls.
Subject of Research: People
Article Title: Existential Risk Narratives About Artificial Intelligence Do Not Distract From Its Immediate Harms
News Publication Date: 17-Apr-2025
Web References: https://doi.org/10.1073/pnas.2419055122
References: Hoes, E., & Gilardi, F. (2025). Existential Risk Narratives About Artificial Intelligence Do Not Distract From Its Immediate Harms. Proceedings of the National Academy of Sciences, April 17, 2025. https://doi.org/10.1073/pnas.2419055122
Keywords: Artificial intelligence, Fear, Risk perception, Social psychology, Human social behavior, Society