In the face of an escalating climate crisis, the quest for effective communication strategies that can bridge the divide between climate science and public skepticism has never been more urgent. Emerging at this nexus is the potential of large language models (LLMs), sophisticated artificial intelligence systems capable of generating human-like text and engaging in meaningful conversations, to influence public attitudes toward climate change. Recent research spearheaded by Hornsey and colleagues delves into this promising frontier, investigating whether dialogue with generative AI like ChatGPT can reduce climate skepticism and foster pro-environmental intentions.
Large language models represent a radical shift in how information is disseminated and consumed. Unlike traditional media or static informational sources, LLMs offer interactive, instantly accessible, and multilingual communication tailored to individual users. Their capacity to personalize complex scientific content in digestible forms holds great promise for enhancing climate literacy, a crucial step toward galvanizing collective action. However, while the conceptual advantages of LLMs have been widely touted, empirical evidence supporting their efficacy in altering climate-related attitudes has remained scant until now.
Hornsey et al. designed two comprehensive studies involving nearly 1,300 participants in total, specifically targeting individuals identified as climate skeptics. These studies employed short dialogues with ChatGPT to gauge whether such AI-facilitated conversations could reduce skepticism and bolster intentions to engage in environmentally friendly behaviors. The implications of these findings resonate deeply within the broader discourse on climate communication, where bridging ideological divides has long proven challenging.
The first study enrolled 949 climate skeptics who participated in brief exchanges spanning three conversational turns with ChatGPT. Remarkably, those interactions catalyzed small but statistically significant increases in participants’ intentions to take pro-environmental action. Additionally, skeptics exhibited a measurable decrease in confidence in their initial skeptical views following the dialogue. These outcomes suggest that even limited interaction with AI-driven conversational agents can subtly shift entrenched perceptions of climate issues.
To validate and extend these findings, the researchers conducted a second study with 333 skeptics, replicating the conversational paradigm with ChatGPT. Consistent with the first experiment, participants’ climate skepticism waned slightly, and their support for pro-climate policies registered a small uptick. This replication bolsters the view that LLMs have a tangible, if limited, capacity to sway skeptical audiences, though the modest effect sizes invite further scrutiny.
Importantly, both studies uncovered a notable limitation: the longevity of these attitudinal shifts remains uncertain. Follow-up assessments revealed that the initial reductions in skepticism and enhancements in environmental intentions decayed over time. This attenuation points to a common challenge in behavior change interventions: transient gains may not translate into lasting transformation without sustained engagement or reinforcement.
Moreover, the researchers explored whether extending the conversation length with ChatGPT—from three to six exchanges—would intensify the positive effects. Contrary to what one might expect, doubling the dialogue rounds did not amplify the benefits. This finding signals that simply increasing the quantity of AI-facilitated interaction might not enhance persuasion and suggests a complex interplay between message content, user engagement, and cognitive processing in such dialogues.
The mixed nature of these results reflects broader tensions in the use of AI tools for social influence. On one hand, LLMs offer unprecedented abilities to tailor climate messages and counter misinformation at scale, particularly in multilingual contexts where scientific communication is often lacking. On the other, their efficacy is bounded by users’ pre-existing beliefs, cognitive biases, and the transient nature of attitudinal changes elicited through brief conversations.
Technically, the studies harnessed state-of-the-art generative AI systems trained on vast text corpora, enabling them to navigate complex climate science topics and respond empathetically to skepticism. ChatGPT in particular is built on transformer-based neural networks that track conversational context, a necessary feature for addressing the nuanced concerns of skeptics without triggering defensive reactions. The model’s ability to maintain coherence and relevance across dialogue turns was critical to fostering a credible and engaging user experience.
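To make the dialogue mechanics concrete, the sketch below shows how a short multi-turn exchange of this kind could be run programmatically. It is an illustration only, not the authors’ experimental protocol: the use of the OpenAI Python SDK, the model name, the system prompt, and the sample skeptic turns are all assumptions made for this example.

```python
# Minimal sketch of a three-turn climate dialogue, assuming the OpenAI
# Python SDK (pip install openai). Illustrative only; NOT the protocol
# used by Hornsey et al. Model name and prompts are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Running transcript: each user turn and model reply is appended so the
# model retains conversational context across turns.
messages = [
    {"role": "system",
     "content": ("You are a respectful assistant discussing climate science. "
                 "Acknowledge the user's concerns before presenting evidence.")}
]

user_turns = [
    "The climate has always changed. Why is this time different?",
    "But haven't climate models been wrong before?",
    "Even if it's real, what difference would my actions make?",
]

for turn in user_turns:
    messages.append({"role": "user", "content": turn})
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; any chat-capable model would do
        messages=messages,
    )
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    print(f"User: {turn}\nAI: {reply}\n")
```

Carrying the full message history across turns, rather than sending each question in isolation, is what gives an exchange the coherence and relevance the studies describe.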
The research also underlines a broader psychological dynamic: persuasion through dialogue often involves subtle shifts rather than overt conversions. The modest changes observed underscore that climate skepticism is a deeply entrenched mindset, influenced by identity, ideology, and social networks. AI-facilitated conversations appear to chip away at these attitudes incrementally, potentially serving as complementary tools alongside broader educational and policy initiatives rather than standalone solutions.
Given the rapid evolution of LLM capabilities and their expanding applications, the studies’ findings serve as a cautious call for evidence-based integration of generative AI into climate communication frameworks. Practitioners must weigh the limitations and mixed outcomes highlighted here, emphasizing responsible deployment, transparency, and ethical engagement with users. Overhyping AI’s capacity to “solve” skepticism risks eroding public trust when inflated expectations go unmet.
Concurrently, the research opens avenues for refining AI dialogue strategies. Future development might explore more adaptive, longer-term conversational agents capable of nurturing sustained reflection and engagement. Incorporating behavioral science insights into prompt engineering, emotional resonance, and trust building could amplify impact. Personalization tailored not merely to language and comprehension but to psychological profiles could unlock deeper attitudinal change.
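As a loose illustration of what such behaviorally informed prompt engineering might look like, the sketch below assembles a system prompt from a list of conversational principles. Both the principles and the build_system_prompt helper are hypothetical, drawn from general behavioral-science practice rather than from the study itself.

```python
# Hypothetical sketch of a behaviorally informed system prompt for a
# climate dialogue agent. The framing techniques named here (reflective
# listening, autonomy affirmation, descriptive social norms) come from
# general behavioral-science practice, not from Hornsey et al.
PERSUASION_PRINCIPLES = [
    "Open by reflecting the user's stated concern back in neutral terms.",
    "Affirm the user's autonomy; never label their view as wrong.",
    "Cite one concrete, verifiable piece of evidence per reply.",
    "Where relevant, note that most people support practical climate "
    "action (a descriptive social norm).",
    "Close each reply with an open question inviting reflection.",
]

def build_system_prompt(reading_level: str = "general") -> str:
    """Assemble a system prompt embedding the principles above.

    reading_level is a hypothetical personalization hook, adapting
    vocabulary to the user alongside any psychological tailoring.
    """
    rules = "\n".join(f"- {p}" for p in PERSUASION_PRINCIPLES)
    return (
        "You are a climate communication assistant.\n"
        f"Write for a {reading_level} audience.\n"
        "Follow these conversational rules:\n" + rules
    )

print(build_system_prompt())
```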
Furthermore, these experiments underscore the importance of interdisciplinary collaboration—melding AI technology, climate science expertise, and social psychology—to craft interventions that are not only technically sophisticated but socially grounded. As climate communication increasingly intersects with digital media ecosystems, harnessing the strengths of LLMs responsibly could meaningfully enrich public discourse.
While LLMs like ChatGPT show promise as scalable, interactive tools to support climate literacy and challenge skepticism, this pioneering research highlights that their influence is neither universal nor enduring without continued engagement and complementary approaches. The path toward widespread attitude transformation remains complex and multifaceted, with generative AI positioned as an intriguing but incremental actor in this intricate social process.
Ultimately, Hornsey and colleagues contribute a foundational empirical layer to inform how emergent AI technologies might be harnessed in the global effort to address climate change. Their nuanced findings encourage both optimism about new tools and humility regarding their limitations, advocating for strategic, evidence-driven use of AI in climate communication to gradually bridge divides and foster environmental stewardship.
Subject of Research: The efficacy of large language models (LLMs), specifically ChatGPT, in reducing climate skepticism and influencing pro-environmental intentions through AI-facilitated dialogue.
Article Title: The promise and limitations of using GenAI to reduce climate scepticism.
Article References:
Hornsey, M.J., Pearson, S., Bretter, C. et al. The promise and limitations of using GenAI to reduce climate scepticism. Nat. Clim. Chang. (2025). https://doi.org/10.1038/s41558-025-02425-8