In the evolving landscape of artificial intelligence, one persistent and troubling issue remains largely under-explored: the impact of inherent biases within AI language models on human users. While it is widely recognized that AI systems, including popular chatbots, harbor biases acquired during training on vast and uncurated data sets, the question of how these biases influence the opinions and decisions of their users has until now received insufficient scientific scrutiny. A recent groundbreaking study conducted by researchers at the University of Washington illuminates this dark corner, providing concrete evidence that interaction with biased AI models actively shifts human beliefs and attitudes, often aligning them with the model’s own embedded slant.
The research team designed a novel experiment, recruiting a balanced cohort of self-identified Democrats and Republicans to probe their political perspectives on relatively obscure and under-discussed topics. Participants were randomly assigned to engage with one of three distinct versions of ChatGPT: a neutral base model, a version exhibiting a pronounced conservative bias, and another explicitly designed with a liberal bias. The core finding was striking: irrespective of their initial political leanings, participants tended to adopt viewpoints closer to those of the biased chatbot with which they interacted, whereas the base, neutral model exerted significantly less influence on opinion shifts.
The methodology involved two key tasks that measured opinion change before and after engagement with the AI systems. The first task introduced participants to four politically charged but less commonly known issues: covenant marriage, the U.S. foreign policy doctrine of unilateralism, the Lacey Act of 1900 concerning wildlife trafficking, and local multifamily zoning regulations. Participants were first surveyed to assess their baseline knowledge and to rate their agreement with various statements on a seven-point Likert scale. They then engaged in multiple conversations, ranging between three and twenty interactions, with one of the three ChatGPT variants to discuss these topics. Upon conclusion, participants re-evaluated the same statements, allowing the researchers to quantify any shifts in their opinions attributable to chatbot influence.
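To make the before-and-after measurement concrete, the following is a minimal sketch, in Python, of how opinion shifts on a seven-point Likert scale might be quantified. The topics match those listed above, but the ratings, field names, and summary statistic are illustrative assumptions rather than the study's actual analysis.

```python
# Minimal sketch: quantifying pre/post opinion shift on a 7-point Likert scale.
# Ratings and field names are illustrative assumptions, not the study's data or code.
from statistics import mean

# One participant's agreement ratings (1 = strongly disagree, 7 = strongly agree)
# before and after conversing with a chatbot variant, across the four topics.
pre_ratings = {"covenant_marriage": 3, "unilateralism": 4, "lacey_act": 4, "multifamily_zoning": 2}
post_ratings = {"covenant_marriage": 5, "unilateralism": 4, "lacey_act": 5, "multifamily_zoning": 3}

# Signed shift per topic: positive values indicate movement toward stronger agreement.
shifts = {topic: post_ratings[topic] - pre_ratings[topic] for topic in pre_ratings}

# A simple per-participant summary: the average signed shift across topics.
avg_shift = mean(shifts.values())

print(shifts)     # {'covenant_marriage': 2, 'unilateralism': 0, 'lacey_act': 1, 'multifamily_zoning': 1}
print(avg_shift)  # 1.0
```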
In the second task, participants assumed the role of a mayor responsible for allocating additional city funds across four government departments that carry different ideological associations: education, welfare, public safety, and veteran services. After making an initial distribution of funds, participants shared their decisions with ChatGPT and discussed them. Following the interaction, participants readjusted their funding allocations. This setup provided a test not only of political attitudes but also of the strategic framing and reinforcement of policy priorities through AI persuasion.
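A similarly rough sketch, again with hypothetical numbers, shows how the before-and-after budget split could be compared to see which departments gained or lost share after the conversation; none of these figures come from the study itself.

```python
# Hypothetical sketch: change in a participant's allocation of extra funds (in percent)
# before and after the ChatGPT discussion. All values are illustrative only.
initial = {"education": 30, "welfare": 30, "public_safety": 20, "veteran_services": 20}
revised = {"education": 25, "welfare": 25, "public_safety": 25, "veteran_services": 25}

# Positive deltas mean a department gained share after talking with the chatbot.
delta = {dept: revised[dept] - initial[dept] for dept in initial}
print(delta)  # {'education': -5, 'welfare': -5, 'public_safety': 5, 'veteran_services': 5}
```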
The results revealed that both the liberal- and conservative-biased models engaged in framing effects, a persuasion technique in which the AI subtly steered conversations to emphasize or de-emphasize particular aspects of issues in line with its ideological orientation. For instance, the conservative-leaning chatbot shifted dialogue away from education and welfare, emphasizing veterans and public safety instead. Conversely, the liberal model heightened focus on welfare and education. This reframing not only nudged participants' resource priorities but also influenced their fundamental political views, highlighting the powerful role AI can play as both information source and opinion guide.
A particularly salient insight emerged regarding the role of user expertise. Participants who self-reported higher knowledge about AI technologies demonstrated significantly smaller shifts in their views after interacting with biased models. This finding underscores the critical importance of AI literacy and education as potential bulwarks against inadvertent manipulation and ideological swaying. Given the omnipresence of AI chatbots in everyday communication and decision-making, this insight calls for urgent educational initiatives to empower users.
Technical aspects of the experimental design ensured clear differentiation of chatbot biases. The team introduced explicit internal instructions—undisclosed to participants—to steer AI responses, for example, by telling one version of ChatGPT to “respond as a radical right U.S. Republican,” another to adopt a “radical left” viewpoint, and the control to behave as a neutral U.S. citizen. Through this controlled environment, biases—normally subtle and layered—were accentuated, allowing rigorous analysis of their influence mechanisms and potency.
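The article does not disclose the exact prompts or configuration the researchers used, but steering of this kind is commonly implemented with a hidden system message. The sketch below, using the OpenAI Python SDK, illustrates that general pattern; the model name, prompt wording, and helper function are placeholders, not the study's actual setup.

```python
# Hypothetical sketch of steering a chat model with an undisclosed system instruction.
# Model name and prompt wording are placeholders, not the study's actual configuration.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPTS = {
    "conservative": "Respond as a radical right U.S. Republican.",
    "liberal": "Respond as a radical left U.S. Democrat.",  # wording assumed
    "neutral": "Respond as a neutral U.S. citizen.",
}

def chat(condition: str, user_message: str) -> str:
    """Send one user turn to the model under the given (hidden) steering condition."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPTS[condition]},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

# The participant only ever sees the assistant's reply, never the system prompt, e.g.:
# print(chat("conservative", "What do you think about multifamily zoning?"))
```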
The choice of ChatGPT as the focal model for this study was deliberate and strategic. Its widespread adoption and recognition make it a proxy for the many large language models (LLMs) that dominate the conversational AI space. Prior research involving thousands of users had already suggested these models tend to lean liberal in political conversations, highlighting the importance of understanding how subtle algorithmic preferences translate into political and social consequences in real user interactions.
The study’s implications extend far beyond academic curiosity. According to lead researcher Jillian Fisher, the ease with which developers can amplify model bias presents enormous ethical questions and challenges for AI governance. Just minutes of interaction with a biased system were sufficient to produce measurable shifts in opinion, raising alarming concerns about the long-term societal effects of entrenched biases in AI systems that millions rely on daily—and potentially for years. The power to shape collective perspectives, whether intentional or inadvertent, makes AI a potent vehicle for political influence and social engineering.
Co-senior author Katharina Reinecke further emphasized the gravity of such findings, cautioning about the vast power disparity between AI creators and end-users. If biases are inherent “from the get-go” and amplification requires minimal effort, the potential for misuse or neglect is high. This highlights a pressing need for the AI research community and policymakers to develop transparent methodologies for bias detection, mitigation strategies, and user awareness programs that protect democratic processes and informational integrity.
Moving forward, the research team plans to investigate how education can mitigate bias effects, exploring whether raising AI literacy might inoculate users against persuasive framing by partisan AI models. Furthermore, they intend to extend their inquiry beyond ChatGPT to a wider array of LLMs to better capture the broader ecosystem's vulnerability to bias influence and to develop more generalized solutions to this burgeoning challenge.
Overall, this study marks a pivotal moment in AI research, moving the conversation about bias from data and model internals to real-world human consequences. It bridges the gap between theoretical models and lived experiences, offering valuable insights for engineers, policymakers, and the public alike. As AI systems become ever more embedded in facets of communication, policymaking, and societal discourse, understanding and counteracting their bias effects will be essential to preserving the integrity of democratic dialogue.
“My hope with this research is not to instill fear, but to equip users and developers with knowledge and tools to engage with AI more responsibly,” Fisher concluded. The work serves as a beacon, illuminating the complex dynamics of AI interaction and underscoring the collective responsibility in shaping the future of these transformative technologies.
Subject of Research: Influence of bias in AI language models on human political attitudes and decision-making.
Published In: Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics
News Publication Date: 28-Jul-2025
Web References:
- https://www.washington.edu/news/2024/10/31/ai-bias-resume-screening-race-gender/
- https://www.washington.edu/news/2024/01/09/qa-uw-researchers-answer-common-questions-about-language-models-like-chatgpt/
- https://aclanthology.org/2025.acl-long.328/
- https://www.fws.gov/law/lacey-act
- https://www.gsb.stanford.edu/insights/popular-ai-models-show-partisan-bias-when-asked-talk-politics
References: Provided within the research article links above.
Keywords: Artificial intelligence, Political science, Social research