Partisan bias has been a persistent and pervasive issue in shaping how individuals interpret information, with profound consequences in contexts ranging from public health crises like COVID-19 to complex political events such as Brexit. Social psychologists have long debated the underlying mechanisms of this bias, particularly whether it arises primarily from differential exposure to information or from motivated cognitive processes that distort the evaluation of truth. A groundbreaking study recently published in Psychological Science by Tyler J. Hubeny and colleagues from the University of Texas at Austin sheds new light on this debate by disentangling the contributions of knowledge differences and identity-driven motivation in shaping partisan misperceptions.
The study addresses a fundamental question in social cognition: Does partisan bias stem from the information people consume, or does it originate from an inherent motivation to protect one’s identity by selectively accepting congenial information while rejecting incongruent facts? Previous research has often posited that divergent media ecosystems create distinct knowledge pools aligned with political identities, thereby explaining partisan misinformation. However, Hubeny and his team hypothesized that even in the absence of prior knowledge discrepancies, motivated reasoning—wherein desires and identity influence judgment—could independently foster partisan bias.
To test this hypothesis, the researchers designed a novel experimental paradigm that eliminated the confounding effects of pre-existing political allegiances and knowledge disparities. Instead of classifying participants according to known political affiliations such as Democrat or Republican, the study randomly assigned over 600 U.S. citizens to artificially constructed groups named Team Spain, Team Greece, or No Team. This manipulation involved administering a bogus personality test reminiscent of viral BuzzFeed quizzes, ensuring that group membership was arbitrary and unrelated to any factual knowledge about the countries involved.
After these artificial group assignments, participants were presented with a series of factual statements that favored either Spain or Greece, such as claims about Nobel laureates produced by each country, and were tasked with judging the veracity of each statement. Using signal detection theory, a rigorous analytical framework typically applied in psychophysics to measure an individual's ability to distinguish signal from noise, the researchers quantified two critical parameters: truth sensitivity, the ability to discern true from false information, and acceptance threshold, the tendency to accept or reject information regardless of its accuracy.
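To make these two parameters concrete: under the standard equal-variance Gaussian signal detection model, truth sensitivity (d′) and the acceptance criterion can be estimated from a participant's raw acceptance counts. The sketch below uses entirely hypothetical counts (not data from the study) and the common log-linear correction for extreme rates; the function name and numbers are illustrative only.

```python
from statistics import NormalDist  # standard library; no SciPy required

def sdt_params(hits, misses, false_alarms, correct_rejections):
    """Estimate signal detection parameters from one participant's judgments.

    "Signal" trials are true statements and "noise" trials are false ones,
    so a hit is accepting a true statement and a false alarm is accepting
    a false one.
    """
    z = NormalDist().inv_cdf
    # Log-linear correction keeps rates away from 0 and 1, where z is infinite.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d_prime = z(hit_rate) - z(fa_rate)              # truth sensitivity
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))   # acceptance threshold
    return d_prime, criterion

# Hypothetical counts for statements congenial to a participant's team:
# 18 of 20 true statements accepted, but 8 of 20 false ones also accepted.
d, c = sdt_params(hits=18, misses=2, false_alarms=8, correct_rejections=12)
print(f"d' = {d:.2f}, criterion = {c:.2f}")  # negative criterion = liberal acceptance
```

In this framework, the pattern the study reports would correspond to the criterion shifting downward (more liberal acceptance) for team-congenial statements and upward for uncongenial ones, while d′ remains roughly constant across conditions.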
The results were striking and revealed a clear partisan bias despite the elimination of any prior knowledge difference. Participants consistently applied a lower threshold for accepting statements favorable to their arbitrary team than for accepting statements favoring the other team. Notably, the capacity to distinguish truth from falsehood did not vary significantly between groups, indicating that it was not deficient cognitive discernment but rather differential motivation driving the biased acceptance patterns.
These findings robustly support the motivational explanation of partisan bias in misinformation judgment. They demonstrate that identity protection mechanisms can suffice to skew truth evaluation, independent of the information environment’s influence. This conclusion stands in contrast to the dominant narrative that partisan bias predominantly stems from asymmetric exposure to divergent media sources. Instead, it points to deeper cognitive processes that selectively filter new information through an identity-affirming lens.
The implications for combating misinformation are profound and urgently call for a paradigm shift in intervention strategies. Traditional approaches emphasizing fact-checking and harmonizing knowledge across partisan divides may be necessary but insufficient when motivational biases fundamentally alter the reception of facts. As Hubeny notes, merely “having the facts” does not guarantee consensus or truth adherence when identity-based acceptance thresholds dictate which facts are embraced or dismissed.
Addressing this motivated reasoning requires novel interventions tailored to modify the cognitive and emotional underpinnings of partisan bias. However, the study acknowledges that effective strategies targeting motivated cognitive processes remain underdeveloped. Future research must delve into the psychological mechanisms that foster the willingness to accept falsehoods in service of identity protection. Understanding why individuals are motivated to reject inconvenient truths and embrace congenial misinformation will be pivotal in crafting tools to mitigate the pernicious effects of partisan bias.
An intriguing avenue is investigating whether interventions that recalibrate acceptance thresholds, or diminish identity threats linked to factual corrections, can reduce motivated misinformation acceptance. Methods such as promoting intellectual humility, encouraging perspective-taking, or employing identity-affirming messaging may hold promise. Pinpointing exact cognitive mechanisms—such as selective attention, confirmation bias, or affect-laden reasoning—can guide the development of such techniques.
Moreover, the use of artificially constructed groups in this study provides a compelling model for isolating motivational influences in social cognition. It demonstrates how arbitrary group identity alone suffices to bias information processing, implying that the phenomenon is rooted in basic psychological drives rather than solely in sociopolitical contexts. This has broad ramifications for understanding phenomena like in-group favoritism, out-group animosity, and selective truth acceptance across diverse domains, not just partisan politics.
Given the increasingly fragmented media landscape and the pervasive spread of misinformation, the study's insights are particularly timely. As societies grapple with the polarized reception of scientific, political, and social facts, elucidating the cognitive architecture of partisan bias will be essential to fostering more constructive public discourse. The findings underscore the complex interplay between cognition, motivation, and social identity in shaping judgments, reinforcing that combating misinformation demands multifaceted strategies that go beyond mere informational correction.
In summary, Hubeny and colleagues' research advances a critical frontier in the psychology of misinformation by providing experimental evidence that partisan bias transcends knowledge disparities and arises fundamentally from motivated reasoning. As the social sciences continue to unravel the architecture of belief formation, such work illuminates not only why false information persists in partisan milieus but also the cognitive roots that interventions must target to promote truth resilience in a divided world.
Subject of Research: Partisan bias and misinformation judgment; motivations underlying truth evaluation beyond knowledge differences.
Article Title: Understanding Partisan Bias in Judgments of Misinformation: Identity Protection Versus Differential Knowledge
News Publication Date: 7-Jan-2026
Web References: DOI link
References: Hubeny, T. J., Nahon, L. S., & Gawronski, B. (2026). Understanding partisan bias in judgments of misinformation: Identity protection versus differential knowledge. Psychological Science, 37(1), 43–54.
Keywords: Partisan bias, motivated reasoning, misinformation, social identity, cognitive psychology, signal detection theory, truth sensitivity, acceptance thresholds, political psychology, identity protection mechanisms

