A groundbreaking study conducted by researchers at Pompeu Fabra University (UPF) has unveiled compelling evidence of rising ideological polarization and an intensified spread of misleading or biased news content on Facebook. By meticulously analyzing over six million news-related URLs from a diverse pool of 1,231 domains across the United States, the research maps out the complex interplay between user engagement, platform algorithm changes, and ideological segregation between conservatives and liberals on one of the world’s largest social media platforms from 2017 through 2020.
These four pivotal years encapsulate some of the most transformative political and social events in recent history, including the COVID-19 pandemic, the tumultuous 2020 US presidential election culminating in unprecedented political unrest, and the high-stakes 2018 midterm elections, which contested all House seats and a third of Senate seats. Such a politically charged period provided a rich dataset to explore shifts in media consumption patterns, the evolving nature of partisan divides, and the role of the platform’s algorithmic curation in shaping public discourse.
Central to the study’s findings is the observed acceleration in ideological segregation, measured by tracking the divergence in the average ideological orientations of news content consumed by self-identified conservatives and liberals on Facebook. This ideological gap deepened notably alongside an increase in interactions with news pieces originating from domains classified as low-quality, which often propagate false or notably biased information. The study correlates these trends with strategic changes made by Facebook in its content ranking algorithms, changes that emphasized certain engagement metrics over others, fundamentally reshaping how information is prioritized and consumed.
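To make the segregation measure concrete, the following is a minimal sketch of a difference-of-means gap computed over a hypothetical engagement table. The column names, the [-1, 1] ideology scale, and the simple difference-of-means metric are illustrative assumptions, not the paper’s exact formulation.

```python
# A minimal sketch, assuming a hypothetical table of URL engagements with a
# per-domain ideology score in [-1, 1] (liberal to conservative) and each
# engaging user's self-reported leaning. Illustrative only; not the study's
# actual segregation measure.
import pandas as pd

def ideological_gap(df: pd.DataFrame) -> float:
    """Difference between the mean content ideology consumed by
    conservatives and by liberals; larger values mean more segregation."""
    means = df.groupby("user_leaning")["domain_ideology"].mean()
    return means["conservative"] - means["liberal"]

# Toy data: three engagements per group.
engagements = pd.DataFrame({
    "user_leaning": ["liberal", "liberal", "liberal",
                     "conservative", "conservative", "conservative"],
    "domain_ideology": [-0.6, -0.4, -0.1, 0.2, 0.5, 0.7],
})
print(f"ideological gap: {ideological_gap(engagements):.2f}")  # 0.83
```

Tracking such a gap over time, as the study does with far richer data, is what reveals whether segregation is widening.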
In the wake of sweeping algorithmic modifications enacted in 2018 and later in 2020, Facebook’s ranking system shifted its priority from passive metrics, such as likes, to more active engagement actions like shares and comments. The original intention behind this recalibration was to foster “meaningful social interactions,” particularly emphasizing exchanges within users’ personal networks. However, the research reveals that these adjustments inadvertently amplified exposure to ideologically extreme content and lower-quality news sources, intensifying the polarization gap as the platform increasingly favored content with higher engagement, regardless of its veracity or bias.
By employing robust data analytics methodologies, the research team, led by computational social science researcher Emma Fraxanet and supervised by AI specialist Vicenç Gómez, dissected user engagement in careful detail. By quantifying clicks, shares, comments, likes, and other reaction types, they created a nuanced portrait of how user interactions mapped onto ideological lines. Their approach unearthed a distinctive U-shaped engagement pattern: users with more extreme ideological positions engaged substantially more with content aligned to their worldviews than moderate users did. This pattern provides a key insight into the mechanism driving increased polarization, as extreme content tends to generate greater engagement and consequently gains algorithmic prominence.
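As a rough illustration of how such a U-shape can be detected, the sketch below bins synthetic users by an ideology score and compares mean engagement at the extremes against the middle. Both the data-generating assumptions and the binning are invented for demonstration and do not reproduce the study’s pipeline.

```python
# A hedged sketch: generate synthetic users whose engagement grows with
# distance from the ideological center, then check that the extreme bins
# out-engage the central ones (the U-shape). Purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
ideology = rng.uniform(-1, 1, 10_000)  # per-user ideology scores in [-1, 1]
# Engagement rises with |ideology|, mimicking the reported pattern.
engagement = 2 + 5 * np.abs(ideology) + rng.normal(0, 0.5, ideology.size)

bins = np.linspace(-1, 1, 11)  # ten equal-width ideology bins
bin_index = np.clip(np.digitize(ideology, bins) - 1, 0, 9)
mean_per_bin = np.array([engagement[bin_index == b].mean() for b in range(10)])

# U-shape check: both extremes exceed the center.
print("extreme-left mean: ", round(mean_per_bin[0], 2))
print("center mean:       ", round(mean_per_bin[4:6].mean(), 2))
print("extreme-right mean:", round(mean_per_bin[-1], 2))
```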
Interestingly, the study does not establish a direct causal link between Facebook’s algorithm changes and the rise in polarization or misinformation dissemination. Nonetheless, it underscores a significant temporal association and highlights the multifaceted nature of the problem, urging further research to isolate and understand these dynamics fully. Moreover, the findings accentuate that user behavior itself—specifically, the tendency of individuals to seek out ideologically consonant content—plays a pivotal role in shaping online news ecosystems, working synergistically with platform algorithms to deepen divides.
Examining the platform modifications more closely, the 2018 algorithmic update deprioritized likes (often considered a low-effort signal) and increased the weight of shares and comments, with the stated goal of promoting more substantive interactions. The researchers observed that following this shift, ideological polarization intensified and engagement with dubious or low-quality content grew markedly. The 2020 update then refined this approach by reducing the emphasis on shares and boosting the influence of comments, ostensibly to curb toxic or low-quality content dissemination. Yet, paradoxically, even though platform activity declined during this period, polarization and engagement with questionable news remained high.
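To see why such re-weighting matters, consider the deliberately simplified scoring function below. The weights are invented to mirror the direction of the described 2018 and 2020 changes (less weight on likes, then less on shares and more on comments); Facebook’s actual ranking formula is not public, so this is a sketch of the mechanism, not the platform’s implementation.

```python
# Hypothetical ranking weights illustrating how shifting emphasis among
# engagement signals can flip which post surfaces first. All numbers are
# invented for demonstration; the real formula is proprietary.
from dataclasses import dataclass

@dataclass
class Post:
    name: str
    likes: int
    shares: int
    comments: int

def score(post: Post, w_like: float, w_share: float, w_comment: float) -> float:
    return w_like * post.likes + w_share * post.shares + w_comment * post.comments

posts = [
    Post("measured report", likes=500, shares=20, comments=10),
    Post("provocative take", likes=100, shares=80, comments=120),
]

eras = {
    "likes-weighted (pre-2018)":       (1.0, 1.0, 1.0),
    "shares/comments-weighted (2018)": (0.2, 2.0, 2.0),
    "comments-weighted (2020)":        (0.2, 0.5, 3.0),
}
for label, weights in eras.items():
    ranked = sorted(posts, key=lambda p: score(p, *weights), reverse=True)
    print(f"{label}: top = {ranked[0].name}")
```

Under the invented pre-2018 weights, the like-heavy post ranks first; once shares and especially comments dominate the score, the more provocative, comment-generating post takes the top slot, which is the dynamic the study associates with rising engagement on polarizing content.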
This research contributes a vital empirical lens on how social media algorithms shape the contemporary information environment, offering quantitative evidence of how platform design decisions ripple through user behavior and ideology. It raises fundamental questions about the balance between promoting engagement and maintaining the integrity and plurality of public discourse within democratic societies, especially under the pressures of rapidly evolving digital ecosystems where misinformation can spread virally.
Beyond illuminating the direct implications of design choices made by social media giants, the study signals broader societal challenges. The entrenched divide in the consumption patterns of liberals and conservatives exacerbates difficulties in achieving common ground, a phenomenon with critical repercussions for democratic dialogue and governance. The U-shaped engagement pattern suggests that extreme viewpoints are not merely marginalized echoes but active centers of engagement, potentially skewing public perception and debate.
Moreover, the researchers’ use of a comprehensive dataset spanning millions of URLs and an array of engagement metrics exemplifies the growing power of computational social science to unpack complex social phenomena. By linking changes in algorithmic design with shifts in user behavior and content quality, the study demonstrates how interdisciplinary approaches combining computer science, sociology, and psychology can forge new understandings of the digital era’s challenges.
Acknowledging its limitations, the paper explicitly calls for future research to probe deeper causal pathways and to explore intervention strategies that platforms might employ to mitigate polarization without sacrificing user agency or engagement. Such insights are crucial as platforms continue iterating on their algorithms, often guided by opaque priorities and competing objectives related to user experience, advertising revenue, and social responsibility.
In conclusion, this study stands as a significant marker in the ongoing discourse about how digital platforms influence political polarization and the dissemination of misinformation. Its careful empirical approach and temporal analysis spotlight the complex nexus between technological design, user engagement, and ideological segregation, highlighting the urgent need for transparent, evidence-based policies and designs that foster healthier informational ecosystems.
Subject of Research:
Not applicable
Article Title:
Analyzing news engagement on Facebook: tracking ideological segregation and news quality in the Facebook URL dataset
News Publication Date:
30-Sep-2025
Web References:
http://dx.doi.org/10.1140/epjds/s13688-025-00585-3
References:
Fraxanet, E., Kaltenbrunner, A., Germano, F. et al. Analyzing news engagement on Facebook: tracking ideological segregation and news quality in the Facebook URL dataset. EPJ Data Sci. 14, 73 (2025).
Image Credits:
Authors of the article.
Keywords:
Computational social science, Human social behavior, Social network theory