In the ever-evolving landscape of social media, the question of who should wield authority over content moderation remains profoundly contentious. As platforms grapple with the challenge of curbing harmful misinformation while preserving free expression, public trust in moderation systems is paramount. Recent groundbreaking research by Cameron Martel and colleagues, published in PNAS Nexus, offers illuminating insights into American perceptions of legitimacy concerning different types of content moderators. Their comprehensive survey, conducted during the summer of 2023, systematically unpacks how various moderation models resonate with the public, bringing new clarity to a complex and often polarized debate.
The study leverages a robust sample of 3,000 US residents, recruited through the online survey platform YouGov, to evaluate public attitudes toward the legitimacy of differing content moderation authorities, especially when moderation decisions diverge from individual judgment. Respondents were presented with nine distinct moderator arrangements, ranging from expert juries composed of professional fact-checkers, journalists, or domain specialists, to layperson juries that varied in size, qualification criteria, and decision-making method. Additional moderation mechanisms, such as algorithms, the heads of social media companies, and random coin flips, were also assessed to benchmark public trust.
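To make the design concrete, the sketch below shows one way such moderator arrangements could be represented as experimental conditions. It is a minimal illustration in Python: the field names, layperson jury sizes, and condition labels are hypothetical stand-ins, not the authors' actual survey instrument or full set of nine conditions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class ModeratorCondition:
    """One moderation arrangement a respondent might evaluate (illustrative only)."""
    label: str                              # human-readable description
    jury_size: Optional[int] = None         # None for non-jury mechanisms
    qualification: Optional[str] = None     # e.g. "domain expert"; None if not applicable
    decision_process: Optional[str] = None  # "independent votes" or "group discussion"

# Hypothetical stand-ins for the kinds of arrangements described above;
# the study's actual wording, jury sizes, and complete condition set differ.
CONDITIONS = [
    ModeratorCondition("Small jury of fact-checkers, journalists, or domain specialists",
                       jury_size=3, qualification="domain expert",
                       decision_process="group discussion"),
    ModeratorCondition("Large layperson jury with a minimal knowledge qualification",
                       jury_size=3000, qualification="minimal knowledge quiz",
                       decision_process="group discussion"),
    ModeratorCondition("Politically balanced layperson jury",
                       jury_size=3000, qualification="politically balanced",
                       decision_process="group discussion"),
    ModeratorCondition("Platform ranking/removal algorithm"),
    ModeratorCondition("Head of the social media company"),
    ModeratorCondition("Random coin flip"),
]

for condition in CONDITIONS:
    print(condition.label)
```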
A key revelation from the data is that Americans distinctly favor small juries composed of domain experts as the most legitimate arbiters of misleading content on social media. This preference underscores a broad recognition of specialized knowledge as a critical asset for effective moderation. Expert juries, fixed at three members to balance expertise and agility, inspired more confidence than most other arrangements. This finding challenges narratives that expert moderation is necessarily perceived as elitist or biased by the public at large.
However, the research surprises in its nuanced findings regarding layperson juries. Larger juries comprising thousands of randomly selected users, when endowed with minimal knowledge qualifications and structured to deliberate collectively, garnered legitimacy ratings comparable to those of small expert panels. Politically balanced layperson groups with similar attributes also fared well in public estimation. These results suggest that inclusivity in numbers combined with informed discussion may approximate the legitimacy benefits traditionally associated with domain expertise.
The survey further reveals significant interaction effects between political affiliation and moderator type. Republican respondents tended to perceive expert panels as less legitimate than Democrats did, highlighting the entrenched partisan skepticism towards expertise that has characterized information ecosystems in recent years. Nevertheless, Republicans still rated expert juries as more legitimate than unqualified layperson panels, particularly smaller layperson juries with no knowledge prerequisites. This indicates a foundational, if imperfect, deference to expertise across ideological divides.
Conversely, respondents expressed clear skepticism regarding the trustworthiness of social media executives as content moderators. Moderation decisions made by platform heads were rated no more favorably than outcomes determined by a coin toss. This stark distrust reflects widespread concerns about conflicts of interest, lack of transparency, and perceived accountability deficits associated with centralized corporate control over content governance. The findings resonate with growing public demands for more decentralized and depoliticized moderation frameworks.
Algorithmic moderation, often touted for scalability and neutrality, ranked lower in perceived legitimacy than expert or qualified layperson juries. While automation offers consistency and speed, respondents appeared wary of relinquishing critical judgment to opaque computational processes. This skepticism likely stems from documented algorithmic biases and the lack of meaningful recourse or explainability in many AI moderation tools—a technical challenge that continues to preoccupy researchers and practitioners alike.
Substantively, the study’s findings provide actionable guidance for social media platforms, policymakers, and regulators striving to design moderation systems that earn public trust. The juxtaposition of small expert juries and large, deliberative layperson juries as equivalently legitimate models opens avenues for hybrid frameworks that leverage both specialized knowledge and democratic inclusivity. Such models could balance efficiency, transparency, and representativeness, mitigating the pitfalls observed in existing unilateral moderation approaches.
Moreover, the differential legitimacy evaluations along political lines underscore the necessity of designing moderation systems resilient to partisan polarization. Embedding fairness mechanisms and ensuring political diversity within layperson juries may enhance perceived impartiality, thereby increasing overall legitimacy. The research implicitly advocates for greater experimentation with collective decision-making formats that can transcend entrenched ideological rifts.
From a technical standpoint, the use of randomly assigned size and qualification parameters for layperson juries represents a methodologically rigorous approach to disentangling the effects of group composition and decision modalities on perceived legitimacy. The inclusion of independent versus group discussion decision-making adds another layer of granularity, elucidating the benefits of deliberative processes that promote consensus and shared understanding in content adjudication.
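To illustrate what a factorial randomization of this kind might look like in code, the following Python sketch randomly assigns hypothetical respondents to combinations of jury size, qualification requirement, and decision process, then tabulates mean legitimacy ratings per cell. It is purely synthetic: the factor levels, the 1–7 rating scale, and the effect sizes are invented for illustration and do not reflect the authors' data or analysis.

```python
import random
from statistics import mean
from collections import defaultdict

random.seed(0)

# Hypothetical factor levels; the real survey's levels and wording may differ.
JURY_SIZES = [3, 30, 3000]
QUALIFICATIONS = ["none", "minimal knowledge quiz", "politically balanced"]
DECISION_PROCESSES = ["independent votes", "group discussion"]

def simulate_rating(size, qualification, process):
    """Return a mock 1-7 legitimacy rating; effects are synthetic, for illustration only."""
    base = 3.5
    base += 0.5 if size >= 3000 else 0.0              # toy effect of very large juries
    base += 0.6 if qualification != "none" else 0.0   # toy effect of any qualification
    base += 0.4 if process == "group discussion" else 0.0
    return min(7.0, max(1.0, random.gauss(base, 1.0)))

def run_experiment(n_respondents=3000):
    cells = defaultdict(list)
    for _ in range(n_respondents):
        # Random assignment of each respondent to one cell of the design.
        size = random.choice(JURY_SIZES)
        qual = random.choice(QUALIFICATIONS)
        proc = random.choice(DECISION_PROCESSES)
        cells[(size, qual, proc)].append(simulate_rating(size, qual, proc))
    return {cell: round(mean(ratings), 2) for cell, ratings in sorted(cells.items())}

if __name__ == "__main__":
    for cell, avg_rating in run_experiment().items():
        print(cell, avg_rating)
```

In a design of this shape, comparing average ratings across cells is what allows the contributions of jury size, qualification criteria, and decision process to perceived legitimacy to be separated from one another.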
This research further pushes the discourse on emerging AI-assisted moderation by highlighting public concerns that remain inadequately addressed. Algorithmic tools need not only to be improved in accuracy but also made transparent, auditable, and accountable to meet the legitimacy bar set by human expert or layperson juries. As AI increasingly integrates with human moderation systems, maintaining the delicate balance between efficiency and public trust becomes a critical design objective.
In conclusion, Martel et al.’s survey sheds unprecedented light on the contours of public legitimacy perceptions in social media moderation, a domain critical to information integrity and democratic discourse. By demonstrating that both small expert juries and large, qualified layperson juries deliberating collectively are seen as trustworthy, the study challenges binary assumptions about who should hold the reins of content governance. It calls for innovative moderation architectures that combine expertise, inclusiveness, and transparent deliberation to navigate the complexities of regulating online speech in an era of misinformation and political polarization.
The pathway forward, as illuminated by this research, involves transcending simplistic, top-down moderation paradigms. Integrating domain expertise with democratic participation and harnessing advances in AI—while maintaining transparency and accountability—offers the best chance to reconstruct public confidence in social media content governance. Platforms and regulators attentive to these insights can pioneer systems designed not only to remove harmful content but also to be embraced as legitimate, fair, and trustworthy by diverse user communities.
Ultimately, this research contributes vital empirical evidence to an ongoing societal challenge, underlining that legitimacy in content moderation is not merely a technical or policy question, but a fundamentally social one. Understanding and incorporating public preferences into moderation system design is indispensable for the future health of online public spheres, where trust is the currency of constructive engagement and democratic deliberation.
Subject of Research: Public perceptions of legitimacy in different content moderation models on social media
Article Title: Perceived legitimacy of layperson and expert content moderators
News Publication Date: 20-May-2025
Image Credits: Martel et al.
Keywords: Artificial intelligence, content moderation, social media, public trust, expert juries, layperson juries, misinformation, algorithmic moderation, political polarization