Who would users trust to fact-check online information? (KGBR/Shutterstock)
In a nutshell
- Americans trust expert panels most to decide what’s misleading online, but well-designed layperson juries—especially those that are large, knowledgeable, and diverse—can earn nearly equal trust.
- Political identity affects trust in content moderation, with Republicans showing less confidence in experts than Democrats, though still preferring experts over algorithms or random users.
- Algorithms, social media CEOs, and random chance were the least trusted options, suggesting that platforms should focus on transparent, human-centered moderation systems to build public legitimacy.
CAMBRIDGE, Mass. — When it comes to policing misinformation online, Americans prefer experts over everyday users or algorithms. A new study reveals that while Americans generally trust domain experts most to label misleading content on platforms like Facebook, juries of regular people can earn nearly equal trust if they're structured correctly.
The findings, published in PNAS Nexus, arrive just as social media giants are dramatically shifting their approach to content moderation. Earlier this year, Meta CEO Mark Zuckerberg ended partnerships with professional fact-checkers, calling such relationships examples of “too much censorship.” Meanwhile, platforms like X (formerly Twitter) have increasingly turned to user-based systems like Community Notes to identify potentially misleading content.
Researchers from MIT, the University of Washington, and the University of Michigan explain that understanding what moderation systems the public finds legitimate is crucial for effectively addressing misleading content online.
Trust in Experts for Content Moderation
The nationally representative survey asked 3,000 Americans who they would prefer to decide whether online content is “harmfully misleading.” Participants were presented with multiple content moderation juries, ranging from professional fact-checkers to random social media users to algorithms.

Participants were asked to evaluate how legitimate they would find each jury's decisions, even when they personally disagreed with the outcome. Expert panels, whether made up of domain specialists, fact-checkers, or journalists, earned the highest legitimacy ratings. But the researchers found that increasing jury size (from 3 to 30 members), requiring minimum knowledge qualifications, and allowing jury discussion each significantly improved the perceived legitimacy of layperson juries.
In fact, when layperson juries were nationally representative or politically balanced, and were also large, held to a minimum knowledge requirement, and allowed to discuss, they achieved legitimacy ratings comparable to expert panels. This suggests platforms might be able to leverage both approaches effectively rather than abandoning professional fact-checking entirely.
The Partisan Divide in Misinformation Moderation
Political identity heavily influenced who participants trusted. Republicans rated expert panels as less legitimate compared to Democrats, though they still found them more legitimate than baseline layperson juries.
Americans consistently rejected certain approaches to content moderation, placing little faith in decisions made by platform CEOs, coin flips, or even computer algorithms. Major platforms are increasingly turning to AI-based tools to identify harmful content, but the findings suggest the public may not view those decisions as legitimate.
The study intentionally focused on cases where participants disagreed with the jury’s decision. According to the researchers, the real challenge is creating a process people accept as legitimate, even when they disagree with the outcome.
Building Better Content Moderation Systems
These results point to a potential path forward for Americans tired of online misinformation but wary of censorship. Rather than a cut-and-dried choice between expert moderation and hands-off approaches, platforms could deploy hybrid models that combine the credibility of experts with well-designed systems for community input.
The findings also complicate the idea that Americans broadly distrust expert knowledge. While Republicans did show lower trust in experts than Democrats, domain experts still emerged as the most trusted baseline jury type across the political spectrum.
Americans want thoughtful, legitimate systems for addressing harmful misinformation, not just algorithms or unqualified random users making snap judgments. Platforms that replace professional fact-checkers entirely with AI may be misreading public preferences, even among right-leaning users. Given that misinformation can shape everything from vaccine hesitancy to election beliefs, platforms should take note of who Americans trust to separate fact from fiction online.
Paper Summary
Methodology
Researchers conducted a nationally representative survey of 3,000 US participants on YouGov between July and August 2023. Participants evaluated 20 unique content moderation juries in 10 pairs, focusing on who they believed should determine if content is “harmfully misleading” on social media platforms. The study used a rating and choice-based conjoint experiment that varied four key characteristics: who evaluated the content (experts, laypeople, or non-juries like algorithms), jury size (3, 30, or 3,000 members), qualifications (professional or minimum knowledge requirements), and whether jurors could discuss content during evaluation. Importantly, participants were asked to evaluate legitimacy under the condition that they disagreed with the jury’s decision, providing a critical test of institutional legitimacy.
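To make the design concrete, here is a minimal Python sketch of how randomized jury profiles for a conjoint task like this one could be assembled. The attribute labels, function names, and unconstrained random pairing are illustrative assumptions; the actual survey instrument restricted which attribute combinations appeared (jury size and discussion are meaningless for a coin flip, for instance) and was not generated by this code.

```python
import random

# Illustrative attribute levels for the four jury characteristics the
# conjoint experiment varied. Labels are paraphrased from the paper's
# description, not taken from the actual survey instrument.
EVALUATORS = ["domain experts", "fact-checkers", "journalists",
              "laypeople", "platform CEO", "computer algorithm", "coin flip"]
SIZES = [3, 30, 3000]
QUALIFICATIONS = ["none", "minimum knowledge requirement", "professional"]
DISCUSSION = [False, True]


def random_jury():
    """Draw one randomized jury profile."""
    return {
        "evaluator": random.choice(EVALUATORS),
        "size": random.choice(SIZES),
        "qualification": random.choice(QUALIFICATIONS),
        "discussion": random.choice(DISCUSSION),
    }


def conjoint_tasks(n_pairs=10):
    """Build the paired comparisons shown to one respondent: n_pairs
    side-by-side jury profiles (10 pairs = 20 juries, as in the study)."""
    return [(random_jury(), random_jury()) for _ in range(n_pairs)]


if __name__ == "__main__":
    for left, right in conjoint_tasks():
        print(left, "vs.", right)
```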
Results
Domain experts received the highest baseline legitimacy ratings, followed by fact-checkers and journalists. All three non-juries (coin flip, platform CEO, computer algorithm) had significantly lower legitimacy ratings. Baseline layperson juries fell between experts and non-juries in perceived legitimacy. However, increasing layperson jury size from 3 to 30 members, requiring minimum knowledge qualifications, and allowing jury discussion each significantly improved legitimacy ratings. With these enhancements, nationally representative and politically balanced layperson juries achieved legitimacy ratings comparable to expert panels. Political partisanship moderated these effects, with Republicans rating experts lower than Democrats did, though experts still remained more legitimate than baseline layperson juries across all partisan groups.
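As an aside for readers unfamiliar with conjoint analysis, the sketch below shows on simulated data how effects like "increasing jury size improves legitimacy ratings" are typically estimated: regress ratings on dummy variables for each attribute level. Every number here is invented solely so the example runs; none of it reflects the paper's data or estimates, and this is not the authors' analysis code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated conjoint responses, invented purely for demonstration.
rng = np.random.default_rng(0)
n = 1000

df = pd.DataFrame({
    "evaluator": rng.choice(["algorithm", "layperson", "domain expert"], n),
    "size": rng.choice([3, 30], n),
    "discussion": rng.integers(0, 2, n),
})

# Made-up ratings that loosely mimic the reported ordering
# (experts > laypeople > algorithms) plus small size and discussion boosts.
base = df["evaluator"].map({"algorithm": 2.0, "layperson": 3.0, "domain expert": 4.0})
df["legitimacy"] = (base
                    + 0.3 * (df["size"] == 30)
                    + 0.2 * df["discussion"]
                    + rng.normal(0, 1, n))

# OLS on attribute dummies approximates the average effect of moving each
# attribute from its baseline level to another level.
model = smf.ols("legitimacy ~ C(evaluator) + C(size) + discussion", data=df).fit()
print(model.params)
```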
Limitations
The study used hypothetical juries and decisions rather than showing actual content moderation decisions on specific pieces of content. All legitimacy evaluations were conducted under the condition of disagreement with the jury's decision, which likely resulted in lower absolute legitimacy ratings overall. The findings are specific to Americans and the US context, and may not generalize to other countries or cultures. The researchers acknowledge that legitimacy perceptions may differ across platforms (Facebook versus more professionally oriented sites like LinkedIn, for example) and that the study focused specifically on private tech company policies rather than governmental speech moderation.
Funding and Disclosures
The research was supported by the National Science Foundation (grant no. 2137469). The lead author, Cameron Martel, was funded by the NSF Graduate Research Fellowship (grant no. 174530). In the competing interest statement, the paper notes that other research by co-author David G. Rand is funded by gifts from Meta and Google.
Publication Information
The paper “Perceived legitimacy of layperson and expert content moderators” was published in PNAS Nexus (2025, Volume 4, Issue 5, article ID pgaf111) with advance access publication on May 20, 2025. The authors are Cameron Martel, Adam J. Berinsky, David G. Rand, Amy X. Zhang, and Paul Resnick, affiliated with the Massachusetts Institute of Technology, University of Washington, and University of Michigan.