In the evolving landscape of scientific inquiry, the peer review process stands as a critical mechanism ensuring the trustworthiness and integrity of published research. This traditional practice, where experts in relevant fields voluntarily scrutinize manuscripts prior to publication, has underpinned the advancement of science for decades. However, recent shifts in academic publishing and research culture signal that peer review is facing unprecedented strain, risking the very foundation it aims to protect. New mathematical modeling studies by Carl Bergstrom of the University of Washington and Kevin Gross of North Carolina State University illuminate this distressing pattern, revealing a feedback loop that both degrades peer review effectiveness and intensifies pressure on overburdened reviewers.
Peer review ideally functions as a gatekeeper, filtering out research that is flawed or insufficiently supported and ensuring that only quality work reaches academic audiences and beyond. When executed effectively, this system fosters a virtuous cycle: rigorous review encourages researchers to carefully select journals and submit only their strongest work, confident that their efforts will be fairly and thoroughly assessed. This selectivity, in turn, maintains high editorial standards and reliable literature. Bergstrom and Gross's research underscores how this cycle is now at risk of reversal, spiraling into self-sustaining degradation.
The crisis identified by the researchers hinges on the burgeoning number of manuscript submissions and the steady decline in available, willing reviewers. As journals receive ever more papers, peer reviewers, who volunteer their time without compensation, are forced to divide their attention among more manuscripts. This shift diminishes the scrutiny given to each paper, making editorial outcomes less predictable and reliable. Consequently, authors may become less selective about where or how often they submit, hoping to increase chances of acceptance by resubmitting to successive journals after rejections. This drives manuscript volume even higher, exacerbating the cycle of dilution and diminishing review quality.
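The feedback loop described above can be sketched as a toy simulation. This is not Bergstrom and Gross's actual model; it is an illustrative caricature with invented parameters (a fixed budget of reviewer hours, a uniform latent quality per paper, and decision noise that grows as per-paper scrutiny shrinks) showing how rejected-and-resubmitted papers can swell the pool and dilute review attention:

```python
import random

def simulate(rounds=20, reviewer_hours=1000.0, new_papers=200,
             threshold=0.5, seed=42):
    """Toy model of the submission-review feedback loop.

    Each round, a pool of manuscripts competes for a fixed budget of
    reviewer hours. As the pool grows, scrutiny per paper falls, so
    accept/reject decisions become noisier; rejected papers are
    resubmitted, swelling the next round's pool. All numbers here are
    illustrative assumptions, not empirical estimates.
    """
    random.seed(seed)
    pool = [random.random() for _ in range(new_papers)]  # latent paper quality in [0, 1)
    history = []
    for _ in range(rounds):
        # Scrutiny per paper: reviewer hours spread over the pool, capped at 1.0.
        scrutiny = min(1.0, reviewer_hours / (10.0 * len(pool)))
        rejected = []
        accepted = 0
        for quality in pool:
            # Less scrutiny -> a noisier quality signal reaches the editor.
            noise = random.gauss(0.0, 0.5 * (1.0 - scrutiny))
            if quality + noise >= threshold:
                accepted += 1
            else:
                rejected.append(quality)
        history.append((len(pool), scrutiny, accepted))
        # Rejections re-enter the system alongside a fresh batch of papers.
        pool = rejected + [random.random() for _ in range(new_papers)]
    return history

history = simulate()
first_pool, first_scrutiny, _ = history[0]
last_pool, last_scrutiny, _ = history[-1]
# In this sketch, the pool grows and per-paper scrutiny shrinks over time.
```

Even this crude sketch reproduces the qualitative dynamic in the article: because rejected manuscripts recirculate, the submission pool settles well above the rate of genuinely new work, and the scrutiny each paper receives falls accordingly.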
While the precarious state of peer review has long been a subject of lament within the scientific community, several converging factors now intensify the challenge. First, science's global expansion has produced larger, more dispersed networks, diffusing the sense of community responsibility that once motivated reviewers. Second, the commercial success of scientific publishing has prompted major publishers to launch numerous new journals, expanding the ecosystem into a crowded marketplace where rejected papers can be endlessly recycled. Each resubmission demands new rounds of volunteer peer review, cumulatively overwhelming the system. Moreover, the COVID-19 pandemic disrupted academic routines and priorities, leaving many researchers less able or willing to dedicate time to reviewing, a disruption from which global peer review capacity has yet to fully recover.
Concerns about the integrity of peer-reviewed literature under these conditions are nuanced. While the primary accountability for research accuracy remains with the authors, who have vested reputational interests in credible work, peer review serves as a crucial secondary safeguard. As peer review systems weaken, errors may seep into the literature through small fractures in quality control, subtly undermining trust in scientific findings. In an era already rife with misinformation and skepticism, maintaining social trust in science is paramount. Even modest erosion of peer review credibility threatens researchers' careers and the public policy that depends on scientific evidence.
One of the more alarming potential consequences of the review crisis is a premature shift toward automated, AI-powered manuscript evaluation. While machine learning tools might offer supportive analysis or flag obvious issues, Bergstrom and Gross caution that replacing human judgment risks sacrificing nuanced critique and constructive dialogue. Peer review extends beyond binary accept-or-reject decisions, encompassing formative feedback that helps refine and elevate scientific ideas. Human reviewers provide a discourse crucial to intellectual growth, mentorship, and the iterative nature of discovery, functions unlikely to be replicated by algorithms in the near term.
Responding to the crisis requires bold experimentation with existing scholarly incentive structures. A controversial yet increasingly discussed proposal is financially compensating peer reviewers, particularly at commercial journals that profit from free labor. Remuneration could recognize the critical service reviewers provide, incentivizing participation and improving review quality. The appeal of this solution lies in its potential to spread quickly: as soon as one journal successfully implements paid peer review, competitive pressures could catalyze widespread adoption. However, the shift also raises questions about the sustainability of funding and potential impacts on review impartiality.
Another intriguing idea involves awarding monetary prizes for exemplary reviews. Such recognition could encourage thorough, insightful evaluations, uplifting community standards. While editorial subjectivity and selection biases present challenges, the competitive incentive could stimulate improved reviewer engagement. Yet, no solution is without trade-offs, and the efficacy of such measures must be balanced against administrative burdens and risks to fairness.
Alternatively, the crisis might be addressed by reshaping the academic reward system that currently prioritizes publication quantity over quality. Hiring and promotion committees wield considerable influence over researcher behavior. By emphasizing the significance and impact of a limited number of publications rather than sheer volume, these bodies could dissuade the flood of incremental submissions. Such cultural realignment could alleviate pressure on peer reviewers by reducing the demand for manuscript assessment and encouraging more thoughtful dissemination practices.
Ultimately, the sustainability of peer review depends on collective acknowledgment of its indispensable role and commitment to preserving its human-centric core. The insights offered by Bergstrom and Gross's mathematical model provide a framework to understand the dynamics undermining the system and an impetus to act before these self-reinforcing feedback loops become irreversible. Protecting peer review's integrity is not merely an academic exercise; it is a necessary investment in the future of science, public trust, and societal progress.
Subject of Research: Peer review process dynamics and emerging crises in scientific publishing
Article Title: Screening, sorting, and the feedback cycles that imperil peer review
News Publication Date: 24-Feb-2026
Web References:
- PLOS Biology article
- Carl Bergstrom University Profile
- Kevin Gross Profile
References: Bergstrom, C. & Gross, K. (2026). Screening, sorting, and the feedback cycles that imperil peer review. PLOS Biology. DOI: 10.1371/journal.pbio.3003650
Image Credits: Carl Bergstrom
Keywords: peer review crisis, scientific publishing, manuscript submissions, reviewer shortage, editorial process, academic incentives, publication quality, AI in peer review, reviewer compensation, feedback loops, scientific integrity, research reliability

