
The old fashioned way? (Credit: PaeGAG on Shutterstock)

Half of Science Reviewers Use Artificial Intelligence. Three-Quarters of Researchers Don’t Know When Publishers Do

In A Nutshell

  • AI has quietly transformed peer review: 53% of peer reviewers now use AI tools when evaluating research, with nearly one in four increasing their usage in the past year
  • Most use stays superficial: Reviewers primarily use AI for drafting reports and fixing grammar (59%) rather than for assessing methodology or statistical soundness (19%), missing the technology’s real potential to strengthen scientific rigor
  • Trust without transparency: While 66% say publisher AI use speeds up publication, only 21% say it increases their trust, and 76% remain unsure whether AI was even used in their publication process
  • Training gap creates confusion: 35% of researchers teach themselves AI with no formal guidance, while 18% take no action at all to ensure best practices, leaving the community navigating powerful tools without clear standards

Artificial intelligence has quietly reshaped the foundation of scientific publishing, yet about three-quarters of researchers say they’re unsure or unaware whether publishers used AI during their publication process.

A sweeping survey reveals that 53% of peer reviewers now use AI tools when evaluating manuscripts, marking a profound shift in the validation of science. Nearly one in four reviewers (24%) say they’ve increased their AI use over the past year. Yet most researchers can’t tell when publishers deploy AI in the publication process, exposing a transparency gap at the heart of scientific communication.

The survey of 1,645 researchers found this transformation happened without fanfare, consistent training, or clear governance. While AI quietly became part of the machinery that decides which research enters the scientific record, disclosure standards and clear rules haven’t kept pace with adoption or prepared researchers for the change.

“I would consider it unethical to use AI in peer reviewing manuscripts,” one reviewer told surveyors. “Indeed, it wouldn’t have occurred to me that doing so was even possible, save that the last time I did a review, the form told me not to use AI.”

How AI Infiltrated Peer Review Without Transparency

The normalization of AI in peer review represents one of the most consequential shifts in scientific publishing since the formalization of peer review itself. Yet it occurred largely beneath the radar of the research community.

Publishers have begun integrating AI into editorial workflows, using it for integrity checks and workflow efficiency. Researchers themselves adopted AI tools for reviewing manuscripts, often without institutional guidance or clear policies. The result is a fundamentally altered system operating with little transparency about when, where, or how AI influences decisions.

The numbers reveal the trust gap. While 66% of researchers rate publisher use of AI as effective for speeding up publication, only 21% say it increases their trust in the process. Among the 76% who feel unsure whether publishers have used AI during their publication experiences, many express concern about the lack of disclosure.

When asked about barriers to responsible AI adoption, 20% of researchers who provided open-ended responses cited unclear rules or absent governance as their top obstacle.

Scientists just starting out in their careers have the highest AI adoption rates. (Credit: PeopleImages on Shutterstock)

Surface-Level AI Adoption Misses Deeper Scientific Potential

While AI use has become common, most applications barely scratch the surface of what’s possible. Among reviewers using AI, 59% deploy it for drafting reports, 29% for summarizing findings, and 28% for flagging potential misconduct. Only 19% use it to assess methodology or statistical soundness.

One researcher in Chile summed up the tension: “These tools help me save time, improve clarity, and increase confidence when writing in English. However, I remain cautious about factual accuracy and always double-check scientific content.”

The pattern holds for researchers writing their own papers. About 70% use AI for polishing prose and improving clarity, while fewer than 25% tap it for analysis, experimental design, or methodology. AI gets relegated to secretarial work when it could be interrogating data quality, testing statistical claims, and exploring alternative methodological approaches.

This cautious surface-level adoption leaves major potential untapped. AI’s real promise lies in strengthening reproducibility, catching methodological flaws, and supporting deeper scientific rigor. These are exactly the areas where peer review often struggles.

AI Training Gaps Leave Researchers to Fend for Themselves

Roughly 35% of researchers are teaching themselves how to use AI tools. Another 31% get guidance from their institutions, while 18% take no action at all to ensure best practice. Publishers provide guidance to just 16% of users. (Respondents could select more than one option.)

Researchers rely on trial and error, peer advice, and informal experimentation without consistent frameworks or shared standards. One researcher noted their proficiency improved over time but added, “I still need to improve my practice.”

Early career researchers show the highest adoption rates, with 87% using AI for authoring compared with 67% of senior researchers. For peer review specifically, 61% of those with five or fewer years of experience use AI tools, versus 45% among those with over 15 years of experience. Senior reviewers are the only group where the majority (55%) have never used AI in peer review.

Regional Divides Point to Competing Priorities

Geographic patterns reveal different attitudes toward AI’s role. In China and Africa, 77% and 66% of reviewers respectively use AI at least occasionally. Researchers in those regions see AI as an equalizer, particularly for non-native English speakers navigating language barriers in publishing.

By contrast, only 31% of North American and 46% of European reviewers have tried AI for peer review. Concerns about bias, misuse, and insufficient governance dominate in these regions, reflecting a cultural focus on ethics and policy readiness rather than access or capability.

A researcher in South Africa captured the perspective driving adoption in some regions: “AI is a tool that we should embrace, whether it is peer reviewing or writing. A 21st century skill that we should teach our learners how to use responsibly, alongside our own thought processes.”

Researchers are torn on AI’s effects. While 63% say AI improves manuscript quality, 52% worry its use can make them question a work’s integrity, and 48% say it can introduce errors. Meanwhile, 71% of all researchers worry AI tools are being misused, 53% report personally observing what they believe to be AI misuse by peers, and 45% share concerns about publisher misuse.

The Path Forward

The report, published by Frontiers Media, outlines specific actions for closing trust, training, and governance gaps. Publishers should disclose all AI use externally, establish ethical oversight standards aligned with industry frameworks, and co-develop literacy programs with editors and reviewers.

Research institutions should integrate AI literacy into core curricula, develop certified training frameworks, and hold researchers accountable through clear policies. Funders and policymakers should mandate transparency and embed disclosure requirements into funding mechanisms.

Tool developers need to explain how their systems work, enable independent audits, and maintain transparency as capabilities evolve. The current opacity around AI tool design and training data undermines efforts to use these systems responsibly.

Most fundamentally, the community needs to normalize disclosure. The quiet transformation of peer review created a system where AI influence remains largely invisible, even as it becomes widespread. Bringing that transformation into the light is the first step toward harnessing AI’s potential while maintaining scientific integrity.

Frontiers’ study represents the first comprehensive look at how AI has reshaped peer review, surveying researchers who had authored, peer reviewed, or served as editors in 2025.


Paper Notes

Study Limitations

The study’s sample, while geographically diverse, may not fully represent all scientific disciplines or publishing contexts. Self-reported data on AI use could be subject to recall bias or social desirability effects. The survey’s timing captured a snapshot of rapidly evolving practices, meaning adoption rates and patterns may have already shifted. Some respondents may have different interpretations of what constitutes “AI use” or “misuse,” potentially affecting comparability of responses.

Publication Details

Authors: Simone Ragavooloo, Portfolio Manager, Research Integrity (conceptual design); Elena Vicario, Director of Research Integrity (content development); Katie Allin, Customer Intelligence Manager (data and analysis); additional contributors include Josh Perera, Jenny Lycett, and Marina Mariano. Published by Frontiers Media, 2025. Title: “Unlocking AI’s untapped potential: responsible innovation in research and publishing.” The study surveyed 1,645 active researchers between May and June 2025.

Funding and Disclosures

This research was conducted by Frontiers Media, an open-access research publisher. The company develops and uses AI tools as part of its publishing operations. The study used GPT-5.1 (OpenAI) to support drafting and refining content, with all AI-assisted text reviewed, edited, and approved by human contributors. Frontiers has published policies encouraging transparent AI use by authors, editors, and reviewers.
