In April 2026, the British Educational Research Association confirmed that 19 articles published in its flagship journal, the British Educational Research Journal (BERJ), had been manipulated during the peer review process itself by parties with no affiliation to the journal, its publisher Wiley, or BERA. None of those papers was flagged before publication. Blogs and social media caught them afterward.
That number matters. Not because 19 retractions is catastrophic on its own, but because it tells you exactly what peer review is and is not. It is a quality filter run by volunteers with limited time, no forensic tools, and no financial stake in the outcome. It was never designed to catch coordinated external manipulation. The question is whether the field is willing to say that plainly, or whether it will keep treating each fraud case as an isolated failure of individual reviewers.
The Manipulation Happened Inside the Process
BERA's investigation found that unaffiliated parties had manipulated the peer review process itself, not just submitted bad papers and hoped for the best. That is a different threat than a researcher fudging data in a lab. It means the gatekeeping mechanism was the attack surface. Wiley has since migrated BERJ to a platform with enhanced pre-submission checks, including AI detection. Yet BERA acknowledged in the same report that methods for detecting AI use remain unreliable. So the fix being deployed against the most common red flag, suspected AI use, is a tool that does not reliably work.
Paper mills have been selling authorship slots since at least 2021, with first-author positions priced between $57 and $5,600 depending on the journal. That price range tells you this is a functioning market, not a fringe operation. Buyers get a publication credit. The journal gets a submission that looks legitimate. Peer reviewers, who are unpaid and often reviewing 3 to 5 papers per month on top of their own research, get a manuscript with no obvious red flags.
Defenders of the current system will point out that most public flags in the BERA case did not warrant action. That is fair. The majority of social media accusations turned out to be noise. Peer review does filter out a large volume of low-quality work before it reaches readers, and that function has real value.
But filtering low-quality work is not the same as detecting fraud. The 19 papers that made it through were not weak submissions that slipped past tired reviewers. They were manipulated at the process level. That distinction matters for what the fix looks like.
The U.S. DOJ Launched a Fraud Division. Academic Publishing Has Not.
On April 7, 2026, the U.S. Department of Justice launched its National Fraud Enforcement Division, explicitly oriented toward preemptive detection rather than post-hoc prosecution. Academic publishing has moved in the opposite direction: it investigates after external parties raise alarms, then retracts, then tightens procedures for the next round.
The specific reform that would actually change outcomes is pre-submission identity and authorship verification, not AI detection, which BERA itself admits is unreliable. Journals need to confirm that the people listed as authors wrote the paper, that the peer reviewers are who they say they are, and that neither group has a financial relationship with a paper mill. None of this is technically difficult. It requires institutional will and, frankly, money that publishers are not currently spending.
BERA's 19 retractions are a data point, not an outlier. Every field that relies on volunteer peer review and post-publication correction as its primary fraud defense is running the same experiment. The results keep coming back the same way.