Madam Chair and members of the committee, thank you for the opportunity to present my views on this important issue today.
I'm a co-founder of Retraction Watch, a non-profit news organization based in the U.S., which reports on scientific misconduct and reactions to it by universities, publishers and funding agencies, among other issues. We also maintain the world's most comprehensive database of scholarly retractions for Crossref, another non-profit that acquired the database in 2023.
I'm also a distinguished journalist in residence at New York University's Arthur Carter Journalism Institute and editor-in-chief of The Transmitter, a publication covering neuroscience.
I base my comments on 14 years of reporting and writing about relevant issues at Retraction Watch.
Last year, there were well over 10,000 retractions from the scholarly literature. Of note, only dozens of those retractions involved researchers affiliated with Canadian universities. While the 10,000-plus figure was an 88% jump from 2022, the growth reflects an overall trend since the turn of the century.
Increased scrutiny of the literature is largely responsible for that rise, but 2023 revealed that a significant portion of what is published every year, conservatively estimated at 2% and likely higher, is produced simply to game the metrics that determine career and institutional success.
I wish to quote Dan Pearson, who studies how researchers can engage larger audiences: "Academic publishing is a game. And a lucrative one for those who win."
That gaming is in large part being carried out by what are known as “paper mills”—shady organizations that sell papers to researchers desperate to publish lest their careers perish. They also sell authorships, and our reporting has revealed that some of these companies even bribe editors to publish papers by their clients.
All of this is an entirely predictable response to standard incentives in academia. Universities around the world demand that researchers publish a high volume of papers, as many of them as possible in prestigious journals. That's because influential international rankings, such as those created by Times Higher Education, prioritize citations, which are, of course, references to a researcher's work in subsequent papers.
Citations are very easy to game, as paper mills know. Because citation counts are an oft-used metric for judging the quality and impact of research, citation cartels ensure that their members' counts rise. All of this means there is an uncomfortable truth behind the press releases, advertisements and other material universities and countries use to crow about their high rankings: these rankings are based on a house of cards built with a stacked deck.
Even with good intentions, it's easy for governments and funding agencies to fall into the same trap. After all, we all rely on heuristics, apparently validated shortcuts, if you will, to make decisions, particularly when faced with a large number of choices. But citation-based heuristics pave the road to bad behaviour and retractions.
China offers a lesson here. Its publishing incentives have been among the most extreme in the world, and while the country does top some impact and innovation rankings, it also tops a ranking it probably wishes it didn't: more than half of the world's retractions involve authors affiliated with Chinese universities.
I was therefore pleased to learn, as the committee heard from Jeremy Kerr last week, that five major Canadian research funding agencies have signed on to the Declaration on Research Assessment, also known as DORA. Others have suggested that instead of counting papers and citations, funders examine a small selection of papers chosen by the researchers being evaluated; in other words, quality over quantity.
Such changes will require effort and resources, but progress in research is worth it.
Thank you for your time. I welcome the opportunity to expand on my comments during the Q and A with members of the committee.