Ten takeaways from ten years at Retraction Watch

As we celebrate our tenth birthday and look forward to our second decade, we thought it would be a good time to take stock and reflect on some lessons we — and others — have learned.

  1. Retractions are more common than we — or anyone else — thought they were. Two decades ago, journals were retracting roughly 40 papers per year. Although we were pretty sure they needed to be doing more to police the literature, we had no idea how much more. We also assumed the number was somewhat similar in 2010, but we were off by at least an order of magnitude, depending on how you count. Journals now retract about 1,500 articles annually — a nearly 40-fold increase over 2000, and a dramatic change even after accounting for the fact that the number of papers published each year has roughly doubled or tripled — and even that’s too few.
  1. Journals are slow as molasses. Running a journal is not like turning a battleship, but it can seem that way. Journals can take years to retract papers even for relatively straightforward transgressions, like obvious plagiarism, or clear evidence of fraud that has already been adjudicated by a university investigation. Of course, sometimes the issues are more convoluted — knotty disputes over authorship, for example, or institutional investigations that can slow down the retraction process significantly. But much of the time, journals move slowly because they don’t have incentives to move more quickly. And sometimes — as sleuths and many of those who email us can attest — they do nothing at all.
  1. But they’re improving. The good news is that journals have improved. When we launched Retraction Watch, many, if not most, journals issued opaque retraction notices — “This article has been withdrawn by the authors” is one of our favorites — ignored our inquiries for more information, deflected questions about tainted research to the authors themselves, and otherwise pretended there was “nothing to see here.” Now, no reputable publication that we’re aware of would consider an information-free retraction notice, and many go much further to illuminate the full extent of the problems with a retracted article. Some journals and publishers have even gone so far as to hire ethics managers to monitor the integrity of their articles — a welcome step.
  1. If you build it, they will come: A community finds itself. Okay, so it’s not quite Field of Dreams. Some had certainly come before us, but when we started Retraction Watch, we weren’t exactly fighting for elbow room as watchdogs of the scientific literature. Over the past decade, though, the coalition of the willing — sleuths like Elisabeth Bik, James Heathers and Nicholas Brown, and many others we’ve profiled — has grown stronger. Thanks to their talents and efforts, and to forums such as PubPeer, science has never been subject to greater scrutiny.
  1. More and more journalists are covering scientific misconduct, too. As journalists, one of our objectives when launching Retraction Watch was to surface interesting stories for others in the mainstream media. That goal was rooted in our belief that, although retractions might seem arcane, the tales behind them can be juicy. We’re gratified that other reporters appear to agree. From ProPublica, whose Jodi Cohen dug deeply into the Pavuluri case that we broke, to Margaret Munro, formerly of Canada’s PostMedia, and reporters at many other outlets around the world, journalists have picked up our initial threads and woven more comprehensive stories. When we’re competing on stories, it makes us all work harder, and the mutual respect we have for the terrific work of reporters like Stephanie Lee tells that tale. We look forward to reading what other journalists uncover in the future.
  1. The ingenuity of fraudsters knows no bounds. Which is a good thing because, well, fraudsters gonna fraud. As Elisabeth Bik has detailed, image manipulation is running rampant in science, thanks in large part to improvements in software that make such misconduct easier. We’ve seen hundreds of cases of rigged peer review; citation rings through which unscrupulous researchers inflate the impact of their papers and those of their colleagues; paper mills; authorships for sale; and more. And those are just the transgressions we’ve seen. What remains to be revealed in our next decade, we can’t even guess, but we’re confident it will clear even that high bar. Now if only some of those clever researchers would use their powers for good …
  1. Reputational damage is not as bad as a lot of people expect. Early critics of Retraction Watch feared that exposing retractions to sunlight would unfairly tarnish the reputations of researchers. That concern turns out to be unfounded — as long as retraction notices are clear. To be sure, scientists who cheat and get caught lose favor (if not necessarily job prospects). But those who retract for honest error, who get out in front of problematic research, don’t face a “retraction penalty.” In other words, doing the right thing pays, which is why we created a category to recognize such actions and even helped start a now-moribund award, called the DIRT, to honor good behavior.
  1. It’s not Big Pharma that’s responsible for most retractions, nor is it the journals that charge fees. Some people come to Retraction Watch with some assumptions. (Hey, so did we.) One is that studies funded or done by Big Pharma are more likely to be retracted; the other is that the advent of article processing charges led to more retractions. Sorry, folks. Big Pharma might be to blame for many ills, but studies by industry scientists are rarely retracted, and almost never for misconduct. Sure, it happens, but industry manuscripts are subjected to multiple checks before they’re published, and researchers there know their work is under scrutiny. What about journals that charge author fees to publish? Such journals can rack up the numbers, but overall they’re not a major driver either. Instead, as Ferric Fang and Arturo Casadevall found in a 2011 analysis, the most highly ranked journals tend to have higher retraction rates than less prestigious publications. (The analysis has basically held true over the years.) That means leading titles like The New England Journal of Medicine and Nature retract papers at a higher rate than their competitors. Why that’s the case isn’t entirely clear, although it could reflect the fact that more readers are scrutinizing the articles in these journals. Or, in the case of retractions for misconduct, it could indicate the “brass ring” effect, with fraudsters reaching for the most sensational prize (a top-tier publication) to catapult their careers.
  1. Lawyers play a big role in many aspects of the process, both seen and unseen. Retractions in science might seem to be the province solely of, well, scientists and perhaps publishers, but that’s not the case. Many times, lawyers get involved in the process — sometimes at the behest of an aggrieved author fighting the action, and sometimes to defend a journal against a potential suit. As Nature complained in 2014, the threat of legal action stemming from retractions has led to delays in the process and increased costs “because those under investigation increasingly turn to lawyers to defend themselves and their reputations, and their employers and journals are more frequently having to respond accordingly.” Meanwhile, some lawyers have found a comfortable niche in the area of scientific misconduct. One, John Thomas, in 2019 won the largest whistleblower case in academia, with a $112.5 million settlement paid by Duke University. With such a big payout, we’re pretty sure we haven’t seen the last of such cases.
  1. The arguments over what to call retractions — aka taxonomy — will never end. Retractions might end the same way, but they’re far from identical. Some stem from misconduct; others, as we mentioned earlier, result from honest error. Some involve problems with authorship that don’t necessarily undermine the results of the study, while others are the fault of a publisher who mishandled the article. Given that retractions have different roots and repercussions, should they all be lumped together under the same umbrella? Or do we need a new nomenclature to describe the sub-species of retractions, not only to better reflect what went wrong, but to mitigate any stigma that might unfairly attach to authors? Scholars like Daniele Fanelli have long argued for more precision in the language of retractions, but unfortunately, in our experience, discussions about reform tend to revolve around ways to use what sound like weasel words to minimize the apparent scope of the problem. So far, journals have stuck with the Trinity of Notices: Retraction, Correction (or Corrigendum or Erratum, depending on the circumstances), and Expression of Concern — not that they apply these labels at all consistently. Stay tuned.


One thought on “Ten takeaways from ten years at Retraction Watch”

  1. I agree with Fanelli on the need for more precision in the language of retractions. Equally important, I think, is the need for more discussion and eventual consensus on what criteria beyond research misconduct should lead to a retraction.
