Updates on academic fraud from across the globe

The new year starts with some encouraging news! British medical scientists call for stronger action against academic fraud. “Dishonesty is common and institutionalized in medicine and medical research”, said one of the participants in the conference. Importantly, the scientists want to classify the non-publication of negative results as serious misconduct, alongside plagiarism and data fabrication.

In the US, the Office of Research Integrity has censured for misconduct the director (and co-author) of a researcher who committed plagiarism. Failure to act on suspected fraud is rightly considered an offense in its own right.

In China, the president of Zhejiang University is leading a zero-tolerance policy against misconduct. The crackdown was partly motivated by the discovery by one Chinese journal editor that ‘31% of the 2,233 submissions over that time to her publication, the Journal of Zhejiang University — Science, contained unoriginal material‘.

The bad news is that some of the research arguing for the health benefits of red wine has been discovered to be completely bogus. I bet they deliberately waited for the end of the holiday season to announce that!

Academic fraud reaching new heights

Academic fraud is reaching new heights, or rather new lows. Dutch social psychologist Diederik Stapel (Tilburg University) is the culprit this time.

A commission looking into the issue came up with a report [in Dutch] on Monday saying that “the extent of fraud is very significant” (p.5). Stapel fabricated data for at least 30 papers published over a period of at least nine years (the investigation is still ongoing, the number can rise up to 150). Entire datasets supporting his hypotheses were made up from thin air. He also frequently gave fabricated data to colleagues and PhD students to analyze and co-author papers together.

Diederik Stapel was, until recently, an eminent and ‘charismatic’ scholar whose research made global news on more than one occasion. He was awarded a Pioneer grant by the Dutch National Science Foundation. He is the man behind a series of sexy made-up findings.

What a painfully ironic turn of events for Stapel, who also published a paper on the way scientists react to a plagiarism scandal.

The whole affair first came to light this August, when three young colleagues of Stapel suspected that something was not quite right and informed the University. What is especially worrisome is that on a number of previous occasions people had implicated Stapel in wrongdoing, but their signals were not followed up. In hindsight, it is easy to see that the data are just too good to be true – always yielding incredibly big effects supporting the hypotheses, no missing data or outliers, etc. He didn’t even show any finesse or statistical sophistication in the fabrication. Still, co-authors, reviewers, and journal editors failed to spot the fraud across so many years and so many papers.

Stapel responds that the mistakes he made were “not because of self-interest“. Interesting… A longer statement is expected on Monday. Tilburg University has already suspended Stapel and will decide what other measures to take once all investigations are over.

There are so many things going wrong on so many different levels here, but I will only comment on the role of the academic journals in this affair. How is it possible that all the reviewers missed the clues that something was fishy? A close reading should have revealed a pattern of improbably successful results. But are suspicions that results are too good to be true enough to reject an article? Probably not. They are, however, enough to request more details about how the data were gathered. And, at the very least, the reviewers could have alerted the editors. It is probably too far-fetched to expect the data to be provided with the submission for review, but a close inspection of summary statistics, cross-correlations and the like could have detected the fabrication.

But the bigger problem is the lack of incentives for replication. A pattern of strong results that cannot be replicated would have uncovered the fraud much sooner but, of course, nobody (or very few) bothered to replicate. And why would they? In a recent case, a leading psychology journal which initially published some outlandish claims about the effects of precognition refused to publish unsuccessful attempts to repeat the results, with the argument that it doesn’t publish replications! So Stapel might blame the ‘publish or perish’ culture for his misdemeanors, but journal policies have to share part of the blame.

On a side note: psychology and social psychology are especially prone to this type of data fabrication. Historians work with document sources that can easily be checked (as when a team of Dutch scholars exposed the numerous problems with the sources and the evidence in Andrew Moravcsik’s widely-acclaimed The Choice for Europe). In political science and public administration, data are often derived from the analysis of documents and observation of institutions, and, while mistakes can happen, they are relatively easy to spot. Moreover, data collection often requires a collective effort involving a number of scholars (e.g. estimating party positions from manifestos or conducting representative surveys of political attitudes), which makes fraud on such a scale less likely. I hope not to be proven wrong too soon.

For more info on the Stapel affair: an article in English is available here, and in Dutch here. Hat tips to Patrick and Toon for providing info and links.