Oct 4, 2013

A publishing sting, but what was stung?


Before Open Science there was Open Access (OA) — a movement driven by the desire to make published research publicly accessible (after all, the public usually had paid for it), rather than hidden behind paywalls.

Open Access is by now a strong movement in its own right — follow, for example, Björn Brembs if you want to know what is happening there — and there are now reputable, peer-reviewed Open Access journals, such as PLOS and Frontiers. Some of these journals charge an article processing fee for OA articles, but in many cases funders have introduced, or are developing, provisions to cover these costs. (In fact, the big private funder in Sweden INSISTS on it.)

But, as always, situations where money is involved and the targets are desperate (please please please publish my baby so I won’t perish) breed mimics and cuckoos and charlatans, ready to game the new playing field to their advantage. This is probably just a feature of the human condition (see Trivers’s “The Folly of Fools”).

There are lists of potentially predatory Open Access journals — I have linked to some on my private blog (Åse Fixes Science) here and here. Reputation counts. Buyer beware!

Demonstrating the challenges of this new marketplace, John Bohannon published in Science (a decidedly non-Open Access journal) the results of a sting operation in which he spoofed Open Access journals to test their peer review. The papers were machine-generated nonsense — one may recall the Sokal Hoax from the earlier Science Wars. One may also recall the classic Ceci paper from 1982, which made the rounds again earlier this year (and I blogged about that one too, on my other blog).

Crucially, all of Bohannon’s papers contained fatal flaws that a decent peer reviewer should catch. The big news? Many of the journals did not catch them (though PLOS did). Here’s the Science article with its commentary stream, and a commentary from Retraction Watch.

This is, of course, interesting — and it is generating buzz. But it is also generating some negative reaction. For one, Bohannon did not include regular non-OA journals in his test, so the experiment lacks a control group, which means we can make no comparison and draw no firm inferences from the data. The reason he gives (quoted on the Retraction Watch site) is the very long turnaround time at regular journals, which can be months, even a year (or longer, as I’ve heard). I kinda buy it, but this is really what is angering the Open Access crowd, who see the piece as an attempt to implicate Open Access itself as the source of the problem. And, quite apart from the missing control group, Science published what Bohannon wrote without peer reviewing it.

My initial take? I think it is important to test these things — to uncover the flaws in the system, and also to uncover the cuckoos and the mimics and the gamers. Of course, the problems in peer review do not rest solely on the shoulders of Open Access — Diederik Stapel and other fraudsters published almost exclusively in paywalled journals (including Science). The state of peer review warrants its own scrutiny.

But I think the Open Access advocates have a really important point. As noted on Retraction Watch, because Bohannon’s study did not include non-OA journals, it is unclear whether the peer-review problems he identified are specific to Open Access at all.

I’ll end by linking to some of the commentary that I have seen so far — and, of course, we’re happy to hear your comments.

