PLoS ONE published a “creationist” paper: some thoughts on what followed
As everyone knows by now, PLoS ONE published what seemed to be a creationist paper. While references to the ‘Creator’ were few, the wording of the paper strongly supported intelligent design in human hand development. A later statement from the first author seemed to eschew actual creationism, but maintained a teleological (if not theological) view of evolution, saying that human limb evolution is unclear. The paper was published January 5, 2016, but at first it received little attention. The first comment on the PLoS ONE site came on March 2, when things blew up on Twitter under the #handofgod and #creatorgate hashtags. (As far as I could tell, the paper’s URL had not appeared on Twitter before March 2, except for a single mention the day it was published.) On March 3, PLoS ONE announced that a retraction was in process.
Open Access is not broken
Probably the strangest reaction I have seen to #handofgod was this article in Wired, which trotted out the old trope that open access articles are poorly reviewed. I thought we were already past that, and that at least science writers had educated themselves on the matter. Review quality has nothing to do with the licensing of the journal! Tarring all OA publications with the same brush, without even saying why open access is relevant to this problem, is simply poor journalism.
Oh, and please stop confusing Open Source (for licensing software) with Open Access (for licensing research works). The two terms stem from the same philosophy of sharing and reuse, but they are best not conflated.
PLoS ONE is not broken
Saying that publishing this paper shows the failure of PLoS ONE’s publication model is like saying that because you read a news story about someone who got run over on the sidewalk, you will never again walk on a street shared with cars. PLoS ONE publishes some 30,000 papers per year. It took PLoS ONE less than two months from publication to retract this paper, and less than 48 hours from when it came onto the social media radar. In contrast, it took The Lancet 12 years to retract Andrew Wakefield’s infamous paper on vaccines and autism; a paper that was not just erroneous but ruled to be fraudulent, and that caused far more damage than a silly ID paper. Also, I am still waiting for Science to retract the Arsenic Life paper from 2010, and for Nature to retract the Water Memory paper from 1988. At the same time, I bet that few of those who clamored that they would resign from editing for PLoS ONE would turn down an offer to guest edit for Science. Here’s an idea for all PLoS ONE editors who are “ashamed to be associated with PloS one”: instead of worrying or self-publicizing on Twitter or in PLoS’s comment section, take up another paper to edit, and make sure it is up to snuff.
Or, you know what, go ahead and resign; if your statistical and observational skills are so poor as to not recognize your own confirmation bias, you should not be editing papers for a scientific journal.
We should not move to a system of exclusively post-publication peer review
One argument made was that peer review failed because this paper got through. OK, people die in car crashes even when they wear seat belts. That doesn’t mean you should never wear your seat belt because you may die anyway. Pre-publication peer review is a safety valve: it helps maintain a certain level of quality and interest appropriate for the journal at hand. In PLoS ONE, that means anything that is scientifically sound. In other journals, topical interest, as well as some gauge of novelty or impact, may come into play as well. Like seat belts, it is not 100% reliable (obviously), and it is hugely problematic (OK, this is where the seatbelt analogy breaks down, pun intended).
Exclusively post-publication peer review might be good mostly for those who have already established themselves as prominent scientists, and whose papers will be read anyway. I have yet to hear of a post-publication plan that helps filter and rank papers somehow. And no, the good science will not always “make it” somehow. Yes, pre-publication peer review can be horribly slow and unfair. But doing away with it completely is not a viable solution to publication woes, especially when a viable alternative is not proposed. But see here for an alternative and interesting, if somewhat open-ended, worldview. Also, see below about making pre-publication reviews public.
(Added later) There is also the worry that the mob mentality of post-publication review may lead the editors of a journal to a harsher response than is actually warranted. This concern was voiced about the swift retraction by PLoS ONE.
Alternative metrics measure interest, not quality
The issue of altmetrics per se has nothing to do directly with the #handofgod paper, but the number of tweets and Facebook shares of this article’s URL shot through the roof (1,446 as of this writing). Altmetrics advocates keep saying that counts of social media chatter, downloads, and web views are a more reliable metric of the interest in a paper than, say, traditional citations. (And, of course, than the much-maligned and manipulated Journal Impact Factor, which even Thomson Reuters, which originated it, says is an inappropriate metric for assessing individual papers, authors, or institutions.) Altmetrics advocates are probably correct that a high altmetric score shows a high level of interest on social media, but interest in a paper, by itself, is not necessarily a good thing. You need additional metrics to complement it and indicate whether the interest a paper attracted comes from a good place. Also, your paper may merit social media interest and downloads but not receive them, for various reasons.
If your paper is really bad, you may get attention on social media, and many views and downloads. If your paper is really good, you may also get attention on social media. But you won’t get attention simply because your paper is really bad or really good. You will get attention because your paper is an attention getter. If you publish a population survey of fish in an obscure pond over 5 years, and completely mess up the diversity equations, no one will notice. If you publish an interesting variation on how to build phylogenetic trees, you may be heralded in your sub-field, but not much more. If your paper is picked up by a media outlet, or by a large journal’s news section, you will get more attention. But that only means your paper is highly relevant to current public interests, in a good or bad way.
Relevant note: one way to guarantee your paper gets high altmetrics is to have it discussed on Retraction Watch. You probably don’t want that.
So if your paper is sexy in a good or bad way, and it happens to get tweeted by someone with many followers, you will get a high count. Research idea: check the correlation between corresponding authors’ number of Twitter followers and their papers’ altmetric counts.
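For what it’s worth, the statistical part of that research idea is trivial; the work would be in gathering the data (follower counts from Twitter, scores from an altmetrics provider). A minimal sketch, using made-up illustrative numbers — both the data and the function name here are hypothetical:

```python
import numpy as np

def follower_altmetric_correlation(followers, altmetric_counts):
    """Pearson correlation between corresponding authors' Twitter follower
    counts and the altmetric scores of their papers."""
    x = np.asarray(followers, dtype=float)
    y = np.asarray(altmetric_counts, dtype=float)
    # np.corrcoef returns the 2x2 correlation matrix; take the off-diagonal.
    return float(np.corrcoef(x, y)[0, 1])

# Hypothetical sample: follower counts and altmetric scores for eight papers.
followers = [120, 450, 3000, 80, 15000, 900, 60, 2500]
scores = [5, 12, 95, 3, 410, 30, 2, 70]
r = follower_altmetric_correlation(followers, scores)
```

With this toy sample r comes out strongly positive, but of course the invented numbers prove nothing; a real test would also want to control for confounders such as field size and journal visibility.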
We should make reviews public
This is probably the only good idea I have heard so far for preventing a recurrence of the #handofgod mess. I am not sure the paper’s reviews and editorial decision will ever be made public, but I am confident that the reviews do not flag the ID issue as problematic, or, on the slight chance that any of them do, that the editor did not acknowledge the ID rationale when approving the paper for publication. We have all received the occasional review that was lazily written and completely uninformative. It may have been positive and uninformative (“I have no comments, good paper”) or negative and uninformative (“this paper should not be published in your journal”). If reviews were public, the editor would be forced to get decent and informative reviews, or to look for other reviewers. And once a paper is out, we would be able to see how and why it made it past review.
Note on reviewer anonymity: personally, I would prefer public reviews where the reviewers have the choice to remain anonymous. For better or worse, many scientists’ careers, especially those of junior scientists, still depend on the good graces of their colleagues; and scientists can be just as petty and vindictive as the rest of the human race. Anonymity helps the little fish be honest about the big fish without fear of retribution. Yes, it may also foster less-than-honest reviews from the little fish, but that is why several reviewers are used. PeerJ and eLife already have public anonymous (or signed by choice) peer reviews.