Wednesday 10 April 2013

Scientific publishing as it was meant to be


Last October I joined the editorial board of Cortex, and my first order of business was to propose a new format of article called a Registered Report. The essence of the format is that experimental methods and proposed analyses are pre-registered and peer reviewed before any data are collected. This publication model has the potential to cure a host of bad practices in science.

In November the publisher approved the new article format and I’m delighted to announce that Registered Reports will officially launch on May 1st. I’m especially proud that Cortex will become the first journal in the world to adopt this publishing mechanism.

For those encountering this initiative for the first time, here are some links to background material:

1. The open letter I wrote last October proposing the idea. 
2. A panel discussion I took part in last November at the Spot On London conference, where I spoke about Registered Reports.
3. My freely-accessible editorial article where we formally introduce the initiative (March 2013).
4. **Update 03/05**: Finalised author and reviewer guidelines. 
5. **Update 26/04**: Slides from my talk at Oxford where I spoke about the initiative.

Why should we want to review papers before data collection? The reason is simple: as reviewers and editors we are too easily biased by the appearance of data. Rather than valuing innovative hypotheses or careful procedures, we too often find ourselves applauding “impressive results” or bored by null effects. For most journals, issues such as statistical power and technical rigour are outshone by the novelty and originality of findings.

What this does is furnish our environment with toxic incentives. When I spoke at the Spot On conference last year, I began by asking the audience: What is the one aspect of a scientific experiment that a scientist should never be pressured to control? After a pause – as though it might be a trick question – one audience member answered: the results. Correct! But what is the one aspect of a scientific experiment that is crucial for publishing in a high-ranking journal? Err, same answer. Novel, ground-breaking results.

The fact that we force scientists to touch the untouchable is unworthy of a profession that prides itself on behaving rationally. As John Milton says in The Devil’s Advocate, it’s the goof of all time. Somehow we've created a game in which the rules are set in opposition to the goal of the game itself.

The moment we incentivize the outcome of science over the process itself, other vital issues fall by the wayside. A priori statistical power is neglected, as Kate Button and Marcus Munafò demonstrate today in their compelling analysis of neuroscience studies (and see excellent coverage of this work by Ed Yong and Christian Jarrett).
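
As a rough illustration of the power problem, here is a minimal simulation sketch in Python. It is not drawn from Button and Munafò's analysis; the effect size and sample sizes below are illustrative assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def detection_rate(n_per_group, effect_size, n_sims=10_000, alpha=0.05):
    """Fraction of simulated two-group experiments whose t-test
    reaches p < alpha, given a true standardised effect."""
    hits = 0
    for _ in range(n_sims):
        a = rng.normal(0.0, 1.0, n_per_group)          # control group
        b = rng.normal(effect_size, 1.0, n_per_group)  # treatment group
        _, p = stats.ttest_ind(a, b)
        hits += p < alpha
    return hits / n_sims

# A true but modest effect (d = 0.3, an illustrative assumption):
for n in (15, 30, 100, 200):
    print(f"n = {n:3d} per group -> power ~ {detection_rate(n, 0.3):.2f}")
```

With small samples the simulated experiment detects a genuine effect only a small fraction of the time – which is exactly the gamble described next.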

With little chance of detecting true effects, experimentation reduces to an act of gambling. Driven by the need to publish, researchers inevitably mine underpowered datasets for statistically significant results. No stone is left unturned; we p-hack, cherry-pick, and even reinvent study hypotheses to "predict" unexpected results. Strange phenomena begin appearing in the literature that can only be explained by such practices – phenomena such as poor repeatability, an excess of studies that support their stated hypotheses, and a preponderance of articles in which obtained p values fall just below the significance threshold. More worryingly, a recent study by John et al. shows that these behaviours are not the actions of a naughty minority – they are the norm.
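
To see how one such practice distorts the evidence, here is a minimal sketch of "optional stopping" – running the test after every batch of participants and stopping as soon as p dips below .05. All parameters are illustrative assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def false_positive_rate(peek_every=10, n_max=100, n_sims=5_000, alpha=0.05):
    """Simulate a purely null effect, but let the researcher run a t-test
    after every batch of participants and stop as soon as p < alpha."""
    false_positives = 0
    for _ in range(n_sims):
        a, b = [], []
        while len(a) < n_max:
            a.extend(rng.normal(0.0, 1.0, peek_every))  # group 1: no true effect
            b.extend(rng.normal(0.0, 1.0, peek_every))  # group 2: no true effect
            _, p = stats.ttest_ind(a, b)
            if p < alpha:        # "significant" -> stop and write it up
                false_positives += 1
                break
    return false_positives / n_sims

print(f"Nominal alpha = 0.05; with optional stopping ~ {false_positive_rate():.2f}")
```

Even though no effect exists, peeking at the data inflates the nominal 5% error rate several-fold in this toy setup – one mechanism behind the glut of p values sitting just below threshold.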

None of this even remotely resembles the way we teach science in schools or undergraduate courses, or the way we dress it up for the public. The disconnect between what we teach and what we practice is so vast as to be overwhelming. 

Registered Reports will help eliminate these bad incentives by making the results almost irrelevant in reaching editorial decisions. The philosophy of this approach is as old as the scientific method itself: If our aim is to advance knowledge then editorial decisions must be based on the rigour of the experimental design and likely replicability of the findings – and never on how the results looked in the end.

We know that other journals are monitoring Cortex to gauge the success of Registered Reports. Will the format be popular with authors? Will peer reviewers be engaged and motivated? Will the published articles be influential? That success depends on you. We'll need you to submit your best ideas to Cortex – well-thought-out proposals that address important questions – and, crucially, to do so before you’ve collected the data. We need your support to help steer scientific publishing toward a better future.

For my part, I’m hugely excited about Registered Reports because it offers hope that science can evolve; that we can be self-critical, open-minded, and determined to improve our own practices. If Registered Reports succeeds, then together we can help reinvent publishing as it was meant to be: rewarding the act of discovery rather than the art of performance.
___
 
I am indebted to many people for supporting the Registered Reports initiative, and my sincere apologies if I have left anyone off this list. For generating or helping to inspire the ideas (for which I take no personal credit), I’m grateful to Neuroskeptic, Marcus Munafò, Pete Etchells, Mark Stokes, Frederick Verbruggen, Petroc Sumner, Alex Holcombe, Ed Yong, Dorothy Bishop, Chris Said, Jon Brock, Ananyo Bhattacharya, Alok Jha, Uri Simonsohn, EJ Wagenmakers, Eric Eich, and Brian Nosek. I’m grateful also to Toby Charkin from Elsevier for working hard to facilitate the administrative aspects of the initiative. I also want to thank Zoltan Dienes for joining the editorial board; Zoltan will provide expert advice as part of the initiative for studies involving Bayesian statistical methods, and his paper on the advantages of Bayesian techniques over conventional NHST is a must-read. My thanks as well to many members of the Cortex editorial board for their advice and valuable consultation, especially Rob McIntosh and Jason Mattingley, and to Dario Battisti for the cover art accompanying the Cortex editorial. Finally, I am especially grateful to the Editor-in-Chief of Cortex, Sergio della Sala, for having the vision and courage to support this idea and see it to fruition. A determined and progressive EIC is crucial for the success of any new publishing format, particularly one as ambitious as Registered Reports.

3 comments:

  1. Hi, I published a little rant in BJP last year basically on how journals aren't prepared to publish the reality of data if it doesn't fit with a preconceived schema of how a successful experiment will turn out. I think your scheme is an excellent idea – disciplining not only scientists to lift our game, but also editorial boards to accept the real results of registered experiments, even if "outcome knowledge" (i.e., hindsight) invalidates the original/registered experimental plan midway. (Indeed, that is my only concern: because outcome knowledge so often changes how we view our experimental design, will scientists find themselves under pressure to massage their results to fit better with how things were originally planned and registered? Will it just shunt the dishonesty to another level? I'm eager to see how it goes!)

    John Ashton

  2. Also "posthoc storytelling: reinventng hypotheses
    to“predict” unexpected results is very similar to "inference to the best explanation" which has been the very heart of much of science. Without it "The Origin of the Species" would have no unifying argument.

    Perhaps the real problem is not with post-rationalised hypotheses, but with a lack of discipline about what counts as a good explanation of the results. David Deutsch has written and spoken about the dangers of explanationless science, about the importance of good explanations, and about the importance of good criteria for what counts as a good explanation. Even the arch-empiricist Karl Popper spent a great deal of time in his writing detailing what counts as an informative hypothesis/theory and what does not. Perhaps it isn't the post-hoc hypothesizing that is the matter, but the quality of the hypotheses as explanations, and the lack of consensus on what logical criteria scientists should use in considering an explanatory hypothesis a good one. Back to Popper: there is a world of difference between an ad hoc "saving the phenomena" type of post-hoc explanation and one that increases explanatory and informative content.

    Inference to the best explanation as a principle in fact lurks beneath much of statistics. Many tests rely on maximum likelihood, which is actually a probabilistic form of best explanation, and Bayesian statistics rests on similar assumptions. Without inference to the best explanation (post-hoc explanations), science would grind to a halt.

    Replies
    1. The problem with reinventing the hypothesis of a study is that the study was not conducted again with the new hypothesis. It's fine to look at data and interpret it in different ways, but that's the start of the scientific process, not the end.

      See Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22(11), 1359–1366.
