Tuesday 25 September 2012

Why I will no longer review or publish for the journal Neuropsychologia

 
A quick post.

I recently had an interesting experience with the journal Neuropsychologia, which led to a personal decision that some of my colleagues will probably think is a bit rash (to which my answer is: hey, it's me, what do you expect?!).

We submitted a manuscript relating pre-existing biases in spatial perception to the effects of transcranial magnetic stimulation (TMS) on spatial perception performance. The results are interesting (we think), even though there are some 'weaknesses' in the data: one of the significant effects is reliable in itself but doesn't dissociate significantly from another condition that is non-significant. For this reason we were careful in our interpretation. The study was also reasonably well powered compared with other studies in the field.

The paper was eventually rejected after going through two rounds of review. Once the initial downer of being rejected had passed, I realised that the reason for the rejection was simple: it wasn't that our methodology was flawed or incomplete; it was that the data didn't meet the journal's standard of perfection.

This obsession with data perfection is one of the main reasons why we face a crisis of replicability and dodgy practices in psychology and cognitive neuroscience.

So after some consideration, I wrote to the action editor and the editor-in-chief and officially severed my relationship with the journal. The email is below. I'm a bit sad to do this because I've published with Neuropsychologia before, and reviewed for them many times -- and they have published some good work.

However, my gripe isn't with either of the editors personally, or even with the reviewers of our specific paper (on the contrary, I am extremely grateful for the time and effort everyone invested). My problem is with the culture of perfection itself.

For that reason I'm leaving Neuropsychologia behind and I urge you to do the same.
_________

Dear Jennifer and Mick,

I wanted to write to you concerning our rejected Neuropsychologia manuscript: NSY-D-12-00279R "The predictive nature of pseudoneglect for visual neglect: evidence from parietal theta burst stimulation".

Let me say at the outset that I am not seeking to challenge the decision. The reviewers make some excellent points and I'm very grateful for their considered assessment of the paper. I'm also grateful that you sought an additional review for us when the decision seemed to be a clear 'reject' based on the second review alone. That said, I would like to make a couple of comments.

First, the expectations of reviewers 2 and 3 about what a TMS study can achieve are fundamentally unrealistic. Indeed, it is precisely such unrealistic expectations of 'perfect' data that have created the file drawer problem and the replicability crisis in psychology and cognitive neuroscience. It is also this pressure that encourages bad practices such as significance chasing, flexible data analyses, and cherry picking. All of the reviewers commented that our study was well designed, and it is manifestly well powered with 24 participants. If we had simply added another 10 subjects and presented 'cleaner' results, I wonder how many of the reviewers would have spotted the fatal flaw in doing so: topping up the sample after looking at the data, without any correction for data peeking. I suspect none.

Second, a number of the comments by the reviewers are misplaced. For instance, in commenting on the fact that we found a reliable effect of right AG TMS but not left AG TMS on line bisection performance, Reviewer 3 notes that "One cannot state that two effects are statistically different if one is significant and the other is not. A direct comparison is necessary." This is true, but it is also a straw man: we never state (or require) that the effects are statistically different between left AG and right AG. Our predictions were relative to the Sham condition, and we focus our interpretation on those significant effects. Similarly, Reviewer 2 challenges the categorisation of our participants into left and right deviants, noting the variable performance in the initial baseline condition. But this variation is expected, and we show with additional analyses that it cannot explain our results. Reviewer 2 simply disagrees, and that disagreement is apparently sufficient grounds for rejection.

Overall, however, my main concern isn't with our specific paper (I am confident we will publish it elsewhere, for instance in PLoS One where 'perfect' data is not expected). My real problem is that by rejecting papers based on imperfect results, Neuropsychologia reinforces bad scientific practice and promotes false discoveries. It worries me to think how many other papers submitted to Neuropsychologia are rejected for similar reasons. As Joe Simmons, Leif Nelson, and Uri Simonsohn note in their recent Psych Science paper on 'false-positive psychology', "Reviewers should be more tolerant of imperfections in results. One reason researchers exploit researcher degrees of freedom is the unreasonable expectation we often impose as reviewers for every data pattern to be (significantly) as predicted. Underpowered studies with perfect results are the ones that should invite extra scrutiny." (Simmons, Nelson, & Simonsohn, Psychol Sci. 2011 Nov;22(11):1359-66.)

Based on my previous experiences as both an author and reviewer for Neuropsychologia, I have long suspected that a culture of 'data perfection' dominates at the journal. In fact, I have to admit that - for me - the current submission served as a useful experiment to test whether this culture would prevail for a study that is robust in design but with 'imperfect' (albeit statistically significant) results.

For this reason, my main purpose in writing is to inform you that I will no longer be submitting manuscripts to Neuropsychologia or reviewing for the journal, and I will be encouraging my colleagues to do the same. Please note that this is in no way a criticism of you personally, but rather a personal decision to oppose what I see as a culture in need of active reform. I felt I owed you the courtesy of letting you know.

best wishes, Chris
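_________

A postscript on the data-peeking point in the letter above. The sketch below is a minimal simulation, not our study's analysis: it assumes, purely for illustration, a simple one-sample design with no true effect, an initial sample of 24, and a single uncorrected top-up of 10 more subjects whenever the first test is non-significant. Even this one extra 'peek' pushes the false positive rate above the nominal 5%.

```python
# Minimal sketch: uncorrected data peeking (test at n=24; if p >= .05,
# add 10 more subjects and test again) inflates the false positive rate,
# even when there is no true effect at all.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n_initial, n_topup, alpha = 10_000, 24, 10, 0.05

false_positives = 0
for _ in range(n_sims):
    # One-sample design under the null: no true TMS effect.
    sample = rng.normal(0.0, 1.0, n_initial)
    p = stats.ttest_1samp(sample, 0.0).pvalue
    if p >= alpha:
        # First look was non-significant, so 'top up' the sample and
        # retest without any correction for the extra look.
        sample = np.concatenate([sample, rng.normal(0.0, 1.0, n_topup)])
        p = stats.ttest_1samp(sample, 0.0).pvalue
    false_positives += p < alpha

print(f"Nominal alpha: {alpha}")
print(f"Observed false positive rate: {false_positives / n_sims:.3f}")
# Typically lands around 0.07-0.08 rather than 0.05 for this scenario.
```

With more looks, or combined with the other 'researcher degrees of freedom' that Simmons and colleagues describe, the inflation gets considerably worse.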
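And a second postscript, on the statistical point Reviewer 3 raises (which is correct in general, just not relevant to the claims we actually made). The toy simulation below uses made-up effect sizes and variable names, not our data: it shows how one condition can differ reliably from zero while another does not, even though the direct comparison between the two conditions is itself non-significant, which is exactly why any claim that the two effects differ needs that direct test.

```python
# Toy illustration (arbitrary numbers, not real data): one effect can be
# significant and another non-significant while their direct comparison
# is itself non-significant.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 24
# Simulated within-subject effects (change from Sham), arbitrary units.
right_ag = rng.normal(0.30, 0.6, n)   # modest true effect
left_ag = rng.normal(0.15, 0.6, n)    # weaker true effect

print("Right AG vs zero:", stats.ttest_1samp(right_ag, 0.0).pvalue)
print("Left AG vs zero: ", stats.ttest_1samp(left_ag, 0.0).pvalue)
# Only the direct (paired) comparison licenses a claim that the two
# effects differ from each other -- and it can easily be non-significant
# even when one effect clears p < .05 and the other does not.
print("Right vs Left:   ", stats.ttest_rel(right_ag, left_ag).pvalue)
```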


Saturday 8 September 2012

A response to a wayward defence against a concerted critique of a bad argument about science and religion

And breathe. 

Ananyo Bhattacharya, chief online editor of Nature, has penned a strident defence of this remarkable piece by Daniel Sarewitz on science and religion. In his response, Bhattacharya takes issue with my critique of Sarewitz's arguments, which you can read here.

I've responded to Ananyo, but the moderators at Discover magazine don't seem to work weekends (fair enough), so I've copied my response below.

---

As the “one critic” that Ananyo cites, I guess I ought to respond. I have a lot of respect for Ananyo, but his post strikes me as a muddled mix of non sequiturs and rapidly shifting goal posts.

First, the example of MRI is a straw man. I argued that the scientific method is the best way of understanding reality, but this in no way reduces the mode of that understanding to any one form of investigation. I could just as easily adopt a scientific method to explore the psychology and phenomenology of a person’s response to The Dark Knight and, in doing so, learn a great deal about emotion and cognition. Unless, that is, Ananyo is arguing that psychology isn’t a real science, or that studying the brain is the only way to understand mental processes (I sincerely hope he isn’t).

To give Ananyo the benefit of the doubt, perhaps he is instead referring to the hard problem of consciousness: the idea that no amount of scientific enquiry can ever fully illuminate the subjective experience of another person. In other words, could I ever know whether your experience of The Dark Knight is the same as mine? Some philosophers, like Dennett, have argued that the hard problem is itself an illusion, but even if it is a genuine question, the answer may lie behind a technological barrier rather than a philosophical one. Unless there is a ghost in the machine, or a supernatural world beyond our ability to study, anything that can be ‘experienced’ can conceivably be measured and studied in a scientific manner.

Second, Ananyo argues that because I believe the scientific method is the best way of understanding reality, I and other critics are “bash[ing] those that dare to suggest that one might experience wonder and awe” and “dismiss[ing] culture without a second thought”.

This is another straw man, and a mildly offensive one at that. My point in responding to Sarewitz was simply that such feelings of awe and wonder – such as religious experiences – tell us nothing about reality. End of. A scientific study of wonder and awe could tell us about the basis of those emotions, but simply experiencing something is not the same as studying it or understanding it. I would argue that to understand something requires us to interpret our experiences through a rational filter.

Third, in my critique of Sarewitz I said that science is not just the best way of understanding reality, but the “best and only” way. I agree that the use of “only” here is debatable, and whether others agree may depend on their definition of what science is. There is no real consensus on the necessary and sufficient conditions for something to be “scientific”, but my view is quite open, which is to say that science – in its most basic form – is simply a way of appraising evidence through logic. How, and how stringently, this methodology is applied varies across academic disciplines. But the scientific method is by no means the purview of the traditional sciences; many disciplines in the humanities (e.g. history) adopt what I would regard as a form of the scientific method, and the historians I know agree.

For anyone interested, Neuroskeptic’s post on ‘what is science’ is well worth reading.

Fourth, Ananyo equates the criticism of Sarewitz with logical positivism. I’ll happily admit my knowledge of philosophy isn’t great, but my understanding is that the arguments made by me and others are equally consistent with postpositivism. And if not, why not?

Finally, it’s disappointing to see the pejorative “scientistas”, as though the critiques of Sarewitz were necessarily an argument for scientism (yet another straw man, sigh). Perhaps Ananyo means this in a tongue-in-cheek way, but as written it comes across as an insult to many readers of Nature. Good luck with that one, mate!