Thursday 26 November 2015

It's nice to be nice but it's more important to be honest

Science is hard, and it can be especially hard if you're on the receiving end of criticism. As scientists we need to have thick skins because we deal with harsh criticism every day: we are bombarded with critical comments from (usually anonymous) reviewers when they tear down our latest grant applications or papers. We get critical questions at conferences. We argue with our friends, colleagues, and people we don't even know. We disagree a lot. We get frustrated. It's fair to say that disagreement and frustration are hallmarks of the job.

As a junior scientist this can take some getting used to. Most of the time the disagreement is good-natured, but occasionally it can creep into the personal.

This morning we saw an example of this when a prominent study just published in PNAS drew some flak on Twitter. Small N, no replication, big story. Personally I saw it as just another day at the office -- another unremarkable exemplar of the low empirical standards we set for ourselves in cognitive neuroscience. I realise that sounds harsh but that's just how I feel about it. We need to set higher standards, and step one is being publicly honest about our reactions to published work.

Our field is peppered with small studies pumped out by petty fiefdoms, each vying for a coveted spot in high-impact journals so we can have careers and get tenure and maybe make a few discoveries along the way. It would be disingenuous to say that I'm any different. I've got my own fiefdom, just like the rest. It's no less petty; I am no better than anyone else.

When I look at fMRI studies like the one this morning, I see how far we still have to go as a field. Does that sound arrogant? I don't care. I wrote about this recently because reproducibility is a huge problem in biomedical science and something a lot of people (but not enough) are working hard to fix. It is a bigger problem than anyone's ego, bigger than anyone's career.

Some folks get upset at the direct nature of post-publication peer review. They might know the scientists involved; they might think they're careful; they might like them. And they might think such criticism is an attack on the integrity of the researchers -- that robust post-publication peer review, pointing out probable bias or low reproducibility, is tantamount to an accusation of misconduct.

This is false, because questionable practices aren't the same as fraud and bias isn't the same as misconduct. Much, if not most, research bias happens unconsciously. It can and does distort our results despite our best efforts because we're humans rather than robots. I believe many in our community are not only blind to unconscious bias, they're blind to the possibility of unconscious bias. They think that because they're careful, their studies are robust. But once you know the extent of your own bias, it changes your mindset in a deep way. We learned this in my lab some time ago, which is why we now pre-register our studies.

Twitter is a great social leveller, allowing all kinds of voices to be heard. This is tremendous for science because it adds a layer of immediacy and diversity to peer review that busts conventions and blows traditional (stuffy) forms of interaction, and traditional hierarchies, right out the window.

So while I agree with the sentiment that it's nice to be nice to each other, I believe it's even more important to be honest. If you wave what I see as bullshit in my face I will probably call it bullshit, and I expect you to do the same to me. In fact I expect you to do the same for me because by being honest you are doing me a favour.