Well, that didn't last long. Gorgeous spring day – and then the temperature dropped, the clouds rolled in and it started to rain. I'll just have to console myself with a cup of ‘tea’ (except, as you know, I don't drink tea). Besides, it will help me fight off the Jabberwock.
If you are just joining us, we've been talking about conflicts of interest (or COIs), which – in academia – generally equate to corporate connections that may be perceived to impact our ability to be completely objective about our science. That is, if the result of my experiment can somehow influence my income, we should worry about how objective that result really is (or at least know about the connection to decide for ourselves how worried we should be). And I pointed out that if one does research professionally, in any context, then this manxome COI rushes in, burbling as it comes. And he's not alone: our natural human desire to garner the approval of our peers is just as potent a COI, a frumious bandersnatch that can undermine our credibility in the eyes of the public. (Doubt this? One need look no further than the many journalistic reports that we scientists are afraid to propose hypotheses or conclusions that run outside the accepted wisdom – and almost always, these are pointed out when the accepted wisdom is unpopular with those who raise such concerns about scientific bullying. We'll come back to this.)
The solution for each of us is to recognize, isolate, and fight these COIs, defending our credibility as the most valuable asset we hold. But the problem, once we see these monsters, is that we have to wonder about everyone else. How do we know that these COIs have not subjugated the authors of any published work? I'm not talking about outright fakery (or fudgery), but something more subtle: that loss of objectivity that can make us deceive ourselves and, in turn, anyone who hears about it. (I've talked about fakery before and, yes, this arises in the context of COIs, but the result is ultimately the same – a loss of trust.)
There are those who argue that we should diligently police the system, requiring extensive institutional review of every publication (“Okay, Prof. Whippet, now please show the committee the raw data for supplemental figure 17B. We will do 17C after dinner…”). This is enormously costly in time, effort and resources, and ultimately does more harm than good – if we don't trust each other at all, why should the public trust a committee to vet us? (The exception, of course, is when the issue of fakery has been raised, in which case this is our only option and, indeed, is all we can do to try to retain the trust of the community.)
One answer is that science is self-correcting: we generally try to repeat work we read about, and things that don't repeat fall to the side of the science highway. The highway is pretty cluttered, and it would be a good thing to clean it up. But here's the problem: try as we may, publishing negative data is very difficult.
This problem – how to publish negative data that refutes published findings – is something a lot of scientists are thinking about and, yes, blogging about. Demands that journals pave the way to such refutations abound. And it is easy to see why: a new trainee is put onto a project that begins with someone else's published observation and, after months (or years) of effort, compounded by the necessary training and experience, it ends in desperate tears. At least this failure should be communicated to save others from the same terrible fate, right? But the journals are resisting, and this is for a couple of reasons.
First of all, it is very hard to prove a negative. Lots of experiments don't work for lots of technical reasons and, even with extensive trouble-shooting, we can be tripped up by unanticipated variables that doom our work. So actually proving that an observation is wrong often requires that we can show why it was wrong, not only that it didn't work in our hands.
Which leads to the second problem: because there are so many variables, negative results are often not very interesting. Journals are generally pretty cool to the idea that uninteresting things can be important, and they are not terribly invested in devoting their precious pages (and the effort that goes into generating them) to such things. I'm not saying they're right, but it seems to be the way of things.
Often, though, what we have is something between the extremes of “it didn't work for me” and “I know exactly why it seemed to work.” We can show that our data are convincing and lead to conclusions, but these are inconsistent with what has been published. And sometimes this is sufficiently interesting to generate appropriate controversy in the field, which is a very good thing (and part of the fun of science). But (and you knew there was a ‘but’), it often happens that, while the first paper was published in a big, glossy journal (or one with lovely soft pages), we are outraged that ours is relegated to one of the ‘trades’. Unfortunately, our work isn't viewed as ‘new’ enough, even if we think we have it right and they have it wrong. The high-impact work is repeated in review after review, and our work is seen as low-impact muttering.
We are outraged, and we shout and stamp that we are being excluded. A cadre of the elite is working to suppress our views, and people should know about it. Scientific bullies are beating us up, and they have to stop. And we play right into the hands of those outside of science who will share our outrage, but for their own ends – casting doubt not only on one observation, but on all observations, and these folks have COIs that make our Jubjub birds look like parakeets.
Of course, being the Mole, I have a few suggestions. First, we need a way to publish negative findings that can raise issues of validity without requiring that every negative result be couched in new conclusions – a way to say, “it didn't work, and here's why I think they got it wrong.” A way to present such findings in a forum that is wary of COIs but open to response. And then, we need a way to get other people to notice.
In this era of online publication, some journals are exploring the creation of electronic ‘sister’ journals in which papers deemed not-quite-terrific-enough for the print version might find a home. And these are freely available to anyone with connectivity, with no subscription required. Yes, these tend to be pretty low-impact affairs but, as we'll see, this may improve if things play out as I suggest. Use the online publication to air negative results, inviting responses from the authors of the original, refuted work, and then link this online paper to the original. There is a catch, but it may turn out to be a good one – these journals charge for publication (although there are ways around this if financial hardship is an issue), but that may help to limit the “it didn't work for me, I don't know why” sorts of claims.
But now, someone has to actually notice. The link to the original paper is a start. But if the refutation is actually important, then the refuters have a lot more work to do. Explain how the refutation affects our thinking about important matters, and publish findings that take us in that direction. Hopefully, in time, the correct answers will find their way into the reviews. And in so doing, the impact of the online publications will rise. That is, if others tend to find that the refuters were actually correct (if not, the refutation will, itself, fall by the side of the science highway; and, yes, we can link the refutation of the refutation – also online and freely available to anyone who wants to look).
Messy? Yes, it is. But airing negative results and those that take us in new directions is something we've always done on a small scale, through discussions in meetings and in our own labs, and maybe we should move this to the global community. Science is always messy until we sort things out and move forward to make our new messes. There are always jabberwocks, bandersnatches and jubjub birds whiffling through the tulgey wood, but with some uffish thought, and a lot of patience, we come galumphing back.
© 2011.