A conversation on twitter today got me all jazzed about, among other things, science communication. Which, duh, I’ve obviously been jazzed about that since… well, forever, but SPECIFICALLY, today, I am thinking about how we scientists report our findings.
If you are unfamiliar, research science is reported in peer-reviewed literature. What's that? Basically, there are a bunch of scientific journals, like Science and Nature and many others that aren't as well known outside of specific fields. So basically, when I finish some piece of research, something I and my colleagues deem reportable, we write it up and submit it as an article to one of these journals. A journal editor then sends it on to several reviewers – scientists. They read it and offer comments as well as an assessment of the validity and relevance of the work. The editor then bases the decision of whether to publish the research on those comments. Thus, peer reviewed.
So there are a couple of issues here. First – peer review is an imperfect process. It's not as if scientists are paid as reviewers – in fact, that would be a conflict of interest – which often means that we do it in our very limited spare time. It's also frequently the case that reviewers are not experts in the specifics of the research they are reviewing. Toxicology is a very broad field, for example – I am an expert in a handful of chemicals, a handful of experimental techniques, and a handful of computational approaches. It isn't often that researchers other than the ones I directly work with publish studies that fit perfectly in my particular areas of expertise. Sometimes this can be a good thing, with reviewers offering a fresh perspective, but it can also be bad – they might not know the intricacies of an experimental approach, or the necessary caveats for interpretation. Despite those issues, the cream rises to the top over time as good, solid, repeatable research gets cited, and poor research does not.
HOWEVER. There ARE major issues, in my opinion. One obvious one is how science is reported in the media. This is actually not the thing that is bugging me today. The other big problem that is making me CRAZY is that as scientists, we are not well trained to communicate our results. In fact, we are trained to do it POORLY. We are told that you can’t publish negative results – there aren’t a lot of studies showing “no association” or “no significant effects.” It is notoriously hard to get stuff like that published, because what’s the point? What’s the relevance? Where’s the headline? So a lot of times, people overstate their findings, or rather, the significance of their findings. That’s how you end up with headlines screaming “Eggs: worse than cigarettes!”
And, worse, it’s even harder to get studies designed to show negative results FUNDED. Getting research money these days is incredibly challenging – grant funding rates are abysmal, with well below 10% of applications getting funded. So career scientists spend a huge portion of their time just writing grants and writing grants, because you need several at a time, and so few get funded… Ahh! Anyway, to get a grant funded, you have to show the relevance of the research – this is a good thing, don’t get me wrong. In my fields, that usually means: what’s the bottom line for human health? Unfortunately, that ends up being “Why is X BAD for human health?” So, all results in publications and grants end up being put in that context, even if that requires significant leaps. Like, from an experiment done in cell culture at an exposure much higher than anything humans might actually experience, to “Therefore, methylethylbad is a significant danger to human health.”
Even worse, and specific to my field, is when scientists bandy about loaded terms. Like “low dose” – this phrase gets used all the time in studies and in science reporting. “Even at low doses, there is a significant effect…” – but rarely do authors define the term. And far, far too often, “low dose” does NOT mean “a dose comparable to real human exposures.” Far too often it means “a number the authors thought was small, because it had a lot of zeroes in it.” The peer review process doesn’t do a good job of catching that sort of thing. And for scientists trying to stay afloat, making those statements is sadly reinforced all the time. DRIVES ME NUTS.