Science communication

A conversation on Twitter today got me all jazzed about, among other things, science communication. Which, duh, I’ve obviously been jazzed about since… well, forever, but SPECIFICALLY, today, I am thinking about how we scientists report our findings.

If you are unfamiliar, research science is reported in peer-reviewed literature. What’s that? Basically, there are a bunch of scientific journals, like Science and Nature and a bunch of others that aren’t as well known outside of specific fields. So when I finish some piece of research, something I and my colleagues deem reportable, we write it up and submit it as an article to one of these journals. A journal editor then sends it on to several reviewers – scientists. They read it and offer comments as well as an assessment of the validity and relevance of the work. The editor then bases the decision of whether to publish the research on those comments. Thus, peer reviewed.

So there are a couple of issues here. First – peer review is an imperfect process. It’s not as if scientists are paid as reviewers – in fact, that would be a conflict of interest – but that often means we do it in our very limited spare time. It’s also frequently the case that reviewers are not necessarily experts in the specifics of the research they are reviewing. Toxicology is a very broad field, for example – I am an expert in a handful of chemicals, a handful of experimental techniques, and a handful of computational approaches. It isn’t often that researchers other than the ones I directly work with publish studies that fit perfectly in my particular areas of expertise. Sometimes this can be a good thing, with reviewers offering a fresh perspective, but it can also be bad – they might not know the intricacies of an experimental approach, or the necessary caveats for interpretation. Despite those issues, the cream rises to the top over time as good, solid, repeatable research gets cited, and poor research does not.

HOWEVER. There ARE major issues, in my opinion. One obvious one is how science is reported in the media. This is actually not the thing that is bugging me today. The other big problem that is making me CRAZY is that as scientists, we are not well trained to communicate our results. In fact, we are trained to do it POORLY. We are told that you can’t publish negative results – there aren’t a lot of studies showing “no association” or “no significant effects.” It is notoriously hard to get stuff like that published, because what’s the point? What’s the relevance? Where’s the headline? So a lot of times, people overstate their findings, or rather, the significance of their findings. That’s how you end up with headlines screaming “Eggs: worse than cigarettes!”
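
To put some toy numbers on that (a made-up simulation, nothing from a real dataset – just illustrating the “file drawer” effect): imagine a thousand labs all testing a chemical that truly does nothing, and only the ones that happen to cross the magic p < 0.05 line write it up.

    # Hypothetical illustration of the "file drawer" problem. Every study here
    # tests a true null effect (the chemical does nothing), but only the runs
    # that happen to come out "significant" get "published." All numbers are made up.
    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(0)
    n_studies = 1000      # imaginary labs all running the same experiment
    n_per_group = 20      # samples per group in each study
    alpha = 0.05          # the usual significance cutoff

    published = 0
    for _ in range(n_studies):
        control = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
        treated = rng.normal(loc=0.0, scale=1.0, size=n_per_group)  # same mean: no real effect
        _, p = ttest_ind(treated, control)
        if p < alpha:
            published += 1  # the "significant" result that makes it into a journal

    print(f"{published} of {n_studies} null studies came out 'significant'")

Roughly 5% of those no-effect studies will clear p < 0.05 by pure chance, and if those are the only ones that see print, the published record looks like the chemical does something, while the other ~95% sit in the file drawer.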

And, worse, it’s even harder to get studies designed to show negative results FUNDED. Getting research money these days is incredibly challenging – grant funding rates are abysmal, with well below 10% of applications getting funded. So career scientists spend a huge portion of their time just writing grants and writing grants, because you need several going at a time and so few get funded… Ahh! Anyways, to get a grant funded, you have to show the relevance of the research – this is a good thing, don’t get me wrong. In my fields, that usually means: what’s the bottom line for human health? Unfortunately, that ends up being “Why is X BAD for human health?” So all results in publications and grants end up being put in that context, even if that requires significant leaps. Like, from maybe an experiment done in cell culture at an exposure much higher than anything humans might actually experience, to “Therefore, methylethylbad is a significant danger to human health.”

Even worse, and specific to my field, is when scientists bandy about loaded terms. Like “low dose” – this phrase gets used all the time in studies and in science reporting. “Even at low doses, there is a significant effect…” but rarely do authors define the term. And far, far too often, “low dose” does NOT mean “a dose comparable to real human exposures.” Far too often it means “a number the authors thought was small, because it had a lot of zeroes in it.” The peer review process doesn’t do a good job of catching that sort of thing. And for scientists trying to stay afloat, making those statements gets sadly reinforced all the time. DRIVES ME NUTS.

This entry was posted in Grumpy Toxicologist, Science!.

3 Responses to Science communication

  1. HereWeGoAJen says:

    And don’t get me started on p values and how only positive studies tend to get published. That means that studies can be perfectly valid, just fall into the acceptable margin of error, and get published, while the 95% of the studies that got the (correct) negative result sit in a drawer.

    You are making me want to go back to school. P VALUES!

  2. Hillary says:

    This is fascinating. I want to print this out and make all of our reporters read it.

  3. RA says:

    Hello! Reading through the archives and came across this, to which I wholeheartedly SHAKE MY FIST IN SUPPORT! I read a piece on CNN recently about a study claiming that smoking is just as bad for your heart as eating eggs. Durrr, what? So I click through to the abstract, and look! A retrospective data review of patients with heart conditions who may or may not have remembered eating eggs in the past! Those were the compared groups! It wasn’t like: no smoking or eggs; just eggs; just smoking. Oh, no. And the conclusions were like, yeah, they’re both bad for you, but we would need to do a prospective, blinded study to really understand. Which will never happen because you can’t be like, “Hey, guy, could you smoke for a really long time while we monitor your heart and restrict you from eggs? Kthxbai.” ANYWAY. FIST PUMP.

    P.S. I had to write a manuscript on a failed study, and it was ridiculously hard. But! After a frajillion hours on it, it got published, so hoorah! It felt amazingly good.
