Scientific studies are frequently the basis for news stories that seem too good to be true. How does the audience separate fact from fiction? The head of research at the UA explains.
People who watch morning talk shows or read high-traffic websites will often come across these four words at the start of an attention-grabbing story:
“A new study shows …”
Typically, these stories are quick, punchy, water-cooler conversation pieces that distill a researcher’s findings into one or two captivating talking points. Often, the headlines of these pieces will seem too good to be true.
Chocolate makes you smarter, study suggests (The Telegraph UK, March 8, 2016)
A new study says a glass of red wine is the equivalent to an hour at the gym (The Huffington Post, Jan. 8, 2016)
Beer may be good for your brain (CNN, Sept. 26, 2014)
But how truthful are these stories? Are these factual, evidence-based claims? Or are they tangential-at-best observations that may not possess any scientific validity?
Audiences will often encounter contradictory information and struggle to discern which study to believe. How are people without an extensive background in research or statistics supposed to figure out which studies are the most accurate?
Studies show many studies are false (The Boston Globe, July 1, 2014)
Kimberly Andrews Espy, the University of Arizona’s senior vice president for research, said the most effective way for a lay audience to judge a study’s legitimacy is to consider the source of the research.
“Universities such as the UA are generally better than other institutions that might have an agenda,” Espy said. “Federally funded studies go through such rigorous peer review.”
If a study has not been peer reviewed, it’s safe to assume it isn’t reliable. Generally, peer reviewers are experts in their respective fields. This process exists to help filter out the junk. It isn’t perfect — for examples of “peer review gone wrong,” take a look through Ivan Oransky’s excellent blog, “Retraction Watch” — but the peer review process is essential to the veracity of scientific research.
The research pieces that tend to generate the most coverage are the studies that subvert a reader’s expectations.
“Studies that are contrary to a person’s expectations, such as ‘oh, I never would have thought I could eat all the ice cream I want and lose 15 pounds,’ those tend to make waves,” Espy said.
These are the studies that require the most scrutiny. And this is where sample size and funding sources come into play.
If a study bases its conclusions on a sample too small to support statistically meaningful results, or makes a claim about “all people” without extending its pool to include individuals from a variety of socio-economic backgrounds, treat the study with suspicion.
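To see why sample size matters, consider a rough back-of-the-envelope calculation (a sketch for illustration, not part of the original article): the margin of error around a simple survey-style result shrinks as the number of subjects grows, so a striking finding from a handful of participants carries far more uncertainty than the headline suggests.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a sample proportion p
    observed across n independent subjects (normal approximation)."""
    return z * math.sqrt(p * (1 - p) / n)

# A 50/50 result from only 25 subjects is uncertain by roughly
# 20 percentage points; from 1,000 subjects, by about 3.
small = margin_of_error(0.5, 25)    # ~0.196
large = margin_of_error(0.5, 1000)  # ~0.031
print(f"n=25:   +/- {small:.1%}")
print(f"n=1000: +/- {large:.1%}")
```

In other words, a “surprising” effect measured in a few dozen people can easily fall inside the noise, which is one reason small-sample studies so often fail to replicate.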
Also, if companies with a clear interest in a certain set of outcomes fund the study, approach it with a high degree of skepticism. A cola company, for example, would have a lot to gain from a “scientific” study that showed the “unexpected health benefits” of soda.
Ultimately, stories that get the general public interested in scientific research are a net positive, because “they draw people in,” according to Espy, but it’s important to read between the lines to get the full story.
As for researchers, this kind of media coverage is vital for generating interest and shaping project goals. The central question for nearly every research project should be: How does this help people?
“We encourage our researchers to talk about the downstream impact and why it matters,” Espy said. “So often we get so focused on the methods, we forget to talk about what the research is and why it matters.”
– By Nick Prevenas
*Source: The University of Arizona