This article is an excerpt from the Shortform book guide to "Bad Science" by Ben Goldacre. Shortform has the world's best summaries and analyses of books you should be reading.
Like this article? Sign up for a free trial here.
What is publication bias in research? Why do some companies only publish good results?
Publication bias is a problematic tactic in which researchers and companies publish only the studies that favor their treatments and quietly bury the rest. In Bad Science, Ben Goldacre goes into more detail about why publication bias in research needs to be stopped.
Let’s look at why companies give in to publication bias in the first place.
Publish the Good, Bury the Bad
Goldacre argues that in order to make their treatments seem more effective, companies only publish results when their treatments perform well, and they actively conceal negative results. This is commonly referred to as publication bias in research. For instance, if a company conducts three trials for a new drug, and the drug only performs well in one of them, that company could choose not to publish the other two studies. By hiding these crucial pieces of data, the company makes its treatment seem more effective than it really is.
(Shortform note: As Goldacre describes, publication bias often occurs because drug companies want their drugs to sell. However, some researchers believe that journal editors may also be responsible for publication bias, even in credible, peer-reviewed journals. In order to sell subscriptions and stay relevant, editors may seek out studies in which new treatments were shown to have strong effects, as these kinds of advancements are seen as more exciting for readers.)
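To see why the three-trial example above matters, here’s a toy simulation (not from the book; the setup and numbers are purely illustrative) of how a drug with no real benefit can look effective if only the best-looking trial is published.

```python
# Toy simulation (illustrative only): a drug with zero true benefit is tested
# in three small trials. Publishing only the best-looking trial inflates the
# apparent effect; averaging all three shows the truth.
import numpy as np

rng = np.random.default_rng(0)

def run_trial(n=50, true_effect=0.0):
    """Return the observed drug-vs-placebo difference for one simulated trial."""
    drug = rng.normal(true_effect, 1.0, n)   # outcomes in the drug arm
    placebo = rng.normal(0.0, 1.0, n)        # outcomes in the placebo arm
    return drug.mean() - placebo.mean()

results = [run_trial() for _ in range(3)]
print("All three trials:", np.round(results, 2))
print("Average over all trials:", round(np.mean(results), 2))      # hovers near zero
print("Only the 'best' trial published:", round(max(results), 2))  # looks positive
```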
As an example, Goldacre notes that when selective serotonin reuptake inhibitors (SSRIs) were being researched as a treatment for depression, pharmaceutical companies buried multiple studies in which SSRIs were shown not to work better than placebos. By hiding these lukewarm studies, the pharmaceutical industry was able to make sure SSRIs made it to market as quickly as possible.
(Shortform note: While these early studies of SSRIs reported that they vastly outperformed placebos in treating depression, more recent studies have shown that SSRIs only outperform placebos by a small margin. One possible explanation for this change is that publication bias in earlier studies made SSRIs seem more effective than they really are. However, researchers have noted that depression is now more widely diagnosed than it used to be, and that this may have also impacted results. According to this argument, because milder cases of depression are now treated with SSRIs, the relatively small impact of treatment in those cases may help explain the small clinical impact of SSRIs in general.)
In addition to omitting mediocre results, drug companies sometimes hide evidence that their products cause harm. In particular, Goldacre references Vioxx, a painkiller that was marketed from 1999 to 2004. The company that manufactured Vioxx hid evidence that the drug increased the risk of heart attack and pushed it to market anyway. Goldacre estimates that Vioxx caused tens of thousands of heart attacks in its brief time on the market. It’s important to understand and identify bad science because, as this case illustrates, drugs pushed to market irresponsibly can cause tremendous harm.
(Shortform note: In addition to drug companies, irresponsible regulators may also play a part in allowing dangerous drugs to reach consumers. For example, the chairman of the safety board tasked with evaluating Vioxx owned tens of thousands of dollars of the company’s stock. This clear conflict of interest created a financial incentive for irresponsible behavior.)
Spotting Publication Bias
To spot publication bias, Goldacre recommends analyzing as many studies as possible when evaluating any given treatment. Studies with larger sample sizes and better funding should generally agree with each other: their size and superior methods tend to produce consistent results. Smaller studies, by contrast, should naturally produce a wider range of results. If the smaller studies all agree with each other, it may be a sign that some studies have been omitted.
(Shortform note: Some statisticians argue that comparing results may not be a useful tool for spotting publication bias. According to this argument, small studies might agree with each other for many reasons that are unrelated to publication bias. Furthermore, some researchers don’t feel we need to test for publication bias at all—they argue that because it’s such a widespread problem, you should always assume publication bias is present.)
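To make Goldacre’s heuristic more concrete, here’s a rough sketch in Python (illustrative only, not from the book; the effect size, study counts, and the rule for which trials get published are all assumptions). It simulates large and small trials and shows how selectively publishing only the favorable small trials leaves a suspiciously narrow, uniformly positive cluster.

```python
# Rough sketch of the "compare small studies to large ones" heuristic.
# Small studies should scatter widely around the large-study estimate; if the
# published small studies cluster tightly on the favorable side, suspect that
# unfavorable trials have gone unpublished.
import numpy as np

rng = np.random.default_rng(1)
TRUE_EFFECT = 0.1  # assumed modest real benefit

def simulate_study(n):
    """Observed effect and its standard error for one simulated trial of size n."""
    drug = rng.normal(TRUE_EFFECT, 1.0, n)
    placebo = rng.normal(0.0, 1.0, n)
    effect = drug.mean() - placebo.mean()
    se = np.sqrt(drug.var(ddof=1) / n + placebo.var(ddof=1) / n)
    return effect, se

large = [simulate_study(2000)[0] for _ in range(5)]
small = [simulate_study(30) for _ in range(40)]

# Simulated publication bias: only small trials with clearly positive results appear.
published_small = [eff for eff, se in small if eff > se]

print("Large studies (cluster near the true effect):", np.round(large, 2))
print("Spread of ALL small studies:      ", round(np.std([e for e, _ in small]), 2))
print("Spread of PUBLISHED small studies:", round(np.std(published_small), 2))
# A narrower, uniformly positive set of small studies is the red flag Goldacre describes.
```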
———End of Preview———
Like what you just read? Read the rest of the world's best book summary and analysis of Ben Goldacre's "Bad Science" at Shortform.
Here's what you'll find in our full Bad Science summary:
- The strategies researchers, corporations, and journalists use to mislead the public
- The tools you need to identify and call out shady science when you see it
- Why media outlets have an ethical obligation to publish retractions