Doctor and journalist Ben Goldacre, along with Stuart Armstrong, is calling attention to what might be the worst and most significant example of intentional confirmation bias in recent history: a widespread problem in drug research that could be harming and killing many of us.
In his new book, Bad Pharma: How drug companies mislead doctors and harm patients, Goldacre describes how negative data goes missing for virtually all treatments and in all areas of science. And just as bad, the regulators and professional bodies expected to stamp out such practices have failed to meet the challenge.
"These problems have been protected from public scrutiny because they're too complex to capture in a soundbite," he writes, "This is why they've gone unfixed by politicians, at least to some extent; but it's also why it takes detail to explain. The people you should have been able to trust to fix these problems have failed you."
Specifically, Goldacre is referring to the way pharmaceutical companies conduct clinical trials of their drugs. He continues:
Drugs are tested by the people who manufacture them, in poorly designed trials, on hopelessly small numbers of weird, unrepresentative patients, and analysed using techniques that are flawed by design, in such a way that they exaggerate the benefits of treatments. Unsurprisingly, these trials tend to produce results that favour the manufacturer.
This means that negative results, adverse or severe side-effects, and comparisons to other trials with larger, more varied samples are all being ignored or glossed over. What does this mean for us, as non-medical people? It means the doctors who prescribe these drugs are doing so on potentially faulty information: on evidence massaged into truth by the same hands that hold the broader context behind their backs.
This matters precisely because of the complexity of the situation. Doctors are doing what good doctors do: they look at the available evidence, assess the risks, benefits and impact, discuss it with patients, and come to a conclusion on treatment. But the problem begins even before the patient enters her office: the doctor is relying on conclusions drawn by researchers who ignored proper scientific and research conduct.
Numbers and context
Here's part of the problem: If we're told that 2,000 people were successfully treated by Drug X, that might sound impressive.
But 2,000 out of what? That's a raw number, and raw numbers need context: we must know the denominator before we have a processed figure that actually tells us the truth. In other words, the "real number".
If it's 2,000 out of 2,010, that is indeed an impressive result: a success rate of roughly 99.5%. But 2,000 out of 100,000 is not. Would you consider such a drug effective when only 2% of subjects showed positive results? Of course not; a placebo alone would probably do better than that, given the strength of the placebo effect.
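To make the arithmetic concrete, here is a minimal sketch in Python, using the made-up figures from the example above (the function name is just for illustration), showing how the same raw count yields wildly different real numbers depending on the denominator:

```python
def success_rate(successes, total_patients):
    """Convert a raw count into a 'real number': the percentage of all trial subjects who responded."""
    return 100 * successes / total_patients

# The same raw number, 2,000, against two different denominators:
print(success_rate(2_000, 2_010))    # ~99.5% -- genuinely impressive
print(success_rate(2_000, 100_000))  # 2.0%  -- likely below a typical placebo response
```

The raw count is identical in both cases; only the hidden denominator changes the conclusion.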
Yet this is precisely what happens with shoddy research results and omissions, and it currently occurs on a vast scale. Essentially, Goldacre is highlighting that doctors really are prescribing drugs that are either not effective (our Drug X example, above) or worse than the alternatives. And they prescribe these drugs because they are, in effect, treating the raw number as the real number.
However, the number isn't completely raw, since it does come from trials that were meant to cook it into a real one. We might say it's half-cooked, and that is, indeed, quite unhealthy. We know to stay away from things that are obviously raw; it's the things that look prepared on the outside that usually cause the most problems, since we ingest them thinking they're fine. And this is what's happening.
We're swallowing these half-cooked numbers because the trial designs are poor and negative results are ignored, so the number is never in a position to reflect the truth.
But why?
Journals, manufacturers, and indeed the media – where many of us get our information on medicine – are mostly interested only in positive results. Science, though – and especially medical science – isn't actually interested solely in positive results. It's interested in the incremental acquisition of facts, in testing previous hypotheses against new data, in better designs: work that is long, slow and rather unexciting. To reach a "Curiosity landing on Mars" moment, to tear up at the first picture of distant galaxies, takes an enormous amount of time.
But news must be reported! And quickly.
Thus when some new drug shows some positive results, we leap on it like starved dogs. We all want a cure for HIV/AIDS, cancers, and so on. Any news that hints at reaching one is eaten up, dressed up and put on the front page. Scientists I've spoken to often express surprise at this; it happens because their often tentative claims are translated into positive-sounding press releases, then coloured in by media outlets. If this exaggerated process were to be believed, basically every published scientist in the world should've won the Nobel Prize by now (perfectly illustrated here)!
All of this makes its way into our voting hands, and then to our policy-makers.
But what about all the other research showing negative results? What about research indicating it's all harmful? Unfortunately, there's nothing flashy or sexy about scientists demolishing our hopes – but that's because science and evidence aren't aligned to our hopes. They're aligned to truth, to reality, to what is the case, not what we want to be the case. Thus again we come to the conflict: we want positive results, but science is not about positive results; it's a process for tentatively reaching the truth.
We hear what we want to hear; we remember and believe what makes us feel good. Anyone speaking to the contrary is labelled a naysayer or a pessimist, or, worse, is ignored completely. Drug manufacturers and trials, as Goldacre reports, are essentially exploiting this very human process of loving positives, of loving confirmation, and of hating or dismissing dissent. As Michael Shermer, Daniel Kahneman and others have shown, it's much harder to give up one's beliefs than to acquire or reinforce them.
But this is dangerous. Just because positive results feel good to send out doesn't make the drug's effectiveness true. The results from a particular study, or even several, may be true, but they are not sufficient to conclude that a treatment is truly effective unless we've also had access to the contradictory, negative results (should there be any, and provided they come from well-designed studies). Yet our doctors are operating on these results to medically intervene on our behalf. Instead of getting a proper meal, we're often being force-fed undercooked meat.
Conclusion
Oliver Wendell Holmes, Sr. supposedly asserted: "If all medicines in the world were thrown into the sea, it would be all the better for mankind and all the worse for the fishes." I wouldn't go that far, but I would be concerned.
It shouldn't have to be said that this is not true of every single drug, study, business, or doctor, but it is nonetheless a problem that affects all of them.
Many have raised this terrible problem in medicine before; I've heard it from scientists for years. What we're seeing are businesses manipulating a system fraught with legal grey areas and an inability to recognise failure, all so that the businesses themselves survive. This isn't a creepy 'Big Pharma' scare or whatever conspiracy theorists call it: this is a very real, very dangerous problem that affects all of us and our loved ones. I urge you to read Goldacre's piece and get his book, to find out what we can do so that medicine is about our survival, not primarily the manufacturers'.
Further Reading: Be sure to read fellow BT Blogger David Ropeik on scientists themselves perpetuating bad thinking about science.
This article originally appeared at Big Think.
Image: Africa Studio/Shutterstock.com.