
Medical research needs an independent modeling panel

November 9, 2012

I am outraged this morning.

I spent yesterday morning writing up David Madigan’s lecture to us in the Columbia Data Science class, and I can hardly handle what he explained to us: the entire field of epidemiological research is ad hoc.

This means that people are taking medication or undergoing treatments that may do them harm and probably cost too much, because the researchers’ methods are careless and ad hoc.

Of course, sometimes this is intentional manipulation (see my previous post on Vioxx, also from an eye-opening lecture by Madigan). But for the most part it’s not. More likely it’s mostly caused by the human weakness for believing in something because it’s standard practice.

In some sense we knew this already. How many times have we read something about what to do for our health, and then a few years later read the opposite? That’s a bad sign.

And although the ethics are the main thing here, the money is a huge issue too. It took $25 million for Madigan and his colleagues to run a study of how good our current methods are at detecting effects we already know about. It turns out they are not good at this – even the best methods, which we have no reason to believe are actually in use, are only okay.
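To see why method choice matters so much, here’s a toy simulation – my own sketch, not Madigan’s actual methodology – of what happens when an analyst is free to pick among many “reasonable” analysis variants. Even when there is no real effect at all, some variant will often look significant:

```python
import random
import statistics

# Hypothetical illustration: null data (treatment truly does nothing),
# but each "study" tries several analysis variants (here, different
# subgroups) and keeps the best-looking result.

random.seed(0)

def one_study(n=200, n_variants=10, subgroup=50):
    # Generate null data: treated and control come from the same distribution.
    treated = [random.gauss(0, 1) for _ in range(n)]
    control = [random.gauss(0, 1) for _ in range(n)]
    best_z = 0.0
    for _ in range(n_variants):
        # Each variant analyzes a different (arbitrary) subgroup.
        sub_t = random.sample(treated, subgroup)
        sub_c = random.sample(control, subgroup)
        diff = statistics.mean(sub_t) - statistics.mean(sub_c)
        se = (statistics.stdev(sub_t) ** 2 / subgroup
              + statistics.stdev(sub_c) ** 2 / subgroup) ** 0.5
        best_z = max(best_z, abs(diff / se))
    # Nominal "p < 0.05" threshold for a two-sided z-test.
    return best_z > 1.96

# Fraction of purely null studies that yield at least one
# "significant" analysis variant.
false_hits = sum(one_study() for _ in range(200)) / 200
print(f"fraction of null studies with a 'significant' variant: {false_hits:.2f}")
```

With ten variants per study, the false-hit rate lands far above the nominal 5% – which is the whole problem with letting the method float.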

Okay, $25 million is a lot, but then again there are literally billions of dollars going into medical trials and research as a whole, so you might think that the “due diligence” of such a large industry would naturally get funded regularly with such sums.

But you’d be wrong. Because there’s no due diligence for this industry, not in a real sense. There’s the FDA, but they are simply not up to the task.

One article I linked to yesterday from the Stanford Alumni Magazine, which talked about the work of John Ioannidis (I blogged about his paper “Why Most Published Research Findings Are False” here), summed the situation up perfectly (emphasis mine):

When it comes to the public’s exposure to biomedical research findings, another frustration for Ioannidis is that “there is nobody whose job it is to frame this correctly.” Journalists pursue stories about cures and progress—or scandals—but they aren’t likely to diligently explain the fine points of clinical trial bias and why a first splashy result may not hold up. Ioannidis believes that mistakes and tough going are at the essence of science. “In science we always start with the possibility that we can be wrong. If we don’t start there, we are just dogmatizing.”
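The core of Ioannidis’s argument is a simple Bayesian calculation: the chance that a “statistically significant” finding is actually true depends on how plausible the tested hypotheses were to begin with. A minimal sketch of that calculation:

```python
# Ioannidis's point in a formula: the positive predictive value (PPV) of a
# significant result depends on the prior probability that the hypothesis
# is true, the study's power, and the significance level alpha:
#
#   PPV = (power * prior) / (power * prior + alpha * (1 - prior))

def ppv(prior, power=0.8, alpha=0.05):
    """Probability that a significant finding reflects a true effect."""
    true_positives = power * prior          # true effects correctly detected
    false_positives = alpha * (1 - prior)   # null effects flagged by chance
    return true_positives / (true_positives + false_positives)

for prior in (0.5, 0.1, 0.01):
    print(f"prior={prior:>4}: PPV={ppv(prior):.2f}")
```

When half the tested hypotheses are true, a significant result is trustworthy (PPV ≈ 0.94); when only 1 in 100 is true – as in exploratory fishing expeditions – most significant findings are false (PPV ≈ 0.14). And that’s before any bias or flexible analysis enters the picture.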

It’s all about conflict of interest, people. The researchers don’t want their methods examined, the pharmaceutical companies are happy to have various ways to prove a new drug “effective”, and the FDA is clueless.

Another reason for an AMS panel to investigate public math models. If this isn’t in the public’s interest I don’t know what is.

  1. Lrax1
    November 9, 2012 at 10:05 am | #1

    Are things different in any other field? How would we answer that question?

  2. November 9, 2012 at 10:06 am | #2

    Good post. I share your outrage, and hope this gets picked up widely. I’ve become a firm believer that “cui bono?” is always the best starting point when evaluating the results of a clinical study.

  3. GD
    November 9, 2012 at 10:49 am | #3

    I share your enthusiasm for Ioannidis’s work. There is no doubt that he is a genius, but I think there are practical matters involved that go beyond conflict of interest. For me they come down to biological variability and cost. Those issues arise both in drug trials and in the rare but potentially devastating side effects of otherwise effective medications. They also apply to the current opiate epidemic. Transparency is a partial solution, but the information published at clinicaltrials.gov does not include the mathematics and analysis of the data. All of that should be publicly disclosed. Like many physicians, I think the key data will be responses broken out by genotype rather than averaged across the entire population.

  4. AH
    November 9, 2012 at 2:41 pm | #4

    yes, this is unfortunate, but this is why we can’t put faith in man…

  5. Scott Carnahan
    November 9, 2012 at 5:14 pm | #5

    Now that the triumph of quantitative methods in predicting elections is in the news, perhaps it is an opportunity for the cause of good statistical practice to get a better foothold, either in the media or the regulatory infrastructure. With their electoral defeat fresh on the minds of the ultra-rich, we could sell it to them with a slogan like “good statistics means fewer ugly surprises.”

  6. Jess B-F
    November 9, 2012 at 11:18 pm | #6

    My eyes were opened to this issue (of pharmaceutical industry control over both the lion’s share of published medical research and over the often misleading way it is presented both to the public and to doctors) by John Abramson’s book Overdosed America. Obviously the general lack of understanding of why something as mathematically simple as Nate Silver’s election prediction system was so accurate speaks to why most people can’t and shouldn’t be expected to understand the evidence behind drug company claims (and why direct-to-consumer advertising should be banned, as it is in most of the rest of the developed world). But doctors really ought to be able to sift through the meaning behind the data. In theory. In reality, they (we) don’t have time, and it’s easier to read the “bottom line it for me” abstract in JAMA than to dig deeper. And we like to trust the people writing the guidelines that push us to prescribe more drugs for more people, and now if we don’t the insurance companies can withhold payments (or bonus payments) for “quality care” on the basis of exactly the kind of misleading crappy data you’re talking about. Hmm, I can work up a pretty good rant here if you let me.
