Creepy big data health models

January 6, 2015

There’s an excellent Wall Street Journal article by Joseph Walker, entitled “Can a Smartphone Tell if You’re Depressed?”, that describes a lot of creepy new big data projects going on now in healthcare, in partnership with hospitals and insurance companies.

Some of the models come in the form of apps, created and managed by private, third-party companies, that try to predict depression in, for example, postpartum women. According to the article, many of the women aren’t told what these companies are doing, or the extent of it. At the end of the day, the companies own the data they’ve collected and, presumably, can sell it to anyone interested in whether a woman is depressed. Future employers, for example. To be clear, this data is generally not covered by HIPAA.

Perhaps the creepiest example is a voice analysis model:

Nurses employed by Aetna have used voice-analysis software since 2012 to detect signs of depression during calls with customers who receive short-term disability benefits because of injury or illness. The software looks for patterns in the pace and tone of voices that can predict “whether the person is engaged with activities like physical therapy or taking the right kinds of medications,” Michael Palmer, Aetna’s chief innovation and digital officer, says.

Patients aren’t informed that their voices are being analyzed, Tammy Arnold, an Aetna spokeswoman, says. The company tells patients the calls are being “recorded for quality,” she says.

“There is concern that with more detailed notification, a member may alter his or her responses or tone (intentionally or unintentionally) in an effort to influence the tool or just in anticipation of the tool,” Ms. Arnold said in an email.

In other words: for “fear of gaming the model,” Aetna won’t disclose the creepy methods it’s using. Also, considering that the targets of this model are receiving disability benefits, I’m wondering if the real goal is to catch someone off their meds and disqualify them from further benefits, or something along those lines. Since they don’t know they’re being modeled, they’ll never know.
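
Aetna’s software is proprietary, so we can only guess at what “patterns in the pace and tone of voices” means in practice. For the curious, here is a minimal sketch (in Python, using the open-source librosa audio library) of the kind of features such a model might compute from a recorded call. Every choice in it, from the features to the thresholds, is my illustrative assumption, not Aetna’s actual method.

```python
# Illustrative only: Aetna's voice-analysis software is not public.
# These are generic "pace and tone" features a mood model might use.
import numpy as np
import librosa

def voice_features(wav_path):
    """Crude pace/tone features from a recorded call."""
    y, sr = librosa.load(wav_path, sr=16000, mono=True)
    total_time = len(y) / sr

    # "Pace": how much of the call is speech, and how fragmented it is
    # (many short bursts vs. long fluent stretches).
    intervals = librosa.effects.split(y, top_db=30)  # non-silent spans
    speech_time = sum(end - start for start, end in intervals) / sr

    # "Tone": pitch statistics. A flattened, low-variance pitch contour
    # is one vocal marker the research literature has associated with
    # low mood (again, whether Aetna uses it is anyone's guess).
    f0, voiced_flag, voiced_prob = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
    )
    f0 = f0[~np.isnan(f0)]

    return {
        "speaking_fraction": speech_time / total_time,
        "utterances_per_min": 60 * len(intervals) / total_time,
        "pitch_mean_hz": float(np.mean(f0)) if f0.size else 0.0,
        "pitch_std_hz": float(np.std(f0)) if f0.size else 0.0,
    }
```

A downstream classifier would then be trained on features like these against labeled outcomes. Which is exactly why the consent question matters: none of this is audible to the caller, and none of it is disclosed.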

Conclusion: we need more regulation around big data in healthcare.

Categories: data journalism, modeling, rant
  1. January 6, 2015 at 8:08 am

    Have you seen the Syfy series Continuum? The third season plays on a wearable med bracelet called Halo. It’s evil, of course.

  2. January 6, 2015 at 11:26 am

    How long until homeowners’/renters’ insurance companies insist that we install “smart” (i.e., spying) smoke detectors?

    We should create an organization: “Americans Against Household Electronic Spies” or something to that effect. Cathy, I’m sure you can come up with a good acronym.

    • January 9, 2015 at 4:20 pm

      OMG I’ll be on double secret probation for life if I forget to replace the battery!

  3. January 6, 2015 at 4:25 pm

    yeah, ‘creepiness’ is part of life in the digital world, an intrinsic part of ‘big business.’ We probably don’t know the half of it, and I suspect the ‘creepy’ promulgators will always stay one step ahead of the regulators. 😦

  4. January 6, 2015 at 8:51 pm

    Looks to me like the nurses are doing their jobs, much like a doctor does with a stethoscope. I don’t see anything nefarious. Could also be a result of regulations to cut down on hospital readmits, making sure that patients take their meds and/or do PT.

  5. noneya
    January 8, 2015 at 5:20 pm

    That conclusion is plain lazy thinking. Magical regulations almost never work; what does work is more robust system design. External data stores (provided by third-party companies that can compete with each other) are one such design idea I recently heard that attempts to address these kinds of issues.

  6. January 9, 2015 at 4:19 pm

    We may need more regulation around big data in healthcare, but I suggest we need even more regulation around big data in human resources. HR is given veto power over the economic independence of individuals. There should be legal mechanisms that have the authority to put any HR decision on the defensive at any time. Guilty until proven innocent, I say.

    • January 10, 2015 at 6:07 am

      Interesting. Can you give me examples or references for these HR practices?
