
Parents fighting back against sharing children’s data with InBloom

January 6, 2014

There is a movement afoot in New York (and other places) to allow private companies to house and mine tons of information about children and how they learn. It’s being touted as a great way to tailor online learning tools to kids, but it also raises all sorts of potentially creepy modeling problems, and one very bad sign is how secretive everything has been on the privacy front. Specifically, it’s all being done through school systems and without consulting parents.

In New York it’s being done through InBloom, which I already mentioned here when I talked about big data and surveillance. In that post I related an EducationNewYork report which quoted an official from InBloom as saying that the company “cannot guarantee the security of the information stored … or that the information will not be intercepted when it is being transmitted.”

The issue is super important and timely. Parents have been left out of the loop, with no opt-out option, and they are actively fighting back, for example with this petition from MoveOn (h/t George Peacock). And although the InBloomers claim that no data about their kids will ever be sold, that doesn’t mean it won’t be used by third parties for various mining purposes and possibly marketing – say, for test prep tools. In fact that’s a major feature of InBloom’s computer and data infrastructure: the ability for third parties to plug into the data. Not cool that this is being done on the downlow.

Who’s behind this? InBloom is funded by the Bill & Melinda Gates foundation and the operating system for inBloom is being developed by the Amplify division (formerly Wireless Generation) of Rupert Murdoch’s News Corp. More about the Murdoch connection here.

Wait, who’s paying for this? Besides Gates and Murdoch, New York has spent $50 million in federal grants to set up the partnership with InBloom. And it’s not only New York that is pushing back, according to this Salon article:

InBloom essentially offers off-site digital storage for student data—names, addresses, phone numbers, attendance, test scores, health records—formatted in a way that enables third-party education applications to use it. When inBloom was launched in February, the company announced partnerships with school districts in nine states, and parents were outraged. Fears of a “national database” of student information spread. Critics said that school districts, through inBloom, were giving their children’s confidential data away to companies who sought to profit by proposing a solution to a problem that does not exist. Since then, all but three of those nine states have backed out.

Finally, according to this nydailynews article, Bill de Blasio is coming out on the side of protecting children’s privacy as well. That’s a good sign; let’s hope he sticks with it.

I’m not against using technology to learn, and in fact I think it’s inevitable and possibly very useful. But first we need to have a really good, public discussion about how this data is being shared, controlled, and protected, and that simply hasn’t happened. I’m glad to see parents are aware of this as a problem.

Categories: data science, modeling, news, rant
  1. Josh
    January 6, 2014 at 2:33 pm

    I agree. A public discussion (and then some restrictions on) “how data is being shared, controlled, protected…” would be great.

    Not just for data on children but all of us. Data on children is clearly a particular area of concern and perhaps a good way to get people to focus on the issue.

    Thanks for highlighting this.

  2. WT
    January 10, 2014 at 8:54 am

    It would be nice to hear from some parents who don’t come across as total Luddites on the issue, and who don’t seem ignorant of the fact that school districts would be incredibly handicapped if they couldn’t keep track of some pretty basic information about students. So far the opponents have nothing to offer except for rather hysterical speculation about unspecified wrongdoing that could hypothetically occur.

    • January 10, 2014 at 9:06 am

      That’s what happens when reasonable standards are not in place – people concentrate on worst-case scenarios. I totally get that, and that’s why reasonable laws protecting student data are urgent and crucial.

    • Otter
      January 10, 2014 at 2:01 pm

      You mean “historical speculation about well-documented wrongdoing”.
