I’m very gratified to say that my Lede Program for data journalism at Columbia is over, or at least the summer program is (some students go on to take Computer Science classes in the Fall).
My adorable and brilliant students gave final presentations on Tuesday and then we had a celebration Tuesday night at my house, and my bluegrass band played (didn’t know I have a bluegrass band? I play the fiddle! You can follow us on twitter!). It was awesome! I’m hoping to get some of their projects online soon, and I’ll definitely link to them when that happens.
It’s been an exciting week, and needless to say I’m exhausted. So instead of a frothy rant I’ll just share some reading with y’all:
- Andrew Gelman has a guest post by Phil Price on the worst infographic ever, which sadly comes from Vox. My students all know better than this. Hat tip Lambert Strether.
- Private equity firms are buying stuff all over the country, including Ferguson. I’m actually not sure this is a bad thing, though, if nobody else is willing to do it. Please discuss.
- Bloomberg has an interesting story about online payday loans and the world of investing. I am still searching for someone who knows exactly how those guys target their ads online. Hat tip Aryt Alasti.
- Felix Salmon, now at Fusion, has set up a nifty interactive to help you figure out your lifetime earnings.
- Felix also set up this cool online game where you can play as a debt collector or a debtor.
- Is it time to end letter grades? Hat tip Rebecca Murphy.
- There’s a reason fast food workers are striking nationwide. The ratio of average CEO pay to average full-time worker pay is around 1,252 to 1.
- People lie to women in negotiations. I need to remember this.
Have a great weekend!
I’ve been sent this recent New York Times article by a few people (thanks!). It’s called Grading Teachers, With Data From Class, and it’s about how standardized tests are showing themselves to be inadequate to evaluate teachers, so a Silicon Valley-backed education startup called Panorama is stepping into the mix with a data collection process focused on student evaluations.
Putting aside for now how much this is a play for collecting information about the students themselves, I have a few words to say about the signal which one gets from student evaluations. It’s noisy.
So, for example, I was a calculus teacher at Barnard, teaching students from all over the Columbia University community (so, not just women). I taught the same class two semesters in a row: first in Fall, then in Spring.
Here’s something I noticed. The students in the Fall were young (mostly first semester frosh), eager, smart, and hard-working. They loved me and gave me high marks on all categories, except of course for the few students who just hated math, who would typically give themselves away by saying “I hate math and this class is no different.”
The students in the Spring were older, less eager, probably just as smart, but less hard-working. They didn’t like me or the class. In particular, they didn’t like how I expected them to work hard and challenge themselves. The evaluations came back consistently less excited, with many more people who hated math.
I figured out that many of the students had avoided this class and were only taking it to satisfy a requirement; they didn’t want to be there, and it showed. And the result was that, although my teaching didn’t change markedly between the two semesters, my evaluations changed considerably.
Was there some way I could have gotten better evaluations from that second group? Absolutely. I could have made the class easier. That class wanted calculus to be cookie-cutter, and didn’t particularly care about the underlying concepts and didn’t want to challenge themselves. The first class, by contrast, had loved those things.
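To make the noise concrete, here’s a toy simulation (all numbers invented, just to illustrate the point): the same teaching quality, run through two cohorts with different attitudes toward the subject, produces very different average evaluations.

```python
import random

random.seed(0)

def evaluate(teaching_quality, cohort_bias, n_students=30):
    """Average rating (1-5 scale) from one cohort of students."""
    scores = []
    for _ in range(n_students):
        # each score mixes actual teaching quality with the cohort's
        # attitude toward the subject, plus individual noise
        raw = teaching_quality + cohort_bias + random.gauss(0, 1.0)
        scores.append(min(5, max(1, raw)))
    return sum(scores) / len(scores)

quality = 4.0                                  # the teacher doesn't change
fall = evaluate(quality, cohort_bias=0.5)      # eager first-semester frosh
spring = evaluate(quality, cohort_bias=-1.5)   # reluctant requirement-takers
print(round(fall, 1), round(spring, 1))        # fall cohort rates much higher
```

The teacher’s “quality” never moves; only the cohort does, and the averages diverge anyway.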
My conclusion is that, once we add “get good student evaluations” to the mix of requirements for our country’s teachers, we are asking them to conform to their students’ wishes, which aren’t always good. Many students in this country (in fact, most!) don’t like doing homework. Only some of them like to be challenged to think outside their comfort zone. We think teachers should assign homework and challenge their students, but by asking them to get good student evaluations we might be preventing them from doing exactly that. A bad feedback loop would result.
I’m not saying teachers shouldn’t look at student evaluations; far from it. I always did, and I found them useful and illuminating, but the data were very noisy. I’d love to see teachers be allowed to see these evaluations without punitive consequences attached.
“Data Science” is one of my least favorite tech buzzwords, second only to “Big Data”, which in my opinion should always be printed followed by a winky face (after all, my data is bigger than yours). It’s mostly a marketing ploy used by companies to attract talented scientists, statisticians, and mathematicians who, at the end of the day, will probably be working on one advertising problem or another.
Still, you have to admit, it does have a nice ring to it. Thus the title Democratizing Data Science, a vision paper which I co-authored with two cool Ph.D. students at MIT CSAIL, William Li and Ramesh Sridharan.
The paper focuses on the latter part of the situation mentioned above. Namely, how can we direct these data scientists, i.e. scientists who interact with the data pipeline throughout the problem-solving process (whether in practice they are computer scientists, programmers, statisticians, or mathematicians), toward problems focused on societal issues?
In the paper, we briefly define Data Science (asking ourselves what the heck it even means), then question what it means to democratize the field, and to what end that might be achieved. In other words, the current applications of Data Science, a new but growing field, in both research and industry, have the potential for great social impact, but in reality resources are rarely distributed in a way that optimizes the social good.
We’ll be presenting the paper at the KDD Conference next Sunday, August 24th at 11am as a highlight talk in the Bloomberg Building, 731 Lexington Avenue, NY, NY. It will be more like an open conversation than a lecture and audience participation and opinion is very welcome.
The conference on Sunday at Bloomberg is free, although you do need to register. There are three “tracks” going on that morning, “Data Science & Policy”, “Urban Computing”, and “Data Frameworks”. Ours is in the 3rd track. Sign up here!
If you don’t have time to make it, give the paper a skim anyway, because if you’re on Mathbabe’s blog you probably care about some of these things we talk about.
There was a recent New York Times op-ed by Sonja Starr entitled Sentencing, by the Numbers (hat tip Jordan Ellenberg and Linda Brown) which described the widespread use – in 20 states so far and growing – of predictive models in sentencing.
The idea is to use a risk score to help inform the sentencing of offenders. The risk score is, I guess, supposed to tell us how likely the person is to commit another offense in the future, although that’s not specified. From the article:
The basic problem is that the risk scores are not based on the defendant’s crime. They are primarily or wholly based on prior characteristics: criminal history (a legitimate criterion), but also factors unrelated to conduct. Specifics vary across states, but common factors include unemployment, marital status, age, education, finances, neighborhood, and family background, including family members’ criminal history.
I knew about the existence of such models, at least in the context of prisoners with mental disorders in England, but I didn’t know how widespread it had become here. This is a great example of a weapon of math destruction and I will be using this in my book.
A few comments:
- I’ll start with the good news. It is unconstitutional to use information such as family members’ criminal history against someone. Eric Holder is fighting against the use of such models.
- It is also presumably unconstitutional to jail someone longer for being poor, which is what this effectively does. The article has good examples of this.
- The modelers defend this crap as “scientific,” which is the worst abuse of science and mathematics imaginable.
- The people using this claim they only use it as a way to mitigate sentencing, but letting a bunch of rich white people off easier because they are not considered “high risk” is tantamount to sentencing poor minorities more harshly.
- It is a great example of confused causality. We could easily imagine a certain group that gets arrested more often for a given crime (poor black men, marijuana possession) just because the police have that practice for whatever reason (Stop & Frisk). The model would then consider any such man at a higher risk of repeat offending, but that’s not because any particular person is actually more likely to do it; it’s because the police are more likely to arrest that person for it.
- It also creates a pernicious feedback loop for the most vulnerable population: the model will impose longer sentences on the population it considers most risky, which will in turn make them look even riskier in the future, if “length of time in prison previously” is used as an attribute in the model, which it surely is.
- Not to be cynical, but considering my post yesterday, I’m not sure how much momentum will be created to stop the use of such models, given how discriminatory they are.
- Here’s an extreme example of preferential sentencing which already happens: rich dude Robert H. Richards IV raped his 3-year-old daughter and didn’t go to jail because the judge ruled he “wouldn’t fare well in prison.”
- How great would it be if we used data and models to make sure rich people went to jail just as often and for just as long as poor people for the same crime, instead of the other way around?
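The feedback loop I mentioned above is easy to see in a toy simulation. The scoring and sentencing rules below are entirely invented (no actual state model publishes its formula); the point is only the shape of the loop: past prison time raises the risk score, the risk score lengthens the sentence, and the sentence raises all future scores.

```python
# Entirely invented scoring and sentencing rules, just to show the
# shape of the feedback loop.
def risk_score(prior_months_served):
    # hypothetical rule: more past prison time means a higher score
    return 1.0 + 0.05 * prior_months_served

def sentence_months(score, base=12):
    # hypothetical rule: higher score means a longer sentence
    return base * score

sentences = []
months_served = 12
for _ in range(3):
    score = risk_score(months_served)
    new_sentence = sentence_months(score)
    sentences.append(new_sentence)
    months_served += new_sentence

print([round(s, 1) for s in sentences])  # each sentence longer than the last
```

Nothing about the person changed between convictions; the model’s own output is what ratchets the sentences upward.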
I’ve spent the past few days fascinated to learn all sorts of things about how McDonalds operates its business, as news broke about a recent NLRB decision to allow certain people who work in McDonalds to file complaints about their workplace and name McDonalds as a joint employer.
That sounds incredibly dull, right? The idea of letting McDonalds workers name McDonalds as an employer? Let me tell you a bit more. And this is all common knowledge, but I thought I’d gather it here for those of you who haven’t been following the story.
Most of the McDonalds joints you go to are franchises – 90% in this country. That means the business is owned by a franchisee, a person who pays good money (details here) for the right to run a McDonalds and is constrained by a huge long list of rules about how they have to do it.
The franchise owner attends Hamburger University and gets trained in all sorts of things, like exactly how things should look in the store, how customers should be funneled through space (maps included), how long each thing should take, and how to treat employees. There’s a QSC Playbook they are given (Quality, Service, and Cleanliness) as well as minute descriptions of how to organize their teams and even the vocabulary words they should use to encourage workers (see page 24 of the Shift Management Guide I found online here).
McDonalds also installs a real-time surveillance system in each McDonalds, which can calculate the rate of revenue coming in at a given moment, as well as the rate of pay going out, and when the ratio of those two numbers dips below a certain threshold, they encourage franchise owners to ask people to leave or to delay people from clocking in. Encourage, mind you, not require. They are not the employers or anything remotely like that, clearly.
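The logic, as described, amounts to something like the sketch below. The function name, threshold, and numbers are all my invention, since the actual system is proprietary; it just shows what a revenue-to-payroll trigger looks like.

```python
# A guess at the logic described above; the threshold and numbers are
# invented, since the actual McDonalds system is proprietary.
def staffing_nudge(revenue_per_hour, payroll_per_hour, min_ratio=6.0):
    """Nudge the franchise owner when sales per payroll dollar run low."""
    ratio = revenue_per_hour / payroll_per_hour
    if ratio < min_ratio:
        return "encourage owner to send workers home or delay clock-ins"
    return "staffing OK"

print(staffing_nudge(revenue_per_hour=600, payroll_per_hour=120))  # ratio 5.0
print(staffing_nudge(revenue_per_hour=900, payroll_per_hour=120))  # ratio 7.5
```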
Take a step back here. What is the business model of a franchise? And when did McDonalds stop being a burger joint?
The idea is this. When you own a restaurant, you have to deal with all the people who work for you: their complaints, the chance that they won’t like how you treat them, the chance that they’ll organize against you or sue you. In order to contain those risks, you franchise. That effectively removes all of those people except one, the franchise owner, with whom you have an air-tight contract written by a huge team of lawyers. The contract basically says that you get to cancel the franchise agreement for any minor infraction (at which point the owner loses a bunch of investment money). Most importantly, it means the people actually working in a given franchise work for that one person, not for you, so their pesky legal issues are kept away from you. It’s a way to box in the legal risk of the parent company.
Restaurants aren’t the only business to learn that it’s easier to sell and manage a brand than it is to sell and manage an actual product. Hotels have been doing this for a long time, and avoid complaints and legal issues stemming from the huge population of service workers in hotels, mostly minority women.
For a copy of the original complaint that gave the details of McDonald’s control over workers, read this. For a better feel for being a McDonalds worker, please read this recent Reuters blog post written by a McDonalds worker. And for a better feel for being a McDonald’s franchise owner, read this recent Washington Post letter from a long-time McDonalds franchise owner who thinks workers are being unfairly treated.
Does that sound confusing, that a franchise owner would side with the employees? It shouldn’t.
By nature of the franchise contract, the money actually available to a franchise owner is whatever’s left over after they pay McDonalds for advertising, buy all the equipment and food McDonalds tells them to from the sources McDonalds tells them to use, and pay for insurance on everything and for rent on the property (which McDonalds typically owns). In other words, the only variable they have left to tweak is worker pay, but if they pay a living wage then they lose money on their business. In fact, when franchise owners complain about the profit stream, McDonalds tells them to pay their workers less. McDonalds essentially controls everything except one variable, but since it’s a closed system of equations, that means the franchise owners have to decide between paying their workers reasonably and going into the red.
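Here’s that closed system in back-of-the-envelope form. Every number below is invented (real franchise fee schedules vary); the point is only that all the line items except worker pay are fixed by the parent company, so pay is the one dial left.

```python
# Back-of-the-envelope sketch with invented numbers: every cost except
# worker pay is set by the parent company, so pay balances the books.
def franchise_profit(monthly_revenue, worker_pay):
    rent = 0.10 * monthly_revenue       # paid to McDonalds, who owns the lot
    ad_fee = 0.04 * monthly_revenue     # mandatory advertising percentage
    royalties = 0.04 * monthly_revenue
    food_and_supplies = 0.35 * monthly_revenue  # from mandated suppliers
    insurance = 5_000
    fixed_costs = rent + ad_fee + royalties + food_and_supplies + insurance
    return monthly_revenue - fixed_costs - worker_pay

revenue = 200_000
print(franchise_profit(revenue, worker_pay=80_000))   # low pay: modest margin
print(franchise_profit(revenue, worker_pay=105_000))  # living-wage payroll: in the red
```

Raise the payroll line and the owner’s margin flips negative; nothing else in the system is theirs to move.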
That’s not to say, of course, that McDonalds as an enterprise is at risk of losing money. In fact the parent corporation is making good money ($1.4 billion per quarter if you include international revenue), by squeezing the franchises. If the franchise owners had more leverage to negotiate better contracts, they could siphon off more revenue and then – possibly – share it with workers.
So back to the ruling. If upheld (and there’s a good chance it won’t be, but I’m feeling hopeful today), this decision will allow people to point at McDonalds the corporation when they are treated badly, and will potentially allow a workers’ union to form. Alternatively, it might energize the franchise owners to negotiate more flexible contracts, which could allow them to pay their workers better directly.
There’s a CNN video news story explaining how the NYC Mayor’s Office of Data Analytics is working with private start-up Placemeter to count and categorize New Yorkers, often with the help of private citizens who install cameras in their windows. Here’s a screenshot from the Placemeter website:
You should watch the video and decide for yourself whether this is a good idea.
Personally, it disturbs me, but perhaps because of my priors on how much we can trust other people with our data, especially when it’s in private hands.
To be more precise, there is, in my opinion, a contradiction coming from the Placemeter representatives. On the one hand they try to make us feel safe by saying that, after gleaning a body count from their video feeds, they dump the data. But then they turn around and say that, in addition to counting people, they will also categorize them: by gender, by age, by whether they are carrying shopping bags or pushing strollers.
That’s what they are talking about anyway, but who knows what else? Race? Weight? Will they use face recognition software? Who will they sell such information to? At some point, after mining videos enough, it might not matter if they delete the footage afterwards.
Since they are a private company I don’t think such information on their data methodologies will be accessible to us via Freedom of Information Laws either. Or, let me put that another way. I hope that MODA sets up their contract so that such information is accessible via FOIL requests.
I’m super excited about the recent “mood study” that was done on Facebook. It constitutes a great case study on data experimentation that I’ll use for my Lede Program class when it starts mid-July. It was first brought to my attention by one of my Lede Program students, Timothy Sandoval.
My friend Ernest Davis at NYU has a page of handy links to big data articles, and at the bottom (for now) there are a bunch of links about this experiment. For example, this one by Zeynep Tufekci does a great job outlining the issues, and this one by John Grohol burrows into the research methods. Oh, and here’s the original research article that’s upset everyone.
It’s got everything a case study should have: ethical dilemmas, questionable methodology, sociological implications, and questionable claims, not to mention a whole bunch of media attention and dissection.
By the way, if I sound gleeful, it’s partly because I know this kind of experiment happens on a daily basis at a place like Facebook or Google. What’s special about this experiment isn’t that it happened, but that we get to see the data. And the response to the critiques might be, sadly, that we never get another chance like this, so we have to grab the opportunity while we can.