There was a recent New York Times op-ed by Sonja Starr entitled Sentencing, by the Numbers (hat tip Jordan Ellenberg and Linda Brown) which described the widespread use – in 20 states so far and growing – of predictive models in sentencing.
The idea is to use a risk score to help inform sentencing of offenders. The risk score is, I guess, supposed to tell us how likely the person is to commit another crime in the future, although that’s not specified. From the article:
The basic problem is that the risk scores are not based on the defendant’s crime. They are primarily or wholly based on prior characteristics: criminal history (a legitimate criterion), but also factors unrelated to conduct. Specifics vary across states, but common factors include unemployment, marital status, age, education, finances, neighborhood, and family background, including family members’ criminal history.
I knew about the existence of such models, at least in the context of prisoners with mental disorders in England, but I didn’t know how widespread it had become here. This is a great example of a weapon of math destruction and I will be using this in my book.
A few comments:
- I’ll start with the good news. It is unconstitutional to use information such as a family member’s criminal history against someone. Eric Holder is fighting against the use of such models.
- It is also presumably unconstitutional to jail someone longer for being poor, which is what this effectively does. The article has good examples of this.
- The modelers defend this crap as “scientific,” which is the worst abuse of science and mathematics imaginable.
- The people using this claim they only use it as a way to mitigate sentencing, but letting a bunch of rich white people off easier because they are not considered “high risk” is tantamount to sentencing poor minorities more harshly.
- It is a great example of confused causality. We could easily imagine a certain group that gets arrested more often for a given crime (poor black men, marijuana possession) just because the police have that practice for whatever reason (Stop & Frisk). The model would then consider any such man at a higher risk of repeat offending, but that’s not because any particular person is actually more likely to reoffend; it’s because the police are more likely to arrest that person for it.
- It also creates a pernicious feedback loop for the most vulnerable population: the model will impose longer sentences on the population it considers most risky, which will in turn make them look even riskier in the future, if “length of time in prison previously” is used as an attribute in the model, which it surely is.
- Not to be cynical, but considering my post yesterday, I’m not sure how much momentum will be created to stop the use of such models, considering how discriminatory they are.
- Here’s an extreme example of preferential sentencing which already happens: rich dude Robert H Richards IV raped his 3-year-old daughter and didn’t go to jail because the judge ruled he “wouldn’t fare well in prison.”
- How great would it be if we used data and models to make sure rich people went to jail just as often and for just as long as poor people for the same crime, instead of the other way around?
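To make that feedback loop concrete, here’s a toy simulation in Python. Nothing in it comes from any real sentencing model: the risk formula, the sentence lengths, and all the numbers are made up purely for illustration. The only point is structural: if prior prison time feeds the risk score, and the risk score lengthens the sentence, two people who offend identically but start with different records diverge further and further.

```python
def risk_score(prior_prison_years):
    """Toy risk score driven purely by time already served.
    (Hypothetical -- real models use many attributes.)"""
    return min(1.0, 0.2 + 0.1 * prior_prison_years)

def sentence_length(score):
    """Longer sentences for 'riskier' defendants (made-up scale)."""
    return 1 + round(4 * score)

def simulate(prior_years, n_arrests=5):
    """Two defendants commit identical crimes at identical rates;
    only their starting records differ. Returns total years served."""
    total = prior_years
    for _ in range(n_arrests):
        total += sentence_length(risk_score(total))
    return total

print(simulate(prior_years=0))  # 19 -- clean record at the start
print(simulate(prior_years=2))  # 24 -- starts with a 2-year record
```

The two defendants start two years apart and end five years apart: the model manufactures the very riskiness it claims to measure.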
Here’s what comes up in conversations at my Occupy meetings a lot: systemic racism.
Maybe once a week on average, whether we are talking about the criminal justice system, or the court system, or the educational system, or standardized tests, or chronic employment problems, or welfare rhetoric, or homelessness. There are many very well-informed people in my group who can speak eloquently and convincingly about how the system itself, not any particular person (although such people do exist), discriminates against minorities in this country.
As a group we cheered when Ta-Nehisi Coates came out with his Atlantic piece entitled The Case for Reparations. So much resonated, especially the parts about widespread reverse redlining of mortgages to minorities in the run-up to the credit crisis. And it finally taught me how to think about affirmative action.
Another thing that comes up sometimes, although less often: how white people, even liberals like Elizabeth Warren, don’t talk about racism anymore. They want to address education inequalities through class-based or income-based measures rather than race-based ones. They talk about unemployment and joblessness and the need for criminal justice reform without referring to the enormous and glaring racial disparities.
I’m left feeling a lot like I felt in 7th grade social studies when we studied the period of mass genocide of American Indians and called it “Manifest Destiny.”
This recent study entitled Racial Disparities in Incarceration Increase Acceptance of Punitive Policies might explain why white people are so reluctant to talk about racism. Namely, because white people react strangely when you specifically point out systemic racism (they are OK with it).
So in other words, if you tell them how many people are incarcerated in this country compared to other countries, they think it’s terrible and we should stop putting so many people in jail. But if you tell them most of those prisoners (60% in New York City) are black, then they’re less likely to think it’s terrible. They also remember the number wrong, thinking it’s higher than it is. Here’s a succinct summary from this Vox article:
The question seems to be which instinct wins out: the belief that our prison system isn’t fair, or the assumption that a prisoner must be a criminal. According to the study, when whites are primed to think of prisoners as black, it’s the latter that wins out.
The conclusion of the Vox article is this: politicians and activists have figured out that, if they want to agitate for criminal justice reform, they can’t mention systemically racist unfairness, because that just doesn’t upset powerful people enough. Instead, they need to focus on important stuff like saving money, which is how you get white people up in arms. That’s what flies in the focus groups, apparently.
It explains why Elizabeth Warren doesn’t talk about race when she talks about student loans, preferring to talk about “young people”, even though the problem is worse for non-Asian minorities. Similarly, Obama is targeting for-profit colleges without reference to race (but with reference to veterans!) even though for-profit colleges notoriously target minorities.
The problem with an insight like this is that it’s primarily used for political cunning, which is not enough. I’d like to talk about how to get people to directly confront racism, starting with liberals.
Today I’d like to rant about a pattern I’ve noticed.
Namely, I have a bunch of female friends and acquaintances that I consider feisty, informed, and argumentative sorts. People who are fun to be around and who know how to stick up for themselves, know how to spot misogyny and paternalism in all contexts, and most of all know how to dismiss such nonsense when it appears, and then get on with whatever they were doing.
And then they get pregnant and they lose most if not all of those qualities. They get doctors who tell them what to eat, and how much, even though they’ve been feeding themselves quite well for 30-odd years without help. They get doctors who tell them how much pain medication they should have during labor, when it’s months and months before labor and we don’t even know what’s gonna happen. What gives?
Here’s a guess. Partly it’s the baby hormones that make you generally confused when you’re pregnant. The other part is that the stakes are high, and you are not an expert, so you defer to your baby doctor. Plus there’s all those ridiculous and scary pregnancy books out there which just serve to make women neurotic and should be burned. Oh and sometimes the doctors are women, so they don’t seem paternalistic. But that’s what it is.
But here’s the thing, there’s not much evidence about exactly how you should eat when you’re pregnant, unless you are doing something absolutely weird. And, in spite of what a no-drugs doctor might suggest, it’s not all that dangerous for babies if you have pain meds. In fact it’s super safe to have a baby now compared to the past, both for you and your baby. And thank goodness for that.
On the flip side, a doctor has no business dictating to you that you will have an epidural either, which is what happened to my mom back in the 1970s. It’s really your choice, and you should decide.
So if you have one of those pushy-ass doctors, fuck ‘em. This is your body, you get to decide that stuff. Go get a new doctor.
And to be sure, I’m not saying you shouldn’t inform yourself about risks and signs of pre-eclampsia and other truly important stuff, but for goodness sakes don’t forget your feminist training. It’s not just your baby here, it’s also you, and yes you deserve to eat food you want to eat and to moderate pain if it gets overwhelming. You will be happier, your baby will be just fine, and she or he won’t remember a thing. Consider it training for how to be a mom later.
Today I read this article written by Allie Gross (hat tip Suresh Naidu), a former Teach for America teacher whose former idealism has long been replaced by her experiences in the reality of education in this country. Her article is entitled The Charter School Profiteers.
It’s really important, and really well written, and just one of the articles in the online magazine Jacobin that I urge you to read and to subscribe to. In fact that article is part of a series (here’s another which focuses on charter schools in New Orleans) and it comes with a booklet called Class Action: An Activist Teacher’s Handbook. I just ordered a couple of hard copies.
I’d really like you to read the article, but as a teaser here’s one excerpt, a rant which she completely backs up with facts on the ground:
You haven’t heard of Odeo, the failed podcast company the Twitter founders initially worked on? Probably not a big deal. You haven’t heard about the failed education ventures of the person now running your district? Probably a bigger deal.
When we welcome schools that lack democratic accountability (charter school boards are appointed, not elected), when we allow public dollars to be used by those with a bottom line (such as the for-profit management companies that proliferate in Michigan), we open doors for opportunism and corruption. Even worse, it’s all justified under a banner of concern for poor public school students’ well-being.
While these issues of corruption and mismanagement existed before, we should be wary of any education reformer who claims that creating an education marketplace is the key to fixing the ills of DPS or any large city’s struggling schools. Letting parents pick from a variety of schools does not weed out corruption. And the lax laws and lack of accountability can actually exacerbate the socioeconomic ills we’re trying to root out.
I’m going to write one of those posts where many of you will already understand my point. In fact it might be old hat for a majority of my readers, yet it’s still important enough for me to mention just in case there are a few people out there who don’t know how the modern business model is set up.
Namely, like this. As a gmail and Google Search user, you are not a customer of Google. You are the product. The customers of Google are the ones who advertise to you. Your interaction with Google is, from the perspective of the business operation, that you give them information which they harvest so they can advertise to you in a more targeted way, thus increasing the likelihood of you clicking. The fact that you get a service from these interactions is great, because it means you’ll come back to give Google and its customers more information about you soon.
This misunderstanding, once you see it as such, can be clarifying. For example, when people talk about anti-trust and Google, they should talk about whether the customers of Google have any other serious choice. And since the customers of Google are advertisers, not gmailers or searchers, the alternatives aren’t hotmail or Bing. Rather they are other advertising outlets. And a very good case can be made that Google does violate anti-trust laws in that sense, just ask Nathan Newman.
It also explains why something like the recent European “right to be forgotten” law seems so strange and unreasonable to the powers that be at Google. It’d be like a meat farm where the cows go on strike and demand better food. Cows are the product, and they aren’t supposed to complain. They’re not even supposed to be heard. At worst we treat them better when our customers demand it, not when the cows do.
I was reminded about this ubiquitous business model yesterday, and newly enraged by its consequences, when reading this article entitled Held Captive by Flawed Credit Reports (hat tip Linda Brown) about the credit reporting agency Experian and how it utterly disregards the laws meant to protect consumers from mistakes in their credit reports. The problem here is that, to the giant company Experian, its customers are giant companies like Verizon, which send credit score requests millions of times a day and pay for each score. Mere people, whose mortgage applications are being denied because of mistakes, are the product, not the customer, and they are almost by definition unimportant.
And it seems that the law which is supposed to protect these people, namely the Fair Credit Reporting Act, first passed in 1970, doesn’t have enough teeth behind it to make the big credit scoring agencies sit up and pay attention. It’s all about the scale of the fines compared to the scale of the business. This is well explained in the article (emphasis mine):
Last year, the Federal Trade Commission found that 5 percent of consumers — or an estimated 10 million people — had an error on one of their credit reports that could have resulted in higher borrowing costs.
The F.T.C., which oversees the industry along with the Consumer Financial Protection Bureau, has been busy bringing cases in this arena. Since 2000, it has filed 18 enforcement actions against reporting bureaus; 13 were district court actions that generated $25.7 million in penalties.
Consumers have also won in the courts, on occasion. Last year, an Oregon consumer was awarded $18.4 million in punitive damages by a jury after she sued Equifax for inserting errors into her credit report. But the fines, settlements and judgments paid by the larger companies are not even close to a rounding error. Experian generated $4.8 billion in revenue for the year ended March 2014, and its after-tax profit of $747 million in the period was more than twice its 2013 figure.
Million versus billion. It seems like the cows don’t have much leverage.
I have been doing some reading about the Amazon/Hachette battle and I have come to the conclusion that Amazon has become a huge bully. I also wasn’t impressed by how they treat employees, how they monitor and surveil them, and a host of other problems. For that reason I’m boycotting Amazon in both my shopping and my blogging habits, so no more direct links.
Update: I’m actually still going to use their EC2 services as part of the Lede Program. Not sure how to avoid that actually, and I’d welcome suggestions.
It was bound to happen. Someone was inevitably going to have to write this book, entitled Social Physics, and now someone has just up and done it. Namely, Alex “Sandy” Pentland, data scientist evangelist, director of MIT’s Human Dynamics Laboratory, and co-founder of the MIT Media Lab.
From a review by Nicholas Carr:
Pentland argues that our greatly expanded ability to gather behavioral data will allow scientists to develop “a causal theory of social structure” and ultimately establish “a mathematical explanation for why society reacts as it does” in all manner of circumstances. As the book’s title makes clear, Pentland thinks that the social world, no less than the material world, operates according to rules. There are “statistical regularities within human movement and communication,” he writes, and once we fully understand those regularities, we’ll discover “the basic mechanisms of social interactions.”
By collecting all the data – credit card, sensor, cell phones that can pick up your moods, etc. – Pentland seems to think we can put the science into social sciences. He thinks we can predict a person like we now predict planetary motion.
OK, let’s just take a pause here to say: eeeew. How invasive does that sound? And how insulting is its premise? But wait, it gets way worse.
Vomit. But also not the worst part.
Here’s the worst part about Pentland’s book, from the article:
Ultimately, Pentland argues, looking at people’s interactions through a mathematical lens will free us of time-worn notions about class and class struggle. Political and economic classes, he contends, are “oversimplified stereotypes of a fluid and overlapping matrix of peer groups.” Peer groups, unlike classes, are defined by “shared norms” rather than just “standard features such as income” or “their relationship to the means of production.” Armed with exhaustive information about individuals’ habits and associations, civic planners will be able to trace the full flow of influences that shape personal behavior. Abandoning general categories like “rich” and “poor” or “haves” and “have-nots,” we’ll be able to understand people as individuals—even if those individuals are no more than the sums of all the peer pressures and other social influences that affect them.
Kill. Me. Now.
The good news is that the author of the article, Nicholas Carr, doesn’t buy it, and makes all sorts of reasonable complaints about this theory, like privacy concerns, and structural sources of society’s ills. In fact Carr absolutely nails it (emphasis mine):
Pentland may be right that our behavior is determined largely by social norms and the influences of our peers, but what he fails to see is that those norms and influences are themselves shaped by history, politics, and economics, not to mention power and prejudice. People don’t have complete freedom in choosing their peer groups. Their choices are constrained by where they live, where they come from, how much money they have, and what they look like. A statistical model of society that ignores issues of class, that takes patterns of influence as givens rather than as historical contingencies, will tend to perpetuate existing social structures and dynamics. It will encourage us to optimize the status quo rather than challenge it.
How to see how dumb this is in two examples
This brings to mind examples of models that do or do not combat sexism.
First, the orchestra audition example: in order to remove bias, orchestras started having candidates audition behind a screen. The result has been way more women in orchestras.
This is a model, even if it’s not a big data model. It is the “orchestra audition” model, and the most important thing about this example is that they defined success very carefully and made it all about one thing: sound. They decided to define the requirements for the job to be “makes good-sounding music,” and they decided that other information, like how the candidates look, would by definition not be used. It is explicitly non-discriminatory.
By contrast, let’s think about how most big data models work. They take historical information about successes and failures and automate them – rather than challenging their past definition of success, and making it deliberately fair, they are if anything codifying their discriminatory practices in code.
My standard made-up example of this is close to the kind of thing actually happening and being evangelized in big data. Namely, a resume-sorting model that helps out HR. But, using historical training data, this model notices that women haven’t historically fared well at the made-up company as computer programmers – they often leave after only 6 months and they never get promoted. The model will interpret that to mean they are bad employees and will never look into structural causes. And moreover, as a result of this historical data, it will discard women’s resumes. Yay, big data!
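Here’s a minimal sketch of that made-up resume sorter, again as a toy: the company, the numbers, and the scoring rule are all invented, and no real HR system is this crude. But the mechanism is the same: the “model” just memorizes average historical outcomes per group, so structural bias in the training data turns directly into an automated screening rule.

```python
# Made-up historical hiring data: (gender, months_stayed, promoted).
# The women's short tenures reflect structural causes the model never sees.
historical_hires = [
    ("M", 48, True), ("M", 36, True), ("M", 40, False),
    ("F", 6, False), ("F", 7, False), ("F", 30, True),
]

def group_score(gender):
    """Average a crude 'success' measure over past hires in this group:
    months stayed, plus a 12-point bonus for a promotion."""
    rows = [(m, p) for g, m, p in historical_hires if g == gender]
    return sum(m + (12 if p else 0) for m, p in rows) / len(rows)

def screen(resume_gender, threshold=20):
    """Discard resumes whose group's historical score falls below the
    threshold -- the model never asks WHY women left after 6 months."""
    return "keep" if group_score(resume_gender) >= threshold else "discard"

print(screen("M"))  # keep
print(screen("F"))  # discard
```

The model isn’t “noticing” anything about any individual applicant; it’s codifying the company’s past into a rule and calling it prediction.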
I’m kind of glad Pentland has written such an awful book, because it gives me an enemy to rail against in this big data hype world. I don’t think most people are as far on the “big data will solve all our problems” spectrum as he is, but he and his book present a convenient target. And it honestly cannot surprise anyone that he is a successful white dude as well when he talks about how big data is going to optimize the status quo if we’d just all wear sensors to work and to bed.