Silicon Valley and Journalism conference at Columbia’s Tow Center

Today I’m excited to attend a Tow Center Journalism Conference on the relationship between Silicon Valley and journalism. Unfortunately it’s sold out at this point, but here’s the updated schedule.

I’m particularly excited for the 10:45am panel, which features Zeynep Tufekci and Kate Crawford, among others, discussing the ethical issues arising from the relationship between Silicon Valley platforms like Facebook and Google and journalism, especially when it comes to censorship and digital rights.
Also, one of my favorite technology journalists, Julia Angwin, is speaking on a panel at noon, which is going to be interesting. And of course the brilliant and amazing Emily Bell, who runs the Tow Center, is moderating a couple of discussions. An excellent lineup; I’m lucky to live so close.
Speaking of critical journalism, I was quoted in a recent piece on predictive policing, written by Jack Smith.

Republicans would let car dealers continue racist practices undeterred

There’s an upcoming House Bill, HR1737, that would make it easier for auto dealers to get away with being racist. It’s being supported by Republicans* and is being voted on in the next couple of weeks. We should fight against it.

The issue centers on the problematic practice of “dealer markups,” discretionary fees that brokers slap on after the credit risk of a given borrower has been established. It turns out that these fees vary in size and are consistently bigger for blacks and Hispanics. Which means that if several people of different races but with similar credit histories walk into a car dealership and buy cars, the minorities will typically end up paying more. This is illegal discrimination under the legal doctrine of “disparate impact.”

The Consumer Financial Protection Bureau (CFPB) has been making a valiant effort to hold accountable the financiers behind these loans. For political (read: lobbying) reasons the CFPB doesn’t have regulating power over auto dealers directly, but they do regulate the bankers that supply the money, and they’ve been nailing those bankers for unfair practices. For example, there’s an ongoing case against Ally Financial for upwards of $80 million, which would go to compensating the victims.

Here’s the amazing thing: Ally Financial doesn’t claim their loans weren’t racist. They simply claim that they were less racist than the CFPB thinks, and that the methodology that the CFPB used to measure the racism is flawed. The Republicans agree, and they’re trying to remove the CFPB’s ability to enforce disparate impact violations altogether**.

So, just to recap, the Republican argument is: if you can’t specify exactly how racist this practice is, then you can’t stop people from doing it at all. It’s a dumb and dangerous argument. It is, in fact, exactly what disparate impact was meant to avoid.

A little background on why the measurement is so involved. In mortgage lending, the race of the borrower is recorded. In fact the race of every applicant is recorded, so that later on people can go back and see if there are racist practices going on. This is not so for auto lending, however. That means that when the CFPB suspects racism in auto lending, they have to impute the race of the applicant based on the information they know. Then, once they have partitioned the borrower population into “probably minority” and “probably white” subgroups, they measure the extent to which the “probably minority” gets overcharged. If it’s substantial, they charge the lender with discrimination and allot damages.
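To make that two-step procedure concrete, here’s a minimal sketch in Python. To be clear, this is my toy version, not the CFPB’s actual code: their real methodology, known as BISG, combines surname and geography probabilities using Bayes’ rule, and all the numbers below are invented.

```python
import numpy as np
import pandas as pd

# Invented illustrative data: proxy probabilities and dealer markups.
loans = pd.DataFrame({
    "surname_prob_minority": [0.9, 0.1, 0.8, 0.2, 0.7, 0.3],  # from surname lists
    "geo_prob_minority":     [0.8, 0.2, 0.6, 0.3, 0.9, 0.1],  # from census geography
    "markup":                [800, 400, 700, 450, 750, 500],  # dollars over base rate
})

# Step 1: impute race from the proxies (a simple average here; the CFPB's
# actual BISG method combines surname and geography via Bayes' rule).
prob = (loans["surname_prob_minority"] + loans["geo_prob_minority"]) / 2
loans["group"] = np.where(prob > 0.5, "probably minority", "probably white")

# Step 2: compare average markups across the imputed groups.
print(loans.groupby("group")["markup"].mean())
```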

The opponents of the CFPB methodology claim that the race estimates are inaccurate; in particular, they charge that white people are sometimes being mistakenly labeled black or Hispanic. But consider this: that error, of mistakenly counting white people as minorities, actually works in the lenders’ favor at the second step, because the measured extent of racism gets diluted if the “probably minority” group contains a bunch of white people, for the very reason that white people don’t suffer from the markup discrimination. So it’s not even clear that the damages being awarded are inflated; it’s just that the damages aren’t provably being sent to exactly the right people.
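Here’s a toy calculation of why misclassification dilutes the measured gap rather than inflating it (numbers invented):

```python
# Invented numbers: minorities are overcharged by $300 on average, and the
# "probably minority" group is only 80% actually minority.
true_overcharge = 300.0   # dollars
purity = 0.8              # fraction of the flagged group that is minority

# The white people mistakenly in the flagged group aren't being overcharged,
# so they drag the measured average down, not up.
measured_gap = purity * true_overcharge + (1 - purity) * 0.0
print(measured_gap)       # 240.0 -- an underestimate of the true $300
```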

In statistical terms, we are worried about false positives – white people who might receive a compensation check for racist practices – and the proposed solution is to ignore all the actual victims of racist practices – so all the true positives. It’s throwing the baby out with the bathwater.

Even so, the CFPB’s race model is an imperfect methodology, as all models that depend on proxies are. And hey, you can try it yourself here.

Let’s take a step back. What’s the long-term goal here? It’s most definitely not to stand by, watching car dealerships screw minorities, and then after the fact to step in and assess damages. The actual goal is to put an end to the racist policies themselves.

What would that look like? Well, BB&T, another big financing bank for car loans, is changing from the dealer markup system to a system of flat fees which solves this particular problem (there are others!). From the article:

The bank didn’t attribute the move to pressure from regulators, but in a written statement the bank cited “fair and equal treatment of all consumers,” and said it was launching a “nondiscretionary dealer compensation program.”

To state the situation frankly: the anti-discrimination policies that the CFPB has been developing put pressure on car dealers’ banks, and thus on car dealers, to deal fairly and transparently with their customers. And pushback against those policies is a vote to keep the shady, discriminatory negotiations with car dealers that, generally speaking, screw minorities.

From my perspective, as a data scientist who studies both algorithmic unfairness and institutional racism, this is just the beginning of a much larger debate around what kind of processes we can audit with algorithms, what constitutes evidence of discrimination, and how quickly – or slowly – the laws are going to catch up with technology. It’s a litmus test, and it’s on the verge of failing badly.

* and some Democrats as well. See the full list of supporters here.

** Technically, the CFPB would still be able to enforce Fair Lending laws, but given that their methodology would be scrapped, it’s not clear how they’d actually go about doing that.


Duke deans drop the ball on scientific misconduct

Former Duke University cancer researcher Anil Potti was found guilty of research misconduct yesterday by the federal Office of Research Integrity (ORI), after a multi-year investigation. You can read the story in Science, for example. His punishment is that he won’t be able to do government-sponsored research without supervision for the next five years. Not exactly stiff.

This article also covers the ORI decision, and describes some of the people who suffered from poor cancer treatment because of his lies. Here’s an excerpt:

Shoffner, who had Stage 3 breast cancer, said she still has side effects from the wrong chemotherapy given to her in the Duke trial. Her joints were damaged, she said, and she suffered blood clots that prevent her from having knee surgery now. Of the eight patients who sued, Shoffner said, she is one of two survivors.

What’s interesting to me this morning is that both articles above mention the same reason for the initial investigation into his work: namely, that he had padded his resume, pretending to be a Rhodes Scholar when he wasn’t. That fact was reported by The Cancer Letter in 2010.

But here’s the thing: back in 2008, a third-year medical student named Bradford Perez sent the deans at Duke a letter (according to The Cancer Letter) explaining that Potti’s lab was fabricating results. And for those of you who can read nerd, please go ahead and read his letter; it is extremely convincing. An excerpt:

Fifty-nine cell line samples with mRNA expression data from NCI-60 with associated radiation sensitivity were split in half to designate sensitive and resistant phenotypes. Then in developing the model, only those samples which fit the model best in cross validation were included. Over half of the original samples were removed. It is very possible that using these methods two samples with very little if any difference in radiation sensitivity could be in separate phenotypic categories. This was an incredibly biased approach which does little more than give the appearance of a successful cross validation.
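To see how damning this critique is, here’s a little simulation of the trick Perez describes. It’s my own invention, with pure-noise data and scikit-learn, not the lab’s actual pipeline, but the point carries: even when there is no signal at all, discarding the samples that don’t fit makes the “validated” accuracy look terrific.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(59, 200))       # 59 samples of pure-noise "expression data"
y = np.array([0] * 30 + [1] * 29)    # arbitrary sensitive/resistant labels

clf = LogisticRegression(max_iter=5000)

def cv_accuracy(mask):
    preds = cross_val_predict(clf, X[mask], y[mask], cv=3)
    return (preds == y[mask]).mean()

keep = np.ones(59, dtype=bool)
print(f"honest cross-validation accuracy: {cv_accuracy(keep):.2f}")  # ~0.5, a coin flip

# The move Perez describes: drop the samples the model gets wrong in
# cross-validation, then report accuracy on the survivors.
for _ in range(2):
    preds = cross_val_predict(clf, X[keep], y[keep], cv=3)
    keep[np.where(keep)[0][preds != y[keep]]] = False

print(f"samples kept: {keep.sum()} of 59")
print(f"'validated' accuracy after filtering: {cv_accuracy(keep):.2f}")  # much higher
```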

Instead of taking the matter seriously, the deans pressured Perez to keep quiet. And nothing happened for two more years.

The good news: Bradford Perez seems to have gotten a perfectly good job.

The bad news: the deans at Duke suck. Unfortunately I don’t know exactly which deans these were or what their job titles are, but still: why are they not under investigation? What would deans have to do – or not do – to get in trouble? Is there any kind of accountability here?

Goldman Sachs explains: social impact bonds are socially bankrupt

Have you ever heard of a social impact bond? It’s a kooky financial instrument – a bond that “pays off” when some socially desirable outcome is reached.

The idea is that people with money put that money to some “positive” purpose, and if it works out they get their money back with a bonus for a job well done. It’s meant to incentivize socially positive change in the market. Instead of only caring about profit, the reasoning goes, social impact bonds will give rich people and companies a reason to care about healthy communities.

So, for example, New York City launched a social impact bond in 2012 around recidivism at its jails. Recidivism, the tendency for people released from jail to end up back inside, has to go down for the bond to pay off. So Goldman Sachs made a bet that they could lower the recidivism rate for certain jails in the NYC area.
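Mechanically, the payoff structure looks something like this sketch. It’s my own toy version with invented thresholds and rates; real deals have sliding scales, guarantees, and plenty of fine print.

```python
def sib_payout(principal, recidivism_drop, target_drop=0.10, bonus_rate=0.05):
    """Toy social impact bond payoff (invented numbers).

    The investor finances the program up front. The government repays
    principal plus a bonus only if the measured outcome -- here, the
    drop in the recidivism rate -- clears an agreed-upon threshold.
    """
    if recidivism_drop >= target_drop:
        return principal * (1 + bonus_rate)  # success: principal plus bonus
    return 0.0                               # failure: the investor eats the loss

print(sib_payout(10_000_000, 0.12))  # threshold met: government repays with bonus
print(sib_payout(10_000_000, 0.04))  # threshold missed: government pays nothing
```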

Or who knows, maybe they sold that bond three times over to their clients, and they are left short the bond. Maybe they are actually, internally, making a bet that more people are going to jail in the future. That’s the thing about financial instruments, they are flexible little doodads.

Also, and here’s a crucial element to look for when you hear about social impact bonds: the city of New York didn’t actually put up any money to issue the bonds. That was done instead by Goldman Sachs and MDRC, a local nonprofit. However, NYC might be on the hook if recidivism rates actually go down. On the other hand, fewer people would be in jail in that case, so maybe the numbers would work out overall; I’m not sure. Theoretically, in that best-case scenario, the city would also learn how to reduce recidivism rates, so they’d be happy about that as well.

Which is how we get to the underlying goal of the social impact bond: namely, finding privately financed “solutions” to social problems. The reasoning is that governments are inefficient and cannot be expected to solve deep problems associated with jails or homelessness, but private companies and possibly innovative non-profits might have the answers.

As another example, there’s a Massachusetts anti-homelessness social impact bond initiative, set up in 2014 with $1 million in philanthropic funding and $2.5 million in private capital investments, with the following description: “the investors assume project risk by financing services up front with the promise of Commonwealth repayment only in the event of success”.

There are actually a ton of examples. This is the new, hot way to run social experiments. Take a look here for an incomplete list. It’s international, too: mostly the US and the UK so far, but New Zealand is throwing its hat into the ring as well.

It’s a good idea to try things out and see what works for big problems like homelessness and recidivism. That’s not up for debate. However, it’s not clear that social impact bonds are the best approach. There’s a real danger that this ends up a lot like the charter school movement: numbers juiced by weeding out problematic students, no accountability, and “solutions” that don’t scale even when success is touted.

Here’s a big red flag on the whole social impact bond parade: Goldman Sachs was caught rigging the definition of success for a social impact bond in Utah. It revolved around a preschool program that was supposed to keep kids out of special ed. As usual, it was hailed by the Utah Governor as “a model for a new way of financing public projects.” But when enormous success was claimed, it seemed like the books had been cooked.

Basically, Goldman Sachs got paid back, and rewarded, if enough kids who were expected to go into special ed actually didn’t. But the problems started with how they found the kids “expected to go into special ed.”

Namely, they administered a test known as the PPVT, and if a kid scored lower than 70, they were deemed “headed to special ed.” But the test was administered in English, when up to half of the preschoolers didn’t speak English at home. Also, the PPVT was never meant to screen kids for special ed needs in the first place. In fact, it’s a vocabulary test: kids are shown pictures, a word or two of description – in English – is spoken, and the kid is supposed to say the number of the picture matching the description.

Weirdly, non-native speakers didn’t do so well.

Lo and behold, after a couple of years in which the kids learned English, most of them headed to normal classrooms, and Goldman Sachs got paid back. From the article:

From 2006 to 2009, 30 to 40 percent of the children in the preschool program scored below 70 on the P.P.V.T., even though typically just 3 percent of 4-year-olds score this low. Almost none of the children ended up needing special education.
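The base-rate arithmetic is the smoking gun here. A quick back-of-envelope using the figures quoted above, with 35% as a representative flag rate:

```python
flag_rate = 0.35      # 30-40% of the preschoolers scored below 70 on the PPVT
typical_rate = 0.03   # ~3% of 4-year-olds normally score that low

print(flag_rate / typical_rate)  # ~11.7x the usual rate
```

In other words, kids were labeled “headed to special ed” at more than ten times the usual rate, so most of them were never going to need special ed at all, which made “keeping them out of it” nearly automatic, and the payouts along with it.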

Let’s take a step back. We’re asking private finance companies for help solving big, hard societal problems, and we’re putting huge money on the line. There’s a problem with this approach: we’re asking for gaming like the above. We should expect to see more of it.

Worst case scenario: financiers are betting against the “socially beneficial” outcomes. It’s possible; we saw it happen in the housing crisis. From their perspective, it doesn’t make sense to have a market where you can’t bet against something, and if they think the chances of a positive outcome are overblown, they’d be stupid not to. And of course, if they can influence the result directly, then why not? It could get ugly.

Here’s my hope: that we soon realize that engaging like this doesn’t solve any problems, and moreover it wastes time and money. Financial incentives are not compatible with the scientific approach, and basic research depends on money not being directly involved. When private financiers want to get involved in this stuff, it’s because they can profit off of it, not because they want to help.


Fox reporter needs math help

Yesterday I gave my day-long tutorial on data science here in Stockholm. The only weird thing was that Swedish audiences are super quiet and polite, so my material went way faster than I’d planned. But during lunch and bathroom breaks they were extremely outgoing and positive, so I’m going to assume it went well.

This morning I’m scheduled to give a talk at a Statistics Sweden conference. I’m planning to talk a lot about each slide so I don’t end 15 minutes early. I have just enough time right now to share this amusing email, originally written by a Columbia University Public Affairs Officer and forwarded to me by an anonymous source:


A reporter with Fox News just called looking for some help calculating percentages for a story he’s preparing for tonight’s broadcast. He’s looking for someone who can help him explain what percentage of $60 billion would $325,000 be. It’s for a  story about the NY State $60 billion budget, of which $325,000 was found in fraud. It’s not a controversial story and he’s just looking for someone who can help him explain how much this is in lay terms. It does not have to be on the record.

Do we have any mathematics professors who could help with this calculation in the next hour or so? Thanks!
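For the record, here’s the calculation the reporter needed, no mathematics professor required:

```python
fraud = 325_000
budget = 60_000_000_000
print(f"{fraud / budget:.6%}")  # 0.000542% of the budget
```

In lay terms: about half a cent of fraud per thousand budget dollars.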


A couple of quick links

I’m in Stockholm, trying desperately to get over jet lag in time for my day-long tutorial tomorrow. Before I go out on the required yarn store walking tour I wanted to share two things with you:

First, I’m super proud of my Occupy group’s recent Huffington Post submission, an essay entitled Free Markets Ideology is Making Us Sick. This is the first of quite a few essays we’re planning on the topic, which is the driving theme of the series.

Second, I want you to consider listening to the most recent Slate Money podcast, where we interviewed University of Georgia Law Professor Mehrsa Baradaran, who recently wrote How The Other Half Banks. It’s a fantastic book, and I might write a review of it soon for those people (like me!) who don’t listen to podcasts. Bottom line: we should start a postal bank. Or rather, restart one.


The gender wage gap is not misleading

The U.S. gender wage gap is the difference between what the median woman earns and what the median man earns in the United States. Since women earn consistently less than men, it’s typically quoted as the percentage that the median woman’s pay is of the median man’s pay. It has gone up slowly over time.

You can also break it down by age, by race, by location, by percentile, or by occupation. You’ll find that the gender wage gap rises and falls depending on how you measure it and what restrictions you set.

I’m bringing up this simple statistic because I’ve noticed that recently, when it comes up in conversation, the person I’m talking to will often say that it’s “misleading.” When I ask them why, they mention that “women choose jobs that don’t pay as well.”

Well, I think this is incorrect. Or rather, I think that, taken as a whole, including socialization and how our culture values work, and so on, the simplistic statistic represented by the gender wage gap is actually pretty sophisticated. It captures a lot of the nuances of our sexist culture.

For example, it’s true that not as many women choose to become mathematicians versus, say, high school math teachers. But is this really an independent choice that young women make? Or is it a socialized choice? In other words, are women squeezed out of the mindset whereby they’d even consider that path? Obviously the answer is “a bit of both.”

On the statistics side, then, it’s not enough to only consider “women who became mathematicians versus men who became mathematicians” when comparing ultimate wages. That would ignore the implicit socialization that keeps women away from higher-paying jobs. Indeed, if you think about it, you’d really want to compare “women who might have become mathematicians if there weren’t so many barriers to doing so” with “men who might have become mathematicians if there weren’t so many barriers to doing so.” I say it like that because, of course, there are plenty of barriers for both men and women, although I’m pretty sure not as many men had their 6th grade teacher explicitly tell them not to study math because they “wouldn’t need it later in life,” like I did.

The problem is, it’s hard to find those groups of people, because a good fraction of them didn’t become mathematicians or even high school math teachers. So we’re kind of left without a statistic at all for math nerds, if we are being honest. We just can’t collect the relevant data.

However, this same argument applies to basically every high-paying career. In fact it applies to every career, if you’re willing to generalize a bit and point out that some jobs are shunned by men for mostly social reasons, and those jobs also happen to be relatively underpaid.

So what we do, to be statistically correct, is we pool all the “women who might have done X” and we compare them against all the “men who might have done X,” where X varies over everything, and we get the best version of the gender wage gap that we can. And that’s actually what we’ve done when we compute the above statistic. It’s not misleading at all, in other words, when you take into account weird social rules we have around who should do what job and how much that job should be valued.

Just to give another example of how strong a signal the gender wage gap represents, imagine that we instead separated the population into two different groups: humans born during an even hour of the day versus humans born during an odd hour of the day. We wouldn’t expect to see a huge wage gap then, would we? And that’s because we don’t think the evenness or oddness of the hour you were born in dictates much about your choice of work or your ability to command a good salary.
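If you want to see that null hypothesis in action, here’s a tiny simulation. The wages are invented (lognormal), and a coin flip stands in for birth-hour parity:

```python
import numpy as np

rng = np.random.default_rng(42)
wages = rng.lognormal(mean=10.8, sigma=0.5, size=100_000)  # fake wage data

# Split on something irrelevant, like birth-hour parity.
even_hour = rng.integers(0, 2, size=wages.size).astype(bool)

ratio = np.median(wages[even_hour]) / np.median(wages[~even_hour])
print(f"median wage ratio across an arbitrary split: {ratio:.3f}")  # ~1.000
```

An arbitrary split gives a ratio of essentially 1. The large, persistent gap in the real statistic is telling us about gender, not chance.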

