Archive for the ‘modeling’ Category

Reforming the data-driven justice system

This article from the New York Times really interests me. It’s entitled Unlikely Cause Unites the Left and the Right: Justice Reform, and although it doesn’t specifically mention “data-driven” approaches to justice reform, it describes “emerging proposals to reduce prison populations, overhaul sentencing, reduce recidivism and take on similar initiatives.”

I think this sentence, especially the reference to reducing recidivism, is code for the evidence-based sentencing that my friend Luis Daniel recently posted about. I recently finished a draft chapter of my book about such “big data” models, and after much research I can assure you that this stuff runs the gamut from putting poor people away for longer because they’re poor to actually focusing resources where they’re needed.

The idea that there’s a coalition that’s taking this on that includes both Koch Industries and the ACLU is fascinating and bizarre and – if I may exhibit a rare moment of optimism – hopeful. In particular I’m desperately hoping they have involved people who understand enough about modeling not to assume that the results of models are “objective”.

There are, in fact, lots of ways to set up data-gathering and usage in the justice system to actively fight against unfairness and unreasonably long incarcerations, rather than to simply codify such practices. I hope some of that conversation happens soon.

Categories: data science, modeling

Creepy big data health models

There’s an excellent Wall Street Journal article by Joseph Walker, entitled Can a Smartphone Tell if You’re Depressed?, that describes a lot of creepy new big data projects going on in healthcare right now, in partnership with hospitals and insurance companies.

Some of the models come in the form of apps, created and managed by private, third-party companies, that try to predict depression in, for example, postpartum women. According to the article, they don’t disclose to many of the women what they’re doing, or the extent of it. At the end of the day the companies own the data they’ve collected and, presumably, can sell it to anyone interested in whether a woman is depressed, such as future employers. To be clear, this data is generally not covered by HIPAA.

Perhaps the creepiest example is a voice analysis model:

Nurses employed by Aetna have used voice-analysis software since 2012 to detect signs of depression during calls with customers who receive short-term disability benefits because of injury or illness. The software looks for patterns in the pace and tone of voices that can predict “whether the person is engaged with activities like physical therapy or taking the right kinds of medications,” Michael Palmer, Aetna’s chief innovation and digital officer, says.

Patients aren’t informed that their voices are being analyzed, Tammy Arnold, an Aetna spokeswoman, says. The company tells patients the calls are being “recorded for quality,” she says.

“There is concern that with more detailed notification, a member may alter his or her responses or tone (intentionally or unintentionally) in an effort to influence the tool or just in anticipation of the tool,” Ms. Arnold said in an email.

In other words, in the name of “fear of gaming the model,” we are not disclosing the creepy methods we are using. Also, considering that the targets of this model are receiving disability benefits, I’m wondering if the real goal is to catch someone off their meds and disqualify them from further benefits, or something along those lines. Since people don’t know they’re being modeled, they’d never find out.

Conclusion: we need more regulation around big data in healthcare.

Categories: data journalism, modeling, rant

Big data and class

About a month ago there was an interesting article in the New York Times entitled Blowing Off Class? We Know. It discusses the “big data” movement in colleges around the country. For example, at Ball State, they track which students go to parties at the student center. Presumably to help them study for tests, or maybe to figure out which ones to hit up for alumni gifts later on.

There’s a lot to discuss in this article, but I want to focus today on one piece:

Big data has a lot of influential and moneyed advocates behind it, and I’ve asked some of them whether their enthusiasm might also be tinged with a little paternalism. After all, you don’t see elite institutions regularly tracking their students’ comings and goings this way. Big data advocates don’t dispute that, but they also note that elite institutions can ensure that their students succeed simply by being very selective in the first place.

The rest “get the students they get,” said William F. L. Moses, the managing director of education programs at the Kresge Foundation, which has given grants to the innovation alliance and to bolster data-analytics efforts at other colleges. “They have a moral obligation to help them succeed.”

This is a sentiment I’ve noticed a lot, although it’s not usually this obvious. Namely, the elite don’t need to be monitored, but the rabble do. The rich and powerful get to be quirky philosophers, but the rest of the population needs to be ranked and filed. And, by the way, we are spying on them for their own good.

In other words, never mind how big data creates and expands classism; classism already helps decide who is put into the realm of big data in the first place.

It feeds into the larger question of who is entitled to privacy. If you want to be strict about your definition of privacy, you might say “nobody.” But if you recognize that privacy is a spectrum, where a variable amount of information is collected on people and a variable amount of control is exerted over those people, then you will conclude that privacy, or at least relative privacy, is for the rich and powerful. And it starts early.

Mortgage tax deductions and gentrification

Yesterday we had a tax expert come talk to us at the Alternative Banking group. We mostly focused on the mortgage tax deduction, whereby people deduct the interest they pay on their mortgages from their taxable income. It’s the single biggest tax deduction in America for individuals.

At first blush, this doesn’t seem all that interesting, even if it’s strange. Whether people are benefitting directly from this, or through their rent being lower because their landlord benefits, it’s a fact of life for Americans. Whoopdedoo.

Generally speaking, other countries don’t have a mortgage tax deduction, so we can compare homeownership rates to judge whether the deduction leads to more overall homeownership, which is presumably what it was intended for. The data seems to suggest the answer is no.

We can also imagine removing the mortgage tax deduction, and we quickly realize that such a move would seriously impair lots of people’s financial planning, so we’d have to do it very slowly if at all.

But before we imagine removing it, is it even a problem?

Well, yes, actually. Let’s think about it a little bit more, and for the sake of this discussion we will model the tax system very simply as progressive: the more income you collect yearly, the more taxes you pay. Also, there is a $1.1 million (or so) cap on the mortgage tax deduction, so it doesn’t apply to uber wealthy borrowers with huge houses. But for the rest of us it does apply.

OK now let’s think a little harder about what happens in the housing market when the government offers a tax deduction. Namely, the prices go up to compensate. It’s kind of like a rebate: this house is $100K with no deduction, but with a $20K deduction I can charge $120K for it.

But it’s a little more complicated than that, since people’s different income levels correspond to different deductions. So a lower middle class neighborhood’s houses will be inflated by less than an upper middle class neighborhood’s houses.

At first blush, this seems ok too: so richer people’s houses are inflated slightly more. It means it’s slightly harder for them to get in on the home ownership game, but it also means that, come time to sell, their house is worth more. For them, a $400K house is inflated not by 20% but by 35%, or whatever their tax bracket is.

So far so good? Now let’s add one more layer of complexity, namely that neighborhoods are not statically “upper middle class” or “lower middle class.” As a group, neighborhoods and their associated classes form a dynamical system, where certain kinds of neighborhoods expand or contract. Colloquially we refer to this as gentrification or going to hell, depending on which direction it is. Let’s explore the effect of the mortgage tax deduction on how that dynamical system operates.

Imagine a house which is exactly on the border between a middle class neighborhood and an upper-middle class neighborhood. If we imagine that it’s a middle class home, the price of it has only been inflated by a middle-class income tax bracket, so 20% for the sake of argument. But if we instead imagine it is in the upper-middle class neighborhood, it should really be inflated by 35%.

In other words, it’s under-priced from the perspective of the richer neighborhood. They will have an easier time affording it. The overall effect is that it is easier for someone from the richer neighborhood to snatch up that house, thereby extending their neighborhood a bit. Gentrification modeled.

Put another way, the same house at the same price is more expensive for a poorer person, because the mortgage tax deduction doesn’t benefit everyone equally.
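Since this is really a toy model, it’s easy to write down. Here’s a minimal sketch in Python, with invented numbers, assuming (my simplification, not an established fact) that asking prices inflate by exactly the typical local buyer’s tax bracket:

```python
# Toy model, invented numbers: asking prices absorb the typical local buyer's
# tax subsidy, so the same border house is effectively cheaper for the buyer
# in the higher bracket.

BASE_VALUE = 400_000  # what the house would fetch with no deduction at all

def asking_price(neighborhood_bracket):
    # Sellers price to the deduction of the typical local buyer.
    return BASE_VALUE * (1 + neighborhood_bracket)

def effective_cost(price, buyer_bracket):
    # What the house really costs once the buyer's own deduction kicks in.
    return price / (1 + buyer_bracket)

# The border house is priced like the middle-class side of the street:
price = asking_price(0.20)              # $480,000

print(effective_cost(price, 0.20))      # middle-class buyer: $400,000
print(effective_cost(price, 0.35))      # upper-middle buyer: ~$355,556
```

The buyer in the higher bracket effectively pays about $44,000 less for the identical house, which is the gentrification pressure described above.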

Another related point: if I’m a home builder, I will want to build homes with a maximal mark-up, which is to say a maximal inflation level. That means building for the richest buyers who haven’t actually exceeded the $1.1 million cap.

Conclusion: the mortgage tax deduction has an overall negative effect, encouraging gentrification, unfair competition, and too many homes for the wealthy. We should phase it out slowly, and also slowly lower the cap. At the very very least we should not let the cap rise, which will mean it effectively goes down over time as inflation does its thing.

If this has been tested or observed with data, please send me references.

Categories: #OWS, economics, modeling

Fairness, accountability, and transparency in big data models

As I wrote about already, last Friday I attended a one day workshop in Montreal called FATML: Fairness, Accountability, and Transparency in Machine Learning. It was part of the NIPS conference for computer science, and there were tons of nerds there, and I mean tons. I wanted to give a report on the day, as well as some observations.

First of all, I am super excited that this workshop happened at all. When I left my job at Intent Media in 2011 with the intention of studying these questions and eventually writing a book about them, they were, as far as I know, on nobody else’s radar. Now, thanks to the organizers Solon and Moritz, there are communities of people from law, computer science, and policy circles coming together to exchange ideas and strategies to tackle the problems. This is what progress feels like!

OK, so on to what the day contained and my copious comments.

Hannah Wallach

Sadly, I missed the first two talks, and an introduction to the day, because of two airplane cancellations (boo American Airlines!). I arrived in the middle of Hannah Wallach’s talk, the abstract of which is located here. Her talk was interesting, and I liked her idea of having social scientists partnered with data scientists and machine learning specialists, but I do want to mention that, although there’s a remarkable history of social scientists working within tech companies – say at Bell Labs and Microsoft and such – we don’t see that in finance at all, nor does it seem poised to happen. So in other words, we certainly can’t count on social scientists to be on hand when important mathematical models are getting ready for production.

Also, I liked Hannah’s three categories of models: predictive, explanatory, and exploratory. Even though I don’t necessarily think that a given model will fall neatly into one category or another, they still give you a way to think about what we do when we make models. As an example, we think of recommendation models as ultimately predictive, but they are (often) predicated on the ability to understand people’s desires as made up of distinct and consistent dimensions of personality (like when we use PCA or something equivalent). In this sense we are also exploring how to model human desire and consistency. For that matter, I guess you could say any model is at its heart an exploration into whether the underlying toy model makes any sense, but that question is dramatically less interesting when you’re using linear regression.
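To make that PCA-style assumption concrete, here’s a tiny, invented illustration, nothing like a production recommender: factor a made-up user-by-item ratings matrix with a truncated SVD and read off the latent “taste” dimensions.

```python
# Invented ratings matrix: rows are users, columns are items. The first two
# users like the first two items; the last two like the last two items.
import numpy as np

ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

U, s, Vt = np.linalg.svd(ratings, full_matrices=False)
k = 2  # keep two latent "dimensions of desire"
approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

print(np.round(Vt[:k], 2))  # each row: one latent taste dimension over items
print(np.round(approx, 1))  # the predictions the toy model would make
```

The predictive part only works to the extent that those recovered dimensions actually mean something about people, which is the exploratory question hiding inside it.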

Anupam Datta and Michael Tschantz

Next up Michael Tschantz reported on work he’s done with Anupam Datta on Google profiles and Google ads. They started with Google’s privacy policy, which I can’t find but which claims you won’t receive ads based on things like your health problems. Starting with a bunch of browsers with no cookies, and thinking of each of them as a fake user, they ran experiments to see what actually happened, both to the ads shown to those fake users and to the Google ad profiles for each of them. They found that, at least sometimes, they did get the “wrong” kind of ad, although whether Google can be blamed, or whether the advertiser had broken Google’s rules, isn’t clear. They also found that fake “women” and “men” (who did not differ by any other variable, including their searches) were offered drastically different ads related to job searches, with men being offered way more ads for $200K+ jobs. Those ads were basically coaching sessions for getting good jobs, though, so again the advertisers could have decided that men are more willing to pay for such coaching.
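For flavor, here’s a minimal sketch of the shape of such an experiment, with completely invented ad counts; their actual methodology is more careful than this. It’s a permutation test asking whether the gap in job-coaching ads between otherwise-identical “male” and “female” fake users could plausibly be chance.

```python
# Invented data: how many times each fresh, otherwise-identical fake user
# profile was shown the $200K+ job-coaching ad.
import numpy as np

rng = np.random.default_rng(0)

male_profiles = np.array([42, 37, 45, 40, 38, 44, 41, 39, 43, 36])
female_profiles = np.array([8, 12, 5, 9, 11, 7, 10, 6, 13, 9])

observed = male_profiles.mean() - female_profiles.mean()

# Permutation test: if gender were irrelevant, shuffling the labels should
# produce gaps as large as the observed one fairly often.
pooled = np.concatenate([male_profiles, female_profiles])
n = len(male_profiles)
trials = 10_000
count = 0
for _ in range(trials):
    rng.shuffle(pooled)
    gap = pooled[:n].mean() - pooled[n:].mean()
    if abs(gap) >= abs(observed):
        count += 1

print(f"observed gap: {observed:.1f} ads, p-value: {count / trials:.4f}")
```

Of course, as came up in the discussion, a tiny p-value today says nothing about what the same experiment would show an hour later.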

An issue I enjoyed talking about was brought up in this talk, namely the question of whether such a finding is entirely evanescent or whether we can call it “real.” Since Google constantly updates its algorithm, and since ad budgets come and go, even the same experiment performed an hour later might have different results. In what sense can we then call any such experiment statistically significant, or even persuasive? Also, IRL we don’t have clean browsers, so what happens when we have dirty browsers and we’re logged into Gmail and Facebook? By then there are so many variables it’s hard to say what leads to what, but should that make us stop trying?

From my perspective, I’d like to see more research into questions like: of the top 100 advertisers on Google, who saw the majority of their ads? What was the economic, racial, and educational makeup of those users? A similar but different (because of the auction) exercise would be to reverse-engineer the advertisers’ Google ad targeting methodologies.

Finally, the speakers mentioned a transparency failure on Google’s part. In your advertising profile, for example, you cannot see (and therefore cannot change) your marital status, but advertisers can target you based on that variable.

Sorelle Friedler, Carlos Scheidegger, and Suresh Venkatasubramanian

Next up we had Sorelle talk to us about her work with two guys with enormous names. They think about how to make stuff fair, the heart of the question of this workshop.

First, if we included race in a resume-sorting model, we’d probably see a negative impact because of historical racism. Even if we removed race but included other attributes correlated with race (say, zip code), this effect would remain. It’s hard to know exactly when we’ve removed the relevant attributes, but one thing these guys did was define that precisely.

Second, say you now have some idea of the categories that are given unfair treatment; what can you do? One thing suggested by Sorelle et al. is to first rank people in each category, assigning each person a percentile within their given category, and then to apply the “forgetful function” and only consider that percentile. So, if we decided at a math department that we want 40% women among graduate students, to achieve this goal with this method we’d independently rank the women and the men, and then offer enough spots to the top women and, separately, to the top men to hit each quota. Note that, although it comes from a pretty fancy setting, this is essentially affirmative action. That’s not, in my opinion, an argument against it. It’s in fact yet another argument for it: if we know women are systemically undervalued, we have to fight against that somehow, and this seems like the best and simplest approach.
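Here’s a minimal sketch of the method as I understood it, with invented names and scores; Sorelle et al.’s actual formulation is more precise than this.

```python
def percentiles(group):
    """Return (candidate, within-group percentile) pairs, best first."""
    ranked = sorted(group, key=lambda c: c[1], reverse=True)
    n = len(ranked)
    return [(name, 1 - i / n) for i, (name, _score) in enumerate(ranked)]

def admit(groups, quotas):
    """Admit the top of each group by percentile alone, per-group quotas."""
    admitted = []
    for g, members in groups.items():
        # The "forgetful" step: raw scores are gone, only percentiles remain.
        admitted += [name for name, _ in percentiles(members)[:quotas[g]]]
    return admitted

women = [("Ada", 91), ("Emmy", 88), ("Sofia", 79), ("Maryam", 95)]
men = [("Carl", 97), ("Evariste", 90), ("Srinivasa", 93), ("Pierre", 84)]

# A cohort of 5 with a 40% target for women: 2 spots and 3 spots.
print(admit({"women": women, "men": men}, {"women": 2, "men": 3}))
# ['Maryam', 'Ada', 'Carl', 'Srinivasa', 'Evariste']
```

The point of the percentile step is that the two groups never compete on raw scores, which is exactly where the historical unfairness lives.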

Ed Felten and Josh Kroll

After lunch Ed Felten and Josh Kroll jointly described their work on making algorithms accountable. Basically they suggested a trustworthy and encrypted system of paper trails that would support a given algorithm (it doesn’t really matter which) and create verifiable proofs that the algorithm was used faithfully and fairly in a given situation. Of course, we’d really only consider an algorithm to be used “fairly” if the algorithm itself is fair, but putting that aside, this addresses the question of whether the same algorithm was used for everyone, and things like that. In lawyer speak, this is called “procedural fairness.”

So, for example, if we thought we could, we might want to run the algorithm that determines punishment for drug use through this system, and we might find that the rules are applied differently to different people. This system would catch that kind of problem, at least ideally.
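To give a feel for just the commitment piece, here’s a toy version in Python; their actual design uses real cryptographic machinery like zero-knowledge proofs, so this only shows the commit-then-verify shape, with a hypothetical sentencing rule invented for illustration.

```python
import hashlib

def commit(rule_source: str) -> str:
    """Publish this digest before the algorithm is ever run on anyone."""
    return hashlib.sha256(rule_source.encode()).hexdigest()

def verify(rule_source: str, published_digest: str) -> bool:
    """Later, check the rule you were judged by against the commitment."""
    return commit(rule_source) == published_digest

# A hypothetical sentencing rule, just for illustration.
RULE = "def sentence(offense, priors): return BASE[offense] + 2 * priors"

digest = commit(RULE)               # published up front, in some registry
print(verify(RULE, digest))         # True: the committed rule was used
print(verify(RULE + "#", digest))   # False: the rule was quietly changed
```

A paper trail of commitments like this is what would let a defendant check that the rules applied to them were the same ones applied to everyone else.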

David Robinson and Harlan Yu

Next up we talked to David Robinson and Harlan Yu about their work in Washington D.C. with policy makers and civil rights groups around machine learning and fairness. These two have been active with civil rights groups and were an important part of both the Podesta Report, which I blogged about here, and also of drafting the Civil Rights Principles of Big Data.

The question of what policy makers understand, and how to communicate with them, came up several times in this discussion. We decided that, to combat the cherry-picked examples we see in Congressional subcommittee meetings, we need cherry-picked examples of our own to illustrate what can go wrong. That sounds bad, but put it another way: people respond to stories, especially stories with innocent victims who have been wronged. So we are on the look-out.

Closing panel with Rayid Ghani and Foster Provost

I was on the closing panel with Rayid Ghani and Foster Provost, and we each had a few minutes to speak and then there were lots of questions and fun arguments. To be honest, since I was so in the moment during this panel, and also because I was jonesing for a beer, I can’t remember everything that happened.

As I remember, Foster talked about an algorithm he had created that does its best to “explain” the decisions of a complicated black box algorithm. So in real life our algorithms are really huge and messy and uninterpretable, but this algorithm does its part to add interpretability to the outcomes of that huge black box. The example he gave was to understand why a given person’s Facebook “likes” made a black box algorithm predict they were gay: by displaying, in order of importance, which likes added the most predictive power to the algorithm.
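Here’s a minimal sketch of that flavor of explanation, not Foster’s actual algorithm: for a toy logistic model over binary “like” features, rank each like by how much deleting it alone changes the prediction. The likes and weights are invented.

```python
import math

# Invented weights: how much each "like" pushes the black box's score.
WEIGHTS = {"Lady Gaga": 1.4, "Glee": 1.1, "Britney Spears": 0.9,
           "NASCAR": -0.8, "Hunting": -1.2}
BIAS = -1.0

def score(likes):
    z = BIAS + sum(WEIGHTS.get(like, 0.0) for like in likes)
    return 1 / (1 + math.exp(-z))  # predicted probability

def explain(likes):
    """Rank likes by how much removing each one alone drops the score."""
    base = score(likes)
    drops = {l: base - score([x for x in likes if x != l]) for l in likes}
    return sorted(drops.items(), key=lambda kv: kv[1], reverse=True)

user = ["Lady Gaga", "Glee", "NASCAR"]
for like, drop in explain(user):
    print(f"{like}: removing it changes the prediction by {drop:+.3f}")
```

Run on this toy user, “Lady Gaga” comes out as the most load-bearing piece of evidence, which is the kind of ordered list Foster described.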

[Aside, can anyone explain to me what happens when such an algorithm comes across a person with very few likes? I’ve never understood this very well. I don’t know about you, but I have never “liked” anything on Facebook except my friends’ posts.]

Rayid talked about his work trying to develop a system for teachers to understand which students were at risk of dropping out, and for that system to be fair, and he discussed the extent to which that system could or should be transparent.

Oh yeah, and that reminds me that, after describing my book, we had a pretty great argument about whether credit scoring models should be open source, and what that would mean, and what feedback loops that would engender, and who would benefit.

Altogether a great day, and a fantastic discussion. Thanks again to Solon and Moritz for their work in organizing it.

FATML and next Saturday’s Eric Garner protest

December 8, 2014

At the end of this week I’ll be heading up to Montreal to attend and participate in a one-day workshop called Fairness, Accountability, and Transparency in Machine Learning (FATML), as part of a larger machine learning conference called NIPS. It’s being organized by Solon Barocas and Moritz Hardt, who kindly put me on the closing panel of the day with Rayid Ghani, who among other things runs the Data Science for Social Good Summer Fellowship out of the University of Chicago, and Foster Provost, an NYU professor of computer science at the Stern School of Business.

On the panel, we will be discussing examples of data driven projects and decisions where fairness, accountability, and transparency came into play, or should have. I’ve got lots!

When I get back from Montreal, late on Saturday morning, I’m hoping to have the chance to make my way over to Washington Square Park at 2pm to catch a large Eric Garner protest. It’s actually a satellite of a Washington D.C. protest called for by Rev. Al Sharpton and described as a “National March Against Police Violence”. Here’s what I grabbed off Twitter:

[Screenshot of the protest announcement, via Twitter]

Categories: #OWS, modeling

Inflation for the rich

I’m preparing for my weekly Slate Money podcast – this week, unequal public school funding, Taylor Swift versus Spotify, and the economics of weed, which will be fun – and I keep coming back to something I mentioned last week on Slate Money when we were talking about the end of the Fed program of quantitative easing (QE).

First, consider what QE comprised:

  1. QE1 (2008 – 2010): $1.65 trillion invested in bonds and agency mortgage-backed securities,
  2. QE2 (2010 – 2011): another $600 billion, for a cumulative $2.25 trillion, and
  3. QE3 (2012 – 2014): $85 billion per month, for a total of about $3.7 trillion overall.
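A quick back-of-envelope check of those figures, ignoring the 2014 taper (so the month count is only indicative):

```python
# All figures in trillions of dollars, as quoted in the list above.
qe1 = 1.65
qe2 = 0.60
print(qe1 + qe2)            # 2.25, the cumulative figure after QE2
qe3 = 3.7 - (qe1 + qe2)     # about 1.45 trillion attributed to QE3
print(qe3 / 0.085)          # roughly 17 months' worth at $85B per month
```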

Just to understand that total, compare it to the GDP of the U.S. in 2013, at $16.8 trillion. Or to federal spending in 2012, which was $3.6 trillion (versus $2.5 trillion in tax revenue!).

Anyhoo, the point is, we really don’t know exactly what happened because of all this money, because we can’t go back in time and rerun history without the QEs. We can only guess, and of course mention a few things that didn’t happen. For example, the people against QE were convinced it would drive inflation up to crazy levels, which it hasn’t, although individual items and goods have of course gone up:

[Chart: United States inflation rate (CPI)]

Well but remember, the inflation rate is calculated in some weird way that economists have decided on, and we don’t really understand or trust it, right? Actually, there are a bunch of ways to measure inflation, including this one from M.I.T., and most of them kinda agree that stuff isn’t crazy right now.

So did QE1, 2, and 3 have no inflationary effect at all? Were the haters wrong?

My argument is that it did indeed cause inflation, but only for the rich, where by rich I mean the investor class. The stock market is at an all-time high, and rich people are way richer, and that doesn’t matter for any inflation calculation because the median income is flat, but it certainly matters for individuals who suddenly have a lot more money in their portfolios. They can compete for New York apartments and stuff.

As it turns out, there’s someone who agrees with me! You might recognize his name: billionaire and Argentinian public enemy #1 Paul Singer. According to Matt O’Brien of the Washington Post, Paul Singer is whining in his investor letter (excerpt here) about how expensive the Hamptons have gotten, as well as high-end art.

It’s “hyperinflation for the rich,” and we are not feeling very bad for them. In fact it has made matters worse: the very rich now have even less in common with the average person. And just in case you’re thinking, oh well, all those Steve Jobs types deserve their hyper-inflated success, keep in mind that more and more of the people we’re talking about come from inherited wealth.

Categories: economics, modeling, musing