Archive

Archive for the ‘modeling’ Category

Lean in to what?

Women are underrepresented in businesses like Goldman Sachs and JP Morgan Chase, especially in upper management. Why is that?

Many women never go into finance in the first place, and of course some of them do go in but leave. Why are they leaving, though? Is it because they don’t like success? Or they don’t like money? Are they forgetting to lean in sufficiently?

Here’s another possibility, which I dig. They’re less willing to sacrifice their ethics than their male colleagues for the sake of money and business success.

Last Friday I read a paper entitled Who Is Willing to Sacrifice Ethical Values for Money and Social Status? Gender Differences in Reactions to Ethical Compromises, written by Jessica A. Kennedy and Laura J. Kray. It offers distaste for ethical compromise as at least one contributing reason we don’t see as many women in business as we otherwise might.

The paper

Please read the paper for details; I’m only giving a very brief overview, without figures of statistical significance. They ran three experiments.

First they measured who was interested in jobs that involved major ethical compromises. It turns out that women were way less interested than men.

Second, to check whether that was because of the ethical compromises or because of the “job” part, they varied the job descriptions and found that, in the presence of a culture of good ethics, women were just as interested in a job as men.

Third, they checked on existing assumptions about the connection between ethics and various kinds of jobs, like law, medicine, and “business.” It turns out women associate compromised ethics with business, but less so with law and medicine.

Conclusion: we can attribute some of the lack of women in business to a combination of assumed and real ethical compromises.

Some thoughts

First, I love that this paper was written by two women. Maybe that’s what it took for such a common-sense idea to be tested.

Secondly, I think this paper should be kept in mind when we read things about how companies that are diverse are more successful. It’s probably because they are nice places to be that women and others are there, which in turn makes them more successful. It also explains why, when companies set out to be diverse, they often have so much trouble. They want to achieve diversity without changing their underlying culture.

Thirdly, I’m going to have to admit that men are under enormous pressure to succeed at all costs, which could explain why they’re more willing to become ethically compromised to be successful. That says something about our crazy expectations of men in this culture, which I think we need to address. I say that as a mother of three sons.

Finally, whenever I hear someone talking about “leaning in” from now on, I will ask them, “lean in to what?”.

Categories: finance, modeling

How do opinions and convictions propagate?

Yesterday I read an interesting paper entitled Social influence and the collective dynamics of opinion formation, written by Mehdi Moussaïd, Juliane E. Kämmer, Pantelis P. Analytis, and Hansjörg Neth, about how opinions and strength of conviction spread in a crowd with many interactions, and how consensus is reached. I found the paper on Twitter through Steven Strogatz’s feed.

The paper

First they studied individuals, and how they might update their opinion on some topic upon hearing someone else’s opinion. They chose super unpolitical questions like, “what is the melting point of aluminum?”.

The interesting thing they did was to track both the opinion and the conviction – how sure someone was.

As expected, people did update their opinion if they heard someone else had a somewhat similar opinion, especially if that other person had a stronger conviction. They tended to ignore opinions that were super different, especially if the convictions were weaker. Sometimes they even adopted the other person’s opinion, if it wasn’t too different and if their original conviction was very low. But most of the time they ignored stuff:


What was also interesting, and what we will get back to, is that when they heard other people had similar opinions to their own, their conviction went up without their opinion changing.

Next they used a computer simulation to see how opinions would propagate if no new information was introduced but many interactions occurred, if everyone acted the same in terms of updating opinions, and if they did so time after time.
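
To make that concrete, here’s a toy version in python of the kind of simulation I’m describing. Every number below – the thresholds, the step sizes, the population size – is my own guess for illustration, not the paper’s actual setup:

    import random

    # Toy opinion dynamics: repeated pairwise interactions with no new
    # information. All parameters are made up, not taken from the paper.
    N = 200       # number of agents
    TRUTH = 660   # e.g. the melting point of aluminum, in Celsius
    STEPS = 5000  # number of pairwise interactions

    opinions = [random.gauss(TRUTH, 200) for _ in range(N)]
    convictions = [random.random() for _ in range(N)]  # in [0, 1]

    for _ in range(STEPS):
        i, j = random.sample(range(N), 2)  # agent i hears agent j
        gap = abs(opinions[i] - opinions[j])
        if gap < 50:
            # Similar opinion: conviction goes up, opinion barely moves.
            convictions[i] = min(1.0, convictions[i] + 0.05)
            opinions[i] += 0.1 * (opinions[j] - opinions[i])
        elif gap < 200 and convictions[j] > convictions[i]:
            # Moderately different opinion held with stronger conviction:
            # compromise toward it.
            opinions[i] += 0.3 * (opinions[j] - opinions[i])
        # Otherwise: ignore it, which is what happens most of the time.
        # Note that a differing opinion never lowers conviction, which
        # drives the "high conviction regardless of accuracy" result below.

    print("mean opinion:", round(sum(opinions) / N), "vs truth:", TRUTH)
    print("mean conviction:", round(sum(convictions) / N, 2))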

So what were the results? I’ll explain a couple; please read the paper for more details, it’s short.

The most interesting to me was that, at the end of the day, after many interactions, the convictions of the group always ended up high, even if the answer was wrong. This is because, when people heard similar opinions, their convictions rose, but when they heard differing opinions their convictions didn’t fall. The end result is that, although high conviction correlated with being correct at the start, it had no correlation with being correct by the end.

In fact, conviction correlated to consensus rather than correctness after a few interactions. The takeaway is that, in the presence of not much information, strong convictions might just imply lots of local agreement.

The next result they found was that the dynamical system that was the opinion-making soup had two kinds of attractors. Namely, small groups of “experts,” defined as people with very strong convictions who were not necessarily correct (think Larry Summers), and large groups of people with low convictions who all happen to agree with each other.

The authors named these two attractors “the expert effect” and “the majority effect,” respectively. And if fewer than 15% of the population were experts, in the presence of a majority, the majority effect dominated.

Finally, the presence of random noise, corresponding to people with random opinions and random conviction levels, weakened both of the above effects. If 70% or more of the population was noise, the two effects described above vanished.

Thoughts on the paper

  1. One thing I’ve thought about a lot from working with my Occupy group is how opinions form on a given issue. Since we’re going for informed opinions, we very deliberately start out with a learning phase, which could last a long time depending on the complexity of the subject. We also have a thing against experts, although we do have to trust our sources when we read up on a topic. So it’s kind of a balancing act at all times.
  2. Also, of course, most opinions are not 1-dimensional. I can’t say my opinion on the Fed on a scale between 1 and 100, for example.
  3. Also, it’s not clear that I update my opinion on issues in exactly the same way each time I hear someone else’s. On the other hand I do continually revise my opinion on stuff.
  4. The study didn’t look at super political issues. I wonder if it’s different. I guess one of the big differences is in how often someone is truly neutral on a political topic. Maybe you could even define a topic as political in this context somehow, or at least build a test for the politicalness of a topic.
  5. Let’s assume it also works for political topics. Then the “I heard this so many times it must be true” effect seems to be directly in line with the agenda of Fox News. Also there’s the expert effect going on there as well.
  6. In any case it’s interesting to note that, if you’re trying to affect opinions, you might either go with “informing and educating the general public” on something or “building up a sufficient squad of experts” on that same thing, where experts are people with super strong opinions who have the ability to interact with lots of people.
Categories: modeling

What does a really efficient market look like?

The raison d’être of hedge funds is to make the markets efficient. Or at least that’s one of the raisons d’être, the others being 1) to get rich and 2) to leave early on Fridays in the summer (resp. winter) to get a jump on traffic to the Hamptons (resp. ski area, possibly in Kashmir).

And although having efficient markets sounds like a great thing, it makes sense to ask what that would look like from the perspective of a non-insider.

This recent Wall Street Journal article on high-tech snooping does a pretty good job setting the tone here. First, the kind of thing they’re doing:

Genscape is at the vanguard of a growing industry that employs sophisticated surveillance and data-crunching technology to supply traders with nonpublic information about topics including oil supplies, electric-power production, retail traffic and crop yields.

Next, who they’re doing it for:

The techniques, which are perfectly legal, represent the latest advance in the longtime Wall Street practice of searching for every possible trading advantage. But the high cost of much of the new information—Genscape’s oil-supply report costs $90,000 a year—means that some forms of trading are becoming even more the province of firms with substantial resources.

Let’s put these two things together from the perspective of the public. The market is getting information from hidden cameras and sensors, and all that information is being fed to “the market” via proprietary hedge funds via channels we will never tap into. The end result is that the prices of commodities are being adjusted to real-world events more and more quickly, but these are events that are not truly known to the real world.

[Aside: I'm going to try to avoid talking about the "true price" of things like gas, because I think that's pretty much a fool's errand. In any case, let me just say that, in addition to the potentially realtime sensor information that goes into a commodity's price, we also have people trading on it because they are adjusting their exposure to some other historically correlated or anti-correlated instrument, or because they've decided to liquidate their books, or because they've decided the Fed has changed its macroeconomic policy, or because Spain needs to deal with its bank problems, or because someone wants to take money out of the market to rent their summer house in the Hamptons. In other words, I'm not ready to argue that we're getting close to the "true price" of gas here. It's just tradable information like any other.]

I am now prepared, as you hopefully are as well, to question what good this all does for people like us, who are not privy to the kind of expensive information required to make these trades. From our perspective, nothing happens, the price fluctuates, and the market is deemed efficient. Is this actually an improvement over the alternative version where something happens, and then the price adjusts? It’s an expensive arms race, taking up vast resources, where things have only become more opaque.

How vast are those resources? Having worked in finance, I know the answer is a shit-ton, if it is profitable in a short-term edgy kind of way. Just as those guys dug a hole through mountains to make the connection between New York and Chicago a few milliseconds faster, they will go to any length to get the newest info on the market, as long as it is deemed to have a profitable edge in some time frame – i.e. the amount of time it will take a flood of competitors to do the same thing.

Just as there’s a kind of false myth that most of the web is porn, I’d like to perpetuate a new somewhat false myth that most data gathering and mining happens for the benefit of trading. And if that’s false now, let’s talk about it again in 100 years, when the market for celebrities is mature, and you can make money shorting a bad marriage.

Categories: finance, modeling, rant

Insulin level tracking?

I’m wondering something kind of stupid this morning, which is why we don’t track people’s insulin levels. Or maybe we do and I don’t know about it? In which case, how do I get myself a Quantified Self insulin tracker?

Here’s a bit of background to explain why I’m asking this. Insulin levels in your blood regulate the speed at which sugar in your blood is turned into fat, and similarly they regulate how quickly fat cells release fat into the bloodstream.

If someone without diabetes eats something sweet, their insulin levels shoot up to collect all the blood sugar so that blood sugar levels don’t get toxic. Because what’s really important, as diabetics know, is that blood sugar levels remain neither too high nor too low. Type I diabetics don’t make insulin, and Type II diabetics don’t respond to insulin properly.

OK so here’s the thing. I’m pretty sure my insulin levels are normally a bit high, and that when I eat sugar or bread they spike up dramatically. And although that might sound like the opposite of diabetes, it’s actually the precursor to diabetes, where my body goes nuts making insulin and then my organs become resistant to it.

But first of all, I’d like to know if I’m right – are my insulin levels elevated? – and second of all, I’d like to know how much my body reacts, insulin-wise, to eating and drinking various things. For instance, I drink a lot of coffee, and I’d like to know what that does to my insulin levels. And what about Coke Zero?

I am probably going to be disappointed here. I know that even the critical level for diabetics to track, blood sugar, is still pretty hard to measure, although recent continuous monitors do exist and are helping. So if anyone knows of a “continuous insulin monitor” please tell me!

One last word about why insulin. I am fairly convinced that people’s insulin levels – combined with a measure of their insulin resistance – would explain a lot about why certain people retain fat while others on the same diet don’t. And it would be such a relief to stop arguing about willpower and start understanding this stuff scientifically.

One last thing. Please do not comment below telling me how to lose weight or talking about how to have more willpower: I will delete such comments! Please do comment on the scientific issues around the mechanisms of insulin, data collection, and potentially modeling with that data.

Categories: modeling, musing

Computer, do I really want to get married?

There’s a new breed of models out there nowadays that reads your face for subtle expressions of emotions, possibly stuff that normal humans cannot pick up on. You can read more about it here, but suffice it to say it’s a perfect target for computers – something that is free information, that can be trained over many many examples, and then deployed everywhere and anywhere, even without our knowledge since surveillance cameras are so ubiquitous.

Plus, there are new studies that show that, whether you’re aware of it or not, a certain “gut feeling”, which researchers can get at by asking a few questions, will expose whether your marriage is likely to work out.

Let’s put these two together. I don’t think it’s too much of a stretch to imagine that surveillance cameras strategically placed at an altar can now make predictions on the length and strength of a marriage.

Oh goodie!

I guess it brings up the following question: is there some information we are better off not knowing? I don’t think knowing my marriage is likely to be in trouble would help me keep the faith. And every marriage needs a good dose of faith.

I heard a radio show about Huntington’s disease. There’s no cure for it, but there is a simple genetic test to see if you’ve got it, and it usually starts in adulthood so there’s plenty of time for adults to see their parents degenerate and start to worry about themselves.

But here’s the thing: only 5% of people who have a 50% chance of having Huntington’s actually take that test. For them the value of not knowing is larger than the value of knowing. Of course knowing you don’t have it is better still, but until that happens the ambiguity is preferable.

Maybe what’s critical is that there’s no cure. I mean, if there were a therapy that would help Huntington’s disease sufferers delay it or ameliorate it, I think we’d see far more people taking that genetic marker test.

And similarly, if there were ways to save a marriage that is at risk, we might want to know at the altar what the prognosis is. Right?

I still don’t know. Somehow, when things get that personal and intimate, I’d rather be left alone, even if an algorithm could help me “optimize my love life”. But maybe that’s just me being old-fashioned, and maybe in 100 years people will treat their computers like love oracles.

Categories: data science, modeling, news

Predictive risk models for prisoners with mental disorders

My friend Jordan Ellenberg sent me an article yesterday entitled Coin-flip judgement of psychopathic prisoners’ risk.

It was written by Seena Fazel, a researcher at the department of psychiatry at Oxford, and it concerns his research into the currently used predictive risk models for violence, repeat offense, and the like, which are supposedly tailored to people who have mental disorders like psychopathy.

Turns out there are a lot of these models, and they’re in use today in a bunch of countries. I did not know that. And they’re not just being used as extra, “good to know” information, but rather as a tool for making important decisions about prisoners. From the article:

Many US states use such tools to assess sexual offending risk and to help decide whether to exercise their powers to detain sexual offenders indefinitely after a prison term ends.

In England and Wales, these tools are part of the admission criteria for centres that treat people with dangerous and severe personality disorders. Outside North America, Europe and Australasia, similar approaches are increasingly popular, particularly in clinical settings, and there has been a steady growth of research from middle-income countries, such as China, documenting their use.

It also turns out, according to a meta-analysis done by Fazel, that these models don’t work very well, especially for the highest-risk, most violent population. And what’s super troubling is, as Fazel says, “In practice, the high false-positive rate probably means that some offenders spend longer in prison and secure hospital than their true risk would suggest.”

Talk about creepy.
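
To get a feel for where that high false-positive rate comes from, here’s a back-of-envelope calculation in python. The numbers are completely made up by me, not taken from Fazel’s meta-analysis; the point is just the base-rate arithmetic, which holds whenever the thing you’re predicting is rare:

    # Hypothetical numbers, for illustration only.
    sensitivity = 0.75  # fraction of future reoffenders the tool flags
    specificity = 0.75  # fraction of non-reoffenders the tool clears
    base_rate = 0.10    # fraction of the population that actually reoffends

    true_pos = sensitivity * base_rate               # correctly flagged
    false_pos = (1 - specificity) * (1 - base_rate)  # wrongly flagged

    # Of everyone flagged "high risk," what fraction actually reoffends?
    ppv = true_pos / (true_pos + false_pos)
    print("flagged who actually reoffend: {:.0%}".format(ppv))      # 25%
    print("flagged who never reoffend:    {:.0%}".format(1 - ppv))  # 75%

In other words, with these made-up but not-crazy numbers, three out of four people the tool flags as high risk would never have reoffended.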

This seems to be yet another example of mathematical obfuscation and intimidation that gives people a false sense of having a good tool at hand. From the article:

Of course, sensible clinicians and judges take into account factors other than the findings of these instruments, but their misuse does complicate the picture. Some have argued that the veneer of scientific respectability surrounding such methods may lead to over-reliance on their findings, and that their complexity is difficult for the courts. Beyond concerns about public protection, liberty and costs of extended detention, there are worries that associated training and administration may divert resources from treatment.

The solution? Get people to acknowledge that the tools suck, and have a more transparent method of evaluating them. In this case, according to Fazel, it’s the researchers who are over-estimating the power of their models. But especially where it involves incarceration and the law, we have to maintain adherence to a behavior-based methodology. It doesn’t make sense to put people in jail an extra 10 years because a crappy model said so.

This is a case, in my opinion, for an open model with a closed black box data set. The data itself is extremely sensitive and protected, but the model itself should be scrutinized.

Categories: modeling, news, statistics

Algorithmic Accountability Reporting: On the Investigation of Black Boxes

Tonight I’m going to be on a panel over at Columbia’s Journalism School called Algorithmic Accountability Reporting: On the Investigation of Black Boxes. It’s being organized by Nick Diakopoulos, Tow Fellow and previous guest blogger on mathbabe. You can sign up to come here and it will also be livestreamed.

The other panelists are Scott Klein from ProPublica and Clifford Stein from Columbia. I’m super excited to meet them.

Unlike some panel discussions I’ve been on, where the panelists talk about some topic they choose for a few minutes each and then there are questions, this panel will be centered around a draft of a paper coming from the Tow Center at Columbia. First Nick will present the paper and then the panelists will respond to it. Then there will be Q&A.

I wish I could share it with you but it doesn’t seem publicly available yet. Suffice it to say it has many elements in common with Nick’s guest post on raging against the algorithms, and its overall goal is to understand how investigative journalism should handle a world filled with black box algorithms.

Super interesting stuff, and I’m looking forward to tonight, even if it means I’ll miss the New Day New York rally in Foley Square tonight.

Categories: data science, modeling

The cost savings of food stamps cuts versus the cost increases of diabetes care

As many of you are aware, food stamps were recently cut in this country. This has had a brutal effect on people and families and on neighborhood food pantries, which are being swamped with new customers and increased need among their existing customers.

One thing I come away with when I read articles describing this problem is how often they detail individuals who have been diagnosed with diabetes but can no longer afford appropriate food for their condition.

As a person with a family history of diabetes, and someone who has been actively avoiding sugars and carbs to control my blood sugar for the past couple of years, I have a tremendous amount of sympathy for these struggling people.

Let me put it another way. Eating well in this country is expensive, and I’ve had to spend real money on food here in New York City to avoid sugary and fast carb-laden food. I don’t think I could have done that on a skimpy food budget. It’s especially hard to imagine budgeting healthy food on a withering food stamp budget.

Because here’s the thing, and it’s not a secret: shitty food is cheap. If I need to buy lots of food (read: calories) for a small amount of money, I can do it easily, but it will be hell for my blood sugar control. I’m guessing I’d be a full-blown diabetic by now if I were poor and on food stamps.

And that brings me to my nerd question of the morning. How much money are we really saving by decreasing the food stamp allowance in this country, if we consider how many more people will be diagnosed diabetic as a result of the decreased quality of their diet? And how many people’s diabetes will get worse, and how much will that cost?

It’s not over, either: apparently more cuts are coming over the next 10 years (maybe by $4 billion, maybe by $40 billion). And although diabetes care costs have gone up 40% in the last 5 years ($245 billion in 2012 from $174 billion in 2007), that doesn’t mean they won’t go up way more in the next 10.

I’m not an expert on how this all works, but the scale is right – we’re talking billions of dollars nationally, so not small potatoes, and of course we’re also talking about people’s quality of life. Never mind in a moral context – I’m definitely of the mind that people should be able to eat – I’m wondering if the food stamp cuts make sense in a dollars and cents context.
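
Just to show what I mean about the scale, here’s the back-of-envelope version in python. The percentage bumps in diabetes costs are pure hypotheticals, not estimates:

    # Figures cited above: proposed cuts of $4B to $40B over 10 years,
    # and national diabetes care costs of $245B per year in 2012.
    cuts_low, cuts_high = 4e9, 40e9
    diabetes_cost_per_year = 245e9

    for bump in (0.01, 0.02, 0.05):  # hypothetical cost increases
        extra_10yr = diabetes_cost_per_year * bump * 10
        print("a {:.0%} bump in diabetes costs is ~${:.1f}B over 10 years"
              .format(bump, extra_10yr / 1e9))

Even a 1% bump would be about $25 billion over 10 years – the same order of magnitude as the cuts themselves.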

Please tell me if you know of an analysis in this direction.

Categories: modeling, news

“People analytics” embeds old cultural problems in new mathematical models

Today I’d like to discuss a recent article from the Atlantic entitled “They’re watching you at work” (hat tip Deb Gieringer).

In the article they describe what they call “people analytics,” which refers to the new suite of managerial tools meant to help find and evaluate employees of firms. The first generation of this stuff happened in the 1950s, and relied on stuff like personality tests. It didn’t seem to work very well and people stopped using it.

But maybe this new generation of big data models can be super useful? Maybe they will give us an awesome way of more efficiently throwing away people who won’t work out and keeping those who will?

Here’s an example from the article. Royal Dutch Shell sources ideas for “business disruption” and wants to know which ideas to look into. There’s an app for that, apparently, written by a Silicon Valley start-up called Knack.

Specifically, Knack had a bunch of the ideamakers play a video game, and they presumably also were given training data on which ideas historically worked out. Knack developed a model and was able to give Royal Dutch Shell a template for which ideas to pursue in the future based on the personality of the ideamakers.

From the perspective of Royal Dutch Shell, this represents a huge timesaving. But from my perspective it means that whatever process the dudes at Royal Dutch Shell developed for vetting their ideas has now been effectively set in stone, at least for as long as the algorithm is being used.

I’m not saying they won’t save time, they very well might. I’m saying that, whatever their process used to be, it’s now embedded in an algorithm. So if they gave preference to a certain kind of arrogance, maybe because the people in charge of vetting identified with that, then the algorithm has encoded it.

One consequence is that they might very well pass on really excellent ideas that happen to come from a modest person – no discussion necessary on what kind of people are being invisibly ignored in such a set-up. Another consequence is that they will believe their process is now objective because it’s living inside a mathematical model.

The article compares this to the “blind auditions” for orchestras example, where people are kept behind a curtain so that the listeners don’t give extra consideration to their friends. Famously, the consequence of blind auditions has been way more women in orchestras. But that’s an extremely misleading comparison to the above algorithmic hiring software, and here’s why.

In the blind auditions case, the people measuring the musician’s ability have committed themselves to exactly one clean definition of readiness for being a member of the orchestra, namely the sound of the person playing the instrument. And they accept or deny someone, sight unseen, based solely on that evaluation metric.

Whereas with the idea-vetting process above, the training data consisted of “previous winners,” who presumably had to go through a series of meetings and convince everyone in the meeting that their idea had merit, and that they could manage the team to try it out, and all sorts of other things. Their success relied, in other words, on a community’s support of their idea and their ability to command that support.

In other words, imagine that, instead of listening to someone playing trombone behind a curtain, their evaluation metric was to compare a given musician to other musicians that had already played in a similar orchestra and, just to make it super success-based, had made first seat.

Then you’d have a very different selection criterion, and a very different algorithm. It would be based on all sorts of personality issues, and community bias and buy-in issues. In particular you’d still have way more men.

The fundamental difference here is one of transparency. In the blind auditions case, everyone agrees beforehand to judge on a single transparent and appealing dimension. In the black box algorithms case, you’re not sure what you’re judging things on, but you can see when a candidate comes along that is somehow “like previous winners.”

One of the most frustrating things about this industry of hiring algorithms is how unlikely it is to actively fail. It will save time for its users, since after all computers can efficiently throw away “people who aren’t like people who have succeeded in your culture or process” once they’ve been told what that means.

The most obvious consequence of using this model, for the companies that use it, is that they’ll get more and more people just like the people they already have. And that’s surprisingly unnoticeable for people in such companies.

My conclusion is that these algorithms don’t make things objective; they make things opaque. And they embed our old cultural problems in new mathematical models, giving us a false badge of objectivity.

Categories: data science, modeling, rant

Cool open-source models?

I’m looking to develop my idea of open models, which I motivated here and started to describe here. I wrote the post in March 2012, but the need for such a platform has only become more obvious.

I’m lucky to be working with a super fantastic python guy on this, and the details are under wraps, but let’s just say it’s exciting.

So I’m looking to showcase a few good models to start with, preferably in python, but the critical ingredient is that they’re open source. They don’t have to be great, because the point is to see their flaws and possibly to improve them.

  1. For example, I put in a FOIA request a couple of days ago to get the current teacher value-added model from New York City.
  2. A friend of mine, Marc Joffe, has an open source municipal credit rating model. It’s not in python but I’m hopeful we can work with it anyway.
  3. I’m in search of an open source credit scoring model for individuals. Does anyone know of something like that?
  4. They don’t have to be creepy! How about a Nate Silver – style weather model?
  5. Or something that relies on open government data?
  6. Can we get the Reinhart-Rogoff model?

The idea here is to get the model, not necessarily the data (although even better if it can be attached to data and updated regularly). And once we get a model, we’d build interactives with the model (like this one), or at least the tools to do so, so other people could build them.

At its core, the point of open models is this: you don’t really know what a model does until you can interact with it. You don’t know if a model is robust unless you can fiddle with its parameters and check. And finally, you don’t know if a model is best possible unless you’ve let people try to improve it.
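
Just to make the “fiddle with its parameters” point concrete, here’s a tiny python sketch of the shape I have in mind: a model whose logic and knobs are all out in the open. The scoring rule and weights are entirely made up:

    # A toy "open model": public logic, exposed parameters, so anyone
    # can fiddle and check robustness. All weights here are made up.
    def credit_score(income, debt, late_payments,
                     w_income=0.5, w_debt=-0.3, w_late=-0.2):
        """Toy linear credit-scoring model with tweakable weights."""
        score = (w_income * min(income / 100000.0, 1.0)
                 + w_debt * min(debt / 50000.0, 1.0)
                 + w_late * min(late_payments / 10.0, 1.0))
        return int(round(300 + 550 * max(score, 0.0)))  # familiar range

    # Fiddling: how sensitive is one person's score to the debt weight?
    for w in (-0.1, -0.3, -0.5):
        print("w_debt =", w, "->", credit_score(60000, 20000, 1, w_debt=w))

Even a toy like this makes the point: the moment the knobs are visible, arguments about the model can get concrete.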

Twitter and its modeling war

I often talk about the modeling war, and I usually mean the one where the modelers are on one side and the public is on the other. The modelers are working hard trying to convince or trick the public into clicking or buying or consuming or taking out loans or buying insurance, and the public is on the other, barely aware that they’re engaging in anything at all resembling a war.

But there are plenty of other modeling wars that are being fought by two sides which are both sophisticated. To name a couple, Anonymous versus the NSA and Anonymous versus itself.

Here’s another, and it’s kind of bland but pretty simple: Twitter bots versus Twitter.

This war arose from the fact that people care about how many followers someone on Twitter has. It’s a measure of a person’s influence, albeit a crappy one for various reasons (and not just because it’s being gamed).

The high impact of the follower count means it’s in a wannabe celebrity’s best interest to juice their follower numbers, which introduces the idea of fake Twitter accounts to game the model. This is an industry in itself, with an associated arms race of spam filters trying to get rid of the fakes. The question is, who’s winning this arms race and why?

Twitter has historically made some strides in finding and removing such fake accounts with the help of some modelers who actually bought the services of a spammer and looked carefully at what their money bought them. Recently though, at least according to this WSJ article, it looks like Twitter has spent less energy pursuing the spammers.

That raises the question: why? After all, Twitter theoretically has a lot at stake. Namely, its reputation, because if everyone knows how gamed the system is, they’ll stop trusting it. On the other hand, that argument only really holds if people have something else to use instead as a better proxy of influence.

Even so, considering that Twitter has a bazillion dollars in the bank right now, you’d think they’d spend a few hundred thousand a year to prevent their reputation from being too tarnished. And maybe they’re doing that, but the spammers seem to be happily working away in spite of that.

And judging from my experience on Twitter recently, there are plenty of active spammers which actively degrade the user experience. That brings up my final point, which is that the lack of competition argument at some point gives way to the “I don’t want to be spammed” user experience argument. At some point, if Twitter doesn’t maintain standards, people will just not spend time on Twitter, and its proxy of influence will fall out of favor for that more fundamental reason.

Categories: data science, modeling

Crisis Text Line: Using Data to Help Teens in Crisis

This morning I’m helping out at a datadive event set up by DataKind (apologies to Aunt Pythia lovers).

The idea is that we’re analyzing metadata around a texting hotline for teens in crisis. We’re trying to see if we can use the information we have on these texts (timestamps, character length, topic – which is most often suicide – and outcome reported by both the texter and the counselor) to help the counselors improve their responses.

For example, right now counselors can be in up to 5 conversations at a time – is that too many? Can we figure that out from the data? Is there too much waiting between texts? Other questions are listed here.
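
To give the flavor of a first pass at that question, here’s a python sketch. The file and column names are hypothetical – I’m making them up to show the shape of the analysis, not describing the actual data we were handed:

    import pandas as pd

    # Hypothetical export of the conversation metadata: one row per
    # conversation, with the counselor's concurrent-conversation count,
    # a binary good-outcome flag, and the median wait between texts.
    df = pd.read_csv("conversations.csv")

    summary = df.groupby("concurrent_conversations").agg(
        n_conversations=("good_outcome", "size"),
        good_outcome_rate=("good_outcome", "mean"),
        median_wait_sec=("median_wait_seconds", "median"),
    )
    print(summary)

If the good-outcome rate falls off a cliff at 4 or 5 concurrent conversations, that’s at least a starting point for the counselors.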

Our “hackpad” is located here, and will hopefully be updated like a wiki with results and visuals from the exploration of our group. It looks like we have a pretty amazing group of nerds over here looking into this (mostly python users!), and I’m hopeful that we will be helping the good people at Crisis Text Line.

There is no “market solution” for ethics

We saw what happened in finance with self-regulation and ethics. Let’s prepare for the exact same thing in big data.

Finance

Remember back in the 1970s through the 1990s, when the powers that be decided that we didn’t need to regulate banks because “they” wouldn’t put “their” best interests at risk? And then came the financial crisis, and most recently Alan Greenspan’s admission that he’d got it kinda wrong but not really.

Let’s look at what the “self-regulated market” in derivatives has bestowed upon us. We’ve got a bunch of captured regulators and a huge group of bankers who insist on keeping derivatives opaque so that they can charge clients bigger fees, not to mention that they insist on not having fiduciary duties to their clients, and oh yes, they’d like to continue to bet depositors’ money on those derivatives. They wrote the regulation themselves for that one. And this is after they blew up the world and got saved by the taxpayers.

Given that the banks write the regulations, it’s arguably still kind of a self-regulated market in finance. So we can see how ethics has been and is faring in such a culture.

The answer is, not well. Just in case the last 5 years of news articles weren’t enough to persuade you of this fact, here’s what NY Fed Chief Dudley had to say recently about big banks and the culture of ethics, from this Huffington Post article:

“Collectively, these enhancements to our current regime may not solve another important problem evident within some large financial institutions — the apparent lack of respect for law, regulation and the public trust,” he said.

“There is evidence of deep-seated cultural and ethical failures at many large financial institutions,” he continued. “Whether this is due to size and complexity, bad incentives, or some other issues is difficult to judge, but it is another critical problem that needs to be addressed.”

Given that my beat is now more focused on the big data community and less on finance, mostly since I haven’t worked in finance for almost 2 years, this kind of stuff always makes me wonder how ethics is faring in the big data world, which is, again, largely self-regulated.

Big data

According to this ComputerWorld article, things are pretty good. I mean, there are the occasional snafus – unappreciated sensors or unreasonable zip code gathering examples – but the general idea is that, as long as you have a transparent data privacy policy, you’ll be just fine.

Examples of how awesome “transparency” is in these cases vary from letting people know what cookies are being used (BlueKai), to promising not to share certain information between vendors (Retention Science), to allowing customers a limited view into their profiling by Acxiom, the biggest consumer information warehouse. Here’s what I assume a typical reaction might be to this last one.

Wow! I know a few things Acxiom knows about me, but probably not all! How helpful. I really trust those guys now.

Not a solution

What’s great about letting customers know exactly what you’re doing with their data is that you can then turn around and complain that customers don’t understand or care about privacy policies. In any case, it’s on them to evaluate and argue their specific complaints. Which of course they don’t do, because they can’t possibly do all that work and have a life, and if they really care they just boycott the product altogether. The result in any case is a meaningless, one-sided conversation where the tech company only hears good news.

Oh, and you can also declare that customers are just really confused and don’t even know what they want:

In a recent Infosys global survey, 39% of the respondents said that they consider data mining invasive. And 72% said they don’t feel that the online promotions or emails they receive speak to their personal interests and needs.

Conclusion: people must want us to collect even more of their information so they can get really really awesome ads.

Finally, if you make the point that people shouldn’t be expected to be data mining and privacy experts to use the web, the issue of a “market solution for ethics” is raised.

“The market will provide a mechanism quicker than legislation will,” he says. “There is going to be more and more control of your data, and more clarity on what you’re getting in return. Companies that insist on not being transparent are going to look outdated.”

Back to ethics

What we’ve got here is a repeat problem. The goal of tech companies is to make money off of consumers, just as the goal of banks is to make money off of investors (and taxpayers as a last resort).

Given how much these incentives clash, the experts on the inside have figured out a way of continuing to do their thing, make money, and at the same time, keeping a facade of the consumer’s trust. It’s really well set up for that since there are so many technical terms and fancy math models. Perfect for obfuscation.

If tech companies really did care about the consumer, they’d help set up reasonable guidelines and rules on these issues, which could easily be turned into law. Instead they send lobbyists to water down any and all regulation. They’ve even recently created a new superPAC for big data (h/t Matt Stoller).

And although it’s true that policy makers are totally ignorant of the actual issues here, that might be because of the way big data professionals talk down to them and keep them ignorant. It’s obvious that tech companies are desperate for policy makers to stay out of any actual informed conversation about these issues, never mind the public.

Conclusion

There never has been, nor will there ever be, a market solution for ethics so long as the basic incentives between the public and an industry are so misaligned. The public needs to be represented somehow, and without rules and regulations, and without leverage of any kind, that will not happen.

Categories: data science, finance, modeling

Alan Greenspan still doesn’t get it. #OWS

Yesterday I read Alan Greenspan’s recent article in Foreign Affairs magazine (hat tip Rhoda Schermer). It is entitled “Never Saw It Coming: Why the Financial Crisis Took Economists By Surprise,” and for those of you who want to save some time, it basically goes like this:

I’ll admit it, the macroeconomic models that we used before the crisis failed, because we assumed financial firms behaved rationally. But now there are new models that assume predictable irrational behavior, and once we add those bells and whistles onto our existing models, we’ll be all good. Y’all can start trusting economists again.

Here’s the thing that drives me nuts about Greenspan. He is still talking about financial firms as if they are single people. He just didn’t really read Adam Smith’s Wealth of Nations, or at least didn’t understand it, because if he had, he’d have seen that Adam Smith argued against large firms in which the agendas of the individuals ran counter to the agenda of the company they worked for.

If you think about individuals inside the banks, in other words, then their individual incentives explain their behavior pretty damn well. But Greenspan ignores that and still insists on looking at the bank as a whole. Here’s a quote from the piece:

Financial firms accepted the risk that they would be unable to anticipate the onset of a crisis in time to retrench. However, they thought the risk was limited, believing that even if a crisis developed, the seemingly insatiable demand for exotic financial products would dissipate only slowly, allowing them to sell almost all their portfolios without loss. They were mistaken.

Let’s be clear. Financial firms were not “mistaken”, because legal contracts can’t think. As for the individuals working inside those firms, there was no such assumption about a slow exhale. Everyone was furiously getting their bonuses pumped up while the getting was good. People on the inside knew the market for exotic financial products would blow at some point, and that their personal risks were limited, so why not make systemic risk worse until then.

As a mathematical modeler myself, it bugs me to try to put a mathematical band-aid on an inherently failed model. We should instead build a totally new model, or even better remove the individual perverted incentives of the market using new rules (I’m using the word “rules” instead of “regulations” because people don’t hate rules as much as they hate regulations).

Wouldn’t it be nice if the agendas of the individuals inside a financial firm were more closely aligned with the financial firm? And if it was over a long period of time instead of just until the bonus day? Not impossible.

And, since I’m an occupier, I get to ask even more. Namely, wouldn’t it be even nicer if that agenda was also shared by the general public? Doable!

Mr. Greenspan, there are ways to address the mistake you economists made and continue to make, but they don’t involve fancier math models from behavioral economics. They involve really simple rule changes and, generally speaking, making finance much more boring and much less profitable.

Categories: #OWS, finance, modeling, rant

How do you know when you’ve solved your data problem?

I’ve been really impressed by how consistently people have gone to read my post “K-Nearest Neighbors: dangerously simple,” which I wrote back in April. Here’s a timeline of hits on that post:

Stats for "K-Nearest Neighbors: dangerously simple." I've actually gotten more hits recently.

Stats for “K-Nearest Neighbors: dangerously simple.” I’ve actually gotten more hits recently.

I think the interest in this post is that people like having myths debunked, and are particularly interested in hearing how even the simple things they thought they understood are possibly wrong, or at least more complicated than they’d been assuming. Either that or it’s just got a real catchy name.

Anyway, since I’m still getting hits on that post, I’m also still getting comments, and just this morning I came across a new comment by someone who calls herself “travelingactuary”. Here it is:

My understanding is that CEOs hate technical details, but do like results. So, they wouldn’t care if you used K-Nearest Neighbors, neural nets, or one that you invented yourself, so long as it actually solved a business problem for them. I guess the problem everyone faces is, if the business problem remains, is it because the analysis was lacking or some other reason? If the business is ‘solved’ is it actually solved or did someone just get lucky? That being so, if the business actually needs the classifier to classify correctly, you better hire someone who knows what they’re doing, rather than hoping the software will do it for you.

Presumably you want to sell something to Monica, and the next n Monicas who show up. If your model finds a whole lot of big spenders who then don’t, your technophobe CEO is still liable to think there’s something wrong.

I think this comment brings up the right question, namely knowing when you’ve solved your data problem, with K-Nearest Neighbors or whichever algorithms you’ve chosen to use. Unfortunately, it’s not that easy.

Here’s the thing, it’s almost never possible to tell if a data problem is truly solved. I mean, it might be a business problem where you go from losing money to making money, and in that sense you could say it’s been “solved.” But in terms of modeling, it’s very rarely a binary thing.

Why do I say that? Because, at least in my experience, it’s rare that you could possibly hope for high accuracy when you model stuff, even if it’s a classification problem. Most of the time you’re trying to achieve something better than random, some kind of edge. Often an edge is enough, but it’s nearly impossible to know if you’ve gotten the biggest edge possible.

For example, say you’re binning people who come to your site into three equally sized groups: “high spenders,” “medium spenders,” and “low spenders.” If the model were random, you’d expect a third to be put into each group, and someone who ends up as a big spender would be equally likely to be in any of the three bins.

Next, say you make a model that’s better than random. How would you know that? You can measure it, for example, by comparing your model to the random one, or in other words by seeing how much better you do than random. So if someone who ends up being a big spender is three times more likely to have been labeled a big spender than a low spender, and twice as likely as a medium spender, you know your model is “working.”

You’d use those numbers, 3x and 2x, as a way of measuring the edge your model is giving you. You might care about other related numbers more, like whether pegged low spenders are actually low spenders. It’s up to you to decide what it means that the model is working. But even when you’ve done that carefully, and set up a daily updated monitor, the model itself still might not be optimal, and you might still be losing money.
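
Here’s that measurement as a quick python sketch, with counts made up to match the 3x and 2x numbers above:

    # Among people who ended up being big spenders, how had the model
    # originally labeled them? Counts are hypothetical.
    labeled = {"high": 600, "medium": 300, "low": 200}
    total = sum(labeled.values())

    for label, count in labeled.items():
        print("labeled {}: {:.1%} of actual big spenders "
              "(random baseline: 33.3%)".format(label, count / total))

    # The edge expressed as ratios, as in the text:
    print("high vs low:   ", labeled["high"] / labeled["low"], "x")     # 3.0x
    print("high vs medium:", labeled["high"] / labeled["medium"], "x")  # 2.0x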

In other words, you can be a bad modeler or a good modeler, and either way when you try to solve a specific problem you won’t really know if you did the best possible job you could have, or someone else could have with their different tools and talents.

Even so, there are standards that good modelers should follow. First and most importantly, you should always set up a model monitor to keep track of the quality of the model and see how it fares over time.  Because why? Because second, you should always assume that, over time, your model will degrade, even if you are updating it regularly or even automatically. It’s of course good to know how crappy things are getting so you don’t have a false sense of accomplishment.
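
What might a bare-bones monitor look like? Here’s a sketch; the metric, file, and threshold are placeholders, and in real life you’d want something smarter than a print statement:

    import datetime
    import json

    def daily_edge(labels, outcomes):
        """Among actual big spenders, the share the model labeled 'high',
        divided by the 1/3 random baseline (1.0 = no better than random)."""
        big = [lab for lab, out in zip(labels, outcomes) if out == "big"]
        if not big:
            return None
        return (sum(1 for lab in big if lab == "high") / len(big)) / (1 / 3)

    def monitor(labels, outcomes, path="model_monitor.jsonl", alert_below=1.2):
        edge = daily_edge(labels, outcomes)
        record = {"date": datetime.date.today().isoformat(), "edge": edge}
        with open(path, "a") as f:
            f.write(json.dumps(record) + "\n")  # append to the time series
        if edge is not None and edge < alert_below:
            print("warning: model edge has degraded to", round(edge, 2))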

Keep in mind that just because it’s getting worse doesn’t mean you can easily start over again and do better. But at least you can try, and you will know when it’s worth a try. So that’s one thing that’s good about admitting your inability to finish anything.

On to the political aspect of this issue. If you work for a CEO who absolutely hates ambiguity – and CEOs are trained to hate ambiguity, as well as trained to never hesitate – and if that CEO wants more than anything to think their data problem has been “solved,” then you might be tempted to argue that you’ve done a phenomenal job just to make her happy. But if you’re honest, you won’t say that, because it ain’t true.

Ironically and for these reasons, some of the most honest data people end up looking like crappy scientists because they never claim to be finished doing their job.

Categories: data science, modeling

The private-data-for-services trade fallacy

I had a great time at Harvard Wednesday giving my talk (prezi here) about modeling challenges. The audience was fantastic and truly interdisciplinary, and they pushed back and challenged me in a great way. I’m glad I went and I’m glad Tess Wise invited me.

One issue that came up is something I want to talk about today, because I hear it all the time and it’s really starting to bug me.

Namely, the fallacy that people, especially young people, are “happy to give away their private data in order to get the services they love on the internet”. The actual quote came from the IBM guy on the congressional subcommittee panel on big data, which I blogged about here (point #7), but I’ve started to hear that reasoning more and more often from people who insist on side-stepping the issue of data privacy regulation.

Here’s the thing. It’s not that people don’t click “yes” on those privacy forms. They do click yes, and I acknowledge that. The real problem is that people generally have no clue what it is they’re trading.

In other words, this idea of an omniscient market participant with perfect information making a well-informed trade, which we’ve already seen is not the case in the actual market, is doubly or triply not the case when you think about young people giving away private data for the sake of a phone app.

Just to be clear about what these market participants don’t know, I’ll make a short list:

  • They probably don’t know that their data is aggregated, bought, and sold by Acxiom, which they’ve probably never heard of.
  • They probably don’t know that Facebook and other social media companies sell stuff about them even if their friends don’t see it and even though it’s often “de-identified”. Think about this next time you sign up for a service like “Bang With Friends,” which works through Facebook.
  • They probably don’t know how good algorithms are getting at identifying de-identified information.
  • They probably don’t know how this kind of information is used by companies to profile users who ask for credit or try to get a job.

Conclusion: people are ignorant of what they’re giving away to play Candy Crush Saga[1]. And whatever it is they’re giving away, it’s something way far in the future that they’re not worried about right now. In any case it’s not a fair trade by any means, and we should stop referring to it as such.

What is it instead? I’d say it’s a trick. A trick which plays on our own impulses and short-sightedness and possibly even a kind of addiction to shiny toys in the form of candy. If you give me your future, I’ll give you a shiny toy to play with right now. People who click “yes” are not signaling that they’ve thought deeply about the consequences of giving their data away, and they are certainly not making the definitive political statement that we don’t need privacy regulation.

1. I actually don’t know the data privacy rules for Candy Crush and can’t seem to find them, for example here. Please tell me if you know what they are.

Categories: data science, modeling, rant

Harvard Applied Statistics workshop today

I’m on an Amtrak train to Boston today to give a talk in the Applied Statistics workshop at Harvard, which is run out of the Harvard Institute for Quantitative Social Science. I was kindly invited by Tess Wise, a Ph.D. student in the Department of Government at Harvard who is organizing the workshop.

My title is “Data Skepticism in Industry” but as I wrote the talk (link to my prezi here) it transformed a bit, and now it’s more about the problems not only for data professionals inside industry but for the public as well. So I talk about creepy models and how there are multiple long-term feedback loops having a degrading effect on culture and democracy in the name of short-term profits.

Since we’re on the subject of creepy, my train reading this morning is this book entitled “Murdoch’s Politics,” which talks about how Rupert Murdoch lives by design in the center of all things creepy. 

Categories: data science, modeling

The multiple arms races of the college system

I’m reading a fine book called Nobody Makes You Shop at Walmart, which dispels many of the myths surrounding market populism, otherwise described in the book as “MarketThink”, namely the rhetoric which “portrays the world (governments aside) as if it works like an ideal competitive market, even when proposing actions that contradict that portrayal,” according to the author Tom Slee.

I’ve gotten a lot out of this book, and I suggest that you guys read it, especially if you are libertarians, so we can argue about it afterwards.

One thing Slee does is distinguish between different kinds of competitive and power-dynamic systems, and fingers certain situations as “arms races”, in which there are escalating costs but no long-lasting added value for the participants. They often involve relative rankings.

Slee’s example is a neighborhood block where all the men on the block compete to have the nicest cars. Each household spends a bunch of money to rise in the rankings just to have others respond by spending money too, and at the end of a year they’ve all spent money and none of the rankings have actually changed.

One of Slee’s overall points about arms races is that the way to deal with them is through armament agreements, which everyone involved needs to sign onto. Later in the book he also talks about how hard it is to get large groups of people to agree to anything at all, especially vague social contracts, when there’s an advantage to cheating, something he calls “free riding.” (As a commenter pointed out to me, free riding is more like someone who gets something for nothing, like a worker who benefits from the work of a union without being in the union and paying dues. What Slee describes is just cheating.)

I’d argue, and I believe the book even uses this example, that education can be seen as an arms race as well. Take the statistics in this Opinionator blog from the New York Times, written by Jonathan Cowan and Jim Kessler, and entitled “The Middle Class Gets Wise.”

It describes how much more money the average high school graduate, versus two-year college, versus four-year college, versus professional degree graduate makes. In other words, it describes the payoffs to being higher ranked in that system. The money is real, of course, and everyone is aware of it as an issue even if they don’t know the exact numbers, so it is very analogous to the car status thing.

Cowan and Kessler describe in their article how, in the face of recession, lots more people have gone to college. That makes sense, since many of them didn’t have jobs and wanted to make themselves employable in the future, and at the same time people knew the job climate had become even more rank-oriented as it tightened. People responded, in other words, to the incentives.

There’s a feedback loop going on in colleges as well, of course, and paired with the federal loan program and the fact that students cannot get rid of student debt in bankruptcy, we’ve seen a predictable (in direction if not size) and dramatic increase in tuition and student debt load for the younger generation.

My reaction to this is: we need an armament agreement, but it’s really not clear how that’s going to all of a sudden appear or how it would work, considering the number of entities involved, and the free rider problems due to the cash money incentives everywhere.

From the point of view of employers, rankings are great and they can be sure to pick the highest ranked individuals from that system, even if that means – as it often does – having Ph.D. graduates working in mailrooms. So don’t expect any help from them to add sanity to this system.

From the point of view of the colleges, they’re getting to hire more and more administrators, which means growth, which they love.

Finally, from the point of view of the individual student, it makes sense to go into debt, almost without limit (up to a point, but people rarely do that calculation explicitly, and if they did there’d be intense bias), to get significantly higher in the ranking.

In other words, it’s a shitshow, and possibly the only real disruption that could improve it would be widespread and universally respected basic and free-ish education. At least that would solve some of the arms race problems, for employers and for students. It would not make colleges happy.

The authors of the Opinionator piece, Cowan and Kessler, don't agree with me. They have a goal, which is for even more people to go to school, and for tuition to be somehow magically decreased as well. In other words, up the ante in one feedback loop and hope its partner feedback loop somehow relaxes. Here's how they describe it:

So what can we do? Anya Kamenetz, the author of “Generation Debt,” has put together some excellent ideas for Third Way, the centrist policy organization where we both work. Let’s start by reducing the number of college administrators per 100 students, which jumped by 40 percent between 1993 and 2007. We should demand a cease-fire to the perk wars in which colleges build ever-more-luxurious living, dining and recreational facilities. Blended learning, which uses online teaching tools together with professors and teaching assistants, could also help students master coursework at less cost.

There are 37 million Americans with some college experience, but no degree. So pegging government tuition aid to college graduation rates would entice schools to find ways of keeping students in class. And eliminating some of the offerings of rarely chosen majors could bring some market efficiencies now lacking in education.

That really just doesn’t seem like a viable plan to me, and pegging government money to graduation rates is really stupid, as I described here, but maybe I’m just being negative. Cowan and Kessler, please tell me how that “demand” is going to work in practice.

Also, what’s funny about their idealistic demand is that they also think of a couple other things to do but dismiss them as unrealistic:

The most commonly discussed solutions to the problem of income inequality seem unlikely to get to the heart of the problem. Yes, we could raise additional taxes on the wealthy, but we just did that. Bumping up the minimum wage would help, but how high would lawmakers allow it to go? We should look instead at what Americans are already doing to solve this problem and help them do it far more successfully and at less cost.

Am I the only one who thinks raising the minimum wage would do more to address income inequality, and is easier to imagine actually working?

Categories: modeling, musing

The scienciness of economics

A few of you may have read this recent New York Times op-ed (hat tip Suresh Naidu) by economist Raj Chetty entitled "Yes, Economics is a Science." In it he defends the scienciness of economics by comparing it to the field of epidemiology. Let's focus on these three sentences in his essay, which for me are his key points:

I’m troubled by the sense among skeptics that disagreements about the answers to certain questions suggest that economics is a confused discipline, a fake science whose findings cannot be a useful basis for making policy decisions.

That view is unfair and uninformed. It makes demands on economics that are not made of other empirical disciplines, like medicine, and it ignores an emerging body of work, building on the scientific approach of last week’s winners, that is transforming economics into a field firmly grounded in fact.

Chetty is conflating two issues in his first sentence. The first is whether economics can be approached as a science, and the second is whether, if you are an honest scientist, you push as hard as you can to implement your "results" as public policy. That second issue is politics, not science, and it's where people like me get really pissed at economists: they treat their estimates as facts with no uncertainty.

In other words, I’d have no problem with economists if they behaved like the people in the following completely made-up story based on the infamous Reinhart-Rogoff paper with the infamous excel mistake.

Two guys tried to figure out which public policies cause GDP growth by using historical data. They collected their data and did some analysis, and they later released both the spreadsheet and the data by posting them on their Harvard webpages. They also ran the numbers a few times with slightly different sets of countries and slightly different weighting schemes, and they explained in their write-up that they got different answers depending on those initial conditions, so they couldn't conclude much at all, because the error bars were just so big. Oh well.
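In code, that kind of robustness check might look like the following sketch. It uses entirely synthetic data and a made-up debt/growth relationship, not the actual Reinhart-Rogoff dataset: re-run the same regression over many perturbations of the country sample and the weights, and report the spread of the estimates.

```python
import numpy as np

# Hypothetical sensitivity check on synthetic data (not the real
# Reinhart-Rogoff numbers): estimate the debt/growth slope many times,
# each time dropping a few countries and jittering the weights, and
# see how much the answer moves around.

rng = np.random.default_rng(42)
n_countries = 20
debt = rng.uniform(20, 120, n_countries)                       # fake debt-to-GDP ratios
growth = 3.0 - 0.01 * debt + rng.normal(0, 1.5, n_countries)   # noisy fake relationship

slopes = []
for _ in range(1_000):
    keep = rng.choice(n_countries, size=15, replace=False)     # drop 5 countries
    w = rng.uniform(0.5, 1.5, size=15)                         # perturb the weights
    slope = np.polyfit(debt[keep], growth[keep], 1, w=w)[0]
    slopes.append(slope)

slopes = np.array(slopes)
print(f"slope estimate: {slopes.mean():.3f} +/- {slopes.std():.3f}")
print("fraction of runs where the sign flips:", (slopes > 0).mean())
```

If the sign of the slope flips in a nontrivial fraction of the runs, that's the honest write-up: the error bars swamp the effect.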

You see how that works? It’s called science, and it’s not what economists are known to do. It’s what we all wish they’d do though. Instead we have economists who basically get paid to write papers pushing for certain policies.

Next, let’s talk about Chetty’s comparison of economics with medicine. It’s kind of amazing that he’d do this considering how discredited epidemiology is at this point, and how truly unscientific it’s been found to be, for essentially exactly the same reasons as above – initial conditions, even just changing which standard database you use for your tests, switch the sign of most of the results in medicine. I wrote this up here based on a lecture by David Madigan, but there’s also a chapter in my new book with Rachel Schutt based on this issue.

To briefly summarize: Madigan and his colleagues reproduced a bunch of epidemiological studies and came out with incredibly depressing "sensitivity" results. Namely, the majority of "statistically significant findings" changed sign depending on seemingly trivial changes in initial conditions that the authors of the original studies often didn't even explain.

So in other words, Chetty defends economics as “just as much science” as epidemiology, which I would claim is in the category “not at all a science.” In the end I guess I’d have to agree with him, but not in a good way.

Finally, let’s be clear: it’s a good thing that economists are striving to be scientists, when they are. And it’s of course a lot easier to do science in microeconomic settings where the data is plentiful than it is to answer big, macro-economic questions where we only have a few examples.

Even so, it’s still a good thing that economists are asking the hard questions, even when they can’t answer them, like what causes recessions and what determines growth. It’s just crucial to remember that actual scientists are skeptical, even of their own work, and don’t pretend to have error bars small enough to make high-impact policy decisions based on their fragile results.

Categories: modeling, rant, statistics

MAA Distinguished Lecture Series: Start Your Own Netflix

I’m on my way to D.C. today to give an alleged “distinguished lecture” to a group of mathematics enthusiasts. I misspoke in a previous post where I characterized the audience to consist of math teachers. In fact, I’ve been told it will consist primarily of people with some mathematical background, with typically a handful of high school teachers, a few interested members of the public, and a number of high school and college students included in the group.

So I’m going to try my best to explain three different ways of approaching recommendation engine building for services such as Netflix. I’ll be giving high-level descriptions of a latent factor model (this movie is violent and we’ve noticed you like violent movies), of the co-visitation model (lots of people who’ve seen stuff you’ve seen also saw this movie) and the latent topic model (we’ve noticed you like movies about the Hungarian 1956 Revolution). Then I’m going to give some indication of the issues in doing these massive-scale calculation and how it can be worked out.

And yes, I double-checked with those guys over at Netflix: I am allowed to use their name as long as I make sure people know there's no affiliation.

In addition to the actual lecture, the MAA is having me give a 10-minute TED-like talk for their website as well as an interview. I am psyched by how easy it was to prepare my slides for that short version using Prezi, since I could just remove a bunch of nodes from the path through the material without removing the material itself. I will make that short version available when it comes online, and I also plan to share the longer prezi publicly.

[As an aside, and not to sound like an advertiser for Prezi (no affiliation with them either!): they have a free version, and the resulting slides are pretty cool. If you want to keep your prezis private you have to pay, but not as much as you'd pay for PowerPoint. Of course, there's always OpenOffice.]

Train reading: Wrong Answer: the case against Algebra II, by Nicholson Baker, which was handed to me emphatically by my friend Nick. Apparently I need to read this and have an opinion.

Categories: math, math education, modeling