It’s all mom’s fault

Maybe it’s because I grew up with an unapologetic working mother, but I am confused and enraged by all the cultural norms concerning mothers and how everything is their fault.

Growing up in the 1970s, I had all sorts of role models of mothering. I was lucky to live next door to Sally, I met MA (Mary Ann) as a teenager, and of course there was my own mom. All of these women were fiercely devoted to their choices: Sally and MA stayed home with their young kids but, as their kids grew up, devoted more and more time to other things. My mom was a computer science professor my entire life. It goes without saying (but just for the record I’ll say it here) that I support people doing what they want to and need to for their own private reasons, no questions asked.

Sure, there were differences in interactions between my mom and these other surrogate moms. My mom didn’t have a lot of extra time to shop or cook, for example. But on the other hand she was a great role model in showing me how to be happy with what you do and have kids at the same time. And the things she didn’t have time for, I was lucky enough to get from other people and places.

Here it is, thirty years later, and lots of things have changed for working mothers. Some things have gotten easier: there’s online shopping, so I can provide my three sons with clothes and food without leaving home, which was a major struggle for my mom. Some things have gotten harder: school and daycare have gotten more expensive (more on that below). Other things haven’t changed much at all, which is itself strange.

Here’s an article that got me pissed off enough to write this post. It’s a New York Times piece about an Olympic swimmer who, after taking time off and having two children, has returned to swimming and is actually competitive at the age of 40. I am so completely impressed by her, but for some reason the Times sees it as appropriate to deliver the following lines:

Evans said she had been criticized on social networking sites for training when she should be home with her children. But she has set up her schedule so her main swimming workout takes place in the morning, from 5:30 to 7:30, so she can make it home in time for breakfast. Her crazy hours are not lost on her daughter, who recently asked, “Why do you swim in the dark, Mommy?”

Willson’s job in technology sales allows him to work from home. He can chip in with the children when needed and behold the force of nature that is his wife.

First of all, how is it appropriate to mention idiots on Facebook? It is so entirely defensive and out of place. If I’m training for the Olympics, probably for the very last time in my life, my kids will be psyched for me to do my best, even if it means missing breakfast sometimes. And why is there always a mention of the martyred husband? Just imagine this was a male swimmer coming back to the Olympics after not swimming for 15 years; would we hear about his wife? No we wouldn’t. Ridiculous, and the New York Times should do better. If they mention idiots on Facebook, they should also mention how they are idiots.

Here’s another story that got me incredibly pissed (if you were looking for a happy post this morning, I apologize). It’s about a public ad campaign in Georgia with billboard pictures of fat kids looking unhappy. This is insane and insulting on so many levels I don’t really know where to start, but let me start with the intended target: the mom. Yes, it’s mom’s fault that there are fat kids, and these billboards are telling moms not to let their kids get fat.

As an aside, it’s also now officially okay to blame mom for making her kids fat, just as it’s officially okay to blame the kids themselves. It’s government-sponsored bullying. Never mind the fact that nutrition education and exercise programs have been shown not to actually make people lose weight (i.e. understanding where calories are hidden in food doesn’t magically make them leave cheeseburgers). Never mind that nobody has come up with a viable plan for how to address this issue. Let’s blame moms anyway, because then we are taking this issue seriously.

It makes you wonder why women want to become moms at all considering all the things we are signing up for. Oh and wait, actually lots of women aren’t having kids, but interestingly a recent paper came out showing women who are highly educated are having more kids. Here’s the abstract for that paper:

Conventional wisdom suggests that in developed countries income and fertility are negatively correlated. We present new evidence that between 2001 and 2009 the cross-sectional relationship between fertility and women’s education in the U.S. is U-shaped. At the same time, average hours worked increase monotonically with women’s education. This pattern is true for all women and mothers to newborns regardless of marital status. In this paper, we advance the marketization hypothesis for explaining the positive correlation between fertility and female labor supply along the educational gradient. In our model, raising children and home-making require parents’ time, which could be substituted by services bought in the market such as baby-sitting and housekeeping. Highly educated women substitute a significant part of their own time for market services to raise children and run their households, which enables them to have more children and work longer hours. Finally, we use our model to shed light on differences between the U.S. and Western Europe in fertility and women’s time allocated to labor supply and home production. We argue that higher inequality in the U.S. lowers the cost of baby-sitting and housekeeping services and enables U.S. women to have more children, spend less time on home production and work more than their European counterparts.

Also interesting is this interview, where they describe the results of another paper which tracked women vs. men in various fields of science, including math. It looks like evidence for my post about meritocracy and horizon bias, i.e. the idea that women self-select out of certain fields because they are just not very appealing. From the interview:

The women who come in to academic science careers tend to be so highly motivated that they stay. They limit the number of children they have. Other studies have shown that female academics have fewer children than other professional women, such as lawyers. Female graduates see women scientists working very hard in what they feel are less fair conditions, and it puts them off. Societal factors also make it harder for women to have such demanding careers–women tend to manage family problems, for example.

By the way, I am not insufferably sad about mothers and their fates. I make fun of mothers too, and this article about passive parents is one I could have written. From the article:

But seriously, what is the deal with asking our children to behave? “Maybe you should get down?” What the hell is wrong with you lady? She’s four. There’s no room for negotiating here. I’m all for giving my kids choices to make them feel like they’re in control of something, blah, blah, blah, but this is not the time. “Maybe” should be reserved for times like: “Do you want to wear a dress today or MAYBE a skirt?”

I could go on and on about the passivity of modern yuppie parents, and I’d be right (hey, I live on the Upper West Side, so you know I’d be right). But if you think about it for a minute, this is just another manifestation of the same thing: it’s all mom’s fault. These women are performing a mother role instead of mothering from the gut, and it’s because they are made insecure by all the incredible bullshit out there about how to be a good mom and what other people are going to think if they scream at their kid in public or if their kid starts to scream. We have taught our mothers to be insecure, and to feel at fault, and oh yes, to be the target of bullying ad campaigns as well.

People, let’s get it together and solve problems instead of pointing fingers. I’m looking at you, Santorum.

Categories: news, rant, women in math

Model Thinking (part 2)

I recently posted about the new, free online course Model Thinking. I’ve watched more videos now and I am prepared to make more comments.

For the record, the lecturer, Scott Page, is by all accounts a great guy, and indeed he seems super nice on video. I’d love to have him over for dinner with my family someday (Professor Page, please come to my house for dinner when you’re in town).

In spite of liking him, though, pretty much every example he gives as pro-modeling is, for me, an anti-modeling example. Maybe I should make a complementary series of YouTube commentary videos. That’s not entirely true, of course; I just probably don’t notice the things we agree on. But I do notice the topics on which we disagree:

  1. He talks a lot about how models make us clearer thinkers. But what he really seems to mean is that they make us stupider thinkers. His example is that, in order to decide who to vote for for president, we can model this decision as depending on two things: the likability of the person in question (presumably he assumes we want our president to be likable), and the extent to which that person is “as left” or “as right” as we are. I don’t know about you, but I actually care about specific issues and where people stand on them, and which issues I consider likely to come up in the next election cycle. Like, if I like someone for his “stick it to the banks” approach but he’s anti-abortion, then I think about whether abortion is actually likely to become illegal. And by the way, I don’t particularly care whether my president is likable; I’d rather have him or her be effective.
  2. He bizarrely chooses “financial interconnectedness” as a way of showing how cool models are: he shows a graph where the nodes are financial institutions (Goldman Sachs, JP Morgan, etc.) and the edges are labeled with an interconnectedness score, bigger meaning more interconnected. He claims that, according to this graph, back in 2008 we knew to bail out AIG but that it was definitely okay to let Lehman fail. I’m wondering if he really meant this as an example of how your model could totally fail because your “interconnectedness scoring” sucked, but he didn’t seem to be tongue in cheek.
  3. He then talked about measuring the segregation of a neighborhood, either by race or by income, using New York and Chicago as examples. I won’t go into lots of detail, but he gave a score to each block, like the census maps do with coloring, and used those scores to develop a new score which was supposed to measure the segregation of each block. The problem I have with this segregation score is that it depends very heavily on the definition of the overall area you are considering. If you enlarge your definition of New York City to include the suburbs, then the segregation score of New York City may (and probably would) be completely different. This seems like a really terrible characteristic for such a metric.
  4. My second problem with his segregation score is that, at the end, he had overall segregation numbers for Philly and Detroit, and then showed the maps and mentioned that, looking at the maps, you wouldn’t really notice that one is more segregated than the other (Philly more than Detroit), but knowing the scores you do know that. Umm... I’d rather say that if you are getting scores that are not at all obvious from looking at the pictures, then maybe it’s because your score sucks. What does having a “good segregation score” mean if not that it captures something you can see in a picture?
  5. One thing I liked was a demonstration of Schelling’s Segregation Model, which shows that, if you have a group of people who are not all that individually racist, you can still end up with a neighborhood which is very segregated.
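Schelling’s mechanism is easy to reproduce. Here’s a minimal one-dimensional sketch (the lecture presumably uses a 2-D grid, and every parameter here is made up): agents only insist that 40% of their nearby neighbors share their type, and unhappy agents jump to random empty cells, yet the neighborhood can still end up far more clustered than any individual preference demands.

```python
import random

def same_type_fraction(grid, i, radius=2):
    """Fraction of agent i's occupied neighbors (within radius) sharing its type."""
    n = len(grid)
    nbrs = [grid[(i + d) % n] for d in range(-radius, radius + 1) if d != 0]
    occupied = [x for x in nbrs if x is not None]
    return 1.0 if not occupied else sum(x == grid[i] for x in occupied) / len(occupied)

def simulate(tolerance=0.4, seed=0, max_steps=10_000):
    """A ring of 50 agents of two types plus 10 empty cells.

    Each step, one unhappy agent (below tolerance) moves to a random empty
    cell; we stop when everyone is satisfied or we run out of steps.
    Returns the mean same-type neighbor fraction before and after, plus
    whether the process converged.
    """
    random.seed(seed)
    grid = ["A"] * 25 + ["B"] * 25 + [None] * 10
    random.shuffle(grid)

    def mean_same():
        vals = [same_type_fraction(grid, i)
                for i, x in enumerate(grid) if x is not None]
        return sum(vals) / len(vals)

    before = mean_same()
    converged = False
    for _ in range(max_steps):
        unhappy = [i for i, x in enumerate(grid)
                   if x is not None and same_type_fraction(grid, i) < tolerance]
        if not unhappy:
            converged = True
            break
        empty = [i for i, x in enumerate(grid) if x is None]
        i, j = random.choice(unhappy), random.choice(empty)
        grid[j], grid[i] = grid[i], None
    return before, mean_same(), converged
```

By construction, once the process converges every agent sits at or above the 40% tolerance, and in practice the clusters that form push the average well beyond it.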

I’m looking forward to watching more videos with my skeptical eye. After all, the guy is really a sweetheart, and I do really care about the idea of teaching people about modeling.

Categories: data science

#OWS Alternative Banking update

Crossposted from the Alternative Banking Blog.

I wanted to mention a few things that have been going on with the Alternative Banking group lately.

  1. The Occupy the SEC group submitted their public comments last week on the Volcker Rule and got AMAZING press. See here for a partial list of articles that have been written about these incredible folks.
  2. Hey, did you notice something about that last link? Yeah, Alt Banking now has a blog! Woohoo! One of our members, Nathan, has been updating it and he’s doing a fine job. I love how he mentions Jeremy Lin when discussing derivatives.
  3. Alt Banking also has a separate suggested reading list page on the new blog. Please add to it!
  4. We just submitted a short letter as a public comment to the new Consumer Financial Protection Bureau regulation which gives them oversight powers on debt collectors and credit score bureaus. We basically told them to make credit score models open source (and I wasn’t even in the initial conversation about what we should say to these guys! Open source rules!!):

Creepy model watch

I really feel like I can’t keep up with all of the creepy models coming out and the news articles about them, so I think I’ll just start making a list. I would appreciate readers adding to my list in the comment section. I think I’ll move this to a separate page on my blog if it comes out nice.

  1. I recently blogged about a model that predicts student success in for-profit institutions, which I claim is really mostly about student debt and default,
  2. but here’s a model which actually goes ahead and predicts default directly, it’s a new payday-like loan model. Oh good, because the old payday models didn’t make enough money or something.
  3. Of course there’s the teacher value-added model, which I’ve blogged about multiple times, most recently here. And here’s a paper I’d like everyone to read before they listen to anyone argue one way or the other about the model (h/t Joshua Batson). The abstract is stunning: “Recently, educational researchers and practitioners have turned to value-added models to evaluate teacher performance. Although value-added estimates depend on the assessment used to measure student achievement, the importance of outcome selection has received scant attention in the literature. Using data from a large, urban school district, I examine whether value-added estimates from three separate reading achievement tests provide similar answers about teacher performance. I find moderate-sized rank correlations, ranging from 0.15 to 0.58, between the estimates derived from different tests. Although the tests vary to some degree in content, scaling, and sample of students, these factors do not explain the differences in teacher effects. Instead, test timing and measurement error contribute substantially to the instability of value-added estimates across tests.” Just in case that didn’t come through, they are saying that teacher value-added scores are very, very noisy.
  4. That reminds me, credit scoring models are old but very very creepy, wouldn’t you agree? What’s in them that makes people want to conceal them?
  5. Did you read about how Target predicts pregnancy? Extremely creepy.
  6. I’m actually divided about whether it’s the creepiest though, because I think the sheer volume of information that Facebook collects about us is the most depressing thing of all.
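For reference, the “rank correlations ranging from 0.15 to 0.58” in that value-added abstract measure how well two rankings of the same teachers agree. A toy sketch with made-up ranks for five teachers, using the standard Spearman formula:

```python
def spearman(rank_a, rank_b):
    """Spearman rank correlation for two rankings without ties:
    rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1))."""
    n = len(rank_a)
    d2 = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# The same five teachers ranked by two different reading tests (made-up ranks).
test1 = [1, 2, 3, 4, 5]
test2 = [3, 1, 5, 2, 4]
rho = spearman(test1, test2)  # 0.3: squarely in the paper's reported range
```

A correlation of 1 would mean the two tests rank teachers identically; 0.3 means they barely agree, which is exactly the noisiness the abstract is complaining about.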

Before I became a modeler, I wasn’t personally offended by the idea that people could use my information. I thought, I’ve got nothing to hide, and in fact maybe it will make my life easier and more efficient for the machine to know me and my habits.

But here’s how I think now that I’m a modeler and I see how this stuff gets made and how it gets applied. We are each giving up our data, and it’s so easy to do that we don’t think about it, and it’s being used to funnel people into success or failure in a feedback loop. And the modelers, the people responsible for creating these things and implementing them, are already the successes: they are educated and are given good terms on their credit cards and mortgages because they have nifty high-tech jobs. So the makers get to think about how much easier and more convenient their lives are now that the models see how dependable they are as consumers.

But when there are funnels, there’s always someone who gets funneled down.

Think about how it works with insurance. The idea of insurance is to pool people so that when one person gets sick, the medical costs for that person are paid from the common fund. Everyone pays a bit so it doesn’t break the bank.

But if we have really good information, we begin to see how likely people are to get sick. So we can stratify the pool. Since I almost never get sick, and when I do it’s just strep throat, I get put into a very nice pool with other people who never get sick, and we pay very very little and it works out great for us. But other people have worse luck of the DNA draw and they get put into the “pretty sick” pool and their premium gets bigger as their pool gets sicker until they are really sick and the premium is actually unaffordable. We are left with a system where the people who need insurance the most can’t be part of the system anymore. Too much information ruins the whole idea of insurance and pooled risk.
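The arithmetic behind that breakdown of pooling is simple. Here’s a toy example with entirely made-up cost numbers:

```python
# Hypothetical expected annual medical costs per risk stratum.
expected_cost = {"healthy": 1_000, "average": 5_000, "sick": 30_000}
pool = ["healthy"] * 70 + ["average"] * 25 + ["sick"] * 5

# One big pool: everyone pays the same premium, the pool-wide average cost.
pooled_premium = sum(expected_cost[p] for p in pool) / len(pool)  # 3450.0

# Perfect information: insurers price each stratum at its own expected cost,
# so the healthy pay 1,000 and the sick pay their full 30,000 instead of 3,450.
stratified_premium = dict(expected_cost)
```

Under pooling, the healthy subsidize the sick by a couple thousand dollars each; with perfect stratification that subsidy vanishes, and the people who most need insurance face premiums equal to their full expected costs, which is to say, no insurance at all.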

I think modern modeling is analogous. When businesses offer deals, they can first check to see whether the people they are offering deals to are all but guaranteed to pay everything back. In other words, the businesses (understandably) want to be very certain they will profit from each and every customer, and they are getting more and more able to do this. That’s great for customers with perfect credit scores, and it makes it easier for people with perfect credit scores to keep their perfect credit scores, because they are getting the best deals.

But people with bad credit scores get the rottenest deals, which makes a larger and larger percentage of their take-home pay (if they can even get a job, considering their credit scores) go towards fees and high interest rates. This of course creates an environment in which it’s difficult to improve their credit score, so they default and their credit score gets worse instead of better.

So there you have it: a self-reinforcing feedback loop and a death spiral of modeling.
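This feedback loop is easy to caricature in code. Every number below is made up; the point is only the dynamic that good scores get cheap credit and drift up while bad scores get expensive credit and drift down:

```python
def interest_rate(score):
    """Toy pricing: 5% APR at a score of 850, sliding up to 30% at 300."""
    return 0.05 + 0.25 * (850 - score) / 550

def one_year(score, income=40_000, debt=10_000):
    """If debt service eats more than 5% of income, the score slips;
    otherwise it climbs (bounded between 300 and 850)."""
    burden = debt * interest_rate(score) / income
    return min(850, score + 20) if burden <= 0.05 else max(300, score - 40)

high, low = 800, 500
for _ in range(5):
    high, low = one_year(high), one_year(low)
# high climbs to the 850 ceiling; low slides to the 300 floor
```

Both borrowers have the same income and the same debt; the only difference is the rate the model assigns them, and that alone is enough to pull their trajectories apart.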

Categories: data science

What data science _should_ be doing

I recently read this New York Times article about a company that figures out how to get the best deal when you rent a car. The company is called AutoSlash and the idea is you book with them and they keep looking for good deals, coupons, or free offers every day until you actually need the car.

Wait a minute, a data science model that actually directly improves the lives of its customers? Why can’t we have more of these? Obviously the car companies absolutely hate this idea. But what are they going to do, stop offering online shopping?

Why don’t we see this in every category of shopping? It seems to me that you could do something like this and start a meta-marketplace, where you buy something and then, depending on how long you’re willing to wait until delivery, the model looks for a better online deal, in exchange for a small commission. You’d have to make sure that on average the better deals pay for the commission, but my guess is it would work if you allowed it a few days to search per purchase. Or if you’re really a doubter, fix a minimum wait time and let the company take some (larger) percentage of the difference between the initial price and the eventual best price.

Another way of saying this is that when you go online to buy something, depending on the scale (say it’s on the expensive side), you probably shop around for a few days or weeks. Why do that yourself? Just have a computer do it for you and tell you at the end what deal it got you. Don’t get bombarded by ads; let the computer get bombarded by ads for you.
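At its core the rental-car idea is just a watch loop. A hypothetical sketch (AutoSlash’s actual system is surely more involved; the item names and prices here are invented):

```python
from dataclasses import dataclass

@dataclass
class Booking:
    item: str
    price: float

def watch_and_rebook(booking, daily_quotes):
    """Check the market once a day until pickup; cancel and rebook
    whenever a cheaper quote appears. Returns the final booking and
    the total savings versus the original price."""
    best = booking.price
    for quote in daily_quotes:
        if quote < best:
            best = quote  # found a better deal: rebook at this price
    return Booking(booking.item, best), booking.price - best

final, saved = watch_and_rebook(
    Booking("midsize car, 3 days", 180.0),
    [175.0, 190.0, 149.0, 160.0],  # quotes seen on each day before pickup
)
```

The customer locks in the best price ever observed, not the price on the day they happened to book, which is exactly the asymmetry the rental companies hate.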

Categories: data science

Why I love nerds

February 20, 2012

What is it that grad students do all day? Well if you’re Zachary Abel in the M.I.T. math department, then the answer may be that you fiddle with paperclips and make awesome nerdy and beautiful sculpture (I found his page through the God Plays Dice blog). Here’s my favorite sculpture from his site:

Be sure to read the explanations he gives of the things he’s made; they are very cool and sometimes come with animations.

Categories: math, math education

Sunday morning music videos

February 19, 2012

Adele spoof on Gingrich:

The House of the Rising Sun, nerdstyle (h/t Emil):

http://vimeo.com/33181232

Categories: Uncategorized

ECB trades crap for slightly less crappy crap

February 18, 2012

Yesterday I read this New York Times article on how the ECB is trading its short term Greek bonds, with Greece, for longer term bonds.

Specifically, in order to avoid holding bonds that Greece is officially planning to “voluntarily” default on, the ECB is turning in that super crappy crap for other bonds that Greece hasn’t yet decided how much they’ll default on.

Just to spell it out even more, the plan to get private bondholders more excited about trusting the European bond market has been this:

  1. have the ECB step in (around the beginning of 2012) and provide liquidity and faith in the bond market,
  2. negotiate that the Greek bonds maturing in March 2012 are given a 70% haircut,
  3. make sure credit default swaps on those bonds are not activated (why we need it to be “voluntary”),
  4. change the terms of the bonds’ contracts so that the holdouts of this voluntary deal can be safely ignored, and
  5. have the ECB trade those bonds for longer-dated bonds at the last minute so they don’t actually have to take losses.
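The haircut arithmetic in step 2 is blunt, and step 5 is what exempts the ECB from it. A toy calculation on a notional 100 euros of face value:

```python
face_value = 100.0   # euros of Greek bonds held
haircut = 0.70       # the negotiated 70% writedown

# A private holder who can't escape the "voluntary" deal:
private_recovery = face_value * (1 - haircut)   # about 30 euros

# The ECB, having swapped into fresh bonds exempt from the deal:
ecb_recovery = face_value                        # the full 100, for now
```

Whether Greece eventually writes down the new ECB-held bonds too is, as the post says, exactly the open question.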

I’m not sure about you, but if I’m a private European debt holder my confidence in the bond market is not stronger right now. The argument for why the ECB is doing this is that they aren’t allowed to be seen giving money to Greece, by their charter. It’s odd to me that this charter, of all the various rules that have been broken here, is the one that is being fixated on as the important one we can’t break.

There are complicated politics going on, I am sure. I’m no expert in European politics, but this is about as European and about as political as things get.

Ignoring all of that, as a private bondholder, I’m putting an “ECB back-door swap” premium on all of my European debt from now on. Except maybe German debt, since I think Germany would rather jump out of the Euro altogether than default on its debt. But every other country is fair game. Bottom line: I’d short French debt today.

Categories: finance, news

How Harvard is failing its students

In a recent Bloomberg article, Ezra Klein argues that Harvard and the other Ivy League schools are failing their students because the students end up confused about what they can do with themselves after college and go to Wall Street firms as a way of making themselves marketable. From the article:

For many kids, college represents an end goal. Once you get into a good college, you’ve made it, and everyone stops worrying about you. You’re encouraged to take classes in subjects like English literature and history and political science, all of which are fine and interesting, but none of which leave you with marketable skills. After a few years of study, you suddenly find it’s late in your junior year, or early in your senior year, and you have no skills pointing to the obvious next step.

What Wall Street figured out is that colleges are producing a large number of very smart, completely confused graduates. Kids who have ample mental horsepower, incredible work ethics and no idea what to do next. So the finance industry takes advantage of that confusion, attracting students who never intended to work in finance but don’t have any better ideas about where to go.

He then talks about how the investment banks make the application process formal, which is something these kids are good at, and how Wall Street promises to build them into people with careers and options. He also points out that some kids go into other jobs with formal application processes, like Teach for America, so it’s not all about the money, at least not for all of them. He concludes by saying how Harvard should change:

My hunch is that we have underemphasized the need to learn skills, rather than simply learn, while in college. The fact that Teach for America — which pays almost nothing and can place its hires far from cosmopolitan hot spots — is one of the few recruiting systems competitive with Wall Street suggests that graduates are open to paths that aren’t remotely as remunerative as finance and aren’t based in New York or San Francisco. They’re just not seeing all that many of them.

Although I agree with some of his diagnosis, I don’t agree with his solution of learning more “skills” in college.

As an aside, as I learned from Karen Ho’s excellent book about investment banking, Liquidated, and also from people I’ve met, the skills you learn on Wall Street as a first-year analyst are primarily bullshitting skills and Excel skills. These most definitely should not be taught in college.

I think he is right about these kids being comfortable with the “formal process” of applying to investment banks etc., but I don’t think he dives deep enough into why this is true. The fact is, the kids who get into Harvard nowadays are, generally speaking, professional test takers. They are moreover dependent on outside metrics for evaluating themselves. If you took away tests and grading systems, these kids would be desperately unhappy, because that’s how they’ve been trained all their lives to think about their self-worth.

When I was a tutor at one of the undergrad houses in grad school, I was incredibly impressed with the international group of undergrads I was in charge of; their credentials, even at the age of 20, were amazing, and their knowledge and self-possession were stunning. Same with the high school kids I taught at math camp last summer. But one thing I saw time and time again was how much they needed to please some outside authority. It’s like they never decided whether they themselves liked their major or whether it was a good fit; it was instead about whether they’d be successful and whether it would be an impressive path for them. So, external metrics of success.

Here’s my diagnosis. These kids are vulnerable to Wall Street investment firms and to things like Teach for America because they have application processes at all. But life, normal adult life, doesn’t have an application process. You actually, at some point, need to figure out what you want to do and what makes you happy. You need to take a leap of faith that your native talents and desires will end you up at a reasonable and interesting place.

Actually, you don’t ever have to decide that; you could just keep doing what you think looks good to other people and pleases your parents or friends, without regard to whether it fulfills you at all. That’s kind of what’s happening, I think, with the 36% of Princeton undergrads going into finance.

As for what Harvard et al. can do about this, I would suggest trying to send the message, in one of their core curriculum classes, that it’s not only about what you’re good at; it’s also about what makes you happy. I’m not sure those kids have ever really been told that. Being told might not make a huge difference, but it’s a good start.

And instead of teaching them new “skills,” they should be told about options outside of school, and meet people who are employed doing interesting things with their liberal arts education. Have them talk about the way they made their way there, forged a path, and felt insecure about doing something weird but did it anyway. In other words, present them with role models who are living out their lives on their own terms, with independent thoughts.

Categories: finance, news, rant

A modeled student

There’s a recent article from Inside Higher Ed (hat tip David Madigan) which focuses on a new “Predictive Analytics Reporting Framework” that tracks students’ online learning and predicts their outcomes, like whether they will finish the classes they’re taking or drop out. Who’s involved? The University of Phoenix among others:

A broad range of institutions (see factbox) are participating. Six major for-profits, research universities and community colleges — the sort of group that doesn’t always play nice — are sharing the vault of information and tips on how to put the data to work.

I don’t know about you, but I’ve read the Wikipedia article about for-profit universities and I don’t have a great feeling about their goals. In the “2010 Pell Grant Fraud controversy” section you can find this:

Out of the fifteen sampled, all were found to have engaged in deceptive practices, improperly promising unrealistically high pay for graduating students, and four engaged in outright fraud, per a GAO report released at a hearing of the Health, Education, Labor and Pensions Committee held on August 4, 2010.[28]

Anyhoo, back to the article. They track people online and make suggestions for what classes people may want to take:

The data set has the potential to give institutions sophisticated information about small subsets of students – such as which academic programs are best suited for a 25-year-old male Latino with strength in mathematics, for example. The tool could even become a sort of Match.com for students and online universities, Ice said.

That makes me wonder: what would I have been told to do as a white woman with strength in math, if such a program had existed when I went to college? Maybe I would have been pushed to become something that historical data said I’d be best suited for? Maybe something safe, like actuarial work? What if this had existed when my mother was at MIT in applied math in the early ’60s? Would they have had a suggestion for her?

Aside from snide remarks, let me make two direct complaints about this idea. First, I despise the idea of funneling people into chutes-and-ladders-type career projections based on their external attributes rather than their internal motives and desires. This kind of model, which, like all models, is based on historical data, is potentially a way to formally adopt racist and sexist policies. It codifies discrimination.

The second complaint: this is really all about money. In the article they mention that the model has already helped them decide whether Pell grants are being issued to students “correctly”:

Students can only receive the maximum Pell Grant award when they take 12 credit hours, which “forces people into concurrency,” said Phil Ice, vice president of research and development for the American Public University System and the project’s lead investigator. “So the question becomes, is the current federal financial aid structure actually setting these individuals up for failure?”

In other words, it looks like they are going to try to use the results of this model to persuade the government to change the way Pell Grants are distributed. Now, I’m not saying that the Pell Grant program is perfect; maybe it should be changed. But I am saying that this model is all about money and helping these online universities figure out which students will be most profitable. I’m familiar with constructing such models, because I was a quant at a hedge fund once and I know how these guys think. You can bet this model is proprietary, too; you wouldn’t want people to see too clearly how they are being funneled, it might get awkward.

The article doesn’t shy away from such comparisons either. From the article:

The project appears to have built support in higher education for the broader use of Wall Street-style slicing and dicing of data. Colleges have resisted those practices in the past, perhaps because some educators have viewed “data snooping” warily. That may be changing, observers said, as the project is showing that big data isn’t just good for hedge funds.

Just to be clear, they are saying it’s also good for for-profit institutions, not necessarily the students in them.

I’d like to see a law passed that forces such models to be open-sourced, at the very least. The Bill and Melinda Gates Foundation is funding this; who knows how to reach those guys to make this request?

How Big Pharma Cooks Data: The Case of Vioxx and Heart Disease

This is cross posted from Naked Capitalism.

Yesterday I caught a lecture at Columbia given by statistics professor David Madigan, who explained to us the story of Vioxx and Merck. It’s fascinating and I was lucky to get permission to retell it here.

Disclosure

Madigan has been a paid consultant to work on litigation against Merck. He doesn’t consider Merck to be an evil company by any means, and says it does lots of good by producing medicines for people. According to him, the following Vioxx story is “a line of work where they went astray”.

Yet Madigan’s own data strongly suggests that Merck was well aware of the fatalities resulting from Vioxx, a blockbuster drug that earned them $2.4b in 2003, the year before Merck “voluntarily” pulled it from the market in September 2004. What you will read below shows that the company set up standard data protection and analysis plans which it later either revoked or didn’t follow through on; gave the FDA misleading statistics to trick them into thinking the drug was safe; and set up a biased filter on an Alzheimer’s patient study to make the results look better. They hoodwinked the FDA and the New England Journal of Medicine and took advantage of the public trust, which ultimately caused the deaths of thousands of people.

The data for this talk came from published papers, internal Merck documents that he saw through the litigation process, FDA documents, and SAS files with primary data from Merck’s clinical trials. So not all of the numbers I state below can be corroborated, unfortunately, because the data is not all publicly available. This is particularly outrageous considering what this data means for the public.

Background

The process for getting a drug approved is lengthy, requires three phases of clinical trials before FDA approval, and often takes well over a decade. Before the FDA approved Vioxx, fewer than 20,000 people had tried the drug, versus 20,000,000 people after it was approved. Therefore it’s natural that rare side effects are harder to see beforehand. Also, keep in mind that for clinical trials they choose only people who are healthy apart from the one disease under treatment by the drug, and moreover those people take only that one drug, in carefully monitored doses. Compare this to after the drug is on the market, when people can be unhealthy in various ways and can be taking other drugs, or too much of this drug.

Vioxx was supposed to be a new “NSAID” drug without the bad side effects. NSAID drugs are painkillers like Aleve, ibuprofen, and aspirin, but those have the unfortunate side effect of gastrointestinal problems, though only among a subset of long-term users, such as people who take painkillers daily to treat chronic pain from advanced arthritis. The goal was to find a painkiller without the GI side effects. The underlying scientific goal was to find a COX-2 inhibitor without COX-1 inhibition, since scientists had realized in 1991 that COX-2 suppression corresponded to pain relief whereas COX-1 suppression corresponded to GI problems.

Vioxx introduced and withdrawn from the market

The timeline for Vioxx’s introduction to the market was accelerated: they started work in 1991 and got approval in 1999. They pulled Vioxx from the market in 2004 in the “best interest of the patient”. It turned out that it caused heart attacks and strokes. The stock price of Merck plummeted and $30 billion of its market cap was lost. There was also an avalanche of lawsuits, one of the largest resulting in a $5 billion settlement which was essentially a victory for Merck, considering they made a profit of $10 billion on the drug while it was being sold.

The story Merck will tell you is that they “voluntarily withdrew” the drug on September 30, 2004. In a placebo-controlled study of colon polyps in 2004, it was revealed that over a time period of 1200 days, 4% of the Vioxx users suffered a “cardiac, vascular, or thoracic event” (CVT event), which basically means something like a heart attack or stroke, whereas only 2% of the placebo group suffered such an event. In a group of about 2400 people, this was statistically significant, and Merck had no choice but to pull their drug from the market.

It should be noted that, on the one hand Merck should be applauded for checking for CVT events on a colon polyps study, but on the other hand that in 1997, at the International Consensus Meeting on COX-2 Inhibition, a group of leading scientists issued a warning in their Executive Summary that it was “… important to monitor cardiac side effects with selective COX-2 inhibitors”. Moreover, in an internal Merck email as early as 1996, it was stated there was a “… substantial chance that CVT will be observed.” In other words, Merck knew to look out for such things. Importantly, however, there was no subsequent insert in the medicine’s packaging that warned of possible CVT side-effects.

What the CEO of Merck said

What did Merck say to the world at that point in 2004? You can look for yourself at the four-and-a-half-hour Congressional hearing (seen on C-SPAN) which took place on November 18, 2004. Starting at 3:27:10, the then-CEO of Merck, Raymond Gilmartin, testifies that Merck “puts patients first” and “acted quickly” when there was reason to believe that Vioxx was causing CVT events. Gilmartin also went on the Charlie Rose show and repeated these claims, even going so far as to state that the 2004 study was the first time they had a study which showed evidence of such side effects.

How quickly did they really act though? Were there warning signs before September 30, 2004?

Arthritis studies

Let’s go back to 1999, when Vioxx was FDA approved. In spite of the fact that it was approved for a rather narrow use, mainly for arthritis sufferers who needed chronic pain management and were having GI problems on other meds (keeping in mind that Vioxx was way more expensive than ibuprofen or aspirin, so why would you use it unless you needed to), Merck nevertheless launched an ad campaign with Dorothy Hamill and spent $160m (compare that with Budweiser, which spent $146m, or Pepsi, which spent $125m, in the same time period).

As I mentioned, Vioxx was approved faster than usual. At the time of its approval, the completed clinical studies had only been 6- or 12-week studies; no longer-term studies had been completed. However, there was one underway at the time of approval, namely a study comparing Aleve with Vioxx for people suffering from osteoarthritis and rheumatoid arthritis.

What did the arthritis studies show? These results, which were available in late 2003, showed that CVT events were more than twice as likely with Vioxx as with Aleve (CVT event rates of 32/1304 = 0.0245 with Vioxx, 6/692 = 0.0086 with Aleve, with a p-value of 0.01). This directly refutes CEO Gilmartin’s claim that they didn’t have evidence until 2004 and acted quickly when they did.

In fact they had evidence even before this, if they had bothered to put it together (they stated a plan to do such statistical analyses, but it’s not clear if they did them; in any case there’s so far no evidence that they actually carried out these promised analyses).

In a previous study (“Table 13”), available in February of 2002, they could have seen that, comparing Vioxx to placebo, there was a CVT event rate of 27/1087 = 0.0248 with Vioxx versus 5/633 = 0.0079 with placebo, with a p-value of 0.01. So, three times as likely.

In fact, there was an even earlier study (“1999 plan”), results of which were available in July of 2000, where the Vioxx CVT event rate was 10/427 = 0.0234 versus a placebo event rate of 1/252 = 0.0040, with a p-value of 0.05 (so more than 5 times as likely). A p-value of 0.05 is the conventional threshold for statistical significance. So actually they knew to be very worried as early as 2000, but maybe they… forgot to do the analysis?
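As a sanity check on the rates above (Madigan presumably used more careful methods, and the exact tests aren’t stated here), a simple pooled two-proportion z-test recovers p-values of roughly the sizes quoted:

```python
import math

def two_prop_p_value(events_a, n_a, events_b, n_b):
    """Two-sided p-value for a pooled two-proportion z-test."""
    p_a, p_b = events_a / n_a, events_b / n_b
    pooled = (events_a + events_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided tail probability of the standard normal.
    return math.erfc(abs(z) / math.sqrt(2))

# (Vioxx events, Vioxx n, comparator events, comparator n) as quoted above.
studies = {
    "arthritis vs. Aleve (late 2003)": (32, 1304, 6, 692),
    "Table 13 vs. placebo (Feb 2002)": (27, 1087, 5, 633),
    "1999 plan vs. placebo (Jul 2000)": (10, 427, 1, 252),
}
for name, counts in studies.items():
    print(f"{name}: p = {two_prop_p_value(*counts):.3f}")
```

The first two come out around 0.01 and the third right at the 0.05 boundary, consistent with the numbers in the talk.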

The FDA and pooled data

Where was the FDA in all of this?

They showed the FDA some of these numbers, but they did something really tricky: they kept the “osteoarthritis study” results separate from the “rheumatoid arthritis study” results. Each alone was not quite statistically significant, but together they were amply so. Moreover, they introduced a third category of study, the “Alzheimer’s study” results, which looked pretty insignificant (more on that below though). When you pooled all three of these study types together, the overall significance was just barely not there.

It should be mentioned that there was no apparent reason to separate the different arthritic studies, and there is evidence that they did pool such study data in other places as a standard method. That they didn’t pool those studies for the sake of their FDA report is incredibly suspicious. That the FDA didn’t pick up on this is probably due to the fact that they are overworked lawyers, and too trusting on top of that. That’s unfortunately not the only mistake the FDA made (more below).
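To see how keeping studies separate can hide a real effect, here’s a toy illustration. The counts below are a hypothetical even split of the pooled arthritis numbers (the actual per-study osteoarthritis/rheumatoid counts aren’t public): each half on its own misses the conventional 0.05 threshold, while the pooled table clears it easily.

```python
import math

def two_prop_p_value(events_a, n_a, events_b, n_b):
    """Two-sided p-value for a pooled two-proportion z-test."""
    p_a, p_b = events_a / n_a, events_b / n_b
    pooled = (events_a + events_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return math.erfc(abs((p_a - p_b) / se) / math.sqrt(2))

# Hypothetical even split of the 32/1304 vs. 6/692 arthritis totals.
p_half = two_prop_p_value(16, 652, 3, 346)      # each sub-study alone
p_pooled = two_prop_p_value(32, 1304, 6, 692)   # both pooled together
print(f"each half: p = {p_half:.2f}, pooled: p = {p_pooled:.2f}")
```

Neither half is “significant,” but the pooled data amply is: exactly the kind of thing a regulator should catch.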

Alzheimer’s Study

So the Alzheimer’s study kind of “saved the day” here. But let’s look into this more. First, note that the average age of the 3,000 patients in the Alzheimer’s study was 75, that it was a 48-month study, and that the total number of deaths for those on Vioxx was 41 versus 24 on placebo. So on the face of it, it sounds pretty bad for Vioxx.

There were a few contributing reasons why the numbers got so mild by the time the study’s results were pooled with the two arthritis studies. First, when really old people die, there isn’t always an autopsy. Second, although there was supposed to be a DSMB (Data Safety Monitoring Board) as part of the study, and one was part of the original proposal submitted to the FDA, it was dropped surreptitiously in a later FDA update. This meant there was no third party keeping an eye on the data, which is not standard operating procedure for a massive drug study and was a major mistake, possibly the biggest one, by the FDA.

Third, and perhaps most importantly, Merck researchers added a “filter” to the reported CVT events: the doctors who reported a CVT event had to send their info to Merck-paid people (“investigators”), who looked over the documents to decide whether it was a bona fide CVT event or not. The default was to assume it wasn’t, even though standard operating procedure would have the default assuming there was such an event. In all, this filter removed about half the initially reported CVT events, and Vioxx patients had their CVT event status revoked about twice as often as placebo patients. Note that the “investigator” in charge of checking the documents from the reporting doctors was paid $10,000 per patient. So presumably they wanted to continue to work for Merck in the future.

The effect of this “filter” was that, instead of it seeming 1.5 times as likely to have a CVT event if you were taking Vioxx, it seemed like it was only 1.03 times as likely, with a high p-value.

If you remove the ridiculous filter from the Alzheimer’s study, then you see that as of November 2000 there was statistically significant evidence that Vioxx caused CVT events in Alzheimer patients.

By the way, one extra note. Many of the 41 deaths in the Vioxx group were dismissed as “bizarre” and therefore unrelated to Vioxx: car accidents, falling off ladders, accidentally eating bromide pills. But at this point there’s evidence that Vioxx actually accelerates Alzheimer’s disease itself, which could explain those so-called bizarre deaths. This is not to say that Merck knew that, but rather that one should not immediately dismiss a statistically significant result just because it doesn’t make intuitive sense.

VIGOR and the New England Journal of Medicine 

One last chapter in this sad story. There was a large-scale study, called the VIGOR study, with 8,000 patients. It was published in the New England Journal of Medicine on November 23, 2000. See also this NPR timeline for details. They didn’t show the graphs which would have emphasized this point, but they admitted, in a deceptively roundabout way, that Vioxx had 4 times as many CVT events as Aleve. They hinted that this was either because Aleve is protective against CVT events or because Vioxx causes them, but left it open.

But Bayer, which owns Aleve, issued a press release saying something like, “if Aleve is protective for CVT events then it’s news to us.” Bayer, it should be noted, has every reason to want people to think that Aleve is protective against CVT events. This problem, and the dubious reasoning explaining it away, was completely missed by the peer review system; if it had been spotted, Vioxx would have been forced off the market then and there. Instead, Merck purchased 900,000 preprints of this article from the New England Journal of Medicine, which is more than the number of practicing doctors in the U.S. In other words, the Journal was used as a PR vehicle for Merck.

The paper emphasized that Aleve has twice the rate of ulcers and bleeding, at 4%, whereas Vioxx had a rate of only 2% among chronic users. When you compare that to the elevated rate of heart attack and death (0.4% to 1.2%) of Vioxx over Aleve, though, the reduced ulcer rate doesn’t seem all that impressive.
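Back-of-the-envelope arithmetic with just the percentages quoted above makes the trade-off concrete (the numbers-needed-to-treat/harm below are my own illustration, not from the paper):

```python
# Absolute risk differences from the quoted VIGOR percentages.
gi_benefit = 0.04 - 0.02   # 2 fewer GI complications per 100 chronic users
cvt_harm = 0.012 - 0.004   # 0.8 more heart attacks/deaths per 100 users

# Per 1000 chronic users: ~20 GI complications avoided vs. ~8 extra CVT events.
gi_avoided_per_1000 = 1000 * gi_benefit
cvt_extra_per_1000 = 1000 * cvt_harm

# Number needed to treat (to avoid one GI event) vs. number needed to harm.
nnt = 1 / gi_benefit   # ~50
nnh = 1 / cvt_harm     # ~125
```

Roughly: for every GI complication avoided, about 0.4 extra heart attacks or deaths are caused, which is a poor trade once you remember which of those tends to be fatal.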

A bit more color on this paper. It was written internally by Merck, after which non-Merck authors were found. One of them is Loren Laine. Laine helped Merck develop a 30-second sound-bite interview which was sent to the news media and run like a press interview, even though it actually happened in Merck’s New Jersey office (with a backdrop made to look like a library) and a Merck employee posed as a neutral interviewer. Some smart lawyer got the outtakes of this video made available as part of the litigation against Merck. Check out this YouTube video, where Laine and the fake interviewer scheme about spin and Laine admits they were being “cagey” about the renal failure issues that were poorly addressed in the article.

The damage done

Also in the Congressional hearing I mentioned above is Dr. David Graham, who speaks passionately from minute 41:11 to minute 53:37 about Vioxx and how it is a symptom of a broken regulatory system. Please take 10 minutes to listen if you can.

He claims a conservative estimate is that 100,000 people have had heart attacks as a result of using Vioxx, leading to between 30,000 and 40,000 deaths (again conservatively estimated). He points out that this 100,000 is 5% of Iowa, and in terms people may understand better, this is like 4 aircraft falling out of the sky every week for 5 years.

According to this blog, the noticeable downward blip in the overall death count nationwide in 2004 is probably due to the fact that Vioxx was taken off the market that year.

Conclusion

Let’s face it, nobody comes out looking good in this story. The peer review system failed, the FDA failed, Merck scientists failed, and the CEO of Merck misled Congress and the people who had lost their husbands and wives to this damaging drug. The truth is, we’ve come to expect this kind of behavior from traders and bankers, but here we’re talking about issues of death and quality of life on a massive scale, and we have people playing games with statistics, with academic journals, and with the regulators.

Just as the financial system has to be changed to serve the needs of the people before the needs of the bankers, the drug trial system has to be changed to lower the incentives for cheating (and massive death tolls) just for a quick buck. As I mentioned before, it’s still not clear that they would have made less money, even including the penalties, if they had come clean in 2000. They made a bet that the fines they’d need to eventually pay would be smaller than the profits they’d make in the meantime. That sounds familiar to anyone who has been following the fallout from the credit crisis.

One thing that should be changed immediately: the clinical trials for drugs should not be run or reported on by the drug companies themselves. There has to be a third party which is in charge of testing the drugs and has the power to take a drug off the market immediately if adverse effects (like CVT events) are found. Hopefully they will be given more power than risk firms are currently given in finance (which is none); in other words, it needs to be more than reporting, it needs to be an active regulatory power, with smart people who understand statistics and do their own state-of-the-art analyses, although as we’ve seen above, even just Stats 101 would sometimes do the trick.

Categories: data science, news

Today is Volcker Day

This is a guest post by George Bailey, who is part of Occupy the SEC. I just want to insert here congratulations to Occupy the SEC for submitting their public comment letter yesterday, and to point out that the organization SIFMA below is the same SIFMA I mentioned here and here (those guys are everywhere, defending the interests of the banks).

Today is “Volcker Day” and Paul Volcker was on a tear.

Mr Volcker added in a formal submission to regulators Monday that “proprietary trading is not an essential commercial bank service that justifies taxpayer support,” and that banks should stop “stonewalling.”

He went on:

“There should not be a presumption that evermore market liquidity brings a public benefit,” Volcker, 84, wrote in a letter submitted yesterday to regulators in defense of the rule curtailing banks’ bets on asset prices with their own money. “At some point, great liquidity, or the perception of it, may itself encourage more speculative trading.” (See here and here for the full story.)

But then Jamie Dimon came along and bitch slapped Tall Paul. Ouch.

“Paul Volcker by his own admission has said he doesn’t understand capital markets,” Dimon told Francis in the Fox Business interview. “He has proven that to me.”

SIFMA, on behalf of the industry, took over to explain in detail just what it is that Mr. Volcker doesn’t understand in their comment letter. They reiterate their dire warning about the devastating effects on ‘corporate liquidity’ from the Volcker Rule. Yet surprisingly, no non-financial corporate bond issuers filed any comments to acknowledge or object to this danger.

In fact, there are no comment letters from any non-financial companies. They did haul out the widely lampooned Oliver Wyman study to bolster their claim that ‘corporate’ America would suffer horribly if Volcker is enacted. But that just serves to remind us again that the corporate bond liquidity that will be affected is the liquidity in dodgy financial-company ‘corporate’ bonds, like CDOs and other dreck. They conclude the only solution is a rewrite: they request that the rule makers go back and start all over again.

The SIFMA comment letter runs to 175 pages. I haven’t read all the other financial company letters, but the ones I’ve skimmed conform to SIFMA’s position.

The Occupy the SEC comment letter comes in at 325 pages and, oddly enough, draws the exact opposite conclusion to each of SIFMA’s objections. It’s an interesting contrast. For some reason (some familiarity with the subject matter and the public interest, primarily), the group seems to have understood and articulated Volcker’s (and the electorate’s) intent pretty effectively.

Of the comment letters received, about 90% are from financial institutions, and another 5% are from foreign governments objecting to the priority the US regulators have gifted to US traders in US Government Bonds. The remaining 5% are from ordinary folks, like Mr. Volcker, Occupy the SEC, and other public interest groups.

It’s interesting that 95% of the comments reflect the views of the 1%, and the views of the 99% are embodied in the comments of the remaining 5% of commenters. I’m confident the regulators will recognize that, for all their complexity, the rules are comprehensible and can be refined to serve the public’s demand for control over a runaway financial system.

Mathematics has an Occupy moment

The Occupy Wall Street movement means a lot of things to a lot of people, but one of the things it pretty much universally represents is the concept of agency.

Instead of sitting passively by and allowing a dysfunctional system to detract from a culture, the participants in Occupy want to object, to reform the system, and if that doesn’t work, to build a new system. And the crucial point is that they feel they have the right (if not the obligation) to do so. Moreover, they wish to construct a new paradigm built on a democratic understanding of the shared goals of the system itself, rather than letting whoever is in power decide how things work and who benefits.

I feel like there’s an analogy to be drawn between this process and what’s happening now in the fight between mathematicians and Elsevier, and for that matter the publishing world (as has been pointed out, Springer has the same issues as Elsevier, even though people like Springer a lot more).

It may seem like the fight against Elsevier concerns only a small part of the mathematics system, in that it’s really only one publisher of many, and some people (like the editorial board of the journal Topology) have already gone ahead and started new journals that don’t share the more toxic properties of the Elsevier journals. I don’t think that narrow view is justified.

In fact, part of tearing down Elsevier has to include a broader understanding of how antiquated the entire academic publishing world is, which immediately raises the question of what we need to build to replace it. This is not unlike the Occupy movement’s goal to replace the current financial system with one which would primarily serve the needs of the citizens and only secondarily the desires of bankers. A tall order to be sure, but luckily for mathematicians their system is less complicated, and moreover the community is much more empowered.

Why am I waxing so poetic over this struggle? Because, at the heart of the question of “what is the new system” is the even more fundamental question, “what do we, as a community, wish to treasure and what do we wish to discard?”. After all, we already have arXiv, or in other words a repository of everything, and the question then becomes, how do we sort out the good stuff from the crap?

I want to stop right there and examine that question, because it’s already quite loaded. Let’s face it, people don’t always agree on what it means for something to be good versus crap, and if there was ever a time to examine that question it’s now.

Here’s a thought experiment I’d like you to do with me. Since leaving academic mathematics, I’ve realized the enormous value of being able to explain mathematical concepts to broader audiences, and I’ve been left with the distinct impression that such a skill is underappreciated inside academic mathematics. In the past 8 months, since starting this blog, I’ve become sort of a hybrid mathematician and journalist, and it’s kind of cool, if unfocused. But what if I decided to really focus on the journalism side of mathematics, inside mathematics? Would that be appreciated?

So the thought experiment is this. Imagine if, every 6 months, I moved to a new field of mathematics and acted as a mathematical journalist, interviewing the people in that field about their work, where it’s going, what the important questions are, etc., and at the end of the 6-month gig wrote an expository article that explained the field to the rest of the mathematicians. If I did that every 6 months for 20 years, I’d have covered 40 fields. Assuming I’m as good at explaining things as I say I am, I’d have really opened up these fields to a larger audience (albeit still math folks), which may allow for better communication between fields, or may avoid redundant work between fields, or may simply enrich the understanding of what’s going on. From my perspective, the work I’d be doing would really be mathematics, and would further the overall creation of mathematics.

However, think about those expository articles I’d be writing. They wouldn’t be original, nor would they be particularly hard; if anything, the goal would be for people to understand them. Would they ever get published in a top journal (as of now)? I don’t think so. And please don’t suggest that papers like this, written by famous people in their fields, have been well received. This is true, but I claim it’s more a result of the reputation of the writers than of the content.

Let’s go back to the question of how we sort papers on arXiv. For some people, this question is really confusing and even scary. They fear that any system besides the one now in place would devalue contributions that are more technical, harder, and less accessible in favor of results that are easy, flashy, and amenable to pop-culture sound bites. I exaggerate for effect, but this is the gist of the worries I’ve been hearing. For these people, whom I will call “the traditionalists”, the most they want to do is circumvent the publishers’ fees but otherwise keep intact the referee system, whereby there are gatekeepers who choose experts to anonymously review papers. The publishers are the organizers of this system, and, by inviting people to be editors for their journals, essentially anoint the gatekeepers.

I actually think those traditionalists should be afraid, but not exactly for the reasons that they think. Instead of worrying that their hard, technical papers won’t be appreciated, they should worry that other, totally different kinds of skills will be appreciated. Of course in the end it’s the same result, namely that the top universities may not forever be populated exclusively by people who prove wonderfully difficult, original and ground-breaking results. They could also include people who are the great story-tellers of mathematics and are appreciated for their gifts of understanding and disseminating mathematics, as well as their broad understanding of the field.

In other words, a democratic system actually looks different from an oligarchy, and that’s not necessarily bad, although the oligarchs may think it is.

I’m going to make a prediction, namely that there will be two different systems in place in 15 years. Neither will involve traditional publishers, but one of them will keep that refereeing system intact whereas the other will be more of a crowd-sourced referee system. Maybe it will be something like this idea of Yann LeCun, for example. Maybe it will be better for women. That would be cool.

By the way, I want to be clear that I’m not suggesting all papers are written equally. There really are people who make huge contributions to their fields through proving hard, creative theorems. I just think there are also people who contribute to mathematics in other ways, that also require hard work and excellent skills. And there aren’t just two skills, of course; I just simplified matters for this discussion.

The discussion of the future of academic publishing is raging, as I posted about here. And that discussion is really important in itself, and the fact that so many people are participating in it, and figuring out the shared values of the mathematics community, is democracy in action. I fully believe we are witnessing a historic moment, and it’s weirdly, and happily, happening without police intervention, pepper spray, or drum circles.

Categories: #OWS, math

New online course: model thinking

There’s a new course starting soon, taught by Scott Page, about “model thinking” (hat tip David Laxer). The course web site is located here and some preview lectures are here. From the course description:

In this class, I present a starter kit of models: I start with models of tipping points. I move on to cover models that explain the wisdom of crowds, models that show why some countries are rich and some are poor, and models that help unpack the strategic decisions of firms and politicians.

The models covered in this class provide a foundation for future social science classes, whether they be in economics, political science, business, or sociology. Mastering this material will give you a huge leg up in advanced courses. They also help you in life.

In other words, this guy is seriously ambitious. Usually around people who are this into modeling I get incredibly suspicious and skeptical, and this is no exception. I’ve watched the first two videos and I’ve come across the following phrases:

  • Models make us think better
  • Models are better than we are
  • Models make us humble

The third one is particularly strange since his evidence that models make us humble seems to come from the Dutch tulip craze, where a linear model of price growth was proven wrong, and the recent housing boom, where people who modeled housing prices as always going up (i.e. most people) were wrong.

I think I would have replaced the above with the following:

  • Models can make us come to faster conclusions, which can work as rules of thumb, but beware of when you are misapplying such shortcuts
  • Models make us think we are better than we actually are: beware of overconfidence in what is probably a ridiculous oversimplification of what may be a complicated real-world situation
  • Models sometimes fail spectacularly, and our overconfidence and misapplication of models helps them do so.

So in other words I’m looking forward to disagreeing with this guy a lot.

He seems really nice, by the way.

I should also mention that in spite of anticipating disagreeing fervently with this guy, I think what Coursera is doing by putting up online courses is totally cool. Check out some of their other offerings here.

How unsupervised is unsupervised learning?

I was recently at a Meetup and got into a discussion with Joey Markowitz about the difference between supervised, unsupervised, and partially (semi-) supervised learning.

For those who haven’t heard of this stuff, a bit of explanation. These are general categories of models. In every model there’s input data, and in some models there’s also a known quantity you are trying to predict, starting from the input data.

Not surprisingly, supervised learning is what finance quants do, because they always know what they’re going to predict: the money. Unsupervised means you don’t really know what you are looking for in advance. A good example of this is “clustering” algorithms, where you input the data and the number of clusters and the algorithm finds the “best” way of clustering the data into that many clusters (with respect to some norm in N-space where N is the number of attributes of the input data). As a toy example, you could have all your friends write down how much they like various kinds of foods (tofu, broccoli, garlic, ice cream, buttered toast) and after clustering you might find a bunch of people live in the “we love tofu, broccoli, and garlic” cluster and the others live over in the “we love ice cream and buttered toast” cluster.
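For concreteness, here’s a minimal pure-Python k-means sketch on made-up ratings for that toy example (real work would use a library like scikit-learn; all the numbers are invented):

```python
import random

def dist2(p, q):
    """Squared Euclidean distance between two equal-length tuples."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means: returns (centroids, cluster label per point)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # Assign each point to its nearest centroid...
        labels = [min(range(k), key=lambda c: dist2(p, centroids[c]))
                  for p in points]
        # ...then move each centroid to the mean of its members.
        for c in range(k):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:
                centroids[c] = tuple(sum(col) / len(members)
                                     for col in zip(*members))
    return centroids, labels

# Each friend rates (tofu, broccoli, garlic, ice cream, buttered toast), 0-5.
ratings = [
    (5, 4, 5, 1, 0), (4, 5, 4, 0, 1), (5, 5, 5, 1, 1),   # veggie lovers
    (0, 1, 0, 5, 5), (1, 0, 1, 4, 5), (0, 0, 1, 5, 4),   # toast-and-ice-cream camp
]
centroids, labels = kmeans(ratings, k=2)
print(labels)  # the first three friends share one label, the last three the other
```

Note that we had to hand the algorithm k=2; nothing “unsupervised” decided that for us, which is exactly the point made below.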

I hadn’t heard of the phrase “partially supervised learning,” but it turns out it just means you train your model both on labeled and unlabeled data. Usually there’s a domain expert who doesn’t have time to classify all of the data, but the algorithm is augmented by their partial information. So, again a toy example, if the algorithm is classifying photographs, it may help for a human to go through some of them and classify them “porn” vs. “not porn” (because I know it when I see it).

Joey had some interesting thoughts about what’s really going on with supervised vs. unsupervised; he claims that “unsupervised” should really be called “indirectly supervised”. He followed up with this email:

I currently think about unsupervised learning as indirectly supervised learning.  The primary reason is because once you implement an unsupervised learning algorithm it eventually becomes part of a large package, and that larger package is evaluated.  Indirectly you can back out from the package evaluation the effectiveness of different implementations/seeds of the unsupervised learning algorithm.

So simply put, the unsupervised learning algorithm is only unsupervised in isolation, and indirectly supervised once part of a larger picture.  If you distill this further the evaluation metric for unsupervised algorithms are project specific and developed through error analysis whereas for supervised algorithms the metric is specific to the algorithm, irrespective to the project.

supervised learning:   input data -> learning algorithm -> problem non-specific cost metric -> output

unsupervised learning: input data -> learning algorithm -> problem specific cost metric -> output

The main question is… once you formulate evaluation metric for an unsupervised algorithm specific to your project… can it still be called unsupervised?

This is a good question. One stupid example: if, in the tofu-broccoli-ice cream example above, we had forced three clusters instead of the more natural two, then after looking at the results we might say, shit, this is really a two-cluster problem. That moment when we switch the number of clusters to two is, of course, supervising the so-called unsupervised process.
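That "switch to two clusters" moment can itself be turned into a metric. A minimal sketch, using the silhouette score (one common way to compare cluster counts) on made-up data with two genuine groups: the score for k=2 should beat k=3, which is exactly the kind of project-specific evaluation Joey is talking about.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Made-up data: two genuine groups of points in the plane.
data = np.vstack([
    rng.normal(0, 0.3, size=(20, 2)),
    rng.normal(5, 0.3, size=(20, 2)),
])

# Cluster with k = 2 and k = 3, scoring each result.
scores = {}
for k in (2, 3):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(data)
    scores[k] = silhouette_score(data, labels)

print(scores)  # the k=2 score should come out higher
```

The moment we pick k by looking at these scores, we've introduced exactly the kind of evaluation loop that makes "unsupervised" feel more like "indirectly supervised."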

I think though that Joey’s remark runs deeper than that, and is perhaps an example of how we trick ourselves into thinking we’ve successfully algorithmized a process when in fact we have made an awful lot of choices.

Categories: data science

What’s going on: Greece and mortgages

There are two very confusing but important issues that you should be paying attention to in the news right now. Luckily, Naked Capitalism is covering this stuff for you (and for me).

First, there’s the mortgage settlement, agreed on yesterday or maybe two days ago, which sucks in a lot of ways for poor homeowners but not for the banks. To see the top twelve reasons to hate the mortgage settlement, check out this post from Naked Capitalism.

Second, the Greek debt situation is not yet under control, and no matter what they do over there in Europe they can’t seem to admit it. Here’s a Naked Capitalism post from a couple of days ago, along with a new Bloomberg article, that conveys just how awful that situation is.

I took all our money out of the money market account a few days ago because it’s not FDIC insured and because I really really don’t know what’s going to happen in Europe. Just saying.

Categories: finance, news

The future of academic publishing

I’ve been talking a lot to mathematicians in the past few days about the future of mathematics publishing (partly because I gave a talk about Math in Business out at Northwestern).

It’s an exciting time: mathematicians seem really fed up with a particularly obnoxious Dutch publisher called Elsevier (tag line: “we charge this much because we can”), and a bunch of people have been boycotting them, both for submissions (they refuse to submit papers to the journals Elsevier publishes) and for editing (they resign as editors or refuse offers). My friend Jordan is one such mathematician, for example.

Here’s a page that simply collects information about the boycott. As you can see by looking at it, there’s an absolutely exploding amount of conversation around this topic, and rightly so: the publishing system in academic math is ancient and completely outdated. For one thing, nobody I’ve talked to actually reads journals anymore; they all read preprints from the arXiv. So the only service publishers provide right now is a referee system, but then again the mathematicians themselves do the refereeing. Publishers are really more like organizers of refereeing than anything else.

What’s next? Some people are really excited to start something completely new (I talked about this a bit already here and here) but others just want the same referee system done without all the money going to publishers. I think that would be a great start, but who would do the organizing and get to choose the referees, etc.? It’s both lots of work and potentially lots of bias in an already opaque system. Maybe it’s time for some crowd-sourcing in reviewing? That’s also work to set up, and it could potentially be gamed (if you send all your friends online to review your newest paper, for example).

We clearly need to discuss.

For example, here’s a post (hat tip Roger Witte) about using arXiv.org as a collector of papers and putting a referee system on top of it, which would be called arXiv-review.org. There’s an infant google+ discussion group about what that referee system would look like.

Update: here’s another discussion taking place.

Are there other online discussions going on? Please comment if so, I’d like to know about them. I’m looking forward to what happens next!

Categories: open source tools, rant

As predicted: watered down insider trading bill

Yesterday I posted about the insider trading bill which, in addition to making it illegal for politicians to trade on their insider knowledge, was also going to force “political intelligence firms” to register as lobbyists. Note that this is simply a form of transparency: these people, who work mostly for hedge funds and private equity, wouldn’t have to stop getting insider information; they’d just need to admit that they were getting it. But I guess that’s TMI from their perspective. From the Wall Street Journal article:

Rep. Eric Cantor, the No. 2 House Republican, plans to bring his version of the Stop Trading on Congressional Knowledge Act, or Stock Act, to the floor of the GOP-controlled chamber on Thursday, using a procedure that will prevent lawmakers from voting on major amendments. It is expected to pass by a wide margin.

At issue are changes Mr. Cantor made shortly before midnight Tuesday, when he unveiled his amendment to a bill that sailed through the Senate last week.

Most notably, Mr. Cantor cut a provision that would require people who mine Washington for market-moving information to disclose their activities in the same fashion as lobbyists. The provision covering what is known as the political-intelligence industry was opposed by Wall Street and its Washington lobbyists, including the Securities Industry and Financial Markets Association (SIFMA), which mounted an effort to kill it.

Just to be clear on who is writing legislation nowadays: they are called SIFMA, and they represent the players in the financial industry. You may remember them from this post, where they hired the research firm Oliver Wyman to investigate the impact of the Volcker Rule for a congressional hearing. Shockingly, that research firm thought the Volcker Rule should be watered down.

What exactly is the argument this guy Cantor is using to defend this change? I’d love to hear him come out and say, “I did it because SIFMA told me to”. How come we don’t get to see that argument made and defended? No wonder people don’t like or trust Congress. Even so I’ll give the last word to one of their members:

The House Democrat who has pushed for the legislation for the past six years—Rep. Louise Slaughter (D., N.Y.)—opposed the GOP-backed changes.

Ms. Slaughter said in a statement that the Cantor-backed version of the insider-trading bill was crafted “in secret, behind closed doors, brokering deals for special interests.” She added: “How ironic—insiders now appear to be writing a bill meant to ban insider trading.”

Categories: finance, news, rant

#OWS upcoming events

February 9, 2012 Comments off

Hear ye, hear ye, there will be an Occupy Town Square event this coming Saturday. Please come and help us reconstruct Zuccotti Park inside a church at 86th and Amsterdam for the afternoon. Here’s the flyer:

Also, there will be a march from Liberty Plaza to the Fed and the SEC to celebrate our very own Occupy the SEC’s submission of their Volcker Rule public comments, next Monday, February 13th, at 4:30pm.

Here’s the schedule:
4:00-4:30pm: Assemble at Liberty Plaza
5:00pm: March to the Fed (33 Liberty Street)
5:30pm: March to the SEC’s NY Office (3 World Financial Center, Suite 400)

Finally, the Alt Banking working group now has a twitter feed.

Categories: #OWS, news

This month’s Sky Mall: a sneak peek

I know I’m not the only person who loves Sky Mall magazine for those moments when you realize that you’re not allowed to use your electronic devices, that you have nothing at all physical to read, and that the plane won’t be airborne for 30 minutes due to runway congestion.

To tell you the truth it’s been a while since I’ve moseyed up to lean on it for psychological support, so I was a bit hesitant: I didn’t know what to expect. Forgive my lack of faith.

Bottom line: Sky Mall has never disappointed me, which is more than I can say for most celebrated cultural icons. I want to share just a few of the highlights of this issue, and I hope you appreciate my using up my precious 30 minutes of free in-air wifi (update: clear your cookies for another half hour) to do so:

  1. The Fleece Poncho With A Pillow (actual name) (see picture above). Best product description ever: The Fleece Poncho With A Pillow is an all-in-one fleece poncho-style blanket with a pillow attached.
  2. The Spongester (picture below). From the description: Made from the same steel as an industrial sink with labeled slots for your “good sponge” (utensils & dishes) and “evil sponge” (sink, counter, cat dish). Until now I (naively) didn’t realize that sponges had morals. I feel so… foolish.
  3. Touchless Sensor Seat (with video!!) (picture below): For only $159.99 you can get an automatic sensor that lifts and lowers the toilet seat for you. It may seem like this price is a bit steep but think about it some more: it sure beats a divorce attorney.

Categories: Uncategorized