
Nate Silver confuses cause and effect, ends up defending corruption

December 20, 2012

Crossposted on Naked Capitalism

I just finished reading Nate Silver’s newish book, The Signal and the Noise: Why so many predictions fail – but some don’t.

The good news

First off,  let me say this: I’m very happy that people are reading a book on modeling in such huge numbers – it’s currently eighth on the New York Times best seller list and it’s been on the list for nine weeks. This means people are starting to really care about modeling, both how it can help us remove biases to clarify reality and how it can institutionalize those same biases and go bad.

As a modeler myself, I am extremely concerned about how models affect the public, so the book’s success is wonderful news. The first step to get people to think critically about something is to get them to think about it at all.

Moreover, the book serves as a soft introduction to some of the issues surrounding modeling. Silver has a knack for explaining things in plain English. While he only goes so far, this is reasonable considering his audience. And he doesn’t dumb the math down.

In particular, Silver does a nice job of explaining Bayes’ Theorem. (If you don’t know what Bayes’ Theorem is, just focus on how Silver uses it in his version of Bayesian modeling: namely, as a way of adjusting your estimate of the probability of an event as you collect more information. You might think infidelity is rare, for example, but after a quick poll of your friends and a quick Google search you might have collected enough information to reexamine and revise your estimates.)
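
To make the mechanics concrete, here is a minimal sketch of that updating process in code. The prior and likelihood numbers are illustrative assumptions of mine, not figures from the book:

```python
# A minimal sketch of the Bayesian updating Silver describes: start with a
# prior probability for a hypothesis, then revise it as evidence arrives.
# All numbers below (the prior, the likelihoods) are illustrative, not
# taken from the book.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) via Bayes' Theorem."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# Suppose you think infidelity is rare: a prior of 5%.
belief = 0.05

# Each piece of evidence (a friend's anecdote, a suspicious search result)
# is, say, four times more likely if the hypothesis is true than if false.
for _ in range(3):
    belief = bayes_update(belief, p_evidence_if_true=0.8, p_evidence_if_false=0.2)

print(round(belief, 3))  # → 0.771
```

Each piece of evidence that is more likely under the hypothesis than under its negation pushes the probability up; evidence pointing the other way would push it back down.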

The bad news

Having said all that, I have major problems with this book and what it claims to explain. In fact, I’m angry.

It would be reasonable for Silver to tell us about his baseball models, which he does. It would be reasonable for him to tell us about political polling and how he uses weights on different polls to combine them to get a better overall poll. He does this as well. He also interviews a bunch of people who model in other fields, like meteorology and earthquake prediction, which is fine, albeit superficial.
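
The poll-weighting idea can be sketched in a few lines. The particular weighting scheme below (sample size discounted by age) is a generic assumption for illustration, not Silver's actual FiveThirtyEight formula:

```python
# A toy sketch of poll aggregation: combine several polls into one estimate,
# weighting each poll by its sample size (bigger polls count for more) and
# discounting by its age (older polls count for less). The decay scheme is
# an illustrative assumption, not the FiveThirtyEight methodology.

def combine_polls(polls, decay=0.9):
    """polls: list of (share_for_candidate, sample_size, days_old)."""
    weighted_sum = 0.0
    total_weight = 0.0
    for share, sample_size, days_old in polls:
        weight = sample_size * (decay ** days_old)  # big and recent wins
        weighted_sum += weight * share
        total_weight += weight
    return weighted_sum / total_weight

polls = [
    (0.52, 1000, 0),   # 52% support, n=1000, fielded today
    (0.48, 600, 3),    # 48% support, n=600, three days old
    (0.50, 1500, 7),   # 50% support, n=1500, a week old
]
print(round(combine_polls(polls), 3))
```

The point is simply that some polls carry more information than others, and a sensible aggregate reflects that rather than averaging everything equally.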

What is not reasonable, however, is for Silver to claim to understand how the financial crisis was a result of a few inaccurate models, and how medical research need only switch from being frequentist to being Bayesian to become more accurate.

Let me give you some concrete examples from his book.

Easy first example: credit rating agencies

The ratings agencies, which famously put AAA ratings on terrible loans and joked among themselves about being willing to rate deals structured by cows, did not accidentally have bad underlying models. The bankers packaging and selling these deals, which amongst themselves they called sacks of shit, did not blithely believe in their safety because of those ratings.

Rather, the entire industry crucially depended on the false models. Indeed, they changed the data to conform to the models, which is to say it was an intentional combination of using flawed models and using irrelevant historical data (see points 64-69 here for more; update: that link is now behind a paywall).

In baseball, a team can’t create bad or misleading data to game the models of other teams in order to get an edge. But in the financial markets, parties to a model can and do.

In fact, every failed model is actually a success

Silver gives four examples of what he considers to be failed models at the end of his first chapter, all related to economics and finance. But each example is actually a success (for the insiders) if you look at a slightly larger picture and understand the incentives inside the system. Here are the models:

  1. The housing bubble.
  2. The credit rating agencies selling AAA ratings on mortgage securities.
  3. The financial melt-down caused by high leverage in the banking sector.
  4. The economists’ predictions after the financial crisis of a fast recovery.

Here’s how each of these models worked out rather well for those inside the system:

  1. Everyone involved in the mortgage industry made a killing. Who’s going to stop the music and tell people to worry about home values? Homeowners and taxpayers made money (on paper at least) in the short term but lost in the long term, but the bankers took home bonuses that they still have.
  2. As we discussed, this was a system-wide tool for building a money machine.
  3. The financial melt-down was incidental, but the leverage was intentional. It bumped up the risk and thus, in good times, the bonuses. This is a great example of the modeling feedback loop: nobody cares about the wider consequences if they’re getting bonuses in the meantime.
  4. Economists are only putatively trying to predict the recovery. Actually they’re trying to affect the recovery. They get paid the big bucks, and they are granted authority and power in part to give consumers confidence, which they presumably hope will lead to a robust economy.

Cause and effect get confused 

Silver confuses cause and effect. We didn’t have a financial crisis because of a bad model or a few bad models. We had bad models because of a corrupt and criminally fraudulent financial system.

That’s an important distinction, because we could fix a few bad models with a few good mathematicians, but we can’t fix the entire system so easily. There’s no math band-aid that will cure these boo-boos.

I can’t emphasize this too strongly: this is not just wrong, it’s maliciously wrong. If people believe in the math band-aid, then we won’t fix the problems in the system that so desperately need fixing.

Why does he make this mistake?

Silver has an unswerving assumption, which he repeats several times, that the only goal of a modeler is to produce an accurate model. (Actually, he made an exception for stock analysts.)

This assumption generally holds in his experience: poker, baseball, and polling are all arenas in which one’s incentive is to be as accurate as possible. But he falls prey to some of the very mistakes he warns about in his book, namely over-confidence and over-generalization. He assumes that, since he’s an expert in those arenas, he can generalize to the field of finance, where he is not an expert.

The logical result of this assumption is his definition of failure as something where the underlying mathematical model is inaccurate. But that’s not how most people would define failure, and it is dangerously naive.

Medical Research

Silver refers both in the Introduction and in Chapter 8 to John Ioannidis’s work, which reveals that most published medical research findings are wrong.

I’m glad he mentions incentives, but again he confuses cause and effect.

As I learned when I attended David Madigan’s lecture on Merck’s representation of Vioxx research to the FDA as well as his recent research on the methods in epidemiology research, the flaws in these medical models will be hard to combat, because they advance the interests of the insiders: competition among academic researchers to publish and get tenure is fierce, and there are enormous financial incentives for pharmaceutical companies.

Everyone in this system benefits from methods that allow one to claim statistically significant results, whether or not that’s valid science, and even though there are lives on the line.

In other words, it’s not that there are bad statistical approaches which lead to vastly over-reported statistically significant results and published papers (which could just as easily happen if the researchers were employing Bayesian techniques, by the way). It’s that there’s massive incentive to claim statistically significant findings, and not much push-back when that’s done erroneously, so the field never self-examines and improves its methodology. The bad models are a consequence of misaligned incentives.
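
The incentive problem has a mechanical side worth seeing: even if no real effects exist anywhere, a field that runs enough studies will cross the p < 0.05 threshold at a steady rate. A toy simulation (my generic illustration, not Madigan's analysis):

```python
# Simulate a field that runs many studies on pure-noise data and "publishes"
# whichever ones reach p < 0.05. Even with no true effects at all, roughly
# 5% of studies clear the bar. Generic illustration, not anyone's real data.
import math
import random

random.seed(0)

def z_test_p_value(sample):
    """Two-sided p-value for 'mean differs from 0', normal approximation."""
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)
    z = mean / math.sqrt(var / n)
    # two-sided tail probability of a standard normal
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

studies = 1000
false_positives = sum(
    1
    for _ in range(studies)
    if z_test_p_value([random.gauss(0, 1) for _ in range(50)]) < 0.05
)
print(false_positives)  # roughly 5% of the 1000 null studies "find" an effect
```

With no push-back, those chance “findings” are exactly the ones that get written up, while the nine hundred and fifty null results go in the file drawer.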

I’m not accusing people in these fields of intentionally putting people’s lives on the line for the sake of their publication records. Most of the people in the field are honestly trying their best. But their intentions are kind of irrelevant.

Silver ignores politics and loves experts

Silver chooses to focus on individuals working in tight competition, and on their motives and individual biases, which he understands and explains well. For him, modeling is a man-versus-wild affair: you work with your wits in a finite universe to win the chess game.

He spends very little time on the question of how people act inside larger systems, where a given modeler might be more interested in keeping their job or getting a big bonus than in making their model as accurate as possible.

In other words, Silver crafts an argument which ignores politics. This is Silver’s blind spot: in the real world politics often trump accuracy, and accurate mathematical models don’t matter as much as he hopes they would.

As an example of politics getting in the way, let’s go back to the culture of the credit rating agency Moody’s.  William Harrington, an ex-Moody’s analyst, describes the politics of his work as follows:

In 2004 you could still talk back and stop a deal. That was gone by 2006. It became: work your tail off, and at some point management would say, ‘Time’s up, let’s convene in a committee and we’ll all vote “yes”‘.

To be fair, there have been moments in his past when Silver delves into politics directly, like this post from the beginning of Obama’s first administration, where he starts with this (emphasis mine):

To suggest that Obama or Geithner are tools of Wall Street and are looking out for something other than the country’s best interest is freaking asinine.

and he ends with:

This is neither the time nor the place for mass movements — this is the time for expert opinion. Once the experts (and I’m not one of them) have reached some kind of a consensus about what the best course of action is (and they haven’t yet), then figure out who is impeding that action for political or other disingenuous reasons and tackle them — do whatever you can to remove them from the playing field. But we’re not at that stage yet.

My conclusion: Nate Silver is a man who deeply believes in experts, even when the evidence is not good that they have aligned incentives with the public.

Distrust the experts

Call me “asinine,” but I have less faith in the experts than Nate Silver: I don’t want to trust the very people who got us into this mess, while benefitting from it, to also be in charge of cleaning it up. And, being part of the Occupy movement, I obviously think that this is the time for mass movements.

From my experience working first in finance at the hedge fund D.E. Shaw during the credit crisis and afterwards at the risk firm Riskmetrics, and from my subsequent experience working in the internet advertising space (a wild west of unregulated personal-information warehousing and sales), my conclusion is simple: Distrust the experts.

Why? Because you don’t know their incentives, and they can make the models (including Bayesian models) say whatever is politically useful to them. This is a manipulation of the public’s trust of mathematics, but it is the norm rather than the exception. And modelers rarely if ever consider the feedback loop and the ramifications of their predatory models on our culture.

Why do people like Nate Silver so much?

To be crystal clear: my big complaint about Silver is naivete, and to a lesser extent, authority-worship.

I’m not criticizing Silver for not understanding the financial system. Indeed one of the most crucial problems with the current system is its complexity, and as I’ve said before, most people inside finance don’t really understand it. But at the very least he should know that he is not an authority and should not act like one.

I’m also not accusing him of knowingly helping to cover for the financial industry. But covering for it is an unfortunate side effect of his naivete and presumed authority, and a very unwelcome source of noise at a moment when so much needs to be done.

I’m writing a book myself on modeling. When I began reading Silver’s book I was a bit worried that he’d already said everything I’d wanted to say. Instead, I feel like he’s written a book which has the potential to dangerously mislead people – if it hasn’t already – because of its lack of consideration of the surrounding political landscape.

Silver has gone to great lengths to make his message simple, and positive, and to make people feel smart and smug, especially Obama’s supporters.

He gets well-paid for his political consulting work and speaker appearances at hedge funds like D.E. Shaw and Jane Street, and, in order to maintain this income, it’s critical that he perfects a patina of modeling genius combined with an easily digested message for his financial and political clients.

Silver is selling a story we all want to hear, and a story we all want to be true. Unfortunately for us and for the world, it’s not.

How to push back against the celebrity-ization of data science

The truth is somewhat harder to understand, a lot less palatable, and much more important than Silver’s gloss. But when independent people like myself step up to denounce a given statement or theory, it’s not clear to the public who is the expert and who isn’t. From this vantage point, the happier, shorter message will win every time.

This raises a larger question: how can the public possibly sort through all the noise that celebrity-minded data people like Nate Silver hand to them on a silver platter? Whose job is it to push back against rubbish disguised as authoritative scientific theory?

It’s not a new question, since PR men disguising themselves as scientists have been around for decades. But I’d argue it’s a question that is increasingly urgent considering how much of our lives are becoming modeled. It would be great if substantive data scientists had a way of getting together to defend the subject against sensationalist celebrity-fueled noise.

One hope I nurture is that, with the opening of the various data science institutes, such as the one at Columbia that was announced a few months ago, there will be a way to form exactly such a committee. Can we get a little peer review here, people?

Conclusion

There’s an easy test here to determine whether to be worried. If you see someone using a model to make predictions that directly benefit them or lose them money – like a day trader, or a chess player, or someone who literally places a bet on an outcome (unless they place another hidden bet on the opposite outcome) – then you can be sure they are optimizing their model for accuracy as best they can. And in this case Silver’s advice on how to avoid one’s own biases is excellent and useful.

But if you are witnessing someone creating a model which predicts outcomes that are irrelevant to their immediate bottom-line, then you might want to look into the model yourself.

Categories: finance, modeling, rant, statistics
  1. December 20, 2012 at 7:17 am

    An anthropologist might say he’s part of the anti-politics machine. Basically, technical folks have an incentive to ignore politics, because politics is hard and math is fun. The result of this mistake is the extension of centralized command over decisions that used to be made more locally or collectively. That’s not necessarily bad, nor intentionally caused by bureaucrats, but sometimes decisions should be political.


  2. rethinkecon
    December 20, 2012 at 7:20 am

    I think how feminism took on medical experts is a pretty good model:  an inside-outside strategy that combined feminist medical experts working from the inside in conjunction with mass movement pressure from the outside — both to change laws and to change the culture.  A key part of that included small groups educating & empowering themselves using books like “Our Bodies Ourselves.”  Maybe your book will be v1 of “Our Data, Ourselves”?


  3. chris
    December 20, 2012 at 8:20 am

    Thanks so much for this reminder to be skeptical of any and all pop-simple, TED-like explanations of complex things. This is my own resolve these days and I particularly liked NY Mag’s takedown of “selling stories we all want to hear” via “happier, shorter messages” http://nymag.com/news/features/jonah-lehrer-2012-11/ This article opened my eyes to the content of all those books, like Lehrer’s, that I have been slobbering over.

    It seems that, in your critique of Nate’s financial and drug modeling analysis, you argue for what could be called a “meta-model” or uber-model which wraps/warps the immediate subject’s model, to make it subjective. While terribly obvious to those of us who have been in the financial industry (and left disappointed, like you) it is not a convenient truth for most people. There just doesn’t seem to be anything these days that isn’t a slave to this uber-model.

    Have you seen the BBC’s documentary Century of Self? That seems to be a good explanation of the root cause of much of this. (But then again, I have to hold to my promise of assuming most everything is wrong! 😉)

    Anyway, thank you soo much for the heroic work you are doing to reverse the tide of self-interest. Not sure I can be too optimistic, with HSBC and UBS’ “cost of doing business” fines this week, but I’ll keep hoping!

    Thank you!!!!


  4. December 20, 2012 at 10:32 am

    For back-up of your claim about rating agencies, read this infuriating interview with a former senior analyst at Moody’s. It wasn’t the models. It was corruption, pure and simple:

    http://www.guardian.co.uk/commentisfree/2012/dec/17/ex-moodys-analyst-william-harrington


    • December 20, 2012 at 10:55 am

      very sorry, left the wrong link. This is the correct one, a second testimony about what went on in the rating agencies:

      http://www.guardian.co.uk/commentisfree/joris-luyendijk-banking-blog/2012/jul/24/rating-agency-worker-voice-of-finance


      • December 20, 2012 at 3:23 pm

        Joris, your interviewee claims: “Have you read Gillian Tett’s Fool’s Gold about the crisis? It was exactly like that. You had bankers who did not understand their own complex financial products but thought that they did, and then raters who took their word for it. And nothing has fundamentally changed.”

        This somewhat contradicts the statement in this article:
        “We had bad models because of a corrupt and criminally fraudulent financial system.”


    • January 10, 2013 at 12:30 pm

      If the models were intentionally bad – as the author of this blog claims – then it was the JOB of the ratings agencies to say so.

      It IS worthwhile to say that a model is bad – either due to intentional fraud or unintentional incompetence. However, the job of a ratings agency or of a good regulator would be to identify and uncover a bad model.

      So I guess in true circular fashion – were the ratings agencies themselves corrupt and did they intentionally ignore bad models or did they simply not understand them and were they too lazy to care?


  5. Bill Lawson
    December 20, 2012 at 11:08 am

    This complaint, justified probably, amounts to saying something like: ‘Silver should have employed game theory to explain how moral hazard and collusion are better explanations of how things often go wrong than poor modeling.’


  6. December 20, 2012 at 11:18 am

    Fantastic post, and one that set my mind working furiously on the relationship between the underlying models used by institutions and the fairness (or lack of it) inherent in those institutions. In your explanation of how Silver confuses cause and effect, I began to glimpse a new angle of how to get at the concept of how fairness is defined, and who gets to define it, from the perspective of who gets to decide which model to use. I look forward to your book on modeling; hope it’s out soon.


  7. lawrence castiglione
    December 20, 2012 at 12:30 pm

    Your comments about the financial stuff were on the money (as it were). When I read his analysis I was struck by his inability to realize that the whole idea was to palm the pea, so it was not a mistake in the model/plan/game. I got mine, Jack. That’s it. No model; rather, think scheme.


  8. December 20, 2012 at 12:42 pm

    Yes, every failed model is actually a success, for someone. I agree that nailing the someone is important. That money wasn’t LOST in the housing collapse. Someone took it and walked away with it.


  9. chaletfor2
    December 20, 2012 at 1:18 pm

    If what you say is true, you are quite right. Silver is brilliant in the way you describe but probably totally naive regarding the expropriating ways of the financial sector that’s aided & abetted by our kleptocratic government’s tacit approval–no actionable crimes here, we must look forward, not back, &c. Silver is a Democrat and like most Democrats does not want to believe that, like the Republican party, the Democratic party is neo-liberal expropriationist.


  10. December 20, 2012 at 1:22 pm

    I’m going to defend Nate here: it’s not fair to rip on him for not acknowledging how incentives impact models without citing his chapter on weather prediction, where he shows very clearly that the Weather Channel overestimates the likelihood of rain compared to the National Weather Service because its viewers get angry when it rains.

    Nate also points out that economists are often making predictions in order to protect their reputations or drive a desired outcome, not because they’re trying to be accurate. He cites the fact that economic predictions made anonymously tend to outperform the predictions that people make when their names are attached to them. Again, very clear about how incentives lead to bad predictions.

    It’s a little early for the Nate Silver backlash; I can’t think of anyone who seems less comfortable with his celebrity than Nate.


    • December 20, 2012 at 4:03 pm

      Well put, Josh! This post homed in on the financial meltdown chapter, when there were many other examples. I was impressed with the depth and breadth of Silver’s research across disciplines. And, he *did* seem to touch on “incentives” quite a bit (fox vs. hedgehog, pundits, Weather Channel, etc.). My takeaway was that Silver was trying to actually push back against the aura of “predictive invincibility” that surrounds him — effectively, saying, “Waiiiitttttt a minute, folks. I’ve built my career in places where the data works, and the uncertainty that I attach to my predictions gets ignored. Prediction is hard and messy, and there are a lot of environmental factors at play.” He acknowledges that people hunger for math-driven simplistic predictive truisms, and he makes the case time and again that the world doesn’t work that way.


    • channelclemente
      December 23, 2012 at 10:25 pm

      I think Silver’s perspective on perverse incentives is pretty well outlined in his discussion of the steroid abuse in baseball. Maybe he constrained himself to only one sermon per book.


  11. December 20, 2012 at 1:51 pm

    This isn’t personal. It’s not about Nate or Nate’s fame. It may be about the self-imposed limits of modeling. If one’s motivation is to explain, one ought to at least ask the question “What might account for this?”


  12. December 20, 2012 at 2:07 pm

    Again, to reinforce the idea that “we should be skeptical of everything that makes complicated events seem simple” here is a climate prof’s experience with Nate Silver’s “narrative” http://www.huffingtonpost.com/michael-e-mann/nate-silver-climate-change_b_1909482.html

    “That Nate would parrot Armstrong’s flawed arguments is a major disappointment, especially because there are some obvious red flags that even the most cursory research should have turned up.”


  13. December 20, 2012 at 2:12 pm

    “If you’re a science or math geek like me, you can’t help but like Nate Silver. He’s the fellow nerd who made good.”

    “He sought me out because he felt my expertise would make me an “excellent guide to the history of climate modeling”. ”

    “But he falls victim to a fallacy that has become all too common among those who view the issue through the prism of economics rather than science. Nate conflates problems of prediction in the realm of human behavior — where there are no fundamental governing ‘laws’ and any “predictions” are potentially laden with subjective and untestable assumptions — with problems such as climate change, which are governed by laws of physics, like the greenhouse effect, that are true whether or not you choose to believe them.”

    http://www.huffingtonpost.com/michael-e-mann/nate-silver-climate-change_b_1909482.html


  14. MP
    December 20, 2012 at 2:16 pm

    I disagree pretty strongly with your cause/effect analysis. You can argue that criminally fraudulent activity occurred leading up to and during the financial crisis, as I’m sure it did, but you cannot argue that it was the main cause of it without some serious evidence to back that up. This is precisely the reason most of these guys are not in jail right now – there was more a lack of understanding of the models and incentive misalignment than outright criminal activity. Your anger at financial executives should be for their incompetence and not confused with conspiracy theories of corruption.


    • archer
      December 22, 2012 at 12:58 am

      With all due respect, if you had been paying attention, tons has been written on why the bad guys haven’t been prosecuted, and it has squat to do with “lack of understanding”. The bar for fraud in securities law (Federal and New York’s Martin Act) is material misrepresentation. You don’t need intent. Price fixing is also criminal, as are false certifications under Sarbanes Oxley. The reason they have not been prosecuted is that Obama doesn’t want to, period.

      Plenty of people have made the case. Charles Ferguson has in Predator Nation, citing specific laws and evidence in the public domain. Matt Taibbi, Bill Black, and Yves Smith also have repeatedly demonstrated how specific laws were violated and how prosecutors are refusing to go after big, well connected players.


      • fresno dan
        December 22, 2012 at 11:07 am

        well said


    • March 15, 2013 at 5:05 am

      Even if you believe that the people structuring, trading, and pricing mortgage-backed paper were confused by the complexity of the products and innocently overconfident in the power of their models (as do Nate Silver, Gillian Tett, and various other commentators), you still need to account for incentives.

      If you were running a financial institution do you think that you would rather employ overconfident, gullible and easily confused people to analyze complicated financial products, or shrewd and cynical people? Airplane engines are extremely complicated. Do you think that Boeing seeks to employ people who have the ability to understand them but who have the sense to question each assumption baked into the design and testing process? Or do you think they are content with people who find them too complicated to understand but who nevertheless have a childlike belief in the propensity of large, expensive, nicely painted machines to fly without crashing?

      Now suppose that you are a senior manager in an investment bank. If you hire people who are great at pulling numbers out of Excel sheets, but also happy to accept without too much questioning a story that says “everyone involved can make a lot of money on these deals without any downside risk”, you keep the money tap on for a bit longer. If you hire people who doubt that something can come from nothing, and who start sniffing around, questioning unspoken assumptions, and blowing whistles, you have to deal with the genuine problems of risk and capital requirements.

      There were plenty of people around in 2006 who a) understood supposedly “extremely complex credit” derivatives and CDOs, and b) knew that the bubble was a bubble. The fact that many of them worked for hedge funds rather than banks and ratings agencies is not a coincidence.


      • March 15, 2013 at 5:31 am

        I think your comments cut right to the heart of the matter. If Boeing executives hired gullible engineers to “keep the money tap flowing” just a little bit longer by certifying aircraft engines that would result in plane crashes, they should be locked up. And for good reason. And so should executives in charge of banks, guilty of exactly the same crime.


  15. Larry Headlund
    December 20, 2012 at 4:10 pm

    “This assumption generally holds in his experience: poker, baseball, and polling are all arenas in which one’s incentive is to be as accurate as possible.”

    I would say that polling is not always done with only an incentive to be accurate. Besides the notorious ‘push polls’, polling results can be skewed by the phrasing of the question. The example that comes to mind is ‘Do you think abortion should be a private matter between a woman and her physician?’, which will get around 60% positive, while ‘Do you support abortion on demand?’ will get about 60% negative.

    There are incentives in polling besides accuracy.


  16. James Jameson
    December 20, 2012 at 4:27 pm

    From what I understand, the author isn’t really saying anything besides smugly “calling out” Nate Silver for being so confident in the power of mathematics and for having a track record of accurate predictions.

    Because, you know, all those pundits and their opinions were so accurate and insightful. We need to rely less on math, and more on opinions.

    How can anyone call Nate Silver naive, how? Those same people relied on punditry and gut-feelings for a Romney win, now they’re all butt-hurt because their ignorance isn’t as good as knowledge in this day and age.


    • wheelinshirt
      December 20, 2012 at 5:35 pm

      i think you either need to read the article again or read it for the first time.


    • December 22, 2012 at 3:13 pm

      You can be naive, smart, and data-driven all at the same time.

      That’s almost a classic description of a research nerd.


  17. downunder
    December 20, 2012 at 7:53 pm

    I have no experience with the finance side but your analysis sounds quite plausible to me. I do have lots of experience with the medical research aspect of it, and there I believe you are absolutely spot on about the quality of the models being secondary to the wrong incentives, namely to publish positive results at all cost and accuracy be damned. Climate science strikes me as a similarly corrupt area working under wrong incentives. I think fundamentally there is a big shift from science as a discipline where falsifiability was the main requirement, and Feynman’s “special kind of integrity” the most desirable characteristic of the researcher, to data science where the MO is to run a model and follow it with story telling. I think it is important that this type of modelling/interpretation be very clearly distinguished from the old fashioned scientific method.


    • January 8, 2013 at 2:16 am

      Re medical research: everyone should read Ben Goldacre’s new book Bad Pharma. Devastating.

      Re climate science I don’t agree. The current consensus on global warming has developed over many years and despite intense pressure from governments, business, etc., to reach a different conclusion.


  18. December 21, 2012 at 12:11 pm

    Great analysis and in particular, I like your call for better peer review. We live in a world where the loudest and simplest continue to prevail even though that shouldn’t always be the case.

    The GitHub model of peer review is by far one of the most fascinating mechanisms I’ve seen, and I’m curious why we don’t have something more concrete in place for data science/modeling. Put up your work, theses, hypotheses, etc., and let the community decide or build off your work. Do you see that form of peer review coming anytime soon?

  19. Walter
    December 21, 2012 at 12:40 pm

    Should we subject global warming models and modelers to similar scrutiny of methods and motives?

  20. Vladimir Stepanov
    December 21, 2012 at 2:30 pm

    What would you expect from Silver, to stand up and call all those financial fraudsters fraudsters and criminals? 🙂

  21. December 21, 2012 at 6:26 pm

    Sometimes it is better to remain silent and be thought a fool than to speak and to remove all doubt.

  22. December 22, 2012 at 12:23 am

    In many situations wherein I optimize a model for my own interest, it would contravene my interest to tell you what’s in my model.

  23. December 22, 2012 at 1:03 pm

    I know a former energy trader who did all the math, but when the herd is buying, you have to go with the momentum rather than the analysis if you want to make money.

  24. December 22, 2012 at 3:43 pm

    Cathy, in the age of Google, you have a powerful platform from which to “fight back” against any incorrect assertions you see popularized in other forums. In fact, this post has gotten quite the viewership, and you made great arguments about Nate’s analysis and your perspective on the same questions. Unfortunately, the (totally unjustified) contempt running throughout your post reduces the impact of your ideas. I wish you had focused on the issues and delved deeper into your own perspective instead of over-compensating with personal put-downs.

  25. mitsu
    December 23, 2012 at 1:00 am

    There are a few hypotheses about why things went wrong in the financial crisis, but it’s undoubtedly the case that the risk models were wrong. Here are a few candidate explanations:

    1) The financial industry knew full well the models were wrong, and they purposefully pretended to believe in them in order to mislead people to prop up a massive asset bubble bigger than nearly any in generations.

    2) The models were wrong, and some people knew they must be wrong, but the financial industry as a whole ignored those who pointed this out because the systemic incentives were not aligned with finding errors in the models. In other words, everyone was making a lot of short-term money so there wasn’t a strong short-term incentive to question the models that were at the bottom of it.

    3) Everyone in the financial services industry was doing their absolute best to understand the risk models and they just happened to rely on models that were wrong (misapplying the Gaussian copula). When the financial crisis happened, everything blew up and boy was there egg on their faces.

    I think it’s obvious that 1) and 3) are wrong. 3) is absurdly naive, and 1) is paranoid and assumes far more “the masters of the universe are in control of everything” than is credibly the case. It’s simply not true that “everybody made a killing.” The financial crisis destroyed entire banks and caused losses in the billions to trillions across many privately held portfolios. It was clearly not something the financial industry knew was coming at the size and scope at which it came. Lehman is gone. Bear Stearns is gone. Do you really think the guys running those banks were masterminds who knew all about what was wrong with the models? I find that quite unbelievable.

    There are, however, clearly systemic incentives towards ignoring the models. This can be a combination of corruption, stupidity, inertia, and many other factors. It doesn’t have to be one simple “evil masterminds corrupt the system” explanation, nor a Pollyanna “they were just misled by bad models” explanation either. I think the evidence is strong that the answer is somewhere in the messy middle.

    • chris
      December 23, 2012 at 5:21 am

      “I think it’s obvious that 1) (is) wrong….1) is paranoid and assumes far more “the masters of the universe are in control of everything” than is credibly the case.”

      You obviously never worked in finance!

      You don’t have to be a “master of the universe” (i.e. smart) to be corrupted and seduced by a financial incentive, and to have control of little acts of corruption that add up to wrecking the system. There is just too much evidence of venal, corrupted thinking at Lehman and Bear and Countrywide and AIG to see it as anything but a cascade of “creeping opportunism” where every day, in every way, the actors, big and small, took the wrong step because of a blatant financial incentive to do so: from the guy who threw one more crappy mortgage into a pool to Dick Fuld using Repo 105 to circumvent capital requirements while publicly claiming “we are fine.” The data is clear that people knew and deliberately ignored the truth because they were all paid to do so. The asset bubble was a byproduct of years of people willfully turning a blind eye to honesty because they were paid for years by being dishonest. It all adds up after a while.

      As Upton Sinclair wrote: “It is difficult to get a man to understand something, when his salary depends upon his not understanding it.”

      • Mitsu
        December 23, 2012 at 10:00 am

        No, you misunderstand me. I’m not saying that it’s paranoid to suggest there was fraud and abuse; I’m saying it’s paranoid to suggest the Gaussian copula was intentionally designed to have the flaws it did in estimating default correlation. Of course there was fraud. However, the question is, how did the model get to be the way it is, and what effect did that have, in what order.

        No doubt some people were feeding bad data into the models, some knew the model was wrong, some exploited the flaws in the model. But even if you fed good data in, the model was wrong. And the idea the model was designed intentionally to be wrong is what I’m saying is paranoid. The incentives weren’t there to correct the error, undoubtedly. But presuming everyone knew the flaws in the default correlation calculation and the banks were cynically and purposefully misusing it in every case doesn’t make much sense given that these same banks went under. Fraud added to the disaster but it can’t possibly have been totally designed from the ground up. It must have been a combination of many factors, including fraud and lack of incentives to correct errors, somewhere in between Nate Silver’s and Cathy’s perspectives, it seems to me.
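
        To make the “wrong even with good data” point concrete, here is a minimal, purely illustrative simulation sketch (hypothetical parameters, nothing from any real bank’s book): with the same headline correlation, a Gaussian copula gives much lower odds of two assets crashing together than a heavier-tailed Student-t copula does, which is exactly the kind of flaw critics of the formula point to.

```python
import numpy as np

# Illustrative sketch (assumed parameters, not anyone's production model):
# compare joint tail risk under a Gaussian copula vs. a Student-t copula
# that shares the SAME correlation.
rng = np.random.default_rng(42)
n, rho, df = 1_000_000, 0.3, 3  # assumed sample size, correlation, t degrees of freedom

cov = np.array([[1.0, rho], [rho, 1.0]])
z = rng.multivariate_normal([0.0, 0.0], cov, size=n)  # Gaussian-copula draws

# Student-t draws: scale the same normals by a shared chi-square mixing variable
w = rng.chisquare(df, size=n) / df
t = z / np.sqrt(w)[:, None]

def joint_tail_prob(x, q=0.01):
    """Probability that BOTH components fall below their own q-quantile."""
    cut = np.quantile(x, q, axis=0)
    return np.mean((x[:, 0] < cut[0]) & (x[:, 1] < cut[1]))

p_gauss = joint_tail_prob(z)
p_t = joint_tail_prob(t)
print(p_gauss, p_t)  # p_t comes out noticeably larger than p_gauss
```

        Under these assumed numbers, the t-copula’s joint-crash probability should come out several times the Gaussian one, even though both are calibrated to the identical correlation; the Gaussian copula’s tail dependence vanishes in the extremes, so no amount of clean input data would have made it flag simultaneous defaults properly.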

        • December 23, 2012 at 10:40 am

          Good point.

          But isn’t the situation that the models were created for a certain outcome? People were paid to build models that would make money, a priori. Otherwise, they wouldn’t have built the models in the first place. They weren’t neutral. To paraphrase a previous poster: “there wasn’t a ‘loss’, there was a winner every time.”

          There was a recent analysis of pharma company drug trials that said the same thing: the models aren’t neutral by design. They are intended to sell drugs from which the companies can profit, with the patient actually a secondary consideration. While you can use the models, the outcome is preordained.

          This is a fascinating thread for me, because it clears up some thinking that I’ve been doing. There is actually an “uber-model” within which most models operate. As another poster quoted George Carlin, “It’s a game, man, and you ain’t friggin’ in it!” — my quote of the year!

        • Mitsu
          December 23, 2012 at 1:43 pm

          They were paid by the banks (a clear conflict of interest), so there wasn’t a good incentive to produce correct models. But to suggest the models were purposefully built from the outset to be grossly wrong just isn’t supported by the evidence. It is more like: the models were built, people started to use them, some people said “wait a second, this seems wrong,” and their colleagues and bosses said don’t rock the boat. But like I said, if they had really known how wrong the models were, why did they steer their banks right over the cliff? Lots of people lost millions and billions, even wealthy folks, so there’s no way every one of them was in on some elaborate con. The error was there and it wasn’t corrected by the system, but it also wasn’t known by all the bankers and traders how wrong it was. That’s why Goldman came out fairly unscathed; they knew better than the other banks how wrong the models were.

        • December 23, 2012 at 11:09 pm

          Thanks again for chiming in. However, I think you are both off-topic and misinformed.

          Firstly, re-read Mathbabe’s original post: it is her taking issue with Silver’s analysis that says flawed models caused the problems. Mathbabe is annoyed that, once again, we legitimize that red herring of “bad models” when we should be looking at bad people using models (bad and otherwise).

          Secondly, Mathbabe says (and I and others agree) that bad people “steered the banks right over the cliff” (to use your phrase) because they were literally paid to do that. And who wouldn’t be a bad guy in exchange for millions, sometimes tens and hundreds of millions?

          You are misinformed that there were only losses in the situation; this is a zero-sum game where for every loss, someone made money! Another red herring: that the financial crisis caused everyone “losses.” Square that with the highest end of the yacht market, the art market, home prices in Aspen, and high-end Manhattan real estate.

          Start with this data point and work backwards: few if any of the bad actors who did the driving lost money. The evidence is pretty conclusive that actors made self-interested financial decisions to build and use flawed models, with flawed input, for their own gain, not because they were naive. It’s the money that made them do it, not bad math! Those models enabled the biggest transfer of wealth in the history of the world. There were clear winners in this game, and they won so much that they were happy to steer their fellow man over the cliff. Blaming the inanimate models is looking for a cause in the wrong place.

        • mitsu
          December 24, 2012 at 12:51 am

          cwdz, you’re just repeating what Cathy wrote, which really doesn’t add anything to the discussion, since I’ve already addressed the points you repeat above. It is a category error to jump from “there were bad actors feeding bad data into the models” (indisputably true) to “bad actors were the sole cause of the crash, the sole prime mover in the chain of causality.” Contrary to your assertion that the evidence for this is “indisputable,” it’s nothing like indisputable. Many people have interviewed people at all levels of the financial services industry, and the picture is quite clear: the vast majority of them thought the models were at least somewhat valid. Take even one of the quotes Cathy cited, from a trader who was talking about how the models weren’t capturing “all” the risk because the data being fed into them was inadequate. That implies the trader believed that, had the data been correct, the model would have captured the risk a lot better. What none of the traders note is that it didn’t matter what data you put in the models, because the models were fundamentally flawed at their very core. THAT, I guarantee you, was not understood by everyone; in fact, it wasn’t understood by many people at all.

          Finally, it’s patently absurd to say that nobody who was “driving” lost any money. I’ve read the specious argument that Fuld didn’t really lose the $1 billion he lost when Lehman went under because he had made a lot of money before that. Do you really think a guy like Fuld, who was riding high at the top of one of the most powerful banks in the world at the time, really *planned* the whole thing? He knew the bank was going to go under, and he was perfectly okay with losing $1 billion in net worth? These guys do not work that way. They want to win.

          The banks fared very differently: like I said, Goldman Sachs was almost unscathed by the crash. Lehman and Bear Stearns went under. That fact alone ought to be clear evidence, speaking of evidence, that not everyone was as in on this supposed scam as it may appear. Who would you rather be in 2008? Chairman of Goldman Sachs or chairman of Lehman Brothers? Let’s get real here.

          I just think that this entire debate points out the folly of our tendency to oversimplify complex systems. We want to believe there’s one simple explanation for failure. It was some bad guys! It was due to fraud! No. Sometimes it’s due to a combination of factors, from the structure of the system to bad incentive structures to fraud to bad actors to stupidity to greed to bad models, all intermixing in a complicated mess. We don’t live in a world of Batman, where all bad things happen because of criminal masterminds. We live in a world of competing complex forces. Sure, there’s tons of fraud. But these guys aren’t as smart as you think. And that’s the problem. They’re not as smart as you think, or they think, and that’s why they need to be regulated. That’s why we need to put in place structural reforms to cushion the impact of popping bubbles and to restrict the size of them in the first place.

        • December 24, 2012 at 8:07 am

          You obviously never worked on Wall Street.

          Calling it a “specious argument” that a trader comes out way, way ahead at the end of the day is illogical. Good traders don’t take everything off the table all the time, and quite often they can’t when they would want to.

          I’d bet that Fuld had more money than Blankfein at the end of 2008. Do you know the actual numbers? Even a back-of-the-envelope effort would get you there.

          And I never said these folks are geniuses or masterminds. I said quite the opposite. In fact, most people on Wall Street are no brighter than you and I. But the incentives for people to act in a similar way, smart or dumb, universally led in the same direction: their own profit at others’ expense. They used good and bad models, and good or bad data, but (again, in case you haven’t been reading) there were clear winners in the game. And that data point, clear winners, is where one should start one’s inquiry, rather than with abstractions.

          This is the point: there is ONE COMMON thread in the whole debate; there were some WINNERS, and they were the ones building and using the models and the data. This is no coincidence. Your own argument (and somewhat Nate’s) that “it’s due to a combination of factors, from the structure of the system to bad incentive structures to fraud to bad actors to stupidity to greed to bad models, all intermixing in a complicated mess” ignores the data point that the people who made the gains were the ones driving the bus.

        • mitsu
          December 24, 2012 at 11:24 am

          Like I said before, you’re simply repeating yourself, and not bothering to address my arguments. My simple argument is: if Fuld *knew* from the beginning that the entire house of cards of mortgage-backed securities was as fragile as it was, do you really think he would have sat back and blithely lost $1 billion in net worth (he was left with on the order of several hundred million)? The fact that he wasn’t *completely* wiped out isn’t a “data point” that proves anything at all. That’s perfectly consistent with my interpretation of the events, which is that he didn’t know what he was doing, or what was going to happen, at the scale that it did. You have to provide a hell of a lot more evidence than “Fuld wasn’t totally wiped out, so he must have known what he was doing all along.”

          The point I made is that if all the drivers were in on the con from the beginning, they shouldn’t have had vastly different outcomes during the crash. Some lost two-thirds of their net worth or more; some lost almost nothing. THAT fact is not consistent with your interpretation, and it IS consistent with mine. The fact that few if any of these guys were totally wiped out is obviously consistent with both interpretations: if you have that much money, you’d have to be an idiot not to be diversified enough to avoid losing the entire amount. You keep saying “obviously you have never worked on Wall Street,” but come on, that is one of the simplest facts there is. The fact that most or all of them didn’t lose EVERYTHING isn’t evidence for anything at all, other than that they weren’t complete idiots.

          I’m saying they weren’t as smart as you seem to think they were.

          As for Silver’s argument as mathbabe presents it and mine: they’re not the same at all, as you also incorrectly assert. You seem to like to conflate positions into simplistic binary alternatives. Mathbabe is saying it is wrong to say the problem was just in the models; I agree. However, I’m saying she’s wrong to say the problem was entirely in the realm of bad actors. Things that go wrong are not always due to bad actors, The Joker, Sauron, or whatever other bugaboo. There are bad actors in any given situation, but they exist in context. And there’s always a context, which sometimes, or even almost always, is the larger part of the explanation.

        • December 24, 2012 at 10:18 pm

          Forgive me, I am new to forums such as this. I realize now that this board is going to be fully “trolled” by virtue of its author.

          Your posts are obviously those of a pro troll, likely paid for by a conservative PAC, because you repeatedly flood the comments with replies that artfully go around the very specific points made by others, simply to confuse the issue with sophomoric generalizations.

          And you are successful; I will never bother to engage with people on a board again, because it is so easy to be lured into your game.

        • mitsu
          December 25, 2012 at 2:31 am

          First of all, calling someone a “troll” because they don’t agree with you is one of the most content-free and tiresome maneuvers on the Internet. Has it ever occurred to you that there might be people who don’t agree with your point of view, and are not “trolls”? Do you think that Nate Silver is a troll? Do you think anyone who doesn’t agree 100% with your contentions is a “troll”?

          I’m not the one responding with vague generalizations — I’m making a very specific counter-argument. If all the bankers were as prescient as you seem to think, and they all knew exactly what was wrong with the models, then why did some of them lose vast amounts of net worth, and others did not? You haven’t addressed that basic point, at all, simply resorting to name-calling.

          You should read this article about the Gaussian copula:

          http://www.wired.com/techbiz/it/magazine/17-03/wp_quant?currentPage=all

          It includes specific interviews with people who used the formula, who worked on Wall Street, some of whom attempted to warn others about the pitfalls with the model. Do you think they’re all making it up, and in fact everyone in the industry knew what was going on from the bottom up? If so, why? And again, how do you account for the massive disparity in outcomes between different bankers, if they all knew what was wrong with their use of this model?

          And give me a break, I am not a conservative. I don’t work for a conservative PAC. My name is Mitsu Hadeishi — feel free to look me up. I am a real person, please do not treat me like some anonymous set of bits on the other side of your screen.

        • mitsu
          December 25, 2012 at 2:36 am

          One more thing: here’s what’s wrong with thinking that what happened was entirely due to bad actors: it makes it seem like the game itself isn’t the problem, just the bad people playing the game. MY contention is that the game is the problem: bankers, even if they were all honest, are playing in a system that by its very nature creates feedback loops that can and will result in massive economic calamity. It doesn’t matter if the bankers were honest or not. Being selfish profiteers (which many of them undoubtedly were and are) makes it worse, but it wouldn’t make any difference if they weren’t. Because the system itself is unstable, and results in massive income inequality, whether you have bad actors or not.

          THAT means the industry ought to be regulated. We should have many more regulations to prevent or mitigate the impact and the unfairness of the fundamental instability at the core of the entire system. If that view makes you call me a “conservative troll” then you really have no clue what conservatives actually think.

  26. Anthony
    December 23, 2012 at 5:01 pm

    A system that depends on all actors acting without malice is a fundamentally broken system. Of course there are misaligned incentives when a company produces models. Of course a company would like to leverage more if its money is not at risk. The oversight for this is the people lending the money. One of the many issues that led to the financial crisis is that the creditors had little incentive to check the models, because they knew that the worst case would be a government bailout. The credit agencies could not give junk AAA ratings unless that’s what the buyers wanted, not the sellers. Your perspective is way too simplistic. There are bad people in every industry; the question is why that led to a worldwide financial crisis.

  27. grwww
    December 23, 2012 at 11:08 pm

    My viewpoint is that the economic issues are caused by the definition of a recession. That is, the amount of money circulating in the economy was outstripped by the “value” that was driving the economy. That is to say, people felt that they needed to purchase many more items than they actually had money to support. They used credit to do this, and the inflated value in the economy caused new jobs, buildings, cars, houses and all kinds of goods to be manufactured, and facilities sized to that demand, when there was not enough opportunity in the economy to support it.

    The Fed failed to “see” what was happening with the credit. It is true that the housing bubble was a part of what happened. However, the people impacted by that were not “everyone” affected by the “credit-based stretch” of the value in the economy.

    So, the economy is now being expanded by the Fed printing money to try to allow big business to “hire” more people. Unfortunately, that’s not what is actually needed. What actually needs to happen is that all the credit needs to be paid off and the economy expanded to “allow” the new businesses and jobs to be maintained.

    A “Tax rebate” or “Tax reduction” for the “small income” families, who are the majority of the “spenders” in our general economy, would allow there to be more “money” in the economy to balance against the excess value.

    More efficient cars, smartphones, next-generation computers, TVs and other “things” which people will buy, and which provide lots of “jobs” (Apple’s software development and Apple Stores are not in China, for example), are where our economy is focused on growth.

    Without the money coming to the masses at the bottom, it will never be in circulation…

  28. tmarthal
    December 25, 2012 at 2:02 pm

    “that the only goal of a modeler is to produce an accurate model.”

    Is there something wrong with that? You bring the human factor into the loop with the incentives to produce inaccurate models, yet you never really address his argument that the models were wrong. The models did not accurately portray reality.

    I think that you and he define success differently, which is why the confusion exists.

    If you remove individual incentives from the equation, how else besides accuracy can you define the success of a model?

  29. sDunbar
    December 26, 2012 at 4:45 pm

    Nope. Nate Silver is corrupt.

  30. January 3, 2013 at 4:01 pm

    Nate Silver’s book “The Signal and the Noise” has an error: in Figure 1-2, he mislabels the columns by mistake, swapping the words “Correlated” and “Uncorrelated.” This is a big error; it’s like saying that when you drop a cup of coffee, it will fall up (presumably into orbit?). Just goes to show that even the wise get it very, very wrong occasionally.

  31. January 5, 2013 at 1:19 pm

    Reblogged this on Sbagliando s’impera and commented:
    Even though I don’t completely agree, this can be a useful complement to my review (http://borislimpopo.com/2012/12/01/nate-silver-the-signal-and-the-noise/)

  32. January 6, 2013 at 12:17 pm

    I’m not an expert by any means, but Salmon’s award-winning article “Recipe for Disaster: The Formula That Killed Wall Street” (http://www.wired.com/techbiz/it/magazine/17-03/wp_quant?currentPage=all), Derman’s Models.Behaving.Badly, and Weatherall’s The Physics of Wall Street suggest that there might be something wrong with the models too. (I’m not questioning your arguments; I actually like the way you describe failures as eventual successes à la Popper. I just want to ask whether bad financial models are also a possibility.)

    • Romanoz
      January 12, 2013 at 5:48 pm

      Spot on. The ad hominem attack masks a deeper problem. It happened before: the collapse of the hedge fund Long-Term Capital Management (LTCM) in September 1998 was the largest corporate failure in history at the time. Two of its principals, Merton and Scholes, were Nobel Laureates, and their model for making money was the Black-Scholes formula. Nicholas Dunbar describes the collapse of LTCM and the shortcomings of their model, for which they received their Nobel Prizes, in his book “Inventing Money.”
      But Nate and Mathbabe have missed the point. There has been a paradigm shift from productive borrowers to unproductive lenders. This started to occur about 30 years ago, well before Nate’s and Mathbabe’s time! John Quiggin touches on this in his “Voodoo Economics”, the Australian edition.
      Blindly ranting about a “corrupt and criminally fraudulent financial system” does not edify discourse.

  33. Isaac Scharp
    January 6, 2013 at 8:59 pm

    I feel like you read a cookbook and are complaining that it didn’t talk about good agricultural practices.

  34. troof
    January 9, 2013 at 5:07 am

    “a part of the occupy movement”

    why am i not surprised

    people like nate silver because he’s smart. try that sometime.

  35. Zathras
    January 9, 2013 at 10:27 am

    Not sure if you saw this, but Nate had a short response to this post yesterday:

    Q.

    Last month, the quant-blogger mathbabe took your book to task for confusing cause and effect. She said, “We didn’t have a financial crisis because of a bad model or a few bad models. We had bad models because of a corrupt and criminally fraudulent financial system … this is not just wrong, it’s maliciously wrong.” She then claimed you were “a man who deeply believes in experts,” which is where your book went wrong.

    Could you address this criticism and defend your conclusions?

    (full post: https://mathbabe.org/2012/12/20/nate-silver-confuses-cause-and-effect-ends-up-defending-corruption/)
    — stickycinnamon
    A.

    I’d encourage you to read my book and ask whether she fairly interprets my hypothesis. I don’t think she does. The financial crisis chapter is quite explicit about asserting that the credit ratings agencies were not just stupid, but also a bunch of dirty rotten scoundrels, so to speak. And the book is generally quite skeptical about the role played by “experts”.

    http://fivethirtyeight.blogs.nytimes.com/2013/01/08/nate-silver-reddit-ask-me-anything-transcript/

  37. January 10, 2013 at 10:14 pm

    Mathbabe, good post. You write that “The bad models are a consequence of misaligned incentives … (Silver) spends very little time on the question of how people act inside larger systems. “ Good outcomes require honesty, integrity, good data, sound analysis and good intent. Sadly, the pressures and incentives often act against these virtuous qualities. Indeed, I was seen as a threat at the highest levels of the Queensland “Public Service” for maintaining these qualities rather than playing the insider-outsider game with no regard for the public interest.

    Some good points on this from downunder #26. Vivek Reddy #28: we need to ensure that the peer reviewers are not incestuous, as appears to be the case with many CAGW papers. Anthony #47: more good points.

  38. Colin
    January 24, 2013 at 12:36 pm

    Ok we get it, you are jealous of Nate Silver’s recent success and his whole accessible, comprehensive approach… I wonder if your book will make it to number 5 on the NYT best seller list? Probably not…

    • January 24, 2013 at 12:58 pm

      Colin, we really don’t need rudeness here. If you wouldn’t say something to her face, please don’t say it here.

  39. CitizensArrest
    January 31, 2013 at 2:53 pm

    Haven’t you also just pointed out a big part of the reason that Bill Gates is so off base with his addiction to data and measurement? To the point where you could just switch names and goose the examples a bit? Yes, it’s a simplification, BUT………

  40. CitizensArrest
    January 31, 2013 at 3:20 pm

    This is for Mitsu http://www.auroraadvisors.com/articles/Webber-Metrics.pdf Seems we keep repeating history, even when we are well aware of it. I am becoming more and more convinced that economists, modelers, and all who try to interpret and improve massively complex systems that have any human behavior component should be REQUIRED to take both psychology and criminology courses that are purposely not tied to their specific fields in order to round out their abilities.
    Though I’m not at all knowledgeable about the banking industry, it seems those in charge of the “police work” within the individual companies are sorely lacking in autonomy and authority. They need to have the capabilities of the “Internal Affairs” division, popular on cop shows in the past. There seems to be no fear of immediate consequences since they don’t seem to exist in finance. Just sayin’

  41. Retired Bureaucrat
    March 16, 2013 at 10:26 pm

    Silver’s book is that of a charlatan doing Monday-morning quarterbacking, unmindful of the fact, as Math Babe pointed out, that the politics and the biases of the people funding the research or the models lead to biases in the models. The same was true of the climate change models I had to review while working for the Clinton-Gore administration. As for medical research, Nate’s criticism that most published research is not valid does not recognize that studies have to be replicated, and that it is the cumulative impact of several studies, along with critical reviews of those studies, that collectively leads to a change in a clinical practice or the approval of a new drug or therapy.
    After all is said and done, his only prescription is to explain Bayes’ Theorem with a simple example and to claim the new mantra that revising our beliefs on the basis of new evidence from the news, sampling, or experiments will lead to better predictions. This, unfortunately, is easier said than done, not only because of the politics, but because most of us have biases of our own and do not have the time to critique the models put forth by “experts.”
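    As a concrete anchor for the Bayes’ Theorem point above, here is a minimal sketch of the update rule the book describes; the prior and likelihoods below are invented purely for illustration:

```python
# Bayes' rule: update a prior belief in light of new evidence.
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """P(hypothesis | evidence), given a prior and two likelihoods."""
    numer = prior * p_evidence_if_true
    denom = numer + (1 - prior) * p_evidence_if_false
    return numer / denom

# Start from a 5% prior; observe evidence four times more likely
# if the hypothesis is true than if it is false.
posterior = bayes_update(0.05, 0.80, 0.20)
print(f"posterior: {posterior:.1%}")  # 17.4%
```

    Even with fairly strong evidence, the posterior stays modest when the prior is small, which is part of why “just revise your beliefs” is easier said than done.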


  42. March 29, 2013 at 12:50 am

    So, I’m late reading this very fascinating thread on the diamonds and the pyrite of Nate Silver. I also read a very interesting sub-thread between Mitsu Hadeishi and cwdz.

    Just in response to the original post, I don’t believe that Silver intended to place any sense of innocence on the senior leaders of financial firms in the events leading up to the meltdown. They were certainly hedging risk, and in a woefully ignorant way. As Mitsu indicated (and Silver points out), it was both the risk appetite of the moguls and overconfidence in a number of risk models (in addition to quite a number of other things which aligned to create one hell of a storm), that ushered in the 20km financial asteroid.

    Talking Risk:
    One thing that might have added support to Silver’s position would have been a discussion of how risk is evaluated, interpreted, and acted upon. While he gave a nice simplified explanation of the ‘bubble’, it might have been worthwhile to take another step (and a page or two) to review various approaches to risk modeling, and how protecting the solvency of any institution is sometimes a maddening exercise of distilling the extraordinary historical complexities of risk into a range of inflation-adjusted dollars. Following from the questions around risk modeling, the central issue for the solvency of a financial company is then how it decides what to do (or not do) with the potential for low-probability, high-impact tail losses (‘unexpected loss’). After all, they want to *make* money, not save it.

    I think Operational Risk guru Ali Samad-Khan has his finger on the pulse of this quite well when he discusses the notion that expressing risk in ‘probabilities’, as we would understand them from a purely statistical perspective, doesn’t necessarily compute with those who make the decisions which may exacerbate or mitigate risk. A 1%-3% risk of loss may not seem like much to a decision maker, unless you consider risk as a 1 in n year/month/week/etc type of event. Given the addition of the time domain, all of a sudden now the low chance / high impact event seems less like an ‘if’ and more like a ‘when’.
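    The time-domain framing described above is easy to make concrete. A hedged sketch with illustrative numbers (not from any real risk model), assuming independent years:

```python
# A small annual probability compounds into a near-certainty over a
# long horizon: P(at least one event in T years) = 1 - (1 - p)^T.
def prob_at_least_once(annual_p: float, years: int) -> float:
    return 1.0 - (1.0 - annual_p) ** years

for p in (0.01, 0.03):
    print(f"annual p = {p:.0%}: "
          f"10 yrs: {prob_at_least_once(p, 10):.1%}, "
          f"30 yrs: {prob_at_least_once(p, 30):.1%}")
```

    A “3% a year” loss is better than a coin flip to occur at least once over 30 years, which is exactly the shift from ‘if’ to ‘when’.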

    To support Mitsu’s thought on the copula estimation controversy, this breed of models could never make that translation – not that a time-domain added intuition of risk would necessarily have magically changed the big FI’s practices (think about it, those weren’t sensible leverages they were using in their bets). But, like weather forecasts, probabilistic interpretation without some sort of meaning behind a given numeric value (what does a 30% chance of rain mean for me anyway?) can be fatal.

    In all, for the FI’s, their bets on being well capitalized at 97% of value-at-risk measures were not only dastardly, they were utterly misguided. Interestingly, even today the Basel Committee is finding controversy with Basel II’s 99.5% requirement (and this is only *1* of their issues – another story for another time).
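    For readers unfamiliar with the 97% figure, here is a minimal sketch of a historical value-at-risk calculation on made-up data; the Gaussian losses and sample size are assumptions for illustration only (real loss distributions are far heavier-tailed):

```python
import random

random.seed(0)
# Stand-in for a history of daily portfolio losses (positive = loss).
losses = sorted(random.gauss(0.0, 1.0) for _ in range(10_000))

def historical_var(sorted_losses, level):
    """Loss threshold exceeded on roughly (1 - level) of historical days."""
    idx = int(level * len(sorted_losses)) - 1
    return sorted_losses[idx]

var_97 = historical_var(losses, 0.97)
tail = [x for x in losses if x > var_97]
mean_tail_loss = sum(tail) / len(tail)  # expected shortfall beyond VaR
print(f"97% VaR: {var_97:.2f}; mean loss beyond it: {mean_tail_loss:.2f}")
```

    Even in this toy model the comment’s point survives: the 97% VaR says nothing about how bad the remaining 3% of days can be, and the mean loss in that tail is always worse than the VaR itself.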

    Talking Clinical Research vs. Finance:
    I don’t find, in contrast to a few postings, that finance and pharma/medical device have the same type of problem with hedging their bets. In the finance world, risk is weighed against historical data, and decisions are made post hoc. There isn’t a ‘prospective way’ that I know of to do the things that FI’s have done which mimic in any way a clinical drug or device trial.

    For clinical research, there are a GAZILLION insidious practices which can alter a desired outcome in a trial or post-launch study (remember, some med devices can bypass expensive human trials if they get 510(k) clearance). Notice I said *desired*. In general, if a new drug or device is going to fail before launch, companies are more likely nowadays to admit this, particularly if it’s in early-phase development. Of course, Phase I–III trials can be biased in their design in ways that favor a particular outcome, throwing a wrench in the machine that may not be picked up until long after the trials are over.

    One particularly interesting bias is how companies will lead participating clinical site physicians by a carrot, promising them first position on a primary publication, based merely on how well they recruit trial candidates vs other participating sites. This can be great additional income and thus incentive for a primary care physician who is then likely to be tapped to $$$peak to other phy$$$icians on the company’s behalf about the product.

    What gets REALLY mouth-wateringly interesting in this area is how companies will suppress clinical study information when it comes to addressing long-term effectiveness of a drug/product, or argument for a new indication ….. or the competition. In the first 2 cases, it is not uncommon to find studies which fail to make the case which would lead to a product claim or new indication. It’s also not uncommon to find clinical research folks frantically scampering and carrying frighteningly large bludgeons toward the research statisticians when their CEO lets out a blurb ahead of clinical findings that the favorable outcome was met. These are indeed reality TV series moments.

    The 3rd case is less clear cut, but nevertheless another example of prospective manipulation. The business case is what ends up determining the trial to be conducted, and each design has its own risks and its own folks hedging their bets. In general, the company must decide whether it wants to develop a non-inferiority study or one demonstrating superiority. The first may be more reasonable to attain, but the cost of doing so (much larger sample sizes) means the return on that investment may not be optimal. The latter may be less expensive to conduct, but a product may not demonstrate (and thus allow a claim of) superiority over its direct competition. And whether someone considers, say, an adaptive Bayesian or a frequentist approach to the study (with some sort of stopping rule: if ‘sameness’ or superiority criteria have been met, there is no more need to recruit patients = less dinero) can mean the difference between a positive, short-term, cost-effective study and a useless longer-term stroll into purgatory (Bayesian trials run the risk of requiring more patients when an endpoint is not ‘yet’ met, if ever). And yes, when a company halts a trial in this genre – it never goes unnoticed.
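    The sample-size trade-off described above can be sketched with the standard two-proportion approximation; every rate, margin, and power below is an invented assumption for illustration, not a real trial design:

```python
from statistics import NormalDist

def n_per_arm(p_new, p_ctrl, margin, alpha=0.025, power=0.80):
    """Approximate patients per arm to show p_new is no worse than
    p_ctrl by more than `margin` (margin=0 reduces to superiority)."""
    z_a = NormalDist().inv_cdf(1 - alpha)
    z_b = NormalDist().inv_cdf(power)
    variance = p_new * (1 - p_new) + p_ctrl * (1 - p_ctrl)
    return (z_a + z_b) ** 2 * variance / (p_new - p_ctrl + margin) ** 2

# Superiority: assume the new product really is 10 points better.
sup = n_per_arm(0.70, 0.60, margin=0.0)
# Non-inferiority: assume equal efficacy, with a 5-point tolerance.
noninf = n_per_arm(0.65, 0.65, margin=0.05)
print(f"superiority: ~{sup:.0f}/arm; non-inferiority: ~{noninf:.0f}/arm")
```

    With these made-up numbers the non-inferiority design needs roughly four times the patients per arm, which is the cost trade-off the comment describes.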

    In Sum:

    There is a prevailing balance between magnanimous failures that constitute greed and those that constitute ignorance. In the middle of those lie the quantitative pros (or not) who are charged with conducting the surgical mechanics of providing meaning behind the data. With the type of industry being fairly irrelevant, those failures in which the analysis of and interpretations from data are a part will be distributed along a spectrum as much as the data themselves. Modeling risk, interpretation risk, the business appetite, the incentive to rationalize questionable decisions despite ‘the truth’; these will all be contributing conditions to the messes we’ve witnessed.

    I don’t believe that Nate Silver ever misrepresents any of these possibilities, but I do understand the angst that Cathy expresses in her blog, in that it is always far easier to flog the analyst than it is to get the executive within the area code of the flogging post.



  44. Andy
    August 2, 2013 at 10:24 pm

    I’ve only read the first part of this book, but I balked at the point where Silver says Moody’s “buil(t) up exceptional profits despite picking résumés out of Wall Street’s reject pile,” and supported that assertion by citing “In 2005, the average Moody’s employee made $185,000, compared with the $520,000 received by the average Goldman Sachs employee that same year.” If that’s not lying with statistics, I don’t know what is. The comparison is ridiculous–would any company rival Goldman during that period? That hardly makes Moody’s employees “rejects.” Besides, it makes zero sense to use the average instead of the median. It all makes me think that Silver has taken his idea that there is no such thing as an objective model too much to heart, and he’s stopped even trying.
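    Andy’s average-versus-median point is easy to demonstrate with hypothetical pay data (these numbers are invented, not Moody’s or Goldman’s):

```python
from statistics import mean, median

# A few very large packages drag the mean far above what a typical
# employee earns, so comparing averages across firms can mislead.
pay = [90_000] * 90 + [4_000_000] * 10  # 90 typical staff, 10 rainmakers
print(f"mean: ${mean(pay):,.0f}; median: ${median(pay):,.0f}")
```

    Here the mean is more than five times the median, even though nine in ten employees earn the lower figure.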


Comments are closed.