
Whom can you trust?

December 21, 2012

My friend Jordan has written a response to yesterday’s post about Nate Silver. He is a major fan of Silver and contends that I’m not fair to him:

I think Cathy’s distrust is warranted, but I think Silver shares it.  The central concern of his chapter on weather prediction is the vast difference in accuracy between federal hurricane forecasters, whose only job is to get the hurricane track right, and TV meteorologists, whose very different incentive structure leads them to get the weather wrong on purpose.  He’s just as hard on political pundits and their terrible, terrible predictions, which are designed to be interesting, not correct.

To this I’d say, Silver mocks TV meteorologists and political pundits in a dismissive way, as not being scientific enough. That’s not the same as taking them seriously and understanding their incentives, and it doesn’t translate to the much more complicated world of finance.

In any case, he could have understood incentives in every field except finance and I’d still be mad, because my direct experience with finance made me understand it, and the outsized effect it has on our economy makes it hugely important.

But Jordan brings up an important question about trust:

But what do you do with cases like finance, where the only people with deep domain knowledge are the ones whose incentive structure is socially suboptimal?  (Cathy would use saltier language here.)  I guess you have to count on mavericks like Cathy, who’ve developed the domain knowledge by working in the financial industry, but who are now separated from the incentives that bind the insiders.

But why do I trust what Cathy says about finance?

Because she’s an expert.

Is Cathy OK with this?

No, Cathy isn’t okay with this. The trust problem is huge, and I address it directly in my post:

This raises a larger question: how can the public possibly sort through all the noise that celebrity-minded data people like Nate Silver hand to them on a silver platter? Whose job is it to push back against rubbish disguised as authoritative scientific theory?

It’s not a new question, since PR men disguising themselves as scientists have been around for decades. But I’d argue it’s a question that is increasingly urgent considering how much of our lives are becoming modeled. It would be great if substantive data scientists had a way of getting together to defend the subject against sensationalist celebrity-fueled noise.

One hope I nurture is that, with the opening of the various data science institutes, such as the one at Columbia, which was announced a few months ago, there will be a way to form exactly such a committee. Can we get a little peer review here, people?

I do think domain-expertise-based peer review will help, but not when the entire field is captured, as in some subfields of medical research and some subfields of economics and finance (for a great example, see Glenn Hubbard get destroyed in Matt Taibbi’s recent blog post for selling his economic research).

The truth is, some fields are so yucky that people who want to do serious research just leave because they are disgusted. Then the people who remain are the “experts”, and you can’t trust them.

The toughest part is that you don’t know which fields are like this until you try to work inside them.

Bottom line: I’m telling you not to trust Nate Silver, and I would also urge you not to trust any one person, including me. For that matter, don’t necessarily trust crowds of people either. Instead, carry a healthy dose of skepticism and ask hard questions.

This is asking a lot, and it will get harder as time goes on and the world becomes more complicated. On the one hand, we need increased transparency for scientific claims, of the kind that projects such as runmycode provide. On the other, we need to understand the incentive structure inside a field like finance to make sure it is aligned with the field’s stated mission.

Categories: data science, finance, modeling
  1. Jonathan
    December 21, 2012 at 9:52 am

    I am in between Cathy and Jordan in my opinion of Nate Silver. I think some of the reasons for my opinion illustrate the issues.

    I followed Nate’s blog during the election and his responses to criticisms. I think Nate Silver is very good in his domain of election prediction. The reason I say that is not because he is an expert but because:
    - he explains where his conclusions come from in a clear way.
    - he responds to the arguments of his critics (truly responds, not obfuscating as many experts do).
    - he does not claim he is doing stuff that is too complex for non-experts to understand.
    - his incentives, as far as I know, are pretty well-aligned with making accurate forecasts.

    I haven’t read Nate’s book, but Cathy makes a very good case that Nate gets into trouble when he starts commenting on other topics. Cathy’s point about people intentionally building misleading models to serve their self-interest is crucial. I am disturbed that Nate seems unaware of this. But I would still trust his forecasts for the next election.

    But I think Nate, in his domain, is a pretty good “model” for what to look for in experts. Most fields require a lot of time to understand well (both time spent directly in the field and, often, time spent acquiring the skills needed to understand it). So, inevitably, we are going to rely on the relatively small number of people who have devoted that time. I am not saying that we should accept what those people say uncritically, but we can look for people who fit the criteria above and who will help us make our own judgments.

    There are large costs to distrusting experts. We have responded too slowly to global warming in part because of skepticism of experts (admittedly, this is a huge oversimplification, but it is, and continues to be, a piece of it).

    Cathy, your advice to “carry a healthy dose of skepticism and ask hard questions” is great as far as it goes. But it is very time-consuming. Each of us can’t possibly “ask hard questions” about all of the important issues that need to be addressed today. What do we do about the issues we don’t have time for? Not having an opinion serves the purposes of entrenched interests.

    We need to rely on others. One approach is to critically evaluate the experts (particularly those who are more open and less self-serving). But even that is very time-consuming. So I think we most commonly need to rely on other people who have done that critical evaluation.

    Carry a healthy dose of skepticism. But be smart about it. Be very skeptical of some, not so skeptical of others.


  2. Eugene
    December 21, 2012 at 11:24 am

    I told Jordan that he’s biased toward trusting experts because the experts in his field (math) are pretty reliable. Whereas in finance:

    1. The incentives are perverted (you covered this well)
    2. The facts are opaque (you don’t have them all when you’re trying to draw conclusions)
    3. It’s really hard to test hypotheses (non-stationarity plus lots and lots of variables)
    4. Experts are poorly vetted (related to (1))

    And yes, it’s hard to feel in your bones how bad it is until you’ve been in it. Musical story: about six or seven years ago, before the financial crisis, I saw Laurie Anderson do an early version of this song:

    (Without the section about the bailouts, obviously — the most political part was about the experts who got us into Iraq.) I remember thinking at the time that it was sort of heavy handed. Now, after being in finance during the crisis… not so much.


    • December 21, 2012 at 11:39 am

      Fantastic video!!


      • Jonathan
        December 21, 2012 at 2:45 pm

        Yes, wonderful video. Thanks.


    • JSE
      December 21, 2012 at 12:02 pm

      You might also add that I’m biased towards experts because I am one, and any general loss of trust in expertise hurts me personally (and leads people to be more apt to believe wrong things about the areas I’m expert in).

      And then you could go on to say that I should be fighting with bad experts who give expertise a bad name, not with Cathy!

      Maybe I can do both…


  3. December 21, 2012 at 12:05 pm

    One strategy is to read what someone writes in an area where I am (or one is) more knowledgeable than that someone. If that someone speaks sensibly without seeming overconfident (only 1% crackpot-like), then their credibility is increased or maintained. Another is to learn about that someone’s milieu and personal circumstances to vet for real or apparent conflicts of interest. Also, some people are hubs bridging math and physics, or classical music and football, so one can leapfrog between quite informed people. But the devil is in the details, as always.


  4. Dikaios Logos
    December 21, 2012 at 3:23 pm

    I am not a friend of Jordan’s, but someone who knows him in passing, as three and perhaps four circles in my life intersect with his. I understand that Jordan, like you, has a rarefied background in pure math, and that this confers a certain gravitas on their opinions. I also understand that Nate Silver is a very competent analyst and that he has offered unique and helpful insights on many occasions.

    To all this I recommend people “do the math” (a little jab at Jordan there!) and realize: SO WHAT! I was raised in an environment not too different from Jordan’s, have known lots of people in common with him or of similar intellectual pedigree, and was as a small child something of a prodigy myself. But I always found it hard to totally embrace the role of “super person”, because even as a small child I recognized that this part of the human experience, this elevation of people as intellectually ‘above it all’ or at least above others, was clearly an act of intellectual fraud.

    There is a tendency, and perhaps a certainty, even among the uniquely capable, to oversell their own virtues or the virtues of those similar to themselves. While Nate might be more likely to offer a uniquely helpful picture of voting dynamics or of data use, that is at best a statement about his relative abilities, not his absolute ability to avoid error. When you realize that much of his success is domain-specific, that there was a time when those he is now passing in reputation were also lauded as pioneers, and, perhaps most importantly of all, that people really, really want to think someone is above the fray, you start to see that “trusting” him too much is just silly. Anyone should see that there are reasons not to trust him, just on the face of it. No one is worthy of total trust, though some might be worth listening to a bit more than others.

    Your “bottom line” paragraph really nails it for me. You never, ever get away from needing to be skeptical. It is for me the most sophisticated and difficult intellectual skill of all.

    As a slight aside, issues of trust are central to politics, and I have an analogy that I bring out when people assume that I should fall into line behind one of two (or even three or four) ‘choices’ in politics and trust them or follow along. The issues involved in getting TWO PEOPLE to make active efforts to trust each other, work together, or agree in a relationship are very tough, and we all know that. In areas like politics, you are dealing with the problem of getting millions of people to work together, or agree, or at least be productive, and that should be seen as a much more massive problem, one that requires unique thought, skepticism, and creativity. Examining all the information or considering all the possibilities is impossible, though provisionally you have to act as if it is possible, and that means you might read experts and use them; but I really hope people see that no one can be worthy of total trust.

    Rant over, I hope my difficult day hasn’t made this seem too deranged!


  5. pat craig
    December 22, 2012 at 11:37 am

    I worked with a guy for several years digging large, deep holes for sewer pipe repair. He knew what he was doing. If someone referred to him as an “expert” he would snap, “An expert is a has-been under pressure. I’m a professional!” He also would say, “Don’t SIR me, I work for a living!” Whenever I hear ‘expert’ I become a bit more critical regardless. Also, as a blue-collar intellectual I’ve been regularly amused by VSPs and others who assume operating shovels and hammers and the like are brainless endeavors, until they get on the end of one. You can work it or it will work you. Most education is not documented. Some of the most intelligent and wise people in this world don’t have credentials.

    PS: the word ‘expert’ is acceptable, but ‘professional’ is better. Pat, Seattle (roll ’em if ya got ’em!) PPS: Stop the pigs!


  6. mitsu
    December 23, 2012 at 3:19 am

    I also come down in between Cathy and Nate Silver. Cathy points out examples where traders explicitly fed the models bad or incomplete data and used the output of the models to rate securities that were junk. Goldman famously sold junk as solid, and so on. But the point is, even when you used the right data, even when you tried to use the models “correctly”, they were mispricing risk massively because the models themselves were overly simplistic. They assumed the correlation of defaults was a single constant, which is wrong.

    So yes, the incentives weren’t there for the industry to correct the bad models, but to say the crash was caused only by corruption doesn’t make sense to me, as though every banker knew exactly what they were doing. Bear Stearns, Lehman, etc., clearly DIDN’T know what they were doing. They were so out of it that they went under. Goldman seemed to know what was wrong with the models, but many other big banks fared poorly to disastrously. So it seems to me more accurate to say it was a combination of things: bad models, corruption, poor incentives, stupidity, and inertia, rather than simply saying that Nate Silver has the causality reversed. It seems to me more like a complex network of multiple causes, with bad models being both a cause and an effect of that network.
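
    To make the constant-correlation point concrete, here is a minimal sketch, in Python with purely illustrative numbers, of the kind of one-factor Gaussian copula calculation at issue (the function and parameter names here are my own, not anything from an actual pricing system):

```python
# Minimal sketch of a one-factor Gaussian copula for portfolio defaults.
# Each obligor i defaults when a latent variable
#     X_i = sqrt(rho) * M + sqrt(1 - rho) * Z_i
# falls below a threshold set by its marginal default probability p.
# The single constant rho is the assumption being criticized above.
import numpy as np
from scipy.stats import norm

def simulate_loss_fractions(n_names=100, p=0.02, rho=0.10, n_sims=50_000, seed=0):
    """Simulated fraction-of-portfolio losses under a one-factor Gaussian copula."""
    rng = np.random.default_rng(seed)
    threshold = norm.ppf(p)                      # default if latent X_i < threshold
    M = rng.standard_normal((n_sims, 1))         # common (systemic) factor
    Z = rng.standard_normal((n_sims, n_names))   # idiosyncratic factors
    X = np.sqrt(rho) * M + np.sqrt(1 - rho) * Z
    return (X < threshold).mean(axis=1)          # loss fraction in each scenario

for rho in (0.05, 0.30):
    losses = simulate_loss_fractions(rho=rho)
    print(f"rho={rho:.2f}  mean loss={losses.mean():.3f}  "
          f"99.9th pct loss={np.percentile(losses, 99.9):.3f}")
```

    Both runs have essentially the same expected loss (the 2% marginal default probability), but the extreme loss percentiles differ by several times. That sensitivity of the tail to one assumed constant is the kind of mispricing being described here.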


    • Jonathan
      December 24, 2012 at 11:08 am

      Certainly, the crisis had many causes. But to call the mortgage models “bad” suggests they were errors of stupidity, not cupidity. As Cathy has noted, the models served their intended purpose admirably. Their purpose was not to estimate default risk but to facilitate underwriting of the junk. More accurate models would have impeded that.

      So, incentives had more to do with it than stupidity. Undoubtedly there were people who questioned the assumptions, but they would have been squelched or weeded out in the interest of profit.


      • mitsu
        December 25, 2012 at 2:50 am

        I agree the incentives were there to prevent the errors from being corrected, but to imply that they were *designed* specifically to be wrong flies in the face of the evidence. There have been many excellent articles written about the Gaussian copula and its misuse, for example:

        http://www.wired.com/techbiz/it/magazine/17-03/wp_quant?currentPage=all

        I’m engaging in a tiresome flame war in another thread about this topic. For some strange reason, many people (it seems to be a particularly American cultural phenomenon, the paranoid style of American politics) seem to think that when something goes wrong, the only plausible explanation is that it was 100% due to intentional acts of wrongdoing by specific bad actors. I think that’s a simplistic, Dudley Do-Right vs. Snidely Whiplash view of the world, which makes it seem like everything would be hunky-dory if only we could get rid of the bad people doing bad things. I think the world is filled with brittle systems that can and will break even if 100% of the people involved were trying their best to do right. Obviously there were and are many people in the financial services industry who are very far from doing their best, and the incentives are structured to reward short-term thinking; but to suggest that they all knew exactly what they were doing is to attribute a level of prescience and control that any careful examination of the details of what went wrong (and actual interviews with the people involved) clearly shows couldn’t have been the case.

        I see no reason to believe that the originator of the Gaussian copula designed it intentionally to crash the world economy while racking up short-term profits. I think there’s ample evidence to suggest that because everyone was making short-term profits, no one wanted to correct the error, and there were clearly some who knew about the error and did things to hedge the risk (as many have pointed out, Goldman Sachs bet the housing market would crash, which is one reason they came out relatively unscathed:

        http://www.mcclatchydc.com/2009/11/01/77791/how-goldman-secretly-bet-on-the.html )

        There were bad incentives, but to suggest the entire thing was *designed* from the outset is far-fetched. A secret that big couldn’t possibly have been kept by everyone. Too many bigwigs lost too much money and prestige for that explanation to hold water.


        • Jonathan
          December 26, 2012 at 9:47 am

          I agree that we shouldn’t dismiss stupidity as an explanation. I don’t mean to suggest it was all a grand conspiracy.

          But, we shouldn’t underestimate the impact, both obvious and subtle, of bad incentives. Bad models that conveniently come to “the right” conclusion don’t get scrutinized. So, people don’t find the dumb assumptions underlying them.

          As far as the Gaussian copula goes, it is not a bad model at all, any more than Black-Scholes is. They are useful tools that can be used appropriately or inappropriately. Same for a computer. Blaming the originator of the Gaussian copula makes about as much sense as blaming Bill Gates.


        • Mitsu
          December 26, 2012 at 10:27 am

          Yes, I completely agree. There’s no doubt the incentive structure consciously or unconsciously suppressed attempts (which some quants did make) to correct the oversimplified use of the Gaussian copula. That’s indisputably the case. It’s a general problem with relying on pure market dynamics for everything: the market is inherently about short-term feedback, and global concerns tend to get short shrift. This is a general feature of markets whether or not the behavior is due to conscious bad faith. That’s why merely punishing bad actors is not sufficient in my view: you have to deal with the fact that the incentive structures are inadequate to stabilize markets even if the actors are not engaged in completely conscious fraud. There are all sorts of levels of bad faith, from just not wanting to rock the short-term-profit boat to not wanting to look at criticisms of the existing models, without the entire enterprise being a completely conscious fraud.


        • Jonathan
          December 26, 2012 at 9:51 am

          mitsu, I agree. I posted my more complete reply in the wrong spot.

          But while I think stupidity did play an important role, I think the persistent use of stupid models was greatly encouraged by the desire to make profits.


  7. Nathanael
    December 23, 2012 at 11:32 pm

    The solution for finance is, bluntly, to ban outright the activities which make finance too complicated for non-specialists to understand.

    This is actually what the US used to do under the New Deal regulations, before they were dismantled.

    This is not possible in every field, but in fields like finance this is very definitely possible; you can just make anything which is too complicated ILLEGAL. In some cases — such as computerized high-frequency trading — it’s hard to make it illegal, but you can use a transaction tax to make it *unprofitable*.
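
    To put a rough number on the transaction-tax point, here is a back-of-the-envelope sketch; the edge, tax, notional, and trade count below are purely hypothetical, chosen only to show the mechanism:

```python
# Hypothetical numbers only: a strategy that earns a thin gross edge per trade
# flips to a loss once a per-trade tax exceeds that edge.
gross_edge_bp = 0.5        # assumed gross profit per round trip, in basis points
tax_bp = 1.0               # assumed transaction tax per round trip, in basis points
notional = 1_000_000       # dollars traded per round trip (illustrative)
trades_per_day = 10_000    # illustrative trade count

net_bp = gross_edge_bp - tax_bp
daily_pnl = net_bp / 10_000 * notional * trades_per_day
print(f"net edge: {net_bp} bp per trade, daily P&L: {daily_pnl:,.0f} dollars")
# With these inputs the daily P&L comes out to -500,000 dollars: the strategy
# is still legal, it just no longer pays.
```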


  8. Richard Van Noorden
    December 30, 2012 at 3:40 am

    Just found this blog from a Twitter comment. I’m a science journalist, and every day I have to make quick judgments about whom to trust in scientific, policy, and financial realms. Three examples: climate modelling, the effects of Fukushima fallout, and analysis of the viability of carbon capture & storage. I’m an expert in hardly any of these areas, but I adopt as skeptical an attitude as possible (always tempered by the time available for reporting; peer-reviewed research is still a good heuristic when time is short!).

    On the question of whom you can trust, I found the thinking of Onora O’Neill quite illuminating: the question should not be ‘whom can you trust’, but ‘what are the signs that someone is trustworthy?’ Analogously, the practical question is not ‘how can we restore trust in x?’, but ‘how do you make it easier to judge trustworthiness?’. She recently did a lay summary for the BBC of this (apologies for not linking to her more academic work): http://www.bbc.co.uk/news/magazine-20627410.

    “Professionals who take the time to listen, who use plain language, who open themselves to check and challenge, who offer others opportunity to judge their honesty, competence and reliability, are more likely to be trusted.”

    Personally I’ve found that a good summary of my instincts when reporting.

    This does not, perhaps, help when it comes to finding people to trust in ‘yucky’ fields like economics and some areas of medical research, which Cathy mentions above. But it does illuminate quite how yucky those fields are – I instinctively don’t trust anyone in them!

    Still, to the question that Jordan posed: ‘why do I trust what Cathy says about finance?’ — my answer would not be ‘because she’s an expert’ so much as ‘because her attitude encourages trustworthiness’.

