
Global move to austerity based on mistake in Excel

April 17, 2013

As Rortybomb reported yesterday on the Roosevelt Institute blog (hat tip Adam Obeng), a recent paper written by Thomas Herndon, Michael Ash, and Robert Pollin looked into replicating the results of an economics paper originally written by Carmen Reinhart and Kenneth Rogoff entitled Growth in a Time of Debt.

The original Reinhart and Rogoff paper had concluded that public debt loads greater than 90 percent of GDP consistently reduce GDP growth, a “fact” which has been widely used. However, the more recent paper finds problems. Here’s the abstract:

Herndon, Ash and Pollin replicate Reinhart and Rogoff and find that coding errors, selective exclusion of available data, and unconventional weighting of summary statistics lead to serious errors that inaccurately represent the relationship between public debt and GDP growth among 20 advanced economies in the post-war period. They find that when properly calculated, the average real GDP growth rate for countries carrying a public-debt-to-GDP ratio of over 90 percent is actually 2.2 percent, not -0.1 percent as published in Reinhart and Rogoff. That is, contrary to RR, average GDP growth at public debt/GDP ratios over 90 percent is not dramatically different than when debt/GDP ratios are lower.

The authors also show how the relationship between public debt and GDP growth varies significantly by time period and country. Overall, the evidence we review contradicts Reinhart and Rogoff’s claim to have identified an important stylized fact, that public debt loads greater than 90 percent of GDP consistently reduce GDP growth.

A few comments.

1) We should always have the data and code for published results.

Herndon, Ash and Pollin managed to replicate the results because they personally requested the Excel spreadsheets from Reinhart and Rogoff. Given how politically useful and important this result has been (see Rortybomb’s explanation of this), it’s kind of a miracle that they released the spreadsheet. Indeed, that’s the best part of this story from a scientific viewpoint.

2) The data and code should be open source.

One cool thing is that now you can actually download the data – there’s a link at the bottom of this page. I did this and was happy to find a bunch of csv files and some (open source) R code which presumably recovers the Excel spreadsheet mistakes. I also found some .dta files, which appear to be Stata’s proprietary file format, which is annoying, but then I googled and it seems you can use R to turn .dta files into csv files. It’s still weird that they wrote code in R but saved files in Stata’s format.
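If you’d rather not install Stata, the conversion is a one-liner in several open tools. Here’s a minimal sketch in Python with pandas (one alternative to the R route; `read_stata` and `to_csv` are real pandas functions, but the tiny data frame and the filenames are invented just so the snippet is self-contained):

```python
# A hedged sketch: converting Stata's proprietary .dta format into an open
# csv file using pandas. The toy data frame below is invented so the example
# runs on its own; in practice you'd point read_stata at the downloaded files.
import pandas as pd

# Write a tiny invented table out as .dta, standing in for a downloaded file.
pd.DataFrame({"country": ["A", "B"], "growth": [2.2, -0.1]}).to_stata(
    "toy.dta", write_index=False
)

df = pd.read_stata("toy.dta")      # read the proprietary Stata format
df.to_csv("toy.csv", index=False)  # save it back out as a plain csv file
```

After this runs, `toy.csv` is an ordinary text file any tool can open, which is the whole point of keeping data in an open format.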

3) These mistakes are easy to make and they’re mostly not considered mistakes.

Let’s talk about the “mistakes” the authors found. First, Reinhart and Rogoff excluded certain time periods for certain countries, specifically right after World War II. Second, they chose certain “non-standard” weightings for the various countries they considered. Finally, they accidentally excluded certain rows from their calculation.

Only that last one is considered a mistake by modelers. The others are modeling choices, and they happen all the time. Indeed it’s impossible not to make such choices. Who’s to say that you have to use standard country weightings? Why? How much data do you actually need to consider? Why?
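To make the weighting point concrete, here’s a toy sketch. The numbers are invented (loosely echoing the critique’s contrast between long and one-year high-debt episodes), but they show how the weighting choice alone can flip the sign of “average growth under high debt”:

```python
# Toy illustration (invented numbers, not the actual RR data): two countries
# with high-debt episodes of very different lengths.
growth = {
    "A": [2.5] * 19,   # 19 high-debt years averaging 2.5% growth
    "B": [-7.9],       # a single high-debt year at -7.9%
}

# Choice 1: weight each country equally, i.e. average the country averages.
country_means = [sum(v) / len(v) for v in growth.values()]
equal_weight = sum(country_means) / len(country_means)

# Choice 2: weight by country-year, i.e. pool all observations equally.
all_years = [g for v in growth.values() for g in v]
pooled = sum(all_years) / len(all_years)

print(round(equal_weight, 2))  # -2.7: the single bad year dominates
print(round(pooled, 2))        # 1.98: each year counts once
```

Neither choice is "the" correct one; the point is that the answer swings from negative to positive depending on which you pick, so the choice has to be stated and defended.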

[Aside: I’m sure there are proprietary trading models running right now in hedge funds that anticipate how other people weight countries in standard ways and bet accordingly. In that sense, using standard weightings might be a stupid thing to do. But in any case validating a weighting scheme is extremely difficult. In the end you’re trying to decide how much various countries matter in a certain light, and the answer is often that your country matters the most to you.]

4) We need to actually consider other modeling possibilities.

It’s not a surprise, to economists anyway, that after you include more post-WWII years of data, which we all know to be high debt and high growth years worldwide, you get a substantively different answer. Excluding these data points is just as much a political decision as a modeling decision.

In the end the only reasonable way to proceed is to describe your choices, and your reasoning, and the result, but also consider other “reasonable” choices and report the results there too. And if you don’t like the answer, or don’t want to do the work, at the very least you need to provide your code and data and let other people check how your result changes with different “reasonable” choices.
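A minimal sketch of what that might look like in practice: enumerate the “reasonable” choices and report the result under every combination, rather than publishing a single number. (The data and the two choice dimensions here are invented for illustration.)

```python
# A hedged sketch of a tiny sensitivity analysis: report the headline number
# under every combination of modeling choices. Data and choices are invented.
from itertools import product
from statistics import mean

# Hypothetical per-country growth series during high-debt years.
data = {
    "A": [2.1, 2.4, 1.9, 2.8],
    "B": [-0.5, 0.7],
    "C": [3.0, 2.2, 2.6],
}

def avg_growth(series_by_country, weighting, min_years):
    """Average growth under one combination of 'reasonable' choices."""
    # Inclusion choice: drop countries with too few high-debt years.
    kept = {c: v for c, v in series_by_country.items() if len(v) >= min_years}
    if weighting == "per_country":
        return mean(mean(v) for v in kept.values())
    return mean(g for v in kept.values() for g in v)  # per country-year

# Report all four combinations, not just a favorite.
for weighting, min_years in product(["per_country", "per_year"], [1, 3]):
    print(weighting, min_years, round(avg_growth(data, weighting, min_years), 2))
```

If the four numbers agree, the result is robust to these choices; if they don’t, that spread is itself the honest finding.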

Once the community of economists (and other data-centric fields) starts doing this, we will all realize that our so-called “objective results” utterly depend on such modeling decisions, and are about as variable as our own opinions.

5) And this is an easy model.

Think about how many modeling decisions and errors are in more complicated models!

Categories: modeling, news
  1. April 17, 2013 at 6:52 am

    “… selective exclusion of available data, and unconventional weighting of summary statistics …”

Why is everyone focused on coding errors? Yeah, that’s pretty sad. But the points about cherry-picking data and using questionable weighting of summary statistics really stink. This looks like a paper written to prove a desired result. But what do people expect from business school researchers?


    • April 17, 2013 at 7:10 am

      Actually you have to choose what data to use and you have to choose weights.


      • April 17, 2013 at 8:53 am

        I understand that, but the critiques make a strong case that the choices were designed for a particular effect. When you “choose” to ignore the growth data that would undermine your conclusion, and you decide to use “unconventional” weightings, then things really stink.

        Weren’t you recently writing something about lying with models? 😉


        • April 17, 2013 at 8:54 am

Yes, I agree there’s evidence that they cherry-picked to some degree. On the other hand, the fact that they did give out their spreadsheet indicates they didn’t intentionally exclude countries.


        • April 17, 2013 at 8:57 am

They gave the spreadsheet out after years of ignoring requests. Why? As Yves Smith pointed out this morning, the damage is now done–the policymakers will not likely change their plans after being so committed for so long (and for them and their benefactors, so profitably).

          So far as I can tell, it’s Mission Accomplished for R&R.


        • April 17, 2013 at 8:59 am

          Great point.


        • April 17, 2013 at 10:12 am

          Thanks. And BTW, I do agree with the points you raised in your original post.

But I’d go a bit farther too: We need to reconsider how we use social science research (including economics) and the opinions of social scientists to make public policy. Given how fragmented these fields are, and their demonstrated inability to produce robust models of the consequences of human actions, why do we rely so heavily on people like Rogoff and Reinhart? (Or for that matter Stiglitz and Krugman; I’m politically agnostic on this point.) Right now, economics looks more like a brawl among competing superstitions than a rational discourse among scientists.

As Ian Shapiro documents so well in his book The Flight from Reality in the Human Sciences, the social sciences have become mired in parochial feuds and have rejected empirical evidence almost like the ancient Scholastics. Under these circumstances, how can the public be served by the advisors, aides, pundits, and others who now completely dominate our public discourse? Perhaps we’d be better off with politicians who paid attention to the real world and not to the ivory towers of academia and their think-tank outposts.

The historian Bruce Kuklick wrote an excellent book, Blind Oracles, reviewing the history of the post-war rise of political scientists in shaping US foreign policy, and argued that putting such persons in sensitive policy positions is a basic error. Instead, Kuklick argues, they should remain outside the realm of politics where they can best use their talents to comment on, rather than dictate, policy. As Kuklick concludes:

The [defence intellectuals] believed that their know-how could turn the state system in a better direction, and their learning leverage interests into the empire of justice. But politics was always the more powerful partner, and dragged the erudite into the dominion of partisanship, maneuver, and advantage. Politicians outranked academics; politics seduced scholarship.


        • April 17, 2013 at 10:17 am

I agree. What I think you’re pointing to is that, instead of discussing our weighting scheme, say, and describing which data points we are including and why, we can once and for all admit that we can make the model say whatever we want, so why bother.

          I mean, we should check that the above is true, but it probably is.

          And then we can stop with the weighting schemes, and we can admit that mathematical models aren’t helping, and that we need to think about what’s the right thing to do instead, irrespective of impressive and opaque models.


        • April 17, 2013 at 10:20 am

          Yes! 🙂


        • April 17, 2013 at 10:31 am

          Oh! And look at Krugman’s comments here. Even Krugman is asking ugly questions about someone he was preparing to defend.


        • Rama
          April 17, 2013 at 9:04 am

Note that Carmen R. is employed by the Peterson Institute, whose goal is to get rid of social security, privatize it and siphon off profits through Wall Street shenanigans. The Peterson Institute is also banging their drums on debt all the time – there’s a conflict of interest here.


    • Leon Kautsky
      April 17, 2013 at 8:28 am

      Everyone is focused on the coding errors because they’re objectively errors, justifiable in no regime AND because those coding errors are enough to flip the sign of the result, which is huge.

      I’m not sure if the “business school researchers” is supposed to be derogatory or what, but it’s false at any rate. Rogoff is at the public policy school and Reinhart works in international finance.


      • April 17, 2013 at 9:13 am

I had recalled Rogoff was at HBS; just my misrecollection. I don’t think that changes the substance of my comment. As others pointed out, Reinhart is connected to the Peterson Institute, and, as someone else pointed out on Naked Capitalism, her husband is at AEI. So she has questionable objectivity. And even Rogoff’s own connections still make him suspect, even without any direct HBS affiliation.

        As for the coding errors, the real problem was with the selection of facts and weightings–all of which were necessary to achieve R&R’s results. This plus their refusal to offer their data for so long makes their ethics very suspect.


  2. isobel
    April 17, 2013 at 7:12 am

Isn’t it fundamentally trivial? I mean, the General Theory, anyone? Not so long ago a rather brilliant economist already showed us that austerity is the wrong idea. This Rogoff thing feels like, there was a horrendous murderer, we knew he was a murderer, and now we find out he picks his nose and sucks the finger. Yeah it is gross, but who cares? He is a murderer, for Christ’s sake.


    • April 17, 2013 at 8:32 am

      The problem is that *The General Theory* is generally wrong or obscure. So highly-cited economists today don’t refer to it for macro research.

Austerity during *normal times* can be neutral or positive, but becomes devastating during a recession or slow-growth period.

The R-R result suggested that *in general* austerity might be the lesser of two evils during a recession or slow-growth period because it would prevent debt levels from crossing a threshold at which they would constitute a long-run drag on growth. With the R-R result vanishing, there is now no intellectually coherent justification for austerity at this moment.


  3. April 17, 2013 at 7:38 am

Not a fan of the post’s title, but you make very good points throughout. Having done a fair bit of empirical research in economics…even some using macro time series…and done even more vetting of other people’s work for policy discussions, these problems are not uncommon. Debunking one paper does not overturn all the conclusions on this question (other authors, and other works by these authors, still support them), but it should force a discussion in economics about research practices. I don’t really think it will. The most likely outcome is that economists will be even less likely to respond to requests for code or data, or will rely even more on proprietary data. That’s what happens when you punish people for being open (even with a delay and without admitting questionable analytical choices) and if you’re a discipline that puts near zero weight on replication studies. Sad, but not surprising.


    • April 17, 2013 at 8:24 am

      Say why you don’t like the title – do you think it’s incorrect? Or irrelevant? Or something else?

      Also please read this if you think it’s incorrect.



      • April 17, 2013 at 8:52 am

        The hyperbolic title calls to mind:

        The ideas of economists and political philosophers, both when they are right and when they are wrong, are more powerful than is commonly understood. Indeed the world is ruled by little else. Practical men, who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist. Madmen in authority, who hear voices in the air, are distilling their frenzy from some academic scribbler of a few years back. I am sure that the power of vested interests is vastly exaggerated compared with the gradual encroachment of ideas. – JMK in 1935


      • beewhy2012
        April 17, 2013 at 10:27 am

        For an average Joe like me, this title deserves to be the frontpage headline of the NY Times tomorrow and the rest of the world’s papers the day after. How much human suffering has been caused by this stark case of opacity over transparency? Once again we have striking evidence that the “dismal science” can be disastrous as well as venal.


      • April 17, 2013 at 4:02 pm

        The Excel coding error was not as decisive as your title suggests. Maybe it is indicative of sloppy, overreaching empirical work but I worry about fighting fire with fire. I liked your points in the post a lot.


    • April 17, 2013 at 8:43 am

      Actually, the R-R result was huge because it was the only big macro empirical study in favor of austerity and because, like many things, it was published at the right time. With the exception of fresh World Bank data and unanonymizable stuff, I think all code/data is going to be open. It’s the only justifiable practice after failures like this. Let’s not forget that these aren’t the first econometrics superstars to make mistakes because of coding errors. Offhand I can think of: Lott’s shitty studies, Levitt’s first abortion results, all of the studies in favor of CAPM in the 1960s and the “low-price effect” in the 1980s.


  4. Min
    April 17, 2013 at 11:04 am

    “Only that last one is considered a mistake by modelers. The others are modeling choices, and they happen all the time. Indeed it’s impossible not to make such choices. Who’s to say that you have to use standard country weightings? Why? How much data do you actually need to consider? Why?”

    That’s why you say what you did, and why, in your original paper.


    • April 17, 2013 at 11:05 am

      Yes. There’s absolutely no excuse not to spell these choices out explicitly. I haven’t read the original R&R paper to see if they did this.


  5. Josh
    April 17, 2013 at 11:17 am

With due respect for Cathy, Leon and Keynes, and not to defend R&R, the influence of the paper is overblown. The austerity campaign had been going on for years before the paper was published. It was a moral imperative — be fiscally responsible.

    I agree with the comments re: modeling, openness, etc.

    The Peterson connection though has gotten me worried about Simon Johnson, who I generally have great respect for.


    • NY
      April 18, 2013 at 12:29 am

      Hm, I’m not so sure about that. Their results are quoted a lot – both in academia and in industry, at least from my own experience at school and at work. This includes some very influential economists under whom I had the chance to take classes, etc. Even people who don’t agree with austerity would quote R&R.


    • pjm
      April 18, 2013 at 2:23 am

I think you are right, though besides the problem of bias being introduced and propagated by lobbying aimed at legislators, there is a bias within academic economics. No other social science has so many people in the profession working in industry, and there is lots of right-wing foundation money sloshing around endowing chairs, etc. So the fact that R&R was being quoted a lot within econ departments just highlights the problem of intra-academic bias. There were sound theoretical reasons for doubting R&R’s results even before these errors turned up. The fact that this research received such a warm reception despite the highly plausible (and obvious) counter-argument that R&R were reversing causation seems to confirm your point, Josh.


      • beewhy2012
        April 18, 2013 at 1:14 pm

        I’d say Krugman today clearly states the case on the responsibility of those who aggressively waved the Reinhart-Rogoff flag to support their political prejudices:
        “This is deciding what you want to believe, finding someone who tells you what you want to hear, and pretending that there are no other voices. It’s deeply irresponsible — and you can’t blame Reinhart-Rogoff for that mistake.”


        • April 18, 2013 at 1:20 pm

Of course R-R share responsibility, since they, like many in “frackademia”, make it clear they can and will prove anything for anyone. I think the evidence is clear that this paper was written to “prove” a specific point for a specific political purpose and was in no way unbiased research. The pundits and politicians understand this perfectly well; they and the academics all work together on these projects, and they all–quietly–understand the terms and implications of this work.

Krugman is being willfully blind if he doesn’t see how all of these people cooperate to form an ecosystem of propaganda. Sadly, he just can’t bring himself to speak the truth about the moral and intellectual bankruptcy of so many of his colleagues.


  6. April 17, 2013 at 12:39 pm

    In the late 1990’s there was a widely used Pentium chip that made costly mistakes due to a flaw sporadically setting the decimal place. At the same time Excel had badly inaccurate call-up expressions for simple trig functions as well as complex integrals (since corrected with the help of Mathematica). If the code and data of the spectacular 1998 crash and burn of Long Term Capital Management’s computer model (and Greenspan’s first use of the term TBTF) had been available for open source examination, then the truly Black Swan combination of bad chip and bad code along with the bad math of Black-Scholes assumptions (that are demonstrably fallacious) could have been a significant lesson in humility that might have prevented rather than predicted 2008’s crash and burn.


    • April 17, 2013 at 12:57 pm

You’re certainly correct about the Pentium chip and the Excel problems in the 90s as part of the parable of open source. However, there are over 100 variations on B-S based on different assumptions from “open-source” academic publishing.

      I’m pretty sure the 2008 crash had little to do with either the busted Pentium or the broken complex integrals in Excel’s 1990s versions, except in a “butterfly effect” type way.


    • April 17, 2013 at 6:01 pm

Have you any idea what you are talking about? The Pentium bug was indeed real, but it is practically irrelevant to numerical simulation. The bug only affected division; at worst it can affect the fourth [decimal] significant digit of the quotient, and much more “commonly” the 9th or 10th; relative to the size of the operands the errors are much smaller (the worst pair found caused a relative error of size something like 10^{-14}). I don’t believe that any financial model ever considered could have its accuracy be seriously affected by errors in the 9th significant digit of intermediate results.

      [Also, the error can only occur for extremely rare operand pairs]

      In fact, the bug was discovered by a number theorist doing prime number searches, since changing a huge integer even by a tiny amount (1) can radically affect its divisibility properties.


      • Leon Kautsky
        April 19, 2013 at 11:03 pm

        Well, I know that in the 1980s a couple quant firms lost a lot of money because their backtesting/optimization methods had some numerically unstable components. But it was a rare thing, had nothing to do with the Pentium chips, or black-scholes. I don’t think the guy knows what he is talking about.


  7. Chris Wroth
    April 17, 2013 at 9:07 pm

Hi Cathy ~ All this reminds me of David X Li’s Gaussian copula function, used by everyone on Wall Street to assess the risk of bond tranches. The math was found wrong, but not until after it turned junk bonds into “Triple A” rated CDOs. Seems to me that the Pete Petersons of this world always find the math they need to justify screwing the rest of us. ~ Chris


  8. Heather
    April 18, 2013 at 9:57 am

    Re:The data and code should be open source.

    Agree with the basic sentiment but I can see some practical issues for holding this as an absolute standard. One issue with openly providing data is that some data is proprietary, like medical records or loan accounting systems. Other data is vended, but that should not stand in the way of transparency about how the data is used. For organizations that sell models, putting out the code for their published results could aid people in building their own similar models.


    • April 18, 2013 at 9:59 am

Agreed this is an issue, but I believe we can get around it, as I’ve explained in this post: https://mathbabe.org/2012/03/01/open-models-part-2/

      Specifically we still have the code open and we treat the data as a “black box”.



      • April 18, 2013 at 10:14 am

        Any paper used to support a major public policy should be open to all. If the authors want to keep any data or model details secret, then we shouldn’t use the paper as evidence to support the policy.

        Sadly, as I alluded to above, many academics like the illusion of being influential in policy circles and will prostitute themselves in order to be players. And politicians love to make appeals to authority to avoid having to explain their legislation. So, I doubt we’ve seen the end of R&R-type exercises.


  9. April 18, 2013 at 2:39 pm

    If you’re still following this, check out Yves Smith’s scathing comments.

    The upshot:

    Bottom line: The foundation of the entire global push for austerity and debt reduction in the last several years has been based on a screwup in an Excel spreadsheet and poorly constructed data.

    Reinhart and Rogoff have done a great deal of damage to the world. As Paul Krugman has observed, their replies to their critics have thus far only compounded the confusion. They need to come clean, stop talking like their mistakes are minor, and own up to the enormity of their errors.

    My take–this is far beyond sloppy research. This was deliberate. No one with the credentials of Reinhart and Rogoff can be that stupid. It’s time we face the reality that we can’t trust much of academic research.


  10. Josh
    April 18, 2013 at 3:33 pm

I am not saying that R&R wasn’t influential; it was, and I am sure it will continue to be cited widely in support of austerity.

    My point is that the causality is in the wrong direction. R&R didn’t cause the austerity campaign, the austerity campaign caused R&R (and then used it for support). That is, given that there was an active campaign to cut government spending, it is hardly surprising that economists would write papers justifying it and that those papers would be prominently cited.

    As evidence for my position, I would note that calls for cuts in government spending and debt go back decades before the R&R paper.


    • April 19, 2013 at 11:05 am

      But economists like to present themselves as scientists, not propagandists, which is what you describe. Now, I agree that economics is what Richard Feynman called “cargo cult science”, which adopts the outward appearances of scientific research while failing to actually practice the scientific method.

I’m more skeptical here. Both R&R had strong ties to the austerity crowd, and the key errors, including the Excel “goof”, all heavily skewed their description, arguments, and conclusion. I just find it too fantastical that neither they nor their students ever stopped to say, “Hey, wait a minute! This result can’t really be correct in view of the data!” No, this looks like what I call “laissez faire Lysenkoism”–the publication of fraudulent research to support laissez faire policies.


  11. RichP
    April 18, 2013 at 6:40 pm

The data set was not that large, and the error was. If the published error was accidental, R&R were not that familiar with the data, couldn’t deal with numbers except as plug-and-chug, or both. If so, why do they have prominent jobs in the field?

    Excel is a lousy programming language and environment. You need a lot of discipline to do much of anything correctly.

    Most major newspapers are ignoring this. Oddly enough, mlb.com, the official site of Major League Baseball, featured a link to Nate Silver’s article yesterday:



rp1588
      April 19, 2013 at 3:20 am

      Correction: Nate Silver didn’t write the article. His picture leads it, as this was a baseball stats article and Nate Silver is now the public stat person who emerged from the baseball stat community.


  12. April 19, 2013 at 10:58 am

    More interesting comments from Krugman on R&R, including recent work proving that causation is opposite what R&R claimed. And Krugman also notes the troubling stubbornness of R&R to come clean and admit their failure.


    • April 19, 2013 at 11:09 pm

      It has been obvious since at least the 1950s that causation is *at least* opposite R&R.

      The question is if causation is cumulative/bidirectional.

      Recent updates on R-R suggest that no, on average, increases in debt levels do not exacerbate the growth problems (but look at the STD. DEV in the forward looking study! even though the avg. is that 100% debt/gdp => -0.1% gdp growth per year, there do exist “forward looking” numbers that are much, much lower than that).


  13. krisv
    April 19, 2013 at 2:22 pm

    I think what is more egregious about R&R is not the Excel error, but that they appear to have cherry-picked their data and their model weightings to support the conclusions they arrive at.

    Most people are prone to confirmation bias, but one expects people as prominent and successful as R&R to be aware of the possibility of such bias, and be extra careful as a consequence. Instead they published sloppy work, pretended that the apparent relation between debt and GDP was just a correlation in public, but pushed the idea that debt causes negative GDP growth as a conclusion of their work behind closed doors and in congress. This is plain academic dishonesty and fraud.


Comments are closed.