
Economist versus quant

December 28, 2011

There’s an uneasy relationship between economists and quants. Part of this stems from the fact that each discounts what the other is really good at.

Namely, quants are good at modeling, whereas economists generally are not (I’m sure there are exceptions to this rule, so my apologies to those economists who are excellent modelers): they either oversimplify to the point of uselessness, or they add terms to their models until everything fits, by which point the models could predict anything. Their worst flaw as data scientists, however, is the confidence they have in, and project about, their overfit models. Please see this post for examples of that overconfidence.
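The add-terms-until-everything-fits failure mode is easy to reproduce. Here is a minimal sketch with made-up data, using polynomial fitting as a stand-in for an economic model: adding terms can only shrink the in-sample error, while error on fresh data tells a different story.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: a noisy linear relationship between x and y.
x = np.linspace(0, 1, 30)
y = 2 * x + rng.normal(scale=0.3, size=x.size)

# Fresh data from the same process, held out from fitting.
x_new = np.linspace(0, 1, 100)
y_new = 2 * x_new + rng.normal(scale=0.3, size=x_new.size)

def fit_errors(degree):
    """Mean squared error on the fitting data and on fresh data."""
    coeffs = np.polyfit(x, y, degree)
    train = float(np.mean((np.polyval(coeffs, x) - y) ** 2))
    fresh = float(np.mean((np.polyval(coeffs, x_new) - y_new) ** 2))
    return train, fresh

# Adding terms never hurts the in-sample fit...
for degree in (1, 3, 9):
    train, fresh = fit_errors(degree)
    print(f"degree {degree}: in-sample MSE {train:.3f}, fresh MSE {fresh:.3f}")
# ...but the in-sample fit says nothing about how the model does on new data.
```

Because the fits are nested, the in-sample error is guaranteed to be non-increasing in the degree, which is exactly why in-sample fit inspires misplaced confidence.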

On the other hand, economists are good at big-picture thinking, and are really really good at politics and influence, whereas most quants are incapable of those things, partly because quants are hyper aware of what they don’t know (which makes them good modelers), and partly because they are huge nerds (apologies to those quants who have perspective and can schmooze).

Economists run the Fed, they suggest policy to politicians, and generally speaking nobody else has a plan so they get heard. The sideline show of the two different schools of mainstream economics constantly at war with each other doesn’t lend credence to their profession (in fact I consider it a false dichotomy altogether) but again, who else has the balls and the influence to make a political suggestion? Not quants. They basically wait for the system to be set up and then figure out how to profit.

I’m not suggesting that they team up so that economists can teach quants how to influence people more. That would be really scary. However, it would be nice to team up so that the underlying economic model is either reasonably adjusted to the data, or discarded, and where the confidence of the model’s predictions is better known.

To that end, Cosma Shalizi is already hard at work.

Generally speaking, economic models are ripe for an overhaul. Let’s get open source modeling set up, there’s no time to lose. For example, in the name of opening up the Fed, I’d love to see their unemployment prediction model be released to the public, along with the data used to train it, and along with a metric of success that we can use to compare it to other unemployment models.
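A sketch of what that metric of success could look like, assuming nothing about the Fed’s actual model (all numbers below are made up for illustration): publish the realized series alongside each model’s forecasts and score them with something as simple as root-mean-square error.

```python
import numpy as np

# Hypothetical: realized unemployment rates and two competing models'
# published forecasts for the same periods (illustrative numbers only).
actual = np.array([9.0, 9.1, 9.0, 8.9, 8.7, 8.6])
model_a = np.array([8.8, 9.0, 9.2, 9.1, 8.8, 8.5])
model_b = np.array([9.3, 9.4, 9.1, 8.8, 8.9, 8.9])

def rmse(forecast, actual):
    """Root-mean-square error: one simple, public metric of success."""
    return float(np.sqrt(np.mean((forecast - actual) ** 2)))

print("model A RMSE:", rmse(model_a, actual))
print("model B RMSE:", rmse(model_b, actual))
```

With the model, the training data, and a metric like this all public, anyone could rerun the comparison and submit a competing model.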

  1. Abe
    December 28, 2011 at 9:19 am

    I’m intrigued by the idea of open source modeling. What exactly would that look like, both technically and from a community perspective?


  2. December 28, 2011 at 10:19 am

    Actually, they need to team up so that economists specify the model forms while letting the quants calibrate the models. Ideally the loop is closed by the quants reporting back correlations in the model error terms, so that the economists can see whether there might be a behavioural explanation which can be used to refine the model forms.

    When the economists calibrate the models, as you observe, they either over-fit the models so that they are not predictive, or under-fit them, so that they yield too little signal to overcome the noise.

    When quants specify the model forms they tend to mistake correlation for causation. So although all models have limited applicability, there is no way of validating when the quant-specified models will fail. (Oddly enough when the economists build models without the quants’ help, you get a similar effect because although the model’s limitations will be clear, there won’t be a clear indication of what symptoms to expect when the model is failing).

    Marx taught me that you cannot leave the politics out of political economy: the way that society organises itself (politics) and the way it distributes resources (economics) are two sides of the same coin (much to the chagrin of both economists and politicians).


  3. Sean Crowell
    December 28, 2011 at 10:43 am

    I think what you’re describing here is the continual tug and pull that exists between theoreticians, modelers, and observationalists. There’s a whole community clamoring for open source science to become the new model (which I totally agree with), where data, code and methods are all published along with scholarly papers that tend to emphasize the fruits of the labor (rather than the labor itself).

    In mathematics we always have to “show our work”, and so this business of being disconnected from the process of getting results is less prominent. For real world “dirty” science, it takes teams of people to do things, like friends of mine that spend several grad school years cleaning radar data for analysis by more senior folks, or the large community that uses a community model like the Weather Research and Forecasting model, often without knowing the nuts and bolts on the inside.

    I really like your proposal, though, and I hope that people listen and stop trusting their pure intellects so much.


  4. December 28, 2011 at 11:06 am

    I meant to respond to your “crappy modeling” post…but I got busy at work. Glad to see you have returned to the topic with a new angle. Sure, economists need to improve their models, but the quant models caused ‘a few’ problems in the recent financial crisis too. I would only add three things I have learned as an economist who started forecasting consumer expenditures in the fall of 2007:

    1) There is a difference between real-time (or now-time) forecasting and estimating models when all the data have been “cooked” … aggregate data take some time (often years) to collect, and that means that the current signal on the state of the economy can be flawed. Path dependence can be important, so if we don’t know where we are exactly, it is very tricky to know where we are headed. This adds to the “hindsight is 20/20” quip.

    2) Models are a tool for storytelling. Economists start with theories about how the world works. This tells them what variables should be important in their models. Often the desired variable is not measured well, so they need to find a proxy variable. Data is used to test their theories, and a model which passes the sniff test over history becomes a good candidate for forecasting. Yes, economists keep models simple, but parsimony is actually considered a virtue of models. I need to know why statistical relationships exist…because they might change in periods of great economic distress.

    3) One model is often not enough. All models have their flaws, so instead of looking for one, complicated model that performs well in most settings, it is often desirable to have a group of models. Each will have its flaws, but each will have settings in which it excels. A judgmental forecast of the most likely (or modal) forecast could rely on more than one model plus some intuition/experience of the forecaster. And then it’s important to think about downside/upside risks to that forecast. Again those might come from other models.

    None of this is to suggest that economists cannot learn from quants and vice versa. Clearly we all have a lot to learn.
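    The multi-model point above can be sketched in a few lines (the model names and numbers here are hypothetical): combine several point forecasts with equal weights, and use their disagreement as a crude gauge of the downside/upside risks.

    ```python
    from statistics import mean

    # Hypothetical point forecasts (percent growth) from three models.
    forecasts = {
        "time_series_model": 1.8,
        "survey_based_model": 2.4,
        "leading_indicator_model": 2.1,
    }

    # Equal weights here; the weights could instead reflect each model's
    # historical accuracy or the forecaster's judgment.
    combined = mean(forecasts.values())
    spread = max(forecasts.values()) - min(forecasts.values())

    print(f"combined forecast: {combined:.2f}%")
    print(f"model disagreement (crude risk gauge): {spread:.2f} points")
    ```

    A judgmental forecast would then adjust the combined number, with the spread flagging settings where the models disagree and experience has to carry more weight.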


  5. Annie
    December 28, 2011 at 11:44 am

    As an economist, by and large, I think you are right. You are making generalizations about economists, though, and I am not sure exactly where you are getting your picture of economists from (not from your relatives, who are all highly aware of the shortcomings of economics). Economists are a highly heterogeneous lot, ranging from big-picture macro theorists to geeky time-series econometricians to my advisor, Jon Gruber, who is a great thinker and has a great political sense (and has done more than any academic economist I can think of to actually change the world by helping to design Romneycare and Obamacare) but to whom I sometimes had to explain basic statistics. And then there’s my mom and me, who both feel that we are temperamentally data scientists in a field where data science is not that valued. I’ve dealt with that by moving into a subfield (health care) and a sector (government) where theory is not so important but the ability to work with data is. But I am struggling a bit because my training did not emphasize data science AT ALL even though I think it is quite important. I need to come up to New York soon so we can talk data science (or you are obviously welcome to come down to DC).


  6. December 29, 2011 at 5:57 am

    I’ve been working on this for a few weeks (interrupted by the holidays and critical business projects). I’ve got a three-part system: one part for visualization and interaction, another for model prototyping, and a third for big data. My goal for the system was to be able to explain basic portfolio management to my (uninterested) uncle in the visualization part, rapidly prototype in the prototyping part, and leverage my experience building with big data (terabytes) in the third.


    • December 29, 2011 at 6:01 am

      Wait, what? I feel like you forgot to write the first half of this comment.


      • December 30, 2011 at 5:30 pm

        I think you’re replying to me. I was responding to your call for: “Let’s get open source modeling set up, there’s no time to lose.”

        I’ve been thinking about open source modeling. Who the customers are, what they need, and how to give it to them. I’m happy to work with you and others on this.


  7. Nadia Hassan
    December 29, 2011 at 12:36 pm

    In some scenarios, Andrew Gelman has been pretty critical of how layers of psychology get added to some economists’ models.

    http://andrewgelman.com/2011/09/economists-dont-think-like-accountants-but-maybe-they-should/

    http://andrewgelman.com/2011/12/timing-is-everything-2/

