
Should we have a ratings agency for scientific theories?

April 13, 2012

My friend Peter Woit recently discussed on his blog the idea of establishing a ratings agency for physics. From his blog:

In this week’s Nature, Abraham Loeb, the chair of the Harvard astronomy department, has a column proposing the creation of a web-site that would act as a sort of “ratings agency”, implementing some mathematical model that would measure the health of various subfields of physics. This would provide young scientists with more objective information about what subfields are doing well and worth getting involved with, as opposed to those which are lingering on despite a lack of progress.

Abraham Loeb was proposing to describe the field of String Theory as a perfect example of a bubble. And it’s absolutely true that String Theory has provided finance with tons of brilliant young orphans who either got disillusioned with the field or simply couldn’t get a job after finishing a Ph.D. or a post-doc. It’s an extreme example of a mismatch between supply and demand.

Would a ratings agency for scientific theories help? I don’t think so.

The most basic reason, as Peter points out, is that it’s hard to evaluate scientific theories while they are unfolding. There are two underlying causes: first, people in a field are too invested to admit things aren’t working out, and second, by the nature of scientific research, a theory can fail to pan out for years and then eventually succeed. It’s not clear when to give up on a theory!

Ignoring those problems, imagine a “mathematical model” which tries to gauge the success of a field. What would the elemental quantities be that would signify success? Would it count the number of proven theorems? Crappy theorems are easy to prove. Would it count the number of successful experiments? We could always take a successful experiment and change it ever so slightly to get another success. I can’t think of a quantitative way to measure a field that isn’t open to enormous manipulation (which would only happen if people actually cared about the ratings agency’s rating).
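To make the manipulation concrete, here is a toy sketch of what such a counting model might look like and how trivially it can be gamed. Everything here, including the function name and the weights, is a hypothetical illustration, not a real model anyone has proposed:

```python
# Toy sketch of a "field health" score built by counting outputs.
# The weights and names are hypothetical illustrations, not a real model.

def field_score(theorems_proved: int, successful_experiments: int) -> float:
    """Naive health metric: count outputs with arbitrary weights."""
    return 1.0 * theorems_proved + 2.0 * successful_experiments

# A field doing honest work:
honest = field_score(theorems_proved=10, successful_experiments=5)

# The same field gaming the metric: prove 50 trivial corollaries, and rerun
# one successful experiment with tiny parameter tweaks, counting each rerun
# as a fresh success.
gamed = field_score(theorems_proved=10 + 50, successful_experiments=5 + 20)

assert gamed > honest  # the score rises with no real scientific progress
```

Any count-based score has this Goodhart-style flaw: once the count becomes the target, producing countable units becomes cheaper than producing progress.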

Of course the same might be said about financial ratings. That raises the question: why are ratings agencies useful at all?

In finance we have lots of people buying very similar products with very similar contracts. Sometimes these are even sold on exchanges and carry with them the exact same risk profiles. In such a situation it makes sense to assign someone to look into the underlying risks and report back to the community on how risky a product is.

I would claim that the situation is very different in science or math. People enter a field for all sorts of reasons, with all sorts of goals and situations. String Theory is an extreme case where it could be argued that it got such spin that a whole generation of physics students got sucked into the field by sheer momentum. Perhaps it would have been nice to have a trusted institution whose job it was to calm people down and point out the reality, but I’m not sure it would have helped that much with all the excitement, especially if there had been a model which counted theorems and such. People would just have said the model had never seen something this exciting.

Then there’s the issue of trusting the modeler. Right now ratings agencies have a terrible reputation because they are paid by the people they rate products for, and have been known to sell good ratings. I’m hoping we can do better in the future, but it’s hard (but not impossible!) to imagine gathering enough experts in finance to do it well and to have the product be trusted by the community.

What is the analogy for scientific theories? The problem with rating science is that, because of the depth of most fields, only the experts in the field themselves understand it well enough to even talk about it. So the problem of getting an informed and impartial view on the worthiness of a theory is super hard, assuming it’s possible at all.

Finally, I’m not sure what the ratings agency would be in charge of warning people about. Even the financial ratings agencies don’t agree on this: some of them measure default risk and others measure expected loss through default, which can be two really different things (for example, if you think the U.S. will technically default but will end up paying its debts).
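The distinction can be made concrete with a back-of-the-envelope calculation. The numbers below are made up for illustration: a bond with a high chance of a “technical default” that barely costs holders anything can have a much lower expected loss than a bond that rarely defaults but wipes out most of the investment when it does:

```python
# Toy illustration with hypothetical numbers: default probability and
# expected loss can rank the same two bonds in opposite orders.

def expected_loss(p_default: float, loss_given_default: float) -> float:
    """Expected loss = probability of default x fraction lost if it happens."""
    return p_default * loss_given_default

# A "technical default" scenario: likely to default on paper,
# but holders are eventually made nearly whole.
technical = expected_loss(p_default=0.10, loss_given_default=0.02)

# A lower default probability, but severe losses when default happens.
severe = expected_loss(p_default=0.05, loss_given_default=0.60)

# Ranked by default risk, the first bond looks riskier (10% vs 5%);
# ranked by expected loss, the second bond is far worse.
assert technical < severe
```

So two agencies can both be “right” about the same bond and still publish ratings that point in opposite directions, depending on which quantity they measure.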

In science, I guess you could try to measure the risk that “the theory won’t end up being useful,” but it’s not clear how you’d decide that even after the fact. Maybe you could forecast the number of jobs in the field for graduating Ph.D. students; that would be helpful to grad students but would not be the best metric of success for the field.

I’m not saying we shouldn’t have people talk about fields and whether fields are failing, because that’s hugely important. But I don’t think there’s a quantitative model there to be created that would help the conversation. Let’s start an open forum, or a wiki, devoted to discussing the health of various fields of scientific endeavor, with a bunch of good questions about each field where people can add their two cents.

Categories: finance, math
  1. Aaron
    April 13, 2012 at 8:59 am

    It seems like we already have such an agency, called the NSF. Fields grow and shrink (largely) because of how NSF allocates its money, especially in expensive fields like physics. There are lots of ex-string theorists because the NSF investment in the field didn’t pan out. I don’t think the NSF uses any quantitative models for its ratings, and like you I don’t think they should, but I also don’t think they do that great a job as it is.


  2. Larry Headlund
    April 13, 2012 at 11:40 am

There is also the possibility that a low-rated field (by whatever rating system) may be a good personal choice. Such a field may be neglected and have a lot of “low hanging fruit” waiting for the right researcher.


  3. Heather
    April 13, 2012 at 12:15 pm

Just a few words in support of rating agencies. Lenders need to have good evaluations of their obligors’ credit risk both for loan origination and for reserving. Qualitative analysis can add significant information beyond what a quantitative model can give. Lenders do some of their own qualitative analysis, but there is an issue of scale that makes it efficient for rating agencies to do a thorough analysis and then publish the results. And in spite of the conflicts of interest issue, overall ratings do add information beyond what quantitative models give, particularly at long horizons. However, certain classes of ratings may be more reliable than others. For example, single name credit ratings may be more reliable than CDO ratings. Maybe there could be a rating on the reliability of the ratings.


  4. K.J.
    April 13, 2012 at 3:46 pm

    I agree that hard science academic fields should NOT have ratings. It’s not a popularity contest, it’s science and math.

But a funding index might well be worthwhile. If you’re driven to study a particular specialization within a subject, great. But what if you’re just starting out, and would like to be reasonably certain of having steady employment after school is over? Knowing how certain fields have been funded in the past, and how it’s trending, could be very useful. That’s not to say that a breakthrough wouldn’t drastically improve funding in particular areas, but it could still be something worth knowing if you’re not in a field and you’re thinking about going into it.

@Larry, I still think personal choice should be your prime motivator, because if you’re genuinely interested in something, you might be able to find funding by getting someone else to share your enthusiasm.


  5. K.J.
    April 13, 2012 at 3:48 pm

    And an article that might be of interest to the discussion(s) you’ve had here:
    http://www.guardian.co.uk/science/2012/apr/09/frustrated-blogpost-boycott-scientific-journals


  6. Scott S
    March 8, 2013 at 9:29 pm

If a rating system were strictly and only according to the scientific method, then a rating that indicates how well a theory passes that method could be very useful. E.g., GR: rated 9/10, Ohm’s Law: 10/10, etc.


Comments are closed.