
Interactive scoring models: why hasn’t this happened yet?

September 12, 2013

My friend Suresh just reminded me about this article written a couple of years ago by Malcolm Gladwell and published in the New Yorker.

It concerns various scoring models that claim to be both comprehensive (covering every aspect of a thing, not just one) and heterogeneous (broad enough to apply across all the things in a category), say for cars or for colleges.

Weird things happen when you try to do this, like a rating formula that barely weights price or exterior styling for sports cars.

Two things. First, this stuff is actually really hard to do well. I like how Gladwell addresses this issue:

At no point, however, do the college guides acknowledge the extraordinary difficulty of the task they have set themselves.

Second, I think the issue of combining heterogeneity and comprehensiveness is addressable, but it has to be addressed interactively.

Specifically, what if instead of a single fixed score, there were a place where a given car-buyer or college-seeker could go to fill out a form of preferences? For each defined and rated aspect, the user would answer a question about how much they cared about that aspect. They'd assign a weight to each aspect.

For colleges, some people care a lot about whether their college has a ton of alumni giving, other people care more about whether the surrounding town is urban or rural. Let’s let people create their own scoring system. It’s technically easy.
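It really is technically easy: a personal score is just a weighted sum of aspect ratings, with the weights supplied by the user instead of baked into the formula. Here's a minimal sketch in Python, with entirely made-up colleges, aspects, and ratings:

```python
# Illustrative only: hypothetical colleges rated 0-10 on a few aspects.
ratings = {
    "College A": {"alumni_giving": 9, "urban_setting": 2, "affordability": 5},
    "College B": {"alumni_giving": 3, "urban_setting": 9, "affordability": 7},
    "College C": {"alumni_giving": 6, "urban_setting": 6, "affordability": 9},
}

def personal_ranking(weights):
    """Rank colleges by the weighted sum of aspect ratings,
    using the user's own weights rather than a fixed formula."""
    scores = {
        name: sum(weights[aspect] * r for aspect, r in aspects.items())
        for name, aspects in ratings.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

# Someone who mostly wants an urban setting:
city_lover = personal_ranking(
    {"alumni_giving": 0.1, "urban_setting": 0.8, "affordability": 0.1})
# Someone who mostly cares about alumni giving:
giving_fan = personal_ranking(
    {"alumni_giving": 0.8, "urban_setting": 0.1, "affordability": 0.1})
```

Same underlying data, two different users, two different rankings: the city lover's list starts with College B, the alumni-giving fan's with College A.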

I’ve suggested this before when I talked about rating math articles on various dimensions (hard, interesting, technical, well-written) and then letting people come and search based on weighting those dimensions and ranking. But honestly we can start even dumber, with car ratings and college ratings.

Categories: data science, modeling
  1. JSE
    September 12, 2013 at 8:52 am

    This is how the most recent NRC ranking of graduate programs worked. But I’m not sure anyone liked it! It would be interesting to hear from them how often people actually used the interactive tools (essentially what you propose above) they offered on their website.


    • September 12, 2013 at 9:01 am

      Oh wow, cool! I’d certainly be interested in hearing from people about their experiences.


  2. rjh
    September 12, 2013 at 9:48 am

The car comment reminds me of the approach used in Consumer Reports. They've got about 50 specific measurements and ratings per model type and year, plus several paragraphs of narrative discussion of subjective impressions. Colleges deserve something similar. It may take a while to figure out what should be measured and rated, and attempts to reduce choosing a college to a simple pick-the-highest-number exercise get in the way.

    The multi-variable approach makes more sense than the foolish notion that there can exist a simple single numeric rating for something as complex as college.


  3. jmhl
    September 12, 2013 at 9:57 am

    This is how Nate Silver’s Liveability Calculator for New York City neighborhoods works. You decide how much you care about various features (affordability, schools, transit costs, etc) and it reranks neighborhoods according to your preferences.


  4. September 12, 2013 at 11:18 am

This would be good if the relationship between quality and ratings ran in only one direction. Unfortunately, in a lot of cases, if everybody "agrees that this is good," then we actually perceive it as better than we would have otherwise. This happens all the time with wine or vodka, for instance. In such cases, an interactive rating would not be that useful.

It is not clear to me that this is any different for colleges (unfortunately). For a lot of people, if your college is ranked higher than some other college (maybe itself in the past), then you feel better about your college experience independent of whether anything changed in the quality of your education. You could try to write this off as 'silly irrationality' and say that these ratings would be for the 'enlightened people that can overcome such internal biases', but even that would not hold.

    In a lot of jobs where you don't actually need a college education but a degree is still required (i.e. most jobs), the performance of your school is arbitrary, but "standard" rankings matter to employers, and if you are in college just to be employed afterward (which unfortunately seems to be most people), then this should also matter to you. This is probably why the NRC ranking of graduate programs that @JSE mentions was disliked. Too few people care about the 'objective' experience, and most just want to be at a 'well ranked place'.


  5. mathematrucker
    September 12, 2013 at 1:19 pm

    Roughly speaking, personal rankings trump third-party ones.

    Take employment for example. It struck me a few years back that long-haul trucking has a rich set of data points that seems very ripe for ranking analysis. Not only are there lots (millions) of truckers (big potential market), there are a whole slew of “How important to you is…” questions that are specific to long-haul trucking.

    No such comprehensive poll of trucking jobs has ever been done to my knowledge. Thinking I might try to, I reserved the domain “truckerjobratings.com” a few years ago but never got around to diving into the project.

    Glassdoor.com is interesting, but nowhere near as occupation-specific as I’d like to see.


  6. September 12, 2013 at 6:01 pm

    The OECD Better Life Index has a nice interactive visualisation where you can adjust priorities


  7. September 12, 2013 at 9:53 pm

When I was applying to college, the Princeton Review or someone had a thing like that.


  8. September 13, 2013 at 12:46 pm

    Interesting ideas!

There are some prominent past examples showing that having people explicitly state preference criteria is difficult. It's unwieldy, so people won't do it, and people aren't always good judges of the parameters of their own algorithm. The successful raters, like Amazon and Netflix, infer the preferences from past choices.

You can’t replicate that exactly for colleges, as the number of past choices is small and the choice will depend on the child anyway. But perhaps there’s a way to infer personal choice algorithms by having people make a series of choices between two college possibilities?
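    The pairwise idea the commenter floats can be sketched in a few lines: treat each option as a vector of aspect ratings and run a perceptron-style update over the "A or B?" answers, nudging the weights toward the chosen option whenever the current weights misrank a pair. The aspect vectors and choices below are invented for illustration; this is one simple way to do it, not the only one.

    ```python
    # Hypothetical pairwise choices: each option is a vector of aspect
    # ratings (alumni_giving, urban_setting, affordability), and each
    # pair records (chosen, rejected) from one "A or B?" question.
    choices = [
        ((3, 9, 7), (9, 2, 5)),
        ((6, 6, 9), (9, 2, 5)),
        ((3, 9, 7), (6, 6, 9)),
    ]

    def infer_weights(pairs, epochs=50, lr=0.1):
        """Perceptron-style update on pairwise preferences: whenever the
        current weights score the rejected option at least as high as the
        chosen one, shift the weights toward the chosen option's features."""
        w = [0.0] * len(pairs[0][0])
        score = lambda weights, x: sum(wi * xi for wi, xi in zip(weights, x))
        for _ in range(epochs):
            for chosen, rejected in pairs:
                if score(w, chosen) <= score(w, rejected):
                    w = [wi + lr * (c - r)
                         for wi, c, r in zip(w, chosen, rejected)]
        return w

    w = infer_weights(choices)
    # If the pairs are consistent, w now ranks every chosen option
    # above the option it was chosen over.
    ```

    The learned weights then plug straight into the weighted-sum scoring the post proposes, so the user never has to articulate weights directly.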


  9. September 20, 2013 at 10:55 am

    Cathy, the question of finding the right undergraduate college via information filtering techniques is one I’ve spent a long time thinking about. There’s a big user problem: selecting input values and weights takes a lot of self-knowledge on the part of the high school student & family *and* requires them to understand the context in which these inputs and weights apply. Not only do very few students know what they want in a college (in terms of the possible inputs), there’s almost no way they or their families (or even their counselors) can know the full range of possible values in order to judge.

    My solution was that users would enter one or several colleges that they already know “sort of” fit them. The system replies to the user telling them what’s special about these colleges and in what relative weights, and offers the student/family an opportunity to tell the system yes or no on the various attributes. Think of it like Pandora for colleges.

    Just as important are a student’s high school grades, test scores, and class rank, which (for better or worse) play an enormous role in college admission and aid. As the user iterates on her list of possible colleges, the system should also plot the colleges on an “admission chances” and “aid chances” graph, and — if the chances are low — offer alternative colleges with a similar fit but higher chances of admission or aid.

    This system is live at http://www.possibilityu.com. Give it a try by logging in with your email or Facebook account. We were invited to the White House for the Department of Education’s Datapalooza, won a Software and Information Industry Association national top 10 award in their Innovation Incubator competition, and just won a College Knowledge Challenge grant funded by the Bill & Melinda Gates Foundation.


Comments are closed.