Model Thinking (part 2)
For the record, the lecturer, Scott Page, is by all accounts a great guy, and indeed he seems super nice on video. I’d love to have him over for dinner with my family someday (Professor Page, please come to my house for dinner when you’re in town).
In spite of liking him, though, pretty much every example he gives as pro-modeling is, for me, an anti-modeling example. Maybe I should make a complementary series of YouTube comment videos. That's not totally true, of course: I just probably don't notice the things we agree on. But I do notice the topics on which we disagree:
- He talks a lot about how models make us clearer thinkers. But what he really seems to mean is that they make us stupider thinkers. His example is that, in order to decide whom to vote for as president, we can model this decision as depending on two things: the likability of the person in question (presumably he assumes we want our president to be likable), and the extent to which that person is "as left" or "as right" as we are. I don't know about you, but I actually care about specific issues and where people stand on them, and which issues I consider likely to come up for consideration in the next election cycle. Like, if I like someone for his "stick it to the banks" approach but he's anti-abortion, then I think about whether abortion is likely to actually become illegal. And by the way, I don't particularly care if my president is likable; I'd rather have him or her be effective.
- He bizarrely chooses "financial interconnectedness" as a way of seeing how cool models are, and he shows a graph where the nodes are the financial institutions (Goldman Sachs, JP Morgan, etc.) and the edges are labeled with an interconnectedness score, bigger meaning more interconnected. He shows that, according to this graph, back in 2008 we knew to bail out AIG but that it was definitely okay to let Lehman fail. I'm wondering if he really meant this as an example of how your model could totally fail because your "interconnectedness scoring" sucked, but he didn't seem to be tongue in cheek.
- He then talked about measuring the segregation of a neighborhood, either by race or by income, and he used New York and Chicago as examples. I won't go into lots of details, but he gave a score to each block, like the census maps do with coloring, and he used those scores to develop a new score which was supposed to measure the segregation of each block. The problem I have with this segregation score is that it depends very heavily on the definition of the overall area you are considering. If you enlarge your definition of New York City to include the suburbs, then the segregation score of New York City may (probably would) be completely different. This seems to be a really terrible characteristic of such a metric.
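To see just how boundary-dependent such a score is, here's a toy sketch in Python. This is my own dissimilarity-style index, not necessarily the exact formula from the course: each block is scored against the composition of the whole area, so annexing perfectly homogeneous suburbs changes the score of every original city block, even though nobody inside the city moved.

```python
def dissimilarity(blocks):
    """Classic index of dissimilarity for a whole area.

    blocks: list of (group_a_count, group_b_count) per block.
    Each block contributes half of |its share of group A - its share
    of group B|, so every block's contribution depends on the area-wide
    totals A and B.
    """
    A = sum(a for a, _ in blocks)
    B = sum(b for _, b in blocks)
    return 0.5 * sum(abs(a / A - b / B) for a, b in blocks)

city = [(90, 10), (10, 90)]   # two internally homogeneous city blocks
suburbs = [(100, 0)] * 8      # all-group-A suburbs, nothing mixed

# Same two city blocks, two different "overall areas", two different answers.
print(dissimilarity(city))            # 0.8
print(dissimilarity(city + suburbs))  # ~0.889
```

Note that the first city block, (90, 10), contributes 0.4 to the city-only score but exactly 0 once the suburbs are included: relative to the enlarged area's baseline it suddenly looks perfectly representative. That's the boundary-dependence problem in two lines of arithmetic.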
- My second problem with his segregation score is that, at the end, he had overall segregation numbers for Philly and Detroit, and then showed the maps and mentioned that, looking at the maps, you wouldn't really notice that one is more segregated than the other (Philly more than Detroit), but knowing the scores you do know that. Umm... I'd rather say that if you are getting scores that are not fundamentally obvious from looking at these pictures, then maybe it's because your score sucks. What does having a "good segregation score" mean if not that it captures something you can see in a picture?
- One thing I liked was a demonstration of Schelling’s Segregation Model, which shows that, if you have a group of people who are not all that individually racist, you can still end up with a neighborhood which is very segregated.
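Since it takes so little code, here's a minimal Schelling-style simulation in Python (my own toy version, with parameters I made up, not the one from the course). Agents on a grid want at least 30% of their occupied neighbors to be their own type, which sounds pretty tolerant, and yet repeatedly relocating unhappy agents to random empty cells drives the average same-type neighbor fraction well above where it started.

```python
import random

random.seed(0)

SIZE = 20         # grid is SIZE x SIZE, wrapping at the edges
THRESHOLD = 0.3   # agents want >= 30% of occupied neighbors to match them

# Cells: 0 = empty (~20%), 1 or 2 = an agent of that group (~40% each).
grid = [[random.choice([0, 0, 1, 1, 1, 1, 2, 2, 2, 2]) for _ in range(SIZE)]
        for _ in range(SIZE)]

def neighbors(r, c):
    """Yield the 8 surrounding cell values, wrapping around the edges."""
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if (dr, dc) != (0, 0):
                yield grid[(r + dr) % SIZE][(c + dc) % SIZE]

def unhappy(r, c):
    me = grid[r][c]
    occupied = [n for n in neighbors(r, c) if n != 0]
    return bool(occupied) and sum(n == me for n in occupied) / len(occupied) < THRESHOLD

def similarity():
    """Average fraction of same-group neighbors, over all agents."""
    scores = []
    for r in range(SIZE):
        for c in range(SIZE):
            if grid[r][c]:
                occ = [n for n in neighbors(r, c) if n != 0]
                if occ:
                    scores.append(sum(n == grid[r][c] for n in occ) / len(occ))
    return sum(scores) / len(scores)

before = similarity()
for _ in range(50):  # relocation rounds: each unhappy agent moves somewhere empty
    movers = [(r, c) for r in range(SIZE) for c in range(SIZE)
              if grid[r][c] and unhappy(r, c)]
    empties = [(r, c) for r in range(SIZE) for c in range(SIZE) if not grid[r][c]]
    random.shuffle(movers)
    for r, c in movers:
        if not empties:
            break
        er, ec = empties.pop(random.randrange(len(empties)))
        grid[er][ec], grid[r][c] = grid[r][c], 0
        empties.append((r, c))
after = similarity()

print("similarity before:", before)
print("similarity after: ", after)
```

The punchline is the gap between `THRESHOLD` and the final similarity: individually mild preferences produce collectively stark clustering, which is exactly the point of the demonstration I liked.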
I’m looking forward to watching more videos with my skeptical eye. After all, the guy is really a sweetheart, and I do really care about the idea of teaching people about modeling.