I’ve been busy recording podcasts recently, and I wanted to collect some of them together for your convenience and listening pleasure.
Please forgive me for posting constantly about the amazing press my book is getting; I’m milking this once-in-a-lifetime opportunity.
Because, HOLY SHIT!
Also, I talked with Russ Roberts a while back about my book and it’s now on EconTalk as a podcast.
I’m on my way to Boston by train today for a book talk and Q&A at the Harvard Bookstore in Harvard Square at 7pm. More information here.
Please join if you’re around, I’d love to see you.
I’ve been on book tour for nearly a month now, and I’ve come across a bunch of arguments pushing against my book’s theses. I welcome them, because I want to be informed. So far, though, I haven’t been convinced I made any egregious errors.
Here’s an example of an argument I’ve seen consistently in defense of teacher value-added model (VAM) scores, and sometimes of recidivism risk scores as well. Namely, that the teachers’ VAM scores were “one of many considerations” used to establish a teacher’s overall score. The use of something unfair is less unfair, in other words, if you also use other things which are fair and balance it out.
The obvious irony of the “one of many” argument, besides the mathematical one I will make below, is that the VAM was supposed to have a real effect on teachers’ assessments, and that effect was meant to be valuable and objective. So any defense of it that basically implies it’s okay to use because it has very little power seems odd and self-defeating.
Sometimes it’s true that a single inconsistent or badly conceived ingredient in an overall score is diluted by the other stronger and fairer assessment constituents. But I’d argue that this is not the case for how teachers’ VAM scores work in their overall teacher evaluations.
Here’s what I learned by researching and talking to people who build teacher scores: most of the other inputs – primarily scores derived from categorical evaluations by principals, teachers, and outside observers – have very little variance. Almost all teachers are considered “acceptable” or “excellent” by those measurements, so they all turn into the same number or numbers when scored. That’s not much to work with if the bottom 60% of teachers have essentially the same score and you’re trying to locate the worst 2% of teachers.
The VAM was brought in precisely to introduce variance to the overall mix. You introduce numeric VAM scores so that there’s more “spread” between teachers, so you can rank them and you’ll be sure to get teachers at the bottom.
But if those VAM scores are actually meaningless, or at least extremely noisy, then what you have is “spread” without accuracy. And it doesn’t help to mix in the other scores.
In a statistical sense, even if you allow 50% or more of a given teacher’s score to consist of non-VAM information, the VAM score will still dominate the variance of a teacher’s score. Which is to say, the VAM score will comprise much more than 50% of the information that goes into the score.
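To make that concrete, here’s a quick simulation with made-up numbers (the specific means and spreads are my assumptions, chosen only to reflect the pattern described above: evaluations that cluster tightly versus VAM scores that swing widely). It computes how much of the blended score’s variance comes from the VAM component alone.

```python
import random
import statistics

random.seed(1)

# Hypothetical numbers: principal/peer evaluations cluster tightly
# (almost everyone rated "acceptable" or "excellent"), while VAM
# scores swing widely.
non_vam = [random.gauss(80, 2) for _ in range(1000)]   # low variance
vam = [random.gauss(50, 25) for _ in range(1000)]      # high variance

# A 50/50 blend of the two components.
overall = [(a + b) / 2 for a, b in zip(non_vam, vam)]

# For independent components, the blend's variance splits additively,
# so VAM's share of it is var_vam / (var_vam + var_non_vam).
var_vam = statistics.variance(vam)
var_non_vam = statistics.variance(non_vam)
share = var_vam / (var_vam + var_non_vam)
print(f"VAM's share of the overall score's variance: {share:.1%}")
```

With these illustrative numbers the share comes out well above 90%: even at a nominal 50% weighting, nearly all of the differences between teachers’ overall scores are driven by the VAM.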
An extreme version of this is to think about making the non-VAM 50% of a teacher’s score always exactly the same. Denote it by 50. When we take the population of teacher VAM scores and average them with 50, the population of teacher VAM scores are now between 25 and 75, instead of 0 and 100, but besides being squished into a smaller range, they haven’t changed with respect to each other. Their relative rankings, in particular, do not change. So whoever was unlucky enough to get a bad VAM score will still be on the bottom.
This holds true for other choices of “50” as well.
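The extreme case above is easy to check directly. This sketch (with synthetic, uniformly spread VAM scores – an assumption; real VAM distributions are messier, but any spread illustrates the point) averages each VAM score with a constant 50 and confirms that the blend is squished into a narrower range while the relative rankings are untouched.

```python
import random

random.seed(0)

# Synthetic VAM scores spread across 0-100.
vam = [random.uniform(0, 100) for _ in range(1000)]

# Overall score: average the VAM score with a constant non-VAM score of 50.
overall = [(v + 50) / 2 for v in vam]

# The blend now lives in [25, 75] instead of [0, 100]...
assert min(overall) >= 25 and max(overall) <= 75

# ...but since (v + 50) / 2 is strictly increasing in v, sorting by the
# overall score gives exactly the same order as sorting by VAM alone.
rank_by_vam = sorted(range(len(vam)), key=lambda i: vam[i])
rank_by_overall = sorted(range(len(vam)), key=lambda i: overall[i])
print(rank_by_vam == rank_by_overall)  # True
```

Whoever got a bad VAM score is still at the bottom of the list; the constant component changes the numbers but not the ordering.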
A word about recidivism risk scores. It’s true that judges use all sorts of information in determining a defendant’s sentence, bail, or parole. But if one of the most trusted and highest-variance inputs is flawed – and in this case racist – then a similar argument to the one above can be made, and the conclusion is this: the overall effect of using flawed recidivism risk scores is stronger, not weaker, than its nominal weighting would suggest. We have to be more worried about it, not less.
- The Vagina Dispatches (must watch video)
- Massive Yahoo data breach
- Gaming the Mayo markets
- Big data meets insurance, there’s an explosion
- Wells Fargo employees were fired for whistleblowing
- Low-value customers are NOT always right
- Amazon does not give customers the best deal
- Why the mainstream media is ignoring the nationwide prison strike
- Gun rights are for white people
- Drowning in systemic injustice
I also discussed a bunch of these topics on this week’s Slate Money.
I’m flying to London Sunday night to conduct my UK book tour. Here’s the schedule so far:
Date: Tuesday, September 27th
Place: Faculty of Education, 184 Hills Road, Cambridge
More info: here
London’s How To Academy
Date: Tuesday, September 27th
Place: CNCFD- Condé Nast College of Fashion & Design, 16-17 Greek Street, Soho, London
More info: here
King’s College London
Date: Wednesday, September 28th
Place: S-2.08, King’s College London, Strand, London
More info: here
In addition to the above, I’ll also be on BBC’s Today Programme on Tuesday morning, and I’ll be interviewed by Significance, the Royal Statistical Society & American Statistical Association magazine, the Guardian Science podcast, and Business Daily for BBC’s The World Service.
I haven’t been posting too often, in part because I’ve been traveling a lot on book tour, and also because I’ve been writing for other outlets and doing quite a few interviews. Today I wanted to share some of that stuff.
- I wrote a Q&A for Jacobin called Welcome to the Black Box.
- I wrote a piece for Slate called How Big Data Transformed Applying to College.
- Times Higher Education chose my book as their reviewed Book of the Week and had a nice spread about it.
There may be more, and I’ll post them when I remember them.
Also, great news! My book is a best-seller in Canada! Those Canadians are just the smartest.