
SAT overhaul

March 11, 2014

There’s a good New York Times article by Todd Balf entitled The Story Behind the SAT Overhaul (hat tip Chris Wiggins).

It tells the story of the new College Board president, David Coleman, and how he decided to deal with the biggest problem with the SAT: namely, that it is pretty easy to prepare for the test, and the result is that richer kids, having more resources – both time and money – to prepare, do better.

Here’s a visual from another NY Times blog on the issue:

[Chart: SAT scores broken down by family income]

Here’s my summary of the story.

At this point the SAT serves mainly to sort people by income. It’s no longer an appropriate way to gauge “IQ,” as it was supposed to be when it was invented. Not to mention that colleges themselves have been playing a crazy game of gaming the U.S. News & World Report college ranking model via their SAT scores. So it’s one feedback loop feeding into another.

How can we deal with this? One way is to stop using it. The article describes some colleges that have made SAT scores optional. They have not suffered, and they have more diversity.

But since the College Board makes their livelihood by testing people, they were never going to just shut down. Instead they’ve decided to explicitly make the SAT about content knowledge that they think high school students should know to signal college readiness.

And that’s good, but of course one can still prepare for that test. And since they’re acknowledging that now, they’re trying to set up the prep to make it more accessible, possibly even “free”.

But here’s the thing: it’s still online, and it still involves lots of time and attention, which still saps resources. I predict we will still see incredible efforts towards gaming this new model, and it will still break down by income, although possibly not quite as much, and possibly we will be training our kids to get good at slightly more relevant stuff.

I would love to see more colleges step outside the standardized testing field altogether.

Categories: modeling, statistics
  1. mb
    March 11, 2014 at 9:06 am

    The premise of this post, if the data are to be believed, is wrong. The story fits nicely with your world view, and confirmation bias is powerful, but I would suggest you read outside your comfort zone. I would also read about the Common Core and the overhaul. The problem with using income here is that it is a confounding variable: it is correlated with ability, and parents with high income (ability) will likely have children with high ability.
    http://marginalrevolution.com/marginalrevolution/2014/03/the-sat-test-prep-income-and-race.html


    • March 11, 2014 at 10:41 am

      Agreed, although it all hinges on how you define “high ability.”

      It’s the same story with standardized tests of all kinds: poverty is bad for scores, wealth is good. It’s such a strong relationship that it is arguable that the causality goes the other way: poverty causes poor test performance.

      Said another way, the tests seem to measure income. And given that, we probably want to use some other way of determining who gets to go to college.
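      A back-of-the-envelope sketch of that claim, in Python, with made-up bracket midpoints and mean scores (the numbers below are assumptions for illustration, not the values behind the chart above):

          import numpy as np

          # Hypothetical data: income-bracket midpoints (in $1,000s) and assumed
          # mean section scores. Illustrative only, not the College Board's figures.
          income = np.array([10, 30, 50, 70, 90, 110, 130, 150, 175, 250])
          mean_score = np.array([455, 465, 475, 485, 495, 505, 510, 520, 530, 555])

          r = np.corrcoef(income, mean_score)[0, 1]
          slope, _ = np.polyfit(income, mean_score, 1)
          print(f"correlation: {r:.2f}; roughly {slope:.2f} points per extra $1,000 of income")
          # A correlation this strong says nothing about direction: it is consistent both
          # with poverty depressing scores and with "ability" driving income and scores.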


      • March 11, 2014 at 7:20 pm

        Cathy, I’m rather surprised that on one hand you challenge financial models and on the other hand seem to accept flawed statistical data so easily. Why would you look only at the first moment and not the distribution? Are there no poor kids who do extremely well on the SATs? I have no memory of what I scored on the SAT, but I certainly was in the lowest economic bracket at the time.

        Furthermore, why is the Y axis stretched out so much to make it look like there is a much larger correlation between scores and income than there actually is? Compare the Y axis on the first three graphs with that of the composite graph.

        Third, you know as well as anybody that correlation does not mean causality. One thing I can say based on prior teaching experience is that parent behavior (caring, showing up at parent-teacher conferences) and parents modeling reading have a large impact on students’ learning and on their scores. Even those from the lowest economic strata can do well if their parents encourage them. You see that in poor Asian communities.
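        To make the first-moment-versus-distribution point concrete, here is a minimal simulation sketch; the means and spreads below are assumptions chosen for illustration, not estimates from real score data:

            import numpy as np

            rng = np.random.default_rng(0)

            # Hypothetical per-section score distributions for two income brackets
            # (assumed means, common spread), clipped to the 200-800 scale.
            low_income = np.clip(rng.normal(460, 100, 10_000), 200, 800)
            high_income = np.clip(rng.normal(560, 100, 10_000), 200, 800)

            # Comparing only the means hides how much the two groups overlap.
            print("gap in means:", round(float(high_income.mean() - low_income.mean())))
            print("share of low-income scores above the high-income median:",
                  round(float((low_income > np.median(high_income)).mean()), 2))

        Under these assumed numbers a sizable minority of the poorer group still outscores the richer group’s median, which is exactly the kind of overlap a plot of means alone conceals.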


    • cat
      March 11, 2014 at 11:55 am

      “It is correlated with ability, and parents with high income (ability) will likely have children with high ability.”

      Common sense says high income parents will have access to better education for their children either through being able to afford housing in good school districts or private schools.

      With all due respect to teachers trying to educate in low income schools, I doubt the kids get the same education as the kids with high income parents. This can be a large hurdle for high ability kids in low income public schools for many reasons.

      As to your link, it has many flaws in its arguments.
      A) Effects are additive, not multiplicative as they try to state.
      B) The links to the academic articles show the opposite of what they try to state, i.e. there are multiple modest increases to scores conferred on high-income students even if test prep alone only confers a modest increase.


  2. Josh
    March 11, 2014 at 9:08 am

    I did like the article. But, while the graphic makes a compelling case that income is a crucial factor, the issue clearly goes much deeper than test prep. The article itself says that test prep only improves scores by 30 points. That seems low to me, but I don’t think test prep is the primary factor here. Which means that this problem is much deeper than fixing (or not using) the test.


  3. March 11, 2014 at 9:11 am

    What I’d really like to see is the correlation between SAT scores and college success; that is what should determine whether colleges use the SAT or not. The colleges must know (they are using the test, or have decided to drop it), but they haven’t been releasing the figures.

    In particular, I’d expect colleges that decided to drop the test not to care about the so-called “College Board”, and perhaps to be open to publishing their internal statistics: does including the SAT in the admissions decision improve the predictive power or not?


    • March 11, 2014 at 9:19 am

      The answer, based on this research, is that it doesn’t: “Hiss’ study, “Defining Promise: Optional Standardized Testing Policies in American College and University Admissions,” examined data from nearly three-dozen “test-optional” U.S. schools, ranging from small liberal arts schools to large public universities, over several years.
      Hiss found that there was virtually no difference in grades and graduation rates between test “submitters” and “nonsubmitters.” Just 0.05 percent of a GPA point separated the students who submitted their scores to admissions offices and those who did not. And college graduation rates for “nonsubmitters” were just 0.6 percent lower than those students who submitted their test scores.”
      http://www.npr.org/2014/02/18/277059528/college-applicants-sweat-the-sats-perhaps-they-shouldn-t
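      For anyone who wants to see what “improves the predictive power” would mean operationally, here is a toy Python sketch; the data are simulated under an assumed relationship (college GPA driven by high-school GPA, with the SAT adding no independent signal), so it mirrors the Hiss finding by construction rather than testing it:

          import numpy as np

          rng = np.random.default_rng(1)
          n = 5_000

          # Simulated applicants under assumed relationships (illustrative only).
          hs_gpa = np.clip(rng.normal(3.0, 0.5, n), 0.0, 4.0)
          sat = np.clip(1500 + 300 * (hs_gpa - 3.0) + rng.normal(0, 150, n), 600, 2400)
          college_gpa = np.clip(0.8 * hs_gpa + rng.normal(0, 0.4, n), 0.0, 4.0)

          def r_squared(X, y):
              X = np.column_stack([np.ones(len(y)), X])  # add an intercept column
              beta, *_ = np.linalg.lstsq(X, y, rcond=None)
              return 1 - (y - X @ beta).var() / y.var()

          print("R^2, HS GPA only: ", round(r_squared(hs_gpa, college_gpa), 3))
          print("R^2, HS GPA + SAT:", round(r_squared(np.column_stack([hs_gpa, sat]), college_gpa), 3))

      With data simulated this way the two R^2 values come out essentially the same; with real admissions records they could differ, and that comparison is the statistic one would want the test-optional colleges to publish.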


  4. Min
    March 11, 2014 at 1:05 pm

    The bias of the SAT and other tests is hardly news. Back in the late 1950s or early 1960s, research showed that when standard test questions of the form “A is to B as C is to ___,” on which minority kids scored poorly, were changed to “A goes with B like C goes with ___,” minority kids scored as well as advantaged kids. But did that research lead to altering the form of the standard test questions? Of course not.

    Two of the things that I recall from the movie “Stand and Deliver” were the bias of the people at the testing service, who did not believe that minority kids could score so well without cheating, and Escalante teaching to the test. (Since he was an advisor to the film, I suppose that was a fair portrayal.)


  5. Min
    March 11, 2014 at 1:19 pm

    Lior Silberman:
    What I’d really like to see is the correlation between SAT scores and college success;

    There was an infamous example from, I believe, the 1950s, involving a multiple-choice test question about belief in God whose intended correct answer was agnostic. However, atheist was scored as correct while agnostic was scored as incorrect, based upon the college grades of the students who gave each answer. (According to my stat prof.)


    • March 11, 2014 at 7:10 pm

      I’ve taken the PSAT, SAT, GRE, GMAT and LSAT. Almost all tests have some ambiguous or incorrect questions and answers. The same goes for teacher-created tests. What that means is not that the tests are invalid, but rather that a 40-point, or even an 80-point, difference is not meaningful.


      • Min
        March 11, 2014 at 9:48 pm

        My point was not that the tests were fallible, but that the test based the score of an answer, not on its correctness, but on how the students who gave that answer performed in school.


  6. March 11, 2014 at 7:25 pm

    And this from today’s NY Times:


  7. March 11, 2014 at 11:23 pm

    While the problem of ensuring fairness for students from all socio-economic backgrounds is an interesting one, I think it is also interesting that, by testing skills that the College Board believes should be attained by the end of high school, the College Board has the potential to create a de facto national standardised curriculum, given that college entry is a goal of a high proportion of high school pupils.


  8. Guest2
    March 13, 2014 at 8:40 am

    College Board duplicity is well known — and this turn-around only confirms it.

    After years (probably even decades) of looking the other way while Kaplan and Barron’s made piles of money off the SAT by prepping students for the test, now the College Board decides to do the same thing for free (but online, as Cathy points out). How does this help the legitimacy of the SAT? It does not.

    Either the test represents a true measure or it does not; before, it did, but now it does not. Huh? Why not?

    Every year, the College Board comes out with a report on how great college is for everyone, and twists the data to prop up the so-called college premium. They need to crawl back under the Galtonian-eugenics rock they came out from.

