
How to be wrong

June 27, 2013

My friend Josh Vekhter sent me this blog post written by someone who calls herself celandine13 and tutors students with learning disabilities.

In the post, she reframes the concept of a mistake, or "being bad at something," as often stemming from a fundamental misunderstanding or a flawed procedure:

Once you move it to “you’re performing badly because you have the wrong fingerings,” or “you’re performing badly because you don’t understand what a limit is,” it’s no longer a vague personal failing but a causal necessity.  Anyone who never understood limits will flunk calculus.  It’s not you, it’s the bug.

This also applies to “lazy.”  Lazy just means “you’re not meeting your obligations and I don’t know why.”  If it turns out that you’ve been missing appointments because you don’t keep a calendar, then you’re not intrinsically “lazy,” you were just executing the wrong procedure.  And suddenly you stop wanting to call the person “lazy” when it makes more sense to say they need organizational tools.

She wants us to stop with the labeling and instead work on understanding why the mistake was made and addressing it, as she does when she tutors students. She even singles out certain approaches she considers to be flawed from the start:

This is part of why I think tools like Knewton, while they can be more effective than typical classroom instruction, aren’t the whole story.  The data they gather (at least so far) is statistical: how many questions did you get right, in which subjects, with what learning curve over time?  That’s important.  It allows them to do things that classroom teachers can’t always do, like estimate when it’s optimal to review old material to minimize forgetting.  But it’s still designed on the error model. It’s not approaching the most important job of teachers, which is to figure out why you’re getting things wrong — what conceptual misunderstanding, or what bad study habit, is behind your problems.  (Sometimes that can be a very hard and interesting problem.  For example: one teacher over many years figured out that the grammar of Black English was causing her students to make conceptual errors in math.)

On the one hand I like the reframing: it's always good to see knee-jerk reactions become more contemplative, and it's always good to see people trying to help rather than trying to blame. In fact, one of my tenets of real life is that mistakes will be made, and it's not the mistake itself we should be anxious about; it's how we act to fix the mistake that exposes who we are as people.

I would, however, like to take issue with her anti-example in the case of Knewton, which is an online adaptive learning company. Full disclosure: I interviewed with Knewton before I took my current job, and I like the guys who work there. But, I’d add, I like them partly because of the healthy degree of skepticism they take with them to their jobs.

What the blogger celandine13 is pointing out, correctly, is that understanding causality is pretty awesome when you can do it. If you can figure out why someone is having trouble learning something, and if you can address that underlying issue, then fixing the consequences of that issue gets a ton easier. Agreed, but I have three points to make:

  1. First, a non-causal data mining engine such as Knewton can also stumble upon a way to fix the underlying problem, by dint of having a ton of data and noting that people who failed a calculus test, say, did much better after having limits explained to them in a certain way. This is much like how Google's spellcheck engine works: by keeping track of previous spelling corrections, not by mind-reading how people think about spelling.
  2. Second, it’s not always easy to find the underlying cause of bad testing performance, even if you’re looking for it directly. I’m not saying it’s fruitless – tutors I know are incredibly good at that – but there’s room for both “causality detectives” and tons of smart data mining in this field.
  3. Third, it’s definitely not always easy to address the underlying cause of bad test performance. If you find out that the grammar of Black English affects students’ math test scores, what do you do about it?
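The first point above can be made concrete. A purely statistical engine never models *why* a given explanation works; it just tallies outcomes per explanation and routes future students toward whichever one has the best observed track record. A minimal sketch of that counting logic (the variant names and pass/fail data below are invented for illustration, not anything Knewton actually does):

```python
from collections import defaultdict

def best_variant(outcomes):
    """Given (variant, passed) observations, return the variant with the
    highest observed pass rate -- no causal model, just counting."""
    passes = defaultdict(int)
    totals = defaultdict(int)
    for variant, passed in outcomes:
        totals[variant] += 1
        passes[variant] += passed
    return max(totals, key=lambda v: passes[v] / totals[v])

# Hypothetical data: which explanation of limits preceded a passed retest?
observations = [
    ("epsilon-delta", 0), ("epsilon-delta", 1), ("epsilon-delta", 0),
    ("graphical", 1), ("graphical", 1), ("graphical", 0),
]
print(best_variant(observations))  # -> graphical (2/3 pass rate vs 1/3)
```

With enough students, this kind of tally can land on the same intervention a "causality detective" would prescribe, without ever representing the underlying misconception.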

Having said all that, I'd like to once more agree with the underlying message that a mistake is first and foremost a signal rather than a reflection of someone's internal thought processes. The more we think of mistakes as learning opportunities, the faster we learn.

  1. June 27, 2013 at 9:18 am

    Fantastic post! These insights about errors do not just apply to people with general learning disabilities but I think, more insidiously, they apply to those of us who think we are “good” at one thing but when doing other things, too quickly come to the conclusion that we’re horrible at them. For example, a physicist might think she is no good at improv theater for no good reason other than (1) having a reference point of ease, (2) not having found the right mentors or processes to slowly master a new field. Or, more commonly, perhaps, the case of a whole slew of very intelligent people who claim that they fear math.


  2. Glen S. McGhee
    June 27, 2013 at 10:08 am

    Wow! The URL to “Twice as Less” above blew my mind! Showing the link between language and conceptual problems with motion, distance, etc.

    Ruby Payne draws important social conclusions from this issue, the only cognitive approach to poverty that I know of — explicitly dealing with the cognitive dimension of social stratification. http://www.ahaprocess.com/

    The interesting thing is that public educators mostly hate her, even though her approach is essentially Marxian! Go figure!


  3. Glen S. McGhee
    June 27, 2013 at 10:15 am

    Well, to say that it is not you, but it is the bug that is the problem, is an infinite regress that ends up in the same place.

    While it may help to characterize and discuss the problem, it does not *solve* the problem.

    So what if the problem is not you, but the bug: *you* still have to fix the bug.

    In sociology, this surfaces as the problem of agency, and even as the macro/micro problem.


  4. June 27, 2013 at 11:04 am

    “The more we think of mistakes as learning opportunities the faster we learn.”
    A lot of pigmeat thar! The trick is recognizing “mistakes” and then learning from them. It takes humility and persistence and the cupboard is too often bare.


  5. Michael Edesess
    June 28, 2013 at 12:49 am

    I don’t know if it’s kosher to do this, but I must comment on your scintillating, nearly-year-old blog of last August 25 that George Peacock pointed out to me at https://mathbabe.org/2012/08/25/nsa-mathematicians/. It’s about your revealing interview with the NSA. This post should be re-issued or revived in light of the Snowden affair (I’m in Hong Kong by the way, and had an op-ed on it yesterday in the SCMP: http://www.scmp.com/comment/insight-opinion/article/1269617/society-cant-be-blase-about-extensive-government-spying — is there any way to hyperlink on this blog site I wonder?) We need to put a lot of thought into this whole issue — your post is a great example.


  6. cassie
    June 28, 2013 at 9:13 am

    “Third, it’s definitely not always easy to address the underlying cause of bad test performance. If you find out that the grammar of Black English affects students’ math test scores, what do you do about it?”

    These points aren’t quite relevant to the general point of the blog post, but I will mention them anyway because they are relevant to this example.

    1) In general, there is awareness that linguistic differences are a factor in the so-called “achievement gap” between African-American and white students. Unfortunately, there is so much prejudice against acknowledging that AAVE is a legitimate dialect of English and that children deserve to begin their educational experience by being taught in their native dialect/language that no one is allowed to act on this knowledge to improve the educational experience of AAVE speakers. (The difference between a dialect and a language is of course much more arbitrary than non-linguists may realize.) The work of Bill Labov is a good place to start on this:

    Click to access Ebonic%20testimony.pdf

    In other words, we do know a bit what to do about this language issue, but no one is willing to act on it.

    2) The “achievement gap” is mostly a gap in standardized test results and not a gap in educational ability/outcomes. (Good discussion of the questionable legitimacy here: http://richgibson.com/Race-Assessment-Reform.htm) Certainly though, part of this gap is illusory because test questions are retained or eliminated during the test creation process based on whether they are answered more frequently by majority or minority populations:

    “According to a declaration by Prof. Martin Shapiro of Emory University, who is both a lawyer and a psychologist, Texas uses “point-biserial correlations” in deciding which items to use and which questions to discard as the test is assembled from field-tested questions. Items with high biserial correlations are those generally answered correctly by test-takers who score high on the test overall. Items which many low-scoring students get right have lower correlations.

    To obtain higher consistency (and hence technical reliability) on the test, Texas follows the typical practice of using items with the highest correlation values. This procedure means that on items covering the same materials, the ones with the greatest gaps between high and low scorers will be used. Because minority group students typically perform less well on the test as a whole, the effort to increase reliability also increases bias against minorities.” http://www.fairtest.org/racial-bias-built-tests
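For readers curious what a point-biserial correlation actually computes: it is just the Pearson correlation between a dichotomous (right/wrong) item column and the total test score, which has a convenient closed form. A minimal sketch (the toy data is invented; real test-assembly pipelines are far more involved than this):

```python
import statistics

def point_biserial(item_correct, total_scores):
    """Point-biserial correlation between a 0/1 item column and total scores:
    (mean score of those who got it right - mean of those who got it wrong),
    scaled by the score std dev and sqrt(p*q), where p = proportion correct."""
    n = len(item_correct)
    right = [s for c, s in zip(item_correct, total_scores) if c == 1]
    wrong = [s for c, s in zip(item_correct, total_scores) if c == 0]
    p = len(right) / n
    q = 1 - p
    s = statistics.pstdev(total_scores)
    return (statistics.mean(right) - statistics.mean(wrong)) / s * (p * q) ** 0.5

# Toy example: high scorers got the item right, low scorers got it wrong,
# so the item discriminates sharply and its correlation is near 1.
print(round(point_biserial([1, 1, 1, 0, 0], [9, 8, 7, 4, 3]), 2))  # -> 0.95
```

Selecting items by this statistic rewards exactly the questions on which high and low scorers diverge most, which is the mechanism behind the bias the quoted passage describes.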


    • June 28, 2013 at 9:15 am

      Wow, that is super interesting! And it’s a point that I don’t usually think about, namely how questions are chosen for a test. Thanks!

