Intentional discrimination versus disparate impact
I’m paying close attention to the Supreme Court’s coming decision on the Fair Housing Act. This morning’s New York Times editorial does a good job explaining the issues, including the concern that Chief Justice Roberts seems to think we’ve moved past racial discrimination in this country.
The burning question is whether housing developments and the like are responsible only for intentionally discriminating against individuals, or whether they are responsible in a more general, statistical sense for having a disparate impact on groups of people. The New York Times, like me, hopes for the broader reading, consistent with decisions from 11 courts of appeals over the last 40 years. From the Times:
The ability to show discriminatory effect has only become more important as intentional discrimination has become harder to prove. To take one prominent example, the Justice Department relied on it to negotiate the largest-ever fair-lending settlement — $335 million — with Bank of America in 2011. The bank’s mortgage unit, Countrywide Financial, had charged higher average fees and interest rates to black and Latino borrowers than to whites with the same credit risk, a practice that former assistant attorney general Thomas Perez called “discrimination with a smile.”
This case is focused on housing, but of course it could generalize to all sorts of other systems, including job applications and credit applications, among others.
If we stick to "intentional discrimination" only, we open the door to (even more) widespread use of algorithmic decision-making that produces unfair and discriminatory results. And as it turns out, it’s easy to produce a model that effectively discriminates without ever being told to: all it needs is a variable that acts as a proxy for race.
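To see how easy it is, here's a minimal sketch with entirely invented, synthetic data: a "facially neutral" approval rule that never looks at group membership, only at zip code. Because residential segregation makes zip code a proxy for group, the rule produces sharply different approval rates for the two groups, which is disparate impact with no intentional discrimination anywhere in the code. (The groups, probabilities, and rule are all hypothetical, chosen for illustration.)

```python
import random

random.seed(42)

# Synthetic population: group A lives in zip 1 90% of the time,
# group B only 10% of the time -- a stand-in for residential segregation.
applicants = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    p_zip1 = 0.9 if group == "A" else 0.1
    zip_code = 1 if random.random() < p_zip1 else 2
    applicants.append((group, zip_code))

# A "facially neutral" model: approve anyone from zip 1.
# Note that race/group never appears as an input.
def approve(zip_code):
    return zip_code == 1

# Approval rate by group.
rates = {}
for g in ("A", "B"):
    pool = [z for grp, z in applicants if grp == g]
    rates[g] = sum(approve(z) for z in pool) / len(pool)

# Disparate impact ratio: the four-fifths rule of thumb (borrowed from
# employment law) flags a ratio below 0.8 as evidence of adverse impact.
ratio = min(rates.values()) / max(rates.values())
print(rates, round(ratio, 2))
```

Group A ends up approved at roughly nine times the rate of group B, even though the rule is "just about zip codes." That is exactly the kind of pattern a disparate-impact standard can catch and an intent-only standard cannot.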
And if you are not in charge of your own system, then who is?