Before I begin this morning’s rant, I need to mention that, as I’ve taken on a new job recently and I’m still trying to write a book, I’m expecting to not be able to blog as regularly as I have been. It pains me to say it but my posts will become more intermittent until this book is finished. I’ll miss you more than you’ll miss me!
On to today’s bullshit modeling idea, which was sent to me by both Linda Brown and Michael Crimmins. It’s a new model built in part by Andrei Kirilenko, the former chief economist of the Commodity Futures Trading Commission (CFTC), who is now a finance professor at Sloan. In case you don’t know, the CFTC is the regulator in charge of futures and swaps.
I’ll excerpt this New York Times article which describes the model:
The algorithm, he says, uncovers key word clusters to measure “regulatory sentiment” as pro-regulation, anti-regulation or neutral, on a scale from -1 to +1, with zero being neutral.
If the number assigned to a final rule is different from the proposed one and closer to the number assigned to all the public comments, then it can be inferred that the agency has taken the public’s views into account, he says.
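For concreteness, here’s a minimal sketch of the comparison logic described in that excerpt. The function name and the specific scores are my own invention, not anything from Kirilenko’s actual model:

```python
def moved_toward_comments(proposed, final, comments):
    """Infer whether the agency 'listened', per the article's logic.

    All scores live on the [-1, +1] sentiment scale, with 0 neutral.
    Returns True if the final rule's score differs from the proposed
    rule's AND sits closer to the aggregate public-comment score.
    """
    return final != proposed and abs(final - comments) < abs(proposed - comments)

# Toy numbers: proposed rule at +0.4, comments aggregate at -0.2,
# final rule at +0.1 -- the model would infer the agency took the
# public's views into account.
print(moved_toward_comments(0.4, 0.1, -0.2))
```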
- I know really smart people who use similar sentiment algorithms on word clusters. I have no beef with the underlying NLP algorithm.
- What I do have a problem with is the apparent assumption that the “number assigned to all the public comments” makes any sense, and in particular whether it takes into account “the public’s view”.
- It sounds like the algorithm dumps all the public comment letters into a pot and mixes them together to get an overall score. The problem with this is that industry insiders and their lobbyists overwhelm public commenting systems.
- For example, go take a look at the list of public letters for the Volcker Rule. It’s not unlike this graphic on the meetings of the regulators on the Volcker Rule:
- Besides dominating in sheer number of letters, I’ll bet each letter is also much longer on average when it comes from parties with very fancy lawyers.
- Now think about how the NLP algorithm will deal with this in a big pot: it will be dominated by the language of the pro-industry insiders.
- Moreover, if such a model were directly used – say, to check that public comment letters had been taken into account in a given case – lobbyists would have even more reason to overwhelm public commenting systems.
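To make the pooling worry concrete, here’s a toy calculation with entirely made-up letters and word counts. Counting letters equally, sentiment looks roughly balanced; weighting by words – which is effectively what a dump-everything-in-one-pot pipeline computes – the long industry letters dominate:

```python
# Entirely hypothetical comment letters: (sentiment score, word count).
letters = [
    (+0.8, 150),    # short pro-regulation letter from a citizen
    (+0.7, 200),    # another short citizen letter
    (-0.6, 9000),   # long anti-regulation letter from a law firm
    (-0.5, 12000),  # even longer letter from a lobbying shop
]

# Counting letters equally, sentiment looks mildly pro-regulation.
per_letter = sum(score for score, _ in letters) / len(letters)

# Weighting by word count, the industry language dominates.
total_words = sum(words for _, words in letters)
pooled = sum(score * words for score, words in letters) / total_words

print(per_letter, pooled)  # per-letter is positive, pooled is sharply negative
```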
The take-away is that this is an amazing example of a so-called objective mathematical model set up to legitimize the watering down of financial regulation by lobbyists.
Update: I’m willing to admit I might have spoken too soon. I look forward to reading the paper on this algorithm and taking a deeper look instead of relying on a newspaper.
Here are two things you might have some trouble believing if you read the papers regularly and find yourself convinced we are in a housing recovery. First, there are still huge numbers of homeowners on the brink of, or just starting to enter, foreclosure. Second, many of the banks foreclosing on those properties do not have clear legal ownership over the mortgages in question.
Obama should have addressed the first problem through TARP way back in 2008. In fact mortgage modification was an intention of TARP that was promised to Congress when it passed the second half of the money, but it never happened. Instead Obama came up with the garbage called HAMP, which has been dreadfully implemented and has possibly done net harm.
Even without Obama, we should have seen a willingness to renegotiate debt. After all, we can negotiate credit card debt, and businesses routinely renegotiate their mortgages. Why are private home mortgages kept airtight? I guess the banks see it as in their interest not to allow negotiations, and whatever the banks want, the banks seem to get.
The second problem, which is essentially one of botched paperwork (explained here), is probably technically the job of some regulator to deal with, but nobody wants to “blow up the system” so nobody is dealing with it. This is especially ironic considering how often we hear about the so-called sanctity of the contract.
The result of these huge looming problems is that the banks got bailed out while the system never got cleared of its actual debt and paperwork problems.
Enter the concept of using eminent domain to force these two issues. Strike Debt, an offshoot of Occupy Wall Street, is pushing this in a few nationwide court cases, for example in Richmond, California.
More recently, and what inspired this post this morning, is a plan cooked up by Strike Debt to use eminent domain to force courts to clear up broken chains of title, written up by Hannah Appel and JP Massar.
This idea is on its face unappealing, given the history of that crude tool eminent domain. Everyone I meet has their own stories, but start here for a short list of eminent domain abuses.
And it might not work, either. A district judge might not want to deal with the complexity of the issue and might just let the bad paperwork through.
For that matter, many concerns have been voiced about the practicality of this approach, and one that deeply resonates with me is the idea of using it against current mortgages – i.e. mortgages where the homeowner is up to date on payments. Using eminent domain in such a case could set a precedent whereby, even though someone has been taking care of their property, the city uses eminent domain to condemn it based on historical data implying the owner is likely to neglect it. That would not be acceptable. As far as I know, though, the current plan only uses mortgages where there have been missed payments.
The bottom line is this: we’re in a situation where all these homeowners are being crushed with unreasonable monthly payments and hugely inflated principals, where the legal ownership of the mortgage itself is under question, and nobody seems to want to do squat about it. Maybe it’s time to use a crude tool against a cruel enemy.
A while back I was talking to some math people about how credit default swaps (CDSs), by their very nature, contain risk that is generally speaking undetectable with standard risk models like Value-at-Risk (VaR).
It occurred to me then that I could put it another way: perhaps credit default swaps were deliberately created by someone who knew all about the standard risk models and wanted to game the system. VaR was commercialized in the mid-1990s and CDSs existed around the same time, but they didn’t take off until a decade or so later, after VaR became super widespread, which makes the theory hard to prove without knowing the actors.
For that matter it is reasonable to assume something less deliberate occurred: that a bunch of weird instruments were created and those which hid risk the most thrived, kind of an evolutionary approach to the same theory.
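To see how VaR can be blind to this kind of risk, here’s a toy Monte Carlo sketch – not anyone’s actual model, and all the numbers are invented. A protection seller on a CDS-like contract collects a small premium almost always and takes a huge loss rarely; because the loss probability sits below the VaR cutoff, the 95% VaR sees essentially nothing:

```python
import random

random.seed(0)  # make the toy simulation deterministic

# One-period P&L for a protection seller on a CDS-like contract:
# collect a small premium almost always, eat a huge loss rarely.
premium, loss, p_default = 1.0, -100.0, 0.02

pnl = [loss if random.random() < p_default else premium for _ in range(10_000)]

# 95% VaR is the 5th-percentile outcome. Defaults happen only ~2% of
# the time, so the 5th percentile is still just the premium: VaR
# reports essentially no risk.
var_95 = sorted(pnl)[int(0.05 * len(pnl))]

# Meanwhile the worst case is catastrophic.
worst = min(pnl)

print(var_95, worst)
```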
I was reminded recently of this conspiracy theory when Joe Burns talked to my Occupy group last Sunday about his recent book, Reviving the Strike. He talked about the history of strikes as a tool of leverage, and how much less frequently we’ve seen large-scale strikes and industry-wide strikes. He made the point that the legality of strikes has historically been uncorrelated to the existence of strikes – that strikers cannot necessarily wait for the legal system to catch up with the needs of the worker. Sometimes strikers need to exert pressure on legislation.
Anyhoo, one question that came up in Q&A was how, in this world of subsidiaries and franchises, can workers strike against the upper management with control over the actual big money? After all, McDonalds workers work for franchisees who are often not well-off. The real money lives in the mother company but is legally isolated from the franchises.
Similarly, with Walmart, there are massive numbers of workers who don’t work directly for Walmart but do work in the massive supply chain network set up and run by Walmart. They would like to hold Walmart responsible for their working conditions. How does that work?
It seems like the same VaR/CDS story as above. Namely, the legal structure of McDonalds and Walmart almost seems deliberately set up to shield them from legal responsibility to disgruntled workers. So maybe first came the legal system, and then lawyers set up the legal construction of the supply chain and its workers so that striking workers could only strike against powerless figures – especially in the McDonalds case (since Walmart has plenty of workers working for the mother company as well).
Last couple of points. First, only long-term, powerful enterprises can go to the trouble of gaming such large systems. It’s an artifact of the age of the corporation.
And finally, I feel like it’s hard to combat. We could try to improve our risk models or our legal system, but that would probably make them even more complicated, which in turn gives massive corporations more ways to game them. Not to be a cynic, but I don’t see a solution besides somehow separately sidestepping our personal risk exposure to these problems.
I am looking into the history of anti-discrimination laws like the Equal Credit Opportunity Act (ECOA): how it got passed, and hopefully finding data to measure how well it’s worked since it passed in 1974.
Putting aside the history of this legislation for now – although it is fascinating – I’d like to talk this morning about this paper from 1989 written by Gregory Elliehausen and Thomas Durkin from the Board of Governors of the Federal Reserve System, which discusses the abstract question of how to define and regulate discrimination.
This came up because when Congress passed ECOA, they left it to the regulators – in this case the Federal Reserve – to decide exactly how to write the rules, which pertain to credit decisions (think credit card offerings). From the article:
The term “discriminate against an applicant” was defined in Section 202.2(n) as meaning “to treat an applicant less favorably than other applicants.” By itself, this rule does not offer an unquestionably unambiguous operational definition of socially unacceptable discrimination in a screening context where limited selections are constantly being made from a longer list of applicants.
The paper then goes on to list three separate regulatory approaches to anti-discrimination regulation. I have found these three definitions really interesting and thought-provoking. I won’t even go into the rest of the paper in this post because I think just this list of three approaches is so interesting. Tell me if you agree.
1) The “effects-based” approach to regulation. This is the idea that we don’t need to know how you actually make credit decisions, but if the effect is that no women or minorities ever get credit from you, then you’re doing something wrong. If you want to be really extreme in this category you get to things like quotas. If you want to be less extreme you think about studying applications that are similar except for one thing like race or gender, kind of like the male vs. female science lab application test studied here. Needless to say, effects-based regulation is not in use; it’s considered too extreme.
2) The “intent-based” approach to regulation. This is where you have to prove intent to discriminate. It’s super rare that you can do that, because it’s super rare that people aiming to discriminate are dumb enough to make it obvious. Far easier to embed discrimination in a model where you can maintain plausible deniability. Although intent-based regulation is considered too extreme in the other direction, it seems to be what surfaces when there’s a legal case (although I’m not a legal expert).
3) The “practices-based” approach to regulation. This is where you make a list of acceptable or unacceptable practices in extending credit and hope you cover everything. So for example you aren’t allowed to explicitly use race or marital status or governmental assistance status in your credit models. This is what the Fed finally decided to use, and it makes sense in that it’s easy to implement, but of course the lists change over time, and that’s the key issue (for me anyway): we need to update those lists in the age of big data.
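Here’s what a practices-based check looks like in miniature. The prohibited-feature list below is illustrative, not the actual legal list, and notice that a proxy like zip code sails right through – which is exactly the list-updating problem I mean:

```python
# Illustrative prohibited-basis list, in the spirit of a
# practices-based rule -- NOT the actual legal list.
PROHIBITED = {"race", "sex", "marital_status", "religion", "public_assistance"}

def check_model_features(features):
    """Reject a credit model that explicitly uses a prohibited input."""
    violations = PROHIBITED & set(features)
    if violations:
        raise ValueError(f"prohibited features: {sorted(violations)}")
    return True

# This passes the check -- even though zip_code may proxy for race,
# which is what a static list misses in the age of big data.
check_model_features(["income", "fico_score", "zip_code"])
```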
Tell me if you think there’s yet another approach not mentioned. And note that these regulatory approaches correspond to different ways of thinking about, or even defining, discrimination, which is itself a great reason to list them comprehensively. I think my future discussions about what constitutes discrimination will be informed by which of the above approaches would pick up on a given instance.
Every now and then you see a published result that has exactly the right kind of data, in sufficient amounts, to make the required claim. It’s rare but it happens, and as a data lover, when it happens it is tremendously satisfying.
Today I want to share an example of that happening, namely with this paper entitled Regulating Consumer Financial Products: Evidence from Credit Cards (hat tip Suresh Naidu). Here’s the abstract:
We analyze the effectiveness of consumer financial regulation by considering the 2009 Credit Card Accountability Responsibility and Disclosure (CARD) Act in the United States. Using a difference-in-difference research design and a unique panel data set covering over 150 million credit card accounts, we find that regulatory limits on credit card fees reduced overall borrowing costs to consumers by an annualized 1.7% of average daily balances, with a decline of more than 5.5% for consumers with the lowest FICO scores. Consistent with a model of low fee salience and limited market competition, we find no evidence of an offsetting increase in interest charges or reduction in volume of credit. Taken together, we estimate that the CARD Act fee reductions have saved U.S. consumers $12.6 billion per year. We also analyze the CARD Act requirement to disclose the interest savings from paying off balances in 36 months rather than only making minimum payments. We find that this “nudge” increased the number of account holders making the 36-month payment value by 0.5 percentage points.
That’s a big savings for the poorest people. Read the whole paper, it’s great, but first let me show you some awesome data broken down by FICO score bins:
This data, and the results in this paper, fly directly in the face of the myth that if you regulate away predatory fees in one way, they will pop up in another way. That myth is based on the assumption of a competitive market with informed participants. Unfortunately the consumer credit card industry, as well as the small business card industry, is not filled with informed participants. This is a great example of how asymmetric information causes predatory opportunities.
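For anyone unfamiliar with the difference-in-difference design mentioned in the abstract, here’s a cartoon version with entirely made-up numbers. The paper compares consumer cards (covered by the CARD Act) to small business cards (not covered), so changes common to both groups wash out:

```python
# Made-up average fee costs (% of average daily balances), before and
# after the CARD Act, for the treated group (consumer cards, covered
# by the Act) and the control group (small business cards, not covered).
consumer_before, consumer_after = 3.0, 1.3
business_before, business_after = 2.8, 2.7

# Difference-in-differences: the treated group's change minus the
# control group's change, netting out market-wide trends.
did = (consumer_after - consumer_before) - (business_after - business_before)

print(did)  # about -1.6 percentage points attributable to the Act
```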
I’m just recovering from a killer flu that had me wheezing and miserable for 5 days. I have a whole backlog of rants and vents but no time this morning to even start, so instead let me suggest you read this article (hat tip Chris Wiggins) about a New York Times reporter who crashed the yearly party of Kappa Beta Phi, a Wall Street secret society. Pretty amazing, if true.
At first glance this seems totally weird, for two reasons. First, debit cards by construction have no ability to go below zero, so they are not directly relevant to the concept of credit, which is by definition when you borrow something and then hopefully pay it back. Second, my first, second, and third intuitive response to credit bureaus is to give them less information, not more. I already think they have way too much data about us. Their recent foray into using social media data is super creepy, for example, and threatens the “no outdated information” rule of the Fair Credit Reporting Act.
I watched Orman explain her reasoning about her card, which I believe launched in 2012, and I kind of get her points about why she thinks this is a good idea (even though she clearly has a conflict of interest here): some people have trouble with credit cards, and for that reason they should use debit cards or cash, but cash has no data trail, and thus people who deal only in cash can never improve their credit scores enough to qualify for things like mortgages and car loans, which they may well be able to handle.
Here’s the thing, though. Her card actually has bad terms, and loads of fees, and it doesn’t look like FICO is actually going to use data from her cards to build peoples’ credit scores after all. Oh well.
Here’s an idea, which is not original at all but hasn’t gotten momentum because it doesn’t make bankers money: instead of shitty and expensive debit cards, let’s have the Post Office open a national bank and let people put money for free on their phones. Systems like this already exist in Kenya (Matt Stoller calls it a “M-Pesa style mobile cash system” in this fine post about the Post Office Bank idea) and in Ghana, and they work great, and let me once again mention there are no fees. It’s a free service as long as you have a cell phone, and it certainly doesn’t have to be a fancy smart phone.
In the short term, such a system would free poor people from getting increasingly ripped off by banks and companies with their crappy pre-paid debit cards. It might not give them stellar credit scores, but I’d argue that it’d still be an improvement.
In the asymptotic limit of that system, we’d have a pretty sharp division between people who live in the world of credit, with good FICO scores, and people who deal in cash and mobile cash, with bad or nonexistent FICO scores. It would be hard to get a good mortgage or car loan if you are in the latter group, but that’s already true (unless you count the kind of mortgages Wells Fargo gave to minorities to rip them off).
In the longer term, if we wanted to give credit scores to people who deal in cash, we could use their mobile cash records to deem their spending habits “credit worthy”.
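What might that look like? Here’s a hypothetical sketch – every field name and record below is invented for illustration – of pulling a creditworthiness signal out of mobile-cash records:

```python
# Hypothetical mobile-cash transaction records.
records = [
    {"type": "rent", "amount": 800, "on_time": True},
    {"type": "rent", "amount": 800, "on_time": True},
    {"type": "utility", "amount": 60, "on_time": False},
    {"type": "rent", "amount": 800, "on_time": True},
]

# A simple signal: what fraction of recurring bills were paid on time?
bills = [r for r in records if r["type"] in {"rent", "utility"}]
on_time_ratio = sum(r["on_time"] for r in bills) / len(bills)

print(on_time_ratio)  # 0.75 -- the kind of signal that could stand in for FICO
```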
In the much longer term, it would be great if we stopped pretending (I’m looking at you Suze Orman) that having a bad FICO score is a moral failing: it’s really mostly a sign of being broke. If we want to help people get out of debt spirals, then let’s talk about a Basic Guaranteed Income.