
What does “too big to fail” mean?

November 21, 2011

The Alternative Banking meeting yesterday was really good and interesting. During the discussion someone raised the point that when we describe a bank as “too big to fail,” we almost always measure that in terms of its assets under management, the percent of deposits it holds, or its net or gross exposures. In other words, we measure the size of the individual institution.

However, what’s just as important in terms of being “too big” is the extent to which a given bank is too interconnected, meaning it is in deals with so many other counterparties that if it went under, it would set off a cascade of contractual defaults causing chaos in the entire system. In fact Lehman was like this: too interconnected to fail. Funnily enough, I’m pretty sure Lehman wouldn’t have counted as too big to fail under many of the current definitions.

Although this question of counterparty risk is brought up consistently, it’s never adequately addressed in terms of risk; we are still more or less asking people to measure the volatility of their PnL, and we typically don’t force them to disclose their counterparty exposures in stress tests and whatnot.

What if we addressed this directly? How could we regulate the interconnectedness of a given institution? What would be the metric in the first place and what limits could we set? How could we set up a regulator to convincingly check that institutions are sticking to their interconnectedness quotas?

I’ll keep thinking about this, they are not easy questions. But I think they are important ones, because they get to the heart of the current problems more than most.

A further question brought up yesterday was, how do we know when the entire financial system is too big? I guess we don’t need any fancy metrics for that; for now, we just know.

Categories: #OWS, finance
  1. November 21, 2011 at 10:50 am

    The global SIFI (systemically important financial institution) list included State Street and Bank of New York Mellon because of their interconnectedness, not because of their size. They manage collateral and custody for securities. I am not sure whether there are good quantitative measures, but clearly regulators are thinking about what could throw a financial system into crisis or gum up the works. Unfortunately, in some cases the worry isn’t failure of the institution alone, but more straightforward issues around its operational continuity. Other institutions also have multiple roles – for example, Bank of America is big, but after it acquired Countrywide it also became the largest mortgage servicer.


  2. Allison Gilmore
    November 21, 2011 at 3:28 pm

    There are plenty of network analysis measures for this kind of thing if the data is available. That is, if you could make a graph with companies=vertices and edges whenever there’s a counter-party relationship (maybe with risk above a certain threshold?), then you could measure density of that graph in various ways, find unusually dense subgraphs, etc. To what extent does that kind of data have to be disclosed?
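A minimal sketch of this idea in plain Python, with made-up institutions and exposure numbers purely for illustration: vertices are companies, an edge exists whenever the counterparty exposure between two of them crosses a threshold, and we compute the simplest density measure (the proportion of possible edges that actually exist).

```python
# Toy counterparty graph: companies = vertices, edges whenever the
# exposure between two institutions crosses a threshold.
# All names and dollar figures below are invented for illustration.

# Hypothetical gross exposures between pairs of institutions, in $bn.
exposures = {
    ("A", "B"): 12.0,
    ("A", "C"): 0.4,
    ("B", "C"): 7.5,
    ("B", "D"): 3.1,
    ("C", "D"): 0.2,
}

THRESHOLD = 1.0  # ignore relationships below $1bn

# Build an undirected graph as an adjacency dict, keeping the
# exposure on each surviving edge as its weight.
graph = {}
for (u, v), w in exposures.items():
    if w >= THRESHOLD:
        graph.setdefault(u, {})[v] = w
        graph.setdefault(v, {})[u] = w

def density(g):
    """Proportion of possible edges that actually exist."""
    n = len(g)
    if n < 2:
        return 0.0
    m = sum(len(nbrs) for nbrs in g.values()) / 2  # each edge counted twice
    return m / (n * (n - 1) / 2)

print(sorted(graph))           # vertices surviving the threshold
print(density(graph))          # 3 of 6 possible edges -> 0.5
```

The threshold is doing real work here: set it too low and everything looks interconnected, too high and the graph fragments, so any regulatory metric built this way would need to justify that choice.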


    • November 21, 2011 at 4:00 pm

      Good idea; we’d also want to weight the edges depending on how much money is at stake or something. I’m not sure about disclosure, but this is the time to talk about it, since we are currently in the public comment period for the Volcker Rule, which is related. Can you give me some references?



      • Allison Gilmore
        November 21, 2011 at 4:29 pm

        Unfortunately I don’t know a good survey article (and I’m a little out of date on this stuff) but here are some places to start.

        A classic by Girvan and Newman:

        Anything more recent from Newman:
        (e.g. second article in the list; #58 in the list looks like a survey)

        A reading list from Shalizi:

        I also have a chapter on community structure in my MPhil thesis (2006) that I’d send to you if you’re interested. It’s expository, and probably doesn’t cover anything beyond what’s above, but it briefly collects all the definitions and algorithms I knew at the time.

        There are simpler density measures that you might guess: proportion of possible edges that actually exist in a subgraph; comparisons of vertex degree within a subgraph vs outside; path lengths within a subgraph vs outside; etc.
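A sketch of those simpler measures on a toy unweighted graph (adjacency sets), where the cluster {A, B, C} plays the role of a hypothetical dense subgraph; all names are invented for illustration:

```python
# Toy unweighted graph as adjacency sets; {A, B, C} is the
# hypothetical dense cluster we want to score.
graph = {
    "A": {"B", "C"},
    "B": {"A", "C", "D"},
    "C": {"A", "B"},
    "D": {"B", "E"},
    "E": {"D"},
}

def internal_density(g, nodes):
    """Proportion of possible edges inside `nodes` that actually exist."""
    nodes = set(nodes)
    n = len(nodes)
    if n < 2:
        return 0.0
    internal = sum(len(g[u] & nodes) for u in nodes) / 2  # each edge twice
    return internal / (n * (n - 1) / 2)

def degree_split(g, nodes):
    """Average degree within `nodes` vs. toward the rest of the graph."""
    nodes = set(nodes)
    inside = sum(len(g[u] & nodes) for u in nodes) / len(nodes)
    outside = sum(len(g[u] - nodes) for u in nodes) / len(nodes)
    return inside, outside

cluster = {"A", "B", "C"}
print(internal_density(graph, cluster))  # all 3 possible edges exist -> 1.0
print(degree_split(graph, cluster))      # high inside, low outside
```

A cluster whose internal density and internal degree dwarf its connections to the outside is exactly the kind of tightly coupled group a regulator might want to flag.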


  3. November 22, 2011 at 1:23 am

    Is http://www.newscientist.com/article/mg21228354.500-revealed–the-capitalist-network-that-runs-the-world.html relevant? The edges on this one are unweighted, but it seems to be heading in the right direction.


  4. FogOfWar
    November 24, 2011 at 10:40 am

    Critical point. One observation:

    All large banks collect and quantify their counterparty risk exposure, i.e., how much in total would I lose if BofA went under. It’s done as a matter of course at risk departments in major banks, and they spend a fair bit of effort making sure that risks from various and disparate parts of the bank are aggregated into a total number (and it is still by no means perfect; the financial world passes risk in a lot of different and hard-to-see ways).

    What if banks were required by the Fed/OCC/FDIC to compute their “inverse counterparty risk exposure” – i.e., have BofA (and other SIFIs or potential SIFIs) compute on a regular basis who would lose how much if BofA went under. This would be more challenging than the first computation (there’s more visibility on indirect risks when you’re taking them than when you’re giving them), but at least you could get some kind of number.

    In some ways it’s a double-check on the first system. A core principle of good classical accounting, or any modelling exercise for that matter, is to get information from multiple sources and see whether the results are the same or similar. If not, that’s a red flag that something’s wrong with your method of analysis.
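A minimal sketch of this inverse computation, assuming (and this is exactly the hard part the comment flags) that each bank’s own counterparty-loss numbers were collected in one place. With that table in hand, the inverse exposure is just a transpose: read off who loses what when a given bank defaults. All bank names and figures are hypothetical.

```python
# exposures[x][y] = what bank x reports it would lose if y defaulted,
# in $bn. All names and numbers are invented for illustration.
exposures = {
    "BankA": {"BankB": 5.0, "BankC": 2.0},
    "BankB": {"BankA": 1.5, "BankC": 4.0},
    "BankC": {"BankB": 0.5},
}

def inverse_exposure(exp, defaulter):
    """Who loses how much if `defaulter` goes under (the transpose view)."""
    return {x: losses[defaulter]
            for x, losses in exp.items()
            if defaulter in losses}

losses = inverse_exposure(exposures, "BankC")
print(losses)               # {'BankA': 2.0, 'BankB': 4.0}
print(sum(losses.values())) # 6.0 -> total system loss if BankC fails
```

If the totals from the direct and inverse computations disagree badly, that’s precisely the red flag the comment describes: somewhere, risk is being passed in a way one side of the ledger doesn’t see.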



