
Should lawmakers use algorithms?

August 5, 2013

Here is an idea I’ve been hearing float around the big data/tech community: embedding algorithms directly into law.

The argument in favor is pretty convincing on its face: Google has gotten its algorithms to work better and better over time by optimizing correctly and using tons of data. To some extent we can think of their business strategies and rules as a kind of “internal regulation”. So why don’t we take a page out of that book and improve our laws, and specifically our regulations, with constant feedback loops and big data?

No algos in law

There are some concerns I have right off the bat about this concept, putting aside the hugely self-serving dimension of it.

First of all, we would be adding opacity – of the mathematical modeling kind – to an already opaque system of law. It’s hard enough to read the legalese in a credit card contract without there also being a black box algorithm to make it impossible.

Second of all, whereas the incentives in Google are often aligned with the algorithm “working better”, whatever that means in any given case, the incentives of the people who write laws often aren’t.

So, for example, financial regulation is largely written by lobbyists. If you gave them a new tool, that of adding black box algorithms, then you could be sure they would use it to further obfuscate what is already a hopelessly complicated set of rules, and on top of it they’d be sure to measure the wrong thing and optimize to something random that would not interfere with their main goal of making big bets.

Right now lobbyists are used so heavily in part because they understand the complexity of their industries more than the lawmakers themselves. In other words, they actually add value in a certain way (besides in the monetary way). Adding black boxes would emphasize this asymmetric information problem, which is a terrible idea.

Third, I’m worried about the “black box” part of algorithms. There’s a strange assumption among modelers that you have to make algorithms secret or else people will game them. But as I’ve said before, if people can game your model, that just means your model sucks, and specifically that your proxies are not truly behavior-based.

So if it pertains to a law against shoplifting, say, you can’t have an embedded model which uses the proxy of “looking furtive and having bulges in your clothes.” You actually need to have proof that someone stole something.

If you think about that example for a moment, it’s absolutely not appropriate to use poor proxies in law, nor is it appropriate to have black boxes at all – we should all know what our laws are. This is true for regulation as well, since it’s after all still law which affects how people are expected to behave.

And by the way, what counts as a black box is to some extent in the eye of the beholder. It wouldn’t be enough to have the source code available, since that’s only accessible to a very small subset of the population.

Instead, anyone who is under the expectation of following a law should also be able to read and understand the law. That’s why the CFPB is trying to make credit card contracts be written in Plain English. Similarly, regulation law should be written in a way so that the employees of the regulator in question can understand it, and that means you shouldn’t have to have a Ph.D. in a quantitative field and know python.

Algos as tools

Here’s where algorithms may help, although it is still tricky: not in the law itself but in the implementation of the law. So it makes sense that the SEC has algorithms trying to catch insider trading – in fact it’s probably the only way for them to attempt to catch the bad guys. For that matter they should have many more algorithms to catch other kinds of bad guys, for example to catch people with suspicious accounting or consistently optimistic ratings.

In this case proxies are reasonable, but they don’t translate into law; rather, they translate into a ranking of workflow for the people at the regulatory agency. In other words, the SEC should use algorithms to decide which cases to pursue and on what timeframe.
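To make that concrete, here is a minimal sketch of what such a workflow ranking might look like. Everything here is made up for illustration: the firm names, the single “reported metric,” and the z-score anomaly proxy; a real regulator would use far richer features than this.

```python
import statistics

def rank_for_review(firms):
    """Rank firms for review by how anomalous their reported metric is
    relative to peers, using a toy z-score as the anomaly proxy."""
    values = [value for _, value in firms]
    mean = statistics.mean(values)
    spread = statistics.pstdev(values) or 1.0  # avoid dividing by zero
    scored = [(name, abs(value - mean) / spread) for name, value in firms]
    # Most anomalous first: these cases get pursued soonest.
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Hypothetical data: firm "D" reports returns wildly out of line with peers.
firms = [("A", 0.05), ("B", 0.06), ("C", 0.04), ("D", 0.35)]
print(rank_for_review(firms)[0][0])  # prints "D"
```

The point of the sketch is that the output is a queue for human investigators, not a verdict.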

Even so, there are plenty of reasons to worry. One could view the “Stop & Frisk” strategy in New York as following an algorithm as well, namely to stop young men in high-crime areas who make “furtive movements”. This algorithm happens to single out many innocent Black and Latino men.

Similarly, some of the highly touted New York City open data projects amount to figuring out that if you focus on looking for building code violations in high-crime areas, then you get a better hit rate. Again, the consequence of using the algorithm is that poor people are targeted at a higher rate for all sorts of crimes (key quote from the article: “causation is for other people”).

Think about this asymptotically: if you live in a nice neighborhood, the limited police force and inspection agencies never check you out since their algorithms have decided the probability of bad stuff happening is too low to bother. If, on the other hand, you are poor and live in a high-crime area, you get checked out daily by various inspectors, who bust you for whatever.

Said this way, it kind of makes sense that white kids smoke pot at the same rate as black kids but are almost never busted for it.

There are ways to partly combat this problem, as I’ve described before, by using randomization.
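As a toy illustration of what I mean by randomization (the area names, scores, and epsilon parameter here are all hypothetical), you can mix the model’s ranking with uniformly random picks, so that low-scored neighborhoods still get inspected occasionally and the feedback loop can be checked against reality:

```python
import random

def pick_inspection_targets(areas, scores, n, epsilon=0.2, seed=None):
    """Choose n distinct areas to inspect. Each slot is filled from the
    model's ranking, except that with probability epsilon it is filled
    uniformly at random, so the model's blind spots still get sampled."""
    rng = random.Random(seed)
    ranked = sorted(areas, key=lambda area: scores[area], reverse=True)
    chosen = []
    for _ in range(n):
        remaining = [area for area in areas if area not in chosen]
        if rng.random() < epsilon:
            chosen.append(rng.choice(remaining))
        else:
            chosen.append(next(a for a in ranked if a not in chosen))
    return chosen

# With epsilon=0 this collapses to pure model-driven targeting:
print(pick_inspection_targets(["A", "B", "C", "D"],
                              {"A": 3, "B": 2, "C": 1, "D": 0},
                              n=2, epsilon=0.0))  # prints ['A', 'B']
```

The random slots double as a control group: if the hit rate in randomly chosen areas isn’t much lower than in model-chosen ones, the model is mostly measuring where you’ve been looking, not where the problems are.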

Conclusion

It seems to me that we can’t have algorithms directly embedded in laws, because of their highly opaque nature together with commonly misaligned incentives. They might be useful as tools for regulators, but regulators who choose to use internal algorithms need to carefully check that their algorithms don’t have unreasonable and biased consequences, which is really hard.

Categories: data science, finance, modeling
  1. Michael Edesess
    August 5, 2013 at 11:26 am

    I couldn’t agree more — you hit all the bases. Yes, it’s a terrible idea to embed them in law, but they can be used, judiciously, to winnow out lawbreakers.


  2. JSE
    August 5, 2013 at 11:41 am

    What’s the tax code if not an algorithm embedded into law? And — just as you say — the fact that the rules are too complicated for most people (certainly for me) to fully understand creates advantage for actors with the resources to find exploitable quirks in the algorithm.


    • August 5, 2013 at 11:43 am

      Yup. And look at multinational corporations and their complexity, which is essentially a direct consequence of the tax law. http://opencorporates.com/viz/financial/index.html


    • FogOfWar
      August 5, 2013 at 1:50 pm

      Without getting too far into it, you only see a small part of the tax code, and while it is a very complicated Code it is not in fact an algorithm at all. It does seem that way from following your 1040 or plugging in TurboTax, but there are a lot of grey areas there…

      FoW


  3. EMB
    August 5, 2013 at 1:02 pm

    How would you suggest fixing gerrymandering? The only way I can think of would be embedding an algorithm (e.g. http://math.stanford.edu/~dankane/COMAP07.pdf) into law at the federal level somehow. (Of course, such a law would be incredibly hard to pass…)


  4. August 5, 2013 at 1:48 pm

    You are right and yes I have created blog post on this topic as you stated, mostly relating to healthcare. I started out about 3-4 years ago on this topic and now with the complexities that have been added on, yes I agree it would be hard to do now. What I have corrected myself of late to kind of change this idea is to perhaps at least line out some IT Infrastructures in laws as maybe a step in that direction. I have to change my ideas and thoughts too as technology moves forward and learned to eat my own dog food years ago, still doesn’t taste good but part of the way it works:)

    Health insurers and now HHS uses algorithms to catch and detect fraud and depending on how the insurers build their models, there are false positives there too, and again comes back to “context”. I feel there’s not enough folks out there trained how to work with data too and when profits and shareholders are involved, well you know that tune better than almost anyone out there:)

    There has to be something in the middle here that would be better than what we have with modeling and programming gaming the “verbiage” of laws though. I don’t know but perhaps a better specified IT Infrastructure spelled out with more specifics might offer something? I have seen doctors on claims dragged over the coals on false positives but again when used correctly in identifying patterns you do catch a lot of fraud. This really is a tough topic all the way around for sure and getting it to where it would work, I agree is hard. Right now in healthcare there’s the big debate over a lot of the analytics that are being used and granted some are written to generate profits only, so the same applies in a lot of places in business but nowhere closer to us than in healthcare.

    I have written posts about Blue Cross and United getting caught on their algorithms denying service and United is the king of this and last year settled a huge class action lawsuit to where they used their out of network model for customary charges to short reimburse doctors, hospitals and patients for 15 years! They sold the license to other insurers so they were all in it. The announcement was made in 2009 and Aetna for their part at the end of December 2012 is just now setting their short pays. Quite the model and made a few millions if not billions for insurers. This is what I mean when I thought of something that spells out the technologies used along with the related business intelligence. Thank goodness for Cuomo who finally called United on it and the AMA did ok here, but you know they sell tons of data too for profit, so they are in there.

    http://ducknetweb.blogspot.com/2009/12/ama-announced-settlement-of-class.html

    Subsidiary of United sells software belonging to a subsidiary of the AMA…at Harvard medical..and this was marketed and sold before they became subsidiaries…and Dr. Halamka is tops in Health IT so he watches as insurers and others buy up companies and as subsidiaries they can also function where the corporate conglomerate can’t…so again my idea about spelling out the IT Infrastructures when changes in ownership take place for sure.

    http://ducknetweb.blogspot.com/2013/07/one-pioneer-aco-saved-money-using.html


  5. FogOfWar
    August 5, 2013 at 1:48 pm

    Adding one point–there are (from memory) six pillars behind a “just system of law”. The ability of the people subject to the law to know the law is one of those fundamental pillars. If I don’t know what is legal and what is illegal, that’s (for reasons that become obvious on some quick reflection) an unjust law and one of the hallmarks of a totalitarian regime.

    So secret algos may or may not have advantages, but they most definitely have no place in the law and, in fact, might well be struck down as unconstitutional.

    FoW


  6. Michael Edesess
    August 5, 2013 at 9:50 pm

    Given agreement that algorithms shouldn’t be embedded in the law, what does that say about the law and about algorithms? Is it (a) laws should be written fuzzy to leave room to prosecute those who would adhere to the letter but not the intent of the law; (b) algorithms can make only a very limited set of prescriptions precise and they tend to be the ones that can be gamed; (c) we’re using the wrong algorithms; or (d) some combination.


  7. Abe Kohen
    August 6, 2013 at 9:45 am

    In the mid 1980s, Paul Brest, Professor of Law, and later Dean of Stanford Law, ran a class called Computers and Law, where he introduced algorithms via a book called The Sachertorte Algorithm.

    As a student of that class, together with my very able law school partner, we proceeded to encode laws via algorithms. If a law can be articulated it can be encapsulated as an algorithm, with appropriate boundary conditions and exception handling.

    Lotfi Zadeh of Cal (Berkeley) was a great contributor in the fields of fuzzy math and fuzzy logic.

    http://www.cs.berkeley.edu/~zadeh/scv.html

    As for lobbyists making rules, what’s that they say about the fox guarding the hens?


  8. Guest
    August 6, 2013 at 10:15 am

    Don’t forget that lawmakers THEMSELVES are algorithms …


  9. Oblique
    August 6, 2013 at 3:31 pm

    I agree with the spirit of what you wrote, but many of the details suggest misunderstandings of the actual law or exhibit a lack of awareness of legal scholarship. At the risk of sounding like I’m arguing poorly, I note upfront that I’m an attorney.

    You say:

    First of all, we would be adding opacity – of the mathematical modeling kind – to an already opaque system of law. It’s hard enough to read the legalese in a credit card contract without there also being a black box algorithm to make it impossible.

    I actually don’t find this persuasive. Law’s opacity is of a different kind than the opacity of a mathematical model. In other words, there are plenty of contexts in which adding a “black box” somewhere in a legal system would not meaningfully detract from comprehension of the legal rule. As long as it’s clear what rule you have to follow, it shouldn’t matter that the law makes reference to the outcome of an algorithmic calculation (any more than it would matter if the law made reference to following a certain recipe as the method for determining whether something was “chocolate cake”). For example, certain courts have jurisdictional requirements that there be a certain amount in controversy. Since the rule is clear (“Have $X in play or take it to Small Claims”), there’s no reason not to put a black box around how the amount is calculated.

    You continue:

    Right now lobbyists are used so heavily in part because they understand the complexity of their industries more than the lawmakers themselves. In other words, they actually add value in a certain way (besides in the monetary way). Adding black boxes would emphasize this asymmetric information problem, which is a terrible idea.

    I’m not sure I understand this argument. What is it about adding algorithms to the process that is supposed to make it easier for lobbyists to unfairly manipulate information asymmetry? That is, it seems like you contradicted yourself. If lobbyists “add value” by “understanding the complexity of their industries more than the lawmakers themselves,” wouldn’t they only become more efficient at operationalizing that understanding if they could use algorithmic methods? Why is that supposed to be bad?

    Further, you contend that:

    So if it pertains to a law against shoplifting, say, you can’t have an embedded model which uses the proxy of “looking furtive and having bulges in your clothes.” You actually need to have proof that someone stole something.

    But this is not a good example of a context in which it even makes sense to talk about using algorithms in law. The question in a criminal larceny proceeding is whether the defendant took and carried away the personal property of another with intent to permanently deprive its owner of possession. The state has to prove each of those elements beyond a reasonable doubt to get a conviction. It makes no sense to talk about a conviction by proxy – the fact that a defendant looked around furtively and had bulges in his or her clothes is probative evidence, nothing more. There’s no opportunity for an algorithmic procedure here without fundamentally changing many, many other parts of how a criminal prosecution works.

    You then go on to say:

    And by the way, what counts as a black box is to some extent in the eye of the beholder. It wouldn’t be enough to have the source code available, since that’s only accessible to a very small subset of the population. Instead, anyone who is under the expectation of following a law should also be able to read and understand the law. That’s why the CFPB is trying to make credit card contracts be written in Plain English.

    With respect, that’s not why that’s happening, and you seem to be confusing a credit card agreement with a statutory law. CFPB is trying to make credit card contracts be written in Plain English in the hopes that clearer disclosure of the awful, predatory terms of those agreements will discourage people from taking them. They simply don’t have enough power to do more, so they’re doing what they can.

    Finally, you argue that:

    Similarly, regulation law should be written in a way so that the employees of the regulator in question can understand it, and that means you shouldn’t have to have a Ph.D. in a quantitative field and know python.

    Yet apparently you don’t need a law degree to have strong opinions about what law should look like!

    This idea that “anyone who is under the expectation of following a law should also be able to read and understand the law” certainly sounds good, but I think it is ultimately misguided and a little naïve. It’s far, far more important that the law produce just results than that it be comprehensible on its face. For example, I could define all property crimes as “intentionally taking someone else’s belongings” and then just assign punishment by the cash value of the thing taken. This is relatively easy to understand. But it’s actually a worse system than a much more complicated set of definitions that recognize the numerous possible variations of such an act that really should be treated differently. Armed robbery vs. burglary. Petty larceny vs. large-scale embezzlement. Law’s complexity is much more a result of the factual complexity of the world than it is a result of the inability of its authors to speak clearly.


    • FogOfWar
      August 6, 2013 at 9:11 pm

      A few comments to the comments, in rough order:

      //FoW comments in//

      I agree with the spirit of what you wrote, but many of the details suggest misunderstandings of the actual law or exhibit a lack of awareness of legal scholarship. At the risk of sounding like I’m arguing poorly, I note upfront that I’m an attorney.
      You say:

      First of all, we would be adding opacity – of the mathematical modeling kind – to an already opaque system of law. It’s hard enough to read the legalese in a credit card contract without there also being a black box algorithm to make it impossible.

      I actually don’t find this persuasive. Law’s opacity is of a different kind than the opacity of a mathematical model. In other words, there are plenty of contexts in which adding a “black box” somewhere in a legal system would not meaningfully detract from comprehension of the legal rule. As long as it’s clear what rule you have to follow, it shouldn’t matter that the law makes reference to the outcome of an algorithmic calculation (any more than it would matter if the law made reference to following a certain recipe as the method for determining whether something was “chocolate cake”). For example, certain courts have jurisdictional requirements that there be a certain amount in controversy. Since the rule is clear (“Have $X in play or take it to Small Claims”), there’s no reason not to put a black box around how the amount is calculated.

      // I think the point is that “$X in play” is something that anyone can access and follow, but “and amount determined by a model developed by MIT graduates that you couldn’t understand” isn’t the same and would make the determination of venue largely incomprehensible to non-professionals//

      You continue:

      Right now lobbyists are used so heavily in part because they understand the complexity of their industries more than the lawmakers themselves. In other words, they actually add value in a certain way (besides in the monetary way). Adding black boxes would emphasize this asymmetric information problem, which is a terrible idea.

      I’m not sure I understand this argument. What is it about adding algorithms to the process that is supposed to make it easier for lobbyists to unfairly manipulate information asymmetry? That is, it seems like you contradicted yourself. If lobbyists “add value” by “understanding the complexity of their industries more than the lawmakers themselves,” wouldn’t they only become more efficient at operationalizing that understanding if they could use algorithmic methods? Why is that supposed to be bad?

      //I think Cathy meant that they add value to their clients, which in fact often results in subtracting value from everyone else in the world and thus is a bad thing rather than a good thing! Having watched the lobbying process in action, I’m inclined to agree with Cathy on this point.//

      Further, you contend that:

      So if it pertains to a law against shoplifting, say, you can’t have an embedded model which uses the proxy of “looking furtive and having bulges in your clothes.” You actually need to have proof that someone stole something.

      But this is not a good example of a context in which it even makes sense to talk about using algorithms in law. The question in a criminal larceny proceeding is whether the defendant took and carried away the personal property of another with intent to permanently deprive its owner of possession. The state has to prove each of those elements beyond a reasonable doubt to get a conviction. It makes no sense to talk about a conviction by proxy – the fact that a defendant looked around furtively and had bulges in his or her clothes is probative evidence, nothing more. There’s no opportunity for an algorithmic procedure here without fundamentally changing many, many other parts of how a criminal prosecution works.

      //It’s not a good example, or maybe it’s a good example of a bad example. Let me try a slightly different example: the legislature of MA passes a new law labeled “attempted shoplifting”. The necessary elements of this crime are that a person engage in a pattern of eye-focus and tempo of limb-movement behavior which is flagged by an MIT algorithm. MIT was charged by the MA legislature with designing an algorithm that detects patterns of movement associated with shoplifting, and produced the algorithm in question as a result. Once the DA establishes (1) that the surveillance equipment was working properly, (2) that the defendant was properly recorded engaging in the sequence of eye and limb movements, and (3) that the computer software was loaded properly on the system, a guilty verdict and 6 mo-2yrs sentence is applied.

      Cathy: what does the algo community think of this example? Is this the kind of thing they had in mind when they talk about algos in the law? Is this a dystopian twisting of what they intended when they talk about algos in the law?

      Oblique: assuming I’m not creating a complete straw man, what’s your reaction to this new MA law both as a lawyer and as a matter of jurisprudence (policy)? Do you think the law stands to challenge on Constitutional ground (and if not, under what provision)? What unintended consequences when the MA legislature enacts this law?

      Please limit all answers to 3 blue-book pages. A forced-curve will apply.//

      You then go on to say:

      And by the way, what counts as a black box is to some extent in the eye of the beholder. It wouldn’t be enough to have the source code available, since that’s only accessible to a very small subset of the population. Instead, anyone who is under the expectation of following a law should also be able to read and understand the law. That’s why the CFPB is trying to make credit card contracts be written in Plain English.

      With respect, that’s not why that’s happening, and you seem to be confusing a credit card agreement with a statutory law. CFPB is trying to make credit card contracts be written in Plain English in the hopes that clearer disclosure of the awful, predatory terms of those agreements will discourage people from taking them. They simply don’t have enough power to do more, so they’re doing what they can.

      //There is a fundamental distinction between statutory law (the relationship of the state to the individual) and the regulatory law the CFPB is engaged in (regulating the conduct of individual to individual). I think Cathy’s responding to the fact that the spirit of the CFPB proposals is similar to the fundamental need to have statutory law (the state) be published so that those subject to the law can know how to abide by it (no ‘secret laws’).//

      Finally, you argue that:

      Similarly, regulation law should be written in a way so that the employees of the regulator in question can understand it, and that means you shouldn’t have to have a Ph.D. in a quantitative field and know python.

      Yet apparently you don’t need a law degree to have strong opinions about what law should look like!
      This idea that “anyone who is under the expectation of following a law should also be able to read and understand the law” certainly sounds good, but I think it is ultimately misguided and a little naïve. It’s far, far more important that the law produce just results than that it be comprehensible on its face. For example, I could define all property crimes as “intentionally taking someone else’s belongings” and then just assign punishment by the cash value of the thing taken. This is relatively easy to understand. But it’s actually a worse system than a much more complicated set of definitions that recognize the numerous possible variations of such an act that really should be treated differently. Armed robbery vs. burglary. Petty larceny vs. large-scale embezzlement. Law’s complexity is much more a result of the factual complexity of the world than it is a result of the inability of its authors to speak clearly.

      //I think there are some very deep questions embedded in this point. Oblique has a very good point about ‘necessary complexity’, although I wonder if there would be disagreement with the statement “having the law written in a way that is accessible is a very important goal and departing from that goal by having opaque or inaccessible law has significant negative consequences that should be kept at the forefront of mind by regulatory and legislative framers.”

      “Law’s complexity is much more a result of the factual complexity of the world than it is a result of the inability of its authors to speak clearly.”—well, maybe it is and maybe it isn’t. Here I think Oblique is giving too much benefit of the doubt to regulatory and legislative framers, because, in my experience, there is also a shit-ton of ‘unnecessary complexity’ in statute and regulation for a whole host of reasons ranging from historical accident to plain old incompetence to overwhelmed staff to deliberate manipulation by vested third-party lobbying interests.//

      FoW


      • Oblique
        August 7, 2013 at 6:33 pm

        Fog of War,

        Great post! Some responses:

        I think the point is that “$X in play” is something that anyone can access and follow, but “and amount determined by a model developed by MIT graduates that you couldn’t understand” isn’t the same and would make the determination of venue largely incomprehensible to non-professionals

        This is technically subject-matter jurisdiction, not venue, but whatever. My point was that the calculation of $X is already complicated and, in any case in which it’s a real issue, already incomprehensible to non-professionals, and moreover very difficult for lawyers (who, spoiler, are not good at this kind of thing). If there were a way to have an algorithm figure this out, why not use it? It’s not like judges are good at math. I have, in my own practice, seen multiple probate and appellate judges repeatedly fail to understand that dividing by 3 is not the same as multiplying by 0.3.

        I think Cathy meant that they add value to their clients, which in fact often results in subtracting value from everyone else in the world and thus is a bad thing rather than a good thing! Having watched the lobbying process in action, I’m inclined to agree with Cathy on this point.

        Fair enough. If that was the point, I agree. I just literally didn’t understand what she was trying to say.

        It’s not a good example, or maybe it’s a good example of a bad example. Let me try a slightly different example: the legislature of MA passes a new law labeled “attempted shoplifting”. The necessary elements of this crime are that a person engage in a pattern of eye-focus and tempo of limb-movement behavior which is flagged by an MIT algorithm. MIT was charged by the MA legislature with designing an algorithm that detects patterns of movement associated with shoplifting, and produced the algorithm in question as a result. Once the DA establishes (1) that the surveillance equipment was working properly, (2) that the defendant was properly recorded engaging in the sequence of eye and limb movements, and (3) that the computer software was loaded properly on the system, a guilty verdict and 6 mo-2yrs sentence is applied.

        Oblique: assuming I’m not creating a complete straw man, what’s your reaction to this new MA law both as a lawyer and as a matter of jurisprudence (policy)? Do you think the law stands to challenge on Constitutional ground (and if not, under what provision)? What unintended consequences when the MA legislature enacts this law?

        Please limit all answers to 3 blue-book pages. A forced-curve will apply.

        Hahaha, no promises.

        I’m going to assume you want me to disregard that “attempted shoplifting” (= larceny, presumably) is already a crime with a definition and focus on the features of this offense as you’ve defined it.

        From the top of my head, there are many reasons to think the proposed statute might be unconstitutional:

        * Violates 1st Amendment right to freedom of expression and assembly. This is not the best argument, but it has teeth.

        * The very use of the device amounts to a warrantless search without probable cause for 4th Amendment purposes. This argument is probably a winner.

        * The definition of the crime amounts to a deprivation of one’s 5th or 14th Amendment right to substantive due process. Also probably a winner.

        * Deprivation of the 6th Amendment right to confrontation since the definition of the crime precludes cross-examining the creators of the algorithm. A bit strange, but also has teeth.

        But more importantly, the problem with this law is that it confuses the idea of evidence of attempt with the legal notion of attempt itself. If someone made a computer that could track limb-movements and eye-focus as you suggest, that would be admissible evidence to show the mens rea for an attempt. But that’s not the same thing as actually attempting to commit a crime, which requires a substantial step toward completing the physical act. I think it’s horrible policy at odds with fundamental precepts of criminal justice.

        There is a fundamental distinction between statutory law (the relationship of the state to the individual) and the regulatory law the CFPB is engaged in (regulating the conduct of individual to individual). I think Cathy’s responding to the fact that the spirit of the CFPB proposals is similar to the fundamental need to have statutory law (the state) be published so that those subject to the law can know how to abide by it (no ‘secret laws’).

        Well, I mean, first, I was simply (pedantically) pointing out that a credit card agreement — as a private contract — is not actually “law” at all, but second, I am saying that I disagree that that is the reason that CFPB is doing what it’s doing. The reason they think they should get to control how credit transactions are contracted for (vs., say, buying groceries at the store) is that the companies engage in lots of predatory practices. This is a tool for striking back; it’s not actually about making sure you understand your agreement, even if it has that effect as a side benefit.

        I think there are some very deep questions embedded in this point. Oblique has a very good point about ‘necessary complexity’, although I wonder if there would be disagreement with the statement “having the law written in a way that is accessible is a very important goal and departing from that goal by having opaque or inaccessible law has significant negative consequences that should be kept at the forefront of mind by regulatory and legislative framers.”

        I almost went on to say, though I didn’t because my post was already quite long, that it’s simply not true that this is an important goal. It’s not even close to possible to know all of the law. Even the most knowledgeable lawyer or the most venerated Supreme Court Justice does not come close to knowing or understanding all — or even most — of the law. It’s simply a fallacy to act as though all of us operate in a society whose legal rules we basically understand. Rather, the happy reality is that our lives are simple and constrained enough that we simply do not come into contact with the vast majority of areas of the law in our day-to-day existences. I mean, do you have any idea how the Securities and Exchange Act works? What’s in Article 3 of the Uniform Commercial Code (which governs, among other things, checks)? What the legal elements of the offense of arson are? What the exceptions to the hearsay rule are? To say nothing of regulatory regimes that affect products you use every day. In other words, even if these could be written better, why would it matter? No one would read them anyway, and for good reason. For any legal matter of any real importance, you’re going to want a lawyer regardless.

        “Law’s complexity is much more a result of the factual complexity of the world than it is a result of the inability of its authors to speak clearly.”—well, maybe it is and maybe it isn’t. Here I think Oblique is giving too much benefit of the doubt to regulatory and legislative framers, because, in my experience, there is also a shit-ton of ‘unnecessary complexity’ in statute and regulation for a whole host of reasons ranging from historical accident to plain old incompetence to overwhelmed staff to deliberate manipulation by vested third-party lobbying interests.

        Could you cite a few examples? I doubt very much that the root causes of these kinds of complexity could be remedied by rewriting the same rules in simpler language.


        • FogOfWar
          August 8, 2013 at 5:34 pm

          Read your reply with 2 minutes before I have to run, but…

          First example off the top of my head: the classic “there are 12 different definitions of a ‘dependent’ in the tax code.” That one isn’t nefarious, just clutter and no incentive for anyone on the hill to clean it up. It’s better now, but it was like that for decades!

          I’d agree with you that the law probably cannot be completely comprehensible to the citizenry, but that (to my mind) isn’t a reason to stop trying to make it closer to that goal! There does come a point where oversimplifying the substantive law in the name of accessibility would lose my support, but, for example, the whole “plain English” movement of the last (how many now?) decades is an example of at least moving in the right direction…


  10. August 6, 2013 at 6:23 pm

    What do you mean by an algorithm? I would think that any numerical parameter of a law that is pegged to (say) the consumer price index is already determined algorithmically. You could imagine only slightly more complicated variants of such laws (e.g. if the home ownership rate is < x, increase the limit on the mortgage interest deduction, else decrease it). Of course these algorithms are all very simple. If you start adding in loops, you can start writing laws with undecidable behavior…

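    [The commenter’s “slightly more complicated variant” can be made concrete. A minimal Python sketch of a statutory parameter adjusted by a feedback rule; the target rate, step size, and starting limit are all invented for illustration and come from no actual statute:

    ```python
    # Sketch of a law whose numerical parameter is set algorithmically:
    # if the home ownership rate is below a target, raise the mortgage
    # interest deduction limit; otherwise lower it. All numbers are
    # hypothetical.

    def adjust_mortgage_deduction_limit(current_limit: float,
                                        home_ownership_rate: float,
                                        target_rate: float = 0.65,
                                        step: float = 10_000.0) -> float:
        """Move the deduction limit one step toward the ownership target."""
        if home_ownership_rate < target_rate:
            return current_limit + step
        return current_limit - step

    print(adjust_mortgage_deduction_limit(750_000, 0.63))  # -> 760000.0
    ```

    The rule is trivially auditable, which is the commenter’s point: simple, published update rules are already “algorithms in law,” and are nothing like a black box.]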

  11. Oblique
    August 7, 2013 at 6:39 pm

    I appear to have failed at the blockquote tag. If someone could fix that, it would be much appreciated.


  12. August 17, 2013 at 5:54 am

    An algorithm is a set of rules for taking a set of inputs and either assessing the inputs for certain qualities (truth/falsity; guilt/innocence) or converting them into an output (income -> tax liability)…

    And “rule of law” implies that the laws are known to everyone

    Those advocating adoption of formal algorithmic expressions into law implicitly and usually very explicitly state that the algo’s are NOT secret… And that the advantage of making law more algorithmic will be to make it amenable to the formal rules and tools that have been developed by “information theorists” thereby making it more transparent in effects and more internally robust and consistent… All of which will reduce the power of the “insiders” over the governed

    So, yes, hidden and proprietary algo’s in law would be a disaster… But no one is advocating that…

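    [The “income -> tax liability” conversion in the comment above is the textbook case of law that is already an algorithm. A minimal sketch, assuming a made-up marginal bracket schedule; the thresholds and rates are illustrative, not any real tax code:

    ```python
    # Sketch of "income -> tax liability" as an explicit algorithm: a
    # marginal-bracket schedule. Each rate applies only to the slice of
    # income falling inside its bracket. Numbers are hypothetical.

    BRACKETS = [(0, 0.10), (10_000, 0.20), (50_000, 0.30)]  # (lower bound, marginal rate)

    def tax_liability(income: float) -> float:
        """Sum each marginal rate times the portion of income in its bracket."""
        tax = 0.0
        for i, (lo, rate) in enumerate(BRACKETS):
            hi = BRACKETS[i + 1][0] if i + 1 < len(BRACKETS) else float("inf")
            if income > lo:
                tax += (min(income, hi) - lo) * rate
        return tax
    ```

    Written this way, the rule is published, inspectable, and testable by anyone, which is exactly the transparency the commenter says advocates have in mind.]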

Comments are closed.