Livermore, Michael A., Rule by Rules
Computational Legal Studies: The Promise and Challenge of Data-Driven Legal Research
From at least Leibniz, the dream of removing human beings from the loop of legal reasoning has captured the imaginations of philosophers, lawyers, and (more recently) computer scientists. This project of law-as-computation (sometimes referred to as “computational law”) seeks to reduce the law to a set of algorithms that could be automatically executed on a computer, seamlessly translating raw inputs into legal conclusions.
Proponents of this approach generally argue that legal automation would increase legal certainty and facilitate the neutral application of law by transcending human biases and errors. The Leibnizian vision of law has been given renewed life by recent advances in machine learning and artificial intelligence. As has been frequently noted, machine applications are now able to engage in a variety of tasks that, until recently, were the exclusive domain of human beings: from language processing and driving to the mastery of strategic games such as chess, go, and poker. Across many fields, machines are now able to outperform humans in tasks once thought of as at the core of human intelligence, as when IBM’s Watson defeated Jeopardy! champion Ken Jennings. If a computer can engage in the complex tasks of natural language processing and knowledge management needed to beat the best humans in a trivia game, the natural question arises as to whether computers could perform similar feats in other domains of social life, including law.
Many law scholars may be inclined to object to Leibniz’s vision based on some version of a legal realist critique of legal determinism. For many, these objections might begin with the argument that law, as an artifact of natural language, cannot ever be amenable to machine interpretation based purely on symbolic reasoning.
The recent generation of machine learning tools shows that this objection is likely to fail. There are currently available information processing tools that are sufficiently flexible that, in principle, they could be used to render legal decision making fully deterministic, even when applied in the real world of human conflict and dispute. If this is true, and natural language does not place an absolute limit on the ability of purely automated systems to execute the law, something like the Leibnizian vision is possible. But although law-as-computation might be possible, a second question arises concerning whether it is desirable, and there are a number of normative and practical questions concerning its implementation that legal scholarship is likely to address in the coming years.
One high-level question is whether law-as-computation is even a project worth pursuing at all. To some, this may seem obvious. For some, the human discretion of judges is understood as a necessary evil in the implementation of law, at best something to be bemoaned and accepted—this was the view of Jeremy Bentham, for example (Alfange 1969). From this perspective, it may simply be impossible for a legislature to craft statutory language in the kind of detailed but supple manner that would be necessary to determine outcomes in every case. But the prospect of law-as-computation raises the possibility of self-executing statutes: human discretion is no longer necessary for the basic functioning of a legal order.
Available at SSRN: https://ssrn.com/abstract=3387701 or http://dx.doi.org/10.2139/ssrn.3387701