Long time coming for us but here is Version 2.01 of our #SCOTUS Paper …
We have added three times the number of years to the prediction model and now predict, out-of-sample, nearly two centuries of historical decisions (1816-2015). We then compare our results to three separate null models (including one that leverages in-sample information).
Here is the abstract: Building on developments in machine learning and prior work in the science of judicial prediction, we construct a model designed to predict the behavior of the Supreme Court of the United States in a generalized, out-of-sample context. Our model leverages the random forest method together with unique feature engineering to predict nearly two centuries of historical decisions (1816-2015). Using only data available prior to decision, our model outperforms null (baseline) models at both the justice and case level under both parametric and non-parametric tests. Over nearly two centuries, we achieve 70.2% accuracy at the case outcome level and 71.9% at the justice vote level. More recently, over the past century, we outperform an in-sample optimized null model by nearly 5%. Our performance is consistent with, and improves on the general level of prediction demonstrated by prior work; however, our model is distinctive because it can be applied out-of-sample to the entire past and future of the Court, not a single term. Our results represent an advance for the science of quantitative legal prediction and portend a range of other potential applications.
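The walk-forward, out-of-sample design described in the abstract (train only on data available prior to decision, then compare against a null model) can be sketched as follows. This is a minimal illustration on synthetic data with placeholder features; it is not the paper's actual feature engineering or null-model specification.

```python
# Sketch of a walk-forward, out-of-sample evaluation: for each "term",
# train a random forest only on terms that came before it, then score
# predictions against a simple null model. Data and features are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

n_cases = 600
years = np.repeat(np.arange(1990, 2010), 30)      # 20 synthetic "terms"
X = rng.normal(size=(n_cases, 5))                 # placeholder features
# Outcome loosely tied to the features so the model has signal to find.
y = (X[:, 0] + 0.5 * X[:, 1]
     + rng.normal(scale=0.5, size=n_cases) > 0).astype(int)

correct_model, correct_null, total = 0, 0, 0
for term in np.unique(years)[5:]:                 # warm up on first 5 terms
    train, test = years < term, years == term
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X[train], y[train])                   # only pre-decision data
    pred = clf.predict(X[test])
    # Null model: always predict the majority class observed so far
    # (analogous in spirit to an "always predict reverse" baseline).
    null_pred = np.bincount(y[train]).argmax()
    correct_model += (pred == y[test]).sum()
    correct_null += (null_pred == y[test]).sum()
    total += test.sum()

acc_model = correct_model / total
acc_null = correct_null / total
print(f"out-of-sample model accuracy: {acc_model:.3f}")
print(f"null (majority-class) accuracy: {acc_null:.3f}")
```

The key design point is that each term's predictions use only information available before that term, so the reported accuracy is genuinely out-of-sample rather than in-sample fit.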
We started this blog seven years ago because we thought there was insufficient attention to computational methods in law (NLP, ML, NetSci, etc.). Over the years, this blog has evolved to focus mostly on the business of law (and business more generally) and how the world is being impacted by automation, artificial intelligence, and, more broadly, information technology.
However, returning to our roots here, it is pretty interesting to see that The Economist has identified that #MachineLearning is finally coming to economics (and to political science and law as well).
Social science generally (and law, as a late follower of developments in social science) is still obsessed with causal inference (e.g., difference-in-differences, regression discontinuity, etc.). This is perfectly reasonable as it pertains to evaluating certain aspects of public policy.
However, there are many other problems in the universe that can be evaluated using tools from computer science, machine learning, etc. (and for which the tools of causal inference are not particularly useful).
In terms of the set of econ papers using ML, my bet is that a significant fraction of those papers actually come from finance (where people are more interested in actually predicting things).
In my 2013 article in the Emory Law Journal, Quantitative Legal Prediction, I outline this distinction between causal inference and prediction and identify just a small set of the potential uses of predictive analytics in law. In some ways, the paper is already somewhat dated, as the set of use cases has only grown. That said, the core points outlined therein remain fully intact …
The program committee for the 16th International Conference on Artificial Intelligence and Law has just named King's College London as the host for the biennial ICAIL conference. Mark your calendars for 2017 in London!
The example above is an algorithmic system enhanced by crowd-based teaching. It is a useful example of the creativity employed in the machine learning research community, and it is also instructive (at a broader level) of the cutting-edge approaches used across predictive analytics and machine learning.
In discussing legal prediction or the application of predictive analytics in law, we often start by highlighting The Three Forms of (Legal) Prediction: Experts, Crowds and Algorithms. These are really the only streams of intelligence one can use to forecast anything. Historically, expert-centered forecasting has almost exclusively dominated the legal industry. Yet in virtually every field of human endeavor, there have been improvements (sometimes small, sometimes large) in forecasting, driven by the move from experts to ensembles (i.e., mixtures of these respective streams of intelligence: experts, crowds, and algorithms).
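At its simplest, an ensemble of the three streams is just a weighted combination of their forecasts. The sketch below combines three hypothetical probability estimates for a single case outcome; the weights and probabilities are purely illustrative, not calibrated values from our work.

```python
# Minimal sketch of combining the three streams of predictive intelligence
# (experts, crowds, algorithms) into one ensemble forecast.
# All numbers here are hypothetical, for illustration only.
def ensemble_forecast(p_expert, p_crowd, p_algorithm,
                      weights=(0.3, 0.3, 0.4)):
    """Weighted average of three probability forecasts for the same event."""
    w_e, w_c, w_a = weights
    assert abs(w_e + w_c + w_a - 1.0) < 1e-9, "weights must sum to 1"
    return w_e * p_expert + w_c * p_crowd + w_a * p_algorithm

# Hypothetical forecasts that the Court reverses in a given case:
p = ensemble_forecast(p_expert=0.70, p_crowd=0.62, p_algorithm=0.55)
print(round(p, 3))   # 0.3*0.70 + 0.3*0.62 + 0.4*0.55 = 0.616
```

In practice the weights would be learned from each stream's benchmarked track record rather than fixed by hand, which is exactly why benchmarking forecasters matters.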
Through our company LexPredict and in our research, we have been working toward building such ensemble models across a wide range of topics. We have also engaged in a public demonstration of these ideas through Fantasy SCOTUS, our SCOTUS prediction algorithm, and the identification of non-traditional experts (i.e., our superforecasters, who, unlike most lawyers, have actually been benchmarked on their predictive performance). Finally, we have demonstrated the usefulness of SCOTUS prediction in the narrow subset of cases that actually move the securities market.