We are excited to be giving a talk at Stanford the day before the Future Law Conference. Our talk will be hosted by Stanford CodeX – The Center for Legal Informatics. If you are in the Bay Area, you can join us by signing up for free here.
A more measured article than what we have seen lately regarding the so-called ‘Robot Lawyers Thesis.’ I find it amusing that the NY Times Facebook link leads to the click-bait title while the final online version carries the more measured title (see image above).
The article certainly has an Enterprise Law / Big Law undertone. If we focus on this subset of the market for legal services, there are a number of trends that together are transforming the market. It is the combined cocktail that is potent …
Here are five of them:
(1) Legal Outsourcing
(2) Insourcing and the Growth of Corporate Legal Departments
(3) Process Improvement (Lean / Six Sigma)
(4) Automation of Legal Tasks using A.I. (a.k.a. robot lawyers)
(5) Financialization of the Law aka #Fin(Legal)Tech
Plenty has been written about legal outsourcing, about insourcing and the growth of corporate legal departments, and about the application of process improvement methods (Lean / Six Sigma).
With respect to automation, it is curious to see the Times cite the Remus / Levy paper. At best, this paper is relevant only to the automation of the fraction of the work undertaken in Big Law (and it draws on data from several years ago). They suggest an ‘automation rate’ of 2.5% per year. If that rate were to continue, it implies roughly 25% over the decade in Big Law alone. Again, this does not account for the other market dynamics highlighted above.
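As a quick back-of-the-envelope check (my own illustration, not from the Remus / Levy paper), a 2.5% annual rate sums linearly to 25% over ten years; if instead it applies to the remaining work each year, the compounded figure is slightly lower:

```python
# Back-of-the-envelope: cumulative effect of a 2.5%/year automation rate
# over a decade. Illustrative only; the 2.5% figure comes from the
# Remus / Levy discussion above.
annual_rate = 0.025
years = 10

simple_total = annual_rate * years                  # naive linear sum
compounded_total = 1 - (1 - annual_rate) ** years   # compounded on remaining work

print(f"Linear estimate:     {simple_total:.1%}")      # 25.0%
print(f"Compounded estimate: {compounded_total:.1%}")  # 22.4%
```

Either way, the order of magnitude is the same: a substantial share of Big Law work over a decade, before even considering the other market dynamics above.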
It is worth noting their data comes from a period before the advent of #MLaaS (Machine Learning as a Service). Since its inception, #MLaaS has made A.I. tools far cheaper to custom-build for specific problems. I have said recently that the best in legal tech has yet to be built (see slide 260).
So thanks to the NY Times for shedding light on this field. But let’s remember the #RobotLawyers Thesis is only a small part of the puzzle.
As a matter of strategy, some element of the #LegalInnovation agenda should be part of the strategic portfolio of every legal organization (law firm, law school, corporate legal dept, etc.). Why? Because those who do so can increase their standing in their relevant market. Only those who use the newest and best tools available will thrive in an ever-changing market.
Not sure this is actually a “setback for AI in Medicine.” Rather, long story short — it ain’t 2014 anymore … as we discuss in our talk – Machine Learning as a Service : #MLaaS, Open Source and the Future of Legal Analytics – what started with Watson has turned into significant competition among major technology industry players. Throw in some open source and you have some really strong economic forces that are upending even business models that were sound just three years ago …
From the story — “The partnership between IBM and one of the world’s top cancer research institutions is falling apart. The project is on hold, MD Anderson confirms, and has been since late last year. MD Anderson is actively requesting bids from other contractors who might replace IBM in future efforts. And a scathing report from auditors at the University of Texas says the project cost MD Anderson more than $62 million and yet did not meet its goals. The report, however, states: ‘Results stated herein should not be interpreted as an opinion on the scientific basis or functional capabilities of the system in its current state’….”
“It’s the alignment of tech and economics that is allowing all this stuff to start moving … The real roll-up of all this isn’t robot lawyers, it’s financialization, with law becoming an applied branch of finance and insurance,” says Daniel Martin Katz, professor at Illinois Tech’s Chicago Kent College of Law.
“Michael Bommarito II and Daniel Martin Katz, legal scholars at the Illinois Institute of Technology, have tried to measure the growth of regulation by analyzing more than 160,000 corporate annual reports, or 10-K filings, at the US Securities and Exchange Commission. In a pre-print paper released Dec. 29, the authors find that the average number of regulatory references in any one filing increased from fewer than eight in 1995 to almost 32 in 2016. The average number of different laws cited in each filing more than doubled over the same period.”
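To make the measurement idea concrete, here is a minimal sketch of how one might count regulatory references in filing text. The regex, the short list of acts, and the sample passage are my own illustrative assumptions; the actual Bommarito & Katz paper uses a far more complete reference taxonomy over the full corpus of 10-K filings.

```python
import re
from collections import Counter

# Illustrative pattern covering a handful of commonly cited federal acts.
# The real study identifies a much broader set of regulatory references.
ACT_PATTERN = re.compile(
    r"\b(?:Sarbanes-Oxley Act|Dodd-Frank Act|Securities Act of 1933|"
    r"Securities Exchange Act of 1934|Internal Revenue Code)\b"
)

def count_regulatory_references(filing_text: str) -> Counter:
    """Return counts of each distinct regulatory reference found."""
    return Counter(ACT_PATTERN.findall(filing_text))

# Hypothetical excerpt standing in for a 10-K risk-factors section.
sample = (
    "Compliance with the Sarbanes-Oxley Act and the Dodd-Frank Act "
    "increased our costs. We file reports under the Securities "
    "Exchange Act of 1934 and the Securities Act of 1933. "
    "The Sarbanes-Oxley Act also requires internal controls testing."
)

counts = count_regulatory_references(sample)
print(counts.most_common())
```

Averaging such counts across all filings in a year yields the per-filing reference figures quoted above.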
A long time coming for us, but here is Version 2.01 of our #SCOTUS Paper …
We have added three times the number of years to the prediction model and now predict out-of-sample nearly two centuries of historical decisions (1816-2015). Then, we compare our results to three separate null models (including one which leverages in-sample information).
Here is the abstract: Building on developments in machine learning and prior work in the science of judicial prediction, we construct a model designed to predict the behavior of the Supreme Court of the United States in a generalized, out-of-sample context. Our model leverages the random forest method together with unique feature engineering to predict nearly two centuries of historical decisions (1816-2015). Using only data available prior to decision, our model outperforms null (baseline) models at both the justice and case level under both parametric and non-parametric tests. Over nearly two centuries, we achieve 70.2% accuracy at the case outcome level and 71.9% at the justice vote level. More recently, over the past century, we outperform an in-sample optimized null model by nearly 5%. Our performance is consistent with, and improves on the general level of prediction demonstrated by prior work; however, our model is distinctive because it can be applied out-of-sample to the entire past and future of the Court, not a single term. Our results represent an advance for the science of quantitative legal prediction and portend a range of other potential applications.
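To make the out-of-sample protocol concrete, here is a minimal sketch of walk-forward evaluation against a null model: for each term, use only information from prior terms, predict the current one, then fold that term into the history. This is my own illustration with synthetic data, not the paper’s code; the actual model is a random forest over engineered case and justice features.

```python
import random

# Synthetic case outcomes by term: 1 = reverse, 0 = affirm. Reversal is
# made the more common outcome, loosely matching the Court's historical
# tendency. Purely illustrative data.
random.seed(0)
terms = {year: [1 if random.random() < 0.63 else 0 for _ in range(80)]
         for year in range(1900, 1910)}

def null_model_accuracy(terms):
    """Walk-forward null model: for each term, predict the majority
    outcome observed in *prior* terms only (no in-sample peeking)."""
    years = sorted(terms)
    history = list(terms[years[0]])     # seed history with the first term
    correct = total = 0
    for year in years[1:]:
        majority = 1 if sum(history) >= len(history) / 2 else 0
        for outcome in terms[year]:
            correct += (outcome == majority)
            total += 1
        history.extend(terms[year])
    return correct / total

acc = null_model_accuracy(terms)
print(f"Walk-forward null-model accuracy: {acc:.1%}")
```

A learned model has to beat this kind of baseline out-of-sample, which is why the paper’s comparison against an in-sample-optimized null model is the stricter test.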
When it comes to prediction, law would benefit from better applying the tools of STEM / Finance / Insurance. In that spirit, our company recently launched LexSemble, which allows for near-frictionless crowdsourcing of predictions in law (and beyond). It has many potential applications in law, including early (and ongoing) case assessment in litigation, forecasting various sorts of transactional outcomes, and predicting the actions of regulators. It also has a range of machine learning capabilities which allow for crowd segmentation, expert weighting, natural language processing on relevant documents, etc.
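As a simple illustration of the expert-weighting idea (my own sketch, not LexSemble’s actual algorithm), forecasters with stronger track records can be given more influence over the combined estimate:

```python
# Illustrative expert-weighted crowd forecast: weight each forecaster's
# probability estimate by a measure of their historical accuracy.
# All numbers below are hypothetical.
def weighted_crowd_forecast(forecasts, accuracies):
    """Combine probability forecasts as an accuracy-weighted average."""
    total_weight = sum(accuracies)
    return sum(p * w for p, w in zip(forecasts, accuracies)) / total_weight

# Three lawyers estimate the probability that a motion is granted:
forecasts = [0.70, 0.55, 0.80]
accuracies = [0.9, 0.5, 0.6]   # hypothetical historical track records

print(f"{weighted_crowd_forecast(forecasts, accuracies):.3f}")  # 0.693
```

Real systems can go further by segmenting the crowd and learning the weights from outcome data rather than fixing them by hand.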
Learn More: https://lexsemble.com/features.html