Crowdsourcing SCOTUS Paper Presentation at University of Minnesota Law School

The next leg of our SCOTUS Crowdsourcing Tour takes us to Minneapolis for a talk at the University of Minnesota Law School.  Looking forward to it!

Crowdsourcing Accurately and Robustly Predicts Supreme Court Decisions – Professors Daniel Martin Katz, Michael Bommarito & Josh Blackman


Today Michael J Bommarito II and I were live in Ann Arbor at the University of Michigan Center for Political Studies to kick off the tour for our #SCOTUS Crowd Prediction Paper.  Here is version 1.01 of the slide deck.

Workforce Implications of Machine Learning – Brynjolfsson + Mitchell in Science

Regarding the quote above, we agree.  However, it should be noted that the ‘simple substitution story’ works at the aggregate level over a period of time, with the simple assumption that the tasks which comprise current jobs can be decomposed and recombined into new jobs.  Certainly, institutions (both firms and the public sector) will need some period of time to repackage certain existing jobs.  Thus, lags are to be expected.  < Click Here to Access the Article >

Six New Videos Added to TheLawLabChannel.com

WENDY RUBAS (VILLAGEMD) 
FROM ANECDOTE TO ANALYTICS: WAYFINDING AS A MODERN GENERAL COUNSEL

JILLIAN BOMMARITO (LEXPREDICT)
IT’S 10 PM – DO YOU KNOW WHERE YOUR LEGAL RESERVES ARE?

DENNIS KENNEDY (MASTERCARD)
AGILE LAWYERING IN THE PLATFORM ERA

EDDIE HARTMAN (LEGALZOOM)
THE PRICE IS THE PROOF

NICOLE SHANAHAN (STANFORD CODEX)
TRANSACTION COSTS AND LEGAL AI: FROM COASE’S THEOREM TO IBM WATSON, AND EVERYTHING IN BETWEEN

ED WALTERS (FASTCASE)
LAW’S FUTURE FROM FINANCE’S PAST: WHAT COULD POSSIBLY GO WRONG?

Crowdsourcing Accurately and Robustly Predicts Supreme Court Decisions – By Daniel Martin Katz, Michael Bommarito & Josh Blackman (via SSRN)

ABSTRACT:  Scholars have increasingly investigated “crowdsourcing” as an alternative to expert-based judgment or purely data-driven approaches to predicting the future. Under certain conditions, scholars have found that crowdsourcing can outperform these other approaches. However, despite interest in the topic and a series of successful use cases, relatively few studies have applied empirical model thinking to evaluate the accuracy and robustness of crowdsourcing in real-world contexts. In this paper, we offer three novel contributions. First, we explore a dataset of over 600,000 predictions from over 7,000 participants in a multi-year tournament to predict the decisions of the Supreme Court of the United States. Second, we develop a comprehensive crowd construction framework that allows for the formal description and application of crowdsourcing to real-world data. Third, we apply this framework to our data to construct more than 275,000 crowd models. We find that in out-of-sample historical simulations, crowdsourcing robustly outperforms the commonly-accepted null model, yielding the highest-known performance for this context at 80.8% case level accuracy. To our knowledge, this dataset and analysis represent one of the largest explorations of recurring human prediction to date, and our results provide additional empirical support for the use of crowdsourcing as a prediction method.
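For readers who want a feel for how this kind of crowd model gets scored, here is a minimal sketch (not the paper's actual crowd construction framework, and using hypothetical per-case data) comparing a simple majority-vote crowd against a null model that always predicts reversal:

```python
# Minimal sketch: majority-vote crowd vs. an "always reverse" null model.
# The cases below are hypothetical; the paper's dataset and crowd models
# are far richer than this illustration.
from collections import Counter

# Each case: the crowd's individual predictions plus the true outcome.
cases = [
    {"predictions": ["reverse", "reverse", "affirm", "reverse"], "outcome": "reverse"},
    {"predictions": ["affirm", "affirm", "affirm", "reverse"],  "outcome": "affirm"},
    {"predictions": ["reverse", "affirm", "reverse", "reverse"], "outcome": "affirm"},
]

def majority_vote(predictions):
    """Return the most common prediction among crowd participants."""
    return Counter(predictions).most_common(1)[0][0]

crowd_correct = sum(majority_vote(c["predictions"]) == c["outcome"] for c in cases)
null_correct = sum(c["outcome"] == "reverse" for c in cases)  # null: always predict reverse

print(f"crowd accuracy: {crowd_correct / len(cases):.2f}")
print(f"null-model accuracy: {null_correct / len(cases):.2f}")
```

The real analysis works case-by-case over many terms with hundreds of thousands of crowd constructions, but the same basic comparison – crowd prediction versus a fixed baseline – is what the 80.8% case-level accuracy figure is benchmarked against.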

Applied Introduction to Machine Learning (via International Legal Technology Association Blog)

Fish & Richardson is one of the largest IP firms in the US, so it is cool to see them exploring these ideas.  If you look at this intro using Microsoft Azure, it is very much on point with a lot of what we have been saying about the mix of semi-structured data and #MLaaS (machine learning as a service) … and why we teach both an introduction to quant methods and a machine learning for lawyers course.  A rough sketch of that kind of workflow is below.
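As a loose illustration of the workflow the ILTA intro points at, here is a minimal sketch of a text-classification pipeline over semi-structured legal documents using scikit-learn.  The fields, labels, and documents are hypothetical; an MLaaS platform such as Azure ML would wrap comparable train/predict steps behind a hosted service rather than local code.

```python
# Minimal sketch: classifying semi-structured legal documents.
# Data and labels are hypothetical; a hosted MLaaS offering exposes
# similar fit/predict steps behind an API.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Semi-structured records: structured fields flattened in alongside free text.
docs = [
    "practice_area: IP | This agreement licenses the patent portfolio ...",
    "practice_area: employment | The employee agrees to a non-compete ...",
    "practice_area: IP | The trademark application covers the mark ...",
    "practice_area: employment | Severance terms are governed by ...",
]
labels = ["ip", "employment", "ip", "employment"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(docs, labels)

print(model.predict(["This license grants rights in the patent claims ..."]))
```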

Revisiting Distance Measures for Dynamic Citation Networks – Published in Physica A


I was revisiting some of our old work for this Oslo event – early on for us on our #LegalPhysics #LegalAnalytics path – published in Physica A: “By applying our sink clustering method, we obtain a dendrogram of the network’s largest weakly connected component shown in Fig. 4. However, despite their general topical relatedness, these two clusters of cases engage substantively different sub-questions, and are thus appropriately divided into separate clusters. While not a major focus of the docket of the modern court, the early court elaborated a number of important legal concepts through the lens of these admiralty decisions. For example, the red group of cases engages questions of presidential power and the laws of war, as well as general interpretations of the Prize Acts of 1812. Meanwhile, the blue cluster engages questions surrounding tort liability, jurisdiction, and the burden of proof.”
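For readers who have not worked with citation networks before, here is a loose sketch (not the paper’s actual sink-based distance measure) of pulling the largest weakly connected component of a directed citation graph with networkx and producing a dendrogram from pairwise distances.  The case names and citation edges are hypothetical, and shortest-path distance stands in for the distance measures developed in the paper.

```python
# Rough sketch: largest weakly connected component of a citation network,
# then hierarchical clustering into a dendrogram. Edges are hypothetical
# and shortest-path distance is a stand-in for the paper's measure.
import networkx as nx
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram

# Directed edges point from the citing case to the cited case.
G = nx.DiGraph([
    ("CaseA", "CaseB"), ("CaseC", "CaseB"), ("CaseD", "CaseC"),
    ("CaseE", "CaseD"), ("CaseF", "CaseE"), ("CaseG", "CaseB"),
])

# Largest weakly connected component (edge direction ignored for connectivity).
wcc = max(nx.weakly_connected_components(G), key=len)
H = G.subgraph(wcc).to_undirected()
nodes = sorted(H.nodes())

# Pairwise shortest-path distances between cases in the component.
dist = np.array([[nx.shortest_path_length(H, u, v) for v in nodes] for u in nodes])

# Condensed distance vector -> average-linkage clustering -> dendrogram.
condensed = dist[np.triu_indices(len(nodes), k=1)]
Z = linkage(condensed, method="average")
dendrogram(Z, labels=nodes)
plt.show()
```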