Over on the Public Dashboard of the Good Judgment Project, their aggregated Superforecasters have been predicting a wide range of geopolitical and other events, including critical questions associated with COVID-19. A key question is: "When will enough doses of FDA-approved COVID-19 vaccine(s) to inoculate 25 million people be distributed in the United States?" Note: for purposes of this prediction, "compassionate use" and "emergency use" authorizations count as approval.
Tag: crowd sourcing
Crowdsourcing SCOTUS Paper Presentation at University of Minnesota Law School
The next leg of our SCOTUS Crowdsourcing Tour takes us to Minneapolis for a talk at the University of Minnesota Law School. Looking forward to it!
Crowdsourcing Accurately and Robustly Predicts Supreme Court Decisions – Professors Daniel Martin Katz, Michael Bommarito & Josh Blackman
Today Michael J Bommarito II and I were live in Ann Arbor at the University of Michigan Center for Political Studies to kick off the tour for our #SCOTUS Crowd Prediction Paper. Here is version 1.01 of the slide deck!
SCOTUS Crowdsourcing Paper Road Show (Presentation at University of Michigan Center for Political Studies / ISR)
Excited to take the show on the road next week where we will be presenting our SCOTUS Crowdsourcing Paper at University of Michigan Center for Political Studies and at the University of Minnesota Law School.
Our SCOTUS Crowdsourcing Paper Featured in Augur Development Update
Excited that our paper was highlighted in the Augur Weekly Development Update. The paper uses #SCOTUS as a use case, but the formalization is stated in general form, with implications for #Crypto #Oracles #Crowdsourcing.
Mike and I are pretty hot on Augur, Ethereum and their potential applications across a wide set of use cases – so we are happy to see this recognition of our work.
Wisdom of the Crowd Accurately Predicts Supreme Court Decisions (MIT Technology Review)
See coverage of our paper in MIT Technology Review and access paper on arXiv or SSRN
Crowdsourcing Accurately and Robustly Predicts Supreme Court Decisions — By Daniel Martin Katz, Michael Bommarito, Josh Blackman (via SSRN)
ABSTRACT: Scholars have increasingly investigated “crowdsourcing” as an alternative to expert-based judgment or purely data-driven approaches to predicting the future. Under certain conditions, scholars have found that crowdsourcing can outperform these other approaches. However, despite interest in the topic and a series of successful use cases, relatively few studies have applied empirical model thinking to evaluate the accuracy and robustness of crowdsourcing in real-world contexts. In this paper, we offer three novel contributions. First, we explore a dataset of over 600,000 predictions from over 7,000 participants in a multi-year tournament to predict the decisions of the Supreme Court of the United States. Second, we develop a comprehensive crowd construction framework that allows for the formal description and application of crowdsourcing to real-world data. Third, we apply this framework to our data to construct more than 275,000 crowd models. We find that in out-of-sample historical simulations, crowdsourcing robustly outperforms the commonly-accepted null model, yielding the highest-known performance for this context at 80.8% case level accuracy. To our knowledge, this dataset and analysis represent one of the largest explorations of recurring human prediction to date, and our results provide additional empirical support for the use of crowdsourcing as a prediction method. (via SSRN)
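To make the idea of a "crowd model" outperforming a null model concrete, here is a minimal sketch of majority-vote crowd aggregation on simulated data. This is not the paper's crowd construction framework; the forecaster count, individual accuracy, base rate, and the always-predict-reversal null model are illustrative assumptions only.

```python
import random

random.seed(0)

# Hypothetical data: each case has a true outcome (1 = reverse, 0 = affirm)
# and a list of individual predictions. All numbers here are made up for
# illustration; they are not values from the paper.
cases = []
for _ in range(1000):
    truth = 1 if random.random() < 0.63 else 0  # assumed base rate favoring reversal
    # Assume each of 25 forecasters predicts correctly with probability 0.7
    votes = [truth if random.random() < 0.7 else 1 - truth for _ in range(25)]
    cases.append((truth, votes))

def majority_vote(votes):
    """Aggregate individual predictions into one crowd prediction (ties -> reverse)."""
    return 1 if sum(votes) * 2 >= len(votes) else 0

crowd_correct = sum(majority_vote(v) == t for t, v in cases)
null_correct = sum(t == 1 for t, _ in cases)  # null model: always predict reversal

print(f"crowd accuracy: {crowd_correct / len(cases):.3f}")
print(f"null-model accuracy: {null_correct / len(cases):.3f}")
```

Even this toy aggregation shows the Condorcet effect at work: modestly accurate individual forecasters, combined by simple majority vote, comfortably beat the always-reverse baseline.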
Noise: How to Overcome the High, Hidden Cost of Inconsistent Decision Making (via Harvard Business Review)
From the article: “The prevalence of noise has been demonstrated in several studies. Academic researchers have repeatedly confirmed that professionals often contradict their own prior judgments when given the same data on different occasions. For instance, when software developers were asked on two separate days to estimate the completion time for a given task, the hours they projected differed by 71%, on average. When pathologists made two assessments of the severity of biopsy results, the correlation between their ratings was only .61 (out of a perfect 1.0), indicating that they made inconsistent diagnoses quite frequently. Judgments made by different people are even more likely to diverge. Research has confirmed that in many tasks, experts’ decisions are highly variable: valuing stocks, appraising real estate, sentencing criminals, evaluating job performance, auditing financial statements, and more. The unavoidable conclusion is that professionals often make decisions that deviate significantly from those of their peers, from their own prior decisions, and from rules that they themselves claim to follow.”
Suffice it to say we at LexPredict agree. Indeed, building from our work on Fantasy SCOTUS, where our expert crowd outperforms any known single alternative (including the highest-ranked Fantasy SCOTUS player), we have recently launched LexSemble (our configurable crowdsourcing platform) in order to help legal and other related organizations make better decisions (in transactions, litigation, regulatory matters, etc.).
We are working to pilot with a number of industry partners interested in applying underwriting techniques to more rigorously support their decision making. This is also an example of what we have been calling Fin(Legal)Tech (the financialization of law). If you want to learn more, please sign up for our Fin(Legal)Tech conference coming on November 4th in Chicago (tickets are free but space is limited).
Experts, Crowds and Algorithms – AI Machine Learns to Drive Using Crowdteaching
The example above is an algorithmic system enhanced by crowd-based teaching. It is a useful example of the creativity employed in the machine learning research community. It is also instructive (at a broader level) about the cutting-edge approaches used across predictive analytics / machine learning.
In discussing legal prediction or the application of predictive analytics in law, we often try to start by highlighting The Three Forms of (Legal) Prediction: Experts, Crowds and Algorithms. These are really the only streams of intelligence that one can use to forecast anything. Historically, in the law, expert-centered forecasting has almost exclusively dominated the industry. In virtually every field of human endeavor, there have been improvements (sometimes small, sometimes large) in forecasting, driven by the move from experts to ensembles (i.e. mixtures of these respective streams of intelligence: experts, crowds + algorithms).
Through our company LexPredict and in our research, we have been working toward building such ensemble models across a wide range of topics. In addition, we have engaged in a public display of these ideas through Fantasy SCOTUS, our SCOTUS prediction algorithm, and through the identification of non-traditional experts (i.e. our superforecasters, who, unlike most lawyers, have actually been benchmarked on their predictive performance). Finally, we have demonstrated the usefulness of SCOTUS prediction in a narrow subset of cases that actually move the securities market.
Announcing the All New LexPredict FantasySCOTUS – (Sponsored By Thomson Reuters)
Today I am excited to announce that LexPredict has now launched the all new FantasySCOTUS under the direction of Michael J. Bommarito II, Daniel Martin Katz and Josh Blackman.
FantasySCOTUS is the leading Supreme Court Fantasy League. Thousands of attorneys, law students, and other avid Supreme Court followers make predictions about cases before the Supreme Court. Participation is FREE and Supreme Court geeks can win cash prizes up to $10,000 (many other prizes as well — thanks to the generous support of Thomson Reuters).
We hope to launch additional functionality soon but we are now live and ready to accept your predictions for the 2014-2015 Supreme Court Term!