ABSTRACT: “In this paper, we consider the problem of organizing supporting documents vital to U.S. work visa petitions, as well as responding to Requests For Evidence (RFE) issued by the U.S. Citizenship and Immigration Services (USCIS). Typically, both processes require a significant amount of repetitive manual effort. To reduce the burden of mechanical work, we apply machine learning methods to automate these processes, with humans in the loop to review and edit output for submission. In particular, we use an ensemble of image and text classifiers to categorize supporting documents. We also use a text classifier to automatically identify the types of evidence being requested in an RFE, and use the identified types in conjunction with response templates and extracted fields to assemble draft responses. Empirical results suggest that our approach achieves considerable accuracy while significantly reducing processing time.” Access via arXiv. To appear in the ICDM 2020 workshop MLLD-2020.
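The pipeline described in the abstract combines an image signal and a text signal for each supporting document. As a rough illustration of that idea (a minimal sketch, not the authors' actual system), the code below soft-votes a text classifier and an image classifier by averaging their class probabilities; the toy documents, labels, stand-in image features, and voting weights are all assumptions made for illustration.

```python
# Minimal sketch (not the authors' system): soft-vote a text classifier and an
# image classifier by averaging their class probabilities. Toy data throughout.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical training set: OCR-style text plus a stand-in image feature
# vector for each supporting document.
texts = [
    "form i-129 petition for nonimmigrant worker",
    "bachelor of science diploma awarded to the beneficiary",
    "employment offer letter describing job duties and wage",
    "official transcript of academic records",
]
labels = np.array([0, 1, 2, 1])  # e.g. 0 = form, 1 = credential, 2 = letter
image_feats = np.random.RandomState(0).rand(len(texts), 16)  # fake pixel features

# Train the two base classifiers independently.
vec = TfidfVectorizer()
text_clf = LogisticRegression(max_iter=1000).fit(vec.fit_transform(texts), labels)
img_clf = LogisticRegression(max_iter=1000).fit(image_feats, labels)

def classify(text, img_feat, w_text=0.6, w_img=0.4):
    """Weighted average of the two models' per-class probabilities."""
    p_text = text_clf.predict_proba(vec.transform([text]))[0]
    p_img = img_clf.predict_proba(img_feat.reshape(1, -1))[0]
    return int(np.argmax(w_text * p_text + w_img * p_img))

print(classify("copy of diploma and academic transcript", image_feats[1]))
```

Per the humans-in-the-loop setup described in the abstract, a reviewer would confirm or correct each predicted category before anything is filed.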
Tag: legal prediction
Predicting United States Policy Outcomes with Random Forests (via arXiv)
Interesting paper, which follows on from a number of Machine Learning / NLP driven Legislative Prediction and Government Prediction papers. Access the draft of the paper via arXiv. A minimal sketch of this general style of text-based prediction appears after the reference list below.
For more examples, see, e.g., the following papers:
Gerrish SM, Blei DM. “Predicting legislative roll calls from text”. ICML, 2011.
Yano T, Smith NA, Wilkerson JD. “Textual Predictors of Bill Survival in Congressional Committees”. Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2012.
Katz DM, Bommarito MJ, Blackman J. “A general approach for predicting the behavior of the Supreme Court of the United States”. PLOS ONE, 2017.
Nay J. “Predicting and Understanding Law Making with Word Vectors and an Ensemble Model”. PLOS ONE, 2017.
Waltl BE. “Semantic Analysis and Computational Modeling of Legal Documents”. PhD dissertation, Technische Universität München, 2018.
Davoodi M, Waltenburg E, Goldwasser D. “Understanding the Language of Political Agreement and Disagreement in Legislative Texts”. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 5358–5368, 2020.
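These papers differ in data and estimator, but they share a common recipe: featurize legislative text and fit a supervised classifier to a recorded outcome. Here is a minimal, hypothetical sketch in that spirit, using TF-IDF features and a random forest (the technique named in the paper's title); the toy bills and outcome labels are invented for illustration.

```python
# Minimal sketch of text-based legislative outcome prediction: TF-IDF features
# feed a random forest. The bills and labels below are invented toy data.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

bills = [
    "a bill to amend the internal revenue code to reduce corporate rates",
    "a bill to designate a post office in springfield",
    "a bill to appropriate funds for highway infrastructure",
    "a bill to rename a federal courthouse",
]
enacted = [0, 1, 0, 1]  # hypothetical outcome labels (1 = enacted)

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    RandomForestClassifier(n_estimators=200, random_state=0),
)
model.fit(bills, enacted)

# Probability of [not enacted, enacted] for a new, unseen bill title.
print(model.predict_proba(["a bill to designate a federal building"])[0])
```

Studies in this literature typically add sponsor, committee, and procedural covariates alongside the text features and evaluate out-of-sample by Congress or term.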
Legal Data Science Research Group at Bucerius Law School
Spent the past few days here in Hamburg working with our multi-institutional scientific research team (Bucerius Law, Max Planck Institute, Chicago Kent Law, Heidelberg Law) … culminating in our presentation to the Bucerius Law Faculty today! cc: Dirk Hartung Corinna Coupette Janis Beckedorf #legalinnovation #makelawbetter #legaltech #methods #legaldata #science #datascience #networkscience
Contract Analytics Session at ILTACON 2019 – Legal Artificial Intelligence (A.I.) Track
Today I was happy to moderate the third session of the AI + Law Track here at #ILTACon2019 – we focused on a topic of great interest to me … Contract AI / Analytics … Thanks to our organizer Rodney Mullins … Thanks to our speakers – Kerry Westland, Kevin L. Miller, Martin Davidson, Noah Waisberg!
LegalAI Track at International Legal Technology Association Conference in Orlando
Fighting off Stormtroopers so that I can moderate the AI and Law Track here at the International Legal Technology Association Conference in Orlando #ILTACON19 #ILTACON2019 #legaltech #legaleducation #legalinnovation #makelawbetter
AI Track at ILTACON 2019 – Session #1
Full House for Session 1 of the AI Track at #ILTACON19 – I am moderating each of the four sessions this year (see you tomorrow for Session 2) … #LegalTech #LegalAI #legaleducation #legalinnovation
Thanks to our panelists for Session 1 – Brad Blickstein, Benjamin Alarie, Ann McCrackin, Jeremiah Weasenforth
Closing Keynote at the Artificial Intelligence and Law Summit (Hosted by the Law Society of England and Wales)
Yesterday I ran the anchor leg (i.e., gave the closing keynote) at the Artificial Intelligence and Law Summit, hosted by the Law Society of England and Wales, here in London!
#LegalAI #LegalTech #LegalInnovation
Primerus – 2018 Annual PDI Convocation – Scottsdale, AZ
It is my great pleasure to visit with Primerus and its associated law firms and deliver an address at its 2018 Annual PDI Convocation.
Evaluating Litigation Risk in the 21st Century – Conference at UConn Law
Today I am at UConn Law speaking at a conference entitled Evaluating Litigation Risk in the 21st Century. Thanks to Alexandra Lahav and the UConn Insurance Law Center for hosting me today!
Crowdsourcing SCOTUS Paper Presentation at University of Minnesota Law School
The next leg of our SCOTUS Crowdsourcing Tour takes us to Minneapolis – for a talk at the University of Minnesota Law School. Looking forward to it!
Crowdsourcing Accurately and Robustly Predicts Supreme Court Decisions – Professors Daniel Martin Katz, Michael Bommarito & Josh Blackman
Today Michael J Bommarito II and I were live in Ann Arbor at the University of Michigan Center for Political Studies to kick off the tour for our #SCOTUS Crowd Prediction Paper – here is version 1.01 of the slide deck!
SCOTUS Crowdsourcing Paper Road Show (Presentation at University of Michigan Center for Political Studies / ISR)
Excited to take the show on the road next week, where we will be presenting our SCOTUS Crowdsourcing Paper at the University of Michigan Center for Political Studies and at the University of Minnesota Law School.
Crowdsourcing Accurately and Robustly Predicts Supreme Court Decisions – By Daniel Martin Katz, Michael Bommarito, Josh Blackman (via SSRN)
ABSTRACT: Scholars have increasingly investigated “crowdsourcing” as an alternative to expert-based judgment or purely data-driven approaches to predicting the future. Under certain conditions, scholars have found that crowdsourcing can outperform these other approaches. However, despite interest in the topic and a series of successful use cases, relatively few studies have applied empirical model thinking to evaluate the accuracy and robustness of crowdsourcing in real-world contexts. In this paper, we offer three novel contributions. First, we explore a dataset of over 600,000 predictions from over 7,000 participants in a multi-year tournament to predict the decisions of the Supreme Court of the United States. Second, we develop a comprehensive crowd construction framework that allows for the formal description and application of crowdsourcing to real-world data. Third, we apply this framework to our data to construct more than 275,000 crowd models. We find that in out-of-sample historical simulations, crowdsourcing robustly outperforms the commonly accepted null model, yielding the highest-known performance for this context at 80.8% case-level accuracy. To our knowledge, this dataset and analysis represent one of the largest explorations of recurring human prediction to date, and our results provide additional empirical support for the use of crowdsourcing as a prediction method. (via SSRN)
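As a back-of-the-envelope illustration of the comparison described in the abstract, the sketch below builds a majority-vote crowd from simulated individual forecasts and scores it against a simple null model that always predicts the modal outcome (reversal). The simulated participant accuracy, the base rate, and the choice of null model are assumptions for illustration, not the paper's actual crowd construction framework.

```python
# Illustrative simulation: majority-vote crowd vs. an always-predict-reverse
# null model. All parameters below are assumptions, not the paper's data.
import numpy as np

rng = np.random.default_rng(0)
n_cases, n_participants = 500, 50

# Simulated ground truth (True = reverse, False = affirm) and noisy forecasts:
# each participant is assumed to be right 70% of the time, independently.
truth = rng.random(n_cases) < 0.63            # reverse as the majority outcome
correct = rng.random((n_participants, n_cases)) < 0.70
forecasts = np.where(correct, truth, ~truth)

# Crowd model: simple majority vote across participants for each case.
crowd = forecasts.mean(axis=0) >= 0.5
crowd_acc = (crowd == truth).mean()

# Null model: always predict the modal outcome ("reverse").
null_acc = truth.mean()

print(f"crowd accuracy: {crowd_acc:.3f}  null model accuracy: {null_acc:.3f}")
```

Even this crude aggregation shows the basic effect the paper measures far more carefully: pooling many noisy forecasters beats the single-outcome baseline.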