Legal Complexity – Short Talk at Stanford CodeX Future Law ONLINE Conference 2021

Yesterday we presented our work in a Short Talk at the Annual Stanford CodeX Future Law Conference. Our presentation focused on “Complex Societies and the Growth of the Law” as well as our new paper on measuring legal systems over time, at scale.

Here are the Papers that were Briefly Presented —
Daniel Martin Katz, Corinna Coupette, Janis Beckedorf & Dirk Hartung, Complex Societies and the Growth of the Law, 10 Scientific Reports 18737 (2020) < Nature Research >

Corinna Coupette, Janis Beckedorf, Dirk Hartung, Michael Bommarito, & Daniel Martin Katz, Measuring Law Over Time: A Network Analytical Framework with an Application to Statutes and Regulations in the United States and Germany (Under Review – Frontiers in Physics) < SSRN >

Problem Solving Experiments Reveal that People are More Likely to Consider Solutions that Add Features than Solutions that Remove Them (via Nature)

Interesting paper … when individuals evaluate the space of possible solutions, they systematically overlook solutions that involve removing parts of a system. This may help explain bloat in software, accumulation of regulation, organizations that fail to innovate, etc.

ABSTRACT: “Improving objects, ideas or situations—whether a designer seeks to advance technology, a writer seeks to strengthen an argument or a manager seeks to encourage desired behaviour—requires a mental search for possible changes. We investigated whether people are as likely to consider changes that subtract components from an object, idea or situation as they are to consider changes that add new components. People typically consider a limited number of promising ideas in order to manage the cognitive burden of searching through all possible ideas, but this can lead them to accept adequate solutions without considering potentially superior alternatives. Here we show that people systematically default to searching for additive transformations, and consequently overlook subtractive transformations. Across eight experiments, participants were less likely to identify advantageous subtractive changes when the task did not (versus did) cue them to consider subtraction, when they had only one opportunity (versus several) to recognize the shortcomings of an additive search strategy or when they were under a higher (versus lower) cognitive load. Defaulting to searches for additive changes may be one reason that people struggle to mitigate overburdened schedules, institutional red tape and damaging effects on the planet.”

DAKSH Centre of Excellence for Law and Technology at the Indian Institute of Technology, Delhi

Excited to advise the new DAKSH Centre of Excellence for Law and Technology at the Indian Institute of Technology, Delhi.

“The CoE will bring together lawyers, researchers, scientists and policy analysts to build solutions for the biggest challenges facing the justice system, drawing from fields such as operations research, data analytics, technology and law. As an interdisciplinary centre harnessing the strengths and experience of IIT Delhi and DAKSH, the CoE will leverage rigourous research to produce real-world impact in the functioning of the justice system. IIT Delhi brings to the CoE its expertise in statistical techniques, data modelling, natural language processing, and machine learning. DAKSH offers its pioneering use of data analytics to assist the judiciary and a deep understanding of judicial processes and the legal system.”

Usability for Multiple Personas: A Product Manager’s Mission – Inside the Engine Room with Kelly Marsh

This week I hosted the Elevate Together Podcast for our second ‘Inside the Engine Room’ Series with my Guest Kelly Marsh, Director of Product Development at Elevate Services. In this Podcast, we discuss Kelly’s Polymath background, which spans Mathematics, Law and Applied Technology. Next, we talk about Kelly’s career path in Big Law, Corporate Legal, at our startup LexPredict, and now here at Elevate. Finally, we discuss Product Management and her work supporting the various user personas in the Elevate ‘Manage Contracts’ ELM Module.

The Podcast is entitled – Usability for Multiple Personas: A Product Manager’s Mission. Check it Out HERE or on Apple Podcasts, SoundCloud or Spotify.

Bucerius Legal Tech Essentials 2021

SIGN UP FOR FREE — May 2021 — The 2021 Edition of Bucerius Legal Tech Essentials!  https://techsummer.law-school.de/ – When we introduced Bucerius Legal Tech Essentials in 2020, we were overwhelmed by how many great lecturers and students (5000+) decided to join us. Not only was it one of the largest LegalTech Programs ever created, it was also incredibly intense, fun and engaging. It should come as no surprise that we are going for a version 2.0 in 2021. The program’s core remains the same: a free online educational experience with many of the lecturers that would normally be on our campus!

So again, sign up for free; more information to follow!

cc: Dirk Hartung, Lauritz Gerlach

Productization of Legal A.I. – Inside the Engine Room with Eric Detterman

This week I hosted the Elevate Together Podcast for our first ‘Inside the Engine Room’ Series with my Guest Eric Detterman, VP Data Engineering and Solutions at Elevate. In our wide-ranging conversation, we talked about the path from Ad Hoc Machine Learning Projects to Building Enterprise Grade Products, thoughts on Tech Stacks, the Decomposition of Legal A.I. Products into their component parts (UI, Database, Workflow, Engine, etc.), as well as key IT questions such as how to push and pull data with best-in-class API Infrastructure and Deployment (Docker, Kubernetes). Check it Out here or on Apple Podcasts, SoundCloud or Spotify.

#LegalAI #MachineLearning #LegalML #LegalNLP #LegalIT #LegalTech #FinLegalTech #LegalAPI #DataEngineering #Digital #DigitalTransformation #LawCompany 

Causal BERT: Language Models for Causality Detection Between Events Expressed in Text

ABSTRACT: “Causality understanding between events is a critical natural language processing task that is helpful in many areas, including health care, business risk management and finance. On close examination, one can find a huge amount of textual content both in the form of formal documents or in content arising from social media like Twitter, dedicated to communicating and exploring various types of causality in the real world. Recognizing these “Cause-Effect” relationships between natural language events continues to remain a challenge simply because it is often expressed implicitly. Implicit causality is hard to detect through most of the techniques employed in literature and can also, at times be perceived as ambiguous or vague. Also, although well-known datasets do exist for this problem, the examples in them are limited in the range and complexity of the causal relationships they depict especially when related to implicit relationships. Most of the contemporary methods are either based on lexico-semantic pattern matching or are feature-driven supervised methods. Therefore, as expected these methods are more geared towards handling explicit causal relationships leading to limited coverage for implicit relationships and are hard to generalize. In this paper, we investigate the language model’s capabilities for causal association among events expressed in natural language text using sentence context combined with event information, and by leveraging masked event context with in-domain and out-of-domain data distribution. Our proposed methods achieve the state-of-art performance in three different data distributions and can be leveraged for extraction of a causal diagram and/or building a chain of events from unstructured text.”

An Interesting Paper — ACCESS HERE
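To make the setup concrete, here is a minimal sketch of the general approach the abstract describes: pair a sentence with its two candidate events and let a BERT-style sequence classifier score whether a causal relation is expressed. This is only an illustration, not the authors’ exact architecture; the checkpoint name, label set, and score_causality helper are placeholders that assume the Hugging Face transformers library and a model fine-tuned on a cause-effect dataset.

```python
# Hedged sketch: sentence-pair classification of causal relations between events.
# The checkpoint below is a placeholder; in practice you would fine-tune it on
# labeled cause-effect examples before the scores mean anything.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-base-uncased"  # placeholder; swap in a fine-tuned checkpoint
LABELS = ["no_causal_relation", "causal_relation"]

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=len(LABELS)
)
model.eval()

def score_causality(sentence: str, event_a: str, event_b: str) -> dict:
    # Combine sentence context with event information as a sentence pair:
    # segment A is the raw sentence, segment B names the two candidate events.
    inputs = tokenizer(
        sentence, f"{event_a} -> {event_b}", return_tensors="pt", truncation=True
    )
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.softmax(logits, dim=-1).squeeze(0)
    return {label: float(p) for label, p in zip(LABELS, probs)}

print(score_causality(
    "The company missed its earnings target after the factory shut down.",
    "the factory shut down",
    "missed its earnings target",
))
```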

Measuring Law Over Time: A Network Analytical Framework with an Application to Statutes and Regulations in the United States and Germany

We have just posted our NEW PAPER featuring a combined dataset of network and text data that is roughly 120 MILLION words (tokens) in size: “Measuring Law Over Time: A Network Analytical Framework with an Application to Statutes and Regulations in the United States and Germany.” Access the paper draft via SSRN.

ABSTRACT: How do complex social systems evolve in the modern world? This question lies at the heart of social physics, and network analysis has proven critical in providing answers to it. In recent years, network analysis has also been used to gain a quantitative understanding of law as a complex adaptive system, but most research has focused on legal documents of a single type, and there exists no unified framework for quantitative legal document analysis using network analytical tools. Against this background, we present a comprehensive framework for analyzing legal documents as multi-dimensional, dynamic document networks. We demonstrate the utility of this framework by applying it to an original dataset of statutes and regulations from two different countries, the United States and Germany, spanning more than twenty years (1998–2019). Our framework provides tools for assessing the size and connectivity of the legal system as viewed through the lens of specific document collections as well as for tracking the evolution of individual legal documents over time. Implementing the framework for our dataset, we find that at the federal level, the American legal system is increasingly dominated by regulations, whereas the German legal system remains governed by statutes. This holds regardless of whether we measure the systems at the macro, the meso, or the micro level.

#LegalComplexity #LegalDataScience #NetworkScience #LegalAI #SocialPhysics #LegalNLP #ComplexSystems
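For readers who want a feel for what treating legal documents as dynamic document networks means in practice, here is a toy sketch, not the paper’s full multi-dimensional framework: each yearly snapshot of a document collection becomes a directed graph of cross-references, and simple size and connectivity statistics are tracked over time. The document identifiers, edges, and snapshot years below are made up for illustration, and the example assumes the networkx library.

```python
# Toy illustration: yearly snapshots of a document collection as directed
# cross-reference networks, with basic size/connectivity measures per year.
# All identifiers and edges below are invented for demonstration purposes.
import networkx as nx

snapshots = {
    1998: [
        ("USC-Title15-S78", "USC-Title15-S77"),
        ("CFR-Part240", "USC-Title15-S78"),
    ],
    2019: [
        ("USC-Title15-S78", "USC-Title15-S77"),
        ("CFR-Part240", "USC-Title15-S78"),
        ("CFR-Part242", "USC-Title15-S78"),
        ("CFR-Part242", "CFR-Part240"),
    ],
}

for year, cross_references in sorted(snapshots.items()):
    g = nx.DiGraph()
    g.add_edges_from(cross_references)  # edge = document A cites document B
    print(
        f"{year}: {g.number_of_nodes()} documents, "
        f"{g.number_of_edges()} cross-references, "
        f"{nx.number_weakly_connected_components(g)} weakly connected component(s)"
    )
```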

Can A Fruit Fly Learn Word Embeddings?

Very interesting Conference Proceedings Paper available on arXiv.

ABSTRACT: “The mushroom body of the fruit fly brain is one of the best studied systems in neuroscience. At its core it consists of a population of Kenyon cells, which receive inputs from multiple sensory modalities. These cells are inhibited by the anterior paired lateral neuron, thus creating a sparse high dimensional representation of the inputs. In this work we study a mathematical formalization of this network motif and apply it to learning the correlational structure between words and their context in a corpus of unstructured text, a common natural language processing (NLP) task. We show that this network can learn semantic representations of words and can generate both static and context-dependent word embeddings. Unlike conventional methods (e.g., BERT, GloVe) that use dense representations for word embedding, our algorithm encodes semantic meaning of words and their context in the form of sparse binary hash codes. The quality of the learned representations is evaluated on word similarity analysis, word-sense disambiguation, and document classification. It is shown that not only can the fruit fly network motif achieve performance comparable to existing methods in NLP, but, additionally, it uses only a fraction of the computational resources (shorter training time and smaller memory footprint).”
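As a rough intuition for the network motif the abstract describes, here is a simplified, hypothetical sketch of fly-inspired hashing: a dense word/context vector is randomly expanded into many “Kenyon cell” units, and winner-take-all inhibition keeps only the top-k most active units, yielding a sparse binary hash code. This is not the trained network from the paper; the dimensions, sparsity level, and random projection below are illustrative assumptions.

```python
# Simplified sketch of random expansion + winner-take-all sparsification.
# Similar inputs should end up with overlapping sets of active units.
import numpy as np

rng = np.random.default_rng(0)

DENSE_DIM = 50        # dimensionality of the input word/context vector
EXPANSION_DIM = 2000  # number of Kenyon-cell-like units
TOP_K = 32            # units left active after inhibition (winner-take-all)

# Sparse random projection standing in for projection-neuron -> Kenyon-cell wiring.
projection = (rng.random((EXPANSION_DIM, DENSE_DIM)) < 0.1).astype(float)

def fly_hash(dense_vector: np.ndarray) -> np.ndarray:
    """Return a sparse binary hash code with exactly TOP_K active units."""
    activations = projection @ dense_vector
    code = np.zeros(EXPANSION_DIM, dtype=np.uint8)
    code[np.argsort(activations)[-TOP_K:]] = 1
    return code

def shared_active_units(x: np.ndarray, y: np.ndarray) -> int:
    return int(np.sum(fly_hash(x) & fly_hash(y)))

word_a = rng.normal(size=DENSE_DIM)
word_b = word_a + 0.1 * rng.normal(size=DENSE_DIM)   # near-duplicate of word_a
word_c = rng.normal(size=DENSE_DIM)                  # unrelated vector

print("overlap(a, b):", shared_active_units(word_a, word_b))  # high overlap
print("overlap(a, c):", shared_active_units(word_a, word_c))  # low overlap
```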