It is a great honor to have been selected as the 2021 Jones Day Visiting Professor of Law at Singapore Management University. While my formal in-person visit will occur in 2022 (due to COVID-related delays), I will deliver the Jones Day Lecture via webinar on November 17, 2021 (9:00am Singapore time) (FREE ACCESS).
On November 17, I will also join the Honourable Justice Aedit Abdullah (Supreme Court of Singapore) and Mr Mauricio F Paez (Partner, Jones Day) in a panel discussion moderated by Dean Yihan Goh of the Yong Pung How School of Law, Singapore Management University.
Law as Code? Code as Law? A long-standing debate … but what other ideas and concepts from computer science might be leveraged in understanding and managing the law?
Corinna Coupette, Dirk Hartung, Janis Beckedorf, Maximilian Böther & Daniel Martin Katz, Law Smells: Defining and Detecting Problematic Patterns in Legal Drafting – available at <SSRN> <arXiv>
ABSTRACT – “Building on the computer science concept of code smells, we initiate the study of law smells, i.e., patterns in legal texts that pose threats to the comprehensibility and maintainability of the law. With five intuitive law smells as running examples — namely, duplicated phrase, long element, large reference tree, ambiguous syntax, and natural language obsession — we develop a comprehensive law smell taxonomy. This taxonomy classifies law smells by when they can be detected, which aspects of law they relate to, and how they can be discovered. Our new paper demonstrates how ideas from software engineering can be leveraged to assess and improve the quality of legal code, thus drawing attention to an understudied area in the intersection of law and computer science and highlighting the potential of computational legal drafting.”
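To make the "duplicated phrase" smell concrete, here is a minimal sketch of how such a pattern might be detected with simple n-gram counting. This is an illustration of the general idea only, not the paper's actual detection method; the function name, parameters, and toy statute-like sections are all invented for this example.

```python
from collections import Counter

def find_duplicated_phrases(sections, n=4, min_count=2):
    """Flag word n-grams that recur across a body of legal text.

    A crude stand-in for the 'duplicated phrase' smell: report any
    n-word sequence that appears at least min_count times overall.
    """
    counts = Counter()
    for text in sections:
        words = text.lower().split()
        for i in range(len(words) - n + 1):
            counts[" ".join(words[i:i + n])] += 1
    return {phrase: c for phrase, c in counts.items() if c >= min_count}

# Toy statute-like sections (invented text, for illustration only).
sections = [
    "the secretary shall issue regulations to carry out this section",
    "the secretary shall issue regulations governing the program",
]
print(find_duplicated_phrases(sections))
# → {'the secretary shall issue': 2, 'secretary shall issue regulations': 2}
```

A real detector would need to handle tokenization, normalization, and the nesting structure of legal documents, but even this toy version shows how boilerplate that recurs across provisions can be surfaced mechanically.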
“LexGLUE: A Benchmark Dataset for Legal Language Understanding in English” … Pre-trained Transformers, including BERT (Devlin et al., 2019), GPT-3 (Brown et al., 2020), T5 (Raffel et al., 2020), BART (Lewis et al., 2020), DeBERTa (He et al., 2021) and numerous variants, are currently the state of the art in most natural language processing (NLP) tasks. The question is how to adapt these models to legal text. Inspired by the recent widespread use of the GLUE multi-task benchmark NLP dataset, we introduce LexGLUE, a benchmark dataset to evaluate the performance of NLP methods in legal tasks.
LexGLUE is based on seven existing English legal NLP datasets. However, we anticipate that more datasets, tasks, and languages will be added in later versions of LexGLUE. As more legal NLP datasets become available, we also plan to favor datasets checked thoroughly for validity (scores reflecting real-life performance), annotation quality, statistical power, and social bias.
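GLUE-style benchmarks of this kind are typically scored with micro- and macro-averaged F1, so that performance is comparable across tasks with very different label distributions. As a minimal sketch (assuming single-label classification; this is not LexGLUE's official scorer, and the label names below are invented):

```python
from collections import Counter

def f1_scores(y_true, y_pred):
    """Micro- and macro-averaged F1 for single-label classification."""
    labels = sorted(set(y_true) | set(y_pred))
    tp, fp, fn = Counter(), Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1          # correct prediction for class t
        else:
            fp[p] += 1          # predicted p, but true class was t
            fn[t] += 1          # missed an instance of class t

    def f1(tp_, fp_, fn_):
        denom = 2 * tp_ + fp_ + fn_
        return 2 * tp_ / denom if denom else 0.0

    # Micro: pool counts over all classes; macro: average per-class F1.
    micro = f1(sum(tp.values()), sum(fp.values()), sum(fn.values()))
    macro = sum(f1(tp[l], fp[l], fn[l]) for l in labels) / len(labels)
    return micro, macro

# Hypothetical predictions on a binary legal-judgment task.
y_true = ["violation", "violation", "no_violation", "no_violation"]
y_pred = ["violation", "no_violation", "no_violation", "no_violation"]
micro, macro = f1_scores(y_true, y_pred)
# → micro = 0.75, macro ≈ 0.733
```

Note that in the single-label case micro-F1 reduces to accuracy, while macro-F1 weights rare and frequent classes equally, which is why benchmarks with skewed legal label distributions usually report both.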