Daniel Martin Katz Named Fellow Elect of the College of Law Practice Management – Ceremony at 2017 Futures Conference in Atlanta, Georgia

I am honored to be elected as a Fellow of the College of Law Practice Management. The College includes legal technologists, law firm leaders, corporate counsel, and other leaders in the practice of law. I am looking forward to joining many friends and colleagues who are members of the College …

Exploring the Physical Properties of Regulatory Ecosystems – Professors Daniel Martin Katz + Michael J Bommarito

Measuring the Temperature and Diversity of the U.S. Regulatory Ecosystem – Bommarito + Katz – Presentation at Stanford CodeX on April 5th 2017

We are excited to be giving a talk at Stanford the day before the Future Law Conference. Our talk will be hosted by Stanford CodeX – The Center for Legal Informatics. If you are in the Bay Area, you can join us by signing up for free here.

The underlying paper is available here. Some starter slides are available here (start at slide 158), and we will be previewing our second paper in this three-part series.

The rate at which US Companies cite regulations as an obstacle has quadrupled over the last 20 years (via Quartz)

“Michael Bommarito II and Daniel Martin Katz, legal scholars at the Illinois Institute of Technology, have tried to measure the growth of regulation by analyzing more than 160,000 corporate annual reports, or 10-K filings, at the US Securities and Exchange Commission. In a pre-print paper released Dec. 29, the authors find that the average number of regulatory references in any one filing increased from fewer than eight in 1995 to almost 32 in 2016. The average number of different laws cited in each filing more than doubled over the same period.”
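For readers curious how a measurement like this works mechanically, here is a minimal sketch in Python. The regex patterns, function name, and sample text are our own illustrative inventions, not the extraction pipeline from the Bommarito & Katz paper, which handles many more citation formats and runs over the full EDGAR corpus of 10-K filings.

```python
import re

# Hypothetical illustration: count references to the Code of Federal
# Regulations (C.F.R.) and the United States Code (U.S.C.) in the text
# of a single 10-K filing. A simplified stand-in for the paper's
# actual extraction pipeline.
CFR_PATTERN = re.compile(r"\b\d+\s+C\.?F\.?R\.?\s+(?:Part\s+)?\d+", re.IGNORECASE)
USC_PATTERN = re.compile(r"\b\d+\s+U\.?S\.?C\.?\s+(?:§+\s*)?\d+", re.IGNORECASE)

def count_regulatory_references(filing_text: str) -> dict:
    """Return raw counts of C.F.R. and U.S.C. citations in a filing."""
    cfr_hits = CFR_PATTERN.findall(filing_text)
    usc_hits = USC_PATTERN.findall(filing_text)
    return {
        "cfr_references": len(cfr_hits),
        "usc_references": len(usc_hits),
        "total": len(cfr_hits) + len(usc_hits),
    }

sample = "We are subject to 17 C.F.R. Part 240 and 15 U.S.C. § 78m, among others."
print(count_regulatory_references(sample))
# {'cfr_references': 1, 'usc_references': 1, 'total': 2}
```

Aggregating counts like these across every filing in a given year is what produces the time series the Quartz piece describes.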

Law is Code: A Software Engineering Approach to Analyzing the United States Code

William Li, Pablo Azar, David Larochelle, Phil Hill & Andrew Lo, Law is Code: A Software Engineering Approach to Analyzing the United States Code

ABSTRACT:  “The agglomeration of rules and regulations over time has produced a body of legal code that no single individual can fully comprehend. This complexity produces inefficiencies, makes the processes of understanding and changing the law difficult, and frustrates the fundamental principle that the law should provide fair notice to the governed. In this article, we take a quantitative, unbiased, and software-engineering approach to analyze the evolution of the United States Code from 1926 to today. Software engineers frequently face the challenge of understanding and managing large, structured collections of instructions, directives, and conditional statements, and we adapt and apply their techniques to the U.S. Code over time. Our work produces insights into the structure of the U.S. Code as a whole, its strengths and vulnerabilities, and new ways of thinking about individual laws. For example, we identify the first appearance and spread of important terms in the U.S. Code like “whistleblower” and “privacy.” We also analyze and visualize the network structure of certain substantial reforms, including the Patient Protection and Affordable Care Act (PPACA) and the Dodd-Frank Wall Street Reform and Consumer Protection Act, and show how the interconnections of references can increase complexity and create the potential for unintended consequences. Our work is a timely illustration of computational approaches to law as the legal profession embraces technology for scholarship, to increase efficiency, and to improve access to justice.”
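To make the abstract's network framing concrete, here is a toy sketch of the kind of reference graph it describes, using the networkx library. The edges below are hypothetical placeholders (the section numbers are real, but the cross-references are invented for illustration); Li et al. construct the actual graph from cross-references parsed out of the full text of the U.S. Code.

```python
import networkx as nx

# Treat each section of the U.S. Code as a node and each cross-reference
# as a directed edge, then ask which sections are most heavily depended on.
G = nx.DiGraph()
cross_references = [
    ("42 USC 18091", "42 USC 300gg"),   # hypothetical edges for illustration
    ("42 USC 18091", "26 USC 5000A"),
    ("26 USC 5000A", "42 USC 300gg"),
    ("12 USC 5301", "15 USC 78c"),
]
G.add_edges_from(cross_references)

# In-degree: how many other sections point at this one. Heavily referenced
# sections are "load-bearing" provisions whose amendment can ripple
# through the network in unintended ways.
for section, refs in sorted(G.in_degree(), key=lambda x: -x[1]):
    print(f"{section}: referenced by {refs} section(s)")
```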

Mike and I are excited to see this paper as it is related to two of our prior papers:
Daniel Martin Katz & Michael J. Bommarito II, Measuring the Complexity of the Law: The United States Code, 22 Journal of Artificial Intelligence & Law 1 (2014)

Michael J. Bommarito II & Daniel Martin Katz, A Mathematical Approach to the Study of the United States Code, 389 Physica A 4195 (2010)

Leg/Ex – Legislative Explorer for Data-Driven Discovery (Just One of Many User Interfaces for Legal / Political Institutions)

Let's face it – legal systems are complex. They are complex for sophisticated players and even more complex for the average citizen. Complexity is the problem, and the question at the center of some of our recent work (see here) is how best to mediate that complexity.

For a long time, clients and other legal stakeholders have dealt with complexity by allocating human capital to the problem. However, there are other tools and methods that might be employed to mediate legal complexity.

Reducing legal complexity is a question of information engineering, and it is a question of design. Legal systems need a user interface such as the one displayed above. They need UI/UX. This is a major thrust behind design thinking for lawyers, and it will be a major thrust of work (undertaken by lawyers and non-lawyers) over the coming years. Stay tuned!

(HT: Robert Richards, Ted Sichelman for flagging this project)

Measuring the Complexity of the Law: The United States Code (Slides by Daniel Martin Katz & Michael J. Bommarito II)

From our abstract:  “Einstein’s razor, a corollary of Ockham’s razor, is often paraphrased as follows: make everything as simple as possible, but not simpler.  This rule of thumb describes the challenge that designers of a legal system face—to craft simple laws that produce desired ends, but not to pursue simplicity so far as to undermine those ends.  Complexity, simplicity’s inverse, taxes cognition and increases the likelihood of suboptimal decisions.  In addition, unnecessary legal complexity can drive a misallocation of human capital toward comprehending and complying with legal rules and away from other productive ends.

While many scholars have offered descriptive accounts or theoretical models of legal complexity, empirical research to date has been limited to simple measures of size, such as the number of pages in a bill. No extant research rigorously applies a meaningful model to real data. As a consequence, we have no reliable means to determine whether a new bill, regulation, order, or precedent substantially affects legal complexity.

In this paper, we address this need by developing a proposed empirical framework for measuring relative legal complexity.  This framework is based on “knowledge acquisition,” an approach at the intersection of psychology and computer science, which can take into account the structure, language, and interdependence of law. We then demonstrate the descriptive value of this framework by applying it to the U.S. Code’s Titles, scoring and ranking them by their relative complexity.  Our framework is flexible, intuitive, and transparent, and we offer this approach as a first step in developing a practical methodology for assessing legal complexity.”
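As a rough illustration of the language dimension of such a framework, the sketch below computes the Shannon entropy of a text's word distribution – one simple proxy, in the spirit of the paper's language measures, for the vocabulary a reader must acquire. The function and sample text are illustrative stand-ins, not the paper's implementation, which combines structural, linguistic, and interdependence measures across each Title of the U.S. Code.

```python
import math
from collections import Counter

def word_entropy(text: str) -> float:
    """Shannon entropy (bits per word) of the text's word distribution.

    Higher entropy means a more varied vocabulary, and thus a larger
    knowledge-acquisition burden on the reader.
    """
    words = text.lower().split()
    counts = Counter(words)
    total = len(words)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Placeholder text; the paper computes such measures over each actual Title.
title_text = "the secretary shall prescribe regulations the secretary deems necessary"
print(f"entropy: {word_entropy(title_text):.3f} bits per word")
```

Scoring each Title on measures like this one (alongside structural depth and cross-reference counts) is what allows the relative complexity rankings the abstract describes.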

This is a draft version, so we invite your comments (katzd@law.msu.edu and michael.bommarito@gmail.com). Also, for those who might be interested, we are building out a full replication page for the paper. In the meantime, all of the relevant code and data can be accessed at GitHub and from the Cornell Legal Information Institute.

UPDATE: The paper was named “Download of the Week” by Legal Theory Blog.