Machine Learning Explained (via Google's Nat + Lo)

A useful summary covering much of the material we try to teach our students in our Legal Analytics course (which could really be called Machine Learning for Lawyers). BTW – for those of you who emailed us, we promise to fill out the balance of the free, online Legal Analytics course materials in the coming months.

Your Lawyer May Soon Ask This AI-Powered App for Legal Help (via Wired)

“I thought back to that big problem lawyers face in their day-to-day work, and how it impacts regular people,” Ovbiagele tells WIRED. “I thought we should apply the capabilities of machine learning to tackle this problem and make things better for lawyers and for clients.”

“LegalRank can figure out which results get preferential treatment, whether that’s prioritizing a case that has more citations, knowing that a Supreme Court case should rank higher than a local decision, and other nuances.”
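To make that idea concrete, here is a minimal Python sketch of citation-and-court-aware ranking in the spirit of the description above. The court tiers, weights, and scoring function are my own illustrative assumptions, not the actual LegalRank algorithm.

```python
# Hypothetical sketch of ranking that favors higher courts and more-cited
# cases. Weights and tiers are illustrative assumptions, not ROSS's method.

from dataclasses import dataclass

# Assumed court hierarchy: higher tier -> more authoritative.
COURT_TIER = {"supreme": 3, "appellate": 2, "local": 1}

@dataclass
class Case:
    name: str
    court: str           # one of COURT_TIER's keys
    citation_count: int  # how many later decisions cite this case

def rank_score(case: Case, w_court: float = 10.0, w_cites: float = 1.0) -> float:
    """Toy relevance score: weight court level heavily, then citations."""
    return w_court * COURT_TIER[case.court] + w_cites * case.citation_count

def rank_results(cases: list[Case]) -> list[Case]:
    """Return search results ordered from most to least authoritative."""
    return sorted(cases, key=rank_score, reverse=True)

if __name__ == "__main__":
    results = [
        Case("Local Decision A", "local", citation_count=40),
        Case("Supreme Court Case B", "supreme", citation_count=12),
        Case("Appellate Case C", "appellate", citation_count=25),
    ]
    for c in rank_results(results):
        print(f"{rank_score(c):6.1f}  {c.name}")
```

Even this toy version captures the basic intuition: the authority of the court dominates, and citation counts help break ties.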

NextLaw Labs, a subsidiary of Dentons (the world’s largest global law firm), has signed ROSS Intelligence as its first portfolio company.

Full disclosure – I am a member of the advisory board of NextLaw Labs.

Using Technology and System Design to Improve the Dodd-Frank Resolution Planning Requirement (+ Better Manage Complexity)

This past week, I had the pleasure of participating in a half-day, closed-door session with roughly 40-50 folks from the financial services industry, including several of the world’s finest law firms, representatives from SIFI and non-SIFI financial institutions, as well as folks from IBM Watson and LegalOnRamp (a Watson ecosystem partner).

The specific subject was RRP – the resolution planning / living wills requirement under Dodd-Frank. Former Congressman Barney Frank provided opening remarks and joined the group for the balance of the half-day session. Paul Lippe and I discussed our recent paper on Resolution Planning, which was published in Banking Perspective (The Journal of The Clearing House).

As we argue in the paper, the ‘too big to fail’ argument is not really that intellectually forceful. The question – properly posed – is what to do about complexity and the management of complex systems. The complex and interdependent nature of the banking ecosystem is what really challenges efforts to develop robust regulatory and management structures, and this would be true even if existing financial institutions were made smaller.

Our conversation was about how to use technology and system redesign to confront and manage wide-scale complexity. Resolution planning should not be focused solely on clearing the existing regulatory hurdle; it can actually be an opportunity for organizations to build better financial/legal information infrastructure (ultimately leading to an internet of contracts or, more broadly, an internet of legal things). With that infrastructure in place, banks will be better positioned to manage and properly price risk.
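For readers wondering what an “internet of contracts” might look like at the data level, here is a deliberately simple Python sketch: contracts represented as edges in a graph of legal entities, so that interdependencies can be queried during resolution planning. The entity names, fields, and exposure figures are purely hypothetical and are not any bank’s or regulator’s actual data model.

```python
# Hypothetical sketch: machine-readable contracts as a graph of exposures.

from collections import defaultdict

# Each contract links two entities with a notional exposure (in $MM).
contracts = [
    {"from": "BankCo Parent", "to": "BankCo Broker-Dealer", "exposure": 500},
    {"from": "BankCo Broker-Dealer", "to": "Counterparty A", "exposure": 200},
    {"from": "BankCo Broker-Dealer", "to": "Counterparty B", "exposure": 150},
    {"from": "BankCo Parent", "to": "Counterparty B", "exposure": 75},
]

# Build an adjacency list so exposures can be traversed from any entity.
graph = defaultdict(list)
for c in contracts:
    graph[c["from"]].append((c["to"], c["exposure"]))

def downstream_exposure(entity: str) -> float:
    """Total exposure reachable from `entity` – a crude interdependency metric."""
    seen, stack, total = set(), [entity], 0.0
    while stack:
        node = stack.pop()
        for neighbor, exposure in graph.get(node, []):
            total += exposure
            if neighbor not in seen:
                seen.add(neighbor)
                stack.append(neighbor)
    return total

print(downstream_exposure("BankCo Parent"))  # 925.0 in this toy example
```

The point is not the code itself but the design choice: once contracts are structured data rather than PDFs, questions about interdependency and cascading exposure become queries rather than document reviews.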

Dentons (World's Largest Law Firm) Launches NextLaw Labs and Creates Legal Business Accelerator

Read stories here, here, and here, plus check out NextLaw Labs.

Stanford CodeX FutureLaw Conference 2015 – Conference on Innovation, Technology and the Future of the Legal Industry

Chicago Legal Innovation & Technology Meetup (@Skadden in Chicago)

Thanks to everyone who joined us last night @Skadden for the Chicago Legal Innovation & Technology Meetup. Our next meeting is at Chapman & Cutler in May, with a series of additional events in the months to come …

Even The Algorithms Think Obamacare’s Survival Is A Tossup (via 538.com)

Readers will probably observe that {Marshall+} is still a work in progress (for example, my colleague noted that {Marshall+} rates Justice Ginsburg as slightly more likely to vote to overturn the ACA than Justice Thomas). While that particular call will probably not prove correct in King v. Burwell, our method is rigorously backtested and designed to minimize errors across all predictions, not just in this specific case. That optimization question is tricky for the model and will be a source of future improvements. I have long preached the mantra Humans + Machines > Humans or Machines, and this problem is a good example: exclusive reliance on human experts runs into cognitive biases, information-processing limits, and the like, while models generate errors that no human would make.
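A toy numerical example illustrates the global-versus-local trade-off. The probabilities and outcomes below are fabricated for illustration and are not {Marshall+} output; the point is simply that a model tuned to minimize error across all backtested predictions can still miss a particular borderline case.

```python
# Illustrative only: a "globally accurate" model can still miss one case.

def overall_accuracy(predictions: list[float], outcomes: list[int],
                     threshold: float = 0.5) -> float:
    """Fraction of historical votes predicted correctly at a given threshold."""
    correct = sum((p >= threshold) == bool(y)
                  for p, y in zip(predictions, outcomes))
    return correct / len(outcomes)

# Fabricated backtest: predicted probabilities that a justice votes to
# reverse, alongside what actually happened (1 = reversed, 0 = affirmed).
probs   = [0.90, 0.80, 0.70, 0.60, 0.55, 0.40, 0.30, 0.20, 0.52, 0.48]
actuals = [1,    1,    1,    1,    1,    0,    0,    0,    0,    1]

print(f"backtested accuracy: {overall_accuracy(probs, actuals):.0%}")
# Prints 80% -- the model is right on 8 of 10 historical votes even though it
# misses the two borderline cases (0.52 -> affirmed, 0.48 -> reversed), which
# is exactly the trade-off described above.
```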

Anyway, the good thing about having a base model such as {Marshall+} is that we can begin to incorporate a range of additional information in an effort to create a {Marshall++} and beyond. And on that front, there is more to come …