From the abstract: “Computer scientists have recently undermined our faith in the privacy-protecting power of anonymization, the name for techniques for protecting the privacy of individuals in large databases by deleting information like names and social security numbers. These scientists have demonstrated they can often “reidentify” or “deanonymize” individuals hidden in anonymized data with astonishing ease. By understanding this research, we will realize we have made a mistake, labored beneath a fundamental misunderstanding, which has assured us much less privacy than we have assumed. This mistake pervades nearly every information privacy law, regulation, and debate, yet regulators and legal scholars have paid it scant attention. We must respond to the surprising failure of anonymization, and this Article provides the tools to do so.”
Given my involvement with the Gerald R. Ford School of Public Policy, many have justifiably asked me to describe how a computational simulation could assist in the crafting of public policy. The Electricity Market Simulations run at Argonne National Lab are a nice example. These are high-level models run over six decision levels and include features such as a simulated bid market. Argonne has used this model to help the State of Illinois, as well as several European countries, regulate their electricity markets.
Formally organized into 50 titles, the United States Code is the repository for federal statutory law. While each of the 50 titles defines a particular substantive domain, the structure within and across titles can be represented as a graph/network. In a series of prior posts, we offered visualizations at various “depths” for a number of well-known U.S.C. titles. Click here and here for our two separate visualizations of the Tax Code (Title 26). Click here for our visualization of the Bankruptcy Code (Title 11). Click here for our visualization of Copyright (Title 17). While our prior efforts were devoted to displaying the structure of a given title of the US Code, the visualization above offers a complete view of the structure of the entire United States Code (Titles 1-50).
In the Zoomorama visual, each title is labeled with its respective number. The small black dots are “vertices” representing all sections in the aggregate US Code (~37,500 total sections). Given the size of the total undertaking, every title in the visual above is represented to the “section level.” As we described in earlier posts, a “section level” representation halts at the section and thus does not represent any subsection depth. For example, all subsections under 26 U.S.C. § 501, including the well-known § 501(c)(3), are attributed upward to their parent section.
There are two sources of structure within the United States Code. The explicitly defined structure / linkage / dependency derives from the sections contained under a given title. The more nuanced form of structure is obtained from references or definitions contained within particular sections. This class of connections not only links sections within a given title but also connects sections across titles. In the visual above, we represent these important cross-title references by coloring them red.
Taken together, the full graph of the United States Code is quite large: a directed graph with |V| = 37,500 and |E| = 197,749. There exist 37,500 total sections distributed across the 50 titles. However, these sections are not distributed in a uniform manner. For example, Title 1 features very few sections, while titles such as 26 and 42 contain many. The number of edges far outstrips the number of vertices, with a total of 197,000+ edges in the graph.
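For readers who want to experiment with this kind of structure themselves, here is a minimal sketch of how such a graph might be assembled using the networkx library. The section labels and edge attributes are purely illustrative and are not drawn from our actual data pipeline.

# A minimal, hypothetical sketch of the two kinds of structure described
# above, built with networkx. Labels and attributes are illustrative only.
import networkx as nx

G = nx.DiGraph()

# Explicit hierarchical structure: sections hang under their parent title.
G.add_edge("Title 26", "26 USC 501", kind="hierarchy")
G.add_edge("Title 11", "11 USC 362", kind="hierarchy")

# Cross-title references: a section in one title cites a section in another.
# In the visual above, edges of this kind are the ones colored red.
G.add_edge("26 USC 501", "11 USC 362", kind="cross_title")

print(G.number_of_nodes(), "vertices;", G.number_of_edges(), "edges")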
We are currently writing the paper on this subject … so please consider this the trailer. Above we offer the same visual of the United States Code (Titles 1-50) that we previously offered here, but this time we are using Zoomorama. Zoomorama is an alternative to Seadragon that we believe might perform better on certain machine configurations.
If you click on the image above you should be taken straight to the full-page image. From there you should be able to click to zoom in and then read the titles. For those unfamiliar with the tool, please click here for the Zoomorama Instructions!
Download the paper: Collins, Christopher; Viégas, Fernanda B.; Wattenberg, Martin. Parallel Tag Clouds to Explore Faceted Text Corpora. To appear in Proceedings of the IEEE Symposium on Visual Analytics Science and Technology (VAST), October 2009. [Note: The paper is 24.5 MB.]
Here is the abstract: Do court cases differ from place to place? What kind of picture do we get by looking at a country’s collection of law cases? We introduce Parallel Tag Clouds: a new way to visualize differences amongst facets of very large metadata-rich text corpora. We have pointed Parallel Tag Clouds at a collection of over 600,000 US Circuit Court decisions spanning a period of 50 years and have discovered regional as well as linguistic differences between courts. The visualization technique combines graphical elements from parallel coordinates and traditional tag clouds to provide rich overviews of a document collection while acting as an entry point for exploration of individual texts. We augment basic parallel tag clouds with a details-in-context display and an option to visualize changes over a second facet of the data, such as time. We also address text mining challenges such as selecting the best words to visualize, and how to do so in reasonable time periods to maintain interactivity.
In the next few tutorials, we’re going to transition to exploring how to model dynamics on a network.
The first tutorial was a bit of a blockbuster length-wise because there was a lot of ground to cover to get the basic model up and running. Moving forward, we’ll be able to go a bit more incrementally, adding elements to the basic model as we go. If you don’t feel comfortable with the original, go back and take a look at it and make sure that it makes sense to you before moving on.
We’re first going to deal with some of the efficiency issues in the first model. After this, we’ll make some basic changes to the architecture of the SIR program that make it more amenable to contact patterns on a social network.
Finally, we’ll show you how to take the output of your epidemic model and generate animations like this one:
Blue nodes are exposed but uninfected, red nodes are infectious, and yellow ones have recovered.
The movie is a bit of a carrot to get you through the less flashy, but, I promise, important and actually interesting nuts and bolts of putting these kinds of models together.
This tutorial is going to cover the last two big things that we need to tackle before we get to the model of an outbreak on a network. So, here we go!
New Concepts
1. Arbitrarily Distributed Infectious Periods
First, we’re going to deal with the duration of the infectious period. The assumption of an exponentially distributed infectious period is unnecessarily restrictive for a general model of diffusion, and the way the original code goes about recovering individuals – drawing a random number on every step for every infectious individual – should strike you as both inelegant and computationally inefficient, particularly when the rate of recovery is slow and there are many infectious individuals.
In order to deal with this, we’re going to introduce two new tools. The first is the scipy.stats toolkit and the second is a neat (and very easy to use) data structure called a heap.
A heap is in very many ways what it sounds like: imagine a pile of trash in a landfill; the tires and rusting washing machines are on the bottom, while the pop cans and grocery store receipts are closer to the top.
As a programming tool, a heap is useful because it always keeps the smallest (or largest, depending on your preference) item at the top of the list. It also allows for fast, logarithmic-time insertion and removal of objects: if the heap holds N items, pushing or popping one takes on the order of log(N) steps (times a constant), so the cost of each update grows very slowly as the list gets bigger. Re-sorting the whole list after every change, by contrast, could take N*log(N) or even N^2 steps to accomplish the same thing.
In our outbreak model, the top item on the heap is always going to be the time at which the next individual recovers. This lets us avoid the loop in the first tutorial (replicated in one implementation here) that checks whether each infectious individual is going to recover on each step.
Looping over everyone is the most intuitive way to check if they’re going to recover, but it’s very inefficient, especially when infectious periods are long and the population is large. It’s also problematic from a theoretical perspective, because it chains us to exponentially distributed recovery periods.
Exponentially distributed infectious periods make analytic sense for a deterministic model, but your disease or *insert diffusible here* may have a constant or normally distributed ‘infectious’ period.
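For concreteness, here is a minimal sketch of how scipy.stats can be used to draw infectious periods from several different distributions; the mean period and other parameter values are illustrative only.

# A minimal sketch of drawing infectious periods with scipy.stats.
# The mean period and other parameters below are illustrative only.
import scipy.stats

mean_period = 5.0  # mean infectious period, in model time units

# Exponentially distributed periods (the classic SIR assumption).
exponential_periods = scipy.stats.expon.rvs(scale=mean_period, size=10)

# Normally distributed periods, clipped at zero so no duration is negative.
normal_periods = scipy.stats.norm.rvs(loc=mean_period, scale=1.0, size=10).clip(min=0.0)

# A constant infectious period requires no random draw at all.
constant_periods = [mean_period] * 10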
By using a heap-based implementation, as you will see, we can use arbitrary recovery periods, and Python’s implementation of the heap is very straightforward – just a small twist on the usual list using the heapq module.
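To make the heap idea concrete, here is a minimal sketch of how a recovery heap might work with heapq; the node labels and times are illustrative and are not the tutorial’s actual code.

# A minimal sketch of a recovery heap built with heapq. We push
# (recovery_time, node) tuples so the earliest recovery is always on top.
import heapq
import scipy.stats

recovery_heap = []  # an ordinary Python list, managed by heapq

# Suppose nodes 1, 2, and 3 become infected at t = 0, with exponentially
# distributed infectious periods (mean of 5 time units).
for node in (1, 2, 3):
    recovery_time = scipy.stats.expon.rvs(scale=5.0)
    heapq.heappush(recovery_heap, (recovery_time, node))

# At any time t, we only inspect the top of the heap: recover everyone
# whose scheduled recovery time has already passed, then stop.
t = 4.0
while recovery_heap and recovery_heap[0][0] <= t:
    recovery_time, node = heapq.heappop(recovery_heap)
    print("node", node, "recovered at", recovery_time)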
2. Object Oriented Programming
One of Python’s strengths is that it supports a style of programming that mixes the best of object-oriented programming (OOP) and procedural or imperative programming.
We won’t go too deep into the details of OOP here, but the real strength of OOP is that it allows code to be easily re-used in other programs (Python’s all-powerful ‘import’ statement really makes this true) and also imposes some structure on which functions have access to which variables.
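As a flavor of what this looks like in practice, here is a minimal, hypothetical sketch of wrapping the pieces above in a class; the class and attribute names are illustrative and are not the commented code linked below.

# A minimal, hypothetical sketch of an object-oriented wrapper around the
# ideas above. Class and attribute names are illustrative only.
import heapq
import scipy.stats


class SIRModel(object):
    """Bundle the model's state and behavior into one reusable object."""

    def __init__(self, beta, mean_infectious_period):
        self.beta = beta                                      # transmission rate
        self.mean_infectious_period = mean_infectious_period  # mean recovery delay
        self.infectious = set()                               # currently infectious nodes
        self.recovery_heap = []                               # (recovery_time, node) tuples

    def infect(self, node, t):
        """Mark a node infectious and schedule its recovery on the heap."""
        self.infectious.add(node)
        period = scipy.stats.expon.rvs(scale=self.mean_infectious_period)
        heapq.heappush(self.recovery_heap, (t + period, node))

    def recover_until(self, t):
        """Recover every node whose scheduled recovery time has passed."""
        while self.recovery_heap and self.recovery_heap[0][0] <= t:
            _, node = heapq.heappop(self.recovery_heap)
            self.infectious.discard(node)

Because the model lives in its own class, another script can simply import it and re-use it without copying and pasting code.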
Click Below to Review the Implementation and Commented Code!
From the article … “Texas began testing almost every student in almost every public school in grades 3-11 in 2003 with the Texas Assessment of Knowledge and Skills (TAKS). Every other state in the United States administers similar tests and gathers similar data, either because of its own testing history, or because of the Elementary and Secondary Education Act of 2001 (No Child Left Behind, or NCLB). Texas mathematics scores for the years 2003 through 2007 comprise a data set involving more than 17 million examinations of over 4.6 million distinct students. Here we borrow techniques from statistical mechanics developed to describe particle flows with convection and diffusion and apply them to these mathematics scores. The methods we use to display data are motivated by the desire to let the numbers speak for themselves with minimal filtering by expectations or theories.
The most similar previous work describes schools using Markov models. “Demographic accounting” predicts changes in the distribution of a population over time using Markov models and has been used to try to predict student enrollment year to year, likely graduation times for students, and the production of and demand for teachers. We obtain a more detailed description of students based on large quantities of testing data that are just starting to become available. Working in a space of score and time we pursue approximations that lead from general Markov models to Fokker–Planck equations, and obtain the advantages in physical interpretation that follow from the ideas of convection and diffusion.”
Several months ago, I put together this syllabus for use in a hypothetical seminar course entitled Law as a Complex System. I had hoped to revise this reading list / syllabus and have now done so. While this revised version might contain more content than would be practical for the typical 2-3 credit seminar, I do believe it is far more reasonable than Version 1.0. For anyone who is interested in learning more about the methodological tradition from which much of our scholarship is drawn, feel free to use this as a reading list. If you see any law-related scholarship you believe should be included, please feel free to email me.
As part of our multipart series on the clerkship tournament, here is a simple bar graph for the top-placing law schools in the Supreme Court Clerkship Tourney. It is important to note that we do not threshold for the number of graduates per school. Specifically, we do not simply divide by the number of graduates per school because we have little theoretical reason to believe that placements scale linearly with differences in the size of graduating classes. In other words, given that we do not know the proper functional form, we just offer the raw data. For those interested in other posts, please click here for the law clerks tag.
As we mentioned in previous posts, Seadragon is a really cool product. Please note that load times may vary depending upon your specific machine configuration as well as the strength of your internet connection. For those not familiar with how to operate it, please see below. In our view, Full Screen is the best way to go ….
Benoît Mandelbrot, whose classic work on fractals as well as more recent work questioning the Efficient Market Hypothesis offers a lasting contribution to positive economic theory. While the committee is likely considering Eugene Fama and/or Kenneth French (of Fama-French fame), we believe they should instead consider Mandelbrot (or at a minimum split the award between Fama, French & Mandelbrot).
Robert Shiller for his contributions to behavioral finance, including his work challenging the Efficient Market Hypothesis. Of course, Shiller is also well known for his work on the real estate market with Karl Case (including the Case-Shiller Index). This also represents important work worthy of recognition.
Elinor Ostrom for her work on public choice theory, common pool resources, and collective action. Her work has offered a substantial contribution to political economy as well as institutional and environmental economics. (Note: Ladbrokes places her at 50 to 1.)