Broken Promises of Privacy: Responding to the Surprising Failure of Anonymization

Ohm on Privacy

On this blog, we have previously featured the work of Paul Ohm (Colorado Law School), including his important article Computer Programming and the Law: A New Research Agenda. Professor Ohm has recently posted Broken Promises of Privacy: Responding to the Surprising Failure of Anonymization, 57 UCLA Law Review ____ (forthcoming 2010). A review of SSRN downloads indicates that, despite having been posted only within the last two months, this paper is the most downloaded new law paper posted to SSRN in the past 12 months.

From the abstract: “Computer scientists have recently undermined our faith in the privacy-protecting power of anonymization, the name for techniques for protecting the privacy of individuals in large databases by deleting information like names and social security numbers. These scientists have demonstrated they can often “reidentify” or “deanonymize” individuals hidden in anonymized data with astonishing ease. By understanding this research, we will realize we have made a mistake, labored beneath a fundamental misunderstanding, which has assured us much less privacy than we have assumed. This mistake pervades nearly every information privacy law, regulation, and debate, yet regulators and legal scholars have paid it scant attention. We must respond to the surprising failure of anonymization, and this Article provides the tools to do so.”
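
For readers who want a concrete feel for the reidentification techniques Ohm describes, here is a minimal sketch of a classic linkage attack: joining an “anonymized” release against a public record on shared quasi-identifiers. All tables, column names, and records below are invented for illustration.

```python
import pandas as pd

# Hypothetical "anonymized" release: names removed, quasi-identifiers kept.
medical = pd.DataFrame({
    "zip":        ["53715", "53715", "53703"],
    "birth_date": ["1965-02-13", "1971-07-25", "1965-02-13"],
    "sex":        ["F", "M", "F"],
    "diagnosis":  ["hypertension", "asthma", "diabetes"],
})

# Hypothetical public record (e.g., a voter roll) that includes names.
voters = pd.DataFrame({
    "name":       ["A. Smith", "B. Jones"],
    "zip":        ["53715", "53703"],
    "birth_date": ["1965-02-13", "1965-02-13"],
    "sex":        ["F", "F"],
})

# The "attack" is nothing more than a join on the shared quasi-identifiers.
reidentified = medical.merge(voters, on=["zip", "birth_date", "sex"])
print(reidentified[["name", "diagnosis"]])
```

The point of the sketch is that no single column identifies anyone; it is the combination of columns that does the damage.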

Electricity Market Simulations @ Argonne National Laboratory

Given my involvement with the Gerald R. Ford School of Public Policy, many have justifiably asked me to describe how a computational simulation could assist in the crafting of public policy. The electricity market simulations run at Argonne National Laboratory represent a nice example. These are high-level models run over six decision levels, and they include features such as a simulated bid market. Argonne has used these models to help the State of Illinois, as well as several European countries, regulate their electricity markets.
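
Argonne's actual agent-based models are far richer, but a toy uniform-price clearing of a simulated bid market conveys the basic mechanic. Every generator, bid, and demand figure below is hypothetical.

```python
# Toy uniform-price auction: sort bids in merit order and dispatch until
# demand is met. Real agent-based market models layer learning, transmission
# constraints, and multiple decision levels on top of this clearing step.
bids = [  # (generator, capacity in MW, offer price in $/MWh) -- hypothetical
    ("coal_1", 400, 25.0),
    ("gas_1",  300, 40.0),
    ("gas_2",  200, 55.0),
    ("peaker", 100, 90.0),
]
demand_mw = 750

dispatched, remaining = [], demand_mw
for gen, cap, price in sorted(bids, key=lambda b: b[2]):
    take = min(cap, remaining)
    if take > 0:
        dispatched.append((gen, take, price))
        remaining -= take

clearing_price = dispatched[-1][2]  # the marginal unit sets the price
print(f"Clearing price: ${clearing_price}/MWh")
for gen, mw, _ in dispatched:
    print(f"  {gen}: {mw} MW dispatched")
```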

The Structure of the United States Code [w/ Zoomorama] [Repost]

US CODE (All Titles)

Formally organized into 50 titles, the United States Code is the repository for federal statutory law. While each of the 50 titles defines a particular substantive domain, the structure within and across titles can be represented as a graph/network. In a series of prior posts, we offered visualizations at various “depths” for a number of well-known U.S.C. titles: click here and here for our two separate visualizations of the Tax Code (Title 26), here for our visualization of the Bankruptcy Code (Title 11), and here for our visualization of Copyright (Title 17). While our prior efforts were devoted to displaying the structure of a given title of the US Code, the visualization above offers a complete view of the structure of the entire United States Code (Titles 1-50).

In the Zoomorama visual above, each title is labeled with its respective number. The small black dots are “vertices” representing all sections in the aggregate US Code (~37,500 total sections). Given the size of the total undertaking, every title in the visual above is represented at the “section level.” As we described in earlier posts, a “section level” representation halts at the section and thus does not capture any subsection depth. For example, all subsections of 26 U.S.C. § 501, including the well-known § 501(c)(3), are reattributed upward to their parent section.
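
For the technically inclined, a one-function sketch of this “section level” rollup might look like the following. The citation format it parses is a simplification for illustration, not the format used by our actual pipeline.

```python
import re

def rollup_to_section(citation: str) -> str:
    """Strip subsection detail, e.g. '26 USC 501(c)(3)' -> '26 USC 501'.

    Assumes citations look like 'TITLE USC SECTION(...)'; real U.S.C.
    citations are messier, so treat this as illustrative only.
    """
    match = re.match(r"(\d+ USC \d+[A-Za-z]*)", citation)
    return match.group(1) if match else citation

print(rollup_to_section("26 USC 501(c)(3)"))  # -> 26 USC 501
```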

There are two sources of structure within the United States Code. The explicitly defined structure / linkage / dependency derives from the hierarchy of sections contained under a given title. The more nuanced form of structure is obtained from the references or definitions contained within particular sections. This class of connections not only links sections within a given title but also connects sections across titles. In the visual above, we represent these important cross-title references by coloring them red.

Taken together, the full graph of the United States Code is quite large {i.e., a directed graph with |V| = 37,500 and |E| = 197,749}. There are 37,500 total sections distributed across the 50 titles. However, these sections are not distributed uniformly: titles such as Title 1 feature very few sections, while titles such as 26 and 42 contain many. The number of edges far outstrips the number of vertices, with 197,749 total edges in the graph.
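
To make the graph framing concrete, here is a minimal sketch (not our actual pipeline) of how such a citation network might be assembled with networkx. The two-edge list is a hypothetical stand-in for the full set of parsed references.

```python
import networkx as nx

# Hypothetical edge list: (citing section, cited section). The real graph
# has ~37,500 vertices and 197,749 directed edges.
edges = [
    ("26 USC 501", "26 USC 170"),    # intra-title reference (within Title 26)
    ("42 USC 1983", "28 USC 1343"),  # cross-title reference
]

G = nx.DiGraph()
G.add_edges_from(edges)

def title_of(section: str) -> str:
    return section.split()[0]  # the leading number is the title

# Flag the cross-title references (the edges drawn in red in the visual).
for u, v in G.edges():
    G[u][v]["cross_title"] = title_of(u) != title_of(v)

print(G.number_of_nodes(), "vertices;", G.number_of_edges(), "edges")
print([(u, v) for u, v, d in G.edges(data=True) if d["cross_title"]])
```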

We are currently writing a paper on this subject, so please consider this the trailer. Above we offer the same visual of the United States Code (Titles 1-50) that we previously offered here, this time using Zoomorama. Zoomorama is an alternative to Seadragon that we believe may perform better on certain machine configurations.

If you click on the image above, you should be taken straight to the full-page image. From there you should be able to click to zoom in and read the titles. For those unfamiliar, please click here for the Zoomorama instructions!

United States Court of Appeals & Parallel Tag Clouds from IBM Research

Ct of Appeals

Download the paper: Collins, Christopher; Viégas, Fernanda B.; Wattenberg, Martin. “Parallel Tag Clouds to Explore Faceted Text Corpora.” To appear in Proceedings of the IEEE Symposium on Visual Analytics Science and Technology (VAST), October 2009. [Note: the paper is 24.5 MB.]

Here is the abstract: Do court cases differ from place to place? What kind of picture do we get by looking at a country’s collection of law cases? We introduce Parallel Tag Clouds: a new way to visualize differences amongst facets of very large metadata-rich text corpora. We have pointed Parallel Tag Clouds at a collection of over 600,000 US Circuit Court decisions spanning a period of 50 years and have discovered regional as well as linguistic differences between courts. The visualization technique combines graphical elements from parallel coordinates and traditional tag clouds to provide rich overviews of a document collection while acting as an entry point for exploration of individual texts. We augment basic parallel tag clouds with a details-in-context display and an option to visualize changes over a second facet of the data, such as time. We also address text mining challenges such as selecting the best words to visualize, and how to do so in reasonable time periods to maintain interactivity.
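
The paper's word-selection machinery is considerably more sophisticated, but the core idea it addresses, scoring terms by how overrepresented they are in one court's opinions relative to the corpus as a whole, can be sketched in a few lines. All counts below are invented.

```python
from collections import Counter

# Hypothetical term counts; the real corpus holds 600,000+ opinions.
court_counts = Counter({"oil": 50, "appeal": 400, "pipeline": 30})
corpus_counts = Counter({"oil": 120, "appeal": 9000, "pipeline": 60})
court_total = sum(court_counts.values())
corpus_total = sum(corpus_counts.values())

def distinctiveness(term: str) -> float:
    # Ratio of in-court frequency to corpus-wide frequency; add-one
    # smoothing avoids division by zero for unseen terms.
    p_court = (court_counts[term] + 1) / (court_total + 1)
    p_corpus = (corpus_counts[term] + 1) / (corpus_total + 1)
    return p_court / p_corpus

for term in sorted(court_counts, key=distinctiveness, reverse=True):
    print(f"{term}: {distinctiveness(term):.1f}")
```

In this toy version, terms like “oil” and “pipeline” would surface for a circuit that hears disproportionately many energy cases, while a ubiquitous word like “appeal” would not.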

A Statistical Mechanics Take on No Child Left Behind — Flow and Diffusion of High-Stakes Test Scores [From PNAS]

PNAS NCLB

The October 13th edition of the Proceedings of the National Academy of Sciences features a very interesting article by Michael Marder and Dhruv Bansal of the University of Texas.

From the article … “Texas began testing almost every student in almost every public school in grades 3-11 in 2003 with the Texas Assessment of Knowledge and Skills (TAKS). Every other state in the United States administers similar tests and gathers similar data, either because of its own testing history, or because of the Elementary and Secondary Education Act of 2001 (No Child Left Behind, or NCLB). Texas mathematics scores for the years 2003 through 2007 comprise a data set involving more than 17 million examinations of over 4.6 million distinct students. Here we borrow techniques from statistical mechanics developed to describe particle flows with convection and diffusion and apply them to these mathematics scores. The methods we use to display data are motivated by the desire to let the numbers speak for themselves with minimal filtering by expectations or theories.

The most similar previous work describes schools using Markov models. “Demographic accounting” predicts changes in the distribution of a population over time using Markov models and has been used to try to predict student enrollment year to year, likely graduation times for students, and the production of and demand for teachers. We obtain a more detailed description of students based on large quantities of testing data that are just starting to become available. Working in a space of score and time we pursue approximations that lead from general Markov models to Fokker–Planck equations, and obtain the advantages in physical interpretation that follow from the ideas of convection and diffusion.”
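
As a rough illustration of the convection-diffusion framing, here is a toy finite-difference evolution of a score distribution under fixed drift (convection) and diffusion coefficients. All coefficients below are invented; Marder and Bansal estimate the analogous quantities from millions of actual TAKS score records.

```python
import numpy as np

# Toy convection-diffusion step for a cohort's score distribution p(x):
#   p <- p + dt * ( -v * dp/dx + D * d^2p/dx^2 )
x = np.linspace(0, 100, 101)            # score axis
p = np.exp(-((x - 40.0) ** 2) / 200.0)  # initial score distribution
p /= p.sum()

v, D, dx, dt = 2.0, 2.0, 1.0, 0.1       # hypothetical drift and diffusion

for _ in range(50):                     # evolve the cohort forward in time
    dpdx = np.gradient(p, dx)
    d2pdx2 = np.gradient(dpdx, dx)
    p = p + dt * (-v * dpdx + D * d2pdx2)
    p = np.clip(p, 0.0, None)           # keep p a valid distribution
    p /= p.sum()

print(f"mean score drifted from 40.0 to {np.dot(x, p):.1f}")
```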

The Clerkship Tournament: Supreme Court Edition [Repost from 6/3]

As part of our multipart series on the clerkship tournament, here is a simple bar graph of the top-placing law schools in the Supreme Court Clerkship Tourney. It is important to note that we do not threshold for the number of graduates per school. Specifically, we do not simply divide by the number of graduates per school, because we have little theoretical reason to believe that placements scale linearly with the size of graduating classes. In other words, since we do not know the proper functional form, we just offer the raw data (see the toy comparison below). For those interested in other posts, please click here for the law clerks tag.
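
To illustrate the normalization point, consider a toy comparison. All school names, counts, and class sizes below are hypothetical: raw placement counts and a naive per-graduate rate can produce different rankings, which is precisely why we decline to impose a functional form.

```python
# Hypothetical placement counts and class sizes -- not real data.
placements = {"School A": 30, "School B": 12, "School C": 10}
class_size = {"School A": 550, "School B": 180, "School C": 400}

raw_rank = sorted(placements, key=placements.get, reverse=True)
per_grad_rank = sorted(placements,
                       key=lambda s: placements[s] / class_size[s],
                       reverse=True)

print("raw ranking:         ", raw_rank)
print("per-graduate ranking:", per_grad_rank)
# Dividing by class size bakes in the assumption that placements scale
# linearly with class size -- the very functional form we decline to assume.
```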

The Map of the Future [From Densitydesign.org]

As we mentioned in previous posts, Seadragon is a really cool product. Please note that load times may vary depending upon your specific machine configuration as well as the strength of your internet connection. For those not familiar with how to operate it, please see below. In our view, Full Screen is the best way to go.

Who Should Win (probably not who will win) the 2009 Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel

Benoît Mandelbrot, whose classic work on fractals, as well as his more recent work questioning the Efficient Market Hypothesis, offers a lasting contribution to positive economic theory. While the committee is likely considering Eugene Fama and/or Kenneth French (of Fama-French fame), we believe they should instead consider Mandelbrot (or, at a minimum, split the award between Fama, French & Mandelbrot).

Robert Axelrod, whose work on the evolution of cooperation is among the most cited in all of the social sciences. The Iterated Prisoner’s Dilemma, as well as concepts such as Tit for Tat, is part of the canon of almost every introductory course in game theory.

Robert Shiller, for his contributions to behavioral finance, including his work challenging the Efficient Market Hypothesis. Of course, Shiller is also well known for his work on the real estate market with Karl Case (including the Case-Shiller Index), which likewise represents important work worthy of recognition.

Elinor Ostrom, for her work on public choice theory, common-pool resources, and collective action. Her work has offered a substantial contribution to political economy as well as to institutional and environmental economics. {Note: Ladbrokes places her at 50 to 1.}

UPDATE: ELINOR OSTROM AND OLIVER WILLIAMSON WIN THE 2009 NOBEL PRIZE {In Our Estimation, This Is a Very Appropriate Decision}