Election Models, Election Dynamics and Early Voting Data

As it stands today, the Biden Campaign appears quite likely, though not guaranteed, to win come November 3rd (or at some point thereafter). The race could effectively end early on the night of November 3rd if Florida trends toward Biden: it is hard to craft a scenario in which Trump loses Florida but still wins the White House. 538 has created an interactive where you can explore the inferential dynamics between the states (early results in State A update the likelihood of outcomes in State B). The interactive also highlights how results in early-reporting states can narrow the remaining plausible paths to victory (Trump has only a few such paths at this point).
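The inferential logic behind that interactive can be sketched with a toy Monte Carlo simulation: if state margins share a common national polling-error swing, then conditioning on an early result in one state updates the probability of outcomes in another. The leans and error terms below are invented for illustration only; they are not 538's actual numbers or model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Each state's margin (in points) shares a national polling-error swing.
# That shared term is what makes an early Florida call informative about
# Pennsylvania. All parameters here are hypothetical.
national = rng.normal(0.0, 3.0, n)
fl = 1.5 + national + rng.normal(0.0, 2.0, n)   # hypothetical Florida margin
pa = 5.0 + national + rng.normal(0.0, 2.0, n)   # hypothetical Pennsylvania margin

p_pa = (pa > 0).mean()
p_pa_given_fl_loss = (pa[fl < 0] > 0).mean()

print(f"P(Biden wins PA)                 = {p_pa:.2f}")
print(f"P(Biden wins PA | Trump wins FL) = {p_pa_given_fl_loss:.2f}")
```

Losing Florida does not mechanically cost Biden Pennsylvania, but because the two share a common error term, the conditional probability drops — exactly the "paths to victory" narrowing the interactive visualizes.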

Of course, it should be stated that remaining events or other issues could change these dynamics or undermine our ability to draw proper inferences from the polls. Here are a few possibilities —

(1) Another October Surprise could drop between now and Election Day (there have already been several). One implication of all of this early voting, however, is that the impact of a late October surprise is diminished.

(2) There could be systematic bias in the polling (such as an unwillingness of voters to admit their support for Trump to pollsters). Alternatively, there could be a fundamental misunderstanding of the composition of the 2020 electorate. As has been recently noted, the Trump Campaign has spent a significant amount of time on voter registration in several key battleground states. Will these newly registered voters actually turn out?

(3) Turnout dynamics could shift under the cocktail of early voting (very large numbers so far), large-scale absentee balloting (including rejection of ballots, delays in the mail, etc.), and fear of turning up to the polls amid our latest COVID surge (the Trump campaign is counting on an Election Day surge). Any or all of these could affect the final outcome.

That said, if I had to bet, I would bet on Biden to win (and give far better than even money).

We do have at least some information on the state of ongoing voting thanks to the Early Voting Tracking Project by Michael McDonald.

Turnout thus far is unprecedented. On its face, this would appear to favor the Biden Campaign. However, the question remains whether this is merely a cannibalization of normal Early In-Person Voting and/or Election Day In-Person Voting. In other words, how much will net turnout increase? Will it make a difference?

Taking Pennsylvania as a highly probable Tipping Point State, it will be interesting to see what percentage of mail-in ballots are returned in the days to come. At the county level, there is significant variation in the number of ballots returned thus far (even among voters who have already requested a ballot).

Immigration Document Classification and Automated Response Generation

ABSTRACT: “In this paper, we consider the problem of organizing supporting documents vital to U.S. work visa petitions, as well as responding to Requests For Evidence (RFE) issued by the U.S. Citizenship and Immigration Services (USCIS). Typically, both processes require a significant amount of repetitive manual effort. To reduce the burden of mechanical work, we apply machine learning methods to automate these processes, with humans in the loop to review and edit output for submission. In particular, we use an ensemble of image and text classifiers to categorize supporting documents. We also use a text classifier to automatically identify the types of evidence being requested in an RFE, and used the identified types in conjunction with response templates and extracted fields to assemble draft responses. Empirical results suggest that our approach achieves considerable accuracy while significantly reducing processing time.” Access Via arXiv — To Appear in ICDM 2020 workshop: MLLD-2020
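The abstract does not specify how the image and text classifiers are combined, but one common ensembling scheme is soft voting: averaging the two models' per-class probabilities. The category names and scores below are hypothetical, purely to illustrate the idea.

```python
import numpy as np

# Hypothetical document categories for a work-visa petition file.
CATEGORIES = ["passport", "diploma", "pay_stub", "support_letter"]

def ensemble_predict(p_image, p_text, w_image=0.5):
    """Soft-voting ensemble: weighted average of two classifiers' class
    probabilities. p_image and p_text have shape (n_docs, n_classes) and
    each row sums to 1. The paper's actual ensembling method may differ.
    """
    p = w_image * np.asarray(p_image) + (1 - w_image) * np.asarray(p_text)
    return p.argmax(axis=1), p

# Toy scores: the image model is torn between passport and diploma,
# while the text model is confident the document is a diploma.
p_image = np.array([[0.45, 0.40, 0.10, 0.05]])
p_text  = np.array([[0.10, 0.80, 0.05, 0.05]])

labels, probs = ensemble_predict(p_image, p_text)
print(CATEGORIES[labels[0]])   # → diploma
```

The appeal of this scheme for scanned immigration files is that either modality can rescue the other: a blurry scan with clear OCR text, or a distinctive layout with garbled text, still classifies correctly.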

Behavioral Nudges Reduce Failure to Appear for Court (via Science)

ABSTRACT: “Each year, millions of Americans fail to appear in court for low-level offenses, and warrants are then issued for their arrest. In two field studies in New York City, we make critical information salient by redesigning the summons form and providing text message reminders. These interventions reduce failures to appear by 13-21% and lead to 30,000 fewer arrest warrants over a 3-year period. In lab experiments, we find that while criminal justice professionals see failures to appear as relatively unintentional, laypeople believe they are more intentional. These lay beliefs reduce support for policies that make court information salient and increase support for punishment. Our findings suggest that criminal justice policies can be made more effective and humane by anticipating human error in unintentional offenses.” Access Full Article.

LEGAL-BERT: The Muppets Straight Out of Law School

ABSTRACT: “BERT has achieved impressive performance in several NLP tasks. However, there has been limited investigation on its adaptation guidelines in specialised domains. Here we focus on the legal domain, where we explore several approaches for applying BERT models to downstream legal tasks, evaluating on multiple datasets. Our findings indicate that the previous guidelines for pre-training and fine-tuning, often blindly followed, do not always generalize well in the legal domain. Thus we propose a systematic investigation of the available strategies when applying BERT in specialised domains. These are: (a) use the original BERT out of the box, (b) adapt BERT by additional pre-training on domain-specific corpora, and (c) pre-train BERT from scratch on domain-specific corpora. We also propose a broader hyper-parameter search space when fine-tuning for downstream tasks and we release LEGAL-BERT, a family of BERT models intended to assist legal NLP research, computational law, and legal technology applications.”

Congrats to all of the authors on their acceptance at the Conference on Empirical Methods in Natural Language Processing (EMNLP) in November.

In the legal scientific community, we are witnessing increasing efforts to connect general-purpose NLP advances to domain-specific applications within law. First we saw word embeddings (e.g., word2vec), and now Transformers (e.g., BERT). (And don't forget about GPT-3.) Indeed, the development of LexNLP is centered around the idea that in order to build better-performing Legal AI, we will need to connect broader NLP developments to the domain-specific needs within law. Stay tuned!

Rethinking Attention with Performers (Important New Paper on arXiv)

Transformers (such as BERT) suffer from complexity that is quadratic in the number of tokens in the input sequence, which makes training incredibly laborious and expensive. So this is an important paper by researchers from Google, Cambridge, and DeepMind —

ABSTRACT: “We introduce Performers, Transformer architectures which can estimate regular (softmax) full-rank-attention Transformers with provable accuracy, but using only linear (as opposed to quadratic) space and time complexity, without relying on any priors such as sparsity or low-rankness. To approximate softmax attention-kernels, Performers use a novel Fast Attention Via positive Orthogonal Random features approach (FAVOR+), which may be of independent interest for scalable kernel methods. FAVOR+ can be also used to efficiently model kernelizable attention mechanisms beyond softmax. This representational power is crucial to accurately compare softmax with other kernels for the first time on large-scale tasks, beyond the reach of regular Transformers, and investigate optimal attention-kernels. Performers are linear architectures fully compatible with regular Transformers and with strong theoretical guarantees: unbiased or nearly-unbiased estimation of the attention matrix, uniform convergence and low estimation variance. We tested Performers on a rich set of tasks stretching from pixel-prediction through text models to protein sequence modeling. We demonstrate competitive results with other examined efficient sparse and dense attention methods, showcasing effectiveness of the novel attention-learning paradigm leveraged by Performers.” ACCESS THE PAPER from arXiv.
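The core FAVOR+ trick — positive random features whose inner products approximate the softmax kernel in expectation — can be sketched in a few lines. This is a simplified illustration using i.i.d. Gaussian features (the paper uses orthogonal random features for lower variance) and makes no claim to match the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax_attention(Q, K, V):
    """Exact attention: O(n^2) time and space in sequence length n."""
    d = Q.shape[-1]
    logits = Q @ K.T / np.sqrt(d)
    w = np.exp(logits - logits.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V

def performer_attention(Q, K, V, m=4000):
    """FAVOR+-style estimate via positive random features: linear in n."""
    d = Q.shape[-1]
    Qs, Ks = Q / d**0.25, K / d**0.25            # folds in the 1/sqrt(d) scaling
    W = rng.normal(size=(m, d))                  # i.i.d. here; paper: orthogonal
    def phi(X):
        # E[phi(q) . phi(k)] = exp(q . k), and phi is strictly positive.
        return np.exp(X @ W.T - (X**2).sum(-1, keepdims=True) / 2) / np.sqrt(m)
    Qp, Kp = phi(Qs), phi(Ks)
    num = Qp @ (Kp.T @ V)                        # never materializes the n x n matrix
    den = Qp @ Kp.sum(axis=0)
    return num / den[:, None]

n, d = 8, 4
Q, K, V = (0.5 * rng.normal(size=(n, d)) for _ in range(3))
exact = softmax_attention(Q, K, V)
approx = performer_attention(Q, K, V)
print("max abs error:", np.abs(exact - approx).max())
```

The key line is `Qp @ (Kp.T @ V)`: by associating the matrix product this way, the cost is O(n·m·d) rather than O(n²), which is what makes Performers attractive for the long documents common in legal NLP.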

Predicting United States Policy Outcomes with Random Forests (via arXiv)

An interesting paper that follows a number of machine learning / NLP driven legislative and government prediction papers. Access the draft of the paper from arXiv.
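To make the random-forest idea concrete, here is a from-scratch miniature: bootstrap resampling plus random feature subsets, with single-split "stumps" standing in for full trees. The feature names and data are synthetic stand-ins (e.g., hypothetical measures of public support or interest-group alignment), not the paper's actual features or model.

```python
import numpy as np

def fit_stump(X, y, rng):
    """Best single-feature threshold rule, searched over a random feature subset."""
    n_features = X.shape[1]
    k = max(1, int(np.sqrt(n_features)))           # sqrt-feature heuristic
    best = (-1.0, 0, 0.0, 1)                       # (accuracy, feature, threshold, sign)
    for j in rng.choice(n_features, size=k, replace=False):
        for t in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = (sign * (X[:, j] - t) > 0).astype(int)
                acc = (pred == y).mean()
                if acc > best[0]:
                    best = (acc, j, t, sign)
    return best[1:]                                # (feature, threshold, sign)

def predict_stump(stump, X):
    j, t, sign = stump
    return (sign * (X[:, j] - t) > 0).astype(int)

def fit_forest(X, y, n_trees=25, seed=0):
    """Bagging + random feature subsets: a miniature random forest of stumps."""
    rng = np.random.default_rng(seed)
    stumps = []
    for _ in range(n_trees):
        idx = rng.integers(0, len(X), size=len(X))  # bootstrap resample
        stumps.append(fit_stump(X[idx], y[idx], rng))
    return stumps

def predict_forest(stumps, X):
    votes = np.mean([predict_stump(s, X) for s in stumps], axis=0)
    return (votes >= 0.5).astype(int)              # majority vote

# Synthetic stand-in for policy-level features and outcomes.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)            # toy "policy adopted" label

stumps = fit_forest(X, y)
acc = (predict_forest(stumps, X) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

Real random forests grow deeper trees and split on impurity rather than raw accuracy, but the two ingredients shown here — bootstrapped training sets and per-split feature subsampling — are what decorrelate the ensemble and drive its predictive gains.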

For more examples, see, e.g., the following papers —

Gerrish SM, Blei DM. “Predicting Legislative Roll Calls from Text.” Proceedings of the 28th International Conference on Machine Learning (ICML), 2011.

Yano T, Smith NA, Wilkerson JD. “Textual Predictors of Bill Survival in Congressional Committees.” Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2012.

Katz DM, Bommarito MJ, Blackman J. “A General Approach for Predicting the Behavior of the Supreme Court of the United States.” PLOS ONE, 2017.

Nay J. “Predicting and Understanding Law Making with Word Vectors and an Ensemble Model.” PLOS ONE, 2017.

Waltl B. “Semantic Analysis and Computational Modeling of Legal Documents.” PhD diss., Technische Universität München, 2018.

Davoodi M, Waltenburg E, Goldwasser D. “Understanding the Language of Political Agreement and Disagreement in Legislative Texts.” Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 5358–5368, 2020.