Tag: computer science
Introduction to Artificial Intelligence [ CS 221 – Free Online Course from Stanford University ]
“A bold experiment in distributed education, “Introduction to Artificial Intelligence” will be offered free and online to students worldwide during the fall of 2011. The course will include feedback on progress and a statement of accomplishment. Taught by Sebastian Thrun and Peter Norvig, the curriculum draws from that used in Stanford’s introductory Artificial Intelligence course. The instructors will offer similar materials, assignments, and exams.
Artificial Intelligence is the science of making computer software that reasons about the world around it. Humanoid robots, Google Goggles, self-driving cars, even software that suggests music you might like to hear are all examples of AI. In this class, you will learn how to create this software from two of the leaders in the field. Class begins October 10.” Check out the syllabus here. Although I have previously highlighted Academic Earth, I am a little late on the New York Times coverage, available here. See also Andrew Ng’s class on Machine Learning (which is also available right now on Academic Earth).
Kevin Slavin: How Algorithms Shape Our World [ TED 2011 ]
Kevin Slavin argues that we’re living in a world designed for — and increasingly controlled by — algorithms. In this riveting talk from TEDGlobal, he shows how these complex computer programs determine espionage tactics, stock prices, movie scripts, and architecture.
Stuxnet – The Development and Evolution of an Open Source Weapon
Stuxnet: Anatomy of a Computer Virus
Daniel Kraft: Medicine’s Future? There’s an App for That [TedXMaastricht]
Very interesting talk, and note this could just as easily say – Law’s Future? There’s an App for That. Generated by the nexus of available technology and the current legal employment crisis, there appears to be a growing app / “garage guys” culture breaking out. LawTechCamp, New and Emerging Legal Infrastructures, and other related conferences showcase just some of the app-style innovations being generated in the legal marketplace. I guess you can say there is nothing more dangerous than a bunch of smart folks who get pushed into a corner — forced to innovate for want of high-quality alternatives. Change is on the march, and my bet is there is more disruption in the windshield than in the rearview. As my old football coach liked to say — better keep your head on a swivel!
Bommarito, Katz & Isaacs-See → Virginia Tax Review [ Online Supplement and Datasets ]
Our paper An Empirical Survey of the Population of United States Tax Court Written Decisions was recently published in the Virginia Tax Review. We have just placed supplementary materials online (click here or above to access).
Simply put, our paper is a “dataset paper.” While such papers are common in the social and physical sciences, there are far fewer (actually borderline zero) “dataset papers” in legal studies.
In our estimation, the goals of a “dataset paper” are threefold:
- (1) Introduce the data collection process with specific emphasis upon why the collection method was able to identify the targeted population
- (2) Highlight some questions that might be considered using this and other datasets
- (3) Make the dataset available to various applied scholars who might like to use it
As subfields such as empirical legal studies mature (and in turn legal studies starts to look more like other scientific disciplines) it would be reasonable to expect to see additional papers of this variety. With the publication of the online supplement, we believe our paper has achieved each of these goals. Whether our efforts prove useful for others — well — only time will tell!
Applying the Science of Similarity to Computer Forensics (with lots of other potential applications) [via Jesse Kornblum]
From the talk description: “Computers are fantastic at finding identical pieces of data, but terrible at finding similar data. Part of the problem is first defining the term similar in any given context. The relationships between similar pictures are different than the relationships between similar pieces of malware. This talk will explore the different kinds of similar, a scientific approach to finding similar things, and how these apply to computer forensics. Fuzzy hashing was just the beginning! Topics will include wavelet decomposition, control flow graphs, cosine similarity, and lots of other fun mathy stuffs which will make your life easier.”
I have been quite interested in the “science of similarity” and its application to a variety of questions in law and the social sciences. Whether it concerns the sort of analogical reasoning described by legal scholars such as Edward Levi or Cass Sunstein, or cognitive biases such as the availability heuristic (Tversky & Kahneman, 1973), developments in the “science of similarity” are of great relevance to theorists in a wide variety of sub-fields.
While there has been lots of skepticism regarding the application of these principles (particularly by those in legal theory), from our perspective it appears as though computer science ∩ psychology/cognitive science stands on the cusp of a new age in the “science of similarity.” I offer the slides above as I found them to be both interesting and useful. Stay tuned for more …
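Since cosine similarity is one of the techniques named in the talk, here is a minimal Python sketch of the idea applied to bag-of-words vectors. It is purely illustrative — the function and example strings are my own, not taken from the slides.

```python
# Minimal sketch: cosine similarity between two term-frequency vectors.
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Cosine of the angle between the bag-of-words vectors of two strings."""
    vec_a, vec_b = Counter(text_a.split()), Counter(text_b.split())
    dot = sum(vec_a[t] * vec_b[t] for t in set(vec_a) & set(vec_b))
    norm_a = math.sqrt(sum(v * v for v in vec_a.values()))
    norm_b = math.sqrt(sum(v * v for v in vec_b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

if __name__ == "__main__":
    a = "the court held that the statute applies"
    b = "the court found the statute inapplicable"
    print(round(cosine_similarity(a, b), 3))  # values near 1.0 indicate greater similarity
```

The same pattern generalizes well beyond text: represent two objects as feature vectors, then measure the angle between them rather than their raw overlap.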
How to Grow a Mind: Statistics, Structure, and Abstraction [via Science]
From the abstract: “In coming to understand the world—in learning concepts, acquiring language, and grasping causal relations—our minds make inferences that appear to go far beyond the data available. How do we do it? This review describes recent approaches to reverse-engineering human learning and cognitive development and, in parallel, engineering more humanlike machine learning systems. Computational models that perform probabilistic inference over hierarchies of flexibly structured representations can address some of the deepest questions about the nature and origins of human thought: How does abstract knowledge guide learning and reasoning from sparse data? What forms does our knowledge take, across different domains and tasks? And how is that abstract knowledge itself acquired?”
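To make the flavor of the abstract a bit more concrete, here is a toy Python sketch (my own illustration, not code from the review) of Bayesian inference from sparse data: a few examples, scored under a simple "size principle" likelihood, can sharply favor one candidate concept over another.

```python
# Toy illustration of Bayesian concept learning from a handful of examples.
# Each hypothesis is a candidate concept; smaller hypotheses that still
# contain all the observed examples receive higher likelihood.

hypotheses = {
    "even numbers":     [n for n in range(1, 101) if n % 2 == 0],
    "powers of two":    [2 ** k for k in range(1, 7)],
    "multiples of ten": [n for n in range(10, 101, 10)],
}

def posterior(data, hypotheses):
    scores = {}
    for name, extension in hypotheses.items():
        if all(x in extension for x in data):
            # likelihood under strong sampling: (1 / |h|) ** n
            scores[name] = (1.0 / len(extension)) ** len(data)
        else:
            scores[name] = 0.0
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()} if total else scores

if __name__ == "__main__":
    print(posterior([2, 8, 16], hypotheses))    # "powers of two" dominates
    print(posterior([20, 40, 60], hypotheses))  # "multiples of ten" dominates
```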
Rock / Paper / Scissors – Man v. Machine (as t→∞ you are not likely to win) [via NY Times]
From the site … “A truly random game of Rock-Paper-Scissors would result in a statistical tie with each player winning, tying and losing one-third of the time … However, people are not truly random and thus can be studied and analyzed. While this computer won’t win all rounds, over time it can exploit a person’s tendencies and patterns to gain an advantage over its opponent.
Computers mimic human reasoning by building on simple rules and statistical averages. Test your strategy against the computer in this rock-paper-scissors game illustrating basic artificial intelligence. Choose from two different modes: novice, where the computer learns to play from scratch, and veteran, where the computer pits over 200,000 rounds of previous experience against you.”
Time to dust off your random seed / pseudorandom number generators … good luck!
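For the curious, here is a minimal Python sketch of the sort of pattern exploitation the demo describes: a bot that tracks which move tends to follow the human's previous move, predicts accordingly, and plays the counter. It is an illustration of the general idea only, not the model behind the Times demo.

```python
# Minimal sketch of a pattern-exploiting Rock-Paper-Scissors bot.
import random
from collections import defaultdict, Counter

BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}  # move -> what beats it

class FrequencyBot:
    def __init__(self):
        self.transitions = defaultdict(Counter)  # last human move -> counts of the next move
        self.last_human_move = None

    def play(self):
        history = self.transitions[self.last_human_move]
        if not history:                      # no data yet: play randomly
            return random.choice(list(BEATS))
        predicted = history.most_common(1)[0][0]
        return BEATS[predicted]              # counter the predicted move

    def observe(self, human_move):
        self.transitions[self.last_human_move][human_move] += 1
        self.last_human_move = human_move

if __name__ == "__main__":
    bot = FrequencyBot()
    wins = losses = ties = 0
    for human in ["rock", "rock", "paper"] * 50:   # a patterned, non-random human
        bot_move = bot.play()
        if bot_move == BEATS[human]:
            wins += 1
        elif human == BEATS[bot_move]:
            losses += 1
        else:
            ties += 1
        bot.observe(human)
    print(wins, losses, ties)  # the bot pulls ahead once it learns the pattern
```

Against a truly random opponent a predictor like this has no edge, which is exactly why a good pseudorandom number generator is the best defense.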