Volume 2018 — Issue 1 (Spring)
"A Location-Based Test for Jurisdiction Over Data: The Consequences for Global Online Privacy" by Shelli Gimelstein
This Article argues that basing government jurisdiction over data on the data’s physical location threatens user privacy. It also creates unworkable and unpredictable results for technology companies by failing to account for the significant differences in how they divide, store, and transmit their users’ data around the world. In the context of digital searches, the data location test has two potential effects. First, it will create bottlenecks in the already-burdensome mutual legal assistance system, hindering intergovernmental cooperation on law enforcement investigations. Second, it may embolden foreign governments to circumvent the system by adopting similar, or even more extreme, positions on jurisdiction over data, such as data localization and mandatory encryption backdoor laws. These policies have dangerous consequences for privacy, free expression, and innovation around the world.
While some have written about the data location test in Microsoft Ireland in the abstract, this Article goes a step further, considering its role in the rulings issued by federal judges over the past two years that conflict with Microsoft Ireland. It also evaluates several recent legislative and non-legislative proposals to solve the problems arising from the data location test. In particular, this Article highlights the pressing need for Congress to reform the Stored Communications Act by incorporating an alternative test for jurisdiction over user data and provisions that would clarify companies’ data disclosure obligations under conflicting legal regimes. Finally, while much of the literature on this topic focuses on legislative proposals rather than on the real-world impact of the uncertainty that makes statutory reform necessary, this Article focuses on what companies should do while they await a resolution from Congress or the Supreme Court. To that end, it offers practical recommendations for how companies can navigate the issues arising from the data location test, particularly as they make decisions about their global operations and data storage architecture.
"Mechanizing Alice: Automating the Subject Matter Eligibility Test of Alice v. CLS Bank" by Ben Dugan
This Article describes the design, development, and applications of a machine classifier that classifies patent claims according to the Alice test. We employ supervised learning to train our classifier with examples of eligible and ineligible claims obtained from patent applications examined by the U.S. Patent Office. In an example application, the classifier is used as part of a patent claim evaluation system that provides a user with feedback regarding the subject matter eligibility of an input patent claim. Finally, we use the classifier to quantitatively estimate the impact of Alice on the universe of issued patents.
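The pipeline the abstract describes — supervised training on labeled claims, then classification of new claims — can be sketched in miniature as follows. This is a toy bag-of-words Naive Bayes classifier; the claim texts, labels, and model choice are illustrative assumptions for exposition, not the Article's actual system, training data, or feature set.

```python
from collections import Counter
import math

def tokenize(text):
    return text.lower().split()

class NaiveBayesClaimClassifier:
    """Toy bag-of-words Naive Bayes standing in for any supervised text classifier."""

    def __init__(self):
        self.word_counts = {}        # label -> Counter of word frequencies
        self.label_counts = Counter()
        self.vocab = set()

    def train(self, examples):
        """examples: iterable of (claim_text, label) pairs."""
        for text, label in examples:
            self.label_counts[label] += 1
            counts = self.word_counts.setdefault(label, Counter())
            for w in tokenize(text):
                counts[w] += 1
                self.vocab.add(w)

    def classify(self, text):
        """Return the label maximizing log prior + smoothed log likelihood."""
        total = sum(self.label_counts.values())
        best_label, best_score = None, float("-inf")
        for label, n in self.label_counts.items():
            score = math.log(n / total)  # log prior
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for w in tokenize(text):
                # Laplace (add-one) smoothing handles unseen words
                score += math.log((self.word_counts[label][w] + 1) / denom)
            if score > best_score:
                best_label, best_score = label, score
        return best_label

# Invented toy training claims (not drawn from real USPTO examination data)
training = [
    ("a method of hedging risk by exchanging obligations between parties", "ineligible"),
    ("a method of organizing human activity using a generic computer", "ineligible"),
    ("a self-referential database table improving computer memory operation", "eligible"),
    ("a sensor circuit that filters noise in a physical measurement device", "eligible"),
]

clf = NaiveBayesClaimClassifier()
clf.train(training)
print(clf.classify("a method of hedging obligations with a generic computer"))  # → ineligible
```

A production system of the kind the abstract describes would replace this toy model with a classifier trained on a large corpus of examined claims, but the interface — train on labeled examples, then return an eligibility prediction for an input claim — is the same.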
"Just the Facts: Empirically Driven Impact Litigation as a Route to Copyright Reform" by Vanessa J. Reid
This Article investigates strategies for incorporating empirical evidence into impact litigation aimed at reforming U.S. copyright law. Although direct changes to the law by the legislature would provide a more durable solution, there is currently insufficient political will to make the needed changes to copyright law through the legislative process. Empirically driven impact litigation therefore remains the most realistic avenue for substantive copyright reform at present. Significant legal scholarship has demonstrated the problems with current copyright law, but the question of how to bring this information before the courts has been largely ignored: the literature contains no systematic examination of how empirical evidence might be incorporated into copyright litigation. This Article seeks to fill that gap.
There are a number of hurdles that copyright litigators must overcome in order to successfully present this type of evidence to the courts. Most importantly, judges who rely on empirical evidence risk the accusation that they are merely substituting their own policy preferences for those of the legislature. Because courts must give a high level of deference to the legislature in matters of statutory interpretation, it is imperative that copyright litigators provide courts with robust doctrinal hooks for considering empirical evidence, and that such evidence be methodologically unassailable. In particular, advocates may wish to focus on constitutional arguments, as constitutional grounds give courts a more solid footing for potentially contradicting or undermining Congressional intent. The Article demonstrates that current copyright law contains a number of promising doctrinal hooks that provide exactly this footing. Accordingly, there are ample avenues for litigators to successfully employ empirical evidence to drive much-needed reform of copyright law, provided they are careful to heed the lessons of similar efforts in other areas.
"The Reasonable Algorithm" by Karni Chagal-Feferkorn
A car accident could involve both human drivers and driverless vehicles. Patients may receive an erroneous diagnosis or treatment recommendation from either a physician or a medical algorithm. Yet because algorithms were traditionally considered “mere tools” in the hands of humans, the tort framework that applies to them differs significantly from the framework that applies to humans, potentially leading to anomalous results in cases where humans and algorithmic decision-makers could interchangeably cause damage.
This Article discusses the disadvantages stemming from these anomalies and proposes to develop and apply a “reasonable algorithm” standard to non-human decision makers—similar to the “reasonable person” or “reasonable professional” standard that applies to human tortfeasors.
While the safety-promoting advantages of a similar notion have been elaborated on in the literature, the general concept of subjecting non-humans to a reasonableness analysis has not been addressed. Rather, existing anecdotal references to applying a negligence or reasonableness standard to autonomous machines have largely dismissed the concept, primarily because “algorithms are not persons.” This Article identifies and addresses the conceptual difficulties of applying a “reasonableness” standard to non-humans, including the intuitive reluctance to subject non-humans to human standards; the question of whether there is any practical meaning in analyzing the reasonableness of an algorithm separately from the reasonableness of its programmer; the potential legal implications of a finding that the algorithm “acted” reasonably or unreasonably; and whether such an analysis can be reconciled with the rationales behind tort law.
Beyond identifying the various anomalies that result from subjecting humans and non-humans performing identical tasks to different tort frameworks, the Article’s main contribution is explaining why the challenges associated with applying a “reasonableness” standard to algorithms can be overcome.
"The FCC Restoring Internet Freedom Order and Zero Rating or: How We Learned to Stop Worrying and Love the Market" by Daniel A. Schuleman
"Electric Eye: Mass Aerial Surveillance and the Fourth Amendment" by Andrea Carlson
"Garbage In, Garbage Out: Is Seed Set Disclosure a Necessary Check on Technology-Assisted Review and Should Courts Require Disclosure?" by Shannon H. Kitzer
"Gone in Sixty Seconds: Fading Automobile Insurance Costs in a Driverless Future" by Hasan Siddiqui