Under Review

This online section of the University of Illinois Journal of Law, Technology & Policy contains drafts of Articles, Essays and Student Notes that have been accepted for publication and currently are undergoing the peer-review process. Although these drafts may not reflect the content or form of the final publication, they are placed in this section to elicit feedback from the site's scholarly visitors. Comments on the materials in this section may be made by emailing JLTP directly.

Volume 2018 — Issue 1 (Spring)

"A Location-Based Test For Jurisdiction Over Data: The Consequences For Global Online Privacy" by Shelli Gimelstein
Download PDF | Abstract

U.S. technology companies face growing uncertainty over whether and how they can be compelled to turn foreign-stored user content over to law enforcement officials. In July 2016, the Second Circuit ruled that the government could not require Microsoft to produce user content stored on its server in Ireland because the execution of the government’s warrant constituted an impermissible extraterritorial application of the Stored Communications Act (SCA). But after the Second Circuit declined to rehear Microsoft Corp. v. United States (formally titled In the Matter of a Warrant to Search a Certain E-Mail Account Controlled and Maintained by Microsoft Corporation, and also known as the "Microsoft Ireland" case) en banc in January 2017, other courts issued orders compelling Google to produce foreign-stored data requested by SCA warrants. Two fundamental questions divide these courts: (1) whether the physical location of the data at the time it is accessed should determine whether it is within the reach of the SCA, and (2) whether other countries’ data privacy laws and search-and-seizure protections apply.

This Note argues that basing government jurisdiction over data on the data’s physical location threatens user privacy. It also creates unworkable and unpredictable results for technology companies by failing to account for the significant differences in how they divide, store and transmit their users’ data around the world. In the context of digital searches, the data location test has two potential effects. First, it will create bottlenecks in the already-burdensome mutual legal assistance system, hindering intergovernmental cooperation on law enforcement investigations. Second, it may embolden foreign governments to circumvent the system by adopting similar, or even more extreme, positions on jurisdiction over data, such as data localization and mandatory encryption backdoor laws. These policies have dangerous consequences for privacy, free expression, and innovation around the world.

While some have written about the data location test in Microsoft Ireland in the abstract, this Note goes a step further and considers its role in the rulings conflicting with Microsoft Ireland that have been issued by federal magistrate judges in the past few months. It also evaluates several recent legislative and non-legislative proposals to solve the problems arising from the data location test. In particular, this Note highlights the pressing need for Congress to reform the Stored Communications Act, incorporating an alternative test for jurisdiction over user data and provisions that would clarify companies’ data disclosure obligations under conflicting legal regimes. Finally, while much of the literature on this topic focuses solely on legislative proposals rather than on the real-world impact of the uncertainty that makes statutory reform necessary, this Note focuses on what companies should do while they await a resolution from Congress or the Supreme Court. To that end, this Note offers some practical recommendations for how companies can navigate the issues arising from the data location test, particularly as they make decisions about their global operations and data storage architecture.

"Mechanizing Alice: Automating the Subject Matter Eligibility Test of Alice v. CLS Bank" by Ben Dugan
Download PDF | Abstract

In Alice v. CLS Bank, the Supreme Court established a new test for determining whether a patent claim is directed to patent-eligible subject matter. The impact of the Court’s action is profound: the modified standard means that many formerly valid patents are now invalid, and that many pending patent applications that would have been granted under the old standard will now not be granted.

This article describes a project to mechanize the subject matter eligibility test of Alice v. CLS Bank. The Alice test asks a human to determine whether or not a patent claim is directed to patent-eligible subject matter. The core research question addressed by this article is whether it is possible to automate the Alice test. Is it possible to build a machine that takes a patent claim as input and outputs an indication that the claim passes or fails the Alice test? We show that it is possible to implement just such a machine, by casting the Alice test as a classification problem that is amenable to machine learning.

This article describes the design, development, and applications of a machine classifier that approximates the Alice test. Our machine classifier is a computer program that takes the text of a patent claim as input, and indicates whether or not the claim passes the Alice test. We employ supervised machine learning to construct the classifier. Supervised machine learning is a technique for training a computer program to recognize patterns. Training comprises presenting the program with positive and negative examples, and automatically adjusting associations between particular features in those examples and the desired output.

The examples we use to train our machine classifier are obtained from the United States Patent Office. Within a few months of the Alice decision, examiners at the Patent Office began reviewing claims in patent applications for subject matter compliance under the new framework. Each decision of an examiner is publicly reported in the form of a written office action. We programmatically obtained and reviewed many thousands of these office actions to build a data set that associates patent claims with corresponding eligibility decisions. We then used this dataset to train, test, and validate our machine classifier.
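The supervised-learning pipeline described above can be sketched in miniature. The classifier below is a simple Naive Bayes model over claim-text words; the training claims and labels are invented for illustration, and the authors' actual classifier, trained on thousands of USPTO office actions, may use different features and a different learning algorithm:

```python
# Toy supervised text classifier for the Alice eligibility task:
# a multinomial Naive Bayes model over the words of a patent claim.
# Training data below is invented for illustration only.
import math
from collections import Counter, defaultdict

def tokenize(text):
    """Lowercase a claim and split it into word tokens."""
    return text.lower().split()

class NaiveBayesClaimClassifier:
    def fit(self, claims, labels):
        """Learn per-label word counts from labeled example claims."""
        self.word_counts = defaultdict(Counter)
        self.label_counts = Counter(labels)
        for claim, label in zip(claims, labels):
            self.word_counts[label].update(tokenize(claim))
        self.vocab = {w for c in self.word_counts.values() for w in c}
        return self

    def predict(self, claim):
        """Return the label with the highest posterior log-probability."""
        scores = {}
        total = sum(self.label_counts.values())
        for label in self.label_counts:
            # Log prior for this label.
            score = math.log(self.label_counts[label] / total)
            counts = self.word_counts[label]
            denom = sum(counts.values()) + len(self.vocab)
            for word in tokenize(claim):
                # Laplace-smoothed log likelihood of each word.
                score += math.log((counts[word] + 1) / denom)
            scores[label] = score
        return max(scores, key=scores.get)

# Invented toy examples standing in for office-action decisions.
claims = [
    "a method of hedging risk in commodity trading",
    "a method of exchanging financial obligations using a clearinghouse",
    "a sensor circuit that adjusts voltage based on measured temperature",
    "a memory controller that reorders hardware write operations",
]
labels = ["ineligible", "ineligible", "eligible", "eligible"]
clf = NaiveBayesClaimClassifier().fit(claims, labels)
print(clf.predict("a method of hedging financial risk"))  # prints: ineligible
```

In practice a real implementation would add richer features (n-grams, claim structure) and a held-out test set for validation, as the article describes.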

"Just the Facts: Empirically Driven Impact Litigation as a Route to Copyright Reform" by Vanessa J. Reid
Download PDF | Abstract

There is extensive evidence suggesting that U.S. copyright law, in its current form, is broken. Legal scholars and practitioners overwhelmingly agree that there is a fundamental disconnect between copyright law’s animating purposes and its actual effects. However, little of this evidence has been seriously considered by the courts. This is unfortunate, as sound interpretation of copyright law requires an appreciation of the profound impact that this relatively technical doctrine has on the public interest.

This article therefore investigates strategies for incorporating empirical evidence into impact litigation aimed at reforming U.S. copyright law. Although direct changes to the law by the legislature would provide a more durable solution, it is clear that there is currently insufficient political will to make the needed changes to copyright law through the legislative process. Therefore, empirically driven impact litigation remains the most realistic avenue for driving substantive reform of copyright law at present. Significant legal scholarship has demonstrated the problems with current copyright law, but the question of how to bring this information before the courts has been largely ignored. The literature contains no systematic examination of how empirical evidence might be incorporated into copyright litigation. This article seeks to fill that gap.

There are a number of hurdles that copyright litigators must overcome in order to successfully present this type of evidence to the courts. Most importantly, judges who rely on empirical evidence risk the accusation that they are merely substituting their own policy preferences for those of the legislature. Because courts must give a high level of deference to the legislature in matters of statutory interpretation, it is imperative that copyright litigators provide courts with robust doctrinal hooks for considering empirical evidence, and that such evidence be methodologically unassailable. In particular, advocates may wish to focus on constitutional arguments, as constitutional grounds give courts a more solid footing for potentially contradicting or undermining Congressional intent. The article demonstrates that current copyright law contains a number of promising doctrinal hooks that do just this. Accordingly, there are ample avenues for litigators to successfully employ empirical evidence to drive much needed reform of copyright law—provided they are careful to heed the lessons of similar efforts in other areas.

"The Reasonable Algorithm" by Karni Chagal-Feferkorn
Download PDF | Abstract

Algorithmic decision-makers dominate many aspects of our lives. Beyond simply performing complex computational tasks, they often replace human discretion and even professional judgement. As sophisticated and accurate as they may be, autonomous algorithms may cause damage.

A car accident could involve both human drivers and driverless vehicles. Patients may receive an erroneous diagnosis or treatment recommendation from either a physician or a medical algorithm. Yet because algorithms were traditionally considered "mere tools" in the hands of humans, the tort framework applying to them is significantly different from the framework applying to humans, potentially leading to anomalous results in cases where human and algorithmic decision-makers could interchangeably cause damage.

This article discusses the disadvantages stemming from these anomalies and proposes to develop and apply a "reasonable algorithm" standard to non-human decision-makers, similar to the "reasonable person" or "reasonable professional" standard that applies to human tortfeasors.

While the economic advantages of a similar notion have been elaborated on in the literature, the general concept of subjecting non-humans to a reasonableness analysis has not been addressed. Rather, the few existing references to applying a negligence or reasonableness standard to autonomous machines have largely dismissed the concept, primarily because "algorithms are not persons". This article identifies and addresses the conceptual difficulties that stem from applying a "reasonableness" standard to non-humans, including the intuitive reluctance to subject non-humans to human standards; the question of whether there is any practical meaning in analysing the reasonableness of an algorithm separately from the reasonableness of its developers; the potential legal implications of a finding that the algorithm "acted" reasonably or unreasonably; and whether such an analysis is consistent with the rationales behind tort law.

Beyond identifying the various anomalies that result from subjecting humans and non-humans performing identical tasks to different tort frameworks, the article's main contribution is therefore to explain why the challenges associated with applying a "reasonableness" standard to algorithms can be overcome.

Volume 2018 — Issue 2 (Fall)

"Does Technology Drive Law? The Dilemma of Technological Exceptionalism in Cyberlaw"

"'One if by Land, Two if by Sea': The Federal Circuit's Oversimplification of Computer Data Processing (And Why Computer-Implemented Mathematical Algorithms that Provide Technological Improvements to Otherwise Conventional Computer Systems Should be Patent Eligible)"

"Good Intentions and the Road to Regulatory Hell: How the TCPA Went from Consumer Protection Statute to Litigation Nightmare"

"Preserving Capital Markets Efficiency in the High-Frequency Trading Era"


Volume 2019 — Issue 1 (Spring)

"Cracks in the Armor: Legal Approaches to Encryption"