Current Issue

Volume 2020 — Issue 2 (Fall)


"Algorithmic Content Moderation on Social Media in EU Law: Illusion of Perfect Enforcement" By: Céline Castets-Renard

Intermediaries today do much more than passively distribute user content and facilitate user interactions. They now exercise near-total control over users’ online experience and content moderation. Even though these service providers benefit from the same liability exemption regime as technical intermediaries (E-Commerce Directive, Art. 14), they have unique characteristics that must be addressed. Consequently, many debates are ongoing over whether platforms should be more strictly regulated.

Platforms are required to remove illegal content under notice-and-takedown procedures built on automated processing, and they are equally encouraged to take proactive, automated measures to detect and remove it. Algorithmic decision-making helps manage the massive scale of content moderation. It would therefore seem that algorithmic decision-making is the most effective way to provide perfect enforcement.

However, this is an illusion. A difficulty occurs when deciding what, precisely, is illegal. Platforms manage the removal of illegal content automatically, which makes it particularly challenging to verify that the law is being respected. The automated decision-making systems are opaque, and many scholars have shown that the main problems are over-removal and its chilling effect. Moreover, content removal is a task which, in many circumstances, should not be automated, as it depends on an appreciation of both the context and the rule of law.

To address this multi-faceted issue, this article offers solutions to improve algorithmic accountability and to increase the transparency around automated decision-making. Improvements may be made specifically by providing platform users with new rights, which in turn will provide stronger guarantees for judicial and non-judicial redress in the event of over-removal.

"Iterative Autonomous Vehicle Regulation and Governance" By: Shili Shao

Despite all the hype about the coming age of autonomous vehicles (AVs), the technology remains in its early stages and, according to a leading industry expert, may not reach mass adoption for another thirty years. As AV technologies take many different shapes in the coming decades, it will be crucial to have government support and an iterative model of AV regulation and governance in place that can evolve alongside the technology.

Based on empirical studies of existing AV regulations and pilot programs in over 50 states and localities in the U.S. as well as 25 countries internationally, this paper proposes a regulatory model involving distributed, iterative AV governance and a federal backstop to facilitate inter-regional competition in America. Under this regime, state and local governments overseeing AV pilots can obtain important information about fast-moving AV technologies necessary for informed regulatory experiments and rapidly adjust policies in response to public feedback, while the federal government will provide key minimum rules to prevent deregulatory races to the bottom among states and offer a model AV code to reduce fragmentation and lead the march toward a national AV regime. Both care and support are needed along the way to nurture the future of mobility, as safety, efficiency, and societal welfare are at stake on an enormous scale in this technological and regulatory evolution.

"Transplanting Fair Use in China? History, Impediments and the Future" By: Dr. Tianxiang He

The history of the Copyright Law of China (CLC) is a history of “legal transplant.” Since its enactment in 1990, the CLC has been continually shaped by factors such as legal traditions, international norms, legal developments in advanced countries, and technological challenges. This hybrid legislative model has provided an extremely useful tool for transitional China to readily establish an operative copyright legal framework and demonstrate its compliance with various international treaties. However, after almost 30 years of development, China has reached a stage that necessitates a customized, demand-driven, and internally coherent CLC. The evolution of its copyright exceptions model, which concerns the balance between the interests of the copyright owner and public interests such as access to knowledge and freedom of speech, is a pivotal part of this new search. It will also further our understanding of the global paradigm shift related to copyright exception models.

The nature of a future copyright exceptions model in China has been intensely debated since 2011, when the Chinese government launched a public consultation on the third-round amendment of the CLC. The revised 2014 draft proposed an open-ended general clause of copyright exceptions; however, academics criticized the proposal for its imprecision and several internal inconsistencies. Meanwhile, over the past few years, many new forms and ways of utilizing copyright works have emerged, and the existing closed-list copyright exceptions model has been heavily challenged for its inflexibility in the face of these new cases. Although Chinese judiciaries have already employed the U.S. four-factor fair use model or its elements in deciding difficult copyright cases, whether the U.S. approach will eventually be transplanted into the CLC remains unclear, making it a “must-solve” problem in the upcoming round of revisions.

To demonstrate the complexity of this evolutionary process and the search for a solution, the first section of this article explores the historical constraints of the current model of copyright exceptions adopted by the CLC and the new challenges it is facing. The second section explores why the current approach taken by Chinese courts is meant to be an interim one. The third section critically assesses the latest published revised draft and addresses why China will not transplant the U.S. fair use model directly. The concluding section identifies a possible model of copyright exceptions for China. This model should be flexible enough to cover future needs and better reflect local realities and priorities by learning from civil law jurisdictions, such as Japan and Taiwan, that have transplanted, or plan to transplant, an exotic model like fair use.

"Harnessing AI Innovation For Struggling Families" By: Jacqueline G. Schafer

State child welfare systems impact one of the oldest fundamental liberty interests recognized by the U.S. Supreme Court—the interest of parents in the care, custody, and control of their children—and alter family bonds for hundreds of thousands of vulnerable children across the U.S. An ethical child welfare system demands that decisions about how to help struggling families be data-informed, yet states have been painfully slow to acknowledge the potential for existing and emerging technology tools to transform their operations.

This article starts from the premise that child welfare systems and the families they interact with could benefit immensely if cutting-edge, private sector technology innovation could be applied to the vast social science datasets generated over the life of a state child welfare case. However, to realize these benefits, Congress must update the multi-layered regime of federal laws governing child welfare data to require as a condition of funding that states increase data scientists’ and researchers’ access to this data. And importantly, the government must increase investment in the technological infrastructure that can enable artificial intelligence (AI)-enabled applications to identify the most effective interventions and reduce the currently enormous administrative burden on child welfare system workers. Moreover, state child welfare agencies must actively prepare to address the complex ethical and privacy questions raised by the inevitable introduction of AI-enabled technology into the practice of social work.

This article will (1) provide an overview of the data that is currently stored and collected by state child welfare systems; (2) describe the complex set of federal laws that restrict sharing of this information and propose legislative and regulatory changes that, if implemented, would foster technological innovation that could be life-changing for children and families; (3) suggest currently feasible machine learning applications that would benefit tech-optimized child welfare systems; (4) describe the practical steps needed to ready state child welfare agencies to implement technology innovation; and (5) analyze privacy considerations in adopting AI technologies.

Overall, this article provides a roadmap for the U.S. Department of Health and Human Services in complying with Section 5 of President Trump’s February 11, 2019 Executive Order on AI, which commands all heads of federal agencies to “review their Federal data and models to identify opportunities to increase access and use by the greater non-Federal AI research community in a manner that benefits that community, while protecting safety, security, privacy, and confidentiality.” Ideally, this analysis will inspire government, private sector, and nonprofit leaders to recognize the need for a coordinated investment in technological transformation that reflects the urgency of child welfare cases.


"Conditioning Section 230 Immunity On Unbiased Content Moderation Practices As An Unconstitutional Condition" By: Edwin Lee

"Netflix, Disney+, & A Decision Of Paramount Importance" By: Dawson Oler

"Dating Data: LGBT Dating Apps, Data Privacy, and Data Security" By: Nivedita Sriram