Listening from Afar: An Algorithmic Analysis of Testimonies from the International Criminal Court

By: Dr. Renana Keydar, Assistant Professor of Law and Digital Humanities, The Hebrew University of Jerusalem

Despite the recognized importance of witness testimony in addressing systematic violence and human rights violations, reflected in the participation of large numbers of witnesses in international legal processes, establishing facts on the basis of oral testimony in international criminal tribunals remains a contentious matter.

The article develops a new model for assessing judicial attention to and engagement with testimonial narratives, in particular those of victims of sexual violence, by conceptualizing the testimonies as “textual datasets.” It presents the results of an algorithm-based approach to analyzing testimonial corpora, applying a generative statistical model known as unsupervised topic modeling. I employ LDA topic modeling to empirically assess the international courts’ capacity to “listen” to large quantities of witness testimony. Harnessing the large number of testimonies in international criminal trials, I use topic modeling to explore latent themes and semantic fields that could benefit the legal process and its critical scholarly appreciation.

The article proposes Automated Content Analysis, and in particular topic modeling, as a novel method to assist scholars and practitioners in making sense of complex legal cases involving large amounts of testimony, documents, and data, while preserving the voice and vocabulary of the individual witness.

The article highlights the potential of topic modeling methods, rooted in Natural Language Processing and the Digital Humanities, to overcome critical impediments in empirical legal studies.  It demonstrates the method’s transformative capacity both as a practical heuristic mechanism that can be employed during legal proceedings and as a tool for ex-post analysis in legal scholarship.


Contextual Fairness: A Legal and Policy Analysis of Algorithmic Fairness

By: Doaa Abu-Elyounes, S.J.D. Candidate at Harvard Law School

To date, all stakeholders are working intensively on policy design for artificial intelligence.  These initiatives center around the requirement that AI algorithms should be fair.  But what exactly does fairness mean?  And how can algorithmic fairness be translated into legal and policy terms?  These are the main questions this paper aims to explore.  Each discipline approaches them differently.  While computer scientists may favor one notion of fairness over others across the board, this paper argues in favor of a case-by-case analysis and application of the relevant fairness notion.  The paper discusses the legal limitations of the computer science (CS) notions of fairness and suggests a typology matching each CS notion to its corresponding legal mechanism.  The paper concludes that fairness is contextual.  The fact that each notion, or group of notions, corresponds to a different legal mechanism makes it more suitable for some policy domains than others.  Thus, throughout the paper, examples of the possible applicability of the CS notions to particular policy domains are introduced.  In addition, the paper highlights for both developers and policymakers the practical steps that need to be taken to better address algorithmic fairness.

In some instances, notions of fairness that seem, on their face, unproductive from a technical perspective could in fact be quite helpful from a legal perspective.  In other instances, notions desirable in the eyes of computer scientists could be challenging to implement in the legal regime, because they require resolving complex moral and legal questions.  Thus, the article emphasizes, a one-size-fits-all solution is not applicable to algorithmic fairness.  Rather, an approach that demonstrates a deep understanding of the specific context in which a given algorithm operates can guarantee a fairer outcome.
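To make the contrast among CS fairness notions concrete, here is a minimal, hypothetical sketch (not drawn from the paper itself) of two widely used statistical notions, demographic parity and equal opportunity, written in plain Python. The function names and data are illustrative assumptions.

```python
def demographic_parity_gap(y_pred, group):
    """Gap in positive-prediction rates between groups (0 = parity).

    Demographic parity asks that each group receive positive
    predictions at the same rate, regardless of true outcomes.
    """
    rates = {}
    for g in set(group):
        idx = [i for i, gg in enumerate(group) if gg == g]
        rates[g] = sum(y_pred[i] for i in idx) / len(idx)
    return max(rates.values()) - min(rates.values())


def equal_opportunity_gap(y_true, y_pred, group):
    """Gap in true-positive rates between groups (0 = equal opportunity).

    Equal opportunity asks that, among truly positive cases, each
    group be correctly identified at the same rate.  (This sketch
    assumes every group contains at least one positive case.)
    """
    tprs = {}
    for g in set(group):
        idx = [i for i, gg in enumerate(group) if gg == g and y_true[i] == 1]
        tprs[g] = sum(y_pred[i] for i in idx) / len(idx)
    return max(tprs.values()) - min(tprs.values())


# Example: binary predictions for two groups, "a" and "b".
preds = [1, 0, 1, 1]
groups = ["a", "a", "b", "b"]
print(demographic_parity_gap(preds, groups))  # → 0.5
```

Even this toy example shows why the choice of notion is a policy decision rather than a purely technical one: the same set of predictions can satisfy one metric while violating another, so which gap a regulator requires to be small depends on the legal mechanism and domain at stake.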

China’s Regulatory Approach to the Sharing Economy: A Perspective on Ride-Hailing

By: Huiqin Jiang, Associate Professor, School of Law and Politics, Zhejiang Sci-Tech University, China & Heng Wang, Professor & Co-Director, Herbert Smith Freehills China International Business and Economic Law (CIBEL) Centre at University of New South Wales, Sydney

While the sharing economy brings significant social benefits in China, it comes with regulatory challenges that are novel and unpredictable. How should regulators handle these challenges? This paper offers fresh insights into the regulatory approach to the ride-hailing industry, the most comprehensively regulated sharing industry in China. A historical review identifies three regulatory approaches deployed to date: self-regulation, market-based regulation, and government regulation. Self-regulation relies on platforms, which have an incentive to provide better service for greater profit and to deal with sharing-specific challenges. Market-based regulation invites rivals to keep a watchful eye on other players, in order to enhance their market position by outperforming the competition. Both approaches are capable of delivering quick, and often innovative, responses to new challenges. Government regulation, on the other hand, came late and plays a neutral role. Its rules are mostly of the “old wine in a new bottle” kind; in other words, they apply existing (old) rules to the new sharing economy. Certainly, those rules succeed in providing a level playing field for traditional and sharing-market players. However, the authors argue that government regulations are inadequate for solving sharing-specific challenges such as the legal status of participants, uncertain externalities, and new forms of competition. Instead, regulators should in the future give affirmative value to self-regulation and market-based regulation. These complementary approaches are capable of yielding innovative and sharing-specific regulatory responses, which government regulators can evaluate before codifying.

Automated Vehicles and Third-Party Liability: A European Perspective

By: Dr Michael Chatzipanagiotis & Dr George Leloudas, Associate Professor, HRC School of Law at Swansea University

This article examines third-party liability issues of automated vehicles (AV) as currently regulated in Europe at the level of the European Union (EU) and at national level.

We begin with a brief presentation of international law on traffic rules, whose binding effect has influenced the content of European provisions. We proceed with an analysis of the provisions of the Product Liability Directive and the Motor Insurance Directive in view of their applicability to AV. Subsequently, we briefly analyze the pertinent provisions of German and English law on product liability and road traffic liability, including the special rules of the German Road Traffic Act, which focuses on the liability of the keeper and the driver, as well as the English Automated and Electric Vehicles Act 2018, which focuses on insurance and represents a different approach. We then outline the legal landscape in the US and compare it with the European one. Afterwards, we briefly examine less obvious parameters, such as human factors, the role of the media, and ethics, and explore the potential for international harmonization.

We conclude that, at present, the risks and benefits from the use of AV are not making a convincing case to depart from traditional liability rules on road traffic and defective products. There is no uninsurable disaster potential and no radical change in people’s lives to justify limiting the legal right of uninvolved victims to receive compensation compared to ordinary vehicles. There are more appropriate means than liability reform to incentivize technological development. Moreover, establishing uniform international liability rules would be desirable, but appears neither necessary nor politically feasible.

Why Such Lack of Coherence Between U.S. and EU Data Privacy Law?

By: Gregory Voss, Associate Professor, TBS Business School (Toulouse, France)

In the forthcoming Fall 2019 Issue of the University of Illinois Journal of Law, Technology & Policy (JLTP), my article “Obstacles to Transatlantic Harmonization of Data Privacy Law in Context” will appear. (A pre-print of the article is available at https://ssrn.com/abstract=3446833.) Not only will this article serve as an introduction to privacy and data protection issues, it will also help readers understand the paradoxical divergence between U.S. and EU data privacy law after a common set of principles (known as the FIPPs) defined early legislation.

At this juncture this study is important for a few reasons. First, the European Union’s newly applicable General Data Protection Regulation (GDPR) has extraterritorial effect: even businesses with no establishment in Europe may be required to respect the GDPR when processing the personal data of EU residents, if such processing is connected with the offer of goods or services (whether for pay or in exchange for personal data) to those residents, or if those residents’ behavior within the European Union is monitored, such as in connection with behavioral marketing. In this context, companies are struggling with compliance and face a dilemma: treat U.S. customers’ information with fewer protections than EU residents’ information, as U.S. data privacy laws are patchy, or apply the higher standard worldwide. As globalization would tend to require harmonized legal standards, companies could hope for such harmonization through the ongoing U.S. discussions of new federal privacy legislation. However, this article will help them understand why they are unlikely to obtain harmonized legal standards. It will also point to this divergence as the reason U.S. privacy standards are not considered adequate by Europeans, which leads to the requirement that certain firms sign on to the Privacy Shield framework, negotiated between the European Union and the United States, in order to receive cross-border transfers of the personal data of EU residents (for example, in connection with the provision of cloud or other processing services).

Second, in connection with such discussions in the United States, this article focuses on the reasons for divergence, which could to a certain extent be addressed by the legislature in a new legislative text. While full harmonization is unlikely, arguably it is not required for a legal system to be found by the European Union to provide adequate protection for data, which would allow cross-border data transfers without a Privacy Shield framework. However, U.S. mass surveillance might prevent any such adequacy finding. Furthermore, while lobbying is discussed in a negative sense in this article, companies could choose to support harmonized laws, thereby easing compliance, through corporate political activity in support of legislation like the GDPR.

After discussing the interest in harmonized data privacy laws in a globalized economy, where the current piecemeal U.S. legislation makes the United States an “outlier,” this article goes to the origins of data privacy law in the 1970s and the underlying FIPPs developed between the United States and Europe. Three major obstacles to transatlantic harmonization of data privacy law are then posited and detailed: laissez-faire policy and neoliberalism in the United States; the lobbying power of the U.S. technology industry giants in a conducive U.S. legislative system; and differing constitutional provisions on either side of the Atlantic. The first of these obstacles could be a subject for debate among the potential candidates in the 2020 U.S. presidential elections; the second, which involves advertising-dependent technology companies seeking to ensure their future prosperity, could be the subject of counter-efforts by civil society groups and privacy-responsible companies, if legislators have the true will to reform U.S. data privacy law. The last of these obstacles is related to differing legal cultures and may be the most difficult to counter. In any event, I think that the most that may be achieved in the United States, given these obstacles, is what some academics have referred to as a “GDPR-lite,” despite the optimism of other writers. However, one area for improvement is the creation of a truly independent data privacy protection agency (DPA), unlike the current U.S. de facto DPA, the Federal Trade Commission, which even its supporters agree needs reform.

The pre-print of this article was cited by EU tech policy journalist Jennifer Baker in a CPO Magazine article.[1] Baker (@BrusselsGeek) tweeted: “Great paper. I read it with interest and recommend it to anyone covering this area! 🙂”[2] My hope is that you will read it, too, and that it will give you food for thought and perhaps action.

My thanks go out to the JLTP editors, members and staff for making this blog post possible and for their assistance during the editing process of my article.

[1] Jennifer Baker, Groundhog Day for Privacy Shield Review, CPO Magazine (Sept. 24, 2019), https://www.cpomagazine.com/data-protection/groundhog-day-for-privacy-shield-review/.

[2] Jennifer Baker (@BrusselsGeek), Twitter (1:35 AM Sept. 25, 2019), https://twitter.com/BrusselsGeek/status/1176777340803846145.

*This article is featured in our Volume 2019 — Issue 2 (Fall) publication and can be found here