Machine Learning and Law

Lawyering in the age of algorithms

Harry Surden

University of Colorado Law School

March 26, 2014

Abstract

“This Article explores the application of machine learning techniques within the practice of law. Broadly speaking, “machine learning” refers to computer algorithms that have the ability to “learn” or improve in performance over time on some task. In general, machine learning algorithms are designed to detect patterns in data and then apply these patterns going forward to new data in order to automate particular tasks. Outside of law, machine learning techniques have been successfully applied to automate tasks that were once thought to necessitate human intelligence: for example, language translation, fraud detection, driving automobiles, facial recognition, and data mining. When performing well, machine learning algorithms can produce automated results that approximate those that would have been made by a similarly situated person.

This Article begins by explaining some basic principles underlying machine learning methods, in a manner accessible to non-technical audiences. The second part explores a broader puzzle: legal practice is thought to require advanced cognitive abilities, yet such higher-order cognition remains outside the capability of current machine-learning technology. This part identifies a core principle: tasks that are normally thought to require human intelligence can sometimes be automated through non-intelligent computational techniques that employ heuristics or proxies (e.g., statistical correlations) capable of producing useful, “intelligent” results. The third part applies this principle to the practice of law, discussing machine-learning automation in the context of certain legal tasks currently performed by attorneys, including predicting the outcomes of legal cases, finding hidden relationships in legal documents and data, electronic discovery, and the automated organization of documents.”
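
As a concrete illustration of the pattern-detection idea the abstract describes, the sketch below trains a model to correlate case texts with outcomes and then applies those correlations to an unseen case. It is a minimal sketch only: the toy summaries, the labels, and the choice of TF-IDF features with logistic regression are illustrative assumptions, not methods taken from the Article.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training set: short case summaries labeled with the prevailing party.
cases = [
    "plaintiff alleges breach of contract, written agreement signed by both parties",
    "defendant moves to dismiss, no written agreement, oral promise only",
    "employee terminated without cause, employment contract guarantees severance",
    "claim filed after statute of limitations expired, defendant seeks dismissal",
]
outcomes = ["plaintiff", "defendant", "plaintiff", "defendant"]

# TF-IDF converts each summary into word-frequency features; logistic
# regression then learns which features correlate with each outcome.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(cases, outcomes)

# Apply the learned correlations to a new, unseen case.
new_case = ["signed written agreement breached, plaintiff seeks damages"]
print(model.predict(new_case))  # e.g. ['plaintiff']

As the abstract emphasizes, such a model is a proxy: it captures statistical correlations rather than legal reasoning, and at best approximates the judgment of a similarly situated attorney.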

 

You can find the link and original paper below:

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2417415

A Right to Reasonable Inferences: Re-thinking Data Protection Law in the Age of Big Data and AI

Sandra Wachter & Brent Mittelstadt

University of Oxford – Oxford Internet Institute

September 13, 2018

Abstract

“Big Data analytics and artificial intelligence (AI) draw non-intuitive and unverifiable inferences and predictions about the behaviours, preferences, and private lives of individuals. These inferences draw on highly diverse and feature-rich data of unpredictable value, and create new opportunities for discriminatory, biased, and invasive decision-making. Concerns about algorithmic accountability are often actually concerns about the way in which these technologies draw privacy-invasive and non-verifiable inferences about us that we cannot predict, understand, or refute. Data protection law is meant to protect people’s privacy, identity, reputation, and autonomy, but it is currently failing to protect data subjects from the novel risks of inferential analytics. The broad concept of personal data in Europe could be interpreted to include inferences, predictions, and assumptions that refer to or impact on an individual. If inferences are seen as personal data, individuals are granted numerous rights under data protection law. However, the legal status of inferences is heavily disputed in legal scholarship, and marked by inconsistencies and contradictions within and between the views of the Article 29 Working Party and the European Court of Justice.

As we show in this paper, individuals are granted little control and oversight over how their personal data is used to draw inferences about them. Compared to other types of personal data, inferences are effectively ‘economy class’ personal data in the General Data Protection Regulation (GDPR). Data subjects’ rights to know about (Art 13-15), rectify (Art 16), delete (Art 17), object to (Art 21), or port (Art 20) personal data are significantly curtailed when it comes to inferences, often requiring a greater balance with controller’s interests (e.g. trade secrets, intellectual property) than would otherwise be the case. Similarly, the GDPR provides insufficient protection against sensitive inferences (Art 9) or remedies to challenge inferences or important decisions based on them (Art 22(3)).

This situation is not accidental. In standing jurisprudence, the European Court of Justice (ECJ; Bavarian Lager, YS. and M. and S., and Nowak) and the Advocate General (AG; YS. and M. and S. and Nowak) have consistently restricted the remit of data protection law to assessing the legitimacy of input personal data undergoing processing, and to rectifying, blocking, or erasing it. Critically, the ECJ has likewise made clear that data protection law is not intended to ensure the accuracy of decisions and decision-making processes involving personal data, or to make these processes fully transparent.

Conflict looms on the horizon in Europe that will further weaken the protection afforded to data subjects against inferences. Current policy proposals addressing privacy protection (the ePrivacy Regulation and the EU Digital Content Directive) fail to close the GDPR’s accountability gaps concerning inferences. At the same time, the GDPR and Europe’s new Copyright Directive aim to facilitate data mining, knowledge discovery, and Big Data analytics by limiting data subjects’ rights over personal data. Lastly, the new Trade Secrets Directive provides extensive protection of commercial interests attached to the outputs of these processes (e.g. models, algorithms and inferences).

In this paper we argue that a new data protection right, the ‘right to reasonable inferences’, is needed to help close the accountability gap currently posed by ‘high risk inferences’, meaning inferences that are privacy-invasive or damaging to reputation and have low verifiability in the sense of being predictive or opinion-based. In cases where algorithms draw ‘high risk inferences’ about individuals, this right would require the data controller to provide an ex-ante justification establishing whether an inference is reasonable. This disclosure would address (1) why certain data form a relevant basis for drawing inferences; (2) why these inferences are relevant for the chosen processing purpose or type of automated decision; and (3) whether the data and methods used to draw the inferences are accurate and statistically reliable. The ex-ante justification is bolstered by an additional ex-post mechanism enabling unreasonable inferences to be challenged. A right to reasonable inferences must, however, be reconciled with EU jurisprudence and counterbalanced with IP and trade secrets law as well as freedom of expression and Article 16 of the EU Charter of Fundamental Rights: the freedom to conduct a business.”
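
By way of illustration only, the three disclosure items in the proposed ex-ante justification could be captured in a simple record like the hypothetical sketch below. The class name, fields, and example contents are illustrative assumptions; the paper defines no such schema.

from dataclasses import dataclass

# Hypothetical record of the ex-ante justification the authors propose
# for 'high risk inferences'. The fields mirror the three disclosure
# items in the abstract; the schema itself is invented for illustration.
@dataclass
class InferenceJustification:
    data_relevance: str           # (1) why the data is a relevant basis for the inference
    purpose_relevance: str        # (2) why the inference matters for the processing purpose
    statistical_reliability: str  # (3) whether the data and methods are accurate and reliable

# Example entry a data controller might disclose before deployment
# (contents are invented for illustration).
justification = InferenceJustification(
    data_relevance="Browsing history correlates with product interest.",
    purpose_relevance="Predicted interest determines which advertisements are shown.",
    statistical_reliability="Model validated on held-out data; error rates published.",
)
print(justification)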

 

You can find the link and original paper below:

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3248829

 

How Copyright Law Can Fix Artificial Intelligence’s Implicit Bias Problem

Artificial intelligence could cook your cat for dinner!

Amanda Levendowski

New York University School of Law

July 24, 2017

Abstract

“As the use of artificial intelligence (AI) continues to spread, we have seen an increase in examples of AI systems reflecting or exacerbating societal bias, from racist facial recognition to sexist natural language processing. These biases threaten to overshadow AI’s technological gains and potential benefits. While legal and computer science scholars have analyzed many sources of bias, including the unexamined assumptions of its often-homogeneous creators, flawed algorithms, and incomplete datasets, the role of the law itself has been largely ignored. Yet just as code and culture play significant roles in how AI agents learn about and act in the world, so too do the laws that govern them. This Article is the first to examine perhaps the most powerful law impacting AI bias: copyright.

Artificial intelligence often learns to “think” by reading, viewing, and listening to copies of human works. This Article first explores the problem of bias through the lens of copyright doctrine, looking at how the law’s exclusion of access to certain copyrighted source materials may create or promote biased AI systems. Copyright law limits bias mitigation techniques, such as testing AI through reverse engineering, algorithmic accountability processes, and competing to convert customers. The rules of copyright law also privilege access to certain works over others, encouraging AI creators to use easily available, legally low-risk sources of data for teaching AI, even when those data are demonstrably biased. Second, the Article examines how a different part of copyright law, the fair use doctrine, has traditionally been used to address similar concerns in other technological fields, and asks whether it is equally capable of addressing them in the field of AI bias. The Article ultimately concludes that it is, in large part because the normative values embedded within traditional fair use align with the goals of mitigating AI bias and, quite literally, creating fairer AI systems.”

 

You can find the link and original paper below:

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3024938