A Right to Reasonable Inferences: Re-thinking Data Protection Law in the Age of Big Data and AI


 

 

Sandra Wachter & Brent Mittelstadt

 

University of Oxford – Oxford Internet Institute

 

September 13, 2018


Abstract

“Big Data analytics and artificial intelligence (AI) draw non-intuitive and unverifiable inferences and predictions about the behaviours, preferences, and private lives of individuals. These inferences draw on highly diverse and feature-rich data of unpredictable value, and create new opportunities for discriminatory, biased, and invasive decision-making. Concerns about algorithmic accountability are often actually concerns about the way in which these technologies draw privacy invasive and non-verifiable inferences about us that we cannot predict, understand, or refute. Data protection law is meant to protect people’s privacy, identity, reputation, and autonomy, but is currently failing to protect data subjects from the novel risks of inferential analytics. The broad concept of personal data in Europe could be interpreted to include inferences, predictions, and assumptions that refer to or impact on an individual. If inferences are seen as personal data, individuals are granted numerous rights under data protection law. However, the legal status of inferences is heavily disputed in legal scholarship, and marked by inconsistencies and contradictions within and between the views of the Article 29 Working Party and the European Court of Justice.

As we show in this paper, individuals are granted little control and oversight over how their personal data is used to draw inferences about them. Compared to other types of personal data, inferences are effectively ‘economy class’ personal data in the General Data Protection Regulation (GDPR). Data subjects’ rights to know about (Art 13-15), rectify (Art 16), delete (Art 17), object to (Art 21), or port (Art 20) personal data are significantly curtailed when it comes to inferences, often requiring a greater balance with the controller’s interests (e.g. trade secrets, intellectual property) than would otherwise be the case. Similarly, the GDPR provides insufficient protection against sensitive inferences (Art 9) or remedies to challenge inferences or important decisions based on them (Art 22(3)).

This situation is not accidental. In standing jurisprudence the European Court of Justice (ECJ; Bavarian Lager, YS. and M. and S., and Nowak) and the Advocate General (AG; YS. and M. and S. and Nowak) have consistently restricted the remit of data protection law to assessing the legitimacy of input personal data undergoing processing, and to rectifying, blocking, or erasing it. Critically, the ECJ has likewise made clear that data protection law is not intended to ensure the accuracy of decisions and decision-making processes involving personal data, or to make these processes fully transparent.

Conflict looms on the horizon in Europe that will further weaken the protection afforded to data subjects against inferences. Current policy proposals addressing privacy protection (the ePrivacy Regulation and the EU Digital Content Directive) fail to close the GDPR’s accountability gaps concerning inferences. At the same time, the GDPR and Europe’s new Copyright Directive aim to facilitate data mining, knowledge discovery, and Big Data analytics by limiting data subjects’ rights over personal data. And lastly, the new Trade Secrets Directive provides extensive protection of commercial interests attached to the outputs of these processes (e.g. models, algorithms and inferences).

In this paper we argue that a new data protection right, the ‘right to reasonable inferences’, is needed to help close the accountability gap currently posed by ‘high risk inferences’, meaning inferences that are privacy invasive or reputation damaging and have low verifiability in the sense of being predictive or opinion-based. In cases where algorithms draw ‘high risk inferences’ about individuals, this right would require ex-ante justification to be given by the data controller to establish whether an inference is reasonable. This disclosure would address (1) why certain data is a relevant basis to draw inferences; (2) why these inferences are relevant for the chosen processing purpose or type of automated decision; and (3) whether the data and methods used to draw the inferences are accurate and statistically reliable. The ex-ante justification is bolstered by an additional ex-post mechanism enabling unreasonable inferences to be challenged. A right to reasonable inferences must, however, be reconciled with EU jurisprudence and counterbalanced with IP and trade secrets law as well as freedom of expression and Article 16 of the EU Charter of Fundamental Rights: the freedom to conduct a business.”
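For readers who want the proposal's moving parts at a glance, here is a minimal Python sketch of the ex-ante justification the paper calls for. Every name in it (the class, its fields, the risk test) is our own illustrative shorthand distilled from the abstract, not code or terminology from the paper itself.

from dataclasses import dataclass

@dataclass
class InferenceJustification:
    """The three ex-ante disclosures a controller would owe for a high-risk inference."""
    data_relevance: str        # (1) why this data is a relevant basis to draw the inference
    purpose_relevance: str     # (2) why the inference matters for the processing purpose or decision
    reliability_evidence: str  # (3) whether the data and methods are accurate and statistically reliable

def is_high_risk(privacy_invasive: bool, reputation_damaging: bool, verifiable: bool) -> bool:
    # Per the abstract: privacy invasive or reputation damaging, and of low
    # verifiability (predictive or opinion-based rather than verifiable fact).
    return (privacy_invasive or reputation_damaging) and not verifiable

Under the proposal, a controller would supply such a justification before drawing any inference for which the high-risk test holds, and the ex-post mechanism would then let data subjects challenge inferences whose justification does not hold up.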

 

You can find the link and original paper below:

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3248829

 

Voter Privacy in the Age of Big Data

 


 

 

Ira Rubinstein

New York University (NYU) – Information Law Institute

 

April 26, 2014


Abstract

 

“In the past several election cycles, presidential campaigns and other well-funded races for major political offices have become data-driven operations. Presidential campaign organizations and the two main parties (and their data consultants) assemble and maintain extraordinarily detailed political dossiers on every American voter. These databases contain hundreds of millions of individual records, each of which has hundreds to thousands of data points. Because this data is computerized, candidates benefit from cheap and nearly unlimited storage, very fast processing, and the ability to engage in data mining of interesting voter patterns.

The hallmark of data-driven political campaigns is voter microtargeting, which political actors rely on to achieve better results in registering, mobilizing and persuading voters and getting out the vote on or before Election Day. Voter microtargeting is the targeting of voters in a highly individualized manner based on statistical correlations between their observable patterns of offline and online behavior and the likelihood of their supporting a candidate and casting a ballot for him or her. In other words, modern political campaigns rely on the analysis of large data sets in search of useful and unanticipated insights, an activity that is often summed up with the phrase “big data.” Despite the importance of big data in U.S. elections, the privacy implications of data-driven campaigning have not been thoroughly explored, much less regulated. Indeed, political dossiers may be the largest unregulated assemblage of personal data in contemporary American life.

This Article seeks to remedy this oversight. It proceeds in three parts. Part I offers the first comprehensive analysis of the main sources of voter data and the absence of legal protection for this data and related data processing activities. Part II considers the privacy interests of individuals in both their consumer and Internet-based activities and their participation in the political process, organizing the analysis under the broad rubrics of information privacy and political privacy. That is, it asks two interrelated questions: first, whether the relentless profiling and microtargeting of American voters invades their privacy (and if so, what harm it causes) and, second, to what extent these activities undermine the integrity of the election system. It also examines three reasons why political actors minimize privacy concerns: a penchant for secrecy that clashes with the core precept of transparent data practices; a tendency to rationalize away the problem by treating all voter data as if it were voluntarily provided or safely de-identified (and hence outside the scope of privacy law) while (falsely) claiming to follow the highest commercial privacy standards; and a mistaken embrace of commercial tracking and monitoring techniques as if their use has no impact on the democratic process.

Part III presents a moderate proposal for addressing the harms identified in Part II, consisting of (1) a mandatory disclosure and disclaimer regime requiring political actors to be more transparent about their campaign data practices; and (2) new federal privacy restrictions on commercial data brokers and a complementary “Do Not Track” mechanism enabling individuals (who also happen to be voters) to decide whether and to what extent commercial firms may track or target their online activity. The article concludes by asking whether even this moderate proposal runs afoul of political speech rights guaranteed by the First Amendment. It makes two arguments. First, the Supreme Court is likely to uphold mandatory privacy disclosures and disclaimers based on doctrines developed and re-affirmed in the leading campaign finance cases, which embrace transparency over other forms of regulation. Second, the Court will continue viewing commercial privacy regulations as constitutional under longstanding First Amendment doctrines, despite any incidental burdens they may impose on political actors, and notwithstanding its recent decision in Sorrell v. IMS Health, which is readily distinguishable.”
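To make the statistical scoring behind voter microtargeting concrete, here is a toy Python sketch: a few invented behavioural indicators, invented weights standing in for coefficients a campaign might fit by logistic regression on voter-file and consumer data, and a logistic link turning the score into a support probability. None of the features or numbers come from Rubinstein's article.

import math

# Hypothetical 0/1 behavioural indicators for one voter.
voter = {
    "donated_before": 1,
    "reads_political_news": 1,
    "rural_address": 0,
    "outdoor_magazine_subscriber": 1,
}

# Invented coefficients standing in for a fitted model.
weights = {
    "donated_before": 1.2,
    "reads_political_news": 0.6,
    "rural_address": -0.4,
    "outdoor_magazine_subscriber": 0.3,
}
intercept = -1.0

score = intercept + sum(weights[k] * v for k, v in voter.items())
support_probability = 1 / (1 + math.exp(-score))  # logistic link
print(f"Estimated support probability: {support_probability:.2f}")  # ~0.75

A campaign would rank voters by such probabilities and direct registration, persuasion, and turnout contacts accordingly, which is precisely the practice whose privacy implications the article examines.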

 

You can find the link and original paper below:

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2447956

AI at Google: Our Principles


Sundar Pichai

CEO

June 7, 2018


At its heart, AI is computer programming that learns and adapts. It can’t solve every problem, but its potential to improve our lives is profound. At Google, we use AI to make products more useful—from email that’s spam-free and easier to compose, to a digital assistant you can speak to naturally, to photos that pop the fun stuff out for you to enjoy.

Beyond our products, we’re using AI to help people tackle urgent problems. A pair of high school students are building AI-powered sensors to predict the risk of wildfires. Farmers are using it to monitor the health of their herds. Doctors are starting to use AI to help diagnose cancer and prevent blindness. These clear benefits are why Google invests heavily in AI research and development, and makes AI technologies widely available to others via our tools and open-source code.

We recognize that such powerful technology raises equally powerful questions about its use. How AI is developed and used will have a significant impact on society for many years to come. As a leader in AI, we feel a deep responsibility to get this right. So today, we’re announcing seven principles to guide our work going forward. These are not theoretical concepts; they are concrete standards that will actively govern our research and product development and will impact our business decisions.

We acknowledge that this area is dynamic and evolving, and we will approach our work with humility, a commitment to internal and external engagement, and a willingness to adapt our approach as we learn over time.

Objectives for AI applications

We will assess AI applications in view of the following objectives. We believe that AI should:

  1. Be socially beneficial. 

The expanded reach of new technologies increasingly touches society as a whole. Advances in AI will have transformative impacts in a wide range of fields, including healthcare, security, energy, transportation, manufacturing, and entertainment. As we consider potential development and uses of AI technologies, we will take into account a broad range of social and economic factors, and will proceed where we believe that the overall likely benefits substantially exceed the foreseeable risks and downsides.

AI also enhances our ability to understand the meaning of content at scale. We will strive to make high-quality and accurate information readily available using AI, while continuing to respect cultural, social, and legal norms in the countries where we operate. And we will continue to thoughtfully evaluate when to make our technologies available on a non-commercial basis.

  2. Avoid creating or reinforcing unfair bias.

AI algorithms and datasets can reflect, reinforce, or reduce unfair biases.  We recognize that distinguishing fair from unfair biases is not always simple, and differs across cultures and societies. We will seek to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.

  3. Be built and tested for safety.

We will continue to develop and apply strong safety and security practices to avoid unintended results that create risks of harm.  We will design our AI systems to be appropriately cautious, and seek to develop them in accordance with best practices in AI safety research. In appropriate cases, we will test AI technologies in constrained environments and monitor their operation after deployment.

  4. Be accountable to people.

We will design AI systems that provide appropriate opportunities for feedback, relevant explanations, and appeal. Our AI technologies will be subject to appropriate human direction and control.

  5. Incorporate privacy design principles.

We will incorporate our privacy principles in the development and use of our AI technologies. We will give opportunity for notice and consent, encourage architectures with privacy safeguards, and provide appropriate transparency and control over the use of data.

  6. Uphold high standards of scientific excellence.

Technological innovation is rooted in the scientific method and a commitment to open inquiry, intellectual rigor, integrity, and collaboration. AI tools have the potential to unlock new realms of scientific research and knowledge in critical domains like biology, chemistry, medicine, and environmental sciences. We aspire to high standards of scientific excellence as we work to progress AI development.

We will work with a range of stakeholders to promote thoughtful leadership in this area, drawing on scientifically rigorous and multidisciplinary approaches. And we will responsibly share AI knowledge by publishing educational materials, best practices, and research that enable more people to develop useful AI applications.

  7. Be made available for uses that accord with these principles.

Many technologies have multiple uses. We will work to limit potentially harmful or abusive applications. As we develop and deploy AI technologies, we will evaluate likely uses in light of the following factors:

 

  • Primary purpose and use: the primary purpose and likely use of a technology and application, including how closely the solution is related to or adaptable to a harmful use
  • Nature and uniqueness: whether we are making available technology that is unique or more generally available
  • Scale: whether the use of this technology will have significant impact
  • Nature of Google’s involvement: whether we are providing general-purpose tools, integrating tools for customers, or developing custom solutions

AI applications we will not pursue

In addition to the above objectives, we will not design or deploy AI in the following application areas:

 

  1. Technologies that cause or are likely to cause overall harm.  Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.
  2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
  3. Technologies that gather or use information for surveillance violating internationally accepted norms.
  4. Technologies whose purpose contravenes widely accepted principles of international law and human rights.

 

We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas. These include cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue. These collaborations are important and we’ll actively look for more ways to augment the critical work of these organizations and keep service members and civilians safe.

AI for the long term

While this is how we’re choosing to approach AI, we understand there is room for many voices in this conversation. As AI technologies progress, we’ll work with a range of stakeholders to promote thoughtful leadership in this area, drawing on scientifically rigorous and multidisciplinary approaches. And we will continue to share what we’ve learned to improve AI technologies and practices.

We believe these principles are the right foundation for our company and the future development of AI. This approach is consistent with the values laid out in our original Founders’ Letter back in 2004. There we made clear our intention to take a long-term perspective, even if it means making short-term tradeoffs. We said it then, and we believe it now.

 

You can find the original text at the link below:

https://blog.google/topics/ai/ai-principles/

Privacy and Freedom of Expression in the Age of Artificial Intelligence

 


ARTICLE 19 & Privacy International

April 2018

 

Executive Summary

“Artificial Intelligence (AI) is part of our daily lives. This technology shapes how people access information, interact with devices, share personal information, and even understand foreign languages. It also transforms how individuals and groups can be tracked and identified, and dramatically alters what kinds of information can be gleaned about people from their data.

AI has the potential to revolutionise societies in positive ways. However, as with any scientific or technological advancement, there is a real risk that the use of new tools by states or corporations will have a negative impact on human rights.

While AI impacts a plethora of rights, ARTICLE 19 and Privacy International are particularly concerned about the impact it will have on the right to privacy and the right to freedom of expression and information.

This scoping paper focuses on applications of ‘artificial narrow intelligence’: in particular, machine learning and its implications for human rights.

The aim of the paper is fourfold:

  1. Present key technical definitions to clarify the debate;
  2. Examine key ways in which AI impacts the right to freedom of expression and the right to privacy and outline key challenges;
  3. Review the current landscape of AI governance, including various existing legal, technical, and corporate frameworks and industry-led AI initiatives that are relevant to freedom of expression and privacy; and
  4. Provide initial suggestions for rights-based solutions which can be pursued by civil society organisations and other stakeholders in AI advocacy activities.

We believe that policy and technology responses in this area must:

  • Ensure protection of human rights, in particular the right to freedom of expression and the right to privacy;
  • Ensure accountability and transparency of AI;
  • Encourage governments to review the adequacy of any legal and policy frameworks, and regulations on AI with regard to the protection of freedom of expression and privacy;
  • Be informed by a holistic understanding of the impact of the technology: case studies and empirical research on the impact of AI on human rights must be collected; and
  • Be developed in collaboration with a broad range of stakeholders, including civil society and expert networks.”

 

You can find the link and original report below:

https://privacyinternational.org/sites/default/files/2018-04/Privacy%20and%20Freedom%20of%20Expression%20%20In%20the%20Age%20of%20Artificial%20Intelligence.pdf

Automated Individual Decision-Making and Profiling


ARTICLE 29 DATA PROTECTION WORKING PARTY

3 October 2017, 17/EN WP 251

 

 

INTRODUCTION

The General Data Protection Regulation (the GDPR) specifically addresses profiling and automated individual decision-making, including profiling.

Profiling and automated decision-making are used in an increasing number of sectors, both private and public. Banking and finance, healthcare, taxation, insurance, marketing and advertising are just a few examples of the fields where profiling is being carried out more regularly to aid decision-making.

Advances in technology and the capabilities of big data analytics, artificial intelligence and machine learning have made it easier to create profiles and make automated decisions with the potential to significantly impact individuals’ rights and freedoms.

The widespread availability of personal data on the internet and from Internet of Things (IoT) devices, and the ability to find correlations and create links, can allow aspects of an individual’s personality or behaviour, interests and habits to be determined, analysed and predicted.
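As a reading aid for the correlation-and-linking step just described, here is a toy Python sketch. The observations, signals, and rule are invented for illustration; a real profiler would learn such correlations statistically from large datasets rather than hand-code them.

# Scattered observations about one person, e.g. from browsing and IoT devices.
observations = [
    ("browsing", "marathon training plans"),
    ("wearable", "6am runs, about 10 km"),
    ("purchase", "running shoes"),
]

# A hand-written rule standing in for learned correlations.
interest_signals = {
    "running": {"marathon training plans", "6am runs, about 10 km", "running shoes"},
}

profile = {
    interest
    for interest, signals in interest_signals.items()
    if any(value in signals for _, value in observations)
}
print(profile)  # {'running'}: an inferred interest the person never declared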

Profiling and automated decision-making can be useful for individuals and organisations as well as for the economy and society as a whole, delivering benefits such as increased efficiencies and resource savings.

They have many commercial applications: for example, they can be used to better segment markets and tailor services and products to align with individual needs. Medicine, education, healthcare and transportation can also all benefit from these processes.

However, profiling and automated decision-making can pose significant risks for individuals’ rights and freedoms which require appropriate safeguards.

These processes can be opaque. Individuals might not know that they are being profiled or understand what is involved.

Profiling can perpetuate existing stereotypes and social segregation. It can also lock a person into a specific category and restrict them to their suggested preferences. This can undermine their freedom to choose, for example, certain products or services such as books, music or newsfeeds. It can lead to inaccurate predictions, denial of services and goods and unjustified discrimination in some cases.

The GDPR introduces new provisions to address the risks arising from profiling and automated decision-making, notably, but not limited to, privacy. The purpose of these guidelines is to clarify those provisions.

This document covers:

- Definitions of profiling and automated decision-making and the GDPR approach to these in general – Chapter II

- Specific provisions on automated decision-making as defined in Article 22 – Chapter III

- General provisions on profiling and automated decision-making – Chapter IV

- Children and profiling – Chapter V

- Data protection impact assessments – Chapter VI

 

The Annexes provide best practice recommendations, building on the experience gained in EU Member States.

 

 

You can find the guidelines at the link below:

http://ec.europa.eu/newsroom/document.cfm?doc_id=47742

Robots and Privacy


 

Ryan Calo

University of Washington

2 April 2010


Abstract

“It is not hard to imagine why robots raise privacy concerns. Practically by definition, robots are equipped with the ability to sense, process, and record the world around them. Robots can go places humans cannot go, see things humans cannot see. Robots are, first and foremost, a human instrument. And after industrial manufacturing, the principal use to which we’ve put that instrument has been surveillance.

Yet increasing the power to observe is just one of the ways in which robots may implicate privacy within the next decade. This chapter breaks the effects of robots on privacy into three categories — direct surveillance, increased access, and social meaning — with the goal of introducing the reader to a wide variety of issues. Where possible, the chapter points toward ways in which we might mitigate or redress the potential impact of robots on privacy, but acknowledges that in some cases redress will be difficult under the current state of privacy law.”

 

You can find the article at the link below:

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1599189