The Impact of the General Data Protection Regulation on Artificial Intelligence 


 

European Parliamentary Research Service

June 2020

 

Abstract

This study addresses the relationship between the General Data Protection Regulation (GDPR) and artificial intelligence (AI). After introducing some basic concepts of AI, it reviews the state of the art in AI technologies and focuses on the application of AI to personal data. It considers challenges and opportunities for individuals and society, and the ways in which risks can be countered and opportunities enabled through law and technology. 

The study then provides an analysis of how AI is regulated in the GDPR and examines the extent to which AI fits into the GDPR conceptual framework. It discusses the tensions and proximities between AI and data protection principles, such as, in particular, purpose limitation and data minimisation. It examines the legal bases for AI applications to personal data and considers duties of information concerning AI systems, especially those involving profiling and automated decision-making. It reviews data subjects’ rights, such as the rights to access, erasure and portability, and the right to object.

The study carries out a thorough analysis of automated decision-making, considering the extent to which automated decisions are admissible, the safeguard measures to be adopted, and whether data subjects have a right to individual explanations. It then addresses the extent to which the GDPR provides for a preventive risk-based approach, focusing on data protection by design and by default. The possibility of using AI for statistical purposes in a way that is consistent with the GDPR is also considered.

The study concludes by observing that AI can be deployed in a way that is consistent with the GDPR, but also that the GDPR does not provide sufficient guidance for controllers, and that its prescriptions need to be expanded and concretised. Some suggestions in this regard are developed.

 

You can find the original report below:

EPRS_STU(2020)641530_EN.pdf

Smart Glasses and Data Protection

 


 

Brussels, January 2019

 

 

Executive Summary

Smart glasses are wearable computers with a mobile Internet connection that are worn like glasses or mounted on regular glasses. They allow information to be displayed in the user’s field of view and can capture information from the physical world using, for example, a camera, microphone and GPS receiver for augmented-reality (AR) applications.

The initial release of Google’s smart glasses gained significant attention worldwide and increased the popularity of these devices. While the target audience was initially the business sector (e.g. logistics, training simulations) with unit prices of about EUR 1500, competitors such as Snap Inc. have recently addressed a wider and younger audience with cheaper models for about EUR 150.

While smart glasses may be very useful tools in many fields of application (technical maintenance, education, construction, etc.), their use has been controversial because they are also considered to have a high potential to undermine the privacy of individuals, especially where they are not designed in a privacy-friendly way. The data protection impact of recording videos of persons in public places has already been discussed in the context of CCTV and dashcams. The sensors may record environmental information including video streams of the user’s field of view, audio recordings and localisation data. Furthermore, smart glasses may allow their users to process invisible personal data of others, such as device identifiers that devices emit regularly in the form of Wi-Fi or Bluetooth radio signals. These data may contain personal data not only of the users, but also of individuals in their proximity (non-users). Applications on smart glasses may process recorded data locally, or remotely by third parties after an automated transfer via the Internet. Especially when smart glasses are used in densely populated public areas, existing safeguards to inform data subjects by means of acoustic or visual indicators (LEDs) are not effective.

Smart glasses may also leak personal data of their users to their environment. Depending on the design, non-users may be able to watch the smart glass display, which may contain personal data such as private mails, pictures, etc. Like any other Internet-connected device, smart glasses may suffer from security loopholes that can be actively exploited to steal data or run unauthorised software.

While smart glasses so far play only a marginal role in everyday life, experts see significant potential to increase productivity in the professional sector thanks to AR, and the smart glasses initiatives of Facebook, Apple and Amazon are expected to lead to increasing adoption in the consumer market. Technological improvements in facial or voice recognition and battery life may allow for novel use cases of smart glasses in many sectors. For instance, in the law enforcement field, reports revealed in early 2018 that police officers employ smart glasses to match individuals (in crowds) against a database of criminal suspects using facial recognition. In this dynamic field, data protection authorities are challenged to keep pace with the rapid developments and provide guidelines. Indeed, many aspects have already been covered in the WP29 Opinion on the Internet of Things.

With the GDPR, a harmonised set of principles and a system of tools have been provided, first and foremost for the controllers, processors and developers of smart glasses, to assess and control their impact on data protection and privacy. At the current stage of development, an urgent need for technology-specific legislative initiatives does not appear to be justified. However, the development of smart glasses and similar connected recording devices underlines the need to establish a robust framework for privacy and electronic communications, as proposed with the ePrivacy Regulation.

 

You can find the full report at the link below:

https://edps.europa.eu/sites/edp/files/publication/19-01-18_edps-tech-report-1-smart_glasses_en.pdf

Personal Robots and Personal Data

 


 

 

Gizem Gültekin Várkonyi

Assistant Coordinator, PhD Fellow

University of Szeged

Faculty of Law and Political Sciences

 

 

Abstract

In this blog post, I examine whether, and to what extent, it is possible to exercise the right to personal data protection in the era of Social Robots with Artificial Intelligence (hereafter, Social Robots). I analyze the concept of consent, which was strengthened in the European Union’s General Data Protection Regulation (GDPR). I reach the conclusion that a Social Robot in personal use challenges the practicability of the GDPR. This conclusion derives, first, from a Social Robot’s ability to collect vast amounts of data naturally, e.g. via natural Human-Robot Interaction or when it connects to the Internet. Since a personal Social Robot’s life source, its blood, is personal data, it would be absurd for a user not to give consent in order to receive more personal services. In addition, it is well known that most users do not read or listen to consent texts, and often do not understand them even if they do. Moreover, it is not easy to answer the question of whether consent can be validly given for purposes that even the developer is not able to foresee (“unpredictable by design”). Finally, even if consent was validly given, it is not possible to make a Social Robot “forget” the personal data in question once that data has become an organic part of the robot’s neural network. How consent could be withdrawn from a Social Robot should also be questioned.

 

For more details about the author:

https://robotic.legal/author/ggvarkonyi/

A Right to Reasonable Inferences: Re-thinking Data Protection Law in the Age of Big Data and AI


 

 

Sandra Wachter  & Brent Mittelstadt

 

University of Oxford – Oxford Internet Institute

 

 September 13, 2018

 

 

 

 

Abstract

“Big Data analytics and artificial intelligence (AI) draw non-intuitive and unverifiable inferences and predictions about the behaviours, preferences, and private lives of individuals. These inferences draw on highly diverse and feature-rich data of unpredictable value, and create new opportunities for discriminatory, biased, and invasive decision-making. Concerns about algorithmic accountability are often actually concerns about the way in which these technologies draw privacy invasive and non-verifiable inferences about us that we cannot predict, understand, or refute. Data protection law is meant to protect people’s privacy, identity, reputation, and autonomy, but is currently failing to protect data subjects from the novel risks of inferential analytics. The broad concept of personal data in Europe could be interpreted to include inferences, predictions, and assumptions that refer to or impact on an individual. If seen as personal data, individuals are granted numerous rights under data protection law. However, the legal status of inferences is heavily disputed in legal scholarship, and marked by inconsistencies and contradictions within and between the views of the Article 29 Working Party and the European Court of Justice.

As we show in this paper, individuals are granted little control and oversight over how their personal data is used to draw inferences about them. Compared to other types of personal data, inferences are effectively ‘economy class’ personal data in the General Data Protection Regulation (GDPR). Data subjects’ rights to know about (Art 13-15), rectify (Art 16), delete (Art 17), object to (Art 21), or port (Art 20) personal data are significantly curtailed when it comes to inferences, often requiring a greater balance with controller’s interests (e.g. trade secrets, intellectual property) than would otherwise be the case. Similarly, the GDPR provides insufficient protection against sensitive inferences (Art 9) or remedies to challenge inferences or important decisions based on them (Art 22(3)).

This situation is not accidental. In standing jurisprudence the European Court of Justice (ECJ; Bavarian Lager, YS. and M. and S., and Nowak) and the Advocate General (AG; YS. and M. and S. and Nowak) have consistently restricted the remit of data protection law to assessing the legitimacy of input personal data undergoing processing, and to rectify, block, or erase it. Critically, the ECJ has likewise made clear that data protection law is not intended to ensure the accuracy of decisions and decision-making processes involving personal data, or to make these processes fully transparent.

Conflict looms on the horizon in Europe that will further weaken the protection afforded to data subjects against inferences. Current policy proposals addressing privacy protection (the ePrivacy Regulation and the EU Digital Content Directive) fail to close the GDPR’s accountability gaps concerning inferences. At the same time, the GDPR and Europe’s new Copyright Directive aim to facilitate data mining, knowledge discovery, and Big Data analytics by limiting data subjects’ rights over personal data. And lastly, the new Trade Secrets Directive provides extensive protection of commercial interests attached to the outputs of these processes (e.g. models, algorithms and inferences).

In this paper we argue that a new data protection right, the ‘right to reasonable inferences’, is needed to help close the accountability gap currently posed by ‘high risk inferences’, meaning inferences that are privacy invasive or reputation damaging and have low verifiability in the sense of being predictive or opinion-based. In cases where algorithms draw ‘high risk inferences’ about individuals, this right would require ex-ante justification to be given by the data controller to establish whether an inference is reasonable. This disclosure would address (1) why certain data is a relevant basis to draw inferences; (2) why these inferences are relevant for the chosen processing purpose or type of automated decision; and (3) whether the data and methods used to draw the inferences are accurate and statistically reliable. The ex-ante justification is bolstered by an additional ex-post mechanism enabling unreasonable inferences to be challenged. A right to reasonable inferences must, however, be reconciled with EU jurisprudence and counterbalanced with IP and trade secrets law as well as freedom of expression and Article 16 of the EU Charter of Fundamental Rights: the freedom to conduct a business.”

 

You can find the original paper at the link below:

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3248829

 

Artificial Intelligence, Data Security and GDPR

 


 

 

Among the prominent features of artificial intelligence and machine learning, which are now used in many sectors, are the ability to analyze data much faster than conventional programmatic tools and human beings, and the ability to learn how to process data on their own.

In recent years, profiling and automated decision-making systems, which are frequently used in both the public and private sectors, have brought benefits to individuals and corporations in terms of increased productivity and resource conservation, but they have also brought risks. The decisions these systems make affect individuals, and due to their complex nature it may not be possible to justify or explain them. For example, artificial intelligence can lock a user into a specific category and restrict them to the suggested preferences. This reduces their freedom to choose specific products and services, such as books, music or news articles. (Article 29 Data Protection Working Party, WP251, p. 5)

The GDPR, which will come into force in Europe in May 2018, has provisions on profiling and automated decision-making to prevent them from being used in a way that has an adverse effect on the rights of individuals. The GDPR defines profiling in Article 4 as follows: “Any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyze or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements.” (WP251, p. 6) Profiling is used to make predictions about people, using data obtained from various sources on those people. From this point of view, it can also be considered as an evaluation or classification of individuals based on characteristics such as age, gender and weight.
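To make this notion of profiling as classification concrete, the following minimal sketch (with entirely hypothetical data, features and segments, none of which come from the GDPR or WP251) shows how a simple model could place individuals into a marketing segment based on age, gender and weight:

```python
# Minimal, hypothetical sketch of profiling as classification (illustrative only).
# It places individuals into a marketing segment from a few personal attributes,
# i.e. an "evaluation of certain personal aspects" in the sense of Article 4(4).
from sklearn.tree import DecisionTreeClassifier

# Hypothetical training data: [age, gender (0 = female, 1 = male), weight in kg]
X_train = [
    [25, 0, 60],
    [32, 1, 85],
    [47, 0, 70],
    [53, 1, 95],
    [19, 0, 55],
    [61, 1, 80],
]
# Hypothetical segments assigned by a marketer: 0 = "budget", 1 = "premium"
y_train = [0, 1, 0, 1, 0, 1]

model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(X_train, y_train)

# Profiling step: the model predicts a segment for a new person.
new_person = [[29, 0, 62]]
print("Predicted segment:", model.predict(new_person)[0])
```

Even a trivial model like this already evaluates personal aspects relating to a natural person, which is why the transparency and accuracy obligations discussed below attach to it.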

Automated decision-making is the ability to make decisions by technological means (such as artificial intelligence) without human involvement. Automated decision-making can be based on any type of data, for example: data provided directly by the individuals concerned (such as responses to a questionnaire); data observed about the individuals (such as location data collected via an application); or derived or inferred data, such as a profile of the individual that has already been created (e.g. a credit score).

There are potentially three ways in which profiling may be used:

(i) general profiling;

(ii) decision-making based on profiling; and

(iii) solely automated decision-making, including profiling (Article 22).

The difference between (ii) and (iii) is best demonstrated by the following two examples in which an individual applies for a loan online: a human decides whether to grant the loan based on a profile produced by purely automated means (ii); an algorithm decides whether the loan is granted and the decision is automatically delivered to the individual, without any meaningful human input (iii). (WP251, p. 8)
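As a rough illustration of that distinction, the sketch below (with an invented scoring formula and threshold, purely for illustration) contrasts a workflow in which a human reviews the automatically produced profile before deciding (ii) with one in which the algorithm’s decision is delivered directly to the applicant (iii):

```python
# Hypothetical sketch of the difference between (ii) and (iii) in a loan workflow.
# The scoring function and the 0.6 threshold are invented for illustration only.

def credit_score(income: float, existing_debt: float, missed_payments: int) -> float:
    """Toy profile score between 0 and 1 (higher = more creditworthy)."""
    score = 0.5 + 0.000005 * income - 0.000004 * existing_debt - 0.1 * missed_payments
    return max(0.0, min(1.0, score))

def decide_with_human_review(applicant: dict) -> str:
    # (ii) decision-making based on profiling: the profile is produced automatically,
    # but a human loan officer makes (and can override) the final decision.
    score = credit_score(**applicant)
    recommendation = "approve" if score >= 0.6 else "reject"
    return f"score={score:.2f}, recommendation={recommendation} -> routed to a human officer"

def decide_solely_automated(applicant: dict) -> str:
    # (iii) solely automated decision-making (Article 22): the algorithm decides
    # and the outcome is delivered without meaningful human input.
    score = credit_score(**applicant)
    decision = "approved" if score >= 0.6 else "rejected"
    return f"score={score:.2f}, decision={decision} (sent automatically to the applicant)"

applicant = {"income": 42000, "existing_debt": 15000, "missed_payments": 1}
print(decide_with_human_review(applicant))
print(decide_solely_automated(applicant))
```

The code in both branches is identical up to the scoring step; what changes the legal qualification is whether a human with real authority sits between the score and the outcome.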

The important questions that arise here are:

-How does the algorithm access this data?

-Is the source of the data correct?

-Does the algorithm’s decision produce legal effects concerning the person?

-Do individuals have rights regarding decisions based on automated processing?

-What measures should data controllers take in this case?

Nowadays, most companies are able to analyze their customers’ behavior by collecting their data. For example, an insurance company can determine insurance premiums by tracking a driver’s driving behavior through automated decision-making. In addition, profiling and automated decision-making systems, especially in advertising and marketing applications, can have significant effects on other individuals. Hypothetically, a credit card company could reduce a customer’s card limit not on the basis of that customer’s own payment history, but by analyzing other customers who live in the same area and shop at the same stores. This means that, based on the actions of others, an individual may be deprived of an opportunity.
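The following minimal sketch (hypothetical postcodes, stores and records) illustrates how such group-based profiling could work in practice: the limit decision is driven entirely by the default rate of other customers sharing the same postcode and store, while the individual’s own payment history never enters the calculation:

```python
# Hypothetical sketch of group-based profiling: a card limit is cut because of
# the behaviour of *other* customers sharing the same postcode and store,
# regardless of the individual's own payment history.
from statistics import mean

# Invented records of other customers: (postcode, store, defaulted?)
other_customers = [
    ("1050", "MegaMart", True),
    ("1050", "MegaMart", True),
    ("1050", "MegaMart", False),
    ("2000", "MegaMart", False),
]

def peer_default_rate(postcode: str, store: str) -> float:
    peers = [c for c in other_customers if c[0] == postcode and c[1] == store]
    return mean(1.0 if defaulted else 0.0 for _, _, defaulted in peers) if peers else 0.0

def adjust_limit(current_limit: float, postcode: str, store: str) -> float:
    # The customer's own flawless payment history never enters this decision.
    rate = peer_default_rate(postcode, store)
    return current_limit * 0.5 if rate > 0.5 else current_limit

print(adjust_limit(5000.0, "1050", "MegaMart"))  # limit halved because of the peers' defaults
```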

Data controllers will be held responsible for mistakes

It is important to note that mistakes or biases in the collected or shared data may lead to evaluations based on incorrect classifications and uncertain outcomes in the automated decision-making process, and may have negative effects on the individual. Decisions can be based on out-of-date data, or externally sourced data can be misinterpreted by the system. If the data used for automated decision-making are not correct, the resulting decision or profile will not be correct either.

In the face of possible mistakes that may arise in such systems where artificial intelligence and machine learning are used, certain obligations fall on data controllers. The data controller should take adequate measures to ensure that the data used or indirectly obtained are accurate and up to date.

In addition, the data controller should also address data retention periods, as long retention times may be incompatible with keeping data accurate and up to date, as well as with proportionality. Another important issue is the processing and use of special categories of personal data in these systems. The GDPR requires explicit consent for the processing of special categories of personal data. However, the data controller should remember that profiling can create special categories of personal data by combining data that are not themselves special categories but become so when combined. For example, a person’s health status may be inferred from food purchase records combined with data on food quality and energy content. (WP251, p. 22)
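A minimal, hypothetical sketch of this kind of inference is shown below; the product list and the decision rule are invented for illustration, but they show how ordinary purchase records can be turned into a health-related (special category) attribute that the person never disclosed:

```python
# Hypothetical sketch: inferring a health-related (special category) attribute
# from ordinary, non-special purchase records. Products and rule are invented.
GLUTEN_FREE = {"gluten-free bread", "gluten-free pasta", "rice flour"}

def infer_possible_condition(purchases: list) -> str:
    # If most purchases are gluten-free substitutes, the profile effectively
    # suggests a health condition (e.g. coeliac disease) the person never disclosed.
    gf_share = sum(p in GLUTEN_FREE for p in purchases) / len(purchases)
    return "possible gluten intolerance" if gf_share >= 0.6 else "no inference"

basket = ["gluten-free bread", "gluten-free pasta", "rice flour", "milk", "gluten-free bread"]
print(infer_possible_condition(basket))  # -> "possible gluten intolerance"
```

None of the individual purchase records is special category data on its own; it is the combination and the inference drawn from it that bring the processing under Article 9.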

The GDPR also provides that people who are affected by automated decision-making have certain rights in this situation. Given the transparency principle underlying the GDPR, according to Articles 13 and 14 the data controller should clearly explain to the individual how the profiling or automated decision-making process works.

Profiling may include an estimation component, which increases the risk of mistakes. Input data may be incorrect, irrelevant or taken out of context. Individuals may want to challenge the validity of the data and of the grouping used. At this point, according to Article 16, the data subject also has the right to rectification.

Similarly, the right to erasure in Article 17 may be exercised by the data subject in this context. If consent was the basis for the profiling and is subsequently withdrawn, the data controller has to erase the data subject’s personal data, as long as there is no other legal basis for the profiling.

The importance of children’s personal data

Another point that needs attention concerning profiling and automated decision-making is the use of children’s personal data. Children can be more vulnerable and more easily influenced, especially in online media. For example, in online games, profiling can be used to target players who are more likely to spend money in the game, as well as to serve them more personalized advertising. Article 22 of the GDPR does not distinguish between processing related to children and to adults. Nevertheless, as children can be easily affected by such marketing efforts, the data controller must take appropriate measures for children and ensure those measures are effective in protecting children’s rights, freedoms and legitimate interests.

As a result, profiling and automated decision-making based on systems such as artificial intelligence and machine learning can have important consequences for individuals. Data collected in connection with this technology must be collected with the consent of the persons concerned or on another legal basis. It is also important that these data are subsequently used only in connection with the purpose for which they were collected. If the system suddenly starts to make unusual decisions, the data controller should take the necessary precautions to safeguard the rights and freedoms of the persons involved, including defining what roadmap to follow.

See also:

Yapay Zekâ, Veri Güvenliği ve GDPR (in Turkish: Artificial Intelligence, Data Security and GDPR)

Automated Individual Decision-Making And Profiling


 


ARTICLE 29 DATA PROTECTION WORKING PARTY

3 October 2017, 17/EN WP 251

 

 

INTRODUCTION

The General Data Protection Regulation (the GDPR) specifically addresses profiling and automated individual decision-making, including profiling.

Profiling and automated decision-making are used in an increasing number of sectors, both private and public. Banking and finance, healthcare, taxation, insurance, marketing and advertising are just a few examples of the fields where profiling is being carried out more regularly to aid decision-making.

Advances in technology and the capabilities of big data analytics, artificial intelligence and machine learning have made it easier to create profiles and make automated decisions with the potential to significantly impact individuals’ rights and freedoms.

The widespread availability of personal data on the internet and from Internet of Things (IoT) devices, and the ability to find correlations and create links, can allow aspects of an individual’s personality or behaviour, interests and habits to be determined, analysed and predicted.

Profiling and automated decision-making can be useful for individuals and organisations as well as for the economy and society as a whole, delivering benefits such as increased efficiencies and resource savings.

They have many commercial applications, for example, they can be used to better segment markets and tailor services and products to align with individual needs. Medicine, education, healthcare and transportation can also all benefit from these processes.

However, profiling and automated decision-making can pose significant risks for individuals’ rights and freedoms which require appropriate safeguards.

These processes can be opaque. Individuals might not know that they are being profiled or understand what is involved.

Profiling can perpetuate existing stereotypes and social segregation. It can also lock a person into a specific category and restrict them to their suggested preferences. This can undermine their freedom to choose, for example, certain products or services such as books, music or newsfeeds. It can lead to inaccurate predictions, denial of services and goods and unjustified discrimination in some cases.

The GDPR introduces new provisions to address the risks arising from profiling and automated decision-making, notably, but not limited to, privacy. The purpose of these guidelines is to clarify those provisions.

This document covers:

-Definitions of profiling and automated decision-making and the GDPR approach to these in general – Chapter II

-Specific provisions on automated decision-making as defined in Article 22 – Chapter III

-General provisions on profiling and automated decision-making – Chapter IV

-Children and profiling – Chapter V

-Data protection impact assessments – Chapter VI

 

The Annexes provide best practice recommendations, building on the experience gained in EU Member States.

 

 

You can access the guidelines via the link below:

http://ec.europa.eu/newsroom/document.cfm?doc_id=47742