Data Privacy Guidelines for AI Solutions

 


November 2020

Purpose

  1. The purpose of this paper is to provide guiding principles concerning the use of personal and personal-related information in the context of Artificial Intelligence (AI) solutions developed as part of applied Information & Communication Technologies (ICTs), and to emphasise the importance of a legitimate basis for AI data processing by governments and corporations. 
  2. This Guidance is intended to serve as a common international minimum baseline for data protection standards regarding AI solutions, especially those to be implemented at the domestic level, and to be a reference point for the ongoing debate on how the right to privacy can be protected in the context of AI solutions. 
  3. AI solutions are intended to guide or make decisions that affect all our lives. Therefore, AI solutions are currently the subject of broader debates within society. The subjects of these debates (moral, ethical and societal questions, including non-discrimination and free participation) remain unresolved. All of these questions presuppose lawful data processing from a data privacy perspective. The data privacy underpinnings for AI solutions are the focus of this Guidance. 
  4. This Guidance is based on the Universal Declaration of Human Rights (adopted 10 December 1948, reaffirmed 2015; UDHR) and reflects the spirit as well as the understanding of that Declaration. Above all, Article 7 (non-discrimination) and Article 12 (right to privacy) shall be considered whenever developing or operating AI solutions. The themes and values of these UDHR Articles are found in Articles 2 and 3 (non-discrimination) and Article 17 (privacy) of the International Covenant on Civil and Political Rights, and are obligations upon countries that have ratified that treaty. 

Scope

  1. This Guidance is applicable to the data processing of AI solutions in all sectors of society including the public and private sectors. Data processing in this context means the design, the development, the operation and decommissioning of an AI solution. 
  2. This Guidance is applicable to all controllers of AI solutions. Controller in this context means a designer, developer or operator (whether self-responsible or acting as principal), each in its specific function. 
  3. This Guidance does not limit or otherwise affect any law that grants data subjects more, wider or in whatsoever way better rights, protection, and/or remedies. This Guidance does not limit or otherwise affect any law that imposes obligations on controllers and processors where that law imposes higher, wider or more rigorous obligations regarding data privacy aspects.
  4. This Guidance does not apply to AI solutions that might be performed by individuals in the context of purely private, non-corporate or household activities.

 

(All submissions must have been received by 2 November 2020.)

 

You can find the original draft Guidelines at the link below:

https://www.ohchr.org/Documents/Issues/Privacy/SR_Privacy/2020_Sept_draft_data_Privacy_guidelines.pdf

Draft Ethics Guidelines For Trustworthy AI


The European Commission’s High-Level Expert Group

18 December 2018

 

EXECUTIVE SUMMARY

 

This working document constitutes a draft of the AI Ethics Guidelines produced by the European Commission’s High-Level Expert Group on Artificial Intelligence (AI HLEG), of which a final version is due in March 2019.

Artificial Intelligence (AI) is one of the most transformative forces of our time, and is bound to alter the fabric of society. It presents a great opportunity to increase prosperity and growth, which Europe must strive to achieve. Over the last decade, major advances were realised due to the availability of vast amounts of digital data, powerful computing architectures, and advances in AI techniques such as machine learning. Major AI-enabled developments in autonomous vehicles, healthcare, home/service robots, education or cybersecurity are improving the quality of our lives every day. Furthermore, AI is key for addressing many of the grand challenges facing the world, such as global health and wellbeing, climate change, reliable legal and democratic systems and others expressed in the United Nations Sustainable Development Goals.

Having the capability to generate tremendous benefits for individuals and society, AI also gives rise to certain risks that should be properly managed. Given that, on the whole, AI’s benefits outweigh its risks, we must ensure that we follow the road that maximises the benefits of AI while minimising its risks. To ensure that we stay on the right track, a human-centric approach to AI is needed, forcing us to keep in mind that the development and use of AI should not be seen as an end in itself, but as a means to increase human well-being. Trustworthy AI will be our north star, since human beings will only be able to confidently and fully reap the benefits of AI if they can trust the technology.

Trustworthy AI has two components: (1) it should respect fundamental rights, applicable regulation and core principles and values, ensuring an “ethical purpose” and (2) it should be technically robust and reliable since, even with good intentions, a lack of technological mastery can cause unintentional harm.

These Guidelines therefore set out a framework for Trustworthy AI:

–  Chapter I deals with ensuring AI’s ethical purpose, by setting out the fundamental rights, principles and values that it should comply with.

–  From those principles, Chapter II derives guidance on the realisation of Trustworthy AI, tackling both ethical purpose and technical robustness. This is done by listing the requirements for Trustworthy AI and offering an overview of technical and non-technical methods that can be used for its implementation.

–  Chapter III subsequently operationalises the requirements by providing a concrete but non-exhaustive assessment list for Trustworthy AI. This list is then adapted to specific use cases.

In contrast to other documents dealing with ethical AI, the Guidelines hence do not aim to provide yet another list of core values and principles for AI, but rather offer guidance on the concrete implementation and operationalisation thereof into AI systems. Such guidance is provided in three layers of abstraction, from most abstract in Chapter I (fundamental rights, principles and values), to most concrete in Chapter III (assessment list).

The Guidelines are addressed to all relevant stakeholders developing, deploying or using AI, encompassing companies, organisations, researchers, public services, institutions, individuals or other entities. In the final version of these Guidelines, a mechanism will be put forward to allow stakeholders to voluntarily endorse them.

Importantly, these Guidelines are not intended as a substitute for any form of policymaking or regulation (to be dealt with in the AI HLEG’s second deliverable: the Policy & Investment Recommendations, due in May 2019), nor do they aim to deter its introduction. Moreover, the Guidelines should be seen as a living document that needs to be regularly updated over time to ensure continuous relevance as the technology, and our knowledge thereof, evolve. This document should therefore be a starting point for the discussion on “Trustworthy AI made in Europe”.

While Europe can only project its ethical approach to AI globally when it is competitive at the global level, an ethical approach to AI is itself key to enabling responsible competitiveness, as it will generate user trust and facilitate broader uptake of AI. These Guidelines are not meant to stifle AI innovation in Europe, but instead aim to use ethics as inspiration to develop a unique brand of AI, one that aims at protecting and benefiting both individuals and the common good. This allows Europe to position itself as a leader in cutting-edge, secure and ethical AI. Only by ensuring trustworthiness will European citizens fully reap AI’s benefits.

Finally, beyond Europe, these Guidelines also aim to foster reflection and discussion on an ethical framework for AI at global level.

 

 

You can find the original draft at the link below:

https://ec.europa.eu/digital-single-market/en/news/draft-ethics-guidelines-trustworthy-ai