The European Code of Ethics on the Use of Artificial Intelligence in Jurisdictions 

 

 

Gizem YILMAZ

Marmara Avrupa Araştırmaları Dergisi

Vol. 28, No. 1

2020

 

 

 

Abstract

 

Artificial intelligence, which is embedded in many of the technologies we use today, serves the justice system by facilitating the work of judicial organs through data analysis, storage and the connections it establishes between systems. In the United States and China in particular, legal counselling services are already provided by lawyer robots, and artificial intelligence software capable of making decisions is being developed. The growing powers of artificial intelligence and its widespread use also raise questions about its reliability and prompt discussion of the risks it carries. Indeed, the question of who, or what, should bear liability for damages caused by artificial intelligence software is on its own enough to demand new doctrines in legal systems. In this new model of society, in which artificial intelligence is part of our lives, legal rules will be shaped within the framework of the ethical rules to be adopted. In this regard, the European Union, which has taken important steps to establish ethical principles connecting artificial intelligence and human rights and to govern the use of artificial intelligence in judicial systems, first adopted a Declaration of Cooperation to set out the rules that artificial intelligence technologies must follow while serving the judiciary, and then published the Ethical Charter, guided by an understanding of “human-centric” artificial intelligence. These two foundational texts are of particular importance, as they will shed light on future work on artificial intelligence in the European Union.

 

 

You can reach the original article via the link below:

https://avrupa.marmara.edu.tr/dosya/avrupa/mjes%20arsiv/vol%2028_1/2_Gizem_Yilmaz.pdf

 

Policy and Investment Recommendations for Trustworthy AI

 

 

The High-Level Expert Group on Artificial Intelligence (AI HLEG)

26 June 2019

 

Introduction

 

In its various communications on artificial intelligence (AI), the European Commission has set out its vision for AI, which is to be trustworthy and human-centric. Three pillars underpin the Commission’s vision: (i) increasing public and private investments in AI to boost its uptake, (ii) preparing for socio-economic changes, and (iii) ensuring an appropriate ethical and legal framework to protect and strengthen European values. To support the implementation of this vision, the Commission established the High-Level Expert Group on Artificial Intelligence (AI HLEG), an independent group mandated with the drafting of two deliverables: a set of AI Ethics Guidelines and a set of Policy and Investment Recommendations.

In our first deliverable, the Ethics Guidelines for Trustworthy AI published on 8 April 2019 (Ethics Guidelines), we stated that AI systems need to be human-centric, with the goal of improving individual and societal well-being, and worthy of our trust. In order to be deemed trustworthy, we put forward that AI systems – including all actors and processes involved therein – should be lawful, ethical and robust. Those Guidelines therefore constituted a first important step in identifying the type of AI that we want and do not want for Europe, but identification alone is not enough to ensure that Europe can also materialise the beneficial impact that Trustworthy AI can bring.

Taking the next step, this document contains our proposed Policy and Investment Recommendations for Trustworthy AI, addressed to EU institutions and Member States. Building on our first deliverable, we put forward 33 recommendations that can guide Trustworthy AI towards sustainability, growth and competitiveness, as well as inclusion – while empowering, benefiting and protecting human beings. We believe that EU institutions and Member States will play a key role in the achievement of these goals, as pivotal players in the data economy, procurers of Trustworthy AI systems and standard-setters of sound governance.

Our recommendations focus on four main areas where we believe Trustworthy AI can help achieve a beneficial impact, starting with humans and society at large (A), and then focusing on the private sector (B), the public sector (C) and Europe’s research and academia (D). In addition, we also address the main enablers needed to facilitate those impacts, focusing on availability of data and infrastructure (E), skills and education (F), appropriate governance and regulation (G), as well as funding and investment (H).

These recommendations should not be regarded as exhaustive; rather, they tackle the areas where action is most pressing and the potential impact greatest. Europe can distinguish itself from others by developing, deploying, using and scaling Trustworthy AI, which we believe should become the only kind of AI in Europe, in a manner that can enhance both individual and societal well-being.

 

You can find the original document via the link below:

https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=60343

Ethics Guidelines for Trustworthy AI

 

 

European Commission High-Level Expert Group

8 April 2019

 

EXECUTIVE SUMMARY

The aim of the Guidelines is to promote Trustworthy AI. Trustworthy AI has three components, which should be met throughout the system’s entire life cycle: (1) it should be lawful, complying with all applicable laws and regulations; (2) it should be ethical, ensuring adherence to ethical principles and values; and (3) it should be robust, both from a technical and social perspective since, even with good intentions, AI systems can cause unintentional harm. Each component in itself is necessary but not sufficient for the achievement of Trustworthy AI. Ideally, all three components work in harmony and overlap in their operation. If, in practice, tensions arise between these components, society should endeavour to align them.

These Guidelines set out a framework for achieving Trustworthy AI. The framework does not explicitly deal with Trustworthy AI’s first component (lawful AI). Instead, it aims to offer guidance on the second and third components: fostering and securing ethical and robust AI. Addressed to all stakeholders, these Guidelines seek to go beyond a list of ethical principles, by providing guidance on how such principles can be operationalised in sociotechnical systems. Guidance is provided in three layers of abstraction, from the most abstract in Chapter I to the most concrete in Chapter III, closing with examples of opportunities and critical concerns raised by AI systems.

I. Based on an approach founded on fundamental rights, Chapter I identifies the ethical principles and their correlated values that must be respected in the development, deployment and use of AI systems.

Key guidance derived from Chapter I:

  • Develop, deploy and use AI systems in a way that adheres to the ethical principles of: respect for human autonomy, prevention of harm, fairness and explicability. Acknowledge and address the potential tensions between these principles.
  • Pay particular attention to situations involving more vulnerable groups such as children, persons with disabilities and others that have historically been disadvantaged or are at risk of exclusion, and to situations which are characterised by asymmetries of power or information, such as between employers and workers, or between businesses and consumers.
  • Acknowledge that, while bringing substantial benefits to individuals and society, AI systems also pose certain risks and may have a negative impact, including impacts which may be difficult to anticipate, identify or measure (e.g. on democracy, the rule of law and distributive justice, or on the human mind itself). Adopt adequate measures to mitigate these risks when appropriate, and proportionately to the magnitude of the risk.

II. Drawing upon Chapter I, Chapter II provides guidance on how Trustworthy AI can be realised, by listing seven requirements that AI systems should meet. Both technical and non-technical methods can be used for their implementation.

Key guidance derived from Chapter II:

  • Ensure that the development, deployment and use of AI systems meets the seven key requirements for Trustworthy AI: (1) human agency and oversight, (2) technical robustness and safety, (3) privacy and data governance, (4) transparency, (5) diversity, non-discrimination and fairness, (6) environmental and societal well-being and (7) accountability.
  • Consider technical and non-technical methods to ensure the implementation of those requirements.
  • Foster research and innovation to help assess AI systems and to further the achievement of the requirements; disseminate results and open questions to the wider public, and systematically train a new generation of experts in AI ethics.
  • Communicate, in a clear and proactive manner, information to stakeholders about the AI system’s capabilities and limitations, enabling realistic expectation setting, and about the manner in which the requirements are implemented. Be transparent about the fact that they are dealing with an AI system.
  • Facilitate the traceability and auditability of AI systems, particularly in critical contexts or situations.
  • Involve stakeholders throughout the AI system’s life cycle. Foster training and education so that all stakeholders are aware of and trained in Trustworthy AI.
  • Be mindful that there might be fundamental tensions between different principles and requirements. Continuously identify, evaluate, document and communicate these trade-offs and their solutions.

III. Chapter III provides a concrete and non-exhaustive Trustworthy AI assessment list aimed at operationalising the key requirements set out in Chapter II. This assessment list will need to be tailored to the specific use case of the AI system.

Key guidance derived from Chapter III:

  • Adopt a Trustworthy AI assessment list when developing, deploying or using AI systems, and adapt it to the specific use case in which the system is being applied (a minimal illustrative sketch follows this list).
  • Keep in mind that such an assessment list will never be exhaustive. Ensuring Trustworthy AI is not about ticking boxes, but about continuously identifying and implementing requirements, evaluating solutions, ensuring improved outcomes throughout the AI system’s lifecycle, and involving stakeholders in this.
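
By way of illustration only, the following Python sketch shows one way the idea of a use-case-specific assessment list could be encoded in software: each item traces back to one of the seven key requirements from Chapter II, and open items are revisited throughout the life cycle rather than ticked once. The structure, names and example questions are assumptions made for this sketch; the Guidelines prescribe no particular format.

    # Illustrative sketch only: the Guidelines do not prescribe any format.
    from dataclasses import dataclass, field
    from typing import List

    # The seven key requirements for Trustworthy AI listed in Chapter II.
    REQUIREMENTS = (
        "human agency and oversight",
        "technical robustness and safety",
        "privacy and data governance",
        "transparency",
        "diversity, non-discrimination and fairness",
        "environmental and societal well-being",
        "accountability",
    )

    @dataclass
    class AssessmentItem:
        requirement: str       # one of REQUIREMENTS
        question: str          # a concrete check, tailored to the use case
        answered: bool = False
        evidence: str = ""     # documentation supporting the answer

    @dataclass
    class AssessmentList:
        use_case: str
        items: List[AssessmentItem] = field(default_factory=list)

        def add(self, requirement: str, question: str) -> None:
            # Every item must trace back to one of the seven requirements.
            if requirement not in REQUIREMENTS:
                raise ValueError(f"unknown requirement: {requirement!r}")
            self.items.append(AssessmentItem(requirement, question))

        def open_items(self) -> List[AssessmentItem]:
            # The list is never exhaustive: open items are revisited across
            # the system's life cycle rather than ticked once and forgotten.
            return [item for item in self.items if not item.answered]

    # Hypothetical usage for a recruitment-screening system:
    checklist = AssessmentList(use_case="recruitment screening")
    checklist.add("transparency",
                  "Can rejected applicants obtain an explanation of the decision?")
    checklist.add("diversity, non-discrimination and fairness",
                  "Has the training data been audited for historical bias?")
    print(len(checklist.open_items()))  # -> 2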

A final section of the document aims to concretise some of the issues touched upon throughout the framework, by offering examples of beneficial opportunities that should be pursued, and critical concerns raised by AI systems that should be carefully considered.

While these Guidelines aim to offer guidance for AI applications in general by building a horizontal foundation to achieve Trustworthy AI, different situations raise different challenges. It should therefore be explored whether, in addition to this horizontal framework, a sectorial approach is needed, given the context-specificity of AI systems.

These Guidelines do not intend to substitute any form of current or future policymaking or regulation, nor do they aim to deter the introduction thereof. They should be seen as a living document to be reviewed and updated over time to ensure their continuous relevance as the technology, our social environments, and our knowledge evolve. This document is a starting point for the discussion about “Trustworthy AI for Europe”.

Beyond Europe, the Guidelines also aim to foster research, reflection and discussion on an ethical framework for AI systems at a global level.

 

You can find the original Guidelines via the link below:

https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai

Draft Ethics Guidelines for Trustworthy AI

The European Commission’s High-Level Expert Group

18 December 2018

 

EXECUTIVE SUMMARY

 

This working document constitutes a draft of the AI Ethics Guidelines produced by the European Commission’s High-Level Expert Group on Artificial Intelligence (AI HLEG), of which a final version is due in March 2019.

Artificial Intelligence (AI) is one of the most transformative forces of our time, and is bound to alter the fabric of society. It presents a great opportunity to increase prosperity and growth, which Europe must strive to achieve. Over the last decade, major advances were realised due to the availability of vast amounts of digital data, powerful computing architectures, and advances in AI techniques such as machine learning. Major AI-enabled developments in autonomous vehicles, healthcare, home/service robots, education or cybersecurity are improving the quality of our lives every day. Furthermore, AI is key for addressing many of the grand challenges facing the world, such as global health and wellbeing, climate change, reliable legal and democratic systems and others expressed in the United Nations Sustainable Development Goals.

Having the capability to generate tremendous benefits for individuals and society, AI also gives rise to certain risks that should be properly managed. Given that, on the whole, AI’s benefits outweigh its risks, we must ensure that we follow the road that maximises the benefits of AI while minimising its risks. To ensure that we stay on the right track, a human-centric approach to AI is needed, forcing us to keep in mind that the development and use of AI should not be seen as an end in itself, but as having the goal of increasing human well-being. Trustworthy AI will be our north star, since human beings will only be able to confidently and fully reap the benefits of AI if they can trust the technology.

Trustworthy AI has two components: (1) it should respect fundamental rights, applicable regulation and core principles and values, ensuring an “ethical purpose” and (2) it should be technically robust and reliable since, even with good intentions, a lack of technological mastery can cause unintentional harm.

These Guidelines therefore set out a framework for Trustworthy AI:

–  Chapter I deals with ensuring AI’s ethical purpose, by setting out the fundamental rights, principles and values that it should comply with.

–  From those principles, Chapter II derives guidance on the realisation of Trustworthy AI, tackling both ethical purpose and technical robustness. This is done by listing the requirements for Trustworthy AI and offering an overview of technical and non-technical methods that can be used for its implementation.

–  Chapter III subsequently operationalises the requirements by providing a concrete but non-exhaustive assessment list for Trustworthy AI. This list is then adapted to specific use cases.

In contrast to other documents dealing with ethical AI, the Guidelines hence do not aim to provide yet another list of core values and principles for AI, but rather offer guidance on the concrete implementation and operationalisation thereof into AI systems. Such guidance is provided in three layers of abstraction, from most abstract in Chapter I (fundamental rights, principles and values), to most concrete in Chapter III (assessment list).

The Guidelines are addressed to all relevant stakeholders developing, deploying or using AI, encompassing companies, organisations, researchers, public services, institutions, individuals or other entities. In the final version of these Guidelines, a mechanism will be put forward to allow stakeholders to voluntarily endorse them.

Importantly, these Guidelines are not intended as a substitute for any form of policymaking or regulation (to be dealt with in the AI HLEG’s second deliverable: the Policy & Investment Recommendations, due in May 2019), nor do they aim to deter the introduction thereof. Moreover, the Guidelines should be seen as a living document that needs to be regularly updated over time to ensure continuous relevance as the technology, and our knowledge thereof, evolve. This document should therefore be a starting point for the discussion on “Trustworthy AI made in Europe”.

While Europe can only broadcast its ethical approach to AI when it is competitive at the global level, an ethical approach to AI is key to enabling responsible competitiveness, as it will generate user trust and facilitate broader uptake of AI. These Guidelines are not meant to stifle AI innovation in Europe, but instead aim to use ethics as inspiration to develop a unique brand of AI, one that aims at protecting and benefiting both individuals and the common good. This allows Europe to position itself as a leader in cutting-edge, secure and ethical AI. Only by ensuring trustworthiness will European citizens fully reap AI’s benefits.

Finally, beyond Europe, these Guidelines also aim to foster reflection and discussion on an ethical framework for AI at global level.

 

 

You can find the original draft via the link below:

https://ec.europa.eu/digital-single-market/en/news/draft-ethics-guidelines-trustworthy-ai