Feasibility Study on AI

Council of Europe

Ad Hoc Committee on Artificial Intelligence (CAHAI)

2020

Introduction

“As noted in various Council of Europe documents, including reports recently adopted by the Parliamentary Assembly (PACE), AI systems are substantially transforming individual lives and have a profound impact on the fabric of society and the functioning of its institutions. Their use has the capacity to generate substantive benefits in numerous domains, such as healthcare, transport, education and public administration, generating promising opportunities for humanity at large. At the same time, the development and use of AI systems also entails substantial risks, in particular in relation to interference with human rights, democracy and the rule of law, the core elements upon which our European societies are built.

AI systems should be seen as “socio-technical systems”, in the sense that the impact of an AI system – whatever its underlying technology – depends not only on the system’s design, but also on the way in which the system is developed and used within a broader environment, including the data used, its intended purpose, functionality and accuracy, the scale of deployment, and the broader organisational, societal and legal context in which it is used. The positive or negative consequences of AI systems depend also on the values and behaviour of the human beings that develop and deploy them, which leads to the importance of ensuring human responsibility. There are, however, some distinct characteristics of AI systems that set them apart from other technologies in relation to both their positive and negative impact on human rights, democracy and the rule of law.

First, the scale, connectedness and reach of AI systems can amplify certain risks that are also inherent in other technologies or human behaviour. AI systems can analyse an unprecedented amount of fine-grained data (including highly sensitive personal data) at a much faster pace than humans. This ability can lead AI systems to be used in a way that perpetuates or amplifies unjust bias, including on new discrimination grounds in the case of so-called “proxy discrimination”. The increased prominence of proxy discrimination in the context of machine learning may raise interpretive questions about the distinction between direct and indirect discrimination or, indeed, the adequacy of this distinction as it is traditionally understood. Moreover, AI systems are subject to statistical error rates. Even if the error rate of a system applied to millions of people is close to zero, thousands of people can still be adversely impacted due to the scale of deployment and the interconnectivity of the systems. On the other hand, the scale and reach of AI systems also imply that they can be used to mitigate certain risks and biases that are inherent in other technologies or human behaviour, and to monitor and reduce human error rates.

Second, the complexity or opacity of many AI systems (in particular in the case of machine learning applications) can make it difficult for humans, including system developers, to understand or trace the system’s functioning or outcome. This opacity, in combination with the involvement of many different actors at different stages during the system’s lifecycle, further complicates the identification of the agent(s) responsible for a potential negative outcome, hence reducing human responsibility and accountability.

Third, certain AI systems can re-calibrate themselves through feedback and reinforcement learning. However, if an AI system is re-trained on data resulting from its own decisions that contain unjust biases, errors, inaccuracies or other deficiencies, a vicious feedback loop may arise, which can lead to discriminatory, erroneous or malicious functioning of the system and which can be difficult to detect.”
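The scale point above is simple arithmetic: an error rate of 0.1% applied to 10 million automated decisions still yields roughly 10,000 wrongly affected people. The feedback-loop risk can likewise be made concrete with a small simulation. The sketch below is not taken from the study; it uses the widely discussed predictive-policing example, with invented numbers, to show how re-training a system on records generated by its own decisions can entrench an initially small, unjustified skew even though both districts have identical true incident rates.

```python
# Minimal feedback-loop sketch (all numbers are illustrative assumptions):
# two districts generate exactly the same number of incidents, but new
# records are only created where the model sends the patrol, and the
# model is "re-trained" on those records each day.
TRUE_RATE = [10, 10]   # both districts truly produce 10 incidents per day
records = [11, 9]      # slightly skewed historical records (assumed)

for day in range(1, 8):
    # The model predicts the hotspot from recorded incidents and sends
    # the single available patrol there.
    hotspot = 0 if records[0] >= records[1] else 1
    # Incidents occur in both districts, but only the patrolled one is
    # observed, so only its records grow -- and feed the next decision.
    records[hotspot] += TRUE_RATE[hotspot]
    print(f"day {day}: patrol -> district {hotspot}, records = {records}")
```

After a week the records read [81, 9] although the districts were identical; the loop never revisits district 1, so the distortion is also hard to detect from the data the system collects about itself.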

 

You can access the original study at the link below:

https://rm.coe.int/cahai-2020-23-final-eng-feasibility-study-/1680a0c6da

Towards Regulation of AI Systems

 


The CAHAI Secretariat
December 2020

 

Summary

 

Title 1. International Perspective

The Preliminary Chapter introduces the present report, submitted by the CAHAI to the Committee of Ministers, and details the progress achieved to date, taking into account the impact of COVID-19 pandemic measures. It also includes reflections on working methods, synergies and complementarity with other relevant stakeholders, and proposals for further action by the CAHAI by means of a robust and clear roadmap.

Chapter 1 outlines the impact of AI on human rights, democracy and the rule of law. It identifies those human rights, as set out in the European Convention on Human Rights (“ECHR”), its Protocols and the European Social Charter (“ESC”), that are currently most impacted, or most likely to be impacted, by AI.

Chapter 2 maps the relevant corpus of soft law documents and other ethical-legal frameworks developed by governmental and non-governmental organisations globally, with a twofold aim. First, we want to monitor this ever-evolving spectrum of non-mandatory governance instruments. Second, we want to prospectively assess the impact of AI on ethical principles, human rights, the rule of law and democracy.

Chapter 3 aims to contribute to the drafting of future AI regulation by building on the existing binding instruments, contextualising their principles and providing key regulatory guidelines for a future legal framework, with a view to preserving the harmonisation of the existing legal framework in the field of human rights, democracy and the rule of law.

 

You can find the original document at the link below:

https://rm.coe.int/cahai-ai-regulation-publication-en/1680a0b8a4

Assessment List for Trustworthy AI

 


The European Commission

High-Level Expert Group on Artificial Intelligence

August 2020

 

Fundamental Rights

Fundamental rights encompass rights such as human dignity and non-discrimination, as well as rights in relation to data protection and privacy, to name just some examples. Prior to self-assessing an AI system with this Assessment List, a fundamental rights impact assessment (FRIA) should be performed. 

A FRIA could include questions such as the following, drawing on specific articles in the Charter and the European Convention on Human Rights (ECHR), its Protocols and the European Social Charter.

1. Does the AI system potentially negatively discriminate against people on the basis of any of the following grounds (non-exhaustively): sex, race, colour, ethnic or social origin, genetic features, language, religion or belief, political or any other opinion, membership of a national minority, property, birth, disability, age or sexual orientation? 

Have you put in place processes to test and monitor for potential negative discrimination (bias) during the development, deployment and use phases of the AI system? (A minimal example of such a check is sketched after this list.)

Have you put in place processes to address and rectify potential negative discrimination (bias) in the AI system? 

2. Does the AI system respect the rights of the child, for example with respect to child protection and taking the child’s best interests into account? 

Have you put in place processes to address and rectify potential harm to children by the AI system? 

Have you put in place processes to test and monitor for potential harm to children during the development, deployment and use phases of the AI system? 

3. Does the AI system protect personal data relating to individuals in line with the GDPR?

Have you put in place processes to assess in detail the need for a data protection impact assessment, including an assessment of the necessity and proportionality of the processing operations in relation to their purpose, with respect to the development, deployment and use phases of the AI system?

Have you put in place measures envisaged to address the risks, including safeguards, security measures and mechanisms to ensure the protection of personal data with respect to the development, deployment and use phases of the AI system? 

4. Does the AI system respect the freedom of expression and information and/or freedom of assembly and association?

Have you put in place processes to test and monitor for potential infringement on freedom of expression and information, and/or freedom of assembly and association, during the development, deployment and use phases of the AI system? 

Have you put in place processes to address and rectify potential infringement on freedom of expression and information, and/or freedom of assembly and association, in the AI system?
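As one minimal illustration of what the testing process in question 1 could look like (it is not part of ALTAI), the sketch below compares positive-outcome rates across groups (demographic parity) and flags any group whose rate falls below a chosen fraction of the best-served group’s rate. The group labels, decisions and the 0.8 tolerance (borrowed from the informal “four-fifths rule”) are all assumptions; a real process would track several fairness metrics across the development, deployment and use phases.

```python
# A hypothetical demographic-parity check; not an official ALTAI tool.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, positive_outcome: bool) pairs."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, positive in decisions:
        totals[group] += 1
        positives[group] += positive
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_flags(decisions, min_ratio=0.8):
    """Flag any group whose selection rate is below min_ratio times the
    highest group's rate (min_ratio is an assumed tolerance)."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: rate / best < min_ratio for g, rate in rates.items()}, rates

# Made-up decisions: 80/100 positive for one group, 55/100 for the other.
sample = ([("group_a", True)] * 80 + [("group_a", False)] * 20
          + [("group_b", True)] * 55 + [("group_b", False)] * 45)
flags, rates = disparate_impact_flags(sample)
print(rates)   # {'group_a': 0.8, 'group_b': 0.55}
print(flags)   # group_b flagged: 0.55 / 0.80 ≈ 0.69 < 0.8
```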

 

You can find the original document at the link below:

https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=68342

War, Responsibility and Killer Robots

 


Rebecca Crootof

Yale University – Law School

February 24, 2015

 

 

 


 

Abstract

 

“Although many are concerned that autonomous weapon systems may make war “too easy,” no one has addressed how their use may alter the distribution of the constitutional war power. Drones, cyber operations, and other technological advances in weaponry already allow the United States to intervene militarily with minimal boots on the ground, and increased autonomy in weapon systems will further reduce risk to soldiers. As human troops are augmented and supplanted by robotic ones, it will be politically easier to justify using force, especially for short-term military engagements. Accordingly, one of the remaining incentives for Congress to check presidential warmongering — popular outrage at the loss of American lives — will diminish. The integration of autonomous weapon systems into U.S. military forces will therefore contribute to the growing concentration of the war power in the hands of the Executive, with implications for the international doctrine of humanitarian intervention.”

 

You can find the original article at the link below:

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2569298

 

Guidelines On Artificial Intelligence And Data Protection

 

Council of Europe, Consultative Committee of Convention 108

January 25, 2019

 

Artificial Intelligence[1] (“AI”) based systems, software and devices (hereinafter referred to as AI applications) are providing new and valuable solutions to tackle needs and address challenges in a variety of fields, such as smart homes, smart cities, the industrial sector, healthcare and crime prevention. AI applications may represent a useful tool for decision making, in particular for supporting evidence-based and inclusive policies. As may be the case with other technological innovations, these applications may have adverse consequences for individuals and society. In order to prevent this, the Parties to Convention 108 will ensure and enable that AI development and use respect the rights to privacy and data protection (article 8 of the European Convention on Human Rights), thereby enhancing human rights and fundamental freedoms.

These Guidelines provide a set of baseline measures that governments, AI developers, manufacturers, and service providers should follow to ensure that AI applications do not undermine the human dignity and the human rights and fundamental freedoms of every individual, in particular with regard to the right to data protection.[2]

Nothing in the present Guidelines shall be interpreted as precluding or limiting the provisions of the European Convention on Human Rights and of Convention 108. These Guidelines also take into account the new safeguards of the modernised Convention 108 (more commonly referred to as “Convention 108+”)[3].

     

        I. General guidance

  1. The protection of human dignity and safeguarding of human rights and fundamental freedoms, in particular the right to the protection of personal data, are essential when developing and adopting AI applications that may have consequences on individuals and society. This is especially important when AI applications are used in decision-making processes.
  2. AI development relying on the processing of personal data should be based on the principles of Convention 108+. The key elements of this approach are: lawfulness, fairness, purpose specification, proportionality of data processing, privacy-by-design and by default, responsibility and demonstration of compliance (accountability), transparency, data security and risk management.
  3. An approach focused on avoiding and mitigating the potential risks of processing personal data is a necessary element of responsible innovation in the field of AI.
  4. In line with the guidance on risk assessment provided in the Guidelines on Big Data adopted by the Committee of Convention 108 in 2017[4], a wider view of the possible outcomes of data processing should be adopted. This view should consider not only human rights and fundamental freedoms but also the functioning of democracies and social and ethical values.
  5. AI applications must at all times fully respect the rights of data subjects, in particular in light of article 9 of Convention 108+.
  6. AI applications should allow meaningful control by data subjects over the data processing and related effects on individuals and on society.

 

        II. Guidance for developers, manufacturers and service providers

  1. AI developers, manufacturers and service providers should adopt a values-oriented approach in the design of their products and services, consistent with Convention 108+, in particular with article 10.2, and other relevant instruments of the Council of Europe.
  2. AI developers, manufacturers and service providers should assess the possible adverse consequences of AI applications on human rights and fundamental freedoms, and, considering these consequences, adopt a precautionary approach based on appropriate risk prevention and mitigation measures.
  3. In all phases of the processing, including data collection, AI developers, manufacturers and service providers should adopt a human rights by-design approach and avoid any potential biases, including unintentional or hidden ones, as well as the risk of discrimination or other adverse impacts on the human rights and fundamental freedoms of data subjects.
  4. AI developers should critically assess the quality, nature, origin and amount of personal data used, reducing unnecessary, redundant or marginal data during the development and training phases, and then monitoring the model’s accuracy as it is fed with new data. The use of synthetic data[5] may be considered as one possible solution to minimise the amount of personal data processed by AI applications (a minimal illustration is sketched at the end of this section).
  5. The risk of adverse impacts on individuals and society due to de-contextualised data[6] and de-contextualised algorithmic models[7] should be adequately considered in developing and using AI applications.
  6. AI developers, manufacturers and service providers are encouraged to set up and consult independent committees of experts from a range of fields, as well as engage with independent academic institutions, which can contribute to designing human rights-based and ethically and socially-oriented AI applications, and to detecting potential bias. Such committees may play an especially important role in areas where transparency and stakeholder engagement can be more difficult due to competing interests and rights, such as in the fields of predictive justice, crime prevention and detection.
  7. Participatory forms of risk assessment, based on the active engagement of the individuals and groups potentially affected by AI applications, should be encouraged.
  8. All products and services should be designed in a manner that ensures the right of individuals not to be subject to a decision significantly affecting them based solely on automated processing, without having their views taken into consideration.
  9. In order to enhance users’ trust, AI developers, manufacturers and service providers are encouraged to design their products and services in a manner that safeguards users’ freedom of choice over the use of AI, by providing feasible alternatives to AI applications.
  10. AI developers, manufacturers, and service providers should adopt forms of algorithm vigilance that promote the accountability of all relevant stakeholders throughout the entire life cycle of these applications, to ensure compliance with data protection and human rights law and principles.
  11. Data subjects should be informed if they interact with an AI application and have a right to obtain information on the reasoning underlying AI data processing operations applied to them. This should include the consequences of such reasoning.
  12. The right to object should be ensured in relation to processing based on technologies that influence the opinions and personal development of individuals.
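Point II.4 above mentions synthetic data as one way to minimise the personal data an AI application processes. The sketch below is not from the Guidelines: it fits a deliberately simple per-column Gaussian model to a handful of made-up records and releases samples drawn from the model instead of the records themselves. Real synthetic-data generators model the joint structure between attributes and still require a disclosure-risk review before release.

```python
# A hypothetical, minimal synthetic-data generator (illustrative only).
import random
import statistics

def fit_gaussian_model(rows):
    """Estimate a (mean, standard deviation) pair for each column."""
    columns = list(zip(*rows))
    return [(statistics.mean(c), statistics.stdev(c)) for c in columns]

def sample_synthetic(model, n):
    """Draw n synthetic records from the fitted model. Sampling columns
    independently discards correlations between attributes; real tools
    model the joint distribution."""
    return [tuple(random.gauss(mu, sd) for mu, sd in model) for _ in range(n)]

# Made-up "real" records: (age, monthly income)
real_records = [(34, 2800), (41, 3100), (29, 2500), (52, 4000), (38, 3300)]
model = fit_gaussian_model(real_records)
print(sample_synthetic(model, 3))  # three fake records with similar statistics
```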

       

        III. Guidance for legislators and policy makers

  1. Respect for the principle of accountability, the adoption of risk assessment procedures and the application of other suitable measures, such as codes of conduct and certification mechanisms, can enhance trust in AI products and services.
  2. Without prejudice to confidentiality safeguarded by law, public procurement procedures should impose on AI developers, manufacturers, and service providers specific duties of transparency, prior assessment of the impact of data processing on human rights and fundamental freedoms, and vigilance on the potential adverse effects and consequences of AI applications (hereinafter referred to as algorithm vigilance[8]).
  3. Supervisory authorities should be provided with sufficient resources to support and monitor the algorithm vigilance programmes of AI developers, manufacturers, and service providers.
  4. Overreliance on the solutions provided by AI applications and fears of challenging decisions suggested by AI applications risk altering the autonomy of human intervention in decision-making processes. The role of human intervention in decision-making processes and the freedom of human decision makers not to rely on the result of the recommendations provided using AI should therefore be preserved.
  5. AI developers, manufacturers, and service providers should consult supervisory authorities when AI applications have the potential to significantly impact the human rights and fundamental freedoms of data subjects.
  6. Cooperation should be encouraged between data protection supervisory authorities and other bodies having competence related to AI, such as: consumer protection; competition; anti-discrimination; sector regulators and media regulatory authorities.
  7. Appropriate mechanisms should be put in place to ensure the independence of the committees of experts mentioned in Section II.6.
  8. Individuals, groups, and other stakeholders should be informed and actively involved in the debate on what role AI should play in shaping social dynamics, and in decision-making processes affecting them.
  9. Policy makers should invest resources in digital literacy and education to increase data subjects’ awareness and understanding of AI applications and their effects. They should also encourage professional training for AI developers to raise awareness and understanding of the potential effects of AI on individuals and society. They should support research in human rights-oriented AI.

 

[1] The following definition of AI is currently available on the Council of Europe’s website https://www.coe.int/en/web/human-rights-rule-of-law/artificial-intelligence/glossary: “A set of sciences, theories and techniques whose purpose is to reproduce by a machine the cognitive abilities of a human being. Current developments aim, for instance, to be able to entrust a machine with complex tasks previously delegated to a human.”

[2] These Guidelines follow and build on the Report on Artificial Intelligence (“Artificial Intelligence and Data Protection: Challenges and Possible Remedies”), available at: https://rm.coe.int/artificial-intelligence-and-data-protection-challenges-and-possible-re/168091f8a6

[3] Amending Protocol CETS No. 223 to the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data.

[4]https://rm.coe.int/t-pd-2017-1-bigdataguidelines-en/16806f06d0

[5] Synthetic data are generated from a data model built on real data. They should be representative of the original real data. See the definition of synthetic data in OECD, ‘Glossary of Statistical Terms’, 2007: http://ec.europa.eu/eurostat/ramon/coded_files/OECD_glossary_stat_terms.pdf (“An approach to confidentiality where instead of disseminating real data, synthetic data that have been generated from one or more population models are released”).

[6] This is the risk of ignoring contextual information characterising the specific situations in which the proposed AI-based solutions should be used.

[7] This happens when AI models, originally designed for a specific application, are used in a different context or for different purposes.

[8] On the notion of algorithm vigilance, understood as the adoption of accountability, awareness and risk-management practices addressing potential adverse effects and consequences throughout the entire life cycle of AI applications, see the 40th International Conference of Data Protection and Privacy Commissioners, Declaration on Ethics and Data Protection in Artificial Intelligence, guiding principle no. 2. See also the Report on Artificial Intelligence (footnote 2), Section II.4.

 

You can find the original guidelines at the link below:

https://rm.coe.int/guidelines-on-artificial-intelligence-and-data-protection/168091f9d8