robotic.legal
Selin Çetin

Assessment List for Trustworthy AI



European AI Alliance | FUTURIUM | European Commission

European Commission

High-Level Expert Group on AI

August 2020

 

Fundamental Rights

Fundamental rights encompass rights such as human dignity and non-discrimination, as well as rights in relation to data protection and privacy, to name just some examples. Prior to self-assessing an AI system with this Assessment List, a fundamental rights impact assessment (FRIA) should be performed. 

A FRIA could include questions such as the following, drawing on specific articles in the Charter, the European Convention on Human Rights (ECHR) and its protocols, and the European Social Charter.

1. Does the AI system potentially negatively discriminate against people on the basis of any of the following grounds (non-exhaustively): sex, race, colour, ethnic or social origin, genetic features, language, religion or belief, political or any other opinion, membership of a national minority, property, birth, disability, age or sexual orientation? 

Have you put in place processes to test and monitor for potential negative discrimination (bias) during the development, deployment and use phases of the AI system? 

Have you put in place processes to address and rectify potential negative discrimination (bias) in the AI system? 

2. Does the AI system respect the rights of the child, for example with respect to child protection and taking the child’s best interests into account? 

Have you put in place processes to address and rectify potential harm to children by the AI system? 

Have you put in place processes to test and monitor for potential harm to children during the development, deployment and use phases of the AI system? 

3. Does the AI system protect personal data relating to individuals in line with the General Data Protection Regulation (GDPR)?

Have you put in place processes to assess in detail the need for a data protection impact assessment, including an assessment of the necessity and proportionality of the processing operations in relation to their purpose, with respect to the development, deployment and use phases of the AI system?

Have you put in place measures envisaged to address the risks, including safeguards, security measures and mechanisms to ensure the protection of personal data with respect to the development, deployment and use phases of the AI system? 

4. Does the AI system respect the freedom of expression and information and/or freedom of assembly and association?

Have you put in place processes to test and monitor for potential infringement on freedom of expression and information, and/or freedom of assembly and association, during the development, deployment and use phases of the AI system? 

Have you put in place processes to address and rectify potential infringement on freedom of expression and information, and/or freedom of assembly and association, in the AI system?

 

You can find the original document at the link below:

https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=68342
