Feasibility Study on AI

Council of Europe

Ad hoc Committee on Artificial Intelligence (CAHAI)

2020

Introduction

“As noted in various Council of Europe documents, including reports recently adopted by the Parliamentary Assembly (PACE), AI systems are substantially transforming individual lives and have a profound impact on the fabric of society and the functioning of its institutions. Their use has the capacity to generate substantive benefits in numerous domains, such as healthcare, transport, education and public administration, creating promising opportunities for humanity at large. At the same time, the development and use of AI systems also entails substantial risks, in particular in relation to interference with human rights, democracy and the rule of law, the core elements upon which our European societies are built.

AI systems should be seen as “socio-technical systems”, in the sense that the impact of an AI system – whatever its underlying technology – depends not only on the system’s design, but also on the way in which the system is developed and used within a broader environment, including the data used, its intended purpose, functionality and accuracy, the scale of deployment, and the broader organisational, societal and legal context in which it is used. The positive or negative consequences of AI systems also depend on the values and behaviour of the human beings who develop and deploy them, which underscores the importance of ensuring human responsibility. There are, however, some distinct characteristics of AI systems that set them apart from other technologies in relation to both their positive and negative impact on human rights, democracy and the rule of law.

First, the scale, connectedness and reach of AI systems can amplify certain risks that are also inherent in other technologies or human behaviour. AI systems can analyse an unprecedented amount of fine-grained data (including highly sensitive personal data) at a much faster pace than humans. This ability can lead AI systems to be used in ways that perpetuate or amplify unjust bias, including on new discrimination grounds in cases of so-called “proxy discrimination”, where a facially neutral feature (such as a postcode) functions as a stand-in for a protected characteristic. The increased prominence of proxy discrimination in the context of machine learning may raise interpretive questions about the distinction between direct and indirect discrimination or, indeed, the adequacy of this distinction as it is traditionally understood. Moreover, AI systems are subject to statistical error rates. Even if the error rate of a system applied to millions of people is close to zero, thousands of people can still be adversely impacted due to the scale of deployment and the interconnectivity of the systems. On the other hand, the scale and reach of AI systems also mean that they can be used to mitigate certain risks and biases inherent in other technologies or human behaviour, and to monitor and reduce human error rates.

Second, the complexity or opacity of many AI systems (in particular in the case of machine learning applications) can make it difficult for humans, including system developers, to understand or trace the system’s functioning or outcome. This opacity, in combination with the involvement of many different actors at different stages during the system’s lifecycle, further complicates the identification of the agent(s) responsible for a potential negative outcome, hence reducing human responsibility and accountability.

Third, certain AI systems can re-calibrate themselves through feedback and reinforcement learning. However, if an AI system is re-trained on data resulting from its own decisions, and those data contain unjust biases, errors, inaccuracies or other deficiencies, a vicious feedback loop may arise that can lead to discriminatory, erroneous or malicious functioning of the system and that can be difficult to detect.”
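The scale effect described in the first point above is easy to make concrete. The short Python sketch below uses purely hypothetical figures (an error rate of 0.1% and fifty million automated decisions per year, neither of which comes from the study); the point is only the multiplication itself, which shows how a “near-zero” relative error rate still translates into a large absolute number of affected people.

    # Hypothetical illustration: a "near-zero" error rate still affects many
    # people in absolute terms once a system is deployed at scale.
    error_rate = 0.001               # assumed error rate of 0.1%, "close to zero"
    decisions_per_year = 50_000_000  # assumed scale of deployment

    adversely_affected = error_rate * decisions_per_year
    print(f"People adversely affected per year: {adversely_affected:,.0f}")
    # -> People adversely affected per year: 50,000

Accuracy expressed as a percentage can therefore conceal substantial harm at population scale, which is why the study treats scale of deployment as a risk factor in its own right.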
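The “vicious feedback loop” in the third point can be sketched in a similar spirit. The simulation below is illustrative only and not drawn from the study: it assumes a scoring system whose scores for one group are depressed by a small initial bias, and a hypothetical re-training rule that lowers that group’s estimated quality whenever its observed approval rate falls, so that the bias compounds from round to round.

    import random

    random.seed(0)

    def simulate(initial_bias, rounds=5, population=10_000):
        """Toy model of a system repeatedly re-trained on its own decisions."""
        threshold = 0.5
        bias = initial_bias  # score penalty applied to group "B"
        for r in range(rounds):
            approved_b = total_b = 0
            for _ in range(population):
                group = random.choice("AB")
                true_quality = random.random()  # both groups identical in reality
                score = true_quality - (bias if group == "B" else 0.0)
                if group == "B":
                    total_b += 1
                    if score >= threshold:
                        approved_b += 1
            rate_b = approved_b / total_b
            # Hypothetical re-training rule: fewer observed approvals for
            # group B lowers its estimated quality, enlarging the bias.
            bias += 0.05 * (0.5 - rate_b)
            print(f"round {r}: group-B approval rate {rate_b:.1%}, bias {bias:.3f}")

    simulate(initial_bias=0.05)

Each round, the system’s own past decisions degrade the data it is re-trained on, and every individual update looks like a small, reasonable correction, which is precisely why the study notes that such loops can be difficult to detect.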


The original study is available at the link below:

https://rm.coe.int/cahai-2020-23-final-eng-feasibility-study-/1680a0c6da

About the author: She began studying law in 2012, following her Japanese-language education. She completed internships at Jürging-Örkün-Putzar Rechtsanwalte (Germany), Güler Law Office and Ünsal & Gündüz Attorneys at Law, and holds a Japanese-language certificate as well as certificates in mediation, arbitration and criminal law from law workshops. She subsequently began a master’s programme in Information and Technology Law at Istanbul Bilgi University. She works as a lawyer at Köksal & Partners law office, where she pursues, with great curiosity, work on robots, artificial intelligence, their legal status and the problems they raise, exploring the relevant necessities where no woman has gone before...

For further works of the author:

https://bilgi.academia.edu/Selin%C3%87etin

https://siberbulten.com/author/selin-cetin/
