New Rules For Artificial Intelligence – Questions and Answers

 

European Commission

2021

 

  1. Why do we need to regulate the use of Artificial Intelligence technology?

The potential benefits of AI for our societies are manifold, from improved medical care to better education. Faced with the rapid technological development of AI, the EU must act as one to harness these opportunities. While most AI systems will pose low to no risk, certain AI systems create risks that need to be addressed to avoid undesirable outcomes. For example, the opacity of many algorithms may create uncertainty and hamper the effective enforcement of existing legislation on safety and fundamental rights. To respond to these challenges, legislative action is needed to ensure a well-functioning internal market for AI systems in which both benefits and risks are adequately addressed. This includes applications such as biometric identification systems and AI decisions touching on important personal interests, for example in the areas of recruitment, education, healthcare and law enforcement. The Commission's proposal for a regulatory framework on AI aims to ensure the protection of fundamental rights and user safety, as well as trust in the development and uptake of AI.

 

  2. Which risks will the new AI rules address?

The uptake of AI systems has strong potential to deliver societal benefits and economic growth and to enhance EU innovation and global competitiveness. In certain cases, however, the specific characteristics of certain AI systems may create new risks to user safety and fundamental rights. This leads to legal uncertainty for companies and, owing to a lack of trust, potentially slower uptake of AI technologies by businesses and citizens. Disparate regulatory responses by national authorities would risk fragmenting the internal market.

 

  3. What are the risk categories?

The Commission proposes a risk-based approach, with four levels of risk:

Unacceptable risk: A very limited set of particularly harmful uses of AI that contravene EU values because they violate fundamental rights (e.g. social scoring by governments, exploitation of vulnerabilities of children, use of subliminal techniques, and – subject to narrow exceptions – live remote biometric identification systems in publicly accessible spaces used for law enforcement purposes) will be banned.

High-risk: A limited number of AI systems defined in the proposal, creating an adverse impact on people's safety or their fundamental rights (as protected by the EU Charter of Fundamental Rights), are considered high-risk. Annexed to the proposal is the list of high-risk AI systems, which can be reviewed to align with the evolution of AI use cases (future-proofing). These also include safety components of products covered by sectoral Union legislation; such components will always be considered high-risk when subject to third-party conformity assessment under that sectoral legislation. In order to ensure trust and a consistently high level of protection of safety and fundamental rights, mandatory requirements for all high-risk AI systems are proposed. Those requirements cover the quality of the data sets used; technical documentation and record keeping; transparency and the provision of information to users; human oversight; and robustness, accuracy and cybersecurity. In case of a breach, the requirements will give national authorities access to the information needed to investigate whether the use of the AI system complied with the law. The proposed framework is consistent with the Charter of Fundamental Rights of the European Union and in line with the EU's international trade commitments.

Limited risk: For certain AI systems, specific transparency requirements are imposed, for example where there is a clear risk of manipulation (e.g. via the use of chatbots). Users should be aware that they are interacting with a machine.

Minimal risk: All other AI systems can be developed and used subject to the existing legislation without additional legal obligations. The vast majority of AI systems currently used in the EU fall into this category. Voluntarily, providers of those systems may choose to apply the requirements for trustworthy AI and adhere to voluntary codes of conduct.

 

  4. What are the obligations for providers of high-risk AI systems?

Before placing a high-risk AI system on the EU market or otherwise putting it into service, providers must subject it to a conformity assessment. This allows them to demonstrate that their system complies with the mandatory requirements for trustworthy AI (e.g. data quality, documentation and traceability, transparency, human oversight, accuracy and robustness). If the system itself or its purpose is substantially modified, the assessment must be repeated. For certain AI systems, an independent notified body will also have to be involved in this process. AI systems that are safety components of products covered by sectoral Union legislation will always be deemed high-risk when subject to third-party conformity assessment under that sectoral legislation. A third-party conformity assessment is also always required for biometric identification systems.

Providers of high-risk AI systems will also have to implement quality and risk management systems to ensure compliance with the new requirements and to minimise risks for users and affected persons, even after a product is placed on the market. Market surveillance authorities will support post-market monitoring through audits and by offering providers the possibility to report serious incidents or breaches of fundamental rights obligations of which they have become aware.

 

  5. How will compliance be enforced?

Member States play a key role in the application and enforcement of this Regulation. Each Member State should designate one or more national competent authorities to supervise its application and implementation, as well as to carry out market surveillance activities. To increase efficiency and to establish an official point of contact with the public and other counterparts, each Member State should designate one national supervisory authority, which will also represent the country on the European Artificial Intelligence Board.

 

  6. What is the European Artificial Intelligence Board?

The European Artificial Intelligence Board would comprise high-level representatives of competent national supervisory authorities, the European Data Protection Supervisor, and the Commission. Its role will be to facilitate a smooth, effective and harmonised implementation of the new AI Regulation. The Board will issue recommendations and opinions to the Commission regarding high-risk AI systems and other aspects relevant for the effective and uniform implementation of the new rules. It will also help build up expertise and act as a competence centre that national authorities can consult. Finally, it will support standardisation activities in the area.

 

  7. Will imports of AI systems and applications need to comply with the framework?

Yes. Importers of AI systems will have to ensure that the foreign provider has already carried out the appropriate conformity assessment procedure and holds the technical documentation required by the Regulation. Importers should also ensure that their system bears the European Conformity (CE) marking and is accompanied by the required documentation and instructions for use.

 

  8. How is the Machinery Regulation related to AI?

The Machinery Regulation ensures that the new generation of machinery products guarantees the safety of users and consumers and encourages innovation. Machinery products cover an extensive range of consumer and professional products, from robots (cleaning robots, personal care robots, collaborative robots, industrial robots) to lawnmowers, 3D printers, construction machines and industrial production lines.

 

  9. How does it fit with the regulatory framework on AI?

The two frameworks are complementary. The AI Regulation will address the safety risks of AI systems that ensure safety functions in machinery, while the Machinery Regulation will ensure, where applicable, the safe integration of the AI system into the overall machinery, so as not to compromise the safety of the machinery as a whole.

 

You can find all of the questions and the original document at the link below:

https://ec.europa.eu/commission/presscorner/detail/en/QANDA_21_1683#3

Artificial Intelligence and Gender Equality

 

United Nations Educational, Scientific and Cultural Organization (UNESCO)


2020

 

Introduction

Simply put, artificial intelligence (AI) involves using computers to classify, analyze, and draw predictions from data sets, using a set of rules called algorithms. AI algorithms are trained using large datasets so that they can identify patterns, make predictions, recommend actions, and figure out what to do in unfamiliar situations, learning from new data and thus improving over time. The ability of an AI system to improve automatically through experience is known as Machine Learning (ML).
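The training idea described above can be sketched in miniature. The following toy example is not from the UNESCO report; the data and the function name are invented purely for illustration. It shows the core of machine learning: instead of hand-coding a rule, the program derives one from labelled examples.

```python
# A toy "machine learning" example: learn a decision rule from labelled data
# instead of programming it by hand. A one-feature threshold classifier
# picks the cut-off that best separates two classes of 1-D examples.

def fit_threshold(samples):
    """Return the threshold with the fewest misclassifications.

    samples: list of (value, label) pairs, where label is 0 or 1 and
    the model predicts 1 whenever value >= threshold.
    """
    best_t, best_errors = None, len(samples) + 1
    for t in sorted(v for v, _ in samples):
        # Count how many examples this candidate threshold gets wrong.
        errors = sum(1 for v, y in samples if (v >= t) != (y == 1))
        if errors < best_errors:
            best_t, best_errors = t, errors
    return best_t

# Invented training data: exam scores labelled pass (1) / fail (0).
data = [(35, 0), (45, 0), (55, 1), (70, 1), (80, 1)]
threshold = fit_threshold(data)
print(threshold)  # 55 -- the learned decision boundary
```

Feeding the function additional labelled data would shift the learned boundary, which is the sense in which such systems "improve over time" as more examples arrive; real systems use far richer models than a single threshold, but the principle is the same.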

While AI thus mimics the human brain, it is currently only as good as, or better than, the human brain at a relatively narrow range of tasks. Even so, we interact with AI on a daily basis in our professional and personal lives, in areas such as job recruitment, approval for a bank loan, medical diagnosis, and much more.

The patterns, predictions and recommended actions that AI generates reflect the accuracy, universality and reliability of the data sets used, as well as the inherent assumptions and biases of the developers of the algorithms employed. AI is set to play an even more important role in every aspect of our daily lives in the future. It is therefore important to look more closely at how AI affects, and will affect, gender equality, in particular for women, who represent over half of the world's population.

Research, including UNESCO's 2019 report I'd Blush if I Could: closing gender divides in digital skills through education, unambiguously shows that gender biases are found in Artificial Intelligence (AI) data sets in general and training data sets in particular. Algorithms and devices have the potential to spread and reinforce harmful gender stereotypes. These gender biases risk further stigmatizing and marginalizing women on a global scale. Considering the increasing ubiquity of AI in our societies, such biases put women at risk of being left behind in all realms of economic, political and social life. They may even offset some of the considerable offline progress that countries have made towards gender equality in the recent past.

AI also risks having a negative impact on women's economic empowerment and labour market opportunities by driving job automation. Recent research by the IMF and the Institute for Women's Policy Research found that women are at significantly higher risk of displacement through job automation than men. Indeed, the majority of workers holding jobs at high risk of automation, such as clerical, administrative, bookkeeping and cashier positions, are women. It is therefore crucial that women are not left behind by the retraining and reskilling strategies designed to mitigate the impact of automation on job losses.

On the other hand, while AI poses significant threats to gender equality, it is important to recognize that AI also has the potential to make positive changes in our societies by challenging oppressive gender norms. For example, while one AI-powered recruitment tool was found to discriminate against women, AI-powered gender decoders help employers use gender-sensitive language to write job postings that are more inclusive, in order to increase the diversity of their workforce. AI therefore has the potential to be part of the solution for advancing gender equality in our societies.

Building on the momentum of the report I’d Blush if I Could and subsequent conversations held in hundreds of media outlets and influential conferences, UNESCO invited leaders in AI, digital technology and gender equality from the private sector, academia and civil society organizations to join the conversation on how to overcome gender biases in AI and beyond, understanding that real progress on gender equality lies in ensuring that women are equitably represented where corporate, industry and policy decisions are made. UNESCO, as a laboratory of ideas and standard setter, has a significant role to play in helping to foster and shape the international debate on gender equality and AI.

The purpose of UNESCO's Dialogue on Gender Equality and AI was to identify issues, challenges and good practices to help:

Overcome the built-in gender biases found in AI devices, data sets and algorithms;

Improve the global representation of women in technical roles and in boardrooms in the technology sector; and

Create robust and gender-inclusive AI principles, guidelines and codes of ethics within the industry.

This Summary Report sets forth proposed elements of a Framework on Gender Equality and AI for further consideration, discussion and elaboration amongst various stakeholders. It reflects experts’ inputs to the UNESCO Dialogue on Gender Equality and AI, as well as additional research and analysis. This is not a comprehensive exploration of the complexities of the AI ecosystem in all its manifestations and all its intersections with gender equality. Rather, this is a starting point for conversation and action and has a particular focus on the private sector.

It argues for the need to:

  1. Establish a whole society view and mapping of the broader goals we seek to achieve in terms of gender equality;
  2. Generate an understanding of AI Ethics Principles and how to position gender equality within them;
  3. Reflect on possible approaches for operationalizing AI and Gender Equality Principles; and
  4. Identify and develop a funded multi-stakeholder action plan and coalition as a critical next step.

 

You can find the original report at the link below:

https://en.unesco.org/system/files/artificial_intelligence_and_gender_equality.pdf


Artificial Intelligence Assets and Criminal Law

 

Artificial Intelligence Assets and Criminal Law

 


 

Hakan Aksoy

PhD Student

Istanbul University

2021

 

 

 

 

Abstract

 

"With the development of technology, artificial intelligence assets have started to be used in almost every area of our daily lives. It is anticipated that these assets, which now serve people and make their jobs easier, will attain human status and practise some professions in the future. These developments also raise legal and criminal questions. What is the legal status of artificial intelligence assets, who bears criminal responsibility for crimes that arise from their use, and what is their role in criminal proceedings are among the questions to be answered. The purpose of this study is to assess these questions against existing legal regulations. The methodology of the study is a literature review. The study concludes that artificial intelligence assets have the status of 'property', cannot be held responsible for crimes arising from their use, and cannot replace the subjects (judges, prosecutors, lawyers) of the proceedings, although they make important contributions to criminal proceedings. In this context, existing legal regulations are sufficient to solve the problems that will arise. However, if artificial intelligence assets acquire 'human' status as fully autonomous and conscious entities, radical changes will be required in our legal system."

 

You can find the original paper at the link below:

https://dergipark.org.tr/en/download/article-file/1293489

 

Is Turkish Copyright Law Ready to Protect Works Generated by Artificial Intelligence?


 


 

 

 

Dr. Hasan Kadir YILMAZTEKİN

Türkiye Adalet Akademisi

GSÜHFD, 2020; 2: 1513-1586

 

 

 

 

 

 

Portrait generated via AI by Mario Klingemann

 

 

Abstract

 

“Artificial intelligence (“AI”) nowadays immensely and importantly infiltrates our lives. From Apple’s Siri to Tesla’s auto-driving car and Amazon’s Alexa, we live in a world of AI goods.

The advent of AI-powered technologies increasingly affects people's lives across the globe. With its technological advances, AI also shapes our economy and welfare.

Current AI technologies can produce many works that might be the subject of copyright law. An AI device's ability to generate works raises the question of who will own the intellectual property rights over those works. Will it be the person who hires or contracts with the AI device programmer? Will it be the programmer? Will it be the AI device itself? Or will it be a joint work?

In this article, we seek answers to these questions under European Union (EU), United States (US) and United Kingdom (UK) law, and more comprehensively under Turkish law. In short, this study makes policy proposals for Turkish copyright law. In particular, it offers a solution based on a three-step test to single out the human author(s) from the relevant actors around the AI device, and it models legal norms for AI-generated works."

 

 

You can find the original article at the link below:

https://dosya.gsu.edu.tr/docs/hukukfakultesi/tr/fakultedergisi/GSUHFD-2020-2.pdf


 

The European Code of Ethics on the Use of Artificial Intelligence in Jurisdictions 

 


 


 

Gizem YILMAZ

Marmara Avrupa Araştırmaları Dergisi

Vol 28, No 1

2020

 

 

 

Abstract

 

Artificial intelligence, which is embedded in many technologies we use today, serves the justice system by facilitating the work of the judicial organs through analysis, storage and the connections between systems. In the United States and China in particular, legal counselling services are already provided by lawyer robots, and artificial intelligence software with decision-making capabilities is being developed. The increasing powers of artificial intelligence and its widespread use also lead to questions about its reliability and the risks it carries. Indeed, even the question of who, or what, will be liable for damages caused by artificial intelligence software requires new approaches within legal systems. In this new model of society, in which artificial intelligence is part of our lives, legal rules will be regulated within the framework of the ethical rules to be adopted. In this sense, the European Union, which has taken an important step towards establishing ethical principles linking artificial intelligence and human rights and towards using artificial intelligence in judicial systems, first created a Declaration of Cooperation to determine the rules that artificial intelligence technologies must follow while serving the judiciary, and then published the Ethical Charter, built on an understanding of "human-oriented ethics". These two basic texts are of particular importance, as they will shed light on future artificial intelligence studies in the European Union.

 

 

You can find the original article at the link below:

https://avrupa.marmara.edu.tr/dosya/avrupa/mjes%20arsiv/vol%2028_1/2_Gizem_Yilmaz.pdf

 

Feasibility Study on AI


 

Council of Europe

Ad Hoc Committee on AI

2020

Introduction

“As noted in various Council of Europe documents, including reports recently adopted by the Parliamentary Assembly (PACE), AI systems are substantially transforming individual lives and have a profound impact on the fabric of society and the functioning of its institutions. Their use has the capacity to generate substantive benefits in numerous domains, such as healthcare, transport, education and public administration, generating promising opportunities for humanity at large. At the same time, the development and use of AI systems also entails substantial risks, in particular in relation to interference with human rights, democracy and the rule of law, the core elements upon which our European societies are built.

AI systems should be seen as “socio-technical systems”, in the sense that the impact of an AI system – whatever its underlying technology – depends not only on the system’s design, but also on the way in which the system is developed and used within a broader environment, including the data used, its intended purpose, functionality and accuracy, the scale of deployment, and the broader organisational, societal and legal context in which it is used. The positive or negative consequences of AI systems depend also on the values and behaviour of the human beings that develop and deploy them, which leads to the importance of ensuring human responsibility. There are, however, some distinct characteristics of AI systems that set them apart from other technologies in relation to both their positive and negative impact on human rights, democracy and the rule of law.

First, the scale, connectedness and reach of AI systems can amplify certain risks that are also inherent in other technologies or human behaviour. AI systems can analyse an unprecedented amount of fine-grained data (including highly sensitive personal data) at a much faster pace than humans. This ability can lead AI systems to be used in a way that perpetuates or amplifies unjust bias, including on new discrimination grounds in cases of so-called "proxy discrimination". The increased prominence of proxy discrimination in the context of machine learning may raise interpretive questions about the distinction between direct and indirect discrimination or, indeed, the adequacy of this distinction as it is traditionally understood. Moreover, AI systems are subject to statistical error rates. Even if the error rate of a system applied to millions of people is close to zero, thousands of people can still be adversely impacted due to the scale of deployment and the interconnectivity of the systems. On the other hand, the scale and reach of AI systems also imply that they can be used to mitigate certain risks and biases that are also inherent in other technologies or human behaviour, and to monitor and reduce human error rates.
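The arithmetic behind the scale argument above is simple enough to spell out. The figures below are hypothetical, chosen only to illustrate the point, not taken from the study.

```python
# Hypothetical figures illustrating the study's point about scale:
# an error rate that is "close to zero" still harms many people when
# a system is applied to millions.
error_rate = 0.001            # 0.1% -- near-zero by everyday standards
people_screened = 10_000_000  # scale of a nationwide deployment
affected = round(error_rate * people_screened)
print(affected)  # 10000 people adversely impacted
```

In other words, a per-decision error rate that would be negligible for a single human decision-maker translates, at deployment scale, into thousands of affected individuals.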

Second, the complexity or opacity of many AI systems (in particular in the case of machine learning applications) can make it difficult for humans, including system developers, to understand or trace the system’s functioning or outcome. This opacity, in combination with the involvement of many different actors at different stages during the system’s lifecycle, further complicates the identification of the agent(s) responsible for a potential negative outcome, hence reducing human responsibility and accountability.

Third, certain AI systems can re-calibrate themselves through feedback and reinforcement learning. However, if an AI system is re-trained on data resulting from its own decisions which contains unjust biases, errors, inaccuracies or other deficiencies, a vicious feedback loop may arise which can lead to a discriminatory, erroneous or malicious functioning of the system and which can be difficult to detect.”

 

You can find the original study at the link below:

https://rm.coe.int/cahai-2020-23-final-eng-feasibility-study-/1680a0c6da

Sociological Imagination: Artificial Intelligence and Alan Turing

 


 

Çağatay Topal

DTCF Press, 2017

 

Abstract

"C. Wright Mills views sociological imagination as the ability to relate the most intimate to the most impersonal. There are essential linkages between personal troubles and social issues. The sociologist should be able to trace the linkages between biographies and histories. Sociological imagination necessitates sensibility, commitment and responsibility, since sociology is a practice of life as well as a practice of work. Sociology is, then, a practice that potentially everyone can perform. The crucial condition is the existence of sociological imagination and sensibility. This sensibility indicates the capacity to picture a social imaginary, however broad or limited. This paper traces the sociological imagination of Alan Turing, who is often considered the founder of modern computing technology. The history of Turing's scientific endeavours follows (and is followed by) his biography, revealing the strong linkages between his life and work. Turing's sensible, committed and responsible attitude is clear in several cases. This paper focuses on the case of artificial intelligence in order to assess Turing's sociological imagination. The paper claims that Alan Turing has the sensibility and imagination to picture a social imaginary. In order to analyse Turing's imagination in the example of artificial intelligence, the paper refers to three faces of Mills' sociological imagination: (1) emphasis on the relation between the most intimate and the most impersonal; (2) developing new sensibilities and new spaces of sensibility; and (3) imagining a social picture. By referring to these three faces, the paper analyses the biography of Turing, his mathematical but also sociological imagination, and artificial intelligence as the product of this imagination; and, again through artificial intelligence, it further aims to demonstrate the different possibilities in C. W. Mills' concept of sociological imagination."

 

You can find the original, full article at the link below:

Towards Regulation of AI Systems

 


 

The CAHAI Secretariat
December 2020

 

Summary

 

Title 1. International Perspective

The Preliminary Chapter introduces the present report, submitted by the CAHAI to the Committee of Ministers, and details the progress achieved to date, taking into account the impact of COVID-19 pandemic measures. It also includes reflections on working methods, synergy and complementarity with other relevant stakeholders, and proposals for further action by the CAHAI by means of a robust and clear roadmap.

Chapter 1 outlines the impact of AI on human rights, democracy and rule of law. It identifies those human rights, as set out by the European Convention on Human Rights (“ECHR”), its Protocols and the European Social Charter (“ESC”), that are currently most impacted or likely to be impacted by AI.

Chapter 2 maps the relevant corpus of soft law documents and other ethical-legal frameworks developed by governmental and non-governmental organisations globally, with a twofold aim. First, we want to monitor this ever-evolving spectrum of non-mandatory governance instruments. Second, we want to prospectively assess the impact of AI on ethical principles, human rights, the rule of law and democracy.

Chapter 3 aims to contribute to the drafting of future AI regulation by building on the existing binding instruments, contextualising their principles and providing key regulatory guidelines for a future legal framework, with a view to preserving the harmonisation of the existing legal framework in the field of human rights, democracy and the rule of law.

 

You can find the original document at the link below:

https://rm.coe.int/cahai-ai-regulation-publication-en/1680a0b8a4
