Sociological Imagination: Artificial Intelligence and Alan Turing

 


 

Çağatay Topal

DTCF Press, 2017

 

Abstract

“C. Wright Mills views sociological imagination as the ability to relate the most intimate to the most impersonal. There are essential linkages between personal troubles and social issues. The sociologist should be able to trace the linkages between biographies and histories. Sociological imagination necessitates sensibility, commitment and responsibility since sociology is a practice of life as well as a practice of work. Sociology is, then, a practice that potentially everyone can perform. The crucial condition is the existence of sociological imagination and sensibility. This sensibility indicates the capacity to picture a social imaginary, however broad or limited. This paper traces the sociological imagination of Alan Turing, who is often considered as the founder of modern computing technology. The history of Turing’s scientific endeavours follows (and is followed by) his biography, revealing the strong linkages between his life and work. Turing’s sensible, committed and responsible attitude is clear in several cases. This paper focuses on the case of artificial intelligence in order to assess Turing’s sociological imagination. The paper claims that Alan Turing has the sensibility and imagination to picture a social imaginary. In order to analyse Turing’s imagination in the example of artificial intelligence, the paper refers to three faces of sociological imagination of Mills: (1) emphasis on the relation between the most intimate and the most impersonal; (2) developing new sensibilities and new spaces of sensibility; (3) imagining a social picture. By referring to these three faces, the paper analyses the biography of Turing, his mathematical but also sociological imagination, and artificial intelligence as the product of this imagination; and again through artificial intelligence, it further aims to demonstrate the different possibilities in C. W. Mills’ concept of sociological imagination.”

 

You can find the original full article at the link below:

Towards Regulation of AI Systems

 


 

The CAHAI Secretariat
December 2020

 

Summary

 

Title 1. International Perspective

The Preliminary Chapter introduces the present report, submitted by the CAHAI to the Committee of Ministers, and details the progress achieved to date, taking into account the impact of COVID-19 pandemic measures. It also includes reflections on working methods, synergy and complementarity with other relevant stakeholders, and proposals for further action by the CAHAI by means of a robust and clear roadmap.

Chapter 1 outlines the impact of AI on human rights, democracy and rule of law. It identifies those human rights, as set out by the European Convention on Human Rights (“ECHR”), its Protocols and the European Social Charter (“ESC”), that are currently most impacted or likely to be impacted by AI.

Chapter 2 maps the relevant corpus of soft law documents and other ethical-legal frameworks developed by governmental and non-governmental organisations globally, with a twofold aim. First, we want to monitor this ever-evolving spectrum of non-mandatory governance instruments. Second, we want to prospectively assess the impact of AI on ethical principles, human rights, the rule of law and democracy.

Chapter 3 aims to contribute to the drafting of future AI regulation by building on the existing binding instruments, contextualising their principles and providing key regulatory guidelines for a future legal framework, with a view to preserving the harmonisation of the existing legal framework in the field of human rights, democracy and the rule of law.

 

You can find the original document at the link below:

https://rm.coe.int/cahai-ai-regulation-publication-en/1680a0b8a4

Data Privacy Guidelines for AI Solutions

 


November 2020

Purpose

  1. The purpose of this paper is to provide guiding principles concerning the use of personal and personal-related information in the context of Artificial Intelligence (AI) solutions developed as part of applied Information & Communication Technologies (ICTs), and to emphasise the importance of a legitimate basis for AI data processing by governments and corporations. 
  2. This Guidance is intended to serve as a common international minimum baseline for data protection standards regarding AI solutions, especially those to be implemented at the domestic level, and to be a reference point for the ongoing debate on how the right to privacy can be protected in the context of AI solutions. 
  3. AI solutions are intended to guide or make decisions that affect all our lives. Therefore, AI solutions are currently subject to broader debates within society. The subject of these debates – moral, ethical and societal questions including non-discrimination and free participation, are still to be solved. All of these questions are preconditioned by lawful data processing from a data privacy perspective. The data privacy underpinnings for AI solutions are the focus of this Guidance. 
  4. This guideline is based on the Universal Declaration of Human Rights (UDHR, 10 December 1948, reaffirmed 2015) and reflects the spirit as well as the understanding of this Declaration. Above all, Article 7 (non-discrimination) and Article 12 (right to privacy) shall be considered whenever developing or operating AI solutions. The themes and values of these UDHR Articles are found in Articles 2 and 3 (non-discrimination) and Article 17 (privacy) of the International Covenant on Civil and Political Rights, and are obligations upon countries that have ratified the Treaty. 

Scope

  1. This Guidance is applicable to the data processing of AI solutions in all sectors of society including the public and private sectors. Data processing in this context means the design, the development, the operation and decommissioning of an AI solution. 
  2. This Guidance is applicable to all controllers of AI solutions. Controller in this context means a designer, developer or operator (self-responsible or principal), each in its specific function. 
  3. This Guidance does not limit or otherwise affect any law that grants data subjects more, wider or in whatsoever way better rights, protection, and/or remedies. This Guidance does not limit or otherwise affect any law that imposes obligations on controllers and processors where that law imposes higher, wider or more rigorous obligations regarding data privacy aspects.
  4. This Guidance does not apply to AI solutions that might be performed by individuals in the context of purely private, non-corporate or household activities.

 

(All submissions must have been received by 2 November 2020.)

 

You can find the original draft Guidelines at the link below:

https://www.ohchr.org/Documents/Issues/Privacy/SR_Privacy/2020_Sept_draft_data_Privacy_guidelines.pdf

Preventing Discrimination Caused by the Use of Artificial Intelligence 

 


Committee on Equality and Non-Discrimination
Rapporteur: Christophe LACROIX, Belgium, 2020

 

Summary

Artificial intelligence (AI), by allowing massive upscaling of automated decision-making processes, creates opportunities for efficiency gains – but in parallel, it can perpetuate and exacerbate discrimination. Public and private sector uses of AI have already been shown to have a discriminatory impact, while information flows tend to highlight extremes and foster hate. The use of biased datasets, design that fails to integrate the need to protect human rights, the lack of transparency of algorithms and of accountability for their impact, as well as a lack of diversity in AI teams, all contribute to this phenomenon.

States must act now to prevent AI from having a discriminatory impact in our societies, and should work together to develop international standards in this field. 

Parliaments must moreover play an active role in overseeing the use of AI-based technologies and ensuring they are subject to public scrutiny. Domestic anti-discrimination legislation should be reviewed and amended to ensure that victims of discrimination caused by the use of AI have access to an effective remedy, and national equality bodies should be effectively equipped to deal with the impact of AI-based technologies. 

Respect for equality and non-discrimination must be integrated from the outset in the design of AI-based systems, and tested before their deployment. The public and private sectors should actively promote diversity and interdisciplinary approaches in technology studies and professions.

 

You can find the original report at the link below:

Click to access doc. 15151.pdf

A Framework for Developing a National Artificial Intelligence Strategy

 


 

World Economic Forum

2019

 

Abstract

Over the past decade, artificial intelligence (AI) has emerged as the software engine that drives the Fourth Industrial Revolution, a technological force affecting all disciplines, economies and industries. The exponential growth in computing infrastructure combined with the dramatic reduction in the cost of obtaining, processing, storing and transmitting data has revolutionized the way software is developed, and automation is carried out. Put simply, we have moved from machine programming to machine learning. This transformation has created great opportunities but poses serious risks. Various stakeholders, including governments, corporations, academics and civil society organizations have been making efforts to exploit the benefits it provides and to prepare for the risks it poses. Because government is responsible for protecting citizens from various harms and providing for collective goods and services, it has a unique duty to ensure that the ongoing Fourth Industrial Revolution creates benefits for the many, rather than the few.

To this end, various governments have embarked on the path to formulate and/or implement a national strategy for AI, starting with Canada in 2017. Such efforts are usually supported by multimillion-dollar – and, in a few cases, billion-dollar-plus – investments by national governments. Many more should follow given the appropriate guidance. This white paper is a modest effort to guide governments in their development of a national strategy for AI. As a rapidly developing technology, AI will have an impact on how enterprises produce, how consumers consume and how governments deliver services to citizens. AI also raises unprecedented challenges for governments in relation to algorithmic accountability, data protection, explainability of decision-making by machine-learning models and potential job displacements. These challenges require a new approach to understanding how AI and related technology developments can be used to achieve national goals and how their associated risks can be minimized. As AI will be used in all sectors of society and as it directly affects all citizens and all of the services provided by governments, it behoves governments to think carefully about how they create AI economies within their countries and how they can employ AI to solve problems ranging from the sustainability of ecosystems to healthcare. Each country will need AI for different things; for example, countries with ageing populations may not be so worried about jobs lost due to AI automation, whereas countries with youthful populations need to think of ways in which those young people can participate in the AI economy. Either way, this white paper provides a framework for national governments to follow while formulating a strategy of national preparedness and planning to draw benefits from AI developments.

The framework is the result of a holistic study of the various strategies and national plans prepared by various countries, including Canada, the United Kingdom, the United States, India, France, Singapore, Germany and the UAE. Additionally, the World Economic Forum team interviewed government employees responsible for developing their national AI strategies in order to gain a detailed understanding of the design process they followed. The authors analysed these strategies and design processes to distil their best elements.

The framework aims to guide governments that are yet to develop a national strategy for AI or which are in the process of developing such a strategy. The framework will help the teams responsible for developing the national strategy to ask the right questions, follow the best practices, identify and involve the right stakeholders in the process and create the right set of outcome indicators. Essentially, the framework provides a way to create a “minimum viable” AI strategy for a nation.

 

You can find the original report at the link below:

http://www3.weforum.org/docs/WEF_National_AI_Strategy.pdf

 

Four Principles of Explainable Artificial Intelligence

 


 

August 2020

 

Abstract

“We introduce four principles for explainable artificial intelligence (AI) that comprise the fundamental properties for explainable AI systems. They were developed to encompass the multidisciplinary nature of explainable AI, including the fields of computer science, engineering, and psychology. Because one size fits all explanations do not exist, different users will require different types of explanations. We present five categories of explanation and summarize theories of explainable AI. We give an overview of the algorithms in the field that cover the major classes of explainable algorithms. As a baseline comparison, we assess how well explanations provided by people follow our four principles. This assessment provides insights to the challenges of designing explainable AI systems.”

 

You can find the original document below:

Click to access NIST Explainable AI Draft NISTIR8312 (1).pdf

Assessment List for Trustworthy AI

 


 

European AI Alliance | FUTURIUM | European Commission

The EU Commission

High-Level Expert Group on AI

August 2020

 

Fundamental Rights

Fundamental rights encompass rights such as human dignity and non-discrimination, as well as rights in relation to data protection and privacy, to name just some examples. Prior to self-assessing an AI system with this Assessment List, a fundamental rights impact assessment (FRIA) should be performed. 

A FRIA could include questions such as the following, drawing on specific articles in the Charter and the European Convention on Human Rights (ECHR), its protocols and the European Social Charter.

1. Does the AI system potentially negatively discriminate against people on the basis of any of the following grounds (non-exhaustively): sex, race, colour, ethnic or social origin, genetic features, language, religion or belief, political or any other opinion, membership of a national minority, property, birth, disability, age or sexual orientation? 

Have you put in place processes to test and monitor for potential negative discrimination (bias) during the development, deployment and use phases of the AI system? 

Have you put in place processes to address and rectify for potential negative discrimination (bias) in the AI system? 

2. Does the AI system respect the rights of the child, for example with respect to child protection and taking the child’s best interests into account? 

Have you put in place processes to address and rectify for potential harm to children by the AI system? 

Have you put in place processes to test and monitor for potential harm to children during the development, deployment and use phases of the AI system? 

3. Does the AI system protect personal data relating to individuals in line with GDPR?

Have you put in place processes to assess in detail the need for a data protection impact assessment, including an assessment of the necessity and proportionality of the processing operations in relation to their purpose, with respect to the development, deployment and use phases of the AI system?

Have you put in place measures envisaged to address the risks, including safeguards, security measures and mechanisms to ensure the protection of personal data with respect to the development, deployment and use phases of the AI system? 

4. Does the AI system respect the freedom of expression and information and/or freedom of assembly and association?

Have you put in place processes to test and monitor for potential infringement on freedom of expression and information, and/or freedom of assembly and association, during the development, deployment and use phases of the AI system? 

Have you put in place processes to address and rectify for potential infringement on freedom of expression and information, and/or freedom of assembly and association, in the AI system?

 

You can find the original document at the link below:

https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=68342

Legal Tech and Applications in the Legal Profession

 


 

Hukuk Teknolojileri ve Avukatlık Mesleğindeki Uygulamaları

AI Working Group

July 2020

Istanbul

 

Abstract

Technology has become indispensable to the practice of the legal profession. With the widespread use of artificial intelligence software, the legal industry will undergo a significant transformation. This opinion analyses the use of AI-embedded legal technology (Legal Tech) in the legal profession. It emphasises that attorneys must keep pace with the rapid development of technology and move to tech-enabled law practice to improve efficiency. The opinion letter explains the current use of AI-embedded Legal Tech and its benefits for attorneys. It also underlines the challenges attorneys might encounter, taking the possible risks of Legal Tech into consideration. Finally, recommendations are made for both the development and the use of Legal Tech.

 

You can find the original version of the opinion at the link below:
