Artificial Intelligence and Gender Equality

 


United Nations Educational, Scientific and Cultural Organization (UNESCO)


2020

 

Introduction

Simply put, artificial intelligence (AI) involves using computers to classify, analyze, and draw predictions from data sets, using a set of rules called algorithms. AI algorithms are trained using large datasets so that they can identify patterns, make predictions, recommend actions, and figure out what to do in unfamiliar situations, learning from new data and thus improving over time. The ability of an AI system to improve automatically through experience is known as Machine Learning (ML).
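
To make this train-then-predict cycle concrete, here is a minimal Python sketch that fits a small classifier on a toy data set and then labels examples it has never seen. The loan-style features, the labels and the use of scikit-learn are illustrative assumptions, not anything prescribed by the report.

    # A minimal sketch of the train/predict cycle described above.
    # Assumes scikit-learn is installed; the loan-approval data is invented.
    from sklearn.linear_model import LogisticRegression

    # Each row is one applicant: [income in $1000s, years of credit history].
    training_features = [[30, 1], [45, 3], [60, 5], [80, 10], [25, 0], [90, 12]]
    training_labels = [0, 0, 1, 1, 0, 1]  # 1 = loan approved, 0 = denied

    # "Training" searches for parameters that best map features to labels.
    model = LogisticRegression()
    model.fit(training_features, training_labels)

    # The fitted model can now make predictions for unseen applicants.
    print(model.predict([[70, 7], [20, 1]]))

The same pattern, fit on historical data and predict on new cases, underlies the recruitment, lending and diagnostic systems mentioned below.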

While AI thus mimics the human brain, it currently matches or surpasses human performance only on a relatively narrow range of tasks. Even so, we interact with AI on a daily basis in our professional and personal lives, in areas such as job recruitment, approval of bank loans, medical diagnoses, and much more.

The patterns, predictions and recommended actions that AI generates reflect the accuracy, universality and reliability of the data sets used, as well as the inherent assumptions and biases of the developers of the algorithms employed. AI is set to play an even more important role in every aspect of our daily lives in the future. It is therefore important to look more closely at how AI does, and will, affect gender equality, in particular for women, who represent over half of the world’s population.

Research, including UNESCO’s 2019 report I’d Blush if I Could: closing gender divides in digital skills through education, unambiguously shows that gender biases are found in AI data sets in general and in training data sets in particular. Algorithms and devices have the potential to spread and reinforce harmful gender stereotypes. These gender biases risk further stigmatizing and marginalizing women on a global scale. Considering the increasing ubiquity of AI in our societies, such biases put women at risk of being left behind in all realms of economic, political and social life. They may even offset some of the considerable offline progress that countries have made towards gender equality in the recent past.
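
One way to see how such biases surface is to check whether a trained model hands out favourable outcomes at different rates across genders. The Python sketch below computes a simple demographic-parity gap on hypothetical predictions; both the data and the choice of metric are assumptions made for illustration, not methods taken from the UNESCO report.

    # A minimal sketch: do two groups receive favourable predictions at
    # similar rates? All data here is hypothetical.
    predictions = [1, 0, 1, 1, 0, 0, 1, 0]               # 1 = favourable outcome
    groups      = ["f", "m", "m", "m", "f", "f", "m", "f"]

    def favourable_rate(preds, grps, group):
        outcomes = [p for p, g in zip(preds, grps) if g == group]
        return sum(outcomes) / len(outcomes)

    rate_f = favourable_rate(predictions, groups, "f")
    rate_m = favourable_rate(predictions, groups, "m")

    # A large gap suggests the model, or the data it was trained on,
    # treats the two groups unequally.
    print(f"women: {rate_f:.2f}, men: {rate_m:.2f}, gap: {abs(rate_f - rate_m):.2f}")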

AI also risks having a negative impact on women’s economic empowerment and labour market opportunities by leading to job automation. Recent research by the IMF and the Institute for Women’s Policy Research found that women are at a significantly higher risk of displacement through job automation than men. Indeed, the majority of workers holding jobs that face a high risk of automation, such as clerical, administrative, bookkeeping and cashier positions, are women. It is therefore crucial that women are not left behind by the retraining and reskilling strategies designed to mitigate the impact of automation on job losses.

On the other hand, while AI poses significant threats to gender equality, it is important to recognize that AI also has the potential to make positive changes in our societies by challenging oppressive gender norms. For example, while one AI-powered recruitment tool was found to discriminate against women, AI-powered gender decoders help employers write job postings in gender-sensitive language that is more inclusive, in order to increase the diversity of their workforce. AI therefore has the potential to be part of the solution for advancing gender equality in our societies.
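
Gender decoders of the kind mentioned above typically work by matching a job posting against word lists that linguistic research has flagged as masculine- or feminine-coded. The Python sketch below approximates that idea; the two word lists are short invented samples, not the lists used by any particular tool.

    # A minimal word-list based gender decoder for job postings.
    # The coded-word lists are abbreviated, illustrative samples only.
    import re

    MASCULINE_CODED = {"aggressive", "ambitious", "dominant", "competitive", "ninja"}
    FEMININE_CODED = {"supportive", "collaborative", "nurturing", "interpersonal"}

    def decode(posting):
        words = set(re.findall(r"[a-z]+", posting.lower()))
        m = sorted(words & MASCULINE_CODED)
        f = sorted(words & FEMININE_CODED)
        if len(m) > len(f):
            return f"masculine-coded (flagged: {m})"
        if len(f) > len(m):
            return f"feminine-coded (flagged: {f})"
        return "neutral"

    print(decode("We want an aggressive, competitive ninja to join our sales team."))

A real decoder would use validated word lists and suggest neutral alternatives, but the matching logic is essentially this simple.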

Building on the momentum of the report I’d Blush if I Could and subsequent conversations held in hundreds of media outlets and influential conferences, UNESCO invited leaders in AI, digital technology and gender equality from the private sector, academia and civil society organizations to join the conversation on how to overcome gender biases in AI and beyond, understanding that real progress on gender equality lies in ensuring that women are equitably represented where corporate, industry and policy decisions are made. UNESCO, as a laboratory of ideas and standard setter, has a significant role to play in helping to foster and shape the international debate on gender equality and AI.

The purpose of UNESCO’s Dialogue on Gender Equality and AI was to identify issues, challenges, and good practices to help:

Overcome the built-in gender biases found in AI devices, data sets and algorithms;

Improve the global representation of women in technical roles and in boardrooms in the technology sector; and

Create robust and gender-inclusive AI principles, guidelines and codes of ethics within the industry.

This Summary Report sets forth proposed elements of a Framework on Gender Equality and AI for further consideration, discussion and elaboration amongst various stakeholders. It reflects experts’ inputs to the UNESCO Dialogue on Gender Equality and AI, as well as additional research and analysis. This is not a comprehensive exploration of the complexities of the AI ecosystem in all its manifestations and all its intersections with gender equality. Rather, this is a starting point for conversation and action and has a particular focus on the private sector.

It argues for the need to:

  1. Establish a whole-of-society view and mapping of the broader goals we seek to achieve in terms of gender equality;
  2. Generate an understanding of AI Ethics Principles and how to position gender equality within them;
  3. Reflect on possible approaches for operationalizing AI and Gender Equality Principles; and
  4. Identify and develop a funded multi-stakeholder action plan and coalition as a critical next step.

 

You can reach the original report from the link below:

https://en.unesco.org/system/files/artificial_intelligence_and_gender_equality.pdf


A Framework for Developing a National Artificial Intelligence Strategy

 


 

World Economic Forum

2019

 

Abstract

Over the past decade, artificial intelligence (AI) has emerged as the software engine that drives the Fourth Industrial Revolution, a technological force affecting all disciplines, economies and industries. The exponential growth in computing infrastructure, combined with the dramatic reduction in the cost of obtaining, processing, storing and transmitting data, has revolutionized the way software is developed and automation is carried out. Put simply, we have moved from machine programming to machine learning. This transformation has created great opportunities but poses serious risks. Various stakeholders, including governments, corporations, academics and civil society organizations, have been making efforts to exploit the benefits it provides and to prepare for the risks it poses. Because government is responsible for protecting citizens from various harms and providing for collective goods and services, it has a unique duty to ensure that the ongoing Fourth Industrial Revolution creates benefits for the many, rather than the few.

To this end, various governments have embarked on the path to formulate and/or implement a national strategy for AI, starting with Canada in 2017. Such efforts are usually supported by multimillion-dollar – and, in a few cases, billion-dollar-plus – investments by national governments, and many more should follow given the appropriate guidance. This white paper is a modest effort to guide governments in their development of a national strategy for AI. As a rapidly developing technology, AI will have an impact on how enterprises produce, how consumers consume and how governments deliver services to citizens. AI also raises unprecedented challenges for governments in relation to algorithmic accountability, data protection, explainability of decision-making by machine-learning models and potential job displacement. These challenges require a new approach to understanding how AI and related technology developments can be used to achieve national goals and how their associated risks can be minimized.

As AI will be used in all sectors of society, directly affecting all citizens and all of the services provided by governments, it behoves governments to think carefully about how they create AI economies within their countries and how they can employ AI to solve problems as diverse as ecosystem sustainability and healthcare. Each country will need AI for different things; for example, countries with ageing populations may be less worried about jobs lost to AI automation, whereas countries with youthful populations need to think of ways in which those young people can participate in the AI economy. Either way, this white paper provides a framework for national governments to follow while formulating a strategy of national preparedness and planning to draw benefits from AI developments.

The framework is the result of a holistic study of the strategies and national plans prepared by various countries, including Canada, the United Kingdom, the United States, India, France, Singapore, Germany and the UAE. Additionally, the World Economic Forum team interviewed government employees responsible for developing their national AI strategies in order to gain a detailed understanding of the design process they followed. The authors analysed these strategies and design processes to distil their best elements.

The framework aims to guide governments that have yet to develop a national strategy for AI or that are in the process of developing one. The framework will help the teams responsible for developing the national strategy to ask the right questions, follow best practices, identify and involve the right stakeholders in the process, and create the right set of outcome indicators. Essentially, the framework provides a way to create a “minimum viable” AI strategy for a nation.

 

You can find the original report from the link below:

http://www3.weforum.org/docs/WEF_National_AI_Strategy.pdf

 

Four Principles of Explainable Artificial Intelligence

 

National Institute of Standards and Technology (NIST)

 

August 2020

 

Abstract

“We introduce four principles for explainable artificial intelligence (AI) that comprise the fundamental properties for explainable AI systems. They were developed to encompass the multidisciplinary nature of explainable AI, including the fields of computer science, engineering, and psychology. Because one-size-fits-all explanations do not exist, different users will require different types of explanations. We present five categories of explanation and summarize theories of explainable AI. We give an overview of the algorithms in the field that cover the major classes of explainable algorithms. As a baseline comparison, we assess how well explanations provided by people follow our four principles. This assessment provides insights to the challenges of designing explainable AI systems.”
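
As a toy illustration of one explanation style discussed in this literature, attributing a model’s output to its input features, the hedged Python sketch below trains a linear model and reads its coefficients as a crude global explanation. The features and data are invented, and this is not one of the algorithms the NIST draft surveys.

    # A minimal sketch of feature-attribution explanations via the
    # coefficients of a linear model. Data and feature names are invented.
    from sklearn.linear_model import LogisticRegression

    feature_names = ["income", "debt", "credit_history_years"]
    X = [[55, 20, 4], [30, 35, 1], [75, 10, 9], [40, 30, 2], [90, 5, 12], [28, 40, 1]]
    y = [1, 0, 1, 0, 1, 0]  # 1 = application approved

    model = LogisticRegression().fit(X, y)

    # For a linear model, the learned coefficients indicate which features
    # push a decision up or down, and by how much: one simple explanation.
    for name, coef in zip(feature_names, model.coef_[0]):
        print(f"{name}: {coef:+.3f}")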

 

You can find the original document below:

Click to access NIST Explainable AI Draft NISTIR 8312.pdf

Legal Tech and Applications in the Legal Profession

 


 

(Original Turkish title: Hukuk Teknolojileri ve Avukatlık Mesleğindeki Uygulamaları)

AI Working Group

July 2020

Istanbul

 

Abstract

Technology has become indispensable to the practice of law. With the widespread use of artificial intelligence software, the legal industry will undergo a significant transformation. This opinion letter analyzes the use of AI-embedded legal technology (Legal Tech) in the legal profession. It emphasizes that attorneys must keep pace with the rapid development of technology and move to tech-enabled law practice to improve efficiency. The opinion letter explains the current uses of AI-embedded Legal Tech and their benefits for attorneys. It also considers the possible risks of Legal Tech and underlines the challenges attorneys might encounter. Finally, recommendations are made for both the development and the use of Legal Tech.

 

You can find the original version of the opinion from the link below:


The Impact of the General Data Protection Regulation on Artificial Intelligence 

 

European Parliamentary Research Service

June 2020

 

Abstract

This study addresses the relationship between the General Data Protection Regulation (GDPR) and artificial intelligence (AI). After introducing some basic concepts of AI, it reviews the state of the art in AI technologies and focuses on the application of AI to personal data. It considers challenges and opportunities for individuals and society, and the ways in which risks can be countered and opportunities enabled through law and technology. 

The study then provides an analysis of how AI is regulated in the GDPR and examines the extent to which AI fits into the GDPR conceptual framework. It discusses the tensions and proximities between AI and data protection principles, in particular purpose limitation and data minimisation. It examines the legal bases for AI applications to personal data and considers duties of information concerning AI systems, especially those involving profiling and automated decision-making. It reviews data subjects’ rights, such as the rights of access, erasure and portability, and the right to object.

The study carries out a thorough analysis of automated decision-making, considering the extent to which automated decisions are admissible, the safeguard measures to be adopted, and whether data subjects have a right to individual explanations. It then addresses the extent to which the GDPR provides for a preventive risk-based approach, focusing on data protection by design and by default. The possibility of using AI for statistical purposes, in a way that is consistent with the GDPR, is also considered.

The study concludes by observing that AI can be deployed in a way that is consistent with the GDPR, but also that the GDPR does not provide sufficient guidance for controllers, and that its prescriptions need to be expanded and concretised. Some suggestions in this regard are developed.

 

You can find the original report below:

Click to access EPRS_STU(2020)641530_EN.pdf

What If We Could Fight Coronavirus with Artificial Intelligence?

 


 

European Parliamentary Research Service

2020

 

Analytics have changed the way disease outbreaks are tracked and managed, thereby saving lives. The international community is currently focused on the 2019-2020 novel coronavirus (COVID-19) outbreak, first identified in Wuhan, China. As the virus spreads, raising fears of a worldwide pandemic, international organisations and scientists are using artificial intelligence (AI) to track the epidemic in real time, to predict where the virus might appear next and to develop an effective response.

On 31 December 2019, the World Health Organization (WHO) received the first report of a suspected novel coronavirus (COVID-19) in Wuhan. Amid concerns that the global response is fractured and uncoordinated, the WHO declared the outbreak a public health emergency of international concern (PHEIC) under the International Health Regulations (IHR) on 30 January 2020. Warnings about the novel coronavirus spreading beyond China were raised by AI systems more than a week before official information about the epidemic was released by international organisations. A health monitoring start-up correctly predicted the spread of COVID-19, using natural-language processing and machine learning. Decisions during such an outbreak need to be made on an urgent basis, often in the context of scientific uncertainty, fear, distrust, and social and institutional disruption. How can AI technologies be used to manage this type of global health emergency, without undermining protection of fundamental values and human rights?

Potential impacts and developments

In the case of COVID-19, AI has been used mostly to help detect whether people have the novel coronavirus, through the detection of visual signs of COVID-19 on images from lung CT scans; to monitor, in real time, changes in body temperature through the use of wearable sensors; and to provide an open-source data platform to track the spread of the disease. AI could process vast amounts of unstructured text data to predict the number of potential new cases by area and which types of populations will be most at risk, as well as evaluate and optimise strategies for controlling the spread of the epidemic. Other AI applications can deliver medical supplies by drone, disinfect patient rooms and scan approved drug databases for existing drugs (for other illnesses) that might also work against COVID-19. AI technologies have been harnessed to propose new molecules that could serve as potential medications, and even to accelerate the prediction of the virus’s RNA secondary structure. A series of risk assessment algorithms for COVID-19 for use in healthcare settings has been developed, including an algorithm, developed by the European Centre for Disease Prevention and Control, for the main actions to be followed when managing contacts of probable or confirmed COVID-19 cases.
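
As a toy illustration of the wearable-sensor monitoring mentioned above, the Python sketch below raises an alert when a rolling average of body-temperature readings crosses a threshold. The simulated readings, the three-reading window and the 38.0 °C cut-off are assumptions made for the example, not a clinical rule.

    # A minimal sketch of real-time fever flagging from a wearable sensor:
    # alert when the rolling average of readings crosses a threshold.
    from collections import deque

    FEVER_THRESHOLD_C = 38.0
    WINDOW = 3  # smooth over the last 3 readings to dampen sensor noise

    readings = [36.7, 36.9, 37.1, 37.8, 38.2, 38.4]  # simulated stream
    window = deque(maxlen=WINDOW)

    for i, temp in enumerate(readings):
        window.append(temp)
        avg = sum(window) / len(window)
        status = "possible fever - alert" if avg >= FEVER_THRESHOLD_C else "normal"
        print(f"reading {i}: rolling avg {avg:.2f} C -> {status}")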

Certain AI applications can also detect fake news about the disease by applying machine-learning techniques to mine social media information, track down words that are sensational or alarming, and identify which online sources are deemed authoritative, helping fight what has been called an ‘infodemic’. Facebook, Google, Twitter and TikTok have partnered with the WHO to review and expose false information about COVID-19.
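
The text-mining approach described here can be approximated with a standard bag-of-words classifier. The Python sketch below trains a tiny TF-IDF plus logistic-regression pipeline to flag sensational wording; the six example headlines and their labels are invented and far too few for real use, so this only illustrates the mechanics, not the production systems the article refers to.

    # A minimal sketch of flagging sensational health claims with a
    # TF-IDF + logistic regression pipeline. Training data is invented.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    headlines = [
        "MIRACLE cure stops virus overnight, doctors shocked",
        "SHOCKING secret remedy the government is hiding",
        "Garlic instantly destroys coronavirus, share now",
        "WHO publishes updated guidance on case definitions",
        "Study reports early results of vaccine trial",
        "Health ministry confirms 40 new cases in region",
    ]
    labels = [1, 1, 1, 0, 0, 0]  # 1 = sensational/suspect, 0 = sober reporting

    classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
    classifier.fit(headlines, labels)

    print(classifier.predict(["SHOCKING miracle remedy destroys virus overnight"]))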

In public health emergency response management, derogating from an individual’s rights of privacy, non-discrimination and freedom of movement in the name of the urgency of the situation can sometimes take the form of restrictive measures, such as domestic containment strategies without due process or medical examination without informed consent. In the case of COVID-19, AI applications such as the use of facial recognition to track people not wearing masks in public, AI-based fever detection systems, and the processing of data collected on digital platforms and mobile networks to track a person’s recent movements have contributed to draconian enforcement of restraining measures for the containment of the outbreak for unspecified durations. Chinese search giant Baidu has developed a system using infrared and facial recognition technology that scans and takes photographs of more than 200 people per minute at the Qinghe railway station in Beijing. In Moscow, authorities are using automated facial recognition technology to scan surveillance camera footage in an attempt to identify recent arrivals from China who have been placed under quarantine for fear of COVID-19 infection and are therefore not expected to be outside. Finally, Chinese authorities are deploying drones to patrol public places, conduct thermal imaging, or track people violating quarantine rules.

 

You can find the original document from the link below:

Click to access EPRS_ATA(2020)641538_EN.pdf

Responsible Bots: 10 Guidelines for Developers of Conversational AI

 

Microsoft

November 2018

Guidelines 

  1. Articulate the purpose of your bot and take special care if your bot will support consequential use cases.

The purpose of your bot is central to ethical design, and ethical design is particularly important when it is anticipated that the bot you are developing will serve a consequential use. Consequential use cases include access to services such as healthcare, education, employment, financing or other services that, if denied, would have a meaningful and significant impact on an individual’s daily life.

  2. Be transparent about the fact that you use bots as part of your product or service.

Users are more likely to trust a company that is transparent and forthcoming about its use of bot technology, and a bot is more likely to be trusted if users understand that the bot is working to serve their needs and is clear about its limitations. 

  3. Ensure a seamless hand-off to a human where the human-bot exchange leads to interactions that exceed the bot’s competence.

If your bot will engage people in interactions that may require human judgment, provide a means or ready access to a human moderator. 
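
A common way to implement such a hand-off is a confidence threshold: when the bot’s best intent match scores too low, the conversation is routed to a person. The Python sketch below is a schematic of that pattern; the intents, scores and the 0.6 threshold are invented for illustration, and this is not the Microsoft Bot Framework API.

    # A minimal sketch of the human hand-off pattern: escalate when the
    # bot's confidence in its best intent falls below a threshold.
    HANDOFF_THRESHOLD = 0.6

    def score_intents(utterance):
        # Stand-in for a real intent classifier; returns fixed toy scores.
        if "refund" in utterance.lower():
            return {"request_refund": 0.91, "small_talk": 0.04}
        return {"request_refund": 0.22, "small_talk": 0.35}

    def respond(utterance):
        scores = score_intents(utterance)
        intent, confidence = max(scores.items(), key=lambda kv: kv[1])
        if confidence < HANDOFF_THRESHOLD:
            return "Let me connect you with a human colleague who can help."
        return f"(handled by bot: {intent})"

    print(respond("I want a refund for my order"))   # handled by the bot
    print(respond("My situation is complicated"))    # handed off to a human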

  4. Design your bot so that it respects relevant cultural norms and guards against misuse.

Since bots may have human-like personas, it is especially important that they interact respectfully and safely with users and have built-in safeguards and protocols to handle misuse and abuse.

  5. Ensure your bot is reliable.

Ensure that your bot is sufficiently reliable for the function it aims to perform, and always take into account that, because AI systems are probabilistic, they will not always provide the correct answer.

  6. Ensure your bot treats people fairly.

The possibility that AI-based systems will perpetuate existing societal biases, or introduce new biases, is one of the top concerns identified by the AI community relating to the rapid deployment of AI. Development teams must be committed to ensuring that their bots treat all people fairly. 

  7. Ensure your bot respects user privacy.

Privacy considerations are especially important for bots. While the Microsoft Bot Framework does not store session state, you may be designing and deploying authenticated bots in personal settings (like hospitals) where bots will learn a great deal about users. People may also share more information about themselves than they would if they thought they were interacting with a person. And, of course, bots can remember everything. All of this (plus legal requirements) makes it especially important that you design bots from the ground up with a view toward respecting user privacy. This includes giving users sufficient transparency into bots’ data collection and use, including how the bot functions, and what types of controls the bot offers users over their personal data.

  8. Ensure your bot handles data securely.

Users have every right to expect that their data will be handled securely. Follow security best practices that are appropriate for the type of data your bot will be handling. 

  9. Ensure your bot is accessible.

Bots can benefit everyone, but only if they are designed to be inclusive and accessible to people of all abilities. Microsoft’s mission to empower every person to achieve more includes ensuring that new technology interfaces can be used by people with disabilities, including users of assistive technology. 

  10. Accept responsibility.

We are a long way away from bots that can truly act autonomously, if that day ever comes. Humans are accountable for the operation of bots.

You can reach the original document from the link below:

https://www.microsoft.com/en-us/research/uploads/prod/2018/11/Bot_Guidelines_Nov_2018.pdf

White Paper on Artificial Intelligence: A European approach to excellence and trust

 

European Commission

Brussels, 19.02.2020

 

Abstract

Artificial Intelligence is developing fast. It will change our lives by improving healthcare (e.g. making diagnosis more precise, enabling better prevention of diseases), increasing the efficiency of farming, contributing to climate change mitigation and adaptation, improving the efficiency of production systems through predictive maintenance, increasing the security of Europeans, and in many other ways that we can only begin to imagine. At the same time, Artificial Intelligence (AI) entails a number of potential risks, such as opaque decision-making, gender-based or other kinds of discrimination, intrusion in our private lives or being used for criminal purposes. 

Against a background of fierce global competition, a solid European approach is needed, building on the European strategy for AI presented in April 2018. To address the opportunities and challenges of AI, the EU must act as one and define its own way, based on European values, to promote the development and deployment of AI. 

The Commission is committed to enabling scientific breakthrough, to preserving the EU’s technological leadership and to ensuring that new technologies are at the service of all Europeans – improving their lives while respecting their rights. 

Commission President Ursula von der Leyen announced in her political Guidelines a coordinated European approach on the human and ethical implications of AI as well as a reflection on the better use of big data for innovation. 

Thus, the Commission supports a regulatory and investment-oriented approach with the twin objective of promoting the uptake of AI and of addressing the risks associated with certain uses of this new technology. The purpose of this White Paper is to set out policy options on how to achieve these objectives. It does not address the development and use of AI for military purposes. The Commission invites Member States, other European institutions, and all stakeholders, including industry, social partners, civil society organisations, researchers, the public in general and any interested party, to react to the options below and to contribute to the Commission’s future decision-making in this domain.

You can reach the full paper from the link below:

https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf
