A Framework for Developing a National Artificial Intelligence Strategy

 


 

World Economic Forum

2019

 

Abstract

Over the past decade, artificial intelligence (AI) has emerged as the software engine that drives the Fourth Industrial Revolution, a technological force affecting all disciplines, economies and industries. The exponential growth in computing infrastructure, combined with the dramatic reduction in the cost of obtaining, processing, storing and transmitting data, has revolutionized the way software is developed and automation is carried out. Put simply, we have moved from machine programming to machine learning. This transformation has created great opportunities but poses serious risks. Various stakeholders, including governments, corporations, academics and civil society organizations, have been working to exploit the benefits AI provides and to prepare for the risks it poses. Because government is responsible for protecting citizens from harm and providing collective goods and services, it has a unique duty to ensure that the ongoing Fourth Industrial Revolution creates benefits for the many, rather than the few.

To this end, various governments have embarked on the path to formulate and/or implement a national strategy for AI, starting with Canada in 2017. Such efforts are usually supported by multimillion-dollar – and, in a few cases, billion-dollar-plus – investments by national governments. Many more should follow, given appropriate guidance. This white paper is a modest effort to guide governments in their development of a national strategy for AI. As a rapidly developing technology, AI will have an impact on how enterprises produce, how consumers consume and how governments deliver services to citizens. AI also raises unprecedented challenges for governments in relation to algorithmic accountability, data protection, explainability of decision-making by machine-learning models and potential job displacement. These challenges require a new approach to understanding how AI and related technology developments can be used to achieve national goals and how their associated risks can be minimized. As AI will be used in all sectors of society, and as it directly affects all citizens and all of the services provided by governments, it behoves governments to think carefully about how they create AI economies within their countries and how they can employ AI to solve problems ranging from the sustainability of ecosystems to healthcare. Each country will need AI for different things; for example, countries with ageing populations may be less worried about jobs lost to AI automation, whereas countries with youthful populations need to think of ways in which those young people can participate in the AI economy. Either way, this white paper provides a framework for national governments to follow while formulating a strategy of national preparedness and planning to draw benefits from AI developments.

The framework is the result of a holistic study of the strategies and national plans prepared by various countries, including Canada, the United Kingdom, the United States, India, France, Singapore, Germany and the UAE. Additionally, the World Economic Forum team interviewed the government employees responsible for developing their national AI strategies in order to gain a detailed understanding of the design process they followed. The authors analysed these strategies and design processes to distil their best elements.

The framework aims to guide governments that are yet to develop a national strategy for AI or which are in the process of developing such a strategy. The framework will help the teams responsible for developing the national strategy to ask the right questions, follow the best practices, identify and involve the right stakeholders in the process and create the right set of outcome indicators. Essentially, the framework provides a way to create a “minimum viable” AI strategy for a nation.

 

You can find the original report at the link below:

http://www3.weforum.org/docs/WEF_National_AI_Strategy.pdf

 

Four Principles of Explainable Artificial Intelligence

 


 

NIST, August 2020

 

Abstract

“We introduce four principles for explainable artificial intelligence (AI) that comprise the fundamental properties for explainable AI systems. They were developed to encompass the multidisciplinary nature of explainable AI, including the fields of computer science, engineering, and psychology. Because one-size-fits-all explanations do not exist, different users will require different types of explanations. We present five categories of explanation and summarize theories of explainable AI. We give an overview of the algorithms in the field that cover the major classes of explainable algorithms. As a baseline comparison, we assess how well explanations provided by people follow our four principles. This assessment provides insights into the challenges of designing explainable AI systems.”

 

You can find the original document below:

NIST Explainable AI Draft NISTIR 8312 (PDF)

Legal Tech and Applications in the Legal Profession

 


 

Hukuk Teknolojileri ve Avukatlık Mesleğindeki Uygulamaları (original title, in Turkish)

AI Working Group

July 2020

Istanbul

 

Abstract

Technology has become indispensable to the practice of law. With the widespread use of artificial intelligence software, the legal industry will undergo a significant transformation. This opinion analyzes the use of AI-embedded legal technology (Legal Tech) in the legal profession. It emphasizes that attorneys must keep pace with the rapid development of technology and move to tech-enabled law practice to improve efficiency. The opinion letter explains the current use of AI-embedded Legal Tech and its benefits for attorneys. It also takes the possible risks of Legal Tech into consideration and underlines the challenges attorneys might encounter. Finally, recommendations are made for both the development and the use of Legal Tech.

 

You can find the original version of the opinion at the link below:

The Impact of the General Data Protection Regulation on Artificial Intelligence 


 

European Parliamentary Research Service

June 2020

 

Abstract

This study addresses the relationship between the General Data Protection Regulation (GDPR) and artificial intelligence (AI). After introducing some basic concepts of AI, it reviews the state of the art in AI technologies and focuses on the application of AI to personal data. It considers challenges and opportunities for individuals and society, and the ways in which risks can be countered and opportunities enabled through law and technology. 

The study then provides an analysis of how AI is regulated in the GDPR and examines the extent to which AI fits into the GDPR conceptual framework. It discusses the tensions and proximities between AI and data protection principles, such as, in particular, purpose limitation and data minimisation. It examines the legal bases for AI applications to personal data and considers duties of information concerning AI systems, especially those involving profiling and automated decision-making. It reviews data subjects’ rights, such as the rights of access, erasure, portability and objection.

The study carries out a thorough analysis of automated decision-making, considering the extent to which automated decisions are admissible, the safeguard measures to be adopted, and whether data subjects have a right to individual explanations. It then addresses the extent to which the GDPR provides for a preventive risk-based approach, focusing on data protection by design and by default. The possibility of using AI for statistical purposes, in a way that is consistent with the GDPR, is also considered.

The study concludes by observing that AI can be deployed in a way that is consistent with the GDPR, but also that the GDPR does not provide sufficient guidance for controllers, and that its prescriptions need to be expanded and concretised. Some suggestions in this regard are developed.

 

You can find the original report below:

EPRS_STU(2020)641530_EN.pdf

What If We Could Fight Coronavirus with Artificial Intelligence?

 


 

European Parliamentary Research Service

2020

 

Analytics have changed the way disease outbreaks are tracked and managed, thereby saving lives. The international community is currently focused on the 2019-2020 novel coronavirus (COVID-19) outbreak, first identified in Wuhan, China. As the virus spreads, raising fears of a worldwide pandemic, international organisations and scientists are using artificial intelligence (AI) to track the epidemic in real time, to predict where the virus might appear next and to develop an effective response.

On 31 December 2019, the World Health Organization (WHO) received the first report of a suspected novel coronavirus (COVID-19) in Wuhan. Amid concerns that the global response is fractured and uncoordinated, the WHO declared the outbreak a public health emergency of international concern (PHEIC) under the International Health Regulations (IHR) on 30 January 2020. Warnings about the novel coronavirus spreading beyond China were raised by AI systems more than a week before official information about the epidemic was released by international organisations. A health monitoring start-up correctly predicted the spread of COVID-19, using natural-language processing and machine learning. Decisions during such an outbreak need to be made on an urgent basis, often in the context of scientific uncertainty, fear, distrust, and social and institutional disruption. How can AI technologies be used to manage this type of global health emergency, without undermining protection of fundamental values and human rights?

Potential impacts and developments

In the case of COVID-19, AI has been used mostly to help detect whether people have novel coronavirus through the detection of visual signs of COVID-19 on images from lung CT scans; to monitor, in real time, changes in body temperature through the use of wearable sensors; and to provide an open-source data platform to track the spread of the disease. AI could process vast amounts of unstructured text data to predict the number of potential new cases by area and which types of populations will be most at risk, as well as evaluate and optimise strategies for controlling the spread of the epidemic. Other AI applications can deliver medical supplies by drone, disinfect patient rooms and scan approved drug databases (for other illnesses) that might also work against COVID-19. AI technologies have been harnessed to come up with new molecules that could serve as potential medications or even accelerate the time taken to predict the virus’s RNA secondary structure. A series of risk assessment algorithms for COVID-19 for use in healthcare settings have been developed, including an algorithm for the main actions that need to be followed for managing contacts of probable or confirmed COVID-19 cases, as developed by the European Centre for Disease Prevention and Control.
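The contact-management algorithm mentioned above is, at its core, a decision flow that maps a contact's exposure level and symptoms to a follow-up action. The sketch below is a hypothetical illustration of that kind of flow; the categories, durations and actions are invented for the example and are not the official ECDC protocol.

```python
# Hypothetical sketch of a contact-management decision flow of the kind
# described by the ECDC algorithm. Exposure categories, quarantine durations
# and action strings are illustrative assumptions, not official guidance.
from dataclasses import dataclass


@dataclass
class Contact:
    exposure: str          # "high" (e.g. household contact) or "low" (brief encounter)
    case_confirmed: bool   # was the index case laboratory-confirmed?
    symptomatic: bool      # does the contact currently report symptoms?


def recommended_action(c: Contact) -> str:
    """Map a contact of a probable/confirmed case to a follow-up action."""
    if c.symptomatic:
        # Symptoms take priority regardless of exposure level.
        return "test and isolate pending result"
    if c.case_confirmed and c.exposure == "high":
        return "quarantine 14 days with daily active monitoring"
    if c.exposure == "high":
        # Index case is only probable: act conservatively until confirmed.
        return "quarantine pending confirmation of the index case"
    return "self-monitor for 14 days"
```

Encoding the flow as a pure function of a small record keeps every branch testable, which matters when the protocol itself is revised frequently during an outbreak.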

Certain AI applications can also detect fake news about the disease by applying machine-learning techniques for mining social media information, tracking down words that are sensational or alarming, and identifying which online sources are deemed authoritative for fighting what has been called an infodemic. Facebook, Google, Twitter and TikTok have partnered with the WHO to review and expose false information about COVID-19.
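The word-level screening step described above (flagging text that leans on sensational or alarming vocabulary) can be illustrated with a toy scorer. The lexicon and threshold below are invented for the example; production systems combine such lexical signals with trained classifiers and source-credibility checks rather than relying on keywords alone.

```python
# Toy illustration of keyword-based screening for sensational language.
# The word list and threshold are made up for this sketch; real systems
# learn such signals from labelled data instead of hard-coding them.
import re

SENSATIONAL = {"miracle", "cure", "secret", "shocking", "hoax", "guaranteed"}


def sensational_score(text: str) -> float:
    """Fraction of words in `text` that appear in the sensational lexicon."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    return sum(w in SENSATIONAL for w in words) / len(words)


def flag_for_review(text: str, threshold: float = 0.15) -> bool:
    """Route a post to human fact-checkers if its score crosses the threshold."""
    return sensational_score(text) >= threshold
```

Note that such a filter only prioritises posts for human review; deciding whether a claim is actually false still requires authoritative sources, which is why the platform partnerships with the WHO matter.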

In public health emergency response management, derogating from an individual’s rights of privacy, non-discrimination and freedom of movement in the name of the urgency of the situation can sometimes take the form of restrictive measures, such as domestic containment strategies without due process or medical examination without informed consent. In the case of COVID-19, AI applications such as the use of facial recognition to track people not wearing masks in public, AI-based fever detection systems, and the processing of data collected from digital platforms and mobile networks to track a person’s recent movements have contributed to draconian enforcement of restraining measures for the confinement of the outbreak for unspecified durations. Chinese search giant Baidu has developed a system using infrared and facial recognition technology that scans and takes photographs of more than 200 people per minute at the Qinghe railway station in Beijing. In Moscow, authorities are using automated facial recognition technology to scan surveillance camera footage in an attempt to identify recent arrivals from China who were placed under quarantine for fear of COVID-19 infection and are therefore not expected to appear at the station. Finally, Chinese authorities are deploying drones to patrol public places, conduct thermal imaging, or track people violating quarantine rules.

 

You can find the original document at the link below:

EPRS_ATA(2020)641538_EN.pdf

Responsible Bots: 10 Guidelines for Developers of Conversational AI

 


 

 

Microsoft, November 2018

Guidelines

  1. Articulate the purpose of your bot and take special care if your bot will support consequential use cases.

The purpose of your bot is central to ethical design, and ethical design is particularly important when it is anticipated that a consequential use will be served by the bot you are developing. Consequential use cases include access to services such as healthcare, education, employment, financing or other services that, if denied, would have a meaningful and significant impact on an individual’s daily life.

  2. Be transparent about the fact that you use bots as part of your product or service.

Users are more likely to trust a company that is transparent and forthcoming about its use of bot technology, and a bot is more likely to be trusted if users understand that the bot is working to serve their needs and is clear about its limitations. 

  3. Ensure a seamless hand-off to a human where the human-bot exchange leads to interactions that exceed the bot’s competence.

If your bot will engage people in interactions that may require human judgment, provide a means or ready access to a human moderator. 
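One common way to implement this hand-off is to route a turn to a human moderator whenever the bot's intent classifier is unsure or the user explicitly asks for a person. The sketch below uses invented names, an assumed per-turn confidence score from an upstream classifier, and an arbitrary threshold; it is an illustration of the pattern, not part of any specific framework.

```python
# Hypothetical routing step for a conversational bot: escalate to a human
# on explicit request or when intent confidence is low. All names and the
# 0.6 threshold are assumptions made for this sketch.
ESCALATION_PHRASES = ("speak to a human", "talk to an agent", "representative")


def route(user_message: str, intent: str, confidence: float,
          threshold: float = 0.6) -> str:
    """Return "human" to hand off, or "bot:<intent>" to let the bot answer."""
    msg = user_message.lower()
    # Always honour an explicit request for a person.
    if any(phrase in msg for phrase in ESCALATION_PHRASES):
        return "human"
    # Low classifier confidence signals the exchange exceeds bot competence.
    if confidence < threshold:
        return "human"
    return f"bot:{intent}"
```

Keeping the routing decision in one small function makes the escalation policy easy to audit, which supports the transparency and accountability guidelines elsewhere in this list.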

  4. Design your bot so that it respects relevant cultural norms and guards against misuse.

Since bots may have human-like personas, it is especially important that they interact respectfully and safely with users and have built-in safeguards and protocols to handle misuse and abuse.

  5. Ensure your bot is reliable.

Ensure that your bot is sufficiently reliable for the function it aims to perform, and always take into account that, since AI systems are probabilistic, they will not always provide the correct answer.

  6. Ensure your bot treats people fairly.

The possibility that AI-based systems will perpetuate existing societal biases, or introduce new biases, is one of the top concerns identified by the AI community relating to the rapid deployment of AI. Development teams must be committed to ensuring that their bots treat all people fairly. 

  7. Ensure your bot respects user privacy.

Privacy considerations are especially important for bots. While the Microsoft Bot Framework does not store session state, you may be designing and deploying authenticated bots in personal settings (like hospitals) where bots will learn a great deal about users. People may also share more information about themselves than they would if they thought they were interacting with a person. And, of course, bots can remember everything. All of this (plus legal requirements) makes it especially important that you design bots from the ground up with a view toward respecting user privacy. This includes giving users sufficient transparency into bots’ data collection and use, including how the bot functions, and what types of controls the bot offers users over their personal data.

  8. Ensure your bot handles data securely.

Users have every right to expect that their data will be handled securely. Follow security best practices that are appropriate for the type of data your bot will be handling. 

  9. Ensure your bot is accessible.

Bots can benefit everyone, but only if they are designed to be inclusive and accessible to people of all abilities. Microsoft’s mission to empower every person to achieve more includes ensuring that new technology interfaces can be used by people with disabilities, including users of assistive technology. 

  10. Accept responsibility.

We are a long way from bots that can truly act autonomously, if that day ever comes. Humans are accountable for the operation of bots.

You can reach the original document at the link below:

https://www.microsoft.com/en-us/research/uploads/prod/2018/11/Bot_Guidelines_Nov_2018.pdf

White Paper on Artificial Intelligence: A European approach to excellence and trust

 


Brussels, 19.02.2020

 

Abstract

Artificial Intelligence is developing fast. It will change our lives by improving healthcare (e.g. making diagnosis more precise, enabling better prevention of diseases), increasing the efficiency of farming, contributing to climate change mitigation and adaptation, improving the efficiency of production systems through predictive maintenance, increasing the security of Europeans, and in many other ways that we can only begin to imagine. At the same time, Artificial Intelligence (AI) entails a number of potential risks, such as opaque decision-making, gender-based or other kinds of discrimination, intrusion in our private lives or being used for criminal purposes. 

Against a background of fierce global competition, a solid European approach is needed, building on the European strategy for AI presented in April 2018. To address the opportunities and challenges of AI, the EU must act as one and define its own way, based on European values, to promote the development and deployment of AI. 

The Commission is committed to enabling scientific breakthrough, to preserving the EU’s technological leadership and to ensuring that new technologies are at the service of all Europeans – improving their lives while respecting their rights. 

Commission President Ursula von der Leyen announced in her political Guidelines a coordinated European approach on the human and ethical implications of AI as well as a reflection on the better use of big data for innovation. 

Thus, the Commission supports a regulatory and investment-oriented approach with the twin objective of promoting the uptake of AI and of addressing the risks associated with certain uses of this new technology. The purpose of this White Paper is to set out policy options on how to achieve these objectives. It does not address the development and use of AI for military purposes. The Commission invites Member States, other European institutions, and all stakeholders, including industry, social partners, civil society organisations, researchers, the public in general and any interested party, to react to the options below and to contribute to the Commission’s future decision-making in this domain.

You can reach the full paper at the link below:

https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf

Artificial Intelligence: A European Perspective

 


European Commission, 2018

 

Abstract

We are only at the beginning of a rapid period of transformation of our economy and society due to the convergence of many digital technologies. Artificial Intelligence (AI) is central to this change and offers major opportunities to improve our lives. The recent developments in AI are the result of increased processing power, improvements in algorithms and the exponential growth in the volume and variety of digital data. Many applications of AI have started entering into our every-day lives, from machine translation to image recognition and music generation, and are increasingly deployed in industry, government, and commerce. Connected and autonomous vehicles, and AI-supported medical diagnostics are areas of application that will soon be commonplace. There is strong global competition on AI among the US, China, and Europe. The US leads for now but China is catching up fast and aims to lead by 2030. For the EU, it is not so much a question of winning or losing a race but of finding the way of embracing the opportunities offered by AI in a way that is human-centred, ethical, secure, and true to our core values. The EU Member States and the European Commission are developing coordinated national and European strategies, recognising that only together can we succeed. We can build on our areas of strength including excellent research, leadership in some industrial sectors like automotive and robotics, a solid legal and regulatory framework, and very rich cultural diversity also at regional and sub-regional levels.

It is generally recognised that AI can flourish only if supported by a robust computing infrastructure and good quality data: 

  • With respect to computing, we identified a window of opportunity for Europe to invest in the emerging new paradigm of computing distributed towards the edges of the network, in addition to centralised facilities. This will also support the future deployment of 5G and the Internet of Things. 
  • With respect to data, we argue in favour of learning from successful Internet companies, opening access to data and developing interactivity with the users rather than just broadcasting data. In this way, we can develop ecosystems of public administrations, firms, and civil society enriching the data to make it fit for AI applications responding to European needs. 

We should embrace the opportunities afforded by AI but not uncritically. The black box characteristics of most leading AI techniques make them opaque even to specialists. AI systems are currently limited to narrow and well-defined tasks, and their technologies inherit imperfections from their human creators, such as the well-recognised bias effect present in data. We should challenge the shortcomings of AI and work towards strong evaluation strategies, transparent and reliable systems, and good human-AI interactions. Ethical and secure-by-design algorithms are crucial to build trust in this disruptive technology, but we also need a broader engagement of civil society on the values to be embedded in AI and the directions for future development. This social engagement should be part of the effort to strengthen our resilience at all levels from local, to national and European, across institutions, industry and civil society. Developing local ecosystems of skills, computing, data, and applications can foster the engagement of local communities, respond to their needs, harness local creativity and knowledge, and build a human-centred, diverse, and socially driven AI. We still know very little about how AI will impact the way we think, make decisions, relate to each other, and how it will affect our jobs. This uncertainty can be a source of concern but is also a sign of opportunity. The future is not yet written. We can shape it based on our collective vision of what future we would like to have. But we need to act together and act fast.

 

You can reach the report at the link below:

https://ec.europa.eu/jrc/en/publication/artificial-intelligence-european-perspective

2016–2019 Progress Report: Advancing Artificial Intelligence R&D

 


November 2019

 

The United States national strategy for artificial intelligence (AI), the American AI Initiative, identifies research and development (R&D) as a top priority for maintaining global leadership in AI. The United States leads the world in AI innovation, due in large part to its robust R&D ecosystem. Federal agencies contribute significantly to AI innovation by investing in numerous world-class research programs in areas consistent with the unique missions of each agency. 

This 2016–2019 Progress Report on Advancing Artificial Intelligence R&D (“2016–2019 Progress Report”) documents the important progress that agencies are making to deliver on Federal AI R&D. 

Guiding Federal research investments is the National Artificial Intelligence Research and Development Strategic Plan: 2019 Update (“2019 Plan”), which builds upon the 2016 version of the Plan. The 2019 Plan articulates eight national AI R&D strategies:

Strategy 1: Make long-term investments in AI research. 

Strategy 2: Develop effective methods for human-AI collaboration. 

Strategy 3: Understand and address the ethical, legal, and societal implications of AI.

Strategy 4: Ensure the safety and security of AI systems. 

Strategy 5: Develop shared public datasets and environments for AI training and testing.

Strategy 6: Measure and evaluate AI technologies through benchmarks and standards.

Strategy 7: Better understand the national AI R&D workforce needs.

Strategy 8: Expand public-private partnerships in AI to accelerate advances in AI. 

This 2016–2019 Progress Report highlights AI research first by strategy, then by sector, with subsequent supporting details describing individual agency contributions that provide a whole-of-government overview. The diversity of programs and activities reflects the remarkable breadth and depth of Federal investments in AI. This report not only highlights the broad themes of Federal R&D but also provides illustrative examples in sidebars that showcase individual agency AI R&D breakthroughs advancing the state of the field.

Taken as a whole, the 2016–2019 Progress Report conveys the following key messages: 

  1. The Federal Government is investing in a considerable breadth and depth of innovative AI concepts that can transform the state of the field. 
  2. The United States benefits significantly from the broad spectrum of Federal agencies that invest in AI from their unique mission perspectives, consistent with the national AI R&D strategy. 
  3. Federal investments have generated impactful breakthroughs that are revolutionizing our society for the better. 

Collectively, the investments described in this report demonstrate how the Federal Government leverages and improves America’s AI capabilities through R&D and ensures that those capabilities will increase prosperity, safety, security, and quality of life for the American people for decades to come.

 

You can reach the full report at the link below:

 
