A Framework for Developing a National Artificial Intelligence Strategy




World Economic Forum




Over the past decade, artificial intelligence (AI) has emerged as the software engine that drives the Fourth Industrial Revolution, a technological force affecting all disciplines, economies and industries. The exponential growth in computing infrastructure, combined with the dramatic reduction in the cost of obtaining, processing, storing and transmitting data, has revolutionized the way software is developed and automation is carried out. Put simply, we have moved from machine programming to machine learning. This transformation has created great opportunities but poses serious risks. Various stakeholders, including governments, corporations, academics and civil society organizations, have been making efforts to exploit the benefits AI provides and to prepare for the risks it poses. Because government is responsible for protecting citizens from various harms and providing for collective goods and services, it has a unique duty to ensure that the ongoing Fourth Industrial Revolution creates benefits for the many, rather than the few.

To this end, various governments have embarked on the path to formulate and/or implement a national strategy for AI, starting with Canada in 2017. Such efforts are usually supported by multimillion-dollar – and, in a few cases, billion-dollar-plus – investments by national governments. Many more should follow, given the appropriate guidance. This white paper is a modest effort to guide governments in their development of a national strategy for AI. As a rapidly developing technology, AI will have an impact on how enterprises produce, how consumers consume and how governments deliver services to citizens. AI also raises unprecedented challenges for governments in relation to algorithmic accountability, data protection, explainability of decision-making by machine-learning models and potential job displacement. These challenges require a new approach to understanding how AI and related technology developments can be used to achieve national goals and how their associated risks can be minimized.

As AI will be used in all sectors of society, and as it directly affects all citizens and all of the services provided by governments, it behoves governments to think carefully about how they create AI economies within their countries and how they can employ AI to solve problems ranging from the sustainability of ecosystems to healthcare. Each country will need AI for different things; for example, countries with ageing populations may not be so worried about jobs lost to AI automation, whereas countries with youthful populations need to think of ways in which those young people can participate in the AI economy. Either way, this white paper provides a framework for national governments to follow when formulating a strategy of national preparedness and planning to draw benefits from AI developments.

The framework is the result of a holistic study of the strategies and national plans prepared by various countries, including Canada, the United Kingdom, the United States, India, France, Singapore, Germany and the UAE. Additionally, the World Economic Forum team interviewed the government employees responsible for developing these national AI strategies in order to gain a detailed understanding of the design process they followed. The authors analysed these strategies and design processes to distil their best elements.

The framework aims to guide governments that have yet to develop a national strategy for AI or that are in the process of developing one. It will help the teams responsible for developing the national strategy to ask the right questions, follow best practices, identify and involve the right stakeholders in the process and create the right set of outcome indicators. Essentially, the framework provides a way to create a “minimum viable” AI strategy for a nation.


You can find the original report at the link below:



Four Principles of Explainable Artificial Intelligence




August 2020



“We introduce four principles for explainable artificial intelligence (AI) that comprise the fundamental properties for explainable AI systems. They were developed to encompass the multidisciplinary nature of explainable AI, including the fields of computer science, engineering, and psychology. Because one size fits all explanations do not exist, different users will require different types of explanations. We present five categories of explanation and summarize theories of explainable AI. We give an overview of the algorithms in the field that cover the major classes of explainable algorithms. As a baseline comparison, we assess how well explanations provided by people follow our four principles. This assessment provides insights to the challenges of designing explainable AI systems.”


You can find the original document below:

Click to access NIST Explainable AI Draft NISTIR8312 (1).pdf

Assessment List for Trustworthy AI




European AI Alliance | FUTURIUM | European Commission

The EU Commission

High-Level Expert Group on AI

August 2020


Fundamental Rights

Fundamental rights encompass rights such as human dignity and non-discrimination, as well as rights in relation to data protection and privacy, to name just some examples. Prior to self-assessing an AI system with this Assessment List, a fundamental rights impact assessment (FRIA) should be performed. 

A FRIA could include questions such as the following, drawing on specific articles of the Charter and the European Convention on Human Rights (ECHR), its protocols and the European Social Charter.

1. Does the AI system potentially negatively discriminate against people on the basis of any of the following grounds (non-exhaustively): sex, race, colour, ethnic or social origin, genetic features, language, religion or belief, political or any other opinion, membership of a national minority, property, birth, disability, age or sexual orientation? 

Have you put in place processes to test and monitor for potential negative discrimination (bias) during the development, deployment and use phases of the AI system? 

Have you put in place processes to address and rectify potential negative discrimination (bias) in the AI system? 

2. Does the AI system respect the rights of the child, for example with respect to child protection and taking the child’s best interests into account? 

Have you put in place processes to address and rectify potential harm to children by the AI system? 

Have you put in place processes to test and monitor for potential harm to children during the development, deployment and use phases of the AI system? 

3. Does the AI system protect personal data relating to individuals in line with the GDPR?

Have you put in place processes to assess in detail the need for a data protection impact assessment, including an assessment of the necessity and proportionality of the processing operations in relation to their purpose, with respect to the development, deployment and use phases of the AI system?

Have you put in place measures envisaged to address the risks, including safeguards, security measures and mechanisms to ensure the protection of personal data with respect to the development, deployment and use phases of the AI system? 

4. Does the AI system respect the freedom of expression and information and/or freedom of assembly and association?

Have you put in place processes to test and monitor for potential infringement on freedom of expression and information, and/or freedom of assembly and association, during the development, deployment and use phases of the AI system? 

Have you put in place processes to address and rectify potential infringement on freedom of expression and information, and/or freedom of assembly and association, in the AI system?
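Teams operationalising a checklist like this sometimes encode it in software so that open items can be tracked automatically. Below is a minimal, hypothetical Python sketch of that idea; the question texts are abridged and the pass/fail logic is our own illustration, not part of the official Assessment List:

```python
# Minimal sketch of a fundamental-rights self-assessment checklist.
# Question wording is abridged; the tracking logic is illustrative only.

FRIA_CHECKLIST = {
    "non_discrimination": [
        "Does the AI system potentially discriminate on protected grounds?",
        "Are there processes to test and monitor for bias?",
        "Are there processes to address and rectify bias?",
    ],
    "rights_of_the_child": [
        "Does the AI system respect the rights of the child?",
        "Are there processes to test and monitor for harm to children?",
        "Are there processes to address and rectify harm to children?",
    ],
    "data_protection": [
        "Is personal data protected in line with the GDPR?",
        "Has the need for a data protection impact assessment been assessed?",
        "Are safeguards and security measures in place?",
    ],
    "expression_and_assembly": [
        "Does the system respect freedom of expression, assembly and association?",
        "Are there processes to test and monitor for infringements?",
        "Are there processes to address and rectify infringements?",
    ],
}

def open_items(answers):
    """Return the questions still answered 'no' or left unanswered.

    `answers` maps question text to True (yes) or False (no).
    """
    return [
        q
        for questions in FRIA_CHECKLIST.values()
        for q in questions
        if not answers.get(q, False)
    ]
```

A team would re-run `open_items` after each development, deployment and use phase, treating any non-empty result as a blocker for the self-assessment.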


You can find the original document at the link below:


Legal Tech and Applications in the Legal Profession




Hukuk Teknolojileri ve Avukatlık Mesleğindeki Uygulamaları

AI Working Group

July 2020




Technology has become indispensable to the practice of law, and the widespread use of artificial intelligence software will transform the legal industry significantly. The opinion letter analyzes the use of AI-embedded legal technology (Legal Tech) in the legal profession, emphasizing that attorneys must keep pace with the rapid development of technology and move to tech-enabled law practice to improve efficiency. It explains the current uses of AI-embedded Legal Tech and its benefits for attorneys, while also underlining the possible risks of Legal Tech and the challenges attorneys might encounter. Finally, it makes recommendations for both the development and the use of Legal Tech.


You can find the original version of the opinion at the link below:



AI Governance in the Public Sector:

Three tales from the frontiers of automated decision-making in democratic settings




Maciej Kuziemski, Berkman Klein Center for Internet and Society, Harvard University

Gianluca Misuraca, European Commission, Joint Research Centre, Digital Economy Unit

April 2020





“The rush to understand new socio-economic contexts created by the wide adoption of AI is justified by its far-ranging consequences, spanning almost every walk of life. Yet the public sector’s predicament is a tragic double bind: its obligations to protect citizens from potential algorithmic harms are at odds with the temptation to increase its own efficiency – in other words, to govern algorithms while governing by algorithms. Whether such a dual role is even possible has been a matter of debate; the challenge stems from algorithms’ intrinsic properties, which make them distinct from other digital solutions long embraced by governments and create externalities that rule-based programming lacks. As the pressures to deploy automated decision-making systems in the public sector become prevalent, this paper aims to examine how the use of AI in the public sector, in relation to existing data governance regimes and national regulatory practices, can intensify existing power asymmetries. To this end, investigating the legal and policy instruments associated with the use of AI for strengthening the immigration process control system in Canada, “optimising” the employment services in Poland, and personalising the digital service experience in Finland, the paper advocates the need for a common framework to evaluate the potential impact of the use of AI in the public sector. In this regard, it discusses the specific effects of automated decision support systems on public services and the growing expectations for governments to play a more prevalent role in the digital society and to ensure that the potential of technology is harnessed, while negative effects are controlled and possibly avoided.
This is of particular importance in light of the current COVID-19 emergency, where AI and the underpinning regulatory framework of data ecosystems have become crucial policy issues: as more and more innovations are based on large-scale data collection from digital devices and the real-time accessibility of information and services, contact and relationships between institutions and citizens could strengthen – or undermine – trust in governance systems and democracy.”


You can find the original paper at the link below:


The Role and Future of Artificial Intelligence in Criminal Procedure Law






Dr. Zafer İçer, Marmara University Law Faculty

Research Asst. Başak Buluz, Gebze Technical University Engineering Faculty





“Since the beginning of the present century, innovative technologies have evolved at unprecedented speed; cyber-physical systems and the innovations produced through the internet linking these systems have created a new technological era. One of the most important subjects of this digital era is ‘artificial intelligence systems’, often called the catalyst of the industrial digital transformation. Artificial intelligence systems interact with many different disciplines and touch humanity and everyday life at every point, from driverless cars to virtual assistants, from smart-home products to industrial automation; in recent years, as in all legal fields, AI has begun to take its place in criminal procedure. In various countries, intelligent digital assistants are actively used to identify and analyze concrete legal conflicts and to predict the possible outcomes of prospective lawsuits, and artificial intelligence platforms are being used for tasks such as legal analysis and evidence evaluation. Undoubtedly, the common goal of these systems is to provide fast, efficient and accurate solutions in this area. In the near future, moreover, robotic systems are likely to become subjects of adjudication themselves and may take on important roles in decision-making processes as robot judges, prosecutors and lawyers. In this study, the role and future of these intelligent systems in criminal proceedings are discussed from a scientific perspective, in light of current examples and possible developments, with reference to the technical aspects of machine learning and artificial intelligence.”


You can find the original full paper at the link below:

Law and Artificial Intelligence: E-Person, Liability, and A Legal Application Example





Prof. Dr. Zafer ZEYTİN

Turkish-German University, Law Faculty


Tübingen University, Department of Computer Science





“Nowadays, artificial intelligence is used in many fields such as city planning, production, automation, medicine and security. In the legal field, artificial intelligence research has been making progress for 30 years. Nevertheless, many questions about its implementation are still open. The interaction between artificial intelligence and law is examined on two levels in this study. First, it is discussed whether artificial intelligence systems can be legal subjects and, if so, what the consequences and implications could be. Second, it is discussed how law as a discipline can be supported by artificial intelligence systems. An example is put forward as to how such systems can be designed for implementation in marital property regimes.”


You can find the original full paper below:

AI and control of Covid-19




The secretariat of the Ad hoc Committee on Artificial Intelligence (CAHAI)



Artificial intelligence (AI) is being used as a tool to support the fight against the viral pandemic that has affected the entire world since the beginning of 2020. The press and the scientific community are echoing the high hopes that data science and AI can be used to confront the coronavirus (D. Yakobovitch, How to fight the Coronavirus with AI and Data Science, Medium, 15 February 2020) and “fill in the blanks” still left by science (G. Ratnam, Can AI Fill in the Blanks About Coronavirus? Experts Think So, Government Technology, 17 March 2020).

China, the first epicentre of this disease and renowned for its technological advance in this field, has sought to use this to its advantage. Its uses seem to have included support for measures restricting the movement of populations, forecasting the evolution of disease outbreaks and research towards the development of a vaccine or treatment. With regard to the latter, AI has been used to speed up genome sequencing, make faster diagnoses, carry out scanner analyses or, more occasionally, handle maintenance and delivery robots (A. Chun, In a time of coronavirus, China’s investment in AI is paying off in a big way, South China Morning Post, 18 March 2020). 

Its contributions, which are also undeniable in terms of organising better access to scientific publications or supporting research, do not eliminate the need for clinical test phases, nor do they replace human expertise entirely. The structural issues encountered by health infrastructures in this crisis are not due to technology but to the organisation of health services, which should be able to prevent such situations from occurring (Article 11 of the European Social Charter). Emergency measures using technological solutions, including AI, should also be assessed at the end of the crisis. Those that infringe on individual freedoms should not be trivialised on the pretext of a better protection of the population. The provisions of Convention 108+ should in particular continue to be applied.

The contribution of artificial intelligence to the search for a cure

The first application of AI expected in the face of a health crisis is certainly assistance to researchers in finding a vaccine able to protect caregivers and contain the pandemic. Biomedicine and research rely on a large number of techniques, among which the various applications of computer science and statistics have long been making a contribution. The use of AI is therefore part of this continuity.

The predictions of the virus structure generated by AI have already saved scientists months of experimentation. AI seems to have provided significant support in this sense, even if it is limited due to so-called “continuous” rules and infinite combinatorics for the study of protein folding. The American start-up Moderna has distinguished itself by its mastery of a biotechnology based on messenger ribonucleic acid (mRNA) for which the study of protein folding is essential. It has managed to significantly reduce the time required to develop a prototype vaccine testable on humans thanks to the support of bioinformatics, of which AI is an integral part. 

Similarly, Chinese technology giant Baidu, in partnership with Oregon State University and the University of Rochester, published its Linearfold prediction algorithm in February 2020 to study the same protein folding. This algorithm is much faster than traditional algorithms in predicting the secondary structure of a virus’s ribonucleic acid (RNA) and provides scientists with additional information on how viruses spread. The prediction of the secondary structure of the RNA sequence of Covid-19 would thus have been calculated by Linearfold in 27 seconds instead of 55 minutes (Baidu, How Baidu is bringing AI to the fight against coronavirus, MIT Technology Review, 11 March 2020). DeepMind, a subsidiary of Google’s parent company, Alphabet, has also shared its predictions of coronavirus protein structures from its AlphaFold AI system (J. Jumper, K. Tunyasuvunakool, P. Kohli, D. Hassabis et al., Computational predictions of protein structures associated with COVID-19, DeepMind, 5 March 2020). IBM, Amazon, Google and Microsoft have also provided the computing power of their servers to the US authorities to process very large datasets in epidemiology, bioinformatics and molecular modelling (F. Lardinois, IBM, Amazon, Google and Microsoft partner with White House to provide compute resources for COVID-19 research, Techcrunch, 22 March 2020).
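Linearfold itself relies on a linear-time beam-search algorithm described in Baidu’s publication. As a much simpler illustration of what predicting RNA secondary structure involves computationally, the classic textbook Nussinov dynamic program below maximises the number of nested complementary base pairs in a sequence; it is a baseline sketch, not the Linearfold method, and its cubic running time is precisely the cost that algorithms like Linearfold are designed to avoid:

```python
# Textbook Nussinov dynamic program: maximise the number of nested
# complementary base pairs in an RNA sequence. Runs in O(n^3) time,
# which is the cost linear-time methods like Linearfold sidestep.

PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def max_base_pairs(seq, min_loop=3):
    """Maximum number of nested base pairs, with at least `min_loop`
    unpaired bases inside every hairpin loop."""
    n = len(seq)
    # dp[i][j] = best pairing score for the subsequence seq[i..j]
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]  # option: leave position j unpaired
            for k in range(i, j - min_loop):
                if (seq[k], seq[j]) in PAIRS:  # option: pair k with j
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best
    return dp[0][n - 1] if n else 0
```

Real predictors optimise a thermodynamic energy model rather than a raw pair count, but the recurrence structure is the same.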


Emergent Medical Data: Health Information Inferred by Artificial Intelligence



Mason Marks

Gonzaga University  

School of Law





“Artificial intelligence can infer health data from people’s behavior even when their behavior has no apparent connection to their health. AI can analyze social media to track the spread of infectious disease outbreaks, scrutinize retail purchases to identify pregnant customers, and track people’s movements to predict who might attempt suicide. These feats are possible because in modern societies, people continuously interact with internet-enabled devices in homes, workplaces, schools, and public spaces, and these devices are increasingly designed for surveillance. Smart phones track people’s whereabouts, wearables monitor their physical activity, smart speakers record their voices, and surveillance cameras observe their facial expressions. Continuous daily exposure to these devices produces millions of digital traces, the electronic remnants of people’s interactions with technology.

Digital traces provide insight into who we are, what we have done, and what we might do. However, in their raw form, they are rarely very interesting or useful; one’s retail purchases and internet browsing habits are relatively mundane pieces of information. Before scientists, corporations, and government agencies can profit from them, they must transform those traces to enhance their value. Transforming digital traces into health information is called mining for emergent medical data (EMD) because, through analysis with AI, the connections between digital traces and people’s health emerge unexpectedly, as if by magic.

This Article argues that EMD should be viewed as a new type of health information, distinct from traditional medical data (TMD), which is transmitted voluntarily from patients to healthcare providers. It describes how EMD-based profiling and predictions are increasingly promoted as solutions to public health problems such as the opioid crisis, rising rates of suicide, and the high prevalence of gun violence. However, there is little evidence to show that EMD-based profiling works. Even worse, it can cause significant harm, and current health privacy and data protection laws contain loopholes that allow public and private entities to mine EMD without people’s knowledge or consent.

After describing the EMD mining process, and the benefits and risks of EMD, the Article proposes six different ways of conceptualizing this emerging technology. It concludes with preliminary recommendations for effective regulation. Potential options include banning or restricting the collection of digital traces, regulating EMD mining algorithms and limiting which entities can use them, restricting how EMD can be used once it is produced, and requiring ethics board approval for EMD mining research.”
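The “transformation” step the abstract describes (turning mundane digital traces into health inferences) can be made concrete with a deliberately simplified sketch. The signal items and scoring rule below are invented for illustration, loosely echoing the well-known retail pregnancy-prediction anecdote; real EMD mining applies machine-learned models to far richer behavioural data:

```python
# Toy illustration of transforming raw digital traces (retail purchases)
# into a health inference. The signal items and the scoring rule are
# entirely invented for illustration; real EMD mining uses machine-learned
# models over much richer behavioural data.

PREGNANCY_SIGNALS = {"unscented lotion", "prenatal vitamins", "cotton balls"}

def infer_pregnancy_score(purchases):
    """Fraction of a customer's purchases that match known signal items."""
    if not purchases:
        return 0.0
    hits = sum(1 for item in purchases if item in PREGNANCY_SIGNALS)
    return hits / len(purchases)
```

Even this toy shows the regulatory difficulty the Article highlights: none of the input items is health information on its own, yet the output plainly is.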


You can find the original article at the link below: