AI and control of Covid-19

 


 

The Ad hoc Committee on Artificial Intelligence (CAHAI) Secretariat

2020

 

Artificial intelligence (AI) is being used as a tool to support the fight against the viral pandemic that has affected the entire world since the beginning of 2020. The press and the scientific community are echoing the high hopes that data science and AI can be used to confront the coronavirus (D. Yakobovitch, How to fight the Coronavirus with AI and Data Science, Medium, 15 February 2020) and “fill in the blanks” still left by science (G. Ratnam, Can AI Fill in the Blanks About Coronavirus? Experts Think So, Government Technology, 17 March 2020).

China, the first epicentre of the disease and renowned for its technological advances in this field, has tried to turn this to its advantage. Its uses appear to have included support for measures restricting the movement of populations, forecasting the evolution of disease outbreaks and research into the development of a vaccine or treatment. With regard to the latter, AI has been used to speed up genome sequencing, make faster diagnoses, carry out scanner analyses or, more occasionally, operate maintenance and delivery robots (A. Chun, In a time of coronavirus, China’s investment in AI is paying off in a big way, South China Morning Post, 18 March 2020). 

AI’s contributions, which are also undeniable in terms of organising better access to scientific publications or supporting research, do not eliminate the need for clinical test phases, nor do they replace human expertise entirely. The structural problems encountered by health infrastructures in this crisis are not technological in origin but stem from the organisation of health services, which should be able to prevent such situations from occurring (Article 11 of the European Social Charter). Emergency measures using technological solutions, including AI, should also be assessed at the end of the crisis. Those that infringe on individual freedoms should not be trivialised on the pretext of better protection of the population. The provisions of Convention 108+ should in particular continue to be applied.

The contribution of artificial intelligence to the search for a cure

The first application of AI expected in the face of a health crisis is certainly assistance to researchers in finding a vaccine able to protect caregivers and contain the pandemic. Biomedicine and research rely on a large number of techniques, among which the various applications of computer science and statistics have long been making a contribution. The use of AI is therefore part of this continuity.

The predictions of the virus structure generated by AI have already saved scientists months of experimentation. AI seems to have provided significant support in this respect, even if its contribution remains limited by the so-called “continuous” rules and the infinite combinatorics involved in the study of protein folding. The American start-up Moderna has distinguished itself by its mastery of a biotechnology based on messenger ribonucleic acid (mRNA), for which the study of protein folding is essential. Thanks to the support of bioinformatics, of which AI is an integral part, it has managed to significantly reduce the time required to develop a prototype vaccine testable on humans. 

Similarly, the Chinese technology giant Baidu, in partnership with Oregon State University and the University of Rochester, published its Linearfold prediction algorithm in February 2020 to study the same protein folding. This algorithm is much faster than traditional algorithms in predicting a virus’s secondary ribonucleic acid (RNA) structure and provides scientists with additional information on how viruses spread. The secondary structure of the RNA sequence of Covid-19 was reportedly computed by Linearfold in 27 seconds instead of 55 minutes (Baidu, How Baidu is bringing AI to the fight against coronavirus, MIT Technology Review, 11 March 2020). DeepMind, a subsidiary of Google’s parent company Alphabet, has also shared the predictions of coronavirus protein structures generated by its AlphaFold AI system (J. Jumper, K. Tunyasuvunakool, P. Kohli, D. Hassabis et al., Computational predictions of protein structures associated with COVID-19, DeepMind, 5 March 2020). IBM, Amazon, Google and Microsoft have also provided the computing power of their servers to the US authorities to process very large datasets in epidemiology, bioinformatics and molecular modelling (F. Lardinois, IBM, Amazon, Google and Microsoft partner with White House to provide compute resources for COVID-19 research, TechCrunch, 22 March 2020).
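The kind of computation Linearfold accelerates can be illustrated with a much simpler classic: a Nussinov-style dynamic programme that maximises the number of complementary base pairs in an RNA sequence. The sketch below is illustrative only; Linearfold itself uses a far more sophisticated linear-time, thermodynamically scored algorithm.

```python
# Minimal Nussinov-style dynamic programme for RNA secondary structure:
# maximises the number of nested complementary base pairs.
# Illustrative only -- not Linearfold's actual algorithm.

PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def max_pairs(seq: str, min_loop: int = 3) -> int:
    """Maximum number of nested base pairs, with hairpin loops of at
    least min_loop unpaired bases between any paired positions."""
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):          # widen the subsequence
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]                  # case: j left unpaired
            for k in range(i, j - min_loop):     # case: j pairs with some k
                if (seq[k], seq[j]) in PAIRS:
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best
    return dp[0][n - 1]

print(max_pairs("GGGAAAUCC"))  # → 3 (a small hairpin: three stacked pairs)
```

Even this toy version shows why the problem is computationally heavy: the table grows quadratically with sequence length, and coronavirus genomes run to roughly 30,000 bases.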

https://www.coe.int/en/web/artificial-intelligence/ai-and-control-of-covid-19-coronavirus

White Paper on Artificial Intelligence: A European approach to excellence and trust

 


Brussels, 19.02.2020

 

Abstract

Artificial Intelligence is developing fast. It will change our lives by improving healthcare (e.g. making diagnosis more precise, enabling better prevention of diseases), increasing the efficiency of farming, contributing to climate change mitigation and adaptation, improving the efficiency of production systems through predictive maintenance, increasing the security of Europeans, and in many other ways that we can only begin to imagine. At the same time, Artificial Intelligence (AI) entails a number of potential risks, such as opaque decision-making, gender-based or other kinds of discrimination, intrusion into our private lives or use for criminal purposes. 

Against a background of fierce global competition, a solid European approach is needed, building on the European strategy for AI presented in April 2018. To address the opportunities and challenges of AI, the EU must act as one and define its own way, based on European values, to promote the development and deployment of AI. 

The Commission is committed to enabling scientific breakthrough, to preserving the EU’s technological leadership and to ensuring that new technologies are at the service of all Europeans – improving their lives while respecting their rights. 

Commission President Ursula von der Leyen announced in her political Guidelines a coordinated European approach on the human and ethical implications of AI as well as a reflection on the better use of big data for innovation. 

Thus, the Commission supports a regulatory and investment oriented approach with the twin objective of promoting the uptake of AI and of addressing the risks associated with certain uses of this new technology. The purpose of this White Paper is to set out policy options on how to achieve these objectives. It does not address the development and use of AI for military purposes.

The Commission invites Member States, other European institutions, and all stakeholders, including industry, social partners, civil society organisations, researchers, the public in general and any interested party, to react to the options below and to contribute to the Commission’s future decision-making in this domain.

You can find the full paper at the link below:

https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf

Artificial Intelligence: A European Perspective

 


EU Commission, 2018

 

Abstract

We are only at the beginning of a rapid period of transformation of our economy and society due to the convergence of many digital technologies. Artificial Intelligence (AI) is central to this change and offers major opportunities to improve our lives. The recent developments in AI are the result of increased processing power, improvements in algorithms and the exponential growth in the volume and variety of digital data. Many applications of AI have started entering into our every-day lives, from machine translation to image recognition and music generation, and are increasingly deployed in industry, government, and commerce. Connected and autonomous vehicles, and AI-supported medical diagnostics are areas of application that will soon be commonplace. There is strong global competition on AI among the US, China, and Europe. The US leads for now but China is catching up fast and aims to lead by 2030. For the EU, it is not so much a question of winning or losing a race but of finding the way of embracing the opportunities offered by AI in a way that is human-centred, ethical, secure, and true to our core values. The EU Member States and the European Commission are developing coordinated national and European strategies, recognising that only together can we succeed. We can build on our areas of strength including excellent research, leadership in some industrial sectors like automotive and robotics, a solid legal and regulatory framework, and very rich cultural diversity also at regional and sub-regional levels. 

It is generally recognised that AI can flourish only if supported by a robust computing infrastructure and good quality data: 

  • With respect to computing, we identified a window of opportunity for Europe to invest in the emerging new paradigm of computing distributed towards the edges of the network, in addition to centralised facilities. This will support also the future deployment of 5G and the Internet of Things. 
  • With respect to data, we argue in favour of learning from successful Internet companies, opening access to data and developing interactivity with the users rather than just broadcasting data. In this way, we can develop ecosystems of public administrations, firms, and civil society enriching the data to make it fit for AI applications responding to European needs. 

We should embrace the opportunities afforded by AI but not uncritically. The black box characteristics of most leading AI techniques make them opaque even to specialists. AI systems are currently limited to narrow and well-defined tasks, and their technologies inherit imperfections from their human creators, such as the well-recognised bias effect present in data. We should challenge the shortcomings of AI and work towards strong evaluation strategies, transparent and reliable systems, and good human-AI interactions. Ethical and secure-by-design algorithms are crucial to build trust in this disruptive technology, but we also need a broader engagement of civil society on the values to be embedded in AI and the directions for future development. This social engagement should be part of the effort to strengthen our resilience at all levels from local, to national and European, across institutions, industry and civil society. Developing local ecosystems of skills, computing, data, and applications can foster the engagement of local communities, respond to their needs, harness local creativity and knowledge, and build a human-centred, diverse, and socially driven AI. We still know very little about how AI will impact the way we think, make decisions, relate to each other, and how it will affect our jobs. This uncertainty can be a source of concern but is also a sign of opportunity. The future is not yet written. We can shape it based on our collective vision of what future we would like to have. But we need to act together and act fast.

 

You can find the report at the link below:

https://ec.europa.eu/jrc/en/publication/artificial-intelligence-european-perspective

Explaining AI Decisions

 


 

ICO & Alan Turing Institute

2 December 2019

The ICO and The Alan Turing Institute (The Turing) have launched a consultation on our co-badged guidance, Explaining decisions made with AI. This guidance aims to give organisations practical advice to help explain the processes, services and decisions delivered or assisted by AI, to the individuals affected by them.  

Increasingly, organisations are using artificial intelligence (AI) to support or to make decisions about individuals. If this is something you do, or something you are thinking about, this guidance is for you.

We want to ensure this guidance is practically applicable in the real world, so organisations can easily use it when developing AI systems. This is why we are requesting feedback.

The guidance consists of three parts. Depending on your level of expertise, and the make-up of your organisation, some parts may be more relevant to you than others. You can pick and choose the parts that are most useful.

The survey will ask you about all three parts, but you can answer as few or as many questions as you like.

Part 1: The basics of explaining AI defines the key concepts and outlines a number of different types of explanations. It will be relevant for all members of staff involved in the development of AI systems.

Part 2: Explaining AI in practice helps you with the practicalities of explaining these decisions and providing explanations to individuals. This will primarily be helpful for the technical teams in your organisation; however, your DPO and compliance team will also find it useful.

Part 3: What explaining AI means for your organisation goes into the various roles, policies, procedures and documentation that you can put in place to ensure your organisation is set up to provide meaningful explanations to affected individuals. This is primarily targeted at your organisation’s senior management team; however, your DPO and compliance team will also find it useful.

You can send your thoughts via email to [email protected]

 

You can find the original guidance at the link below:

https://ico.org.uk/media/about-the-ico/consultations/2616441/explain-about-this-guidance.pdf

Policy and Investment Recommendations for Trustworthy AI

 


 

The High-Level Expert Group on Artificial Intelligence 

26 June 2019

 

Introduction

 

In its various communications on artificial intelligence (AI), the European Commission has set out its vision for AI, which is to be trustworthy and human-centric. Three pillars underpin the Commission’s vision: (i) increasing public and private investments in AI to boost its uptake, (ii) preparing for socio-economic changes, and (iii) ensuring an appropriate ethical and legal framework to protect and strengthen European values. To support the implementation of this vision, the Commission established the High-Level Expert Group on Artificial Intelligence (AI HLEG), an independent group mandated with the drafting of two deliverables: a set of AI Ethics Guidelines and a set of Policy and Investment Recommendations.

In our first deliverable, the Ethics Guidelines for Trustworthy AI published on 8 April 2019 (Ethics Guidelines), we stated that AI systems need to be human-centric, with the goal of improving individual and societal well-being, and worthy of our trust. In order to be deemed trustworthy, we put forward that AI systems – including all actors and processes involved therein – should be lawful, ethical and robust. Those Guidelines therefore constituted a first important step in identifying the type of AI that we want and do not want for Europe, but this alone is not enough to ensure that Europe can also materialise the beneficial impact that Trustworthy AI can bring.

Taking the next step, this document contains our proposed Policy and Investment Recommendations for Trustworthy AI, addressed to EU institutions and Member States. Building on our first deliverable, we put forward 33 recommendations that can guide Trustworthy AI towards sustainability, growth and competitiveness, as well as inclusion – while empowering, benefiting and protecting human beings. We believe that EU institutions and Member States will play a key role in the achievement of these goals, as pivotal players in the data economy, procurers of Trustworthy AI systems and standard-setters of sound governance.

Our recommendations focus on four main areas where we believe Trustworthy AI can help achieve a beneficial impact, starting with humans and society at large (A), and continuing with the private sector (B), the public sector (C) and Europe’s research and academia (D). In addition, we also address the main enablers needed to facilitate those impacts, focusing on availability of data and infrastructure (E), skills and education (F), appropriate governance and regulation (G), as well as funding and investment (H).

These recommendations should not be regarded as exhaustive, but attempt to tackle the most pressing areas for action with the greatest potential. Europe can distinguish itself from others by developing, deploying, using and scaling Trustworthy AI, which we believe should become the only kind of AI in Europe, in a manner that can enhance both individual and societal well-being.

 

You can find the original document at the link below:

https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=60343

Ethics Guidelines For Trustworthy AI

 


 

European Commission High-Level Expert Group

April 8, 2019

 

EXECUTIVE SUMMARY

The aim of the Guidelines is to promote Trustworthy AI. Trustworthy AI has three components, which should be met throughout the system’s entire life cycle: (1) it should be lawful, complying with all applicable laws and regulations; (2) it should be ethical, ensuring adherence to ethical principles and values; and (3) it should be robust, both from a technical and social perspective since, even with good intentions, AI systems can cause unintentional harm. Each component in itself is necessary but not sufficient for the achievement of Trustworthy AI. Ideally, all three components work in harmony and overlap in their operation. If, in practice, tensions arise between these components, society should endeavour to align them.

These Guidelines set out a framework for achieving Trustworthy AI. The framework does not explicitly deal with Trustworthy AI’s first component (lawful AI). Instead, it aims to offer guidance on the second and third components: fostering and securing ethical and robust AI. Addressed to all stakeholders, these Guidelines seek to go beyond a list of ethical principles, by providing guidance on how such principles can be operationalised in sociotechnical systems. Guidance is provided in three layers of abstraction, from the most abstract in Chapter I to the most concrete in Chapter III, closing with examples of opportunities and critical concerns raised by AI systems.

I. Based on an approach founded on fundamental rights, Chapter I identifies the ethical principles and their correlated values that must be respected in the development, deployment and use of AI systems.

Key guidance derived from Chapter I:

  • Develop, deploy and use AI systems in a way that adheres to the ethical principles of: respect for human autonomy, prevention of harm, fairness and explicability. Acknowledge and address the potential tensions between these principles.
  • Pay particular attention to situations involving more vulnerable groups such as children, persons with disabilities and others that have historically been disadvantaged or are at risk of exclusion, and to situations which are characterised by asymmetries of power or information, such as between employers and workers, or between businesses and consumers.
  • Acknowledge that, while bringing substantial benefits to individuals and society, AI systems also pose certain risks and may have a negative impact, including impacts which may be difficult to anticipate, identify or measure (e.g. on democracy, the rule of law and distributive justice, or on the human mind itself.) Adopt adequate measures to mitigate these risks when appropriate, and proportionately to the magnitude of the risk.

II. Drawing upon Chapter I, Chapter II provides guidance on how Trustworthy AI can be realised, by listing seven requirements that AI systems should meet. Both technical and non-technical methods can be used for their implementation.

Key guidance derived from Chapter II:

  • Ensure that the development, deployment and use of AI systems meets the seven key requirements for Trustworthy AI: (1) human agency and oversight, (2) technical robustness and safety, (3) privacy and data governance, (4) transparency, (5) diversity, non-discrimination and fairness, (6) environmental and societal well-being and (7) accountability.
  • Consider technical and non-technical methods to ensure the implementation of those requirements.
  • Foster research and innovation to help assess AI systems and to further the achievement of the requirements; disseminate results and open questions to the wider public, and systematically train a new generation of experts in AI ethics.
  • Communicate, in a clear and proactive manner, information to stakeholders about the AI system’s capabilities and limitations, enabling realistic expectation setting, and about the manner in which the requirements are implemented. Be transparent about the fact that they are dealing with an AI system.
  • Facilitate the traceability and auditability of AI systems, particularly in critical contexts or situations.
  • Involve stakeholders throughout the AI system’s life cycle. Foster training and education so that all stakeholders are aware of and trained in Trustworthy AI.
  • Be mindful that there might be fundamental tensions between different principles and requirements. Continuously identify, evaluate, document and communicate these trade-offs and their solutions.

III. Chapter III provides a concrete and non-exhaustive Trustworthy AI assessment list aimed at operationalising the key requirements set out in Chapter II. This assessment list will need to be tailored to the specific use case of the AI system.

Key guidance derived from Chapter III:

  • Adopt a Trustworthy AI assessment list when developing, deploying or using AI systems, and adapt it to the specific use case in which the system is being applied.
  • Keep in mind that such an assessment list will never be exhaustive. Ensuring Trustworthy AI is not about ticking boxes, but about continuously identifying and implementing requirements, evaluating solutions, ensuring improved outcomes throughout the AI system’s lifecycle, and involving stakeholders in this.

A final section of the document aims to concretise some of the issues touched upon throughout the framework, by offering examples of beneficial opportunities that should be pursued, and critical concerns raised by AI systems that should be carefully considered.

While these Guidelines aim to offer guidance for AI applications in general by building a horizontal foundation to achieve Trustworthy AI, different situations raise different challenges. It should therefore be explored whether, in addition to this horizontal framework, a sectorial approach is needed, given the context-specificity of AI systems.

These Guidelines do not intend to substitute any form of current or future policymaking or regulation, nor do they aim to deter the introduction thereof. They should be seen as a living document to be reviewed and updated over time to ensure their continuous relevance as the technology, our social environments, and our knowledge evolve. This document is a starting point for the discussion about “Trustworthy AI for Europe”.

Beyond Europe, the Guidelines also aim to foster research, reflection and discussion on an ethical framework for AI systems at a global level.

 

You can find the original Guidelines at the link below:

https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai

Summary Of The 2018 Department Of Defense Artificial Intelligence Strategy

 


Department Of Defense

Harnessing AI to Advance Our Security and Prosperity

 

PREFACE

The Department of Defense’s (DoD) Artificial Intelligence (AI) Strategy directs the DoD to accelerate the adoption of AI and the creation of a force fit for our time. A strong, technologically advanced Department is essential for protecting the security of our nation, preserving access to markets that will improve our standard of living, and ensuring that we are capable of passing intact to the younger generations the freedoms we currently enjoy.

AI is rapidly changing a wide range of businesses and industries. It is also poised to change the character of the future battlefield and the pace of threats we must face. We will harness the potential of AI to transform all functions of the Department positively, thereby supporting and protecting U.S. servicemembers, safeguarding U.S. citizens, defending allies and partners, and improving the affordability, effectiveness, and speed of our operations. The women and men in the U.S. armed forces remain our enduring source of strength; we will use AI-enabled information, tools, and systems to empower, not replace, those who serve.

Realizing this vision requires identifying appropriate use cases for AI across DoD, rapidly piloting solutions, and scaling successes across our enterprise. The 2018 DoD AI Strategy, summarized here, will drive the urgency, scale, and unity of effort needed to navigate this transformation. The Joint Artificial Intelligence Center (JAIC) is the focal point for carrying it out. As we systematically explore AI’s full potential, study its implications, and begin the process of learning about its impact on defense, we will remain thoughtful and adaptive in our execution.

We cannot succeed alone; this undertaking requires the skill and commitment of those in government, close collaboration with academia and non-traditional centers of innovation in the commercial sector, and strong cohesion among international allies and partners. We must learn from others to help us achieve the fullest understanding of the potential of AI, and we must lead in responsibly developing and using these powerful technologies, in accordance with the law and our values.

As stewards of the security and prosperity of the American public, we will leverage the creativity and agility of our nation to address the technical, ethical, and societal challenges posed by AI and leverage its opportunities in order to preserve the peace and provide security for future generations.

 

You can find the full and original document below:

https://media.defense.gov/2019/Feb/12/2002088963/-1/-1/1/SUMMARY-OF-DOD-AI-STRATEGY.PDF

 

 

Review of Controls for Certain Emerging Technologies

 


Bureau of Industry and Security

November 19, 2018

 

Summary

“The Bureau of Industry and Security (BIS) controls the export of dual-use and less sensitive military items through the Export Administration Regulations (EAR), including the Commerce Control List (CCL). As controls on exports of technology are a key component of the effort to protect sensitive U.S. technology, many sensitive technologies are listed on the CCL, often consistent with the lists maintained by the multilateral export control regimes of which the United States is a member. Certain technologies, however, may not yet be listed on the CCL or controlled multilaterally because they are emerging technologies. As such, they have not yet been evaluated for their national security impacts. This advance notice of proposed rulemaking (ANPRM) seeks public comment on criteria for identifying emerging technologies that are essential to U.S. national security, for example because they have potential conventional weapons, intelligence collection, weapons of mass destruction, or terrorist applications or could provide the United States with a qualitative military or intelligence advantage. Comment on this ANPRM will help inform the interagency process to identify and describe such emerging technologies. This interagency process is anticipated to result in proposed rules for new Export Control Classification Numbers (ECCNs) on the CCL.”

 

You can find the link and original proposed rulemaking below:

https://www.gpo.gov/fdsys/pkg/FR-2018-11-19/pdf/2018-25221.pdf

Artificial Intelligence, Automation and Work

 


 


 

Daron Acemoglu – MIT

 

Pascual Restrepo – Boston University

 

January 4, 2018

 

 

 

 

Abstract

 

“We summarize a framework for the study of the implications of automation and AI on the demand for labor, wages, and employment. Our task-based framework emphasizes the displacement effect that automation creates as machines and AI replace labor in tasks that it used to perform. This displacement effect tends to reduce the demand for labor and wages. But it is counteracted by a productivity effect, resulting from the cost savings generated by automation, which increase the demand for labor in non-automated tasks. The productivity effect is complemented by additional capital accumulation and the deepening of automation (improvements of existing machinery), both of which further increase the demand for labor. These countervailing effects are incomplete. Even when they are strong, automation increases output per worker more than wages and reduces the share of labor in national income. The more powerful countervailing force against automation is the creation of new labor-intensive tasks, which reinstates labor in new activities and tends to increase the labor share to counterbalance the impact of automation. Our framework also highlights the constraints and imperfections that slow down the adjustment of the economy and the labor market to automation and weaken the resulting productivity gains from this transformation: a mismatch between the skill requirements of new technologies, and the possibility that automation is being introduced at an excessive rate, possibly at the expense of other productivity-enhancing technologies.”
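The displacement and productivity effects described in the abstract can be made concrete with a toy calculation. All numbers below are hypothetical, chosen purely for illustration; they are not taken from the paper:

```python
# Toy illustration of the task-based framework's two opposing forces.
# Displacement effect: automation removes labor from some tasks.
# Productivity effect: cost savings raise labor demand in the remaining ones.
# All parameter values are hypothetical.

tasks = 100              # total tasks in production
automated = 20           # tasks shifted from labor to machines
labor_per_task = 1.0     # workers needed per non-automated task
cost_saving = 0.15       # proportional cost reduction from automation
demand_elasticity = 0.5  # extra labor demand per unit of cost saving (assumed)

# Displacement effect: fewer tasks are performed by labor
labor_after_displacement = (tasks - automated) * labor_per_task

# Productivity effect: cheaper output expands demand for the remaining tasks
labor_after_productivity = labor_after_displacement * (1 + demand_elasticity * cost_saving)

print(labor_after_displacement)          # → 80.0
print(round(labor_after_productivity, 2))  # → 86.0
```

Here the productivity effect recovers part, but not all, of the displaced labor demand (from a baseline of 100 down to 80, then back up to 86), matching the abstract's point that the countervailing effects are incomplete.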

 

You can find the link and original paper below:

https://economics.mit.edu/files/14641

Artificial intelligence: Anticipating Its Impact On Jobs To Ensure A Fair Transition

 



European Economic and Social Committee

Rapporteur: Franca SALIS-MADINIER

19/09/2018

 

1. Conclusions and recommendations

1.1. Artificial intelligence (AI) and robotics will expand and amplify the impact of the digitalisation of the economy on labour markets. Technological progress has always affected work and employment, requiring new forms of social and societal management. The EESC believes that technological development can contribute to economic and social progress; however, it feels that it would be a mistake to overlook its overall impact on society. In the world of work, AI will expand and amplify the scope of job automation. This is why the EESC would like to give its input to efforts to lay the groundwork for the social transformations which will go hand in hand with the rise of AI and robotics, by reinforcing and renewing the European social model.

1.2. The EESC flags up the potential of AI and its applications, particularly in the areas of healthcare, security in the transport and energy sectors, combating climate change and anticipating threats in the field of cybersecurity. The European Union, governments and civil society organisations have a key role to play when it comes to fully tapping the potential advantages of AI, particularly for people with disabilities or reduced mobility, the elderly and people with chronic health issues.

1.3. However, the EU has insufficient data on the digital economy and the resulting social transformation. The EESC recommends improving statistical tools and research, particularly on AI, the use of industrial and service robots, the Internet of Things and new economic models (the platform-based economy and new forms of employment and work).

1.4. The EESC calls on the European Commission to promote and support studies carried out by European sector-level social dialogue committees on the sector-specific impact of AI and robotics and, more broadly, of the digitalisation of the economy.

1.5. It is acknowledged that AI and robotics will displace and transform jobs, by eliminating some and creating others. Whatever the outcome, the EU must guarantee access to social protection for all workers, employees and self-employed or bogus self-employed persons, in line with the European Pillar of Social Rights.

 

You can find the link of the full document below:

https://www.eesc.europa.eu/en/our-work/opinions-information-reports/opinions/artificial-intelligence-anticipating-its-impact-jobs-ensure-fair-transition-own-initiative-opinion
