What Happened in Artificial Intelligence in 2019?
English translation is coming soonish… 🙂
The making of a smart city: policy recommendations
EU Smart Cities Information Systems
About the Report
The purpose of this report is to share key lessons learned and to provide policy recommendations on how to support the development of Smart Cities projects. It is the third in a series of SCIS Reports in the Smart Cities Information Systems (SCIS) project, which aims to support and stimulate the replication of successful innovative technologies tested through EU-funded projects. The SCIS project brings together project developers, cities, institutions, industry and experts from across Europe to exchange data, experience and know-how, and to collaborate on the creation of smart cities and an energy-efficient urban environment.
This report presents policy recommendations for local, national and EU level policy makers. It covers the main areas influenced by policy, namely the regulatory environment and finance. The report also offers a final specialist section dedicated to innovation policy for EU authorities related to Smart Cities. Smart City planning and project implementation issues, which are the domain of city planners and promoters, are covered by the SCIS report on technology replication.1 This report complements the SCIS information database, which focuses on the projects themselves, by presenting an analysis of the barriers encountered by projects as a result of the policy framework conditions in place. It also proposes some potential policy solutions. This report is thus to some extent the other side of the coin of the technical replication study and therefore shows a number of synergies with it. The report is based on several main sources of information:
• Technological, policy and financial analysis of Smart Cities and Communities FP7 and Horizon 2020 projects in the areas of energy, mobility and transport and ICT, co-financed by the European Commission;
• Insights, shared by Smart Cities projects coordinators during dedicated workshops;
• Insights from other Smart Cities platforms, such as the European Innovation Partnership on Smart Cities and Communities;
• Literature review and other sources.
The report has the following structure:
Chapter 1 introduces the report;
Chapter 2 provides an overview of the policy challenges to be addressed by authorities and policy makers at the three levels of governance in the area of innovation and replication;
Chapter 3 focuses on policy actions needed at national and local level;
Chapter 4 focuses on EU level policy aspects.
You can find the full report at the link below:
Policy and Investment Recommendations for Trustworthy AI
the High-Level Expert Group
26 June 2019
In its various communications on artificial intelligence (AI)1 the European Commission has set out its vision for AI, which is to be trustworthy and human-centric. Three pillars underpin the Commission’s vision: (i) increasing public and private investments in AI to boost its uptake, (ii) preparing for socio-economic changes, and (iii) ensuring an appropriate ethical and legal framework to protect and strengthen European values. To support the implementation of this vision, the Commission established the High-Level Expert Group on Artificial Intelligence (AI HLEG), an independent group mandated with the drafting of two deliverables: a set of AI Ethics Guidelines and a set of Policy and Investment Recommendations.2
In our first deliverable, the Ethics Guidelines for Trustworthy AI3 published on 8 April 2019 (Ethics Guidelines), we stated that AI systems need to be human-centric, with the goal of improving individual and societal well-being, and worthy of our trust. In order to be deemed trustworthy, we put forward that AI systems – including all actors and processes involved therein – should be lawful, ethical and robust. Those Guidelines therefore constituted a first important step in identifying the type of AI that we want and do not want for Europe, but that is not enough to ensure that Europe can also materialise the beneficial impact that Trustworthy AI can bring.
Taking the next step, this document contains our proposed Policy and Investment Recommendations for Trustworthy AI, addressed to EU institutions and Member States. Building on our first deliverable, we put forward 33 recommendations that can guide Trustworthy AI towards sustainability, growth and competitiveness, as well as inclusion – while empowering, benefiting and protecting human beings. We believe that EU institutions and Member States will play a key role in the achievement of these goals, as pivotal players in the data economy, procurers of Trustworthy AI systems and standard-setters of sound governance.
Our recommendations focus on four main areas where we believe Trustworthy AI can help achieve a beneficial impact, starting with humans and society at large (A), and continuing then to focus on the private sector (B), the public sector (C) and Europe’s research and academia (D). In addition, we also address the main enablers needed to facilitate those impacts, focusing on availability of data and infrastructure (E), skills and education (F), appropriate governance and regulation (G), as well as funding and investment (H).
These recommendations should not be regarded as exhaustive, but attempt to tackle the most pressing areas for action with the greatest potential. Europe can distinguish itself from others by developing, deploying, using and scaling Trustworthy AI, which we believe should become the only kind of AI in Europe, in a manner that can enhance both individual and societal well-being.
You can find the original document at the link below:
Ethics Guidelines For Trustworthy AI
European Commission High-Level Expert Group
April 8, 2019
The aim of the Guidelines is to promote Trustworthy AI. Trustworthy AI has three components, which should be met throughout the system’s entire life cycle: (1) it should be lawful, complying with all applicable laws and regulations; (2) it should be ethical, ensuring adherence to ethical principles and values; and (3) it should be robust, both from a technical and social perspective since, even with good intentions, AI systems can cause unintentional harm. Each component in itself is necessary but not sufficient for the achievement of Trustworthy AI. Ideally, all three components work in harmony and overlap in their operation. If, in practice, tensions arise between these components, society should endeavour to align them.
These Guidelines set out a framework for achieving Trustworthy AI. The framework does not explicitly deal with Trustworthy AI’s first component (lawful AI). Instead, it aims to offer guidance on the second and third components: fostering and securing ethical and robust AI. Addressed to all stakeholders, these Guidelines seek to go beyond a list of ethical principles, by providing guidance on how such principles can be operationalised in sociotechnical systems. Guidance is provided in three layers of abstraction, from the most abstract in Chapter I to the most concrete in Chapter III, closing with examples of opportunities and critical concerns raised by AI systems.
I. Based on an approach founded on fundamental rights, Chapter I identifies the ethical principles and their correlated values that must be respected in the development, deployment and use of AI systems.
Key guidance derived from Chapter I:
II. Drawing upon Chapter I, Chapter II provides guidance on how Trustworthy AI can be realised, by listing seven requirements that AI systems should meet. Both technical and non-technical methods can be used for their implementation.
Key guidance derived from Chapter II:
III. Chapter III provides a concrete and non-exhaustive Trustworthy AI assessment list aimed at operationalising the key requirements set out in Chapter II. This assessment list will need to be tailored to the specific use case of the AI system.
Key guidance derived from Chapter III:
A final section of the document aims to concretise some of the issues touched upon throughout the framework, by offering examples of beneficial opportunities that should be pursued, and critical concerns raised by AI systems that should be carefully considered.
While these Guidelines aim to offer guidance for AI applications in general by building a horizontal foundation to achieve Trustworthy AI, different situations raise different challenges. It should therefore be explored whether, in addition to this horizontal framework, a sectorial approach is needed, given the context-specificity of AI systems.
These Guidelines do not intend to substitute any form of current or future policymaking or regulation, nor do they aim to deter the introduction thereof. They should be seen as a living document to be reviewed and updated over time to ensure their continuous relevance as the technology, our social environments, and our knowledge evolve. This document is a starting point for the discussion about “Trustworthy AI for Europe”.
Beyond Europe, the Guidelines also aim to foster research, reflection and discussion on an ethical framework for AI systems at a global level.
You can find the original Guidelines at the link below:
The European Commission’s High-Level Expert Group
18 December 2018
This working document constitutes a draft of the AI Ethics Guidelines produced by the European Commission’s High-Level Expert Group on Artificial Intelligence (AI HLEG), of which a final version is due in March 2019.
Artificial Intelligence (AI) is one of the most transformative forces of our time, and is bound to alter the fabric of society. It presents a great opportunity to increase prosperity and growth, which Europe must strive to achieve. Over the last decade, major advances were realised due to the availability of vast amounts of digital data, powerful computing architectures, and advances in AI techniques such as machine learning. Major AI-enabled developments in autonomous vehicles, healthcare, home/service robots, education or cybersecurity are improving the quality of our lives every day. Furthermore, AI is key for addressing many of the grand challenges facing the world, such as global health and wellbeing, climate change, reliable legal and democratic systems and others expressed in the United Nations Sustainable Development Goals.
Having the capability to generate tremendous benefits for individuals and society, AI also gives rise to certain risks that should be properly managed. Given that, on the whole, AI’s benefits outweigh its risks, we must ensure that we follow the road that maximises the benefits of AI while minimising its risks. To ensure that we stay on the right track, a human-centric approach to AI is needed, forcing us to keep in mind that the development and use of AI should not be seen as an end in itself, but as a means to increase human well-being. Trustworthy AI will be our north star, since human beings will only be able to confidently and fully reap the benefits of AI if they can trust the technology.
Trustworthy AI has two components: (1) it should respect fundamental rights, applicable regulation and core principles and values, ensuring an “ethical purpose” and (2) it should be technically robust and reliable since, even with good intentions, a lack of technological mastery can cause unintentional harm.
These Guidelines therefore set out a framework for Trustworthy AI:
– Chapter I deals with ensuring AI’s ethical purpose, by setting out the fundamental rights, principles and values that it should comply with.
– From those principles, Chapter II derives guidance on the realisation of Trustworthy AI, tackling both ethical purpose and technical robustness. This is done by listing the requirements for Trustworthy AI and offering an overview of technical and non-technical methods that can be used for its implementation.
– Chapter III subsequently operationalises the requirements by providing a concrete but non-exhaustive assessment list for Trustworthy AI. This list is then adapted to specific use cases.
In contrast to other documents dealing with ethical AI, the Guidelines hence do not aim to provide yet another list of core values and principles for AI, but rather offer guidance on the concrete implementation and operationalisation thereof into AI systems. Such guidance is provided in three layers of abstraction, from most abstract in Chapter I (fundamental rights, principles and values), to most concrete in Chapter III (assessment list).
The Guidelines are addressed to all relevant stakeholders developing, deploying or using AI, encompassing companies, organisations, researchers, public services, institutions, individuals or other entities. In the final version of these Guidelines, a mechanism will be put forward to allow stakeholders to voluntarily endorse them.
Importantly, these Guidelines are not intended as a substitute for any form of policymaking or regulation (to be dealt with in the AI HLEG’s second deliverable: the Policy & Investment Recommendations, due in May 2019), nor do they aim to deter the introduction thereof. Moreover, the Guidelines should be seen as a living document that needs to be regularly updated over time to ensure continuous relevance as the technology, and our knowledge thereof, evolve. This document should therefore be a starting point for the discussion on “Trustworthy AI made in Europe”.
While Europe can only broadcast its ethical approach to AI when it is competitive at global level, an ethical approach to AI is key to enabling responsible competitiveness, as it will generate user trust and facilitate broader uptake of AI. These Guidelines are not meant to stifle AI innovation in Europe, but instead aim to use ethics as inspiration to develop a unique brand of AI, one that aims at protecting and benefiting both individuals and the common good. This allows Europe to position itself as a leader in cutting-edge, secure and ethical AI. Only by ensuring trustworthiness will European citizens fully reap AI’s benefits.
Finally, beyond Europe, these Guidelines also aim to foster reflection and discussion on an ethical framework for AI at global level.
You can find the original draft at the link below:
Artificial intelligence: Anticipating Its Impact On Jobs To Ensure A Fair Transition
European Economic and Social Committee
Rapporteur: Franca SALIS-MADINIER
1. Conclusions and recommendations
1.1. Artificial intelligence (AI) and robotics will expand and amplify the impact of the digitalisation of the economy on labour markets. Technological progress has always affected work and employment, requiring new forms of social and societal management. The EESC believes that technological development can contribute to economic and social progress; however, it feels that it would be a mistake to overlook its overall impact on society. In the world of work, AI will expand and amplify the scope of job automation. This is why the EESC would like to give its input to efforts to lay the groundwork for the social transformations which will go hand in hand with the rise of AI and robotics, by reinforcing and renewing the European social model.
1.2. The EESC flags up the potential of AI and its applications, particularly in the areas of healthcare, security in the transport and energy sectors, combating climate change and anticipating threats in the field of cybersecurity. The European Union, governments and civil society organisations have a key role to play when it comes to fully tapping the potential advantages of AI, particularly for people with disabilities or reduced mobility, the elderly and people with chronic health issues.
1.3. However, the EU has insufficient data on the digital economy and the resulting social transformation. The EESC recommends improving statistical tools and research, particularly on AI, the use of industrial and service robots, the Internet of Things and new economic models (the platform-based economy and new forms of employment and work).
1.4. The EESC calls on the European Commission to promote and support studies carried out by European sector-level social dialogue committees on the sector-specific impact of AI and robotics and, more broadly, of the digitalisation of the economy.
1.5. It is acknowledged that AI and robotics will displace and transform jobs, by eliminating some and creating others. Whatever the outcome, the EU must guarantee access to social protection for all workers, employees and self-employed or bogus self-employed persons, in line with the European Pillar of Social Rights.
You can find the full document at the link below: