Feasibility Study on AI


Council of Europe

Ad Hoc Committee on AI

2020

Introduction

“As noted in various Council of Europe documents, including reports recently adopted by the Parliamentary Assembly (PACE), AI systems are substantially transforming individual lives and have a profound impact on the fabric of society and the functioning of its institutions. Their use has the capacity to generate substantive benefits in numerous domains, such as healthcare, transport, education and public administration, generating promising opportunities for humanity at large. At the same time, the development and use of AI systems also entails substantial risks, in particular in relation to interference with human rights, democracy and the rule of law, the core elements upon which our European societies are built.

AI systems should be seen as “socio-technical systems”, in the sense that the impact of an AI system – whatever its underlying technology – depends not only on the system’s design, but also on the way in which the system is developed and used within a broader environment, including the data used, its intended purpose, functionality and accuracy, the scale of deployment, and the broader organisational, societal and legal context in which it is used. The positive or negative consequences of AI systems depend also on the values and behaviour of the human beings that develop and deploy them, which leads to the importance of ensuring human responsibility. There are, however, some distinct characteristics of AI systems that set them apart from other technologies in relation to both their positive and negative impact on human rights, democracy and the rule of law.

First, the scale, connectedness and reach of AI systems can amplify certain risks that are also inherent in other technologies or human behaviour. AI systems can analyse an unprecedented amount of fine-grained data (including highly sensitive personal data) at a much faster pace than humans. This ability can lead AI systems to be used in a way that perpetuates or amplifies unjust bias, also based on new discrimination grounds in cases of so-called “proxy discrimination”. The increased prominence of proxy discrimination in the context of machine learning may raise interpretive questions about the distinction between direct and indirect discrimination or, indeed, the adequacy of this distinction as it is traditionally understood. Moreover, AI systems are subject to statistical error rates. Even if the error rate of a system applied to millions of people is close to zero, thousands of people can still be adversely impacted due to the scale of deployment and interconnectivity of the systems. On the other hand, the scale and reach of AI systems also imply that they can be used to mitigate certain risks and biases that are also inherent in other technologies or human behaviour, and to monitor and reduce human error rates.

Second, the complexity or opacity of many AI systems (in particular in the case of machine learning applications) can make it difficult for humans, including system developers, to understand or trace the system’s functioning or outcome. This opacity, in combination with the involvement of many different actors at different stages during the system’s lifecycle, further complicates the identification of the agent(s) responsible for a potential negative outcome, hence reducing human responsibility and accountability.

Third, certain AI systems can re-calibrate themselves through feedback and reinforcement learning. However, if an AI system is re-trained on data resulting from its own decisions which contains unjust biases, errors, inaccuracies or other deficiencies, a vicious feedback loop may arise which can lead to a discriminatory, erroneous or malicious functioning of the system and which can be difficult to detect.”
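Two of the characteristics described above lend themselves to a quick numerical illustration: the impact of near-zero error rates at deployment scale, and the feedback loop that can arise when a system is re-trained on its own decisions. The following toy sketch is our illustration, not part of the study; every rate and number in it is an assumption chosen only to make the two effects visible.

```python
# Toy illustration (not from the feasibility study; all rates are assumed).

# (1) Error rates at scale: a 0.1% error rate across 10 million decisions
# still translates into thousands of adversely affected people.
population = 10_000_000
error_rate = 0.001
print(f"People adversely affected: {int(population * error_rate):,}")  # 10,000

# (2) Feedback loop: a screening model is periodically re-trained on its own
# past decisions. Positive outcomes for group B are systematically
# under-recorded (an assumed stand-in for biased feedback data), so each
# re-training generation widens the gap between the groups.
rates = {"A": 0.50, "B": 0.48}   # initial approval rates per group (assumed)
observation_bias = 0.96          # fraction of B's approvals that get recorded

for generation in range(5):
    rates["B"] *= observation_bias  # the re-trained model reproduces what it saw
    print(f"generation {generation}: approval gap A-B = {rates['A'] - rates['B']:.3f}")
```

Even in this crude model the initial gap compounds with every generation, which is exactly why the study stresses that such loops can be difficult to detect.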

 

You can access the original study via the link below:

https://rm.coe.int/cahai-2020-23-final-eng-feasibility-study-/1680a0c6da

Preventing Discrimination Caused by the Use of Artificial Intelligence 

 


Committee on Equality and Non-Discrimination
Rapporteur: Christophe LACROIX, Belgium, 2020

 

Summary

Artificial intelligence (AI), by allowing massive upscaling of automated decision-making processes, creates opportunities for efficiency gains – but in parallel, it can perpetuate and exacerbate discrimination. Public and private sector uses of AI have already been shown to have a discriminatory impact, while information flows tend to highlight extremes and foster hate. The use of biased datasets, design that fails to integrate the need to protect human rights, the lack of transparency of algorithms and of accountability for their impact, as well as a lack of diversity in AI teams, all contribute to this phenomenon.

States must act now to prevent AI from having a discriminatory impact in our societies, and should work together to develop international standards in this field. 

Parliaments must moreover play an active role in overseeing the use of AI-based technologies and in ensuring that it is subject to public scrutiny. Domestic anti-discrimination legislation should be reviewed and amended to ensure that victims of discrimination caused by the use of AI have access to an effective remedy, and national equality bodies should be effectively equipped to deal with the impact of AI-based technologies.

Respect for equality and non-discrimination must be integrated from the outset in the design of AI-based systems, and tested before their deployment. The public and private sectors should actively promote diversity and interdisciplinary approaches in technology studies and professions.

 

The original report is published as PACE Doc. 15151 (PDF).

Assessment List for Trustworthy AI

 


 


European Commission

High-Level Expert Group on AI

August 2020

 

Fundamental Rights

Fundamental rights encompass rights such as human dignity and non-discrimination, as well as rights in relation to data protection and privacy, to name just some examples. Prior to self-assessing an AI system with this Assessment List, a fundamental rights impact assessment (FRIA) should be performed. 

A FRIA could include questions such as the following, drawing on specific articles in the Charter of Fundamental Rights of the EU, the European Convention on Human Rights (ECHR) and its protocols, and the European Social Charter.

1. Does the AI system potentially negatively discriminate against people on the basis of any of the following grounds (non-exhaustively): sex, race, colour, ethnic or social origin, genetic features, language, religion or belief, political or any other opinion, membership of a national minority, property, birth, disability, age or sexual orientation?

Have you put in place processes to test and monitor for potential negative discrimination (bias) during the development, deployment and use phases of the AI system? (An illustrative sketch of such a check follows this list.)

Have you put in place processes to address and rectify potential negative discrimination (bias) in the AI system?

2. Does the AI system respect the rights of the child, for example with respect to child protection and taking the child’s best interests into account?

Have you put in place processes to address and rectify potential harm to children by the AI system?

Have you put in place processes to test and monitor for potential harm to children during the development, deployment and use phases of the AI system? 

3. Does the AI system protect personal data relating to individuals in line with the GDPR?

Have you put in place processes to assess in detail the need for a data protection impact assessment, including an assessment of the necessity and proportionality of the processing operations in relation to their purpose, with respect to the development, deployment and use phases of the AI system?

Have you put in place measures envisaged to address the risks, including safeguards, security measures and mechanisms to ensure the protection of personal data with respect to the development, deployment and use phases of the AI system? 

4. Does the AI system respect the freedom of expression and information and/or freedom of assembly and association?

Have you put in place processes to test and monitor for potential infringement on freedom of expression and information, and/or freedom of assembly and association, during the development, deployment and use phases of the AI system? 

Have you put in place processes to address and rectify potential infringement on freedom of expression and information, and/or freedom of assembly and association, in the AI system?
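The assessment list asks whether bias-testing processes exist but does not prescribe what they should compute. As one hypothetical illustration of the monitoring step in question 1, the sketch below checks two common group-fairness metrics over a batch of binary decisions; the names, data and choice of metrics are our assumptions, not ALTAI requirements.

```python
# Hypothetical bias-monitoring sketch (illustrative only; ALTAI does not
# prescribe specific metrics). Computes the demographic-parity gap and the
# equal-opportunity (true-positive-rate) gap across groups.
from typing import Sequence

def rate(flags: Sequence[bool]) -> float:
    return sum(flags) / len(flags) if flags else 0.0

def bias_report(y_pred: Sequence[int], y_true: Sequence[int],
                group: Sequence[str]) -> dict:
    groups = sorted(set(group))
    # Share of positive decisions per group (demographic parity).
    positive = {g: rate([p == 1 for p, gr in zip(y_pred, group) if gr == g])
                for g in groups}
    # True-positive rate per group (equal opportunity).
    tpr = {g: rate([p == 1 for p, t, gr in zip(y_pred, y_true, group)
                    if gr == g and t == 1]) for g in groups}
    return {
        "demographic_parity_gap": max(positive.values()) - min(positive.values()),
        "equal_opportunity_gap": max(tpr.values()) - min(tpr.values()),
    }

# Example run; in practice such a check would run repeatedly over the
# development, deployment and use phases, with a tolerance chosen per application.
print(bias_report(y_pred=[1, 0, 1, 1, 0, 0], y_true=[1, 0, 1, 0, 1, 0],
                  group=["A", "A", "A", "B", "B", "B"]))
```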

 

You can find the original document at the link below:

https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=68342

White Paper on Artificial Intelligence: A European approach to excellence and trust

 


Brussels, 19.02.2020

 

Abstract

Artificial Intelligence is developing fast. It will change our lives by improving healthcare (e.g. making diagnosis more precise, enabling better prevention of diseases), increasing the efficiency of farming, contributing to climate change mitigation and adaptation, improving the efficiency of production systems through predictive maintenance, increasing the security of Europeans, and in many other ways that we can only begin to imagine. At the same time, Artificial Intelligence (AI) entails a number of potential risks, such as opaque decision-making, gender-based or other kinds of discrimination, intrusion in our private lives or being used for criminal purposes. 

Against a background of fierce global competition, a solid European approach is needed, building on the European strategy for AI presented in April 2018. To address the opportunities and challenges of AI, the EU must act as one and define its own way, based on European values, to promote the development and deployment of AI. 

The Commission is committed to enabling scientific breakthrough, to preserving the EU’s technological leadership and to ensuring that new technologies are at the service of all Europeans – improving their lives while respecting their rights. 

Commission President Ursula von der Leyen announced in her political Guidelines a coordinated European approach on the human and ethical implications of AI as well as a reflection on the better use of big data for innovation. 

Thus, the Commission supports a regulatory and investment-oriented approach with the twin objective of promoting the uptake of AI and of addressing the risks associated with certain uses of this new technology. The purpose of this White Paper is to set out policy options on how to achieve these objectives. It does not address the development and use of AI for military purposes.

The Commission invites Member States, other European institutions, and all stakeholders, including industry, social partners, civil society organisations, researchers, the public in general and any interested party, to react to the options below and to contribute to the Commission’s future decision-making in this domain.

You can access the full paper via the link below:

https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf

Explaining AI Decisions

 


 

ICO & Alan Turing Institute

2 December 2019

The ICO and The Alan Turing Institute (The Turing) have launched a consultation on our co-badged guidance, Explaining decisions made with AI. This guidance aims to give organisations practical advice to help explain the processes, services and decisions delivered or assisted by AI, to the individuals affected by them.  

Increasingly, organisations are using artificial intelligence (AI) to support, or to make, decisions about individuals. If this is something you do, or something you are thinking about, this guidance is for you.

We want to ensure this guidance is practically applicable in the real world, so that organisations can easily use it when developing AI systems. This is why we are requesting feedback.

The guidance consists of three parts. Depending on your level of expertise, and the make-up of your organisation, some parts may be more relevant to you than others. You can pick and choose the parts that are most useful.

The survey will ask you about all three parts, but you can answer as few or as many questions as you like.

Part 1: The basics of explaining AI defines the key concepts and outlines a number of different types of explanations. It will be relevant for all members of staff involved in the development of AI systems.

Part 2: Explaining AI in practice helps you with the practicalities of explaining these decisions and providing explanations to individuals. This will primarily be helpful for the technical teams in your organisation; however, your DPO and compliance team will also find it useful.

Part 3: What explaining AI means for your organisation goes into the various roles, policies, procedures and documentation that you can put in place to ensure your organisation is set up to provide meaningful explanations to affected individuals. This is primarily targeted at your organisation’s senior management team; however, your DPO and compliance team will also find it useful.
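To give a flavour of the practicalities Part 2 deals with, here is a minimal, hypothetical sketch of a rationale-style explanation for a simple linear scoring model. The feature names, weights and wording are invented for illustration; the guidance itself is technique-agnostic.

```python
# Hypothetical reason-code sketch for a linear scoring model (invented names
# and weights; not an API from the ICO/Turing guidance).
weights = {"income": 0.4, "debt_ratio": -0.7, "years_employed": 0.2}

def explain(applicant: dict, top_n: int = 2) -> list[str]:
    # Rank features by their signed contribution to this applicant's score,
    # then phrase the most negative contributions as individual-level reasons.
    contributions = {f: w * applicant[f] for f, w in weights.items()}
    ranked = sorted(contributions.items(), key=lambda kv: kv[1])
    return [f"{feature} lowered your score by {abs(value):.2f}"
            for feature, value in ranked[:top_n] if value < 0]

print(explain({"income": 0.3, "debt_ratio": 0.8, "years_employed": 0.1}))
# -> ['debt_ratio lowered your score by 0.56']
```

For opaque models, the same individual-level framing is typically produced with post-hoc attribution techniques rather than read directly off the weights.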

You can send your thoughts via email to [email protected]

 

You can find the original guidance at the link below:

https://ico.org.uk/media/about-the-ico/consultations/2616441/explain-about-this-guidance.pdf

Recommendations of the Data Ethics Commission for the Federal Government’s Strategy on Artificial Intelligence

 


 


9 October 2018

 

The Data Ethics Commission is pleased that the Federal Government is developing a strategy on artificial intelligence. At its constitutive meeting on 4 and 5 September 2018, the Data Ethics Commission discussed the Federal Government’s policy paper for such a strategy. The Commission recommends that the Federal Government should add the following points to its strategy:

(1) the objective “Upholding the ethical and legal principles based on our liberal democracy throughout the entire process of developing and applying artificial intelligence”

 (2) the area of action “Promoting the ability of individuals and society as a whole to understand and reflect critically in the information society”

I.

The term “artificial intelligence” (AI) is used in the media and in general discourse to refer to different things. The Federal Government’s policy paper does not specify the technologies covered in the paper. This information should be added.

In this context, we understand “artificial intelligence” as a collective term for technologies and their applications which process potentially very large and heterogeneous data sets using complex methods modelled on human intelligence to arrive at a result which may be used in automated applications. The most important building blocks of AI as part of computer science are sub-symbolic pattern recognition, machine learning, computerized knowledge representation and knowledge processing, which encompasses heuristic search, inference and planning.

The range of applications using AI today is already enormous. These applications range from the simple calculation of travel routes to image and language recognition and generation to highly complex environments for making decisions and predictions and for exerting influence. The most important applications involve systems which recognize language and images; collaborative robots and other automated systems (cars, aircraft, trains); multi-agent systems; chatbots; and engineered environments with ambient intelligence. We expect increasingly autonomous and comprehensive applications to be developed which will pervade all areas of life and are capable of automating, (partly) replacing and far outperforming more and more human activity in ever broader fields of action.

Questions with ethical and legal relevance also arise in connection with “simple” rule-based systems whose algorithms are “manually” defined by experts. Such systems do not constitute AI as it is generally understood. It is important for the Federal Government’s strategy on AI to cover these processes as well.

II.

The diversity of possible AI applications and the complexity of the relevant technologies make it especially challenging to design them in compliance with ethics and the law and to regulate this compliance. As more and more decision-making processes are shifting from humans as the subject of action to AI-driven systems, new questions arise as to who is responsible for the development, programming, introduction, use, steering, monitoring, liability and external review of AI and applications based on it. Further, the specific functioning depends on the selection and quality of the data entered and/or used to “train” the application. Simply ignoring certain types of data and using poorly prepared data can have ethical consequences extending to systematic discrimination or results antagonistic to plurality. In this context, more support should be given to research into modern methods of anonymization and into generating synthetic training data, also in order to increase the amount of data that can be processed for AI technologies without threatening fundamental rights.
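As a concrete, deliberately simplistic illustration of the kind of property anonymization research works with, the sketch below checks k-anonymity: every combination of quasi-identifying attributes must be shared by at least k records before release. The records, field names and the choice of criterion are our assumptions, not the Commission’s.

```python
# Minimal k-anonymity check (illustrative; the Commission's text does not
# endorse any particular anonymization method).
from collections import Counter

def is_k_anonymous(records: list[dict], quasi_identifiers: list[str], k: int) -> bool:
    # Count how often each combination of quasi-identifier values occurs.
    combos = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(combos.values()) >= k

records = [
    {"age_band": "30-39", "zip3": "101", "diagnosis": "flu"},
    {"age_band": "30-39", "zip3": "101", "diagnosis": "cold"},
    {"age_band": "40-49", "zip3": "102", "diagnosis": "flu"},
]
print(is_k_anonymous(records, ["age_band", "zip3"], k=2))  # False: one unique combo
```

Synthetic training data pursues the complementary goal: generating records that preserve aggregate statistics without corresponding to any real person.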

The data needed for some AI applications are highly concentrated among a small number of companies which also possess a high level of technological expertise. This raises the question as to whether and how access to non-personal data in private hands should be regulated by law.

Finally, with regard to the democratic process, it should be noted that technology which is increasingly able to imitate human behaviour in a remarkably convincing way can also be easily used to influence social trends and political opinions.

Ethical considerations should be addressed throughout the entire process of developing and applying AI, using the approach “ethics by, in and for design” and as the trademark of “AI made in Europe”. This includes research, development and production of AI, as well as the use, operation, monitoring and governance of AI-based applications. For the Data Ethics Commission, ethics does not mean primarily the definition of limits; on the contrary, when ethical considerations are addressed from the start of the development process, they can make a powerful contribution to design, supporting advisable and desirable applications.

It is also necessary to consider interactions between technology, users and society (“the AI ecosystem”). Within this ecosystem, it is necessary to ensure sufficient transparency, accountability, freedom from discrimination and the ability to review those automated processes which prepare decisions or draw conclusions which may be carried out without additional human input. This is the only way to generate trust in the use and results of algorithm-driven processes. The policy paper (p. 9) rightly demands these measures for algorithms used in public administration. But the same principles should apply in an appropriate way to private parties as well. Measures for quality assurance are also needed which can be supported in part by independent third parties and in part by automated processes. It is also necessary to ensure that the persons affected and the supervisory authorities have appropriate and effective possibilities to intervene as well as access to effective legal remedies.

The most important standard for dealing responsibly with AI is first of all the Constitution, in particular the fundamental rights and the principles of the rule of law, the welfare system and democracy. This also includes individuals’ right to self-determination, including control over their personal data, which also requires companies to inform their customers how they use their data; respect for individual user decisions concerning personal use of an application; protection against unfair discrimination; and the possibility to review machine-made decisions effectively. We also need legal provisions which clearly define the extent of responsibility for developing and applying AI-based technologies according to ethical, legal and economic principles. This also applies to compensation for damage and the enforcement of public-law obligations with regard to AI.

A wide range of control mechanisms is conceivable for embedding ethical and legal principles in the process of designing and applying these technologies. These mechanisms must be decided at national and European level in a democratic process. The use of AI by government actors must be subject to special oversight. Possibilities for supervision include the targeted (material or other) promotion of applications which comply with the Constitution; certification and standards; official authorization and supervision; and institutions to uphold fundamental rights and ethical rules related to AI, as well as binding law.

With this in mind, the Data Ethics Commission recommends that the Federal Government’s strategy on artificial intelligence should promote and demand attention to ethical and legal principles throughout the entire process of developing and applying AI, and that the strategy should include this as an additional objective. The strategy’s areas of action should be defined with this objective in mind.

III.

Information and technologies of all kinds pervade every level of society and our lives to a degree never before known. They increasingly influence social interactions and discourse as structurally relevant elements of democracy. The rapid development of new applications for AI also demands a constant process of critical examination. These profound and diverse changes are significant not only for individual expression, but also for our life in society. They make a discourse which reinforces freedom and democracy more necessary now than ever. Among other things, we need a framework in which individuals and institutional actors can acquire sufficient digital and media literacy and the ability to reflect critically on how to deal with technical innovation.

The Federal Government’s policy paper already calls for implementing its strategy on artificial intelligence in constant dialogue with representatives of the research community, civil society and business and industry, as well as with policy-makers, in order to establish a culture of AI in Germany which promotes trust. The Data Ethics Commission underscores the importance of these measures. It also recommends adding to the AI strategy a separate area of action: “Promoting the ability of individuals and society as a whole to understand and reflect critically in the information society”. This is intended to ensure that individuals and institutional actors acquire sufficient digital and media literacy and the ability to reflect critically on how to deal with AI. Such abilities are essential for society to conduct an objective, informed and nuanced examination which can help promote trust in the use of AI. However, the Data Ethics Commission believes a broader approach will be needed than is currently described in the Federal Government’s policy paper.

Ways to promote digital and media literacy and critical reflection range from offering comprehensive, objective information in campaigns (e.g. to explain realistic application scenarios), to teaching media literacy at school and in adult education courses, to using and promoting technologies to enforce the law and uphold ethical principles in the world of technology. The media and institutions of media supervision also have an important role to play in this context: Their role is not only to inform society about new technologies and examine technological progress critically, but also to provide new forums for debate.

Investment in technology impact assessment must increase to the same extent as technologies such as AI are applied in our society. For example, more research and development should be conducted on data portability, interoperability and consumer enabling technologies; these include AI applications whose primary aim is to help consumers make everyday decisions.

A balance must also be found between the state’s responsibility for creating and enforcing framework conditions, which ensures trust, and the freedom, autonomy and responsibility of users and others affected by the new technologies on the one hand, and the forces of the market and competition on the other. This balance must be discussed and determined by society in light of these changes. The growing economic strength of those companies which play a major role in the development of AI must not result in research and civil society becoming increasingly dependent on funding from precisely these companies. Government must enable research and civil society to make independent and competence-based contributions to this important societal discussion.

As modern technologies, including AI, evolve and relieve humans of certain tasks, we not only gain new skills but also lose existing ones. This demands a discussion of our responsibility to preserve and develop certain skills so that the next generation remains independent. We therefore also need to discuss how to define, and what is required for, the sovereignty of society as a whole.

The Data Ethics Commission therefore recommends including another area of action in the strategy focused on creating appropriate framework conditions to promote the ability of individuals and society as a whole to understand and reflect critically in the information society.

IV.

Progress and responsible innovation make a major contribution to the prosperity of society. They offer enormous opportunities which we should welcome and promote, but they also come with risks. These opportunities can make a lasting contribution to freedom, justice and prosperity above all when people’s individual rights are protected and social cohesion is strengthened. With this in mind, the Data Ethics Commission strongly recommends adding the two items referred to at the beginning of this document to the Federal Government’s strategy on artificial intelligence.

 

You can access the original document via the link below:

https://www.bmjv.de/SharedDocs/Downloads/DE/Ministerium/ForschungUndWissenschaft/DEK_Empfehlungen_englisch.html;jsessionid=DDAF76836371D0CC6F04A232F117B72F.1_cid324?nn=11678512

Civil and Military Drones

 


 

European Parliament, October 2019

 

SUMMARY

Often labelled as one of today’s main disruptive technologies, drones have indeed earned this label by prompting a fundamental rethinking of business models, existing laws, safety and security standards, the future of transport, and modern warfare. The European Union (EU) recognises the opportunities that drones offer and sees them as opening a new chapter in the history of aerospace. The EU aviation strategy provides guidance for exploring new and emerging technologies, and encourages the integration of drones into business and society so as to maintain a competitive EU aviation industry.

Ranging from insect-sized to several tonnes in weight, drones are extremely versatile and can perform a very large variety of functions, from filming to farming, and from medical aid to search and rescue operations. Among the advantages of civil and military drones are their relative low cost, reach, greater work productivity and capacity to reduce risk to human life. These features have led to their mass commercialisation and integration into military planning. Regulatory and oversight challenges remain, however, particularly regarding dual-use drones – civil drones that can be easily turned into armed drones or weaponised for criminal purposes.

At EU level, the European Commission has been empowered to regulate civil drones and the European Aviation Safety Agency to assist with ensuring a harmonised regulatory framework for safe drone operations. The latest EU legislation has achieved the highest ever safety standards for drones. Another challenge remaining for regulators, officials and manufacturers alike is the need to build the trust of citizens and consumers. Given that drones have been in the public eye more often for their misuse than their accomplishments, transparency and effective communication are imperative to prepare citizens for the upcoming drone age.

 

You can access the original document via the link below:

http://www.europarl.europa.eu/RegData/etudes/BRIE/2019/642230/EPRS_BRI(2019)642230_EN.pdf

The Ethics of Big Data

 


 

2017

 

Abstract

 

This study, carried out to support the activities of the EESC, explores the ethical dimensions of Big Data in an attempt to balance them with the need for economic growth within the EU. In the first part of the study, an in-depth review of the available literature was carried out to highlight the ethical issues connected with Big Data, and five actions were devised as tools to strike the balance described above. The second phase of the study involved interviews with a number of key stakeholders and a survey that gathered information on general knowledge of the issues connected with the use of Big Data. Feedback on the proposed balancing actions was also sought and taken into consideration in the final analysis. The attitudes that emerged from the interviews and the survey most often ranged from concerned to worried, while the benefits of Big Data were seldom mentioned by respondents. Both the benefits and the risks are intrinsic to Big Data, and they are discussed more broadly throughout the study.

 

You can access the original, full study via the link below:

https://www.eesc.europa.eu/sites/default/files/resources/docs/qe-04-17-306-en-n.pdf

The US Air Force AI Strategy

 


 

2019
The United States Air Force Artificial Intelligence Annex

to The Department of Defense Artificial Intelligence Strategy

 

Executive Summary

 

Artificial intelligence is poised to change how warfare is conducted in the 21st century. The comparative advantage currently enjoyed by the Air Force will either erode or strengthen depending on the manner in which we adopt these technologies.

Unlike previous technological advances, AI has already proliferated into many commercial enterprises and, as such, cannot be governmentally controlled or contained. Just as the commercial sector has rushed to embrace these technologies, our global competitors are overtly accelerating the integration and weaponization of AI as an effective measure to counter our traditional strengths and exploit our perceived weaknesses. This is especially true for our Air Force, where our ability to execute missions across Air, Space, and Cyberspace relies on insights driven by data and information. Depending on the strategic choices we make now, our ability to operate around the globe may be blunted or bolstered by the adoption of—or hardening against—artificial intelligence.

The Air Force is charged to provide the nation with Air and Space Superiority; Global Strike; Rapid Global Mobility; Intelligence, Surveillance, and Reconnaissance; and Command and Control. AI is a capability that will underpin our ability to compete, deter, and win across all five of these diverse missions. It is crucial to fielding tomorrow’s Air Force faster and smarter, executing multi-domain operations in the high-end fight, confronting threats below the level of open conflict, and partnering with our allies around the globe.

This Annex and associated Appendix serve as the framework for aligning our efforts with the National Defense Strategy and the Department of Defense Artificial Intelligence Strategy as executed by the Joint Artificial Intelligence Center (JAIC). It details the fundamental principles, enabling functions, and objectives necessary to effectively manage, maneuver, and lead in the digital age. Doing so is contingent upon our ability to operationalize AI for support and warfighting operations alike.

Artificial intelligence is not the solution to every problem. Its adoption must be thoughtfully considered in accordance with our ethical, moral, and legal obligations to the Nation. As stewards of this great responsibility, Airmen should execute their assigned missions with a focus on emerging technologies, but also with an understanding that everything we do is a human endeavor.

In this return to great power competition, the United States Air Force will harness and wield the most representative forms of AI across all mission-sets, to better enable outcomes with greater speed and accuracy, while optimizing the abilities of each and every Airman. We do this to best protect and defend our Nation and its vital interests, while always remaining accountable to the American public.

 

You can find the original, full Annex at the link below:

https://www.af.mil/Portals/1/documents/5/USAF-AI-Annex-to-DoD-AI-Strategy.pdf

The making of a smart city: policy recommendations

 


EU Smart Cities Information Systems

 

About the Report

The purpose of this report is to share key lessons learned and to provide policy recommendations on how to support the development of Smart Cities projects. It is the third in a series of SCIS reports produced within the Smart Cities Information Systems (SCIS) project, which aims to support and stimulate the replication of successful innovative technologies tested through EU-funded projects. The SCIS project brings together project developers, cities, institutions, industry and experts from across Europe to exchange data, experience and know-how, and to collaborate on the creation of smart cities and an energy-efficient urban environment.

This report presents policy recommendations for local, national and EU-level policy makers. It covers the main areas influenced by policy, namely the regulatory environment and finance. The report also offers a final specialist section dedicated to innovation policy for EU authorities related to Smart Cities. Smart City planning and project implementation issues, which are the domain of city planners and promoters, are covered by the SCIS report on technology replication. This report complements the SCIS information database, which focuses on the projects themselves, by presenting an analysis of the barriers that projects encountered because of the policy framework conditions in place, and it proposes some potential policy solutions. This report is thus, to some extent, the other side of the coin of the technical replication study, and therefore shows a number of synergies with it. The report is based on several main sources of information:

• Technological, policy and financial analysis of Smart Cities and Communities FP7 and Horizon 2020 projects in the areas of energy, mobility and transport, and ICT, co-financed by the European Commission;

• Insights shared by Smart Cities project coordinators during dedicated workshops;

• Insights from other Smart Cities platforms, such as the European Innovation Partnership on Smart Cities and Communities;

• Literature review and other sources.

 

The report has the following structure:

Chapter 1 introduces the report;

Chapter 2 provides an overview of the policy challenges to be addressed by authorities and policy makers at the three levels of governance in the area of innovation and replication;

Chapter 3 focuses on policy actions needed at national and local level;

Chapter 4 focuses on EU-level policy aspects.

 

You can find the full report at the link below:

https://www.smartcities-infosystem.eu/sites/default/files/document/the_making_of_a_smart_city_-_policy_recommendations.pdf