New Rules For Artificial Intelligence – Questions and Answers



European Commission



  1. Why do we need to regulate the use of Artificial Intelligence technology?

The potential benefits of AI for our societies are manifold, from improved medical care to better education. Faced with the rapid technological development of AI, the EU must act as one to harness these opportunities. While most AI systems will pose low to no risk, certain AI systems create risks that need to be addressed to avoid undesirable outcomes. For example, the opacity of many algorithms may create uncertainty and hamper the effective enforcement of existing legislation on safety and fundamental rights. Responding to these challenges, legislative action is needed to ensure a well-functioning internal market for AI systems in which both benefits and risks are adequately addressed. This includes applications such as biometric identification systems or AI decisions touching on important personal interests, for instance in the areas of recruitment, education, healthcare or law enforcement. The Commission's proposal for a regulatory framework on AI aims to ensure the protection of fundamental rights and user safety, as well as trust in the development and uptake of AI.


  2. Which risks will the new AI rules address?

The uptake of AI systems has a strong potential to bring societal benefits and economic growth and to enhance EU innovation and global competitiveness. However, the specific characteristics of certain AI systems may create new risks related to user safety and fundamental rights. This leads to legal uncertainty for companies and, owing to the lack of trust, potentially slower uptake of AI technologies by businesses and citizens. Disparate regulatory responses by national authorities would risk fragmenting the internal market.


  3. What are the risk categories?

The Commission proposes a risk-based approach, with four levels of risk:

Unacceptable risk: A very limited set of particularly harmful uses of AI that contravene EU values because they violate fundamental rights (e.g. social scoring by governments, exploitation of vulnerabilities of children, use of subliminal techniques, and – subject to narrow exceptions – live remote biometric identification systems in publicly accessible spaces used for law enforcement purposes) will be banned.

High-risk: A limited number of AI systems defined in the proposal, creating an adverse impact on people's safety or their fundamental rights (as protected by the EU Charter of Fundamental Rights), are considered high-risk. Annexed to the proposal is the list of high-risk AI systems, which can be reviewed to align with the evolution of AI use cases (future-proofing). These also include safety components of products covered by sectoral Union legislation; they will always be high-risk when subject to third-party conformity assessment under that sectoral legislation.

In order to ensure trust and a consistently high level of protection of safety and fundamental rights, mandatory requirements are proposed for all high-risk AI systems. Those requirements cover the quality of the data sets used; technical documentation and record-keeping; transparency and the provision of information to users; human oversight; and robustness, accuracy and cybersecurity. In the case of a breach, the requirements will give national authorities access to the information needed to investigate whether the use of the AI system complied with the law. The proposed framework is consistent with the Charter of Fundamental Rights of the European Union and in line with the EU's international trade commitments.

Limited risk: For certain AI systems specific transparency requirements are imposed, for example where there is a clear risk of manipulation (e.g. via the use of chatbots). Users should be aware that they are interacting with a machine.

Minimal risk: All other AI systems can be developed and used under existing legislation without additional legal obligations. The vast majority of AI systems currently used in the EU fall into this category. Providers of those systems may voluntarily choose to apply the requirements for trustworthy AI and adhere to voluntary codes of conduct.


  4. What are the obligations for providers of high-risk AI systems?

Before placing a high-risk AI system on the EU market or otherwise putting it into service, providers must subject it to a conformity assessment. This will allow them to demonstrate that their system complies with the mandatory requirements for trustworthy AI (e.g. data quality, documentation and traceability, transparency, human oversight, accuracy and robustness). If the system itself or its purpose is substantially modified, the assessment will have to be repeated. For certain AI systems, an independent notified body will also have to be involved in this process. AI systems that are safety components of products covered by sectoral Union legislation will always be deemed high-risk when subject to third-party conformity assessment under that sectoral legislation. For biometric identification systems, a third-party conformity assessment is always required.

Providers of high-risk AI systems will also have to implement quality and risk management systems to ensure their compliance with the new requirements and minimise risks for users and affected persons, even after a product is placed on the market. Market surveillance authorities will support post-market monitoring through audits and by offering providers the possibility to report on serious incidents or breaches of fundamental rights obligations of which they have become aware.


  5. How will compliance be enforced?

Member States hold a key role in the application and enforcement of this Regulation. In this respect, each Member State should designate one or more national competent authorities to supervise the application and implementation of the Regulation and to carry out market surveillance activities. In order to increase efficiency and to establish an official point of contact with the public and other counterparts, each Member State should designate one national supervisory authority, which will also represent the country on the European Artificial Intelligence Board.


  6. What is the European Artificial Intelligence Board?

The European Artificial Intelligence Board would comprise high-level representatives of competent national supervisory authorities, the European Data Protection Supervisor and the Commission. Its role will be to facilitate a smooth, effective and harmonised implementation of the new AI Regulation. The Board will issue recommendations and opinions to the Commission regarding high-risk AI systems and other aspects relevant for the effective and uniform implementation of the new rules. It will also help build up expertise and act as a competence centre that national authorities can consult. Finally, it will support standardisation activities in the area.


  7. Will imports of AI systems and applications need to comply with the framework?

Yes. Importers of AI systems will have to ensure that the foreign provider has already carried out the appropriate conformity assessment procedure and holds the technical documentation required by the Regulation. Additionally, importers should ensure that their system bears the European Conformity (CE) marking and is accompanied by the required documentation and instructions for use.


  8. How is the Machinery Regulation related to AI?

The Machinery Regulation ensures that the new generation of machinery products guarantees the safety of users and consumers while encouraging innovation. Machinery products cover an extensive range of consumer and professional products, from robots (cleaning robots, personal care robots, collaborative robots, industrial robots) to lawnmowers, 3D printers, construction machines and industrial production lines.


  9. How does it fit with the regulatory framework on AI?

The two are complementary. The AI Regulation will address the safety risks of AI systems that ensure safety functions in machinery, while the Machinery Regulation will ensure, where applicable, the safe integration of the AI system into the overall machinery, so as not to compromise the safety of the machinery as a whole.


You can reach all the questions and the original document from the link below:

Feasibility Study on AI



Council of Europe

Ad Hoc Committee on AI



“As noted in various Council of Europe documents, including reports recently adopted by the Parliamentary Assembly (PACE), AI systems are substantially transforming individual lives and have a profound impact on the fabric of society and the functioning of its institutions. Their use has the capacity to generate substantive benefits in numerous domains, such as healthcare, transport, education and public administration, generating promising opportunities for humanity at large. At the same time, the development and use of AI systems also entails substantial risks, in particular in relation to interference with human rights, democracy and the rule of law, the core elements upon which our European societies are built.

AI systems should be seen as “socio-technical systems”, in the sense that the impact of an AI system – whatever its underlying technology – depends not only on the system’s design, but also on the way in which the system is developed and used within a broader environment, including the data used, its intended purpose, functionality and accuracy, the scale of deployment, and the broader organisational, societal and legal context in which it is used. The positive or negative consequences of AI systems depend also on the values and behaviour of the human beings that develop and deploy them, which leads to the importance of ensuring human responsibility. There are, however, some distinct characteristics of AI systems that set them apart from other technologies in relation to both their positive and negative impact on human rights, democracy and the rule of law.

First, the scale, connectedness and reach of AI systems can amplify certain risks that are also inherent in other technologies or human behaviour. AI systems can analyse an unprecedented amount of fine-grained data (including highly sensitive personal data) at a much faster pace than humans. This ability can lead AI systems to be used in a way that perpetuates or amplifies unjust bias, also based on new discrimination grounds in case of so called “proxy discrimination”. The increased prominence of proxy discrimination in the context of machine learning may raise interpretive questions about the distinction between direct and indirect discrimination or, indeed, the adequacy of this distinction as it is traditionally understood. Moreover, AI systems are subject to statistical error rates. Even if the error rate of a system applied to millions of people is close to zero, thousands of people can still be adversely impacted due to the scale of deployment and interconnectivity of the systems. On the other side, the scale and reach of AI systems also imply that they can be used to mitigate certain risks and biases that are also inherent in other technologies or human behaviour, and to monitor and reduce human error rates.

Second, the complexity or opacity of many AI systems (in particular in the case of machine learning applications) can make it difficult for humans, including system developers, to understand or trace the system’s functioning or outcome. This opacity, in combination with the involvement of many different actors at different stages during the system’s lifecycle, further complicates the identification of the agent(s) responsible for a potential negative outcome, hence reducing human responsibility and accountability.

Third, certain AI systems can re-calibrate themselves through feedback and reinforcement learning. However, if an AI system is re-trained on data resulting from its own decisions which contains unjust biases, errors, inaccuracies or other deficiencies, a vicious feedback loop may arise which can lead to a discriminatory, erroneous or malicious functioning of the system and which can be difficult to detect.”
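The study's point about statistical error rates at scale can be made concrete with a quick back-of-the-envelope calculation; the error rate and population figures below are purely illustrative, not drawn from the study:

```python
# Scale effect: even a near-zero error rate produces many adversely
# affected individuals when a system is deployed at population scale.
# Both figures are hypothetical, chosen only for illustration.
error_rate = 0.001            # 0.1% -- an error rate "close to zero"
people_screened = 10_000_000  # people subject to the automated decision

expected_errors = int(error_rate * people_screened)
print(expected_errors)  # 10000 people adversely affected
```

A 0.1% error rate sounds negligible in the aggregate, yet it still translates into ten thousand individual wrong decisions, which is the asymmetry the passage highlights.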


You can reach the original study from the link below:

Towards Regulation of AI Systems




The CAHAI Secretariat
December 2020




Title 1. International Perspective

The Preliminary Chapter introduces the present report, submitted by the CAHAI to the Committee of Ministers, and details the progress achieved to date, taking into account the impact of the COVID-19 pandemic measures. It also includes reflections on working methods, synergy and complementarity with other relevant stakeholders, and proposals for further action by the CAHAI by means of a robust and clear roadmap.

Chapter 1 outlines the impact of AI on human rights, democracy and the rule of law. It identifies those human rights, as set out in the European Convention on Human Rights ("ECHR"), its Protocols and the European Social Charter ("ESC"), that are currently most impacted, or likely to be impacted, by AI.

Chapter 2 maps the relevant corpus of soft law documents and other ethical-legal frameworks developed by governmental and non-governmental organisations globally, with a twofold aim. First, we want to monitor this ever-evolving spectrum of non-mandatory governance instruments. Second, we want to prospectively assess the impact of AI on ethical principles, human rights, the rule of law and democracy.

Chapter 3 aims to contribute to the drafting of future AI regulation by building on the existing binding instruments, contextualising their principles and providing key regulatory guidelines for a future legal framework, with a view to preserving the harmonisation of the existing legal framework in the field of human rights, democracy and the rule of law.


You can find the original document from the link below:

Data Privacy Guidelines for AI Solutions



November 2020


  1. The purpose of this paper is to provide guiding principles concerning the use of personal and personal-related information in the context of Artificial Intelligence (AI) solutions developed as part of applied Information & Communication Technologies (ICTs), and to emphasise the importance of a legitimate basis for AI data processing by governments and corporations.
  2. This Guidance is intended to serve as a common international minimum baseline for data protection standards regarding AI solutions, especially those to be implemented at the domestic level, and to be a reference point for the ongoing debate on how the right to privacy can be protected in the context of AI solutions. 
  3. AI solutions are intended to guide or make decisions that affect all our lives. Therefore, AI solutions are currently the subject of broader debates within society. The subjects of these debates, namely moral, ethical and societal questions including non-discrimination and free participation, are still to be resolved. All of these questions are preconditioned by lawful data processing from a data privacy perspective. The data privacy underpinnings of AI solutions are the focus of this Guidance.
  4. This Guidance is based on the United Nations' Universal Declaration of Human Rights (10 December 1948, reaffirmed 2015; UDHR) and reflects the spirit as well as the understanding of that Declaration. Above all, Article 7 (non-discrimination) and Article 12 (right to privacy) shall be considered whenever developing or operating AI solutions. The themes and values of these UDHR Articles are found in Articles 2 and 3 (non-discrimination) and Article 17 (privacy) of the International Covenant on Civil and Political Rights, and are obligations upon countries that have ratified that Treaty.


  1. This Guidance is applicable to the data processing of AI solutions in all sectors of society including the public and private sectors. Data processing in this context means the design, the development, the operation and decommissioning of an AI solution. 
  2. This Guidance is applicable to all controllers of AI solutions. "Controller" in this context means designer, developer or operator (self-responsible or principal), each in its specific function.
  3. This Guidance does not limit or otherwise affect any law that grants data subjects more, wider or in whatsoever way better rights, protection, and/or remedies. This Guidance does not limit or otherwise affect any law that imposes obligations on controllers and processors where that law imposes higher, wider or more rigorous obligations regarding data privacy aspects.
  4. This Guidance does not apply to AI solutions that might be performed by individuals in the context of purely private, non-corporate or household activities.


(All submissions must have been received by 2 November 2020.)


You can find the original draft Guidelines from the link below:

Preventing Discrimination Caused by the Use of Artificial Intelligence 



Committee on Equality and Non-Discrimination
Rapporteur: Christophe LACROIX, Belgium, 2020



Artificial intelligence (AI), by allowing massive upscaling of automated decision-making processes, creates opportunities for efficiency gains – but in parallel, it can perpetuate and exacerbate discrimination. Public and private sector uses of AI have already been shown to have a discriminatory impact, while information flows tend to highlight extremes and foster hate. The use of biased datasets, design that fails to integrate the need to protect human rights, the lack of transparency of algorithms and of accountability for their impact, as well as a lack of diversity in AI teams, all contribute to this phenomenon.

States must act now to prevent AI from having a discriminatory impact in our societies, and should work together to develop international standards in this field. 

Parliaments must moreover play an active role in overseeing the use of AI-based technologies and in ensuring that it is subject to public scrutiny. Domestic anti-discrimination legislation should be reviewed and amended to ensure that victims of discrimination caused by the use of AI have access to an effective remedy, and national equality bodies should be effectively equipped to deal with the impact of AI-based technologies.

Respect for equality and non-discrimination must be integrated from the outset in the design of AI-based systems, and tested before their deployment. The public and private sectors should actively promote diversity and interdisciplinary approaches in technology studies and professions.


You can reach the original report from the link below:

Click to access Doc. 15151.pdf

Assessment List for Trustworthy AI




European AI Alliance | FUTURIUM | European Commission

The EU Commission

High-Level Expert Group on AI

August 2020


Fundamental Rights

Fundamental rights encompass rights such as human dignity and non-discrimination, as well as rights in relation to data protection and privacy, to name just some examples. Prior to self-assessing an AI system with this Assessment List, a fundamental rights impact assessment (FRIA) should be performed. 

A FRIA could include questions such as the following, drawing on specific articles of the Charter and the European Convention on Human Rights (ECHR), its protocols, and the European Social Charter.

1. Does the AI system potentially negatively discriminate against people on the basis of any of the following grounds (non-exhaustively): sex, race, colour, ethnic or social origin, genetic features, language, religion or belief, political or any other opinion, membership of a national minority, property, birth, disability, age or sexual orientation?

Have you put in place processes to test and monitor for potential negative discrimination (bias) during the development, deployment and use phases of the AI system? 

Have you put in place processes to address and rectify for potential negative discrimination (bias) in the AI system? 

2. Does the AI system respect the rights of the child, for example with respect to child protection and taking the child’s best interests into account?

Have you put in place processes to address and rectify for potential harm to children by the AI system? 

Have you put in place processes to test and monitor for potential harm to children during the development, deployment and use phases of the AI system? 

3. Does the AI system protect personal data relating to individuals in line with the GDPR?

Have you put in place processes to assess in detail the need for a data protection impact assessment, including an assessment of the necessity and proportionality of the processing operations in relation to their purpose, with respect to the development, deployment and use phases of the AI system?

Have you put in place measures envisaged to address the risks, including safeguards, security measures and mechanisms to ensure the protection of personal data with respect to the development, deployment and use phases of the AI system? 

4. Does the AI system respect the freedom of expression and information and/or freedom of assembly and association?

Have you put in place processes to test and monitor for potential infringement on freedom of expression and information, and/or freedom of assembly and association, during the development, deployment and use phases of the AI system? 

Have you put in place processes to address and rectify for potential infringement on freedom of expression and information, and/or freedom of assembly and association, in the AI system?


You can find original document from the link below:

White Paper on Artificial Intelligence: A European approach to excellence and trust



Brussels, 19.02.2020



Artificial Intelligence is developing fast. It will change our lives by improving healthcare (e.g. making diagnosis more precise, enabling better prevention of diseases), increasing the efficiency of farming, contributing to climate change mitigation and adaptation, improving the efficiency of production systems through predictive maintenance, increasing the security of Europeans, and in many other ways that we can only begin to imagine. At the same time, Artificial Intelligence (AI) entails a number of potential risks, such as opaque decision-making, gender-based or other kinds of discrimination, intrusion in our private lives or being used for criminal purposes. 

Against a background of fierce global competition, a solid European approach is needed, building on the European strategy for AI presented in April 2018. To address the opportunities and challenges of AI, the EU must act as one and define its own way, based on European values, to promote the development and deployment of AI. 

The Commission is committed to enabling scientific breakthrough, to preserving the EU’s technological leadership and to ensuring that new technologies are at the service of all Europeans – improving their lives while respecting their rights. 

Commission President Ursula von der Leyen announced in her political Guidelines a coordinated European approach on the human and ethical implications of AI as well as a reflection on the better use of big data for innovation. 

Thus, the Commission supports a regulatory and investment-oriented approach with the twin objectives of promoting the uptake of AI and of addressing the risks associated with certain uses of this new technology. The purpose of this White Paper is to set out policy options on how to achieve these objectives. It does not address the development and use of AI for military purposes. The Commission invites Member States, other European institutions and all stakeholders, including industry, social partners, civil society organisations, researchers, the public in general and any interested party, to react to the options below and to contribute to the Commission's future decision-making in this domain.

You can reach the full paper from the link below:

Explaining AI Decisions




ICO & Alan Turing Institute

2 December 2019

The ICO and The Alan Turing Institute (The Turing) have launched a consultation on our co-badged guidance, Explaining decisions made with AI. This guidance aims to give organisations practical advice to help explain the processes, services and decisions delivered or assisted by AI to the individuals affected by them.

Increasingly, organisations are using artificial intelligence (AI) to support or to make decisions about individuals. If this is something you do, or something you are thinking about, this guidance is for you.

We want to ensure this guidance is practically applicable in the real world, so that organisations can easily apply it when developing AI systems. This is why we are requesting feedback.

The guidance consists of three parts. Depending on your level of expertise, and the make-up of your organisation, some parts may be more relevant to you than others. You can pick and choose the parts that are most useful.

The survey will ask you about all three parts, but you can answer as few or as many questions as you like.

Part 1: The basics of explaining AI defines the key concepts and outlines a number of different types of explanations. It will be relevant for all members of staff involved in the development of AI systems.

Part 2: Explaining AI in practice helps you with the practicalities of explaining these decisions and providing explanations to individuals. This will primarily be helpful for the technical teams in your organisation; however, your DPO and compliance team will also find it useful.

Part 3: What explaining AI means for your organisation goes into the various roles, policies, procedures and documentation that you can put in place to ensure your organisation is set up to provide meaningful explanations to affected individuals. This is primarily targeted at your organisation's senior management team; however, your DPO and compliance team will also find it useful.

You can send your thoughts via email to [email protected]


You can find the original guidance from the link below:

Recommendations of the Data Ethics Commission for the Federal Government’s Strategy on Artificial Intelligence





9 October 2018


The Data Ethics Commission is pleased that the Federal Government is developing a strategy on artificial intelligence. At its constitutive meeting on 4 and 5 September 2018, the Data Ethics Commission discussed the Federal Government’s policy paper for such a strategy. The Commission recommends that the Federal Government should add the following points to its strategy:

(1) the objective “Upholding the ethical and legal principles based on our liberal democracy throughout the entire process of developing and applying artificial intelligence”

(2) the area of action “Promoting the ability of individuals and society as a whole to understand and reflect critically in the information society”


The term “artificial intelligence” (AI) is used in the media and in general discourse to refer to different things. The Federal Government’s policy paper does not specify the technologies covered in the paper. This information should be added.

In this context, we understand “artificial intelligence” as a collective term for technologies and their applications which process potentially very large and heterogeneous data sets using complex methods modelled on human intelligence to arrive at a result which may be used in automated applications. The most important building blocks of AI as part of computer science are sub-symbolic pattern recognition, machine learning, computerized knowledge representation and knowledge processing, which encompasses heuristic search, inference and planning.

The range of applications using AI today is already enormous. These applications range from the simple calculation of travel routes to image and language recognition and generation to highly complex environments for making decisions and predictions and for exerting influence. The most important applications involve systems which recognize language and images; collaborative robots and other automated systems (cars, aircraft, trains); multi-agent systems; chatbots; and engineered environments with ambient intelligence. We expect increasingly autonomous and comprehensive applications to be developed which will pervade all areas of life and are capable of automating, (partly) replacing and far outperforming more and more human activity in ever broader fields of action.

Questions with ethical and legal relevance which arise in this context also concern “simple” systems of rules based on algorithms “manually” defined by experts. These rule-based systems do not constitute AI as it is generally understood. It is important for the Federal Government’s strategy on AI to cover these processes as well.


The diversity of possible AI applications and the complexity of the relevant technologies make it especially challenging to design them in compliance with ethics and the law and to regulate this compliance. As more and more decision-making processes are shifting from humans as the subject of action to AI-driven systems, new questions arise as to who is responsible for the development, programming, introduction, use, steering, monitoring, liability and external review of AI and applications based on it. Further, the specific functioning depends on the selection and quality of the data entered and/or used to “train” the application. Simply ignoring certain types of data and using poorly prepared data can have ethical consequences extending to systematic discrimination or results antagonistic to plurality. In this context, more support should be given to research into modern methods of anonymization and into generating synthetic training data, also in order to increase the amount of data that can be processed for AI technologies without threatening fundamental rights.

The data needed for some AI applications are highly concentrated among a small number of companies which also possess a high level of technological expertise. This raises the question as to whether and how access to non-personal data in private hands should be regulated by law.

Finally, with regard to the democratic process, it should be noted that technology which is increasingly able to imitate human behaviour in a remarkably convincing way can also be easily used to influence social trends and political opinions.

Ethical considerations should be addressed throughout the entire process of developing and applying AI, using the approach “ethics by, in and for design” and as the trademark of “AI made in Europe”. This includes research, development and production of AI, as well as the use, operation, monitoring and governance of AI-based applications. For the Data Ethics Commission, ethics does not mean primarily the definition of limits; on the contrary, when ethical considerations are addressed from the start of the development process, they can make a powerful contribution to design, supporting advisable and desirable applications.

It is also necessary to consider interactions between technology, users and society (“the AI ecosystem”). Within this ecosystem, it is necessary to ensure sufficient transparency, accountability, freedom from discrimination and the ability to review those automated processes which prepare decisions or draw conclusions which may be carried out without additional human input. This is the only way to generate trust in the use and results of algorithm-driven processes. The policy paper (p. 9) rightly demands these measures for algorithms used in public administration. But the same principles should apply in an appropriate way to private parties as well. Measures for quality assurance are also needed which can be supported in part by independent third parties and in part by automated processes. It is also necessary to ensure that the persons affected and the supervisory authorities have appropriate and effective possibilities to intervene as well as access to effective legal remedies.

The most important standard for dealing responsibly with AI is the Constitution itself, in particular the fundamental rights and the principles of the rule of law, the welfare system and democracy. This also includes individuals’ right to self-determination, including control over their personal data, which in turn requires companies to inform their customers how they use their data; respect for individual user decisions concerning personal use of an application; protection against unfair discrimination; and the possibility to review machine-made decisions effectively. We also need legal provisions which clearly define the extent of responsibility for developing and applying AI-based technologies according to ethical, legal and economic principles. This also applies to compensation for damage and the enforcement of public-law obligations with regard to AI.

A wide range of control mechanisms is conceivable for embedding ethical and legal principles into the process of designing and applying these technologies. These mechanisms must be decided at national and European level in a democratic process. The use of AI by government actors must be subject to special oversight. Possibilities for supervision include the targeted (e.g. material) promotion of applications which comply with the Constitution; certification and standards; official supervisory authorization; institutions to uphold fundamental rights and ethical rules related to AI; and binding law.

With this in mind, the Data Ethics Commission recommends that the Federal Government’s strategy on artificial intelligence should promote and demand attention to ethical and legal principles throughout the entire process of developing and applying AI, and that the strategy should include this as an additional objective. The strategy’s areas of action should be defined with this objective in mind.


Information and technologies of all kinds pervade every level of society and our lives to a degree never before known. They increasingly influence social interactions and discourse as structurally relevant elements of democracy. The rapid development of new applications for AI also demands a constant process of critical examination. These profound and diverse changes are significant not only for individual expression, but also for our life in society. They make a discourse which reinforces freedom and democracy more necessary now than ever. Among other things, we need a framework in which individuals and institutional actors can acquire sufficient digital and media literacy and the ability to reflect critically on how to deal with technical innovation.

The Federal Government’s policy paper already calls for implementing its strategy on artificial intelligence in constant dialogue with representatives of the research community, civil society and business and industry, as well as with policy-makers, in order to establish a culture of AI in Germany which promotes trust. The Data Ethics Commission underscores the importance of these measures. It also recommends adding to the AI strategy a separate area of action: “Promoting the ability of individuals and society as a whole to understand and reflect critically in the information society”. This is intended to ensure that individuals and institutional actors acquire sufficient digital and media literacy and the ability to reflect critically on how to deal with AI. Such abilities are essential for society to conduct an objective, informed and nuanced examination which can help promote trust in the use of AI. However, the Data Ethics Commission believes a broader approach will be needed than is currently described in the Federal Government’s policy paper.

Ways to promote digital and media literacy and critical reflection range from offering comprehensive, objective information in campaigns (e.g. to explain realistic application scenarios), to teaching media literacy at school and in adult education courses, to using and promoting technologies to enforce the law and uphold ethical principles in the world of technology. The media and institutions of media supervision also have an important role to play in this context: they must not only inform society about new technologies and examine technological progress critically, but also provide new forums for debate.

Investment in technology impact assessment must increase to the same extent that technologies such as AI are applied in our society. For example, more research and development should be conducted on data portability, interoperability and consumer-enabling technologies, including AI applications whose primary aim is to help consumers make everyday decisions.

A balance must be found between, on the one hand, the state’s responsibility for creating and enforcing framework conditions which ensure trust, together with the freedom, autonomy and responsibility of users and others affected by the new technologies, and, on the other hand, the forces of the market and competition. This balance must be discussed and determined by society in light of these changes. The growing economic strength of those companies which play a major role in the development of AI must not result in research and civil society becoming increasingly dependent on funding from precisely these companies. Government must enable research and civil society to make independent and competence-based contributions to this important societal discussion.

As modern technologies, including AI, evolve and relieve humans of certain tasks, we not only gain new skills but also lose existing ones. This calls for a discussion of our responsibility to preserve and develop certain skills so that the next generation remains independent, and thus also of how the sovereignty of society as a whole should be defined and what it requires.

The Data Ethics Commission therefore recommends including another area of action in the strategy focused on creating appropriate framework conditions to promote the ability of individuals and society as a whole to understand and reflect critically in the information society.


Progress and responsible innovation make a major contribution to the prosperity of society. They offer enormous opportunities which we should welcome and promote, but they also come with risks. These opportunities can make a lasting contribution to freedom, justice and prosperity above all when people’s individual rights are protected and social cohesion is strengthened. With this in mind, the Data Ethics Commission strongly recommends adding the two items referred to at the beginning of this document to the Federal Government’s strategy on artificial intelligence.


You can reach the original document from the link below:

Civil and Military Drones


European Parliament, October 2019



Often labelled one of today’s main disruptive technologies, drones have earned this title by prompting a fundamental rethinking of business models, existing laws, safety and security standards, the future of transport, and modern warfare. The European Union (EU) recognises the opportunities that drones offer and sees them as opening a new chapter in the history of aerospace. The EU aviation strategy provides guidance for exploring new and emerging technologies, and encourages the integration of drones into business and society so as to maintain a competitive EU aviation industry.

Ranging from insect-sized to several tonnes in weight, drones are extremely versatile and can perform a very large variety of functions, from filming to farming, and from medical aid to search and rescue operations. Among the advantages of civil and military drones are their relatively low cost, long reach, greater work productivity and capacity to reduce risk to human life. These features have led to their mass commercialisation and integration into military planning. Regulatory and oversight challenges remain, however, particularly regarding dual-use drones: civil drones that can easily be turned into armed drones or weaponised for criminal purposes.

At EU level, the European Commission has been empowered to regulate civil drones, and the European Aviation Safety Agency to assist with ensuring a harmonised regulatory framework for safe drone operations. The latest EU legislation has achieved the highest ever safety standards for drones. A further challenge for regulators, officials and manufacturers alike is the need to build the trust of citizens and consumers. Given that drones have been in the public eye more often for their misuse than for their accomplishments, transparency and effective communication are imperative to prepare citizens for the coming drone age.


You can reach the original document from the link below:
