Responsible Bots: 10 Guidelines for Developers of Conversational AI


  1. Articulate the purpose of your bot and take special care if your bot will support consequential use cases

The purpose of your bot is central to ethical design, and ethical design is particularly important when it is anticipated that a consequential use will be served by the bot you are developing. Consequential use cases include access to services such as healthcare, education, employment, financing or other services that, if denied, would have meaningful and significant impact on an individual’s daily life.

  2. Be transparent about the fact that you use bots as part of your product or service.

Users are more likely to trust a company that is transparent and forthcoming about its use of bot technology, and a bot is more likely to be trusted if users understand that the bot is working to serve their needs and is clear about its limitations. 

  3. Ensure a seamless hand-off to a human where the human-bot exchange leads to interactions that exceed the bot’s competence.

If your bot will engage people in interactions that may require human judgment, provide a means of ready access to a human moderator.
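The hand-off guideline can be sketched in a few lines. This is a minimal, hypothetical example, not the Microsoft Bot Framework API: the `classify` and `bot_reply` functions and the confidence threshold are illustrative assumptions standing in for a real intent classifier and response generator.

```python
from collections import deque

def handle_turn(message, classify, bot_reply, human_queue, threshold=0.6):
    """Answer with the bot, or hand off to a human when confidence is too low."""
    intent, confidence = classify(message)
    if confidence < threshold:
        human_queue.append(message)  # preserve the message for the human moderator
        return "Let me connect you with a person who can help."
    return bot_reply(intent)

# Hypothetical stand-ins for a real intent classifier and response generator.
def classify(message):
    return ("billing", 0.9) if "invoice" in message.lower() else ("unknown", 0.2)

def bot_reply(intent):
    return f"Here is help with {intent}."

queue = deque()
handle_turn("Where is my invoice?", classify, bot_reply, queue)  # bot answers
handle_turn("I need legal advice", classify, bot_reply, queue)   # handed off to queue
```

In a production bot, the hand-off would also carry the conversation history, so the human moderator does not start from zero.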

  4. Design your bot so that it respects relevant cultural norms and guards against misuse.

Since bots may have human-like personas, it is especially important that they interact respectfully and safely with users and have built-in safeguards and protocols to handle misuse and abuse.

  5. Ensure your bot is reliable.

Ensure that your bot is sufficiently reliable for the function it aims to perform, and always take into account that since AI systems are probabilistic they will not always provide the correct answer. 

  6. Ensure your bot treats people fairly.

The possibility that AI-based systems will perpetuate existing societal biases, or introduce new biases, is one of the top concerns identified by the AI community relating to the rapid deployment of AI. Development teams must be committed to ensuring that their bots treat all people fairly. 

  7. Ensure your bot respects user privacy.

Privacy considerations are especially important for bots. While the Microsoft Bot Framework does not store session state, you may be designing and deploying authenticated bots in personal settings (like hospitals) where bots will learn a great deal about users. People may also share more information about themselves than they would if they thought they were interacting with a person. And, of course, bots can remember everything. All of this (plus legal requirements) makes it especially important that you design bots from the ground up with a view toward respecting user privacy. This includes giving users sufficient transparency into bots’ data collection and use, including how the bot functions, and what types of controls the bot offers users over their personal data.

  8. Ensure your bot handles data securely.

Users have every right to expect that their data will be handled securely. Follow security best practices that are appropriate for the type of data your bot will be handling. 

  9. Ensure your bot is accessible.

Bots can benefit everyone, but only if they are designed to be inclusive and accessible to people of all abilities. Microsoft’s mission to empower every person to achieve more includes ensuring that new technology interfaces can be used by people with disabilities, including users of assistive technology. 

  10. Accept responsibility.

We are a long way from bots that can truly act autonomously, if that day ever comes. Humans are accountable for the operation of bots.

You can reach the original document from the link below:

Robots, Artificial Intelligence and Criminal Law






Dr. Sinan Altunç

Bahçeşehir University, Faculty of Law, Department of Criminal Law






An English translation is coming soon! 🙂



You can reach the original, full article from the link below:

2016–2019 Progress Report: Advancing Artificial Intelligence R&D



November 2019


The United States national strategy for artificial intelligence (AI), the American AI Initiative, identifies research and development (R&D) as a top priority for maintaining global leadership in AI. The United States leads the world in AI innovation, due in large part to its robust R&D ecosystem. Federal agencies contribute significantly to AI innovation by investing in numerous world-class research programs in areas consistent with the unique missions of each agency. 

This 2016–2019 Progress Report on Advancing Artificial Intelligence R&D (“2016–2019 Progress Report”) documents the important progress that agencies are making to deliver on Federal AI R&D. 

Guiding Federal research investments is the National Artificial Intelligence Research and Development Strategic Plan: 2019 Update (“2019 Plan”), which builds upon the 2016 version of the Plan. The 2019 Plan articulates eight national AI R&D strategies:

Strategy 1: Make long-term investments in AI research. 

Strategy 2: Develop effective methods for human-AI collaboration. 

Strategy 3: Understand and address the ethical, legal, and societal implications of AI.

Strategy 4: Ensure the safety and security of AI systems. 

Strategy 5: Develop shared public datasets and environments for AI training and testing.

Strategy 6: Measure and evaluate AI technologies through benchmarks and standards.

Strategy 7: Better understand the national AI R&D workforce needs.

Strategy 8: Expand public-private partnerships in AI to accelerate advances in AI. 

This 2016–2019 Progress Report highlights AI research first by strategy, then by sector, with supporting details describing individual agency contributions that provide a whole-of-government overview. The diversity of programs and activities reflects the remarkable breadth and depth of Federal investments in AI. The report not only covers the broad themes of Federal R&D but also provides illustrative sidebar examples of individual agency AI R&D breakthroughs that advance the state of the field.

Taken as a whole, the 2016–2019 Progress Report conveys the following key messages: 

  1. The Federal Government is investing in a considerable breadth and depth of innovative AI concepts that can transform the state of the field. 
  2. The United States benefits significantly from the broad spectrum of Federal agencies that invest in AI from their unique mission perspectives, consistent with the national AI R&D strategy. 
  3. Federal investments have generated impactful breakthroughs that are revolutionizing our society for the better. 

Collectively, the investments described in this report demonstrate how the Federal Government leverages and improves America’s AI capabilities through R&D and ensures that those capabilities increase prosperity, safety, security, and quality of life for the American people for decades to come.


You can reach the full report from the link below:


Blockchain 1.0 and 2.0






The development of blockchain technology has brought us the ‘smart contract’ as the product of blockchain 2.0. Smart contracts, which run on blockchain infrastructure, reduce the cost and time of legal processes. On the other hand, within the frame of current technology, smart contracts clash with some of our current legal regulations. In our opinion, the irrevocable record system of blockchain in particular contradicts certain principles of the law of obligations and data protection law. Moreover, the definition of cryptocurrencies is an immensely controversial issue among regulatory institutions. This article highlights these dilemmas alongside short definitions of blockchain and smart contracts.
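Why blockchain records are effectively irrevocable can be shown with a minimal Python sketch, not tied to any particular blockchain: each block’s hash commits to the previous hash, so altering an earlier record invalidates every later hash. This is the property that sits uneasily with legal rights to rectify or erase data.

```python
import hashlib

def chain_hash(prev_hash, record):
    """Each block's hash commits to the previous hash and this block's data."""
    return hashlib.sha256((prev_hash + record).encode()).hexdigest()

records = ["contract signed", "payment released", "goods delivered"]
hashes = ["0" * 64]  # genesis value
for r in records:
    hashes.append(chain_hash(hashes[-1], r))

# Tampering with an earlier record produces a different hash, which no longer
# matches the value the later blocks committed to -- the alteration is
# detectable, so the recorded history cannot quietly be revised.
tampered = chain_hash(hashes[0], "contract voided")
assert tampered != hashes[1]
```

The sketch omits consensus, signatures and block structure; it isolates only the hash-chaining that makes recorded entries tamper-evident.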

Explaining AI Decisions




ICO & Alan Turing Institute

2 December 2019

The ICO and The Alan Turing Institute (The Turing) have launched a consultation on our co-badged guidance, Explaining decisions made with AI. This guidance aims to give organisations practical advice to help explain the processes, services and decisions delivered or assisted by AI, to the individuals affected by them.  

Increasingly, organisations are using artificial intelligence (AI) to support, or to make, decisions about individuals. If this is something you do, or something you are thinking about, this guidance is for you.

We want to ensure this guidance is practically applicable in the real world, so organisations can easily use it when developing AI systems. This is why we are requesting feedback.

The guidance consists of three parts. Depending on your level of expertise, and the make-up of your organisation, some parts may be more relevant to you than others. You can pick and choose the parts that are most useful.

The survey will ask you about all three parts, but you can answer as few or as many questions as you like.

Part 1: The basics of explaining AI defines the key concepts and outlines a number of different types of explanations. It will be relevant for all members of staff involved in the development of AI systems.

Part 2: Explaining AI in practice helps you with the practicalities of explaining these decisions and providing explanations to individuals. This will primarily be helpful for the technical teams in your organisation; however, your DPO and compliance team will also find it useful.

Part 3: What explaining AI means for your organisation goes into the various roles, policies, procedures and documentation that you can put in place to ensure your organisation is set up to provide meaningful explanations to affected individuals. This is primarily targeted at your organisation’s senior management team; however, your DPO and compliance team will also find it useful.

You can send your thoughts via email to [email protected]


You can find the original guidance from the link below:

Deepfake Videos: When Seeing Isn’t Believing






Holly Kathleen Hall

Arkansas State University






“The 2016 election evidenced a change in how campaign news and information spreads, especially false or misleading information, and the involvement of a foreign government in its dissemination. This new direction increased apprehension regarding the effect and influence of the new communication dynamic on the democratic process. Advancing technology and increasing popularity of social media networks have led to a rise in video creation and sharing. Innovations in technology are also allowing the public to edit and tinker with videos, creating falsified or fabricated content that appears very real. In 2018 a new software tool was released to the public, allowing the creation of videos of human faces of one person to be substituted for another. The result is videos of people speaking words they have never articulated and/or performing tasks they never did. There has been a dramatic uptick in the creation of these “deepfake” videos, leading to potential legal implications in the areas of privacy, defamation, and free expression.

The extraordinary success of fake news being accepted in the marketplace creates grave concerns for individuals and democracy. This is exacerbated when a video is added to the equation. Ponder some of the following potential situations: blackmailers using deepfakes to extort money or private information, a deepfake showing a government official accepting a bribe he/she never took, or a deepfake depicting an official announcing an impending attack by a foreign government. The possibilities are alarming. The capacity for harm caused by deepfakes naturally leads to considering new laws and regulations. However, any regulation of speech and expression in the United States implicates the First Amendment. In the past we have relied on the “marketplace of ideas” concept, which encourages more speech as a means to uncover the truth and have the best ideas rise to the fore, rather than censor particular content. Is this argument still valid when the public cannot discern what information is true, misleading, or false?

This article will first discuss the rise of fake news in the United States governmental process. Then this article will explore the practice of deepfake videos, including their potential use as tools of deception in the electoral process, and the complexities of regulations around this form of communication, given First Amendment protections. The paper concludes with recommendations to combat deepfakes and fake news in general.”

You can find the original paper from the link below:

Recommendations of the Data Ethics Commission for the Federal Government’s Strategy on Artificial Intelligence





9 October 2018


The Data Ethics Commission is pleased that the Federal Government is developing a strategy on artificial intelligence. At its constitutive meeting on 4 and 5 September 2018, the Data Ethics Commission discussed the Federal Government’s policy paper for such a strategy. The Commission recommends that the Federal Government should add the following points to its strategy:

(1) the objective “Upholding the ethical and legal principles based on our liberal democracy throughout the entire process of developing and applying artificial intelligence”

(2) the area of action “Promoting the ability of individuals and society as a whole to understand and reflect critically in the information society”


The term “artificial intelligence” (AI) is used in the media and in general discourse to refer to different things. The Federal Government’s policy paper does not specify the technologies covered in the paper. This information should be added.

In this context, we understand “artificial intelligence” as a collective term for technologies and their applications which process potentially very large and heterogeneous data sets using complex methods modelled on human intelligence to arrive at a result which may be used in automated applications. The most important building blocks of AI as part of computer science are sub-symbolic pattern recognition, machine learning, computerized knowledge representation and knowledge processing, which encompasses heuristic search, inference and planning.

The range of applications using AI today is already enormous. These applications range from the simple calculation of travel routes to image and language recognition and generation to highly complex environments for making decisions and predictions and for exerting influence. The most important applications involve systems which recognize language and images; collaborative robots and other automated systems (cars, aircraft, trains); multi-agent systems; chatbots; and engineered environments with ambient intelligence. We expect increasingly autonomous and comprehensive applications to be developed which will pervade all areas of life and are capable of automating, (partly) replacing and far outperforming more and more human activity in ever broader fields of action.

Questions with ethical and legal relevance which arise in this context also concern “simple” systems of rules based on algorithms “manually” defined by experts (= rules). These do not constitute AI as it is generally understood. It is important for the Federal Government’s strategy on AI to cover these processes as well.


The diversity of possible AI applications and the complexity of the relevant technologies make it especially challenging to design them in compliance with ethics and the law and to regulate this compliance. As more and more decision-making processes are shifting from humans as the subject of action to AI-driven systems, new questions arise as to who is responsible for the development, programming, introduction, use, steering, monitoring, liability and external review of AI and applications based on it. Further, the specific functioning depends on the selection and quality of the data entered and/or used to “train” the application. Simply ignoring certain types of data and using poorly prepared data can have ethical consequences extending to systematic discrimination or results antagonistic to plurality. In this context, more support should be given to research into modern methods of anonymization and into generating synthetic training data, also in order to increase the amount of data that can be processed for AI technologies without threatening fundamental rights.

The data needed for some AI applications are highly concentrated among a small number of companies which also possess a high level of technological expertise. This raises the question as to whether and how access to non-personal data in private hands should be regulated by law.

Finally, with regard to the democratic process, it should be noted that technology which is increasingly able to imitate human behaviour in a remarkably convincing way can also be easily used to influence social trends and political opinions.

Ethical considerations should be addressed throughout the entire process of developing and applying AI, using the approach “ethics by, in and for design” and as the trademark of “AI made in Europe”. This includes research, development and production of AI, as well as the use, operation, monitoring and governance of AI-based applications. For the Data Ethics Commission, ethics does not mean primarily the definition of limits; on the contrary, when ethical considerations are addressed from the start of the development process, they can make a powerful contribution to design, supporting advisable and desirable applications.

It is also necessary to consider interactions between technology, users and society (“the AI ecosystem”). Within this ecosystem, it is necessary to ensure sufficient transparency, accountability, freedom from discrimination and the ability to review those automated processes which prepare decisions or draw conclusions which may be carried out without additional human input. This is the only way to generate trust in the use and results of algorithm-driven processes. The policy paper (p. 9) rightly demands these measures for algorithms used in public administration. But the same principles should apply in an appropriate way to private parties as well. Measures for quality assurance are also needed which can be supported in part by independent third parties and in part by automated processes. It is also necessary to ensure that the persons affected and the supervisory authorities have appropriate and effective possibilities to intervene as well as access to effective legal remedies.

The most important standard for dealing responsibly with AI is first of all the Constitution, in particular the fundamental rights and the principles of the rule of law, the welfare system and democracy. This also includes individuals’ right to self-determination, including control over their personal data, which also requires companies to inform their customers how they use their data; respect for individual user decisions concerning personal use of an application; protection against unfair discrimination; and the possibility to review machine-made decisions effectively. We also need legal provisions which clearly define the extent of responsibility for developing and applying AI-based technologies according to ethical, legal and economic principles. This also applies to compensation for damage and the enforcement of public-law obligations with regard to AI.

A wide range of control mechanisms necessary for inserting ethical and legal principles into the process of designing and applying these technologies is conceivable. These mechanisms must be decided at national and European level in a democratic process. The use of AI by government actors must be subject to special oversight. Possibilities for supervision include targeted (material, etc.) promotion of applications which comply with the Constitution, certification and standards, official authorization of supervision and institutions to uphold fundamental rights and ethical rules related to AI and binding law.

With this in mind, the Data Ethics Commission recommends that the Federal Government’s strategy on artificial intelligence should promote and demand attention to ethical and legal principles throughout the entire process of developing and applying AI, and that the strategy should include this as an additional objective. The strategy’s areas of action should be defined with this objective in mind.


Information and technologies of all kinds pervade every level of society and our lives to a degree never before known. They increasingly influence social interactions and discourse as structurally relevant elements of democracy. The rapid development of new applications for AI also demands a constant process of critical examination. These profound and diverse changes are significant not only for individual expression, but also for our life in society. They make a discourse which reinforces freedom and democracy more necessary now than ever. Among other things, we need a framework in which individuals and institutional actors can acquire sufficient digital and media literacy and the ability to reflect critically on how to deal with technical innovation.

The Federal Government’s policy paper already calls for implementing its strategy on artificial intelligence in constant dialogue with representatives of the research community, civil society and business and industry, as well as with policy-makers, in order to establish a culture of AI in Germany which promotes trust. The Data Ethics Commission underscores the importance of these measures. It also recommends adding to the AI strategy a separate area of action: “Promoting the ability of individuals and society as a whole to understand and reflect critically in the information society”. This is intended to ensure that individuals and institutional actors acquire sufficient digital and media literacy and the ability to reflect critically on how to deal with AI. Such abilities are essential for society to conduct an objective, informed and nuanced examination which can help promote trust in the use of AI. However, the Data Ethics Commission believes a broader approach will be needed than is currently described in the Federal Government’s policy paper.

Ways to promote digital and media literacy and critical reflection range from offering comprehensive, objective information in campaigns (e.g. to explain realistic application scenarios), to teaching media literacy at school and in adult education courses, to using and promoting technologies to enforce the law and uphold ethical principles in the world of technology. The media and institutions of media supervision also have an important role to play in this context: Their role is not only to inform society about new technologies and examine technological progress critically, but also to provide new forums for debate.

Investment in technology impact assessment must increase to the same extent as technologies such as AI are applied in our society. For example, more research and development should be conducted on data portability, interoperability and consumer enabling technologies; these include AI applications whose primary aim is to help consumers make everyday decisions.

And a balance must be found between the state’s responsibility for creating and enforcing framework conditions, which ensures trust, and the freedom, autonomy and responsibility of users and others affected by the new technologies on the one hand, and the forces of the market and competition on the other hand. This balance must be discussed and determined by society in light of these changes. The growing economic strength of those companies which play a major role in the development of AI must not result in research and civil society becoming increasingly dependent on funding from precisely these companies. Government must enable research and civil society to make independent and competence-based contributions to this important societal discussion.

As modern technologies, including AI, evolve and relieve humans of certain chores, we not only gain new skills but also lose existing ones. This demands a discussion of our responsibility to preserve and develop certain skills so that the next generation remains independent. We therefore also need to discuss the definition of, and requirements for, the sovereignty of society as a whole.

The Data Ethics Commission therefore recommends including another area of action in the strategy focused on creating appropriate framework conditions to promote the ability of individuals and society as a whole to understand and reflect critically in the information society.


Progress and responsible innovation make a major contribution to the prosperity of society. They offer enormous opportunities which we should welcome and promote, but they also come with risks. These opportunities can make a lasting contribution to freedom, justice and prosperity above all when people’s individual rights are protected and social cohesion is strengthened. With this in mind, the Data Ethics Commission strongly recommends adding the two items referred to at the beginning of this document to the Federal Government’s strategy on artificial intelligence.


You can reach the original document from the link below:

AI Ethics Principles for DoD




October 2019 – Defense Innovation Board (DIB)




The following principles represent the means to ensure ethical behavior as the Department develops and deploys AI. To that end, the Department should set the goal that its use of AI systems is:

1. Responsible. Human beings should exercise appropriate levels of judgment and remain responsible for the development, deployment, use, and outcomes of AI systems.

2. Equitable. DoD should take deliberate steps to avoid unintended bias in the development and deployment of combat or non-combat AI systems that would inadvertently cause harm to persons.

3. Traceable. DoD’s AI engineering discipline should be sufficiently advanced such that technical experts possess an appropriate understanding of the technology, development processes, and operational methods of its AI systems, including transparent and auditable methodologies, data sources, and design procedure and documentation.

4. Reliable. AI systems should have an explicit, well-defined domain of use, and the safety, security, and robustness of such systems should be tested and assured across their entire life cycle within that domain of use.

5. Governable. DoD AI systems should be designed and engineered to fulfill their intended function while possessing the ability to detect and avoid unintended harm or disruption, and disengage or deactivate deployed systems that demonstrate unintended escalatory or other behavior.

This White Paper is organized into five chapters. Chapter One outlines the stated needs for a set of AI Ethics Principles and ethics guidance from existing strategic documents and law. It also addresses definitional approaches associated with AI and autonomy to clearly frame how the DIB approaches key issues for DoD and its development and use of AI. Chapter Two provides the necessary grounding for a set of DoD AI Ethics Principles to ensure that they are coherently and consistently derived for DoD. We explain DoD’s existing ethics framework that applies to all of DoD’s technologies, AI included. In Chapter Three, we offer substantive and evidence-driven explanations of each of the five AI Ethics Principles. These principles are normative; they are intended to inform and guide action. However, we are mindful that depending upon context, some principles may override others, and for various AI use cases, these will apply differently. Chapter Four outlines our recommendations to the Department, while Chapter Five provides conclusions. We also provide a set of Appendices to aid readers by providing transparency and clarity about our process in developing these principles and providing high-level content about existing DoD processes.

You can reach the White Paper from the link below:

Civil and Military Drones




European Parliament, October 2019



Often labelled as one of today’s main disruptive technologies, drones have indeed earned this label by prompting a fundamental rethinking of business models, existing laws, safety and security standards, the future of transport, and modern warfare. The European Union (EU) recognises the opportunities that drones offer and sees them as opening a new chapter in the history of aerospace. The EU aviation strategy provides guidance for exploring new and emerging technologies, and encourages the integration of drones into business and society so as to maintain a competitive EU aviation industry.

Ranging from insect-sized to several tonnes in weight, drones are extremely versatile and can perform a very large variety of functions, from filming to farming, and from medical aid to search and rescue operations. Among the advantages of civil and military drones are their relatively low cost, reach, greater work productivity and capacity to reduce risk to human life. These features have led to their mass commercialisation and integration into military planning. Regulatory and oversight challenges remain, however, particularly regarding dual-use drones – civil drones that can be easily turned into armed drones or weaponised for criminal purposes.

At EU level, the European Commission has been empowered to regulate civil drones and the European Aviation Safety Agency to assist with ensuring a harmonised regulatory framework for safe drone operations. The latest EU legislation has achieved the highest ever safety standards for drones. Another challenge remaining for regulators, officials and manufacturers alike is the need to build the trust of citizens and consumers. Given that drones have been in the public eye more often for their misuse than their accomplishments, transparency and effective communication are imperative to prepare citizens for the upcoming drone age.


You can reach the original document from the link below:

New Challenges to Government Use of Algorithmic Decision Systems






Workshop Summary

Algorithmic decision systems (ADS) are often touted for their putative benefits: mitigating human bias and error, and offering the promise of cost efficiency, accuracy, and reliability. Yet within health care, criminal justice, education, employment, and other areas, the implementation of these technologies has resulted in numerous problems. In our 2018 Litigating Algorithms Report, we documented outcomes and insights from the first wave of US lawsuits brought against government use of ADS, highlighting key legal and technical issues they raised and how courts were learning to address the substantive and procedural problems they create.

In June of 2019, with support from The John D. and Catherine T. MacArthur Foundation, AI Now and NYU Law’s Center on Race, Inequality, and the Law held our second Litigating Algorithms Workshop. We revisited several of last year’s cases, examining what progress, if any, had been made. We also examined a new wave of legal challenges that raise significant questions about (1) what access, if any, criminal defense attorneys should have to law enforcement ADS in order to challenge allegations leveled by the prosecution; (2) the collateral consequences of erroneous or vindictive uses of governmental ADS; and (3) the evolution of America’s most powerful biometric privacy law and its potential impact on ADS accountability. As with the previous year’s Litigating Algorithms Workshop, participants shared litigation strategies, raised provocative questions, and recounted key moments from both their victories and losses.

Workshop attendees came from various advocacy, policy, and research communities, including the ACLU, Center for Civil Justice, Center for Constitutional Rights, Center on Privacy and Technology at Georgetown Law, Citizen Lab, Digital Freedom Fund, Disability Rights Oregon, the Electronic Frontier Foundation, Equal Justice Under Law, Federal Defenders of New York, the Ford Foundation, LatinoJustice PRLDEF, Legal Aid of Arkansas, Legal Aid Society of New York, the MacArthur Foundation, NAACP Legal Defense and Educational Fund, National Association of Criminal Defense Lawyers, National Employment Law Project, National Health Law Program, New York County Defender Services, Philadelphia Legal Assistance, Princeton University, Social Science Research Council, Bronx Defenders, UDC Law School, Upturn, and Yale Law School.


AI Now Institute 2019 Symposium:


You can reach the original, full report from the link below:
