Responsible Bots: 10 Guidelines for Developers of Conversational AI

Guidelines

  1. Articulate the purpose of your bot and take special care if your bot will support consequential use cases.

The purpose of your bot is central to ethical design, and ethical design is particularly important when it is anticipated that a consequential use will be served by the bot you are developing. Consequential use cases include access to services such as healthcare, education, employment, financing or other services that, if denied, would have meaningful and significant impact on an individual’s daily life.

  2. Be transparent about the fact that you use bots as part of your product or service.

Users are more likely to trust a company that is transparent and forthcoming about its use of bot technology, and a bot is more likely to be trusted if users understand that the bot is working to serve their needs and is clear about its limitations. 

  3. Ensure a seamless hand-off to a human where the human-bot exchange leads to interactions that exceed the bot’s competence.

If your bot will engage people in interactions that may require human judgment, provide a means or ready access to a human moderator. 
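
As a hedged illustration, and not part of the original guidelines, the sketch below shows one common way to implement such a hand-off: escalate whenever the user explicitly asks for a person or the bot's confidence in its interpretation is low. The classify_intent and route_to_human_agent helpers are hypothetical stubs rather than any particular framework's API.

```python
# Minimal sketch of confidence-based escalation to a human moderator.
# classify_intent() and route_to_human_agent() are illustrative stubs,
# standing in for a real NLU service and support/escalation queue.

CONFIDENCE_THRESHOLD = 0.6
ESCALATION_PHRASES = ("talk to a human", "speak to an agent", "representative")

def classify_intent(text: str) -> tuple[str, float]:
    """Stub classifier: returns (intent label, confidence score)."""
    if "refund" in text.lower():
        return "request_refund", 0.9
    return "unknown", 0.3

def route_to_human_agent(user_id: str, text: str) -> str:
    """Stub escalation: a real bot would open a ticket or live-chat session."""
    return f"TICKET-{abs(hash((user_id, text))) % 10000:04d}"

def handle_message(user_id: str, text: str) -> str:
    intent, confidence = classify_intent(text)
    wants_human = any(p in text.lower() for p in ESCALATION_PHRASES)
    if wants_human or confidence < CONFIDENCE_THRESHOLD:
        ticket = route_to_human_agent(user_id, text)
        return f"I'm connecting you with a person who can help (ref. {ticket})."
    return f"Handling '{intent}' automatically."

print(handle_message("u42", "I want to talk to a human about my bill"))
```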

  4. Design your bot so that it respects relevant cultural norms and guards against misuse.

Since bots may have human-like personas, it is especially important that they interact respectfully and safely with users and have built-in safeguards and protocols to handle misuse and abuse.
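
The sketch below is one illustrative safeguard, and an assumption rather than the guideline's own method: abusive input is met with a firm, respectful reply instead of being mirrored. A keyword blocklist is used here only for brevity; production bots would typically rely on a trained abuse or toxicity classifier.

```python
# Minimal sketch of a misuse safeguard: detect abusive input and respond
# respectfully rather than escalating or mirroring the user's tone.
# The blocklist is illustrative only; real systems use trained classifiers.

ABUSIVE_TERMS = ("stupid bot", "shut up")  # illustrative only

def is_abusive(text: str) -> bool:
    lowered = text.lower()
    return any(term in lowered for term in ABUSIVE_TERMS)

def respond(text: str) -> str:
    if is_abusive(text):
        return ("I'd like to keep this conversation respectful. "
                "I'm happy to help whenever you're ready to continue.")
    return "How can I help you today?"

print(respond("shut up, you stupid bot"))
```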

  5. Ensure your bot is reliable.

Ensure that your bot is sufficiently reliable for the function it aims to perform, and always take into account that since AI systems are probabilistic they will not always provide the correct answer. 

  6. Ensure your bot treats people fairly.

The possibility that AI-based systems will perpetuate existing societal biases, or introduce new biases, is one of the top concerns identified by the AI community relating to the rapid deployment of AI. Development teams must be committed to ensuring that their bots treat all people fairly. 

  7. Ensure your bot respects user privacy.

Privacy considerations are especially important for bots. While the Microsoft Bot Framework does not store session state, you may be designing and deploying authenticated bots in personal settings (like hospitals) where bots will learn a great deal about users. People may also share more information about themselves than they would if they thought they were interacting with a person. And, of course, bots can remember everything. All of this (plus legal requirements) makes it especially important that you design bots from the ground up with a view toward respecting user privacy. This includes giving users sufficient transparency into bots’ data collection and use, including how the bot functions, and what types of controls the bot offers users over their personal data.
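
As an illustration of such user controls, and not a feature of any particular framework, the sketch below gives users two hypothetical commands, "show my data" and "delete my data", backed by a placeholder in-memory store.

```python
# Minimal sketch of user-facing data controls: let users see and delete
# what the bot has stored about them. The in-memory dictionary stands in
# for whatever persistent store (if any) a real bot would use.

user_profiles: dict[str, dict] = {
    "u42": {"name": "Alex", "preferred_clinic": "Downtown"},
}

def handle_privacy_command(user_id: str, command: str) -> str:
    if command == "show my data":
        data = user_profiles.get(user_id, {})
        return f"Here is what I have stored about you: {data or 'nothing'}"
    if command == "delete my data":
        user_profiles.pop(user_id, None)
        return "I have deleted the personal data I was storing about you."
    return "You can say 'show my data' or 'delete my data' at any time."

print(handle_privacy_command("u42", "show my data"))
print(handle_privacy_command("u42", "delete my data"))
```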

  8. Ensure your bot handles data securely.

Users have every right to expect that their data will be handled securely. Follow security best practices that are appropriate for the type of data your bot will be handling. 
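
As one example of such a practice, the sketch below encrypts a stored conversation transcript at rest using the widely used cryptography package's Fernet recipe. It is a minimal illustration of a single control, not a complete security program, and key management (where the key lives, how it is rotated) is deliberately left out.

```python
# One illustrative practice (among many): encrypt stored conversation data
# at rest with the third-party "cryptography" package (pip install cryptography).

from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in practice, load this from a secrets manager
fernet = Fernet(key)

transcript = b"user u42: my account number is 1234-5678"
ciphertext = fernet.encrypt(transcript)   # persist this, never the plaintext
restored = fernet.decrypt(ciphertext)

assert restored == transcript
```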

  9. Ensure your bot is accessible.

Bots can benefit everyone, but only if they are designed to be inclusive and accessible to people of all abilities. Microsoft’s mission to empower every person to achieve more includes ensuring that new technology interfaces can be used by people with disabilities, including users of assistive technology. 

  10. Accept responsibility.

We are a long way from bots that can truly act autonomously, if that day ever comes. Humans are accountable for the operation of bots.

You can find the original document at the link below:

https://www.microsoft.com/en-us/research/uploads/prod/2018/11/Bot_Guidelines_Nov_2018.pdf

AI Now Report 2018

December 2018

RECOMMENDATIONS

  1. Governments need to regulate AI by expanding the powers of sector-specific agencies to oversee, audit, and monitor these technologies by domain. The implementation of AI systems is expanding rapidly, without adequate governance, oversight, or accountability regimes. Domains like health, education, criminal justice, and welfare all have their own histories, regulatory frameworks, and hazards. However, a national AI safety body or general AI standards and certification model will struggle to meet the sectoral expertise requirements needed for nuanced regulation. We need a sector-specific approach that does not prioritize the technology, but focuses on its application within a given domain. Useful examples of sector-specific approaches include the United States Federal Aviation Administration and the National Highway Traffic Safety Administration.
  2. Facial recognition and affect recognition need stringent regulation to protect the public interest. Such regulation should include national laws that require strong oversight, clear limitations, and public transparency. Communities should have the right to reject the application of these technologies in both public and private contexts. Mere public notice of their use is not sufficient, and there should be a high threshold for any consent, given the dangers of oppressive and continual mass surveillance. Affect recognition deserves particular attention. Affect recognition is a subclass of facial recognition that claims to detect things such as personality, inner feelings, mental health, and “worker engagement” based on images or video of faces. These claims are not backed by robust scientific evidence, and are being applied in unethical and irresponsible ways that often recall the pseudosciences of phrenology and physiognomy. Linking affect recognition to hiring, access to insurance, education, and policing creates deeply concerning risks, at both an individual and societal level.
  3. The AI industry urgently needs new approaches to governance. As this report demonstrates, internal governance structures at most technology companies are failing to ensure accountability for AI systems. Government regulation is an important component, but leading companies in the AI industry also need internal accountability structures that go beyond ethics guidelines. This should include rank-and-file employee representation on the board of directors, external ethics advisory boards, and the implementation of independent monitoring and transparency efforts. Third party experts should be able to audit and publish about key systems, and companies need to ensure that their AI infrastructures can be understood from “nose to tail,” including their ultimate application and use.
  4. AI companies should waive trade secrecy and other legal claims that stand in the way of accountability in the public sector. Vendors and developers who create AI and automated decision systems for use in government should agree to waive any trade secrecy or other legal claim that inhibits full auditing and understanding of their software. Corporate secrecy laws are a barrier to due process: they contribute to the “black box effect” rendering systems opaque and unaccountable, making it hard to assess bias, contest decisions, or remedy errors. Anyone procuring these technologies for use in the public sector should demand that vendors waive these claims before entering into any agreements.
  5. Technology companies should provide protections for conscientious objectors, employee organizing, and ethical whistleblowers. Organizing and resistance by technology workers has emerged as a force for accountability and ethical decision making. Technology companies need to protect workers’ ability to organize, whistleblow, and make ethical choices about what projects they work on. This should include clear policies accommodating and protecting conscientious objectors, ensuring workers the right to know what they are working on, and the ability to abstain from such work without retaliation or retribution. Workers raising ethical concerns must also be protected, as should whistleblowing in the public interest.
  6. Consumer protection agencies should apply “truth-in-advertising” laws to AI products and services. The hype around AI is only growing, leading to widening gaps between marketing promises and actual product performance. With these gaps come increasing risks to both individuals and commercial customers, often with grave consequences. Much like other products and services that have the potential to seriously impact or exploit populations, AI vendors should be held to high standards for what they can promise, especially when the scientific evidence to back these promises is inadequate and the longer-term consequences are unknown.
  7. Technology companies must go beyond the “pipeline model” and commit to addressing the practices of exclusion and discrimination in their workplaces. Technology companies and the AI field as a whole have focused on the “pipeline model,” looking to train and hire more diverse employees. While this is important, it overlooks what happens once people are hired into workplaces that exclude, harass, or systemically undervalue people on the basis of gender, race, sexuality, or disability. Companies need to examine the deeper issues in their workplaces, and the relationship between exclusionary cultures and the products they build, which can produce tools that perpetuate bias and discrimination. This change in focus needs to be accompanied by practical action, including a commitment to end pay and opportunity inequity, along with transparency measures about hiring and retention.
  8. Fairness, accountability, and transparency in AI require a detailed account of the “full stack supply chain.” For meaningful accountability, we need to better understand and track the component parts of an AI system and the full supply chain on which it relies: that means accounting for the origins and use of training data, test data, models, application program interfaces (APIs), and other infrastructural components over a product life cycle. We call this accounting for the “full stack supply chain” of AI systems, and it is a necessary condition for a more responsible form of auditing. The full stack supply chain also includes understanding the true environmental and labor costs of AI systems. This incorporates energy use, the use of labor in the developing world for content moderation and training data creation, and the reliance on clickworkers to develop and maintain AI systems.
  9. More funding and support are needed for litigation, labor organizing, and community participation on AI accountability issues. The people most at risk of harm from AI systems are often those least able to contest the outcomes. We need increased support for robust mechanisms of legal redress and civic participation. This includes supporting public advocates who represent those cut off from social services due to algorithmic decision making, civil society organizations and labor organizers that support groups that are at risk of job loss and exploitation, and community-based infrastructures that enable public participation.
  10. University AI programs should expand beyond computer science and engineering disciplines. AI began as an interdisciplinary field, but over the decades has narrowed to become a technical discipline. With the increasing application of AI systems to social domains, it needs to expand its disciplinary orientation. That means centering forms of expertise from the social and humanistic disciplines. AI efforts that genuinely wish to address social implications cannot stay solely within computer science and engineering departments, where faculty and students are not trained to research the social world. Expanding the disciplinary orientation of AI research will ensure deeper attention to social contexts, and more focus on potential hazards when these systems are applied to human populations.

 

You can find the full report at the link below:

https://ainowinstitute.org/AI_Now_2018_Report.pdf

AI at Google: Our Principles

Sundar Pichai

CEO

June 7, 2018

At its heart, AI is computer programming that learns and adapts. It can’t solve every problem, but its potential to improve our lives is profound. At Google, we use AI to make products more useful—from email that’s spam-free and easier to compose, to a digital assistant you can speak to naturally, to photos that pop the fun stuff out for you to enjoy.

Beyond our products, we’re using AI to help people tackle urgent problems. A pair of high school students are building AI-powered sensors to predict the risk of wildfires. Farmers are using it to monitor the health of their herds. Doctors are starting to use AI to help diagnose cancer and prevent blindness. These clear benefits are why Google invests heavily in AI research and development, and makes AI technologies widely available to others via our tools and open-source code.

We recognize that such powerful technology raises equally powerful questions about its use. How AI is developed and used will have a significant impact on society for many years to come. As a leader in AI, we feel a deep responsibility to get this right. So today, we’re announcing seven principles to guide our work going forward. These are not theoretical concepts; they are concrete standards that will actively govern our research and product development and will impact our business decisions.

We acknowledge that this area is dynamic and evolving, and we will approach our work with humility, a commitment to internal and external engagement, and a willingness to adapt our approach as we learn over time.

Objectives for AI applications

We will assess AI applications in view of the following objectives. We believe that AI should:

  1. Be socially beneficial. 

The expanded reach of new technologies increasingly touches society as a whole. Advances in AI will have transformative impacts in a wide range of fields, including healthcare, security, energy, transportation, manufacturing, and entertainment. As we consider potential development and uses of AI technologies, we will take into account a broad range of social and economic factors, and will proceed where we believe that the overall likely benefits substantially exceed the foreseeable risks and downsides.

AI also enhances our ability to understand the meaning of content at scale. We will strive to make high-quality and accurate information readily available using AI, while continuing to respect cultural, social, and legal norms in the countries where we operate. And we will continue to thoughtfully evaluate when to make our technologies available on a non-commercial basis.

  2. Avoid creating or reinforcing unfair bias.

AI algorithms and datasets can reflect, reinforce, or reduce unfair biases.  We recognize that distinguishing fair from unfair biases is not always simple, and differs across cultures and societies. We will seek to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.

  3. Be built and tested for safety.

We will continue to develop and apply strong safety and security practices to avoid unintended results that create risks of harm.  We will design our AI systems to be appropriately cautious, and seek to develop them in accordance with best practices in AI safety research. In appropriate cases, we will test AI technologies in constrained environments and monitor their operation after deployment.

  4. Be accountable to people.

We will design AI systems that provide appropriate opportunities for feedback, relevant explanations, and appeal. Our AI technologies will be subject to appropriate human direction and control.

  5. Incorporate privacy design principles.

We will incorporate our privacy principles in the development and use of our AI technologies. We will give opportunity for notice and consent, encourage architectures with privacy safeguards, and provide appropriate transparency and control over the use of data.

  6. Uphold high standards of scientific excellence.

Technological innovation is rooted in the scientific method and a commitment to open inquiry, intellectual rigor, integrity, and collaboration. AI tools have the potential to unlock new realms of scientific research and knowledge in critical domains like biology, chemistry, medicine, and environmental sciences. We aspire to high standards of scientific excellence as we work to progress AI development.

We will work with a range of stakeholders to promote thoughtful leadership in this area, drawing on scientifically rigorous and multidisciplinary approaches. And we will responsibly share AI knowledge by publishing educational materials, best practices, and research that enable more people to develop useful AI applications.

  7. Be made available for uses that accord with these principles.

Many technologies have multiple uses. We will work to limit potentially harmful or abusive applications. As we develop and deploy AI technologies, we will evaluate likely uses in light of the following factors:

 

  • Primary purpose and use: the primary purpose and likely use of a technology and application, including how closely the solution is related to or adaptable to a harmful use
  • Nature and uniqueness: whether we are making available technology that is unique or more generally available
  • Scale: whether the use of this technology will have significant impact
  • Nature of Google’s involvement: whether we are providing general-purpose tools, integrating tools for customers, or developing custom solutions

AI applications we will not pursue

In addition to the above objectives, we will not design or deploy AI in the following application areas:

 

  1. Technologies that cause or are likely to cause overall harm.  Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.
  2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
  3. Technologies that gather or use information for surveillance violating internationally accepted norms.
  4. Technologies whose purpose contravenes widely accepted principles of international law and human rights.

 

We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas. These include cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue. These collaborations are important and we’ll actively look for more ways to augment the critical work of these organizations and keep service members and civilians safe.

AI for the long term

While this is how we’re choosing to approach AI, we understand there is room for many voices in this conversation. As AI technologies progress, we’ll work with a range of stakeholders to promote thoughtful leadership in this area, drawing on scientifically rigorous and multidisciplinary approaches. And we will continue to share what we’ve learned to improve AI technologies and practices.

We believe these principles are the right foundation for our company and the future development of AI. This approach is consistent with the values laid out in our original Founders’ Letter back in 2004. There we made clear our intention to take a long-term perspective, even if it means making short-term tradeoffs. We said it then, and we believe it now.

 

You can find the original text at the link below:

https://blog.google/topics/ai/ai-principles/

Robots and Privacy

Ryan Calo

University of Washington

April 2, 2010

Abstract

“It is not hard to imagine why robots raise privacy concerns. Practically by definition, robots are equipped with the ability to sense, process, and record the world around them. Robots can go places humans cannot go, see things humans cannot see. Robots are, first and foremost, a human instrument. And after industrial manufacturing, the principal use to which we’ve put that instrument has been surveillance.

Yet increasing the power to observe is just one of the ways in which robots may implicate privacy within the next decade. This chapter breaks the effects of robots on privacy into three categories — direct surveillance, increased access, and social meaning — with the goal of introducing the reader to a wide variety of issues. Where possible, the chapter points toward ways in which we might mitigate or redress the potential impact of robots on privacy, but acknowledges that in some cases redress will be difficult under the current state of privacy law.”

 

You can find the full article at the link below:

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1599189