Four Principles of Explainable Artificial Intelligence

August 2020

 

Abstract

“We introduce four principles for explainable artificial intelligence (AI) that comprise the fundamental properties for explainable AI systems. They were developed to encompass the multidisciplinary nature of explainable AI, including the fields of computer science, engineering, and psychology. Because one size fits all explanations do not exist, different users will require different types of explanations. We present five categories of explanation and summarize theories of explainable AI. We give an overview of the algorithms in the field that cover the major classes of explainable algorithms. As a baseline comparison, we assess how well explanations provided by people follow our four principles. This assessment provides insights to the challenges of designing explainable AI systems.”

 

You can find the original document below:

NIST%20Explainable%20AI%20Draft%20NISTIR8312%20%281%29.pdf

Explaining AI Decisions

ICO & Alan Turing Institute

2 December 2019

The ICO and The Alan Turing Institute (The Turing) have launched a consultation on our co-badged guidance, Explaining decisions made with AI. This guidance aims to give organisations practical advice to help explain the processes, services and decisions delivered or assisted by AI, to the individuals affected by them.  

Increasingly, organisations are using artificial intelligence (AI) to support, or to make, decisions about individuals. If this is something you do, or something you are thinking about, this guidance is for you.

We want to ensure this guidance is practically applicable in the real world, so organisations can easily use it when developing AI systems. This is why we are requesting feedback.

The guidance consists of three parts. Depending on your level of expertise, and the make-up of your organisation, some parts may be more relevant to you than others. You can pick and choose the parts that are most useful.

The survey will ask you about all three parts, but you can answer as few or as many questions as you like.

Part 1: The basics of explaining AI defines the key concepts and outlines a number of different types of explanation. It will be relevant for all members of staff involved in the development of AI systems.

Part 2: Explaining AI in practice helps you with the practicalities of explaining these decisions and providing explanations to individuals. It will primarily be helpful for the technical teams in your organisation; however, your Data Protection Officer (DPO) and compliance team will also find it useful.

Part 3: What explaining AI means for your organisation goes into the various roles, policies, procedures and documentation that you can put in place to ensure your organisation is set up to provide meaningful explanations to affected individuals. It is primarily targeted at your organisation’s senior management team; however, your DPO and compliance team will also find it useful.

You can send your thoughts via email to [email protected]

 

You can find the original guidance at the link below:

https://ico.org.uk/media/about-the-ico/consultations/2616441/explain-about-this-guidance.pdf