Pushing explanations back into the spotlight



Participants of ExaCt 2009 on day #1

The ability to explain reasoning processes and results can substantially affect the usability and acceptance of a software system. There is no doubt about it. Unfortunately, the topic of explanation has not received proper attention since the demise of expert systems research during the "AI winter". Only recently has explanation been seen as a research topic in its own right again. At my most recent workshop on explanation-aware computing, ExaCt 2009, organised together with Nava Tintarev and David B. Leake and held at IJCAI-09 in Pasadena, CA, several more renowned researchers joined my effort to push and promote the topic of explanation; among them Deborah McGuinness (RPI, USA), Anind Dey (CMU, USA), Ashok Goel (Georgia Tech, USA), Doug Walton (U. of Windsor, CA), and Miltos Petridis (U. of Greenwich, UK). (More researchers can be found on my dedicated website, on-explanation.net.)

So, what do I want to achieve?

First, I want to make software designers and developers aware of explanations. In the long run, I also want software systems themselves to become explanation-aware. To get there, I want to promote the notion of explanation as a research topic in its own right, so that corresponding engineering methods can be developed.

How to get there

A lot of inspiration came from Edward Tufte's book Visual Explanations, which I stumbled upon on the desk of a former colleague years before I took an active interest in explanation. Tufte invites the reader to

enter the cognitive paradise of explanation, a sparkling and exuberant world, intensely relevant to the design of information.

As Artificial Intelligence is about simulating human intelligence, we AI researchers should take Tufte's words to heart. In AI we strive, among other goals, for AI systems that can discover explanations themselves and represent them appropriately in order to communicate with their users. Until that goal is reached, we should at least provide such systems with pre-formulated explanations and representation templates to support human users in their interaction with the system.

Within the field of knowledge-based systems, explanations are considered an important link between humans and machines. Their main purpose is to increase the user's confidence in the system's result (persuasion) or in the system as a whole (satisfaction) by providing evidence of how the solution was derived (transparency). Explanations are part of human understanding processes and of most dialogues, and therefore need to be incorporated into system interactions. But looking at all the effort already invested in explanation research, I think we have only rattled at the gates of the cognitive paradise mentioned above.

Figure: the general explanation scenario with its three participants and their knowledge sources.

A helpful tool for designing and developing software systems from an explanation-aware viewpoint is the general explanation scenario depicted here, with three participants: user, originator, and explainer. The user communicates with the software system as a whole through a user interface (UI) and is the recipient of explanations. The originator is the tool the user works with to perform tasks and solve problems. The explainer can be seen as another tool that helps the user understand how the originator works and what knowledge it uses. Note that this scenario is a simplification insofar as it does not cover the case where the software system asks the user for explanations or justifications.

Explainer and originator need to have knowledge about each other. The originator must expose its reasoning process, including intermediate results and decisions, for the explainer to generate good explanations. The relationship between the two is asymmetrical with regard to the knowledge used: the explainer needs access to the originator's knowledge base (in addition to its own explanation-support knowledge base), but the reverse does not hold. For its problem-solving task the originator does not need access to the explainer's knowledge base; it merely needs to be aware of it in order to fill it appropriately.
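To make this asymmetry concrete, here is a minimal Python sketch of the scenario. All names in it (Originator, Explainer, ReasoningStep, the toy rule "R1") are illustrative assumptions of mine, not an API from the workshop or any published system; the only point is that the originator records a trace the explainer can read, while the explainer's knowledge base plays no role in the originator's problem solving.

from dataclasses import dataclass

@dataclass
class ReasoningStep:
    """One intermediate decision recorded during problem solving."""
    rule: str        # identifier of the knowledge item that was applied
    inputs: list     # what the step was based on
    conclusion: str  # the (intermediate) result

class Originator:
    """The tool the user works with. It records a reasoning trace for the
    explainer but never reads the explainer's knowledge base."""

    def __init__(self, knowledge_base: dict):
        self.knowledge_base = knowledge_base
        self.trace = []

    def solve(self, problem: str) -> str:
        # Stand-in for real reasoning: apply the first matching rule and
        # record the step so it can be explained later.
        for rule_id, conclusion in self.knowledge_base.items():
            self.trace.append(ReasoningStep(rule_id, [problem], conclusion))
            return conclusion
        return "no solution"

class Explainer:
    """Generates explanations from the originator's trace and knowledge,
    combined with its own explanation-support knowledge."""

    def __init__(self, originator: Originator, explanation_kb: dict):
        self.originator = originator          # read access: the asymmetry
        self.explanation_kb = explanation_kb  # glosses on the originator's rules

    def explain(self) -> str:
        lines = []
        for step in self.originator.trace:
            gloss = self.explanation_kb.get(step.rule, "no gloss available")
            lines.append(f"Applied {step.rule} to {step.inputs}: "
                         f"{step.conclusion} ({gloss})")
        return "\n".join(lines)

# The user interacts with both tools through the UI:
originator = Originator({"R1": "solution A"})
explainer = Explainer(originator, {"R1": "R1 is the default heuristic"})
print(originator.solve("my problem"))  # the task result
print(explainer.explain())             # transparency for the user

Note that in this sketch the explanation-support knowledge is authored alongside the originator's rules, which is roughly what "being aware of the explainer's knowledge base in order to fill it" amounts to in practice.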

Community building

Making people (researchers, software designers and developers, and, not least, funding organisations) aware of explanation-awareness (pun intended) is not a one-man show but a community effort. At last year's ExaCt 2008 workshop the idea of a manifesto came up as a way of expressing the goals of the developing explanation community. A first version is available here.

If you would like to participate in further discussions, or just to receive information on this topic and on future workshops, consider joining the Yahoo! group explanation-research. Do you think that you or someone else belongs on the list of explanation researchers? Drop me a line with a URL to the researcher's homepage and an indication of the relevant research.
