Explanation-aware Thinking, Design and Computing

I will be giving a talk at the BCS event "Real Artificial Intelligence", a special one-day event showcasing practical applications of Artificial Intelligence, on Friday, 7 October 2016, at the British Computer Society London office in Covent Garden. You can register online here.

Title: Explanation-aware Thinking, Design and Computing

Abstract: Explanation, trust, and transparency are concepts that are strongly associated with information systems. The ability to explain reasoning processes and results can substantially affect the usability and acceptance of a software system. Within the field of knowledge-based systems, explanations are an important link between humans and machines. There, their main purpose is to increase the user's confidence in the system's result (persuasion) or in the system as a whole (satisfaction) by providing evidence of how the solution was derived (transparency). In recommender systems, for example, good explanations can help to inspire user trust and loyalty, and make it quicker and easier (efficiency) for users to find what they want (effectiveness). This talk presents important concepts for analysing and developing software systems with explanation capabilities and illustrates them with example implementations.
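The distinction between these explanation aims can be made concrete with a little code. Below is a minimal sketch of my own (not from the talk; all names and data are invented) of a recommender that keeps the evidence behind its suggestion, so it can offer a transparency-style explanation alongside the result:

```python
# Minimal sketch: a recommendation that carries its own evidence.
# Illustration only; names and data are invented.
from dataclasses import dataclass, field

@dataclass
class Explained:
    item: str
    score: float
    reasons: list = field(default_factory=list)  # evidence for the result

def recommend(preferences: dict, catalogue: dict) -> Explained:
    """Score items by overlapping features, keeping the matched evidence."""
    best = Explained(item="", score=0.0)
    for item, features in catalogue.items():
        matched = sorted(set(preferences) & set(features))
        score = sum(preferences[f] for f in matched)
        if score > best.score:
            best = Explained(item, score,
                             [f"matches your preference '{f}'" for f in matched])
    return best

if __name__ == "__main__":
    prefs = {"thriller": 0.9, "short": 0.4}
    catalogue = {"Book A": {"thriller", "short"}, "Book B": {"romance"}}
    rec = recommend(prefs, catalogue)
    print(rec.item, "because:", "; ".join(rec.reasons))
```

The same evidence could just as well be phrased to persuade ("you will like this because …"); the point is that the system retains the material an explanation needs instead of discarding it.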


Report: 12th Social Study of ICT workshop (SSIT12)

Health Information Systems: Searching the Past – Finding a Future

Hosted by the London School of Economics on 18 April 2012, the 12th Social Study of ICT workshop (SSIT12) looked at the past and the future of Healthcare Information Technology (HIT). The workshop series is organised by the Information Systems and Innovation Group.

The keynote speakers focused on questions such as "how helpful is information technology for patients, practice, or payers?" and on the important role of "open". Both speakers, Ross Koppel (University of Pennsylvania) and Bill Aylward (Moorfields Eye Hospital NHS Trust), highlighted the problem of closed systems and the feeling of being held hostage by HIT vendors.

Ross Koppel gave many examples of badly designed user interfaces in healthcare information systems, sometimes with deadly consequences, e.g., when a dosage is calculated wrongly. He showed how people work around software issues, again sometimes with bad consequences for patients. Bill Aylward then focused on ideas of openness and transparency in open source development and bug tracking as a way of dealing with quality issues. Developers and HIT users are often very far apart during software development. Open Eyes shows how to bring them closer together in an open source project.

For Bill Aylward, HIT should be more like air traffic control software, with problem-focused user interfaces and swift response times. Instead, HIT has its data all over the place, requiring its users to wait 2-6 minutes on average just to open a patient record. His vision: an ecosystem of apps, as on iOS devices such as the iPhone, where data is shared but apps are independent.

The other speakers explored the "consequences of using electronic patient records in diverse clinical settings" (Maryam Ficociello, Simon Fraser University), viewed "evaluation as a multi-ontological endeavour" (Ela Klecun, LSE), and took us on a "Journey to DOR: A Retro Science-Fiction Story on researching ePrescribing" (Valentina Lichtner, City University). The last session closed with talks on "Real People, Novel Futures, Durable Presents" (Margunn Aanestad, University of Oslo) and "Awaiting an Information Revolution" (Amir Takian, Brunel University).

The speakers provided lots of evidence for the need for software that can explain (at least some of) the software engineer's design rationale in order to bridge the gap between software engineer and user. Bringing them together, as in the Open Eyes project, is one way of dealing with the issue. But not all users can be included in development. New users will not know about the design rationale and will not have access to the respective software engineers. This is where explanation-aware software design (EASD) comes into play. EASD aims at making software systems smarter in their interactions with users by providing information such as background information, justifications, and provenance information.
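To make the EASD idea concrete, here is a minimal sketch, entirely my own and not a reference implementation, of a result object that travels together with exactly those kinds of information. The dosage rule and its source are invented for illustration (deliberately echoing Ross Koppel's dosage example):

```python
# Minimal EASD-style sketch: the answer is wrapped together with a
# justification, provenance, and background information. The rule and
# source below are hypothetical.
from dataclasses import dataclass

@dataclass
class ExplainedResult:
    value: str
    justification: str   # why this result follows from the inputs
    provenance: str      # where the rule and data came from
    background: str      # design rationale a new user may lack

def max_daily_dose(weight_kg: float) -> ExplainedResult:
    dose = round(weight_kg * 15, 1)  # hypothetical 15 mg per kg rule
    return ExplainedResult(
        value=f"{dose} mg",
        justification=f"15 mg/kg x {weight_kg} kg = {dose} mg",
        provenance="rule 'dose-15mgkg' from a (hypothetical) formulary v2",
        background="A per-weight rule was chosen so that clinicians can "
                   "check the calculation at a glance.",
    )

if __name__ == "__main__":
    result = max_daily_dose(70)
    print(result.value, "|", result.justification, "|", result.provenance)
```

Attaching the explanation to the result, rather than reconstructing it afterwards, keeps the justification from drifting apart from the actual computation.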

Workshop programme

Report: 7th International and Interdisciplinary Conference on Modelling and Using Context (CONTEXT 2011)

After a four-year hiatus the CONTEXT conference series came back to life, presenting itself as professional and ambitious as ever. The Seventh International and Interdisciplinary Conference on Modeling and Using Context (CONTEXT) took place in Karlsruhe, Germany. It brought together researchers and practitioners from a wide range of disciplines. Thanks to Prof. Michael Beigl and his team, most notably Dr. Hedda Schmidtke, the conference turned out to be a great event. About 70 participants enjoyed the conference venue, the Karlsruhe Institute of Technology. Most participants attended nearly every talk, which shows how well the conference was received and how interesting the talks were, even when they came from fields outside one's own and were not always easy to follow. The conference dinner at the Centre for Art and Media (ZKM) was a highlight of the event.

Three invited talks marked the milestones of the main conference. On Wednesday morning, Jerry Hobbs reflected on "Discourse Interpretation in Context". The second keynote was given by Ruth Kempson, King's College London, on "Ellipsis in Conversational Dialogue". Even though the last invited talk was on the morning after the conference dinner, Paul Holleis had many listeners for his talk on "Explicit, Generic, and Social Context".

Of the three workshops before the main conference, I attended the one on Modelling and Reasoning in Context (MRC). The one-and-a-half-day workshop was a lively event that I enjoyed very much. In between the eight presentations, two panel discussions and an open discussion gave lots of opportunities to look at context from different angles.

The CONTEXT community decided to start a wiki to collect information about the topic of context and the people working on it. Stay tuned. More information coming up soon.

Report: 19th International Conference on Case-Based Reasoning ICCBR 2011


I never would have thought that ICCBR could get better for me, but it did, thanks to great colleagues and friends.

Last year's conference marked a turning point in my career: I had just been offered a visiting professor position at the University of Hildesheim, which in turn led to my new position at the University of West London. This year's ICCBR, organised by Ashwin Ram, Nirmalie Wiratunga, and Miltos Petridis at the University of Greenwich just across town, was my first conference as Professor in Computing, which felt quite nice 🙂 The conference started with a Doctoral Consortium, where Edwina Rissland and I had been invited by David Aha to give talks about our respective careers. The students' slides as well as mine will be made available on the DC homepage.

Recurring themes at the conference were reasoning and explanation. Ashwin Ram reported on an analysis of submission topics that still showed an emphasis on retrieval. He encouraged the community to work more on reasoning and learning. This was very much in line with my workshop on "Human-Centered and Cognitive Approaches to CBR", co-organised with Jörg Cassens, Anders Kofod-Petersen, Stewart Massie, and Sutanu Chakraborti. I would like to point out David Leake's talk on "Assembling Latent Cases from the Web – A Challenge Problem for Cognitive CBR". As David and I have run quite a few workshops on explanation, and as the topic is central to David's research, it is no surprise that explanation played a central role. His vision of the Web as a huge case base received quite some attention.

In parallel, workshops on "Process-oriented Case-Based Reasoning (PO-CBR)" and "Case-Based Reasoning for Computer Games" were held.

The first invited talk was given by Kris Hammond: "Reasoning as search: supporting intelligence with distributed memory". He reviewed his work on CBR and current projects, nicely underlining Ashwin Ram's encouragement of more research on reasoning. Kris first led us from "reasoning is remembering" to "reasoning is JUST remembering". Because "plan modifications are just damned hard", one needs to "pull modifications out of the hands of the machine and put them in the hands of the user". He then led us through "Reasoning is (just) remembering other people's stuff" (from CHEF to FAQfinder), "Reasoning is search", "Reasoning is structure", and "Reasoning is knowing", and finally back to "reasoning is remembering". Go to his homepage to learn more about his research.
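As a toy illustration of "reasoning is remembering", here is a small case-based retrieval sketch of my own (not from Kris's talk): rather than deriving an answer from first principles, the system retrieves the most similar stored case and reuses its solution:

```python
# Toy CBR retrieval: reasoning as remembering the most similar case.
# Illustration only; the similarity measure and cases are invented.
def similarity(a: dict, b: dict) -> float:
    """Fraction of attribute-value pairs on which both descriptions agree."""
    keys = set(a) | set(b)
    return sum(a.get(k) == b.get(k) for k in keys) / len(keys)

def retrieve(query: dict, case_base: list) -> dict:
    """Nearest-neighbour retrieval over (problem, solution) cases."""
    return max(case_base, key=lambda case: similarity(query, case["problem"]))

case_base = [
    {"problem": {"cuisine": "thai", "spicy": True}, "solution": "red curry"},
    {"problem": {"cuisine": "thai", "spicy": False}, "solution": "pad thai"},
]
print(retrieve({"cuisine": "thai", "spicy": True}, case_base)["solution"])
```

Everything beyond this, adapting the remembered solution to the new situation, is where the "just damned hard" plan modifications come in.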

The second invited speaker was Steffen Staab. He talked about "Ontologies and similarity". The focus was very much on ontologies and less on similarity. I very much liked his CBR view of Linked Data, where "cases are metadata without frontiers".

All in all, ICCBR was well organised and fun to attend. Next year ICCBR will be held in Lyon, France. I am looking forward to it!

Explanation-capable Software Systems: A New Software Paradigm?


Invited talk at the Universität Kassel on 30 June 2011

Explanation capabilities already have some history in knowledge-based systems. Explanation, trust, and transparency are closely interlinked topics. One trusts a (software) system more if it can explain what it does and why it arrived at a decision. In doing so, it "proves" its trustworthiness to the user. Explanations are an important part of human processes of understanding and communication. They should therefore be an inherent part of (knowledge-based) systems. The talk presents important concepts for analysing and developing explanation capabilities for software systems and illustrates them with example applications.

[From Universität Kassel: 30.06.2011]

Missing out on ExaCt 2011 fun

The workshop series on Explanation-aware Computing is my baby, and I am saddened that I cannot participate this year due to my move to the UK. To my great relief, my co-organisers Nava Tintarev and David Leake will more than make up for my absence. Have a look at the agenda to see what's in store for all you explanation enthusiasts out there.

Besides many interesting refereed contributions (see the proceedings here), participants will discuss such topics as "What are the common properties of explanation systems? Can a high-level model be devised?" and "How do evaluation criteria relate to each other?". At least, these are our suggested topics; other topics may come up and be covered by attendees. Another highlight will be Nava's talk, in which she addresses the question "What makes good explanations?".

Visualising EXPONO

When preparing the EXPONO project proposal, this image of Princess Leia and R2D2 came to my mind. It is a good example of human-computer interaction where explanations are required, at least for the audience, about what the droid "said". Unfortunately, copyright prohibits us from using it for a logo.

(r2 & leia stamp; originally uploaded by bep1972)

EXPONO – Enhanced human-robot interaction through explanations and adaptation


Same, but different … this year's first try at getting EU funding, this time as an affiliate of the University of Hildesheim.

Led again by my colleague and friend Anders Kofod-Petersen from SINTEF (Norway), we submitted the project proposal "EXPONO: Enhanced human-robot interaction through explanations and adaptation" yesterday. It addresses objective "2.1 Cognitive systems and robotics" of ICT Call 7, which had quite a different emphasis from last year's Call 6. Here is the proposal's short summary:

The long-term vision for service robotics is to produce autonomous systems that interact in an intelligent way with their environment, and adapt their behaviour to match the evolving expectations of their users. EXPONO will make it technically feasible and economically viable to achieve enhanced interaction and learning capabilities in service robots, through the use of explanation and adaptation techniques.

For robot users, EXPONO will lead to increased acceptance of robots in real-world situations, since a robot's ability to explain its behaviour will lead to greater trust, while a user's ability to influence and correct robot behaviours will result in more versatile robotic applications.

For service robot developers, EXPONO will provide the means to conceive, design, implement and deploy innovative service robots that can generate explanations of their decisions and perceptual processes, and optimize future behaviours.

EXPONO will achieve these goals by:

(a) providing the computational foundations for developing explanation-aware robots;

(b) producing re-usable algorithms and engineering tools that allow companies to incorporate explanation-aware technologies in their service robot development; and

(c) developing three diverse real-world applications to encourage widespread adoption.

In co-operation with robotics networks such as EURON and EUROP, the community of researchers and industrial developers interested in explanation-aware techniques will be expanded to relevant stakeholders in robotics. Potential partners will be addressed through industry workshops, summer schools, trade fair presentations and publications.

Besides SINTEF, the project partners comprise the Norwegian University of Science and Technology (NTNU), Aldebaran Robotics (France), Entertainment Robotics (Denmark), Fraunhofer IPA (Germany), and the Dublin Institute of Technology (DIT).
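To make goal (a) a little more tangible, here is a hypothetical sketch, entirely my own and no part of the proposal, of a robot that logs the rule and observations behind each decision so that it can explain its behaviour on demand:

```python
# Hypothetical sketch of an explanation-aware robot: every decision is
# logged with the triggering rule and observations, so the robot can
# justify its behaviour afterwards. Rules and names are invented.
class ExplainableRobot:
    def __init__(self):
        self.log = []  # one (action, rule, observations) entry per decision

    def decide(self, observations: dict) -> str:
        if observations.get("obstacle_cm", 999) < 30:
            action, rule = "stop", "halt if an obstacle is closer than 30 cm"
        else:
            action, rule = "advance", "advance while the path is clear"
        self.log.append((action, rule, dict(observations)))
        return action

    def explain_last(self) -> str:
        action, rule, obs = self.log[-1]
        return f"I chose to {action} because of the rule '{rule}' given {obs}."

robot = ExplainableRobot()
robot.decide({"obstacle_cm": 12})
print(robot.explain_last())
```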

6th Workshop on Explanation-aware Computing ExaCt 2011, Barcelona, Spain




Both within AI systems and in interactive systems, the ability to explain reasoning processes and results can substantially affect system usability. For example, in recommender systems good explanations may help to inspire user trust and loyalty, increase satisfaction, make it quicker and easier for users to find what they want, and persuade them to try or buy a recommended item.

The workshop series aims to draw on multiple perspectives on explanation, to examine how explanation can be applied to further the development of robust and dependable systems, and to illuminate system processes to increase user acceptance and feeling of control. ExaCt 2011 will be held in Barcelona, Spain, in conjunction with the 22nd International Joint Conference on Artificial Intelligence (IJCAI-11).

Suggested topics for contributions (not restricted to IT views):

  • Models and knowledge representations for explanations
  • Quality of explanations; understandability
  • Integrating application and explanation knowledge
  • Explanation-awareness in (designing) applications
  • Methodologies for developing explanation-aware systems
  • Explanations and learning
  • Context-aware explanation vs. explanation-aware context
  • Confidence and explanations
  • Privacy, trust, and explanation
  • Empirical studies of explanations
  • Requirements and needs for explanations to support human understanding
  • Explanation of complex, autonomous systems
  • Co-operative explanation
  • Visualising explanations
  • Dialogue management and natural language generation
  • Human-Computer Interaction (HCI) and explanation

Important dates (not finalised yet):

  • Workshop paper submission deadline: March 2011
  • Notification of workshop paper acceptance: April 2011
  • Camera-ready copy submission: May 2011
  • Workshop (two days): July 16-18, 2011

Read the complete call for papers on the workshop website.

Constructing Understandable Explanations for Semantic Search Results


I am also happy that another paper has been accepted for publication at EKAW 2010 – Knowledge Engineering and Knowledge Management by the Masses, Lisbon, Portugal:

Constructing Understandable Explanations for Semantic Search Results by Björn Forcher, Thomas Roth-Berghofer, Michael Sintek, and Andreas Dengel

Abstract. The research project MEDICO aims to develop an intelligent, robust, and scalable semantic search engine for medical images. The search engine of the MEDICO demonstrator RadSem is based on formal ontologies and designated for different kinds of users such as medical doctors or patients. Since semantic search results are often hard to understand, an explanation facility was integrated into RadSem. For justifying search results, the facility applies the same ontologies as RadSem by showing a connection between query and result. The constructed explanations are depicted as semantic networks containing various medical concepts and labels. This paper addresses the tailoring of justifications to different kinds of users regarding such quality aspects as understandability or amount of information. The presented user experiment shows that under certain conditions the quality of justifications can be pre-estimated by considering the usage frequency of medical terms in natural language. [This research work was supported in part by the research program THESEUS in the MEDICO project, funded by the German Federal Ministry of Economics and Technology (01MQ07016). Responsibility for this publication lies with the authors.]
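The core idea of such justifications, showing a connection between query and result in the ontology, can be sketched in a few lines. The snippet below is my own illustration and not the RadSem implementation; the tiny "ontology" is invented:

```python
# Sketch: justify a search result by finding a path of labelled
# relations from the query concept to the result concept.
# Illustration only; the graph below is invented.
from collections import deque

def connection(graph: dict, query: str, result: str):
    """Breadth-first search for a path of labelled relations."""
    queue = deque([(query, [query])])
    seen = {query}
    while queue:
        node, path = queue.popleft()
        if node == result:
            return path
        for relation, neighbour in graph.get(node, []):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append((neighbour, path + [relation, neighbour]))
    return None

ontology = {
    "lymphoma": [("is_a", "neoplasm"), ("has_finding_site", "lymph node")],
    "lymph node": [("part_of", "lymphatic system")],
}
print(" -> ".join(connection(ontology, "lymphoma", "lymphatic system")))
```

Rendered as a small semantic network, such a path is the kind of justification the abstract describes; the question the paper then addresses is how understandable each node label is for a given user.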
