Update: Handout of slides
Explanation, trust, and transparency are concepts that are strongly associated with information systems. One trusts a computer system much more if it can explain what it is doing and thus "prove" its trustworthiness. An information system (and even more so a knowledge-based one) should be able to explain at any point why it prefers solution A over solution B. Furthermore, it should be able to explain the meaning of the concepts it uses and where an information item originally came from ("knowledge provenance"). Explanations are part of human understanding processes and of most dialogs, and therefore need to be incorporated into system interactions, for example to improve decision-making processes. As information systems grow more and more complex, computer support is needed, and it becomes increasingly important for computer systems to have advanced explanation capabilities.
Complex personal information systems will be among the results of the EU project Nepomuk – The Social Semantic Desktop. These systems will allow annotating and linking arbitrary information objects on one's desktop, such as documents, emails, address book entries, photos, and bookmarks. In addition, Nepomuk-based systems such as gnowsis automatically crawl and classify (or tag) information objects, thus linking them to one's personal information model ontology (PIMO). (For example, address book entries become PIMO persons.) As more complex lifting operations are under development, the need for explanations becomes obvious.
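The lifting step described above can be sketched roughly as follows. This is a hypothetical illustration, not the actual gnowsis or Nepomuk API: the function name `lift_contact` and the dictionary-based resource representation are assumptions, though `pimo:Person` and the idea of a grounding occurrence follow the PIMO vocabulary.

```python
# Illustrative sketch (names are hypothetical, not the gnowsis API):
# "lifting" a crawled address-book entry into a PIMO-person-like concept,
# while recording which desktop resource it was derived from so the
# system can later explain the item's provenance.

def lift_contact(entry):
    """Map a raw address-book record to a PIMO-person-like resource."""
    return {
        "type": "pimo:Person",
        "label": entry["name"],
        "email": entry.get("email"),
        # Provenance link: the desktop resource this concept came from.
        "groundingOccurrence": entry["source_uri"],
    }

entry = {
    "name": "Ada Lovelace",
    "email": "ada@example.org",
    "source_uri": "file:///home/ada/contacts.vcf#1",
}
person = lift_contact(entry)
print(person["type"], "derived from", person["groundingOccurrence"])
```

Keeping the source URI on the lifted concept is what lets a system answer the provenance question "where did this information item originally come from?".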
Case-Based Reasoning (CBR) systems are another example of complex information systems. In e-commerce scenarios, the case base is often filled from product catalogs. Modeling the similarity measures for products is a complex task for the knowledge engineer, who can be supported by explanations about the structure and content of the case base. Such support features are currently being implemented in the open-source CBR tool myCBR. First results will be shown.
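To make the knowledge engineer's modeling task concrete, here is a minimal sketch of the kind of similarity measure meant above: a weighted-sum ("local-global") measure over product attributes. The attribute names, weights, and ranges are invented for illustration; this is not myCBR's actual API.

```python
# Illustrative sketch (not the myCBR API): a weighted-sum similarity
# measure over product attributes, combining per-attribute "local"
# similarities into one "global" score.

def numeric_sim(a, b, value_range):
    """Local similarity for numeric attributes: 1 minus normalized distance."""
    return 1.0 - abs(a - b) / value_range

def symbol_sim(a, b):
    """Local similarity for symbolic attributes: exact match only."""
    return 1.0 if a == b else 0.0

def global_sim(query, case, weights, ranges):
    """Global similarity: weighted average of local similarities."""
    total = 0.0
    for attr, weight in weights.items():
        if attr in ranges:  # numeric attribute with a known value range
            total += weight * numeric_sim(query[attr], case[attr], ranges[attr])
        else:               # symbolic attribute
            total += weight * symbol_sim(query[attr], case[attr])
    return total / sum(weights.values())

# Example: comparing a product query against one catalog case.
query   = {"price": 900, "ram_gb": 16, "brand": "Acme"}
case    = {"price": 1100, "ram_gb": 16, "brand": "Acme"}
weights = {"price": 0.5, "ram_gb": 0.3, "brand": 0.2}
ranges  = {"price": 2000, "ram_gb": 64}

print(round(global_sim(query, case, weights, ranges), 3))  # → 0.95
```

Explaining why a case scored 0.95 then amounts to showing the per-attribute local similarities and weights, which is exactly the kind of transparency about the case base that such support features aim at.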