Explanations are always tied to some goal or intention: one wants to understand something better. Questions are asked either explicitly or implicitly during a conversation, and the conversation partner need not even be another person (or a computer program). Often we ask questions of ourselves, e.g., to comprehend what we are reading or how a system works. Currently, I am working with two students on two scenarios where I think explanations are helpful.
Scenario 1: Similarity measure modeling in case-based reasoning (CBR) is a complex task. In commercial settings, consultants help the users of CBR tools, i.e., the knowledge engineers to be, learn the necessary skills. The goal of explanation I pursue here is to collect the questions that knowledge engineers ask the consultants and to transfer some of the rules of thumb that CBR consultants use into the system itself. First implementation results of this work will be available soon; check the myCBR homepage.
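To illustrate what a knowledge engineer actually has to model, here is a minimal sketch of the common CBR pattern of a global similarity measure built as a weighted sum of local, per-attribute similarities. This is a generic illustration, not the myCBR API; the attribute names, weights, and local measures are invented for the example.

```python
# Sketch of CBR similarity modeling (hypothetical, not the myCBR API):
# the global similarity is a weighted sum of local similarities.

def numeric_sim(q, c, max_range):
    """Local similarity for numbers: 1.0 when equal, falling off linearly."""
    return max(0.0, 1.0 - abs(q - c) / max_range)

def symbol_sim(q, c):
    """Local similarity for symbols: exact match only."""
    return 1.0 if q == c else 0.0

def global_sim(query, case, model):
    """Weighted sum of local similarities, normalized by the total weight."""
    total = sum(w for _, w in model.values())
    score = sum(w * local_fn(query[a], case[a])
                for a, (local_fn, w) in model.items())
    return score / total

# Example model: comparing used cars by price and color (invented data).
model = {
    "price": (lambda q, c: numeric_sim(q, c, max_range=20000), 0.7),
    "color": (symbol_sim, 0.3),
}
query = {"price": 10000, "color": "red"}
case = {"price": 12000, "color": "blue"}
print(round(global_sim(query, case, model), 2))  # prints 0.63
```

Choosing the local measures, ranges, and weights is exactly the kind of decision where a consultant's rule of thumb could be surfaced as an explanation.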
Scenario 2: Applications built using Nepomuk standards and components will be knowledge-based systems of high complexity. Users of such applications will need support, especially where automatisms enrich already entered knowledge. The gnowsis system already shows how complex the upcoming Nepomuked applications will become. One of its components, the so-called Rebirth Machine, already generates new concepts in the personal information model (PIMO) from data on your desktop (also known as lifting). For example, the Rebirth Machine generates PIMO Persons from address book entries. As there is some "magic" involved, the user might wonder where certain concepts came from (knowledge provenance) and how they were generated and/or interconnected (cognitive explanations).
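The lifting step described above can be sketched as follows: each generated concept carries provenance metadata, so the system can later answer "where did this come from?". This is a hypothetical illustration, not the actual Rebirth Machine code; the concept type, field names, and sample data are assumptions.

```python
# Hypothetical sketch of "lifting": generating personal-information-model
# concepts from address book entries while recording provenance metadata,
# so the system can later explain where each concept came from.

def lift_contacts(entries):
    """Create one Person concept per address book entry, tagging each
    concept with the source it was derived from (knowledge provenance)."""
    concepts = []
    for entry in entries:
        concepts.append({
            "type": "pimo:Person",            # assumed concept type
            "label": entry["name"],
            "provenance": {
                "source": entry["source"],    # e.g. the address book file
                "generated_by": "lifting",    # which automatism created it
            },
        })
    return concepts

# Invented sample data for illustration.
entries = [{"name": "Alice", "source": "addressbook.vcf"}]
persons = lift_contacts(entries)
# The provenance record is the raw material for an explanation:
print(persons[0]["provenance"]["source"])  # prints addressbook.vcf
```

Keeping such a provenance record per generated concept is one straightforward basis for both knowledge-provenance answers and the richer cognitive explanations mentioned above.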