I have been using the term explanation-aware for quite a while now, but noticed only today that I have not written about it here. Well, it’s about time, isn’t it?
Complex information and knowledge-based systems require more intelligent handling of the communication between system and user. The user needs the opportunity to ask about unknown concepts or, more generally, about what is going on in the system. My current research aims at improving system responses by working towards explanation-awareness in such systems. Systems that intend to exhibit explanation-awareness must be more than simple reactive systems.
The basic explanation scenario (see figure) has three participants: user, originator, and explainer. The originator is the problem solver: it achieves the main goal of the system, i.e., providing decision support of some kind. The explainer’s task is to answer the questions the user has about concepts used by the originator, about how the originator came to a conclusion, or about why it presented some decision. Explainer and originator need to be coupled quite tightly in order to provide detailed explanations, but the explainer also needs its own knowledge base, which goes beyond the problem-solving knowledge of the originator.
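To make the division of labour concrete, here is a minimal sketch of the scenario in Python. All class and method names (Originator, Explainer, solve, explain_concept, explain_decision) are hypothetical illustrations of the roles described above, not part of any existing framework; the toy scoring rule merely stands in for real decision support.

```python
class Originator:
    """The problem solver: achieves the system's main goal."""

    def __init__(self):
        # Trace of reasoning steps, kept so the explainer can refer to them.
        self.trace = []

    def solve(self, case):
        # Toy decision logic standing in for a real decision-support task.
        score = case.get("score", 0)
        decision = "approve" if score >= 50 else "reject"
        self.trace.append(f"score={score} -> {decision}")
        return decision


class Explainer:
    """Answers user questions. Tightly coupled to the originator (it can
    inspect the reasoning trace), but with its own explanation knowledge
    base that goes beyond the originator's problem-solving knowledge."""

    def __init__(self, originator, knowledge_base):
        self.originator = originator
        self.kb = knowledge_base

    def explain_concept(self, concept):
        # "What is X?" questions, answered from the explainer's own KB.
        return self.kb.get(concept, f"No explanation available for '{concept}'.")

    def explain_decision(self):
        # "How did you get there?" questions, answered from the trace.
        return "; ".join(self.originator.trace)


originator = Originator()
explainer = Explainer(originator, {"score": "A numeric rating of the case."})

print(originator.solve({"score": 72}))        # approve
print(explainer.explain_concept("score"))
print(explainer.explain_decision())
```

Note how the user only ever talks to the explainer, while the originator stays focused on problem solving; the coupling point is the trace the originator leaves behind.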
When the term awareness is used in conjunction with explanation, it implies consciousness about explanation: a system that exhibits explanation-awareness is capable of reasoning about explanations. Calling an entity aware is a strong statement about its capabilities. Since being knowledgeable is central to being aware, some kind of reasoning capability, or intelligence, is implied. Thus, a computer system that aims at becoming explanation-aware must treat explanations at the knowledge level.
Many open questions remain to be phrased and solved on the way to a lightweight framework and a methodology for constructing explanation-aware information systems.