I am organising another workshop on Explanation-aware Computing (ExaCt 2008), supported by my colleagues and friends David B. Leake, Stefan Schulz, and Daniel Bahls. This time ExaCt takes place on 21 July as part of ECAI 2008 in Patras, Greece.
Here is the main part of the call for papers:
The increasing complexity of current knowledge-based systems requires improved explanation capabilities. During the height of expert systems research, many workshops (including at ECAI) addressed the issue of explanation capabilities. However, as expert systems research declined, AI explanation research dwindled as well. The time is now ripe for renewed investigations of explanation in AI.
Other disciplines such as cognitive science, linguistics, philosophy of science, psychology, and education have investigated explanation as well. They consider varying aspects, making it clear that there are many different views of the nature of explanation and facets of explanation to explore. Within the field of knowledge-based systems, explanations have been considered as an important link between humans and machines. There, their main purpose has been to increase the confidence of the user in the system’s result, by providing evidence of how it was derived. Additional AI research has focused on how computer systems can themselves use explanations, for example to guide learning.
Both within AI systems and in interactive systems, the ability to explain reasoning processes and results can have substantial impact. Current interest in mixed-initiative systems provides a new context in which explanation issues may play a crucial role. When knowledge-based systems are partners in an interactive socio-technical process, with incomplete and changing problem descriptions, communication between humans and software systems is central. Explanations exchanged between human agents and software agents may play an important role in mixed-initiative problem solving.
This workshop series aims to draw on multiple perspectives on explanation, to examine how explanation can be applied to further the development of robust and dependable systems, and to illuminate system processes in order to increase users' acceptance and feeling of control.
GOALS AND AUDIENCE
The main goal of the workshop is to bring together researchers and scientists from both industry and academia, representing the different communities and areas mentioned above, to study, understand, and explore explanation in IT applications. In addition to presentations and discussions of accepted contributions and invited talks, this workshop will offer organised and open spaces for targeted discussions and for building an interdisciplinary community. Demonstration sessions will provide the opportunity to showcase explanation-enabled and explanation-aware applications.
TOPICS OF INTEREST
Suggested topics for contributions (not restricted to IT perspectives):
- Models for explanations
- Integrating application and explanation knowledge
- Explanation-awareness in applications
- Methodologies for developing explanation-aware systems
- Learning to explain
- Context-aware explanation vs. explanation-aware context
- Confidence and explanations
- Security, trust, and explanation
- Requirements and needs for explanations to support human understanding
- Explanation of complex, autonomous systems
- Co-operative explanation
IMPORTANT DATES
- Submission deadline: April 10, 2008
- Notification of acceptance: May 10, 2008
- Camera-ready versions of papers: May 26, 2008
Read more at the workshop website.