MORPHOLOGICAL EXPLANATION IN EXPERT SYSTEMS

I. ALVAREZ
CEMAGREF, Electronics and artificial intelligence laboratory, BP121, F-92185 Antony, FRANCE
LAFORIA, University of Paris VI, F-75005 Paris, FRANCE
tel: (33-1) 40 96 61 77  fax: (33-1) 40 96 60 80

Topic: Principles of AI Applications
Category: ECAI-92 Workshop "Improving the use of Knowledge-based systems with explanations", 1992, pp 75-81

Abstract

Knowledge-based systems can automatically provide an explanation of their reasoning, generally with the help of an additional unit. However, their abilities are still limited compared to human "natural" explanatory capabilities. Human explanation is very rich and depends on several parameters that are not yet mastered. In this paper we present a type of explanation centred not on the reasoning but on the case to explain. It consists in studying the behaviour of the system for input data slightly different from the initial set. Analysing the results given by the system makes it possible to define directions that are relevant with respect to the case to explain, by analogy with the mathematical notion of gradient. These directions can be used to produce explanations, not in a logical or chronological way, but in a geometrical way. This morphological explanation leads us to reconsider how to take into account the user's knowledge and the explanatory semantics of the domain.

Introduction

Explanation of reasoning has always been considered one of the main advantages of expert systems over classical algorithmic programs, for which the trace of the computation has to be specified. Yet it is well known that the trace of the reasoning can be very far from what a human being calls an explanation. Since expert systems claim to be standard development tools, some weaknesses of trace-based explanation have been corrected, mainly by using model-based systems or several reasoning modules. Part of the implicit knowledge is made explicit in this way, and it becomes easier to define different levels of abstraction. But some major defects remain, although the explanation task is now generally treated separately from the resolution task: the main points are still extracted from the details through the course of the reasoning (although this may not be relevant); the explanation has to be generated in natural language, even when pieces of knowledge are expressed in a quasi-natural language; and so on. This lack of explanation ability is an obstacle to the operational use of expert systems as important as rule-base updating, validation and real-time problems. The explanation abilities of expert systems do not currently seem to live up to their promising beginnings.

The complexity of explanatory discourse between human beings certainly accounts for the difficulties encountered by automatic explanation. In a first part we recall the general characteristics of human explanation and relate them to the solutions applied in expert systems. In a second part we focus on the notion of the case to explain. Explanation is often restricted to explanation of the reasoning, and more precisely to justification, which is only one form of explanation, along with context description, comparison, and so on; we propose to consider the explanation issue from the point of view of the case to explain rather than from that of the reasoning. We argue for selection criteria between the elements of the explanatory discourse that depend particularly on the case to explain and relatively little on the other parameters of explanation. The purpose of explanation is to make the user understand the result given by the expert system; explanation of the reasoning generally makes the user understand how the result was obtained. It does not directly explain why this result is the "good" one. In a third part we propose a type of explanation that focuses on the validity of the result, no matter how it was deduced. Morphological explanation, as we call it, is not based on the trace of the reasoning. It consists in a local study of the behaviour of the expert system in a small neighbourhood of the case to explain, and it proposes explanations in a geometrical rather than a logical way (a sketch of this local study is given below). This type of explanation has been applied to SCORPIO, a diagnosis expert system for diesel engines, and in a fourth part we describe the kind of explanation we can provide.
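The local study described above amounts to probing the expert system with inputs slightly perturbed around the case to explain and comparing the results. The paper gives no implementation details at this point, so the following is only a minimal sketch of the idea under stated assumptions: a case represented as a numerical feature vector and a black-box scoring function. The function name diagnose, the feature names and the step size are illustrative choices, not part of SCORPIO.

```python
from typing import Callable, Dict, List, Tuple

def relevant_directions(
    diagnose: Callable[[Dict[str, float]], float],
    case: Dict[str, float],
    step: float = 0.05,
) -> List[Tuple[str, float]]:
    """Probe the system in a small neighbourhood of the case to explain.

    Each input value is perturbed slightly up and down, the system is run
    again, and a finite-difference sensitivity is computed, by analogy with
    the components of a gradient.  Inputs come back sorted by decreasing
    influence on the result.
    """
    sensitivities: List[Tuple[str, float]] = []
    for name, value in case.items():
        delta = step * (abs(value) if value != 0 else 1.0)
        up = dict(case, **{name: value + delta})
        down = dict(case, **{name: value - delta})
        # Central difference: how much the conclusion moves per unit change.
        sensitivities.append((name, (diagnose(up) - diagnose(down)) / (2 * delta)))
    return sorted(sensitivities, key=lambda s: abs(s[1]), reverse=True)

if __name__ == "__main__":
    # Toy stand-in for the expert system's confidence in one diagnosis.
    def diagnose(case: Dict[str, float]) -> float:
        return 0.7 * case["oil_temperature"] - 0.2 * case["rpm"] + 0.1 * case["oil_pressure"]

    case = {"oil_temperature": 1.2, "rpm": 0.9, "oil_pressure": 1.0}
    for name, s in relevant_directions(diagnose, case):
        print(f"{name}: sensitivity {s:+.2f}")
```

The inputs with the largest sensitivities play the role of the relevant directions mentioned in the abstract; how the method actually explores the neighbourhood and exploits these directions is the subject of the third part of the paper.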

1. "Natural" and automatic explanations

Explanation is a task usually performed in everyday life, in many ways: justification, dispute, teaching, decision making, and so on. It is part of common sense, which is so difficult to capture in models. This is certainly a main reason for the dissatisfaction of end-users facing automatic explanations [Schank], whatever efforts are made to approximate human explanation, and it is not only a problem of natural language.

1.1. natural explanation characteristics

Natural explanation is very rich, adjusted to each case and depending on contextual parameters. Many types of explanation are used in free discourse and are selected through a process that involves at least the search for the user's needs, the choice of a strategy, the choice of a level of abstraction, and the choice of the factual elements to emphasize. The main strategies of explanation are paraphrase, justification, proof by contradiction, example (and counter-example), analogy, and comment. These strategies apply to both positive and negative natural explanation. The choice of a strategy and of the content of an explanation depends principally on several parameters:

- the object of the explanation: human beings explain different objects in different ways, for instance a scientific statement such as a theorem in mathematics, a political decision, a behaviour, a feeling, and so on.

- the author's purpose: "making the interlocutor understand" is a rather imprecise definition, because the degree of understanding varies continuously, and the psychological context is also important. The author's purpose may be: to deepen the understanding of one or both interlocutors (this case includes free discussion; the explanation then serves to make new elements emerge, either by reorganizing facts that are already known or by going deeper into the analysis); to convince the hearer (the speaker wants him to accept a statement, right or wrong; when it is wrong on purpose, he wants to deceive his interlocutor); to teach (the purpose of the explanation is then to pass on knowledge).

- the recipient's representation: an explanation differs depending on the supposed level of knowledge of the recipient, on his reactions (including questions, gestures, nods, ...), and on his relationship with the author.

- the degree of interactivity: the content of the explanation is deeply different depending on the means of communication (oral, written, visual, ...) and on the possibility of exchanging information.

1.2. automatic explanation processing

From the point of view of form, natural explanation uses (when possible) means of communication other than language, mainly gestures and graphical illustrations. Automatic explanation is necessarily different; the user's perception is acquired through filtered text or the selection of pre-defined screen areas, which gives very little information. On the other hand, communication by means of a screen gives a completely different form to the explanatory discourse, and the graphical possibilities of computers may offer new means of expression. In any case, the problem of the form of explanation is part of the more general problem of man-machine communication, and our interest here is the substance of explanation.

Independently of the problems of understanding and generating natural language, it is presently impossible to take into account all the parameters involved in natural explanation. Work on explanation is centred on particular fields (diagnosis, debugging, theorem proving, ...), and the purpose of the explanation is generally assumed to be teaching (ITS) or knowledge transfer (expert to novice) [Wognum], with a high degree of interactivity. Finding out the user's needs is a very complicated problem, since the user's representation is very hard to manage: it implies building and updating a model of the interlocutor, and the construction and use of such models are still difficult [Goguen], [Wiener]. In practice it is easier to design a system for users whose knowledge is above a minimum. This excludes ITS and knowledge transfer, but it is the case for decision-support systems (including diagnosis).

Concerning the choice of a level of abstraction, the use of the trace of reasoning in rule-based systems presents well-known drawbacks [Clancey1], [Chandrasekaran]. The structuring of the knowledge used by the system and of the reasoning process is a good way to organize the trace of the reasoning: it corresponds to the decomposition of a problem into sub-problems, each with a specific type of reasoning, specific knowledge or level of detail. Splitting the trace in this way is therefore relevant to the user and well accepted in an interactive mode [Haziza]. But while some fields or purposes are very interactive (debugging, on-line diagnosis, knowledge transfer, ...), others are less so. In that case users are interested in a global explanation that involves the specific characteristics of the case [Swartout]. In this context, structuring the trace by the reasoning is not necessarily a good way to find the right level of abstraction.

Regarding strategies, most work on explanation deals with justification, because justification is based on the trace of the reasoning. Explanation often tries to answer the following questions [Clancey2], [Swartout], [Safar], [David&Krivine] (the sketch after the list illustrates the first of them):

- how was a result deduced? (justification of the result)
- how was a result not deduced? (justification in negative explanation)
- in what way is a particular fact important in the reasoning process? (justification of a step)
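All three questions are answered from the trace of the reasoning. As an illustration only (the rules and facts below are invented for the example and do not come from the paper), a minimal forward-chaining sketch shows how recording which rule produced each fact is enough to answer the first question, "how was a result deduced?".

```python
from typing import Dict, List, Tuple

# Each rule: (name, premises, conclusion).  Invented toy rules for illustration.
RULES: List[Tuple[str, List[str], str]] = [
    ("R1", ["low oil pressure", "high oil temperature"], "oil circuit fault"),
    ("R2", ["oil circuit fault", "metal particles in oil"], "bearing wear"),
]

Trace = Dict[str, Tuple[str, List[str]]]

def forward_chain(facts: List[str]) -> Trace:
    """Derive new facts, keeping for each one the rule and premises used."""
    known = set(facts)
    trace: Trace = {}
    changed = True
    while changed:
        changed = False
        for name, premises, conclusion in RULES:
            if conclusion not in known and all(p in known for p in premises):
                known.add(conclusion)
                trace[conclusion] = (name, premises)
                changed = True
    return trace

def how_was_deduced(result: str, trace: Trace, initial: List[str]) -> str:
    """Answer 'how was a result deduced?' by unwinding the recorded trace."""
    if result in initial:
        return f"{result}: given as input"
    if result not in trace:
        return f"{result}: not deduced"
    rule, premises = trace[result]
    lines = [how_was_deduced(p, trace, initial) for p in premises]
    lines.append(f"{result}: deduced by {rule} from {', '.join(premises)}")
    return "\n".join(lines)

if __name__ == "__main__":
    initial = ["low oil pressure", "high oil temperature", "metal particles in oil"]
    print(how_was_deduced("bearing wear", forward_chain(initial), initial))
```

Such a trace answers "how" questions line by line, which is exactly the kind of chronological, rule-level account that the morphological approach proposed in this paper seeks to complement.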
