
Explanation

The final operator is the explanation operator. It takes the PSM and the results of the other operators and gives the user a way of understanding them. The program produces two kinds of explanation: one that provides information about the causal model and the diagnostic hypotheses, and one that allows the user to explore the therapy predictions.

An example of the kinds of graphical explanations provided for diagnostic hypotheses was shown in figure 2. A diagnostic hypothesis consists of two lists of nodes: a list of nodes that are true in the hypothesis and a list of nodes that are false. The hypothesis is explained to the user by graphing the causal relations among the nodes in the true list. This kind of explanation is a rich source of information. It justifies the hypothesis by proposing, in physiologic terms understandable to the physician, the mechanisms by which the findings might have been produced. This helps the physician see what assumptions are being made, and therefore to identify aspects that need verification and to judge whether the hypothesis is really appropriate given what the physician knows.
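
As a concrete illustration of the idea, the following Python fragment represents a hypothesis as two lists of nodes and extracts the causal links whose endpoints are both true, which is the subgraph a display of this kind would render. This is only a minimal sketch: the class and function names, and the clinical node labels, are invented for illustration and are not the program's actual data structures.

    from dataclasses import dataclass

    @dataclass
    class Hypothesis:
        true_nodes: set    # nodes taken to be true in the hypothesis
        false_nodes: set   # nodes taken to be false

    def explanation_subgraph(hypothesis, causal_links):
        """Keep only the causal links whose endpoints are both true in the
        hypothesis; these are the relations a display of this kind would graph."""
        return [(cause, effect) for cause, effect in causal_links
                if cause in hypothesis.true_nodes and effect in hypothesis.true_nodes]

    # Invented model fragment and hypothesis:
    links = [("mitral stenosis", "high left atrial pressure"),
             ("high left atrial pressure", "pulmonary congestion"),
             ("pulmonary congestion", "dyspnea"),
             ("anemia", "high cardiac output")]
    hyp = Hypothesis(true_nodes={"mitral stenosis", "high left atrial pressure",
                                 "pulmonary congestion", "dyspnea"},
                     false_nodes={"anemia"})
    print(explanation_subgraph(hyp, links))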

This method of graphical explanation is also useful for a number of other aspects of understanding the analysis and the model. Because parts of the display can be highlighted or italicized, it can be used to compare two hypotheses generated with the same or different input, or with variations in the causal model, making it readily apparent what the hypotheses have in common and where they differ. The display is also useful for examining the conclusions of the other operators. One can display the nodes with definite values to see what the diagnostic process must explain, display suggested therapies to see where their expected effects intersect the diagnosis, or show where additional information might affect the diagnosis. This graphical method of display is also a good way of exploring the causal model, either for model development or to gain a better understanding of the model.
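
A comparison display of this kind reduces to partitioning the true nodes of two hypotheses into shared and unique sets; the unique sets are what would be highlighted or italicized. The sketch below assumes the hypotheses are simple sets of node names, which is an illustration rather than the program's actual representation, and the node labels are invented.

    def compare_hypotheses(nodes_a, nodes_b):
        """Partition the true nodes of two hypotheses so a display can draw the
        shared nodes plainly and highlight or italicize the nodes unique to each."""
        common = nodes_a & nodes_b
        only_a = nodes_a - nodes_b
        only_b = nodes_b - nodes_a
        return common, only_a, only_b

    # Invented example hypotheses:
    shared, a_only, b_only = compare_hypotheses(
        {"pulmonary congestion", "dyspnea", "mitral stenosis"},
        {"pulmonary congestion", "dyspnea", "left ventricular failure"})
    print(shared, a_only, b_only)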

The explanation of the therapy prediction operator requires a somewhat different approach. The causal links determined by the model equations are represented in the initial display, providing an overview of the model. When therapy prediction is done, the display also shows the expected changes in the parameters. Still, that does not give the user any understanding of why those changes should take place. Indeed, with simulation-based methods of predicting the changes, there is no good way of sorting out the relative importance of the different influences on a change. With the signal-analysis-based approach, all of the influences are recorded along the pathways through the model, which allows the program to identify the major influences on any expected change. Figure 3 shows this: the user asks to see the major influences of exercise on cardiac output under these conditions by selecting highlight on the cardiac output parameter menu, and the program highlights the two pathways that have the largest effect on that parameter. This helps the user see which relations are most influential in determining what will actually happen.
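
One way to picture how recorded influences support this kind of highlighting: if each pathway through the model carries an estimated gain for every link, the combined effect of a pathway on a target parameter can be scored and the strongest pathways selected for display. The sketch below is an assumption about how such a ranking might look, not the signal-analysis machinery the program actually uses; the pathway records, gains, and function name are invented.

    from math import prod

    def strongest_pathways(pathways, target, k=2):
        """Rank the pathways ending at the target parameter by the magnitude of
        their combined per-link gains and return the k most influential ones,
        i.e. the pathways a display would highlight."""
        relevant = [p for p in pathways if p["target"] == target]
        relevant.sort(key=lambda p: abs(prod(p["gains"])), reverse=True)
        return relevant[:k]

    # Invented pathway records from exercise to cardiac output:
    pathways = [
        {"target": "cardiac output", "via": ["exercise", "heart rate"],          "gains": [0.8, 0.6]},
        {"target": "cardiac output", "via": ["exercise", "venous return"],       "gains": [0.5, 0.7]},
        {"target": "cardiac output", "via": ["exercise", "systemic resistance"], "gains": [0.3, -0.2]},
    ]
    for p in strongest_pathways(pathways, "cardiac output"):
        print(" -> ".join(p["via"] + ["cardiac output"]))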

Both of these methods of explanation have the advantage that they provide a great deal of information about the program's conclusions. In essence, they answer many questions without the user having to ask them. This high-bandwidth communication has proven to be an excellent tool for model development as well as an effective means of explaining analyses to the user.


