CDM Seminar Series 2004-05
Inconsistency in the values experts assign to variables in an expert system
can be a major limitation on its performance. I will present the results of an
experiment designed to assess the inter-rater and intra-rater consistency of
assignments within a commercially successful Medical Diagnostic Expert System.
The expert-assigned variables represent the strength of relationships between
diseases and findings (which include patient-reported symptoms, signs found
on examination, lab values, etc.), and are intended to allow the system to infer
the likelihood of disease given the presence or absence of a particular set
of findings.
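
As a rough illustration only (not a description of the system's actual algorithm), if each expert-assigned strength were mapped to a conditional probability $P(f_j \mid d)$ of observing finding $f_j$ when disease $d$ is present, the intended inference would amount to something like a Bayes-rule update over the observed findings:

$$
P(d \mid \mathbf{f}) \;\propto\; P(d)\,\prod_{j:\,f_j = 1} P(f_j \mid d)\,\prod_{j:\,f_j = 0} \bigl(1 - P(f_j \mid d)\bigr),
$$

where the naive-Bayes-style factorization is assumed here purely to show how strength variables could feed into the likelihood of disease given present and absent findings.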
The experiment necessitated mapping the system's variables to a probabilistic
framework, and inferring the experts' mental models in terms of that framework.
Inter-rater consistency was measured by direct comparison of variable assignments.
Intra-rater consistency was measured by inferring a likely mental model in probabilistic
terms for each expert, then noting that each set of variables that reference
a common finding implies a specific Bayesian leak probability that should be
constant across all variables in the set. The consistency of assignments involving
a common finding can then be estimated by the consistency of the Bayesian leak
probabilities implied by those assignments. This also suggests a method for
correcting expert assignments by forcing "leak invariance", and I
investigated the hypothesis that correcting for intra-rater consistency in this
way would improve inter-rater consistency.
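
To make the leak-invariance idea concrete, here is one way an implied leak can be read off, assuming the expert's strengths are interpreted under a leaky noisy-OR model (the specific mapping used in the experiment is not reproduced here). For a finding $f$ with link probabilities $p_{if}$ to diseases $d_1,\dots,d_n$ and leak $\ell_f$:

$$
P(f \mid d_1,\dots,d_n) \;=\; 1 - (1 - \ell_f)\prod_{i:\,d_i = 1}(1 - p_{if}).
$$

If the inferred mental model also yields $q_{if} = P(f \mid \text{only } d_i \text{ present})$, each assignment implies its own leak, $\ell_f = 1 - (1 - q_{if})/(1 - p_{if})$; agreement of these values across all assignments touching $f$ is the intra-rater consistency measure, and replacing them with a single common value per finding is the "leak invariance" correction.

A minimal sketch of that check and correction under the same hypothetical mapping (the inputs `p_link` and `q_single`, and the use of the mean as the common leak, are assumptions for illustration, not the method used in the experiment):

```python
import numpy as np

def implied_leaks(p_link, q_single):
    """Leak implied by each assignment under a leaky noisy-OR:
    q = 1 - (1 - leak)(1 - p)  =>  leak = 1 - (1 - q) / (1 - p)."""
    p, q = np.asarray(p_link, float), np.asarray(q_single, float)
    return 1.0 - (1.0 - q) / (1.0 - p)

def force_leak_invariance(p_link, q_single):
    """Force a single leak per finding (here: the mean of the implied leaks)
    and recompute link probabilities that reproduce each q under it."""
    q = np.asarray(q_single, float)
    common = implied_leaks(p_link, q_single).mean()
    return common, 1.0 - (1.0 - q) / (1.0 - common)

# Hypothetical assignments for three diseases linked to one finding.
p = [0.30, 0.50, 0.20]   # link probabilities from the strength assignments
q = [0.40, 0.55, 0.33]   # P(finding | only that disease), from the mental model
print(implied_leaks(p, q).std())     # spread of implied leaks ~ intra-rater inconsistency
print(force_leak_invariance(p, q))   # common leak and corrected link probabilities
```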