P. Szolovits and S. G. Pauker. Editorial Commentary: A Coherent Philosophy for Development or a Straightjacket for Research. Methods of Information in Medicine 32:16-17, 1993. It was written in response to the article:
H. A. Heathfield and J. Wyatt. Philosophies for the Design and Development of Clinical Decision-Support Systems. Methods of Information in Medicine 32:1-8, 1993.
Although it is foolish to argue with good advice, a coherent design philosophy must be relevant to the problem to which it is applied. Heathfield and Wyatt present a sound, although hardly new, approach to the development of clinical decision-support systems, but we fear their proposal could become a straightjacket that limits basic research in medical informatics. When the task is, in fact, the development of a functional, practical clinical system, we agree that a careful analysis of the problems to be solved, an evaluation of the suitability of available methods, the use of rapid prototyping, and continual evaluation, redesign, and feedback based on appropriate outcome measures are excellent suggestions. Some of our colleagues have failed to heed this traditional approach, and the resulting failures have surely contributed to the sluggish dissemination of their products. But we believe that the work of medical informatics remains, in large measure, basic research that will provide better definitions of the clinical tasks that require decision support and will produce a new, more flexible set of software tools to address those problems. Taking a more experimental view, we are skeptical about what can be expected from a more formal development process, and we doubt whether existing methods, no matter how sensibly applied, are adequate to solve, or significantly aid the management of, a broad range of critical medical decision problems.
In non-medical domains, most systems deal with the routine, and major systems are nevertheless hard to build. In clinical medicine, much is routine, but three major tasks confront those who would provide decision support. First, given the exponential expansion of medical knowledge, most clinicians cannot identify all relevant and current knowledge that bears on a clinical problem. Second, given the large number of available knowledge sources and the realization that most diseases do not come in pure culture but arrive in patients who often have several problems, the task of knowledge integration and strategy selection under the joint bounds of uncertainty and conflicting goals is complex. Third, given the complexity of medicine, some system of real-time conflict and error detection is essential. But not all of medicine is routine. Hidden among the growing volume of patients treated by clinicians struggling to keep up with a speeding treadmill are special cases, ones for whom routine off-the-shelf strategies of care would be inappropriate. The trick is to identify these exceptions. It is probably possible to build simple systems now, using off-the-shelf technology, to assist in relatively simple, routine decision-making, but even there the possibility of discovery should not be discounted. As Wyatt points out, even ACORN uncovered new truth. If the original goal--to build an intelligent assistant--still holds, then much research remains to be done, despite all the progress (which the authors measure only in terms of the volume of research publications).
Comprehensive on-line clinical databases are only now becoming a reality, and even today they often do not include what is, for clinical decision aids, the most critical content: problem lists, histories and physical examinations, progress notes, the coded results of non-numeric diagnostic tests (e.g., radiology), and diagnostic categories. Early systems were, until very recently, limited to simple trend detection and extrapolation, because the only data available to them were the numerical records routinely produced in laboratories. We expect that new kinds of systems, such as decision aids that do not respond to specific requests for assistance but simply monitor the provision of good care, will be effective only if they can provide timely advice to the clinician who is integrating information, designing strategies, and making choices.
The design and development cycle that Heathfield and Wyatt propose can only help if the outcomes that drive its corrective feedback are relevant to the task. If the goal is to improve clinical results, then those are the outcomes of interest. But what aspect of clinical results should be evaluated? Short-term survival? Long-term survival? Life expectancy? Quality of life? Health status? Patient satisfaction? Physician satisfaction? Efficiency? Cost? Cost-effectiveness? Data relevant to evaluating each of these are being collected and applied even now in many medical problem domains, and it is quite likely that different measures and evaluation perspectives will produce quite different feedback corrections for the design and development of a clinical decision-support system. On the other hand, this tight focus on outcome often neglects another dimension--one that has been traditionally important to the artificial intelligence community--the development of basic insights and new knowledge. We strongly believe that our research and development community must also keep its eyes on that ball. In the long run, it will bear far richer fruit and have a far more profound effect on clinical medicine. As we acknowledge the wisdom of Heathfield and Wyatt's suggestions, let us not forget that a good philosophy, appropriately applied, should broaden our horizons, not limit our perspectives.
Peter Szolovits, PhD