by Josh Peterson, MD, MPH
We have known for nearly three decades that clinical decision support systems (CDSS) improve the safety of prescribing. In 1977, Indiana University's Regenstrief Institute implemented a new system in which laboratory and drug data were retrieved from the electronic medical record at the time of prescribing to highlight common safety concerns. For example, physicians received an alert about a low potassium level when prescribing a thiazide diuretic, or about the need for gastrointestinal prophylaxis when prescribing high-dose nonsteroidal anti-inflammatory drugs.(1) These simple reminders nearly doubled physicians' attention to safe prescribing practices.
Over the years, safety alerts have proliferated. For example, the most developed computerized physician order entry (CPOE) and electronic prescribing products warn prescribers of problems related to total drug dose, duplicate therapy, patient weight, renal function, or age. They can also flag drug–drug interactions, allergies, pregnancy status, comorbid conditions, formulary status, and cost.
There is little doubt that electronic prescribing that includes these decision tools ultimately produces a safer prescription.(2-4) However, early successes in changing prescriber behavior have not always been replicated, and users have often reacted to increased warnings by circumventing or ignoring the decision support. Systems that deploy extensive drug decision support have developed a mixed reputation: while they are credited for preventing uninformed or unsafe prescribing, they are also criticized for sounding false alarms and consuming scarce time and attention. Charges that alerting (in particular) produces information "noise" and process "fatigue" have become common; such charges feature prominently on lists of unintended adverse consequences of information technology.(5,6) Despite their potential to prevent significant medication errors, CPOE vendors introducing clinical decision support into new environments are often implored to "please turn off the alerts."
Is comprehensive drug decision support broken? How should designers recapture users' trust and attention?
In formulating a strategy to address problems with medication decision support, it is important to recognize that we are at the beginning of a long process of discovery and refinement that may gradually lead to a standardized "cockpit" for knowledge management in clinical settings. The alert and the reminder may prove to be inappropriate mechanisms for conveying all of the safety information generated by automated algorithms. Since the Regenstrief experiment, decision support applications have been created primarily through trial and error by many independent developers, each working with little coordination and limited shared experience. More deliberate methods to create enduring and well-accepted CDSS are needed. To improve the end product, it may be worth adhering to the model that drives development of other new technologies and pharmaceuticals: thoroughly describe the problem being solved, determine the potential for unintended harm and likely benefit in a small-scale experiment, and then undertake larger experiments with frequent monitoring and feedback.
For example, in preparing a medication CDSS for a large roll-out, one might undertake the following steps:
- Quantify the frequency and severity of the targeted prescription error.
- Estimate the likely burden of patient injury, if any.
- Develop a consensus-based approach to addressing the error (could be technical, educational, or process-oriented).
- If the technical approach is preferred, then develop the CDSS.
- Simulate the impact of the CDSS using retrospective data.
- Implement the CDSS in a test unit, and immediately measure workflow changes and initial provider response.
- Solicit feedback.
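The simulation step above can be made concrete before any clinician sees an alert. As a minimal sketch, the draft rule logic can be replayed against retrospective order data to project how often the alert would fire. All drug names, thresholds, and data below are invented for illustration; a real simulation would use the institution's historical orders and the production rule logic.

```python
# Hypothetical sketch: projecting alert burden from retrospective order
# data before deploying a CDSS rule. Data and thresholds are illustrative.

# Each record: (drug, serum_creatinine_mg_dl) from a past prescription.
historical_orders = [
    ("vancomycin", 0.9),
    ("vancomycin", 2.1),
    ("gentamicin", 1.8),
    ("lisinopril", 1.0),
    ("gentamicin", 0.7),
]

# Hypothetical rule: alert when a renally cleared drug is ordered for a
# patient whose creatinine exceeds a threshold.
RENALLY_CLEARED = {"vancomycin", "gentamicin"}
CREATININE_THRESHOLD = 1.5  # mg/dL, illustrative only

def would_alert(drug, creatinine):
    """Return True if the draft rule would fire for this order."""
    return drug in RENALLY_CLEARED and creatinine > CREATININE_THRESHOLD

fired = [order for order in historical_orders if would_alert(*order)]
alert_rate = len(fired) / len(historical_orders)
print(f"{len(fired)} of {len(historical_orders)} past orders "
      f"would have triggered the alert ({alert_rate:.0%})")
```

Even a crude projection like this reveals whether a proposed rule would interrupt clinicians dozens of times a day, which is exactly the kind of finding that should prompt redesign before a pilot, not after.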
Too often, in our desire to get information tools into the hands of users, we have skipped one or more of these steps when developing or implementing a CDSS. As an example of suboptimal CDSS development, consider the history of drug–drug interaction alerts, which have override rates of greater than 80% across a large number of order-entry and prescription-writing systems. Drug interactions are common, but the severity and frequency of adverse patient outcomes are often unknown (except for the most severe interactions). If drug–drug interaction systems were tested using simulation, many of them would have shown high alert rates with little or no known clinical impact. Moreover, pilot testing of these systems would have quickly demonstrated high override rates (and tremendous provider frustration), which would have led to major redesigns before widespread implementation.
The creation of a high-quality medication CDSS may take more time, evaluation, and refinement than initially envisioned. Our academic group has recently completed an effort to design and implement an alerting system to avoid medication errors in patients with acute kidney injury. We invested considerable time enlisting the support and expertise of nephrology, pharmacy, intensive care, and infectious disease experts. We estimated potential benefits and alert frequencies. We piloted the CDSS with a single drug (vancomycin) that has well-described pharmacokinetics and target goals, and subsequently expanded our study to include several dozen medications affected by renal function. Even with this extensive preparation, we have needed to refine the intervention in response to user reactions. More than half the time, we found that physicians responded to the alerts by deferring action—neither dismissing nor accepting the recommended advice. As it turns out, this class of decisions (e.g., whether to discontinue aminoglycosides in the face of a rising creatinine) is often made as a team in our academic environment, with consultation among pharmacists, attendings, and senior residents. We had not anticipated this behavior, and our pilot testing led us to redesign the application to be much more team-oriented. Insights such as these may be missed without adequate evaluation of provider response.
In summary, medication decision support needs to be extensively field-tested, and designs need to be frequently iterated. Much as with information interfaces in personal computing, users may ultimately come to accept these systems as they adapt to new workflow and the system design accommodates unanticipated user needs. A number of talented academic and commercial groups have already worked through many human–computer interaction issues that impede effective decision support. Drug knowledge bases have been curated to reduce unnecessary alerting, and interruptive alerts have been redesigned to better fit workflow. This extensive experience is only partially published, and many of the "raw materials" (ranging from the underlying drug knowledge to detailed schemas describing how the decision support functions) have not yet been openly shared. Ultimately, the optimal system is likely to involve the sharing of these kinds of extensive field tests, probably through a common informatics infrastructure. Unfortunately, at this writing, such an infrastructure is an underfunded and distant goal in the United States, while other countries such as the United Kingdom and the Netherlands are making tremendous progress.
Josh Peterson, MD, MPH
Center for Health Services Research
Departments of Internal Medicine and Biomedical Informatics
Vanderbilt University Medical Center
1. McDonald CJ, Wilson GA, McCabe GP Jr. Physician response to computer reminders. JAMA. 1980;244:1579-1581. [go to PubMed]
2. Hunt DL, Haynes RB, Hanna SE, Smith K. Effects of computer-based clinical decision support systems on physician performance and patient outcomes: a systematic review. JAMA. 1998;280:1339-1346. [go to PubMed]
3. Kuperman GJ, Bobb A, Payne TH, et al. Medication-related clinical decision support in computerized provider order entry systems: a review. J Am Med Inform Assoc. 2007;14:29-40. [go to PubMed]
4. Kawamoto K, Houlihan CA, Balas EA, Lobach DF. Improving clinical practice using clinical decision support systems: a systematic review of trials to identify features critical to success. BMJ. 2005;330:765. [go to PubMed]
5. van der Sijs H, Aarts J, Vulto A, Berg M. Overriding of drug safety alerts in computerized physician order entry. J Am Med Inform Assoc. 2006;13:138-147. [go to PubMed]
6. Campbell EM, Sittig DF, Ash JS, Guappone KP, Dykstra RH. Types of unintended consequences related to computerized provider order entry. J Am Med Inform Assoc. 2006;13:547-556. [go to PubMed]