May 2012  |  Perspective

The Emergence of the Trigger Tool as the Premier Measurement Strategy for Patient Safety


by Paul J. Sharek, MD, MPH

In the landmark 1999 report, To Err is Human: Building a Safer Health System, the Institute of Medicine estimated that avoidable medical errors contribute to 44,000–98,000 deaths, and more than a million injuries, annually in United States hospitals.(1) In response to these disturbing data, accreditation bodies, payers, nonprofit organizations, governments, and hospitals launched major initiatives and invested considerable resources to improve patient safety.(2-3) Assessing the impact of these patient safety initiatives requires generally accepted, rigorous, standardized, and practical measures of adverse events.(4-5)

A number of approaches to measuring adverse event rates have been used, including voluntary reports ("incident" or "occurrence" reports), mining of administrative databases (most notably the Agency for Healthcare Research and Quality's [AHRQ] Patient Safety Indicators), the two-stage review process used in the Harvard Medical Practice Study, and the Institute for Healthcare Improvement's (IHI) "trigger tool" approach.(6-7) Each of these methods has advantages and limitations (Table). By identifying clues that direct chart reviewers to the points in a patient's hospitalization most likely to contain an adverse event, the trigger tool approach provides an efficient variation on retrospective chart review and overcomes many of the limitations of the other methods.(7-11) A brief discussion of each approach is worthwhile, as the adverse event rates identified differ dramatically depending on the technique used to detect and measure harm.

Occurrence reports: The best-known strategy for identifying and measuring patient safety events in U.S. hospitals is the use of occurrence ("incident") reports submitted by caregivers. Although these data are relatively easy and inexpensive to obtain, evidence suggests that occurrence reports are underutilized (12-14) and identify only between 2% and 8% of all adverse events in the inpatient setting.(7,9-10,12) This underutilization stems from the fact that occurrence reports are voluntary, time intensive, far more likely to be completed by nurses than by physicians (15), and frequently perceived by staff to result in punitive action.(12) While they provide important clues to process flaws, occurrence reports generally capture near misses and sentinel events and rarely reflect the full spectrum of adverse events.(16-18)

Administrative data sets: Approaches to measuring patient safety using administrative data sets are appealing, as these data are often routinely available, inexpensive to obtain, and immediately comparable across sites. However, administrative data sets, which are the source of the adverse event rates identified by AHRQ's Patient Safety Indicators (19), are highly susceptible to variation in coding practices and miss harms that are not well documented in the medical record. As a result, current approaches to identifying adverse events from administrative data sets have limited sensitivity and specificity and should probably be used only to help hospitals prioritize chart review and improvement initiatives.(7,20-21)

Retrospective or concurrent chart review: The Harvard Medical Practice Study used retrospective chart review to uncover adverse events.(22) Another influential study identified adverse events using a combination of "voluntary and verbally solicited reports from house officers, nurses, and pharmacists; and by medication order sheet, medication administration record, and chart review of all hospitalized patients."(17) Several other significant safety studies used similar methods. The most frequently cited adult studies using retrospective methodology (22-23) revealed adverse event rates of 3.7 and 2.9 per 100 admissions, respectively. This identification strategy suffers from several problems: inconsistency in defining adverse events; poor, incomplete, confusing, or conflicting entries in the medical record; and resource intensiveness. The methodology was valuable in the early days of the patient safety field, highlighting the major safety risks present in inpatient health care settings, but it has largely been replaced by the more efficient and more sensitive trigger tool method described below.(7)

Trigger-based chart review: The trigger tool methodology has emerged as the premier approach to adverse event detection.(7,24-25) Triggers, defined as "occurrences, prompts, or flags found on review of the medical record that 'trigger' further investigation to determine the presence or absence of an adverse event" (26), have been shown to identify adverse events more efficiently than any other published detection method.(7,9-10,12-13,25-26) Recent studies using the IHI Global Trigger Tool (27) have identified harm rates in adults in U.S. hospitals of 49 per 100 admissions (33% of patients) (7), 36 per 100 admissions (28% of patients) among Medicare patients (25), and 25 per 100 admissions (18% of patients) across North Carolina.(24) Between 44% and 63% of these adverse events were judged preventable. Examples of triggers include abnormal laboratory results such as a rising creatinine, prescriptions for antidote medications such as naloxone, and other medical record–based clues suggesting that an adverse event might have occurred, prompting a more thorough review of the medical record.(23) The IHI adult Global Trigger Tool (27), the best studied of the published trigger tools, consistently demonstrates compelling operating characteristics, including excellent inter- and intra-rater reliability, very good to excellent sensitivity, and excellent specificity when compared with the gold standard of detailed expert chart review.(7,11,18)
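The two-step logic described above (screen every chart for triggers, then review only the triggered charts in depth) can be sketched in a few lines of code. This is purely an illustration, not the IHI tool itself: the two triggers shown (a creatinine rise and administration of an antidote such as naloxone) are drawn from the examples in the text, while the chart format and the 0.5 mg/dL threshold are invented for the sketch.

```python
# Illustrative sketch of trigger-based screening (not the IHI Global
# Trigger Tool). Trigger definitions and data format are hypothetical.

def creatinine_rising(labs):
    """Trigger: serum creatinine rose by >= 0.5 mg/dL during the stay."""
    values = [v for name, v in labs if name == "creatinine"]
    return len(values) >= 2 and max(values) - values[0] >= 0.5

def antidote_given(meds):
    """Trigger: an antidote medication (e.g., naloxone) was administered."""
    antidotes = {"naloxone", "flumazenil", "vitamin K"}
    return any(m in antidotes for m in meds)

def screen_charts(charts):
    """Return the IDs of charts with at least one trigger; only these
    charts would go on to focused manual review for adverse events."""
    return [
        chart["id"]
        for chart in charts
        if creatinine_rising(chart["labs"]) or antidote_given(chart["meds"])
    ]

charts = [
    {"id": "A", "labs": [("creatinine", 0.8), ("creatinine", 1.6)], "meds": []},
    {"id": "B", "labs": [("creatinine", 0.9)], "meds": ["naloxone"]},
    {"id": "C", "labs": [("creatinine", 1.0), ("creatinine", 1.1)], "meds": ["aspirin"]},
]

print(screen_charts(charts))  # → ['A', 'B']
```

A trigger firing does not itself establish harm; it only marks the chart for the thorough review that determines whether an adverse event actually occurred.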

A 2011 study by Classen and colleagues highlighted the relative test characteristics of the various adverse event detection methods.(7) The authors reviewed 795 closed medical records from three large academic medical centers and found that the IHI Global Trigger Tool identified 354 of the 393 adverse events (90%) detected by expert chart review, whereas the AHRQ Patient Safety Indicators (derived from an algorithm applied to administrative data) identified 35 adverse events (9%) and occurrence reports identified only 4 adverse events (1%). Other studies have demonstrated similar findings.(9-10,13,28)
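The percentages quoted from the Classen study follow directly from the raw counts, using the 393 adverse events found by expert chart review as the denominator:

```python
# Detection rates from the counts reported in the 2011 Classen study:
# expert chart review found 393 adverse events in 795 records.
total_events = 393

detected = {
    "IHI Global Trigger Tool": 354,
    "AHRQ Patient Safety Indicators": 35,
    "Occurrence reports": 4,
}

for method, n in detected.items():
    print(f"{method}: {n / total_events:.0%}")
# → IHI Global Trigger Tool: 90%
# → AHRQ Patient Safety Indicators: 9%
# → Occurrence reports: 1%
```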

In summary, rates of harm in U.S. hospitals remain unacceptably high, with little evidence of significant improvement since To Err is Human was published in 1999.(4,7,24-25) One major reason for these persistently high rates has been the lack of an accepted, rigorous, standardized, and practical approach to measuring and tracking adverse events over time. The IHI Global Trigger Tool, along with other more patient population–specific trigger tools, was developed to provide practical and reliable measurement approaches to track rates of harm over time (7,24-25,27) at the local, regional, and national levels. Although not perfect, trigger tools have better operating characteristics than other measurement approaches and detect significantly more adverse events than occurrence reports, administrative database–derived harm rates, and concurrent or retrospective chart review.(29) Efforts are under way to automate the IHI adult Global Trigger Tool and to construct and automate a pediatric global trigger tool. Once these two automated global trigger tools are validated, it seems likely that the Centers for Medicare & Medicaid Services (CMS) will require hospitals to report "all cause" harm rates and perhaps report the results publicly or tie them to reimbursement. Other public and private insurers are likely to follow. These will be important next steps toward the real work at hand: reliably improving the safety of patients in our health care system.

Paul J. Sharek, MD, MPH
Associate Professor of Pediatrics, Stanford University School of Medicine
Medical Director, Center for Quality and Clinical Effectiveness
Chief Clinical Patient Safety Officer, Lucile Packard Children's Hospital



References

1. Kohn LT, Corrigan JM, Donaldson MS, eds. To Err Is Human: Building a Safer Health System. Washington, DC: Committee on Quality of Health Care in America, Institute of Medicine, National Academies Press; 2000. ISBN: 9780309068376.

2. Patient Safety & Medical Errors. Rockville, MD: Agency for Healthcare Research and Quality. [Available at]

3. McCannon CJ, Hackbarth AD, Griffin FA. Miles to go: an introduction to the 5 Million Lives Campaign. Jt Comm J Qual Patient Saf. 2007;33:477-484. [go to PubMed]

4. Leape LL, Berwick DM. Five years after To Err Is Human: what have we learned? JAMA. 2005;293:2384-2390. [go to PubMed]

5. Vincent C, Aylin P, Franklin BD, et al. Is health care getting safer? BMJ. 2008;337:a2426. [go to PubMed]

6. Sharek PJ, Classen D. The incidence of adverse events and medical error in pediatrics. Pediatr Clin North Am. 2006;53:1067-1077. [go to PubMed]

7. Classen DC, Resar R, Griffin F, et al. 'Global Trigger Tool' shows that adverse events in hospitals may be ten times greater than previously measured. Health Aff (Millwood). 2011;30:581-589. [go to PubMed]

8. Griffin FA, Classen DC. Detection of adverse events in surgical patients using the Trigger Tool approach. Qual Saf Health Care. 2008;17:253-258. [go to PubMed]

9. Sharek PJ, Horbar JD, Mason W, et al. Adverse events in the neonatal intensive care unit: development, testing, and findings of an NICU-focused trigger tool to identify harm in North American NICUs. Pediatrics. 2006;118:1332-1340. [go to PubMed]

10. Takata GS, Mason W, Taketomo C, Logsdon T, Sharek PJ. Development, testing, and findings of a pediatric-focused trigger tool to identify medication-related harm in US children's hospitals. Pediatrics. 2008;121:e927-e935. [go to PubMed]

11. Sharek PJ, Parry G, Goldmann D, et al. Performance characteristics of a methodology to quantify adverse events over time in hospitalized patients. Health Serv Res. 2011;46:654-678. [go to PubMed]

12. Resar RK, Rozich JD, Classen DC. Methodology and rationale for the measurement of harm with trigger tools. Qual Saf Health Care. 2003;12(suppl 2):ii39-ii45. [go to PubMed]

13. Rozich JD, Haraden CR, Resar RK. Adverse drug event trigger tool: a practical methodology for measuring medication related harm. Qual Saf Health Care. 2003;12:194-200. [go to PubMed]

14. Layde PM, Cortes LM, Teret SP, et al. Patient safety efforts should focus on medical injuries. JAMA. 2002;287:1993-1997. [go to PubMed]

15. Wild D, Bradley EH. The gap between nurses and residents in a community hospital's error reporting system. Jt Comm J Qual Patient Saf. 2005;31:13-20. [go to PubMed]

16. Suresh G, Horbar JD, Plsek P, et al. Voluntary anonymous reporting of medical errors for neonatal intensive care. Pediatrics. 2004;113:1609-1618. [go to PubMed]

17. Kaushal R, Bates DW, Landrigan C, et al. Medication errors and adverse drug events in pediatric inpatients. JAMA. 2001;285:2114-2120. [go to PubMed]

18. Classen DC, Lloyd RC, Provost L, Griffin FA, Resar R. Development and evaluation of the Institute for Healthcare Improvement Global Trigger Tool. J Patient Saf. 2008;4:169-177. [Available at]

19. AHRQ Quality Indicators: Introduction. Rockville, MD: Agency for Healthcare Research and Quality. [Available at]

20. West AN, Weeks WB, Bagian JP. Rare adverse medical events in VA inpatient care: reliability limits to using Patient Safety Indicators as performance measures. Health Serv Res. 2008;43(1 Pt 1):249-266. [go to PubMed]

21. Scanlon MC, Harris JM Jr, Levy F, Sedman A. Evaluation of the Agency for Healthcare Research and Quality Pediatric Quality Indicators. Pediatrics. 2008;121:e1723-e1731. [go to PubMed]

22. Brennan TA, Leape LL, Laird NM, et al. Incidence of adverse events and negligence in hospitalized patients. Results of the Harvard Medical Practice Study I. N Engl J Med. 1991;324:370-376. [go to PubMed]

23. Thomas EJ, Studdert DM, Burstin HR, et al. Incidence and types of adverse events and negligent care in Utah and Colorado. Med Care. 2000;38:261-271. [go to PubMed]

24. Landrigan CP, Parry GJ, Bones CB, Hackbarth AD, Goldmann DA, Sharek PJ. Temporal trends in rates of patient harm resulting from medical care. N Engl J Med. 2010;363:2124-2134. [go to PubMed]

25. Levinson DR. Adverse Events in Hospitals: National Incidence Among Medicare Beneficiaries. Washington, DC: US Department of Health and Human Services, Office of the Inspector General; November 2010. Report No. OEI-06-09-00090. [Available at]

26. Classen DC, Pestotnik SL, Evans RS, Lloyd JF, Burke JP. Adverse drug events in hospitalized patients. Excess length of stay, extra costs, and attributable mortality. JAMA. 1997;277:301-306. [go to PubMed]

27. Griffin FA, Resar RK. IHI Global Trigger Tool for Measuring Adverse Events: IHI Innovation Series white paper. Cambridge, MA: Institute for Healthcare Improvement; 2007.

28. Agarwal S, Classen D, Larsen G, et al. Prevalence of adverse events in pediatric intensive care units in the United States. Pediatr Crit Care Med. 2010;11:568-578. [go to PubMed]

29. Medical errors in the USA: human or systemic? Lancet. 2011;377:1289. [go to PubMed]


Table

Table. Comparison of the four most frequently used methods to identify harm.



Incident (occurrence) reports

Advantages:
• Well established process in most hospitals
• Inexpensive
• Easy information to obtain

Limitations:
• Identifies only between 2% and 8% of harmful events
• Focus tends to be on error, not harm
• Voluntary nature results in vast underreporting
• Can be time intensive
• Often perceived as punitive by staff

Administrative database algorithms

Advantages:
• Standard definitions
• Allows direct comparison between hospitals
• Inexpensive to obtain data

Limitations:
• Identifies less than 10% of all harms (7)
• Poor sensitivity and specificity
• Focuses on only a few specific harm types (not "all cause" harm)
• Harm easily hidden or missed if not well described in charting
• Dependent on accuracy of chart coding

Retrospective/concurrent chart review (from Harvard Medical Practice Study) (22)

Advantages:
• Active surveillance can identify harms not well articulated in the chart (if honest communication occurs)
• Measures "all cause" harm
• Provides a rate (i.e., harms per 100 admissions or per 1000 patient-days)

Limitations:
• Substantially underreported harm rates (3,13)
• Relies partially on voluntary or verbally solicited identification of harm
• Active real-time surveillance is quite resource intensive
• Unfocused review of charts is also resource intensive
• Retrospective review of charts is challenging with poor or incomplete documentation

Trigger tools

Advantages:
• Measures "all cause" harm
• Measures total harm burden
• Provides a rate (i.e., harms per 100 admissions or per 1000 patient-days)
• Focuses on harm but includes errors as well
• Allows a sampling strategy
• Relatively efficient: 20 minutes per chart
• Can be population specific (specialty-specific trigger tools are available for areas such as pediatric and neonatal intensive care units)
• Excellent specificity and very good sensitivity

Limitations:
• Requires training
• Resource intensive: IHI recommends 20 charts per month at 20 minutes per chart
• Global trigger tools not yet automated (though a major effort to do so is ongoing)
• Retrospective review of charts is challenging with poor or incomplete documentation
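The rates named in the table reduce to simple arithmetic. A minimal sketch, with invented figures (the 49 harms, 100 admissions, and 580 patient-days below are illustrative numbers, not values from the studies cited):

```python
# Converting raw harm counts into the two rates named in the table.
# All figures here are invented for illustration.
harms = 49
admissions = 100
patient_days = 580

per_100_admissions = harms / admissions * 100
per_1000_patient_days = harms / patient_days * 1000

print(per_100_admissions)               # → 49.0
print(round(per_1000_patient_days, 1))  # → 84.5
```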

