2005 HSR&D National Meeting Abstract
1037 — Using Patient Safety Indicators to Identify VA Outlier Hospitals: A Picture is Worth 1000 Statistics
Christiansen CL (CHQOER & Boston University)
Rivard P (CHQOER & Boston College)
Tsilimingras D (CHQOER & Boston University)
Zhao S (CHQOER)
Loveland S (CHQOER & Boston University)
Rosen AK (CHQOER & Boston University)
Patient Safety Indicators (PSIs) developed by the Agency for Healthcare Research and Quality (AHRQ) are useful for identifying potential in-hospital patient safety events. However, inherent characteristics of these indicators can make PSI results difficult to interpret. Our objectives were to 1) compare Bayesian and average-ranking methodologies for selecting hospitals with extremely high or low rates and 2) improve the presentation of hospital-level information from PSI analyses.
Hospital-level PSI counts (numerators), acute-care hospitalizations (denominators), and observed, expected, and AHRQ-smoothed rates were derived for 16 PSIs using the FY’01 Patient Treatment File and AHRQ PSI software (version 2.0). From observed-to-expected (O/E) ratios and Bayesian models, we estimated distributions representing the “true” O/E ratio for each indicator at 118 hospitals. For the six most frequent PSIs, we used simulation methods to obtain hospital-level posterior densities for the six-indicator combination of O/E ratios. We compared rankings of median ratios from these posterior densities to average rankings of smoothed rates for hospitals ranked in the top 10 (or bottom 10) by either method. Graphical methods were developed to facilitate communication of the results.
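The posterior-density step above can be sketched with a conjugate Poisson-Gamma model, a common choice for count-based O/E ratios. The abstract does not specify the model or any data; the hospitals, counts, and prior below are illustrative assumptions only.

```python
import random

random.seed(42)

# Hypothetical hospital-level data: (observed PSI count, expected count).
# These numbers are invented for illustration, not from the study.
hospitals = {
    "A": (4, 8.0),
    "B": (12, 7.5),
    "C": (6, 6.2),
}

# Weakly informative Gamma(a, b) prior on the true O/E ratio theta.
# With a Poisson likelihood O ~ Poisson(theta * E), conjugacy gives a
# Gamma(a + O, b + E) posterior for theta.
a, b = 1.0, 1.0

def posterior_draws(obs, exp, n=10_000):
    """Simulate draws from the posterior of the true O/E ratio."""
    shape, rate = a + obs, b + exp
    # random.gammavariate takes (shape, scale), so scale = 1 / rate.
    return [random.gammavariate(shape, 1.0 / rate) for _ in range(n)]

def median(xs):
    return sorted(xs)[len(xs) // 2]

# Posterior median O/E ratio per hospital, then rank hospitals by it,
# mirroring the abstract's ranking of median ratios from posterior densities.
medians = {h: median(posterior_draws(o, e)) for h, (o, e) in hospitals.items()}
ranking = sorted(medians, key=medians.get)
print(ranking)  # hospitals ordered from lowest to highest estimated O/E ratio
```

In this sketch a hospital with 4 observed events against 8 expected lands well below an O/E of 1, while 12 against 7.5 lands well above it; the full posterior, not just the median, carries the certainty that the abstract's graphs display.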
Both methods selected seven of the same hospitals in the top 10 and seven in the bottom 10. Six hospitals chosen by the Bayesian but not by the average-ranking method had rankings that varied widely across indicators. Averaging ranks loses the strength of evidence for high or low ratios; the Bayesian method retains this information and incorporates the known correlation across PSIs. Using the Bayesian method, we can conclude with greater than 50% certainty that the top 10 hospitals had 23% to 46% fewer PSIs than expected, and that the bottom 10 had 39% to 92% more than expected. Posterior density graphs demonstrate how the certainty of the estimates affects the combination ratio.
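The loss of information under rank averaging can be shown with invented numbers (not from the study): two hospitals can share an identical average rank across six indicators even though one is consistently mid-range and the other varies wildly, a distinction a posterior density over the combined O/E ratio would preserve.

```python
import statistics

# Hypothetical per-indicator ranks across six PSIs (1 = best of 118 hospitals).
x = [2, 3, 2, 59, 58, 60]     # evidence varies widely across indicators
y = [30, 31, 30, 31, 31, 31]  # consistently mid-range evidence

avg_x = sum(x) / len(x)
avg_y = sum(y) / len(y)
print(avg_x, avg_y)  # identical average ranks: the two profiles are conflated

# The spread across indicators, which rank averaging discards, differs sharply.
print(statistics.pstdev(x), statistics.pstdev(y))
```

Both profiles average to the same rank, so average-ranking selection treats them identically; the dispersion term is exactly what the Bayesian posterior keeps.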
Bayesian analyses provided more information, appropriately recognized the uncertainty in estimates of performance, and, with the help of graphics, were as easy to understand as results from selection based on average rankings.
Translating patient safety analyses into concise, usable information is both science and art. Bayesian models and graphics are tools that should play a major role in patient safety research.