Christiansen C, Rivard P, Tsilimingra D, Zhao S, Loveland S, Rosen A. Using Patient Safety Indicators to Identify VA Outlier Hospitals: A Picture is Worth 1000 Statistics. Paper presented at: AcademyHealth Annual Research Meeting; 2005 Feb 1; Boston, MA.
Objectives: Patient Safety Indicators (PSIs) developed by the Agency for Healthcare Research and Quality (AHRQ) are useful for identifying potential in-hospital patient safety events. However, inherent characteristics of the indicators make PSI results difficult to interpret. Our objectives were to 1) compare Bayesian and average-ranking methodologies for selecting hospitals with extremely high or low rates and 2) improve the presentation of hospital-level information from PSI analyses.

Methods: Hospital-level PSI counts (numerators), acute-care hospitalizations (denominators), and observed, expected, and AHRQ-smoothed rates were derived for 16 PSIs using the FY01 Patient Treatment File and AHRQ PSI software (version 2.0). Using observed-to-expected (O/E) ratios and Bayesian models, we estimated distributions representing the true O/E ratios for each indicator at 118 hospitals. For the six most frequent PSIs, we used simulation methods to obtain hospital-level posterior densities for the combined O/E ratio across the six indicators. We compared rankings of median ratios from the posterior densities to average rankings of smoothed rates for hospitals that ranked in the top 10 (or bottom 10) by either method. Graphical methods were developed to facilitate communication of the results.

Results: Both methods selected seven of the same hospitals in the top 10 and seven in the bottom 10. Six hospitals chosen by the Bayesian method but not by the average-ranking method had rankings that varied widely across indicators. Averaging ranks discards the strength of evidence for high or low ratios; the Bayesian method retains this information and incorporates the known correlation across PSIs. Using the Bayesian method, we can conclude, with more certainty than not, that the top 10 hospitals had 23% to 46% fewer PSIs than expected, and the bottom 10 had 39% to 92% more PSIs than expected. Posterior density graphs demonstrate how the certainty of the estimates affects the combined ratio.

Implications: Bayesian analyses provided more information, appropriately recognized the uncertainty in estimates of performance, and, with the help of graphics, were as easy to understand as results from selection based on average rankings.

Impacts: Translating patient safety analyses into concise, usable information is both science and art. Bayesian models and graphics are tools that should play a major role in patient safety research.
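To illustrate the general approach described in the Methods, the sketch below shows how posterior draws of hospital-level O/E ratios can be combined across several indicators and used to rank hospitals. This is a minimal, simplified example using synthetic counts and independent Gamma-Poisson conjugate posteriors; it does not reproduce the authors' model, which incorporated correlation across PSIs, and all data, priors, and weighting choices here are illustrative assumptions.

```python
# Simplified illustrative sketch only -- NOT the authors' model.
# Independent Gamma-Poisson posteriors per indicator, synthetic counts,
# and expected-count weighting for the combined O/E ratio.
import numpy as np

rng = np.random.default_rng(0)
n_hosp, n_psi, n_draws = 118, 6, 10_000       # hospitals, indicators, posterior draws

# Synthetic expected (E) and observed (O) counts standing in for real PSI data
E = rng.uniform(5, 50, size=(n_hosp, n_psi))
true_ratio = rng.lognormal(mean=0.0, sigma=0.3, size=(n_hosp, 1))
O = rng.poisson(true_ratio * E)

# Weakly informative Gamma(a0, b0) prior on each true O/E ratio theta;
# with O ~ Poisson(theta * E), the posterior is Gamma(a0 + O, b0 + E)
a0, b0 = 1.0, 1.0
post_shape = a0 + O                            # (n_hosp, n_psi)
post_rate = b0 + E

# Posterior draws of theta for every hospital/indicator pair
theta = rng.gamma(shape=post_shape[..., None],
                  scale=1.0 / post_rate[..., None],
                  size=(n_hosp, n_psi, n_draws))

# Combine the six indicators into one O/E ratio per draw,
# weighting each indicator by its expected count
combined = (theta * E[..., None]).sum(axis=1) / E.sum(axis=1, keepdims=True)

# Rank hospitals by the posterior median of the combined ratio
median_ratio = np.median(combined, axis=1)
ranking = np.argsort(median_ratio)             # lowest (fewest PSIs than expected) first

print("Top 10 hospitals (lowest combined O/E):", ranking[:10])
print("Bottom 10 hospitals (highest combined O/E):", ranking[-10:])
```

The full posterior samples in `combined` can also be plotted as hospital-level density curves, analogous to the posterior density graphs described in the Results, so that the spread of each hospital's estimate is visible alongside its ranking.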