Accurate risk adjustment is critical for valid assessment of quality, provider comparisons, and benchmarking across healthcare systems. Chart-based clinical data, the "gold standard" for risk adjustment, are costly and time-consuming to collect, thus administrative data remain the basis of risk adjustment, despite inadequacies in capturing patient severity. Agency for Healthcare Research and Quality initiatives such as development of administrative data-based inpatient quality indicators to screen for inpatient quality problems and more recent initiatives to enhance these measures with automated clinical data such as laboratory tests are important steps towards improvement of risk-adjustment models.
This study explored the addition of clinical data to administrative data-based risk-adjustment models for specific outcomes of acute medical care. The specific study objectives were:
1) To compare the ability of an administrative data-based model with that of a model supplemented by clinical data elements to predict outcomes from selected acute medical conditions;
2) To compare the ability of these models to identify patients at high and low risk of the specified outcomes; and
3) To examine facility-level prediction error from models using only administrative data compared with that from models supplemented by clinical data.
This was a retrospective observational pilot study. We included all VA acute medical admissions with a principal diagnosis of acute myocardial infarction, congestive heart failure, cirrhosis and alcoholic hepatitis, chronic obstructive pulmonary disease, gastrointestinal hemorrhage, hip fracture, pneumonia, acute renal failure, or acute stroke during FY2004 through FY2007. Administrative inpatient data and laboratory test data were obtained from VA's PTF and DSS-LAR data files, respectively. Outcomes were in-hospital death (yes/no) and length of stay (LOS; days). Predictor variables included age, gender, race, comorbidities, and laboratory data. Starting with a standard risk-adjustment model based on administrative data, we evaluated the improvement in predicting in-hospital death and LOS from adding risk factors based on six laboratory tests (albumin, bilirubin, serum sodium, creatinine, blood urea nitrogen [BUN], and white blood cell [WBC] count). Model development and hypothesis testing were based on estimation of hierarchical multivariate logistic and linear regression models.
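The model comparison described above can be sketched as follows. This is an illustrative example on synthetic data, not the study's code: it approximates the hierarchical logistic models with an ordinary logistic regression (scikit-learn's LogisticRegression), uses only two administrative and two laboratory predictors, and all variable names and coefficients are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
age = rng.normal(70, 10, n)          # administrative predictor
comorbid = rng.integers(0, 5, n)     # administrative predictor: comorbidity count
albumin = rng.normal(3.5, 0.6, n)    # laboratory predictor (g/dL)
bun = rng.normal(20, 8, n)           # laboratory predictor: blood urea nitrogen (mg/dL)

# Simulated true risk depends on both administrative and laboratory
# values, so the laboratory-supplemented model should discriminate better.
logit = -3 + 0.03 * age + 0.3 * comorbid - 0.8 * albumin + 0.04 * bun
death = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_admin = np.column_stack([age, comorbid])
X_lab = np.column_stack([age, comorbid, albumin, bun])

Xa_tr, Xa_te, Xl_tr, Xl_te, y_tr, y_te = train_test_split(
    X_admin, X_lab, death, test_size=0.3, random_state=0)

admin_model = LogisticRegression(max_iter=1000).fit(Xa_tr, y_tr)
lab_model = LogisticRegression(max_iter=1000).fit(Xl_tr, y_tr)

# Discrimination is compared via the C-statistic (area under the ROC curve).
c_admin = roc_auc_score(y_te, admin_model.predict_proba(Xa_te)[:, 1])
c_lab = roc_auc_score(y_te, lab_model.predict_proba(Xl_te)[:, 1])
print(f"C-statistic, administrative model:          {c_admin:.3f}")
print(f"C-statistic, laboratory-supplemented model: {c_lab:.3f}")
```

A full replication would instead fit hierarchical (facility-level random effects) models, for example with statsmodels' mixed-effects GLM facilities, and a linear analogue for LOS.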
There were nearly 368,000 relevant admissions across all 160 VA facilities (substations). Hip fracture was the smallest cohort (N=8,688) and CHF the largest (N=87,698). Virtually all of the laboratory test measures were statistically significant in all of the cohort regressions. Adding laboratory test measures improved model discrimination (C-statistic) for most cohorts and for both outcomes; the largest improvement was for death from cirrhosis and alcoholic hepatitis (from 0.66 to 0.78). Model calibration, as measured by comparing the bottom and top risk deciles from each model, also improved significantly across most cohorts. Evaluating the impact on facility-level performance, we found poor concordance between the models in identifying the ten highest- and lowest-risk facilities. At the low-risk end, concordance varied from 1 to 5 facilities; that is, of the ten facilities with the lowest mortality risk under the laboratory model, only 1 to 5 were also identified by the administrative model.
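The facility-concordance check reported above can be sketched as follows. This is a toy example with randomly generated facility risk estimates (not the study's data): given each facility's risk-adjusted mortality estimate under the administrative-only and laboratory-supplemented models, it counts how many of the ten lowest-risk facilities the two rankings share.

```python
import numpy as np

rng = np.random.default_rng(1)
n_facilities = 160  # the study spans 160 VA facilities

# Hypothetical facility-level risk-adjusted mortality estimates.
admin_risk = rng.uniform(0.02, 0.12, n_facilities)
# Laboratory-supplemented estimates correlate with, but differ from,
# the administrative ones.
lab_risk = admin_risk + rng.normal(0, 0.02, n_facilities)

# Identify each model's ten lowest-risk facilities and count the overlap.
bottom10_admin = set(np.argsort(admin_risk)[:10])
bottom10_lab = set(np.argsort(lab_risk)[:10])
concordant = len(bottom10_admin & bottom10_lab)
print(f"{concordant} of the 10 lowest-risk facilities agree across models")
```

The same overlap count on the ten highest-risk facilities (via `np.argsort(...)[-10:]`) gives the top-end concordance.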
This project represents an important step toward developing a methodology for valid and accurate comparisons of risk-adjusted hospital performance with respect to outcomes of acute medical conditions, which can be used to monitor and improve inpatient care in the VA. Further, these methods will be useful for prospectively identifying veterans at higher risk of adverse outcomes, so that interventions can be implemented to reduce that risk.