1004 — (In)equity in Risk Prediction: Examining and Mitigating Racial Bias in the Veterans Affairs Care Assessment Needs (CAN) Risk Model
Lead/Presenter: Amol Navathe
All Authors: Navathe, AS (Department of Medical Ethics and Health Policy, University of Pennsylvania); Park, SH (Department of Medical Ethics and Health Policy, University of Pennsylvania); Hearn, CM (Department of Medical Ethics and Health Policy, University of Pennsylvania); Jenkins, KA (Leonard Davis Institute of Health Economics, University of Pennsylvania); Rosania, MU (Leonard Davis Institute of Health Economics, University of Pennsylvania); Maciejewski, ML (Department of Medicine, Duke University); Chhatre, S (Leonard Davis Institute of Health Economics, University of Pennsylvania); Kreisler, C (Office of Quality and Patient Safety, Veterans Health Administration); Roberts, CB (Center for Health Equity Research and Promotion, U.S. Department of Veterans Affairs); Linn, KA (Department of Biostatistics, Epidemiology, and Informatics, University of Pennsylvania); Parikh, RB (Department of Medical Ethics and Health Policy, University of Pennsylvania)
The VA computes the Care Assessment Needs (CAN) score weekly for over 5 million Veterans to predict risk of one-year mortality or hospitalization and to target resources to high-risk Veterans. Motivated by evidence of unfair predictive algorithms in other settings, we examined the CAN mortality score for racial unfairness and evaluated mechanisms to mitigate it.
Study Design: This cross-sectional study included Veterans who were alive in 2018, using national VA administrative claims and electronic health record data. We used each Veteran's last score in 2018 from the current CAN model (v2.5) for all analyses, and deaths were confirmed using 2019 mortality data. First, we compared distributions of CAN scores for self-identified non-Hispanic White and non-Hispanic Black Veterans. Second, we assessed CAN fairness by comparing false-negative rates (FNR) across racial groups, defining a score at or above the 75th percentile as a "positive" prediction of mortality. Third, we compared mortality by race after pooling differences across strata of Veterans constructed by exact matching on age and Elixhauser comorbidities. Fourth, to determine whether class imbalance (lower representation of Black Veterans) contributed to model unfairness, we re-assessed fairness metrics after adding race-by-age interaction terms to the model, upweighting Black Veterans, and adding a penalty term for unfairness. Population Studied: 975,344 (20.0%) Black and 3,909,673 (80.0%) White Veterans.
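The FNR comparison described above (a "positive" prediction defined as a score at or above the 75th percentile, with FNR computed separately by racial group) can be sketched as follows. This is an illustrative sketch, not the study's code; the function name, variables, and synthetic data are all hypothetical:

```python
import numpy as np

def fnr_by_group(scores, died, group, threshold_pct=75):
    """False-negative rate per group: among those who died, the
    fraction whose score fell below the 'positive' cutoff (here,
    the 75th percentile of scores, as in the study design)."""
    cutoff = np.percentile(scores, threshold_pct)
    positive = scores >= cutoff
    out = {}
    for g in np.unique(group):
        deaths = (group == g) & died
        # missed deaths / total deaths in group g
        out[g] = float(np.mean(~positive[deaths]))
    return out

# Tiny synthetic example (not study data)
rng = np.random.default_rng(0)
scores = rng.uniform(0, 100, 1000)
died = rng.random(1000) < 0.05
group = np.where(rng.random(1000) < 0.2, "Black", "White")
print(fnr_by_group(scores, died, group))
```

A gap in FNR between groups, as reported below, indicates the model misses a larger share of deaths in one group at the same decision threshold.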
Black Veterans were younger (median age 57.4 vs. 63.8 years) and more likely than White Veterans to have PTSD (48.9% vs. 38.8%) and to be unmarried (57.8% vs. 41.2%). CAN scores were lower for Black Veterans than for White Veterans (mean [SD] 41.7 [28.1] vs. 52.1 [28.7]) and reflected greater under-prediction of death for Black Veterans (FNR 29.7% vs. 19.5%). When matching on comorbidities alone, the pooled mortality rate was lower for Black Veterans (2.1% vs. 3.9%), largely because younger Black Veterans had comorbidity profiles similar to those of older White Veterans. This discrepancy was mitigated after additionally matching on age (pooled mortality 3.2% vs. 3.7%). While upweighting and interaction terms did not reduce unfairness, a group-fairness penalty term narrowed the FNR gap relative to a regression without penalties (FNR 25.8% vs. 21.4%; gap reduced from 10.2 to 4.4 percentage points).
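One common way to implement a group-fairness penalty of the kind described is to add a smooth surrogate for the between-group gap to the logistic-regression loss. The sketch below is an assumption about the general technique, not the study's actual penalty specification; the function name, penalty form, and synthetic data are illustrative only:

```python
import numpy as np

def fit_fair_logistic(X, y, group, lam=1.0, lr=0.1, epochs=500):
    """Logistic regression minimizing log-loss plus
    lam * (gap)^2, where gap is the difference in mean predicted
    risk among true positives between the two groups -- a smooth
    surrogate for equalizing false-negative rates.
    (Hypothetical sketch; not the study's exact penalty.)"""
    w = np.zeros(X.shape[1])
    g0 = (group == 0) & (y == 1)  # deaths in group 0
    g1 = (group == 1) & (y == 1)  # deaths in group 1
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        grad_ll = X.T @ (p - y) / len(y)        # log-loss gradient
        s = p * (1 - p)                          # sigmoid derivative
        gap = p[g0].mean() - p[g1].mean()
        grad_pen = 2 * gap * (X[g0].T @ s[g0] / g0.sum()
                              - X[g1].T @ s[g1] / g1.sum())
        w -= lr * (grad_ll + lam * grad_pen)
    return w

# Synthetic demonstration (not study data)
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
group = (rng.random(500) < 0.3).astype(int)
y = (rng.random(500) < 1.0 / (1.0 + np.exp(-X[:, 0]))).astype(int)
w = fit_fair_logistic(X, y, group)
```

Setting `lam=0` recovers ordinary logistic regression; increasing `lam` trades overall fit for a smaller between-group gap, which is the trade-off reflected in the FNR results above.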
The CAN score, a widely used VA risk model, underestimates mortality risk for Black Veterans. A primary driver of unfairness is the difference in the age distributions of the two racial groups. Given a group of Veterans with a specific set of comorbidities, Black Veterans tend to be younger than White Veterans, resulting in a lower CAN score. Statistical methods to address class imbalance mitigated but did not eliminate unfairness, suggesting additional unmeasured confounders contribute to unfairness in the CAN score, potentially those associated with social determinants of health.
This study describes indicators of racial unfairness in a widely used VA algorithm, driven by a Black Veteran population that is younger yet comparably sick relative to the White Veteran population. Mitigating algorithmic unfairness may require data on social determinants of health and should be a priority for improving healthcare equity.