Ameliorating Racial Unfairness in a VA Algorithm for High-Risk Veterans
Predictive algorithms are often used to identify high-risk patients who may benefit from care management programs, palliative care and hospice, or other resources. The VA has been on the leading edge of using predictive algorithms to improve care delivery. However, recent evidence suggests that such algorithms may be unintentionally biased against racial and ethnic minorities and socioeconomically disadvantaged populations, although this has never been shown at VA. Our team is collaborating with the Office of Clinical Systems Development and Evaluation and the Office of the VA Chief Improvement & Analytics Officer to investigate how to improve the algorithmic fairness of the Care Assessment Needs (CAN) score – a commonly used VA algorithm that reflects a Veteran’s risk of hospitalization or death within a year.
The CAN score was developed over a decade ago, when data limitations precluded accurate records of race and ethnicity and when the VA's data and analytic infrastructure were less sophisticated. The CAN score is based on routinely collected electronic health record and administrative data in the VA Corporate Data Warehouse (CDW). We used data from 4,332,315 Veterans who were alive and had at least one outpatient primary care encounter in 2016. We first assessed the CAN score for unfairness by comparing the distribution of CAN scores and false-negative rates between Black and white Veterans. We then investigated potential mechanisms behind the unfairness. Finally, we are exploring a variety of novel statistical and data approaches to reduce unfairness in the CAN score.
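The false-negative-rate comparison described above can be sketched as follows. This is a minimal illustration, not the study's actual analysis: the column names, the toy data, and the risk threshold are all hypothetical, not the real CDW schema or an operational CAN cutoff.

```python
import numpy as np
import pandas as pd

def false_negative_rate(df, group_col, outcome_col, pred_col, threshold):
    """Per-group false-negative rate: among patients who experienced the
    outcome, the share whose predicted risk fell below the threshold."""
    events = df[df[outcome_col] == 1]          # patients with the outcome
    missed = events[pred_col] < threshold      # flagged as low risk anyway
    return missed.groupby(events[group_col]).mean()

# Toy data with hypothetical column names (not the actual CDW fields).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "race": rng.choice(["Black", "white"], size=1000),
    "died_1yr": rng.integers(0, 2, size=1000),
    "can_score": rng.uniform(0, 99, size=1000),
})
fnr = false_negative_rate(df, "race", "died_1yr", "can_score", threshold=90)
```

A higher value of `fnr` for one group at the same threshold is the kind of disparity the assessment looked for: the model ranking high-risk patients of that group as low risk more often.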
Our assessment of data from these Veterans revealed the following:
- Black Veterans had lower CAN scores than white Veterans on average.
- The false-negative rate was higher for Black Veterans, meaning the CAN score may be underpredicting risk for Black Veterans.
- Differential comorbidity burden is not the primary mechanism behind unfairness.
- After additionally matching on age, CAN scores were equivalent between Black and white Veterans, suggesting that the younger average age of Black Veterans, together with different relationships between comorbidities/diseases and death, is a primary mechanism of unfairness.
- Various statistical techniques, including weighting, interaction terms, and fitting separate models by race, have thus far not ameliorated Black-white differences in false-negative rates. This suggests that more granular data on social determinants of health are likely needed to generate an equitable risk score.
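One of the mitigation attempts listed above, fitting separate models by race, can be sketched as follows. This is a hedged illustration under simplified assumptions: the simulated data and the three features are hypothetical stand-ins, not the actual CAN predictors or modeling pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Simulated cohort; features are illustrative placeholders
# (e.g., age, comorbidity count, prior utilization).
rng = np.random.default_rng(1)
n = 2000
X = rng.normal(size=(n, 3))
group = rng.choice(["Black", "white"], size=n)
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-X[:, 0]))).astype(int)

# Fit one logistic model per group rather than a single pooled model,
# allowing predictor-outcome relationships to differ by group.
models = {}
for g in ["Black", "white"]:
    mask = group == g
    models[g] = LogisticRegression().fit(X[mask], y[mask])

# Score each Veteran with the model fitted to their own group.
risk = np.empty(n)
for g, m in models.items():
    mask = group == g
    risk[mask] = m.predict_proba(X[mask])[:, 1]
```

As the bullet above notes, approaches of this kind did not close the false-negative-rate gap in this study, which is what motivates the call for richer social-determinants data rather than further model adjustments alone.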
This is the first study to demonstrate opportunities to improve racial fairness in the CAN score, a widely used VA risk model. The CAN score underestimates mortality risk for Black Veterans to some extent, suggesting that its fairness could be improved. Differences in age distributions are a mechanism of unfairness that may apply to other VA algorithms as well. Mitigating algorithmic unfairness to improve VA equity may require more than alternative statistical techniques applied to existing clinical and administrative data. Collecting data on social determinants of health should be a priority for improving VA healthcare equity.