The health care that Veterans receive can be substantially
improved by implementing validated health care quality measures because they define and motivate guideline-congruent care, reveal gaps in the continuum of
care, and identify high- and low-performing facilities so that quality improvement efforts can be targeted. However, implementing quality measures without
sufficient validation may promote poor or incomplete care, divert effort and attention from more important activities, and create skepticism and ill will
toward the entire quality management enterprise.
Unfortunately, quality measures are often formulated
and implemented without careful empirical validation or adequate appreciation for possible unintended consequences. Health services researchers
can play a critical role by conducting validation studies of new and existing quality measures in order to refine their specifications and guide their
interpretation and implementation.
This article discusses three under-appreciated
aspects of quality measure validation that, when addressed, can improve the quality of health care that Veterans receive.
Predictive validity refers to the association between
antecedent quality indicators, particularly process quality measures, and subsequent quality
indicators, particularly outcomes. Much of the research on the predictive validity of quality measures examines the associations between facility-level
quality measure scores and average patient outcomes (e.g., mortality rates). Due to a phenomenon known as the ecological fallacy, such analyses reveal
nothing about whether patients who receive quality-measure-congruent care have better outcomes; this question can only be addressed with patient-level
process and outcome data. In fact, the correlation between aggregated facility-level process data and average
outcomes is often in the opposite direction of the patient-level association. This counterintuitive fact has long been appreciated in other
scientific disciplines, but has only recently begun to be recognized in the context of quality measure validation. Thus, in order to determine the
predictive validity of a process quality measure—that is, if patients who satisfy the process criteria have better subsequent outcomes—it is necessary to use a model that contains
patient-level process and outcome data.1
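The ecological fallacy described above can be illustrated with a small simulation (all parameters here are hypothetical, chosen only to show the mechanism): within every facility, patients who receive the indicated process have better outcomes, yet facilities with higher process-measure scores have worse average outcomes because they also treat sicker patients.

```python
import random

random.seed(1)

# Hypothetical confounding pattern: facilities whose patients are sicker
# (higher baseline risk of a bad outcome) also deliver the indicated
# process of care more often.
facility_params = [(0.6, 0.9), (0.4, 0.7), (0.2, 0.5)]  # (baseline risk, process rate)

def outcome_rate(pairs, got_process):
    selected = [good for proc, good in pairs if proc == got_process]
    return sum(selected) / len(selected)

within_diffs = []    # patient-level association, within each facility
facility_level = []  # (facility process score, facility average outcome)
for baseline_risk, process_rate in facility_params:
    pairs = []
    for _ in range(20000):
        proc = random.random() < process_rate
        # Receiving the process lowers this patient's risk of a bad
        # outcome by an absolute 0.10.
        good = random.random() > baseline_risk - (0.10 if proc else 0.0)
        pairs.append((proc, good))
    # Patient level, within facility: treated patients do better.
    diff = outcome_rate(pairs, True) - outcome_rate(pairs, False)
    within_diffs.append(diff)
    facility_level.append((sum(p for p, _ in pairs) / len(pairs),
                           sum(g for _, g in pairs) / len(pairs)))
    print(f"within-facility benefit of the process: {diff:+.2f}")

# Facility level, the association reverses: the facility with the
# highest process score has the worst average outcome.
for score, avg_outcome in sorted(facility_level):
    print(f"process score {score:.2f} -> average outcome {avg_outcome:.2f}")
```

A facility-level regression on these aggregated scores would conclude the process is harmful; only the patient-level comparison recovers its benefit.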
The ability to use available administrative data to accurately identify patients with particular characteristics and the occurrence of specific health care
events is central to the validity of many quality measures. Specification validity refers to the sensitivity and specificity of the coding strategies used
to identify and define the relevant patients and processes. In many areas of health care, available administrative data only approximately map onto
consensus quality standards.
For example, mental health care procedure codes such as "individual psychotherapy" do not specify the type or target of care (e.g., prolonged exposure
therapy for PTSD). Although quality measure developers are often very creative in using combinations of diagnosis, procedure, and other codes to
operationalize aspects of quality, the sensitivity and specificity of these coding strategies need to be verified through comparisons
with other data sources such as chart review or direct observation. Studies of specification validity have revealed major problems in the specifications of
long-established quality measures.2,3 More studies of specification validity are needed to understand the limitations of the underlying coding strategies
and improve them when possible.
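To make the sensitivity and specificity of a coding strategy concrete, the sketch below scores a hypothetical administrative-code flag against chart review as the gold standard; the eight records and their values are invented for illustration.

```python
# Scoring a hypothetical administrative coding strategy against chart
# review (the gold standard). Each record is
# (flagged_by_codes, confirmed_by_chart_review); values are invented.
records = [
    (True, True), (True, True), (True, True), (True, False),
    (False, True), (False, False), (False, False), (False, False),
]

tp = sum(coded and chart for coded, chart in records)          # true positives
fp = sum(coded and not chart for coded, chart in records)      # false positives
fn = sum(not coded and chart for coded, chart in records)      # false negatives
tn = sum(not coded and not chart for coded, chart in records)  # true negatives

# Sensitivity: of the patients chart review confirms, what share do the
# codes catch?  Specificity: of the patients chart review rules out,
# what share do the codes correctly leave unflagged?
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
# prints: sensitivity = 0.75, specificity = 0.75
```

In a real validation study the same two-by-two table would be built from a chart-review sample, but the arithmetic is exactly this.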
Quality measures vary in how vulnerable they are to strategies that improve measured performance without improving actual performance. For example, many quality measures
have this form:

    Measured performance = (number of patients in the denominator who received the indicated care) / (number of patients who meet the measure's eligibility criteria, i.e., the denominator)
Facilities can improve measured performance by increasing the numerator, which is the intention,
or by restricting the number of patients who qualify for the denominator. The validity of many quality measures relies on the invulnerability
of the denominator to manipulation. This kind of "denominator management" can often be detected by examining whether the overall proportion
of patients who meet the denominator
criteria changes substantially once the quality measure is implemented, or whether the proportion of patients qualifying for the denominator varies by facility
to a surprising degree. For example, does the number of patients with a particular diagnosis, positive screen, or lab test change once a facility is held
accountable to provide additional services to those patients? Studies such as these could determine whether facilities are achieving high measured performance by
restricting access to the denominator. Thus, denominator monitoring systems and quality measures that are less subject to manipulation are
critically important to overall quality measure validity.
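One simple form of denominator monitoring can be sketched as follows; the facility names, patient counts, and the 20 percent alert threshold are all hypothetical.

```python
# Monitoring the denominator: did the share of patients qualifying for
# the measure drop once the measure went live, and does any facility
# stand out?  Counts and the alert threshold are hypothetical.
# (facility, period) -> (patients meeting denominator criteria, total patients)
counts = {
    ("A", "pre"): (210, 1000), ("A", "post"): (195, 1000),
    ("B", "pre"): (220, 1000), ("B", "post"): (120, 1000),  # sharp drop
    ("C", "pre"): (205, 1000), ("C", "post"): (200, 1000),
}

flagged = []
for facility in ("A", "B", "C"):
    q_pre, n_pre = counts[(facility, "pre")]
    q_post, n_post = counts[(facility, "post")]
    pre, post = q_pre / n_pre, q_post / n_post
    change = (post - pre) / pre  # relative change in the qualifying share
    print(f"facility {facility}: {pre:.1%} -> {post:.1%} ({change:+.1%})")
    if change < -0.20:  # alert: qualifying share shrank by more than 20%
        flagged.append(facility)

print("review for possible denominator management:", flagged)
```

A flagged facility is not proof of gaming, only a signal that its drop in qualifying patients deserves closer review.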
When these three under-appreciated aspects of health care quality measure validation are addressed, we can be more confident that the quality measures used
in VA are having the intended effect: improving the quality of care that Veterans receive.
1. Finney, J.W. et al. "Why Health Care Process Performance Measures Can Have Different Relationships to Outcomes for Patients and Hospitals: Understanding the Ecological Fallacy," American Journal of Public Health 2011; 101(9):1635-42.
2. Harris, A.H. et al. "Validation of the Treatment Identification Strategy of the HEDIS Addiction Quality Measures: Concordance with Medical Record Review," BMC Health Services Research 2011; 11:73.
3. Harris, A.H. et al. "Are VHA Administrative Location Codes Valid Indicators of Specialty Substance Use Disorder Treatment?" Journal of Rehabilitation Research and Development 2010; 47(8):699-708.