FORUM - Translating research into quality health care for Veterans

Performance Measurement: Technical Aspects

This article briefly reviews some general concepts related to the technical aspects of performance measurement that managers should be aware of when creating and using performance measurement systems. These concepts all relate to the following basic principle of performance measurement:

Specific performance indicators represent a sample of all the possible processes and behaviors that must happen for patients to receive high-quality care overall.

While the result of an indicator relating to colon cancer screening has some intrinsic interest, when managers offer substantial monetary incentives and assess performance through such indicators, the explicit assumption is that the indicators measure a broader construct, such as a clinic's or hospital's overall quality, or perhaps the state of primary care or preventive care at a facility. As with the more familiar sampling of people to estimate a population characteristic, the indicators must be sampled so that they represent the entire target population of indicated processes and behaviors. Furthermore, sampling indicators to estimate a construct such as quality introduces an additional source of uncertainty into the estimates. Standard performance measurement approaches do not adequately account for these issues.
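To see how much uncertainty indicator sampling alone can introduce, consider a minimal Python sketch (all rates and counts are hypothetical): a facility's true performance spans 200 care processes, but each scoring round samples only 10 indicators. Even if every sampled indicator were measured perfectly, the aggregate score would vary from sample to sample:

```python
import random

random.seed(1)

# Hypothetical facility: true pass rates for 200 possible care processes.
true_rates = [random.uniform(0.40, 0.95) for _ in range(200)]
true_quality = sum(true_rates) / len(true_rates)  # the construct of interest

# Each scoring round samples only 10 indicators to stand in for all 200.
scores = [
    sum(sample) / len(sample)
    for sample in (random.sample(true_rates, 10) for _ in range(1000))
]

print(f"true quality:            {true_quality:.3f}")
print(f"aggregate scores ranged: {min(scores):.3f} to {max(scores):.3f}")
```

The spread in scores is pure noise from the choice of indicators, over and above any error in measuring the sampled indicators themselves.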

First and most obviously, a performance measurement system is irretrievably flawed if the sampled measures do not adequately represent the range of behaviors that are actually important and that we should encourage. A non-representative sample provides a distorted measure of a broader construct such as "primary care quality." It also creates perverse incentives for providers to abandon important processes of care and concentrate on the incidental processes that are overrepresented in the performance measures.

Second, the usual aggregate scores, such as average pass rates on all measured preventive care or chronic care indicators, do not adequately reflect the sampling variability inherent in choosing a few indicators to represent a broader construct of quality. As a result, these aggregate measures may have a much higher noise-to-signal ratio than is suspected and may track actual changes in a provider's or clinic's practice poorly, which breeds cynicism and demoralizes those being profiled. One way to mitigate this problem is to use a random effects or multilevel analysis, which, by explicitly modeling and removing some of the measurement error, yields a more precise quality score.
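As a sketch of how such an analysis sharpens a quality score, the snippet below applies a simple method-of-moments empirical Bayes shrinkage, one common way to operationalize a random-effects model; the provider pass counts are hypothetical:

```python
# Hypothetical provider data: (passes, eligible patients) for six providers.
data = [(18, 25), (40, 50), (7, 10), (55, 80), (9, 20), (30, 35)]

rates = [p / n for p, n in data]
grand_mean = sum(p for p, _ in data) / sum(n for _, n in data)

# Method-of-moments split of the observed variance into noise (binomial
# sampling error within providers) and signal (variation between providers).
noise = [r * (1 - r) / n for r, (_, n) in zip(rates, data)]
total_var = sum((r - grand_mean) ** 2 for r in rates) / (len(rates) - 1)
between_var = max(total_var - sum(noise) / len(noise), 0.0)

# Shrink each raw rate toward the grand mean in proportion to its noise:
# the smaller a provider's denominator, the stronger the pull to the mean.
for (p, n), r, v in zip(data, rates, noise):
    weight = between_var / (between_var + v) if (between_var + v) > 0 else 0.0
    shrunk = weight * r + (1 - weight) * grand_mean
    print(f"n={n:3d}  raw={r:.2f}  shrunken={shrunk:.2f}")
```

Low-volume providers are pulled hardest toward the overall mean, which is precisely the measurement error the model is designed to remove.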

Third, in designing a performance measurement system, it is important to identify the organizational level at which the variation is located and at which a response should occur. If a process varies across facilities but not across providers within a facility, and if the best approach to fixing the problem is organizational rather than individual, then what is the point of constructing provider-level profiles?
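A simple variance decomposition can locate the level at which the variation sits. The sketch below (hypothetical pass rates, ignoring denominator sizes for simplicity) computes the share of total variation attributable to facilities versus providers within facilities:

```python
# Hypothetical pass rates for providers nested within three facilities.
facilities = {
    "A": [0.62, 0.60, 0.63, 0.61],
    "B": [0.80, 0.82, 0.79, 0.81],
    "C": [0.71, 0.70, 0.72, 0.69],
}

all_rates = [r for rs in facilities.values() for r in rs]
grand = sum(all_rates) / len(all_rates)
fac_means = {f: sum(rs) / len(rs) for f, rs in facilities.items()}

# Between-facility variance: spread of facility means around the grand mean.
between = sum((m - grand) ** 2 for m in fac_means.values()) / len(fac_means)

# Within-facility variance: spread of providers around their facility mean.
within = sum(
    (r - fac_means[f]) ** 2 for f, rs in facilities.items() for r in rs
) / len(all_rates)

share = between / (between + within)  # fraction of variation at facility level
print(f"between-facility: {between:.5f}  within-facility: {within:.5f}")
print(f"facility-level share of variation: {share:.2f}")
```

In this made-up example nearly all of the variation sits between facilities, so provider-level profiles would mostly be reporting noise and the response belongs at the organizational level.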

A corollary of this last point is that if there is not much variation, there may not be much point in measuring at anything other than the population level. For example, if only 50 percent of patients across a health care network receive a recommended process of care, and variation across providers is small relative to this huge absolute gap (from 50 percent to 100 percent), then a network-wide remedy is needed. Furthermore, to assess the remedy, a simple measurement based on a modest, network-wide sample is all that is needed to see how the rate changes. This task is much simpler than implementing a performance measurement system that must draw samples, calculate rates, and educate individual providers.
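For instance, the standard sample-size formula for estimating a proportion shows just how modest that network-wide sample can be; this sketch assumes simple random sampling and a 95 percent confidence level:

```python
import math

def sample_size(p: float, margin: float, z: float = 1.96) -> int:
    """Records needed to estimate a rate near p within +/- margin at ~95% confidence."""
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

# Hypothetical: track a network-wide rate near 50 percent to within 5 points.
print(sample_size(0.50, 0.05))  # 385 records, regardless of network size
```

Roughly 385 randomly sampled records are enough to track the network-wide rate within five percentage points, far less effort than profiling every provider.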

Finally, managers should be relieved to know that there are data suggesting that measurement, feedback, and incentives using well-designed performance indicators, such as some of the VA External Peer Review Process (EPRP) indicators, do appear to have a "halo" effect on indicators that are not part of the active measurement and feedback set.[1] However, this halo appears to extend only across the same clinical condition and not to unrelated clinical areas outside the current EPRP system. The implication of this finding is not that we should start using poor measures for clinical areas we do not yet monitor, but that performance improvement using clinical indicators may only be able to cover a finite amount of the waterfront. While we seek the holy grail of comprehensive automated performance measurement, other established but often underemphasized management tools, such as an active and effective emphasis on the perennial challenges of human resources and staff morale, remain critically important to maintaining and improving quality.

  [1] Asch SM, et al. Comparison of Quality of Care for Patients in the Veterans Health Administration and Patients in a National Sample. Annals of Internal Medicine 2004;141(12):938-45.
