How do you measure value? That question was tackled by a panel as part of a recent State of the Art (SOTA) conference on performance measurement. In our panel, experts from a range of backgrounds met virtually on three occasions to define value and help focus priority areas for further VA research on measurement strategies.
Value has become a buzzword in the health care lexicon, but that was not always the case. In the 1990s, value was not a common term; instead the focus was on cost-benefit and cost-effectiveness analysis. In these models, the goal was to compare the lifetime societal costs and benefits of alternative technologies or treatments. Cost-effectiveness analysis, though more popular than cost-benefit analysis, hit roadblocks: its results were often treated lightly (or even fearfully), and their policy implications were often ignored.
The past five years have seen a growing interest in value, as evidenced by the recent IOM report that espoused better care for less.1 In 2012, HSR&D funded two Centers of Innovation (COINs) with explicit value goals and several others whose goals touch on value. Our COIN in Palo Alto, the Center for Innovation to Implementation (Ci2i), has an overarching value theme and seeks opportunities to improve the value of care for mental health and specialty care services.
Often we are asked how we define value. We, like the SOTA panel, define value as the gain in health outcomes per dollar spent. At first blush, this definition is consistent with the American College of Physicians' endorsement of cost per quality-adjusted life year as the best measure of value.2 The feeling of déjà vu is likely to be palpable for many of our generation. If outcomes are measured using quality-adjusted life years, then our definition seems to suggest that value equals cost-effectiveness analysis, which some publicly accountable decision makers still regard as the untouchable third rail.
Of course, we see some important distinctions between value and cost-effectiveness analysis. To measure value, the outcomes need to gauge what is most important to the consumer. While the concept of health-related quality of life may sound appealing, it is hard to look at the items on the EQ-5D or SF-12 scales and agree that they measure exactly what matters most to each patient. These generic scales undeniably miss improvements that patients would say have high value to them. Consequently, we need to measure other endpoints that serve as proxies for patient outcomes. Process quality metrics with strong evidentiary support can be good proxies; others might include access or patient-reported outcomes, such as satisfaction.
The SOTA discussions helped us identify three ways that the current discussion about value differs from past discussions of cost-effectiveness. First, standard cost-effectiveness analysis assumes efficient production. Yet there is near universal agreement that the U.S. health care system is burdened by so many inefficiencies that up to 30 percent of spending could be cut without negatively affecting patients' outcomes.1 By disconnecting value from cost-effectiveness, we no longer have to assume efficient production; becoming more efficient, cutting waste, and becoming lean, all issues embraced by the Blueprint for Excellence, are then important components of value.
This subtle shift also allows us to broaden the discussion to consider organizational culture, a critical component of safety, quality, and efficiency. It lets us consider clinician behaviors, and the use of information and incentives, as ways to improve the delivery of high-value care. Inefficiencies are readily apparent: we observe the use of more expensive biologics for age-related macular degeneration, the use of more expensive second-generation antipsychotic drugs, and the use of more technically challenging surgical techniques. These arguably "suboptimal" behaviors continue even when there is evidence that less expensive or better treatment options exist.
Second, many well-done cost-effectiveness analyses were divorced from economic questions of implementation. However, when purchasing an expensive new technology, such as a robot to rehabilitate stroke patients, cost-effectiveness cannot be easily divorced from implementation. Within these discussions about value, we can therefore also raise important questions about the organization and delivery of care, especially as organizations are tasked with purchasing new innovations within a fixed budget.
Finally, the discussion about value enables us to consider alternatives, such as competition, that are not easily measured with a traditional cost-effectiveness analysis. Most Veterans over age 65 can choose whether to get care from VA or a Medicare provider. If they choose to use VA over the alternatives, then they value VA. VA can (and sometimes does) do things that can cause Veterans to reconsider their choice to use VA care. We recently published a paper that followed five large-scale adverse events (LSAE). We found that patients over age 65 responded to quality and safety information, as evidenced by their switch of providers after an LSAE.3
Before the SOTA, we heard from many people about the inherent difficulties of measuring value. Some said it was hopeless, that value, like beauty, is in the eye of the beholder. The SOTA convinced us that some clear thinking on what we mean by value, and how to measure it in a relevant way, can avoid that trap and make our work as health services researchers much more focused and useful.
1. Institute of Medicine. Best Care at Lower Cost: The Path to Continuously Learning Health Care in America. Smith, M., Saunders, R., Stuckhardt, L., et al., eds. Washington, DC: The National Academies Press; 2012.
2. Owens, D., Qaseem, A., Chou, R., et al. "High-value, Cost-conscious Health Care: Concepts for Clinicians to Evaluate the Benefits, Harms, and Costs of Medical Interventions," Annals of Internal Medicine 2011; 154(3):174-80.
3. Wagner, T.H., Taylor, T., Cowgill, E., et al. "Intended and Unintended Effects of Large-scale Adverse Event Disclosure: A Controlled Before-after Analysis of Five Large-scale Notifications," BMJ Quality and Safety 2015; 24(5):295-302.