The typical American’s health compares poorly to that of their counterparts in other high-income countries, even though the U.S. spends twice as much as these countries do on medical care. Behind that middling average lies substantial health inequality. A 40-year-old American male can expect to live 15 years less if he’s one of the poorest 1% of Americans rather than one of the richest 1%. Black children who live in the richest parts of the United States have higher mortality rates than White children in the poorest parts of the country.
Many have put these observations together with another aspect of U.S. “exceptionalism”: We are the only high-income country without universal health insurance coverage. And they have concluded that the key to improving health and reducing health inequality in the U.S. is to finally enact universal coverage.
They’re wrong. While these two facts are correct, they have very little to do with each other. There are good reasons to support universal health coverage, but noticeably improving population health is not one of them.
Indeed, the evidence suggests that the health disparities among Americans are not driven by differences in access to health insurance or to medical care. Rather, the key to improving health is far more complex: It lies in changing health behaviors and reducing exposure to external sources of poor health.
Perhaps the clearest evidence of how little impact health insurance reform has on health comes from other countries that have universal health insurance yet still exhibit substantial health inequality. Consider Sweden and Norway, two Nordic countries with universal health insurance and a generous, cradle-to-grave social safety net. Yet differences in life expectancy between adults in the top 10% and bottom 10% of the national income distribution in those countries are similar to the disparities in the United States.
Or consider the enormous differences across the country in remaining life expectancy for elderly Americans, all of whom are covered by the same Medicare health insurance program. Researchers have identified which U.S. cities are better or worse for elderly longevity, and also which tend to provide more medical care than others. But the evidence indicates that the places you’d want to move to in order to increase your life expectancy in retirement aren’t the same places you’d move to in order to receive more medical care.
Indeed, there is widespread agreement among researchers that medical care, let alone health insurance, is not the only, or even the most important, determinant of health. Rather, the key to better health and smaller health disparities lies in the air we breathe, the food we eat, and the cigarettes we do or do not smoke. Which means that the key public policies for improving health must be those that tackle these sources of poor health, such as pollution regulation or soda and cigarette taxes. The path to major health improvements doesn’t run through health insurance and health care policy.
How can this possibly be?
It is not because health insurance is not important for health. Of course it is. But its effects are too small for health insurance reform to make much of a dent in the large U.S. income-health gradient, or to substantially improve the poor health of average Americans.
Behind this relative unimportance of health insurance coverage for health is a startling but little-understood reality: No one in America is actually uninsured when it comes to their health care. Rather, the nominally “uninsured,” those who lack formal health insurance coverage, nonetheless receive a substantial amount of medical care that they don’t pay for.
There is a vast web of public policy requirements and dedicated public funding to provide the nominally uninsured with free or heavily discounted medical care. And no, we’re not just talking about the emergency room. Through a piecemeal slew of policies at the federal, state, and local levels, the government has created a large, complex network of publicly regulated, publicly funded programs that provide free or low-fee preventive care, care management for chronic health problems, and non-emergency hospital care for the uninsured and under-insured.
This point was made clear by data from Oregon, where the state ran a lottery for health insurance coverage in 2008. The process was similar to a clinical trial for a new drug, in which some patients are randomly assigned the new drug and others are assigned an older drug or a sugar pill. Except in this case, Oregon randomly assigned public health insurance coverage to about 10,000 low-income, uninsured adults but not to the thousands of others who had signed up to “win” free public health insurance. The lottery’s results made clear that formal health insurance coverage delivers real benefits to the uninsured: better protection against expensive medical bills, greater likelihood of having a medical home, more access to medical care, and, ultimately, improved health.
But the experiment’s results also revealed something striking about the experience of the uninsured: They receive about four-fifths of the medical care that they would get if they were insured, including primary care, preventive care, prescription drugs, emergency room visits, and hospital admissions. And they pay for only about 20 cents of every dollar of medical care that they receive. In other words, they are not actually uninsured. Rather, there is a lot more commonality in the medical care received and (not) paid for by the insured and the uninsured than those labels might suggest.
And once we realize that everyone in America can access medical care, it becomes much clearer why formalizing this access, while valuable for other reasons, is unlikely to make a meaningful difference to people’s health or to substantially reduce the large disparities in population health.
The surprisingly limited role of health care policy or health insurance in driving population health is not a new observation. A half century ago, the economist Victor Fuchs, who at age 99 is now widely considered the founding father of the economic study of health, made this point in his now-famous “Tale of Two States.” He described two neighboring states in the Western U.S. that were similar along many of the dimensions believed to be important for health, including medical care, income, schooling, climate, and urbanicity. Yet the people in one state were among the healthiest in the U.S. Their neighbors in the other state were among the least healthy, with annual death rates that were 40% to 50% higher.
You may get an inkling of where Fuchs was going with this comparison when we tell you that the two states were Utah and Nevada. And that the residents of Utah were the ones enjoying much better health.
Fuchs famously attributed the lower mortality rates of the clean-living, predominantly Mormon residents of Utah to their better health behaviors. Their Nevada neighbors, by contrast, enjoyed what he referred to as “more permissive” norms. Rates of smoking and drinking were much lower in Utah than in Nevada. And differences in mortality between the two states were particularly pronounced for diseases directly linked to those behaviors, such as lung cancer and cirrhosis of the liver.
Fuchs’s simple tabulations of publicly reported death rates by age and gender for Utah and Nevada may look antiquated by modern data science standards. But his central argument has stood the test of time. A subsequent half-century of confirmatory work has hammered home an important but often overlooked point: When it comes to improving health outcomes and reducing health disparities, health insurance policy is not the lever to lean on.