If you were an employer with tens of thousands of employees and dependents and had launched a health promotion, disease management and care management program, how would you assess its impact? If you answered that you wouldn't bother with measuring outcomes, you'd be flying blind. If you answered that you'd call up an academic institution to fashion a comparative clinical research trial, you'd be using a lot of your time and money.
But if you answered that you'd use a quasi-experimental design, you'd be eligible for a Disease Management Care Blog Gold Star. That's because business owes it to its employees and investors not only to understand the value of these programs, but also to make reasonable compromises on the detail, speed and accuracy of these kinds of analyses.
That's why this paper by Serxner and colleagues appearing in the American Journal of Health Promotion is a good template for companies that want to understand if their health management programs are doing any good. This analysis involved an unnamed company with over 120,000 insurance beneficiaries.
The authors decided to focus on 75,475 active employees and COBRA participants who were eligible for the programs. The programs included a health risk assessment, lifestyle management, telephonic disease management, a health information nurseline, and health awareness initiatives. The analysis itself restricted inclusion to ages 18 to 64 and required continuous enrollment, which further limited the research to 49,237 individuals.
The baseline comparison period extended from January 2003 through December 2004, while the intervention period extended from 2005 through 2007. When the programs were rolled out in 2005, participation among the insured beneficiaries grew. Since not all of the employees participated at once, the authors took advantage of what turned out to be a staggered implementation with concurrent parallel cohorts made up of participants and non-participants. This allowed for the "quasi-experimental" comparison. Regression modeling was used to isolate and account for the impact of age, gender and the type of medical plan.
In year one, the company spent over $2.5 million on its programs, followed by $4.6 million in year two and $5.0 million in year three (including $2 per member per month for the disease management).
What happened to the company's health care costs? While they increased for everyone over time, the persons who had enrolled in the program experienced less of an increase in their health care costs. The total gross savings were $1.5 million, $15.4 million and $13 million in years one, two and three, respectively. When the savings were netted against the program costs, the first year had an unfavorable return on investment (ROI) of 0.59:1. This turned favorable once persons were in the program for two and three years, with ROIs of 3.33 and 2.59, respectively.
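The reported ROIs follow from dividing each year's gross savings by that year's program costs. A minimal sketch of that arithmetic, using the dollar figures from the text (the savings-over-cost formula is an assumption; the small differences from the reported ratios are presumably due to rounding in the published figures):

```python
# Annual program spend and gross savings, in $ millions (from the text).
program_costs = [2.5, 4.6, 5.0]
gross_savings = [1.5, 15.4, 13.0]

# ROI here is gross savings divided by program cost (an assumed formula,
# consistent with the reported 0.59, 3.33 and 2.59 ratios).
for year, (cost, savings) in enumerate(zip(program_costs, gross_savings), start=1):
    roi = savings / cost
    print(f"Year {year}: ROI = {roi:.2f}:1")
```

Running this yields ratios of roughly 0.60, 3.35 and 2.60, in line with the paper's numbers once rounding is accounted for.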
While studies like this cannot rule out the possibility of "self-selection bias" (i.e. persons destined for lower health care costs were naturally drawn to the programs), the analysis passes muster for a large business that needs a reasonable degree of assurance that it is getting its money's worth. While the analysis itself may seem daunting, the DMCB suggests that its cost, when compared to the millions already being spent, is comparatively reasonable and should be rolled into the price of doing business.
The DMCB suspects dozens of studies like this are being done by employers and insurers, most of them for internal consumption. Serxner et al's study is one that made it into the public domain.
Heads up, corporate America: your competition is not only investing in their employees' health, but also using quasi-experimental analytics to understand the return on investment. It's another factor in achieving a competitive advantage.