Is the pursuit of evidence-based medicine itself evidence-based? That was the head-cramping question the Disease Management Care Blog grappled with when it read this just-published Health Affairs article, Tool Used To Assess How Well Community Health Centers Function As Medical Homes May Be Flawed.
Readers will recall that the National Committee for Quality Assurance (NCQA) is a Washington DC-based not-for-profit that champions the use of performance measures to assess the quality of health care. Provider organizations go through an assessment process based on the measures and, if they pass muster, are "recognized" by the NCQA. The performance measures are based on peer-reviewed medical evidence, vetted by expert panels and then opened for public comment before they are finalized and used.
The DMCB knows this because it has served on two of the NCQA panels.
While its most visible activity has been the ranking of health insurers, the NCQA also offers a soup of recognition, accreditation and certification programs for other types of provider organizations, including disease management vendors and, more recently, medical homes. More on that group of providers later.
Once you earn it, the NCQA quality badge is more than just a festively colored addition to your letterhead and collaterals. Given the past evidence that purchasers also pay some attention to it, the DMCB recalled being unsurprised when the number of disease management vendors sporting the newly established NCQA accreditation logo multiplied faster than the number of vixens at a Kennedy White House pool party. They reasonably believed that it would help them gain credibility and give them a leg-up against their competition.
But even if the NCQA performance measures are grounded in scientific evidence as well as expert consensus, and even if they drive competition based on quality, the question remains: if an organization achieves NCQA recognition, does that really mean that patients are better off for it?
Enter Robin Clarke and colleagues, who wanted to know if that was true for medical homes and their patients with diabetes. They adapted the 2008 version of the NCQA's medical home evaluation survey tool and administered it at 40 Los Angeles community primary care health centers. The medical director or executive team at each center had to complete the tool, which was scored in the usual manner. The score - and the corresponding level of recognition - was then compared to the centers' clinical diabetes care measures based on the National Quality Forum's (NQF) quality measures. These measures were collected on samples of patients based on reviews of the medical records.
Only 30 of the centers completed the tool. Together, they comprised 88 LA clinics caring for more than 600,000 mostly low-income patients, the vast majority of whom were covered by Medicaid.
The NCQA survey tool combines "must pass" criteria with a 100-point scale. The average score among the centers in this study was a respectable 67. Eight would have received the highest Level 3 Recognition (more than 75 points), three would have been Level 2 (between 50 and 75) and the remaining 19 would have been Level 1. The percent of patients who had a measure of HbA1c, LDL, or blood pressure in the past twelve months was 84%, 70%, and 90%, respectively. Approximately 60% of patients had kidney disease screening and 35% had a diabetes eye examination.
However, when the authors used multiple methods to look for a statistical association between higher scores or Levels and higher quality percentages on the NQF measures, none was found.
To their credit, the authors point out that 1) theirs was a faux and unaudited version of the NCQA process, 2) that a Level 1 recognition, while associated with outcomes no better than a Level 2 or 3, may still be better than a "zero," 3) that persons with conditions other than diabetes may still benefit from this kind of process and 4) that they didn't use the 2011 edition of the tool. The DMCB adds that this was in a community health setting involving mostly patients with Medicaid insurance. It's possible that patients in other settings, socioeconomic classes or with other types of insurance could benefit.
Despite the limitations, the DMCB thinks this is an important study that puts the NCQA into perspective and tells us what it may and may not be doing. Hopefully this kind of study will be done in other settings involving other provider types, including the disease management community. In the meantime, the NCQA would be well served to continue to examine the links between its pristine interpretations of the science and the real-world benefits for patients.
In other words, it's time for us to better understand the link between pursuing outcomes and actually achieving them.