Should we care if the NHS is number 1? Questions about the Commonwealth Fund Scorecard
I’ve been hearing a lot of claims recently that the NHS has (as the Guardian puts it) been rated as “the world’s best healthcare system”. These stories draw on a recent Commonwealth Fund report. However, I don’t think the question of whether the NHS is number 1 in the world is a good thing to focus on. I’m also not convinced that the Commonwealth Fund report is a good way of measuring NHS performance.
We shouldn’t focus on ranking healthcare systems
To start by stating the obvious, healthcare isn’t a zero-sum game. If I get excellent care in one place, it’s no skin off my nose if someone else gets even better care in another place. Likewise, if the current Westminster government destroys the NHS but everywhere else in the world gets worse slightly more quickly, it’s not going to be much comfort if we remain number 1. A first question, then, is whether this is a good thing to focus on when defending the NHS – aren’t there better arguments to make?
Another obvious point to make is that the Commonwealth Fund report doesn’t say the NHS is the best in the world. It compares 11 countries. It focuses on industrialised countries, but the inclusion/exclusion criteria aren’t clear.
Slightly more complex, though, is the way that ranking runs through the Commonwealth Fund’s methodology. The Fund acknowledges that a focus on “overall rankings may overshadow important absolute differences in performance” (though it argues that this problem mainly affects the middle of the ranking tables)*. This does raise the question of what information is lost in the focus on rankings.
Is the Commonwealth Fund’s research a good way to assess NHS performance?
I’ve been struggling to find an in-depth description of the Commonwealth Fund’s methodology for rating healthcare, but a 2006 Technical Report (p. 4) on their Scorecard makes clear that it aims to benchmark the US against the best that’s available. This is a reasonable enough thing to do, but a system aimed at comparing US performance with that of other systems may not necessarily be optimal for comparing, say, the NHS and the Swedish system.
In terms of specifics, there are a number of aspects of the 2014 report that one could take issue with. I’ll discuss a few here:
- The Report (pp. 13-4) gives credit for preventive care programmes that may not be evidence-based. For example, asking whether “Patients [are] routinely sent computerized reminder notices for preventive or follow-up care” would mean systems are given credit for providing useless or harmful screening and health check programmes.
- When discussing safe care (p. 15) the Report asks whether “Patient believed a medical mistake was made in treatment or care in past 2 years”. This could actually penalise systems that follow good practice and proactively tell patients about mistakes, while a system that does a ‘good’ job of hiding errors could score better. The Report also judges safety on whether “Doctor routinely receives reminders for guideline-based interventions and/or tests”. Again, this doesn’t take account of whether the guidelines in question are based on robust evidence.
- The Report (p. 23) defines equity as “providing care that does not vary in quality because of personal characteristics such as gender, ethnicity, geographic location, and socioeconomic status.” However, it only discusses equity by looking at the experiences of “adults [grouped] by two income categories” – thus missing important issues ranging from unequal treatment by ethnicity to the need for parity in the treatment of mental health problems.
- A number of the questions the Report asked about equity (p. 24) – for example, “Had medical problem but did not visit doctor because of cost in the past year” – will very much tend to favour a free-at-the-point-of-delivery system like the NHS. I support this way of providing healthcare, but if you’re arguing for it based on questions which inherently favour this type of system then there’s a risk that your argument becomes rather circular.
Clearly, no research is perfect; I’m also aware that this isn’t an area I work in myself. However, with this Commonwealth Fund report I’ve struggled to work out exactly what they’ve done and why.
There have been some very positive moves towards critically assessing research when reporting it, rather than just writing up the findings or press release. However, I haven’t found any good critical assessment of this work in the coverage of it – instead, I just keep seeing people writing about how it shows the NHS is number 1 in the world. I haven’t, for example, seen discussion of why the Commonwealth Fund came up with different rankings to the Euro Health Consumer Index.**
I understand that people are keen to find ‘good news’ stories in order to defend the NHS. However, it’s also important that pro-NHS arguments are robust. Arguing badly for the NHS risks damaging trust (and there are lots of really good arguments for the NHS, so there’s no need to fall back on weak ones!). Also, relying on rankings of uncertain reliability can make the NHS more vulnerable to attack in some ways – for example, imagine the negative headlines if the Commonwealth Fund re-jigs its methodology and the NHS ends up ranked 6th next time round!
* See the Report‘s methodology section, pp. 28-9
** Thanks to Tom Forth for pointing me towards this