Resilient GP: bad research, and an inappropriate demand on GPs’ time
Resilient GP’s survey report Inappropriate demands to GPs is a poor-quality piece of research – bad enough that it can’t tell readers anything much. I’ve already blogged about the report’s ethical problems. In this post, I’ll raise concerns about the sampling used in this project, the survey itself, the reporting of this work, and the report’s analysis (or lack thereof). I’ll argue that getting GPs – busy as they are – to participate in low quality research like this is an inappropriate use of their time.
Resilient GP say that they
conducted a survey on a large, private online discussion group composed entirely of GPs…We received over 200 unique responses.
Unfortunately, the survey report says little about the characteristics of the group or of those who responded – for example, if all the respondents were in England, their answers may not reflect the situation in Scotland. The report also doesn’t specify the response rate: if these 200 responses came from a group of 200 GPs then they clearly reflect the group well; if they came from a group of 50,000 then the minority who chose to respond may be very different from the group as a whole (for example, they may be responding because they’re annoyed by particular demands). Without this information, it’s impossible to know to what extent issues with the sample may bias the report’s findings or limit how much one can generalise from it.
Resilient GP’s report says almost nothing about the survey they used. They state that
We asked for examples that were considered by that GP to be an inappropriate use of their time and skills.
However, we don’t know, for example, whether participants received a large number of questions to elicit these responses or how the questions were phrased. Without this information, readers of the report can’t really interpret the survey results.
The way the survey report presents the survey results makes it near-enough impossible to draw any robust conclusions from this work. The report presents a list of what were viewed as inappropriate demands. It states that
We excluded very similar responses or those we considered might have conceivably have been a presentation of underlying illness.
However, the report offers nothing beyond this (for example, there is no way of knowing how Resilient GP made judgements as to what “might have conceivably have been a presentation of underlying illness”, or how they decided what was or was not similar enough to exclude).
There is no way of knowing how recent or regular these demands are: for example, if a GP sees one patient who wants a fake sick note in a 40-year career, this would be unfortunate but not exactly shocking; if this happens every day, that would be a very noteworthy finding. Likewise, regular demands for pet medicines 20 years ago – but never today – would mean something very different from regular demands for pet medicines today.
It is also not clear whether Resilient GP have reviewed the available evidence on demands on healthcare professionals and healthcare systems. If they have, this isn’t apparent in this survey report.
As for the report’s analysis: bluntly, there isn’t much. The survey report reads like a long list of inappropriate demands reported by GPs, loosely divided into five categories. It is not clear how or why Resilient GP chose these five categories, although the survey report states that this was done
For ease of reference, and to help stimulate ideas for alternative solutions
There might be interesting information in the survey data Resilient GP collected. If there is, though, the report’s analysis fails to draw it out.
One can conclude very little from Resilient GP’s survey report. Some GPs (about whom we know very little) were asked something (we don’t know what). They responded with a number of reports (we don’t know how many) about what they remember as inappropriate patient demands. We don’t know when these demands were made or how frequently they arise. This survey report therefore doesn’t tell us much at all.
Finally, while I’ve already blogged about ethical problems with this survey report, I’d like to note one more ethical issue. Doing low quality research is often seen as unethical – among other problems, it wastes the time of participants and may make things harder for future researchers. Asking GPs – busy as they are – to participate in low quality research like this is an inappropriate use of their time.
* To be fair, the survey report does state that one demand – a “letter stating patient is unable to attend their tribunal or ATOS assessment” – is “a very common request”.