Prostate Cancer UK recently released their “Consensus statements on PSA testing in asymptomatic men in the UK”. The organisation appear to acknowledge that Conflicts of Interest (COIs) are an issue when they note that
This project was funded through charitable funds which were not provided by the pharmaceutical industry or any medical device or treatment company.
However, unfortunately Prostate Cancer UK don’t provide COI declarations for Steering Group members, who had a significant influence on the process through which the consensus statements were arrived at [PDF]. I have asked Prostate Cancer UK for a link to the relevant COI declarations, but they have not shared one.
In the case of this Steering Group, a few minutes of very non-exhaustive research shows that at least one member works at a private clinic that charges for PSA tests. Prostate Cancer UK are therefore failing to disclose information which may be relevant to the interpretation of their consensus statements. This is unfortunate: it is important to disclose any potential COIs, so that readers and users of research can be aware of them.
Health Secretary Jeremy Hunt has announced that
we intend to publish the indicative medicine costs to the NHS on the packs of all medicines costing more than £20, which will also be marked ‘funded by the UK taxpayer’. This will not just reduce waste by reminding people of the cost of medicine, but also improve patient care by boosting adherence to drug regimes. I will start the processes to make this happen this year, with an aim to implement it next year.
These plans risk causing real harm and may not work, so I asked the Department of Health to tell me: whether the plans will be trialled; the evidence supporting the plans; and details of any planned research. They helpfully sent me some of the evidence they considered and some information about planned research. This is, sadly, entirely inadequate to support rolling out the policy. To show this, I will go through the examples they offered. The Department first referred me to
A study by Fogarty, Sturrock, Premji and Prinsloo (2013) which showed that provision of feedback on the cost of a particular diagnostic test (C-reactive protein assay) to physicians can prevent unnecessary tests being ordered.
This is an interesting study, but it does not look at the impact of publishing medicine costs, and it measures the responses of clinicians rather than patients.
The Department then mentioned that
Work is currently underway at Nottingham City Hospital to test this concept of cost feedback to physicians further – in particular looking at the impact of adaptions to the electronic prescribing system in use at the hospital, to include details of cost, on physicians’ prescribing decisions. Initial results will be available in early 2016.
This sounds like fascinating work, but won’t tell us how patients respond to this information. Also, as not even initial results are available, we don’t yet know whether this intervention (in physicians) works.
The Department also told me that
this concept of feedback on costs will also be tested with patients, in relation to increasing medications adherence. This study tests the use of a commitment sticker, which aims to create a commitment between the patient and the pharmacist by asking the patient to sign the commitment to take their medication at the point that the medication is dispensed. A range of stickers will be tested, one of which includes a message about the cost to the NHS each year from wasted medication. The study is due to go into the field later in July 2015, with initial results available at the end of 2015.
These are all interesting things to test, but this doesn’t constitute a trial of Hunt’s proposal (which would actually be simpler to trial than the combination of a signed commitment statement with various other stickers). Also, while adherence is important, Hunt specifically claimed that this policy would reduce medicine waste. From the description above, this planned trial will not test this claim.
The last thing the Department referred me to was
The “Evaluation of the scale, causes and costs of medicines waste” report, published in November 2010 by the York Health Economics Consortium/The School of Pharmacy, University of London, estimated the gross cost of unused prescription medicines in primary and community care to be in the region of £300 million a year in England in 2009, with around half of that being avoidable.
However, this report, (p84) found that
Some sources have advocated putting the supply price on every NHS medicine pack in order to raise awareness of costs, and so perhaps to reduce wastage and/or increase consumption… Some interviewees said it would ‘make patients realise the cost of medicines’. But in rather more cases, service user respondents and others felt that pack pricing might cause some patients in need of effective treatment to become worried about its cost to the public purse. One GP said that the people who he most wanted to target probably would not care about the price of their medicines
This might be a reason to trial this policy of printing the supply price on NHS medication. However, it’s certainly not robust evidence for rolling out the policy. We don’t know what the harms and benefits of this policy might be, and it should be properly tested to see if it works before its introduction.
Appendix

I quote from an e-mail exchange with the Department of Health above. For the sake of transparency, I’ll reproduce this in full below (with e-mail addresses/names removed).
Dear Dr Mendel
Thank you for your e-mail of 2nd July. I am sorry for the delay in replying.
The Department has considered a range of evidence in relation to making health service users aware of the costs of their care leading to better use of health services. This includes:
• A study by Fogarty, Sturrock, Premji and Prinsloo (2013), which showed that provision of feedback on the cost of a particular diagnostic test (C-reactive protein assay) to physicians can prevent unnecessary tests being ordered.
• Work is currently underway at Nottingham City Hospital to test this concept of cost feedback to physicians further – in particular looking at the impact of adaptions to the electronic prescribing system in use at the hospital, to include details of cost, on physicians’ prescribing decisions. Initial results will be available in early 2016.
• In addition, this concept of feedback on costs will also be tested with patients, in relation to increasing medications adherence. This study tests the use of a commitment sticker, which aims to create a commitment between the patient and the pharmacist by asking the patient to sign the commitment to take their medication at the point that the medication is dispensed. A range of stickers will be tested, one of which includes a message about the cost to the NHS each year from wasted medication. The study is due to go into the field later in July 2015, with initial results available at the end of 2015.
The “Evaluation of the scale, causes and costs of medicines waste” report, published in November 2010 by the York Health Economics Consortium/The School of Pharmacy, University of London, estimated the gross cost of unused prescription medicines in primary and community care to be in the region of £300 million a year in England in 2009, with around half of that being avoidable.
I hope this is helpful. Please let me know if you have any other queries.
From: Jonathan Mendel [redacted] Sent: 02 July 2015 15:45 To: MPIG Support Subject: Plans to publish cost to NHS on medicine packs
Dear Sir/Madam,

Jeremy Hunt has announced that “we intend to publish the indicative medicine costs to the NHS on the packs of all medicines costing more than £20, which will also be marked ‘funded by the UK taxpayer’. This will not just reduce waste by reminding people of the cost of medicine, but also improve patient care by boosting adherence to drug regimes.” I’m e-mailing about these plans, as suggested by the Department’s Twitter account. I would be grateful for several pieces of information.

Firstly, given some lack of clarity in the media coverage, it would be helpful if you could confirm whether the plan is to roll out these plans nationwide or whether they will be trialled first. Secondly, I would be interested in seeing the Department’s evidence for the claim that this move will “reduce waste by reminding people of the cost of medicine [and] also improve patient care by boosting adherence to drug regimes”. If you hold unpublished research that is relevant to this claim, I would be grateful for a copy. If the research has already been published, a link/reference would be very helpful (and apologies for not having found this myself). Finally, if additional research is planned on this issue, I would be interested in seeing detail re what this is.

All the best, Dr Jonathan Mendel

PS: I am interested in discussing the interplay between research and policy, and think it is important to do so as openly as possible. With that in mind, please assume that all replies are potentially for publication.
Resilient GP’s survey report Inappropriate demands to GPs is a poor quality piece of research – bad enough that it can’t tell readers anything much. I’ve already blogged about the report’s ethical problems. In this post, I’ll raise concerns about the sampling used in this project, the survey itself, the reporting of this work, and the report’s analysis (or lack thereof). I’ll argue that getting GPs – busy as they are – to participate in low quality research like this is an inappropriate use of their time.
Resilient GP say that they
conducted a survey on a large, private online discussion group composed entirely of GPs…We received over 200 unique responses.
Unfortunately, the survey report doesn’t say much about the characteristics of the group or of those who responded – for example, if all the respondents were in England, their answers may not reflect the situation in Scotland. The report also doesn’t specify the response rate: if these 200 responses came from a group of 200 GPs, then they clearly reflect the group well; if they came from a group of 50,000, then the minority who chose to respond may be very different from the group as a whole (for example, they may be responding because they’re annoyed by particular demands). Without this information, it’s impossible to know to what extent issues with the sample may bias the report’s findings or limit how much one can generalise from it.
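To make the response-rate arithmetic concrete, here is a minimal sketch; both group sizes are hypothetical, since the report gives neither figure:

```python
def response_rate(responses: int, group_size: int) -> float:
    """Fraction of the surveyed group that actually responded."""
    return responses / group_size

# ~200 responses from a hypothetical group of 200 GPs: everyone answered,
# so the responses plainly represent the group.
print(f"{response_rate(200, 200):.0%}")     # 100%

# ~200 responses from a hypothetical group of 50,000 GPs: a tiny,
# self-selected minority, who may differ sharply from the group as a whole.
print(f"{response_rate(200, 50_000):.2%}")  # 0.40%
```

Without the denominator, readers cannot tell which of these two very different situations the report describes.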
Resilient GP’s report says almost nothing about the survey they used. They state that
We asked for examples that were considered by that GP to be an inappropriate use of their time and skills.
However, we don’t know, for example, whether participants received a large number of questions to elicit these responses or how the questions were phrased. Without this information, the readers of this report can’t really interpret the survey results.
The way the survey report presents the survey results makes it near-enough impossible to draw any robust conclusions from this work. The report presents a list of what were viewed as inappropriate demands. It states that
We excluded very similar responses or those we considered might have conceivably have been a presentation of underlying illness.
However, there is nothing beyond this (for example, no way of knowing how Resilient GP verified judgements as to what “might have conceivably have been a presentation of underlying illness” or how they decided what was/was not similar enough to exclude).
There is no way of knowing how recent or regular these demands are: for example, if a GP sees one patient who wants a fake sick note in a 40-year career, this would be unfortunate but not exactly shocking; if this happens every day, that would be a very noteworthy finding. Regular demands for pet medicines 20 years ago but never today would mean something very different from regular demands for pet medicines today.
It is also not clear whether Resilient GP have reviewed the available evidence on demands on healthcare professionals and healthcare systems. If they have, this isn’t apparent in this survey report.
Bluntly, there isn’t much analysis. The survey report reads like a long list of inappropriate demands reported by GPs, loosely divided into five categories. It is not clear how or why Resilient GP chose all of these five categories, although the survey report states that this was done
For ease of reference, and to help stimulate ideas for alternative solutions
There might be interesting information that can be drawn from the survey data Resilient GP collected. If there is, though, this analysis fails to do so.
One can conclude very little from Resilient GP’s survey report. Some GPs (about whom we know very little) were asked something (we don’t know what). They responded with a number of reports (we don’t know how many) about what they remember as inappropriate patient demands. We don’t know when these demands were made or how frequently they arise. This survey report therefore doesn’t tell us much at all.
Finally, while I’ve already blogged about ethical problems with this survey report, I’d like to note one more ethical issue. Doing low quality research is often seen as unethical – among other problems, it wastes the time of participants and may make things harder for future researchers. Asking GPs – busy as they are – to participate in low quality research like this is an inappropriate use of their time.
* To be fair, the survey report does state that one demand – a “letter stating patient is unable to attend their tribunal or ATOS assessment” – is “a very common request”.
Resilient GP posted a survey report yesterday about inappropriate patient demand. The ethics and methodology of what they did were questioned, and they reposted a (revised) version of the post today – justified for the purpose of “debate” and “educating patients not to use up appointments” inappropriately. I’m going to look at ethical problems with this survey report here, and at problems with the methodology/write-up in a subsequent post. I’ll argue that it’s not ethical to have posted this, and that Resilient GP’s arguments for posting it don’t stand up.
It’s important for doctors to maintain patient confidentiality, except where there are very good reasons not to (for example, the patient might die if they don’t). However, the original version of the Resilient GP survey report contained some very specific information that could identify patients. Some of this has now been removed – and it wouldn’t be ethical for me to repost this information – but there was no good justification for posting this information in the first place. Resilient GP has not publicly reflected on the removed information – so I don’t even know whether this was done for ethical reasons, let alone if any lessons have been learnt. Some of the survey report (e.g. point 5.22) is still specific enough to identify patients.
Resilient GP argued today that “[c]areful reading of the report” will show that patient confidentiality hasn’t been breached. However, it’s not classy to make this argument after removing (but not noting the removal of) potentially identifying information – readers now are unlikely to know that information has been removed and may therefore reach an overly positive conclusion about how patients have been anonymised.
The doctor-patient relationship is important: mellojonny suggests that, for some, “a long-term relationship with a stable adult who knows them is literally life-saving”. One reason for maintaining patient confidentiality is so patients feel confident sharing very personal things with their doctors. Will patients now worry that what they say to their GP will be posted online? For doctors to post quite specific anecdotes about patients in what seems a derogatory way is hardly likely to help build a good relationship.
Generally, we ask research participants to give informed consent to taking part in research projects. I’d be very surprised if the patients featured in this survey report consented to this (I asked Resilient GP to confirm, but they haven’t). There are some cases where one might proceed without informed consent, but I can’t see how this is justified here.
Resilient GP’s defence of the ethics of their survey report
Resilient GP offers a defence of the ethics of posting this material. However, this is very weak. They argue that
ethics of utilitarianism are equally important here and it is a duty of doctors to challenge inappropriate use of resources
A utilitarian justification of posting this, though, would depend on having a reasonable expectation that it will have positive consequences. Resilient GP have not presented any good evidence that it will. As Betabetic has shown, it’s not at all clear whether (well-resourced, large-scale) campaigns against over-consulting will have the desired effect. It seems rather unlikely that the best way to address inappropriate demand is to post anecdotes about inappropriate demand on a website that seems largely aimed at GPs. There is significant potential for harm, though – for example, in breaching patient confidentiality or damaging the doctor-patient relationship.
Resilient GP also states that the ambulance service has “raise[d] real life examples of inappropriate appointment use” and appears to feel that this shows that it’s not unethical to do so. This is a really lousy argument, though – it might, for example, just be that others have acted in ethically problematic ways or that the GP-patient relationship is different to the ambulance-patient relationship.
Resilient GP should pull this post ASAP, at least till it can be revised to properly address issues related to patient confidentiality. They should also reflect on why a post with these issues was put up in the first place.
More broadly, if Resilient GP feel utilitarian ethics are the best lens to view this through, they should give more serious consideration to the consequences of their survey report: I haven’t seen good reason to expect the positive to outweigh the negative. Also, as I’ll argue in my next post on this, I don’t think the research they’ve posted is all that good – and I’m not sure that starting a ‘debate’ centred around low quality research is especially useful.
Note: I have contacted Resilient GP to suggest that they pull the post ASAP, at least till they can resolve issues with (non)anonymisation. They haven’t yet responded.
Inside Health this week covered NHS staff morale – and there was lots of interesting discussion of the challenges that staff face. However, the programme presented evidence that staff satisfaction correlates with better care as showing that improving staff morale causes better care. I therefore think the programme overstated the evidence it presented: it didn’t address problems with the direction of causality. Staff might be happier because they work in organisations that look after patients better, for example.
Martin Powell of Birmingham University was interviewed about his interesting research on staff satisfaction and performance. Mark Porter (hosting the show) suggested that this was “hard evidence…that staff morale can have an impact on clinical outcomes”. Powell agreed, and went on to refer to how this was discussed in various policy documents. However, Powell et al.’s 2014 paper on this topic is clear that
conclusions about the direction of causality were less clear (except for absenteeism). This is probably due in part to the relatively blunt nature of the data used
The paper explicitly mentions concerns about reverse causality, where staff might be more satisfied because they are working for better-performing organisations – it might be the good performance that makes staff happy, rather than the other way round. While policies to support staff better and improve staff morale are very likely a positive thing, it’s important not to overstate the evidence in favour of these measures.
I feel a bit bad posting this – Inside Health generally has some of the best discussions of research in the mainstream media, and has had excellent discussion of questions around, for example, whether screening leads to reduced mortality. I’m certainly not convinced I’d do any better than Powell if I was trying to discuss complicated research in an interview. If I’ve missed out relevant research, I’m happy to be corrected of course – but I can’t find convincing evidence about the direction of causality here.
ComRes recently carried out a CARE-sponsored survey (PDF) on the donation of mitochondrial DNA to help couples avoid passing some inherited disorders to their children. The methodology used in a 2014 ComRes survey on this topic was criticised by Watermeyer and Rowe (PDF). However, the more recent survey is very flawed, repeats some previous errors, and in some ways is worse than ComRes’ previous effort.
I’ve criticised the limited support that the Samaritans provided to users of their Samaritans Radar app when they deactivated it. I’m pleased to say that (1 week after deactivating the app) they have now e-mailed the app’s users; however, the support provided to users is still inadequate. This is the e-mail they sent:
As you may already be aware, following the feedback and advice Samaritans has received since the launch of Radar, the app has been suspended. As a result we have deactivated all subscriber accounts and are deleting all associated data.
For a full statement on the suspension of the app, please see our website http://www.samaritans.org
Thank you for subscribing to the Samaritans Radar app and for your support to date. If you have any further questions please email firstname.lastname@example.org
There are a number of problems with this e-mail:
- There is no offer of support to users. The link in the e-mail takes users to a recent announcement about the app’s closure, not a support page. An e-mail like this might be OK if this was an app for sharing funny cat pics, but when suspending an app which aims to help prevent suicide the Samaritans should be pointing users towards support in case they’re worried or distressed.
- They don’t give any information on what to do with alerts received but now inaccessible.
- The e-mail is being sent 1 week after the app was suspended. Users might have been expecting the app to be working and sending alerts during this week.
- The Samaritans don’t solicit feedback from users (or even link their own feedback survey).
Seriously – 1 week to send a 78 word e-mail and it’s still not especially good. This is really disappointing.
Update: I had linked to the wrong Samaritans announcement; corrected this now.
The Samaritans have now suspended their Samaritans Radar app. They’re right to suspend it, but have done so in a way that risks adding to the harm done by the app. The Samaritans have – quite rightly – emphasised the value of Twitter as a support mechanism; they have also claimed that Samaritans Radar can contribute significantly to this. With this in mind, I’d have expected them to be careful not to do harm when suspending the app. Sadly, I don’t see evidence of this: there hasn’t been adequate notification of users, and I don’t see adequate support systems in place.
So far as I can tell, people who signed up to use the app haven’t been told that it’s no longer working. Some may notice the Twitter discussion of this, but many won’t. While I think the app was a bad idea overall there were, as quantumplations notes, some positive ways in which it could be used: for example, it might be used by someone “who knows they follow people on twitter who might have mental health issues, wants to keep an eye on those people, and [is] able to provide meaningful support to them.” Such users may be relying on the app to keep an eye on people, and therefore not checking Twitter feeds manually; this could lead to them missing worrying tweets.
If someone clicks on an alert that was sent prior to the app’s suspension, they are just taken to this page. This means that, while a user will have been told that there was a worrying tweet from someone they follow, they won’t be able to know who the tweet was from or what it said. This could be rather distressing. Given data protection concerns this is probably now unavoidable, but Samaritans should have provided better support to people in this situation.
Unfortunately, the first thing someone clicking on a Samaritans Radar alert will see is a fairly generic statement which seems to be aimed at media covering the suspension; if they scroll down the page (which many may not do) the next thing they will see is a link to a survey. Finally, at the bottom of the page, they will see contact details for the Samaritans. This isn’t a good response to what may be a very worrying situation for a user of the app – instead, the Samaritans should have put up a statement tailored to people in this situation, along with putting details of available support up-front. There should also be details of support for users outside the UK.
A final thing to note is that suspending the app at around 6pm on a Friday means that sources of support people would typically rely on may be unavailable or harder to access and that – if users of the app are worried about some of those they follow – there’s more chance people may be offline over the weekend. Not only have Samaritans suspended the app in a rather messy way, the timing of doing so may make the situation worse.
I’ve argued that the Samaritans did not adequately mitigate the risk of harm from Samaritans Radar. The same applies to their suspension of the app. Even if they only made the final decision to suspend the app at 5:45pm on Friday, they must have been aware that this was a possibility for some time – so should have prepared better measures to reduce the risk of harm to users of the app and to other people affected by its suspension.
Update 9/11/14: quantumplations is looking at some problems with the survey the Samaritans released to coincide with Samaritans Radar’s suspension. More effective engagement following the suspension might also have helped to mitigate current or future harms related to the app.
 They should have acted sooner – or, better, developed the app in such a way as to avoid these problems – and should have posted a more meaningful apology. There are also questions about what the Samaritans are doing with the data collected. I’ll pass over these issues for now, though.
 I can’t be sure that some users haven’t received notification of the suspension. I can’t find anyone who has, though – so, clearly, if there was a notification system in place it’s not working well.
 To be clear, I’d only view this type of monitoring as appropriate where all parties are happy for this to take place.
 It could have been avoided had things been thought through better pre-launch, but none of us have a time machine…
This is the second of two posts looking at Samaritans Radar from a research ethics point of view: thinking about the type of questions that would be asked if you were applying for ethical approval for the project. The first post is here.
I’ve found Hay’s work on research ethics helpful when introducing the topic. Hay suggests three principles of ethical behaviour: justice, beneficence/non-malevolence and respect (see p. 38). I’d argue that, when doing social research, an ethical approach is important for several reasons: it’s the right thing to do, it can strengthen a research project, and it protects the possibilities for future research.
whole focus of the app is designed towards the ‘user’, a follower who may wish to identify when people they follow on twitter are vulnerable. Originally the privacy section of their website only mentioned the privacy of the user of the app – the person doing the monitoring – with no mention of any privacy concerns for the individuals who are being followed… An app designed for an individual with mental health problems would be very unlikely to remove the agency of the individual in such a fashion.
Samaritans have pointed out that “Samaritans’ Radar…has been tested with several different user groups who have contributed to its creation, including young people with mental health problems”. I’m glad to hear that this has been done. However, a deeper engagement with groups who are particularly affected by the app – for example, people with mental health problems – might have allowed the Samaritans to show greater respect for monitored people. One of the great things about social media is that it can facilitate quite open engagement. For example, if the Samaritans had held an open Twitter chat about the project a few weeks before launch they might quickly have learnt that the lack of an opt-out for individuals was viewed as objectionable by many. This type of feedback could have let them start out with a better product.
Pain (PDF, p. 22) emphasises the “principles of collaboration and social justice…in impactful research” and asks “[s]hould we be striking a blow, or walking together?” In some ways, Samaritans Radar will have achieved high impact – it has had lots of media coverage, lots of discussion on Twitter, and has been monitoring and sending alerts about over 1.6m Twitter feeds. However, walking together with some of the communities most affected by the project might have allowed a more collaborative process of enhancing the support available through Twitter. This might have led to a different type of app and project, but the way these negotiations played out would itself have been interesting and might have generated something far more useful. A more participatory and respectful approach could have led to a stronger and more ethical project.
Hay also emphasises the principle of beneficence/non-malevolence – carrying out research in a way that benefits research participants, or at least does not do harm. When seeking ethical approval for a project this is rightly given lots of weight – harming participants is normally, quite rightly, viewed as bad. Samaritans Radar is aimed at bringing wide-ranging benefits – for example, helping to support people when they are in distress or prevent suicides. However, it also carries significant risks of harm, and I’m not convinced these have been adequately mitigated. I’ll first go through risks that I think should have been anticipated and dealt with better pre-launch, and then look at some significant post-launch harms.
Firstly, the app sends alerts when tweets appear to show cause for concern. It is clearly important to support the users getting these alerts as well as possible. However, when responding to an alert, users are initially taken to a page that asks them to click a box to say whether or not they are worried about the tweet. As far as I can tell, advice about what to do if they are worried is not available until they have given this feedback. This is inappropriate: given the risks of harm here, advice and support should be offered up-front, while giving feedback to improve the app should be entirely optional.
Secondly, while the Samaritans do offer excellent support by telephone, e-mail, post and face-to-face, they should have planned to offer better social media and international support in order to mitigate the risk of harm from a social media project on this scale. The level of support a researcher is expected to provide alongside a project tends to depend on its size and on the risks involved. In terms of size, this project is huge – over 1.6m Twitter accounts monitored. It is also very high-risk: it’s trying to flag up tweets from suicidal people. With this in mind, I’d argue that the @Samaritans Twitter account should have been set up to be monitored – and reply to distressed tweets – 24/7 (even if the replies largely just point people towards other sources of support). I’m aware that people in the UK or Republic of Ireland (ROI) can phone Samaritans, but people don’t always look up the correct contact details – especially when upset – so I think a 24/7 social media presence would be reasonable to mitigate the risks from a project like this.
The Samaritans’ presence is largely in the UK and ROI. However, this project will clearly go far beyond these borders. With this in mind, the Samaritans should consider what support they can offer to mitigate the risk of harm to people in other parts of the world. Currently, the information they offer for people outside of the UK and Ireland is limited and – while it might be OK for a UK-focussed charity – is nowhere near adequate for an organisation running an international project on this scale.
There is also the risk that the app will be used to find when people are vulnerable in order to harm them. As far as I can tell, Samaritans don’t have any measures in place to block abusers (and if it’s not clear how to report abuse of your social media product, your anti-abuse strategy is probably broken). When considering the ethics of the project, this issue should have been addressed.
Of course, things don’t always go to plan. Since the launch of Samaritans Radar, it has become clear that the app is causing considerable distress to many Twitter users; I presume this wasn’t intended or predicted. Some people are closing their accounts, making them private or censoring what they say as a result of the app. The Samaritans’ online responses haven’t been adequate – their official Twitter account is now back to tweeting about raffles, and their response to the concerns raised doesn’t adequately address problems or include an apology for things like the lack of an opt-out which have now been corrected. The Samaritans should act to mitigate these harms, but are not doing so effectively. With this in mind, I would argue that the harm being done is sufficient that the app should be stopped from running in its current form – people are being harmed by the thought that they’re being monitored, and alerts may be being sent against their wishes, so (given that the Samaritans have failed to find any adequate resolution to these problems) the best way to deal with this is simply to stop the monitoring.
A last potential harm to note is that interventions from Samaritans Radar might be worse than useless. Lots of interventions that seem to work well sadly don’t, when they are tested. In the case of this app it is, for example, plausible that false positives are harmful. Prof Scourfield has acknowledged that
There is not yet what I would consider evidence of the app’s effectiveness, because it is a brand new innovation which has not yet been evaluated using a control group.
Although the app has been launched on a large scale, there is no way to be confident that it is not actively harming users and monitored people, even where all parties are happy about the monitoring.
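To illustrate why false positives (and false negatives) are so plausible here: neither the Radar lexicon nor its matching logic has been published, but a minimal sketch of the kind of phrase-matching approach such a tool might plausibly use shows the problem. Everything below – the phrase list, the function name, the example tweets – is my own invention for illustration, not the app’s actual logic:

```python
# Illustrative only: a naive keyword-based flagger. This is NOT the
# actual Samaritans Radar lexicon or algorithm, which are unpublished.
CONCERN_PHRASES = [
    "want to end it",
    "can't go on",
    "no reason to live",
]

def flags_tweet(text: str) -> bool:
    """Return True if any concern phrase appears in the tweet text."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in CONCERN_PHRASES)

# A genuinely worrying tweet is flagged...
print(flags_tweet("Some days I feel like I can't go on"))
# ...but so is an innocuous figure of speech (a false positive)...
print(flags_tweet("This box set is so good, I can't go on until the next episode"))
# ...while a worrying tweet phrased differently is missed (a false negative).
print(flags_tweet("Everything feels hopeless tonight"))
```

Even a far more sophisticated classifier would face the same underlying difficulty: language about distress is idiomatic and context-dependent, so without evaluation against a control group there is no way to know what the app’s error rates are, or what harm those errors cause.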
Hay also discusses the principle of justice – considering how benefits and burdens of research will be distributed. A major problem here is that a lot of the burden from Samaritans Radar appears to be falling on people with mental health problems. I would have major worries about the justice of a project that makes life harder for a group that is already too-often marginalised and stigmatised. A lot of the more obvious benefits from the app seem to be accruing to the Samaritans: for example, the Samaritans seem pleased by metrics associated with the project, and they have received positive industry coverage for the campaign. While it is also likely that some people will benefit from alerts, the distribution of benefits, burdens and harms associated with the app and marketing so far raises real questions of justice.
I do, then, have serious concerns about how well these ethical issues were considered when developing Samaritans Radar. A more participatory approach might have allowed the development of a better, more ethical and more just project, but there’s no way to turn the clock back. I think the most appropriate action now – given the harm being done, and in order to show respect to those who find the app is stopping them from using Twitter as they want – is to stop the app. This is why I’ve signed the petition to stop the app, and why I think you should sign it too.
 I’m not altogether comfortable with the emphasis on autonomy here. I’ve previously drawn on Levinas’ and Derrida’s decentralisation of subjectivity through emphasis on an Other, and other approaches to ethics have been influenced, for example, by Carol Gilligan’s relational ethics of care. The participatory approach discussed in this post is often linked with a relational view of subjectivity. However, I still do think it is important for research to respect the agency of participants and – with this in mind – the idea of autonomy is still useful. Also, I’m not sure a long digression on ideas of subjectivity would enhance the box office gold that is a 1,900 word discussion of research ethics.
 Providing additional support online and internationally may well be expensive, and the Samaritans may feel this draws them away from their core goals. However, this is why these ethical issues should be discussed prior to the launch of a project – decisions on what to do can then be informed by the support that can (or cannot) be provided.
 While I had hoped that the Samaritans might find a way to stop the harm that’s currently being done while keeping the app running, this now looks unlikely. Hopefully, something can be salvaged from this in due course; for now, though, I think closing the app is the best option. This is why I have now – regretfully – signed the petition calling to shut down Samaritans Radar.
 I think they’re wrong to be pleased – getting lots of people to hate your project enough to tweet about it isn’t normally a great achievement – but they do seem pleased…
 I would also note that if there are any hopes to use the (probably very interesting) data from the app in future, questions along the lines of those raised above may come up: there are real ethical problems in using data collected in this way.
The Samaritans have emphasised the involvement of academics when trying to justify Samaritans Radar. The work that academics do is bound by a strict ethical framework, and I’m therefore going to look at the questions that might be raised about Samaritans Radar if it were proposed as an academic social research project. This will come in two posts (ethics forms can be long, and I don’t imagine many people want to read 2,000+ words on the topic in one go). I will argue that Samaritans Radar would – or, at least, should – face serious ethical questions were it proposed as an academic project. While what Samaritans Radar is trying to do is very interesting, the way the project has been run so far may severely limit what academics can do with the data it has generated.
I’m not going to rehash discussions of data protection concerns around Samaritans Radar (others have covered this better than I could). However, a first thing to note is that an academic project is expected to have reasonable data protection measures in place: breaking data protection law would almost always be seen as unethical, both because the current law probably reflects important societal norms and because breaking the law might leave individuals and institutions involved in the project facing risks such as prosecution.
Ethics forms always ask about consent, and so they should. Samaritans Radar involves observing lots of activity in an online public space, and therefore raises interesting questions around consent. In some cases, most would view observation of public spaces without opt-in consent as reasonable – for example, if I counted the cars passing my office window and heading towards the city centre during rush hour or analysed above-the-line Comment is Free posts, this would probably be seen as fairly unobjectionable. Other observations of public space get into more sensitive or personal-seeming areas, though, and may be seen as unacceptably intrusive – for example, it would probably be unacceptable for me to observe people entering and leaving local religious buildings without at least getting the congregations to agree to this.
Even where observation is seen as acceptable and opt-in consent is viewed as unnecessary, one would normally be expected to offer opt-out consent if anyone does object: for example, if someone saw me looking out of my office window to count cars and called me up to complain, it would probably be appropriate for me to offer to stop counting their car. Observing people who have actively said they don’t want to be observed is quite different from just assuming people are happy to be observed if they don’t object. I would view observation in these circumstances as unacceptable unless there is a very strong reason to carry out the project.
Initially, Samaritans Radar didn’t offer any opt-out to individuals. It appears this was technically possible for them to do (organisational accounts could be ‘white listed’) but they chose not to. A number of Twitter users (including me) were unhappy about this. I would argue that it was unethical to monitor individuals who had strongly objected to monitoring, but not been offered any way to opt out. I would therefore have real ethical concerns about using these data at all – doing so risks using data that subjects did not want collected, had no reasonable way of preventing from being collected, and would likely object to having processed/analysed. I can’t see how this could be acceptable.
Samaritans Radar does now offer an opt-out, and the acceptability of this type of monitoring is a more complicated question – can one assume acquiescence to observation if people don’t object? In some online or offline spaces, I think that would be reasonable – again, if I were counting cars passing my office window or analysing above-the-line posts on Comment Is Free, I think this would be OK. On the other hand, in some cases monitoring without opt-in consent would seem unreasonably intrusive – for example, I don’t think it would be acceptable for me to track below-the-line Comment Is Free posts in order to assess the mental state of commenters without getting informed consent. Samaritans Radar is actively collecting sensitive personal data.
One issue with taking the fact that someone doesn’t opt out as implying consent is whether those who are being monitored know that this is the case and therefore have the option to opt out (clearly, many of those monitored by Samaritans Radar don’t). If I were considering a proposal for running a project like this on an opt-out basis, I would expect monitoring to be made much more overt: for example, the app could tweet daily from those who have it installed to make clear that they’re using it to monitor those they follow. This would still be far from perfect – many would still be unaware they’re being monitored, or would want neither to be monitored nor to have their names on an opt-out list – but it would be better than the current situation.
I’m aware I’m looking at this with the benefit of hindsight, and I’m not sure what my answer would have been if – prior to launch – I’d been asked about running Samaritans Radar on an opt-out basis. Quantumplations asks “Was an opt in app ever considered until the initial backlash on Twitter? If so, why was it rejected?” From an ethical point of view, I think there would have needed to be a compelling reason to reject an opt-in model where informed consent could have been gained from all monitored users. I now think that Samaritans Radar should be opt-in only. The app is currently being used to monitor spaces where it is clearly unwelcome and to monitor people who – while not wanting to be on an opt-out list – don’t want to be monitored. I can’t see how this can be ethical.
For these reasons, I also don’t think using data resulting from current Samaritans Radar monitoring would be ethical. Once again, academics using these data would risk analysing data from people who have vocally refused consent to be monitored and who would likely object to their data being analysed in this way. They would also be analysing data from a lot more people who are being monitored but – in part because the app doesn’t take all reasonable measures to inform monitored people – don’t know about this and don’t have the option of opting out.
One of the sad things about how Samaritans Radar has been run is that this may severely limit what academics can do with data coming from the project. I’m doubtful that it would be ethical to use the data generated so far at all. This is a real pity – this is a fascinating research topic, and Samaritans Radar could have made a major contribution to driving research in the area forwards. Being bound by a tighter ethical framework might also have strengthened the project, and helped it avoid some of the problems it has run into.
This is the first post of two on ethical issues around Samaritans Radar. The second one is now (3/11) available here.
Update 3/11: Prof Jonathan Scourfield (who took part in the app launch) has blogged on the topic. He states that
The idea for the app came from Samaritans and digital agency Jam…When the development was already far advanced, I offered to contribute a lexicon of possible suicidal language, derived from our ongoing research on social media…we are not collecting any research data via Samaritans Radar
 To be up-front about some of my own biases, I should say that I have previously observed online interactions as part of my research and hope to do so in future. I have also asked for data from social media companies for reanalysis in the past, though this never progressed to the point where I’d need to complete an ethics form.
 I appreciate that charities aren’t bound by the same norms as academics – and I don’t think they should be – but some of these ethical questions will be relevant to the Samaritans, and others will be relevant to academics who have worked/are working on the Samaritans Radar project or who are thinking about using data resulting from the project.
 If anything, current law may not be strong enough to adequately reflect current societal concerns about privacy and data analytics. However, failing to meet the standards laid out in current law is likely to fall short of societal expectations.
 I don’t view expecting someone to leave Twitter or make their Twitter account private in order to stop monitoring by Samaritans Radar as reasonable. This would be like me telling a driver who objected to me counting their car that they should cycle instead in order to avoid being observed.
 For the record, I object to any analysis being carried out on data on me collected by Samaritans Radar.
 The way the app currently works seems over-cautious about the privacy of those who are in a position to give informed consent to run it – those installing the app – but far too casual about the privacy of those who are monitored.