Samaritans Radar: questions from a research ethics point of view (post 2 of 2)

This is the second of two posts looking at Samaritans Radar from a research ethics point of view: thinking about the type of questions that would be asked if you were applying for ethical approval for the project. The first post is here.

I’ve found Hay’s work on research ethics helpful when introducing the topic. Hay suggests three principles of ethical behaviour: justice, beneficence/non-maleficence and respect (see p. 38). I’d argue that, when doing social research, an ethical approach is important for several reasons: it’s the right thing to do, it can strengthen a research project, and it protects the possibilities for future research.

Hay describes respect in terms of how “individuals should be regarded as autonomous agents”.[1] Quantumplations points out that, with Samaritans Radar, the

whole focus of the app is designed towards the ‘user’, a follower who may wish to identify when people they follow on twitter are vulnerable. Originally the privacy section of their website only mentioned the privacy of the user of the app – the person doing the monitoring – with no mention of any privacy concerns for the individuals who are being followed… An app designed for an individual with mental health problems would be very unlikely to remove the agency of the individual in such a fashion.

Samaritans have pointed out that “Samaritans’ Radar…has been tested with several different user groups who have contributed to its creation, including young people with mental health problems”. I’m glad to hear that this has been done. However, a deeper engagement with groups who are particularly affected by the app – for example, people with mental health problems – might have allowed the Samaritans to show greater respect for monitored people. One of the great things about social media is that it can facilitate quite open engagement. For example, if the Samaritans had held an open Twitter chat about the project a few weeks before launch they might quickly have learnt that the lack of an opt-out for individuals was viewed as objectionable by many. This type of feedback could have let them start out with a better product.

Pain (PDF, p. 22) emphasises the “principles of collaboration and social justice…in impactful research” and asks “[s]hould we be striking a blow, or walking together?” In some ways, Samaritans Radar will have achieved high impact – it has had lots of media coverage, lots of discussion on Twitter, and has been monitoring and sending alerts about over 1.6m Twitter feeds. However, walking together with some of the communities most affected by the project might have allowed a more collaborative process of enhancing the support available through Twitter. This might have led to a different type of app and project, but the way these negotiations played out would itself have been interesting and might have generated something far more useful. A more participatory and respectful approach could have led to a stronger and more ethical project.

Hay also emphasises the principle of beneficence/non-maleficence – carrying out research in a way that benefits research participants, or at least does not do harm. When seeking ethical approval for a project this is given a lot of weight – harming participants is, quite rightly, viewed as bad. Samaritans Radar is aimed at bringing wide-ranging benefits – for example, helping to support people when they are in distress or to prevent suicides. However, it also carries significant risks of harm, and I’m not convinced these risks have been adequately mitigated. I’ll first go through risks that I think should have been anticipated and dealt with better pre-launch, and then look at some significant post-launch harms.

Firstly, the app sends alerts when tweets appear to show cause for concern. It is clearly important to support the users getting these alerts as well as possible. However, when responding to an alert, users are initially taken to a page that asks them to click a box to say whether or not they are worried about the tweet. As far as I can tell, advice about what to do if they are worried is not available until they have given this feedback. This is inappropriate: given the risks of harm here, advice and support should be offered up-front, while giving feedback to improve the app should be entirely optional.

Secondly, while the Samaritans do offer excellent support by telephone, e-mail, post and face-to-face, they should have planned to offer better social media and international support in order to mitigate the risk of harm from a social media project on this scale. The level of support a researcher is expected to provide alongside a project tends to depend on its size and on the risks involved. In terms of size, this project is huge – over 1.6m Twitter accounts monitored. It is also very high-risk: it’s trying to flag up tweets from suicidal people. With this in mind, I’d argue that the @Samaritans Twitter account should have been set up to be monitored – and reply to distressed tweets – 24/7 (even if the replies largely just point people towards other sources of support). I’m aware that people in the UK or Republic of Ireland (ROI) can phone Samaritans, but people don’t always look up the correct contact details – especially when upset – so I think a 24/7 social media presence would be reasonable to mitigate the risks from a project like this.

The Samaritans’ presence is largely in the UK and ROI. However, this project will clearly go far beyond these borders. With this in mind, the Samaritans should consider what support they can offer to mitigate the risk of harm to people in other parts of the world. Currently, the information they offer for people outside of the UK and Ireland is limited and – while it might be OK for a UK-focussed charity – is nowhere near adequate for an organisation running an international project on this scale.[2]

There is also the risk that the app will be used to find when people are vulnerable in order to harm them. As far as I can tell, Samaritans don’t have any measures in place to block abusers (and if it’s not clear how to report abuse of your social media product, your anti-abuse strategy is probably broken). When considering the ethics of the project, this issue should have been addressed.

Of course, things don’t always go to plan. Since the launch of Samaritans Radar, it has become clear that the app is causing considerable distress to many Twitter users; I presume this wasn’t intended or predicted. Some people are closing their accounts, making them private or censoring what they say as a result of the app. The Samaritans’ online responses haven’t been adequate – their official Twitter account is now back to tweeting about raffles, and their response to the concerns raised doesn’t adequately address problems or include an apology for things like the lack of an opt-out which have now been corrected. The Samaritans should act to mitigate these harms, but are not doing so effectively. With this in mind, I would argue that the harm being done is sufficient that the app should be stopped from running in its current form: people are being harmed by the thought that they’re being monitored, and alerts may be being sent against their wishes. Given that the Samaritans have failed to find any adequate resolution to these problems, the best way to deal with this is simply to stop the monitoring.[3]

A last potential harm to note is that interventions from Samaritans Radar might be worse than useless. Lots of interventions that seem to work well sadly don’t, when they are tested. In the case of this app it is, for example, plausible that false positives are harmful. Prof Scourfield has acknowledged that

There is not yet what I would consider evidence of the app’s effectiveness, because it is a brand new innovation which has not yet been evaluated using a control group.

Although the app has been launched on a large scale, there is no way to be confident that it is not actively harming users and monitored people, even where all parties are happy about the monitoring.

Hay also discusses the principle of justice – considering how the benefits and burdens of research will be distributed. A major problem here is that a lot of the burden from Samaritans Radar appears to be falling on people with mental health problems. I would have major worries about the justice of a project that makes life harder for a group that is already too often marginalised and stigmatised. A lot of the more obvious benefits from the app seem to be accruing to the Samaritans: for example, the Samaritans seem pleased by metrics associated with the project[4], and they have received positive industry coverage for the campaign. While it is also likely that some people will benefit from alerts, the distribution of benefits, burdens and harms associated with the app and its marketing so far raises real questions of justice.

I do, then, have serious concerns about how well these ethical issues were considered when developing Samaritans Radar.[5] A more participatory approach might have allowed the development of a better, more ethical and more just project, but there’s no way to turn the clock back. I think the most appropriate action now – given the harm being done, and in order to show respect to those who find the app is stopping them from using Twitter as they want – is to stop the app. This is why I’ve signed the petition to stop the app, and why I think you should sign it too.

Footnotes

[1] I’m not altogether comfortable with the emphasis on autonomy here. I’ve previously drawn on Levinas’ and Derrida’s decentralisation of subjectivity through emphasis on an Other, and other approaches to ethics have been influenced, for example, by Carol Gilligan’s relational ethics of care. The participatory approach discussed in this post is often linked with a relational view of subjectivity. However, I still do think it is important for research to respect the agency of participants and – with this in mind – the idea of autonomy is still useful. Also, I’m not sure a long digression on ideas of subjectivity would enhance the box office gold that is a 1,900 word discussion of research ethics.

[2] Providing additional support online and internationally may well be expensive, and the Samaritans may feel this draws them away from their core goals. However, this is why these ethical issues should be discussed prior to the launch of a project – decisions on what to do can then be informed by the support that can (or cannot) be provided.

[3] While I had hoped that the Samaritans might find a way to stop the harm that’s currently being done while keeping the app running, this now looks unlikely. Hopefully, something can be salvaged from this in due course; for now, though, I think closing the app is the best option. This is why I have now – regretfully – signed the petition calling to shut down Samaritans Radar.

[4] I think they’re wrong to be pleased – getting lots of people to hate your project enough to tweet about it isn’t normally a great achievement – but they do seem pleased…

[5] I would also note that if there are any hopes to use the (probably very interesting) data from the app in future, questions along the lines of those raised above may come up: there are real ethical problems in using data collected in this way.
