Archive | November 2014

Suspending Samaritans Radar: inadequate mitigation of risks continues

I’ve criticised the limited support that the Samaritans provided to users of their Samaritans Radar app when they deactivated it. I’m pleased to say that (1 week after deactivating the app) they have now e-mailed the app’s users; however, the support provided to users is still inadequate. This is the e-mail they sent:

As you may already be aware, following the feedback and advice Samaritans has received since the launch of Radar, the app has been suspended. As a result we have deactivated all subscriber accounts and are deleting all associated data.

For a full statement on the suspension of the app, please see our website http://www.samaritans.org

Thank you for subscribing to the Samaritans Radar app and for your support to date. If you have any further questions please email radar@samaritans.org

There are a number of problems with this e-mail:

  • There is no offer of support to users. The link in the e-mail takes users to a recent announcement about the app’s closure, not a support page. An e-mail like this might be OK if this were an app for sharing funny cat pics, but when suspending an app that aims to help prevent suicide the Samaritans should be pointing users towards support in case they’re worried or distressed.
  • They don’t give any information on what to do about alerts that were received before the suspension but are now inaccessible.
  • The e-mail was sent 1 week after the app was suspended. Users might have been expecting the app to be working and sending alerts during this week.
  • The Samaritans don’t solicit feedback from users (or even link to their own feedback survey).

Seriously – 1 week to send a 78-word e-mail, and it’s still not especially good. This is really disappointing.

Update: I had linked to the wrong Samaritans announcement; corrected this now.

Suspending Samaritans Radar: inadequate mitigation of risks from suspension

The Samaritans have now suspended their Samaritans Radar app. They’re right to suspend it, but have done so in a way that risks adding to the harm done by the app.[1] The Samaritans have – quite rightly – emphasised the value of Twitter as a support mechanism; they have also claimed that Samaritans Radar can contribute significantly to this. With this in mind, I’d have expected them to be careful not to do harm when suspending the app. Sadly, I don’t see evidence of this: there hasn’t been adequate notification of users, and I don’t see adequate support systems in place.

So far as I can tell, people who signed up to use the app haven’t been told that it’s no longer working.[2] Some may notice the Twitter discussion of this, but many won’t. While I think the app was a bad idea overall there were, as quantumplations notes, some positive ways in which it could be used: for example, by someone “who knows they follow people on twitter who might have mental health issues, wants to keep an eye on those people, and [is] able to provide meaningful support to them.”[3] Such users may be relying on the app to keep an eye on people, and therefore not checking Twitter feeds manually; this could lead to them missing worrying tweets.[4]

If someone clicks on an alert that was sent prior to the app’s suspension, they are just taken to this page. This means that, while a user will have been told that there was a worrying tweet from someone they follow, they won’t be able to know who the tweet was from or what it said. This could be rather distressing. Given data protection concerns this is probably now unavoidable[5], but Samaritans should have provided better support to people in this situation.

Unfortunately, the first thing someone clicking on a Samaritans Radar alert will see is a fairly generic statement which seems to be aimed at media covering the suspension; if they scroll down the page (which many may not do) the next thing they will see is a link to a survey. Finally, at the bottom of the page, they will see contact details for the Samaritans. This isn’t a good response to what may be a very worrying situation for a user of the app – instead, the Samaritans should have put up a statement tailored to people in this situation, along with putting details of available support up-front. There should also be details of support for users outside the UK.

A final thing to note is that suspending the app at around 6pm on a Friday means that sources of support people would typically rely on may be unavailable or harder to access, and that – if users of the app are worried about some of those they follow – there’s more chance people will be offline over the weekend. Not only have the Samaritans suspended the app in a rather messy way, but the timing of doing so may make the situation worse.

I’ve argued that the Samaritans did not adequately mitigate the risk of harm from Samaritans Radar. The same applies to their suspension of the app. Even if they only made the final decision to suspend the app at 5:45pm on Friday, they must have been aware that this was a possibility for some time – so should have prepared better measures to reduce the risk of harm to users of the app and to other people affected by its suspension.

Update 9/11/14: quantumplations is looking at some problems with the survey the Samaritans released to coincide with Samaritans Radar’s suspension. More effective engagement following the suspension might also have helped to mitigate current or future harms related to the app.

[1] They should have acted sooner – or, better, developed the app in such a way as to avoid these problems – and should have posted a more meaningful apology. There are also questions about what the Samaritans are doing with the data collected. I’ll pass over these issues for now, though.

[2] I can’t be sure that some users haven’t received notification of the suspension. I can’t find anyone who has, though – so, clearly, if there was a notification system in place it’s not working well.

[3] To be clear, I’d only view this type of monitoring as appropriate where all parties are happy for this to take place.

[4] There are concerns about false negatives produced while the app was running. However, a suspended app will produce no true positives at all.

[5] It could have been avoided had things been thought through better pre-launch, but none of us have a time machine…

Samaritans Radar: questions from a research ethics point of view (post 2 of 2)

This is the second of two posts looking at Samaritans Radar from a research ethics point of view: thinking about the type of questions that would be asked if you were applying for ethical approval for the project. The first post is here.

I’ve found Hay’s work on research ethics helpful when introducing the topic. Hay suggests three principles of ethical behaviour: justice, beneficence/non-maleficence and respect (see p. 38). I’d argue that, when doing social research, an ethical approach is important for several reasons: it’s the right thing to do, it can strengthen a research project, and it protects the possibilities for future research.

Hay describes respect in terms of how “individuals should be regarded as autonomous agents”.[1] Quantumplations points out that, with Samaritans Radar, the

whole focus of the app is designed towards the ‘user’, a follower who may wish to identify when people they follow on twitter are vulnerable. Originally the privacy section of their website only mentioned the privacy of the user of the app – the person doing the monitoring – with no mention of any privacy concerns for the individuals who are being followed… An app designed for an individual with mental health problems would be very unlikely to remove the agency of the individual in such a fashion.

Samaritans have pointed out that “Samaritans’ Radar…has been tested with several different user groups who have contributed to its creation, including young people with mental health problems”. I’m glad to hear that this has been done. However, a deeper engagement with groups who are particularly affected by the app – for example, people with mental health problems – might have allowed the Samaritans to show greater respect for monitored people. One of the great things about social media is that it can facilitate quite open engagement. For example, if the Samaritans had held an open Twitter chat about the project a few weeks before launch they might quickly have learnt that the lack of an opt-out for individuals was viewed as objectionable by many. This type of feedback could have let them start out with a better product.

Pain (PDF, p. 22) emphasises the “principles of collaboration and social justice…in impactful research” and asks “[s]hould we be striking a blow, or walking together?” In some ways, Samaritans Radar will have achieved high impact – it has had lots of media coverage, lots of discussion on Twitter, and has been monitoring and sending alerts about over 1.6m Twitter feeds. However, walking together with some of the communities most affected by the project might have allowed a more collaborative process of enhancing the support available through Twitter. This might have led to a different type of app and project, but the way these negotiations played out would itself have been interesting and might have generated something far more useful. A more participatory and respectful approach could have led to a stronger and more ethical project.

Hay also emphasises the principle of beneficence/non-maleficence – carrying out research in a way that benefits research participants, or at least does not do harm. When seeking ethical approval for a project this is given lots of weight – harming participants is, quite rightly, normally viewed as unacceptable. Samaritans Radar is aimed at bringing wide-ranging benefits – for example, helping to support people when they are in distress or to prevent suicides. However, it also carries significant risks of harm, and I’m not convinced these have been adequately mitigated. I’ll first go through risks that I think should have been anticipated and dealt with better pre-launch, and then look at some significant post-launch harms.

Firstly, the app sends alerts when tweets appear to show cause for concern. It is clearly important to support the users getting these alerts as well as possible. However, when responding to an alert, users are initially taken to a page that asks them to click a box to say whether or not they are worried about the tweet. As far as I can tell, advice about what to do if they are worried is not available until they have given this feedback. This is inappropriate: given the risks of harm here, advice and support should be offered up-front, while giving feedback to improve the app should be entirely optional.

Secondly, while the Samaritans do offer excellent support by telephone, e-mail, post and face-to-face, they should have planned to offer better social media and international support in order to mitigate the risk of harm from a social media project on this scale. The level of support a researcher is expected to provide alongside a project tends to depend on its size and on the risks involved. In terms of size, this project is huge – over 1.6m Twitter accounts monitored. It is also very high-risk: it’s trying to flag up tweets from suicidal people. With this in mind, I’d argue that the @Samaritans Twitter account should have been set up to be monitored – and reply to distressed tweets – 24/7 (even if the replies largely just point people towards other sources of support). I’m aware that people in the UK or Republic of Ireland (ROI) can phone Samaritans, but people don’t always look up the correct contact details – especially when upset – so I think a 24/7 social media presence would be reasonable to mitigate the risks from a project like this.
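To make this suggestion concrete: much of the out-of-hours signposting could be automated. The sketch below is purely my own illustration – it assumes tweepy and the Twitter v1.1 API, and simply replies to every @-mention with a pointer to other sources of support; it is not anything the Samaritans run, and nothing sensitive should be left to a bot alone.

```python
# Illustrative only: an out-of-hours bot that replies to @-mentions with a
# pointer to other sources of support. Assumes tweepy and Twitter API v1.1
# credentials; this is my sketch, not anything the Samaritans actually run.
import time

import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth)

SIGNPOST = ("Thanks for getting in touch. Ways to talk to Samaritans, "
            "any time: http://www.samaritans.org")

last_seen_id = None
while True:
    # Fetch mentions newer than the last one replied to, oldest first.
    for mention in reversed(api.mentions_timeline(since_id=last_seen_id)):
        api.update_status(
            "@{0} {1}".format(mention.user.screen_name, SIGNPOST),
            in_reply_to_status_id=mention.id,
        )
        last_seen_id = mention.id
    time.sleep(60)  # poll roughly once a minute to respect rate limits
```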

The Samaritans’ presence is largely in the UK and ROI. However, this project clearly goes far beyond these borders. With this in mind, the Samaritans should consider what support they can offer to mitigate the risk of harm to people in other parts of the world. Currently, the information they offer for people outside of the UK and Ireland is limited and – while it might be OK for a UK-focussed charity – is nowhere near adequate for an organisation running an international project on this scale.[2]

There is also the risk that the app will be used to identify when people are vulnerable in order to harm them. As far as I can tell, Samaritans don’t have any measures in place to block abusers (and if it’s not clear how to report abuse of your social media product, your anti-abuse strategy is probably broken). When considering the ethics of the project, this issue should have been addressed.

Of course, things don’t always go to plan. Since the launch of Samaritans Radar, it has become clear that the app is causing considerable distress to many Twitter users; I presume this wasn’t intended or predicted. Some people are closing their accounts, making them private or censoring what they say as a result of the app. The Samaritans’ online responses haven’t been adequate – their official Twitter account is now back to tweeting about raffles, and their response to the concerns raised doesn’t adequately address problems or include an apology for things like the lack of an opt-out which have now been corrected. The Samaritans should act to mitigate these harms, but are not doing so effectively. With this in mind, I would argue that the harm being done is sufficient that the app should be stopped from running in its current form – people are being harmed by the thought that they’re being monitored, and alerts may be being sent against their wishes; so, given that the Samaritans have failed to find any adequate resolution to these problems, the best way to deal with this is simply to stop the monitoring.[3]

A last potential harm to note is that interventions from Samaritans Radar might be worse than useless. Lots of interventions that seem to work well sadly turn out not to when they are tested. In the case of this app it is, for example, plausible that false positives are harmful. Prof Scourfield has acknowledged that

There is not yet what I would consider evidence of the app’s effectiveness, because it is a brand new innovation which has not yet been evaluated using a control group.

Although the app has been launched on a large scale, there is no way to be confident that it is not actively harming users and monitored people, even where all parties are happy about the monitoring.
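To illustrate why this matters at the app’s scale, here is a back-of-the-envelope calculation. Every number in it is made up – the app’s real sensitivity and specificity, and the true prevalence of concerning tweets, are unknown:

```python
# Back-of-the-envelope base-rate arithmetic. All three numbers below are
# assumptions for illustration; the app's real accuracy is unknown.
prevalence = 0.001   # assume 1 in 1,000 tweets genuinely shows cause for concern
sensitivity = 0.90   # assume 90% of concerning tweets trigger an alert
specificity = 0.95   # assume 95% of unconcerning tweets are correctly ignored

alerts_genuine = sensitivity * prevalence
alerts_false = (1 - specificity) * (1 - prevalence)

# What fraction of the alerts actually sent flag a genuinely concerning tweet?
precision = alerts_genuine / (alerts_genuine + alerts_false)
print("Share of alerts that are genuine: {0:.1%}".format(precision))  # ~1.8%
```

On those hypothetical numbers, roughly 98 in every 100 alerts would be false alarms – which is exactly why evaluation against a control group matters before deploying something like this at scale.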

Hay also discusses the principle of justice – considering how the benefits and burdens of research will be distributed. A major problem here is that a lot of the burden from Samaritans Radar appears to be falling on people with mental health problems. I would have major worries about the justice of a project that makes life harder for a group that is already too often marginalised and stigmatised. A lot of the more obvious benefits from the app seem to be accruing to the Samaritans: for example, the Samaritans seem pleased by metrics associated with the project[4], and they have received positive industry coverage for the campaign. While it is also likely that some people will benefit from alerts, the distribution of benefits, burdens and harms associated with the app and marketing so far raises real questions of justice.

I do, then, have serious concerns about how well these ethical issues were considered when developing Samaritans Radar.[5] A more participatory approach might have allowed the development of a better, more ethical and more just project, but there’s no way to turn the clock back. I think the most appropriate action now – given the harm being done, and in order to show respect to those who find the app is stopping them from using Twitter as they want – is to stop the app. This is why I’ve signed the petition to stop the app, and why I think you should sign it too.

Footnotes

[1] I’m not altogether comfortable with the emphasis on autonomy here. I’ve previously drawn on Levinas’ and Derrida’s decentralisation of subjectivity through emphasis on an Other, and other approaches to ethics have been influenced, for example, by Carol Gilligan’s relational ethics of care. The participatory approach discussed in this post is often linked with a relational view of subjectivity. However, I still do think it is important for research to respect the agency of participants and – with this in mind – the idea of autonomy is still useful. Also, I’m not sure a long digression on ideas of subjectivity would enhance the box office gold that is a 1,900-word discussion of research ethics.

[2] Providing additional support online and internationally may well be expensive, and the Samaritans may feel this draws them away from their core goals. However, this is why these ethical issues should be discussed prior to the launch of a project – decisions on what to do can then be informed by the support that can (or cannot) be provided.

[3] While I had hoped that the Samaritans might find a way to stop the harm that’s currently being done while keeping the app running, this now looks unlikely. Hopefully, something can be salvaged from this in due course; for now, though, I think closing the app is the best option. This is why I have now – regretfully – signed the petition calling to shut down Samaritans Radar.

[4] I think they’re wrong to be pleased – getting lots of people to hate your project enough to tweet about it isn’t normally a great achievement – but they do seem pleased…

[5] I would also note that if there are any hopes to use the (probably very interesting) data from the app in future, questions along the lines of those raised above may come up: there are real ethical problems in using data collected in this way.

Samaritans Radar: questions from a research ethics point of view (post 1 of 2)

The Samaritans have emphasised the involvement of academics when trying to justify Samaritans Radar. The work that academics do is bound by a strict ethical framework, and I’m therefore going to look at the questions that might be raised about Samaritans Radar if it were proposed as an academic social research project.[1] This will come in two posts (ethics forms can be long, and I don’t imagine many people want to read 2,000+ words on the topic in one go). I will argue that Samaritans Radar would – or, at least, should – face serious ethical questions were it proposed as an academic project. While what Samaritans Radar is trying to do is very interesting, the way the project has been run so far may severely limit what academics can do with the data it has generated.[2]

I’m not going to rehash discussions of data protection concerns around Samaritans Radar (others have covered this better than I could). However, a first thing to note is that an academic project is expected to have reasonable data protection measures in place: breaking data protection law would almost always be seen as unethical, both because the current law probably reflects important societal norms[3] and because breaking the law might leave individuals and institutions involved in the project facing risks such as prosecution.

Ethics forms always ask about consent, and so they should. Samaritans Radar involves observing lots of activity in an online public space, and therefore raises interesting questions around consent. In some cases, most would view observation of public spaces without opt-in consent as reasonable – for example, if I counted the cars passing my office window and heading towards the city centre during rush hour or analysed above-the-line Comment is Free posts, this would probably be seen as fairly unobjectionable. Other observations of public space get into more sensitive or personal-seeming areas, though, and may be seen as unacceptably intrusive – for example, it would probably be unacceptable for me to observe people entering and leaving local religious buildings without at least getting the congregations to agree to this.

Even where observation is seen as acceptable and opt-in consent is viewed as unnecessary, one would normally be expected to offer opt-out consent if anyone does object: for example, if someone saw me looking out of my office window to count cars and called me up to complain, it would probably be appropriate for me to offer to stop counting their car. Observing people who have actively said they don’t want to be observed is quite different from just assuming people are happy to be observed if they don’t object. I would view observation in these circumstances as unacceptable unless there is a very strong reason to carry out the project.

Initially, Samaritans Radar didn’t offer any opt-out to individuals. It appears this was technically possible for them to do (organisational accounts could be ‘white listed’) but they chose not to. A number of Twitter users (including me) were unhappy about this. I would argue that it was unethical to monitor individuals who had strongly objected to monitoring, but not been offered any way to opt out. I would therefore have real ethical concerns about using these data at all – doing so risks using data that subjects did not want collected, had no reasonable way of preventing from being collected[4] and would likely object to having processed/analysed.[5] I can’t see how this could be acceptable.
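For what it’s worth, this kind of filtering is technically trivial. The sketch below shows one way an opt-out check could work; all of the names and data structures here are mine, and none of it reflects how Samaritans Radar is actually implemented:

```python
# Minimal sketch of an opt-out filter; illustrative only, not the app's
# actual internals. The key design point is that opted-out accounts are
# discarded *before* any analysis, so no data about them is processed.
opted_out = {"whitelisted_org", "user_who_objected"}  # lower-case handles

def should_process(author_handle):
    """Return True only if the tweet's author has not opted out."""
    return author_handle.lower() not in opted_out

def handle_tweet(author_handle, text):
    if not should_process(author_handle):
        return  # no analysis, no alert, no stored data
    # ...hypothetical downstream steps: lexicon matching, alert generation...
```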

Samaritans Radar does now offer an opt-out, and the acceptability of this type of monitoring is a more complicated question – can one assume acquiescence to observation if people don’t object? In some online or offline spaces, I think that would be reasonable – again, if I were counting cars passing my office window or analysing above-the-line posts on Comment is Free, I think this would be OK. On the other hand, in some cases monitoring without opt-in consent would seem unreasonably intrusive – for example, I don’t think it would be acceptable for me to track below-the-line Comment is Free posts in order to assess the mental state of commenters without getting informed consent. And Samaritans Radar is actively collecting sensitive personal data.

One issue with taking the fact that someone doesn’t opt out as implying consent is whether those who are being monitored know that this is the case and therefore have the option to opt out (clearly, many of those monitored by Samaritans Radar don’t). If I were considering a proposal for running a project like this on an opt-out basis, I would expect the monitoring to be made much more overt: for example, the app could tweet daily from the accounts of those who have it installed, to make clear that they’re using it to monitor those they follow. This would still be far from perfect – many would still be unaware they’re being monitored, or would want neither to be monitored nor to have their names on an opt-out list – but it would be better than the current situation.[6]
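As a sketch of what that daily disclosure might look like – again purely illustrative, assuming tweepy and the Twitter v1.1 posting endpoint, with placeholder credentials and an unspecified opt-out link:

```python
# Hypothetical daily disclosure tweet, posted from the account of each
# person who has the app installed. Assumes tweepy and Twitter API v1.1;
# the wording and scheduling are my assumptions, not the app's behaviour.
import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth)

DISCLOSURE = ("I use Samaritans Radar, which scans tweets from accounts "
              "I follow. To opt out of being monitored, see: <opt-out link>")

# Run once a day (e.g. from cron) so followers are regularly reminded.
api.update_status(DISCLOSURE)
```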

I’m aware I’m looking at this with the benefit of hindsight, and I’m not sure what my answer would have been if – prior to launch – I’d been asked about running Samaritans Radar on an opt-out basis. Quantumplations asks “Was an opt in app ever considered until the initial backlash on Twitter? If so, why was it rejected?” From an ethical point of view, I think there would have needed to be a compelling reason to reject an opt-in model where informed consent could have been gained from all monitored users. I now think that Samaritans Radar should be opt-in only. The app is currently being used to monitor spaces where it is clearly unwelcome and to monitor people who – while not wanting to be on an opt-out list – don’t want to be monitored. I can’t see how this can be ethical.

For these reasons, I also don’t think using data resulting from current Samaritans Radar monitoring would be ethical. Once again, academics using these data would risk analysing data from people who have vocally refused consent to be monitored and who would likely object to their data being analysed in this way. They are also analysing data from a lot more people who are being monitored but – in part because the app doesn’t take all reasonable measures to inform monitored people – don’t know about this and don’t have the option of opting out.

One of the sad things about how Samaritans Radar has been run is that this may severely limit what academics can do with data coming from the project. I’m doubtful that it would be ethical to use the data generated so far at all. This is a real pity – this is a fascinating research topic, and Samaritans Radar could have made a major contribution to driving research in the area forwards. Being bound by a tighter ethical framework might also have strengthened the project, and avoided some of the problems it has run into.

This is the first post of two on ethical issues around Samaritans Radar. The second one is now (3/11) available here.

Update 3/11: Prof Jonathan Scourfield (who took part in the app launch) has blogged on the topic. He states that

The idea for the app came from Samaritans and digital agency Jam…When the development was already far advanced, I offered to contribute a lexicon of possible suicidal language, derived from our ongoing research on social media…we are not collecting any research data via Samaritans Radar

[1] To be up-front about some of my own biases, I should say that I have previously observed online interactions as part of my research and hope to do so in future. I have also asked for data from social media companies for reanalysis in the past, though this never progressed to the point where I’d need to complete an ethics form.

[2] I appreciate that charities aren’t bound by the same norms as academics – and don’t think they should be – but some of these ethical questions will be relevant to the Samaritans, and others will be relevant to academics who have worked/are working on the Samaritans Radar project or who are thinking about using data resulting from the project.

[3] If anything, current law may not be strong enough to adequately reflect current societal concerns about privacy and data analytics. However, failing to meet the standards laid out in current law is likely to fall short of societal expectations.

[4] I don’t view expecting someone to leave Twitter or make their Twitter account private in order to stop monitoring by Samaritans Radar as reasonable. This would be like me telling a driver who objected to me counting their car that they should cycle instead in order to avoid being observed.

[5] For the record, I object to any analysis being carried out on data on me collected by Samaritans Radar.

[6] The way the app currently works seems over-cautious about the privacy of those who are in a position to give informed consent to run it – those installing the app – but far too casual about the privacy of those who are monitored.