Tag Archive | Samaritans Radar

Suspending Samaritans Radar: inadequate mitigation of risks continues

I’ve criticised the limited support that the Samaritans provided to users of their Samaritans Radar app when they deactivated it. I’m pleased to say that (1 week after deactivating the app) they have now e-mailed the app’s users; however, the support provided to users is still inadequate. This is the e-mail they sent:

As you may already be aware, following the feedback and advice Samaritans has received since the launch of Radar, the app has been suspended. As a result we have deactivated all subscriber accounts and are deleting all associated data.

For a full statement on the suspension of the app, please see our website http://www.samaritans.org

Thank you for subscribing to the Samaritans Radar app and for your support to date. If you have any further questions please email radar@samaritans.org

There are a number of problems with this e-mail:

  • There is no offer of support to users. The link in the e-mail takes users to a recent announcement about the app’s closure, not a support page. An e-mail like this might be OK if this was an app for sharing funny cat pics, but when suspending an app which aims to help prevent suicide the Samaritans should be pointing users towards support in case they’re worried or distressed.
  • They give no information on what users should do about alerts received before the suspension that are now inaccessible.
  • The e-mail is being sent 1 week after the app was suspended. Users might have been expecting the app to be working and sending alerts during this week.
  • The Samaritans don’t solicit feedback from users (or even link to their own feedback survey).

Seriously – 1 week to send a 78-word e-mail, and it’s still not especially good. This is really disappointing.

Update: I had linked to the wrong Samaritans announcement; corrected this now.


Suspending Samaritans Radar: inadequate mitigation of risks from suspension

The Samaritans have now suspended their Samaritans Radar app. They’re right to suspend it, but have done so in a way that risks adding to the harm done by the app.[1] The Samaritans have – quite rightly – emphasised the value of Twitter as a support mechanism; they have also claimed that Samaritans Radar can contribute significantly to this. With this in mind, I’d have expected them to be careful not to do harm when suspending the app. Sadly, I don’t see evidence of this: there hasn’t been adequate notification of users, and I don’t see adequate support systems in place.

So far as I can tell, people who signed up to use the app haven’t been told that it’s no longer working.[2] Some may notice the Twitter discussion of this, but many won’t. While I think the app was a bad idea overall there were, as quantumplations notes, some positive ways in which it could be used: for example, it might be used by someone “who knows they follow people on twitter who might have mental health issues, wants to keep an eye on those people, and [is] able to provide meaningful support to them.”[3] Such users may be relying on the app to keep an eye on people, and therefore not checking Twitter feeds manually; this could lead to them missing worrying tweets.[4]

If someone clicks on an alert that was sent prior to the app’s suspension, they are just taken to this page. This means that, while a user will have been told that there was a worrying tweet from someone they follow, they won’t be able to know who the tweet was from or what it said. This could be rather distressing. Given data protection concerns this is probably now unavoidable[5], but Samaritans should have provided better support to people in this situation.

Unfortunately, the first thing someone clicking on a Samaritans Radar alert will see is a fairly generic statement which seems to be aimed at media covering the suspension; if they scroll down the page (which many may not do) the next thing they will see is a link to a survey. Finally, at the bottom of the page, they will see contact details for the Samaritans. This isn’t a good response to what may be a very worrying situation for a user of the app – instead, the Samaritans should have put up a statement tailored to people in this situation, along with putting details of available support up-front. There should also be details of support for users outside the UK.

A final thing to note is the timing: suspending the app at around 6pm on a Friday means that sources of support people would typically rely on may be unavailable or harder to access, and – if users of the app are worried about some of those they follow – there’s more chance that those people will be offline over the weekend. Not only have Samaritans suspended the app in a rather messy way, the timing of doing so may make the situation worse.

I’ve argued that the Samaritans did not adequately mitigate the risk of harm from Samaritans Radar. The same applies to their suspension of the app. Even if they only made the final decision to suspend the app at 5:45pm on Friday, they must have been aware that this was a possibility for some time – so should have prepared better measures to reduce the risk of harm to users of the app and to other people affected by its suspension.

Update 9/11/14: quantumplations is looking at some problems with the survey the Samaritans released to coincide with Samaritans Radar’s suspension. More effective engagement following the suspension might also have helped to mitigate current or future harms related to the app.

[1] They should have acted sooner – or, better, developed the app in such a way as to avoid these problems – and should have posted a more meaningful apology. There are also questions about what the Samaritans are doing with the data collected. I’ll pass over these issues for now, though.

[2] I can’t be sure that some users haven’t received notification of the suspension. I can’t find anyone who has, though – so, clearly, if there was a notification system in place it’s not working well.

[3] To be clear, I’d only view this type of monitoring as appropriate where all parties are happy for this to take place.

[4] There are concerns about false negatives produced while the app was running. However, a suspended app will produce no true positives at all.

[5] It could have been avoided had things been thought through better pre-launch, but none of us have a time machine…

Samaritans Radar: questions from a research ethics point of view (post 2 of 2)

This is the second of two posts looking at Samaritans Radar from a research ethics point of view: thinking about the type of questions that would be asked if you were applying for ethical approval for the project. The first post is here.

I’ve found Hay’s work on research ethics helpful when introducing the topic. Hay suggests three principles of ethical behaviour: justice, beneficence/non-maleficence and respect (see p. 38). I’d argue that, when doing social research, an ethical approach is important for several reasons: it’s the right thing to do, it can strengthen a research project, and it protects the possibilities for future research.

Hay describes respect in terms of how “individuals should be regarded as autonomous agents”.[1] Quantumplations points out that, with Samaritans Radar, the

whole focus of the app is designed towards the ‘user’, a follower who may wish to identify when people they follow on twitter are vulnerable. Originally the privacy section of their website only mentioned the privacy of the user of the app – the person doing the monitoring – with no mention of any privacy concerns for the individuals who are being followed… An app designed for an individual with mental health problems would be very unlikely to remove the agency of the individual in such a fashion.

Samaritans have pointed out that “Samaritans’ Radar…has been tested with several different user groups who have contributed to its creation, including young people with mental health problems”. I’m glad to hear that this has been done. However, a deeper engagement with groups who are particularly affected by the app – for example, people with mental health problems – might have allowed the Samaritans to show greater respect for monitored people. One of the great things about social media is that it can facilitate quite open engagement. For example, if the Samaritans had held an open Twitter chat about the project a few weeks before launch they might quickly have learnt that the lack of an opt-out for individuals was viewed as objectionable by many. This type of feedback could have let them start out with a better product.

Pain (PDF, p. 22) emphasises the “principles of collaboration and social justice…in impactful research” and asks “[s]hould we be striking a blow, or walking together?” In some ways, Samaritans Radar will have achieved high impact – it has had lots of media coverage, lots of discussion on Twitter, and has been monitoring and sending alerts about over 1.6m Twitter feeds. However, walking together with some of the communities most affected by the project might have allowed a more collaborative process of enhancing the support available through Twitter. This might have led to a different type of app and project, but the way these negotiations played out would itself have been interesting and might have generated something far more useful. A more participatory and respectful approach could have led to a stronger and more ethical project.

Hay also emphasises the principle of beneficence/non-maleficence – carrying out research in a way that benefits research participants, or at least does not do harm. When seeking ethical approval for a project this is rightly given lots of weight – harming participants is, quite rightly, normally viewed as bad. Samaritans Radar is aimed at bringing wide-ranging benefits – for example, helping to support people when they are in distress or preventing suicides. However, it also carries significant risks of harm, and I’m not convinced these have been adequately mitigated. I’ll first go through risks that I think should have been anticipated and dealt with better pre-launch, and then look at some significant post-launch harms.

Firstly, the app sends alerts when tweets appear to show cause for concern. It is clearly important to support the users getting these alerts as well as possible. However, when responding to an alert, users are initially taken to a page that asks them to click a box to say whether or not they are worried about the tweet. As far as I can tell, advice about what to do if they are worried is not available until they have given this feedback. This is inappropriate: given the risks of harm here, advice and support should be offered up-front, while giving feedback to improve the app should be entirely optional.

Secondly, while the Samaritans do offer excellent support by telephone, e-mail, post and face-to-face, they should have planned to offer better social media and international support in order to mitigate the risk of harm from a social media project on this scale. The level of support a researcher is expected to provide alongside a project tends to depend on its size and on the risks involved. In terms of size, this project is huge – over 1.6m Twitter accounts monitored. It is also very high-risk: it’s trying to flag up tweets from suicidal people. With this in mind, I’d argue that the @Samaritans Twitter account should have been set up to be monitored – and reply to distressed tweets – 24/7 (even if the replies largely just point people towards other sources of support). I’m aware that people in the UK or Republic of Ireland (ROI) can phone Samaritans, but people don’t always look up the correct contact details – especially when upset – so I think a 24/7 social media presence would be reasonable to mitigate the risks from a project like this.

The Samaritans’ presence is largely in the UK and ROI. However, this project will clearly go far beyond these borders. With this in mind, the Samaritans should consider what support they can offer to mitigate the risk of harm to people in other parts of the world. Currently, the information they offer for people outside of the UK and Ireland is limited and – while it might be OK for a UK-focussed charity – is nowhere near adequate for an organisation running an international project on this scale.[2]

There is also the risk that the app will be used to find when people are vulnerable in order to harm them. As far as I can tell, Samaritans don’t have any measures in place to block abusers (and if it’s not clear how to report abuse of your social media product, your anti-abuse strategy is probably broken). When considering the ethics of the project, this issue should have been addressed.

Of course, things don’t always go to plan. After the launch of Samaritans Radar, it has become clear that the app is causing considerable distress to many Twitter users; I presume this wasn’t intended or predicted. Some people are closing their accounts, making them private or censoring what they say as a result of the app. The Samaritans’ online responses haven’t been adequate – their official Twitter account is now back to tweeting about raffles, and their response to the concerns raised doesn’t adequately address problems or include an apology for things like the lack of an opt-out which have now been corrected. The Samaritans should act to mitigate these harms, but are not doing so effectively. With this in mind, I would argue that the harm being done is sufficient that the app should be stopped from running in its current form. People are being harmed by the thought that they’re being monitored, and alerts may be being sent against their wishes; given that the Samaritans have failed to find any adequate resolution to these problems, the best way to deal with this is simply to stop the monitoring.[3]

A last potential harm to note is that interventions from Samaritans Radar might be worse than useless. Lots of interventions that seem to work well sadly don’t, when they are tested. In the case of this app it is, for example, plausible that false positives are harmful. Prof Scourfield has acknowledged that

There is not yet what I would consider evidence of the app’s effectiveness, because it is a brand new innovation which has not yet been evaluated using a control group.

Although the app has been launched on a large scale, there is no way to be confident that it is not actively harming users and monitored people, even where all parties are happy about the monitoring.

Hay also discusses the principle of justice – considering how benefits and burdens of research will be distributed. A major problem here is that a lot of the burden from Samaritans Radar appears to be falling on people with mental health problems. I would have major worries about the justice of a project that makes life harder for a group that is already too-often marginalised and stigmatised. A lot of the more obvious benefits from the app seem to be accruing to the Samaritans: for example, the Samaritans seem pleased by metrics associated with the project[4], and they have received positive industry coverage for the campaign. While it is also likely that some people will benefit from alerts, the distribution of benefits, burdens and harms associated with the app and marketing so far raises real questions of justice.

I do, then, have serious concerns about how well these ethical issues were considered when developing Samaritans Radar.[5] A more participatory approach might have allowed the development of a better, more ethical and more just project, but there’s no way to turn the clock back. I think the most appropriate action now – given the harm being done, and in order to show respect to those who find the app is stopping them from using Twitter as they want – is to stop the app. This is why I’ve signed the petition to stop the app, and why I think you should sign it too.

Footnotes

[1] I’m not altogether comfortable with the emphasis on autonomy here. I’ve previously drawn on Levinas’ and Derrida’s decentralisation of subjectivity through emphasis on an Other, and other approaches to ethics have been influenced, for example, by Carol Gilligan’s relational ethics of care. The participatory approach discussed in this post is often linked with a relational view of subjectivity. However, I still do think it is important for research to respect the agency of participants and – with this in mind – the idea of autonomy is still useful. Also, I’m not sure a long digression on ideas of subjectivity would enhance the box office gold that is a 1,900 word discussion of research ethics.

[2] Providing additional support online and internationally may well be expensive, and the Samaritans may feel this draws them away from their core goals. However, this is why these ethical issues should be discussed prior to the launch of a project – decisions on what to do can then be informed by the support that can (or cannot) be provided.

[3] While I had hoped that the Samaritans might find a way to stop the harm that’s currently being done while keeping the app running, this now looks unlikely. Hopefully, something can be salvaged from this in due course; for now, though, I think closing the app is the best option. This is why I have now – regretfully – signed the petition calling to shut down Samaritans Radar.

[4] I think they’re wrong to be pleased – getting lots of people to hate your project enough to tweet about it isn’t normally a great achievement – but they do seem pleased…

[5] I would also note that if there are any hopes to use the (probably very interesting) data from the app in future, questions along the lines of those raised above may come up: there are real ethical problems in using data collected in this way.

Samaritans Radar: questions from a research ethics point of view (post 1 of 2)

The Samaritans have emphasised the involvement of academics when trying to justify Samaritans Radar. The work that academics do is bound by a strict ethical framework, and I’m therefore going to look at the questions that might be raised about Samaritans Radar if it were proposed as an academic social research project.[1] This will come in two posts (ethics forms can be long, and I don’t imagine many people want to read 2,000+ words on the topic in one go). I will argue that Samaritans Radar would – or, at least, should – face serious ethical questions were it proposed as an academic project. While what Samaritans Radar is trying to do is very interesting, the way the project has been run so far may severely limit what academics can do with the data it has generated.[2]

I’m not going to rehash discussions of data protection concerns around Samaritans Radar (others have covered this better than I could). However, a first thing to note is that an academic project is expected to have reasonable data protection measures in place: breaking data protection law would almost always be seen as unethical, both because the current law probably reflects important societal norms[3] and because breaking the law might leave individuals and institutions involved in the project facing risks such as prosecution.

Ethics forms always ask about consent, and so they should. Samaritans Radar involves observing lots of activity in an online public space, and therefore raises interesting questions around consent. In some cases, most would view observation of public spaces without opt-in consent as reasonable – for example, if I counted the cars passing my office window and heading towards the city centre during rush hour or analysed above-the-line Comment is Free posts, this would probably be seen as fairly unobjectionable. Other observations of public space get into more sensitive or personal-seeming areas, though, and may be seen as unacceptably intrusive – for example, it would probably be unacceptable for me to observe people entering and leaving local religious buildings without at least getting the congregations to agree to this.

Even where observation is seen as acceptable and opt-in consent is viewed as unnecessary, one would normally be expected to offer opt-out consent if anyone does object: for example, if someone saw me looking out of my office window to count cars and called me up to complain, it would probably be appropriate for me to offer to stop counting their car. Observing people who have actively said they don’t want to be observed is quite different from just assuming people are happy to be observed if they don’t object. I would view observation in these circumstances as unacceptable unless there is a very strong reason to carry out the project.

Initially, Samaritans Radar didn’t offer any opt-out to individuals. It appears this was technically possible (organisational accounts could be ‘white listed’), but they chose not to. A number of Twitter users (including me) were unhappy about this. I would argue that it was unethical to monitor individuals who had strongly objected to monitoring but had not been offered any way to opt out. I would therefore have real ethical concerns about using these data at all – doing so risks using data that subjects did not want collected, had no reasonable way of preventing from being collected[4] and would likely object to having processed/analysed.[5] I can’t see how this could be acceptable.
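To be clear about how small a change this would have been: an individual opt-out is essentially a set-membership check carried out before any tweet is analysed, stored or used to trigger an alert. The app’s real code has never been published, so the sketch below (in Python) is purely illustrative, with every name hypothetical:

    # Hypothetical sketch: Samaritans Radar's real code was never published.
    # An opt-out register reduces to a set-membership test performed before
    # any analysis, storage or alerting happens.

    opt_out_register = set()  # handles of accounts that asked not to be monitored

    def opt_out(handle: str) -> None:
        """Record that this account must not be monitored."""
        opt_out_register.add(handle.lower())

    def may_monitor(handle: str) -> bool:
        """True only if the account has not opted out."""
        return handle.lower() not in opt_out_register

    opt_out("SomeOrg")                 # e.g. a 'white listed' organisation
    assert not may_monitor("someorg")  # opted out: no analysis, no alerts
    assert may_monitor("someone_else") # everyone else is still monitored

Nothing about such a check is technically hard, which is consistent with the point above: offering it to organisations but not individuals was a design choice, not an engineering constraint.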

Samaritans Radar does now offer an opt-out, and the acceptability of this type of monitoring is a more complicated question – can one assume acquiescence to observation if people don’t object? In some online or offline spaces, I think that would be reasonable – again, if I were counting cars passing my office window or analysing above-the-line posts on Comment Is Free, I think this would be OK. On the other hand, in some cases monitoring without opt-in consent would seem unreasonably intrusive – for example, I don’t think it would be acceptable for me to track below-the-line Comment Is Free posts in order to assess the mental state of commenters without getting informed consent. Samaritans Radar, like that second example, is actively collecting sensitive personal data.

One issue with taking the fact that someone doesn’t opt out as implying consent is whether those who are being monitored know that this is the case, and therefore have the option to opt out (clearly, many monitored by Samaritans Radar don’t). If I were considering a proposal for running a project like this on an opt-out basis, it would be reasonable to ask for the monitoring to be made much more overt: for example, the app could Tweet daily from those who have it installed to make clear that they’re using it to monitor those they follow. This would still be far from perfect – many would still be unaware they’re being monitored, or would want neither to be monitored nor to have their names on an opt-out list – but would be better than the current situation.[6]

I’m aware I’m looking at this with the benefit of hindsight, and I’m not sure what my answer would have been if – prior to launch – I’d been asked about running Samaritans Radar on an opt-out basis. Quantumplations asks “Was an opt in app ever considered until the initial backlash on Twitter? If so, why was it rejected?” From an ethical point of view, I think there would have needed to be a compelling reason to reject an opt-in model where informed consent could have been gained from all monitored users. I now think that Samaritans Radar should be opt-in only. The app is currently being used to monitor spaces where it is clearly unwelcome and to monitor people who – while not wanting to be on an opt-out list – don’t want to be monitored. I can’t see how this can be ethical.

For these reasons, I also don’t think using data resulting from current Samaritans Radar monitoring would be ethical. Once again, academics using these data would risk analysing data from people who have vocally refused consent to be monitored and who would likely object to their data being analysed in this way. They would also be analysing data from a lot more people who are being monitored but – in part because the app doesn’t take all reasonable measures to inform monitored people – don’t know about this and don’t have the option of opting out.

One of the sad things about how Samaritans Radar has been run is that this may severely limit what academics can do with data coming from the project. I’m doubtful that it would be ethical to use the data generated so far at all. This is a real pity – this is a fascinating research topic, and Samaritans Radar could have made a major contribution to driving research in the area forwards. Being bound by a tighter ethical framework might also have strengthened the project, and avoided some of the problems it has run into.

This is the first post of two on ethical issues around Samaritans Radar. The second one is now (3/11) available here.

Update 3/11: Prof Jonathan Scourfield (who took part in the app launch) has blogged on the topic. He states that

The idea for the app came from Samaritans and digital agency Jam…When the development was already far advanced, I offered to contribute a lexicon of possible suicidal language, derived from our ongoing research on social media…we are not collecting any research data via Samaritans Radar

[1] To be up-front about some of my own biases, I should say that I have previously observed online interactions as part of my research and hope to do so in future. I have also asked for data from social media companies for reanalysis in the past, though this never progressed to the point where I’d need to complete an ethics form.

[2] I appreciate that charities aren’t bound by the same norms as academics – and don’t think they should be – but some of these ethical questions will be relevant to the Samaritans, and others will be relevant to academics who have worked/are working on the Samaritans Radar project or who are thinking about using data resulting from the project.

[3] If anything, current law may not be strong enough to adequately reflect current societal concerns about privacy and data analytics. However, failing to meet the standards laid out in current law is likely to fall short of societal expectations.

[4] I don’t view expecting someone to leave Twitter or make their Twitter account private in order to stop monitoring by Samaritans Radar as reasonable. This would be like me telling a driver who objected to me counting their car that they should cycle instead in order to avoid being observed.

[5] For the record, I object to any analysis being carried out on data on me collected by Samaritans Radar.

[6] The way the app currently works seems over-cautious about the privacy of those who are in a position to give informed consent to run it – those installing the app – but far too casual about the privacy of those who are monitored.

Samaritans Radar should be opt-in only

The Samaritans Radar app has had a disastrous launch: by initially refusing to allow individuals to opt out of being monitored and by responding badly to criticisms, the Samaritans have created a lot of bad feeling. The app is now working at a very large scale: the Samaritans report that after day 1 it was monitoring 900,000 Twitter feeds. Unfortunately, although allowing opt-outs is a big improvement, the way the app is being handled is still a mess and many people clearly don’t find the app’s monitoring acceptable. I will therefore argue that Samaritans Radar should be made opt-in only.

Monitoring public spaces

As I’ve said, I think that launching Samaritans Radar with no opt-out was inexcusable. An uproar was generated when the Samaritans initially refused to allow individuals to opt out and treated the public spaces of Twitter[1] with what looked like disdain, arguing that “All the data used in the app is public, so user privacy is not an issue”. While these online spaces are public, they are also valued by many and are covered by norms of acceptable behaviour. As Mark Brown notes, “For some, Twitter is the only place they have felt able to meet others with mental health difficulties and to be honest about their true feelings.” Online spaces are important.

The way the Samaritans behaved would be viewed as unacceptable in an offline public space.[2] Imagine, for example, that I went into a busy local park and told users I’d be monitoring what they do, storing records and sending alerts about their activity whether or not they liked it – after all, everything they do in a public place is public. When people argued back, I told them to think about issues of privacy in parks. This would not make me popular. At best, I think I’d be told to go away.

Not surprisingly, the Samaritans have received a lot of negative responses to their app launch, and a lot of people clearly view Samaritans Radar’s monitoring of public space as unacceptable. I wouldn’t feel comfortable monitoring a public space after many people from the community had told me to stop, and I don’t think it’s appropriate for Samaritans Radar to continue monitoring Twitter after having had such a negative response. Changing the app to work on an opt-in basis – so that only those who want to be monitored are monitored – would be much more acceptable.

Opt-in and opt-out observation

I think that opt-out observation of public spaces can be acceptable in some circumstances, and when I was planning this post yesterday I was going to argue for Samaritans Radar’s work to just be made more overt.[3] However, opt-out monitoring by Samaritans Radar isn’t acceptable now. When I was trying to defend the potential benefits of an opt-out project – saying that an opt-in project would be smaller scale – @MLBrook pointed out that when I say smaller scale I “mean it might have included just those who actually consented”. This argument for an opt-in approach is persuasive for two reasons. Firstly, given the response the app’s launch has had, I don’t think that Samaritans can take the fact that users don’t opt out as implied consent for monitoring (indeed, some people object to the Samaritans storing their details on an opt-out list). Secondly, the project is scooping up a lot of sensitive personal data (and will collect more the longer it runs) and this type of project needs very careful handling. However, as well as the problems at launch, the Samaritans Radar mess continues today. For example:

  • @Endless_Psych reports that, after they sent a tweet to trigger Samaritans Radar, “Someone got a notification about the tweet. The next day.”
  • The Samaritans have still not made clear whether opting out from Samaritans Radar just stops alerts being sent about my tweets, or also stops the app from monitoring my Twitter feed and the Samaritans from collecting, storing and analysing data from it. The Samaritans’ statement on the introduction of an opt-out for individuals says that the opt-out is for “individuals who would not like their Tweets to appear in Samaritans Radar alerts”, but doesn’t offer further information about the impact of an opt-out on their data collection and processing. They have since stated that opting out does stop monitoring – and I appreciate the prompt response to my question – but this should have been made clear from the start.
  • There is no clear way to report users who are abusing the app or get them blocked.
  • Samaritans have claimed – repeatedly and incorrectly – that people who don’t follow them can DM to opt out.

Given how badly things have been handled so far, I think the best thing for both the Samaritans and those they are trying to help would be for Samaritans Radar to start out opt-in only and on a much smaller scale.

I’ve covered problems around the launch of Samaritans Radar here. I’m posting this after a long day – let me know if you spot any typos, if anything seems unclear or if I’ve missed out a link I should have included. I should also say that I think the Samaritans are generally a great organisation and I’m sure Samaritans Radar was launched with good intentions; this makes what has happened with the app all the more disappointing, though.

Update 31/10: @Endless_Psych has updated me that “they got the notification [from Samaritans Radar] about two or so hours after I posted but they were asleep”. I’ve also updated the post to add the point that it’s not clear whether opting out from Samaritans Radar will stop monitoring, data collection etc.

Update 2 31/10: @J5nnRussell has clarified that opting out stops Samaritans Radar from monitoring your tweets. Post updated to reflect this.

[1] Sorry, I’m a geographer – I like spatial metaphors. I also like footnotes.

[2] I’m not arguing that there’s a clear online/offline divide, but talking about online and offline spaces seemed the clearest way to express my point in this blog post.

[3] To be open about my own biases, I have previously argued for and carried out online observation which has been opt-out rather than opt in (although I have done this in very different ways and on a very different scale to Samaritans Radar).

Problems with Samaritans Radar

The Samaritans Radar app is an interesting – and potentially valuable – idea. However, the app relies on the covert monitoring of Twitter users and will probably be collecting and processing lots of sensitive personal data. There is also the potential for the app to be used to target people when they are vulnerable. I will argue that the covert nature of the app’s monitoring and the lack of any apparent way for people being monitored to opt out are both unacceptable and that the Samaritans have not evidenced adequate safeguards against abuse.[1]

The app is presented as “a chance to help friends who may need support”. Some users will no doubt use it in this way. What the app actually is, though, is a means to get alerts when certain words or phrases crop up in the tweets of people a Twitter user chooses to follow (as long as their accounts aren’t private and they haven’t blocked me).[2] Monitoring is not transparent to those who are monitored: Samaritans make clear that “Samaritans Radar is activated discreetly and all alerts are sent to you alone…The people you follow won’t know you’ve signed up to it”.[3] I can’t find any way to opt out of being monitored by the app – the decision about whether to use it is made solely by the user who is signing up for alerts about people they follow.
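To make that description concrete, here is a rough sketch of the alert flow as described above. The Samaritans haven’t published the app’s actual matching rules or code, so the phrases, names and data structures below (in Python) are all placeholders:

    # Hypothetical sketch of the alert flow described above; the real
    # matching logic was never published. Phrases and names are placeholders.

    LEXICON = ["so alone", "can't cope", "want to end it"]

    def looks_concerning(tweet_text: str) -> bool:
        """Crude phrase match standing in for the app's (unpublished) rules."""
        lowered = tweet_text.lower()
        return any(phrase in lowered for phrase in LEXICON)

    def scan_followed(subscriber, followed_tweets):
        """Return alerts for the subscriber alone; monitored authors are never
        notified. followed_tweets maps each followed public, non-blocking
        account to its recent tweets (private accounts never reach this point)."""
        alerts = []
        for author, tweets in followed_tweets.items():
            for text in tweets:
                if looks_concerning(text):
                    alerts.append(f"(to @{subscriber} only) @{author} may need support: {text!r}")
        return alerts

    print(scan_followed("watcher", {"friend": ["Feeling so alone tonight"]}))

Note what this structure implies: all the information flows to the subscriber, and nothing in the flow tells the monitored author that any of this is happening – which is exactly the asymmetry criticised below.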

Samaritans argue that “All the data used in the app is public, so user privacy is not an issue. Samaritans Radar analyses the Tweets of the people you follow, which are public Tweets”.[4] This is a bad argument. By way of analogy, my office window looks out on a public street – whatever people do there is public. There would still, though, be privacy issues if I installed a video camera in my window to tape what people did outside; there would be bigger issues if, say, I allowed interested parties to subscribe to alerts when person X or Y walks past my window drunk. It would be even more worrying if person Y found out about this and was upset but I didn’t offer any way for them to stop me from monitoring them or sending alerts.

There is also a real risk of harm here. People might, for example, feel less able to share their feelings and seek support on Twitter if this brings them a raft of well-meaning but unwanted contacts from followers or if they felt they were being surveilled in an oppressive way. As @Sectioned_ points out, an alert from the app “could seem like open encouragement to platitude-bomb someone when they’re feeling rubbish”. More worryingly, abusive people might use the app in order to find out when a target of theirs is feeling lousy: as @claireOT argues, “there’s a worrying lack of safeguards against ppl using the app to target vulnerable ppl”.[5] Even if someone is aware that they’re being targeted in this way and wants to stop it, I can see no way to opt out from being monitored. I also can’t see any way to report someone who’s using this app for abuse or to get them blocked from using the app.

The Samaritans Radar app is a nice idea, but the lack of any clear way to opt out seems inexcusable – and increases the likelihood of the app doing harm. I haven’t seen evidence of adequate safeguards against abuse of the app. If the app is popular, its launch will mean the covert monitoring of many Twitter accounts, along with the collection, analysis and storage of a lot of sensitive personal data. It might be possible to justify this – and I’m sure the Samaritans have good intentions – but I haven’t seen anything like an adequate justification from the Samaritans.

UPDATE: Samaritans Radar is now covertly monitoring (or “supporting”, as they put it) 900,000 Twitter feeds. This is a large-scale monitoring, data collection and processing project, and really does need to have appropriate privacy and risk mitigation measures in place.

UPDATE 2: the Information Rights and Wrongs blog now has an excellent post on data privacy issues around Samaritans Radar. I now probably won’t write a post on data protection and the app – I don’t think I could do anything better.

Update 3 (30/10/14): Samaritans have announced that they will allow individuals to opt out from being monitored by the app. I have added strikethroughs to the post to reflect this.

I’ve tried to acknowledge sources here, but I may well have missed people making similar points about the app on social media. Please tell me, and I’ll add in appropriate links.

I’ve kept this post brief-ish, but I also have a half-written post about data protection aspects of this and another looking at how issues like this are dealt with from the point of view of research ethics (I submitted an ethics form for some online ethnographic work not that long ago). I’ll try to write these up at some point – so there’s the excitement of discussions of data protection and research ethics still to come! I’d also like to write something about how this type of app might work in a more ethical, and less intrusive, way.

 

[1] I appreciate that people can leave Twitter or make their accounts private. However, people should not be forced to make their Twitter account less public in order to escape this type of monitoring.

[2] Though I imagine I could set up an anon sockpuppet account to follow anyone who blocked me and I still wanted to monitor.

[3] Clearly, some of those using the app may tell those they follow that they are doing so and some Twitter users may actually ask to be monitored. However, the app does not tell people that it is monitoring them.

[4] It is worth noting that much of the data collected will fall under the Data Protection Act’s definition of personal data – for example, I’m easily identifiable from my Twitter handle @JonMendel.

[5] I appreciate that an abuser can also just read a public Twitter feed, but this app is potentially making this far easier.