The Samaritans Radar app is an interesting – and potentially valuable – idea. However, the app relies on the covert monitoring of Twitter users and will probably be collecting and processing lots of sensitive personal data. There is also the potential for the app to be used to target people when they are vulnerable. I will argue that the covert nature of the app’s monitoring and
the lack of any apparent way for people being monitored to opt out are both unacceptable and that the Samaritans have not evidenced adequate safeguards against abuse.
The app is presented as “a chance to help friends who may need support”. Some users will no doubt use it in this way. What the app actually is, though, is a means to get alerts when certain words or phrases crop up in the tweets of people a Twitter user chooses to follow (as long as those accounts aren’t private and they haven’t blocked that user). Monitoring is not transparent to those who are monitored: Samaritans make clear that “Samaritans Radar is activated discreetly and all alerts are sent to you alone…The people you follow won’t know you’ve signed up to it”.
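The mechanics described here amount to simple phrase matching over a followed account's public tweets. A minimal sketch of that kind of matching (the phrase list and matching rules below are my assumptions for illustration, not the Samaritans' actual implementation):

```python
# Hypothetical sketch of keyword-based alerting of the sort Radar appears to do.
# DISTRESS_PHRASES is an invented example list, not Samaritans' real one.

DISTRESS_PHRASES = ["tired of being alone", "hate myself", "help me"]

def should_alert(tweet_text, phrases=DISTRESS_PHRASES):
    """Return True if the tweet contains any watched phrase (case-insensitive)."""
    text = tweet_text.lower()
    return any(phrase in text for phrase in phrases)

def scan_timeline(tweets, phrases=DISTRESS_PHRASES):
    """Return the tweets that would trigger an alert to the subscriber.
    Note that the people whose tweets are scanned are never notified --
    which is the transparency problem discussed above."""
    return [t for t in tweets if should_alert(t, phrases)]
```

The point of the sketch is how little the monitored person figures in it: the subscriber chooses the accounts, the matching runs silently, and only the subscriber sees the result.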
I can’t find any way to opt out of being monitored by the app – the decision about whether to use it is made solely by the user who is signing up for alerts about people they follow.
Samaritans argue that “All the data used in the app is public, so user privacy is not an issue. Samaritans Radar analyses the Tweets of the people you follow, which are public Tweets”. This is a bad argument. By way of analogy, my office window looks out on a public street – whatever people do there is public. There would still, though, be privacy issues if I installed a video camera in my window to tape what people did outside; there would be bigger issues if, say, I allowed interested parties to subscribe to alerts when person X or Y walks past my window drunk. It would be even more worrying if person Y found out about this and was upset but I didn’t offer any way for them to stop me from monitoring them or sending alerts.
There is also a real risk of harm here. People might, for example, feel less able to share their feelings and seek support on Twitter if this brings them a raft of well-meaning but unwanted contacts from followers or if they felt they were being surveilled in an oppressive way. As @Sectioned_ points out, an alert from the app “could seem like open encouragement to platitude-bomb someone when they’re feeling rubbish”. More worryingly, abusive people might use the app in order to find out when a target of theirs is feeling lousy: as @claireOT argues, “there’s a worrying lack of safeguards against ppl using the app to target vulnerable ppl”.
Even if someone is aware that they’re being targeted in this way and wants to stop it, I can see no way to opt out from being monitored. I also can’t see any way to report someone who’s using this app for abuse or to get them blocked from using the app.
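Purely as an illustration of the safeguard that seems to be missing: an opt-out list for monitored people and a block list for reported subscribers could gate every alert before it is sent. Nothing below reflects the app's real code; all names are hypothetical.

```python
# Illustrative only: how an opt-out/abuse-report safeguard could gate alerts.
# Both sets and all usernames are invented for this sketch.

opted_out = {"@someone_vulnerable"}      # people who asked not to be monitored
banned_subscribers = {"@reported_abuser"} # subscribers blocked for abusing the app

def may_send_alert(subscriber, monitored_user):
    """An alert goes out only if the monitored person hasn't opted out
    and the subscriber hasn't been banned following an abuse report."""
    return (monitored_user not in opted_out
            and subscriber not in banned_subscribers)
```

A check like this is cheap to run on every alert, which is why its absence at launch is hard to excuse.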
The Samaritans Radar app is a nice idea,
but the lack of any clear way to opt out seems inexcusable – and increases the likelihood of the app doing harm. I haven’t seen evidence of adequate safeguards against abuse of the app. If the app is popular, its launch will mean the covert monitoring of many Twitter accounts along with the collection, analysis and storage of a lot of sensitive personal data. It might be possible to justify this – and I’m sure the Samaritans have good intentions – but I haven’t seen anything like an adequate justification from the Samaritans.
UPDATE: Samaritans Radar is now covertly monitoring (or “supporting”, as they put it) 900,000 Twitter feeds. This is a large-scale monitoring, data collection and processing project, and really does need to have appropriate privacy and risk mitigation measures in place.
UPDATE 2: the Information Rights and Wrongs blog now has an excellent post on data privacy issues around Samaritans Radar. I now probably won’t write a post on data protection and the app – I don’t think I could do any better.
Update 3 (30/10/14): Samaritans have announced that they will allow individuals to opt out from being monitored by the app. I have added strikethroughs to the post to reflect this.
I’ve tried to acknowledge sources here, but I may well have missed people making similar points about the app on social media. Please tell me, and I’ll add in appropriate links.
I’ve kept this post brief-ish, but I also have a half-written post about data protection aspects of this and another looking at how issues like this are dealt with from the point of view of research ethics (I submitted an ethics form for some online ethnographic work not that long ago). I’ll try to write these up at some point – so there’s the excitement of discussions of data protection and research ethics still to come! I’d also like to write something about how this type of app might work in a more ethical, and less intrusive, way.
I appreciate that people can leave Twitter or make their accounts private. However, people should not be forced to make their Twitter account less public in order to escape this type of monitoring.

Though I imagine I could set up an anonymous sockpuppet account to follow anyone who had blocked me but whom I still wanted to monitor.

Clearly, some of those using the app may tell those they follow that they are doing so, and some Twitter users may actually ask to be monitored. However, the app itself does not tell people that it is monitoring them.

I appreciate that an abuser can also just read a public Twitter feed, but this app potentially makes that far easier.