Boston Police Department deploying social media surveillance

The Boston Police Department (BPD) has implemented an algorithmic monitoring tool developed by Geofeedia, a social media intelligence platform, to detect potential threats on social media. The software allows law enforcement agencies to trace social media posts and associate them with geographic locations, a capability that has reportedly been used by both police departments and private firms to target political activists of all sorts (Fang 2016). For instance, it was purportedly deployed against protestors in Baltimore during the pilot phase of the project (Brandom 2016). According to a report by the American Civil Liberties Union, these revelations prompted major social media companies to cut off Geofeedia’s access to their data (Cagle 2016).

On top of that, in 2016 the American Civil Liberties Union of Massachusetts (ACLUM) discovered that between 2014 and 2016 the BPD had been using a set of keywords to identify misconduct and discriminatory behavior online, without notifying the City Council (Asghar 2018; Busquets 2016). This was shown in a report that analyzed an array of official documents (ACLU 2016). Flagged terms included “#MuslimLivesMatter” and “ummah” (which means community in Arabic). The tool has been criticized for biased outputs that have particularly affected the Muslim community (Durkin 2018).

After the release of the above-mentioned report, the ACLU indicated that the BPD was surveilling citizens on Facebook, Instagram, Twitter, and other social media networks. The Boston City Council held a hearing to discuss the concerns raised by the system; however, it never fully clarified how the system defines potential threats or addresses bias (Privacy SOS 2018).

To classify social media posts, Geofeedia gauges the “sentiment” of each post in order to predict public manifestations of unrest or violence and their intensity (Knefel 2015). When asked how this was accomplished, Lee Guthman (head of business development at the company) replied:

“What it would do is it would take all the words in the phrase, and it attributes positive and negative points to them, and then proximity of words to certain words” (ibid.).
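
Guthman’s description corresponds to a simple lexicon-based sentiment scorer. As a rough illustration of how such a system might work, the sketch below assigns positive and negative points to words from a small lexicon and boosts a word’s weight when it appears near an intensifier. The word lists, weights, and proximity rule here are hypothetical choices for illustration only; they are not Geofeedia’s actual lexicon or scoring logic, which remain undisclosed.

```python
# Minimal sketch of a lexicon-based sentiment scorer with a proximity rule,
# loosely following Guthman's description. The lexicon, weights, and
# intensifier handling are hypothetical and chosen only for illustration.

import re

# Hypothetical word scores: positive words add points, negative words subtract.
LEXICON = {
    "peaceful": +2, "support": +1, "celebrate": +2,
    "riot": -3, "violence": -3, "threat": -2, "angry": -1,
}

# Hypothetical intensifiers: a scored word near one of these counts more.
INTENSIFIERS = {"very", "extremely", "so"}
PROXIMITY_WINDOW = 2      # how many preceding tokens to check for an intensifier
INTENSIFIER_BOOST = 1.5

def score_post(text: str) -> float:
    """Return a crude sentiment score: the sum of word scores, with a word's
    score boosted when an intensifier appears shortly before it."""
    tokens = re.findall(r"[a-z#@']+", text.lower())
    total = 0.0
    for i, token in enumerate(tokens):
        if token not in LEXICON:
            continue
        weight = LEXICON[token]
        # Proximity rule: look back a few tokens for an intensifier.
        window = tokens[max(0, i - PROXIMITY_WINDOW):i]
        if any(w in INTENSIFIERS for w in window):
            weight *= INTENSIFIER_BOOST
        total += weight
    return total

if __name__ == "__main__":
    for post in ["We will hold a peaceful march downtown",
                 "People are extremely angry, this could turn into a riot"]:
        print(f"{score_post(post):+.1f}  {post}")
```

Even this toy version makes the central problem visible: whoever chooses the word lists and their weights effectively decides whose vocabulary registers as “negative,” and those choices are invisible to the people being scored.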

Evidently, the criteria used to sort words into the categories of “bad” and “good” have remained undisclosed to the public. Not only is this lack of transparency disturbing, but the algorithm’s susceptibility to discrimination should sound alarms. Although the software relies only on public data, that data often reflects structural biases within society, which means the algorithm’s output can result in discrimination against certain groups, such as Muslim citizens (ibid.). Therefore, the advantages of these surveillance tools appear uncertain, while their potential repercussions clearly threaten to chill speech, silence social movements, and enable racial profiling.