This research aims to identify the typical patterns of online hate speech and argues that it is time to start regulating it. Online hate speech involves online harassment, invective, threats, and repeated, often group-led abuse that goes beyond mere conversation disruption.
In real life (IRL), hate speech is often banned and regulated through hate crime legislation. This research argues, therefore, that its online counterpart is in need of regulation too. The Internet should not be a space where people can harass or threaten others without fear of punishment: that goes beyond our right to freedom of speech. In overlooking the importance of a safe, constructive online environment, we are not only doing hate speech victims a disservice – we are doing a disservice to our community as a whole.
In order to create a checklist for potential hate speech and to detect online behaviour that does not meet online or IRL legal standards, I analysed 500 tweets with one specific hashtag (out of the 2,000 tweets I found in total): #McCann. #McCann groups together all matters related to the disappearance of the British child Madeleine McCann, who went missing from her parents’ holiday home in Portugal in 2007. #McCann provides a wealth of material, with users posting tweets containing the hashtag every day, blaming Madeleine’s parents for her disappearance (or, sometimes, for her death). Many of these tweets are a blend of harassment, defamation and insults that often results in conspiracy theories and "fake news".
Through an experiment combining metadata and discourse analysis, I observed these conversations in order to provide recommendations that Internet service providers and content providers can use to identify and regulate behaviour that counts as online hate speech.
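To give a concrete sense of what the metadata side of such an analysis can look like, the sketch below summarises a collected set of tweets before any close reading of the content. It is illustrative only: the file name, column names and keywords are placeholder assumptions, not the actual dataset or method used in this research.

```python
# Minimal, illustrative sketch of summarising tweet metadata for a hashtag study.
# Assumes a CSV export of collected #McCann tweets with hypothetical columns:
# tweet_id, user, created_at, text.
import pandas as pd

tweets = pd.read_csv("mccann_tweets.csv", parse_dates=["created_at"])

# How concentrated is the conversation? A few "influencers" vs. many occasional users.
tweets_per_user = tweets["user"].value_counts()
print("Top 10 most active accounts:")
print(tweets_per_user.head(10))

# How intense is the conversation over time?
tweets_per_day = tweets.set_index("created_at").resample("D").size()
print("\nTweets per day:")
print(tweets_per_day)

# A rough starting point for the discourse analysis: pull out tweets that
# mention the parents directly, for closer manual reading.
parent_mentions = tweets[tweets["text"].str.contains(r"\b(?:Kate|Gerry|parents)\b",
                                                     case=False, na=False)]
print(f"\n{len(parent_mentions)} of {len(tweets)} tweets mention the parents")
```

A summary like this only points at where to look; the discourse analysis itself still requires reading the flagged tweets in their conversational context.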
Online hate speech, especially when it is related to high profile criminal cases, not only lowers the tone of public debate, but also undermines the foundations of the Criminal Justice System. It can jeopardise pillars of democracy such as the right to a fair trial, and it can bypass the media by spreading fabricated information that influences public decision-making.
A careful balancing act between freedom of expression and the protection of hate speech victims is needed to decide whether some users should be banned or reported to higher authorities. Below, I outline five indicators of online hate speech in high profile criminal cases.
Online Hate Speech - The #McCann Case
#McCann is the perfect example of how online hate speech is not the act of a few. Behind the McCann hashtag there is a fully-fledged community of “influencers” and “trollowers” who talk to each other daily. Given the number of users involved, it is hard to dismiss them as a handful of disenfranchised psychopaths - it makes more sense to think of online hate speech on high profile criminal cases as a social ailment, a way for a fringe of society to come together and form a community.
As researchers, therefore, we shouldn't focus on the users or authors of the tweets, but on the actual content produced. The elements of a typical hate speech script using #McCann are best observed in the discourse surrounding the parents: a delicate context; frequent, serious assumptions and conjectures expressed online in a way that makes the content easy to find, read and retrieve; and a desire to silence everyone who holds a different opinion.
Five Indicators Of Online Hate Speech
In the course of my analysis, I identified five aspects typical of the conversation under #McCann that together constitute a checklist for describing online hate speech on high profile criminal cases. Although these indicators were created using just one hashtag as a case study, from observing the behaviour of the users I studied I can confidently say that the same scripts are applied to other high profile criminal cases, and that these indicators can therefore also be relevant in other scenarios.
1. The context: If we consider that Kate McCann (the mother) claimed to have felt “violated” by the ongoing accusations and conversations happening online and in the press, and if we consider the McCanns’ status as parents of a missing child, the accusations they receive (whether they see them or not) are cruel and harassing.
2. The content: Considering the trial by media the McCanns were subject to, it is fair to say that the ongoing, daily, hourly back-and-forth over their guilt or innocence, the accusations of lying and of murder, would have negative effects on any reasonable person. Therefore, it is necessary to have measures in place to limit or to contain conversations of this kind.
3. The intensity: While conducting this research, I collected over 2,000 tweets in less than two weeks, during a period in which the McCann case was not dominating the press. The sheer number of tweets containing the hashtag shows the intensity of the conversation, which, going back to the first point, is consistent with harassment and taunting.
4. The medium: Lawmakers and researchers have agreed that, because these statements are being made online and not in passing, they constitute a publication and are therefore comparable to defamatory press statements.
5. The law: Considering that journalists and other individuals who have published statements very similar to the ones made by users of the McCann hashtag have been charged with defamation and harassment, one has to wonder whether the same could be applied here. Additionally, it is necessary to remember that, were the case to be solved, public opinion would already be saturated with opinions and conspiracy theories about the McCanns, possibly preventing justice from being done.
Recommendations
The five indicators of online hate speech listed above could act as a guideline for Internet service providers to apply a degree of regulation, given the negative and hurtful effects that online hate speech, especially when related to high profile criminal cases, can have on its victims, and the danger it poses to democracy and the justice system.
My initial recommendations would combine human engagement with machine algorithms. Indeed, taken out of context, some of the tweets shared by users under #McCann (a script possibly replicated in other controversial, high-profile criminal cases) might not be seen as harassment. However, as mentioned above, the vulnerability of hate speech targets has to be seen in context and relative to the circumstances they find themselves in.
Therefore, to carefully balance freedom of expression and the protection of hate speech victims (by applying the five indicators outlined above), I recommend either building further nuance into machine algorithms or pairing them with moderation by legal and tech experts, in order to judge whether certain users should merely be banned or reported to higher authorities.
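As a purely illustrative sketch of how the five indicators could feed such a machine-plus-human pipeline, the snippet below scores tweets against the checklist and always routes borderline content to human reviewers rather than acting automatically. The keyword list, thresholds and data fields are my own placeholder assumptions, not part of the original research or any provider's actual system.

```python
# Illustrative sketch only: a crude first-pass filter that scores tweets against
# the five indicators and forwards borderline content to human moderators.
# Thresholds, keyword lists and field names are assumptions, not a tested model.
from dataclasses import dataclass

ACCUSATORY_TERMS = {"liar", "lying", "murder", "killed", "cover-up", "guilty"}

@dataclass
class Tweet:
    text: str
    hashtag_volume_per_day: int   # intensity of the wider conversation (indicator 3)
    case_is_active: bool          # ongoing investigation or trial (indicators 1 and 5)

def hate_speech_score(tweet: Tweet) -> int:
    """Count how many of the five indicators a tweet appears to trigger."""
    text = tweet.text.lower()
    score = 0
    if tweet.case_is_active:                            # 1. context: vulnerable targets
        score += 1
    if any(term in text for term in ACCUSATORY_TERMS):  # 2. content: accusations
        score += 1
    if tweet.hashtag_volume_per_day > 100:              # 3. intensity: sustained pile-on
        score += 1
    score += 1                                          # 4. medium: it is published online
    if tweet.case_is_active and score >= 3:             # 5. law: potential prejudice/defamation
        score += 1
    return score

def triage(tweet: Tweet) -> str:
    """Route tweets: never auto-ban; escalate to human (legal/tech) review instead."""
    score = hate_speech_score(tweet)
    if score >= 4:
        return "escalate to human moderators"
    if score >= 2:
        return "queue for review with full conversation context"
    return "no action"

# Example usage with a made-up tweet:
example = Tweet("They are lying about what happened #McCann",
                hashtag_volume_per_day=150, case_is_active=True)
print(triage(example))   # -> "escalate to human moderators"
```

The point of such a sketch is the division of labour, not the scoring itself: the algorithm narrows down what humans should look at, while the decision to ban or report remains with people who can weigh context.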
In case readers have any doubts about the damage online hate speech can cause to families affected by a tragedy such as the disappearance of their child, this video, an extract from a recent Netflix documentary, should provide some insight.
Carolina Are is a fellow under the RIPE Academic Cooperation Initiative (RACI). She presented her research at the RIPE 78 Meeting in Reykjavik.
Comments (2)
Sascha Luck
What is the agenda behind publishing a paper calling for the regulation of speech in the NCC member's update and RIPE Labs? What is the applicability to the management of internet resources and internet engineering?
Mirjam Kühne
Hi Sascha, Carolina presented her research as a RACI fellow at RIPE 79. We always encourage RACI fellows to also write a RIPE Labs article about their research. The views expressed by the authors on RIPE Labs do not necessarily reflect the views of the RIPE NCC. The scope of this blog is quite broad and is not restricted to Internet resource management. And we always welcome discussion, of course.