The offline solution to online hate

Anti-social behaviour offline is understood to have complicated and overlapping causes, and equally protean solutions. Online hate must be treated not just as a problem for tech companies and the legal sector, but as a symptom of deeper societal problems that are everyone’s responsibility, argues Jamie Bartlett.

Not all tech problems have a tech answer. In fact, some tech problems aren’t really tech problems at all. This is the case with online hate, which has become something of a public obsession, bordering on a moral panic. I’m not denying there is a problem. Precise figures are hard to obtain, but there has almost certainly been a proliferation of nastiness online. Some of it is misogynist, some is anti-Islamic, some is anti-Semitic, and some just plain threatening. While the worst material is found on private forums and chat rooms, most of the coverage of the problem relates to the big social media platforms.

The result has been growing demands for more to be done in response. The mistake, however, is to assume that because this hate manifests online, the answers are to be found there too. In 2017, a Home Affairs Select Committee reported that social media companies were not doing enough to deal with hateful content on their sites. The Committee suggested that the big tech firms needed to be more proactive, and that tougher laws were required. This twin approach is the common response to the problem of online hate. But I’m not sure either is quite right.

Social media platforms are not directly legally responsible for things posted on their sites. For example, the EU’s 2000 e-Commerce Directive judges them to be ‘mere conduits’ of content, more like a postal service than a newspaper. Like it or not, without this law much of the commercial Internet, from ISPs to social media, wouldn’t exist – at least not in its current form. If social media platforms were liable for content – like, say, a newspaper – they would have to check everything before it was posted in case it was libellous, a copyright breach, hate speech, and so on. The sheer scale of the task – over half a million Facebook posts are uploaded every minute of every day – would put the lot of them out of business within days.

Most of the large social media platforms do have certain measures in place. They all have terms and conditions you’ve probably never read, which include bans on hate speech, threats, and extremist content. Facebook, for instance, expects other users to report material that falls foul of these terms and conditions; each report is then referred to a human ‘content manager’, who looks at it and decides whether it should be removed. The company probably receives thousands of these reports every day, and is believed to have 7,500 content managers working on this around the world.

Perhaps more could be done, but it’s not simply a question of numbers. One person’s hateful content is another’s reasonable criticism. Having conducted several studies into various forms of online cruelty – including Islamophobia and misogyny – I can attest that the line is extremely difficult to draw. Language is full of grey areas, which is one reason why this is still (for now at least) a job for humans, not algorithms.

The legal route might be even less successful. For the last couple of years there has been a concerted effort to prosecute, and increase the legal penalties for, people posting nasty things online. In 2015, there were 1,425 convictions under section 127 of the Communications Act 2003. To further toughen the legal response, earlier this year the Crown Prosecution Service (CPS) said it would treat offline and online hate crime the same.

I don’t have a problem with the principle, and the CPS is generally quite sensible. However, this new effort will have some serious and predictable consequences. The CPS advice on all types of hate crime – offline or online – is that an incident should be treated as a hate crime if the victim or a third party perceives it to be one. This is where context matters: who the perpetrator was, what they knew about the victim, and so on. But much of this vital context is missing with anonymous trolling or digital bullying. The victim of any ‘ill-will, spite, contempt’ would, I imagine, tend to view it as a hate crime by default.

The inevitable avalanche of reports will overwhelm an already very stretched police force. Everyone reporting everyone, constantly. Even more problematic is the fact that much online hate is published overseas, outside the jurisdiction of the relevant police force. I predict that in the next few months, some independent committee will say the police aren’t doing enough.

The only result of all this will be a self-created enforcement crisis. Trust in the police will suffer, something we can ill afford.

I don’t mean to absolve companies or the police of the duties they do have. Social media platforms are right to get rid of illegal content when they are informed about it – and perhaps they could streamline that process a little more. But they cannot reasonably be expected to find every case of hate crime, nor to proactively seek it all out. And there are some sorts of hateful content that do deserve the full weight of the law. But the truth is that neither law nor tech will rid us of the problem.

Both of these approaches are in some sense lazy: a superficial effort to deal with a human problem through technocratic means. The only answer is a long-term, hard slog: the task of teaching society to be decent. The task of educating young people about the responsibilities of life online and what it’s like to get bullied. The task of parents raising their children to understand the value of civility – or in some cases, children teaching their parents.

After all, if offline and online crime are the same, perhaps the root causes are similar too: decades of research have found that anti-social behaviour offline is driven by complicated and overlapping causes, including poor parental supervision, low school achievement, anti-social parents, low family income, and anti-social peers. In other words, deep-rooted social problems that are far more complex than even the most sophisticated algorithm or CPS guidelines could ever fix. Until we deal with these issues, online bullying, hate, and cruelty will continue to exist, and even the best-intentioned law- or tech-led approach will not solve it.