The Dark Side of Big Data

To understand the true influence of social media on democracy, Jonathan Sebire argues we must delve into the murky world of ‘dark social’.

Democracy dies in darkness, or so the saying goes. But it is what is given life in the shadows that should be of most interest to those seeking to understand the influence of social media and other digital platforms on trust, perception and modern, functional democracy.

The alleged influence of social media on recent political contests, the rise of fake news and the erosion of trust in a post-truth world continue to be subjects of much debate, even amongst some of the companies initially credited with spearheading the social campaigns that delivered the Vote Leave and Trump successes. However, what is irrefutable is that the rise of defined, repeated tactics and organised playbooks has seen the weaponisation of ‘dark social’ become the new norm for political campaigns. Brands and media owners are palpably interested in seeing how they can use the same tactics.

‘Dark social’ may sound nefarious, but it is simply a term used by data analysts to describe anything that can’t be seen publicly on a platform. This could be the result of numerous factors, from privacy settings to the masking effect of ad platforms, which means that paid social is neither archived, searchable nor visible outside of target audiences.

Social media platforms break down broadly into two categories: public and private. Platforms like Twitter can generally be classified as public (whilst you can have private accounts, the vast majority of users keep their default settings public). On platforms like Facebook, by contrast, the majority of users employ privacy settings that limit their interactions and conversations to curated private networks. This creates very different behavioural patterns, both amongst ordinary users and those looking to influence them, and also vastly divergent levels of resistance to the proliferation of content.

When contentious or false information is spread about someone on a public platform, it is much more likely that a) they will see it and b) they can respond and engage in debate with the original poster. This can, sadly, often lead to trolling and highly abusive arguments, but it does provide a system of checks and balances that at least slows the proliferation of false content. In the UK, there are good examples of Twitter users self-policing against fake news imagery and stories from the London Riots and the London Bridge, Manchester and Westminster terrorist attacks.

However, on private social networks, such checks and balances are either minimal or absent, with few people able to openly challenge your content or ideas. Several factors feed into this: platform algorithms that prioritise content similar to what you’ve previously engaged with (something the alt-right have become expert at gaming to their advantage on both public and private platforms); high barriers to adding ‘friends’, which bias users towards trusting and believing content introduced by those friends; and the fact that ads can be served to specific groups, or exclude others, in ways that minimise the likelihood of resistance or disagreement.

The result is the perfect conduit for the dissemination of false, distorted or sensationalist content. Without the checks and balances of public scrutiny or countervailing views, content passes unquestioned and unchallenged. In such an environment, hard ‘truths’ can be invented and adopted: from Sandy Hook false flag operations to Pizzagate, and from Holocaust denial to shooters on Oxford Street.

Purveyors of fake news sit on all sides of the political divide. But the ratios of private vs. public social engagement by certain campaigns indicate that one subset of campaigners is developing this expertise much faster than legislators or the social networks themselves can guard against it.

Looking at examples such as Trump 2016 or Vote Leave, Signify’s research indicates that conversation aligned to the extreme Right has a volume ratio of between 10:1 and 20:1 in favour of dark social conversation, whilst topics broadly aligned to the Centre and Left have a sharing ratio of roughly 1:1 between private and public networks. This is the result of a defined and repeated playbook that uses private social networks as a recruitment base, bringing people in by sharing large volumes of content designed to increase followings, disinform, discredit, stoke fear and shift opinion towards a desired viewpoint.
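The ratio described above is straightforward to compute once engagements have been classified as public or private. The sketch below illustrates the idea with invented figures; the function name, thresholds and numbers are assumptions for demonstration only, not Signify’s actual methodology.

```python
# Illustrative sketch of a "dark social" engagement ratio.
# All figures below are hypothetical, not real campaign data.

def dark_social_ratio(private_engagements: int, public_engagements: int) -> float:
    """Return the private:public engagement ratio as a single number."""
    if public_engagements == 0:
        # All engagement is dark social; the ratio is unbounded.
        return float("inf")
    return private_engagements / public_engagements

# Hypothetical engagement counts for two topics
topics = {
    "topic_a": {"private": 150_000, "public": 10_000},  # skews heavily dark social
    "topic_b": {"private": 12_000, "public": 11_500},   # shared roughly evenly
}

for name, counts in topics.items():
    ratio = dark_social_ratio(counts["private"], counts["public"])
    print(f"{name}: {ratio:.1f}:1 private vs public")
```

A campaign analyst would feed real classified engagement counts into a metric like this; a topic sitting far above 1:1 is circulating mostly out of public view.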

These tactics are not the preserve of the Right. Source sites that trade in alternative news from the hard Left employ similar tactics and display a similar ratio disparity in favour of dark social. However, the identifiable volumes to date have been far smaller, seemingly due to a lack of structured organisation and amplification networks, and to far smaller paid social input.

What does this mean in terms of influence? Whilst ideological entrenchment is not new – after all, it has always been possible to source literature and find groups that embrace extreme or unpopular ideas – social media has given us two new catalysts: scale and anonymity.

Alongside numerous case studies, Signify has had first-hand experience of these networks and bias ratios in action. When looking at discussions about the European Court of Justice (ECJ) for The Guardian, we identified tactics used to seed specific content from a collection of alternative news sites into private social networks. Simply put, the goal was to enrage certain sections of the population over the handling of Britain’s relationship with the ECJ during the Brexit process, and to coalesce that anger around support for Jacob Rees-Mogg. Over the period of monitoring, we saw the volume of engagement within dark social grow from a few hundred engagements, through thousands, to tens of thousands.

When Signify’s findings were published in The Guardian, the engagement with and distribution of the Guardian article on social media roughly retained the 1:1 ratio of public to private. However, within a few hours the story was lifted wholesale by the Daily Express and given a few lines to top and tail it in a pro-Leave/Rees-Mogg manner.

The behavioural difference around this content was marked. We saw the same networks and tactics Signify had observed on fringe content applied to the article. This resulted in a volume of engagement and distribution into dark social spaces 15 times larger than that on public platforms. The content was also adapted and repurposed by specific sites, which had been responsible for driving stories about Rees-Mogg and the ECJ, and which deliberately misrepresented the reality of the subject.

There are currently numerous investigations into how social media is being used to distort and misinform, particularly with a focus on Russia. But despite headline-grabbing stories about bots and the efforts of large social platforms to reform their algorithms and news flow, dark social has become a campaign reality.

Twitter bots and trolling are sensational and newsworthy, but the insidious drip-drip of misinformation between private individuals has already had a far greater impact, especially as a driver of the rise of nationalism and overt racism. Numerous campaigns in the UK, EU and US are still pouring money into dark social, and the threat that its misuse poses to functional and open democracy remains as potent as ever.