All over the web – from Facebook to the comments section of your favourite online newspaper – it’s not hard to find examples of people ganging up against one another.
Many individuals know only too well the feeling of a hateful mob bearing down on them via social media. Often, it is women and members of minority groups that are targeted.
But when neo-Nazi, white supremacist website the Daily Stormer was booted from various web services – including Google, GoDaddy and Cloudflare – in August, some members of the far-right complained that another kind of “mob rule” had prevailed.
Has the internet spiralled into an angry mess of tribes fighting with one another? The founding dream that heralded the arrival of the World Wide Web was very different, notes Lucas Dixon, chief scientist at Google’s Jigsaw project. “We used to fantasise that the internet would be a kind of utopia,” he says. While it did help to bring together people thousands of miles apart, the consequences are sometimes distinctly unfriendly. “Unfortunately, discussions have a tendency to turn pretty bitter,” he acknowledges.
Dixon has experienced it himself. After hosting a Reddit thread for victims of online harassment, he and his team were themselves harassed – some individuals sent malicious messages to their bosses at Google. “People were trying to get us fired,” he recalls.
With this deflating state of affairs before us, many are now asking what can be done to improve online interactions. How much must society change to achieve this? And could technology help crack down on the worst offenders?
Jigsaw, for one, is developing machine-learning algorithms to get better at flagging abusive exchanges between web users. The technology is far from infallible, but it's already in use at many outlets – including the New York Times.
Dixon points out that, by using Jigsaw's technology to assist human moderators, the Times has been able to activate comments on 25 per cent of its news articles – up from 10 per cent.
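To give a rough sense of how this kind of tool slots into a moderation workflow, here is a minimal Python sketch of a "score, then triage" loop in the spirit of Jigsaw's Perspective API. The endpoint, the TOXICITY attribute and the 0.8 threshold are assumptions made for illustration; this is not a description of the Times's actual pipeline.

```python
# A minimal sketch, not the Times's pipeline: score each comment with a
# toxicity model, then route only the risky ones to human moderators.
# The endpoint, attribute name and threshold below are illustrative assumptions.
import requests

API_KEY = "YOUR_API_KEY"  # hypothetical placeholder
ANALYZE_URL = (
    "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
    f"?key={API_KEY}"
)

def toxicity_score(comment_text):
    """Ask the scoring service how likely a comment is to be abusive (0 to 1)."""
    payload = {
        "comment": {"text": comment_text},
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(ANALYZE_URL, json=payload, timeout=10)
    response.raise_for_status()
    scores = response.json()
    return scores["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

def triage(comments, flag_threshold=0.8):
    """Split comments into auto-approved and flagged-for-human-review piles."""
    auto_ok, needs_review = [], []
    for text in comments:
        if toxicity_score(text) >= flag_threshold:
            needs_review.append(text)  # a human moderator makes the final call
        else:
            auto_ok.append(text)       # published without manual review
    return auto_ok, needs_review
```

The division of labour is the point: the model waves through the bulk of benign comments and surfaces only the small slice that genuinely needs a human decision, which is what lets moderators cover many more articles.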
While many of us enjoy the privilege of being able to express ourselves relatively freely online, that privilege can be forcibly taken away. Knocking someone's website offline via a distributed denial of service (DDoS) attack has become a powerful mode of protest – not least because it is so easy to do.
“It’s a little bit like smashing a shop window at this point,” says John Graham-Cumming, chief technology officer at Cloudflare. “You can make that protest fairly cheaply and it can make a lot of noise.”
In the last six months, Cloudflare has clocked a DDoS attack against one of its clients every 40 minutes.
Such attacks have, of course, been used by factions against one another – including, recently, to target the Daily Stormer. Last year, Anonymous took aim at the KKK. And in 2015, US sexual health body Planned Parenthood said its website was DDoS’d by “anti-abortion extremists”.
It doesn’t really matter what side of a debate you’re on – if enough of a backlash builds up against you, tools are available to frustrate your access to the web itself.
And Graham-Cumming notes that DDoS techniques are getting more sophisticated. It used to be that an attacker would just flood an IP address with random packets of data, but those can be easily filtered out to defend against an onslaught.
Now, he says, it’s common to see requests at the HTTP level, so the traffic looks like genuine web browsing and has to be processed by the target’s web server, effectively evading some DDoS defences.
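To make the distinction concrete, here is a minimal Python sketch of one common application-layer defence: a per-IP sliding-window rate limiter. It is an illustration under assumed numbers (the ten-second window and 100-request cap are arbitrary), not a stand-in for Cloudflare's actual mitigations.

```python
# A minimal sketch of an application-layer defence: throttle any IP address
# that sends too many HTTP requests within a short window. The window size
# and request cap are illustrative assumptions, not real-world tuning.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10    # how far back to look
MAX_REQUESTS = 100     # requests allowed per IP within that window

_recent = defaultdict(deque)  # ip -> timestamps of its recent requests

def allow_request(ip, now=None):
    """Return True to serve the request, False to throttle it."""
    now = time.time() if now is None else now
    window = _recent[ip]
    # Forget requests that have aged out of the window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_REQUESTS:
        return False   # this address is sending more than a browser plausibly would
    window.append(now)
    return True
```

The contrast with the older style of attack is the point: a flood of random packets can be dropped before the application ever sees it, but well-formed HTTP requests force the server, or something sitting in front of it, to make a per-request judgement like this one.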
“Free speech” crosses a line for most people when it tips into hate speech or terrorist propaganda, for example. While laws exist to curb this sort of material in many jurisdictions, exactly how and when they are applied to online content is often unclear.
Social media sites have done a lot to restrict the proliferation of material spread by Isis, for instance, says Charlie Winter at the International Centre for the Study of Radicalisation and Political Violence at King’s College London.
But that material is still out there.
“Right now, there are over 300 channels on Telegram devoted exclusively to multiplying the activities of the official Islamic State mouthpiece, Nashir,” he explains. “Each of these channels is very easy to identify, yet remain active nonetheless.” Stamping out Isis’s messaging completely is impossible, he argues.
One person who says he isn’t surprised by the nature of today’s web is Jamie Bartlett, author of Radicals: Outsiders Changing the World.
He points out that extremist groups, for instance, have always taken to new technology with gusto as they search for ways to expand their audience – and the web gives fringe voices of all kinds a reach they never had before.
The internet, he argues, “smashes down the centre-left, centre-right consensus”.
For Dixon, a good way to tackle the antagonism and vitriol that often accompanies online “debates” is to create spaces in which abuse is carefully checked and where people – whatever their opinions – feel comfortable engaging with others. That’s not always easy, especially when certain opinions are so abhorrent to some, but the alternative may simply be endless strife.
In a similar spirit, Bartlett suggests that mainstream political parties will perhaps have to adopt some of the ideas and techniques of their counterparts on the fringe if they want to remain relevant. “That’s essentially how you prevent violent revolutions,” he says.
And that, probably, is what is ultimately at stake here. A democratic consensus suggests we should be able to co-exist, no matter how diverse the views in society may be. Perhaps people’s declining confidence in that idea is what is playing out online. Are we simply waiting for whatever tribe or mob becomes dominant to decide our fate?
The dystopic web has certainly thrown down a gauntlet. And for Bartlett, picking it up means that democracy itself will have to change in order to prevail.
“Democracy has to change quite radically,” he says. “It hasn’t changed much in 150 years, and everything else around it has.”
Lucas Dixon, Charlie Winter and John Graham-Cumming will be speaking at WIRED Security 2017.
This article was originally published by WIRED UK