It’s now 22 years since Section 230 of the Communications Decency Act came into force: a landmark piece of legislation even in 1996. It meant that those hosting information on the then-nascent internet were not to be treated as the “publisher or speaker” of any of that information. This made perfect sense at the time, when the internet was still a small curiosity and it was fair to assume that small, practically amateur companies could not realistically scan all the content they were hosting (early giants like AOL seemed a different story even then, but that is by the by now).
Online content – and the avenues through which it is created, uploaded and, perhaps most importantly, disseminated – has only grown since then. We are past Web 2.0, with its blogs and microblogs, and into the Internet of Things – and yet we are plagued online by some very pre-digital problems: child abuse, terror and defamation (admittedly in new forms, such as revenge porn). The tech industry’s answer has usually been to shrug when confronted with this sort of material. There’s the free speech argument, and then there’s the legal backing which legislation like Section 230 (and its national and regional equivalents) offers.
British Prime Minister Theresa May’s speech at Davos stands in stark contrast to this civil libertarian ethos. This is not a shock: May views the internet as another sphere requiring total government control and regulation, a mere extension of the offline world. It is not a new view, and it’s one rooted both in governmental over-reach and in a complete lack of technical knowledge. Consider Amber Rudd’s desire to combat the evils of encryption without any understanding of how it works: there’s a rank arrogance to the professional politician which is matched (fittingly) only by that of the big tech company.
The collision course between the two has long been set, but it’s increasingly clear that public opinion has turned against the laissez-faire argument. Facebook, Twitter et al. long assumed that the utility they offered would trump governmental arguments that they should be regulated more heavily. However, a slew of stories about objectionable content – terror-related and borderline child-abuse material on YouTube, or targeted ads used to stoke hatred on Facebook – has steadily eroded that position.
And that’s bad news for users. The current scheme is broken, admittedly (content providers seem to care far more about post-scandal PR than about actively solving major structural problems), but heavy government regulation is concerning at best. At worst, we can expect a quiet creep of illiberal regulations under the guise of national security. Lest this seem too conspiratorial, let’s not forget that in the wake of the Snowden revelations, the British government chose to consolidate its mass surveillance powers. By failing to self-police, big tech has eroded its own base of popular support, handing an open goal to governments which don’t understand technology and which seek more power in the name of national security.
Content providers and platforms are in no way victims here – they are equally complicit in what amounts to a rising risk to their users. If they truly wish to avoid over-regulation, they need to move beyond measures patently designed to improve their appearance. Consider YouTube’s plans to fund counter-terror videos: does anyone really believe this will stop someone moving down the path towards radicalisation? A greater emphasis on moderation – going beyond horribly underpaid contractors (with no support) or crude algorithms – may be the only way to save them, and us, from a future which looks a little Orwellian.