by Martin Moore
We are at a peculiar moment when governments – democratic and authoritarian alike – are itching to regulate and legislate against the major tech platforms. In the UK in April, Jeremy Hunt gave social media companies an ultimatum: better protect children or face new laws.
His threat followed similar ones by Matt Hancock, Theresa May, and before her David Cameron. And, in the same month as Hunt’s ultimatum, Facebook’s Mark Zuckerberg was hauled in front of Congress for two days of questioning. “Congress is good at two things,” Republican Congressman Billy Long said then, “doing nothing, and overreacting. So far, we’ve done nothing on Facebook… [now] We’re getting ready to overreact.”
Still, if the US and UK governments are getting ready, many other governments have already begun. Their reactions to date do not fill one with confidence. See, for example, how many governments have responded to the phenomenon labelled ‘fake news’.
In Germany, the Network Enforcement Act – the ‘NetzDG’ law – was voted through last year, requiring large social media platforms to remove illegal content within a short time period or risk fines of up to €50 million. It sounds outwardly straightforward, but in practice is anything but. It puts the onus on these commercial companies to decide – from a lengthy list – what is or is not ‘illegal content’ and then remove it, without reference to the courts and without providing a right of appeal.
The German law is, as Human Rights Watch has said, ‘vague, overbroad, and turns private companies into overzealous censors to avoid steep fines, leaving users with no judicial oversight or right to appeal.’ Other countries are rushing to pass their own fake news laws, from Russia to the Philippines, from Kenya to Malaysia.
Nor is fake news the only policy area where governments are looking to respond. They are also considering how to deal with the tech giants’ influence on elections, on cyber bullying and harassment, on digital fraud, on identity theft, on systematic privacy breaches, on encryption… the list goes on.
Governments – having suddenly woken up to the power of the tech giants – are rushing to assert control. In their rush they are doing things that will only partially address the issues and are likely to do more harm than good. This is certainly not to suggest these dominant platforms are problem-free – indeed the problems are legion – but rather that democratic governments have misunderstood their nature and extent and are acting precipitately and myopically.
The result will, at best, be poor regulation and legislation that is difficult to administer. At worst it will enhance the existing problems or land us with bigger new ones.
Our current digital disruption is about much more than ‘fake news’, election interference, or social media harassment. It represents a fundamental shift in the way we communicate and interact with one another – in how we learn about and navigate our world, and in how we define our identities and who we are. If we are to respond successfully to this disruption we first need to acknowledge the scale and profundity of this shift.
To do this we need to better understand the power of the platforms and from where they derive that power. We need to figure out ways to address platform dominance without jeopardising many of the very real benefits that they bring.
And we need to develop a vision of what we think the future digital world ought to look like in five, 10, or 20 years’ time. Governments will need to respond, but should not rush to legislate before even working out in which direction they wish to head.
Dr Martin Moore is director of the Centre for the Study of Media, Communication and Power at King’s College London, and joint editor, with Damian Tambini, of Digital Dominance: The Power of Google, Amazon, Facebook and Apple – published in June by Oxford University Press.
You can follow @martinjemoore on Twitter
This article first appeared in the spring 2018 issue of Society Now.