The power of law or the law of power?

Why Europe must lead the way in the governance of technology

If technology is the new governance, then the tech giants are the governors, operating without a democratic mandate. Europe must take the lead in pioneering a rules-based system in which the public interest matters, writes Marietje Schaake. Otherwise, authoritarian regimes and private companies will continue to set the standards.

A couple of years ago, I visited the Romanian parliament in Bucharest. It is the world’s heaviest building, in more ways than one. Built on top of the ruins of the old city, where 40,000 people used to live, worship and go to school, its architecture is so pompously authoritarian that it is hard to experience the place as the home of a democratically elected government.

Inside, however, was a small exhibition on the history of hunger and repression. In the 1980s, when food shortages were extreme and breadlines filled the streets, state TV showed a weekly program with Ceaușescu designed to convince Romanians of the wealth and success of the communist model. But because all the country’s food was being exported to support the national budget, and literally no real fruit was available, they painted wooden apples for décor. Back then there were no mobile phones to document the story behind the television screens. State security was so severe that archives were kept of every typewriter in the country, so that anonymously written letters could be traced back to their senders.

Fast forward to 2016, when Russian intelligence operations tried to manipulate the US presidential elections through hacking and disinformation campaigns. Many noted that there was ‘nothing new’ about using propaganda and disinformation to erode trust and drive voters into the arms of a strong leader. Indeed, until relatively recently, state media bombarded people in central and eastern Europe with official narratives. In Hungary, history is back in the form of Viktor Orbán, who preaches ‘illiberal democracy’ from the ranks of the EU’s biggest political group, while placing hundreds of media outlets under government control.

Without being free to speak, people become sceptical of everything they hear. They say things like ‘I don’t watch the news anymore’ and believe that nothing and no-one can be trusted. When people stop caring and participating, democracy has already been dealt a blow.

The question as to who controls information flows has always been political. Under repressive regimes, media and information are instruments used to strengthen power at the expense of populations. In democracies, pluralist debate and critical journalism speak truth to power and empower populations – whether via radio, TV or the internet. It is not the medium itself, but governance and values that are decisive. What distinguishes free societies from dictatorships is respect for rule of law. The power of law or the law of power – it makes all the difference.


Delusions of laissez-faire

In light of the very few positive experiences of centralized power, the excitement around the technological revolution was understandable. The internet carried the promise of disrupting monopolies of power and information. The impact soon became visible. Wikileaks published classified military footage and confidential diplomatic cables; Snowden revealed state intelligence gathering on a massive scale. Individuals could now pierce military, diplomatic and security protocols to embarrass even the most powerful states. From 2009 onwards, people the world over could see, in real-time, how police beat protesters in Iran, Tunisia, Egypt and Syria. In 1982, it had taken months for the news of the massacre of thousands of people in Hama, Syria, to reach the outside world. Today, mobile phones make us eyewitnesses to peaceful protests and police violence from Hong Kong to Venezuela.

Information is power. As technologies change the flows of information, they change relations of power. But the empowerment of individuals and the disruption of monopolies is not the whole story. Authoritarian states have also increased their control through surveillance technologies, while private companies now cater to billions, collecting data on a far greater scale than any government. We have to ask ourselves who is governing whom.

Technology can be considered an extension of agency. Governance of technologies is as important for democracy as governance by technologies. It all hinges on values. Designing for profit is unlikely to have optimal effects on democracy; automated weapons technologies or facial recognition systems will inevitably have consequences for peace and freedom.

The promise of empowerment and democratization that came with the new technologies – as if mobile phones and social media platforms would somehow make democracy go viral – may partly explain why democratic governments were reluctant to develop rules for the digital age. Given what we know about the impact of information flows on freedom, this was remarkable.

Since World War II, western democracies have successfully designed a framework of international agreements on trade, rights, war and peace, and development. These rules, and the institutions set up to apply them, have safeguarded the quality of life of populations, ensuring that standards are in place and that accountability exists. But today, that rules-based order is under pressure. In combination with the rise of nationalism, this development poses a major challenge to international cooperation, be it on climate change, immigration or digitization.

This is bad enough. But the rules-based order does not yet sufficiently cover the digital domain – despite the West’s head start. Until recently, the US was dominant in the development of tech products and digital services. Guided by self-centredness, belief in the First Amendment and a libertarian culture, tech companies and their armies of lobbyists convinced lawmakers that any regulation of ‘the internet’ would stifle innovation, and that authoritarian regimes would be inspired by regulatory steps. If only authoritarian regimes were actually inspired by democracies!

Regulation of the internet continues to be met with heavy resistance, including from civil society groups whom I admire. But are we regulating ‘the internet’, or rather ensuring that algorithmic decision-making is accountable, in the same way that we regulate for fair competition and free speech in other sectors? It is not only more helpful, but also legitimate to focus on the principles that need to be safeguarded, no matter what technological revolutions or disruptions we might experience. Democratically elected governments, based on a popular mandate, have the legitimacy to act.

Precious time has been lost that should have gone into pioneering a rules-based system that preserves the benefits of the open internet and other technologies. These technologies have created exciting opportunities that we enjoy every day – and that more people all over the world should enjoy. We need to make sure that wherever people are using the internet, their rights are protected.

The Chinese ‘alternative’

An alternative to a global, rules-based order is gaining ground and the digital domain is increasingly its frontier. The government of China believes that states should manage their own ‘national internets’ – the Westphalian model applied to the online domain. The concept of ‘cyber sovereignty’ in Chinese cybersecurity law requires that foreign firms doing business in China – and there are many – store their data on Chinese territory, making inspection a lot easier. Other parts of the law interfere with Chinese citizens’ rights to privacy and freedom of speech. China’s so-called ‘social credit systems’ score people’s behaviour, while the Uyghur communities in Xinjiang are imprisoned in high-tech surveillance camps. Minorities anywhere are often the first to experience how technologies exacerbate repression and discrimination.

We also know that the Chinese government has hyper-ambitious plans to develop artificial intelligence technologies. This makes questions of governance and the values that go into systems all the more urgent. If artificial intelligence not only benefits from, but also helps build centralized data governance, then we should think twice about idealizing it. If, conversely, we believe that the technology can be built to serve humanity and democracy, then we must prove it.

China not only has ambitions at home. Through the Belt and Road Initiative, Chinese companies harvest as much data as possible, including through the rollout of apps in India and on the African continent. These are places where rights and data often lack any legal protection at all. There are also fundamental questions about the separation between the Chinese Communist Party, the state and private companies. Huawei surveillance technologies are now used in 50 countries; no other company has a comparable market share. The question of technology governance is becoming a more and more important element of foreign policy and a key strategic question of our time. We may not see tanks rolling in the streets, but we know that people in China cannot find records online of that day in 1989 when tanks crushed people on Tiananmen Square. In diplomatic efforts too, China actively seeks to shape global norms on the basis of what it considers responsible state behaviour online. This sovereignty model does not put the rule of law, or the rights and freedoms of users, first. It is also profoundly protectionist.

Graffiti against surveillance on the wall of the British Library. Photo by Oxyman from Wikimedia Commons

Big tech and the new arbiters of truth

At the other end of the spectrum, we see the emergence of a market-powered model. Mark Zuckerberg’s infamous dictum – ‘move fast and break things’ – has long been the informal motto of Silicon Valley. And the ‘breaking things’ part has been quite a success. Many sectors have been entirely disrupted, from transport to the media, and if they have not been yet, then they will be soon: think about financial services and healthcare. Crucially, we see democracy being disrupted too.

The irony is that many of the people running the largest tech companies, despite being busy avoiding the law or staying ahead of it, effectively became regulators themselves. By designing technology for maximum efficiency, user attention or personal data-sharing, the tech giants have been in the business of governing, without appreciating that role. They certainly haven’t had a democratic mandate, nor have there been proper checks and balances to keep their power in check. Over the past two decades, companies like Facebook, Google, Twitter and Amazon have become the guardians of information flows. Others, slightly less well known but also quite powerful, such as Palantir, analyse data for commercial intelligence purposes. And there are countless other smaller companies building commercial, off-the-shelf surveillance and hacking tools for anyone with the money to buy them.

In post-communist countries, the notion of a Ministry of Information, or a Minister of Truth, invokes nightmares. As it also does in countries that were never communist. But CEOs like Zuckerberg effectively act as arbiters of truth. They are increasingly uncomfortable with these responsibilities, which is why we hear Facebook and Apple calling for regulation of content and privacy. But their companies’ profit-driven algorithms and business models still decide whether a parent worried about their child’s health finds medical advice or a YouTube video convincing them that vaccination is actually damaging for children.

Company governance without oversight allowed Cambridge Analytica to harvest personal data in order to influence election results, whether in the United Kingdom or in Kenya. Profit optimization allows conspiracies to rise above news stories in search results. Until recently – and I don’t mention this lightly – it was possible to target advertising on Facebook at categories such as ‘Jew Hater’ or ‘Hitler did nothing wrong’, and to reach people interested in ‘Joseph Goebbels’ and ‘Heinrich Himmler’.

Does paying rather than convincing now determine one’s position in the marketplace of ideas? When a post on social media receives thousands of likes, do those thumbs-ups, clicks and hearts represent bots or real people? We know the answers anecdotally, scandal by scandal. We know there should have been consequences for those responsible. But there is very little systematic transparency, let alone accountability.

Platforms rule

Platform companies decide what internet users see. But the formulas for matching data and advertisements remain largely a mystery. The algorithms running the online lives of billions are considered trade secrets. They are protected from public scrutiny and accountability in order to preserve competitive advantage.

We know that algorithms are changed constantly using new settings and machine learning. Platforms collect data in order to sell precisely-targeted advertising. But there is no systematic oversight to assess whether equality, free speech and fair competition – freedoms protected by law – are reflected in computer code and its outcomes. Even the US Department of Housing and Urban Development has taken Facebook to court, alleging that the platform allowed housing ads to be targeted at some people and withheld from others.

In short, the technologies that promised to democratize have put democracy under pressure. Instead of disrupting monopolies, they have formed monopolies of their own. The new gatekeepers – the tech oligarchs, if you will – process massive amounts of information without any form of regulation. As soon as anti-trust investigations were announced in the United States, tech companies hired more lobbyists to campaign in Congress, spending tens of millions of dollars every year. Seventy-five percent of those lobbyists had previously worked in congressional offices on issues relevant to the cases they were now lobbying against. Talking of revolving doors, the former Chairman of the US Federal Trade Commission now works for Amazon. Platforms also fund numerous thinktanks that work on tech policy, as well as many academic programs.

Because democratic governments have been reluctant to develop a rules-based order for the digital sector, authoritarian regimes and private companies set the standards. The question, therefore, is not whether we will see regulation, but who is in charge of it and what principles it is based on. In his prophetic book Code and Other Laws of Cyberspace (1999), Lawrence Lessig foresaw how technology would become the new form of governance. In an interview with Harvard Magazine in 2000, he said:

‘Ours is the age of cyberspace. It, too, has a regulator. This regulator, too, threatens liberty. But so obsessed are we with the idea that liberty means “freedom from government” that we don’t even see the regulation in this new space. We therefore don’t see the threat to liberty that this regulation presents. This regulator is code – the software and hardware that make cyberspace as it is.’

Fast forward twenty years and we increasingly see companies, both large and small, taking decisions of a governing nature, if not actually stepping into the role of governments themselves. This profoundly changes the role of the state. For example, critical infrastructure and services, such as tax collection, census-taking, healthcare and energy provision, are digitized. Infrastructures and services are not only built, but also protected by private companies. Another example: online identity. This is usually verified by a credit card company or a social media company, but hardly ever by a publicly issued identity card for the online world. Or cryptocurrency – for example Libra, the new currency proposed by Facebook. And with surveillance companies now developing attack capabilities, the monopoly of violence is slipping out of governmental hands. The growing influence of the private sector has barely been dealt with in regulatory frameworks.

All the implications of new technologies for democracy and the public interest are exacerbated by artificial intelligence. AI is not the future, it is the present. An AI application will soon become widely available that creates conversational language indistinguishable from human speech. ‘Deepfakes’ – the video equivalent of this technology – allow anyone to be made to say anything. If we thought disinformation was a problem, imagine what we are about to get into.

Protest in support of Net Neutrality. Photo by Backbone Campaign from Flickr

Europe as regulatory leader

We tend to see the abuse of tech, but we also need to look at its intended use. When I talk to computer engineers, what excites them most is the idea that artificial intelligence will produce unexpected outcomes. This means we need a public debate about how much risk we find acceptable, and not only focus on the ‘race’ for AI dominance. Take the example of the gene-edited cow in the US. It was presented a couple of years ago as a resounding success of the new technologies. Now, it turns out that its DNA included bacterial genes conferring antibiotic resistance. Perhaps this will encourage Americans and others to appreciate what, in European debates on genetic engineering, is called the ‘precautionary principle’. This has often been ridiculed and labelled as unscientific, especially in the US, but in this case the evidence came two years later.

These days it is fashionable to answer any problem to do with artificial intelligence by pointing to ‘ethics’. In Europe today, there are over a hundred and twenty ethics guidelines for AI. Many tech companies have hired ‘chief ethics officers’. But what do we really mean when we talk about ethics? Can we agree on norms and, even more importantly, on what happens when they are violated? More than philosophical discussions, we need to focus on preventing the worst imaginable outcomes and on the principles that must be safeguarded by law. This discussion will have to include questions of meaningful access to information, oversight, and research in the public interest. Here, the European Union has made steps in the right direction. We have the General Data Protection Regulation, but also net neutrality, competition policy, and a cyber-security bill that places the public interest, the open internet, at the heart of what needs to be defended. The Charter of Fundamental Rights of the EU applies to all regulations, including those on tech, and upholds, for example, the right to privacy.

The EU has started to show that technologies, even if built in jurisdictions on the other side of the world, can be made to respect rules that protect people and their rights. The fact that Microsoft has adopted the GDPR as a global standard shows how companies make trade-offs when one significant jurisdiction – in this case the EU – pushes them to comply with standards. It may be easier for a company to have just one standard globally rather than different ones in each area where they work.

But while Europe has made steps in the right direction, we still see a lot of piecemeal approaches and clear fragmentation of regulatory initiatives. There is too little joined-up thinking between, let’s say, directives on copyright, data protection and AI. Proposals to regulate online content also demonstrate this. Currently, the big online platforms enjoy exemptions from liability for content shared on their platforms. But the pressure to change this is growing fast. In Germany, online platforms must remove disinformation and content violating existing laws or face fines. In the UK, there is a new proposal that companies must take down what are called ‘online harms’ – covering everything from child abuse to disinformation. In France, there are plans for the oversight of algorithms, which would give media regulators more authority.

But in countries where trust in government is low, such proposals would be seen as attempts at censorship. Some fear that the proposals in Germany, the UK and France lack independent oversight and outsource too much responsibility for self-regulation to the companies concerned. High fines and other sanctions may cause them to over-censor, without the possibility of redress.

All these initiatives focus on the urgent question of how to apply existing laws online. The notion that laws should apply online as they do offline is underscored by the United Nations, and it applies to any imaginable law. It is a good starting point for European efforts. But the principle won’t be realized if we get twenty-eight different interpretations. Instead, we need to work towards agreement on broader principles and then to empower regulators to assess whether those principles have been violated. This will also make for more sustainable laws, since technologies change fast. We cannot regulate technology by technology, but we can focus on principles and on how to uphold them.

Europe needs to act with more speed and political ambition. The start of a new legislature offers a real opportunity to develop a more integrated vision of value-based technology governance, to connect the different policy areas and, crucially, to start acting as a global player. We must ensure that development and human rights policies focus on the impacts of technology and that programs strengthen rule of law principles in the digital sphere. That means pushing for norms that prevent the escalation of cyber conflict and developing trade rules for data flows. It means a more ambitious, integrated artificial intelligence strategy that allows for public interest research, talent generation and a more business-friendly ecosystem.

The world is not waiting for us to get our act together. The Chinese tech giant Alibaba has proposed an e-commerce or eWTO mechanism to facilitate digital trade. The trade wars between China and the US are affecting everything from software to technology supply chains. While challenging, this momentum gives the EU the space to lead in showing what a values-based model for comprehensive tech governance might look like. For decades, the EU has demonstrated how values-based rule-making across borders is done. Not only does it have the practical experience and institutions for getting this done, but it also enjoys credibility among non-EU countries.

Beyond the single digital market

Besides government-led efforts, there is a vibrant community of civil society actors, tech experts and private sector companies engaged in discussions and in developing non-binding approaches to tech governance. These ‘multi-stakeholder initiatives’ are designed to include the different people impacted by technology in the rule-making process. Important as these efforts are, however, I worry that they are fragmented and lack strategy. You can have principles, but what happens the minute they are violated? That is usually the real test. In order to focus the diverse initiatives and actors, we need to concentrate on rule of law principles that commit governments, companies and civil society to a kind of horizontal governance effort for the digital age. Adoption of rule of law principles should be a litmus test for all actors.

The rule of law describes a political condition, regardless of a country’s specific laws. It is therefore suitable for a global context like the online world, because it focuses on methods alongside substantive law. Rule of law principles include equality before the law, transparency of rules, codes and processes, and mechanisms of appeal and redress. For centuries, these principles have constituted the distinction between free and unfree, between open and closed systems. If big tech companies applied these principles to algorithms alone, accountability would become possible. Companies should be incentivized by the increasing pressure on them. A clearer, more principled approach is needed in response to criticisms about content removal decisions made without accountability.

By clearly articulating a commitment to the rule of law, and by implementing rule of law principles in their terms of use, companies would build much needed trust. And it would allow companies operating in countries without the rule of law to make a commitment to their customers, and to show they are about more than just profit. Although I am a strong proponent of keeping universal human rights at the top of our agenda, in practice we see they are interpreted differently in different countries. Rule of law, however, would push companies to reveal the rules on which, for example, they promote or demote content, and would make the desired, intended outcomes of artificial intelligence transparent.

By making rule of law principles the explicit foundation of regulations on digital trade, development, state behaviour and anti-trust, Europe can show that it is not only seeking to build a single market, but a single online sphere in which the public interest matters. Articulating a model of how rules can be applied is a clear opportunity for Europe and should provide a democratic model of technology governance. There is a real vacuum of global leadership in this sector and Europe could step into that space, developing a regulatory framework with democratic partners globally – for example Japan and India, which are extremely important for the future of democracy.

Against technological determinism

Rules and governance will be crucial for how technology impacts not only our lives and those of our children, but also the lives of people on the other side of the world. In technology, issues of security, economy and human rights are intertwined. We are therefore talking about systemic issues and challenges. Private companies, and also authoritarian states, have moved ahead in governing technology, mostly because they took the space that we left wide open. As in Romania, governance changes everything, and architecture, once built, has values baked into it. The same is true for technology. Values get solidified from the moment of design, often invisibly so. That makes governance, transparency and accountability even more important. Without rule of law principles, decisions by profit-driven companies and control-driven states will determine the nature and architecture of our digital world.

The incredible promises of artificial intelligence echo the hopes placed in what was thought to be the open internet’s democratizing power. We now know that democracy did not go viral. In fact, the opposite might have happened. For precisely this reason, I believe that we should not accept a deterministic view of AI. How we choose to govern technology will have a huge impact on people’s lives. The future is in our hands.


This article is based on a talk given by the author on 19 September 2019 as part of the series ‘The Tipping Point Talks 2019’, organized by the ERSTE Foundation.


Published 16 October 2019
Original in English
First published by Eurozine

© Marietje Schaake / ERSTE Foundation / Eurozine
