In mid-January, The New York Times revealed that hundreds of law enforcement agencies and private companies across the world use software called Clearview. It can identify a person from an image within seconds, returning their name, address, occupation and contacts. The revelations are controversial for two reasons. First, Clearview identifies people using its own database of more than 3 billion private photos. By comparison, the FBI photo database holds ‘only’ 640 million photos. The company behind it, Clearview AI, scrapes the images from freely accessible internet sources – Facebook, YouTube and Twitter, as well as news and company websites – without the subjects’ knowledge. Second, it is not officially known which authorities actually use Clearview. So far, the software has been operating beyond the bounds of political or legal oversight.
If the programme were to appear in app stores, users could potentially identify anyone they wished: on the underground, on the street, at a protest. Clearview, therefore, presents an immense threat to the public, and in particular to political activists. It is no coincidence that in 2017 an earlier version of Clearview was offered to the notorious ‘white nationalist’ Paul Nehlen as a tool for ‘extreme opposition research’. Given the risks of misuse of such a technology, companies like Google have themselves shied away from offering similar applications. Clearview AI, however, has already designed a prototype for use with smart glasses, which would allow for even more covert surveillance.
The revelations also caused shockwaves in Germany, forcing the government to shelve plans to introduce biometric facial recognition nationwide. The technology was to have been installed at 135 train stations and 14 airports to facilitate the automatic detection of ‘terrorists and serious criminals’. One hundred and thirty million euros had been set aside and a law already drafted. The German secret services and authorities immediately insisted that they would not be using Clearview. However, with the risks of biometric surveillance suddenly in public focus, Horst Seehofer, the interior minister, had no choice but to abandon the project. As the ministry put it, there were too many unresolved questions.
Indeed, there are – both in technical terms and in terms of civil rights. So far, biometric systems have been anything but reliable, which is why innocent people have repeatedly come under false suspicion. But even if the technology were to operate without error, it would be far from unproblematic. Biometric systems raise state surveillance to a completely new level, ultimately representing nothing less than the end of public anonymity.
An inaccurate technology
The security services take the opposite position. They see biometric video surveillance as a quantum leap in crime fighting that will allow much of their time-intensive investigative work to be delegated to algorithms. Artificial neural networks are now able to ‘read’ people (i.e. capture their biometric data and compare it against databases) faster and more accurately than ever before, without subjects noticing that they are being scanned. Video surveillance in public spaces functions like an invisible data dragnet that is impossible to avoid.
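To make the mechanism concrete, here is a minimal, hypothetical sketch of how such matching generically works; the function names, the 128-dimensional embeddings and the 0.6 threshold are illustrative assumptions, not details of Clearview or any deployed system. A network converts each face into a numeric ‘embedding’, and identification is then a nearest-neighbour search over a database of embeddings:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity of two face embeddings (1.0 means identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe: np.ndarray, database: dict, threshold: float = 0.6):
    """Return the best-matching identity, or None if nothing clears the threshold."""
    best_name, best_score = None, threshold
    for name, embedding in database.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Toy database of 128-dimensional embeddings; a real system would produce
# these with a trained neural network rather than random numbers.
rng = np.random.default_rng(0)
database = {f"person_{i}": rng.normal(size=128) for i in range(1000)}
probe = database["person_42"] + rng.normal(scale=0.1, size=128)  # noisy camera capture
print(identify(probe, database))  # -> person_42
```

The point of the sketch is that the search is passive and instantaneous: the subject contributes nothing but a face caught on camera.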
Alongside the massive legal issues, there is a serious technical drawback to the dream of automated investigation. Biometric video surveillance does not function nearly as well as the authorities would like people to believe. However, in order to legitimise the rollout of surveillance technology, governments repeatedly inflate figures on recognition rates.
For example, in 2017, the German authorities ran a year-long test of automated facial recognition at a Berlin train station. The test was deemed a complete success: on average, the technology had correctly identified 80% of the volunteer participants and was declared ready for blanket deployment. Experts, however, questioned the test’s scientific value. They established that the average result for the best of the three systems tested was actually just 68.5%. Even taking the official figure of 80%, the remaining 20% error rate would translate into some 18,000 misidentifications among the roughly 90,000 passengers passing through the station each day. While the police dream of automated investigative work, biometric systems provide only an ‘illusory security’, argues the German Bar Association.
[Image: Field test of facial recognition at the Berlin Südkreuz railway station. Photo: C. Suthorn / CC BY-SA 4.0 / commons.wikimedia.org]
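The arithmetic behind the experts’ criticism can be sketched directly from the figures cited above; like the article’s own estimate, it applies the error rate to everyone passing through the station:

```python
# Back-of-the-envelope arithmetic for the Berlin Südkreuz test,
# using only the figures cited above.
daily_passengers = 90_000
rates = {"official figure": 0.80, "experts' recalculation": 0.685}

for label, rate in rates.items():
    errors = daily_passengers * (1 - rate)
    print(f"{label}: {rate:.1%} accuracy -> ~{errors:,.0f} misclassifications/day")
# official figure: 80.0% accuracy -> ~18,000 misclassifications/day
# experts' recalculation: 68.5% accuracy -> ~28,350 misclassifications/day
```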
In the UK, statistics are being falsified in similar ways. At the end of January 2020, Scotland Yard announced plans to link London’s CCTV cameras (there are already around half a million across the city) with facial recognition software and a police database. If the system detects someone who is not in the database, it deletes the information within seconds. If it identifies a suspect, officers are dispatched. According to the police, only one in a thousand cases produces a false identification. Pete Fussey, an expert in surveillance systems at the University of Essex who was commissioned by the London police to evaluate test runs, disputes this. By his estimation, biometric video surveillance is accurate in just 19% of cases.
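The two figures are less contradictory than they appear: ‘one in a thousand’ describes false matches per face scanned, while Fussey’s 19% describes how many alerts turn out to be correct. Because almost everyone walking past a camera is innocent, even a tiny per-scan error rate swamps the genuine hits. A sketch with assumed crowd sizes illustrates the effect; only the 0.1% rate comes from the police, and every other number is an assumption:

```python
# Why a 1-in-1,000 false-match rate can still mean most alerts are wrong.
# Only the 0.1% rate is the police's figure; the rest are assumptions
# chosen purely to illustrate the base-rate effect.
faces_scanned = 10_000        # assumption: passers-by scanned in a day
suspects_present = 5          # assumption: watch-listed people in that crowd
false_match_rate = 0.001      # police figure: one in a thousand
hit_rate = 0.80               # assumption: chance a real suspect is flagged

false_alerts = (faces_scanned - suspects_present) * false_match_rate
true_alerts = suspects_present * hit_rate
precision = true_alerts / (true_alerts + false_alerts)
print(f"~{false_alerts:.0f} false alerts vs ~{true_alerts:.0f} genuine ones")
print(f"share of alerts that are correct: {precision:.0%}")
# -> ~10 false alerts vs ~4 genuine ones: only ~29% of alerts are correct
```

Under these assumed numbers the system’s per-scan error rate really is one in a thousand, yet fewer than a third of its alerts point at an actual suspect – the same order of magnitude as Fussey’s 19%.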
Systemic racism
The numbers speak clearly against biometric video surveillance being fit for purpose. Yet governments remain committed to its roll-out, apparently in the hope that the technology will ‘mature’ through everyday use. The sheer recklessness of this strategy can be seen in the USA. There, the security forces have long used biometric systems – today not only Clearview but also Amazon’s facial recognition software Rekognition. According to Amazon, Rekognition enables real-time surveillance of entire cities. But this system is not technically mature either. In July 2018, the American Civil Liberties Union used Rekognition to compare 535 images of members of the US Congress with 25,000 police photos of prisoners. It produced 28 false matches – an error rate of over 5% – disproportionately affecting non-white politicians, including the black civil rights activist John Lewis.
AI experts have long pointed out that facial recognition systems are particularly error-prone for people with darker skin. In real life, false alarms can have fatal consequences. Disproportionately high numbers of black people are wrongly suspected of criminal behaviour, harassed by the police and shot dead. ‘Identification … could cost people their freedom or even their lives’, the ACLU has warned.
Nevertheless, biometric video surveillance continues to advance in the USA, reaching deep into the suburbs. For several years, Amazon’s home-surveillance subsidiary Ring has been selling wi-fi-connected video doorbells. These devices, sold in their millions, let residents monitor their driveways via smartphone. And not only residents: under an agreement with Amazon, more than 770 police departments across the country can access the videos – with the consent of the users, but without the need for a warrant. Ring does not yet offer facial recognition, but Amazon says it is working on it. This would mean an alarm going off whenever a ‘suspect’ approached a house. Whole neighbourhoods could be virtually monitored and technologically gated off.
Developments in China show what happens when a surveillance network aided by biometric video becomes more extensive still. There, facial recognition technology is part of everyday life: in many places, face scanners are used as a means of payment and to control access to the underground system and to private apartment blocks. The Chinese authorities also use biometric systems to monitor and control ethnic minorities. In 2019 it was revealed that the faces of more than a million prisoners had been scanned. Using the data, algorithms are trained to recognise the features of members of the Uighur minority. In the western Chinese region of Xinjiang, cameras are used to detect early signs of Uighur gatherings. The police can also mark individuals as potential threats; if they try to enter a certain public place, an alarm is immediately triggered.
The EU white paper on AI
This may seem like a distant horror scenario. But there is plenty of evidence that blanket biometric video surveillance poses a high risk to democratic societies. It hands the state an extremely powerful surveillance instrument that can eliminate public anonymity at a stroke. Citizens pay a high price for the promise of a little more security. In democracies, the constant feeling of being watched restricts civil liberties, individual expression and political participation. Anyone who fears being identified and recorded, despite acting lawfully, might stop taking part in demonstrations.
For this reason, it seems, the European Commission considered temporarily banning biometric video surveillance. In December 2019, details were leaked of a draft white paper on the challenges posed by AI. The paper suggested that facial recognition should be banned in public places for three to five years, in order to better assess the societal impact of the technology.
This would have been an extraordinary U-turn. In April 2019, the European Parliament had voted to establish the Common Identity Repository (CIR) – a gigantic biometric database combining border control, migration and criminal prosecution data systems. It was planned eventually to hold data on over 350 million people, which would have made it one of the world’s largest databases for tracking people – just behind the systems run by the Chinese and Indian governments.
Regrettably, the European Commission revised its position once again. In the final version of the white paper, published on 19 February, there is no talk of a moratorium. When it comes to the protection of fundamental rights, this is an alarming sign.