Algorithmic Governance
Total surveillance
Algorithms that reward or penalise a country’s citizens on the basis of collated data – in places like China and India, “algorithmic governance” has long been a reality. But even in Germany, computer software already takes decisions that affect many people’s lives.
By Arne Cypionka
The intersection south of Changhong Bridge in the Chinese metropolis of Xiangyang is crowded and chaotic. To a cacophony of car horns, pedestrians cross against red lights and electric scooters weave through the congested traffic. At the side of the road stands a large screen showing numerous portrait photographs. Just another electronic billboard? No, this one was put up by the police in 2017 to keep order. The faces displayed are those of road users who violated traffic rules and were identified by cameras with automatic facial recognition software. The idea is to “embarrass offenders in front of their neighbours and colleagues,” a spokeswoman for the city told the New York Times. Alongside the photographs of those held up to censure are their names and ID numbers. The aim is to teach them a lesson.
Screens like the one in Xiangyang are only a small part of the Chinese government’s surveillance and monitoring system. In recent years, the Communist Party has launched pilot projects in a number of cities for its new Social Credit System. Data is collected from CCTV cameras, private chats, online purchases and numerous other sources and then “structured”. By 2020, the state aims to create a standardised assessment system covering every man, woman and child in the country. In a process requiring a massive amount of computing power, the system will ultimately assign a specific “score” to every Chinese citizen and keep that score updated using algorithms. To China’s authoritarian government, the system seems an obvious step – permitting control over every one of the country’s 1.4 billion people.
Control and punishment by algorithm
There is a name for the practice where governments or their agencies make use of the huge amounts of data flowing from social media and other platforms and let machines decide on the basis of that data whether a citizen should be rewarded or penalised. It is called algorithmic governance. Computers recognise patterns and routines and automatically implement disciplinary measures: anyone returning a rental bike undamaged, waiting for a green light before crossing the road and turning up for work on time is rewarded with points. But someone who voices criticism of Chinese policy online or is even just in frequent contact with low-scoring people will see a steady decline in his or her personal rating. That has drastic consequences: people with a low social score have been denied access to flights and high-speed trains, for instance, or their children have been banned from attending private schools.
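To make the mechanism concrete: at its core, such a system is little more than a running tally that rules adjust up or down. The sketch below illustrates that logic in Python; every event, point value and threshold in it is an invented assumption, since the real system’s internals are not public.

```python
# Illustrative sketch of a rule-based social score, as described above.
# All events, point values and thresholds are invented assumptions;
# the actual system's inputs and logic are not publicly documented.

from dataclasses import dataclass, field

@dataclass
class Citizen:
    name: str
    score: float = 1000.0  # hypothetical starting score
    contacts: list["Citizen"] = field(default_factory=list)

# Hypothetical mapping of observed behaviour to score changes
EVENT_POINTS = {
    "bike_returned_undamaged": +5,
    "waited_for_green_light": +1,
    "arrived_at_work_on_time": +2,
    "critical_post_online": -50,
}

SANCTION_THRESHOLD = 600  # below this, e.g. no flights or high-speed trains

def apply_event(citizen: Citizen, event: str) -> None:
    """Adjust the score for a single observed event."""
    citizen.score += EVENT_POINTS.get(event, 0)

def apply_association_penalty(citizen: Citizen) -> None:
    """Frequent contact with low-scoring people drags one's own score down."""
    low_scoring = [c for c in citizen.contacts if c.score < SANCTION_THRESHOLD]
    citizen.score -= 5 * len(low_scoring)

def is_sanctioned(citizen: Citizen) -> bool:
    return citizen.score < SANCTION_THRESHOLD
```

What even this toy version makes visible is how cheaply guilt by association can be automated: a single line of code suffices to penalise someone for the company they keep.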
Monitoring the people of a country to such an extent, gathering every bit of personal information and using it against them is not just an assault on privacy; it is also a severe curtailment of personal rights and freedoms. But Chinese authorities are not the only ones analysing personal data – India is also trying to roll out a personal registration system to gain an overview of the population.
The system is called Aadhaar (“Foundation”) and at its heart is a database that now covers 1.2 billion users – the majority of the Indian population. A twelve-digit number linked to a user’s name, age, a full set of fingerprints and an iris scan enables the system to identify every individual beyond doubt, establishing, for example, their entitlement to welfare benefits. That, at least, is the theory. At the beginning of this year, Indian journalist Rachna Khaira highlighted the weaknesses of the system: she contacted hackers who, for just 500 rupees – around six euros – sold her access to the complete database. Originally intended to prevent unfair distribution and fraud in food aid operations, the Aadhaar database is increasingly proving a problem in itself. There are reports of people starving because they cannot produce documents or because the biometric screening fails to work. And there have been leaks that publicly exposed millions of records.
Women in India registering for the Aadhaar database. | Photo (detail): © picture alliance / NurPhoto
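Stripped to its essentials, Aadhaar is a biometric index: the twelve-digit number is the key, and name, age, fingerprints and iris scan make up the stored record. The sketch below shows the shape of such a lookup and its failure mode; all field names and the matching logic are illustrative assumptions, not the actual Aadhaar system.

```python
# Minimal sketch of an Aadhaar-style biometric registry lookup.
# Field names and matching logic are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class AadhaarRecord:
    aadhaar_number: str  # twelve-digit identifier
    name: str
    age: int
    fingerprints: bytes  # full set of fingerprint templates
    iris_scan: bytes     # iris scan template

# The central database maps the twelve-digit number to one record.
registry: dict[str, AadhaarRecord] = {}

def biometric_match(stored: bytes, live: bytes) -> bool:
    # Placeholder: real systems use fuzzy template matching, not equality.
    return stored == live

def entitled_to_benefits(number: str, live_fingerprint: bytes) -> bool:
    """Check a welfare claimant against the registry.

    If the record is missing or the biometric check fails, the claim is
    rejected outright -- exactly the failure mode the article describes.
    """
    record = registry.get(number)
    if record is None:
        return False
    return biometric_match(record.fingerprints, live_fingerprint)
```

Even this simplified design makes the single point of failure obvious: whoever holds access to the registry holds everything, which is why access sold for six euros is so alarming.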
Time for a data debate in Europe
Even in Europe, algorithms are routinely used to analyse data. But unlike the risky experiments underway in the world’s two most populous countries – China and India – the mass collection of personal data in Europe is not necessarily done by state actors. In Germany, for instance, the credit bureau SCHUFA claims to have 864 million data entries on record for over 67 million people and 5 million enterprises. What is not known is how exactly the credit assessment process works – a process that can decide whether or not a person is accepted as a tenant or offered a mobile phone contract. Even the criteria that SCHUFA applies are a mystery. It is suspected that women, for example, may be disadvantaged.
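What makes such opacity problematic can be shown with a deliberately simple model. The linear score below is purely hypothetical – the features and weights are invented, not SCHUFA’s – but it demonstrates how a single hidden weight could quietly disadvantage a whole group without anyone outside the company being able to tell.

```python
# Purely hypothetical linear credit score, to illustrate why secret
# criteria matter. These features and weights are invented; SCHUFA's
# actual model is not public.

WEIGHTS = {
    "years_at_current_address": 4.0,
    "open_credit_lines": -6.0,
    "past_payment_defaults": -80.0,
    "is_female": -3.0,  # a hidden weight like this would be discriminatory,
                        # yet invisible to anyone outside the company
}

def credit_score(applicant: dict[str, float], base: float = 500.0) -> float:
    """Weighted sum of applicant features; the weights decide everything."""
    return base + sum(w * applicant.get(k, 0.0) for k, w in WEIGHTS.items())

applicant = {
    "years_at_current_address": 3,
    "open_credit_lines": 2,
    "past_payment_defaults": 0,
    "is_female": 1,
}
print(credit_score(applicant))  # 497.0 -- the applicant never learns why
```

As long as the weights stay secret, a rejected applicant has no way of knowing whether the decision rested on payment history or on a factor like the last one in the table.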
Germany is still a long way from naming and shaming traffic offenders on roadside screens. But it does need to address a question: how far should companies like SCHUFA or government agencies be allowed to collect and use data? The time has come to discuss the legitimacy of algorithmic governance. Ultimately, the question is whether we want to give machines control over society. Correctly configured, algorithms could be efficient, fast and fair decision-makers. But the way they are used at present suggests that whatever advantages they offer are wholly outweighed by the massive potential for abuse, the assault on personal privacy and the opacity of the processes involved.