Artificial Intelligence
Awful AI
In our modern world, nothing can escape the digital revolution. The way we commute, communicate and consume is controlled by code, and that code is growing increasingly intelligent. But artificial intelligence is by no means as fair or neutral as it may seem.
On a Collision Course
“CRITICAL ERROR”: Red lights are flashing on the dashboard of your self-driving car. You are approaching a traffic light when suddenly the brakes fail. In just seconds, the car will cause an accident. Will the computer program instruct your self-driving car to keep going and kill the pedestrians ahead? Or will it calculate a new path and swerve into a nearby pole, bringing the car to a stop but potentially killing you inside? Could anyone in the world, let alone a computer program, make the right decision in this scenario?
My interest in ethical artificial intelligence started with this modern-day equivalent of the well-known trolley problem. In 2016, I was a software engineer in Silicon Valley working on computer programs that controlled self-driving cars. This dilemma was not just a philosophical thought experiment for us. It was what we in computer science call a worst-case scenario.
In our modern world, nothing can escape the digital revolution. The way we commute, communicate and consume: in 2022, everything is controlled by code written by teams of engineers who invariably incorporate their own ideas, beliefs and biases about how the world works. Algorithmic biases, however, are nothing new in computer science. In 1976, computer pioneer Joseph Weizenbaum warned us about the (un)intended harmful consequences of code. Early cases of unethical programs appeared soon after: between 1982 and 1986, more than 60 women and members of ethnic minorities were denied entry to St. George’s Hospital Medical School due to biases in an automated assessment system. Unfortunately, not even Weizenbaum could have predicted the scale the problem would take on as artificial intelligence (AI) programs matured and began to drive the next technological revolution.
Great Power, Great Responsibility?
In the early 2010s, AI and machine learning brought about a fundamental paradigm shift in how we write code. Instead of writing a deterministic sequence of instructions (think of it as a recipe in a cookbook) telling a computer what to do, machine learning methods allow us to generate code from large amounts of data (analogous to a trained chef who has learned from experience). This allows developers to create applications that were previously thought impossible, such as accurately recognizing speech and images, as well as programs that will one day be able to steer a self-driving car through complex traffic. There is no free lunch, however: these spectacular results come at a heavy price. Artificial intelligence relies on black-box models. In its current state, AI not only reflects the biases of the programmer; it also absorbs additional biases from the training data used to develop it. And because we do not yet really understand how these black boxes work, AI programs are quite susceptible to malicious attacks and notoriously difficult to control.
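To make the contrast concrete, here is a minimal sketch of the two approaches, assuming a toy spam-filter task and the scikit-learn library (both are illustrative choices of mine, not part of the systems described in this article):

```python
# Classical programming: the developer writes the rules by hand (the "recipe").
def is_spam_rule_based(message: str) -> bool:
    keywords = {"prize", "winner", "free"}
    return any(word in message.lower() for word in keywords)

# Machine learning: the behaviour is generated from labelled examples instead.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = ["You are a winner, claim your free prize", "Meeting moved to 3 pm",
            "Free entry, reply now to win", "Are we still on for lunch tomorrow?"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam: the "experience" the model learns from

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)                      # whatever is in this data shapes the program
print(model.predict(["claim your prize now"]))   # learned behaviour, not hand-written rules
```

Whatever biases are hidden in those labelled examples end up baked into the learned program, which is exactly the problem described above.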
In its current state, AI not only reflects the biases of the programmer; it also absorbs additional biases from the training data used to develop it.
The newest technological advancements create ethical dilemmas that could come straight from a science fiction writer’s mind. AI programs in self-driving cars could potentially base their decision of whether or not to swerve on the underlying training data. They could learn to count people and base decisions on total numbers. The programs could also base decisions on image-based recognition of attributes like gender, age or nationality (see MIT’s Moral Machine experiment for more on this topic).
Working on this problem taught me that it is crucial to consider these possible failures and hidden algorithmic biases. There might never be a single right decision a future self-driving car could make in a trolley-problem situation, but as a society we do know that AI programs must not base their decisions on discriminatory features such as race or gender.
It is often scary to think of the immense responsibility that comes with coding AI programs. The work of a select few engineers can potentially impact the lives of billions. But there is no need to worry, I thought. Surely, we can trust the select few at the top to be aware of critical system failures and not to use their power to create applications with (un)intended harmful consequences. Unfortunately, my hopes were swiftly dashed when I started taking a closer look.
The work of a select few engineers can potentially impact the lives of billions.
AI Is Already Here, and It Is Awful
I left Silicon Valley to start a PhD in artificial intelligence. My ultimate goal was to make AI more trustworthy. One day, I came across a news article describing a start-up that claimed to use AI-based video recognition for “bias-free” hiring. That immediately sent a cold shiver down my spine. Applying a poorly understood technology to one of the most important decisions in any person’s life – whether they are hired or not – is frightening. Going so far as to claim that AI can actually be used to circumvent human bias is not only wrong; it is also seriously dangerous.
Nothing could be further from the truth: in many cases, AI-based systems and their predictions actually amplify existing biases at unprecedented scale rather than avoiding them. To understand this phenomenon, it is important to look deeper into the secret sauce of any AI program: the historical data the program draws its information from. Generally, datasets are created, collected and selected by people in power. In the case of AI-based recruiting, the data consists of the records of all past hires. It is a reflection of the biases, beliefs and worldviews of its curators and of the environment it was collected in. A list of past hires can potentially go back decades, to a society we now consider extremely unfair towards certain genders and minorities.
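The mechanism can be illustrated with a deliberately simplified sketch. The data below is synthetic and the model is a generic scikit-learn classifier, not any real recruiting product; the point is only that a model trained on historically biased decisions encodes that bias as a matter of course.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic "historical hires" (illustrative only, not real data).
rng = np.random.default_rng(0)
n = 1000
experience = rng.normal(5, 2, n)        # the feature that *should* drive the decision
gender = rng.integers(0, 2, n)          # 0 = male, 1 = female
# Past human decisions: qualified candidates were hired, but half of the
# qualified women were rejected anyway by the recruiters of the past.
hired = ((experience > 5) & ~((gender == 1) & (rng.random(n) < 0.5))).astype(int)

model = LogisticRegression().fit(np.column_stack([experience, gender]), hired)
print(model.coef_)  # the weight learned for "gender" is strongly negative:
                    # the model has quietly absorbed the historical discrimination
```

Simply deleting the gender column does not fix this either, since other features in a CV can act as proxies for it.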
Therefore, I was in no way surprised when researchers showed that AI-based recruiting tools quickly teach themselves to prefer male job candidates over female ones. In some cases, they even went so far as to penalize CVs that included the word “women’s,” such as “women’s chess club captain.” Hoping to raise awareness of this issue, I started a list called Awful AI. Over the years, this list has grown to include hundreds of applications that fall into common categories such as discrimination, disinformation, surveillance, and weapons.
Automating Racism
The harmful consequences of AI applications are often unintended by their creators and can be attributed to a lack of understanding and awareness of ethical issues and technical limitations. Take Tay, for example, a Microsoft chatbot that learned from Twitter inputs. Within one day of its release, Microsoft was forced to take Tay down as it began spouting antisemitic messages. Google’s image recognition program is another example: it labelled the faces of several black people as gorillas. Amazon’s Rekognition service identified darker-skinned women as men 31 percent of the time, while lighter-skinned women were misidentified only 7 percent of the time.
It is well known that large-scale image recognition services in general struggle with issues of automated racism. Researchers have developed tools and methods in recent years to study AI models in isolation in the hope of reducing their harmful effects. But while these tools and methods work well on individual cases, they are unable to prevent a more widespread problem: as we move towards an age of automation, biased predictions are now used as building blocks for many critical aspects of society, such as law enforcement.
Amplifying Discrimination
PredPol is an AI-based predictive policing program used by the Los Angeles Police Department to forecast potential crime hotspots. The program then recommends that police officers patrol the locations identified. This is the same dangerous idea: humans are imperfect, so we should use an objective AI technology to select patrol locations and, subsequently, arrest suspects based on its predictions. By now you might have guessed where the problem lies: AI researchers were not surprised when studies revealed that PredPol showed a serious bias towards locations where arrests had frequently been recorded in the past, which resulted in over-patrolling of neighborhoods of color. The AI was trained on historical data of past arrests in the US, which is a strong proxy for systemic racism within the justice system. Therefore, PredPol not only mirrored the racism embedded in its training data; it also automated this racism on a large scale and amplified its effects.
What makes this example dangerous to society is that the idea of an objective, if complex, AI program encourages police officers not to question its predictions. Far from being objective, the predictions of PredPol can be used to justify and amplify existing racism towards communities of color. This creates a vicious feedback loop: as the AI encourages more arrests in communities of color, those arrests feed into the new datasets that are then used to retrain the algorithm.
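A toy simulation makes the loop visible. The numbers and the allocation rule below are hypothetical (this is not PredPol’s actual model); the point is only that a predictor fed its own outputs turns a small initial disparity into a runaway one.

```python
# Two neighbourhoods with the SAME true crime rate; B merely starts with a few
# more recorded arrests because it was patrolled more heavily in the past.
recorded_arrests = {"A": 100, "B": 110}
true_crime_rate = 0.05                 # identical in both neighbourhoods
patrols_per_year = 200

for year in range(1, 6):
    # The "predictor" flags whichever neighbourhood has the most recorded
    # arrests as the hotspot, so it receives all of this year's patrols.
    hotspot = max(recorded_arrests, key=recorded_arrests.get)
    new_arrests = int(patrols_per_year * true_crime_rate)
    # Those arrests flow straight back into the data the predictor learns from.
    recorded_arrests[hotspot] += new_arrests
    print(year, recorded_arrests)

# After a few years the record claims B is far more "criminal" than A,
# even though residents of A and B behaved identically throughout.
```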
The European Union is experimenting with its own AI programs in law enforcement, such as AI-based polygraph tests for travellers entering the EU and AI-based fraud detection systems. Fortunately, awareness of this issue is rising, and the first major cities have started banning AI technology from law enforcement. But what if discrimination and automated control benefit a state’s agenda? What would society look like if a state embraced the problems of a biased AI instead of tackling them?
Autocratic governments have always pushed the limits of technology for surveillance and other unethical uses, and artificial intelligence is no exception. Many breakthroughs in AI now allow surveillance to be scaled in real time to cover every digital and physical footprint. Surveillance is not limited to recognizing a person’s identity: artificial intelligence programs are currently being developed to predict a target’s sexuality, the diseases they may have or their likelihood of engaging in criminal activity based on facial features alone. The Chinese Communist Party plans to take this a step further and use AI applications to attach real-world consequences to behaviour that does not follow certain rules. Social credit systems automatically assign points to each citizen and grant or withhold certain rights depending on a person’s score. They are currently being piloted as incentive and punishment systems in many regions of China and are used to persecute ethnic minorities.
Preventing the Misuse of Data
This article has painted a dark picture of AI. Not all hope is lost, though – and an important part of change is raising awareness that these applications exist or may exist in the near future. To prevent awful AI applications from taking over our lives, we need all parts of society to work together as a community. As engineers and technologists, we need to be mindful of the applications we develop and of the responsibility we have when deploying them in the real world. It is important to consider possible failures as well as ethical and social guidelines.
Novel research has shown that we can prevent the misuse of data. Decentralized technology and our own research projects, such as Kara, are exploring ways to automatically prevent AI applications from being developed without the consent of data owners. Initiatives such as “Stop Killer Robots” advocate at the highest levels of policy to ban harmful AI applications and to educate decision-makers. As policymakers, we need to understand the limits of current technological advances and design regulations that foster the safe development of AI.
As business leaders, we need to make sure not to fall for the hype of all-powerful intelligent robots for the sake of short-term profit. AI is not a silver bullet that can solve all the world’s problems. It is a tool that can provide and scale novel solutions while at the same time reinforcing existing problems. There is a lot of work to be done and no time to be scared.