Artificial Intelligence
Algorithms are like recipes
But the ingredients have to be carefully selected to ensure that a dish tastes good: Algorithm expert Sebastian Schelter talks about artificial intelligence, how it takes decisions off our hands, and how this impacts our lives.
While artificial intelligence (AI) and computers that learn on their own may still sound a bit like science fiction, both have long since become part of our everyday lives. Algorithms not only determine what we see and hear from streaming services; in some instances they decide who is approved for a loan and predict how likely a convict is to commit another crime. But what exactly is an algorithm and how does AI affect our lives?
The basic recipe: the algorithm
“First and foremost, an algorithm is a sequence of steps. You can think of it like a recipe. You start by getting the ingredients, which is like the input from the computer program. The recipe then tells you step by step what to do to prepare the dish correctly,” explains algorithm expert Sebastian Schelter, who researches data management and machine learning at the University of Amsterdam.
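To make the recipe analogy concrete, here is a minimal sketch in Python (the dish, the function name and the steps are invented purely for illustration; the article itself contains no code):

```python
# A recipe as an algorithm: the ingredients are the input,
# and the steps are carried out in a fixed order.

def make_pancakes(flour_g: int, eggs: int, milk_ml: int) -> str:
    batter = f"{flour_g} g flour + {eggs} eggs + {milk_ml} ml milk"  # step 1: combine
    batter += ", whisked until smooth"                               # step 2: mix
    return batter + ", fried until golden"                           # step 3: cook

print(make_pancakes(200, 2, 300))
```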
In this sense, algorithms are not new, since these are the principles that underlie every computer program. So when we talk about the growing influence of algorithms today, what we are really concerned with is machine learning: “In a conventional computer program, a human being defines all the steps to be taken to solve a particular problem. Sometimes though, we run into difficulty with problems where it is hard for us to write down exactly how the computer should solve them,” the expert says. So machine learning takes a different approach.
Sebastian Schelter completed a doctorate at TU Berlin and has worked as a researcher at New York University and Amazon Research. As a junior professor at the University of Amsterdam, he works on issues at the interface between data management and machine learning.
Advanced cookery: the learning algorithm
Schelter cites one such problem: “A spam filter is a simple example. It is designed to distinguish advertising emails from personal messages. This is not an easy problem for the person writing the computer program. You can define various criteria and rules that would theoretically identify advertising emails, which can include the time a message was sent or certain words that appear in the text. But at some point, you come up against the limits of human capacity.”
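The conventional, hand-written approach Schelter describes might look like the following sketch in Python (the keywords, the night-time rule and the threshold are all invented for illustration):

```python
# Every rule here was written down by a human, as Schelter describes.
AD_WORDS = {"sale", "discount", "winner", "free", "offer"}

def looks_like_advertising(subject: str, body: str, hour_sent: int) -> bool:
    text = (subject + " " + body).lower()
    keyword_hits = sum(word in text for word in AD_WORDS)  # rule 1: suspicious words
    sent_at_odd_hour = hour_sent < 6                       # rule 2: time the message was sent
    return keyword_hits >= 2 or (keyword_hits >= 1 and sent_at_odd_hour)
```

Every new advertising trick would force a human to add yet another rule, which is exactly where, as Schelter puts it, human capacity reaches its limits.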
In machine learning, a computer is shown examples instead of being explicitly programmed. Based on the examples, the machine independently identifies the best solution. Rather than input the entire recipe, the programmer only specifies the result the program is designed to deliver.
“In this case, the input could be 1000 emails that you want to receive, as well as a whole series of negative examples of advertising emails you want filtered out,” Schelter continues. “The algorithm then uses these examples to estimate how likely it is that a new message is advertising. The big difference is that in conventional programming, the human being defines the exact steps, while learning algorithms create their own selection criteria based on examples and statistical probabilities.”
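As a hedged sketch of what such a learning filter could look like, here is a tiny example using scikit-learn (the library choice and the miniature corpus are assumptions; the article names neither, and a real filter would learn from something like the 1000 emails Schelter mentions):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Invented toy examples: wanted mail (label 0) and advertising (label 1).
wanted = ["lunch tomorrow?", "minutes from the team meeting", "photos from our trip"]
ads = ["huge discount, buy now", "you are a winner, claim your prize", "free offer, sale today"]

y = [0] * len(wanted) + [1] * len(ads)
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(wanted + ads)  # turn each email into word counts

model = MultinomialNB().fit(X, y)           # the algorithm derives its own criteria

new_mail = vectorizer.transform(["claim your free prize now"])
print(model.predict_proba(new_mail))        # estimated probabilities: personal vs. advertising
```

No human wrote a single filtering rule here; the selection criteria come entirely from the examples and their statistical regularities, which is the “big difference” Schelter points to.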
Tricked by AI
Learning algorithms are able to solve certain problems not only faster than humans, but also better. When the Pluribus program beat professional poker players in 2019, AI demonstrated that it can even learn to bluff better than a human. As their skills grow, so does their influence in areas like healthcare, the financial world and the legal system.
Most people in Germany are also confronted with decisions made by smart algorithms on a daily basis. “Looked at scientifically, these are abstract mathematical procedures that can be applied to many areas, for example to determine how likely it is that a particular person will repay a loan,” Schelter says.
In countries like the USA and Australia, algorithms have been accused of discriminating against certain ethnic groups in their decision-making. Schelter explains the root of the problem: “This is partly because the discrimination is inherent in the sample data shown to the algorithm. If you then simply let it run blindly, it reproduces that discrimination.”
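The mechanism is easy to reproduce in a toy simulation (all numbers and variable names here are invented): even if the protected attribute itself is withheld from the model, a correlated feature lets biased historical decisions live on in the predictions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)                # a protected attribute, 0 or 1
income = rng.normal(50 + 10 * group, 5, n)   # a feature correlated with the group
# Biased historical decisions: group 1 was approved more often, income being equal.
approved = (income + 8 * group + rng.normal(0, 5, n)) > 60

# The model never sees "group", only income, yet it inherits the disparity.
model = LogisticRegression().fit(income.reshape(-1, 1), approved)
predicted = model.predict(income.reshape(-1, 1))

for g in (0, 1):
    print(f"group {g}: historical approval {approved[group == g].mean():.2f}, "
          f"model approval {predicted[group == g].mean():.2f}")
```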
A tactful algorithm?
The question of where to draw limits for algorithms and AI is not so much a technical question as it is an ethical one. And a question of whether it is possible to teach an algorithm tact.
“This is a question we are still working to answer. The way I see it is that there are areas where the wrong decision by an algorithm has no serious consequences – like when a streaming service plays me the wrong song. Then there are other areas where an algorithm can be used to make a recommendation, but a human being should make the final decision. And then there are other areas that should simply be left to human judgement,” Schelter explains.
“In the USA, algorithms are used to estimate how likely it is that a prisoner will commit another crime after being released. The courts and law enforcement agencies are given access to this data when they consider early parole.” This raises the question of how fair the process truly is.
“Justice is difficult to encapsulate mathematically, because it primarily revolves around philosophical, political and legal questions. Everyone has a different idea of what is and what is not just, and it is mathematically impossible to satisfy all definitions of justice at the same time.”
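A small worked example (all numbers invented) shows one such conflict: if two groups have different base rates and a risk score treats both with the same precision and the same recall, the false-positive rates necessarily come out unequal.

```python
# Two groups, same precision (0.8) and recall (0.8), different base rates.
def false_positive_rate(people: int, reoffend: int,
                        precision: float = 0.8, recall: float = 0.8) -> float:
    true_pos = recall * reoffend             # correctly flagged as high risk
    flagged = true_pos / precision           # total flagged
    false_pos = flagged - true_pos           # wrongly flagged
    return false_pos / (people - reoffend)   # rate among those who would not reoffend

print(false_positive_rate(100, 50))  # base rate 0.5 -> 0.20
print(false_positive_rate(100, 20))  # base rate 0.2 -> 0.05
```

In this example the group with the lower base rate also gets the lower false-positive rate; the known general result is that once base rates differ, equal error rates and equal precision cannot hold simultaneously, no matter how the score is tuned.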
Studies exploring the fairness of algorithmic decisions have also shown the need to inject a bit of tact into the process. In many areas, we need to come to an ethical, political and legal agreement on what a “correct” decision would look like before entrusting the task to an algorithm. “Even then, we still need to ask ourselves philosophically and politically if we want to do it at all,” Schelter says.