For several decades, researchers have been working on developing artificial intelligence (AI) that can solve problems independently. | © Max Planck Institute for Intelligent Systems, Stuttgart

How did AI come about?

For several decades, researchers have been working on developing artificial intelligence (AI) that can solve problems independently.

In the beginning, researchers worked on rule-based, symbolic AI. But this form of AI is extremely limited: it is only useful in fields in which clear rules can be defined for all conceivable situations. Significant progress has been made since the 1980s using self-learning programs.

Machine learning means that a computer learns from training examples and experience how a decision needs to be made – without being programmed for a specific problem solution. Special algorithms learn from training data and develop models that they can then also apply to new data they haven’t encountered before. When self-learning machines are trained with large numbers of examples, they autonomously develop a decision-making process that becomes the basis for generalisation. How such self-learning programs arrive at their decisions often can no longer be traced, even by the programmers. Depending on complexity, there are various distinct levels of machine learning: supervised learning, unsupervised learning, reinforcement learning and deep learning.
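To make the idea concrete, here is a minimal sketch using the scikit-learn library and one of its built-in example datasets; the data and the choice of model are purely illustrative and not taken from the research described in this article. Part of the data is held back during training so the program can be tested on examples it has never seen.

```python
# Minimal sketch of machine learning: fit a model to training examples,
# then check how well it generalises to data it has never encountered.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

features, labels = load_iris(return_X_y=True)

# hold back 30% of the examples to test generalisation to unseen data
X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.3, random_state=0)

model = DecisionTreeClassifier().fit(X_train, y_train)   # the training phase
print("accuracy on unseen data:", model.score(X_test, y_test))
```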

“We already have machines that can learn quite well, but we don’t yet have machines that can think. Developing these kinds of machines is the big challenge.”
Bernhard Schölkopf, Director at the Max Planck Institute for Intelligent Systems in Tübingen

How does AI learn?

Rule-based learning
Rule-based learning, also called symbolic AI, is based on logical models and is often referred to as “classic” AI. It makes decisions according to clear rules defined in advance in the code.
One example of this form of AI is Deep Blue – the computer program that first beat world chess champion Kasparov in 1996. It works with symbolic AI and owes its playing strength mainly to its immense computing power. The chess software calculates on average 126 million positions per second. Deep Blue is not really intelligent – just very, very fast.
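For contrast with the learning approaches below, here is a toy illustration of rule-based decision-making; it has nothing to do with Deep Blue’s actual chess program, and the rules and names are invented.

```python
# Toy rule-based ("symbolic") AI: every decision follows rules a programmer
# wrote down in advance; nothing is learned from data.
def classify_animal(weight_kg, says_meow):
    if says_meow:
        return "cat"
    if weight_kg > 10:
        return "dog"
    return "unknown"        # situations the rules do not cover stay unsolved

print(classify_animal(4, says_meow=True))      # 'cat'
print(classify_animal(4000, says_meow=False))  # 'dog' - the hand-written rules are too coarse
```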

Supervised learning
With supervised learning, humans evaluate the training and test data and allocate them to categories. In the training phase the AI learns, for example, to identify images of cats correctly and label them “cat”. If an algorithm that has been trained to distinguish dogs from cats is shown an image of an elephant, it has no way of handling it. But when limited to a narrow field, these algorithms are very reliable and accurate, as long as there is plenty of high-quality training data.
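This closed-world limitation can be sketched in a few lines. The feature values below are invented stand-ins for real image data and the model choice is arbitrary: a classifier trained only on “cat” and “dog” has no way of answering “elephant” and squeezes every input into one of the two known categories.

```python
# Supervised learning on two labelled classes: the trained model can only
# ever answer "cat" or "dog". Features (weight in kg, shoulder height in cm)
# are invented stand-ins for real images.
from sklearn.linear_model import LogisticRegression

X_train = [[4, 25], [5, 28], [3, 22],       # cats
           [30, 60], [25, 55], [35, 65]]    # dogs
y_train = ["cat", "cat", "cat", "dog", "dog", "dog"]

model = LogisticRegression().fit(X_train, y_train)

elephant = [[4000, 320]]                    # far outside anything seen in training
print(model.predict(elephant))              # forced to answer 'cat' or 'dog'
print(model.predict_proba(elephant))        # ... and is confident about it
```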
Analysis of images using learning algorithms already plays a key role in diagnostic imaging. Several studies show that AI is able to diagnose faster, and often more accurately, than many doctors – for instance when assessing skin cancer. The best results are achieved when humans and AI work together: first the AI assesses whether the image shows skin cancer or a harmless skin change; the specialist consultants then decide on the treatment.

Unsupervised learning
Supplying the algorithm with unfiltered raw training data is called unsupervised learning. The program independently searches for common features and differences in the data. The goal is to identify interesting and matching patterns. However, this sometimes leads to mistakes, especially if the AI detects similarities mainly in the image background and therefore reaches incorrect conclusions. For instance, if the AI learns what a “wolf” is solely from pictures of wolves in the snow, it will also call a different animal pictured in the snow a “wolf”.
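A minimal clustering sketch illustrates the principle, assuming nothing but raw, unlabelled numbers; the two-dimensional points are made up. The algorithm groups the data by similarity without ever being told what the groups mean.

```python
# Unsupervised learning: no labels are supplied; k-means simply groups the
# raw data points by similarity.
from sklearn.cluster import KMeans

raw_data = [[1.0, 1.1], [0.9, 1.0], [1.2, 0.8],    # one "natural" group
            [8.0, 8.2], [7.9, 8.1], [8.3, 7.7]]    # another one

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(raw_data)
print(kmeans.labels_)   # e.g. [0 0 0 1 1 1] - groups found without any labels
```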

Pattern recognition through self-learning networks can help researchers to see more: fluorescence microscopy of living cells often has to be performed with low levels of light, because the organisms being investigated would otherwise be damaged. Self-learning image-restoration software analyses these poorly illuminated, hard-to-read microscope images and compares them with patterns from known images, which makes the “hidden” image content visible.
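The underlying idea can be sketched with synthetic data: a small neural network is trained on pairs of dim, noisy images and well-lit reference images, and thereby learns a mapping it can later apply to new noisy images. This is only a toy sketch in PyTorch, not the restoration software mentioned above.

```python
# Toy sketch of learning-based image restoration: a tiny convolutional
# network learns to map dim, noisy images back to clean ones.
import torch
import torch.nn as nn

clean = torch.rand(64, 1, 32, 32)                      # pretend "well-lit" images
noisy = 0.3 * clean + 0.1 * torch.randn_like(clean)    # dim, noisy versions

net = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
optimiser = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(200):                                # learn the noisy -> clean mapping
    loss = nn.functional.mse_loss(net(noisy), clean)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()

print("reconstruction error after training:", loss.item())
```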

Reinforcement learning
In reinforcement learning, the learning system makes decisions and adopts successful ones as a blueprint for future behaviour. For each action the system receives positive or negative feedback. In this way the algorithm learns to estimate ever more accurately how successful individual actions are in different situations. Deep learning is a machine learning method based on artificial neural networks that imitate the brain. Such a network consists of multiple layers; the individual layers are made up of many artificial neurons that are connected with each other and react to the neurons in the previous layer. The larger the network, the more complex the tasks that can be addressed.
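A classic textbook form of reinforcement learning is tabular Q-learning; the toy corridor below is invented purely to show how positive and negative feedback gradually shape the estimate of how good each action is.

```python
# Tabular Q-learning on a toy corridor: the agent starts in the middle and
# is rewarded for reaching the right-hand end. All numbers are illustrative.
import random

n_states = 5                # corridor cells 0..4; cell 4 is the goal
actions = [-1, +1]          # step left or right
q = {(s, a): 0.0 for s in range(n_states) for a in actions}   # value estimates
alpha, gamma, epsilon = 0.5, 0.9, 0.2    # learning rate, discount, exploration

for episode in range(500):
    state = 2
    while state != 4:
        # explore occasionally, otherwise take the action rated best so far
        if random.random() < epsilon:
            action = random.choice(actions)
        else:
            action = max(actions, key=lambda a: q[(state, a)])
        nxt = min(max(state + action, 0), n_states - 1)
        reward = 1.0 if nxt == 4 else -0.01   # positive or negative feedback
        best_next = max(q[(nxt, a)] for a in actions)
        # nudge the estimate of this action towards the observed outcome
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = nxt

# after training, the preferred action in every cell should be "step right" (+1)
print({s: max(actions, key=lambda a: q[(s, a)]) for s in range(n_states - 1)})
```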

Speech and text recognition
One of the fields where deep learning is currently applied is speech and text recognition. Both the online translation service DeepL, developed in Cologne, and the simultaneous translation program Lecture Translator of the Karlsruhe Institute of Technology work with artificial neural networks, for example.
 
Facial recognition

Nowadays, artificial neural networks are also used for facial recognition. In the British capital London alone, over 600,000 cameras are installed, and many of these are also used for facial recognition analysis. The technology is supposed to help the police to solve or even prevent crimes. But how considerable are the dangers of surveillance on this level? To what extent is it compatible with democracy and citizens’ rights?

Self-driving Cars
For decades, car manufacturers have been working on automating driving with various driver assistance systems. Much of this is already a reality, such as automatic speed control or self-parking systems. The ultimate goal is autonomous driving, where computer programs use AI to take complete control of the vehicle and the humans are merely passengers. On the one hand this would prevent many road accidents, because a large share of accidents today is caused by human error. On the other hand there are also fundamental questions: who is held responsible in a collision with a driverless vehicle?
 

Examples from AI research

Come research with me
The little four-legged robot SOLO 8 comes from the robotics labs at the Max Planck Institute for Intelligent Systems in Tübingen and Stuttgart. The research robot is an open-source project, for which the build instructions and GitHub documentation are publicly accessible. Most of the components are made with a 3D printer and the rest are readily available to buy. This allows researchers all over the world to recreate SOLO 8 cheaply and easily and to develop it further. The concept behind the project is that every robotics research lab can use the technology, thus creating a global research platform built on common standards. When large numbers of scientists carry out experiments on the same platform, comparable data can be obtained, which facilitates faster progress in the field of robotics.

Robot SOLO 8 © Max Planck Institute for Intelligent Systems, Stuttgart and Tübingen / Wolfram Scheible. A long-exposure photograph turns the highly dynamic movements of the SOLO 8 robot into a dance.

Recognising causality
One of the current research focuses of Bernhard Schölkopf at the Max Planck Institute for Intelligent Systems in Tübingen is causal inference. This research field centres on algorithms that are able to recognise causality in datasets – in other words, the relationship between cause and effect. One objective here is to make AI systems more robust against external interference. Once again, autonomous driving is a good example: if a road sign in a residential area is manipulated so that it shows a speed limit of 130 instead of 30, a human driver immediately knows that this cannot be correct, because the environment provides many additional indications. For AI, however, this is not a straightforward task. And yet AI needs to be capable of this before cars can really become driverless, otherwise serious accidents are bound to happen.
Recognising causality – image © Bosch mobility solutions
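One simple idea from this field can be sketched with synthetic data: under an additive-noise assumption, regressing the effect on the cause leaves residuals that are independent of the cause, while the reverse regression usually does not. The crude dependence score below is only a stand-in for the proper independence tests used in research, and all data are invented.

```python
# Toy cause-effect sketch (additive-noise idea): which regression direction
# leaves residuals that look independent of the input?
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, 1000)
y = x**3 + rng.normal(0.0, 1.0, 1000)     # ground truth: X causes Y

def dependence_after_regression(a, b, degree=5):
    """Fit b ~ polynomial(a); return a crude score of how much the residuals still depend on a."""
    residual = b - np.polyval(np.polyfit(a, b, degree), a)
    return abs(np.corrcoef(residual**2, a**2)[0, 1])

score_xy = dependence_after_regression(x, y)   # hypothesis: X -> Y
score_yx = dependence_after_regression(y, x)   # hypothesis: Y -> X
print("inferred direction:", "X -> Y" if score_xy < score_yx else "Y -> X")
```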
Perfect interaction
For major film markets like Germany, films and TV series are dubbed. For this, translators don’t just have to reproduce the content of the spoken words correctly; the new text also has to match the lip movements and facial expressions of the actors. But that could soon change: an AI technology developed at the Max Planck Institute for Informatics called “Deep Video Portraits” makes it possible to adapt the actors’ facial expressions to the best translation. To achieve this, the facial movements and head positions of the voiceover actors are recorded, and the system applies these to the actors in the film. The result is that facial expression, gaze, head position and even a wink fit perfectly with the spoken word. But similar techniques are also used to fake media content – so-called “deepfakes”. Today, any statement, no matter how absurd, can be put into the mouths of politicians, for instance. So we need to get used to the idea that even apparently objective evidence has to be viewed critically.
  Facial recognition © Max Planck Institute for Intelligent Systems, Stuttgart
Learning languages with AI
Online language courses are “ten a penny”, but what’s on offer frequently varies widely in quality and price. Courses in which learners receive plenty of feedback from tutors are particularly likely to be successful, but these courses are expensive. The Weizenbaum Institute – a collaborative research project in Berlin and Brandenburg – is working with the Goethe-Institut to develop AI that enables the most efficient use of tutor time, concentrating on areas such as composing texts and learning correct pronunciation. One feature of the program is that it can test both new vocabulary and the correct use of recently learned grammar, even in freely formulated texts. It can even detect whether students have translated a text themselves or whether they are “cheating” by using translation software. In this way, AI can take on some of the routine tasks normally performed by tutors.

da Vinci with a fine touch
These days, most human-machine interfaces focus on hearing and seeing. But Katherine Kuchenbecker and her team at the Max Planck Institute for Intelligent Systems in Stuttgart are convinced that, for many areas of application, robots need better tactile interaction skills as well as a higher level of social intelligence. For this reason, the scientists are teaching robots to experience their environment through touch. This is just as important for robots’ interaction with people, such as those who require care, as it is for applications involving remote-controlled robots. Kuchenbecker has been enhancing the surgical robot “da Vinci”, which enables surgeons to perform operations even from a remote location. The AI system transmits the specialist’s movements to the far-away robot. Thanks to the system’s new functions, doctors can now not only see on the screen what the robot is doing, but also feel it directly.

© Max Planck Institute for Intelligent Systems, Stuttgart
“With the help of AI we can turn good surgeons into superb surgeons.”
Katherine Kuchenbecker, Director at the Max Planck Institute for Intelligent Systems in Stuttgart


 

Cooperation partners

Goethe-Institut · Max Planck Society