For several decades, researchers have been working on developing artificial intelligences (AI) that can solve problems independently.
In the beginning, researchers worked on rule-based, symbolic AI. But this form of AI is extremely limited: it is only useful in fields in which clear rules can be defined for all conceivable situations. Significant progress has been made since the 1980s using self-learning programs.
Machine learning means that a computer learns through training examples and experience how a decision needs to be made – without being programmed for a specific problem solution. Special algorithms learn from training data and develop models they can then also apply to new data that they haven’t encountered before. When self-learning machines are trained with large numbers of examples, they autonomously develop a decision-making process that becomes the basis for generalisation. How such self-learning programs arrive at their decisions often can no longer be retraced, even by the programmers. Depending on complexity, a distinction is made between several approaches to machine learning: supervised learning, unsupervised learning, reinforcement learning and deep learning.
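The core idea – that the program is never told the rule, but infers it from labelled examples and then applies it to unseen data – can be illustrated with a deliberately tiny sketch (the data and the "small"/"large" task are hypothetical, chosen only for illustration):

```python
# Minimal sketch of learning from examples: the decision boundary is
# never written into the code; it is inferred from labelled training data.

def train_threshold(examples):
    """Learn a single threshold separating two classes from (value, label) pairs."""
    small = [v for v, label in examples if label == "small"]
    large = [v for v, label in examples if label == "large"]
    # Place the boundary midway between the largest "small" value
    # and the smallest "large" value seen in training.
    return (max(small) + min(large)) / 2

def predict(threshold, value):
    return "small" if value < threshold else "large"

training_data = [(1.0, "small"), (2.0, "small"), (8.0, "large"), (9.0, "large")]
boundary = train_threshold(training_data)

print(boundary)                # 5.0
print(predict(boundary, 3.0))  # "small" -- a value never seen in training
```

Here the learned model is a single number, so its decision is still easy to retrace; in real systems with millions of learned parameters, that traceability is exactly what is lost.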
“We already have machines that can learn quite well, but we don’t yet have machines that can think. Developing this kind of machine is the big challenge.”
Bernhard Schölkopf, Director at the Max Planck Institute for Intelligent Systems in Tübingen
How does AI learn?
Rule-based learning
Rule-based learning, also called symbolic AI, is based on logical models and is often referred to as “classic” AI. It makes decisions according to clear rules defined in advance in the code.
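In a rule-based system nothing is learned: every decision path is written down by the programmer in advance. A minimal sketch (the triage rules and thresholds are invented purely for illustration):

```python
# Sketch of rule-based (symbolic) AI: the program only follows rules
# defined in advance by a human -- it never learns from data.

def triage(temperature_c, has_rash):
    if temperature_c >= 39.0:
        return "urgent"
    if temperature_c >= 37.5 and has_rash:
        return "see doctor"
    return "rest at home"

print(triage(39.5, False))  # urgent
print(triage(38.0, True))   # see doctor
```

The strength and the weakness are the same: the behaviour is fully transparent, but any situation the rules do not anticipate is handled wrongly or not at all.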
One example of this form of AI is Deep Blue – the computer program that first beat world chess champion Garry Kasparov in 1996. It works with symbolic AI and owes its playing strength mainly to its immense computing power. The chess software calculates on average 126 million positions per second. Deep Blue is not really intelligent – just very, very fast.
Supervised learning
With supervised learning, humans evaluate the training and test data and allocate them to categories. In the training phase the AI learns, for example, to identify images of cats correctly and call them “cat”. If an algorithm that has been trained to differentiate between dogs and cats is shown an image of an elephant, the AI considers that unsolvable. But when limited to a narrow field, these algorithms are very reliable and accurate, as long as there is plenty of high-quality training data.
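A sketch of the idea using a 1-nearest-neighbour classifier (the numeric "features" standing in for cat and dog images are entirely hypothetical – real systems learn from pixels, not two hand-picked numbers):

```python
# Illustrative supervised learning: humans supply the labels ("cat"/"dog");
# the algorithm assigns a new example the label of its most similar
# training example (1-nearest-neighbour).

import math

training = [
    ((4.0, 0.30), "cat"),   # hypothetical feature pairs, e.g. (weight, shape score)
    ((5.0, 0.25), "cat"),
    ((20.0, 0.10), "dog"),
    ((30.0, 0.12), "dog"),
]

def classify(features):
    # Find the training example closest to the new input (Euclidean distance).
    nearest = min(training, key=lambda ex: math.dist(ex[0], features))
    return nearest[1]

print(classify((4.5, 0.28)))   # cat
print(classify((25.0, 0.11)))  # dog
```

Note the limitation described above: an elephant's features would still be forced into "cat" or "dog" – the classifier has no way to answer "neither".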
Analysis of images using learning algorithms already plays a key role in diagnostic imaging. Several studies show that AI can diagnose faster and often more accurately than many doctors, for instance when assessing skin cancer. The best results are achieved when humans and AI work together: first the AI assesses whether an image shows skin cancer or a harmless skin change; the specialist consultants then decide on the treatment.
Unsupervised learning
Supplying the algorithm with unfiltered raw training data is called unsupervised learning. The program independently searches for common features and differences in the data. The goal is to identify interesting and matching patterns. However, this sometimes leads to mistakes, especially if the AI detects similarities mainly in the image background and therefore reaches incorrect conclusions. For instance, if the AI learns what a “wolf” is solely from pictures of wolves in the snow, it will also call a different animal pictured in the snow a “wolf”.
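The "searching for common features without labels" step can be sketched with a simple clustering algorithm. Here, a toy one-dimensional k-means groups raw numbers into two clusters purely by similarity – nobody tells the program what the groups mean (the data values are invented for illustration):

```python
# Sketch of unsupervised learning: no labels are given. A simple
# 1-D k-means groups the raw data into two clusters on its own.

def kmeans_1d(data, centers, steps=10):
    clusters = [[], []]
    for _ in range(steps):
        clusters = [[], []]
        for x in data:
            # Assign each point to the nearest cluster centre.
            idx = 0 if abs(x - centers[0]) <= abs(x - centers[1]) else 1
            clusters[idx].append(x)
        # Move each centre to the mean of its assigned points.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

data = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]
centers, clusters = kmeans_1d(data, [0.0, 10.0])
print([round(c, 2) for c in centers])  # [1.0, 9.07]
```

The wolf-in-the-snow problem maps directly onto this: the algorithm groups by whatever similarity dominates the data, and it cannot tell whether that similarity is the animal or the background.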
Pattern recognition through self-learning networks can help researchers to see more: fluorescence microscopy of living cells often has to be performed with low levels of light, because the organisms being investigated would otherwise be damaged. Self-learning image restoration software analyses these poorly illuminated, hard-to-read microscopic images and compares them with patterns from known images, which then allows the “hidden” image content to become visible.
Reinforcement learning
In reinforcement learning, the learning system makes decisions independently and receives positive or negative feedback for each action; successful strategies are adopted as a blueprint for future behaviour. In this way the algorithm learns to estimate increasingly accurately how successful individual actions are in different situations. Deep learning is a machine-learning method based on artificial neural networks that imitate the brain; it is often combined with reinforcement learning. A neural network like this consists of multiple layers. The individual layers are made up of many artificial neurons that are connected with each other and react to the neurons in the previous layer. The larger the network, the more complex the tasks that can be addressed.
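The trial-and-feedback loop can be sketched with a minimal two-action example (the environment and reward values are invented for illustration): the agent tries actions, receives positive or negative feedback, and gradually shifts its estimates toward the action that pays off.

```python
# Sketch of reinforcement learning: estimate each action's value from
# positive/negative feedback and increasingly prefer the better action.

import random

random.seed(0)  # fixed seed so the run is reproducible

def environment(action):
    # Hypothetical feedback: action "b" is rewarded, "a" is penalised.
    return 1.0 if action == "b" else -1.0

values = {"a": 0.0, "b": 0.0}   # estimated value of each action
alpha = 0.1                      # learning rate

for step in range(200):
    # Occasionally explore at random, otherwise exploit the best-known action.
    if random.random() < 0.1:
        action = random.choice(["a", "b"])
    else:
        action = max(values, key=values.get)
    reward = environment(action)
    # Nudge the estimate toward the feedback just received.
    values[action] += alpha * (reward - values[action])

print(max(values, key=values.get))  # "b"
```

After training, the estimate for "b" is clearly higher than for "a" – the accumulated feedback has become the blueprint for future behaviour, exactly the mechanism described above.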
Speech and text recognition
One of the fields where deep learning is currently applied is speech and text recognition. Both the online translation service DeepL, developed in Cologne, and the simultaneous translation program Lecture Translator of the Karlsruhe Institute of Technology work with artificial neural networks, for example.
Facial recognition
Nowadays, artificial neural networks are used for facial recognition as well. In the British capital London alone, there are over 600,000 cameras installed, many of which are also used for facial recognition. The technology is supposed to help the police solve or even prevent crimes. But how considerable are the dangers of surveillance on this level? To what extent is it compatible with democracy and citizens’ rights?
Self-driving Cars
For decades, car manufacturers have been working on automating driving using various driver assistance systems. Much of this is already a reality, such as automatic speed adjustment or self-parking systems. The ultimate goal is autonomous driving, where computer programs use AI to take complete control of the vehicle and the humans are merely passengers. On the one hand, this would prevent many road accidents, because today a large proportion of accidents are caused by human error. On the other hand, fundamental questions remain: who is held responsible in a collision involving a driverless vehicle?
Examples from AI research
Come research with me
The little four-legged robot SOLO 8 comes from the robotics labs at the Max Planck Institute for Intelligent Systems in Tübingen and Stuttgart. The research robot is an open-source project, for which the build instructions and GitHub documentation are publicly accessible. Most of the components are made with a 3D printer, and the rest are readily available to buy. This allows researchers all over the world to recreate SOLO 8 cheaply and easily and develop it further. The concept behind the project is that every robotics research lab can use the technology, thus creating a global research platform built on common standards. When large numbers of scientists carry out experiments on the same platform, comparable data can be obtained. This facilitates faster progress in the field of robotics.
da Vinci with a fine touch
These days, most human-machine interfaces are focused on hearing and seeing. But Katherine Kuchenbecker and her team at the Max Planck Institute for Intelligent Systems in Stuttgart are convinced that for many areas of application, robots need better tactile interaction skills as well as higher levels of social intelligence. For this reason the scientists are teaching robots to experience their environment through touch. This is just as important for robots’ interaction with people, such as those who require care, as for applications involving remote-controlled robots. Kuchenbecker has been enhancing the surgical robot “da Vinci”, which enables surgeons to perform operations even from a remote location. The system transmits the specialist’s movements to the far-away robot. Thanks to its new functions, doctors can now not only see on the screen what the robot is doing, but also feel it directly.