AI in society
Scientists regard artificial intelligence (AI) as a key technology, and we can expect to see its use in all areas of society. AI can improve the quality of life of many people and help to overcome global challenges such as climate change and health crises.
Vast sums of money are already being made available all over the world for the development of AI systems, which illustrates the huge economic potential of AI. However, along with the increasing use of AI-based applications, concern is also growing, for instance in relation to human values such as fairness, freedom, data protection, safety and legal liability. Probably no other scientific development poses so clearly and comprehensively the question of how we want to situate our technological capabilities in a social context. What is certain is that this technology has already fundamentally changed our day-to-day life, and will continue to do so.
Morality for machines
Algorithms change the love lives of many people through dating apps; they manage the smart home, make buying decisions and influence public debate. AI will provide childcare, look after the sick, award jobs and loans, and make life-or-death decisions in autonomous weapons. Intelligent machines will develop their own behaviour patterns, which can neither be explicitly programmed nor explained by traditional behavioural research. But is ethical practice without consciousness and conscience even conceivable? And how can we develop AI that serves humans without harming them? Many AI experts are convinced that only a new research area can offer answers to these questions: “Machine Behaviour”. At any rate, one thing is clear: we must resolve fundamental questions of ethics and morals today if machines are to behave according to these principles in the future.
The Moral Machine
Iyad Rahwan and his research groups work at the Max Planck Institute for Human Development in Berlin and at the MIT Media Lab in Boston. His project “The Moral Machine” is the biggest study so far in the field of machine ethics. The interactive survey investigates the ethical reasoning behind people’s decisions in different regions of the world, and whether behavioural rules for AI can be derived from these findings. How should a self-driving car behave if a serious accident cannot be prevented? The AI has to decide in which direction to steer the car – and therefore who survives the crash. At first glance, all those questioned want to save as many human lives as possible, prioritising children as well as people who follow the highway code. But on closer inspection it is clear that there are no globally applicable values. For instance, respondents from France and South America primarily wanted to save women and children rather than men, the Japanese also favoured the elderly – and most Germans did not want to intervene at all, preferring to let “fate” decide who has to die.