Thanks to new findings in artificial intelligence research, the performance of machine translation has improved considerably in recent years. How does this technology work, what are its benefits, and what are its possible consequences for the educational ideal of multilingualism?
Anyone who has put a machine translation system such as Google Translate through its paces recently is likely to have been pleasantly surprised by the quality of the translation. While it was often enough the case in the past that incomprehensible texts or ridiculous translation errors gave rise to considerable amusement, nowadays the software suggests translations that not only correctly reproduce the content of the source text but are also convincing in terms of their linguistic style. So what has happened?
The successful advance of neural networks
The answer can be found in artificial intelligence research, which has made some major advances in recent years thanks to the use of what are known as artificial neural networks. Back in the 1940s, researchers had already come up with the idea of reproducing the structure of the human brain, with its billions of interconnected neurons, in the form of artificial networks in a computer. However, it is only recently that the two key prerequisites for turning this idea into reality have been met: sufficiently high processing power, and the availability of large amounts of data (big data) with which to train neural networks. This approach, also termed deep learning, differs from earlier approaches in artificial intelligence that sought to solve tasks with explicitly programmed algorithms. Artificial neural networks, by contrast, analyse large data sets to identify patterns or regularities and on this basis create their own rules – rules that are not normally comprehensible to a human observer – for completing a particular task.
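To make the contrast concrete, the following minimal sketch (written in Python with the open-source PyTorch library, chosen here purely for illustration) trains a tiny artificial neural network on a handful of example data points. Rather than being handed an explicit rule, the network derives its own rule from the examples, and that rule exists only in the form of learned numerical weights that no human programmed.

import torch
import torch.nn as nn

# Example data: the XOR pattern, a simple task that cannot be captured
# by a single explicitly programmed linear rule.
inputs = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
targets = torch.tensor([[0.], [1.], [1.], [0.]])

# A small network of artificial "neurons": two layers with a non-linearity.
model = nn.Sequential(
    nn.Linear(2, 8),
    nn.Tanh(),
    nn.Linear(8, 1),
    nn.Sigmoid(),
)

optimizer = torch.optim.Adam(model.parameters(), lr=0.05)
loss_fn = nn.BCELoss()

# Training: the network compares its guesses with the desired outputs and
# gradually adjusts its internal weights - the "rules" it infers from the data.
for step in range(2000):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()

# After training, the network reproduces a pattern it was never told explicitly.
print(model(inputs).round())

The same principle, scaled up to networks with millions of parameters and vastly larger training sets, underlies the translation systems described below.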
The difference between the two approaches outlined here can also be found in machine translation. So-called rule-based machine translation analyses a sentence in the source language strictly according to defined grammatical and lexical rules and, applying corresponding rules in the target language, generates a suggested translation. However, natural languages are hugely complex structures which, unlike programming languages for example, exhibit only a limited adherence to rules and are characterized by numerous exceptions and in some cases also contradictions. Accordingly, the success of this rule-based approach tends to be modest. In neural machine translation, on the other hand, a neural network is trained by feeding it large quantities of source texts and their translations; it then extrapolates certain translation patterns from these data. In principle, this involves building an artificial translator’s brain that learns to translate new texts on its own on the basis of the data it is fed.
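What such a trained translation network looks like from the user's perspective can be sketched in a few lines. The example below assumes the open-source Hugging Face transformers library and the freely available Helsinki-NLP/opus-mt-en-de model, an English-to-German system whose behaviour was learned entirely from large collections of sentence pairs; neither is mentioned in the text above, and both serve purely as an illustration.

from transformers import pipeline

# Load a pretrained neural machine translation model. Its entire "knowledge"
# consists of weights extrapolated from millions of source sentences and
# their human translations.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-de")

result = translator("The success of this new approach is remarkable.")
print(result[0]["translation_text"])  # the model's suggested German translation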
How machine translation performs today
Neural translation programs build an artificial translator’s brain. | Photo: Alfred Pasieka © mauritius images - Science Photo Library
The success of this new approach is remarkable. Neural machine translation was launched a mere four years ago and in no time at all had left previous approaches far behind it in terms of quality. In March 2018, a team of researchers at Microsoft even announced that the company’s neural translation system had reached “human parity” when translating newspaper articles from Chinese into English – in other words that it had achieved a quality of translation that would match any produced by a human translator.
Such claims should be taken with a pinch of salt, however, and cannot by any means be universally applied to all language pairs and subject areas. Generally speaking, neural machine translation still faces some fundamental challenges. One central problem is the inherent ambiguity of natural languages. Humans always interpret statements within their specific contexts. For example, it is not clear from the statement “I arrived at the bank” whether the speaker has arrived at the bank of a river or at a financial institution. The listener must interpret what has been said. A machine translation program is blind to context, on the other hand, and thus prone in principle to mistranslation. The dream of a universal translator capable of overcoming the Babylonian confusion of languages has not yet become a reality, in other words.
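A deliberately simplified sketch makes the problem visible: a context-blind, word-by-word lookup (a toy stand-in for a real translation system, with just two hand-picked senses) has no principled way of deciding whether the English word "bank" should become the German "Bank" (the financial institution) or "Ufer" (the bank of a river).

# Toy illustration only: two German renderings of the English word "bank".
SENSES = {
    "bank": {
        "financial institution": "Bank",
        "edge of a river": "Ufer",
    },
}

def context_blind_lookup(word: str) -> str:
    """Choose a translation without consulting the context, as a purely
    word-based system would; the choice is essentially a guess."""
    senses = SENSES.get(word)
    if senses is None:
        return word
    return next(iter(senses.values()))  # simply takes the first listed sense

# For "I arrived at the bank" after a boat trip, this guess may well be wrong.
print(context_blind_lookup("bank"))

Real systems weigh the surrounding sentence rather than single words, but whenever the deciding context lies outside the text itself, they too are reduced to guessing.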
Machine translation for the benefit of society
While machine translation continues to battle various problems, it nonetheless remains a useful tool when it comes to gisting translation. Nowadays, internet-capable smartphones and other mobile devices give users access at any time to cloud-based translation services such as Google Translate, which they can use to gain a rudimentary understanding of foreign-language texts. One interesting new development is in-ear translators such as Google’s Pixel Buds. These wireless earphones use speech-recognition technology to turn a foreign-language remark into machine-readable text, have it translated into the desired language by Google Translate, and then play the result to the user by voice output. This means that machine translation can also be used for verbal communication. Nonetheless, the system’s susceptibility to error is increased considerably by the triple burden of speech recognition, translation and voice output, as well as by the fact that everyday communication is strongly tied to a particular situation and spoken language tends to be less structured.
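The three-stage chain can be sketched as follows. This is, of course, not how Google's own pipeline is implemented; the sketch merely illustrates the principle and assumes the open-source SpeechRecognition, transformers and gTTS packages plus a recorded German remark in a file named remark_de.wav (file name and model choice are illustrative).

import speech_recognition as sr
from transformers import pipeline
from gtts import gTTS

# Stage 1: speech recognition turns the spoken remark into machine-readable text.
recognizer = sr.Recognizer()
with sr.AudioFile("remark_de.wav") as source:
    audio = recognizer.record(source)
german_text = recognizer.recognize_google(audio, language="de-DE")

# Stage 2: machine translation into the desired language.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-de-en")
english_text = translator(german_text)[0]["translation_text"]

# Stage 3: voice output plays the translation back to the user.
gTTS(english_text, lang="en").save("translated_remark.mp3")

Because the stages run one after the other, a recognition error in the first step is translated and spoken aloud just as faithfully as a correct transcription, which is one reason the chain as a whole is more fragile than text-only translation.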
Furthermore, there are institutional efforts to make machine translation usable for the benefit of society. The German Research Center for Artificial Intelligence, for example, is planning to apply for EU funding for a Human Language Project, one of the goals of which is to help people with lower levels of education, older people and people of migrant origin to participate more actively in a multilingual Europe. In addition, researchers involved in the EU’s INTERACT (International Network on Crisis Translation) project are currently exploring how machine translation could be used in crisis scenarios in which the fastest possible communication across language barriers is of key importance.
A machine is not capable of nuanced communication across language borders. | Photo: stm © photocase.de
The advantages of multilingualism
The foregoing might suggest that learning foreign languages will in future become a luxury that only the most linguistically gifted will still attain, but this is not in fact the case. The quality problems that continue to face machine translation have already been mentioned. Nuanced communication between speakers of different languages – which involves not only a linguistic but also a cultural exchange – will probably never be performed by a machine.
The advantages of multilingualism are all too obvious, especially when one considers the machine-based alternative – however tempting it may appear. Those who choose not to acquire a foreign language, even at the most rudimentary level, and engage with foreign cultures only via the medium of a machine, are denying themselves not only the pleasure of direct and unfiltered communication with others, but are also imprisoning themselves within their own language. After all, learning a new language also opens up a new view of the world and gives one a different way to put reality into words. In today’s globalized world in particular, with its numerous opportunities for misunderstanding and resulting conflicts, such intercultural empathy is indispensable.