In journalism, on social media, in novels and in the classroom: artificial intelligence shapes the language of our everyday lives. It can reproduce bias, but it can also help combat discrimination.
This dossier compiles different perspectives on how artificial intelligence affects language and text production, and on why discriminatory language must already be addressed at school. These articles marked the start of a series of events with which we would like to open up the topics of Artificially Correct and take the discussion around AI and text to where text production begins for many of us: at school.
Artificial Intelligence & Text production
While we do not know what tomorrow’s algorithms will tell us, their answers will be based on the questions we ask and the actions we take today. It is our duty to ourselves, and to the future, to make sure they are good ones.
Artificial intelligence and fake news When robots make biased fake content on social media
Artificial intelligence systems are being deployed on a massive scale by organisations worldwide. However, these automated systems are not always used for good, and they do not always make the best decisions. In this article, I will highlight three types of biased robots, exemplified in three cases: disinforming social bots, biased AI tools, and synthetic profiles.
Artificial Intelligence in Journalism On the Hunt for Hidden Patterns
Artificial Intelligence (AI) already plays a key role in journalism today: algorithms find stories in large data sets and automatically generate thousands of texts. Very soon, AI could become part of the critical infrastructure of media production.
Communicating with AI Meta – A Chatbot Against Discrimination
Said Haider’s vision is to develop a “1-1-2” for discrimination incidents. With his team, the founder and CEO has developed Meta, the first chatbot to provide round-the-clock legal advice for people who are experiencing discrimination and looking for counselling.
The artificial author How AI could change how we write fact and fiction
Artificial intelligence (AI) could completely change the way that we write. AI can generate huge amounts of comprehensive content on any topic, filter fact from fiction, and provide a creative spark to poets and screenwriters. But how should we think about this revolutionary tool? And could computers’ endless output possibly reveal something hidden about ourselves?
Expert statements ‘The collaboration between humans and machines needs to be redefined.’
What kinds of bias can be found in texts that were created with the help of AI? And what solutions are there to mitigate or even avoid distortions of reality? We talked about this with five experts from the UK, Germany, the Netherlands, and Switzerland.
Language and power
School as an institution has contributed, and continues to contribute, to making certain people invisible. An environment must be created in which young people dare to come out and are not bullied for doing so; unfortunately, that is currently not the case.
Sovia Szymula
In conversation with Gesche Ipsen Translation can adjust the balance of power
In “Speaking and Being” Kübra Gümüşay explores the question of how language shapes our thinking and determines power relations in our society. The book has now been published in English translation. We talked to the translator, Gesche Ipsen, about her work on the book and the linguistic power of translation.
Sovia Szymula in conversation ‘It takes more than gendered teaching materials’
How do teachers deal with gender-sensitive language? Do non-binary people find their place in school? In this interview, student teacher and queer rapper Sovia Szymula a.k.a. LILA SOVIA talks about their personal experiences.
Racist language at school ‘Language creates reality’
In many schools, discriminatory language is commonplace. The activists of the Hamburg-based initiative We A.R.E. don't want to put up with it any longer. In an interview, they explained to our author what racist language does to children.
Many people routinely use machine translation services like Google Translate to translate text into another language. But these tools also often reflect social bias.
In conversation with Michal Měchura When the machine asks the human
Language technologist Michal Měchura has always wanted a tool that allows machine translators to ask humans questions about ambiguous phrases. With Fairslator, he has developed such a tool himself. Michal talks to us about bias and ambiguity in automatic translations.
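To make the idea concrete, here is a minimal sketch of human-in-the-loop disambiguation of the kind described above. It is not Fairslator’s actual code: the ambiguity list, the German renderings and all names are invented for illustration, assuming an English-to-German translation scenario.

```python
# Minimal, hypothetical sketch of human-in-the-loop disambiguation for
# machine translation, in the spirit of what Fairslator does. This is not
# Fairslator's actual code; words, renderings and names are invented.
import re

# English words whose German translation depends on information the
# source sentence does not contain.
AMBIGUITIES = {
    "you": {
        "question": "Is 'you' informal singular, informal plural, or formal?",
        "options": {"1": "du", "2": "ihr", "3": "Sie"},
    },
    "doctor": {
        "question": "Is the doctor male, female, or unspecified?",
        "options": {"1": "Arzt", "2": "Aerztin", "3": "Arzt oder Aerztin"},
    },
}

def disambiguate(sentence: str) -> dict[str, str]:
    """Ask the human how each ambiguous word should be rendered."""
    tokens = set(re.findall(r"[a-z']+", sentence.lower()))
    choices = {}
    for word, spec in AMBIGUITIES.items():
        if word in tokens:
            print(f"Ambiguity in '{word}': {spec['question']}")
            for key, rendering in spec["options"].items():
                print(f"  {key}: {rendering}")
            answer = input("Your choice: ").strip()
            # Fall back to the neutral/unspecified reading on bad input.
            choices[word] = spec["options"].get(answer, spec["options"]["3"])
    return choices

if __name__ == "__main__":
    sentence = "Are you a doctor?"
    print(f"Source: {sentence}")
    print("Chosen renderings:", disambiguate(sentence))
```

The design choice worth noting is that the machine does not silently pick a default reading: wherever the source text underdetermines the translation, the human is asked before any output is produced.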
Othering ~ Andern 10 terms related to identities that require sensitivity in translation
Translation is incredibly challenging: translating sensitively demands that historical, geographical, political and social contexts be taken into account.
In their article, Lucy Gasser and Anna von Rath, literary scholars and editors of the platform “poco.lit.”, look specifically at English and German and discuss 10 terms related to identity constructions that are difficult to translate.
Race ≠ Rasse 10 terms related to race that require sensitivity in translation
Translation is incredibly challenging: translating sensitively demands that historical, geographical, political and social contexts be taken into account.
In their article, Lucy Gasser and Anna von Rath, literary scholars and editors of the platform “poco.lit.”, look specifically at English and German and discuss 10 terms related to race that are difficult to translate.
Language defines the world. The Goethe-Institut stands for inclusive language, and thus for an inclusive world.
With Artificially Correct, we are working with experts to develop a tool that minimises bias in texts and strengthens a conscious approach to language.
Specifically, Artificially Correct deals with AI-based translation and writing tools and the biases (e.g. gender and racial bias) in the translations they generate. Artificially Correct creates an active network of people affected by this problem, including translators, publishers, activists and experiential experts, and identifies partners who will pursue the issue with us in the long term. We bring together perspectives, create awareness, share knowledge and stimulate discussion.
Summary of the results Artificially Correct Hackathon 2021
Between the 1st and 3rd of October 2021, translators, activists and software developers from all over the world came together to participate in the Goethe-Institut’s Artificially Correct Hackathon. Twelve teams had 52 hours (and lots of coffee) and one mission: to develop innovative, forward-looking ideas and projects to help tackle bias in language and machine translation.
In conversation with BiasByUs
At the Artificially Correct hackathon, BiasByUs created a website that acts as a ‘library’ of bias in translations. The collection is to be built up by crowdsourcing examples of bias found in machine translations. Watch them speak about the challenges and scope of bias in machine translation, the BiasByUs solution, their Artificially Correct hackathon experience, and more.
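As an illustration of what such a crowdsourced ‘library’ might store, here is a hypothetical sketch of a single entry. The field names and the example submission are invented for this dossier and are not BiasByUs’s actual data model.

```python
# Hypothetical sketch of a single entry in a crowdsourced bias 'library'.
# Field names and the example are invented; this is not BiasByUs's data model.
from dataclasses import dataclass

@dataclass
class BiasReport:
    source_text: str          # the original sentence a user submits
    machine_translation: str  # the biased output of the MT system
    mt_system: str            # e.g. "Google Translate"
    language_pair: str        # e.g. "en-de"
    bias_type: str            # e.g. "gender", "race"
    note: str = ""            # the submitter's explanation of the problem

# Invented example submission: a masculine default despite a feminine pronoun.
report = BiasReport(
    source_text="The doctor said she would call back.",
    machine_translation="Der Arzt sagte, sie wuerde zurueckrufen.",
    mt_system="Google Translate",
    language_pair="en-de",
    bias_type="gender",
    note="Rendered as masculine 'Der Arzt' despite the pronoun 'she'.",
)
print(report)
```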
In conversation with Word 2 Vec
During the Artificially Correct hackathon, Word 2 Vec focused on a flaw shared by Google Translate and other deep learning systems: the struggle to comprehend complex content. As a solution, the team developed a translation bias reduction tool that identifies sentences susceptible to racial and gender bias. It analyses the wider context of a sentence and highlights sentences and phrases that could be susceptible to bias. Find out more about their solution and their hackathon experience!
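As an illustration of the general approach, and not of the team’s actual tool, here is a minimal sketch that flags sentences in which a machine translator would have to guess gender. The word lists and the per-sentence heuristic are invented; a real tool would, as described above, also analyse context across sentences.

```python
# Minimal sketch of flagging bias-susceptible sentences before translation.
# Illustrates the general approach only; word lists are invented examples.
import re

# Occupation nouns that are gender-neutral in English but typically force
# a gendered choice in languages such as German or French.
GENDER_FORCING_TERMS = {"nurse", "doctor", "engineer", "teacher", "cleaner"}

# Pronouns that resolve gender; if a sentence contains a gender-forcing
# term but no resolving pronoun, the machine translator has to guess.
RESOLVING_PRONOUNS = {"he", "she", "his", "her", "him", "hers"}

def flag_bias_susceptible(text: str) -> list[str]:
    """Return the sentences in which a translator would have to guess gender."""
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", text.strip()):
        words = set(re.findall(r"[a-z']+", sentence.lower()))
        if words & GENDER_FORCING_TERMS and not words & RESOLVING_PRONOUNS:
            flagged.append(sentence)
    return flagged

if __name__ == "__main__":
    sample = ("The doctor examined the patient. She signed the form. "
              "The nurse prepared the injection.")
    for sentence in flag_bias_susceptible(sample):
        print("Susceptible to gender bias:", sentence)
```

Run on the sample text, this flags the first and third sentences but not the second, where the pronoun ‘she’ resolves the gender within the sentence itself.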
Database BiasByUs What happened after the Hackathon?
Janiça Hackenbuchner from the BiasByUs team won our hackathon with the idea of a database that raises awareness of gender bias in machine translation. On medium.com she explains how this works and how it has progressed since the hackathon.