NECE-Festival 2024:
Media literacy in East Asia & Europe
Some of the panelists were familiar faces at NECE, like Hui-An Ho, born in Taiwan and now based in Germany. She is the Head of International Affairs at the Taiwan FactCheck Center and a contributor to “Facts & Contexts Matter”, a media literacy project run by bpb and the Goethe-Institut, for which she writes stories focusing on Taiwan and Hong Kong. Hui-An, who is actively involved in several cross-border collaborations, is currently spearheading efforts to establish Taiwanese and Asian fact-checking alliances. Alongside Noa Horiguchi, who was also in attendance, she is partnering with fact-checking organizations in Indonesia and Thailand to host the “Youth Verification Challenge 2024”, an initiative that combines engaging workshops with gamified contests designed to encourage young people to develop digital verification skills and critical thinking.
“Even though our societies face different contexts and challenges, the problems we encounter are often strikingly similar,” Hui-An said at the festival. “If we can monitor misinformation trends and narratives emerging in other societies early on and learn from their best practices, we can save a lot of time and avoid duplicating efforts.”

Hui-An Ho talking about her work as Head of International Affairs at the Taiwan FactCheck Center and as a contributor to the “Facts & Contexts Matter” project. | © Zentaro Imai
“The aim of this session is to promote media literacy and ask: ‘How do others do it?’” said Anja by way of introduction. “Together, we will highlight the specific challenges faced in these regions and the innovative ways they are being addressed.”

The “Facts & Contexts Matter: How Do Others Do It?” panel right before their session at the NECE Festival in Tirana on November 15. | © bpb/Goethe-Institut
The project sheds light on how distinct social, cultural, and historical contexts shape media literacy and fact-checking efforts, particularly in regions whose digital landscapes and social media ecosystems differ significantly from those in Europe, and how best practices developed there can transcend borders. Several panelists featured in the “Facts & Contexts Matter” mangas were now venturing beyond their national frames to share their experiences directly with some thirty European civic education practitioners. The session used a “speed-dating” format in which five speakers rotated among four tables, a setup that allowed every participant to engage with each speaker and fostered more in-depth discussions.

The panelists introducing themselves and outlining their work to other civic education practitioners at the festival. | © Leslie Klatte
The session began with Anna-Veronika leading the audience in a game called “Unlock My Phone”, developed by Kunsht, a popular Ukrainian science media team. Participants used clues to decipher phone passcodes, a playful way of highlighting the importance of data security. The game is part of Kunsht’s “FAKELESS” project, another media literacy initiative conducted in association with the Goethe-Institut. Launched in 2023, this digital exhibition uses interactive and innovative approaches to empower people of all ages to navigate news and media mindfully. The exhibition’s content is available in nine languages, including English, German and Turkish, making it accessible to a wide public.
“War has prompted us to reevaluate our approach in Ukraine, particularly when it comes to helping people develop media literacy,” explained Anna-Veronika. “This knowledge is essential for building resilience to disinformation and teaching people how to fact-check information.” Prior to Russia’s invasion in 2022, Anna-Veronika and her Kyiv-based team worked hard to combat misinformation about COVID-19 as well as various conspiracy theories.
While moving from one table to another in the “speed-dating” session, Anna-Veronika was frequently asked about her team’s efforts to combat science disinformation. She told them about an initiative called “Science at Risk”, a database documenting the damage and destruction of research infrastructure in Ukraine, as well as the stories of individuals and scientific institutions affected.
“Have Ukrainians improved their fact-checking skills?” one participant asked. “Yes!” replied Anna-Veronika confidently. “Fact-checking and media literacy skills are like exercise: you need to train your muscles regularly to maintain and build strength,” she explained. “As Ukrainians have become more familiar over time with the kind of language used in fake news, their everyday experiences have gradually led to the development of new skills for resisting disinformation.”

Anna-Veronika guiding visitors through the “Fakeless” exhibition at the NECE Festival. | © Anna-Veronika Krasnopolska
According to Noa, statistics show that young people in Japan are the demographic most heavily affected by misinformation. Media literacy teaching in Japan is frequently criticized for being dull and rigid: the methods are often one-sided, and some schools even invite police officers to give lectures on misinformation, a strategy that does not engage students effectively. To address this problem, Noa and his team set about developing a fun and engaging way to promote fact-checking.
“I’m a trainee at the Japan Fact-Check Center,” Noa said. “The sheer volume of information is overwhelming, and the pace of fact-checking is too slow to keep up, so fact-checking alone is simply insufficient. That’s why we wanted to explore a different approach.” Noa noted that young people in Japan generally show very little interest in politics but are heavily influenced by popular culture, and much of the misinformation they encounter originates from social media influencers and celebrities.
The game they created, “Ray’s Blog”, has since gone international, expanding its reach beyond Japan. Noa’s team has presented it in several countries, engaging over seven thousand students worldwide. Available in Japanese, English and Chinese, the game is accessible to a broad international audience.

Noa presenting “Ray’s Blog”, an immersive game designed for young people to improve their fact-checking skills. | © Leslie Klatte
One focus of the panel discussion was the role of chatbots in generating fake news and the challenges posed by AI-driven misinformation and hate speech in South Korea and Taiwan. Audience members also voiced concerns about the potential harms of AI chatbots.
Yubeen Kwon, a graduate researcher from South Korea, has been studying the downfall and resurgence of Iruda, a popular Korean chatbot, as well as the ethical challenges posed by AI in general. Iruda came out in late 2020, almost two years before ChatGPT, and was initially designed to provide lonely South Koreans with a lifelike conversational companion. However, it soon sparked controversy by spreading discriminatory and hateful speech targeting minorities, as well as sexually inappropriate content.
Furthermore, because the original version of Iruda was trained on a hundred million chat logs drawn from the relationship advice platform “Science of Love Service”, it raised significant privacy-related concerns. Public backlash ultimately resulted in the suspension of Iruda just three weeks after its launch.
However, Yubeen continued, Iruda was relaunched with significant changes in 2022: the new version uses a generative AI model and has been operating smoothly since its reintroduction. Reflecting on the differences between the two versions, Yubeen observed that the emergence of ChatGPT has now familiarized the public with generative AI, helping people better understand and accept the limitations of chatbots, including their tendency to produce inaccurate content. In addition, unlike the original version trained on data drawn from real private conversations, the new iteration of Iruda generates conversations from scratch.
During the “speed-dating” sessions, participants were particularly struck by the ethical issues involved in using private data to train the first version of Iruda, as well as by the intensity of gender conflicts in South Korea. Yubeen explained that, to address these concerns, the startup behind Iruda partnered with a government agency specializing in privacy and data protection to lay down ethical guidelines for AI chatbots. “This collaboration, along with some technological revisions, has been a key factor in Iruda’s continued success and widespread adoption,” she added.

Yubeen sharing experiences in AI ethics research with civic educators. | © Leslie Klatte
Billion, who co-founded Cofacts, a collaborative fact-checking chatbot in Taiwan, explained: “We’ve observed misinformation and political propaganda coming from other countries, and we believe that using AI to accelerate fact-checking can be helpful and efficient. Human fact-checkers get tired and exhausted, so we use AI and machine learning to automate parts of the process, allowing the chatbot to handle repetitive tasks.”
Cofacts’ work and impact reach beyond Taiwan. Because its code and data are entirely open source, Cofacts’ resources are available to misinformation researchers in Taiwan and have even enabled organizations in Thailand to develop a Thai version of Cofacts. The service now supports Chinese, English and Thai.
“I’m actually a shy person,” Billion confessed. “But while participating in the marriage equality movement, I witnessed the spread of misinformation, which is what prompted me to start up Cofacts as a public participation project.” In addition to its online fact-checking efforts, Cofacts also organizes in-person workshops to teach the public how to use fact-checking tools effectively.

Billion presenting her cross-border fact-checking initiatives to civic educators. | © Leslie Klatte

Happy faces of the panelists and civic educators after an inspirational 90-minute discussion of media literacy practices and challenges. | © Goethe-Institut Korea/bpb
Author: Hui-An Ho
Collaboration: Anja Troelenberg