Civil society in East Asia: the fight against disinformation

A person holding a cell phone, with a drone in the background, illustrating the connection between technology and innovation. © Yukari Mishima

Civil society organizations in South Korea, Japan and Taiwan play a central role in protecting information integrity and strengthening democratic resilience. They focus on technological innovation, citizen participation, international cooperation and educational initiatives. This article sheds light on their multifaceted approach and introduces key players.

Technological advances such as online collaboration tools, specialized apps, and chatbots play a crucial role in mitigating the impact of mis- and disinformation. Collaboration tools enable communities to work together efficiently, sharing information and even collecting data to inform public debate. Apps can provide users with immediate access to verified information and fact-checking services. Chatbots can interact with users, reviewing suspicious content, answering queries, and fact-checking to counteract false claims. These technologies facilitate rapid response to misinformation, raise awareness, improve media literacy, enhance citizen engagement and foster a better understanding of the information environment.

A “News Helper” project was started in Taiwan in 2013. Ronny Wang, a software engineer, proposed the idea at the 4th g0v[1] hackathon. News Helper was a browser extension designed to raise users’ awareness of suspicious information. Available for Google Chrome and Firefox, it automatically analyzed content as users browsed news websites, popping up warning notices whenever anything suspicious came up. Wang eventually discontinued the service as news consumption shifted from computers to mobile devices, and the g0v community launched other projects such as “Cofacts,” a fact-checking platform built on crowd collaboration. The Cofacts chatbot, integrated into Taiwan’s popular LINE messaging app (which covers 90% of the market), enables users to forward suspicious messages easily, searches a crowdsourced hoax database for matches, and instantly provides fact-checking feedback.
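The core loop of such a chatbot (receive a forwarded message, match it against a hoax database, reply with the fact-check or queue it for volunteers) can be sketched in a few lines. This is an illustrative simplification, not Cofacts' actual code: the database entries, threshold, and similarity measure are all invented for the example, and the real system relies on crowdsourced replies and far more robust text matching.

```python
from difflib import SequenceMatcher

# Hypothetical in-memory hoax database: known hoax text -> fact-check reply.
# A real system would store crowdsourced replies with provenance and votes.
HOAX_DB = {
    "drinking hot water cures the virus": "False: there is no evidence hot water cures viral infections.",
    "the election was moved to next month": "False: the election date is unchanged.",
}

def lookup_reply(message: str, threshold: float = 0.6) -> str:
    """Return the fact-check reply for the closest known hoax, if similar enough."""
    best_score, best_reply = 0.0, None
    for hoax, reply in HOAX_DB.items():
        score = SequenceMatcher(None, message.lower(), hoax).ratio()
        if score > best_score:
            best_score, best_reply = score, reply
    if best_score >= threshold and best_reply:
        return best_reply
    # No confident match: hand the message off to human fact-checkers.
    return "No match found - your message has been queued for volunteer fact-checkers."
```

The key design point this sketch captures is graceful degradation: when the database has no confident match, the message is not ignored but routed to the crowd, which is how a crowdsourced database grows.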

Cofacts is a collaborative platform that enables individuals to fact-check and contribute to a shared knowledge base. The project underscores the importance of diverse perspectives in addressing misinformation. It is designed to gather and present various opinions, aiding users in making informed decisions amidst conflicting information. Cofacts encourages users to evaluate information independently rather than relying on a single authoritative source, based on the conviction that no single entity holds the absolute truth. The fact-checking results can be reviewed by other participants, and the platform also includes fact-checking results from other organizations, providing a forum for public discussion.

Parti Co-op, a South Korean cooperative, develops innovative online collaboration platforms that go beyond citizen-based fact-checking. These platforms serve a dual purpose: facilitating the collection of comprehensive information and datasets about controversial incidents and generally enhancing transparency by establishing a factual foundation for public discourse.

A woman uses a chatbot to communicate with a robot. © Yukari Mishima

With a little help from AI...

A well-known proverb in Taiwan says: “Spreading rumors takes just one mouth but refuting them can wear out your legs.” So, the aforementioned Cofacts has integrated AI technology into its fact-checking system. AI has come to play an increasingly important role in mitigating misinformation and hate speech.

This trend is conspicuous in Japan as well: In 2018, the FactCheck Initiative Japan (FIJ) incorporated an AI-powered Fact-Checking Console (FCC) into its fact-checking network. Developed in collaboration with SmartNews, Inc. and the Tohoku University Natural Language Processing Lab, its FCC system utilizes artificial intelligence and natural language processing (NLP) to automatically detect dubious information on social media platforms like X (formerly known as Twitter). This technology aids FIJ staff and volunteers in continuously monitoring and identifying false information, which is then investigated and fact-checked by media partners.
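As a toy illustration of the kind of automated triage such a console performs, dubious posts can be surfaced for human reviewers with simple pattern heuristics. This is only a sketch of the general idea: FIJ's actual FCC uses trained NLP models rather than keyword lists, and the patterns and posts below are invented.

```python
import re

# Invented heuristic red flags; a production system would use a trained classifier.
DUBIOUS_PATTERNS = [
    r"share before (it's|it is) deleted",
    r"the media won'?t tell you",
    r"100% proven",
]

def dubiousness_score(post: str) -> int:
    """Count how many heuristic red flags a social-media post trips."""
    text = post.lower()
    return sum(1 for pattern in DUBIOUS_PATTERNS if re.search(pattern, text))

def flag_for_review(posts: list[str], min_score: int = 1) -> list[str]:
    """Return posts tripping at least min_score flags, for human fact-checkers."""
    return [post for post in posts if dubiousness_score(post) >= min_score]
```

Note that, as in FIJ's workflow, the automation only *flags* candidates; the investigation and verdict remain with human fact-checkers and media partners.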

In addition to fact-checking, some civil society organizations have committed to looking at the bigger picture of the information environment. The Taiwan Information Environment Research Center (originally called Information Operations Research Group (IORG)) leverages its database, a vast repository of online content that processes over 12 million textual statements daily, to trace the sources of suspicious online content and track the ways in which specific narratives influence the public sphere. By means of a collaborative model combining human expertise with AI, the team analyzes the information ecosystem, identifying and dissecting manipulative and specious narratives.
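In spirit, tracing where a narrative comes from reduces to aggregating labeled statements by narrative and by source. The minimal sketch below uses invented source and narrative labels (it is not TIERC's actual pipeline, where the labeling itself is the hard, AI-assisted part):

```python
from collections import Counter

# Invented (source, narrative) labels such as an upstream classifier might emit.
statements = [
    ("forum_a", "vaccine_distrust"),
    ("forum_a", "vaccine_distrust"),
    ("news_b", "election_fraud"),
    ("forum_a", "election_fraud"),
    ("blog_c", "vaccine_distrust"),
]

def narrative_sources(stmts):
    """For each narrative, list the sources pushing it, highest volume first."""
    counts = Counter(stmts)
    by_narrative = {}
    for (source, narrative), n in counts.items():
        by_narrative.setdefault(narrative, []).append((source, n))
    for narrative in by_narrative:
        by_narrative[narrative].sort(key=lambda sn: -sn[1])
    return by_narrative
```

Ranking sources by volume per narrative is what lets analysts spot a single outlet seeding a storyline before it spreads, which is the practical payoff of tracking the ecosystem rather than individual claims.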

Educational initiatives play a pivotal role in empowering individuals to navigate the complex information landscape. Traditional media literacy efforts need to be updated for “digital literacy” and delivered to a broader public.

Organizations like Fake News Cleaner in Taiwan seek to educate seniors about digital literacy, equipping them with the skills to identify and critically assess information. By fostering media literacy and critical thinking, these initiatives contribute to building a society that is more resilient to mis- and disinformation. The focus on educating vulnerable groups like seniors is particularly important, as they are often targeted by disinformation campaigns.

Another Taiwanese initiative, the Pangphuann Association, targets educators and students. Founded in 2020 by teachers and social advocates, it has collaborated with 160 schools, reaching out to 3,500 teachers. The association designs engaging, action-oriented curricula to bring civic issues into the classroom. These curricula transform the complex issue of disinformation into accessible subject matter for classes of various formats and foster cross-disciplinary collaboration in schools, enhancing teachers' and students' civic engagement and critical thinking skills for greater digital literacy.

International collaboration has emerged as a critical component in addressing the global nature of disinformation. This collaboration involves cooperating organizations and actors of various kinds within the global information ecosystem.

For example, fact-checking organizations in Taiwan, Japan, and South Korea actively participate in such international networks as the International Fact-Checking Network (IFCN) and the #CoronaVirusFacts Alliance. These networks facilitate the exchange of best practices, tools, and strategies, enhancing the overall effectiveness of civil society’s efforts on a global scale. The IFCN also awards a total of $975,000 in funding to fact-checkers serving 34 different countries, including Parti Co-op in South Korea.

Illustrative image of people working on laptops in front of a map. © Yukari Mishima


Government Interaction and Private Sector Support

The relationship between civil society, governments, and the private sector varies across these countries. In South Korea, the government’s relations with fact-checking bodies have been at times adversarial: During the 2017 presidential election campaign, the conservative Liberty Korea Party (now the People Power Party) sued the Seoul National University (SNU) FactCheck Center for bias and defamation. While the criminal complaint was summarily dismissed, the civil case dragged on for two years before the court ruled in favor of SNU FactCheck, affirming that fact-checking statements by public figures is vital for democracy and not a form of defamation.

Financial backers of Korean fact-checking initiatives have been pressured to reduce their funding. Naver (the “Google of South Korea”), for example, which had been providing roughly $783,000 in annual funding to SNU FactCheck since 2017, withdrew its funding in 2023, calling the move a “strategic decision.”

In Taiwan, in contrast, public-private collaboration is a common government approach to mitigating new challenges: It helps solve problems, build trust, and garner support, including for efforts to tackle disinformation. The Taiwanese Executive Yuan’s 2018 report stresses the importance of such collaboration, involving third-party fact-checkers and systems, to establish a robust defense against misinformation. And the Taiwanese government has been supportive of fact-checking initiatives that involve collaborating with organizations like Taiwan FactCheck Center and engaging with the tech community through platforms like Cofacts. Examples include real-time fact-checking during elections and the COVID-19 pandemic.

In Japan, the government’s support for initiatives to combat mis- and disinformation has been more indirect. Through its “Study Group on Platform Services,” the Ministry of Internal Affairs and Communications has identified the need to counter disinformation on social media while preserving freedom of expression. This has led to recommendations for voluntary private-sector efforts, prompting the Safer Internet Association (SIA) to create a “Disinformation Countermeasures Forum.” Discussions on the forum underscored the need for a fact-checking organization, ultimately resulting in the SIA’s establishing the Japan Fact-Check Center (JFC) in October 2022. Although the government ministry did not set up the JFC itself, its initial identification of the issue and subsequent recommendations proved instrumental in setting the process in motion.

Challenges
Civil society engagement in mitigating the impacts of disinformation in East Asia is a dynamic and evolving landscape, one that faces many challenges, such as maintaining neutrality while effectively combating misinformation in highly polarized environments. Building robust domestic and international networks is crucial for these organizations. The overall ecosystem for combating false information should include fact-checkers, researchers, online platforms, media literacy educators, media outlets, and government agencies. By sharing best practices, technological tools, collaborative strategies, and regulatory instruments, these networks amplify the impact of individual efforts and create a united front against misinformation and disinformation.

AI is poised to dramatically transform the information landscape, introducing new challenges to maintaining the integrity of public discourse. The creation of sophisticated fake personas, combining fabricated accounts with AI-generated content, will become increasingly prevalent. This will lead to unfair competition in which AI-driven entities can produce vast quantities of high-quality content, potentially drowning out human voices.

AI will also enable actors to manipulate narratives more subtly and more effectively in order to shape public opinion on a massive scale. Perhaps most concerningly, as the cost of and technological barriers to AI deployment become lower, these capabilities will become accessible to a wider range of individuals and organizations, potentially amplifying the spread of misinformation and disinformation across digital platforms. Civil society organizations, drawing on technological advances, community engagement, and global cooperation, are essential to addressing these issues. Absent such a concerted effort, the goal of creating a more informed and resilient society will drift further and further away.

Author

Isabel Hou is a practicing lawyer who has specialized in innovative technology and digital governance issues since 2000. She has been actively involved in g0v, a grassroots “civic tech community” based in Taiwan, since 2012. From 2020 to 2022 she served as a “civil society” member of Taiwan’s Open Parliament Committee and drafted Taiwan’s first “Open Parliament Action Plan.” She recently collaborated with community partners to promote a “Digital Citizens’ Literacy” action plan as a social infrastructure for the country’s next generation.
