Large Language Models and Democracy: A Way Forward or Artificial Insanity?

A robotic cowboy holding the scales of justice (prompt by savbeck @ Midjourney AI)

Because of how they are trained, large language models (LLMs) lack any understanding of truth, hence their knack for spreading misinformation. But it’s important to emphasize that LLMs, like any technology, represent a powerful tool whose impact depends on how we utilize it. David Dao explores some of the ways that new AI technologies could weaken — or strengthen — our democracies.

Including Equitable Perspectives Through AI

As the 28th UN Climate Conference (COP28) in Dubai comes to a close, the atmosphere is charged with hope and fatigue. Delegates from around the world are diligently reviewing policy drafts and resolutions, gearing up for the final stretch of negotiations. For seasoned negotiators from developed countries, it’s familiar territory. English, the primary language of the Conference of the Parties (COP), is often their first or second language. They are backed by their extensive support teams at home who have done much of the preliminary work.

For the small and traditionally resource-poor delegations of negotiators from the Global South, it’s a different story. They face a daunting task: dense jargon and technical terms often push their valuable perspectives to the sidelines, limiting their ability to participate fully in shaping climate policy. This language-driven inequity in democratic participation is nothing new — I have seen it firsthand over six years at these conferences. Worse, Indigenous knowledge and perspectives — crucial for managing climate and ecosystems — are often left out of the discussion entirely. Yet I believe something changed this year.

Recognizing this gap, the Youth Negotiators Academy trains and prepares young negotiators from 55 countries. For COP28, it partnered with our non-profit GainForest to build Polly, an AI assistant for youth negotiators. Polly grew out of dozens of interviews with youth negotiators in countries such as Liberia, Paraguay, and Indonesia, and was designed to demystify policy documents, offering translations and real-time updates to help young delegates navigate the discussions more effectively during COP. Polly now supports 70 youth negotiators in contributing actively to climate negotiations and has been used during the conference to help draft speeches and interventions.

Taina, another AI assistant developed by the same organizations, goes one step further. Taina is being co-created with Indigenous Peoples with the aim of bridging traditional wisdom and modern decision-making: it allows local and Indigenous communities to share their stories and knowledge in their own languages through an interactive chatbot. With these communities’ consent, Taina captures and conveys their invaluable insights to COP decision-makers, ensuring that perspectives from every corner of the world — no matter how remote — help shape global decisions. Taina is currently being piloted with the Satere-Mawe, Tikuna, and Munduruku Peoples in Brazil, as well as with riverside communities in the Philippines, connecting traditional knowledge with global environmental strategy.

How Large Language Models (LLMs) Work

Polly and Taina’s effectiveness is based on generative AI and large language models (LLMs), which have dramatically changed natural language processing. Contrary to the long-held belief that explicit instruction in grammar and syntax is required to understand language, LLMs learn from a vast corpus of text drawn from the internet. During training, these models simply predict the next word in a sequence, much like the autocomplete feature on a keyboard. This probabilistic approach has drawn criticism from AI ethics researchers, who argue that LLMs don’t truly “understand” language.
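To make the next-word objective concrete, here is a minimal, purely illustrative sketch in Python. A real LLM trains a neural network on billions of documents; this toy bigram counter (the corpus and all names are made up) only captures the core idea of predicting the most likely next word from counts.

```python
# Toy sketch of the next-word objective behind LLM training. A real LLM
# trains a neural network on billions of documents; this bigram counter
# only captures the core idea: predict the most likely next word given
# the current one.
from collections import Counter, defaultdict

# A tiny, made-up training corpus.
corpus = (
    "the delegates review the draft "
    "the delegates debate the draft "
    "the delegates approve the resolution"
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most frequent next word, like keyboard autocomplete."""
    candidates = following.get(word)
    if not candidates:
        return "<unknown>"
    return candidates.most_common(1)[0][0]

print(predict_next("the"))        # "delegates" (follows "the" 3 of 6 times)
print(predict_next("delegates"))  # "review" (ties broken by first occurrence)
```

Scaled up to internet-sized corpora and deep neural networks, this same predict-the-next-word objective is what gives LLMs their fluency and, as discussed below, their indifference to truth.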

Despite these criticisms, the influence of LLMs and generative AI on our society cannot be overstated. ChatGPT, the popular chatbot from OpenAI, reached 100 million monthly users just two months after launch, making it the fastest-growing consumer application in history. Unfortunately, while Polly and Taina aim to include marginalized voices in the democratic process through assistance with text processing, LLMs can also create new filter bubbles and amplify existing ones if used poorly.

The Dangers of Artificial Insanity

Due to the training methods behind LLMs, they lack any understanding of truth, mirroring the biases present in their training data and often producing fabricated information, so-called hallucinations. This challenge is not unique to LLMs but is inherent in all machine learning models. For example, instances of medical chatbots offering inaccurate medical advice are well documented. Similarly, Meta’s Galactica, intended to assist scientists in drafting academic articles, generated erroneous references, fictitious scientific studies, and more.

Computer scientists are working to mitigate hallucination by integrating LLMs with external tools such as calculators or search engines. However, what sets LLMs apart from traditional machine learning models is their sophisticated grasp of language and their capacity to interact with humans and persuade them convincingly.
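The tool-integration idea can be sketched in a few lines. Everything below is hypothetical: `model_reply` stands in for a real LLM that has been prompted to emit a `CALC[...]` marker instead of guessing arithmetic, and the wrapper intercepts each marker and substitutes the calculator’s verified result.

```python
# Hypothetical sketch of tool integration: the model defers arithmetic to
# an external calculator rather than hallucinating a number.
import re

def model_reply(prompt: str) -> str:
    """Stand-in for a real LLM call; the CALC marker defers math to a tool."""
    return "The 28 delegations submitted CALC[28 * 4] pages of drafts."

def run_calculator(expression: str) -> str:
    """External tool: evaluate simple arithmetic, rejecting anything else."""
    if not re.fullmatch(r"[\d\s+\-*/().]+", expression):
        raise ValueError(f"unsupported expression: {expression!r}")
    return str(eval(expression))  # acceptable here: input is whitelisted above

def answer(prompt: str) -> str:
    """Replace every CALC[...] call in the model's draft with a verified value."""
    draft = model_reply(prompt)
    return re.sub(r"CALC\[([^\]]+)\]",
                  lambda m: run_calculator(m.group(1)),
                  draft)

print(answer("How many pages of drafts were submitted?"))
# -> The 28 delegations submitted 112 pages of drafts.
```

The design point is that the tool, not the model, is the source of truth for the computed value; real systems apply the same pattern to search engines and databases.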

The World Economic Forum’s (WEF) Global Risks Report 2024 identified misinformation and disinformation as the foremost risk over the coming two years. Bad actors have used fake news to move stock markets, encourage harmful healthcare choices, and threaten democracy by influencing elections, including the 2016 and 2020 U.S. presidential elections. LLMs will only increase the effectiveness of fake news. Al Gore has described this phenomenon of locking users in an ever more convincing echo chamber as a new form of AI: “artificial insanity.”

A Way Forward

Assistants like Polly and Taina demonstrate the positive side of this technological evolution: the potential to foster democratic inclusion and empower traditionally marginalized voices. These tools not only democratize access to information but also help ensure that diverse perspectives are heard and considered in global decision-making processes.

It’s important to emphasize that LLMs, like any technology, represent a powerful tool whose impact depends on how we utilize it.

However, the challenges associated with artificial intelligence, particularly misinformation, disinformation, and “artificial insanity,” require a concerted effort to mitigate. As we navigate the complexities of integrating AI into the fabric of our societies, we need a multifaceted approach. Here are three recommendations:
 
  1. Develop robust ethical frameworks for AI development that implement rigorous oversight mechanisms and ensure transparency in training data and operations.
  2. Educate the public about the limits of LLMs to equip individuals with the critical thinking skills needed to evaluate their outputs.
  3. Foster open collaboration and co-design between AI developers, policymakers, and local communities that can drive the creation of more inclusive and equitable AI solutions.

By prioritizing the co-design of AI tools with those whose voices they aim to amplify, we can ensure that these technologies are deployed as a way forward to improve our democracies.
