Interview with Pia Sombetzki
"Wherever we see potential benefits, local communities must be at the forefront of developing solutions."

Pia Sombetzki
© Studio Monbijou, CC BY 4.0 (Enhanced using AI)

As climate change accelerates, AI technologies are increasingly seen as part of the solution. But how effective are these tools, and at what cost? Pia Sombetzki, Policy & Advocacy Manager at AlgorithmWatch, offers a critical perspective on the role of AI in climate action. From environmental impacts to the ethics of AI deployment, she discusses the need for inclusive, sustainable approaches and for accountability within the industry.

In what ways can AI be harnessed to effectively combat climate change and promote sustainability?
 
The tracking of environmental changes and the use of statistical methods, for example in weather forecasting or disaster prevention, is not new. Many of the new AI-driven tools presented by industry players are indistinguishable from existing applications. Often, the effectiveness of AI tools for such use cases has more to do with what data is collected, where it is collected, and who has access to it. When asking how AI can be used to combat climate change and promote sustainability, we should critically examine which capabilities are truly new and where we have missed out on benefits in the past because of the underlying conditions of use and access. Wherever we see potential benefits, local communities must be at the forefront of developing solutions that meet their own needs and that they can control. Broader access and easier-to-use, more widely available tools can help with that.
 
Considering AI's impact, do you believe its overall influence on global issues like climate change is more beneficial or harmful? What AI-related challenges do you see here?
 
The impact of AI on society and specific communities is multifaceted, and we are still in the early stages of understanding it holistically. This is because we have little access to the internal figures of the companies that train and operate AI systems. In the meantime, we try to work with external assessments that shed light on the immense appetite for ever more computationally intensive AI models. The industry's role in shaping global industrial policy, building more data centres, and increasingly seeking control of and investing in energy and resource production facilities also points to a trend that should not be ignored. An industry that once boasted of being the driving force behind renewable energy production is now adjusting its own benchmarks, signalling that it considers this shift worth it for the sake of developing AI. Meanwhile, we still lack effective frameworks to truly hold the sector accountable for its environmental and social impacts and to promote AI development that benefits society.
 
What ethical considerations should guide the deployment of AI in climate-related initiatives?

 
The use of AI systems in climate-related initiatives must be fit for purpose, both in general and from an ethical perspective. This requires, first and foremost, a preliminary assessment of the intended and unintended effects of the deployment and of the ethical considerations it raises. That step must involve a wide range of stakeholders to avoid blind spots and to take into account consequences that affect groups of people unequally. For such an assessment, AlgorithmWatch has developed a self-assessment tool: a questionnaire built around 13 overarching criteria for the sustainability of AI systems. These include, for example, transparency and responsibility, inclusive and participatory design, working conditions and jobs, energy consumption, CO2 and greenhouse gas emissions, sustainability potential in use, and indirect resource consumption. The criteria are broken down into more than 40 indicators and operationalised to make them practical.
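To make that structure a little more concrete, here is a rough, purely illustrative sketch of how a criteria-and-indicators questionnaire of this kind could be represented in code. The criterion names are taken from the interview; the indicator questions and all identifiers are invented placeholders, not AlgorithmWatch's actual indicators or tooling.

```python
from dataclasses import dataclass, field


@dataclass
class Criterion:
    """One overarching sustainability criterion with its indicator questions."""
    name: str
    indicators: list[str] = field(default_factory=list)


# Criterion names come from the interview; the indicator questions below are
# invented placeholders, NOT AlgorithmWatch's actual indicators.
criteria = [
    Criterion("Transparency and responsibility",
              ["Is the system's purpose publicly documented?"]),
    Criterion("Energy consumption",
              ["Were system requirements defined before choosing a model?",
               "Is a pre-trained model reused instead of training from scratch?"]),
    Criterion("CO2 and greenhouse gas emissions",
              ["Are training and inference emissions estimated and reported?"]),
    # ...the remaining criteria of the questionnaire would follow the same pattern.
]


def checklist(all_criteria: list[Criterion]) -> list[tuple[str, str]]:
    """Flatten the criteria into a single list of (criterion, question) pairs."""
    return [(c.name, q) for c in all_criteria for q in c.indicators]


for name, question in checklist(criteria):
    print(f"[{name}] {question}")
```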

Take the example of energy consumption. Here, it is important to define the system requirements in advance and favour models with low complexity. Where possible, pre-trained models should be used and only fine-tuned to reduce energy consumption during the training phase.
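As a minimal sketch of that last recommendation, the following Python snippet (assuming PyTorch and torchvision, which are not mentioned in the interview) reuses a small pre-trained model and fine-tunes only its final layer instead of training a model from scratch, which keeps training-phase compute, and hence energy use, low. The number of target classes is a hypothetical placeholder.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 5  # hypothetical number of target classes, e.g. land-cover types

# Reuse a small pre-trained backbone rather than training a large model from scratch.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pre-trained weights so the backbone is not retrained.
for param in model.parameters():
    param.requires_grad = False

# Replace only the final classification layer; this is the part that gets fine-tuned.
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# Only the new layer's parameters go to the optimiser, so the training-phase
# compute (and energy consumption) stays small.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```

The same pattern applies to other model families: start from the smallest pre-trained model that meets the requirements defined up front, and adapt only as much of it as the task actually needs.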

About Pia Sombetzki

Pia Sombetzki is a Policy & Advocacy Manager at AlgorithmWatch, where she deals with the effects of the increasing use of automated decision-making systems in various areas of life, be it at work, in the public sector or on social networks. Among other things, she analyses the environmental impact and discriminatory effects of AI systems and draws the attention of politicians and the media in Europe to how such problems can be tackled. Before joining AlgorithmWatch, she worked in political education and advocacy at organisations such as Human Rights Watch, INKOTA-netzwerk and European Alternatives.
