The Promise of AI: Can It Hold for Environmental Sustainability?

Diagrams from Dr Alesha Sivartha’s Book of Life (1898) – Source via Public Domain Review

A version of this was first published in April 2021 as part of The Raisina Edit 2021 curated by the Observer Research Foundation, Delhi, India.

The European Green Deal sets out a range of critical actions to address the climate crisis: A climate-neutral continent by 2050, a clean circular economy, and transformation and innovation across public infrastructure, the energy sector, building efficiency, and more. It also stipulates investments in “environmentally-friendly technologies” and economic growth that is decoupled from resource use.

Artificial Intelligence (AI) is often presented as a powerful solution to fuel this green transition. But is that true?

Aware of the risk that government incentives to boost growth post-pandemic may well undermine many of the necessary investments and reductions to mitigate the climate crisis, we increasingly hear calls for a “green recovery”. 

Different implementations of human-centric AI may certainly provide opportunities for change, including when it comes to advancements in medicine, food production, traffic management and more — all of which are highly relevant to managing the climate crisis. At the same time, any implementation of AI builds on massive and still growing volumes of data that need to be stored and processed, which has a significant environmental impact. In addition to mitigating harmful uses of AI that amplify discrimination and bias, undermine privacy, and violate trust online, we need a lot more transparency around its environmental impact, too.

What do we know about AI’s environmental impact?

Illustrative research from the Massachusetts Institute of Technology (MIT) showed that training popular natural language processing AI models produced about as much CO2 as flying roughly 300 times between Munich, Germany and Accra, Ghana. One of these models is GPT-2, which was estimated to account for 284 metric tons of carbon dioxide equivalent (mtCO2e).

In June 2020, GPT-3 was released, a model more than a hundred times bigger than its predecessor: GPT-3 builds on 175 billion parameters, whereas the 2019 GPT-2 model builds on “only” 1.5 billion parameters.

In any case, there are countless models and implementations with similar or even bigger scope and larger data sets that all add to the overall environmental impact of AI.

And even just this one model’s 284 mtCO2e roughly equals the emissions from powering 33 U.S. homes for an entire year.
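
To put these equivalences in proportion, here is a back-of-envelope check; the per-flight and per-home emission values are illustrative assumptions of my own, roughly in line with common estimates, not figures from the cited research.

# Back-of-envelope check of the equivalences quoted above.
# The 284 mtCO2e figure is from the cited research; the per-flight and
# per-home values below are illustrative assumptions, not reported data.

TRAINING_EMISSIONS_T = 284   # tCO2e, figure quoted above

FLIGHT_EMISSIONS_T = 1.0     # assumed tCO2e per passenger, Munich-Accra one way
HOME_EMISSIONS_T = 8.6       # assumed tCO2e for an average U.S. home's annual energy use

print(f"~{TRAINING_EMISSIONS_T / FLIGHT_EMISSIONS_T:.0f} flights Munich-Accra")  # in the region of the ~300 quoted
print(f"~{TRAINING_EMISSIONS_T / HOME_EMISSIONS_T:.0f} U.S. homes for a year")   # matches the ~33 quoted

# Scale jump from GPT-2 to GPT-3, as stated in the text:
gpt2_params, gpt3_params = 1.5e9, 175e9
print(f"GPT-3 has ~{gpt3_params / gpt2_params:.0f}x the parameters of GPT-2")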

In addition, we have to account for the physical presence of data centres, which occupy extensive areas of land and put significant strain on global water resources, factors that are not consistently reflected in corporate sustainability reports.

Greenhouse gas (GHG) emissions assessments

Greenhouse gas (GHG) emissions accounting is incredibly complex. And it is currently entirely voluntary for tech companies.

So it is little surprise that tech companies only rarely publish the information necessary to make such calculations, and when they do share findings or results of their impact assessments, the methodologies remain vague. In part, this is aggravated by the fact that there is little detail or meaningful guidance on how to measure the environmental impact of digital products like AI, which is stunning, given that such measurement forms the basis for identifying where and how to improve.

The GHG Protocol is the most commonly used standard for environmental impact assessments; it also provides guidance on how to account for different greenhouse gases, not just carbon dioxide (CO2). Most companies report on the basis of this protocol, yet there are considerable differences in their (public) accounting of emissions. The protocol encompasses three scopes: scope 1 covers direct emissions, scope 2 covers emissions from purchased electricity, heating, or cooling, and scope 3 spans a company’s value chain, including business travel, events, and purchased goods and services, as well as the use of (sold) products.
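
As a small illustration of how these scopes partition an inventory, the sketch below groups hypothetical emission line items by scope and totals them; the categories and numbers are invented for the example, not drawn from any company’s report.

# Sketch of GHG Protocol-style accounting: each line item is assigned a scope
# (1, 2 or 3) and totalled per scope. All figures here are hypothetical.
from collections import defaultdict

line_items = [
    # (description,                    scope, tCO2e)
    ("company vehicles",                   1,    12.0),  # direct emissions
    ("purchased electricity for offices",  2,   340.0),  # bought energy
    ("purchased cloud services",           3,   980.0),  # value chain: purchased services
    ("business travel",                    3,   150.0),  # value chain: travel
    ("use of sold/distributed products",   3, 4_200.0),  # value chain: product use
]

totals = defaultdict(float)
for _description, scope, tco2e in line_items:
    totals[scope] += tco2e

for scope in sorted(totals):
    print(f"Scope {scope}: {totals[scope]:>8,.0f} tCO2e")
print(f"Total:   {sum(totals.values()):>8,.0f} tCO2e")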

Some companies only report against scope 1 and 2 (often described as “operational emissions”), while others include scope 3 value chain emissions yet share little about materiality or methodologies.

To give just one concrete example from my personal experience: The difficulty of drawing clear boundaries was also visible in Mozilla’s 2019 greenhouse gas emissions report, in which the use of its products, like Firefox, contributed roughly 98% of the organisation’s overall emissions. However, whether people read the news, use their email client, watch cat videos, or shop was not distinguished; instead, the assessment accounted for overall time spent online. While insightful for the internet’s impact at large, such rough estimates make it challenging to mitigate the impact of the organisation’s own digital products.
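
For contrast, a finer-grained, usage-based estimate might look like the hypothetical sketch below, where each activity gets its own energy intensity instead of a single figure for time spent online; every number in it is an assumption for illustration only.

# Hypothetical per-activity estimate, for illustration only.
# Hours and energy intensities are invented; a real assessment would need
# measured usage data and location-specific grid intensities.

GRID_KG_PER_KWH = 0.4  # assumed average grid carbon intensity

activities = {
    # name:            (hours of use, kWh per hour of device + network energy)
    "reading news":    (2.0e9, 0.02),
    "email":           (1.5e9, 0.015),
    "video streaming": (3.0e9, 0.08),
    "online shopping": (0.8e9, 0.02),
}

for name, (hours, kwh_per_hour) in activities.items():
    tco2e = hours * kwh_per_hour * GRID_KG_PER_KWH / 1000  # kg -> metric tons
    print(f"{name:16s} ~{tco2e:>10,.0f} tCO2e")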

What do we need going forward?

Without obligatory reporting and a clear understanding that responsibilities can’t be delegated to consumers, there is little incentive to really meet the sort of ambitious climate targets we need in order to tackle this crisis.

The question is not whether technology has a role to play in fuelling both a green recovery and long-term societal transformation, but which technologies will make a net-positive difference. To answer that, we need to be in a better position to do our homework and genuinely assess the environmental impact of digital technologies, including AI. Otherwise, any claim that AI supports a green transition will remain unsubstantiated, as the environmental costs are not properly accounted for.

To put it more bluntly: Positive uses of AI for mitigating the climate crisis can only be net-positive if we know what their own environmental impact is, including the training, storing, and processing of data as well as the physical presence of data centres.
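
Put schematically (my own framing, with placeholder numbers): an application is only net-positive if the emissions it helps avoid exceed the sum of its own lifecycle emissions, and that comparison is only possible when every term has actually been measured.

# Schematic net-impact check with placeholder values (my own framing).
own_footprint_t = {
    "training":              284.0,  # tCO2e, placeholder
    "inference / serving":   120.0,  # placeholder
    "data storage":           40.0,  # placeholder
    "data centre footprint":  60.0,  # placeholder (embodied/infrastructure share)
}
avoided_t = 450.0                    # placeholder: emissions the application helps avoid

net_t = avoided_t - sum(own_footprint_t.values())
print(f"Net impact: {net_t:+.0f} tCO2e "
      f"({'net-positive' if net_t > 0 else 'not net-positive'})")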

Standards and mandatory, transparent reporting

Systemic challenges require nuanced solutions. To innovate sustainably, we must ensure that we have the details we need to make informed decisions, so that we can remain alert and protect against potential risks, including for the environment.

To start, we need better standards for GHG accounting and mandatory, transparent reporting against all scopes and categories of the GHG protocol.

We need regulation for environmental impact assessments in the tech sector, including for digital products like AI. This also means investing in open-sourcing emission factors, calculation formulas, and tools that do not just approximate but actually help calculate the impact of digital products, too.
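
At its simplest, such an open calculation formula multiplies measured energy use by a published grid emission factor; the function and values below are a minimal sketch with placeholder numbers, not an existing tool.

# Minimal open calculation formula: emissions = energy used x grid emission factor.
# The values below are placeholders; an open tool would ship published,
# location- and time-specific emission factors.

def operational_emissions_kg(energy_kwh: float, factor_kg_per_kwh: float) -> float:
    """Operational CO2e in kilograms for a given amount of electricity."""
    return energy_kwh * factor_kg_per_kwh

# Example: a training run drawing 10,000 kWh on a 0.35 kgCO2e/kWh grid.
print(f"{operational_emissions_kg(10_000, 0.35):,.0f} kg CO2e")  # 3,500 kg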

Ultimately, I like to think that in the discussions around privacy and data protection, certainly under the GDPR, we have grown more willing to stop and ask: Is everything we can do really what we should do? This is exactly the mindset we now need with regard to the environmental impact of AI as well: Does the benefit of the suggested solution really outweigh its negative environmental impact? Is it not just possible, but responsible?

Only then will we be able to really live up to the requirements of the EU Green Deal, fuel a sustainable recovery, and promote healthy societal transformation to mitigate both the effects of the pandemic and the climate crisis.

Authors

Cathleen Berger is a political scientist whose work focuses on strategy, sustainability, tech and global policies, as well as governance processes. She most recently worked with Mozilla, where she headed the organisation’s environmental sustainability programme. In prior roles, she led Mozilla’s Global Governance efforts, worked as a policy strategist with the German Foreign Office, a consultant for Global Partners Digital, a research assistant at the German Institute for International and Security Affairs, and a visiting lecturer at the Friedrich Schiller University Jena.