A Degrowth Perspective on AI: Reimagining Our Digital Future

This month, we had the honor of presenting at the ISEE/Degrowth 2025 Conference in Oslo–a gathering of researchers, activists, and thinkers dedicated to challenging dominant paradigms of growth and imagining more just, regenerative futures. Rather than offering a set of fixed solutions, we used the opportunity to make a provocation: what if we understood artificial intelligence (AI) not just as a tool for addressing the climate crisis, but as a mirror of the systems and ideologies that helped create it?

AI as a Product of Growthism

AI is often framed as an immaterial solution to global challenges–from decarbonization to inequality. But what if AI is not merely a tool to fix the climate crisis, but a product of the very systems driving it?

At its root, AI reflects and reinforces the ideological, technical, and material logics of a worldview centered on endless growth, control, and standardization.

On the one hand, we can see this in AI’s technical architecture: systems built on binary code, statistical generalization, and the drive for optimization. AI works by categorizing, ranking, predicting, and automating. In this process, the messy, relational, and contextual dimensions of life are often flattened into standardized data points and efficiency targets. What cannot be quantified is typically excluded; what cannot be predicted is often devalued. This logic aligns with growthism’s broader commitment to simplification, abstraction, and the belief that complexity can and should be managed–if only through better data and faster computation.

On the other hand, AI is materially and institutionally shaped by a political economy that embodies growthism’s extractive imperatives. The entire AI supply chain–from the mining of critical minerals, to chip manufacturing, to cloud infrastructure, to model deployment–relies on massive inputs of energy, labor, land, and capital. This infrastructure is driven by a logic of scale: bigger models, faster systems, more data, more compute. The aspiration is rarely sufficiency; it is more–more powerful systems, more markets, more predictive accuracy, more control.

This is not just a technical tendency–it is a reflection of who holds power and how growth is defined. The dominant narrative privileges a narrow, anthropocentric vision of the "standard human"–often Western, male, and able-bodied–while marginalizing those whose lives, knowledge systems, and values do not fit that mold. The belief that more technology equals more progress silences communities that might seek repair, restoration, or refusal instead. And the growing centralization of AI in the hands of tech giants and military-industrial actors further entrenches the idea that intelligence, value, and futures must be defined from the top down.

Taken together, AI is not a neutral or incidental output of a growth-based world–it is becoming one of its most powerful instruments. It codifies growthism at the level of logic and enacts it at the level of infrastructure. To imagine alternative digital futures, we must begin by questioning not just what AI does, but what kind of world it assumes–and what kind of world it makes possible.

The Mirage of “Green AI”

In response to climate pressures, we’ve seen a rise in techno-optimistic narratives of “AI for climate”, including a wave of peer-reviewed articles examining how AI can help reduce emissions in agriculture, mobility, and energy. These narratives are compelling, but they rely on narrow metrics and shaky assumptions.

Most analyses measure climate impact solely in terms of CO₂ emissions, typically focusing on energy use in data centers. This ignores the wider planetary justice implications and material footprint of AI–from extractive mining to toxic chip manufacturing, exploitative labor in server assembly, and the growing tide of e-waste. These are not side effects; they are foundational to how AI is currently built and deployed, and they compound one another.

Moreover, studies often overstate emissions-reduction potential while ignoring rebound effects such as the Jevons paradox: greater efficiency leading to greater total consumption. For instance, AI that optimizes delivery routes might reduce emissions per package–but also boost global e-commerce, packaging waste, and warehouse demand.
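A simple back-of-envelope calculation shows how a rebound effect can swallow an efficiency gain. The sketch below uses invented numbers purely for illustration; it is not an empirical estimate of any real delivery system.

```python
# Hypothetical illustration of a rebound (Jevons) effect.
# All numbers are invented for illustration, not empirical estimates.

per_package_before = 1.0   # emissions per package (arbitrary units)
volume_before = 100        # packages shipped

per_package_after = 0.8    # AI routing cuts per-package emissions by 20%
volume_after = 140         # cheaper, faster delivery grows demand by 40%

total_before = per_package_before * volume_before   # 100.0
total_after = per_package_after * volume_after      # 112.0

print(f"Total emissions before: {total_before:.1f}")
print(f"Total emissions after:  {total_after:.1f}")
# Per-package efficiency improved by 20%, yet total emissions rose by 12%.
```

Whether the rebound outweighs the gain depends on how strongly demand responds to the efficiency improvement–precisely the question these studies tend to leave unasked.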

Above all, these narratives leave power unexamined. Who decides how AI is used? Who benefits from it? Whose data is extracted, and whose livelihoods are put at risk? Without these questions, “green AI” becomes little more than a greenwashed extension of business-as-usual.

The Materiality of “Intelligence”

Behind every so-called “intelligent” system lies a deeply material infrastructure–built on energy, minerals, labor, and land. Understanding AI’s environmental and social impacts means looking beyond the algorithm to the supply chains and extractive systems that make it possible.

Raw Material Extraction

AI systems rely on critical minerals like lithium, cobalt, and rare earth elements, which are used to produce batteries, chips, and other hardware components. These materials are primarily extracted in the Majority World, often under dangerous and exploitative conditions. 

Materials Manufacturing

Once extracted, minerals must be processed and refined–a stage that is energy-intensive and often highly polluting. Smelting, chemical treatments, and purification procedures take place in facilities that emit significant carbon and generate toxic waste. This step transforms raw ore into usable materials for electronics, but often leaves behind environmental damage and health risks in nearby communities.

  • Baotou, Inner Mongolia (China): Once a major hub for rare-earth elements, Baotou’s refining facilities have severely contaminated local soil and air with harmful heavy metals. Residents suffer long-term exposure to radioactive pollutants and fine particulate matter–linked to respiratory, neurological, and cancer risks.

  • Cobalt Sulfate Refining in China: China processes most of the world’s cobalt (sourced from the Democratic Republic of the Congo), turning raw ore into salts for battery and electronics manufacturing. A recent life-cycle analysis found that refining cobalt for lithium-ion batteries contributes significantly to human health burdens–particularly from emissions of nickel, copper, and cobalt dust and chemicals.

Equipment Manufacturing

These refined materials are then turned into components like microchips and servers. Chip manufacturing in particular is resource-hungry: it requires vast amounts of electricity, clean water, and highly controlled environments.

Model Training

Behind every AI system are rows of servers running for days or weeks, consuming large volumes of electricity. In parallel, the datasets used to train these models often rely on the invisible labor of data annotators, many of whom are based in the Majority World and work under precarious conditions.
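The scale of this consumption can be made tangible with a standard accounting identity: hardware power draw × runtime × data-center overhead × grid carbon intensity. The sketch below applies it with purely hypothetical inputs; none of these figures describes an actual training run.

```python
# Back-of-envelope estimate of training energy and emissions.
# The accounting identity is standard; every input value below
# is a hypothetical placeholder, not a measured figure.

num_gpus = 512            # accelerators used in parallel (hypothetical)
gpu_power_kw = 0.4        # average draw per accelerator, kW (hypothetical)
training_hours = 24 * 14  # two weeks of continuous training (hypothetical)
pue = 1.2                 # power usage effectiveness (data-center overhead)
grid_kgco2_per_kwh = 0.4  # grid carbon intensity (hypothetical average)

energy_kwh = num_gpus * gpu_power_kw * training_hours * pue
emissions_tonnes = energy_kwh * grid_kgco2_per_kwh / 1000

print(f"Energy:    {energy_kwh:,.0f} kWh")          # ~82,575 kWh
print(f"Emissions: {emissions_tonnes:,.1f} t CO2")  # ~33.0 t
```

Even this simple identity makes clear that the footprint scales multiplicatively with model size, training time, and the carbon intensity of the grid–and it still omits the embodied emissions of the hardware itself.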

Model Deployment

Once trained, AI models are integrated into systems that influence real-world decisions–shaping everything from social service eligibility to credit scoring, hiring, resource distribution, and public safety. At this stage, the model’s assumptions and limitations become materially consequential, as its outputs guide actions that directly affect people’s lives.

Disposal & End-of-Life

Eventually, the hardware running AI becomes obsolete. Most of this e-waste is exported to countries with limited environmental protections. There, devices are dismantled–often by hand–to recover valuable metals, exposing workers and communities to hazardous materials. Informal e-waste sites have become toxic hotspots, reflecting the long-term costs of rapid technological development.

Each of these cases illustrates that AI’s promise of disembodied intelligence is sustained by highly embodied, unequal, and ecologically destructive processes.

The harms are not anomalies–they are structural. They are baked into the current model of AI development, which prioritizes scale, speed, and accumulation over care, sustainability, and justice.

This is why the call for a degrowth theory of AI is so urgent. If we want a future in which digital technologies truly align with planetary well-being, we must reckon with–and reconfigure–these extractive foundations.

Toward a Degrowth Framework for AI

So how do we begin to reimagine AI–beyond the logic of growth? A degrowth-oriented AI doesn’t mean rejecting technology altogether. It means redefining the goals, design practices, and governance of AI systems around values like sufficiency, plurality, and ecological care. To achieve this, we propose a degrowth framework for AI around three pillars: objectives, values, and action.

I. Objectives – What Are We Building AI For?

A degrowth-oriented theory of artificial intelligence begins not with the assumption that AI is inherently beneficial or necessary, but with a more foundational question: What is the problem we are trying to solve, and who defines it? In this framework, the purpose of any AI system must emerge from community-led processes of problem identification and deliberation. It is not enough for developers or institutions to impose AI as a default solution in the name of efficiency or innovation. Instead, communities must be at the center of deciding whether AI is an appropriate or desirable response to their needs and aspirations.

This shift in orientation requires beginning with the lived realities, knowledges, and priorities of those who are most affected. Before any technical intervention is proposed, there must be space for communities to articulate the problem on their own terms–grounded in their own ecological, cultural, and political contexts. If, through this process, AI is collectively seen as a fitting and ethical means to support a well-defined purpose–such as enhancing local water monitoring, enabling better coordination of mutual aid networks, or preserving endangered languages–then its use can be pursued within clearly defined boundaries. But if AI is not seen as part of the solution–or worse, if its deployment risks undermining existing relationships, practices, or forms of autonomy–then the answer must be to withhold or refuse it.

Degrowth-oriented AI does not treat automation or computational modeling as inherently superior forms of intervention. Rather, it holds that any technology must be subordinated to the goal of collective flourishing. This means that technological design cannot precede democratic deliberation. It also means that no AI system should be developed without a clearly articulated, community-validated purpose–one that serves sufficiency, ecological balance, and social care, rather than profit, control, or abstraction. In practice, this calls for robust participatory structures: assemblies, living labs, or data stewardship models in which people can co-design or contest not only the how of AI, but the whether.

Without such processes, the deployment of AI–no matter how well-intentioned–risks reproducing the very dynamics of growthism and dualism it claims to disrupt: imposing one-size-fits-all solutions, externalizing costs, and diminishing the capacity of communities to define their own futures. In contrast, a degrowth approach insists that AI must never be the starting point. The starting point is the question of purpose–asked collectively, answered pluralistically, and acted upon only if the path leads there.

II. Values – How Should We Build AI? 

A degrowth-oriented approach to artificial intelligence calls for a fundamental rethinking of the values that shape how AI is conceived, developed, and deployed. This is not merely a question of ethics as an afterthought or add-on, but of embedding alternative value systems into the very foundations of technological practice. Mainstream AI development is typically governed by values such as efficiency, optimization, prediction, and control–values inherited from industrial capitalism and technocratic rationality. A degrowth perspective challenges these priorities and instead centers values that are relational, situated, and regenerative.

At the core is the value of ecological embeddedness–the recognition that AI systems are not abstract, immaterial tools, but extensions of physical, social, and planetary systems. Designing AI under this value means respecting ecological limits, prioritizing AI frugality, accounting for harm across supply chains, and embedding sustainability not as an output metric but as a structural condition of development.

Equally central is relationality–the idea that knowledge, agency, and well-being are co-produced through relationships among humans, nonhumans, and the environment. In practical terms, this calls for AI design processes that are grounded in collective inquiry, where relationships–rather than outputs–guide the form and function of technological systems.

Plurality is another guiding value. Rather than striving for universal models or generalizable solutions, degrowth AI embraces diversity–in ways of knowing, in cultural interpretations of intelligence, and in visions of the good life. This value encourages the development of tools that are responsive to local contexts and epistemologies, resisting the flattening tendencies of standardization.

A further commitment is to care–understood not as a sentiment, but as a practice of sustained attention, responsibility, and repair. Building AI with care means attending to the slow, often invisible forms of labor that sustain both social and ecological life. It means recognizing that some of the most urgent technological work is not innovation, but maintenance, support, and the cultivation of mutual trust.

Finally, refusal must be seen as a legitimate and generative value–not all problems require a technological solution, and not all innovations are progress. Refusal, in this context, is not obstructionist; it is an act of discernment and boundary-setting. It is what enables communities to protect their autonomy, knowledge systems, and ecological balance against the encroachment of extractive, imposed technologies.

Together, these values–ecological embeddedness, relationality, plurality, care, and refusal–form the moral and political compass of a degrowth approach to AI. They do not dictate a single method or model, but offer a way of orienting AI practice toward futures that are more just, interdependent, and alive.

III. Action – How Do We Enact Degrowth AI? 

Putting degrowth AI into practice requires moving beyond critique to concrete, actionable approaches that realign development processes with ecological responsibility, social equity, and community self-determination. These practices must be grounded, not in abstraction, but in the everyday realities of communities, infrastructures, and governance systems. What follows are key practices–feasible and adaptable–that operationalize degrowth principles in AI development and deployment.

Begin with Community-Led Problem Definition

No AI system should be deployed without first ensuring that the problem it aims to solve has been defined by the community most affected. This reverses the top-down logic of "tech looking for a problem" and centers the lived realities, priorities, and interpretations of those who will bear the consequences.

This can mean:

  • Convening community assemblies, living labs, or workshops in which residents articulate the problem on their own terms before any technical option is tabled.

  • Documenting local knowledges, priorities, and ecological constraints as the baseline for deliberation.

  • Treating “no technology” as a legitimate outcome of that deliberation.

If (and only if) AI is identified as a potential response to the community-defined problem, a degrowth-oriented approach to AI governance would then include the following practices:

Establish Co-Design Processes Throughout the AI Lifecycle

First, the AI system must be co-designed with affected groups–not merely presented to them for consultation. Co-design is not a one-time survey but a structured process of continuous involvement, from ideation to decommissioning.

This can include:

  • Integrating community representatives into design teams as paid collaborators.

  • Structuring design cycles around culturally relevant timelines and accountability milestones.

  • Prioritizing low-tech, low-cost tools that communities can govern and adapt without external dependency.

Operationalize Collective Data Governance

Degrowth AI requires that data collection, storage, and use be controlled by the communities from which it originates. This counters extractive datafication and reinforces collective autonomy and self-determination.

This can include:

  • Community-governed data trusts or stewardship models that set the terms of collection, storage, and use.

  • Consent protocols that are revocable, with clear pathways for data withdrawal.

  • Keeping data on infrastructure the community itself controls, rather than in distant corporate clouds.

Apply Social and Ecological Thresholds to System Design

Degrowth AI development also involves respecting hard limits on environmental and social impacts. Systems should not be scaled–or even developed–if they exceed thresholds for water use, energy consumption, or social disruption.

Implementation could involve:

  • Setting explicit caps on energy and water use, agreed through participatory processes before development begins.

  • Requiring social and environmental impact assessments at each stage of scaling.

  • Halting or redesigning systems when monitoring shows an agreed threshold has been crossed.
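To make the idea of a hard limit concrete, here is a minimal sketch of how a threshold gate might sit in a development pipeline. The budget values, names, and the check itself are hypothetical illustrations, not an existing tool or standard.

```python
# Hypothetical sketch: refuse to proceed when a project's estimated
# footprint exceeds community-agreed ecological budgets.
from dataclasses import dataclass

@dataclass
class EcologicalBudget:
    max_energy_kwh: float  # energy cap agreed in a participatory process
    max_water_m3: float    # cooling-water cap suited to local conditions

def within_budget(est_energy_kwh: float, est_water_m3: float,
                  budget: EcologicalBudget) -> bool:
    """Return True only if both estimates fall under the agreed caps."""
    return (est_energy_kwh <= budget.max_energy_kwh
            and est_water_m3 <= budget.max_water_m3)

budget = EcologicalBudget(max_energy_kwh=50_000, max_water_m3=200)

if not within_budget(est_energy_kwh=82_000, est_water_m3=150, budget=budget):
    raise SystemExit("Estimated footprint exceeds agreed thresholds; "
                     "the system should not be scaled further.")
```

The point is not the code but the ordering: the limit is checked before the system is built or scaled, and crossing it ends the project rather than triggering an offset.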

Center Frugality and Repairability

Frugality is a guiding principle in degrowth AI: systems should use no more computational and material resources than necessary for a clearly defined public good.

This can include:

  • Prioritizing smaller, purpose-built models over general-purpose or large foundation models.

  • Designing hardware and software for long-term use, repair, and upgradability.

  • Using open-source, interoperable tools that enable local control and adaptation.
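To give a rough sense of what prioritizing smaller, purpose-built models can mean computationally, the sketch below applies the common rule of thumb from the scaling-laws literature that training cost is roughly 6 × parameters × training tokens. The model sizes and token counts are hypothetical, chosen only to show the order-of-magnitude gap.

```python
# Rough compute comparison using the common heuristic that training
# cost is about 6 * parameters * tokens, in floating-point operations.

def training_flops(params: float, tokens: float) -> float:
    return 6 * params * tokens

small = training_flops(params=125e6, tokens=10e9)    # purpose-built model
large = training_flops(params=175e9, tokens=300e9)   # large foundation model

print(f"Small model: {small:.2e} FLOPs")      # 7.50e+18
print(f"Large model: {large:.2e} FLOPs")      # 3.15e+23
print(f"Ratio:       {large / small:,.0f}x")  # 42,000x
```

Under these assumptions, the task-specific model is tens of thousands of times cheaper to train, before even counting the downstream savings in inference, cooling, and hardware turnover.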

Design Exit and Sunset Strategies

Degrowth AI should never be permanent by default. Systems must include pre-designed conditions for their discontinuation, scaling down, or transformation in response to changing community needs or detected harms.

This can include:

  • Embedding time-bound licenses that require renewal based on participatory review.

  • Including pathways for decommissioning and data withdrawal in project planning.

  • Ensuring communities are supported in the removal or redesign of systems that no longer serve them.

Institutionalize Community Refusal and Oversight

Practically, degrowth AI must make refusal structurally possible. Communities need accessible and enforceable ways to say no–before, during, or after system deployment.

This can include:

  • Legal or administrative mechanisms (such as community veto rights, digital environmental impact tribunals, or municipal data ethics boards) to block or halt deployments.

  • Formal requirements for demonstrating social legitimacy–such as social license protocols–prior to deployment.

  • Transparent accountability procedures that allow harms to be surfaced, challenged, and remedied without bureaucratic delay or legal expertise.

Conclusion

Ultimately, the AI we live with today is not a foregone conclusion. It reflects a series of historical decisions–shaped by the ideologies of growth, separation, and control. But these trajectories can be redirected. We have the collective power to make different choices–to design and govern AI in ways that are rooted in ecological care, social justice, and the recognition of diverse ways of knowing.

This is not merely a question of technical adjustment; it is a profoundly political and cultural undertaking. It requires us to reject the binary, extractive logic that frames progress as domination–and instead embrace a vision of the future that is interdependent, situated, and plural. A future not of more–but of enough. Not of control–but of care. Not of inevitability–but of possibility.
