The India AI Impact Summit and the Planet Pillar that Wasn’t

In February 2026, policymakers, technologists, and academics gathered in New Delhi for the long-awaited India AI Impact Summit, positioned as a moment for India to articulate the AI priorities and ambitions of emerging economies seeking to leverage AI for economic transformation rather than merely for global competition. The summit’s framing rested on three alliterative pillars: People, Planet, Progress. For those of us tracking AI’s environmental footprint, the explicit inclusion of Planet was particularly encouraging.

Yet the summit offered few meaningful spaces to candidly discuss the current and emerging environmental costs of resource-intensive AI infrastructure. Instead, the conversation centered overwhelmingly on opportunity: how AI could accelerate development, and how India could attract large-scale investment for data centers and compute infrastructure powering the AI economy. This messaging was also consistent with that of other resource-rich emerging powers. India’s AI strategy seems to be to use industrial policy to make AI infrastructure investment as frictionless as possible.

Sovereignty and the Push for AI Infrastructure

Two dynamics shaped how climate and AI were discussed at the India AI Impact Summit. First, the tenor was unmistakably pro-investment in data centers, with climate framed largely through the technosolutionist promise that advanced AI systems will help mitigate future environmental risks. This framing sidesteps the immediate material footprint of AI data centers, from the water- and energy-intensive cooling systems that keep these facilities running to the growing planetary costs of deploying large models at scale.

Second, the language of AI sovereignty, albeit amorphously defined, emerged as a shorthand for urgent and rapid expansion of locally hosted infrastructures. India’s recent budget, for example, offers tax holidays until 2047 for foreign cloud providers using Indian data centers, part of a broader effort to attract hyperscale AI infrastructure. Similarly, Brazil, a counterpart to India in AI ambition and capacity, has introduced tax incentives and renewable-energy-backed infrastructure policies to attract hyperscalers, even as environmental groups warn that data-center expansion could strain water resources in drought-prone regions. In Mexico, the rapid build-out of large facilities, some approaching gigawatt-scale capacity, has already triggered local concerns about water shortages and grid pressures.

Across these narratives of AI opportunity and sovereignty, a critical discussion has largely gone missing: the material environmental costs that AI infrastructure imposes on ecosystems and communities that are often already under resource stress.

The Planetary Costs of Hyperscale AI

Hyperscale data centers across key regions in the Global Majority are already producing measurable planetary impacts. In India, where cities like Mumbai, Hyderabad, Bengaluru, and Chennai are emerging as AI infrastructure hubs, water demand from data centers is projected to rise from about 150 billion liters in 2025 to roughly 258 billion liters by 2030, with 60–80 percent of facilities expected to operate in regions facing high water stress. These facilities also drive substantial energy demand: globally, AI-related data-center electricity use is expected to surge dramatically by the end of the decade as compute needs scale. Brazil has become a focal point for hyperscale investment because of renewable-energy potential and tax incentives, yet new facilities are being proposed in drought-prone regions such as Ceará, where communities have repeatedly faced water shortages. Across Brazil, Mexico, and Chile, the region’s data-center sector already has around 700 MW of installed IT capacity and consumes an estimated 2–3 TWh of electricity annually, with demand expected to rise sharply as AI workloads expand. These facilities often deliver limited employment benefits: large centers may create only around 100 permanent operational jobs after construction, while placing pressure on local grids, land use, and water systems.

The Transparency Gap: A Political Choice

Compounding these concerns is a persistent transparency gap: companies frequently treat water and energy data as proprietary, and governments or utilities often withhold technical documentation from licensing processes, citing industrial secrecy. As AI infrastructure scales across resource-constrained regions, these governance gaps will become increasingly consequential. Further, the choice to demand more transparency from corporations on water and energy consumption, in addition to being a strategic one, is also a political one. Governments see the friction introduced by robust environmental clearances and auditing as running counter to the frictionless approvals that characterize AI industrial policies; governments also face regional regulatory competition, as other resource-rich nations embrace deregulation and lax environmental oversight to position themselves to attract investment.

This is a short-sighted approach, creating race-to-the-bottom conditions that will entrench resource extraction and impose disproportionate harms on the Global Majority in the long run. It is in the interest of national governments to adopt proactive environmental monitoring and assessment policies by design, rather than as an afterthought. The AI Supply Chain Impact Framework by AIPJ offers one pathway for governments to systematically assess the environmental and community impacts of AI infrastructure: embedding water, energy, land, and labor considerations into permitting, procurement, and AI policy design.

As AI infrastructure expands, the environmental footprint of compute will increasingly move from the margins of policy debates to their center. Electricity demand from data centers is projected to surge this decade, while water withdrawals and land use pressures will become more visible in the communities hosting these facilities. The next generation of AI summits will therefore need to broaden the scope of what counts as AI risk. Today’s risk and safety conversations remain largely focused on model misuse, security, and governance. But as the physical backbone of the AI economy scales, upstream risks will become equally consequential. Concerns that stronger safeguards might slow investment should not push countries into a regulatory race to the bottom. Future summit series should instead make deliberate space for these countervailing perspectives and expand the remit of safe AI to include planetary and community impacts: ensuring that the infrastructure powering AI’s growth is built with foresight, transparency, and planetary justice.
