New Cases on the Observatory: February Roundup

The AI models we use in our day-to-day lives are the products of a vast and complex supply chain, whose planetary impacts we aim to uncover and make visible. Through our Observatory of Planetary Justice Impacts of AI, we bear witness to the impacts of the AI supply chain, so as to spread information, share resources, and build a catalogue of real-world examples.

Latest Cases

In this blog, we feature five of the ten latest additions to the Observatory, which document political and structural injustice throughout the AI supply chain.

‘In the end, you feel blank’: India’s female workers watching hours of abusive content to train AI

By Anuj Behal

Stage: Model Training

There is a hidden psychological toll on women in rural India who are employed as content moderators and data annotators to train global AI systems by reviewing and classifying disturbing content. Workers like Monsumi Murmu in Jharkhand spend hours each day watching violent, pornographic, and abusive videos and images to teach AI to recognise harmful material, a foundational but psychologically taxing task in machine-learning pipelines. Many of these women experience severe distress, including sleep disruption, intrusive thoughts, emotional numbing, and lasting trauma. They often learn the true nature of the work only after being hired under vague job descriptions, and are bound by non-disclosure agreements that discourage open discussion, adding to their isolation. Despite their crucial role, they typically receive low pay, limited or no mental health support, and minimal legal protection, as India's labour framework does not recognise psychological harm as an occupational injury. Milagros Miceli, a sociologist leading the Data Workers' Inquiry, describes this emotional blunting ("feeling blank") as a defence mechanism against prolonged exposure to graphic content, but one with potentially deep personal impacts. The case exposes structural inequities in the global AI supply chain, where vulnerable and marginalised workers bear the hidden costs of training technologies used worldwide.

ICE and CBP’s face-recognition app can’t actually verify who people are

By Dell Cameron and Maddy Varner

Stage: Model Deployment

The U.S. Department of Homeland Security’s Mobile Fortify app, deployed by Immigration and Customs Enforcement (ICE) and Customs and Border Protection (CBP), does not reliably verify individuals’ identities despite official claims. Launched in spring 2025 under an executive order aimed at intensifying immigration enforcement, the app has reportedly been used tens of thousands of times across the country, capturing facial images and other biographic data in the field. However, experts and internal records show Fortify only returns possible matches based on algorithms rather than confirming identity, a limitation exacerbated by its use in uncontrolled conditions such as street stops rather than controlled biometric settings. Its approval came after DHS dismantled internal privacy review safeguards, allowing the tool to be rolled out with minimal scrutiny. Civil liberties advocates warn that the technology’s inaccuracies can lead to misidentification and privacy infringements, with agents acting on ambiguous facial matches and escalating enforcement based on weak biometric evidence. The app’s widespread use, including on U.S. citizens, protesters, and bystanders, and its integration into long-term government databases raise concerns about civil rights, oversight, and the expansion of surveillance far from the border, prompting calls by US Senator Ed Markey and the nonprofit Electronic Privacy Information Center for stricter limits on biometric tools used in interior enforcement.

X tried to sidestep Brazil’s inquiry on AI deepfakes; the government just pushed back

By Tatiana Dias

Stage: Model Deployment

In Brazil, tensions have escalated between authorities and the social media platform X over its generative AI tool Grok after reports that it produced millions of sexualized images and deepfakes, including content depicting minors. Brazilian regulators and consumer protection advocates demanded accountability, technical safeguards, and mechanisms to block the generation and dissemination of harmful content, backed by the threat of daily fines if the platform failed to comply. In response, X Brasil sought to deflect responsibility by arguing that the Grok profile should be treated as a regular user account and that its Brazilian subsidiary, the U.S.-based parent company X Corp, and the AI operator xAI are legally distinct entities. Brazilian agencies rejected this framing, asserting that the integration between X and Grok makes the defence legally weak and that X must jointly answer for the failures under local laws. Regulators, including the Federal Public Ministry, the National Data Protection Agency, and the Ministry of Justice’s Consumer Secretariat, formally demanded clearer technical and organisational measures to eliminate harmful outputs and to report on content removal and moderation metrics. The Instituto Brasileiro de Defesa do Consumidor (IDEC) argues that the platform’s slow and vague responses have allowed harmful deepfakes to persist, creating a standoff that highlights broader regulatory challenges around AI accountability and platform liability in Brazil.

French police raid X’s Paris office as UK investigation continues

By Stevie Bonifield

Stage: Model Deployment

French law enforcement, working with Europol and national cybercrime units, executed a raid on X’s Paris offices as part of a widening investigation into the company and its AI chatbot Grok, which has been linked to the proliferation of harmful deepfake and sexualized AI-generated content. As part of the probe, which began in early 2025, prosecutors have also summoned senior executives, including Elon Musk and former X CEO Linda Yaccarino, to testify, amid allegations ranging from the distribution and possession of illicit material to potential algorithm manipulation and data extraction offences. The Paris prosecutor’s cybercrime division is examining whether X and its associated entities may have breached French laws, particularly those surrounding child protection and online content standards. Simultaneously, regulators in the UK, including the Information Commissioner’s Office and Ofcom, are investigating aspects of X’s operations, focusing on risks posed by Grok’s content generation capabilities and potential failures to comply with data protection and safety legislation. These actions reflect growing global scrutiny of social media platforms’ AI systems, with authorities seeking to hold them accountable for moderating risky output and ensuring legal compliance. The raids mark an escalation in enforcement and signal that regulators are willing to use criminal investigative powers to probe major tech firms’ algorithms and AI features.

AI needs the mineral resources under Greenland’s permafrost

By Agence France-Presse (AFP)

Stage: Raw Material Extraction

The recent geopolitical scramble over Greenland reveals the island’s subterranean mining potential. Greenland is believed to hold deposits of rare-earth elements, graphite, nickel, and zinc, all necessary for building tech infrastructure, including AI. However, its mining sector remains nascent, with only one fully operational mine (the Lumina Sustainable Materials anorthosite extraction site) despite vast geological potential and more than 1,200 identified mineral sites. The harsh Arctic environment, including extreme weather, permafrost, logistical hurdles, and a lack of infrastructure such as roads, rail, and reliable power, has made large-scale resource extraction costly and operationally challenging. Environmental concerns and stringent regulatory frameworks add further constraints, and the remoteness of the deposits and the high cost of extraction have limited commercial interest. Planning to extract Greenland’s mineral resources therefore amounts to a climate disaster plan: as the planet warms and the permafrost thaws, the deposits will gradually become easier to access.

Our Observations

Technologies like AI are often treated as if they emerged in a vacuum: purely technical developments born of someone’s brain, whose potential for good or harm is yet to be seen. In contrast, the cases in our Observatory highlight the impacts felt from a tool’s conception to its disposal, and how deeply political those impacts are.

This month, we have seen several updates on the impacts of deploying AI models - in particular, X is being investigated in France and Brazil, among other jurisdictions, over its AI chatbot Grok and its generation of non-consensual pornographic deepfakes, including of children. The harm of such generated images is incalculable, and it can be directly traced to the developers’ failure to stop the model from producing them, whether through additional training or through rigorous content moderation. In the case of Grok, this does not even seem to be a failure but an ideological decision by Elon Musk in favour of minimal regulation and moderation (what he calls ‘censorship’).

Even when AI tools are appropriately trained, that training takes an extreme psychological toll on those who carry it out, as explored in the Guardian piece above, for which Anuj Behal interviewed women hired to watch extreme material and flag it for AI training. Tools developed for the benefit and profit of the Global Minority require training work that is displaced to the Global Majority - in this case, a displacement of trauma. In order to spare Global Minority eyes and ears from such extreme images, the eyes and ears of women in India are violated: the injustice of the tool is inherent in its development.

The political dimensions of AI do not arise only in its deployment, nor only in its training: they emerge all the way upstream, in the extraction of the necessary materials. France 24’s deep-dive into Greenland’s mineral deposits highlights the role of AI in the current geopolitical landscape: mineral resources are now perceived as even more valuable because they are integral to the profits accrued from selling AI tools. President Trump’s interest in Greenland is in no small part linked to its geological deposits.

The political dimensions of AI, from cradle to grave, are inherent and impossible to ignore. While using these tools in our everyday lives, we must remember the planetary justice costs we are incurring, and how they affect the lives of people and more-than-human communities near and far. Even more importantly, the developers of these tools must reckon with their materiality, and with the materiality of the harm they are causing. Responsibility and accountability are essential if we are to develop sustainable tools, starting with the question: is this worth developing at all?

We want to emphasise that the Observatory is a collective repository, and that we are always looking for submissions of case studies. This is a community-led, adaptive data collection effort. If you come across case studies similar to these, please submit them here!
