
‘Not Too Late To Regulate AI,’ Says Empire of AI’s Author Karen Hao

Author Karen Hao argues that while the cost of Silicon Valley-led AI expansion is being borne by the Global South, there are ways to reverse this narrative and build collaborative AI systems that benefit all.

Sonia Bhaskar | 27 November, 2025

As dependence on artificial intelligence increases and tech companies across the world invest heavily in data center facilities, award-winning reporter and author of Empire of AI, Karen Hao, argues that this scale of development is “unnecessary”. She says that the narrative about building ever-larger data centers has been solidified and sold to the public and policymakers. The expansion, which Hao refers to as “imperial expansion”, is happening at the cost of communities and already scarce resources. Highlighting the widening divide between the Global North and Global South, she argues that AI development should be a cross-border endeavour.

In this conversation with Sonia Bhaskar, Hao also discusses the role of governments in regulating AI, arguing that AI should lead to public benefit rather than benefitting the few “that are running these Silicon Valley companies”.

The interview has been edited for brevity.

What have been some of the more alarming developments you’ve noticed recently in the AI industry?

Sam Altman, just a few weeks ago, said that he envisions, just for his company, OpenAI, building 250 gigawatts' worth of data centers by 2033, which he estimates could cost $10 trillion. A reporter actually compared those 250 gigawatts to India's entire energy demand. That would be an extraordinary increase in energy demand globally, and most of it would be powered by fossil fuels, because that is the energy available.

I want to emphasise that this scale is actually technically unnecessary. Even if we didn’t have that scale, we would still have AI. And yet, they’re trying to build larger and larger data center facilities, supercomputer facilities, around the world, and have sold a narrative to the public and policymakers that this is required for getting more AI capabilities.

The Global North and South divide seems to be widening with this kind of model of AI development. Are these countries geared up to even counter the threat coming their way?

There are a lot of activists, organisations, and communities themselves that have taken big leadership roles in countering this kind of expansion, which I describe in my book as a form of imperial expansion. One of the anecdotes I highlight in my book is of Chilean water activists who were very concerned about a proposed Google data center landing in their community. It was a project sanctioned by the Chilean government, one the government in fact celebrated, because they considered it a mark of economic development for the country, of participating in this technological revolution. But what the community and the water activists saw on the ground was that it was taking their already scarce fresh water resources, and without their consent; they had not been notified of the project.

They actually discovered it after someone who was not supposed to tip them off did so. They then pushed back aggressively on the proposal. They started mobilising, talking with every single one of the neighbours in that community, explaining to them in plain terms what exactly having a data center in the community could mean for their ability to access fresh water. They built up so much public support that they were able to pass a referendum blocking the project until there was further investigation into its environmental impacts.

And then they went even further and escalated to Google's headquarters in Mountain View, which had to evaluate the project and make concessions to the community. Then they escalated to the Chilean government, which has now created a roundtable that brings together community residents and activists, as well as the tech companies, to discuss future data center projects in the country.

That is the kind of spirit I mean: even when there aren't government protections from imperial expansion projects, and even when the government is actually in cahoots with the companies to bring in even more of these facilities, we have seen remarkable mobilising where the community steps up, takes the leadership, and is still able to gain significant concessions, and even block projects completely, when they are not in its interest.

How do you see the role of Asia in terms of the incoming data center investments and a source of cheap labour?

Basically, any time a chatbot or an AI system can do anything, it has to be taught by a human. And because chatbots are language technologies, these companies will look to contract cheap English-speaking workers. Because of the history of colonialism, with the UK colonising India, or the US colonising places like the Philippines, those are places with a large base of English-speaking workers who can perform the tasks these tech companies want done.

In my book, I highlight the example of how OpenAI contracted an Indian firm to do content moderation on its models for image generation. These workers were looking through all of these really horrific AI-generated images to try to figure out which ones should be allowed or not allowed with OpenAI's technologies.

In the Philippines there is also a huge base of data industry workers, working not just on chatbots but on things like labelling images or training self-driving cars. All of these workers often work in extremely exploitative conditions. Because the workers are not actually told who they are working for, they are often unclear on who the boss even is; if there are problems they are seeing, they don't even know who to flag them to. And they are often paid very little, maybe a couple of dollars a day, or a couple of dollars an hour, depending on where they sit geographically and how much these companies think they can get away with.

So it has become more and more of a problem, as the industry has continued to grow and the labour demand has continued to grow.

We see three models: the US (big tech, concentrated power/profit), China (state-backed incentives and narrative control), and the EU (cautious regulation, with talk of General Data Protection Regulation [GDPR] relaxation). Where do you think the solution lies? What features of these models offer the most just and sustainable path for AI’s wide-reaching societal and economic impact?

China’s case illustrates something that we have seen for a long time, which is that in resource-constrained environments there is actually much more innovative AI work being done. In my own interactions with researchers around the world, some of the most exciting research is by African AI researchers, or Latin American researchers, or Chinese AI researchers, because they are just not operating under the same circumstances of having an extraordinary amount of money or computational resources. They are developing models that match American models in capability with far fewer resources, but they are also looking at fundamentally different types of AI technologies that centre the problem they ultimately want to solve with AI.

So, with African AI researchers that I’ve spoken to in the past, grid reliability is a huge issue, not just in Africa but actually around the world. And they’ve figured out how to create very small specialised AI models to improve grid reliability such as simply predicting when a piece of grid equipment is going to fail before it fails so they can swap in newer equipment, upgrade it, and not have any kind of blackout, or downtime, for that grid. And these are things that the U.S. has really underinvested in because they’re just not pursuing this kind of thinking anymore. I sort of see it as they’ve become intellectually lazy because of a glut of capital and glut of resources. So, they’re not looking at how do we actually just solve these very practical problems that can improve the quality of life and human rights around the world. 

Before Silicon Valley really entered the scene with realising that there was a significant amount of money to be made in the AI industry, AI research was largely a cross-border endeavour. It was understood that if everyone around the world works together to build AI, we will just end up with better systems that improve lives for everyone. And so that is where I see the solution. We rewind the narrative of zero-sum game, like all of these competing models, and who’s gonna win and who’s gonna lose. We rewind all of that back to the narrative that once reigned, which was that we actually have cross-border collaboration, and that the best ideas can come from anywhere in the world, and everyone can take advantage of them.

Who should regulate AI, and what role should the government play? We failed to regulate social media; nowhere in the world have we successfully managed to do that. AI seems to be getting out of control, and when we combine that with issues of mis- and disinformation, deepfakes, privacy, and security, what model of regulation can work? How can we learn from our past mistakes?

So first, I think we need a broader understanding of what even constitutes AI regulation. I think a lot of people think that regulating AI simply means allowing anyone to build AI, however they want, and then deciding how to constrain the deployment of that technology into the world. But actually, we also need to consider how AI is developed in the first place.

So the fact that these American companies have just been scraping whatever data they want, and hoovering up all of this intellectual property without any hindrance—that is a place where the government should be intervening.

They should be interpreting copyright law to strengthen intellectual property protections. The U.S. should, please, have a federal data privacy law, so that there are actual protections against people's children's photos being caught up in training these systems, which can ultimately then be used to harm their kids.

The data centers need to be regulated—that is also a form of AI regulation, where all governments should be thinking within their own country's context. What land would they actually allow these companies to build on? How much energy would they actually want these companies to be able to use for this type of technology? How much freshwater would they use? And there might be plenty of existing regulations that already apply, like environmental regulation, originally designed to check other industries and their pollution, that are actually extremely relevant for the AI industry as well. So when you ask who should regulate, the answer is everyone. Every government should be thinking about how to design regulations that help steer the trajectory of AI development so that it ultimately leads to broad public benefit, rather than benefits concentrated in the hands of the wealthy that are running these Silicon Valley companies. And that requires looking at the full AI development supply chain, and also the deployment scenarios, and there will need to be international coordination, of course.

So we need to use all of the regulatory instruments that are available to us, that we already have, to also contain the way the AI industry is currently creating a lot of harm. It's never too late to do that, because these companies require up-to-date data in order to continue developing their models, and they require up-to-date computer chips for training the next generation of their models. So we might have missed the boat for the things that already exist now, but that doesn't mean we cannot regulate what is going to exist in the future.

In order for the companies to keep up in this innovative field, they need to develop new things. It's not like OpenAI can suddenly continue cashing in on a ChatGPT that is generations old. So for all the things that are going to come after, the many AI technologies that come after, it's absolutely not too late for us to actually implement regulation. In fact, we now have a lot of evidence, from the last three years of this technology being around, of how to design effective regulation for the technologies moving forward.

There’s been talk of an AI bubble building up. What’s your estimate of where we’re headed over the next year, and what are the bright spots you see in this landscape?

There’s a huge bubble.

I am very concerned that it’s going to burst in a way that is extremely harmful to the global economy and is going to leave a lot of people around the world devastated. But if there is a bubble pop, journalism is still essential. Community efforts, public education, and civil society work are still essential to figure out how we rebuild after the bubble pops.

There’s been plenty of work done by people like Naomi Klein, who has shown that in crisis, in disaster situations, there are two ways we can come out of it:

a) Authoritarians, fascists, and corporate opportunists use that crisis to make things worse: to install more surveillance, erode more human rights, and privatise everything, or

b) It becomes a place for the phoenix to rise from the ashes. We are able to build something fundamentally better by going back to the real core things we want to protect as a society, the things crucial to human flourishing, and rebuilding even better, more resilient institutions, communities, and organisations around those core principles.

All of the work we do now as public defenders, whether activism, advocacy, non-profit work, government work, or journalism, is going to help determine which path we take.

You have drawn parallels between AI empires and colonial extraction. Where do you see this empire going, and where do we go from here as a society? 

It did take a long time to unwind colonialism, but it happened. And it happened at a time when there wasn't really a broad understanding of democracy, of this global belief in fortifying human agency and human rights, and in making sure that people have the freedom to choose how they self-govern and what kind of future they ultimately want to live in. So we're in a much better place now, where we actually know those things, and people are willing to fight for them because they have already tasted them.

One thing I see is that the immune-system response to these new empires has happened much, much faster than in the past. We know for a fact that throughout history empires have fallen, and this time will be no different.

This story was produced by Asian Dispatch and originally published on 27 November 2025. It has been republished by CIR with permission.
