Data colonialism comes home to the US: Resistance must too by Nick Couldry

 
 

LSE’s Professor Nick Couldry and SUNY Oswego’s Professor Ulises A. Mejias explain how developments in the US government can be seen through the lens of data colonialism, and what can be done to resist.

 

Elon Musk’s radical intervention in the US government through the Department of Government Efficiency (DOGE) has been called an “AI coup,” a “national heist,” and a “power grab.” Various experts are concerned that it is unconstitutional. But beyond its legal ramifications, the parts of it that involve getting access to government data fit well within the playbook of what we call data colonialism.

It is only through the lens of colonialism that we can understand what is happening – not just as the actions of a broligarch and his cadre of young DOGE hackers, but as a data grab – the largest appropriation of public data by a private individual in the history of a modern state. Elon Musk may have zero experience in government, but he has proven adept at weaponizing a data-extracting platform, and he seems to be applying the lessons he learned at X to seize sensitive federal data, assume control of government payment systems, and even gain access to classified intelligence.

This phenomenon can no longer be explained through the rubric of ‘surveillance capitalism’, since the point is not merely to make money by tracking what users do. The point of DOGE appears to be to put all the data that exists about US citizens in the hands of private corporations and government employees operating outside the law. In neoliberalism, citizens become consumers; in data colonialism, citizens become subjects. If the difference is not apparent, think of how government data, down to DNA records, is used to control the Uyghur population in China. In this version of colonialism, what’s being appropriated is not land but human life, through access to data.

Once we view recent events in the US through a colonial lens, the disregard for legality is also unsurprising. Historical colonialism’s doctrine of terra nullius was designed precisely to rewrite the law of new ‘colonies’ simply by the act of seizing the land, with the excuse that no one smart was using it. Strip away the faux democratic narrative, and that’s Musk’s playbook, too. As Musk ally and Palantir cofounder Joe Lonsdale put it to the Washington Post:

“Everyone in the DC laptop class was extremely arrogant. These people don’t realize there are levels of competence and boldness that are far beyond anything in their sphere.”

In other words, only DOGE’s data manipulators are deemed smart enough to recognize – and therefore to deserve – the potential of government data.

The new alliance between Musk and President Donald Trump’s government might seem shocking, seen from the perspective of recent liberal capitalism. But it makes absolute sense within colonial history where lawless individuals and corporations (from the Spanish conquistador Hernán Cortés in Mexico to the British East India Company) worked in ever-closer alliances with states to produce a mature colonialism that combined corporate and sovereign power.

Until recently, there was a prospect of the US state supporting regulations to restrain Big Tech’s extractivism, in some form at least. Now, that’s a distant prospect. Yet even this shift has a colonial parallel. Initially, the Spanish crown was embarrassed by the exploits of the conquistadors and looked for legal ways to restrain them. But by the mid-16th century, those attempts at restraint were abandoned, and the path of no-holds-barred colonialism was set, only to be refined further by new colonial powers, including Holland and England. Perhaps that’s what the US government’s transformation signifies globally: a scene-setting for generalized data colonialism, with China as the second pole, just as historical colonialism supported multiple rival powers.

Unless, that is, resistance emerges. What might resistance look like if understood through the lens of colonial history?

We should not rule out regulatory interventions outside the US having some effect. However, to have any chance of success, national governments are going to need to form some large alliances. An alliance of Europe and Brazil, possibly with the UK, Australia, and others, would be formidable against US power, especially if tied to a wider trade war from which the US can expect only a pyrrhic victory.

New regulatory proposals are needed to address global data extraction as it is—an unacceptable continuation of colonial power—and to forge alternatives beyond what the US and China currently offer.

But regulation won’t be enough on its own, so entrenched is the power of data colonialists. The prospects of legal challenges in the US itself are entirely uncertain, depending ultimately, in some cases, on which way the conservative-dominated Supreme Court will turn. For effective resistance, something more like a popular revolt will be needed across many countries.

What about US federal workers and, more broadly, users of US federal services? Can they kickstart wider resistance by protesting the new administration’s most egregious actions? Rutgers University labor studies professor Eric Blanc recently argued Musk would be vulnerable to the combined efforts of federal workers and their unions. The history of the indignados movement in Spain following the 2008 financial crisis may also offer pointers.

However, the longer-term success of worker and user resistance will likely depend on the global resonances that US activism generates.

Current wider geopolitics will inevitably constrain many governments from challenging the vision of largely unrestrained AI and tech platforms that the Trump administration and Big Tech want to force on the world. That’s why popular and worker resistance will be essential: issues such as sustainability, energy use, and the protection of workers are universal cross-border issues.

Ultimately, the businesses from which the broligarchs profit are global. The new US administration poses risks for countless nations in relation to data platforms, AI, and many other areas. That’s why a long-term global historical perspective is needed. For that perspective, we can turn to the five-centuries-long combination of capitalism and colonialism that has now entered a crucial new phase.

This post was originally published by Tech Policy Press and is reposted with thanks. It gives the views of the authors and not the position of the Media@LSE blog, nor of the London School of Economics and Political Science.

 
 

The elite contradictions of generative AI by Nick Couldry

 

LSE’s Asher Kessler and Professor Nick Couldry reflect here on a recent essay by Dario Amodei, CEO of Anthropic, in which he offers a vision of the future and of AI’s role in it. Amodei, who was interviewed this week by the Financial Times, predicts that AI will radically accelerate human progress and alter our world. Is this the future we want?

 

In October 2024, the CEO of one of the most important artificial intelligence (AI) companies in the world published a 40-page essay in which he imagined the future. Dario Amodei is the CEO of Anthropic, a company he co-founded in 2021 to research and deploy AI in an explicitly safe and steerable way. In Machines of Loving Grace, Amodei predicts that over the next decade, humans may be able to eliminate most forms of cancer, prevent all infectious disease, and double our lifespans. With the radical power of AI, we can accelerate, according to Amodei, our “biological freedom”; that is, our freedom to overcome the constraints of biology. It is clear that Amodei wants our attention.

The essay starts in a sober, scientific tone, with Amodei distancing himself from Silicon Valley hype about the ‘singularity’ and even the term ‘artificial general intelligence’ (AGI). But that does not stop him developing a very expansive view of how AI will change our lives: across biology and physical health, neuroscience and mental health, economic development, war and peace, and finally work and meaning. Even though he avoids the term AGI, he believes that extremely powerful forms of AI will be with us by 2026.

The result, according to Amodei, is that soon, “a larger fraction of people’s lives” will be spent experiencing “extraordinary moments of revelation, creative inspiration, compassion…”. Harnessing the immediate potential of AI will lead us to drugs that can make every “brain behave a bit better” and more consistently manifest feelings of “love and transcendence”. Alongside ‘biological freedom’ we will gain ‘neurological freedom’ – if, that is, we devolve much of the management of our bodies and minds to AI.

For Amodei, all this is possible, even probable, because AI will do more than add specific innovations: more fundamentally, AI will radically accelerate the rate of progress. In fact, Amodei predicts that over the next five to ten years, we may experience what ordinarily would be 50-100 years of transformation. And here comes his key image: we could be entering a “compressed 21st century” of progress.

Yet Amodei acknowledges some limitations. It is less likely, he argues, that global inequality will be reduced, or economic growth will be shared. Nor, even with AI, is the future one in which democracy or peace is likely to be secured. On the contrary, “the triumph of liberal democracy and political stability is not guaranteed, perhaps not even likely” in Amodei’s AI future.

At this point let’s pause, and ask why in Amodei’s essay certain things are depicted as probable, whilst other phenomena drift out of the realm of possibility. Why, in spite of AI’s extraordinary powers, does it give us a future in which governing through democracy, or living with less inequality, seems less possible than us living until the age of 160? And what does this bifurcation reveal about the ideological assumptions that underlie how Amodei, and other Silicon Valley leaders, imagine the world and our future?

Let’s take the example of democracy and democratic values.

In Amodei’s essay, there is a peculiar relationship to democracy. Yes, some of democracy’s essential functions may be handled better: he even envisions ‘AI finance ministers’. In what seems to be a welcome realism, Amodei anticipates a future in which democracies are less likely to exist, something that – unlike some other Silicon Valley leaders (notably Peter Thiel) – he regrets. He also stresses how our inefficient democratic governments constrain and limit the true potential of AI. But throughout the essay, there is a complete silence on what democracy entails, and what it means for people’s lives. Democracy is, after all, the ability of people to come together and collectively decide on what sort of future world they want to live in. In Amodei’s narrative, democracy and democratic values seem to be erased, or at least ignored, so it is perhaps unsurprising that he sees no reason to be optimistic about their survival. This erasure of democracy’s actual practice is hardly new.

Writing in the 1950s, against the backdrop of the space race, the philosopher Hannah Arendt warned that if we allow science and technology to capture our ability to imagine the future, we will abandon an older faith in collective agency. Whereas previously the future seemed open (in that it was imagined as at least partly the product of open-ended collective decision-making), today the colonization of the future by science and technology seems to have already captured and closed off the future, equating it to never-ending technological breakthroughs under corporate control, rather than what people come to decide in the future. As Satya Nadella, CEO of Microsoft (lead partner of OpenAI, which launched ChatGPT), put it chillingly in his 2017 autobiography: ‘we retrofit for the future’.

Put another way, if (as Silicon Valley seems to demand) we enable the arc of scientific and technological progress to colonize our future, this radically restricts humans from asking perhaps the most important political and social question: “what are we doing (and why)?” Arendt demands that we go on asking this question, which is fundamentally political, not technological:

“The question is only whether we wish to use our new scientific and technical knowledge in this direction, and this question cannot be decided by scientific means; it is a political question of the first order” (The Human Condition, 1958, p.3)

Do we want a future in which some people, almost certainly the richest, almost certainly concentrated in Western countries, double their life expectancy, while others’ life expectancy remains largely unchanged? Do we want a future without democracy? Do we indeed accept a world in which biologists like Amodei (biology is the expertise that he emphasises in his essay, although he also claims an earlier familiarity with neuroscience) have a privileged foresight of a future whose design tools and mechanisms they already control? Should the remarkable calculative feat of AlphaFold in predicting protein structure at inhuman speed (Amodei’s lead example) really dominate our debates about the social benefits and possible harms of AI?

These surely are the questions we can and must ask ourselves. To do so, we must rebuild faith in our agency to take back control of the future that the Big Tech visionaries and oligarchs of the past two decades have captured for themselves.

This post represents the views of the authors and not the position of the Media@LSE blog nor of the London School of Economics and Political Science.

 

Today’s colonial “data grab” is deepening global inequalities by Nick Couldry

 

What are the parallels between earlier stages of colonialism and today’s digital world? Nick Couldry and Ulises A. Mejias argue that instead of a land grab, we are today witnessing a data grab whereby our lives, in all their aspects, are being captured and converted into commercial profits. How does this new era of informational power deepen existing global inequalities?

The worker who knows that every movement he makes, every gesture and every delay, however slight, will be tracked and scored by his employer. The child whose every response, every experiment and every mistake is recorded by an “EdTech” platform that never forgets or forgives. The woman who discovers that all the information she records on a fitness app is being sold to third parties with unknown impacts on her health insurance premiums.

Each case captures a very modern form of vulnerability that depends on a huge inequality of informational power. The three cases might seem unconnected in their details, but they are all part of a single phenomenon: a data grab whereby our lives, in all their aspects, are being captured and converted into profits that benefit corporations more than they benefit us.

The individual cases may well sound familiar, but the scale of the wider pattern probably is not. We are used to doing deals with individual services (clicking Yes to their impenetrable terms and conditions statements), but the larger picture tends to elude us, because it is intentionally being hidden from view. Behind the curtain of concepts like “convenience” and “progress” lies the audacity of an industry that claims that our lives are “just there” as an input for them to process and exploit for value.

It is easy to forget that this data grab is only possible on the basis of a form of inequality that simply wasn’t practicable four decades ago. Not because there weren’t businesses willing to exploit us in every way they could, but because around thirty years ago a completely new form of computer-based infrastructure emerged, connecting billions of computers and recording all interactions we had with them in the form of data. In itself, this might not have been a problem. What was crucial was the handing over of control of this infrastructure to commercial corporations, who developed business models that ruthlessly exploited those data traces – those digital footprints – and the new forms of targeted marketing and behavioural prediction that analysing those traces made possible.

And so, in the era that we usually associate with the birth of a new type of freedom (the online world), a new type of inequality was born: the inequality that derives from governing data territories – spaces built so that everything we do there is automatically captured as data under the exclusive control of that territory’s owner.

The most familiar form of data territory is the digital platform. The most familiar form of platform is social media. Over the past decade numerous scandals have become associated with social media platforms, scandals that are still largely unresolved while the platforms continue to be only partly regulated. But those scandals are merely symptoms of a much wider inequality of power over how data is extracted, stored, processed, and applied. That inequality lies at the heart of what we call “data colonialism”.

The term might be unsettling, but we believe it is appropriate. Pick up any business textbook and you will never see the history of the past thirty years described this way. A title like Thomas Davenport’s Big Data at Work spends more than two hundred pages celebrating the continuous extraction of data from every aspect of the contemporary workplace, without once mentioning the implications for those workers. EdTech platforms and the tech giants like Microsoft that service them talk endlessly about the personalisation of the educational experience, without ever noting the huge informational power that accrues to them in the process. Health product providers of all sorts rarely mention in their product descriptions the benefits they receive from getting access to our data in the growing market for health-related data.

This is a pattern whose outline has its most obvious historical antecedent in the land grab that launched colonialism five centuries ago: a land grab that reimagined much of the world as newly dependent territories and resource stockpiles for the benefit of a few nations in Europe. Today, what’s being grabbed is not land, but data, and through data, access to human life as a new asset for direct exploitation.

Some think that colonialism as an economic force ended before capitalism properly got under way, and that colonialism was consigned to the past when the institutions of empire finally collapsed in the 1960s. But the neocolonial influences of historical colonialism live on in today’s unequal global economy and embedded racism, and those inequalities are perpetuated by data colonialism.

More than that, the ways of thinking about the world and its populations, about who has a prior claim on resources and the authority of science, live on in a process that Peruvian sociologist Anibal Quijano called “coloniality”. Coloniality – colonial thinking about how knowledge is produced and by whom – is the clearest explanation for the sheer audacity of today’s AI giants who see fit to treat everything humanity has produced to date as fodder for their large language and other models.

In our recent book, Data Grab: The New Colonialism of Big Tech and How to Fight Back, we try to make sense of the parallels between the earlier stages of colonialism and today’s digital world. Doing so also helps us understand the ways in which the racial inequalities that are the legacy of earlier stages of colonialism go on being reproduced in the supposedly scientific guise of algorithmic data and AI processing today. Consider the forms of discrimination that black American sociologists Ruha Benjamin and Safiya Noble have outlined, or the hidden forms of work in the Global South that, as Ethiopian data scientist Timnit Gebru and others have shown, make a huge contribution to training the algorithms of so-called “artificial” intelligence in ways that are rarely recognised by the Big Tech industry.

The ongoing realities of five hundred years of colonialism live on, and are now converging with new inequalities associated with a data grab whose technical means only emerged three to four decades ago. Indeed, as earlier in history, the first step towards resisting this vast and all-encompassing social order is to name it for what it is: not just the latest improvement in capitalist techniques, but a new stage of colonialism’s ongoing appropriation of the world’s resources for the benefit of a few.

 

AI Companies Want to Colonize Our Data. Here’s How We Stop Them. by Nick Couldry

 

Artificial Intelligence companies are imposing a new “Doctrine of Discovery” on our digital commons, but we can resist.

In recent months, a number of novelists, artists and newspapers have sued generative artificial intelligence (AI) companies for taking a “free ride” on their content. These suits allege that the companies, which use that content to train their machine learning models, may be breaking copyright laws.

From the tech industry’s perspective, this content mining is necessary in order to build the AI tools that, tech companies say, will benefit all of us. In a recent statement to legislative bodies, OpenAI claimed that “it would be impossible to train today’s leading AI models without using copyrighted materials.” It remains to be seen if courts will agree, but it’s not looking good for content creators. In February, a California court dismissed large portions of a case brought by Sarah Silverman and other authors.

Some of these cases may reveal ongoing negotiations, as some companies figure out how to pressure others into sharing a piece of the AI pie. Publisher Axel Springer and the social media platform Reddit, for example, have recently made profitable deals to license their content to AI companies. Meanwhile, a legislative attempt in the United Kingdom that would have protected content generated by the creative industries has been abandoned.

But there is a larger social dilemma involved here that might not be as easy to detect: What about our content — content that we don’t usually associate with copyright laws, like emails, photos and videos uploaded to various platforms? There are no high-profile court cases around that. And yet, the appropriation of this content by generative AI reveals a monumental social and cultural transformation.

It’s easy to miss this transformation, because after all, this kind of content is considered a sort of commons that nobody owns. But the appropriation of this commons entails a kind of injustice and exploitation that we are still struggling to name, one not captured in the copyright cases. It’s a kind of injustice that we’ve seen before in history, whenever someone claims ownership of a resource because it was just there for the taking.

In the early phases of colonialism, colonizers such as the British claimed that Australia, the continent they had recently “discovered,” was in legal terms “terra nullius” — no one’s land — even though it had been inhabited for millennia. This was known as the Doctrine of Discovery, a colonial version of “finders, keepers.”

Such claims have been echoed more recently by corporations wanting to treat our digital content and even our biometric data as a mere exhaust that’s just there to be exploited. The Doctrine of Discovery survives today in a seamless move from cheap land to cheap labor to cheap data, a phenomenon we call “data colonialism.” The word “colonialism” is not being used metaphorically here, but to describe a very real emerging social order based not on the extraction of natural resources or labor, but on the continuous appropriation of human life through data. Data colonialism helps us understand today’s transformations of social life as extensions of a long historical arc of dispossession. All of human culture becomes the raw material that is fed to a commercial AI machine from which huge profits are expected. Earlier this year, OpenAI began a fundraising round for $7 trillion, “more than the combined gross domestic products of the UK and France,” as the Financial Times put it.

What really matters is not so much whether generative AI’s outputs plagiarize the content of famous authors owned by powerful media groups. The real issue is a whole new model of profit-making that treats our lives in data form as its free input. This profitable data grab, of which generative AI is just an egregious example, is really part of a larger power struggle with an extensive history.

To challenge this, we need to go beyond the narrow lens of copyright law and recover a broader view of why extractivism, under the guise of discovery, is wrong. Today’s new — and so far largely uncontested — conversion of our lives and cultures into colonized data territories will define the relations between Big Tech and the rest of us for decades, if not centuries. Once a resource has been appropriated, it is almost impossible to claim it back, as evidenced by the fact that the Doctrine of Discovery is still cited in contemporary government decisions to deny Indigenous people rights over their lands.

As with land, so too with data. Do nothing, and we will count the costs of Big Tech’s Doctrine of Discovery for a long time to come.

Applying Historical Lessons in the Age of AI

Unfortunately, one-track approaches to confronting these problems, like quitting a particular social media platform, will not be enough. Since colonialism is a multifaceted problem with centuries of history, fighting back against its new manifestations will also require multifaceted solutions that borrow from a rich anti-colonial tradition.

The most important tool in this struggle is our imagination. Decolonizing data needs to become a creative and cultural movement. It is true that no colonized society has managed to decisively and permanently undo colonialism. But even when colonial power could not be resisted with the body, it could be resisted with the mind. Collective ingenuity will be our most valuable asset.

In our recent book Data Grab: The New Colonialism of Big Tech and How to Fight Back, we outline a number of practical ways in which we can begin to apply this kind of creative energy. We borrow a model from Latin American and Latine activists, who encourage us to act simultaneously across three different levels: within the system, against the system and beyond the system. Limiting ourselves to only one of these levels will not be enough.

What might this look like in practice? Working within the system might mean continuing to push our governments to do what they have so far largely failed to do: Regulate Big Tech by passing anti-trust laws, consumer protection laws and laws that protect our cultural work and heritage. It might seem tempting to want to abandon mainstream politics, but doing so would be counterproductive in the long term.

But we cannot wait for the system to fix itself. This means we need to work against the system, embracing the politics and aesthetics of resistance as decolonial movements have done for centuries. There are plenty of inspiring examples, including those involving unionization, workers’ rights, Indigenous data sovereignty, environmental organizing, and movements against the use of data technologies to carry out wars, surveillance, apartheid and the persecution of migrants.

Finally, we need to think beyond the system, building ways of limiting data exploitation and redirecting the use of data toward more social, democratic goals. This is perhaps the most difficult but most important task. It will require new technologies as well as new ways of rejecting technology. A large collective and imaginative effort is needed to resist data colonialism’s new injustices. This effort is a crucial step on the longer journey to confronting and reversing colonialism itself.

 

Are we giving away too much online? by Nick Couldry

 

Do we really know how much data we’re giving away and how it’s being used? A new book by Nick Couldry and Ulises Mejias explores the murky world of big tech and how we can fight back.

Do you use social media? Shop online? Use a fitness tracker? Have a smart meter in your house? Chat with friends on messaging apps?

So many of our daily activities now take place online, it’s hard to imagine our lives without these services at our fingertips. But how often do you check the terms and conditions when downloading an app or signing up to an online account? How much do you know about the data that you’re giving away and how it’s being used?

In a new book, Data Grab, Professor Nick Couldry from the Department of Media and Communications at LSE and his co-author Professor Ulises A Mejias, a Mexican/US author from State University of New York Oswego, explore how big tech companies use our data and how it can be repackaged to manipulate our views, track our movements and discriminate against us.

They argue that through this “data grab”, colonialism – which was historically a land grab of natural resources, exploitative labour, and private property – has taken on a new form where big tech companies control and exploit our data for profit.  

The new colonialism

When undertaking research for the book, Professors Couldry and Mejias found data was being extracted from every aspect of human life. “We realised the closest parallel was in the colonial land grab that happened around 1500 when Spain and Portugal suddenly realised there was a whole new world they could grab for themselves,” Professor Couldry says.

“It seemed to us this was a good analogy for the serious scale of what’s happening with data and that's when we started developing a framework for data colonialism. We weren't the first people to come up with this term, but we were the first people to see this as not just a metaphor but a new stage in the evolution of colonialism. What if colonialism could evolve? And that a new land grab could be happening right now, right in front of our eyes, through human life being captured in the form of data?”

A curated universe

Professor Couldry argues we’re at a moment where we are facing a radical change in social life, which will “become enforced until there is no way out of it” and we become ever more reliant on these services.

“We are increasingly going to be locked into a completely curated universe which is governed by corporations rather than ourselves,” he warns. We are already starting to see something like this in China, for example, where the platform WeChat – which started off as an app to chat with your friends – is now being used for all aspects of life.

You can buy goods on WeChat, get credit, submit your tax returns and deal with the government. “It has now become a complete platform for life and, as we know, Elon Musk has a similar vision for the platform X,” explains Professor Couldry.

“All these platforms work off the network effect,” he says. “The more people who are on there, the more convenient it is for you to be on there and the more inconvenient it is for you to step off.”

Professors Couldry and Mejias call this a “civilising narrative” – something which distracts us from the reality of what is going on and makes it seem more palatable, even appealing. With data extraction, we are told that it will make our lives more convenient, and we will be better connected to each other. With historical colonialism, the notions of progress or Christian salvation were often given as a justification.  


The dark side of data

On a personal level, you might not be too worried about your data being collected; you might think you are resistant to its negative effects, and that at worst it might lead to targeted adverts.

However, on a macro level, when our data is aggregated it can be used in ways we could never imagine. For example, it can be used to train algorithms that make decisions affecting large groups of people, such as whether you receive state support, are successful in a job application or have a visa approved. These algorithms can be opaque and discriminatory, leaving us with little knowledge of how a decision was made. And, as with historical colonialism, the effects are usually felt most strongly by those who are already vulnerable.

And that is before we get on to the damage data collection can do to the environment. Data requires processing by huge banks of computers (known as data centres), which consume significant amounts of electricity, reducing the supply available for other uses. In the book, the authors cite the example of west London, where the building of much-needed new homes has been constrained until at least 2035 by a lack of electricity supply caused by the expansion of data centres in the area.

It is estimated that data centres will use between 3 and 13 per cent of all electricity globally by the year 2030, compared with the one per cent they used in 2010. The servers also generate heat, which is often dissipated using vast amounts of fresh water. Thames Water has already expressed concern that its water supplies are getting dangerously low, with data centres a key reason behind this.


How to fight back

This all paints a very bleak picture, but Professor Couldry doesn’t want us to despair. He argues this future can be averted by a large, collective effort to resist data colonialism’s injustices. “We can only change things together and we need to help each other make these changes. This is what we try and offer in the book: a new vision to help people understand that it doesn’t need to go this way.”

To offer inspiration, Data Grab provides examples of individuals and groups who are resisting. In the US, 17 communities have banned the use of facial recognition software by police. Workers across the globe are taking a stand, with a growing number of unions at companies such as Google, Apple and Amazon. Gig workers are taking matters into their own hands, exerting pressure on governments to guarantee their basic rights. Some are even undertaking “account therapy”, coaxing algorithms to behave in ways more favourable to workers and so countering their exploitative effects.

Whistleblowers such as Edward Snowden and Frances Haugen have helped expose the US surveillance apparatus and the willingness of big companies to put profit before the safety and mental wellbeing of their users. Some companies, such as Lush cosmetics, have closed down some of their social media accounts because of the harmful effects of these platforms, and taken the financial hit for doing so.

Not all actions have to be on a large scale. As is noted in the book, “even putting your phone down for a couple of hours might be an act of defiance”. Likewise, refusing to accept cookies when visiting a website can be a form of resistance – something that, apparently, only 0.5 per cent of users currently do.

Professor Couldry also outlines several alternative platforms, known as “federated platforms”, which are focused on community rather than profit and can be used instead of mainstream apps. The best known is probably Mastodon, an alternative to X; Pixelfed can be used for sharing photographs, and PeerTube is a federated video-sharing platform.

With our lives increasingly taking place online, we are giving away more data than ever. Maybe, as Professors Couldry and Mejias state, “in the long run, a life full of smart devices is not really smart at all.” Maybe this is the time to take a stand.

Professor Nick Couldry was speaking to Charlotte Kelloway, Media Relations Manager at LSE.