Resonance, Not Scalability by Nick Couldry

 

Interdisciplinary Workshop on Reimagining Democracy Essay Series

Over the past three decades, humanity has made a fundamental error in spatial design—an error that makes it vanishingly unlikely that we can create positive conversation spaces for democracy. I’ve been trying to think about how we might correct that error; in fact, I have just completed a book on the topic. My book is written from the perspective of a social theorist who thinks mainly about data institutions and social order.

The error’s outlines will be familiar, even if how I describe it may not be. In essence, we’ve (inadvertently) allowed businesses to generate what I call “the space of the world”: the space of (almost) all possible spaces for social interaction and therefore for democratic practice. That didn’t happen because we asked businesses to design the spaces where democracy plays out; no one ever planned that. It happened because we allowed large corporations to design shadow spaces (we call them platforms and apps) with two key properties: First, these spaces can be controlled, indeed exploited, by these large corporations for their own ends, mainly profit. Second, these spaces mimic aspects of daily life sufficiently well that they bolt on to our actual social world under conditions largely dictated by these corporations (above all, the condition that we are incentivized to use them because everyone else is).

By allowing large corporations to promote these shadow spaces, humanity made three fundamental design mistakes that are now constraining our social world and our democratic futures. Our first mistake was allowing the creation of a space of spaces of unlimited scale to which everyone potentially could connect, without regard to the consequences of allowing unlimited feedback loops across all our activities within that space. The toxic results have been seen on every scale.

Second, we never even considered the possibility of designing and controlling the spaces in between our platform spaces. We neglected that possibility completely, allowing businesses simply to optimize engagement and the profit that flows from it, whatever the scale.

Third mistake: We let ourselves be driven by the value of unlimited scalability—exactly the wrong value for social and political design. Indeed, it is a value orthogonal to how political theory has, for millennia, thought about the conditions under which democracy—or any nonviolent political life—is possible. Neither of the two main traditions of Western political theory—the Aristotelian idea of politics as a natural human activity on a relatively small scale or the Hobbesian idea of a social contract for societal security—ever imagined that politics could safely unfold on the scale of the planet or in smaller spaces of continuous interaction and unlimited playback and feedback.

If you’d asked anyone 30 years ago (political theorist or not) whether it made sense to build a space like the one that has emerged—linking together all possible social and political spaces and, what’s more, incentivizing feedback loops of engagement across it—they would have said, “No, don’t do it.” But we did it, and we need to actively think about what it might mean to dismantle the space we’ve built—or at least override it with different values and different design thinking.

We can’t erase the idea of platforms, let alone the internet, as a space of connection. Instead, we need a very different approach to rebuilding our space of the world. It’s a problem that we unwisely got into, but now we have no choice but to invent better solutions—solutions that are less risky for democracy. To start, we need to think about platform space in a completely different way, securing the “spaces-in-between” (the firebreaks, if you like) that limit flow and enable “friction,” as legal scholar Ellen Goodman puts it, and reducing some of the risks of toxic feedback loops (we can’t solve them all). Whatever their current limitations, I believe that federated platforms, such as Mastodon, point in the right direction.
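To make the firebreak idea concrete, here is a toy sketch (my own illustration, assuming a made-up set of communities and links; it does not describe Mastodon’s or any real federation protocol). Content spreads only along federation links that each community has explicitly chosen, so a cascade that would sweep across one global space halts at community-chosen boundaries:

```python
# Toy model of federated "firebreaks": a post propagates only along
# federation links that each community has explicitly chosen.
# Illustrative only; not Mastodon's or any real federation protocol.

federation_links = {
    "city-forum": {"neighborhood-net"},                  # small, deliberate links
    "neighborhood-net": {"city-forum", "hobby-space"},
    "hobby-space": {"neighborhood-net"},
    "viral-hub": set(),                                  # defederated: a firebreak
}

def spread(origin, links):
    """Breadth-first spread of a post from its origin community."""
    reached, frontier = {origin}, [origin]
    while frontier:
        current = frontier.pop()
        for neighbor in links.get(current, set()):
            if neighbor not in reached:
                reached.add(neighbor)
                frontier.append(neighbor)
    return reached

# A post from "viral-hub" reaches no one else; one from "city-forum"
# reaches only the communities that opted in to each other.
print(spread("viral-hub", federation_links))   # {'viral-hub'}
print(spread("city-forum", federation_links))  # reaches 3 of the 4 spaces
```

The design point is the empty link set for “viral-hub”: defederation is precisely the kind of space-in-between that limits flow.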

Second, because we will have started to build spaces-in-between, we should trust more in the new possibilities those firebreaks protect: the possibility of discussion in spaces whose values and purpose align more with specific communities than with abstract business logics. Put in political terms, this means trusting more in subsidiarity and rejecting scalability as a guiding value.

Third, this opens up the possibility of giving a greater design role to existing communities as hosts of platform spaces, and to government and civil society, not as hosts (the risk of censorship is too great) but as general sources of subsidy for the infrastructure on which healthy spaces of social encounter and civic discussion depend. This aligns with what communications scholar Ethan Zuckerman has called an “intentionally digital public infrastructure.”

All this means thinking about design differently by moving away from the mixture of narrow economic logic plus a roll of the dice that has characterized how today’s space of the world has unfolded. But that’s hard without a guiding principle. To give us one, I want to return to the principle of resonance that I tentatively introduced at last year’s Interdisciplinary Workshop on Reimagining Democracy (IWORD). Then, I talked about it in perceptual terms: as basically the possibility of sharing with others the perception that, even if you don’t entirely trust each other or the government, you are all in various ways responding to broadly the same set of problems within broadly the same horizon of possibility.

What I hadn’t realized then is that the design choice that makes this possible is even more important than this shared perception. It’s that alternative design approach to how we configure large social space for which I now want to reserve the term “resonance.” In the physical world, resonance occurs when waves at multiple frequencies propagate across a space and objects start vibrating at whichever frequency in the source matches their own natural frequency. That resonating doesn’t happen because a frequency is imposed on those objects or because a set of external priorities forces that particular frequency onto the space. It results from the interaction between the wave source and the properties of the objects themselves; this positive, non-disruptive outcome occurs without any external attempt to optimize for one solution. Yet while resonance builds from the natural frequencies of objects, today’s social media landscape seems to be built against our natural frequencies, undermining whatever helps democracy and our common interests.

Last year at IWORD, science fiction writer Ted Chiang asked, “How do we stop AI from being another McKinsey?” In other words, why are we locked into seeing AI only in terms of what it can do for capitalism? The same point could be made in relation to the design of digital spaces and platforms: Why think about them only in a framework driven by profit extraction? How is that useful for democracy? It’s not a rejection of markets to suggest that, in designing the spaces in which we live, we should be oriented by broader principles of what’s good for life in general, for democracy, and for making better collective decisions. That yields different priorities.
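Before turning to those priorities, it may help to pin down the physics behind the analogy. In the standard textbook account (a formulation I add here for clarity; it is not part of the essay’s original argument), an object of mass $m$ with natural frequency $\omega_0$ and damping $\gamma$, driven by a wave of frequency $\omega$ and strength $F_0$, responds with amplitude

$$
A(\omega) = \frac{F_0/m}{\sqrt{\left(\omega_0^2 - \omega^2\right)^2 + \gamma^2 \omega^2}}
$$

which peaks sharply as $\omega$ approaches $\omega_0$. The strong response depends on both the source ($F_0$, $\omega$) and the object’s own properties ($\omega_0$), not on any frequency imposed from outside: exactly the relationship the design analogy asks us to reproduce.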

Let me list a few of those priorities:

• Always build platforms and spaces to the smallest scale needed.

• Always pay attention to the spaces-in-between (or the firebreaks).

• Maximize variety and experimentation (the other side of the minimum scale principle).

• Trust communities of various sorts as the best context for platform use and development.

• And, because we are freed now from the business goal of scalability, don’t maximize people’s time spent on any one platform. Instead, do everything to encourage more connections between online spaces and between offline and online spaces—connections whose intensity actual communities have some chance of managing.

Do all this, and we might have a chance of fulfilling political scientist Elinor Ostrom’s principles for protecting the commons, which included maximal decisional autonomy at the local scale and protecting the boundaries between groups and spaces. This might also yield a modest but workable approach to the other spatial possibilities for redesigning democratic practice that digital technologies really do enable. For example, why shouldn’t populations forced by climate change to migrate have a say in where they can move and under what conditions? Why should it only be the receiving states that get a say? We need to find some technological solutions.

Fail to rethink the design of platforms, and I fear we’ll forever be condemned to mop up the mess that commercial platforms have generated in their pursuit of a societal challenge that they should never have been allowed to mess with in the first place.

A publication of the Ash Center for Democratic Governance and Innovation

 

Twenty Years of Media and Communications Research: From Media Studies to Media Ecology by Nick Couldry

 

LSE’s Department of Media and Communications celebrates its 20th anniversary this year, and is marking the occasion with the upcoming Media Futures Conference on 15-16 June. Here Nick Couldry, Professor of Media, Communications and Social Theory at the LSE, reflects on how the study of media and communications has evolved in the 20 years since the Department was founded.

Look back to conference programs of 2003, the year in which LSE’s Media and Communications department was founded, and the sense of discontinuity is strong. So many topics have since faded (telecentres, digital divide, reality TV). Priorities have changed, and the huge interdisciplinarity of perspectives that we now take for granted was largely absent.

Yet important things have endured. Contrary to breathless predictions, television did not die, even if prime time has shrunk and is now distributed across multiple streaming channels. Nor did radio die (quite the contrary) or newspapers (yet). Habits are, after all, more enduring than hype or futurology.

Nor have battles for the status of the media and communications field been entirely resolved. Although few people in rich societies believe that media in the extended sense (to include everything we do with our smartphone and other computing devices) are without implications for the feel and structure of daily life, that has not stopped established disciplines from continuing to operate as if media do not matter: from economics to political theory, from international relations to social theory, media and communications tend at best to be an afterthought, with notable exceptions such as the work of Manuel Castells and Judith Butler. Worse, there is still work (I won’t give examples) that presumes to talk about media as something in common experience without any attention to the extensive literatures in media and communications research.

Inside the field, some battles also lie unresolved. I remember the urgency with which arguments for the importance of religion in the field were being proposed at the International Communication Association in 2003; but twenty years later, there is still no ICA division or interest group focussed on religion, such is the field’s default secularism.

But those enduring patterns mask more fundamental change.

When the LSE Department was founded in September 2003, the shock of the World Trade Center attacks two years earlier still reverberated. The need for some way of opening up ethical debate about the role of media as weapon, and the potential of media as a space for recognising others who are silenced in global media agendas, were high on our list of concerns. Roger Silverstone’s book Media and Morality was one fruit of that, as, later, was Lilie Chouliaraki’s work at LSE (The Ironic Spectator) and, in another way, my own.

In the years that followed, that early recognition of the need for a media ethics going beyond journalistic codes grew into a veritable paradigm shift, with ever more books over the past decade or so foregrounding ethics as their core question, controversially or otherwise. Think of such different books as Sherry Turkle’s Alone Together, Robin Mansell’s Imagining the Internet, or Mark Deuze’s Media Life (a rare positive take). Indeed, one could argue that this expanding sense that something is ethically troubling about the media landscape was what began to connect communications research on media and the internet to previously distant work in legal theory, such as Julie Cohen’s synoptic book Configuring the Networked Self. The sense of a common cross-disciplinary topic about the nature of our media and information environment had emerged by the early years of the 2010s.

If one way of describing this shift, seen from the perspective of media studies’ older agendas, was ‘ethics’, a newer way of formulating it was in terms of ecology. While the topic of media ecology (and even sound ecology) had a longer history in North America, it absolutely was not a familiar way of framing what there was to talk about in media studies two decades back. But Roger Silverstone’s statement in the preface to Media and Morality that ‘global societies’ are facing an ‘environmental’ ‘crisis in the world of communication’ (2007: vi) sensed the direction of travel, even if for a media landscape that looked very different from today’s.

Just a few years after Silverstone, Julie Cohen’s call for a ‘cultural environmentalism’ that can help us see more clearly the problems with our growing dependence on computing infrastructures and platforms that continuously surveil us seemed both original and absolutely inevitable. By the start of the last decade, our sense of the scale and nature of what there was to be discussed about media had changed profoundly.

For one thing, it no longer made any sense to talk about media without also talking about the internet and the whole matrix of information and communication technologies in which legacy media like television, radio and the press are now embedded.

For another, old battles between political economy and cultural approaches to formulating the key questions about media now seemed quaint, because it was a profoundly changed political economy that, in full view, was driving the changes in how media culture feels.

The core thing that had changed was, of course, the rise of social media platforms and indeed the rise of platform-focussed capitalism generally: not as one phenomenon among others, but as a total transformation of the economic, social and technological space in which media survive or die, grow or wane. Our smartphones are portable ecologies of media inputs, but, more than that, they give access to, indeed demand our attention to, a transformed ecology of social communication.

Ten years on from the beginning of this ecological (and ethical) turn in media, communications and information research, it is versions of ecological thinking that are opening up new avenues for exploration in our field. To give some diverse examples: Amanda Lagerkvist’s work on Existential Media; Thomas Poell, David Nieborg and Brooke Duffy’s work on Platforms and Cultural Production; Shakuntala Banaji and Ramnath Bhat’s work on Social Media and Hate; the work by Deen Freelon and other political communications researchers on disinformation; and Sarah Banet-Weiser’s work on popular misogyny.

Those are just a few examples among many of exciting new directions of research. But they show that, at this stage in our field’s history, the starting-point has become thoroughly ecological, in a way that it was only subliminally twenty years ago. Our field has come a long way and yet, in a sense, it has taken just a small step towards addressing the growing challenges posed by capitalist communication platforms for our chances of living well together in the future.

This post represents the views of the author and not the position of the Media@LSE blog nor of the London School of Economics and Political Science.

 

It’s time to stop trusting Facebook to engineer our social world by Nick Couldry

 

As a recent US Senate hearing is told that Facebook prioritises its profits over safety online, Nick Couldry, Professor of Media, Communications and Social Theory at the London School of Economics and Faculty Associate, Berkman Klein Center for Internet and Society, Harvard University, argues that public scrutiny and a tighter regulatory framework are required to keep the social media giant in check and limit the social harms that its business model perpetuates.

The world’s, and in particular the USA’s, reckless experiment with its social and political fabric has reached a decision-point. Almost a year ago Dipayan Ghosh of the Harvard Kennedy School of Government and I argued that the business models of Facebook and other Big Tech corporations unwittingly aligned them with the goals of bad social actors, and needed an urgent reset. Why? Because they prioritize platform traffic and ad revenue over and above any social costs.

Yet, in spite of a damning report by the US House of Representatives Judiciary Committee last October and multiple lawsuits and regulatory challenges in the US and Europe, the world is no nearer a solution. In the case of Facebook, whistleblower Frances Haugen’s shocking Senate testimony last week confirmed exactly what we argued: that this large US-based corporation is “buying its profits with our safety”, because it consistently prioritizes its business model over correcting the significant social harms it knows it causes.

As Robert Reich notes, it would be naïve to believe that accountability will follow the public outcry. That’s not how the US works anymore, nor indeed how many other democracies now work. Meanwhile, Mark Zuckerberg’s response to the new revelations rang hollow. Of course, he is right that levels and forms of political polarization vary across the countries where Facebook is used. But no one ever claimed that Facebook caused the forces of political polarization, which inevitably are variable, only that it recklessly amplified them for its own benefit.

Nor, as Zuckerberg rightly protests, does Facebook “set out to build products that make people angry or depressed”: why would they? But the charge is more specific: that Facebook configured its products to maximize the measurable “engagement” that drives its advertising profits. Facebook’s 2018 newsfeed algorithm adjustment, cited by Haugen, was a key example. Yet we know from independent research that falsehoods travel faster, more deeply and more widely than truths. In other words, falsehoods generate more “engagement”. So, optimizing for “engagement” automatically optimizes for falsehoods too.
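The mechanism can be made concrete with a toy simulation (entirely illustrative: the item pool, engagement rates, and ranking rule below are invented, and this is not Facebook’s actual system). If false items reliably attract more engagement, a feed that sorts purely by predicted engagement will over-represent them at the top:

```python
import random

# Toy content pool: each item has a truth label and a typical engagement
# rate. The gap between the rates reflects the research finding that
# falsehoods spread faster than truths; the exact numbers are invented.
random.seed(1)
pool = (
    [{"kind": "true", "rate": random.uniform(0.01, 0.05)} for _ in range(90)]
    + [{"kind": "false", "rate": random.uniform(0.04, 0.10)} for _ in range(10)]
)

# A ranker that optimizes only for predicted engagement, as the business
# model rewards, with no regard to truth.
feed = sorted(pool, key=lambda item: item["rate"], reverse=True)

top = feed[:10]
share_false = sum(item["kind"] == "false" for item in top) / len(top)
print(f"Falsehoods: 10% of the pool, but {share_false:.0%} of the top 10 slots.")
```

No one at the toy platform chose the falsehoods; the ranking rule did.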

It is not good enough for Facebook now, under huge pressure, to claim credit for the “reforms” and “research” it conducted in earlier attempts to mollify an increasingly hostile public. Facebook can say, as Mark Zuckerberg just did, that “when it comes to young people’s health or well-being, every negative experience matters”, but its business model says otherwise, and on a planetary scale. It is time for that business model to be examined in the harsh light of day.

The problem with the underlying business model

In a report published a year ago, Dipayan Ghosh and I called this model the “business internet”. Its core dynamics are by no means unique to Facebook, but let’s concentrate there. The business internet is what results when the vast space of online interaction becomes managed principally for profit. It has three sides: data collection on the user to generate behavioral profiles; sophisticated algorithms that curate the content targeted at each user; and the encouragement of engaging – even addictive – content on platforms that holds the user’s attention to the exclusion of rivals. A business model such as Facebook’s is designed to maximize the profitable flow of content across its platforms.
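Those three sides form a single loop, which can be sketched schematically as below (every function name and data structure here is a hypothetical illustration of the model as just described, not any platform’s real code):

```python
# Schematic of the three-sided "business internet" model described above.
# All names and data structures are hypothetical illustrations.

def collect_behavior(user, profile):
    """Side 1: every click, pause, and share updates a behavioral profile."""
    for signal, weight in user["recent_actions"].items():
        profile[signal] = profile.get(signal, 0.0) + weight
    return profile

def curate_feed(profile, inventory):
    """Side 2: algorithms target each user with the content their
    profile predicts they will engage with most."""
    def score(item):
        return sum(profile.get(tag, 0.0) for tag in item["tags"])
    return sorted(inventory, key=score, reverse=True)

def maximize_attention(feed):
    """Side 3: surface the most engaging items first, holding attention
    (and ad impressions) to the exclusion of rivals."""
    return feed[:5]  # the top of an effectively endless scroll

# One turn of the loop: the user's reaction to today's feed becomes
# tomorrow's profile data, tightening the cycle.
user = {"recent_actions": {"outrage": 3.0, "sports": 1.0}}
inventory = [
    {"title": "calm explainer", "tags": ["sports"]},
    {"title": "incendiary rumor", "tags": ["outrage"]},
]
profile = collect_behavior(user, {})
print([item["title"] for item in maximize_attention(curate_feed(profile, inventory))])
```

Run once, the loop already prefers the “incendiary rumor”; run repeatedly, each pass feeds the next, which is why the business model and the harms cannot be cleanly separated.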

If this sounds fine on the face of it, remember that the model treats all content producers and content the same, regardless of their moral worth. So, as Facebook’s engineers focus on maximizing content traffic by whatever means, disinformation operators – wherever they are, provided they want to maximize their traffic – find their goals magically aligned with those of Facebook. All they have to do is circulate more falsehoods.

Facebook will no doubt say it is doing what it can to fix those falsehoods: many platforms have tried the same, even at the cost of damping down the traffic that is their lifeblood. But the problem is the underlying business model, not the remedial measures, even if (which many doubt) they are well-intentioned. It is the business model that determines it will never be in Facebook’s interests to control adequately the toxic social and political content that flows across its platforms.


The scale of the problem is staggering. As recent Wall Street Journal articles detail, Facebook’s business model (and its obsession with controlling short-term PR costs) pushes it to connive at celebrities posting content that even its own rules normally ban, to discount the impacts of Instagram’s image culture on teen girls’ self-esteem, to misunderstand the consequences for political information when it tweaks its newsfeed algorithm, and to fail in its own drive to encourage Covid vaccine take-up.

Some Facebook staff seem to believe that the Facebook information machine has become too large to control.

Yet even so, we can easily underestimate the scale of the problem. We may dub Instagram the ‘online equivalent of the high-school cafeteria’, as the Wall Street Journal does, but what school cafeteria ever came with a continuously updated and universally accessible archive of everything anyone said there? The problem is that societies have delegated to Facebook and other Big Tech companies the right to reengineer how social interaction operates – in accordance with their own economic interests and without restrictions on scale or depth. And now we are counting the cost.

A turning point?

But thanks to Frances Haugen, through her Senate testimony and her role in the Wall Street Journal revelations, society’s decision-point has become startlingly clear. Regulators and governments, civil society and individual citizens could consign the problem to the too-hard-to-solve pile, accept Facebook will never fully fix it, and allow the residual toxic waste (inevitable by-product of Facebook’s production process) to do whatever harm it can to society’s and democracy’s fabric. Or key actors in various nations could decide that the time for coordinated action has come.


Assuming things proceed down the latter, less passive path, three things require urgent action.

  1. Facebook should be compelled by regulators and governments to reveal the full workings of its business model, and everything it knows about their consequences for social and political life. Faced with clear evidence of major social pollution, the public cannot be expected to rely on the self-motivated revelations of Facebook’s management and their engineers working under the hood.

  2. Based on the results of that fuller information, regulators should consider the means they have to require fundamental change in that business model, on the basis that its toxicity is endemic and not merely accidental. If they currently lack adequate means to intervene, regulators should demand extended powers.

  3. Equally urgent action is needed to reduce the scale on which Facebook is able to engineer social life, and so wreak havoc according to its whim. At the very least, the demerger of WhatsApp and Instagram must be put on the table by the US FTC. But a wider debate is also needed about whether societies really need platforms on the scale of Facebook to provide the connections on which social life undoubtedly depends. The time has passed when citizens should accept being lectured by Mark Zuckerberg on why they need Facebook to “stay in touch”. More comprehensive breakup proposals may follow from that debate. Meanwhile, analogous versions of the “business internet”, in Google and elsewhere, also need to be examined closely for their social externalities.

 

Some fear that the medicine of regulatory reform will be worse than the disease. As if the poisoning of democratic debate, the corrupting of public health knowledge in a global pandemic, and the corrosion of young people’s self-esteem, to name just some of the harms, were minor issues that could be hedged.

Something like these risks was noted at the beginnings of the computer age, when in 1948 one of its founders, Norbert Wiener, argued that with “the modern ultra-rapid computing machine . . . we were in the presence of [a] social potentiality of unheard-of importance for good and for evil”.

Nearly 75 years later, Wiener’s predictions are starting to be realized in plain sight. Are we really prepared to go on turning a blind eye?

 

Regulation of online platforms needs a complete reset by Nick Couldry

 

How to regulate social media companies and other large digital platforms is a pressing question for governments around the world. In this post, Nick Couldry, Professor of Media, Communications and Social Theory at the LSE, and Dipayan Ghosh, Co-director of the Digital Platforms & Democracy Project at the Harvard Kennedy School, argue that a much broader approach is required to understand what they call the “consumer internet” business model of today’s large digital platforms, and stress the need for a “new digital realignment”.

Two decades ago, the US, UK and many other societies, without exactly intending to, delegated to digital platforms the redesign of the spaces where human beings meet, ignoring the possible social consequences. The result today is a media ecosystem where it is business models, like those of Facebook and Google, that shape how our ideas and information circulate.

The results have often been disastrous. Big Tech has been forced to firefight, damping down the circulation of incendiary messages on WhatsApp, constraining the spread of false claims about vaccines, and confronting the plethora of misinformation about the global pandemic, particularly in the US. And yet, in the wake of one of the most divisive elections in US history, Google was found last week profiting from placing ads on sites such as Gateway Pundit that have spread false information about election turnout.

Something is deeply out of alignment here, but contemporary societies haven’t quite put their finger on what it is.

Yes, politicians are starting to take notice of the problem. In October alone, the US saw a report from the Democrat-led House antitrust subcommittee on Google, Amazon, Facebook and Apple’s excessive monopoly power, and the Justice Department’s lawsuit against Google. Meanwhile in Europe, politicians and competition authorities signaled a tougher stand against Big Tech platforms.

But these interventions do not go nearly far enough. The reason is simple: they remain locked within a narrow antitrust model of how digital platforms should be regulated. But this essentially economic framework cannot deliver solutions to a problem it was not designed to solve: the negative social side-effects of platforms’ basic business model. We need a much broader approach.

No one intended things to work out this way. But combine the embedding of connected computer devices in daily life with a few hugely successful platforms and the internet’s early 1990s shift from a public to a commercial model, and you have the basic recipe for today’s problems. Only one further ingredient was needed – the business model of today’s large digital platforms – and bad consequences for public life predictably flowed.

In a new report, we call that business model the “consumer internet”: it is the outcome when the vast space of online interaction becomes managed principally for profit. The model has three sides: data collection on the user to generate behavioral profiles; sophisticated algorithms that curate the content targeted at each user; and the encouragement of engaging – even addictive – content on platforms to hold the user’s attention to the exclusion of rivals. The model is designed to do only one thing: maximize the profitable flow of content across platforms. And it applies in various forms across the industry – not just at Facebook, where one of us once worked.

The problem is not that platforms make a profit, but that they reconfigure the flows of social information to suit a business model which basically treats all content suppliers the same. When platform operators seek to maximize content traffic by whatever means, and disinformation merchants too just want to maximize traffic, their goals can easily interact in a dangerous spiral.

And here is the paradox: whatever the corporations’ pro-social claims, the goals of dominant digital platforms and bad social actors are in deep and largely hidden alignment. There is no social world without some bad actors; our misfortune is to inhabit a world that directly incentivizes their proliferation.

The risks of a computer-based infrastructure of social connection were predicted as long ago as 1948 by the founder of cybernetics, Norbert Wiener, who wrote: “It has long been clear to me that [with] the modern ultra-rapid computing machine . . . we were here in the presence of [a] social potentiality of unheard-of importance for good and for evil”. Wiener’s unease was ignored in the headlong rush to develop the internet commercially, but it is not too late, even now, to heed his warning.

Societies through their regulators and lawmakers must renegotiate the balance of power between the corporate platform and the consumer. A new digital realignment is needed. But how would this work?

First, we need radical reform of the market behind digital media platforms, enabling consumers to exercise real choice about how data that affects them is gathered, processed, and used, including a real choice to use platforms without data being gathered. Locking in this last point would challenge the privacy-undermining impacts of the platforms’ business model at its heart.

Second, much greater transparency must be imposed on platform corporations, uncovering not just their detailed operations but the so far uncontrolled social harms from which they profit. Platforms should be required to uncover their business models’ full workings, revealing exactly where they create advantages for bad social actors, and how they gain from this. Platforms must be required to take urgent remedial action against social harms that they discover or that are reported to them, for example algorithmic discrimination, data-driven propaganda, or viral hate speech. And they should be compelled to stop forms of data collection that corrode broader social values.

Achieving this will involve reform of legal frameworks that effectively exempt platforms from liability for what passes across them, whether via reform of Section 230 of the Communications Decency Act in the US or via the proposed European Digital Services Act in the EU. Failing such remedial action in all key jurisdictions, more drastic measures against the social damage caused by the consumer internet’s business model, such as platform break-up, must be considered.

Without such radical reforms, societies will have no chance of salvaging a citizens’ internet from the wreckage of today’s consumer internet. Such reforms are as relevant for Europe as for North America. Yes, regulation is more advanced in the former, yet the need for regulators to confront not only platforms’ economic harms but also their social harms remains unmet.

A lot is at stake. After a presidential election whose build-up was disfigured by toxic content on platforms large and small, the US has an incoming government potentially interested in platform reform, yet the danger of extreme right-wing politics spreading virally online is anything but resolved. Nor does most of Europe want its politics to go down the US’s path.  Revisiting the apparently dry, technical details of platform regulation could today hardly be more urgent.

This article represents the views of the authors and not the position of the Media@LSE blog, nor of the London School of Economics and Political Science.