International Workshop on Reimagining Democracy Essay Series
Over the past three decades, humanity has made a fundamental error in spatial design—an error that makes it vanishingly unlikely that we can create positive conversation spaces for democracy. I’ve been trying to think about how we might correct that error; in fact, I have just completed a book on the topic. My book is written from the perspective of a social theorist who thinks mainly about data institutions and social order.
The error’s outlines will be familiar, even if how I describe it may not be. In essence, we’ve (inadvertently) allowed businesses to generate what I call “the space of the world”: the space of (almost) all possible spaces for social interaction and therefore for democratic practice. That didn’t happen because we asked businesses to design the spaces where democracy plays out; no one ever planned that. It happened because we allowed large corporations to design shadow spaces (we call them platforms and apps) with two key properties: First, these spaces can be controlled, indeed exploited, by these large corporations for their own ends, mainly profit. Second, these spaces mimic aspects of daily life sufficiently well that they bolt on to our actual social world under conditions largely dictated by these corporations (above all, the condition that we are incentivized to use them because everyone else is).
By allowing large corporations to promote these shadow spaces, humanity made three fundamental design mistakes that are now constraining our social world and our democratic futures. Our first mistake was allowing the creation of a space of spaces of unlimited scale to which everyone could potentially connect, with no regard for the consequences of permitting unlimited feedback loops across all our activities within that space. The toxic results have been seen at every scale.
Second, we never even considered the possibility of designing and controlling the spaces in between our platform spaces. We neglected that possibility completely, allowing businesses simply to optimize engagement and the profit that flows from it, whatever the scale.
Third mistake: We let ourselves be driven by the value of unlimited scalability—exactly the wrong value for social and political design. Indeed, it is a value orthogonal to how political theory has thought for millennia about the conditions under which democracy—or any nonviolent political life—is possible. Neither of the two main traditions of Western political theory—the Aristotelian idea of politics as a natural human activity on a relatively small scale or the Hobbesian idea of a social contract for societal security—ever imagined that politics could safely unfold on the scale of the planet, or even in smaller spaces of continuous interaction and unlimited playback and feedback.
If you’d asked anyone 30 years ago (political theorist or not) whether it made sense to build a space like the one that has emerged—linking together all possible social and political spaces and, what’s more, incentivizing feedback loops of engagement across it—they would have said, “No, don’t do it.” But we did it, and we need to actively think about what it might mean to dismantle the space we’ve built—or at least override it with different values and different design thinking.
We can’t erase the idea of platforms, let alone the internet, as a space of connection. Instead, we need a very different approach to rebuilding our space of the world. It’s a problem that we unwisely got into, but now we have no choice but to invent better solutions—solutions that are less risky for democracy. To start, we need to think about platform space in a completely different way: securing the “spaces-in-between” (the firebreaks, if you like) that limit flow and enable “friction,” as legal scholar Ellen Goodman puts it, and reducing some of the risks of toxic feedback loops (we can’t solve them all). Whatever their current limitations, I believe that federated platforms, such as Mastodon, point in the right direction.
Second, because we will have started to build spaces-in-between, we should trust more in the new possibilities those firebreaks protect: the possibility of discussion in spaces whose values and purpose align more with specific communities than with abstract business logics. Put in political terms, this means trusting more in subsidiarity and rejecting scalability as a guiding value.
Third, this opens up the possibility of giving a greater design role to existing communities as hosts of platform spaces, and to government and civil society, not as hosts (the risk of censorship is too great) but as general sources of subsidy for the infrastructure on which healthy spaces of social encounter and civic discussion depend. This aligns with what communications scholar Ethan Zuckerman has called an “intentionally digital public infrastructure.”
All this means thinking about design differently by moving away from the mixture of narrow economic logic and a roll of the dice that has characterized how today’s space of the world has unfolded. But that’s hard without a guiding principle. To give us one, I want to return to the principle of resonance that I tentatively introduced at last year’s International Workshop on Reimagining Democracy (IWORD). Then, I talked about it in perceptual terms: as the possibility of sharing with others the perception that, even if you don’t entirely trust each other or the government, you are all in various ways responding to broadly the same set of problems within broadly the same horizon of possibility.
What I hadn’t realized then is that the design choice that makes this possible is even more important than the shared perception itself. It’s that alternative design approach to how we configure large social space for which I now want to reserve the term “resonance.” In the physical world, resonance occurs when sound waves at multiple frequencies travel across a space and the objects within it start vibrating at whichever of those frequencies matches their own natural frequency. That resonating doesn’t happen because a frequency is imposed on those objects or because a set of external priorities forces a particular frequency onto the space. It results from the interaction between the sound source and the properties of the objects themselves; this positive, non-disruptive outcome occurs without any external attempt to optimize for one solution. Yet while resonance builds from the natural frequencies of objects, today’s social media landscape seems to be built against our natural frequencies, undermining whatever helps democracy and our common interests.
Last year at IWORD, science fiction writer Ted Chiang asked, “How do we stop AI from being another McKinsey?” In other words, why are we locked into seeing AI only in terms of what it can do for capitalism? The same point can be made about the design of digital spaces and platforms: Why think about them only within a framework driven by profit extraction? How is that useful for democracy? It’s not a rejection of markets to suggest that, in designing the spaces in which we live, we should be oriented by broader principles of what’s good for life in general, for democracy, and for making better collective decisions. That yields different priorities.
Let me list a few:
• Always build platforms and spaces to the smallest scale needed.
• Always pay attention to the spaces-in-between (or the firebreaks).
• Maximize variety and experimentation (the other side of the minimum scale principle).
• Trust communities of various sorts as the best context for platform use and development.
• And, because we are now freed from the business goal of scalability, don’t maximize people’s time spent on any one platform. Instead, do everything to encourage more connections between online spaces and between offline and online spaces—connections whose intensity actual communities have some chance of managing.
Do all this, and we might have a chance of fulfilling political scientist Elinor Ostrom’s principles for protecting the commons, which included maximal decisional autonomy at the local scale and the protection of boundaries between groups and spaces. This might also yield a modest but workable approach to the other spatial possibilities for redesigning democratic practice that digital technologies really do enable. For example, why shouldn’t populations forced by climate change to migrate have a say in where they can move and under what conditions? Why should it only be the receiving states that get a say? We need to find technological solutions that make such participation possible.
Fail to rethink the design of platforms, and I fear we’ll forever be condemned to mop up the mess that commercial platforms have generated around a societal challenge that they should never have been allowed to mess with in the first place.
A publication of the Ash Center for Democratic Governance and Innovation