Mediating Consent

This entry is part 4 of 4 in the series The Feed

When theologian Martin Luther debuted his Ninety-five Theses in 16th-century Germany, he triggered a religious Reformation — and also a media revolution.

1630 map of the Maluku Archipelago (Moluccas, or Spice Islands)

The printing press, invented roughly 70 years before the Ninety-five Theses, extended Luther’s reach from the church door in Wittenberg to the entirety of Europe. His criticisms of the Church were arguably the first use of mass media: critiques of Catholic doctrine in pithy, irreverent pamphlets, produced at scale and widely distributed. As a result, Luther ushered in not only Protestantism, but an entirely new media landscape: one in which traditional gatekeepers — the church, wealthy nobles — no longer held a monopoly on the information that reached the people. The Catholic Church responded, of course, with pamphlets of its own — defending Catholic doctrine, refuting the new heretics, fighting the battle for hearts, minds, and Truth.

The battle for control of narratives persists today, though the speed and scale have changed.

Editor’s note: This essay is based on Renee’s Refactor Camp 2019 talk, From Pseudoevents to Pseudorealities.

The most prominent example at the moment is the ongoing impeachment hearings: despite listening to identical witness testimony each day, media reports vary wildly along partisan (rather than religious) lines. To the New York Times and its audiences, former ambassador Marie Yovanovitch is a courageous truth-teller; to Fox News, Yovanovitch is a partisan and the very act of testifying is part of an illegitimate coup. One side says the president is guilty; the other says the president is not only innocent, but virtuous. Meanwhile, a cacophony of independent media personalities — some of them little more than ideologues with Twitter accounts — contribute their own opinions, rancor, and, at times, blatant disinformation, including, notably, a conspiracy theory that Ukraine, not Russia, hacked the DNC in 2016.

Luther and the Vatican also clashed over the Truth, but today’s information ecosystem dynamics differ in scale, scope, and speed. Unlike the residents of Luther’s Wittenberg, readers today aren’t just contending with divergent pamphlets. They’re contending with a deluge of hyperpartisan content, tailored precisely to preexisting beliefs, compounded by nearly instantaneous viral social media transmission and a news cycle that lasts less than a day. This portends a societal transformation: our information ecosystem no longer assists us in reaching consensus. In fact, it structurally discourages it, and instead facilitates a dissensus of bespoke pseudo-realities.

Twenty years ago, the internet began to transform the flows of information and the means of creation. As the cost to publish dropped towards zero, content creation was democratized and blogs proliferated — an explosion of diverse voices and creative energy. The era of effortless video creation delivered YouTube and its vloggers, giving people the power to develop channels and grow audiences to sizes that rivaled network TV shows; in 2009, a teenage comedian’s channel became the first to hit 1 million subscribers; in 2019, there are over 8,000 channels of that size, and the numbers are growing. The realms of entertainment, news, information, and commentary are being blended together in a new personalized, analytically optimized consumption space.

In parallel with this evolution in the information ecosystem, the Internet redefined the topology of human connections, divorcing them from geographic proximity and letting thousands of communities proliferate. A subset of companies — the social networks — made this transformation their business. They built products to enable people to find others like them, and then fostered those connections in persistent, instrumented environments. This resulted in an explosion of factions and fandoms – close-knit communities bonded by a particular interest or belief.   

The unexpected consequence of this information ecosystem transformation and social restructuring was a Cambrian explosion of bubble realities — communities that operate with their own norms, media, trusted authorities, and frameworks of facts — all at war with each other. The uniformity with which we as a society perceive reality began to decrease. Today, no institution retains the legitimacy needed to bridge these gaps and restore our capacity for achieving consensus, and neither are there credible technological means with which to create and preserve harmony within a pluralist dissensus.

This tension is unsustainable for democratic society, which leaves us with two choices: either we rebuild mechanisms for achieving consensus, or we develop new technologies for operating sustainably within persistent dissensus. The tension itself is not new — the history of media is in part the history of managing it — but the severity with which it manifests today is.

Manufacturing Consent 1.0

“All knowledge, including the most basic, taken-for-granted common sense knowledge of everyday reality, is derived from and maintained by social interactions,” wrote Peter Berger and Thomas Luckmann in their 1966 work “The Social Construction of Reality”. 

Beyond matters strictly within the realm of material reality, fully determined by the laws of physics and for which consensus opinion is irrelevant — a tornado is destructive regardless of whether the community believes in it or not — reality has always been, to an extent, a matter of social consensus. Achieving consensus requires that people communicate, discuss facts, debate actions, and reconcile – or compromise on – differences of opinion. 

Historically, media and technocratic institutions played a significant role in mediating this process within liberal democracies. The institutions served as persistent repositories of knowledge and expertise, responsible for interrogating facts. The media’s role was to inform the people, conveying messages from — as well as critiquing — institutions and authority figures. In its idealized form, this interplay was intended to assist the public in making sense of topics about which they had no first-hand expertise. Although the phrase was popularized by Noam Chomsky decades later, it was journalist Walter Lippmann who first described the process, which he called the “manufacture of consent”.

A few short decades ago in the United States, the process of manufacturing consent was largely monopolized by a handful of networks and newspapers. The government enjoyed a high degree of trust. Prominent information theorists of the day were remarkably transparent about what they saw as the ideal role of each in facilitating societal consensus: experts within technocratic institutions had a substantially higher-than-average degree of knowledge, so the media should convey their informed opinions to the masses. Crystallizing public opinion through the strategic presentation of persuasive information was seen as critical to democratic order; “propaganda” was not yet fully a pejorative. Lippmann, in his 1922 book “Public Opinion,” described the natural state of individuals as operating within “pseudo-environments” — personal, subjective realities, in which people (quite reasonably) spent their time focused primarily on their own day-to-day issues and needs. People “live in the same world, but they think and feel in different ones,” he wrote. In light of this, a well-functioning society needed experts and institutions who spent their time researching and determining a course of action for a given issue. The experts’ preferred course of action was presented to those in power, who could then use media to communicate its value to the people. This persuasive communication would bring them into the broader shared reality — and generate the social cohesion — necessary for democratic consensus.

Although romanticized today, this centrally controlled model for facilitating consensus is, of course, deeply problematic if institutions are putting out deliberately false narratives or the press is suppressing inconvenient truths. And, as it turns out, that happened on numerous occasions — including during the Vietnam War. Public information related to prior wars had been largely managed by government spokespersons, propaganda agencies, and a handful of consolidated media entities, but the Vietnam War happened during an information ecosystem shift: the mainstream adoption of television. That new affordance of technology meant that people were suddenly able to observe a conflict between official government statements and press conference reports, and what they saw on their televisions. Journalists did cover some of these discrepancies — they did not operate as a full-on rubber stamp for the official government narrative — but, according to journalism historian Daniel Hallin, author of The Uncensored War, they rarely challenged government narratives or questioned motives.

Rise of the Fifth Estate

Anger over lies, government cover-ups, and perceived media complicity during the Vietnam War sparked the underground press movement: small independent regional newspapers publishing left-leaning, counterculture, anti-war content (and, occasionally, hoaxes) popped up to tell the truth about the war. A handful of these publications eschewed editors, to avoid any gatekeeper influence on the words of the people. Several of these regional entities networked, creating an Underground Press Syndicate that allowed content republishing to assist the movement in spreading messages. One member newspaper, created in Detroit in 1965, was called the Fifth Estate. That name came to be used to describe independent media that emerged to hold power accountable — including the powerful established media. The underground press papers themselves, interestingly, largely dissolved after the end of the war.

A few decades later, the Internet arrived, and the Fifth Estate exploded. In 2004, Merriam-Webster declared “blog” its word of the year. Free and easy creation tools meant that anyone could create a gatekeeper-free website or blog to share their thoughts, or become a citizen journalist at their own independent media property. Not everyone would receive attention, of course — there was no real distribution infrastructure besides emailing posts or cross-posting to friends’ blogs, so amassing an audience was challenging. But the explosion in adoption led to a slew of articles heralding the era of citizen media. In 2006, a BBC op-ed, “Why We Are All Journalists Now,” observed: “The blossoming of citizen journalism stands as one of the Internet’s most exciting developments. With millions of bloggers, tens of millions of Internet posters, and hundreds of millions of readers, online news sources have radically reshaped the way we access our daily news.” There was widespread optimism about the upcoming era of accessible knowledge and proliferation of voices.

In 2006, “truthiness” was chosen as the word of the year, for the first time via online voting, in the midst of another war. Truthiness was a construct that managed to capture the increasingly prevalent idea that a gut feeling — and, specifically, one’s own gut feeling — was not only an acceptable substitute for objective facts or evidence, but that it should triumph in the event of conflict. As Stephen Colbert put it, “It’s not only that I feel it to be true, but that I feel it to be true.” Truthiness was not caused by social media, of course; it was first applied to lampoon the politicians who made decisions despite overwhelming counter-evidence, the technocratic institutions that quietly fell in line, and the old media channels that uncritically covered the situation. As more and more false narratives and examples of truthiness were exposed — the lie about WMDs in Iraq was a particularly notable example — trust in the legacy arbiters of information and manufacturers of consent continued to decline. New digital-first entrants to the media market capitalized on the decline in trust, declaring themselves to be the true independent voices who were holding not only government but mainstream media itself accountable.

The proliferation of voices in the early days of the blogosphere ended the era of centralized media. Zero-cost publishing accelerated the decline of gatekeepers, with all of the resulting pros and cons; misinformation and conspiratorial blogs of course existed during this time — the 9/11 truthers had started their own web portal — but the overwhelming majority had virtually no readership. In fact, one of the key challenges the citizen journalists and writers faced was distribution. The new information environment was still largely decentralized, and it was hard to reach large numbers of people. Aggregator sites emerged in an attempt to solve the challenges of curation, and cross-posting and guest blogging attempted to replicate some of the strategies of the old Underground Press Syndicate, but discovering voices was still a challenge. Distribution at the scale of broadcast media was impossible. Virality and mass reach were not yet within anyone’s grasp.

Platform Consensus

Meanwhile, the social web was growing. At first, people turned to social networks primarily to connect with existing friends, find new ones, and date. The feed, as the chronological stream of posts came to be called, was composed of stream-of-consciousness thoughts and photos from friends. But as the platforms grew, they amassed increasingly large standing audiences, and increasingly large data sets about the behavior and preferences of those audiences. And in 2009, they began to instrument the web. Analyst Jeremiah Owyang described the progression of social as a kind of colonization — the platforms proliferating outside of their own domains, taking over the open web by instrumenting every website with analytically-rich ‘Like’ and ‘Share’ buttons. He and others predicted the impact of this on corporate marketing: “People will lean on the opinions and experiences of their trusted network” for everything from restaurants to service providers to product recommendations. Forrester made charts about the phenomenon.

As it turned out, that social-first, trusted-community experience would come to include news and media as well. For the first time in the history of information ecosystems, attention brokers could design instrumented and personalized media experiences, targeted to ensure that users remained on their platform and didn’t “change the channel”. They could track precisely what kept people on site and serve them more of it — and, gradually, could suggest things they were statistically likely to be interested in based on similarity to other users. Curation and recommendation processes were automated. Users no longer had to proactively search for communities; communities simply appeared as suggestions. The algorithms did not have any sense of what they were recommending or curating, though, so conspiratorial and polarizing Groups were served up alongside run-of-the-mill interest fandoms with regularity — come for the natural parenting, stay for the chemtrails. Regardless, the Fifth Estate creators suddenly had standing audiences, targeting capability, and mass-broadcast mechanisms at their disposal. Their audiences served as both content recipients and distributors, as virality features put the power to amplify posts in users’ hands.

Despite the mirage of being the new public square, the social internet was privately run, centrally consolidated, and its owners were responsible for maximizing shareholder value. Within a few years, the decision to prioritize engagement and time-on-site above all else led to many of the same messes that media scholars had pointed out in the wake of the last significant information ecosystem shift: from radio to television. Daniel Boorstin, writing The Image in the 1960s, described how a need to fill the news cycle on television transformed news coverage, focusing it not on actual news but on “pseudo-events”: moments created by the media that enabled broadcasters to fill dead air and retain attention. Neil Postman, writing Amusing Ourselves to Death in the 80s, discussed the brave new world in which the limitations of the medium limited the message, resulting in reduced understanding of nuanced issues as entertainment supplanted information.

In Boorstin’s time, the space that these manufactured moments filled was the newly-emergent hourly TV news program. In the internet age, the never-ending feed delivered the five-second news cycle. We became an audience that requires constant stimulation. The velocity of the social networks, and our expectation of entertainment and gripping, breaking news at any given second of the day, have made Boorstin’s complaint seem quaint. The feed no longer has an end; it scrolls ad infinitum, and content is found to fill the space at all times. Content is not always synonymous with information, and in fact, the information ecosystem’s algorithms reward spectacle. Anyone — not just the powerful media companies of old — can now create pseudo-events. Within some of the echo chambers of the internet this happens with such intensity and regularity that it creates what might best be termed a full-on pseudo-reality: an unending stream of fabrications, truthiness, and distraction, filtered to reinforce and strengthen the beliefs of the members.

Manufacturing Consent 2.0

Today, the most profoundly divergent narratives of the impeachment proceedings are mass-disseminated conspiracy theories — the Crowdstrike server nontroversy, the inane theory that witness Fiona Hill was a Soros-funded infiltrator. And they originated in the online echo chambers of the Fifth Estate, which have come to resemble decentralized cults. Blog posts and memes, shared into sympathetic online Groups, spread from person to person, hopping from platform to platform. They were not produced, top-down, by any major media organization, though, in some cases, they were further perpetuated by the popular talking heads of opinion programming, and by institutional leaders such as the President of the United States. These conspiracy theories matter; they become accepted history within particular online factions, impacting the interpretation of any related information, shaping what the community will be receptive to in the future.

The vast majority of the posts within online factional communities reinforce in-group identity and beliefs. Even in the most highly manipulative communities — say, those created by the Internet Research Agency — the goal was primarily to entrench people in their view of the world. Through that entrenchment, however, it becomes possible to increase the divide between groups. The high degree of trust and consensus within a community stands in stark contrast to the extent of expressed distrust (often, disgust) for members of other factions, their media sources, and their authorities, institutions, and facts. We’ve entered an era of conflicting pseudo-realities, a sort of Archipelago of Dissensus. No institutional or media entity retains sufficient moral authority, legitimacy, or the degree of trust required to link the islands, to mediate conflicting narratives or resolve debates about facts.

The manufacture of consent has not stopped; it just happens at the micro-reality level now. Anointed citizen journalists, bloggers, micro-media stars, and influencers within the factions and fandoms still run the old playbook even as they rail against “bought” politicians and the “lying media”. Except, they are the media. They engage with partisan authority figures, process narratives, create pseudo-events, and communicate their version of the world to their base. The internet didn’t eliminate the human predilection for institutional authority figures or media interpretations of facts and narratives — it just democratized the ability to claim the moral authority to do it. And sometimes, the only thing required to sustain authority is the ability to claim one possesses it, loudly and frequently enough. The manufacture of consent is thriving. The practice has just gotten significantly shoddier, more cynical and unprincipled, and as available to bad actors as to good ones.

There will be no return to the good old days when the Presidency of the United States next changes hands. There will be no going back to the era of a consolidated handful of media properties translating respectable institutional thinking for the masses. Despite our current nostalgia, it is not obvious that the big lies pushed by the old power structure were preferable to the thousands of small lies on social channels. But this pervasive, acrimonious dissensus is untenable for a democratic society; something has to change.

Can we redesign or create an information ecosystem that engenders sufficient consensus for governance functions? If not, how do we transition to a non-toxic form of dissensus that can sustain governance at least as well as older processes of manufacturing consent, for all their faults, did? 

In the United States, as in many other nations that have undergone a digital media shift, we have an actively disinformed citizenry, fragmented into an archipelago of perpetually warring island realities, at a time when we face profound societal challenges that require consensus action. Addressing issues like climate change, public health disasters, and technological shifts transforming the workforce will be impossible if we remain in this phase of intractable hyperpartisanship and Pyrrhonism.

We’re at a turning point: the fragmentation of media and the collapse of elite institutions did not lead to a better, more honest, or functional system. There is a deep irony in the fact that platforms built to connect the world instead reduced our capacity for finding common ground. 

Mediating Consent

The path forward requires systems to facilitate mediating, not manufacturing, consent. We need a hybrid form of consensus that is resistant to the institutional corruption of top-down control, and welcomes pluralism, but is also hardened against bottom-up gaming of social infrastructure by malign actors. The question is whether the more viable solution comes from addressing the formation of factions, or from creating an environment that fosters politically and socially healthy relations among the island realities in the archipelago of dissensus.

We are never going to eliminate factions. They’ve always been part of American society; in another famous pamphleteering debate, James Madison argued in Federalist 10 (1787) that factions are, simply, a function of human nature:

“The latent causes of faction are thus sown in the nature of man; and we see them everywhere brought into different degrees of activity, according to the different circumstances of civil society. A zeal for different opinions concerning religion, concerning government, and many other points, as well of speculation as of practice; an attachment to different leaders ambitiously contending for pre-eminence and power; or to persons of other descriptions whose fortunes have been interesting to the human passions, have, in turn, divided mankind into parties, inflamed them with mutual animosity, and rendered them much more disposed to vex and oppress each other than to co-operate for their common good.” 

Instead, we can focus — as Madison did — on mitigating the harmful effects. 

The infrastructure of the social internet appears to have made the process of factional formation easier, faster, and more prone to extremes. Although we can’t change human nature, we do have the power to change that infrastructure. We can more thoughtfully design online communities and experiences: Tristan Harris and the Center for Humane Technology, and Tobias Rose-Stockwell, currently at NYU, have each made tangible suggestions toward rethinking specific design choices and incentive structures that push users into malign communities or warp them into perpetually outraged, aggrieved trolls. Suggestions that academics and activists have made to rethink the recommendation engine, or the information curation function, to decrease sensationalist, conspiratorial, and polarizing content are finally starting to be implemented, but there’s more to be done.

We can also more productively moderate the factional universe: Reddit, with its readily visible factions (subreddits), took the controversial step of banning some of its nastiest, most brigade-prone communities in late 2015. A 2017 study indicated that disbanding these communities improved the behavior of their members; some moved into adjacent communities, but the existing norms and culture there proved resistant to bad influence from the new arrivals. It’s long past time for Facebook to put better moderation tools into the hands of its users. More targeted moderation interventions are an unpalatable solution for free speech absolutists, who believe that they should be entitled to say whatever they want to whomever they want, and that anything less violates their rights. They’re unconcerned with the unique dynamics of the internet, in which some types of government-legal speech still negatively impact the speech rights of others, especially those harassed out of the digital commons. Regardless, there are already marketplace solutions — niche social networks and boards — that offer the type of environment absolutists are looking for.

However, even after mitigating the worst types of factional behavior, the issue of bespoke realities and perpetual dissensus remains. Madison’s solution was federalism. What is the federalist system for the internet age? By what process do we prevent the current slow-burning war of all against all from heating up? This is where we need to draw not only on the information ecosystem but the media players, technocratic institutions, and influencers with the potential to bridge those gaps. 

Offline experiments like America in One Room suggest that, even in these hyper-partisan times, compromise remains possible when people meet face-to-face. Experts in organizational psychology reiterate that it is possible to find common ground (or at least, to disagree civilly), but that seeing the other person, reading unspoken physical indications of good intentions, is key. That’s hard to replicate online, particularly in the rancorous world of social media. In their essay The Memetic Tribes of Culture War 2.0, Peter Limberg and Conor Barnes propose the idea of a memetic mediator — someone capable of straddling multiple tribes and bridging the communication gaps between them. It would be novel to see this approach deployed in the most heated debates on Twitter, to see if, perhaps, uninvolved bystanders intervening in a neutral way might transform a conversation from toxic to healthy.

At the moment, the world’s largest social network is headed down the opposite path. Mark Zuckerberg’s articulated strategy is to move people into smaller, more private online groups — specifically, encrypted WhatsApp groups. These newer online factions will exist largely beyond the reach of oversight. Neither curious users, nor investigative journalists, nor Facebook’s moderation AI will be able to assess them, but they will still have most of the virality affordances of Facebook itself. Although there is some benefit — privacy for activists in authoritarian environments — the cost of this particular infrastructure for micro-realities has included viral hoaxes that led to mob killings. Since it’s unclear how Facebook could develop any meaningful moderation capability in an end-to-end encrypted environment, this shift will likely amplify some of the worst aspects of the current information ecosystem.

Twitter, by contrast, has taken significant steps to downrank low-quality accounts and mitigate viral hoaxes and the manipulation of its trending topics algorithm. The speed and structure of its feed predisposes it to outrage-inducing pseudo-events. But the company is at least aware that it must prioritize conversational health; Jack Dorsey recently announced an initiative to develop an “open and decentralized standard” for Twitter and other social media entities that puts greater control of curation in the hands of users.  

Finally, there is the media. Social networks get the brunt of our attention as the new, unregulated, disaster-prone information ecosystem of the day. But they are most often the amplifiers and curators of content, and the underlying media ecosystem is in a sorry state as well. There are arguments to be made about the decline of journalism being partly a function of monopolistic social networks taking a greater share of advertising revenue, but in response to the pressure to appeal to the factions most likely to subscribe, the Fourth Estate appears to have become more overtly factional itself. On several occasions in recent years, newspapers have changed their headlines and coverage styles in apparent response to Twitter outrage spikes. Brands working to appeal to their customers is nothing unique, but it is troubling to see the idea of a “paper of record” — an entity that reports facts neutrally to the best of its ability, that informs the public, that helps us be better citizens — fall away.

The Fifth Estate has turned out to be not a replacement or check and balance for the Fourth Estate, but an amplifier of some of its worst tendencies. The information environment that we have created online — the assemblage of distribution platforms, fourth and fifth estates, and actively commenting and sharing consumers — directly impacts our democracy offline. Media literacy efforts might help; at a minimum, they can raise awareness of how hyperpartisan and conspiratorial sites themselves manufacture consent in the age of factions. But we need something to serve as a trusted factual mediator as well. It has been possible to design systems that facilitate collaborative consensus realities that largely align with the truth; Wikipedia’s negotiated facts are one such example. 

In the wake of the religious fragmentation triggered by Martin Luther, failure to engineer either a consensus or a harmonious dissensus proved incredibly costly. The limited detente achieved by the Peace of Augsburg in 1555 unraveled into one of the bloodiest wars in history, the Thirty Years War. And it wasn’t until the Peace of Westphalia in 1648, almost a century later, that a new equilibrium was achieved.

Each successive major information revolution has delivered a period of turmoil, much like today’s. The pamphleteering wars eventually wound down with the Federalist Papers, more than two centuries after Luther, as the flow of ideas moved to newspapers and other emerging media. Ultimately, the promise of social media — systems to facilitate human connection, and to disintermediate access to information — still has the potential to be a powerful force for good in the world. But it won’t happen on its own. The future that realizes this promise still remains to be invented.



About Renee DiResta

Renee DiResta is an Editor-at-Large for ribbonfarm. She writes about techno-sociological weirdness, with a focus on digital mass manipulation.


  1. You mentioned the concept of the social construction of reality. I can’t recall if this is their thesis, or my reconstruction of their thesis, but the message I took (at least) from their book was that if you have a series of specialised truth procedures, all operating on sub-domains of reality, then in order to communicate and operate together they need a shared symbolic universe: something that compartmentalises their individual forms of knowledge and presents them to each other in a comprehensible way.

    The symbolic universe is a legitimisation layer, but a legitimisation of heterogeneity and the abstraction layer that allows everyone to be right simultaneously, and others to be wrong but not dangerously wrong.

    How does this relate to the modern world? The essential problem of the internet is that it makes introspection into personal experience a heterogeneous field of … truth production? I’m not sure. There’s definitely something being produced, and just as mathematics can increase in its power and self-consistency long before it finds any application, it’s possible these subcommunities are actually articulating some patterns that have some kind of emotional consistency that parallels the logical consistency of mathematics, even if they fail the ontic task of describing reality accurately.

    The collaboration on the basis of experiential similarity, in finding like-minded or similarly-experiencing individuals across different social contexts, forms a kind of feverish phenomenology, where the same experiential framework is explored from many different angles, and sometimes experimented with. Consider the world of pickup artists generating an incel-shaped echo of disappointment as NLP, motivation, and “frames” prove insufficient to its marketing (ironically) as an automatic rhetorical procedure for achieving sex and validation, leading to a rejection of positivity, of non-biologically-ranked possibility, and of the capacity for self-construction as such.

    These communities of thought form archetypes, but not something we joke about without rolling our eyes, because archetype humour seems to operate best in the process of stabilisation of the symbolic universe and its parallel coordinative structures; we joke about stereotypes when we have a way to pretend we understand them, and also recognise that we do not, when there is some sense that one can agree to disagree or contain the disagreement of another person.

    In contrast to old fashioned stereotype humour however operating on the sense of aggregate identity, we do see sub-cultures engaging in a strange process of self-parody and appropriation of each other’s forms of self parody. Bad attempts at articulating a particular viewpoint become transformed into memes within subgroups, part of maintaining their communicative identity by compartmentalising-by-parody impulses considered to be unrealistic or overly purist, only for such images to be redeployed sincerely under the cover of parody, or taken on by alternative groups as sincere, or something to own in contrast.

    In other words, instead of the group itself gaining recognition, instead we see their meme language, and ephemera of in jokes and identity production being borrowed and constantly reformed by outsiders.

    Instead of groups forming separate identities and having meme translators, who consciously transform one group’s perspective into the language of the other, we have meme detourners, where groups come to share warped versions of each other’s language precisely because of their attempts to re-articulate the terms under which they operate relative to each other, they interact by trying to disassemble one another’s reflective machinery. (As well as to march through each other’s spaces, smash things up, knife one another etc. but that’s less interesting from a communicative perspective, we already did that in the 1930s)

    When this isn’t spilling over into open warfare, doxing and attempts to get each other marked as terrorists, these kinds of subcultural battles are pretty interesting, as they constantly try to rearticulate each other’s articulations, requoting, reworking, spinning each other’s mistakes and lapses of alliance maintaining judgement in order to confirm the superiority of their particular worldview.

    In other words, these groups are staying within filter bubbles, but they are spending their time watching favoured media figures report on, rant or create absurdist stories or sketches about people in other bubbles. Conservatives talking about what “liberals” think, feminists about what “manosphere” people think, all with a sense of the obviousness of their derision.

    It occurs to me that the most immediate way to make such things healthy is to scale back conflict into this endless domain of requoting, and use some combination of AI journalistic oversight and distributed gruntwork to reconnect various characterisations to verifiable sources, take advantage of the internet’s capacity to short-circuit distances, and so render tenable only those expressions of alternative readings that can stand easy access to the original. In other words, push people towards interpretative competition rather than clever obfuscation.

    This is not consent, but conflict, but it nevertheless gives them a coherent if adversarial relationship that can parallel for example the legal system at its better moments, where we understand that each side is arguing its case, with the jury, in this case participants in these social systems with opportunities to defect, being able to see the cross examination of theses attempts to define their broader mutual relationship.

    You could sum this approach up as “at least they’re talking, right??”

  2. It was almost 300 years from Gutenberg’s invention of the European printing press (1450) until the Age of Enlightenment beginning in 1715. If we see digital media — first invented as telegraph transmissions in 1855 — to be the dawn of the electric/network age, then we may have a similarly long period of turmoil still ahead of us. It takes a long time for all generations to forget how things used to be. In the meantime, I think the best approach for our current, mixed-up society is to keep experimenting with new organizational and governance models. It can be our legacy to future generations living in a network society.

  3. Excellent statement on how we are now at a turning point in the evolution of how human society achieves consensus, or breaks down in strife.

    However, I believe there is already a clear path forward, if we focus resources on it. You say “The future that realizes this promise still remains to be invented.” Actually, the core of that future has already been invented — the task is to decide to build out on that core, to validate and adjust it as needed, and to continuously evolve it as society evolves.
    1. The first step in the invention was the reputation mechanism in Google’s original PageRank algorithm. It succeeded by inferring hybrid consent on which Web pages are relevant and high quality. That was based on the human judgement of those who link in to them, with clever algorithms that weight that judgement based on many levels of reputation (and with awareness of subject domains and communities). (Unfortunately, Google has since sold its soul to its ad model.)
    2. As a further step, my patent disclosures (from 2002-3, now in public domain) extend those mechanisms to collaborative systems for mediating and augmenting our marketplace of ideas — under decentralized user control. A more accessible extension of those methods for broad forms of social media feeds, “The Augmented Wisdom of Crowds: Rate the Raters and Weight the Ratings” (2018), is at

    Some key points on how that goes in the directions you so nicely outlined:
    — The methods PageRank applied (a modernization of long-used citation-analysis methods) may not fully explain what this man-machine algorithm is recommending or curating, but they begin to provide a workable operational sense of that.
    — My extension adds decentralized control that can recognize the nuances of communities of interest, and engineer permeability, surprising validators, and serendipity into their filter bubble micro-realities. This approach assumes and mediates pluralism. It also creates a cognitive immune system for managing bad actors. This is all done using a hybrid of man-machine intelligence to enable an emergent hybrid bottom-up consensus, moderated with controlled levels of top-down guidance.
    — Downranking less desirable items is central to PageRank, and to my extensions (which might be called RateRank). The examples of smarter downranking by Facebook and others that you cite are exactly the kind of corrective effort that my architecture seeks to build into its algorithms.
    — You are right that this requires openness and transparency to counter factionalism; secrecy is the mortal enemy of consensus.
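    The reputation mechanism described above — score each page by the weighted judgement of the pages linking to it — can be sketched as a simple power iteration. This is a minimal illustrative sketch, not Google’s production algorithm; the toy link graph, function name, and parameter values are assumptions for demonstration only.

    ```python
    def pagerank(links, damping=0.85, iterations=100):
        """Compute PageRank-style reputation scores.

        links: dict mapping each page to the list of pages it links to.
        Each page's rank is redistributed to its link targets on every
        iteration, so a page "inherits" the reputation of its endorsers.
        """
        pages = list(links)
        n = len(pages)
        rank = {p: 1.0 / n for p in pages}  # start with uniform reputation

        for _ in range(iterations):
            # Every page keeps a small baseline (the "random surfer" share).
            new_rank = {p: (1.0 - damping) / n for p in pages}
            for page, outlinks in links.items():
                if outlinks:
                    # A link is a vote: split this page's rank among targets.
                    share = rank[page] / len(outlinks)
                    for target in outlinks:
                        new_rank[target] += damping * share
                else:
                    # Dangling page: spread its rank evenly over all pages.
                    for target in pages:
                        new_rank[target] += damping * rank[page] / n
            rank = new_rank
        return rank

    # Toy web: "a" and "b" both vouch for "c", so "c" ends up most reputable.
    web = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
    ranks = pagerank(web)
    ```

    The “rate the raters” extension the comment proposes would, in effect, replace the uniform link votes above with weights derived from each rater’s own reputation, applied recursively.
    
    
    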

    Of course building such an ecosystem for our augmented marketplace of ideas will be a long and continuously evolving process. And the initial challenge is to motivate our platforms to move in that direction and to alter their business models to serve users, not advertisers — or alternatively, to enable new platforms to supplant the misdirected platforms we have now.

    (Your article also adds helpful insight and currency to the similarly broad historical/cultural themes of Niall Ferguson’s book, “The Square and the Tower,” which I commented on at

    I hope we can all work together to see and realize this vision of augmented, mediated consent.