The Digital Maginot Line

There is a war happening. We are immersed in an evolving, ongoing conflict: an Information World War in which state actors, terrorists, and ideological extremists leverage the social infrastructure underpinning everyday life to sow discord and erode shared reality. The conflict is still being processed as a series of individual skirmishes – a collection of disparate, localized, truth-in-narrative problems – but these battles are connected. The campaigns are often perceived as organic online chaos driven by emergent, bottom-up amateur actions when a substantial amount is, in fact, helped along or instigated by systematic, top-down institutional and state actions. This is a kind of warm war; not the active, declared, open conflict of a hot war, but beyond the shadowboxing of a cold one.

Section of the Maginot Line, 1940 (Public Domain)

We experience this as a state of continuous partial conflict. The theatre opportunistically shifts as geopolitical events and cultural moments present themselves, but there is no sign of abatement — only tactical evolution as the digital platforms that serve as the battlespaces introduce small amounts of friction via new security checks and feature tweaks. As governments become increasingly aware of the problem, they each pursue responses tailored to the tactics of the last specific battle that manifested in their own digital territory; in the United States, for example, we remain focused on Election 2016 and its Russian bots. As a result, we are investing in a set of inappropriate and ineffective responses: a digital Maginot Line constructed on one part of the battlefield as a deterrent against one set of tactics, while new tactics manifest elsewhere in real time.

Like the original Maginot Line, this approach is about as effective a defense as a minor speed bump.

The Maginot Line was, in its time, believed to be a significant innovation in national defense; foreign leaders came from all over to tour it. It was a series of fortresses and railroads, resistant to all known forms of artillery, built using the latest technology. The purpose was both to halt an invasion force — keeping civilians safer — and to deliver early warning of an attack. The line was built by France along the entirety of its border with Germany. It extended into the border with Belgium, but stopped at the Forest of Ardennes because of the prevailing belief among experts that Ardennes was impenetrable.

Ardennes, it turned out, was not impenetrable. Moving through the forest would perhaps have been a futile effort for an army using the attrition warfare strategies prevalent in World War I, but it was vulnerable to new modes of warfare. As the French focused on building the Maginot Line, the Germans developed exactly such a new model of warfare — Blitzkrieg — and sent a million men and 1,500 tanks through Ardennes (while deploying a small force to the Maginot Line as a decoy).

The Line, designed to very effectively fight the last war, had delivered a false sense of security.

Both the Maginot Line and the doctrine of Blitzkrieg emerged in the interwar years, a period that is perhaps also best characterized as a Warm War, much like the period we’re living through today. But while the Maginot Line embodied the tactical thinking and technological assumptions of the last war, Blitzkrieg embodied the possibilities of new technologies and the next war.

Dramatis Personae

The Information World War has already been going on for several years. We called the opening skirmishes “media manipulation” and “hoaxes”, assuming that we were dealing with ideological pranksters doing it for the lulz (and that lulz were harmless).

In reality, the combatants are professional, state-employed cyberwarriors and seasoned amateur guerrillas pursuing very well-defined objectives with military precision and specialized tools. Each type of combatant brings a different mental model to the conflict, but uses the same set of tools.

There are state-sponsored trolls, destabilizing societies in some countries, and rendering all information channels except state media useless in others. They operate at the behest of rulers, often through military or intelligence divisions. Sometimes, as in the case of Duterte in the Philippines, these digital armies focus on interference in their own elections, using paid botnets and teams of sockpuppet personas to troll and harass opponents, or to amplify their patron’s candidacy. Other times, the trolls reach beyond their borders to manipulate politics elsewhere, as was the case with Brexit and the U.S. presidential election of 2016. Sometimes, as in Myanmar, elections aren’t the goal at all: there, military-run digital teams incited a genocide.

There are decentralized terrorists such as ISIS, who build high-visibility brands while asynchronously recruiting the like-minded. These digital recruiters blanket the internet with promises of glory and camaraderie via well-produced propaganda, then move the receptive into encrypted chat apps to continue the radicalization. The recruits pledge allegiance to the virtual caliphate in Facebook posts before driving trucks into pedestrian plazas IRL.

There are also small but highly-skilled cadres of ideologically-motivated shitposters whose skill at information warfare is matched only by their fundamental incomprehension of the real damage they’re unleashing for lulz. A subset of these are conspiratorial — committed truthers who were previously limited to chatter on obscure message boards until social platform scaffolding and inadvertently-sociopathic algorithms facilitated their evolution into leaderless cults able to spread a gospel with ease.

Combatants evolve with remarkable speed, because digital munitions are very close to free. In fact, because of the digital advertising ecosystem, information warfare may even turn a profit. There’s very little incentive not to try everything: this is a revolution that is being A/B tested. The most visible battlespaces are our online forums — Twitter, Facebook, and YouTube — but the activity is increasingly spreading to old-school direct action on the streets, in traditional media outlets, and behind closed doors, as state-sponsored trolls recruit and manipulate activists, launder narratives, and instigate protests.

One thing that all of these groups have in common is a shared disdain for Terms of Service; the rules that govern conduct and attempt to set norms in platform spaces are inconveniences to be disregarded, at best. Combatants actively and systematically circumvent these attempts at digital defenses, turning the very idea of them into a target of trolling: the norms are illegitimate, they claim. The rules are unfair, their very existence is censorship!

The combatants want to normalize the idea that the platforms shouldn’t be allowed to set rules of engagement because in the short term, it’s only the platforms that can.

Meanwhile, regular civilian users view these platforms as ordinary extensions of physical public and social spaces – the new public square, with a bit of a pollution problem. Academic leaders and technologists wonder if faster fact checking might solve the problem, and attempt to engage in good-faith debate about whether moderation is censorship. There’s a fundamental disconnect here, driven by underestimation and misinterpretation. The combatants view this as a Hobbesian information war of all against all and a tactical arms race; the other side sees it as a peacetime civil governance problem.

The Nature of Information Wars

One of the reasons for this gap is a fundamental misreading of the end goal. Wars have been fought for centuries over a fairly uniform set of goals: territorial control, regime change, religious or cultural mores, and the consolidation or shifting of economic power.

Information war combatants have certainly pursued regime change: there is reasonable suspicion that they succeeded in a few cases (Brexit) and clear indications of it in others (Duterte). They’ve targeted corporations and industries. And they’ve certainly gone after mores: social media became the main battleground for the culture wars years ago, and we now describe the unbridgeable gap between two polarized Americas using technological terms like filter bubble.

But ultimately the information war is about territory — just not the geographic kind.

In a warm information war, the human mind is the territory. If you aren’t a combatant, you are the territory. And once a combatant wins over a sufficient number of minds, they have the power to influence culture and society, policy and politics.

Meanwhile, the new digital nation states — the social platforms that act as unregulated, privately-governed public squares for 2 billion citizens — have just begun to acknowledge that all of this is happening, and they’re struggling to find ways to manage it. After a year of Congressional hearings and relentless press exposés detailing everything from election interference to literal genocide, technology companies have begun to internalize that the information world war is very real, is causing real pain to many, and is having profound consequences.

This particular manifestation of ongoing conflict was something the social networks didn’t expect. Cyberwar, most people thought, would be fought over infrastructure — armies of state-sponsored hackers and the occasional international crime syndicate infiltrating networks and exfiltrating secrets, or taking over critical systems. That’s what governments prepared and hired for; it’s what defense and intelligence agencies got good at. It’s what CSOs built their teams to handle.

But as social platforms grew, acquiring standing audiences in the hundreds of millions and developing tools for precision targeting and viral amplification, a variety of malign actors simultaneously realized that there was another way. They could go straight for the people, easily and cheaply. And that’s because influence operations can, and do, impact public opinion. Adversaries can target corporate entities and transform the global power structure by manipulating civilians and exploiting human cognitive vulnerabilities at scale. Even actual hacks are increasingly done in service of influence operations: stolen, leaked emails, for example, were profoundly effective at shaping a national narrative in the U.S. election of 2016.

This is not to say that infrastructure defense isn’t critical; it is. The fact that infrastructure and network hacking is time-consuming, costly, and perceived as unambiguously hostile, however, means that a detente has evolved on that front, and pushed active conflict to the social layer. In the Cold War, a huge percentage of the defense budget was spent on maintaining deterrence capabilities that ensured that neither of the two primary adversaries would use nuclear weapons. Hot conflict still erupted on the periphery via proxy wars in Latin America and Vietnam. The substantial time and money spent on defense against critical-infrastructure hacks is one reason why poorly-resourced adversaries choose to pursue a cheap, easy, low-cost-of-failure psy-ops war instead. Deterrence imposes real costs on the adversary; a Maginot Line, by contrast, can be cheaply circumvented.

To ensure that our physical infrastructure and critical systems were defended, we empowered a myriad of government agencies to develop best-in-class offensive capabilities and prohibitive deterrence frameworks. No similar plan or whole-of-government strategy exists for influence operations. Our most technically-competent agencies are prevented from finding and countering influence operations because of the concern that they might inadvertently engage with real U.S. citizens as they target Russia’s digital illegals and ISIS’ recruiters. This capability gap is eminently exploitable: why execute a lengthy, costly, complex attack on the power grid when attacking a society’s ability to operate with a shared epistemology costs almost nothing, in dollars or in consequences? This leaves us in a terrible position, because there are so many more points of failure. As trust in media and leadership continues to erode (a goal of influence operations), one of these information campaigns — a more sophisticated version of the Internet Research Agency’s Columbian Chemicals Plant hoax, perhaps in a powder-keg country — could be used to provoke a very real response, transforming the warm war into a hot war.

Tactical Evolution

This shift from targeting infrastructure to targeting the minds of civilians was predictable. Theorists like Edward Bernays, Hannah Arendt, and Marshall McLuhan saw it coming decades ago. As early as 1970, McLuhan wrote, in Culture is our Business, “World War III is a guerrilla information war with no division between military and civilian participation.”

The Defense Department anticipated it, too: in 2011 DARPA launched a dedicated program (Social Media in Strategic Communications, SMISC) that sought to preempt and prepare for an online propaganda battle. The premise was ridiculed as an implausible threat, and the program was shut down in 2015. Now, both governments and tech platforms are scrambling for a response. The trouble is that much of this effort goes into piecemeal fixes for the last set of tactics; into building a Digital Maginot Line.

The 2014-2016 influence operation playbook went something like this: a group of digital combatants decided to push a specific narrative, something that fit a long-term narrative but also had a short-term news hook. They created content: sometimes a full blog post, sometimes a video, sometimes quick visual memes. The content was posted to platforms that offer discovery and amplification tools. The trolls then activated collections of bots and sockpuppets to blanket the biggest social networks with the content. Some of the fake accounts were disposable amplifiers, used mostly to create the illusion of popular consensus by boosting like and share counts. Others were highly backstopped personas run by real human beings, who developed standing audiences and long-term relationships with sympathetic influencers and media; those accounts were used for precision messaging with the goal of reaching the press. The Israeli company Psy Group marketed precisely these services to the 2016 Trump presidential campaign; as its sales brochure put it, “Reality is a Matter of Perception”.

If an operation is effective, the message will be pushed into the feeds of sympathetic real people who will amplify it themselves. If it goes viral or triggers a trending algorithm, it will be pushed into the feeds of a huge audience. Members of the media will cover it, reaching millions more. If the content is false or a hoax, perhaps there will be a subsequent correction article – it doesn’t matter; no one will pay attention to it. Some of the amplifier bots might get shut down – that doesn’t matter either; they’re easy to replace.

Now, in 2018, we have reached the point at which most journalists and many world leaders understand the 2016 playbook. Media and activists alike have pressured platforms to shut down the worst loopholes. This has had some impact; for example, it’s become much harder to trigger a trending algorithm with bots. After getting pwned that way (and getting called on it) thousands of times, Twitter finally adapted, greyboxing and underweighting low-quality accounts. Facebook eliminated its trending news feature altogether. Since running spammy automated accounts is no longer a good use of resources, sophisticated operators have moved on to new tactics.

But although the bots are of increasingly minimal value, lawmakers at both the state and federal level are still expending effort thinking about regulating them. California lawmakers went so far as to pass a law that makes it illegal for bot account creators to misrepresent themselves — while it’s nice to imagine that making it illegal for trolls to create troll accounts is going to restore order to the information ecosystem, it won’t. It’s incredibly challenging to tailor a law to catch, or label, only malicious automated accounts. Twitter’s self-imposed product tweaks have already largely relegated automated bots to the tactical dustbin. Combatants are now focusing on infiltration rather than automation: leveraging real, ideologically-aligned people to inadvertently spread real, ideologically-aligned content instead. Hostile state intelligence services in particular are now increasingly adept at operating collections of human-operated precision personas, often called sockpuppets or cyborgs, that will escape punishment under the bot laws. They will simply work harder to ingratiate themselves with real American influencers, to join real American retweet rings. If combatants need to quickly spin up a digital mass movement, well-placed personas can rile up a sympathetic subreddit or Facebook Group populated by real people, hijacking a community in the way that parasites mobilize zombie armies.

Focusing on feature-level tactical fixes that simply shift the boundaries of what’s permissible on one platform is like building a digital Maginot Line; it’s a wasted effort, a reactive response to tactics from the last war. By the time lawmakers get around to passing legislation to neutralize a harmful feature, adversaries will have left it behind. Attempts to legislate away 2016 tactics primarily have the effect of triggering civil libertarians, giving them an opportunity to push the narrative that regulators just don’t understand technology, so any regulation is going to be a disaster.

Digital Security Theater

The entities best suited to mitigate the threat of any given emerging tactic will always be the platforms themselves, because they can move fast when so inclined or incentivized. The problem is that many of the mitigation strategies advanced by the platforms are the information integrity version of greenwashing; they’re a kind of digital security theater, the TSA of information warfare. Creating better reporting tools, for example, is not actually a meaningful solution for mitigating literal incitements to genocide. Malignant actors currently have safe harbor in closed communities; they can act with impunity so long as they don’t provoke the crowd into reporting them — they simply have to be smart enough to stay ahead of crowd-driven redress mechanisms. Meanwhile, technology companies can plausibly deny complicity because they added a new field to the “report abuse” button.

Algorithmic distribution systems will always be co-opted by the best resourced or most technologically capable combatants. Soon, better AI will rewrite the playbook yet again — perhaps the digital equivalent of Blitzkrieg in its potential for capturing new territory. AI-generated audio and video deepfakes will erode trust in what we see with our own eyes, leaving us vulnerable both to faked content and to the discrediting of the actual truth by insinuation. Authenticity debates will commandeer media cycles, pushing us into an infinite loop of perpetually investigating basic facts. Chronic skepticism and the cognitive DDoS will increase polarization, leading to a consolidation of trust in distinct sets of right and left-wing authority figures – thought oligarchs speaking to entirely separate groups.

We know this is coming, and yet we’re doing very little to get ahead of it. No one is responsible for getting ahead of it.

The key problem is this: platforms aren’t incentivized to engage in the profoundly complex arms race against the worst actors when they can simply point to transparency reports showing that they caught a fair number of the mediocre actors.

Platforms cannot continue to operate as if all users are basically the same; they have to develop constant awareness of how various combatant types will abuse the new features that they roll out, and build detection of combatant tactics into the technology they’re creating to police the problem. The regulators, meanwhile, have to avoid the temptation of quick wins on meaningless tactical bills (like the Bot Law) and wrestle instead with the longer-term problems of incentivizing the platforms to take on the worst offenders (oversight), and of developing a modern-day information operations doctrine.

Liberal Means, Illiberal Ends

What made democracies strong in the past — a strong commitment to free speech and the free exchange of ideas — makes them profoundly vulnerable in the era of democratized propaganda and rampant misinformation.

We are (rightfully) concerned about silencing voices or communities. But our commitment to free expression makes us disproportionately vulnerable in the era of chronic, perpetual information war. Digital combatants know that once speech goes up, we are loath to moderate it; to retain this asymmetric advantage, they push an all-or-nothing absolutist narrative that moderation is censorship, that spammy distribution tactics and algorithmic amplification are somehow part of the right to free speech.

We seriously entertain conversations about whether or not bots have the right to free speech, privilege the privacy of fake people, and have Congressional hearings to assuage the wounded egos of YouTube personalities. More authoritarian regimes, by contrast, would simply turn off the internet. An admirable commitment to the principle of free speech in peacetime turns into a sucker’s position against adversarial psy-ops in wartime. We need an understanding of free speech that is hardened against the environment of a continuous warm war on a broken information ecosystem. We need to keep the fundamental value itself from becoming a prop in a malign narrative.

The solution to this problem requires collective responsibility among military, intelligence, law enforcement, researchers, educators, and platforms. Creating a new and functional defensive framework requires cooperation.

It’s time to prioritize frameworks for multi-stakeholder threat information sharing and oversight. The government has the ability to create meaningful deterrence, to make it an unquestionably bad idea to interfere in American democracy and manipulate American citizens. It can revamp national defense doctrine to properly contextualize the threat of modern information operations, and create a whole-of-government approach that’s robust regardless of any new adversary, platform, or technology that emerges. And it can communicate threat intelligence to tech companies.

Technology platforms, meanwhile, bear much of the short-term responsibility. They’re the first line of defense against evolving tactics, and have full visibility into what’s happening in their corner of the battlespace. And, perhaps most importantly, they have the power to moderate as they see fit, and to set the terms of service. For a long time, the platforms pointed to “user rights” as a smokescreen to justify doing nothing. That time is over. They must recognize that they are battlespaces, and as such, must build the policing capabilities that limit the actions of malicious combatants while protecting the actual rights of their real civilian users.

Towards Digital Peace

Unceasing information war is one of the defining threats of our day. This conflict is already ongoing, but (so far, in the United States) it’s largely bloodless and so we aren’t acknowledging it despite the huge consequences hanging in the balance. It is as real as the Cold War was in the 1960s, and the stakes are staggeringly high: the legitimacy of government, the persistence of societal cohesion, even our ability to respond to the impending climate crisis.

If the warm war is allowed to continue as it has, there is a very real threat of descent into illegitimate leadership and fractured, paralyzed societies. If algorithmic amplification continues to privilege the propagandists most effective at gaming the system, if combatant persona accounts continue to harass civilian voices off of platforms, and if hostile state intelligence services remain able to recruit millions of Americans into fake “communities”, the norms that have traditionally protected democratic societies will fail.

We don’t have time to waste on digital security theater. In the two years since Election 2016, we’ve all come to agree that something is wrong on the internet. There is momentum and energy to do something, but the complexity of the problem and the fact that it intersects with other thorny issues of internet governance (privacy, monopoly, expression, among others) means that we’re stuck in a state of paralysis, unable to address disinformation in a meaningful way. Instead, both regulators and the platforms throw up low-level roadblocks. This is what a digital Maginot Line looks like.

Influence operations exploit divisions in our society using vulnerabilities in our information ecosystem. We have to move away from treating this as a problem of giving people better facts, or stopping some Russian bots, and move towards thinking about it as an ongoing battle for the integrity of our information infrastructure – easily as critical as the integrity of our financial markets. When it’s all done and over with, we’ll look back on this era as being as consequential in reshaping the future of the United States and the world as World War II.


About Renee DiResta

Renee DiResta is an Editor-at-Large for ribbonfarm. She writes about techno-sociological weirdness, with a focus on digital mass manipulation.

Comments

  1. A key dynamic that seems missing from this (*very* good – the “might need to reframe my thinking” kind of good) analysis is that one of two political parties in the US — the one that currently holds the Executive branch, half of the Legislative one, and has arguably the upper hand in shaping the Judicial one — has made an ex post implicit alliance with most of the threat actors you describe. Trump is essentially the in-the-spotlight version of the reality-denial strategies, and the political strategy of the Republican Party (e.g. climate change denial) now relies on these kinds of vulnerabilities as well [and I would argue that this predates the Internet; Fox News, over the long term, has perhaps been as successful a political disinformation actor as any in history].

    The implication is that any analysis that assumes the US government (among others) is in the reality-based side of this mess isn’t realistic; there’s an ongoing informational attack on US democracy, but successful enough that by now the US government itself (and half of its political system) is in the attacking side.

    I have absolutely no idea of how that translates to actionable ideas. But while

    “[…] The government has the ability to create meaningful deterrence, to make it an unquestionably bad idea to interfere in American democracy and manipulate American citizens. It can revamp national defense doctrine to properly contextualize the threat of modern information operations, and create a whole-of-government approach that’s robust regardless of any new adversary, platform, or technology that emerges. And it can communicate threat intelligence to tech companies.”

    is undeniably true, it’s not in the interest of either the current administration nor its party — quite the contrary.

    • Paula Johnson says:

      Exactly right. I appreciate this article but the real Maginot Line was built in good faith with a true effort to defend a nation. Our Digital Maginot Line is essentially caution tape, as we have neither the technological capabilities nor the interest in building a better plan of defense. Further, even after seeing the enemy cross the line our leadership tells us it’s not actually happening. This is a fantastic article but I think it’s somewhat reckless for people to believe that this administration is about to invest in any kind of deterrence strategy. Making “reality a matter of perception” is essentially the only guiding principle of this administration and clearly it works.

    • Carlos Ramirez says:

      I wouldn’t call either side reality based. Both are fond of ignoring certain facts that counter their narratives. And the fact thing is only secondary to the primary value conflict which is actually driving all this. There is no rigorous way to jump the is-ought divide, so facts are only of limited value in that conflict.

      • [I’m breaking my own rule regarding sub-comments, but what the heck, it’s almost New Years]

        > I wouldn’t call either side reality based. Both are fond of ignoring certain facts that counter their narratives.

        I’m not saying the Democratic Party is an epistemologically committed fact-seeking enterprise, but there’s a qualitative difference of scale between their ignoring of some facts and the President of the United States publicly stating that almost every climatologist on the planet is part of a conspiracy orchestrated by the Chinese, or that there are steel plants being built in the US that just aren’t — to mark things at different ends of the scale of inherent system complexity. There are lies of detail, lies of omission, deliberate misunderstandings, and so on. And then there’s standing in broad daylight and asking applause for steel mills and coal plants that just don’t exist.

        • Carlos Ramirez says:

          That’s just a tu quoque fallacy. My point is there is no “reality based side”, as you phrased it in your original comment. That one side lies harder than the other does not change this. Hence my problems with the article: absent an actual “epistemologically committed fact-seeking enterprise” it is profoundly unethical to be talking about shaping the ideological landscape via silencing and censorship. And probably even with one, given the difficulties in crossing the is-ought gap.

          • Jiaoning Bu says:

            “epistemologically committed fact-checking enterprise” is such a complicated proposition anyways. Reading Thomas Nagel lately, I’m profoundly suspicious of a rational-physicalist approach to this.

            If this was pre-modern, we’d all end up in Iran. But modern and post-modern thinking have their epistemic problems as well. Even the idea of fact-checking requires a level of information disclosure that would probably be unfeasible because it would commodify the information (and destroy any market value).

            In the end, it still needs to be wrapped into a product and packaged in order to sustain itself in an economic system. So, beyond the facts, there basically HAS TO BE commentary, re-runs, analysis, meaning, secrets, and a woven narrative. Otherwise, you don’t have a product.

    • Jiaoning Bu says:

      “The implication is that any analysis that assumes the US government (among others) is in the reality-based side of this mess isn’t realistic; there’s an ongoing informational attack on US democracy, but successful enough that by now the US government itself (and half of its political system) is in the attacking side.”

      It’s public-domain knowledge that the United States government has been involved in and perpetuating fake news and false information for most of the last 60 years. It’s just that in the past, “everyone else” was the victim of getting their democratic elections influenced by our information campaigns (and if we failed, often an assassination or a coup or a war).

      This is what bothers me most about the article. I basically agree with everything the author said, except I don’t see a clear argument that “the past” was much better.

      I have enjoyed my visits to India because at least life there is honest. You go to a five-star hotel and all the trash from that hotel is in the pit next to that hotel. In the West we mostly put the trash in other countries and pretend it doesn’t exist. This is true of both our literal trash (the human suffering and environmental destruction involved in almost every item available for purchase) and our ideological and societal trash — the suffering and destruction caused in order to perpetuate Western Power, Influence, Economic hegemony….

      One way to read this article is, “It was all fine and good as long as it was not us, now that we’re getting shit on, something really needs to be done about it.” I actually agree that something needs to be done, and has needed to be done for 60 years. Perhaps we’re in the process of something getting done — but the worst possible outcome of humanity’s newfound communication technology is that we all want to go ‘back’ to ‘the good old days’ when someone held the keys to moderating ‘what was true’ and ‘what was not true.’ I guess in one form or another, this sentiment, that we need a curated and protected ‘dominant narrative’ is exactly how Strongmen have been gaining power.

  2. It’s a good article, but I’d like to challenge the comments on Brexit (the UK vote to leave the European Union).

    “Information war combatants have certainly pursued regime change: there is reasonable suspicion that they succeeded in a few cases (Brexit) …”

    Doubtless there’s evidence that there were attempts to do this – and it was just as likely on one side as the other. But I don’t think there’s any evidence that such techniques had the slightest influence on voting patterns. Sure, both sides used social media as well as the traditional methods (UK election campaigns are all run on the cheap, with tight spending limits). It was a hard-fought campaign.

    The decisive win for Leave (52–48) came as a surprise to the elite (the main beneficiaries of EU policies), mainly because they didn’t expect traditional working-class voters (middle class in US terminology) to turn out in such large numbers. So ever since, they’ve been looking for excuses (campaign lies, stupid voters, Russian bots, etc.).

    • Found the ideologue trying to shape the narrative about his pet cause.

      • What a disgustingly ignorant response this is. I wish I could explain to you the harm done by bland, unframed accusations such as yours. There can be no doubt in the minds of informed users of the internet that a broad front of special interests are using online propaganda to bend popular narrative to their will. Who exactly wins a given engagement is entirely the business and concern of Internet Users. If you truly believe that Brexit or Anti-Brexit politics explains away the parent post, then you are not worthy of internet access. Fucking “tarzan”. My ass.

        • Greystoke says:

          Tarzan. Keep on being lord of the internet jungle. Personally, I found your commentary quite entertaining and “on point”. If we’re galloping towards a breakdown of the social contract, let’s do it with free speech on our minds and a laugh rumbling in our bellies.

    • I so agree with this comment, you’ve put my own thoughts into words, thanks.

    • The Brexit ‘referendum’ was a fraudulent disinformation sham. For example the infamous ‘£350 million for the NHS’ has now been exposed as a blatant lie.

      The result of the ‘referendum’ has thrown the UK into political paralysis, and I’m sure that was the motivation behind the campaign in the first place.

      I note the commenter above implies that anyone who voted Remain was an ‘elite’ – this is typical of the divisive rhetoric used in the ‘referendum’: vilifying anyone who disagrees with your opinion.

    • David Hun says:

      How many people claim that ‘advertisements do not affect what I purchase’? Do you really believe that advertisements, of the legitimate commercial type or the illegitimate type described in this excellent article, may cost many, many millions but have no effect on the transient opinions of real people? Your naivety, or paid incredulity as a troll, is beyond a believable position.

    • Neal Romanek says:

      I’m not sure I would call a 52–48 vote “decisive”, but to answer your main point: UK citizens were exposed to an enormous amount of pro-Brexit (and anti-immigrant) propaganda from Russian bots and trolls on social media. There are some well-researched scholarly studies on it.

  3. You should seriously read Martin Gurri’s The Revolt of the Public. His major insight is that the information war is fundamentally negative; it can build skepticism of and resistance to anything but can’t build a consensus for action. There are too many dissonant voices; it is increasingly impossible for anyone to build the (nearly) unchallenged narrative needed to govern with legitimacy.

    • Alexander Bard argues that the dominance of networks will erode democracy and replace it with a netocracy in which individuals with the most connections gain the most power, and where information becomes increasingly hidden from the masses. Under this mindset, the flood of fake news would not elicit concern, since the masses will be increasingly deprived of actual power.

  4. Carlos Ramirez says:

    Pretty creepy article. Even if some kind of information suppression strategy were deployed, it wouldn’t matter: both sides can find enough real news and facts to shore up their narratives. Meanwhile, ideological suppression was deployed quite extensively against the actual Nazis, and it did approximately nothing. Winning ideological battles (i.e. hearts and minds) is no easy feat.

    • “Meanwhile, ideological suppression was deployed quite extensively against the actual Nazis, and it did approximately nothing.”
      Although I’ve read a couple of dozen books on the rise of Hitler and Naziism, this doesn’t ring a bell, and I’m not at all sure what it means. Could you, if this comes from a secondary source, point me to it, or if it is your own analysis, summarize?
      Thanks.

      • Carlos Ramirez says:

        Sure: https://www.newyorker.com/news/news-desk/copenhagen-speech-violence

        Excerpt:

        Researching my book, I looked into what actually happened in the Weimar Republic. I found that, contrary to what most people think, Weimar Germany did have hate-speech laws, and they were applied quite frequently. The assertion that Nazi propaganda played a significant role in mobilizing anti-Jewish sentiment is, of course, irrefutable. But to claim that the Holocaust could have been prevented if only anti-Semitic speech and Nazi propaganda had been banned has little basis in reality. Leading Nazis such as Joseph Goebbels, Theodor Fritsch, and Julius Streicher were all prosecuted for anti-Semitic speech. Streicher served two prison sentences. Rather than deterring the Nazis and countering anti-Semitism, the many court cases served as effective public-relations machinery, affording Streicher the kind of attention he would never have found in a climate of a free and open debate. In the years from 1923 to 1933, Der Stürmer [Streicher’s newspaper] was either confiscated or editors taken to court on no fewer than thirty-six occasions. The more charges Streicher faced, the greater became the admiration of his supporters. The courts became an important platform for Streicher’s campaign against the Jews. In the words of a present-day civil-rights campaigner, pre-Hitler Germany had laws very much like the anti-hate laws of today, and they were enforced with some vigor. As history so painfully testifies, this type of legislation proved ineffectual on the one occasion when there was a real argument for it.

        • David Holzgang says:

          Thanks for the link and the excerpt. I did not know that the Weimar Republic had been doing that — obviously (in retrospect) ineffectively.

  5. Henry Farrell and Bruce Schneier say: sure.

    https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3273111

  6. It’s all Randall Munroe’s fault. I swear the internet gradually started getting worse after he created this comic

    https://xkcd.com/386/

  7. I’m very concerned about where this line of thinking can go:

    “An admirable commitment to the principle of free speech in peace time turns into a sucker position against adversarial psy-ops in wartime. We need an understanding of free speech that is hardened against the environment of a continuous warm war on a broken information ecosystem. We need to defend the fundamental value from itself becoming a prop in a malign narrative.”

    If the state is going to intervene, it ought to be done through public and impartial channels. Impartiality is the key here: any permanent bureaucracy charged with countering influencing operations will eventually be used to further the political power of various factions within the country. I think we have enough real world examples to know this.

    The entire public influence bureaucracy ought to be set up like the jury system, to ensure there are no permanent stakeholders with a conflicting interest in expanding the power of their own bureaucracy, and its members should be randomly selected so that they can’t be bribed, influenced, etc. before they’re called for duty. Sealing the records of the identities of those called to service for some period of time could also prevent bribery through the promise of post-service rewards for those who advocate for particular factions.

    And I would argue against utilizing censorship to counter influence operations. The value of placing Freedom of Speech as the highest social ideal cannot be overstated.

    • You reap what you sow. The western state-sponsored information war on Twitter to justify the Arab Spring and the wars against sovereign countries (soft dictatorships that were replaced either by radical fundamentalist governments or by societies bordering on failed states) was the first such example of digital platform use.
      To these societies, your liberal but externally aggressive governments and societies ARE the malign actors.

  8. Great article.
    It’s just funny how biased it is, another piece in the information war, pretending to be objective.
    ” a variety of malign actors simultaneously realized that there was another way.”
    It’s obvious that the only actors the author deems malign are the ones not protecting the international military-financial-political establishment formed at Bretton Woods.
    So, the obvious malign information war led over Twitter during the Arab Spring, or the biased reporting of traditional media during the Balkan wars in the 1990s, was justified because it was protecting these “liberal” and “humanitarian” goals, which ended up with carnage, “collateral damage”, and oil, minerals, and strategic territory in their hands.
    Now, when the other side strikes back using the same tools, you call them “malign”.
    So, Renee, you are just another sophisticated, ideologically biased shitposter, aren’t you.
    Or you are just a hypocrite.

    • ThinkNpray@meboi says:

      LOL

      Truthing shitposter: you missed the gist of the article. As odious as the great Bretton Woods cabal is, that doesn’t negate the analytical value of the article, which you conveniently overlook.

      The various platforms which serve as hosts to the various information war campaigns are not held accountable for their de facto facilitation of said war efforts. While the author might be too generous in their view of state actors benevolently intervening to this end (and benevolence is always in the eye of the beholder), that doesn’t lessen the ethical imperative (i.e. the idea of accountability) they identify.

      I know reading and critically comprehending slightly meaty arguments is a challenge in this day and age, but it’s well worth your time unless you are, as you say, “just another sophisticated, ideologically biased shitposter”.

      Hope this helps!

      • Winter Pratt says:

        “The various platforms which serve as hosts to the various information war campaigns are not held accountable for their de facto facilitation of said war efforts. While the author might be too generous in their view of state actors benevolently intervening to this end (and benevolence is always in the eye of the beholder), that doesn’t lessen the ethical imperative (i.e. the idea of accountability) they identify.”

        To bring this thread back into line and make it useful: I agree that “Milos” is probably a shitposter, but what you just said here is true regardless.

        Our legal systems are basically still struggling with 1990s-level technology issues (and hardly have their heads around those): things like fair use of copyright, who is responsible for linked material, legal liability for a hacked system (in Germany, for example, if your system is hacked and someone uses it for illegal activities, you are basically responsible), whether your Fifth Amendment rights are protected vis-a-vis speaking your password in the USA, etc.

        I just don’t think our legal system is capable of dealing with cutting-edge tech issues — and whatever attempts it makes are going to bite us later. Really sweeping attempts to build monoculture-like protections against ‘fake’ news (which definitely existed before, see Chomsky on Clinton’s war in Haiti, or Chomsky on nearly all of U.S. gov for the last 60 years) are the type of thing people mistrust for very good reason.

        To use a metaphor (which I know is dangerous): no one can build a small house, or create a legal meal-sharing dinner-party app where someone gets compensated, because of zoning laws that were passed in an age when infrastructure was used differently, everyone expected to have a job with a pension, and a home was a good investment because you weren’t going to move for 20-30 years. Laws usually don’t go away, they are bad at adapting to rapid changes, and when they are broad, overarching policies, they almost always end up with massive unexpected downstream consequences. The only question about said downstream consequences is “when,” not “if.”

  9. A relevant article.
    Everyone is aware that the full potential of the power of information is being realized.
    Left unchecked, it can lead to worse situations than those that have already occurred.
    The reluctance to check it stems from the concern that moderation can become a greater abuse than the present one.
    It is a catch-22 situation.
    The world shall soon become one set of people who will believe nothing (fact-checking endlessly to certify even the basics) and another set who will believe anything and become pawns in the game without realizing it in the first place.
    This is what Yuval Harari described as making us act in the way that “they” want us to act, while convincing us that we are acting of our own free will.

    • Winter Pratt says:

      “The reluctance to check it stems from the concern that moderation can become a greater abuse than the present one.”

      You end up building China’s dream infrastructure and then you never know who gets elected downstream.

      Or, as my dad used to say when I lived in the USA during the 80s and 90s escalation of the “war on drugs” (we were a White family at the time; through marriage we have become interracial): “Whatever they’re doing to poor black people now, they will be doing to everyone in 15 years.”

      So, mistrust of the institutions is nothing new, and having damned good reason to mistrust them is nothing new. I really don’t see a good way to do this coming about. Even if you manage to get facebook onboard and create the perfect culture within facebook, well…. the mass exodus from facebook started in 2013, didn’t it? In 2024, facebook is going to be friendster and there will be something else. Do we alter corporate law to make every new technology get onboard with the moderation systems?

      I basically agree that the situation we’re in is chaotic. I am just not confident that our legal system is even possibly capable of dealing with this. EVEN IF you created partnerships between big players and governments, would it be legal and what would the downstream effects of those precedents be?

  10. Quoting Milos:
    “western state-sponsored information war on Twitter to justify the Arab Spring and the war against sovereign countries – soft dictatorships that were replaced either by radical fundamentalist governments or societies bordering failed states”

    “your liberal but externally aggressive governments and societies ARE the malign actors.”

    “the only actors the author deems malign are the ones not protecting the international military-financial-political establishment formed at Breton Woods.”

    “obvious malign information war lead over Twitter during Arab spring, or the biased reporting of traditional media during the Balkan wars in the 1990s”

    Some interesting bits of BS narrative that are new to me. Perhaps if I read/watched RT more, or paid more attention to Sputnik, or hung out in the cafeteria of IRL.

    Tell us a little about yourself. Are you a real person? Nice touch, the almost professional quality headshot of a thoughtful looking face. And when I click on it to “access full profile”, I just get a bigger version of the photo.

    The western powers have much to answer for, such as being the USSR’s partner in all the proxy wars and quasi-proxy wars that kept the developing world from developing, and in the worst cases led to Cambodia becoming almost a graveyard and to the Congo being what it is today. Yes, the U.S. and its government of that time are most explicitly responsible for Cambodia. But we westerners should realize that it was built on some pretty good ideas, and should be asking ourselves where we went wrong and how to get back on the course that was at least declared, and somewhat realized, from the beginning.

    Joining the orgy of shitposting would be emotionally gratifying, but if it is not countered in some way, it seems to be heading towards destruction.

    • We westerners it is. That’s the information war.

      This article and/or its author (Ms Renee) is almost botish. You could help make it less, if you cared.

  11. When the shared reality that’s breaking down is actually capitalist ideology / capitalist realism, total information chaos is hardly worse.

  12. Winning ideological battles (i.e. hearts and minds) is no easy feat.

    I’m going to suggest that if you’re in an ideological battle, there’s a very good chance you’ve already lost the war, or rather the enterprise of trying to bring a world of accelerating chaos, with some form of singularity looming, to a soft landing. The war metaphor is part of the problem; maybe it’s even part of the problem with Renee’s thinking, though I appreciate her work very much.

    “Heighten the contradictions” is an old extremist MO: don’t work with people’s anger and distress to move things in a positive direction unless it will bring about the revolution. Lenin was a great practitioner; Newt Gingrich harnessed it to American politics, and the whole of Movement Conservatism adopted it. One conservative gave one form of it a name: pushing the “Overton Window”. In its most extreme form, one heightens the contradictions with the Mostly Sane Medium by pushing lies relentlessly. Driving this to the point of a separate reality has become essential to convincing everyone in one’s camp that the MSM is constantly lying and driven by conspiracies.

    I believe the writer you quote certainly went too far in her willingness to protect the speech of people carrying placards saying she should be killed, if I understood that right. I do believe those pushing lies to produce effects that are at best very reckless, and that could turn catastrophic, are culpable.

    The case about the Nazis in the article (which I read in full) isn’t very strong. Weimar did very little to restrain speech that included incitements to violence, or to restrain actual violence for that matter. Hitler led an attempt to overthrow the state, in which people were killed, and faced a very short stay with friendly and helpful jailers who gave him everything he needed to write Mein Kampf. BTW, was Mein Kampf ever censored? Not very effectively, if at all, as it was the best-selling book in Germany. The Nazi party was illegal for a few years, but this allowed them to build up and elaborate their shadow government. Once made legal again, they quickly soared to the top.

    One gets the impression Streicher’s work was nearly continuous, and if it was a particular target it was largely because it was so pornographic. He was too much even for the Nazi leadership, but too useful to stop. He was shunned in the cell blocks at Nuremberg.

  13. Jiaoning Bu says:

    Two problems I see:

    1) Information war is not new. In “the” “past,” the debate was campaign finance, and whether PACs should be able to influence the monoculture (which no longer exists) through television (which no longer exists in that form). Arguably, America was ‘radicalized’ against every Arab country except Saudi Arabia because of the degree to which the Saudis controlled spheres of influence. Meanwhile, the Saudis continue their Salafi education system, training the next set of 12 people who will do the equivalent of the next 9/11. I often wonder what would have happened if Bin Laden had spent every last dime he could get his hands on hiring a fancy New York PR firm instead of training people to blow up buildings. I think in 2018, he would seriously have considered this tactic.

    So, in other words, should information and narrative control be limited to rich actors, as it used to be? Did we not have genocides and government manipulations “back then”? Is faith in the institutions which overthrew democratically elected officials and committed genocides in distant lands while protecting all sorts of bad actors (insert Chomsky citation here) somehow a good thing?

    2) In the most recent Taiwan election, information manipulation occurred on basically private channels using apps like “Line.” The bubbles were already-existing social groups re-posting to each other within unmoderated private chatrooms. Just imagine your asshole racist uncle and all his buddies with their own-little-world type of social group and you get the idea of how Line works. I see very little method to moderate these kinds of social groups without creating a real Orwellian nightmare.

  14. I’ve read the article and most of the comments, and found the article interesting and most of the comments serious and well meant, all in the project of trying to understand this new information order that was sprung on us nearly thirty years ago and which has become increasingly complex in that span. All technical breakthroughs impact us, some greatly, but changes in the information order are a special case. I’m thinking back to the invention of the printing press and the disorderly impact it had in the 16th century, resulting, among other things, in the loss of the Catholic Church and Italian influence in Northern Europe.

    Still, we’ve all come to like the old printing press, even though some terrible things have been printed, of course.

    You have to believe that the digital universe we now inhabit is having, and will continue to have for some time, an impact at least as great, and probably greater. In other words, it’s kind of rough going at the beginning, and the end result cannot be easily predicted.

    • Amen to that.

      (and I think -and please believe me, it is me, not a bot, lulz- I have made one too many comments here today)

  15. Thank you for a most provocative post.

    1. It is Americentric, ignoring the fact that the international problems it describes cannot be solved by a national response. If there is to be an institutional response of any kind, it must be multinational, coordinated, consistent. Otherwise, there will be a virtual Ardennes.

    2. The importance of epistemology cannot be overestimated. Where in the public school curriculum is epistemology taught? How can we expect citizens to think critically and act effectively if it is not?

    3. As George Lakoff explains so well, rational thought will not save anyone because, while we might appear to think rationally, we don’t. The last 10 years of brain research have changed our understanding of thinking, and should be common knowledge.

    4. Media Literacy, which includes understanding the complexities of this post, and epistemology, and George Lakoff, can be a grass roots way of helping people understand and mobilize. If we don’t know how social media and governments work, we cannot respond meaningfully.

    • Agree 1,2,4. An international task force to coordinate efforts on the immediate issues and investment in universal media literacy to inoculate everyone and end the war.

      I’ll need to read up on Lakoff!

      • George Lakoff has a good book: Don’t Think of an Elephant. He also creates the FrameLab podcast, has several video lectures on YouTube and many blog posts at https://georgelakoff.com/

        His ideas have profoundly revised my position on critical thinking, which has been largely misunderstood and needs to be overhauled in light of current brain research.

  16. Ross Williams says:

    Mostly this misses the point. The war is between authoritarians and the rest of us. The actual victim is self-government, not democracy. “Social media” is irrelevant. It merely helps carry the narratives created by those with power to a wider audience and disguises their source. Unless, of course, you want to include Fox News, CNN, the New York Times, the Washington Post, the Wall Street Journal, NPR, et al. as “social media”. The reality is that all of our sources of information go through a clickbait filter. If it does not lend itself to quality clickbait, then it must be dressed up or left out. Just look at who actually told us about Podesta’s emails and how they characterized them:

    “Four of the juiciest leaked Podesta emails – USA Today” October 13, 2016

    The real problem is that with this blizzard of clickbait, characterized by its authors as information overload, it’s impossible to glean any real information, and there is no chance for thoughtful public discussion above the din of the media megaphones. You might as well choose what car to buy solely by watching TV commercials while the sales person interrupts your conversation with various pieces of “information” to help “inform” you.

  17. Patrick Perdu says:

    For the record, Blitzkrieg was theorized by the then little-known French General de Gaulle in a 1937 book called “Vers l’armée de métier” that sold 700 copies in France and 7,000 in Germany.

    Thanks for the fascinating and excellent read! The digital entities we create around ourselves to inform and shape our lives are rapidly evolving, shifting and changing how we perceive ourselves, how we think and how we communicate. I believe confronting the associated risks and opportunities can in this sense be counted among the most significant challenges we face as we move further and further into the emerging digital life of the information age. (I am clumsily, albeit no less pretentiously, paraphrasing a passage from Douglas Adams’ speech at Digital Biota 2, held in September 1998: http://www.biota.org/people/douglasadams/)

    I would like to try and add one thought on the situation: as we witness the rise of large-scale digital information warfare operations and slowly begin to conceive how mind-bogglingly fast incremental iterations of them are unfolding on a global scale, I believe that the financial markets have served as the single most forceful catalyst in the birth of this new arms race. Back in the early 1980s, mathematicians, physicists and programmers first started to apply the incredible potential of computers as mathematical modeling devices to the markets, developing algorithms based on the increasing amounts of available data and analysing and interpreting them in ways made possible by the power of information technology. Since then, increasingly fine-tuned and highly sophisticated financial instruments have taken shape in computer programs and have been battling against each other on the networks of the financial sector, constantly evolving to maximize profits.

    You have pointed out the similarities behind some of the struggles to regulate the digital aspects of financial engineering and the current efforts to come up with useful, applicable regulations for tech giants and social media platforms in your excellent essay: “There are bots. Look around.”

    In light of the above, the stunningly exemplary nature of Robert Mercer’s role, as both co-CEO of Renaissance Technologies LLC, running the most profitable fund on Wall Street (Medallion, referred to by Bloomberg News as the “Blackest box on Wall Street”), and as a key figure and driving force behind the Brexit and Trump campaigns’ staggeringly successful online-marketing efforts (according to the Guardian, allegedly via AIQ, SCL and Cambridge Analytica), has in my view remained puzzlingly and worryingly underrepresented in the news cycle up until now.

  19. The article, which reads like a pamphlet, lacks a clear delineation of its ontology. At times the author speaks about “our society” and its central government, dedicated to protecting it; at other points it mentions informational nation-states, which are at war with each other and which might eventually span more than one country (deterritorialization a la Facebook). If we locate ourselves in “our society”, which is aligned with a traditional and territorial nation-state, then our enemies might be Russian hackers and rogue companies; if not, they might be our allies, at least for the last and next combat, whoever this “who” and “our” is among the warring informational nations.

    Venkat has already made attempts at a classification of the levels of conflict with respect to scale, which would allow clarifications, with actors ranging from individuals to whole political-ideological systems such as S. Huntington’s cultural spheres: memetic organisms which might exist for centuries or longer. The latter are full of internal differences and inconsistencies, with traditionalists vs. progressives, right vs. left, and so on as emergent properties. These can be seen as the primal forces of the current conflict, which has just moved from a cold peace to a hot culture war. It is this transition which is certainly correlated with the rise of digital media. The conflicts have also been intensified by attempts at a complete annihilation of the right/the traditionalist wing through liberal treatment of immigration and a push for “diversity” which ripples through the whole of the West, not just the USA.

    What I personally expect is not that all of a sudden the USG or the EU commission becomes a neutral arbiter protecting a common subject which identifies with it, but that the whole dynamic, unstable as it is, will move somewhere and produce a surprise: something which just didn’t exist yet on our screens, which wasn’t in the world before software ate it. The “event” was a dark deity of 20th-century philosophers, one which allowed them to be faithful to negativity, to reject both the state of things and all visions: that which changes my mind and is not among all known opinions.


  21. Clear and stark, though who the real provocateurs are, and why, is an important open question on the way to solutions. In the end, though, as with the climate change fight, we cannot rely on government to lead, other than to help hold content platforms responsible and ensure information transparency for citizens, who, individually and together with business and local government, must collectively organize and take up the fight.
