The Strategy of (Subversive) Conflict

The strong do what they will, the weak do what they must, and the manipulated do what they think they must (which is what the strong or weak will). Manipulation — influencing behavior by altering another’s viewpoint in a manner indifferent to whether or not the alterations are true or desirable — is one of the most important aspects of social conflict and competition.  While you may not be interested in manipulation, manipulation is interested in you (though it may disguise this interest beneath layers of dissimulation).  In this post I provide a selective overview of the theory and practice of manipulation. Why does this matter? Whether in geopolitics or at home, we must either understand and confront manipulation or be victimized by a Machiavellian Mini-Me.

Manipulation and Strategy

To coerce is to forcibly impress upon the target that the objective “facts on the ground” make compliance rational. Do as I say or the gun goes bang bang. To manipulate is to do the opposite: change the target’s perceptions of facts on the ground such that the target believes compliance is rational even when it may not be the case. Given that strategic behavior concerns the fine art of using coercion to accomplish a goal in the face of some form of opposition, manipulation is difficult to reconcile with how we ordinarily think about conflict and competition.

All strategy involves asymmetry. By generating massive asymmetries in military-industrial production and logistics over the Axis, the World War II Allies were able to create and exploit a powerful advantage that the Axis could not match. By analogy, when a con man cons you out of your hard-earned cash by abusing a position of trust, the ultimate asymmetry is in effect. He is scamming you, you do not know he is scamming you, and you are unwittingly participating in the scam operation! This is what makes the idea of manipulation as an alternative model of strategic behavior superficially attractive.

Suppose Alice would like to deceive Bob such that she can achieve strategic surprise in some operation she is planning. Elements of the scenario include:

  • At least two alternative goals and expectations that Alice could potentially select.
  • At least two alternative goals and expectations that Bob could potentially select.
  • A technique (stratagem) by which Alice’s goals and expectations are designed and Bob’s goals and expectations manipulated to ensure that Alice achieves strategic surprise over Bob.

In order for Alice’s deception attempt to succeed, Alice must “hide the real” (conceal what she is doing) and “show the fake” (convince Bob that she is doing something that she really is not). In military and intelligence studies, the field of “denial and deception” (fittingly abbreviated D&D) deals with these dark arts in exhaustive detail. However, there is nothing specific to the military about D&D. Some of the foundational knowledge about it comes from sources as eclectic as card cheats, magicians, and animal and plant behavior observed in the wild.
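The logic of “hide the real” and “show the fake” can be captured in a toy Bayesian sketch. Everything here is an illustrative assumption rather than anything drawn from D&D doctrine: Bob holds a prior over which of two operations Alice is running, and Alice plants signals engineered to point at the decoy.

```python
# A toy model of "hide the real, show the fake" as Bayesian belief
# manipulation. All priors and likelihoods are illustrative assumptions.

def update(belief_real, p_signal_if_real, p_signal_if_fake):
    """Bob's posterior belief that Alice's real operation is underway,
    after observing one of the signals Alice has planted."""
    p_signal = (p_signal_if_real * belief_real
                + p_signal_if_fake * (1 - belief_real))
    return p_signal_if_real * belief_real / p_signal

belief = 0.5  # Bob starts out undecided between Alice's two options.

# "Hide the real": indicators of the true plan are suppressed, so the
# planted signal rarely accompanies it (0.1). "Show the fake": the
# signal is engineered to accompany the decoy (0.9).
for day in range(1, 6):
    belief = update(belief, 0.1, 0.9)
    print(f"day {day}: Bob's belief in the real operation = {belief:.3f}")
```

After a handful of planted signals Bob is close to certain of the wrong thing, which is the entire point of the exercise.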

But what if Alice is engaged in a far more ambitious maneuver than simply deluding Bob as to which choice she will make in a critical situation? Suppose Alice is looking to subvert a situation that marginally favors Bob. What can she do? It is here that we see that the line between creating asymmetries by altering perceptions and creating asymmetries by altering realities is rather blurry. Alice might use a combination of clever rhetoric and subtle agenda-shaping so that she is more likely to win in the long term than Bob is. But Alice can and likely will do far more unsavory things to win. Suppose Alice engages in dirty tricks, covert operations, etc.

Alice manufactures lies and fabrications that lessen Bob’s hold on power, spies on Bob and other relevant actors, and infiltrates Bob’s organization to make it less effective and cohesive. Or suppose Alice tries her hand at heightening the contradictions. Alice sets key components of Bob’s power structure against each other, spreads discord, encourages Bob’s opponents, or otherwise undermines Bob’s hold on power by accelerating existing flaws and dysfunctions that threaten Bob. Alice may also utilize divide and conquer strategies that exploit media and social media to accelerate inter-group and intra-group conflict (and thus further strengthen herself at the expense of Bob).

After this expansion of our original view of Alice, perhaps we can say that Alice is engaging in a kind of subset of strategic behavior: subversion and Machiavellian trickery. But there is still a problem with this redefinition. Alice is inducing Bob to act in a way that suits her interests and disadvantages Bob’s. Alice wants Bob to overextend himself to match a key advantage she holds, unnecessarily isolate himself and maximize his own internal disorganization, and generally act in a way that harms himself and benefits her. But how? Suppose Bob fervently opposes Alice, distrusts her, and is already on the lookout for her manipulations. Why on Earth would Bob do anything that benefits such a bitter enemy?

To make matters worse, Alice has an additional but far larger problem. A large number of other people must – seemingly spontaneously – act in a way that favors her interests (even if it harms theirs), all while believing that they are doing so spontaneously and independently. In short, Alice needs Bob – and an enormous number of related or unrelated third parties – to engage in behavior that weakens or subverts Bob and benefits Alice, without Alice’s hand being obviously seen. This behavior must be done voluntarily, with the belief that it is being performed of one’s own free will rather than at the desire of some external puppetmaster. Alice must arrange a collection of these behaviors such that their interaction and runaway escalation produces a breakdown of the Bob-controlled establishment and dominant order. But how?

Broadly speaking, Alice must first be sphinxlike in character to pull any of this off. She must foster ambiguity about what her ultimate goals are in order to have an advantage over those around her. This style of behavior is called “robust action” in sociology. For Alice to be an ideal manipulator, her actions must be interpretable from multiple perspectives at once, potentially function as moves in multiple games at once, and conceal her public and private motivations. This maintains her flexibility and discretion and thwarts attempts by rivals to narrow her space of choices. Forced clarification of her commitments and lock-in to hard goals would only give Bob the ability to constrain her.

Meanwhile, Alice may quietly shape the perceptions of others to her ends, alter the structure of power through her actions, or set up situations in which others act in a way that favors her goals. Alice might be a scheming palace insider behind the scenes or a polarizing public figure who foments enough division to succeed. But Alice need not be anyone particularly important or even a single person. Perhaps “Alice” is a composite of a group of people acting with some degree of coordination and some degree of independence! Regardless of who or what Alice is, others must be socially legible to Alice while she remains illegible to them.

Others must be constrained, predictable, explainable, and fixed enough for Alice’s plans to reach fruition, but she must continuously defy final clarification and understanding. She must utilize a variety of political strategies – such as concealment, clientage, and dissimulation – to maintain her own autonomy and freedom of action. But in order for Alice to really get what she wants she has to – pardon the profanity – fuck with Bob’s mind. If Bob’s mind is fucked with, he will be both less coherent and competent and also more likely to be fooled by Alice into doing her bidding. When we think about manipulation this way, it is hard to think of a strategic activity that doesn’t require some form of it.

War, chess, business, and similar competitive activities are all forms of “adversarial problem solving” (also known as “adversarial reasoning”) – the art of anticipating, understanding, and counteracting the actions of an opponent. But this conception of adversarial reasoning admits some ideas of strategy and excludes others. The difference between strategy as perceived by economists and strategy as perceived by cognitive scientists is essentially the difference between cognitive optimality and cognitive maximality. The cognitive optimizer has to learn the optimal probabilities for performing each move and select her own moves at random according to these probabilities. Maximal players, however, do not use a fixed way of responding. They use some form of learning or thinking to improve the choice of future moves over time, adjusting their responses to exploit perceived weaknesses in the opponent.

Most real-world entities are closer to  maximal players than optimal ones except in certain highly constrained niche environments. It helps to  “construct a model of an opponent that includes the opponent’s model of the agent” so that you can better predict what the opponent will do based on your belief about what he believes you will do. You are more likely to succeed in such a situation if you manage the perceptions of an opponent in a way that makes his internal model of you and the world around him systematically unreliable. At a minimum, you want to manage the opponent’s perceptions in a way such that he cannot impede you from attaining your own aims.
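The optimal-versus-maximal contrast is easy to see in repeated matching pennies. Below is a minimal sketch; the opponent’s 70 percent bias and the 20-move memory window are purely illustrative assumptions.

```python
import random

# Repeated matching pennies: the "matcher" wins a round when both
# choices agree. A cognitive optimizer plays the equilibrium mixture
# (50/50 at random) and cannot be exploited; a maximal player adapts
# to the opponent's observed behavior. The 0.7 bias and the 20-move
# memory window are illustrative assumptions.

def optimal(opp_history):
    return random.choice("HT")                     # fixed equilibrium mix

def biased(opp_history):
    return "H" if random.random() < 0.7 else "T"   # exploitable habit

def maximal(opp_history):
    recent = opp_history[-20:]                     # short memory of the foe
    if not recent:
        return random.choice("HT")
    return max(set(recent), key=recent.count)      # match their modal move

def play(matcher, opponent, rounds=10_000):
    wins, matcher_moves, opp_moves = 0, [], []
    for _ in range(rounds):
        a, b = matcher(opp_moves), opponent(matcher_moves)
        wins += (a == b)
        matcher_moves.append(a)
        opp_moves.append(b)
    return wins / rounds

print("maximal vs. biased: ", play(maximal, biased))    # ~0.70
print("maximal vs. optimal:", play(maximal, optimal))   # ~0.50
```

Against the equilibrium mixer the adaptive player wins only half the time, but against any exploitable habit it converges on the bias and feeds on it.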

The Cold War eventually turned all strategy into a subset of manipulation, as seen in ideas like the “madman” theory of deterrence:

In his post-Watergate memoir The Ends of Power, former White House chief of staff H.R. Haldeman wrote that [Richard Nixon’s] use of the strategy was hardly unconscious. “I call it the Madman Theory,” Haldeman recalled the president telling him. “I want the North Vietnamese to believe I’ve reached the point where I might do anything to stop the war. We’ll just slip the word to them that, ‘for God’s sake, you know Nixon is obsessed about communism. We can’t restrain him when he’s angry — and he has his hand on the nuclear button,’ and Ho Chi Minh himself will be in Paris in two days begging for peace.”

Whether or not Nixon was actually insane was secondary to whether or not the Communists believed he was insane. The entire basis of the Cold War lay in what Dr. Strangelove famously depicted as a ghastly absurdity: the idea that someone would use a weapon that would likely obviate any meaningful strategic objective that could be attained through force of arms. Yet understanding manipulation requires us to comprehend a conceptual continuum of manipulative activities, ranging from the scheming of the stereotypical criminal mastermind to brute-force mindfucking.

From Moriarty to Mindfucking

The origin of strategic thinking as a whole in the West can be conceptually traced to strategos, the art of the general. This conjures up images of a central commander moving pieces on a chessboard to attack the enemy. The challenge of strategos is combinatoric: one could spend an eternity pondering the best possible sequence of moves to defeat the enemy without the ability to prune possible operations that need not be considered. The problem with this example is that chess is a deterministic game of complete information. The enemy’s pieces can be seen and the direct effects of each action are known. Suppose we alter chess by adding a condition of incomplete information or “fog of war.” We can see our own pieces but not those of the opponent.

In the case of regular chess, we can assume that the opponent’s preference is to choose the action most likely to undermine us. But in this new form of chess, this is not enough. We need to make some assumptions about what our adversary believes about the state of the game. And thus we enter into the realm of beliefs about beliefs. In the Sherlock Holmes story “The Final Problem,” Holmes faces off against the criminal mastermind and “Napoleon of Crime” Professor James Moriarty. In the course of the story, Holmes seeks to elude the pursuing Moriarty.

In response to Moriarty’s threats, Holmes asks Watson to come to the continent with him, giving him unusual instructions designed to hide his tracks on the way to Victoria Station, where the two are to meet and head to Dover in order to flee to the continent. The next day Watson follows Holmes’s instructions to the letter and finds himself waiting in the reserved first-class coach for his friend, but only an elderly Italian priest is there. The cleric soon makes it apparent that he is, in fact, Holmes in disguise.

As the train pulls out of Victoria, Holmes spots Moriarty on the platform, apparently trying to get someone to stop the train. Since Moriarty has obviously tracked Watson despite extraordinary precautions, Holmes is forced to take action. He and Watson strategically alight at Canterbury (before reaching Dover), changing their planned route. As they wait for another train to Newhaven, a special one-coach train roars through Canterbury, as Holmes suspected it would. It contains Moriarty, who has hired the train in an effort to overtake Holmes and catch him before he and Watson reach Dover. Holmes and Watson are forced to hide behind luggage, but they manage to make their escape to the continent!

Note that the entirety of this duel takes place within the minds of Holmes and Moriarty. Physical violence is only a possibility should Holmes be caught by the Professor. Those familiar with game theory will instinctively start to draw payoff matrices and model Holmes and Moriarty mathematically:

…Holmes is faced with the decision of either going straight to Dover or disembarking at Canterbury, which is the only intermediate station. Moriarty, whose intelligence allows him to recognise these possibilities, has the same set of options. Therefore the strategy sets for both players contain only Dover and Canterbury. ….Holmes believes that if they should find themselves on the same platform, it is likely that he’ll be killed by Moriarty. If Holmes reaches Dover unharmed, he can then make good his escape. Even if Moriarty guesses correctly, Holmes prefers Dover, as then, if Moriarty does fail, Holmes can better escape to the continent.
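One can go ahead and solve the game. The payoffs below are illustrative assumptions in the spirit of the classic game-theoretic treatments of this scene, not canonical numbers; the sketch solves the resulting 2x2 zero-sum game by the usual indifference conditions.

```python
from fractions import Fraction as F

# Moriarty's payoffs (zero-sum: Holmes receives the negation). The
# numbers are illustrative assumptions, not canonical values.
#                   Holmes: Dover    Holmes: Canterbury
M = [[F(100), F(0)],       # Moriarty: Dover
     [F(-50), F(100)]]     # Moriarty: Canterbury

def solve_2x2(game):
    """Mixed equilibrium of a 2x2 zero-sum game via indifference.
    Assumes an interior mixed equilibrium (no saddle point)."""
    (a, b), (c, d) = game
    denom = a - b - c + d
    q = (d - c) / denom          # row player's probability of row 0
    p = (d - b) / denom          # column player's probability of column 0
    value = a * p + b * (1 - p)  # expected payoff to the row player
    return q, p, value

q, p, value = solve_2x2(M)
print(f"Moriarty heads straight to Dover with probability {q}")   # 3/5
print(f"Holmes heads straight to Dover with probability {p}")     # 2/5
print(f"Expected value of the chase to Moriarty: {value}")        # 40
```

Under these payoffs Holmes should alight at Canterbury three times out of five, which is exactly what Conan Doyle has him do.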

This case is a useful illustration of stratagem, a companion concept to strategos. Stratagem is the practice of achieving the aim via the cunning plan, specifically a plan that an opponent cannot thwart outright. This is the essence of the stratagem view of conflict: an abstract game played against a logical but cunning opponent. Perhaps one of the more interesting recent ideas of how this works is the Russian theory of reflexive control, defined as “a process by which one enemy transmits the reasons or bases for making decisions to another.” To engage in reflexive control is to, quite simply, convey information to an opponent that inclines her to voluntarily make a decision you have predetermined:

According to the concept of reflexive control, during a serious conflict, the two opposing actors (countries) analyze their own and perceived enemy ideas and then attempt to influence one another by means of reflexive control. A reflex refers to the creation of certain model behavior in the system it seeks to control (the objective system). It takes into account the fact that the objective system has a model of the situation and assumes that it will also attempt to influence the controlling organ or system. ….In a war in which reflexive control is being employed, the side with the highest degree of reflex (the side best able to imitate the other side’s thoughts or predict its behavior) will have the best chances of winning. …

Although no formal or official reflexive control terminology existed in the past, opposing sides actually employed it intuitively as they attempted to identify and interfere with each other’s thoughts and plans and alter impressions of one another, thereby prompting an erroneous decision …..If two sides in a serious conflict—A and B—have opposing goals, one will seek to destroy the other’s goals. Accordingly, if side A acts independently of the behavior of side B, then his degree of reflex relative to side B is equal to zero (0). On the other hand, if side A makes assumptions about side B’s behavior (that is, he models side B) based on the thesis that side B is not taking side A’s behavior into account, then side A’s degree of reflex is one (1). If side B also has a first degree reflex, and side A takes this fact into account, then side A’s reflex is two (2), and so on.
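The “degree of reflex” ladder is essentially what behavioral game theorists call level-k reasoning: a level-0 player ignores the opponent entirely, and a level-k player best-responds to a level-(k-1) model of the opponent. Here is a minimal sketch using matching pennies as the stand-in game; the game and the level-0 habit are illustrative assumptions, not anything from the reflexive control literature.

```python
# "Degree of reflex" as level-k reasoning, sketched in matching pennies.
# Side A (the matcher) wins on a match; side B wins on a mismatch. A
# level-0 player ignores the opponent (degree of reflex: zero); a
# level-k player best-responds to a level-(k-1) model of the opponent.
# The level-0 habit of always playing "H" is an illustrative assumption.

BEST_MATCH = {"H": "H", "T": "T"}      # matcher's best response
BEST_MISMATCH = {"H": "T", "T": "H"}   # mismatcher's best response

def level_k_move(k, matcher=True, level0_move="H"):
    """Move of a level-k player, alternating perspectives down the ladder."""
    if k == 0:
        return level0_move
    # Model the opponent one level down, then best-respond to that model.
    opponent = level_k_move(k - 1, matcher=not matcher)
    return (BEST_MATCH if matcher else BEST_MISMATCH)[opponent]

for k in range(4):
    print(f"level-{k} matcher plays:", level_k_move(k, matcher=True))
```

Each additional level models the opponent’s model one layer deeper, which is exactly the regress the reflexive control literature describes.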

The key to reflexive control is manipulating the filter that the opponent uses to process information; it thus has some relation to the technical idea of compromising information systems via the exploitation of their enabling subsystems. Reflexive control is a useful exemplar of the broader principles of manipulation in social conflicts. However, the degree of psychological harm inflicted as a result of the filter hacking varies. “Mindfucking” is an example of reflexive control on the extreme end of the spectrum of harm. Philosopher Colin McGinn has written a book on the subject, defining mindfucking as a kind of deliberate unbalancing of the target’s psychological equilibrium.

Putting these various expressions together, then, we may speak of fucking with somebody’s head by playing mind games on them, pushing their buttons and, as a result, mindfucking the individual in question. To put it in less slangy terms, one may interfere with a person’s psychological equilibrium by playing on their emotional sensitivities, and leaving that person in a state of mental violation. A mindfuck can plant seeds in the mind that cause it to conceive a new life, and that life may go forth into the world and multiply. ….[t]he mindfuck involves planting seeds in someone else’s mind that then take on a life of their own and may spread through the population.

Suppose you are a senior military officer in a wealthy republic. You have been decorated for courage under fire and return victorious to a beautiful and loving wife, the respect of the people, and the trust of your superiors. But there is a small part of you that is weak, afraid, paranoid, and above all else vulnerable. A scheming subordinate exploits your weakness, planting lies in your head and playing on your emotions. With every counterproductive action you take based on this initial lie, you become that much more vulnerable to his manipulations. Things get worse and worse for you, until you hit rock bottom. You destroy the one you love the most and finally destroy yourself. You are noble Othello the Moor from Shakespeare’s Othello. And you have just been mindfucked.

Othello illustrates several salient features of mindfucking:

  1. The target (Othello) is subtly influenced by the mindfucker to act in a way that he believes to be of his free will, and the potency of the initial manipulation is reinforced by the negative consequences of the target’s actions.
  2. The target is slowly yet eventually totally disconnected from external reality by the mindfucker’s lies and the effects of thoughts and actions that he believes to be of his own volition.
  3. The target unwittingly cooperates with the mindfucker to shape the external environment such that the target is eventually hopelessly disadvantaged.
  4. The target cannot properly attribute intentions to the behavior of the mindfucker because the mindfucker can always semi-plausibly provide a benign explanation for it.

So if one were to prioritize, mindfucking is really the most significant and dangerous form of manipulation. Mindfucking both simplifies the act of manipulation and inflicts the most psychological violence on the target. One can develop mental heuristics for detecting subterfuge reasonably well, build bureaucratic procedures for manipulation detection, or develop automated tools for detection and response. One may also develop generalized security or regulatory procedures that make it more difficult for manipulation to be successfully carried out. All of these steps may be enough to make the manipulation effort costly relative to the likely gains and the manipulator’s expectations of success or failure. But all of this preparation can be defeated if someone successfully mindfucks you. Additionally, some of the most effective recent examples of manipulation involve attacks on trust, coherence, and the target’s perceptions of reality. In other words: mindfucking.
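The cost-imposition logic here can be made concrete with a back-of-envelope expected-value check. Every number below is an illustrative assumption; the point is only the shape of the calculation.

```python
# Back-of-envelope deterrence-by-cost: a calculating manipulator
# attempts an operation only if its expected value is positive.
# Defensive measures work by lowering p_success and raising both the
# attempt cost and the odds of attribution. All numbers are
# illustrative assumptions.

def worth_attempting(p_success, gain, attempt_cost, p_caught, penalty):
    expected_value = p_success * gain - attempt_cost - p_caught * penalty
    return expected_value > 0

# Without defenses: cheap, likely to work, rarely attributed.
print(worth_attempting(0.6, 100, 5, 0.05, 50))    # True

# With detection procedures and tools: costlier, likelier to be caught.
print(worth_attempting(0.2, 100, 25, 0.4, 50))    # False
```

Defenses work not by making manipulation impossible but by pushing the expected value of attempting it below zero; mindfucking is dangerous precisely because it attacks the minds doing this arithmetic.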

Consider the murky claims of self-confessed election hacker Andrés Sepúlveda:

Rendón, says Sepúlveda, saw that hackers could be completely integrated into a modern political operation, running attack ads, researching the opposition, and finding ways to suppress a foe’s turnout. As for Sepúlveda, his insight was to understand that voters trusted what they thought were spontaneous expressions of real people on social media more than they did experts on television and in newspapers. He knew that accounts could be faked and social media trends fabricated, all relatively cheaply. He wrote a software program, now called Social Media Predator, to manage and direct a virtual army of fake Twitter accounts. The software let him quickly change names, profile pictures, and biographies to fit any need. Eventually, he discovered, he could manipulate the public debate as easily as moving pieces on a chessboard—or, as he puts it, “When I realized that people believe what the Internet says more than reality, I discovered that I had the power to make people believe almost anything….Sepúlveda’s team installed malware in routers in the headquarters of the PRD candidate, which let him tap the phones and computers of anyone using the network, including the candidate. He took similar steps against PAN’s Vázquez Mota. When the candidates’ teams prepared policy speeches, Sepúlveda had the details as soon as a speechwriter’s fingers hit the keyboard. Sepúlveda saw the opponents’ upcoming meetings and campaign schedules before their own teams did.

Money was no problem. At one point, Sepúlveda spent $50,000 on high-end Russian software that made quick work of tapping Apple, BlackBerry, and Android phones. He also splurged on the very best fake Twitter profiles; they’d been maintained for at least a year, giving them a patina of believability. Sepúlveda managed thousands of such fake profiles and used the accounts to shape discussion around topics such as Peña Nieto’s plan to end drug violence, priming the social media pump with views that real users would mimic. For less nuanced work, he had a larger army of 30,000 Twitter bots, automatic posters that could create trends. One conversation he started stoked fear that the more López Obrador rose in the polls, the lower the peso would sink. Sepúlveda knew the currency issue was a major vulnerability; he’d read it in the candidate’s own internal staff memos. Just about anything the digital dark arts could offer to Peña Nieto’s campaign or important local allies, Sepúlveda and his team provided. On election night, he had computers call tens of thousands of voters with prerecorded phone messages at 3 a.m. in the critical swing state of Jalisco. The calls appeared to come from the campaign of popular left-wing gubernatorial candidate Enrique Alfaro Ramírez. That angered voters—that was the point—and Alfaro lost by a slim margin.

As illustrated in this anecdote, the contemporary manipulator attempts to manage or manufacture large-scale social processes. These processes play out in highly connected information-age societies that are manipulatable through various forms of “political technology.” The manipulator thus indirectly seeds and shapes information and features of the environment such that large groups of people simultaneously yet semi-independently act, react, and interact until a desired macrobehavior emerges from low-level microbehavior.
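This microbehavior-to-macrobehavior mechanism can be sketched with a threshold-cascade model in the spirit of Granovetter. The population size, threshold distribution, and seed sizes below are all illustrative assumptions.

```python
import random

# A threshold cascade: each user joins a pile-on once the visible
# fraction of participants crosses a personal threshold. The
# manipulator seeds fake participants (bots, sockpuppets) to push real
# users over their thresholds. All numbers are illustrative assumptions.

random.seed(1)
N = 1_000
thresholds = [random.uniform(0.02, 1.0) for _ in range(N)]

def cascade(seeded):
    """Fraction of real users who eventually join, given a seeded
    fraction of fake participants inflating the visible activity."""
    joined = set()
    while True:
        visible = seeded + len(joined) / N
        new = {i for i in range(N)
               if i not in joined and thresholds[i] <= visible}
        if not new:
            return len(joined) / N
        joined |= new

for seeded in (0.0, 0.01, 0.05):
    print(f"{seeded:.0%} seeded fakes -> {cascade(seeded):.0%} of users join")
```

A seeded nudge too small to clear anyone’s threshold accomplishes nothing, while one only slightly larger tips nearly the whole population. So are we fated to be manipulated? Can we avoid being fucked with?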

Just Because They Are Out To Get You Doesn’t Mean You Aren’t Paranoid

Where do we go from here? Group manipulation is a collective action problem. The manipulator only succeeds when she convinces enough members of the group to act as she desires. If the members of the group are on the lookout for her activities and work together to thwart them, the attack can be defeated. This is why everyone from the CIA to labor activist groups develops counterintelligence and operational security procedures and a means of getting the entire organization to participate in them. Without this precaution, they would be helpless in the face of spies and agents provocateurs. But there is a catch: just because they are out to get you doesn’t mean you aren’t paranoid. CIA counterintelligence chief James Jesus Angleton, in his zeal to root out double agents and traitors, inflicted massive damage on his organization and nonetheless failed to catch the spies he was looking for.

Angleton’s woes suggest that the first step to countering manipulation and mindfucking is to not fuck your own mind in the process (which may or may not be an objective of the adversary). Paranoia can also be corrosive when it spreads through a group, which is why fomenting fear, confusion, and insecurity within a group degrades its overall effectiveness. In one particularly bad case, members of a group killed each other out of paranoia while the real mole patiently watched among them. The cure can be worse than the disease. A bigger problem is the way in which collective action problems in cutting-edge manipulation seemingly tilt in favor of the manipulator.

The very structure of our media culture, for example, facilitates manipulation via the stirring-up of online flamewars. This gives the manipulator a potent mechanism for achieving her aims, because all it takes for someone to unwittingly participate in the operation is a like, tweet, or share. What can we do? Perhaps the best place to start is to think about how to stop a very particular type of tactic used to further undermine and divide:

Express an opinion that engenders either fanatical support from those who identify [with it] or rabid opposition from those who do not. Wait for media – social or traditional – to amplify the vitriol and divisiveness. One group yells; the other group yells back. Everyone slings insults. It’s a feedback loop. At the end of each episode, people forget the intellectual basis for their arguments – only group identity remains as a salient factor.

The power of this approach is that the manipulator exploits pre-existing social scripts and simply combines them such that they yield macrobehavior consistent with the manipulator’s plan. Any solutions will likely be ones that dilute, complicate, starve, or otherwise thwart the attempt to spark scripted responses. In the longer term, solutions also need to interrupt or muddle the way in which group identity produces scripts. This will not be easy.

Declining trust in media and other official sources of information also suggests that the manipulator can take advantage of an adaptive tactic that social media users employ to derail what they increasingly feel is manipulation from the establishment: paying more attention to information that people with strong ties tell them is important. Censorship, filtering, nudging, or a naïve belief that experts can curate quality information will not save us. At the end of the day it is an individual choice to be manipulated or not, and we cannot make that choice for the public; attempting to do so often simply reduces our ability to influence their views and behaviors. Finding a way to counter manipulation without Orwellian measures is a critical problem for policymakers, journalists, scientists, and other interested parties.

Should we engage in manipulation ourselves? Perhaps the best defense is a strong offense. Or not.  Just because an opponent does it well does not mean that you should try to mimic them. The entire political system of the Soviet Union was a giant lie, so it is unsurprising that USSR intelligence services would be better than American ones at deceiving and manipulating. It is also questionable whether either side really understood the long-term strategic tradeoffs and risks inherent in covert operations and the ethical complications they create.  Certain forms of manipulation, such as tricking an opponent into acting in a way contrary to her interests, may seem to be more justifiable than others that draw in third parties or inflict long-term damage on social institutions. But this assumption can sometimes be misleading.

If the USSR’s political system itself was a manipulation, American capitalism has normalized perception management of varying sorts as an organizing principle since the days of famous showman P.T. Barnum. American public relations pioneer Edward Bernays was famously a propagandist for the idea of propaganda, reasoning that without propaganda no one would be able to cope with the sheer number of problems and choices that modern life throws at us. That is still not necessarily equivalent to a KGB black operation. But given the significant philosophical ambiguities as to what constitutes manipulation, the line can easily blur. And it is interesting that Bernays – an unrepentant veteran of World War I propaganda operations – drew a link between propaganda in war and the “propaganda” of selling people items in department stores.

Lastly, any ethical assessment must also take into account the fragility of our minds in the face of deception and misrepresentation. What we perceive and experience is very much a crude simulation of reality enabled by various forms of internal filtering, simplification, and “good enough” heuristics and inference. Our preferences and identities are also not necessarily revealed by our actions. Rather, our actions often construct and reinforce existing identities and preferences. The coherence we attribute to our selves and the decisions that flow from them is itself an illusion that effortlessly papers over troubling inconsistencies, contradictions, and ambiguities.

Moreover, our experience of the world is also mediated by a number of external devices such as the things we use to display, record, and communicate information, the social representations that influence our collective thought and memory, and the various social fictions that our thoughts and actions embed in our external environment. We have probably always required some kind of helpful external signal in order to process complex abstractions, and this need is greater than ever in a world characterized by a chaotic flood of internal and external stimuli.

In short, we already fool ourselves far more than we believe. Let us carefully consider whether we should fool others.


About Adam Elkus

Adam Elkus is a PhD student in Computational Social Science at George Mason University and a New America Foundation Cybersecurity Initiative fellow. All opinions are his own.

Comments

  1. Something that reverberates from the slew of OB experiences that many of us have endured: every conversation is a negotiation, and in the context Elkus lays out here, every negotiation is a manipulation. To change the target’s perception of facts is the principal tenet of trade. “My widget is worth more than the price point I’ve set.”

    If you would have asked me a year ago where this ethical divide lies, I would probably have used the author’s split between coercion and manipulation. But the faults of the social echo chamber have slewed my needle on that one. I now suppose that the ethics of manipulation lie in the outcomes. It’s the responsibility of the ethical entity to evaluate the results, tools be damned.

    Great piece Dr. Elkus

  2. The last point about paranoia makes me wonder whether systems that seek efficiency over discerning commitment might be more robust; is it possible to design a system such that a mole must remain sufficiently helpful to maintain their cover that they outweigh their own negative contribution? It probably depends on the extent to which your strategy relies on concealed vs. distributed variables.

    It also occurs to me that “first past the post” elections are easier to hack; if you can get yourself set up as one of the front runners, your main priority can be making the other side lose; the structure of the voting system pushes things towards a zero-sum game. Those you defeat have put enough investment into the game itself that they will respect and reinforce your win, regardless of whether it breaches the unwritten rules by which they play and thus the reasons that they respect it.

    • Thinking again, I’ve realised an obvious problem with people seeking efficiency as a way to avoid having to work out loyalty; essentially, you are avoiding subversion working on the level of strategy by fixing strategy as a constant. A completely trust-free system operates as a formal system, either with predictable behaviour or as some kind of agent-based AI. This means that, even leaving aside the problems of hacking the structure itself (spamming Bitcoin with tiny transactions to slow down blockchain processing, etc.), you can also try to create a divergence between the objectives built into the formal system and those of the people implementing it: if the assumptions under which the system was designed no longer hold, then it can work perfectly well on its own terms but slowly produce less and less helpful results for those operating it. At some point the system will need to be redesigned, despite the internal pressures against it (if the system has been designed specifically as a low-trust system to avoid interventions from the human level), at which point your subversion operation designed for that level can spring back into action.

  3. A Disgustingly Obese Man says

    >Any solutions will likely be ones that dilute, complicate, starve, or otherwise thwart the attempt to spark
    >scripted responses. In the longer term, solutions also need to interrupt or muddle the way in which group
    >identity produces scripts. This will not be easy.

    And this is why 4chan builds its norms around anonymity. It is difficult to build a virtue-signalling feedback loop when you are punished and mocked for having a stable identity.

    Note that the most pro-Trump board on 4chan, /pol/, does not subscribe to this (posts are marked both by country of origin and a stable, randomized alphanumeric ID).

    • Fascinating, I was not aware of that. It makes a huge amount of sense to enact nationalist roles if those are the primary representations of your identity: that, and a marker for retaliation.
