Cooperative Ignorance

Sarah Perry is a contributing editor of Ribbonfarm.

“Ironically, once ignorance is defined, it loses its very definition.”
—Linsey McGoey, “The Logic of Strategic Ignorance”

In game theory, rational parties try to maximize their expected payoff, assuming that the other parties are rational, too. Rationality can be a handicap, though: a rational party is limited in the threats it can make compared to an irrational party, because a rational party can’t credibly threaten to harm its own interests. An irrational party may be harder to cooperate with and less likely to be chosen as a cooperation partner, but in certain situations, it has more powerful strategies open to it than a party limited by rational maximization of expected value. An irrational party is not to be messed with, and can often demand concessions that would not be given to a rational party. Evolutionary psychologists, for instance, posit that altruistic punishment is an adaptation that fits in this slot – giving people sufficient irrational motivation to harm their own interests for the sake of promoting fairness norms. Rationality is good, but a little strategic irrationality is better – especially in the service of promoting cooperation.

Similarly, information and options are valuable, but in certain situations, not having options and not getting information are valuable, too. There is power both in limiting the responses available to you and in limiting your knowledge. In the game of chicken, in which two cars are speeding toward each other, each with the option to swerve and be disgraced or continue forward and risk a crash, the classic strategy in the literature is to toss one’s steering wheel out the window – signaling to one’s opponent that one has given up the option of swerving. (One might alternatively blacken one’s own windshield, the information-avoidance equivalent of tossing the steering wheel out the window.) A contract is the cooperative version of the steering wheel out the window: it limits one’s future strategies and serves as a costly signal that one will pursue a particular course of action, so that the other party will be motivated to act in accordance with one’s self-limited strategy.
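
Here is a minimal payoff sketch of chicken, with illustrative numbers of my own (any payoffs with the same ordering would do): once one player has visibly destroyed the option of swerving, the other player’s best reply is to swerve.

```python
# A toy payoff matrix for chicken. The numbers are illustrative (hypothetical),
# given as (row player's payoff, column player's payoff).
PAYOFFS = {
    ("swerve",   "swerve"):   (0,   0),    # both save face, more or less
    ("swerve",   "straight"): (-1,  1),    # row player is the "chicken"
    ("straight", "swerve"):   (1,  -1),    # column player is the "chicken"
    ("straight", "straight"): (-10, -10),  # crash
}

def best_reply(opponent_move):
    """Row player's best response, given what the opponent is committed to."""
    return max(("swerve", "straight"),
               key=lambda mine: PAYOFFS[(mine, opponent_move)][0])

# If the column player visibly throws the steering wheel out the window
# (an irrevocable commitment to "straight"), the row player's rational
# reply is to swerve and take the small loss rather than the crash:
print(best_reply("straight"))  # swerve
print(best_reply("swerve"))    # straight: against an uncommitted swerver, going straight wins
```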

Thomas Schelling, in his 1960 book The Strategy of Conflict, calls “the power to bind oneself” a type of bargaining power. “The sophisticated negotiator may find it difficult to seem as obstinate as a truly obstinate man,” Schelling says. “If a man knocks at a door and says that he will stab himself unless he is given $10, he is more likely to get $10 if his eyes are bloodshot.”

Options, in and of themselves, can be harmful. It is not the case that we are always better off having a choice. For instance, the homeowner who opens the door to the bloodshot-eyed man from the previous paragraph obviously would prefer not to have this “option” at all, even though he makes the correct choice under the circumstances. A more common example of a choice that makes the offeree worse off is an unwanted invitation: if an annoying or dull colleague invites you to dinner, you may accept (and suffer through it) or decline (and feel guilty and lousy), but you may not be returned to the blissful moment before being offered the choice. J. David Velleman makes much of this fact in his article “Against the Right to Die,” pointing ultimately to terminally ill individuals who may wish to remain alive without having to choose to do so. Some people, he says, prefer the position of helpless ignorance to either choosing to die or taking responsibility for the choice to remain alive. (Of course, faced with this argument, dark minds like mine immediately conjure up people who prefer to be killed without choosing it, rather than choosing to die or choosing to live.) Velleman provides other examples (mostly via Schelling):

The union leader who cannot persuade his members to approve a pay-cut, or the ambassador who cannot contact his head-of-state for a change of brief, negotiates from a position of strength; whereas the negotiator for whom all concessions are possible deals from weakness. If the rank-and-file give their leader the option of offering a pay-cut, then he may find that he has to exercise that option in order to get a contract, whereas he might have gotten a contract without a pay-cut if he had not had the option of offering one. The union leader will then have to decide whether to take the option and reach an agreement or to leave the option and call a strike. But no matter which of these outcomes would make him better off, choosing it will still leave him worse off than he would have been if he had never had the option at all.

So options can be, and frequently are, a harm. Information is inextricably tied up with options, and information can be a harm, too. Every option is only exercisable if it is communicated; if you don’t know you have an option, then you can’t exercise it. Poor communications may sometimes be a negotiating advantage. But the possible strategic harms of information go beyond knowing that one has an option; any information about states of the world and possible outcomes can be harmful, depending on the situation. The phrase strategic ignorance is used to cover situations in which information is strategically and rationally avoided in order to maximize expected value.

Strategic Ignorance, Negative Knowledge, and Knowledge Alibis

Often the value in strategic ignorance is not ignorance itself, but being able to plausibly claim that one is ignorant, in order to avoid the consequences of knowledge. Plausible deniability is the tactic examined in Linsey McGoey’s “The Logic of Strategic Ignorance,” in which she investigates a trading scandal at the bank Société Générale (SocGen):

The effort among senior management to demonstrate non-knowledge of [a low-level trader’s] actions suggests that the most important managerial resource during the scandal was not the need to demonstrate prescient foresight, or the early detection of potential catastrophes. What mattered most was the ability to insist such detection was impossible. For senior staff at SocGen, the most useful tool was the ability to profess ignorance of things it was not in their interest to acknowledge.
…[O]rganizations often function more efficiently because of the shared willingness of individuals to band together in dismissing unsettling knowledge.

McGoey emphasizes the strategic reliance on knowledge alibis – experts whose function is to prove the plausibility of the ignorance of a given actor, allowing one to defend one’s ignorance by

mobilizing the ignorance of higher-placed experts. A curious feature of knowledge alibis is that experts who should know something are particularly useful for not knowing it. This is because their expertise helps to legitimate claims that a phenomenon is impossible to know, rather than simply unknowable by the unenlightened. If the experts didn’t know it, nobody could. [Emphasis mine.]

In order to employ strategic ignorance and its sub-strategies like knowledge alibis, a special type of knowledge is required: negative knowledge. This, McGoey says, is not “non-knowledge” or a void of knowing, but “knowledge (whether unconscious or articulated) of the limits and the adverse repercussions of knowledge.” How do you know what to avoid knowing? This is the domain of negative knowledge. Negative knowledge is an active process, similar to the active cognitive stopping (“closure”) that helps brains function well, identified in my previous piece. When performed by organizations rather than brains, the process appears a bit more sinister:

Negative knowledge is an awareness of the things we have no incentive or interest in knowing about further. As Matthias Gross writes in a seminal article on the epistemology of ignorance, negative knowledge involves “active consideration that to think further into a certain direction will be unimportant.” [Citation omitted.]

In strategic ignorance, to think further into a certain direction can be not just unimportant (a waste of resources), but damaging and dangerous. Even free, easily available information must often be avoided. And the value of strategic ignorance does not lie just in plausible deniability. Information itself, even when not known by other people, can be harmful. Consider as a minimal case the defector from North Korea, who must obviously prefer not to know about how his family members who stayed in the country have been tortured. And in a happier case, those who value surprise prefer not to receive “spoilers,” that is, information out of the proper order that would maximize the aesthetic experience of surprise. The spoiler taboo is one of the most innocuous manifestations of cooperative ignorance.

Plausible Deniability Isn’t Everything

A colorful illustration of the negative value of information, which doesn’t rely on the value of being able to plausibly deny information, is found in the 2000 paper “Strategic Ignorance as a Self-Disciplining Device,” by Juan Carrillo and Thomas Mariotti. The example central to the paper is the fact that almost everyone (including smokers) vastly overestimates the harmful effects of smoking cigarettes. This is true even though information about the true risks of smoking is freely and widely available. In a large sample, “the average perceived probability of getting lung cancer because of smoking is 0.426 for the full sample and 0.368 for smokers,” say Carrillo and Mariotti. “By contrast, the U.S. Surgeon General’s estimate for this risk lies in a range from 0.05 to 0.10.”

The authors’ posited explanation is that people remain strategically ignorant of the true, surprisingly low risks of smoking in order to bind themselves to a course of action: not smoking, and thereby avoiding even the small risk. In a sense, they are playing a game against their future selves, each of whom has some time preference. If at any point they chose to learn the true risks, the pleasure and productivity benefits of smoking might outweigh the risk (now known to be small), and each future self might over-consume cigarettes throughout the lifetime. But by keeping themselves ignorant of the true risks, they can bind their future selves not to smoke despite the low risks, preventing overconsumption. It’s the only way they can get their time-inconsistent future selves to all “cooperate” despite sacrificing pleasure and productivity at every stage.
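
A toy calculation, with hypothetical numbers of my own rather than anything from Carrillo and Mariotti, illustrates the logic: at the true risk, a present-biased future self would choose to smoke, while the same self, still overestimating the risk, abstains; and abstention is exactly what the ex-ante planner wants.

```python
# A toy numerical sketch of the self-binding logic (my illustration, not the
# Carrillo-Mariotti model itself). All parameters are hypothetical: the
# immediate pleasure of smoking, the utility lost if the harm materializes,
# a present-bias factor beta < 1, the "true" cancer risk (roughly the midpoint
# of the Surgeon General range quoted above), and the inflated perceived risk.

PLEASURE = 1.0       # immediate utility of smoking this period
HARM = 20.0          # utility lost if the health harm materializes (hypothetical)
BETA = 0.5           # present bias: future costs feel only half as heavy
TRUE_RISK = 0.075
PERCEIVED_RISK = 0.4

def smokes(risk, beta):
    """A present-biased self smokes when the discounted expected harm is below the pleasure."""
    return PLEASURE > beta * risk * HARM

# The ex-ante "planner" (beta = 1) judges smoking a net loss at the true risk:
print(smokes(TRUE_RISK, beta=1.0))        # False: 1.0 < 0.075 * 20 = 1.5
# But an informed, present-biased future self would light up:
print(smokes(TRUE_RISK, beta=BETA))       # True:  1.0 > 0.5 * 1.5 = 0.75
# Kept ignorant (i.e., still overestimating the risk), that same self abstains:
print(smokes(PERCEIVED_RISK, beta=BETA))  # False: 1.0 < 0.5 * 0.4 * 20 = 4.0
```

The exact figures do not matter; what matters is the ordering of the thresholds, which is why the planner prefers that the future selves never look up the true number.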

The example and the model raise many questions. If knowledge of the relatively low risk of smoking were a powerful factor in the decision to smoke, shouldn’t physicians and other medical experts smoke at a very high rate? They are effectively forced to learn something like the forbidden information identified by Carrillo and Mariotti. Yet doctors tend to smoke at a rate close to that of the general population in their country, only in a few cases smoking more than the reference population. And in many countries, doctors continued (or continue) to smoke at very high rates for decades after the breakthrough reports linking cigarettes to lung cancer emerged during the 1950s. In the early 2000s, 38% of Greek doctors smoked, and half of them had started smoking in medical school. Meanwhile, fewer than 10% of doctors in the United States admit to smoking, despite having presumably the same information about the risks as the Greek doctors. Information about the risk of disease does not seem to be a great motivator for real people in the world making smoking decisions.

Consider this one piece of information, however, as a representative of a body of information – “evidence that smoking isn’t that bad” – and consider its counter-body, “evidence that smoking is really bad actually.” These are, in many ways, conflicting ideologies as much as bodies of information. What evidence exists under each umbrella? And which category is more likely to reach people – either by their personal (perhaps rational) choice, or by factors outside their control? We must include not only cold medical facts, but also soft social facts, such as the social status and moral judgments accorded to smoking by the surrounding culture.

To remain ignorant of “evidence that smoking isn’t that bad” facts, and only be exposed to “evidence that smoking is really bad actually” facts, is a decision a rational person might make, to force their fickle, time-inconsistent selves to cooperate across time to avoid smoking. It is certainly a decision that is made for most people in developed countries. Most Americans would agree that this is an enormously beneficial accomplishment. The most interesting question to me is this: what methods do societies use to accomplish these feats of cooperative ignorance?

A related question is, how do people know what to avoid learning in the first place? How is negative knowledge accomplished? If you don’t know the risk of lung cancer from smoking, for example, then from your truly ignorant perspective the true risk is equally likely to be higher or lower than your current belief. Even if you wished to discipline your future selves to prevent them from smoking, there would be no reason to suspect that learning more about the true risks would endanger your goal and make smoking more attractive. The existence of cooperative ignorance presupposes that there are strategies that groups use to guide members away from certain information. I will speculate about these shortly.

Additional intuitive hooks are available if the lung cancer risk example seems questionable. People who wish to avoid overconsumption of a drug may avoid learning about its desirable effects simply by not trying it. Students in Ph.D. programs, aspiring actors and musicians, and those in multilevel marketing schemes tend to avoid information about their likelihood of career success, as this might endanger their commitment to their path. People in committed marriages tend to avoid learning about possible outside romantic options by not going on dates or maintaining profiles on dating sites. In fact, a married person would likely be offended to learn that his spouse went on dates, had a profile on a dating site, or otherwise appeared to gather information about extramarital options: merely being willing to gather this information signals a lack of commitment to the marriage. This is ideology on the smallest scale, the “zone of motivated ignorance” that Jonathan Haidt says we tend to find protecting the sacred.

Strategic ignorance as self-discipline is an elegant model, and it provides at least an existence proof of strategic ignorance without plausible deniability: parties need not deny knowledge to anyone other than themselves. The authors say of their model:

[O]ur model would be analogous to a multi-person situation where the information obtained by any individual becomes automatically public. While this assumption is in general hard to motivate, it seems particularly natural in our intra-personal game with perfect recall. [Emphasis in original.]

The authors may not be charitable enough about the applications of their model, which assumes that “information obtained by any individual becomes automatically public,” for a fact about information is that it tends to leak. Often the best way to plausibly deny having certain knowledge is to actually not have it. This may be true both in institutional settings, where paper trails are the means of checking, and in ordinary interpersonal settings, where emotion, cognition, and ritual responses are the primary means of “checking” the truth.

Leakage

Robert Boyd and Sarah Mathew recently released a little paper containing a mathematical model that demonstrates the value of third-party monitoring for reliable communication. When communication is only one-to-one, “cheap talk” (signaling that is not costly, unlike a peacock tail or a tattoo) that may be deceptive is slow to evolve; the fitness gain necessary to motivate it into existence is huge. But when third parties monitor communications for truthfulness, cheap talk becomes a much more economical proposition: the fitness gain necessary to get the ball rolling is much smaller. Cheap communication seems to require reputation. The emotion of shame – the sick dread of being publicly caught acting improperly, as in a lie – motivates truthfulness, just as the emotion of spite motivates the altruistic punishment mentioned in the first section.
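
A back-of-the-envelope sketch, not Boyd and Mathew’s model itself, shows the reputational mechanism at work: if a detected lie costs the liar something in each future interaction, then adding third-party observers multiplies the chance of detection, and therefore the one-shot gain a lie would need in order to pay. The parameters below are hypothetical.

```python
# A toy gloss on reputation-backed cheap talk (hypothetical parameters, not the
# Boyd-Mathew model): compute the smallest one-shot gain for which a lie has
# non-negative expected value, given that a detected lie costs the liar `cost`
# in each of `future` subsequent interactions.

def breakeven_gain(p_detect_each, observers, cost, future):
    """Gain needed before lying pays, with `observers` independent monitors."""
    p_caught = 1 - (1 - p_detect_each) ** observers
    return p_caught * cost * future

# One-to-one communication: only the receiver might catch the lie.
print(breakeven_gain(0.2, observers=1, cost=1.0, future=10))  # 2.0
# Third-party monitoring: five onlookers also check the claim.
print(breakeven_gain(0.2, observers=5, cost=1.0, future=10))  # ~6.72
```

With monitoring, lying rarely pays, so cheap signals stay honest enough for receivers to bother attending to them.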

Once a communication framework exists, deception is still valuable and tempting, but deception-detection tends to get better as deception becomes more sophisticated. (The existence of a small, stable population of sociopaths may indicate that there are multiple stable strategies existing at different frequencies, as with some animal species.)

Many tactics have been developed to verify the truthfulness of information. In ancient Sumer, a protocol was developed as a “level of indirection” (a sort of checksum) to verify the accuracy of cheap information: marking not only the number of tokens representing different kinds of goods, but also the number of different types of goods, and multiplying these together into a primitive (but hard-to-fake) hash. (This is very similar to the modern use of techniques like parity bits; while the originals worked mostly against deception, the same techniques protect us against the information decay of noise.)
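
To make the analogy concrete, here is a small sketch of both checks, following the description above rather than any historical reconstruction: a parity bit that catches a single flipped bit, and a Sumerian-style tally that multiplies token count by type count into a redundant figure the receiver can recompute.

```python
# A small sketch of both checks described above; details are illustrative,
# not a reconstruction of the actual Sumerian protocol.

def parity_bit(bits):
    """Even parity: one extra bit makes the total number of 1s even."""
    return sum(bits) % 2

def tally_hash(shipment):
    """Sumerian-style summary: total tokens multiplied by the number of good types."""
    return sum(shipment.values()) * len(shipment)

message = [1, 0, 1, 1, 0, 1]
sent = message + [parity_bit(message)]
tampered = sent[:]
tampered[2] ^= 1  # a single flipped bit, whether from noise or tampering
print(sum(sent) % 2 == 0, sum(tampered) % 2 == 0)  # True False

shipment = {"barley": 5, "oil": 2, "wool": 3}
print(tally_hash(shipment))  # 30: a redundant figure the receiver can recompute
```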

Levels of indirection are useful in less abstract contexts as well. An ideology (or aesthetic) can be regarded as containing a truth claim, and ideologies often form axes of cooperation. Commitment to an ideology, or an “egregore” in previous terminology, with its flags and costumes and rituals and outlandish beliefs, demonstrates commitment to one’s group, without the necessity of constant one-to-one commitment signaling. It’s very efficient, just as speech is more efficient than grooming.

The truth claim of an ideology can be dissected into parts: first, “this ideology is true;” second, “I believe this ideology.” Skill at asserting the first generally implies the second, and for that matter, helps everyone, not just the arguer, more plausibly portray belief (not least by actual, sincere belief).

Beliefs leak. They leak out from conscious (and even unconscious) knowledge into behavior, emotional display, display of cognition, and ritual performance. The more “checks” a group has available, the better its chances of reliably detecting deception by a member, ideological or otherwise. So a member must be able to pass as many checks as possible: crying or laughing or becoming angered when necessary, performing rituals without groaning too much, and demonstrating appropriate cognition as evidenced by speech and action. As an example of cognitive performance leakage, the time it takes to make a decision or produce a response is itself a signal of what’s going on mentally. An insincere person or liar, or one playing another strategy that requires more processing time, will take longer to produce a response.

Since leakage is so prevalent, I emphasize again that actually being ignorant of information is often the best way to plausibly deny possessing it, just as sincerely believing something is often the best way to signal believing it. Before the turn of the most recent century, it was fashionable to investigate self-deception as a means of facilitating the deception of others. How can you deceive yourself? Self-deception is possible because knowledge is stored in many layers of consciousness, not in one coherent store. And self-deception may be broadly beneficial. A famous and much-reported study from a quarter of a century ago found that a measure of self-deception predicted competitive success on a single college swim team; my research has not revealed any major attempts at replicating the result. So we are not left with much besides our armchairs to help us determine the plausibility of the hypothesis that self-deception facilitates the deception of others.

It is the same for the hypothesis that others help us to deceive ourselves.

The Cooperative Aspects of the Scam

Not all deception is intentional. Animals deceive each other all the way down the phylogenetic ladder. An animal’s coloring, deceptively mimicking the coloring of a poisonous species, arises through no intent at all. And in the human world, many deceptions are unintentional as well; a party to a romantic relationship may form a belief that the relationship is exclusive or permanent when it is not, even though the other party had no intent to cause such a belief. Doctors (and other healers) prior to the twentieth century no doubt believed they had the power to heal their patients, though for the most part they did not; their patients were deceived, but not intentionally.

The term “scam” implies an intentional deceiver as well as a victim. In this section we will be considering only intentional deception.

There is a truism about scams that a mark can only be deceived if he wants to believe; all deception relies on self-deception. The ideal victim of a scam is a person who desperately wants to believe in a reality different from actual reality. Perhaps he receives no utility – or even negative utility – from the actual state of affairs. In other words, reality, such as it is, causes him pain. He is willing to risk everything on the possibility of a different reality being true, precisely because the present reality is of little use to him.

The scammer provides a temporary service, offering for sale a plausible facsimile of a different reality. In this fantasy world, the mark gets to be rich, or loved (as in dating scams), or healthy (as in healing scams), or young (as in anti-aging products); he gets to contact his deceased relatives, or achieve spiritual transcendence or high status. What does the victim get out of it? The service the scammer provides is a plausibility structure for a desirable belief – usually only a temporary one.

The problem with the scam is that it comes to an end, with the victim generally worse off than if he had never received the “service” provided by the scammer. But what if it never had to end? What if we could all scam each other, forever, to believe in a reality that is better for us than actual reality? That is the hope of cooperative ignorance.

Methods of Cooperative Ignorance

If we deceive ourselves and remain ignorant of certain information for our own benefit, how do we know what information to avoid? Within a single mind, knowledge and memory do not exist as a single, unitary set of beliefs, all of which are available at all times. Rather, there are many layers of consciousness, many selves for different situations and roles, each of which has access to only the knowledge and memories relevant to the current situation. Before learning information consciously, we get many emotional cues about how to treat the information, and whether to pay attention to it at all. The capacity to be interested in, or bored by, possible research areas or trains of thought helps us seek information that is valuable and avoid information that is harmful or useless. The sinister-sounding process George Orwell describes is quite natural, and frequently valuable:

Crimestop means the faculty of stopping short, as though by instinct, at the threshold of any dangerous thought. It includes the power of not grasping analogies, of failing to perceive logical errors, of misunderstanding the simplest arguments if they are inimical to Ingsoc, and of being bored or repelled by any train of thought which is capable of leading in a heretical direction. Crimestop, in short, means protective stupidity.

(from 1984)

“Protective stupidity” and “cooperative ignorance” have negative connotations, but recall the very strategic, useful, and even socially beneficial applications outlined in previous sections. It’s not always a bad thing.

One method that I have already treated in detail is the maintenance of sacredness. Groups protect their foundational myths and fantasies by maintaining an emotional taboo around the sacred myth, a zone of motivated ignorance. It is impolite and even disgusting to question foundational myths, or to discuss them with the wrong emotional valence.

But humans are curious creatures, and some research is bound to occur. A powerful method to deal with curiosity is to manipulate the results of research by proffering a biased sample: ensuring that when research does occur, its early stages confirm the group fantasy and discourage further digging. Benevolent governments or organizations might ensure that research on smoking quickly turns up only negative information: short, easy-to-digest information with headings like “Get the Facts!” that obscure the true risk in favor of reporting on harms alone. If governments don’t want citizens to use drugs (perhaps for the citizens’ own good), but realize that citizens will try drugs as part of their research, they might make particularly poisonous and unpleasant drugs legal and widely available, such as alcohol. If reading and learning in general are to be discouraged because they are bad for us, a government might fill its schools with boring, irrelevant, tedious literature.

A related method is to disguise a tabooed area of reality as something else. When a curious human attempts to research “medicine” or “education” or “philosophy,” he will be harmlessly diverted into a safe zone of research that conveniently goes by the name of the dangerous area, and never know the difference. In accordance with Linsey McGoey’s observation that “experts who should know something are particularly useful for not knowing it,” certain people might be given high status, legitimacy, and attention, on the expectation that this makes them experts who should know something. However, as a condition for their high status and legitimacy, they must be persuaded to only reveal beneficial information, and not defect from the cooperative ignorance pact. Such people can make cooperative ignorance agreements much stronger.

All of this may sound rather top-down and authoritarian, but we all cooperate to maintain desired ignorance. Law enforcement is top-down, but if everyone simultaneously decided to litter and steal and burn down buildings, the police would not have enough resources to address it. Rather, we all cooperate to refrain from crime (and even from impoliteness), both by choosing not to commit offenses ourselves and by altruistically punishing, at a cost to ourselves, those who do. In this same way, we all cooperate to maintain ignorance about the things it is better for us not to know.

The main problem with cooperative ignorance is that it’s hard to check whether it’s really better not to know something, except by knowing it. And then it’s too late.


Thanks to Rob Sica for librarian assistance.

Comments

  1. Jeff Morrow says

    Lovely!

    Not having taken time to explore the following, possibly relevant, use case, I offer it for more thoughtful consideration by others.

    In loftier regions of many firms a substantial amount of effort gets devoted to risk characterization and mitigation. Frequently both characterization and mitigation (especially so) are badly performed and, in my experience, there is not much appetite for improving either. Little attention is paid, for example, to the interactions between risks. And mitigation devolves to scheduled reviews of the *status* of risks rather than determined action. The tracking spreadsheet becomes a write-only database.

    Following the logic of the post, the size of the “substantial” effort to deal with risk, the opacity of the means for characterizing it, the arbitrary prioritization schemes (e.g., risk independence, linear scales, etc.), and careful, opaque expression (many arcane nouns/few verbs) all serve plausible deniability – “We had a big process. All the experts contributed. No one could have foreseen …” It seems key that the charade be a large one, with lots of compromised participation sanctioning further compromise.

    When enough risks manifest in corporate performance bad enough to move the stock price decisively lower for a long enough time, the senior team loses members to their need to spend more time with family and pursue other opportunities. Replacements skim the risk assessments and order up new ones, and the deniability machine cranks on. The departed indeed find new opportunities, their reputations untarnished by the unknowable.

  2. Awesome. Very thorough exploration of territory I’ve only caught glimpses of in my own wanderings.

    I’ve been coming at ignorance from another angle. Strategies of ignorance, I suspect, are necessarily self-limiting in a certain way. Gaining an advantage through ignorance is like shorting a stock, where you’re at risk in ways you’re not with other trading strategies.

    The example I’ve been thinking about the most is veil-of-ignorance arguments (and moral luck). The success of these strategies requires 2 things: continued ignorance, and forgoing the upside of knowledge. For example, the increasing pressure to rethink insurance models based on vast new amounts of data in healthcare/genomics/driving. This is like a stock going up endlessly when you thought it would go down. As more genomics data comes in through new discoveries, the cost of ignoring it in health insurance actuarial models goes up. The cost of strategic ignorance is not static over time.

    These are conscious-ignorance cases, but in unconscious-ignorance cases (being in denial, actually burning boats/bridges/throwing out steering wheels in a moment of anger rather than as a calculated move), things get even trickier, because you’ve now created a boundary in space and time where the strategy is net-positive, and you have to stay within it via unconscious behaviors. If you wander out, you’ll get hit by the costs. And if the costs within your net-positive zone increase over time, you’re a boiled frog.

    Not sure where I’m going with these thoughts, but they make me generally wary of ignorance strategies, just as I’m wary of trading via shorts.

  3. This was an interesting read. Regarding risks of various activities, this is further complicated by the fact that humans are crap at judging risks, especially long term. And taboos and desired social behaviours often cause a lot of fearmongering or denial of the actual levels of risk. To give some examples, we overestimate the risks of having sex with strangers, underestimate the risk of having sex with friends, underestimate the risks of driving, and overestimate the risks of getting cancers from various sundry foodstuffs.

  4. “dark minds like mine immediately conjure up people who prefer to be killed without choosing it”

    Seems too obvious to require a dark mind like yours. Many religious people consider suicide a mortal sin, but nobody AFAIK considers it a mortal sin to be a murder victim. It makes perfect sense to me that there would be people who would prefer to be killed but wouldn’t dare choose to be killed. Not at all clear to me why one would assume only the converse case to be worthy of attention, or indeed why one would even expect it to be more common. It’s rather disturbing, of course, that in ostensibly making the case against the right to die, Velleman is actually making a case in favor of murder.