In the months since I wrote “The Cyberpunk Sensibility”, the dystopian flavor in the air has only become more potent. Consider these recent events:
- An exiled prince from a communist country was felled by a pair of dark-haired ingenues, one wearing an “LOL” shirt.
- The CIA has been either pwned or framed, or both, their secrets extracted and disseminated via Twitter.
- One of the richest, most powerful men in The Free World™ released a globalist manifesto, in which he promised to preside over the digital citizens of his monetized belief garden according to his own determination of benevolence.
- A sovereign nation appointed an ambassador to remonstrate with this rich, powerful man and his brethren: private entities that span continents. Hey, maybe it’s just a PR stunt!
- The predominant meal-replacement brand has an eerie AI mascot. (Which is definitely a PR stunt, but still.)
In “The Cyberpunk Sensibility” I noted that these absurd events can act as triggers, waking people up not unlike the vaunted red pill. But the feeling that we’re living in a parody timeline is starting to wear on me. Many have reflected (at least in the United States) that 2016 was a bizarre year, and 2017 is shaping up to shame its antecedent.
I wrote the cyberpunk bullet points to sound dramatic, but it’s equally possible to make them sound ridiculous. Regardless of which perception I adopt, I find myself marveling at how profoundly strange all of this feels. Paradoxically, what’s normal now is for everything to feel strange. Is that feeling adaptive, I wonder? Is it safe?
Lay of the Land: Treacherous
“Reality” is an individual construct that can be carefully built into a community consensus. I’m not the first person to notice this. Many different communities build many different consensuses — in other words, we all see the world differently. You could argue that building and participating in a distinct reality is integral to being a community. (On the internet, it can be the whole project. Look at any number of subreddits, or Frog Twitter.) Communities are characterized by the attitudes and aesthetic preferences found among most members, especially the prominent ones.
Philosophers and scientists have been pondering this phenomenon for centuries, probably millennia, since a patchwork of perspectives has always been the norm. “While journalists and other experts maintain that truth is basically facts added up, the reality is that all of us, to very different degrees, uncover our own facts and assimilate them to our pre-existing beliefs about what’s true and false, right and wrong,” Nathan Jurgenson wrote. “Even when people see the same information, it means radically different things to them.”
“Post-truth” is a current political buzzword, but whatever “truth” state existed was brief, and “pre-truth” looked a lot like “post-truth” in a different techno-economic landscape. Oral traditions weren’t so different from Facebook, nor is “fake news” an unprecedented innovation. The main difference, dating to the start of globalization and accelerating rapidly once the internet appeared, is that now community-formation isn’t limited by geography. Many IRL social networks and interest groups are still situated locally, but even more have sprung up online. If you’re reading this, you probably belong to at least a handful of these subcultures. “Some days everything feels like a maelstrom,” as Jia Tolentino wrote, “a series of fights over identity, in which everyone is constantly misrepresenting their own stakes.” Like traditional tribes, petite subcultural cyber-clans are constantly clashing.
“All systems of communication and control — from the human mind to a command and control network — can be subtly degraded, disabled, or subverted by feeding them false inputs or exploiting weaknesses in how they process, evaluate, and act on information,” according to Adam Elkus. Francis Fukuyama described our current environment as operating exactly thus:
The traditional remedy for bad information, according to freedom-of-information advocates, is simply to put out good information, which in a marketplace of ideas will rise to the top. This solution, unfortunately, works much less well in a social-media world of trolls and bots. There are estimates that as many as a third to a quarter of Twitter users fall into this category. The Internet was supposed to liberate us from gatekeepers; and, indeed, information now comes at us from all possible sources, all with equal credibility.
Venkat wrote about the same dynamic from a different angle:
Capitalists and communists disagreeing about the nature of the world in the twentieth century was a global collision of just two consensus realities. Back then, consensus realities were a read-only game for most people. It was expensive to create one. You had to build impressive buildings, give speeches about values, own newspapers, hire goons to beat up opponents, declare manifestos, and so forth. We are now entering a read/write era of consensus realities. Our ability to disagree about the nature of our world, at an extremely fine-grained and visceral level, is about to explode.
In this essay, I’m exploring how to navigate the internet’s chaos of subjectivity without going insane. This is my version of Sarah Perry’s epistemic hygiene, a way to engage with the never-ending deluge of information without drowning. I needed to write this as a guide for myself.
No Knowing Before Knowing
Immediately post-election — you know which election I’m talking about — Sarah Constantin’s reaction resonated with me deeply:
Like many people, I’ve thought 2016 was a surreal year; the Cubs won the World Series, the Secretary of State went on television to warn people about white-supremacist memes, Elon Musk has landed rockets on ocean platforms and started an organization to develop Friendly AI. Surreal, right?
It’s real, not surreal. If reality looks weird, this means our stories about it are wrong. […]
There may be a crisis in politics. But before we can do anything sensible about that, we need to understand that there is a crisis in credence. If the world looks weird to you and me today, that is not a matter for rueful laughter, it is a sign that we are probably badly wrong about lots of things.
And being totally wrong about how the world works is a threat to survival.
The election sent me spinning into a slow-motion epistemic breakdown. It’s an uneasy state. Every time I feel sure of something, I remind myself: “You thought that there was no way Trump could win. You are capable of being catastrophically wrong without realizing it.” I never used to obsess about this, but it’s a tautology that you can’t see your own blind spots. (“We were unconsciously incompetent, and it took years for that incompetence to become conscious,” Slava Akhmechet said after his startup failed.)
Up until the night of November 8th, I felt unassailably sure that my interpretation of reality was correct. I was the opposite of a superforecaster, confident that I could choose between binary futures with total certainty. Sure, I relied on the polls (which weren’t actually that far off), but I mainly relied on my intuition about fellow Americans, which turned out to be laughably myopic. The dataset of “fellow Americans” that I fed into my mental model was not a representative sample. More crucially, I didn’t realize that it wasn’t representative.
The liberal bubble has been talked about a lot, to the extent that there’s backlash to the anti-bubble sentiment. So I won’t flog that horse too much; I’ll just tell you that I was very cozily bubbled. The election showed me that I needed to break out, to understand what other people actually thought. I needed to be able to pass more Ideological Turing Tests. Because I believed, as Constantin said, “being totally wrong about how the world works is a threat to survival.” A big part of how the world works is determined by how other people believe the world works.
So I read people I disagreed with. I filled my feeds with them. And it has helped, in terms of broadening the viewpoints that I understand. Yet I’m still uncertain about everything — well, a lot of things — most of the time. I’m uncertain about which political policies are optimal (which isn’t so new). I’m even uncertain about how human nature actually works. That leads to being uncertain about how societies function. Going further, I’m uncertain about how I should function within society.
I’d rather be uncertain than wrong, but I’d rather be right than uncertain. Next I must ask, should I want to be certain at all? I’ve been taking this as a given — an end in itself. I’ve been looking for heuristics that work, trying to rebuild my intuition with the goal of optimizing for prediction power.
Some of my preexisting core principles still hold (e.g. there are always tradeoffs; certain individuals can resist incentives but groups will always respond to them). But many of my preexisting core impulses were what blocked my ability to see clearly pre-election. They were more cultural than intellectual — I looked for people who performed ingroup-ish thoughtfulness and tuned out the rest. I’ve been trying to adapt my old interpretation patterns to a newly broadened reality, instead of developing new frameworks and response mechanisms.
I said “my ability to see clearly” — did you catch that? I’m still expecting there to be one reality that I can digest if I figure out the right enzymes.
We Dare to Diagnose
Adam Elkus put it simply: “The price of being able to not be overwhelmed by the world is to have a filter, a cartoon-like image of reality that we can use in our day-to-day lives.” Systems engineer Mathias Lafeldt wrote, “One reason we tend to look for a single, simple cause of an outcome is because the failure is too complex to keep it in our head. Thus we oversimplify without really understanding the failure’s nature and then blame particular, local forces or events for outcomes.” I made a similar argument, along with a warning, in “The Cyberpunk Sensibility”:
Fundamentally, the danger is that mental models are the enemy of complexity. They’re useful as sources of decision-making heuristics, shortcuts that guide you in reacting to new developments. This is just a thing that human brains need because contemplating every single occurrence and choice in depth is mentally taxing.
There are two basic ways to apply a filter to your personal information-processing:
- Apply a filter before intake, e.g. limiting your reading material to certain trusted websites or authors, or only watching the news on a particular TV channel, or only following members of the ingroup.
- Apply a filter after intake, e.g. by using mental models to interpret what you’ve consumed.
I think everyone uses a combination of these strategies, some relying more heavily on one than the other. Before the election, I mostly used the first approach. After the election, I’ve been leaning on the second approach, but it makes me anxious all the time because I don’t feel like I can trust my mental models. After all, they were trained on a heavily filtered infostream! Either way, as I swim through the chaos of subjectivity, I’m starting to doubt that strengthening any filtering mechanism is the right focus.
What I need is intuition that relies on experience rather than belief. Intuition that is optimized for being a functional human being, one who has a variety of healthy engagement modes at her disposal, rather than intuition that is optimized for yielding intellectually consistent results.
I’ll analogize it to Twitter. Instead of 1) following particular types of people, or 2) following everyone under the sun, perhaps it’s better to 3) be able to view anyone else’s incoming feed at any given time, and to switch between them frequently. (Apparently #3 used to be a native feature of Twitter, but of course they got rid of it.)
Calming the Chaos
Flipping from viewpoint to viewpoint is admittedly a version of the “meta-rationality” that David Chapman drew from Kegan’s stages of personal development. In a comment on the blog Dirdle, Rogelio Dalton summarized Chapman’s formulation:
Imagine that two people are arguing about the death of Michael Brown in Ferguson. One person is using a social-justice oriented system to analyze the situation, and one is using an individual responsibility system to analyze the situation. Postmodernism would say that these are just two equally valid interpretations by which to view the situation since all truth is relative. The Chapmanian view is that these are two systems, and that one can be more valid than the other, but that proving which is more valid is potentially impossible (possibly except by reference to a goal, but that may be me adding to Chapman’s explicit writings). The normal person would not even realize that there are two systems at work here and would argue about how evil one or the other is. In real reality, there is no little tag in the universe that says that one system is correct or not; all of the facts surrounding it come down to molecules changing configurations in various ways. But because categories have structure, we can decide on a useful system, and because some systems can be more applicable than others, we can influence the systems other people use through argumentation and changing their intuition.
Maybe flexibility and the ability to move fast are more useful than being right. Earlier I said, “I’d rather be uncertain than wrong, but I’d rather be right than uncertain.” Maybe being right isn’t just not everything — it’s not a thing at all. Although I’ve long accepted that truth is subjective on an academic level, it’s hard for me to let the idea go deep.
Returning to “intuition that relies on experience rather than belief,” I also want to bring up Fingerspitzengefühl. It’s a German term that roughly translates to “fingertips feeling” and which denotes the swift instincts of experts. Taylor Pearson offered a helpful concrete example:
The easiest and most literal example of finger tip feeling is the finicky lock on my apartment door. When I first moved in, it would sometimes take me four or five minutes to get it locked or unlocked. Now, I have a sort of intuitive feel for what the lock needs and can usually get it open or closed in a few seconds.
Venkat has written about the process of developing Fingerspitzengefühl:
Finger-tip feeling is sometimes experienced as clumsy groping, sometimes as skilled probing. Sometimes as awkward actions, sometimes as flowing ones. The associated mind state is of course, a turbulent mix ranging from extreme exhilaration to extreme distress. Masterful expert mind to scared beginner mind. As you get better at this, your mind’s no-go zones shrink. You can wander from your darkest shadows, fighting your most deeply buried demons, to brightly lit parts of your self. The more you do this, the easier it gets. You can go into an increasing range of situations knowing your mind can handle any resulting emotions or feelings of powerlessness. […] Lived mindfulness means not shrinking away from anything the world throws at you, pleasant or distressing. Staying engaged. Mind-fingers intertwingled in the world.
Tying together Chapman’s meta-rationality and Fingerspitzengefühl brings me to what I think is the answer to the question I posed at the beginning: Is it okay for day-to-day normality to feel strange? Is it adaptive or safe to process the world that way? The answer I’ve come to is both yes and no. If reflecting on the absurdity is where the processing ends, then no, it isn’t adaptive or safe. But navigating meta-rationally with sensitive fingertips is adaptive, and epistemically safe, although not necessarily comfortable.
Like I said before, I don’t think either pre- or post-filtering information is the right approach. Filtering is a way to block content from entering the mind or suffusing through it, out of fear that the content will be a contaminant. Chapmanian meta-rationality removes this fear. If you allow yourself to dance from mindset to mindset, using empathy to explore different perspectives at will, no particular thoughtspace can be tainted because you’re able to occupy them all.
Truth is not objective; every paradigm is true through a particular lens. Certainty becomes irrelevant, because certainty requires a commitment to one viewpoint, which you refuse to grant. Incidentally, I find that this helps a lot in terms of maintaining emotional equanimity.
In my own epistemic praxis, I’m layering a drive to achieve Fingerspitzengefühl on top of Chapmanian meta-rationality. Near the beginning I said, “I don’t feel like I can trust my mental models. After all, they were trained on a heavily filtered infostream!” Chapmanian meta-rationality makes it possible to expose my mental models to diverse infostreams without driving myself crazy, without worrying that my own intellectual identity is at risk. I can relax again and let my fingertips sense the patterns.