The past hundred years have transformed how we imagine ourselves. Freud catalyzed a greater emphasis on the unconscious. Kahneman and Tversky inspired a lot of research into how our subconscious biases affect day-to-day decision-making. Between those tectonic shifts, our understanding of our selves has been radically overhauled.
Gone is the Cartesian, centralized mind mystically separated from the physical world. In its place we’re left with a schizophrenic brain inextricably bound to the body and, at bottom, nothing but atoms. We’re still struggling to work out the implications of this new perception for different areas of human endeavor. Building effective institutions is one of them.
Here at ribbonfarm, Mike Travers last year summarized the arguments against treating people as the only entities with agency, and for instead ‘smearing’ assumptions of agency across a wider variety of entities as we go about our lives. In this post I want to focus on group agency in particular, and explore how our intellectual limitations and cognitive biases as individuals collectively cause dysfunction in long-lived groups — our institutions. Our modern lives are inextricably bound to juggernauts operating by complex rules that we seem inevitably to lose control of. Perhaps the problem has something to do with how we construct the rules our groups and institutions operate under: how our individual biases cascade and multiply into institutional decay.
The belief that agency can be distributed is hard to internalize even after you’ve been intellectually convinced. I nodded along as I read Mike’s posts last year, but I keep catching myself acting in violation of these beliefs. To take an example without getting bogged down in politics: it’s easy to read about congressional corruption and come away with a sense that all congressmen are bad people. Then you might read a story about a specific congressman and think, “hmm, he wasn’t so bad.” Ok, so maybe he’s an exception. Or today’s congressmen are more corrupt. But there’s a third possible synthesis, one the mind shies away from: perhaps the system made them that way. Perhaps sequences of simple actions, each beyond reproach on its own, can cause the group as a whole to grow hostile toward the people who form it or who caused it to be formed.
For all our sophistication, our reaction to bad news is, all too often, to seek someone to blame. A scapegoat. Scapegoating often manifests relatively innocuously, in a verbal or written complaint. But from the same fount as that complaint, that election loss, that firing, or that promotion not granted, can spring far more unfair and barbaric tendencies. And when scapegoating goes bad, it has a disquieting tendency to take on a life of its own. Revolutions have to work hard not to turn into their oppressors. Or worse.
Reacting to this risk (among others) is the conservative worldview, as fellow resident blogger Keith Adams once eloquently argued in a private conversation:
A conservative regards history as full of examples of functioning societies that, for one reason or another, suddenly fell apart. Since the reasons for these failures are illegible, any kind of change is to be avoided, since we have that most precious of things: a stable, prosperous, free, safe society. While our society is not *absolutely* stable, prosperous, free, and safe, it is historically all of those things, and there are a lot more ways to break it than there are to fix it.
This worldview leads to a tendency to cargo-cult social institutions, per the parable of Chesterton’s fence, which can be shortened to: “When you can tell me the use of it, I may allow you to tear it down.” Chesterton’s fence is a great idea, except for one thing: we keep losing our collective memory of why we put up that damn fence. Fences are easy, but as we get into the complications of systems of rules, it’s all we can do just to keep the trains running on time. What’s more, in the wrong hands a high bar for change becomes a potent political weapon for gridlock. Gridlock might be pragmatic, but a gridlocked system basically treats the world around it as an externality, relying on everyone else to do the adapting for it. I want to aim for more: a synthesis that jailbreaks us out of the cycle of scapegoating and gridlock.
The monkeys and the storm
Let me begin with this observation: people have trouble managing small numbers. For example, consider poker as an environment of small numbers. The odds of getting dealt two aces are 220 to 1. The odds of catching three more cards of your suit on the flop are 118 to 1. Small probabilities like these (with two zeroes after the decimal point) have an odd effect on beginners, who are used to rounding them down to zero in the rest of their lives. Faced with seemingly zero odds of winning hand after hand, they may go into a shell, or they may start bluffing outrageously. It’s like being on an extremely flat plain exposes them to the quantum froth of their own anchors. I have in the past pursued a bad strategy for weeks because I happened to win with it once. The mammalian brain is extremely sensitive to small doses of positive feedback, capable of stretching them out over long periods.
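Those odds are standard hold’em combinatorics, and the arithmetic is a quick sanity check:

```python
from math import comb

# Probability of being dealt two aces: ways to pick 2 of the 4 aces,
# out of all 2-card hands from a 52-card deck.
p_aces = comb(4, 2) / comb(52, 2)      # 6 / 1326 ≈ 0.0045
print(f"pocket aces: 1 in {1 / p_aces:.0f}")    # 1 in 221, i.e. 220-to-1 odds

# Holding two suited cards, probability the three-card flop is all
# that suit: 11 of the 50 unseen cards match your suit.
p_flush = comb(11, 3) / comb(50, 3)    # 165 / 19600 ≈ 0.0084
print(f"flopped flush: 1 in {1 / p_flush:.0f}")  # 1 in 119, i.e. 118-to-1 odds
```

Both probabilities start with two zeroes after the decimal point — exactly the range our intuitions are worst at.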
Outside of poker, small numbers have a more serious effect when they denote ownership. You can share a house with a roommate and feel a sense of ownership toward it. But consider the neighborhood post office or some other public building. Do you feel like you own your one three-hundred-millionth share of it? Your brain probably rounds that small number down to zero, and as a result you treat it as a commons.
Anytime the topic of a commons comes up, the examples cited are usually physical or environmental: community property, pollution, climate change, antibiotic-resistant bacteria, and so on. But many of the abstract rule systems we organize around also rapidly turn into commons and decay in their efficacy. They get less care because nobody watches over them, especially after the original author(s) depart. We, the people the institution serves, want to harvest without cultivating. We don’t care what its internal needs are, only what services it can provide us with. We imagine that delegating our authority is enough to keep it working indefinitely. This is caused by two interacting limitations of human nature: our insensitivity to gradual change and our bias for action. I’m on shakier ground here, so bear with me a moment while I wave my hands.
I don’t know about you, but I have trouble with the way seconds turn into hours and hours turn into years. I need prostheses because I don’t have an absolute sense of time or distance. It’s considered remarkable for someone to have perfect pitch; most of us go through life looking for things to compare against (which looks a lot like anchoring). Kahneman and Tversky studied a phenomenon they called insufficient adjustment, which may connect up with anchoring. I speculate that all these mechanisms of adjustment are analogous, and that they share common circuitry in our brains, much as our visual cortex exerts a deep influence on our cognition.
Adjustment may be core to how we think, but we can’t string too many adjustments together at once. Do that and they start blurring, losing their individual identities. A bunch of small changes can gradually add up to a lot and still slip under our radar. This causes trouble in many places, but I’ll pick two in particular: large software projects and a nation’s laws. Both have a few things in common: regardless of initial conditions, they rapidly turn into commons that no participant feels any sense of ownership toward. As a result, small changes accumulate in them as first one actor gets support for his change, and then another. They grow monotonically larger, accumulating loopholes, exceptions, and corner cases.
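A toy simulation makes the asymmetry vivid. All the numbers here are invented for illustration; the only assumption doing the work is that adding an exception is far easier than removing one:

```python
import random

random.seed(0)
rules = 10  # a hypothetical rule system starts out simple

for year in range(50):
    for actor in range(20):            # each actor lobbies once a year
        if random.random() < 0.30:     # small chance their pet exception passes
            rules += 1                 # additions are easy...
        if random.random() < 0.02:     # ...removals are rare
            rules = max(1, rules - 1)

print(rules)  # no single year felt different, but the rulebook is now huge
```

No individual change was alarming, and no individual year looked different from the last; the growth only shows up when you compare across decades, which is precisely the comparison our adjustment machinery is worst at.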
Complexity and corner cases aren’t just aesthetic concerns, though I’m certainly guilty of more than my share of programmer OCD. You can tell when your plant is about to die because it turns brown over a period of days. The institutions we neglect go bad over decades, which is much harder to notice. In addition, as rules grow more complex and bureaucracies accumulate forms, they become more intimidating for a newcomer to understand. If you haven’t already been paying attention, it gets harder and harder to catch up. More and more, we treat them as externalities, an unpleasant task to be completed with dispatch and wiped from memory. As people shy away from the complexity, control is gradually ceded to a small coterie of insiders who grow fluent in the complexity and increasingly (first unconsciously, later deliberately) work to maintain a ‘moat’ around their influence. This is how our institutions get captured: low ownership, treatment as an externality, and oh-so-gradually creeping complexity. To round out the toxic cocktail, capture gives insiders further incentive to deliberately make the rules more complex and inaccessible to latecomers.
Why don’t we do something about it? Here our bias for action enters the picture. Because we lack ‘perfect cognitive pitch’, our brains rely to a great degree on thinking by doing. If you think you aren’t ‘good with your hands’, reflect on the next monkey you see, and on the closeness of his relation to us. Monkeying is what we do, and we’re exceedingly good at it. But monkeying can get us into trouble. Because we rely so much on tweaking, we are more persuaded by calls to action than by criticism that doesn’t propose an action of its own. We prefer glamorous actions to schleps. The monorail episode of The Simpsons is an incisive take on this tendency.
A more lyrical way to say this: our long-lived institutions often lie, metaphorically, on a very flat plain with a huge storm creeping very slowly toward them. The movie Idiocracy is a thought-provoking illustration of such a situation. In situations like these, our evolved instincts put us at risk of bouncing aimlessly around the plain, seemingly doing a lot while actually accomplishing nothing. What we need instead is to go against every instinct we have: to sit back and reflect, motionless and frog-like, for potentially a very long time, before we make the very consequential decision of which way to jump.
But this course can be very hard to orchestrate for large groups of people. In the past, groups and societies have dealt with such situations by eliminating the need for orchestration: electing a dictator in times of crisis. That solution has an obvious principal-agent problem: you can’t be sure the dictator will do what you want, or that he’ll give power back when you want it. A second, more subtle issue is that installing a dictator requires detecting a crisis. The storm moves slowly, and the monkeys will sense no crisis until it’s too late. What we need is a decentralized way of working that continually monitors the storm while fitting our strengths and weaknesses.
Glimmerings of a solution
Now some of the properties I just described will seem very familiar to engineers. Engineers are used to domains where we have to deal with very small numbers, where it’s futile to estimate the answer (you just have to measure it), and where getting the answer even slightly wrong can be catastrophic. Since I’m a programmer, and that’s almost as good as an engineer, let me try to describe how we deal with these difficulties:
- As we build something, we build scaffolding alongside it: something that lets us probe its properties using test drills. How strong is it? How fast? Does it break if you leave it in the sun for three days? Do the hinges corrode after 3,000 swivels? Such drills need to be easy to set up, so that we don’t have to build the whole bridge before we can check that it works. They need to be reliable, so that passing them means the bridge is ok. And they need to be reproducible: if I run the same drill multiple times and get different answers, perhaps that bridge isn’t worth building. Over time, the set of accumulated drills for building a road or a bridge gets handed down in the course of educating new engineers. It’s easy to take them for granted, but each drill is precious.
- Anytime we attempt a change, we go through all the drills to make sure everything still works.
- We try to keep our ambitions parsimonious, because even relatively simple drills can be expensive. We continually look for fat to trim, in a process called… refactoring.
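In software, a ‘drill’ is just an automated test. A minimal sketch, with an invented load-rating function standing in for the thing being built (the function, names, and thresholds are all hypothetical):

```python
def max_load_kg(span_m: float, beam_count: int) -> float:
    """Toy model: each beam carries less as the span grows."""
    return beam_count * 1000 / span_m

# The drills: cheap, reliable, reproducible checks we can run
# without building the actual bridge.
def drill_short_span_is_strong():
    assert max_load_kg(span_m=2, beam_count=4) >= 1500

def drill_doubling_beams_doubles_capacity():
    assert max_load_kg(10, 8) == 2 * max_load_kg(10, 4)

def drill_is_reproducible():
    # Same inputs, same answer, every time.
    assert max_load_kg(5, 3) == max_load_kg(5, 3)

# Anytime we change max_load_kg, we rerun every drill.
for drill in (drill_short_span_is_strong,
              drill_doubling_beams_doubles_capacity,
              drill_is_reproducible):
    drill()
print("all drills pass")
```

The drills outlive any one version of the function; they are the part that gets handed down.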
Can we apply these lessons to our laws, our org charts, our co-op bylaws? I think so. The answer wasn’t obvious to programmers either, and it took us decades to arrive at it, but the answer we came up with for software applies, I think, to other kinds of rule systems as well: write down the various scenarios you considered when you came up with your rules, and what the outcome of each should be. Don’t let anyone change the rules until they can convince everyone that the existing scenarios will continue to work as before (or that some scenario is now obsolete and need not be considered). That’s how you keep systems of rules from circling the drain of regulatory capture.
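Concretely, a rule plus its recorded scenarios might look like this. Everything here is invented for illustration: a hypothetical co-op quorum bylaw, written as code, with the scenarios its authors considered preserved as executable checks:

```python
def quorum_met(members: int, present: int) -> bool:
    """Current rule: more than half the members must be present."""
    return present * 2 > members

# The scenarios we considered when writing the rule, with the
# outcomes we intended. Anyone proposing a change to quorum_met
# must show these still hold, or argue a scenario is obsolete.
scenarios = [
    (10, 6, True),    # just over half: quorum
    (10, 5, False),   # exactly half: no quorum
    (11, 6, True),    # odd membership: 6 of 11 suffices
    (10, 0, False),   # empty room: never quorum
]

for members, present, expected in scenarios:
    assert quorum_met(members, present) == expected
print("all scenarios still hold")
```

The scenarios are the institution’s memory of why the fence went up; a proposed amendment that silently breaks the “exactly half” case gets caught even after the original authors are long gone.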
But all this checking and rechecking of rules is extremely tedious. Nobody wants to do it; it will bleed people’s motivation and sense of ownership, and then we’d be back where we started. Can this be fixed? Here I’m on even thinner ice, but I’ll explore the question in my next post.