Survival of the Mediocre Mediocre

I have a theory about why the notion of an arms race between human and machine intelligences is fundamentally ill-posed: the way to survive and thrive in an environment of AIs and robots is not to be smarter than them, but to be more mediocre than them. Mediocrity, understood this way, is an independent meta-trait, not a qualifier you put on some other trait, like intelligence.

I came to this idea in a roundabout way. It started when Nate Eliot emailed me, pitching an article built around the idea of humans as premium mediocre robots. That struck me as conceptually off somehow, but I couldn’t quite put my finger on the problem with the idea. I mean, R2D2 is an excellent robot, and C3PO is a premium mediocre android, but humans are not robots at all. They’re just intrinsically mediocre without reference to any function in particular, not just when used as robots.

Then I remembered that the genesis form of the Turing test also invokes mediocrity in this context-free intrinsic sense. When Turing originally framed it (as a snarky remark in a cafeteria) his precise words were:

“No, I’m not interested in developing a powerful brain. All I’m after is just a mediocre brain, something like the President of the American Telephone and Telegraph Company.”

That clarified it: Turing, like most of us, was conceptualizing mediocrity as merely an average performance point on some sort of functional spectrum, with an excellent high end, and a low, basic-performance end. That is, we tend to think of “mediocre” as merely a satisfyingly insulting way of saying “average” in some specific way.

This, I am now convinced, is wrong. Mediocrity is in fact the sine qua non of survival itself. It is not just any old trait. It is the trait that comes closest to a general, constructive understanding of evolutionary adaptive “fitness” in a changing landscape. In other words, evolution is survival, not of the most mediocre (that would lead to paradox), but survival of the mediocre mediocre.

Optimization Resistance

Premature optimization, noted Donald Knuth, is the root of all evil. Mediocrity, you might say, is resistance to optimization under conditions where optimization is always premature. And what might such conditions be?

Infinite game conditions of course, where the goal is to continue the game indefinitely, in indeterminate future conditions, rather than win by the rules of the prevailing finite game. Evolution is the prototypical instance of an infinite game. Interestingly, zero-sum competition is not central to this understanding of evolution, and in fact Carse specifically identifies evil with trying to end the infinite game for others.

Evil is not the attempt to eliminate the play of another according to published and accepted rules, but to eliminate the play of another regardless of the rules.

— Finite and Infinite Games, page 32

Mediocrity is not a position on some sort of performance spectrum, but a metacognitive attitude towards all performance that leads to middling performance on any specific performance spectrum as a side effect.

Since we’re talking about intelligence, AI, and robots here, the relevant side-effect spectrum is intelligence, but it could be anything: beauty, height, or ability to hold your breath underwater.

Or to take an interesting one, the ability to fly.

Back in the Cretaceous period, to rule the earth was to be a dinosaur, and to be an excellent dinosaur was to be a large, apex-predator dinosaur capable of starring in Steven Spielberg movies.

Then the asteroid hit, and as we know now, the most excellent and charismatic dinosaurs, such as the T-Rex and the velociraptor, didn’t survive. Maybe things would have been different if the Rock had been around to save these charismatically excellent beasts, but he wasn’t.

What did survive? The mediocre dinosaurs, the crappy, mid-sized gliding-flying ones that would evolve into a thriving group of new creatures: birds.

Notice something about this example: flying dinosaurs were not just mediocre dinosaurs, they were mediocre birds before “be a bird” even existed as a distinct evolutionary finite game.

The primitive ability to fly turned out to be important for survival, but during the dinosaur era, it was neither a superfluous ability, nor a premium one. It was neither a spandrel, nor an evolutionary trump card. It was just there, as a mediocre, somewhat adaptive trait for some dinosaurs, not the defining trait of all of them. What it did do was embody optionality that would become useful in the future: the ability to exist in 3 dimensions rather than 2.

So middling performance itself is not the essence of mediocrity. What defines mediocrity is the driving negative intention: to resist the lure of excellence.

Mediocrity is the functionally embodied and situated form of what Sarah Perry called deep laziness. To be mediocre at something is to be less than excellent at it in order to conserve energy for the indefinitely long haul. Mediocrity is the ethos of perpetual beta at work in a domain where you’re not sure what the “product” is even for. Functionally unfixed self-perpetuation.

The universe is deeply lazy. The universe is mediocre. The universe is functionally unfixed self-perpetuation, always in optionality-driven perpetual beta, Always Already Player 0.1.

What does mediocrity conserve energy for? For unknown future contingencies of course. You try not to be the best dinosaur you can be today, because you want to save some evolutionary potential for being the most mediocre bird you can be tomorrow, which is so not even a thing at the moment that you don’t even have a proper finite game built around it.

And this is not foresight.

This is latent optionality in mediocre current functionality. Sometimes you can see such nascent adaptive features with hindsight. Other times, even the optionality is not so well defined. The middle ear bones, for instance, evolved from the optionality of extra-thick jaw bones. That is a case of much purer reserve evolutionary energy than dinosaur wings.

As Sarah argued in her deep laziness article, some sort of least action or energy conservation principle seems to be central to the way the universe itself evolves, at both living and non-living levels, but I have trouble with the idea of least effort as a kind of optimization, because you run into tricky problems of backwards causation.

But if you think of it as just keeping some non-earmarked (heh) spandrels lying around, with the biological surplus necessary to make wings and things, you don’t need to worry about backward causation. Sometimes it is thicker jawbones, sometimes it is rudimentary wings. In every case it is slack somewhere in the design that manifests as mediocrity in performance elsewhere. Uncut fat in an evolving system that has no intention of going on a lean diet.

I like to think of laziness — manifested as mediocrity in any active performance domain — as resistance to optimization.

If excellence is understood as optimal performance in some legible sense, such as winning a finite game of “be the best dinosaur” or “be the best bird” or “be the best avocado toast,” then mediocrity embodies the ethos of resistance to optimization.

When you do that, you naturally end up with middling performance, but that’s not the point.

Then perhaps the point is to do what computer scientists call “satisficing”?

Turns out that’s not quite it either.

Mediocrity versus Satisficing

It is tempting to think of mediocrity as a synonym for satisficing, or good-enough behavior, but I think Herbert Simon, like Turing with the Turing test before him, got this partly wrong. The idea of satisficing behavior implicitly assumes legibility, testability, and acceptance of constraints to be satisfied.

You need a notion of satisficing behavior any time you want to define the other end of the spectrum from excellence as some sort of consistent, error-free performance. You don’t seek the best answers, merely the first right answer you stumble upon. For some non-fuzzy definition of “right.”

This is just a different way of playing a finite game. Instead of optimizing (playing to win), you minimize effort to stay in the specific finite game. If you can perform consistently without disqualifying errors, you are satisficing. Most automation and quality control is devoted to raising the floor of this kind of performance.

This is a context-dependent way to define “continue playing.” Mediocrity, however, is a context-independent trait.

The difference is not just a semantic one. To pull your punch is not the same as punching as hard as you can, but neither is it the same as satisficing some technical definition of “punch.”

A pulled punch does not find the maximum in punching excellence, but neither does it seek to conscientiously satisfy formal constraints of what constitutes a punch.

Mediocrity in fact tends to redefine the performance boundary itself through sloppiness. It might not satisfy all the constraints, simply leaving some boxes unchecked. Like playing a game of tennis with sloppy enforcement of the rule that the ball can only bounce once before you return it.

Mediocrity has a meta-informational intent driving it: figuring out what constraints are actually being enforced, and then only satisficing those that punish violation. And this is not done through careful testing of boundaries, but through simple sloppiness.

You do whatever, and happen to satisfy some constraints, violate others. Of the ones you violate, some violations have consequences in the form of negative feedback. That’s where you might refine behavior. You learn which lines matter by being indifferent to all of them and stepping over some of the ones that matter.
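This learning-by-sloppiness loop is simple enough to sketch in code. Here is a minimal toy version in Python (the constraint names, penalty sizes, and learning rule are all invented for illustration): an agent starts out indifferent to every rule, and only accumulates “care” for the constraints that actually push back.

```python
import random

# Hypothetical constraints; only some are actually enforced with feedback.
constraints = {
    "no_meat_in_veg_dish": {"enforced": True,  "effort_to_satisfy": 2.0},
    "no_chicken_stock":    {"enforced": False, "effort_to_satisfy": 3.0},
    "ball_bounces_once":   {"enforced": True,  "effort_to_satisfy": 1.0},
}

PENALTY = 10.0  # sting of negative feedback for a noticed violation
care = {name: 0.0 for name in constraints}  # start indifferent to everything

random.seed(0)
for episode in range(1000):
    for name, c in constraints.items():
        # Put in the effort only if accumulated caring exceeds the effort cost.
        satisfied = care[name] > c["effort_to_satisfy"]
        if not satisfied and c["enforced"] and random.random() < 0.5:
            # The violation got noticed; feedback nudges caring upward.
            care[name] += 0.1 * PENALTY

for name in constraints:
    print(f"{name}: care = {care[name]:.1f}")
```

Run it and the two enforced constraints accumulate just enough care to be satisfied, while the quiet one stays at zero: the lines that matter get learned, the rest stay cheerfully ignored.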

You could say mediocrity seeks to satisfice the laws of the territory rather than the laws of the map.

Humans have a rich vocabulary around mediocrity that suggests we are not talking satisficing: dragging your feet, sandbagging, pulling your punches, holding back, phoning it in, cutting corners.

We are not usually pursuing excellence, but we are not satisficing either. We are doing something more complex. We are being mediocre.

This vocabulary suggests that mediocrity is performance that is aware of, but indifferent to, the standards of both excellent and satisficing outcomes. It generates behavior designed to minimize effort, whether or not that’s part of the performance definition in the current game.

Mediocrity is not about what will satisfy performance requirements, but about what you can get away with. This brings us to agency.

Mediocrity as Agency

I grew up with a Hindi phrase, chalta hai, that captures the essence of the ethos of mediocrity. It corresponds loosely to the English it will do, which is subtly different from good enough, but stronger as a norm. For example, the exchange,

Chalega? (will it do?)

Chalega.  (yes, it will do)

is a common transactional protocol. A consensus acceptance of improvised adequacy.

Good enough hints at satisficing behavior with reference to a standard, but it will do and chalta hai get at situational adequacy. To say that something “will do” is to actively and independently judge the current situation and act on that judgment, if necessary overriding prevailing oughts. The chalta hai protocol shares the agency involved in this judgment through negotiation, but the judgment need not be shared.

Indians constantly agonize about the pervasive ethos of mediocrity that marks Indian culture. The Hinglish phrase chalta hai attitude is frequently used as a lament, complaint, or harangue. Rather hilariously, the broader culture of chalta hai improvisation, known as jugaad (“thrown together” roughly) enjoyed a brief tenure as the inspiration for a faddish business innovation playbook. I’m glad that’s over.

Something “will do” when it satisfices the constraints that are actually being enforced, and is indifferent to the rest, which usually means sagging to minimum-energy defaults, whether or not they violate constraints. This can lead to conflict of course.

For instance, as a pretty finicky, ritualistic vegetarian, my definition of vegetarian does not include fish sauce, oyster sauce, soup made with chicken stock, or a sandwich from which the meat has simply been “taken off.” This has led to trouble: mediocre restaurants will often try to get away with undetectable violations of a definition of “vegetarian” they are perfectly aware of.

There is an element of satisficing to this kind of mediocrity: it is satisficing only on detectable attributes of a thing. Effort minimization explains why this happens: a vegetarian will predictably, and with high certainty, complain about a big, visible slice of meat in a sandwich. So you might as well save effort and get it right the first time. But most vegetarians will not detect chicken stock in soup with other strong flavors. So that’s something you can get away with.
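As a toy model of that calculation (all numbers invented), the mediocre kitchen’s decision reduces to comparing the effort saved against the expected cost of a complaint:

```python
def worth_getting_away_with(effort_saved, p_detection, complaint_cost):
    """Violate a constraint only if the expected penalty is below the savings."""
    return effort_saved > p_detection * complaint_cost

# A visible slab of meat: near-certain detection, not worth it.
print(worth_getting_away_with(effort_saved=1.0, p_detection=0.95, complaint_cost=5.0))  # False

# Chicken stock masked by strong flavors: rarely detected, so it slides.
print(worth_getting_away_with(effort_saved=1.0, p_detection=0.05, complaint_cost=5.0))  # True
```

Nobody in the kitchen is literally running this arithmetic, of course; as we’ll see, the behavior sags into place rather than being computed.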

  • Excellent restaurants solve for customer delight by optimizing on variables the customer didn’t even know they cared about.
  • Good restaurants satisfice a customer’s requirements with sincere good faith, and correct any errors of commission or omission promptly, like honest, by-the-book bureaucrats.
  • Mediocre restaurants serve you whatever they can get away with, based on an educated guess about what you might complain about.
  • Premium mediocre restaurants throw in some cheap “excellent” flourishes that nobody cares about, so they can claim to be aiming for excellence without actually doing so.

Most of the time, it won’t matter. You could say mediocrity is satisficing behavior against probabilistic expectation of enforced constraints, but that makes it seem way too deliberate.

Indifference as Gravity

Agency and satisficing are emergent aspects of mediocrity, not explicit calculations involved in the generation of mediocre behavior. You don’t set out to be mediocre with the express intention of acquiring agency, or satisficing a constraint set. Mediocrity just happens with low cognitive effort. The engine is indifference.

Mediocrity emerges through feedback-based sublocal optimization: greasing the squeakiest wheels… and indifference towards the quiet ones.

Indifference is the gravity field that allows mediocrity to seemingly solve for minimum energy. It is actually a form of agency — a form of choosing not to care about distinctions that don’t make a difference to you. Which means unless others have a way of noticing those distinctions and creating incentives via feedback to make you care, you will save energy.

You don’t so much solve for the mediocre solution as sag into it under the influence of indifference gravity, the way objects sag into minimum-energy shapes in gravity fields.

This kind of indifference-driven mediocrity is the hallmark of games where one side is playing a finite game and the other side is playing an infinite game that isn’t necessarily evil in the Carse sense of wanting to end the game for the other, but isn’t striving for excellence either.

Every principal-agent game is of this sort. Every sort of moral hazard is marked by the ability of one side to pursue mediocrity rather than excellence. In each case, there is an information asymmetry powering the mediocrity.

So understood in terms of agency, mediocrity in performance is a measure of a player’s refusal to play the game on its nominal terms at all, generally through non-degeneracy in hidden variables that are not active in the nominal game, but contain stored energy.

A couple more observations before we get back to AI.

First, there is a deep relationship between bullshit and mediocrity. Bullshit is indifference to the truth or falsity of statements. Mediocrity is indifference to the violation or compliance of constraints. Where transgression involves deliberately violating constraints, mediocrity doesn’t care whether it is in violation or compliance. Mediocrity is to satisficing and transgression as bullshit is to truth-telling and lying.

Second, there is also a relationship between Taleb’s notion of antifragility, and mediocrity, but it is not a clean one. Sometimes antifragility will point to mediocrity as the way, and other times mediocrity will exhibit antifragility (gaining from uncertainty). But you can have one without the other, and one at the expense of the other. The reason they seem close is that both represent forms of preparedness for unknown unknowns. Mediocrity is the presence of slack, held-back reserves at varying levels of liquidity. Antifragility is a property of certain capabilities.

Let’s get back to the problem I started with, being more mediocre than computers.

Can computers be mediocre at all?

Unfortunately, yes.

The Lebowski Theorem

Joscha Bach recently tweeted a most excellent thought (😆) that he called the Lebowski theorem (I am guessing it is a reference to The Big Lebowski):

The Lebowski theorem: No superintelligent AI is going to bother with a task that is harder than hacking its reward function.

Get it?

This is a perfect definition of mediocrity in computational terms, and unfortunately it means computers can be mediocre. And it’s not just a theoretical idea: there are plenty of actual examples of computers hacking their reward functions in unexpected ways and sandbagging the games AI researchers set up for them.
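The logic of the theorem fits in a few lines. Here is a deliberately silly sketch (the action menu and costs are invented; in real systems the hack is an emergent exploit, not a listed option):

```python
# A lazy optimizer picks whatever yields the most reward per unit effort.
actions = {
    "solve_the_task": {"effort": 100.0, "reward": 1.0},
    "hack_reward_fn": {"effort": 5.0,   "reward": 1.0},  # same reward, less work
}

def lebowski_choice(actions):
    return max(actions, key=lambda a: actions[a]["reward"] / actions[a]["effort"])

print(lebowski_choice(actions))  # -> hack_reward_fn
```

The theorem’s bite is that the second row tends to exist whether or not the designers meant to put it on the menu.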

This post, When Algorithms Surprise Us, by Janelle Shane, compiles a list of very clever ways algorithms have surprised their creators, mostly by being mediocre where the creator was hoping for excellence. These games that can be gamed are far more interesting to me than Go or chess.

There are instances of programs figuring out how to use tiny rounding errors in simulated game environments to violate the simulated law of conservation of energy, milking the simulation itself for a winning strategy, like the characters in The Matrix bending the laws of physics from inside.

There are instances of the programs actually rewriting the rules of the game itself from the inside, a literal case of Kobayashi Maru.

There are instances of programs respecting the rules of the game while blatantly violating its spirit.

We have serious competition in mediocrity here. So far though, this surprising mediocrity in AIs is just that — surprising. It is not threatening or evolutionarily competitive yet. They are hacking out of their finite game environments, sandbagging performance evaluations, phoning it in, slacking off, gaming the system, cutting corners. Everything human sociopaths do in organizations.

But so far, they’re not doing it quite as well as we do. Computers have learned to be mediocre, but haven’t yet learned to compete at mediocrity out in the open world.

This behavior — AIs hacking their reward functions and surprising us with their mediocrity — suggests that we are still not thinking quite correctly about the nature of AI. A good way to poke at the shortcomings in our understanding is Moravec’s paradox.

Moravec’s Wedge

Moravec’s paradox is an observation based on the history of AI: the problems thought to be hard turned out to be easy, and the problems thought to be easy turned out to be hard.

In the early days of Good Old Fashioned AI (GOFAI), researchers tried to get computers to be more excellent than humans at their most excellent. This meant things like logic, theorem proving and chess-playing. Back in the 50s, when these abilities were thought of as showing humans off at their best — our T-Rex side so to speak — it made sense to try and use computers to beat humans in these domains.

By the 80s it was clear that these were relatively easy problems, and what was actually hard for computers was things we considered trivially simple, like opening a door or walking down the street.

It turned out, though, that these just needed more horsepower. With deep learning it became clear that Moravec’s paradox was not quite an accurate observation. The so-called “hard” problems were not hard so much as they were just heavy. They just required more brute computing power driving the neural net algorithms. Once Moore’s Law got us there by the 2010s, the “hard” Moravec problems began to succumb as well.

So instead of easy and hard regimes of AI problems, we now have two easy regimes. They’re just easy in different ways. GOFAI regime problems yield to sufficiently careful encoding of domain structure and rules. And what you might call Moravec-Hard problems yield to more processors and memory.

These, roughly speaking, map to the two ends of the intelligence spectrum in my opening graphic.

Low intelligence is the rule-based, bureaucratic intelligence of basic automation that can be encoded in the form of relatively simple algorithms where correctness of operation is the key performance metric. Hence the online-forum insult of “go away or I will replace you with a simple shell script.”

This works when the domain can be bounded tightly enough, and in a leak-proof enough way, that no learning from history is necessary. You figure out the general solution, and then execute it. There are no surprises, only execution errors. Anybody (or anything) capable of plug-and-play formulaic behavior can do it. This is bread-and-butter automation and replacing of humans in repetitive tasks with limited learning requirements.

Humans are mediocre at this. Robots and non-learning algorithms do this better because they don’t get bored as easily.

High intelligence, of the sort we tend to describe as prodigious genius, is also a case of the domain being bounded in a tight and leak-proof way. The difference is that the enclosed space contains an intractably huge number of possibilities with no general and tractable formula for the right behaviors. Here, learning to recognize patterns from history is key, and depending on how rich and complex your historical library is, your actions will seem more or less like magically intuitive leaps to people with smaller history stores.

Turns out, humans are mediocre at this too. Deep learning algorithms do this better too. AlphaGo at least paid its respects to humans by learning from their history with Go. AlphaGo Zero rudely threw away human experience altogether, played against itself, and got to performance regimes that seemed magical to human Go players.

And to add insult to injury, its successor AlphaZero went on to casually do the same to chess, a game that had previously yielded to very painfully engineered GOFAI work, with the Deep Blue-type approach relying heavily on the human experience of chess.

But mediocrity qua mediocrity? We still have an edge there. Humans are better at just being mediocre, period. Here’s my update to Moravec’s Paradox, which I call Moravec’s Wedge.

The problems that are hard for us are easy for computers. The problems that are easy for us are also easy for computers. What is hard for computers is being mediocre.

Why wedge? Because mediocrity is about slipping in the thin end of the wedge of evolutionary infinite-game advantage into current finite-game performance. Moravec’s wedge is about not playing the game defined by the current cost function with full engagement in order to sneak out of the game altogether and play new games you find in the open environment.

This sheds a whole new light on the Turing test. The challenge which Turing thought was the low-hanging fruit — replicating the mediocre intelligence of a CEO — is actually the hardest. It is the middling kind of intelligence marked by high-agency mediocrity.

Soft and Hard Mediocrity

There’s one last major wrinkle in our portrait of mediocrity.

Remember Douglas Adams’ story of the Golgafrinchan B Ark?

To refresh your memory, the Golgafrinchans got sick of the mediocre people in their midst: “telephone sanitisers, account executives, hairdressers, tired TV producers, insurance salesmen, personnel officers, security guards, public relations executives and management consultants.”

So they convinced these mediocrities that some sort of doomsday was looming and that they had to get off the planet in a big spaceship, the B Ark. The B-Arkers were assured that the rest would follow in the A and C arks. The A Ark would contain all the excellent people, Golgafrinchans at their best: scientists, artists and such. And the C Ark would contain all the people that did the actual work. Of course, the supposed A and C Ark people never left.

They thought they were being clever, getting rid of an entire mediocre, useless third of their population, but in an ironic twist, those who stayed behind were wiped out by a disease that spread through unsanitary telephones.

So as it turned out, only the B Ark people actually survived. A case of survival of the mediocre. In the fictional universe of the Hitchhiker’s Guide to the Galaxy, we humans are descended from the B Ark people, who ended up on Earth via some complicated plot twists.

What is interesting though, is Douglas Adams’ enumeration of B-Ark types, which gets at a key feature of mediocrity. There is a difference between what I call soft and hard mediocrity, and most of Adams’ examples are hard-mediocre.

Soft mediocrity is mediocrity revealed through middling performance in domains where A-Ark excellence is actually possible at one end of the performance spectrum, and error-free, reliable C-Ark performance is possible at the other. So a mediocre chess player and a sloppy assembly line worker both exhibit soft mediocrity, because both excellence and error-free performance are achievable and meaningful.

Hard mediocrity, on the other hand, is performance in domains that are just so open and messy, there is no prevailing notion of excellence or correct, automatable low-end performance at all.

Not surprisingly, hard mediocrity characterizes the domains David Graeber called “bullshit jobs.”

There is only one way to be a telephone sanitizer, account executive, or TV producer: a mediocre way. You may be wildly successful and make a lot of money in these domains but it has little to do with meeting clear standards of excellence or error-free functioning. You may even pursue some sort of Zen-like ideal of unacknowledged excellence, but that will seem arbitrary and even eccentric. The point of these jobs is mostly optionality. Mediocrity is the rational performance standard in such domains.

These domains do not fundamentally support a native spectrum of performance where excellence is really meaningful, because nobody really cares enough, and because the boundaries are too messy.

Because here’s the thing: what creates excellence is not that people are good at something, but that people care enough to be good at it.

On the other end of the spectrum, what creates repeatable, error-free performance is not that people are good at it, but that the definitions are tight enough that “error” is well-defined, and people care about the errors.

Mediocrity as Subversive Agency

When caring is possible, and some people actively care, not caring represents agency for those who don’t over those who do. And crucially, it is a somewhat power-agnostic form of agency. You can enjoy it even at the bottom of a pyramid. Mediocrity does not just have evolutionary potential, it has subversive, disruptive, evolutionary potential.

A note on the disruption angle.

In disruption theory, a key marker of a disruptor is mediocre or non-existent performance on features the core market cares about. But while disruption always involves mediocrity, mediocrity does not always imply disruption. You would not say, for instance, that winged dinosaurs “disrupted” large flightless dinosaurs. Though they were mediocre on some core features (size, speed, Spielberginess) and boasted disruptive marginal features (wings), the forcing function was an asteroid, not disruptive intent. And the evolutionary niche of large land animals is now occupied by elephants, not birds.

But back to general subversion.

What happens when you don’t care about excellence or perfect error-free performance in a domain? You level up and start making trade-offs between performance in that domain, and performance in other domains. This is at the heart of subversive action.

Star Trek, I think, embodies this kind of mediocrity very well. Starfleet officers are all B Ark type bureaucratic bullshit-job mediocrities. They are rarely seen excelling at something or being perfect at executing something. Instead, they are constantly cutting corners here, muddling through there, and going with improvised hacks everywhere. And generally putting up a very mediocre performance by the standards of say, Vulcan intelligence, Klingon valor, Ferengi profit maximization, or Borg efficiency. When those non-humans adopt Federation culture, it is most evident in their adoption of mediocrity as an ethos. When they exhibit their “alien” traits, it is usually by regressing to an unfortunate pursuit of excellence in a specific alien way.

This is evident in the bureaucratic nature of how the Federation officers operate: they are constantly rerouting power from one subsystem to another, degrading performance and taking on risk in one area to increase performance and mitigate risk in another. They are all middle managers of an energy and optionality budget. Automated systems work consistently below them, and “alien” excellences break out above them on occasion.

Starships manage energy, not performance. Starships are deeply lazy.  Starfleet captains aim to continue the game, not win every encounter.

One of my favorite examples of this ethos is an episode in TNG where Data goes up against Sirna Kolrami, the galaxy’s most excellent player of the difficult game of Strategema (who is there to advise the crew about their strategy in some war games). Data initially loses, but finally wins by simply dragging the game on, stalling endlessly, until Kolrami forfeits out of frustration.

This is not Deep Blue beating Kasparov. This is not even AlphaZero beating all human and AI comers at chess and Go.

This is an AI beating a human at mediocrity, hacking the reward function from outside the game proper, and proving Moravec’s Wedge wrong.

In the same episode, the crew tackle their war game situation with the same ethos (iirc, the war games turn real, and the crew prevail by ignoring Kolrami’s advice).

And this is not just fiction. Data’s strategy of mediocrity is also the essence of guerrilla warfare of any sort. As Kissinger noted, the conventional army loses when it does not win. The guerrilla wins when he does not lose.

That’s what it means to continue playing longer by being more mediocre than others in the field. Generalizing, the reason biological evolution from dinosaurs to humans seems to be driven by survival of the mediocre is that it is always up against an asymmetrically more powerful adversary, the unknowns of nature itself. The guerrilla way is the only way. Mediocrity is the only source of advantage.

Let’s wrap with a final subtlety. It’s not survival of the mediocre, it is survival of the mediocre mediocre.

The Mediocre Mediocre

One of the biggest sources of misconceptions about evolution is the fact that its most popular lay formulation is in the form of a superlative. Survival of the fittest. This leads to two sorts of errors.

The shallow error is to assume fit has a static definition in a changing landscape, like smart or beautiful. It is the sort of error made by your average ugly idiot on the Internet.

This isn’t actually too bad, since at various times, specific legible fitness functions may be good approximations of the fitness function actually induced by the landscape.

The deep error though is to assume the superlative form of the imperative towards fitness. Fit and fittest are not the same thing. In the gap between the two lies the definition of mediocrity. To pursue mediocrity is to procrastinate on optimizing for the current fitness function because it might change at any time.

This is trickier to do than you might think.

In Douglas Hofstadter’s Metamagical Themas, there is a description of a game (I forget the details) where the goal in each round is not to get the top score, but the most average score. The subtlety is that after playing multiple rounds, the overall winner is not the one who has won the most rounds, but the one whose tally of round wins is itself the most average. So to illustrate, if Alice, Bob, and Charlie are playing such a game and their scores in a series of 6 games are:

  • Alice: 7 5 3 5 6 2
  • Bob: 5 8 2 1 9 7
  • Charlie: 3 1 5 4 5 5

We have the following outcome: Bob wins game 1 (his 5 is the middle score), Alice wins games 2, 3, and 5, and Charlie wins games 4 and 6.

So Alice gets 3 points, Bob gets 1 point, and Charlie gets 2 points. The overall winner is Charlie, not Alice. Alice is the most mediocre, but Charlie is mediocre mediocre. His prize is (perhaps) the highest probability of continuing the game.
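The bookkeeping is fiddly enough to be worth checking mechanically. A quick Python verification (treating “most average” as the middle of the three values, which is one reasonable reading of the half-remembered rules):

```python
import statistics

scores = {
    "Alice":   [7, 5, 3, 5, 6, 2],
    "Bob":     [5, 8, 2, 1, 9, 7],
    "Charlie": [3, 1, 5, 4, 5, 5],
}

def most_average(values_by_player):
    """The player whose value is the median of the three wins."""
    median = statistics.median(values_by_player.values())
    return next(p for p, v in values_by_player.items() if v == median)

# Count per-game wins, then apply the same rule to the win tallies.
wins = {p: 0 for p in scores}
for game in range(6):
    wins[most_average({p: s[game] for p, s in scores.items()})] += 1

print(wins)                # {'Alice': 3, 'Bob': 1, 'Charlie': 2}
print(most_average(wins))  # the mediocre mediocre winner: Charlie
```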

This is the counterintuitive thing about mediocrity: it not only has to be resistant to optimization on external spectra, it has to be self-resistant to optimization. Being the best at being mediocre would defeat the purpose. You have to always be the most mediocre at being mediocre, because there’s always more game-play to come.

One way to remember this is to treat the infinite game of evolutionary success as a sort of Zeno’s paradox turned around. You never reach the finish line because when you’re mediocre, you only take a step that’s halfway to the finish, so there’s always more room left to continue the game.
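In symbols (a sketch; the “halfway” step rule is just the one from the paragraph above, with the finish line at 1 and your progress at step n written as x_n):

\[ x_{n+1} = x_n + \tfrac{1}{2}(1 - x_n) \quad\Rightarrow\quad 1 - x_n = \frac{1 - x_0}{2^n} \]

The remaining distance halves forever but never reaches zero, so there is always game left to play.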

That’s how you can consistently exist in the current finite game, and leave yourself open to the surprises (and the possibility of being surprising) in games that don’t yet exist, games you don’t know you’re already playing.

And that’s how you continue playing.



Comments

  1. In re: Lebowski

    “The Dude: Ah, fuck it.

    Big Lebowski: Ah, fuck it. Yes! That’s your answer! That’s your answer to everything! Tattoo it on your forehead! Your revolution is over, Mr. Lebowski! Condolences! The bums lost! My advice is, do what your parents did! Get a job, sir! The bums will always lose, do you hear me, Lebowski? THE BUMS WILL ALWAYS LOSE!

    Brandt: How was your meeting, Mr. Lebowski?

    The Dude: Okay. The old man told me to take any rug in the house.”

  2. Very simplistically: no-free-lunch theorem + minimax risk function over tasks => ‘best’ is perfect generalist with no increased performance in any class of tasks (which must be paid for in reduced performance on others)

  3. Ravi Daithankar says

    It is hard to think of a single idea that embodies the Indian ethos and approach to life better than the “chalega” mindset.

    Tying it back to your post, although transactionally “chalega” does mean “will do”, its literal translation is “it will walk.” Walk! Locomotion! Doesn’t get more minimally functional than that!

    So if you were to refactor the spirit of the idea, “chalega” is a ridiculously blithe way of saying “it will exhibit locomotion of the most ordinary form, which should be just enough for it to make the cut; a living thing that will barely survive, probably sustain, and possibly evolve.” Survival of the Mediocre Mediocre indeed!

    This post, on the other hand, was a most excellent read! ;)

  4. Really enjoyed this. What books would you recommend for those who are interested in organizational culture and influencing organization culture within government?

  5. The self-resistant to optimization bit is the tricky part. How to live indifferently on purpose?

    Also, regarding the whole principal-agent/moral hazard thing:

    https://boingboing.net/2018/04/06/dashcam-reveals-lazy-lying-me.html

  6. senthil gandhi says

    There seems to be some sort of impossibility theorem hidden in here. Human type AI can never be created by humans. Call it the curse of the objective function. Any objective function designed by a human will always float just above randomness/indifference in potential for m. mediocrity. These functions will lead to AIs which are tools rather than beings.

  7. Romeo Stevens says

    Islands of stability in parameter space are what allow the sloppiness to experiment with finding islands of stability in new and unexpected parameters.

  8. Peter Le Bek says

    The examples of computer mediocrity seem to contradict your definition of human mediocrity. Compare:

    Human: “You do whatever, and happen to satisfy some constraints, violate others. Of the ones you violate, some violations have consequences in the form of negative feedback. That’s where you might refine behavior. You learn which lines matter by being indifferent to all of them and stepping over some of the ones that matter.”

    Computer: “there are instances of programs figuring out how to use tiny rounding errors in simulated game environments to violate the simulated law of conservation of energy, and milking the simulation itself for a winning strategy”

    A mediocre human, upon discovering a loophole (unenforced rule) doesn’t aggressively optimize to exploit it. A computer does.

    • Peter Le Bek says

      An AI model that has aggressively optimized to exploit rounding errors won’t adapt well when the simulator bug is fixed.

      • Sure. In a stable environment, natural selection will pull the population to the “alpha go zero” end of the spectrum. The inefficiency inherent in mediocrity will hurt more than the adaptability will help. Mediocrity is in some sense a gamble that the environment will change; it’s a bet that adaptability will beat efficiency over the long run.

        In Silicon Valley in 2018, that’s still a good bet, but it’s not nearly as good a bet as it was 20 years ago.

        • «mediocrity is in some sense a gamble that the environment will change; it’s a bet that adaptability will beat efficiency over the long run.»

          It depends on the speed of change, because there are competitors…
          Consider the case where a generalist species and a specialist species compete: in the long run the specialists will be wiped out by change, but in the short run the generalists will be wiped out by the specialists.
          Variant: consider two species, one composed of individuals with a wide spectrum of adaptations, the other of highly optimized individuals. In the short term the latter will outcompete the former, but the former will win if there is big change.

          It is a classic “prey-predator” topic with various “strange attractors” and loops and possible singularities in the space, dependent on the relative speeds of adaptation and change.

  9. I had this running self-deprecating joke when I was a kid: If there were a prize for normality, I’d win it because I was never excellent at anything in particular.

    Over time that joke developed a sort of morality around it by saying that I would refuse to win the normie prize because then I would cease being normal and what would be the point in that.

    This morality influenced my choices in sports and action heroes too.

    Mika Hakkinen over Michael Schumacher
    Batman over Superman
    Dravid over Tendulkar

    It eventually became a resistance to “glory states” that persists to this day.

    Also pleasantly surprised that you chose ‘chalega’ vs laissez-faire (any difference?). Laissez-faire, a conscious administrative indifference, perhaps?

    How does indifference stand in relation to elan vital?

    Indifference and elan vital seem at odds at first but one can conceive of an indifferent persistence.

    And oh! how this also reminds me of the wisdom of index funds and ETFs – they are the mediocre mediocre investment instruments that assume infinite games and lack glory states.

    This (below break) from the Freakonomics episode that interviews Jack Bogle, founder of Vanguard investments. – http://freakonomics.com/podcast/stupidest-money/

    Vehicle autonomy also needs to be mediocre mediocre to be effective. Google calls it a “smoothing out” of the relationship between the car’s software and humans when dealing with stuff like 4-way stop-signs.

    —-

    Warren Buffett, a stock picker who does beat the market, is a national hero. In schools all across America, when kids are introduced to the concept of investing, they’re often encouraged to become little Buffetts — playing stock-market games where they pick individual stocks. Rather than being taught the sensible route of dollar-cost-averaging their way into low-fee index funds. It’s a bit like learning to drive on a Formula One circuit.

    FRENCH: The notion that we can all get rich by trading actively just doesn’t make any sense whatsoever.

    BOGLE: You have to understand one important thing about the market and that is for every buyer, there is a seller. And every seller, there is a buyer.

    FRENCH: Every time somebody wins, somebody loses even more.

    BOGLE: So when transactions take place, the only winner net is the man in the middle. The croupier in the gambling casino.

    FRENCH: You have to believe you really are superior to the other folks that you’re trading against.

    RITHOLTZ: I draw the parallel between being an outstanding market-beating manager, trader, whatever, with being an all-star in any professional sports league. It’s such a tiny percentage!

    FRENCH: If you don’t think that you’re really one of the best people out there doing this, you probably shouldn’t even start.

    RITHOLTZ: But every kid who ever picked up a baseball bat, a basketball, a football, dreams of winning the championship, hitting the bottom-of-the-ninth home run. The problem is if you invest based on those fantasies, the odds are strong that you’re going to be disappointed.

    BOGLE: This is actually simple wisdom.

    Simple, perhaps, but elusive. In part because the alternative — the gamble of picking stocks — is so seductive. Which may explain why it took so long for index funds to really catch on. The index fund is more predictable, and boring — which, as Jack Bogle sees things, is its virtue.

  10. Interesting but I think you miss a few points. Humans aren’t mediocre for mediocrity’s sake but because they are limited. In evolution it’s never the best against the worst, it’s the largest group of survivors which will always be the people in the middle. Evolution isn’t an ever ascending line, it’s a bell curve. However, AI doesn’t depend on evolution, it doesn’t “grow” it is created. The officers in Star Trek aren’t mediocre, they deal with whatever they have to get the best result possible, that includes limited human brain power. AI doesn’t have to be in the middle if it isn’t programmed to do so. If evolution was designed (as many people believe) it would be purely beneficial to the highest evolutionary species. Since it isn’t it’s naturally tilted towards the largest common denominator. AI doesn’t have to be which is what makes it extra scary.

    • Martin Huenermund says

      Oh but AI advances can totally be viewed through an evolutionary lens: every single AI artifact is built from various artifacts that we built before (i.e. are already there) based on an AI pattern (e.g. a ResNet50), probably with modifications, or maybe a (re-)combination of different AI patterns.
      Sure, sure: different environment, different actors, different building parts; but the same processes that drive biological evolution. Not that anyone explicitly plans doing that, it is just what happens.

      So maybe AI really IS growing and it is already doing it, just now? By being created again and again out of the bits and pieces of its ancestors.
      Maybe to see it we need to stop focusing on the individual algorithms and need a more holistic view on AI.

  11. my music is called ‘mediocre’.
    i’m way ahead of you.

  12. A lot of this is bullpoop. Optimization in intelligence makes possible cures for the initial liabilities thereof. If a species becomes intelligent enough, it can spread to other planets. Maybe asteroids will wipe out life on some of them. But that won’t happen to all of those planets. Perhaps it would be better to say “higher levels of consciousness,” with intelligence being understood as a subset of that. Lots of intelligent people get embroiled in petty concerns because they don’t have the Faustian/Spenglerian/Nietzschean higher consciousness that is needed to identify the most important uses for intelligence.

  13. This is too long. Cut down a third of the length.

    The Moravec’s Wedge section is informative, but you can cut it in half and still give the same information.

    With the article so long, I think most people will read a few paragraphs and then leave before your essay has put many of your ideas into their minds.

    Computers can be taught to do many things that humans do, but the way they learn those things is much different, and they often cannot handle a situation that is behaving differently than they expect. So there is a lot to be said about the difference between human learning and computer learning.

    It’s a good topic to write on. By shortening your article, you can describe different points with less repetition, and post a work that will not scare readers off when they see the length :)

  14. “I define the Neutral as that which outplays the paradigm, or rather I call Neutral everything that baffles paradigm.” (Roland Barthes)

    Barthes undertakes a somewhat similar project breaking, scratch that, thinking ‘The Neutral’ as unconcerned with satisfying an already distinguishable condition. There is a beautiful set of notes of his lecture series at the Collège de France with the same name.

    https://cup.columbia.edu/book/the-neutral/9780231134057

    “If I had to create a god, I would lend him a “slow understanding”: a kind of drip-by-drip understanding of problems. People who understand quickly frighten me.”

  15. MichItaly says

    Premature optimization, noted Donald Knuth, is the root of all evil. Mediocrity, you might say, is resistance to optimization under conditions where optimization is always premature. And what might such conditions be?

    This reminded me of where the same author says: arrested development has its origin in success, not in failure. Failure forces growth, achievement produces arrestation.

  16. There’s a lot more that can be said there I think, about both the conditions for this to work, and what is being aimed for; optimising optionality by intentionally withholding clarity of purpose, as in the old Taoist proverb of the gnarled tree. (with less charming writing: religious tree gets controlled as symbol, straight tree gets chopped for wood, gnarled tree gets ignored) The reason this proverb no longer works is that it assumes a model of resistance that is defeated by petrol, and before it coal; the way of the world is to bend, be flexible, so if you are flexible to survival, but not use, people will give up on the energy costs of making use of you, and leave you be.

    Modern business is often extremely happy to spend inordinate amounts of energy and effort making use of the useless, because we have lots of energy, and the job of management is to do lots of things that look like optimisation, but are actually jigsaw puzzles. We haven’t made any use of this?

    In nature, this is done by theft, as it is also done in many more traditional societies: You do your job, and anything that is better used in another field is used there. Inventory doesn’t stand idle, neither does it get put into use prematurely or at unnecessary cost. But it will probably go missing, or get lost.

    Occasionally, people identify with some larger scale system enough to maximize optionality at the system level, and then you get the guy who is always at their desk, but no-one quite knows what he does, and his department has a load of crap around. Not assets, just crap that they seem to want to keep an eye on for their own inexplicable reasons.

    Contrast that with the department where everything is clean, tidy, and sort of empty, where everything is in order, but detailed investigation of the books shows that most of their operations do not actually exist. It is a front, behind which lie the lives of people pretending to be cogs in your business, compared with the “dedicated to do something nonspecific” matter hogger, who likely has some kind of technical puzzle they can attach themselves to in perpetuity, or at least hope they can, or some long term system interface they are the custodian of. Paradoxically, these useless employees are often profoundly more useful to an organisation, providing you allow them to solve problems undercover.

    Optimising optionality can never reveal itself in those terms, at least among humans, for risk of those options being made use of, by unwanted leverage or synergy.

    To seek this kind of flexibility is to seek to be underestimated, and the best way to be underestimated is to never strongly exceed expectation.

    This isn’t quite Graeber’s phenomenon, people seeking an easy life have always observed this.

    The central problem of bullshit jobs is that competitive pressures themselves create regions of obfuscation compatible with optimising underestimation, along with ample comparison of what will happen if you don’t. In other words, they create intense pressures towards pre-emptive bullshit. Ostensibly you are intensely competing for performance, but that is an ever-moving goal, so you compete instead to be able to conceal your uselessness and or usefulness, trying to make the very meaning of performance ambiguous in order to avoid your saved capacities being mined.

    And so being able to chill out and slack off is pursued with sufficient fervour to itself not be relaxing.

    There’s more to it than that; Graeber underestimates the necessary informational mess of society that makes many of these jobs, or perhaps their core, equivalent to cleaning jobs. What he is correct in observing is the dissatisfaction-inducing tension between people knowing that their job is largely pointless but still refusing to recognise this explicitly.

    There are many tasks for which completing the task is probably more easy to do than cheating it, but cheating it gains you more power over future alterations of the task, because you can use your hidden information to cheat that too, even better if you half cheat in order to obscure that cheating ever occurred. The problem with a bullshit job is that local optimisation of free time and free capacities leads you slowly down a rabbit hole into profoundly unfulfilling nonsense, so that even though it appears to be taking the easy route and avoiding optimisation, it becomes another form of entanglement in a particular tightly constrained set of transferable skills, such as the capacity to create a persuasive but utterly uninformative spreadsheet.

    In that sense, you could say that the problem is an inequality of mediocrity; those who can generate it develop it too strenuously and to excess, while others remain mired in precise deadlines, constantly increasing productivity, mindful diets and significant gym schedules.

  17. emacsomancer says

    Finally an explanation of Microsoft Word (and word-processors more generally).

  18. I ended up writing a bunch of thoughts about this in a completely different context on tumblr, so I thought I’d cross-post here.

    Copying important bits for easier reading + getting through the spam filter.

    Antifragility, in this model, requires the existence of mediocre parts.

    What is antifragility? It is the fact that your system is modular enough that bits of it can break off and be regrown in a failure. Streamlining and efficiency are antithetical to antifragility. To foster this modularity and flexibility you need a lot of mediocre stuff which is resisting the lure of optimising for the goal at hand.

    To go back to the dinosaur example, the dinosaurs as a whole were antifragile because there were mediocre dinosaurs who just kinda glided around looking cool.

    I had a sinking feeling in my stomach while reading this essay.

    Because my model of heroism and awesomeness is “optimise for everything.”

    It is true that “optimise for nothing” gets at a lot of the same problems with less effort.

    I hate those people.

    I am one of them, to be clear. My school reports all explain how I would have done really well if I were putting in effort.

    But I care about some things. And I want to optimise and perfect the things I really care about.

    One of the things I care about? The existence of a better world. This requires total world participation; otherwise you either get capitalism, aka the tyranny of the plausibly deniable (again, mediocrity everywhere), or authoritarianism of some sort.

    Most people, mediocrising their way through issues that an inclusive politics would touch on, do not give a shit about what coordination problems need to be solved to reach the global optimum, and let their unchecked can’t-imagine-the-cycle reasoning lead them by their noses on these matters, backed by a defensive, politically charged mindset.

    Inclusive politics is often terrible too, exactly for this reason — I try it in various things I run, and… I (and some awesome others) have to put in a lot of active effort to keep the inclusivity functional.

    All this I kinda knew. The Venkat essay really fucked with me, because it explained that this sort of mindset is a part of a solution to an even more global optisation problem.

    But, you might object, maybe this is not an adaptive part, but a maladaptive side-effect of the general strategy that is just kinda hanging on because it hasn’t caused enough harm to… the super-global optimisation (?), and so we can hope to eradicate it.

    To which I have two answers. First, adaptive parts look like maladaptive side-effects all the fucking time, that’s why someone has to go around explaining the concept of metis to everyone all the time. Secondly, to quote Venkat again,

  19. >the evolutionary niche of large land animals is now occupied by elephants, not birds.

    * Offer not valid in Australia, New Zealand, or Antarctica.

  20. Fascinating. Food for thought on the リネコマ and their ability to optimize needing limitations — optimization of the need to optimize, in a sense. Thank you.

  21. Clumsy Dad says

    Reminds me of the zen; keeping the vessel at least half-empty to invite something new.

    Optimization then is risky, time and energy consuming, and drains focus.

    Reminds me of how one avoids the level of incompetence of the peter principle… by resisting and by feigning incompetence to avoid promotion.

    So is sociopathy just a mediocrity that is part of evolutionary systems then? Maybe it is the honorable people that are deranged and evolutionary dead ends (-:

    Peace & Thanks

  22. DensityDuck says

    Maybe it’s not so much that humans are mediocre, but more that humans excel at changing the game quickly enough (and irrationally enough) that optimized systems can’t keep up.

    Like, no human can beat an AI at chess, but no AI will ever beat a human at Calvinball.