The Veil of Scale

There’s an old Soviet-era joke about communist notions of sharing. Two party workers, let’s call them Boris and Ivan, are chatting:

Boris: If you had two houses, would you give one to your comrade?
Ivan: Of course!
Boris: If you had two cars, would you give one to your comrade?
Ivan: Without a doubt!
Boris: If you had two shirts, would you give one to your comrade?
Ivan: You’re crazy, I couldn’t do that!
Boris: Why not?
Ivan: I have two shirts!

There are two things going on here. One is, of course, the skin-in-the-game effect. The other is what I call the veil of scale: we choose small-and-local behaviors differently depending on how we think those behaviors will have emergent scaled consequences. The joke here depends on going from large-scale to small-scale questions, surprising Ivan with a question that’s real for him.

The veil of scale is about thought experiments of the form: how would you act in a situation if you didn’t know the extent to which your actions were going to be scaled?

The Veil of Ignorance

The veil of scale is a naturally occurring relative of a well-known artificially constructed idea called the veil of ignorance. In its traditional form, the veil of ignorance is a device that helps you correct biases that are a result of knowing certain things.

The classic illustration is this thought experiment: if you could choose before you were born, what sort of society would you choose to be born into if you didn’t know what race or gender you’d be born as? In this case, the veil of ignorance argument suggests that you should choose a society that treats its underprivileged well, whatever “well” means.

A more practical illustration is the idea of the corporate veil. If a society wants its entrepreneurs to take risks, limiting liability for failures is potentially a good idea. One way to realize the idea (“limited liability” is not by itself a usable idea) is to hide corporate non-personhood behind a veil, allowing all the implications of limited liability to be worked out via the metaphor of corporate personhood. We pretend ignorance of the fact that corporations are just bunches of people acting together in certain ways.

The corporate veil illustrates two features of the veil of ignorance in practice.

First, you can’t just make up an abstract veil. That makes pretending not to know too hard, like trying to not think of an elephant. Instead, you have to create a practical fiction in front of the veil. Otherwise you can’t operationalize it. So our veil of ignorance in the case of corporations is an anthropomorphic interface. As a result of the particular form, corporations have names and identities (brands), can enjoy genuine human goodwill, lend themselves to being discussed in terms of birth, growth and death, and so forth. The veil of ignorance is, to coin a term, a painted veil.

The painted veil makes it possible to lead corporations and direct their energies in relatively simple ways, by co-opting and overloading natural human social behaviors.

One cost is that we are vulnerable to emotional appeals to “save” the “life” of corporations. We respond very differently when such appeals are made in more direct, unveiled terms: “save our jobs and the lifestyles we’ve become used to” or “too big to fail” (which are unveiled labor and unveiled shareholder narratives respectively, and are more or less appealing than “save this great company,” depending on your politics). Thanks to the painted corporate veil, we are also prone to overestimating how much power leaders actually have, since it tempts us into viewing the CEO as the personification of the company.

The second point follows from the first. A veil of ignorance in everyday use, with a practical fiction painted onto it, is an amoral device. When we engineer a particular veil of ignorance and paint it a certain way, we hope it will do more good than harm in the particular situation. Where it doesn’t work, we need to override the decisions suggested by the veil. In the case of corporations, that’s the idea of piercing the veil. While it’s a legal idea, we use it personally every day. We might accept it when a low-level employee says s/he is helplessly bound by rules and unable to help us. But we informally pierce the veil when we ask to speak to managers and demand exceptional treatment. We might accept a company’s stock tanking and costing us a bundle, but we are less likely to accept the veil when it hides extreme environmental damage.

There are other veils of ignorance around us: the “Market” and “The Law” and “Nature” are other major ones. In most cases (but not always), the fiction we project onto the veil is an anthropomorphic one. All organizational metaphors are also veils of ignorance by definition, since they provide simplified interfaces to complex realities, by highlighting some aspects over others.

So a veil of ignorance is really not a very complex idea: it is a user interface; a societal equivalent of the everyday engineering idea of a functional abstraction. You hide complexities beneath a wrapper, paint on an interface based on a fiction, and hope the abstraction doesn’t leak much in the practical situations it’s been designed for. Where it leaks, you do some messy leak-patching.

In user experience terms, any veil of ignorance translates the complex behaviors of an unfamiliar entity into the simple behaviors of a familiar entity.  It is a kind of manufactured normalcy.

The Veil of Scale

The Veil of Scale is a related idea: imagine a situation or problem that is manageable or solvable at the scale of one person, and ask what your natural problem solving behavior would be.

Now imagine that your particular ad hoc solution is going to be scaled to an arbitrary size, in an arbitrary organizational context that you cannot know the nature of before it actually emerges. Does it still work under arbitrary possible scalings?

The Veil of Scale is simply the obscurity of scaled versions of everyday things, designed or not. It exists naturally as opposed to being constructed legalistically.

Let’s take a simple example: a restaurant check-splitting situation I found myself in recently. The scaled version is all the checks being split across the world at any given time. This larger slice of civilizational behavior exists whether you want it to or not, and whether or not you attempt to organize it at any given scale.

Let’s say you and a half dozen friends are at dinner. To keep things simple, you offer to pay, do a rough calculation to split the check evenly and arrive at a suggested contribution. Others pay you what you come up with. Some promise to pay later. Some pay a little less or a little more due to the difficulty of handling change or because they guess they ordered more or less than the average. You don’t keep precise tabs or police contributions.

You expect to come out either a little ahead or a little short, and you assume that each individual instance of being the check-splitter will neither bankrupt you, nor suddenly enrich you. You also expect that over time, with the same group of friends, it will all balance out in the end.

This is what I usually do or suggest at gatherings, and things work exactly that way when you’re talking 5-6 people. With a dozen people, things get more volatile, with bigger surpluses and shortfalls and more people feeling unfairly treated. Recently, I did this with a group of 20+ and came up wildly short and was forced to call for additional contributions to make up the deficit, since it was larger than I was prepared to cover on my own.

Upon reflection, I realized why you are more likely to run deficits with informal check-splitting in larger groups: a larger fraction of people are likely to feel unfairly treated and adjust accordingly. There are multiple aspects of the transaction that don’t scale well, causing both actual and perceived unfairness. So my guess is that scaling fails for the following reasons:

  1. People ordering drinks, appetizers and differently priced entrees in uncoordinated ways stress the “what I owe for what I ordered” assessment beyond what people think is a reasonable range. Drinks cause particular stress, since they are expensive and many don’t drink, while others drink a lot.
  2. Small miscalculations can add up: I forgot to add in the tip before doing the division, and a $5 individual deficit added up to a $100 group deficit, which was a big chunk of the total deficit.
  3. Shared items get “unfairly” distributed: with 3-4 people sitting around a table, an appetizer plate is within reach for all. With a larger group, there are multiple local clusters of conversations, each of which operates as a shared-ordering unit.
  4. Rounding precision matters. If a split check comes out to $18/head, you’re likely to get a lot of $20s and come out ahead. If it comes to $22, you’re likely to end up short. This is of course a simple function of the denomination distribution of cash in typical wallets.
  5. With larger and more open groups, there is also less synchronization, with people arriving late (and ordering less) or leaving early (often asking a friend to cover for them). This also contributes to Item 1. It becomes harder even to tell whether somebody has just dropped by for a few minutes or has been part of the group.
  6. Larger groups also mean more variation in relationship histories and expectations. As a rule of thumb, you could say that two people who have been on dinners together will expect to go on N more dinners together. So newcomers might expect to never see the group again, while old-timers might expect the group to continue forever. This affects how much slop/variation people are willing to accept. For the game theorists among you, this is iterated prisoner’s dilemma with a wide distribution of expectations about how long the interactions will continue.
  7. Larger groups are also likely to have greater income diversity (as in the Friends episode when the 3 poor friends get upset with the 3 rich friends). A model of even check-splitting penalizes those who order frugally while expecting a more equitable check-splitting process.
  8. And then of course, there is the free-rider problem. The larger and more open the group, and the more varied the relationship history lengths, the more likely it is that free riders will join the party.
  9. Larger groups also make deferred settlement more complex, if people simply forget to pay later.

I could go on, but the broader point here is that as groups scale, marginal things start to matter disproportionately, and new variables enter the picture.
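The failure modes above are easy to see in a toy simulation. The sketch below is illustrative only: `simulate_split` is a made-up helper, and the order sizes, 18% tip rate, and nearest-$5 cash rounding are invented assumptions, not a model of any actual dinner. It captures just two of the failures, the forgotten tip (Item 2) and cash rounding (Item 4):

```python
import random

def simulate_split(n_diners, seed=0):
    """Toy even-split model: the payer forgets the tip (Item 2) and
    diners pay in cash rounded to the nearest $5 bill (Item 4).
    All numbers are invented for illustration."""
    rng = random.Random(seed)
    # Uncoordinated orders: entrees, drinks, appetizers (Item 1).
    orders = [rng.uniform(12, 40) for _ in range(n_diners)]
    total_owed = sum(orders) * 1.18       # food plus 18% tip owed to the restaurant
    ask = sum(orders) / n_diners          # payer divides the PRE-tip total evenly
    paid = n_diners * round(ask / 5) * 5  # each diner hands over rounded cash
    return paid - total_owed              # payer's surplus (+) or deficit (-)

for n in (6, 12, 24):
    print(f"{n:2d} diners: payer ends up {simulate_split(n):+.2f}")
```

The forgotten tip is a per-person loss that grows linearly with headcount, while rounding can gain the payer at most $2.50 a head, so under these assumptions the deficit reliably widens as the group grows.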

Big Men and Small Behaviors

Let’s step back and consider my account above more carefully.  I explained a simple, everyday situation, with natural behavioral responses, in a particular way. I made particular guesses about what goes wrong with scaling. In the process, I loaded the analysis very heavily with particular assumptions about how it might be scaled.

The particular way I read the small behaviors, and the corresponding guesses I made about scaling, is not how such behaviors are normally read or justified.

The restaurant check-splitting problem, when scaled sufficiently, is actually the problem of managing an entire macroeconomy. But the interesting thing is that you actually get to nearly every key feature of an entire country’s economy with a group as small as 20-30. This means you should expect multiple competing analyses of it.

Normally, informal and imprecise check-splitting by one person is read as a Big Man (an anthropological term) gesture. In primitive societies based on some sort of naive economics of pricelessness, such as a tribal Stone Age village or a group of Wall Street bankers dining together, the Big Man would simply pick up the whole tab. Informal even-splitting is a weaker version of that, where one person is not overwhelmingly more wealthy than the rest. The ability to wholly or partially underwrite the uncertainties of a priced economy to benefit a priceless economy translates into Big Man status. By underwriting the uncertainty on behalf of the group, the Big Man fosters a money-does-not-matter ambiance of abundance, thereby reinforcing group values. An interesting feature of the Big Man explanation is that it can be specialized to both capitalist and communist explanations. Only the specific values change, not the pattern of unequal underwriting of risk and uncertainty by a Few Big Men (and/or Many Small Slaves — in the check-splitting case, a bad solution might end up leaving the waitstaff either stiffed on tips or having to do way too much work).

Non-Big-Men who overpay also do so, but less visibly, to a lesser extent, and with lower risk exposure. Those who underpay experience guilt or shame, depending on the culture. Or glee, if they happen to be marginals with values different from those of the culture being reinforced, who might enjoy scoring off foes.

Engineers like me more naturally tend to adopt OCD check-splitting practices that aim to achieve perfect fairness. We only do simpler things when the OCD way gets painful enough that the spirit of the party (and therefore its ability to reinforce priceless values) might be ruined, and the waitstaff driven to suicide. When forced into functionally Big Man behaviors, we tend to explain things using very odd constructs like game theory, amortization, ergodicity of processes, regression to the mean and so on.

It should be obvious, but 99% of humanity does not even understand these constructs, let alone naturally gravitate to them to explain situations or solve problems. In my experience, engineers who identify as engineers are really uncomfortable playing Big Man, and harbor deep hatred towards those who do.

Marketers and other non-technical subcultures of work, on the other hand, are really uncomfortable if they are forced into mathematical precision while solving problems, or prevented from solving problems in Big Man ways, which only require an ability to decide which values trump which.

This is an important point: how you explain your behaviors affects how you expect them to be scaled, and this affects how you modify the behaviors in-the-small in response to guesses about what lies behind the veil of scale.

If you think like an engineer, your natural guess about what’s behind the veil of scale will be “some sort of complex macroeconomy with imperfect information and lots of latent and degenerate variables, which were previously quiescent, becoming active.”

To accommodate anticipated scaling you will naturally turn to technologies that (for example) make it easier for information to be tracked accurately and problems to be solved easily. For instance, in my case, and in the case of the people I talked to about the check-splitting problem, the conversation naturally drifted to better apps for capturing the nature of the group transaction. Better solutions in short.

If you think naturally like a Big Man, your natural guess about what’s behind the veil of scale will be a vague idea of some sort of larger version of your priceless culture, with bigger Big Men and more abundance underwritten by them (this is one reason painted veils are often anthropomorphic).

You would naturally focus on solving the small problem in ways that (for instance) are a better display of scalable leadership or community virtues. Instead of thinking about a better app for the 20-person dinner party, you might think about how you could be a better host; how you might make everybody feel welcome and valued; how you might subtly signal, model and encourage people to contribute more than their assessment of a fair share, leaving the group with a net surplus. Better values in short.

Guessing the Scaling

Here is why it is crucially important to understand how and why people guess differently about the effects of scaling: it becomes a self-fulfilling prophecy in the worst possible way. In a way, all of civilization is a game of people arguing about how things scale and what lies beyond the veil of scale for both new and emerging realities.

Anecdotally, I’ve observed seven basic guessing patterns about how individual behaviors might scale.

  1. Degeneracy: you guess that your behaviors don’t scale at all, beyond a point, and that other phenomena will kick in. In this case, you see the scaled regime as fundamentally disconnected from the small-and-local regime. This leads to (for instance) not voting, being mostly expedient in everyday decisions, and apathy. In the check-splitting situation, you’re likely to say, “whatever.”
  2. Scaling by Pricelessness: you guess that all scaling will depend on scaling of the values that helped solve your problems locally. You also guess that if you get the values right, the scaled solution will preserve the characteristics of the unscaled one. This leads to everything from manifestos to universalist religious proselytizing. In the check-splitting situation, you might explicitly articulate a value, as in “pay whatever you think is fair” or as I once saw someone do, announcing “I am just going to pay for lunch, if anybody wants to give me a twenty, you’re welcome to.”
  3. Scaling by Feature Creep: you guess that all scaling will depend on adding features to the mechanisms that helped solve your problems locally. So scaling becomes the problem of having sufficient foresight to add features before they are needed. This sometimes leads to good engineered solutions and more often to awful over-engineered ones. In the check-splitting case, you might adjust the basic even-splitting approach by splitting the alcohol and food checks and doing two calculations instead of one: “if you ordered drinks, give me $20, if you didn’t drink, give me $15.”
  4. Impossibility: you guess that the emergent impacts of your behaviors are so complex and unpredictable, there’s no point thinking about them. The traditional form is religious fatalism. The modern form is complex-systems fatalism (butterfly effect resignation, where everything is a strange attractor beyond reach of meaningful influence). You accept the outcome but are not necessarily indifferent to it. In the check-splitting situation, you might remark on the outcome and what surprises you about it, and attempt hindsight explanations. For instance, “wow, that turned out to be more/less per head than I expected.”
  5. Scaling by Scaling: Here you assume that problems simply change character as they scale and must be solved anew at each scale (of both size and time) by actually trying to scale them through progressively more complex and long-lasting regimes. It is seemingly the most reasonable approach, except that there is no guarantee that complexity will increase smoothly with scale. Sometimes it is easy to solve N=10 and N=1000 cases, but nearly impossible to solve the N=100 case. Sometimes it is easier to build very transient and very enduring things than it is to build things with a design lifespan somewhere in between. In the check-splitting situation, if the group is larger than the largest one you’ve coordinated before, you might try to build on the old solution in better ways than just adding features. You might even remove features.
  6. Scaling by First Principles: Here you simply pose and solve the problem at a specific scale as best you can, assuming that it has no correlation to lower or higher scales. In the check-splitting situation, you might make up entirely new ideas. For example, deciding that a certain group size merits pre-catering and a fixed cover charge for joining the group.
  7. Assumption of Evil: Here you expect that whatever the form of the scaled solution, it will be evil simply by virtue of being a scaled solution. This is a hugely popular guess right now. One response to this guess is to actively work against scaling and attempt to preserve anarchy at higher scales. This unfortunately does not work, since nature is full of things-that-scale whether we want them to or not. Preventing scaled structures from emerging, ironically, requires scaled efforts. In the check-splitting situation, you might decide to limit the party size to the known capabilities of the best check-splitting method you know and decide that larger groups are actually bad because they require more complex methods.

Each guess is a partial truth. Each is laden with value-based assumptions to a greater or lesser degree.

But if there’s one thing we’ve learned through millennia of scaling failures, it is this: the greater the diversity of guessing patterns about what lies on the other side of the veil of scale, the better the scaled reality that actually emerges, no matter how good or bad the individual guess.

This does not mean, however, that all 7 (there may be more) guessing patterns are equally good, or that they stand in peer relations to each other. I won’t attempt the exercise here, but the seven guesses can be classified (for instance) into the sociopath-clueless-loser hierarchy. Their impact on the reality that emerges depends on the sorts of people who tend to guess that way. Complex, scaled realities emerge in this ecosystem of guesses about how things scale or ought to scale.

To adapt the old fable of the blind men and the elephant, this is a case of a lot of sighted people who have never seen an elephant guessing what it might look like by examining a mouse. Some may guess better than others, but many guesses are better than few guesses.


About Venkatesh Rao

Venkat is the founder and editor-in-chief of ribbonfarm. Follow him on Twitter


  1. Venkat I love your nonstandard analyses to death but don’t you think this claim warrants a bit more than a bald assertion: “The restaurant check-splitting problem, when scaled sufficiently, is actually the problem of managing an entire macroeconomy. But the interesting thing is that you actually get to nearly every key feature of an entire country’s economy with a group as small as 20-30.”

    What’s the analogue for economic growth or inflation or monetary policy in a 20-30 person check splitting group? You might say that’s just one feature but doesn’t that illustrate how the 20-30 person check splitting scenario operates parallel to the lump of labor fallacy (i.e. static supply of stuff to be distributed)?

    • I had a fairly detailed mapping worked out. Monetary policy shows up as the goodwill (or lack thereof) of other groups/individuals dining in the same location, which can result in people appreciating you for creating a lively atmosphere or asking management that you be thrown out. Of course, this means the bond market is all goodwill and the domestic economy is cash, and it’s not clear how much more the group would be willing to spend on the basis of increased external goodwill. A more direct cash monetary policy (rare these days) is bar owners extending credit to regulars and allowing them to run up a tab that they may have difficulty paying for.

      Inflation does happen. Defined as too much money chasing too few things, it describes tourist trap pricing strategy well, especially ones that cater to tourist groups from richer regions.

      Of course, to get to some of these extended effects, you have to model the boundary conditions more clearly: restaurant owner, other diners, waitstaff…

  2. Bill Coutinho says

    I think this could be an example of pattern #3, but if you expand the context, it could be a #2, the priceless value being democracy.

  3. When you say:
    “When we engineer a particular veil of ignorance and paint it a certain way, we hope it will do more harm than good in the particular situation. ”

    Did you mean “more good than harm”?

  4. Seems reminiscent of the categorical imperative

  5. When engineers split the check but don’t want to let it get unduly difficult, do they ever invoke the value of time? I mean saying something like”that last dollar takes too long to figure out, let’s just split it”.