What is the Largest Collective Action, Ever?

I’ve lately become interested in the question of climate change from the perspective of the scale of organizational capabilities that are emerging globally to tackle it (a question that exists and matters whether or not you believe climate change is real). I came up with this conceptual graph to think about it. I’ll explain my capability measure in a minute.

[Figure: sketched conceptual scope-versus-scale capability curve]

In some ways, “dealing with climate change” is the largest, most complex collective action ever contemplated by humans. Here I don’t mean collective action in the leftist sense of a political coalition based on egalitarianism and solidarity. I mean any kind of large-scale action involving coordination (not getting in each other’s way), cooperation (not working at cross-purposes), collaboration (combining efforts intelligently) and conflict (structured adversarial interactions encompassed by the system to allow net action to emerge from a set of warring ideologies), in a politically neutral sense. Everything from weaponized sacredness (think the Pope’s statements on climate change) to war and unmanaged refugee crises can fit into this broad definition, but as I’ll argue, it’s not so broad as to be useless.

So the definition includes everything from the pyramids of Egypt and the Great Wall of China to the Normandy landings in WW II, the building of Standard Oil, the modern bond market, and the Chinese Cultural Revolution. Historically, the “peak-load capabilities” of our biggest collective action systems have been expanding steadily, modulo some ups and downs in the interstices of imperial ages, since the Neolithic revolution and the first pot-sized granary.

The interesting question is, what are those “some ways” in which a response to climate change futures is unprecedented, and what does that imply for the likelihood of it succeeding?

A useful way to focus this question is to ask what is the largest collective action, ever, and how much of a stretch we are talking about in responding to (say) a speculative 2-degree rise scenario.

Measuring Capabilities versus Measuring Problems

The reason the scale of capabilities is more important than the scale of problems is that any non-trivial problem can be scoped in ambition to be arbitrarily big, and beyond the reach of the most complex mechanisms available. There is no shortage of experts in panic-causing demagoguery capable of creating a frenzy around unactionably large concerns (a “frenzy” is in some ways the opposite of collective action; it is collective anti-action, which is arguably worse than passivity in almost all situations, since it tends to enable profiteering — unless you view profiteering as useful Darwinian culling of the human herd).

So it is far more useful to approach challenges from the perspective of the scale of existing capabilities, and to ask how much capability growth can be creatively accelerated without triggering collapse in the mechanisms themselves. Are we trying to grow capability as fast as it grew in the 1880s-90s, for instance? Or 2x that rate? 3x? 10x?

As Tainter argues in The Collapse of Complex Societies, what kills civilizations is failure of problem-solving mechanisms rather than the nature of the problems themselves. Moonshot objectives work better as a way to calibrate the desired expansion in capability than as actual goals to achieve. Capability expansion is about toeing the fine line between a stretch goal and a breaking stress; between breaking smart and breaking bad. What doesn’t break you only makes you stronger, but what does break you makes the problem worse, since you’re now part of the problem.

But let’s look at the problem of defining the scale of collective action capabilities.

Defining Scale

Defining a “scale of collective action” is obviously a non-trivial challenge in itself. Excluding participation-by-passive-consumption or generalized capabilities that are required for everything (that would make the scope and scale of too many things “all of civilization”), we can look around for some useful calibration examples.

Earlier this year, one billion people logged on to Facebook in a single day for the first time. So that’s a collective action that involved 1/7th of humanity in terms of scope. It is also quite obviously a fairly minor event on the scale axis, since the coordination/cooperation/collaboration/conflict (4Cs!) levels among the 1 billion probably never exceeded even those involved in (say) constructing an electric power plant. If I had to guess, I’d say that the largest meaningful collective action that happened on Facebook that day was probably some largish group figuring out a venue and refreshments for a party. Or a nonprofit running a large fundraising campaign like the ALS ice bucket challenge (I bet you already forgot that, didn’t you?). But still, that’s quite an achievement with respect to at least one salient variable: the number of people acting collectively, even if trivially.

A more useful way to define the scale of collective action is to think of it in terms of the equivalent computational process. Most large-scale human collective action examples are best compared to algorithms for what computer scientists call embarrassingly parallel problems: ones where a small amount of central coordination and a large amount of relatively uncoupled (or at most, locally coupled) activity is enough to solve the problem. So Facebook solved an (extremely) embarrassingly parallel problem of size 1 billion: logging on within a 24-hour period. A problem that registers very low on all but one of the 4Cs (coordination). Even getting one billion people to “like” a specific kitten picture would be a 100x bigger problem.
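To make the computational analogy concrete, here is a toy Python sketch (entirely my own illustrative model; the names and numbers have nothing to do with anything Facebook actually runs). The embarrassingly parallel case is a pure map over independent participants; the coordinated case forces every participant through one piece of shared state:

```python
from concurrent.futures import ThreadPoolExecutor
from threading import Lock

def log_on(user_id):
    # Embarrassingly parallel: each login is independent of every other.
    return f"user-{user_id}: logged on"

# A billion of these could run with near-zero coordination; here, just 1000.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(log_on, range(1000)))
assert len(results) == 1000

# Coordinated action: everyone "liking" the SAME kitten picture forces
# all participants through one piece of shared state.
like_count = 0
lock = Lock()

def like_kitten(user_id):
    global like_count
    with lock:  # the coordination bottleneck
        like_count += 1

with ThreadPoolExecutor(max_workers=8) as pool:
    list(pool.map(like_kitten, range(1000)))
assert like_count == 1000
```

The second half scales badly precisely because of the lock, which is the point: coordination, not headcount, is what makes collective action expensive.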

At the other end of the spectrum we find high-complexity problem solving mechanisms, with highly sensitive and unpredictable couplings between distant parts of the system, a great deal of complex interconnectedness that must be modeled and accommodated in the organization up front, and multiple hierarchical layers of computation to assemble answers together. The largest problems of this sort we have ever solved are probably the ones involved in bringing a device like the iPhone to the market.

Counting only the big architectural chunks: Apple has about 100,000 employees; there are about 300,000 people in the “app ecosystem”; and Foxconn has about a million employees. Of course, not all of the approximately million-and-a-half people involved are fully devoted to turning sand and copper into app-for-everything iPhones, but the “computer” that they represent is of at least that scale, even if it isn’t 100% dedicated to making, maintaining and operating a huge fielded set of iPhones. There are certainly bigger organizations, but they arguably don’t involve the same complexity of collective action. The only other serious contender for “most complex societal-scale computer ever” is probably the Amazon ecosystem. Emerging Chinese equivalents (Alibaba, WeChat) are probably bigger by many measures, but cannot run systemic algorithms as complex as the Apple and Amazon ecosystems can.

In case you’re wondering, it is not limiting to use specific problems to characterize capability stacks. Just as the capabilities of American industry could be redirected from cars to tanks and planes with relative ease during World War II, the Apple and Amazon “machines” could in theory be redirected to many other purposes without breaking them (which is not to say we should do such things: just that it is possible). “iPhone ecosystem” simply becomes convenient shorthand for referring to a certain coherent set of capabilities. We are merely using the kinetic state of a given production web to measure its potential.

So in a certain very meaningful sense, the tech-critic insult of “click here to save the world” literally translates to “run a ‘program’ on this largest, most complex bio-silico computer ever, involving 1.5 million people, and perhaps 10x as many chips, working at the tightest levels of coordination, cooperation, collaboration, and conflict, ever.” From a narrative perspective, I wouldn’t be surprised if in 2100, the story of the climate change response is actually told with an app-click as the symbolic opening event, just as the gunshot that killed Archduke Franz Ferdinand is often used as the symbolic opening event in telling the WW I story.

So that insult turns into a serious characterization of scale of capabilities.

There are, of course, a whole bunch of variables that feed into thinking of something like the iPhone ecosystem as a problem-solving computer with certain capabilities (see the last few essays of Breaking Smart for some discussion of the subtleties). The problem really is like trying to measure the capabilities of a supercomputer. You have to dig beyond vanity metrics like petaflops and ask what the computer can actually do, relative to benchmark problems like computing the weather or simulating nuclear explosions.

We can think of headcount, capital mobilized (weighted by “smartness” of the money perhaps), time-scales of action, various measures of graph complexity characterizing how people, information and money move in the system, the complexity of internal contracting and market mechanisms involved in running the “computer”, and so forth.

Let’s call this hypothetical compound measure c, the civilizational speed-of-problem-solving limit (feel free to suggest constituent variables for c, or calibration systems that fill out the spectrum between Facebook and the iPhone ecosystem; I might actually try to construct this measure). Maybe you think eradicating polio involved higher c, maybe you are married to that old favorite, Apollo. Whatever your favorite calibration cases for c and cmax, they help characterize the curve.

I’d say any useful definition of c would involve at least a dozen variables at the aggregate level of characterization (if you need more than a dozen, then the “computer” is probably too unwieldy to deploy controllably in any direction that’s better defined than “the fate of humanity”).
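For what it’s worth, here is one purely illustrative way such a compound c could be assembled (the variable names and weights below are my own placeholders, not a serious proposal): a weighted geometric mean of normalized constituent variables, so that a near-zero score on any single dimension drags the whole measure down:

```python
import math

# Hypothetical constituent variables for c, each normalized to (0, 1].
# Names and weights are illustrative placeholders only.
WEIGHTS = {
    "headcount":        0.2,
    "capital":          0.2,   # capital mobilized, weighted by "smartness"
    "time_scale":       0.1,   # sustainable time-scales of action
    "graph_complexity": 0.3,   # how people/info/money move in the system
    "market_depth":     0.2,   # internal contracting/market mechanisms
}

def capability_c(vars):
    """Weighted geometric mean: any near-zero variable tanks the whole c."""
    log_sum = sum(w * math.log(vars[k]) for k, w in WEIGHTS.items())
    return math.exp(log_sum)

# Made-up calibration points, roughly matching the essay's intuitions.
iphone_ecosystem = {"headcount": 0.9, "capital": 0.95, "time_scale": 0.7,
                    "graph_complexity": 0.9, "market_depth": 0.85}
facebook_login_day = {"headcount": 1.0, "capital": 0.3, "time_scale": 0.1,
                      "graph_complexity": 0.05, "market_depth": 0.1}

assert capability_c(iphone_ecosystem) > capability_c(facebook_login_day)
```

The geometric (rather than arithmetic) mean is the one design choice worth noting: it encodes the intuition that a billion-person headcount cannot compensate for near-zero graph complexity.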

One immediate takeaway from this thumbnail portrait of the characterization problem is that it is idiotic to evaluate such a “computer” from an ideologically pure perspective, such as those embodied by financial markets, corporate (and corporatist) management theories, public institutions or left/right/libertarian ideologies.

Because something like the “iPhone ecosystem,” embodying a “click here to save the world” cutting-edge capability, spans all those categories.

“iPhone making” as a collective action is simply a loosely defined and partially open boundary, with activities within being more salient to iPhone making than to other activities. Inside that boundary, we find a microcosm of civilization itself, in a more densely enriched form than outside. If “civilization” in the large is a subcritical pile of unenriched uranium, the stuff inside the “iPhone making computer” is a supercritical, controlled nuclear reaction running on enriched uranium. Seen this way, collective action is merely an intensified state of a part of civilization that has a somewhat less indeterminate future trajectory than the whole thing, and less of a mismatch between scale of concerns and scale of capabilities.

Conceptually, we could plot a scope-versus-scale capability curve with an x-axis running from the smallest scope (individual action, in isolation, no connection to the rest of humanity, whether through a smoke signal or an iPhone) to the largest scope (all 7 billion doing something). We’d have the “scalable computing potential” of humanity, in terms of our hypothetical measure c, on the y-axis. The curve probably has a peak somewhere near the iPhone-ecosystem portion of the x-axis (1-2 million people in scope), as I’ve sketched it. “Capability expansion” means moving the peak of the curve upwards and to the right.
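As a toy numerical version of that sketch (the curve shape and all parameters are invented purely for illustration), one could model capability as a Gaussian bump in log-scope, peaking around the 1-2 million mark:

```python
import math

def capability(scope, peak_scope=1.5e6, width=2.0):
    """Toy capability curve: a Gaussian bump in log10(scope).
    Peak location and width are illustrative guesses, not measurements."""
    return math.exp(-((math.log10(scope) - math.log10(peak_scope)) ** 2)
                    / (2 * width ** 2))

# Sample the curve from individual action (scope 1) to a billion people.
scopes = [10 ** k for k in range(0, 10)]
curve = {s: capability(s) for s in scopes}

# The sampled peak lands at 10^6, the closest point to ~1.5 million,
# i.e., the iPhone-ecosystem region of the x-axis.
best = max(curve, key=curve.get)
assert best == 10 ** 6
```

In this toy model, “capability expansion” just means growing peak_scope and the bump’s height over time.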

The largest-scope collective action currently clocks in at perhaps a couple of hundred million (we could probably get that many people to Like a specific kitten picture on Facebook by roping in the top 100 global music and sports celebrities). Beyond that, we really have no capabilities. We have c=0 beyond the x=200 million point. The most complex (delivering an app-based button to an iPhone user) involves about 1.5 million people.

This, incidentally, is not the way most people like to measure these things. People like to measure, or qualitatively characterize, the scale and nature of problems rather than problem-solving systems/institutions. So phrases like wicked problem or the regimes of the popular Cynefin framework (obvious, complicated, complex, chaotic) are usually used to describe problems rather than problem-solving systems (even though their proponents often claim they can characterize both).

I have concluded this is usually pointless. It feeds analysis-paralysis.

Why we do this is an interesting question. I think it is because we feel so much anxiety from not knowing what monsters are hiding under the bed that we tend to strive mightily to learn all about the monster, leaving no time to look around for the largest possible weapon to fight it with.

Characterizing the Climate Change Solution

So let’s characterize the climate change solution under construction, rather than the problem.

This system is emerging around a rough consensus that a 2-degree rise in temperature is likely. Is this going to stretch our capabilities or break them?

For a taste of what’s to come, consider one view on the provenance of the VW emissions fraud. The linked article (HT Sam Penrose, via Marginal Revolution) argues, rather persuasively, that you can trace the root cause of the fraud back to rather ill-conceived incentives designed by EU governments in pursuit of climate change mitigation objectives.

I am not citing this article to argue against large-scale efforts in the climate change department from a market-fundamentalist perspective (that would be a case of solution aversion, as I argued in a recent issue of the Breaking Smart newsletter), but to note that we’re in a regime of problem solving that involves larger scales and more complexity than any we have tackled before.

In December, at the UN conference on climate change, significantly more ambitious goals will be set globally. And like it or not, there will be a messy, complex increase in the scale and scope of our problem-solving mechanisms. Let’s call this capability expansion (the delta, not the whole thing) the “climate change response computer.”

There will also be a corresponding increase in the scale of two other kinds of related capability: our global uncoordinated capability to game the climate change response computer (as the VW example demonstrates — it represents a fascinating new kind of algorithmic fraud and gaming that will only increase) and a coordinated anticapability that will obviously emerge to resist the actions of the climate change response computer (we saw signs of how this sort of thing works in 2015, in the vaxx vs. anti-vaxx conflict in California). Each of these dialectically adversarial capabilities will have its own capability curves (we could call them c’ and c” if you like).

So a true measure of the scale of a problem-solving capability is actually the net capability that emerges from this 3-piece system: a problem-solving computer, a game-the-first-computer-computer (which might be characterized by the most complex byzantine conspiracy capability it embodies, for milking, without destroying, the first computer), and an open-resistance computer (which will be asymmetrically simpler than the first two, since it will have simple destruction goals).
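A crude way to picture that net measure (a made-up toy formula, not derived from anything above): subtract weighted drags for the gaming and resistance computers from the raw problem-solving capability, floored at zero:

```python
def net_capability(c_solve, c_game, c_resist,
                   game_drag=0.3, resist_drag=0.5):
    """Toy model: the gaming computer parasitizes (partial drag, since it
    milks without destroying); the resistance computer aims to destroy
    (heavier drag). All coefficients are invented for illustration."""
    return max(0.0, c_solve - game_drag * c_game - resist_drag * c_resist)

# A strong solver with modest adversaries still nets positive capability...
assert abs(net_capability(10.0, 4.0, 2.0) - 7.8) < 1e-9
# ...while a weak solver facing strong adversaries nets nothing at all.
assert net_capability(1.0, 10.0, 10.0) == 0.0
```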

Much as you might detest the second and third pieces, they actually serve to maintain systemic health: the “gaming” computer acts as a check-and-balance on runaway processes of lousy mechanism design in the first computer, while the open-resistance computer keeps the capability growth epistemologically honest.

This is a huge drama about to unfold. We are about to refactor the fundamental principles of checks and balances that were first debated and designed during the emergence of modern nation states between the 1650s and 1900, through the process of writing constitutions of various sorts.

So here is what we know, so far, about the unprecedented nature of the response to climate change:

  1. It will involve shifting the capability curve peak up and to the right, and perhaps lengthening the scale beyond its current 1 billion limit on the x-axis. The shift in the curve will embody a “climate change computer.”
  2. A more sophisticated “gaming computer” will emerge in response, and the VW fraud represents the first significant action on that front.
  3. It will also involve an anti-computer that is at least as sophisticated as the anti-vaxx movement.

For better or worse, and whether you believe climate change is real or a Chinese conspiracy (the Yellow Peril, Trump Edition!), we are headed into a regime of collective action that is more complex than we’ve ever encountered. It won’t be markets versus governments or hippies versus CEOs. As Einstein once said, you can never solve a problem at the level on which it was created.

May you live in interesting times.


About Venkatesh Rao

Venkat is the founder and editor-in-chief of ribbonfarm. Follow him on Twitter

Comments

  1. What is the largest collective action ever? According to your definition of collective action (action that involves coordination, cooperation, collaboration, and conflict):

    Capitalism, and the perpetuation of the useful fictions surrounding our use of currency. Markets-versus-collectivism is a false dichotomy- useful only for promoting tribal memes and in-group policing.

    • Using Venkat’s narrative scheme, I’d say that our market capitalism has been a civilization built around the anti-computer, not the primary computer. Instead, the primary computer is now used for checks and balances on the anti-computer. There can possibly be no peace between those who believe that this is the right order of things and those who think it is a perversion.

      • I am working on a project to use art as currency. Sort of like Bitcoin but instead of mining coins with CPU energy, they are minted by humans and can be valued subjectively, that is, each coin may have a different value to different people. I wrote a short essay comparing the idea to a p2p gamified Sotheby’s. Maybe a non-commodity currency can tip the scales in favor of Kay’s “primary computer”.

      • That sounds about right. Very loosely, competition is the antithesis of any flavour of cooperation, and while both are present anytime humans act/interact together, one usually dominates and the middle is an unstable zone. Though I suppose parasitism and gaming computers in the sense I’m talking about are sort of stable middle-of-spectrum entities.

  2. Doc Searls just wrote that “ad blocking is the biggest boycott in human history”

    http://blogs.law.harvard.edu/doc/2015/09/28/beyond-ad-blocking-the-biggest-boycott-in-human-history/

  3. I think you are using “collective action” in a more expansive sense than normal usage. In the standard meaning, a collective action problem exists when group goals conflict with immediate individual ones. Governments are supposed to implement solutions to collective action problems; if self-interest can solve it, then it’s a job for business/markets. So some of your examples don’t count: the Normandy Invasion, sure, but not Apple’s supply chain or liking something on Facebook, which presumably does not involve people sacrificing their personal interests for the greater good. Or maybe it does, which is why corporations (Apple especially) so often come to resemble cults.

    Anyway, yes, climate change is a very large collective action problem of the first variety, and there are no existing mechanisms (global governance) that can solve it. I am pessimistic that any such mechanisms will become effective until the immediate consequences of climate change are harder to ignore than they are now — which may not be that long.

    • I’d say that’s a game-theoretic/tragedy-of-commons definition a la Mancur Olson. I’m actually overloading an even narrower definition. As I stated upfront, “Here I don’t mean collective action in the leftist sense of a political coalition based on egalitarianism and solidarity.”

      That said, this instrumental-social understanding frames collective action in terms of a particular leftist *solution* to the collective action *problem* in the sense of Olson. The solution is sometimes sufficient, but rarely necessary. There are other ways to solve the Olson-sense collective action problem, such as authoritarianism.

      I think these are too narrow and have actually stalled advancement in thinking about organizations by prematurely defining a sharp line between problems that feature tragedy-of-commons/free-rider/iterated prisoner’s dilemma type phenomenology as a dominant element, and problems that do not.

      In particular, I don’t think climate change is primarily defined by that sort of phenomenology. It is more about complexity of science, the problem of scaling organizations beyond a point, non-collocated costs and benefits, state of energy technologies, etc. etc…

  4. Mike Walsh says:

    The behavior associated with “climate change” is better understood as a mass hysteria than a collective action. The abundant contradictions and absurdities at the various associated chinwags (not enough parking for the private jets at Davos) bespeak cognitive dissonance on an unprecedented scale.