Elderblog Sutra: 13

This entry is part 13 of 13 in the series Elderblog Sutra

The last time I added to this blogchain in March 2021, it felt like the writing was on the wall as far as traditional blogging was concerned, thanks to the mixed blessings of the text renaissance. Not only was blogging as practiced here on ribbonfarm dot com dying for real (update: continuing to decline on schedule), but it felt like most people were saying …and good riddance.

Actually, if you squint a bit, blogging is not so much out of scope as the undeclared shared enemy of the text renaissance.

… 

The environmental assumption underlying WordPress is wildly untrue now. This is not the digital wilderness of 2001. This is the heavily built-up digital urban environment of 2021; cities at the intersection of the gravity fields of large platforms. WordPress is totally an anachronism. A befuddled, blinking cowboy on a horse wandering among New York skyscrapers, wondering where the stables and saloon are. What was once a patch of database-driven CMS civilization in the wilderness of hand-coded “home pages” on Geocities is now a spot of wilderness in the civilizational heart built on React and graph databases. It’s the Olmsted parks movement of digital urbanism. Lungs of the digital city and so forth.

As a response to this gloomy assessment, I quoted Lovecraft (“That is not dead which can eternal lie, And with strange aeons even death may die.”) and ended on a note of what I can only call weird Lovecraftian optimism — that to the extent it remains uniquely valuable, perhaps blogging can be resurrected as an Elder god.

And maybe blogging too will undergo enough of a technical renaissance that we’re no longer talking a reactionary hedge bet on horses, but a futuristic hedge bet on Mars rovers. That will probably require rebuilding of the foundations on something other than PHP and MySQL, but I suspect it will eventually happen when the hedge value of a non-platform alt-stack, with capacity for genuine commercial independence, becomes high enough.

That “hedge value” just went up sharply. I want to revisit the question of the future of blogging in light of the impending reconfiguration of the social media environment due to Elon Musk buying Twitter.

Is this a threat or an opportunity? Will it accelerate what seems like the terminal decline of blogging, or increase the odds of a resurrection in a new form?

[Read more…]

The Ribbonfarm Lab

This entry is part 1 of 5 in the series Ribbonfarm Lab

As I’ve mentioned in passing a few times, through the pandemic I’ve been spending a lot of time getting back into hands-on engineering, after nearly 20 years. I finally have one small thing worth showing off: the first test drive of one of my robots:

It’s not a kit design. I designed and fabricated this robot from scratch. It is probably the most complex engineering project I’ve ever done by myself. I’ve had bit parts in larger, more “real” engineering projects, but they were all much easier, to be frank, since my bit parts were mapped to my strengths.

Getting to this video has been a long, slow 18-month journey.

[Read more…]

Fermi Estimates and Dyson Designs

A Fermi estimate is a quick-and-dirty solution to an arbitrary scientific or engineering analysis problem. Fermi estimation uses widely known numbers, readily observable phenomenology, basic physics equations, and a bunch of approximation techniques to arrive at rough answers that tend to be correct within an order of magnitude or so. The term is named for Enrico Fermi, who was famously good at this sort of thing.

A particular anecdote is often cited to explain the idea. During the Trinity test in 1945, Fermi dropped bits of shredded paper as the blast wave passed his observation post, and based on how far they were displaced, estimated the yield to be equivalent to about 10 kilotons of TNT. This was surprisingly close, given the crudeness of the method, to the more precise measurements.

Cliché consulting interview questions of the “how many ping-pong balls can fit in a 747” variety are a degenerate form of Fermi estimation, lacking both the physics angle and the active empiricism.
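For concreteness, here is a minimal Fermi-style sketch of the ping-pong-balls-in-a-747 question in Python. Every input number is a rough assumption of mine, good to maybe a factor of two; the point is the order-of-magnitude discipline, not the answer.

```python
import math

# Fermi estimate: how many ping-pong balls fit inside a 747?
# All numbers below are rough assumptions, each good to maybe a factor of two.

cabin_length_m = 60        # a 747 fuselage is ~70 m; knock some off for cockpit and tail
cabin_radius_m = 3         # rough interior radius
cabin_volume_m3 = math.pi * cabin_radius_m**2 * cabin_length_m

ball_diameter_m = 0.04     # a ping-pong ball is 40 mm across
ball_volume_m3 = (4 / 3) * math.pi * (ball_diameter_m / 2)**3

packing_fraction = 0.64    # random close packing of spheres fills about 64% of space

n_balls = cabin_volume_m3 * packing_fraction / ball_volume_m3
print(f"~{n_balls:.0e} ping-pong balls")   # lands in the tens of millions, i.e. order 10^7
```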

It struck me that there is a counterpart to this kind of thinking on the synthesis side, where you use similar techniques to arrive at a very rough design for a complex engineered artifact. I call such a design approach Dyson design, after the physicist Freeman Dyson, who was one of its best practitioners (not to be confused with the inventor James Dyson, whose designs, ironically, are not Dyson designs).

[Read more…]

Divergentism

This entry is part 3 of 3 in the series Lexicon

Divergentism is the idea that people are able to hear each other less as they age, and that information ubiquity paradoxically accelerates this process, so that technologically advancing societies grow more divergentist over historical time scales. The more everybody can know, the less everybody can see or hear each other. I first outlined this idea in a December 2015 post, Can You Hear Me Now? Rather appropriately, that post reads a little weird and is hard to understand now, because the title and core metaphor come from a Verizon ad that was airing on television at the time.

Here is how I described the idea then:

Divergentism is the idea that as individuals grow out into the universe, they diverge from each other in thought-space. This, I argued, is true even if, in absolute terms, the sum of shared beliefs is steadily increasing. Because the sum of beliefs that are not shared increases even faster on average. Unfortunately, you are unique, just like everybody else.

The opposed, much more natural idea, is convergentism. In my experience, this is the view most people actually hold:

Most people are convergentists by default. They believe that if reasonable people share an increasing number of explicit beliefs, they must necessarily converge to similar conclusions about most things. A more romantic version rests on the notion of continuously deepening relationships based on unspoken bonds between people. 

In the 6+ years since I first blogged the idea, it has turned into one of my conceptual pillars, so I figured it was time to put down a short, canonical account of it. Here is a whiteboard sketch of the idea. The x-axis is time, interpreted as either historical time or individual lifetime, and the y-axis is something like the size of collective belief space. The cone represents the divergence.
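A toy numerical illustration of the cone, of my own construction and not from the original post: suppose shared beliefs grow linearly over time while each person’s total beliefs grow faster. The absolute amount of shared belief rises, yet the shared fraction of belief-space shrinks, which is the divergentist claim in miniature.

```python
# Toy model: shared beliefs grow, total beliefs grow faster, so the shared
# fraction of belief-space shrinks. Growth rates are arbitrary assumptions
# chosen only to make the trend visible.

for t in range(1, 6):              # t = time, in arbitrary units
    shared = 10 * t                # shared beliefs grow linearly
    total = 10 * t + 3 * t**2      # total beliefs per person grow faster
    print(f"t={t}: shared={shared:3d}  total={total:3d}  shared fraction={shared / total:.2f}")
```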

The core idea remains the same, but I’ve added two corollaries:

First, the divergentism/convergentism dichotomy applies to societies at large and to individual psyches, not just to the intersubjective level between atomic individuals.

At the societal level, societies understand each other less and less with increasing information ubiquity, at any level of aggregation you might consider, from packs to nations. You might get random spooky entanglements, but by default, society is divergentist. The social universe expands.

This idea is consistent with one in Hitchhiker’s Guide, that the discovery of the Babel Fish, by removing all translation barriers to communication, sparked an era of bloody wars. But conflict in my theory is merely the precursor to a more profound universal mutual disengagement.

Second, at the sub-individual level, where you consider the non-atomicity of the psyche, things are more complex, and I’m fairly sure the psyche by default is not divergentist. It is convergentist. A divergentist psyche is one characterized by a sort of progressive fragmentation of selfhood. A simple example is when you read something you wrote 10 years ago and it feels like it was written by a stranger. Or when somebody quotes something you wrote at you, and you don’t recognize it.

As a thought experiment, imagine you could have different versions of you, at different ages, all together. How much would you agree about things? How well would you understand each other? How easily could you reach consensus on things? Say all versions of you needed to pick a restaurant for dinner after the All-Yous conference. Would it be easy or hard? How about a book to read together?

I think I’m a psyche-level divergentist, but I think most people are not. Most people grow more integrated over time, not less. In fact, increasing disaggregation of the psyche is usually treated as a mental illness, though I think there is a healthy way to do it.

So to summarize the 3 laws of divergentism:

  1. Most societies diverge epistemically at all scales of aggregation over historical time scales
  2. Most social graphs get increasingly disconnected over societal time scales
  3. Most individuals get increasingly integrated over a lifetime, but some have divergent psyches

I am most confident about the second assertion.

Divergentism is both an idea you can believe or disbelieve, and a basis for an ideological doctrine (hence the –ism) that you can subscribe to or reject. You could capture both aspects with this simple statement: Humans diverge at all levels of thought-space, from the sub-individual to species, and this is a good thing. The doctrine part is the last clause.

If you are a divergentist, you hold that the social-cognitive universe is expanding towards an epistemic heat death of universal solipsism, and you are at peace with this thought. You explain contemporary social phenomena in light of this thought. For example, political polarization is just an anxious resistance to divergence forces. Subculturalization and atomization are a natural consequence of it.

Locally, there may be reversals of this tendency, even in very late historical stages. These manifest as what I call mutualism vortices, which are a bit like islands of low entropy in a universe winding down to heat death: dissipative structures of shared knowing and meaning. But overall, everything is divergent, and such vortices become progressively rarer, just as there is an infinite number of primes, but they thin out as you go up the number line.

Tools

This entry is part 2 of 3 in the series Lexicon

There are two kinds of tools: user-friendly tools, and physics-friendly tools. User-friendly tools wrap a domain around the habits of your mind via a user-experience metaphor, while physics-friendly tools wrap your mind around the phenomenology of a domain via an engineer-experience metaphor. Most real tools are a blend of the two kinds, but with a clear bias. The shape of a hammer is more about inertia and leverage than the geometry of your grip, while the shape of a pencil is more about your hand than about the properties of graphite. The middle tends to produce janky tools usable by nobody.

Physics-friendly tools force you to grow in a specific disciplined way, while user-friendly tools save you the trouble of a specific kind of growth and discipline. Whether you use the saved effort to grow somewhere else, or merely grow lazier, is up to you. Most people choose a little of both, and grow more leisured, and we call this empowerment. Using a washing machine is easier than washing clothes by hand, and saves your time and energy. Some of those savings go towards learning newer, cleverer, more fun tools, the rest goes to more TV or Twitter.

Physics-friendly tools feel like real tools, and never let you forget that they exist. But if you grow good enough at wielding them, they allow you to forget that you exist. User-friendly tools feel like alert servants, and never let you forget that you exist. If you grow good enough at wielding them, they allow you to forget that they exist. When a tool allows you to completely forget that you exist, we call it mastery. When it allows you to completely forget the tool exists, we call it luxury.

The nature of a tool can be understood in terms of three key properties that locate it in a three-dimensional space. One we have already encountered: physics-friendliness to user-friendliness. The other two dimensions are praxis and poiesis.

The praxis dimension determines how a tool is situated in its environment. The poiesis dimension determines its intrinsic tendencies.

Shell scripting is high praxis, low poiesis. Shell scripts live in the wide world, naturally aware of everything from the local computer’s capabilities to the entire internet. Scripting in a highly sandboxed language like Matlab is low praxis, high poiesis. Matlab scripts are naturally aware of nothing except the little IDE world that contains them.

The shape of the range of a tool in this 3-dimensional space might be called its gamut, by analogy to the color profiles of devices like monitors and printers in 3-dimensional colorspaces (which are variously defined in terms of user-friendly variables like hue/saturation/value, or their more physics-friendly cousins like the L*a*b* (CIELAB) color space).

What we think of as the “medium of the message” is a function of this gamut. Extremely specialized tools, such as, say, wire strippers, have a tiny gamut, but are very precisely matched to their function. They are the equivalent of the precise Pantone shades used by color professionals. Other tools with very large gamuts, like hammers, are not precisely matched to any particular function, but are roughly useful in almost any functional context.
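As a purely illustrative toy encoding of my own, not anything from the post: you could represent each tool as a point in the (user-friendliness, praxis, poiesis) space, with a rough spread standing in for its gamut. Specialized tools like wire strippers get a small spread; general tools like hammers get a large one.

```python
from dataclasses import dataclass

@dataclass
class Tool:
    name: str
    user_friendliness: float  # 0 = physics-friendly, 1 = user-friendly
    praxis: float             # 0 = sandboxed, 1 = situated in the wide world
    poiesis: float            # 0 = low intrinsic generativity, 1 = high
    spread: float             # rough stand-in for gamut: how much of the space the tool covers

# Coordinates are illustrative guesses, not measurements.
tools = [
    Tool("hammer",        0.2, 0.8, 0.3, spread=0.7),
    Tool("wire stripper", 0.5, 0.6, 0.1, spread=0.1),
    Tool("shell script",  0.1, 0.9, 0.3, spread=0.6),
    Tool("Matlab",        0.4, 0.2, 0.8, spread=0.4),
]

# Large-gamut tools are loosely matched to many jobs; small-gamut tools are precisely matched to one.
for t in sorted(tools, key=lambda x: x.spread, reverse=True):
    print(f"{t.name:14s} gamut~{t.spread:.1f}  (UF={t.user_friendliness}, praxis={t.praxis}, poiesis={t.poiesis})")
```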

I am bad at learning new physics-friendly tools. In my entire life, I’ve really only learned three to depths that could be called professional-level (but still well short of self-dissolving mastery): Matlab, LaTeX, and WordPress. Matlab is high poiesis, low praxis. WordPress is the opposite. LaTeX is somewhere in the middle. I’m much better at learning user-friendly tools, but then, so is everybody, and what makes an engineer worth the title is their ability to pick up physics-friendly tools quickly and deeply.

I’ve learned dozens of physics-friendly tools in a very shallow way, up to what might be called hello-world literacy. Deep enough to demystify the nature of the tool, and develop a very rough appreciation of its gamut, but not enough to do anything useful with it. I can do this very quickly, but run into my limits equally quickly. This makes me a decent technology manager and consultant, but not a very good engineer.

In the last couple of years, through the pandemic, I self-consciously tried to change this, and learned several physics-friendly tools in deeper ways. For a while, I was calling myself a “temporarily embarrassed 10x engineer” on my twitter profile, a joke reference to a John Steinbeck line that was mostly lost on people. A more honest assessment is that I’m a 0.1x engineer who might make it to 0.5x with effort.

Most of the tools I learned through the pandemic were tools I’d previously learned to hello-world level, while a few, such as crimping and 3d printing, were entirely new to me. Here is a partial list:

  1. CAD (with OnShape)
  2. Soldering
  3. Electronics prototyping
  4. Embedded programming (with Arduinos)
  5. 3d printer use
  6. Working with a Dremel tool
  7. Python
  8. Animation with Procreate

Right now, I’m trying to pick up a few more — PyTorch (a machine learning framework in Python), 3d design/animation with Blender, and the basics of Solidity, the programming language for Ethereum. I hope to get to amateur levels of competence in at least a dozen tools before I turn 50, spanning perhaps 2-3 different technological stacks and associated tool chains. I have a sort of nominal goal for this middle-aged tool-learning frenzy converging towards “garage robotics” capabilities, but I’m not very hung up on how quickly I get to the full range of skills needed to build interesting robots (and yes, my current conception of robots includes machine learning and blockchain aspects). It’s going to take me a while to acquire a garage anyway.

This is uncomfortable territory for me because I’m by nature a tool-minimalist. Getting good at even one tool feels like an exhausting achievement for me. That’s why, despite being educated as an engineer, I am primarily a writer. Writing typically requires you to work with only a single, simple toolchain. If you’re good enough, you can limit yourself to just pen and paper, and other people will trip over each other trying to do all the rest for you, like formatting, editing, picking a good font, designing a good cover, getting the right PDF format done, and so forth. I’m not that good, so I have to work with more of the writing toolchain. Fortunately, WordPress empowered writers enough that you can get 90% of the value of a writing life with about 10% of the toolchain mastery effort that old-school print publishing called for, and I am perfectly happy to lazily give up on that last 10%.

So why try to gain competence at dozens of tools? So many that you have to think in terms of “stacks” and “toolchains” and worry about complicated matters like architecture and design strategy? The reason is simply that doing more complex things like building robots takes a higher minimum level of tooling complexity. We do not live in a very user-friendly universe, but we do live in a fairly physics-friendly one. So you need something like a minimum-viable toolchain to do a given thing.

There’s fundamental-limit phenomenology around minimum-viable tooling. A machine that flies has to have a certain minimal complexity, and building one will take tooling of a corresponding level of minimal complexity. You won’t build an airplane with just a screwdriver and a hammer like in the cartoons you see in Ikea manuals. In an episode of Futurama, there is a gag based on this idea. Professor Farnsworth buys a particle accelerator from Ikea that comes with a manual that calls for a screwdriver, a hammer, and a robot like Bender.

Periodically, there is a bout of enthusiasm in the technology world for getting past the current limits of minimum-viable tooling, and so you get somewhat faddish movements, like the no-code/low-code movements, that move complexity around without fundamentally reducing it. Often, such efforts even lead to tools that are overall harder to use. Even generally lazy people like me, who eagerly await the convenience of more user-friendly tools, end up preferring more “geeky” tools in such cases. This is something like the tool equivalent of a popular science book making an idea much harder to understand by refusing to include even basic middle-school mathematics. So instead of a simple equation like a+b=c, you get pages of impenetrable prose.

Premature user-friendliness is perhaps the root of all toolchain jankiness.

Fundamentally reducing the complexity of the tooling required to do a thing requires understanding the thing itself better. Simpler, more user-friendly tooling is the result of improved understanding, not of increased concern for human comfort and convenience. You have to get more engineering-friendly to generate such improved understandings before you can get more user-friendly with what you learn. Complex tooling usually gets worse before it gets better.

If you try to skip advancing knowledge, you end up with tools that try to be more user-friendly by becoming less physics-friendly, and the entire experience degrades.

Animation Sublimation

I’ve decided to teach myself the basics of animation this year. Writing hasn’t been as much fun lately, but drawing is suddenly becoming more fun. This is probably some sort of sublimation response to writer’s block making me mildly stabby and grumpy 🤬🔪🔪 (“I write, therefore I am”).

I’m starting with the rudimentary capabilities of the $10 Procreate app, and am posting gifs approximately daily on Twitter. My initial goal is to make 100 simple animations in the form of gifs a few seconds long. I’ve made 8 so far. You can follow my 100-gif-adventure on this thread. Once I get to 100, hopefully in a few months, I’ll probably upgrade to a more expensive tool and try to make longer things. Maybe 10 one-minute shorts will be the next goal. Here is one of my early attempts with an actual story.

I’ve always harbored animation ambitions, and idle dreams of making a Futurama or Rick and Morty style animated comedy science fiction show, but the tooling is finally getting good enough that individuals can do stuff. One can dream :)

Storytelling — Narrative Wet Bulb Temperature

This entry is part 6 of 11 in the series Narrativium

Telling jokes at a funeral is hard. Even entertaining an urge to do so is perhaps not a decent thing to do. At best, you might get away with telling a poignantly humorous anecdote about the deceased as part of a eulogy. The context of a funeral is simply not appropriate for joke-telling, and it’s not just a matter of social norms and performance expectations of grieving solemnity. People simply wouldn’t be in the mood.

Even if you were a comedian who left instructions for your funeral to be conducted in the form of a comedy festival, if people actually liked you, they’d likely find it somewhat difficult to get into the spirit of the idea.

Jokes at a funeral are a simple example of what we might call poor narrative-context fit (NCF). Not all stories can be told at all times with equal impact. And here I mean any performance with a narrative structure, not just actual fiction. The idea applies to nonfiction works too.

What drives narrative-context fit? I don’t have a general answer, but I have one for a special case: storytelling in a time of generalized crisis, such as we are living through now.

It is no secret that it’s been hard to tell compelling stories in the past few years. Television and cinema have turned into a wasteland of reboots and universe extensions. Thought leadership storytelling has descended from the smarmy heights of TED talks to the barely readable op-ed derps of today. It’s not that there are no good stories being told, but compared to say 2000-2017 or so, we’re definitely in a tough market.

A clue about why this is hard can be found in Robert McKee‘s description of narrative suspense:

“As pieces of exposition slip out of dialogue and into the background awareness of the reader or audience member, her curiosity reaches ahead with both hands to grab fistfuls of the future to pull her through the telling. She learns what she needs to know when she needs to know it, but she’s never consciously aware of being told anything, because what she learns compels her to look ahead.”

And

Suspense is “curiosity charged with empathy…” Suspense focuses the reader/audience by flooding the mind with emotionally tinged questions that hook and hold attention: “What’s going to happen next?” “What’ll happen after that?” “What will the protagonist do? Feel?”

Suspense is a “what happens next” curiosity you care about that anchors your attention to a period of time leading up to potential resolution. Or to put it another way, suspense literally creates your sense of future time. If you are not feeling suspense about how something in the future might turn out, in a sense, you’re not feeling the future at all. Your consciousness is concentrated in the past and present only, and not in a good way.

No suspense, no story, no future.

Now, extend this logic to the general background of suspense in the environment that a story has to compete with. We do not consume stories against a blank canvas backdrop. Whatever is going on in the world — a pandemic, a space telescope on a fraught deployment journey, a critical election — shapes the suspensefulness of life in general.

In fact, we might frame a hypothesis, which I call the suspense blindness hypothesis: you can’t see past the next big identity-altering thing in your future that’s keeping you in suspense. The most acutely felt “what happens next” thing.

Note that this is a spectator point of view. Suspense only exists if you can’t do much to change the uncertain outcome. You can only watch. If you can act, you’re in the story, not watching it unfold from the sidelines.

When there is a high level of suspense in the general background, it is harder to tell stories because you have to beat that level of suspense. It gets especially hard if you have to tell a story that extends far beyond the temporal horizon created by the suspense blindness. If everybody is waiting for the outcome of a critical election in a year, it’s hard to tell a story spanning the next decade. And this applies equally to a TED talk painting (say) a vision of progress over the next decade, and to a fictional story that plays out over the next decade.

Some of this is merely technical difficulty dealing with storytelling in a forking future. If there is no vague consensus around the future being a certain way, it’s hard to tell stories set in that future. It’s a bit like having to choose a foreground paint color that works against many different background colors, ranging from black to white.

Your only technical recourse is to jump far enough out into the future — a century say — that the stark forking divergences of today can be assumed to have been sorted out. But then the storytelling loses access to the emotional energies of the present.

I came up with a weird metaphor for thinking about this — narrative wet-bulb temperature.

The wet-bulb temperature is a combined measure of heat and humidity that tracks the body’s ability to cool itself. When it rises above roughly 35°C, the body can no longer shed heat through sweating. This is one of the many ways in which climate change is a more serious threat than you might think, since it can drive dangerously high wet-bulb temperatures.
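For the literal-minded, here is a small sketch of how physical wet-bulb temperature can be estimated from air temperature and relative humidity. I am using Stull’s 2011 empirical approximation as an assumed formula; the post itself doesn’t depend on any particular one.

```python
import math

def wet_bulb_stull(temp_c: float, rh_percent: float) -> float:
    """Approximate wet-bulb temperature (deg C) from air temperature (deg C)
    and relative humidity (%), using Stull's 2011 empirical fit. Valid roughly
    for RH between 5% and 99% and temperatures between -20C and 50C."""
    T, RH = temp_c, rh_percent
    return (T * math.atan(0.151977 * math.sqrt(RH + 8.313659))
            + math.atan(T + RH)
            - math.atan(RH - 1.676331)
            + 0.00391838 * RH**1.5 * math.atan(0.023101 * RH)
            - 4.686035)

# At 40C air temperature and 75% relative humidity, the wet-bulb temperature
# is already near the ~35C survivability threshold mentioned above.
print(round(wet_bulb_stull(40, 75), 1))
```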

Here’s the metaphor: we tell ourselves stories to regulate the amount of narrative tension we feel in life generally. Felt suspense is one measure of this tension (though it’s a rich mess of many contributing textures, such as cringe, horror, fear, amusement, and mystification). We metaphorically “cool” or “warm” ourselves through stories (where “temperature” maps to a vector of attributes). Like thermoregulation, narrative regulation is a function of context.

Narrative wet-bulb temperature is a measure of how well narrative regulation can work in a given zeitgeist. Beyond some metaphoric equivalent of 35°C, perhaps it becomes impossible to tell stories. Perhaps the appropriate scale is a weirdness scale, measured in Harambes. Perhaps above 35H, storytelling is psychophysically impossible.

As with climate, we have some ability to control our environments through the narrative equivalent of air-conditioning. Personal climate control, managing your exposure to the stresses of the general outdoor zeitgeist, can be done by gatekeeping information aggressively (this idea is central to the book I’m writing). But to the extent storytelling is a public act, such “air-conditioned” stories can only be heard by those who share your particular cozy climate-controlled headspace.

We appear to have collectively accepted this particular tradeoff, in that we have collectively abandoned public spaces (and by extension, truly public storytelling) and retreated to the cozyweb.

Random Acts of X

The phrasal template random acts of ________ is clearly one of my favorites. I seem to have used it 20+ times on Twitter in the last few years. Here are the actual instances:

  1. random acts of ontology
  2. Random Acts of Web3ing
  3. random acts of policy vandalism
  4. random acts of templing [as in, treating something as a temple]
  5. random acts of patchy, pointillist, impressionist worldbuilding
  6. Random acts of philosophy in the “air game” and random acts of tinkering in the “ground game”
  7. Random Acts of Magical Thinking
  8. random acts of tariffs
  9. random acts of sciencing
  10. random acts of art production
  11. random acts of revenue-generation
  12. Random acts of petrichor
  13. random acts of strategy
  14. random acts of cash-flow management
  15. random acts of consulting
  16. Random Acts of System Integration (RASI)
  17. Random Acts of Product Development
  18. Random Acts of Workflow Improvement and Unnecessary Optionality
  19. random acts of solutionism
  20. Random Acts of Mildly Profitable or Break-Even Teaching
  21. random acts of twitter strategy
  22. Random Acts of Overt Marketing
  23. random acts of garam-masala-ing

At one point I tweeted a prompt inviting people to fill in the blank, and got a whole bunch of responses, some clever, others not so clever.

iirc, the very first example I encountered, sometime in the 90s I think, was “random acts of marketing.” That stuck with me because it seemed like such an apt description of the marketing efforts of most companies.

Random acts of X are a regime of behavior that you might call “bullshit agency”: some fraction of it works, but you don’t know, and to a certain extent don’t care, which fraction. Hence the famous John Wanamaker quote, “Half the money I spend on advertising is wasted; the trouble is I don’t know which half.”

Random acts of X happen when you act opportunistically, based on circumstantial possibilities and very little thought, and with indifference to whether or not your actions make any sort of larger strategic sense. The randomness in what the immediate circumstances allow or encourage you to do translates into randomness in what you actually end up doing. Noise in, noise out.

This does not mean that the opposite of “random acts of X” is strategy. You can have “random acts of strategy” too, and in fact most strategy fits that description. A CEO goes off on a leadership retreat with a few buddies, enjoys good food, good wine, and whiteboard sessions, and returns with a nice mind-map and strategy notes… and it’s back to the quagmire of operations within a day. That’s random acts of strategy.

Random acts of X regimes are attractive because they allow you to act with very little energy and intelligence. And we default to such regimes as a slightly superior alternative to being frozen in inaction and doing nothing at all. The leap of faith underlying random acts of x-ing is belief in a benevolent universe where doing something, anything, beats doing nothing.

Reviewing my tweets, I notice that I use the phrasal template more often to refer to my own behaviors than to comment on others’ behaviors. The template has no particular stable valence for me. Sometimes random-acts-of-x-ing is good, sometimes it is bad.

But looking at my (over)use of the template, I do wonder: what does it take to move such behavior into a non-random regime without overwhelming it with the artifacts of deterministic planning, and without destroying what little energy there is?

The best guide I’ve found so far is Charles E. Lindblom’s classic 1959 management article, The Science of Muddling Through. It is one of the articles I recommend most often to consulting clients (I found it via John Kay’s excellent book, Obliquity).

Muddling through is the act of adding just enough determinism to a default random-acts-of-x situation to get it to make some sort of roughly right directional progress. In Lindblom’s account, muddling through involves a “method of successive limited comparisons” as opposed to a “rational comprehensive” approach.

Muddling through is both a better term, and a better concept, than its degenerate modern descendants like “agile.” The salient feature of Lindblom’s account is that he doesn’t claim muddling through is a “theory” but rather a manner of doing that “greatly reduces or eliminates reliance on theory.”

Still, whether you call it agile and pretend you have a theory, or call it muddling through and admit that you don’t, the problem remains: how do you prevent this regime of behavior from either slipping into useless randomness or getting swamped by the imposition of energy-draining theorizing?

One part of the answer is, as Karl Weick argued, to give up on theory, but not on theorizing. The idea that “what theory is not, theorizing is” has been the linchpin of my consulting work for a decade now, but I’ve never quite clarified the essence of the distinction to myself.

Weick’s idea is similar in spirit to the Eisenhower line that plans are nothing, but planning is everything; or to Frederick Brooks’ idea that you should “plan to throw one away” (and Joel Spolsky’s counter-argument that you should not throw one away).

I think the common thread here is that your history of engagement with a problem or question is important, but the specific conceptual scaffoldings you used in generating that history are not. The data matters, the algorithm you used to generate it doesn’t. Be the data, not the algorithm.

This then is the solution to the perils of the “random acts of X” regime — better memory. Turn the memoryless random acts of X into memoryful not-so-random acts of X.

This assumes that memory by itself has something like a gradient to it; a historical logic that can bias the context of random-acts-of-x-ing enough that your actions acquire a drift, a direction of muddling through.
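Here is a toy sketch of my own, not anything from Lindblom or Weick, of what a memory-induced gradient buys you: compare a memoryless one-dimensional walk of random acts with one whose next act is gently nudged along a slowly updated average of its own past steps. Both are still muddling, but the second tends to develop a drift, a direction, without ever fixing a goal.

```python
import random

random.seed(0)

def walk(steps: int, memory_weight: float) -> float:
    """A 1-D walk of random 'acts'. With memory_weight = 0 the walk is memoryless.
    With a positive weight, each act is nudged along an exponentially weighted
    average of past steps: history frames the next act without specifying it."""
    position, drift = 0.0, 0.0
    for _ in range(steps):
        act = random.uniform(-1, 1)          # the random act the circumstances offer
        step = act + memory_weight * drift   # biased, but still mostly random
        drift = 0.9 * drift + 0.1 * step     # memory: a slowly updated sense of direction
        position += step
    return position

print("memoryless:", round(walk(1000, memory_weight=0.0), 1))
print("memoryful :", round(walk(1000, memory_weight=0.8), 1))
```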

This direction is not a True North. It is not a teleological potential induced by a goal, but an etiological potential induced by a history (or more generally, data). A True Past perhaps. The test of truth being that it creates a coherent future despite the randomness of circumstantial forces. Such an etiological potential is, however, merely necessary, not sufficient. To get past historical determinism, the True Past must only be allowed to frame the random acts of x-ing in the present, not fully specify it. And if your random acts are not capable of blowing up the historical context that contains them, they are not random enough.

I think of it as “fuck around and find out, but never forget.”

2021 Ribbonfarm Extended Universe Annual Roundup

This entry is part 15 of 17 in the series Annual Roundups

There is no getting around it: I basically took the year off from this blog, not just in the sense that I wrote much less here than usual (29 posts), but in the sense that all the posts were short ones with self-consciously modest ambitions. In fact, most posts were actively anti-ambitious, since I carefully avoided writing anything with viral potential. The blog basically went underground. For the first time ever, and by design, there was not even a single post that could be called a hit, let alone a viral one.

A big reason was: I had nothing to say in 2021 in blog mode.

And a big reason for that was that the medium of blogging itself is not sure what it wants to say anymore. We are in a liminal passage with blogging, where the medium has no message.

So it’s not just me. It feels like the entire blogosphere (what’s left of it) took the year off to figure out a new identity — if one is even possible — in a world overrun by email newsletters, Twitter threads, weird notebook-gardens on static sites or public notebook apps, and the latest challenger: NFT-fied essays.

All those new media seem to have clear ideas of what they are, or what they want to be when they grow up. But this aging medium doesn’t. And while I have a presence in all those younger media, they don’t yet feel substantial enough to serve as main acts, the way blogging has for so long.

Perhaps there is no main-act medium in the future. Perhaps we are witnessing the birth of a glorious new polycentric media landscape, where the blogosphere will be eaten not by any one successor, but by a collection of media within which blogs will merely be a sort of First Uncle to the rest. The medium through which you say embarrassing things at Thanksgiving, with all the other media cringing. Maybe, just as every unix shell command turned into a unicorn tech company, every kind of once-blog-like content will now be its own medium. Listicles became Twitter, photoblogs became Instagram, and so on.

The entire blogosphere is going through perhaps its most significant existential crisis since the invention of blogging 22 years ago. And I’ve been at this for 15 of those years — this is the 15th annual roundup! Ironically, every couple of years through that period, there has been a round of discussion on “the death of blogging,” but now that it seems to be actually happening, there isn’t an active conversation around it.

If this is the end, it’s a whimper rather than a bang.

One sign it is real: this is the second roundup I’ve felt compelled to title “extended universe,” because my publishing presence is now simply too scattered for the blog alone to represent it.

But I rather hope not. I think there’s a chance it’s going to be a Doctor Who style regeneration instead, and if so, I’m here for it. If blogs must die, so be it. If there’s a fighting chance of a regeneration, the fight will be worthwhile.

On to the roundup, with embarrassing-uncle commentary on the brave new world.

[Read more…]

Thinking in OODA Loops

I’ve been meaning to turn my OODA loop workshop (which I’ve done formally/informally for corporate audiences for 5+ years) into an online course for years, but never got around to it. So I decided to just publish the main slide deck.

Here’s the link.

This deck is 72 slides, and takes me about 2 hours to cover. It actually began as an informal talk using index cards at the 2012 Boyd and Beyond conference at Quantico, to a hardcore Boydian crowd, so it’s survived that vetting.

The two times I’ve done the full, day-long formal version for large groups, I’ve paired a morning presentation/Q&A session with an afternoon of small group exercises applying the ideas to specific problems the group is facing. More commonly, I tend to just share the deck with consulting clients who want to apply OODA to their leadership challenges. We discuss 1:1 after they’ve reviewed it, and begin applying it in our work together.

In the spirit of John Boyd, whose OG briefing slides are freely available on the web (highly recommended), I’m releasing these slides publicly without any specified licenses, restrictions, or guarantees. There are a lot of random Google images and screenshots from documents in the slides, so use at your own risk.

Feel free to use these slides as part of your own efforts to introduce others to OODA thinking, including as part of paid courses. You can also modify/augment/remix them as you like. Attribution appreciated, but not expected.

Read on, for some notes/guidance on how to design a workshop incorporating this material.

[Read more…]