A blogchain of terms I try to define to my taste. Either idiosyncratic definitions of common terms, or straightforward definitions of neologisms.

Ghost Protocols

This entry is part 1 of 3 in the series Lexicon

A ghost protocol is a pattern of interactions between two parties wherein one party pretends the other does not exist. A simple example is the “silent treatment” pattern we all learn as kids. In highly entangled family life, the silent treatment is not possible to sustain for very long, but in looser friendship circles, it is both practical and useful to be able to ghost people indefinitely. Arguably, in the hyperconnected and decentered age of social media, the ability to ghost people at an individual level is a practical necessity, and not necessarily cruel. People have enough social optionality and legal protections now that not being recognized by a particular person or group, even a very powerful one, is not as big a deal as it once was.

At the other end of the spectrum of complexity of ghosted states is the condition of officially disavowed spies, as in the eponymous Mission Impossible movie. I don’t know if “ghost protocol” is a real term of art in the intelligence world, but it’s got a nice ring to it, so I’ll take it. One of my favorite shows, Burn Notice, is set within a ghost protocol situation.

If you pretend a person or entire group doesn’t exist, and they’re real, they don’t go away of course. As Philip K. Dick said, reality is that which doesn’t go away when you stop believing in it.

So you need ways of dealing with live people who are dead to you, and preventing them from getting in your way, without acknowledging their existence. When you put some thought and structure around those ways, you’ve got a ghost protocol.


This entry is part 2 of 3 in the series Lexicon

There are two kinds of tools: user-friendly tools, and physics-friendly tools. User-friendly tools wrap a domain around the habits of your mind via a user-experience metaphor, while physics-friendly tools wrap your mind around the phenomenology of a domain via an engineer-experience metaphor. Most real tools are a blend of the two kinds, but with a clear bias. The shape of a hammer is more about inertia and leverage than the geometry of your grip, while the shape of a pencil is more about your hand than about the properties of graphite. The middle tends to produce janky tools unusable by anybody.

Physics-friendly tools force you to grow in a specific disciplined way, while user-friendly tools save you the trouble of a specific kind of growth and discipline. Whether you use the saved effort to grow somewhere else, or merely grow lazier, is up to you. Most people choose a little of both, and grow more leisured, and we call this empowerment. Using a washing machine is easier than washing clothes by hand, and saves your time and energy. Some of those savings go towards learning newer, cleverer, more fun tools, the rest goes to more TV or Twitter.

Physics-friendly tools feel like real tools, and never let you forget that they exist. But if you grow good enough at wielding them, they allow you to forget that you exist. User-friendly tools feel like alert servants, and never let you forget that you exist. If you grow good enough at wielding them, they allow you to forget that they exist. When a tool allows you to completely forget that you exist, we call it mastery. When it allows you to completely forget the tool exists, we call it luxury.

The nature of a tool can be understood in terms of three key properties that locate it in a three-dimensional space. One we have already encountered: physics-friendliness to user-friendliness. The other two dimensions are praxis and poiesis.

The praxis dimension determines how a tool is situated in its environment. The poiesis dimension determines its intrinsic tendencies.

Shell scripting is high praxis, low poiesis. Shell scripts live in the wide world, naturally aware of everything from the local computer’s capabilities to the entire internet. Scripting in a highly sandboxed language like Matlab is low praxis, high poiesis. Matlab scripts are naturally aware of nothing except the little IDE world that contains them.

The shape of the range of a tool in this 3-dimensional space might be called its gamut, by analogy to the color profiles of devices like monitors and printers in 3-dimensional colorspaces (which are variously defined in terms of user-friendly variables like hue/saturation/value, or their more physics-friendly cousins like the L*a*b* coordinates of the CIELAB color space).

What we think of as the “medium of the message” is a function of this gamut. Extremely specialized tools, such as say wire strippers, have a tiny gamut, but are very precisely matched to their function. They are the equivalent of precise Pantone shades used by color professionals. Other tools, with very large gamuts, like hammers, are not very precisely matched to any particular function, but are roughly useful in almost any functional context.

I am bad at learning new physics-friendly tools. In my entire life, I’ve really only learned three to depths that could be called professional-level (but still well short of self-dissolving mastery): Matlab, LaTeX, and WordPress. Matlab is high poiesis, low praxis. WordPress is the opposite. LaTeX is somewhere in the middle. I’m much better at learning user-friendly tools, but then, so is everybody, and what makes an engineer worth the title is their ability to pick up physics-friendly tools quickly and deeply.

I’ve learned dozens of physics-friendly tools in a very shallow way, up to what might be called hello-world literacy. Deep enough to demystify the nature of the tool, and develop a very rough appreciation of its gamut, but not enough to do anything useful with it. I can do this very quickly, but run into my limits equally quickly. This makes me a decent technology manager and consultant, but not a very good engineer.

In the last couple of years, through the pandemic, I self-consciously tried to change this, and learned several physics-friendly tools in deeper ways. For a while, I was calling myself a “temporarily embarrassed 10x engineer” on my Twitter profile, a joke reference to a John Steinbeck line that was mostly lost on people. A more honest assessment is that I’m a 0.1x engineer who might make it to 0.5x with effort.

Most of the tools I learned through the pandemic were tools I’d previously learned to hello-world level, while a few, such as crimping and 3d printing, were entirely new to me. Here is a partial list:

  1. CAD (with OnShape)
  2. Soldering
  3. Electronics prototyping
  4. Embedded programming (with Arduinos)
  5. 3d printer use
  6. Working with a Dremel tool
  7. Python
  8. Animation with Procreate

Right now, I’m trying to pick up a few more: PyTorch (a machine learning framework in Python), 3d design/animation with Blender, and the basics of Solidity, the programming language for Ethereum. I hope to get to amateur levels of competence in at least a dozen tools before I turn 50, spanning perhaps 2-3 different technological stacks and associated tool chains. I have a nominal goal for this middle-aged tool-learning frenzy: converging towards “garage robotics” capabilities. But I’m not very hung up on how quickly I get to the full range of skills needed to build interesting robots (and yes, my current conception of robots includes machine learning and blockchain aspects). It’s going to take me a while to acquire a garage anyway.

This is uncomfortable territory for me because I’m by nature a tool-minimalist. Getting good at even one tool feels like an exhausting achievement for me. That’s why, despite being educated as an engineer, I am primarily a writer. Writing typically requires you to work with only a single, simple toolchain. If you’re good enough, you can limit yourself to just pen and paper, and other people will trip over each other trying to do all the rest for you, like formatting, editing, picking a good font, designing a good cover, getting the right PDF format done, and so forth. I’m not that good, so I have to work with more of the writing toolchain. Fortunately, WordPress empowered writers enough that you can get 90% of the value of a writing life with about 10% of the toolchain mastery effort that old-school print publishing called for, and I am perfectly happy to lazily give up on that last 10%.

So why try to gain competence at dozens of tools? So many that you have to think in terms of “stacks” and “toolchains” and worry about complicated matters like architecture and design strategy? The reason is simply that doing more complex things like building robots takes a higher minimum level of tooling complexity. We do not live in a very user-friendly universe, but we do live in a fairly physics-friendly one. So you need something like a minimum-viable toolchain to do a given thing.

There’s fundamental-limit phenomenology around minimum-viable tooling. A machine that flies has to have a certain minimal complexity, and building one will take tooling of a corresponding level of minimal complexity. You won’t build an airplane with just a screwdriver and a hammer like in the cartoons you see in Ikea manuals. In an episode of Futurama, there is a gag based on this idea. Professor Farnsworth buys a particle accelerator from Ikea that comes with a manual that calls for a screwdriver, a hammer, and a robot like Bender.

Periodically, there is a bout of enthusiasm in the technology world for getting past the current limits of minimum-viable tooling, and so you get somewhat faddish movements, like the no-code/low-code movements, that move complexity around without fundamentally reducing it. Often, such efforts even lead to tools that are overall harder to use. Even generally lazy people like me, who eagerly await the convenience of more user-friendly tools, end up preferring more “geeky” tools in such cases. This is something like the tool equivalent of a popular science book making an idea much harder to understand by refusing to include even basic middle-school mathematics. So instead of a simple equation like a+b=c, you get pages of impenetrable prose.

Perhaps premature user-friendliness is the root of all toolchain jankiness.

Fundamentally reducing the complexity of tooling required to do a thing requires understanding the thing itself better. Simpler, more user-friendly tooling is the result of improved understanding, not increased concern for human comfort and convenience. You have to get more engineering-friendly to generate such improved understandings before you can get more user-friendly with what you learn. Complex tooling usually gets worse before it gets better.

If you try to skip advancing knowledge, you end up with tools that try to be more user-friendly by becoming less physics-friendly, and the entire experience degrades.


This entry is part 3 of 3 in the series Lexicon

Divergentism is the idea that people are able to hear each other less as they age, and that information ubiquity paradoxically accelerates this process, so that technologically advancing societies grow more divergentist over historical time scales. The more everybody can know, the less everybody can see or hear each other. I first outlined this idea in a December 2015 post, Can You Hear Me Now? Rather appropriately, that post reads a little weird and hard to understand now, because the title and core metaphor come from a Verizon ad that was airing on television at the time.

Here is how I described the idea then:

Divergentism is the idea that as individuals grow out into the universe, they diverge from each other in thought-space. This, I argued, is true even if, in absolute terms, the sum of shared beliefs is steadily increasing, because the sum of beliefs that are not shared increases even faster on average. Unfortunately, you are unique, just like everybody else.

The opposed, much more natural idea, is convergentism. In my experience, this is the view most people actually hold:

Most people are convergentists by default. They believe that if reasonable people share an increasing number of explicit beliefs, they must necessarily converge to similar conclusions about most things. A more romantic version rests on the notion of continuously deepening relationships based on unspoken bonds between people. 

In the 6+ years since I first blogged the idea, it has turned into one of my conceptual pillars, so I figured it was time to put down a short, canonical account of it. Here is a whiteboard sketch of the idea. The x-axis is time, interpreted as either historical time or individual life-time, and the y-axis is something like the size of collective belief space. The cone represents the divergence.
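The shape of that cone can be illustrated with a toy simulation. All starting stocks and growth rates below are invented purely for illustration: both the shared and the unshared belief pools grow every year, but the unshared pools compound faster, so the shared fraction of thought-space shrinks even as absolute overlap keeps rising.

```python
# Toy model of the divergence cone between two people. All numbers
# are made up for illustration. Shared beliefs grow steadily, but
# each person's idiosyncratic beliefs compound faster, so the shared
# fraction of total belief space falls even as the absolute number
# of shared beliefs keeps increasing.
def belief_trajectory(years=40, shared_growth=1.05, personal_growth=1.15):
    shared, personal = 100.0, 100.0  # arbitrary starting belief counts
    history = []
    for year in range(years + 1):
        # two people, so two unshared pools
        frac = shared / (shared + 2 * personal)
        history.append((year, shared, frac))
        shared *= shared_growth
        personal *= personal_growth
    return history

for year, shared, frac in belief_trajectory()[::10]:
    print(f"year {year:2d}: shared beliefs = {shared:6.0f}, "
          f"shared fraction = {frac:.3f}")
```

The absolute overlap grows without bound, yet the shared fraction decays toward zero: the cone widens.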

The core idea remains the same, but I’ve added two corollaries:

First, the divergentism/convergentism dichotomy applies to societies at large, and individual psyches as well, not just the intersubjective level between atomic individuals.

At the societal level, societies understand each other less and less with increasing information ubiquity, at any level of aggregation you might consider, from packs to nations. You might get random spooky entanglements, but by default, society is divergentist. The social universe expands.

This idea is consistent with one in Hitchhiker’s Guide, that the discovery of the Babel Fish, by removing all translation barriers to communication, sparked an era of bloody wars. But conflict in my theory is merely the precursor to a more profound universal mutual disengagement.

Second, at the sub-individual level, where you consider the non-atomicity of the psyche, things are more complex, and I’m fairly sure the psyche by default is not divergentist. It is convergentist. A divergentist psyche is one characterized by a sort of progressive fragmentation of selfhood. A simple example is when you read something you wrote 10 years ago and it feels like it was written by a stranger. Or when somebody quotes something you wrote at you, and you don’t recognize it.

As a thought experiment, imagine you could have different versions of you, at different ages, all together. How much would you agree about things? How well would you understand each other? How easily could you reach consensus? Say all versions of you needed to pick a restaurant for dinner after the All-Yous conference. Would it be easy or hard? How about a book to read together?

I think I’m a psyche-level divergentist, but I think most people are not. Most people grow more integrated over time, not less. In fact, increasing disaggregation of the psyche is usually treated as a mental illness, though I think there is a healthy way to do it.

So to summarize the 3 laws of divergentism:

  1. Most societies diverge epistemically at all scales of aggregation over historical time scales
  2. Most social graphs get increasingly disconnected over societal time scales
  3. Most individuals get increasingly integrated over a lifetime, but some have divergent psyches

I am most confident about the second assertion.

Divergentism is both an idea you can believe or disbelieve, and a basis for an ideological doctrine (hence the -ism) that you can subscribe to or reject. You could capture both aspects with this simple statement: Humans diverge at all levels of thought-space, from the sub-individual to the species, and this is a good thing. The doctrine part is the last clause.

If you are a divergentist, you hold that the social-cognitive universe is expanding towards an epistemic heat death of universal solipsism, and you are at peace with this thought. You explain contemporary social phenomena in light of this thought. For example, political polarization is just an anxious resistance to divergence forces. Subculturalization and atomization are a natural consequence of it.

Locally, there may be reversals of this tendency, even in very late historical stages. These manifest as what I call mutualism vortices: islands of low entropy in a universe winding down to a heat death, dissipative structures of shared knowing and meaning. But such vortices become progressively rarer, just as there is an infinite number of primes, yet they thin out as you go down the number line. Overall, everything is divergent.
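The prime-number analogy is directly checkable. This sketch (window positions and width chosen arbitrarily for illustration) counts primes in equal-width windows progressively further down the number line: primes never run out, but each window holds fewer than the last, with density falling off roughly like 1/ln(n) per the prime number theorem.

```python
def is_prime(n):
    """Trial division; fine for the modest numbers used here."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def primes_in_window(start, width=10_000):
    """Count primes in the half-open interval [start, start + width)."""
    return sum(1 for n in range(start, start + width) if is_prime(n))

# Same-width windows, progressively further down the number line.
for start in (0, 100_000, 1_000_000, 10_000_000):
    print(f"primes in [{start:>10,}, {start + 10_000:>10,}): "
          f"{primes_in_window(start)}")
```

The counts shrink from window to window but never reach zero, which is the shape the mutualism-vortex claim asserts for islands of shared meaning.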