Sometime in the last few years, apparently everybody turned into a hacker. Besides computer hacking, we now have lifehacking (using tricks and short-cuts to improve everyday life), body-hacking (using sensor-driven experimentation to manipulate your body), college-hacking (students who figure out how to get a high GPA without putting in the work) and career-hacking (getting ahead in the workplace without “paying your dues”). The trend shows no sign of letting up. I suspect we’ll soon see the term applied in every conceivable domain of human activity.
I was initially very annoyed by what I saw as a content-free overloading of the term, but the more I examined the various uses, the more I realized that there really is a common pattern to everything that is being subsumed by the term hacking. I now believe that the term hacking is not over-extended; it is actually under-extended. It should be applied to a much bigger range of activities, and to human endeavors on much larger scales, all the way up to human civilization.
I’ve concluded that we’re reaching a technological complexity threshold where hacking is going to be the main mechanism for the further evolution of civilization. Hacking is part of a future that’s neither the exponentially improving AI future envisioned by Singularity types, nor the entropic collapse envisioned by the Collapsonomics types. It is part of a marginally stable future where the upward lift of diminishing-magnitude technological improvements and hacks just balances the downward pull of entropic gravity, resulting in an indefinite plateau.
I call this possible future hackstability.
Hacking as Anti-Refinement
Hacking is the term we reach for when trying to describe an intelligent, but rough-handed and expedient behavior aimed at manipulating a complicated reality locally for immediate gain. Two connotations of the word hack, rough-hewing and mediocrity, apply to some extent.
I’ll offer this rather dense definition that I think covers the phenomenology, and unpack it through the rest of the post.
Hacking is a pattern of local, opportunistic manipulation of a non-disposable complex system that causes a lowering of its conceptual integrity, creates systemic debt and moves intelligence from systems into human brains.
By this definition, hacking is anti-refinement. It is therefore a barbarian mode of production because it moves intelligence out of systems and into human brains, making those human brains less interchangeable. Yet, it is not the traditional barbarian mode of predatory destruction of a settled civilization from outside its periphery.
Technology has now colonized the planet, and there is no “outside” for anyone to emerge from or retreat to. Hackers are part of the system, dependent on it, and aware of its non-disposable nature. In evolutionary terms, hacking is a parasitic strategy: weaken the host just enough to feed off it, but not enough to kill it.
Breaching computer systems is of course the classic example. Another example is figuring out hacks to fall asleep faster. A third is coming up with a new traffic pattern to reroute traffic around a temporary construction site.
- In our first example, the hacker has discovered and thought through the implications of a particular feature of a computer system more thoroughly than the original designer, and synthesized a locally rewarding behavior pattern: an exploit.
- In our second example, the body-hacker has figured out a way to manipulate sleep neurochemistry in a corner of design space that was never explored by the creeping tendrils of evolution, because there was never any corresponding environmental selection pressure.
- In our third example, the urban planner is creating a temporary hack in service of long-term systemic improvement. The hacker has been co-opted and legitimized by a subsuming system that has enough self-awareness and foresight to see past the immediate dip in conceptual integrity.
Urban planning is a better prototypical example of hacking to think about than software itself, since it is so visual. Even programmers and UX designers resort to urban-planning metaphors to talk about complicated software ideas. If you want to ponder examples for some of the abstractions I am talking about here, I suggest you think in terms of city-hacking rather than software hacking, even if you are a programmer.
For the overall vision of hackstability, think about any major urban region with its never-ending construction and infrastructure projects ranging from emergency repairs to new mass-transit or water/sewage projects. If a large city is thriving and persisting, it is likely hackstable. Increasingly, the entire planet is hackstable.
The atomic prototype of hacking is the short-cut. The urban planner has a better map and understands cartography better, but in one small neighborhood, some little kid knows a shorter, undocumented A-to-B path than the planner. Even though the planner laid out the streets in the first place. What’s more, the short-cut may connect points on the map that are otherwise disconnected for non-hackers, because the documented design has no connections between those points.
Disposability and Debt
I got to my definition of hacking after trying to assemble a lot of folk wisdom about programming into a single picture.
The most significant piece for me was Joel Spolsky’s article about things you should never do, and in particular his counter-argument to Frederick Brooks’ famous advice to “plan to throw one away”: the idea in software architecture that you should build the first version of a piece of software expecting to throw it away and start again from scratch.
Spolsky offers practical reasons why this is a bad idea, but what I took away from the post was a broader idea: that it is increasingly a mistake to treat any technology as disposable. Technology is fundamentally not a do-over game today. It is a cumulative game. This has been especially true in the last century, as all technology infrastructure has gotten increasingly inter-connected and temporally layered into techno-geological strata of varying levels of antiquity. We should expect to see disciplines emerge with labels like techno-geography, techno-geology and techno-archaeology. Some layers are functional (“techno-geologically active”), while others are compressed garbage, like the sunken Gold Rush era ships on which parts of San Francisco are built.
Non-disposability, along with global functional and temporal connectedness, means technology is a single evolving entity with a memory. For such systems, the notion of technical debt, due to Ward Cunningham, becomes important:
“Shipping first time code is like going into debt. A little debt speeds development so long as it is paid back promptly with a rewrite… The danger occurs when the debt is not repaid. Every minute spent on not-quite-right code counts as interest on that debt. Entire engineering organizations can be brought to a stand-still under the debt load of an unconsolidated implementation.”
For me, the central implicit idea in the definition is the notion of disposability. Everything hinges on whether or not you can throw your work away and move on. We are so used to dealing with disposable things in everyday consumer life that we don’t realize that much of our technological infrastructure is in fact non-disposable.
How ubiquitous is non-disposability? I am tempted to conclude that almost nothing of significance is disposable. By that I mean, of course, disposable with insignificant negative consequences; anything can be thrown away if you are willing to pay the costs.
Your body, New York City and the English language are obviously non-disposable. Reacting to problems with those things and trying to “do over” is either impossible or doomed. The first is impossible to even do badly today. You can try to “do over” New York City, but you’ll get something else that will probably not serve. If you try to do-over English, you get Esperanto.
Obviously, the bigger the system and the more interdependent it is with its technological environment, the harder it is to do it over. The dynamics of technical debt naturally lead us to non-disposability, but let’s make the connection explicit: the patchwork of hacks and workarounds in a complex system represents, as Spolsky argues, value that should not be thrown away.
Quantified Technical Debt and Metis
If a system must last indefinitely, cutting corners in an initial design leads to a necessary commitment to “doing it right” later. This deferral is due to a lack of both resources and information in the initial design: you lack the money/time and the information to do it right.
When a new contingency arises, some of the missing information becomes available. But resources do not generally become available at the same time, so the design must be adapted via cheaper improvisation to deal with the contingency — a hack — and the “real” solution deferred. A hack turns an unquantified bit of technical debt into a quantified bit: when you have a hack, you know the principal, interest rate and so forth.
It is this quantified technical debt that is the interesting quantity. The designer’s original vague sense of incompleteness and inadequacy becomes sharply defined once a hack has exposed a failing, illuminated its costs, and suggested a more permanent solution. The new information revealed by the hack is, by definition, not properly codified and embedded in the system itself, so most of it must live in a human brain as tacit design intelligence (the rest lives in the hack itself, representing the value that Spolsky argues should not be thrown away).
When you have a complex and heavily-used, but slowly-evolving technology, this tacit knowledge accumulating in the heads of hackers constitutes what James Scott calls metis. Distributed and contentious barbarian intelligence. It can only be passed on from human to human via apprenticeship, or inform a risky and radical redesign that codifies and embeds it into a new version of the system itself. The longer you wait, the more the debt compounds, increasing risk and the cost of the eventual redesign.
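To see what “quantified” buys you, here is a minimal sketch in Python (the hack, its numbers, and its rates are all invented for illustration): a hack converts a vague sense of “something is wrong here” into a principal, the cost of the real fix today, and an interest rate, the speed at which that cost compounds as the system co-evolves around the workaround.

```python
# Toy model of quantified technical debt (illustrative numbers only).
from dataclasses import dataclass

@dataclass
class Hack:
    name: str
    principal: float      # cost of the "real" fix at the time of the hack
    interest_rate: float  # yearly rate at which that cost compounds as
                          # the system co-evolves around the workaround

    def debt_after(self, years: int) -> float:
        """Cost of the deferred real fix after `years` of co-evolution."""
        return self.principal * (1 + self.interest_rate) ** years

# A single deferred fix: cheap to do right away, expensive a decade later.
reroute = Hack("temporary traffic reroute", principal=10_000, interest_rate=0.15)

for years in (0, 5, 10):
    print(f"after {years:2d} years: real fix costs ~{reroute.debt_after(years):,.0f}")
```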
Technological Deficit Economics
This compounding rate is very high because the longer a system persists, the more tightly it integrates into everything around it, causing co-evolution. So eventually replacing even a small hack in a relatively isolated system with a better solution turns into a planet-wide exercise, as we learned during Y2K.
Isolated technologies also get increasingly situated over time, no matter how encapsulated they appear at conception, so that what looks like a “do-over” from the point of view of a single subsystem (say Linux) looks like a hack with respect to larger, subsuming systems (like the Internet). So debt accumulates at levels of the system that no individual agent is nominally responsible for. This is collective, public technical debt.
Most complex technologies incur quantified technical debt faster than they can pay it off, which makes them effectively non-disposable. This includes non-software systems. Sometimes the debt can be ignored because it ends up being an economic externality (pollution for automobiles, for instance), but the more all-encompassing the system gets, the less room there is for anything to be an unaccounted-for externality.
The regulatory environment can be viewed as a co-evolving part of technology, subject to the same rules. The US Constitution and the tax code, for instance, started off as high-conceptual-integrity constructs that have been endlessly hacked through case law and tax-code exceptions, to the point that they are now effectively non-disposable. It is impossible, as a practical matter, to even conceptualize a “Constitution 2.0” that cleanly accommodates the accumulated wisdom in case law.
In general, following Spolsky’s logic through to its natural conclusion, it is only worth throwing a system away and building a new one from scratch when it is on the very brink of collapse under the weight of its hacks (and the hackers on the brink of retirement or death, threatening to take the accumulated metis with them). The larger the system, the costlier the redesign, and the more it makes sense to let more metis accumulate.
Beyond a certain critical scale, you can never throw a system away because there is no hope of ever finding the wealth to pay off the accumulated technical debt via a new design. The redesign itself experiences scope creep and spirals out of the realm of human capability.
All you can hope for is to keep hacking and extending its life in increasingly brittle ways, and hope to avoid a big random event that triggers collapse. This is technological deficit economics.
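A back-of-the-envelope simulation shows the deficit dynamic (all numbers invented, purely illustrative): when yearly repayment covers less than the interest plus the debt added by new hacks, total debt compounds without bound, and the from-scratch redesign recedes permanently out of reach.

```python
# Toy simulation of technological deficit economics (hypothetical numbers).
# Each year, existing debt compounds, new hacks add principal, and a fixed
# repayment effort pays some of it down. If repayment < interest + new debt,
# the system runs a deficit and becomes effectively non-disposable.

def simulate(debt=100.0, interest=0.10, new_hacks=20.0, repayment=25.0, years=30):
    for year in range(1, years + 1):
        debt = debt * (1 + interest) + new_hacks - repayment
        if year % 10 == 0:
            print(f"year {year}: accumulated debt ~{debt:,.0f}")

simulate()  # repayment never catches up; waiting makes the redesign costlier
```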
Now extend the argument to all of civilization as a single massive technology that can never be thrown away, and you can make sense of the idea of hackstability as an alternative to collapse. Maybe if you keep hacking away furiously enough, and grabbing improvements where possible, you can keep a system alive indefinitely, or at least steer it to a safe soft-landing instead of a crash-landing.
Hacker Folk Theorems
With disposability as the anchor element, we can try to arrange a lot of the other well-known pieces of hacker folk-wisdom into a more comprehensive jigsaw puzzle view.
The pieces of wisdom are precise enough that I think of them as folk theorems (item 5 actually suggests a way to model hackstability mathematically as a sort of hydrostatic — bug-o-static? — equilibrium; see the sketch after the list).
- “Given enough eyeballs, all bugs are shallow.” — Linus’ Law, formulated by Eric S. Raymond
- Perspective is worth 80 IQ points. — Alan Kay
- Fixing a bug is harder than writing the code. — not sure who first said this.
- Reading code is harder than writing code. — Joel Spolsky
- Fixing a bug introduces 2 more. — not sure where I first encountered this quote.
- Release early, release often. — Eric S. Raymond
- Plan to throw one away. — Frederick Brooks, The Mythical Man-Month
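Here is the crude “bug-o-static” sketch promised above, combining items 1 and 5 (every rate is invented): new development pours bugs in, eyeballs fix a fraction of the open ones each release, and every fix spawns some new bugs. If each fix spawns fewer than one new bug, the open-bug count settles at a plateau where inflow balances net outflow; at the folk-theorem value of two new bugs per fix, it diverges instead.

```python
# A crude "bug-o-static" equilibrium model (all rates invented).
# detection: fraction of open bugs fixed per release (more eyeballs, higher).
# spawn: new bugs introduced per fix (the folk theorem says 2, which
# diverges; any value below 1 settles into a plateau).

def bug_equilibrium(inflow=50.0, detection=0.30, spawn=0.5, releases=40):
    open_bugs = 0.0
    for r in range(1, releases + 1):
        fixes = detection * open_bugs
        open_bugs += inflow - fixes * (1 - spawn)
        if r % 10 == 0:
            print(f"release {r}: ~{open_bugs:.0f} open bugs")
    print(f"predicted plateau: ~{inflow / (detection * (1 - spawn)):.0f}")

bug_equilibrium()
```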
Take a shot at using these ideas to put together a picture of how complex technological systems evolve, using the definition of hacking that I offered and the idea of technical debt as the anchor element (I started elaborating on this full picture, but it threatened to run to another 5000 words).
When you’re done, you may want to watch (or rewatch) Alan Kay’s talk, Programming and Scaling, which I’ve referenced before.
I don’t know of any systematic studies of the truth of these folk-wisdom phenomena (I think I saw one study of the bugs-eyeballs conjecture that concluded it was somewhat shaky, but I can’t find the reference). But I have anecdotal evidence from my own limited experience with engineering, and somewhat more extensive experience as a product manager, that all the statements have significant substance behind them.
So these are not casual, throwaway remarks. Each can sustain hours of thoughtful and stimulating debate between any two people who’ve worked in technology.
The Ubiquity of Hacking
At this point, it is useful to look for more examples that fit the definition of hacking I offered. The following seem to fit:
- The pick-up artist movement should really be called female-brain hacking (or alternatively, alpha-status hacking).
- Disruptive technologies represent market-hacking.
- Lifestyle design can be viewed as standard-of-living hacking.
- One half of the modern smart city/neo-urbanist movement can be understood as city-hacking (“smart cities” includes clean-sheet high-modernist smart cities in China, but let’s leave those out).
- All of politics is culture hacking.
- Guerrilla warfare and terrorism represent military hacking.
- Almost the entire modern finance industry is economics-hacking.
- Most intelligence on both sides of any adversarial human table (VCs vs. entrepreneurs, interviewers vs. interviewees) is hacker intelligence.
- Fossil fuels represent energy hacking.
Looking at these, it strikes me that not all examples are equally interesting. Anything that has the nature of a human-vs.-human arms race (including the canonical black-hat vs. white-hat information security race and PUA) is actually a pretty wimpy example of hackstability dynamics.
The really interesting cases are the ones where one side is a human intelligence, and the other side is a non-human system that simply gets more complex and less disposable over time.
But interesting or not, all these are really interconnected patterns of hacking in what is increasingly Planet Hacker.
The Third Future
So what is the hackstable future? What reason is there to believe that hacking can keep up with the downward pull of entropy? I am not entirely sure. The way big old cities seem to miraculously survive indefinitely on the brink of collapse gives me some confidence that hackstability is a meaningful concept.
Collapse is the easiest of the three scenarios to understand, since it requires no new concepts. If the rate of entropy accumulation exceeds the rate at which we can keep hacking, we may get sudden collapse.
The Singularity concept relies on a major unknown-unknown type hypothesis: self-improving AI. A system that feeds on entropy rather than being dragged down by it. This is rather like Taleb’s notion of anti-fragility, so I am assuming there are at least a few credible ideas to be discovered here. These I have collectively labeled autopoietic lift. Anti-gravity for complex systems that are subject to accumulating entropy, but are (thermodynamically) open enough that they might still evolve in complexity. So far, we’ve been experiencing two centuries of lift as the result of a major hack (fossil fuels). It remains to be seen whether we can get to sustainable lift.
Hackstability is the idea that we’ll get enough autopoietic lift through hacks and occasional advances in anti-fragile system design to just balance entropy gravity, but not enough to drive exponential self-improvement.
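To put the three futures in a single toy frame (my own hypothetical formalization with invented coefficients, not anything from the authors cited above): let a system’s complexity C evolve as lift minus entropic gravity, and let the shape of the lift term pick the future.

```python
# Toy dynamics for the three futures (hypothetical, uncalibrated).
# Complexity C changes as lift(C) minus entropic gravity proportional to C.

def evolve(lift, c=100.0, gravity=0.05, dt=0.1, steps=500):
    for _ in range(steps):
        c = max(c + dt * (lift(c) - gravity * c), 0.0)
    return c

futures = {
    "singularity (self-amplifying lift)": lambda c: 0.08 * c,  # lift scales with C
    "hackstability (steady hacking)":     lambda c: 5.0,       # constant stream of hacks
    "collapse (hacking dries up)":        lambda c: 0.0,       # no lift, entropy wins
}

for label, lift in futures.items():
    print(f"{label}: C ~ {evolve(lift):,.1f}")
```

The sketch only illustrates that hackstability is a knife-edge regime: constant-effort hacking pins C at lift/gravity, while self-amplifying or vanishing lift tips the system into one of the other two futures.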
Viewed another way, it is a hydrostatic balance between global hacker metis (barbarian intelligence) and codified systemic intelligence (civilizational intelligence). In this view, hackstability is the slow dampening of the creative-destruction dialectic between barbarian and civilized modes of existence that has been going on for a few thousand years. If you weaken the metis enough, the system collapses. If you strengthen it too much, again it collapses (a case of the hackers shorting the system as predators rather than exploiting it parasitically).
I don’t yet know whether these are well-posed concepts.
I am beginning to see the murky outlines of a clean evolutionary model that encompasses all three futures though. One with enough predictive power to allow coarse computation of the relative probabilities of the three futures. This is the idea I’ve labeled the Electric Leviathan, and chased for several years. But it remains ever elusive. Each time I think I’ve found the right way to model it, it turns out I’ve just missed my mark. Maybe the idea is my white whale and I’ll never manage a digital-age update to Hobbes.
So I might be seeing things. In a way, my own writing is a kind of idea-hacking: using local motifs to illuminate some sort of subtlety in a theme and invalidate some naive Grand Unified Theory without offering a better candidate myself. Maybe all I can hope for is to characterize the Electric Leviathan via a series of idea hacks without ever adequately explaining what I mean by the phrase.