Hacking the Non-Disposable Planet


Sometime in the last few years, apparently everybody turned into a hacker. Besides computer hacking, we now have lifehacking (using tricks and short-cuts to improve everyday life), body-hacking (using sensor-driven experimentation to manipulate your body), college-hacking (students who figure out how to get a high GPA without putting in the work) and career-hacking (getting ahead in the workplace without “paying your dues”). The trend shows no sign of letting up. I suspect we’ll soon see the term applied in every conceivable domain of human activity.

I was initially very annoyed by what I saw as a content-free overloading of the term, but the more I examined the various uses, the more I realized that there really is a common pattern to everything that is being subsumed by the term hacking. I now believe that the term hacking is not over-extended; it is actually under-extended. It should be applied to a much bigger range of activities, and to human endeavors on much larger scales, all the way up to human civilization.

I’ve concluded that we’re reaching a technological complexity threshold where hacking is going to be the main mechanism for the further evolution of civilization. Hacking is part of a future that’s neither the exponentially improving AI future envisioned by Singularity types, nor the entropic collapse envisioned by the Collapsonomics types. It is part of a marginally stable future where the upward lift of diminishing-magnitude technological improvements and hacks just balances the downward pull of entropic gravity, resulting in an indefinite plateau.

I call this possible future hackstability.

Hacking as Anti-Refinement

Hacking is the term we reach for when trying to describe an intelligent, but rough-handed and expedient behavior aimed at manipulating a complicated reality locally for immediate gain. Two connotations of the word hack, rough-hewing and mediocrity, apply to some extent.

I’ll offer this rather dense definition that I think covers the phenomenology, and unpack it through the rest of the post.

Hacking is a pattern of local, opportunistic manipulation of a non-disposable complex system that causes a lowering of its conceptual integrity, creates systemic debt and moves intelligence from systems into human brains.

By this definition, hacking is anti-refinement. It is therefore a barbarian mode of production because it moves intelligence out of systems and into human brains, making those human brains less interchangeable. Yet, it is not the traditional barbarian mode of predatory destruction of a settled civilization from outside its periphery.

Technology has now colonized the planet, and there is no “outside” for anyone to emerge from or retreat to. Hackers are part of the system, dependent on it, and aware of its non-disposable nature. In evolutionary terms, hacking is a parasitic strategy: weaken the host just enough to feed off it, but not enough to kill it.

Breaching computer systems is of course the classic example. Another example is figuring out hacks to fall asleep faster. A third is coming up with a new traffic pattern to reroute traffic around a temporary construction site.

  • In our first example, the hacker has discovered and thought through the implications of a particular feature of a computer system more thoroughly than the original designer, and synthesized a locally rewarding behavior pattern: an exploit.
  • In our second example, the body-hacker has figured out a way to manipulate sleep neurochemistry in a corner of design space that was never explored by the creeping tendrils of evolution, because there was never any corresponding environmental selection pressure.
  • In our third example, the urban planner is creating a temporary hack in service of long-term systemic improvement. The hacker has been co-opted and legitimized by a subsuming system that has enough self-awareness and foresight to see past the immediate dip in conceptual integrity.

Urban planning is a better prototypical example than software itself to think about when talking about hacking, since it is so visual. Even programmers and UX designers themselves resort to urban planning metaphors to talk about complicated software ideas. If you want to ponder examples for some of the abstractions I am talking about here, I suggest you think in terms of city-hacking rather than software hacking, even if you are a programmer.

For the overall vision of hackstability, think about any major urban region with its never-ending construction and infrastructure projects, ranging from emergency repairs to new mass-transit or water/sewage projects. If a large city is thriving and persisting, it is likely hackstable. Increasingly, the entire planet is hackstable.

The atomic prototype of hacking is the short-cut.  The urban planner has a better map and understands cartography better, but in one small neighborhood, some little kid knows a shorter, undocumented A-to-B path than the planner. Even though the planner laid out the streets in the first place. What’s more, the short-cut may connect points on the map that are otherwise disconnected for non-hackers, because the documented design has no connections between those points.
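To make the short-cut concrete, here is a minimal sketch in Python, using a hypothetical four-node street grid (the node names and the undocumented edge are invented for illustration). The planner’s map and the kid’s territory are simply two different graphs, and the hack is the edge that exists in only one of them:

```python
from collections import deque

def shortest_path(graph, start, goal):
    """Breadth-first search over an adjacency dict; returns a path or None."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# The documented design: A reaches D only the long way around.
documented = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}

# The territory as the kid knows it: the same grid plus one undocumented
# hole in a fence between A and D.
actual = {node: list(nbrs) for node, nbrs in documented.items()}
actual["A"].append("D")
actual["D"].append("A")

print(shortest_path(documented, "A", "D"))  # ['A', 'B', 'C', 'D']: the planner's route
print(shortest_path(actual, "A", "D"))      # ['A', 'D']: the short-cut
```

Delete the B-C edge from both graphs and the documented route disappears entirely, while the short-cut still works: the hack connects points that the design leaves disconnected.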

Disposability and Debt

I got to my definition of hacking after trying to assemble a lot of folk wisdom about programming into a single picture.

The most significant piece for me was Joel Spolsky’s article about things you should never do, and in particular his counter-argument to Frederick Brooks’ famous advice to “plan to throw one away” (the idea in software architecture that the first version of a piece of software should be thrown away and rebuilt from scratch).

Spolsky offers practical reasons why this is a bad idea, but what I took away from the post was a broader idea: that it is increasingly a mistake to treat any technology as disposable. Technology is fundamentally not a do-over game today. It is a cumulative game. This has been especially true in the last century, as all technology infrastructure has gotten increasingly inter-connected and temporally layered into techno-geological strata of varying levels of antiquity. We should expect to see disciplines emerge with labels like techno-geography, techno-geology and techno-archaeology. Some layers are functional (“techno-geologically active”), while others are compressed garbage, like the sunken Gold Rush era ships on which parts of San Francisco are built.

Non-disposability along with global functional and temporal connectedness means technology is a single evolving entity with a memory. For such systems the notion of technical debt, due to Ward Cunningham, becomes important:

“Shipping first time code is like going into debt. A little debt speeds development so long as it is paid back promptly with a rewrite… The danger occurs when the debt is not repaid. Every minute spent on not-quite-right code counts as interest on that debt. Entire engineering organizations can be brought to a stand-still under the debt load of an unconsolidated implementation.”

For me, the central implicit idea in the definition is the notion of disposability. Everything hinges on whether or not you can throw your work away and move on. We are so used to dealing with disposable things in everyday consumer life that we don’t realize that much of our technological infrastructure is in fact non-disposable.

How ubiquitous is non-disposability? I am tempted to conclude that almost nothing of significance is disposable. And by that I mean disposable with insignificant negative consequences of course. Anything can be thrown away if you are willing to pay the costs.

Your body, New York City and the English language are obviously non-disposable. Reacting to problems with those things and trying to “do over” is either impossible or doomed. The first is impossible to even do badly today. You can try to “do over” New York City, but you’ll get something else that will probably not serve. If you try to do-over English, you get Esperanto.

Obviously, the bigger the system and the more interdependent it is with its technological environment, the harder it is to do over. The dynamics of technical debt naturally lead to non-disposability, but let’s make the connection explicit: the patchwork of hacks and workarounds in a complex system is precisely the value that, as Spolsky argues, should not be thrown away.

Quantified Technical Debt and Metis

If a system must last indefinitely, cutting corners in an initial design creates a commitment to “doing it right” later. The deferral happens because an initial design lacks both resources and information: you lack the money/time and the knowledge to do it right.

When a new contingency arises, some of the missing information becomes available. But resources do not generally become available at the same time, so the design must be adapted via cheaper improvisation to deal with the contingency — a hack — and the “real” solution deferred.  A hack turns an unquantified bit of technical debt into a quantified bit: when you have a hack, you know the principal, interest rate and so forth.
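To make the principal-and-interest reading literal, here is a minimal sketch with invented numbers (the 15% “interest rate” and the costs are purely illustrative, not a claim about real projects). The hack tells you what the “real” fix costs today; deferral compounds that cost as the hack co-evolves with everything around it:

```python
def cost_of_real_fix(principal, interest_rate, years_deferred):
    """Cost of doing the 'right' fix after deferring it, with compounding:
    the longer the hack persists, the more tightly it integrates with its
    surroundings, and the dearer the eventual replacement becomes."""
    return principal * (1 + interest_rate) ** years_deferred

print(cost_of_real_fix(100, 0.15, 0))             # 100.0: pay the principal now
print(round(cost_of_real_fix(100, 0.15, 5), 1))   # 201.1: the debt has doubled
print(round(cost_of_real_fix(100, 0.15, 20), 1))  # 1636.7: redesign dwarfs the system
```

The exponent is the whole story: past some deferral horizon, no plausible budget pays off the principal. That is the non-disposability threshold discussed below.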

It is this quantified technical debt that is the interesting quantity. The designer’s original vague sense of incompleteness and inadequacy becomes sharply defined once a hack has exposed a failing, illuminated its costs, and suggested a more permanent solution. The new information revealed by the hack is, by definition, not properly codified and embedded in the system itself, so most of it must live in a human brain as tacit design intelligence (the rest lives in the hack itself, representing the value that Spolsky argues should not be thrown away).

When you have a complex and heavily-used, but slowly-evolving technology, this tacit knowledge accumulating in the heads of hackers constitutes what James Scott calls metis. Distributed and contentious barbarian intelligence. It can only be passed on from human to human via apprenticeship, or inform a risky and radical redesign that codifies and embeds it into a new version of the system itself. The longer you wait, the more the debt compounds, increasing risk and the cost of the eventual redesign.

Technological Deficit Economics

This compounding rate is very high because the longer a system persists, the more tightly it integrates into everything around it, causing co-evolution. So eventually replacing even a small hack in a relatively isolated system with a better solution turns into a planet-wide exercise, as we learned during Y2K.

Isolated technologies also get increasingly situated over time, no matter how encapsulated they appear at conception, so that what looks like a “do-over” from the point of view of a single subsystem (say Linux) looks like a hack with respect to larger, subsuming systems (like the Internet). So debt accumulates at levels of the system that no individual agent is nominally responsible for. This is collective, public technical debt.

Most complex technologies incur quantified technical debt faster than they can pay it off, which makes them effectively non-disposable. This includes non-software systems. Sometimes the debt can be ignored because it ends up being an economic externality (pollution for automobiles, for instance), but the more all-encompassing the system gets, the less room there is for anything to be an unaccounted-for externality.

The regulatory environment can be viewed as a co-evolving part of technology, subject to the same rules. The US Constitution and the tax code, for instance, started off as high-conceptual-integrity constructs but have been endlessly hacked, through case law and tax-code exceptions, to the point that they are now effectively non-disposable. It is impossible, as a practical matter, to even conceptualize a “Constitution 2.0” that cleanly accommodates the accumulated wisdom in case law.

In general, following Spolsky’s logic through to its natural conclusion, it is only worth throwing a system away and building a new one from scratch when it is on the very brink of collapse under the weight of its hacks (and the hackers on the brink of retirement or death, threatening to take the accumulated metis with them). The larger the system, the costlier the redesign, and the more it makes sense to let more metis accumulate.

Beyond a certain critical scale, you can never throw a system away because there is no hope of ever finding the wealth to pay off the accumulated technical debt via a new design. The redesign itself experiences scope creep and spirals out of the realm of human capability.

All you can hope for is to keep hacking and extending its life in increasingly brittle ways, and hope to avoid a big random event that triggers collapse. This is technological deficit economics.

Now extend the argument to all of civilization as a single massive technology that can never be thrown away, and you can make sense of the idea of hackstability as an alternative to collapse. Maybe if you keep hacking away furiously enough, and grabbing improvements where possible, you can keep a system alive indefinitely, or at least steer it to a safe soft-landing instead of a crash-landing.

Hacker Folk Theorems

With disposability as the anchor element, we can try to arrange a lot of the other well-known pieces of hacker folk-wisdom into a more comprehensive jigsaw puzzle view.

The pieces of wisdom are actually precise enough that I think of them as folk theorems (item 5 actually suggests a way to model hackstability mathematically, as a sort of hydrostatic — bug-o-static? — equilibrium; see the sketch below the list).

  1. “Given enough eyeballs, all bugs are shallow.” — Linus’ Law, formulated by Eric S. Raymond
  2. Perspective is worth 80 IQ points. — Alan Kay
  3. Fixing a bug is harder than writing the code. — not sure who first said this.
  4. Reading code is harder than writing code. — Joel Spolsky
  5. Fixing a bug introduces 2 more. — not sure where I first encountered this quote.
  6. Release early, release often. — Eric S. Raymond
  7. Plan to throw one away — Frederick Brooks, The Mythical Man-Month

Take a shot at using these ideas to put together a picture of how complex technological systems evolve, using the definition of hacking that I offered and the idea of technical debt as the anchor element (I started elaborating on this full picture, but it threatened to run to another 5000 words).
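As a small down payment on that picture, here is the bug-o-static equilibrium from folk theorem 5 as a rough simulation, with invented rates: entropy files new bugs at a constant rate, eyeballs fix a fraction of open bugs each period (theorem 1), and each fix spawns new bugs at some rate (theorem 5; taken literally, at two new bugs per fix, the model diverges, so read the theorem as hyperbole):

```python
def simulate(inflow=10.0, fix_fraction=0.5, spawn_per_fix=0.5, steps=1000):
    """Bug count over time: constant entropic inflow, proportional fixing,
    and each fix spawning spawn_per_fix new bugs."""
    bugs = 0.0
    for _ in range(steps):
        fixes = fix_fraction * bugs
        bugs += inflow - fixes + spawn_per_fix * fixes
    return bugs

# spawn_per_fix < 1: the count plateaus at inflow / (fix_fraction * (1 - spawn_per_fix)).
print(simulate(spawn_per_fix=0.5))             # 40.0: a hackstable equilibrium
# spawn_per_fix > 1: every fix makes things worse; the count grows without bound.
print(simulate(spawn_per_fix=1.2, steps=100))  # diverges: collapse dynamics
```

The interesting quantity is the phase boundary at one spawned bug per fix: hackstability in miniature.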

When you’re done, you may want to watch (or rewatch) Alan Kay’s talk, Programming and Scaling, which I’ve referenced before.

I don’t know of any systematic studies of the truth of these folk-wisdom phenomena (I think I saw one study of the bugs-eyeballs conjecture that concluded it was somewhat shaky, but I can’t find the reference). But I have anecdotal evidence from my own limited experience with engineering, and somewhat more extensive experience as a product manager, that all the statements have significant substance behind them.

So these are not casual, throwaway remarks. Each can sustain hours of thoughtful and stimulating debate between any two people who’ve worked in technology.

The Ubiquity of Hacking

At this point, it is useful to look for more examples that fit the definition of hacking I offered. The following seem to fit:

  1. The pick-up artist movement should really be called female-brain hacking (or alternatively, alpha-status hacking).
  2. Disruptive technologies represent market-hacking.
  3. Lifestyle design can be viewed as standard-of-living hacking.
  4. One half of the modern smart city/neo-urbanist movement can be understood as city-hacking (“smart cities” includes clean-sheet high-modernist smart cities in China, but let’s leave those out).
  5. All of politics is culture hacking.
  6. Guerrilla warfare and terrorism represent military hacking.
  7. Almost the entire modern finance industry is economics-hacking.
  8. Most intelligence on both sides of any adversarial human table (VCs vs. entrepreneurs, interviewers vs. interviewees) is hacker intelligence.
  9. Fossil fuels represent energy hacking.

Looking at these, it strikes me that not all examples are equally interesting. Anything that has the nature of a human-vs.-human arms race (including the canonical black-hat vs. white-hat information security race and PUA) is actually a pretty wimpy example of hackstability dynamics.

The really interesting cases are the ones where one side is a human intelligence, and the other side is a non-human system that simply gets more complex and less disposable over time.

But interesting or not, all these are really interconnected patterns of hacking in what is increasingly Planet Hacker.

The Third Future

So what is the hackstable future? What reason is there to believe that hacking can keep up with the downward pull of entropy? I am not entirely sure. The way big old cities seem to miraculously survive indefinitely on the brink of collapse gives me some confidence that hackstability is a meaningful concept.

Collapse is the easiest of the three scenarios to understand, since it requires no new concepts. If the rate of entropy accumulation exceeds the rate at which we can keep hacking, we may get sudden collapse.

The Singularity concept relies on a major unknown-unknown type hypothesis: self-improving AI. A system that feeds on entropy rather than being dragged down by it.  This is rather like Taleb’s notion of anti-fragility, so I am assuming there are at least a few credible ideas to be discovered here. These I have collectively labeled autopoietic lift.  Anti-gravity for complex systems that are subject to accumulating entropy, but are (thermodynamically) open enough that they might still evolve in complexity. So far, we’ve been experiencing two centuries of lift as the result of a major hack (fossil fuels). It remains to be seen whether we can get to sustainable lift.

Hackstability is the idea that we’ll get enough autopoietic lift through hacks and occasional advances in anti-fragile system design to just balance entropy gravity, but not enough to drive exponential self-improvement.
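I don’t have a real model, but a toy dynamical system (functional forms and numbers invented purely for illustration) shows the three futures as three parameter regimes: complexity feels a downward entropic pull proportional to its size, and an upward lift from hacking that saturates, because hacks give diminishing returns. Self-improving AI would be lift that does not saturate.

```python
def trajectory(lift_ceiling, gravity=0.1, steps=2000, s=1.0):
    """Evolve complexity s under saturating hack-lift and a linear entropy pull."""
    for _ in range(steps):
        lift = lift_ceiling * s / (1.0 + s)  # diminishing returns on hacks
        s = max(0.0, s + lift - gravity * s)
    return round(s, 3)

print(trajectory(lift_ceiling=0.05))  # 0.0: lift < gravity everywhere, collapse
print(trajectory(lift_ceiling=0.15))  # 0.5: lift just balances gravity, hackstability
# Replace the saturating lift with lift = c * s for some c > gravity, and s
# grows exponentially: the Singularity regime.
```

Hackstability is the middle regime: a fixed point, not a growth curve.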

Viewed another way, it is a hydrostatic balance between global hacker metis (barbarian intelligence) and codified systemic intelligence (civilizational intelligence). In this view, hackstability is the slow dampening of the creative-destruction dialectic between barbarian and civilized modes of existence that has been going on for a few thousand years. If you weaken the metis enough, the system collapses. If you strengthen it too much, again it collapses (a case of the hackers shorting the system as predators rather than exploiting it parasitically).

I don’t yet know whether these are well-posed concepts.

I am beginning to see the murky outlines of a clean evolutionary model that encompasses all three futures though. One with enough predictive power to allow coarse computation of the relative probabilities of the three futures. This is the idea I’ve labeled the Electric Leviathan, and chased for several years. But it remains ever elusive.  Each time I think I’ve found the right way to model it, it turns out I’ve just missed my mark. Maybe the idea is my white whale and I’ll never manage a digital-age update to Hobbes.

So I might be seeing things. In a way, my own writing is a kind of idea-hacking: using local motifs to illuminate some sort of subtlety in a theme and invalidate some naive Grand Unified Theory without offering a better candidate myself. Maybe all I can hope for is to characterize the Electric Leviathan via a series of idea hacks without ever adequately explaining what I mean by the phrase.




Comments

  1. Steve Hoover says

    Venkat, I truly love how deeply you think about things and your undying urge to create a theory to model everything. I mean that. But perhaps it is just as simple as this: the 2nd law of thermodynamics is true, and if you draw the system boundary large enough, disorder never decreases. Doesn’t this explain all of this? Hacks are simply local ways to decrease disorder but imply an increase in disorder elsewhere. And they are favorable because they exploit the structure of the larger hack put in place to decrease entropy locally. Complex technical systems are all about local decreases in disorder while pushing the “waste heat” elsewhere, and so are just hacks themselves, as you point out about fossil-fuel energy systems. So everything is a hack. It’s fractal: local entropy-decreasing endeavours. A hack is just an exploit of a larger-scale hack. For those large-scale hacks which have pushed back the system boundary for local entropy decreases, any attempt at a “mulligan” or “do-over” is just too energetically unfavorable, due to the energy expenditure needed to recreate the ever-increasing local order with no real gain. Make sense?

    • Exactly. This is all about just working out the implications of the 2nd law, loosely and metaphorically applied. I am totally a believer in the “everything is a hack” axiom. Some things just exhibit their true nature more clearly on the surface.

      It is also interesting to work out the interplay of the platonic universes in our heads, where we construct idealized systems with conceptual integrity, and the entropic universe onto which those idealizations are projected. The sociology of engineering is all about the discourses around this process. The hacker sensibility is to avoid getting dazzled by the false sense of security that can be found in the platonic world.

      • Alexander Boland says

        But is the universe inside our heads entirely Platonic? Platonic to me implies pure reductions–we have those, but we also have narratives. My thought on narratives is that they’re actually anti-fragile–they’re a loose network of interacting paradigms that have additional space in which to plug in new paradigms (somewhere in this are second-order effects from its basic paradigms interacting with each other). A bit confusing, but I’d bring it back to thinking about how a novel has very little concrete “information”, and yet we can picture an entire world. Hogwarts *looks* and *feels* like something to me because it’s pluggable–I can work with the empty space between its semantic denotations to build an image of a castle in the middle of nowhere.

        Anyway, that was a bit of a tangent, but it came to mind because for a while I’ve been convinced that intertextuality is a form of negentropy–which helps elucidate a lot of problems in interactive storytelling. Also because I wonder what implications it has for knowledge and by extension “hackstability”. John Boyd had the insight to hypothesize that knowledge itself is a game of entropy no different than what was discussed above.

        For that matter, I would say that this is where most “lifehackers” meet their downfall. I always sense a bit of a perverse desire for a set of immortal cheap tricks. I have them too, of course; we all have moments where, out of fear, we want to grasp onto something permanent. I guess this is what Taleb means by saying that religion can protect us from irrational and dangerous ideas.

        • I would agree with the idea that thinking is non-platonic in general (that was the point of the construct I labeled ‘Freytag Staircase’ in Tempo).

          Design, broadly understood, is an entropic creative-destructive dynamic (a.k.a. Boyd’s famous snowmobile). But individual designers are often not aware of, or not voluntarily engaged in, this dialectic. Instead, they turn ‘snowmobile’ into a platonic abstraction and use that as a reference against which to judge the conceptual integrity of realizations of their snowmobile design. This is especially the case if they have strongly classicist aesthetics.

          I find many chefs to have this mindset. To them, there is one classic, “right” way to do a traditional dish, and they become purists around that platonic recipe. Omelets, ironically, seem to attract this kind of aesthetic deification (as in phrases like “classic French omelet” or “the perfect omelet is…”). Curiously, omelets are also the snowmobile of cooking (“you cannot make an omelet without breaking eggs”… eggs being a different kind of platonic ideal…)

        • “hypothesize that knowledge itself is a game of entropy”

          dovetails nicely with

          “Knowledge is a ship we must constantly rebuild while at sea”
          – Otto Neurath

      • “everything is a hack”

        Is that to say you view biological living systems as the ultimate outermost envelope of all 2nd-law hacking, and that organic cellular autopoiesis is the fractal penny in the currency that ultimately drives all other subsumed/embedded/embodied entropic hacks?

        • Well, the selfish gene is the better penny unit, but basically, yeah.

          • I tend to think of “the selfish gene” as a particular instantiation of a more fundamental phenomenological penny. That being, standing waves of pure self-replicating probability within any given environmentally symbiotic context.

            I like to use autopoiesis as a more accessible stand-in for that wordy abstraction. I find this metaphor more scalable at both sub-cellular and supra-cellular levels of granularity.

            You are right, of course, that “the selfish gene” is far more accessible, due to its widespread usage!

  2. Hacking is often seen as asocial. Yet you appear to view it as essential to our survival.

    • Yes. I ignore the hacker-cracker distinction. It is asocial, but that doesn’t mean it cannot also be essential. A lot of very critical things depend on highly unsociable people.

  3. An excellent definition of hacking, the best I’ve seen.

    “Given enough eyes, all bugs are shallow” is often misunderstood to mean eyes looking at source code. In that sense, the eyes must be backed by a brain which has a sufficiently detailed model of the system, and which can simulate it to the point where it can detect a problem *without the problem actually occurring*. The worst issues are not of the “hey, there’s a missing semicolon” kind, but the “hey, there is an assumption here which is broken in a very rare case which has never yet been seen” kind. (See also: @macroresilience http://www.macroresilience.com/2011/12/29/people-make-poor-monitors-for-computers/ )

    What actually helps Linux and open source is:
    1. the sheer diversity of environments they are deployed in,
    2. the huge number of running instances,
    3. bug reports with higher SNR than those from the general public

    The eyeballs which really matter are the ones which report bugs with sufficient context to make diagnosis possible. IMO much of this benefit can be, and is, reaped by closed-source vendors via automated bug reporting.

    #3 is a watered-down version of one of my favourites, by Brian Kernighan, one of the original Unix neckbeards: “Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.”

    Finally, I cannot extrapolate the existence of cities like London to optimism for a hackstable future. First, there’s survivorship bias: there are a hundred dead cities to every living one. Second, and far more important, cities move at much lower clock speeds than humans, giving us time to react. Computer software and biology can replicate and move at much, much higher clock speeds than humans.

    • “3. bug reports with higher SNR than those from the general public”
      “IMO much of this benefit can and is reaped by closed-source vendors via automated bug reporting.”

      But isn’t this exactly the problem with automated bug reporting, the SNR? Automated systems (although getting better) have insanely high N for the S, and come with very little context.

      Open Source bug reports, more often than not, come from someone who -cares- about the problem being solved. Even if they are non-technical, they are engaged, unlike automated systems. Look into any project’s Bugzilla and you’ll see at least one back-and-forth between a developer and the bugee where more information is solicited and provided (usually information that was not obviously required from the initial problem).

      This is why automated systems will never be as useful: they have no “this is important” filter, and they have no means to follow up with additional information that did not seem relevant at the time.

  4. Good thoughts.

    One of the ideas I found quite interesting was Goldratt’s Strategy and Tactics trees. A hierarchy of “why”s right down to the bottom. A bit too high-modernist, as you might say, but still useful under certain circumstances.

    A minor nitpick with one point here. The idea behind the singularity is one level deeper than self-improving AI. It is understanding general intelligence, and how humans manage to give decent results in multiple domains, well enough to code it in. Once that is done, the AI starts to improve itself.

  5. Hmmmm… No, I don’t think you can beat entropy by hacking. Most hacks come at a cost in complexity, and this “debt” has to be repaid sometime; only a small minority of hacks correspond to a more “enlightened” view of the hacked system, such as to (in the end) lead to a smarter solution.

    • My gut impression of this is that smart hacks (the ones we need) are enlightened, in the sense of embodying a broader understanding.

      More specifically, a smart hack does a little more work than it needs to, because it has a slightly broader context than the specific functionality you are trying to produce, whether that’s through “pride in your work”, mischievous one-upmanship or a desire for a kludgy kind of aesthetic harmony.

      That’s my own experience in making stuff: when I put in a little more effort than is strictly necessary to make stuff fit, and go for a slightly broader perspective, the result of the hacks builds into something more substantial, rather than collapsing under its own weight.

      Of course, that might be because those times when it collapsed under its own weight, I was unable to get the broader perspective, due to other factors…

  6. I wonder how to distinguish between hackstability and collapse at the larger scales. How would a person from the future determine which scenario has happened globally? At the level of business you always see collapse-by-disruption, and Rome has collapsed as an empire but is nearly immortal as a city. There is practically nothing left from the days of the empire except a few carefully ‘refrigerated’ monuments and maybe a pattern of streets in the center.

    So describing the balance between autopoietic lift and entropic gravity seems tricky. For example, if most people keep benefiting from better and better technology, does that count as a singularity if they are poor? Many people who live in favelas have cell phones or even laptops.

    I would be tempted to measure it in GDP per capita, but that seems outdated considering what you’ve written about perspective economics. That suggests that if autopoietic lift wins, we will get some kind of inconceivably weird singularity that might not be smart in the science-fiction AI sense. Do you have any speculations on what such a singularity could look like?

  7. Hans der Hase says

    Hacking (as in “Hacker Ethics”, http://en.wikipedia.org/wiki/Hacker_ethic, minus “computer”) has always been a stabilizing force in human history. Just like weed is a stabilizing force in an ecosystem. “Weed” of course being an anthropogenic category for plants that we don’t want to be part of the (eco)system. Just like some don’t want hackers to be around, misunderstanding their huge (long-term) value for keeping the system in balance.

    Thanks for sharing your (again) deep-thought-out perspective on the inner workings of our world, Venkat.

  8. Some thoughts provoked by a thought-provoking post.

    On hacking:

    An important dynamic to hacking, I feel, is that of gaming the system; of subverting a bit of the ‘typical’/provisioned flow (whether said flow is designed like a drainage system or emergent like a river) to do something else. Therefore, it needs an already existing system or an ongoing flow to survive. The point at which it begins to compete with or supplant either is the point at which it stops being hacking. In turn, this means that we can have a “hacker ethos” approach to complex system creation (rapid-prototype/crowdsource/data-drive/whatever), and we can have below-the-radar hacking (whereby hackers give themselves a leg up on everyone else but without making a big enough impact to be noticed), and the two should not really be conflated. Furthermore, neither should they be confused with ‘patching’ (both in the software and the quilt sense), which is what we do when we have little power over and little knowledge of the full range of conditions within which we are trying to act, and which was the default mode of action before the Scientific Revolution (confusingly, ‘a patch’ and ‘a hack’ can be used interchangeably, but you can’t use ‘a hack’ to hack into the Pentagon). Hacking is deep understanding and surgical subversion. Patching is local improvisation: favela, Système D, jugaad. Like hacking, but not like hacker ethos, patching needs some entropy-lowering processes elsewhere if it is to avoid having a low value ceiling; jugaad doesn’t work if there isn’t a high-precision factory stamping out engineer-designed engine parts somewhere.

    So pick-up artists, lifestyle design, and finance are hacking (which is also why finance is struggling now that it is too big to be a hack). Disruptive technologies, fossil fuels, and smart cities are hacker ethos. Guerrilla warfare, politics, and adversarial tables are patching. Each has its own strengths and weaknesses.

    On disposability:

    I like plant analogies. High-disposable systems are like annual plants. They grow, they flower, they produce a bunch of seeds, and then the main plant dies, and hopefully a few of the scattered seeds can take root and grow from scratch all over again. They’re resilient without having to be hardy (at least one of the seeds should germinate) as well as quick(er) to evolve, but since so much of the plant periodically goes to waste, they are not much for scale and complexity. Peas don’t grow to redwood heights. Given the plagues and wars of most of human history, most of our historical systems (pre-Industrial Revolution) were rather highly disposable in my view. Indeed, it may well be the high disposability of those systems that encourages people to go to war (not much to lose), and why major industrialised powers quickly learnt to stop having total wars with one another.

    Low-disposable systems are like trees. Slow growing and slow to change, they are fine as long as external conditions do not change much. If they do, then good-bye forest. After a while, because of their inability to change or rejuvenate ‘from scratch’, they also start to develop all of the problems of age – cancers, plaques, autoimmune, parasites. Eventually, they succumb. And when they succumb, they fall. And when they fall, they most certainly make a sound. High modernism tries to grow redwoods. Collapse should be a real worry. Also, parasites could be hacking. Cancers, plaques, and autoimmune may well be the body (ok, I’m mixing metaphors now, but hopefully without loss of clarity) trying to patch. Patching a monolithic OS, for instance, produces similar problems.

    The question is whether there is a third way. Something like a potato, where the plant feeds the potato and the potato feeds the plant… Some cycle in which ‘value’ can be stored outside of the system while the system gets periodically disassembled, ‘cleaned’, and re-designed/re-built from scratch. It does not even have to be an ‘all-at-once’ deal; after all, disassembly and reassembly take time. So we would just have to pour-and-store some of the value from some of the system, while the rest keeps working. Like city roads DESIGNED with the knowledge that all of them will HAVE TO BE closed for maintenance – or even complete redirection – at some point.

    • I think we disagree here; see my reply to d3vin. I think patching/exploiting _should_ be conflated, because hacking exists at a level below the codification of the ethos as an externalized system of ethics. Hacker vs. cracker (and your over/under-the-radar distinction, which is related) is a function of extant notions of political legitimacy. But the ethos and sensibility are intrinsic. Whether you are an outlaw or not depends on the context. One person’s terrorist is another’s freedom fighter/Robin Hood. One person’s cop is another person’s mob enforcer in a kleptocracy, etc.

      In a way, by its very methods, hacking (especially computer hacking) becomes defined as a political act, like the use of a gun. Unlike, say, eating a banana, using a gun immediately stresses some political idea of legitimacy. The less notions of ‘state’ have extended into a domain, the more hacking is intrinsically political.

      Pure cases of banal crimes (cracking purely to steal a credit card number or identity, in a way functionally identical to breaking and entering) are a subset that happens to be easy to parse.

      • I do not think legitimacy or ethics plays any part in the distinction I was trying to make. Lifehacking or body-hacking, for instance, would both be hacking to me and not patching. Hacking outfoxes the system, while patching surfs it. It’s a fundamental difference in both approach and result – deliberation vs. improvisation – “where, in the guts, are the hidden weaknesses?” vs. “what can I fashion with what it throws at me?”. What they do have in common is a way of acting orthogonally to a given system’s design, and an impact that is highly dependent on the power of the system to which they are acting orthogonally.

        Likewise, to reiterate, both are also different from hacker ethos (sharing, openness, decentralisation, libertarian individuality), the proponents of which are much more likely to be anti-fragile singularitarians who believe in the transformative power of bottom-up, generative systems (and who want to replace the current systems with them), than they are to be hustling improvisers (patch) or short-term inefficiency exploiters (hack).

  9. Venkatesh, thanks for the post.

    Below are a few excerpts you may enjoy from an essay about the origins of the term “hacker,” by UC Berkeley CS professor, Brian Harvey [1]. (The entire essay is a good read IMHO.)

    -d3v

    [1] http://www.cs.berkeley.edu/~bh/hacker.html

    The concept of hacking entered the computer culture at the Massachusetts Institute of Technology in the 1960s. Popular opinion at MIT posited that there are two kinds of students, tools and hackers. A “tool” is someone who attends class regularly, is always to be found in the library when no class is meeting, and gets straight As. A “hacker” is the opposite: someone who never goes to class, who in fact sleeps all day, and who spends the night pursuing recreational activities rather than studying.

    In 1986, the word “hacker” is generally used among MIT students to refer not to computer hackers but to building hackers, people who explore roofs and tunnels where they’re not supposed to be.

    A hacker is an aesthete. (note: see essay for further elaboration)

    A hacker must be fundamentally an amateur, even though hackers can get paid for their expertise.

    • I’d say hackers are ontogenically ancestral to both amateurs and professionals. They stand prior to that distinction, which is a legalistic one. The primary result of the emergence of the divide is a professional code of honor. IMO this is the deeper cause of the hacker/cracker divide and debate, and why it never reached closure (and never will). Hackers operate by an individualist ethics because they operate at a framing level below the one where ethics get codified.

      I have to develop this idea a bit more.

  10. Have you read Christopher Alexander yet? This post seems to build upon some of his main ideas, especially his theory of design: The best forms evolve incrementally over time, gradually adjusting to correct their imperfections (another way of describing hacks, as you’ve defined them). Alexander mainly wrote about architecture and urban planning, and the city is where his theories are best observed: Urban renewal, a modernist city planning tool, involves clearing away a complex, incrementally-evolved neighborhood, imposing a unified master plan (which usually doesn’t work as well), and imposing legibility in the process. Unified designs are poor substitutes for the collective intelligence that has traditionally produced cities.

    A contemporary thinker who has addressed hacking with respect to infrastructure is Kazys Varnelis, whom you might find interesting. He contends that hacking becomes increasingly necessary as infrastructure ages and heavy construction becomes more difficult due to increased societal complexity. Using technology to optimize our use of existing infrastructural capacity is the primary form of hacking that he imagines. Here’s an excellent summary: http://www.c-lab.columbia.edu/0162.html

    Finally, I wrote something about smart cities on my blog that dovetails with what you wrote about them here (also invoking Christopher Alexander): http://kneelingbus.wordpress.com/2012/03/29/a-smart-city-is-not-a-tree/

  11. Have you read McKenzie Wark’s ‘A Hacker Manifesto’? He says some similar things. http://bit.ly/HLYGXg

  12. this is wild

  13. I could go on for some length on the core of what you’re saying here, but that’d be better over a pint some time, and I largely agree with what you’re saying, as far as it goes. That said, you should note that there are two categorical limits to system continuity: first, the continuance of the system’s dynamic pattern (which is where the kind of hacking and technical-complexity scaling you talk about here comes into play), and second, the continuance of the field over which the pattern operates, which is to say fundamental resource limits. This is a relatively standard collapsonomics critique, but it’s important in the analysis that you’re doing to look at the distinction in failure modalities between system collapse from scalability failure, etc., and system collapse from resource shortage.

    Hacked systems, like any other form of highly efficient system, heavily leverage the shape of their functional environment and as such are subject to sudden hyperdestabilization from relatively small environmental changes — this is separate from the slow drag of technical debt and entropy accumulation, and this is the form of systemic failure we’re worried about within collapsonomics. I’m quite confident in our (in the largest sense) ability to hackstabilize our complex infrastructure, even though there is sometimes cause for alarm around the human externality cost, paid in suffering, of this process. I am profoundly worried about our historical and current inability to respond to resource limits without mass systemic disruption and the accompanying mass death and civilization-level existential events; historically, our success record as a species with these kinds of events is, like that of all other species, nonexistent.

  14. StrangeLoop says

    In other words, traditionalist conservatism (e.g., Edmund Burke, David Hume, etc.).

    Given the transaction costs of moving radically to a redesign (let’s posit anarcho-capitalism), strategic actors will maximize payoffs (typically local immediate gains). What emerges is a relatively stable order, if we can keep it.

    The real question, for me, is how hacks will proceed if moral nihilism reigns supreme from a scientized culture, with normative prescriptions (and self-regulated restraints) exposed as mere blather. Will we shift toward a predatory point of no-return?

    • More complicated than I have time for, but hacker culture is not value-neutral, not moral nihilism. Hacking implies a strong cultural locus for knowledge (because the understanding of the hacked system is not part of the system, nor can it be formalized without spending to remove technical debt), so that while acts may sometimes be asocial, the actors are in general quite social, albeit often not on normative lines. Likewise, the systems-thinking worldview necessary to functionally perform hacks begets some understanding within the culture of the social contract and human lives, albeit again not necessarily in a traditional manner. The end result is a culture and a way of working which is moral, although not necessarily traditionally moral. Similar to (although, really, largely inverse from) how few atheists there are in foxholes, there are few nihilist hackers. Especially if they’re hackers in foxholes.

    • Alexander Boland says

      I don’t like making predictions, but I don’t think moral nihilism could ever truly happen. We have emotions, and any system that hacks them to the point of irrelevance would likely collapse from doing so.

  15. Fixing a bug is harder than writing the code. — not sure who first said this.

    It was Brian Kernighan: “Everyone knows that debugging is twice as hard as writing a program in the first place. So if you’re as clever as you can be when you write it, how will you ever debug it?”

  16. Thanks for delivering yet another feast of metaphoric food for thought.

    I find myself putting off visiting your blog because it tends to make my brain hurt and use up a lot of my somewhat limited wetware processing power.

    Still in the end this site is an irresistible strange-attractor for me as it delivers a constant stream of creative metaphoric reframing.

    • My sentiment exactly.

      Reading Ribbonfarm stretches my brain’s capacity to map to the limits… and beyond.

  17. You say, “I’ve concluded that we’re reaching a technological complexity threshold where hacking is going to be the main mechanism for the further evolution of civilization.” In the ’80s I did a systems physics proof of approximately the opposite: that “efficient” navigation of that kind, making ever-quicker fixes for ever larger and more complex systems, would become self-defeating. It naturally produces conditions no one could respond to, exceeding any ideal natural limit of response capacities, if for no other reason than delay in getting the information on what to respond to.

    In all likelihood something else would go haywire first, but the interesting part of the proof is the certainty of systemic failure even assuming ideal responses, transparency and cooperation on the part of all participants. ;-)

    Unconditional Positive feedback in the economic system, http://www.synapse9.com/PosFeedbackSys.pdf

  18. Some random thoughts/links…

    You need a picture/reference to HarryTuttle in Brazil.

    Almost everything can be thought of its own system, made up of sub-systems, and acting as part of larger systems. All systems at all layers have life-cycles, they are all ultimately disposable, though some die only at rates too slow to matter to most of us.

    See BigBallOfMud writings, esp http://www.laputan.org/mud/

    StewartBrand on fast-and-slow-layers – various links at: http://webseitz.fluxent.com/wiki/LayEr

    StewartBrand again: “demolition is the history of cities”: http://radar.oreilly.com/archives/2005/04/a-world-made-of.html

    • “All systems at all layers have life-cycles, they are all ultimately disposable, though some die only at rates too slow to matter to most of us.”

      Reality viewed as a layered-stack of recombinant-component-platforms might help elucidate some key themes within the economics of complex-interdependent-system redesign or its biological equivalency, environmental-fitness-refresh.

      – atoms are a platform for building out molecules
      – molecules are a platform for building out life-molecules
      – life-molecules are a platform for building out cells
      – cells are a platform for building out organisms
      – human-organisms are a platform for building out social/technology-structures

      As you move up reality’s stack of platform-layers, to cellular-platform structures, then to social-platform structures, both the platform’s structures and the network-mediated synchronization required to maintain the fabric of component building-block interplay that defines those structures become exponentially more complex, volatile and interdependent with each higher platform-layer. Starting at the cellular platform layer you start to get significant cross-layer level-mixing feedback-loops.

      It seems obvious that the cellular-platform-level is the breakpoint launching pad for complex, modular, networked, recombinant volatility.

      Multicellular life has, via evolutionary statistical trial and error, been homing in on the best strategies for optimizing the tipping point between complexity and sustainably flexible environmental-fitness-refresh for over 1 billion years now.

      This seems like the obvious complex-system redesign and maintenance cheat sheet we should be mining for possible solutions.

      The modular decoupling of individual cells and organisms, purchased at the cost of duplicating the complete set of functional and reproductive mechanics within each instantiation, seems an obvious starting point.

      That tipping point between networkable-recombinance and modular containment seems like an important focal point.

      Cells and organisms don’t need to constantly check in with the Google/Facebook/Whatever-API mothership just to find out how to execute their endemic functions. They can autonomously participate in a statistical, mutually-adaptive wavefront that systemically re-engages with other biologically autonomous agents to effectively redesign an emergent new global homeostasis over time. (Client-server vs. distributed peer-to-peer cloud structures.)

      Studying fixed biological lifecycle durations might yield some hints as to why and how we could formalize social/technological redesign cycle timing?
      (If we could clear the built-in-obsolescence and marketing terrorists out of the way ;-)

      With cellular-decoupling optimization techniques we could potentially fashion some sort of round-robin, successive-approximation, localized-adaptive-incremental changes that collectively, over time, continually inch their way toward refreshing the overall homeostatic fitness of our social/technological systems?
      _____________________________________

      you can’t make plastic or computer chips
      working with an alchemy based metaphor
      talking the language of – earth – wind – fire – water

      how are we to – drive – steer – brake
      this emerging organic social network dynamic
      trapped inside a 20th century linear metaphor @ hull speed

      we need to pull a linguistic strange loop and collaboratively forge
      a new self-referential global organic-metaphor / lexicon
      reframing – ourselves – our social structures – and our world
      as ubiquitous instantiations of organic living systems
      as instances of a universal network-organizing principle

      we are all at the Mad Hatter’s tea party
      spinning our wheels
      speaking in tongues

      we are hobbled by obsolete linguistic memes
      that sabotage our collective ability to realistically frame
      our emerging organically interdependent social realities

      we need a new global organic-networking metaphor
      a magic little lexicon for the rest of us
      capable of injecting the very heart and soul
      of Organic Process Literacy
      that magic-Mojo at the core of all living systems
      into every day language and culture

  19. Great thought provoking post.

    While I would generally agree with the notion that hacking is fundamental to life itself and “everything is a hack”, I believe that there are definitely different categories of hacks, some “good” (optimal?) and some sub-optimal or wrong-headed or just plain lazy. Since the word “hack” has a negative connotation in popular culture (or even among engineers / scientists), all of these get clubbed together into one.

    I believe that there is a need to separate the optimal hack – something that is the best possible solution under the constraints of resources / competitive pressures / available knowledge and skills and so on – from all the other hacks. Particularly those that result from the feeling that “if everything is a hack, why bother” or even worse “if everything is a hack, why not do something that serves my agenda”. I wish I could think of a pithy name for these optimal hacks, but nothing is coming to mind at present.

    • While I would generally agree with the notion that hacking is fundamental to life itself and “everything is a hack” …

      It saddens me to see that the author has put so much care into the article and what pops out is a cheap philosopheme to take home. What is so incredibly attractive about monist reductions? I’ll never understand…

      So now good and bad hacks. Why not linear, polynomial and NP-complete hacks, or parent, children and grandma hacks? If “everything is a hack” you can arbitrarily differentiate hacks. It doesn’t matter, because the concept has already lost its distinctive denotational meaning. Another example: look at Alan Kay, the father of object-oriented programming. First everything became an object, and then he began to complain about how the term “object” was abused by the likes of C++ developers, and now he somehow intends to reboot the whole thing with even vaguer ideas.

      Software was already in crisis in the 1970s. That it is inefficient and about to collapse has been part of computing-science folklore since I was born. The programmers-on-the-street have always been a barbarian tribe who overran the academic citadel, and society has to suffer for it by facing doomsday.

      There is a funny and related anecdote I heard recently. A couple of CS professors were sitting together and discussing how incredibly low the standards for software production are, how buggy the products and that this sorry state must lead to one or another crisis. Suddenly one of them stood up and said that he used Photoshop at the weekend and it was great.

      • My comment wasn’t meant to be a summary or critique of the whole post. Just pointing out the uncomfortable feeling I have about the word “hack” due to the baggage it carries. Your observation of what happened to “objects” is actually quite useful to think about.

        As for the entire post, there is no doubt that it is probably the most thoughtful post I have read in a long time and is resonating extremely well with me.

  20. This is a wonderful piece. I have picked up on it over at our NPR blog “13.7 Cosmos and Culture”. The question, of course, is the nature of the equilibrium. Is hackstability really stable, or just a local minimum?

    http://www.npr.org/blogs/13.7/2012/04/24/151269428/can-hackstability-save-civilization

  21. Given your many thoughts on the future, I wonder what you would make of the La Prospective movement, and the clash between prediction and building.

    http://www.wfs.org/futurist/may-june-2012-vol-46-no-3/predict-or-build-future-reflections-field-and-differences-between

  22. Venkat (and other commenters) — I want to recommend Michael Thompson’s ‘Organising and Disorganising’. His four- or five-part typology of complex systems corresponds in three of its parts with the ‘singularity’, ‘collapse’ and ‘hackstable’ models that you chart above. These also correlate with ecological succession stages (perhaps this is obvious but I didn’t see it mentioned by you or by other commenters). The ‘hackstable’ ecosystem is normally known as a ‘climax’ ecology. One implication of this is that global systemic stability / resilience are much better when a system embodies multiple ecosystem stages — what Thompson calls the ‘clumsy solution’, because it is inelegant, not optimised for particular environmental conditions but better able to deal with multiple classes of unknown.

  23. This is a pretty amazing article. I won’t pretend to fully understand the deep technical details, but the concepts I do understand are stitched together in an incredibly comprehensive way… I’ve never seen this done before!

    One thing I’m wondering, though, is whether hacking necessarily introduces additional entropy into the system in question, as you seem to be suggesting. (Steve’s question above is similar, but I don’t understand the answer). Rather than simply paying down an existing technical debt, couldn’t hacks sometimes be considered investments? A good hack will be more efficient than the original design, and can even be incorporated into the system eventually. It could have a positive return over time, not just a less-negative return, and could permanently reduce the entropy or instability of the system.

    I suppose that means the increase in entropy would be ejected to somewhere outside the system. But if the system is civilization… what would that even mean?

    I wrote a bit about this, and about the “everything is a hack” idea, on my own blog at http://cosmicrevolutions.wordpress.com/2012/05/02/hacking-civilization/, but I couldn’t hash out an answer to this question.

    Anyway, thanks for the provocative post! My brain’s been running at full speed since I read it.

    • Steve, the point of the hackstability concept is that, by their very nature, hacks cannot get you onto an increasing-returns curve. You have to throw away the system and start over if you want to be more effective overall, and most hackers recognize that their hacks are expedient and that there is a theoretically better way.

      What makes this a sort of Catch-22 is that no individual hack is ever worth throwing the system away for, and each hack makes the system harder to throw away, so it spirals down into high-entropy non-disposability.
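      A toy numerical sketch of that spiral, to make the Catch-22 concrete (my own illustration; every number in it is an invented assumption, not anything measured): each hack pays a diminishing local return while raising the cost of a ground-up rewrite, so “rewrite now” loses every single round even as the system entrenches.

          # Toy model of the hack-vs-rewrite Catch-22. All figures are made up
          # purely for illustration.
          def hack_spiral(rounds=10):
              quality_premium = 10.0   # what a clean rewrite would add, at any point
              rewrite_cost = 12.0      # starts just above the premium...
              for i in range(1, rounds + 1):
                  hack_gain = 5.0 / i  # diminishing local return per hack
                  rewrite_cost += 2.0  # ...and every hack entrenches the system further
                  decision = "rewrite" if quality_premium > rewrite_cost else "hack again"
                  print(f"round {i:2d}: hack pays {hack_gain:.2f} now, "
                        f"rewrite costs {rewrite_cost:.0f} -> {decision}")

          hack_spiral()

      Every round prints “hack again”, and the rewrite cost only climbs: no single step ever justifies the rewrite, yet the system as a whole drifts into non-disposability.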

      Our civilization is “thermodynamically open”, of course, so in principle we should be able to maintain low entropy by ejecting high entropy (that’s what we do with sunlight, after all: take in low-entropy, high-frequency sunlight and re-radiate higher-entropy, low-frequency radiation, if I recall my college physics correctly). Ignoring greenhouse effects and geothermal heat, you get thermal equilibrium at a low-entropy point.
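      To put rough numbers on that college physics (standard textbook values, and a back-of-the-envelope sketch rather than a careful calculation): blackbody radiation at temperature \(T\) carries an entropy flux of about \(\tfrac{4}{3} P / T\) for radiated power \(P\). Earth absorbs and re-emits nearly the same power, but at very different temperatures, so

      \[
      \frac{\dot{S}_{\text{out}}}{\dot{S}_{\text{in}}} \approx \frac{T_{\text{sun}}}{T_{\text{earth}}} \approx \frac{5800\ \text{K}}{255\ \text{K}} \approx 23
      \]

      The planet exports roughly twenty times the entropy it imports; that surplus is the headroom that lets local structure stay low-entropy.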

      But it is hard to imagine what an equivalent for a settled, planet-wide civilization could be. Slash-and-burn subsistence farmers deplete plots and move on to new ones, returning when old plots regain nutrients. But when all land is occupied, this is not possible; instead you get “center-periphery” dynamics. And that is an insufficient mechanism to explain modern technological civilization, since our primary resources (like oil) aren’t exactly friendly to center-periphery oscillation and regeneration.

      • Alexander Boland says

        I suppose most things are “thermodynamically open” to an extent, but you seem to have hit the nail on the head with the more practical point about things like peak oil. Forgive me if I’m being too abstract about thermodynamics (I come at it from having studied games vs. narratives as closed vs. open systems), but our civilization in its current state seems to be propped up by the negentropy of fossil fuels, topsoil, fisheries, etc. Doesn’t that mean we’re not quite as “open” as we used to be?

        • I meant to reply here, but “session restore” kept the content and lost the comment placement. The content is below.

  24. Just wanted to say: I have a hunch that, if things are getting more complicated over time, then there must be a Red Queen-like, human-against-human arms race somewhere. In the sense that maybe PUA and city planning are not so very different, or their differences are just less obvious.

  25. You’re probably thinking of openness in terms of possible state space? As in comparing a game with multiple branching points vs. a narrative with a single sweep?

    That’s actually a very different definition of open, which I’d like to bring out:

    Open in the systems-theory sense normally refers to the actual structure being sustained by a flow of energy through it: food is the obvious example, but it extends all the way up to the flow of heat from sun -> ecosystem -> space. As human beings we’re even more open than this definition suggests, because not only does the energy of food flow through us, but the particles of the food flow through us as well, composing our physical structure and then being discarded.

    The funny thing about this kind of openness is that it sort of makes us the opposite of open in the computer-system sense: I can eat all kinds of foods, but thanks to very careful screening and processing of my inputs, I still end up with the same number of arms and legs, etc. Multiple different inputs lead to the same basic physical system, like a game narrative that follows the same path despite different player actions in each game.

    You could imagine it like this: a computer takes a flow of energy, transforms it into heat, and, thanks to being sustained by that process, gives the player the same damn story whatever he hammers on the keys. Or it takes the information as well as the power input, processes both in some way, then uses that as content to tell the same story. The latter form is still a single predictable narrative (within certain limits), but it is now built out of interactivity rather than independent of it.
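    A minimal sketch of those two machines, as code (a toy of my own; the function names and the “story” are invented):

        # Two toy "narrative computers": one ignores its input entirely, the
        # other metabolizes the input but screens it down to the same basic story.
        def powered_narrative(keystrokes: str) -> str:
            """Sustained by an energy flow, indifferent to what is typed."""
            return "the hero wins"

        def metabolizing_narrative(keystrokes: str) -> str:
            """Processes the input too, but screens it so the outcome stays
            within narrow limits: interactivity in, same basic story out."""
            screened = [c for c in keystrokes if c.isalpha()]  # careful input screening
            style = "quietly" if len(screened) % 2 == 0 else "loudly"
            return f"the hero {style} wins"  # same ending, only the telling varies

        print(powered_narrative("asdf jkl"))       # -> the hero wins
        print(metabolizing_narrative("asdf jkl"))  # -> the hero loudly wins

    Either way the story is the same; the second machine merely lets the hammering color the telling.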

    So, to answer your question: there is an extent to which fossil fuels make the earth as a whole less open, in that we are using reserves within the planet to power things, and so part of our industry is not based on that sun -> earth -> space flow. This element could then be expected to be subject to the usual entropy rules. But that’s obvious; it’s basically saying “if you eat from your backpack rather than getting food from outside, you will run out of food,” and the “backpack feeding system” will break down as naturally as any other system.

    If I understand your use of “open”, though, I don’t think it’s possible to say whether fossil-fuel use makes us more responsive and able to produce a higher variety of futures in the long term. On the one hand, easy energy means we are not diversifying our energy production; on the other, we’ve been using that time to do other things, which may have their own benefits, so long as we can manage the side effects properly, as well as the bump as the oil runs out.

    In complex strategy games, sometimes it’s effective to have a strategy based on one massive, hidden and ignored flaw, because you can use the strategic benefits of not investing in fixing that flaw. Of course, if anyone sets up a situation to bring out the downsides of that flaw, accidentally or intentionally, then you’re stuffed. This is basically the gamble we’ve been playing for the last few hundred years!

    • Alexander Boland says

      I think we’re actually on the exact same page, and you might just be confused by some of my terms. That’s probably because of the thing I wrote about narratives vs. games. It’s a strange idea that could lead people to wonder whether I’m on the same page about the terms “open” and “closed” in thermodynamics. The answer (I hope) is “yes”; it’s just that after reading Reception Theory (a branch of literary theory started by Wolfgang Iser), I saw a parallel between the reading process and thermodynamics that other people might not immediately see if they’re not familiar with that body of theory.

      Anyway, about fossil fuels: I agree; the fossil fuels are the food in our “backpack”. Backpacks, just like fruit growing on wild trees, are sources of negentropy for our hypothetical hiker. There are still many points of openness for our planet, but I suppose I should rephrase the question to “is our system open *enough* to provide sufficient negentropy for the continuance of high-tech civilization?” I also agree that the answer depends on whether our fossil-fuel-burning habit leads to technology sufficient to create something like, say, fusion power.

      By the way, if you’re interested in what I was talking about before, I highly recommend “The Reading Process: A Phenomenological Approach” by Wolfgang Iser.
