The Misanthrope’s Guide to the End of the World

To diagnose somebody’s worldview, the single most effective test is to ask about their end-of-the-world opinions. You find out whether they have tragic or idealistic worldviews. You learn about their morality. You find out whether they are self-centric, ethnocentric, anthropocentric, bio-centric, enviro-centric or cosmos-centric. You get at how they ride the tension between individualism and collectivism. Attitudes towards grit and survival shine through. You get a read on their views of politics, technology, globalization, religion and mysticism. You find out whether misanthropy or empathy rules their heads and hearts. Their ability to transcend the varied dichotomies involved gives you a read on their intelligence. Perhaps most important of all, you find out about their sense of humor. So here is an introduction to the End of the World. Popcorn not included.


(Note: most links are to items on my End of the World trail. Email me or post a comment if you think of important areas I’ve missed. I’ll provide an overview of the trail at the end so you can go exploring and autodidacting.)

The Eschatology Party Game

You may want to play this game at your next party. Make sure you leave out the seriously religious or vapid. You will offend the former and be bored by the latter. Everybody answers the following questions, first sober, and then drunk (maybe you do a shot every time it is your turn). It is a catechism of sorts:

  1. How will the world end?
  2. Why will it end?
  3. Should we try to stop it from ending?
  4. Who and what will survive?
  5. Do you want to be among them?
  6. How do you feel about what survives?
  7. How do you feel about what is destroyed?

There is an eighth question that I will reveal at the end, which is perhaps the most fundamental of all.

Something in my writing apparently attracts people who are passionate about these things. Since I began writing online, I’ve been emailed by at least half-a-dozen people who want to talk end-of-the-world with me. They range from atheists out to save the world (I was one myself a decade ago) to religious allegorists and literalists deeply inspired by their understanding of everything from the Moshiach to the Matrix. Whether they are interesting or scary, they are invariably too earnest about the subject for me to engage.

These eager correspondents react with anything ranging from annoyance and disappointment to outright rage when they realize that my views are mostly of the “lemme make some popcorn and watch” variety (which is why I’ve stopped responding to such emails). The best representation of my views is probably the famous “End of the World” animation, which involves a great deal of anti-kangaroo sentiment.


Not that I am an unalloyed misanthrope. Haiti is a small-scale example of the suffering and pain most realistic end-of-the-world scenarios involve. But still, if you are among those who conclude that some unpleasant end is inevitable (whether or not it is total enough to prevent resurrection), you do need doomsday humor in your life. Preferably humor that pokes fun at our self-important and anthropocentric sense of the world as our world. Which explains why, among science-fiction apocalypses, my favorite is the one in The Hitchhiker’s Guide to the Galaxy, where the Earth is destroyed to make way for an intergalactic highway. Even those Independence Day aliens had more respect for us.

Curiously though, my misanthropy does not extend beyond humans. Watching the fabulous Life After People series on the History Channel, and reading the equally fabulous book, The World Without Us by Alan Weisman (I am partway through it), I was surprised to find that I was heartened by the conclusion that the planet, with all its non-human flora and fauna, would recover and quickly heal itself and forget the torture we’ve subjected it to in recent centuries.

I like the idea that we are not really necessary. The world doesn’t need saving by us, and the world is worth saving without us (a premise The Day the Earth Stood Still treated reasonably well).

Doing my big-picture survey of eschatological thinking (my trail took several hours to build), I was mildly surprised to find that there isn’t a whole lot of difference between secular and religious eschatologies, if you allow yourself allegorical/metaphoric readings. To use Christian eschatology as a reference point, you find that there is nearly always a distinction between the chosen ones and the left-behind, for instance. There is usually something like a Rapture and an Armageddon. There is usually a morality angle (whether it is sins against the environment or against God). There are usually messianic and satanic figures. Less often, there are cyclic-time angles (which seem unique to Hindu and Buddhist eschatologies) and preservation-and-resurrection angles (pursued by both gritty motorcycle-gang survivalists and Louvre-raiding chosen ones, as in 2012). Overall, the religious ones are vastly more entertaining.

None of this is very new of course. What is new is the rise of two competing techno-apocalyptic visions.

The Singularity and its Discontents

In techno-apocalyptic visions, there is usually a tension between humanist/Gaia/environmentalist messianism on the one hand, and technology-as-evil on the other. Suspicion of technology – and a natural instinct to cast it in the role of the Antichrist – runs deep. Consider this excerpt from an editorial in Nature, from 22nd June, 2006, titled “The Mad Technologist,” which teases out a very subtle distinction:

“We find that pure scientists are often treated kindly by film-makers, who have portrayed them sympathetically, as brooding mathematicians (A Beautiful Mind) and heroic archaeologists (Raiders of the Lost Ark). It is technology that movie-makers seem to fear. Even the best-loved science-fiction films have a distinctly ambivalent take on it. Blade Runner features a genetic designer without empathy for his creations, who end up killing him. In 2001: A Space Odyssey, computers turn against humans, and Star Wars has us rooting for the side that relies on spiritual power over that which prefers technology, exemplified by the Death Star.”

The one exception to this rule is those Singularity folks, whose views have rather appropriately been dubbed the rapture of the nerds. They have a messiah (Ray Kurzweil) and a specific save-the-world agenda. There is an entity called the Singularity Institute that pursues it. To vastly oversimplify, their view is that Skynet is inevitable and that we should work to make sure it is friendly. This sort of transhumanism has its strident critics. Jaron Lanier, author of the newly-published humanist manifesto You Are Not a Gadget (which I have not yet read), is among them (he also coined the clever phrase Digital Maoism, which I love).

I find the debate with the humanists ultimately dull. I am not a transhumanist, but I am not a humanist either. The more interesting debate around technology has to do with a distinctly unintelligent outcome. The true antipode to the Singularity view is what I call Garbage eschatology.

Garbage Eschatology and Non-Messianic Futures

I am shocked that nobody has really studied garbage eschatology, besides the writers of Wall-E. Garbage eschatology (I claim credit for this neologism) is based on the premise that our technological infrastructure has acquired too much complexity for us to fix. It will kill us not by turning sentient and (for whatever obscure reason) wanting to kill us, but by stupidly and dumbly collapsing on top of us, like a gigantic Windows Vista, while we watch, powerless to prevent our impending accidental death. Technology will kill us by collapsing into a pile of rubble, turning the planet into a gigantic landfill.

The thing I like most about this is that there are no Messiahs involved, and no Antichrists. That strikes me as realistic, as well as appealing to my Cat aesthetics. Messiahs are part of Dog aesthetics (I am referring to my dog-people-cat-people theory from On Seeing Like a Cat). To anthropomorphize the forces that might kill or save us is a case of unnecessary and distracting animism.

To class it up a bit, my view is based on the idea that the entropy of a software system (broadly defined to include the civilization-ware that runs the planet, including the mechanically embodied computational intelligence of such things as sewer systems) inevitably increases with time, past a point of no-return. Beyond that, we cannot stop it from collapsing under its own weight, and cannot marshal the resources to reverse the aging process either.  The best we can do is hide and then emerge from the rubble and build ourselves Mad Max or Waterworld civilization resurrections. And don’t waste your time agonizing. We probably crossed that threshold in the 14th century, by my calculations (maybe I should call it the garbage singularity).

I won’t argue this in detail, because I don’t have a detailed argument. Two suggestive fragments of argument: it takes a smarter person to fix a buggy program than to write the program in the first place, and every bug fix introduces two or three deeper bugs/flaws/cracks into the system. Eventually, intelligent environments run on civilizational software built by the smartest possible humans, and the even-smarter people required to fix bugs in that system don’t exist. So the smartest humans keep bug-fixing and drive a vicious cycle of multiplying bugs. The probability is far higher that a fatal bug will turn the system into junk than that one will lend it sentience. Extrapolating from that to a “collapse under its own weight” argument for all of civilization is a stretch, but that’s the leap of faith I am most inclined to make. It’s much easier for me than imagining Skynet turning sentient.
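For the programmers in the audience, that vicious cycle is easy to sketch as a toy simulation in Python. Everything in it is made up (the fixing capacity, the bugs-per-fix rate, which is just my 2-3 guess from above), but it shows the one thing that matters: once each fix spawns more than one new bug, the open-bug count can only grow.

```python
# Toy model of the bug-fixing vicious cycle. Purely illustrative;
# all parameters are invented for the sake of the sketch.
def simulate(initial_bugs=10, fix_rate=5, bugs_per_fix=2.5, years=20):
    """Track open bugs when every fix spawns `bugs_per_fix` deeper bugs."""
    bugs = initial_bugs
    history = [bugs]
    for _ in range(years):
        fixes = min(fix_rate, bugs)                  # fixing capacity is bounded
        bugs = bugs - fixes + fixes * bugs_per_fix   # fixes close bugs, then spawn more
        history.append(bugs)
    return history

trajectory = simulate()
# With bugs_per_fix > 1, the trajectory only goes up: 10, 17.5, 25, ... 160
print(trajectory[0], trajectory[-1])
```

Swap in bugs_per_fix=0.5 and the same loop converges to zero instead; the whole Garbage argument hinges on which side of 1.0 that ratio really sits.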

The Garbage vs. Singularity divide is not a deep one. I suspect many Singularity types consider the Garbage outcome a possible alternative future. We differ mostly over whether size, complexity and rustiness are increasing faster than the intelligence of a hopefully benevolent Skynet. And the uncertainty is all about the rate at which “intelligence is increasing.” The rate of entropy increase in our crumbling civilization-ware is roughly known. We have a rough idea (accurate to within half a century, say) about the dynamics and controllability of such things as Peak Oil, the aging of the global population (which affects optimistic beliefs about “the indomitable human spirit” prevailing) and the collapse of water tables. At a more local level, we have a good idea of how bad the aging sewage and water systems of most major cities are. I also agree that there is nothing theoretically impossible about the Singularity folks’ agenda (Von Neumann’s ideas on self-replicating machines clinched the theory for me).

Where I part ways with the Singularity folks is in believing that the world as we know it will collapse before we get to the Singularity. Some who think along the same lines (but around more modest component issues) include Cory Doctorow (Metacrap: Putting the Torch to the Seven Strawmen of Meta-Utopia) and Elisha Sacks/Jon Doyle (Prolegomena to Any Future Qualitative Physics).

Perhaps I lean towards the Garbage view simply because by training I am a mechanical and aerospace engineer. The most primal manifestations of the laws of thermodynamics, such as friction and drag, loom larger in my imagination than in the imaginations of the AI folks who typically believe in the Singularity. If you spent even a few hours trying to get fussy optical encoders to work right, eliminating ground loops in power electronics boards, and getting gyroscopes to spin right, you’d spend less time worrying about Asimov’s Laws of Robotics and Friendly AI, and more time worrying about Robotic Arthritis. Skynet would need, for its hardware, some version of a clanking replicator, and we’re still struggling with Roombas and Predator drones.

I quit experimental work in robotics and control theory and shifted to modeling and simulation mainly because things are vastly easier in the frictionless worlds inside computers. One of the problems I worked on for a few months before quitting hardware work was compensating for friction in high-speed/high-accuracy pneumatic actuators — easily the most frustrating work I have ever done. After switching to computing, I spent years producing grand, frictionless operas of swarming spacecraft constellations and aircraft formations on the computer screen, by glibly ignoring everything that was actually hard. It earned me a couple of degrees, and a very healthy respect for entropy.

Misanthropy and Convenient Fatalism

Let’s Pave the Stupid Rainforests

Okay, I don’t actually mean that. That’s the title of a book (a rightist collection of humorous essays). I saw it in a bookstore a few years ago, and instantly burst out laughing. I didn’t actually read it, since a quick glance convinced me that it was too lightweight to take seriously, but my reaction convinced me that I had completely lost the self-important concern about the planet that I affected in my late teens.

But the title told me that I was in danger of slipping too easily into a convenient sort of fatalism, the other extreme from teenage doomsday gloom. Convenient fatalism is the sort of liberating hopelessness that makes you bait your green friends by telling them you’re going to be burning your share of gasoline while the gettin’ is good (I have a niece who is going through the whole earnestly-green-teen phase at the moment, and I admit I’ve used the “let’s pave the stupid rainforests” line on her. But the last gift my wife and I bought her was an acre of rainforest. I don’t take jokes too far).

I am not actually as misanthropic as I like to pretend. I am just congenitally incapable of denying the inevitability of (thermodynamic-entropic) death, be it at the level of humans, civilizations or sentient Borgs. Even if Skynet comes to be, it will die of old age if John Connor doesn’t defeat it. So why get all profound about it?

The realization that civilizations face possible garbage eschatology futures does not mean we should roll over and give up, any more than the knowledge that we will all die means that we should just sit around waiting to die. Convenient fatalism must be resisted, not because the world is worth saving, but because (existential hat on) ironically struggling against all manifestations of death, at all scales, knowing that you’ll eventually lose, is what gives life meaning. At least to us misanthropes.

Which means, if apocalyptic events did start to occur, I would probably fight. My reasons wouldn’t be the same as most people’s. I wouldn’t be fighting to fulfill humanity’s manifest destiny (there isn’t one), or as penance for my part in bringing about an environmental doomsday (I don’t do guilt, and Life After People has reassured me that perhaps I don’t need to), but out of sheer bloodymindedness. If John Connor gave a pompous speech, I would probably heckle him.

That said, misanthropic approaches to the end of the world do lead you to do other things differently, even if you join the annoying liberals in their smug Priuses. You don’t care so much about preservation of human culture and civilizational memory, for instance. That’s the crumbling stuff that is killing us in the first place, remember? Even given my extremely low expectations of the clearly godawful narrative in 2012 (I admit it, I watched it entirely because an aircraft carrier crashes into the White House), the sheer banality of one particular plotline got to me: Thandie Newton’s character leading a team of self-important bureaucratic archivists (with incredibly pedestrian and unimaginative approaches to history) setting out to systematically save the world’s cultural heritage. Their choices begin with (what else) the Mona Lisa.

Which brings us to the most revealing question of all. What would you save?

Question 8: What Would You Save?

This question is oddly similar to the “3 movies you would take to a desert island” game that the characters in The Office play in one episode. Ignoring practicality (you should rummage in Wall-E’s junkyard landscape for that), what non-survival/resurrection material would you try to save?

My own answer tells me why I am legitimately a misanthrope here, even if I fight ironically next to John Connor against a Skynet I secretly admire, and alongside a Wall-E in my junkyard home.

There’s really nothing I would want to save. Starting from scratch sort of appeals to me. If you forced me to pick, I’d pick random things.

Overview of the End of the World Trail

It was surprisingly hard to organize the End of the World trail. But here is a rough guide to the organizing visual logic of the Trail Map. It took me a few hours to build, and it should provide you with a couple of days’ worth of entertainment, and a diploma-mill-level mastery of the subject.

  • At the center are some general conceptual pieces on eschatology. Start with Eschatology.
  • Towards the east and north-east, you have various branches exploring different sorts of religious eschatology, mostly from Wikipedia. Start with the Messianic Age article.
  • Towards the north, you have a few doomsday humor selections. Start with the End of the World video.
  • The north-west is for Singularity-related stuff. Start with a glance at Ray Kurzweil’s book.
  • The south-west is for doomsday movies and literature. Start with Dr. Strangelove.
  • The south branch is for environmental and economic apocalypse scenarios. Start with the Doomsday page.  The 9 Global Devastation Hotspots page is particularly thought-provoking.

Email me/post comments if you have any suggestions on both structure and stuff to include.

[Added April 5: Clay Shirky’s The Collapse of Complex Business Models and Tainter’s book, The Collapse of Complex Societies, have been added to the End of the World trail]


About Venkatesh Rao

Venkat is the founder and editor-in-chief of ribbonfarm. Follow him on Twitter


  1. Yeah! Entropy Rules!
    So the point isn’t what to do about “end of times” but what to do before YOU turn into rubble.
    All the rest is just young monkeys fantasies and projections about avoiding death (probably fueled by hormones and shaped by evolutionary fitness constraints).
    A relevant concept for technology frailty is Fabricatory Depth.
    (BTW, isn’t this post mostly a publicity stunt for trailmeme?)

  2. More a technical exercise in futuristic hyperlinking than a pub stunt. I think I’ve milked this audience for as much PR as I can get already :)

    The idea of fabricatory depth is very interesting. I hadn’t thought of that in relation to the frailty of tech ecosystems. I had thought of the related concept of economic webs though (the process by which the ancient and fundamental practice of agriculture comes to depend on modern and derivative things like tractors and GMO seeds…).

    I think the 2 are related… you could view all technology as one big constantly maintained artifact, in which case the economic web becomes a fabrication web, where the ‘tractor’ is a tool supporting agriculture and so forth.

    Need to think more about that one.


  3. I like the Hitchhiker ending best. However the world ends, it will be absurd to anyone who is not on it. In that respect, I saw this movie called Fantastic Planet, which shows (at least in the first half) exactly how important humans are in the great scheme of things.

  4. ricky_elias says

    I’d like to think that skynet will develop sufficient skills in collaboration and knowledge creation to keep pace with complexity and rustiness, but yes, no proof.

    Does make for better movies though :)

  5. Fascinating post! The idea that complexity will be the downfall of our society definitely resonates with me as a software developer :-)

    I’m curious if you’ve ever read “The Collapse of Complex Societies”? I haven’t read it yet, but from everything I’ve heard it’s very much along your line of thinking. This speech here summarizes it pretty well, I thought:

    One of the key ideas in all that is that people underestimate the cost of adding complexity to any system, because the short term benefits are obvious (fixing the immediate pressing problem), but the long term costs are invisible. A society tends to collapse once the energy required to maintain it grows past the benefit the society/system provides.

    (Sort of a tangent, but: I think it applies even on smaller scales, especially in corporations. You always hear of policies being added in order to ensure quality or to fix problems, but how often do you hear of policies being removed unless they cause immediate obvious pain? And yet this is what kills companies as they grow — a new policy is added to address a short-term problem, and it works, but at some point the collective cost of all those policies outweighs the benefit, because enacting any sort of change becomes way more effort than it’s worth. Once the company is forced to adapt to changing market conditions, it finds that it can’t, because the energy cost of overcoming all the built-in complexity is way too high.)

  6. If you haven’t, you should read “The Metamorphosis of Prime Intellect” — set in a sort of technologically advanced apocalypse that itself gives rise to a new apocalypse. It’s a decent quick read; I don’t want to spoil too much.

  7. I am a software developer and the future that you paint really rings true – it is entirely possible that complexity will be civilization’s downfall. You have provided excellent pointers to drill down deeper into the subject. Definitely worth a bookmark. Thanks again.

    Brett Alistair Kromkamp

  8. @JM
    I had a quick glance at “The Metamorphosis of Prime Intellect” , monkey dreams again, no matter how imaginative we try to be we cannot escape our mammalian past.
    This is why all Sci-Fi (or other “serious” forecasts) is bullshit.

  9. Perhaps it’s because I have read “The World Without Us” (great book), but my first thought on the End of the World is that the sun will expand and burn it up. Of course humans will be long long long gone by that time. So the rest of the questions don’t make much sense. However, everybody else here seems to understand that what you mean by the World is human civilization.

  10. However, everybody else here seems to understand that what you mean by the World is human civilization.

    Yes, of course, it’s another “monkey centric” view but it cannot be otherwise.
    Actually this goes even deeper than Protagoras or others may have thought, in modern terms think about the result of any measurement, it obviously depends on both the thing measured and the measure instrument.
    We are the instruments of our views and there is no reachable separate “reality” with which they could be checked, all goes ultimately through our “judgments”.

  11. It strikes me that the current economic challenges (and those about to come) are also based on the principle of over-complexity….

  12. How strange for me to read another blogger dealing with so many of the same topics I’ve been considering over the past few years. Your central thesis — garbage eschatology — reads like a mash-up of things I’ve written about malicious ecophagy, aged institutions, and living among refuse. It clearly takes wherewithal and cleverness to put together such an overview, but your tone strikes me as remarkably glib considering the foreseeable consequences for all living things over the next few hundred years (which may admittedly emerge in another few thousand years much better off — those that aren’t extinct anyway). But then, I suppose I’m prone to the earnestness you dismiss with a jibe.

    I note that your frequent movie citations omit an obvious one: Idiocracy. The refuse heaps and broken infrastructure shown in that film prophesy one version of your thesis, though the mechanism through which it’s realized is different.

  13. @Brutus: Yes, I suppose the tone is glib; a case of some convenient fatalism/liberating hopelessness creeping in, though I try to resist it at more modest levels. The future looms unpleasant, but I lack the hubris/courage (however you view it) to imagine that I can do anything productive to prevent it.

    And I missed Idiocracy because I haven’t watched it. On my list now thanks!

    @Ian: Bureaucratic process-entropy creep is certainly a smaller-scale version, and it is what I modeled in the Gervais Principle as the MacLeod lifecycle. It can’t be stopped, but at least at that scale, you have a creative-destruction environment to rely on.

    JM, JLD, Divya: thanks for additional recos.

  14. Ron Strelecki says

    An end-of-the-world scenario that is suggested in “Idiocracy”, death by stupidity, is carried further and more seriously in Kurt Vonnegut’s “Galapagos”. People make the mistake of thinking that evolution intends intelligence; that evolution is a “ladder” leading from stupid to smart. The koala is an example of an animal that has lost much of its ancestors’ capabilities. They are slow, stupid, highly specialized and very limited in their ability to adapt. Nature has no intentions. “Idiocracy” posits that there is no reason for intelligence to thrive.

    “Galapagos” follows the very, very few survivors of a worldwide economic crisis and subsequent disease that renders humanity sterile. A few humans persist on the Galapagos Islands (unable to return to Civilization… no oil, no diesel, etc…). The surviving humans simply evolve into a seal like creature, not particularly intelligent, but well-adapted to their environment. The survivors contemplate the collapse and take hopeful, nihilistic, and practical views.

  15. I think you haven’t really included “Darwin’s Radio” as a possibility for the end of the world :)

  16. ricky_elias says

    Another aspect to consider with this discussion on complexity, is political debates on complex topics, e.g. climate change where democratic ‘leaders’ representing a majority make decisions based on popular opinion. The point being, the complexity of the issue makes popular opinion almost worthless – leading to misguided, potentially dangerous outcomes.

  17. @ricky_elias

    The definitive answer about complexity has been brought forth long ago:
    For every complex problem there is an answer that is clear, simple, and wrong.
    H. L. Mencken

  18. I would fight. Not out of guilt, or bloodymindedness, but out of a will to preserve the first thing that I would seek to save, my own life. Without that, I couldn’t save anything or anyone else.

  19. Most “end of the world” scenarios (in time) would be large-scale disruptions to civilization (from Global Warming to Supervolcanic Eruptions), but wouldn’t end the world per se.

    My vote goes to a Gamma Ray Burst coming from within 1000 light years of the Earth … or a 5+ mile wide comet/asteroid hitting the Earth … those things would wipe out everything but bacteria!

  20. Power, water, garbage, fuel. Do you know where they come from and where they go? I sure don’t, and I doubt any city dweller really knows. I’d bet on the garbage eschaton too. Even a medium-sized cough from the sun is enough, apparently, to fry our transmission lines.

    I lost the smugness which I used to face EotW scenarios when I gained parenthood. Parenthood is one of those things which need to be experienced to truly appreciate the world-view-changing it brings about.

    Jared Diamond’s Collapse is depressing and a must read.

    I’m still amused at the global warming dudes, though. A complete lack of historical (geological time scale) perspective leads to people worrying about cows farting, as if we haven’t seen much worse fluctuations already. Eating into our natural resource capital, food shocks and the collapse of the precarious human social pyramid will kill us dead much before we warm up enough to make good on our futuristic beachfront property purchases.

    • Yeah, we are one black swan away from extinction, and several white swans also loom. I am told Delhi will run out of groundwater in 2012.

      The CIA, I read somewhere, has lately been doing a lot more scenario planning around food and water wars than Iraq or Afghanistan. That says something.

  21. @tubelite

    Why should one be worried more about the “death” of civilization or the death of your children or grandchildren (I have two) than about one’s own death?
    Is it even reasonable to worry that much about one’s own death?
    Isn’t this expectable and guaranteed?
    If you are into doomerism do you know these “professionals” in the trade:
    Kurt Cobb.
    John Michael Greer.

  22. Very interesting piece. However, I have a hard time figuring out how I would answer your questions, largely because most of the world views you discuss confuse “end of the world” and “end of the universe” (or “end of everything”) with “end of the dominant civilization.” We’ve had the “end of the dominant civilization” occur a number of times in the history of this planet and there has always been a jerry-built structure to handle the daily wants of life until a new dominant civilization arises.

    So from where I sit, an “end of the dominant civilization” is not an “end of the world”. An “end of the world,” to me, is something that would kill most of the life, including most of the intelligent life, on the planet. This is the category where AGW and Malthusian scenarios go. We haven’t had one of those yet, and hopefully never will. If we were going that route, I would definitely fight, because I want to live. No other reason is necessary.

    As for “the end of everything?” There are some cosmological theories that hypothesize that that will happen, billions of years in the future. I’m not sure whether I believe them or not.

  23. Another variant of the “end of time” has been explored by J.G. Ballard, who made references to Dalí’s surrealist image of the soft clocks. Time becomes fibred and incoherent, and so does the mind, which drifts apart into psychedelia or catatonia, a state he calls “the trance”. This corresponds with the typical Ballardian landscapes: the wasteland suburbia, the Manhattan ghost-town (reminiscent of those in “Life After People” rather than post-nuclear deserts) and the broken sunglasses at the bottom of swimming pools. Luckily Ballard is usually brief in explanations.

    Maybe Jean Baudrillard was a Ballardian philosopher who owed more to surrealism than to sociology, actually.

  24. I suspect perhaps the semantics of trailmeme links has changed since you’ve posted this, as all the /follow links left me wondering if maybe my browser wasn’t supported.



    I had to fiddle to find the /walk links (as they only come from double-clicking on the map, which isn’t exactly intuitive).

    ps. It’s pretty sweet that the trailmeme maps are now HTML5/canvas for those that support it. I never saw them before as I block Flash.

  25. I haven’t read Lanier’s “You Are Not a Gadget” yet, either, but I saw a quote from him (don’t recall source) where he opposed the technological Singularity to a more likely outcome: becoming “a nation of help-desks”.

  26. Your summary of the Singularity claims seems inaccurate and unfair. There are a variety of distinct claims made by different classes of Singularity proponents. The emphasis on making AI that is friendly is due more to the Yudkowsky end of the Singularity movement, many of whom have a very negative view of Kurzweil. Kurzweil’s version of the Singularity involves the power of exponential growth across a large variety of technological frontiers.

    Note also that the “rapture of the nerds” is a very inaccurate statement. There’s no equivalent of the small set of saved individuals. Everyone ends up as part of the Singularitarian utopia.

    Finally, note that pattern matching is not in general a good approach. That certain eschatologies have superficial similarities is not a strong reason to assign them the same confidence.

    • “Rapture of the nerds” was not my line. It was the headline of an IEEE article.

      The Singularity idea is obviously more complex and multifaceted than I could convey in this broad piece which has an emphasis elsewhere.

  27. > I am shocked that nobody has really studied garbage eschatology, besides the writers of Wall-E. Garbage eschatology (I claim credit for this neologism) is based on the premise that our technological infrastructure has acquired too much complexity for us to fix. It will kill us not by turning sentient and (for whatever obscure reason) wanting to kill us, but by stupidly and dumbly collapsing on top of us, like a gigantic Windows Vista, while we watch, powerless to prevent our impending accidental death. Technology will kill us by collapsing into a pile of rubble, turning the planet into a gigantic landfill.

    Curiously, this is the topic of one of the oldest and still fairly well known SF stories – E.M. Forster’s “The Machine Stops”.

  28. Surprised not to see any discussion of Nick Bostrom’s writing here.
    He even looks misanthropic, given the right lighting.

  29. cherrysthorne says

    Great piece … sadly enough probably the most interesting read I’ve had in a long time