Welcome to the Future Nauseous

Both science fiction and futurism seem to miss an important piece of how the future actually turns into the present. They fail to capture the way we don’t seem to notice when the future actually arrives.

Sure, we can all see the small clues all around us: cellphones, laptops, Facebook, Prius cars on the street. Yet, somehow, the future always seems like something that is going to happen rather than something that is happening; future perfect rather than present-continuous. Even the nearest of near-term science fiction seems to evolve at some fixed receding-horizon distance from the present.

There is an unexplained cognitive dissonance between changing-reality-as-experienced and change as imagined, and I don’t mean specifics of failed and successful predictions.

My new explanation is this: we live in a continuous state of manufactured normalcy. There are mechanisms — a mix of natural, emergent and designed — that work to prevent us from realizing that the future is actually happening as we speak.  To really understand the world and how it is evolving, you need to break through this manufactured normalcy field. Unfortunately, that leads, as we will see, to a kind of existential nausea.

The Manufactured Normalcy Field

Life as we live it has this familiar sense of being a static, continuous present. Our ongoing time travel (at a velocity of one second per second) never seems to take us to a foreign place. It is always 4 PM; it is always tea-time.

Of course, a quick look back at your own life ten or twenty years ago will turn up all sorts of evidence that your life has, in fact, been radically transformed, both at the micro-level and the macro-level. At the micro-level, I now possess a cellphone that works better than Captain Kirk’s communicator, but I don’t feel like I am living in the future I imagined back then, even a tiny bit. For a macro example, back in the eighties, people used to paint scary pictures of the world with a few billion more people and water wars. I think I wrote essays in school about such things.  Yet we’re here now, and I don’t feel all that different, even though the scary predicted things are happening on schedule.  To other people (this is important).

Try and reflect on your life. I guarantee that you won’t be able to feel any big change in your gut, even if you are able to appreciate it intellectually.

The psychology here is actually not that interesting.  A slight generalization of normalcy bias and denial of black-swan futures is sufficient.  What is interesting is how this psychological predisposition to believe in an unchanging, normal present doesn’t kill us.

How, as a species, are we able to prepare for, create, and deal with, the future, while managing to effectively deny that it is happening at all?

Futurists, artists and edge-culturists like to take credit for this. They like to pretend that they are the lonely, brave guardians of the species who deal with the “real” future and pre-digest it for the rest of us.

But this explanation falls apart with just a little poking. It turns out that the cultural edge is just as frozen in time as the mainstream. It is just frozen in a different part of the time theater, populated by people who seek more stimulation than the mainstream, and draw on imagined futures to feed their cravings rather than inform actual future-manufacturing.

The two beaten-to-death ways of understanding this phenomenon are due to McLuhan (“We look at the present through a rear-view mirror. We march backwards into the future.”) and William Gibson (“The future is already here; it is just unevenly distributed.”)

Both framing perspectives have serious limitations that I will get to. What is missing in both needs a name, so I’ll call the “familiar sense of a static, continuous present” a Manufactured Normalcy Field. For the rest of this post, I’ll refer to this as the Field for short.

So we can divide the future into two useful pieces: things coming at us that have been integrated into the Field, and things that have not. The integration kicks in at some level of ubiquity. Gibson got that part right.

Let’s call the crossing of the Field threshold by a piece of futuristic technology normalization (not to be confused with the postmodernist sense of the term, but related to the mathematical sense). Normalization involves incorporation of a piece of technological novelty into larger conceptual metaphors built out of familiar experiences.

A simple example is commercial air travel.

The Example of Air Travel

A great deal of effort goes into making sure passengers never realize just how unnatural their state of motion is, on a commercial airplane. Climb rates, bank angles and acceleration profiles are maintained within strict limits. Back in the day, I used to do homework problems to calculate these limits.
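Those homework problems rest on a standard piece of flight mechanics: in a level, coordinated turn the load factor felt by passengers is n = 1/cos(bank angle). A minimal sketch (the specific bank-angle figures used here are typical airline operating values, assumed for illustration, not taken from the text):

```python
import math

def load_factor(bank_deg: float) -> float:
    """Load factor n (in units of g) for a level, coordinated turn: n = 1 / cos(bank)."""
    return 1.0 / math.cos(math.radians(bank_deg))

# Airliners typically stay below roughly 25-30 degrees of bank in normal
# operations, so the extra load on passengers is barely noticeable; a
# fighter holding a 60-degree bank pulls a full 2 g.
for bank in (25, 30, 60):
    print(f"{bank:>2} deg bank -> {load_factor(bank):.2f} g")
```

At the gentle bank angles airliners use, the formula yields only about a tenth of a g above normal weight, which is precisely why turns go unnoticed from seat 23C.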

Airline passengers don’t fly. They travel in a manufactured normalcy field. Space travel is not yet common enough, so there is no manufactured normalcy field for it.

When you are sitting on a typical modern jetliner, you are traveling at 500 mph in an aluminum tube that is actually capable of some pretty scary acrobatics. Including generating brief periods of zero-g.

Yet a typical air traveler never experiences anything that one of our ancestors could not experience on a fast chariot or a boat.

Air travel is manufactured normalcy. If you ever truly experience what modern air travel can do, chances are, the experience will be framed as either a bit of entertainment (“fighter pilot for a day!” which you will understand as “expensive roller-coaster”) or a visit to an alien-specialist land (American aerospace engineering students who participate in NASA summer camps often get to ride on the “vomit comet,” modified Boeing 727s that fly the zero-g training missions).

This means that even though air travel is now a hundred years old, it hasn’t actually “arrived” psychologically. A full appreciation of what air travel is has been kept from the general population through manufactured normalcy.

All we’re left with is out-of-context data that we are not equipped to really understand in any deep way (“Oh, it used to take months to sail from India to the US in the seventeenth century, and now it takes a 17 hour flight, how interesting.”)

Think about the small fraction of humanity who have actually experienced air travel qua air travel, as a mode of transport distinct from older ones. These include fighter pilots, astronauts and the few air travelers who have been part of a serious emergency that forced (for instance) an airliner to lose 10,000 feet of altitude in a few seconds.

Of course, manufactured normalcy is never quite perfect (passengers on the Concorde could see the earth’s curvature for instance), but the point is, it is good enough that behaviorally, we do not experience the censored future. We don’t have to learn the future in any significant way (what exactly have you “learned” about air travel that is not a fairly trivial port of train-travel behavior?)

So the way the “future” of air travel in 1900 actually arrived was the following:

  • A specialized future arrived for a subset who were trained and equipped with new mental models to comprehend it in the fullest sense, but in a narrowly instrumental rather than appreciative way. A fighter pilot does not necessarily experience flight the way a bird does.
  • The vast majority started experiencing a manufactured normalcy, via McLuhan-esque extension of existing media.
  • Occasionally, the manufactured normalcy broke down for a few people by accident, who were then exposed to the “future” without being equipped to handle it.

Air travel is also a convenient metaphor for the idea of existential nausea I’ll get to. If you experience air travel in its true form and are not prepared for it by nature and nurture, you will throw up.

The Future Arrives via Specialization and Metaphor Expansion

So this is a very different way to understand the future: it doesn’t arrive in a temporal sense. It arrives mainly via social fragmentation. Specialization is how the future arrives.

And in many cases, arrival-via-specialization means psychological non-arrival. Not every element of the future brings with it a visceral human experience that at least a subset can encounter. There are no “pilots” in the arrival of cheap gene sequencing, for instance. At least not yet. When you can pay to grow a tail, that might change.

There is a subset of humanity that routinely does DNA sequencing and similar things every day, but if the genomic future has arrived for them, it has arrived as a clean, purely cerebral-instrumental experience, transformed into a new kind of symbol-manipulation and equipment-operation expertise.

Arrival-via-specialization requires potential specialists. Presumably, humans with extra high tolerance for g-forces have always existed, and technology began selecting for that trait once airplanes were invented. This suggests that only those futures arrive for which there is human capacity to cope. This conclusion is not true, because a future can arrive before humans figure out whether they have the ability to cope. For instance, the widespread problem of obesity suggests that food-abundance arrived before we figured out that most of us cannot cope. And this is one piece of the future that cannot be relegated to specialists. Others cannot eat for you, even though others can fly planes for you.

So what about elements of the future that arrive relatively successfully for everybody, like cellphones? Here, the idea I called the Milo Criterion kicks in: successful products are precisely those that do not attempt to move user experiences significantly, even if the underlying technology has shifted radically.  In fact the whole point of user experience design is to manufacture the necessary normalcy for a product to succeed and get integrated into the Field. In this sense user experience design is reductive with respect to technological potential.

So for this bucket of experiencing the future, what we get is a Darwinian weeding out of those manifestations of the future that break the continuity of technological experience. So things like Google Wave fail.  Just because something is technically feasible does not mean it can be psychologically normalized into the Field.

The Web arrived via the document metaphor. Despite the rise of the stream metaphor for conceptualizing the Web architecturally, the user-experience metaphor is still descended from the document.

The smartphone, which I understand conceptually these days via a pacifier metaphor, is nothing like a phone. Voice is just one clunky feature grandfathered into a handheld computer that is engineered to loosely resemble its nominal ancestor.

The phone in turn was a gradual morphing of things like speaking tubes. This line of descent has an element of conscious design, so technological genealogy is not as deterministic as biological genealogy.

The smartphone could have developed via metaphoric descent from the hand-held calculator; “Oh, I can now talk to people on my calculator” would have been a fairly natural way to understand it. That it was the phone rather than the calculator is probably partly due to path-dependency effects and partly due to the greater ubiquity of phones in mainstream life.

What Century Do We Actually Live In?

I haven’t done a careful analysis, but my rough, back-of-the-napkin working out of the implications of these ideas suggests that we are all living, in user-experience terms, in some thoroughly mangled, overloaded, stretched and precarious version of the 15th century that is just good enough to withstand casual scrutiny. I’ll qualify this a bit in a minute, but stay with me here.

What about edge-culturists who think they are more alive to the real oncoming future?

I am convinced that they are frozen in time too. The edge today looks strangely similar to the edge in any previous century. It is defined by reactionary musical and sartorial tastes and by being a little more outrageous than everybody else in challenging the prevailing culture of manners. Edge-dwelling is a social rather than technological phenomenon. If it reveals anything about technology or the future, it is mostly by accident.

Art occasionally rises to the challenge of cracking open a window onto the actual present, but mostly restricts itself to creating dissonance in the mainstream’s view of the imagined present, a relative rather than absolute dialectic.

Edge-culturists end up living lives that are continuously repeated rehearsal loops for a future that never actually arrives.  They do experience a version of the future a little earlier than others, but the mechanisms they need to resort to are so cumbersome that what they actually experience is the mechanisms rather than the future as it will eventually be lived.

For instance, the Behemoth, a futuristic bicycle built by Steven Roberts in 1991, had many features that have today eventually arrived for all via the iPhone. So in a sense, Roberts didn’t really experience the future ahead of us, because what shapes our experience of universal mobile communication definitely has nothing to do with a bicycle and a lot to do with pacifiers (I don’t think Roberts had a pacifier in the Behemoth).

At a more human level, I find that I am unable to relate to people who are deeply into any sort of cyberculture or other future-obsessed edge zone. There is a certain extreme banality to my thoughts when I think about the future. Futurists as a subculture seem to organize their lives as future-experience theaters. These theaters are perhaps entertaining and interesting in their own right, as a sort of performance art, but are not of much interest or value to people who are interested in the future in the form it might arrive in, for all.

It is easy to make the distinction explicit. Most futurists are interested in the future beyond the Field. I am primarily interested in the future once it enters the Field, and the process by which it gets integrated into it. This is also where the future turns into money, so perhaps my motivations are less intellectual than they are narrowly mercenary.  This is also a more complicated way of making a point made by several marketers: technology only becomes interesting once it becomes technically boring. Technological futurists are pre-Fieldists. Marketing futurists are post-Fieldists.

This also explains why so few futurists make any money. They are attracted to exactly those parts of the future that are worth very little. They find visions of changed human behavior stimulating. Technological change serves as a basis for constructing aspirational visions of changed humanity. Unfortunately, technological change actually arrives in ways that leave human behavior minimally altered. 

Engineering is about finding excitement by figuring out how human behavior could change. Marketing is about finding money by making sure it doesn’t. The future arrives along a least-cognitive-effort path.

This suggests a different, subtler reading of Gibson’s unevenly-distributed line.

It isn’t that what is patchily distributed today will become widespread tomorrow. The mainstream never ends up looking like the edge of today. Not even close. The mainstream seeks placidity while the edge seeks stimulation.

Instead, what is unevenly distributed are isolated windows into the un-normalized future that exist as weak spots in the Field. When the windows start to become larger and more common, economics kicks in and the Field maintenance industry quickly moves to create specialists, codified knowledge and normalcy-preserving design patterns.

Time is a meaningless organizing variable here. Is gene-hacking more or less futuristic than pod-cities or bionic chips?

The future is simply a landscape defined by two natural (and non-temporal) boundaries. One separates the currently infeasible from the feasible (hyperspatial travel is unfortunately infeasible), and the other separates the normalized from the un-normalized. The Field is manufactured out of the feasible-and-normalized. We call it the present, but it is not the same as the temporal concept. In fact, the labeling of the Field as the ‘present’ is itself part of the manufactured normalcy. The labeling serves to hide a complex construction process underneath an apparently familiar label that most of us think we experience but don’t really (as generations of meditation teachers exhorting us to ‘live in the present’ try to get across; they mostly fail because their sense of time has been largely hijacked by a cultural process).

What gets normalized first has very little to do with what is easier, and a lot to do with what is more attractive economically and politically. Humans have achieved some fantastic things like space travel. They have even done things initially thought to be infeasible (like heavier-than-air flight) but other parts of a very accessible future lie beyond the Manufactured Normalcy Field, seemingly beyond the reach of economic feasibility forever.  As the grumpy old man in an old Reader’s Digest joke grumbled, “We can put a man on the moon, but we cannot get the jelly into the exact center of a jelly doughnut.”

The future is a stream of bug reports in the normalcy-maintenance software that keeps getting patched, maintaining a hackstable present Field.

Field Elasticity and Attenuation

A basic objection to my account of what you could call the “futurism dialectic” is that 2012 looks nothing like the fifteenth century, as we understand it today, through our best reconstructions.

My answer to that objection is simple: as everyday experiences get mangled by layer after layer of metaphoric back-referencing, these metaphors get reified into a sort of atemporal, non-physical realm of abstract experience-primitives.

These are sort of like Platonic primitives, except that they are reified patterns of behavior, understood with reference to a manufactured perception of reality. The Field does evolve in time, but this evolution is not a delayed version of “real” change or even related to it. In fact movement is a bad way to understand how the Field transforms. Its dynamic nature is best understood as a kind of stretching. The Field stretches to accommodate the future, rather than moving to cover it.

It stretches in its own design space: that of ever-expanding, reifying, conceptual metaphor. Expansion as a basic framing suggests an entirely different set of risks and concerns. We needn’t worry about acceleration. We need to worry about attenuation. We need not worry about not being able to “keep up” with a present that moves faster. We need to worry about the Field expanding to a breaking point and popping, like an over-inflated balloon. We need not worry about computers getting ever faster. We need to worry about the document metaphor breaking suddenly, leaving us unable to comprehend the Internet.

Dating the “planetary UX” to the fifteenth century is something like chronological anchoring of the genealogy of extant metaphors to the nearest historical point where some recognizable physical basis exists.  The 15th century is sort of the Garden of Eden of the modern experience of technology. It represents the point where our current balloon started to get inflated.

When we think of differences between historical periods, we tend to focus on the most superficial of human differences that have very little coupling to technological progress.

Quick, imagine the fifteenth century. You’re thinking of people in funny pants and hats, right? (If you’re of European descent, that is; mutatis mutandis if you are not.) Perhaps you are thinking of dimensions of social experience like racial diversity and gender roles.

Think about how trivial and inconsequential changes on those fronts are, compared to the changes on the technological front. We’ve landed on the moon, we screw around with our genes, we routinely fly at 30,000 feet at 500 mph. You can repeat those words a thousand times and you still won’t be able to appreciate the magnitude of the transformation the way you can appreciate the magnitude of a radical social change (a Black man is president of the United States!).

If I am still not getting through to you, imagine having a conversation over time-phone with someone living in 3000 BC. Assume there’s a Babel fish in the link. Which of these concepts do you think would be easiest to get across?

  1. In our time, women are considered the equal of men in many parts of the world
  2. In our time, a Black man is the most powerful man in the world
  3. In our time, we can sequence our genes
  4. In our time, we can send pictures of what we see to our friends around the world instantly

Even if the 3000 BC guy gets some vague, magic-based sense of what item 4 means, he will have no comprehension of the things in our mental models behind that statement (Facebook, Instagram, the Internet, wireless radio technology). Item 3 will not be translatable at all.

But this does not mean that he does not understand your present. It means you do not understand your own present in any meaningful way. You are merely able to function within it.

Appreciative versus Instrumental Comprehension

If your understanding of the present were a coherent understanding and appreciation of your reality, you would be able to communicate it. I am going to borrow terms from John Friedman and distinguish between two sorts of conceptual metaphors we use to comprehend present reality: appreciative and instrumental.

Instrumental (what Friedman misleadingly called manipulative) conceptual metaphors are basic UX metaphors like “scrolling” web pages, or the metaphor of the “keypad” on a phone. Appreciative conceptual metaphors help us understand present realities in terms of their fundamental dynamics. So my use of the metaphor “smartphones are pacifiers” (it looks like a figurative metaphor, but once you get used to it, you find that it has the natural depth of a classic Lakoff conceptual metaphor) is an appreciative conceptual metaphor.

Instrumental conceptual metaphors allow us to function. Appreciative ones allow us to make sense of our lives and communicate such understanding.

So our failure to communicate the idea of Instagram to somebody in 3000 BC is due to an atemporal and asymmetric incomprehension: we possess good instrumental metaphors but poor appreciative ones.

So this failure has less to do with Arthur C. Clarke’s famous assertion that a sufficiently advanced technology will seem like magic to those from more primitive eras, and more to do with the fact that the Field actively prevents us from ever understanding our own present on its own terms.  We manage to function and comprehend reality in instrumental ways while falling behind in comprehending it in appreciative ways.

So my update to Clarke would be this: any sufficiently advanced technology will seem like magic to all humans at all times. Some will merely live within a Field that allows them to function within specific advanced technology environments.

Take item 4 for instance. After all, it is Instagram, a reference to a telegram. We understand Facebook in terms of school year-books. It is exactly this sort of pattern of purely instrumental comprehension that leads to the plausibility of certain types of Internet hoaxes, like the one that did the rounds recently about Abraham Lincoln having patented a version of the Facebook idea.

The fact that the core idea of Facebook can be translated to the language of Abe’s world of newspapers suggests that we are papering over (I had to, sorry) complicated realities with surfaces we can understand. The alternative conclusion is silly (that the technology underlying Facebook is not really more expressive than the one underlying newspapers).

Facebook is not a Yearbook. It is a few warehouse-sized buildings containing racks and racks of electronic hardware sheets, each containing etched little slivers of silicon at their core. Each of those little slivers contains more intricacy than all the jewelry designers in history together managed to put into all the earrings they ever made. These warehouses are connected via radio and optic-fiber links to….

Oh well, forget it. It’s a frikkin’ Yearbook that contains everybody. That’s enough for us to deal with it, even if we cannot explain what we’re doing or why to Mr. 3000 BC.

The Always-Unreal

Have you ever wondered why Alvin Toffler’s writings seem so strange today? Intellectually you can recognize that he saw a lot of things coming. But somehow, he imagined the future in future-unfamiliar terms. So it appears strange to us. Because we are experiencing a lot of what he saw coming, translated into terms that would actually have been completely familiar to him.

His writings seem unreal partly because they are impoverished imaginings of things that did not exist back then, but also partly because his writing seems to be informed by the idea that the future would define itself. He speaks of future-concepts like (say) modular housing in terms that make sense with respect to those concepts.

When the future actually arrived, in the form of couchsurfing and Airbnb, it arrived translated into a crazed-familiarity. Toffler sort of got the basic idea that mobility would change our sense of home. His failure was not in failing to predict how housing might evolve. His failure was in failing to predict that we would comprehend it in terms of “Bed and Breakfast” metaphors.

This is not an indictment of Toffler’s skill as a futurist, but of the very methods of futurism. We build conceptual models of the world as it exists today, posit laws of transformation and change, simulate possible futures, and cherry-pick interesting and likely-sounding elements that appear robustly across many simulations and appear feasible.

And then we stop. We do not transform the end-state conceptual models into the behavioral terms we use to actually engage and understand reality-in-use, as opposed to reality-in-contemplation. We forget to do the most important part of a futurist prediction: predicting how user experience might evolve to normalize the future-unfamiliar.

Something similar happens with even the best of science fiction.  There is a strangeness to the imagining that seems missing when the imagined futures finally arrive, pre-processed into the familiar.

But here, something slightly different plays out, because the future is presented in the context of imaginary human characters facing up to timeless Campbellian human challenges. So we have characters living out lives involving very strange behaviors in strange landscapes, wearing strange clothes, and so forth. This is what makes science fiction science fiction after all. George Lucas’ space opera is interesting precisely because it is not set in the Wild West or Mt. Olympus.

We turn imagined behavioral differences that the future might bring into entertainment, but when it actually arrives, we make sure the behavioral differences are minimized. The Field creates a suspension of potential disbelief.

So both futurism and science fiction are trapped in an always-unreal strange land that must always exist at a certain remove from the manufactured-to-be-familiar present. Much of present-moment science fiction and fantasy is in fact forced into parallel universe territory not because there are deep philosophical counterfactuals involved (a lot of Harry Potter magic is very functionally replicable by us Muggles) but because it would lose its capacity to stimulate. Do you really want to read about a newspaper made of flexible e-ink that plays black-and-white movies over WiFi? That sounds like a bad startup pitch rather than a good fantasy novel.

The Matrix was something of an interesting triumph in this sense, and in a way smarter than one of its inspirations, Neuromancer, because it made Gibson’s cyberspace coincident with a temporally frozen reality-simulacrum.

But it did not go far enough. The world of 1997 (or whenever the Matrix decided to hit ‘Pause’) was itself never an experienced reality.

1997 never happened. Neither did 1500 in a way. What we did have was different stretched states of the Manufactured Normalcy Field in 1500 and 1997. If the Matrix were to happen, it would have to actually keep that stretching going.


There is one element of the future that does arrive on schedule, uncensored. This is its emotional quality. The pace of change is accelerating and we experience this as Field-stretching anxiety.

But emotions being what they are, we cannot separate future anxiety from other forms of anxiety. Are you upset today because your boss yelled at you or because subtle cues made the accelerating pace of change leak into your life as a tear in the Field?

Increased anxiety is only one dimension of how we experience change. Another dimension is a constant sense of crisis (which has, incidentally, always prevailed in history).

A third dimension is a constant feeling of chaos held at bay (another constant in history), just beyond the firewall of everyday routine (the Field is everyday routine).

Sometimes we experience the future via a basic individual-level “it won’t happen to me” normalcy bias. Things like SARS or dying in a plane crash are uncomprehended future-things (remember, you live in a manufactured reality that has been stretching since the fifteenth century) that are nominally in our present, but haven’t penetrated the Field for most of us. Most of us substitute probability for time in such cases. As time progresses, the long tail of the unexperienced future grows fatter. A lot more can happen to us in 2012 than in 1500, but we try to ensure that very little does happen.

The uncertainty of the future is about this long tail of waiting events that the Field hasn’t yet digested, but we know exists out there, as a space where Bad Things Happen to People Like Me but Never to Me.

In a way, when we ask, “is there a sustainable future?” we are not really asking about fossil fuels or feeding 9 billion people. We are asking, “can the Manufactured Normalcy Field absorb such and such changes?”

We aren’t really tied to specific elements of today’s lifestyles. We are definitely open to change. But only change that comes to us via the Field. We’ve adapted to the idea of people cutting open our bodies, stopping our hearts and pumping our blood through machines while they cut us up. The Field has digested those realities. Various sorts of existential anesthetics are an important part of how the Field is manufactured and maintained.

Our sense of impending doom or extraordinary potential has to do with the perceived fragility or robustness of the Field.

It is possible to slide into a sort of technological solipsism here and declare that there is no reality; that only the Field exists. Many postmodernists do exactly that.

Except that history repeatedly proves them wrong. The Field is distinct from reality. It can and does break down a couple of times in every human lifetime. We’re coming off a very long period — since World War II — of Field stability. Except for a few poor schmucks in places like Vietnam, the Field has been precariously preserved for most of us.

When larger global Fields break, we experience “dark” ages. We literally cannot process change at all. We grope, waiting for an age when it will all make sense again.

So we could be entering a Dark Age right now, because most of us don’t experience a global Field anymore. We live in tiny personal fields. We can only connect socially with people whose little-f fields are similar to ours.  When individual fields also start popping, psychic chaos will start to loom.

The scary possibility in the near future is not that we will see another radical break in the Field, but a permanent collapse of all fields, big and small.

The result will be a state of constant psychological warfare between the present and the future, where reality changes far too fast for either a global Field or a personal one to keep up. Where adaptation-by-specialization turns into a crazed, continuous reinvention of oneself for survival. Where the reinvention is sufficient to sustain existence financially, but not sufficient to maintain continuity of present-experience.  Instrumental metaphors will persist while appreciative ones will collapse entirely.

The result will be a world population with a large majority of people on the edge of madness, somehow functioning in a haze where past, present and future form a chaotic soup (have you checked out your Facebook feed lately?) of drunken perspective shifts.

This is already starting to happen. Instead of a newspaper feeding us daily doses of a shared Field, we get a nauseating mix of news from forgotten classmates, slogan-placards about issues trivial and grave, revisionist histories coming at us via a million political voices, the future as a patchwork quilt of incoherent glimpses, all mixed in with pictures of cats doing improbable things.

The waning Field, still coming at us through weakening media like television, seems increasingly like a surreal zone of Wonderland madness.

We aren’t being hit by Future Shock. We are going to be hit by Future Nausea.  You’re not going to be knocked out cold. You’re just going to throw up in some existential sense of the word. I’d like to prepare. I wish some science fiction writers would write a few nauseating stories.

Welcome to the Future Nauseous.

For the record, I haven’t read Sartre’s novel ‘Nausea.’ From Wikipedia, it seems vaguely related to my use of the term. I might read it. If somebody has read it, please help connect some dots here.

About Venkatesh Rao

Venkat is the founder and editor-in-chief of ribbonfarm. Follow him on Twitter


  1. It seems that your imminent doom-and-gloom scenario might play out not with the fields collapsing on *everyone*, but rather with the fields collapsing on the vast majority of people. Once this happens, society as we know it probably descends into chaos.

    After all, once you lose touch with a workable model of the world and your place in it, how can you begin to support yourself? There will always be a handful of Jobses, Bezoses, and Andreessens who can wrap their heads around our increasingly weird technology and model a coherent world view that lets them thrive. But the companies they run will employ fewer and fewer people as more and more tasks are outsourced to computers. You’ve recently written about how Facebook is a teeny, tiny company in terms of the people it employs relative to its impact on culture. And Amazon is likewise about to start firing workers at its warehouses and replacing them with fulfillment robots. So while Bezos and his lieutenants are safe, what about the rest of society? What happens when the majority of people can no longer model a coherent view of the world, and their place in that world?

    You then end up with people I’m seeing more and more of: people who have lost their work ethic because they no longer see a place for themselves in the world—people seeking endless diversions and vacations since they’ve lost any hope or desire to contribute in a meaningful way.

    • I don’t have answers, but I’ll say one thing: I don’t think the successful survivors/thrivers actually understand what’s going on either. They just know how to make money off it. It’s an uncomprehending specialist skill like any other, just one that leads to a better life than most.

      Understanding reality and thriving in it are only weakly correlated. Some sorts of psychosis might actually help you thrive.

    • Aaron Davies says:

      Have you read Manna?

      • Paula Thornton (@rotkapchen) says:

        “people who have lost their work ethic because they no longer see a place for themselves in the world” people replaced by technology?

        Here Venkat has the upper hand again. This is the exact same pattern my great-grandfathers experienced: human cogs in the ‘machine’ of collieries during the coal-mining era. My father was ‘saved’ from such a destiny because of WWII and the after-effects of investing in the military. My grandfather, later stripped of his daily responsibilities (in this case due to age), became a professional alcoholic (although, as history might show, he was already one in order to survive his grueling existence).

        My mother, an ‘operator’ for AT&T, adept at switchboards was literally a human router. Fortunately she had motherhood to interrupt her ‘meaning’ in life.

        It’s really a case of personal ‘meaning’. The problem is when we supplant our personal meaning with contrived, artificial professional meaning. The latter is not sustainable, regardless of any changes in the social, temporal Field.

  2. The normalization process might take place at several levels as the specialized experts might themselves be in a smaller field. If you read Kuhn (a short summary here http://www.des.emory.edu/mfp/kuhnsyn.html), and substitute paradigm for field, normal science for field expansion, and revolutionary science/paradigm shift for field popping, you have an analogous picture. From your description, it would seem that the field popping happens much more rarely than a paradigm shift – which, though overused as a word, does happen much more often. Not completely sure if field popping is extremely rare. But, in science it might be more frequent because there is a conscious, directed effort to stretch the paradigm as far as possible.

  3. Mikael Suomela says:

    If I were to take a historical and eurocentric view, I’d say that societies respond to ruptures in and of reality by war and by forming new institutions. The Great Depression and the World War II that resulted seem to be an example of that – should it be so perceived. Whenever technological advance displaces a great number of people, social unrest seems to follow.

    Personal information bubbles might have existed in history, too. Marie Antoinette’s quip “Let them eat cake” at a time of famine is obviously an arrogant and ignorant remark for an elite member of society to make.

    There will probably be some counterforces to increasing isolation. I might also see the internet as an accelerating technological possibility for people to congregate and form clubs. Some people might even go further and try to make a more sweeping agenda on a global scale. You’d be amazed how many non-governmental organizations have international co-operation – how many non-profits try to weave new social fabric in place of the fabric that is being torn.

    Although I’m not altogether sure whether your essay was about the increasing fragmentation and isolation of humans. The mass media made a very convenient picture of reality from the end of the 19th century up until now. We all had an idea what the world looked like, even though in the end the lens was very, very American.

    Still, I identify as a European, a Nordic, and a host of other collective things. This is how I think I belong. In my identity I am, apart from being an individual, part of a group. This model has shown a resilience of eons, even in times of great upheaval and strife. The damage that WWII did to Europe was considerable – yet if you look at, say, the Estonians, who had their reality badly broken by the Soviets, they are now coming back with an ever tighter Estonian identity.

    Reality at a cultural level is also made by the language that we speak. I’m not a native English speaker (my first language is Finnish). The Finnish language has a very different feel to it than English. English is artificial to me, and even in my striving it does not cease to feel so. I try not to generalize my experience, but I think that my experience is not that rare. The language that we use is important, and the striving to be understood by most people is important as well. There is Finnish that would be unintelligible to me (say, very technical law and contract language); I need an interpreter for that. A common reality springs from common understandings, and there I can see the rupture in reality that I think you describe in this essay.

  4. Would you say that the historical “Dark Age” (5th-15th century) arose from such a Field popping? It seems like you are suggesting as much, since you locate the re-emergence of the modern Field at approximately the end of that Dark Age.

    In that case, would those living through the Dark Ages have experienced a sort of 1000-year nausea? Would their sensation be relevant to our future experience?

    • Hardassi says:

      Barbara Tuchman’s “A Distant Mirror” does a good job of conveying the profound differences between us in the current field and those in 14th-century Europe. The differences are far greater than between our future selves and the alien species in science fiction. Thus their sensations are not relevant to our future, but a sense of the profound difference may serve to waken us to how huge the rift can be.

      • May I suggest the late John H. Rowe’s essay on “The Renaissance Foundations of Anthropology,” and this comment upon it by Dr. John W. Bennett. Dr. Rowe’s point is that while Europeans were having their disruptive encounters overseas with humans not mentioned in the Bible, they were also discovering, via the newly-available Classical texts then in print, just how very different their own ancestors, the Romans, Greeks, Celts and others, were to themselves.


  5. This fits in a mental meme I’ve been carrying around for a while — for all our advances, we’re really no different than people from Rome, ca. 100 AD. Sure, we move faster and communicate faster, but nothing in our day-to-day lives is really that far different from the life of Aurelius. No slaves, agreed, but we do have very poor people working as wage-slaves. Different, and yet not. Maybe the 15th century is a better model, since many of Rome’s evils are gone; however, life is still lived in the same models.

    • In his later writing, Philip K. Dick toyed with the idea that this is literally true, that the actual year is 70AD and everything around us (2012) is a “Black Iron Prison” created by the bad guys to control us. Venkat is proposing sort of the opposite here, that the year is 2012 but we pretend it’s 1412 in some ways to keep our sanity.

  6. For the record, I haven’t read Sartre’s novel ‘Nausea.’ From Wikipedia, it seems vaguely related to my use of the term. I might read it. If somebody has read it, please help connect some dots here.

    Started it, didn’t finish it. Did not find it rewarding. As far as I can tell, “nausea” in the book meant something very similar to mindfulness in eastern meditation traditions. Basically a non-rational, liminal mode of consciousness in which the narrator ceases to recognize objects as themselves and sees them purely as gross corporeal masses, as one might imagine a non-verbal animal to experience.

    I’m starting to see how Sartre’s nausea might be the inverse of the field…in hindsight, I think the book was about social alienation, and more specifically how alienation is staved off by maintaining a web of social conventions and ties to other human beings. So now I’m seeing this as another field mediating social normalcy rather than technological normalcy, and “nausea” in the book is the disorientation involved in “falling out” of the social Field.

    Maybe I’ll give it another read one of these days. I hadn’t seen this connection when I initially read it and it may make the experience more worthwhile.

    • raycote says:

      “social normalcy rather than technological normalcy”

      Could we not see these as inextricably intertwined ?

      • Absolutely. I drew the distinction only because there’s pretty much nothing explicit about technology in Nausea. The relationship between the two “Fields,” social and technological, would probably be a really interesting topic in its own right. And I suspect you’re correct that they’re not really distinct.

  7. Your “Fields” are Wittgenstein’s language games. The thick residue of habit is constituted by the rules on substantive content for what can be said. These rules are historically predictable, reliable. I reject your “Document Metaphor may pop” notions. Let’s review the laws of performative language as evolutionary laws. Facts regarding the cognitive capacities of those users of that language correspond to actual usage of that language by those language users.

    “A language is no less complex than the organism whose language it is.” — I want to say that the organization of “normalcy fields” follows from causally coherent laws, so I consider how the content of a species and the content of a language may be similarly determined by analogous rules. For instance, Spinoza held that Thought and Extension are attributes of Substance, each expressing the same thing, but through different modes. Archaic jargon aside, the mentality is the same: the laws by which extended things are causally linked are logically similar to the laws by which thinking things are causally linked.

    What is “normal” is what can be said.

  8. Christian Molick says:

    There are some common patterns to introductions to the Field. Bodybuilding was a freak show competition before it became a mainstream way to market protein powder to kids like Cartman. Small, basic robots were a freak show competition before they started vacuuming and scrubbing people’s floors. Really popular things like competitions and parties can be used as Trojan metaphors in order to change the freakish into the mainstream.

  9. Stephen S. Power says:

    A very thought-provoking article, although a few more instrumental metaphors would have been appreciated.

    I don’t agree with this paragraph, though, because it relies on the current normal: “So we could be entering a Dark Age right now, because most of us don’t experience a global Field anymore. We live in tiny personal fields. We can only connect socially with people whose little-f fields are similar to ours.”

    A global field only exists now thanks to the speed and cheapness of communications, starting with transatlantic cables. Were people in New Jersey reading about bus accidents in Bangalore in their newspapers 50 years ago? I doubt it. Now you can just follow @BangBus. For most of human history the globe was a very small space, bounded usually by where you needed or could afford to go. (Hell, I rarely go to the eastern side of the floor in the office where I work. And why should I ever go north of the office when the train is one block to the south?) In fact the article I was reading just prior to this (in an Australian paper about Paris, Maine!) noted that 100 years ago most second-gen Americans never met an actual European because there was no way to get to Europe easily. We’ve always lived in tiny personal fields. Now they’re just tiny in different ways. The Lodges still only talk to God, they just do so at e-chapel.

    • Aaron Davies says:

      Were people in New Jersey reading about bus accidents in Bangalore in their newspapers 50 years ago?

      Yes, yes they were.

      • Lincoln says:

        That link does not even remotely show that people were reading about bus accidents in remote parts of the world 50 years ago.

          • Arguably the one big normalcy field is an artifact of the 20th century, with its radios, televisions, and standardized news services. In Lippmann’s Public Opinion (1922) the tendency of small groups of people to have their own idiosyncratic views of the world was a problem that needed fixing by the emerging public relations industry and mass media.

          The 21st century reminds me a little of the 19th, when people got their news from a wide range of specialized or partisan sources, so social consensus was harder to come by. Visiting relatives over the holidays, it felt like visiting a parallel universe. We had not only different sets of facts, but we attributed wildly different significance to the facts that we could agree on.

          And I agree with the article that this has implications for, e.g., sustainable development. Just the other day I read somewhere about how mealworms are supposedly an efficient sort of protein. But will people eat them? Not if you call them “bugs.” You’d have to give people some other way of thinking (or rather, not thinking) about what they’re eating. There are many, many marketing-type problems like that around sustainable development.

  10. heteromeles says:

    You know, I can make a much, much simpler model, based on Darwinian evolution.

    Let me say it this way: More future memes are born than can survive. This leads to a survival of the fittest memes. The environment that selects for the fittest memes is the present, and memes spread through adoption by humans.

    Most disruptive memes fit Taleb’s definition of a black swan (especially the part about being unpredictable). Therefore, criticizing most inventors or futurists for not getting rich misses the point. If there was a way to predict which black swan memes would make their inventors wealthy, they wouldn’t be black swans, would they?

    The critical point to understand here is that I’m not just talking about memes for advanced technology as the future. Rather, I’m talking about the memes for all possible futures, including those of the doomsday preppers and those of the billions of poor and disenfranchised. If we go through something like Peak Oil, The Great Los Angeles Earthquake (let alone New Madrid 2.0) or a Carrington Event, the memes that will be favored by such a dramatically different environment will be radically different than the ones we hold today.

    That’s life.

    • While I like this succinct Darwinian account of meme propagation, especially the unpredictability (I am addicted to Taleb after reading The Black Swan), you don’t give Venkat credit for exploring the heightened sense of anxiety that this fosters in the present (what he is describing as nausea). Trying to be resilient and adaptive enough to survive whatever meme actually manifests itself provokes an almost visceral sense of “alertness” and stress akin to the fight-or-flight response of being hyped up on adrenaline with very little down time.

      Indeed, whilst reading this post, I found myself more anxious and was almost immediately prompted to recollect the frequent desire of Roman elites in the late Republic for otium or tranquilitas, usually achieved by retiring from the City to a rural estate or to the seaside south of Rome. Of course, this coping strategy was restricted to the then 1%ers. I think a large part of the future nausea that Venkat is suggesting will increase is rooted in the fact that many ordinary, educated folks no longer believe they will have adequate resources to adapt to future circumstances. A new smartphone may be excellent for updating my Facebook status (if I indeed had one and did that sort of thing — I don’t), but it will be much less useful than the possession and mastery of some low-tech tools and procedures, e.g. hoe-gardening in the backyard and canning in the kitchen, in the event some of the memes you list actually come about.

      The human animal seems to like a challenge, but he or she also wants to know what that challenge is. The unknown unknown is by definition incapable of being prepared for in a specific way. That’s why Taleb makes such a big deal about focusing on anti-fragility. The goal shouldn’t be to know the meme that’s coming; rather it should be to survive or thrive no matter what that meme is.

  11. “How, as a species, are we able to prepare for, create, and deal with, the future, while managing to effectively deny that it is happening at all?”

    You just answered your own question.

  12. On a more serious and actually responsive note…

    You make some interesting points, and it’s a fascinating way to think about the future. There are, however, some issues that I think it’s worth pointing out before hashing out the usefulness of your theory.

    First of all, whatever field we’re describing for time also has to take into account space. And not in a Newtonian sense, but in a political/state/cultural/economic sense. When we say the future is unevenly distributed, what we mean (I think, at the most prosaic level) is that some people are experiencing recent inventions more fully than others. That’s not just simple and prosaic, it’s frankly not very interesting. Some people got cars before others did. For some, the lack of good, modern, paved roads made even the getting of cars less useful. A future filled with “horseless carriages, unbound by rails of steel,” was “real” the moment the first car was invented. So what? If people in my [country] [economic slice] can’t afford a car or don’t have roads… I don’t experience “that future.” Does that make my present more of someone else’s past? Maybe… but then my field is as much about where/how I live as when.

    The second thing I’d point out is that field variations based on activities and environment vary in time for us regardless of technological futures; there are a variety of futures we imagine and then ingest or reject having to do with jobs, mates, living arrangements, etc. So my idea of my movement through past/present/future may have as little or less to do with the future as it does with when I first got laid, stopped smoking, moved to Ohio or lost a finger in a band-saw accident.

    And, in some cases, we can also see how early adopters of technology aren’t envied or even cared about. I was reading ebooks on a Palm Pilot in 1996. This is the response I received 97% of the time when that topic came up: “How can you read on such a little screen? I could never do that.” Now the same people are reading on their little screens. The adoption didn’t have to do with technology, but with the socialization of a set of tools and ideas about how we interact with technology. In 1996, there was no YouTube, no blogs, no Twitter, no texting, barely any email. So the idea of doing something different on a little computer was, well… not just odd… but unremarkably odd. Like someone who only eats blue food. You might think, “That’s strange,” but not, “That’s what I’d like to do,” or “There must be a fascinating reason behind that choice.”

    So while the field through which we view the future is interesting, I think it’s more warped by other gravities than technology.

  13. Depressing, slightly obnoxious, and very ‘engineering’, but still one of the most useful things I’ve read so far this year. This does a good job of explaining why I’m so thoroughly fixated on notions of the sublime, subjective and gonzo futures, and so on.

    It also reminded me a little of Adam Greenfield’s 2008 piece, ‘Thoughts for an eleventh September’.

    • Heeheehee. I can now reveal that my private test for the success of this post was purely its ability to annoy you. Yes, you specifically, Justin.

      I am secretly hoping to make some sufficiently sensitive type actually throw up.

      • Half-suspected as much. :)

        And the book that nails this sense best, perhaps, is John Brunner’s extraordinarily prescient Stand on Zanzibar (1968). If you haven’t already read it, add it to the pile; I think it’d amuse you.

        • Oh, wow, this is great.

          Stand On Zanzibar not only ranks as one of the greatest works of the past century, but immediately leaped to mind while reading this article. If I say that your post reads like one of Dr. Chad C. Mulligan’s riffs, I pay you the highest compliment. A fictional character in an imagined future could, through the power of poetry, nick the surface of the trajectory Brunner saw in the late 60’s.

          John Brunner’s vision of humanity hedged in between chaos and hope, with all sorts of answers but no guarantees or security, is instructive precisely because it’s a fiction spun out of the elements of the Tomorrowland future that the upheavals of the 1960’s railed against. Its vertiginous structure and pointed nods to McLuhan and his mentor Innis come as close as any commercially-published novel to the kaleidoscopic nature of present-perception in our times. The soupçon of conceit, the fact that Stand On Zanzibar is set in 2010, somehow carries the story up above irony into something haunting and poignant.

  14. Alexander Boland says:

    What about the rise of prescription drugs? How do Prozac and Adderall fit into the picture? I just can’t resist saying that a discussion of future nausea is incomplete without this part.

    • Definitely. Perhaps ADD can be reframed as being too focused, rather than too distractible. If you stare in the same direction, you will witness a nauseating churn. If you are able to follow one thought as it twists and turns through the distracting mess, we call that focus.

      A bit of a stretch :) But yeah, Prozac etc. definitely belong here somewhere.

  15. What would doing an honest job of living in the present look like?

    I just took an airplane flight– admittedly, there was some novelty value because it’s been well over a decade since I’ve been on a plane.

    I looked out the window– hey, there’s a lot of distance underneath me. And saw parts of the wing rearranging themselves on takeoff and landing. I felt that I was being hauled along by powerful engines. I’ll grant that I’d be more impressed by modern tech if jets did aerobatics– possibly more so if I saw them from the ground instead of watching jets hang in the air like special effects– but it’s not necessary to ignore tech.

    Maybe it’s that I’m 59 and a science fiction fan, but I notice that people have cell phones, and that Google is a marvel.

    I also notice when tech-savvy people still don’t remember what’s available– I was recently involved in a search for a cell phone charger which would have been made easier if anyone had thought to take a picture of the socket.

    • These days I routinely take pictures of where I park my car (either a pillar with a number in a parking lot or a street-crossing sign showing both street names). For street parking you can also drop a pin on Google Maps on your phone as a GPS marker.

      This sort of jury-rigged, opportunistic adaptation to the future is the raw material from which normalization is born. If enough people start doing it, somebody will come up with a normalizing narrative and codify the behavior, and enable it in simplified ways.

      Hmm… there’s something here. Maybe that’s the art of real future prediction. Spot an in-the-wild adaptation and figure out the normalizing narrative that might take root. Make a product for it and you have a startup.

      The photography-as-geo-memory might end up being called “landmarking” for instance, with a social app using image recognition to match your picture to others and infer a GPS pin. So now the old idea of “landmark” gets overloaded into an ad-hoc, just-in-time navigational aid.

  16. Semon Rezchikov says:

    I apologize for my lack of historical background, but:

    Why is the start-date of the current Field the 15th Century, rather than the 14th? Yes, yes, the beginning of the European Renaissance, etc. Are you putting the date there because that’s when the printing press was invented and so many of our conceptual metaphors stem from the printed book?

    • The printing press is one piece. A lot more happened. The discovery of sea routes to America and India (1492, 1498), the conquest of Constantinople by the Ottomans (1453), the expulsion of the last Muslim ruler of Granada. In the east, China sent out the Zheng He voyages, and India came decisively under Muslim rule (though the Mughals didn’t start till the 16th century).

      Basically, the ‘Age of Exploration’ began and the modern globalized world took root. There were big parts of the world where nothing particularly significant happened, but the point is the neural web of human connection was kinda woven.

      But there is of course an element of the arbitrary in any exercise of this sort. A less arbitrary model would actually trace metaphors back, at micro-level, to various genesis events.

      • Aaron Davies says:

        This reminds me, someone pointed out to me recently that Linus Torvalds’ initial announcement of Linux, Tim Berners-Lee’s initial announcement of the World Wide Web, and the collapse of the Soviet Union all occurred in the same month (August 1991). Perhaps we can call that the modern world’s 1492?

  17. Fran Arant says:


    Thanks for continuing to put out such interesting and well-thought-out observations; it’s disheartening to me sometimes how rare that is in anything I read on the internet (big shock?).

    I’m wondering if you’ve read John Taylor Gatto’s “The Underground History of American Education,” the whole of which is actually available online for free at: http://johntaylorgatto.com/chapters/index.htm

    To summarize a lot of information off the top of my head, and probably not doing his research or presentation justice: Gatto talks about how in the 19th century people discovered that they could run machines on coal. It started out with devices that saved human labor in coal mines themselves (water pumps to deal with flooding, then steam-powered coal carts, which supplanted child-powered ones) and evolved directly into railroads above ground. The railway corporation quickly became, by orders of magnitude, the most profitable and most powerful corporation in U.S. history.

    But the story is more about how business people realized that there was the potential to use coal to engage in industrial manufacturing. The problem, at the time, was that the U.S. was an incredibly entrepreneurial, well-educated society. Coal was available to everyone, and there was no reason that a society of small producers wouldn’t be able to compete against these new corporate business structures that were considering investing massive amounts of capital in their machines. This was the problem: the investment of massive amounts of capital. It was far too risky in a society that would be able to compete so effectively. This is also, to my understanding, part of the reason why the legal structure of the corporation became dominant: investors would not be liable financially for the debts of the business if it were to fail; the “corporation” would.

    There was also a significant proportion of the elite that was terrified of all the immigrants that were coming in waves after the Civil War. So, using philosophical ideas stretching back to Plato’s Republic and the life-long compulsory military training of the basically fascist Prussian Empire (which all of the Western world admired, since its army had defeated Napoleon), the United States’ compulsory education system was created, state by state over the course of decades, starting in Massachusetts.

    In our rhetoric today we say that compulsory education is a great thing because it ensures that the whole population has a certain level of base knowledge. The hole in this idea is that, at the time, the population of the United States was incredibly well educated without compulsory schooling, because it was (and I would say always is) in any human being’s best interest to be as knowledgeable about their world as possible. The real motivation of compulsory education at the time (and up through the present) was a massive experiment in social conditioning in which early industrialists created a social factory, the mechanisms of which they controlled, in order to mass-produce workers and consumers on a national scale and, doubly, to eliminate the population’s ability to compete with industrial corporations, with all the financial risk that entailed being eliminated as well.

    This dovetailed nicely with the social elite’s fear of immigrants, because it gave them a tool to forcibly seize children of immigrant communities, socially condition them to be “proper” Americans, and to break the ties of culture that would form if families were allowed to naturally raise their children. This is part of how compulsory education was designed by industrialists to standardize the human population. It was so effective that it spread across the world, and I don’t think that there is any part of the 1st world dialogue on understanding anything, especially anything that has arisen in the past 130 years (which is about how long compulsory education has been operating), that hasn’t been affected by (or isn’t based out of) the propaganda that was created to feed to the vast majority of the population.

Here are a couple of interesting quotes:

    “In 1882, fifth graders read these authors in their Appleton School Reader: William Shakespeare, Henry Thoreau, George Washington, Sir Walter Scott, Mark Twain, Benjamin Franklin, Oliver Wendell Holmes, John Bunyan, Daniel Webster, Samuel Johnson, Lewis Carroll, Thomas Jefferson, Ralph Waldo Emerson, and others like them.”

    “In 1995, a student teacher of fifth graders in Minneapolis wrote to the local newspaper, ‘I was told children are not to be expected to spell the following words correctly: back, big, call, came, can, day, did, dog, down, get, good, have, he, home, if, in, is, it, like, little, man, morning, mother, my, night, off, out, over, people, play, ran, said, saw, she, some, soon, their, them, there, time, two, too, up, us, very, water, we, went, where, when, will, would, etc. Is this nuts?'”

    “We want one class to have a liberal education. We want another class, a very much larger class of necessity, to forgo the privilege of a liberal education and fit themselves to perform specific difficult manual tasks.” — Woodrow Wilson

I believe that your “Manufactured Normalcy Field” might be symptomatic of a population that has been conditioned to have its ability to think dramatically eroded for the past 130 years. On the one hand I think this might be hard to swallow for some, because it requires people to accept that perhaps the U.S. government and its corporate sponsors don’t have the average person’s best interests at heart. Of course we are indoctrinated from a very young age to believe otherwise, but I think there is enough evidence available in the present of government and corporate corruption to ask oneself, “How could human beings have been so different in the past that this corruption wouldn’t have existed then as well?” The positive side of this realization is that the entire 1st world has been systematically dumbed down in order to be more controllable for more than a century. That means there is an enormous human potential waiting to be tapped in every single person’s natural-born intelligence, if we can just figure out how to model ourselves after real independent thinkers who used their minds to imagine and practically create their world (hint: look back at least five hundred years, if not farther, or look to people who are not part of the “1st world”). Then we can rid ourselves of all the beliefs being blasted at us through propaganda: that humans are innately dumb and impossible to trust (especially poor ones, of course), that some things are too “difficult” for normal people to understand, and that everyone needs to turn to an expert to learn how to think.

    Anyway, I think this article is fascinating and I think it’s very useful in thinking about our current culture. The only point I wanted to suggest was that perhaps there are large parts of our current culture that are artificially sustained and that there is an enormous potential out there for human beings to transcend the behavioral and intellectual limitations you’re describing. I also wanted to suggest John Taylor Gatto’s book to you because I think it’s full of really fascinating history that ties in very well to what you’ve been researching.

    Thanks again for sharing your thoughts!

    • This is a fascinating angle. I’ve covered the 19th century quite a bit lately, and touched on the education as labor-manufacturing angle, but hadn’t looked at it front-and-center. Thanks for the summary of the book. Definitely going on my reading pile, since it supports a related thesis I am developing about the current entrepreneurial wave.

      • Grant Gudgel says:

        I am very curious to hear your thinking on the entrepreneurial wave.

        Mind sharing your premise? Or at least any suggested reading from past posts or other sources on the topic?

        Just discovered ribbonfarm, really excellent work. Thanks for sharing!

        • It’s the ‘entrepreneurs are the new labor’ thesis I mentioned in passing in a recent post. Will be developing/blogging it more fully at some point.

          In the meantime, welcome aboard.

    • Aaron Davies says:

      […] the United States compulsory education system was created, state by state over the course of decades, starting in Massachusetts.

      Ah, Massachusetts, the native habitat of bad ideas…

      • Daniel Clee says:

        If you look at film production, for example, you may gain an idea to back-up what I suspect will be your angle here. Films are very expensive, and as ever, he who pays the piper calls the tune. Notwithstanding forced editorial changes to a script to suit the marketability of the project by having a known film star/ego attached (hence movies today are an endless rehashing of Campbell’s ‘Hero’), the producer as entrepreneur raises the money from an investor who demands repayment asap, and then 50% of all further returns. Indeed, the entrepreneur is the new labourer.

  18. V – how could you not have read “Nausea.”
My PhD is in Philosophy – specifically Sartre’s theory of the imagination – so I have wallowed in “Nausea,” as it were.
    What he’s driving at is very much at the core of your latest post. It’s the sense that whatever we’ve created to paper over our actual responses and engagements with the “outside world,” the reality is utterly contingent, arbitrary, without Big Picture purpose – in other words, we simply exist but without any meaning to back it up.
    “Absurdity” is the metaphor of which “Nausea” is the physical counterpart.
    Love your flight plan, existentially speaking.

• Looks like I’ll have to read it now.

      A friend commented online that Sartre’s nausea is a perspective on mindfulness, which seems right to me.

  19. Aaron Davies says:

    To (mis)quote the greatest renaissance man the world ever saw, “No matter when you go, there you are.”

  20. Aaron Davies says:

    3. In our time, we can sequence our genes[…] Item 3 will not be translatable at all.

“The gods make us as we are through spells, cast differently for each of us. We have learned to read the language of the gods, and we will be able to cast our own spells soon.”

How’s that?

  21. Wow. This is a stunning piece, thank you so much for sharing it. There is so much here that feels right, I think I’m going to be sick right now.

  22. I really enjoyed the thoughts in this article. If I could change one thing, I would translate it out of “academese,” which I find to be an ineffective writing style. Reading it somehow feels like eating a bowl of dirt and seeds and calling it “vegetable soup.”

What about death? The Field is designed, in part, to shield us from the reality, finality and horror of death. Hence, religious afterlives and modern euphemisms like “passing away.” But maybe the Field has been expanding. More and more, people are becoming areligious and more confused about their beliefs regarding an afterlife. Can this enter the Field?

  23. raycote says:

    I would frame the need to manufacture a social-coherence normalcy field as an existentially mandated biological survival strategy for a species that is totally reliant on collaborative interplay as its primary competitive strength.

Assuming that evolutionary changes happen at a pace far too slow to impact the effect discussed in this post.

Reality, past and present, for humans always was and presently is constrained within the visceral envelope of our perceptual, representational and inferential biological apparatus. It is the evolutionary rate at which these three cognitive elements can adapt that limits the speed with which technologies can translate into emergent new socially coherent and embodied realities/memes/paradigms/fields. Of the three, only our biological abstract-representational apparatus is by evolutionary design inherently targeted at taking on the job of rapidly reconfiguring our representational cross-mapping of reality in response to high-rise-time and high-variability environmental change. Thus the speed with which we can move to inhabit the future, without popping an overstretched metaphoric field along the way, is ultimately constrained by the rate at which we can discover/inculcate new, more organically effective metaphoric cultural memes/paradigms/fields, and do so in ways that are widely accessible to being embodied into the mass culture of us mere mortals.

The future is an organically complex emergent effect, so of course it can only be simulated as theatrical contrivance in the present.

    As for stretching some 15th century baseline social metaphor too far until it flips from being a tool into being an impediment, one could make the case for that cycle having repeated itself at least twice since then.

    – One could visualize the 15th century baseline social metaphor as the magic volitional power of the persona.
    – That social metaphor gives way during the scientific revolution to a new baseline social metaphor centred on linear chains of cause and effect.
– And now, propelled by the organically complex synchronizing-potential inherent in a network-effect based culture, that old linear cause-and-effect baseline social metaphor is beginning to morph into organic process literacy centred around collaboratively-programmable distributive-webs of socially synchronized interdependence.

The network-organizing-effect is just now going supernova, levelling-up to enter the arena of human social affairs. Our new network-effect-platform positions us, in a very real and practical sense, to generate a new social fabric based on living 3D webs of social synchronicity.

Denial is not an option
    We have opened PANDORA’s network-effect
    The existential realities of organic level complexity wait for no MAN
    There is no turning back
We are all-in whether we like it or not!

    to all our Gods
    from all our peoples
    please bestow blessings
    of optimal inertia dampening
    upon our emerging noosphere

From the liner notes of Andy Clark’s book
    “SUPERSIZING THE MIND – embodiment – action – and cognitive extension”

    “When historian Charles Weiner found pages of Nobel Prize-winning physicist Richard Feynman’s notes, he saw it as a “record” of Feynman’s work. Feynman himself, however, insisted that the notes were not a record but the work itself.

In Supersizing the Mind, Andy Clark argues that our thinking doesn’t happen only in our heads but that “certain forms of human cognizing include inextricable tangles of feedback, feed-forward and feed-around loops: loops that promiscuously criss-cross the boundaries of brain, body and world.” The pen and paper of Feynman’s thought are just such feedback loops, physical machinery that shape the flow of thought and enlarge the boundaries of mind.

    Supersizing the Mind offers both a tour of the emerging cognitive landscape and a sustained argument in favor of a conception of mind that is extended rather than “brain-bound.”

    The importance of this new perspective is profound. If our minds themselves can include aspects of our social and physical environments, then the kinds of social and physical environments we create can reconfigure our minds and our capacity for thought and reason.”
I think the concepts outlined in Andy’s book speak directly to the conceptual metaphors outlined in this post.

Andy Clark has made the book freely available online

24. The most potentially shocking example of your thesis, to my mind, is the increasing potential of genetic engineering and even cloning.

    We watch so-called science fiction films that almost universally depict these issues in dystopic, totalitarian-state terms. Then we leave these films, perhaps unaware that it’s now possible to manufacture life, clone life, and genetically modify life—not in any science-fictional universe, but at a laboratory in your city. Codes and ethical standards are being written to prevent abuses of these new technologies.

    And yet most of us have not processed (and are perhaps not capable of processing) the true ramifications of these technologies for our own humanness.

  25. I’ve thought often about the hypothetical conversations with past humans, and I think they go about like yours :)

    That part reminded me of the uneasiness I feel when faced with a request by my children to attend their career days at school. How does one explain to children one’s own work? Doctors, firemen, police, yes. Probably these days one could even get kids to understand if you’re a computer programmer of some sort. I’m not any of these things, and a large number of people I know aren’t either. We have, in essence, been rendered unable to explicate our own work to others. I suspect we don’t understand it ourselves, which seems to fit with what you’re saying.

    I suspect the best coping mechanism for those of us who really don’t have explicable jobs might be what Erik suggested above, seeking endless diversions.

  26. Sedicious says:

    As if to serve as illustration, the following article was published the same day as your post:

    The Floppy Disk means Save, and 14 other old people Icons that don’t make sense anymore

  27. Sedicious says:
28. This is a blinder! I love it because of how it encourages and integrates my biases… But still, good work!

    If this is true, then futurists will probably be pretty happy with this psychic chaos, because they have specifically enjoyed living in the points of breakdown of the field, and chatting to other weird people mentally occupying alternative universes.

    In other words, the alien integrative strategies they use will be less about prediction and more about immediate sense-perception, doing without the mundaneness field and riding the higher levels of stimulation.

    But isn’t this a normal view of futurists? That the future will be so wild as to lead to them or people even more futuristic than them being the only ones to handle it?

    So if we’re not careful, we just get back to “in the future, the future will win”, just distinguishing the timelike and spacelike “future”s. In fact, to make that even more impenetrable to someone who doesn’t already know what I mean; “in the future, the future will win, or something even more futurey”.

    There is an alternative explanation; that the present will ping back like a stretched band and form a new, closer horizon, and we’ll find that the dark ages weren’t, but rather were a different field, and that we are now in such a different one from the 15-20th centuries that we find them idiots. Perhaps we’ll have a “renaissance” of 4th century states of being, that’s actually nothing like the 4th century, but which nonetheless allows us to construct historical continuity independent of the era we most contrast with and find incomprehensible, our recent past.

    How did that previous horizon form anyway?

    • Oops, got the relation between centuries and years backwards, was thinking roughly of the 6th century, the 500s AD.

  29. This is the third time I read the last paragraph titled . It still makes me nauseous. I think I’ll try and read Cryptonomicon again. That made me nauseous too back in the day and I gave up. Hopefully all the yoga I practice will help some this time around.

30. I invite some writers to create some stories set in a “somewhat strange” future (i.e. a future that is different enough from our current world to be remotely exotic) and then have nothing special happen there. No conspiracies, no singularities, no end of the world, no great industrial wars, no conquest of the stars, no revolution. Just life muddling on for relatively average people. Maybe a bit of love, but with emphasis that it’s the kind of love (and sex) nearly everyone has in that future.

Emphasis on things the people of that future (preferably people who lived in the 20th century as well!) take for granted, things that we here in 2012 would find near magical: the tedium of consumer-level nanoreplicators and crowded, slummed space colonies, the banality of omnipresent AI, and the sheer superficiality of perfect virtual or immortality/body-reconstruction-driven orgies.

But without the usual tawdry cyberpunk clichés. The future not as an urban sprawl covering half a continent, or with hipsters carrying shoulder-slung 4.7mm caseless flechette coilgun pulse rifles – no, not much violence at all.

    The term I believe is Wei-wu-wei?


  31. DRTL
    Have to look at some cats.

  32. If you’d like an interesting example of the Manufactured Normalcy Field in action, consider this: we carry the sum of human knowledge and enterprise in our pockets, and most of us only use it to look at cute pictures of cats.

  33. Jan Dockx says:

    So … Maybe it is the Singularity that will reset the field?

  34. Daniel Clee says:

    (have you checked out your Facebook feed lately?)

    Funny you should say that, I have. A few months ago and the wind was blowing in the direction of doom, gloom and disaster, and then about a month ago, even before the Olympics, there seemed a general upswing. But yes, in all it’s a chaotic soup of “drunken perspective shifts”. And as my friends are a reflection of me, it’s reasonable for me to worry about myself and my friends. (Also, for fun, check my improbable cat… http://www.youtube.com/watch?v=8DUebR5ycJM .)

    Great article (although I would dispute your broad stroke as to why Star Wars works). Question: In the flavour of Joseph Campbell, in your opinion does the following appreciative metaphor have any value for the future, however you wish to define the ‘future’ – “The Hero must rediscover the love of playing simply for the love of playing”?

  35. Maleorderbride says:

This was an entertaining read, but you seem to have a few misconceptions about how meaning is made and how language operates. These misconceptions limit your essay to focusing solely on how the future is strategically normalized through metaphors. All language is always already normalized and understood through metaphors. By not understanding this fundament, you drastically overstate the importance of your observation in relation to future tech and misdirect the conversation away from the real topic—language.

    Language is not Platonic and never will be. We never “understand” something in isolation. We create categories of same/difference with other objects which we then use to understand. Language is always metaphorical. It is never literal.

For example, Nietzsche, in his essay “Lecture Notes on Rhetoric,” analyzes such simple words as “snake” and reveals that in Latin, serpens, means an image of a creature walking. Our language is utterly comprised of layers and layers of just such analogies and metaphors that have been naturalized so completely that we no longer even realize that they are metaphors. To say that we do not understand Facebook because we describe it through metaphor is instead to show that you do not understand language.

You have limited this understanding of how language and meaning operate to this single function of naturalizing the future. I am not disagreeing that this occurs, but rather I am disagreeing that this is something unique or special. By over-emphasizing your single example of the naturalizing effects of language you obscure the mechanism by which this happens. You then get into the whole madness, nausea, etc. daydream. That has no place as an endpoint of your argument. That is never an option linguistically.

    All concepts are naturalized through metaphor. All meaning is rendered through metaphor. There is no outside of language, so you are always using language–and these metaphor driven same/difference categories–to understand everything that you perceive. It is not that you are wrong about the naturalization of the future–it is that you missed 99.999999% of the cases where this is always already happening and why this is happening. You can’t see the forest for the trees.

• A very thorough response, and I like it, although I think I need to re-read as it’s so dense with information and ideas. One thing to ponder, though. While I fully understand your comment that the world is not Platonic in that we don’t think literally, but instead metaphorically, I can’t help but feel that the two are not mutually exclusive. We think metaphorically, yes – but this doesn’t counter any truth that there are universal forms, say, like a Chair or a Tree. I don’t think I’m clever enough to reason this out logically, and certainly not with the thoroughness with which you approached your own response above. It’s just an intuition on my part, and my best response – either whimsical or too darn way out there – is that Plato’s thesis is itself a metaphor.

      • Maleorderbride says:

        You are correct, a Platonic world and language as non-Platonic descriptor of that world would not be mutually exclusive. However, my comment was one step removed. I said that “language is not Platonic.” I think the distinction is an important one and for just the reasons you bring up.

        If I made any claims about the word as Platonic/non-Platonic then I would need to talk about religion, which I do not think anyone really wants to do on the internet. ;p

        • Maleorderbride says:

          Well, that is an unfortunate typo. I wish there was an edit function.

          I meant to say:
          “If I made any claims about the world…”

    • Come on man, your arguments are contradictory:

If “the normalcy field” is nothing more than the universal functioning of language, with nothing special about it, then its breakdown (a real recognised experience) shows us a space beyond language.

Or, if its breakdown is not the breakdown of language itself, then it is not co-extensive with language, and so he is talking about something different.

      If you’d cut the bluster a bit your insights about the universality of metaphor could be a lot more useful.

      • Maleorderbride says:

Josh, your assumptions are not logical. You restrict us to:
– The field and language are coextensive with each other (i.e. the same).
– If there is something outside of the field (which you cannot talk about anyway), then it is outside of language.

There are other possibilities. The field does not define language; language defines the field (and beyond). They are not synonymous; this is not a two-way street.

        To continue, what does one mean by “a real recognized experience” of a breakdown? How did you recognize it without using language to conceptualize it or share it with others? As Butler would say, to have a concept of the exterior of language would be to lose the exteriority that the concept is supposed to secure. I agree that you found something outside of the field, but it was only another part of language.

        My original comment was largely related to the above point. Rao’s concept of a “normalcy field” seems to be a small piece within a common and widespread critical theory of how language already operates (from the 70’s!). A breakdown of the field only refers us to another feature of language that is already in operation. That is why I posted. The confusion that you display is mirrored in Rao’s misconception of the role of the field within the larger machine of language.

I imagine what bothers you about my post is my apparent dismissal of anything outside of language. I am right there with you. However, I am not saying it does not exist, but rather that a whole lot of hand-waving in the vague direction of something we cannot talk about, address, quantify, or qualify is hardly productive.

        We are inextricably enmeshed in language. Get over it.

  36. John Devine says:

    I was wondering while reading if you have read a book by Robert D. Romanyshyn called ‘Technology as Symptom and Dream.’ I think that he touches on the same topic in this book as you do in the post. His focus is on linear perspective and how it changed our view of the body, along with the view of the rest of the world.

37. This essay made me think of 2 things – first, Philip K. Dick’s Ubik. Second, the global resurgence of fundamentalism in the past 50 years. I’d be curious if the collapse of a technogarchic perspective of the field and the attendant terror experienced from rubbing up against the 500 years of unassimilated technological progress has pushed people back to medieval, magical metaphors.

  38. Good day! I simply would like to offer you a
    huge thumbs up for your great information you’ve got here on this post. I will be coming back to your website for more soon.

  39. CommonSense says:

Excellent, excellent essay. Though as to the 15th century argument, I’d actually agree with Gibson that our current everyday culture dates from around 1912 instead (“everything changed,” in one of his books).

    Not in terms of technology, but in terms of familiarity of the everyday experience of the consumer.

    As far back as 1912 or so, perhaps even into the late 1910s, a lot of today’s consumer experience would not be ENTIRELY alien to a hypothetical time traveller. All through the 50’s, 40’s, 30’s, 20’s and just before, if deposited onto a major city street, there would be familiarities.

    You could walk into a market and purchase packaged cereal and bottled, refrigerated milk.

    You could walk into an eatery and order familiar foods, from an omelet to a hamburger to just a cup of coffee, and while the price would be different, the experience really would not. Ordering, paying, thanking would all be similar.

    You could go into a store and purchase mass-produced goods.

Even men’s clothing has barely changed at all since the late 1910s. Trousers, shirt, jacket of similar cut to now. The addition of belt loops, that’s about it.

Before that, in the late 19th century, for many people, it was VERY different. That is an alien, alien world to people today, and any periodical of the time shows just how much so.

    So I’d call us stuck in around 1912.

  40. puts a new slant on projectile vomiting… we are projectile vomiting right into the future..

  41. I would like to point a few things out:

1. I would like to posit that your reflections on the process through which this membrane of the Field stretches to encompass future technologies and concepts by translating them into something easily understandable by the mass of ordinary people are, unfortunately, informed by a myopia resultant from living and thriving in a consumer-driven capitalist economy for your entire life. Such a thing necessitates an enormous marketing contingent to mediate this Field-stretching process, and marketeers do so by causing the least amount of inconvenience to their targets. The reason Facebook and iPhones rely so heavily on folksy, worn tropes is because it’s easier to sell them to the consumer market this way. We have prioritized the uses of our human resources so that, to paraphrase, “the brightest minds in the world are being used to figure out how to make you click on a banner.” Even the decision by a youngster of which advanced degree to pursue is heavily influenced by the demands of a consumer economy. Also, some of the biggest complaints from the “core userbases” of various OSes and pro-grade software arise from a decision made by the company to “dumb down” or “switch gears” or “make the product more user-accessible” (think of how much hate is poured on Final Cut Pro 10, or every Windows version after XP). These are attempts to stretch that membrane. But it’s all an artificial process, engineered by capitalist interests! You do touch upon the role of the marketer in all of this, but as a peripheral, not as the central engine that drives it. Think of the Soviet Union – that society was basically force-fed the future through fear and hope, as well as through concrete directives like the destruction of religious spaces, construction of factories, etc. Unlike the capitalists, they were not trying to stretch the membrane to accommodate the past, but rather scrunching it OUT of the past to accommodate (their version of) the present-future. No one worried about usability, marketability, or really anything that had to do with what Ivan the Average desired. They worried about results – “will this get us to space?” or “will this yield better crops?” Sure, the tradeoff was terribly inhumane and bloody, but I would argue that was more of a “picking the wrong tyrant” problem than anything inherently wrong with their blueprints for reorganizing society.

2. Your linguistic kung fu is weak! You flat out say that explaining genes to a 3000-year-old guy is impossible, and then you postulate that all sufficiently advanced tech is like magic to all beings. I take great issue with that – we have taken great pains to adjust our early development to maximize our ability to comprehend this ever-increasing complexity. Even more simply, it was only recently that we even began to get a sense for the curvature of future progress. Back before the Enlightenment, the Industrial Revolution, Verne and Shelley, the only concrete visions of the future people had involved a vengeful God raining down judgment and destruction upon our wicked lot. No one had any real idea that a few centuries from then, things would be drastically different. Sure, there were futurists, but their insufficient knowledge about reality – or how to go about systematically acquiring such knowledge – gave them little room to operate, and resulted in a lot of silly speculation (which makes for great reading in Umberto Eco novels). But anyway, you want to explain genes? Start off by saying something like: “from the loins of every man bursts forth the life-giving seed, and that seed is a book upon which the fundamental aspects of a man’s being are inscribed. [If needed, clarify:] The shape of his face, the color of his hair, the quickness with which he strikes, his cunning and wisdom… Women inside them possess the flower of life, and on that flower the woman’s being is inscribed as well, much like that of man. When seed and flower combine, the texts of both combine as well, and a new being is thus born…now, in our glorious century, we have the ability to WRITE AND EDIT THESE BOOKS OURSELVES, thus shaping a man and his character before he ever leaves the womb.” Takes only a minute, but I think this is on a Babylonian’s level.

    Perhaps similar points were already made on the comment space…I didn’t have time to skim as thoroughly as I would have liked. My apologies if I stepped on any toes.

  42. I predict that the collapse of the fields will be something akin to a zombie apocalypse.

  43. Lovely piece, and you’re not wrong – you’re just too young.
    I read once that the tragedy of living past 50 is that the world you are adapted to no longer exists. At 63 I feel that more strongly than ever – my field popped a decade ago.
    I read the whole newspaper the other day, leisurely, with coffee, and alone in public, and looked up when I was done with the powerful sense that the whole thing had been written by Kurt Vonnegut.
    My 16Gb thumb drive astounds me – I rub it in my pocket like a totem.
    A friend in Tokyo called my cell from her cell while I was visiting Manhattan. SHE DIALED DIRECT. Stunning, and impossible.
    My grandmother was born in 1896 and I knew her well, she remembered all the Spanish-American war vets, the Wright brothers, WWI, you know – cars and stuff.
    My dad, 1911, fought in that B&W movie they shot on Guam and then the color one in Korea, he always told me “you can’t tell-a-phone from a streetcar” but of course you can.
    Give it time, kid, it’ll happen to you too.
Welcome to the real future where you’re a stranger in a strange land. They think you should “get it” but the joke’s on them – you do.

  44. Matthew Petersen says:

    Do you have citations for this that you could direct me to?