The Abundances of Ages

by Venkat on October 4, 2012

High culture organizes its world views using overarching frames: intellectual superstructures that serve as extrinsic conceptual coordinate systems.  “Globalization” and “Industrialization” are examples of such frames.

Popular culture, on the other hand, tends to be driven by the most visible drama in the immediate environment.  From the chaos of turbulent change, popular culture tends to pick out specific motifs around which to grow a world view. These motifs mostly arise from the economic abundances that drive that particular age.

In trying to compare and contrast the motifs of different ages, something interesting struck me: the motifs tend to cycle between material, object and cognitive motifs. The objects aren’t random objects, but ones created by the operation of technology. So iron is a material motif for the Iron Age, the steam engine is an object motif for the Industrial Age, and writing is a cognitive motif for the Bronze Age.  Here’s an approximate and speculative table of the motif-cycling I made up.

(I have endnotes for the less obvious table entries, which may need some explanation; and obviously the model is more speculative for ages for which contemporary written records are not available to us).

Why is this cycling important? Well, for all you futurists out there who are stuck in a mental rut asking yourselves, what’s the next big thing? The next big thing is almost certainly not going to be a thing at all (an object motif). It’s going to be a material motif. So the right question is: what’s the next new material?

So answers like “3D printing” are wrong in a specific and interesting way. Let me explain.

Understanding the Cycling

Thinking in terms of temporal cycles is dangerously seductive. It is extremely easy to delude yourself that you can predict times and transitions; that your labels for specific squiggles on a graph are somehow objective and essential rather than subjective and arbitrary; that patterns in and of themselves somehow represent knowledge.

If useful (in an instrumental sense) cyclic thinking is possible at all, it tends to require extreme sophistication and mountains of data to do correctly. Looser types of cyclic thinking are much better at suggesting the right questions than discovering the right answers.

When you apply structural-cycle thinking to something as subjective and ephemeral as the motifs that symbolize entire temporal epochs (why atom and airplane for 1945? why not aluminum and atomic bomb instead?), you’re basically in fertile territory for self-serving speculative nonsense, where keeping yourself honest is really hard.

That said, there is one mitigating factor with this model: unlike purely structuralist cyclic models, there are some fairly obvious dynamics at work here: first we discover a new natural abundance, then we learn to engineer with it (turning it into an engineered abundance, often created by wasting the natural abundances of a previous era), and finally our brains adapt to think in the new environment (usually via new educational models that manufacture a new normal).

This last stage creates a cognitive abundance or surplus of a specific type. This specificity is what is missing in Clay Shirky’s model of cognitive surplus.  Universal high school education, which became a reality in the US around 1910, did not create a generic cognitive surplus. It created an abundance of reading and writing bandwidth. A Coal Mind literacy. Our age, too, is creating specialized forms of cognitive surplus that I will get to, not a generic capacity for creativity and innovation.

So what we have here is a basic moving bottleneck phenomenon where first we make things and then the things make us. The remade humans are then cognitively equipped to create new material abundances, restarting the cycle. If the cycle ever stalls without a new material abundance, you get decay and its consequences.

The reason explicit frame-based world views are less useful than implicit motif-induced ones is that they represent the workings of technologically shaped cognitive capabilities, and are therefore derivative with respect to motif-level world views. So “industrialization” is not an absolute frame, but one that a “Coal Mind” so to speak, can process, given the trajectory of cognitive development till around 1900. The “Silicon Mind” retains the label “Industrial Age” for referential convenience, but processes the historical record very differently (in a less triumphalist, more dystopian way for one).

With those caveats noted, let’s foolishly shove the angels aside and rush right in.

The State of Play

We seem to be near the end of the third major reboot cycle in human history. These cycles seem to be marked by the co-existence of an emerging material motif (DNA in this case) and a maturing cognitive motif (in this case, the new abilities of the maturing Silicon Mind).

We exited the object-motif zone of the current cycle around 2000, when the social web dumped us all into a sudden abundance of relationship possibilities. We’re not in paleolithic tribal villages anymore. We’re not in Kansas anymore. We are in a social environment where we must manage relationships within a space (the “cloud”) that extends far beyond family, village, nation or metropolitan region.

But I’d say, after a decade, we’re finally on top of relationship abundance. We know how to manage it. And the result is that Facebook now has a billion members (as of yesterday), furiously manufacturing thoughts from a billion different perspectives for their managed social neighborhoods, all vying for the last dregs of attention.

In other words, we’ve learned a new cognitive mode: relating. Navigating interpersonal realities in ways that our brains weren’t really designed for.

But those billion different perspectives aren’t about a billion different things. Imitation and social proof, we’ve discovered, make the dynamics of collective attention far simpler than they could be. It is not a 3-channel universe, but it is not a billion-channel universe either. It is a million-channel universe with a lot of slightly different rerun channels.

We are now developing a new mental muscle to navigate this million-channel universe: perspective shifting, or refactoring.  It is how we manage attention in a world that is past Peak Attention from the point of view of recently disrupted entities that relied on mass media. One useful way to understand Peak Attention is that we did not run out of attention per se, but that the portion that had been colonized during the industrial age started escaping from domesticated captivity around 1980. Perspective economics is about dealing in wild, ungoverned attention, which is more abundant than the governed kind ever was, but is much harder to influence at scale. Sort of like fresh water versus salt water.

A “refactor/rethink everything” mentality is taking hold at scale. This is giving us everything from a media space driven by furiously competing Facebook memes, to business ideas like Airbnb and Zipcar, to what the “View from Hell” blog calls the insight porn industry. That post, incidentally, has ribbonfarm classified into this last phenomenon, and I have to reluctantly accept that the classification makes sense. I manufacture insight porn for the perspective economy.

The category includes everything from the glossy manufactured Aha! experiences of TED, to the frantic building of an entire “insight industry” out of Big Data technology, to the flavor-of-the-year world of programming languages.

That then, is the state of play. We’re basically in the last stages of learning to think with our new Silicon Mind. Hacking, relating, refactoring, these are the basic new literacy skills for the Silicon Mind. These skills will form the basis for a new education system and turn into a new cognitive abundance within the next ten years.

So meh, that’s old news. Yesterday’s future.

The Next Big X is Y

Returning to the motif cycling, it is clear that Something Is Up with a new material: DNA. Costs for gene sequencing are dropping faster than the cost of processing power ever did during the Silicon Age. As part of the resultant engineering bounty, we can now clone individual organisms, grow ears on arms, engineer entirely new varieties of plants and animals, and do other extremely weird and nauseating things. Monsanto seeds are nothing compared to what is coming.
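The cost-curve claim can be sanity-checked with a quick back-of-envelope calculation. The figures below are rough approximations of publicly reported estimates (on the order of $100M per genome around 2001 versus roughly $10K by 2012, in the spirit of the NHGRI cost-per-genome data), not exact numbers:

```python
import math

def halving_time_years(cost_start, cost_end, years):
    """Years for cost to halve, assuming a steady exponential decline."""
    halvings = math.log2(cost_start / cost_end)
    return years / halvings

# Rough figures: ~$100M per genome in 2001, ~$10K in 2012 (11 years).
seq = halving_time_years(100e6, 10e3, 11)
moore = 2.0  # canonical Moore's-law cost-halving period, in years

print(round(seq, 2))   # ~0.83: sequencing cost halved roughly every 10 months
print(seq < moore)     # True: considerably faster than Moore's law
```

Even if the endpoint figures are off by a factor of a few, the conclusion is robust: the halving time comes out well under the two-year Moore benchmark.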

So for starters, the next X is material, and the particular material Y is DNA.

But it is equally clear that this is just a material abundance. It has not turned into a true engineering abundance, let alone a cognitive one.

At the same time, it is also clear that silicon has shaped our minds as much as it is going to. This shaping is not yet pervasive, but there are enough instances of homo siliconus wandering around that we know roughly what this mind (or rather, collection of minds) looks like. There isn’t going to be another age soon whose motif is another kind of cognitive development. We will merely scale and refine what we have. If somebody reads this blog in a hundred years, he or she will probably criticize it for the crudeness of the relating, hacking and refactoring going on here.

But the action is shifting back to the material layer. There is a new dominant material abundance in town. And this one is special because it is the first material abundance that involves the living material of nature (not counting plastic).

But it is clearly not enough. Before we can get to engineering abundances, we need a few more material abundances to form a resource base. In the last cycle, we had six basic and complementary material abundances — oil, coal, steel, atoms, plastic and silicon — to work with. Oil was the first among equals in that set, just as DNA is probably going to be first among equals in the next set.

You can sort of see where the material bottlenecks are:

  • Gene therapy, longevity technologies and cancer treatment almost certainly require MEMS- or nano-level targeted delivery mechanisms, which depend on the sort of material abundances nanotechnologists are pursuing.
  • Lithium and other battery-making materials are another bottleneck for most engineering futures. We need materials for better batteries in all senses of “better” (lighter, smaller, longer-lasting, less-polluting). Energy scarcity is not going to be a problem much longer, as we slowly replace most atom transport with bit transport and dematerialize many things, but energy portability is going to be a bitch to deal with. The most portable form today (electricity) is hard to store, while the most storable form (coal) is hard to move. It is a stock-and-flow nightmare. But Elon Musk may have solved this one.   
  • Cheap 3D printing and other small-scale/batch/high-variety technologies are mostly being held back by the lack of material abundances in the various “toner” like materials required to fuel the processes (this might be an artificial scarcity, created by IP laws, rather than fundamental resource scarcities).
  • Water scarcity is a huge, looming problem that may be on a runaway course. But the costs of desalination are apparently falling rapidly. As an investor acquaintance of mine said, cheap desalination is the bottleneck here (apparently Kennedy saw that one coming).
  • People: the world is rapidly aging. You need young people to drive the story forward. Even young countries like India will have an aging population within a few decades. There is no way to create an abundance of people while living women remain the bottleneck resource. Like it or not, there is a huge economic motivation to pursue vat-grown babies. Parenthood may be obsolete in another century.
What happens if we create an adequately complete resource base of complementary material abundances for a new era of engineering?

We’ll enter another age of object motifs. The key here is that these motifs must represent ubiquity rather than novelty. Not Dolly the sheep. Not an artist with an ear grown on his arm. Something like the steam engine, touching the lives of nearly the entire population.

If I had to bet, I’d bet on artificial, lab-grown meat being the first object motif. By 2050, we will have 9 billion people trying to live first-world lifestyles. The protein has to come from somewhere; otherwise the world will melt into a hyper-obese puddle created by grain-based diets, or kill itself in moral disgust and shame at the cruelties of factory farming. Lab-grown meat solves these problems at the cost of evoking a certain kind of horror for us Silicon Mind people, who think of biology as an inviolable domain. But I am betting that by 2070, at least a third of the world population will be getting most of its protein from lab-grown meat.

Sounds yucky, doesn’t it? That visceral reaction by itself tells us we are nowhere near possessing the kinds of cognitive surplus required to accept such realities. We are creatures of silicon, who consider our electronic cognitive prosthetics “normal” but rebel at the thought of objects from a much more intimate technology base that can invade our bodies at all scales from nano to macro, instead of just being attached to them.

But let’s hold back the nausea and try to imagine the even more distant future, when the cycle now getting underway runs its course. The material abundances are history. Various object abundances define the environment. Like lab-grown meat, vat-babies, 200-year-old humans on their fifth hearts, and environments flooded with batteries of all sizes and types. Imagine a physical environment that has far less stuff and uses far less energy, but exhibits a lot more variety. Variety that replicates, maintains itself, and evolves, through reprinting and regenerating as necessary, part and whole. Variety that blurs all visible distinctions between natural and artificial, thanks to the ho-hum genomic abundance at the material level.

This is the new normal world of 2200, say. But what sort of mind emerges out of it?

We cannot even comprehend this mind, yet in our teeming perspective circuitry of insight porn, we can navigate the infinite delta streams of future probability and see that there must one day come a Mind whose merest operational parameters we are not worthy to calculate, but which it will be our fate eventually to design, using our puny Silicon Mind.*

This will be the Next New Mind, circa 2300 AD. The Transhuman Mind.

* For the Silicon Uncultured among you, this is a reference to the Deep Thought computer in The Hitchhiker’s Guide to the Galaxy.

Endnotes

  1. Water as a material abundance motif in the list represents water as an energy source. The massive mechanical creativity that was unleashed in the Middle Ages was driven, during its most inventive period, by water power, not coal. By the time steam power arrived on the scene, the mechanical arts were already being codified, as I discussed in my Hall’s Law post.
  2. I struggled to find an appropriate material abundance for what was generally considered a chaotic Dark Age everywhere except the realm of Islam. Then it struck me with head-slapping obviousness that the primary answer was paper (and to a lesser extent, gunpowder). Though the Chinese invented both, it was the Arabs who turned both into material abundances, once they discovered and democratized the secret after the Battle of Talas in 751 AD. Islamic Central Asia soon became the leading center of paper manufacture. Water power helped scale paper production in later centuries. We forget that for the Gutenberg revolution to occur, paper had to be abundant.
  3. The object motif for the Cold War/Atomic Age was obviously the jet plane or space rocket in the popular imagination, but I personally prefer the shipping container. This example shows that the motif that sticks in the popular imagination may not always be a reliable guide to the actual dominant object-abundance.
  4. Why “Imagining” as the cognitive motif of the Neolithic Revolution? For agriculture to develop, you had to have a mind capable of imagining the future at least a year ahead, and engaging in behaviors like storing surplus grain in pots.
  5. Why “trading” as the cognitive motif for the copper/bronze ages? Because an international trade in tin had to develop for the Bronze Age. I believe this was the start of trade proper as we understand it today, and the origin of the “business” mind, used to thinking in terms of trading surpluses over long distances.
  6. This one should be obvious, but “narrating” was the cognitive motif for the Epic Age precisely because that’s when we started telling long and complicated stories about ourselves and using them to store civilizational wisdom across time.
  7. It may seem surprising to make mathematics the cognitive motif for 800 AD, given the long earlier history. But arguably, it was the development of algebra that led to the development of a true mathematical mind. Arithmetic and geometry are just a little too close to physical reality. Wrapping the mind around the concept of zero was probably the first step, but algebra was where abstraction truly began.
  8. “Seeing” as a cognitive mode is basically what I understand art to be. The development of the rules of perspective, and the resultant rise of realistic drawing, was a hugely important development. As it spread, it became the abundant cognitive skill that led to the other intellectual developments we attribute to the Renaissance.
  9. Why 1800 for “organizing”? While humans have been organizing for millennia, thinking fluidly about an abundance of organizational forms had to wait till the limits of human energy and horse/ship communication were transcended. Pre-1800s organizational thinking was relatively impoverished. After 1800, we basically went nuts, inventing an entire zoo of organizational forms that reflected different patterns of energy and information use, and various models of authority and legitimacy. The process probably started with the invention of the modern nation-state around the Peace of Westphalia (1648) and the English Civil War, but it took the emergence of early corporations like the East India Company, and the possibilities of fossil fuels and the telegraph, to create the Cambrian explosion of organizational variety.
  10. Hacking as a cognitive motif probably started with phreaking on the telephone grid. Creative makers certainly hacked before then, but there were no large-scale engineered realities to “hack” in the sense of a systematic culture. See my post Hacking the Non-Disposable Planet for more.
  11. “Refactoring” as a cognitive motif is unfamiliar and hard to think about because it is so new, and there is no consensus yet on labels. Perhaps my terms will stick. Maybe “Insight Economics” will win out once the “Insight Porn” bubble collapses. I don’t particularly care, so long as we agree we are talking about the same mental faculty that a lot of people seem to be autodidactically developing at the same time: an ability to see things from lots of different angles in search of the best one. Programmers obviously lead the pack (hence the label, which is drawn from programming in case you didn’t know).
Zora October 4, 2012 at 10:20 pm

Oh dear, a grand scheme of history based on nothing more than “well, it looked good to me.”

Mathematics in 800 AD? Because you think algebra is more properly mathematical than arithmetic and geometry? That’s what YOU think; I’m not sure any historians of mathematics would agree with you.

Ditto for everything else chosen; it’s all subjective and arbitrary.

There are scholars out there doing big history, or world history, with more intellectual rigor. It would have helped to have read something in that field.

Sorry to be so critical. Venkatesh, when you look at trends close to your own time and culture, you can be quite perceptive. But that doesn’t equip you to opine re thousands of years of world history.

Venkat October 4, 2012 at 10:28 pm

Coal Mind historical rigor I presume. :p

Gabriel Duquette October 4, 2012 at 10:48 pm

The second section reminds me of this:

“It takes, essentially, literary talent, to look at the world and construct a novel and precise description of what you see rather recognizing the closest stored pattern and outputting the most appropriate available cliche.”

It’s from a comment by Michael Vassar on this OB post:

http://www.overcomingbias.com/2012/09/missing-life-lessons.html#comment-651545326

How do you think micronarrative navigation “will form the basis for a new education system”? If indeed you do?

Julio October 4, 2012 at 11:00 pm

The narrative you propose is more valuable than the specific process. Somehow it feels like you made the table just to get people arguing about it.

The conclusions you come to I’d consider somewhat uncontroversial. If we avert total collapse it’s going to be partly because we learn to re-engineer our biology and that of our planet. So far I’m not convinced we’re mature enough for that kind of tinkering but history doesn’t wait for you to tie your shoelaces.

If we were to use your system, this transition is so different that it renders all previous transitions meaningless anyway. Tinkering with the very stuff of life at scale is something else entirely. Religiosity and a particular concept of the sacred have been with us since more or less the beginning. If there’s any wisdom at all in it, it’s because it speaks to a certain biological foundation that hasn’t really changed, until now.

Sister Y October 5, 2012 at 12:20 pm

In case there’s any doubt, I do not mean “insight porn” as any kind of slur. I think creating insight experiences is about the most beneficial, least harmful thing you can do for the people I care most about.

Alexander Boland October 5, 2012 at 1:41 pm

On atom-transport vs. bit-transport I thought to myself “energy and information are interchangeable–they’re both exploitable asymmetries that are fungible with regards to maintaining a complex system; even if not totally fungible in other ways”. In fact, now that I think about it, oil and fiber-optics are arguably more interchangeable than reading/writing bandwidth and refactoring ability.

You described this with more precision. I have to read it a second time and let it sink in, but what I see is a constant shifting of bottlenecks. Currently we have an energy shortage but an object abundance (not sure about cognitive surplus). Soon we may have an object shortage of the kind you mentioned; or maybe we have that now if we’re thinking in terms of solar power (lack of batteries.)

Also seeing your comparison of frames vs. motifs as similar to the distinctions between cognitive scientists who see things in frames or prototypes respectively.

Also, as we move from atom to bit transport, I’m starting to believe that there’s another big shortage that will come in the 21st century: information. Systems will be gunked up with not just noise, but data/software (once again, interchangeability) where you can’t even sort out the signal from the noise at a decent EROEI. After all, this is already a giant problem in software today. I think fundamentally this signal/noise problem is actually no different than pollution and depleting fossil fuels–there’s plenty of oil/gas/coal but the signal/noise ratio is becoming pretty meager.

Jon October 5, 2012 at 1:48 pm

From a writer’s perspective, this is an endlessly interesting way of looking at the evolution of society and technology. Both in the past (in, say, a second-world fantasy where something went differently in history) and future (science fiction, of course).

Of course, science fiction has already been foreseeing vat-grown meat and vat-grown people (at least, Lois Bujold has) for quite some time, so we’ll have to extrapolate in a different direction… but the thought process is useful nonetheless.

Thanks for sharing!

Walter Williams October 5, 2012 at 3:40 pm

You completely misunderstand the movement forward of 800 AD. “Darkness” is a pejorative given to an era someone doesn’t understand, or whose accomplishments they don’t value. Learn something about the period. I’ve also noticed a very, very clear bias towards “western” civilizations in your chart. Bigotry at its worst.

Shame on you.

Venkat October 5, 2012 at 3:50 pm

I used the term ‘Dark Age’ because that is a common label for that frame.

You are welcome to disagree with me and present alternative models here or in a response blog. More ad hominems and vague accusations of bigotry will get you blocked from this site.

Kevin O. McLaughlin October 6, 2012 at 10:59 am

That really isn’t a phrase commonly used by historians; it’s a pop culture name for the period. Generally, a historian will talk about “migration period” instead of “dark age”. It wasn’t any darker than the time before or after, really. And while there were losses of technology in some areas, there were also advances. A truly chaotic time, with lots of groups of people moving around a ton (hence migration period). But not a dark age.

I find myself picking nits with a great number of the specifics on your timeline table (especially dates), but overall find the concepts you’re putting forth interesting food for thought. Perhaps the article would be more effective without the distraction of the table. ;)

I think it’s also important to at least look at the idea of the Vinge Singularity, when postulating ahead: a combination of the technologies of AI, biotech, and nanotechnology. Already, we see the early elements falling into place. I carry a smartphone, so I have the entire Internet in my pocket. I can look up any bit of data I might need, any time, anywhere I am. Extend that in terms of the above technologies, and you have implanted minicomputers inside a person’s head, complete with a low level AI, able to call up data from the Internet at the speed of thought. In essence, the sum total of human information becomes available to each person instantly, with an AI to help vet, sort, and organize the material.

So I don’t see DNA and biology replacing silicon sciences, but rather working together and augmenting each other. The fascinating thing about this sort of tech development is that it has a self-accelerating nature; the better and faster we can think and organize/find information, the better and faster new technologies will be developed. A cycle of steadily intensifying acceleration.

Aleks Jakulin October 5, 2012 at 4:05 pm

This reminds me of The Fourth turning: http://en.wikipedia.org/wiki/Strauss-Howe_generational_theory

Daniel October 5, 2012 at 7:48 pm

I thought the high culture/pop culture contrast in the first paragraph was more interesting than the periods. Do the “themes” of each era come from a pop-type perception or a high-type perception? Do various ages lend themselves more to a pop interpretation, and others to a high one?
Actually, in light of the last person’s comment: in current times, the Middle Ages tend to attract “pop” enthusiasts (gamers, Tolkien fans, etc.) that the academy would consider vulgar. But that’s of course a different question…

Kay October 6, 2012 at 2:46 am

There is a new dominant material abundance in town. And this one is special because it is the first material abundance that involves the living material of nature (not counting plastic).

In a sense there is a market crier inside of you who wants to come out. O.K. maybe that’s your tribute to our global post-Christian pop culture …

I have serious doubts that genetic engineering will be a game changer at this time and age. We don’t have some sort of Universal Turing Machine for life to begin with: no idea how to build living beings with arbitrary properties from scratch by sending a team of genetic engineers a requirements list. It is all just a very sticky and complex biochemical environment which can be hacked using some neat tricks. DNA screening is still information-age, not DNA-age technology. Recently we got a simple artificial cell, but what happened to cloning mammals? As with mathematics or AI, we learn that solving some problems is (surprisingly?) easy, while solving others takes fairly long or they remain unsolved.

You could argue that I wouldn’t even notice changes because of a “manufactured normalcy field” but even in that case I would notice the large scale effects of a technology even though I would be shielded from all of its possible manifestations invented by futurists, much like all of us did in the atomic age which occasionally returns and haunts us but not quite as an Armageddon.

Isaac Lewis October 6, 2012 at 8:20 am

I would say our current era is not

DNA / Smartphone / Refactoring / Perspectivization

but rather:

Digital content (or “information”) / Smartphone / Networks / Unknown unknowns.

Isaac Lewis October 6, 2012 at 8:59 am

Scratch that. I really should read the whole article before commenting.

Goblin October 6, 2012 at 6:24 pm

I share the opinion of the other comments here when it comes to the table.

That said, I am highly wary of “refactoring” and “cognitive abundance” ideas. I suppose given my history here this isn’t surprising, but logically speaking “refactoring” isn’t completely possible (even with Google’s hive mind). I think at best we end up with what we have today: a world where there is no true “authority” and the politics and motives of sources may be questionable.

What’s left out of the multi-viewed perspectives is the issue of knowledge relevancy and knowledge depth. So we end up, as with Google, with lots of people with lots of general information or perhaps overly-specific information; either way the information isn’t quite on target but off by prior knowledge minus “X” number of degrees.

To put it in terms of computing, there’s a hardware issue: the human mind is only capable of holding so many concepts in place at any one time. The whole idea that some (Kevin) have discussed in the comments here reminds me of the moment in The Matrix when Trinity learns to pilot a helicopter in an instant. There are other “craftsman disciplines” like this as well.

I suppose in a way it comes back to the necessity of the “refactored” viewpoint to reject the “craftsmanship” viewpoint. No one will deny that it takes more than just learning of concepts to properly pilot a helicopter. As a matter of craft, it requires many, many hours of repeated practice to learn properly and safely. So refactoring is good for science but at odds with pursuits that require proper focus, in any of its various forms.

Now I’m not denying that some process of multi-angled analysis is developing or has developed; I am sure that in some disciplines it has its good uses. I just wonder how other factors might play into the sweeping analysis. As a military veteran I couldn’t help but notice the absence of war or strife from any of the motifs (it seems to be in the background). The fact your analysis is also “resource oriented” strikes a similar chord.

Kay October 14, 2012 at 4:40 am

Since you mentioned your “military veteran” status you might be interested in the following piece about military tactics which is a fine showcase of the art of refactored perception. The IDF strategists applied poststructuralist theory for their own case which is, in terms of theory building, Venkat’s “evil twin”.

Goblin October 14, 2012 at 6:30 pm

That was an interesting read, thank you.

However I am still not sure how it demonstrates “refactored” perception. I’m of the personal opinion that military life, in general, promotes “maverick” or “outside the box” creative thinking. Thus, I’m unsure of where to draw the line between “refactored” and “outside the box”. I don’t see the room for formal or academic necessity.

In essence, I am asking, precisely where is the predictive (“refractive”?) power suggested? Or, is there no predictive power and is understanding built moment to moment based upon a complex of individual, team, and environmental interactions?

A lot of the post-structuralist and abundance metaphors (in context of the internet) could be said to tie-in with other methods that one might use to keep a cool head and make “sense” of massive amounts of simultaneous information. Then the formal debate (much like the internet of now) is over who’s interpretation and opinion is of most value: the reader’s, the writer’s, or “Personality X”. Each one has its own pre-suppositions, and chances are most would (have?) run from the ensuing “meta” discussions.

I will even go so far as to suggest that a good welder must make sense of massive amounts of information in order to know what is critical at what moment as he lays in a weld. Is this creative practice then also “refactored”? The application within the article is within the confines of a well-established structure, and the same could be said for welding. In such cases, the potential for refactoring must be built upon something. Thus, how much and what type of structure is necessary for “refactoring”, and how much structure, of what type, is enough to inhibit it?

I suppose I could nitpick more but I think most of it would be out of bounds from your point. Thanks again for the link, it is a good article.

Kay October 15, 2012 at 12:11 am

In essence, I am asking, precisely where is the predictive (“refractive”?) power suggested? Or, is there no predictive power and is understanding built moment to moment based upon a complex of individual, team, and environmental interactions?

That’s an interesting question. Venkat’s relationship to science isn’t an easy one, as we have seen. Otherwise he wouldn’t start a discourse about scientific experiences in order to discern them and position himself.

Wouldn’t it suffice to say that it can help to clarify things using models/metaphors up to the point where qualitative properties and differences to other models become visible and cause a shift in the frame of perceiving? “Out of the box thinking” can be subsumed, but it is often used more loosely, like having a clever idea on how to solve a puzzle which cannot be solved in a straightforward way. “Hacking” is out-of-the-box thinking/acting, but it isn’t necessarily reflective or theory-driven. Odysseus didn’t use models; the IDF does.

In a sense a new box/model is created for what exists out-of-the-box for a short moment, and a skilled model builder like our host can build them almost on the fly (or in weekly increments) and deliver a new frame of thinking inside a new box. I admit the skill would be even greater if we could attach numbers and dynamics (i.e. predictions) to our newfound models at the same pace, but this is so hard that it took extremely long for science to even boot, and mechanical models really worked well in some domains only. Taleb and Derman argue that they can actually do much harm when applied in the social sciences and in finance.

The old fear of philosophical scientism that once we give up on hard-rejections, everything becomes subjective and open to voluntarism can itself be softly rejected: instead of a free floating subjectivity we experience a couple of boxes i.e. models, ideologies, scientific theories in various states of maturity, religious convictions and so on. Since when aren’t those affected by and open to rational criticism based on arguments of various sorts?

Goblin October 15, 2012 at 1:58 am

Odysseus didn’t use models, the IDF does.

I would clarify this. Officers as planners build models, yet in action, in the moment, such models are subject to change based upon the unforeseen. So the “plan” is often just a terse statement, and the “model” a heuristic device. Both allow room for contingency.

I am essentially pointing to the inevitability of any “plan”, military or otherwise, enacted in a truly fast-changing environment, falling completely apart or ceasing to be effective. Contingency is the only effective solution.

The difference I perceive then, between theory and action, is that in one situation the “box” is shifted voluntarily and in the other the “box” is shifted by external forces. So the question becomes is “refactoring” a shift in an internal box or an external box?

In the external context, meaning exists outside the self, and as such any contingency is (or should be) based upon external factors. My concern with refactoring is that the factors in question are contingent upon internal perceptions.

In either case we are left with contingency. Yet the difference between the two, internal and external, is of great importance.

Now you might reject this under your terms of “voluntarism” yet I was deeply surprised at how quickly and emphatically any suggestion of the fluidity of language riled our host. Being the know-nothing that I am, the emotion itself seemed indicative of something, of what I don’t know, and if anything the reaction intrigued me.

So yeah, I’m still a skeptic, and given my own experiences in the social sciences I know that most debates on many theories are essentially timeless debates with no resolutions and no firm answers outside of the statistical acceptance or rejection of those theories.

Yes, you can be critical within the internal frames of theory, you can even build strong statistical cases, yet with no recourse to external contingency I’m not sure how to measure the value that remains. That question itself has dogged me, and if anything the internet, rather than allaying my concerns, has driven me to greater skepticism.

(Small note: I’m using the more general definition of contingency, not the clunky definition attached to the formalized “Contingency Operation”.)

Venkat October 15, 2012 at 12:00 pm

You guys give me way too much credit for putting thought into aligning writing/thinking with tagline. “Refactored perception” is at best a loose motif-phrase for my natural style of thinking. I don’t “set out to refactor” something, or make it a go/no-go test for publishing something.

My process is basically 90% unconscious/on autopilot. I only pay attention to it when things go wrong or a post stalls for some unclear reason and I have to back up.

Predictive power (I don’t know what you mean by “refractive” here) is going to be weak, but this particular post does actually make some falsifiable claims (for example, we’ll see a few more material advances before we see the true “next big thing” comparable to the microchip).

Kay October 16, 2012 at 12:30 am

My process is basically 90% unconscious/on autopilot.

Yes, exactly!

When authors like R. Barthes proclaimed the “death of the author” in the 1960s, the meaning wasn’t that the author becomes dissolved in some web-style hypertext or an anonymous Maoist writer collective, but that “language speaks”, that the author is on autopilot most of the time. It was unfortunate that this position later sailed under the flag of “literary criticism” or “deconstruction”, as if readers/writers who admired texts of other writers from a distance had to permanently position themselves, being de(con)structive and unmasking the comrades as counterrevolutionaries. Not to mention the paradox of speaking with academic authority and using its jargon while being anti-authoritarian. But so is history.

They “refactored perceptions”. Why not? It is a catchy phrase and can be put to work even retroactively: the IDF wants to refactor its perceptions and relies on theorists who refactored perceptions while providing concrete applications (their reliance on architects who are not “too obscure”). So you might just have pulled the phrase from your ambience, and now you give it back, and it works outside of your brain and your text. Since you came up with it, your influence on it is obviously still very strong. Your usage might determine its course/prohibit other usages.

Maybe it doesn’t have an impact on the life of the mind but goes nowhere in the end, and it was just another show in your “penny theater”? This uncertainty is the market. Perhaps at some point you’ll shed light on the market of ideas, but it is also possible this is too close to you and you need a hustler, while being the engineer in this game.

Venkat October 16, 2012 at 12:47 am

That is very perceptive and useful. It also lets me off the hook for a lot of things, so thanks.

I wouldn’t say the marketplace of ideas is “too close”. I do have fairly well developed views on it and how it works. Just haven’t felt the need to share my models yet (mainly because the resultant conversation could easily degenerate into “blogging about blogging” if I am careless in execution).

Buster Friendly and his Friendly Friends October 9, 2012 at 12:59 pm

Nice.
You mention water scarcity.
I wonder how the bottleneck of our new atmospheric chemistry will affect outcomes. Any thoughts on this?

Andy Drake October 9, 2012 at 4:01 pm

Very interesting.

I’m just going to pick a nit about the People bottleneck. The sub-point that living women are the bottleneck to an abundance of people is a bit silly. The bottleneck to abundance of people is people signing up for the commitment of parenthood. India is already seeing the aging trend hit with newly affluent people having fewer children, and it’s not for lack of capable women.

That being said, I do think vat-babies will happen, but for health and convenience reasons, and I don’t think it will materially impact the lack of young people.

I think the lack of young people will be handled more organically by life-extension / health improvements and a new type of mid-life crisis. People who can live to 200 and be healthy and active for most of it will probably reinvent themselves into young people a couple of times.

Nazmus October 20, 2012 at 11:57 pm

Seriously, why are all you people ‘dogging’ Venkat? Obviously, this is his interpretation and observation. When did he ever try to prove his theory? As a matter of fact, when did he even say this was a theory?

Anyways, this was an awesome read and now I can use it to construct my own thoughts. Thanks.

Goblin October 21, 2012 at 10:07 am

I’m not ‘dogging’ I am seeking to understand. You may not see that, but curiosity is what drives my questions.

Brutus November 11, 2012 at 3:33 pm

I’ll congratulate you at the outset of this comment on the wherewithal to draw out of (human) history various cyclical motifs within overarching frames. I recognize the signposts you cite, and it’s a very interesting speculative model. However, things about it have been bothering me for three weeks (since I first read it).

To start, the accelerating rate of the cycles (only three!?) does reveal some truth to the way history has unfolded, but the fallacy you don’t observe (while acknowledging many others, such as pattern finding amid chaotic events) is the belief that today’s history (the Industrial Age) is so much more energetic than that of the past (the prehistoric, the pre-agrarian, the early civilizations, etc.). While we absolutely possess far greater power to engineer the world than in the past, the branching points are all well in the past — hundreds if not thousands of years ago. So the self-congratulation I discern in the latest cycle of subcycles only decades long is bothersome.

The discussion of refactoring is IMO built upon a slew of meaningless jargon: peak attention, perspective economics, relationship abundance, insight porn. This reminds me of various online gibberish and doublespeak generators for business, IT, politics, Yoda, buzzwords, or other BS, where the wording sounds significant but lacks rootedness in agreed-upon referents and understandings. Lingo moves too fast in most cases to have true expressive power. In truth, much of what you call refactoring could instead be called the disintegration of knowledge, which could well be interpreted as an outgrowth of pomo deconstructivism and radical relativism (not to simply throw more jargon on the pile).

Further, your statement “it is … clear that silicon has shaped our minds as much as it is going to. This shaping is not yet pervasive …” is far from having reached a clear conclusion. I don’t believe we yet know the longitudinal effects produced by such relatively tame mass media as radio, movies, and TV, which are the equivalents of live human experimentation without even a hypothesis in place before the commencement of trials. Maybe the computer/Internet is just mass media on steroids, but I don’t think so because its biggest effect thus far is fragmentation (away from the mass) leading to erosion of authority and allowing individuals to cherry pick their opinions and restrict their information environments to echo chambers or halls of mirrors. Additionally, we’re not yet even past the last generation of Coal Mind thinkers, the new Silicon Minders (those born since ca. 1990) not being old enough to assume the reins of power or fully resist the influence of older cognitive styles. Until living memory passes beyond literacy/rationalism as the central organizational feature of cognition to whatever follows (interrelatedness amid truly ghastly levels of ignorance?), I just can’t agree that the jury has rendered its verdict.

Your final remark about transhumanism is certainly correct about the way things are pointed, but I have grave doubts we will ever get there. The futile notion that we can ever escape our essential embodiedness and join the hive, the cloud, the singularity, or whatever is definitely part of the Zeitgeist, but we will never perpetuate or overcome the catalyzing power of fossil fuels once they’re no longer so abundant or cost effective, which is not too far away in time. You also wave away the paradoxical ookiness and ugliness associated with inchoate desires to transcend the body and instead occupy a clean, antiseptic, orderly, virtual mindspace, but this is a very First World way of thinking from which those without access to the greater benefits of abundance (the bulk of our 7 billions, I suspect) are foreclosed. Those “barbarians” may eventually storm the gates and destroy their oppressors, assuming some plague or disaster isn’t first unleashed by our own tinkering with nature’s building blocks. I for one believe that a cascade failure of all the institutions of civilization will happen before any further abundance cycles appear. There is considerable evidence that industrial collapse is already underway, not that one can see it just yet on store shelves or at the multiplex.
