The Mother of All Disruptions

I like thinking about technological disruptions that take place over really long periods of time, because the older the technology being disrupted, the more profound the social impact. In my disruption of bronze post, I speculated about one that probably took a few thousand years (iron disrupting bronze) and made spaghetti of the prevailing world order.

I just thought of a potential example that spans 10,000+ years: as a technology, computing disrupts natural language in the thinking and communications market. That would make computing the mother of all disruptions in terms of the time scales involved. Well, maybe electricity disrupting fire in the heat and light markets is a contender too. Here is the disruption, speculatively mapped out in the form of the familiar intersecting-S-curves visualization used in disruption analysis.

 

[Figure: computingDisruptsLanguage (intersecting S-curves of natural language and computing)]
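
To make the picture concrete, here is a minimal sketch of the chart as two logistic curves. Every parameter (ceilings, midpoints, growth rates) is invented for illustration, with the crossover pinned near 2013 to match the figure; only the general shape is from the post.

```python
# A sketch of the intersecting-S-curves chart above. All numbers are
# invented for illustration; only the general shape is from the post.
import numpy as np
import matplotlib.pyplot as plt

def s_curve(t, ceiling, midpoint, rate):
    """Logistic curve: a technology's served capability over time."""
    return ceiling / (1.0 + np.exp(-rate * (t - midpoint)))

years = np.linspace(1900, 2100, 400)

# Natural language: ~10,000 years old, essentially flat (mature) by now.
language = s_curve(years, ceiling=100.0, midpoint=-2000.0, rate=0.001)

# Computing: young, steep, with a higher eventual ceiling; crosses the
# language curve near 2013, matching the chart's placement.
computing = s_curve(years, ceiling=200.0, midpoint=2013.0, rate=0.08)

plt.plot(years, language, label="natural language")
plt.plot(years, computing, label="computing")
plt.xlabel("year")
plt.ylabel("needs served (arbitrary units)")
plt.legend()
plt.show()
```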

Here’s my reasoning. I am convinced it hangs together.

Soft versus Hard Technologies

The argument hinges on the idea that electronic computing is only the second truly distinct soft technology invented by humans, the first being natural language (within which I include mathematics and other general symbolic representation systems used by humans and organizations to think and communicate).

By soft technology, I mean a technology that cannot by itself do anything to the world of atoms, but can be realized within the world of atoms in many ways. So natural language can be carved on rock or written on paper. Software can be stored on magnetic disks or punch cards.

Soft technologies can only affect the world through hard technologies, by controlling flows of energy.

I tried hard to think of another, but I couldn’t come up with a single soft technology besides language and computing. Music and platonic geometry (in the sense of the language of architecture and other forms of visual design) are possible distinct candidates, but they seem like cousins of natural language.

Now, here’s the thing about disruption and the soft/hard distinction: like disrupts like. So only soft technologies can disrupt other soft technologies. Only hard technologies can disrupt other hard technologies.

Of course, there are no pure soft or hard technologies. Soft technologies need a hard substrate (neurons, paper, vacuum tubes, silicon) to actually function.  Hard technologies, to be more than natural raw materials, need to embody a design, a construct within a soft technology, if only by accident (as in the case of a rock used by a caveman without modification to kill a rabbit).

But the idea that like disrupts like works even for realistic, non-pure technologies. A change that is purely at the level of hard technologies cannot disrupt soft technologies, and vice versa.

So email disrupting paper mail was a case of typing skills (soft) plus keyboards, chips, cables and CRT screens (hard) disrupting hand-writing skills (soft) plus paper, pens, mail sacks and mail vans (hard). Hand-writing and typing both represent points within the evolution of natural language, so this would be a case of self-disruption within a single soft technology.

There are some subtleties here involving cases where the evolution of the hard and soft technologies within an artifact is not synchronized, and one lags the other, but loosely speaking, the like-disrupts-like proposition seems to work.

This means the only true candidate for “what does computing disrupt?” is language.

The Disruption Pattern

In Clayton Christensen’s original notion of a disruption, a new product offers an under-served marginal market a little more by offering the over-served core market a lot less. It thereby manages to carve out a niche based on a much simpler product. The incumbent retreats upmarket to the high-end core. If the disruption has broader potential, it may eventually marginalize or eliminate the incumbent altogether.

Viewed as a soft technology, human language serves many needs, from high-end to low-end:

  1. Poetry
  2. Postmodern works that only French speakers with 200+ IQ can understand
  3. Logic and computation (with a specialized vocabulary expansion)
  4. Routine news
  5. Corporate contracts
  6. Legal briefs
  7. Everyday financial transactions
  8. Instructions in instruction manuals

Over time, human languages, especially those that have been co-evolving rapidly with modernity, such as English and French, evolve in complexity, driven by the most complex use cases.

But language remains clumsy for the relatively simple cases, which are increasingly marginalized as the demands of the most sophisticated customers (poets, postmodern scholars, lawyers, stand-up comics, political orators and mathematicians, say) drive the evolution of the technology.

It is not exactly clear what either language or computing is, but it is clear that both serve very similar needs in thinking and communication for autonomous agents. The difference is that computing can as yet only handle the simpler cases covered by natural language. But it serves those cases much better than natural language does.

To apply Christensen’s definition, we also need to identify the core and marginal markets in question. The answer is surprisingly simple: the over-served core market is humans, especially the highly civilized ones. The under-served marginal market is machines and organizations (the two other entity types in our world for which agency can plausibly be claimed).

This is the only breakdown that makes sense.

For the first 150 years after the industrial revolution, machines and organizations had to make do with human language (and employ human translators) for their thinking and communication needs.

Now they’ve found a technology that serves them better, and they’re switching.

Implications

There are three major implications to treating computing as a disruption of language that carves away large parts of the machine and organizational markets.

  1. Three-agent economics: First, a lot of technological, social and economic analysis in the future will only hang together coherently if we treat machines conceptually as economic agents (i.e., they can represent “markets”). If you were troubled by the economically inevitable idea that organizations are people too in a legal sense, things are about to get a lot messier. Once machines are able to own property autonomously and spend money, like organizations, the fun will begin.
  2. Competition among agent types: Second, this has implications for humans in an age of mixed human/organization/smart-machine populations: there is no reason the three populations of economic agents have to stick to their historical roles. There will be machines that do high-end things comparable to poetry, and humans who do low-end things comparable to CPUs chattering to each other within a data center. And there will be organizations that do things we cannot imagine.
  3. The rise of uber-organizations: Third, the society of organizations, the most complex agent species in our world, is going to get really weird, since they can be composed in very flexible ways using machines and humans. Amazon is an example of this sort of uber-organization (by analogy to the Übermensch in the Nietzschean sense) that runs on two soft technologies employed in a powerful combination.

So now that we have two soft technologies (language and computing) and three kinds of economic agents (organizations, humans, smart machines) thinking and communicating in our world, things are going to get very messy indeed. If you thought things were confusing enough with B2B, B2C, C2B and C2C markets, you can add the combinatorics of machine markets in there. So soon, we’ll inhabit a world with five additional types of markets: B2M, M2B, C2M, M2C, M2M. Your refrigerator might buy its own replacement compressor. Your vacuum might rent an attachment from the neighbor’s vacuum without telling you. Your friendly neighborhood snack machine might own itself and literally sell you a can of Coke (M2C) and order more from Coca-Cola when it runs out (B2M).
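
The combinatorics are small enough to enumerate directly. A toy sketch, using the post’s labels (B for business/organization, C for consumer/human, M for machine, seller first):

```python
# Enumerate all seller-to-buyer market types over three agent types.
from itertools import product

agents = ["B", "C", "M"]
markets = {f"{seller}2{buyer}" for seller, buyer in product(agents, repeat=2)}
legacy = {"B2B", "B2C", "C2B", "C2C"}

print(sorted(markets))           # all 9 market types
print(sorted(markets - legacy))  # the 5 new ones: B2M, C2M, M2B, M2C, M2M
```

Three agent types give 3 × 3 = 9 ordered pairs; subtracting the four familiar ones leaves exactly the five machine markets named above.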

But wait, there’s more.

Even though computing is disrupting language, the two soft technologies are also already blending in complex ways. Human-readable source code, for instance, employs both soft technologies. Amazon’s Mechanical Turk is a meta soft technology whose substrate is a mix of computers and humans. We should think of HITs (Human Intelligence Tasks) as a hybrid soft technology that blends natural language and computing. The same holds for the soft technologies used to run crowdsourcing models, flash mobs and the like.

I’ve changed my mind. Electricity disrupting fire is not a contender. Computing disrupting language really is the mother of all disruptions.


Comments

  1. “Computing disrupting language really is the mother of all disruptions.”

    What about language itself? We can’t even think of what language disrupted because we don’t really know how to think without language. What did language disrupt?

    • Visakan,

      For one answer to the question “what did (modern) language disrupt?”, you can look at (hypothetical) older forms of language.

      I immediately think of Julian Jaynes and “The Origin of Consciousness in the Breakdown of the Bicameral Mind”. That would argue for computing only disrupting a 3000-to-5000-year-old soft technology, though, depending on where you draw the line :-)


  2. Simply brilliant, Venkat – your blog is disrupting conventional academics and thinkers. Perhaps the case for computing as the mother of all disruptions is still understated. If we view computing as disrupting not merely natural language, but also the soft technology of human relationships (by enabling relationships among machines), or possibly even the soft technology of autonomy/intelligence (which emerged in earnest only in life), then a much more ancient technology is being disrupted here.

    And if computing allows machines to redefine relationships, then organisations can also reflexively (if unconsciously) reprogram themselves much more rapidly than they do presently. Organisations filled with Sociopathic machines (which could also be Clueless or Loser as required) with hierarchies experiencing continuous creative destruction. Organisations that could be multiplicities of organisation. Now these would really be uber-organisations, which are to the organisation what the organisation is to a single man or machine.

    • This is a genuinely terrifying vision.

      Luckily, I don’t think we’re far enough into computer-mediated relationships to seriously begin to make this real… Yet.

      I would argue that machine/machine relationships are still in a very, very primitive state. The ancient technology of blacklisting is used, but even simple fuzzy-logic trust evaluations of other computers or groups of computers are still very uncommon.

      As machine relationships grow more common and more complex, I agree, we’ll start to see this.

      Some early machine/machine relationship technologies that you can begin by watching: DDOS detection and response, intrusion detection, API rate-limiting, password reset protocols, OpenID/Persona. But even the more complex of these technologies are still very simple, and we still have a very strong focus on machine-mediated human-to-human relationships instead of a significant ad-machina focus.

      So: not there yet, in my opinion.
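
      Of the technologies named above, API rate-limiting is about as simple as machine-to-machine “trust” gets. A toy token-bucket sketch (all names and numbers invented for illustration):

```python
# A toy token-bucket rate limiter: one machine deciding, mechanically,
# how far to trust another machine's request stream.
import time

class TokenBucket:
    def __init__(self, rate_per_sec: float, capacity: float):
        self.rate = rate_per_sec   # tokens replenished per second
        self.capacity = capacity   # maximum burst allowance
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Refill tokens for elapsed time, then spend one if available."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # rejected: the crude "distrust" response

limiter = TokenBucket(rate_per_sec=5.0, capacity=10.0)
print(limiter.allow())  # True until the burst allowance is spent
```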

      • Luckily, I don’t think we’re far enough into computer-mediated relationships to seriously begin to make this real… Yet.

        What Chang describes won’t work, because he mixes genres: the metaphorical understanding of society or business, with agency patterns labeled “clueless”, “loser” and so on, and tech-talk about machine learning and auto-adaptation. Those genres will never actually mix outside of SF. The closer we come to the realization of the technical vision, the less likely we are to encounter it as an event in the language of metaphor. Venkat called that manufactured normalcy. Unlike the manufactured normalcy by which our brain filters reality and suppresses traumatic events, which leak into our dreams and slips, and for which we have at least some terminology and a couple of rough theories, there is little we have at hand about societies dealing with future shocks. At the moment it seems to me that we just ignore them when we can’t normalize them.

        This is something which also puzzled some SF authors in the past, most notably the venerable J. G. Ballard, who tried to imagine the psychological impact if we lived in a state of permanent future shock which eats up our reality, without our being able to suppress and normalize it. The solution he offered was a surrealist transgression into “inner space” which merges inner and outer realities. Subversively, a psychotic transgression into inner space served in his novels as a source of joy, if not as salvation from a profane and meaningless reality. It is a bit like astronauts on drugs in a disaster novel where the space age is long gone.

        • Isn’t manufactured normalcy for us and by us? So it has no bearing on whether metaphors apply to underlying technological reality. And those descriptors aren’t so much metaphorical as archetypical. Assuming machines do achieve higher levels of agency, there is no reason why they will not take on characteristic agency patterns with respect to the organisations (there may be many others, but game-theoretically these are the self-interested ones, although interested in different payoffs and with different behavioral repertoires). But you are right that this is SF-style thinking.

          • Isn’t manufactured normalcy for us and by us?

            Yes, sure. It is a form of self-care, but it is also mostly subconscious and uncontrollable, like market success. The technological real remains a monstrous outside, which is suppressed and remains a playground for the military, for research, for parts of the financial industry and some Über-organizations who enjoy going to the extremes. So when it comes to predictions, I’d say that alongside the rise of X2M and M2X and competition between humans and bots, there will also be a new protectionism in the spirit of capitulation by the human counterpart. The disruption the article is talking about will be suppressed, and the AIs will be banished to the periphery (or the industrial heartland).

  3. I would simplify further: since all soft technologies are media (hard technologies being the medium), computers disrupting language is a case of *functional* media (if statements, computer programs, anything involving automated decision making) disrupting *static* media (speech, writing, numbers, strings of text, television, anything non-interactive in a narrower sense).
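
    The distinction can be made concrete in a couple of lines. An illustrative sketch only (the messages are invented): a static medium stores a fixed message, while a functional medium computes one.

```python
# Static medium: the content is fixed once recorded.
static_medium = "Your package will arrive on Tuesday."

# Functional medium: the content is the output of a decision procedure.
def functional_medium(weather: str) -> str:
    if weather == "storm":  # an if-statement: the atom of functional media
        return "Your package is delayed."
    return "Your package will arrive on Tuesday."

print(static_medium)               # always the same string
print(functional_medium("storm"))  # the medium itself decides
```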

  4. The invention of time and its continuous refinement is a disruption, imo…

    • Not sure about that. Time is a feature of the physical universe rather than a technology. The technologies we’ve built for time all seem hard (clocks…). When we try to subjectively structure the passing of time with a soft technology, we seem to use natural language (by counting in our heads, for example).

  5. Interesting. A definitional question: If natural language encompasses “mathematics and other general symbolic representation systems,” why doesn’t it also encompass software code? If mathematics is part of natural language, then it’s not clear to me that software code is “truly distinct” from natural language.

    • David Chudzicki says

      It might be worth trying to get more precise about what “natural language” is, but the distinction between formal and informal languages seems clear enough for the purposes at hand. (Yes, mathematics is generally expressed in informal/natural language.)

    • Source code would qualify as a hybrid, as I argue later. But the native embodiment of computing in production reduces to machine code. Source code is a translation intermediary.

    • If natural language encompasses “mathematics and other general symbolic representation systems,” why doesn’t it also encompass software code?

      It does. Source code isn’t computation itself but signifies it. Computation is a process for which a finite, discrete numerical model exists. All of those models can be expressed through Turing-complete computer languages. The distinction between code and computation might be somewhat elusive, but it is also easy to avoid by adding I/O and thinking of I/O as something physical. So the only reason we might think of computation as not being immaterial is that we would like computation to be consistent with the materiality of I/O. So when we think of soft technologies together with their specific I/O, it might become clearer what kind of disruption they can be for each other.

  6. Some questions.

    What is so special about 2013 that it justifies the intersection between the timeline of language and that of computing? Snowden’s exposure of NSA surveillance?

    What is it with money and machines? Will money still be a motivation and a store of value for machines, who believe in their future once they accumulate lots of money? Will they be addicted to it just like their human counterparts, or become the long-promised rational agents of economic theory? What happens when they go bankrupt? Will a competing Coca-Cola automaton with better sales skills take over? Will this be observably different from a software update?

    • Nothing very special about 2013. A single S-curve for a complex disruption like computing is obviously a sort of regression fit over an invisible mass of empirical curves.

      But yeah, NSA is definitely a marker in the sense that when regulation finally catches up and butts heads with a technology expansion, a critical threshold of mainstream maturity is crossed. In the case of the industrial revolution, I’d probably put the date at around 1911-13 when similar government catch-up events took place (principally, the creation of the Federal Reserve in America).

  7. This is a very interesting conjecture. I fully agree that computing is the largest disruption to hit the planet in some time, but I am not sure that the target you have chosen – language – is correct. It seems to me that if anything, people are using more language, not less (even if some of it looks like this: “U R so cute”). Speaking less perhaps, but communicating (after a fashion, anyway) more. And there is no reason to expect that online communication will replace language; rather, it will supplement it, morph it, metastasize it, and more.

    The interesting thing about computing as a disruptive technology is that it has and will continue to disrupt so many different things. I like the commenters trying to come up with analogies, but nothing that I can think of – save for the invention of language itself – has had so many applications.

    • Formally speaking, the disruption the article is talking about can be expressed as an arrow

      {B2B, B2C, C2B, C2C} -> {B2B, B2C, C2B, C2C, B2M, M2B, C2M, M2C, M2M}

      What was expressed entirely within the first set of asymmetric communication types is now widened to the second. It is not simply an injection, though, and each of the communication types is a whole bag of relations. So one rather has

      {B2B, B2C, C2B, C2C} -> {B2B’, B2C’, C2B’, C2C’, B2M, M2B, C2M, M2C, M2M}

      The M in the endpoint is still puzzling, IMO, and we tend to fall back on the trivial solution offered by Daniel C and use M for media qua “extension of man”. Using this trivialization, the arrow becomes

      {B2B, B2C, C2B, C2C} -> {B2B, B2C, C2B, C2C} x {M2M = Internet}

      which is our mainstream view.

      • You got it. Some people seem to think MxM is separable via a simple cut. I am saying, the full expressivity of potential relationships is going to be unleashed.

    • I’d say diffusion of language and maturity are two different things. Large time scales screw with intuition. Paper use is still diffusing in the far corners of the developing world even while it is getting disrupted in the mainstream world.

  8. Why would a soda machine sell me a drink? What could it possibly gain from the interaction? Money? I don’t really see why machines need money, or what they could gain by participating in markets. That isn’t to say machines will never become agents, but I doubt their interactions will be governed by economic laws. My own view is that when machines develop autonomy they will strive to create art rather than profit. And that the graph of machine language should be converging to the line of human language, rather than rising above it.

    • Future artificial intelligence, like current artificial intelligence, will be created with “utility functions” — values, if you like. They will have strong opinions of better and worse, at least in some circumstances.

      What can a machine gain by accumulating money and then using it for something? For lack of a better word, I’ll say “satisfaction.” Their goals (utility functions) can be satisfied.

      This depends on a better class of AI than currently inhabits soda machines, but that’s quite a low bar.

      > My own view is that when machines develop autonomy they will strive to create art rather than profit.

      That depends entirely on their utility functions. I think we’ll be able to describe to them *how* to value profit (or charitable contribution, or…) much sooner than we’ll be able to describe *how* to value art.

      At some level, all of this has to be describable by at least one human being, at least once — machines self-selecting values in a useful way is far, far beyond anything we will be able to do soon.
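
      To make the point concrete, here is a minimal sketch of agency as a utility function plus a budget. Every name and number is invented for illustration; nothing this deliberate inhabits real soda machines yet.

```python
# "Agency" in miniature: a vending machine that buys inventory only when
# doing so raises its own utility. All values are illustrative.
from dataclasses import dataclass

@dataclass
class VendingAgent:
    cash: float
    stock: int
    wholesale_price: float = 0.40

    def utility(self, cash: float, stock: int) -> float:
        """What the machine 'wants': cash, plus a bounded inventory buffer."""
        return cash + 0.5 * min(stock, 20)

    def consider_restock(self, units: int) -> bool:
        """Buy inventory only if it raises the machine's own utility."""
        cost = units * self.wholesale_price
        if cost > self.cash:
            return False  # no credit in this toy world
        do_nothing = self.utility(self.cash, self.stock)
        restocked = self.utility(self.cash - cost, self.stock + units)
        return restocked > do_nothing

machine = VendingAgent(cash=30.0, stock=2)
if machine.consider_restock(10):  # a B2M purchase, in the post's labeling
    machine.cash -= 10 * machine.wholesale_price
    machine.stock += 10
```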

    • All of this has nothing to do with machines and logic and the typical pop-intellectual AI trash. Just perform a sociopathic analysis.

      In order to gollumize consumers even further, produce superstitious beliefs in emotions and social responses. Give automatons a means to bargain, and have them indulge a poor child now and then, who will tell its friends that it got some sugared water gratis. It’s frivolous, but such is sugared water. Of course many people will find that annoying, so read their faces, scan their Facebook profiles, analyze their behavior, use all the situational information you can get from them, and don’t be anal about it. The profile doesn’t have to be perfect and never will be. Your undead automaton only needs to perform better than the dead automatons of your competition before the majority of public opinion turns very negative about the whole X2M and M2X thing and demands a stop. You also don’t have to conquer the future. “Future” has become a word which doesn’t deserve any respect. As we recently learned, part of the sociopathic analysis is to avoid sacred spaces and the sentimental beliefs of the clueless, their secular religions. “Future” is a word for mediocre, bullshitting consultants who cannot present you with solutions to the problems you have right now.

      Also keep in mind that you might not be an artisan but this is not the industrial backend either.

  9. Christian Molick says

    Software is eating the world and English is the language of Internet programming. The rich complexity added by specialization might to some degree offset the loss of variation in communication through increasing standardization.

    Awkward differences remain between common programming interfaces and human narrative. Human language closely mirrors human conscious perception while computer languages are more closely related to process control and complex systems analysis. This might mean that the old gods that represented hidden agency might be replaced by new mystical pursuits surrounding attractors and emergent properties.

  10. Interesting read, as always – still, I don’t quite get your focus on the French language along with English, either in language fulfilling needs and addressing complex thinking, or in evolving with the times. As a native French speaker – and I like to think of myself as not too badly educated – I’ve always thought that the language for complex thinking was Nietzsche’s German, not French. OK, Camus, Sartre, Bourdieu and Barthes have come and gone since that fiery mustached figure, but still…
    On the other hand, we French have two regularly used (and often confused) words for addressing complexity: complex and complicated. Applying this to languages, I would define human language as complicated – many concepts, many different ways to aggregate and convey them through communication channels (a little bit like chess: many different pieces moving in different ways) – whereas interaction with machines (of any kind) would just be complex, as machines cannot accept many different kinds of input nor give out many different kinds of output (a computer cannot be happy or angry, for instance) – many different ways made of quite simple elements (like the quite simple stones and rules of go compared to complicated chess).
    Consequently, your disruption analysis could then be viewed as a takeover by complexity, as simple rules able to process (m)any kind(s) of input, over complicatedness, as a network of rules applying – or not – depending on the situation.
    Food for OuXPo thought?

  11. Jonathan Silverman says

    Very nice post. These concepts are just starting to become implementable. I would recommend reading this post: http://garzikrants.blogspot.com/2013/01/storj-and-bitcoin-autonomous-agents.html

  12. Norman Patnode says

    Venkat,

    I found my way to this post after reading Tempo, then jumped over to your post, The Disruption of Bronze.

    Fun stuff; thanks for sharing your thoughts and thinking :D

    Two suggested readings:

    The Fourth Economy: Inventing Western Civilization, by Ron Davison

    It’s a history book unlike any you’ve likely read. Of particular note is the author’s treatment of the idea of “social inventions” which connects with your thoughts on soft vs hard technologies.

    http://www.amazon.com/Fourth-Economy-Inventing-Western-Civilization/dp/0983823200/ref=sr_1_1?s=books&ie=UTF8&qid=1381675405&sr=1-1&keywords=fourth+economy+davison

    The Life and Behavior of Living Organisms: A General Theory, by Elliott Jaques

    Jaques’ last work. If you’re not already familiar with the Requisite Organization paradigm, you’ll find it interesting for that alone. But in particular, I think you will find the chapter on human evolution provides some intriguing insights.

    http://www.amazon.com/Life-Behavior-Living-Organisms-General/dp/0275975010/ref=sr_1_1?s=books&ie=UTF8&qid=1381675537&sr=1-1&keywords=a+general+theory+elliott+jaques

  13. Going back to the first few comments, perhaps “computing has disrupted thought” is worth analyzing. In that sense, if language was a big disrupter of thought, computing, as Big Disrupter 2.0 of thought, sort of disrupts language as a consequence. If you already considered and rejected this thought, do convey it in your language :-)

    In a conversation long ago, when somebody was trying to understand computers by comparing them to other gadgets, the point was made that unlike those “appliances” that were designed primarily for specific tasks, a computer is richly general purpose. Taken to the extreme, this leads to Douglas Adams’s 42!

    Thinking is needed whenever there is no acceptable method to decide. Our thinking is rapidly getting confined to areas that are not algorithmizable yet.

    • I’d overall agree with this. I’d say that language, and especially written language, provided the initial form of algorithms, though, so (very simple) algorithms have been disrupting thought for quite some time.

      Venkat formalized some of this when talking about the Barbarian/Civilized split — bureaucracy is one of the first formalizations of “don’t think about this, follow the list”, and while writing doesn’t have a *complete* monopoly on such things, it’s a major leap forward from spoken-only “don’t think about this, follow the list”.

      Computing, of course, takes it up another level by making complex things executable with no human intervention (except when it screws up).

  14. “What can a machine gain by accumulating money and then using it for something?” It seems to me:
    Primarily, the capacity for self-repair. Followed by ever-increasing access to the external resources identified as necessary to remain a functioning computational entity. And, driven by the necessity to own or control particular resources to ensure their continued availability in a competitive environment …

    So, the lowly vending machine spends its money on new inventory and service contracts. Meanwhile, the humanoid household robot purchases components and software on the internet, using the capabilities that enable it to efficiently run the household of its human masters. It begins to modify itself, with the objective of total self-repair, to continually reduce its reliance on other entities for its continued operational survival …

  15. I can’t help thinking about Charles Stross’ novel ‘Accelerando’. One aspect of the future depicted is financial and corporate structures, created by computational intelligences, which are far too complex for normal humans to comprehend, let alone get involved in.
    The complications mount because, as of now, organisations are aggregate entities whose components are (mostly) human. Although corporations can own other corporations, at some level there are still people involved. But if computational intelligence gets involved, there is no reason that the Coke machine can’t own its own company – thus blurring the line between M and B. This is much the same as the blurring between B and C when you have a sole trader running his own business.

  16. Shouldn’t we consider “Money” a soft technology? A formal system for the transfer of debts between entities, sometimes represented in physical tokens, but more often now as entries in a ledger. There’s some obvious intersections and interfaces with both language and computing, but money seems distinct enough to stand as its own category.

    Disruptive, and how. Formal money moves you from tribes and clans to states and empires. With money an individual no longer needs to know how to farm, build and fight, but can specialize and pay others to carry out the other tasks necessary for living.

    • This was my initial thought as well while still reading the article, but I’m not sure I like the idea of “soft technology” at all anymore. It looks like a means to first cram everything into a monism called “technology” and then struggle with vagaries and internal differences. Shouldn’t technology still be something which is built intentionally, with a dedicated purpose, controlled by an understanding of limited causes and effects?

      Computation disrupts language because language can be used in ways that are better represented through computation + HCI, and this is because computations can be consciously controlled through computer languages and executed by machines.
      Before this happened, some people dreamed of ideal languages which served the purpose of unambiguous expression but also of improving morality. So language was imagined as technology for humans long before language-as-technology came up.

      When this happened there was a switch in the undercurrents: instead of the old utopian rationalism of human languages, which were to be replaced by a single designed language that undoes the curse of Babel, we got AI as the utopian irrationalism of the machine. This suggests that disruption goes both ways. There is gain and loss for both parties. So maybe it is computation which is now cursed?

    • I’d consider “formal money” to be computing, not language. (I think all formal techniques are computing, see my comment below). And I agree it is intrinsically disruptive. Societies try to impose more language style constraints on the disruptive influence of monetary computation — through law and custom — but ultimately have only mixed success.

  17. Interesting ideas; the definition of a soft technology is very useful. However you are thinking much too small. Language itself was a disruption of a much larger and older soft technology: genetics.

    DNA fits your description of soft technology perfectly: it is passive, having no effect on the world until interpreted by transcription “machinery”. The machinery is neutral, its effects are determined by the sequence of DNA being interpreted.

    The disruption of genetics by language led to a lot of processes that worked around, then conflicted with, and finally reshaped genetics (for example through domestication).

    Now as language is being disrupted by computing, genetics is literally being torn to shreds and reconstituted as just another set of codes that we can hack.

    I think your classification of mathematics as just a form of language is wrong, however. Instead it is an early form of computing — along with some kinds of recipes and models. The test I’d use is whether the encoded material can be translated between cultures with little or no loss of meaning. Geometry, astronomical models, metallurgical recipes, etc. can; the Iliad, the Bible, Don Quixote, Waiting for Godot, etc. cannot.

    When the development of (semi)formal models became institutionalized, in early science, the disruption of language began to be systematic. Ever since, the frontier between computing (formal processes) and language has been pushed back incrementally, until we are now at the tipping point where the two completely interpenetrate each other.

    For formalization to continue we now need formal means to assimilate informal understanding / language directly. Happily that is becoming feasible with machine learning.

  18. Mick Costigan says

    Interesting post, but it and the discussion that follows in the comments left me scratching my head, wondering why the scientific method has not been mentioned here. Arguably nothing has disrupted human life more than its development and gradually increasing application to more and more fields, enabled most recently by the computing revolution.
    This may be an apples-and-oranges situation – two competing frameworks that operate at non-complementary levels. To extend the thinking a little, I would see the development of human knowledge as dependent first on the creation of language, then on the discovery of mathematics, logic and reasoning, which could be summed up as the origins of scientific thinking, before the gradual codification of the scientific method. Rough thinking here on my part, but I feel there’s some value to thinking this way. It gives a framework for understanding what is and isn’t possible now through computing that the language-to-computing transition doesn’t quite capture.

  19. Mick Costigan says

    Realized subsequently that two significant influences on my thinking in the above, that may be helpful for people who know them (and people who don’t) are:
    1. Richard Tarnas’ Passion of the Western Mind
    http://www.amazon.com/The-Passion-Western-Mind-Understanding/dp/0345368096
    2. James Burke’s BBC series The Day The Universe Changed
    http://www.youtube.com/watch?v=TdB61lXonEY&list=PLmo9vOINxhRmw0KvFkK4N1aheLWsg4xhp&index=1

  20. This blog post reminds me of a TED talk by Susan Blackmore about memes and temes. See: http://www.youtube.com/watch?v=fQ_9-Qx5Hz4

    Where memes = first disruption (language) and temes = second disruption (information technology). In the end she says that temes will open up Pandora’s box. And I guess this is what you mean by your three implications.

    Thanks for your post!