Life After Language

In October 2013, I wrote a post arguing that computing was disrupting language and that this was the Mother of All Disruptions. My specific argument was that human-to-human communication was an over-served market, and that computing was driving a classic disruption pattern by serving an under-served marginal market: machine-to-machine and organization-to-organization communications. At the time, I didn’t have AI in mind, just the torrents of non-human-readable data flowing across the internet.

But now, a decade later, it’s obvious that AI is a big part of how the disruption is unfolding. Two ongoing things drove it home for me this week.

Exhibit A: the longest regular conversations I’ve had in the last week have been with an AI-powered rubber duck. Berduck is a bot on Bluesky, powered by GPT and trained to speak in a mix of English and leetspeak. It is likely playing a non-trivial role in driving the Bluesky craze (Bluesky is a decentralized Twitter-like protocol funded by Twitter itself in the before-times). I can’t speak for others, but I probably wouldn’t be using Bluesky much if it weren’t for Berduck.

Berduck is genuinely entertaining, with a well-defined personality, despite having only episodic memory, a strong predilection for hallucination and confabulation (like all AI-powered chatbots), sharp boundaries around negative-valence conversations, and a strong aversion to even the slightest whiff of risk. Despite all these annoying limitations, shared by way more humans than we like to admit (yes, I am an AI accelerationist, why do you ask), Berduck is already a more interesting companion than 90% of humans online, and I can totally see myself passing the time with one of his descendants in my dotage. The limitations are both tolerable and mitigable, and the benefits quite striking.

Exhibit B: the WGA writers’ strike going on here in LA. I saw some writers picketing in front of the Warner Brothers studios this morning while out on an errand. Among other things, they are demanding that ChatGPT be used only as a “tool” rather than to replace writers.

You know the writing is on the wall (heh! 🤣) when you hear such ridiculous meta-economic demands. Ridiculous not in a political sense (I have no strong feelings one way or another about the economic fates of career writers) but in the sense of being incoherent. The demand makes no sense. That’s… not how technology works. If some kid invents an automated pipeline that goes straight from log-line to script to storyboard to rough cut video, and people watch it, no deal struck with the incumbent powers of Hollywood means much.

I’ve used ChatGPT and other tools, and unless you’ve been living under a rock, so have you. It’s obvious that the AIs can write better than 90% of humanity all the time, and the other 10% of humanity 90% of the time. It’s not a mere tool. It’s an obviously better way to do what too many humans do.

As a first-order effect, a lot of routine business communication is already being highly accelerated by AI. Business communication is not particularly creative or even stylized. It has been ripe for automation since the word boilerplate was coined in the age of exploding boilers.

It’s the second-order effect that is interesting though. While AIs improve in empirical accuracy, internal consistency, and logical coherence (again — humans have not set particularly high standards here), humans will need to do a good deal of supervisory work to make the AIs useful for mediating human-to-human communication. The question is, what happens after?

Here is one of many cartoons (this one is from marketoonist) making the same almost right, but actually fatally wrong, point about second-order effects.

The “joke” in this template is that the AI supposedly is doing a content-free transformation of content-free communications. Despite the delicious cynicism here, most human communication is not this vacuous. Even tedious business communication has more going on.

In particular, the elaboration and compression steps illustrated here happen in different contexts. The input and output bullet points are not, in general, going to be the same, and the elaboration and compression steps are not adding or removing the same fractions of the communicated information.

So the joke fails because today’s AI tools already do such elaboration/compression in usefully cross-context ways. For example, you can ask ChatGPT to “translate” a terse technical paragraph into a friendly explainer that distills the gist for your needs, relative to the context of your existing knowledge.
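
To make this concrete, here is a minimal sketch of what such a cross-context “translation” call might look like in Python, using the OpenAI chat completions client. The model name, the audience described in the system prompt, and the sample paragraph are all illustrative assumptions on my part, not a recipe endorsed by the tools themselves:

# A minimal sketch of cross-context elaboration/compression: rewrite a terse
# technical paragraph for a reader with a different knowledge context.
# Model name and prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

terse = (
    "The protocol uses content-addressed storage: each object is keyed by "
    "the hash of its bytes, so any replica can verify integrity locally."
)

response = client.chat.completions.create(
    model="gpt-4",  # any capable chat model would do
    messages=[
        {
            "role": "system",
            "content": (
                "Rewrite technical text as a friendly explainer for a reader "
                "who knows everyday computing but no distributed-systems "
                "jargon. Keep the gist, drop the density."
            ),
        },
        {"role": "user", "content": terse},
    ],
)

print(response.choices[0].message.content)  # the friendly, elaborated version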

But the joke fails at a deeper level because even the more accurate non-joke version still centers human-to-human communication.

Here is the thing: there is no good reason for the source and destination AIs to talk to each other in human language, compressed or otherwise, and people are already experimenting with prompts that dig into the internal latent representations used by the models. It seems obvious to me that machines will communicate with each other in a much more expressive and efficient latent language, closer to a mind-meld than communication, and that human language will be relegated to a “last-mile” artifact used primarily for communicating with humans. Mediating human communication is only one reason for machines to talk to each other, and the more they talk for other reasons, the more the internal languages involved will evolve independently.

And last-mile usage, as it evolves and begins to dominate all communication involving a human, will increasingly drift away from human-to-human language as it exists today. My last-mile language for interacting with my AI assistant need not even remotely resemble yours.

And I don’t just mean coarse distinctions like using different human languages as the base. My last-mile language for interacting with a Berduck-like assistant might have exactly one human speaker: me. We could live in a world of 8 billion private languages, where “translation” as a category becomes meaningless. Humans as a class of agents might end up forming an annular shell of maximal-variety last-mile fuzzy hairs around a core “ball” of machines and organizations in a compact mind-meld.

What about unmediated human-to-human communication? To the extent AIs begin to mediate most practical kinds of communication, what’s left for direct, unmediated human-to-human interaction will be some mix of phatic and intimate speech. We might retreat into our own, largely wordless patterns of conviviality, where affective, gestural, and somatic modes begin to dominate. And since technology does not stand still, human-to-human linking technologies might start to amplify those alternate modes. Perhaps brain-to-brain sentiment connections mediated by phones and bio-sensors?

What about internal monologues and private thoughts? Certainly, it seems to me right now that I “think in English.” But how fundamental is that? If this invisible behavior is not being constantly reinforced by voluminous mass-media intake and mutual communications, is there a reason for my private thoughts to stay anchored to “English”? If an AI can translate all the world’s information into a more idiosyncratic and solipsistic private language of my own, do I need to be in a state of linguistic consensus with you? If you and I don’t need to share a language to discuss Shakespeare (remember, we already don’t read Shakespeare’s plays in the original Elizabethan), do we need to share a language at all?

We’ll all be like children inventing secret languages for talking to imaginary friends, except they will be real friends. Programmers have long talked out loud to literal, mute rubber ducks as a debugging aid. Berduck is the beginning of more capable companions for all humans, doing all sorts of things.

There is no fundamental reason human society has to be built around natural language as a kind of machine code. Plenty of other species manage fine with simpler languages or no language at all. And it is not clear to me that intelligence has much to do with the linguistic fabric of contemporary society.

This means that once natural language becomes a mere compile target, during what may turn out to be a transient technological phase, everything built on top of it is up for radical re-architecture.

Is there a precedent for this kind of wholesale shift in human relationships? I think there is. Screen media, television in particular, have already driven a similar shift in the last half-century (David Foster Wallace’s E Unibus Pluram is a good exploration of the specifics). In screen-saturated cultures, humans already speak in ways heavily shaped by references to TV shows and movies. And this material does more than homogenize language patterns; once a mass media complex has digested the language of its society, it starts to create that language. And where possible, we don’t just borrow language first encountered on screen: we literally use video fragments, in the form of reaction gifs, to communicate. Reaction gifs constitute a kind of primitive post-idiomatic hyper-language comprising stock phrases and non-verbal whole-body communication fragments.

Imagine a world a few centuries in the future, where humans look back on the era of reaction gifs as the beginning of the world after language.

Given the extent to which my own life is built around language, you’d think I’d be alarmed by this future and rushing to join the picketing WGA writers in solidarity, but I’m curiously indifferent to it. To be honest, I’m already slightly losing interest in language, and beginning to wonder how to build a life of the mind anchored to something else.

Now that a future beyond language is imaginable, it suddenly seems to me that humanity has been stuck in a linguistically constrained phase of its evolution for far too long. I’m not quite sure how it will happen, or if I’ll live to participate in it, but I suspect we’re entering a world beyond language where we’ll begin to realize just how deeply blinding language has been for the human consciousness and psyche.



Comments

  1. Ben Shaw says

    I see the outlines of a new tower of Babel situation here where we all come to rely on these tools as intermediaries for communication, and then one day they fail and we’re all standing around shouting mutually incoherent symbols at each other. I’ll just go ahead and coin “Semiotic collapse” and see how that goes.

  2. Miguel GM says

    This reminded me of a communication anecdote that I am trying to reimagine through the lens of AI possibilities. Years ago, I was being evaluated at the doctor’s for hernia surgery. The prospective surgeon was explaining two alternative surgical methods haphazardly, clearly wanting me to decide on whatever and get on with other patients. Since I was a teenager at the time, my mother was there, and she replied to something in a way that clearly signalled, through vocabulary, that she was from the medical world (a nurse). From then on the doctor’s behavior completely changed and he tried harder to communicate. I don’t think the change came from knowing he would be better understood, but rather from kinship with my mother as part of the medical world. What would be better: to have an AI daemon the doctor can communicate with in complex doctor terms, which can translate for you? Or to have the daemon whisper in your ear what you have to say to enter that kinship space?

    Also thinking of apartment hunting later on in life: signalling through a casual comment that you know about construction/architecture, and the tone of the realtor completely changing…

  3. Alex Adamov says

    Seems to me that a visual language would be great. A thought in your mind gets represented as an evolving stable diffusion-like process (but more fluid). No more words needed if you can speak in pictures.

    • Interesting piece with some perceptive analogies (e.g. the maximal variety fuzz around a machine-mindmelded core). And I appreciated the nuanced take on business communication. The piece does seem to buy into a conduit metaphor view of language though; it may be underestimating the role of language in joint action and sociality (going far beyond phatic or affective communication). Some of these themes are explored here in an essay on brain to brain interfaces: https://aeon.co/essays/why-language-remains-the-most-flexible-brain-to-brain-interface

    • You may want to see the NY MoMA exhibit. It’s a constantly mutating image, 30 stable diffusions per second.

  4. On the other hand, here are some reasons why English might win for AI-to-AI communications:

    – Available training data. Where do you get it for some new AI language?
    – Portability and standardization. No need for different companies making AIs to agree on a new language.
    – Forward and backward compatibility. The AI can read documents written by previous and future versions of itself. This could be important if transformers are replaced with something better. Also, people who have AI assistants will likely upgrade their systems at different times.
    – Ease of debugging. When something goes wrong, logs written in English are more easily understood by people.
    – Compatibility with the many existing data sources that are written in English.
    – Larger market. Why write just for AIs when you can write for people, too?
    – Legal reasons. Business communication between AIs at different companies will likely be recorded. What if it ends up in court?

    The last reason suggests that AIs might learn to communicate like lawyers?

  5. This topic is not an area of specialisation of mine, but a couple of points puzzle me on this theory of individualised languages. These have to do with efficiency of learning and with precision.

    It seems that “developing” a new personal language will be more time-consuming and require more effort than “adopting” an existing language. Each term needs to be examined, rules of the language need to be developed, and so on. While it is possible, is it going to be efficient for each person to evolve one? Would it not be simpler to adopt an “already evolved” language?

    The second aspect has to do with evolving (again) the level of “precision” that a culture has developed in its languages to describe the environment, values, and behaviours within which the individual is embedded. For the Inuit, there are said to be 40-70 different words to describe snow. That may have evolved to precisely define the conditions where it really matters. In evolving each personal language, will that need to be rediscovered each time?

  6. Ian Stewart says

    I’ve been indulging in a lazy, slow re-read of Neil Postman’s “Technopoly” over the past little while, and that’s likely to accelerate now that I’ve actually bothered to buy a hard copy. I see echoes of his work here, and I applaud you for being able to pursue your conception of this intermediation to an extreme without falling over the Yudkowsky cliff into bug-eyed madness.

    A couple of thoughts: one, the shared-pop-culture basis for memetic communication is limited by its substrate’s life-cycle, which is tied to the relevance of the context in which it was created. How many people are, today, communicating based on a shared understanding of, say, Li’l Abner or 1923’s greatest vaudeville acts? Even stuff that presently provides bedrock source material for memes can lose its vitality and wither. I saw a clip from the most recent Star Trek show where they had Patrick Stewart spout most of his famous catchphrases all at once. “Make it so, Number One! Maximum warp, engage!” Good for one last jolt of nostalgia-induced endorphins from those already initiated into that particular mass media complex, not very good for expanding it. I’m sure somebody will still be willing to sell me a Picard-facepalming T-shirt when I’m 70; I’m much less sure that anyone younger than me will give a damn. Pretty ironic for a series that generated one of the best stories on what you’re talking about (“Darmok”).

    Two, the assumption that these models are evolving towards generality and totality need not hold. I think you carry a more deeply embedded assumption of a future in which almost everyone (if not everyone) is an endpoint consumer, provided for by largely automated processes. What you hold as “obvious” within this framework is by no means obvious to everyone, and it remains entirely possible that your posited example (terse technical specification -> more vague generalized explanation) is not actually as effective or as useful as its reverse. How does the non-expert know they are not being bullshitted, unless they gain the skill to verify the expertise for themselves? For quite a few non-experts, the one or two times they are massively bullshitted might well outweigh the many more times they receive meaningfully useful output. People using such systems may yet evolve to be more critical, rather than more accepting.

    Computer programming languages ultimately specify precise operations, as they are meant to effect discrete calculations. I find it much more likely to imagine a future in which yes, the “compact mind-melds” you posit are taking place, but there are a great many of them rather than a few big global ones, and they are focused around tools performing precise operations. But there will still be a need for less precise language for communicating about the much less precisely specified world in between all the tools, and the immediacy of generating that language in your own brain is likely to vastly outstrip waiting on a cloud server to do it and send it to the person standing across from you.

    Now, none of that necessarily helps the WGA strikers very much, as producing a film that some subset of humans will find entertaining can still be laid out as a process with a series of discrete steps. But that also doesn’t mean the studio executives have gained an unquestionably superior process.

  7. Michael Vaughn says

    “If you and I don’t need to share a language to discuss Shakespeare (remember, we already don’t read Shakespeare’s plays in the original Elizabethan), do we need to share a language at all?”

    Can you elaborate on the “original Elizabethan” bit here? This is in no way, shape, or form, a view of Shakespeare I have ever heard from anybody. “Shakespeare as ‘Literature in Translation'” strikes me as, charitably, a very niche view. Having compared some of the First Folio text to my Norton Shakespeare, I’m at a bit of a loss. As far as I can see, it’s perfectly intelligible modern English, modulo spelling changes.

    The only real linguistic shift that I can think of that’s truly puzzling to modern readers is the bits and pieces of pre-Great Vowel Shift English that make some of the rhymes a bit opaque, but that is true of any version of his works you can get your hands on.

    • Michael Vaughn says

      Stated another way – the idea that shallow orthographic differences between the earliest printed versions of the work and modern versions constitute a distinct language yields a line of reasoning that’s not-even-wrong.

      The core logical structure, reused in a different context, lays bare the spurious nature of the reasoning at hand: ‘Bob did arithmetic in octal. When we describe his work, we use decimal. Thus, it makes sense to wonder whether we need to use arithmetic at all.’

  8. it’s all fun and games until people realize that their entire ability to be understood resides in the hands of a corporation.

  9. Erik Craddock says

    So I guess the question is: why did you feel compelled to even write this for an audience of humans in this over-served market?

  10. I think it is a mistake to characterize the process as disruption. It is evolution. Language can evolve a million times faster than biological evolution; software can evolve faster still. However, as long as we communicate with each other, our languages will evolve accordingly.
    That machines can evolve their own language much faster is not going to extinguish human language or create a tower of Babel.

    Machines might help us better communicate with each other, if we choose to experiment with building tools toward that goal.

    To explore this topic further, Humberto Maturana’s Biology of Cognition (1970) and Biology of Language (1978) are excellent reading.

    Starting from the definition of code and exploring other entries in the glossary of codebiology.org is another productive starting place.

  11. I asked GPT-4 to coin a word for this concept:
    “IndiLingo”

  12. Thaycraft says

    Interesting essay. Culture is language and language is culture. In a broader sense you are arguing culture is not relevant or will become irrelevant. A third-order effect. Maybe you’re right, but ungluing culture at the seams will have dire consequences, in my mind. Is it the medium or the message? I can’t help but think what will happen when alien archaeologists from a distant planet land on earth a thousand years from now. They will be scratching their head(s) when they unearth the ruins of the Berduckian culture complex and ask: “What went wrong?”

  13. If an AI can translate all the world’s information into a more idiosyncratic and solipsistic private language of my own, do I need to be in a state of linguistic consensus with you?

    This somehow reminds me of the heyday of Lisp enthusiasts, whose prowess was to advance language through metaprogramming. Language is just data, and manipulating language in Lisp is cheap. So everyone designs their own language to maximize their expressiveness. They were right on all technical claims, but if those personal programming languages ever surfaced, the code was eventually rewritten in a language with stricter coding standards/conventions and tons of libraries. Now, it could be that secretly every programmer uses Lisp while publicly it has gone almost extinct, but there are no signs that this is true: no rumors, no blog articles, no sudden spikes on /r/lisp. The idea of private languages is a failed one, but it didn’t fail for good reasons. It is just that people don’t want them. Also, no one guards the public/private distinction unless it has been established by convention.

    Of course, I don’t want to keep anyone from telling Berduck one’s own intimate secrets using grunts and smacking noises or emotive outcries in order to stay competitive in the future zone. But keep in mind that a clever engineer will find ways to design a machine which has better nervous breakdowns than you have. You also won’t escape your narcissistic hurts by communicating through bowel noises. The T-1000 thingy will have that on its feature list too.

  14. Regarding business comms, can the machines learn the meta-messaging of Powerspeak? Cluelessspeak? Loserspeak? Most of the work emails I send and receive are partially about the content and partially about other signals (tone, timing, CC roster, documentation, legal liability, posturing, undelegating myself, etc.), but this may be an artifact of my awkward ecosystem of university administrivia.

  15. “and people are already experimenting with prompts that dig into internal latent representations used by the models”

    Please share where I can explore this, ty