Storytelling: Narrative Wet Bulb Temperature

This entry is part 6 of 6 in the series Narrativium

Telling jokes at a funeral is hard. Even entertaining an urge to do so is perhaps not a decent thing to do. At best, you might get away with telling a poignantly humorous anecdote about the deceased as part of a eulogy. The context of a funeral is simply not appropriate for joke-telling, and it’s not just a matter of social norms and performance expectations of grieving solemnity. People simply wouldn’t be in the mood.

Even if you were a comedian who left instructions for your funeral to be conducted in the form of a comedy festival, if people actually liked you, they’d likely find it somewhat difficult to get into the spirit of the idea.

Jokes at a funeral are a simple example of what we might call poor narrative-context fit (NCF). Not all stories can be told at all times with equal impact. And here I mean any performance with a narrative structure, not just actual fiction. The idea applies to nonfiction works too.

What drives narrative-context fit? I don’t have a general answer, but I have one for a special case: storytelling in a time of generalized crisis, such as we are living through now.

It is no secret that it’s been hard to tell compelling stories in the past few years. Television and cinema have turned into a wasteland of reboots and universe extensions. Thought leadership storytelling has descended from the smarmy heights of TED talks to the barely readable op-ed derps of today. It’s not that there are no good stories being told, but compared to say 2000-2017 or so, we’re definitely in a tough market.

A clue about why this is hard can be found in Robert McKee’s description of narrative suspense:

“As pieces of exposition slip out of dialogue and into the background awareness of the reader or audience member, her curiosity reaches ahead with both hands to grab fistfuls of the future to pull her through the telling. She learns what she needs to know when she needs to know it, but she’s never consciously aware of being told anything, because what she learns compels her to look ahead.”


Suspense is “curiosity charged with empathy…” Suspense focuses the reader/audience by flooding the mind with emotionally tinged questions that hook and hold attention: “What’s going to happen next?” “What’ll happen after that?” “What will the protagonist do? Feel?”

Suspense is a “what happens next” curiosity you care about that anchors your attention to a period of time leading up to potential resolution. Or to put it another way, suspense literally creates your sense of future time. If you are not feeling suspense about how something in the future might turn out, in a sense, you’re not feeling the future at all. Your consciousness is concentrated in the past and present only, and not in a good way.

No suspense, no story, no future.

Now, extend this logic to the general background of suspense in the environment that a story has to compete with. We do not consume stories against a blank canvas backdrop. Whatever is going on in the world — a pandemic, a space telescope on a fraught deployment journey, a critical election — shapes the suspensefulness of life in general.

In fact, we might frame a hypothesis, which I call the suspense blindness hypothesis: You can’t see past the next big identity-altering thing in your future that’s keeping you in suspense. The most acutely felt “what happens next” thing.

Note that this is a spectator point of view. Suspense only exists if you can’t do much to change the uncertain outcome. You can only watch. If you can act, you’re in the story, not watching it unfold from the sidelines.

When there is a high level of suspense in the general background, it is harder to tell stories because you have to beat that level of suspense. It gets especially hard if you have to tell a story that extends far beyond the temporal horizon created by the suspense blindness. If everybody is waiting for the outcome of a critical election in a year, it’s hard to tell a story spanning the next decade. And this applies equally to a TED talk painting (say) a vision of progress over the next decade, and to a fictional story that plays out over the next decade.

Some of this is merely technical difficulty dealing with storytelling in a forking future. If there is no vague consensus around the future being a certain way, it’s hard to tell stories set in that future. It’s a bit like having to choose a foreground paint color that works against many different background colors, ranging from black to white.

Your only technical recourse is to jump far enough out into the future — a century say — that the stark forking divergences of today can be assumed to have been sorted out. But then the storytelling loses access to the emotional energies of the present.

I came up with a weird metaphor for thinking about this — narrative wet-bulb temperature.

The wet-bulb temperature is a complicated measure of the body’s ability to cool itself. It is a function of temperature and humidity, and when it goes above around 35C, the body can no longer cool itself through sweating. This is one of the many ways in which climate change is a more serious threat than you might think, since it can drive dangerously high wet-bulb temperatures.

Here’s the metaphor: we tell ourselves stories to regulate the amount of narrative tension we feel in life generally. Felt suspense is one measure of this tension (though it’s a rich mess of many contributing textures, such as cringe, horror, fear, amusement, mystification). We metaphorically “cool” or “warm” ourselves through stories (where “temperature” maps to a vector of attributes). Like thermoregulation, narrative regulation is a function of context.

Narrative wet-bulb temperature is a measure of how well narrative regulation can work in a given zeitgeist. Beyond some metaphoric equivalent of 35C, perhaps it becomes impossible to tell stories. Perhaps the appropriate scale is a weirdness scale, measured in Harambes. Perhaps above 35H, storytelling is psychophysically impossible.

As with climate, we have some ability to control our environments through the narrative equivalent of air-conditioning. Personal climate control, through management of exposure to the stresses of the general outdoor zeitgeist, can be done through gatekeeping information aggressively (this idea is central to the book I’m writing). But to the extent storytelling is a public act, such “air conditioned” stories can only be heard by those who share your particular cozy climate-controlled headspace.

We appear to have collectively accepted this particular tradeoff, in that we have collectively abandoned public spaces (and by extension, truly public storytelling) and retreated to the cozyweb.

Random Acts of X

The phrasal template random acts of ________ is clearly one of my favorites. I seem to have used it 20+ times on Twitter in the last few years. Here are the actual instances:

  1. random acts of ontology
  2. Random Acts of Web3ing
  3. random acts of policy vandalism
  4. random acts of templing [as in, treating something as a temple]
  5. random acts of patchy, pointillist, impressionist worldbuilding
  6. Random acts of philosophy in the “air game” and random acts of tinkering in the “ground game”
  7. Random Acts of Magical Thinking
  8. random acts of tariffs
  9. random acts of sciencing
  10. random acts of art production
  11. random acts of revenue-generation
  12. Random acts of petrichor
  13. random acts of strategy
  14. random acts of cash-flow management
  15. random acts of consulting
  16. Random Acts of System Integration (RASI)
  17. Random Acts of Product Development
  18. Random Acts of Workflow Improvement and Unnecessary Optionality
  19. random acts of solutionism
  20. Random Acts of Mildly Profitable or Break-Even Teaching
  21. random acts of twitter strategy
  22. Random Acts of Overt Marketing
  23. random acts of garam-masala-ing

At one point I tweeted a prompt inviting people to fill in the blank, and got a whole bunch of responses, some clever, others not so clever.

iirc, the very first example I encountered, sometime in the 90s I think, was “random acts of marketing.” That stuck with me because it seemed like such an apt description of the marketing efforts of most companies.

Random acts of X are a regime of behavior that you might call “bullshit agency”: some fraction of it works, but you don’t know, and to a certain extent don’t care, which fraction. Hence the famous John Wanamaker quote: “Half the money I spend on advertising is wasted; the trouble is I don’t know which half.”

Random acts of X happen when you act opportunistically, based on circumstantial possibilities and very little thought, and with indifference to whether or not your actions make any sort of larger strategic sense. The randomness in what the immediate circumstances allow or encourage you to do translates into randomness in what you actually end up doing. Noise in, noise out.

This does not mean that the opposite of “random acts of X” is strategy. You can have “random acts of strategy” too, and in fact most strategy fits that description. A CEO goes off on a leadership retreat with a few buddies, enjoys good food, good wine, and whiteboard sessions, and returns with a nice mind-map and strategy notes… and it’s back to the quagmire of operations within a day. That’s random acts of strategy.

Random acts of X regimes are attractive because they allow you to act in very low-energy regimes, with low intelligence. And we default to such regimes as a slightly superior alternative to being frozen in inaction and doing nothing at all. The leap of faith underlying random acts of x-ing is belief in a benevolent universe where doing something, anything, beats doing nothing.

Reviewing my tweets, I notice that I use the phrasal template more often to refer to my own behaviors than to comment on others’ behaviors. The template has no particular stable valence for me. Sometimes random-acts-of-x-ing is good, sometimes it is bad.

But looking at my (over)use of the template, I do wonder: what does it take to move such behavior into a non-random regime, without overwhelming it with the artifacts of deterministic planning and destroying what little energy there is?

The best guide I’ve found so far is Charles E. Lindblom’s classic 1959 management article, The Science of Muddling Through. It is one of the articles I recommend most often to consulting clients (I found it via John Kay’s excellent book, Obliquity).

Muddling through is the act of adding just enough determinism to a default random-acts-of-x situation to get it to make some sort of roughly right directional progress. In Lindblom’s account, muddling through involves a “method of successive limited comparisons” as opposed to a “rational comprehensive” approach.

Muddling through is both a better term, and a better concept, than its degenerate modern descendants like “agile.” The salient feature of Lindblom’s account is that he doesn’t claim muddling through is a “theory” but rather a manner of doing that “greatly reduces or eliminates reliance on theory.”

Still, whether you call it agile and pretend you have a theory, or call it muddling through and admit that you don’t, the problem remains: how do you keep this regime of behavior from either slipping into useless randomness or being swamped by the imposition of energy-draining theorizing?

One part of the answer is, as Karl Weick argued, to give up on theory, but not on theorizing. The idea that “what theory is not, theorizing is” has been the linchpin of my consulting work for a decade now, but I’ve never quite clarified the essence of the distinction to myself.

Weick’s idea is similar in spirit to the Eisenhower line that plans are nothing, but planning is everything; or Frederick Brooks’ idea that you should “plan to throw one away” (and Joel Spolsky’s counter-argument that you should not throw one away).

I think the common thread here is that your history of engagement with a problem or question is important, but the specific conceptual scaffoldings you used in generating that history are not. The data matters, the algorithm you used to generate it doesn’t. Be the data, not the algorithm.

This then is the solution to the perils of the “random acts of X” regime — better memory. Turn the memoryless random acts of X into memoryful not-so-random acts of X.

This assumes that memory by itself has something like a gradient to it; a historical logic that can bias the context of random-acts-of-x-ing enough that your actions acquire a drift, a direction of muddling through.

This direction is not a True North. It is not a teleological potential induced by a goal, but an etiological potential induced by a history (or more generally, data). A True Past perhaps. The test of truth being that it creates a coherent future despite the randomness of circumstantial forces. Such an etiological potential is, however, merely necessary, not sufficient. To get past historical determinism, the True Past must only be allowed to frame the random acts of x-ing in the present, not fully specify it. And if your random acts are not capable of blowing up the historical context that contains them, they are not random enough.

I think of it as “fuck around and find out, but never forget.”

2021 Ribbonfarm Extended Universe Annual Roundup

This entry is part 15 of 15 in the series Annual Roundups

There is no getting around it: I basically took the year off from this blog, not just in the sense that I wrote much less here than usual (29 posts), but in the sense that all the posts were short ones with self-consciously modest ambitions. In fact, most posts were actively anti-ambitious, since I carefully avoided writing anything with viral potential. The blog basically went underground. For the first time ever, and by design, there was not even a single post that could be called a hit, let alone a viral one.

A big reason was: I had nothing to say in 2021 in blog mode.

And a big reason for that was that the medium of blogging itself is not sure what it wants to say anymore. We are in a liminal passage with blogging, where the medium has no message.

So it’s not just me. It feels like the entire blogosphere (what’s left of it) took the year off to figure out a new identity — if one is even possible — in a world overrun by email newsletters, Twitter threads, weird notebook-gardens on static sites or public notebook apps, and the latest challenger: NFT-fied essays.

All those new media seem to have clear ideas of what they are, or what they want to be when they grow up. But this aging medium doesn’t. And while I have a presence in all those younger media, they don’t yet feel substantial enough to serve as main acts, the way blogging has for so long.

Perhaps there is no main-act medium in the future. Perhaps we are witnessing the birth of a glorious new polycentric media landscape, where the blogosphere will be eaten not by any one successor, but by a collection of media within which blogs will merely be a sort of First Uncle to the rest. The medium through which you say embarrassing things at Thanksgiving, with all the other media cringing. Maybe, just as every unix shell command turned into a unicorn tech company, every kind of once-blog-like content will now be its own medium. Listicles became Twitter, photoblogs became Instagram, and so on.

The entire blogosphere is going through perhaps its most significant existential crisis since the invention of blogging 22 years ago. And I’ve been at this for 15 of those years — this is the 15th annual roundup! Ironically, every couple of years through that period, there has been a round of discussion on “the death of blogging,” but now that it seems to be actually happening, there isn’t an active conversation around it.

If this is the end, it’s a whimper rather than a bang.

One sign it is real: this is the second roundup I’ve felt compelled to title “extended universe,” because my publishing presence is now simply too scattered for the blog alone to represent it.

But I rather hope not. I think there’s a chance it’s going to be a Doctor Who style regeneration instead, and if so, I’m here for it. If blogs must die, so be it. If there’s a fighting chance of a regeneration, the fight will be worthwhile.

On to the roundup, with embarrassing-uncle commentary on the brave new world.


Thinking in OODA Loops

I’ve been meaning to turn my OODA loop workshop (which I’ve done formally/informally for corporate audiences for 5+ years) into an online course for years, but never got around to it. So I decided to just publish the main slide deck.

This deck is 72 slides, and takes me about 2 hours to cover. It actually began as an informal talk using index cards at the 2012 Boyd and Beyond conference at Quantico, to a hardcore Boydian crowd, so it’s survived that vetting.

The two times I’ve done the full, day-long formal version for large groups, I’ve paired a morning presentation/Q&A session with an afternoon of small group exercises applying the ideas to specific problems the group is facing. More commonly, I tend to just share the deck with consulting clients who want to apply OODA to their leadership challenges. We discuss 1:1 after they’ve reviewed it, and begin applying it in our work together.

In the spirit of John Boyd, whose OG briefing slides are freely available on the web (highly recommended), I’m releasing these slides publicly without any specified licenses, restrictions, or guarantees. There are a lot of random Google images and screenshots from documents in the slides, so use at your own risk.

Feel free to use these slides as part of your own efforts to introduce others to OODA thinking, including as part of paid courses. You can also modify/augment/remix them as you like. Attribution appreciated, but not expected.

Read on, for some notes/guidance on how to design a workshop incorporating this material.


Jumping into Web3

This entry is part 1 of 1 in the series Into the Pluriverse

I’m kicking off a new blogchain to journal my explorations of Web3: the strange world of NFTs (non-fungible tokens), DAOs (decentralized autonomous organizations), domain names ending in .eth, and so forth. I wasn’t going to get into it quite yet, but events in the last week dumped me unceremoniously into the deep end.

I’m chronicling the play-by-play in an extended Twitter thread. There is also now an NFTs page for ribbonfarm. I’ve already sold two.

As I write this, a 24-hour auction for my third NFT is underway. I’m thinking of it as my first serious minting, since it’s a piece a lot of effort went into: the ribbonfarm map of 2016 (if you’re interested in bidding, you’ll need the MetaMask wallet extension and some ether).

I’m still pretty down in the weeds and haven’t yet begun to form coherent big picture mental models of what’s going on. But I did make this little diagram to try and explain what’s going on to myself… and then made an NFT out of it.

I’ll hopefully have more interesting things to share after I have some time to reflect on and make sense of the rather hectic first week.

Beyond the fun game of making money selling artificially digitally scarce objects, the broader point of diving in for me is that it’s clear Web3 is going to drastically transform the way the internet works at very deep levels. Not just in the sense of deeply integrating economic mechanisms within the infrastructure, but also in terms of how content is created, distributed, and presented. If this develops as it promises to, Web2 (what used to be called Web 2.0) activities like blogging and writing newsletters are going to be utterly transformed. So this is as much a sort of discovery journey, to figure out the future of ribbonfarm, as it is a dive into an interesting new technology.

The highlights of my first week (details in the Twitter thread):

  • Minted and sold 2 NFTs, participated in a 3rd via a minority stake
  • Got myself a couple of .eth domains, including ribbonfarm.eth — which led to an unexpected windfall
  • Set up a Gnosis multi-sig safe for the Yak Collective, and helped kick off plans to turn it into a DAO
  • Entered something called the $WRITE token race to try and win a token for the Yak Collective to start a Web3 publication (you can help us get one by voting tomorrow, Wednesday, Nov 10)
  • Signed the Declaration of Interdependence for Cyberspace, my first crypto-signed petition
  • Presumably pissed off about 20% of my Twitter following, going by this poll (Web3 is a very polarizing topic)

There’s a lot going on, as I’m discovering. Every hour I spend exploring this, I discover more new things, at every level from esoteric technical things to subtle cultural things.

If you, like me, have been thinking that being roughly familiar with the cryptocurrency tech scene of a few years ago means you “get” most of what’s going on here, you’re wrong. The leap between the 2016-17 state of the art and this is dramatic. There’s a great deal more to understand and wrap your head around.

I’ll update this blogchain with summaries and highlight views as I go along, but the devil really is in the details on this one, so if you’re interested in following along without getting lost, I recommend tracking my twitter thread too.

Ghost Protocols

This entry is part 1 of 1 in the series Glossary

A ghost protocol is a pattern of interactions between two parties wherein one party pretends the other does not exist. A simple example is the “silent treatment” pattern we all learn as kids. In highly entangled family life, the silent treatment is not possible to sustain for very long, but in looser friendship circles, it is both practical and useful to be able to ghost people indefinitely. Arguably, in the hyperconnected and decentered age of social media, the ability to ghost people at an individual level is a practical necessity, and not necessarily cruel. People have enough social optionality and legal protections now that not being recognized by a particular person or group, even a very powerful one, is not as big a deal as it once was.

At the other end of the spectrum of complexity of ghosted states is the condition of officially disavowed spies, as in the eponymous Mission Impossible movie. I don’t know if “ghost protocol” is a real term of art in the intelligence world, but it’s got a nice ring to it, so I’ll take it. One of my favorite shows, Burn Notice, is set within a ghost protocol situation.

If you pretend a person or entire group doesn’t exist, and they’re real, they don’t go away of course. As Philip K. Dick said, reality is that which doesn’t go away when you stop believing in it.

So you need ways of dealing with live people who are dead to you, and preventing them from getting in your way, without acknowledging their existence. When you put some thought and structure around those ways, you’ve got a ghost protocol.


MJD 59,514

This entry is part 21 of 21 in the series Captain's Log

This Captain’s Log blogchain has unintentionally turned into an experiment in memory and identity. The initial idea of doing a blogchain without meaningful headlines or fixed themes — partly inspired by twitter and messenger/Slack/Discord modes of writing — was partly laziness. I was tired of thinking up sticky and evocative headlines, plus I was getting wary of, and burned-out by, the unconsciously clickbaity nature of headlined longform.

I couldn’t remember anything of what I’d written here, so I just went back and read the whole series, all 20 parts, and it’s already slipped away from my mind again. Names are extraordinarily strong memory anchors, and without them we barely have textual memories at all. I can recall the gist of many posts written over a decade ago given just the name or a core meme, but for this blogchain, even having re-read it five minutes ago, I couldn’t tell you what it was about. The flip side is, it wasn’t actively painful to reread the way a lot of my old stuff is (which is why I rarely re-read). In some ways it was kinda surprising and interesting to review. The lack of names means a lack of fixed mental models of what posts were about. It’s weird to be able to “cold read” my own posts. It’s like simulated Alzheimer’s or something, and it’s almost scary. It would be terrible to go through life with this level of non-recall.

The amnesiac effect of the lack of names is reinforced by the lack of narrative, which is a function of the lack of theme (or more concretely, a lack of memetic cores). Over the 20 parts so far, I’ve wandered all over the place, with no centripetal force driving towards coherence. The parts were also far enough apart that there was no inertia from being in the same headspace between parts. It’s been a random walk of my mind.

This feels weird. It’s easy to remember at least a few highlights of themed blogchains, even if they lack a proper narrative throughline. I have a (very) vague sense of the ideas I’ve covered in the Mediocratopia or Elderblog Sutra blogchains for instance. Even if there isn’t a necessary order and sequence to the writing, a themed series grows via a web of association. So if you recall one thing, you remember some other things.

But order matters too. We remember things more easily when there is a natural and necessary order to them. This was reinforced for me in this blogchain while dealing with a bug: the series plugin I use screwed up and indexed several of the posts out of order, which took me 5 minutes to fix. But reading the posts out of order made zero difference. Since they are not related, either by causation or thematic association, order is neither necessary nor useful. It’s like how chess players have uncanny recall of meaningful board positions that can actually occur in a game, but not of boards with randomly placed pieces. It’s more than a mnemonic effect though. There is intrinsically higher randomness to a record of unnamed thoughts. The only order here is that induced by me and the world getting older.

This all seems like downsides. Recall is far worse, coherence is far worse. For the reader, the readability is far worse. Is there any upside to writing in this way? I’m not sure. It does seem to tap into a sort of atemporal textual subconscious. It also makes for a very passive mode of writing. A name is a boundary that asserts a certain level of active selfhood. A theme is a sort of grain to the interior contents. A narrative is a sequence to the contents. Each of the three elements acts as a filter to what part of the outside world makes it into the writing. When you take down all three, the writing occupies something like an open space where ideas and thoughts can criss-cross willy-nilly. It is homeless writing, with all the attendant unraveling and disintegration of the bodily envelope (I wrote about this in a paywalled post on the Ribbonfarm Studio newsletter).

A named idea space is a space with a wall. A named and themed idea space is a striated space with a wall (in Deleuze and Guattari sense). A named, themed, and narrativized space is a journey through an arborescence. A nameless, themeless, storyless space develops in a rhizomatic way, reflecting the knots and crooks of the environment. It is not just homeless writing, it is writing where there’s nobody home. It’s the textual equivalent of the “nobody home” affect of far-gone mentally unravelled homeless people.

Another data point for this effect. I just finished a paper notebook I started just before the pandemic. So it’s taken me about 2 years to fill up. Back in grad school, 20 years ago, I used to be very diligent with paper notes. There was a metacognitive process to it. I’d summarize every session’s notes, and keep a running table of contents. I’d progressively summarize every dozen or so sessions. My notes were easy and useful to review. Now I’m lazy, and I don’t do anything of that sort. It’s just an uncurated stream of consciousness. With just a few pages left in the notebook, I tried to go back and reconstruct a table of contents (thankfully I was at least dating the start pages of each session), but it was too messy, hard, and useless, so I gave up. Progressive-summarization ToC-ing is only useful and possible when you do it in near real time. Naming and headlining work only when you name and headline as you work. So what I have with this latest filled notebook is just one big undifferentiated idea soup that’s nearly impossible to review. It’s worse than Dumbledore’s pensieve. It’s something of a memory black hole. It is recorded, but not in a usefully reviewable way. But arguably, not doing the disciplined thing led to different notes being laid down. I thought and externalized thoughts I would otherwise not have thought at all. I can’t prove this, but it feels true. And while it’s harder to review, perhaps the process of writing made it more transformative?

About the only thing I’ve been able to do with both this blogchain and the paper notebook, in terms of review, is go back (with a red pen or the editor) and underline key terms/phrases, and maybe tabulate them elsewhere into an index. I can trace the evolution of my thought through the index phrases. These nameless memories are indexable, but not amenable to structuring beyond that. It’s the part of your mind that you can Google but not map (this is the real “googling yourself”). These are demon notebooks. It’s dull to review now, but in a few years perhaps, it will be interesting to review as a record of what I was thinking through the pandemic. Maybe latent themes will pop.

Twitter of course is the emperor of such demon notebooks, though shared with others. I’ve taken to calling the nameless structures that emerge in my tweeting threadthulhus. These blog and paper demon notebooks though, are not threadthulhus. They are more compact and socially isolated. They are lumps of textual dark matter. They are pre-social, more primitive. They lack the identity imposed by mutualism.

With both this blogchain and my unreviewable demon paper notebook, I think I’ve kinda explored what names/headlines, target themes, and narratives do in writing: they alienate you from your own mind by allowing you to create a legible map of your thoughts as you think. Anything you structure with a name/theme/narrative (the alienation triad) is a thing aside from yourself that you can sort of distance from yourself and point to as an object, and let go of, and even meaningfully sell or give away to others. Alienation is packaging for separation. Anything that you don’t do those things to remains a part of you. This is not a bad thing. Not everything you can think is ready to be weaned from your mind. Even if you’re willing to share it with the world, it does not mean you are able to separate it from yourself. Just because you make second brains doesn’t mean first brains disappear. Exploring them is a distinct activity.

This sort of writing is arguably indexical writing. Writing as self-authorship. What doesn’t have its own name, theme, and narrative is part of you. In fact, the only thing holding it all together is the fact that you’re writing it. This is a self-reinforcing effect. The act of writing in that mode sort of encourages the least detachable thoughts in your head to emerge and make themselves available to hold and be.

There is a paradox here. The most indexical writing is also the most open-to-the-world writing since it lacks filters. So it is both a self-authoring process and a self-dissolution process. What comes out is both most truly you, and not you at all. Self-authorship and self-dissolution are two sides of the same coin. Being is unbecoming. To be homeless is for there to be nobody home.

You could argue that it is the process of giving names, boundaries, and thematic and narrative structure to thoughts to externalize them that is a highly unnatural and strange process. Like mutilating your brain by carving out chunks of it to push out. I am not sentimental enough about the writing process to actually feel that way, but I kinda get now what angsty poets must feel.

I think this is the key difference between diary-writing or journaling and “writing”: the lack of traumatic separation and self-alienating packaging.

This experiment hasn’t yet run its course, and I might keep it going indefinitely, but I think I finally understand the point of it, and why I unconsciously wanted to do it and why I feel it helps the other writing.

Where do you go from this kind of writing? Well, if you continue down this course — and I already see this happening a bit — you head towards increasingly commodity language. You seek to avoid evocative turns of phrase, stylistic flourishes, and individual signature elements — anything that asserts identity. You seek to make the writing unindexable, not just unmappable. You seek to go beyond individual self-authorship and channel a larger vibe or mood. Or maybe you try to fragment your own mind into a bunch of authorly tulpas. Or maybe you mind-meld with GPT-3 and write in some sort of transhuman words-from-nowhere mode. Ultimately you get to various sorts of automatic writing. I don’t necessarily want to go there, but it’s interesting to see that that’s where this path leads. This is the death of the author as an authorial stance, as opposed to a critical readerly stance. It’s a direction that naturally ends in a sort of textual suicide. At the level I’m playing it, it’s merely a sort of extreme sport. Textual base-jumping perhaps. But this direction has strong tailwinds behind it. Increasingly large amounts of public text in the world form a featureless mass that is grist for machine-learning mills and has, increasingly, no identity of its own.

You might say the natural end point of this kind of writing is when it becomes indistinguishable from its GPT-3 extrapolations and interpolations.

Or, going the other way, there are potential experiments in radical namefulness. Everything is uniquely identifiable, memorable, evocative, and nameable, and has a true name. Narrative coherence is as strong as possible. Thematic structure and causal flow are as tight as possible. Un-machine-learnable texts. I’m not sure that kind of text is even possible.

Mediocratopia: 12

This entry is part 12 of 12 in the series Mediocratopia

A key insight recently struck me, one I should have worked out and written up earlier but didn’t think of: one of the biggest reasons mediocrity gets a bad rap is conflation with what I call Somebody Else’s Optimality, or SEO (the rest of this post is just me attempting to manufacture justification for this joke 😆).

Situations and conditions that suck, and attract the label “mediocre” (as in “why is service so mediocre?”), usually aren’t mediocre at all, but are designed to optimize Something Else for Someone Else: some aspect that is less visible than whatever aspect you’re responding to.

It’s usually not even particularly disguised or denied. You just have to stop to think for a second. Quite often, the “something else” is cost to owners of the assets involved. Aggressively driving cost efficiency by cutting corners in services is obviously not mediocritization, it is optimization of something other than service quality for somebody other than customers. Actual mediocritization creates slack and mediocrity along all dimensions. The point of mediocrity is slack and reserves for dealing with uncertainty, as I’ve argued elsewhere in this series several times.

SEO is an important phenomenon in its own right, but in this post, I mainly want to untangle it from mediocrity.


Storytelling — Cringe and the Banality of Shadows

This entry is part 5 of 6 in the series Narrativium

Thinking about cringe comedy recently, it struck me that the genre is built around characters who are entirely driven by their shadows, and draws its comedic power from the sheer banality of the unconscious inner lives thus revealed. An example is the character of Mickey, played by Kaitlin Olson on The Mick. Olson played a similar character named Dee on It’s Always Sunny in Philadelphia. Cringe characters of this type can be traced back nearly two decades, through characters like Larry David in Curb Your Enthusiasm and several characters on The Office, to modern incarnations. While cringe is an old element of comedy (you can find healthy doses in Chaplin), cringe as the defining trait of the (prototypically female, or somewhat feminized male) protagonist seems to be a 2010s phenomenon. The fully realized form seems to have emerged around 2013 — not coincidentally, right after The Office ended. Arguably, that show was proto-cringe. Bleeding-edge comedies between 2000 and 2012 gradually refined cringe-based narrative, leading up to modern examples.

The idea that a shadow can drive an entire character complicates Campbell’s Hero’s Journey, which is usually understood as a structure with its middle half being buried in the shadow realm (of both the outer and inner worlds of the protagonist). A cringe character basically never leaves the shadow realm, so there is no heroism in venturing there, and no hope of ever making it back. The cringe self is not a redemptive self.


MJD 59,487

This entry is part 20 of 21 in the series Captain's Log

People who have a literal-minded interest in matters that extend beyond their own lives, and perhaps those of a couple of generations of ancestors and descendants, are an odd breed. For the bulk of humanity, these are zones for mythological imaginings rather than speculative-empirical investigation, if they are of interest at all. For the typical human, what happened in 1500 AD, or what might happen in 2500 AD, are questions to be answered in ways that best sustain the most psyche-bolstering beliefs for today. And you can’t really accuse them of indulging in self-serving cognitions when the others who might be served instead are either dead or unborn.

As a result, mythology is popular (by which I mean any heavily mythologized history, such as that of the founding of the United States, not just literal stories of Bronze Age gods and demons), but history is widely viewed as boring. Science fiction is popular, but futurism (of the wonky statistical trends and painstakingly reality-anchored scenario planning variety) is widely viewed as boring.

But if you think about it, it is history and futurism that are the highly romantic fields. Mythology and science fiction are pragmatic, instrumental fields that should be classified alongside therapy and mental healthcare, since they serve practical meaning-making purposes in the here and now, in a way that is arguably as broadly useful as antibiotics.

History proper is rarely useful. The only reason to study it is the romantic notion that understanding the past as it actually unfolded, even if only 10 people in your narrow subfield pay attention and there are no material consequences in the present, is an elevating endeavor.

Similarly, long-range futurism proper (past around 30 years, say) is rarely useful. Most political and economic decisions are fully determined or even overdetermined by much shorter-range incentives and considerations. There are also usually crippling amounts of uncertainty limiting the utility of whatever your investigations reveal. And humans are really bad at acting with foresight that extends past about a year anyway, even in the very rare cases where we do get reliable glimpses of the future. So the main reason to study the future is the romantic notion that it is an elevating endeavor.

Who exactly is it that believes these endeavors are elevating, and why should their conceits be respected, let alone potentially supported with public money?

Well, people like you and me for one, who read and write and argue about these things, and at least occasionally try to rise above mythologizing and science-fictional instincts to get a glimpse of the past and future as they really were or will be, with high plausibility. And I can’t say I have good arguments for why our conceits should be respected or supported. Fortunately, they are also very cheap conceits as conceits go. All we need is time and an internet connection to indulge them, and a small cadre of researchers in libraries and archives generating fodder.

How do we even know when we’ve succeeded? Well, sometimes history, at least, is dispositive, and we find fragments that are no longer subject to serious revisionism. And sometimes the future is too — we can predict astronomical events with extraordinary certainty, for instance.

But that’s just the cerebral level of success. At a visceral, emotional level, when either history or futurism “work” in the romantic sense that interests me, the result is a slight shrinkage in anthropocentric conceptions of the world.

Every bit of history or futurism that actually works is like a micro-Copernican revolution.

When they work, history and futurism remind us that humans are no more at the “center” of time, in the sense of being at the focal point of unfolding events, than we are at the “center” of the universe. The big difference between space and time in this regard is that decentering our egos in historical time is a matter of patient, ongoing grinder-work, rather than one of a few radical leaps in physics.