Jumping into Web3

This entry is part 1 of 1 in the series Into the Pluriverse

I’m kicking off a new blogchain to journal my explorations of Web3: the strange world of NFTs (non-fungible tokens), DAOs (decentralized autonomous organizations), domain names ending in .eth, and so forth. I wasn’t going to get into it quite yet, but events in the last week dumped me unceremoniously into the deep end.

I’m chronicling the play-by-play in an extended Twitter thread. There is also now an NFTs page for ribbonfarm. I’ve already sold two (on mirror.xyz and on OpenSea.io).

As I write this, a 24-hour auction for my third NFT is underway on foundation.app. I’m thinking of it as my first serious minting, since it’s a piece a lot of effort went into — the ribbonfarm map of 2016 (if you’re interested in bidding, you’ll need the MetaMask wallet extension and some ether).

I’m still pretty down in the weeds and haven’t yet begun to form coherent big-picture mental models of what’s going on. But I did make this little diagram to try to explain it to myself… and then made an NFT out of it.

I’ll hopefully have more interesting things to share after I have some time to reflect on and make sense of the rather hectic first week.

Beyond the fun game of making money selling artificially scarce digital objects, the broader point of diving in for me is that it’s clear Web3 is going to drastically transform the way the internet works at very deep levels. Not just in the sense of deeply integrating economic mechanisms within the infrastructure, but also in terms of how content is created, distributed, and presented. If this develops as it promises to, Web2 (what used to be called Web 2.0) activities like blogging and writing newsletters are going to be utterly transformed. So this is as much a discovery journey to figure out the future of ribbonfarm as it is a dive into an interesting new technology.

The highlights of my first week (details in the Twitter thread):

  • Minted and sold 2 NFTs, participated in a 3rd via a minority stake
  • Got myself a couple of .eth domains, including ribbonfarm.eth — which led to an unexpected windfall
  • Set up a Gnosis multi-sig safe for the Yak Collective, and helped kick off plans to turn it into a DAO
  • Entered something called the $WRITE token race to try and win a token for the Yak Collective to start a Web3 publication on mirror.xyz (you can help us get one by voting tomorrow, Wednesday, Nov 10)
  • Signed the Declaration of Interdependence for Cyberspace, my first crypto-signed petition
  • Presumably pissed off about 20% of my Twitter following going by this poll (Web3 is a very polarizing topic)

There’s a lot going on, as I’m discovering. Every hour I spend exploring, I turn up something new, at every level from esoteric technical details to subtle cultural nuances.

If you, like me, have been thinking that being roughly familiar with the cryptocurrency tech scene of a few years ago means you “get” most of what’s going on here, you’re wrong. The leap between the 2016-17 state of the art and this is dramatic. There’s a great deal more to understand and wrap your head around.

I’ll update this blogchain with summaries and highlight views as I go along, but the devil really is in the details on this one, so if you’re interested in following along without getting lost, I recommend tracking my Twitter thread too.

Ghost Protocols

This entry is part 1 of 1 in the series Glossary

A ghost protocol is a pattern of interactions between two parties wherein one party pretends the other does not exist. A simple example is the “silent treatment” pattern we all learn as kids. In highly entangled family life, the silent treatment is not possible to sustain for very long, but in looser friendship circles, it is both practical and useful to be able to ghost people indefinitely. Arguably, in the hyperconnected and decentered age of social media, the ability to ghost people at an individual level is a practical necessity, and not necessarily cruel. People have enough social optionality and legal protections now that not being recognized by a particular person or group, even a very powerful one, is not as big a deal as it once was.

At the other end of the spectrum of complexity of ghosted states is the condition of officially disavowed spies, as in the Mission: Impossible movie subtitled Ghost Protocol. I don’t know if “ghost protocol” is a real term of art in the intelligence world, but it’s got a nice ring to it, so I’ll take it. One of my favorite shows, Burn Notice, is set within a ghost protocol situation.

If you pretend a person or entire group doesn’t exist, and they’re real, they don’t go away of course. As Philip K. Dick said, reality is that which doesn’t go away when you stop believing in it.

So you need ways of dealing with live people who are dead to you, and preventing them from getting in your way, without acknowledging their existence. When you put some thought and structure around those ways, you’ve got a ghost protocol.

[Read more…]

MJD 59,514

This entry is part 21 of 21 in the series Captain's Log

This Captain’s Log blogchain has unintentionally turned into an experiment in memory and identity. The initial idea of doing a blogchain without meaningful headlines or fixed themes — partly inspired by Twitter and messenger/Slack/Discord modes of writing — was partly laziness. I was tired of thinking up sticky and evocative headlines, plus I was getting wary of, and burned out by, the unconsciously clickbaity nature of headlined longform.

I couldn’t remember anything of what I’d written here, so I just went back and read the whole series, all 20 parts, and it’s already slipped away from my mind again. Names are extraordinarily strong memory anchors, and without them we barely have textual memories at all. I can recall the gist of many posts written over a decade ago given just the name or a core meme, but for this blogchain, even having re-read it five minutes ago, I couldn’t tell you what it was about. The flip side is, it wasn’t actively painful to reread the way a lot of my old stuff is (which is why I rarely re-read). In some ways it was kinda surprising and interesting to review. The lack of names means a lack of fixed mental models of what posts were about. It’s weird to be able to “cold read” my own posts. It’s like simulated Alzheimer’s or something, and it’s almost scary. It would be terrible to go through life with this level of non-recall.

The amnesiac effect of the lack of names is reinforced by the lack of narrative, which is a function of the lack of theme (or more concretely, lack of memetic cores). Over the 20 parts so far, I’ve wandered all over the place, with no centripetal force driving towards coherence. The parts were also far enough apart that there was no inertia from being in the same headspace between parts. It’s been a random walk of my mind.

This feels weird. It’s easy to remember at least a few highlights of themed blogchains, even if they lack a proper narrative throughline. I have a (very) vague sense of the ideas I’ve covered in the Mediocratopia or Elderblog Sutra blogchains for instance. Even if there isn’t a necessary order and sequence to the writing, a themed series grows via a web of association. So if you recall one thing, you remember some other things.

But order matters too. We remember things more easily when there is a natural and necessary order to them. This was reinforced for me in this blogchain while dealing with a bug. The series plugin I use screwed up and indexed several of the posts out of order, which took me 5 minutes to fix. But reading the posts out of order made zero difference. Since they are not related, either by causation or thematic association, order is neither necessary nor useful. It’s like how chess players have uncanny recall of meaningful board positions that can actually occur in a game, but not of boards with randomly placed pieces. It’s more than a mnemonic effect though. There is intrinsically higher randomness to a record of unnamed thoughts. The only order here is that induced by me and the world getting older.

These all seem like downsides. Recall is far worse, coherence is far worse. For the reader, readability is far worse. Is there any upside to writing this way? I’m not sure. It does seem to tap into a sort of atemporal textual subconscious. It also makes for a very passive mode of writing. A name is a boundary that asserts a certain level of active selfhood. A theme is a sort of grain to the interior contents. A narrative is a sequence to the contents. Each of the three elements acts as a filter on what part of the outside world makes it into the writing. When you take down all three, the writing occupies something like an open space where ideas and thoughts can criss-cross willy-nilly. It is homeless writing, with all the attendant unraveling and disintegration of the bodily envelope (I wrote about this in a paywalled post on the Ribbonfarm Studio newsletter).

A named idea space is a space with a wall. A named and themed idea space is a striated space with a wall (in the Deleuze and Guattari sense). A named, themed, and narrativized space is a journey through an arborescence. A nameless, themeless, storyless space develops in a rhizomatic way, reflecting the knots and crooks of the environment. It is not just homeless writing, it is writing where there’s nobody home. It’s the textual equivalent of the “nobody home” affect of far-gone, mentally unraveled homeless people.

Another data point for this effect: I just finished a paper notebook I started just before the pandemic, so it’s taken me about 2 years to fill up. Back in grad school, 20 years ago, I used to be very diligent with paper notes. There was a metacognitive process to it. I’d summarize every session’s notes, and keep a running table of contents. I’d progressively summarize every dozen or so sessions. My notes were easy and useful to review. Now I’m lazy, and I don’t do anything of that sort. It’s just an uncurated stream of consciousness. With just a few pages left in the notebook, I tried to go back and reconstruct a table of contents (thankfully I was at least dating the start pages of each session), but it was too messy, hard, and useless, so I gave up. Progressive summarization and ToC-ing are only useful and possible when you do them in near real time. Naming and headlining work only when you name and headline as you work. So what I have with this latest filled notebook is just one big undifferentiated idea soup that’s nearly impossible to review. It’s worse than Dumbledore’s Pensieve. It’s something of a memory black hole. It is recorded, but not in a usefully reviewable way. But arguably, not doing the disciplined thing led to different notes being laid down. I thought and externalized thoughts I would otherwise not have thought at all. I can’t prove this, but it feels true. And while it’s harder to review, perhaps the process of writing made it more transformative?

About the only thing I’ve been able to do with both this blogchain and the paper notebook, in terms of review, is go back (with a red pen or the editor) and underline key terms/phrases, and maybe tabulate them elsewhere into an index. I can trace the evolution of my thought through the index phrases. These nameless memories are indexable, but not amenable to structuring beyond that. It’s the part of your mind that you can Google but not map (this is the real “googling yourself”). These are demon notebooks. It’s dull to review now, but in a few years perhaps, it will be interesting to review as a record of what I was thinking through the pandemic. Maybe latent themes will pop.

Twitter of course is the emperor of such demon notebooks, though shared with others. I’ve taken to calling the nameless structures that emerge in my tweeting threadthulhus. These blog and paper demon notebooks though, are not threadthulhus. They are more compact and socially isolated. They are lumps of textual dark matter. They are pre-social, more primitive. They lack the identity imposed by mutualism.

With both this blogchain and my unreviewable demon paper notebook, I think I’ve kinda explored what names/headlines, target themes, and narratives do in writing: they alienate you from your own mind by allowing you to create a legible map of your thoughts as you think. Anything you structure with a name/theme/narrative (the alienation triad) is a thing apart from yourself that you can distance from yourself, point to as an object, let go of, and even meaningfully sell or give away to others. Alienation is packaging for separation. Anything that you don’t do those things to remains a part of you. This is not a bad thing. Not everything you can think is ready to be weaned from your mind. Even if you’re willing to share it with the world, it does not mean you are able to separate it from yourself. Just because you make second brains doesn’t mean first brains disappear. Exploring them is a distinct activity.

This sort of writing is arguably indexical writing. Writing as self-authorship. What doesn’t have its own name, theme, and narrative is part of you. In fact, the only thing holding it all together is that you’re writing it. This is a self-reinforcing effect. The act of writing in that mode encourages the least detachable thoughts in your head to emerge and make themselves available to hold and be.

There is a paradox here. The most indexical writing is also the most open-to-the-world writing since it lacks filters. So it is both a self-authoring process and a self-dissolution process. What comes out is both most truly you, and not you at all. Self-authorship and self-dissolution are two sides of the same coin. Being is unbecoming. To be homeless is for there to be nobody home.

You could argue that it is the process of giving names, boundaries, and thematic and narrative structure to thoughts in order to externalize them that is the unnatural and strange thing. Like mutilating your brain by carving out chunks of it to push out. I am not sentimental enough about the writing process to actually feel that way, but I kinda get now what angsty poets must feel.

I think this is the key difference between diary-writing or journaling and “writing.” The lack of traumatic separation and self-alienating packaging.

This experiment hasn’t yet run its course, and I might keep it going indefinitely, but I think I finally understand the point of it, and why I unconsciously wanted to do it and why I feel it helps the other writing.

Where do you go from this kind of writing? Well, if you continue down this course — and I already see this happening a bit — you head towards increasingly commodity language. You seek to avoid evocative turns of phrase, stylistic flourishes, and individual signature elements — anything that asserts identity. You seek to make the writing unindexable, not just unmappable. You seek to go beyond individual self-authorship and channel a larger vibe or mood. Or maybe you try to fragment your own mind into a bunch of authorly tulpas. Or maybe you mind-meld with GPT-3 and write in some sort of transhuman words-from-nowhere mode. Ultimately you get to various sorts of automatic writing. I don’t necessarily want to go there, but it’s interesting to see that that’s where this path leads. This is the death of the author as an authorial stance rather than a critical readerly stance. It’s a direction that naturally ends in a sort of textual suicide. At the level I’m playing it, it’s merely a sort of extreme sport. Textual base-jumping perhaps. But this direction has strong tailwinds. Increasingly large amounts of public text in the world form a featureless mass that’s grist for machine-learning mills and has, increasingly, no identity of its own.

You might say the natural end point of this kind of writing is when it becomes indistinguishable from its GPT-3 extrapolations and interpolations.

Or going the other way, there are potential experiments in radical namefulness. Everything is uniquely identifiable, memorable, evocative, and nameable, and has a true name. Narrative coherence is as strong as possible. Thematic structure and causal flow are as tight as possible. Un-machine-learnable texts. I’m not sure that kind of text is even possible.

Mediocratopia: 12

This entry is part 12 of 12 in the series Mediocratopia

A key insight struck me recently, one that I should have worked out and written up earlier but somehow never did: one of the biggest reasons mediocrity gets a bad rap is conflation with what I call Somebody Else’s Optimality, or SEO (the rest of this post is just me attempting to manufacture justification for this joke 😆).

Situations and conditions that suck and attract the label “mediocre” (as in “why is service so mediocre?”) usually aren’t mediocre at all, but designed to optimize Something Else for Someone Else: some aspect that is less visible than whatever aspect you’re responding to.

It’s usually not even particularly disguised or denied. You just have to stop to think for a second. Quite often, the “something else” is cost to owners of the assets involved. Aggressively driving cost efficiency by cutting corners in services is obviously not mediocritization, it is optimization of something other than service quality for somebody other than customers. Actual mediocritization creates slack and mediocrity along all dimensions. The point of mediocrity is slack and reserves for dealing with uncertainty, as I’ve argued elsewhere in this series several times.

SEO is an important phenomenon in its own right, but in this post, I mainly want to untangle it from mediocrity.

[Read more…]

Storytelling — Cringe and the Banality of Shadows

This entry is part 5 of 5 in the series Narrativium

Thinking about cringe comedy recently, it struck me that the genre is built around characters who are entirely driven by their shadows, and draws its comedic power from the sheer banality of the unconscious inner lives thus revealed. An example is the character of Mickey, played by Kaitlin Olson on The Mick. Olson played a similar character named Dee on It’s Always Sunny in Philadelphia. Cringe characters of this type can be traced back nearly two decades, to characters like Larry David in Curb Your Enthusiasm, through several characters on The Office, to modern incarnations. While cringe is an old element of comedy (you can find healthy doses in Chaplin), cringe as the defining trait of the (prototypically female, or somewhat feminized male) protagonist seems to be a 2010s phenomenon. The fully realized form seems to have emerged around 2013. Not coincidentally, this was right after The Office ended. Arguably, that show was proto-cringe. Bleeding-edge comedies between 2000 and 2012 gradually refined cringe-based narrative, leading up to modern examples.

The idea that a shadow can drive an entire character complicates Campbell’s Hero’s Journey, which is usually understood as a structure with its middle half being buried in the shadow realm (of both the outer and inner worlds of the protagonist). A cringe character basically never leaves the shadow realm, so there is no heroism in venturing there, and no hope of ever making it back. The cringe self is not a redemptive self.

[Read more…]

MJD 59,487

This entry is part 20 of 21 in the series Captain's Log

People who have a literal-minded interest in matters that extend beyond their own lives, and perhaps those of a couple of generations of ancestors and descendants, are an odd breed. For the bulk of humanity, these are zones for mythological imaginings rather than speculative-empirical investigation, if they are of interest at all. For the typical human, what happened in 1500 AD, or what might happen in 2500 AD, are questions to be answered in ways that best sustain the most psyche-bolstering beliefs for today. And you can’t really accuse them of indulging in self-serving cognitions when the others who might be served instead are either dead or unborn.

As a result, mythology is popular (by which I mean any heavily mythologized history, such as that of the founding of the United States, not just literal stories of Bronze Age gods and demons), but history is widely viewed as boring. Science fiction is popular, but futurism (of the wonky statistical trends and painstakingly reality-anchored scenario planning variety) is widely viewed as boring.

But if you think about it, it is history and futurism that are the highly romantic fields. Mythology and science fiction are pragmatic, instrumental fields that should be classified alongside therapy and mental healthcare, since they serve practical meaning-making purposes in the here and now, in a way that is arguably as broadly useful as antibiotics.

History proper is rarely useful. The only reason to study it is the romantic notion that understanding the past as it actually unfolded, even if only 10 people in your narrow subfield pay attention and there are no material consequences in the present, is an elevating endeavor.

Similarly, long-range futurism proper (past around 30 years, say) is rarely useful. Most political and economic decisions are fully determined or even overdetermined by much shorter-range incentives and considerations. There are also usually crippling amounts of uncertainty limiting the utility of what your investigations reveal. And humans are really bad at acting with foresight that extends past about a year anyway, even in the very rare cases where we do get reliable glimpses of the future. So the main reason to study the future is the romantic notion that it is an elevating endeavor.

Who exactly is it that believes these endeavors are elevating, and why should their conceits be respected, let alone potentially supported with public money?

Well, people like you and me for one, who read and write and argue about these things, and at least occasionally try to rise above mythologizing and science-fictional instincts to get a glimpse of the past and future as they really were or will be, with high plausibility. And I can’t say I have good arguments for why our conceits should be respected or supported. Fortunately, they are also very cheap conceits as conceits go. All we need is time and an internet connection to indulge them, and a small cadre of researchers in libraries and archives generating fodder.

How do we even know when we’ve succeeded? Well of course, sometimes history at least is dispositive, and we find fragments that are no longer subject to serious revisionism. And sometimes the future is too — we can predict astronomical events with extraordinary certainty, for instance.

But that’s just the cerebral level of success. At a visceral, emotional level, when either history or futurism “work” in the romantic sense that interests me, the result is a slight shrinkage in anthropocentric conceptions of the world.

Every bit of history or futurism that actually works is like a micro-Copernican revolution.

When they work, history and futurism remind us that humans are no more at the “center” of time, in the sense of being at the focal point of unfolding events, than we are at the “center” of the universe. The big difference between space and time in this regard is that decentering our egos in historical time is a matter of patient, ongoing grinder-work, rather than one of a few radical leaps in physics.

Mediocratopia: 11

This entry is part 11 of 12 in the series Mediocratopia

Under stress, there are those who try harder, and there are those who lower their standards. Until very recently, the first response was considered a virtue, the second a vice. The ongoing wave of burnout, and of people quitting or cutting back on work when they can afford to, suggests this societal norm is shifting. I want to propose an inversion of the valences here, and argue that under many conditions, lowering your standards is in fact the virtuous thing to do.

A mediocritizing mindset typically doesn’t bother with such ethical justification, however. It typically rejects the idealism inherent in treating this as a matter of virtue vs. vice altogether. Instead we mediocrats try to approach the matter from a place of sardonic self-awareness and skepticism of high standards as a motivational crutch. This tweet says it well enough:

This is less cynical than it seems. Motivation, discipline, and energy are complex personality traits. While they are not immutable functions of nature or nurture, they do form fairly entrenched equilibria. Shifting these equilibria to superficially more socially desirable ones isn’t merely a matter of consuming enough hustle-porn fortune cookies or suddenly becoming a true believer in a suitably ass-kicking philosophy like stoicism or startup doerism. Life is messier than that.

You can’t exhort or philosophize your way into a new regime of personal biophysics where you magically try harder or behave with greater discipline than you ever have in your life. Gritty, driven people tend to have been that way all their lives. Easy-going slackers tend to have been that way all their lives too. People do change their hustle equilibria, but it is rare (and pretty dramatic when it happens). And the chances of backsliding into your natural energy mode are high. Driven people will find it tough to stay chilled out, and vice versa.

Emergencies and life crises can trigger both temporary and permanent changes. Type A strivers might let themselves relax for a few months after a heart attack, or make permanent changes. A slacker might find themselves in a particularly exciting project and turn into a driven person for a while, and occasionally for the rest of their life.

But the stickiness of these equilibria means the response to stressors is typically something other than behavior change, and that’s a good thing. Typically it is lowering standards while retaining behaviors.

[Read more…]

MJD 59,459

This entry is part 19 of 21 in the series Captain's Log

In 2018, historian Michael McCormick nominated 536 AD as the worst year to be alive. There was a bit of fun coverage of the claim. This sort of thing is what I think of as “little history.” It’s the opposite of Big History.

Big History is stuff like Yuval Noah Harari’s Sapiens, Jared Diamond’s Guns, Germs, and Steel, or David Graeber’s Debt. I was a fan of that kind of sweeping, interpretive history when I was younger, but frankly, I really dislike it now. Historiographic constructs get really dubious and sketchy when stretched out across the scaffolding of millennia of raw events. Questions that are “too big to succeed” in a sense, like “why did Europe pull ahead of China,” tend to produce ideologies rather than histories. It’s good for spitballing and first-pass sense-making (and I’ve done my share of it on this blog) but not good as a foundation for any deeper thinking. Even if you want to stay close to the raw events, you get sucked into is-ought historicist conceits, and dangerously reified notions like “progress.” Yes, I’m also a big skeptic of notions like Progress Studies. To the extent that the arc of the moral universe has any coherent shape at all, to proceed on the assumption that it has an intelligible shape (whether desired or undesired), is to systematically blind yourself to the strong possibility that the geometry of history is, if not random, at least fundamentally amoral.

About the only Big History notion I have any affection for is Francis Fukuyama’s notion that in an important sense, Big History has ended (and I’m apparently the last holdout, given that Fukuyama himself has sort of walked back the idea).

Little histories though, are another matter. Truly little histories are little in both space and time, and much of history is intrinsically little in that sense, and fundamentally limited in the amount of inductive or abductive generalization it supports once you let go of overly ambitious historicist conceits and too-big-to-succeed notions like “Progress.” But some little histories are, to borrow a phrase from Laura Spinney’s book about the Spanish Flu, Pale Rider, “broad in space, shallow in time.” They allow you to enjoy most of the pleasures afforded by Big Histories, without the pitfalls.

Whether or not the specific year 536 was in fact the worst year, and whether or not the question of a “worst year” is well-posed, the year was definitely “broad in space, shallow in time” due to the eruption of an Icelandic volcano that created extreme weather worldwide. The list of phenomena capable of creating that kind of globalized entanglement of local histories is extremely short: pandemics, correlated extreme weather, and the creation or destruction of important technological couplings.

The subset of little histories that are “broad in space, shallow in time” — call them BISSIT years (or weeks, or days) — serve as synchronization points for the collective human experience. Most historical eras feature both “good” and “bad” from a million perspectives. Sampling perspectives from around the world at a random time, and adjusting for things like class and wealth, you would probably get a mixed bag of gloomy and cheery perspectives that are not all gloomy or cheery in the same way. But it is reasonable to suppose that at these synchronization points, systematic deviations from the pattern would emerge. Notions of good and bad align. Many of us would probably agree that 2020 sucked more than most years, and would even agree on the cause (the pandemic), and key features of the suckage (limitations on travel and socialization). Even if there were positive aspects to it, and much needed systemic changes ensue in future years, for those of us alive today, who are living through this little history, the actual experience of it kinda sucks.

The general question of whether the human condition is progressing or declining to me is both ill-posed and uninteresting. You get into tedious and sophomoric debates about material prosperity versus perceived relative deprivation. You have people aiming gotchas at each other (“aha, the Great Depression was actually more materially prosperous than optimistic Gilded Age 1890s!” or “there was a lot of progress during the Dark Ages!”).

The specific question of whether a single BISSIT year should be tagged with positive or negative valence though, is much more interesting, since the normal variety of “good” and “bad” perspectives temporarily narrows sharply. Certainly, BISSIT years have a sharply defined character given by their systematic deviations, and sharp boundaries in time. They are “things” in the sense of being ontologically well-posed historiographic primitives that are larger than raw events, but aren’t reified and suspect constructs like “Progress” or “Enlightenment.” There is a certain humility to asking whether these specific temporal things are good or bad, as opposed to the entire arc of history. Two such things need not be good or bad in the same way. History viewed as a string of such BISSIT beads need not have a single character. Perhaps there are red, green, and blue beads punctuating substrings of grey beads, and in your view, red and green beads are good, while blue beads are bad, and so on. We don’t have to have a futile debate about Big History that’s really about ideological tastes. But we can argue about whether specific beads are red or blue. You can ask about the colors of historical things. We can take note of the relative frequencies of colored versus grey beads. And if you’re inclined to think about Big History at all, you can approach it as the narrative of a string of beads that need not have an overall morally significant character. Perhaps, like space, time can be locally “flat” (good or bad as experienced by the people within BISSIT epochs) but have no moral valence globally. Perhaps we live in curved historical event-time, with no consistent “up” everywhere.

MJD 59,436

This entry is part 18 of 21 in the series Captain's Log

A week ago, for the first time in decades, I spent several days in a row doing many hours of hands-on engineering work in a CAD tool (for my rover project), and noticed that I had particularly vivid dreams on those nights. My sleep data from my Whoop strap confirmed that I’d had higher REM sleep on those nights.

These were dreams that lacked narrative and emotional texture, but involved a lot of logistics. For example, one dream I took note of involved coordinating a trip with multiple people, with flights randomly canceled. When I shared this on Twitter, several people replied to say that they too had similar dreams after days of intense but relatively routine information work. A couple of people mentioned dreaming of Tetris after playing a lot, something I too have experienced. High-REM dreaming sleep seems to be involved in integrating cognitive skill memories. This paper by Erik Hoel, The Overfitted Brain: Dreams evolved to assist generalization, pointed out by @nosilver, argues that dreaming is about mitigating overfitting of learning experiences, a problem also encountered in deep learning. This tracks for me. It sounds reasonable that my “logistics” dreams were attempts to generalize some CAD skills to other problems with similar reasoning needs. REM sleep is like the model training phase of deep learning.
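
Here is a self-contained toy of that analogy (nothing to do with Hoel’s actual models; assume the “dreams” are just jittered replays of the day’s noisy samples, which regularize an otherwise overfit curve fit):

```python
# Toy illustration of the overfitting analogy: a high-degree polynomial fit to
# one day's few noisy samples of a simple curve memorizes the noise; adding
# jittered "dream" replays of those same samples (no new data) acts as a
# regularizer and, on average, generalizes better to held-out points.
import numpy as np

rng = np.random.default_rng(0)
true_fn = np.sin                      # the "world" being learned
x_day = np.linspace(0, 3, 8)          # one day's worth of experiences
x_test = np.linspace(0, 3, 200)       # held-out reality

def test_mse(x, y, degree=7):
    coeffs = np.polyfit(x, y, degree)
    return np.mean((np.polyval(coeffs, x_test) - true_fn(x_test)) ** 2)

day_err, dream_err = [], []
for _ in range(200):                  # average over many "days"
    y_day = true_fn(x_day) + rng.normal(0, 0.2, x_day.shape)
    # "Dreams": replays of the same experiences with jittered inputs.
    x_dream = np.tile(x_day, 4) + rng.normal(0, 0.2, x_day.size * 4)
    y_dream = np.tile(y_day, 4)
    day_err.append(test_mse(x_day, y_day))
    dream_err.append(test_mse(np.concatenate([x_day, x_dream]),
                              np.concatenate([y_day, y_dream])))

print("avg held-out error, day only:          ", round(float(np.mean(day_err)), 4))
print("avg held-out error, day + dream replay:", round(float(np.mean(dream_err)), 4))
```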

Dreams 2×2

This got me interested in the idea of tapping into the unconscious towards pragmatic ends: for example, using the type of dreams you have to do feedback regulation of the work you do during the day. I made up a half-assed hypothesis: the type of dream you have at night depends on 2 variables relating to the corresponding daytime activity — the level of conflict in the activity (low/high) and whether the learning loop is longer or shorter than 24 hours. If it is shorter, you can complete multiple loops in a day, and the night-time dream will need to generalize from multiple examples. If it is longer, the day ends with less than a complete loop, so there is no immediate resolution, and instead you get dreams that integrate experiences in an open-loop way, using narrative-based explanations. If there’s no conflict and no closed loops, you get low dreaming. There’s nothing to integrate, and you didn’t do much thinking that day (which for me tends to translate to poor sleep). I made up the 2×2 above for this idea.
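
Spelled out as a lookup, the hypothesis reads something like the sketch below (a literal transcription of the three cases above; the labels in the actual 2×2 image may differ):

```python
# Toy transcription of the dream 2x2 hypothesis above; labels are approximate.
def dream_type(conflict: str, loop_hours: float) -> str:
    """conflict: 'low' or 'high'; loop_hours: length of the day's learning loop."""
    closed_loops = loop_hours < 24   # multiple complete loops fit in one day
    if conflict == "low" and not closed_loops:
        return "low dreaming: nothing to integrate"
    if closed_loops:
        return "generalization dreams (logistics, Tetris): many examples to compress"
    return "open-loop narrative dreams: no resolution yet, so story-based integration"

print(dream_type("low", 3))   # e.g. a day of hands-on CAD work with short loops
```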

I have no idea whether this particular 2×2 makes any sense, but it is interesting that such phenomenology lends itself to apprehension in such frameworks at all. I rarely remember dreams, but I think even I could maintain a dream journal based on this scheme, and try to modulate my days based on my nights.

This also helps explain why people in similar situations might have similar dreams (such as all the “corona dreams” many of us were having during the early lockdown months). It also lends substance to the narrative conceits of stories like H. P. Lovecraft’s The Call of Cthulhu, which begins with people having widespread similar dreams (where it turns into science fiction is in the specificity of the shared motifs and symbols that appear).

You don’t need to buy into dubious Jungian or Freudian theories of individual and collective dreaming to think about this stuff in robust ways. The development of deep learning, in particular, offers us a much more robust handle on this phenomenology. Dreams are perhaps our journeys into our private latent spaces, undertaken for entirely practical purposes like cleaning up our messy daytime learning (there are other theories of dreaming too, of course, like David Eagleman’s theory that we dream to prevent the visual centers from getting colonized by other parts of the brain at night, but we’re only hypothesizing contributing causes, not determinative theories).

Mediocratopia: 10

This entry is part 10 of 12 in the series Mediocratopia

I once read a good definition of aptitude. Aptitude is how long it takes you to learn something. The idea is that everybody can learn anything, but if it takes you 200 years, you essentially have no aptitude for it. Useful aptitudes are in the <10 years range. You have aptitude for a thing if the learning curve is short and steep for you. You don’t have aptitude if the learning curve is gentle and long for you.

How do you measure your aptitude though? Things like standardized aptitude tests only cover narrow aspects of a few things. One way to measure it is in terms of the speed at which you can do a complete loop of production. Your aptitude is the rate at which this cycle speed increases. This can’t increase linearly though, or you’d be superhuman in no time. There’s a half life to it. Your first short story takes 10 days to write. The next one 5 days, the next one 2.5 days, the next one 1.25 days. Then 0.625 days, at which point you’re probably hitting raw typing speed limits. In practice, improvement curves have more of a staircase quality to them. Rather than fix the obvious next bottleneck of typing speed (who cares if it took you 3 hours instead of 6 to write a story; the marginal value of more speed is low at that point), you might level up and decide to (say) write stories with better developed characters. Or illustrations. So you’re back at 10 days, but on a new level. This is the mundanity of excellence effect I discussed in part 3, and this is an essential part of mediocratization. Ironically, people like Olympic athletes get where they get by mediocratizing rather than optimizing what they do. Excellence lies in avoiding the naive excellence trap.
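
Spelled out, that example is just a geometric series with a one-iteration half-life:

$$ t_n = 10 \cdot \left(\tfrac{1}{2}\right)^n \text{ days}, \qquad \sum_{n=0}^{\infty} t_n = \frac{10}{1 - \tfrac{1}{2}} = 20 \text{ days}. $$

However many stories you write at a given level, the total time spent converges to about twice what the first one took; that limit is the “Zeno’s paradox race” referred to below.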

This kind of improvement replaces quantitative improvement (optimization) with qualitative leveling up, or dimensionality increase. Each time you hit diminishing returns, you open up a new front. You’re never on the slow endzone of a learning curve. You self-disrupt before you get stuck. So you get a learning curve that looks something like this (yes, it’s basically the stack of intersecting S-curves effect, with the lower halves of the S-curves omitted).

The interesting effect is that even though any individual smooth learning effort is an exponential with a half-life, since you keep skipping levels, you can have a roughly linear rate of progress, but on a changing problem. You’re never getting superhuman on any vector because you keep changing tack to keep progressing. The y-axis is a stack of different measures of performance, normalized as percentages of an ideal maximal performance level, estimated as the limit of the Zeno’s paradox race at each level.
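
Here is a toy simulation of that stacked pattern (illustrative numbers only: a 10-day first iteration, a one-iteration half-life, a level-up every four iterations). Each level ends up taking roughly the same wall-clock time, which is where the roughly linear overall progress comes from:

```python
# Toy model of the level-up learning curve described above; numbers are illustrative.
def simulate(first_duration=10.0, iterations_per_level=4, n_levels=5):
    elapsed = 0.0
    for level in range(1, n_levels + 1):
        # Iteration times halve within a level: 10, 5, 2.5, 1.25, ...
        times = [first_duration / 2 ** i for i in range(iterations_per_level)]
        elapsed += sum(times)
        # Zeno limit: even infinitely many iterations at one level would take
        # less than twice the first iteration (10 + 5 + 2.5 + ... -> 20 days).
        print(f"level {level}: {sum(times):5.2f} days at this level, "
              f"{elapsed:6.2f} days total (per-level limit: {2 * first_duration:.0f} days)")

simulate()
```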

Now we have a slightly better way to measure aptitude. Aptitude is the rate at which you level up, by changing the nature of the problem you’re solving (and therefore how you measure “improvement”). The interesting thing is, this is not purely a function of raw prowess or innate talent, but also of imagination and taste. Can you sense diminishing returns and open up a new front so you can keep progressing? How early or late do you do that? The limiting factor here is the imaginative level shift that keeps you moving. Being stuck is being caught in the diminishing-returns part of a locally optimal learning curve because you can’t see the next curve to jump to.

Your natural wavelength is the time it takes you to level up (so your natural frequency, the rate at which you level up, is the inverse of that). Two numbers characterize your aptitude: the half-life within a level, and the typical number of iterations you put in before you change levels (which is also a measure of how deep you get into the diminishing-returns part of the curve before you level up).