MJD 59,487

This entry is part 20 of 21 in the series Captain's Log

People who have a literal-minded interest in matters that extend beyond their own lives, and perhaps those of a couple of generations of ancestors and descendants, are an odd breed. For the bulk of humanity, these are zones for mythological imaginings rather than speculative-empirical investigation, if they are of interest at all. For the typical human, what happened in 1500 AD, or what might happen in 2500 AD, are questions to be answered in ways that best sustain the most psyche-bolstering beliefs for today. And you can’t really accuse them of indulging in self-serving cognitions when the others who might be served instead are either dead or unborn.

As a result, mythology is popular (by which I mean any heavily mythologized history, such as that of the founding of the United States, not just literal stories of Bronze Age gods and demons), but history is widely viewed as boring. Science fiction is popular, but futurism (of the wonky statistical trends and painstakingly reality-anchored scenario planning variety) is widely viewed as boring.

But if you think about it, it is history and futurism that are the highly romantic fields. Mythology and science fiction are pragmatic, instrumental fields that should be classified alongside therapy and mental healthcare, since they serve practical meaning-making purposes in the here and now, in a way that is arguably as broadly useful as antibiotics.

History proper is rarely useful. The only reason to study it is the romantic notion that understanding the past as it actually unfolded, even if only 10 people in your narrow subfield pay attention and there are no material consequences in the present, is an elevating endeavor.

Similarly, long-range futurism proper (past around 30 years, say) is rarely useful. Most political and economic decisions are fully determined or even overdetermined by much shorter-range incentives and considerations. There are also usually crippling amounts of uncertainty limiting the utility of what your investigations reveal. And humans are really bad at acting with foresight that extends past about a year anyway, even in the very rare cases where we do get reliable glimpses of the future. So the main reason to study the future is the romantic notion that it is an elevating endeavor.

Who exactly is it that believes these endeavors are elevating, and why should their conceits be respected, let alone potentially supported with public money?

Well, people like you and me for one, who read and write and argue about these things, and at least occasionally try to rise above mythologizing and science-fictional instincts to get a glimpse of the past and future as they really were or will be, with high plausibility. And I can’t say I have good arguments for why our conceits should be respected or supported. Fortunately, they are also very cheap conceits as conceits go. All we need is time and an internet connection to indulge them, and a small cadre of researchers in libraries and archives generating fodder.

How do we even know when we’ve succeeded? Well of course, sometimes history at least is dispositive, and we find fragments that are no longer subject to serious revisionism. And sometimes the future is too — we can predict astronomical events with extraordinary certainty, for instance.

But that’s just the cerebral level of success. At a visceral, emotional level, when either history or futurism “work” in the romantic sense that interests me, the result is a slight shrinkage in anthropocentric conceptions of the world.

Every bit of history or futurism that actually works is like a micro-Copernican revolution.

When they work, history and futurism remind us that humans are no more at the “center” of time, in the sense of being at the focal point of unfolding events, than we are at the “center” of the universe. The big difference between space and time in this regard is that decentering our egos in historical time is a matter of patient, ongoing grinder-work, rather than one of a few radical leaps in physics.

Mediocratopia: 11

This entry is part 11 of 13 in the series Mediocratopia

Under stress, there are those who try harder, and there are those who lower their standards. Until very recently, the first response was considered a virtue, the second was considered a vice. The ongoing wave of burnout and people quitting or cutting back on work when they can afford to suggests this societal norm is shifting. I want to propose an inversion of the valences here, and argue that under many conditions, lowering your standards is in fact the virtuous thing to do.

A mediocratizing mindset typically doesn’t bother with such ethical justification, however. It rejects the idealism inherent in treating this as a matter of virtue vs. vice altogether. Instead, we mediocrats try to approach the matter from a place of sardonic self-awareness and skepticism of high standards as a motivational crutch. This tweet says it well enough:

This is less cynical than it seems. Motivation, discipline, and energy are complex personality traits. While they are not immutable functions of nature or nurture, they do form fairly entrenched equilibria. Shifting these equilibria to superficially more socially desirable ones isn’t merely a matter of consuming enough hustle-porn fortune cookies or suddenly becoming a true believer in a suitably ass-kicking philosophy like stoicism or startup doerism. Life is messier than that.

You can’t exhort or philosophize your way into a new regime of personal biophysics where you magically try harder or behave with greater discipline than you ever have in your life. Gritty, driven people tend to have been that way all their lives. Easy-going slackers tend to have been that way all their lives too. People do change their hustle equilibria, but it is rare (and pretty dramatic when it happens). And the chances of backsliding into your natural energy mode are high. Driven people will find it tough to stay chilled out, and vice versa.

Emergencies and life crises can trigger both temporary and permanent changes. Type A strivers might let themselves relax for a few months after a heart attack, or make permanent changes. A slacker might find themselves in a particularly exciting project and turn into a driven person for a while, and occasionally for the rest of their life.

But the stickiness of these equilibria means the response to stressors is typically something other than behavior change, and that’s a good thing. Typically it is lowering standards while retaining behaviors.

[Read more…]

MJD 59,459

This entry is part 19 of 21 in the series Captain's Log

In 2018, historian Michael McCormick nominated 536 AD as the worst year to be alive. There was a bit of fun coverage of the claim. This sort of thing is what I think of as “little history.” It’s the opposite of Big History.

Big History is stuff like Yuval Noah Harari’s Sapiens, Jared Diamond’s Guns, Germs, and Steel, or David Graeber’s Debt. I was a fan of that kind of sweeping, interpretive history when I was younger, but frankly, I really dislike it now. Historiographic constructs get really dubious and sketchy when stretched out across the scaffolding of millennia of raw events. Questions that are “too big to succeed” in a sense, like “why did Europe pull ahead of China,” tend to produce ideologies rather than histories. It’s good for spitballing and first-pass sense-making (and I’ve done my share of it on this blog) but not good as a foundation for any deeper thinking. Even if you want to stay close to the raw events, you get sucked into is-ought historicist conceits, and dangerously reified notions like “progress.” Yes, I’m also a big skeptic of notions like Progress Studies. To proceed on the assumption that the arc of the moral universe has an intelligible shape at all (whether desired or undesired) is to systematically blind yourself to the strong possibility that the geometry of history is, if not random, at least fundamentally amoral.

About the only Big History notion I have any affection for is Francis Fukuyama’s notion that in an important sense, Big History has ended (and I’m apparently the last holdout, given that Fukuyama himself has sort of walked back the idea).

Little histories, though, are another matter. Truly little histories are little in both space and time. Much of history is intrinsically little in that sense, and fundamentally limited in the amount of inductive or abductive generalization it supports once you let go of overly ambitious historicist conceits and too-big-to-succeed notions like “Progress.” But some little histories are, to borrow a phrase from Laura Spinney’s book about the Spanish Flu, Pale Rider, “broad in space, shallow in time.” They allow you to enjoy most of the pleasures afforded by Big Histories, without the pitfalls.

Whether or not the specific year 536 was in fact the worst year, and whether or not the question of a “worst year” is well-posed, the year was definitely “broad in space, shallow in time” due to the eruption of an Icelandic volcano that created extreme weather world-wide. The list of phenomena capable of creating that kind of globalized entanglement of local histories is extremely short: pandemics, correlated extreme weather, and the creation or destruction of important technological couplings.

The subset of little histories that are “broad in space, shallow in time” — call them BISSIT years (or weeks, or days) — serve as synchronization points for the collective human experience. Most historical eras feature both “good” and “bad” from a million perspectives. Sampling perspectives from around the world at a random time, and adjusting for things like class and wealth, you would probably get a mixed bag of gloomy and cheery perspectives that are not all gloomy or cheery in the same way. But it is reasonable to suppose that at these synchronization points, systematic deviations from the pattern would emerge. Notions of good and bad align. Many of us would probably agree that 2020 sucked more than most years, and would even agree on the cause (the pandemic), and key features of the suckage (limitations on travel and socialization). Even if there were positive aspects to it, and much needed systemic changes ensue in future years, for those of us alive today, who are living through this little history, the actual experience of it kinda sucks.

The general question of whether the human condition is progressing or declining is, to me, both ill-posed and uninteresting. You get into tedious and sophomoric debates about material prosperity versus perceived relative deprivation. You have people aiming gotchas at each other (“aha, the Great Depression was actually more materially prosperous than the optimistic Gilded Age 1890s!” or “there was a lot of progress during the Dark Ages!”).

The specific question of whether a single BISSIT year should be tagged with positive or negative valence though, is much more interesting, since the normal variety of “good” and “bad” perspectives temporarily narrows sharply. Certainly, BISSIT years have a sharply defined character given by their systematic deviations, and sharp boundaries in time. They are “things” in the sense of being ontologically well-posed historiographic primitives that are larger than raw events, but aren’t reified and suspect constructs like “Progress” or “Enlightenment.” There is a certain humility to asking whether these specific temporal things are good or bad, as opposed to the entire arc of history. Two such things need not be good or bad in the same way. History viewed as a string of such BISSIT beads need not have a single character. Perhaps there are red, green, and blue beads punctuating substrings of grey beads, and in your view, red and green beads are good, while blue beads are bad, and so on. We don’t have to have a futile debate about Big History that’s really about ideological tastes. But we can argue about whether specific beads are red or blue. You can ask about the colors of historical things. We can take note of the relative frequencies of colored versus grey beads. And if you’re inclined to think about Big History at all, you can approach it as the narrative of a string of beads that need not have an overall morally significant character. Perhaps, like space, time can be locally “flat” (good or bad as experienced by the people within BISSIT epochs) but have no moral valence globally. Perhaps we live in curved historical event-time, with no consistent “up” everywhere.

MJD 59,436

This entry is part 18 of 21 in the series Captain's Log

A week ago, for the first time in decades, I spent several days in a row doing many hours of hands-on engineering work in a CAD tool (for my rover project), and noticed that I had particularly vivid dreams on those nights. My sleep data from my Whoop strap confirmed that I’d had higher REM sleep on those nights.

These were dreams that lacked narrative and emotional texture, but involved a lot of logistics. For example, one dream I took note of involved coordinating a trip with multiple people, with flights randomly canceled. When I shared this on Twitter, several people replied to say that they too had similar dreams after days of intense but relatively routine information work. A couple of people mentioned dreaming of Tetris after playing a lot, something I too have experienced. High-REM dreaming sleep seems to be involved in integrating cognitive skill memories. This paper by Erik Hoel, The Overfitted Brain: Dreams evolved to assist generalization, pointed out by @nosilver, argues that dreaming is about mitigating overfitting of learning experiences, a problem also encountered in deep learning. This tracks for me. It sounds reasonable that my “logistics” dreams were attempts to generalize some CAD skills to other problems with similar reasoning needs. REM sleep is like the model training phase of deep learning.

Dreams 2×2

This got me interested in the idea of tapping into the unconscious towards pragmatic ends. For example, using the type of dreams you have to do feedback regulation of the work you do during the day. I made up a half-assed hypothesis: the type of dream you have at night depends on 2 variables relating to the corresponding daytime activity — the level of conflict in the activity (low/high) and whether the learning loop length is longer or shorter than 24 hours. If it is shorter, you can complete multiple loops in a day, and the night-time dream will need to generalize from multiple examples. If it is longer, the day contains less than a complete loop, so there is no immediate resolution, and instead you get dreams that integrate experiences in an open-loop way, using narrative-based explanations. If there’s no conflict and no closed loops, you get low dreaming. There’s nothing to integrate, and you didn’t do much thinking that day (which for me tends to translate to poor sleep). I made up the 2×2 above for this idea.

I have no idea whether this particular 2×2 makes any sense, but it is interesting that such phenomenology lends itself to apprehension in such frameworks at all. I rarely remember dreams, but I think even I could maintain a dream journal based on this scheme, and try to modulate my days based on my nights.
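To make the journaling idea concrete, here is a minimal sketch, in Python, of what logging days against this 2×2 might look like. Everything in it is an illustrative assumption: the field names, the 24-hour cutoff encoded as a flag, and the category labels are my own guesses at operationalizing the hypothesis, not a validated scheme.

```python
# Hypothetical dream-journal sketch based on the 2x2 above.
# Categories and thresholds are made up for illustration only.
from dataclasses import dataclass

@dataclass
class DayRecord:
    activity: str
    conflict: str       # "low" or "high"
    loop_hours: float   # rough length of one complete learning loop in the day's work

def predicted_dream_type(day: DayRecord) -> str:
    """Return the dream type the 2x2 would predict for a given day."""
    closed_loops = day.loop_hours < 24   # loop completes (maybe several times) within the day
    if closed_loops:
        return "generalization dreams (many closed loops to compress)"
    if day.conflict == "high":
        return "narrative integration dreams (open loop, unresolved conflict)"
    return "low dreaming (nothing much to integrate)"

# Example: a low-conflict day of CAD tinkering with short design iterations
print(predicted_dream_type(DayRecord("rover CAD session", "low", 2.0)))
```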

This also helps explain why people in similar situations might have similar dreams (such as all the “corona dreams” that many of us were having during the early lockdown months). It also lends substance to the narrative conceits of stories like H. P. Lovecraft’s The Call of Cthulhu, which begins with people having widespread similar dreams (where it turns into science fiction is in the specificity of the shared motifs and symbols that appear).

You don’t need to buy into dubious Jungian or Freudian theories of individual and collective dreaming to think about this stuff in robust ways. The development of deep learning, in particular, offers us a much more robust handle on this phenomenology. Dreams are perhaps our journeys into our private latent spaces, undertaken for entirely practical purposes like cleaning up our messy daytime learning (there are other theories of dreaming too, of course, like David Eagleman’s theory that we dream to prevent the visual centers from getting colonized by other parts of the brain at night, but we’re only hypothesizing contributing causes, not determinative theories).

Mediocratopia: 10

This entry is part 10 of 13 in the series Mediocratopia

I once read a good definition of aptitude. Aptitude is how long it takes you to learn something. The idea is that everybody can learn anything, but if it takes you 200 years, you essentially have no aptitude for it. Useful aptitudes are in the <10 years range. You have aptitude for a thing if the learning curve is short and steep for you. You don’t have aptitude if the learning curve is gentle and long for you.

How do you measure your aptitude though? Things like standardized aptitude tests only cover narrow aspects of a few things. One way to measure it is in terms of the speed at which you can do a complete loop of production. Your aptitude is the rate at which this cycle speed increases. This can’t increase linearly though, or you’d be superhuman in no time. There’s a half-life to it. Your first short story takes 10 days to write. The next one 5 days, the next one 2.5 days, the next one 1.25 days. Then 0.625 days, at which point you’re probably hitting raw typing speed limits. In practice, improvement curves have more of a staircase quality to them. Rather than fix the obvious next bottleneck of typing speed (who cares if it took you 3 hours instead of 6 to write a story; the marginal value of more speed is low at that point), you might level up and decide to (say) write stories with better-developed characters. Or illustrations. So you’re back at 10 days, but on a new level. This is the mundanity of excellence effect I discussed in part 3, and this is an essential part of mediocratization. Ironically, people like Olympic athletes get where they get by mediocratizing rather than optimizing what they do. Excellence lies in avoiding the naive excellence trap.

This kind of improvement replaces quantitative improvement (optimization) with qualitative leveling up, or dimensionality increase. Each time you hit diminishing returns, you open up a new front. You’re never on the slow endzone of a learning curve. You self-disrupt before you get stuck. So you get a learning curve that looks something like this (yes, it’s basically the stack of intersecting S-curves effect, with the lower halves of the S-curves omitted).

The interesting effect is that even though any individual smooth learning effort is an exponential with a half-life, since you keep skipping levels, you can have a roughly linear rate of progress, but on a changing problem. You’re never getting superhuman on any vector because you keep changing tack to keep progressing. The y-axis is a stack of different measures of performance, normalized as percentages of an ideal maximal performance level, estimated as the limit of the Zeno’s paradox race at each level.

Now we have a slightly better way to measure aptitude. Aptitude is the rate at which you level up, by changing the nature of the problem you’re solving (and therefore how you measure “improvement”). The interesting thing is, this is not purely a function of raw prowess or innate talent, but also of imagination and taste. Can you sense diminishing returns and open up a new front so you can keep progressing? How early or late do you do that? The limiting factor here is the imaginative level shift that keeps you moving. Being stuck is being caught in the diminishing returns part of a locally optimal learning curve because you can’t see the next curve to jump to.

Your natural wavelength is the typical time between level-ups (so your natural frequency is the inverse of that). Two numbers characterize your aptitude: the half-life within a level, and the typical number of iterations you put in before you change levels (which is also a measure of how deep you get into the diminishing returns part of the curve before you level up).
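Since those two numbers fully specify a toy version of this model, here is a minimal simulation sketch in Python. The function name, the specific numbers (a 10-day first loop, a halving per iteration, five iterations per level), and the stacked progress measure are all illustrative assumptions of mine, not anything canonical; the point is just that within a level the gains converge Zeno-style toward a limit, while jumping levels before saturating keeps overall progress roughly linear in time.

```python
# Toy simulation of the "level up before you saturate" learning curve.
# All numbers are made up; the two knobs are the ones named above:
# the half-life within a level, and iterations ground out per level.

def staircase_progress(levels=4, iters_per_level=5, halving=0.5, first_loop_days=10.0):
    """Yield (total_days, stacked_progress) points for a toy leveling-up learner.

    Within a level, each production loop takes `halving` times as long as the
    previous one, so time spent in a level converges Zeno-style toward a limit.
    Progress within a level is the fraction of that limit reached so far, and
    each level adds one unit to the overall stacked progress score.
    """
    level_limit = first_loop_days / (1 - halving)  # Zeno limit of time in one level
    total_days = 0.0
    for level in range(levels):
        loop = first_loop_days        # a new level resets you to a slow first loop
        days_in_level = 0.0
        for _ in range(iters_per_level):
            days_in_level += loop
            total_days += loop
            loop *= halving
            yield total_days, level + days_in_level / level_limit

for days, progress in staircase_progress():
    print(f"day {days:6.2f}: stacked progress = {progress:.2f} levels")
```

Running it shows each level creeping to about 97% of its own limit before the jump, while the stacked score climbs at a roughly constant rate of levels per day.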

The Retiree

This entry is part 6 of 10 in the series Fiction

The media storm the publicists had been bracing for never occurred. There was no damage to control. The attention they had been instructed to deflect from the Baikal Trust never materialized.

And it was not because Ozy Khan was the thirty-seventh billionaire to launch himself boringly into space, in a space mansion of his own design. The thirty-fifth and thirty-sixth billionaires to do so, after all, had endured nearly as much press, both hostile and adulatory, as the first few had, decades earlier. The public, it seemed, never lost its appetite for the spectacle of great wealth ascending to extra-terrestrial heights. And the billionaires too, had perfected the art of image management in space. There had already been at least three short-lived, but successful reality shows from orbiting mansions.

Nor could the lack of a media storm be attributed to Ozy Khan being an obscure Central Asian oligarch rather than a prominent American or Chinese one. More obscure billionaires had managed to inspire large spikes of interest by ascending to vacations in luridly ostentatious space mansions, and been rewarded with notoriety around the world for their extended departures from it. A space mansion was a reliable ticket onto the center stage of global affairs.

Space after all, as one much-quoted wag had remarked in the 2020s, was the new Davos.

Even the fact that Ozy Khan would not be coming back was not without precedent. The seventh and eleventh billionaires, each terminally ill and with less than a year to live, had both launched themselves on one-way trips into space with much funereal solemnity. Both had duly died in space with cosmic gravitas, and been forgotten. Only old people made hope-he-doesn’t-come-back jokes anymore.

Perhaps the lack of drama could be attributed, one commentator suggested, to the fact that Khan had been such a dull presence on earth that it was hard to craft a story around his departure from it. His sprawling renewables and sequestration technologies empire lacked charisma. It embodied no daring technological vision, only powerful political connections, a lot of imitation and luck, and plodding, sound financial management. His official biography offered little of interest to the story-minded. His career suggested no more than the usual amount of tedious politicking, grift, and geopolitical murkiness.

There really was very little to say about Ozy Khan’s time on earth before he decided to leave it.

[Read more…]

MJD 59,396

This entry is part 17 of 21 in the series Captain's Log

I’ve developed two obsessions through the pandemic that I think will persist long past the end of it, probably to the end of my life: tinkering and storytelling.

On the tinkering front, I’ve built out a nice little science-and-engineering workshop over the last year and acquired more skills in less time than I expected to, since I don’t have a high opinion of my own hands-on abilities. As I’ve mentioned before, this is still hard to write about because while the doing is fun, getting to interesting things to show off and talk about will take some time. It’s good enough fodder for tweeting though, and I’ve been maintaining several fun ongoing threads about electronics experiments, my rover project, and 3d printing. At some point, I hope I’ll be able to write essays about this stuff, but right now it’s only coming together at Twitter level. Overall, tinkering has been the easier journey, I guess because I’m an engineer by training, so I am not really starting from scratch. Though all my old knowledge feels rusty, I think I did hit the ground running when I started around August last year.

Storytelling has been the tougher journey. In many ways, it’s very like tinkering, except with machines that run inside human brains. It is very unlike nonfiction writing. I’ve made more progress on exploring storytelling theory than in actually telling stories. But one of my breakthroughs was realizing that storytelling as a skill is orthogonal to writing skill, and the latter even gets in the way. One way to short-circuit the writer brain is to use cartoons, and I’ve done 2 comic-format stories so far this year: Space Luck and Comet Bob. I’ve also managed one prose story, Non-Contact, though it’s more a world-building design study of an idea than a fully developed story, kinda like the design study prototype I built for my rover early on. I am not yet sure what my storytelling medium is — words or pictures.

Together, these two obsessions are driving what I think is the biggest pivot not just in the life of this blog, but in my own adult life. It’s a lifestyle shift, and I’m still coming to grips with the cascading effects on other aspects of my life. Storytelling tinkerers, I am discovering, must necessarily live a different kind of life than essayist-consultant-observers. So I’ve unwittingly set up a certain narrative tension in my life that’s going to resolve itself one way or another. It’s a different headspace, as lived from the inside, and presents a different picture when viewed from the outside. Switching between nonfiction and fiction modes, or between management consulting and maker-tinkerer modes, is very disorienting, but not in an unpleasant way.

One interesting thing about both is that they are behaviors that can get you put in more of a box than the sorts of thing I’m better known for. Storytelling and tinkering are both play-like behaviors that have a lot more disruptive potential than most “serious” behaviors, but they look harmless and are easy to put in a box and ignore. They are the quintessential mostly-harmless human activities. The median tinkering project or story is entirely inconsequential. Net likelihood of impact, zero. You either enjoy the safe, marginalized obscurity of the boxes you get put in, or you’re playing for the one-in-a-million shot at making history. I’m not sure what I’m aiming at with either activity. Probably both outcomes in proportion to their actual probabilities.

At any rate, it’s nice to have some obsessions going. It makes me feel strangely young again. Obsessiveness is naturally a young person’s mode of being. To discover it again in middle age, in a somewhat mellowed form, is something of an unexpected gift, even if the precipitating event of a pandemic makes it something of a gift from the devil.

Storytelling — Matthew Dicks

This entry is part 4 of 12 in the series Narrativium

I recently finished Storyworthy by Matthew Dicks, a quintessentially American storyteller in the Mark Twain tradition. It is perhaps the most unique book on narrative structure and theory I’ve read, after Keith Johnstone’s Impro.

Dicks appears to have lived a very colorful, eventful life that supplies all the raw material you might ever want to tell lots of outrageous, extreme stories. A very American life. I have friends like that, whose lives seem to be a string of outrageous and improbable events that make for naturally good stories. Only the manner of telling needs work. Dicks insists, however, that you do not need to live a colorful life in order to tell colorful stories. That’s good news for me.

[Read more…]

Storytelling — Mamet’s Conflict Airing Theory

This entry is part 3 of 12 in the series Narrativium

One of the big questions to which I have yet to find a satisfying answer is what stories are, in the set of things that includes various other kinds of speech. David Mamet has what I think is a partial answer in Three Uses of a Knife, a short, stream-of-consciousness meditation on storytelling which I recently finished (ht: Sachin Benny).

I like plays, but not enough to be an avid theater-goer, so my only real exposure to Mamet’s work is the movie version of Glengarry Glen Ross, which lives up to its reputation, and a few episodes of The Unit, which I didn’t quite get into. But his storytelling chops are clearly strong enough for his theorizing to be interesting. His practical advice certainly is — here is a memo he sent to the staff of The Unit (ht Steve Hely), with plenty of gems in it.

But this post is about Mamet’s philosophy of storytelling, not his bag of tricks.

Mamet opens Three Uses of a Knife with a discussion of our tendency to dramatize entirely mundane everyday events, like a bus being late, or the state of the weather, into proto-stories. His opening example is:

“Great. It’s raining. Just when I’m blue. Isn’t that just like life?”

His exegesis:

[Read more…]

Non-Contact

This entry is part 7 of 10 in the series Fiction

Perhaps it was some sort of strange precognitive cultural memory of the future, but the cliches, it turned out, were all true. Well, almost all true. The aliens did come in large flying saucers that could hover and move silently at physics-defying speeds. They did make mysterious crop circles and abduct and probe hundreds of unfortunates — except this time, they were taken from and returned to busy public areas (disoriented and with memory gaps, but otherwise unharmed), in broad daylight, in full view of hundreds of smartphones. Those who had been taken in previous years and decades, from deserted highways or remote farms, were at once ecstatic and depressed. Now everybody agreed they’d been telling the truth all along, but nobody thought they were special, or even uniquely insane, anymore.

[Read more…]