Captain's Log

A Captain's Log with entries titled with Modified Julian Dates (MJD), like the Star Trek ones with stardates. Unrestricted scope of themes. This is an experiment in writing without memorable headlines (and hey, that part worked! I can't for the life of me remember what I've been writing in this blogchain). Though I prefer Zapp Brannigan's version:

Zapp Brannigan: Captain's journal; Star date... uh...
Kif: April 13th.
Zapp Brannigan: April 13th... point two.

MJD 59,163


Moore’s Law was first proposed in 1965, then again in revised form in 1975. Assuming an 18-month average doubling period for transistor density (it was ~1 year early on, and lately has been ~3 years), there have been about 40 doublings since the first IC in 1959. If you ever go to Intel headquarters in Santa Clara, you can visit the public museum there that showcases this evolution.
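The doubling arithmetic works out roughly like this (a back-of-the-envelope sketch in Python; the 2020 endpoint is my assumption, based on this entry’s MJD falling in November 2020):

```python
# Rough check of the "about 40 doublings" claim above.
# Assumptions: first IC in 1959, ~18-month average doubling period,
# counted up to this entry's date (November 2020).

FIRST_IC_YEAR = 1959
ENTRY_YEAR = 2020
DOUBLING_PERIOD_YEARS = 1.5

elapsed_years = ENTRY_YEAR - FIRST_IC_YEAR
doublings = elapsed_years / DOUBLING_PERIOD_YEARS
growth_factor = 2 ** doublings

print(f"{elapsed_years} years / {DOUBLING_PERIOD_YEARS}-year doubling "
      f"= ~{doublings:.1f} doublings")
print(f"implied density growth: ~{growth_factor:.1e}x")
```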

The future of Moore’s Law seems uncertain, but it looks like we’ll at least get to 1-3 nanometer chips in the next decade (we were at 130nm at the beginning of the century, and the first new computer I bought had a 250nm Celeron processor). Beyond 1-3nm, perhaps we’ll get to different physics with different scaling properties, or quantum computing. Whatever happens, I think we can safely say Gen X (1965-80) will have had lives nearly exactly coincident with Moore’s Law (we’ll probably die off between 2045-85).

While there have been other technologies in history with spectacular price/performance curves (interchangeable parts technology for example), there is something special about Moore’s Law, since it applies to a universal computing substrate that competes with human brains.

Gen Xers are Moore’s Law people. We came of age during its heyday. Between 1978-92 or so, the personal computer (pre-internet) was growing up along with us. The various 6502-based home computers, and the 8088, 286 “AT”, 386, 486, and Pentium were milestones of my childhood and teenage years. During that period, performance was synonymous with frequency, so there was a single number to pace our own adolescence. Those computers were underpowered enough that we could feel the difference from power increases even with simple applications. Today, you have to design stress tests with hungry apps to detect the performance limits of new computers.

After the Pentium, things got complicated, and growth was no longer a simple function of frequency. There was register size, watts, core count, RISC vs. CISC…

Life also got complicated for Gen Xers, and growth was no longer about growing taller and buying higher-frequency computers. Moore’s Law shifted regimes from micrometers to nanometers (in a decade, it should be in the picometer regime).

There’s an Apple event going on today, featuring Apple’s own silicon for the Mac for the first time: the 5nm M1 chip. But Moore’s Law is not in the spotlight. Apple’s design is.

I think some of the message of the silicon medium rubbed off on us Gen Xers. We got used to the primary thing in our lives getting better and cheaper every single year. We acquired exponential-thinking mindsets. Thinking in terms of compounding gains came naturally to us. For me personally, it has shown up most in my writing. At some level, I like the idea of producing more words per year (instructions per cycle, IPC?) with less effort (watts). This is why anytime a new medium appears that seems to make it easier to pump up quantity — Twitter, Roam Research — I jump on it. Quantity has a quality all its own, as Stalin supposedly said. We are lucky to live in an age when we can expect the fundamental tradeoffs of writing to change several times in a single lifetime. A few centuries ago, you could live an entire lifetime without writing technology changing at all.

But like Moore’s Law, I too am slowing down. The natural instinct when you feel yourself slowing down is to switch gears from quantity to quality. I think this is a mistake, at least for me. Quantity is still the most direct road to quality, as the parable of the pottery class suggests. But as with semiconductors, it doesn’t just happen. You have to consciously develop the next “process node” (like the upcoming jump from 5nm to 3nm), work out the kinks in it, increase “yield rate” (the fraction of usable chips you get out of a wafer of silicon, a function of defects, design, etc.), and then architect for that scale. Each jump to a new process node takes longer, and you face new tradeoff curves.

But each jump begins the same way: stripping away complexity from your current process and going back to the basics of words each time. You can’t add volume to complexity. You can only add complexity to volume.

For writing, sometimes the new “process node” is a whole new medium (moving from blogs to Twitter threads), other times, it is primarily a matter of rearchitecting the way you write, like with my blogchains and now this headline-free Julian-date-numbered shtick. It’s always about pushing quantity, not quality. Right now, I’m trying to engineer my next “process.” I don’t think I’ll ever produce the sheer volume of words I did 10 years ago, but I suspect I can be vastly more efficient with my energy if I arrange the toolchain right. More words per year per watt, that’s the thing to shoot for.

MJD 59,169


If you remember your high-school physics, free energy is the energy available to do work. Energy is conserved, but free energy is not. For example, when a heavy ball drops from a height, the free energy stays roughly constant (ignoring drag) right until the moment of impact. The free energy lost in each inelastic collision is proportional to the drop in peak height from one bounce to the next; that lost energy is turned into useless heat. With each bounce, more free energy is lost, until finally all of it is gone. In a world without non-conservative forces like friction (which lower free energy), a ball could bounce forever. Satellites orbiting Earth approximate this: orbital motion is useful work that can be continued indefinitely for free.
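A minimal worked version of the bounce bookkeeping, assuming a constant coefficient of restitution \(e < 1\) (my notation; the entry doesn’t introduce symbols): if \(h_n\) is the peak height after the \(n\)-th bounce, then

\[
h_{n+1} = e^2 h_n, \qquad \Delta E_n = mg\,(h_n - h_{n+1}) = (1 - e^2)\,mg\,h_n,
\]

so the free energy lost per bounce is proportional to the drop in peak height, the losses telescope to the initial \(mgh_0\) over all bounces, and setting \(e = 1\) (no friction-like losses) recovers the ball that bounces forever.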


MJD 59,256


2021 is turning out to be a slow year getting off the ground here on ribbonfarm. A quick-and-dirty theory I made up and began testing last year about a kind of play-to-production pipeline for my writing isn’t quite working out:

  • Twitter for play-level shitposting and transient lightning-rod stuff
  • Ribbonfarm for experimental and R&D writing with no QA (open)
  • Breaking Smart and Art of Gig for production-grade stuff with a bit of QA (paywalled)

Ribbonfarm is in a way getting starved of low-hanging fruit to work with as raw material.

On the one hand, if an idea is early enough, I shitpost about it on Twitter, often making what I now refer to as threadthulhus — messy, intertwingled threads that QT and reference each other in weird ways, like a bad Cthulhu dream, exploring a big topic with an utter lack of discipline.

These are not easy to clean up and serialize into essays, so I only do it when the idea feels extra strong, like the Internet of Beefs post, which started life as a threadthulhu. But often, the very act of letting something sprawl into a threadthulhu precludes it ever becoming a cleaned-up essay. You have to switch gears early enough to do that, or it becomes impossible.

On the other hand, if it’s old-style enough (as in, a style I developed here 5+ years ago), it ends up in the newsletters. The subscription mode of the two newsletters keeps me on my toes on the production end, and even when it’s not fun, I’ve disciplined myself to keep writing. And because it usually covers very familiar topics, in styles I’ve been honing for a decade, I can produce that kind of content even when I’m not feeling at the top of my game. It’s also stuff that I feel kinda doesn’t deserve a place on ribbonfarm anymore (which is a weird kind of self-snobbery, since I expect people to pay for it) because it is not bold enough in its intentions. It’s safe stuff for me personally. The risk of writing a truly bad newsletter is low because I don’t take many risks with newsletters.

Ribbonfarm is where I’ve always done stuff where I don’t want to be held to others’ expectations (which comes with taking money), but do have my own expectations. The expectations I have of myself here are the opposite of the QA-type expectations I have of myself with newsletters and books. I don’t care about consistent quality or thematic coherence. But I do expect stuff I write here to be fun to write, break new ground thematically, and be at least a little technically challenging in writing terms, forcing me to develop new tricks or skills to execute on an idea (I’m almost never methodologically experimental in the newsletters).

This experimental quality means only a small fraction of readers will have the patience to ignore the failed experiments and wait for the experiments that work. It also means it would be kinda unfair to charge for it, since there are no implied promises.

Hey, there’s a 2×2!

The thing that’s making it hard is that the two kinds of writing I want to experiment with this year — fiction and maker projects — both involve a lot of upfront design and planning.

For the former, stories need more structural work and plotting up front (even if you are a pantser like me, and approach fiction as an improv activity, it still involves way more planning than nonfiction).

For the latter, well, you have to actually design and/or build stuff offstage and take photos to talk about, as in my clockmaking project posts. Otherwise you’re vaping rather than making.

In both cases, you need more time, and longer-range planning. You can’t just make shit up in a day, which creates a problem.

Historically, 90% of the posts on ribbonfarm were conceived and written in a single day — those that took longer did so simply because they were long (~4-5k finished words per day is my physical limit), not because I was planning them. I’ve almost never done preparation, outlining, note-taking, etc. I wake up with an idea, and if I have the energy, I write for 4-14 hours, and I have a post. Done.

Lately of course, my energy has been closer to the 4-hour end than the 14-hour end, which is one of the reasons I shifted to the blogchain model. I simply don’t have the physical energy anymore for the 14-hour-day heavy lifts that fueled this blog in the early years. Worsening middle-aged eyesight and stiffening joints aren’t helping either. I have to pace myself now, and break up writing sessions with physical activity to stay sane and avoid physical pain. Working on stuff that takes more planning and preparation fits better with my current energy patterns in theory, but clearly I’m having some startup troubles.

Anyhow, this is obviously an excuse-post for why more exciting things haven’t been happening here this year, given we’re already into February. The good news is, I’m working it out: figuring out the workflows, tooling, and mechanisms to actually write the fiction and maker posts I’m itching to write. It’s just taking longer than I expected to retool the factory.

MJD 59,323


Yesterday, I was testing a new bench power supply I just bought. I tested it with a multimeter, then connected it up to a motor, to make it go brrr for fun. It’s the sort of thing I haven’t done since grad school, decades ago.

As I was tinkering, I was idly wondering about whether there was any fodder for blog posts in what I was up to. I don’t mean Maker posts. A lot of people write about Maker stuff, and do it a lot better than I ever could. I mean riffs on Life, the Universe, and Everything inspired by tinkering with a new power supply.

Most of my writing to date has been inspired by things like working in an office, consulting, watching TV and of course, reading words written by others. That stuff is good fodder for riffs on Life, the Universe, and Everything.

Though I’m having a lot of fun rediscovering engineering with a middle-aged mind, I’ve found it surprisingly difficult to mine tinkering for insights on Life, the Universe, and Everything. Tinkering In, Words Out, TIWO, is a tougher transformation than SIWO: Symbols In, Words Out. Which is why it’s an interesting challenge.

Possibly it’s just me getting used to once again looking at the world through a long-unused lens, but I think there’s more to it. There seems to be some sort of mutual inhibition function between tinkering with ideas in words and tinkering with stuff in atoms. Digital bits, as in programming, are somewhere in between.

While tinkering, you’re thinking a lot of mostly nonverbal thoughts. In this case, I was wondering about what the floating ground terminal was for, noticing how the sound of the motor was changing pitch at different voltages, observing the voltage deadzone between the motor starting/stopping as you turn the voltage up/down, thinking of ways to measure rpm and torque easily, and so on.

Depending on how you think about it, there’s either nothing to say about this sort of mundane tinkering stream of consciousness, far from Archimedean eureka moments, or there’s enough to merit several thousand words of prose. And I don’t mean how-to and instruction manual type stuff.

You could, for instance, write about the edifying, soul-uplifting effects of working with your hands. You could write satire about cliche mid-life crisis activities like tinkering in a home workshop, triggered by too much time spent in the world of symbols. You could wax philosophical about materiality, and sensory-experiential mindsets. You could write some poignant poetry about the smell of multimeters in the morning.

And of course, you could write about the actual activity itself, like the not-in-textbooks metaphysical subtleties lurking beneath apparently well-understood things like voltage and current. That is the sort of thing Brian Skinner has been blogging about on here lately.

But the thing is, whatever you think you might want to say, you have to stop the physical tinkering and start the verbal tinkering. You have to switch context from subsymbolic to symbolic ways of experiencing the world.

Physical activity radiates plenty of cues from which verbalized thought can begin, but to actually follow a verbal train of thought you have to stop the physical stream of activity, and think with symbols again. The context-switching is much more drastic than between two symbolic-domain activities.

I suspect the blue-collar/white-collar divide is about more than pre-modern class boundaries being perpetuated by industrial forms of organization. The prototypical activities involve different sorts of cognition.

Physical tinkering is basically five-sense environment scanning at a very high bitrate, driving tactile action that’s much more complex than producing symbol streams, aka typing. The literal Fingerspitzengefühl — fingertip feeling — is more complex and less available to ensnare with words. And if you force it, either the words will suffer, or the skill will.

While tinkering, you’re logging a lot of information, and even though most of it is very low-salience, processing it is fundamentally different from working with streams of symbols. Symbol tinkering is very low bitrate, but the acrobatics it can sustain are much more complex. Physical tinkering is a Big Data computation for the human brain, while symbolic tinkering is ordinary computation. Reading and writing, of course, are mostly symbolic. Writing about social stuff and interactions with other people is also mostly symbolic, though of course there’s a world of non-verbal detail to observe if you want to.

Programming is somewhere in between subsymbolic and symbolic tinkering, and is harder to turn into Life, the Universe, and Everything fodder than either. Maybe that’s why movies and TV shows have struggled the most with portraying lives lived amid code.

MJD 59,326


I am considering adopting two rules for projects that I think are very promising for 40+ lifestyles.

  1. No new top-level projects (TLPs) (Twitter thread)
  2. Ten-year commitments to projects or no deal (Twitter thread)

I don’t mean practically necessary projects like doing something to earn money. I mean non-necessary life projects like writing a blog, or a maker project.

Shitposting and idle dabbling are still allowed so long as they don’t grow into new TLPs. They can grow into subprojects of existing TLPs, but even then I need to make a ten-year commitment or not do them at all. There’s a bunch of ambiguity here, around what’s a project versus the contents of one (is a new blogchain a subproject or just a thread of content for the overall blogging project?), but set that aside for now.


MJD 59,354


Peter Turchin’s concept of elite overproduction has been on my mind increasingly lately. It refers to historical conditions during which there are more people aspiring to elite roles in society than power structures can absorb. In 2021, to a first approximation, this is people with college degrees in fields with low market demand. A good measure of the degree of overproduction is the intensity and rancor around STEM vs. humanities type debates, and “do you want fries with that?” jokes about art history degrees.

The idea of elite overproduction is descriptive, not normative. It does not matter who wins Twitter debates about the “true” cultural value of various elite roles and aspirations. What matters is the actual distribution of unemployed human elite overstocks. When large masses of people fail to find economic means to sustain the elite social roles they’ve been conditioned to expect, and trained and enculturated to occupy, you have elite overproduction going on. The prevailing default perception of the specifics of the distribution of surplus elites is correct in broad strokes, even if there are weird exceptions and corner cases. It is probably true right now that the average STEM degree is less likely to make you part of the elite overstock than the average humanities degree.

The jokes about do-you-want-fries-with-that are particularly fraught this year. Service industries struggle to hire workers, but are wary of letting wages inflate even as various other price levels succumb. Unlike commodity prices, which can go up and down with supply and demand, wages are something of a sociopolitical one-way door. They’re harder to push back down once they manage to creep up. There is revolutionary fervor in the air as well, around everything from student-loan forgiveness and stimulus economics to policing and urban blight. The optics around great wealth are much uglier today than 10 years ago, when they were last in the spotlight to this degree. Gen X has joined the Boomers on the villains’ side of the aisle. Younger generations struggle, while older generations sit on top of record savings.

One reason to take elite overproduction theory seriously as a lens right now is that Turchin has been unusually right lately in his calls about the timing of historical crisis points. He anticipated that 2020 would be a year of crisis, and it was. He didn’t predict Covid afaik, but the pandemic was merely a cherry on top of the dire basic scenario he foresaw.

Thirteen years ago, the Global Financial Crisis led to a generation of disaffected and underemployed young graduates turning their online-native skills to culture-warring. Ten years ago, that reached a flashpoint with the Occupy movement, and led to far-right and far-left movements making inroads into mainstream politics and shaping the next decade. That whole story was primarily an elite overproduction story. To the extent there was non-elite energy in the movements, it was there because it had been co-opted by wannabe-elite actors in service of their own frustrations. In the US, urban black political issues turned into white wannabe-elite causes, and rural and small-town rust-belt blue-collar issues did as well. For a few years, all political roads led to elite overstocks, with the original causes often transformed unrecognizably in the process. The result was the volatile mix of genuine and imagined grievances, insincere co-option of non-elite causes, and outright grift that gave us the Great Weirding.

It’s commencement season, and we can expect to see a new crop of commencement speeches soon. The global Class of 2021 will probably be much smaller than normal, and will have to make do with curtailed or online ceremonies. Despite the smaller cohort, though, I suspect most of this year’s crop of fresh graduates will still struggle to find jobs and careers, and be in a worse situation than the Class of 2008. I wonder what the commencement speakers will say. I’d have nothing much inspiring to say if challenged to give such a speech. It is hard for privileged older generations to say useful things to younger generations entering adulthood under much worse conditions.

Conditions today are far more fraught than in 2008. Freshly minted Zoomer wannabe-elites today are likely more disaffected than the Millennials who came of age through the GFC, more skilled at channeling that disaffection into elite overstock unrest, and have more history to learn from. On the plus side (such as it is), they have only ever known fraught times, and have never known hope in the sense Millennials did. Will that make them more or less energized? I don’t know.

But in the meantime, on the demand side, elite roles have become even more scarce, non-elite under-the-API roles are under even greater stress, and there has been essentially no political or economic movement on the issues of 2011. The far right has, to some extent, shot its shot, but the far left has yet to do so. All in all, it’s a much bigger powder keg than 2011.

Unless something exceptionally big and positive happens soon, as the emergency civic discipline of Covid loosens its grip on populations around the world, we can expect the 2020s to get even more explosively weird than the 2010s.

Here we go again. Fasten your seatbelts.

MJD 59,396


I’ve developed two obsessions through the pandemic that I think will persist long past the end of it, probably to the end of my life: tinkering and storytelling.

On the tinkering front, I’ve built out a nice little science-and-engineering workshop over the last year and acquired more skills in less time than I expected to, since I don’t have a high opinion of my own hands-on abilities. As I’ve mentioned before, this is still hard to write about because while the doing is fun, getting to interesting things to show off and talk about will take some time. It’s good enough fodder for tweeting, though, and I’ve been maintaining several fun ongoing threads about electronics experiments, my rover project, and 3D printing. At some point, I hope I’ll be able to write essays about this stuff, but right now it’s only coming together at Twitter level. Overall, tinkering has been the easier journey, I guess because I’m an engineer by training, so I am not really starting from scratch. Though all my old knowledge feels rusty, I think I did hit the ground running when I started around August last year.

Storytelling has been the tougher journey. In many ways, it’s very like tinkering, except with machines that run inside human brains. It is very unlike nonfiction writing. I’ve made more progress on exploring storytelling theory than in actually telling stories. But one of my breakthroughs was realizing that storytelling as a skill is orthogonal to writing skill, and the latter even gets in the way. One way to short-circuit the writer brain is to use cartoons, and I’ve done two comic-format stories so far this year: Space Luck and Comet Bob. I’ve also managed one prose story, Non-Contact, though it’s more a world-building design study of an idea than a fully developed story, kinda like the design study prototype I built for my rover early on. I am not yet sure what my storytelling medium is — words or pictures.

Together, these two obsessions are driving what I think is the biggest pivot not just in the life of this blog, but in my own adult life. It’s a lifestyle shift, and I’m still coming to grips with the cascading effects on other aspects of my life. Storytelling tinkerers, I am discovering, must necessarily live a different kind of life than essayist-consultant-observers. So I’ve unwittingly set up a certain narrative tension in my life that’s going to resolve itself one way or another. It’s a different headspace, as lived from the inside, and presents a different picture when viewed from the outside. Switching between nonfiction and fiction modes, or between management consulting and maker-tinkerer modes, is very disorienting, but not in an unpleasant way.

One interesting thing about both is that they are behaviors that can get you put in more of a box than the sorts of thing I’m better known for. Storytelling and tinkering are both play-like behaviors that have a lot more disruptive potential than most “serious” behaviors, but they look harmless and are easy to put in a box and ignore. They are the quintessential mostly-harmless human activities. The median tinkering project or story is entirely inconsequential. Net likelihood of impact, zero. You either enjoy the safe, marginalized obscurity of the boxes you get put in, or you’re playing for the one-in-a-million shot at making history. I’m not sure what I’m aiming at with either activity. Probably both outcomes in proportion to their actual probabilities.

At any rate, it’s nice to have some obsessions going. It makes me feel strangely young again. Obsessiveness is naturally a young person’s mode of being. To discover it again in middle age, in a somewhat mellowed form, is something of an unexpected gift, even if the precipitating event of a pandemic makes it something of a gift from the devil.

MJD 59,436


A week ago, for the first time in decades, I spent several days in a row doing many hours of hands-on engineering work in a CAD tool (for my rover project), and noticed that I had particularly vivid dreams on those nights. My sleep data from my Whoop strap confirmed that I’d had more REM sleep on those nights.

These were dreams that lacked narrative and emotional texture, but involved a lot of logistics. For example, one dream I took note of involved coordinating a trip with multiple people, with flights randomly canceled. When I shared this on Twitter, several people replied to say that they too had similar dreams after days of intense but relatively routine information work. A couple of people mentioned dreaming of Tetris after playing a lot, something I too have experienced. High-REM dreaming sleep seems to be involved in integrating cognitive skill memories. This paper by Erik Hoel, The Overfitted Brain: Dreams evolved to assist generalization, pointed out by @nosilver, argues that dreaming is about mitigating overfitting of learning experiences, a problem also encountered in deep learning. This tracks for me. It sounds reasonable that my “logistics” dreams were attempts to generalize some CAD skills to other problems with similar reasoning needs. REM sleep is like the model training phase of deep learning.
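Here is a toy sketch of that analogy in Python (my construction, not Hoel’s actual model): fit a flexible model to one day’s worth of noisy examples, then refit after replaying corrupted, dreamlike variants of the same examples, and compare how well each generalizes.

```python
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(0)

# One day's experience: ten noisy samples of a simple underlying pattern.
x = np.linspace(0, 1, 10)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.1, x.size)

# "Awake only": a flexible model memorizes the day's samples exactly.
awake = Polynomial.fit(x, y, deg=9)

# "With dreaming": replay the same experiences many times as corrupted,
# dreamlike variants (input jitter), which regularizes the same model class.
reps = 100
x_dream = np.tile(x, reps) + rng.normal(0, 0.05, reps * x.size)
y_dream = np.tile(y, reps)
dreamed = Polynomial.fit(x_dream, y_dream, deg=9)

# Generalization: compare both against the true pattern at unseen points.
x_test = np.linspace(0.05, 0.95, 200)
truth = np.sin(2 * np.pi * x_test)
for name, model in [("awake only", awake), ("with dreaming", dreamed)]:
    mse = np.mean((model(x_test) - truth) ** 2)
    print(f"{name}: test MSE = {mse:.3f}")
```

The jittered replay is the standard deep-learning trick (noise injection as regularization) that Hoel’s paper maps onto dreaming.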

[Figure: Dreams 2×2]

This got me interested in the idea of tapping into the unconscious towards pragmatic ends. For example, using the type of dreams you have to do feedback regulation of the work you do during the day. I made up a half-assed hypothesis: the type of dream you have at night depends on two variables relating to the corresponding daytime activity — the level of conflict in the activity (low/high), and whether the learning-loop length is shorter or longer than 24 hours. If it is shorter, you can complete multiple loops in a day, and the night-time dream will need to generalize from multiple examples. If it is longer, the day ends with less than one complete loop and no immediate resolution, so instead you get dreams that integrate experiences in an open-loop way, using narrative-based explanations. If there’s no conflict and no closed loops, you get low dreaming. There’s nothing to integrate, and you didn’t do much thinking that day (which for me tends to translate to poor sleep). I made up the 2×2 above for this idea.
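For fun, the hypothesis compresses into a toy lookup (my labels and thresholds; the entry only names three of the four quadrant outcomes):

```python
def predicted_dream(conflict: str, loop_hours: float) -> str:
    """Toy encoding of the dreams 2x2: daytime conflict (low/high) vs.
    learning-loop length relative to the 24-hour day."""
    loops_close_within_day = loop_hours < 24
    if loops_close_within_day:
        # Multiple completed loops per day: many examples to generalize from.
        return "generalization dreams (compress the day's worked examples)"
    if conflict == "high":
        # Open loop, no resolution yet: narrative-based integration.
        return "narrative dreams (open-loop integration)"
    # No conflict and no closed loops: nothing to integrate.
    return "low dreaming"

print(predicted_dream("low", loop_hours=1))    # e.g. a day of short CAD loops
print(predicted_dream("high", loop_hours=96))  # e.g. an unresolved dispute
```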

I have no idea whether this particular 2×2 makes any sense, but it is interesting that such phenomenology lends itself to apprehension in such frameworks at all. I rarely remember dreams, but I think even I could maintain a dream journal based on this scheme, and try to modulate my days based on my nights.

This also helps explain why people in similar situations might have similar dreams (such as all the “corona dreams” that many of us were having during the early lockdown months). It also lends substance to the narrative conceits of stories like H. P. Lovecraft’s The Call of Cthulhu, which begins with people across the world having similar dreams (where it turns into science fiction is in the specificity of the shared motifs and symbols that appear).

You don’t need to buy into dubious Jungian or Freudian theories of individual and collective dreaming to think about this stuff in robust ways. The development of deep learning, in particular, offers us a much more robust handle on this phenomenology. Dreams are perhaps our journeys into our private latent spaces, undertaken for entirely practical purposes like cleaning up our messy daytime learning. (There are other theories of dreaming too, of course, like David Eagleman’s theory that we dream to prevent the visual centers from getting colonized by other parts of the brain at night, but we’re only hypothesizing contributing causes, not determinative theories.)

MJD 59,459


In 2018, historian Michael McCormick nominated 536 AD as the worst year to be alive. There was a bit of fun coverage of the claim. This sort of thing is what I think of as “little history.” It’s the opposite of Big History.

Big History is stuff like Yuval Noah Harari’s Sapiens, Jared Diamond’s Guns, Germs, and Steel, or David Graeber’s Debt. I was a fan of that kind of sweeping, interpretive history when I was younger, but frankly, I really dislike it now. Historiographic constructs get really dubious and sketchy when stretched out across the scaffolding of millennia of raw events. Questions that are “too big to succeed” in a sense, like “why did Europe pull ahead of China,” tend to produce ideologies rather than histories. It’s good for spitballing and first-pass sense-making (and I’ve done my share of it on this blog) but not good as a foundation for any deeper thinking. Even if you want to stay close to the raw events, you get sucked into is-ought historicist conceits, and dangerously reified notions like “progress.” Yes, I’m also a big skeptic of notions like Progress Studies. To the extent that the arc of the moral universe has any coherent shape at all, to proceed on the assumption that it has an intelligible shape (whether desired or undesired), is to systematically blind yourself to the strong possibility that the geometry of history is, if not random, at least fundamentally amoral.

About the only Big History notion I have any affection for is Francis Fukuyama’s notion that in an important sense, Big History has ended (and I’m apparently the last holdout, given that Fukuyama himself has sort of walked back the idea).

Little histories, though, are another matter. Truly little histories are little in both space and time. Much of history is intrinsically little in that sense, and fundamentally limited in the amount of inductive or abductive generalization it supports once you let go of overly ambitious historicist conceits and too-big-to-succeed notions like “Progress.” But some little histories are, to borrow a phrase from Laura Spinney’s book about the Spanish Flu, Pale Rider, “broad in space, shallow in time.” They allow you to enjoy most of the pleasures afforded by Big Histories, without the pitfalls.

Whether or not the specific year 536 was in fact the worst year, and whether or not the question of a “worst year” is well-posed, the year was definitely “broad in space, shallow in time” due to the eruption of an Icelandic volcano that created extreme weather worldwide. The list of phenomena capable of creating that kind of globalized entanglement of local histories is extremely short: pandemics, correlated extreme weather, and the creation or destruction of important technological couplings.

The subset of little histories that are “broad in space, shallow in time” — call them BISSIT years (or weeks, or days) — serves as a set of synchronization points for the collective human experience. Most historical eras feature both “good” and “bad” from a million perspectives. Sampling perspectives from around the world at a random time, and adjusting for things like class and wealth, you would probably get a mixed bag of gloomy and cheery perspectives that are not all gloomy or cheery in the same way. But it is reasonable to suppose that at these synchronization points, systematic deviations from the pattern would emerge. Notions of good and bad align. Many of us would probably agree that 2020 sucked more than most years, and would even agree on the cause (the pandemic), and key features of the suckage (limitations on travel and socialization). Even if there were positive aspects to it, and much-needed systemic changes ensue in future years, for those of us alive today, who are living through this little history, the actual experience of it kinda sucks.

The general question of whether the human condition is progressing or declining is, to me, both ill-posed and uninteresting. You get into tedious and sophomoric debates about material prosperity versus perceived relative deprivation. You have people aiming gotchas at each other (“aha, the Great Depression was actually more materially prosperous than the optimistic Gilded Age 1890s!” or “there was a lot of progress during the Dark Ages!”).

The specific question of whether a single BISSIT year should be tagged with positive or negative valence, though, is much more interesting, since the normal variety of “good” and “bad” perspectives temporarily narrows sharply. Certainly, BISSIT years have a sharply defined character given by their systematic deviations, and sharp boundaries in time. They are “things” in the sense of being ontologically well-posed historiographic primitives that are larger than raw events, but aren’t reified and suspect constructs like “Progress” or “Enlightenment.” There is a certain humility to asking whether these specific temporal things are good or bad, as opposed to the entire arc of history.

Two such things need not be good or bad in the same way. History viewed as a string of such BISSIT beads need not have a single character. Perhaps there are red, green, and blue beads punctuating substrings of grey beads, and in your view, red and green beads are good, while blue beads are bad, and so on. We don’t have to have a futile debate about Big History that’s really about ideological tastes. But we can argue about whether specific beads are red or blue. You can ask about the colors of historical things. We can take note of the relative frequencies of colored versus grey beads. And if you’re inclined to think about Big History at all, you can approach it as the narrative of a string of beads that need not have an overall morally significant character. Perhaps, like space, time can be locally “flat” (good or bad as experienced by the people within BISSIT epochs) but have no moral valence globally. Perhaps we live in curved historical event-time, with no consistent “up” everywhere.

MJD 59,487


People who have a literal-minded interest in matters that extend beyond their own lives, and perhaps those of a couple of generations of ancestors and descendants, are an odd breed. For the bulk of humanity, these are zones for mythological imaginings rather than speculative-empirical investigation, if they are of interest at all. For the typical human, what happened in 1500 AD, or what might happen in 2500 AD, are questions to be answered in ways that best sustain the most psyche-bolstering beliefs for today. And you can’t really accuse them of indulging in self-serving cognitions when the others who might be served instead are either dead or unborn.

As a result, mythology is popular (by which I mean any heavily mythologized history, such as that of the founding of the United States, not just literal stories of Bronze Age gods and demons), but history is widely viewed as boring. Science fiction is popular, but futurism (of the wonky statistical trends and painstakingly reality-anchored scenario planning variety) is widely viewed as boring.

But if you think about it, it is history and futurism that are the highly romantic fields. Mythology and science fiction are pragmatic, instrumental fields that should be classified alongside therapy and mental healthcare, since they serve practical meaning-making purposes in the here and now, in a way that is arguably as broadly useful as antibiotics.

History proper is rarely useful. The only reason to study it is the romantic notion that understanding the past as it actually unfolded, even if only 10 people in your narrow subfield pay attention and there are no material consequences in the present, is an elevating endeavor.

Similarly, long-range futurism proper (past around 30 years, say) is rarely useful. Most political and economic decisions are fully determined, or even overdetermined, by much shorter-range incentives and considerations. There are also usually crippling amounts of uncertainty limiting the utility of whatever your investigations reveal. And humans are really bad at acting with foresight that extends past about a year anyway, even in the very rare cases where we do get reliable glimpses of the future. So the main reason to study the future is the romantic notion that it is an elevating endeavor.

Who exactly is it that believes these endeavors are elevating, and why should their conceits be respected, let alone potentially supported with public money?

Well, people like you and me, for starters: people who read and write and argue about these things, and at least occasionally try to rise above mythologizing and science-fictional instincts to get a glimpse of the past and future as they really were or will be, with high plausibility. And I can’t say I have good arguments for why our conceits should be respected or supported. Fortunately, they are also very cheap conceits as conceits go. All we need is time and an internet connection to indulge them, and a small cadre of researchers in libraries and archives generating fodder.

How do we even know when we’ve succeeded? Well of course, sometimes history at least is dispositive, and we find fragments that are no longer subject to serious revisionism. And sometimes the future is too — we can predict astronomical events with extraordinary certainty, for instance.

But that’s just the cerebral level of success. At a visceral, emotional level, when either history or futurism “work” in the romantic sense that interests me, the result is a slight shrinkage in anthropocentric conceptions of the world.

Every bit of history or futurism that actually works is like a micro-Copernican revolution.

When they work, history and futurism remind us that humans are no more at the “center” of time, in the sense of being at the focal point of unfolding events, than we are at the “center” of the universe. The big difference between space and time in this regard is that decentering our egos in historical time is a matter of patient, ongoing grinder-work, rather than one of a few radical leaps in physics.