ε/δ Thinking

We live in a world shaped by, and highly conducive to, discrete or 0/1 thinking. But not so long ago, we lived in a world shaped by, and highly conducive to, a kind of thinking you could call continuous, or ε/δ (epsilon/delta) thinking.

The basic idea behind ε/δ thinking is to think of the world primarily in terms of change, and secondarily in terms of extremely smooth, in fact infinitely smooth, types of change. The symbols ε and δ refer to arbitrarily small quantities: δ is an arbitrarily small change in the input, while ε is the corresponding arbitrarily small change in the output.
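
For reference, here is the standard textbook ε/δ definition of continuity, the formal version of the “small cause, small effect” relationship (ordinary calculus, not anything specific to this essay):

```latex
% Standard epsilon/delta definition of continuity at a point a:
% the output error can be kept below any epsilon by keeping the
% input error below some matching delta.
f \text{ is continuous at } a \;\iff\;
\forall \varepsilon > 0 \;\, \exists \delta > 0 :\;
|x - a| < \delta \;\Rightarrow\; |f(x) - f(a)| < \varepsilon
```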

The discrete, or 0/1 view of the world is fundamentally built around the fiction of being. Things either are or aren’t. Change is an illusion. “Things” merely get “created” and “destroyed.” 0s turn into 1s and 1s turn into 0s. There are no gray areas, merely insufficient bits of precision. There is no smoothness, merely ontologies beyond resolution limits. Reality is quantized.

The continuous, or ε/δ view of the world is fundamentally built around the fiction of becoming. Things always are (or equivalently, never are; in a becoming-centric view, it doesn’t really matter). There is no creation or destruction. Things just become more or less beingy.

Thinking about this stuff always reminds me of the Cheshire Cat in Alice in Wonderland, which goes from being a 0/1 cat to being an ε/δ cat:

“Did you say pig, or fig?” said the Cat.

“I said pig,” replied Alice; “and I wish you wouldn’t keep appearing and vanishing so suddenly: you make one quite giddy.”

“All right,” said the Cat; and this time it vanished quite slowly, beginning with the end of the tail, and ending with the grin, which remained some time after the rest of it had gone.

I suspect Carroll intended this as commentary on discrete/continuous mathematics, including the little bit of wordplay about pig/fig (which reads to me like a joke about ε/δ adjacency ambiguities exposed by comprehension noise in a notionally discrete symbol space). There’s a world of philosophical fun in this one passage.

Peter Thiel introduced a popular theology you could call 0/1/n thinking, where the 0/1 distinction represents the transition from non-being to being, a contrarian act of ab initio creation, while the 1/n distinction represents a kind of mindless mimetic repetition. I suspect a certain kind of person really likes constructing world views this way. Entities springing fully formed from the fabric of the cosmos and extending their reign over reality through pure replication. Some events are information-privileged. They bring newness into the world or destroy oldness. Other events contain no new information. They are replications and reruns. Such views lend themselves very easily to good/evil distinctions, and derived notions of progress — events containing more creation/replication of “good” and destruction/extinction of “bad.”

But information cannot be created or destroyed, and viewing the world this way makes most people rather giddy, like Alice. Most people prefer to see the world in terms of gradual changes and strong continuities. Good/evil is much less popular as an overlay down in the weeds of life than it is in the courts and churches of 0/1/n thinking.

The more you pay mindful attention, the more it seems like reality is a bog of ambiguity, both ontological and moral. Can we lean into this? Can there be an ε/δ theology? One that doesn’t preclude the possibility of radical change, but resists the temptations of comprehending it through 0/1/n fictions coded with good/bad moral overlays?

Calculus rests on the ε/δ relationship being well-behaved. The world may or may not have any continuous aspects to it, but in calculus models of it, effects can be made arbitrarily small by making causes arbitrarily small. Many things behave approximately like this even if they’re ultimately discrete. For example, if you nudge, or perturb, a ball on a flat floor slightly (small δ), it moves slowly and slightly (small ε). If you nudge it hard, it rolls fast and far.

Equilibrium points mess with these simple relationships. Stable equilibria are fine. A small nudge to a ball at the bottom of a valley will only push it a bit up the slope and it will roll back, oscillating in the basin till it settles again, thanks to friction. The larger the nudge (δ), the higher up the valley slope (ε) the ball will go.

But unstable equilibria are complicated. A small nudge to a ball poised at the top of a hill is enough to send it down to the valley below. Here, the δ can be as close to zero as you like, but the ε cannot be made arbitrarily small, since the nearest neighboring stable equilibrium is unlikely to be infinitesimally close (somebody has probably made up a theoretical mathematical system where stable equilibria form what is known as a dense subset; if you know of such, do share in the comments).
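
A minimal numerical sketch of this asymmetry (my own toy example in Python, not from the post): a damped ball on a double-well landscape with stable valleys at ±1 and an unstable hilltop at 0. Nudge it by δ and measure the largest resulting excursion ε.

```python
# Toy double-well landscape V(x) = (x^2 - 1)^2 / 4: stable valleys at
# x = +1 and x = -1, unstable hilltop at x = 0. Nudge a damped ball by
# delta and track the largest excursion epsilon from its starting point.

def max_excursion(x0, delta, dt=0.001, steps=60000, friction=1.0):
    x, v = x0 + delta, 0.0
    worst = abs(delta)
    for _ in range(steps):
        force = -x * (x * x - 1.0) - friction * v  # -V'(x) minus damping
        v += force * dt
        x += v * dt
        worst = max(worst, abs(x - x0))
    return worst

for delta in (1e-2, 1e-4, 1e-6):
    print(f"valley  (x0=1): delta={delta:g}  epsilon={max_excursion(1.0, delta):.6f}")
    print(f"hilltop (x0=0): delta={delta:g}  epsilon={max_excursion(0.0, delta):.6f}")
# In the valley, epsilon shrinks right along with delta; on the hilltop the
# ball travels a distance of order one no matter how small the nudge is.
```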

If the system is sufficiently uneven, inputs that are too close to tell apart result in outputs that are radically far apart. This is, of course, the phenomenon chaos theorists call sensitive dependence on initial conditions. Ian Malcolm in the original Jurassic Park demonstrates this with a drop of water on the back of his hand rolling off in unpredictably different directions. Replace the drop of water or ball with a butterfly flapping its wings in the atmosphere, and you get the classic motif of chaos theory.
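
The standard classroom demonstration of this is the logistic map rather than a water drop; a quick sketch (my substitution, not from the essay):

```python
# Sensitive dependence on initial conditions via the logistic map
# x -> r * x * (1 - x) at r = 4, a standard chaotic example.

def logistic_orbit(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.200000)
b = logistic_orbit(0.200001)  # inputs differ by a delta of 1e-6
for t in (0, 10, 20, 30, 40):
    print(t, abs(a[t] - b[t]))  # the output gap grows to order one
```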

Interacting equilibria are the basis of digital ontologies, but are rarely contemplated outside the semiconductor industry. The classic static-memory physical bit is some sort of bistable circuit that sits at one of two stable equilibrium points, and it takes a large impulse, not an arbitrarily small nudge, to switch between them. DRAM bits are maintained in an unstable state by constant refreshes. Either way, the basis for 0/1/n thinking as embodied by computation has to be constructed out of phenomenology that is naturally ε/δ.
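
A minimal sketch of the idea of carving a 0/1 bit out of continuous inputs, using a hysteresis (Schmitt-trigger-style) element; this is an illustration of the principle, not a model of an actual SRAM or DRAM cell:

```python
# A toy bit with hysteresis: small wiggles in the analog input leave the
# stored state alone; only a swing past the opposite threshold flips it.

class HysteresisBit:
    def __init__(self, low=0.2, high=0.8, state=0):
        self.low, self.high, self.state = low, high, state

    def update(self, v):
        if self.state == 0 and v > self.high:
            self.state = 1  # a big push is needed to set the bit...
        elif self.state == 1 and v < self.low:
            self.state = 0  # ...and a big push to reset it
        return self.state

bit = HysteresisBit()
for v in (0.45, 0.55, 0.50, 0.95, 0.60, 0.40, 0.79, 0.10):
    print(v, bit.update(v))
# Wiggles around 0.5 never flip the bit; only 0.95 (set) and 0.10 (reset) do.
```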

We can generalize these ideas to entire histories of systems in time. Imagine a boulder rolling down a dry river valley. Small perturbations to its path make little difference, since it will roll back to the channel. But a boulder rolling along the top of a ridge will react to small perturbations very differently, depending on which side they come from. When you apply ε/δ thinking to entire trajectories you get the calculus of variations. The ε/δ quantities turn into functions of time and space. You can think of a ship traveling along a nominal route R(x(t), y(t)), with a disturbance function δ(t) from wind and waves, and the resulting deviation ε(t) modeling how the trajectory is disturbed by the environment. To produce macro-discreteness in spaces of trajectories, we talk about (mathematical) catastrophes, bifurcation diagrams, period doublings, and all the other fun things that make up chaos theory. When we talk about stylized discrete decision trees, or narrative or game trees, it is good to keep in mind that somewhere in the background there are physical processes playing out, crafting landscapes of discrete 0/1 adjacent possibility out of the plate tectonics of ε/δ phenomenology.
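
Here is a toy trajectory version of the channel-vs-ridge picture (my illustration, with made-up parameters): a ship holds a nominal straight route y = 0 while a small random disturbance δ(t) from wind and waves pushes it sideways, and ε(t) is the cross-track deviation.

```python
# Cross-track error under small continuous disturbances. A negative
# restoring coefficient models a "channel" that pulls the ship back toward
# the nominal route; a positive one models a "ridge" that pushes it away.
import random

def worst_deviation(restoring, dt=0.01, steps=2000, noise=0.01, seed=0):
    rng = random.Random(seed)
    y, worst = 0.0, 0.0
    for _ in range(steps):
        delta = rng.uniform(-noise, noise)   # small disturbance delta(t)
        y += (restoring * y + delta) * dt    # cross-track dynamics
        worst = max(worst, abs(y))
    return worst

print("channel:", worst_deviation(restoring=-1.0))  # deviation stays tiny
print("ridge:  ", worst_deviation(restoring=+1.0))  # deviation grows huge
```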

Imagine paths in some abstract space, like human life trajectories. Narrative possibilities that are normally understood in 0/1/n terms like going to college or taking a particular job appear in a different light through an ε/δ lens. They are merely the particle-like summary epiphenomena that emerge out of the constant flux of ε/δ processes in the adjacent possible band. Did you really choose to go to University A over University B, navigating a discrete pair of futures via a fork as an autonomous sovereign individual with two admission letters? Or did the universe discretely nudge you down one discrete path (heh!) with a series of ε/δ impulses that ultimately created the illusion of a fork having been navigated?

A simple illustration. If you prompt someone to pronounce a series of xxOP single-syllable words like MOP, COP, TOP, SOP… that you spell out, and then suddenly interrupt with the question, “what do you do when you come to a green traffic light?” the person will almost always say STOP rather than GO. It’s a silly conditioning trick, but illustrates how and why big decisions are often the emergent automatic result of smaller decisions unfolding just below conscious awareness. To the extent being is constructed out of the events that pop above the level of subconscious automaticity into conscious contemplation, conscious agency is a kind of fiction. If you poke a bit, what one might call the ontic originalism of 0/1/n thinking tends to fall apart. Things don’t spring fully formed from the cosmos. Instead, an unconscious current in materiality is nudged into subjective perceptions of thingness by a particular perturbation. Giving the thing a name and a boundary is more about human conceits than intrinsic being.

Of course, this is not always true. There are true phenomenological thresholds in nature. There is a real distinction and genuine boundary between a combustible mixture and an actual flame; between a pile of fissile material and an explosion; between laminar and turbulent flow. But such natural boundaries are both rarer than we like to pretend, and less friendly to ontic-originalist conceits than we might prefer. The world is much more of a vast ε/δ swamp than we imagine, and our 0/1/n conceits are much more fragile than we like to admit.

I always found it amusing to think that in the Biblical worldview, God told Adam to name all the plants and animals, but categories like tree are taxonomic fictions. We can partition, cluster, categorize, and name things all we want, but if a convenient boundary we are attached to isn’t grounded in deeper structures in the underlying ε/δ landscapes, all we will do is confuse ourselves with pantheons of being, and pay the price as traumas in becoming.

One good definition of science is that it progressively frees us from unnecessary ontological commitments, and gives us the option of constructing less arbitrary worldviews that do more and more with less and less. This is the epistemic equivalent of Buckminster Fuller’s idea of ephemeralization, and Ockham’s Razor is one heuristic based on it. The charge that science is “reductionist” is, I think, a misunderstanding of its tendency to dismantle fragile 0/1/n views and replace them with ε/δ bogs comprehended with the fewest commitments possible. And this is not minimalism for its own sake, but in service of unleashing becoming over being.

I’m not trying to make any sort of deep point about whether reality is continuous or discrete in an absolute sense. As far as I understand modern physics, the fundamental structure seems to be discrete at least down to the Planck level. The point I’m making here is a relative one. Relative to the fuzzy boundary between subconscious and conscious awareness, ε/δ thinking is the real thing, and 0/1/n thinking is something of a post-hoc fiction constructed as an exercise in continuous retconning.

For fiction writers, of course, ε/δ is much richer in possibilities. Time travel, multiverse, and historical counterfactual thinking all explore the question of what happens if you make arbitrarily small changes to event streams.

The claim that we used to live in an ε/δ world but now live in a 0/1/n world can, I think, be read as a claim about a radically increased separation between being and becoming, a kind of alienation. The distinction is similar to the one between natural and artificial. For a thing to be artificial, it must have a 0/1/n character. For becoming to be construed in terms of being, we must necessarily introduce the violence of ontological illusions. Things that are thought to be in an absolute sense can only be “created” and “destroyed” in traumatic ways.

But if you see being as supervening on becoming (which is roughly the idea behind philosophies like Taoism), you open up more possibilities, and perhaps lower the trauma of being.

Comments

  1. Simon McClenahan says

    But isn’t epsilon/delta just discrete 0/1 at a much lower (down to Planck level, or beyond) magnitude? I feel that using non-discrete (aka continuous) math or computation is effectively a shortcut taken for the sake of computation time when predicting, whether by an AI or a human brain.

    I have also considered that maybe we conflate 0 with null a lot. Something either exists/true, or not. In most computer systems null is encoded as 0 in memory, but there’s plenty of bugs to be found if your program assumes null is the same as zero, or an empty set (including empty String “”)

    The mystery of choice and the illusion of free will arise because it’s just not possible to perfectly predict the future with the information available to us at any moment in time, unless it is very “simple” like adding numbers to generate a score, some sort of computational algorithm that works at a specific level or maybe even a range of levels, giving the illusion of non-discrete fuzziness.

    The philosophy of choice-less awareness https://en.wikipedia.org/wiki/Choiceless_awareness represents to me a 0/1 or null/exists model.

    Then there’s quantum computing or algorithms, which is based on probability instead of discrete/continuous values. Lots of discovery to be done there I believe.

    • No, the fundamental operation with ε/δ is taking limits of convergent sequences, not working at a specific scale. Where the limits exist, useful things happen.

      I’ll have to think about zero vs. null

  2. Many scientific breakthroughs and creative combinations in the arts have come about by replacing or supplementing 0/1 thinking with a reframe as a continuum: Intro/Extroverts –> Ambiverts; fusion music; liberal arts flexible curriculum.

    In general a new theory or framework begins with discrete categorization and is later challenged or enriched with nuances of a scale or hybrid/combos.

    It seems difficult to conceive of progress in the reverse sequence where something is thought of as a spectrum and is later found to be just two distinct types.

    The initial 0/1 need not be “primitive” or limited or backward in any sense. It could be a conscious attempt to conceptualize for bringing clarity and theorizability.

    In general (or always?) tinier and lower levels reveal the range beyond 0/1 up to a point whereas bigger and higher levels facilitate simpler clubbing.

  3. Bewildering.

  4. Mattheus von Guttenberg says

    Great article. I’ve considered many of these ideas myself, though not in Venkat prose (which I appreciate). Here’s a new form of a Russell’s paradox for you:

    Is the distinction between natural and artificial natural or artificial?

    • That’s an easy one: artificial.
      Whether something was created by humans or not only matters to humans, not e.g. to the laws of physics.

      • If the distinction is artificial, then the artificial is natural, which means that the distinction is natural, which means that the artificial is not natural, and round and round we go.

        • Marc Hamann says

          Nope. Unless you agree with Gongsun Long that a white horse is not a horse.

          • Care to elaborate?

            If the distinction is natural, then it means both “natural” and “artificial” are natural categories, making artificial actually natural. If the distinction is artificial, the same holds true. Where is my thinking wrong?

          • Marc Hamann says

            The words don’t have the same meanings in the two cases. In one, artificial and natural are disjoint, in the other artificial is an arbitrary subset of natural. You can’t exchange them through negation, hence no recursion paradox.

  5. It’s fun to ponder your essay in relation to my field, chemistry. Non-reversible reactions are a collection of nudges of activation energy to new equilibria (asymmetry of delta and epsilon). Reversible reactions are even more interesting in challenging 0/1/n thinking…