Tarpits and Antiflocks

Computers used to be the size of buildings. Today my computer gets lost between the seat cushions. But two parts of the computer didn’t become a million times smaller and faster: the display and keyboard. They are the low-speed, power-hungry, monkey-compatible data ports. Our biology is holding us back.

Naturally, there are a hundred teams screwing around with every idea they can think of to connect directly to the brain. They are tickling neurons with magnets and brain monitors and even wires under the skull. They are training computers to pick up subvocal cues, theorizing about quantum tunneling, etc etc. Then they talk to journalists who put out breathless articles on “USB ports for your brain”.

The people who make fun of the dubious science in these projects are right. They also miss the point. Individually, each team’s approach is almost certainly wrong. But collectively they are doing the right thing. They are an antiflock, exploring a tarpit.

Proper science (ideally) operates like a flock of birds. Everyone maintains a steady distance from the ones around them, and reacts to changes in speed or direction. This generates a rough consensus about where every bird should be, and keeps the group together. This consensus is leaderless, yet coordinated and highly dynamic. It’s entrancing to see thousands of starlings boiling out of a meadow like a sentient thundercloud.

But flocks are not great for exploring large tracts of territory. For that you need an antiflock.

In antiflocks you try to get as far away as possible from everyone else in an idea space. And instead of doing experiments because you believe your theory will guide you to a better one, you pretend that your “neurons are wet transistors” theory is already correct. And you try to ship something.

If that sounds like magical thinking, that’s because it is. But in the absence of a solid theory, antiflocking is the right collective strategy. Antiflocks are not anti-science, as in the opposite of science. They are ante-science, as in what comes before to make science possible.

The results are only slightly better than random. People are playing with real neurons at the same level of understanding we had about electricity in the 1700s. Everyone had a different analogy for what electricity was, and everyone was wrong.

Some of those weirdos thought electricity was an invisible fluid, something like a thinner version of alcohol. Naturally they tried to catch it in jars. By pure luck they did most of the right things for all of the wrong reasons, and managed to store an electric charge. The Leyden jar brought everyone flocking to consensus around the (still comically wrong) analogies of the fluidicists.

Fluidic theory became a stationary target everyone could shoot at. From there, regular science kicked in. We think we understand electricity now, and it has nothing to do with fluids. But we still talk about how it “flows” and “leaks”, and measure its “current”. The comically wrong analogies served as scaffolding for the final answers. In a generation or so, when we have a good theory about the mind, it will bear no resemblance to anyone’s ideas right now. But we will probably still talk about “neural laces” or “firing synapses”.

So when you have a breathtaking goal and consensus around the angle of attack, that’s a moonshot. It’s a proper scientific venture with centralized focus, clear priorities, and all the heroic bureaucrat trimmings. Without that consensus, it’s a tarpit: a breathtaking goal with zero clue how to get there. It’s a distributed anarchic all-star game of whatever seems to work.

Obvious goals without obvious organized activity around them are intellectual tarpits. Ambitious people wander in and get stuck. They call their friends for help and their friends get stuck too. A really powerful one can just sit there bubbling away and taking out victims for millennia.

That is the usual outcome. But sometimes these vortexes of wish-fulfillment break out into one of three evolutionary paths:

  • Historical curio: This is when we go back and do something just because we can. Turning lead into gold was solved several ways as a by-blow of nuclear physics research, and once just for fun. My favorite tarpit made good is human-powered flight. People have been dreaming about it since forever. But sticking feathers up your butt does not make you a chicken. It was only after we thoroughly mastered engine-powered flight that we figured out how to bridge the thrust-lift-weight gap for humans. (And that there was one in the first place.) The result was the Gossamer Condor: a beautiful, inspiring, and entirely useless achievement.
  • Crack in the world: This is when random antiflock activity triggers advances in a bunch of different directions. Electricity is an obvious one. Another is anaesthesia for surgery. It came out of recreational drug use, of all things. Young people would score a jar of ether and get silly at private parties. A doctor (and the hookup for said ether) noticed that if you huffed too much you became insensible to pain. Going from there to cutting people open was a broad leap, even then.
  • Moonshot: suddenly being able to go to the moon after centuries of joking about it. Robert Goddard, that paranoid hack, managed to learn just enough about rocket mechanics to pique the interest of the Nazis. They refined the tech, added Tsiolkovsky’s ideas about liquid-fueled rockets, and then gave it right back to the US as spoils of war. And who could have predicted that?

Scientific Deniability

The lack of permission needed from proper science also means there is a lack of respect. Proper scientists can’t stop regular people from screwing around, so they use ante-science as a kind of expendable advance party. If something turns up, great. If not, who cares? Call it scientific deniability.

This forms a weird equilibrium of mutual contempt and co-dependence. Status and relative power determine whether you take green-ink letters seriously. And often the rejection itself becomes a badge of honor for the ante-science kook. Goddard and the US military famously talked past each other for decades. When he said “rockets for space,” the Army heard “jet-assisted takeoff,” and that was pretty much that until the day he died.

We live in an uncaring universe and there is no Guinness Book of Nice Tries. So that’s where things usually end up. But every once in a while, at the bottom of the tarpit, is a door to a new world.

That is why I pay attention to kooks and antiflocks. If you can explain precisely why something is impossible, you’re halfway to making it possible after all. And if nothing else, kooks are really good at pulling those explanations out of you. It’s very hard to tell the difference between something that’s impossible and something that’s just never been done. And since you’d have to be pretty stupid or desperate to try something impossible, it’s only the stupid or desperate who succeed.

Comments

  1. This is a version of the Feyerabend/Lakatos methodological anarchy thesis.

    I like to think in terms of a P/NP and game theory analogy.

    The thing is, regular science is an almost-dominant strategy almost all the time (Thomas Kuhn’s “Normal Science”) in a game-theoretic sense. In other words, it is the way to “win” no matter what the opposition (magical thinkers or nature) does.

    But during paradigm shifts, or your anti-flocking periods, methodological anarchy rules. It’s like a shift from a P to an NP regime, and the “nondeterministic polynomial” strategy is anarchic dabbling that is not so much unreasonable as indifferent to reasonable/unreasonable (i.e., it is also methodological bullshit).

    As somebody pointed out on Twitter recently, the metaphor of “17-dimensional chess” is illuminating, and is usually applied the wrong way. When dimensionality gets higher, the dominant strategy gets simpler, not because the problem gets easier (quite the reverse) but because it gets so complex that it is better to try random-ass shit than to try to think things through.

    The high-dimensionality regime is your tarpit. Your anti-flocking is in some ways a randomization strategy, where you prioritize covering the larger space over coordinating in a smaller subspace.

    There’s also a relation here to the curse of dimensionality. I forget the exact citation, but there is a formal result showing that in sufficiently high-dimensional spaces, randomized approaches outperform systematic ones.
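
    One illustrative sketch of the flavor of that result, assuming the usual random-search-versus-grid-search comparison is the kind of thing meant (the names below are hypothetical, just for the example): with a fixed budget of trials, a grid spreads the budget thinly across every axis, while random sampling probes each individual axis at nearly the full budget.

    ```python
    import random

    # Toy comparison (illustrative only): budget trials in dims dimensions.
    # A full grid affords about budget**(1/dims) distinct values per axis;
    # random sampling gives roughly budget distinct values per axis, so any
    # axis that actually matters gets probed far more finely.

    def grid_values_per_axis(budget, dims):
        """Points per axis of the largest full grid that fits in the budget."""
        return max(int(budget ** (1.0 / dims)), 1)

    def random_samples(budget, dims, seed=0):
        """Uniform random samples; each axis sees ~budget distinct values."""
        rng = random.Random(seed)
        return [[rng.random() for _ in range(dims)] for _ in range(budget)]

    budget, dims = 100, 10
    print(grid_values_per_axis(budget, dims))   # 1 distinct value per axis
    print(len(random_samples(budget, dims)))    # 100 samples, ~100 values per axis
    ```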

    One of the things that interests me is whether there is such a thing as structured pseudorandomness, where your strategy is systematic but not in a normal-science way. For example, alchemy as you pointed out. Here, it’s not so much anti-flocking in the sense of spreading over a larger space, but flocking in a non-normal corner of the space defined by the alt methodology.

    • That’s a good way to put it. Anti-flocking is a semi-directed “slightly better than random” strategy that only works when there is no consensus around the angle of attack.

      Mathematicians have a sheaf of open problems in their back pockets that have no angle of attack, so they don’t work on them. It’s those problems that kooks target with their green-ink manuscripts.

  2. Dave Foster says

    Venkat,

    I think your structured pseudorandomness is sensitivity and variable-correlation analysis, because the parameters of a system, even a poorly understood high-dimensional system, are not randomly associated. So some random approaches will be more useful than others. Randomishness.

  3. Reminds me of this: https://www.edge.org/conversation/nassim_nicholas_taleb-understanding-is-a-poor-substitute-for-convexity-antifragility

    Think what you may about the guy, but he produces some interesting thoughts…

  4. Jack Williamson says

    What about using biofeedback techniques to teach poor hapless experimental subjects to transmit Morse code by stimulating electrosensitive patches, or using minor muscular movements? Dot, dash, space…. Slow as molasses in January though.