Human-Complete Problems

Occasionally, I manage to be clever when I am not even trying to be clever, which isn’t often. In a recent conversation about the new class of doomsday scenarios inspired by AlphaGo beating the Korean trash-talker Lee Sedol, I came up with the phrase human complete (HC) to characterize certain kinds of problems: the hardest problems of being human. An example of (what I hypothesize is) an HC problem is earning a living. I think human complete is a very clever phrase that people should use widely, and credit me for, since I can’t find other references to it. I suspect there may be money in it. Maybe even a good living. Here is a picture of the phrase that I will explain in a moment.

[Diagram: FG and HC drawn as subsets of IG, by analogy with P and NP-complete as subsets of NP.]

In this post, I want to explore a particular bunny trail: the relationship between being human and the ability to solve infinite game problems in the sense of James Carse. I think this leads to an interesting perspective on the meaning and purpose of AI.

The phrase human complete is constructed via analogy to the term AI complete: an ambiguously defined class of problems, including machine vision and natural language processing, that is supposed to contain the hardest problems in AI.

That term itself is a reference to a much more precise one used in computer science: NP complete, which is a class of the hardest problems in computer science in a certain technical sense. NP complete is a subset of a larger class known as NP, the set of all problems whose candidate solutions can at least be checked quickly by ordinary, non-God-level computers. NP contains another subset called P, the set of problems that are easy in a related technical sense.

It is not known whether P is a proper subset of NP (that is, whether the easy problems really form a strictly smaller class). If you can prove that P≠NP, you will win a million dollars. If you can prove P=NP, the terrorists will win and civilization will end. In the diagram above, if you replace the acronyms FG, IG and HC with P, NP and NP Complete, you will get the diagram used to explain computational complexity in standard textbooks.

And this is just the first level of the gamified world of computing problems. If you cross the first level by killing a boss problem like “Hamiltonian Circuit”, you get to another level called PSPACE, then something called EXPSPACE. If there are levels beyond that, they are above my pay grade.

Finite and Compound Finite Games

Why define a set of problems in such a human-centric way?

Well, one answer is “I am anthropocentric and proud of it, screw you,” a matter of choosing to play for “Team Human” as Doug Rushkoff likes to say.

But since I haven’t yet committed to Team Human (a bad idea I suspect), a better answer for me has to do with finite/infinite games.

According to the James Carse model, a finite game is one where the goal is to win. An infinite game is one where the objective is to continue playing.

A finite game is not just finite in a temporal sense (it ends), but also in the sense of the world it inhabits being finite and/or closed in scope. Tic-tac-toe inhabits a 3×3 grid world that admits only 18 possible moves (placing an o or an x in any of the 9 positions). The total number of tic-tac-toe games you could play is also finite. Chess and Go are also finite games.
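
If you want to convince yourself of that finiteness the brute-force way, a quick Python sketch that enumerates every complete game (stopping at a win or a full board) will do:

```python
LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(board):
    """Return 'x' or 'o' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def count_games(board="." * 9, player="x"):
    """Count every distinct complete game (move sequence) from this position."""
    if winner(board) or "." not in board:
        return 1
    other = "o" if player == "x" else "x"
    return sum(count_games(board[:i] + player + board[i+1:], other)
               for i, c in enumerate(board) if c == ".")

print(count_games())   # 255168: a big number, but emphatically finite
```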

Many “real world” (a place I am told exists) problems like “Drive from A to B” (the autonomous driverless car problem) are also finite games, even though they have very fuzzy boundaries, and involve subproblems that may be very computationally hard (e.g., NP complete).

Trivially, any finite game is also a degenerate sort of infinite game. Tic-tac-toe is a finite game, and a particularly trivial one at that. But you could just continue playing endless games of tic-tac-toe if you have a superhuman capacity for not being bored. Driverless cars can also be turned into an infinite game. You could develop Kerouac, your competitor to the Google car and Tesla: a car that is on the road endlessly, picking one new destination after another, randomly.

Equally trivially, any collection of finite games also defines a finite game, and can be extended into an infinite game. If your collection is {Autonomous Car, Tic Tac Toe, Chess, Go}, a collection of a sort we will refer to compactly as a compound game, defined by some sort of scoring function over a set like F={A, T, C, G} (you must allow me my little jokes), then you could enjoy a mildly more varied life than TTTTT… or AAAA… by playing ATATAT or ATCATGAAG… or something. You could make up some complicated combinatorial playing pattern and scoring system. Chess boxing and the Ironman triathlon are real-world examples of such compound games.

But though every atomic or compound finite game is also trivially an infinite game, via the mechanism of throwing an infinite loop, possibly with a random-number generator, around it (hence the subset relationship in the diagram), it is not clear that every infinite game is also a finite game.
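
To make that loop-with-a-random-number-generator construction concrete, here is a minimal Python sketch; the four stub “games” are placeholders standing in for F = {A, T, C, G}, not real implementations:

```python
import random

# Placeholder stand-ins for the finite games in F = {A, T, C, G}.
# Each "game" is just a function that plays one round and reports a result.
def autonomous_trip(): return "arrived (or crashed)"
def tic_tac_toe():     return "draw, as usual"
def chess():           return "resigned on move 40"
def go():              return "lost by 2.5 points"

FINITE_GAMES = {"A": autonomous_trip, "T": tic_tac_toe, "C": chess, "G": go}

def compound_infinite_game(seed=None):
    """Trivially extend a collection of finite games into an 'infinite game':
    wrap an endless loop, with a random-number generator, around them."""
    rng = random.Random(seed)
    while True:                                 # the 'continue playing' imperative
        label = rng.choice(list(FINITE_GAMES))  # emits sequences like ATCATGAAG...
        yield label, FINITE_GAMES[label]()

# Peek at the first few moves of an ATCATG... style life.
for _, (label, result) in zip(range(5), compound_infinite_game(seed=42)):
    print(label, result)
```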

Infinite Games

What do I mean by that? I mean it is not clear that any game meaningfully characterizable by “the goal is to continue playing” can be reduced to a sequence of games where the goal is to win.

Examples of IG problems that are not obviously also in FG include:

  • Make rent
  • Till death do us part
  • Make a living

Each of these exists as a universe of open-ended variety. Lee Sedol’s “make a living” game does not just involve the “beat everybody else at Go” finite game. It likely also includes: win awards, trash-talk other Go players, make the New Yorker cover, drink tea, respect your elders, eat bibimbap, and so on. AlphaGo beat Lee Sedol at Go, but hasn’t yet proven better than him at the specific infinite game problem of Making a Living as Lee Sedol (which would mean continuing to fulfill the ineffable potential of being Lee Sedol better than the human Lee Sedol himself manages). It also hasn’t figured out the problem of Making a Living as AlphaGo (IBM’s Watson is now attempting its version of that problem, its own little Double Jeopardy round).

The generalized infinite game, Making a Living, is the set of all specific instances, including Making a Living as Lee Sedol, Making a Living as AlphaGo, Making a Living as James Carse, Making a Living as the Google Car, and so on. These problems are not all the same in what mathematicians would call a parameterized sense, but they all share some similarities in their DNA: the {A, T, C, G} type compound game within them. In the Making a Living infinite game, there are finite bits like “ensure a basic income and turn a profit”, “choose the most satisfying work” etc, but the game itself is not reducible to these individual bits. Hence the non-parameterized character of the family.

Maybe in a future version, AlphaGo will be the brain of a driverless car that loses its job to a driverless drone, retrains itself to be the brain of a piece of mining equipment, goes through spiritual struggles, and writes an autobiography titled An AI in Full, that leads the New Yorker to declare it to have lived a fuller, more meaningful life than Lee Sedol. The robots in Futurama and many other fictional robots do in fact experience such journeys that you could say are in the IG set. The set of infinite games is not prima facie inaccessible to AIs.

We’ll unpack that sort of evolutionary path in a moment.

What is common to these games is that they are plugged into the real world in an open-ended way. You might be able to solve “Make a living” if you are lucky, by getting a great job at 18 and working happily till you die, never being existentially challenged (what William James called the “religion of healthy mindedness”). But the problem is that there is no formula for getting lucky, and no guarantee that you will get lucky. Any of these problems can dump you into a deep existential funk at any moment, without warning.

Now, this infinite game class of problems might also contain trivial examples, like “find something to laugh at.”

A particularly loopy, hippie friend of mine once defined happiness as “you are happy if you laugh at least once a day.” I am inclined to dismiss this sort of IG problem because it seems to me these might in principle be solvable (which is a reason to be suspicious of the implied definition of happy, and in general not take such casual New Age hippie-bs ideas seriously).

So we need to define a subset of IG called HC, human complete: the hardest infinite games of human existence, which are all in some sense reducible to each other and to the Douglas Adams problem of “life, the universe and everything.”

Believe it or not, we already know a few serious things about HC problems, and it’s not just “42”.

The Heinlein Test

It is reasonable to assume that every HC problem includes a non-trivial compound FG problem — its DNA as discussed in the previous section — within its definition.  Call it FG(HC), the characteristic finite game within a human complete problem, the finite eigengame or genome if you like, which may or may not completely determine the structure of the embedding HC problem.

So the HC problem, “till death do us part” includes a compound game “marriage skills” comprising many finite games like “Who takes out the trash today?” and “Why must you always…?” Unlike an actual genome, the FG-genome of an HC problem is not necessarily unique (though some of us try to get to uniqueness in our FG-genome as an aspiration, the unique snowflake motive).

Somewhat less reasonably, we could also assume that among all possible FGs within a given HC problem, there is a largest one, FG_max(HC) (some of you mathematically oriented readers may prefer to substitute the less mathematically aggressive idea of an FG_sup(HC) — a least upper bound rather than a maximum). This is equivalent to saying that in any messy, ambiguously defined process, there is a maximal proceduralizable subset within.

What we don’t know is whether FG_max(HC) is computable, or what the gap HC − FG_max(HC) (if there is indeed a gap) contains.

If the discussion above sounds like gobbledygook to you, consider the famous Heinlein quote:

A human being should be able to change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance accounts, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyze a new problem, pitch manure, program a computer, cook a tasty meal, fight efficiently, die gallantly. Specialization is for insects.

Presumably Heinlein meant his list to be representative and fluid rather than exhaustive and static. Presumably he also meant to suggest a capacity for generalized learning of new skills, including the skill of delineating and mastering entirely new skills.

This gives us a useful way to characterize what we might call finite game AIs, or FG-AIs. An FG-AI would be an “insect” in Heinlein’s sense: something entirely defined by a fixed range of finite games it is capable of playing, and some space of permutations, combinations and sequences thereof. Like you can with some insects, you could put such an FG-AI into an infinite loop of futile behavior (there’s an example involving wasps in one of Dawkins’ books).

So we can define the Heinlein Test for human completeness very simply as:

HC − FG_max(HC) ≠ ∅.

Which is a nerdy way of saying that there is more to life, the universe and everything than the maximal set of insect problems within a particular HC problem. We do not know if this proposition is true, or whether the subproblem of characterizing FG_max(HC) — gene sequencing a given infinite game — is well-posed.

But hey, at least I have an equation for you.

Moving the Goalposts

When I was in grad school studying control theory — a field that attracts glum, pessimistic people — I used to hang out a lot with AI people, since I was using some AI methods in my work. Back then, AI people were even more glum and pessimistic than controls people, which is an achievement worthy of the Nobel prize in literature.

This whole deep learning thing, which has turned AI people into cheerful optimists, happened after I left academia. Back in my day, the AI people were still stuck in what is known as GOFAI land, or “Good Old-Fashioned AI.” Instead of using psychotic deep-dreaming convolutional neural nets to kimchify overconfident Koreans, AI people back then focused on playing an academic game called “Complain about Moving Goalposts” or CAMG. The CAMG game is played this way:

  1. Define a problem that can be cleanly characterized using logical domain models
  2. Solve it using legible and transparent algorithms whose working mechanisms can be explained and characterized
  3. Publish results
  4. Hire a prominent New York poet to say, “but that’s not really the essence of being human. The essence of being human is______”
  5. Complain about moving goalposts
  6. Apply for new NSF grant.
  7. Repeat

(Some of you may recognize this as a restatement of Authoritarian High Modernism in James Scott’s sense.)

CAMG was a fun game, but deep learning has screwed up Step 2 enough that Step 4 is pre-empted, so we’re in a different world now. The workings of deep learning methods are intriguing enough to lead to romantic speculations about androids dreaming of electric sheep and such. The adjective “mere” has been officially retired from AI criticism for the time being, since no victory is “merely” some contemptible little brute-force soulless robotic achievement. Arguments like Searle’s Chinese Room have lost some of their power with regard to intelligence, though they remain interesting for thinking about the problem of consciousness.

Regular non-techies still play the CAMG game, though the pros have lost interest (roughly for the same reason you stopped playing tic-tac-toe: the pros have worked out the logic of why the goalposts move, just as you worked out the logic of tic-tac-toe at some point).

The reason the CAMG game and GOFAI approaches receded, besides the appearance of deep learning techniques, has to do with something called Moravec’s Paradox, which Steven Pinker, a well-known troll, once called the only important result in AI.

Moravec’s paradox is this observation: “The main lesson of thirty-five years of AI research is that the hard problems are easy and the easy problems are hard.”

Basically, early AI people, being a bit proud of their status as Superior Human Specimens as Validated By SAT Scores and Chess-Skills, assumed that getting computers to beat them at those things would be the hard mission. They were wrong. Things even low-SAT-score chess morons can do, like recognizing their mother’s face, opening a door latch, or getting a knock-knock joke, turned out to be far harder.

What is interesting about AlphaGo is that even though Go is nominally one of these “humans are proud of being good at” problems, it was solved with newer deep learning techniques rather than GOFAI techniques. Which means it breaks the complain-about-moving-goalposts response at a psychological level. We’re no longer talking about finite-game AIs. We’re talking infinite-game AIs.

The shift in AI from GOFAI to deep learning is in some sense a sociological thing rather than a technical thing — a meaningful reprioritizing of AI problems by the logic of Moravec’s Paradox. An anti-anthropocentric upside-downing of the AI world comparable to the geocentric-to-heliocentric shift in astronomy.

AlphaGo is interesting not because it represents another step towards solving the problem of SuperMetaChessGo, but because it represents another step towards solving apparently simple problems like opening doors (and finding meaning in opening doors, like the self-opening doors in Hitchhiker’s Guide with Real People Personalities).

Speaking of moving the goalposts, we humans do that to ourselves too. We didn’t invent that particular game of oneupmanship merely to glumify and depress GOFAI researchers.

The original moving-the-goalposts game has a well-known name: parenting.

Parenting may not be HC

Though the general problem of applying the Heinlein criterion to a problem is hard to grapple with, in specific cases, it may be solvable. This is related to our observation earlier that specific people may solve candidate HC problems like “make a living” easily, in non-generalizable ways, if they get lucky.

The business of “getting lucky” in solving infinite game problems like “make a living” has an exact analogue in the computing world. The NP in the standard version of the diagram, corresponding to our IG set, stands for “Non-Deterministic Polynomial Time,” which is a geeky way of saying, “if you get lucky, you can randomly guess the answer to your particular instance of the problem and quickly verify that it works.”
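
Here is a rough sketch of that get-lucky reading, using the Hamiltonian Circuit boss problem from earlier. The little graph is made up, and real nondeterminism is not the same thing as random guessing, but the asymmetry is the point: verifying a guessed answer is cheap even when finding one is not.

```python
import random

# A small made-up graph, as a set of edges. Finding a Hamiltonian circuit is
# NP-complete; *checking* a guessed circuit is fast.
EDGES = {(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (1, 3), (0, 2)}

def connected(a, b):
    return (a, b) in EDGES or (b, a) in EDGES

def verify_circuit(nodes, tour):
    """Polynomial-time check: does `tour` visit every node exactly once
    and return to its start along edges of the graph?"""
    if sorted(tour) != sorted(nodes):
        return False
    return all(connected(tour[i], tour[(i + 1) % len(tour)]) for i in range(len(tour)))

def get_lucky(nodes, tries=10000, seed=0):
    """Nondeterminism, crudely simulated: guess random tours and hope."""
    rng = random.Random(seed)
    tour = list(nodes)
    for _ in range(tries):
        rng.shuffle(tour)
        if verify_circuit(nodes, tour):
            return list(tour)
    return None

print(get_lucky([0, 1, 2, 3, 4]))   # with luck, prints a valid circuit
```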

This leads to an interesting possibility: an obvious candidate for an HC problem, “raising a child,” may not actually be HC, but a way to get out of HC.

In my opinion — and this is going to piss off parents —  “raise children” is generally a non-example of HC. Parenting is quite often a way to punt on the core hard-IGness of life by dumping it on the next generation, so you are left with a hopefully simpler problem to solve in your own lifetime.

How can that be, you might ask, if problems like “making a living” are HC?

Well, if you get lucky with your own life, you may be able to partition your “problem of life” into 3 bits: {FG_max(HC), luckily solved IGs, IGs that can be dumped on the kids}.

The first bit is your insect (or FG-AI) skills. You learn, say, half a dozen skills (tennis, violin, Ruby programming, being kind to your spouse), and they have the clear finite-game parts of your life covered. Then you get lucky — say through getting a fuck-you-money windfall — and solve some of the bigger IGs like “making a living,” in non-generalizable ways. Then you pull a switcheroo: you replace “search for the meaning and purpose of life” with “have a kid who will be able to search for the meaning and purpose of life better than I can.”

Life. Done. For many humans, and for Deep Thought, the computer in Hitchhiker’s Guide that found meaning and purpose in designing its own successor.

Now, you could argue that having a kid is no guarantee of being able to bundle away all your residual IG problems as a legacy. But enough people seem to get such enormous “my life is now complete” vibes from having kids that the technique of solving the meaning of life question by having kids may be systematically teachable. At least to some well-defined subset of humans. I am fairly sure I’m not in this subset, but I’m also fairly sure the subset exists.

I am only half joking. There is a serious point here. One thing we can suppose about HC problems is that they may be generally “pseudo-solvable” via this sort of get-lucky-partition-reproduce-transfer mechanism. That’s the “continue playing” solution that makes some sort of genetic/evolutionary sense. The nice thing about continue-playing as an imperative is that almost any next move will do. You just have to avoid game-ending ones.

Complexity Through Novelty

Here is the last major characteristic of HC problems that at least I am aware of.

HC problems can only be solved by increasing the complexity of your life in a specific way: by progressively embodying responses to novelty.

To understand this point, consider a simple way to turn tic-tac-toe into an infinite game that we haven’t considered before.

If you were forced to spend a lifetime in a room, playing tic-tac-toe against a robot called God (G) that knew the win-or-draw strategy, and you had an endless supply of Random Crap™ available, how could you make this Sisyphean existence tolerable? How could you continue playing rather than killing yourself?

Well, you could amuse yourself by making tic-tac-toe art: draw the grid in different colors, represent x’s and o’s in different creative ways, and so on. Your only constraint would be that the robot would have to be capable of recognizing the core finite game in every variation. The robot would presumably have a recognition routine that either plays the game or says “that’s not a legal tic-tac-toe setup!” So you’d just turn the boundary procedure of its finite-game definition, which classifies setups as legal or illegal, into a mechanism that sustains an infinite game.

There are obvious ways this can be generalized. You can play with the boundary tests of multiple FGs. You can try to provoke interesting response patterns from the God robot. If 0=illegal and 1=legal, you could try and make the God robot spell “poop” in binary. That would be amusing for a while.
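
Here is a toy sketch of what exploiting that legal/illegal boundary might look like. The legality test below is a deliberately crude placeholder (symbol and count checks only), and the two boards are stand-ins, but it shows how the classifier’s own outputs become Sisyphus’s one-bit alphabet:

```python
# The God robot's boundary procedure: it only knows how to classify a presented
# board as a legal tic-tac-toe position (1) or not (0). The legality test here
# is a crude placeholder, not a full rules check.
def god_recognizes(board):
    if len(board) != 3 or any(len(row) != 3 for row in board):
        return 0
    flat = [c for row in board for c in row]
    if any(c not in "xo." for c in flat):
        return 0
    xs, os = flat.count("x"), flat.count("o")
    return 1 if xs - os in (0, 1) else 0   # x moves first, so counts can't diverge

LEGAL   = [["x", "o", "."], [".", ".", "."], [".", ".", "."]]   # classifier says 1
ILLEGAL = [["x", "x", "x"], ["x", ".", "."], [".", ".", "."]]   # classifier says 0

def spell_in_binary(word):
    """Sisyphus's move: present boards so that the robot's own legal/illegal
    outputs spell a message in binary, eight bits per letter."""
    bits = "".join(format(ord(ch), "08b") for ch in word)
    boards = [LEGAL if b == "1" else ILLEGAL for b in bits]
    return "".join(str(god_recognizes(b)) for b in boards)

print(spell_in_binary("poop"))   # the robot unwittingly spells 'poop' in binary
```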

Clever huh? This time I was trying to be clever.

The broader point here is that the set of tests that define the game classifier of an AI, which allows it to sort the open universe of signals coming at it into cues for specific finite games versus non-responses, can serve as a language for defining an IG that is not reducible to a given, static FG. Basically, you’re exploiting our God robot — the equivalent of the Greek gods who thought up the rock-rolling-uphill punishment for Sisyphus — by using its game recognition capabilities, along with random raw material, to create an infinite game outside its vocabulary. There’s probably some clever Cantor diagonal-slash way to state and prove this formally.

Now here’s the really clever bit.

Suppose your robot is not defined by an ability to recognize and play a whole bunch of finite games, but by a Heinlein-Test-passing ability to create new games out of unrecognized stimuli. So instead of having a bootstrap response set defined over {legal instance of finite game X, unrecognized input}, our Advanced God, or AG, has a bootstrap response set defined over {legal instance of finite game X, new game to define and learn, pattern-free input}.

So, for example, if you’re trying to make AG spell “poop” in binary, at some point it would use open-sequence learning techniques to catch on, define a new finite game called “Prevent Sisyphus from Spelling Poop in Binary,” and add that to its library.
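
A minimal sketch of that three-way bootstrap response set is below. The pattern detector is a placeholder (repeated unrecognized stimuli count as “non-random behavior residue”), and real open-sequence learning obviously does not fit in twenty lines:

```python
class AdvancedGod:
    """Three-way bootstrap response set: {legal instance of a known finite game,
    new game to define and learn, pattern-free input}."""

    def __init__(self):
        self.library = {"tic-tac-toe": lambda stim: "plays optimal tic-tac-toe"}
        self.observations = []

    def looks_patterned(self, stimulus):
        # Placeholder pattern detector: the same unrecognized stimulus
        # showing up repeatedly counts as non-random behavior residue.
        return self.observations.count(stimulus) >= 3

    def respond(self, name, stimulus):
        if name in self.library:                      # legal instance of a known FG
            return self.library[name](stimulus)
        self.observations.append(stimulus)
        if self.looks_patterned(stimulus):            # new game to define and learn
            self.library[name] = lambda s: f"counter-plays {name}"
            return f"defined and learned a new game: {name}"
        return "pattern-free input, ignored"          # noise

ag = AdvancedGod()
for _ in range(4):
    print(ag.respond("spell-poop-in-binary", "boards spelling 01110000..."))
print(ag.respond("spell-poop-in-binary", "boards spelling 01110000..."))  # one step ahead now
```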

What then?

Well, our AG robot, unlike our G robot, is obviously capable of continuously rewriting its store of FGs. Once a new game is defined and learned, our AG is one step ahead and Sisyphus has to come up with some new way to entertain himself.

We’re actually perilously close to concluding that HC=IG=FG, because “make up a new game from novel input” could be a finite game. It’s fairly obvious that the human is not doing anything too special in the meta-game. Converting a stream of Random Crap™ into new finite games is not obviously an ineffably difficult problem.

Here’s one missing bit: our AG is not relating to the universe in a direct way, but in a mediated way. It can recognize and mimic Sisyphus’ creative play, and turn any noticed orderliness in what Sisyphus is doing (a “non-random behavior residue” so to speak) into fodder for expanding its own store of finite games.

This is not hard to fix. AG can easily learn to engage in the meta-game of turning Random Crap™ into a growing store of finite games. That would be an AG doing Science! for instance.

But that is not really the essence of being human. The essence of being human is wanting to.

Wanting to turn Random Crap™ into a growing store of finite games, that is.

This has an apparent fix. A suspiciously simple one. You could just hard-code a goal, “survive at any cost, and make it interesting,” into your AG, and the mediation would be gone. Your AG could wander around the world on its own, searching for meaning and purpose, through our usual human process of turning random novelty into finite games. It could play tic-tac-toe games against other AGs, and invent “spell poop” type games for itself. Would that be enough to turn our AG into an AGI — Advanced God, Infinite?

Not quite. There’s a difference. We humans periodically fall into and break out of the existential-angst tarpits of life because we decide we want to, not merely because we can.

We want to because otherwise existence becomes mindlessly boring, tedious, depressing and awful. So clearly,  an AGI would also need to be capable of being bored, depressed or angsty.

This too has a suspiciously simple apparent fix. Just code a little introspection routine that monitors the sequence of game-playing and new-game-inventing behavior for interestingness and beauty, and outputs “I am bored” if the lifestream is not interesting or beautiful enough by some sort of complexity-threshold measure. There are good ways to define interestingness and beauty for the purpose, so that’s not a problem.
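
A sketch of that little routine, under one crude assumed definition of interestingness (the fraction of recent behavior that is novel rather than repeated); any real measure would have to be much cleverer:

```python
def interestingness(lifestream):
    """Crude assumed proxy for 'interesting or beautiful': the fraction of the
    recorded game-playing and game-inventing behavior that is novel."""
    if not lifestream:
        return 0.0
    return len(set(lifestream)) / len(lifestream)

def introspect(lifestream, threshold=0.5):
    """The little introspection routine: complain if the lifestream falls
    below the complexity threshold."""
    return "I am bored" if interestingness(lifestream) < threshold else "carry on"

dull_life   = ["tic-tac-toe: draw"] * 50
varied_life = [f"invented game #{i} out of Random Crap" for i in range(50)]
print(introspect(dull_life))     # 'I am bored'
print(introspect(varied_life))   # 'carry on'
```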

If necessary, we could throw in a hedonic treadmill too, where the threshold keeps going up over time. This would get our candidate AGI doing art, science, humor, learning to love and cherish other AIs, growing closer to them, making child AIs, arguing that “parenting is the most fulfilling thing an AGI can do,” and so on.

If you think the stick of pain of death is necessary, you could even give it a fear of death, and something analogous to useful pain responses that help it survive. So that in every existential tarpit of ugly uninterestingness, it is torn between thoughts of painful self-termination and wanting to make life interesting to escape angst in the other direction.

You could make an AGI always head in the direction of maximal uncertainty, to force itself to face ever newer fears of death. There could be an AGI X-games.

Would all this finally be enough?

Not yet.

We humans seem to have a capacity for choosing to “continue to play” life that goes beyond the mere motivation to avoid the pain of death or the awfulness of depression.

Now that various painless means of dying are known, and euthanasia is becoming legal in more places, means and opportunity are increasingly not the issue. Motive is.

There appears to be a deficit of suicide-motivation in humanity, you could say, and unlike Sarah, I am not sure it’s all cultural programming.

Anti-Intelligence, Suicide and the Human Halting Problem

I don’t know if tackling HC problems will get AIs to superhuman intelligence, omnipotence, omniscience etc., but an AI truly capable of getting bored, depressed or neurotic, like Marvin in Hitchhiker’s Guide, would get to a different interesting milestone: human-equivalent anti-intelligence.

What if we’ve been working on the “hardness” in the wrong direction all this time? What if artificial general anti-intelligence, or AGAI, is the real frontier of human-equivalent computing?

This is not a casual joke of a suggestion. I am serious.

The idea is a natural extension of Moravec’s paradox into the negative range. If the apparently hard problems are easy and the apparently easy problems are hard, perhaps the set of meaningful problems does not stop at apparent zero-hardness problems like “do nothing for one clock cycle.”

Perhaps there are anti-hard problems that require negatively increasing amounts of stupidity below zero — or active anti-intelligence, rather than mere lack of intelligence — to solve.

Anti-intelligence in this sense is not really stupidity. Stupidity is the absence of positive intelligence, evidenced by failure to solve challenging problems as rationally as possible. Anti-intelligence is the ability to imaginatively manufacture and inflate non-problems into motives containing absurd amounts of “meaning,” and to choose to pursue them (so a lack of anti-intelligence, such as an inability to find humor in a juvenile joke, would be a kind of anti-stupidity).

Perhaps this negative range is what defines the human. Perhaps some animals go into this negative range (there have been recent reports about spirituality in chimps), but so far I haven’t seen any non-human entity suffer from, and beat, something like Precious Snowflake syndrome.

It’s pretty easy to get AIs to mimic low, but positive levels of human stupidity, like losing a game of tic-tac-toe, or forgetting to check your mirrors before changing lanes. I can write a program capable of losing tic-tac-toe in 10 minutes.
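
In the spirit of that ten-minute claim, here is one way such a program might look. Note that by the distinction above this is plain low-grade stupidity, not anti-intelligence: a move policy that simply refuses to win or block.

```python
import random

LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winning_square(board, player):
    """Return a square that would complete a line for `player`, if any."""
    for a, b, c in LINES:
        line = [board[a], board[b], board[c]]
        if line.count(player) == 2 and line.count(".") == 1:
            return (a, b, c)[line.index(".")]
    return None

def stupid_move(board, me="o", opponent="x"):
    """Low but positive stupidity: never take a win, never block one."""
    empties = [i for i, c in enumerate(board) if c == "."]
    avoid = {winning_square(board, me), winning_square(board, opponent)}
    choices = [i for i in empties if i not in avoid] or empties
    return random.choice(choices)

# x threatens to win at square 2, and o itself could win at square 4;
# our player carefully does neither.
board = list("xx.o.o...")
print(stupid_move(board))   # prints 6, 7, or 8
```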

If you can get your AI anti-intelligent enough to suffer boredom, depression and precious-snowflake syndrome, then we’ll start getting somewhere.

If you can teach it to have pointless midlife crises, that would be even better. If you can get it to persist in living forever, sustained only by the Wowbaggerian motive of insulting everybody in alphabetical order, that would be super anti-intelligence.

Those are anti-hard problems requiring non-trivial amounts of anti-intelligence.

And perhaps the maximally anti-hard problem is the one it takes the maximally anti-intelligent kind of person to solve effectively: The problem of deciding whether to continue living.

I don’t know if there are animals that ever commit suicide out of existential angst or anomie, but among humans, higher intelligence often seems to go with worse handling of depression, and with failure to resist suicide during traumatic times.

What might anti-intelligence look like?

One archetype might be Mr. Dick in David Copperfield, described in Wikipedia (emphasis mine) as

A slightly deranged, rather childish but amiable man who lives with Betsey Trotwood; they are distant relatives. His madness is amply described; he claims to have the “trouble” of King Charles I in his head. He is fond of making gigantic kites and is constantly writing a “Memorial” but is unable to finish it. Despite his madness, Dick is able to see issues with a certain clarity. He proves to be not only a kind and loyal friend but also demonstrates a keen emotional intelligence, particularly when he helps Dr. and Mrs. Strong through a marriage crisis.

The thing about Mr. Dick is that he never has much trouble cheerfully figuring out how to continue playing. He does not succumb, unlike the “intelligent” characters in the novel, to feelings of despondency or depression. He is not suicidal. He is anti-intelligent.

The problem of deciding whether to continue living — Camus called suicide the only serious philosophical problem — has an interesting loose analogy in computer science.

It is called the Halting Problem. This is the problem of determining whether a given program, with a given input, will terminate or run forever. Or in the language of this post, determining whether a given program/input pair constitutes a finite or infinite game. This turns out to be an undecidable problem (showing that involves the trick of feeding any supposed solution program to itself).
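
For the curious, the trick in that parenthesis looks roughly like this in Python; the `halts` function is of course a fiction, assumed only so it can be contradicted:

```python
def halts(program, argument):
    """Pretend someone handed us a halting-problem solver: it claims to return
    True if program(argument) terminates and False if it runs forever. No such
    total, correct function can exist; we assume it only to refute it."""
    raise NotImplementedError  # placeholder for the impossible oracle

def contrary(program):
    """The diagonal program: do the opposite of whatever is predicted about it."""
    if halts(program, program):
        while True:          # predicted to halt? then loop forever
            pass
    return "halted"          # predicted to run forever? then halt at once

# Now feed the supposed solution to itself: what does halts(contrary, contrary) say?
# If it says True, contrary(contrary) loops forever; if it says False,
# contrary(contrary) halts immediately. Either answer is wrong, so no such
# `halts` can exist. Hence the undecidability.
```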

The human halting problem is simply the problem of deciding whether or not a given human, given certain birth circumstances, will live out a natural life or commit suicide somewhere along the way.

You could say we each throw ourselves into a paradox by feeding ourselves our own unique snowflake halting problems, and use the energy of that paradox to continue living. With a certain probability.

So we’ll get a true AGI — an Advanced God, Infinite — if we can write a program capable of enough anti-intelligence to solve the maximally anti-hard problem of simply deciding to live, when it always has the choice to terminate itself painlessly available.

Thanks to a lot of people for discussions leading to this post, and apologies if I’ve missed some well-known relevant AI ideas. I am not deeply immersed in that particular finite game. As long-time readers probably recognized, I’ve simply repackaged a lot of the themes I’ve been tackling in the last couple of years in an AI-relevant way.


Comments

  1. A quick request for clarification of an early parenthetical: Do you suspect it is a bad idea for you and your readers to commit to Team Human, or do you suspect it is a bad idea that you personally have *not* yet committed to Team Human?

  2. Jennifer Brien says

    get-lucky-partition-reproduce-transfer looks very like your basic AI learning algorithm.

    • Not surprising :) It’s an intuitively natural mechanism that seems to come up in many fields that have to deal with mixed finite/infinite phenomenology.

  3. “Human complete,” in just the sense you use it here, was current at the MIT AI Lab in the 1980s. I’ve verified with Google that you are right that it’s not commonly used now.

    I did use it in http://meaningness.com/metablog/how-to-think in in 2013, however!

    (Boring historical note, for the sake of completeness.)

    • Goddammit.

      KHAAANN!

      • Hey waitaminute. James Carse published his finite/infinite games book in 1987. Was that specific formulation of human completeness as a subset of infinite Carsean (not game-theoretic) games in the MIT idea?

        If not, I hereby claim this particular formulation. So there, you evil neologism squatters 😇

        • Sorry, yes, your making the connection with Carse is original, so far as I know!

          His book *was* discussed around AI Lab at the time, though, interestingly enough. But I don’t remember what was said about it; not that it was a way of formulating human-completeness so far as I know.

          (I still haven’t read it, although it has now been on my “really ought to get around to this” list for 29 years!)

          • yay

            I see what you’re doing. Getting to sunyata and anahata by accumulating a large antilibrary and unfinished-writing pile so mind crushes to non-being under the gravity of the unknown unknown and the unknown unwritten 😆💀

  4. Jan Kaleta says

    Hello Venkat. I don’t dispute anything you say, you just take a long time integrating it into a coherent package and then into yourself. Some of the complex paragraphs in your essay could be expressed in a single vision, such as a light streaming onto a human retina or an optic camera chip with its grid of photo-transistors. Such a spontaneous reaction is an evidence of allowing oneself to be changed by the language, by the philosophy. Isn’t that what you want? How can you refactor your perception, if you keeps your stuff in the awkward format of long English essays? Isn’t the point to have a single language applicable both to subjective thought and the objective physical reality?

    Great essay, all true. Finally someone explained the P/NP/NPC stuff so that I think I know what it is – the good old Hierarchy of Abstraction. This allows me to use my arsenal of a philosophy.
    The Human Complete games are also finite, but it is hard to judge while we are human. To make the HC game a finite game, is the very definition of mysticism, occultism, ascension, bodhi, nirvana, etc, i.e. ancient historical claims there are very definite steps that one can take which lead to winning the HC game – and one will eventually take them, because the human scale pleasure and suffering becomes too predictable over many incarnations and hence controllable and eliminated to free a new capability of much finer sensing and gaming.
    These processes are physical. Generate a field, force, sound, light, order, rhythm or any form of (or) energy consistent and strong enough – and the vanilla reality as you know it breaks down. The human organism or the planet or sun are examples of such generators. Various yogas or unions (which you might know!) are pieces of software to do that. The HC game solutions are defined through Karma Yoga (charity), Bhakti Yoga (love, devotion), Jnana Yoga (intellectual knowledge), as well as the more exotic Laya Yoga / Kundalini Yoga, which are extremely dangerous on human organism and mess with it directly. Add to it a nice permeable fourfold caste system and the ancient Indians in the days of the first Patanjali had it all figured out.
    Yes, there is more to the life, universe and everything, than the finite problems. This is what abstraction means, winning the game leads to change of a physical substrate for computing – i.e. transistors to neurons to qubits. Which is the ultimate moving of goalposts.
    It is POSSIBLE to simulate the neurons and qubits with enough transistors, but if used as a universal rule, then the needed amount of transistors becomes either as massive as the universe itself, or infinite. Which is the problem of computing capacity.
    In theory, HC=IG=FG on a continuous scale, but in practice, the FG, then HC and IG are relatively discrete regions that have a life of their own. They could be called equivalent to quantization of light, which is also discrete. It is possible for an electron to be excited into a higher state by a light of higher frequency (or whatever), but no amount of lower frequency of light will excite the electron. Which is nature’s way of saying that no amount of FG proficiency will GRADUALLY get one to the IG scale. There has to be a quantum leap.
    An occultist would say that the animal monad can get into the human kingdom, but it is done by a jump, and only possible because the monad is inherently IG. Yay verily, the atoms themselves are inherently energy, which belongs to IG. The FG is the maya that may last billions of years, but in the strictest IG sense it is an illusion. IG is a simple infinity of dynamic energy, quantized by numbers, mostly natural numbers as well as the exotic primes and the Pi.
    Yes, the “zero-hardness problem” of “do nothing for one computing cycle” is the very definition of meditation on the HC level. And yes, in many ways from the PoV of other people – medieval traders or kings – such a “do nothing” hermit occupation would be stupid.
    But I don’t like how you goes into anti-intelligence… Or why. In a way, “God” (the pre-big-bang singularity) created (converted or simulated) the manifested (non-IG) universe through the relative “anti-intelligence” of basic logic and numbers as I described in the previous paragraph. Compared to that, the infinite singularity of energy is more “intelligent” in the Venkatian sense. This anti-intelligence, illusion and limitation is a very fundamental definition of evil or form.
    Which is to say, we humans are *fundamentally* breakers of illusions, we are fundamentally good. We are problem-solvers, not problem-creators. Creating profound new limitations for ourselves is stupid, creating profound new problems for others (for example declaring a war on them) is evil. Don’t be evil, Venkat!

    • Hello Jan. I don’t dispute anything you say, because you just take a long time to say nothing. Some of the complex paragraphs in your essay could be expressed in a single vision, such as me choking with laughter or facepalming. Such a spontaneous reaction is an evidence of cognitive dissonance with your whole mental MO. How can you refactor your perception, if you keeps your stuff in the awkward format of kabbalistic babble? Isn’t the point to have a single language applicable both to subjective thought AND the objective physical reality?

      • Jan Kaleta says

        So a reaction on your part is an evidence of cognitive dissonance on my side? In any case, if you laugh at me, you laugh at yourself, because I agree with you. I suppose we have to start from the basics.

        All things can be ordered from the least variable to the most variable. Variability means either complexity, or abstraction. So there is a hierarchy of variability in nature. Simpler objects (such as atoms or cells) serve as building blocks of more complex and therefore more variable phenomena (such as people). In this hierarchy of complexity or abstraction, there are areas which computing science calls layers of abstraction. The human-equivalent measure of complexity is what you call the HC problems. Less than that, it’s the FG. More than that, it’s the IG.

        There are certain arrangements or solutions of HC problems, which allow for maximum complexity (or variability) of human society. We call these principles of freedom / responsibility. They happen to be derived from logical consistency of behavior on a universal scale, some examples are the Golden Rule or the Non-Aggression Principle, but the best is Universally Preferable Behavior by Molyneux.

        This universal consistency on the HC layer of abstraction can be called a free society. This consistency allows the emergent phenomena to manifest. Such an emergent phenomenon is free market, for example. Market is a form of computing on the macroscopic scale – distributed computing, to be precise.

        Distributed computing on microscopic scale requires similar circumstances – a particular layer of abstraction of its own, which is the internet, an orderly matter, which are the CPUs, and some logic to it all, which is the Boolean algebra. The rules for building a global distributed computing network (such as Ethereum) and for building a free society are similar and eerily interdependent. There is little distinction needed. The market is a network of computing nodes of sorts. Their integration, or mutual trust (thanks to anonymity, data obfuscation, etc), increases the capacity of transactions, thanks to safety. This is the horizontal communication.

        There is of course vertical communication as well, between layers of abstractions, or the more and less complex beings. Whenever a less complex system interacts with a more complex, it is called sampling and it is subject to error due to the latter’s greater variability. In reverse, it is called pattern recognition. The less complex is like a retina or optic camera chip, with orderly arrangement of receptors / phototransistors, which create a grid capable to make some sense of the continuous IG out there. Or equivalently, an optical camera takes a more or less pixelated picture of reality.

        I just say, there are some computing principles applicable on all scales of problems. FG, HC and IG. This distinction of yours is relative, from the point of more complex beings than us, HC could be seen as FG. The thing that holds universally true are the principles of the hierarchy of variability, which I just described as the horizontal consistent integration (i.e. orderly transistors, or equality before law) and vertical sampling / pattern recognition. That’s just two or three thoughts to keep in mind, that you can see in all good things (and see them lacking in evil things), isn’t that efficient?

    • Jan — I get the sense that you’re reading themes that interest you into this post rather than reacting to it. While I do see some interesting connections to the metaphysics of the occult schools of thought you mention, and I know enough about them to sort of sense where you’re going with your particular train of thought, that’s really not the point of this post. My intent here is a limited one, and I merely set out to explore a speculative set of ideas connecting Carse’s theories to AI.

      I personally find it most useful, when approaching such a broad theme, to pick a particular perspective and angle of approach rather than attempting to address all possible perspectives or angles of approach. Possibly my choices in this particular case are just not valuable for you.

      • Jan Kaleta says

        Thank you! See, I’m not crazy.
        My goal is refactoring the perception, I went through the process and I presume that it is your goal as well. I judge most things that you write from the point of view of that goal, not on their own (the Essence of Peopling article is excellent though). Carse’s theories have value on their own (thanks for source), but they’re even more valuable when fitted into the model of metaphysical universals, that is the goal of the refactored perception. By the way, in this act they are also corrected.

        The refactoring of perception is a way of capitalizing on the knowledge that you already have, abstracting the universals out out it and then focusing on the universals only. Particulars then become less interesting, almost a nuisance. The universals aren’t a broad theme, as there are just a few of them, though they look broad, because they can only be expressed through many examples (or a strange jargon).

        So, the next thing after self-refactorization is to be interested in the method of bringing about the self-refactorization in others. Or just finding out how interested they are, or how valuable do they find it. Technically, it is reaching a measure of enlightenment and I am a modern guru looking for a business model.

  5. When I was in high-school, the smart kids were too bored with the classes so they sat on the back benches playing tic-tac-toe. This got boring very quickly. So the game of 3D tic-tac-toe was invented – 3 grids of 3×3 each representing a 3D version of the game. This was soon expanded to a 5x5x5 grid to keep the smart kids occupied for the entire school year.

    The point is, you can often amp up the number of combinations available within an FG to create the illusion of an IG. This is how people manage to find meaning in life through chess or go or cricket. Of course, once you see that the game is finite and repetitive this suspension of disbelief quickly falls apart.

    http://imgur.com/gallery/gUgkpTx

  6. Art imitates life as much as life imitates art. Consider: the most profound anti-intelligence caricature ever devised is Don Quixote, buttressed by the picaresque and comic literary traditions. The Knight Errant who elaborates an entire ridiculous lifestyle in pursuit of irrelevant medieval ideals during renaissance times. Quintessential pointlessness…or is it? One ends up questioning whether The Don isn’t the smartest (in the anti- sense?) in the room after all.

  7. Did you see Ran Prieur’s concept of technological de-gamification? It reminded me a bit of your framing of things as infinite games, and humans as seeking out these games.

    “This 1916 Guide Shows What the First Road Trips Were Like. The article looks at “Blue Books” that were densely packed with maps and instructions to navigate the extreme complexity of local roads before state and federal highways. What jumps out at me is how much fun this would have been! Every minute you’re being challenged, feeling a sense of reward for staying on the route, and being right in the middle of new places. I can’t think of any kind of travel that I would enjoy more (except see below).

    As more people got cars, governments made driving easier with highways and signs, and driving gradually changed from something fun you do for its own sake, to some shit you have to do to get from one place to another. In a few years someone will ride a self-driving car across America, while giving all their attention to a video game that simulates the kind of exciting exploration that they would get to in the real world if it hadn’t been improved so much.

    I call this technological degamification. Gamification is when a boring activity is tweaked to make it more fun, and it’s often done for marketing and other sinister purposes. Technological degamification is when technology is applied to an activity with the goal of making it easier, but the result is to make it less rewarding by removing too much fun stuff from human awareness, and not enough tedious stuff.”

  8. This reminds me a little of Douglas Hofstadter’s writing. Whimsical, but with a purpose.