The Government Within

Mike is a 2013 blogging resident visiting us from his home blog Omniorthogonal.

Are ordinary people really populations of interests rather than something more solid? It’s disturbing to think of yourself as so fluid, so potentially unstable, held together only by the shifting influence of available rewards. It’s like being told that atoms are mostly empty and wondering how they can bear weight. Yet the bargaining of interests in a society can produce highly stable institutions; perhaps that’s also true of the internal interests created by a person’s rewards…these patterns look like familiar properties of personality. – George Ainslie, Breakdown of Will, p 44

Productivity methods and self-help advice that promise to improve one’s effectiveness at achieving goals (Getting Things Done, Lifehackers, etc) are all the rage these days, but I have mixed feelings about them. On the one hand, who can argue with people trying to improve themselves and become more effective? But something about this form of discourse makes me suspicious; something doesn’t quite add up. How can one will oneself to be more willful? Becoming more dedicated to your goals sounds good, but that is true only if those goals are the right ones to have; where did they come from, and how did they get chosen out of all the available goods in the world? And how do you know when it is time to let go of your goals and revise or replace them? People occasionally have to pivot just like startups do, and a narrowly-focused dedication to one goal can mean missing out on better ones. In short, the management of goals and the willpower that they direct is a fundamental mystery of human action, and the productivity experts seem to blithely ignore all the theoretically interesting aspects of it.

This literature reads as if Freud never existed. If there is one valuable insight to be gleaned from his problematic legacy, it is that our conscious intentions are at best the tip of a very large hidden iceberg of unconscious motivations. Our true purposes are obscure; the mind is a disorderly riot of conflicting drives, and we are constantly tripped up by desires we are not even aware we have.

Freud and his inheritors are distinguished by their method of anthropomorphizing internal mechanisms of the mind – treating a part of the mind, say the id, as an autonomous being with its own goals, agency, and some limited intelligence. Marvin Minsky’s Society of Mind theory is probably the most worked-out contemporary theory in this mode. I’ve studied with Minsky, and have internalized much of his worldview, but this post is an attempt to grapple with the work of George Ainslie, whose approach is rather different. Where Minsky tends to be more cognitive in his approach, Ainslie’s theory is deeply rooted in drives, rewards, behavior, and quantitative utility theory. His version of this he calls “picoeconomics” – that is, the very-small-scale internal economy of the mind.

All these thinkers, themselves no doubt motivated by disparate goals, focus on unpacking the seeming solidity of the self and unveiling an underlying disunity. Viewing the mind as an internal society of conflicting agents has some conceptual problems, of course – mechanisms, after all, are not people, and it is easy to criticize such theories for positing an infinite regress and hence explaining nothing. As science, then, it may be problematic, but as a matter of practical personal self-understanding it seems to be an extremely powerful technique, and also a possibly dangerous one.

The actual self seems extremely slippery in such a model. There is no central control of the mind; it’s just a loose collection of desires, agents, plans, and assorted bric-a-brac. How then does a coherent self get pulled together from this mess? One of Ainslie’s key points is that the main or only reason we have a self at all is to construct and enforce long-term bargains between independent behaviors and interests, and that the self is better understood as a temporary alliance than as an organ or a structure.

A practical person might ask, why would you want to see yourself this way? Isn’t it a somewhat destructive intellectual goal? Even if the self is fictional, it is surely a necessary fiction, and undermining it might be a bad idea. Integrity is a virtue; why the emphasis on its opposite? In the cases of Ainslie and Freud, the motive is at least in part therapeutic. Freud wanted to treat people who were suffering from problems caused by the irruption of repressed desires; and Ainslie is motivated by his work on addiction, a paradigmatic case of the failure of the unitary self. An integrated self, if it is anything, is a construction, an achievement, an emergent fact out of a pre-existing disorder, and one needs to understand this process in order to avoid or correct the pathologies that the process is prone to. But one thing Ainslie is not doing is offering a self-help manual on increasing your willpower. In fact he is deeply skeptical of the idea and includes a chapter on “The Downside of Willpower” pointing out that an overactive rational pursuit of will can be self-undermining.

Outline of Ainslie

First, an apology: this is a very compressed account of a book that is already quite dense with ideas. For more detail, see this summary, and if you are just starting down the road of self-division, you might do better to start with Minsky or the popularized Freudianism of Transactional Analysis.

The temporal inconsistency of preferences

Ainslie starts from the observation that human preferences tend to be heavily biased in favor of immediately available rewards over those that are distant in time. In other words, we discount a goal based on how far away in time it appears. The ice cream we are trying not to eat is easy to resist if it is down the road in a store, harder if it is at home in the freezer, and impossible when it is sitting in a bowl in front of us. The rewards of ice cream and the rewards of dieting each produce a curve that discounts rewards over time, and they compete (along with other rewards and behaviors) for dominance.
If these curves obeyed the normal rational rules of economics, they would discount rewards at a constant rate over time, producing a curve of preference that decayed exponentially with increased distance in time. However, it appears that our effective discount rate is not constant: we discount the near future very steeply and the far future comparatively gently, producing a hyperbolic curve that is more deeply bowed than an exponential one. This hyperbolic discounting takes place at a broad range of time scales (from sub-second to weeks or longer) and has been experimentally verified in both humans and animals.
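
To make the shapes concrete, here is a minimal sketch in Python of the two curve families – a constant-rate exponential and the standard hyperbolic form – with parameter values that are purely illustrative (nothing here is taken from Ainslie’s data):

```python
# Illustrative only: made-up rate and k values, just to show the two curve shapes.
import math

def exponential_value(amount, delay, rate=0.1):
    """Constant-rate discounting: the same proportional decay per unit of delay."""
    return amount * math.exp(-rate * delay)

def hyperbolic_value(amount, delay, k=0.5):
    """The standard hyperbolic form: value = amount / (1 + k * delay)."""
    return amount / (1 + k * delay)

for delay in (0, 1, 5, 20, 60):
    print(f"delay {delay:>2}:  exponential {exponential_value(100, delay):5.1f}   "
          f"hyperbolic {hyperbolic_value(100, delay):5.1f}")

# The hyperbolic curve falls faster than the exponential near zero delay,
# but retains more value at long delays -- the "deeply bowed" shape.
```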

One of the key consequences of hyperbolic discounting is that our preferences can be inconsistent over time. A purely exponential discounter doesn’t have this problem: a graph of the perceived value of various rewards over time, all discounted exponentially at the same rate, will never have any of the curves intersecting. But with hyperbolic discounting, short-term goals, when they are imminent, can override longer-term ones – in other words, short-term preferences can temporarily dominate longer-term ones (figure from Ainslie):
[Figure from Ainslie: discounted-value curves for a smaller-sooner and a larger-later reward; shortly before the smaller reward becomes available, its curve rises above the other one.]

In B, the behavior with a short time perspective is temporarily capable of overriding the longer-term one. This temporal inconsistency, according to Ainslie, is the mechanism underlying not only succumbing to temptations but a wide variety of other common problems such as addiction, procrastination, and compulsions. At a more fundamental level, the inconsistency of our preferences is a problem for a would-be rational agent, because it means we can never trust ourselves to pursue the same goals in the future that we hold now.
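
The reversal is easy to reproduce with toy numbers (mine, not Ainslie’s): say a treat worth 50 becomes available on day 3 and a bigger payoff worth 100 on day 6, and we value both hyperbolically from two vantage points:

```python
# Toy numbers of my own, for illustration: a smaller-sooner reward (50 on day 3)
# versus a larger-later reward (100 on day 6), valued hyperbolically with k = 1.
def hyperbolic(amount, days_away, k=1.0):
    return amount / (1 + k * days_away)

for now in (0.0, 2.9):
    sooner = hyperbolic(50, 3 - now)    # the imminent treat
    later = hyperbolic(100, 6 - now)    # the larger, more distant reward
    winner = "sooner" if sooner > later else "later"
    print(f"viewed from day {now}: sooner={sooner:.1f}, later={later:.1f} -> prefer {winner}")

# From day 0 the larger-later reward is worth more (14.3 vs 12.5); by day 2.9
# the imminent treat dominates (45.5 vs 24.4) -- a preference reversal.
# With exponential discounting at a single rate, the ratio of the two values
# never changes over time, so no such reversal can occur.
```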

Intertemporal bargaining

The need for long-range goals to somehow suppress temporarily dominant short-term goals leads to an existential impasse. Because we can’t naively trust our future selves, we are in some sense radically alien to ourselves. But our future selves are not wholly out of control: their behavior can be predicted, and their choices can be influenced. So what methods are available to enforce our current preferences on these future versions of ourselves? Ainslie catalogs what he calls “intertemporal bargaining techniques”, including:

  • Extra-psychic commitment: this means changing one’s external circumstances so that future temptations are avoided or resisted. The classical example is Ulysses’ strategy of resisting the call of the Sirens by having his men tie him to the mast of his ship. A more modern example would be a drug addict who checks themselves into a rehab center to force themselves to be away from drugs, or an alcoholic who doses themselves with a drug that changes the metabolism of alcohol so that drinking leads to severe nausea. In all of these cases, a person acts in the present to alter the choices or rewards available to them at a future time.
  • Personal rules: You swear on New Year’s to forego desserts, and somehow hope that the resolution has enough strength to overcome future temptations. Sometimes it does, but more often it doesn’t. The act itself seems rather mysterious: how can this purely mental action affect future choices? How can you bind yourself to follow a rule that you are also motivated to break? The tenuous possibility of success of such a resolution seems to involve the substitution of an abstract principle for the concrete situation which causes the undesired short-term choice, in essence implementing a sort of Kantian categorical imperative. If this mental trick succeeds, then breaking your diet is no longer merely a single act (whose isolated consequences are, of course, going to be minor) but an act that destroys a valuable abstraction – not only the diet, but the fact of commitment, and the person’s own image of their strength of character. In other words, having a rule raises the stakes of an individual choice to the point where long-range goals can override short-term ones (the toy calculation just below sketches the arithmetic). The well-known Seinfeld technique for self-enforcement of personal rules is a partly externalized and explicit version of this technique.
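
Here is a rough back-of-the-envelope version of that stake-raising, with invented numbers and the same hyperbolic valuation as before (this is my gloss on Ainslie’s bundling of choices, not his own worked example):

```python
# Invented numbers, for illustration: the dessert in front of you is worth 50
# right now; each day you keep the diet pays off 60, but only 30 days later.
def hyperbolic(amount, delay_days, k=0.3):
    return amount / (1 + k * delay_days)

temptation_now = 50
one_delayed_reward = hyperbolic(60, 30)                               # about 6
bundled_series = sum(hyperbolic(60, 30 + day) for day in range(30))   # about 130

print(f"single delayed reward: {one_delayed_reward:.1f}  (loses to {temptation_now})")
print(f"30-day bundled series: {bundled_series:.1f}  (beats {temptation_now})")

# Seen as an isolated choice, the temptation wins easily; seen as a precedent
# that puts the whole series of future rewards at stake, rule-keeping wins.
```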

Commitment as recursive self-prediction

Ainslie is essentially saying that one is playing an iterated prisoner’s dilemma game with one’s future selves – like the classical IPD, one can cooperate (act in a way that serves the shared long-term interests) or defect in favor of more “selfish”, local, short-term interests:

Hyperbolic discounting makes decision making a crowd phenomenon, with the crowd made up of the successive dispositions to choose that the individual has over time…. Participation in the acts of this crowd of successive choice-makers is an extremely self-referential process, hidden from the outside observer and even from the person herself facing it in advance. She can never be sure how she herself will choose as she tries to follow this crowd and also lead it from within. (BoW p130)

If simple rational utility maximization with time-consistent discount curves were the norm, we wouldn’t need anything that looked like decision-making: we’d just act on whatever drive promised the most utility, with no more complex mental machinery required than a simple maximizer function. But because our preferences are temporally inconsistent, such a simple scheme doesn’t work. Instead, our conflicting agents are forced to negotiate with each other, creating bargains and alliances using the techniques described above.
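
A deliberately crude toy model can show how the self-referential prediction changes the payoffs (the numbers and the “future selves follow today’s precedent” assumption are mine, not Ainslie’s formalism):

```python
# Toy model: today's self chooses to cooperate (keep the diet) or defect
# (indulge), and the probability it assigns to its future selves cooperating
# depends on what it does today -- the recursive self-prediction step.
def hyperbolic(amount, delay, k=0.3):
    return amount / (1 + k * delay)

TREAT_NOW = 50     # immediate payoff of defecting today
DIET_REWARD = 60   # payoff of each future day of cooperation, felt 30 days later
HORIZON = 30       # how many future days today's choice is taken as evidence about

future_stream = sum(hyperbolic(DIET_REWARD, 30 + d) for d in range(1, HORIZON + 1))

def expected_value(action, predicted_future_cooperation):
    immediate = TREAT_NOW if action == "defect" else 0
    return immediate + predicted_future_cooperation * future_stream

# If my self-prediction ignores today's act, defection dominates, as in a
# one-shot prisoner's dilemma:
print(round(expected_value("cooperate", 0.85)), round(expected_value("defect", 0.85)))  # 108 158
# But if a lapse today is read as evidence that future selves will lapse too,
# cooperation wins -- the recursion does the work usually credited to "willpower":
print(round(expected_value("cooperate", 0.85)), round(expected_value("defect", 0.20)))  # 108 75
```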

The self as the locus of intertemporal bargaining

Picking a strategy in a prisoner’s dilemma game involves predicting the actions of the other player, and hence modeling them. What Ainslie seems to be saying is that this recursive process precedes, generates, and underlies the self. In some sense we bootstrap our selves into being through this process of trying to wrangle our infantile drives into coherent longer-term actions. We base current behavior on both our past actions and responses and our predicted future actions and responses, and our self-representations are tools for enabling, and/or artifacts generated by, this highly recursive and self-referential process.

Here is where Ainslie’s story becomes hard for me to follow. I deal with complex recursions all the time, it’s part of the job of a computer scientist. But there is always an element of mystery to them. Recursions involving the self, even more so. So I can’t quite grasp this process in its entirety, and so can’t fully judge whether I find it a believable theory of mind. Unlike some of the more straightforwardly cognitive models of the mind, Ainslie’s seems to be rooted firmly in the precognitive unconscious and animal behavior.

Ainslie includes a chapter on the experience of intertemporal bargaining, where he warns that his theory may be not only counterintuitive but actively resisted, because it is in the nature of our internal behavioral rules to resist attempts to weaken them, and becoming aware of the somewhat arbitrary nature of rules that we create for ourselves can only serve to weaken them. In other words, it is potentially dangerous to become too self-aware of your mental machinery and the real way your motivations work. You have been warned.

The Politics of Mind

…this last is of special use in moral and civil matters: how, I say, to set affection against affection and to master one by another: even as we use to hunt beast with beast…For as in the government of states it is sometimes necessary to bridle one faction with another, so it is in the government within. – Francis Bacon (quoted in BoW, p5)

A person, like a society, is composed of parts with their own private agendas, all taking part in a continuously renegotiated dance of conflict, cooperation, and compromise. Our disparate motivations are like politicians trying to advance a faction, and the self, such as it is, is something like a prime minister – not powerful in its own right, but because it has managed to become the public face for the most powerful faction. Our inner life is a noisy parliament.

Given the dysfunctional reputation of external legislatures, this might be cause for despair. But consider two points: First, what are the alternatives? The older model of the self is equally a reflection of the monarchic system of government, or a military hierarchy, the king-general-self seated at a pinnacle of command and handing down orders to the lower parts who execute them. That model doesn’t comport well with either the reality of human action or a modern esthetic of systems. Second, given the inescapable fact of conflict in both society and the individual, we have no choice but to struggle towards systems of governance that work, where “work” means to successfully arrive at suitable compromise solutions that are stable and satisfy a reasonably large subset of constituent interests.

Of course it is common to have legislatures that are mired in stalemate, like the current US Congress. The internal parliament of the mind has its own set of pathologies: akrasia, addiction, compulsion, irresolution, repression, etc. The distributed-agency theories of Freud and his descendants open up the tantalizing possibility of a unified theory of power and action over distributed systems, one that can show how assemblages of agents form, compete, cooperate, and dissolve, and that cuts across psychology and political science. Perhaps these two most intractable problems – the organization of society and the organization of the self – can some day illuminate each other.

So you and every person you interact with are each a whole society of separate agents, with an internal economy, government, and politics. Some people may be organized like monarchies with a strong central self, others may be more anarchic bundles of disparate impulses, others may be flexibly improvising democracies of interest. A kind of mental anarchy is probably the infantile ground state, with structures of governance emerging over time. We all probably are familiar with people who have either too much or too little governance over their impulses. Personal interactions are like diplomatic missions between countries, and our social selves the ambassadors, forced to represent a complex system in a simple, polished, and understandable form.

Most of us in the technology world I think find politics (the external kind) distasteful – because of its dysfunctionality, inelegance, and because when it does work at all it requires dealing with humans on a retail, personal level, rather than as abstractions. But if individuals are themselves loose collections of divergent agencies, with only cobbled-together alliances maintaining a semblance of unity and coherence, then politics in a sense can’t be avoided at all – it’s what we are made of. The dominant political philosophy of technologists appears to be libertarianism, an ideology that may be pretty accurately defined as the belief that politics can and should be replaced by something else (the market). Fighting this tendency has been an obsession of mine for decades, and while I am not going to fight that battle here I cannot resist the fresh insight gained from an immersion in Ainslie: that the flight from politics is in some sense a flight from authenticity, a denial of our true nature, and a proposal to replace it with something shiny and superficially attractive but utterly foreign to who we really are.

 


About Mike Travers

Mike Travers is a software engineer based in the Bay Area. His ribbonfarm posts explore the nature of consciousness, community, and varied other themes. Follow him on Twitter.

Comments

  1. This is a very deep and thought-provoking approach that I will have to re-read. But given both my interest in AI and my inability to resist thinking about increasing my own efficacy (something I actually find narcissistic), I’d like to give making sense of this a go:

    First, this seems to fit in very well in some ways to Damasio’s own somatic-mapping theory of consciousness, in which all of our abstract and conscious thoughts are fractal patterns recursively built out of visceral feelings that represent basic homeostasis-seeking drives.

    Second, I think that one of the key issues of all of this is that preferences, emotions, feedback, learning, and what we know as “will” are all part of the adaptive process that defines life, with none of them being a purely self-contained concept. While drawing taxonomies is important for investigating something, I fear that the notion of “will” as we know it now is an exhausted taxonomy that’s inhibiting further inquiry. I also have been avoiding the notion of “rationality” because it seems to be a concept that, in its conventional usage, requires a “calculative rational” model. Venkat’s “narrative rationality” is an interesting step forward that I largely agree with (and it’s influenced my current approach to AI), but I’m going to put that aside for now because of its complexities.

    So while avoiding the dichotomy of rational/irrational, I’d like to bring up that our preferences change because there’s always new information coming in. In addition, there’s the strange effect of stochasticity making many algorithms more effective: something that every computer scientist has encountered (I think there’s something even deeper about this that has to do with the “probability is subjective” notion of Bayesian theory and the notion of Shannon-entropy, but I don’t want to get too verbose). The isolated notion of “willpower” starts to break down here, because if we view emotions, preference changes, &c as sources of feedback, then the idea of “willpower” suddenly turns into unreasonable stubbornness. At the same time, it would be silly to throw away the idea of delayed gratification, especially since we have high-level processes for a reason.

    My current conclusion on this may best be described by paraphrasing what seems to be a popular meme around here: willpower is an extension of learning by other means. I wrote a blog post on what exactly I mean by this, which I think might be useful with regards to the questions brought up on this blog post:

    http://simulacrumbs.com/2013/09/shouts-whispers-and-the-myth-of-willpower-a-recursive-guide-to-efficacy/

    Now, hopefully my current thinking-out-loud leads me to some kind of satisfying thought about the recursion question and the question of whether we ought to break ourselves down into these separate conflicting parts. I think that these two seemingly opposed answers can be synthesized via the concept of tensegrity. These conflicts of preferences, emotions, and willpower are all “growing pains” of some kind. These growing pains might be the kind of “freedom” that Venkat talks about in his post on “Freedomspotting”, but to explain it in my own way for a second: our society currently has a folk concept of “happiness” based on static rational goals, which means that the concept of “willpower” is “Happiness is X, I need to do Y to get X, so I need to will myself to do Y.” From this point of view, anything that obstructs Y is a sort of “unproductive distraction” akin to the way people talk about the US Congress on a daily basis. But if we were to drop this idea and think about our most basic conflict as the creation of a coherent self in the midst of chaotic circumstances, then we can get over this idea of tension being a bad thing, and consider that maybe they are the adaptive mechanisms that actually help us continually construct a stable sense of self against changing circumstances.

    So where does that leave us about self-inflicted misery? Beyond the anomie caused by not striving for something, one has to ask about the irrationality of smoking cigarettes or eating too much ice cream–things that cause real physical consequences down the road. How can a failure of willpower here be considered anything but categorically bad? (yes, hypothetically the tradeoff could be truly worthwhile for someone.) Part of the answer is that our physical well-being is as much about “integrity” as our mental well-being–all life is maintaining a coherent, albeit chaotic, whole; the very notion of “survival” is about maintaining the precarious membrane that separates subject and object. Sometimes it’s a matter of acting purely on metacognition (such as the abstract knowledge that smoking will likely kill us); but if we had unlimited “willpower”, we would quickly derail ourselves by disregarding the constant feedback telling us to change what we do. This is important, because following through on good intentions could be just as deadly in some cases as failing to live up to them. In both cases, I could attribute them to the exact same thing: faulty feedback/learning. Creating behavioral hacks to avoid cigarettes in this case is installing software to patch up an obvious defect in our feedback system. While one must always tread lightly when tampering with complex systems, some of the most obvious cases of self-inflicted misery are a kind of “low hanging fruit”, where the case is clear-cut enough that we can take direct action without huge risks.

    In all these cases, however, it does seem to come down to maintaining a self via tensegrity: lung-cancer and excessive ambition can both potentially break down the structures that are essential for building and maintaining integrity. This constant quest for integrity requires very complex feedback, which would explain why we have this odd paradox between chaos and discipline.

  2. I find myself getting sucked into one of two modes of processing this idea.

    First, I find myself in distributed homunculus mode, imagining myself to be a sort of bag of little homunculi running around in the Cartesian theater, with my “I” operating a spotlight off stage, highlighting one or the other.

    Second, a sort of recursive being, in the sense of the movie Inception.

    Neither is satisfactory as a way of unpacking “what is it like to be a bunch of picoeconomic, hyperbolically discounting behavioral loops?”

    In that sense, there is something string-theory like about this. The base primitive unit is a response to a perceived stimulus (hyperbolically discounted from the future) that attempts to take control of conscious action. These stack up in some sort of networked economy with a few messy hierarchical layers.

    The distinction from Minsky that I see is that this market, unlike Minsky’s behavior agents, is predictably irrational in the behavioral economics sense. This predictable irrationality arises from hyperbolic discounting, and makes the economics illegible to a hypervisor homuncular POV.

    Am I even close in understanding this?

    The sociology of these units is not clear to me.

    The intertemporal bargaining examples seem to be at too coarse a level of abstraction to get at how all this works. We really want to think about contending processes at the level of opening a fridge door. Which process gets control of the motor neurons under contention?

    This is also a little like Dennett’s model of heterophenomenology.

  3. Yes. Agreed. Self-mastery is the mastery of intrapersonal politics.

  4. Venkat — you seem close, or at least, about as close as I am. I think you are absolutely right that these processes take place in as fine-grained situations as opening up a door. Ainslie uses itching as one of his very short time scale examples, because that involves some clear pain/pleasure dynamics, but perfectly ordinary actions also require bunches of agents to engage in short-term cooperative alliances.

    Phil Agre developed a theory of routine behavior (fridge-door level activity) that was trying to be both phenomenological and computational. I don’t think he had much of a theory of motivation though. Putting him and Ainslie together sounds like a good project for some eager grad student.

  5. Check out something like Coherence Therapy as an alternative way of looking at many selves bargaining. Or, neocortex vs amygdala. If you rationally know you “should” be doing something, but you’re not, then the amygdala(s) think they have a good reason. I reached the limits of GTD pretty quickly because I had a lot of conflict inside. See also Eugene Gendlin’s Focusing and Jay Earley’s take on Internal Family Systems Therapy. If GTD is a technology, so are Coherence Therapy, Focusing, and IFS. They explicate the realm prior to the realm that GTD operates in.

    http://www.coherencetherapy.org/discover/reconsolidation-FAQ.htm

    I make these recommendations after ten years of doing GTD-esque things and many hours working with the resources above.

  6. Also see Peter Watts on a peer-reviewed article by a guy named Morsella:

    http://www.rifters.com/crawl/?p=791

    The paper is about what consciousness is actually *for.* Don’t roll your eyes, it’s a real contribution on top of the infinity of papers that have come before about this topic. :)

    Watts:

    “Morsella calls it PRISM: the Principle of Parallel Responses Into Skeletal Muscle. He claims the acronym works conceptually, ‘for just as a prism can combine different colors to yield a single hue, phenomenal states cull simultaneously activated response tendencies to yield a single, adaptive skeletomotor action.’ Yeah, right. I bet the dude spent as long playing with Scrabble tiles to come up with a cool-sounding name as he did writing the actual paper, but we’ll let that slide.”

  7. Very evocative article, definitely warrants a re-read sometime later. For me (aspiring Neuroscience grad student), the debate about the actual nature of “drives” is fascinating – for some decision processes and cases where they go awry, such as addiction, there is research showing that there are localized networks in different parts of the brain interacting, biasing and influencing each other. They all seem to get more complex every time someone takes a look, so it might be a *long* time before there’s an applicable, comprehensive explanation, I think.
    For the meantime, it might be more insightful to model what’s going on in the way you described in this post. Maybe I’m stating the obvious, but I think that giving agency to our murky inner imperatives, describing them as some sort of human-like beings, is both more intuitive and more powerful analytically (to me at least). It is because we, being humans, are very well fitted for understanding social agents, their motivations, and predicting their actions. Thus, when we model our inner motivations as different, quasi-social entities, we can bring our own very powerful special-purpose social processors to bear on the analysis, yielding new insights.
    Taking this to the extreme, maybe enlightenment-level self-awareness is only figuring out which (innate or learned) special-purpose processors to use to scrutinize one’s own inner motivations. The level of enlightenment would then simply be the distance between one’s own introspection and that of other people who haven’t figured out this cheap trick.
    In short: Average humans are far better at predicting other’s actions than their (future) own, so they stand to gain by viewing themselves as a collection of separate entities, leveraging their social understanding for introspection.
    Libertarianism, to me, seems like the reverse: the application of a less-social, less-empathic, economic-flavoured rule-set to analyse human interactions. I’d predict that a libertarian would rather try to model between-people interactions in economic-rational terms, because this is the way they experienced those interactions all along (with less emphasis on empathy than other people might experience, for whatever reason). I’d also assume that an “economics of mind” model makes much more sense to a libertarian than a “society of mind” model, because the former fits better with how they experience their (social) surroundings. But then, maybe I’m mistaken because most of my encounters with libertarianism were in angry conversations on the ’net.

  8. excellent article.
    many ancient indians had solved this puzzle of ‘internal government’
    there are 4 entities inside us, which work in a parliamentary fashion.
    they dont have exact eq. english words, only close ones.
    1. Ahamkar = the perceived doer inside you; aham+kar ‘i do things’ , sort of the ego.
    2. Chitta = the entity that shows us pictures [Chitra=picture in sanskrit]
    3. Maan = the entity that has thoughts, [manan] sort of the mind
    4. Bhuddi = the intellect. tells you ‘good’ from ‘bad’

    the ancients said that these 4 operate in a parliamentary fashion.
    Chitta and Maan offer ideas and pictures to the bhuddi, which says ‘good’ or ‘bad’ to the suggestion, and the good ones get implemented by the ahamkar. the end.

    Clarifying example.
    ‘you’ want an ice cream. its actually the maan and chitta working together.
    chitta will show ‘you’ pictures of icecreams eaten in the past, their flavors and aromas, the coolness, maan will describe the taste.

    bhuddi then steps in and says ”you” have a bad throat, so avoid. and thats the end of that.

    but instead, bhuddi may say – yes, its a warm night, and an ice cream will be good. ahamkara then instructs the body to get up and go and eat the ice cream. the body then eats the icecream. the end.

    who is the ‘you’ in all this ? that is the real puzzle. solved by many ancients.
    one good modern example : http://www.dadabhagwan.org

    • bhuddi then steps in and says ”you” have a bad throat, so avoid. and thats the end of that.

      In my experience this is just the beginning :-)

      The model doesn’t seem to be all that different from the Freudian one with buddhi being the internalized father = superego, the host of moral values and prohibitions/permissions etc. It’s a pretty good model because of its simplicity and roughness. It works on very low resolution and that’s what is good about it. If you attempt to add detail and complexity to what will always only be a first sketch you are misdirected and get lost in intellectual fantasies. I guess this is why I don’t like the society of the mind with all those actors or the slightly hilarious company of small traders and shop owners who build the non-society of the mind as a free market place, free from the delusion of central government and common welfare. But even here I agree it can be useful to model an addictive winner-takes-it-all situation, where buddhi is suspended. So maybe the model of the ancients is a bit too neat but I also have no idea how they explained neurotic behavior.

    • To consider a puzzle as tricky as this one to be solved by ones own ancient progenitors is a case of…ahamkara :)

      • you are both correct.
        knowledge of our internal world [maan bhuddi chitta ahamkar] is not communicable in the same ‘scientific’ way as knowledge of our external world [gravity, electromagnetism, nuclear forces etc]
        the problem is that knowledge of the internal world is *experiential*
        the only way for anyone to ‘know’ it is to experience it.
        eg: you can’t ‘explain’ the taste of sugar to someone who has never tasted it. ditto for the internals. so let sleeping giants lie….

  9. Well here I am, after the second reading. Wanted to say thanks again, Mike, for this article. It tickled me in all the right ways. Society-of-mind is one of the truly fundamental ideas, but one that’s perennially underappreciated. (Which makes sense if we buy Ainslie’s description of it as “not only counterintuitive but actively resisted.”) When you remember that there’s no unified, centralized locus of control, a nontrivial number of philosophical debates evaporate overnight.

    There are at least two follow-ups to these ideas that I’m particularly interested in:

    1. How far ‘down’ can you push the “disorderly riot”? I ran across an Edge interview with Dennett recently (http://edge.org/conversation/normal-well-tempered-mind) where he explores the idea that you can push it all the way down to the level of neurons, which have gone a little bit wild/rogue and can be modeled, at least in part, as individual agents with selfish agendas. A further fascinating follow-up is whether neurons (or even just traditional Minskian agents) can join up in political coalitions with the neurons/agents in other minds(!).

    2. It seems like there are (at least) two big pressures which cause the ‘self’ to come into being as a coherent thing: (a) inter-temporal bargaining and (b) inter-personal interactions. Another way to put this is, how much of a ‘self’ would a human creature develop if he was bumbling around in a mindless world, vs. how much does his ‘self’ require the presence of other self-like things who reward or punish him for certain behaviors? It seems like there’s a lot of pressure _from other people_ to be consistent, to follow through on promises, and not to act too impulsively. But I have no idea how to model or measure that w/r/t the pressures that arise from inter-temporal bargaining.

    BTW, this is one of the most profound sentences I’ve read all year:

    “The self, such as it is, is something like a prime minister – not powerful in its own right, but because it has managed to become the public face for the most powerful faction.”

    We have a bad habit of overestimating the amount of agency in the world. This knocks both our selves and our leaders down a peg.

  10. I wonder if there are mathematical proofs that hyperbolic discounting emerges naturally in systems similar to governments?