Patterns of Refactored Agency

This is a guest post by Mike Travers, who develops software at Collaborative Drug Discovery, blogs on diverse topics at Omniorthogonal, collects his random hacks at Hyperphor, and has a PhD in Media Arts and Sciences.

The scientific picture of the world has some disturbing implications when its assumptions are worked out to their ultimate conclusions. Brains and bodies are pieces of machinery subject to the laws of physics, and if we are simply mechanisms, then our ability to be free seems to disappear, along with many of the basic foundations of everyday cognition and action (choices, selves, values, morality, consciousness, etc). The scientific worldview has proven both extraordinarily powerful and immensely unsatisfactory, given how at odds it is with our everyday experience. The disjunction between scientific thought and traditional humanistic thought was captured by CP Snow’s Two Cultures in 1959 but has only gotten worse since then. As a scientifically trained person who has worked on the margins of artificial intelligence, I’ve always struggled to find ways to reconcile these two worldviews.

One solution to the problem is to simply recognize that consistency is overrated, and having an embodied life in this universe means maintaining a variety of inconsistent worldviews for different occasions. This is what we all end up doing; even the most radical materialists must get on with their lives, which requires thinking of themselves and others as more than machines. Most scientists have no problem being materialists at their lab benches and normal humans when interacting with others. Philosophers may trouble themselves with the contradictions of free will; the rest of us have actual decisions to make.

Still, there is a constant flux of trade, immigration, and skirmishes along the porous border between the mechanical world of science and the value- and meaning-laden world of everyday life. The appetite for popular books on neuroscience is one indicator that people are not content to let these domains stay separate. For whatever reason, there is an enormous market for mechanical explanations of and interventions in our inner lives.

What is agency?

Agency simply means “the quality of being capable of taking action”. You and the people around you seem to have agency, while rocks generally do not. Inanimate objects are sometimes granted agency in a kind of humorous scare-quoting (eg “the washer decided to break today”); later we will try to take such constructions seriously. Agents (entities that have agency) have the additional implied quality of having goals, with the actions they take generally in pursuit of those goals. Agency thus carries a presumption of at least some rudimentary rationality, and a degree of autonomy.

Agency is a quality that seems to contradict physicalism – because in physics nothing is ever initiated, nothing acts. “In physics there are only happenings, no doings.”1 Yet we can’t understand the world without it – if agency is a fiction, it’s a necessary fiction. We live in a world of goals and actions, not merely mechanical forces guided by differential equations, and thus we are assigning agency constantly. Whether machines (including us) actually have agency is a philosophical black hole that we will try to avoid being sucked into. But the problematic status of agency frees us to consider it as not necessarily a fundamental feature of the universe, but more plausibly a kind of way of talking about phenomena. Agency is a conceptual framework, and one more suited to real life than pure science.

The quality of agency is deeply rooted in grammar. A well-constructed sentence has an agent (generally but not always the subject2), a verb, and a patient (object). Institutions and other collectives thus appear as agents just by dint of common usage – sentences like “Apple released a new iPhone”, “The bank foreclosed on my house”, “the crowd stormed the embassy”, all serve to cast non-humans in the agent role.

Human agency, despite its familiarity, is beset by well-known problems. We are subject to anomie and akrasia, to both overconfidence and crippling self-doubt. Psychologists have become adept at teasing out paradoxes of agency, such as that voluntary actions seem to start before we are aware of them, and that the startling number of false confessions to crimes shows we can easily be mistaken about our own agency. Freud and others since have dissected the unconscious and unintegrated goals that exist beneath the surface of everyday action. The model of the mind that emerges from these thinkers is that we are at our base bundles of autonomous and somewhat anarchic behaviors, tied together by higher-level functions that work on a kind of narrative basis – we hold ourselves together by telling stories about our actions, before and after the fact. But our tools for doing this are highly imperfect and limited. We are so conditioned to see ourselves as a unitary agent that the various malfunctions of our agency can be very troubling.

Locating Agency in Unusual Places

I’ve found it to be a good general-purpose cognitive tool to try to see the world with agency located in unconventional places. Normally, we like to imagine ourselves as the chief agents in our lives – making choices, taking actions, pursuing our own interests that we have identified for ourselves. There is nothing wrong with this, of course. It’s no doubt much more healthy to think in that way than the inverse – to view yourself, for example, as nothing but a puppet of external forces. But it is not so good to be trapped in a single fictional model of the universe. To understand large systems we need to go beyond the everyday model of agency and think in new ways.

To refactor agency is to break up stale ideas about who causes things to happen and why. That book wants you to read it. The food in the fridge wants to be eaten, the mess in the sink wants you to clean it up. Your computer wants you to use it, to invest yourself further into the particular corner of the technosphere (Apple, Microsoft, web, whatever) that it embodies. Your car wants you to drive it, a million events in the city call to you to participate in them. The ocean tears at the cliffs, the cliffs hold back the ocean. Shops want your money, your money wants you to spend it. Blogs like this one cry out for your attention. If we learn to see the agency in other things we may get a more realistic and thus more useful portrait of our own semi-fictional agency.

There is history the way Tolstoy imagined it, as a great, slow-moving weather system in which even tsars and generals are just leaves before the storm. And there is history the way Hollywood imagines it, as a single story line in which the right move by the tsar or the wrong move by the general changes everything. Most of us, deep down, are probably Hollywood people. …Since we are agents, we have an interest in the efficacy of agency. – Louis Menand

Our ideas about our personal agency are so entrenched that it can be quite difficult to go beyond them. There are some intellectual disciplines that do this: macroeconomics or some forms of sociology or systems thinking or Tolstoyan overviews of history. There are also some personal disciplines that seem to head in this direction, such as Buddhist meditation3. Refactoring agency does not at present rise to the level of an intellectual, religious, or personal discipline – in its current form, it is merely a grab-bag of tricks for getting beyond normal patterns of thought; for learning how to ignore, subvert, or replace our usual stories about agencies with alternatives.

One rather immediate practical application of refactoring agency is that it can provide a better relationship with your distractors. All of us are fighting distractions – web sites, noises, snacking, minor tasks, watching episodes of Bad Lipreading – anything that momentarily seems more attractive than the task we are supposed to be working on. It is slightly paradoxical, but I have found that endowing these distractors with agency helps me to politely but firmly dismiss their attempts to grab my attention. Maybe it’s not so paradoxical – if resisting distractors requires willpower, it is not so fanciful to think that it is easier to resist an agent than an inanimate attractor, if only because we have lots of practice and techniques for opposing other agents.

Patterns of Refactored Agency

This section catalogs a number of refactorings or patterns4 of agency. Each pattern describes a method in which agency is transformed or viewed from a non-ordinary perspective. This listing (not yet qualifying as a taxonomy) is tentative and almost certainly incomplete. Almost all of these patterns are well-known and in some cases have been the subject of study for millennia. But as far as I know the attempt to collect all these different modes of thought together is somewhat novel. I’ve tried to make a nod towards possible pragmatic justifications for each pattern, and also indicate some possible pathologies that might arise from it.

1. Splitting

Splitting agency means taking an entity that is normally thought of as a single agent and viewing it instead as a composite of multiple agents. Minsky’s Society of Mind theory is the paradigmatic source of this refactoring for me, but it has acknowledged roots in other thinkers, notably Freud and Niko Tinbergen, an ethologist who developed a theory of drive centers in animal behavior. Given the cultural domination of Freud in the last century, it is not surprising that this type of refactoring is quite common. Everyone talks about how they have conflicting drives, or how a part of themselves wants something different from the whole. The human tendency towards akrasia (acting against one’s own stated interests) goes back to classical times, as does its attribution to internal agents with their own interests:

For the good that I would I do not: but the evil which I would not, that I do / Now if I do that I would not, it is no more I that do it, but sin that dwelleth in me… / But I see another law in my members, warring against the law of my mind, and bringing me into captivity to the law of sin which is in my members. (Romans 7:19-23)

More recently, George Ainslie’s Breakdown of Will provides an intriguing economic model of the relationships between mental subselves, particularly as it relates to akrasia and time-inconsistent preferences.
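
For the programmers in the audience, here is a minimal toy sketch of splitting in Python. To be clear, this is my own illustrative caricature, not Minsky’s or Ainslie’s actual model: a person as a bundle of competing drive centers with a crude winner-take-all arbitration, in which akrasia is simply a short-term drive outbidding a long-term one.

```python
from dataclasses import dataclass

@dataclass
class Subagent:
    name: str
    urgency: float  # how loudly this drive is currently clamoring for control

    def act(self) -> str:
        return f"{self.name} takes over and acts"

def arbitrate(subagents):
    """Crude winner-take-all arbitration: the most urgent drive wins."""
    return max(subagents, key=lambda a: a.urgency)

person = [
    Subagent("finish-the-report", urgency=0.4),  # stated long-term interest
    Subagent("check-the-web", urgency=0.7),      # momentary distractor
    Subagent("get-a-snack", urgency=0.2),
]

# The "self" is just whichever subagent won this round.
print(arbitrate(person).act())
```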

Pragmatics: We all talk this way anyway, we might as well get good at it. Everyone should become adept at identifying, naming and tracking different subselves. Giving them names and characters, and being able to converse with them, has been suggested as a therapeutic technique.

Pathologies: the pathologies of splitting are also an entrenched part of popular culture. Splitting taken to the extreme becomes Multiple Personality Disorder, but we all have experience with our own or other people’s un-integrated subselves. It may be that encouraging this style of self-modelling could lead to even more disintegration.

2. Clumping

Clumping or group agency means thinking about the agency of collections of people (so it is the dual of splitting). This too is a pretty common trope. Corporations, states, nations, and other human groupings are often treated as if they have agency of their own. Even the law treats corporations as persons.

Group agency may be so common as to not be worth noting, but after thinking about it for a while, it can seem very strange. For example, a recent radio report mentioned how “Pakistan is becoming our enemy” (notable as news because Pakistan is ostensibly a current ally). It is fairly strange to think that an entire country can be friends or enemies with another one, and the fact that such usage is common does not detract from the fact that we are projecting the qualities of an individual human agent onto large collections of them. Pakistan has some 180 million people and some undefinable but large number of social and political subgroups. Each person and subgroup presumably has its own feelings about the United States; for Pakistan to change its mind probably means that one faction is starting to dominate or outnumber another one. The fiction that an entire country has an opinion has a fascinating (and in the case of war, horror-inducing) way of becoming real – the more we think that way, the more realistic a description of the world it becomes.

Pragmatics: We are so used to talking of group agency that the most pragmatic thing to do may be to undo it – the next time you think that Pakistan or Microsoft has an opinion, force yourself to evaluate the component factions and individuals of the collective: whether they are actually aligned or pulling in different directions, how the group agency is related to its components, and whether the goals of individuals are being served by the agency of the group.

Pathologies: The pathologies of groups and group agency are a whole study in themselves. One particularly cogent description of a group agency failure mode is the Iron Law of Institutions, which states that actors within an institution will act to preserve their own rank within the institution over the success of the institution as a whole.

A common pathological form of group agency attribution is the conspiracy theory, in which people imagine coordinated group activity where none exists.

3. Crosscutting

Crosscutting agency is related to splitting, but rather than dividing a single agent into subagents, crosscutting focuses on interests that cut orthogonally across the boundaries of individuals.
For example, Richard Dawkins used the concept of the “Selfish Gene” as a way to convey the idea that the units of evolution are genes rather than organisms, in some respects relegating the individual person to the role of a mere puppet manipulated by genetic interests that were often at odds with the individual and with each other:

Four thousand million years on, what was to be the fate of the ancient replicators? They did not die out, for they are past masters of the survival arts. But do not look for them floating loose in the sea; they gave up their freedom long ago. Now they swarm in huge colonies, safe inside gigantic lumbering robots, sealed off from the outside world, communicating with it by tortuous indirect routes, manipulating it by remote control. They are in you and in me; they created us, body and mind; and their preservation is the ultimate rationale for our existence. – Dawkins, 1976

A gene, in the sense above, is a genetic pattern that persists across generations and populations, one capable of being “selfish”, which in this context means something like “successfully perpetuating itself”. The gene is thus both bigger than an individual and also smaller, since individuals are conglomerations of the products of tens of thousands of genes, all exerting their own selfishness.

Dawkins invites us to see a world in which agents cut across the boundaries of the individual and predominate over or preempt its interests.

Another example: The Marxist theory of class consciousness is an attempt to both describe and encourage the formation of a collective agency (a clumping) that cuts across the normal hierarchical control lines of society.

Pragmatics: Crosscutting can be one of the most revolutionary ways to refactor agency, since it deliberately ignores the everyday dimensions of agency in favor of completely new and alien dynamics. Unlike clumping or splitting, the new forces revealed are not simple combinations or parts of existing agents, but entirely new agencies.
Pathologies: The promise to reveal a secret and revolutionary explanation for the way the world works can lead to crackpottery and false claims of scientific rigor (see Marxism).

4. Inversion

Inversion of agency means deliberately taking the stories you tell about action and turning them around, so that the entities that are normally the objects of action are framed as the agents, and the usual polarity of transitive verbs is reversed. Instead of you eating the potato chips, the potato chips do something to you. Applying this to yourself requires some humility. “The Music Played the Band” is a lyric from a Grateful Dead song that makes explicit a fairly common report of musicians and other artists: their normal selves disappearing into their work, which was apparently speaking through them.
Although I am the author of this post, it now and then seems like it is using me, driving me to write it. “Driving” is a poor word, but ordinary grammar is an obstacle to this form of refactoring, and must be challenged with invention:

The post is making me write it.
The post is gnitirw me.
The post is ɓuıʇıɹʍ me.

Of course those inventions are unreadable and unpronounceable, but that’s what happens when you try to break through the boundaries of ordinary grammar.

Pragmatics: A little humility is generally good for a person. Also, since it is actually part of ground truth that we are pushed around by the world, this is a good way of becoming aware of some of the details about how that works. It may even enable forms of resistance, since the first step in resisting a force is becoming aware of its existence.

Pathologies: Seeing yourself as a passive victim of external action can lead to powerlessness, helplessness and depression.

5. Pervasiveness

Pervasive agency means various tropes and techniques for imagining that agency is diffused throughout the entire universe or some large subsystem of it. Such theories date back to classical times and are referred to as hylozoism, a sort of near-mystical vision of the universe as shot through with desire, practically bursting at the seams with the energies of self-creation.
Pervasive agency can be imagined in a unitary sort of way (Life wants this, Technology wants that) or in a more anarchic spirit that acknowledges that every local living thing or bit of technology might be an independent agent pursuing its own individual desires. This distinction can be seen in two theorists of technology who have a (loosely) hylozoic approach. Kevin Kelly, with a more unitarian vision, entitled his book What Technology Wants, while Bruno Latour, a sociologist who often writes about the agency of technology and other non-human agents, writes of the “Parliament of Things”, envisioning a world where nonhumans give voice to their desires but those voices and desires are a multitude rather than a unity.
Some other interesting perspectives on pervasive agency include Christopher Alexander’s Nature of Order (which is more about a pervasive aesthetic of life than agency per se, but since Alexander’s work formed the basis of the software pattern movement and thus is in the background of refactoring and patterns, I thought it deserved a mention here).

When a place is lifeless or unreal, there is almost always a mastermind behind it. It is so filled with the will of the maker that there is no room for its own nature. – Christopher Alexander

More recently a variety of philosophers have taken up the cause of non-human agency, such as Jane Bennett in her book Vibrant Matter, and the movement known as object-oriented ontology. I’m not philosophically competent enough to evaluate these theories, but they seem to take philosophical concepts usually reserved for human subjects and apply them to material objects, and thus fall well within our refactoring umbrella.

Pragmatics: Mystics and drug users will often report visions of pervasive agency – of being aware of a kind of living energy that pervades the cosmos. This is “pragmatic” in the sense that such visions often bring joy and a sense of oneness and inner peace. Of course such visions may well be harmful delusions, but people who have them seem to value them.

Pathologies: We are afraid of pervasive agency when it appears to be inhuman or disconnected from human values. Science fiction has so often imagined technology as a unified hostile agency (The Matrix, Skynet) that escapes human control and becomes destructive that it is a hoary cliche.

6. Elimination

Elimination of agency is sort of a dual to the pervasive stance. While the latter is concerned with the universe as a living thing, filled with a mysterious vital quality, eliminative materialism wants to banish all that nonsense in favor of a strict materialistic, mechanistic, physical picture. We won’t get sucked into the philosophical debate, but will note that banishing agency-related concepts may be necessary to let a more physics-like model emerge. We can see this kind of practice, for instance, in macroeconomics, where the agency of individuals is not very important but the resultant collective behaviors may be described by physics-like models.

The Buddhist idea of “emptiness” may also be a form of eliminative refactoring3. Buddhist meditation appears to be a technology for training the mind to loosen its grasp on certain fixed concepts, and certainly agency seems to be the kind of thing (if not the exact sort of thing) that makes up samsara:

If you look at your I right now, you’ll see that it appears to be permanent, whereas you know that in reality it is impermanent in nature. Other views hold, for example, that while the I is dependent upon parts, there is the appearance and the belief that it exists alone, not dependent upon parts, or that while the I is dependent upon causes and conditions, there is the appearance and the belief that it exists with its own freedom, without depending on causes and conditions. These gross hallucinations are described and posited as the object of refutation by the first Buddhist school, the Vaibashika.

Pragmatics: the mechanical view of the universe may seem cold, but its subversion of moral righteousness and anger has some utility. B. F. Skinner tried to elucidate the value of a purely causal view of human nature; he was not very convincing but his arguments are worth a look.

Pathologies: Eliminative materialism is a manifestly silly philosophy, and by denying the reality of concepts that are quite obviously real (minds, selves, thoughts) it makes itself too detached from human reality to be useful. Furthermore, denial of free will means denial of responsibility and threatens the system of morality and justice, as when evil acts are blamed on causal factors such as brain chemistry.

7. Acephalous

A distant relative of eliminativism, an acephalous refactoring means deliberately subverting ordinary patterns of leadership, substituting either anarchy or a diffuse system of control. The Occupy Movement is the most visible recent attempt to create a leaderless structure, along with a vocabulary and technology for doing so (eg, the mic check). That is an explicit organizational effort; another, more cognitive form of acephalous refactoring is to train oneself to see that, despite the hierarchical control structures of a corporation or other organization, people really act on their own and for their own interests. A presidential election may seem to be extremely important, but the president does not run the country, CEOs don’t run their corporations, and generals don’t run their armies. It takes deliberate effort to see that.

Other types of acephalous forms of agency include the flocks and schools formed by animals and superorganisms such as ant colonies. Such collectivities seem to act as a unit while having no individual in charge.
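
For concreteness, here is a drastically simplified sketch in the spirit of Craig Reynolds’ boids model, the classic demonstration of leaderless flocking. Real boids use local neighborhoods and a separation rule; this one-dimensional toy, my own reduction with arbitrary parameters, uses the global average instead, but the point survives: every agent follows the same simple rules, nobody is in charge, and a coherent flock emerges anyway.

```python
import random

N, STEPS, GAIN = 20, 500, 0.1
pos = [random.uniform(0, 10) for _ in range(N)]  # 1-D positions, for brevity
vel = [random.uniform(-1, 1) for _ in range(N)]

def spread(xs):
    return max(xs) - min(xs)

print(f"initial spread: {spread(pos):.2f}")

for _ in range(STEPS):
    center = sum(pos) / N    # no leader, just the group average
    mean_vel = sum(vel) / N
    for i in range(N):
        vel[i] += GAIN * (center - pos[i])    # cohesion: drift toward the group
        vel[i] += GAIN * (mean_vel - vel[i])  # alignment: match the others' heading
    pos = [p + v for p, v in zip(pos, vel)]

print(f"spread after {STEPS} steps: {spread(pos):.2f}")  # the flock coheres, leaderless
```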

Pragmatics: The world is anarchic at its roots, but humans are prone to see and believe in leadership. Learning to think about leaderless collectivities is a valuable skill for seeing reality.

Pathologies: learning to distrust leaders and leadership is healthy to some extent, but taken too far can result in alienation, since most social structures do in fact rely on explicit or implicit leader figures. Leaderless groups have a tough time coordinating (see again the Occupy Movement).

8. Religion

Locating agency in supernatural beings is of course as old as humanity. Pascal Boyer theorizes that the origins of religion might be found in an evolved tendency to over-attribute agency. Since religion is a central feature of culture, Gregory Bateson’s suggestion that it has a functional role in compensating for an overly short-term outlook seems plausible:

“I suggest that one of the things man has done through the ages to correct for his short-sighted purposiveness is to imagine personified entities with various sorts of supernatural power, i.e., gods. These entities, being fictitious persons, are more or less endowed with cybernetic and circuit characteristics…. I suggest that the supernatural entities of religion are, in some sort, cybernetic models built into the larger cybernetic system in order to correct for non-cybernetic computation in a part of that system.” – Gregory Bateson

Or in other words, gods are conceptual tools used by humans to envision (and/or create) the larger-scale agency of their social groups.

Pragmatics: One particular religious move is to use god’s agency to dissolve your own, in order to achieve things beyond the reach of normal consciousness. Soli Deo gloria is how Bach inscribed his works, meaning “Glory to God alone”. Was this just false humility, or an empty gesture, or did it reflect a genuine attitude that was integral to his accomplishments? Of course giving glory to God also means giving responsibility to him, which can be a great weight off one’s shoulders.

Having been raised a hardcore materialist/skeptic, I find it very useful to appreciate religion and god-talk using an as-if framing: if there were a god, what would be my prayer? This may be cheating from an authentic religious perspective – it is anything but the whole-hearted worship religions encourage – but it is a useful cognitive trick, in effect allowing even the materialist fundamentalist to deploy the agency-related parts of their brain.

Pathologies: the pathologies of religion are too well-known to need discussion here.

9. Externalization

Externalizing agency means allowing yourself to be controlled by your own tools – eg, once you construct your to-do list or other task management structure, you to some extent free yourself of the burden of agency and allow yourself to be driven by your list, or calendar, or inbox, or issue tracker. This of course is a very common pattern, and there is a whole industry of time management techniques and software to encourage it. Externalized task management also permits the construction of shared agendas within groups, providing a nexus for group agency.
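
Structurally, a to-do list is just a priority queue that has been granted authority over its owner. A minimal sketch (tasks and priorities invented for illustration):

```python
import heapq

todo = []  # the external agent: a priority queue standing in for a to-do list

def capture(task: str, priority: int) -> None:
    """Put a task into the external system so the mind can let go of it."""
    heapq.heappush(todo, (priority, task))

capture("reply to editor", 1)
capture("file expense report", 2)
capture("water the plants", 3)

# Once tasks are captured, "what do I do next?" stops being an act of will:
while todo:
    _, task = heapq.heappop(todo)
    print(f"the list says: do {task!r}")
```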

Pragmatics: The burden of everyday agency can be overwhelming in modern lives – so many tasks, responsibilities, and decisions cry for our attention, and having external cognitive scaffolding can be a great help in structuring time.

Pathologies: Some people get so caught up in externalizing and formalizing task management that they lose sight of their larger purpose. There is even a technical term for this pathology, “addiction to busyness”. Over-scheduling of life is akin to the over-design of space noted by Alexander. And of course while one can always ignore a personal to-do list, that is harder to do when similar tools are used by a group, and can more easily become oppressive.

Conclusion

Agency is a powerful and important concept; something of a master key that is capable of unlocking a wide variety of phenomena and concerns, and perhaps providing practical traction on some intractable problems. But agency as a mere conceptual category seems to not quite capture what is important about it. Yes, sentences have agents, everyday activity is a dance of agents and agent-based cognition, but isn’t there something more going on here? Agency is not merely a conceptual category, it is a conceptual category that comes close to the essence of who we are. We are agents, our selves are prototypes for all the other agency we see in the world, and/or vice-versa.

You go through life as an agent, you interact on a daily basis with other agents. Perhaps it would do you some good to have at your fingertips the idea/tool that agency (and hence daily life) is a type of semifiction; that we are all constantly telling and enacting stories about ourselves and the things around us, but we have an enormous and little-explored freedom to change the kind of stories we tell, to break the bounds of conventional genres in search of more effective tales.

Everyday life and social structure conspire to make certain conventional fictions about ourselves seem as realistic and seamless as possible. It takes a real effort to penetrate behind this particular type of illusion, an effort which also entails some risk, since these are to some extent necessary illusions. But to the extent that our default ideas imprison us, any possibility of escape must be investigated.

Footnotes:

1 Stuart Kauffman, Reinventing the Sacred, p74

2 In fact, the grammatical subject of a sentence is not always the agent. Agent and patient are technical grammatical terms that are distinct from the subject and object. The passive voice switches the usual roles of subject/object while leaving the agent/patient distinctions intact.

3 For some reason I feel a need to apologize whenever I pretend to know something about Buddhism, although my knowledge of some of the other things I write about is equally incomplete. But: a key Buddhist idea is that the self is not a solid thing, and that thinking of the self that way is at the root of a good deal of human suffering, and that you should stop.

4 I have a lot of quarrels with the design patterns movement in software, but I have to admit that the pattern people (and Christopher Alexander, whose style they have somewhat crudely appropriated) have invented a uniquely useful way of speaking and thinking.


Comments

  1. Ric Phillips says

    Nice post. As clear and cogent as many I have read in refereed journals of philosophy.
    It’s nice to see humble blogging raised to such levels of literary and intellectual proficiency.

  2. First of all, thank you for writing this brilliant post!

    In the section on religion, you say:
    “Or in other words, gods are conceptual tools used by humans to envision (and/or create) the larger-scale agency of their social groups.”

    I agree with this wholeheartedly. It makes a lot of sense to view God as a personification of the emergent structure of society. This clumping helps us to overcome the limitations of Dunbar’s number: we don’t have the computational resources to care about each person individually, but it’s easy to care about this one really big person named God. This explains moral messages like “Whenever you hurt another person, you are really hurting God”; I tend to read this as “Whenever you hurt another person, you are really hurting the emergent structure of society.”

    (Apologies if this has already been discussed somewhere.)

    • > I tend to read this as “Whenever you hurt another person, you are really hurting the emergent structure of society.”

      And yet there are still religious wars between Judeo-Christians and Muslims, who ostensibly worship the same God. You could say the clumped agent “God” has been forked, or maybe it’s undergone a kind of asexual reproduction.

  3. very nice.

    While false consciousness may be another agency pattern, whatever demon possessed Jim Lehrer will not be forgiven for the time I just wasted on that distraction.

  4. Alexander Boland says

    I have yet to finish this post, but I wanted to put down this comment in case I forgot:

    Whether machines (including us) actually have agency is a philosophical black hole that we will try to avoid being sucked into.

    Let’s take the metaphor a step further and think about an event horizon. There is a point where if you try to look too far, you can no longer make meaningful philosophical distinctions. The best example I can think of is when we have those moments where we think “if this is all Newtonian chain reactions, then what’s the point of me doing anything?” and then remember the paradox: that even if that’s so, we still have to make a choice – that if this deterministic view of the world is true, then our very intentionality is part of it.

    Now back to reading this excellent post.

  5. Alexander Boland says

    A couple of thoughts on the random part about religion:

    On religion: what you’re saying makes sense in the literal sense of the term, but I think it’s far more nuanced. Myth and history were much more ambiguous and intertwined back in earlier times and it seemed to be a form of epistemology that was suitable. In addition there’s the fact that while we find mythology today to be a bit ridiculous, it seems to me that you can’t string together a narrative without some assumption of intentionality on behalf of something; so I suppose we can take mythology as some category of refactoring, maybe even some meta-category.

    With that said, religion also seems to have a purpose well beyond the literal: we give up the illusion of knowledge and become more skeptical in some ways. Many religions understood the perils of debt and the benefits of fasting well before the economics and healthcare institutions even considered such an idea. More to the point: people mistake the illegibility of religion for being irrational. It is an illegible set of heuristics, sentiments, and interactions. I think the same may be said for mythology.

    On the whole, I think this is an excellent taxonomy–but ultimately we ought to think of going further than taxonomies. I have a tendency to see things as tangled fractal messes, which leads me to think that mythology is a very good starting point; I see it as perhaps the fractal seed of this process of refactoring. Perhaps that and some animal behaviors.

  6. @Alexander – I completely agree that the role of religion deserves more subtlety than my brief summary. This essay was an overview and an attempt to integrate a bunch of different things, and the individual fields necessarily were slighted in the process. I wrestle with religion quite a bit on my blog, eg here.

    I like your application of illegibility to religion. There’s been a good bit of theorizing about that from the outside (eg this paper) in terms of evolutionary psychology, that religion requires and displays joint commitment to counterfactuals. Illegibility seems like a more pragmatic and situated view of the same phenomenon.

  7. Random comments, in no particular order:

    1. My favorite understanding of the free will hypothesis is that its atomic characteristic is the ability to generate true random numbers (this is also I believe the existentialist position). Human coin-toss calls are presumably the chaotic output of an otherwise deterministic, sensitive-to-initial-conditions equation that models our emergent executive functions. But I find this to still be a good heuristic in attempts to locate “fundamental” agency in a way that brackets the free will question. Within a given model of a system, fundamental agency is wherever the basic random numbers come from. This is a standard view in control theory. Often this means agency is assigned to a boundary locus (as a “disturbance” or noise input).

    2. This also gives us yet another refactoring pattern: white versus colored agency, as in noise, not race. White agency is agency that generates white noise. Colored agency can be meaningfully modeled as a system that is white noise plus some filters. This is regression in reverse: Gaussian residuals of the input/output behavior of the system represent its agency, while the actual filter transfer function is in some sense its “personality” (= biases if you want to use behavioral economist notions of agency). Any boundary you draw that does not contain an internal random number generator is not an agency. Conversely, any arbitrary boundary you draw that DOES contain a random number generator can be considered an agent, even if it isn’t a very coherent one. Possibly one way to identify a “useful” agent among all possible such random agents (i.e., all ways of partitioning a system such that each piece has an RNG) is that the resulting agent must have an elegant (in the Kolmogorov complexity sense) algorithm for its “color” part. Otherwise, if the agent is nothing more than random-in/random-out, it is not a useful agent for analysis.
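
    For concreteness, here is a minimal numpy sketch of this decomposition (my own toy illustration, with an AR(1) filter standing in for the “personality” and arbitrary coefficients): generate colored behavior from a white-noise source, then run the regression in reverse to recover the white residuals.

```python
import numpy as np

rng = np.random.default_rng(0)
T, phi = 1000, 0.8          # phi: the "personality" filter coefficient

white = rng.normal(size=T)  # the internal random number generator, i.e. the "agency"
colored = np.zeros(T)       # the agent's observable ("colored") behavior
for t in range(1, T):
    colored[t] = phi * colored[t - 1] + white[t]

# Regression in reverse: fit the filter from behavior, pull out white residuals.
phi_hat = colored[1:] @ colored[:-1] / (colored[:-1] @ colored[:-1])
residuals = colored[1:] - phi_hat * colored[:-1]

print(f"recovered personality coefficient: {phi_hat:.2f} (true value {phi})")
print(f"residual std: {residuals.std():.2f} (white input std ~1.0)")
```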

    3. I think your “cross-cutting” pattern is not a pattern so much as your miscellaneous drawer. Your other items are basic structuralist decompositions for the most part (curious, since I consider Minsky more behaviorist than structuralist). But cross-cutting is a can of worms. Some patterns I’d unpack from it include:

    a) Polarity flipping: most agencies can be factored across any of your favorite polarities (like good/bad). You can also mostly flip the polarity through reframing. So heroes can become villains. Most polarities can be understood as specific patterns of “colored” agency, so this is like taking a photograph and negativizing it.

    b) Functional decomposition: your other patterns are kinda homogeneity-preserving factorizations to some extent, where cardinality goes up and down or a sign changes, but the semantic “content” of the agency (the personality around the random number generator) does not. But this need not be the case. For example, “cop team” need not be factored into interchangeable buddies. You can factor it out as “good cop/bad cop” (which are polar opposites, but functionally similar). Or you can do true functional decomposition as in “senior cop/junior cop.”

    c) Jungian archetypes, which I’ve been studying quite carefully with Greg Rader lately, represent a bunch of systematic cross-cutting agencies (again, mostly in the deterministic scaffolding of personality around the random number generator).

    4. Not sure how this connects, but I’ve always found it interesting that the 3 founding centers of AI roughly followed the 3 founding approaches in psychology: structuralism, functionalism and behaviorism. Freud, James, Skinner. In AI, I’d map those to Stanford (McCarthy), CMU (Simon) and MIT (Minsky, and later Brooks with his subsumption).

    5. Dr. Octopus in the second Sam Raimi Spider-man is a great illustration of agency refactoring: the AI arms basically take control of the brain after that chip fuses.

    6. Speaking of that, ants and anthills are a favorite for agency refactoring. Hofstadter spent a lot of time on that, attributing agency to the anthill. Dawkins has one of the cleverest empirical studies of refactored agency there as well, where he looks for genetic evidence to determine whether the queen ant or daughter ants run the show. Turns out, the daughter ants do. The queen is a slave.

    7. Distributed AI/agent-based systems research takes a curiously unreflective approach to this problem. Phrases like “swarm intelligence” and “agent heterogeneity” and “species” are used very uncritically.

    8. Legal notions of agency are a rich vein here. Probably worth exploring.

    9. I think you need to distinguish between functional and metaphysical notions of agency. For instance, the entire genealogy of leviathan agency metaphors is interesting. Some (like Gaia) assume a sort of sentience, while others, like Spencer’s “social organism” (1850s vintage), are mostly convenient shorthand descriptions. Hobbes’ original notion is also just convenient shorthand (though there is also a use of a distinct religious agency). The much-reviled Vedic Purusa Sukta (often blamed for being the root of the Indian caste system) is either just a convenient description/justification of social stratification, or a Gaia-like notion of mystical agency attributed to the social organism.

    10. I am surprised you didn’t get into recursion and its relation to agency. A self-similar/fractal agent can presumably be refactored into a behavioral generator rule, a structural recursion rule and an RNG? Might work to describe a starfish for example. By contrast, agents whose “color” involves more heterogeneous ontogeny (like humans) cannot be recursively decomposed. But they can be functionally decomposed: hence the old Readers Digest series of articles with titles like “I am Jane’s Liver” and “I am John’s Kidney”…

    11. There are probably really odd agents that defy pattern abstractions. Like Adams’ “super intelligent shade of the color blue” or Chomsky’s “colorless green ideas sleep furiously.” Some will no doubt turn out to be no more than clever wordplay upon further probing, but I bet a bunch would turn out to be well-posed notions of agency in some bizarre way.

    12. Probably need some notion of “agent algebra” here, where you pose notions of how agents can form other agents, not as a matter of refactoring, but to actually function (swarms are a basic example: tribes come together to form super-tribes to fight a war, then go home…).

    13. Though agency is much more arbitrary than matter, is there a point to looking for agency quarks or superstrings through repeated decomposition? Are Minsky agents the right primitives? In opposition to the RNG primitive idea, I’ve often taken the behaviorist/vipassana meditation idea of aversions and attractions as being the basic building blocks of agency. Brooks’ notion of a subsumption architecture is sort of like a more involved version of this.

    Lots more to say/think about here, but I’ll stop here. This is basically a book-length topic. Pity it falls right through the cracks of many formal disciplines (AI, control theory, philosophy of mind, linguistics, psychology…).

    • Lots more to say/think about here, but I’ll stop here. This is basically a book-length topic. Pity it falls right through the cracks of many formal disciplines (AI, control theory, philosophy of mind, linguistics, psychology…).

      I’m not so sure about the mentioned disciplines. When you get to the point where your “agent algebra” is actually going to work, you have created an abstract data type, which falls into the realm of ordinary computing science, i.e. the study of algorithms and data structures. In that moment you and your readers will be enlightened; you can close your blog and we will collectively subscribe to the Haskell mailing list :)

      I guess there is some irreconcilability in both of your approaches to agency. You don’t want to lose the formal disciplines and desire a nerdy approach to agency, whereas Mike sticks too deeply to one to want to get rid of it. Of course there is nothing wrong about this left vs right brain, move-into vs escape-from dialectics that runs the show.

  8. Very engaging read.

    I find my thought revolving around the pragmatic “relativism” or “contingency” involved in one agent, in-motion (through time, at the very least), observing or acting with another agent or agency.

    The arbitrariness of the concept of agency is what I find hard to grasp. Wouldn’t the mere admission of arbitrariness fundamentally exclude any integration of a science-based approach?

    There’s a reason agency itself is the study of philosophy – it’s locked in the Rubik’s Cube of logos – itself a creation of agency, or perhaps an agency itself.

  9. Thanks to everyone for the very insightful responses.

    Venkat, let me reply to you in random order, starting with the easiest points:

    4. I think it’s wrong to identify Minsky with behaviorism — basically AI (and Chomsky and cognitive science) was a reaction to behaviorism, which was too extreme in its methodology to get anywhere. Here’s a nice short history of that era. Minsky was not exactly a structuralist but was quite influenced by Piaget (via Seymour Papert).

    That’s all way before my time, but I was actually around MIT when Brooks was developing his subsumption architecture, and I can quite confidently tell you that Minsky absolutely hated it – thought it was a disastrous research direction. I made a couple of attempts to try to get the theories and the people involved to play nice with each other, but it wasn’t happening.

    1. I didn’t quite follow that but my intuition is that randomness can’t serve as any explanation for free will. Being pushed around by random processes is just as un-agenty as being pushed around by completely deterministic processes (actually I cribbed that argument from Minsky). I’ve found Daniel Dennett’s writings on free will to make sense (and now I wish I had figured out a way to shoehorn him into the post…). His position is roughly that there is no metaphysical free will but the relationship between our selves, outside reality, and our representations produces something that feels like it.

    2. Re Randomness, I think that being random or indeterminate is not so important, it’s being sufficiently complicated that outside observers have to create agent-like models. But perhaps that is a distinction without a difference.

    3. Cross-cutting may be a bit miscellaneous (although the Selfish Gene theory seems to fit pretty nicely, and doesn’t really fit any of the other categories).

    I didn’t talk much about polarity or polarization. The latter is another obsession of mine, specifically as it arises in conflict (eg, groups build their agency by polarizing against another group – as in the rise of the European empires and nation states, whose justifications were in part to defend themselves against the others). Good/evil polarity seems to be a pretty fundamental thing, no question (see below).

    9. I think you need to distinguish between functional and metaphysical notions of agency.

    Actually, deliberately ignoring metaphysical questions about agency is part of my intellectual strategy. Let the theists and atheists engage in their tedious arguments.

    6/7. Yeah, I got my start in this area trying to model ant colonies years ago.

    8. Yes, the law has very well worked-out pragmatic theories of agency, even if they don’t always make philosophical sense (like the M’Naghten rules for the insanity defense and their elaborations).

    13. Though agency is much more arbitrary than matter, is there a point to looking for agency quarks or superstrings through repeated decomposition? Are Minsky agents the right primitives?

    In opposition to the RNG primitive idea, I’ve often taken the behaviorist/vipassana meditation idea of aversions and attractions as being the basic building blocks of agency. Brooks’ notion of a subsumption architecture is sort of like a more involved version of this.

    You keep reminding me of research directions I was poking at back in grad school and have long forgotten. Anyway, this paper was an attempt to graft a theory of emotions onto a Brooks/Minsky sort of agent theory. I’m reminded of that because emotions, although they often seem complex, basically seem to reduce to a simple positive/negative or attraction/aversion polarity.

    This is basically a book-length topic.

    No kidding!

    • 1. I didn’t quite follow that but my intuition is that randomness can’t serve as any explanation for free will. Being pushed around by random processes is just as un-agenty as being pushed around by completely deterministic processes (actually I cribbed that argument from Minsky).

      I’d like to add that existential choice, mentioned by Venkat, is sort of a radical choice in a world without an inherent meaning and direction. To be a subject of radical choice one has to have a world to reject; existential nausea makes one want to reject the world as a whole. It is this gesture that constitutes subjective solitude and free will. It is certainly not about an inherent randomness inside of the subject that makes it unpredictably flip between alternatives, but about a subject that constitutes itself through a distance to the whole world and its content, including its own physical, psychological and social manifestations. If predictability induces existential nausea, a possible countermeasure is to play dice with one’s own life. If in turn the Law of Large Numbers becomes an object of existential nausea, the subject may begin to play subtle and poetic games or decide to stick with certain principles wherever they lead.

      Existential thinking is not a realist, empirical psychology that predicts actual behavior based on preferences and social scripts using Bayesian inference or medical imagery. It is a philosophical meditation about agency and the necessity of choices which is dedicated to those who are perceptive enough to understand its unfoundedness.

    • Alexander Boland says

      I’m just going to add in that while randomness on its own might not be enough to establish a sense of “agency”, I think the concept of entropy does. It’s a total hunch, but it comes from a couple of ideas:

      1) Maxwell’s Demon suggests there’s no impermeable barrier between subject and object. When we view the universe as “mechanistic” with “meaning imposed on it”, it cannot follow to me that the universe is “meaningless.” Our observations seem to be just as much a part of that universe as the “mechanical” phenomena happening.

      2) The idea of irreversibility and strange attractors suggests to me that agents have a way of leaving a “signature” or a “trace”. Conflating this with point (1) would suggest to me that our observations leave some sort of mark of authorship that doesn’t go away.

      Also, thanks for introducing ANT. I’ve been gobbling it up because my own work on Interactive Storytelling is an attempt at conflating material-semiotic relationships with my own mathematical framework for extensional semiotics based on principles from functional programming.

      Also, have you read Phoebe Sengers? She might be of some help.

  10. Wow, this is rich stuff, and right up my alley. Thanks!
    Let me try to reframe things from a biological/neurological/armchair-psychological perspective (because that’s what I aspire to understand, eventually: My lack of rigorous study is evidenced by the sparseness of links: I heard stuff somewhere, but often do not know who pre-thought these thoughts. Thus, I am mainly unaware of any professional discussions about these topics, which undoubtedly exist).

    – I think that refactoring problems in terms of agency might be a fertile move because it effectively moves them to the realm of social relations, in which we can use our social cognition capabilities to tease the problem apart. Think of some evo-psych angle here: Humans have (as AI designers painstakingly found out) quite enormous special-purpose processing capabilities for social questions, rules, and relations; those capabilities transfer poorly (if at all) to other realms, such as logical reasoning (which I’d guess is some abstract generalization of our language processing capabilities, for example from grammar rules). You hinted at that when you wrote about dealing with disturbances as agents: essentially leveraging social skills to deal with distractions.
    Thus, moving agency around a conceptual space might be a projection of our special-purpose social cognition skills, like a mental spotlight. We understand intentions much more intuitively than mechanisms. In similar terms, reifying abstract concepts into concrete things renders them accessible to our experience with manipulating physical objects. Maybe this is the core of refactoring: moving tricky problems into conceptual spaces where we can leverage our domain-specific experience and metaphors to deal with them more effectively?

    – One interesting angle on the perception of a self, as a mostly independent agent, might be the theory of “embodied cognition”, see http://en.wikipedia.org/wiki/Embodied_cognition . According to this, neurotypical humans learn that there is sensory and motor-feedback from the “own” body, but not from others (or inanimate objects). This feedback also might form a vast part of the “subconscious” part of cognition. From an evolutionary standpoint, there is the “motor chauvinist hypothesis”, stating that a primary reason for the evolution of a brain is the coordination of movements of the body, and the processing of feedback from that body. (I’d graft the social hypothesis for humans and other highly social mammals on top of that, in order to account for our enormous prefrontal cortex). Thus, if you reframe agency in these terms, a primary reason why the idea of “agency” as “some entity (distinct from others) actively doing things (manipulating other distinct entities)” seems so natural to us is that it matches experiences we, being humans, all make from the earliest moments in our lives. Douglas Adams illustrates this beautifully in this talk, recorded shortly before his death: http://www.youtube.com/watch?v=_ZG8HBuDjgc&feature=player_detailpage#t=4025s (the rest is equally insightful, if you have 1.5h to spare, see the whole talk. Your weekend will be a better one for it :))
    Since I only have second-hand knowledge of AI development: Would the necessity of an embodiment and manipulation-as-motor-control be a useful boundary condition to simulate human-like intelligences (and, by generalization, anything we would classify as sentient)?

    – One final snippet of thought: The normal perception of agency is different, for example, in schizophrenia, which typically involves mismatched perception of the sphere of one’s own agency: smaller than normal (hearing voices, having “ideas implanted” (i.e. thoughts arising seemingly from the outside), receiving commands, i.e. outside agents overruling the perceived self-agency), or larger than normal (one’s own thoughts/actions having influence on others, perceiving agency in random events and objects). One interesting evo-psych angle to this is that schizophrenia is often linked to larger-than-average creativity, and that schizophrenic people often can give explanations for phenomena others struggle to understand, or do not see at all (true today for cult leaders as well as conspiracy theorists; regardless of the truth of their claims, in the absence of other explanations, closed narratives might be true enough). With a little stretch of imagination, this might have played a part in the genesis of religion: every group will have had some high-functioning, hypercreative schizophrenic from time to time, who might be a very efficient narrative-generator. Agency perception might be an important part of how we define “normal” and “different from normal”; insane or visionary might be simply a matter of perspective.

  11. @wirrbeltier: lots of nice points, thanks.

    I think that refactoring problems in terms of agency might be a fertile move because it effectively moves them to the realm of social relations, in which we can use our social cognition capabilities to tease the problem apart.

    Yes. But also the other way around: some intractable problems that are usually seen in agent terms may be solvable by removing or changing the agents.

    Here’s the best example I know of the phenomenon that you’re talking about: http://en.wikipedia.org/wiki/Wason_selection_task#Policing_social_rules

    Embodied cognition is sort of the subtext of the talk about Brooks above. Not to get too far into that here, but in the 80s there was a movement within AI to take more cognizance of the situated and embodied nature of thought, with mixed results (I think Phil Agre’s The Dynamic Structure of Everyday Life might be of interest to Ribbonfarmers).

    The normal perception of agency is different, for example, in schizophrenia

    Yes yes yes. I think a lot of psychiatric disorders might be usefully recast in terms of problems of agency (autism spectrum disorders, eg, are already pretty widely recognized as involving a malfunctioning “theory of mind module”). There was a volume put out by MIT Press called “Disorders of Volition” that covers some of this territory.

  12. @mtraven: Thanks for your answer, very intriguing.
    I think that not only could evolutionary psychology provide us with plausible stories as to how we came to possess the many peculiar features we have, but modern neuroscience (on rather firmer scientific footing) could give us theories of the neuronal circuits underlying conscious thought. Still, fMRI is rather coarse, so for the time being it’ll probably be quite hard to see anything at a level smaller than groups of several thousand neurons in action, at least in humans. The body of research is not quite ready for prime time yet, but it might be within the next few years: see, for example, the Wikipedia article about resting-state fMRI: http://en.wikipedia.org/wiki/Resting_state_fMRI (watching brain activity while someone is not engaged in tasks – it turned out the brain uses most of its energy when not doing any specific task), and the neuronal networks that have been found in that data, for example the so-called default mode network (see: http://en.wikipedia.org/wiki/Default-mode_network ).
    If the current research in that direction is to be believed, we may have found (by accident, more or less) the closest thing to the circuits that somehow enable consciousness. Metaphor-wise, I’d guess the relation is similar to the one between land and the ecosystem on top of it: one arises out of the other, and both influence each other constantly, in complex, nonlinear ways. But probably that’s more because it’s close to what I know… The metaphor of meta-patterns arising out of chaos of sufficient size sounds tempting, but I understand too little of the underlying principles to dare use it confidently ;)
    If you want to get into the bleeding edge of the research, this (free, open access) report might be interesting: http://www.jneurosci.org/content/32/14/4935 The researchers placed anesthetized people in an MRI machine and scanned their brain activity as they woke up, i.e. as their anesthesia subsided. Interestingly, the researchers measured the level of regained consciousness by assessing the response to a motor-related spoken command (“open your eyes”), and checked which brain circuits came back online at each stage of regaining consciousness. So there’s the motor paradigm being used to reverse-engineer the human brain, as it were ;)
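    (As a side note, here’s a toy sketch of the kind of analysis behind such findings: resting-state networks are typically identified by correlating slow, spontaneous BOLD fluctuations between regions. The region names below are real default-mode nodes, but the signals are synthetic – only the analysis logic is the point:)

    ```python
    # Toy seed-based resting-state connectivity: regions whose spontaneous
    # activity correlates with a "seed" region get grouped into a network.
    # Signals are synthetic; this only illustrates the correlation logic.
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.arange(0, 300, 2.0)                 # 300 s scan, one sample per 2 s
    slow_wave = np.sin(2 * np.pi * 0.01 * t)   # shared ~0.01 Hz fluctuation

    regions = {
        "posterior_cingulate": slow_wave + 0.5 * rng.standard_normal(t.size),  # seed
        "medial_prefrontal":   slow_wave + 0.5 * rng.standard_normal(t.size),
        "angular_gyrus":       slow_wave + 0.5 * rng.standard_normal(t.size),
        "motor_cortex":        rng.standard_normal(t.size),   # outside the network
    }

    seed = regions["posterior_cingulate"]
    for name, signal in regions.items():
        r = np.corrcoef(seed, signal)[0, 1]
        print(f"{name:20s} r = {r:+.2f}" + ("  <- same network" if r > 0.4 else ""))
    ```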

    I wouldn’t be surprised if the bases of most circuits also showed up in different animal species, and we were faced with the tricky definitional problem of “gradations of consciousness” on a neurological basis. And, I guess, the intermingling of these principles with AI and robotics (and eventual abstraction into mathematical frameworks?) might bring up fertile new ideas. Who knows, maybe it turns out that building human-style consciousness actually hinges on reverse-engineering all the weird, evolutionarily old, low-level circuitry, and thus is uneconomic? Or maybe we’ll suddenly recognize that similar patterns have already been independently reproduced in complex systems? Will we have to grant consciousness-like properties to ecosystems, markets, networks? Cities?

    In unrelated news: I just found a nice example of refactored agency at work in this TED talk (what Venkat once called “glossy, manufactured … Insight Porn”). In it, gardener and author Michael Pollan refactors agency away from himself, the one planting the plants, onto the plants themselves, and, within 15 minutes, manages to illustrate how domestication is actually the co-evolution of two species. The sentence “maybe we are all just pawns in corn’s striving for global domination” is meme-worthy, and may be a nice example of refactoring at work. Link to the talk: http://www.ted.com/talks/michael_pollan_gives_a_plant_s_eye_view.html

    As to psychology: Maybe, in the far future, the hard nuts (e.g. Alzheimer’s, schizophrenia, autism, phobias…) can be attacked from different angles, by identifying where in the neuronal base of consciousness things go wrong, so that therapies can be tailored directly to the circuits that need tuning. This might work for psychotherapy (essentially adding a new readout of therapy success), medication (as circuits could be targeted more precisely), or even fancier stuff such as gene therapy (as far as I know, neuro-gene-therapy is on the verge of entering human trials, for epilepsy IIRC) or brain-machine interfaces (which are actively being deployed as we speak, for example in Parkinson’s and epilepsy patients).
    The fragile nature of agency perception (one of the easiest definitions of psychological illness is probably agency being perceived outside the normal range) probably places it squarely in the middle of brain function: think, for example, of a child’s mental development, where you can essentially see different functions “coming online” during the first 20 years of life.

    • If the current research in that direction is to be believed, we may have found (by accident, more or less) the closest thing to the circuits that somehow enable consciousness.

      The basic problem of “current research”, as you call it (is there no other?), is that all findings are merely accidental and one cannot even conceive of findings predicted by theory. The anti-intellectual nature of certain research directions might explain Minsky’s derisive judgment: he retrospectively called AI “braindead” precisely from the point where it went brainy, connectionist and embodied.

      • The basic problem of “current research”, as you call it (is there no other?), is that all findings are merely accidental and one cannot even conceive of findings predicted by theory.

        Just to clarify: By “current research”, I meant the current state of neuroscience in general, and human-centered functional brain imaging (i.e. psychology with fancy machines and more accurate readouts) in particular. Maybe I should mention as well that I’m getting into neuroscience myself at the moment, but on the biological side of the story (studying brain circuitry on a cell-by-cell basis). From the biological side of the divide, it seems quite normal that theories start out as rather general frameworks, because the matter you’re studying is messy, tangled and thus quite illegible at first. It is a science driven much more strongly bottom-up than, say, physics or informatics (as far as I understand them). That might change with the large-scale arrival of data-crunching tools in biology (made tricky by the fact that many biologists, myself sadly included, pretty much lack mathematical literacy, not to speak of programming literacy), which is happening as we speak, and probably will have trickled down to the teaching within a few years. But as of now, the best tools for expressing ideas and hypotheses have been more or less elaborate stories, backed by elaborate experiments and rather limited statistics. From that point of view, not having everything predetermined by theory beforehand is not a bug but a feature – if legibility and thus predictability is low, it is a better strategy to experiment and see what happens. At the moment, there is a huge push to quantify as much as possible where qualitative statements were sufficient in the past. The field is undergoing quite dramatic changes, as it is essentially being digitized and thus sped up tremendously.
        But back to fMRI studies: While there is a lot of quite reasonable criticism of this technique (coarseness, indirectness, hype,…), it has its valid niche of application – it is easily usable in humans. If you see it from the perspective that fMRI is essentially used to repeat the classic psychological experiments of the past century, only with an actual readout of the physiology involved, it is much more precise than the body of research that preceded it (which was mainly generated using questionnaires, and thus with a different, more holistic and narrative approach). The real problem of this field is, in my humble opinion, the hype generated around many studies, since colorful images of brain scans sell well, and the press releases often lend themselves to over-interpretation. But in the grander scheme of things, the more robust findings can well provide scaffolding for other, more precise but slower techniques: this data can be used to ask the right questions down the line.
        The quip about “by accident” referred to the fact that a popular set of theories, that of resting-state networks, was first found when researchers looked at data from their “control trials”, i.e. when the subjects were being scanned but not doing anything in particular. The idea of that “discovery by accident” actually comes from an (unfortunately paywalled) article with the title “The serendipitous discovery of the brain’s default network” ( http://www.sciencedirect.com/science/article/pii/S1053811911011992 , for reference). Since then, the hypothesis has been widely tested (and confirmed) in independent experiments, and in my personal opinion, this might prove to be one of the most important findings of the functional imaging field to date.

        So, I think that biological science in general has to adapt to the illegibility of its object of study: it can only be made legible in small ways, one experiment at a time. (Actually, I picked up the idea of (il)legibility from this blog, which clarified a lot of my thinking about why biology is so different from the other natural sciences in its approach.) In biology, first principles can do little more than inform general directions of research; the rest has to be found out by actually testing many possibilities, because the space of possible right answers is simply too large (at the moment) to pre-consider. Literally: learning biology consists of learning tons of little factoids, whereas reasoning from principles is much more important elsewhere. This probably clashes with the way of doing research in other sciences, where first principles are much more important – if you try to interface, say, physics and biology, you can get either a breakdown of communications (if done wrong) or fruitful “ideas having sex” (if done right).

        I can well imagine that AI more or less *had* to become more “brainy” and bio-inspired, because that might be the best way to handle the problem of emergent complexity. If you think of biological evolution, you’d assume that it solved the problems of motor control and sensory processing in the most efficient way possible given the specific circumstances (in which case our own intelligence might have been more of a byproduct of social-biological co-evolution), and thus emulating evolution seems like a promising avenue for AI research to me – see the toy sketch below.
        I understand this has happened in the 80s already, and you talk about it in the past tense, so did it live up to at least some of the promises?
        (I think that generally, AI has a very strong hype cycle, because we think it *should* be easy. After all, we effortlessly *feel* intelligent and conscious, so we probably perpetually underestimate the hardness of the problem. Thus, even though AI undoubtedly has made tremendous leaps, it is nearly impossible to live up to the implicit hype of building human-like intelligences. Or am I being overly naive here?)
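
        (Here is a minimal sketch of the “emulating evolution” loop mentioned above – just the textbook cycle of selection, crossover and mutation, with all parameters chosen arbitrarily for illustration:)

        ```python
        # Toy genetic algorithm: selection, crossover and mutation evolve
        # random bit strings toward a target "environment". All numbers
        # here are arbitrary choices for the sketch.
        import random

        random.seed(1)
        TARGET = [1] * 20                  # the environment rewards all-ones
        POP, GENS, MUT = 30, 60, 0.02

        def fitness(genome):
            return sum(g == t for g, t in zip(genome, TARGET))

        pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP)]
        for gen in range(GENS):
            pop.sort(key=fitness, reverse=True)
            if fitness(pop[0]) == len(TARGET):
                break                      # a perfectly adapted genome appeared
            parents = pop[: POP // 2]      # truncation selection
            children = []
            while len(children) < POP - len(parents):
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, len(TARGET))      # one-point crossover
                children.append([1 - g if random.random() < MUT else g
                                 for g in a[:cut] + b[cut:]])  # point mutation
            pop = parents + children

        print(f"generation {gen}: best fitness {fitness(pop[0])}")
        ```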

  13. Surrogacy

    A sense in which “agent” is often used is when one agent (the representative) represents the interests of another (the constituent), in a space in which, for whatever reason, the constituent doesn’t or can’t act. Examples:

    Elected officials (from which I took the representative/constituent terminology) act, at least in theory, as representatives of the zeitgeist of the electorate.
    Reason for representation: the electorate is too large for cohesive action, or would tend toward mob rule.

    Lawyers act for a client in a legal space for which the client doesn’t have adequate training or aptitude.

    Officers or managers of a corporation act as representatives of the corporation entity itself, rather than the zeitgeist of employees which might tend toward different interests.

    Often the constituent is a set of rules (in the case of judges, or anyone who takes an oath to “uphold and defend” the Constitution of the U.S.) that itself acts as a representative of some larger body (“We the people…”).

    The representative’s actions are interpreted by outsiders as the actions of the constituent. The degree to which the representative is actually acting in the interests or according to the wishes of the constituent probably depends on where the two agents fall on the MacLeod hierarchy.
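
    In software terms, this surrogacy relation looks a lot like the proxy/delegation pattern – a sketch with invented names, not anything from the post itself:

    ```python
    # Surrogacy as delegation: the Representative is the only object the
    # "space" ever sees; outsiders read its actions as the Constituent's.
    from dataclasses import dataclass, field

    @dataclass
    class Constituent:
        name: str
        interests: set = field(default_factory=set)

    class Representative:
        def __init__(self, constituent):
            self.constituent = constituent

        def act(self, issue):
            # How faithfully this tracks the constituent's actual interests
            # is exactly the MacLeod-hierarchy question above.
            stance = "for" if issue in self.constituent.interests else "against"
            return f"{self.constituent.name} (via representative) votes {stance} '{issue}'"

    electorate = Constituent("the electorate", {"lower taxes"})
    official = Representative(electorate)
    print(official.act("lower taxes"))    # read by outsiders as the electorate acting
    print(official.act("higher taxes"))
    ```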