The Well-Being Machine

 

Social policy is a machine for turning force into utils.

 

A Rube Goldberg machine. Nancy Cartwright analogizes the “nomological machine” to this type of contraption.

 

This is an extreme reduction of a view that is widely held (if unconsciously), but, I will argue, wrong. As my friend David Chapman says, “Philosophy has no good new thoughts to teach you. However, you can learn why the thoughts you didn’t know you had are wrong.” The subjects here are two of the messiest folk concepts in existence, and they are the most central to whatever it is that we care about: causality and well-being.

Beyond “Correlation is not Causation”

“Correlation is not causation” is the mantra version of an argument by Hume, that even though we perceive regularities that appear to us as cause and effect, we can never perceive causation directly. This makes the most sense in contexts like public policy and science, in which the observation that two factors occur at the same time (or sequentially) cannot be taken as a guarantee that one factor causes the second.
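The familiar failure mode can be sketched in a few lines of code. In this toy simulation (the variables and effect sizes are my invention, not from any study), a hidden factor Z drives both A and B; the two correlate strongly even though neither causes the other:

```python
import random

random.seed(0)

# Toy model: a hidden confounder Z drives both A and B.
# Neither A nor B causes the other, yet they correlate strongly.
n = 10_000
z = [random.gauss(0, 1) for _ in range(n)]
a = [zi + random.gauss(0, 0.5) for zi in z]  # A = Z + noise
b = [zi + random.gauss(0, 0.5) for zi in z]  # B = Z + noise

def corr(x, y):
    """Pearson correlation coefficient."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    vx = sum((xi - mx) ** 2 for xi in x)
    vy = sum((yi - my) ** 2 for yi in y)
    return cov / (vx * vy) ** 0.5

print(f"corr(A, B) = {corr(a, b):.2f}")  # strong correlation, zero causation
```

An observer who sees only A and B has no way to distinguish this world from one in which A really does cause B; that is the force of the mantra.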

The Humean view goes beyond such cases, though: even the observational evidence of our own arm throwing an object is not actually a perception of causation. There is always some underlying aspect of the model that we cannot perceive: the exact way the brain works, or the underlying subatomic structure of the universe, or the like.

Consider a virtuoso playing a musical instrument, or singing, or an elite athlete performing, or a dancer dancing. Here, despite years of practice of repeated apparently-causal actions and observations, there can be no actual observation of causation in Hume’s strict sense. The musician only apparently learns to make music, based on past observations. (It’s not even clear whether past observations can have an observable causal effect in this system – I think they can’t, by the nature of what causality is taken to mean, an inherently analytical and not directly observable property.) But here, compared to the social policy or science situation, this view seems much sillier and more untenable. Despite moment-to-moment contact with all the seemingly relevant parts of the apparatus – instrument, hands, fingers, lips, breath, ears, even the audience’s ears – and constant feedback between action and result, the musician does not really play, but rather observes that it seems as if he plays. The Humean observation is something like this: there will always be features of the machine that you cannot perceive – your own neurons, or molecules, or subatomic particles. But despite there being some necessarily unobservable analytical property of the system, it seems much sillier to say “people can’t really sing” than to say “correlation is not causation” in the context of science.

The philosopher Nancy Cartwright argues that the concept of causality is too abstract to be really useful – we do things like “scrape, burn, push, eat” (Cartwright, “Can Structural Equations Explain How Mechanisms Explain?”, 2017), sing, throw, seduce, persuade, kick, carry, but there is no similar “causation” thing underlying these “thick,” fairly well-understood relations; we just “pick out” (ibid.) certain relations as causal. The causality implied by “seduce” and “persuade” is very different from the causality of billiard balls knocking into one another and transferring energy. Perhaps they have nothing much in common.

The Nomological Machine

Rather than “folk concept,” Nancy Cartwright uses the term “Ballung concept” to describe causality. Ballung concepts, she says, are

characterized by family resemblance between individuals rather than by a definite property. Ballung is a German word for a concentrated cluster; the term Ballungsgebiet (Ballung region) is used to describe sprawling congested urban complexes like the area from New York to Washington on the East Coast of the US.

(Nancy Cartwright and Rosa Runhardt, Measurement, in Philosophy of Science: A New Introduction (2014).)

I cannot tell if the connotation of another English word beginning with “cluster” is intended, but it seems warranted. Nonetheless, Ballung concepts can be systematized in different ways for different purposes. There is no one correct way to formalize concepts. For instance, Cartwright argues (“Single-case causes: What is evidence and why,” 2016) that single-case causes demand different causal modeling and evidence than, for instance, randomized controlled trials. Single-case causes can be valid as evidence – she gives the example of accidentally ingesting poison, then taking an emetic and vomiting, hence being saved – but the formal specification of causation is different from what would be done with many observations:

What matters is that the concept we develop be able to do the job we require of it AND that we stick with the sense characterised throughout. It is no good gathering positive evidence using a method that is good for evidencing singular claims as made precise in one way and then drawing inferences that are licensed by some other sense. That is to do science by pun.

(Single-case causes: What is evidence and why, 2016.)

“Science by pun” is Cartwright’s term for equivocation on the exact sense of causation. She disambiguates several meanings that scientists intend with the word “mechanism;” I wish to focus on one particular distinction here. It is the distinction between inputs and outputs, on the one hand, and the guts of the machine on the other.

The first sense of “mechanism” is a regularity in response to perturbation, an expected output based on a given input in a system, perhaps modeled by equations. This is the facet of causation explored by the randomized controlled trial. Consider the claim that exercise reduces the symptoms of depression (made appropriately precise to your liking). This is an example of a “mechanism” in this sense of inputs and outputs, or phenomenological observations after interference.
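This first sense of mechanism can be caricatured in code. In the hypothetical simulation below (all the numbers, and the `hidden_response` function, are invented for illustration), each person’s response to exercise is idiosyncratic and unobserved; randomization still recovers the average input-output regularity, while saying nothing about the machine’s guts:

```python
import random

random.seed(1)

# Hypothetical population: each person has a hidden, idiosyncratic
# response to exercise. The trialist never observes this function.
def hidden_response(person_id):
    rng = random.Random(person_id)       # deterministic per person
    return rng.gauss(2.0, 1.5)           # individual symptom reduction

n = 5_000
people = list(range(n))
random.shuffle(people)                   # randomize assignment
treated, control = people[: n // 2], people[n // 2 :]

def symptom_score(pid, exercised):
    base = random.gauss(10, 2)           # baseline depression score
    return base - (hidden_response(pid) if exercised else 0)

treated_scores = [symptom_score(p, True) for p in treated]
control_scores = [symptom_score(p, False) for p in control]

avg_effect = (sum(control_scores) / len(control_scores)
              - sum(treated_scores) / len(treated_scores))
print(f"estimated average effect: {avg_effect:.2f}")  # near the true mean of 2.0
```

The trialist learns that exercise “works” on average; everything inside `hidden_response` – the guts of the machine – remains a black box.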

The second sense of “mechanism” is the underlying structure that gives rise to the regularity. Cartwright calls this second sense the “nomological machine.” It is the guts of the engine, the reality on the ground of what is going on. Questions about what happens in the body and mind, at both the subjective and the neurological or chemical levels, in different people when they exercise, are on this level of causation. How does the machine work, and what are its conditions of functioning? This sense of mechanism might seek to explain when and how exercise reduces the symptoms of depression, rather than just seeking evidence as to whether it does in general.

Here is Cartwright’s definition of nomological machine (from “Where do laws come from?” 1997):

What is a nomological machine? It is a fixed (enough) arrangement of components, or factors, with stable (enough) capacities that in the right sort of stable (enough) environment will, with repeated operation, give rise to the kind of regular behaviour that we describe in our scientific laws.

The “easy” cases of causation mentioned earlier – dancing, throwing, singing – are cases in which there is constant contact with all relevant aspects of the nomological machine. In the case of dancing, the guts of the machine are quite literally your guts (among other things) and can be felt directly, still and in motion. The musical instrument’s “nomological machine” (including the body, breath, fingers, lips, hearing, and memory) is investigated through practice, with tight feedback between input and output observation.

Much harder cases of causation occur where the systems under investigation are so diffuse, abstract, or difficult to observe that we can’t hope to get our usual embodied grasp on thick forms of causation (whether mechanical or social – that is, whether throwing or seducing). This is the situation faced by social science. Consider first an exception: John Snow’s famous cholera outbreak map. This level of analysis actually does reveal the most important factor in the underlying nomological machine: an infected well. The infected well, we might say, has a stable (enough) capacity to produce cholera symptoms in people nearby who drink from it; the observation of “outputs” (cases) placed on a map reveals the underlying structure that gives rise to the regularity. This is evidence of a mechanism in the second sense. Note that it’s not a randomized controlled trial!

Andrew Abbott’s “The Causal Devolution” (1998) describes the encounter of social science with causation. He traces the history of ideas of causality in social science beginning with Durkheim, and concludes that sociology has largely accepted the “inputs and outputs” account of causation, though in an even stronger form in which correlations amount not only to causation but to causation in a “forcing” or “determining” sense, while ignoring the nomological machine. He says:

One central reason for sociology’s disappearance from the public mind has been our contempt for description. The public wants description, but we have despised it. Focusing on causality alone, we refuse to publish articles of pure description, even if that description be quantitatively sophisticated and substantively important. Commercial firms pay millions for such work. Our society is, in fact, “described” in surpassing detail by proprietary market research. But we who like to imagine ourselves responsible for the public’s knowledge of society despise description and indeed despise the methods that are generally used for quantitative description. Our social indicators are simply disaggregated variables, ready for input to causal analysis. The notions of complex combinatoric description, of typologies based on multiple variables – these fill the average sociologist with disgust.

Abbott argues that “causalists” (those taking causation as the fundamental project of sociology, and associational input-output statistical analysis as its methodology) have (at least, in his writing 20 years ago) won all the battles but lost the war. The war, he says, is to provide an interesting, compelling, comprehensive account of social life, and sociology has failed on all counts.

Note, however, that the undeniably interesting foundational, descriptive studies in the discipline of psychology have not held up to later scrutiny. One walks a razor’s edge in order to be interesting (i.e., violate expectations) and epistemologically sound.

What’s at stake, however, is not merely interesting explanations; real policy decisions often depend on impoverished, if not outright mistaken, concepts of causality. Cartwright notes (“Can Structural Equations Explain How Mechanisms Explain?”, 2017) that many agencies publish lists of scientific research on policy questions under headings like “What Works.” Typically, the evidence is composed of controlled trials in one or two sites, focusing on surface regularities (inputs and outputs) rather than underlying structure. The placement on “What Works” lists

…seems to suggest that a cause known to have produced a desired outcome in a handful of settings will work in new places unless something special goes wrong, or that the assumption that it will work in a new place is the default assumption. But when, as is typical, the generic causal relations under consideration are surface relations, whether a proposed cause will work in a new place depends on whether the new location has the right underlying structure to support the same causal relations. But no-one says that finding a policy in a ‘What works’ list gives you negligible reason to use it if you have no information about the underlying mechanisms [nomological machines] needed to produce the causal relations you’d be relying on. The two-tier picture keeps this firmly in view.

(Can Structural Equations Explain How Mechanisms Explain? 2017)

The problem remains even if the underlying studies are valid for what they are, not falling prey to any of the many pitfalls of bad science aside from problematic conceptions of causality.

Good Outputs

The idea of “throwing money at a problem” can be explained with reference to nomological machines. The thrower of money has an input-output model: money in, socially desirable result out. However, the phrase implies that the money thrower has no particular model in mind as to how money might turn into good results. Without such a model, there’s no reason to suspect that an input of money will cause good outputs. Consider calls to increase spending on, say, mental health treatment and addiction treatment. This assumes an input-output framework, and tacitly assumes many things about the nomological machine underlying it, at all levels – for instance, that mental health and addiction constructs are causally responsible for bad outcomes, that treatment programs are effective means to change the bad outcomes, that funding is a limiting factor in providing effective treatment, that the increase in funding will remedy this limitation (rather than cause perverse market distortions), etc. We have little reason to suspect that interventions in the form of “increased funding,” without a well-founded model of the nomological machine, will improve outcomes; consider the situation with mental health spending and suicide rates, and the famous $100 million Newark school district debacle.

The most important “output” is a Ballung concept I’ve called well-being. It’s extremely hard to conceptualize, much less measure. In part, this is for the same reason as with causality: there are many ways in which we experience something like well-being (dancing, relaxing, lost in the flow of work) and their apparent analytic similarity may be some kind of illusion, unimportant, or at least causally uninformative. The most common method of measuring this “output” is by survey, asking people to introspect on their well-being by asking them questions about how they feel and how their life is going. The validity of this measure depends on how good people are at introspecting on their own subjective well-being. (I have argued that positive well-being is often experienced as absence of inquiry into one’s own affective state; this would make self-report a strange criterion, if we had anything better.) Sometimes the complexity of the concept is increased, allowing for both positive well-being (feeling pleasure or feeling good) and negative well-being (pain and suffering) to independently contribute to ideas of well-being. (This resolves the apparent paradox that suicide rates and the overall happiness of a society appear to be positively, not negatively, correlated, according to some studies which I do not cite because I am not concerned whether the conclusion is true; it is merely a possible paradox that is resolved by a richer, more complex concept of well-being.)

Sometimes self-report is enhanced by factoring in relatively measurable external or objective correlates (such as disability or life expectancy) that are widely agreed to contribute to, or detract from, well-being. More commonly, since the thing we really care about (well-being, happiness, eudaimonia, thriving) is so hard to measure, we simply study other things that are easier to measure: disease constructs, suicide, homicide, drug use, unemployment, divorce.

In “In Praise of Passivity” (2012), Michael Huemer argues that policy makers are always in a position of ignorance with respect to the underlying nomological machine, poking and prodding it without understanding. We are in the position of eighteenth-century doctors, he says, engaging in something like bloodletting (the typical intervention being worse than nothing), because of our poor understanding of the underlying structure of social reality. Even when some outcomes are measured and a positive outcome is apparent, we can’t know how or why the intervention “worked” or even if it did, not to mention whether some seemingly-unrelated negative outcomes were the result of the policy.

Some popular policy is incredibly simplistic: it operates as if it can simply adjust the emergent outcomes of a nomological machine, to “set” the system with certain values. Price controls, wage controls, rent control, and efforts to make inflation “illegal” during Communist death spirals are examples of this. Since low rents, high wages, low prices, and low inflation are desirable properties within the current system, it is imagined that mandating these desirable results will have the same effect as prices, rents, and wages arrived at in a market economy. It is difficult to discern the nomological machine implied by such policies, but whatever machine is implied has no second-order effects (decreased housing supply, decreased employment of those with lowest productivity, shortages, etc.). In the excessively simplistic nomological machine implied by these policies, the values can simply be set by outside force. Drug prohibitions seem to operate by a similar “variable setting” principle: simply imagine if there were no drugs (as opposed to examining the workings of the underlying nomological machine when force is used in different circumstances to prevent different aspects of drug use). Some policy interventions propose a more sophisticated model, such as that some things are public goods that are non-excludable and subject to free-rider problems. But there are still disagreements as to when these input-output-type assertions (something like economic laws) accurately describe the ground of the nomological machine.

Compounding the problem of ignorance of the true underlying causal structure of societies, Huemer argues, is that the desire to signal caring is at odds with the epistemic motivation to seek out true causal relations. “Throwing money at the problem” is good enough to signal caring; if signaling caring is all you care about, you won’t be motivated to dig up the workings of the nomological machine. Huemer says:

But there is at least one way of distinguishing the desire for X from the desire to perceive oneself as promoting X. This is to observe the subject’s efforts at finding out what promotes X. The basic insight here is that the desire [to perceive oneself as promoting X] is satisfied as long as one does something that one believes will promote X, whereas the desire for X will be satisfied only if one successfully promotes X. Thus, only the person seeking X itself needs accurate beliefs about what promotes X; one who merely desires the sense of promoting X needs strong beliefs (so that she will have a strong sense of promoting X) but not necessarily true beliefs on this score.

(In Praise of Passivity, 2012)

Here, epistemic effort is one indicator of sincerity. Note that this is a bit of an input-output regularity in and of itself: it doesn’t tell us much about the conditions under which people choose their levels of epistemic investment, or descriptively what motivates people to engage in investigation, or how good they are at it, or if it’s actually worth their time. Huemer acknowledges that the costs of acquiring information are high, and does not advocate for a general duty to spend all our free time finding out why people are wrong. Rather, he advocates for the position that there is a duty not to intervene in the marginal case, since poking a poorly-understood complex system is more likely to result in harm than good.

Complexity and Causality

Like causality, complexity is an analytical property that in theory cannot be directly observed – yet we seem to have a sense for it, as we do for causality.

It’s usually interesting to claim that something that seems very real is fake. For instance, Hume’s argument that we cannot perceive causality directly is compelling, and is made very interesting by the fact that we do, in fact, seem to have immediate sense data about causality. The French psychologist Albert Michotte (The Perception of Causality, 1963) studied the direct apparent perception of causation during the 1940s. He showed subjects simple projections of geometric shapes moving, stopping, changing color, etc. In the central experiment, a black square moves across the screen toward a red square at a certain speed; at the moment it touches the red square, the black square stops, and the red square begins moving in the same direction at a slower speed for a brief time.

Michotte says:

…the observers see [the black square] bump into [the red square], and send it off (or ‘launch’ it), shove it forward, set it in motion, give it a push. The impression is clear; it is the blow given by [the black square] which makes [the red square] go, which produces [the latter’s] movement.

This experiment and the following one have been tried out on a large number of subjects (several hundreds) of all ages. All of them have given similar descriptions, with the exception of one or two, who, observing in an extremely analytical way, said that they saw two successive movements, simply co-ordinated in time.

(The Perception of Causality, 1963)

These exceptional “one or two” either fail to perceive causality in the normal way (they seem to lack a cognitive capacity!), or have read Hume (guessing the teacher’s password) and know to deny their sense impressions, because they know they cannot “really” have this capacity. In fact, the causation Michotte presents in the projection is an illusion, and they have avoided succumbing to it. (Personally, when I watch a movie, I say, “Ha! You’ll not fool me! All I see is a succession of photographic images, concurrent with a sequence of musical notes and speech sounds.”) But the fact that a sense is vulnerable to illusion is not a reason to discard all confidence in the sense; the senses of vision, hearing, and time are subject to illusion, but that is hardly a reason not to use them in the ordinary course of things. The “one or two” who don’t use causal language to describe the clip might imagine that human brains are inferior to artificial intelligences that can accurately measure and report only the position, speed, and color of the shapes, artificial intelligences that wouldn’t be tempted into spurious judgments of causation. But imagine the complexity of an artificial intelligence that could judge and report causation from short video clips as well as a human! This would be very impressive to me, because the human capacity to judge causation – which presents to the senses as immediate data, not as conscious inference – is both sophisticated and useful, despite systematic errors. (I imagine that the judgment of agency or intention is similar, if that is not just a subcase of causality.)

Note that the subjects report causality in Cartwright’s thick, situated manner – not “cause” but “knock,” “bump into,” “launch.”

I recently encountered a paper called “Coincidences and the Encounter Problem: A Formal Account” by Jean-Louis Dessalles (2008). Even though I’ve written about coincidence before, I think this account is better than the account I was previously able to give. Intuitively, a “coincidence” seems to be an event with very low probability; however, Dessalles explains that this is not meaningfully descriptive of actual coincidences as experienced:

Though coincidences are systematically experienced as improbable by subjects, their relation to probability is notoriously unclear: Among events of same probability, some may appear coincidental and others not (Griffiths & Tenenbaum, 2007). For instance, children’s attention is grabbed when the family car reaches 66666 km on the clock, but they do not care when they read 67426 km. People are stunned when unexpectedly meeting a friend in a remote place, although they are fully unable to quantify the probability of the event.

People are very good at perceiving coincidence, and know what makes a good coincidence, Dessalles says. It is not merely an improbable event, but a conjunction of circumstances such that the expected descriptive complexity of the environment is suddenly reduced in observation. On digital clocks, the time “11:11” is experienced as special, and is accompanied by wish-making powers in some cultures. Why is 11:11 the special time? Dessalles would argue that it is because of the unexpected drop in complexity. Usually a digital time must be represented by three or four digits that can take various values; four digits that must be independently specified is the highest expected complexity a time can take. However, when 11:11 is observed, only one digit (repeated four times) is necessary to specify the time. The difference between how much complexity it “should” take to calculate the environment, and how much it is actually observed to take, is the essence of coincidence – not mere improbability (any time is as improbable as any other). Dessalles describes the ability to perceive coincidences as a “cognitive capacity,” which I take to be something like a sense. In order for this to operate, the nomological machine underneath must be maintaining a prediction of expected environmental complexity at some level – that is, it implies that the brain is constantly modeling expected complexity. A sense of complexity – and especially relative complexity – seems to underlie the near-universal perception of coincidences.
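Dessalles’s “unexpected drop in complexity” can be caricatured with a toy description-length measure (a crude, repetition-only stand-in for the algorithmic complexity he actually formalizes; the function names here are mine):

```python
def description_length(s):
    # Crude stand-in for Kolmogorov complexity: if the string is a
    # repetition of a shorter block, it can be described by that block
    # plus a repeat count; otherwise it must be spelled out in full.
    for size in range(1, len(s)):
        if len(s) % size == 0 and s[:size] * (len(s) // size) == s:
            return size + 1  # block + repeat count
    return len(s)

def surprise(s, expected=None):
    # Coincidence as the gap between expected and observed complexity.
    expected = len(s) if expected is None else expected
    return expected - description_length(s)

print(surprise("1111"))  # → 2: big complexity drop, feels like a coincidence
print(surprise("6742"))  # → 0: exactly as complex as expected, unremarkable
```

On this toy measure, “11:11” compresses to one digit plus a count while an ordinary time does not; the gap between the complexity the clock “should” have and the complexity it is observed to have is the felt coincidence.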

There’s another important step that Dessalles offers, relevant to our discussion of causality. A coincidence is only compelling, spooky, or otherwise interesting if the only causal explanations are very unlikely – for instance, if it is the result of extrasensory perception, or of an alien conspiracy, a model that is very expensive to specify in terms of complexity, and also very unlikely. On the other hand, if an easy causal explanation comes to hand, that reduces the complexity of the world “as it should be,” and the coincidence (unexpected drop in complexity) dissolves. In causal explanation, the world is revealed to be less complex than it otherwise might seem.

Abbott argues (“The Causal Devolution,” 1998) that what we want out of the discipline of sociology is an interesting, compelling, comprehensive view of society. More generally, what we want out of an explanation is not just input-output causal laws, but a simpler, less cognitively costly model of the world.

As with causality, the sense of complexity is most accurate at close quarters, with tight feedback and many points of contact with relevant parts of the nomological machine (as with learning to play an instrument). It’s least accurate at a distance, with legibilizing processes intermediating.

I think that encountering a folk concept is the opposite of coincidence: what “should” have been simple, turns out on examination to be unexpectedly complex. “Social policy is a machine for turning force into utils” is an extreme reduction, and hides complexity in every word, even the prepositions. Falling complexity is interesting and motivating; rising complexity can be demoralizing. However, a Huemerian would argue that in the case of social policy, actors being demoralized by complexity is good: if they are properly demoralized, they will be less likely to engage in dangerous interventions.


About Sarah Perry

Sarah Perry is a ribbonfarm contributing editor and the author of Every Cradle is a Grave. She also blogs at The View from Hell.
Her primary interests are in the area of ritual and social behavior.

Comments

  1. Huh, this brought back ancient memories of a philosophy of science course or book where there was a citation of Cartwright’s view that there are no fundamental laws in physics, it’s all constitutive laws. This kinda explains the thinking behind that assertion.

    (traditionally, fundamental = stuff like E=mc^2, constitutive = stuff like F=kx, Hooke’s law of springs, where k is some fudgy parameter of spring properties that is a shallow constant, unlike c).

    • Yeah I found the distinction between “phenomenological laws” and “theoretical laws” useful (in How The Laws of Physics Lie which is a fantastic title)

    • Fancy seeing VR here. Guess it shouldn’t be a surprise. This was a delightful piece and I can’t wait to read Abbott’s “The Causal Devolution.”

      I’m still trying to understand, however, exactly the nature of the misconception around correlation does not equal causation. It sounds like you’re simply saying that we don’t understand causation well enough to make that assertion?

      • I think the misunderstanding is that correlation doesn’t necessarily imply causation. When you arrange for occurrence of input A and measurement of output B, you’re building a case for a Bayesian understanding of what will happen next, but that’s entirely orthogonal to the mechanism by which A leads to B. So in many individual cases, we don’t understand causality well enough to categorically state that correlation equals causation.

  2. turrible tao says

    this reminds me of the taoist concept of “mutually arising”
    that there isn’t cause and effect, but that things mutually arise, which pairs nicely with in praise of passivity being almost taoist

  3. Carlos Giudice says

    Looking into the black box seems like a nice way to go when it is not convenient to empirically try different approaches. I think of deep learning as an extreme example of opacity, where one can usually find useful solutions by trying an arbitrarily big amount of approaches and learning about an arbitrarily big amount of data.
    But when it comes to public policy, we can’t simply go with 500 generations of randomly generated policies and see which works best.
    Maybe lessons can be learned from different fields and problems that require optimizing complex systems behaviour with little data?
    https://arxiv.org/abs/1610.00946
    I’d love to hear policy makers disclaiming “we don’t know what we are doing but hopefully we’ll learn something”.