Predictable Identities: 6 – Creeps

This entry is part 6 of 27 in the series Predictable Identities

We’ve looked at predicting people from a distance: employing stereotypes and homogenizing outgroups. Moving a step closer to the self, consider an individual you have just met. What are you looking for in the first few minutes of interaction? Among other things, and often first among them: predictability.

A member of your tribe is highly predictable. If you share a taste in clothes and podcasts you can predict with high confidence how they’ll react in social situations, what their habits and motivations are, etc. That’s why small talk about last night’s episode of Game of Thrones isn’t a waste of time. It communicates: I am like you, you can model me well by looking at yourself, we can cooperate.

Second best is someone who fits well in a group stereotype, even if that’s not your ingroup. You may not befriend many middle-aged bearded men who speak a language you can’t recognize, but you feel comfortable getting in a taxi with one.

Now consider someone who looks like you, watches the same shows, and also eats spiders they catch around the house. Would you get in a taxi with them?

The opposite of predictability is creepiness. In the leading research paper on the nature of creepiness, the authors describe it:

The perception of creepiness is a response to the ambiguity of threat. […] While they may not be overtly threatening, individuals who display unusual patterns of nonverbal behavior, odd emotional responses or highly distinctive physical characteristics are outside the norm, and by definition unpredictable.

Eating spiders by itself doesn’t make one dangerous, but it’s a signal that this person can’t be predicted well by pattern matching to stereotypes or by self-modeling (unless you’re an arachnophage yourself). We react to creepiness not with fear but with aversion. We dislike what we cannot predict.

Weirding Diary: 6

This entry is part 6 of 11 in the series Weirding Diary

Weirding is related to uncanniness, as in uncanny valley: a near-subconscious sense of map-territory misregistration. I think there are two varieties, A and B.

Type A uncanniness, which Sarah Perry explored, evokes an “emotion of eeriness, spookiness, creepiness” that you notice instantly, but cannot immediately isolate or explain. Here’s an example:

Human lookalike robot Repliee Q2, detail from image by BradBeattie, CC BY-SA 3.0

Type B uncanniness, which Sarah Constantin explored, does not necessarily evoke those emotions, but may provoke a double take. Here are examples from an article on recognizing AI fakes.

Sample faces generated by a GAN, from How to recognize fake AI-generated images by Kyle McDonald

Two increasingly important domains — markets and AI — exhibit both kinds. Free markets and deep learning AIs generate more Type B uncanniness. Markets distorted by regulation, and GOFAI (including humanoid robots) generate more Type A uncanniness.

Kahneman’s System 1/System 2 model is useful here.

Type A uncanniness is pattern wrongness, detected by System 1, evoking an immediate emotional reaction. If you aren’t fleeing, a slower System 2 response might kick in and supply a satisfactory explanation, and possibly creepiness relief.

Type B uncanniness is logical wrongness (unexpected facial asymmetry or incoherent grammar, for example), prompting the double take (and possibly a delayed creeped-out reaction). You have to re-engage System 1 to reconstruct a narrative around the actual prevailing logic rather than the assumed one.

Too-symmetric faces are Type A uncanny. Mismatched earrings on a face are Type B.

Drug prices shooting up 10x suddenly is Type A. Bond markets defying “normal” logic is possibly Type B (I need a better example).

Markets are eating the world, and AIs are eating software. In both, we’re seeing a shift from Type A to Type B. Less creepy, more double-takey. It’s easier to get used to creepy patterns than unexpected logics.

What If We Already Know How to Live?

This is a guest post by Oshan Jarow.

Sometimes, an event seismic enough to rip a fault line through history forever divides time into two equally infinite halves: before said event, and after. Among previous events of this magnitude, I can think of fire and language. Suggesting the internet did so for society is nothing new, but I suggest the digital age did so for the most basic, insoluble of human questions: how to live. The question is a pure expression of philosophy, distilled and stripped of distractions. I view digitalization on the seismic scale of fire and language, forever changing the landscape of the question, splitting the history of our existential strivings into before and after.

Philosophy is, in part, kept alive by ever-changing sociocultural circumstances that demand new lived responses to its question. But the changes brought by the digital age are of a magnitude beyond the routine vicissitudes of history. The global distribution of knowledge is arming us, perhaps overloading us, with more information than ever before, and the proliferation of digital interfaces is reprogramming how we experience life itself, down to our attentive and perceptual faculties.

Annie Dillard asked in 1999: “Given things as they are, how shall one individual live?” Asking the same question now is a new inquiry, for things are no longer as they were. That was all before. Inaugurated by information abundance & global connectivity, philosophy begins a new timeline. The ‘after’ has just begun. How has our inquiry into how to live metamorphosed? What new challenges animate our search for a fullness of being? What is philosophy after the internet?


Predictable Identities: 5 – Outgroup Homogeneity

This entry is part 5 of 27 in the series Predictable Identities

There are more ways for someone to be different from you than to be similar. But psychologically, it works the other way around. We perceive those like us as uniquely distinct and the outgroup as undifferentiated. It’s called the outgroup homogeneity effect.

This effect extends to physical appearance (e.g. faces of other ethnicities looking the same) and mental traits (e.g. people of the other gender all supposedly wanting the same things). Surprisingly, the effect is unrelated to the number of ingroup and outgroup members one knows; it’s not about mere exposure.

I recently wrote about a remarkable case of the outgroup homogeneity effect: Ezra Klein’s strange attempt to make the case that liberaltarian podcaster (and Klein’s co-ethnic) Dave Rubin is a reactionary.

Klein starts by looking at the network graph of podcast appearances which links Rubin to several unsavory reactionaries. But Klein himself is just two podcasts removed from Richard Spencer, so that’s not great. He then defines “reactionaries” narrowly as those who seek “a return to traditional gender and racial norms”. Of course, most of Rubin’s flagship positions (gay marriage, drug legalization, abortion access, prison reform, abolishing the death penalty) have to do with gender and race norms. Specifically: changing them.

I think what happened is that Ezra Klein picked up intuitively on the one important similarity between Rubin and conservative reactionaries: they both strongly dislike him.

People legislate the distinction between pies and tarts and between plums and nectarines, but only geologists care about telling apart inedible rocks. Same with people: it’s important to keep track individually of potential cooperators, reciprocity relationships, etc. But once you model someone as a defector, you don’t need more detail to predict that they’ll defect. The outgroup is good for writing snarky articles about. For this, you don’t have to tell them apart.

Domestic Cozy: 2

This entry is part 2 of 13 in the series Domestic Cozy

Phrases like domestic cozy and premium mediocre are what you might call world hashes, fingerprints of worlds. They enable you to instantly classify whether a thing belongs in a world, or is an alien element within it, even before you have characterized the world at any significant level of detail.

Take this picture (a screenshot of the landing page of Offhours.co, an “inactivewear” company, ht Adam Humphreys) for instance: domestic cozy or not?

I’m going to say yes, that’s domestic cozy. It’s not an exact science. The associations with inactivity, indoor life, and comfort over presentability put it firmly in the domestic-cozy world.

There are certainly problems at the margins. The well-groomed look of the model, and the non-messiness of the background suggest there’s a residual element of Millennial premium mediocrity in the positioning. It’s more the fake “good-hair” domesticity of a staged Instagram performance than a representation of a genuinely domestic aesthetic. Maybe they’re trying to get some crossover appeal going.

If I had to fine-tune this graphic to strike exactly the right note, I’d pick a more ordinary-looking model, perhaps with properly unkempt frizzy hair and freckles. Maybe a pile of laundry and unwashed coffee mugs/plates in the background (not disgustingly messy, just TV-messy). Maybe softer, darker evening lighting. Maybe a less glossy, more scruffy visual texture. Maybe a board game next to the model. Maybe a note of anxiety.

Still, close enough. This passes the fingerprint matching test.

Domestic cozy is a world hash that picks out a grammar in a world. As with premium mediocre, it is tempting to reductively see domestic cozy as just an aesthetic. But if you like where this is going, I suggest you check that tendency, because it makes things so much less interesting. To confuse a world hash with an aesthetic is like saying Sherlock Holmes’ ability to read the clues in his clients’ appearances made him a fashion critic rather than a detective.

This grammar is easiest to pick out in visual elements, but it suffuses all aspects of the world. I’ll save more general theorizing about world hashes for the worlding blogchain, but what does the grammar of domestic cozy tell us about the underlying world? What parts of what it picks out are enduring traits of the generation (remember, Gen Z can expect to live into the next century), and what parts are simply a function of life stage and contemporary conditions?

One thing that strikes me about examples I’ve noticed so far is that they paradoxically combine passivity and a sense of play. As Visakan noted in a comment last time, there is a dark note of palliative self-care. Instead of Bruce Sterling’s “acting dead”, what we have here is a kind of playing dead. Instead of favela chic, we have mortuary chic.

This is an aspect that, I predict, will not endure. It is an artifact of life stage and 2019 conditions, not the generational temperament.

But the playfulness will mature into a more alive version of itself.

Worlding Raga: 3 — Slouching with God

This entry is part 3 of 7 in the series Worlding Raga

Last week, my wife and I watched the new Captain Marvel movie. It strikes a slightly quieter note than the typical Marvel Cinematic Universe romp, and it occurred to me that that’s because the character is arguably the most powerful in the MCU, like Superman in the DC universe. She’s more like a god than even Thanos or Thor, so the usual wisecracking smart-assery would have struck a false note.

A line in Ian’s Worlding Raga episode last week, What Is a World?, leaped out at me in relation to this:

This voluntary desire to surf chaos, metabolize it into new order, and then do it all over again, is sometimes called “walking with god.” Maybe it’s more like slouching with god around here.

In the MCU, Nick Fury walks with many gods, and Captain Marvel appears to be the most powerful of the lot, which is why Fury sends a prayer-pager call out to her as his last act in Infinity War. Presumably she’ll play a key role in defeating Thanos in Endgame.

Since I’ve been jokingly referring to Ribbonfarm and its surrounding web zone as the “Ribbonfarm Blogamatic Universe” (RBU), Ian’s characterization immediately provokes the question: am I Captain Marvel or Nick Fury in the RBU? I hope I’m not Hawkeye.


Predictable Identities: 4 – Stereotypes

This entry is part 4 of 27 in the series Predictable Identities

How do we predict strangers?

Humans evolved in an environment where they rarely had to do this. Practically everyone a prehistoric human met was familiar and could be modeled individually based on their past interactions. But today, we deal with ever-stranger strangers on big city streets, in global markets, online…

We need to make quick predictions about people we’ve never met. We do it using stereotypes.

Early research on stereotypes focused on their affective aspect: we dislike strangers, but less so as we get to know them. But newer studies have looked at the content of stereotypes, finding that groups are judged independently on two dimensions: warmth, inferred from whether the group competes with yours, and competence, inferred from the group’s status. For example, Germans see Italians as high-warmth low-competence, i.e. lovable buffoons; Italians see Germans as the exact opposite, i.e. mercenary experts.

These dimensions are primarily about predicting someone’s behavior and capability. In game theory terms: Will that person cooperate? And can I safely defect on them or do I have to play nice?

Contrary to the well-meaning wishes of most stereotype researchers, there is robust empirical evidence that people’s stereotypes are, in fact, quite accurate, especially stereotypes of gender and ethnicity. Whether stereotyping is good or bad, it cannot simply be wished or “educated” away. It is universal because it’s very useful for prediction.

A smart person noted that problems arise from having too few stereotypes, not too many. If you have a single stereotype for “Jews” you’re doing a bad job of modeling Jewish people, and are likelier to mistreat them due to prejudice. If you have separate stereotypes for Hasidic Jews, Brooklyn conservative Zionists, liberal Jewish atheists, secular Israelis etc. you’re one step closer to treating (and modeling) unfamiliar Jews as individuals. Studying cultures is about acquiring many useful stereotypes.

Infinite Machines: 1 – An Introduction

Like the universe, technology, an extension of the self, is expanding fast.

The infinite machine is the idea that we’re becoming machine-like through the use of human-like machines. It is a phenomenon at the intersection of automation, labor, gratification, and human desire.

In this expansion of technology, I argue that we compromise aspects of our humanity in ways that are hard for some to see, and harder for others to ascribe meaning to. So the further we ‘progress’, the less we intrinsically understand why we choose to expand.

AI is still evolving (today it mostly completes narrow tasks), and it has done a decent job of mimicking human attributes: neural computation, analytical decision-making, and natural language processing, to name a few. But despite the rudimentary functionality of AI today, the idea of an AI singularity sparks both fear and allure amongst the world’s top physicists and inventors.

This series explores contending identity attributes between the computer science of AI and the spirit of humanity, through a few critical lenses:

  1. Growing emotional and psychological dissonance of laborers involved in the delivery of AI technologies.
  2. Unrealized tensions that laborers experience in the process, ranging from microaggressions to economic exploitation.
  3. Evolving perceptions of power and free will as AI technologies become more anthropomorphic.

A recurring challenge across these areas, which I’ll examine, is disentangling the inherent value from the value proposition: Let’s connect you to the world in ways that you never imagined. For example, last week, I booked a taxi, confirmed a Tinder date, and discovered a new music genre – all in three minutes. As the third minute passed, I realized I hadn’t pushed any buttons in the elevator I was standing in.

I was doing ‘things,’ but going nowhere. This, of course, is a metaphor for the collective human identity.

Predictable Identities: 3 – Prisoner’s Dilemma

This entry is part 3 of 27 in the series Predictable Identities

Much of human interaction is shaped by the structure of the prisoner’s dilemma. We put in place institutions and norms to enforce cooperation. We tell shared stories to inspire it. We evolved moral emotions to achieve cooperation on an interpersonal level: empathy and gratitude to assure cooperators of our cooperation, anger and vengefulness to punish defectors, tribalism and loyalty to cooperate with those we know well.

But the crux of the prisoner’s dilemma is that defection is always better for the defector. We try to get others to cooperate with us, but we also try to defect as much as we can get away with. We want our peers to pay their taxes, admit mistakes, share credit, and stay faithful. We also fudge our taxes, shift blame, boast, and cheat.

There are many strategies for dealing with the prisoner’s dilemma, and some of them can be formalized in code and entered in competitions against other strategies. The simple strategies are named and studied: tit-for-tat responds to each play in kind, tit-for-two-tats forgives a single defection in case it was a mistake, Pavlov changes tack after being defected against, and so on. Which strategy works best?

It turns out that the success of each strategy depends almost entirely on the strategies played by opponents. Each approach can fail to reach cooperation with others or under-exploit opportunities to defect; even a strategy of always defecting is optimal if enough other players always cooperate. If only you knew what your opponent was playing, you could always choose the best response.
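
The post notes that these strategies can be formalized in code. Here is a minimal sketch of that idea (my own illustration, assuming the standard Axelrod-style payoffs of 3 for mutual cooperation, 1 for mutual defection, and 5/0 for unilateral defection; the post does not specify a payoff matrix):

```python
# Minimal iterated prisoner's dilemma sketch (illustrative, not from the post).

PAYOFFS = {  # (my_move, their_move) -> my_score; "C" = cooperate, "D" = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(mine, theirs):
    """Cooperate first, then copy the opponent's previous move."""
    return theirs[-1] if theirs else "C"

def tit_for_two_tats(mine, theirs):
    """Defect only after two consecutive defections, forgiving one-off mistakes."""
    return "D" if theirs[-2:] == ["D", "D"] else "C"

def pavlov(mine, theirs):
    """Win-stay, lose-shift: repeat the last move if it paid well, else switch."""
    if not mine:
        return "C"
    if PAYOFFS[(mine[-1], theirs[-1])] >= 3:  # scored 3 or 5: stay the course
        return mine[-1]
    return "D" if mine[-1] == "C" else "C"    # scored 0 or 1: change tack

def always_defect(mine, theirs):
    return "D"

def play(strat_a, strat_b, rounds=100):
    """Return total scores for two strategies over an iterated game."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_a, hist_b), strat_b(hist_b, hist_a)
        score_a += PAYOFFS[(a, b)]
        score_b += PAYOFFS[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))         # (300, 300): stable cooperation
print(play(tit_for_tat, always_defect))       # (99, 104): exploited once, then mutual punishment
print(play(tit_for_two_tats, always_defect))  # (98, 108): forgiveness exploited twice
```

No strategy in this toy tournament is best in itself; each score is a fact about the pairing, not about the strategy alone.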

And this brings us back to predicting other humans. If we can model their strategies, if we know who will be forgiving and kind, who will be vengeful and dangerous, we can play optimally in any situation. Predicting well is the unbeatable strategy.

Mediocratopia: 3

This entry is part 3 of 13 in the series Mediocratopia

Mediocrity is, rather appropriately, under-theorized. An upcoming book by Daniel S. Milo, Good Enough (ht Keerthik), seems set to make it a little less under-theorized. The subtitle is inspiringly underwhelming: The tolerance of mediocrity in nature and society. Reader Killian Butler sent me this post on being mediocre. Our movement is really slouching along now.

There is a paradox at the heart of mediocrity studies: excellence is not actually exceptional. If you see an excellent behavior or thing, it’s likely to be a middling instance at its level. The perception of exceptionalism is an illusion caused by inappropriate comparisons: you think it is a 99th-percentile example of Level 3 performance, but it’s really a median example of Level 4 performance.
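
To make the comparison illusion concrete, here is a toy numeric sketch (my own construction, with invented distributions, not data from the post): if adjacent levels are overlapping bell curves, a seemingly exceptional 99th-percentile Level 3 score falls below the unremarkable median of Level 4.

```python
# Toy numbers (invented for illustration): adjacent performance levels
# modeled as overlapping normal distributions.
from statistics import NormalDist

level3 = NormalDist(mu=100, sigma=10)  # hypothetical Level 3 performers
level4 = NormalDist(mu=125, sigma=10)  # hypothetical Level 4 performers

print(round(level3.inv_cdf(0.99), 1))  # 123.3: a "99th percentile" Level 3 score
print(round(level4.inv_cdf(0.50), 1))  # 125.0: the median Level 4 score
# The apparently exceptional Level 3 performance sits below the *median*
# of Level 4: what reads as excellence is a middling instance one level up.
```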

Changing levels of performance is self-disruption. The moment you hit, say, the 60% performance point on the current S-curve of learning, you start looking for ways to level up. This is the basic point in Daniel F. Chambliss’ classic paper, The Mundanity of Excellence. People who rise through the levels of a competitive sport do so by making discrete qualitative changes to level up before they hit diminishing returns at the current level. This process of leveling up has less to do with striving for excellence in the sense of exceptional performance, and more to do with repeatedly growing past limits. The visibly excellent are never at a local optimum.
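
As a rough sketch of the self-disruption logic, assume (my assumption, purely for illustration) that the learning curve is logistic: marginal gains per unit of effort peak at the 50% point and shrink from there, so by roughly 60% you are already past the best returns, and jumping to a new curve starts to beat grinding on the old one.

```python
# Toy model (illustrative assumption): learning follows a logistic S-curve.
import math

def performance(effort, k=1.0):
    """Logistic learning curve: performance rises from 0 toward 1 with effort."""
    return 1 / (1 + math.exp(-k * effort))

for effort in [-3.0, -1.0, 0.0, 0.4, 1.0, 3.0]:
    p = performance(effort)
    marginal = p * (1 - p)  # derivative of the logistic: gain per unit of effort
    print(f"performance {p:4.0%}, marginal gain {marginal:.3f}")

# Marginal gain peaks at 50% performance (0.250), is already declining at
# ~60% (0.240), and collapses near the top (0.045 at 95%). Leveling up
# before the flat top is the self-disruption described above.
```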

In The Age of Speed, skier Vince Poscente claims he won primarily by practicing his skills at a level above the one he was competing at. So during actual competition, he could win with less than 100% effort.

Making winning a habit is about making sure you’re always operating at a level where you have slack; where you are in fact mediocre. If you’re being pushed towards excellence, it’s time to find a new level.