Elderblog Sutra: 6

This entry is part 6 of 6 in the series Elderblog Sutra

Last time, we took a content graph view of this elderblog. Now let’s take a word-count view. There is something pleasantly honest about word counts. They strike a thoughtful balance between weighing writing in too-literal (compressed file sizes) and too-abstract (post count) ways. Ribbonfarm has averaged 11,491 words/month over its 11.5 year history.

3 month moving average of ribbonfarm word count

The two spikes, in 2007 and 2017, are explained by my flushing out a pile of private drafts when I started, and by the publication of the longform course participant essays, respectively. The steady accumulation is even clearer in the cumulative word-count graph:

[Read more…]

Infinite Machines: 3 — Turking Interfaces

This entry is part 3 of 3 in the series Infinite Machine

At its peak, the 18th-century Mechanical Turk toured the world, leaving audiences in awe at its seemingly advanced ability to beat opponents at chess. Only after decades did it become publicly known that the Turk concealed a human chess master inside, who manipulated the machine to make moves on its behalf. Over two centuries later, in 2017, Google’s AlphaGo beat the world’s best Go player with no human intervention. In this case, the core technology evolved from human to machine, but personas were constructed along the way to disguise the human labor behind the interface.

Beyond entertainment, this type of persona construction extends to humans used for service labor. As trains became commercialized in the 1860s, black porters were known as “Georges” amongst passengers. The name comes from George Pullman, manufacturer of the Pullman sleeper car. The George, similar to the Turk, functioned as a mask over human identities, though the George (as an interface) represented a deeper charade of power relationships.

Amazon adopted the Mechanical Turk name for one of their platforms, and it has since grown to be the world’s largest online workforce, comprising roughly 500,000 contract-based employees around the world. These ‘turkers’ help researchers and tech companies bring structure to unstructured data and train AI, with activities ranging from spotting fake news to filling out surveys. While it’s known that humans are behind the interface, they’re represented to requesters only as a string of letters and numbers.

In Finland, the Criminal Sanctions Agency is partnering with Vainu, an enterprise SaaS tech company, to employ prisoners as ‘turkers’ to validate data that will help organizations arrive at more comprehensive business decisions. While the company boasts that the prisoners are gaining transferable skills, the dissonance between the worlds of the end user and the prisoner blurs the line between where the human labor ends and the machine begins.

Predictable Identities: 7 – Roles People Play

This entry is part 7 of 7 in the series Predictable Identities

We fit strangers into stereotypes, and we like strangers who fit our stereotypes and act congruently. Dealing with people we know allows for more personalized modeling, but we still want associates and companions to stick to their roles.

Most everyone in most every office resembles a character on The Office, not just in the broad strokes of the Gervais Principle but down to details of job, dress, and personality. Husbands and wives have played out “if it weren’t for you” patterns long before Games People Play described them.

The main difference from predicting strangers is that dealing with familiar people allows for active inference: enforcing others’ conformity to prediction. Valentine Smith describes this in his excellent essay The Intelligent Social Web:

You move away [from your family] and make new friends and become a new person […] But then you visit your parents, and suddenly you feel and act a lot like you did before you moved away. You might even try to hold onto this “new you” with them… and they might respond to what they see as strange behavior by trying to nudge you into acting “normal”: ignoring surprising things you say, changing the topic to something familiar, starting an old fight, etc.

This nudging is effected by thousands of small actions perpetrated by dozens of people. We receive small negative reinforcements when we do something unpredictable that causes others’ models to momentarily fail, and positive rewards when we conform to our roles. The social life of an office or a family is too complex to compute without stable roles assigned to people, the same way a brain can’t cohere a visual scene without the expectation that visual objects remain stable.

Life in the social web means that growth and change are harder than they appear. Hard, but not impossible.

Mediocratopia: 4

This entry is part 4 of 4 in the series Mediocratopia

You’ve probably heard of optimization, that nihilistic process of descending into valleys or ascending hills till you get stuck, having an existential crisis, and then flailing randomly to climb out (or down) again. Mediocritization is the opposite of that: never getting stuck in the first place. Here’s a picture.

Optimization versus mediocritization

The cartoon on the left is optimization. The descent is a relatively orderly process (“gradient descent” takes you in the direction of locally steepest descent). The getting-out-again part is necessarily disorderly. You must inject randomness. The cartoon on the right is mediocritization: don’t get stuck.

When people talk of “global” optimization, they usually mean that over a long period, you flail less wildly to get out of valleys, because the chance that you’ve already found the deepest valley gets higher as you explore more. This process goes by names like simulated annealing, where an “annealing schedule” gradually dials down the randomness.
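The flail-then-settle logic can be sketched in a few lines of Python. This is a minimal, illustrative simulated annealer, not anyone’s production optimizer: early on, high “temperature” means uphill moves are often accepted (wild flailing out of valleys); as the schedule cools, the process settles into whatever deep valley it has found.

```python
import math
import random

def simulated_annealing(f, x0, steps=10_000, t0=1.0, cooling=0.999):
    """Minimize f by sometimes accepting uphill moves, with a
    probability that shrinks as the 'temperature' t cools."""
    x, best = x0, x0
    t = t0
    for _ in range(steps):
        candidate = x + random.gauss(0, 1)   # random jump
        delta = f(candidate) - f(x)
        # Always accept improvements; accept worsenings with
        # probability exp(-delta / t), which falls as t cools.
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = candidate
        if f(x) < f(best):
            best = x
        t *= cooling                         # the annealing schedule
    return best

# A bumpy 1-D landscape with many local valleys.
f = lambda x: 0.1 * x**2 + math.sin(3 * x)
print(simulated_annealing(f, x0=8.0))
```

The `cooling` factor is the whole story: set it close to 1 and you keep flailing for a long time; set it low and you freeze into the first valley you find.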

Global or local, the thing about optimization is that it likes being stuck at the bottoms of valleys or the tops of hills, so long as it knows it is the deepest valley or highest hill. The thing about mediocritization is that it does not like either condition. Mediocritizers like to live on slopes rather than tops or bottoms. The reason is subtle: on a slope, there is always a way to tell directions apart. The environment is different in different directions. It is anisotropic. Mediocritization is an environmental anisotropy maintaining process (not a satisficing process as naive optimizers tend to assume).

Anisotropy is information in disguise. Optimizers get stuck at the bottoms of valleys or tops of hills because the world is locally flat. No direction is any different from any other. There are no meaningful decisions to make relative to the external world because it is the same in all directions, or isotropic. This is why you need to inject randomness to break out (mathematically, the gradient goes to zero, so can no longer serve as a directional discriminant).
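The vanishing-gradient point can be checked numerically. A toy finite-difference example (my own illustration, not from the original post): on the slope of a simple valley the gradient picks out a direction, while at the bottom it goes to zero and no direction is any better than any other.

```python
def gradient(f, x, h=1e-6):
    """Central finite-difference estimate of df/dx."""
    return (f(x + h) - f(x - h)) / (2 * h)

bowl = lambda x: x**2   # a simple valley with its bottom at x = 0

# On the slope, the gradient discriminates directions (downhill is left).
print(gradient(bowl, 3.0))   # ~6.0

# At the bottom, the gradient vanishes: locally flat, isotropic,
# no better-than-random way to choose a direction.
print(gradient(bowl, 0.0))   # ~0.0
```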

Generalizing, in mediocritization, you always want to have a way available to continue the game that is better than random. This means you need some anisotropic pattern of information in the environment to act on.

Three examples of mediocritization:

  1. When Tiger Woods was king of the hill (a position he just regained after a long time), his closest competitors performed worse by about a stroke on average. Apparently, when Tiger is in good form, there’s no point trying too hard. See this paper by Jennifer Brown.
  2. My buddy Jason Ho, who just had this entertaining profile written about him, is, on the surface, a caricature of an optimizer techbro. But look again: he trained hard and placed second in an amateur body-building competition, and then moved on to newer challenges rather than obsessing over getting to #1.
  3. When I was in grad school, and occasionally hit by mild panic at the thought of somebody scooping me on the research I was working on, I came up with a coping technique I called “+1”. For any problem, I’d always take some time to identify and write down the next problem I would work on if somebody else scooped me on the current one. That way, I’d hit the ground running if I was scooped.

Carsean moral of the three stories: optimization is how you play to win finite games, but mediocritization is how you play to continue the game.

Domestic Cozy: 3

This entry is part 3 of 3 in the series Domestic Cozy

I increasingly like a thesis I initially resisted: many unusual and toxic culture-war phenomena in nominally public spaces can be understood as an outward projection of a cozy ethos prevailing in domestic spaces. Applying Jungian magical thinking, we should expect this projection to be anything but cozy. The shadow of domestic cozy ought to be a particular pattern of public strife. We should expect this strife to have a recognizably domestic heat signature — the ugly family scene rather than the barroom brawl, soccer riot, or gang war.

When I first tweeted about domestic cozy, Ben Mathes suggested that phenomena like safe spaces and trigger warnings on college campuses, and associated high incidences of depression and anxiety in Gen Z adolescents, ought to be considered an expression of the Zoomer personality. It does seem like the spike in those phenomena coincided with Zoomers starting to enter college: an epimemetic product of a stressful coming-of-age decade and overprotective (but not necessarily overindulgent) parenting. I resisted the suggestion initially, since it seemed inconsistent with the peaceful domestic expression of the archetype, but I am now on board, via the Jungian argument.

[Read more…]

Weirding Diary: 7

This entry is part 7 of 7 in the series Weirding Diary

The lament that the United States is turning into a third-world country is at once too pessimistic and too optimistic. What is actually happening is that a patchwork of post-industrial first and fourth-world conditions is emerging against a second-world backdrop.

Here are my definitions:

  • First world: Small, rich European countries. Islands of gentrified urbanism in the US.
  • Second world: Suburban/small-town America, parts of larger European countries, small Asian countries, parts of the Soviet Union before it collapsed, parts of China today.
  • Third world: Countries in the global south that began modernizing a century later than Europe, and still have relatively intact pre-modern societal structures to backstop the shortcomings of incomplete industrial development.
  • Fourth world: Parts of the developed world that have collapsed past third-world conditions because industrial safety nets have simultaneously withered from neglect/underfunding, and are being overwhelmed by demand, but where pre-modern societal structures don’t exist as backstops anymore.

The fourth world emerges when large numbers of people fall through the cracks of presumed-complete development, and find themselves in worse-than-third-world conditions: More socially disconnected, more vulnerable to mental illness and drug addiction, with fewer economic opportunities due to the regulation of low-level commerce, and less able to stabilize a pattern of life.

Schemes like LBJ’s Great Society failed to fulfill their promises, but still prevent those facing impoverishment from fending for themselves. The fourth world is the worst of all worlds; an artifact of failed authoritarian high-modernism. A condition of pervasive dependency on non-dependable systems that eliminate old alternatives and limit the growth of new ones. The underbelly of zombie monopolistic safety nets that lack the autopoietic potential to endure through political and economic cycles as living social systems. The functionality withers away, but the negative externalities don’t.

The Great Weirding is revealing that modernization and development are not the same thing. It is a mistake to govern under the presumption that entire populations must necessarily arrive at stable 100% first-world conditions after a transient “development” period. Modernization is the evolution of both wealth and poverty into newer technological forms.

Systems designed for the lowest strata must not assume those strata will eventually go away.

Escaping Reality: Refactor Camp 2019, Los Angeles, June 15-16

Refactor Camp is back! The 2019 edition will be held in Los Angeles, the weekend of June 15-16, at the lovely design studio of Philosophie in Santa Monica. The theme for this year is “Escaping Reality.”

Theme details, registration link and session proposal submission link can be found at the swanky new event website.

(it’s the first time in the 7-year history of the event that we’ve had a proper website, thanks to long-time reader Megan).

As you know if you’ve attended before, we’ve always run the event on a no-profit/no-loss basis. The cost this year works out to $95. Registration will remain open until tickets run out. The venue capacity is limited to 120, and as I write this, 63 regular tickets remain (an auspicious 42 tickets were taken during the closed pre-registration period for returning attendees, and we’re holding 15 in our cronyism reserve). The event tends to sell out early, so if you plan on attending, you should register early.

Session proposals are due by April 30, and you can find the proposal submission link on the event site. Earlier is better, and if we get enough proposals early enough, the program may get locked down early, so if you’d like to do a talk or session, get your proposal in as early as you can.

We’re still working out the program details, but as usual there will be a mix of lightning talks, longer talks, interactive sessions, and hopefully a beach outing (outdoor walkabout sessions have always been a feature of Refactor Camp, though we couldn’t do one last year due to it being in the Texas desert with buzzards and rattlesnakes around).

Look for the final program sometime in early May. As with previous years, we’ll be trying to pull together a good mix of returning and new people among both attendees and speakers/session leaders. For now, the theme blurb should give you an idea of what to expect.

This year’s efforts are being led by Darren Kong (who was also a lead organizer last year in Austin), with support from Megan Lubaszka, Patrick Atwater, Nolan Gray, Ryan Tanaka, and myself.

So hope to see a bunch of both new and familiar faces in June. Register and/or submit session proposals here.

Predictable Identities: 6 – Creeps

This entry is part 6 of 7 in the series Predictable Identities

We’ve looked at predicting people from a distance: employing stereotypes and homogenizing outgroups. Moving a step closer to the self, consider an individual you have just met. What are you looking for in the first few minutes of interaction? Among other things, and often first among them, is predictability.

A member of your tribe is highly predictable. If you share a taste in clothes and podcasts you can predict with high confidence how they’ll react in social situations, what their habits and motivations are, etc. That’s why small talk about last night’s episode of Game of Thrones isn’t a waste of time. It communicates: I am like you, you can model me well by looking at yourself, we can cooperate.

Second best is someone who fits well in a group stereotype, even if that’s not your ingroup. You may not befriend many middle-aged bearded men who speak a language you can’t recognize, but you feel comfortable getting in a taxi with one.

Now consider someone who looks like you, watches the same shows, and also eats spiders they catch around the house. Would you get in a taxi with them?

The opposite of predictability is creepiness. In the leading research paper on the nature of creepiness, the author describes it:

The perception of creepiness is a response to the ambiguity of threat. […] While they may not be overtly threatening, individuals who display unusual patterns of nonverbal behavior, odd emotional responses or highly distinctive physical characteristics are outside the norm, and by definition unpredictable.

Eating spiders by itself doesn’t make one dangerous, but it’s a signal that this person can’t be predicted well by pattern matching to stereotypes or by self-modeling (unless you’re an arachnophage yourself). We react to creepiness not with fear but with aversion. We dislike what we cannot predict.

Worlding Raga: 4 – Who Worlds?

This entry is part 4 of 4 in the series Worlding Raga

So far we’ve been discussing Worlding as an art, one that an individual creator can engage in on their own. As Venkat suggested, we are already living in an emerging Worlding culture replete with examples, from superhero franchises, to blogamatic universes, to people as channels of their own lives. It made me think it’s worth zooming out for a post to consider: what possesses a person to want to make a World? What reward does Worlding offer over all the other drives competing in an artist’s mind? Who Worlds in there?

In the midst of the creative process, the artist experiences a jumble of voices and competing directives. To an untrained ear, this seems like the undifferentiated expression of an inner monologue that can’t make up its mind. But if you listen carefully, you can begin to hear distinct voices fighting to be heard. It took me a long time to realize that an artist is not one unified person, but something like a crew of sub-personalities or mental demons, each with their own motivations, sense of opportunity and threat, and unique filter for relevancy. What if we could learn to identify each of these demons? What if we could become more aware of who is speaking, understand what each cares about, and begin to strategize how and when to use them?

[Read more…]

Weirding Diary: 6

This entry is part 6 of 7 in the series Weirding Diary

Weirding is related to uncanniness, as in uncanny valley: a near-subconscious sense of map-territory misregistration. I think there are two varieties, A and B.

Type A uncanniness, which Sarah Perry explored, evokes an “emotion of eeriness, spookiness, creepiness” that you notice instantly, but cannot immediately isolate or explain. Here’s an example:

Human lookalike robot Repliee 2, detail from image by BradBeattie, CC BY-SA 3.0

Type B uncanniness, which Sarah Constantin explored, does not necessarily evoke those emotions, but may provoke a double take. Here are examples from an article on recognizing AI fakes.

Sample faces generated by a GAN, from How to recognize fake AI-generated images by Kyle McDonald

Two increasingly important domains — markets and AI — exhibit both kinds. Free markets and deep learning AIs generate more Type B uncanniness. Markets distorted by regulation, and GOFAI (including humanoid robots) generate more Type A uncanniness.

Kahneman’s System 1/System 2 model is useful here.

Type A uncanniness is pattern wrongness, detected by System 1, evoking an immediate emotional reaction. If you aren’t fleeing, a slower System 2 response might kick in and supply a satisfactory explanation, and possibly creepiness relief.

Type B uncanniness is logical wrongness (unexpected facial asymmetry or incoherent grammar for example), prompting the double take (and possibly a delayed creeped-out reaction). You have to re-engage System 1 to reconstruct a narrative around the actual prevailing logic rather than the assumed one.

Too-symmetric faces are Type A uncanny. Mismatched earrings on a face are Type B.

Drug prices shooting up 10x suddenly is Type A. Bond markets defying “normal” logic is possibly Type B (I need a better example).

Markets are eating the world, and AIs are eating software. In both, we’re seeing a shift from Type A to Type B. Less creepy, more double-takey. It’s easier to get used to creepy patterns than unexpected logics.