How Life Imitates Chess by Garry Kasparov

I’ve been slowly working my way through Garry Kasparov’s How Life Imitates Chess. I had rather low expectations, since in my experience superstars in a very narrow activity generally lack the breadth of perspective to adequately situate what they know in a broader context.

But Kasparov’s book is excellent, a pleasant surprise. It is heavily focused on competitive decision-making, of course, but he manages to abstract out lessons from chess encounters very well, so you can read the book even if you aren’t a player. It is helpful to know the basic rules of chess and the general nature of chess strategy (for example, it helps to know that openings and endgames are thoroughly studied and well understood, while middlegames are complex), but you don’t need to know specifically what the Sicilian Defense is.

[Read more…]

Welcome to the Future Nauseous

This entry is part 1 of 6 in the series Thinkability

Both science fiction and futurism seem to miss an important piece of how the future actually turns into the present. They fail to capture the way we don’t seem to notice when the future actually arrives.

Sure, we can all see the small clues all around us: cellphones, laptops, Facebook, Prius cars on the street. Yet, somehow, the future always seems like something that is going to happen rather than something that is happening; future perfect rather than present continuous. Even the nearest of near-term science fiction seems to evolve at some fixed receding-horizon distance from the present.

There is an unexplained cognitive dissonance between change as experienced and change as imagined, and I don’t mean the specifics of failed and successful predictions.

My new explanation is this: we live in a continuous state of manufactured normalcy. There are mechanisms at work, a mix of natural, emergent, and designed, that prevent us from realizing that the future is actually happening as we speak. To really understand the world and how it is evolving, you need to break through this manufactured normalcy field. Unfortunately, that leads, as we will see, to a kind of existential nausea.

[Read more…]

Creative Desks versus Administration Desks

For many of us, desks are where a lot of life happens. I realized about a year ago that psychologically, there are two different types of desks, which most people combine into one physical desk.

The two types are creative desks and administration desks.

Even if you have multiple desks (at home and at the workplace, for instance), chances are you combine both psychological types in each.

Creative desks are where you do serious maker work. Writing, coding, design, pen-and-paper math, spreadsheet analysis and so forth.

Administration desks are where you do all the overhead stuff. Expense reports, invoicing, book-keeping, contract signing, faxing, filing, travel arrangements, GTDing, certain kinds of email and calendaring, and so forth.

The two don’t go well together because people who get a high off creative work are generally depressed by administration work, and vice versa. Basic systems and processes are also different around the two desks. If you consider emotion/energy aspects and system-process aspects, you could say that the two types represent very different field-flow complexes, with different tempos. Mixing them up results in a cacophony.

So how can you cope with both kinds of work? The solution is to separate the psychological desks physically to the extent you can afford to.

[Read more…]

Rediscovering Literacy

I’ve been experimenting lately with aphorisms. Pithy one-liners of the sort favored by writers like La Rochefoucauld (1613-1680). My goal was to turn a relatively big idea, the sort I would normally turn into a 4000-word post, into a one-liner. After many failed attempts over the last few months, a few weeks ago I finally managed to craft one I was happy with:

Civilization is the process of turning the incomprehensible into the arbitrary.

Many hours of thought went into this 11-word candidate for eternal quotability. When I was done, I was tempted to immediately unpack it in a longer essay, but then I realized that doing so would defeat the purpose. Maxims and aphorisms are about more than terseness in the face of expensive writing technology. They are about basic training in literacy. The aphorism above is possibly the most literate thing I have ever written. By stronger criteria I’ll get to, it might even be the only literate thing I’ve ever written, which means I’ve been illiterate until now.

This post isn’t about the aphorism itself (I’ll leave you to play with it), but about literacy.

I used to think that the terseness of  written language through most of history was mostly a result of the high cost and low reliability of writing technologies in pre-modern times. I now think these were secondary issues. I have come to believe that the very word literacy meant something entirely different before around 1890, when print technology became cheap enough to sustain a written form of mass media.

[Read more…]

The 6-Hour Maker-Manager Work Day

There are some ideas that keep popping up. They’re like Rome. All roads lead there, and you end up finding different viewpoints on the idea depending on the path you take.

The Maker’s Schedule/Manager’s Schedule idea from Paul Graham is one such. It may be his most fertile idea.

Once you get used to thinking of work-tempo management in terms of two fundamental frequencies (4-hour maker upcycles and 1-hour manager upcycles), you have a framework for analyzing many different types of creative-class work. One conclusion I’ve reached is that if you do both kinds of work, you’ll end up working 6-hour days. Here’s why.
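A quick back-of-the-envelope sketch of that arithmetic, under assumptions of my own (one full maker upcycle plus two manager upcycles per day) rather than the full argument behind the link:

```python
# Toy arithmetic for a mixed maker/manager day (illustrative assumptions,
# not the post's derivation): one 4-hour maker upcycle plus two 1-hour
# manager upcycles.

MAKER_UPCYCLE_HOURS = 4
MANAGER_UPCYCLE_HOURS = 1

def workday_hours(maker_upcycles: int, manager_upcycles: int) -> int:
    """Total focused hours in a day that mixes both kinds of work."""
    return (maker_upcycles * MAKER_UPCYCLE_HOURS
            + manager_upcycles * MANAGER_UPCYCLE_HOURS)

print(workday_hours(maker_upcycles=1, manager_upcycles=2))  # -> 6
```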

[Read more…]

Thinking in a Foreign Language

This is an idea that simply refuses to go away. Ever since the Sapir-Whorf hypothesis was debunked in its original naive form, the idea that language shapes thought has kept popping up. Now the behavioral economists have weighed in, showing that decision-making changes when you switch languages. The research is reported in a Wired article, Thinking in a Foreign Language.

This looks like it is primarily about the mere fact of shifting gears to a different language causing greater deliberation. But I strongly suspect there will also be patterns related to mental-model construction and use in the “from” and “to” languages (i.e., specific ordered language pairs (A, B) will likely have measurable and characteristic effects on the nature of decision-making).

You’d need more subtle tests for that though.

The researchers next tested how language affected decisions on matters of direct personal import. According to prospect theory, the possibility of small losses outweighs the promise of larger gains, a phenomenon called myopic loss aversion that is rooted in emotional reactions to the idea of loss.

The same group of Korean students was presented with a series of hypothetical low-loss, high-gain bets. When offered bets in Korean, just 57 percent took them. When offered in English, that number rose to 67 percent, again suggesting heightened deliberation in a second language.

To see if the effect held up in real-world betting, Keysar’s team recruited 54 University of Chicago students who spoke Spanish as a second language. Each received $15 in $1 bills, each of which could be kept or bet on a coin toss. If they lost a toss, they’d lose the dollar, but winning returned the dollar and another $1.50 — a proposition that, over multiple bets, would likely be profitable.

When the proceedings were conducted in English, just 54 percent of students took the bets, a number that rose to 71 percent when betting in Spanish. “They take more bets in a foreign language because they expect to gain in the long run, and are less affected by the typically exaggerated aversion to losses,” wrote Keysar and colleagues.

The researchers believe a second language provides a useful cognitive distance from automatic processes, promoting analytical thought and reducing unthinking, emotional reaction.
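The claim that the bet is profitable over multiple plays checks out with a quick expected-value calculation on the quoted payoffs (a sanity check on the numbers above, not part of the study itself):

```python
# Expected value of the quoted coin-toss bet: stake $1; a win returns the
# stake plus $1.50, a loss forfeits the stake.
p_win = 0.5
net_gain_if_win = 1.50
net_loss_if_lose = 1.00

ev_per_bet = p_win * net_gain_if_win - (1 - p_win) * net_loss_if_lose
print(ev_per_bet)       # 0.25 -> about +$0.25 per $1 bet
print(15 * ev_per_bet)  # +$3.75 expected if all fifteen dollars are bet
```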

 

Go Deep, Young Man: 2012 Call for Sponsorships

It’s that time of the year again. Last year, sponsorships amounted to about $2000 (not counting  the “buy me a coffee” micro-payments, which added another $400). This year, they’ve already crossed the $500 mark without me doing a call.

Sponsorship and “coffee” money represents a fairly small fraction of my income, but on a dumb-money to smart-money spectrum, it is the smartest money I make. I’d trade two dollars of any other kind of income for a dollar of sponsorship income any day. The “smart” in the smart money is the unadulterated goodwill it carries. Though there are no strings attached, I feel a strong urge to reinvest sponsorship income back into the blog and related activities rather than using it to pay the bills. In a way, the money comes with the opposite of a moral hazard attached.

So if you were considering sponsoring this year, consider this your cue and sponsor away.

When I did the call last year, I shared a line (the only line, actually) from my fledgling business philosophy: go where the wild thoughts are.

This year, I’ve added another line: go deep, young man.  At 37, I think I get to call myself young man for at least another three years.

Read on for more, if you are interested in my evolving philosophy of blogging. If you are a blogger yourself, chances are you won’t learn much. I am increasingly realizing that my approach to blogging says more about me than about blogging. If you’re not a blogger, this is your annual peek behind the scenes.

[Read more…]

Hacking the Non-Disposable Planet

This entry is part 4 of 15 in the series Psychohistory

Sometime in the last few years, apparently everybody turned into a hacker. Besides computer hacking, we now have lifehacking (using tricks and shortcuts to improve everyday life), body-hacking (using sensor-driven experimentation to manipulate your body), college-hacking (students who figure out how to get a high GPA without putting in the work) and career-hacking (getting ahead in the workplace without “paying your dues”). The trend shows no sign of letting up. I suspect we’ll soon see the term applied in every conceivable domain of human activity.

I was initially very annoyed by what I saw as a content-free overloading of the term, but the more I examined the various uses, the more I realized that there really is a common pattern to everything that is being subsumed by the term hacking. I now believe that the term hacking is not over-extended; it is actually under-extended. It should be applied to a much bigger range of activities, and to human endeavors on much larger scales, all the way up to human civilization.

I’ve concluded that we’re reaching a technological complexity threshold where hacking is going to be the main mechanism for the further evolution of civilization. Hacking is part of a future that’s neither the exponentially improving AI future envisioned by Singularity types, nor the entropic collapse envisioned by the Collapsonomics types. It is part of a marginally stable future where the upward lift of diminishing-magnitude technological improvements and hacks just balances the downward pull of entropic gravity, resulting in an indefinite plateau, as the picture above illustrates.

I call this possible future hackstability.
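One crude way to picture that balance, purely as a toy sketch of my own with arbitrary numbers, is to treat entropic gravity as a drag proportional to the accumulated level and hacks as a small, roughly constant lift; the system then climbs to a plateau where the two cancel, rather than exploding or collapsing.

```python
# Toy "hackstability" dynamics (an illustrative sketch with arbitrary
# numbers, not a model from the post). Hacks add a small fixed lift each
# step; entropic drag removes a fraction of the current level. The
# trajectory saturates at the plateau lift / drag_rate.

lift = 1.0        # per-step improvement contributed by hacks
drag_rate = 0.02  # fraction of accumulated capability lost per step

level = 0.0
for _ in range(500):
    level += lift - drag_rate * level

print(round(level, 1))           # ~50.0, the plateau reached in practice
print(round(lift / drag_rate, 1))  # 50.0, where lift and drag exactly cancel
```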

[Read more…]

Hacking Grand Narratives

Grand narratives are probably the most frequently mentioned subject in reactions I get to Tempo, even though I carefully restricted myself to individual narratives in the book. Apparently the urge to apply narrative models to collectives is irresistible. Several readers have gone ahead and sort of hacked the narrative models I discuss in Tempo, and applied them to grand narratives. To be frank, I don’t completely understand most of these attempts. I know of applications to unconventional crisis response, the political process in Honduras, the history of Western art, and the history of debt/finance.

But as I’ve mentioned in previous posts, I am treading carefully here. I’ve learned something from each hacking attempt people have told me about (do share if you’ve tried this sort of thing), and I’ve made two experimental attempts myself: applying the model to 19th-century American business/technology history and, on a smaller scale, to software projects. I am starting a third experiment: applying narrative analysis to wannabe-Silicon-Valley tech hubs like Boulder and Las Vegas. But overall, I am not satisfied that my models (or anyone else’s) are good enough yet.

But let me try to lay out the problem here, and have you guys weigh in.

[Read more…]

How Do You Run Away from Home?

My Big History reading binge last year got me interested in the history of individualism as an idea.  I am not entirely sure why, but it seems to me that the right question to ask is the apparently whimsical one, “How do you run away from home?”

I don’t have good answers yet. So rather than waiting for answers to come to me in the shower, I decided to post my incomplete thoughts.

Let’s start with the concept of individualism.

The standard account of the idea appears to be an ahistorical one: an ism that modifies other isms like libertarianism, existentialism, and anarchism.

Fukuyama argues, fairly persuasively, that the individual as a meaningful unit only emerged in the early second millennium AD in Europe, as a consequence of the rise of the Church and the resultant weakening of kinship-based social structures. This immediately suggests a follow-on question: is the slow, 600-700-year rise of individualism an expression of an innate drive, unleashed at some point in history, or is it an unnatural consequence of forces that weaken collectivism and make it increasingly difficult to sustain? Are we drifting apart or being torn apart?

Do we possess a fundamental “run away from home” drive, or are we torn away from home by larger, non-biological forces, despite a strong attachment drive?

[Read more…]