The Greasy, Fix-It ‘Web of Intent’ Vision

The Web of Intent is a term that’s starting to get tossed around a lot, and I’ve gone from being wary of it to believing strongly in it. I was introduced to the term by Nova Spivack about a year ago and was initially skeptical. Could Web ADD be reversed? Can technology give us a true knob to tune our engagement anywhere from ‘distracted’ to ‘laser-focused’? From knee-jerk reactive to coolly deliberate? Actually, that’s how I think of the concept: a technology model that gives users a control knob to manage their online experiences.

The evidence is slowly starting to roll in. This conceptual knob can be created through a generation of “Intent” technologies. What’s more, this knob is what will likely save the publishing and media industries.  It will also save our brains from getting fried, and create a new dynamic in the ongoing disruption of all types of information work.

As I thought more about some of the core ideas (see Nova’s posts What’s After the Real-Time Web? and The Birth of the Scheduled Web), I started to understand the power of the model.

This is where I am placing my bets. Not the 3D Web, not the “Mobile/Touch Web”, not the “Internet of Things” and not the “Semantic Web.” Those are important, but secondary. I am going all-in on the “Web of Intent” as the next main act that will reshape the Internet. As I’ll explain later, it is a gritty, greasy, roll-up-your-sleeves, fix-it vision that is emerging in response to actual problems, as opposed to a vision born out of new possibilities (combined with the smoking of illegal substances).

So here you go: my primer on what the Web of Intent actually is, in terms of user experience (UX), concepts and technology. We’ll need to start by reframing what Web 2.0 actually is.

Web 2.0 is a Messaging Bus with Human Switches and Buffers

You may think of Web 2.0 as “social media,” or technology becoming social in the human sense. It may look like it’s all about user-generated content, online communities and rich apps to improve our personal and collective lives. A utopia of sharing and co-creating. It’s all about technology democratizing power and empowering average humans, right?

How conveniently anthropocentric. And wrong.

Social media is not about technology becoming part of human society. It is about humans becoming part of technological society, in a Matrix sense. Power isn’t migrating from the old plutocrats to the new long-tailers as much as it is migrating from humans to technology. Social media isn’t a set of tools to allow humans to communicate with humans. It is a set of embedding mechanisms to allow technologies to use humans to communicate with each other, in an orgy of self-organizing.

Om Malik nailed it when he called Twitter the “messaging bus” of Web 2.0. That’s a raw, lowest-level hardware metaphor, the level with the highest volume of raw bytes. And we’ve plugged ourselves right into the switching circuitry at that level. Think about it: Twitter is a massively parallel stochastic switching circuit built as a global human bus, where more of us are routing bit.ly links than actually reading them. Think about the fact that even the name BIT-ly, which beat out other brands, is a bus-level metaphor. Humans don’t deal in bits; chips do, right? We’ve moved ourselves into the bottom layer of the information-work stack.
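
To make the metaphor concrete, here is a toy simulation, with entirely invented probabilities, of a crowd of human switches pushing links around. It is a sketch of the metaphor, not a model of real Twitter:

```python
import random

# A toy model of Twitter as a stochastic switching circuit.
# Both probabilities are invented for illustration, not measured.
P_READ = 0.10     # assumed: chance a user actually reads a link they see
P_FORWARD = 0.30  # assumed: chance a user routes (retweets) it onward

def propagate(seed_links=10):
    """Push links through the crowd; count read events vs. routing events."""
    reads, routes = 0, 0
    in_flight = seed_links
    while in_flight:
        in_flight -= 1
        for _ in range(random.randint(1, 5)):  # followers who see this copy
            if random.random() < P_READ:
                reads += 1
            if random.random() < P_FORWARD:
                routes += 1       # the link moves on, read or not
                in_flight += 1
    return reads, routes

random.seed(1)
reads, routes = propagate()
print(f"read events: {reads}, routing events: {routes}")
```

With these made-up numbers, a link generates roughly three routing events for every read event: a circuit that is mostly switching, and only occasionally comprehension.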

The Matrix had it wrong. You’re not the battery power in a global, human-enslaving AI, you are slightly more valuable. You are part of the switching circuitry.

“Oh no, I actually read stuff, not just tweet,” you say? Well, my friend @amitseshan has a hardware, chip-level metaphor for you too: he classifies people as long-buffer (people like you and me who read and write 2,000-word posts) and short-buffer (people who add value primarily by quickly scanning links and passing them along through strong, well-curated social networks). Feeling dehumanized yet? And you thought social media was going to let you truly express your humanity. And if you want to find the perfect expression of this “embedded humans” architecture, look no further than Mechanical Turk and Demand Media. There is no better illustration of power migrating into the technology, with humans as mere electronic parts. The industrial age had its indelible image of Charlie Chaplin literally caught in a gear train in Modern Times. That’s what humans as “cogs” meant. The image today is someone furiously RT’ing links on their iPhone.

Here’s the looming extreme dystopia: writers hired via Mechanical Turk create content that Demand Media believes will sell; we shorten those Demand Media article links using bit.ly and busily pass them around on Twitter; and the long-buffer types read the most popular of THOSE articles and bid on new Demand Media writing jobs that are automatically generated based on that popularity. Not to pick on those companies (they are all locally optimizing in good faith), but where the heck is the actual creative thinking and new value in this madness-of-the-crowds churn? We are facing a downward spiral into the world of the movie Idiocracy.

The fact that the technology matrix is dumb and entirely lacking in goals and intentions actually makes things worse, not better. We are not being enslaved by Skynet. We are being enslaved by an emergent idiot whose behavior is basically a viciously randomized reflection of our own collective manias.

Now reconsider the classic symptoms of “social media disruption” within this new framing. What has Web 2.0 actually done to us?

  1. It has unbundled all sorts of content and driven the center of gravity towards the 140-character tweet
  2. Appointment Content has started to move to On-Demand Content
  3. Fixed publisher-subscriber models have shifted to Twitter/Facebook stochastic diffusion
  4. The temporal horizon has narrowed from past-present-future to just a thin slice of the present
  5. We are relying increasingly on analytics, and squeezing out creative intuition
  6. Polished content and code have given way to perennial beta
  7. Static search based on content-to-content links is being displaced by dynamic search based on live social filtering

The scary part is that each of these is individually a good thing, but it all adds up to a toxic state of affairs.

The last two points are why we are switches in a messaging bus.

Implication of point 6: trading in incomplete stuff makes us part of the process middleware of some giant machine. The finished product that is finally made out of beta code and content is probably something like the hypothalamus of the emergent beast.

Implication of point 7: instead of linking from our slow-changing static content to articles we like, we are tweeting them live. In Web 1.0, while you slept, somebody could click on a link on your “home page,” find a valuable page, and be grateful to you. Win. Now that person is increasingly likely to ask a question on Twitter instead. And you lose sleep trying to stay in the stream, watching for every “real-time” opportunity to answer questions (or, more likely, just flooding the timeline with your own tweets, hoping to intercept random intentions).

The whole thing could be called the “Random, Anxious Simul-Screaming Web!” (RASSW!).  The social psychology of the RASSW! is not pretty:

  • We are all desperately shouting to be heard above everybody else, anxiously scanning several firehoses, watching for our opportunities, and navigating this chaos using a random soup of tweeted links.
  • On-demand content, far from helping us manage our time better, has gotten us into an anxious state of over-demand. We now have the freedom to pack extra RSS feeds and reading into every spare moment, and we do.
  • There is none of that relaxed letting go of the news between broadcasts/newspaper editions. We are like the monkey in that famous experiment that was given a button to stimulate the pleasure centers of its brain: it got into a frantic self-stimulation loop and, I believe, almost starved. In our case, our competitive status-seeking/money-making instincts have been hooked, rather than our pleasure centers.
  • We are being devoured alive by a mindless, formulaic empiricism; SEO, aka “writing to the machine,” is just the tip of the iceberg.

Why are we doing this to ourselves? Are we just masochists with a species-wide death wish?

Actually no, it is a sort of tragedy of the attention commons. To see why, ask yourself: why can’t we all agree, as a planet, to just take Sundays entirely off the grid?

The Tragedy of the Attention Commons

A finance expert once told me that most of the gains in the stock market over the last 50 years happened on just a handful of days. If you’d happened to be out of the market on those days, with your assets in cash, you’d have seen losses instead of the historic 8% returns. That’s why, he explained, buy-and-hold is best for long-term investing: you won’t miss those unpredictable big-jump days if you’re always in the market.
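
A quick back-of-the-envelope simulation makes his point; the returns below are invented for illustration, not real market data:

```python
import random

random.seed(42)

# Invented toy returns: most days are small noise, a few days are big jumps.
DAYS = 50 * 252  # roughly 50 years of trading days
daily = [random.gauss(0.0, 0.008) for _ in range(DAYS)]
for i in random.sample(range(DAYS), 60):
    daily[i] += 0.05  # the handful of big-jump days

def compound(returns):
    total = 1.0
    for r in returns:
        total *= 1.0 + r
    return total

best = set(sorted(range(DAYS), key=lambda i: daily[i], reverse=True)[:60])
missed = [0.0 if i in best else daily[i] for i in range(DAYS)]

print(f"always in the market:    {compound(daily):8.1f}x")
print(f"out on the best 60 days: {compound(missed):8.1f}x")
```

With these toy numbers, staying invested compounds to a healthy multiple, while sitting out just the 60 best days turns the same half-century into a loss. The Web version of that anxiety is the fear that the one opportunity that matters will arrive while you are offline.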

The same thing applies on the Web, except that your Web 1.0 “home page” is no longer your investment in the Web; your personal live presence is. Imagine having to show up on the NYSE floor every day and shout above the noise, “I am still in!” to keep your investments in the market. Going off the grid is not really an option. Twitter eroding the position of RSS as a blog-distribution medium is the clearest instance for me: I now have to tweet new posts at optimal times. No more publish-and-forget.

But here’s why it is a tragedy of the commons: everybody is more frantic, but nobody is actually better off. It’s like one guy standing up at the stadium to get a better view, causing a chain reaction leading to everybody standing up. Now nobody has a better view, and everybody is paying the added cost of standing up.

Or to return to my “Sundays off” hypothetical, if there’s just one guy looking to buy something, tweeting on a Sunday, and just one guy willing to get on Twitter to listen on a Sunday, the rest of us are screwed. Now we all have to get on Twitter on Sundays or miss potential big wins. Actually we don’t listen much. We all choose to scream all the time.

There’s probably a nice game theory model here, but I’ll leave that to someone else.

Why this Disrupts Work and Media

Step back and you will see in this complex of effects the reason for the disruption of both the world of work and the world of media. We’ve paid a lot of attention to the coarse, lifestyle-level effects on work, such as virtual/distributed work and the economics of free agency. We haven’t thought as much about the minute-to-minute work we are actually doing, sitting at home in our pajamas, working Skype and watching our free-agent earnings trickle into our PayPal accounts.

Yes, there are benefits, and added independence, and I’ve blogged about the positive side. But it isn’t a pleasant reality overall. The real-time Web, so far, has created a race to the bottom in the labor force. We have to fight harder, with fewer protections, for every AdSense dime, rather than trusting that our paychecks will see us through to retirement. And a lot of the work is much duller. Not just the Mechanical Turk level of mindless drudgery, but also the 90% of blogging that is formulaic “7 ways” list-post drudgery. Hardly a fulfilling creative life for people inspired to write by Shakespeare.

The impact on media is an indirect effect, via the impact on work. Publishing “amateurs” (bloggers and the like) looking to establish free-agent/personal brand voices for the new economy are the prime villains in the disruption of old media. What frustrates Old Media attempts at creating new business models overnight is that people like me are grabbing thin slices of the attention that used to belong exclusively to them, and given the weight of numbers, it adds up. We are simultaneously eroding their attention market-share and disrupting their distribution channels (the blogosphere is like a giant, crowdsourced Walmart where every employee is creating his/her own store microbrands in addition to reselling bigger brands).

There is a solution. I hinted at it in a recent post on VentureBeat, reviewing Nick Carr’s The Shallows. I offered the cautiously optimistic argument that technology is just a lever, with a powerful “intent” side and a manipulated “passive” side. This post is a refinement of that argument: humans, not technology, are the only truly intentional beings in the picture at the moment. We’re not dealing with Skynet here, but with a random, dumb emergent beast.

Greasy, Fix-it, Damage Control

I’ll define the Web of Intent in a very simple way:

A Web architecture that reduces the number and frequency of decisions you have to make, lets you control when you make those decisions, and prunes, in a trustworthy way, the number of options among which you need to choose. The overall effect of the Web of Intent will be to allow you to get OFF the Web without suffering an anxiety attack.

The Web of Intent isn’t like other big visions for the Internet. It is a trend that is emerging to solve an actual problem, as opposed to a vision somebody simply finds attractive. It isn’t a stimulating “new possibilities” vision like the Semantic Web or the 3D Web. It isn’t an enabling vision like the Mobile Web or the Internet of Things, which let us do new things. It is also something of a damage-control vision: the lessons of the last 10 years show that our Great Information Overload Hope, filtering and “relevance” technologies, wasn’t working well enough to significantly reduce our decision-making and information-processing load (that’s why I said “prunes in a trustworthy way”; most of us still don’t trust the existing relevance/filtration technologies). At the same time, automation of decisions and actions wasn’t really working either. Most information still needed human judgment. Outside of a few things like email forwarding rules, we do most information handling manually. Information work is still largely manual labor.
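
Email forwarding rules hint at what generalizing this might look like. Here is a minimal sketch, in code, of the “reduce and schedule decisions” idea; every rule and category in it is hypothetical, not a description of any existing product:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Item:
    source: str  # e.g. "twitter", "rss", "email:boss"
    text: str

@dataclass
class IntentRouter:
    """Defer everything non-urgent to one scheduled slot: decide once, not constantly."""
    urgent_sources: set = field(default_factory=lambda: {"email:boss"})
    review_hour: int = 17  # assumed: a single daily decision window at 5 pm
    deferred: list = field(default_factory=list)

    def ingest(self, item):
        if item.source in self.urgent_sources:
            print(f"NOW: {item.text}")  # the rare interrupt you opted into
        else:
            self.deferred.append(item)  # arrival is no longer a decision event

    def review(self, now):
        if now.hour < self.review_hour:
            return []  # stay off the Web; the batch can wait
        batch, self.deferred = self.deferred, []
        return batch

router = IntentRouter()
router.ingest(Item("twitter", "17 hot takes"))
router.ingest(Item("email:boss", "deadline moved up"))
print(len(router.review(datetime(2010, 9, 1, 18))), "item(s) batched for review")
```

The point of the sketch is its shape: arrival events stop being decision events, and you choose, once, when deciding happens.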

The Web of Intent is a roll-up-your-sleeves, grungy, grease-stained “fix-it” vision. A vision that is about fixing the huge problems created by Web 2.0, which we’ve ignored while being distracted by the huge opportunities. We can’t live in the RASSW! for much longer without going collectively crazy. I can just imagine some crazed #iranelection-style Twitter phenomenon in a few years creating the brinkmanship conditions for a nuclear war.

The Web of Intent solves these huge problems by amplifying the power of human intent, and taking power back from the (dumb, non-malicious) machines. It attempts to fix Web 2.0 before moving on to some new horizon labeled Web 3.0.

So as a fix-it vision, it starts not with the grand visionary designs of a single genius mind, but with the collection of small local solutions that are already emerging, based on existing technology, to fix specific intent (little “i”) problems. We just need to generalize, grow and integrate these solutions into a coherent architecture. Here’s my list:

  1. DailyLit and Instapaper allow you to schedule and control your reading
  2. Trailmeme allows you to prune and add intent to your browsing (since defunct)
  3. Newer Twitter clients like HootSuite allow you to gain some time control over your Twitter account
  4. Clicker is bringing back some of the benefits of the much-maligned Appointment TV without its costs
  5. Nova is up to some interesting general scheduling technology with Live Matrix
  6. Flipboard allows you to step back a bit from the Twitter feeding frenzy and bring some of the old leisurely magazine feel back to your Twitter/Facebook-fueled reading
  7. Meetup is a scheduling, back-to-the-real-world technology that is the beginning of the “get off the Web” aspect of the Web of Intent.
  8. As befits a greasy fix-it vision, email, much maligned by the younger technologies, is being redeemed and restored to its position of respect
  9. In a way, the failure of Google Wave is another piece of evidence in favor of the Web of Intent. It aimed to improve email, but took it down an anxiety- and frenzy-increasing path. We said, “No thanks.” Perhaps that’s the big turning point.
  10. The rise of social gaming on Facebook is very revealing. It may seem like a distraction from a work point of view, but it is an example of how you can create intense focus in the middle of the Random Anxious Simul-Screaming Web (RASSW!). It is particularly revealing that kaChing, a stock-trading social game on Facebook, has now become an actual stock-trading technology.
Note: This post originally had references to SXSW 2011, and to one of my own Xerox projects, Trailmeme, that has since become defunct. There are no substantive changes to the argument though. I left the concluding list of examples unchanged.


Comments

  1. presumably you’ve seen Gelernter’s work on this:
    http://www.edge.org/3rd_culture/gelernter10.1/gelernter10.1_index.html
    “What does this mean for the internet: will the internet ever think? Will an individual computer ever think?
    We need to see, first, that in approaching the topic of human thought, we usually stop half-way through. In fact, the human mind moves back and forth along a spectrum defined by ordinary logic at one end and “dream logic” at the other. “Dream logic” makes just as much sense as ordinary “day logic”; it simply follows different rules. But most philosophers and cognitive scientists see only day logic and ignore dream logic — which is like imagining the earth with a north pole but no south pole.”

    Currently the web is ALL dream-logic …

    • Thanks, I hadn’t heard of Gelernter’s work, will add it to my input buffer :)

      Not sure I understand what you mean by the day logic/dream logic stuff, but presumably I’ll get a sense of it when I read your link!

  2. Christian Molick says

    The “The New Author Platform (Mary Ann Naples)” link goes to the “ARGS Doesn’t Work …” article, which emphasizes the common ground new journalism shares with code-test-release software production.

    Intent can be hard to quantify, but the prioritized life lists from “Living in More Than One World” might be the best available model. Much of the rating and filtering and relevance stuff falls short because of the “Flaw of Averages”. In short, better models for interest and relevance might cover much of this.

    • Fixed, thanks!

      I think intent CAN be made simple if we apply a stock-market metaphor as I suggested in one of my other responses.

      Relevance (to an intent) and quality are not enough. Some things are relevant in an unpredictable and volatile “stock” sense, while others are more reliably relevant in a bond sense, and still others are relevant in a deterministic sense (savings/checking account).

      We need technologies to balance our intent portfolio this way, so we don’t overinvest in stocks at the expense of bonds and cash positions for instance. I think that’s what a lot of us are doing.

  3. In 1940, Mortimer J. Adler and Charles van Doren published a book entitled ‘How To Read A Book’. It expresses much of the ‘intentionalist’ reading viewpoint, but seventy years ago, within the context of entirely traditional media. (And it was apparently popular enough at the time to inspire a parody, ‘How To Read Two Books’.)

    The idea was, when approaching a book, too many people will naively start at page one and plow in; instead, it’s better to read analytically, from the top down. Examine the table of contents. Skim the index, if there is one. Page through the book, reading first and last paragraphs of each chapter and section. If passages are highlighted, by either the publisher or a previous owner, take advantage of that, make note of what they considered important.

    Then, if you choose, dive in and read in detail. Feel free to make margin notes and highlight as you go. But if sections are not of interest, feel free to skip them. And finally, if at any point in this process you decide the book has sated your curiosity or is wasting your time, feel free to put it down and walk away without guilt.

    In essence: readers need not submit to authorial intent, which would in most cases lead them linearly from the front cover to the back. The reader’s time is his own, and the reader’s interests should be paramount; and those interests are often best served by a non-linear and incomplete reading of the material.

    (And they even ‘dogfood’ their premise, encouraging the reader at every step to apply these practices to reading ‘How To Read A Book’ itself.)

    I wonder if Nick Carr would consider Adler and van Doren’s approach to reading ‘shallow’. It does sound like they were reaching towards the Web of Intent half a century before the web even existed.

  4. > “There’s probably a nice game theory model here, but I’ll leave that to someone else.”

    You’re quite right, and that game would be the prisoner’s dilemma; and because it’s a decision we face every day, it’s really more like the iterated prisoner’s dilemma. You, and everyone else, have to choose between staying offline (cooperating) and joining the RASSW! (defecting).

    Identifying the game also gives us the possibility of breaking out of the cycle by changing the payoff matrix. I suspect that if you take a closer look at the parts of your fix-it solution, you’ll find that that’s exactly what they’re doing.

    • It probably is IPD. It usually turns out to be in these cases. But game theory is a little too full of tricky subtleties, so I won’t commit until I see the payoff matrix :)
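
      For concreteness though, here is one candidate payoff matrix, with entirely invented payoffs, for the one-shot version of the stay-offline-on-Sunday game:

      ```python
      # Candidate payoffs, all invented. "C" = stay offline, "D" = plug in and scream.
      PAYOFFS = {
          ("C", "C"): (3, 3),  # everyone takes Sunday off: calm, nobody loses ground
          ("C", "D"): (0, 5),  # the lone defector intercepts the rare big win
          ("D", "C"): (5, 0),
          ("D", "D"): (1, 1),  # everybody standing up at the stadium
      }

      def best_response(opponent):
          """The move that maximizes my payoff against a fixed opponent move."""
          return max("CD", key=lambda me: PAYOFFS[(me, opponent)][0])

      # D strictly dominates C, so the one-shot game lands at (D, D):
      assert best_response("C") == "D" and best_response("D") == "D"
      print("equilibrium:", ("D", "D"), "payoffs:", PAYOFFS[("D", "D")])
      ```

      With that payoff ordering (5 > 3 > 1 > 0) it is indeed a textbook prisoner’s dilemma, and (D, D) is exactly the nobody-better-off stadium outcome.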

  5. First, I laughed.
    “Social media isn’t a set of tools to allow humans to communicate with humans. It is a set of embedding mechanisms to allow technologies to use humans … Twitter [is] the “messaging bus” of Web 2.0. That’s a raw, lowest-level hardware metaphor … You’re not the battery power in a global, human-enslaving AI, you are slightly more valuable. You are part of the switching circuitry.”

    Then I did OMG.
    “Twitter is a massively parallel stochastic switching circuit where more of us are routing bit.ly links than actually reading them.”
    [That’s me on both counts, super-router unread and bit.ly compacter.]

    “And if you want to find the perfect expression of this “embedded humans” architecture, look no further than Mechanical Turk … writers hired via Mechanical Turk create content that Demand Media believes will sell, and then we shorten those Demand Media article links using bit.ly and busily pass it around on Twitter.”
    [C’est moi. I live by MT, but in my defense, perhaps less on lofty principle than out of scorn for the ‘$1 for 450 words on blah-blah’, I eschew the Demanders’ regurgitation / (mis)appropriation of content. For those of you who don’t know about Mechanical Turk, here is a typical example of a “HIT” (job), the acronym for Human Intelligence Task, which goes quite well with the theme of this blog post:
    Requester: Fuel Interactive – Reward: $3.00 per HIT
    “Write a 500 Word Article on the keyword: ‘golf communities in south’
    Write good English: proper grammar and spelling. Spell-check and grammar-check before sending. Do not repeat phrases or words. Do not copy and paste sentences from other sites. Rephrasings are ok, but put some original idea and thought into the article. You must Include the exact keyword once in the article title and once in the text, but no more than that.”
    Sigh. Any freelancers remember the days of $.25 per word = $125 for 500 words? It was in this millennium.]

    And then I did “Hmmm”
    ” I now have to tweet new posts at optimal times. No more publish-and-forget.”
    [Maybe that’s why nobody reads my tweets. I’m not OPing (optimally posting).]

    But I didn’t cry. I accept my cogdom. I embrace my penurious enchattelment. My name is ionavideo and I’m on the bus. The Twitter bus. The MT bus. Flash memory. Erased on reboot.

    Am I reassured that there is a messianic web of intent? Meh. We don’t want to be intended. Like the monkeys, we want to be gratified.

    • “Am I reassured that there is a messianic web of intent? Meh. We don’t want to be intended. Like the monkeys, we want to be gratified.”

      Looks like you are having one of your darker days, and I was having one of my more optimistic ones when I wrote this. I admit, 3 days out of 7 in a week, I am pessimistic like you. Optimism has a shaky 1/7 edge for me :)

      Thanks for the excellent example illustrating the point. Exactly what I meant. Web advertising has become such a weird numbers game that you can actually convert enough of a percentage by writing crud as “content marketing” and linking it to customer acquisition channels.

      This is one big reason Google is slowly losing power to social search. The main PageRank based anonymous search is one of the main engines of this descent into madness. That’s the main reason I am optimistic. There is a slight chance social search will raise the bar. But then again, as the Scamville scandal shows, the main mechanism could be social gaming, and the most successful social games might be the spammiest ones.

  6. Just voted for your panel. Hope I get to see it, too!

  7. Great post! I’m getting caught up in the term ‘web of intent’ however. It’s too close to ‘intention web’ which is a separate idea entirely. Maybe yours could be called the Sifted Web…

  8. Great post.

    Ask your mother why she adheres to tradition X. Chances are she won’t know, but she will be very insistent that it be ‘forwarded’ to the next generation. We’ve been store-and-forward agents of memes for a long time now, at the behest of large, poorly understood distributed idea-organisms.

    Meme networking has made several leaps in buffering and switching technology: spoken language, then written language, then the printing press. It has also increased the number of nodes in the network with agriculture and the industrial revolution.

    We build bigger and faster networks, enable more and more human and computer nodes to join the network. So we should have expected replicators like computer viruses and human link-memes and tweet-memes. Much higher frequency, much shorter lifetimes of individual memes.

    [ What happens post-singularity? Won’t it be funny if we throw the switch, only to watch the whole thing grind to a halt in ten minutes, stopped by a virulent, super-fast meme-outbreak blowing it to (haha) bits? ]

    Memeplexes don’t have to operate for the benefit of their carriers, so they routinely lead them to death in defiance of biological imperatives. So it’s no surprise that the memes are getting more out of the frenzied tweeting than their switches.

    Random, Anxious Simul-Screaming Web reminds me of Life of Brian, where there are a bunch of prophets shouting random stuff at the crowd and somehow the crowd fastens on Brian’s stream of nonsense. One of my favourite scenes ever.

    Internet ADD is bad, I hope we get something to tame it. Very easy to get caught up in the twitches and tics of the Twitter-beast.

    Oh, and browser tabs. I thought they were great, but what they let you do is accumulate a large pile of junk and cycle through it in futility, such that context-switch overhead dominates and productive work plummets. Humans are not designed to multitask at this level.

    Personally, I’m going on a regime where all the real-time distracting stuff – Twitter, reddit, RSS feeds, and email to the extent possible – gets dumped into a fixed time slot and batch-processed, to avoid context switching.

    Unfortunately the Paul Graham or Stallman method of completely isolating yourself from the net doesn’t work for me. Google is my IDE – I use it for programming and debugging and can’t live without it. I need a giant pie chart telling me in red/green how much time I’ve wasted today every time I open a tab. Monitoring software may be more necessary for adults than children.

    • “I need a giant pie chart telling me in red/green how much time I’ve wasted today every time I open a tab. Monitoring software may be more necessary for adults than children.”

      This is a very interesting thought experiment, since “time wasted” is hard to assess. Whether a random read, found via RSS or Twitter, leads to an Aha! moment that solves a problem, or whether it wastes time, is hard to determine a priori.

      Instead, this attention/intention dashboard will need to be something like a stock portfolio dashboard. You DO need some bandwidth invested in the highly volatile RSS/Twitter “stock markets” (stocks with unreliable returns), some bandwidth allocated to more reliable returns channels (“bonds” like TechCrunch for me, since that’s where I get most of my “must know” industry situation awareness news), and your basic savings/checking attention account which comprises channels that are not stochastic at all and every element must be processed (email, RSS feeds of competitors’ blogs etc.).
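
      A minimal sketch of the allocation logic such a dashboard might run on (the channel names and weights are invented placeholders, not recommendations):

      ```python
      # Sketch of an attention portfolio (all channels and weights invented).
      WEEKLY_ATTENTION_MIN = 10 * 60  # assumed budget: 10 hours of reading a week

      portfolio = {
          "stocks":   {"weight": 0.2, "channels": ["Twitter", "RSS firehose"]},
          "bonds":    {"weight": 0.3, "channels": ["TechCrunch"]},
          "checking": {"weight": 0.5, "channels": ["email", "competitors' blogs"]},
      }

      for asset_class, info in portfolio.items():
          per_channel = WEEKLY_ATTENTION_MIN * info["weight"] / len(info["channels"])
          for channel in info["channels"]:
              print(f"{asset_class:>8}: {channel:<20} {per_channel:5.0f} min/week")
      ```

      The interesting design problem is rebalancing: noticing when the volatile “stock” position has silently grown past its target weight, and clawing that time back.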

      It is surprising how few people get what is obvious to anyone who has read some Dawkins-style stuff (as in people like you and me and most readers of this site): that information travels on any network not because it is true or useful, but because it is good at traveling.

      The demand-media-mturk-bitly-twitter information loop (call it the “info loop”) can self-excite to a frenzy, and this is bad for two reasons:

      1. Humans in the loop are likely to be miserable, and
      2. It optimizes for the survivability of the autocatalytic loop rather than the humans within it.

      Feature 1 makes us miserable now, but feature 2 could kill us, just as certain badly-engineered parasites accidentally kill their hosts. There is no feedback loop connecting the info loop to environmental feedback about how well its components are faring. If we can somehow inject system survival/social cost signals into the loop, we can at least hope to avert collapse. Then we’ll have bought time to deal with problem 1: us being miserable in the info-loop.
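
      As a cartoon of those dynamics (coefficients entirely invented; this is an illustration, not a model of any real platform):

      ```python
      # Toy self-exciting loop: x is activity; the cost term is the proposed fix.
      def step(x, gain=0.1, cost_signal=0.0):
          return x + gain * x - cost_signal * x * x  # excitation minus damping

      for cost in (0.0, 0.01):
          x = 1.0
          for _ in range(100):
              x = step(x, cost_signal=cost)
          label = "no feedback" if cost == 0.0 else "with cost signal"
          print(f"{label}: activity ~ {x:,.0f} after 100 steps")
      ```

      Without the damping term the loop just runs away; even a weak cost signal settles it at a finite level. The hard design question, of course, is what the real-world analogue of that term would be.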

      Venkat

  9. It optimizes for the survivability of the autocatalytic loop rather than the humans within it

    Except for the speed, which allows us to see it, I don’t think this is at all different from any life process in nature; the individuals are just cogs in the species’ trajectory, and the species itself is driven by higher-level chance events resulting in phenomena akin to Pólya urns.
    The trouble is consciousness, which allows us to pretend we can (should?) do something about it. Because if you suggest that “we can somehow inject system survival/social cost signals into the loop,” what do you think the “we” acting for such a purpose would end up being?
    (yeah: I am a bit of a (bad) Taoist)

    • Ah! Touche!

      You are right. Any system we design to counteract this would end up first acting in the interests of its own survival.

      That said, I still think we may do slightly better than emergent, unintended/unexamined design. Whether an arbitrary random emergent system is for us or against us is kind of a random coin-toss. Possibly we could build something that does slightly better by us. Okay me. Not us; me.

      I think, for instance, that democracy is better than autocracy, and individual liberty better than slavery. Even if democracy based on individual liberty DOES lead to this kind of free-market created RASSW!, this existence is still better than slavery under an autocracy.

      When I think of system designs that seem to improve things, they seem to share one feature: they incorporate a new feedback signal that STOPS them from doing certain things. Voting stops arbitrary executions and imprisonments. Property rights seem to halt serfdom to some extent. Minimum wage laws prevent the worst kinds of exploitation.

      Except that these kinds of statist interventions won’t work on the Internet.

  10. I think, for instance, that democracy is better than autocracy, and individual liberty better than slavery. Even if democracy based on individual liberty DOES lead to this kind of free-market created RASSW!, this existence is still better than slavery under an autocracy.

    Better for whom?
    It all depends on your personal preferences; that is, on whether the given system’s effects happen to match your preferences on enough points.
    For any individual this is no more than random chance, weighted by the size of the cohort you identify with.

    I am not in favor of autocracy either, but autocracy doesn’t necessarily mean slavery; there have been enlightened despots, and Voltaire was supportive of them. It is sometimes (in the etymological sense) the lesser evil.

    I deem democracy to be one of the worst systems because it enshrines “power of the masses,” and the masses are dumb and already have enough power as it is; even Roman emperors bowed to the power of the masses.
    And you’ve only seen the RASSW!; “better” is to come.

    Though Sébastien-Roch Nicolas Chamfort was an early Republican and supported the French Revolution, he was not fooled:
    “One can safely bet that every public idea, every received convention, is a piece of foolishness, for it has suited the greatest number.” (“Il y a à parier que toute idée publique, toute convention reçue, est une sottise, car elle a convenu au plus grand nombre.”)

    As for myself, unfortunately, I do not belong to any significant cohort…

  11. Better for me of course :)

    Certain autocracies have been known to benefit serfs and slaves at the expense of power for the nobility and fiefs.

    Democracy is terrible, but is the best scheme for those who belong to no special category. In fact it seems to create a negatively-defined category for them, the “middle class” sandwiched between the true bourgeoisie and the truly oppressed poor. A layer of earnest, self-absorbed, humorless people who aren’t capitalists, but own at least the means of production closest to themselves, their own bodies.

    I think of them as the clueless of society as a whole. But I haven’t worked out the theory yet (or worked myself out of the class, which may be a necessary prerequisite for seeing the condition clearly).

    Venkat

    • You can easily work your way out into the losers’ class. :-)
      Even psychopaths can do that when they turn really nuts.

  12. Venkat, thank you for this bright post.

    The sad thing is that most of the relevant people whom you’d like to read this post (or is it only me?) probably won’t. Just as you’ve mentioned, they are not used to reading more than 140 characters at once, let alone a 2,000-plus-word post and its insightful comments.

    It is a cliché but it fits here: we need an evolution, not a revolution. Let’s say you’ve come up with the best idea ever for fixing this problem: a start-up/website/meme that could potentially reverse the ADD. Because of the all-standing crowd, the only way to get today’s audience to listen to your idea or use your solution would be through the current RASSW…
    I’m sure you support the “evolution, not revolution” way of thinking, since you mentioned small initiatives as the potential beginning of the change (which is what Kuhn teaches us in The Structure of Scientific Revolutions).

    My point is that while thinking about alternatives for today’s RASSW, we should also think of the ways to infuse those into the WWW.

    The first [glib] idea I could think of is dividing this important post into 140-character parts and tweeting them one after the other over a few hours/days.

  13. Aren’t you over-thinking this a bit? I understand the importance of trend-watching, but Twitter has 10-15M active users (see http://www.computerworld.com/s/article/9148878/Twitter_now_has_75M_users_most_asleep_at_the_mouse) – even if that’s a gross underestimate, there’s no real reason to believe that everyone is using Twitter, or soon will be.

    “Everyone in Silicon Valley” may be a different matter, and I did enjoy your perspective – but never forget that Joe Sixpack has never heard of TechCrunch, and never will.

    • True, it is a minority we are talking about, but it is the supposedly “elite” minority of bleeding-edge information workers. This is the part of the labor force that’s supposed to solve everything from global warming to AIDS… if this minority is ending up brain-dead, there will be unpleasant times.

      • I concur that Silicon Valley appears to be, at least, Twitter-obsessed: but I respectfully disagree with your conclusions.

        First off, Silicon Valley wasn’t going to solve global warming or AIDS anyway. Exhibit A: the OLPC project.

        I also see no indication that the intellectual elite is, in fact, Twitter-addicted; e.g., scientists, by and large, ignore Web 2.0. I know a bit about mathematics and cryptography: Terence Tao and Bruce Schneier have well-known blogs, but are definitely the exception (and it’s debatable whether Schneier still counts as a true cryptographer). And note that mathematicians and cryptographers are pretty good with computers compared to, say, most biologists. (Computer scientists do appear to blog more often, but even they don’t Twitter all that much, do they?)

        It *is* true that the technologists of Silicon Valley are Twitter-obsessed; but I see no real reason to suspect that this obsession will spread to non-technologist “elites”, or even to the technologists of, say, London or Delhi.

        I’m sorry, but I must be off now, so I’ll cut this short here.

        • There is a lot of emerging evidence that 2.0 technologies ARE sparking successful attacks on big global problems. See Rob Salkowitz’s new book Young World Rising to see how young people, in the developing world in particular, are using 2.0-type technologies to attack very tough development challenges. OLPC is high-profile in the West, but is not taken very seriously by the people who really work on this stuff.

  14. I think it’s okay for Twitter to be both the problem and the solution. I assume a lot of this is contextual: based on your network, on whether there is a gain per the 80/20 rule, and on whether you are enjoying the noise that often comes with social media.

    Loved your thoughts on curating the system.

    -pd

  15. Ho-Sheng Hsiao says

    Last winter at around this time, a lot of ideas converged. This idea that “people are not using technology; technologies are using people” was one of them, coming from Kevin Kelly’s writing on the Technium. It wasn’t something I could emotionally accept at first, but it was too rational to dismiss. I saw it everywhere after that. When Jesse Schell gave his talk on the gamification of life and mentioned that maybe, just maybe, people can participate in a gamified world to break back into reality, I knew something was wrong. Having some practice with Buddhist mindfulness, I know “reality” is to be had by anyone, anywhere, anytime. That’s when I realized that it isn’t so much that humans want to use technology to break through to reality; it is that technology uses humans to break through to reality.

    I hadn’t really thought about the immediate implications, though. You’re right that more automation technologies make sense. I think I was coming from a world where intent matters a lot more and automation is assumed, so I took those ideas for granted. I’m rethinking this now. This might be the “three moves ahead” I’ve been looking for in 2011.

    As a side note, I have found “two moves ahead” and it involves Google Wave. And it will be awesome. I’ve found a good use-case for it. And because people have dismissed the technology along with the product, I get an information advantage. Whether people will pay for it will be something else…

    • :) Better come up with a marketing position that does not inherit Google Wave. No matter how good your idea, the perception will kill you. Bury the Wave inheritance somewhere on an about page.

  16. Justin Mares says

    How do you still feel about this post? Do you still think this is the future of the web?