The Big Switch by Nicholas Carr

Nicholas Carr, famous for being among the first to publicly point out, in IT Doesn’t Matter, that investment in information technology had gone from being a differentiator to a cost of doing business, is back in the limelight with an ambitious new book, The Big Switch (website). It starts out with a fairly focused intent — to understand the potential shift to a service-oriented, utility-based model of computing. It accomplishes that intent rather hurriedly, but reasonably well, and then marches on to bigger things, with mixed results.

The overall recommendation: well worth a read, so long as you stay aware of a couple of critical blind spots in the book’s take.

The Highlights

Anchoring the book is an extended analogy between utility-based computing and the shift, a century ago, from captive industrial power and candles to electricity grids. The analogy is quite detailed, down to a comparison between Samuel Insull, Edison's one-time financial consigliere and later a pioneer in creating the modern electric-power industry, and Marc Benioff, founder of Salesforce.com and one-time right-hand man of Larry Ellison at Oracle.

The title and setup evoke metaphoric visions of a giant, worldwide God computer (there is a chapter titled iGod), tightly integrated with humans a la The Matrix, and working through a vast infrastructure of both centralized and peer-to-peer computational intelligence. Many (myself included) believe in some version of this vision, and are placing bets accordingly. The ingredients Carr chooses to pick out and weave into his synthesis are the usual suspects; if you aren't familiar with the raw material, here is the short list:

You could extend this cast of characters to include more minor players, but that’s enough to give you an idea of the unfolding drama Carr is attempting to chronicle.

Now the challenge in analyzing this grab-bag of fundamental infrastructure trends is to come up with a conceptual model that is somewhere between the dry, dull (and usually short-sighted) white papers produced by professional analysts, and purely metaphoric notions of a Borg/Gaia-like iGod.

Carr’s is a brave attempt, but doesn’t get there.

I had high hopes that something would come of the 'Big Switch' in the title, but the raw material doesn't quite come together coherently, except by analogy to the story of electricity, which suggests that big computing utility companies will emerge soon and that we might see antitrust lawsuits against the biggest data center and SaaS outfits. Carr is tantalizingly close, though. You get the feeling that if he'd just made one more leap of faith and imagination, he'd have come up with a very strong synthesis.

So this part of the book is definitely worth a read. If you work in the field, you will probably be familiar with most of the key technologies he discusses. My own work is slap bang in the middle of this sort of stuff, so I found only a few minor elements that I hadn’t already encountered or thought about. But seeing all the ingredients stewing together in one book was thought-provoking. Certainly a lot more thought-provoking than mulling Microsoft’s obscure and confused .NET version of the grand vision.
If you don’t work in the field, then you definitely need to read the book. I strongly suspect that people who only encounter the highly-visible and apparently chaotic consumer end of this wave of technology (using Flickr, whiling time away on Facebook, or complaining that Google Docs isn’t as good as Microsoft Office) haven’t realized that the apparent chaos of Web 2.0 is being driven by a small handful of deep infrastructural changes that will do more than just change the way you share photos and music.

Over-reach or Grand Narrative?

Now for the second part of the book that goes beyond the simple electricity-analogy treatment of utility/service-oriented computing. Here Carr attempts to explore the potential impact of the Big Switch on the world at large. This part of the book is weak.

As an example of what he attempts, consider this narrative thread: the impact on culture. Carr's analysis starts with Schelling's famous self-sorting argument for explaining residential segregation. Next, he cites research at the University of Colorado showing that, when allowed to, people seek out like-minded friends and polarize further against ideological adversaries. Then, clutching hopefully at the idea that cheap/zero-cost information flows might enable the same dynamic online, he ends with a heavily hedged vision of cyber-balkanization. A hop-skip-jump just-so story, in fact. Entertaining, but not quite a solid foundation for the scale of mega-trend spotting he seems to be shooting for (in fact, I think my own germinating approach to this issue, which I plan to develop further, is more promising).
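
If you haven't met the Schelling model, the dynamic is easy to see in simulation. The sketch below is purely my own illustration (nothing from the book or the Colorado research, and the parameters are arbitrary): agents with even a mild preference for like neighbours, moving only when that preference is violated, end up in sharply segregated clusters.

```python
# Toy Schelling-style self-sorting simulation (illustrative only).
# Two kinds of agents on a grid move whenever fewer than THRESHOLD of their
# neighbours are like them; even this mild preference produces segregation.
import random

SIZE = 20          # grid is SIZE x SIZE
EMPTY_FRAC = 0.1   # fraction of cells left empty
THRESHOLD = 0.3    # an agent is content if >= 30% of its neighbours match it

def make_grid():
    cells = []
    for _ in range(SIZE * SIZE):
        r = random.random()
        cells.append(None if r < EMPTY_FRAC else ('A' if r < (1 + EMPTY_FRAC) / 2 else 'B'))
    return [cells[i * SIZE:(i + 1) * SIZE] for i in range(SIZE)]

def unhappy(grid, x, y):
    me = grid[y][x]
    if me is None:
        return False
    like = other = 0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dx == dy == 0:
                continue
            nx, ny = (x + dx) % SIZE, (y + dy) % SIZE
            neighbour = grid[ny][nx]
            if neighbour is None:
                continue
            like += neighbour == me
            other += neighbour != me
    total = like + other
    return total > 0 and like / total < THRESHOLD

def step(grid):
    empties = [(x, y) for y in range(SIZE) for x in range(SIZE) if grid[y][x] is None]
    movers = [(x, y) for y in range(SIZE) for x in range(SIZE) if unhappy(grid, x, y)]
    random.shuffle(movers)
    for x, y in movers:
        if not empties:
            break
        nx, ny = empties.pop(random.randrange(len(empties)))
        grid[ny][nx], grid[y][x] = grid[y][x], None   # move to a random empty cell
        empties.append((x, y))
    return len(movers)

grid = make_grid()
for t in range(50):
    if step(grid) == 0:   # nobody wants to move: the clusters have locked in
        break
```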

Privacy issues and security (terrorists using Google Earth) are treated in a similar manner.

In his defense though, nobody else has done a Toffler-esque future-visioning for the world being shaped by the Big Switch trends either.

So verdict for this part: definitely overreach. The convincing Grand Narrative isn’t here yet.

The Critical Blind Spots

I’ll finish up with the critical blind spots I mentioned. So long as you are aware of these, the book is a safe and useful read:

  • Not a Done Deal: Carr seems to think utility/service-based computing models are a done deal, and that only the engineering detail and economic logic need to be worked out; that we soon won't need anything more than browsers on our PCs. Far from true (though I wish it were). The fundamental science is far from mature, and several pieces of the puzzle are very dubious indeed. Further breakthroughs are clearly necessary. But that's a geekier discussion I can take offline with those of you who are interested.
  • Centralized vs. P2P: Carr also lightly glosses over the distinction between the centralized and peer-to-peer aspects of the vision (huge, earthquake-resistant compute clouds on the one hand, Napster-ish P2P ideas on the other). He dismisses the distinction as irrelevant, saying that it is the 'centralized coordination' that is the key feature, whatever the physical morphology. Again, not true. The distinction might well end up being critical and substantively change the story that evolves.
  • It ain't Electricity: Finally, the electricity analogy. Carr smartly covers himself by noting that the analogy is limited, yet he doesn't spend much time exploring the ways in which it is limited. Information is a fundamentally different beast from energy, and the fact that bits can be delivered remotely as easily as watts is not sufficient to anchor an argument for utility-based computing. Portability is not the only (or even the most salient) feature of bits.

Not that I have better treatments of these problem areas to offer, but for now, I’ll satisfy myself with pointing out that they are problem areas.

But overall, like I said, good book. Not to mention timely.


Comments

  1. This is a response to your latest “Slate” clip, but I thought I’d do it here.

    Desktop apps and web apps are not as disjoint as the Slate writer imagines; they form a continuous spectrum.

    On the one hand, you have your traditional desktop apps, like Word and Photoshop, which reside and operate exclusively on the desktop.

    You might think that Firefox is a desktop app. But it checks for updates periodically, and auto-downloads and installs them. It can also sync persistent state, like bookmarks and options, with a central server.

    So, is Firefox really a “web app”, heavily cached and executed locally? More and more traditional desktop apps are sporting features like “Live” or “Online”, to enable sharing and other net-centric behaviour.
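
    That phone-home behaviour is only a few lines of code. A minimal sketch, assuming a hypothetical JSON version endpoint rather than Firefox's actual update protocol:

    ```python
    # Sketch of the "desktop app that phones home" pattern described above.
    # The endpoint and version file are hypothetical.
    import json
    import time
    import urllib.request

    UPDATE_URL = "https://updates.example.com/myapp/latest.json"  # hypothetical
    LOCAL_VERSION = "3.0.1"

    def check_for_update():
        with urllib.request.urlopen(UPDATE_URL, timeout=10) as resp:
            latest = json.load(resp)          # e.g. {"version": "3.0.2", "url": "..."}
        if latest["version"] != LOCAL_VERSION:
            print("update available:", latest["version"])
            # a real client would download latest["url"], verify a signature,
            # and swap binaries on next restart

    while True:                               # background poller
        try:
            check_for_update()
        except OSError:
            pass                              # offline: behave like a plain desktop app
        time.sleep(6 * 60 * 60)               # poll a few times a day
    ```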

    At the other end of the spectrum you have the pure client-server web app, serving HTML to the client and keeping all the processing and persistent state on the server side.

    This runs into serious speed-of-light limitations in many cases, which is why the next generation of web apps, like Google Docs, passes some of the code to the client to execute locally, while still keeping all the state and the bulk of the processing on the server. This improves responsiveness quite a bit, though we still have the issue of network outages.

    Finally we come to the Google Docs + Gears model, which lets your web app store persistent application state locally, so that it can continue to work even if the machine is offline. From this to caching code is but a step, and then where are we? Picking our way through a continuum, where it would be difficult to point out where one stops and the other begins.
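
    The Gears-style offline model is easy to sketch: write to a local store first, and push to the server whenever the network cooperates. A minimal illustration, with a hypothetical sync endpoint:

    ```python
    # Offline-first sketch: edits land in a local SQLite queue immediately and
    # are pushed to the server when a connection is available. SYNC_URL is made up.
    import json
    import sqlite3
    import urllib.request

    SYNC_URL = "https://docs.example.com/api/sync"   # hypothetical server endpoint

    db = sqlite3.connect("local_cache.db")
    db.execute("CREATE TABLE IF NOT EXISTS pending (id INTEGER PRIMARY KEY, payload TEXT)")

    def save_edit(edit: dict):
        """Persist the edit locally first, so the app keeps working offline."""
        db.execute("INSERT INTO pending (payload) VALUES (?)", (json.dumps(edit),))
        db.commit()

    def try_sync():
        """Push any queued edits to the server; silently give up if offline."""
        rows = db.execute("SELECT id, payload FROM pending ORDER BY id").fetchall()
        for row_id, payload in rows:
            req = urllib.request.Request(SYNC_URL, data=payload.encode(),
                                         headers={"Content-Type": "application/json"})
            try:
                urllib.request.urlopen(req, timeout=10)
            except OSError:
                return                           # offline or flaky: keep the edit queued
            db.execute("DELETE FROM pending WHERE id = ?", (row_id,))
            db.commit()

    save_edit({"doc": "essay.txt", "op": "insert", "text": "hello"})
    try_sync()
    ```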

    “The question is, which is to be Master – that’s all”

  2. Hmm… I see your point about boundary blurring, but I think there could be a way to frame this issue in more fundamental ways, maybe flops/location as a measure of decentralization of computation or something. There’s got to be a deeper way to analyze this stuff than in terms of specific example architectures/design patterns like auto-updates, Gears… there’s an info-theory model somewhere here.
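
    Something like this, maybe: treat each location's share of total flops as a distribution and take its normalized entropy (a toy sketch, not a worked-out metric): 0 when everything runs in one data center, 1 when compute is spread evenly across locations.

    ```python
    # Toy decentralization score: normalized entropy of flops-per-location shares.
    import math

    def decentralization(flops_by_location: dict) -> float:
        total = sum(flops_by_location.values())
        shares = [f / total for f in flops_by_location.values() if f > 0]
        if len(shares) <= 1:
            return 0.0                                   # all compute in one place
        entropy = -sum(p * math.log2(p) for p in shares)
        return entropy / math.log2(len(shares))          # scale to [0, 1]

    print(decentralization({"one-big-cloud": 1e15}))                           # 0.0
    print(decentralization({"dc-east": 5e14, "dc-west": 5e14}))                # 1.0
    print(decentralization({"cloud": 9e14, "laptops": 5e13, "phones": 5e13}))  # in between
    ```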

    I personally do think localized computation has a role to play once you factor in hard real-time control (which is tough to do over TCP/IP, but is apparently easier over raw UDP, I am told). This will become increasingly important as we go from computers to mobile devices to robotic devices that sense/act. The heterogeneity and material interactivity of the hardware should drive some interesting dynamics.
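
    Roughly, the appeal of raw UDP there is that each datagram is fire-and-forget: a lost control sample never stalls the samples behind it the way TCP retransmission and in-order delivery can. A toy sketch, with a made-up address and packet format:

    ```python
    # Fire-and-forget control loop over UDP: no handshake, no retransmit queue,
    # so a dropped sample costs one tick, not a stall. Address/format are made up.
    import socket
    import struct
    import time

    ACTUATOR_ADDR = ("192.168.1.50", 9000)     # hypothetical robot controller

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for tick in range(1000):
        command = struct.pack("!Id", tick, 0.25)   # sequence number + setpoint
        sock.sendto(command, ACTUATOR_ADDR)
        time.sleep(0.001)                          # ~1 kHz control loop
    ```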

  3. Well, usually the way it happens is, someone solves a real life problem, then a grad student comes along to express it in terms of Greek letters and curlicues :)

    But yes, there is an interesting theory buried somewhere.

    One aspect of this theory is: trust boundaries.

    You can encrypt data before sending it off to be stored in Amazon S3 or some other remote store. You don’t need to trust the provider. But when you start doing any kind of non-opaque processing on the data offshore (like with Google mail or docs or spreadsheets or salesforce.com), you are trusting the provider not to misuse the data. A “Don’t be Evil” mission isn’t good enough.
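
    To make that trust boundary concrete, here is a minimal sketch of opaque, encrypted storage (my own illustration; it uses the third-party cryptography package, and the upload function is a stand-in for whatever S3 or other client you actually use):

    ```python
    # The remote store only ever sees ciphertext, so opaque storage requires
    # no trust in the provider. upload_to_remote_store is a placeholder.
    from cryptography.fernet import Fernet

    def upload_to_remote_store(name: str, blob: bytes) -> None:
        # placeholder: in real life, e.g. an S3 put_object call
        print(f"uploading {len(blob)} opaque bytes as {name!r}")

    key = Fernet.generate_key()        # stays on your machine, never leaves it
    box = Fernet(key)

    document = b"quarterly numbers the provider has no business reading"
    upload_to_remote_store("report.enc", box.encrypt(document))

    # Retrieval is the mirror image: download ciphertext, decrypt locally.
    # The catch noted above: the moment the provider must *process* the
    # plaintext (search it, render it, total a spreadsheet), this scheme
    # breaks down and you are back to trusting them.
    ```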

    Does this impose a fundamental limitation on utility computing? Perhaps not. My favourite theory is that Trusted Computing will be turned on its head.

    Trusted computing refers to technology that can be employed to impose the will of the content owner (the music/movie industry) on machines owned by consumers. Only signed software can run on these machines, and only signed software can process protected (DRM) content. A good example of a current implementation is the Xbox.

    Utility computing providers will deploy TC-enabled computers. The "trusted" software stack that processes end-user data needs to be subject to third-party audit and signature (so I guess it will be open source). Your local machine will verify that the remote machine is running a trusted stack (much the way we identify websites with certificates signed by Verisign or another CA) before handing it a key to decrypt the offshore data and process it in some meaningful way.
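
    In toy form, the handshake I have in mind looks something like this (made-up keys, hashes and names; real remote attestation, with TPM quotes and the rest, is considerably more involved):

    ```python
    # Toy attestation handshake: the provider presents a measurement of its
    # software stack signed by an independent auditor, and the client releases
    # the data-decryption key only if the measurement checks out.
    import hashlib
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric import ed25519

    # --- the independent auditor signs a known-good stack measurement ---------
    auditor_key = ed25519.Ed25519PrivateKey.generate()
    AUDITOR_PUBLIC = auditor_key.public_key()

    trusted_stack = b"linux-6.1 + audited-open-source-docs-stack-2.3"
    trusted_measurement = hashlib.sha256(trusted_stack).digest()
    auditor_signature = auditor_key.sign(trusted_measurement)

    # --- the client side: verify before handing over the key ------------------
    APPROVED_MEASUREMENTS = {trusted_measurement}

    def release_key_if_attested(measurement: bytes, signature: bytes, data_key: bytes):
        try:
            AUDITOR_PUBLIC.verify(signature, measurement)   # auditor really signed it
        except InvalidSignature:
            raise RuntimeError("measurement not signed by a recognised auditor")
        if measurement not in APPROVED_MEASUREMENTS:        # and it is a stack we approve
            raise RuntimeError("remote machine is running an unapproved stack")
        return data_key                                     # only now does the key leave home

    data_key = b"32-byte-symmetric-key-goes-here!"
    release_key_if_attested(trusted_measurement, auditor_signature, data_key)
    ```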

    I think there’s a nice niche out there waiting for a first mover :)