Digital Security, the Red Queen, and Sexual Computing

There is a technology trend which even the determinedly non-technical should care about. The bad guys are winning. And even though I am only talking about the bad guys in computing — writers of viruses, malware and the like — they are actually the bad guys of all technology, since computing is now central to every aspect of technology. They might even be the bad guys of civilization in general, since computing-driven technology is central to our attacks on all sorts of other global problems ranging from global poverty to AIDS, cancer, renewable energy and Al Qaeda. So turning around and winning this war might even be the single most important challenge facing humanity today. Even that bastion of the liberal arts and humanities, The Atlantic Monthly, has taken note, with this excellent feature on how the best security researchers in the world are losing the battle against the Conficker worm. Simple-minded solutions, ranging from “everybody should get a Mac” to “just stick to Web-based apps and netbooks” to “practice better digital hygiene” are all temporary tactical defenses against an adversary that is gradually gaining the upper hand on many fronts. I have concluded that there is only one major good-guy weapon that has not yet been tried: sexual computing. And it hasn’t been tried because major conceptual advances in computer science are needed. I’ll explain what I mean by the term (it is a fairly obvious idea for those who know the background, so there may be more accepted existing terms for the vision), but I’ll need to lay some groundwork first.

The Ground and Air Wars

The bad-guy ecosystem today is unbelievably diverse. Digital threats abound, from viruses and malware on your PC, to attacks on Web servers, Twitter and Facebook hacks and large-scale identity data theft from banks. Add cyberwarfare among countries and between countries and MNCs (as in, Google vs. China), and the existence of organized botnets, and the term “hacker” starts to seem quaint. You could broadly break this down into a ground war (stuff happening at the level of your PC or other end-user devices) and an air war (involving web servers, organizations (criminal, state, stateless and “good guy”), and the cloud infrastructure).

The old semantic quibbling by the good guys (“we are the hackers, they are the crackers”) has gone from merely annoying to dangerously inaccurate, because it suggests that the war is between two anarchic groups of roughly similar lone individuals. The ecosystem on both sides is now so complex and organized that we need a whole new set of names just for the various roles. For instance, there are good guys who do nothing besides run rooms full of open, exploitable computers as “honeypots” to attract the newest malware. And there are other good guys working on esoteric encryption technologies. On the bad guys’ side, there are low-level foot soldiers who troll around online and offline looking for credit card numbers to steal, and then there are the big mob bosses running botnets, and (I assume) liaisons who manage the financial relationships between spammers and the pornography industry. Things are so complex that even the experts on both sides have to specialize. Even within my workgroup, the three people I rely on to educate me on this stuff have different areas of expertise.

I’ll leave it to a digital ethnographer to figure out good names all around. For now, I am just going to call them “good guys” and “bad guys.”

Barbarians at the Gates

Let me share my personal experiences with the bad guys, as a poorly-armed civilian digital homesteader. It is no contest. I feel like an 1850s settler in the American West, in my sod hut on the prairie, with only an old musket to defend against the Medellin Cartel, armed with Uzis and helicopters. I am actually particularly lucky, since I work closely with a couple of learned security guys, and can call on them for help when I need to. Most of you are on your own. The only reason I haven’t suffered really badly is that I am just not important enough to be worth individual attention (unlike say, Paris Hilton and her cellphone).

When I first encountered a virus in the mid-80s (an infected floppy, on a pre-hard-disk PC), the war against the bad guys was no more than a bunch of isolated skirmishes against not-very-skillful digital vandals who were in it for fun. Any technically-minded person could educate themselves on all the details in a week, and reformatting floppies was all it took to get rid of threats and get back to your life after a brief interruption (today, a serious digital security problem can stop your life cold, as comprehensively as a heart attack).

Then for a couple of decades, I was basically safe (and lucky) in Pax Digitalia. I recall no serious virus-like problems in my increasingly active digital life between about 1990 and 2007. I kept up with the news and best-practice advice and, rather sloppily, with my antivirus updates, which seemed to be working. Then around 2007 all hell started breaking loose. I was under the impression that if you kept your anti-virus software up to date, avoided shady (in particular porn) websites, were smart with passwords, and didn’t download suspicious attachments, you were safe. Apparently not. Here’s a rundown of stuff I’ve encountered since 2007.

  • As End-User: Three serious malware infections. All three blocked my main anti-virus software from getting updates, and mightily resisted attack by multiple alternative anti-malware programs, most of which failed to even find anything wrong. In the first case, what eventually did the trick was running a couple of different programs in a specific sequence, very quickly after a reboot. In the second case, I had to resort to one of the lesser-known anti-malware programs because the bad guys had apparently figured out how to block all the most popular ones. In the last case, I had to upgrade from XP to Windows 7. I am now seeing small signs that my Windows 7 PC is probably infected again. In each case, the symptom was the usual one: unwanted ads popping up all over the place.
  • As Website Owner: I discovered, purely via a casual check, that Google was blocking one of the parked domains I own as “suspected of distributing malware.” Further checking revealed that three of my domains (in fact, all of them except this one) had been hacked and contained malware-distributing code. I had to clean up my sites, lock them down, and get them off Google’s blacklist. This shattered my illusion that Unix systems were fundamentally safer than Windows systems and that ISPs take care of this stuff. Ribbonfarm escaped (I think) because it runs WordPress, which adds an additional line of defense, but that’s hardly much comfort, since WordPress has its own changing set of exploitable security holes. Its vulnerability goes up and down as it evolves.
  • As Customer of Big Organizations: I received one of those ominous letters from an organization I used to be part of, telling me that I was among several thousand users whose personal data had been stolen from the organization, and offering me a free subscription to an identity-theft fighting company’s services to manage any potential consequences. Fortunately, my identity didn’t seem to have been stolen.
  • As Professional Technologist: Finally, Trailmeme, the project I manage for Xerox, which runs on Amazon’s EC2 infrastructure, was for a while an innocent civilian site caught in a broader war. We couldn’t send email from our servers because a vendor of lists of spam sources (used by many firewalls) had added a lot of Amazon-owned IP addresses to their blacklist. For those who aren’t familiar with cloud computing, services like Amazon’s allow you to juggle Web servers like a circus clown, which adds a whole new layer of obscurity and illegibility to Web infrastructure, something that helps the bad guys more than the good guys. Compute clouds, like the real things, obscure visibility. It took some hard work from my team members to get ourselves off the blacklist. More broadly, I’d estimate that the time my development team spends on building the security features of our product is a very non-trivial fraction of its total effort.
  • An Autoimmune Collapse: Like many of you, my laptop, running XP, succumbed to that strange auto-immune mess a month ago, when a flawed McAfee update deleted a legitimate and critical system file, crashing my system comprehensively. I am sure the bad guys were laughing it up, watching the good guys trip over their own feet.
  • Twitter hacks: I accidentally gave my Twitter login information to fake services twice before I wised up and learned how to tell legitimate Twitter ecosystem services from exploits (100% certainty is impossible, of course). Now it looks like I’ll have to learn a new set of Facebook security behaviors.

And I am not even counting baseline bad-guy stuff, like the fact that there is more mail caught in my spam folders than legitimate stuff in my inbox, or that this site attracts more spam comments than real ones (so far, Akismet is keeping up). That’s my relatively-informed civilian view of the war. That I even understand this much is because I am an engineer (aerospace, not computer) and work directly with computing technology and software professionals. Chances are, you are not exposed on all these fronts, but the fact is, the bad guys are slowly gaining the upper hand on all of them, and you will be affected, directly or indirectly. Chances are, you imagine your online life is governed by social contracts and the rule of law, like your city. Perhaps you think that the online world is just a little bit more Wild West. Like that one small bad neighborhood you avoid in your town.

You are living in a bubble. There is no rule of law; the digital landscape is mostly small islands of civilization surrounded by ungoverned and (currently) ungovernable wild lands. The barbarians are at the gates, and Rome is closer to collapse than you think.

A Fragile Security Bubble

Even a movie like Live Free or Die Hard, which had a relatively sophisticated depiction of cyberanarchy, still bases its script on omnipotent digital Merlins on both sides. The good guys can hack into anything anywhere with just a cell phone, while the bad guys are led by one evil genius who can “shut down Norad with a laptop.” There are two desires driving such perceptions.

The first one, easy to dismiss, is simply Hollywood’s preference for strong individual heroes and villains, wielding tons of individual power. They know, and we know, that this is unrealistic. The second is a more dangerous desire: the desire to believe that what’s going on is actually simple enough that individuals or even small groups can comprehend and operate in the cyberanarchy. This is a problem of miscalibration. It is like mistaking World War II for a small-scale gang war in New York. Just because only a tiny fraction of the population is involved in combat does not mean that it is a small war. It merely means very few people have any combat training. If Hollywood were to truly do a cyberspace story, it would be more like The Longest Day, with multiple narratives and an ensemble cast, than a terrorist hostage thriller driven by a single pair of antagonists.

This isn’t entirely perverse Hollywood ignorance. The security companies and the major “good guy” vendors have fostered the illusion that they know what they are doing and have things under control. My Norton software, for instance (and I don’t mean to pick on them particularly), has a reassuring UI composed of bold green “good” check marks and red iconography for dangerous stuff. There are things like shields and glossy metallic-looking color schemes. When it runs checks, it tells me reassuring things I want to hear, like “Your System is Secure.” It is a manufactured sense of assurance. Windows promptly delivers key security updates. You get the sense that if you just behave, avoid bad neighborhoods, and keep up with the good guys, you’ll automatically stay ahead of the bad guys. You don’t realize the good guys are the ones who are behind and trying to catch up, until they fail you. When your defenses fail, you end up in Dr. House mode: trying one diagnostic test after another, trying different defender programs in varying sequences, gradually losing heart as you contemplate that nuclear option, a full reformat and OS reload (and several weeks of lost work and costly information recovery). Norton would like you to believe that their program is all you need, and that the big, reassuring “Scan Now” button is all you need to hit to magically get rid of every digital ill.

Unfortunately, that’s not true, and can’t be true; there are no digital panaceas, any more than there are biological ones. There’s even a theoretical result that states that figuring out whether there is a virus on your computer is a formally undecidable problem (“undecidable” has a very precise meaning in computer science, but for our purposes, all you need to know is that no single “Scan Now” button can ever ensure complete security, even in theory).
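The undecidability result (usually attributed to Fred Cohen’s early work on computer viruses) can be illustrated with a toy diagonalization. The sketch below is a loose Python illustration, not the formal proof: it assumes a hypothetical, would-be-perfect `detector` function and shows that any fixed verdict it gives can be contradicted by a program built around it.

```python
def make_contrarian(detector):
    """Build a 'program' that consults the detector about itself
    and then does the exact opposite of the verdict."""
    def contrarian():
        if detector(contrarian):    # detector says "this is a virus"...
            return "stays benign"   # ...so it does nothing harmful
        else:                       # detector says "this is clean"...
            return "spreads"        # ...so it behaves like a virus
    return contrarian

# A hypothetical perfect detector must answer one way or the other.
def says_virus(program):
    return True

def says_clean(program):
    return False

# Whichever verdict the detector gives, the contrarian makes it wrong.
print(make_contrarian(says_virus)())   # flagged as a virus, yet benign
print(make_contrarian(says_clean)())   # cleared as clean, yet it spreads
```

The same self-reference trick underlies the halting problem; the point for this essay is just that perfect detection is mathematically off the table, not merely an engineering shortfall.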

Which brings me to the biological metaphor that’s been around since the beginning of the war. The biological metaphor is getting more solid every day, as the digital ecosystem becomes increasingly organic. The good news is that things have gotten sophisticated enough that we can borrow very powerful elements from biology now. I am talking about an idea called the Red Queen.

Homogeneity and the Red Queen

The war we are talking about has the character of an arms race. The defenders and attackers both have to keep working harder and harder just to keep the defended where they are. Even if you’ve done nothing online in the last 15 years but send email and read the New York Times online, just maintaining those capabilities has cost both the bad guys and the security establishment ever-increasing amounts of money. It’s like the escalating military budgets on both the American and Soviet sides during the Cold War. Eventually, one side can’t keep up the spending. In that case, communism couldn’t keep up. In this war, it is starting to look like it is the good guys who can’t keep up.

This “running to stay in the same place” is the reason people like the phrase “Red Queen” to describe such dynamics (I assume you know your Lewis Carroll). In biology, the best example is the arms race between hosts and parasites (which is why the “virus” metaphor works so well). We all like big, powerful creatures and pay more attention to predator-prey interactions (hence our shark shows and lion/tiger documentaries). But parasite-host dynamics may well have been the more important driver in evolution.

Matt Ridley’s very entertaining The Red Queen, a book about sexual selection in biology, explains the very compelling theory that sexual reproduction evolved primarily as a defense against parasitism. It turns out that this is the most general sort of defense known. Why?

The reason the bad guys are winning the cyberwars is that they have one major advantage: mass production of computing infrastructure. Find one hole in one computing system, attack it in every computing system that looks like it. Even penny-scale benefits multiply into millions of dollars. Economies of scale and mass production of any sort invariably create security brittleness and hand the bad guys a decisive advantage: enormous leverage.  This isn’t a particularly new insight. In agriculture, monoculture crop lands can be devastated by a single bug. Airlines and air forces that use homogenous fleets can be laid low by a single defect. Diversity breeds robustness. Every bit of information that can be used to exploit a system has less leverage.
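The leverage argument can be made numerically with a minimal back-of-envelope simulation (illustrative only; the OS labels and the figure of 100 variants are made-up parameters): one exploit keyed to a single configuration compromises an entire monoculture, but only a thin slice of a diverse fleet.

```python
import random

random.seed(42)

def compromised_by_one_exploit(fleet):
    """One exploit targets a single configuration; count the machines
    in the fleet that share it and therefore fall."""
    target = random.choice(fleet)
    return sum(1 for cfg in fleet if cfg == target)

N = 10_000
monoculture = ["os_v1"] * N                                   # every machine identical
diverse = [f"os_v{random.randrange(100)}" for _ in range(N)]  # 100 hypothetical variants

print(compromised_by_one_exploit(monoculture))  # all 10,000 fall to a single flaw
print(compromised_by_one_exploit(diverse))      # roughly N/100, about 100, fall
```

The attacker’s payoff scales with how many machines share the flaw, which is exactly why mass-produced homogeneity hands the bad guys their leverage.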

The problem with diversity though, is that the amount of diversity required to stay ahead of the parasites is far higher than the amount of diversity required to actually accomplish whatever the systems are designed to do. You need only one airplane design to run an airline, but to make it robust against single-point failures, you need more varieties, which add costs faster than they add any useful advantages. That’s one reason Southwest is so cheap. They’d be in trouble if a serious flaw were discovered in 737 designs (and I imagine they’ve thought through and insured against such scenarios).

Let’s distinguish two types of diversity. One is simple inter-species diversity. If there are cats and dogs around, cat diseases will probably not jump over and decimate the dog population. But this fact doesn’t particularly help cats. It makes the ecosystem as a whole more stable, but not cat populations. Having both cats and dogs around in a mixed group reduces the frequency of cat-cat contact and transmission (since there are now dog-cat interactions), so species diversity slows down the spread of dangers through individual populations. For the minority species, it also adds a kind of protection-of-minorities effect, since parasitic attackers will find more room to grow in the majority species (think Windows vs. Macs until recently). But these are minor advantages.

The technology ecosystem is undergoing an explosion of this kind of species diversity. There are now vastly more kinds of “computer” devices than ever before. It isn’t as significant as it might seem on the surface, since the number of operating systems behind this diversity is smaller than the number of device types. It might even be a net loss, because most of this new diversity is in the form of tethered devices (like your Wii or TiVo) that you don’t really have access to, and that are hooked into a large-scale system with its own vulnerabilities of scale. You can’t defend yourself, and you are hooked into single-point failure modes on the backend.

The other kind of diversity is intra-species diversity. Different kinds of cats in short.

Here, sexual reproduction drives security because it limits the utility of a parasitic advantage in time and space. At any given time, a parasite that evolves to exploit a flaw in a particular genetic type can only spread to other individuals that share that vulnerability. The advantage is also automatically temporary, since the next generational churn of the gene pool could remove the exploitable pattern, or contain a defense.
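A toy simulation makes this mechanic concrete. The genome size and population numbers below are arbitrary illustration, but the dynamic is the standard one: a parasite keyed to one exact genotype keeps working against asexual clones indefinitely, while a single round of random mating makes that exact pattern rare again.

```python
import random

random.seed(0)

GENOME_BITS, POP = 16, 1000

def mate(a, b):
    """Uniform crossover: each locus is inherited from one parent or the other."""
    return tuple(random.choice(pair) for pair in zip(a, b))

population = [tuple(random.randrange(2) for _ in range(GENOME_BITS))
              for _ in range(POP)]

# A parasite evolves a key to one particular genotype...
exploit = population[0]

# Asexual copying: every descendant is a clone, so the key works forever.
clone_generation = [exploit] * POP
print(sum(1 for g in clone_generation if g == exploit))  # all 1000 vulnerable

# Sexual reproduction: after one round of random mating, the exact
# 16-bit pattern the parasite depends on has been shuffled away.
next_generation = [mate(random.choice(population), random.choice(population))
                   for _ in range(POP)]
print(sum(1 for g in next_generation if g == exploit))  # a handful at most
```

Note the asymmetry: the clones stay uniformly exploitable with zero effort, while the sexual population pays the cost of recombination and in exchange makes any specific exploit self-expiring.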

The cost of this intra-species diversity defense is sex. A fun cost you might say, but a cost nevertheless, since continually churning out new functionally identical designs is work, and because a whole new Red Queen’s race emerges (the focus of Ridley’s book): the one between male and female. Let’s not go there, read the book if you are curious. It might offend some of you politically, so you’ve been warned.

Biology isn’t the only place this happens. Among corporations, mergers and acquisitions serve a very similar function (an implicit premise in my Gervais Principle series; with the Clueless being the parasitic class).

Sexual Computing

That brings me to my big point. In this war, the good guys have no real offensive weapons, only defensive ones. They build what they hope are secure and safe systems, the bad guys find exploits, the good guys react, and the whole cycle repeats itself. Periodically, a good guy comes up with an architectural advantage that buys a period of peace.

This is asymmetric, and the advantage is with the bad guys. The good guys have to anticipate and block all known holes. The bad guys only have to find one oversight or new flaw (the Conficker story contains very scary examples of this kind of thing).

In biology and corporate ecosystems, sexual reproduction provides a true offensive weapon to the good guys. Sexual reproduction creates diversity fairly cheaply, without tying increasing diversity to the harder problem of increasing functionality. You have two control knobs: the frequency of mating, and the degree of mixing. Bad guys moving faster? Mix things up more frequently and broadly. The nice thing is that it is a generic defense, and one that can run somewhat ahead of the bad guys.

The problem is that nobody knows how to do sexual computing. That I know of. If any of you have kept up with the theoretical CS literature better than me, please educate me. Von Neumann showed decades ago that computer programs could evolve and reproduce, just like real biological systems, so long as there was a source of random mutations. There are things called genetic algorithms that allow individual programs that fulfill the same function to reproduce and evolve sexually. But as far as I know, nothing that allows entire computers to behave like sexual beings.
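For readers who haven’t seen one, here is a minimal genetic algorithm of the kind mentioned above, sketched in Python on the textbook “OneMax” toy problem (all the parameters here are arbitrary choices for illustration; note that this evolves bit-strings toward a fitness goal, not whole computers, which is precisely the gap the post is pointing at).

```python
import random

random.seed(1)
BITS, POP, GENERATIONS = 20, 60, 40

def fitness(genome):
    return sum(genome)  # OneMax: fitness is simply the count of 1-bits

def crossover(a, b):
    cut = random.randrange(1, BITS)  # single-point sexual recombination
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.01):
    return tuple(bit ^ (random.random() < rate) for bit in genome)

# Start from a random population; each generation, the fitter half breeds.
pop = [tuple(random.randrange(2) for _ in range(BITS)) for _ in range(POP)]
for _ in range(GENERATIONS):
    parents = sorted(pop, key=fitness, reverse=True)[:POP // 2]
    pop = [mutate(crossover(random.choice(parents), random.choice(parents)))
           for _ in range(POP)]

print(fitness(max(pop, key=fitness)))  # typically at or near the maximum of 20
```

The two control knobs from the previous section map directly onto parameters here: the generation loop sets the frequency of mating, and the crossover and mutation operators set the degree of mixing.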

What we want is an architectural paradigm that can churn the gene pool of computing design at a controllable rate, independently of advances in functionality. In other words, if you have a Windows PC, and I have one, we should be able to have our computers date, mix things up, and replace themselves with two new progeny, every so many weeks, while leaving the functional interface of the systems essentially unchanged. Malware threat levels go up? Reproduce faster. Today computing only evolves at the pace of needed (or available) new functionality. New OS versions come out when there are useful new features to be added (not counting cosmetic releases that are simply created to make money). That’s too slow. Yes, upgrading from XP to 7 cured one of my infections, but that was a side effect (and an unreliable one, since Windows 7 is an asexual descendant of Windows XP).

I have no idea how to do that, but it ought to be one of the Grand Challenges of computer science. I don’t believe it is, but the impending collapse of computing civilization, under the onslaught of digital barbarians, should really be a good enough reason to prioritize this challenge.

Are there other promising directions of attack? I can’t think of any, and neither, it seems, can biology, which has been at it for a billion years. It’s a whole other long debate, but every argument I’ve ever heard about how to make computing sustainably secure has been local and tactical. There will be no permanent victory, ever (theory tells us that), but we are in danger of losing even the fragile dynamic equilibrium that has been maintained so far. Parasites are not known for foresight. Even if they destroy themselves in destroying their hosts, they usually proceed anyway.

Sexual computing seems like the only strategic capability we could conceivably build to stay ahead. We might need a Manhattan project sized effort.


About Venkatesh Rao

Venkat is the founder and editor-in-chief of ribbonfarm. Follow him on Twitter


  1. I’ve been trying to decide if the fact that our version contains modelling of modelling changes the assumptions; sexual selection “using the same interface” might make spoofing the number one cause of attack; if your computer internals are completely new, how do you gauge that this program is actually doing what you asked it to, if a whole family of processes can look the same? And if you can distinguish functionality in that way, can you recreate the old exploit economies of scale?

    I wonder if the more mundane “OSs modularised and plugged together to suit your preferences” fulfils those criteria better than other solutions, because the diversity is doing double duty: personalisation of service and scrambling of the “generic computer”.

    How you bugfix that modularisation is another question, obviously (as it can’t be just some abstraction that renders all the computers identical again), but the service role of computers should form an additional pseudorandom seed that would require quite a bit of empathy to unravel, meaning that virus creators would be doing anthropological research at the same time!

  2. We could try taking computer crime more seriously by actually investigating it and sending bad guys to jail forever. In many respects, the increasingly professional nature of the bad guys makes this easy because there are a lot of guys who do this full time and thus cannot demonstrate normal, gainful employment. True, a lot of this does happen abroad, but if civilized nations make this a true priority — which it should be given the stakes — they can exert an enormous amount of leverage.

    • I think the fact that “a lot of this happens abroad” is the main fact, not a minor detail. In any country with decent rule of law and a good entrepreneurial environment, such talented people would find it far more lucrative to be on the good guy side. There are possibly cultural and historical factors too, but rule of law+capitalism is the main structural cure, I’d say.

  3. Great explanation (although I knew where you were going from your introduction). Unix is secure, but PHP removes many of its protections, including the distinction between data and executables and the separation of attackers from the admin user; combined with wanting to edit the configuration over the web, this means it’s very easy for an attacker to breach J Random Coder’s ad-hoc security and insert their own code.

    Gentoo kind of provides the environment you want, as each user runs their own custom combination of libraries compiled by themselves at different points in time. It’s still vulnerable if it’s running a webserver with PHP though, as that provides a homogeneous attack surface as described above.

    • “It’s still vulnerable if it’s running a webserver with PHP though, as that provides a homogeneous attack surface as described above.”

      That’s the state of affairs that’s problematic. How are end users supposed to become experts on which combos are safe and which ones not?

  4. Venkat – succinct post! To most ordinary people like me who are not computer science folks, this was a good piece that shows up vulnerability in one more area for the times to come. The challenges will not ebb until strict governance laws and quick convictions are in place!


  5. Pondering on whether there is a different approach we could take, I was reminded of Toffler’s statement in Powershift, “Law is sublimated violence.” The containment of crime and violence has been achieved through legal mechanisms including the enforcing authority of police and armies. So the good guys do essentially what the bad guys do but within a collectively agreed framework for the common good. So the infrastructure providers (clouds, ISPs, OS writers) need to write good viruses and worms that actively attack and contaminate devices and gradually increase the entry barriers for bad guys. Maybe it is time to revisit something like the chip identifier that Intel proposed (for piracy reasons) a decade ago but had to withdraw due to privacy concerns.

    @Andrew, @Manju: Cyber laws are getting tightened but the nature of this beast is different–identifying the bad guys is not all that easy.

    • Hmm… “good viruses” … you mean like find a hole, write a worm to propagate and exploit and close it all over, w/o user permissions? Slippery slope to digital vigilantism… it’ll get increasingly hard to tell good/bad apart. Like Don Fanucci vs. Vito Corleone :)


  6. Maybe I am an exception in this world, but I have worked on multiple platforms and over the past 8 years have only had 3 major infections. The most serious one was on a Linux server I used at work, which was compromised via a known ssh vulnerability that Redhat had not fixed. I only came to know of it when the university administration shut down my internet connection for suspicious traffic. The other ones were on Windows laptops, one of which was a really major pain to fix. Part of the problem was that the university administration recommended an antivirus that, when installed, prevented a dll from loading in MS Word, making Office crash. It was a bitch locating that flaw. Since then, I have operated without antivirus and have been pretty happy.

    I largely work on a Mac (no issues so far), Linux and XP running in a virtual machine in Linux. Personally, antiviruses have consumed far more of my time than viruses have. In my experience, a large number of vulnerabilities are taken care of simply by having a rather restrictive firewall, not loading USB drives automatically, and running as a limited user unless necessary. If I do end up with a virus, which does happen occasionally, I boot via a Linux live CD, delete the registry entries that execute the viruses, and remove the programs themselves. Unlike you, the usual way I detect an infection is that I cannot safely ‘eject’ a USB drive.

    • You ARE an exception Farhat. You’re describing 0.01% power user behavior. Multi-OS VMs? dlls? SSH vulnerabilities? Manual registry editing?

      If that’s the level of knowledge it takes to be well defended (and you say you HAVE had issues), then the average citizen is screwed.

      If we were talking minor unwanted ads, it would be a mere annoyance. But add keystroke logging and screenshots, and this is an environment ripe for routine digital mugging. I suspect the only thing holding the bad guys back from using everything they have access to is the signal-to-noise ratio (hunting for id theft info in general logs would still take personal attention I imagine).

      • Just after I hit submit on that, I regretted it. Partly because even I’ve given up on security for others. Previously (say 4-6 years back), people would occasionally bring me their infected laptop and I would remove the few viruses they had, clear bad registry entries, and things would be fine. Now, my reply is ‘I don’t know windows that well’. Partly because I haven’t used Vista and have no experience with it, but largely because viruses have gotten incredibly complex and harder to remove. Even on XP, where I am comfortable, removing every trace can take hours of work, and usually my advice would be to reformat and reinstall from a clean backup.

        Now, onto your evolutionary computing idea. It sounds fine in theory but I am not sure how it would work in practice. Evolution is really inefficient. At every generation there is a huge number of failed matings (failed in the sense that conception occurred but no viable offspring resulted). In a sense, evolution is blindly groping for a higher fitness level, and the way it is done is you make a number of copies with variations and see which of them survive. Most of them will not. Even in higher animals like humans, half or so of fertilized eggs fail to implant or are spontaneously aborted. There is further attrition before one starts passing on one’s genes. In most organisms the ratios are far worse. In many, they are worse than a million offspring to one viable offspring.

        So, sexual computing will mean that at some level we will have to get used to a large number of failed OSes that will result. Taking the biological analogies further, though I can suggest something else. Instead of the whole computing platform evolving (it anyway does though with updates and what not), antivirus programs can evolve in the sexual computing sense. Thus, the computing platform remains relatively slow changing but the defending programs evolve faster. In most organisms, the genes coding for the immune system tend to have a far higher evolution rate than the rest of the genome, to keep up with the rapidly evolving pathogens. We will have to deal with the failures of this though, as in organisms, say, autoimmune disorders, where an antivirus might misidentify a normal program as an intruder. I think the analogy has already been stretched a bit so I won’t go further.

        • Yes, there will be a high failure rate, but the nice thing about the digital world is that a lot of testing for survivability can be done in the womb, so to speak, so you wouldn’t fully build out any design that didn’t pass obvious survivability tests. In humans, that’s like aborting fetuses which are detected to have major genetic flaws, and is obviously hugely controversial, but in digital, you could brutally weed out flawed designs early and only invest in “growing” to maturity reasonably decent designs.

          Maybe, if the design tree exploring the space is truly rewindable (we’ve talked about this elsewhere in these comments), then it can be even more efficient, since evolution is not backtrack search, but digital evolution can be.

          An immune system subset evolving faster than the rest is an interesting optimization and makes complete sense.

  7. My knowledge of biology and replicator theory is even poorer than my CS, but…

    Biology is profoundly humbling to a programmer, showing that 0% programming – basically, random changes to code – and 100% ruthless QA, iterated through billions of parallel cycles, produces structures of amazingly complex design and functionality. But biological evolution is working blindfolded with both hands tied behind its back – no foresight, only incremental changes, no giant leaps across chasms of unviability. These limitations are writ large on every facet of blind-design replicators.

    “Intelligent design”, on the other hand, has more tools at its disposal. Therefore, strategies – like sex – which may be the only way out in biology may have much cheaper alternatives in software.

    Secondly, the replicator types are fundamentally different. Each biological organism is an independent replicator. A mutation in an individual, if beneficial, may spread to others of the species, either being damped by sexual recombination or, if the conditions are right (geographical separation, etc.), resulting in differentiation and speciation. Also, replication is the fundamental goal of the biological replicator – everything else, flesh, fins, fangs, brains, are merely means to that end.

    Commercial software is developed in one place and “published”, or broadcast onto millions of devices. So it’s like a McDonalds franchise model, where there are many apparent replicators, but really only one. The McD meme is cooked up at one central location and pushed onto thousands of franchises. An individual franchise does not spawn other franchises, therefore a mutation at one place is not passed on to others. (I think your whole point is that this model needs to change)

    “Follow the information” is one of my favourite maxims. When two organisms have sex, there is a flow of information. Partners have different bloodlines and the information carried in their genes is hopefully complementary. Note the strong penalty against incest, where the information flow is much less.

    What would your and my Windows computer talk about? What’s the information exchange? I can’t think of anything right now at least, or even in the near future.

    I can conceive of malware instances in nearby locations having a lot of useful information to exchange with each other, what passwords they’ve sniffed, and so forth. Malware, too, has replication as its predominant goal. I can think of decentralized, mutating, hereditary, replicating malware, benefiting from a sexual/plasmid-exchange model much more readily than, say, Windows.

    Open source software is at yet another point in the spectrum, since forking is common. Linux has literally thousands of variants. Look at git, for instance, where the SCM model is completely decentralized, with multiple related code bases which are not strict hierarchical branches, and which can exchange packets of code much like related bacteria might exchange plasmids. Github, the up-and-coming code repo based on git, has institutionalized fork as a first class operation.

    While this increases diversity, it is unlikely to go so far as sex does. Where does this leave us? Off the top of my head, there are 3 broad strategies I’m aware of:

    First, increase variation without disturbing functionality. Similar to sex but without all the plumbing problems. There are techniques like randomizing address space layout and randomizing system calls, so that different running instances of the same program on different computers present very different environments to maliciously injected code, say via a buffer overflow. Code which relies on a particular library being at a particular location, or on system call 0x12 corresponding to “system”, will be in for a nasty surprise.
    Current state: Many mainstream OSes implement ASLR; I’m not sure about syscall randomization. I know of at least one commercial vendor for the latter.
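    The syscall-randomization idea is easy to demonstrate in miniature. Here is a toy Python sketch (the four-entry syscall table, the seeds, and the attack model are all invented purely for illustration): each simulated "machine instance" boots with its own shuffled syscall numbering, so shellcode that hardcodes a number learned on one instance mostly misfires on the others.

```python
import random

def make_instance(seed):
    """Simulate one machine instance with a per-boot randomized
    syscall table: each instance maps syscall names to different numbers."""
    rng = random.Random(seed)
    names = ["read", "write", "open", "exec"]
    numbers = list(range(len(names)))
    rng.shuffle(numbers)
    table = dict(zip(names, numbers))

    def dispatch(num):
        # Reverse lookup: which operation does this number trigger here?
        for name, n in table.items():
            if n == num:
                return name
        return "fault"

    return table, dispatch

# The attacker's injected shellcode hardcodes the syscall number for
# "exec" as observed on instance 0...
table0, _ = make_instance(0)
hardcoded_exec = table0["exec"]

# ...but on other instances that same number dispatches something else.
hits = 0
for seed in range(1, 101):
    _, dispatch = make_instance(seed)
    if dispatch(hardcoded_exec) == "exec":
        hits += 1
print(f"shellcode got 'exec' on {hits} of 100 other instances")
```

    With four syscalls the hardcoded number still works on roughly a quarter of instances; a real randomized table has hundreds of entries, so the miss rate is correspondingly far higher.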

    Second: With massive increases in storage capacities, don’t ever delete anything. As more and more things run in VM sandboxes, even finer grained state can be captured. Like Apple’s Time Machine but including running machine state. So you should be instantly able to run a copy of your machine as of two days ago and start a new timeline when you realize the error of your ways and vow to turn over a new leaf. Let me throw in a random reference to functional programming and its abhorrence and isolation of state changes, and journaling file systems, as enabling technologies towards this goal.

    Truly WORM devices will also help, ensuring that even subversion at the highest level will not destroy data.

    Current state: Time Machine and its ilk solve half the problem. Still, vastly better than nothing at all.
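    The never-delete idea can be sketched as a toy append-only key-value store (the class and method names here are made up; a real system would operate at the block or VM-snapshot level, as the comment describes):

```python
from itertools import count

class AppendOnlyStore:
    """Toy never-delete key-value store: every write is appended with a
    monotonically increasing version, so any past state can be replayed.
    Nothing is ever overwritten or removed, mimicking a WORM device."""
    def __init__(self):
        self._log = []          # list of (version, key, value) entries
        self._clock = count(1)

    def set(self, key, value):
        v = next(self._clock)
        self._log.append((v, key, value))
        return v

    def as_of(self, version):
        """Reconstruct the visible state at a given version (the 'rewind')."""
        state = {}
        for v, key, value in self._log:
            if v > version:
                break
            state[key] = value
        return state

store = AppendOnlyStore()
store.set("config", "clean")
v = store.set("notes", "pre-infection snapshot")
store.set("config", "malware-modified")   # later, a malicious change

# Rewind: the state as of version v still shows the clean config.
print(store.as_of(v))   # {'config': 'clean', 'notes': 'pre-infection snapshot'}
```

    The key property is that the malicious write cannot destroy history: it only appends a newer entry, and any earlier timeline remains reconstructible.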

    Thirdly, trusted computing platforms, where a hardware-enforced chain of trust ensures that only cryptographically verified code can run. Though this is also not without its chinks, it narrows the window of potentially exploitable code. The systems which have taken longest to hack have been of this type. Currently used in embedded devices like the Wii, iPhone and satellite TV boxes, but likely to expand.

    • Wow! Okay, your comment adds about 3x more value than my original post :). And your knowledge of both biology and CS is a) better than mine and b) probably better than that of half the people out there claiming to be CS or biology types.

      In no particular order:

      1. Yes, my whole point is that this franchise model needs to change. Or we’ll die of digital mad cow disease soon. One black swan (cow?) is all it takes.

      2. Good point about intelligent design having more foresight than blind evolution, but in the huge design space we’re talking about, I think that’s like going from 0 visibility to 1 inch, in a craggy fitness landscape of hundreds of square miles. Still, we might find something.

      3. Address space randomization – reminds me of a very interesting point made in James Scott’s “Seeing Like a State” about cities and towns with very “illegible” maps and addressing (“Oh, him, he lives by the butcher’s shop near where the big tree used to be, before the fire.”). One of the first things big, centralizing empires do is make addressing more legible, as part of many simplifying moves designed to make central governance more powerful rather than more “rational.” An even closer example is last names. The fname, lname combo is fairly new around the world, and was created primarily to make inheritance taxation and conscription easier. In the Philippines, under the Spaniards, the locals, who didn’t do last names, were all assigned Spanish ones, and (this is where it gets funny) even today you can lay out a map of the country where entire districts have last names that are in lexical order. The whole damn country can be neatly lexically ordered by district on a map. Now, this sort of removal of randomization was done by an absolutist state, but the general principle is that randomizing local addressing makes a locality less vulnerable to the outside, due to illegibility. What makes it ungovernable also makes it impregnable to information-based assault.

      4. The rewind/time line forking idea: that’s a very powerful direction, but it has to be done in a deeply solid way. I believe there’s Windows malware out there that messes with Windows’ primitive version of this (the “saved system state” stuff). Maybe Apple has a fundamentally sounder model?

      5. Validated hardware chain. There was a rich guy in Indonesia who had his finger hacked off by a guy who wanted to steal his fingerprint-unlocked biometrically protected BMW. Need I say more? But at least that brings criminality into the domain of violence, where the legitimized firepower of the state can be brought to bear. I suspect a lot of dark, brooding young kids turn easily to cybercrime, but not many would pick up a machete.

      6. Full decentralization a la Git philosophy. Going back to the Scott point on illegibility, this direction illustrates why this is a double-edged sword. Illegibility, confused hierarchies etc. will confuse attackers up to a point, but soon they’ll start confusing us as well, and cause governance problems.

      7. Parasitism in sexual species: yes, sure, that will evolve too, but I think it is at least a more equal arms race, and it raises the bar, possibly beyond the resources of organized digital crime.

      8. What will your Windows PC and mine talk about? Yes, I too ‘follow the information’ in this sort of analysis (haven’t heard that happy turn of phrase before, though). I actually have a couple of thoughts here. You know how similar users set up computers in similar ways, and some discover better hacks than others? Think of this personalization and customization as Lamarckian changes to the factory genome. Or alternately as neoteny and education. As you use a new computer more, it gets more customized. Maybe if computers get organic enough, there will be a lot of value (as opposed to brittleness and entropy) in this process, to the point where it may be easier to use an “experienced” computer for a specific job than a new one. This is already sort of true… it takes enough annoying setup to adapt a new laptop to my corporate environment that I am hanging on to my old one for probably too long. So when a computer has, say, six years of experience in a particular environment, it will become quite deeply adapted. It then has useful info to exchange with other computers in close, but not too close, environments. You could even selectively breed. The problem right now is that computers are too “inorganic” to improve with age, unlike biological organisms. And artificial Lamarckian evolution, btw, is an example of “non-blind” evolution. So we’d end up with Dell making identical baby computers, Microsoft giving them an identical pre-school education, and then they get unique experience through use.

      9. One possibly minor quibble about your idea that the mappings from bio to bits needn’t be straightforward. I think there is huge value in having the sexual selection happen with units (computers) that end users understand, as opposed to processes, APIs, address spaces, that are hidden from view.

      I think there’s room here for a bunch of security-focused no free lunch theorems :)

      • 4. Time Machine: No, Apple doesn’t have a fundamentally better scheme. A targeted virus could stomp my Time Machine disk quite easily. That’s why I suggested WORM. It would be easy enough to create a disk with a simple append-only switch which, unless flipped, would not let you delete things already written. Like the read-only switch found in all kinds of media, old and new.

        5. Validated hardware chain: Physical security is a different thing altogether. Anyway, the idea is to prevent remote attacks over the wire. Current uses of TPM tech are like DRM – they protect the publisher rather than the consumer. I rather think it will turn around soon, and TPM tech will be used to favour the consumer (for instance, for a cloud provider to “prove” to the consumer that only valid, signed, third-party-audited code accesses his cloud-stored data).

        8 and 9. Yes, you have mentioned this before in other contexts, and I continue to be very skeptical. From an information-centric standpoint, the amount of variation contributed by users is dwarfed by information coming from outside, like new programs and updates. Most users are big-chunk programmers, i.e., the only choice they make is which software packages to use. Fine-grained programmers – who would write their own software – are the ones who would contribute meaningful variations, but their numbers are very limited. I can’t visualize the education analogies you give becoming practical any time soon. Sounds very… Asimovish.

        What are your reasons to suppose that user-visible units are the best for sexual selection? Again, replication is not by means of my computer and yours getting together and spitting out a new VM image. If your computer had Safari and Thunderbird and mine had Firefox and Outlook Express, would the baby VM randomly get Safari and OE? Who would use this new VM? Or would it happen at a lower level, with every user-visible application and preference knob left as is? I’m afraid I still can’t see this playing out, perhaps because I’m too close to actual programming as it is done today.

        Sex itself is an unsolved problem AFAIK. Many reasonable explanations exist – I haven’t read Ridley – but no “Aha” moment yet. Why 2 sexes, for instance? If it was only about recombination, you need have only “females”. Why males? I guess it must have started out that way, and it was an unstable equilibrium. A mutation might result in a defective “female” capable of giving genetic material but not receiving and reproducing on her own account. This mutation, being heritable, would race through the population (what a coup! love ’em and leave ’em holding the baby), finally achieving a stable ESS ratio. Parasites must limit themselves to let the host reproduce, after all, or who would they parasitize? The Y chromosome is thus an atrophied, “defective” variant of X, males are really like brood-parasite cuckoos. In fact, cuckoos can be thought of as a cross-species “male”!

        So why aren’t there more parasitic sexes, why just one? Maybe there are, corresponding to all kinds of weird ploidies apart from the simple diploid one most vertebrates use, which I haven’t really looked at.

        Some uncomfortable speculation can be done down the path of what male parasites, having freed themselves of the whole childbearing business, do with the extra resources. Much of it is consumed by trying to parasitize females, of course.

        The idea of a tumour growing in my stomach, finally bursting through, wriggling and screaming a la Aliens, frankly gives me the heebie-jeebies, and the fact that women are not only willing to do it once, but multiple times, is one of the starkest differences between the sexes. Anyway, it bodes well for us that women put up with parasites like babies and men :)

        Anyway – until we figure this shit out completely, I think applying it to other fields is fraught with difficulty.

        Random interesting link:

        • As it happens, 2 sexes IS arbitrary, and Ridley’s book has many delightful examples of more than 2, as well as species that change their sex across time/seasons and generations etc. Lots of variety is possible within the basic idea of sexual reproduction. But yeah, the higher animals seem to have stabilized around 2. Kinda like 2 political parties/bipolar political spectrum in politics.

          Re: use-driven variability. You are probably right about big-chunk… but that’s because fine-grained control is so tough and designed for programmers by programmers. This actually lends some support to my UX-level variability. If users are to be part of the variability-creating loop, then they need both meaningful control and an ability to understand what they are seeing and tweaking, and lots of recommendations etc.

          Not sure how to get there, but I know of some “co-evolution” paradigms in design where good designs are evolved via focus groups selecting along an evolution tree.

          One way to think of this is that the creation of the unique genome via Lamarckian inclusion of learned attributes is itself an evolutionary process where end users are the selectors/fitness function.

          • Alexander Boland says

            This is an amazing conversation, but the one flaw I’m seeing is that we’re thinking about this as “Computer X + Computer Y –> Computer Z”. Nature may have ended up that way with babies, but in the case of computers wouldn’t it be possible to break this into more manageable chunks? For example, what if we just periodically updated browsers with some random mixing-up of traits, and then applied that idea to individual parts of our computing experience – basically building up new “computers” piecemeal?
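            The piecemeal mixing described above maps naturally onto component-wise crossover. A minimal sketch, assuming the hypothetical simplification that a "computer" is just a dictionary of component choices (the trait names and products are arbitrary examples):

```python
import random

def crossover(parent_a, parent_b, seed=None):
    """Component-wise 'mating': the offspring configuration takes each
    trait (browser, mail client, etc.) independently from one parent
    or the other, rather than blending whole machine images."""
    rng = random.Random(seed)
    child = {}
    for trait in parent_a:
        child[trait] = rng.choice([parent_a[trait], parent_b[trait]])
    return child

pc_x = {"browser": "Safari",  "mail": "Thunderbird", "shell": "bash"}
pc_y = {"browser": "Firefox", "mail": "Outlook",     "shell": "zsh"}

baby = crossover(pc_x, pc_y, seed=42)
print(baby)   # each trait comes from one parent or the other
```

            Because mixing happens per component, each "baby" is still a working configuration of known-good parts, which sidesteps the whole-VM-image objection raised earlier in the thread.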

  8. Hi Venkat,

    There is indeed an entire field of research devoted to the evolution of computer programs. The main approach is generically called “Genetic Programming”. There is an excellent free textbook, if you’re interested in taking a look. Its applications are very broad, from developing new drugs to generating artificial art.

    The problem with evolving end-user software with GUIs, though, is the fitness function. It is very hard to automatically evaluate the quality of a program that is meant to interact with the user, short of having hundreds of thousands of human testers going through the soul-crushing job of testing all the variations generated by the evolutionary algorithm. I’m sure we will find ways to do it through some form of AI, but we’re not there yet.

    Closer to your idea, you could argue that the initial program could be developed, and the evolutionary algorithm would only have to conserve functionality while generating diversity. Granted that it’s an easier problem to solve, but we would still need some sort of automated QA department to make sure that all the possible use cases are intact. I don’t think anyone knows how to do that at this point, or even if we’re close to knowing.
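    The conserve-functionality-while-generating-diversity setup can be sketched as a toy evolutionary loop, with a hypothetical automated QA check standing in for the (unsolved) general fitness function. Everything here is invented for illustration: "functional" bits must match a behavioral spec, while "neutral" bits (think internal layout) are free to vary, and any mutant that breaks QA is discarded before joining the population.

```python
import random

rng = random.Random(0)

FUNCTIONAL = range(0, 8)          # bits that automated QA checks (behavior)
NEUTRAL    = range(8, 16)         # bits free to vary (internal layout)
SPEC = [1, 0, 1, 1, 0, 0, 1, 0]   # required behavior, as a bit pattern

def passes_qa(genome):
    """Automated 'QA department': every use case (functional bit) intact."""
    return [genome[i] for i in FUNCTIONAL] == SPEC

def mutate(genome):
    # Flip one random bit; flips in the functional region will fail QA.
    child = genome[:]
    i = rng.randrange(len(child))
    child[i] ^= 1
    return child

# Start from one working design and breed a diverse population,
# discarding any variant that breaks functionality.
seed_design = SPEC + [0] * len(NEUTRAL)
population = [seed_design]
while len(population) < 20:
    child = mutate(rng.choice(population))
    if passes_qa(child) and child not in population:
        population.append(child)

layouts = {tuple(g[i] for i in NEUTRAL) for g in population}
print(f"{len(population)} variants, {len(layouts)} distinct layouts, all pass QA")
```

    The catch, as the comment says, is that real software has no cheap `passes_qa` oracle covering all use cases; in this sketch it is a trivial bit comparison, which is exactly the part nobody knows how to build.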

    Another problem is that many security exploits work at the functional level. They don’t take advantage of the internal structure of programs, but of their common interfaces. A simple example: SQL injections. You can’t really change SQL, otherwise many systems would lose the ability to inter-operate. What you can do is try to protect each node in the system against malicious exploitation of the interface. But it’s extremely hard to outthink the bad guys. And this extra layer of protection is indeed new functionality, and not merely genotypical diversity.
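    The standard per-node defense at the interface level is parameterization, which forces input to be treated strictly as data rather than as SQL. A minimal self-contained demonstration using Python's built-in sqlite3 module (table and payload are toy examples):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

attacker_input = "' OR '1'='1"   # classic injection payload

# Vulnerable: string concatenation lets the payload rewrite the query.
vulnerable = conn.execute(
    "SELECT secret FROM users WHERE name = '" + attacker_input + "'"
).fetchall()

# Defended: a bound parameter is data, never executable SQL.
defended = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (attacker_input,)
).fetchall()

print("vulnerable query leaked:", vulnerable)   # [('s3cret',)]
print("defended query returned:", defended)     # []
```

    Note this illustrates the comment's point exactly: the fix is not diversity in the program's internals but an explicit, hand-built layer of protection at the shared interface.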

    Some people argue that an interesting source of biological inspiration to attack this problem is the immune system. I’m a bit skeptical though, because of the different “paradigms” followed by contemporary human engineering and nature. Human engineering is developing systems that are essentially sequential, synchronized and full of single points of failure, while nature’s creations are asynchronous, highly parallel, redundant and degrade gracefully.

    BTW, I’m a big fan of your “Gervais Principle” series and am eagerly awaiting the next installment :)

    • I am familiar with genetic programming (have used simple techniques myself). I guess I am proposing a bigger vision where entire computers evolve, so no two are identical.

  9. pierre dolnik says

    Evolution is essentially a massive trial and error process with environmental feedback. Sexual computing, while an interesting idea, seems to only address the former. To put an actual evolutionary process in place you need some sort of environmental selection – and not just a static fitness function; in a changing environment the fitness measure needs to follow.

    Then, to get anywhere with non-trivial problems, the evolutionary approach requires huge numbers of generations (gene recombinations, mutations, etc.) over large and diverse populations. Read: eons of computing time (where are those quantum computers we were promised?!).

    The process also requires the freedom to be wrong and random – if you focus it too much (by overvaluing certain characteristics), it will get stuck in local optima and may never go beyond sub-par solutions (like in science). From an OS perspective it means that the system would need to be allowed to perform basic tasks in a randomized way to verify which version works best – cool on paper, but would you want to use such a system? You might as well hire someone to do the stuff ;)

    All in all a nice idea, but at this point, IMHO entirely in the realm of science fiction as far as real world systems are concerned.

  10. In nature, the feedback happens at the level of sexual selection. Clear skin, high symmetry, big tails on peacocks, etc.

    So in this metaphor, female computers would accept male computers as mates only if the latter showed visible signs of freedom from parasitic infection :)

    Re: time, yes, that is a big problem. But as tubelite suggested, maybe there’s ways intelligent design can speed up evolution over blind Darwinian-design analogs.

    • But you can’t trust what any other computer says; in Australia a parliamentary committee just came out with a report suggesting ISPs should be responsible for disconnecting their subscribers if they’re infected, and the opposition from network operators is clear:

      When NAC was first proposed (long before it was announced and attempted to be productized), I was resolutely opposed to it because of its fundamental flaw – namely, that one simply can’t trust end-nodes to self-report security posture, as they’ll be subverted and will then misreport. I also noted that posture assessment doesn’t matter, anyways, as the miscreants always find ways around (or even to exploit) antivirus and other end-point protective measures – and pointed out that it isn’t scalable, anyways, even within small organizations.

      These flaws aren’t specific to any one vendor’s implementation; rather, it’s the fundamental concept which is unworkable.

      Events have validated these misgivings, given the essentially zero uptake of NAC-type solutions in the industry in the 6 years or so since its introduction. It’s quite surprising to see the utterly discredited NAC canard being raised yet again in the context of a Parliamentary enquiry.

      Computing is a libertarian paradise – you can’t force me to run anything on my computer. Even if you legislate to do so, as I can just run a VM that gives the “correct” responses and funnel everything through that. The only way to enforce it would be strict government DRM embedded into the very hardware, and you can imagine how that would go down.

  11. I have thought about ‘adaptive systems’ before but more from the behavioral side than the ‘structural side’. You’re talking about machines changing their digital DNA to confuse attackers. I was thinking about changing just the skin or behavior, more along the lines of adaptive camouflage.

    For some vectors, simply stepping outside the rigid nature of the protocols helps quite a lot (run the server on a different IP port). To make this ingrained and adaptive, techniques similar to port knocking (kind of like a secret handshake), could be used to thwart attacks AND gain information about the attacker which puts the ‘good guys’ in a much needed offensive mode. For example, an attacker probing for an SSH vulnerability hits port 22 with SSH initiation, but my server requires a port knock first. Attacker info gets recorded and no further attempts from that attacker are allowed (insert variations of draconia here). If you couple this idea with some of your thoughts on randomness, ‘bad guys’ will likely have a much harder time violating a machine through these types of vectors.
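    The port-knocking gate described above is essentially a small state machine per source address. A toy Python sketch (the knock sequence, port numbers, and addresses are all invented for illustration; a real implementation watches raw packets at the firewall):

```python
KNOCK_SEQUENCE = [7000, 8000, 9000]   # the secret handshake ports

class PortKnockGate:
    """Toy port-knock state machine: the SSH port (22) only opens for a
    source that has first hit the secret sequence of closed ports in
    order. A blind probe is dropped, and the attacker info is logged."""
    def __init__(self):
        self.progress = {}   # source address -> knocks completed so far
        self.log = []        # recorded probes from sources that skipped the knock

    def packet(self, src, port):
        seen = self.progress.get(src, 0)
        if port == 22:
            if seen == len(KNOCK_SEQUENCE):
                return "accept"
            self.log.append((src, port))     # record the attacker's probe
            return "drop"
        if seen < len(KNOCK_SEQUENCE) and port == KNOCK_SEQUENCE[seen]:
            self.progress[src] = seen + 1    # correct next knock
        else:
            self.progress[src] = 0           # wrong knock: start over
        return "silent"

gate = PortKnockGate()

# A blind SSH probe is dropped and logged...
print(gate.packet("6.6.6.6", 22))            # drop

# ...while a client that knows the handshake gets in.
for p in KNOCK_SEQUENCE:
    gate.packet("10.0.0.5", p)
print(gate.packet("10.0.0.5", 22))           # accept
print(gate.log)                              # [('6.6.6.6', 22)]
```

    This captures both benefits claimed above: the service is invisible to scanners that don't know the handshake, and every failed probe yields information about the attacker.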

    Also, this opens the door for good-guy counter-attacks, if the attacking machine can truly be identified. The problem is, these types of things aren’t ingrained in the systems – smart people have to put them in place manually. Security has traditionally been an afterthought and the price is now being paid.

    You are rightly lamenting that there are only .01% power users out there, but the techniques used by the ‘knowledgeable few’ can be built into the systems. For instance, those malicious attachments that alter critical system configuration have no power in a least privilege environment, but people still do routine work with elevated privilege (I’m thinking of Windows ‘home’ editions). I have never seen a system rendered useless that was operated in a least privilege environment, though I have seen some annoying infections, and my experience is only moderate. Anyway, this vector could easily be fixed if MS would just decide the cost of the added complexity (having to elevate privilege to install new software or change system configuration settings) for their users is worth the benefit, and educate people about it. This particular example isn’t that hard to understand for someone that knows how to use a computer even on a remedial level.

    Evolutions along this line are not as sexy (pardon the pun) as computers merging and replicating but I see them as having a more devastating effect in the immediate future. There are other vectors that are likely to be more problematic than the ones that I understand better. For instance, I’m not intimately familiar with the PHP problem but it sounds similar to SQL injection where incoming requests are expected to be data but are actually malicious code. I don’t see any adaptive solution solving that problem – it’s a gaping hole that you have to plug when you build applications in that environment.

    And then there are the social engineering attacks that only knowledge and diligence can thwart…

  12. froogger says

    Quite right that the bad guys are winning and the only current option for our team is defense.

    I’ve been following this battle since 1989 and I’m not optimistic. Ever since we started connecting computers, weaknesses have been found and malware has spread. In this vast metanet called internet the problem is inherent, being both its strength and weakness. The problem lies in the design. It’s based on open standards – come one, come all, and I’d hate it so much if we were to let go of this and segregate traffic.

    I do like your suggestion of constantly changing diversity as a preventive measure. How this could be applied, I cannot say. Something radical should definitely be done now that we’ve already lost the battle over useful services such as finger, email, usenet, webforums etc.

    Fortunately for the bad guys, the general attitude towards all things digital has mostly been in their favour. Most people don’t take the digital realm seriously. They feel it’s there for their amusement and that it’s no big deal. Someone breaking into their home is a violation, but a trojan is considered more of an annoyance. Losing a hard drive to a malicious virus is far less aggravating than losing all your heaps of paper. I sense that this is now changing slowly, and that is where my hopes lie. When enough people get it we might have a momentum that moves media, corporations and government (the “it” being that information is key). A better solution will present itself, but will require cooperation on so many levels.

    We can’t (and shouldn’t, but that’s another discussion) stop people reverse-engineering and fiddling with things, and there’s always someone out there willing to abuse this knowledge.

  13. Thomas Lindgren says

    I think similar ideas have been tried under the metaphor of “diversity” and “immune systems”. See for instance Larry Chen:

    or Jarett and Seviora:

    I suppose there has to be more work along these lines; it looks like there was some DARPA initiative. But I can’t say I’ve followed things very closely.

  14. Senthil Gandhi says

    Take Google Chrome.

    I read somewhere that security bounty hunters rarely take on Chrome because it is so hard to crack. For example, the winner of last year’s Pwn2Own contest confessed that he doesn’t spend time trying to crack Google Chrome, since it is so hard that it does not make economic sense for him.

    The reason it’s so hard is that Chrome the browser runs inside a sandbox – just as a program running inside a virtual machine cannot crack the host (well, “cannot crack” is not quite the truth; “it is very, very hard to crack” is more like it).

    D. J. Bernstein is a leading researcher in this field. He has announced awards for finding holes in his DNS program and mail (qmail) program, which run on thousands of servers these days. I believe the prize was claimed only once in a span of 10 years.

    DJB is currently concentrating on super-extending the idea behind Google Chrome’s architecture, wherein every program and operation in an OS runs inside a virtual sandbox of some kind.

    I am putting my money on DJB and Extreme Sandboxing.
    ( read section 9 of this pdf for more info : )

  15. Amusingly enough, this post came up in my RSS reader just a little while after a post by Bruce Schneier [1] which seems to mention some similar concepts (security inspired by biology, immune systems in this case).


  16. Thanks for the links, Thomas, Ravi and Senthil. Am getting myself quickly and cheaply educated here :)

    froogger: yes, I am sensing people are starting to take digital more seriously now. It only takes one serious incident, and the direct experience of FUD (fear, uncertainty, doubt) as you shadowbox with an invisible adversary, to realize that it is all real. The digital realm can put you through all the psychological and emotional stress of physical conflict, except for the physical injury part. Especially if you know enough to be scared, but not enough to be an equal combatant. It’s like getting mugged.

    For most people though, they usually default to a friend or paid help to sort things out, so they are still shielded a bit. But that model is getting untenable and soon everyone will be exposed and calibrated right.

    Danny: from what I see on security forums, there is a huge supply gap of volunteer pros helping others out, and the paid model (like health insurance) hasn’t really stabilized yet. You only have somewhat dubious models like the Best Buy geek squad.


  17. The real point of contention on this topic is that we *can* make technological changes in order to alleviate much of this crime, but we generally don’t bother because the cost-benefit ratio of the effort to implement protection isn’t high enough. We could be encrypting all data on every PC (law enforcement would hate it, but frankly our computing power is high enough to handle this these days). We could fix/abandon protocols like FTP, POP3 and SMTP. We could have operating systems that enforce far better privilege separation so that compromises are not system wide.

    But that won’t happen until the bad guys win or come damned close to it. So, on one hand I’d like to see *more* successful attacks so that the good IT people (I’d like to put myself into this camp) get the resources necessary to make substantial changes to the way technology works. On the other, like most others, I just want the bad guys drawn and quartered.

    • Inoculation can kill, so perhaps there’s a cheaper/safer way to educate people about the cost of scenarios?

      The issue is making it emotionally real, to drive home the fact that the losses and enemies are real. Otherwise people will think of it as a maintenance problem/breakdown scenario.

  18. Just some random questions & thoughts that may or may not be relevant…

    1. In biology, mutations happen at the DNA level and are expressed in the organism’s larger body. What would be the “DNA level” equivalent in a computer? Would it be the processor?

    2. Information exchange between sexual organisms that results in procreation is limited to DNA. (Other info’s get exchanged too, but they do not result in procreation.) What would be the computer equivalent of an X chromosome, and a Y chromosome?

    3. Mitochondria are thought by some to have been a parasitic bacterium that got assimilated into cell structure and its functions put to work serving the cell at some point in the very distant past. Any chance of computers being programmed to assimilate parasites in this way?

    4. Would biomolecular computers address the issue?

    • I don’t think you can map the metaphor at such a literal level. You have to work at the level of abstraction of information theory applied to the primitive design parameters. “Follow the information” as one of the other commenters said.

      That said, genetic algos DO have chromosome-level metaphor mapping. But they are pieces of code, not whole computers. Erik Klavins at U. Washington has done a lot of research on hardware machines that literally mimic DNA.
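      To make that chromosome-level mapping concrete, here is a minimal sketch of the two operators genetic algorithms borrow most directly from biology: single-point crossover and point mutation. The bit-string encoding and parameter values are illustrative, not taken from any particular library:

```python
import random

def crossover(parent_a, parent_b):
    """Single-point crossover: each child takes genes from both parents."""
    point = random.randint(1, len(parent_a) - 1)
    child1 = parent_a[:point] + parent_b[point:]
    child2 = parent_b[:point] + parent_a[point:]
    return child1, child2

def mutate(chromosome, rate=0.01):
    """Flip each bit with small probability -- the analogue of point mutation."""
    return [bit ^ 1 if random.random() < rate else bit for bit in chromosome]

# Two parent "chromosomes" encoded as bit strings
a = [0] * 8
b = [1] * 8
c1, c2 = crossover(a, b)
```

      The crossover operator is the interesting part for this discussion: it is the analogue of sexual recombination, producing offspring code that neither parent contained verbatim.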


  19. Another thing to consider: Biological viruses are created by random processes, and their only “goal” is to propagate themselves, which places a limit on how destructive they can be and survive. Computer viruses, on the other hand, are designed and (initially) spread by people looking to gain some benefit, so the rules are a bit different.

    I think that, if some mad biologist started creating viruses that were as sophisticated and focused as computer viruses are, we would fare about as well as computers do today.

  20. Andrew B says

    But what about the sexually transmitted diseases? Aren’t those (mostly) transmitted by the act of reproduction?

    In the human world, some of the worst diseases we know of are STDs, like HIV/AIDS. And some statistics say 1 in every 4 adults has herpes. With infection rates as high as herpes’ and diseases as devastating as HIV/AIDS, sexual reproduction doesn’t come without added attack vectors. Granted, maybe these added attack vectors are acceptable given the advantages, but I’m not sure how to quantify that with regard to computers. In humans (and with no personal medical training), I’d have to say the risk is worth it; our population is expanding quite well and has been for many, many years.

    For example, in the computer world with your example of two PCs “mixing it up,” what prevents a bad guy from impersonating a good guy and “mixing it up” with your PC just to infect you? It’d be like the dating scene, only for computers. A new meaning to “online dating?”

  21. Wouldn’t that be genetic computing?

  22. At the risk of not having the expertise for this, I would like to comment on this discussion. I found it very interesting, as my job at Xerox is Field Systems Analyst for our products. We spend a great deal of time in discussions with our customers about protecting our products and their data. But I am sure you all know that what you are discussing has wide ramifications from an industrial viewpoint as well.

    I am a private pilot and often fly airplanes with so-called “glass cockpits.” That is, the instruments are displayed on flat panels, complete with weather information, traffic, and of course GPS moving maps. With a little research I discovered most of them run on Windows XP Embedded. But you know, we update them with a USB memory stick. Scary.

    Here is my thought: why not assume that computing has reached the maturity where we can build a processor that is locked down, with the operating system in ROM, and that cannot be altered except at the point of production? I do not want to go back to a typewriter, but I would like something that gives me the same reliability. I think that evolving computer systems is a good thing, but perhaps slowing the whole business down for those systems that require security and stability would be a good thing too. Let me have freewheeling machines for kicking around, but give me locked-down systems from the factory for my finances, or my airplane, or the power grid.

    At this point you may ask who the authority is that sets the standard for the lockdown, and I see your point. As the Wild West and the fate of frontiers illustrate, anarchy has its limits. It is time for a little law and order.

    • The unfortunate side effect of this would be an OS that can never have security updates. It’d certainly be possible to build an OS that was read-only, *and* burned into the hardware, *and* secure, but it’d involve cutting off almost all user customizability.

      Google’s actually trying something similar with Chrome OS – it’s an extremely restricted system, but the OS is read-only and they’ve worked out a secure update method for OS updates.
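      The core of any such secure-update scheme is that the device refuses to apply an image whose signature doesn’t verify against a key fixed at the factory. A rough sketch, with the caveat that real systems (including Chrome OS’s verified boot) use asymmetric signatures with a public key in read-only storage, while this simplified stand-in uses an HMAC with a hypothetical factory key:

```python
import hmac
import hashlib

# Hypothetical key burned into read-only storage at the factory (illustration only).
FACTORY_KEY = b"example-only-not-a-real-key"

def sign_update(image: bytes) -> bytes:
    """What the vendor would do to an update image before shipping it."""
    return hmac.new(FACTORY_KEY, image, hashlib.sha256).digest()

def verify_update(image: bytes, signature: bytes) -> bool:
    """The device recomputes the tag and applies the image only on a match."""
    expected = hmac.new(FACTORY_KEY, image, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

update = b"patched kernel bits"
good_sig = sign_update(update)
tampered = update + b" + malware"
```

      The point is that the user keeps the ability to take updates, but loses the ability (and so does the malware) to install anything the factory key didn’t vouch for.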
