The Design of Crash-Only Societies

Ryan Tanaka is a resident blogger, visiting us from his home turf at  The improv session that inspired this article can be found here.

[Image: Windows 8 Blue Screen of Death]

Crash-only software: it only stops by crashing, and only starts by recovering.  It formalizes Murphy’s Law and creative destruction into an applicable practice, where failures and worst-case outcomes are anticipated as routine occurrences.  When done well, however, it has the potential to make software more reliable, less erratic, and faster and easier to use overall.

But there is also a social component to crash-only designs that has yet to be fully explored: the potential for using these ideas to develop practices for building communities and social applications online.  As the worlds of tech, politics, and culture continue to collide, the demand for alternative modes of communication will likely continue to rise.  Crash-only designs hint at possible new approaches toward community and content moderation on the web, expanding the means and methods by which online content and interactions can be organized more effectively and intuitively.

Crash-Only Democracies

Democratic elections, in many ways, can be said to be a type of crash-only system by design.  Civilized societies have formalized the process of “rebooting” the political system every few years as a way to give dissidents and minorities an opportunity to make their voices heard.  Despotic societies usually “crash” through revolutions, nepotistic successions, assassinations, coups and war, whereas democracies transfer power internally by forcibly putting an “expiration date” on the seats themselves.  The temporal designs of democratic systems are more symmetric, reversible, and predictable — therefore, more stable.

The internet is typically hailed as a medium that fosters autonomy, free speech, and individualism, often used synonymously with the democratic process in and of itself.  But an honest look at the ways in which the web “governs” its users reveals practices that are much more autocratic.  The authority yielded to administrators and community managers on web interfaces tends to be absolute: they operate through top-down, non-negotiable decision trees and class-based identity systems (admin, manager, editor, subscriber, etc.) that run for indefinite periods of time.

At the other extreme, some sites run on the “anything goes” principle: community moderation mechanisms, if they exist at all, are minimal.  In these contexts, users are allowed to do or say virtually anything they want without any fear of retribution from a higher power.  An anarchistic society where everything is permissible: the good, the bad, the ugly…and the downright abominable.  These spaces can be said to be very “free” in many respects, but due to their disorganized nature, noise and chaos tend to reign as their default state of being.

A few questions you might ask yourself: when was the last time you voted on something online where the results mattered and were actually enforced?  Can “bad” moderators ever be removed from their seats without an appeal to a power of an even higher order?  Are there any legal and judicial protections you can get online without having to appeal to the authority of real-world law enforcement?  By most definitions and standards, democratic societies don’t actually exist online, since the Internet’s models of governance are closer to those of feudal societies than ones typically found in representative democracies.  (Past attempts at internet democratization, such as Usenet, have failed to meet the criteria of being “real” democracies, at least among analysts and scholars.)

As the web-governance debate continues to rage on both in tech and politics, online communities are now largely caught in between an authoritarianism-anarchism duality: strict enforcement of community guidelines vs. letting the people do whatever they want.  Attempts to moderate the community tend to come across as heavy-handed, while not doing enough leads to the degradation of the community’s civility and composure overall.  Striking an ideal balance between the two is extremely difficult, short-lived, and largely unteachable as a skill.  As advanced as our technologies have become, shouldn’t there really be a better way to handle our content and interactions online?

It turns out that there actually is: the web can simply imitate the ways in which the democratic process works, both in ideology and in function.  In democratic societies we allow our representatives’ positions to expire after a few years (like a timeout event, one kind of designed crash), in hopes that someone new will bring new possibilities and ideas for the future.  Using expirations as a means of feature-building, temporal designs have the potential to systematize the process of community and content moderation in new ways.

Social Timeouts

Crash-only software, especially on web-based platforms, often necessitates the use of timeout functions as part of its communicative process: requests and retrievals must be allowed to expire over a given period of time in order to avoid getting stuck in an infinite loop.  As features, timeouts have traditionally been used for simple things, such as logging inactive users off of servers after a period of time, or putting the computer to “sleep” in order to save resources and energy.  In recent years, however, more complex forms of the function have started to emerge.
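The simplest form of the idea is easy to sketch.  Below is a minimal, hypothetical example (the names and structure are invented for illustration, not any real platform’s API) of an inactivity timeout: a session that expires by default unless activity renews it.

```python
import time

# Illustrative sketch of a timeout-as-feature: a session that "crashes
# out" by expiring on its own unless user activity renews it.
# All names here are invented for the example.

class Session:
    def __init__(self, user, ttl_seconds, now=time.time):
        self.user = user
        self.ttl = ttl_seconds
        self.now = now                  # injectable clock, handy for testing
        self.last_active = now()

    def touch(self):
        """Renew the session on any user activity."""
        self.last_active = self.now()

    def expired(self):
        """No explicit logout path is needed; inactivity ends the session."""
        return self.now() - self.last_active > self.ttl
```

Note that there is no `shutdown()` method at all: the only way a session ends is by lapsing, which is the crash-only property in miniature.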

The most visible and practical form of timeouts-as-features can be found in companies that offer subscription-based services: you pay x dollars every so often to have access to a service for y amount of time.  If you decide to cancel your subscription or decide not to pay, then your relationship with the company simply expires after the given period is met.

Many of us have had the frustrating experience of having to work to cancel a service (e.g. cable companies, content subscriptions, mailing lists) where the system defaults to making the customer go through a long, multi-step process before they’re “allowed” to cut off their relationship with them.  Like a controlling and overbearing partnership, these company-customer relationships can be said to be insecure and unhealthy, often leaving a bad taste in everyone’s mouths after the ordeal finally comes to an end.

Although the stakes might not seem quite as high on the surface, the relationships that social media platforms foster between their users can be said to be similar — there is a “shutdown” process, as opposed to a “crash-only” process, for people to end the connections that they develop with people and entities online.  In most cases, “shutting down” simply means having to click “unfollow” only once.  But that one click — and all of the considerations that go behind it — is still work, and if you multiply that by the hundreds and thousands, it can easily become overwhelming to even the most tech-savvy of users out there.

Seasoned social media experts understand the importance of “pruning” your feeds by keeping the number of people and entities you follow manageably low, in order to keep them free of unwanted notifications, spam, and clutter.  Social platforms have the option of bundling this feature into their systems by allowing connections to expire by default, after a given period of time.  The maintenance of social networks, then, can be thought of more as fostering the connections that are active, rather than a process of having to “purge” the inactive ones.  It’s a simple inversion of the process that already exists.
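The inversion can be sketched in a few lines.  In this hypothetical model (again, invented names, not any real platform’s API), a follow is an edge with a renewal timestamp: any interaction renews it, and edges that go stale simply drop out of view, with no “unfollow” step ever required.

```python
import time

# Sketch of expire-by-default social connections: following is an edge
# that lapses unless renewed by activity. All names are illustrative.

class FollowGraph:
    def __init__(self, ttl_seconds, now=time.time):
        self.ttl = ttl_seconds
        self.now = now
        self.edges = {}  # (follower, followee) -> last renewal time

    def follow(self, follower, followee):
        self.edges[(follower, followee)] = self.now()

    def renew(self, follower, followee):
        # Any interaction (a like, a reply, a click) could call this.
        if (follower, followee) in self.edges:
            self.edges[(follower, followee)] = self.now()

    def active_follows(self, follower):
        """Stale edges simply drop out -- no explicit 'unfollow' needed."""
        cutoff = self.now() - self.ttl
        return [b for (a, b), t in self.edges.items()
                if a == follower and t >= cutoff]
```

The design choice worth noting is that pruning is free: the feed shrinks to the connections you actually maintain, rather than growing until you do the work of cutting it back.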

Here are a few additional examples of timeout-based functionalities that could prove to be useful:

Administrative Roles with Expiration Dates

Some web communities have been known to collapse because the administrator/moderator, over time, loses the ability to manage the projects that they’ve led up to that point.  In some cases it becomes a tabloid-worthy story of vengeance and self-destruction, but in most instances the admin simply loses the motivation to keep going, for personal, professional, or psychological reasons.  This can often create problems of succession: sometimes admins will simply “disappear” from the web altogether, along with all of their passwords and administrative data, leaving a power vacuum in their wake.

Some communities may be able to survive the transition if there’s an admin of a higher order (e.g. webmaster, owner) willing to manually promote someone new.  But in most cases, the transition is rarely smooth.  There may be no one capable of taking over the responsibilities of the old administrator to begin with.  The new candidate, even if available, may not want the power or responsibility that comes with the job.  Even with a ready and willing successor, the community itself might contest the new leadership as being unworthy and/or unfit.  In some cases, the community itself may splinter over ideological or philosophical differences prompted by disagreements as to how to move forward.

While some of these stories might seem like they come straight out of an episode of Game of Thrones, they often accurately describe the political realities of online governance models in their current form.  Lacking the means of collective anticipation and rhythm, web-based communities often run purely on the wisdom, expertise, and guidance of the administrator alone, with no real means of moving forward once the individual effort itself is gone.

Requiring the administrator to renew their license every year, with the possibility of them being ousted/replaced on those dates, creates an expectation within the community that administrative functions are provisional, open, and ever-changing.  This framework also doubles as an election cycle, where the users may have the option to vote for leadership of their own choosing.  When people start to become disillusioned with the direction the community is going overall, these expirations become particularly important: if nothing else, they give people a reason to be hopeful that things have the potential to be turned around in the future.
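Mechanically, this amounts to attaching an expiry date to every grant of power.  A hypothetical sketch (the class and field names are invented; the 365-day term is just one possible choice):

```python
import datetime

# Illustrative sketch of a term-limited admin role: the grant carries
# an expiry date and lapses unless the community renews it.

class RoleGrant:
    def __init__(self, user, role, granted_on, term_days=365):
        self.user = user
        self.role = role
        self.expires_on = granted_on + datetime.timedelta(days=term_days)

    def is_active(self, today):
        """A lapsed grant confers nothing; no one has to revoke it."""
        return today < self.expires_on

    def renew(self, on_date, term_days=365):
        """Called after a successful re-election or renewal vote."""
        self.expires_on = on_date + datetime.timedelta(days=term_days)
```

The important property is the default: power lapses unless something affirmative (a vote, a renewal) happens, rather than persisting unless something drastic does.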

User Timeouts

Already in use by a number of spaces out there, the temporary-ban method is the online equivalent of administering jail time (a literal timeout) to the user who stands accused.  Most online moderation methods today focus on content deletion (censorship), account deletion (capital punishment), and/or banishment (deportation) as means of “preserving the peace” within online communities, but these options are often excessive and unnecessary in the vast majority of cases.  Temp-bans have one thing that other methods of punishment don’t: the potential for the user to “recover” from their infraction and become reintegrated into the community over time.

If someone happens to be having a bad day and ends up throwing a number of personal insults at another user, their behavior might violate the website’s terms of service and make them subject to a permanent ban.  Temp-bans may appear to be too “soft” of a solution for trigger-happy moderators, but are arguably a much better solution for handling disputes, given that:

  1. They allow the user to stay active on the site after the ban-period is over.
  2. They handle disputes in a way that gives the offender an opportunity to reform, make amends, and state their case after the fact.
  3. They make the judicial process of the site more transparent by creating a body of “case law” for the rest of the community to see.  (As opposed to the Orwellian model of deleting everything and erasing the dispute from historical memory altogether.)
  4. The duration of the ban can be made adjustable to fit the severity of the crime.  (i.e. a way to avoid “cruel and unusual” punishment.)
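Point 4 in particular is straightforward to implement: key the ban duration to a severity tier, and let expiry handle reintegration automatically.  The tiers and names below are invented for illustration, not drawn from any real moderation system.

```python
import time

# Sketch of a temp-ban whose duration scales with the severity of the
# infraction. Severity tiers here are purely illustrative.

BAN_HOURS = {1: 1, 2: 24, 3: 24 * 7, 4: 24 * 30}  # severity -> hours

class BanList:
    def __init__(self, now=time.time):
        self.now = now
        self.until = {}  # user -> timestamp when the ban lifts

    def temp_ban(self, user, severity):
        self.until[user] = self.now() + BAN_HOURS[severity] * 3600

    def is_banned(self, user):
        """Bans expire on their own; reintegration needs no admin action."""
        return self.now() < self.until.get(user, 0)
```

Because the ban lifts by expiry rather than by appeal, the “recovery” path the essay describes is the system’s default behavior, not a favor a moderator has to remember to grant.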

Websites that have this system in place will attest to the effectiveness of temp-bans — the community becomes stronger, not weaker, as a result of its ability to integrate dissent into its overall system.  It makes very little sense to kick someone out permanently for a minor infraction if they’re otherwise a fairly solid contributor.  Effective moderators will instinctively use this method, but making it a policy may be a more effective way to moderate communities that are larger in scope.

Comment Limits

Debates and discussions — particularly of the political variety — have a tendency to devolve fairly quickly into endless back and forths between two or more people with opposing viewpoints.  Even when the atmosphere itself is relatively cordial, a simple disagreement over a minor issue or fact can lead to comment threads becoming extremely long and filling up visual space very quickly, often causing loss of context and increased levels of noisy banter.

Some sites have adopted the commenting practice of hiding threaded replies from the default view, unless you “opt-in” to read the whole thing by clicking an expand or more link.  But this solution doesn’t really address the problem at the fundamental level — commenting systems have great trouble being taken seriously as a publishing medium, due to the fact that they tend to overwhelmingly reward volume over quality.

The phenomenon of the long threaded replies emerges from the human tendency of wanting to get “the last word in” when engaged in a discussion or debate.  (Which most of us are probably guilty of, to some degree.)  Online, this behavior has become somewhat of a currency in itself: if not by sheer volume, impulsive commenting styles get implicitly rewarded by allowing new content to get pushed to the top of the feed for being the “most recent” of updates.  Over time, impulsive comments begin to dominate the landscape overall, making it difficult for the community to attract users and ideas with greater insight and depth.

Some websites have experimented with systems that are quasi-democratic in nature: users are given the option to vote for their favorite comment, which brings the comment with the most votes up to the top.  This method can be made to work in some instances, but has the long-term effect of enforcing a system of mob rule: it discourages minority and contrarian voices from being heard, driving many users to take their engagements elsewhere.  Eventually the community begins to lose its ability to maintain a diversity in its viewpoints, becoming less interesting and compelling over time.

But a simple and effective system of moderation can be created using time-based constructs — limiting user input to one comment per day, for example.  This limits the volume of content that the site itself displays (making it cleaner to read overall), while distributing the presence of different commenting styles more evenly, all without having to impose direct or biased restrictions on the users themselves.  This system may not necessarily always bring the “best” comments to the top, but it can help to prevent individuals from flooding or “hijacking” conversations through brute force alone.  (Comment limitations help to curb the “too much free time” bias that many communities face today.)

As a matter of simple economics, in order for comments to be seen as valuable, the comments themselves must be limited in supply.  If users are allowed to only post once a day, people may then, perhaps, learn to treat each utterance with greater care.  The perception that you can comment all the time, anytime and anywhere implicitly promotes a careless attitude toward online interactivity — restricting its supply in an even and fair manner can help to curb some of the negative effects of destructive commenting styles.
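The rate-limit itself is a few lines of logic.  A hypothetical sketch (invented names; the one-day window is just the example used above, and the window length is the obvious knob to tune):

```python
import time

# Sketch of a one-comment-per-window limit. A day-long window matches
# the example in the text; any window length would work.

class CommentLimiter:
    def __init__(self, window_seconds=86400, now=time.time):
        self.window = window_seconds
        self.now = now
        self.last_post = {}  # user -> timestamp of last accepted comment

    def try_post(self, user):
        """Accept at most one comment per user per window."""
        t = self.now()
        last = self.last_post.get(user)
        if last is not None and t - last < self.window:
            return False  # over quota; comment rejected for now
        self.last_post[user] = t
        return True
```

Note that the constraint is uniform and content-blind: it throttles volume without the moderator ever having to judge which comments deserve to exist.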

Zones of “Anything Goes”

This takes the idea of expirations to the extreme, where content and posts are deleted/erased from the website’s log and history after a very short period of time.  In exchange for its shorter life-span, moderation rules become relaxed and the users are also given the option to post anonymously if they choose to do so.  This creates an “anything goes” environment where the users can basically do or post anything they want while there.
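The storage model for such a zone is essentially a time-to-live queue.  A hypothetical sketch (names invented for illustration): posts carry a short TTL and are purged whenever the board is read, so nothing ever needs a manual delete step.

```python
import time

# Sketch of an ephemeral "anything goes" zone: posts carry a short TTL
# and are swept on read, so deletion is automatic rather than moderated.

class EphemeralBoard:
    def __init__(self, ttl_seconds, now=time.time):
        self.ttl = ttl_seconds
        self.now = now
        self.posts = []  # (timestamp, author_or_None, text)

    def post(self, text, author=None):
        # author=None permits anonymous posting, as described above.
        self.posts.append((self.now(), author, text))

    def read(self):
        """Sweep expired posts, then return what's left, oldest first."""
        cutoff = self.now() - self.ttl
        self.posts = [p for p in self.posts if p[0] >= cutoff]
        return [text for (_, _, text) in self.posts]
```

Because erasure is a property of the medium rather than a moderator’s decision, participants know in advance that nothing said there is on the record.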

Some web communities run solely on this methodology as a matter of principle, but a few sites have experimented with the idea of having a “free-for-all zone” that exists as a separate-but-still-connected wing from the rest.  The original intention of most of these efforts was to give some of the more impulsive/ADD-ridden members a place to vent without losing their participation, but these areas have sometimes doubled as a place for members to experiment with ideas and discussions more controversial in nature.  On rare occasions something interesting will happen (an insight into an otherwise taboo subject, for example), which might then make its way into the mainstream once its findings have been structured and organized in a more appropriate manner.

From a moderator’s point of view, these “free-for-all zones” have multiple uses:

  1. Exists as a second-tier content dump for “inappropriate” content, without having to delete the content itself.  (Maybe offensive to mainstream visitors, but not ban-worthy.)
  2. Serves as an internal example to contrast with the discussion styles of moderated content.
  3. Can function as a purgatory for troublesome (but not ban-worthy) members.
  4. Creates the implicit understanding that the “free” zones will probably be more offensive/risky, so members are less likely to be caught off guard during their participation.

For these zones to work properly, however, both their users and content must be shielded from public scrutiny and the pressures of having to produce “results” of a particular kind.  Lines can be drawn at the extremes (e.g. death threats, bigotry, illegal activity), but the area must be seen as being relatively “free” otherwise.

Done effectively, these zones can become a highly functional mechanism by which a society absorbs dissent in a creative and constructive manner.  Put another way, these areas can be said to be the social equivalent of “sandboxing” in coding practices — a place where ideas and experiments are allowed to fly and crash without the worry of what actually comes out of the process itself.

The democratic process can be thought of, simply, as the rhythms that we subscribe to as a society as a whole: the decentralization of power made possible through the reversibility and periodicity of temporal design.  But these ideas may not necessarily be appropriate in all situations: if you just happen to be visiting a site hosted by a friend or relative, it would be absurd to try to bring everything to a vote.  These ideas are more geared towards communities that are larger and more public in nature, as they may prove to be useful for arriving at moderation solutions that don’t require the administrators to constantly micromanage and surveil the users themselves.

Will these methodologies solve all of the web’s social, cultural, and content management problems that exist today?  Probably not.  (Democracy itself is, after all, a work in progress in and of itself.)  But temporal design is an oft-neglected aspect of web development that is fertile ground for future innovations in the social web’s political and cultural practices in the years to come.

A word of caution: there’s nothing people value more than their time.  Temporal designs — especially when they involve grants and removals of admin privileges — must be done with great foresight and sensitivity, or the potential for backlash is high.  I recommend that developers and community managers use time-patterns that the users are already familiar with as starting points, and apply these changes as a flow-pacing process where the intensity of their effects can be micro-adjusted.


About Ryan Tanaka

Ryan Tanaka is a writer, musician and technologist. His ribbonfarm posts explore the nature of ritual, gaming culture, and themes in UI/UX. Follow him on Twitter.


  1. Fantastically lucid thoughts on time & digital interaction. Look to molecular biology – maintenance of a far-from-equilibrium steady state requires shockingly high turn-over rates.

    A gaussian curve, mean == now, negative z-scores == memory, recollections, mainly of the recent past, with a long tail of the distant past, positive z-scores == prediction, planned events, mainly in the near future with high-certainty such as events, with a long tail of uncertainty trailing into the distant future….


    • I really like the biology connection you made there — kind of like how our skin grows, our body evolves and stays healthy through a renewal process of growth and shedding. We’d slowly suffocate ourselves if all we did was acquire and amass new cells — and I think the same can be said about the social connections that we make online.

      Of course the ones important to us we’d want to keep, but I think letting things expire by default would have a profound effect on the way we think about social media and its contents.

      • Yes.

        Look at the structure of evolved networks – gene regulatory, social, internet etc. – the most evolutionarily nimble + mutationally robust are those with some immutable central nodes of low turn-over (think actin, Elvis, google), as well as exploratory peripheral nodes of high turn-over (think de novo genes, flashes in the pan, temporary user accounts).

        There should be a scale-free distribution of turnover rates – I will keep my primary email address unchanged for my life, just as some neurons will never die or divide. And most user accounts, like skin cells or this blog moniker, will soon be swept into the dustpan of history, or just into dustpans.



  2. I don’t think I buy the framing of (US) democracy as crash-only:
    1. Even when “control” in the legislature changes parties, most incumbents keep their seats.
    2. Even “wide” shifts in seats between the 2 parties is usually based on a pretty small shift in popular vote.
    3. “Most” of the government is administrative-branch, which doesn’t change much at all.

    • The connection I made is mostly based on an ideal definition of democracy — it doesn’t mean that our societies always live up to the promise. But even then, a president can’t be in power for more than 2 consecutive terms, and this is a “crash rule” that’s been strictly enforced since the beginning. The social web, on the other hand, has no such rules — power goes on indefinitely, subject only to the admin’s whims and preferences.

      The idea here is to make democracy on the web more functional than ideological, using the same methods that make politics work in the real world. It may not be appropriate for all situations, but I do think that a few communities would be able to flourish under social systems that hits closer to home.

      • Ryan, actually the two-term limit is a modern innovation. There was an informal tradition of two terms from the beginning, but the strict enforcement came with the 22nd amendment of 1951. Franklin D. Roosevelt was elected President 4 times and died during his fourth term.

  3. cf Cory Doctorow: “You can’t be a citizen of a theme park.”

    • Yeah, that’s a pretty good example. Right now you see it here and there in basic forms, but I think designers and developers have yet to make it part of their core philosophy.

      That example is also representative of how a lot of tech products often come across as being “needy” and pretty passive-aggressive from a social point of view. Click me! Why do you want to leave?? Hey, I haven’t heard back from you in a while…

  4. The different exchange sites ( ) seem to have commenting and leaving answers pretty well figured out: good answers get scored, and a higher score gives you more privileges.

    One of the guys behind stackoverflow, Jeff Atwood ( ), has several blog posts about the subject of how to improve the signal-to-noise ratio you normally see in blog comments. And his software is trying to improve it. So that might interest people.

    Disclosure, I do have a stackoverflow account, but that is the only link I have with Atwood.