Common Sense Eats Common Talk

In November 2008, with the financial crisis in full swing, Queen Elizabeth attended a ceremony at the London School of Economics. Facing an audience of high-ranking academics, she posed a simple question: “Why did nobody notice it?”

How could it be that no one among the smartest economists, commentators, and policymakers in all her kingdom – and beyond – had been able to see the formation of a bubble of such dimensions?

Illustration of The Emperor’s New Clothes by Vilhelm Pedersen, Andersen’s first illustrator

And yet critical facts were readily available – facts that could have warned about the craziness of the housing market, on which an even bigger financial house of cards had been erected. A short trip to a “regular” American neighbourhood – like the one undertaken by Mark Baum in The Big Short – would have revealed an endless list of properties in foreclosure, real estate agents openly bragging about the laxity of credit requirements, and exotic dancers with multiple mortgage-financed properties.1

Such evidence would have been sufficient to convince most people of the existence of a bubble. However, in London, New York and the other financial centres of the world, an entire class of experts kept blatantly ignoring the facts, anecdotal evidence, and common sense that could have anticipated what was about to happen.

This is a high-profile example of a more general situation in which a narrative establishes itself and resists being disproven, even when it is clearly contradicted by information right under our noses. Like the crowd in Hans Christian Andersen’s famous parable, we watch our sovereign parading naked in the street, but are unable to see through his invisible clothes. Until a young boy steps forward and, with a little common sense, lifts the veil on our “common talk”.

Falling for conformity

Common talk is the unreflective parroting of smart-sounding theories, stories, and arguments without applying any test, even the most basic one, to verify their validity. It’s not that common talk is necessarily false – some of what people repeat mindlessly happens to be true. It’s just that veracity is of secondary or tertiary importance. In this respect, it is a lot like Harry Frankfurt’s concept of bullshit: a lack of concern for the truth.2

But why do we fall for and perpetuate common talk?

Sometimes it is a deliberate choice. This happens when we perpetuate narratives out of direct personal interest – like the bankers levered up in the real estate market – or in an effort to please those in a position of authority. More often, though, our motives are less conscious.

During the 1950s, Solomon Asch demonstrated in a series of “conformity experiments” how easily social pressure can cause people to second-guess their judgments, even on a question as basic as the length of a line. This stems in part from a fear of ridicule or humiliation. Faced with a dominant opinion, it is easy to doubt ourselves and question whether we are qualified to contradict so many other people – especially if they are acknowledged “experts” in complex domains like finance.

In other situations, what draws us towards common talk is the desire to maintain consensus. When an idea or theory starts spreading, there is a strong inertia to stick with it. This is a tendency that runs deep in our genes. As Roy Baumeister has recently argued, humans have “an innate propensity to value consensus above accuracy.” Although groups have a strong incentive to seek accurate information about a given topic, Baumeister concludes that other goals often prove more powerful:

…groups value consensus and shared reality, and so members are often reluctant to bring up information that goes against the emerging consensus. Although critique and argument would best serve the group’s epistemic goals, the goal of harmony tends to suppress those processes.

As social animals, we need a collective worldview within which to operate. Common talk is one of the main ways we construct that worldview.

Common talk is fragile

There is a tendency today to associate fake news and disinformation only with the uneducated, but this betrays a striking lack of self-awareness. As the financial crisis of 2008 shows, people who can be considered “very smart” by any acknowledged external measure – from IQ to educational and professional achievements – are far from immune to common talk. Peter Thiel goes even further, arguing that “smart people” are more likely than average to pick up on trendy and fashionable thinking and get trapped by it.

A possible explanation lies in the content of common talk. People pick up on ideas when they are, in some way, already looking for them; they are receptive to a particular type of message. Common talk is particularly tempting for those with an affinity for explanations, for overarching stories, for big-picture thinking.3 These seemingly coherent narratives have something in common: they all focus on the macro.

If macro thinking makes common talk attractive, it also makes it prone to end in big disappointments. With the barriers to bullshit lowered, common talk drags us towards the false belief that we can rationalise the complexity of the world we live in, and it inhibits our ability to erect defences against collective illusions. For these reasons, common talk, rationalisations, and narratives are failure-prone. To borrow Nassim Taleb’s term, they are “fragile”.

Taleb also offers us a way out of this trap, encapsulated in the quote: “it is easier to macrobullshit than to microbullshit”. If we want to stay away from the temptation of the macro, we need to turn our attention to the micro.

How to see through invisible clothes

Common sense sits on the opposite side of the macro/micro divide from common talk. Its focus is tangible and practical: observations and experiences as opposed to rationalisations. Common sense is inherently micro.

In the context of the 2008 real estate bubble, common sense is the “layman’s” realisation that an increasingly large number of people cannot afford their mortgages, and the ensuing conclusion that they will default on their loans. Its value doesn’t lie in its ability to offer comprehensive explanations, but rather in its empirical validity.4 Traditional common-sense knowledge – simple heuristics, grandmotherly advice – is the ossified product of observation. It has endured through time not because it is attractive but because it works.5

We can now consider the optimal approach to navigate situations where common talk appears in contradiction with common sense. Because of its empirical and practical nature, I argue that common sense should be considered “default-right” while common talk should be considered “default-wrong”. In other words, faced with a dilemma, the burden of proof lies with the statement that contradicts common sense.

Thinking in terms of burden of proof can help us come up with a set of simple rules of thumb to guide our day-to-day decision making and communication. I’ll suggest three here, though there are surely more – I’d be happy to hear them:

  1. When a proposition contradicts common sense, we should assume it to be wrong.

Jared Diamond made this point recently when naming “common sense” as his choice for a scientific concept that “ought to be more widely known.” As an example, he cites a recent debate among archaeologists regarding the claimed discovery of pre-Clovis settlements in Southern U.S. states and Latin America. Common sense alone would be enough to dismiss such claims. There are hundreds of Clovis settlements south of the Canada/U.S. border, but no sign of human presence before that epoch has yet been discovered. If pre-Clovis populations had indeed passed through the continent, there would be plenty of evidence by now; it seems implausible that they were “airlifted” directly to the south. Yet a large number of scholars overlook this basic logic, blinded by the desire to claim a new discovery pre-dating those of their colleagues. Unsurprisingly, these claims usually turn out to be the result of measurement or sampling errors in radiocarbon dating.

The point is not to preemptively disregard any theory that contradicts common sense. Rather, it is a warning to avoid getting bogged down in details before the primary contradictions of a new theory are resolved. It also forces us to come up with a plausible explanation for why such a theory is not wrong.

  2. If we cannot strip a statement of its jargon and rephrase it in our own words, we are likely perpetuating common talk.

One of the best formulations of this point comes from Timothy Snyder’s On Tyranny. In lesson nine – “Be kind to our language” – he writes:

Avoid pronouncing the phrases everyone else does. Think up your own way of speaking, even if only to convey that thing you think everyone is saying.

If we cannot explain a concept in plain terms, there is a high likelihood that we are either falling prey to consensus thinking or that we are simply indifferent to the validity of our statements. Writing and teaching are two great ways to avoid this trap. More often than not, they lead to an accurate realisation of our true level of understanding.

Naturally, it is not realistic to expect to reach a teaching level of knowledge in every possible subject. There will be situations where we are forced to work from cached thoughts – thoughts that we pick up from our memory or environment without critical processing. In these situations, it is worth realising that the burden of proof is on us. Everything we say without fully understanding it, or without strong trust in the source, should be taken with a grain of salt.

  3. Always test macro thoughts against micro examples.

From a general point of view, this is simply one of the essential characteristics of science. In Feynman’s words: “It doesn’t make a difference how beautiful your guess (theory) is, it doesn’t make a difference how smart you are… if it disagrees with experiment, it’s wrong”.

As for methodological advice, a great example comes from this interview with Scott Aaronson. His way of avoiding the trap of “elegant” but flawed theories is to resist looking for general frameworks too early in an investigation. He starts instead by looking for “easy special cases and simple sanity checks”, or things he can try out “using high-school algebra or maybe a five-line computer program”. Not only does this micro focus prevent wild goose chases, but it also primes him for a better understanding at the macro level:

I find that, after you’ve felt out the full space of obstructions and counterexamples… finding the proof techniques by which to convince everyone else is often a more-or-less routine exercise.

This approach is as valid for science as it is for running a business. Execution (the micro) without vision (the macro) can feel like mindless, unexciting work. But, as Edison put it, “vision without execution is hallucination”.
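To make Aaronson’s “five-line computer program” suggestion concrete, here is a minimal sketch in Python. The macro claim being tested – the tempting conjecture that n² + n + 41 is prime for every non-negative n, since it holds for so many small values – is my own toy illustration rather than an example from the interview; the point is only the habit of letting cheap micro checks veto an elegant-sounding generalisation.

    # Sanity-check the "elegant" macro claim that n^2 + n + 41 is always prime
    # by brute-forcing the smallest cases first.

    def is_prime(k: int) -> bool:
        if k < 2:
            return False
        return all(k % d != 0 for d in range(2, int(k ** 0.5) + 1))

    for n in range(100):
        value = n * n + n + 41
        if not is_prime(value):
            # Report the first failure together with a witness factorisation.
            factor = next(d for d in range(2, value) if value % d == 0)
            print(f"Counterexample at n = {n}: {value} = {factor} * {value // factor}")
            break
    else:
        print("No counterexample below n = 100; the claim survives this micro test.")

Running it immediately reports a counterexample at n = 40 (1681 = 41 × 41): the kind of cheap, micro-level falsification that spares us from defending the claim at the macro level.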

When common sense fails

Seen like this, trusting our common sense would seem like straightforward, perhaps even bulletproof advice to follow. But common sense does fail us on occasion – and when it does, it fails us big time. The tricky part is distinguishing the situations where common sense works from those where it can lead us astray.

First, there is a practical aspect. Even when common sense is right, it can still result in an economic loss if used as a guide to investment decisions. Right or wrong, a common belief can push prices up in a growing spiral, fuelled by the self-fulfilling effect of bull markets. As every investor knows, “the market can remain irrational longer than you can remain solvent”.

Substantial structural changes – in economics, politics, or technology, for example – can also undermine the validity of our common sense. These are situations where previously reliable mental models stop working and new ones – new common sense – need to be created. We are currently facing one of these structural rearrangements as we move from an economy based on scarcity (industrial) to an economy based on abundance (information). Sensible industrial-age values like thriftiness, planning, and risk minimisation are now losing relevance and are being replaced by “new” ones like experimentation, learn-fast-fail-fast, and optionality.

Another significant shortcoming of common sense is that it has limited explanatory value. Common sense can be a valuable compass to guide our behaviour, it can help us spot and debunk flawed theories or common talk, but it does little to explain new phenomena or prove the validity of new propositions.

In this way, common sense plays a role akin to observation or experimentation within the scientific process. As Hume argued, observation alone cannot produce scientific knowledge, or at least it is not sufficient for that purpose. Observing a thousand white swans does not prove that all swans are white, but a single black swan does refute the proposition. Common sense, likewise, is most useful as a falsification mechanism.

At the same time, science has the authority to revise our “common sense” just as it has the power to explain things beyond what we can directly observe. It was once fairly obvious that the sun revolved around the earth – but a long, arduous campaign of reason and observation convinced us otherwise. In other words, when common sense and science collide, either a hypothesis is proven wrong, or common sense needs to be updated.

At this point, we are left to answer a critical question. How can we decide when to overrule our common sense? What should we do in the many, almost daily, situations where it’s impossible to verify the validity of a statement? When can we trust common talk?

In this post, I have focused on situations where common talk should not be trusted. Put another way, I have tried to advocate for the adoption of a precautionary principle. But society rarely moves at the speed of precaution. Significant changes are initiated by people who find confidence in unproven convictions, and carried forward by people who disregard rationality to follow them.

I suspect the answer cannot be found in a positive theory of certainty, but in the acceptance that, as humans, our destiny is to live, and act, in doubt.

––––––––––––––––––––––––––––––––

[1] This may sound like an exaggeration, but it is exactly what happened to Mark Baum – the character based on investor Steve Eisman, one of the few who decided to bet against the housing bubble – during a trip to Florida to investigate the validity of rumours of an impending crisis. The episode is reported in Michael Lewis’ The Big Short.

[2] The main difference between the notion of “common talk”, as presented in this post, and Frankfurt’s “bullshit” is whether the action is active or passive. In my case, I am more focused on the way people repeat what others are saying (passive action), whereas Frankfurt is more focused on the act of generating bullshit in the first place (active action). The two, however, are closely related.

[3] A possible complementary explanation could point to a correlation between the intuitive (N) type of the popular Myers-Briggs personality test and the conventionally “smart” part of the population. This would provide some additional substance to the argument that “smarter” people are more easily tempted to fall for common talk. While there are some studies suggesting a link between academic performance, IQ test scores, and N types, these probably say more about the way we measure intelligence than anything else. The validity of Myers-Briggs as an explanation of anything is also often criticised.

[4] Translated into a business context, macro vs. micro is typically embedded in the corporate vs. startup dichotomy. Despite the increasing level of bullshit pervading the tech scene, startups in their early days remain mostly immune from common talk thanks to a forced focus on the micro. Even the inevitable theorisations around entrepreneurship, considering the popularity of the subject, tend to focus on process rather than concrete prescriptions. Along the same lines, it is not surprising that YCombinator and its founder Paul Graham have become famous for dispensing advice that sounds very much like common sense: “code and talk to customers”, “make something people want”, “the best way to become rich is to avoid dying”.

[5] Taleb calls this the Lindy effect: the longer some knowledge has been around, the longer it can be expected to last. Knowledge that is “Lindy-prone” is knowledge that lasts only as long as it works. Most elements of common sense behave like this: when they stop being valid, they stop being used.


About Stefano Zorzi

Stefano Zorzi is an entrepreneur based in Copenhagen. He writes irregularly on his blog The Hiding Hand.

Comments

  1. We need more essays like this one — broadly accessible pieces that consider how best to climb about on ladders (or geodesic domes) of abstraction.

    Rawls might suggest that we should (and mostly do) aim for a reflective equilibrium between common sense and common talk.

    Eliezer’s essays on “tabooing terms” and “replacing the symbol with the substance” are related pieces that explore how to execute your suggestion #2.

    • Thanks. On your first point, I believe this is the biggest question left open by the essay. As I was nearing the end of it, I kept feeling a sense of dissatisfaction with the stricter interpretation of my message. Common sense alone is not sufficient. We need a robust mechanism to challenge it and renew it constantly. In the active sense – i.e. while trying to create new knowledge – we should be liberal in challenging (even defying) common sense. As receivers, however, we should be more conservative and try not to fall for common talk. Thanks also for pointing to the two Eliezer essays.

  2. Every problem has a solution that is simple, elegant, and false.

    – An old programmer’s saying.

    • Not 100% sure what your comment refers to. If I may interpret, I agree that our common sense – at any given point in time – is probably flawed. This doesn’t make it less effective as a defence mechanism against common talk, which is typically even more flawed.

  3. I like this. In the particular case of the irrational exuberance leading up to 2008, I can see it now. There’s a penalty for being contrarian when everybody’s making money. I’ve stopped saying bearish things about Bitcoin because I continuously sound like a fool. Booms are gradual; busts are fast. The result is that being a bear is only satisfying for a short time, while being a bull pays most of the time. Nobody wants to talk to me anymore, and I don’t even want to hear myself talk because everybody around me is getting rich. All of which, I think, feeds the common talk. As they say when bubbles are in full swing and prices reach irrational levels: nobody wants the party to stop.

    • Being a bear in a bull market brings no short-term rewards. I think the answer here lies in freeing ourselves from the need to signal our position. If you have certain (contrarian) beliefs, try to find the strength to stick to them, but don’t focus too much on convincing others. Armed with that strength – which, I agree, can be very difficult to find – you can then approach others with an open mind, and maybe you’ll even end up changing your position. In financial situations, in particular, we should really try to separate what we believe from which action is more likely to generate returns. We can be right and lose money, or wrong and make money. Being able to act against our beliefs is probably a good skill to have if we want to be engaged in financial markets. When it comes to strong (potentially irrational) bull markets, thinking in terms of “optionality” is probably a better approach.

  4. There’s a little irony in the use of Taleb and Thiel here; another way that standard patterns of discourse develop is by the use of common metaphors or reverent reference to commonly respected figures.

    We might also expect that the obviousness of the “emperor’s new clothes” metaphor might actually mean that we are not synthesising insights from different places here, just observing the repeated use of the emperor’s new clothes heuristic in different guises.

    Thiel in particular makes the emperor’s new clothes metaphor a fundamental part of his business principles: “work on what you believe to be right that everyone thinks is wrong”.

    So the tips here focus on recoding insights outside the terms of standard discourse, and on creating free-standing proofs of ideas. I would add another element: careful reading of the sources to which reverence is applied, to find gaps between their definitions of the same words in different contexts, and to hunt for breakdowns of generality and agreement within them.

    So to break the example, imagine the emperor’s new clothes, but with a boy too far away to see the emperor himself, so he runs between the legs of the people, and hears what people say, that the emperor’s clothes are modest, and extravagant, sleek and puffed. He moves to the centre, and finds that the people in the middle try to avoid the topic, reticent to give their opinion either way. This combination of contradiction on the edges, and studied indecision in the centre, should suggest an attempt to create common agreement without an original source of information. This should lead to the conclusion, as it often does in politics, that we don’t really know.

    This would suggest that the opposite situation – a committed core dedicated to a particular mode of experience, surrounded by a wider area of indecisive or low-detail opinions – is more likely to be correct, as it suggests that less information has been generated endogenously within general human culture relative to that generated within the subgroup. More accurate, though, is probably a Mexican-hat potential, where there is a spike of discussion and detail within the group, a spreading down to lower detail in people with reasonable proximity, and then a more gradual increase as you get more distant:

    The people with an interest in the topic, and a knowledge of its principles, involve themselves in detailed discussion and confirmation of those principles; those with a little knowledge, but no great detail, are inhibited from general speculations, but still don’t have enough background to play with the underlying assumptions. And those far away, without knowledge of either, can make up whatever ideas they like. Doesn’t mean they’re right, but it does suggest some coherent object of study or practice.

    Applying this to the subject itself, I don’t think the distinction between common sense and common talk applies consistently along all of these speakers; Jared Diamond’s common sense test proposes that people’s local dating procedures should be discounted because it assumes that the south was settled before the north. The airlift metaphor comes from its contradiction of the expected migration pattern from North America down. (The obvious alternative to an airlift being boats, pre-Polynesian boats perhaps.)

    In this case Jared Diamon’s common sense is a heuristic based on top-down processing, filtering information according to its broader consistency within a recognised thesis, rather than allowing the bottom-up, local evidence of the different studies, acting in competition, to overrule his general assumption of what makes sense. So the examples for 1 and 3 actually lead to direct conflict.

    The emperor’s new clothes model allows everyone to portray themselves as the little child, because all ideas have social pre-conditions, and it tends to be easier to see other people’s than your own, because being unfamiliar with them, you need to discover what they are. So you can focus on your own intuitive realism vs the obviously socially conditioned ideas of others.

    Fortunately, we have an alternative, in saying that the 1 vs 3 conflict is a paradigm conflict. What my experiment suggests vs what the broader patterns of evidence say. One side has sample size on its side, the other specificity.
    Not a conflict between two different paradigms, and not a paradigm shift, more like a paradigm wobble, the movements of a paradigm under the attempts to aggregate the different ideas into a single pattern of common talk.

    A free market works differently from science as a truth-finding procedure in that part of its assumption is that, in normal operation, people will be acting with different overall theories of the world. There is no assumption or requirement that people will pool their knowledge in order to create theories that combine all their evidence; instead, those various theories combine in practice by participants betting on the correct relative values of different goods based on those theories. In equilibrium, worldviews can be isomorphic up to the price of cheese. There is an assumption that bad ideas will be outcompeted by good ideas, but bad ideas, particularly constellations of mutually strategically reinforcing bad ideas, can outcompete good ideas for long enough to lead to their own catastrophic undoing.

    An example being the subprime crisis in the US, where people thought Goldman Sachs was selling them risk-mixed subprime bonds because they were good, when they were actually selling them because they felt they were hedged against the bubble popping and could enjoy it while it lasted.

    The answer given at the time to the queen was “At every stage, someone was relying on somebody else and everyone thought they were doing the right thing.”

    In that sense, you might think that the problem was caused by the absence of practical necessity for agreement on details, in the way their common talk was generated.

    • Ah “Jared Diamon”, famous anthopologis. Also I hope my post doesn’t seem too unappreciative while being critical, I wouldn’t have thought of any of this stuff without your post.

      If this inverted argument has any validity, it might suggest that Silicon Valley’s common talk is just better than in other business areas, perhaps because of non-market-driven mechanisms, like a tendency for staff to swap companies, and a relatively short fail cycle that provides feedback. So someone can say, “we did this in my last company, it went belly up” without having the useful information dwarfed by negative social signalling.

    • Thank you Josh. There are a few things there.
      In relation to the reverence given to (some) sources, I think the main difference is reverence towards facts – true or assumed true – versus reverence towards method. Both Taleb and Thiel – in the examples I have used here – are offering meta-advice: mental models and heuristics that can help identify common talk. It is fair to argue that even their advice constitutes common talk, but I don’t believe that people of superior (formal) intelligence are more prone to believe in common talk just because Thiel says so; I am using Thiel to reinforce something I knew as well.

      More interesting is your comment about the ability to go beyond common talk. I really like your challenge to the J. Diamond example: if we stick to the “common sense” explanation of human evolution and migration, we will end up overlooking perfectly plausible opportunities.

      In relation to this argument, I agree with Feyerabend that there is a “tyranny of tightly-knit, highly-corroborated, gracelessly presented theoretical systems” and that science can only proceed if we allow it to overtake the status quo, to grant new ideas the ability to override old ones not through logic but through means other than argument. Through: “irrational means such as propaganda, emotions, ad hoc hypotheses and appeal to prejudices of all kind”.

      I still haven’t found the right equilibrium. I have started to think that a strong barrier against bullshit is, in the end, less beneficial than allowing ideas – even heretical ones – to flow freely.

  5. Ralph W Witherell says

    Thanks. Great Post.

  6. “But why do we fall for and perpetuate common talk?”

    Well, simply enough, humankind is a mammalian herd, or pack, species.

    Julian Jaynes’ landmark “The Origin of Consciousness in the Breakdown of the Bicameral Mind” explains it better than any one or pair of Solomon Asch-type experiments.
    I have, against my will and to my shock, been part of an “organization”, that is, a swarm made of hens (which to my wonderment love the place of hens) and cocks (which in no time are made as blind as it can be by power. No doubt, being obeyed is more addictive than what commonly is regarded as drugs, to humans), and can say that Jaynes’ study explains most thoroughly what I had observed.

    A passage:

    “If the bicameral mind existed, one might expect utter chaos, with everybody following their private hallucinations. The only possible way in which there could be a bicameral civilization would be that of a rigid hierarchy, with lesser men hallucinating the voices of authorities over them, and those authorities hallucinating yet higher ones, and so on to the kings and their peers hallucinating gods.”

    What amazes me the most is… 3 or 4 thousand years later (indeed, a short span, in evolutionary terms), much of the substance has not varied.
    And, yes, they are absolutely, completely, unaware of what goes on.

  7. I think this could be supplemented by Hall’s distinction between ‘good sense’ and ‘common sense’. This would add another dimension to your analysis.

    https://www.opendemocracy.net/ourkingdom/stuart-hall-alan-oshea/danger-of-common-sense