Organizational Theory isn’t a science, though it would like to be. Unfortunately, building a scientific approach requires understanding from a number of fields that themselves are still only aspiring to be sciences. Because psychology, economics, and sociology are a mish-mash of rules of thumb and vague, non-predictive, and generally unfalsifiable “theories”, organizations are reduced to ad-hoc rules and guesswork: critical, but prescientific.
For now, to abuse the parable of the blind men and the elephant, organizational theorists are still each groping at their own part of the elephant, unable to figure out that the trunk is next to the tusks, or even that both are part of the same animal. It’s not a science: if anything, it’s a field of engineering, albeit one without a grounding in physics or Asimovian psychohistory to draw on. Precisely because the field isn’t scientific, understanding the engineering rules of thumb that were developed over time is fantastically useful for a practitioner.
Henry Petroski’s excellent To Engineer is Human introduced me to the history of engineering. Failure is the watchword of that history. Even generations after Newton, science was simply incapable of answering basic engineering questions, like “what load will this beam support?” So engineers developed rules of thumb in different domains that assured safety, grounded in experience. This approach was almost scientific — the theory is that this structure will be stable, and if it’s untrue, it will be falsified all on its own. Organizational Structure is similar: we know a lot about what doesn’t work.
As with most fields, it’s easiest to dissect organisms once they are dead, so I’ll stick to ideas that are older than I am. Understanding the various theses and antitheses won’t lead to synthesis without the basic grounding to unify them, but the history of failed ideas can still give us a map of the pre-scientific minefield of organizational design. Once we’ve traced out the map, I’ll add some ideas about how we can navigate around the unknown dragons, and find useful insights into organizations without actually pretending to understand them.
Prehistory, History, and the Future
The textbooks all try to tell us that the earliest theory of management is F. W. Taylor’s Scientific Management (1911). Before this, according to scholars of the field, “Craftsmen owned their tools, [which] minimized the possibility of management’s establishing general measure of productivity and quality.” But this is a ludicrous contention: cost accounting was well established, and factories were common a century before Taylor developed his insights. When interchangeability was discovered, there must have been some theory preceding Taylor that allowed businesses to optimize their processes — and there was! Unfortunately, it is so basic, and still so prevalent, that people haven’t noticed it.
Intuitive Organizational Theory
Intuitive management is the grandfather of all management theories. Everyone has worked with others in some form or another, and management is just working with others. Some people are better at this than others, and those best able to manage will be obvious, so there is no need for scientism. Instead, we admit we can’t formalize everything, and let people work it out. And it works!
Well, it works for a while. But at some point, things get a bit too complex, and people need to go corporate or go home. But, as I argued in that post, there is a tremendous advantage to having little structure, and hence little need for organizational theory. And as the world consolidates and bifurcates into lumbering mega-corporations and nimble scavengers and upstarts, the big guys need to realize why they aren’t able to replicate that agility — they are too large for intuition. Instead, they need theory.
Scientism, in the form of “Scientific Management”, reared its efficient head in the late nineteenth century, with clocks and measurements that found tremendous benefit from imposing order on the evidently previously unruly and disordered factory floor. Taylor observed and intervened — and he certainly found room for improvement, like allowing manual laborers rest breaks to increase their efficiency. Unfortunately he also managed to perpetuate the single most destructive management practice: the unthinking application of a paradigm to a complex problem. (This rigidity and oversimplification which replaces intuition is part of why models fail.)
One specific drawback of his approach is due to the Hawthorne effect, where measurement distorts the system it was trying to measure. Specifically, when you pay attention to someone, they get more efficient. It seems that, like children, employees thrive on attention. But that means any attempt to improve efficiency by monitoring performance closely will appear effective, spurring a proliferation of middle-management supervisory roles, with additional costs that begin to offset the additional efficiency. Organizing many layers of management was difficult. Principles were needed to decide how to set them up — it would hardly be efficient and orderly without rules and procedures. And who better to institute strict rules than an authoritarian German sociologist?
Structure and Function
Max Weber’s theories demanded further orderliness in corporations. He took bureaucracy, a term originally coined to critique French government, and turned it into a principle. The so-called Bureaucratic Management Theory (~1920) was a way to try to ensure that everything in the system worked according to the rules. The successes of the approach are obvious in the rapid industrialization of the era. He formalized things like the now nearly universal practice of listing and insisting on rigid job qualifications, and an explicit hierarchy of rigid roles to fill. Actual humans would be shuffled around these systems via systematic processes.
Weber was inspired and informed by Marx’s analyses of the role of capital, but unlike the more utopian Marx, he viewed the tension between owners and employees as unresolvable. At the same time, Weber was aware of the problems of bureaucracy being created by the layering of management, but thought that restructuring the system properly would be enough to allow humans to fit. Ironically, the proliferation of his theories and the effects of oppressing workers were a key reason for the later rise of the Marxist approach he disputed. The success of industrializing the workforce made workers just cog-like enough to be efficiently organized into unions. The rise of unions in the wake of scientific management was a counterbalancing force to bureaucratic organization. Of course, this led to a further entrenchment of the structural dichotomy of workers and managers, following the Marxist vision. Weber lived to see the beginnings of the ultimately doomed collectivist approaches, which started in earnest in Russia a few years before his death in 1920.
But when it’s time to fail because of insufficient appreciation of the problem, everyone fails. And so, despite Weber’s opposition to Marxism, the Marxist program failed for much the same reason his own models failed: humans don’t really work that way.
As most of us know, dealing with micromanagement sucks. Unfortunately, measurement and structure require it, to some extent, and the cumulative effects of this were not captured in Taylor’s original short-term studies. The increase in management structure only accelerated in response to the demand for unions, which fed the opposition, and cemented the structure in place. Management was dedicated to maximizing productivity in the face of union demands. This meant that management was precluded from doing anything other than managing work processes and structure, and the tension was resolved by limiting the tools available for managing a workforce; the tools of scientific management were institutionalized. (Made into institutions, that is, rather than committed to one.)
Span of Control and Magical Thinking
One particularly amusing construct in management that came out of Weberian approaches is the span of control. To lay the background for this fanciful notion, we pretend each manager has the same task, of monitoring and managing their reports — which means that all management is, in almost a parody of Weber’s approach, a single clearly defined role to be standardized. Bureaucracies are necessarily hierarchical, for deep reasons I discussed previously, so many people were led to an assumption that management is, or should be, fractal. Each manager has K people they manage, and they are managed by someone with K direct reports as well, going all the way up and down the K-ary tree. The puzzle induced by this simplification is finding the optimal value of K – and this is called “span of control.” Reams of paper have been filled with empirical and theoretical justifications for what the optimal span is, despite lack of conceptual clarity for why this single number is useful, or how it should be applied in the real world.
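The arithmetic behind this fractal picture is simple, which is much of its appeal. A minimal sketch of the K-ary-tree model (the function name is my own, and the uniform span K per manager is precisely the oversimplification being criticized):

```python
def org_stats(k, workers):
    """Size of an idealized K-ary management tree over `workers` front-line staff.

    Returns (layers_of_management, total_managers), assuming every manager
    at every level has exactly k reports -- the span-of-control fiction.
    """
    layers = 0
    managers = 0
    level = workers
    while level > 1:
        level = -(-level // k)  # ceiling division: managers needed for this layer
        managers += level
        layers += 1
    return layers, managers

# With the "magic" span of 7, ten thousand workers need 5 layers of
# management and 1,670 managers in total: overhead of roughly 17%.
print(org_stats(7, 10_000))
```

Shrink the span and both the depth of the hierarchy and the management overhead grow, which is why so much ink was spilled over finding the “right” K — even though real managers, unlike nodes in a tree, do not all have identical jobs.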
Several years ago, I had the privilege of hearing Francis Fukuyama speak to an audience of experts in policy and public organizations, and a few interested students like myself, at the RAND Corporation. (It must have been 2012 or early 2013, since he mentioned drafted chapters of his then-upcoming book.) As a side-bar to the discussion, he mentioned his critique of Span of Control in his essay, “Why There is No Science of Public Administration.” Part of the justification given for a span of control of seven, he pointed out, is Miller’s Law. The law states that the number of objects an average person can hold in working memory is about seven. The justification of this, Fukuyama amusingly noted, came from a paper suggestively titled “The Magic Number Seven,” where Miller suggested, tongue-in-cheek, that seven was somehow a universal value.
As Miller concluded: “What about the seven wonders of the world, the seven seas, the seven deadly sins, the seven daughters of Atlas in the Pleiades, the seven ages of man, the seven levels of hell, the seven primary colors, the seven notes of the musical scale, and the seven days of the week? What about the seven-point rating scale, the seven categories for absolute judgment, the seven objects in the span of attention, and the seven digits in the span of immediate memory? For the present I propose to withhold judgment. Perhaps there is something deep and profound behind all these sevens, something just calling out for us to discover it. But I suspect that it is only a pernicious, Pythagorean coincidence.”
Fukuyama pointed out that the adoption of seven for span of control was evidently an application of this magical thinking. In general, simplification to such rules is itself an example of magical thinking, something we see all over. And if this makes sense, you’ll agree that it’s not simply coincidence that the number seven also has deep numerological significance. According to one blog, which is evidently well-respected by Google’s PageRank, “Number 7 also relates to the attributes of mental analysis, philosophy and philosophical, technicality, scientific research, science, alchemy… ahead of the times.” Perhaps there is something deep and profound about the fact that science and alchemy are grouped here. Perhaps my dismissal of the magical number seven is because I’m not “ahead of the times.” But blind application of similar generalizations led to quite a few pernicious management beliefs. So I suspect the magical thinking just mirrors people’s inability to accept that sometimes, simple connections are illusory, and things are complex.
Failing to Integrate Humanity into Management
Backing away from our foray into mysticism, and returning to the realm of scientism-istic “fact”, the rising star of psychology was soon adopted by management theorists. After World War 2, as psychology began to gain more widespread acceptance, the discipline of “Human Relations” was born, with the motivation of providing a more human approach to management. Maslow’s work in the 1960s on Eupsychian Management was an early push in that direction, promoting the primacy of worker actualization as the goal of management. The manager changed from a cog to a coach, creating character instead of impersonally pushing profitability.
It turns out that this approach, and related ones, failed for almost exactly the opposite reason Taylorism did: it ignored business goals in favor of human factors.
Today, firms like Goldman Sachs proudly say that “our people are our greatest asset.” That may be true, but thankfully for investors, they don’t mean the welfare of their employees or their development as human beings; they mean that people are what allows them to create wealth. Human Relations is now an almost Orwellian euphemism for everything impersonal about business: screening interviews, harassment complaints, legal and liability issues, and of course, firing people.
As a personal aside, at the start of the great recession, I was working at an investment bank. As the most junior member of my group, I suspected I was first up at the chopping block, and eventually my team-lead made it clumsily obvious that I was on my way out. After a few painful days of awkward excuses about cross-training on the tasks I managed and had re-engineered, I was asked to meet with my manager. My manager sat there, uncomfortably. The HR person sitting in the corner (conspicuously failing to provide any of those vaunted eupsychian benefits to anyone involved) would occasionally prompt him. As he read through a literal script, with the obligatory lip-service encouraging me to view being laid off as an opportunity, I distinctly remember that the most uncomfortable part for me, having seen it coming for a week or so, was watching my manager forced through the charade.
The process was impersonal, insulting to everyone involved, and inefficiently redundant — which describes the result of these systems in general. Human Relations as originally envisioned as a discipline is a failure. Instead of convincing management to care about workers more than profits, it led to doublespeak, as the HR workers that wanted to be psychologists were forced to be cogs instead.
Systems Management and Premature Optimization
A more recent suggestion is to understand organizations as complex systems. And they are complex systems. The approaches suggested, however, are usually somewhat more nuanced than throwing a copy of Gleick’s Chaos at managers and running away. But unless the question is “what buzzword is being used to obscure our lack of understanding?”, “Complexity” isn’t the answer. We don’t understand most complexity well enough for it to be a useful predictive model even in scientific fields, so applying it to organizational theory is a lost cause. As one paper puts it, “Organizational theory has shamelessly borrowed from the physical and biological sciences for its models and metaphors. These models and metaphors have been unsatisfactory in predicting the behavior of organizations, and to provide prescriptive designs for creating organizations that are more efficient and effective.”
That noted, there are approaches to complexity that have been useful in the sciences and have had some success in organizational theory as well. My introduction to management and organizational theory, in many ways, was The Fifth Discipline: The Art & Practice of The Learning Organization (thanks to a fantastic recommendation by Todd Slingsby). The primary insight of this theory was that there are many quantifiable factors in business, and their relationships can be understood quantitatively — and that then-recent tools in systems theory were the way to get there. Like other approaches, this was insightful, but limited by the lack of rigor in many of the underlying models.
What it got right, however, was that a model for how the components of an organization interact is itself a source of insight. By simplifying organizations to the simple dynamics of factory physics, approaches like the Theory of Constraints, as explained clearly by Tiago Forte, were able to show where organizations, once understood, could be improved. This move back towards neo-Taylorism is more sophisticated, and more aware of the failures of the past, but it seems primed for a similar pushback against more globalized, efficient, and inhuman business, and a similar pattern of failure.
So it is useful to step back a bit and consider how the lack of scientific synthesis, and the role of failure, have been understood.
Multiple Failures and the Language of Synthesis
Herbert Simon’s 1946 The Proverbs of Administration notes that there are proverbs that are widely accepted but exactly opposed to each other: “Look before you leap” and “He who hesitates is lost.” In organizational theory, he notes, different accepted aphorisms lead to similarly contradictory conclusions. This was a criticism of much of the pre-scientific wisdom, but it applies to the recommendations of many of the later theories as well.
A later wave of criticism went beyond this basic critique. In 1956, a decade after Herbert Simon’s critique, the journal Administrative Science Quarterly launched. Almost 60 years later, the journal’s editor wrote: “ASQ’s aim was not to provide practical advice to managers but to build an interdisciplinary science of administration that both drew on and contributed to the broader enterprise of social science.” And yet, even now, “it is difficult to point to many areas of settled science when it comes to organizations.” He argues, much to my approval, that the problem retarding progress is a misalignment of metrics, or as I termed this dynamic, underspecified goals. There has been progress, but the scientific elephant remains elusive.
The failure of the field as a science is a problem for researchers, but practitioners need to move forward anyway. So how can we manage our understanding in a pre-scientific field? The trick is to exploit the failures of multiple models together.
Obviously the best method is to achieve the grand insights that would coalesce pre-scientific views into a coherent predictive model. Unfortunately, despite standing on the shoulders of giants, I’m much too short to see a way around the obstacles, but I do see ways to peek through to the other side. And the way I want to talk about it is inextricably tied to language. Here, the obvious shoulders on which to stand are those of Gareth Morgan, who surveys a set of eight different conceptual metaphors with which to view corporations in his classic Images of Organization. As Venkat notes, the book is helpful both for understanding corporations and for understanding how people discuss and understand corporations, because “these are not really 8 perspectives, but 8 languages.”
As the political scientist Philip Tetlock notes, using any single model is demonstrably worse than using many. But the problem is more complex than foxes versus hedgehogs, because those aren’t the only options. As Venkat puts it, hedgehogs have strong views, but ideally are swayed by evidence – the views are weakly held. Strongly holding a single view is being what he calls a cactus. Systems that dictate decisions based on simplified metrics display exactly this failure mode, as I laid out exhaustively in my earlier posts. Of course, using an unchanging set of metaphors is the informal equivalent of this failure mode, and it’s appropriate to approach the topic of organizational dynamics using a different language than that of metrics and models.
As Martin Marty said about religion; “If you only know one religion, you don’t know any.” In a slightly different vein, as the old joke goes, “If a person speaks three languages, they are trilingual, if they speak two, they are bilingual, but what do you call someone who only speaks one language?” “American.” Strict adherents of most religions tend to find comparative religion blasphemous, and American insistence on English smacks of the same type of cultural puritanism. As another old American joke puts it, “There’s no need for foreign languages – if the English in the King James Bible was good enough for St. Paul, why learn any others?” But the reason these jokes exist speaks to a deeper point; lacking comparative understanding is perfectly okay if you possess the sole and complete truth.
This is the equivalent to the internal model principle I’ve discussed; if your model is exactly correct, you only need one. If your language is the only one anyone needs, foreign languages are a waste of time. And if your religion was ordained by god, any deviance or variation is not just worthless, but heresy. But if you think you have such a model of organizations, despite the deep reasons I have laid out for why one can’t exist, you should have better things to do with it than argue the point with me.
Jokes and organizations are like beliefs: if you’ve fully explained them, you’ve killed them. And while we can learn to be multilingual, humorless, and polytheistic, that doesn’t solve the problems with rigidly applying a single paradigm. And for an organization, rigid application of a single paradigm is deadly.
If an institution behaves exactly as incentives suggest it should, it is dying. Institutions are alive to the degree they are unpredictable.
— vgr (@vgr) December 22, 2016
Multilingual and Muddled, or Models and Mosquitoes
Understanding a system must occur on many levels, simultaneously. Speaking multiple languages can be helpful for untangling the umwelt of any particular oeuvre, or allow the speaker to grok the gestalt — rarely. Most of the time, it leads to a muddled mess. As you can see.
So how do we selectively apply the insights of our multiple incorrect models without devolving into an incoherent mess? A concrete example may help.
A mosquito is a component of an ecosystem, with behavior shaped by evolutionary and environmental pressures. At the same time, it is an organism with dietary needs dictated by its digestive system. Of course, it is also a physical system that obeys physical and chemical laws such as conservation of energy. A single aspect of the mosquito’s life, such as its diet, is not shaped by one factor or the other. Instead it is shaped by all of them in different ways.
The mental models we use, however, rarely combine these different classes and levels of understanding. The ecologist, biologist, and chemist read different journals, use different languages, and are only roughly familiar with the fields of the others. This works well when they are advancing their field individually, single-mindedly following their incentives to publish or perish, but it fails as soon as a cross-cutting question is asked.
The relationship between temperature, rainfall, ecology, and the prevalence of a mosquito-borne disease can involve models at each of these three levels. Control of this sort of disease requires an understanding of many different aspects of the virus and its vector. Female mosquitoes bite people or animals in order to breed; after such a “blood-meal”, they can find a body of still water and lay their eggs. For Zika to spread, a (female) mosquito must first feed on a person with an infection for her blood meal, and then, after the virus has had time to multiply inside her, feed on another person, transmitting the disease.
The hatching of different species of mosquitoes occurs when eggs previously laid near the water line are re-submerged. Different species lay them in different places, and then compete over resources. If frogs and fish live in the bodies of water, these predators may feed off of larvae after they hatch. If conditions are much more favorable for species that don’t carry the disease, fewer disease carriers will exist. If temperatures are low, the mosquitoes mature slowly, and the females rarely live to have multiple blood-meals, meaning the disease cannot be spread.
A biologist might come to the conclusion that we need to control the breeding grounds, and advocate removing bromeliads that provide the locations for egg laying. A meteorologist instead focuses on where temperatures would lead to outbreaks. An ecologist could advocate introducing more predator or competitor species. A geneticist might advocate genetic engineering to stop the mosquitoes from breeding. An entomologist could recommend potent insect poisons, or suggest when it is safe or unsafe to venture outside. A complete model of all of these factors is unlikely to be feasible, but all of the models can supply parts of a solution. And by considering and applying different approaches, you still don’t arrive at a perfect strategy, but with ongoing work from many angles, you can keep Zika out of Florida.
Diversity – Taking the Good with(out) the Bad
Given my insistence on switching languages and switching models, it should come as no surprise that I’m going to advocate diversity. But diversity isn’t monolithic, and it’s important to differentiate between what I’ll call inclusive diversity versus exclusive diversity. As an example of inclusive diversity, we want a variety of approaches when generating ideas. If we can eliminate insect breeding grounds by asking residents not to leave stagnant water in their yards, we don’t need genetic engineering or climate control. The gains from diversity are much more general than this single example, of course. In a software-oriented example, when a product is aimed exclusively towards people who are like the team building it, more inclusive diversity means a larger potential audience. Similarly, if there’s a way around an intractable coding problem via tweaking the UI, having the UI designer in the scrum can be critical. Being more inclusive creates gains in diversity, but it also has costs.
Exclusive diversity, on the other hand, is accompanied by privileged viewpoints and constraints, or adding additional goals and requirements. Needing to accommodate additional user types is a constraint, while being able to accommodate them is inclusive diversity. Contempt for others is exclusionary; it’s a constraint, and has real costs. So is ignorance.
Blind acceptance of diverse approaches isn’t useful either; we need to be selective about how we utilize diversity in our models. If we have many models providing constraints, showing different ways that a system can fail, we run the risk of eliminating possibilities and approaches instead of finding them.
Getting back to the point of synthesis of different incorrect approaches, balancing different models for organizational theory isn’t about trying to blindly apply multiple conflicting models, and being bound by the constraints of each. It’s about making sure all the approaches have a seat at the table. Esperanto was a disaster because it tried to synthesize instead of allowing for multiple maps. It created a new linguistic map of the world that didn’t particularly lend new insight, but mirrored the constraints of other languages.
Novices at a language frequently import the structure of their native tongue incorrectly into their speech. Once the new language is learned, however, it provides a new map of the world, one that can be integrated with the old one. Which is correct? Neither — all language is approximate, and maps are always only approximations.
Attempts to systematize organizational theory led to attempts to build single unified models, in bouts of physics envy. But this approach is backwards; before you can begin to build a single useful theory, a willingness to change your mind in the face of evidence is the most important thing a scientific mindset can provide. The fox doesn’t know which is the right model in each situation, but by evaluating them all, he can notice when predictions are shared, and when the models diverge, or are unclear. That doesn’t always make the fox’s predictions correct, but it can still keep him from holding on too tightly to the wrong answer.
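The fox’s procedure can be made concrete. The models below are caricatures I invented purely for illustration (nothing here is an actual formalization of Taylorism, Human Relations, or systems thinking); the point is only the mechanic of polling several incompatible models and treating their disagreement as information in itself:

```python
def poll_models(models, situation):
    """Ask each (imperfect) model for a prediction; report consensus or divergence."""
    predictions = {name: model(situation) for name, model in models.items()}
    if len(set(predictions.values())) == 1:
        # All models agree: a prediction worth (provisionally) trusting.
        return ("consensus", predictions.popitem()[1])
    # The models disagree: hold any conclusion loosely and investigate.
    return ("divergence", predictions)

# Three toy "languages" answering whether adding a management layer helps:
models = {
    "taylorist": lambda s: s["workers"] > 50,           # structure always helps at scale
    "human_relations": lambda s: s["morale"] < 0.5,     # only if morale is failing
    "systems": lambda s: s["bottleneck"] == "coordination",
}

status, result = poll_models(models, {"workers": 80, "morale": 0.8, "bottleneck": "supply"})
# Here the models diverge -- which is exactly the signal the fox is looking for.
```

The design choice is that divergence is not an error state to be resolved by picking a winner; it is the output that tells you the situation is one where no single language suffices.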