A Meta-Model for Thinking about Organizational Change - Part II
Ganesh Sankaran
Supply chain management consultant specializing in SAP's Integrated Business Planning suite, book author
This is the second in a three-part series on a useful framework (a meta-model, as I called it in the first part) for viewing organizational transformations. I will be giving my views as someone with a management consulting background, which I believe is a good vantage point, as the vast majority of complex transformations involve consultants tasked with guiding organizations on their journey from where they are to where they want to be.
I'd encourage you to read part one, but if you are hard-pressed for time, here is a tl;dr version for you.
The main argument I made is that if you were to rewind the (imaginary) tape on most failed transformations, you would see different stakeholders taking stock of the reality they inhabit in very different ways, setting them on a slippery slope towards failure. Perceptions are coloured by our sense-making apparatuses, commonly called "mental models". As there are numerous such models at play, it is not possible to sync them up, nor is it desirable, as viewpoint diversity matters. However, is there a way to judge the suitability of the models we employ, or the lenses we use, for the challenge (of effecting a transformation) at hand? Is there a meta-model of sorts that serves as a sieve, constructed from common guiding principles, to winnow out models unequal to the task? I argued that pace-layers, a framework that comes from architecture (the brick-and-mortar kind), fits the bill. I further argued that two aspects make it an especially useful frame to adopt. First, it acknowledges the consequential role of time: it sees different aspects of a complex system, such as an organization, as changing at different rates and requiring approaches attuned to their "natural" frequency. Second, the model takes a holistic, systems view of problem-solving and recognizes that accurately predicting the future is a fool's errand; instead of optimizing for current needs, you'd be better off making sufficient room for future ones. As the famous systems theorist Russell L. Ackoff noted: "to the extent we can predict accurately the behaviour of a system of which we are a part, we cannot prepare effectively for it; and to the extent that we can prepare effectively, we cannot predict accurately what we are preparing for." A pragmatic strategy, therefore, is to exercise humility and build systems that are nimble-footed and able to adapt to change in short order.
I have used the term "mental model" several times and have talked about a meta-model that somehow gives one the ability to judge the value of models for the task at hand. Although I have alluded to what they are, before we go any further, some examples and definitions will help clarify their role and relevance. The following section provides a primer. It is followed by a short historical tour of change from the perspective of what or who determines the pace; this will motivate the discussion in the third and final part of the series, which explains why the ideas embodied in the pace-layers model are of particular relevance to our current environment, one characterized by tumultuous, disruptive change.
The subjective nature of mental models is vividly illustrated by a story Ackoff narrates in his book "Re-Creating the Corporation". At a large office building, the occupants are frustrated by the waiting times for the elevators. A consulting firm is promptly hired and proposes various options: adding more elevators, upgrading the computerized controls to improve utilization, and so on. However, none of the alternatives is deemed implementation-worthy. Much hand-wringing ensues, and the consulting firm gives up. In its stead steps a psychologist, who "sees" the problem as one of boredom and suggests installing lobby mirrors so that people who are waiting can amuse themselves. The problem, in Ackoff's words, "dissolves" (apparently time flies when you are admiring yourself). The story illustrates that merely using a different lens can help one see reality in an entirely new light. In other words, the psychologist's mental models, shaped by his discipline, allowed him to cast a wider net that included the users' psyche, whereas the consulting firm took a narrower view with its engineering-focused models.
Not only are mental models subjective, but they are also pervasive – sometimes invoked without our express consent. Consider the famous Kanizsa triangle. It was long believed that the imaginary triangle we see is the product of our higher cognitive functions. That made sense, as such an ability would give us an evolutionary leg-up: reasoning into existence a potentially non-existent predator beats ignoring a real one. However, researchers have since discovered that these illusory contours are "drawn" by our primary visual cortex, which sits much lower in the neuronal hierarchy, and are not generated through cognitive reasoning. This shows that even if we desire to see reality shorn of any distortions, we are wired not to.
And things aren't looking good for people still harbouring hopes of accessing an unfiltered reality. More recent research into a type of visual illusion known as the "brightness contrast illusion" suggests that the models that interpret reality for us are even more deeply ingrained. In a typical instance of this illusion, the brightness of two dots – a mixture of surface darkness and illumination – has to be estimated by our visual system from a single variable, namely the received energy (the two dots in the illustration below are the same colour). It was previously thought that various environmental cues, such as the play of light and shadows, are processed by our analytical machinery to make sense of what we are seeing. But the recent MIT-led research indicates that our visual system is primed from birth to carry out brightness estimation, as most of the processing happens at the retinal level (i.e., it is not a learned function).
The preceding discussion shows that, like many things in life, mental models are a mix of nature (innate) and nurture (learned and subjective). This fact places a further burden on the models we consciously acquire, in the following sense: most innate models are a product of our evolutionary past and gave us shortcuts to improve our fitness. But what counts as fitness in the wild savannah is very different from what makes us tick in mild conference rooms and boardrooms. In other words, shortcuts that produce irrational behaviours might have aided our survival, but in today's world, fitness is more a function of rationality and sound judgment. The problem, therefore, should not be framed in terms of shedding models so as to process raw inputs and get at some notion of real truth (which is all but illusory). It should rather be about acquiring "good" models, plus the wisdom to assemble the right ones for a given problem context, so that we can stitch together a terrain that resembles reality as closely as possible.
The first step in selecting good mental models is acknowledging that problem-solving, their primary purpose, does not care much for disciplinary boundaries. Although the elevator example above benefited from a "psychological" perspective, most situations call for an ensemble approach, where multiple models are brought to bear from across a wide variety of disciplines. As for the actual process of acquiring a repository of good models, Charlie Munger, the famous American investor and vice-chairman of Berkshire Hathaway, suggests looking for ones that do the most work per unit (of effort expended by the brain). This underscores an intuitive fact: not all models are created equal. They all help separate signal from noise, but the truly exceptional ones achieve a very high signal-to-noise ratio.
We have been talking about mental models in the abstract. To get more concrete, I present below an example of a model a consultant might find useful in an organizational context.
The model is due to Marshall L. Fisher, who proposed it in a seminal Harvard Business Review article. He argued that a critical point of leverage for decision making for organizations along a supply chain is whether a product is functional (think table salt) or innovative (think smartphones). A large number of demand and supply characteristics tend to cluster around this distinction, allowing design choices to be evaluated for their fitness to the type of product (or the degree of innovation) being dealt with. For example, a functional product, with its stable and predictable demand, is more amenable to a long-term supplier contract, whereas, for an innovative product, a flexible agreement might make more sense.
Fisher's model fits Munger's idea of a high-leverage model that yields a lot of bang for your buck. Fisher's key insight was that there are broadly two types of costs – physical costs (turning raw materials into finished goods) and market mediation costs (matching supply and demand by navigating the uncertainties around both) – and that a supply chain's primary focus should be driven by the nature of the product. Penny-pinching does not help when a potential shortage of your super-innovative gizmo might cost you market share. Where you might previously have seen an undifferentiated group of products, with this insight the products cleave into two classes with markedly different priorities.
Fisher's example may be variously termed a concept, a framework, or just a model, perhaps justifiably so, but no matter the term, it shares the core trait of a "good" mental model: it carries a kernel of deep truth that helps decomplexify the dizzying amount of information we receive, paving the way for effective decision making.
At the start of this article, I summarized the two essential characteristics (discussed in the first part) that make pace-layers well suited as a meta-model for "recruiting" from a pool of candidate mental models in an organizational change context. But I haven't yet justified the use of the "meta" tag.
(So meta - pictured above, a lake inside the Taal volcano in the Philippines, which is on an island in a lake on an island! Hat tip: Liv Boeree's YouTube video.)
I use "meta" to mean self-referential. As with thinking about thinking, or any other pursuit that involves recursion or nestedness, meta-anything requires one to step back and gain a broader perspective. It is in this sense that I think pace-layering widens the scope of vision and identifies essential aspects of the problem that make selecting the right tools for the job possible. The distinction to be made here is between a reductionist, analytical view, where one risks missing the forest for the trees, and a more holistic, synthetic view that yields answers as to the right tools for the job. Ackoff, whom we've encountered twice before, provides a wonderful example that drives home this type of synthetic thinking. He notes that if one were asked why early American cars were built for six people, no amount of analysis of the components – disassembly and the like – would produce an answer. The answer lies in the containing system, society, where at the time the average American family had around six members; arriving at it requires thinking about the problem in a larger context, i.e., going meta.
I had promised at the start a potted history of change (addressing who or what determines "pace"), which presents a nice opportunity to step into the meta-verse and ponder how change has changed over time. Since the challenge is to do so in a few short paragraphs, we will look at the past using the automotive industry as the point of reference – not a bad choice, as it was a hub of great activity during the second industrial revolution at the turn of the previous century.
The objective here is to trace the course of change, albeit through a tiny sliver of it, and to comment on things that have likely changed for good and are being replaced by new trends. It is also to call attention to some features that tend to age well. The hope is that this discussion will foster a better appreciation for the pace-layers model, which is not swayed too much by the fads of the moment – it does not discount time heavily and has great sympathy for the future "you". It also embraces principles that make sense (sometimes only so) in the fullness of time. As we go through this short historical tour, I will plant some flags pointing out aspects that will be relevant to the discussion of pace layers in the last part.
Let's start our little tour with a snapshot of the "now" for comparison. A typical characterization one hears of change in the current environment is "turbulence". Its popularity is perhaps no coincidence: Joseph Schumpeter, the famous economist of the early twentieth century, coined the term "creative destruction", also known as Schumpeter's "gale". It is just that in the current climate, the gale force seems to be at an all-time high (aided by recent technological innovations). Some of the telltale signs are fierce competition leading to highly concentrated industries, frequent leapfrogging among economic agents, winner-takes-all effects, and an ever-narrowing window to capitalize on innovations. One of the major contributing factors to all of these trends was, in fact, implicated in Schumpeter's theory – he named "consumers' goods" as one of the fundamental impulses behind creative destruction. Its most conspicuous manifestation in our lives is perhaps the proliferation of product varieties (and the attendant customer choices).
However, it is not as if we have suddenly developed a taste for variety; there seems to be an evolutionary basis for it. Variety-seeking humans who ate many different types of foods in small portions minimized toxicity and maximized nutrition. What is different now is the fortunate confluence of technology and our greedy palate (and not just for foods, of course). To summarize by appropriating a definition of a business model: what we see is a coming together of what technology enables and what the market desires. Consider AI and big data, the two most prominent technology artefacts of the day. They provide a very high-resolution picture of individuals' needs, and organizations are tripping over themselves to cater to those needs.
So, clearly, in our current march of progress, consumers are the pace-setters.
It wasn't always this way.
In the early twentieth century, Henry Ford was one of the most prominent figures of the second industrial revolution. At the height of his influence, he was the doyen of the automotive industry. As opposed to now, it was not the consumers who held the reins in Ford's heyday – he was the pace-setter. (The account below is based on the book "The Machine That Changed The World" about the origins of lean manufacturing.)
Around this time, Ford Motor Company, helmed by Henry Ford, revolutionized manufacturing on multiple fronts: interchangeable parts, the moving assembly line, and an obsessive eye for standardization (with specialized jigs and fixtures) and operational efficiency, to name a few. These innovations propelled the company far ahead of its competition and generated scarcity power, thanks primarily to its unit cost dynamics. They afforded the company a cost leadership position so significant that various quality problems (frequent breakdowns, shoddy paint jobs) did not seem to matter; customers tolerated these slights and more in exchange for an unbeatable price. Ford's approach was unapologetically inside-out, so much so that he once quipped that his customers could have any colour as long as it was black. It was his relentless focus on perfecting "one product" (the Model T being the most famous) and his control of the entire supply chain through massive vertical integration that gave him and his company what seemed like an unassailable lead.
This strategy worked until it didn't.
Alfred P. Sloan came on the scene as the president of General Motors, brought in by Pierre du Pont, who had just taken over the reins from the mercurial William C. Durant. Durant had a knack for acquisitions and had accumulated an impressive array of companies under the GM marquee, but he had to go because he had trouble assimilating them. Sloan was du Pont's trump card. His style could be summarized with a quip all his own: "a car for every purse and purpose." He oversaw the turnaround of GM, which also signalled an industry-wide shift – from Ford's push to Sloan's, shall we say, soft pull (where customers seemed to have more agency).
Granted, he inherited a company whose acquisition spree had left it with substantial product overlap that had to be managed and could not be wished away, but manage he did. If Ford was the master of one M – Manufacturing (mass production) – Sloan mastered at least two more – Management and Marketing – which inevitably led to a fourth: Money. The other favourable circumstance was that gas guzzlers were on the way out, so smaller cars were in demand. This slowly, but surely, resulted in a transition towards an outside-in approach. That transition eventually paved the way for the next major manufacturing revolution, lean (pioneered by Toyota), which at its heart is about privileging customers' preferences over supply-side exigencies. However, the change from GM to Toyota was a smoother transition than the one from Ford to GM, which was a clear break from the one-size-fits-all (or thereabouts) past, the likes of which we will most likely never see again.
What are some of the lessons we can draw about the nature of change from the above account?
First, Ford's operation was, shall we say, introverted. His singular focus on eking every ounce of efficiency out of his facilities caused him to miss the winds of change (precipitated by the energy crisis) that were about to sweep the industry. It is noteworthy that GM and Toyota were progressively more extroverted in a couple of ways: internally, they paid more attention to cross-functional integration (rather than obsessing over manufacturing alone); externally, customers and the broader environment were beginning to factor more into decisions, especially in Toyota's case.
Second, Ford Motor Company circa 1920 is a good example of what Stewart Brand, the pioneer of the pace-layers idea, would call over-specification. Ford designed his facilities around one product (although the Model T had nine body styles, they all rode on the same chassis). Visitors to the iconic Highland Park facility in Detroit noted that it was like one vast machine optimized to manufacture one kind of product. As testimony to the efficiency, Ford brought the task cycle time (the chunk of time that elapses before a worker starts repeating the same operations) down from 8.56 hours to 1.19 minutes. This over-specification, or overfitting, left Ford vulnerable and unable to pivot fast enough when change came calling.
Third, Ford's relationship with his workers illustrates a type of thinking that is inimical to learning and adaptability. Not only were parts interchangeable at Ford; workers were too. The tasks were so simple that new workers could be trained in a matter of minutes. Given the efficiency gap between Ford and the rest, the company could afford to pay high wages, and it was in its vested interest to do so, as this would keep the unions out – or so it thought. In systems lingo, this type of reasoning is called "event-driven" thinking: a decision or policy is considered only in terms of action and outcome, while the feedback or pushback from the system (the real world) is barely considered. Double-loop thinking, which supports learning, would have anticipated that higher wages would lead workers to stay on the job much longer than they otherwise would, making the monotony of their jobs ever less bearable, worsening worker-management relations, and triggering all the higher-order effects this entails (none of them good).
There is also a broader lesson to be drawn from the change of fortunes from Ford to GM to Toyota. Ford's one-size-fits-all approach was viable only so long as the desire for variety paled in comparison to the price advantage conferred by its many technological innovations. But as time progressed and the technologies behind mass production diffused widely, competitors adopted them and rose to meet the always-present, if latent, desire for variety. The shift was most pronounced in Toyota's case, which enthusiastically embraced the philosophy of "one-piece flow", whereby supply-economies-of-scale considerations are subordinated to what customers value, which gets top billing in decision making.
Fast-forward to today's digital era, and for many digital-native companies, supply economies of scale have given way altogether to demand economies of scale (characterized by network effects). This dynamic was the subject of the popular book "Platform Revolution". The authors cite Metcalfe's law, which holds that a network's value grows with the square of the number of participating users; with the balance of power shifted towards demand economies of scale, the network or platform draws its power from its user base (the more, the merrier). Innovations in digital and the concomitant growth in data have allowed companies, especially those born digital, to de-emphasize physical assets and instead play the role of matchmakers empowered by data assets – matchmakers in the sense that those offering a ride are matched with those who need one, those needing a place to crash are matched with those who have one, and so on. An interesting offshoot of this platform phenomenon is that digital sensibilities have now permeated all realms of the economy, leading traditional companies to play catch-up and adopt some of the strategies of the digital natives.
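The arithmetic behind Metcalfe's law is worth making concrete: if a network's value is taken (roughly) as proportional to the number of possible pairwise connections among its users, then doubling the user base roughly quadruples that value, which is why platforms chase users rather than physical assets. A minimal sketch of the idea (the function name and the pairwise-connection proxy are my own, not from "Platform Revolution"):

```python
def metcalfe_value(n: int) -> int:
    """Number of possible pairwise connections in a network of n users,
    the quantity Metcalfe's law treats as a rough proxy for network value."""
    return n * (n - 1) // 2

# Value grows quadratically, not linearly, with the user count:
for users in (10, 20, 40):
    print(f"{users:>3} users -> {metcalfe_value(users):>4} possible connections")
# 10 users yield 45 connections; doubling to 20 yields 190 (~4x),
# and doubling again to 40 yields 780 (~4x again).
```

The superlinear growth is the whole point: each new user adds value not just for themselves but for every existing user they could connect with.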
Thus, with the digital revolution, we seem to have come full circle: from customers having to choose from what was on offer (the Ford days) to customers asserting their will so forcefully that a lot size of one is not merely an "ideal" but a distinct reality.
Are there any patterns to be discerned from the digital revolution, relevant to thinking about "change" in the longer term, or is it too early to tell? It has been twelve years since Chris Anderson, in a famous Wired article, announced the arrival of big data (and algorithms) on the big stage. So, we have been here for a while and can safely posit some aspects that are here to stay.
For one, the growth of digital platforms has drastically reduced friction for doing business across organizational and geographical boundaries. Ford, who was famously controlling by disposition, relied on what Alfred Chandler, a Harvard Business School professor, called the "visible hand" of the organization to wrest control of his supply chain through vertical integration. But reduced friction today has meant that the "invisible hand" of the market well and truly has the upper hand. This also implies that the reduction in transaction costs between organizations via the digital medium is resulting in ever-smaller economic agents. And it is not difficult to draw a line from this to a significant increase in the number of possible configurations for agents to come together and create and capture economic value.
From the perspective of organizational transformations, this means the myth of "best practice" must be put to rest. The sheer number of unique ways in which value can be created makes canned solutions unattractive; coming up with true solutions requires a deep understanding of the situation. This is one reason why experts, taking note of a spate of failed digital transformations (of the estimated $1.3 trillion spent in 2018, $900 billion was apparently wasted), have argued for more reliance on organizational insiders, who understand the value proposition better than outside consultants.
Furthermore, with great possibilities (value configurations) comes great uncertainty. In this climate, the best way to predict the future is to create it – in other words, to innovate. This means having the permission to, as Mark Zuckerberg liked to say, "move fast and break things". The authors of the research cited above call for cultivating a Silicon Valley start-up culture in which experiments are actively encouraged and where you can truly channel your inner Bayesian, giving yourself a constant stream of opportunities to update your beliefs.
From the above discussion, one might get the impression that the future consists exclusively of fast-moving parts. It certainly does not. Just as data does not make models obsolete, a "just do it" ethic does not make strategy obsolete. It is important to bear this in mind when thinking about the process of change over time. For example, from a technology perspective, innovating digitized products requires a strong digital backbone (ERP and the like), something a surprisingly small number of companies – a mere 28%, according to one MIT study – have managed to secure.
In probably one of the best commencement speeches of all time, the late David Foster Wallace narrated the following parable:
There are these two young fish swimming along and they happen to meet an older fish swimming the other way, who nods at them and says “Morning, boys. How’s the water?” And the two young fish swim on for a bit, and then eventually one of them looks over at the other and goes “What the hell is water?”
The point he was trying to make was that when you are deeply immersed in your reality, you may not see it for what it is. It is also a great story for bringing home the idea of perspective-taking in the context of change: in the midst of turbulence, buffeted by it, it is understandably difficult to notice what is or isn't important. Hopefully, taking a small step back has provided some insights that we will take with us into the discussion of pace-layers in the final part. The discussions so far should also justify recognizing pace-layers as a meta-model that comes preloaded with a perspective-taking module – guardrails that coax us into thinking in the right way about the "process" of change, so that we can choose the right thinking tools from our meticulously curated toolbox.