Optimizing Organizational Performance in an Uncertain World - Part 1 - The Limits of Prediction

How can technology enhance the performance of organizations? How does one characterize organizational performance in a world where our ability to predict is highly constrained? These are two general questions that have been on my mind for some time now. In an effort to clarify my thoughts on the matter and broaden the discussion, I thought I would summarize my view to date, providing the first of what I expect to be a series of blog posts on this general topic. In this post, we begin with some reflection on the limits of prediction. A thorough understanding of these limits will aid us in identifying meaningful measures of organizational performance and approaches to optimizing them.

My perspective has been influenced significantly by the writings of Nassim Taleb and Duncan Watts. “The Black Swan” was my initial exposure to Taleb’s thoughts on the limits of prediction. Taleb’s Edge essay “The Fourth Quadrant: A Map of the Limits of Statistics” is an excellent continuation of this thread. Duncan Watts’ latest book “Everything is Obvious: Once You Know the Answer” takes a critical look at common sense reasoning to explore the pitfalls we encounter when attempting to understand complex phenomena. Taleb and Watts share a common interest in the behavioral predispositions that lead us to repeated surprise. Watts delves deeper into these issues and considers the options available for planning given the limits of prediction. Not surprisingly, there are no obvious solutions, and many questions remain. Yet a significant first step is simply understanding the limits of prediction so as to avoid being fooled by randomness.

One does not have to look far within the research community and elsewhere to see overconfidence in mathematical and statistical models. I have watched with increasing concern the development and application of models of complex social phenomena. The field of social network analysis has seen an explosion of activity in the last five years as social media data has transformed the study of large-scale social systems. Many scientists and mathematicians have entered the space with the objective of uncovering general patterns in social systems that can be used for prediction. Some have made unreasonable claims about the range of predictions one could make, leaving the impression that present limitations will be resolved with more data and computation. The danger comes when such models are used as guides for future actions. I have pressed some program managers in the past on the implications of their approaches. Some remain confident in the validity of their methods. Others consider a marginal methodology better than none at all when forced to make decisions in high-risk scenarios. Far stronger doses of skepticism are still needed, in my opinion.

To appreciate the futility of predicting the future state of complex systems such as social networks, it is instructive to consider the difficulties we face on prediction tasks that appear quite tenable. Watts provides a telling example in the context of the ultimatum game. In the ultimatum game, two players interact to decide how to split a sum of money. One player proposes a split and the other accepts or rejects the offer. If the offer is accepted, both players receive their proposed share of the money. If the offer is rejected, neither player receives anything. For such a game, it seems reasonable to expect that the player making the proposal will offer a split that may be in his or her favor but not so egregious as to seem unfair to the other player. Yet what is viewed as reasonable and fair turns out to vary widely across cultures, leading to some surprising outcomes depending on your perspective. In one study, the game was played in 15 small-scale, preindustrial societies across five continents, and the experiments showed wide variation in outcomes. At one extreme, even very low offers were readily accepted without resentment. In other cases, “hyperfair” offers, where the proposer would keep only a minority fraction of the money, were rejected as frequently as unfair offers.
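
To make the mechanics concrete, here is a minimal sketch of the game’s payoff logic in Python. The acceptance threshold is a hypothetical parameter of my own, standing in for the culturally variable notion of what counts as a fair offer; it is not drawn from the study itself.

```python
# Minimal sketch of the ultimatum game's payoff rules.
# `min_acceptable` is a hypothetical stand-in for the responder's
# culturally variable notion of a fair offer.

def play_ultimatum(total, offer, min_acceptable):
    """Proposer offers `offer` out of `total`; responder accepts or rejects.
    Returns (proposer_payoff, responder_payoff)."""
    if offer >= min_acceptable:
        return total - offer, offer  # accepted: both receive their share
    return 0, 0                      # rejected: neither player receives money

# The same 20/80 split succeeds or fails depending entirely on the
# responder's threshold -- the parameter that varied across societies.
print(play_ultimatum(100, offer=20, min_acceptable=10))  # -> (80, 20)
print(play_ultimatum(100, offer=20, min_acceptable=40))  # -> (0, 0)
```

Notably, this simple threshold model cannot even express the rejection of hyperfair offers; the very form of the model embeds a cultural assumption about how offers are judged.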

What happened here? Watts explains:

“As it turns out, the Au and Gnau tribes had long-established customs of gift exchange, according to which receiving a gift obligates the receiver to reciprocate at some point in the future. Because there was no equivalent of the ultimatum game in the Au or Gnau societies, they simply “mapped” the unfamiliar interaction onto the most similar social exchange they could think of—which happened to be gift exchange—and responded accordingly. Thus what might have seemed like free money to a Western participant looked to an Au or Gnau participant very much like an unwanted obligation. The Machiguenga, by contrast, live in a society in which the only relationship bonds that carry any expectation of loyalty are with immediate family members. When playing the ultimatum game with a stranger, therefore, Machiguenga participants—again mapping the unfamiliar onto the familiar—saw little obligation to make fair offers, and experienced very little of the resentment that would well up in a Western player upon being presented with a split that was patently unequal. To them, even low offers were seen as a good deal.”

This example highlights that common sense knowledge has a social context. To acquire that knowledge requires one to participate in the society. Without such an experience base, opportunities abound for misunderstanding and surprise.

Consider how often we apply our common sense knowledge to make sense of large groups of people distant from our daily lives. As we digest the world news, we can’t help but shape narratives based on our limited perspective. From these narratives, we are then compelled to derive causal explanations, thus setting ourselves up for the next surprise.

Another illuminating example comes from a 2010 study by Goel, Mason, and Watts that examined real and perceived attitude agreement among Facebook friends. The results made clear that participants were very bad at identifying when their friends disagreed with them, even in the context of close friendships. Anecdotal reports from the participants showed surprise over how their friends had perceived them. The authors conducted additional analysis to understand how participants responded in cases where they lacked specific information about a friend’s beliefs. It appears that when in doubt, participants leveraged stereotypes to make inferences about their friend’s beliefs. Without realizing it, we naturally fill in details when information is unavailable, leading to misrepresentation and potential overconfidence.

If we struggle to predict attributes of our friends, one might expect we should fare better with predictions about ourselves. Social psychologists Dan Gilbert and Timothy Wilson have studied the topic of affective forecasting, which concerns how people think about the future and how they believe they will react emotionally to events that might occur in their lives. On the whole, they claim we clearly have some ability to make such forecasts, but that we are prone to certain systematic errors. One of the most common is impact bias, where we overestimate the emotional impact of a potential event on our lives. For positive events, we anticipate the gain in happiness will be more pronounced than it proves to be. For negative events, we see a more significant emotional burden in our future than we actually experience.

In many ways, we are not well suited for reasoning about the complexities of the world. We are all impacted by a range of cognitive biases that sway our judgement. Some of the cognitive biases laid out by Watts, Taleb, and others include:

- Confirmation bias: favoring evidence that supports what we already believe
- The narrative fallacy: weaving sparse facts into tidy causal stories
- Hindsight bias: perceiving past events as having been more predictable than they were
- The availability heuristic: judging likelihood by how easily examples come to mind
- Anchoring: letting an arbitrary initial reference point skew subsequent estimates
- Survivorship bias: reasoning only from the cases that happen to be visible to us

At the same time, we face fundamental constraints on the context available to us for reasoning about the world.

To delve into the implications of the coupling of these biases and constraints, we must begin by clarifying what is meant by a complex system. My working definition is a system with a multitude of input and state variables that often interact in complex, nonlinear ways. Even when privy to the complete state of such a system, the future output trajectory may be highly uncertain due to the randomness inherent in the input signals driving the system. Complex systems often exhibit behavior where even minor perturbations lead to significant changes in the state and output variables. So even in the ideal (unrealizable) scenario where the system is completely transparent to us, our ability to forecast its future state is very limited due to the accumulation of uncertainty.
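
As a toy illustration (my own choice, not a model named in the sources above), consider the logistic map, a one-line nonlinear system: perturbing the sixth decimal place of the initial state quickly produces a trajectory that bears no resemblance to the original.

```python
# Toy illustration of sensitivity to perturbations: the logistic map,
# x_{t+1} = r * x_t * (1 - x_t), a simple nonlinear system chosen
# purely for illustration.

def trajectory(x0, r=3.9, steps=51):
    xs = [x0]
    for _ in range(steps - 1):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = trajectory(0.500000)   # nominal initial state
b = trajectory(0.500001)   # same state, perturbed by one part in a million

for t in [0, 10, 25, 50]:
    print(f"t={t:2d}  divergence={abs(a[t] - b[t]):.6f}")
```

Here we know the system's dynamics and complete state exactly, and the forecast still degrades; real complex systems grant us neither.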

Compare this now to what we attempt to accomplish when we reason about the world. Whether constructing a mental or algorithmic model of a complex system, we are attempting to identify a representation of the underlying system that allows us to make inferences about its future state. Putting aside the fundamental uncertainty the system presents even in the ideal scenario, we now face significant observational limitations that only magnify that uncertainty. We are never certain we have uncovered all of the relevant input variables. Moreover, the opportunity to establish the relevance of a given variable comes only after examples are available that clarify the underlying relationships, and only if we happened to measure the appropriate input and state variables in the first place. When relevance is clear only in hindsight, it is impossible to ensure that one’s observational resources are appropriately focused to uncover those relationships.
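
A small sketch of this predicament, with hypothetical numbers and variable names of my own: if the system is driven by two inputs but we only thought to measure one, the fitted model can look perfectly reasonable while the unmeasured input's influence masquerades as irreducible noise.

```python
# Sketch of the omitted-variable problem: the data are generated by two
# inputs, but we model them using only the one we happened to measure.
import random

random.seed(1)
n = 10_000
x = [random.gauss(0, 1) for _ in range(n)]   # the input we measured
z = [random.gauss(0, 1) for _ in range(n)]   # a relevant input we never measured
y = [xi + 2.0 * zi for xi, zi in zip(x, z)]  # true system: y = x + 2z

# The best single-input fit y ~ b*x still recovers x's coefficient...
b = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)
print(f"estimated coefficient on x: {b:.2f}")    # ~1.0

# ...but z's entire influence shows up as apparently irreducible noise.
residual_var = sum((yi - b * xi) ** 2 for xi, yi in zip(x, y)) / n
print(f"residual variance: {residual_var:.2f}")  # ~4.0, i.e. 2^2 * Var(z)
```

Nothing in the fit itself signals that a variable is missing; the model only fails when z shifts, which is exactly the hindsight problem described above.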

Revisiting the earlier examples with these constraints in mind, it should be no surprise that we are routinely surprised in such settings. We make predictions with whatever context is available to us at the time, unaware of the gaps in our knowledge. Even when the available context is rich, the fundamental complexity of the scenario at hand can simply be beyond our comprehension. Yet we continue to wrestle with that complexity, deriving compelling narratives that provide an illusion of understanding. Our cognitive biases enter that process, further distorting our view of reality. And even when we attempt to counteract some of those biases through algorithmic inference, the deception continues. History provides only one realization of a complex social system. Unlike certain physical systems that can be meticulously studied in controlled settings, complex social systems must be observed in the wild. We are not privy to multiple trials; each example witnessed is arguably unique.

So where does that leave us? How do organizations plan in environments where the future is unpredictable? The short answer is to focus not on prediction but on adaptation. When we truly embrace surprise as the norm, the question we must address is: how do we efficiently pivot when the surprise finally comes? This will be the subject of future posts.

Recommended reading:

N. Taleb - “Fooled by Randomness: The Hidden Role of Chance in Life and in the Markets” - 2008

J. Cooper Ramo - “The Age of the Unthinkable: Why the New World Disorder Constantly Surprises Us And What We Can Do About It” - 2009

N. Taleb - “The Black Swan: The Impact of the Highly Improbable” - 2010

D. Watts - “Everything is Obvious: Once You Know the Answer” - 2011