From Chance to Choice: How to find strategy in uncertainty

We often think of games as existing on a spectrum between pure luck and pure skill. In reality, it's more like a matrix, with each game involving a mix of luck, skill, and strategy. Take Scrabble and Snakes and Ladders, for instance:

  • Scrabble: Winning requires a good vocabulary and strategic placement of tiles to maximize points and block opponents. However, luck also plays a role in the tiles drawn and the unpredictability of opponents’ moves. → High Skill/Strategy, Medium Luck
  • Snakes and Ladders: Winning is entirely based on rolling dice and moving accordingly. It requires no critical thinking or planning. → High Luck, Low Skill/Strategy

Skills & Strategy versus Luck

The same applies to the missions we lead at work. Our success relies neither on pure luck nor on pure competence. Strategy and execution are crucial: success doesn't land in our laps, and we do have to actively contribute to achieving positive outcomes. However, luck always plays a significant role in any business endeavour, whether it's launching a product, delivering client services, or driving large change programs.

"Business models are designed and executed in specific environments" —Strategyzer

All we can do is try our best to get a hold on the unknown and uncontrollable and come up with future-proof and successful strategies.

Predicting the future

Foresight and expertise

Psychology professor Philip Tetlock conducted a two-decade-long study with 284 experts, asking them to forecast political and economic events 1, 5, and 10 years out. He then compared their predictions to actual events to measure accuracy.

“The average expert was roughly as accurate as a dart-throwing chimpanzee”

Two groups stood out: one made worse predictions than random guessing, and the other slightly outperformed randomness and basic algorithms (like "always predict no change" or "extend current trends").

Tetlock identified that what made one group better than the other was not their expertise but the way they thought.

The first group tended to fixate on big ideas and hold firm convictions. They made forecasts that validated their beliefs, making definitive assertions about the future, often declaring things “impossible” or “certain”.

The second group was more pragmatic, relying on more analysis and adapting their thought process to each question. They were more open to disproving their convictions or admitting uncertainty, and they reasoned in terms of probability and possibilities.

What does this imply for businesses, leaders and teams? Not that we should throw away experts and their convictions when forecasting future trends and developing successful strategies, but that we should invest more effort into checking our instincts and analysing possibilities.

Making the implicit explicit

One pitfall of expertise is relying on implicit, unchallenged knowledge or instincts in our predictions and decision-making. Mapping assumptions goes a long way in improving our reasoning. Why do we believe that a particular set of events will happen or that a given solution will work? Can we challenge these reasons? Are they facts or beliefs? Are they reliable predictors of future events?

For example, a business might be planning on developing an advanced AI-driven customer support solution, based on their prediction that artificial intelligence will become the standard tool in customer services within the next decade. This sounds sensible, but it's worth examining the assumptions behind this prediction:


Quick map of the assumptions behind the predictions that AI will become the standard tool in customer services within the next decade

Mapping assumptions can help spot our biases and blind spots. One common bias is our tendency to believe that current trends will continue in the future. It’s the fallacy of extrapolation, and it leads us to miss possible futures. In our example, we're assuming that AI technology will evolve fast enough within the next decade to handle the tasks that it currently cannot complete effectively. But what if after these last two years of fast progress, the next improvements take decades instead of continuing at the current rate?
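Mapping assumptions is easier when each one is written down and labelled. Here is a minimal sketch of such a map in Python; the assumptions listed are illustrative examples drawn from the AI customer support scenario above, not a definitive taxonomy:

```python
# Hypothetical sketch: tagging the assumptions behind a prediction so they
# can be challenged one by one. The assumptions below are illustrative.
from dataclasses import dataclass

@dataclass
class Assumption:
    statement: str
    kind: str  # "fact" (verifiable today) or "belief" (an extrapolation)

assumptions = [
    Assumption("AI already handles some simple support queries", "fact"),
    Assumption("AI capability will keep improving at the current rate", "belief"),
    Assumption("Customers will accept AI-first support", "belief"),
]

# Beliefs are the extrapolations most worth challenging first.
beliefs = [a for a in assumptions if a.kind == "belief"]
print(f"{len(beliefs)} of {len(assumptions)} assumptions are beliefs, not facts")
```

Even this tiny exercise makes the extrapolation fallacy visible: most of what props up the prediction turns out to be belief rather than verified fact.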

Simulations

Predicting the future in business is more about uncovering unexplored options and risks than accurately picturing what will happen. What if our predictions are wrong and things unfold differently? Running simulations is a great way to explore alternative futures and resist the extrapolation fallacy. We can examine our assumptions and play out different scenarios: What if things went left instead of right? What if things went completely wild?

Example: We assume that the operational costs of AI Customer Services (recurring integration price + continuous maintenance and system evolution) will be lower than human operations for our projected customer enquiry volume. Let's imagine three scenarios:

  1. Smooth sailing: Our systems only require minor updates, handled by a small part-time in-house team. AI integrations remain stable and our vendor provides comprehensive support at a fixed, affordable rate. → Low maintenance costs
  2. Steady upkeep: Our systems require regular upgrades for performance and security, needing a full-time dedicated team. Some support and maintenance are covered by our vendor, but additional costs are incurred for more extensive updates or troubleshooting. → Medium maintenance costs
  3. Major storm: AI technology evolves rapidly, vendors drastically increase their prices, and new regulations impose rigorous security and privacy standards, requiring frequent major upgrades handled by a large internal team. Vendors provide minimal support and go in and out of business in a volatile market, requiring us to hire external experts to manage issues and system migrations. → High maintenance costs

These scenarios have very different implications for the long-run operational costs of our AI Customer Services. None of them may actually happen, but they force us to look at possible events, which we can then further assess.
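The three scenarios can be played out with rough numbers to see when the AI option stops being cheaper than human operations. This is a hedged sketch: all figures below (maintenance costs, integration price, human baseline) are invented placeholders, not real estimates:

```python
# Illustrative cost simulation for the three maintenance scenarios above.
# Every figure is a made-up placeholder for the sake of the exercise.
SCENARIOS = {
    "Smooth sailing": 50_000,   # low annual maintenance
    "Steady upkeep": 200_000,   # medium annual maintenance
    "Major storm": 800_000,     # high annual maintenance
}
AI_INTEGRATION = 150_000        # recurring annual integration price
HUMAN_BASELINE = 600_000        # annual cost of the human team replaced

for name, maintenance in SCENARIOS.items():
    ai_total = AI_INTEGRATION + maintenance
    verdict = "cheaper" if ai_total < HUMAN_BASELINE else "more expensive"
    print(f"{name}: AI costs {ai_total:,}/year, {verdict} than human operations")
```

Even with invented numbers, the exercise surfaces the question that matters: the assumption "AI will be cheaper" only holds in two of the three futures.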

A good follow-up is to rate risks by impact and probability, which helps us choose the right plan of action to mitigate them, or lets us avoid the roads that would lead to them altogether. One pitfall to be aware of: we tend to choose paths with the highest chance of success, even when they carry low-probability risks with major impact. It's worth taking the time to explore "catastrophic" scenarios to reveal these risks and prepare for them (e.g. a future ban on using AI to fully replace workers).
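Rating risks by impact and probability can be sketched as a simple expected-impact score, with a separate flag for catastrophic risks so that slim probabilities don't bury them. The risks and numbers below are hypothetical:

```python
# Hypothetical risk-rating sketch: expected impact = impact * probability,
# plus a catastrophic flag that ignores probability. All numbers invented.
risks = [
    {"name": "vendor price increase", "impact": 3, "probability": 0.5},
    {"name": "key engineers leave", "impact": 2, "probability": 0.3},
    {"name": "ban on fully replacing workers with AI",
     "impact": 10, "probability": 0.02},
]

for r in risks:
    r["expected"] = r["impact"] * r["probability"]
    # Catastrophic risks deserve a plan even when their probability is slim.
    r["catastrophic"] = r["impact"] >= 8

ranked = sorted(risks, key=lambda r: r["expected"], reverse=True)
for r in ranked:
    flag = " [catastrophic: prepare anyway]" if r["catastrophic"] else ""
    print(f"{r['name']}: expected impact {r['expected']:.2f}{flag}")
```

Note how the ban ranks last on expected impact yet still gets flagged: that is exactly the kind of risk the pitfall above would make us ignore.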

“One thing a person cannot do, no matter how rigorous his analysis or heroic his imagination, is to draw up a list of the things that would never occur to him.” – Thomas Schelling, economist and professor of foreign policy

Bringing a diversity of brains into a simulation exercise can help imagine more scenarios, especially extreme and weird ones. Some people will come up with possibilities that others could not envision.

Premortems

This is another way to challenge assumptions and inspect the weaknesses of our ideas. Imagine we have fully implemented our plan, and it has failed:

  • How did it fail?
  • What factors and events caused this outcome?
  • What could we have foreseen and prevented?
  • What was completely unexpected and beyond our control?

Role-playing can also help by putting ourselves in the shoes of a competitor or detractor and critiquing our plan, playing Devil’s advocate.

These perspective shifts help us distance ourselves from our ideas and identify weak points more objectively. However, we must be ready to tweak or even abandon our plans if they don't withstand scrutiny.

Both simulations and premortems are also useful for looking beyond short-term impacts and considering the long-term effects of different plans.

Reducing unknowns with experiments

While we can't predict events accurately, we can bring some of the future forward with controlled experiments. Testing how things might play out with simplified factors can help narrow down or refine options, uncover unexpected impacts or risks, and explore new routes. That’s what MVPs are for in start-ups and product teams.

To de-risk our plan to develop an advanced AI-driven customer support system, we could run a limited-scope experiment to test our prediction that AI Customer Services will soon become the standard:

Our hypotheses:

  • Customers will prefer interacting with AI Customer Services. → Let's test if that's already true with the simple tasks that AI can already handle
  • AI can already handle simple customer service tasks more effectively than human operators. → Let's verify this assertion

Our experiment: For a selected type of simple queries, deploy a basic AI system to handle half the customer queries while human agents handle the other half. Measure and compare resolution time, customer feedback, and issue escalation rates.
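Analysing such a split test can start very simply: compare the resolution rates of the two arms. The counts below are hypothetical, and a real analysis would also check whether the difference is statistically significant before drawing conclusions:

```python
# Minimal sketch of comparing the two arms of the experiment.
# All counts are hypothetical placeholders.
ai = {"queries": 500, "resolved": 430, "escalated": 70}
human = {"queries": 500, "resolved": 455, "escalated": 45}

def resolution_rate(arm):
    """Share of queries resolved without escalation."""
    return arm["resolved"] / arm["queries"]

print(f"AI resolution rate:    {resolution_rate(ai):.0%}")
print(f"Human resolution rate: {resolution_rate(human):.0%}")
# Before concluding anything, a real analysis would add a significance
# test and break results down by query type and customer feedback.
```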

Making smaller bets

There is no such thing as long-term feedback loops. That’s what Annie Duke, author of "Thinking in Bets" and former professional poker player, argues: even with long-term goals, we must anticipate success by targeting short-term outcomes. What early predictors of success can we identify to ensure we're on the right path toward our longer-term goals?

In our example, a successful hybrid solution, where a lightweight AI system supports human Customer Services teams to handle simple tasks, could be a good milestone before pursuing more complex developments.

Duke also recommends setting kill criteria to recognise early that you're going down the wrong path, rather than waiting to fail before quitting. For instance, monitoring the estimated maintenance costs of our systems can prevent over-investing in a potentially cost-prohibitive solution. Our attention is precious, and time spent on a failing initiative distracts us from pursuing opportunities with better outcomes.
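A kill criterion only works if it is agreed on in advance and checked mechanically. Here is a hedged sketch of one such rule for the maintenance-cost example; the threshold, streak length, and figures are invented for illustration:

```python
# Hypothetical kill criterion: stop investing once quarterly maintenance
# costs exceed a pre-agreed ceiling two quarters in a row. All numbers
# are invented for illustration.
KILL_THRESHOLD = 250_000  # quarterly maintenance budget ceiling

def should_kill(quarterly_costs, threshold=KILL_THRESHOLD, streak=2):
    """Return True once costs exceed `threshold` for `streak` consecutive quarters."""
    run = 0
    for cost in quarterly_costs:
        run = run + 1 if cost > threshold else 0
        if run >= streak:
            return True
    return False

print(should_kill([180_000, 260_000, 240_000]))  # one bad quarter: False
print(should_kill([180_000, 260_000, 270_000]))  # two in a row: True
```

The design choice worth noting: the criterion is decided before emotions and sunk costs pile up, so the decision to quit becomes a pre-commitment rather than an argument.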


Further reading & listening

