A Whole New Ball Game: Overcoming the Odds Against Successful Implementation Katherine Rosback BS ChE, MA Org Comm
In a pivotal scene from the 2011 movie Moneyball,[1] actor Brad Pitt, playing the role of Billy Beane, general manager of the failing Oakland Athletics baseball team, asks a room full of scouts grappling with the loss of key players and a limited budget, “What’s the problem that we are trying to solve?”:
Billy Beane: Guys, you’re just talking. Talking “la-la-la-la” like this is business as usual. It’s not.
Grady Fuson: We’re trying to solve the problem here, Billy.
Billy Beane: Not like this you’re not. You’re not even looking at the problem.
Grady Fuson: We’re very aware of the problem. I mean...
Billy Beane: Okay, good. What’s the problem?
Grady Fuson: Look, Billy, we all understand what the problem is. We have to...
Billy Beane: Okay, good. What’s the problem?
Grady Fuson: The problem is we have to replace three key players in our lineup.
Billy Beane: Nope. What’s the problem?
Chris Pittaro: Same as it’s ever been. We’ve gotta replace these guys with what we have existing.
Billy Beane: Nope. What’s the problem, Barry?
Scout Barry: We need 38 home runs, 120 RBIs, and 47 doubles to replace.
Billy Beane: Ehhhhhhh! [imitates a buzzer] The problem we're trying to solve is that there are rich teams and there are poor teams. Then there's fifty feet of crap, and then there's us. It's an unfair game. And now we've been gutted. We're like organ donors for the rich. Boston's taken our kidneys, Yankees have taken our heart. And you guys just sit around talking the same old "good body" nonsense like we're selling jeans. Like we're looking for Fabio. We've got to think differently. We are the last dog at the bowl. You see what happens to the runt of the litter? He dies.
The team is drastically underfunded—there’s no money to buy “big-time” players—and Billy is frustrated that his scouts are discussing possible new prospects with the same old problem in play. His frustration is understandable: a lack of divergence regarding alternative ways to frame a problem generally results in striking out when it comes to successful implementation. And, unless you have the deep pockets of a Major League Baseball team owner, spending time on understanding why such implementations fail will be a pretty good investment, as the odds based on current practices are not in your favor. Multiple studies have documented the dismal outcomes of many a well-intentioned initiative. For example, research has shown that total quality management (TQM) and other programs like it have just a twenty to forty percent success rate when it comes to implementation, while eighty-five percent of reengineering programs fail to live up to expectations.[2] Several studies have highlighted how management commitment wanes during an implementation process because of unclear objectives. Explicitly, a lack of understanding about the specifics of what an initiative is intended to achieve leads to an erosion of managerial commitment.[3]
Of all the steps in a typical decision-making process, the one most overlooked is that of defining the problem—specifically, it is the step of diverging on different ways to formulate the problem that people tend to bypass. When groups come together to create a new strategy, address issues of underperformance, implement new initiatives, or make decisions, they will typically—and often at an unconscious level—tackle these challenges with unexamined, “old” problem definitions guiding and limiting the development of their solutions.[4] Like those baseball scouts gathered around the table in the Oakland A’s office, most people aren’t even aware of the particular problem that they are trying to solve; they just jump right into the solution phase.
This is an issue I often see in the teams I work with. When a group is stuck on the question “Should we do this or not?,” I’ll ask everyone to grab a piece of paper and write down their definition of the problem they are trying to solve. Odds are, if you have ten people in the room, you will get four different problem definitions. People tend to speak in solutions without even thinking about the problem or question that is driving those solutions. If you were to reflect on typical conversations that you’ve had in meetings—formal or informal—my hunch is that you would observe the same pattern. Just move into an active listening mode, and, pretty soon, you’ll likely hear some variant of “Should we invest in this project or not?,” “Should we let this person go or not?,” or “Should we move into the one-story or the two-story house?” The question that is driving those solutions is rarely articulated. Hence, when a new initiative (solution) catches the eye of a leader and makes its way into an executive meeting, everyone starts to debate it or tries to find ways to get people to buy in to it. Asking questions such as “In response to what?,” “What is the question that you are trying to answer?,” “Why are we looking at this in the first place?,” or “Where’s the pain that is driving this?” will likely elicit either an awkward silence or numerous different perspectives. As Billy Beane despaired, no one is stepping back to distinguish the problem that they are trying to solve.
A recent Harvard Business Review article examined this issue of becoming too quickly enamored with “bright, shiny solutions” and programs, and suggested instead that one should “fall in love with the problem” and “spend time letting the challenge soak in, studying it from various angles, and understanding it more deeply.”[5] Successful companies, the article’s authors noted, did not run with the presenting symptoms, but, rather, persevered at understanding the essence of the problem. However, in my experience, leadership teams will often choose to avoid the discomfort that can accompany such deliberations, and choose instead to move on to the ostensibly easier work of implementation. Yet, whether in the fields of sport, science (which, as the neuroscientist Alcino J. Silva has observed, “is just as much about finding questions as it is about answering them”[6]), or business, the question step is clearly critical. Choose to avoid it and odds are you will join others who are similarly “confined to their situation”[7] by failing to articulate—and gain alignment on—the problem to be solved.
Several years ago, a client company resolved to become more “collaborative” in its discussions—a worthy goal, but one that lacked specificity. Following the announcement of this initiative, the firm’s senior managers got together to discuss what the term really meant, and, in my pre-meeting interviews with the team, one leader stated, “I already thought I was collaborative! I have no idea what they are looking for!” He was not alone; many struggled with what the term meant, what specific characteristics should be taught, and what success would look like.
And therein lies the second issue when implementing initiatives: a lack of specificity backed by data analytics. Lewis’ Moneyball was, essentially, a book about asking a different set of questions and applying statistical analysis to answer them. Once the problem had been articulated, what were the specific skills and abilities that would contribute to a winning team? As Billy Beane and company found out, it wasn’t RBIs that made the difference, but on-base percentage. Daryl Morey of the Houston Rockets (also a student of data analytics) found that points, rebounds, and steals per game were not very predictive of a good basketball player, but points, rebounds, and steals per minute were.[8] So, going back to an organization’s goal to become more collaborative, which specific skills ensure collaborative leadership? What should be measured? What really makes the difference? The key issue is that, when making such assessments, we often rely on outdated or flawed collective wisdom, and, rather than undertaking rigorous analysis to prove what really contributes to success, we are swayed by first impressions, confirmation and availability biases, unproven correlations, and a whole host of other heuristics.
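The per-game versus per-minute distinction above is really just a question of choosing the right denominator before comparing players. A minimal sketch, with invented player names and numbers (not actual NBA data), shows how the ranking can flip once the measure is normalized:

```python
# Hypothetical illustration of the per-game vs. per-minute distinction.
# Names and statistics are invented for the example.

players = [
    # (name, points_per_game, minutes_per_game)
    ("Starter", 18.0, 36.0),  # high raw output, heavy minutes
    ("Bench", 9.5, 14.0),     # modest raw output, light minutes
]

for name, ppg, mpg in players:
    per_minute = ppg / mpg  # normalize by playing time
    print(f"{name}: {ppg} pts/game, {per_minute:.2f} pts/minute")
```

Ranked per game, the starter looks far more productive; ranked per minute, the bench player comes out ahead (0.68 vs. 0.50 points per minute), which is exactly the kind of reversal a better-framed question can surface.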
After the use of data analytics was popularized in Moneyball, the education, legal, manufacturing, sales, and human resources sectors alike began utilizing this approach in order to understand the patterns that drive success in their respective fields. However, Lewis’ insights in the book regarding “the ways in which any expert’s judgements might be warped by the expert’s own mind” had been described years earlier by a pair of Israeli psychologists—Daniel Kahneman and Amos Tversky.[9] In Lewis’ latest book, The Undoing Project,[10] he explores the careers of these two men, who pioneered research on the biases and behaviors that influence the way we make choices.
A particularly germane section of The Undoing Project discusses Kahneman’s work for the Israeli army in assessing new recruits. Having identified how poorly interviewers’ predictions of who would perform well correlated with who actually did, Kahneman put together a series of interview questions designed to “determine not how a person thought of himself, but how that person actually behaved.”[11] The core set of questions became “not ‘What do I think of him?’ but ‘What has he done?’” Through this work, Kahneman found that, if you remove the opportunity for the expression of gut feelings, people’s judgements improve. When it comes to deciding what makes for a successful baseball player, leadership candidate, or army recruit, our mental biases and preconceptions get in the way, but we can improve our judgement and decision-making through a more exacting analysis of current, relevant data—including information that is not readily available to us.
This approach of eliminating gut feelings and analyzing the data has been applied successfully in a multitude of sectors, including finance, education, criminal justice, and health care. In health care, this same “data-focused” technique is used to identify differences between doctors who are successful at engaging patients and those who are not. Why would anyone care about this? Because patients who are engaged in the decision-making process are more likely to give clues to what the doctor might not even be thinking about; such engagement also improves patients’ understanding of the available treatment options, increases the proportion of patients with realistic expectations of benefits and harms, and improves agreement between patients’ values and treatment choices.[1]
So, rather than just chalking it up to “She’s just a nicer person,” researchers intent on improving the doctor-patient relationship have identified that “how a doctor asks questions and how he/she responds to his/her patient’s emotions” are both key to engaging the patient. These conclusions were based not on a quick assessment of the physician but on the analysis of thousands of videotapes and live interactions[2], i.e., data, not gut feel. So, if you are part of a hospital training staff tasked with improving the level of shared decision-making, rather than just telling your doctors to “ask more questions!,” your training can be much more specific about the types of questions to ask and how to ask them, thanks to these data-analytic findings.
This kind of detailed talk analysis is a career passion for Dr. Felicia Roberts, Professor and Associate Head in the Brian Lamb School of Communication at Purdue University. Roberts’ focus is on using a data-analytic methodology called conversation analysis (CA) to understand what is happening at a granular level in face-to-face communications in doctor-patient, parent-child, and other such contexts. CA uncovers the intent or “doing” of the talk by asking “what is being accomplished by this act?” and by examining a variety of conversational characteristics: the production of pauses, the length of a given turn, the methods by which speakers choose to craft their statements, the methods by which the second speaker chooses to respond, and multiple other linguistic strategies used in interactive events.[3] By examining talk at this very granular level, one can uncover the specific differences in, for example, how oncologists approach a cancer clinical trial explanation with an eligible patient, in an effort to increase the level of enrollments.
As Roberts explains in a paper on this very topic,[4] enrollment in clinical trials is important to the field of medical research because trials are the means through which new treatments are tested. No testing, no new medical solutions. It is therefore important to understand what facilitates or enables an enrollment, and the area that Roberts chose to study was the differences in how oncologists talked to their patients. What makes the difference?
Forget surveys and interviews. Such methods are so peppered with the first impressions and the confirmation and availability biases that we spoke about earlier that they are not of much use. Roberts, instead, chooses to capture the actual conversation on audiotape and then goes through the painstaking process of writing down every “huh,” every “uhhh,” and the timed moments of silence that are part and parcel of an actual conversation. The outcome? There is a “sharp qualitative difference”[5] between how the two oncologists attempt to enroll a patient: one structures his talk in a way that involves the patient, while the other tends to structure his talk as a recommendation. Roberts’ study provided specific data on those differences.
For the client company mentioned earlier, whose goal was to become more collaborative, my question would be, “What would we find if we taped the actual conversations of a leader who is deemed to be collaborative and one who is not?” What would the data analytics tell us? Do collaborative leaders ask more questions? How do they challenge opposing points of view? How do they employ silence? How do they respond to disconfirming information? My point is, this is the level of specificity needed if one is to succeed at changing the behaviors in one’s organization. Actions are driven by talk, and we need to understand, at a very specific level, the root of the differences in talk.
In my consulting work, I used a similar technique when evaluating “good” and “poor” performing virtual teams. I had been asked to design a training class to improve facilitation capabilities in a globally dispersed set of knowledge management teams. Rather than defer to collective wisdom on “what makes for a great facilitator,” or even to my own gut feel (I have been teaching facilitation techniques, as well as facilitating, for over two decades), I taped and analyzed the dialogues of the teams that were performing well (as defined by their ability to deliver a set of results) and those that were struggling. At the end of my study, I could count, categorize, and pinpoint the specific differences in the length of uninterrupted speech, the amount of time allotted to responses, and the precise question structures posed by the more successful leaders to engage the team, among several other pertinent items. From these data, I was able to prepare a targeted set of skills that I could teach to improve team facilitation.
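The counting and categorizing described above can be mechanized once a conversation has been transcribed. A minimal sketch, with an invented two-line transcript and invented speaker labels (not data from the actual study), tallies two of the measures mentioned: words per uninterrupted turn and how many of a speaker’s turns are questions:

```python
# Toy tally of transcript features: words per turn and question turns
# per speaker. The transcript and speaker names are invented examples.

from collections import defaultdict

transcript = [
    ("leader", "What options have we not considered yet?"),
    ("member", "We could pilot the tool with one regional team first."),
    ("leader", "What would success look like for that pilot?"),
]

turn_lengths = defaultdict(list)   # speaker -> list of words-per-turn
question_turns = defaultdict(int)  # speaker -> count of question turns

for speaker, utterance in transcript:
    turn_lengths[speaker].append(len(utterance.split()))
    if utterance.rstrip().endswith("?"):  # crude question detector
        question_turns[speaker] += 1

for speaker, lengths in turn_lengths.items():
    avg = sum(lengths) / len(lengths)
    print(f"{speaker}: avg words/turn {avg:.1f}, questions {question_turns[speaker]}")
```

A real conversation-analytic study would also need timed pauses and turn-taking structure, which this word-count sketch deliberately leaves out; the point is only that, once talk is captured as data, the differences between speakers become countable.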
In light of successes such as these, it is evident that, for an initiative to be successfully implemented, the question or need driving it must first be uncovered and understood, and any subsequent solutions or decisions should be founded on unbiased, extensive, and applicable analysis.
What’s the problem, and what are the specific skills that will help you win your game?
ABOUT THE AUTHOR
Katherine Rosback is an expert facilitator, specializing in complex decisions and technical discussions requiring resolution. She teaches the skills in her highly acclaimed workshops and has facilitated hundreds of strategic planning, problem-solving, organizational development, and initiative implementation meetings. Katherine is noted for her ability to deepen a group’s understanding and perspective of the issue by asking thought-provoking questions and her “magic” in creating a highly-engaged and thoughtful discussion that leads to a resolution. Her workshops are known for their applicability and for the extent to which the tools and techniques are rooted in science.
Katherine has a B.S. in Chemical Engineering and an M.A. in Organizational Communication, both from Purdue University. She is an avid student of horsemanship and dressage and enjoys riding with her horses and dogs.
BIBLIOGRAPHY
Ballard-Reisch, D. S. “A Model of Participative Decision Making for Interaction.” Health Communication 2 (1990): 91–104.
Beer, Michael. “Why Total Quality Management Programs Do Not Persist: The Role of Management Quality and Implications for Leading a TQM Transformation.” Decision Sciences 34, no. 4 (2003): 623–42.
Boudreau, John, and Steven Rice. “Bright, Shiny Objects and the Future of HR.” Harvard Business Review, July–August 2015: 72–78.
Groopman, Jerome. How Doctors Think. Boston: Houghton Mifflin Company, 2007.
Goldberg, Marilee C. The Art of the Question: A Guide to Short-Term Question-Centered Therapy. New York: John Wiley & Sons, 1998.
Lewis, Michael. “How Two Trailblazing Psychologists Turned the World of Decision Science Upside Down.” Vanity Fair, November 14, 2016.
———. Moneyball: The Art of Winning an Unfair Game. New York: W. W. Norton, 2003.
———. The Undoing Project: A Friendship That Changed Our Minds. New York: W. W. Norton, 2017.
———. “Basketball’s Nerd King.” Slate, December 2016. https://www.slate.com/articles/arts/books/2016/12
Moneyball. Directed by Bennett Miller. Culver City, CA: Columbia Pictures, 2011.
Roberts, Felicia. “Qualitative Differences Among Cancer Clinical Trial Explanations.” Social Science & Medicine 55 (2002): 1947–1955.
Rosback, Katherine. The Talk of Transition: An Analysis of the Communicative Processes in a Family Firm Succession. Purdue University, 2002.
———. Overcoming the Odds. 2015.
Silva, Alcino J. “Memory’s Intricate Web.” Scientific American 317, no. 1 (2017): 30–37.
ARTICLE FOOTNOTES
[1] Moneyball, Miller. The movie is based on Michael Lewis’s 2003 book Moneyball: The Art of Winning an Unfair Game.
[2] Beer, “Why Total Quality Management Programs Do Not Persist.”
[3] Rosback, Overcoming the Odds.
[4] Goldberg, The Art of the Question.
[5] Boudreau and Rice, “Bright, Shiny Objects and the Future of HR.”
[6] Silva, “Memory’s Intricate Web.”
[7] Goldberg, The Art of the Question.
[8] Lewis, “Basketball’s Nerd King.”
[9] Lewis, “Two Trailblazing Psychologists.”
[10] Lewis, The Undoing Project.
[11] Ibid.
[1] Ballard-Reisch, “A Model of Participative Decision Making for Interaction.”
[2] Groopman, How Doctors Think.
[3] Rosback, The Talk of Transition.
[4] Roberts, “Qualitative Differences Among Cancer Clinical Trial Explanations.”
[5] Ibid.