Experimentation as a means for intelligent failure in marketing

To fail or not to fail, that is the question. And I have an answer: fail. You know the gist: fail fast, fail frequently, fail safely. This notion is anything but novel, but it remains a timeless reminder we need. And while marketers and other business disciplines alike understand the virtue of these descriptors, are they actually practiced? Furthermore, is failure practiced deliberately, methodically, and as an explicit means to drive competitive advantage? By and large, businesspeople and marketers don't hold their failure constructs to such rigor; failure is often addressed with little intentionality, and with little awareness of what it really means to fail. After all, 85% of executives believe fear of failure thwarts innovation efforts in their organizations, according to McKinsey (1).

You may be thinking, "I take some control and harness failure because I test and try new things." You're on the right track, but missing an opportunity to position and execute your marketing program as one that is both innovative in market and highly respected internally. Moreover, you may be missing the opportunity to be deliberately cognizant about your failure: to carry a categorized understanding of how failure manifests, to identify and address specific categories of failure, and, arguably the most exciting opportunity of them all, to fail intelligently. Intelligent failure is where we'll focus our attention, specifically through the practice of systematic experimentation. In becoming a deliberately cognizant marketing failure leader (that has an interesting ring to it), ahem, experimenter, we must start with a few key actions:

  • Adhere to an explicit innovation strategy (don't miss the preceding introduction to Balanced Innovation)
  • Define the ways failure manifests
  • Leverage a standardized, repeatable, and shareable framework for experimentation (you'll walk away from this article with a framework for you and your team to leverage)

Defining Failure

I espouse applying an academic lens to our marketing work and decisions, what I've elsewhere called Deliberate Cognizance. The ability to carry this level of strategic reasoning in your work almost always relies on defining scenarios explicitly in order to address them effectively. Failure is ripe for defining. Below you'll find an exhibit I created, inspired by research by Amy Edmondson of Harvard Business School (2). The driving message is that failure sits on a spectrum: it can be predictable (failure without valuable lessons), a product of complexity (detrimental, but with lessons learned), or intelligent (safe, controlled failure with lessons learned). Defining failure allows us to act on it more effectively, and I've found that tightly defining its stages provides a far more robust approach to managing people (proactively and reactively) and to revealing root cause, which should always be an end goal whenever you or your team encounter a failure anywhere on this spectrum. Our examination in this article will focus squarely on the right end of the spectrum, intelligent failure, and its material manifestation: experimentation.

Original Image // Evan Weiner


Experimentation

Experimentation, In Practice

You might find [the additional opportunities field] to be one of the most engaging pieces as you’re bridging the gap from the scientifically conclusive (or inconclusive) result you landed on, to new possibilities - this blend of vision and pragmatism is a winning recipe for persuasion with other functional groups or clients.

The article preceding this one details the merits of balanced innovation, which in its most potent form is a sage guide for pursuing experimentation with deliberate cognizance (yes, there's my term again); I encourage taking in that strategy before you pursue a backlog of experiments. First, let's break down my suggested elements of your experimentation structure and outputs; then we'll come back to laddering these up to a balanced innovation scheme through a practical, quantified scoring system.

The experimentation format we'll leverage here, in its simplest form, ensures you subject your tests to the scientific method. Subjecting your work to this level of rigor not only benefits the insights and decision guidance you're seeking in the first place, but is just as much about being able to convey your ideas and program of work decisively and objectively to cross-functional stakeholders and senior leaders alike. So, this figure has just as much utility as an artifact within your nuclear team as it does as a readout to champion outside of your team. Let's take a look:


Original Image // Evan Weiner


Let's examine the eight core elements of the experiment worksheet presented:

  1. Observation: A quantitative and/or qualitative observation made from the baseline (or control) that leads to the hypothesis. This is the trend or phenomenon you're looking to draw a conclusion about.
  2. Hypothesis: The proposed explanation for a phenomenon, which you are looking to test. This is arguably the most important factor in your experiment build-out. Be as specific as possible, leveraging quantified expectations 100% of the time; the less quantitative and specific you are, the more you chip away at the credibility and viability of your experiment. The one-liner shown here is an example of the simplest execution and works well with pure-play exploitative experiments.
  3. Test Design: The core architecture of how the test is set up and run. Test design frequently covers the type of experiment, the tool used, eligibility criteria, channel, and any other pertinent details that describe the experiment design. Test design can be a separate slide in and of itself, and can include expanded information around stakeholders, tools and measurement, daily or weekly breakdowns of the test period, and more. Use your own judgment on how detailed to make this, based on your preferences and the audience(s) you may be shopping it to.
  4. Measuring Success: Aside from your hypothesis, this is your hardest-hitting field, which isn't surprising given it reports on the outcome of your hypothesis. You'll notice this illustrative example clearly shows a "win" (hypothesis proven) based on the key judgment metric, and leverages statistical significance. The complementary KPIs are purely for discovery and additional analysis; they do not (and should not) influence the outcome of the experiment. It's very important that you choose one measure of success per experiment to judge your hypothesis with. If you have other measures you want to formally test (e.g., how ML might affect CTR or brand sentiment), create separate experiment documents. See #5 below.
  5. Additional Opportunities: This field is where you continue to imagine the testing possibilities. In presenting this readout, you might find this to be one of the most engaging pieces, as you're bridging the gap from the scientifically conclusive (or inconclusive) result you landed on to new possibilities; this blend of vision and pragmatism is a winning recipe for persuasion with other functional groups or clients. Lastly, experiments may be isolated tests or part of a saga, so this is also where you detail the next step if you're planning a series of tests. Explorative innovation, given its complexity and broad scale, often manifests as an astute chronology of experiments.
  6. Conclusion (as header metadata): Your experimentation output leads with the conclusion you were able to distill. Keep in mind that an experiment can be decisively inconclusive as well; sometimes a test doesn't get us to the meaningful results we need. What enables you to be conclusive or not is your 'Measuring Success' section, specifically the degree to which you're able to reach statistical significance (via a confidence interval / p-value); a minimal sketch of such a significance check follows this list. If that last statement makes you anxious, especially for those of us working higher up in the marketing funnel, don't fret; we'll be right-sizing this in no time, below. In this example we have a conclusive win (we satisfied our hypothesis with statistical significance), shown in green, followed by a conclusive outcome or resulting action. Below this headline you can also include a byline with a very succinct preview of your observation.
  7. Control Group Description & Assets: Consider items #7 and #8 free-form areas to provide qualitative detail and artifacts (e.g., creative, in-market examples). You might choose to extend these onto additional pages depending on your preference and the complexity of the experiment. For #7 we're focused on the control group; in this case, that means showcasing the targeting criteria of the manual, heuristic approach we'd been using to date, as well as any other relevant detail, such as snapshots of creative.
  8. Test Group Description & Assets: This follows the same logic as #7, but details the test group. In this example it would be showcasing the targeting criteria from our ML tool, including relevant UI screenshots of the tool, creative, etc.
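
To make the 'Measuring Success' and 'Conclusion' fields concrete, here is a minimal sketch of a significance check for a conversion-style judgment metric, written as a two-proportion z-test in Python. The counts, sample sizes, and the 95% threshold are invented for illustration; this is not a prescribed tool, just one defensible way to compute the p-value the worksheet calls for.

```python
# Minimal sketch: two-proportion z-test for an A/B test's key judgment
# metric. All counts below are hypothetical illustration values.
from math import sqrt

from scipy.stats import norm


def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Return (relative_lift, p_value) for test group B vs. control group A."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - norm.cdf(abs(z)))  # two-tailed test
    return (p_b - p_a) / p_a, p_value


# Hypothetical results: heuristic control converts 420/12,000;
# ML-targeted test group converts 510/12,000.
lift, p = two_proportion_z_test(420, 12_000, 510, 12_000)
print(f"relative lift: {lift:+.1%}, p-value: {p:.4f}")

# p < 0.05 supports calling the hypothesis "proven" at the 95%
# confidence level the worksheet's Conclusion field refers to.
if p < 0.05:
    print("conclusive: statistically significant at 95% confidence")
else:
    print("inconclusive at 95% confidence")
```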

Right-Size: Valid Caveats to the Scientific Method for Full-Funnel Marketing

...be encouraged by your intent, and make that intent known. Your ability to communicate an objective level of sophistication and rigor against which you model your experiments and make your decisions... is an incredibly powerful means to gain the credibility and attention your marketing work deserves. In layman's terms we might call this 'sounding smart'. Well, it is, because in doing this you are being exactly that.

The example observation and hypothesis we see in the model is one of the more straightforward experiments we can perform in marketing: a paid digital media A/B test. It's used as the example to ensure the fundamentals of the format are well understood. But there's a quick and very important call to action: use this format and its elements with anything and everything you're testing. That is the point, in fact. Whether lower funnel or upper funnel, with advanced resources or not, we want to subject our work to the highest and most systematic rigor possible in order to experiment more effectively, build bold psychological safety for the team (more on that below), and position our marketing work with the highest credibility and persuasiveness possible. The magic appears in the right-sizing, which I prescribe in a few ways:

  1. Flex the elements so that you can complete and publish this readout against any innovation effort you pursue. This might mean you can't apply a confidence interval, but can lean very deeply into qualitative insights, for example.
  2. Be cognizant, quite literally. Don't be discouraged by not being able to hit certain quantitative exercises (statistical significance remains an easy target I'm picking on), and don't worry that an experiment might feel too small to merit this worksheet. Rather, be encouraged by your intent, and make that intent known. Your ability to communicate an objective level of sophistication and rigor against which you model your experiments and make your decisions - even if you don't accomplish it in a single instance - is an incredibly powerful means to gain the credibility and attention your marketing work deserves. In layman's terms we might call this 'sounding smart'. Well, it is, because in doing this you are being exactly that.
  3. Always push for the most advanced insights. Think you can't apply statistical significance? Be perfectly sure you can't. Build bridges into data science or BI and shop your experiment with them; they might help you realize you can actually apply formal statistics, or they may inspire complementary ways to add quantitative rigor to your test. Plus, I've found that internal networking in the context of experimentation is some of the most meaningful networking you can do in a corporate setting.

Achieving Balanced Innovation Through Experimentation

Deliberate, published experimentation has stellar utility on its own. But when spawned from an innovation strategy, utility becomes strategy, and strategy becomes competitive advantage. Balanced innovation is an academically researched approach that applies no matter your innovation maturity. With this scheme in mind, you can quantitatively score your experiments, such as on a 1-10 scale, based on how exploitative (1-5) or explorative (6-10) they are; these scores can then roll up into a quarterly or annual averaged experiment score goal. In the case of balanced innovation, you would classically pursue a total averaged score for your experiments between 4 and 6 (the middle of the 1-10 scale); a minimal sketch of this scoring follows below. This scoring system has worked very well on my own teams, not just as a means to measure and track our progress internally and defend our efforts externally, but also as a neat application for personal goals.
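
As a rough illustration of how that scoring rolls up, here is a minimal Python sketch. The experiment names and individual scores are hypothetical; the only assumptions carried over from the text are the 1-10 scale (1-5 exploitative, 6-10 explorative) and the 4-6 balance target.

```python
# Minimal sketch of the 1-10 exploit/explore scoring rolled up into a
# quarterly balance score. Names and scores are hypothetical.
from statistics import mean

# 1-5 = exploitative, 6-10 = explorative
quarter_scores = {
    "subject-line A/B test": 2,
    "ML targeting vs. heuristic targeting": 6,
    "new channel pilot": 9,
    "landing page copy refresh": 3,
}

balance = mean(quarter_scores.values())
print(f"quarterly balance score: {balance:.1f}")  # -> 5.0

# A classically balanced portfolio averages between 4 and 6; a low score
# signals over-exploitation, a high score over-exploration.
if not 4 <= balance <= 6:
    print("portfolio is drifting; rebalance next quarter's backlog")
```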

As my groups and I have matured into mastery of experimentation process and strategy, I've further democratized ownership and boosted engagement in our program by presenting the option to incorporate quantified experimentation goals into each member's hard-coded performance goals (the ones that anchor promotion and bonus progress). At first proposal, a majority of my team members opted in. One to two quarters later, 100% of the team had decided of their own accord to incorporate experimentation goals into personal performance tracking. The formal bi-annual review periods following each of those decisions spelled either level promotions or merit increases for each of them. And while there were other factors and achievements at play in each team member's reward, accomplishing their experiment output and balance score not only set them up for success with me as their manager, but enabled me to reinforce my argument for smooth approvals with my own management, even when it wasn't a shoo-in decision. If this was possible on an individual level, can you imagine what's possible at a group level? Imagine no more. Let's examine in the next section.

Empowering Your Team Through Experimentation

The discussion around 'test and learn' culture and structurally fostering curiosity in teams is extensive. Its parent topic, organizational behavior, may be home to some of the most intensive doctoral-level management research done at leading business schools. For the purpose and brevity of this article I won't dive into much of this published work, but it is the backdrop to the recommendations I've formed through my own success in fostering curious, performant team cultures.

Firstly, we've already covered the majority of my recommendation: employ Deliberate Cognizance by coming to terms with the meanings of failure, and then put to use a formal, scientific experimentation regime within your marketing program. But how does it come to life? What about it makes your team 'bright-eyed and bushy-tailed' and smart about innovation? With the experimentation format in tow, realizing these merits comes down to the following process I've outlined here:


Original Image // Evan Weiner



The process illustrated above represents the full-cycle implementation of an experimentation system within a functional marketing team. This is a process I've implemented and seen incredible results with. Results have included in-market wins: scores of experiments achieving everything from incremental brand engagement, to multi-million-dollar cost efficiencies, to clear decisions on new channel entries. Our results also manifested internally: we used experiment readouts to broker participation in developing new and under-serviced brands, to opt in to new tech-stack capabilities, and to create outsize influence for channel programs, including social and digital customer experience, within broader cross-functional strategic initiatives.

Considering both the experimentation worksheet and the implementation sequence shared above, you can think of the experiment readout as the chess piece and the sequence as the chessboard. In essence, the experiment and its readout are the vehicle that activates the merits of each step. An experienced practitioner or team leader can interpret this figure quite well without qualification, and in doing so can make it their own. But let's add some clarifying context, orienting along the process line, starting in the top left:

  • Kick-off & Training: You'll start your experimentation regimen by formally breaking ground, which consists primarily of communicating the new process and accompanying artifacts, and setting the cultural tone and norms. In its most ideal, and arguably most effective, form, this kick-off would include a unique event or ceremony for your team and stakeholders, such as an offsite or a lunch-and-learn series that also includes internal or external experts on experimentation; the latter provides social proof, where your group can see material examples of experimentation enabling a team to create and communicate impact. In addition, while a consensus culture can be prohibitive as a run-of-business mode, it makes for a powerful option here by garnering initial and long-term engagement from your team. And since experimentation is such a participatory new process, one that involves systematic risk-taking and an inherent level of vulnerability, finding ways for measured and creative feedback during the build-out will make the team far more endorsing in the long run.
  • Backlog Strategy & Brainstorm: This represents the creation of your experiment backlog. You'll notice this process cycles with review ceremonies, such as reviewing past experiment results or recurring performance and OKR reviews (monthly and quarterly are the most classic cadences). Depending on your resources, you might use manual documents, light tools, or enterprise tools like JIRA to manage your backlog. Similarly, you might have a dedicated lead to manage this, or create shared responsibility. If you're quite new to systematic experimentation, don't let tool procurement or delegating responsibility slow down your start. Prototype running some experiments for a short period, likely with outsize responsibility on the team leader (perhaps you), as you suss out your resource needs and team roles; a minimal sketch of a backlog entry appears after this list.
  • Execute Backlog: Run those experiments! I suggest doing this as nimbly as you can while still producing meaningful conclusions on your hypotheses. In the figure you see 30-60 day sprints noted; I pose this as a natural suggestion, not a firm directive. At any given point you may have a two-week exploitative experiment in play alongside a three-month explorative-lite experiment, and both might conclude and be read out in the same review ceremony. In other words, in adapting experimentation process to a dynamic set of marketers, I don't believe you need to be too strict in boxing the start and stop of your experiments by design; let the other natural forces of urgency you're experiencing drive this.
  • Monthly Review & Tactics Exchange: This is where the magic happens: the learning and inspiration! I've found that monthly reviews are an effective cadence to collate results, groom the backlog, finalize readouts, and come together around the results. Depending on the unique nature of your work, scale, and team, this could be more or less frequent. Meeting too infrequently (e.g., bi-monthly or quarterly) risks the timeliness of your learnings and reduces the endurance your team can maintain when running experiments. In my opinion, the collective learning we experience in review ceremonies, aside from serving our customers' and clients' needs and ambitions, is the reason we are in business. If your team is hybrid or in office, come together physically for this.
  • Quarterly Showcase: It's time to champion your discoveries. Parade them around. Come one, come all to the marketing group's innovation review! Perhaps you can sense that I urge you to take this step, and take it unapologetically. Gather a curated group of cross-functional and senior leaders, including relevant C-suite invitees, and champion your team by giving them the opportunity to report out to these leaders and share the magic of their big ideas and the strategic diligence performed in testing those ideas. Choose your own cadence; by and large, realistically mobilizing C-suite or senior client groups, especially for in-person meetings as this one ideally is, is a quarterly maneuver.
  • Psychological Safety & Balanced Innovation: In instituting this process we reap the bold benefits of creating psychological safety in our teams. There's an interesting dualism at play when you get formal with experimentation, and it mirrors that of using Deliberate Cognizance overall. On one hand, the rigor and responsibility added can be, well, rigorous; precise rules can risk retreat from creativity. On the other hand, it catapults the objectivity and sinks the subjectivity used in judging ideas within your team. The latter phenomenon overshadows the former. In my teams we formed a reflex of setting subjective judgment aside and would often just say, "let's experiment about it!"
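
For teams starting with lightweight tooling before graduating to something like JIRA, here is a hypothetical sketch of what a structured backlog entry might look like, mirroring the worksheet elements described earlier. All field names and example values are assumptions, not a prescribed schema.

```python
# Hypothetical backlog entry for teams managing experiments with a
# lightweight script or spreadsheet export. Field names mirror the
# worksheet elements above but are assumptions, not a prescribed schema.
from dataclasses import dataclass, field
from enum import Enum


class Status(Enum):
    BACKLOG = "backlog"
    RUNNING = "running"
    READOUT = "readout"


@dataclass
class Experiment:
    observation: str
    hypothesis: str                # quantified, per the worksheet
    judgment_metric: str           # the single measure of success
    innovation_score: int          # 1-5 exploitative, 6-10 explorative
    status: Status = Status.BACKLOG
    complementary_kpis: list = field(default_factory=list)


backlog = [
    Experiment(
        observation="Manual targeting plateaued at a 3.5% conversion rate",
        hypothesis="ML-driven targeting lifts conversion by at least 15%",
        judgment_metric="conversion rate",
        innovation_score=6,
        complementary_kpis=["CTR", "CPA"],
    ),
]

# Grooming the backlog for a monthly review: surface what's still queued.
queued = [e for e in backlog if e.status is Status.BACKLOG]
print(f"{len(queued)} experiment(s) awaiting kickoff")
```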

Final Words

Innovation and experimentation as a means for building marketing equity and endorsement

Use your experiments to champion your team, dramatically boost respect for your work, and create an intriguing sideshow - but not one of oddities, one of possibilities.

We marketers yearn for, and need to create, currencies with which we trade our craft and broker participation in our programs and campaigns, be it with cross-functional teams or with senior leadership that has increasing expectations of marketing as a revenue center. Experimentation readouts are a fabulous form of currency: they're rooted in objectivity, innately rigorous, and they bring us together around curiosity. And especially when paired with a tight strategic infrastructure like Balanced Innovation, your experimentation artifacts fundamentally shift the internal dialogues of your colleagues. As Adam Grant has alluded to in his writing for PwC's strategy+business (3), positive outcomes with shallow strategic reasoning and process are interpreted as "they were lucky," while those with firmly rooted strategic reasoning and process are interpreted as "that was a smart discovery." Similarly, negative outcomes with shallow reasoning are interpreted as "that was a failure," while those with firmly rooted reasoning are interpreted as "that was a smart way to innovate."

As I mentioned on a marketing innovation panel with Brand Innovators last year (4), as a functional leader I work to create roadshows that put my team in the spotlight and that take on the air of delightful discoveries for other teams and leaders. Some readouts are more reactive, addressing the need to sell through an investment to a specific team or leader, and some are more proactive and of a "come one, come all" ilk that boosts understanding, curiosity and, better yet, ways of working together we might never have considered had my team not come proactively to share our structured learnings. Use your experiments to champion your team, dramatically boost respect for your work, and create an intriguing sideshow - but not one of oddities, one of possibilities.


Citations

  1. McKinsey & Company, Strategy Practice; 'Fear Factor: Overcoming Human Barriers to Innovation'; June 2022
  2. Harvard Business Review / Amy C. Edmondson; 'Strategies for Learning from Failure'; April 2011
  3. PwC strategy+business / Adam Grant; 'Building a Culture of Learning at Work'; https://www.strategy-business.com/article/Building-a-culture-of-learning-at-work
  4. Brand Innovators; Media Buying & Marketing Innovation Summit; July 2022

