What The Heck is... A Business Experiment?
Bernard Marr
Internationally Best-selling Author | Keynote Speaker | Futurist | Business, Tech & Strategy Advisor
Everything useful that human beings have done has been based on some degree of experimentation. From the wheel to penicillin, from the airplane to the iPhone, ideas have been conceived, tested, improved after failure and retested; through this process we arrive at success. But what exactly do we mean when we talk about a Business Experiment?
In essence, a Business Experiment applies research tools and techniques to test different assumptions (or hypotheses), subjecting them to measurement, validation and analysis so that we can reach an evidence-based conclusion. If the conclusion is positive, move forward with the product or service. If not, abandon it, or restructure and retest.
Thomas Davenport made a particularly useful contribution to our understanding of creating “smart” Business Experiments in his Harvard Business Review article, in which he observed that the process should always begin with a shared understanding of what constitutes a valid test: ‘Too many business innovations are launched on a wing and a prayer – despite the fact that it’s now reasonable to expect truly valid tests.’
A Testable Hypothesis
As Davenport rightly stresses, the process must begin with the creation of a testable hypothesis: it should be possible to pass or fail the test against the measured goals of the hypothesis. For example, the hypothesis might be that “rolling out a new customer service training program to front-line staff will increase customer satisfaction and profit margins.” This is a reasonable assumption, and it may well fit the strategy embedded in your Strategy Map and Balanced Scorecard, but whether it is true has to be tested, not just assumed.
Design the Experiment
With the hypothesis agreed, the next step is to design the test itself: that is, to design the experiment. This means identifying the sites or units to be tested, selecting treatment groups and control groups (an approach sometimes called A/B testing), and defining the test and control situations. So, with regard to the customer service training example, do not train all staff at once: rather, train some (the treatment group) and not others (the control groups, which provide the baseline measure: what results do we get if we do nothing?).
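To make the selection step concrete, here is a minimal sketch (in Python, using hypothetical store names) of randomly assigning experimental units to treatment and control groups. Random assignment is what makes the control group a fair baseline; the function name and store list are illustrative assumptions, not anything from the article.

```python
import random

def assign_groups(units, treatment_fraction=0.5, seed=42):
    """Randomly split experimental units (e.g. stores or staff
    teams) into a treatment group and a control group."""
    rng = random.Random(seed)       # fixed seed for reproducibility
    shuffled = list(units)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * treatment_fraction)
    return shuffled[:cut], shuffled[cut:]

# hypothetical sites for the customer service training experiment
stores = [f"store_{i}" for i in range(10)]
treatment, control = assign_groups(stores)
```

Because the split is random rather than hand-picked, differences that later appear between the groups can more credibly be attributed to the training itself.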
Data Analysis and Action
Once the experiment has been completed, the data is analyzed to determine the results and the appropriate actions. For the customer service training example, the focus is on the difference between the performance of the treatment and control groups: did what we expected (increased customer satisfaction and profit margins) actually take place within the treatment groups? What happened that was a surprise, and what does that mean for the fuller customer service training rollout, for the need to improve other business processes or sub-processes, or perhaps for the need for other, complementary experiments?
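As one illustration of that treatment-versus-control comparison, the sketch below computes Welch's t-statistic on hypothetical satisfaction scores. Both the scores and the choice of test are assumptions for illustration only; they are not data or methods from the article.

```python
from statistics import mean, stdev
from math import sqrt

def welch_t(treatment_scores, control_scores):
    """Welch's t-statistic for the difference in mean outcomes
    (e.g. customer satisfaction) between two groups."""
    n1, n2 = len(treatment_scores), len(control_scores)
    m1, m2 = mean(treatment_scores), mean(control_scores)
    # standard error of the difference in means (unequal variances)
    se = sqrt(stdev(treatment_scores) ** 2 / n1 +
              stdev(control_scores) ** 2 / n2)
    return (m1 - m2) / se

# hypothetical satisfaction scores (1-10 scale) after the rollout
treated = [8.1, 7.9, 8.4, 7.6, 8.8, 8.2, 7.7, 8.5]
untrained = [7.2, 7.5, 6.9, 7.8, 7.1, 7.4, 7.0, 7.6]
t = welch_t(treated, untrained)
```

A large t-statistic (roughly above 2 for samples of this size) suggests the gap between the groups is unlikely to be chance alone; a small one suggests the training made no measurable difference.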
CKE Restaurant Case Example
Let's look at an example. This one comes from the US-headquartered CKE Restaurants, which includes major US brands such as the Hardee’s and Carl’s Jr. quick-service restaurant chains. It applies Business Experiments during the introduction of new products.
Testing begins with brainstorming, in which several cross-functional groups develop a variety of new product ideas. Only some of them make it past the next phase, judgmental screening, during which a group of marketing, product development and operations people evaluate ideas based on experience and intuition.
Those that make the cut are developed and then tested in stores, with well-defined measures and treatment and control groups. At that point, executives decide whether to roll the product out system-wide, modify it for retesting or kill the idea altogether. CKE has attained an enviable hit rate in new product introductions – about one in four new products succeeds, compared with one in 50 or 60 for consumer products generally – and executives say their rigorous testing process is part of the reason why.
eBay Case Example
As a further example, at eBay, there is an overarching process for making website changes, and randomized testing is a key component. Like other online businesses (such as Google or Yahoo) eBay benefits greatly from the fact that it is relatively easy to perform randomized tests of website variations. Its managers have conducted thousands of experiments with different aspects of its website, and because the site garners over a billion page views per day, they are able to conduct multiple experiments concurrently and not run out of treatment and control groups. Simple A/B experiments (comparing two versions of a website) can be structured within a few days, and they typically last at least a week so that they cover full auction periods for selected items. Larger, multivariate experiments may run for more than a month.
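A simple A/B comparison of the kind eBay runs can be evaluated with a two-proportion z-test. The sketch below uses invented visitor and conversion counts purely to show the arithmetic; it is not eBay's actual tooling or data.

```python
from math import sqrt, erf

def ab_z_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test for a simple A/B experiment:
    did version B's conversion rate differ from version A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# hypothetical: 10,000 visitors per variant, counting conversions
z, p_value = ab_z_test(conv_a=520, n_a=10_000, conv_b=600, n_b=10_000)
```

With traffic at eBay's scale, even small differences in conversion rate reach statistical significance quickly, which is what makes running many concurrent experiments practical.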
While broadly following the classic, simple Business Experiment process of hypothesis, analysis and learning, the company has also built its own application, called the eBay Experimentation Platform, to lead testers through the process and keep track of what is being tested, when, and on which pages.
As with CKE’s new product introductions, however, this online testing is only part of the overall change process for eBay’s website. Extensive offline testing also takes place, including lab studies, home visits, participatory design sessions, focus groups and tradeoff analysis of website features – all with customers.
The company also conducts quantitative visual-design research and eye-tracking studies as well as diary studies to see how users feel about potential changes. No significant change to the website is made without extensive study and testing. This meticulous process is clearly one reason why eBay is able to introduce most changes with no backlash from its potentially fractious seller community. The online retailer now averages more than 113 million items for sale in more than 50,000 categories at any given time.
Not Proving – But Testing
Given the wealth of data analytics tools now at our disposal, and our increasing connectivity with customers and other stakeholder groups, running Business Experiments is a relatively straightforward way to test proposed new products or product and service enhancements, and it should be a key element of an overall approach to testing, analysis and improvement. But note: the idea is to test an assumption, not to prove it (an easy trap to fall into). Organizations that routinely punish failure would do well to heed the time-proven fact that getting things wrong is usually a vital stepping stone to getting things right.
Please let me know your thoughts on this. Have you got any good examples of business experiments and testing? Share your views in the comments below...
-------------------
I really appreciate that you are reading my post. Here, at LinkedIn, I regularly write about management and technology issues and trends. If you would like to read my regular posts then please click 'Follow' (at the top of the page) and send me a LinkedIn invite. And, of course, feel free to also connect via Twitter, Facebook and The Advanced Performance Institute.
Here are some other recent posts from my buzzword busting 'What the Heck...' series:
- What the Heck is… Big Data?
- What the Heck is... The Internet of Things?
- What the Heck is... The Cloud?
- What the Heck are... Infographics? And Why You Should Use Them!
- What the Heck is a... KPI?
- What the Heck is... Gamification?
- What the Heck is... Analytics?
About: Bernard Marr is a globally recognized expert in strategy, performance management, analytics, KPIs and big data. He helps companies and executive teams manage, measure and improve performance.
Bernard Marr's book 'The Intelligent Company' outlines how companies and managers can become more intelligent by applying the scientific model to the way they are managing their business, which includes business experiments.
You can read a free sample chapter here.
Photo: Shutterstock.com
Comments

Latino in Tech | First Customer Success hire for Workstream (5y): How have others tracked these efforts? Google Sheets?

Human Resource and Operations Leader with a Technology Focus, Business Analyst & Project Manager in Global Talent Acquisition, Eli Lilly (10y): From a business point of view, a "failed" experiment is typically viewed as a bad thing and could negatively impact a person's performance rating. However, from a scientific view, a "failed" experiment provides just as much data as a "successful" one. Moreover, there are far more failed experiments than successful ones, and the key is to learn from their outcomes. Companies that are willing to experiment, take risks, even create disruption, will go far.

Marketing Consumer Products (10y): Many big companies are afraid to move fast! They lose their edge because they don't.

Director, Compensation and Benefits at Spire (10y): Business experiments such as A/B testing are great, but I am constantly amazed by how few companies pair them with rigorous statistical analysis. You can save a lot of money by cutting off the losing side of an experiment as soon as you have statistically significant data demonstrating which version is superior (alternative Google AdWords text, for example). Often, the data is conclusive at 95% or even 99% confidence long before it feels conclusive on a gut level. Guts waste bucks!

More brands should consider this approach. In my humble experience, I find they spend too much time talking and less time doing.