Getting Value for Your Customers
Recently, I was talking with an industry veteran and good friend of mine about the state of the software industry. We spoke over beers at a Silicon Valley bar. Software: overvalued? Undervalued? Too much hype or not enough?
He paused for a moment and said, “You know what the problem is, don’t you?”
“What?” I asked, thinking this was going to be a good one.
He took a sip of his beer and said, “The vast majority of enterprise software is under-deployed. That is why stocks are getting crushed.”
“Vast majority?” I said. “What do you mean by that?”
“It is simple,” he said. “Most software is either unused or under-used. Even if a company is getting some value from their software, they are not getting as much as they could.”
I am not so sure about “vast majority”, but it is safe to say “a lot” of companies do not use the software they pay for as well as they could. However, it was the next thing he said that really got my attention.
“With all that unused or under-used software,” he continued, “there is a ton of money – customer value – just left on the table, wasted year after year.” He pointed to a CNBC graphic showing the stock price carnage for software companies. “Right now, the market is saying ‘Software companies can no longer afford to let their customers waste all that value.’ ”
"The vast majority of enterprise software is under-deployed."
I am not sure whether that is the greatest market insight ever or not, but it got me thinking about just how successful most software companies are at what should be their prime directive: Maximize the lifetime value of customers by maximizing the lifetime value those customers get from their solutions.
It is true, software implementation projects often fail to deliver as promised. The topic has been heavily researched: legions of academic and industry researchers have documented the appallingly high failure rate of software deployment projects – failure rates ranging between 30% and 70%, year after year after year. A study by the Standish Group looked at 15 years of projects.[1] Rather than sticking with the binary outcomes of “Successful” or “Failure”, the study refined project results into three categories: Failed, Challenged, and Successful.
They found the following:
Failed Projects = 25%: projects that were cancelled or scrapped during implementation or after completion
Challenged Projects = 50%: completed and operational projects that were over budget, late, and/or delivered fewer features and functions than initially specified
Successful Projects = 25%: projects delivered on time, on budget, and with all specified features
In this and similar studies, researchers point out that this incredibly high failure rate has torched hundreds of billions of dollars spent on software solutions. Fully one-quarter of projects are complete failures, and another half of projects deliver “less than desired”, to put it charitably.
In the SaaS world this leads to churn. An outright program “failure to go live” is an obvious problem and an immediate source of churn. However, under-deployed software is a much broader and more pervasive problem. This is the money left on the table, and this is the mistake that most software companies cannot afford to let their customers make.
PMI Perspective vs. Value Perspective
As noted above, many projects fail. Lots of software never goes live. The research shows that. However, I think that much of this research looks at failure and success in the wrong way. While I do not doubt the high failure rates, I do challenge the standard definition of failure many researchers use. The definition is too stringent and too narrow.
In most cases, these studies employ what is sometimes referred to as the “Iron Triangle” of project success, which focuses on three inter-related and cross-dependent factors: time, budget, and scope.
If the project is late, OR over budget, OR fails to deliver the defined scope, then it is deemed a failure. This assessment focuses solely on how well the project was defined and run.[2] Think of this as the Project Management Institute (PMI) centric perspective: all that matters is how well a project was planned (time, money, scope) and how well the project manager balanced those competing factors. Fail in any one of those dimensions and the project is stamped a failure. Too stringent.
The problem with the “PMI Perspective” is that it ignores the most important question: did the work deliver any tangible business improvements? An implementation program that scores 100% on budget, schedule, and scope but that delivers no business impact should not be labeled a success. That smacks of a “the operation was successful, but the patient died” mindset.
It is vital to take a value- or outcomes-based approach to assessing solution success. Essentially, “Was the view worth the climb?” An implementation may be over budget and/or late, but if it still delivers lasting business outcomes then the solution might be considered successful. Sure, tighten up those project planning and execution skills – those do matter – but at the end of the day, did something get done and was it worth it? Manage the inputs (costs, time, features) but focus on the outcomes.
Another dramatic shortcoming of the PMI Perspective is the timeframe considered. It concentrates on the effort and time leading up to solution go-live. What happens after go-live, and what value may or may not be realized, is immaterial – “not my problem,” from the software vendor’s point of view.
The move to a SaaS world – cloud-based subscription software – reversed all of this and turned it on its head. Go-live is the beginning, not the end. In SaaS, there is no time to step back and admire your work or rest on your laurels. Not only do all the subscribed licenses need to be deployed and adopted, and the business processes improved (which rarely happens during solution implementation), there is also the question of whether the business is able to extend and expand the solution’s value throughout the subscription period.
Is there a program or effort post go-live to take advantage of newly released features and enhancements? In the SaaS world, software solutions do not “freeze” once they go live. They are intended to grow and improve – that is what a big chunk (15%-20%) of the annual subscription fee is supposed to go toward. Continuous improvement is a critical and often ignored component of solution success. Without it, the pile of money the customer leaves on the table starts to grow.
The “Value Perspective” does not change the fact that a lot of solution implementations fail, or at least fail to deliver as much value as they could – the money left on the table. The Value Perspective refocuses attention on where the potential failure points arise and how solution providers can help their customers avoid those failures. The path to success and the means to achieve that prime directive:
Maximize the lifetime value of customers by maximizing the lifetime value those customers get from their solutions
Failure Exits
If you look backward at solution implementation projects, you will identify at least four ways that things can go very wrong. Think of these as Failure Exits from what would otherwise be the road to lasting business value. Specifically, these are the ways the potential business value from the solution is either eradicated or significantly compromised. The first is rooted in the quality and mechanics of project execution, although it could probably be avoided with better definition of the business need and better solution selection. The next three center on what happens after go-live.
Not Implemented: The solution never goes into production
This is unequivocal failure. Somewhere between 10% and 20% of software deployment projects meet this fate – nowhere near the PMI Perspective failure rate of 30%-70% noted above, but still an unforgivably high rate of disaster. Additionally, companies that are in “year three of a one-year implementation” may wish they had pulled the plug at least two years ago, as good money is thrown after bad and the chances of success dwindle.
In most cases of a “cannot fog a mirror” failure, the root cause is one of three broad situations:
Technical Failure: Technical failures include, but are not limited to, incompatibility with existing infrastructure; unacceptable security issues; inability to integrate with other tools; and, in rare cases, a complete failure of the solution to work at all.
Poor Solution Fit: Poor solution fit represents both a somewhat broader and a somewhat more common reason for “failure to launch.” In this case, the explanation is likely to be “the tool works, it just does not work for us.” This might be due to a flawed selection process or even a program that took the wrong approach to solving a legitimate business problem. However, the real reason is likely that the customer’s business and management processes did not align with the “business process best practices” recommended by the solution provider. Customers often describe these failures as “a lack of flexibility in the solution.” Solution providers blame the customer for “trying to pave the cow path.” Either way, the result is failure and a total waste of time, money, and opportunity.
Little or No Program Sponsorship: By far the broadest and most common failure mode is the lack of program ownership and accountability, and the absence of a compelling case for change. The failure excuse in this case is often “Well, it turns out there was NOT a problem worth solving.” However, the seeds of failure can likely be found in poor process and organizational change management, poor communication and mobilization, and, especially, a dramatic underestimate of what it would take to deliver the desired business improvements. As the promised utopia of “go-live” fades in the distance, many organizations run out of energy and quit.
There is one blatantly obvious similarity for all three of these “failure to get to go” scenarios: they all should have been caught well before the subscription was contracted and the project was kicked off. Something went very wrong in the process of problem identification, value definition, solution selection (including and especially reference checks!), implementation design, and alignment and validation across stakeholders. These common errors point to a rushed decision, an underestimate of the work required to realize value, and the failure to establish strong project leadership and ownership.
It may be tempting to scold the customer with a caveat emptor warning, but it is important to recognize that project failure is just as bad for, or probably worse for, the solution provider as it is for the would-be customer. Not only will the software company probably lose money on the deal – account acquisition costs often exceed first-year subscription margin contribution – but now there is a negative reference in the market and a lost stream of future profits. Each of these root causes could have been – and should have been – caught in the evaluation phase, especially any technical gaps or failings. However, often they are not, and providers and customers proceed headlong into projects where they lack the purpose, organization, energy, and accountability to succeed. They launch projects that waste money.
The next three Failure Exits – which easily account for 50%-75% of programs that reach go-live – represent a kind of phased progression of “necessary but not sufficient” achievements on the path to either lasting business value or money left on the table and likely churn/failure.
In a Value Perspective, the focus is on outcomes, not just inputs, and again there are three key considerations. If the PMI Iron Triangle was on time, on budget, and full scope, the Value Iron Triangle is Deployed, Adopted, and Valued. The project must of course go live, but if any of those three steps is missed, the project will careen toward a Failure Exit.
For a project to succeed, and for a customer to renew or even grow their subscription, licenses must be deployed to users; users must engage with and adopt the solution; and business value must be realized and recognized. This “iron triangle” emphasizes the three brutal truths of project success from a Value Perspective:
Undeployed. Unadopted. Unvalued. No matter how you look at it, value is being wasted – money is being left on the table and churn is just around the corner. However, before jumping into how to avoid this and/or how to fix it, I want to explore for a minute how it is that so many software companies fail to see these risks, let alone do anything about them.
UNDEPLOYED SOFTWARE stands out like a flashing red light. “We sold them 1,000 users and only 100 licenses have been assigned. We have a problem.” Most Customer Success organizations track and report on deployment statistics. They are easy to track, easy to understand, and obviously important.
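As a concrete illustration, here is a minimal sketch of that kind of deployment tracking. The account fields, the 1,000-user example, and the 20% alert threshold are purely illustrative assumptions, not anyone’s actual product telemetry:

```python
# Minimal sketch of deployment tracking: share of purchased licenses actually
# assigned to users, with a simple threshold to flag at-risk accounts.
# All names, numbers, and the threshold are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Account:
    name: str
    licenses_sold: int
    licenses_assigned: int

    @property
    def deployment_rate(self) -> float:
        """Fraction of purchased licenses that have been assigned to users."""
        return self.licenses_assigned / self.licenses_sold if self.licenses_sold else 0.0


def flag_underdeployed(accounts: list[Account], threshold: float = 0.2) -> list[Account]:
    """Return accounts whose deployment rate is below the alert threshold."""
    return [a for a in accounts if a.deployment_rate < threshold]


book = [Account("Example Retailer", licenses_sold=1000, licenses_assigned=100)]
for acct in flag_underdeployed(book):
    print(f"{acct.name}: {acct.deployment_rate:.0%} deployed -- churn risk")
```

The metric itself is trivial; the hard part, as the next paragraph shows, is agreeing on why the number is low and who owns fixing it.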
However, teams rarely agree on just what went wrong or who is accountable for the problem. In fact, there are two very common modes – Internal Blame and Customer Blame – each with its own set of excuses and explanations for poor deployment.
UNADOPTED SOFTWARE can be a bit harder to spot, particularly if the failure to adopt is at the business process or software feature level. A user that never logs in is easy to track. A user that uses just a fraction of the full power and capability of the solution is harder to recognize but is still a high risk. The software is being used, just not very well. Diligent Success organizations need to develop meaningful engagement metrics that tie back to relevant business processes and best practices. It is not enough to know that an assigned user logged in and maybe executed some kind of transaction. To understand adoption, the software provider needs to articulate, track, and report on the work that the user is getting done in the solution.
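To make that concrete, here is one way a Success team might sketch an adoption metric that weights business-process milestones instead of raw logins. The event names and weights are hypothetical examples, not a standard; every solution needs its own definition of the “work getting done”:

```python
# Sketch of milestone-weighted adoption scoring. Logins alone carry little
# weight; events that represent real work in the solution carry more.
# Event names and weights are hypothetical and solution-specific.

MILESTONES = {
    "created_sourcing_event": 0.4,
    "awarded_supplier": 0.4,
    "logged_in": 0.2,
}


def adoption_score(user_events: list[str]) -> float:
    """Sum the weights of the distinct milestones a user has reached (1.0 = full adoption)."""
    reached = set(user_events) & MILESTONES.keys()
    return sum(MILESTONES[m] for m in reached)


events_by_user = {
    "alice": ["logged_in", "created_sourcing_event", "awarded_supplier"],
    "bob": ["logged_in", "logged_in"],  # "engaged" on paper, adopting very little
}
for user, events in events_by_user.items():
    print(f"{user}: adoption score {adoption_score(events):.1f}")
```

The design point is simply that the score rewards milestones tied to the business process, so a user who only logs in scores low even though they show up in raw usage counts.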
This brings up another challenge for solution providers. As solutions increasingly leverage artificial intelligence and machine learning, and as automation increases, users spend less time actually using and thinking about the solution. In many cases, from a business value standpoint, this is not a bad thing. The point is to improve the business process and to realize the business value. Streamlining usage and minimizing “clicks and screens” for the user improves user satisfaction. However, it can also drive down at least the perception of user engagement.
Solution engagement often falls along a line that starts at one end with a small number of users working in the solution at high frequency. Think of highly skilled users working in a specialized tool – commodity sourcing specialists, for example. At the other end of the engagement line is the model where a company has a broad number of people using a solution only occasionally, and often for short periods of time. Wherever a solution falls on the engagement line, the provider needs to think deeply about what high-quality adoption and engagement look like, why they are valued, and how to highlight that to the solution’s business owners.
UNVALUED SOFTWARE can be the trickiest of the four Failure Exits. It is easy for a solution provider to rationalize, “Hey, the licenses are deployed and the users are engaged, so they must be getting value. There is no churn risk, right?” Unfortunately, if it is not crystal clear how the solution delivers value and how much value has been realized, there can indeed be churn risk. This is the classic case of a solution provider thinking their solution delivers incredible value and the customer not thinking of the solution at all. It is not enough just to point back to the business case created to sell the software (if one was even created); there need to be defined, customer-agreed metrics that connect operational improvements to tangible business value – success metrics. Without clear, agreed-to metrics that can be easily and objectively tracked, every solution runs the risk of price-based churn (aka downgrade) or rip-and-replace churn as the customer moves to a cheaper and/or more highly valued competitive solution.
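One way to picture those customer-agreed success metrics: each one carries a baseline measured before go-live, a current value, and an agreed dollar value per unit of improvement, so “value realized” becomes a number both sides can track. The metric names, baselines, and dollar figures below are purely illustrative assumptions:

```python
# Sketch of customer-agreed success metrics: baseline, current value, and an
# agreed dollar value per unit of improvement. All figures are illustrative.
from dataclasses import dataclass


@dataclass
class SuccessMetric:
    name: str
    baseline: float               # agreed starting point, measured before go-live
    current: float                # latest observed value
    value_per_unit: float         # customer-agreed dollars per unit of improvement
    improvement_is_decrease: bool = False  # e.g. cycle time should go down

    def realized_value(self) -> float:
        """Dollar value of the improvement versus baseline, never negative."""
        if self.improvement_is_decrease:
            delta = self.baseline - self.current
        else:
            delta = self.current - self.baseline
        return max(delta, 0.0) * self.value_per_unit


metrics = [
    SuccessMetric("invoices processed per FTE per day", baseline=40, current=55,
                  value_per_unit=2_500),
    SuccessMetric("sourcing cycle time (days)", baseline=30, current=21,
                  value_per_unit=10_000, improvement_is_decrease=True),
]
total = sum(m.realized_value() for m in metrics)
print(f"Documented recurring value: ${total:,.0f}")
```

The arithmetic is trivial; the value is in forcing the baseline and the dollars-per-unit to be agreed with the customer up front, so the renewal conversation starts from a shared number.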
How to Avoid Failure Exits
So how do you avoid the Failure Exits? How do you keep your customer from leaving money on the table – and courting the risk of churn – with undeployed, unadopted, and unvalued software?
It all comes back to value. If your customer is getting value – and knows they are getting value – then the risk of churn drops dramatically. Sure, there will still be the occasional “bolt from the blue” churn due to things like acquisitions, new CIOs, or some other dramatic event. However, if your solution sits on a solid foundation of documented, recurring value, you will be protected from most competitive incursions and many of those “bolt from the blue” risks.
To get this right, the customer engagement process – or customer lifecycle – must be built on value to the customer. This must happen from the first sales meeting to the 100th renewal. It can be tricky, particularly with highly technical solutions. It is easy to deep-dive into technical aspects and “differentiating features” and never take the time to step back and ask, or answer, the question: who cares? Who cares if the solution can do all these things? What gets better, from an operational or business-outcomes point of view, that your customer actually cares about?
Successful renewals and expansions are predicated on selling and delivering real value. This is not an area where a solution provider can “think they see the value” or where a customer can develop a secret business case that is not shared with the solution provider. Best practice demands that the customer and solution provider clearly define the problem, the value of solving that problem, and, most importantly, what has to happen to achieve that benefit.
Software companies too often over-promise time-to-value and implementation speed, as if the benefits will “auto-magically” materialize at go-live. That is rarely true. World-class solution providers encourage – sometimes even force – potential customers to define success metrics, baseline performance, and target goals. These hard-fought metrics are worth their weight in gold throughout the customer lifecycle.
How to Get Back on Track
Sometimes, it can be much harder to get a program back on track than to get it right from the start. If you are seeing slow deployment, low adoption, and little or no demonstration of business value, you need to act quickly to avoid churn. However, if you are seeing those symptoms, you probably have some additional – perhaps bigger – problems with executive sponsorship, commitment, and organizational motivation. You do need to steer away from the typical Failure Exits, but you may have some deeper work to do to rebuild a solid program designed for success and built for value.
The first thing to do is figure out exactly what is blocking success. You need to ask yourself and your customer, “We know what needs to happen, so why isn’t it happening? What is stopping us?” The answer can be something small and relatively minor, or something big – an existential threat to the viability of your solution’s program. No matter what the problem is, you need to reestablish the bedrock principles of a successful program: a clearly defined problem, an agreed outcome, and measurable goals owned jointly with the customer.
Below are two short examples of problems I have helped teams address – one small and one large. There are a million permutations on these examples, but hopefully these two will be illustrative.
Small Blockers
A Success team struggled to drive deployment and adoption of an infrastructure monitoring solution across a national retail company with a highly distributed environment. Deployment was stuck at less than 50% and there was little evidence of adoption. The company’s leadership raised concerns and told the solution provider to “fix the deployment of your software” or they would not renew the contract, which expired in just under six months.
The Success team worked directly with the internal IT team on the implementation and indirectly with business owners. The IT team felt they were doing all they could, and the business leaders said they had been promised that “the tool was so easy to use that deployment took hours or maybe a few days” – not the months that had been spent getting just a fraction of the expected thousands of users engaged with the solution.
The Success team did a little digging into the problems plaguing their work and developed a few ideas about what was needed. However, they also recognized that creating an action plan with no customer involvement was a recipe for continued failure. They needed both the customer’s input and the customer’s “fingerprints” – ownership – on the program. They formed a joint working team with participants from IT, the business, and the software provider.
The first thing they did was define the problem to be addressed and the desired outcome. In this case it was pretty straightforward: few people used the tool and no one used it well. The chance that the company was realizing value from its investment was zero. The team started to put together a plan of action.
They also defined a set of metric-based goals and a timeline to achieve them. The joint team committed to bold deployment and adoption targets for the next six weeks.
As it turned out, the issues were relatively minor and centered on communication, awareness, and enablement. To address gaps in communication and awareness, the joint team crafted a series of communications from management emphasizing to targeted users the strategic value of the program and the importance of each user’s participation. The team created, shared, and tracked weekly deployment schedules. Importantly, when target users slipped through the cracks, the team followed up with the user and the user’s team lead.
The enablement issues were also fairly easy to overcome. The IT team had insisted on pushing license entitlement through an internal tool. As it turned out, this tool was slow and error-prone. In many cases, users were not boycotting the program; they just did not have the access to the solution that they needed.
The joint team also identified training as a barrier to adoption and engagement. To address this blocker, the team orchestrated a series of formal training classes. However, early response to the training was weak at best: users did not show up and training did not happen. The team then took a lighter approach to enablement by structuring office hours to support users. They reached out to users who showed little or no adoption and offered support. In reality, the tool was easy to use and, rather than hours of formal training, users just needed a bit of hand-holding to nudge them into action.
After just a few weeks, the program picked up a rhythm of success and the metrics showed the joint goals had been achieved and even exceeded.
Large Blockers
The Gentle Acres (not the real name) implementation project had not gone well. There had been technical problems, software bugs, and delays in data migration. The project was off schedule. The project team staggered through solution go-live – the starting line, not the finish line – with little energy and no momentum. Worst of all, just a few months after the solution went into production, the executive team had lost faith that the solution would work for their business. Users complained about the tool and there was little evidence of the promised productivity gains and purchase cost savings.
Executives from Gentle Acres demanded a meeting with the solution provider’s executive team. The software company expected that Gentle Acres wanted to cancel their multi-year subscription contract. The CIO spoke first, launching into a litany of the technical problems the implementation team had wrestled with and the list of bugs they had discovered – some of which were still unresolved. Next up was the CFO. She reminded the vendor how much they were paying for the solution and reiterated that they had not yet gotten any benefit. In fact, she pointed out, the solution had cost them money: they tied the tool to an uptick in employee turnover and to the need to increase headcount to work through the implementation.
The last person from Gentle Acres to speak was the business champion, William (not his real name). William talked about how they had trusted the software company and how he had laid his career on the line to deliver the promised benefits. He said he wanted to believe that the project could succeed, but he saw little evidence to support that belief. His frustration, pain, and feeling of outright betrayal were crushing.
The Chief Customer Officer from the software company spoke next. He kept it simple – a full hara-kiri, fall-on-his-sword admission of guilt. He said, “You are right. We fucked up. We did a terrible job. But we are going to fix it, and our VP of Success is going to tell you how.”
The VP of Success had only been with the company for a little over a month. However, he had heard about the problems at Gentle Acres – from the CCO, from the CEO, and directly from William. Most importantly, he had had a chance to sit down with the Customer Success Manager for Gentle Acres. She had a good sense of what was needed to get things back on track. In her view, Gentle Acres needed an accelerated reimplementation – not the technical parts of the implementation, which were finally stable, but the business process and solution configuration elements. Given the technical and bug issues of the original implementation, Gentle Acres had strayed a long way from the solution’s recommended best practices.
To make a long story short, the joint team did the things you always need to do to solve a problem – big or small. They defined the problem. While it seemed pretty clear – “we hate your software” – it was worth writing down in terms that were measurable and addressable. They then defined the outcome Gentle Acres wanted. Again, this was straightforward. Work was supposed to be easier, not harder, with the solution; the team figured out how to measure that. Costs were supposed to go down, not up, with the solution; the team figured out how to measure that too. Then they set goals and agreed to a plan.
The effort was not a walk in the park, but once the joint team had a clear definition of the problem and the resolution, things moved forward. Early wins put a little wind in their sails and they started to cultivate champions and enthusiasts among the user groups. The joint team reported to the executive team on a regular basis – every week for the first six weeks and then monthly for the rest of the year. At first, the meetings were rocky as the team handled tough questions from an admittedly skeptical executive team. However, as results started to materialize, skepticism turned to hope, and hope turned to excitement about the opportunity to extend and expand the value Gentle Acres got from the solution.
There were two questions at the start of this article. I think we can answer them now:
How much money are your customers leaving on the table?
Way too much, and it shows up in unused and under-used software. If your customer is not continuously improving their business with your solution, then your solution is continuously sliding toward churn.
Can your business afford to let them keep doing that?
No! Either your customer is going to leave you, or a competitor is going to take them. Either way, you are left with churn.
To get the most out of your customers, you need to help them get the most out of your solution. Joint plans, defined metrics, accountability. The work needed is not a mystery. It can be hard if your solution lacks a business owner, or if you failed to create a story of value around your solution that is rooted in operational improvements and business value. But the secret is that there is no secret: find a problem worth solving, implement a solution that works, and partner with your customer to define, exceed, and renew their value goals.
You can subscribe to this and other articles on Substack.
Rob Foster is a mentor and advisor to executives across the high tech industry. He helps companies achieve unfair advantage by developing and executing innovative strategies which in turn dramatically increase enterprise valuations. He is the founder and CEO of R Foster & Associates, LLC and the Co-Founder and Chief Operating Officer of The Silicon Valley Laboratory.
[1] Dalcher, Darren. “Experiences and Advances in Software Quality | Software Project Success: Moving Beyond Failure”, UPGRADE, Vol. X, No. 5, October 2009.
[2] The Standish Group study (mentioned above) addressed this in part by creating the “Challenged” category, but that did not go far enough.