Keeping Support in Work, Part 2
Tim Wolfe-Barry
Obsessed by Customer Success - Building better outcomes with Caffeine, Advocacy and Customer Centricity
Last time I posted on this topic, I was expecting to upset various friends and acquaintances who work as software developers – my theme then was how much work for Support comes from minor and preventable errors when something is first designed.
This time it’s the turn of QA, Project, and Delivery teams, and my over-arching theme is testing.
Let me start by saying that I *know* testing is hard. I have been a QA engineer in the past, and I’ve also been a project trouble-shooter, so I understand that we’re all working together even if it sometimes doesn’t seem that way!
That said, there are some things that anyone testing either a product or a deployment can do to help out. Too often these things slip past or get dropped due to time and budget constraints. Unfortunately, this tends to rebound later when the customer starts escalating in Support, damaging their overall satisfaction with the product and/or service and, in many cases, their confidence in the solution and the team delivering it.
So here are my top-3 requests to QA and project teams for testing:
Have Valid Use Cases
Sounds obvious, but make sure, in QA, that what you’re testing is how the CUSTOMER expects to use the product, not just what the developers expected! I once found over 20 Severity-1 bugs in 90 minutes while testing a new UI, simply because my starting assumption about how it should work differed from the developers’.
If you keep an open mind and verify the use cases with existing customers and customer-facing teams before you start to invest in QA, you’ll be far more likely to release something that works the way the customers want.
If you don’t do this, then your Support teams will spend all day saying “Don’t use it like that, use it this way, and no I don’t understand why…”
Do Load Testing, Properly
Probably the hardest thing to do in either QA or delivery, but absolutely essential. If you’re building an Enterprise-class solution with 10,000 users, then testing with 10, or even 100, doesn’t cut it.
- QA teams should have representatively sized data sets and the tools to simulate expected usage
- Project teams need to test against a copy of the live data and, ideally, organize a test with a significant number of real users to both load the system and validate your test-cases
This is the only way to avoid unexpected performance impact when you go live…
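To make this concrete, here’s a minimal sketch of what simulating expected usage might look like. I’m using Locust (a Python load-testing tool) purely as an example; the endpoints, task weights, and credentials below are hypothetical placeholders that you’d replace with the workflows your verified use cases describe:

```python
# Minimal load-test sketch using Locust (https://locust.io) - an illustrative
# choice, not a prescription. All endpoints and credentials are hypothetical.
from locust import HttpUser, task, between


class TypicalUser(HttpUser):
    # Simulated users pause 1-5 seconds between actions, roughly
    # mimicking real interaction pacing rather than hammering the API.
    wait_time = between(1, 5)

    def on_start(self):
        # Each simulated user logs in once, as a real user would.
        self.client.post("/login", json={"username": "test", "password": "secret"})

    @task(3)
    def view_dashboard(self):
        # Weighted 3x: assumed to be the most common action in this usage profile.
        self.client.get("/dashboard")

    @task(1)
    def run_report(self):
        # Heavier, less frequent operation - often where load problems hide.
        self.client.get("/reports/monthly")
```

The point is the scale: run it with something like `locust -f loadtest.py --headless --users 10000 --spawn-rate 50 --host https://staging.example.com` so the simulated population actually matches the user count you expect in production, not a token handful.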
Create Regression Test Plans
At some point in the future, your customer is going to need to update the system, whether that’s a whole-system refresh or just applying a Service Pack. When they do, there WILL be unexpected impacts. The only way to identify them, and to verify that they are new issues rather than something that was there all along, is to have a standardized regression plan.
Every time you deploy something you’re going to test the new functionality, but if you have a regression plan then you can use it to ensure that there’s no unexpected impact in other areas.
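As an illustration, a regression plan doesn’t have to be elaborate. Even a small automated suite that exercises the core workflows, run unchanged before and after every update, will flag new impacts and show whether an issue pre-dates the change. Here’s a minimal sketch in pytest; the base URL, endpoints, and expected values are all hypothetical stand-ins for your product’s real workflows:

```python
# Minimal regression-suite sketch in pytest. Everything here (URLs,
# credentials, response shapes) is a hypothetical example; the value
# comes from running the SAME checks before and after every update,
# so a failure clearly marks a new issue rather than a latent one.
import pytest
import requests

BASE_URL = "https://staging.example.com"  # assumed test environment


@pytest.fixture(scope="session")
def session():
    # Log in once so each workflow runs as an authenticated user would.
    s = requests.Session()
    s.post(f"{BASE_URL}/login", json={"username": "test", "password": "secret"})
    return s


def test_search_returns_results(session):
    # Core workflow 1: search still responds and returns data.
    resp = session.get(f"{BASE_URL}/search", params={"q": "widget"})
    assert resp.status_code == 200
    assert len(resp.json()["results"]) > 0


def test_monthly_report_generates(session):
    # Core workflow 2: report generation still completes successfully.
    resp = session.get(f"{BASE_URL}/reports/monthly")
    assert resp.status_code == 200
```

Keep the suite versioned alongside the deployment, so that a Service Pack six months from now runs exactly the same checks as the original go-live did.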
Follow these 3 rules in testing and you will probably eliminate half of the issues that occur after go-live, improving your customer’s temper and allowing your Support guys to focus on the genuinely unexpected problems…
Next time – the things that Support teams inflict on themselves!