The Sameness Of The Same
Diwakar Menon
Helping organizations build responsible AI practices, navigate emerging regulations & build trustworthy AI solutions, mitigating bias & ensuring fairness
We recently interacted with two customers. One was a product company with a set of legacy products and insufficient product knowledge across its teams, which was nonetheless continuing to deliver product releases. The teams had to deal with regular releases every three months, and the perception was that the lack of adequate testing was causing issues to surface post-release. We were called in to assess the test function and provide a roadmap for improvement.
The other was a well-established SI that was rolling out a complex system to its customers. The delivery had teetered on the edge of go-live for a fairly long period, with the acceptance tests refusing to get signed off. We were called in to help the teams define the strategy and approach to testing for subsequent rollouts of the same application, so that issues which surfaced in the user tests could be identified and fixed earlier in the cycle.
In the first case, although there wasn't a formal product acceptance phase, I am sure that if there had been one, they too would have been teetering on the brink of "nearly complete" for a long time, as in the second case.
That, however, isn't the topic of this blog. While the size, scale, and magnitude of resources available to be thrown at the problem were very different in each case, what surprised us was the similarity in the underlying issues. Here are some…
Poor Estimation: In both cases, there was sufficient rigour around requirements definition, but over-ambitious estimation, or an under-estimation of the complexity and the effort required. This again stemmed from the fact that most stakeholders were not involved in the initial estimates. In both cases, the QA teams were either never consulted, or only lip service was paid to their opinions on the requirements.
Structurally weak development processes: Again, in both scenarios, the teams would do well in the initial stages of getting the requirements right; then, as development unfolded and the hidden complexity came to the fore, the shortcuts would begin. The most impacted was early QA, with less time being dedicated to end-to-end QA.
Lack of depth in QA coverage: In both cases, we found a lack of an organised test approach that focused testing on the most important parts of the system first, before moving on to the others. There was also a lack of cohesive decision-making: evaluating outcomes and redefining the development and QA strategy accordingly.
Lack of anticipation of how users would use the products or their features: All of the testing was based on how the testers (who came from a technical background) viewed the product. All of the scenarios that failed later centred on how real users actually used the applications, bringing home the yawning gap between the internal approach to testing and the users' own.
Lack of an agreed "we are ready to release" statement: In the case of the SI, much of what was required to be released was also cast in iron-clad contracts. Unfortunately, not many people had an idea of the nature of the contract or of the "changed" requirements that had slipped in through scope creep, and the churn in the team didn't help matters.
In the product company's case, it was also a matter of whether the support teams had been trained on the newer functionality so that they could support their customers' migration to the next version of the product.
In both cases, we have put together a roadmap that has the teams' buy-in, and they are now in the process of transformation.
So, how “same” are your projects?
Why not write to us at info AT lastmileconsultants.com if you would like an assessment?
This article first appeared here.