TINY MISSES IN PERFORMANCE TESTS THAT CAN HUGELY COST ENTERPRISES
Vijayanathan Naganathan
Tech Co-Founder | Driving QE Innovation for Growth-Stage Companies | Customer Success Leader | IIM Kozhikode Alumnus
If you have been reading our earlier series on performance testing, we are glad to have you back with our knowledge-sharing series.
If you haven't, then please use the links to gain some insight into performance testing.
- One of the key aspects of performance testing is understanding the expectations of the key stakeholders (business, product owners, technical architects, implementation teams, infrastructure support teams) and documenting the known pain points and issues. Discussions with these stakeholders often surface a great deal of historical data about system usage. This should be documented and shared with the stakeholders as part of obtaining overall sign-off. Failing to do so is likely to result in going in circles late in the life cycle.
- The workload model is one of the key components of a performance test strategy. It captures the key operations to be simulated: the expected number of users in the system for each operation, the number of operations performed in a given period based on the processing time per request, the think time within each step of an operation, the request arrival rate, and the duration of the tests. Together these form a realistic, high-accuracy workload model. An inaccurate workload model, or failing to compare results against an accurate one, can skew the performance test results.
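The relationship between concurrent users, think time, and request arrival rate described above can be sketched with Little's Law (throughput = users / (response time + think time)). The operation names and numbers below are illustrative assumptions, not figures from this article:

```python
from dataclasses import dataclass

@dataclass
class Operation:
    name: str
    concurrent_users: int       # expected users performing this operation
    think_time_s: float         # pause between steps for a typical user
    avg_response_time_s: float  # expected processing time per request

def target_throughput_rps(op: Operation) -> float:
    """Little's Law: throughput = users / (response time + think time)."""
    return op.concurrent_users / (op.avg_response_time_s + op.think_time_s)

# Hypothetical workload model entries for illustration only
operations = [
    Operation("search",   concurrent_users=200, think_time_s=8.0,  avg_response_time_s=2.0),
    Operation("checkout", concurrent_users=50,  think_time_s=20.0, avg_response_time_s=5.0),
]

for op in operations:
    print(f"{op.name}: {target_throughput_rps(op):.1f} requests/sec")
```

A quick sanity check like this helps confirm that the user counts, think times, and arrival rates in the workload model are mutually consistent before any scripts are built.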
- Injecting load from the right location plays a key role in the results. When performance tests are carried out, the load injector should run from the same region as the intended deployment and its users. For example, if production will be hosted in the US region and the users will also come from the US, then the load simulation should happen from the US region. Simulating from another region, say India against the US, is likely to produce inaccurate results due to the additional network hops.
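The skew from cross-region injection can be visualised with a minimal sketch. The timings and the 250 ms cross-region round-trip overhead below are assumed figures for illustration, not measurements from this article:

```python
# Hypothetical response times (ms) measured by an India-based load injector
# hitting a US deployment. The extra network hops add a roughly fixed
# round-trip overhead on top of the server's real behaviour.
measured_ms = [480, 510, 495, 530]
cross_region_rtt_ms = 250  # assumed extra round trip vs. in-region injection

# What an in-region injector would roughly have observed
in_region_estimate_ms = [t - cross_region_rtt_ms for t in measured_ms]
print(in_region_estimate_ms)
```

The point is not to correct results after the fact, but to see how large the distortion can be: here more than half of each "response time" is network overhead that in-region users would never experience.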
- Monitoring is itself a key element of performance testing, and defining the right layers and the right parameters to monitor is important. We have often seen performance test teams enable debug-level logging on server layers, which can exhaust disk space. They may have turned it on for a single test to get deeper insight but forgotten to set it back to minimal logging afterwards. Ghost issues like this force infrastructure and technical teams to spend their critical time on problems that are not real.
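One lightweight safeguard against the forgotten-debug-logging problem is a check that fails fast when verbose logging is still enabled after a test run. This is a minimal sketch using Python's standard `logging` module; the logger name and threshold are assumptions for illustration:

```python
import logging

logger = logging.getLogger("app")
logger.setLevel(logging.DEBUG)  # temporarily raised for one deep-dive test

def assert_log_level_restored(log: logging.Logger,
                              max_verbosity: int = logging.INFO) -> None:
    """Raise if the logger was left more verbose than the allowed level."""
    if log.getEffectiveLevel() < max_verbosity:
        raise RuntimeError(
            f"Logger '{log.name}' is still at "
            f"{logging.getLevelName(log.getEffectiveLevel())}; "
            "reset it before the next run to avoid exhausting disk space."
        )

logger.setLevel(logging.INFO)   # reset after the deep-dive test
assert_log_level_restored(logger)
```

Wiring a check like this into the test teardown (or a CI gate) means a leftover debug setting surfaces immediately, instead of days later as a full disk and a ghost incident.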
If your thought process differs from what is mentioned here, share it in the comments below.