Global performance testing - part 1
And so it begins...
It starts with one meeting: the management of the company you work for has a big announcement to make, and everyone seems hyped about it. Sales have been good recently, and the performance of your web and mobile applications is at its peak. The news must be big, since everyone is invited. The CEO shares the results for the last year, reminds you of the company mission and vision, and then you hear the news: WE'RE GOING GLOBAL!*
*Global, but like a band's world tour.
Changes, changes, changes
Developers have started integrating with payment channels to cover different payment methods, cloud architects have already begun scaling up the integration environments to accommodate new countries, there is a new i18n engine, and the database tables (or NoTables for NoSQL) have grown in number. The key architectural decision was to scale up the existing application: a single shared database and the same API endpoints for every country, with no plans to develop country-specific applications. Soon the team will find out that certain regions require region-specific actions and pop-ups, and those will be added too. You already know that the tests you've been running for the last few years have just become obsolete: they cover only a single country, and the complexity of your tests has to grow.
Brace yourself
A drastic change like this requires a great effort from testers and, if done right, will also bring huge benefits to your tests. It's a great opportunity to re-assess the test strategy and tooling, because you'll no longer be relying on the old codebase you either prepared a few years ago, inherited from some other team, or simply copied over from automation tests and renamed as performance tests because now they run on Performance Tool X. I have been in situations where I had to rewrite all the performance tests from scratch between releases, and it was the best learning exercise. You're given an opportunity to improve the tests you currently have - only now you have more experience with your application and most likely some ideas on how to improve them. It's important to revisit all the planning stages to make sure you cover all the right points with your tests.
Non-functional requirements
You'll soon find out that the Excel sheet holding your NFRs suddenly takes longer to open. There are new tabs for each country, and the peak loads occur at different times for each country due to time-zone differences. The volumes have also gone up... a lot! The marketing team is already preparing the markets for your company's arrival, and the plans are very ambitious. It's important not to aggregate the NFRs too soon: keep the details about the services used per country and the peak volumes expected. Based on the NFRs you'll be able to create proper test scenarios and the right reporting for business stakeholders.
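To make the "don't aggregate too soon" point concrete, here is a minimal Python sketch of keeping NFRs as per-country records and deriving an hour-by-hour load target from them. All country codes, volumes, peak hours, and service names below are made up for illustration; the crude "half load off-peak" assumption is mine, not a rule:

```python
# A minimal sketch: per-country NFRs as structured data instead of one
# aggregated total. Every value here is hypothetical.
from dataclasses import dataclass

@dataclass
class CountryNFR:
    country: str
    peak_hour_utc: int   # hour of day when the local peak hits, in UTC
    peak_rps: float      # peak requests per second expected for this country
    services: tuple      # services this country actually uses

NFRS = [
    CountryNFR("PL", peak_hour_utc=18, peak_rps=120.0, services=("checkout", "search")),
    CountryNFR("DE", peak_hour_utc=19, peak_rps=300.0, services=("checkout", "search", "invoicing")),
    CountryNFR("JP", peak_hour_utc=10, peak_rps=250.0, services=("checkout", "search")),
]

def combined_rps(hour_utc: int) -> float:
    """Rough combined load target for a given UTC hour.

    Each country contributes its full peak only during its own peak hour
    and (a deliberately crude assumption) half of it otherwise."""
    return sum(
        nfr.peak_rps if nfr.peak_hour_utc == hour_utc else nfr.peak_rps / 2
        for nfr in NFRS
    )

# The hour-by-hour profile shows what a single aggregated number hides:
# the peaks stagger across time zones instead of stacking into one spike.
for hour in range(24):
    print(f"{hour:02d}:00 UTC -> target {combined_rps(hour):.0f} rps")
```

A structure like this also makes per-country reporting for business stakeholders trivial, since the scenario definition and the report share the same source of truth.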
Test Data
This is the part where you can extend your #observability from the test perspective. I often see test frameworks limit their test data to a synthetic copy of an average account that already exists on some production environment; less often, I see testers use accounts from a golden copy of the production environment with scrambled data. In all cases, however, the test data extracted was the bare minimum needed to generate load against the application. Instead of limiting yourself to only a username and password, you can store various pieces of information about the account you're using, such as how many orders the user has already placed or how many products they have in the shopping cart. You can use this data in your reports, as described in my previous article. Of course, you need to make sure this data stays up to date with every test you execute, which could also mean restricting the use of these accounts in your integration environment to performance tests only.
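As an illustration, here is a minimal Python sketch of such a richer test-data record and of persisting its state after a run. The field names, file name, and account values are hypothetical:

```python
# A minimal sketch: a test-data record that carries account state the
# tests themselves mutate, not just credentials. All names are made up.
import json
from dataclasses import dataclass, asdict

@dataclass
class TestAccount:
    username: str
    password: str
    country: str
    orders_placed: int   # how many orders this account already has
    cart_items: int      # products currently in the shopping cart

def load_accounts(path: str) -> list[TestAccount]:
    with open(path) as f:
        return [TestAccount(**row) for row in json.load(f)]

def save_accounts(path: str, accounts: list[TestAccount]) -> None:
    # Persist the state after a test run so the next run starts from
    # accurate data; this is what "keeping the data up to date" means
    # in practice.
    with open(path, "w") as f:
        json.dump([asdict(a) for a in accounts], f, indent=2)

# Example: a checkout scenario placed one order and emptied the cart,
# so the record is updated before being written back.
accounts = [TestAccount("perf_user_001", "****", "PL", orders_placed=42, cart_items=3)]
accounts[0].orders_placed += 1
accounts[0].cart_items = 0
save_accounts("accounts.json", accounts)
```

The same attributes can then be attached to your test results, so a slow checkout can immediately be correlated with, say, accounts that have large order histories.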
Synthetic vs Organic
This subject is important to performance engineering. In the projects I've worked on, the test data was at some point just a prerequisite for executing a load test, and in most cases the users were simply clones of an "average" user representation. On many occasions I've learned that test data can also be a useful performance testing tool, and if you utilize it correctly, you should put it on your CV alongside JMeter, LoadRunner, NeoLoad, etc. The more accurate your test data is, the less time you'll spend troubleshooting data-driven production problems. You already know how to find the metrics impacting the performance of your application from here - it's a good start to make sure your test data is a true representation of what's going to run on production. Even if you're required to generate new synthetic data, make sure to keep the distribution of these metrics for the new accounts; otherwise you can only guarantee the performance of the average user and will spend the rest of the release troubleshooting problems for customers with non-standard configurations.
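Here is a minimal Python sketch of what "keeping the distribution" can look like in practice: sampling a performance-relevant metric for new synthetic accounts from a production histogram instead of cloning one average value. The histogram buckets and shares are invented for illustration:

```python
# A minimal sketch: generate synthetic accounts whose order counts follow
# the production distribution rather than one "average" value.
import random

# Hypothetical empirical distribution measured on production:
# (min_orders, max_orders) bucket -> share of accounts in that bucket.
ORDERS_HISTOGRAM = {
    (0, 5): 0.60,
    (6, 20): 0.25,
    (21, 100): 0.12,
    (101, 1000): 0.03,   # the non-standard customers that usually hurt
}

def sample_orders_placed() -> int:
    buckets = list(ORDERS_HISTOGRAM.keys())
    weights = list(ORDERS_HISTOGRAM.values())
    lo, hi = random.choices(buckets, weights=weights, k=1)[0]
    return random.randint(lo, hi)

# 1,000 synthetic accounts shaped like production, tail included,
# instead of 1,000 clones of the average user.
synthetic = [
    {"username": f"perf_user_{i:04d}", "orders_placed": sample_orders_placed()}
    for i in range(1000)
]
print(max(a["orders_placed"] for a in synthetic))  # the long tail is represented
```

Because the tail is present in the test population, the load test exercises the heavy accounts that the "average user" approach silently skips.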