Diving deeper into performance testing

We live in a world driven by speed and performance. If an application or website cannot perform an action almost immediately, we get frustrated that it is too slow and not worth the effort, regardless of how functional or practical it may be. Furthermore, we expect our software to be available 24/7, with downtime practically unacceptable. It therefore makes sense not only to build highly performant software but, importantly, to know how to test for it correctly.

So, in this article, I hope to provide you with a bit more insight into what specifically to test for, to help ensure your software meets its performance expectations, and to enable you to identify the right performance bottlenecks so that you can address those constraints. I have previously written on the topic of designing high-performance software and included some aspects of testing too. In this article, though, I will go into further detail on the testing side and cover the different aspects of the software that you need to focus on.

One of the key parts of performance testing is the tools involved. There are many to choose from, but they all typically work and test around the same principles. As a result, I won't analyse which tools to choose, but rather provide a good foundation of what you need to look for to test a system's performance adequately.

Firstly, we'll look at the different types of performance testing. While we commonly use the phrase performance testing to refer to the performance of our software, it's actually the culmination of a variety of testing techniques, which all look at different angles of software performance and reliability and help to identify different performance bottlenecks. (A minimal load-test sketch follows the list below.)

Load testing - checks the application's ability to perform under anticipated user loads. The objective is to identify performance bottlenecks before the software application goes live.

Stress testing - involves testing an application under extreme workloads to see how it handles high traffic or data processing. The objective is to identify the breaking point of an application.

Endurance testing - is done to make sure the software can handle the expected load over a long period of time.

Spike testing - tests the software's reaction to sudden large spikes in the load generated by users.

Volume testing - under volume testing, a large volume of data is populated in a database and the overall software system's behaviour is monitored. The objective is to check the software application's performance under varying database volumes.

Scalability testing - the objective of scalability testing is to determine the software application's effectiveness in "scaling up" to support an increase in user load. It helps you plan capacity additions to your software system.
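
To make the load-testing category concrete, here is a minimal sketch using the open-source Locust tool, assuming a hypothetical web shop with /products and /cart endpoints; the endpoints, task weights and user counts are illustrative assumptions, not prescriptions.

```python
from locust import HttpUser, task, between

class ShopUser(HttpUser):
    # Simulated users pause 1-3 seconds between actions to mimic think time.
    wait_time = between(1, 3)

    @task(3)  # weighted: browsing happens three times as often as cart views
    def browse_products(self):
        self.client.get("/products")   # hypothetical endpoint

    @task(1)
    def view_cart(self):
        self.client.get("/cart")       # hypothetical endpoint

# Example headless run against a staging host (illustrative numbers):
#   locust -f loadtest.py --headless --host https://staging.example.com \
#          --users 100 --spawn-rate 10 --run-time 10m
```

The same script can serve several of the test types above simply by changing the run parameters: ramp users gradually for load testing, push them well past expectations for stress testing, or hold a steady load for hours for endurance testing.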

Common Performance Problems

Most performance problems revolve around speed, response time, load time and poor scalability. Speed is often one of the most important attributes of an application. A slow-running application will lose potential users. Performance testing is done to make sure an app runs fast enough to keep a user's attention and interest. Take a look at the following list of common performance problems and notice how speed is a common factor in many of them:

Long load time - load time is normally the initial time it takes an application to start. This should generally be kept to a minimum: while some applications are impossible to load in under a minute, load time should be kept under a few seconds where possible.

Poor response time - Response time is the time it takes from when a user inputs data into the application until the application outputs a response to that input. Generally, this should be very quick. Again, if a user has to wait too long, they lose interest.

Poor scalability - A software product suffers from poor scalability when it cannot handle the expected number of users or when it does not accommodate a wide enough range of users. Load Testing should be done to be certain the application can handle the anticipated number of users.

Bottlenecking - bottlenecks are obstructions in a system which degrade overall system performance. Bottlenecking is when either coding errors or hardware issues cause a decrease of throughput under certain loads, and it is often caused by one faulty section of code. The key to fixing a bottlenecking issue is to find the section of code that is causing the slowdown and address it there; this generally means either fixing poorly running processes or adding additional hardware. Some common performance bottlenecks are (a monitoring sketch follows this list):

·        CPU utilization

·        Memory utilization

·        Network utilization

·        Operating System limitations

·        Disk usage
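
As a rough illustration of watching these bottleneck signals during a test run, the sketch below samples CPU, memory, disk and network with the Python psutil library; the sampling interval and output format are arbitrary choices for illustration.

```python
import time

import psutil  # third-party: pip install psutil

def sample_resources(interval_s: float = 5.0) -> None:
    """Print one line of resource readings per interval; stop with Ctrl-C."""
    while True:
        cpu = psutil.cpu_percent(interval=1)      # % CPU over a 1s window
        mem = psutil.virtual_memory().percent     # % physical memory in use
        disk = psutil.disk_usage("/").percent     # % disk space used
        net = psutil.net_io_counters()            # cumulative network bytes
        print(f"cpu={cpu}% mem={mem}% disk={disk}% "
              f"net_sent={net.bytes_sent} net_recv={net.bytes_recv}")
        time.sleep(interval_s)

if __name__ == "__main__":
    sample_resources()
```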

Performance Testing Process

The methodology adopted for performance testing can vary widely, but the objective remains the same. It can help demonstrate that your software system meets certain pre-defined performance criteria, compare the performance of two software systems, or identify parts of your software system which degrade its performance.

Below is a generic process on how to perform performance testing:

Identify your testing environment - Know your physical test environment, production environment and what testing tools are available. Understand details of the hardware, software and network configurations used during testing before you begin the testing process. It will help testers create more efficient tests. It will also help identify possible challenges that testers may encounter during the performance testing procedures.

Identify the performance acceptance criteria - This includes goals and constraints for throughput, response times and resource allocation. It is also necessary to identify project success criteria outside of these goals and constraints. Testers should be empowered to set performance criteria and goals, because often the project specifications will not include a wide enough variety of performance benchmarks; sometimes there may be none at all. When possible, finding a similar application to compare against is a good way to set performance goals.
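
One way to keep acceptance criteria honest is to encode them as data and check them automatically after each run. The sketch below is a minimal Python example; the threshold numbers are invented placeholders, and real values should come from your requirements.

```python
# Invented placeholder thresholds -- real values belong in your requirements.
CRITERIA = {
    "p95_response_ms": 2000,   # 95% of requests complete within 2 seconds
    "error_rate_pct": 1.0,     # at most 1% of requests may fail
    "throughput_rps": 200,     # sustain at least 200 requests per second
}

def check_results(results: dict) -> list[str]:
    """Return human-readable failures; an empty list means all criteria pass."""
    failures = []
    if results["p95_response_ms"] > CRITERIA["p95_response_ms"]:
        failures.append("p95 response time over budget")
    if results["error_rate_pct"] > CRITERIA["error_rate_pct"]:
        failures.append("error rate over budget")
    if results["throughput_rps"] < CRITERIA["throughput_rps"]:
        failures.append("throughput below target")
    return failures
```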

Plan & design performance tests - Determine how usage is likely to vary amongst end-users and identify key scenarios to test for all possible use cases. It is necessary to simulate a variety of end-users, plan performance test data and outline what metrics will be gathered.

Configure the test environment - Prepare the testing environment before execution. It's important that the test environment is set up in a way that is as representative of production as possible. This doesn't mean it needs to be as fast as production, but it needs to be configured and set up on the server in the same way, so as to give predictable and comparable performance even if not exact. As long as your benchmarks are consistent in the test environment, performance should be correspondingly consistent in production.

Implement test design - Create the performance tests according to your test design. This means your different tests should cater to actual possible use cases and be as relevant to the customer experience as possible. While it is great to just hit every API and UI object with random data to see if they still function, the truth is that user journeys often carry their own specific data constraints, and it's important that your tests cater for them.

Run the tests - Execute and monitor the tests. To execute the tests, you will want your tool to be on a machine separate from your test environment, and preferably with a fair amount of power. The reason for this is that you don't want a slow machine hampering your results; plus, if the tool is sitting on the same server, the additional strain the tool places on the processor will skew your analysis.
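
For illustration, the sketch below drives a small burst of load from a separate machine and records response times using only Python's standard concurrency tools plus the requests library; the URL and concurrency numbers are assumptions, and a dedicated tool (JMeter, Locust, k6 and the like) handles this far more robustly.

```python
import time
from concurrent.futures import ThreadPoolExecutor

import requests  # third-party: pip install requests

URL = "https://staging.example.com/products"  # hypothetical endpoint

def timed_request(_: int) -> float:
    # Error handling is omitted for brevity; a real harness records failures.
    start = time.perf_counter()
    requests.get(URL, timeout=10)
    return (time.perf_counter() - start) * 1000  # response time in ms

# 50 concurrent "users" issuing 1,000 requests in total (illustrative numbers).
with ThreadPoolExecutor(max_workers=50) as pool:
    response_times = list(pool.map(timed_request, range(1000)))

print(f"requests: {len(response_times)}, "
      f"avg: {sum(response_times) / len(response_times):.1f} ms")
```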

The real trick with any form of performance testing lies not in the scripting and execution of the tests, but rather in the monitoring. It's the data gained from the different monitors you have created (explained below) that provides the most insight and helps to identify what is really going on with your system.

Analyse, tune and retest - Consolidate, analyse and share test results. Then fine-tune and test again to see whether performance has improved or degraded. Since improvements generally grow smaller with each retest, stop when bottlenecking is caused by the CPU; at that point you may have to consider the option of increasing CPU power. This phase of testing can be particularly problematic if you've left performance testing too late, as there could be several poorly optimised components in your system, all of which will need to be rectified individually to verify whether they actually change performance.

Performance Testing Metrics: Parameters Monitored

The basic parameters monitored during performance testing include the following (a short analysis sketch follows the list):

·        Processor usage - the amount of time the processor spends executing non-idle threads.

·        Memory use - the amount of physical memory available to processes on a computer.

·        Disk time - the amount of time the disk is busy executing a read or write request.

·        Bandwidth - shows the bits per second used by a network interface.

·        Private bytes - number of bytes a process has allocated that can't be shared amongst other processes. These are used to measure memory leaks and usage.

·        Committed memory - the amount of virtual memory used.

·        Memory pages/second - number of pages written to or read from the disk in order to resolve hard page faults. Hard page faults are when code not from the current working set is called up from elsewhere and retrieved from a disk.

·        Page faults/second - the overall rate at which page faults are processed by the processor. This again occurs when a process requires code from outside its working set.

·        CPU interrupts per second - is the average number of hardware interrupts a processor is receiving and processing each second.

·        Disk queue length - the average number of read and write requests queued for the selected disk during a sample interval.

·        Network output queue length - the length of the output packet queue, in packets. A queue length of more than two indicates a delay and a bottleneck that needs to be addressed.

·        Network bytes total per second - the rate at which bytes are sent and received on the interface, including framing characters.

·        Response time - the time from when a user enters a request until the first character of the response is received.

·        Throughput - the rate at which a computer or network receives requests per second.

·        Amount of connection pooling - the number of user requests that are met by pooled connections. The more requests met by connections in the pool, the better the performance will be.

·        Maximum active sessions - the maximum number of sessions that can be active at once.

·        Hit ratios - This has to do with the number of SQL statements that are handled by cached data instead of expensive I/O operations. This is a good place to start for solving bottlenecking issues.

·        Hits per second - the number of hits on a web server during each second of a load test.

·        Rollback segment - the amount of data that can roll back at any point in time.

·        Database locks - locking of tables and databases need to be monitored and carefully tuned.

·        Top waits - monitored to determine what wait times can be cut down when dealing with how fast data is retrieved from memory.

·        Thread counts - an application's health can be measured by the number of threads that are running and currently active.

·        Garbage collection - It has to do with returning unused memory back to the system. Garbage collection needs to be monitored for efficiency.
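
To show how raw monitor output turns into the headline numbers above, here is a small Python sketch that computes average, p95, p99 and throughput from a list of response-time samples; the sample data and test duration are fake values for illustration only.

```python
import statistics

# Fake samples purely for illustration (milliseconds) and a fake duration.
response_times_ms = [120, 135, 140, 180, 210, 250, 300, 450, 900, 1500]
test_duration_s = 10.0

cuts = statistics.quantiles(response_times_ms, n=100)  # 99 percentile cuts
print(f"avg:        {statistics.mean(response_times_ms):.0f} ms")
print(f"p95:        {cuts[94]:.0f} ms")  # 95th percentile
print(f"p99:        {cuts[98]:.0f} ms")  # 99th percentile
print(f"throughput: {len(response_times_ms) / test_duration_s:.1f} req/s")
```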

Example Performance Test Cases

So, we've looked quite extensively at the different aspects of performance testing and what is likely to cause performance issues in any given system, but there is still the matter of actually scripting your test cases. As I mentioned earlier, your test cases need to be relevant (though not exclusively) to your typical user journeys, and you also need to know how to apply those scripts to get the results you need.

And while doing this effectively on any given system takes a significant understanding of the system itself and how it works, the test ideas below should help you get started in scripting scenarios that will unearth likely performance issues in your system (a sample automated check follows the list):

·        Verify the response time is no more than 4 seconds when 1,000 users access the website simultaneously.

·        Verify the response time of the application under load is within an acceptable range when network connectivity is slow.

·        Check the maximum number of users that the application can handle before it crashes.

·        Check database execution time when 500 records are read/written simultaneously.

·        Check CPU and memory usage of the application and the database server under peak load conditions.

·        Verify the response time of the application under low, normal, moderate and heavy load conditions.
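
As an example of how a vague criterion becomes a concrete, automated check, the sketch below expresses the first test case as a pytest-style assertion; run_load_test is a hypothetical helper standing in for your load-testing tool's results export.

```python
def test_response_time_under_load():
    # run_load_test is a hypothetical helper; in practice these numbers come
    # from your load-testing tool's results export.
    results = run_load_test(users=1000, duration_s=300)
    assert results.p95_response_ms <= 4000, (
        f"p95 was {results.p95_response_ms} ms with 1000 simultaneous users"
    )
```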

During the actual performance test execution, vague terms like "acceptable range" and "heavy load" are replaced by concrete numbers, as you will want to measure against the specific goals that meet the needs of your software. These numbers should typically form part of the business requirements, reflect the technical landscape of the application, and be discussed during the planning of that particular part of the system.

When to Performance Test

A lot of people leave their performance testing until quite late in the development cycle, once the software is fairly complete and stable. While there is merit to this for getting a sense of how everything works together, performance testing should take place at all stages. Leaving it too late to identify and rectify performance issues is expensive and likely to add serious delays to the delivery of your system. In this age of predictability, it's not something you can afford.

Any piece of code or database change should essentially be performance tested as soon as possible to help identify any immediate bottlenecks and optimisation deficiencies. I have written previously about how to do this easily at a code level. However, it's not just about testing any one piece of code: it's about testing each API and DB process, and then testing whenever a change is introduced into the system - even hardware changes.
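
As a minimal example of code-level performance testing, the sketch below times a function with Python's standard-library timeit module so that regressions surface as soon as a change lands; build_report is a stand-in for whatever code path you care about.

```python
import timeit

def build_report(rows: int = 1000) -> list[str]:
    # Stand-in for the real code path you want to keep fast.
    return [f"row-{i}" for i in range(rows)]

elapsed = timeit.timeit(build_report, number=100)
print(f"100 runs: {elapsed:.3f}s ({elapsed / 100 * 1000:.2f} ms per call)")
```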

The bottom line is that you shouldn't compromise on performance in any way, and as such, you want to test often and everywhere.

Performance is just as vital to the success of any system/application as its functional quality and so you need to place just as much emphasis on testing and monitoring this throughout your development cycle as you would the rest of your functional testing. 
