A Comparative Analysis of Gatling and Locust Through the Performance Testing Lifecycle
Benosam Benjamin
Automation Architect and Transformation Specialist | Passionate about automation and innovation
Why I Wrote this Article?
A couple of weeks back, during a discussion, someone mentioned how their customer preferred Gatling for performance testing, and this prompted me to check out Gatling. For me, the first load testing tool I ever knew about was LoadRunner, but Locust is what introduced me to the world of load testing. While I am by no means a performance testing expert, my day job gave me a good opportunity to explore Locust and to build a solution and ecosystem around it to help our performance test engineers simplify their work and get the best out of Locust.
I knew of Gatling's existence through a few people but never fully explored it. The last time it came up was when I was learning and customizing the Karate test automation framework. I came to know that Karate provided some support for running its scripts as Gatling tests, but I did not explore it further because, by that time, our Locust ecosystem was the de-facto preferred system. When my recent conversation brought up Gatling again, I thought I should check it out and find out whether it had anything better to offer, or whether I could be inspired by any of its features.
While doing that, combing through various links and articles (I have added a few of the references towards the end), it struck me that this would make a good article and an opportunity to flex my muscles with Gen AI for the comparison and content creation. I hope it is useful for you. Please do let me know your thoughts on this article, as well as the performance testing tools you prefer and why.
Introduction
Performance testing is a crucial aspect of software quality assurance, as it ensures that the software can handle the expected load (in terms of users and transactions) and deliver the desired performance (the response, and the speed of response, to an action or transaction). Performance testing tools are software solutions that enable testers to create, execute, and analyze performance tests. There are many performance testing tools available in the market, each with its own features, advantages, and disadvantages.
In the realm of performance testing, this article aims to contrast the capabilities of Gatling with those of Locust. We will begin by providing an overview of performance testing including its different phases and the key factors to consider when choosing a performance testing tool. We will then dive into a detailed comparison of Gatling and Locust, highlighting their strengths and weaknesses in each phase of performance testing.
By the end of this article, you will have a comprehensive understanding of performance testing and be able to choose the right tool for your needs.
Gatling: Gatling is an open-source, Scala-based load testing tool that provides the option to script in Scala, Kotlin, or Java. It uses an event-driven, simulation-based approach to create and run performance tests. It is known for its high performance and detailed reporting capabilities. Locust: Locust is another open-source load testing tool that allows testers to write test scenarios in Python. It follows an imperative, code-based approach to define and execute performance tests. It supports distributed testing and real-time monitoring of performance metrics.
What is a performance test?
Typically, an end user perceives performance in terms of how the application responds to an action or to a page/data/resource (image/video) load. Slowness here can be attributed to multiple reasons, such as:
The intent of a performance test is to identify and resolve these factors. To help identify them, different types of tests are undertaken as part of performance testing.
From these different performance tests, one can expect to gain a variety of metrics that provide insights into the application’s behavior under test conditions. Some of the key metrics include:
These metrics help in identifying areas of improvement and ensuring that the application meets performance expectations. They are typically obtained through a combination of sources, including:
By aggregating data from these sources, a comprehensive view of the application’s performance can be constructed, enabling teams to make informed decisions about optimizations and improvements.
Choosing a Performance Test Tool
When selecting a performance test tool, it’s important to consider various factors to ensure it aligns with your project’s requirements. Here are some key factors to consider:
Performance Test Phases
A typical performance test follows these phases:
These phases ensure a structured approach to identifying and addressing performance issues within a system. While the role of Locust and Gatling holds prominence only from the test design and execution phases onwards, we will still outline the other phases to help grasp their importance.
Phase 1: Risk Assessment
Risk Assessment in performance testing is a process used to determine the need for performance testing of each component within a software system. It involves evaluating the potential risks associated with the system’s performance, such as response times, reliability, and scalability. The goal is to prioritize the testing scope and scenarios based on the criticality of the components and their impact on the business in terms of user experience, revenue, and potential application outages. It requires:
Phase 2: Requirement Gathering and Analysis
Requirement Gathering & Analysis is a crucial phase in the Performance Testing Life Cycle where the performance testing team collects and analyzes the application’s performance requirements. This involves working closely with stakeholders such as product owners, business analysts, and developers to understand the expected user load, usage patterns, and performance goals. Here’s an expanded explanation:
Phase 3: Performance Test Planning
Performance Test Planning is a strategic phase in the Performance Testing Life Cycle that outlines a roadmap for conducting successful performance tests. This ensures:
Phase 4: Performance Test Design (Scripting)
Performance Test Design (Scripting) is an intricate phase in performance testing where the test scenarios are meticulously crafted to emulate real-world user interactions with the system. This involves:
Beyond the language difference between Locust and Gatling, the very nature of each approach brings its own benefits. Locust uses an imperative, code-based approach, where users define the test flow directly in Python code and execute it using the Locust framework. This allows for more flexibility and simplicity in defining the test flow and user behavior, as well as easier customization and extensibility. Gatling uses an event-driven, simulation-based approach, where users define scenarios using its DSL (Domain-Specific Language), which can be more expressive for complex user behaviors, and execute them using the Gatling engine. This allows for more precise control over the test flow and user behavior, as well as better performance and scalability, and it can be particularly advantageous for teams already familiar with the JVM ecosystem.
One clear advantage Gatling has over Locust is its Recorder feature, available in the open-source edition, and the no-code quickstart available with Gatling Enterprise. These features help ease the test design phase.
Both Gatling and Locust provide a modular and extensible architecture. They allow users to organize and structure their test components into different files and folders, and to reuse and import them as needed, as the modular Locust sketch below illustrates. Both tools offer high maintainability, as they allow users to update and modify their test components and scripts easily and efficiently. They also provide good documentation and support, as well as large and active communities that contribute to their development and improvement.
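To give a flavour of this modularity in Locust, here is a minimal, purely illustrative sketch; the file names, endpoints and task weights are my own assumptions rather than part of either tool's documentation:
A Modular Locust Script Sketch:
# common_tasks.py - hypothetical shared module holding reusable behaviour
from locust import TaskSet, task
class BrowseCatalog(TaskSet):
    # reusable browsing behaviour that any user class can plug in
    @task(3)
    def list_items(self):
        self.client.get("/items")
    @task(1)
    def view_item(self):
        self.client.get("/items/1")
# locustfile.py - the entry point simply composes the shared pieces
from locust import HttpUser, between
# from common_tasks import BrowseCatalog  # import once split across files
class CatalogUser(HttpUser):
    wait_time = between(1, 5)
    tasks = [BrowseCatalog]
Gatling scripts can be organised in a similar way by splitting protocol definitions, chains and simulations into separate classes and importing them where needed.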
A Simple Gatling Script Sample:
package computerdatabase;

import static io.gatling.javaapi.core.CoreDsl.*;
import static io.gatling.javaapi.http.HttpDsl.*;

import io.gatling.javaapi.core.*;
import io.gatling.javaapi.http.*;

public class ComputerDatabaseSimulation extends Simulation {

    HttpProtocolBuilder httpProtocol =
        http.baseUrl("https://computer-database.gatling.io")
            .acceptHeader("application/json")
            .contentTypeHeader("application/json");

    ScenarioBuilder myFirstScenario = scenario("My First Scenario")
        .exec(http("Request 1")
            .get("/computers/"));

    {
        setUp(
            myFirstScenario.injectOpen(constantUsersPerSec(2).during(60))
        ).protocols(httpProtocol);
    }
}
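How you launch such a simulation depends on your Gatling setup; with the Gatling Maven plugin, for instance, it is typically run with mvn gatling:test, while the bundled launcher scripts or the Gradle/sbt plugins offer equivalent entry points.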
A Simple Locust Script Sample:
from locust import HttpUser, between, task

class WebsiteUser(HttpUser):
    wait_time = between(5, 15)

    def on_start(self):
        self.client.post("/login", {
            "username": "test_user",
            "password": ""
        })

    @task
    def index(self):
        self.client.get("/")
        self.client.get("/static/assets.js")

    @task
    def about(self):
        self.client.get("/about/")
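Assuming a reasonably recent Locust version, a script like this can be run through the web UI (locust -f locustfile.py, then open the browser interface) or headlessly with something like locust -f locustfile.py --headless -u 100 -r 10 --run-time 5m --host https://your-app.example.com; the exact flags can vary slightly between Locust versions, and the host URL here is just a placeholder.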
A Locust Script Sample with HTML Parsing:
# This locust test script example will simulate a user
# browsing the Locust documentation on https://docs.locust.io
import random
from locust import HttpUser, between, task
from pyquery import PyQuery

class AwesomeUser(HttpUser):
    host = "https://docs.locust.io/en/latest/"

    # we assume someone who is browsing the Locust docs,
    # generally has a quite long waiting time (between
    # 10 and 600 seconds), since there's a bunch of text
    # on each page
    wait_time = between(10, 600)

    def on_start(self):
        # start by waiting so that the simulated users
        # won't all arrive at the same time
        self.wait()
        # assume all users arrive at the index page
        self.index_page()
        self.urls_on_current_page = self.toc_urls

    @task(10)
    def index_page(self):
        r = self.client.get("")
        pq = PyQuery(r.content)
        link_elements = pq(".toctree-wrapper a.internal")
        self.toc_urls = [
            l.attrib["href"] for l in link_elements
        ]

    @task(50)
    def load_page(self):
        url = random.choice(self.toc_urls)
        r = self.client.get(url)
        pq = PyQuery(r.content)
        link_elements = pq("a.internal")
        self.urls_on_current_page = [
            l.attrib["href"] for l in link_elements
        ]

    @task(30)
    def load_sub_page(self):
        url = random.choice(self.urls_on_current_page)
        r = self.client.get(url)
Phase 5: Workload Modelling
Workload Modelling is a fundamental aspect of performance testing that involves creating a test scenario to simulate real-world usage of an application. It requires:
Both Gatling and Locust can simulate user behavior and generate traffic to test the application, with Gatling generally considered well suited to creating complex user behavior simulations. Apart from code-driven simulations, Locust's web GUI lets the user control the traffic interactively; this is particularly useful to anyone new to performance testing, especially during design and workload modelling. A small workload-modelling sketch follows below.
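As an illustration of what workload modelling can look like in Locust code, the sketch below uses Locust's LoadTestShape hook to ramp traffic through warm-up, steady-state, peak and ramp-down stages instead of holding a flat user count. The stage durations, user counts and endpoint are illustrative assumptions only:
A Locust Workload Shape Sketch:
from locust import HttpUser, LoadTestShape, between, task

class ShopUser(HttpUser):
    wait_time = between(1, 3)

    @task
    def browse(self):
        self.client.get("/")

class StagedRampShape(LoadTestShape):
    # (end time in seconds, target users, spawn rate) - illustrative numbers
    stages = [
        (60, 10, 1),     # warm-up: hold around 10 users for the first minute
        (360, 100, 10),  # steady state: ramp to and hold 100 users
        (480, 300, 20),  # peak/stress window: push towards 300 users
        (540, 0, 10),    # ramp down before the test ends
    ]

    def tick(self):
        run_time = self.get_run_time()
        for end_time, users, spawn_rate in self.stages:
            if run_time < end_time:
                return users, spawn_rate
        return None  # returning None tells Locust to stop the test
Gatling expresses the same idea through its injection profiles (for example rampUsers, constantUsersPerSec and similar DSL calls) inside setUp().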
Phase 6: Performance Test Execution & Result Analysis
Performance Test Execution and Result Analysis is a pivotal phase in the performance testing lifecycle where the designed tests are run, and the resulting data is scrutinized to assess the system’s behavior under simulated conditions. This stage is essential for:
During test execution, Locust can generate and handle large and realistic user loads with minimal resource consumption. Locust can also distribute and coordinate test execution across multiple machines or nodes using its master/worker (previously called master-slave) architecture, an open-source feature that allows users to run multiple Locust instances in parallel. Gatling Enterprise provides a distributed setup as a premium feature, though the same can also be achieved with the open-source edition through somewhat more involved steps.
Gatling's test execution is known for generating detailed reports that include various performance metrics such as response times, throughput, and error rates. It provides real-time reporting and interactive charts during test execution, and it allows for extensive data analysis post-execution, with the ability to parse responses and verify request outcomes using assertions. Gatling's reports can be generated in HTML or JSON formats and can be integrated with other tools, such as Grafana or InfluxDB, for further analysis and monitoring. For result analysis, Locust provides real-time statistics and downloadable HTML graphs through its web interface, and it allows exporting the test results to CSV or JSON files for further analysis using third-party tools. Locust, with its APIs, can also be extended and integrated with tools like Grafana or InfluxDB.
Gatling does not have a built-in API for invoking and controlling test execution, but it provides a way to do so through its Java APIs. For example, you can use the Gatling Runner class to programmatically run a simulation from a Java application, or use the Gatling Maven plugin to run a simulation from the command line or a CI/CD tool. Locust, on the other hand, has a built-in API for invoking and controlling test execution, exposed as a RESTful web service. For example, you can use the /swarm endpoint to start a test with a given number of users and spawn (hatch) rate, the /stop endpoint to stop a running test, and the /stats/requests endpoint to get the current statistics of the test. A small sketch of driving this API follows.
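As a rough sketch of driving Locust's web API from another script, the snippet below assumes a Locust master already running its web UI at localhost:8089; note that the form field names have changed across Locust versions (older releases use hatch_rate, newer ones spawn_rate), so check the version you are running against:
A Locust REST API Control Sketch:
import time
import requests

LOCUST_URL = "http://localhost:8089"  # assumed address of the Locust web UI

# start a test with 50 users against an assumed target host
requests.post(f"{LOCUST_URL}/swarm", data={
    "user_count": 50,
    "spawn_rate": 5,  # use "hatch_rate" instead on older Locust versions
    "host": "https://system-under-test.example.com",
})

time.sleep(120)  # let the load run for a couple of minutes

# poll the current statistics, which are returned as JSON
stats = requests.get(f"{LOCUST_URL}/stats/requests").json()
print(stats.get("total_rps"), stats.get("fail_ratio"))

# stop the running test
requests.get(f"{LOCUST_URL}/stop")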
The screenshots below give a quick snapshot of Gatling and Locust test statistics.
Phase 7: Reporting and Recommendation
The Reporting and Recommendation phase in performance testing is a critical juncture where the gathered data is analyzed, and a comprehensive report is generated. This report encapsulates the overall test results, observations, findings, and crucially, recommendations for performance enhancements. It serves as a decisive document for stakeholders, providing the necessary insights to make an informed GO/NO-GO decision on the application’s launch.
The report typically includes a detailed description of the performance test outcomes, a clear indication of the GO/NO-GO status, and a rationale for the decision. It also assesses whether the application meets the predefined Non-Functional Requirements (NFRs) and documents the status of identified defects.
Moreover, the report highlights any performance risks, attaches relevant artifacts, and offers actionable recommendations to address any issues.
The choice between Locust and Gatling during this phase hinges on the specific requirements of the test, the protocols involved, and the team's proficiency with Python or Scala/Java. Gatling's detailed graphical reports give stakeholders a clearer out-of-the-box view of performance bottlenecks, while Locust's customizability allows for extensive tailoring and integrations, as the small event-hook sketch below shows.
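As one example of the kind of tailoring Locust allows here, the sketch below uses Locust's event hooks inside a locustfile to log failed requests and to turn a simple NFR check into the process exit code at the end of a run; the 800 ms threshold and the print-based reporting are illustrative assumptions, and in practice these hooks are often used to push samples to InfluxDB, CSV files or other reporting back ends:
A Locust Custom Reporting Sketch:
from locust import events

@events.request.add_listener
def log_failures(request_type, name, response_time, response_length, exception, **kwargs):
    # fires for every request; here we only log failures
    if exception:
        print(f"FAILED {request_type} {name}: {exception}")

@events.test_stop.add_listener
def summarise_run(environment, **kwargs):
    total = environment.stats.total
    p95 = total.get_response_time_percentile(0.95)
    print(f"requests={total.num_requests} failures={total.num_failures} p95={p95}ms")
    # simple GO/NO-GO gate: fail the run if the 95th percentile is too slow
    if p95 and p95 > 800:
        environment.process_exit_code = 1
Gatling covers a similar need out of the box through assertions declared in the simulation (for example on percentiles or failure rates), which directly influence the build result.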
Conclusion
In conclusion, Gatling and Locust are two popular open-source performance testing tools, each with its own features, advantages, and disadvantages. Gatling is a Scala-based tool that uses an event-driven, simulation-based approach to create and run performance tests. Locust is a Python-based tool that uses an imperative, code-based approach to define and execute performance tests. They have a few differences but share one common objective: to help you with your performance test.
Based on our comparison, we can recommend the following:
However, these recommendations are not absolute, as the best tool for your performance testing needs depends on your specific context and scenario. Therefore, you should evaluate both tools based on your own criteria and preferences, and choose the one that suits you best.
We hope this article has helped you understand the differences and similarities between Gatling and Locust, and how to choose the best tool for your performance testing needs. For more information and resources on these tools, you can visit their official websites or follow the references below. Thank you for reading!
References