The Importance of Unit Testing and the Role of AI in Test Generation
Subrata Mukherjee
Enterprise Architect @ Blenheim Chalcot | Innovating with AWS, Azure, GCP, DevOps, MLOps, AI, LLM, LlmOps | Passionate about Leading Technology Transformations
Unit testing is a fundamental aspect of software development. It plays a crucial role in maintaining the quality and reliability of your code. Here are some reasons why unit testing is essential:
Unit testing allows you to verify that each part of your application works as intended. By testing individual components and endpoints, you can ensure that your API or function behaves correctly. This also aligns with the principles of test-driven API development.
Here is a simple example of a unit test in a FastAPI application:
Python
from fastapi import FastAPI
from fastapi.testclient import TestClient

app = FastAPI()

@app.get("/factorial/{num}")
async def calculate_factorial(num: int):
    fact = 1
    for i in range(1, num + 1):
        fact = fact * i
    return {"factorial": fact}

client = TestClient(app)

def test_calculate_factorial():
    response = client.get("/factorial/5")
    assert response.status_code == 200
    assert response.json() == {"factorial": 120}
In this example, a FastAPI application is created with a single endpoint that calculates the factorial of a number. A TestClient is then used to send a GET request to this endpoint, and assertions are made to ensure the response is as expected.
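To take this a step further, you could also cover an edge case. The sketch below (not part of the original example, and reusing the client defined above) adds a test for zero, which the endpoint handles correctly because the loop body never runs and fact stays 1:

Python

def test_calculate_factorial_zero():
    # Edge case: 0! is 1; range(1, 1) is empty, so fact is returned unchanged
    response = client.get("/factorial/0")
    assert response.status_code == 200
    assert response.json() == {"factorial": 1}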
Remember, a well-tested application is more reliable, easier to maintain, and less prone to bugs. So, it’s always a good practice to include unit tests in your projects.
AI-Powered Test Generation with Cover-Agent LLM
Cover-Agent LLM is an AI-powered tool developed by CodiumAI that automates the generation of unit tests and enhances code coverage. It uses Large Language Models (LLMs) to generate tests, aiming to streamline development workflows.
Here’s a high-level overview of how you can use it:
1. Installation: First, install Cover-Agent LLM. You can find the installation instructions in the official repository.
2. Requirements: Before you begin, make sure you have the prerequisites listed in the repository, such as an OpenAI API key.
3. Running Cover-Agent LLM: Cover-Agent LLM runs from the terminal and is made up of several components that work together to generate and validate tests.
4. Generating Tests: Cover-Agent LLM generates a batch of candidate tests, filters out those that don't build or run, drops any that don't pass, and finally discards those that don't increase code coverage (a conceptual sketch of this loop follows below).
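As a rough mental model only (this is not Cover-Agent's actual code), the generate-and-filter loop can be sketched like this; every helper passed into the function is a hypothetical placeholder, not a real Cover-Agent API:

Python

# Conceptual sketch of the generate-and-filter loop described above.
# generate_candidates, test_passes, and coverage_with are hypothetical
# placeholders that mirror the workflow in prose.

def coverage_loop(generate_candidates, test_passes, coverage_with,
                  desired_coverage=80, max_iterations=3):
    kept = []                                   # tests accepted so far
    coverage = coverage_with(kept)              # baseline coverage of the existing suite
    for _ in range(max_iterations):
        if coverage >= desired_coverage:
            break
        for candidate in generate_candidates():
            if not test_passes(candidate):      # drop tests that don't build/run or that fail
                continue
            new_coverage = coverage_with(kept + [candidate])
            if new_coverage > coverage:         # keep only tests that increase coverage
                kept.append(candidate)
                coverage = new_coverage
    return kept, coverage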
Please note that the specifics of using Cover-Agent LLM can vary depending on the project and the LLM you are using. For more detailed instructions, please refer to the official repository and the CodiumAI blog post on the topic.
Remember, while automated test generation can be a powerful tool, it’s also important to manually review the generated tests to ensure they are well-written and add value to your test suite.
Automated Testing with Cover-Agent
To set up Codium’s Cover-Agent, follow these steps:
First, install Cover-Agent:
pip install git+https://github.com/Codium-ai/cover-agent.git
Make sure that your OPENAI_API_KEY is set in your environment variables, as it is required for the OpenAI API.
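On Linux or macOS, for example, you can export it in your shell before running the tool (the value below is just a placeholder for your own key):

export OPENAI_API_KEY="<your-openai-api-key>"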
Command Arguments
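The original argument reference isn't reproduced here, but, roughly, these are the flags used in the run later in this article (their roles follow from the example command):

--source-file-path: the source file to generate tests for (here, calculator.py)
--test-file-path: the existing test file to extend (test_calculator.py)
--code-coverage-report-path: where the coverage report is written (coverage.xml)
--test-command: the command Cover-Agent runs to execute the tests and produce coverage
--test-command-dir: the directory in which that command is run
--coverage-type: the format of the coverage report (cobertura)
--desired-coverage: the target coverage percentage
--max-iterations: the maximum number of generation iterations
--model: the LLM to use (for example, gpt-4o)
--additional-instructions: free-text guidance passed to the model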
Manual Testing
# calculator.py

def calculate_factorial(num):
    if num < 0:
        raise ValueError("Factorial is not defined for negative numbers")
    fact = 1
    for i in range(1, num + 1):
        fact = fact * i
    return fact

def main():
    num = 5
    result = calculate_factorial(num)
    print(f"The factorial of {num} is {result}")

if __name__ == "__main__":
    main()
And here’s the test_calculator.py with only one test case:
Python
# test_calculator.py
from calculator import calculate_factorial

class TestCalculator:
    def test_calculate_factorial_positive(self):
        assert calculate_factorial(5) == 120
To see the test coverage, you would still need to install pytest-cov:
pip install pytest-cov
Then, you can run the coverage analysis with:
pytest --cov=calculator
The output of this command is a coverage table. In my case, it showed that out of the 13 statements in calculator.py, 5 were not covered by the test, resulting in a coverage of 62%.
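For reference, the terminal report for those numbers (the original output isn't reproduced here) looks roughly like this:

Name            Stmts   Miss  Cover
-----------------------------------
calculator.py      13      5    62%
-----------------------------------
TOTAL              13      5    62%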
Coverage is a measure of how many lines/blocks/arcs of your code are executed while the automated tests are running. A line of code is considered covered if it’s executed by at least one test.
In my case, the calculate_factorial function in calculator.py has a guard, if num < 0:, that raises a ValueError. Since there is no test where num is negative, the raise statement never executes and is reported as uncovered, along with the body of main(), which the single test never calls.
Automated Testing with Cover-Agent
With your OPENAI_API_KEY set in your environment variables (as above), run the following command in the terminal to start generating tests:
cover-agent --source-file-path "calculator.py" --test-file-path "test_calculator.py" --code-coverage-report-path "coverage.xml" --test-command "pytest --cov=. --cov-report=xml --cov-report=term" --test-command-dir "./" --coverage-type "cobertura" --desired-coverage 80 --max-iterations 3 --model "gpt-4o" --additional-instructions "Since I am using a test class, each line of code (including the first line) needs to be prepended with 4 whitespaces. This is extremely important to ensure that every line returned contains that 4 whitespace indent; otherwise, my code will not run."
You can play with the desired coverage percentage and the maximum number of iterations; here I increased the desired coverage from 80 to 100 and the max iterations from 3 to 5.
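With those tweaks, only two flags in the command above change (everything else stays the same):

--desired-coverage 100 --max-iterations 5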
This generates the following code with 92% coverage:
Python
# test_calculator.py
from calculator import calculate_factorial
from calculator import main

class TestCalculator:
    def test_calculate_factorial_positive(self):
        assert calculate_factorial(5) == 120

    def test_main_function_output(self, capsys):
        main()
        captured = capsys.readouterr()
        assert captured.out == "The factorial of 5 is 120\n"

    def test_calculate_factorial_negative(self):
        try:
            calculate_factorial(-1)
        except ValueError as e:
            assert str(e) == "Factorial is not defined for negative numbers"
You can see that the agent also wrote a test for the negative-number edge case and a test that checks the output of main().
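One thing a manual review (as recommended earlier) might flag: the generated negative-number test still passes if no exception is raised at all, because its assertion only runs inside the except block. A stricter rewrite, using pytest's built-in pytest.raises helper, could look like this:

Python

import pytest
from calculator import calculate_factorial

def test_calculate_factorial_negative_strict():
    # Unlike the try/except version, this fails if no ValueError is raised
    with pytest.raises(ValueError, match="Factorial is not defined for negative numbers"):
        calculate_factorial(-1)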
Now it’s time to test the coverage again:
pytest --cov=calculator
Output (now at 92% coverage):
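The original report screenshot isn't reproduced here; based on the 92% figure, 12 of the 13 statements are now executed (likely leaving only the main() call under the __main__ guard uncovered), so the report looks roughly like this:

Name            Stmts   Miss  Cover
-----------------------------------
calculator.py      13      1    92%
-----------------------------------
TOTAL              13      1    92%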