Contract testing
Years ago I watched a talk that changed my view on how to approach testing: “Integrated Tests Are a Scam” by J.B. Rainsberger, and I’ve been following him since. In that talk he explains why integrated tests are not the solution and how to work following a contract testing approach.
J.B. Rainsberger talks about unit testing; I’d like to share how I’ve been trying to follow the same principle in a microservices architecture, and what my experience testing those services has been, translating what I understood from that approach to a platform with a number of services (not always proper microservices).
I will not talk about specific technologies because there are multiple options and you can achieve the same with different tools depending on your needs. I will instead focus on what I believe is more relevant: how to approach testing.
To explain it simply, I will split it into three different points of view.
Server
When a team works on a service, there are a number of activities to be done as part of development. Of course, unit tests have to be written, but there are other activities required from a testing perspective:
- Define and maintain the contract of your service. Anybody using our service has to know how to work with it and, even though it may sound obvious, that doesn’t always happen. Integration can be done in different ways, but in all cases the contract should be defined.
- Write and maintain a mock of your service.
This mock will be the reference for anybody working with our service. Having the mock maintained by the same team that owns the service helps keep it up to date with the latest version. Maintaining the mock means not only keeping the same contract definition but also configuring a complete set of request-response scenarios to be used by other services (see the sketch after this list).
- Automate isolated tests of the service.
One service is responsible for a business function and it has to be tested.
The same criterion we describe here for integration, not re-testing what has already been covered at component level, has to be followed with our unit tests and component tests.
This is one of the reasons I believe these tests have to be written by developers, as they know what is already covered by the unit tests. I am not saying that QAs have nothing to contribute here; I believe in a team effort. QAs should share their view on unit and component tests, the same way developers should when we talk about business flows.
- Automatically test that our mock and our service have the same contract.
This way we make sure that our mock can be used as a reference for how our service behaves (see the contract-check sketch after this list).
- Configure static scenarios in the mock and write tests in your service to cover all those scenarios.
Having an automated check that every scenario configured in the mock has a corresponding test in the service helps avoid human mistakes.
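To make the mock and the contract check concrete, here is a minimal sketch in Python. It assumes a hypothetical “accounts” service exposing GET /accounts/<id> and uses Flask for the mock; the scenario ids, fields and ports are illustrative choices, not part of any real service.

```python
# mock_accounts_service.py -- minimal sketch of a mock for a hypothetical
# "accounts" service. The SCENARIOS dict is the single place where the
# static request -> response pairs offered to client teams are configured.
from flask import Flask, jsonify

app = Flask(__name__)

# Static scenarios: one entry per account id that client teams can rely on.
SCENARIOS = {
    "active-account": (200, {"id": "active-account", "state": "ACTIVE", "balance": 100.0}),
    "blocked-account": (200, {"id": "blocked-account", "state": "BLOCKED", "balance": 0.0}),
    "missing-account": (404, {"error": "account not found"}),
}


@app.route("/accounts/<account_id>")
def get_account(account_id):
    status, body = SCENARIOS.get(account_id, SCENARIOS["missing-account"])
    return jsonify(body), status


if __name__ == "__main__":
    app.run(port=8081)
```

And a sketch of the automated check that the mock and the real service still honour the same contract. Here the contract is expressed as a JSON Schema kept next to the service; validating both the mock’s and the service’s responses against it is one simple way to automate the check (the schema, URLs and scenario names are assumptions).

```python
# test_contract.py -- sketch: verify that the real service and its mock
# honour the same contract, expressed here as a JSON Schema for an account.
import requests
from jsonschema import validate

ACCOUNT_SCHEMA = {
    "type": "object",
    "required": ["id", "state", "balance"],
    "properties": {
        "id": {"type": "string"},
        "state": {"enum": ["ACTIVE", "BLOCKED"]},
        "balance": {"type": "number"},
    },
}

MOCK_URL = "http://localhost:8081"     # the mock above
SERVICE_URL = "http://localhost:8080"  # a test instance of the real service


def test_mock_matches_contract():
    body = requests.get(f"{MOCK_URL}/accounts/active-account").json()
    validate(instance=body, schema=ACCOUNT_SCHEMA)


def test_service_matches_contract():
    body = requests.get(f"{SERVICE_URL}/accounts/active-account").json()
    validate(instance=body, schema=ACCOUNT_SCHEMA)
```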
Client
When a team works on a service that needs to communicate with another service, there are extra steps that should be taken:
- Write tests for our service against the mock of the service we have to integrate with. We might have different business flows depending on the information coming from other services; the mock helps us test that logic without having to create data in another service (see the sketch after this list).
- If we need a new scenario configured in the mock, either our team configures it and informs the team responsible for that service, or we ask that team and they take care of the configuration, adding new tests to their service if required.
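As a sketch of what these client-side tests could look like, assume our service is a hypothetical payments service with a POST /payments endpoint whose behaviour depends on the state of the account returned by the accounts service; the mock from the previous section already provides the active-account and blocked-account scenarios. Endpoint, payload and status codes are illustrative.

```python
# test_payments_against_accounts_mock.py -- sketch of client-side tests:
# our (hypothetical) payments service is configured to talk to the
# accounts mock, so each business branch can be exercised without
# creating data in the real accounts service.
import requests

PAYMENTS_URL = "http://localhost:8082"  # our service, pointed at the mock


def test_payment_accepted_for_active_account():
    response = requests.post(
        f"{PAYMENTS_URL}/payments",
        json={"accountId": "active-account", "amount": 25.0},
    )
    assert response.status_code == 201


def test_payment_rejected_for_blocked_account():
    response = requests.post(
        f"{PAYMENTS_URL}/payments",
        json={"accountId": "blocked-account", "amount": 25.0},
    )
    assert response.status_code == 422
```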
Integration
In theory, if we follow this approach properly, no integration tests are needed, but I have to admit that I have never felt confident enough not to automate a minimum set of E2E tests. There should be a hard upper limit on their number and you should be strict about sticking to it. It depends a lot on the platform, but I usually work with 100 E2E tests as a limit, especially if we work with a frontend.
Sometimes I’ve had business flows tested from the API perspective. It is quicker and cheaper, and QAs can work on those tests when they are required as part of a story’s acceptance criteria. I’ve never joined a company that already had a test strategy at this level, so I’ve had to define some tests to cover the platform until tests at the right level were in place. In any case, I can see the value of these tests: they add the QAs’ business perspective, they are automated, and they can be added to our pipeline in parallel with the developers’ work, so they become another check before merging the code.
Working together as a team is important to avoid duplicating tests, so that what has already been tested at one level is not tested again. As an example, take a flow where the customer creates an account, requests a transaction and cancels the account. In this case, I wouldn’t test what happens with different transactions or different accounts unless they follow different business flows.
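As an illustration, here is how that flow could look as one automated API-level test run against the integration environment; the endpoints, payloads and base URL are assumptions made for the example.

```python
# test_account_lifecycle_e2e.py -- sketch of one end-to-end business flow:
# create an account, request a transaction, cancel the account.
import requests

BASE_URL = "http://integration.example.internal"  # assumed integration environment


def test_create_transact_and_cancel_account():
    account = requests.post(f"{BASE_URL}/accounts", json={"owner": "e2e-test"}).json()

    transaction = requests.post(
        f"{BASE_URL}/accounts/{account['id']}/transactions",
        json={"amount": 10.0},
    )
    assert transaction.status_code == 201

    cancellation = requests.delete(f"{BASE_URL}/accounts/{account['id']}")
    assert cancellation.status_code == 204
```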
Running tests
Merging code into the development branch should trigger a series of actions: obviously building and generating the artefacts, then spinning up your Docker containers or whatever alternative you work with. That environment should be configured to work with the mocks and not with other services (one way to wire this is sketched below).
Once the environment is up, we can run our tests and, if everything is green, we can accept the merge and tear the environment down.
The tests to run are the ones testing our service and our contract, plus the check that we cover all the scenarios configured in our mock.
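One simple way to wire the environment to the mocks, sketched here as a pytest fixture: the mock is started as a subprocess and our service reads its dependency’s base URL from an environment variable (ACCOUNTS_SERVICE_URL is an assumed name), so this environment never calls the real neighbouring services.

```python
# conftest.py -- sketch: start the accounts mock for the test run and point
# our service at it through an environment variable instead of at the real
# accounts service. ACCOUNTS_SERVICE_URL is an assumed configuration key.
import os
import subprocess
import time

import pytest


@pytest.fixture(scope="session", autouse=True)
def accounts_mock():
    process = subprocess.Popen(["python", "mock_accounts_service.py"])
    os.environ["ACCOUNTS_SERVICE_URL"] = "http://localhost:8081"
    time.sleep(1)  # naive wait for the mock to start listening
    yield
    process.terminate()
    process.wait()
```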
The last step is to deploy our service to our integration environment and run all our E2E tests. Once everything is green, we can deploy to production at any moment. Depending on our needs, this can happen automatically, or we can wait until operations, or whoever is in charge of the decision, gives the go-ahead.
With all this we can ensure that:
- Our service works as expected
- Any service working with ours will be able to work in all the scenarios
- We will be able to integrate with all the services we work with
- The full E2E platform is working
- The number of tests to be executed decreases dramatically, as does the time they take to run
- Test coverage increases
This approach requires some upfront work, especially if our platform didn’t start out this way, but it pays off by increasing quality, predictability and velocity. It also improves time to market, as everything merged has already been tested and can be deployed directly, which means developers don’t have to go back to a task they finished days ago, putting another one on hold, to fix bugs a tester found.
We need to take the infrastructure cost into consideration, but the approach increases productivity, and in the cloud it is now really easy to spin up environments only when we need them, rationalising their use and reducing the cost.