Why '200' is not always OK!
So you've got an extensive automated API test suite, and all your major endpoints are covered by tests that run on every code commit, right?
Better check those tests to make sure that they are not just glorified Health Checks!
A lot of effort can be misplaced when testing web services: so much goes into orchestrating the tooling or code needed to act as the service's client that we often forget what we're actually trying to test, and simply assert that the response status code was OK, i.e. equal to 200.
This is fine for Health Checks and post-deployment warm-ups, but critically, we will be missing the behavioural concerns and functional outcomes that interrogate the transactional health of our systems.
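To make the difference concrete, here is a minimal sketch of the two levels of assertion. The endpoint, payload and field names are illustrative assumptions, not from any real service; a stub stands in for the HTTP call so the example is self-contained.

```python
def fake_create_order(payload):
    """Stand-in for POST /orders so the sketch runs without a live service."""
    return {
        "status_code": 201,
        "body": {"id": 42, "sku": payload["sku"], "quantity": payload["quantity"]},
    }

response = fake_create_order({"sku": "ABC-123", "quantity": 2})

# A "glorified Health Check" stops here:
assert response["status_code"] == 201

# Behavioural assertions go on to interrogate the functional outcome:
order = response["body"]
assert order["sku"] == "ABC-123"      # the service honoured what we sent
assert order["quantity"] == 2
assert "id" in order                  # the service committed to the order
```

The status-code assertion alone would pass even if the service silently dropped the order; the behavioural assertions would not.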
This is why integration tests so often break, or fail to bring any real confidence in the software, and why you're still finding critical defects despite high levels of Automation Coverage.
Sound familiar?
It's important when testing service interactions that the "tester" does not test, or rather does not rely heavily on, implementation details, as these can change; instead, test "behaviours", because the core facilities of a service should not change their fundamental functionality.
This is why your UI tests break every time your DOM structure changes or, worse, when you change content such as text and links that the tests rely on being the same every time.
This tightly couples you to details of your system that should have the flexibility to change, but that now force you to rewrite your tests or, worse, ignore them!
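The same coupling shows up in API tests that assert on an entire response body. A brief sketch, using an assumed response shape, of a brittle assertion versus a behavioural one:

```python
# Assumed response body for an illustrative GET /accounts/7 call.
response_body = {
    "id": 7,
    "status": "ACTIVE",
    "created_at": "2021-06-01T12:00:00Z",  # incidental implementation detail
    "display_name": "Widgets Ltd",         # content free to change
}

# Brittle: full-body equality couples the test to every field and wording,
# so any harmless change (a new field, reworded text) breaks it.
brittle_expectation = {"id": 7, "status": "ACTIVE"}
assert response_body != brittle_expectation  # fails as soon as anything else exists

# Behavioural: assert only what the consumer actually relies on.
assert response_body["status"] == "ACTIVE"
assert isinstance(response_body["id"], int)
```

The behavioural assertions keep passing when incidental details change, which is exactly the flexibility the paragraph above argues for.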
Again.. sound familiar?
The question you're asking of the system under test is: "given a certain system state, will you behave in the manner I expect?", rather than "assuring" that the system follows a certain prescribed procedure.
Essentially you want your API test to serve as a way to ensure the agreed contract between service provider and service consumer remains intact, so consumers can build out their own features secure in the knowledge that the underlying service behaviours remain the same.
What this means in practice, is that I'm testing that my assumption about a behavioural outcome will remain true, by not coupling myself to a specific implementation of said outcome.
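One lightweight way to express that provider/consumer contract, without tools like Pact (mentioned here only as a well-known option, not something the article prescribes), is a sketch like the following. The contract shape and field names are assumptions for illustration.

```python
# The fields and types this consumer depends on - nothing more.
CONSUMER_CONTRACT = {"id": int, "status": str, "quantity": int}

def satisfies_contract(body, contract):
    """True if the response body still provides every field the consumer needs,
    with the expected type. Extra provider fields are deliberately tolerated."""
    return all(key in body and isinstance(body[key], t)
               for key, t in contract.items())

# A provider may add fields freely without breaking consumers:
assert satisfies_contract(
    {"id": 1, "status": "SHIPPED", "quantity": 3, "tracking_url": "..."},
    CONSUMER_CONTRACT,
)

# But removing or retyping a field the consumer relies on breaks the contract:
assert not satisfies_contract({"id": "1", "status": "SHIPPED"}, CONSUMER_CONTRACT)
```

Note the asymmetry: the check pins down only the behavioural outcome the consumer assumes, leaving the provider free to evolve everything else.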
But as with all types of testing we should be wary and make sure to always keep in mind, there's a difference between "testing".... and "checking".
How strong is your organisation's middleware Test Automation strategy?