Creating a Test Plan
Does this sound familiar?
As a developer in a dev-team
- Have you ever wondered whether you've created sufficient test cases? Maybe all of your tests are spread out over different test tactics (unit tests, integration tests, browser automation tests...), which makes it harder for you to keep an overview?
- Have you ever struggled to decide whether or not to mock out a dependency? Maybe you weren't sure which criteria to use for that decision.
As a stakeholder of a dev-team
- Have you ever wanted to get a view of what kind of tests your dev-teams are creating for your project? What happy paths are covered, what edge cases are covered, whether load tests are part of the plan...
- Have you ever wanted to know how much of your functionality is tested using, for example, browser automation versus other mechanisms? Maybe because you wanted to get a feel for the long-term impact of all those tests.
These and many more questions can be answered using a Test Plan. In this article, I want to explain how to create one and what developers or stakeholders can gain from it. Hopefully, I can also show that a Test Plan can be created with very little effort. Let's dive right in.
How to create a Test Plan?
Creating a test plan is a three-step process:
- Step 1: Define your Test Tactics and their Rules
- Step 2: Apply your Test Tactics to your Design Diagram
- Step 3: Create a Test Map
Step 1: Define your Test Tactics and their Rules
The first step is to choose the Test Tactics that are relevant for your team, organization, product, project, or service, and then clearly define the rules for each tactic. Focus specifically on mocking, on when the tests run, and on how they get their data. Test Tactics come in two different flavors:
Functional Testing Tactics
- Unit tests
- Integration tests
- Regression tests
- Acceptance tests
- UI Automation
- ...
Non-Functional Testing Tactics
- Stress tests
- Load tests
- DDoS tests
- Performance Regression tests
- Browser Compatibility tests
- ...
Here are some examples of how you can define your Test Tactics:
- Unit Tests test individual components and have no external dependencies (e.g. on data, databases, other services…). They run on every check-in.
- Integration Tests can expect some of the dependencies within their boundaries to be there. Dependencies that go outside of those boundaries are mocked out. Any data required by these tests has to be created on-the-fly. They run in our CI pipeline and have to run green before a merge.
- End-to-end UI tests are UI automation tests with all the dependencies in place. Nothing is mocked out. These tests run once a day on the master branch.
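To make a definition like the first one tangible, here's a minimal sketch in Python (pytest-style, using unittest.mock). All the names, like PriceCalculator and rate_for, are made up for illustration:

```python
# A minimal sketch of the unit-test rule above: the component under test
# runs with no external dependencies, so the injected tax service is
# mocked. PriceCalculator and rate_for are hypothetical names.
from unittest.mock import Mock

class PriceCalculator:
    """Computes a gross price; the tax service is an injected dependency."""
    def __init__(self, tax_service):
        self.tax_service = tax_service

    def gross_price(self, net: float) -> float:
        return net * (1 + self.tax_service.rate_for("default"))

def test_gross_price_unit():
    tax_service = Mock()                      # no external dependency: mocked
    tax_service.rate_for.return_value = 0.25
    assert PriceCalculator(tax_service).gross_price(100.0) == 125.0
```

Because the test has no external dependencies at all, it is cheap enough to run on every check-in, exactly as the definition requires.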
These are just some examples. Use any tactics and definitions that make sense for your organization, requirements, maturity, teams, and/or project.
But make sure that you have a definition! If you don't, you risk different people or different teams assigning different meanings to "a unit test" or "a browser automation test", which makes it much more difficult for a stakeholder to assess the impact of these tests across teams.
Ideally, you have a Testing Strategy, rolled out across your entire organization, that contains definitions of the Test Tactics you want to use, and you ask all your teams to align with those definitions.
Step 2: Apply your Test Tactics to your Design Diagram
Once you have a list of all your tactics, the next step is for the dev-team to choose which tactics will be used in the project and to visualize them in their Design Diagram.
First, come up with a way to visualize each Test Tactic. An example of how to do that is shown in the next drawing (note that I stole this idea from an article by Toby Clemson on martinfowler.com):
Next, starting from your design diagram, highlight for each component the different Test Tactics that you will use to test it. For each tactic, draw its boundaries. This is important because a boundary determines what can be, should be, or can't be mocked out. Basically:
- any dependency within the boundary can be mocked out but doesn't have to be
- any dependency outside of the boundary has to be mocked out
A very simple example of a Design Diagram containing its Test Tactics can be seen here:
From this simple diagram, you can immediately see that:
- Integration Tests of "Component 1" have to mock out any calls to "Component 2" because it lies outside their boundary.
- Integration Tests of "Component 1" should not mock out the database and the storage, because the only other tests we have are Unit Tests, and those won't touch them. So if we don't test them here, they wouldn't be tested at all.
- "Component 2" is only covered using Unit Tests. This may or may not be ok.
- We may lack contract tests between "Component 1" and "Component 2".
- We may lack any tests covering both "Component 1" and "Component 2" in one integrated test.
The idea behind this drawing is that you can make assessments like this by simply looking at it for a minute!
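To make the first two observations concrete, here's a minimal sketch of such an Integration Test in Python, assuming hypothetical stand-ins for everything: ComponentOne for "Component 1", a mock for "Component 2", and an in-memory SQLite database for the real database:

```python
# A sketch of an Integration Test for "Component 1", following the boundary
# rules above: "Component 2" lies outside the boundary and must be mocked;
# the database lies inside the boundary and is exercised for real.
import sqlite3
from unittest.mock import Mock

class ComponentOne:
    def __init__(self, db, component_two):
        self.db = db                          # inside the boundary
        self.component_two = component_two    # outside the boundary

    def process(self, item: str) -> None:
        enriched = self.component_two.enrich(item)
        self.db.execute("INSERT INTO results (value) VALUES (?)", (enriched,))

def test_component_one_stores_enriched_result():
    # Test data is created on-the-fly, as the integration-test rules require.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE results (value TEXT)")

    component_two = Mock()                    # must be mocked: outside boundary
    component_two.enrich.return_value = "ITEM"

    ComponentOne(db, component_two).process("item")

    assert db.execute("SELECT value FROM results").fetchall() == [("ITEM",)]
```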
Step 3: Create a Test Map
Finally, create a table containing all your components as columns and your tactics as rows. In every cell, specify all the tests that you will be creating for that tactic and that component. A simple example follows; the Example section below will make it much more real:
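For instance, reusing the two components from the diagram above (the individual test descriptions are invented purely for illustration):

| Tactic | Component 1 | Component 2 |
| --- | --- | --- |
| Unit Tests | parsing: happy path, empty input | mapping rules: happy path, unknown value |
| Integration Tests | store result: happy path (real database, "Component 2" mocked) | (none) |

Even a toy map like this makes the Unit-Tests-only coverage of "Component 2" jump out immediately.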
Naturally, this doesn't work if you are adding thousands of tests. But if you are, then I would argue that the project is too big and has to be split into multiple smaller deliverables.
Example
Enough theory, let's make it real. For our example, let's use the same feature we've used in The merits of Design Diagrams. As a recap, here's the description of the feature:
"We have an e-commerce site in which customers can write reviews about our products. So far, we have been reviewing these comments manually but our site became so popular that we can't process the load anymore, so we want to automate this process more. Every time a review is added to a product, we want to analyze automatically whether it is a positive or a negative comment. Products that got too many bad reviews are then flagged accordingly so that our product managers are made aware of this."
To create our Test Plan for this, we will first extend our Design Diagram with our Test Tactics (clearly highlighting our boundaries):
A Design Diagram with clear boundaries is extremely important to see which Test Tactics we will use to test which components. Remember, the boundaries indicate the playing field of each test and determine which dependencies need to be mocked out.
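To give a feel for one concrete test from such a plan, here's a hedged sketch of a unit test for the flagging rule, with all dependencies mocked as the unit-test definition above demands. ReviewAnalyzer, the sentiment API, and the threshold of three bad reviews are all hypothetical:

```python
# A unit test of the flagging rule: every dependency is mocked, so the
# test only exercises the rule itself. All names are hypothetical.
from unittest.mock import Mock

class ReviewAnalyzer:
    FLAG_THRESHOLD = 3  # hypothetical: flag after 3 negative reviews

    def __init__(self, sentiment_api, product_repo):
        self.sentiment_api = sentiment_api
        self.product_repo = product_repo

    def on_review_added(self, product_id: str, text: str) -> None:
        if self.sentiment_api.score(text) < 0.5:
            count = self.product_repo.increment_negative(product_id)
            if count >= self.FLAG_THRESHOLD:
                self.product_repo.flag(product_id)

def test_product_flagged_after_too_many_bad_reviews():
    sentiment_api = Mock()                    # external service: mocked
    sentiment_api.score.return_value = 0.1    # every review scores "negative"
    repo = Mock()
    repo.increment_negative.side_effect = [1, 2, 3]

    analyzer = ReviewAnalyzer(sentiment_api, repo)
    for _ in range(3):
        analyzer.on_review_added("p-1", "terrible")

    repo.flag.assert_called_once_with("p-1")  # flagged exactly once, at 3
```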
Next, we create our Test Map:
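As an illustration of what such a map could look like for this feature (the component names and test descriptions are hypothetical, but the spread matches the discussion below):

| Tactic | Review Service | Sentiment Analyzer | Product Flagger |
| --- | --- | --- | --- |
| Unit Tests | review validation: empty text | (none) | threshold rule: boundary values |
| Integration Tests | add review: happy path, duplicate review | positive review, negative review, mixed wording (sentiment API mocked) | product flagged after too many bad reviews (real database) |
| UI Tests | (none) | (none) | (none) |
| Load Tests | (none) | (none) | (none) |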
The table is important to visualize what kind of tests we will be writing, how many we will write, which happy paths and edge cases we will cover, and which tactics we will use each time. This way we can immediately see which tactics are underused or overused. In our example, you can see that we are mostly relying on Integration Tests and have no UI Tests or Load Tests.
What are the benefits for me as a developer?
- Identify Gaps: without a test plan, it may be difficult to know for sure whether you have gaps in your tests because they may all be spread out over multiple tactics (like unit tests, integration tests, contract tests...). A test plan allows you to create a simple overview in which you can visually identify any gaps that you may have.
- Identify Possible Optimizations: useful to see where you may be using higher-level testing tactics where lower-level tactics would suffice (e.g. Integration Tests where Unit Tests would do).
- Understand What Needs to Be Mocked: making this drawing helps you to understand why one dependency has to be mocked out while another merely could be.
What's in it for the organization?
- Understand Long-Term Impact. Dev-teams that focus mostly on UI automation will have a different long-term cost impact and coverage than teams that focus mostly on unit tests. If, in your organization, you have a desire to shift your testing to the left, then having this very easy-to-read overview may allow you to quickly see to what extent a project is aligned with your strategy.
- Understand Your Coverage. Useful for stakeholders to understand the coverage you will get on this feature. It won't give you a number, but you'll know which edge cases and happy paths are covered and where gaps remain... If your test tactics include non-functional tactics like load tests or stress tests, then the overview will show that as well.
When to create a Test Plan?
Create it before development starts, or very early in the development process. The sooner you have your test plan, the sooner you can anticipate how your tests will impact your code, and you can immediately update your code accordingly (e.g. by ensuring some dependency can be mocked out).
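As a minimal sketch of that kind of early change (EmailGateway and Notifier are hypothetical names): instead of constructing its dependency internally, a component accepts it from the outside, which is what makes mocking possible later.

```python
# A minimal sketch of making a dependency injectable so it can be mocked
# in tests later. EmailGateway and Notifier are hypothetical names.
class EmailGateway:
    def send(self, to: str, body: str) -> None:
        ...  # would talk to a real mail server in production

class Notifier:
    # Accepting the gateway in the constructor (rather than constructing
    # it internally) is what allows tests to substitute a mock.
    def __init__(self, gateway: EmailGateway):
        self.gateway = gateway

    def notify(self, user: str) -> None:
        self.gateway.send(user, "One of your products was flagged")
```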
Also, the sooner there is a test plan, the sooner the progress on your tests can be easily shared with other people. For example, teams can use the table to communicate regularly to their stakeholders what is done and what is still to do.
Where does it fail?
I have recently started to ask some of our teams to create such a Test Plan because we wanted to address some of the questions raised earlier. But it's not perfect. The Test Plan as proposed here only has value during the lifetime of the project; it lacks long-term value in a product-development context. I'm still searching for a good solution there. If you have any ideas, feel free to add them in the comments!
Conclusion
Testing should not be an afterthought! If we simply write our code and then afterward try to figure out how it could be tested, we are imposing a cost on the organization: the tests we write are likely going to be more expensive to maintain and run, and may even provide less coverage.
Creating a Test Plan as described here should be something that teams can create in a brainstorming session in under two hours. It should not be a lot of work when done effectively. But, when done early, it forces a team to think about how they are going to test certain functionality and that may change how they develop certain components.
It also allows stakeholders of a project, from the get-go, to have a view of where the team will be taking the tests and what long-term debt will be introduced to maintain them. If there are any misalignments, they can be discussed early on.
Even if you don't see the value in any of these benefits, maybe the one benefit that matters to you is this: a test plan helps to create awareness in the team about their testing. It makes writing tests a more conscious process.