Why Was This Check Created?
Richard Bradshaw
Industry leader in software testing and quality engineering, with a strong focus on automation and whole-team quality approaches. I'm a tester, automator, keynote speaker, teacher, strategist, leader & a friendly human.
As I've been thinking more about Checking and Testing, and how to get them working harmoniously, I'm wondering if we are missing something from our checks. This post will focus on automated checks, but I believe the same applies to non-automated checks.
Some teams have become really adept at writing automated checks. They are following good practices. Classes, methods and objects are all well named, and it's obvious what they do. Assertions are clear, and have a well-structured message for when they fail. There are good layers of abstraction and code re-use. They are performant, execute fast, and are designed to reduce flakiness. It all sounds rather good.
But why is that well-designed, well-written, easy-to-read check there? Why does it exist? Why was this check written, over all the other possible checks? I can read the check, it's well written as mentioned, and I can clearly see what it is checking, but that is all I have. How do I know that the steps and the assertion(s) there match the initial intention for it? What was it about this check, this system behaviour, that was worthy of having an automated check created for it? I don't know that.
Why should we care about the why? I believe the results of automated checks are impacting the way we test. I believe this is especially true in an environment that has adopted continuous integration. Before you test (and by test here I mean testing once the developer believes she is "code complete"), all the automated checks are run, and the build is either red or green. A generalisation for now, as I am still giving this more thought, but when the build is red we tend to immediately focus on that, chasing green. We will then usually read over the other checks in that area to see what else is covered, design and execute some tests to see what else we can learn, and then return to the new piece of work. When the build is green, we tend to focus our testing efforts on and around the new piece of work. As I said, it's a generalisation for now, and I know I/we don't always do this, but hopefully most can relate.
I believe we aren't always aware of how much trust we put in our automated checks, and we place all that trust without always knowing why a check exists or how important it is. We all have a lot of knowledge about our systems, and a lot of that knowledge is interwoven. This is why we create automated checks: we can't remember everything. We need to make some of this tacit knowledge explicit. It's also why we create mindmaps and checklists: to prompt us to remember things, to consider things.
If the why was also included, I feel it would aid us with test design. It would also aid us when reviewing our automated checks, when deciding whether to amend some or delete some. Reviewing your checks and questioning their value is something I encourage teams to do regularly. Just because a check is green doesn't mean it helped you in any way, or that it added any value to your testing efforts. Going back to test design, let's say a check failed that had the following why message somewhere: "This check was created because we had a major issue in live where the system did X, which led to Y downtime". If I saw such a failed check, I believe I would probably do more testing in that area than if that message wasn't there. If I was reviewing my checks and saw such a message, I would be able to assess its value a lot more easily and quickly.
Here are several ways we could add the why in:
- Code Comment - No doubt a lot of you have turned your nose up reading that. But I'm not talking about using comments to explain what the code does; as stated, we can already read that. I'm talking about a few lines above a check explaining why it's been created (see the sketch after this list).
- BDD Tool Lovers - While I discourage people from using BDD tools to write automated checks, especially in places that aren't practising BDD, I know many of you are using such tools. So you could add the why to the scenario section of the feature file.
- Commit message - Perhaps we ensure we write excellent commit messages when new checks are created, clearly indicating the why there. We could then look at the commit history of the file. This has flaws if checks are moved around a lot during refactoring.
- External document - Or perhaps we could store the why in a document somewhere. Perhaps a mindmap with IDs for the checks.
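To make the first option concrete, here is a minimal sketch of a why comment sitting above a check. The incident, URL and check are invented for illustration; this is not code from the post, just one way the why could live right next to the check itself.

```python
import requests

# WHY THIS CHECK EXISTS:
# Created after a (purely illustrative) live incident where expired session
# tokens were silently accepted and users could see another customer's data.
# The check pins that behaviour down: an expired token must be rejected.
def test_expired_token_is_rejected():
    # The URL and token value are placeholders, not a real system.
    response = requests.get(
        "https://example.test/api/account",
        headers={"Authorization": "Bearer expired-token-for-illustration"},
    )
    assert response.status_code == 401, (
        "Expired tokens must be rejected with a 401; accepting them is what "
        "caused the incident this check was created for."
    )
```

The same few lines would work equally well as the scenario description in a feature file, or as the body of the commit message that introduced the check.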
Even though my thoughts are early days, I don't believe adding the why is a huge deal; the fact you are creating the check means you already know why, it's just not there later in the check's life. Or available for new team members to read. Or anyone. But I do believe it could play a significant part in assisting our testing efforts, especially in check reviews and test design.
These are some early thoughts; I just had an urge to write something after several conversations at Euro Testing Conference on this subject. I would love to hear some of your thoughts if you have the time to engage.
Thanks.
The future of automated quality management
To your reply of 12 hours ago... The point of self-documenting check steps isn't to help you develop the checks, although it can help you do that; it's to help transparency around the team beyond just those in the QA role, and to radically increase the value of the artifacts of what the check is doing. As I understand what you're talking about, visibility into what the checks are doing is limited to people who are in the QA role and who take the time to look, and the value of the check results doesn't last very long because the code can always change (and often does) and nobody would know without looking at the history of code changes or (as you say) stepping through the code. So, doing the checks traditionally as you describe, if someone wants to do some manual testing, that person would have to ask you what is verified if they do not want to repeat a bunch of tedious verifications manually. MetaAutomation shows how to do quality automation that makes all this transparent around the team; there is no need to look at the code, given that the people writing the checks follow some simple rules in their code. Transparency is important for, e.g., DevOps, which needs fast, reliable and trustworthy quality information, or geographically distributed teams separated by time zones, language and culture.
Test Automation Engineer at BJSS
In Python you can use docstrings; these can be read using Sphinx and used to create a document explaining why each test exists. I am sure there are equivalent structures in other languages.
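A minimal sketch of that suggestion, assuming pytest-style tests; the test, its helper and the why text are all invented, but Sphinx (for example via autodoc) can collect docstrings like this into a generated document.

```python
def run_bulk_import_with_bad_row():
    """Placeholder standing in for the real behaviour under test."""
    return 0, True


def test_bulk_import_rolls_back_on_partial_failure():
    """Why this check exists.

    Created because a partially failed bulk import once left the catalogue in
    an inconsistent state (an invented scenario for illustration). The check
    verifies that the import is all-or-nothing, so anyone reviewing it later
    can judge its value without digging through commit history.
    """
    imported_rows, rolled_back = run_bulk_import_with_bad_row()
    assert imported_rows == 0
    assert rolled_back is True
```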
The future of automated quality management
I see in your reply to my question (sorry, I can't seem to add to that thread) "it's not about understanding what the check is doing, I can clearly see what they are doing..." How is it that you see what the check is doing? Stepping through the code in a debug session, or staring at the screen, or something else?
The future of automated quality management
This is a real problem. I like that you note that some form of documentation can help with this, and I agree, and would add that this is important for communicating around the team. It sounds like you're saying that the checks are created almost at random, although maybe that's not what you meant. There's a better way: itemize the business requirements as best you can, use those to identify *and prioritize* the functional requirements in your design, and write atomic checks to verify the functional requirements! This means that traceability is assured. Even better, write the checks with self-documenting and hierarchical steps. No need for code comments. Your check step hierarchy is in the artifact of the check run, with a status of pass, fail, or blocked for each step. Now the manual testers (or whoever cares to look; manual testing should be done by everybody on the team anyway) know exactly what is already verified and what is not. See MetaAutomation DOT net for more information. The 2nd edition of the book on MetaAutomation should be out in about a month.
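This is not MetaAutomation itself, just a rough sketch of the self-documenting hierarchical steps idea the comment describes: each step records its name and a pass/fail/blocked status, so the artifact of a check run explains what was verified without reading the code. All names and steps below are invented.

```python
from contextlib import contextmanager


class CheckRun:
    """Toy recorder for named, nested check steps and their statuses."""

    def __init__(self):
        self.steps = []      # [depth, name, status] entries in execution order
        self._depth = 0
        self._blocked = False

    @contextmanager
    def step(self, name):
        entry = [self._depth, name, "blocked" if self._blocked else "pass"]
        self.steps.append(entry)
        self._depth += 1
        try:
            yield
        except AssertionError:
            if not self._blocked:
                entry[2] = "fail"
                self._blocked = True   # later steps report as blocked, not failed
        finally:
            self._depth -= 1

    def report(self):
        return "\n".join(
            f"{'  ' * depth}{name}: {status}" for depth, name, status in self.steps
        )


# Usage: the step names double as documentation of what the check verifies.
run = CheckRun()
with run.step("Set up a customer with an in-stock item in the basket"):
    with run.step("Customer account exists"):
        assert True
with run.step("Check out the basket"):
    assert 5 - 1 == 3   # deliberately wrong, to show a failing step
with run.step("Confirmation email is queued"):
    assert True         # recorded as blocked because an earlier step failed
print(run.report())
```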
Leading Next Gen Engineering Across UKIA | Driving Industry Transformation Through Engineering Excellence and Innovation
Interesting - I'm yet to work on a project where this would rank on the worry list, but like you say, it's probably something that's taken for granted. For me, the reporting layer that sits on top of the check should answer the 'why'. Grouping checks into suites and then applying one of the suggestions you have made would be my strategy. More often than not, the importance of the 'why' tends to be to explain what the check (or suite of checks) is NOT doing. I'm definitely going to end up thinking about this more now, so thanks for sharing. Steven Osman Paul M. Stephen Williams FYI
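One way to read that suggestion (my illustration, not the commenter's tooling): keep the why at suite level so the reporting layer can print it, including what the suite does not cover, next to the grouped results. The suite, checks and messages below are invented.

```python
# Suite-level "why" that a reporting layer can surface beside the results.
SUITES = {
    "checkout-regression": {
        "why": (
            "Covers the checkout flow that caused a revenue-impacting outage "
            "in a past release (illustrative). Deliberately does NOT cover "
            "payment-provider failover."
        ),
        "checks": ["test_basket_total", "test_discount_applied"],
    },
}


def report(results):
    """results maps check name -> 'pass'/'fail'; print them grouped per suite."""
    for suite, info in SUITES.items():
        print(f"Suite: {suite}")
        print(f"  Why: {info['why']}")
        for check in info["checks"]:
            print(f"  {check}: {results.get(check, 'not run')}")


report({"test_basket_total": "pass", "test_discount_applied": "fail"})
```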