How QA can build quality into your software pipeline
While I was hosting a software testing-themed dinner-and-discussion group for the Ministry of Testing - Boston a few months ago, two questions kept coming up: What does quality mean when writing good automated tests? And as automation test developers, how can we assure the quality of our own code?
I wasn't surprised. Until a few years ago, coding knowledge wasn't a requirement for software testers unless the tester was interested in switching from manual testing to test automation. Nowadays? It seems that every software QA engineer position calls for people who have coding experience.
Before I address the question about the quality of automated tests, you need to understand two important areas on which software testers need to focus: the quality of the end-user experience and the quality of the software development process itself.
How to verify the quality of your tests
The main role of a QA engineer on an agile software development team is to be a user advocate. The QA engineer's job is to create a mental model of the customer based on the use case and carry that model throughout the software development process.
You explore the software application under test the same way the customer will operate it, with mouse and keyboard or fingers on a touchscreen, to see the product through the eyes of the customer. Testers need to cover the entire customer base, from the users of the software product to administrators of the product and every possible role in between. Testers then share that gathered knowledge, and the unique perspective it gives them, with the rest of the development team.
The software developers' job is to take the project requirements and turn them into a product. But their knowledge of the inner workings of the product may inhibit them from seeing and experiencing the product as the customer sees and experiences it. That's the QA engineer's main strength.
While testing, software testers make sure to ask themselves:
- Have I put the product through its paces exactly as the customer would use it, on all of the browsers, platforms, and mobile devices supported?
- Have I examined the logs to see what browsers and platforms our customers are using when accessing our product? If our customers are using Internet Explorer 8 (commonly used by healthcare industry and financial institution employees), have I tested with that browser so I see the product as our customers see it?
- When new features have been added to the product, have I checked to make sure that the old functionality still works?
- Have I done database checking to verify that the input entered through the user interface is what's actually being stored in the database? (See the first sketch after this list.)
- How does the web page get its data? Is there a web service or RESTful API that is retrieving data for the page? Do I have tests for that layer? (See Guru99 for some examples, and the second sketch after this list.)
- When mobile testing, have I tested what happens when the product is placed in portrait and landscape modes? Or the different sizes and screen resolutions? Or what happens with the performance of the app when a user goes from Wi-Fi to LTE to 4G to only one bar, and then to offline mode?
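To make the database-checking question concrete, here is a minimal sketch in Java, assuming a JDBC-accessible test database and hypothetical credentials, table, and column names (accounts, display_name); substitute whatever your product actually stores:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

import org.junit.Test;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

public class AccountStorageTest {

    // Hypothetical connection string; point this at your product's test database.
    private static final String DB_URL = "jdbc:postgresql://localhost:5432/appdb";

    @Test
    public void displayNameEnteredInUiIsStoredVerbatim() throws Exception {
        String nameTypedIntoUi = "T.J. Maher"; // value entered via the UI earlier in the test

        try (Connection conn = DriverManager.getConnection(DB_URL, "qa_user", "qa_password");
             PreparedStatement query = conn.prepareStatement(
                     "SELECT display_name FROM accounts WHERE username = ?")) {
            query.setString(1, "tmaher");
            try (ResultSet row = query.executeQuery()) {
                assertTrue("No account row was written", row.next());
                assertEquals("UI input should be stored unaltered",
                        nameTypedIntoUi, row.getString("display_name"));
            }
        }
    }
}
```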
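And for the API-layer question, here is a small sketch using Java's built-in HttpClient (Java 11+) against a hypothetical endpoint; a library such as REST Assured would serve the same purpose:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.Test;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

public class AccountApiTest {

    // Hypothetical endpoint; substitute the service that actually feeds your page.
    private static final String ACCOUNT_URL = "https://example.com/api/accounts/42";

    @Test
    public void accountEndpointReturnsExpectedPayload() throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(ACCOUNT_URL))
                .header("Accept", "application/json")
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        assertEquals("API should answer with HTTP 200", 200, response.statusCode());
        assertTrue("Payload should contain the account id",
                response.body().contains("\"id\":42"));
    }
}
```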
These are just a few examples; I could cite many others. Quality, though, isn't simply a testing activity. As the adage says, you can't test quality into a product; it must be built in. Reviewing the software product throughout the many stages of its growth is an important part, but it is not the only part.
Quality is achieved when a product meets or exceeds the wants, needs, and expectations of all stakeholders invested in the product, not just the customer. You can't stop after checking the quality of the user experience. You need to make your contribution to the quality of the software development process.
How to verify the quality of the software development process
To assure themselves that they are on the right track to create robust tests, QA engineers create test plans, test cases, and test scripts from the requirements and use cases provided by business analysts and product owners.
As a QA engineer, here are the questions you should be asking:
- Do I examine the validity of the use cases and requirements of the software product as it is being developed, and even the validity of my own assumptions about the product?
- Do I check that the requirements for a new feature are clear, concise, and testable, with acceptance criteria listed? Are there any assumptions that need to be spelled out in the requirements? Are the developers and QA engineers on the same page regarding what the product should be?
- Do I review a rough draft of my test plan with my fellow QA engineers?
- Do I review a rough draft of my test plan with other stakeholders, such as the product owner, the user experience person, and the developers, soliciting their feedback to see if there is any missing information? It may be that I, the developers, and the product owner have different interpretations of what the requirements mean.
- Do the tests I have written match all requirements? Have I checked to see whether the requirements for the tests changed and adjusted the tests accordingly?
- Before marking a feature as done, do I do a one-on-one demo with the product owner to make sure the feature built is the one the owner wanted?
By going through this process you can make major contributions to the quality of the project. But quality is not just QA's job; it is everyone's job. Everyone has a part to play, and QA shepherds the process.
Develop a good working relationship with team members to improve quality
Before jumping to how developers interpret quality, you need to understand two other dimensions of quality: how well the team works together and how well it communicates.
As Lisa Crispin and Janet Gregory mention in the book Agile Testing: A Practical Guide for Testers and Agile Teams, QA should avoid being seen as the "quality police":
"We've talked to testers who brag about accomplishments such as going over a development manager's head to force a programmer to follow coding standards. We've even heard of testers who spend their time writing bugs about requirements that are not up to their standards. This kind of attitude won't fly on a collaborative agile team. It fosters antagonistic behavior.
"Another risk of the 'Quality Police' role is that the team doesn't buy into the concept of building quality in, and the programmers start using testers as a safety net. The team starts communicating through the bug-tracking system, which isn't a very effective means of communicating, so the team never 'jells.'"
QA should not be a team apart. QA engineers should be embedded within the agile software development team and become as much a part of the team as everyone else, Crispin and Gregory write.
"If you're a tester and you're not invited to attend planning sessions, stand-ups, or design meetings, you might be in a situation where testers are viewed as somewhat apart from the development team. If you are invited to these meetings but you're not speaking up, then you're probably creating a perception that you aren't really part of the team ...
"If this is your situation, your team is at risk. Hidden assumptions are likely to go undetected until late in the release cycle. Ripple effects of a story on other parts of the system are not identified until it's too late ... Communication might break down, and it'll be hard to keep up with what programmers and customers are doing. The team risks being divided in an unhealthy way between developers and testers, and there's more potential that the development team will be isolated from the customer team."
The relationship between development and QA shouldn't be like the relationship between an artist and an art critic. Rather, it should be like one between a writer and editor.
So far, I've gone over how, before you even think about automation, you need to verify:
- The quality of tests.
- The quality of the software development process.
- The quality of how the team works together.
- Quality as seen through the eyes of the customer.
- Quality as seen through the eyes of the other stakeholders.
Follow the software QA process above before you think about adding another layer to your testing.
Verify the quality of your automated tests
After following the above process, you should have a library of robust tests suitable for a regression test suite that you can use to validate that the old functionality still works after new features have been added to the product.
Do you have in your regression suite some tests that are unchanging? Do they execute the same way every time, and are they unlikely to change over the next six months? If so, these tests are excellent candidates for automation. But if you are spinning your wheels rewriting the same automated tests every two weeks, you aren't saving time.
Here are a few tips to improve the quality of your automated tests and the automation framework that runs them:
- Is each test independent of the others, with its own setup and teardown methods? Every framework, whether you use TestNG, JUnit, or the Spock Framework to execute tests, has before methods, where you set up the data for each test and spin up a new browser before executing the test, and after methods to close the browser. (The sketch after this list shows one way to structure this.)
- Have you grouped similar tests into a suite?
- Have you encapsulated the significant web elements on a page in page objects—the classes representing a web page—with public methods, enabling you to enter text into a specific text box or to check the text on a heading?
- Did you make sure not to use Thread.sleep or any other hard-coded sleeps? If you wait ten seconds for a slow-loading component to show up, that adds ten seconds of lag to each and every test. An explicit wait, as in the sketch below, proceeds as soon as the component appears.
- Do you have tests set up to continuously run, such as a short suite of smoke tests to run before developers merge their code, regression tests that run after every QA build, or a full regression test that runs overnight and tests with every browser and platform required?
- Do you have reports that clearly show what tests are executing, how each test is executing, and what passed and what failed?
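Several of these tips fit into one short sketch. Below is a minimal TestNG-plus-Selenium example showing an independent test with its own setup and teardown, a "smoke" group tag for continuous runs, a page object, and an explicit wait in place of Thread.sleep. The URL, locators, and credentials are hypothetical, and the WebDriverWait constructor shown is the Selenium 4 form:

```java
import java.time.Duration;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;
import org.testng.Assert;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Test;

public class LoginTest {

    private WebDriver driver;

    @BeforeMethod
    public void setUp() {
        // A fresh browser per test keeps each test independent of the others.
        driver = new ChromeDriver();
        driver.get("https://example.com/login"); // hypothetical URL
    }

    @AfterMethod
    public void tearDown() {
        driver.quit(); // always release the browser, even after a failure
    }

    // Tagging the test lets a CI job run just the "smoke" group before merges.
    @Test(groups = {"smoke"})
    public void validUserLandsOnDashboard() {
        LoginPage loginPage = new LoginPage(driver);
        loginPage.logIn("tmaher", "correct-password");
        Assert.assertTrue(driver.getCurrentUrl().endsWith("/dashboard"),
                "Expected to land on the dashboard after a valid login");
    }
}

// Page object: the only class that knows how the login page is laid out.
class LoginPage {

    private final WebDriver driver;
    private final By username = By.id("username");   // hypothetical locators
    private final By password = By.id("password");
    private final By submit = By.cssSelector("button[type='submit']");

    LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    public void logIn(String user, String pass) {
        // Explicit wait instead of Thread.sleep: proceeds the moment the
        // field appears, and fails fast (after 10s) if it never does.
        new WebDriverWait(driver, Duration.ofSeconds(10))
                .until(ExpectedConditions.visibilityOfElementLocated(username));
        driver.findElement(username).sendKeys(user);
        driver.findElement(password).sendKeys(pass);
        driver.findElement(submit).click();
    }
}
```

Because each test builds its own browser and its own data, the tests can run in any order, or in parallel, without stepping on each other.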
Verify the quality of the code
Here are a few tips I've assembled for improving the quality of your code:
Make it readable
Your code should be as clear and concise as any other piece of technical documentation. Why? Developers attempting to figure out how your piece of software works will ignore all other documentation and jump right into your code. Give your code the time and attention it needs!
"[T]he ratio of time spent reading vs. writing is well over 10:1. We are constantly reading old code as part of the effort to write new code. Because this ratio is so high, we want the reading of code to be easy, even if it makes the writing harder. Of course there’s no way to write code without reading it, so making it easy to read actually makes it easier to write."
— Uncle Bob Martin, "Clean Code: A Handbook of Agile Software Craftsmanship"
Make sure you give your methods and your variables meaningful names. Don't name any variable i, j, or k unless it is in a simple for loop. Don't name a variable accountList unless it is actually a list. And as much as it may make you smile, don't use HolyHandGrenade as a method name, because there is no guessing what a method with that name will do.
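As a quick illustration, here is the same invented check written twice, first with throwaway names and then with names that carry the intent:

```java
import java.util.List;

public class NamingExample {

    // Hard to read: the names say nothing about intent.
    static boolean chk(List<String> l) {
        for (String s : l) {
            if (s.endsWith("@example.com")) {
                return true;
            }
        }
        return false;
    }

    // Easier to read: the same logic, but the names document it.
    static boolean containsCompanyEmail(List<String> emailAddresses) {
        for (String emailAddress : emailAddresses) {
            if (emailAddress.endsWith("@example.com")) {
                return true;
            }
        }
        return false;
    }
}
```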
Keep your code DRY: don't repeat yourself
Imagine if you cut and paste the same four steps over and over again in your automation test suite. What will happen if one of the four steps changes? Will you remember where you placed all instances of those steps? It is better to group the four steps into a single method and have all the tests call that one method.
"Every piece of knowledge must have a single, unambiguous, authoritative representation within a system."
— Andrew Hunt and Dave Thomas, "The Pragmatic Programmer: From Journeyman to Master"
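Here is a small sketch of what that looks like in practice, reusing the hypothetical LoginPage page object from the earlier sketch: the login steps live in one helper, and every test calls it, so a change to the login flow is made in exactly one place.

```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Test;

public class CheckoutTests {

    private WebDriver driver;

    @BeforeMethod
    public void setUp() {
        driver = new ChromeDriver();
        driver.get("https://example.com/login"); // hypothetical URL
    }

    @AfterMethod
    public void tearDown() {
        driver.quit();
    }

    // The shared steps live here; if the login flow changes,
    // only this one method has to change.
    private void logInAsStandardUser() {
        new LoginPage(driver).logIn("tmaher", "correct-password");
    }

    @Test
    public void canAddItemToCart() {
        logInAsStandardUser();
        // ... steps specific to adding an item ...
    }

    @Test
    public void canCompleteCheckout() {
        logInAsStandardUser();
        // ... steps specific to checking out ...
    }
}
```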
Refactor the code
Refactoring, according to Martin Fowler, is the process of changing a software system in such a way that it does not alter the external behavior of the code, yet improves its internal structure. It is a disciplined way to clean up code that minimizes the chance of introducing bugs.
"In essence when you refactor you are improving the design of the code after it has been written. ...
"With refactoring you can take a bad design, chaos even, and rework it into well-designed code. Each step is simple, even simplistic. You move a field from one class to another, pull some code out of a method to make into its own method, and push some code up or down a hierarchy. Yet the cumulative effect of these small changes can radically improve the design. It is the exact reverse of the normal notion of software decay."
— Martin Fowler, "Refactoring: Improving the Design of Existing Code"
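Here is a tiny example of the "pull some code out of a method" move Fowler describes, applied to an invented report method; the external behavior is identical before and after:

```java
import java.util.List;

public class OrderReport {

    // Before: arithmetic and printing tangled together in one method.
    static void printReportBefore(List<Double> prices) {
        double total = 0;
        for (double price : prices) {
            total += price;
        }
        System.out.println("Items: " + prices.size());
        System.out.println("Total: " + total);
    }

    // After extracting the calculation into its own method, the same
    // behavior is expressed in pieces that can be read and tested alone.
    static void printReportAfter(List<Double> prices) {
        System.out.println("Items: " + prices.size());
        System.out.println("Total: " + totalOf(prices));
    }

    static double totalOf(List<Double> prices) {
        double total = 0;
        for (double price : prices) {
            total += price;
        }
        return total;
    }
}
```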
Don't write more code than you need right now
When writing a page object, it's easy to think that you need to map out every web element, not just what you need for the test you are currently writing. Or, when writing a SQL query, to think that you can save time by writing a few others that you might need in the future. Unfortunately, websites and database implementations have a habit of changing over time; by the time you need that extra code, the requirements may have changed, and your work will have been wasted.
"Often you will be building some class and you’ll hear yourself saying 'We’re going to need…' Resist that impulse, every time. Always implement things when you actually need them, never when you just foresee that you need them.
"The best way to implement code quickly is to implement less of it. The best way to have fewer bugs is to implement less code. You’re not gonna need it!"
— Ron Jeffries, blog post
Create unit tests for your automation test suite
Your automation test suite can quickly grow large and unwieldy. When you are changing and updating your test suite, how can you tell that you didn't accidentally break something while refactoring that code? You need to test.
"Whenever I do refactoring, the first step is always the same. I need to build a solid set of tests for that section of code. The tests are essential because even though I follow refactorings structured to avoid most of the opportunities for introducing bugs, I'm still human and still make mistakes. Thus I need solid tests."
— Martin Fowler, "Refactoring"
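Even something as small as a test-data helper in your framework deserves a unit test. A minimal sketch, with a hypothetical helper inlined so the example is self-contained:

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class TestDataHelperTest {

    // Hypothetical framework utility, inlined here so the sketch compiles.
    static String uniqueUsername(String prefix, long timestamp) {
        return prefix + "_" + timestamp;
    }

    @Test
    public void combinesPrefixAndTimestampWithUnderscore() {
        assertEquals("qa_1700000000", uniqueUsername("qa", 1_700_000_000L));
    }
}
```

With a handful of tests like this in place, you can refactor the framework's helpers and know within seconds whether you broke anything.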
Tests should either pass or fail and tell why they failed
It is not enough to have clear and concise tests. If they fail, can you easily see where they failed?
"An important part of the tests is the way they report their results. They either say 'OK,' meaning that all the strings are identical to the reference strings, or they print a list of failures: lines that turned out differently. The tests are thus self-checking. It is vital to make tests self-checking. If you don't, you end up spending time hand checking some numbers from the test against some numbers of a desk pad, and that slows you down."
— Martin Fowler, "Refactoring"
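In JUnit, the simplest way to get there is to give every assertion a message that explains the failure. A small sketch with invented numbers:

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class CartTotalTest {

    @Test
    public void totalReflectsQuantityTimesPrice() {
        double total = 3 * 9.99; // stand-in for a call into the product under test

        // If this fails, the message plus the expected/actual values tell
        // the whole story; no hand-checking numbers against a desk pad.
        assertEquals("Cart total for 3 items at $9.99", 29.97, total, 0.001);
    }
}
```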
What metrics do you use to validate the quality of your automation test suite?
To validate the quality of your test automation suite, you need to apply all of the practices mentioned above. To recap:
- Start with good, robust tests that cover the requirements.
- Have your tests reviewed by peers and stakeholders.
- Make sure your tests are clear and concise, with high value.
- Know whether your tests pass or fail, and why they fail.
- Set your tests to run continuously.
- Make sure test output is clear and concise.
- Ensure that your automation test suite has the same level of quality as the production code; it should follow good software development principles.
That's how I try to achieve quality. What about you? Do you have any tips you would like to share? Join the conversation in the comments section below.
This article originally appeared in TechBeacon. Want to read more stories? Read more of T.J. Maher's work at TechBeacon and Medium. Follow T.J.'s software testing blog Adventures in Automation. Follow him on Twitter at @tjmaher1, and watch his YouTube channel. Catch his talks at TestingGuild.com and AutomationGuild.com. In the Boston area? Join the Ministry of Testing - Boston! We are always looking for new guest speakers.