Building a QA process

Some context

I began my journey at Talkdesk in October 2016, motivated by the big challenge of building a QA process almost from scratch. From day one, I could see that all my colleagues were wondering how I was going to tackle this challenge and implement a QA process.

When I joined, the Engineering team was around 60 people, organized in Scrum teams of 5 or 6 developers, and there was an independent QA team with 2 testers.

Before even starting to think about making changes, I wanted to get to know the people I was going to work with: their profiles, their expectations and their concerns. The QA team didn’t have much knowledge of QA processes and best practices, but they had a deep understanding of our product, which was key to my ramp-up. Another interesting outcome was that both testers were concerned about not having time for exploratory testing.

While getting to know the team and its processes, I could clearly see the lack of a QA process and how overloaded the QA team was, but I also noticed some worrying practices. The one that alarmed me most: when developers wanted a tester to test a given User Story, they would sit together, and the developer would explain the feature and practically drive the testing process, because the tester didn’t have sufficient knowledge of the new feature. It was obvious that testers needed to be involved earlier in the process, receiving information about new features in advance so they could prepare and properly test each story. What value does a tester add when they only execute the test cases a developer asks for? It was clear to me that we needed to change mindsets in order to implement a solid QA process.

Finding a Test Management Tool

It came as no surprise that there was little documentation of the testing process and test cases: a couple of “how to” wiki pages, a few pages with bullet points listing features to verify in smoke tests after deploys, and some pages listing test cases for the core functionalities. Though scarce, this documentation would prove key in the near future.

In order to start having better documentation and, more importantly, to give more structure to the process we were building, we felt the need for a Test Management Tool (TMT).

This new necessity triggered some thoughts: how can a TMT fit our business? What are the biggest pain points when using this type of software? What are the main characteristics to take into account when choosing one? To answer these questions, I mapped the requirements I considered relevant in a TMT:

  • Integration with our project management tool - allowing a quick link between User Stories and test cases.
  • Usability - the team didn’t have much experience with these tools, so being “easy to use” would make adoption easier.
  • Test case reuse - one of the pains I had felt with these tools was that I often couldn’t reuse the same test cases in different contexts. A tool that allowed test case reuse would save us time when specifying tests for new features.
  • Customization level - more customization usually means more complexity, but we needed the tool to be adaptable to our process and business.
  • Continuous Integration (CI) and Continuous Deployment (CD) - looking ahead to automated testing, the CI and CD pipelines would feed test run results into the tool.
  • Price - always a very important factor.
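
The CI/CD requirement in particular can be prototyped cheaply: most TMTs accept results over a REST API, so a pipeline step only needs to translate a standard JUnit XML report into the tool’s payload format. A minimal, tool-agnostic sketch in Python (the payload fields and `run_id` are assumptions, not any specific TMT’s schema):

```python
import xml.etree.ElementTree as ET

def junit_to_tmt_payloads(junit_xml, run_id):
    """Convert a JUnit XML report into result payloads for a TMT API.

    The payload shape below is hypothetical -- adapt the field names to
    whatever schema your Test Management Tool actually exposes.
    """
    root = ET.fromstring(junit_xml)
    payloads = []
    # Works whether the report has a <testsuites> wrapper or a bare <testsuite>.
    for case in root.iter("testcase"):
        failed = case.find("failure") is not None or case.find("error") is not None
        skipped = case.find("skipped") is not None
        status = "skipped" if skipped else ("failed" if failed else "passed")
        payloads.append({
            "run_id": run_id,
            "test": f'{case.get("classname")}.{case.get("name")}',
            "status": status,
            "duration_s": float(case.get("time", 0)),
        })
    return payloads
```

A CI job would run this after the test stage and POST each payload to the tool, so every pipeline run leaves a traceable record next to the manual results.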

We scheduled demos, watched videos, tried a few trial versions and created lists with the pros and cons of each product in order to evaluate the different options. After carefully analysing and comparing all the alternatives, we made our choice.

Assigning testers to teams

I could see developers seeking the testers’ help more frequently, especially at the end of sprints. We were only 3 testers and it was difficult to meet the testing needs of every team. The testing effort was unbalanced, concentrated at the end of the sprint, overloading the testers. Add to this the fact that we were only getting information about the stories in the last few days of the sprint, and testing was becoming very hard.

I saw this as an opportunity to challenge the testers and teams to work more closely. I talked to both sides and showed them the advantages and the impact we could make if testers were more involved in the teams’ daily work, becoming part of the team and participating in the different sprint ceremonies. Getting testers involved in the development process from the beginning is crucial: they understand the requirements sooner, help define acceptance criteria, think about testing scenarios earlier and, consequently, find issues sooner.

The proposal was well received by all, testers and development teams alike. Each tester was assigned to two frontend teams according to their areas of expertise, and I took the remaining 3 frontend teams. These were hard times for me: my calendar was completely packed with overlapping meetings for 3 different teams and I didn’t have time to properly test and help each team. I felt frustrated and overloaded but, with limited resources, that was the only way to show everyone this was the right move for the process and to improve the quality of the product.

Scaling the team

Being part of the development teams made our days as testers even busier, and more teams were requesting QA resources and time. The message was clear: we needed more testers in the company.

The team and the process were stepping up, and I wanted to grow the team in a stable and sustainable way. Hiring a lot of people would be hard to justify to the management team, and the more people you hire, the more instability you introduce into the team and the process. Therefore, we decided we weren’t going to hire many testers; instead, we had to look for the right profiles and define exactly what we were looking for in a tester.

The testing team had very good product knowledge but lacked some technical QA expertise, which made things very demanding and time consuming for me as the only senior tester on the team. It was time to find another senior tester who could boost the team with their experience and expertise. We sought that profile and added 2 testers to the team.

At Talkdesk, things tend to happen very quickly. The decision to open our Porto office seemed to be taken almost overnight. With the engineering team growing fast in both locations, Porto and Lisbon, it was no surprise that we started looking for candidates for our QA team in Porto and ended up hiring 2 more senior testers.

Consolidating the Process

Once we got the licenses for our Test Management Tool, it was clear to me that it had to be introduced gradually. I put together training sessions with the testers, where I presented the software and we discussed our initial regression and smoke testing batches. The discussion resulted in documentation prioritising and cataloguing the tests, which we later imported into the tool.

Testers were now able to document their work more consistently. During each team’s grooming session, the testers analysed the upcoming stories so they would have enough information to specify the test cases. Once the stories were developed, the testers would run the manual tests and log the test execution data.

All of this work was not reflected in the teams’ sprints, because the testers didn’t contribute to the stories’ estimation points; we could therefore become a hidden bottleneck. I knew this was one of the reasons why teams sometimes didn’t deliver all their stories, and I needed to do something about it. I arranged meetings with some of the team leads, explained why I thought testers should participate in estimations, and we agreed I would start estimating user stories. The change was well received and produced good outcomes. A couple of weeks later, I proposed the same approach to the other teams, and all the testers started to be involved in the estimation of stories.

Over the last few months, all testers have been struggling with time, so we got together to analyze where our time was going. Each tester was dealing with two teams and constant context switching, while smaller and more frequent deploys to Production meant more frequent smoke tests and many more test cases to specify. Hiring more people so each tester could focus on a single team was not an option, so to address these time constraints we decided to create a guideline for test case specification, ensuring everyone followed the same specification principles. We also prioritised testing activities, reinforcing the importance of hands-on testing and shortening each story’s life cycle within the sprint.
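
A guideline like this typically pins down a shared shape for every test case, so specifications stay comparable across testers and teams. A minimal sketch in Python (the field names are illustrative, not our actual template):

```python
# Illustrative template; a real guideline would map these fields onto the TMT.
TEST_CASE_TEMPLATE = {
    "title": "",           # short, action-oriented summary
    "preconditions": [],   # state required before execution
    "steps": [],           # (action, expected result) pairs
    "priority": "medium",  # drives what runs in smoke vs. full regression
    "type": "functional",  # functional, regression, smoke, exploratory charter
}

def new_test_case(title, steps, **overrides):
    """Create a test case that always carries the agreed set of fields."""
    case = dict(TEST_CASE_TEMPLATE, title=title, steps=list(steps))
    case.update(overrides)
    return case
```

The point of the shared shape is less the code than the agreement: every case states its preconditions, pairs each step with an expected result, and carries a priority the team can filter on.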

Challenges ahead

Today we have a stable manual QA process and, like any other team, there is much room for improvement. In our QA team, I see two main areas where we should primarily invest.

The first one is a classic in modern times: automation. We need to automate our process as much as we can, and by this I mean not only our testing but also our continuous deployment. In the past, we made some attempts to introduce UI automation tests using Selenium-based approaches, but we suspended them, as the results were not consistent and we ended up with too many false positives and false negatives. Our product is particularly prone to this kind of flakiness in UI tests, so I believe we should build a very strong base of unit and integration testing layers topped by a small and simple UI automation layer. A strong automation process included in our continuous deployment pipelines will free our testers from their daily manual validation tasks after each deploy.
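As an illustration of the bottom of that pipeline, here is a sketch of an automated post-deploy smoke check. The endpoint names are made up, and the `fetch` callable is injected so the logic can be exercised without a live deployment; in a pipeline it would wrap a real HTTP client and a non-empty result would fail the deploy step:

```python
def run_smoke_checks(fetch, endpoints):
    """Hit each health endpoint and collect failures.

    `fetch` is any callable that takes a URL and returns an HTTP status
    code (e.g. a thin wrapper around urllib); injecting it keeps the
    check testable. The endpoint map is illustrative, not a real one.
    """
    failures = []
    for name, url in endpoints.items():
        try:
            status = fetch(url)
        except Exception as exc:  # network errors count as failures too
            failures.append((name, f"error: {exc}"))
            continue
        if status != 200:
            failures.append((name, f"HTTP {status}"))
    return failures
```

Even a check this small, run automatically after every deploy, replaces one of the recurring manual smoke passes described above.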

The second area we need to invest in is people. We want to have one tester per team, enabling each tester to perform all of the QA tasks they currently struggle with due to time constraints. Each tester will be focused on a specific team and will be able to specify test cases before the development phase, enriching the feature and anticipating possible issues. Making developers aware of the test cases at earlier stages will also speed up test case automation and allow the testers to perform exploratory testing covering edge cases and paths that haven’t been considered.

By having testers involved in the automation work, whether it is defining test cases, developing the tests or adding them to deployment pipelines, the quality of each release will increase.



Lisbon, 27 September 2017

Tiago Correia


Shahar Rosentraub

Head of product @ a.k.a-foods

6y

From my experience, such a scenario is very common for test leads, especially when they enter the wreckage of a previously failing test team or when the product transforms from a startup into an actual company with customers. What I found works best for the scaling part is the following (and everything is in my humble opinion :)):
a. The scrum teams *must* have a dedicated tester; agile will not work without it.
b. Automation infrastructure, performance and general version release (regression and exploratory) deserve their own dedicated teams and require a different profile from a manual tester - you need developers (code monkeys) for the automation, and highly system-oriented people with a strong coding background for the performance.
c. The leads of the above teams should be your staff; this way you have a very strong management backbone. Such a staff (assuming you were able to find good people to man it) allows collaboration and growth of the group - you can focus on innovation, priorities and PR to the organization (a hated necessity, but you need *your* managers to see the value of all the resources they put into your scaling group).
d. Automation - the testers should start looking at automation as a means to perform the tests, but for this you first need a good automation infrastructure.
e. Mentoring - cross training by you and your leads is a must. I believe that, as opposed to "classic" QA in the form of functional testing, we need to train the developers to do better unit and integration tests, and allow the QA in the scrum to be more of a system-validation expert, focusing on and educating the team about the risks each feature introduces into the system, or testing them in a more customer-oriented way. For this to happen, both the developers and the testers need a lot of guidance from you and your leads (with the support of your managers).
I have done this 3 times so far at 3 different companies; maybe next time will be different :)
