Should We Expect Developers to Test?

In my latest Testing Times newsletter, I discussed Microsoft and their poor software quality. This led to a couple of conversations in the LinkedIn comments concerning Microsoft’s approach to testing… and whether developers should ‘mark their own homework’, as the saying goes. Click the newsletter link to read the comments in full.

These conversations prompted me to examine Microsoft’s approach to quality more closely. Among the many sources I read, Gergely Orosz's article How Microsoft does Quality Assurance (QA) stood out.

My Personal Experience

In all candour, I find it hard to be entirely neutral on this topic. I have been involved in projects where developers have been asked to test, and this firsthand experience has strongly influenced my views.

On those projects, when developers tested, they almost exclusively focused on what the solution should do. They wanted to validate that a system conformed to the spec or user story.

The dedicated testers, on the other hand, went beyond the happy path to look for edge cases and unexpected behaviour. They sought to imagine all the weird and wonderful ways a user might interact with the solution, and to find areas where the system didn’t work as expected.
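To make that distinction concrete, here is a minimal sketch in Python. The function and scenarios are entirely hypothetical (they come from no project mentioned here): the first assertion is the kind of spec-conformance check a developer typically writes, while the later checks probe boundaries and hostile inputs the user story never mentions.

```python
def apply_discount(price: float, percent: float) -> float:
    """Apply a percentage discount to a price (illustrative example only)."""
    if price < 0:
        raise ValueError("price must be non-negative")
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Developer-style happy-path check: does it match the user story?
assert apply_discount(100.0, 20.0) == 80.0

# Tester-style checks: boundaries the spec never spells out.
assert apply_discount(50.0, 0.0) == 50.0    # no discount at all
assert apply_discount(50.0, 100.0) == 0.0   # everything free
assert apply_discount(0.0, 50.0) == 0.0     # zero-priced item

# Hostile inputs should be rejected, not silently mishandled.
for bad in [(-1.0, 10.0), (10.0, 150.0), (10.0, -5.0)]:
    try:
        apply_discount(*bad)
        raise AssertionError(f"expected ValueError for {bad}")
    except ValueError:
        pass
```

The first assertion alone would pass a code review; it is the remaining checks that tend to surface the defects users actually hit.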

But that was just my experience…

So What Does Happen At Microsoft?

In his article, Gergely explains how Microsoft drove efficiencies by switching from separate Software Development Engineer (SDE) and Software Development Engineer in Test (SDET) roles to a combined Software Engineer (SE) role.

However, there was an interesting quote from Brian Harry, Technical Fellow at Microsoft: “We combined the dev and test orgs into a consolidated ‘engineering’ org. For the most part, we eliminated the distinction between people who code and people who test. That’s not to say every person does an identical amount of each, but every person does some of everything and is accountable for the quality of what they produce.”

Now, when something intrigues me, I like to investigate in detail … and when I apply this approach to Brian’s statement, it suggests that there is still a bit of an SDE/SDET split.

With that in mind, I was still left asking: Is it really possible to completely merge the roles of developers and testers? Just because they have the same name doesn't mean they’re all doing the same thing.

Also, in the article, Gergely mentions “We became a lot more productive by removing the SDET role from our team! SDETs still focused mainly on testing-related work, but also picked up development tasks.”

So, did they remove the SDET role or just the label? Of course, I don’t know; I wasn’t there, but it doesn’t sound like they completely removed the separation. And what happened in the medium to long term? Did they recruit or train specific testing skills to pick up the SDET-type tasks?

There’s A Clear Conflict of Interest

Look, I get that this practice is seemingly efficient and sounds impressive, but I still have concerns about objectivity and thoroughness in the quality process.

I can’t ignore the fact that asking development teams to test presents a fundamental conflict of interest, even when they work in pairs. After all, mutual back-scratching is the basis of many working relationships.

I’m not saying that developers will consciously pass defective software; instead, human nature will play its natural part, and our evolutionary history is difficult to counter.

Plus, developers and testers necessarily have different mindsets: creative versus thorough.

In a recent article, I discussed how software testers’ brains play tricks on them, and this is even more likely when developers are asked to fulfil that role.

Devs are more likely to unconsciously overlook errors or omissions in their own work, having confidence in their coding ability. This confidence can lead to confirmation bias, where they seek confirmation of their assumptions and potentially miss critical flaws.

Moreover, developer testing lacks the fresh perspective that independent testers can provide, which is crucial for identifying issues that someone too close to the project might miss.

The Case for Independent Testing

I have over three decades of experience in software development and QA, and I firmly believe that truly independent testing, carried out by trained, process-driven, pedantic test professionals, is critical to high-quality software. Attention to detail and a hunger for destruction are also very beneficial.

I’m joking, of course. Well, half joking. But sometimes, having niggly testers who really get into a system is key to mitigating business risks and producing high-quality software.

First and foremost, independent testers approach the software without preconceptions or emotional investment, allowing for a more objective assessment.

They also bring diverse perspectives and methodologies, increasing the likelihood of uncovering a more comprehensive range of issues.

As I said in the previous Testing Times, I have to deal with many bugs in Microsoft products. Almost universally, these are bugs that would not appear in the happy path.

Sometimes, You Need to Find a Middle Ground

I appreciate that complete separation of development and testing isn't necessarily desirable, as they need a close working relationship built on trust and mutual respect.

As we’ve seen with Microsoft, there can be tempting business reasons to integrate the roles, such as:

  • Shifting to Agile or DevOps
  • Aiming to increase velocity with earlier testing
  • Cutting costs by reducing headcount and/or overheads

If, for whatever reason, developers are going to test, you can actively lessen the risks associated with self-testing by adopting the following:

  • Implementing a peer review system where team members test each other’s work can introduce an additional layer of scrutiny.
  • Regularly rotating testing responsibilities among team members can also provide fresh insights into the software.
  • Involving end-users or stakeholders in user acceptance testing can provide invaluable feedback from those who will ultimately use the product—although be prepared—they might not be happy with the additional effort and responsibility.
  • Last but not least… use automated testing!

Reading between the lines of Gergely’s blog, this seems to be Microsoft’s approach, but again, I wasn’t there, so I can’t say for sure.

Test Automation Is Priceless If You Don’t Have Dedicated Testers

As mentioned above, test automation is crucial if you expect developers to perform your testing. It helps mitigate risks and introduces efficiencies and time-savings into the process. Of course, automation should be used even when you have dedicated testers; the benefits are too substantial to ignore.

While automated tests do not completely eliminate bias, they objectively measure whether the code meets specified requirements, counteracting some subjectivity inherent in self-testing.

Automated tests ensure consistent execution, reducing human error and oversight. They also reduce the burden on developers, allowing them to focus on what they are good at.
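That consistency is easiest to see in a table-driven test. This is a hypothetical sketch (the `slugify` function and its cases are my own invention, not from any tool or team discussed above): the expected behaviour is written down once, then executed identically on every run, with no human memory involved.

```python
def slugify(title: str) -> str:
    """Turn a title into a URL slug (hypothetical function under test)."""
    cleaned = "".join(c if c.isalnum() else " " for c in title.lower())
    return "-".join(cleaned.split())

# Expected behaviour recorded once as data; every CI run checks all of it.
CASES = [
    ("Hello World", "hello-world"),      # happy path
    ("  spaced   out  ", "spaced-out"),  # messy whitespace
    ("C# & .NET!", "c-net"),             # punctuation stripped
    ("", ""),                            # empty input
]

for given, expected in CASES:
    actual = slugify(given)
    assert actual == expected, f"slugify({given!r}) -> {actual!r}, want {expected!r}"
```

A suite like this costs nothing to re-run on every commit, which is exactly the safety net a team needs when nobody’s job title says ‘tester’.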

Obviously, test automation has intrinsic benefits, such as rapid execution and feedback. This immediate insight enables developers to catch issues early in the development process, reducing the cost of fixing bugs later.

Where they are used, developers can integrate automated tests into their continuous integration and delivery (CI/CD) pipelines, ensuring that code changes are verified before deployment and maintaining high quality in production environments.

Once established, automated tests require minimal effort to run repeatedly, allowing developers to focus on writing new features. As software complexity grows, manual testing becomes less feasible; test automation scales effectively, maintaining thorough testing practices as the codebase evolves.

While not a complete substitute for dedicated testers, strong test automation practices can help development teams uphold software quality when independent testing isn’t an option.

So, Should We Expect Developers to Test?

Clearly, Microsoft are no mugs, and software development practices continuously evolve, but I am just not convinced that merging the test and dev roles is practical or even realistic.

The simple fact is that some people will naturally gravitate to one role more than the other; the two roles require fundamentally different mindsets and perspectives.

Both Gergely Orosz and Brian Harry admitted, or at least strongly hinted at, a separation between day-to-day activities, even after the SDE and SDET roles were combined.

I believe it’s better to have a clear distinction between the roles, and that organisations should use independent testing where possible.

Train your testers to be exceptional testers, train your devs to be exceptional developers, and train them all to work together effectively to drive efficiencies, rather than merging the roles and risking the loss of that gorgeously inquisitive tester mindset.

Where this is not possible, organisations must implement best practices to ensure thorough and unbiased quality assurance, including test automation with tools like UFT One. Did you know it covers more applications than any other automation tool?

Drop me a note if you want a UFT One demo or a quote.

Ultimately, a rigorous quality assurance process is essential for delivering high-quality software that meets user needs and expectations. Achieving this requires skills, mindset, perspective, and independence that I believe only dedicated testers can provide.

Do you agree, or am I completely wrong on this? I’d love to hear your thoughts and continue the conversation. Let me know in the comments!

Fiona Brady

Quality Assurance

3 weeks

Too many times I have heard Testers say ‘the developer never told me that’ when an issue arises. This can seem valid, especially in a complicated code environment. But we as QA should be asking developers what they have tested; this helps enhance conversations and understanding of what has been developed. As QA, we then look at what we think should be tested and remove, to save time, what developers have already tested. I do think that engineers can work together regardless of what the role is.

You ate in the kitchen. You are capable of moving your plate to the sink. Nobody expects you to polish the whole space. It’s just when you come in and see the fork stuck in the chair, the plate broken all over the kitchen, and the glass just disappeared, nasty thoughts come crawling into your mind.

Assuming that the definition of testing ‘something already built’ is to find out what it does, not what it should do, anyone can test, but not everyone can test well. This includes, in my experience, a reasonable proportion of those who identified as testers. The challenge for most is the ‘well’ part. It’s always contextual and therefore temporal. Feature ‘A’ might be deemed well tested by a group today, but apply a similar level of ‘well’ to a similar feature ‘B’ tomorrow and that might be inadequate. I was at msft when that change happened, although in Xbox. I worked with many SDETs (I had that title myself for a period of time) and many would fit into the category above. This isn’t a judgement on how good they were at the role; it’s a function of that role and how it rarely involved much ‘testing’.

Angela Nolan MIET

System Integration & Test Engineer (Asc Mgr) at Lockheed Martin

1 month

Yes. Component testing can often branch out further, especially if more than one Developer is coding components that integrate in some small way with other components. As for who tests what; that is entirely dependent on the development model and/or contractual obligations. In a Scrum team, the Test role might component test while Dev cracks on with another component. The function of Test is actually part of the Software Development Lifecycle, not a follow-on phase. In some cases the testing (even at System or Integration level) may contractually be done by an independent body/prime company etc. And at the Acceptance stage, entirely done by the customer, by a Test Specialist company on behalf of the customer, or both. In my experience there are no set rules about who does what, other than what’s written in the Contract or Development/Test/Acceptance strategies etc. What’s absolutely vital though is whoever plans and executes it, must have the right skill set, mind set and testing specialism.

Doug ("Dougal") LAWSON

Delivery of Quality frameworks and Test Assured solutions, audit and compliance - Testing services include: • Functional testing • User Acceptance Testing (UAT) • Operational Acceptance Testing (OAT) • Business Contin

1 month

Developers should never be involved in testing, apart from the unit testing of their own work, as unit testing is a part of development.
