Testing, It's About the Results

We often get stuck focusing on the wrong thing, such as the notion of “manual” vs. “automated” testing, rather than on actual practices and results. We try to define the two as disparate (or merely supplemental) approaches, when they are really one and the same, and our time would be better spent focusing on the outcome rather than on the ways of achieving it.

This is a post for those of you in charge of software release and quality processes who want to be able to explain to your managers, peers, C-levels -- whomever -- how testing works, holistically. The clarity that comes out of this conversation is offered by some of the people who know testing best. Thank you to Michael Bolton and Paul Gerrard for being open to this conversation and for allowing me to share your thoughts.

Let's Go

I recently posted an article, The Downfall of Manual Testing, that spurred a detailed conversation culminating in a core understanding that there is only one type of testing:

Testing is always performed by people and is neither "manual" nor "automated" but rather entails a combination of human agency and particular tools (and/or checks) that can be used to assist testers.

The discussion below began with a response from Paul Gerrard to the above-linked article and quickly turned into an insightful dialogue between Michael Bolton and me. (I’ve cut out pleasantries for clarity’s sake.)


Paul Gerrard

“Let's define ‘manual testing’ as exploratory real-world, human testing”

What does that mean exactly? All testing is exploratory – it involves humans exploring sources of information. We’ll disregard the real world thing – since no one other than fantasists exists in unreal worlds.

It’s interesting that the article uses Joe Colantonio’s statement. I’ll challenge each of the author’s responses.

1. Uses a lot of resources

Duhhh! This is no comment at all – yes testing costs time and money. Is that all you have to say?

2. Is time consuming

It’s a comment, but, like the previous one, it doesn’t say much. Testing (in one customer-experience story) takes an incredible 180,000 hours! How much of that was hands-on testing? How much was analysis? How much was discussion and debate of test outcomes to reach a sensible decision? Where’s the problem here?

3. Sometimes lacks proper coverage

Well, the simple answer is you aren’t doing it right. Next question.

4. Is often of a repetitive nature, and testers may become bored and make mistakes while testing

Certainly true. So. Let’s get tools to do our testing. They don’t get bored. But tools don’t have eyes or intelligence. Of the 100 things that happen on a screen, tools are usually programmed to observe just one or two. The rest are ignored. But a human tester at least has peripheral vision and will detect 5-10 times as many problems as a tool.

By all means hire a mob to test. But don't expect mobs to do well, unless they have a different perspective than your own testers.


John Kensinger (me, in response):

(2) The problem is not that the various parts of the testing process take time, but rather that this may be time your team doesn't have to spend [this way]. Crowdtesting can extend your testing resources at the drop of a hat. The testing process is still rigorous, but the burden is now shared, or, more aptly put, delegated.

(3) Yes... OR the team in question lacks the personnel or tangible resources to perform wide-coverage testing (e.g., a small in-house team, an out-of-date device farm, a lack of online simulators, etc.).

(4) You hit the nail on the head; I agree completely. And this is why we are proponents of a strong and balanced relationship between automated and manual testing (https://bit.ly/2EQh1YT).

Lastly, one of the primary reasons we believe in "hiring a mob" is to provide exactly that: a different perspective. This is especially useful for small in-house teams or for fatigued, overworked testers and developers.


Michael Bolton:

“And this is why we are proponents of a strong and balanced relationship between automated and manual testing.”

There. Is. No. Manual. Or. Automated. Testing. AAAARRRGH.

I'm sorry to be upset about this. But do you visit an automated doctor or a manual doctor? Do you attempt to maintain a strong and balanced relationship between automated medicine and manual medicine?

You're doing a certain kind of marketing here. You're using LinkedIn, an online platform. To do so involves machinery and tools that extend your reach. So are you doing automated marketing or manual marketing?

You report to a manager or executive. That manager uses tools routinely, as part of his or her job. Is that manager performing manual management or automated management?

The questions referring to medicine, to marketing, and to management probably sound faintly ridiculous. Why does speaking of manual testing and automated testing sound any less ridiculous?

"Let's define 'manual testing' as exploratory real-world, human testing."

Let's not. Let's stop using a modifier that does no useful work. Let's use an accurate term for exploratory real-world, human testing; let's call that what it is: TESTING.


John Kensinger (me, in response):

Yes, automated and manual testing are just that: TESTING. But do you think that by conflating testing tools such as Selenium (which I would consider "automated" testing, in that scripts are being run via automation) with a "manual" approach to testing (i.e., having John Doe click around a website to find functional issues), we eliminate a dichotomy in types of testing? I assume, in this case, that you'd argue there should be no dichotomy here whatsoever; if that's true, please help me understand how you would juxtapose or discuss the above-mentioned approaches to testing. Yes, they're both "testing," but clearly there are differences. How would you define those differences?


Michael Bolton:

For an answer to your questions, start by looking to my previous comment, and ask if we observe a similar dichotomy in the fields that I mentioned. Doctors, marketers, and managers use tools. When they do so, we do not refer to automated medicine, nor automated marketing, nor automated management. Surgery is an activity with a significant physical dimension. We do not refer to the DaVinci laparoscopic surgical device as "automated surgery"; we instead see it and speak of such devices as a way for doctors to extend and enhance and enable surgery. We do this because we recognize that surgery is not just a physical activity; it involves skill, analysis, problem-solving, tacit knowledge of the domain, and so forth. We don't lose track of the agency of the doctor, even when the doctor is using impressive machines.

When we talk about testing, we make a fundamental error: we refer to John Doe *clicking around*, as though the physical act of clicking were at the centre of the activity. John is not *clicking* to find problems; John is investigating and analyzing and exploring and experimenting and obtaining experience with the product. (Interestingly, when Jane is programming, we don't focus on Jane's typing.)

Another mistake we make is in thinking that testing consists of John executing a script, without considering where the script came from, how it was designed, what risk analysis went into the design, how the outcomes of the actions are to be interpreted, and so forth.

We don't say that programming consists of Jane converting a program into machine language, without considering where the program came from, how it was designed, what analysis went into the design, whether the program addresses the user's task, and so forth.

For the program, that conversion step *could* be done by a person, and it used to be. It is now almost universally done by machines, and we call it "compiling." It happens inside of programming; it's a part of programming; a part that can be automated, but we do not call it "automated programming."

Within a test, there's sometimes a set of steps performed by the machine. I have suggested that we call that "automated checking," exactly analogous to compiling. It happens inside of testing; it's a part of testing; it's a part that can be automated. We should not, in my view, call it "automated testing."


John Kensinger (me, in response):

In other words, does it make sense to summarize that testing is always performed by people and is neither "manual" nor "automated," but rather always entails a combination of human agency and particular tools (or "checks") that can be used to assist said testers in "investigating and analyzing and exploring and experimenting and obtaining experience with the product"?


Michael Bolton:

Spot on, in my view. The only way that you could improve on that, in my view, would be to say “(AND/OR ‘checks’).” But that would be taking it from 100 to 101.


In Summary

Tl;dr: There exists only TESTING; there is no “automated” vs. “manual,” “exploratory” vs. “non-exploratory,” “human” vs. “non-human,” nor “real-world” vs. “unreal-world” testing. The sooner we can come to terms with this, the less time we’ll spend feeling stuck with one set of tools or one initial approach, and the sooner we can focus on what really matters: the results and ultimate quality of our testing.
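
To make Michael's notion of "automated checking" concrete, here is a minimal sketch of what one such check might look like, using Python and Selenium (the tool mentioned earlier in the conversation). It's an illustration under assumptions, not a prescription: the URL, expected title, and element id are hypothetical placeholders, and it assumes Selenium 4+ and a chromedriver are installed.

```python
# A sketch only: an "automated check" embedded within a broader testing effort.
# Assumes Selenium 4+ and chromedriver are installed; the URL, expected title,
# and element id below are hypothetical placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By


def check_login_page() -> None:
    driver = webdriver.Chrome()  # assumes chromedriver is on PATH
    try:
        driver.get("https://example.com/login")  # hypothetical URL

        # The "check": one narrow, pre-programmed expectation.
        assert driver.title == "Log in", f"Unexpected title: {driver.title!r}"

        # Another narrow expectation: the login button exists. Nothing here
        # notices a broken layout, confusing copy, or slow rendering -- that
        # takes a person looking at the page.
        driver.find_element(By.ID, "login-button")  # hypothetical element id
    finally:
        driver.quit()


if __name__ == "__main__":
    check_login_page()
    print("Checks passed: the things we asked about matched our expectations.")
```

The design of the check, the decision about what is worth asserting, and the interpretation of any failure all remain human work; in Michael's terms, this is checking inside testing, not "automated testing."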

Comments

Dan Ashby

Director of Engineering @ Ada Health | Innovating & Evolving Leadership | Engineering, Quality, Regulatory, Agile, Culture, Strategy, Coaching | Mental Health First Aider | International Speaker

I think there is a much simpler way to look at the distinction between automation and exploration, and that's to talk about "the expected" and "the unexpected" (a.k.a. knowledge and ignorance, or lack of knowledge... or knowns and unknowns...). Automation is great for asserting our expectations. It's also great for assisting our exploration. But it can't uncover new information (beyond the information that our expectation is met or not). For that, we need exploration: to investigate and uncover unknown information.

Personally, I don't use the term "manual" anymore, as it implies "manual labour," which is often seen as monotonous (laboriously following steps) and also implies that it doesn't require much in the way of thinking skills. But exploratory testing is the opposite! It isn't monotonous at all; it's investigative, and it relies on having really strong lateral and critical thinking skills.

Paula McSorley

Principal Engineer, QMS Software Quality at Getinge. Ensuring software compliance to adopt higher production software technologies.

I agree also that you cannot divide these types of testing. I was recently offered a test manager position, but the business insisted that they wanted all of their testing 100% automated. I argued that this wasn't doing their software any good, since it took out the human experiential element and the usability review. I ultimately turned down the position. I've seen it advertised at least four times over the last year. They obviously don't get it.

Paul Gerrard

Eminent Software Consultant, Author, Teacher and Coach

I don't argue that there is or isn't manual and automated testing. Of course, tools can implement a form of test execution without human intervention. But wait... Who read the requirements? Interviewed the users? Explored the old system? Explored the new system? Who challenged these sources to dig deeper? Who identified the situations to test? Who measured coverage? Who wrote the script? Who prepared the expected results? Who debugs the code? Who integrates tests into a framework? Who wrote the framework? Who interprets the outcomes of a test? Who decides what to do with anomalies? Yes, we can automate the button clicking and some checking, but testing is so much more than that.

We can all agree with Michael's perspective. I take the view that the means of executing a test is irrelevant from the perspective of the testers' thought process. The tool we use, or whether we use a human doing the same actions (with a broader perspective, let's say), doesn't change all those other activities. Whether we use tools, document our tests, draft models in modelling tools or use purely mental models are all matters of logistics. Logistics vary in every organisation, and testers work differently in the same organisation. I argue, however, that there is an underlying thought process that we all share.

The New Model is my attempt to identify the critical thought processes and how they tend to sequence. But this isn't a process model with inputs, outputs, entry/exit criteria and procedures for performing each activity. The model expresses patterns of thinking which our brains follow. In fact, we're so smart, our brains might be working on 3, 4 or 5 of these thought processes simultaneously. Our patterns of thought are driven largely by events. I advocate this event-driven approach to processes to deal with, for example, Continuous Delivery regimens. I'm making progress too.

I map the 'Applying' activity to Michael's Checking activity, by the way. Test execution tools support my Applying activity. They don't support the other nine activities. That's my argument against saying testing can be automated. If you want a better explanation of the New Model, there's a webinar here, courtesy of Agile Testing Days: https://youtu.be/1Ra1192OpqY
