Alternatives to "Manual Testing"

This is an extension of a long Twitter thread from a while back.

Testers who take testing seriously have a hard time getting people to understand testing work.

The problem is a special case of the insider/outsider problem that surrounds any aspect of human experience: most of the time, those on the outside of a social group — a community; a culture; a group of people with certain expertise; a country; a fan club — don't understand the insiders' perspective. The insiders don't understand the outsiders' perspective either.

We don't know what we don't know. That's obvious, of course, but when we don't know something, we have no idea of how little we comprehend it, and our experience and our lack of experience can lead us astray. "Driving is easy! You just put the car in gear and off you go!" That probably works really well in whatever your current context happens to be. Now I invite you to get behind the wheel in Bangalore.

How does this relate to testing? Here's how:

No one ever sits in front of a computer and accidentally compiles a working program, so people know — intuitively and correctly — that programming must be hard.

By contrast, almost anyone can sit in front of a computer and stumble over bugs, so people believe — intuitively and incorrectly — that testing must be easy! 

In our world of software development, there is a kind of fantasy that if everyone is of good will, and if everyone tries really, really hard, then everything will turn out all right. If we believe that fantasy, we don't need to look for deep, hidden, rare, subtle, intermittent, emergent problems. That is, to put it mildly, a very optimistic approach to risk. That's okay for products that don't matter much. But if our products matter, it behooves us to look for problems. And to find deep problems intentionally, it helps a lot to have skilled testers.

Yet the role of the tester is not always welcome. The trouble is that to produce a novel, complex product, you need an enormous amount of optimism; a can-do attitude. But as my friend Fiona Charles once said to me — paraphrasing Tom DeMarco and Tim Lister — "in a can-do environment, risk management is criminalized." I'd go further: in a can-do environment, risk acknowledgement is criminalized too.

In Waltzing With Bears, DeMarco and Lister say "The direct result of can-do is to put a damper on any kind of analysis that suggests 'can't-do'...When you put a structure of risk management in place, you authorize people to think negatively, at least part of the time. Companies that do this understand that negative thinking is the only way to avoid being blindsided by risk as the project proceeds." (I wanted to look for the book on Amazon to get a quick link to the publisher's site for Waltzing With Bears, but I was cut in on by dogs.)

Risk denial plays out in a terrific documentary, General Magic, about a development shop of the same name. In the early 1990s(!!), General Magic was working on a device that — in terms of capability, design, and ambition — was virtually indistinguishable from the iPhone that was released about 15 years later.

The documentary is well worth watching. In one segment, Marc Porat, the project's leader, talks in retrospect about why General Magic flamed out without ever getting anywhere near the launchpad. He says, "There was a fearlessness and a sense of correctness; no questioning of 'Could I be wrong?'. None. ... that's what you need to break out of Earth's gravity. You need an enormous amount of momentum ... that comes from suppressing introspection about the possibility of failure." 

That line of thinking persists all over software development, to this day. As a craft and as a business, software development systematically resists thinking critically about problems and risk. Alas for testers, that's the domain that we inhabit.

Developers have great skill, expertise, and tacit knowledge in linking the world of people and the world of machines. What they tend not to have — and almost everyone is like this, not just programmers — is an inclination to find problems. The developer is interested in making people's troubles go away. Testers have the socially challenging job of finding and reporting on trouble wherever they look. Unlike anyone else on the project, testers focus on problems — which the builders naturally resist.

Resistance to thinking about problems plays out in many unhelpful and false ideas. Some people believe that the only kind of bug is a coding error. Some think that the only thing that matters is meeting the builders' intentions for the product. Some are sure that we can find all the important problems in a product by writing mechanistic checks of the build. Those ideas reflect the natural biases of the builder — the optimist. Those ideas make it possible to imagine that testing can be automated.

The false and unhelpful idea that testing can be automated prompts the division of testing into "manual testing" and "automated testing". 

Listen: no other aspect of software development (or indeed of any human social, cognitive, intellectual, critical, analytical, or investigative work) is divided that way. There are no "manual programmers". There is no "automated research". Managers don't manage projects manually, and there is no "automated management". Doctors may use very powerful and sophisticated tools, but there are no "automated doctors", nor are there "manual doctors", and no doctor would accept for one minute being categorized that way. 

Testing cannot be automated. Period. Certain tasks within and around testing can benefit a lot from tools, but having machinery punch virtual keys and compare product output to specified output is no more "automated testing" than spell-checking is "automated editing". Enough of all that, please.
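To make that concrete, here is a minimal, hypothetical sketch (mine, not drawn from any particular tool) of what such a machine check amounts to in code: apply an input, compare the output to a specified value, and report a bit. Everything around the comparison, such as deciding what is worth checking and figuring out what a mismatch might mean, remains human work.

```python
# A minimal sketch of an automated check (hypothetical product and values):
# the machine applies an input and compares the output to a specified value.
# It can report agreement or disagreement; it cannot decide what to check,
# notice an unanticipated problem, or wonder what a mismatch means.

def product_under_test(principal: float, rate: float) -> float:
    """Stand-in for the product: computes an amount owed with interest."""
    return round(principal * rate, 2)

def check_interest_calculation() -> bool:
    """One input, one specified expected output, one bit back."""
    expected = 105.00                          # the specified output
    actual = product_under_test(100.00, 1.05)  # the machinery "punching keys"
    return actual == expected

if __name__ == "__main__":
    print("green" if check_interest_calculation() else "red")
```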

It's unhelpful to lump all non-mechanistic tasks in testing under "manual testing". It's like referring to all of the elements of cooking (craft, social, cultural, aesthetic, chemical, nutritional, economic) as being "manual". No one who provides food with care and concern for human beings — or even for animals — would suggest that all that matters in cooking is the food processors and the microwave ovens and the blenders. Please.

If you care about understanding the status of your product, you'll probably care about testing it. You'll want testing to find out if the product you've got is the product you want. If you care about that, you need to understand some important things about testing.

If you want to understand important things about testing, you'll want to consider some things that commonly get swept under a carpet with the words "manual testing" repeatedly printed on it.

Considering those things might require naming some aspects of testing that you haven't named before. Think about experiential testing, in which the tester's encounter with the product, and the actions that the tester performs, are indistinguishable from those of the contemplated user. After all, a product is not just its code, and not just virtual objects on a screen. A software product is the experience that we provide for people, as those people try to accomplish a task, fulfill a desire, enjoy a game, make money, converse with people, obtain a mortgage, learn new things, get out of prison...

Contrast experiential testing with instrumented testing. Instrumented testing is testing wherein some medium (some tool, technology, or mechanism) gets in between the tester and the naturalistic encounter with and experience of the product. Instrumentation alters, or accelerates, or reframes, or distorts; in some ways helpfully, in other ways less so. We must remain aware of the effects that instrumentation brings to our testing.
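As a hypothetical illustration (the wrapper and the product function below are mine, invented for this sketch), even something as benign as logging every call changes the encounter: the tester now reads records of behaviour instead of simply experiencing it, and the probe itself adds a little delay.

```python
# A sketch of instrumentation getting between the tester and the product:
# a wrapper that times and logs every call. The record it produces is useful,
# but the encounter is no longer the naturalistic one; the logging itself
# reframes (and slightly slows) what the tester experiences.

import functools
import time

def instrumented(fn):
    """Wrap a product function so that every call is timed and logged."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        elapsed = time.perf_counter() - start
        print(f"{fn.__name__}{args} -> {result!r} in {elapsed:.4f}s")
        return result
    return wrapper

@instrumented
def search(term: str) -> int:
    """Stand-in for a product feature: returns a count of matching items."""
    return len([item for item in ("apple", "apricot", "banana") if term in item])

if __name__ == "__main__":
    search("ap")  # the tester now reads a log line rather than simply seeing the result
```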

Are you saying "manual testing"? You might be referring to the attended or engaged aspects of testing, wherein the tester is directly and immediately observing and analyzing aspects of the product and its behaviour in the moment that the behaviour happens. And you might want to contrast that with the algorithmic, unattended things that machines do — things that some people label "automated testing" — except that testing cannot be automated. To make something a test requires the design before the automated behaviour, and the interpretation afterwards.

Are you saying "manual"? You might be referring to testing activity that's transformative, wherein something about performing the test changes the tester in some sense, inducing epiphanies or learning or design ideas. Contrast that with procedures that are transactional: rote, routine, box-checking. Transactional things can be done mechanically. Machines aren't really affected by what happens, and they don't learn in any meaningful sense. Humans do.

Did you say "manual"? You might be referring to exploratory work, which is interestingly distinct from experiential work as described above. Exploratory — in the Rapid Software Testing namespace at least — refers to agency; who or what is in charge of making choices about the testing, from moment to moment. There's much more to read about that.

Wait... how are experiential and exploratory testing not the same?

You could be exploring — making unscripted choices — in a way entirely unlike the user's normal encounter with the product. You could be generating mounds of data and interacting with the product to stress it out; or you could be exploring while attempting to starve the product of resources. You could be performing an action and then analyzing the data produced by the product to find problems, at each moment remaining in charge of your choices, without control by a formal, procedural script.

That is, you could be exploring while encountering the product to investigate it. That's encountering the product like a tester, rather than like a user.
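As a hypothetical sketch of that kind of session (the product function and the generated data are mine, invented for illustration), an exploratory but decidedly non-user-like encounter might be a quick, throwaway harness that hurls awkward data at the product while the tester watches for surprises and decides, moment to moment, what to try next.

```python
# A throwaway exploratory harness: generate awkward data, throw it at the
# product, and watch what happens. The tester stays in charge of what to try
# next; nothing here resembles how an end user would encounter the product.
# save_record is a stand-in for whatever part of the product is being probed.

import random
import string

def save_record(name: str) -> str:
    """Stand-in for the product's record-saving feature."""
    if len(name) > 50:
        raise ValueError("name too long")  # a limit the tester may not know about yet
    return f"saved:{name}"

def awkward_names(count: int):
    """Generate names a real user would rarely type: empty, huge, hostile, random."""
    yield ""
    yield "a" * 1000
    yield "Robert'); DROP TABLE Students;--"
    for _ in range(count):
        length = random.randint(1, 200)
        yield "".join(random.choice(string.printable) for _ in range(length))

if __name__ == "__main__":
    for name in awkward_names(20):
        try:
            save_record(name)
        except Exception as exc:  # surprises are the point; note them, then choose what to try next
            print(f"surprise with {name!r}: {exc}")
```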

And you could be doing experiential testing in a highly scripted, much-less-exploratory kind of way; for instance, following a user-targeted tutorial and walking through each of its steps to observe inconsistencies between the tutorial and the product's behaviour. To an outsider, your encounter would look like a user's encounter; the outsider would see you interacting with the product in a naturalistic way, for the most part.

Of course, there's overlap between those two kinds of encounters. A key difference is that the tester, upon encountering a problem, will investigate and report it. A user is much less likely to do so. (I noticed this phenomenon while trying to enter a link in LinkedIn's Articles editor; the "apply" button isn't visible, and hides off the right-hand side of the popup. I found this while interacting with LinkedIn experientially. I'd like to hope that I would have found that problem when testing intentionally, in an exploratory way, too.)

There are other dimensions of "manual testing". For a while, we considered "speculative testing" as something that people might mean when they spoke of "manual testing": testing that asks "what if?" We contrasted that with "demonstrative" testing — but then we reckoned that demonstration is not really a test at all; not intended to be, at least. For something to be testing, it must be mostly speculative by nature.

The thing is: part of the bullshit that testers are being fed is that "automated" testing is somehow "better" than "manual" testing because the latter is "slow and error prone" — as though people don't make mistakes when they use automated checks. They do. At colossal scale. 

Sure, automated checks run quickly; they have low execution cost. But they can have enormous development cost; enormous maintenance cost; very high interpretation cost (figuring out what went wrong can take a lot of work); high transfer cost (explaining them to non-authors).

There's another cost, related to these others. It's very well hidden and not reckoned. A sufficiently large suite of automated checks is impenetrable; it can't be comprehended without very costly review. Do those checks that are always running green even do anything? Who knows? 

Checks that run red get frequent attention, but a lot of them are, you know, "flaky": they run red when they should be running green. And of the thousands that are running green, how many should actually be running red? It's cognitively costly to find out — so we ignore it.
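Here's a hypothetical sketch of how a check can run red without the product being broken at all: anything in the check that depends on timing, ordering, or shared state can flip the result from run to run, and every red run costs human attention to interpret.

```python
# A sketch of a "flaky" check: the product gives the right answer on every run,
# but the check's own arbitrary timing assumption makes it fail intermittently.
# Every red result demands interpretation; none of them points to a product bug.

import random
import time

def product_responds() -> str:
    """Stand-in for the product: always correct, variably slow."""
    time.sleep(random.uniform(0.01, 0.3))
    return "OK"

def check_response() -> bool:
    """Asserts on the answer and on an arbitrary 100 ms deadline.
    The deadline, not the product, decides green or red."""
    start = time.monotonic()
    answer = product_responds()
    elapsed = time.monotonic() - start
    return answer == "OK" and elapsed < 0.1

if __name__ == "__main__":
    results = [check_response() for _ in range(20)]
    print(f"{results.count(True)} green, {results.count(False)} red; same product every time")
```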

And all of these costs represent another hidden cost: opportunity cost; the cost of doing something such that it prevents us from doing other equally or more valuable things. That cost is immense, because it takes so much time and effort to automate GUIs when we could be interacting with the damned product.

And something even weirder is going on: instead of teaching non-technical testers to code and get naturalistic experience with APIs, we put such testers in front of GUIish front-ends to APIs. So we have skilled coders trying to automate GUIs, and Cypress de-experientializing API use! The tester's experience of an API through Cypress is enormously different from the programmer's experience of trying to use the API.
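As a hedged sketch of that difference (the endpoint, payload, and helper below are hypothetical, invented for illustration), a tester who writes even a few lines of direct API interaction experiences what a programmer experiences: constructing the request, reading the raw status code and body, and handling failure explicitly. A GUI front-end largely hides all of that.

```python
# A few lines of direct API interaction (hypothetical endpoint and payload).
# The point is the kind of experience this gives the tester: constructing the
# request, reading the raw status code and body, and handling failure explicitly,
# rather than filling in fields on a GUI wrapper around the same call.

import json
from urllib import error, request

def create_customer(base_url: str, name: str) -> dict:
    """POST a new customer and return whatever comes back, status and all."""
    payload = json.dumps({"name": name}).encode("utf-8")
    req = request.Request(
        f"{base_url}/customers",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    try:
        with request.urlopen(req, timeout=5) as resp:
            return {"status": resp.status, "body": resp.read().decode("utf-8", "replace")}
    except error.HTTPError as exc:
        # The error detail is part of the tester's experience of the API, too.
        return {"status": exc.code, "body": exc.read().decode("utf-8", "replace")}
    except error.URLError as exc:
        return {"status": None, "body": f"connection problem: {exc.reason}"}

if __name__ == "__main__":
    print(create_customer("http://localhost:8080", "Ada"))
```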

And none of these testers are encouraged to analyse the cost and value of the approaches they're taking. Technochauvinism (great word; read Meredith Broussard's book Artificial Unintelligence) enforces the illusion that testing software is a routine, factory-like, mechanistic task, just waiting to be programmed away. This is a falsehood. Testing cannot be mechanized.

Testing must be seen as a social (and socially challenging), cognitive, risk-focused, critical (in several senses), analytical, investigative, skilled, technical, exploratory, experiential, experimental, scientific, revelatory, honourable craft. Not "manual" or "automated". Let us urge that misleading distinction to take a long vacation on a deserted island.

Testing has to be focused on finding problems that hurt people or make them unhappy. Why? Because optimists who are building a product tend to be unaware of problems, and those problems can lurk in the product. When the builders are aware of those problems, they can address them. In doing so, they make themselves look good, make money, and help people have better lives.

Rapid Software Testing Explored (with timing friendly for Europe, the UK, the Middle East, and India) runs online; registration is at https://www.eventbrite.ca/e/rapid-software-testing-explored-online-europe-uk-and-india-time-zones-tickets-133006222191/

Syhalla Ampula

Tester, Quality Advocate

3 years ago

The link to your site in the paragraph that begins "Of course, there's overlap between those two kinds of encounters." is taking me to a 404 error, rather than what I'm assuming should be a picture, based on the URL. Link: https://www.developsense.com/images/LinkedInApply.jpg I looked through the comments to see if anyone else had reported this, or if there was a comment on it being somehow intentional, but didn't find anything, so I'm guessing this is a bug.

Christer Nilsson

Systems End-to-End Tester at Volvo Group Connected Solutions

3 years ago

Do you recommend that book, Meredith Broussard's Artificial Unintelligence?

Henrik Ahlgren

Education Programme Leader, UX Designer & Software Tester | Coordinator, EC Event at EC Utbildning

3 years ago

Thanks for the post, Michael Bolton.

ILEANA BELFIORE

Quality obsessed and Agile enthusiast professional / Freelance hands-on software testing specialist and test coach / Blogger / Interpreter / Catalan Proofreader / Radio host / Humane Technologist

3 years ago

After reading and recommending (https://www.dhirubhai.net/posts/ileanabelfiore_activity-6774672964215422976-wBjj) this insightful article, I take the opportunity to also share my thoughts about the (potentially dangerous) relationship between testers and another (completely different) department. Hopefully useful tips included.

Overall a good article, and I agree with most of it. Yet I agree even more with @Wayne Roseberry's perspective. There are no manual or automated doctors, but there is a clear distinction between physicians and surgeons. In agile process models these days, automated tests play a vital role in helping testers complete regression testing on time. For new features and updates, MANUAL/EXPERIENTIAL/EXPLORATORY testing is inevitable, so there should be no biased claim that manual or automated is not testing, or not good testing.
