Is AI based Test Automation the new Capture-Replay?


Test automation is ‘broken’, and has been for years. Even today, some 25 years into the evolution of the discipline, 20-25% automation coverage seems to be the industry norm. Why? Fundamentally, it’s a knowledge-capture problem.

Domain experts hold the knowledge; the challenge is articulating that knowledge in a formal structure that can drive an automated test tool. How do we get from A (process knowledge) to B (a machine-readable structure for replaying that process, i.e. code)?

a)     Capture-Replay – Here be Dragons: Teach a tool, by limited example, the steps that you want to automate, and replay them. Accuracy, robustness, maintenance, re-use, application of verification: the list of problems goes on. Fundamentally not a good idea, and I think we all accept that.
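As a toy illustration of why recorded scripts are brittle, here is a sketch of the capture-replay idea (the locators, fake UI and step format are invented for the example, not any real tool's): recorded steps are replayed verbatim against exact locators, and the script breaks the moment one changes.

```python
# Minimal capture-replay sketch. A recording is just a literal list of
# (action, locator, args) tuples, replayed with no notion of intent.
class FakeElement:
    """Stand-in for a UI element; records the actions performed on it."""
    def __init__(self):
        self.log = []

    def click(self):
        self.log.append("click")

    def type(self, text):
        self.log.append(("type", text))


recorded_steps = [
    ("click", "button#login"),
    ("type", "input#username", "alice"),
]


def replay(steps, ui):
    """Replay steps verbatim; any moved or renamed locator breaks the script."""
    for action, locator, *args in steps:
        element = ui.get(locator)  # brittle: exact locator match only
        if element is None:
            raise RuntimeError(f"locator {locator!r} not found - script is broken")
        getattr(element, action)(*args)


ui = {"button#login": FakeElement(), "input#username": FakeElement()}
replay(recorded_steps, ui)
```

Rename `button#login` in the application and the replay raises immediately, with no way to tell whether that is a defect or just cosmetic churn.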

b)    Abstraction frameworks – Provide a mechanism for domain experts to articulate a process in a formalized, abstract manner, e.g. tables or spreadsheets. This has some merit, but requires a technical champion to build and maintain the framework, with varying degrees of input: from keyword frameworks (large amounts of input to code the keywords) to class-action frameworks (smaller amounts of input to code specific requirements).
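A minimal sketch of the keyword-framework idea, assuming a domain expert supplies rows like the table below (the keyword names and the banking domain are invented for illustration). The technical champion's job is the `@keyword` implementations; the expert only writes rows.

```python
# Toy keyword-driven framework: a registry of named actions plus an
# interpreter that walks a table of (keyword, *args) rows.
KEYWORDS = {}


def keyword(name):
    """Register a function as a keyword that table rows can invoke."""
    def register(fn):
        KEYWORDS[name] = fn
        return fn
    return register


@keyword("open_account")
def open_account(ctx, owner):
    ctx[owner] = 0


@keyword("deposit")
def deposit(ctx, owner, amount):
    ctx[owner] += int(amount)


@keyword("check_balance")
def check_balance(ctx, owner, expected):
    assert ctx[owner] == int(expected), f"{owner}: {ctx[owner]} != {expected}"


def run_table(rows):
    """Interpret each row as keyword + arguments; ctx carries test state."""
    ctx = {}
    for kw, *args in rows:
        KEYWORDS[kw](ctx, *args)
    return ctx


# What the domain expert authors, e.g. exported from a spreadsheet:
table = [
    ("open_account", "alice"),
    ("deposit", "alice", "100"),
    ("check_balance", "alice", "100"),
]
```

The trade-off the article describes is visible even here: every new keyword is code someone technical must write and maintain.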

c)     BDD – A kind of abstraction framework that mashes up the capture of domain knowledge and test structure into a ubiquitous language with shared application, i.e. the spec and the test code share one structure. E.g. Gherkin, Cucumber, SpecFlow et al. Does this work in reality? The jury is still out on that one. The evidence suggests feature-led development and testing works well; integration or larger E2E journeys seem harder, and technical input is still needed to implement the clauses in code.
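The clause-to-code binding that tools like Cucumber perform can be sketched in a few lines (the regex-decorator mechanism and the basket scenario here are a toy imitation for illustration, not any real tool's API). Note that the step definitions are still code a technician must write, which is exactly the residual technical input the article points at.

```python
import re

# Toy BDD runner: step definitions are regexes bound to functions,
# and a scenario is dispatched clause by clause against them.
STEPS = []


def step(pattern):
    """Bind a Gherkin-style clause pattern to an implementing function."""
    def register(fn):
        STEPS.append((re.compile(pattern), fn))
        return fn
    return register


@step(r"a basket with (\d+) items?")
def given_basket(ctx, n):
    ctx["count"] = int(n)


@step(r"I add (\d+) more items?")
def when_add(ctx, n):
    ctx["count"] += int(n)


@step(r"the basket holds (\d+) items?")
def then_holds(ctx, n):
    assert ctx["count"] == int(n)


def run_scenario(lines):
    """Strip Given/When/Then/And keywords and dispatch each clause."""
    ctx = {}
    for line in lines:
        clause = re.sub(r"^(Given|When|Then|And)\s+", "", line.strip())
        for pattern, fn in STEPS:
            match = pattern.fullmatch(clause)
            if match:
                fn(ctx, *match.groups())
                break
        else:
            raise LookupError(f"no step definition for: {clause!r}")
    return ctx


scenario = [
    "Given a basket with 2 items",
    "When I add 3 more items",
    "Then the basket holds 5 items",
]
```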

d)    TBC.

e)    Artificial Intelligence (Hurrah!) – So we can now teach a tool using large quantities of examples, through RL and ML techniques, and just let bots do the testing for us! A bright future awaits! Here be Dragons (again). In essence this is a kind of aggregated capture-replay: learning how to interpret process “intent” by example. It still requires a formal mechanism for documenting intent for replay in an abstract, machine-readable form. On top of that, we still need a way of specifying success/failure criteria and of differentiating defects from automation failures. In this world, is a failure a real defect, an application change, or a gap in the AI’s learning?

Whilst I am genuinely, immensely excited by the advances in AI and their application in the test automation space, particularly in interpreting micro-intent (e.g. “Click the Login Button”) to minimize automation brittleness, there still seems to be a disconnect between defining the larger test process from domain expertise and the automation code.

I suspect that natural language processing (to infer test intent) is a candidate for research and development, but fundamentally there is still a need to capture domain expertise and process in some form.
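As a crude stand-in for that intent inference, even plain string similarity can resolve a micro-intent like “Click the Login Button” to the nearest on-screen label. The `difflib` scoring and the button labels below are purely illustrative; real AI-driven tools would use learned models, but the shape of the problem, and the ambiguity when no label scores well, is the same.

```python
import difflib

# Toy micro-intent resolver: score each candidate label against every
# word of the instruction and pick the closest match.
def resolve_target(instruction, labels):
    """Return the label most similar to any word of the instruction."""
    words = instruction.lower().split()

    def best_score(label):
        return max(
            difflib.SequenceMatcher(None, word, label.lower()).ratio()
            for word in words
        )

    return max(labels, key=best_score)


buttons = ["Sign up", "Log in", "Forgot password?"]
```

Here `resolve_target("Click the Login Button", buttons)` picks “Log in” despite the spelling mismatch, which is the brittleness win; deciding what to do when the best score is poor is the defect-versus-learning-gap question raised above.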

So, back to d): how do we capture domain expertise in a simple, flexible and machine-readable fashion? Block coding might provide that interim answer: https://www.scriptworks.io

Yonatan Landau

QA Engineer and Automation Associate @ Folloze

6y

The best example is the automobile assembly line: it has been populated by robots that build the automobile, yet there is always some kind of professionally trained human who must ensure those machines do their work as expected. From my humble point of view, the best combination is an automation framework that covers not all, but most, of the testing, always under constant human supervision.

Jim Hazen

Software Test Automation Architect and Performance Test

6y

As someone who has seen this evolution over the last 25+ years of "automation", I can hope that AI does not bring us back to Record/Playback (Capture/Replay). The problem isn't with the tools; it has always been the expectations of them. The "Silver Bullet" that everyone seeks is like a Purple Squirrel: they just don't exist. In other terms, "It's Automation, Not Automagic!" as I'm known to say (sorry, couldn't help but stick it into this post). We only have ourselves to blame for listening to and believing the hype (or hyperbole), and I'm not innocent either on this one (I admit I bit hard into it when I started back in the late 80s / early 90s). The key thing in any technical project is to first understand why we want to do it, what we need to do, how we want to do it, and when and where we should do it. We need to remember to always use that "automation" tool between our ears first. I know some people will see this as cynical and negative; I'm not trying to be. I'm reminding you to use your brain first. As testers we are taught to be inquisitive and skeptical at the same time. In some cases, if it sounds too good to be true, it probably isn't what you want.


More articles by Duncan Brigginshaw
