Artificial Intelligence and QA: Separating hype from reality
Artificial Intelligence (AI) is at the peak of inflated expectations in the Hype Cycle. Amara’s Law states: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”
I think most people working in AI would agree that we are currently overestimating its short-run effect. Looking at AI through the lens of the Quality Assurance space, I thought it might be useful to examine which areas of Quality Assurance seem ripe for automation (automation in the business-process sense) via AI.
In the sections below I review various areas of test-case-based execution and then provide my personal estimate of the near-term impact of using AI. I quantify this prediction with two ratings:
- Assistance: the AI system will help the QA tester or QA automation engineer.
  - A rating of “1” means no assistance
  - A rating of “10” means a high amount of assistance, such as a 50% reduction in work
- Automation: the AI system will do all the work.
  - A rating of “1” means there is no automation
  - A rating of “5” means 50% of the work will be fully automated (that is, no human assistance is needed)
- This forecast is for technologies that are available today or will be widely available within the next two years
Execution of Test cases
There are several companies focused on creating software test automation systems based on Artificial Intelligence. These systems typically reduce effort today, with the intent of eventually fully automating the following capabilities:
- Test case creation (Text based test case descriptions)
- Test case script creation (automation scripts such as Selenium Java code)
- Self-healing scripts that modify themselves when the UI changes
Test case creation
Test case creation is really challenging. The AI-based system needs to be able to analyze the UI it is asked to target and identify a series of test cases that reflect how users actually behave when using the system. Since most user interfaces, particularly ones with dynamic data, allow a very large number of possible test cases, the next challenge is to identify which test cases are a priority and select the ones to actually execute. The system also needs some level of domain expertise.
Assistance: 3/10 Automation: 1/10
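As a purely hypothetical illustration of the selection step, the sketch below ranks AI-generated candidate test cases by a weighted score of defect risk and coverage gain and keeps the top few. The class, fields, and weights are my own assumptions for illustration, not any vendor's actual model.

```java
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical sketch: rank candidate test cases by a simple weighted score.
public class TestCasePrioritizer {

    record CandidateTestCase(String title, double riskOfDefect, double coverageGain) {
        double score() {
            // Weight risk slightly higher than coverage; the weights are arbitrary.
            return 0.6 * riskOfDefect + 0.4 * coverageGain;
        }
    }

    // Keep the n highest-scoring candidates for actual execution.
    static List<CandidateTestCase> selectTop(List<CandidateTestCase> candidates, int n) {
        return candidates.stream()
                .sorted(Comparator.comparingDouble(CandidateTestCase::score).reversed())
                .limit(n)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<CandidateTestCase> candidates = List.of(
                new CandidateTestCase("Checkout with expired card", 0.9, 0.3),
                new CandidateTestCase("Login with valid credentials", 0.4, 0.7),
                new CandidateTestCase("Sort search results by price", 0.2, 0.5));
        selectTop(candidates, 2).forEach(tc -> System.out.println(tc.title()));
    }
}
```
Real systems would derive risk and coverage from the application model and its history rather than from hand-assigned numbers, which is exactly where the domain expertise problem shows up.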
Test case script creation
Many companies are focused on this area, and progress has been rapid. The usual approach is to convert text-based test cases into executable test scripts. Alternatively, some systems record test cases from actual execution or user behavior and then generate the automation scripts from those recordings.
Assistance: 7/10 Automation: 4/10
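To make the target concrete, here is a hypothetical example of the kind of Selenium Java script such a tool might generate from the text test case “log in with valid credentials.” The URL and element IDs are made up for illustration.

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

// Hypothetical generated script: log in and verify the dashboard is shown.
public class GeneratedLoginTest {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://example.com/login");
            driver.findElement(By.id("username")).sendKeys("qa_user");
            driver.findElement(By.id("password")).sendKeys("secret");
            driver.findElement(By.id("login-button")).click();

            // Generated assertion: the post-login page shows the dashboard header.
            boolean loggedIn = driver.findElement(By.id("dashboard-header")).isDisplayed();
            System.out.println(loggedIn ? "PASS" : "FAIL");
        } finally {
            driver.quit();
        }
    }
}
```
Generating the steps is the easier half; generating meaningful assertions from a plain-text test case is where most of the remaining human effort sits.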
Self-Healing scripts
To me this category really falls into the realm of rocket science. Self-healing scripts adapt to changes that occur in the UI and regenerate the automation scripts. Such a system needs to be able to detect a variety of changes, and do so quickly:
- Changes in the UI that have caused scripts to break due to technical issues such as element locator ID changes.
- Changes in the functionality of the User Interface.
- Self-healing must occur within minutes of getting access to the latest software build.
Combined, these requirements present a major technical challenge. In order to detect changes in functionality, complete (100%) script generation is a necessary precursor, and if the system generates test scripts from test cases, then complete test case generation is needed as well. The requirement that self-healing occur within minutes only compounds the challenge.
Assistance: 5/10 Automation: 1/10
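For intuition, here is a minimal sketch of only the simplest flavor of self-healing: trying fallback locators when the primary one breaks and reporting which one worked so the stored script can be updated. Shipping products use far richer signals (DOM similarity, visual matching, execution history), so treat this as a toy illustration, not a description of how any real system works.

```java
import java.util.List;
import org.openqa.selenium.By;
import org.openqa.selenium.NoSuchElementException;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

// Toy sketch: fall back through alternate locators when the primary one fails.
public class SelfHealingLocator {

    // Try each candidate locator in order and return the first element found.
    static WebElement find(WebDriver driver, List<By> candidates) {
        for (By locator : candidates) {
            try {
                WebElement element = driver.findElement(locator);
                System.out.println("Located element using: " + locator);
                return element;
            } catch (NoSuchElementException ignored) {
                // Fall through to the next candidate locator.
            }
        }
        throw new NoSuchElementException("No candidate locator matched");
    }
}
```
A script would call something like find(driver, List.of(By.id("login-button"), By.name("login"), By.cssSelector("button[type='submit']"))), where the candidate list and IDs are hypothetical, and log which locator healed the step so the script can be regenerated.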
Method of Execution of test cases
These are systems that determine the best method for executing a set of test cases. Execution options could include:
- Multiple different AI test automation systems
- Multiple different software test automation systems
- Multiple different crowdsourcing channels
- Multiple different manual QA pools of talent
- Multiple different pools of devices (for different browser/OS/Mobile platforms)
Combinations of the pools above would then be weighed to determine the optimum execution based on quality, execution time, setup effort and so on (a simple scoring sketch follows the rating below).
Assistance: 4/10 Automation: 2/10
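As a rough illustration of how such a choice could be scored, the sketch below ranks hypothetical execution channels by a weighted combination of quality, runtime and setup effort. The channel names, numbers and weights are assumptions for illustration only.

```java
import java.util.Comparator;
import java.util.List;

// Hypothetical sketch: pick the execution channel with the best weighted score.
public class ExecutionChannelSelector {

    record Channel(String name, double quality, double hoursToRun, double setupEffort) {
        // Higher is better: reward quality, penalize runtime and setup effort.
        double score() {
            return 1.0 * quality - 0.5 * hoursToRun - 0.3 * setupEffort;
        }
    }

    public static void main(String[] args) {
        List<Channel> channels = List.of(
                new Channel("In-house Selenium grid", 0.8, 2.0, 1.0),
                new Channel("Crowdsourced manual pass", 0.9, 8.0, 0.5),
                new Channel("AI-driven automation service", 0.7, 1.0, 2.0));

        Channel best = channels.stream()
                .max(Comparator.comparingDouble(Channel::score))
                .orElseThrow();
        System.out.println("Best channel: " + best.name());
    }
}
```
In practice the weights themselves would vary by release: a hotfix values execution time, while a major release values quality and coverage.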
Execution Output
After test case execution is complete, only half the job is done. There is a lot more work that goes into analyzing test case execution results and predicting defects.