Machine Learning and Quality Assurance
Debi Prasad Rath
@AmazeDataAI- Technical Architect | Machine Learning | Deep Learning | NLP | Gen AI | Azure | AWS | Databricks
Introduction to Machine Learning and Quality Assurance: -
The rise of software engineering and its principles shapes the way businesses approach software life cycle activities. Between delivery risk, the puzzle of getting the right services running in production, and the many assessments that follow, QA analysts face plenty of challenges and difficulties. It is hard for testers to test every code change throughout the testing phases, and as the codebase expands it becomes tougher still for QA engineers to run new tests manually. The entire process is time consuming, and undertaking it through manual procedures is overwhelming. With the software development life cycle and its delivery flow now so complicated, producing the right evaluation in real time falls squarely on testers. Perhaps the time has come to ease the burden of rigorous testing with machine learning algorithms and to put the right solution in place. It has become obvious in recent times that there is no option left but to test products and software in a smarter way.
With the rise of artificial intelligence, machine learning and inductive learning, the testing process can go beyond the traditional manual model and march towards an automated, precision-based testing process. In this context an AI-powered testing platform can check for the smallest of changes more efficiently and with less human error. Testing engineers can then work within a testing platform that is modelled via pre-trained controls commonly seen during the setup process. One such scenario is enabling testers to see the hierarchy of controls and prepare a technical map so that the AI-powered model can easily recognize the different levels. AI is therefore heavily used for object categorization in user interfaces.
Testing is by no means an easy job, and handling controls while performing tasks across the testing phases certainly makes it challenging. Access to user behaviour and preferences makes it much easier to collect test data with test verification in mind. Artificial intelligence can identify test cases for each user of the system and then automate them. This leads to automated testing that evaluates results and removes anomalies. Outlier detection, correlation maps and heat maps can help data scientists see the bottlenecks and decide which tests should be conducted and which are redundant, as sketched below. Finally, automated tests collect meaningful insight, draw data-driven connections and support successive decisions.
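As a rough illustration of that idea, the sketch below flags test cases whose historical results are almost perfectly correlated (candidates for pruning) and run durations that are statistical outliers. The pass/fail matrix, the duration data and the thresholds are hypothetical.

```python
# Minimal sketch: flag redundant and anomalous tests from historical results.
# Assumes a hypothetical pass/fail matrix (rows = CI runs, columns = test cases).
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
runs = pd.DataFrame(
    rng.integers(0, 2, size=(200, 5)),
    columns=[f"test_{i}" for i in range(5)],
)
runs["test_4"] = runs["test_0"]          # test_4 always mirrors test_0

# Correlation map: tests that always pass/fail together are candidates for pruning.
corr = runs.corr()
redundant = [
    (a, b)
    for a in corr.columns for b in corr.columns
    if a < b and abs(corr.loc[a, b]) > 0.95
]
print("Possibly redundant test pairs:", redundant)

# Simple outlier check on test durations: anything beyond 3 standard deviations.
durations = pd.Series(rng.normal(2.0, 0.3, size=200), name="duration_s")
z = (durations - durations.mean()) / durations.std()
print("Outlier runs:", durations[z.abs() > 3].index.tolist())
```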
ML-QA framework: -
We have discussed the subject well enough so far to address machine-learning-driven automated testing. We all know that healthy testing is the need of the hour in terms of making a system and its services error free. As a QA engineer, the task is to deliver the best set of QA practices for the ML models that are built and to define strategic roles throughout the data life cycle activities. These pointers can be adopted to make life easier for the business. In practice, ML models are often tested by the data scientists themselves, individually or as a team. To be honest, that is not the way models should be tested; they should be tested by the QA team in a timely manner. The problem, however, is that most ML models do not behave like a typical product or application whose behaviour is intended or pre-determined for different inputs. Keeping all this in mind, let us discuss the things that can be tested in ML models.
Fig: 1.0 (Courtesy: DZone.com)
Fig: 1.1 (Courtesy: qapitol.com)
The most commonly overlooked thing while building a machine learning model is the sanity of the data used for both training and testing the model, i.e. whether the data is a correct sample from the population. Here the role of QA is to sanitize and validate the data used for training. Essentially, the idea is to probe for possible data attacks with several test instances before and after training, along the lines of the checks sketched below.
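The following is a minimal sketch of such pre-training sanity checks, assuming a hypothetical tabular data set with age, income and label columns; the column names, null threshold and valid ranges are illustrative only.

```python
# Minimal sketch of data-sanity checks a QA engineer might run before training.
# Column names and valid ranges here are hypothetical examples.
import pandas as pd

def sanity_checks(df: pd.DataFrame) -> list:
    problems = []
    # 1. Schema: the expected feature columns must be present.
    expected = {"age", "income", "label"}
    missing = expected - set(df.columns)
    if missing:
        problems.append(f"missing columns: {missing}")
    # 2. Completeness: no column should be mostly null.
    for col in df.columns:
        null_ratio = df[col].isna().mean()
        if null_ratio > 0.05:
            problems.append(f"{col}: {null_ratio:.0%} nulls")
    # 3. Plausible ranges guard against corrupted or injected records.
    if "age" in df.columns and not df["age"].between(0, 120).all():
        problems.append("age outside [0, 120]")
    return problems

df = pd.DataFrame({"age": [25, 40, 300], "income": [30_000, None, 50_000], "label": [0, 1, 1]})
print(sanity_checks(df))   # e.g. ['income: 33% nulls', 'age outside [0, 120]']
```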
Depending on the relation of each independent feature to the target feature, we may find a few features relevant and a few others unnecessary. There is a set of testing practices, such as feature engineering and feature selection, that can be used to evaluate features.
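As one possible way to evaluate features, the sketch below scores each feature against the target with mutual information using scikit-learn; the synthetic data is purely illustrative.

```python
# Minimal sketch: scoring features against the target with mutual information.
# Uses scikit-learn; the synthetic data below is purely illustrative.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
n = 500
relevant = rng.normal(size=n)
noise = rng.normal(size=n)
y = (relevant > 0).astype(int)                 # target depends only on `relevant`
X = np.column_stack([relevant, noise])

scores = mutual_info_classif(X, y, random_state=0)
for name, score in zip(["relevant", "noise"], scores):
    print(f"{name}: {score:.3f}")              # low-scoring features are candidates to drop
```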
With data growing because of business process automation, and with data inconsistencies creeping in, a model can experience a high prediction error rate. Every deployed model should therefore sit inside a feedback loop and be run through iteratively so that it converges better, with accuracy improving on the previous version. One way to achieve this is to collect new sample data and test the model at run time, as the sketch below illustrates.
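A minimal sketch of that run-time check, assuming a hypothetical baseline accuracy recorded at release time and a batch of freshly labelled samples:

```python
# Minimal sketch of a run-time feedback check: compare error on freshly collected,
# labelled samples against the accuracy recorded at deployment time.
# The model, the threshold and the data source are hypothetical.
from sklearn.metrics import accuracy_score

BASELINE_ACCURACY = 0.92        # accuracy measured when the model was released
TOLERATED_DROP = 0.05           # how much degradation QA is willing to accept

def needs_retraining(model, new_X, new_y) -> bool:
    """Return True when accuracy on new samples falls below the agreed floor."""
    current = accuracy_score(new_y, model.predict(new_X))
    print(f"baseline={BASELINE_ACCURACY:.2f} current={current:.2f}")
    return current < BASELINE_ACCURACY - TOLERATED_DROP
```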
So far we have discussed pointers that are relevant to testing machine learning models. There are, however, several other mandates that need to be validated by the QA team, listed below.
ML models are developed under certain assumptions about the data, such as its distribution and summary statistics. If those assumptions no longer hold, the deployed model can produce misleading results and stop helping the business make the right decisions. There is therefore a need to constantly monitor the system after the model is deployed and to keep improving it; a minimal drift check of this kind is sketched after these points.
As the complexity of the model increases over time, it is evident that adding new data sources will help the model converge in a better way. In this way the model becomes more robust over time in terms of providing better accuracy and results.
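The sketch below is one simple way to monitor the distribution assumptions mentioned above: a two-sample Kolmogorov-Smirnov test comparing a feature's training-time distribution with what production traffic now looks like. The feature, sample sizes and alert threshold are assumptions.

```python
# Minimal sketch: detect whether a feature's live distribution has drifted away from
# the training-time distribution using a two-sample Kolmogorov-Smirnov test.
# The feature name and alerting threshold are illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_income = rng.normal(50_000, 10_000, size=5_000)   # distribution assumed at build time
live_income = rng.normal(58_000, 12_000, size=1_000)       # what production traffic now looks like

stat, p_value = ks_2samp(training_income, live_income)
if p_value < 0.01:
    print(f"Drift detected (KS statistic={stat:.3f}); re-check the model's assumptions.")
```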
Steps needed for ML-QA: -
Several approaches can lead to a successful AI/ML implementation, but the majority of implementations do not travel a successful journey in the absence of automated QA testing. To build artificial intelligence and train machine learning models, several steps have proven to be a must.
AI/ML are so useful for testing that some of the big names, such as Facebook and Netflix, have pilots doing the same set of tasks for exactly that purpose.
Skills needed for ML-QA: -
At its core, QA is about making a product suitable for the broader market, and there will always be a demand for testing in all forms of software development. Among all the new technologies, machine learning has imparted a significant shift in business and processes across industries. Machine learning cannot replace QA entirely, as some manual intervention is still required. However, with artificial intelligence and machine learning the QA process becomes far more comprehensive, providing automated tests, better test analysis and better test coverage. Some of the most important artifacts are the following.
Modern testing evolves through a lot of artifacts that have transformed the testing procedure to a large extent. Building a continuous integration/delivery pipeline around test automation therefore certainly changes the game.
When it comes to data decentralization and transparency while testing, blockchain is put forward as the solution. It helps minimize the cost of errors and ensures all transactions are legitimate.
Testing becomes very complex when data is stored and accessed via contracts or service providers across various layers. As user experience has become the crucial point of interaction, involving remote devices and sensors, there is a definite need to test for such rapidly expanding user experiences.
Areas to be tested: -
Alongside these technical advancements, the whole world is shifting towards greater adoption of AI/ML-powered smart applications, and this adoption will grow exponentially over the next few years. The challenge, however, will be to test these systems, because there is no established structural methodology for doing so. System actions and responses differ significantly over time and thus become far less predictable. Test cases can no longer be validated with 100% accuracy because of model dependency and the viability of new model versions. Given these facts, end-to-end testing alone does not provide high test coverage, nor enough assurance that the system behaves as intended. This context demands continuous live monitoring and automated feedback for long-run sustainability. In simpler terms, testing an AI/ML-based application focuses more on accuracy-based testing of models, prediction outcomes on test data, and coverage-based guidance.
This involves testing against metrics rather than only against test data: comparing precision, recall, F1-score, RMSE and MAE of the candidate model with those of the model already built and deployed to production. Performing real-time A/B testing requires intelligence in testing to obtain correct data labels.
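A minimal sketch of such a metric comparison, assuming a candidate model and the production model expose a common predict interface; the names and the simple promotion rule are illustrative.

```python
# Minimal sketch: compare a candidate classifier against the production model on the
# same held-out set before promoting it. The models and data here are placeholders.
from sklearn.metrics import precision_score, recall_score, f1_score

def compare(candidate, production, X_test, y_test) -> dict:
    report = {}
    for name, model in [("candidate", candidate), ("production", production)]:
        y_pred = model.predict(X_test)
        report[name] = {
            "precision": precision_score(y_test, y_pred),
            "recall": recall_score(y_test, y_pred),
            "f1": f1_score(y_test, y_pred),
        }
    # A simple promotion rule: the candidate must not regress on F1.
    report["promote"] = report["candidate"]["f1"] >= report["production"]["f1"]
    return report
```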
This is an attempt to understand whether the system behaves correctly when exercised with selected test cases: the expected outcome and the actual outcome should match.
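Expressed as code, such expected-versus-actual checks could look like the pytest-style sketch below; the rule-based predict function merely stands in for a real trained model, and the curated cases are hypothetical.

```python
# Minimal sketch of outcome-based checks in pytest style: for a handful of curated
# cases, the actual prediction must match the expected outcome. The simple rule-based
# `predict` below stands in for a real trained model.
import pytest

def predict(features: dict) -> int:
    # Placeholder for model.predict(); approves when income is high enough.
    return int(features["income"] >= 50_000)

KNOWN_CASES = [
    ({"age": 25, "income": 30_000}, 0),   # expected: not approved
    ({"age": 45, "income": 90_000}, 1),   # expected: approved
]

@pytest.mark.parametrize("features, expected", KNOWN_CASES)
def test_expected_vs_actual(features, expected):
    assert predict(features) == expected
```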
The strategy is to cover all aspects of testing a model. For example, multiple models that use different algorithms need to be validated against the same input data set, and the one that produces the most expected outcomes is eventually selected. In addition, the data fed to the model is tested with all features activated; testers often need such data to exercise every feature the model relies on.
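A minimal sketch of that model-comparison step, using scikit-learn and a synthetic data set as stand-ins:

```python
# Minimal sketch: validate several candidate algorithms on the same data set and keep
# the one that produces the most expected outcomes (here, highest cross-validated accuracy).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(random_state=0),
}
scores = {name: cross_val_score(model, X, y, cv=5).mean() for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(scores, "-> selected:", best)
```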
Fig: 1.2 (Courtesy: wipro.com)
Techniques involved in ML-QA: -
Over the years, the application development domain has been heavily focused on speed of development and speed of delivery, as discussed above. Along with that, the industry is shifting gears towards AI and machine learning to solve critical business problems or automate processes. The development process itself has gone through a significant transformation, adopting a culture that prioritizes continuous delivery, commonly called DevOps. With the advent of AI, and with the cultural shift of DevOps to support continuous delivery, test automation becomes a priority; success, however, comes from combining the right set of techniques with the right tools and technology.
With regards to testing AI/ML applications, this means comparing builds and versions, classifying the differences, and then learning from the feedback and responses.
A specific test can be specified through an intent, according to the domain or via natural language, with the system left to decide how to execute the test.
Image-based learning can leverage the comparison of images/screenshots taken at specific time intervals to test the look and feel of the developed application.
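A minimal sketch of such a screenshot comparison using Pillow, where the file names and the tolerated change ratio are assumptions:

```python
# Minimal sketch of image-based comparison: diff two screenshots taken at different
# times and fail the check when the changed area exceeds a tolerance. File names are
# placeholders; Pillow (PIL) is assumed to be available.
from PIL import Image, ImageChops

def screens_match(baseline_path: str, current_path: str, max_diff_ratio: float = 0.01) -> bool:
    baseline = Image.open(baseline_path).convert("RGB")
    current = Image.open(current_path).convert("RGB")
    if baseline.size != current.size:
        return False                                   # layout change, flag immediately
    diff = ImageChops.difference(baseline, current)
    changed = sum(1 for pixel in diff.getdata() if pixel != (0, 0, 0))
    return changed / (diff.width * diff.height) <= max_diff_ratio

# Example usage: screens_match("checkout_v1.png", "checkout_v2.png")
```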
Auto-correcting the selection of test elements when some functionality changes at run time.
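One way such self-healing element selection is often approached is a prioritized chain of fallback locators; the sketch below uses a plain dictionary as a stand-in for a real page/driver API such as Selenium, and all locators are hypothetical.

```python
# Minimal sketch of "self-healing" element selection: when the primary locator breaks
# after a UI change, fall back to alternative locators instead of failing the test.
# The page model and locators are hypothetical; a real suite would use a driver API
# in place of the dict lookup.

FAKE_PAGE = {"css:#checkout-btn": None, "text:Checkout": "<button>Checkout</button>"}

def find_element(page, locators):
    """Try each locator in priority order and report which one healed the test."""
    for locator in locators:
        element = page.get(locator)
        if element is not None:
            return locator, element
    raise LookupError(f"none of the locators matched: {locators}")

used, element = find_element(FAKE_PAGE, ["css:#checkout-btn", "css:.btn-primary", "text:Checkout"])
print(f"located via fallback '{used}': {element}")
```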
So far we have gone through the basic level of testing required for an application that uses ML models. That said, the same concepts can be applied, to a large extent, to invariant and integration testing.
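For instance, an invariant (metamorphic) test asserts that a transformation which should not matter leaves the prediction unchanged. A minimal sketch, with a rule-based stand-in for the model and a hypothetical irrelevant field:

```python
# Minimal sketch of an invariant (metamorphic) test: a transformation of the input that
# should not change the prediction is applied, and the outputs are compared. The model
# and the "irrelevant" field are illustrative assumptions.

def predict(features: dict) -> int:
    # Stand-in for a real model; ignores the customer's name by design.
    return int(features["income"] >= 50_000)

def test_prediction_invariant_to_name():
    base = {"name": "alice", "income": 60_000}
    renamed = {**base, "name": "bob"}        # metamorphic relation: the name must not matter
    assert predict(base) == predict(renamed)

test_prediction_invariant_to_name()
```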
Approach for different types of ML/DL: -
Software testing should welcome the new era of AI/ML-driven testing. Traditional testing looks a bit fragile and bulky when it comes to testing a system completely. The transition to AI will be fun, and the time has come to seize the moment. Researchers always make use of a lot of training examples with different algorithms such as ANNs, CNNs and RNNs, and start tweaking them accordingly; by and large, this is nothing but testing. Engineers and researchers allocate most of their time to testing software and systems built on AI/ML with loads of data. Note that testers often end up writing testing code rather than limiting themselves to writing tests by hand. The process loops through a lot of search, algorithms and infrastructure/tools, which makes it more sophisticated yet still very similar in spirit to testing. Furthermore, testing tools, algorithms and datasets are dealt with specifically in AI-driven testing, making complex training mechanisms and test-case construction easier than ever.
Possible platforms and tools: -
Here is a list of five AI-powered test automation tools that are available; there may be a few others as well.
Fig: 1.3 (Courtesy: google.com)
Use cases: -
Organizations cannot embrace an ML-based test strategy completely overnight. Development and testing teams will have to spell out the metrics and KPIs that are right for them in order to create a success story. The following are the important use cases as far as ML-QA automation is concerned.
Future trends in ML-QA: -