Machine Learning and Quality Assurance

Contents: -

  1. Framework to perform ML-QA
  2. Steps needed
  3. Skills needed
  4. Areas to be tested and techniques involved
  5. Approach for different types of ML/DL
  6. Possible platforms/scripting/data-testing tools available in the market
  7. Use cases/case studies
  8. Future trends in ML-QA
  9. References


Introduction to Machine Learning and Quality Assurance: -


The rise of software engineering and its principles drives the way businesses approach activities across the software life cycle. Between the risk associated with delivery, the puzzle of getting the right services running in production and the need to run assessment after assessment, QA analysts face plenty of challenges and difficulties. It is very demanding for testers to test every code change through all the testing phases, and as the codebase expands it becomes tough for QA engineers to keep writing and running new tests manually. The entire process is time consuming, and undertaking it by hand is overwhelming. With the software development life cycle, including the delivery flow, having become so complicated, producing the right evaluation in real time now falls on testers. Perhaps the time has come to offset the burden of such rigorous manual testing with machine learning algorithms and put the right solution in place. It is quite obvious these days that there is no other option: products and software have to be tested in a smarter way.


  1. Understanding machines: -

With the rise of artificial intelligence, machine learning and inductive learning, the testing process can go beyond the traditional manual approach and march towards an automated, precision-based testing process. In this context an AI-powered testing platform can check for the smallest of changes more efficiently and with less human error. Testing engineers, in turn, work on a platform modelled via the pre-trained controls commonly seen during the setup process. One peculiar scenario is enabling testers to see the hierarchy of controls and prepare a technical map so that the AI-powered model can easily recognize the different levels. AI is therefore heavily used for object categorization in user interfaces.

  2. Deciphering automated testing: -

Testing is under no circumstances an easy job, and handling controls while performing tasks in the testing phases certainly makes it challenging. Analysing user behaviour and preferences makes it a lot easier to collect test data once test verification is taken into account. Artificial intelligence can identify test cases for each user of the system and then automate them, which leads to automated testing that evaluates results and removes anomalies. Outlier detection, correlation maps and, of course, heat maps can help data scientists see the bottlenecks: which tests should be conducted and which are redundant. Finally, automated tests collect meaningful insights and data-driven connections, and decisions follow successively.
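As a hedged illustration of that idea, the sketch below mines a synthetic test-run log: IQR-based outlier detection flags pathological run times, and a correlation map over per-test failure history surfaces candidate redundant tests. The column names, thresholds and data are all assumptions made for illustration, not a prescribed tool.

```python
# Sketch: mining test-run data for bottlenecks and redundancy.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Stand-in for a CI failure log: one row per run, one column per test (1 = failed).
runs = pd.DataFrame(rng.integers(0, 2, (50, 4)),
                    columns=["test_a", "test_b", "test_c", "test_d"])
runs["test_d"] = runs["test_a"]                     # deliberately redundant test
durations = pd.Series(rng.normal(10, 2, 50)).clip(lower=0.1)
durations.iloc[3] = 60                              # one pathologically slow run

# Outlier detection on run time using the interquartile-range (IQR) rule.
q1, q3 = durations.quantile([0.25, 0.75])
iqr = q3 - q1
slow = durations[(durations < q1 - 1.5 * iqr) | (durations > q3 + 1.5 * iqr)]
print("outlier runs:", list(slow.index))

# Correlation map over failure history: tests that always fail together
# are candidates for pruning.
corr = runs.corr()
mask = np.triu(np.ones(corr.shape, dtype=bool))     # drop self/duplicate pairs
pairs = corr.mask(mask).stack()
print("redundant candidates:\n", pairs[pairs > 0.95])
```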


ML-QA framework: -

We have now covered the subject well enough to address machine-learning-driven automated testing. We all know that healthy testing is the need of the hour for making a system and its services error free. As a QA engineer the task is to deliver the best set of QA practices for the ML models being built, and to define strategic roles throughout the data life cycle; these pointers can be adopted to make life easier for the business. In practice, ML models are often tested by data scientists individually or as a team. To be honest, that is not the way to test models; they should be tested by the QA team in a timely manner. The problem, however, is that most ML models do not behave like a typical product/application whose output for a given input is intended or pre-determined. Keeping all this in mind, let us discuss the things that can be tested with ML models.



Fig 1.0 (Courtesy: DZone.com)


Fig 1.1 (Courtesy: qapitol.com)


  1. Quality of data: -

The most commonly overlooked thing while building a machine learning model is the sanity of the data used for both training and testing, i.e. whether the data is a correct sample of the population. Here the role of QA is to sanitize or validate the data used for training. Essentially the idea is to probe for possible data attacks with several test instances before and after training. To do so, the following should be taken care of (a scripted sketch follows the list):


  1. Statistics of the data: mean, median, mode, etc.
  2. Understand data relations: correlation, multicollinearity, skewness, kurtosis, etc.
  3. Run scripts to verify the above.
  4. Verify the same on a regular schedule.
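Below is a minimal sketch of these checks using pandas and statsmodels. The data frame is a synthetic stand-in for the real training set, and "target" is an assumed name for the label column.

```python
# Sketch: scripted data-sanity checks (statistics, relations, multicollinearity).
import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
train = pd.DataFrame({"age": rng.normal(40, 10, 500),
                      "income": rng.lognormal(10, 0.5, 500)})
train["spend"] = 0.3 * train["income"] + rng.normal(0, 500, 500)
train["target"] = (train["spend"] > train["spend"].median()).astype(int)

# 1. Basic statistics: mean, median, mode
print(train.describe())
print("medians:\n", train.median())
print("modes:\n", train.mode().iloc[0])

# 2. Data relations: correlation, skewness, kurtosis
print("correlation:\n", train.corr())
print("skewness:\n", train.skew(), "\nkurtosis:\n", train.kurtosis())

# Multicollinearity via the variance inflation factor (VIF > ~10 is a red flag)
X = train.drop(columns=["target"])
for i, col in enumerate(X.columns):
    print(col, "VIF:", variance_inflation_factor(X.values, i))
```

Wrapping these prints in assertions inside a pytest suite and scheduling the suite in CI covers point 4, verifying the same on a regular basis.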


  2. Quality of features: -

Depending on the relation of each independent feature to the target feature, we may find some features relevant and others unnecessary. A set of testing practices, such as feature engineering and feature selection, exists to evaluate features.
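As a small sketch of such feature evaluation, the snippet below scores each independent feature against the target with scikit-learn's mutual information and keeps the top k. The iris data is a stand-in for the real dataset, and k=2 is an arbitrary choice for illustration.

```python
# Sketch: feature-relevance scoring and selection with scikit-learn.
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, mutual_info_classif

data = load_iris(as_frame=True)
X, y = data.data, data.target

# Score every independent feature against the target; near-zero scores
# flag features that are likely unnecessary.
scores = mutual_info_classif(X, y, random_state=0)
print(pd.Series(scores, index=X.columns).sort_values(ascending=False))

# Keep only the k most informative features.
selector = SelectKBest(mutual_info_classif, k=2).fit(X, y)
print("selected:", list(X.columns[selector.get_support()]))
```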


  3. Quality of ML algorithms: -

As data grows through business process automation, inconsistencies in that data can drive a high prediction error rate. Every developed model should therefore sit inside a feedback loop and be run through iteratively, converging with better accuracy each time. One way to achieve this is to collect new sample data and test the model at run time. In addition, the following should be taken care of (the loop is sketched after the list):

  1. Keep all the ML models in place so they can be tested further with new data
  2. Retrain all the models for evaluation
  3. Track performance and verify accuracy
  4. Analyse each model test run and its performance, and raise a bug in case of any issue
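The loop might look like the following sketch, in which each candidate model is re-fitted on fresh data and flagged when accuracy regresses past a threshold. The baseline accuracies, the 5% tolerance and the synthetic "fresh" data are all assumptions for illustration.

```python
# Sketch: retrain candidate models on fresh data and verify accuracy
# against the deployed baselines.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Stand-in for newly collected, labelled production data.
X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

BASELINE = {"logreg": 0.88, "forest": 0.91}   # hypothetical deployed accuracies
TOLERANCE = 0.05                              # allowed drop before raising a bug

for name, model in {"logreg": LogisticRegression(max_iter=1000),
                    "forest": RandomForestClassifier(random_state=0)}.items():
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    status = "OK" if acc >= BASELINE[name] - TOLERANCE else "REGRESSION - raise a bug"
    print(f"{name}: accuracy {acc:.3f} (baseline {BASELINE[name]:.2f}) -> {status}")
```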


So far we have discussed pointers relevant to testing the machine learning models themselves. But there are several other mandates that need to be validated by the QA team, listed below:

  1. Inherent human bias: -

ML models are developed manually under certain assumptions about data distribution, statistics and so on. If those assumptions do not hold, the developed model can skew results and deprive the business of sound decision-making capability. There is therefore a need to constantly monitor the system once the model is deployed, and to improve upon it.

  2. Inclusion of data sources: -

As the complexity of the model increases over time, adding new data sources evidently helps the model converge in a better way. In this way the model becomes more robust over time, providing better accuracy and results.


Steps needed for ML-QA: -

Several approaches can lead to a successful AI/ML implementation, but the majority of implementations do not travel a successful journey in the absence of automated QA testing. To build artificial intelligence and train machine learning models, the following steps have proven to be a must:


  1. Organizing tests and building automated tests (test data) for better test coverage
  2. Inferring what needs to run, what is to be fixed and where the error is
  3. Testing logs and verifying them against test outcomes
  4. Predicting possible code issues and sending out timely notifications
  5. Prioritizing test cases and fixing broken tests during run time (sketched below)
  6. Identifying all the changes over the course of the entire testing phase
  7. Creating analysis reports, testing status and test coverage
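As one illustration of step 5, the sketch below trains a simple classifier on a synthetic historical execution log to predict which tests are likely to fail, so the riskiest run first. The feature names and data are hypothetical; real signals would come from the CI system.

```python
# Sketch: ML-based test-case prioritisation from a (synthetic) CI history.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
history = pd.DataFrame({
    "files_changed": rng.integers(0, 20, 400),
    "recent_failures": rng.integers(0, 5, 400),
    "age_days": rng.integers(1, 365, 400),
})
# Synthetic label: tests with many recent failures tend to fail again.
history["failed"] = (history["recent_failures"] + rng.normal(0, 1, 400) > 2).astype(int)

features = ["files_changed", "recent_failures", "age_days"]
clf = GradientBoostingClassifier().fit(history[features], history["failed"])

# Score a batch of pending tests and run the riskiest first.
pending = history.sample(10, random_state=1).copy()
pending["risk"] = clf.predict_proba(pending[features])[:, 1]
print(pending.sort_values("risk", ascending=False)[features + ["risk"]])
```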


AI/ML are so useful for testing that some of the big names, such as Facebook and Netflix, run pilots doing exactly this set of tasks.


Skills needed for ML-QA: -

At its core, QA is about making a product suitable for the broader market, and there will always be demand for testing in all forms of software development. Among all the new technologies, machine learning has imparted a significant shift in business and processes across all industries. Machine learning cannot replace QA completely, as some manual intervention is still required; however, with artificial intelligence and machine learning the QA process becomes a lot more comprehensive in terms of automated tests, better test analysis and test coverage. The most important artifacts are:

  1. DevOps

Modern testing evolves through many artifacts that have transformed the testing procedure to a large extent. Building a continuous integration/delivery pipeline around test automation certainly changes the game.

  2. Blockchain

When it comes to data decentralization and transparency while testing, blockchain is a natural fit. It helps minimize the cost of errors and ensures all transactions are legitimate.

  3. IoT

Testing becomes very complex when data is stored and accessed via contracts or service providers across various layers. As user experience has become the crucial point of interaction, involving remote devices and sensors, there is a definite need to test these expanding user experiences.

  4. AI/ML

AI lets systems and processes reach the highest levels of testing yet, bringing QA automation to full self-reliance, from running tests through to adapting processes. Having said that, there are practices in QA that must still be performed by QA specialists, for instance understanding complex algorithms and their test coverage.


Areas to be tested: -

With ongoing technical advancement, the whole world is shifting towards adopting AI/ML smart applications, and that adoption will increase exponentially over the next few years. The challenge, though, will be testing these systems, since no structured methodology yet exists for doing so. System actions and responses differ significantly over time and thus become far less predictable, so test cases can no longer be validated with 100% accuracy because of model dependencies and the viability of new versions. Given these facts, end-to-end testing alone does not provide enough coverage to establish that the system behaves as intended; the context demands continuous live monitoring and automated feedback/response for long-run sustainability. In simpler terms, testing an AI/ML-based application focuses on accuracy-based testing of models, prediction outcomes on test data and coverage-based guidance.


  1. Model performance

This involves testing against metrics rather than only against test data: precision, recall, F1-score, RMSE and MAE are compared with those of the model already built and deployed to production. Performing real-time A/B testing requires intelligence in the testing process to obtain correctly labelled data.
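A minimal sketch of such a metric comparison with scikit-learn follows; the labels and predictions are toy stand-ins for real test-set outputs.

```python
# Sketch: compare a candidate model against production on the named metrics.
import numpy as np
from sklearn.metrics import (precision_score, recall_score, f1_score,
                             mean_squared_error, mean_absolute_error)

# Stand-in labels and predictions; in practice these come from the test set.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
prod_preds = np.array([1, 0, 0, 1, 0, 1, 0, 1])   # deployed model
cand_preds = np.array([1, 0, 1, 1, 1, 1, 0, 1])   # candidate model

for label, pred in [("production", prod_preds), ("candidate", cand_preds)]:
    print(f"{label}: precision={precision_score(y_true, pred):.2f} "
          f"recall={recall_score(y_true, pred):.2f} f1={f1_score(y_true, pred):.2f}")

# For regression models, compare RMSE and MAE instead.
y_reg, pred_reg = np.array([3.1, 2.4, 5.0]), np.array([2.9, 2.8, 4.6])
rmse = np.sqrt(mean_squared_error(y_reg, pred_reg))
print(f"RMSE={rmse:.3f} MAE={mean_absolute_error(y_reg, pred_reg):.3f}")
```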

  2. Analyzing outcomes

This is an attempt to understand whether the system behaves correctly when exercised with the selected test cases: the expected and actual outcomes should match.
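Expressed as a plain pytest check, this might look like the sketch below, where an iris classifier stands in for whatever model the team ships and the curated cases are illustrative.

```python
# Sketch: expected-vs-actual outcome checks as parametrized pytest cases.
import pytest
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
model = DecisionTreeClassifier(random_state=0).fit(X, y)   # stand-in model

CASES = [
    ([5.1, 3.5, 1.4, 0.2], 0),   # (input features, expected class)
    ([6.7, 3.0, 5.2, 2.3], 2),
]

@pytest.mark.parametrize("features,expected", CASES)
def test_expected_matches_actual(features, expected):
    # The actual prediction must equal the expected outcome.
    assert model.predict([features])[0] == expected
```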

  3. Coverage guidance

The strategy here is to cover all aspects of testing a model. For example, multiple models built with different algorithms need to be validated on the same input data set, and the one producing the most expected outcomes is eventually selected. The data fed to the model should also be tested for whether it activates every feature, since testers often need data that triggers each feature the model depends on.
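A hedged sketch of that selection step follows: several algorithms are validated on the same data via cross-validation and the best performer is kept. The dataset and candidate set are stand-ins.

```python
# Sketch: validate several algorithms on the same data, keep the best.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
candidates = {
    "logreg": LogisticRegression(max_iter=1000),
    "tree": DecisionTreeClassifier(random_state=0),
    "svm": SVC(),
}

# Same input data set for every model; 5-fold cross-validated accuracy.
scores = {name: cross_val_score(m, X, y, cv=5).mean() for name, m in candidates.items()}
best = max(scores, key=scores.get)
print(scores, "-> selected:", best)
```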



Fig 1.2 (Courtesy: wipro.com)


Techniques involved in ML-QA: -

Over the years the application development domain has focused heavily on speed of development and speed of delivery, as discussed above. Alongside that, the industry is shifting gears towards AI and machine learning to solve critical business problems and automate processes. The development process itself has gone through a significant transformation, adopting a culture that prioritizes continuous delivery, coined DevOps. With the advent of AI and the cultural shift of DevOps towards continuous delivery, test automation becomes a priority; success, however, comes from combining the right set of techniques with the right tools and technology.


  1. Differential testing

Testing AI/ML applications by comparing builds and versions, classifying the differences, and then learning from feedback and responses.
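As a sketch of the idea, two differently configured decision trees below stand in for an "old" and a "new" build; the same frozen inputs run through both, and the differences are surfaced for review.

```python
# Sketch: differential testing between two model "builds".
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, random_state=0)
v1 = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)   # "old build"
v2 = DecisionTreeClassifier(max_depth=6, random_state=0).fit(X, y)   # "new build"

p1, p2 = v1.predict(X), v2.predict(X)
diff_idx = np.flatnonzero(p1 != p2)
print(f"{len(diff_idx)} of {len(X)} predictions changed between builds")
for i in diff_idx[:5]:   # surface a sample of differences for human feedback
    print(f"row {i}: v1={p1[i]} v2={p2[i]}")
```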

  2. Declarative testing

Specifying a test through its intent, in domain-specific or natural language, and letting the system decide how to execute it.

  3. Visual testing

Image-based learning can compare images/screenshots captured at specific time intervals to test the look and feel of the developed application.
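A minimal sketch of a pixel-level screenshot diff follows. Real captures would be loaded with a library such as Pillow; synthetic arrays stand in for them here, and both tolerances are assumptions.

```python
# Sketch: fail a visual test when too many pixels changed between screenshots.
import numpy as np

rng = np.random.default_rng(0)
baseline = rng.integers(0, 256, (600, 800, 3)).astype(np.int16)  # stand-in capture
current = baseline.copy()
current[100:105, 200:210] += 40        # simulate a small UI change

per_pixel = np.abs(baseline - current).max(axis=-1)
changed = (per_pixel > 10).mean()      # per-channel tolerance ignores AA noise
print(f"{changed:.3%} of pixels differ")
assert changed < 0.005, "visual regression detected"   # 0.5% threshold (assumed)
```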


  4. Self-testing

Auto-correcting test element selection when some functionality changes at run time.
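One common way this "self-healing" is realized is a fallback chain of locators, sketched here against Selenium's public API. The locators and usage are hypothetical.

```python
# Sketch: self-healing element selection with Selenium 4.
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException

# Ordered candidate locators for the same logical "login" element.
LOGIN_LOCATORS = [
    (By.ID, "login-button"),
    (By.NAME, "login"),
    (By.XPATH, "//button[contains(text(), 'Log in')]"),
]

def find_with_healing(driver, locators):
    """Return the first element any locator matches, 'healing' past stale ones."""
    for by, value in locators:
        try:
            return driver.find_element(by, value)
        except NoSuchElementException:
            continue  # this locator broke; try the next known alternative
    raise NoSuchElementException(f"no locator matched: {locators}")

# Usage (requires a browser driver):
#   driver = webdriver.Chrome(); driver.get("https://example.com/login")
#   find_with_healing(driver, LOGIN_LOCATORS).click()
```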


So far we have gone through the basic levels of testing required for an application that uses ML models. That said, the same concepts carry over to invariant and integration testing to a large extent (a combined sketch follows the list):


  1. At the invariant level of testing we should establish what must hold true for a trained model artifact (metrics and parameters) to make correct predictions.
  2. This means every row passed as input to test the model must retrieve a correct output prediction.
  3. The model should also predict the outcome within a certain amount of time, so keep a measure of time for any complex inference.
  4. To satisfy integration testing, you need good test coverage of the model's prediction ability, catching errors as and when they occur.
  5. If the data contains null values or unnecessary features that cause a value error, have your function handle them accordingly.
  6. Once testing is over, add a few more relevant test cases as examples so the suite learns iteratively.
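Here is a combined sketch of several of these checks: every input row yields a valid prediction, inference stays inside a time budget, and nulls are imputed instead of raising a ValueError. The time budget, label set and model are assumptions for illustration.

```python
# Sketch: invariant and integration checks around a trained model artifact.
import time
import numpy as np
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=200, n_features=4, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)   # stand-in artifact

TIME_BUDGET_S = 0.5        # latency invariant (hypothetical)
VALID_LABELS = {0, 1}

def check_invariants(model, frame: pd.DataFrame):
    clean = frame.fillna(frame.median(numeric_only=True))  # null handling
    start = time.perf_counter()
    preds = model.predict(clean.to_numpy())
    elapsed = time.perf_counter() - start
    assert len(preds) == len(frame), "every row must yield a prediction"
    assert set(np.unique(preds)) <= VALID_LABELS, "prediction outside label set"
    assert elapsed < TIME_BUDGET_S, f"inference too slow: {elapsed:.3f}s"

frame = pd.DataFrame(X)
frame.iloc[0, 0] = np.nan    # inject a null to exercise the handling path
check_invariants(model, frame)
print("all invariants hold")
```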


Approach for different types of ML/DL: -

Software testing should welcome the new era of AI/ML-driven testing. Traditional testing looks fragile and bulky when it comes to testing a system completely; the transition to AI will be fun, and the time has come to seize the moment. Researchers routinely take large numbers of training examples, try different algorithms such as ANN, CNN and RNN, and tweak them accordingly; by and large, this is nothing but testing. Engineers and researchers allocate most of their time to testing software and systems built on AI/ML with loads of data. Note that testers often write testing code rather than being limited to writing only product code. The process loops through plenty of search, algorithms and infrastructure/tools, which makes it sophisticated yet very similar in spirit to testing. Furthermore, AI-driven testing deals specifically with testing tools, algorithms and datasets, making complex training mechanisms and test-case construction easier than ever.


  1. Building an image classification system to verify the login screen and search options and to identify elements like username and password (at Test.ai)
  2. Building visual tools to infuse data into pipelines, allowing testers to repeat the same sequences of drag and drop using reinforcement learning (at Test.ai)
  3. Building search query tools that optimize model results by mining millions of data points and provide an intuitive reporting infrastructure (at Test.ai)
  4. Combining test-specific tools to handle labelling, training and reporting via abstraction has been the initiative so far


Possible platforms and tools: -

Here is a list of six AI-powered test automation tools available in the market; there may be a few others as well.

  1. TestCraft
     - Supports regression and continuous testing on top of Selenium
     - Eliminates maintenance cost through automatic adaptation to changes
     - Drag-and-drop interface to create sequences of steps
  2. Test.AI
     - Builds a tool with the brains of Selenium and Appium
     - No code required: it identifies elements and executes test cases
  3. Applitools
     - Supports complex visual validation testing
     - Able to group similar changes using machine learning algorithms
     - Compares algorithms and understands the differences
  4. Functionize
     - Cloud-based platform for functional, performance and load testing
     - Supports automatic functional test creation using NLP
     - One-stop solution for all testing needs, from test creation to test maintenance


  5. Sauce Labs
     - One of the players supporting cloud-based test automation
     - Uses machine learning to surface meaningful insights
     - Adds intelligence to build intelligent test automation
  6. Testim
     - Leverages machine learning to speed up test authoring through effective maintenance
     - Easy test automation that lets anyone create automated tests



Fig 1.3 (Courtesy: google.com)


Use cases: -

Organizations cannot embrace an ML-based test strategy overnight. Development and testing teams have to spell out the metrics and KPIs that are right for them in order to create a success story. The following are the important use cases as far as ML-QA automation is concerned:

  1. Eliminating brittle, purpose-specific code-based test scripts
  2. Providing alternate test automation
  3. Capturing full test coverage
  4. Reducing automation maintenance cost and time


Future trends in ML-QA: -

  1. The shift in trend seen over recent years will continue, which means using AI to enhance techniques and frameworks that target specific business problems.
  2. As time passes and technology progresses, machines and models may take on higher-order tasks; AI will move to deeper context rather than just solving a problem against its metrics.
  3. A few examples in this direction are testing AI/ML models/applications on the web, visual testing of interfaces and UI testing with automatic corrections.
  4. Essentially, AI/ML will take over entire automation tasks that currently require human intervention and finish them in next to no time.
  5. However, there are tasks that still require human input, for example complex algorithms and their higher-order context.
  6. Specifically, tasks like usability testing and security testing need further thought to bring more clarity.


References: -
