Testing Methodologies
Concept: Nimish Sonar, Illustration: Microsoft Copilot

Before an information systems project is deployed, it should be tested thoroughly, using testing methodologies chosen appropriately during its implementation.

The seven phases of the SDLC (Software Development Life Cycle) are Project Planning, Requirements Gathering & Analysis, Design, Coding (Implementation), Testing, Deployment and Maintenance.

Testing is essential before deployment for several reasons: identifying bugs and errors, ensuring functionality, verifying performance, assuring security, confirming compatibility and usability, meeting compliance requirements, preventing costly fixes after release and maintaining customer satisfaction.

Classifications or Types of Testing:

Unit Testing:

Unit testing is performed on an individual piece of software and its modules or components. The primary goal is to validate that each unit of the software performs as expected. Unit tests are often automated, allowing them to be run frequently and consistently; automation tools such as JUnit (for Java), NUnit (for .NET) and pytest (for Python) are commonly used. Unit testing simplifies debugging and improves code quality.
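
As a minimal illustration (the module, file names and discount function here are hypothetical), a pytest-style unit test could look like this:

# cart.py: hypothetical module under test
def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# test_cart.py: pytest discovers and runs functions named test_*
import pytest
from cart import apply_discount

def test_discount_applied():
    assert apply_discount(200.0, 10) == 180.0

def test_invalid_percent_rejected():
    with pytest.raises(ValueError):
        apply_discount(200.0, 150)

Running "pytest" from the project directory executes both tests; because they are automated, they can be re-run on every change.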

Interface or Integration Testing:

This is a hardware or software test that checks the connection between two or more components. Integration testing typically follows unit testing and takes the unit-tested modules as its input. It verifies and validates how the application functions with other systems when a set of data is transferred from one system to another. The primary goal is to verify that the combined units or components work together correctly. Integration tests can be automated using tools such as JUnit and TestNG.
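
A brief sketch, assuming two hypothetical unit-tested modules (a pricing module and an order module) whose interaction is being verified:

# pricing.py: unit-tested module (hypothetical)
def price_with_tax(net, tax_rate=0.18):
    return round(net * (1 + tax_rate), 2)

# orders.py: unit-tested module (hypothetical) that delegates pricing
def create_order(items, pricing_fn):
    total = sum(pricing_fn(net) for net in items)
    return {"items": list(items), "total": total}

# test_integration.py: verifies that the two modules work together correctly
from pricing import price_with_tax
from orders import create_order

def test_order_total_includes_tax():
    order = create_order([100.0, 50.0], price_with_tax)
    assert order["total"] == 118.0 + 59.0

A unit test would exercise price_with_tax or create_order in isolation; the integration test checks the data handed from one module to the other.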

System Testing:

System testing is a comprehensive testing phase in software development where the entire integrated system is tested as a whole to verify that it meets the specified requirements. It is performed in a non-production (test or development) environment. System testing is crucial for delivering a high-quality product that meets user expectations and performs reliably in real-world scenarios. Many specific analyses are carried out during system testing, such as recovery testing, security testing, load testing, volume testing, stress testing, performance testing, functional testing and non-functional testing. We will learn about them one by one.

Recovery Testing: It tests the system's ability to recover after a software or hardware failure.

Security Testing: It makes sure that the new system includes provisions for appropriate access controls and does not have any vulnerabilities.

Load Testing: It evaluates the system's performance under expected load conditions (e.g., during peak hours).

Stress Testing: It evaluates the system's performance under extreme conditions, such as the maximum number of concurrent users or services connected to it. A minimal load/stress sketch appears after this list of system-test types.

Volume Testing: A type of non-functional testing that evaluates how a software application performs when subjected to a large volume of data or records. Tools like Apache JMeter, SQL Server’s Database Engine Tuning Advisor, and Oracle’s SQL Performance Analyzer can help generate large volumes of data and test database performance.

Performance Testing: It compares the system's performance to that of other equivalent systems using well-defined benchmarks.

Functional Testing: Validates that the system performs its intended functions correctly.

Non-Functional Testing: Assesses aspects such as performance, usability, reliability, and security.
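
As promised above, here is a minimal, hedged sketch of the load/stress idea. The target operation, user counts and latency budget are illustrative assumptions; in practice, dedicated tools such as Apache JMeter are normally used.

# load_sketch.py: fire N concurrent requests at a target operation and
# report whether they complete within a latency budget (illustrative only).
import time
from concurrent.futures import ThreadPoolExecutor

def target_operation(_):
    # Stand-in for a real request to the system under test (assumption).
    time.sleep(0.01)
    return True

def run_load(concurrent_users=50, budget_seconds=2.0):
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        results = list(pool.map(target_operation, range(concurrent_users)))
    elapsed = time.perf_counter() - start
    print(f"{sum(results)}/{concurrent_users} requests completed in {elapsed:.2f}s")
    return elapsed <= budget_seconds

if __name__ == "__main__":
    run_load(concurrent_users=50)    # expected load (load test)
    run_load(concurrent_users=500)   # far beyond expected load (stress test)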

Final Acceptance Testing:

When the system staff are satisfied with the results of system testing, final acceptance testing is performed.

The types of acceptance testing are described below; each has a different objective:

QAT (Quality Assurance Testing): It focuses on the logical design, the technical aspects of the application, and the documented technical specifications and deliverables. It does not focus on functionality testing.

UAT (User Acceptance Testing): It supports the process of ensuring that the system is production-ready and satisfies all documented requirements. It is performed against the acceptance criteria. Ideally, UAT is performed in a secure testing or staging environment in which both source code and executable code are protected.

ITF (Integrated Test Facility): Here, test data are processed in production-like systems, which confirms the behaviour of the new application under real-life conditions. In some organizations a subset of production data is used, scrambled so that the confidential nature of the data is not visible to the tester.

Certification or Accreditation Testing:

This is done after the system is implemented and in operation, to produce the evidence needed for certification. It includes evaluating program documentation and testing effectiveness, and it results in a final decision for deployment. It involves the security staff and the business owner of the application. When the tests are completed, an IS auditor should issue an opinion to management on whether the system meets business requirements, has implemented appropriate controls and is ready to be migrated to production.

Security Clearance Testing:

Security clearance testing, often referred to as security testing or penetration testing, is the process of evaluating the security of a software application or system by identifying vulnerabilities, threats, and risks. This type of testing is crucial for ensuring that the system is protected against unauthorized access, data breaches, and other security threats.

Alpha Testing:

Alpha testing aims to identify bugs and issues in the software before it goes to beta testing, ensuring that the software is stable and works as expected. It is conducted in a controlled environment, usually a lab setting within the development organization. It focuses on both functional and non-functional aspects of the software, including usability testing, reliability testing and performance testing.

Pilot Testing:

Pilot testing, also known as a pilot run, is a phase in the software testing life cycle where a new system or process is implemented on a small scale before a full-scale rollout. It focuses on specific, pre-determined aspects of the system and is, in effect, a miniature version of the large-scale rollout. It is generally conducted before beta testing.

Beta Testing:

Beta testing aims to gather feedback from real users in a real-world environment. It validates the software's functionality, usability and compatibility with different systems. It is conducted in a real-world environment, outside the development organization, and is performed by actual users or customers who volunteer or are invited to test the software. This group is usually diverse, to cover various usage scenarios. Its main focus is on usability, compatibility and performance.

White Box Testing:

White box testing, also called glass-box, open-box, clear-box or transparent-box testing, determines procedural accuracy by exercising the program's specific logic paths. For example, a developer tests a checkout module by examining the source code, ensuring all loops and conditions work correctly and aiming for 100% code coverage.

Knowledge Required: Internal knowledge of code and logic

Focus: Internal structures and logic

Techniques: Unit testing, code coverage, path testing

Advantages: Thorough, detects hidden errors, early detection

Disadvantages: Requires code access, time-consuming
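
A hedged sketch of path testing: the tests below are written against a hypothetical shipping-fee function so that every logic path through its two decision points is exercised at least once. Coverage tools such as coverage.py (often run via pytest-cov) can then report which statements and branches the tests reached.

# shipping.py: hypothetical function with two decision points (four paths)
def shipping_fee(weight_kg, express):
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    fee = 5.0 if weight_kg <= 2 else 9.0   # path depends on weight
    if express:                            # path depends on delivery speed
        fee += 4.0
    return fee

# test_shipping_paths.py: one test per logic path, plus the error path
import pytest
from shipping import shipping_fee

def test_light_standard():
    assert shipping_fee(1, False) == 5.0

def test_light_express():
    assert shipping_fee(1, True) == 9.0

def test_heavy_standard():
    assert shipping_fee(5, False) == 9.0

def test_heavy_express():
    assert shipping_fee(5, True) == 13.0

def test_invalid_weight():
    with pytest.raises(ValueError):
        shipping_fee(0, False)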


Black Box Testing:

The tester focuses only on the inputs and outputs of the software, without knowledge of the underlying code. For example, a tester checks the checkout process by simulating user actions, such as adding items to the cart, applying discounts and completing the purchase, to ensure it functions as expected.

Knowledge Required: No knowledge of internal code required

Focus: External behavior and functionality

Techniques: Functional testing, equivalence partitioning, boundary value analysis

Advantages: User-focused, easy to perform, broad coverage

Disadvantages: Limited internal insight, potential gaps
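
A brief sketch of boundary value analysis, assuming a hypothetical rule that orders of 1000 or more receive a 10% discount; the order_total function is treated purely as input and output, with no knowledge of its code.

# test_discount_blackbox.py: boundary values around the assumed 1000 threshold
import pytest
from cart import order_total   # hypothetical function under test

@pytest.mark.parametrize("amount, expected", [
    (999.0, 999.0),    # just below the boundary: no discount
    (1000.0, 900.0),   # on the boundary: 10% discount applies
    (1001.0, 900.9),   # just above the boundary
])
def test_discount_boundaries(amount, expected):
    assert order_total(amount) == pytest.approx(expected)

Equivalence partitioning works the same way: one representative value is chosen from each class of inputs (for example, a typical discounted order and a typical non-discounted order) instead of testing every possible amount.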

Function/Validation Testing:

This is similar to system testing, but it is often used to test the functionality of the system against the detailed requirements, ensuring that the software that has been built is traceable to customer requirements (i.e., are we building the right product?).

Regression Testing:

It ensures that recent code changes or enhancements to an application have not adversely affected existing features. It involves re-running tests on the modified parts of the software to verify that existing functionalities work correctly after the code changes. Regression testing aims to catch unintended side effects or regressions that may have been introduced by new code, configuration changes, or system upgrades. It helps to confirm that reported issues or bugs have been resolved without introducing new problems.
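
In practice, a regression suite simply pins previously correct behaviour so that every change re-proves it. A minimal sketch, reusing the hypothetical apply_discount function from the unit testing example (the issue number is likewise hypothetical):

# test_regression.py: re-run after every change to catch unintended side effects
from cart import apply_discount

def test_issue_1234_zero_percent_discount_keeps_price():
    # Pins the fix for a previously reported (hypothetical) bug:
    # a 0% discount must leave the price unchanged.
    assert apply_discount(49.99, 0) == 49.99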

Parallel Testing:

The process of feeding test data into two systems—the modified system and an alternative system (possibly the original system)— and comparing the results. The purpose of parallel testing is to determine whether the new application performs in the same way as the original system and meets end-user requirements. Sometimes, multiple versions or builds of an application are tested simultaneously to compare their outputs and performance. This method involves running tests on different configurations, environments, or platforms concurrently to identify differences and ensure consistency across all versions. It validates the reliability and consistency of the software under various conditions simultaneously.
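
A sketch of the core idea, feeding the same test data to a legacy and a modified implementation (both hypothetical) and comparing the results:

# parallel_test.py: run identical inputs through old and new systems, then diff the outputs
def legacy_interest(balance):
    # Original system (hypothetical): truncates to two decimal places.
    return int(balance * 0.05 * 100) / 100

def new_interest(balance):
    # Modified system (hypothetical): rounds to two decimal places.
    return round(balance * 0.05, 2)

def parallel_run(test_data):
    mismatches = []
    for balance in test_data:
        old, new = legacy_interest(balance), new_interest(balance)
        if old != new:
            mismatches.append((balance, old, new))
    return mismatches

if __name__ == "__main__":
    diffs = parallel_run([0.0, 100.0, 2500.50, 99999.99])
    print("differences:", diffs if diffs else "none - outputs match")

Any amount on which the two implementations disagree is flagged, which is exactly the discrepancy parallel testing is meant to surface before the new system replaces the old one.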

Sociability Testing:

Sociability testing evaluates how well a software application interacts and integrates with other applications, systems or platforms in the shared environment in which it will run. This type of testing ensures that the software can function effectively within a networked or interconnected system, supporting collaboration, communication and data sharing without adversely affecting the systems it coexists with. It checks how well the software integrates with other systems, APIs and services, and it verifies that the data shared between systems remains accurate and secure.

Testing Approaches:

The following test approaches play crucial roles in ensuring software quality, functionality and reliability throughout the software development life cycle. Each approach offers distinct benefits and considerations, tailored to the specific testing objectives and requirements of the software being developed.

Bottom-Up Testing Approach:

Bottom-up testing is an integration testing approach where lower-level modules or units are tested first, followed by higher-level modules or subsystems. This approach focuses on integrating and testing the smallest components of the system first, gradually moving towards testing larger components or the entire system.

Advantages:

Early Detection: Identifies defects in individual components early in the development process.

Incremental Integration: Allows for incremental integration, facilitating easier troubleshooting and debugging.

Parallel Development: Supports parallel development of modules by enabling testing as modules are completed.

Disadvantages:

Incomplete System View: May delay detection of integration issues or system-level defects until higher-level testing phases.

Requires Stubs: Requires the creation of stubs or mock components for testing when higher-level modules are not yet available.

Top-Down Testing Approach:

Top-down testing is an integration testing approach where testing starts from the highest-level modules or subsystems and progresses to lower-level modules or units. This approach verifies the interaction between higher-level components and gradually integrates and tests lower-level components.

Advantages:

Early System View: Provides early visibility into system behavior and interactions.

Progressive Refinement: Facilitates early detection and resolution of high-level integration issues.

Customer Perspective: Aligns testing with user or customer requirements by validating system functionality first.

Disadvantages:

Dependency on Stubs: Relies on stubs or mock components for lower-level modules that are not yet developed.

Complex Coordination: Requires coordination and synchronization between development teams working on different levels of the system.

Conduct and Report Test Approach:

The conduct and report test approach focuses on executing predefined test cases, documenting test results, and generating test reports to evaluate software quality and functionality. It encompasses various testing types and methodologies to validate system requirements and ensure compliance with specifications.

Advantages:

Structured Approach: Provides a structured framework for planning, executing, and documenting testing activities.

Transparency: Facilitates transparency by documenting test processes, results, and outcomes.

Decision Support: Provides valuable insights for decision-making regarding software release readiness and quality assurance.

Disadvantages:

Resource Intensive: Requires significant time, effort, and resources for planning, executing, and reporting tests.

Manual Overhead: Relies on manual effort for test execution, result analysis, and report generation, which can be labor-intensive.

Address Outstanding Issues Approach:

The address outstanding issues approach focuses on identifying, prioritizing, and resolving unresolved defects, issues, or concerns identified during testing. It involves systematically addressing outstanding issues to improve software quality, reliability, and performance before release.

Advantages:

Quality Improvement: Enhances software quality by addressing and resolving identified defects and issues.

Risk Mitigation: Mitigates risks associated with unresolved issues that may impact system performance or functionality.

Customer Satisfaction: Improves customer satisfaction by delivering a stable and reliable software product.

Disadvantages:

Time Constraints: Requires sufficient time and resources for identifying, prioritizing, and resolving outstanding issues effectively.

Coordination Challenges: Involves coordination and collaboration between testing, development, and stakeholders for issue resolution.

Data Integrity Testing:

Data integrity testing is a set of substantive tests that examines accuracy, completeness, consistency and authorization of data presently held in a system. It employs testing similar to that used for input control. Data integrity tests indicate failures in input or processing controls. Controls for ensuring the integrity of accumulated data in a file can be exercised by regularly checking data in the file. When this checking is done against authorized source documentation, it is common to check only a portion of the file at a time. Because the whole file is regularly checked in cycles, the control technique is often referred to as cyclical checking.
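
A minimal sketch of cyclical checking, assuming a simple keyed file and four checking cycles; in each cycle only one portion of the file is compared against the authorized source documentation, and over the full set of cycles every record is checked.

# cyclical_check.py: verify a different slice of the file on every cycle
def check_slice(file_records, source_records, cycle, slices=4):
    """Compare one 1/slices portion of the file against source documentation."""
    keys = sorted(file_records)
    portion = keys[cycle % slices::slices]   # every 4th key, offset by the cycle number
    return [k for k in portion if file_records[k] != source_records.get(k)]

file_records = {1: 100, 2: 205, 3: 330, 4: 400, 5: 550}    # data held in the system
source_records = {1: 100, 2: 200, 3: 330, 4: 400, 5: 550}  # authorized source documents

for cycle in range(4):   # across four cycles, the whole file is checked
    print(f"cycle {cycle}: discrepancies at keys {check_slice(file_records, source_records, cycle)}")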

Two common types of data integrity tests are relational and referential integrity tests:

Relational integrity tests:

Relational data integrity refers to the validity and consistency of data relationships within a relational database. It is enforced through rules and constraints defined at the database schema level.

Referential integrity tests:

Referential data integrity specifically focuses on maintaining the consistency of relationships between tables in a relational database. It ensures that references (foreign keys) between tables are valid and that operations involving these references do not result in orphaned or inconsistent data.
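
A brief sketch of a referential integrity check using SQLite (the table names are illustrative): the foreign key constraint rejects an order row that refers to a customer that does not exist.

# referential_integrity.py
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")   # SQLite enforces foreign keys only when enabled
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""CREATE TABLE orders (
                    id INTEGER PRIMARY KEY,
                    customer_id INTEGER NOT NULL REFERENCES customers(id))""")
conn.execute("INSERT INTO customers (id, name) VALUES (1, 'Asha')")
conn.execute("INSERT INTO orders (id, customer_id) VALUES (10, 1)")       # valid reference

try:
    conn.execute("INSERT INTO orders (id, customer_id) VALUES (11, 99)")  # no such customer
except sqlite3.IntegrityError as exc:
    print("referential integrity violation rejected:", exc)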

Four Data Integrity Requirements (ACID Test):

It is also called the ACID test or the ACID principle; a short sketch illustrating atomicity follows the four properties below.

Atomicity:

From a user perspective, a transaction is either completed in its entirety (i.e., all relevant database tables are updated) or not at all. If an error or interruption occurs, all changes made up to that point are backed out.

Consistency:

All integrity conditions in the database are maintained with each transaction, taking the database from one consistent state into another consistent state.

Isolation:

Each transaction is isolated from other transactions, so each transaction accesses only data that are part of a consistent database state.

Durability:

If a transaction has been reported back to a user as complete, the resulting changes to the database survive subsequent hardware or software failures.
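
As referenced above, here is a small sketch of atomicity using SQLite (the account data is illustrative): either both legs of a funds transfer are committed, or the whole transaction is rolled back and the database stays in its previous consistent state.

# atomicity_sketch.py
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER CHECK (balance >= 0))")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 100), (2, 50)])
conn.commit()

try:
    with conn:  # one transaction: commits on success, rolls back on any exception
        conn.execute("UPDATE accounts SET balance = balance + 200 WHERE id = 2")  # succeeds
        conn.execute("UPDATE accounts SET balance = balance - 200 WHERE id = 1")  # violates CHECK (would go negative)
except sqlite3.IntegrityError:
    pass  # the failed transfer is backed out in its entirety

print(conn.execute("SELECT id, balance FROM accounts ORDER BY id").fetchall())  # [(1, 100), (2, 50)]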

Application Systems Testing:

Application systems testing involves various methodologies and techniques to ensure that software applications or systems meet specified requirements, perform as expected, and are ready for deployment.

Below are types of these testing:

Snapshot Testing:

Snapshot testing involves capturing a snapshot or image of the system's state at a specific point in time and comparing it against a known baseline or expected state. This approach is commonly used in testing stateful applications where the behavior and output depend on the system's current state. It verifies program logic through logic paths.
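
A hedged sketch of the snapshot idea, serializing a captured system state and comparing it with a stored baseline; the file location and the shape of the state are assumptions.

# snapshot_test.py: compare the current state against a stored baseline snapshot
import json, pathlib

BASELINE = pathlib.Path("snapshots/invoice_state.json")   # hypothetical baseline file

def capture_invoice_state():
    # Stand-in for capturing the system's state at a defined point (assumption).
    return {"customer": "Asha", "lines": 2, "total": 177.0, "status": "OPEN"}

def test_invoice_state_matches_snapshot():
    current = capture_invoice_state()
    if not BASELINE.exists():   # first run: record the baseline
        BASELINE.parent.mkdir(parents=True, exist_ok=True)
        BASELINE.write_text(json.dumps(current, indent=2, sort_keys=True))
    baseline = json.loads(BASELINE.read_text())
    assert current == baseline  # any drift from the approved snapshot fails the test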

Mapping and Tracing:

Mapping involves creating mappings or relationships between different components, functionalities or data elements within an application or system, while tracing is a performance-testing technique used to identify and optimize critical paths in application workflows. Mapping also increases efficiency by identifying unused code.

Tagging:

Tagging involves associating metadata or tags with specific components, test cases or configurations within an application or system. Tags provide additional context and information that can be used for classification, grouping or filtering during testing and analysis. Tagging provides an exact picture of the sequence of events.

Base-Case System Evaluation:

Base-case system evaluation involves testing the fundamental (base) functionality of a system to establish a benchmark or baseline for performance. It uses standardized test data sets, and close cooperation is required among all parties involved.

Parallel Simulation:

Parallel simulation involves simulating real-world scenarios or environments in parallel to test application behaviour and performance under various conditions simultaneously. It is a form of CAAT (Computer-Assisted Audit Technique). It defines multiple scenarios or use cases representing different user interactions, data inputs or system configurations.

SCARF and SARF Testing:

SCARF stands for Security, Compatibility, Accessibility, Reliability, and Functionality testing. It is a comprehensive approach that ensures software applications meet specific criteria related to these key aspects. SARF stands for Security, Availability, Reliability, and Functionality testing. It focuses on evaluating and testing software applications with an emphasis on these key aspects. The terms SCARF and SARF testing are not widely recognized standardized terms in the field of software testing or quality assurance.

