Security Product Research in the Lab: A fair chance to prove your claim
This blog is a continuation of our product research and analysis series. Our last blog post discussed the need for, and nuances of, product selection based on market analysis and scoring. Here, we will further scrutinize the assessment process for in-lab testing from a technical perspective. We believe in-lab testing improves cyber maturity by mitigating unforeseen security risks with validated solutions. CPX uses in-lab testing for onboarded cyber defense technologies so that our customers can explore the future knowing that the present is secured.
In-lab assessment is a practical approach to designing, implementing, and testing solutions prior to their procurement. It’s critical to validate advertised capabilities and functionality before investing resources and effort. That is why we implement an assurance process in our innovation lab that emulates a production environment, mimicking all the major components found in an enterprise.
At CPX, we base our selection on testing four aspects of each solution.
Starting in the Innovation Lab
First, a flexible testing environment is needed to host the solution under test. This ecosystem should be similar to an enterprise’s actual architecture. Real-world scenarios ensure that solutions are tested more effectively and accurately. This is the main purpose behind the creation of the Innovation Lab at CPX.
In the lab, you can access infrastructure and applications for development, testing, and demonstration purposes. To maximize lab utilization, the environment is divided into different network segments comprising multiple domains, Exchange servers, an Active Directory architecture, DNS servers, log servers, SQL servers, and so on.
Our lab also has management and user network segments where different solutions can be deployed and tested as per requirements. It runs in a virtual environment that is completely isolated from corporate networks and secured at the perimeter and across core environmental services. Moreover, there is a kill-switch configuration per virtual machine. The lab allows you to run tests using real-world threats against real targets, including variations that simulate targeted attacks, behavior emulation, malware detonation, exploits, and enumeration-based attacks. To support this, all necessary attack tools are deployed in a separate network segment and a dedicated cloud.
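As an illustration of the per-VM kill switch, here is a minimal sketch assuming the lab runs on libvirt/KVM (so the standard virsh CLI is available) and that a hypothetical "lab-" name prefix marks machines inside the isolated test segment; both are assumptions, not a description of our actual tooling:

```python
#!/usr/bin/env python3
"""Minimal per-VM kill-switch sketch.

Assumes a libvirt/KVM lab (standard `virsh` CLI available) and a
hypothetical "lab-" name prefix for machines in the test segment.
"""
import subprocess

LAB_VM_PREFIX = "lab-"  # hypothetical naming convention for lab VMs

def running_lab_vms() -> list[str]:
    """Return names of running VMs that belong to the lab segment."""
    out = subprocess.run(
        ["virsh", "list", "--name"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [name for name in out.splitlines() if name.startswith(LAB_VM_PREFIX)]

def kill(vm: str) -> None:
    """Hard power-off one VM; `virsh destroy` stops it immediately."""
    subprocess.run(["virsh", "destroy", vm], check=True)

if __name__ == "__main__":
    for vm in running_lab_vms():
        print(f"killing {vm}")
        kill(vm)
```

Using a forced power-off rather than a graceful shutdown matters here: when an experiment must be stopped, a compromised guest cannot be trusted to shut itself down cleanly.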
Evaluation Workflow Framework
To optimize a lab’s capacity and avoid overwhelming it with changes during test preparation and execution, one needs to establish a straightforward operational workflow. This framework remains the cornerstone of successful execution and timely delivery in dynamic environments, where evaluation results are needed immediately.
Invitation for Assessment
Evaluation licenses usually have time constraints. A testing procedure should be prepared ahead of time so that you do not run out of license time in the middle of testing. Having draft testing procedures and use cases ready allows test beds to be prepared and gives contenders a fair chance to showcase their solutions.
Upon agreement, walkthroughs or deployment calls with contenders can be arranged to showcase capabilities and collect deployment requirements. Training sessions should also be arranged to understand the functionality and configuration of the products in various scenarios (especially if your team has no exposure to the inner workings of the evaluated solution type).
Architecture Design
Depending on the nature of the product, you should plan and design the deployment architecture to implement the solution, mirroring the real-world setup. A structured record of the deployment architecture will support test methodology, design, and development. This way, test cases can be built to run against the Target of Evaluation (TOE).
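One way to keep such a record structured and machine-readable is a small schema like the sketch below; the field names and example values are illustrative assumptions, not a prescribed format:

```python
from dataclasses import dataclass, field

@dataclass
class DeploymentRecord:
    """Structured record of a TOE's deployment architecture.

    Field names are illustrative; adapt them to your lab's templates.
    """
    toe_name: str
    network_segment: str                                  # e.g. "management" or "user"
    depends_on: list[str] = field(default_factory=list)   # AD, DNS, log server, ...
    listening_ports: list[int] = field(default_factory=list)
    notes: str = ""

# Hypothetical contender deployed into the user segment:
record = DeploymentRecord(
    toe_name="example-edr-agent",
    network_segment="user",
    depends_on=["active-directory", "dns", "log-server"],
    listening_ports=[443],
    notes="Agents in the user segment; management console in the management segment.",
)
print(record)
```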
Planning and Developing Tests
The planning phase is used to gather the information needed for test execution, such as the assets to be assessed, attack vectors against those assets, and any additional security controls required to mitigate threats, and to develop the test approach. Before a test, you need to define the team, test schedule, prerequisites, scope, use cases, test workflow, and environment setup.
Testing methodologies (i.e. the techniques used to test the functional and non-functional requirements of a TOE) must keep the scope in mind while measuring results. In a good workflow, each method has its own defined expectations and outcomes. This ensures that testing is comprehensive and provides success metrics for each procedure.
The purpose of the ‘test design’ phase is to develop test cases that examine all the functionalities expected from the TOE and yield meaningful scores in the ‘test execution’ phase. Hence, it is imperative that test cases are well-designed, easy to understand, and most importantly, provide 100% functional coverage. Acceptance test records can serve as a good framework to achieve this.
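A simple way to verify functional coverage before execution begins is to check every required function against the test case set. The sketch below uses hypothetical requirement names and case IDs:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """One test case record; fields are illustrative."""
    case_id: str
    functionality: str      # the TOE function this case exercises
    expected_outcome: str

# Hypothetical functional requirements for the TOE under evaluation.
REQUIRED_FUNCTIONS = {"malware-detection", "exploit-blocking", "reporting"}

cases = [
    TestCase("TC-01", "malware-detection", "sample quarantined and alert raised"),
    TestCase("TC-02", "exploit-blocking", "exploit attempt blocked and logged"),
]

covered = {c.functionality for c in cases}
missing = REQUIRED_FUNCTIONS - covered
if missing:
    print(f"coverage gap, no test case for: {sorted(missing)}")  # -> ['reporting']
else:
    print("100% functional coverage")
```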
TOE Implementation and Configuration
Specific requirements for hardware configurations (CPU, RAM, storage, graphics, etc.) are given to each solution as per its standard operating procedures. Specificity in configuration requirements can save you time on re-configuration. Resource allocation per solution should be factored into the comparative results to ensure fair scoring, especially when it comes to TOE performance metrics.
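For performance metrics, one crude way to account for unequal allocations is to normalize raw throughput by the resources each contender received. The weighting below is an assumption to tune, not an established formula:

```python
def normalized_throughput(events_per_sec: float, vcpus: int, ram_gb: float) -> float:
    """Throughput per unit of allocated resource (a crude fairness metric).

    The 4:1 vCPU-to-RAM weighting is an assumption; tune it to whichever
    resource dominates your TOE's workload.
    """
    return events_per_sec / (vcpus + ram_gb / 4)

# Two hypothetical contenders tested with different allocations:
print(f"solution A: {normalized_throughput(12000, vcpus=8, ram_gb=32):.0f}")  # 750
print(f"solution B: {normalized_throughput(9000, vcpus=4, ram_gb=16):.0f}")   # 1125
```

Note how solution B's lower raw throughput becomes the better result once its smaller allocation is taken into account.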
Fully functional policies are used so that contenders are not tested and compared on basic or default policies alone. This also helps avoid failed assessments and drawn-out procedures.
Test Execution
The primary goal of the execution phase is to validate functional and non-functional requirements. This phase addresses activities associated with the intended test method and technique. We recommend grouping tests into well-defined categories for a detailed and well-prepared execution.
Post-Execution Analysis, Regression, and Reporting
The post-execution phase focuses on analyzing identified vulnerabilities and gaps, developing recommendations, and preparing the final report.
After all tests are performed on the TOE, you will have the quantitative evaluation results along with the proper proofs of concept (PoCs) and evidence. Based on the requirements and expectations, you can develop a scoring system to record and score the tests for quantitative analysis.
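A scoring system can be as simple as a weighted sum over test categories. The categories, weights, and results below are hypothetical placeholders:

```python
# Hypothetical category weights (sum to 1.0) and per-category results (0-100)
# for a single contender; both are placeholders to adapt.
WEIGHTS = {"functional": 0.5, "performance": 0.3, "usability": 0.2}
results = {"functional": 92, "performance": 78, "usability": 85}

score = sum(weight * results[category] for category, weight in WEIGHTS.items())
print(f"weighted score: {score:.1f}")  # -> 86.4
```

Keeping the weights explicit, and agreed before execution, is what makes the comparison defensible when contenders question their final scores.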
Flaws will be identified in the TOE in almost any evaluation. You can approach vendors with PoCs showcasing the observed malfunctions that need to be resolved. It is best to present evidence of all identified gaps and vulnerabilities to attempt resolution.
Regression tests are performed after proper settings/patches are applied to the solutions. Scores are then applied according to the final outcomes of the tests. This gives all contenders a fair chance before final scores are assigned.
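Mechanically, a regression pass can be modeled as overlaying re-run outcomes onto the initial results, so only re-tested cases change. The case IDs below are hypothetical:

```python
def apply_regression(initial: dict[str, bool], rerun: dict[str, bool]) -> dict[str, bool]:
    """Overlay regression outcomes onto the initial results.

    Only re-tested cases change; everything else keeps its first outcome.
    """
    final = dict(initial)
    final.update(rerun)
    return final

# Hypothetical outcomes: the vendor patch fixed TC-02, TC-03 remains open.
initial = {"TC-01": True, "TC-02": False, "TC-03": False}
rerun = {"TC-02": True}
print(apply_regression(initial, rerun))  # TC-02 now passes; TC-03 still fails
```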
The report is prepared by comparing the solutions’ outcome scores. Do not forget to mention any additional features of the TOE that can be highlighted in the report to give a little more insight.
Conclusion
Remember, the TOE with the highest overall score is not automatically the right choice; your organization’s goals and needs may align better with a solution that performed well in the specific test categories that matter most to you. At this stage, all risks are documented and available for review. The TOE that meets your required needs should ultimately be selected.
Authors: Aditya Upadhya, Konstantin Rychkov