Q&A From my Webinar: V&V and the Wedge Model(TM)
Robert Halligan
Systems engineering thought leader, consultant, trainer and coach, impacting people's lives on six continents.
Q1. Are there legitimate opportunities to use Model-Based Systems Engineering (MBSE) to reduce the Verification & Validation burden on the implementation? Where are the risks in doing so?
A1. The greatest value of MBSE in design lies in the modeling playing an integral role in the design process itself, helping mere mortals design with early prevention and detection of errors, so that fewer errors reach the implementation. For the same degree of product assurance, the optimum amount of verification of the implementation is therefore reduced when MBSE has been used as a design tool. Beyond reducing the likelihood of design error, model-based design also lowers the cost of detection: the average cost of detecting an error in logical design is lower than the average cost of detecting the same error by software testing. The ROI, already good, increases dramatically once the average cost of errors that go undetected in software testing is added.
All of the above is predicated on the MBSE being based on sound design processes, using a logically sound MBSE language that provides for rigorous mapping of logic onto implementation.
So a way to maximise the MBSE ROI is to emphasize verification of design models, enabling a reduction in the amount of software testing that would otherwise be needed to achieve the same degree of product assurance.
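The cost argument above can be sketched as simple expected-value arithmetic. The figures below are purely illustrative assumptions (not data from the webinar): they assume design-stage fixes are an order of magnitude cheaper than test-stage fixes, and escaped defects an order of magnitude costlier again.

```python
# Illustrative only: all probabilities and cost ratios below are
# hypothetical assumptions, not figures from the webinar.

def expected_error_cost(p_detect_design, cost_design_fix,
                        p_detect_test, cost_test_fix, cost_escape):
    """Expected cost per error, given detection odds at each stage."""
    # Error caught during design-model verification
    cost = p_detect_design * cost_design_fix
    # Error escapes design, caught in software testing
    p_reach_test = 1 - p_detect_design
    cost += p_reach_test * p_detect_test * cost_test_fix
    # Error escapes both stages: cost of a fielded defect
    cost += p_reach_test * (1 - p_detect_test) * cost_escape
    return cost

# Emphasizing verification of design models (hypothetical numbers):
with_model_verification = expected_error_cost(0.8, 1.0, 0.9, 10.0, 100.0)
# Deferring all detection to software testing:
test_only = expected_error_cost(0.0, 1.0, 0.9, 10.0, 100.0)
print(with_model_verification, test_only)
```

Under these assumed numbers the expected cost per error falls from 19.0 to 4.6, which is the shape of the argument for shifting detection effort toward design models.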
The other aspect of MBSE is constructed simulations, where the simulation and the design are in different languages. These need a LOT of V&V, because the opportunity for undetected errors in the simulation is a lot higher than with formal logical design mapped to implementation. A case to illustrate the point is a theater missile defense program on which much of the design work was based on the results of constructed simulation using a particular simulator that had never been validated, and turned out to be riddled with bugs. Failures in test came close to killing the project, and the system failed subsequently in operational use to provide the protection sought.
Sources of risk:
- using MBSE without a sound underlying design process
- losing sight of the main purpose of MBSE in design, which is to help us get the implementation right
- using inadequately verified and validated constructed simulations.
The “single source of truth” aspect of MBSE is also very valuable, as long as it is not in reality a single source of untruth! Also valuable is MBSE in the problem domain, provided that a sound requirements analysis process that is inherently model-based is used. Improved requirements validation aligns with reduced need for validation of the implementation.
Q2. Can you comment on the use of, or need for, a Test and Evaluation Master Plan, and the difference between "Development Test and Evaluation" and "Operational Test and Evaluation"?
A2. We will benefit from planning any work that we intend to do, including test and evaluation, so in that sense, test and evaluation planning is always beneficial. Such planning would deal with: who is to perform the T&E, performing what tasks, in what timescale, at what cost, and using what resources. The plan would typically apply to a class of T&E, such as the T&E involved in development. The plan may reference specific lower level plans for specific test activities or groups of test activities, for example, reliability testing for a new model cellphone. This would be a small project within a larger test program. The planning could be as little as a list of test procedures to be executed. Like any plan, the T&E planning will be used to guide and control execution.
Having said that, there is a distinction to be made between planning and a plan. So test and evaluation planning could be incorporated within overall development planning or overall project planning. The larger the development effort, the more likely a separate plan for conducting T&E is to be beneficial.
Regarding Development Test and Evaluation, this is concerned mainly with verifying subsystems and the system against requirements, and to a lesser degree, with establishing to what degree goals (if any) have been achieved.
Operational Test and Evaluation is concerned mainly with assessing suitability for intended use, and thus is primarily a validation activity. Although OT&E is often focussed on evaluation of the end product, usually by trialling the product, OT&E may also be performed with respect to any subsystem.
Q3. Please comment on the issues for V&V when evolutions take longer than the intervals between whole system deliveries, so the evolutions overlap.
A3. The whole system delivery system will still be subject to requirements and to need, and therefore to system verification and validation.
I am not sure that this response got to the essence of the scenario that the questioner had in mind. I have invited elaboration of the question.
Q4. Given schedule or financial constraints, such that you can't afford to carry out ALL of the V&V activities, how do you figure out what is necessary/most beneficial?
A4. There are metrics available on ROI for various V&V activities that have their origins in past project data. For example, the ROI for requirements validation can be as high as 10:1 or even 100:1, depending on the circumstances. Design reviews, by contrast, have typically delivered ROIs in the range 5:1 to 7:1. The engineering manager, in dialogue with the technical team, can base decisions on available data, complemented by reasoning. Parametric costing models such as COSYSMO are also useful.
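One simple way to act on such data is to rank candidate V&V activities by ROI and fund them greedily until the budget runs out. The sketch below is a hypothetical illustration; the activity names, costs, and ROI figures are made up for the example (only the 10:1 and 5:1-to-7:1 ranges above come from the answer).

```python
# Hypothetical sketch: selecting V&V activities under a fixed budget
# by ranking on ROI. All figures below are illustrative assumptions.

def prioritise(activities, budget):
    """Greedily pick the highest-ROI activities that fit the budget.

    activities: list of (name, cost, roi) tuples.
    Returns the names chosen, in selection order.
    """
    chosen = []
    remaining = budget
    for name, cost, roi in sorted(activities, key=lambda a: a[2], reverse=True):
        if cost <= remaining:
            chosen.append(name)
            remaining -= cost
    return chosen

activities = [
    # (activity, cost, ROI) -- illustrative figures only
    ("requirements validation",     20, 10.0),
    ("design review",               30,  6.0),
    ("constructed-simulation V&V",  50,  4.0),
    ("full regression test",        80,  2.5),
]
print(prioritise(activities, budget=100))
# With a budget of 100, the first three fit; regression testing does not.
```

A greedy ranking is only a first cut: real decisions would also weigh risk, dependencies between activities, and the reasoning of the technical team, as the answer notes.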
Q5. What is the best way to track results from The Wedge Model? Is it in one document, or through multiple documents? Where are validation results captured?
A5. Verification information (system requirements, verification requirements if used, verification procedures, and verification results) is usually best captured and linked in some form of database, often provided as part of the functionality of a requirements management software tool. These tools will usually accommodate sets of data relating to individual objects having problem and solution relationships, hence hierarchy and relationship to the Wedge Model. For validation, the most appropriate form of record is much more variable, depending on the nature of what is being validated. For requirements, Requirements Issue Records developed within a requirements analysis will carry a lot of requirements validation information. For a system or software application, a report documenting the results of a system or software validation activity is more likely.
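The linked records described above can be pictured as a small data model. The sketch below uses plain dataclasses to stand in for a requirements management tool's database; the identifiers (REQ-001, TP-017) and record fields are invented for illustration.

```python
# Minimal sketch of linked verification records, assuming a structure
# like the one a requirements management tool might provide.
# All identifiers and field choices are hypothetical.

from dataclasses import dataclass, field

@dataclass
class VerificationRecord:
    procedure: str   # the verification procedure executed
    result: str      # e.g. "pass" or "fail"

@dataclass
class Requirement:
    ident: str
    text: str
    # Links from the requirement to its verification evidence
    verifications: list = field(default_factory=list)

req = Requirement("REQ-001", "The system shall operate from -20 to +50 degrees C.")
req.verifications.append(
    VerificationRecord("TP-017 thermal chamber test", "pass"))

# Trace from requirement to evidence:
for v in req.verifications:
    print(req.ident, "->", v.procedure, ":", v.result)
```

The value of holding these as linked objects rather than flat documents is exactly the traceability the answer describes: from any requirement one can navigate to its procedures and results, and vice versa.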