Why should I care about Requirement Decomposition as part of system design?
Interaction between functional decomposition and the V-curve for an Earth Observation Mission Example

The overwhelming majority of systems engineers would agree on the importance of functional decomposition to ensure efficient implementation of a complex system, like the Earth observation (EO) mission shown in the diagram above. Most would also agree on making sure that the physical, functional and logical architectures are mutually compatible at every level of the system decomposition. But do we really need to care about comprehensive requirements for each layer in the design decomposition, and the associated traceability?

Requirements are generated to make sure that the system (and each lower-level segment, sub-system or unit) is designed to meet its intended purpose and that this can be verified during acceptance. Every element of the design needs to be justified against its requirements. The specification of a procured unit serves exactly the same purposes: it forms the contractual basis against which the unit is procured and accepted. Without requirement decomposition between system level and basic functional units, it may prove difficult to show whether the unit specifications meet the system need, with the risk that changes have to be introduced after contract placement. Traceability is also essential to plan the integration and verification activities as units are received. If the requirement set does not completely mirror the design decomposition, it will be difficult to prove that the verification activities are complete at each level, with the risk that problems are identified only after integration or in use.
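
The completeness rule described above lends itself to simple machine checking. As a minimal sketch (the requirement IDs, the two-field record and the whole structure are invented for illustration), the idea is that every requirement must either be decomposed into children at the layer below or be tied to a verification method; anything satisfying neither is a gap in the decomposition:

```python
# Minimal traceability completeness check. A requirement is "covered" if it
# is either decomposed into child requirements at the layer below, or is a
# leaf linked to a verification method. All IDs here are hypothetical.

requirements = {
    "SYS-010": {"children": ["PAY-020", "GND-030"], "verification": None},
    "PAY-020": {"children": [], "verification": "Test"},
    "GND-030": {"children": [], "verification": None},  # gap: neither
}

def find_gaps(reqs):
    """Return IDs of requirements that are neither decomposed nor verifiable."""
    return [rid for rid, r in reqs.items()
            if not r["children"] and r["verification"] is None]

print(find_gaps(requirements))  # -> ['GND-030']
```

In practice this check lives inside a requirements management tool rather than a script, but the completeness rule being enforced is exactly the one argued for above.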

This article is not intended as a tutorial on how to do requirement decomposition, but rather as a justification of why it is important, coupled with a few thoughts on how to avoid some of the problems that have cropped up on programmes in which I have been involved. Before getting into that, there are two definitions that are important and, in my experience, surprisingly poorly understood.

Child requirements are the direct consequence of a requirement at one level upon the layer below. So if the EO mission is to measure wind speed using a circularly polarised radar altimeter, there will be obvious child requirements on the payload and the data processor, but there will also be child requirements on the platform to accommodate the payload, and on the flight operations segment to make sure that all relevant ancillary data is collected, that instruments are accurately calibrated, and much else. Allocation of budgets between sub-systems is also usually treated as a set of child requirements, even if there is some flexibility in handling budgets between sub-systems.
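
Budget allocations handled as child requirements are amenable to the same kind of mechanical check. The sketch below (all figures are invented, not taken from any real mission) verifies that the sub-system mass allocations fit within the parent allocation after the system-level margin is held back:

```python
# Check that child (sub-system) mass allocations fit within the parent
# system budget once a system-level margin is reserved. Illustrative numbers.

system_mass_budget_kg = 1200.0
system_margin = 0.20  # 20% margin held at system level

subsystem_allocations_kg = {
    "payload": 350.0,
    "platform": 480.0,
    "propellant": 120.0,
}

allocated = sum(subsystem_allocations_kg.values())
available = system_mass_budget_kg * (1 - system_margin)

print(f"allocated {allocated:.0f} kg of {available:.0f} kg available")
assert allocated <= available, "child allocations exceed parent budget"
```

The flexibility mentioned above shows up here as the choice of where margin is held: at system level, as in this sketch, or distributed into each child allocation.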

Derived requirements reflect architectural choices made in designing a system, or assumptions made in analysing one layer of the system, which then become requirements upon the layer below. Any trade-off conducted during design will result in derived requirements. For example, choosing a less stable frequency source for a navigation payload can be mitigated by improving thermal control of the platform or by providing more regular corrections from the ground segment (but with implications for uplink station locations, the number of ground monitoring stations, and even how system time is steered towards UTC). All consequences of these choices need to find their way into the specifications of the impacted sub-systems.

Any analysis that contributes to design validation will rely upon assumptions that need to be treated as requirements upon the sub-systems contributing to that function. If a safety-critical sub-system has an operational constraint that it only be operated above a certain ambient temperature, then all stakeholders need to be aware of this if a catastrophic failure is to be avoided.

Counter-intuitively, complete decomposition of requirements is particularly important for “off the shelf” (OTS) equipment procurements. Choosing a sensor on the basis that it meets cost, schedule and primary performance requirements may still present problems if it is too massive, power-hungry or unreliable, or if extra measures are needed to address synchronisation, radiation intolerance or electromagnetic compatibility (EMC). There are well-documented examples where reuse of a sub-system in an environment for which it has not been qualified has led to catastrophic failure. Proper decomposition should ensure that all relevant requirements are assessed, so that a unit is selected based upon its qualification for all aspects of the design lifetime and environment before committing to an OTS solution.

Here are a few analyses that come to mind which will result in requirements being fed into the design based upon boundary conditions chosen by analysis. If overlooked during architectural design, these are likely to lead to missing requirements that either have to be introduced as changes later in development or managed as operational limitations on the system:

  • Mission analysis: the top-level analysis of how mission objectives will be achieved makes many assumptions about system characteristics that must be captured in requirements to confirm that sub-systems conform. Concurrent engineering and rapid prototyping are both powerful tools for progressing mission concepts, but they often involve simplifications in the system model that need to be validated and fed into the requirements for lower-level design;
  • Operability analysis (or concept of operations): ensuring that all data needed for operators to make informed decisions is available in a timely fashion. Where decisions must be taken autonomously on board, the assumed behaviour on which algorithms decide must reflect the design (and be accurately reported back to the ground after the event);
  • Environmental analysis: structural, thermal, radiation and EMC analyses all make assumptions about how the system reacts and what this implies for lower-level design:
      ◦ avoiding the amplification of launch loads into fragile mechanisms;
      ◦ correctly identifying heat sources and managing flows to ensure equipment stays within operating (or survival) limits;
      ◦ choosing tolerant components or providing sufficient shielding;
      ◦ identifying sources of EM noise and making sure that equipment is specified to accommodate them.

  • Pointing analysis: ensuring that assumptions in the AOCS design are reflected in the mass properties of the spacecraft, the stiffness of deployed peripherals, and the sensitivity of sensors and actuators. On the Hubble Space Telescope, accommodating the solar arrays during launch meant that the centre of rotation did not pass through the centre of mass. Very stringent disturbance specifications were placed on the solar arrays, reaction wheels had to be sized to accommodate the resulting torques, and operations had to be planned to accommodate settling time during every eclipse transition;
  • Worst case & safe mode scenario analyses: ensuring that margins on power, data transfer and storage, and on-board computing power are reflected in operations scenarios and equipment design, both for the most demanding parts of planned operations and for credible survival scenarios;
  • Fault tree (and later detailed FMECA) analysis: identifying failures that lead to degradation of the mission, so that requirements are introduced to minimise such risk, to isolate and report such failures, and to provide operators with the means to recover from them;
  • Reliability analysis: ensuring that equipment is designed to fulfil the mission, or that redundancy accommodates inherent limitations, especially for mission-critical operations.
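
The reliability bullet above can be illustrated with the standard redundancy calculation: if a single unit cannot meet a reliability figure derived at system level, a redundant pair may recover the margin. A minimal sketch (failure rate and mission duration are invented, illustrative numbers):

```python
import math

# Exponential reliability model: R(t) = exp(-lambda * t).
failure_rate_per_hr = 2e-6          # hypothetical unit failure rate
mission_hours = 7 * 365.25 * 24     # 7-year mission

r_single = math.exp(-failure_rate_per_hr * mission_hours)

# Hot-redundant pair: the function is lost only if BOTH units fail.
r_pair = 1 - (1 - r_single) ** 2

print(f"single unit: {r_single:.4f}, redundant pair: {r_pair:.4f}")
```

Note how the choice of redundancy then generates its own derived requirements, on cross-strapping, failure detection and switch-over, which must flow into the sub-system specifications just as the article describes.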

This list is absolutely not exhaustive: many readers will be aware of other analyses that lead to important design constraints, and I’d welcome comments suggesting others that have emerged unexpectedly in your experience! What is important about such assumptions is that they are clearly captured and reported, so that their impact as derived requirements is clearly understood. Many of us will have had the experience of finding a key requirement buried as a boundary condition, deep in a discussion of the design of a simulation, with the analyst blissfully unaware of the implications for the wider system design!

This article has concentrated upon making sure that requirements are properly captured as part of requirement decomposition on the way down the left-hand side of the V-curve, so that a traceable detailed design can be justified at critical design review. It goes without saying that, at each layer of the design, these requirements need to be associated with the relevant verification means, so that the capture of that evidence and the demonstration of compliance can be conducted on the way back up the right-hand side of the curve. If there is a sub-system integration step that AIT have to perform without an associated layer of consolidated, traceable requirements, then its success (or otherwise) may only emerge later in the acceptance or operation of the system. Making sure that requirement decomposition goes hand in hand with functional decomposition and verification should be reflected in a triangle of systems engineering processes, tailored for the complexity of a particular programme or product via the systems engineering management plan (SEMP). Time spent assuring that requirement decomposition matches the architectural design will save many hours of reiteration of detailed design, or worse, discovery of avoidable non-conformances later in development or operations.

This is the fourth article in a series. If you are a glutton for punishment you may want to read the others:

https://www.dhirubhai.net/feed/update/urn:li:linkedInArticle:7150461931466690561/

https://www.dhirubhai.net/feed/update/urn:li:linkedInArticle:7152990525757747201/

https://www.dhirubhai.net/feed/update/urn:li:linkedInArticle:7152989709646917632/

Christopher Hanbury-Williams

Payload Lead & Senior Space System Engineer at Astroscale

Chuckling to myself about how many projects this reminds me of!

Mubasher Malik

Aerospace Engineer

Great article Alan. I think you've covered the main points of proper requirement decomposition, and you hit the nail on the head: the main purpose of having proper requirement decomposition and traceability is to allow us to prove that the system functions as designed. When it comes to verification, you do not test the design, but the function that the requirements describe. Without an adequate description of the system functions via the requirements, many issues can arise: for example, finding that functions fail test late in the design process, leading to costly retesting, or, even more worrying, features or functions that are not tested at all and only appear in service. Some studies have found that insufficient or poor requirements engineering is the cause of up to 80% of product defects.

I do find that requirements engineering is an art form without a one-approach-fits-all, and finding the right balance of detail is often a collaborative effort. One thing to add: for derived requirements, a rationale is an important attribute to include, since these requirements do not have parents in the traceability.

Another thing that springs to mind is that there seems to be an unsettling trend towards “fail fast, learn fast”. That might be OK for some types of system, but not where lives are at risk. During our careers we’ve followed strict formal methodologies for good reason. It strikes me that this trend is trying to cut costs by possibly cutting corners. Elon Musk is reportedly a fan of this method; was this the reason behind the SpaceX rocket’s 'rapid unscheduled disassembly'?

回复

You know that it pains me to say this, but I agree with what you say! We’ve both been through enough projects to have learned “the right way” to do complex systems. I think there are a number of possible aspects to the problems that you are seeing. Firstly, it’s a long time since I completed my formal education, but I wasn’t taught about requirements definition or verification and validation, and I’m not convinced that things have necessarily changed. Certainly, many new graduates do not seem to understand the sheer volume of verification and validation required for complex (e.g. safety-critical) systems. Also, I wouldn’t expect new graduates to be assigned to work on the up-front parts of the lifecycle until they have some real-world experience under their belt; they may be inflating knowledge of requirements gained from development and/or verification experience to apply for their next position. Were the problems in the Post Office Horizon system a result of poor requirements, or insufficient verification, or both?


More articles by Alan Fromberg
