DE-DMM Article 11: DoD Digital Engineering & Material Management (DE & DMM) - Why are DE approaches vital? Core Values and Differentiators
William Cooper
Chief Solution Architect @ LMI | Creative analytical solutions at scale, helping passionate people keep their promises to the world | Digital Engineering, MBSE, Cloud Computing, Advanced Analytics
A little more than a decade ago we were leading large military utility assessments with more than 100 personnel on the assessment teams. The assessments included engineering, systems, engagement (1-v-1), capability (numerous integrated platforms providing a specific warfighter capability), and mission (all-up warfighter missions with complete OODA loops – Observe, Orient, Decide, Act) assessment teams. Even though we defined precisely what each team would deliver and how it would leverage the results from other teams, it was HARD. We implemented Systems Engineering (SE) best practices with waterfall milestones, entrance and exit criteria, SE measures to assess progress, etc. However, everything took longer than expected; it was very difficult to document, verify, and validate the source analytical data; data handoffs were often fumbled; we could never run as many analyses as we really needed; and the analyses often did not produce the needed insights. Do these challenges sound familiar? It was clear that there had to be a better way, but how?
As a leader it is often helpful to suffer through difficult implementations so that you know the critical questions that have to be answered and the critical tasks that have to be performed. Suffering with non-performant people, processes, data, analytical capabilities, and contractual approaches is hard, but it comes with the benefit of being able to recognize better approaches when they become available.
Roughly six years ago our teams started adopting Digital Engineering (DE) approaches, and the significant enhancements over our previous approaches quickly became obvious. In our case we did not jump immediately to an all-up DE approach, but started generally moving in that direction, enhancing our approach as we had the time and bandwidth. These initial efforts happened to coincide with the publication of the Department of Defense’s (DoD’s) Digital Engineering Strategy (DES, Ref 1). We found that incorporating the DES Goals & Focus Areas into our daily practice was yielding dividends, and it was encouraging to see DoD strategic guidance confirming that we might really be onto something. However, we were early adopters; there was not a lot of guidance available and little to no tooling for our use cases. We simply knew that the changes we were making were having a dramatic impact on both our execution and our products. Small changes led to small wins that encouraged us to continue evolving and improving. In time our DE implementation evolved into a fully cloud-deployed Digital Engineering as a Service (DEaaS) analytical pipeline capability with a Model Based Systems Engineering (MBSE) data curation front end, web browser access, and a powerful analytical engine that yielded truly Better, Faster, Cheaper results.
Article 10 in this series (Ref 2) discussed the doctrinal DE foundations, what the DoD hopes to achieve with DE, and how the DoD expects this to look in practice. In this article we’re going to expand upon that with a discussion of how our teams have implemented our DE approach and WHY: the VALUES and DIFFERENTIATORS that such approaches enable us to achieve. We’ll compare how we used to accomplish analytical tasks with how a DE approach accomplishes them and highlight the enhancements. I like to say that we don’t do DE simply because we were directed to by our customers; we do it because it is the right thing to do. The DE values and differentiators are real. We’ve done this the hard way in the past, and that suffering makes it easy to understand and articulate the positive impacts of DE approaches.
VALUES & DIFFERENTIATORS INTRODUCTION
The graphic above lists the key DE values & differentiators. The list is read from the bottom to the top, with the more foundational differentiators at the bottom. In this section we’ll provide a basic set of definitions for each. Collectively these enable Better, Faster, Cheaper analytics, which in turn enable Better, Faster, Cheaper development, acquisition, and support of warfighter solutions. We’ll revisit these in more detail at the end of the article, following a description of our RAPTR IDEaaS implementation.
Collaborative: If we could identify only one DE value then we’d have to choose the ability of all team members and analytical capabilities to collaborate in the best ways that their skills, experience, knowledge, and expertise allow, without hindrance from proprietary, vendor-specific, stovepiped, classification, or similar restrictions on data interoperability. Some teams supply the Authoritative Source of Truth (ASOT), others process this ASOT into insights, and still others employ this data to make decisions that change the world for the better.
Data Ownership: Analytics are only as good as the source data, hence the focus on Verification, Validation & Accreditation (VV&A). The first two DES goals call for handling all source data as interoperable data models with documented Authoritative Sources of Truth (ASOTs). Achieving these goals requires a significant investment, but teams find the investment well worth it because of how it enables the differentiators that follow.
Rapid: DE capabilities are rapid in several ways:
1. Collaborative MBSE editing capabilities enable rapid ASOT documentation.
2. MBSE interoperability with the analytical tools enables rapid assessment responses to changes in source data.
3. Cloud orchestration enables rapid performance of large trade studies.
4. Contemporary Artificial Intelligence (AI), Machine Learning (ML), and Data Science (DS) approaches rapidly generate understandable and actionable assessment insights.
Scalable: DE capabilities are similarly scalable in multiple ways:
1. MBSE data models scale, with easy-to-understand visual relationships, to the full analytical scope and complexity.
2. All source data, intermediate analysis products, and final assessment insights are accessible via web browser.
3. Analyses are vertically scalable to arbitrarily large numbers of platforms and model complexities.
4. Trade studies are horizontally scalable to arbitrarily large sets of trade study options.
5. Trade studies are not limited by the scale of the data generated.
6. Cloud-deployed infrastructure significantly reduces the local compute hardware support requirements.
Agile: Typical agile sprints are two weeks long. DE capabilities should support the agile production of results on sprint timelines; that is, an arbitrarily large and complex analysis can be specified and accomplished within a single sprint.
Staff-able: All analytical team positions can be staffed with commercially available skillsets.
Trusted: DE data products can be trusted because credential-based user authentication and Zero Trust (ZT) data encryption and validation approaches are employed for all transactions between web browser user interfaces and on- or off-premises cloud computing.
Insights: The ultimate goal is to generate timely, actionable leadership insights grounded in well-understood analytics and authoritative source data.
DEaaS DEPLOYMENT APPROACH
The Values & Differentiators noted above come at the expense of increased solutioning complexity. The following graphic describes the meta DE solution cloud deployment approach. Mission Engineering users are shown in the left-hand block. Users access the analytical infrastructure via web browser thin clients and virtually no compute is performed on the local computer or workstation. The upshot is that any computer able to run a web browser can have access to virtually unlimited cloud compute power.
The web browsers provide access to cloud-deployed Software as a Service (SaaS). In this particular case they provide access to RAPTR Innovative Digital Engineering as a Service (IDEaaS), as shown in the middle block. RAPTR IDEaaS is a high-performance DE capability on its own, but it can deliver enhanced value when teamed with other User Interfaces, Data Sources, and/or Analytical Frameworks. We’ll describe RAPTR IDEaaS in more detail in the next section.
RAPTR IDEaaS receives a cyber Certification to Field (CtF) to deploy to the cloud Platform as a Service (PaaS) shown in the right-hand block. The PaaS receives a cyber Authorization to Operate (ATO) at pertinent Impact Levels (ILs) or classification levels. The user web browsers are connected to the SaaS on the PaaS via approved networks (e.g. NIPR at IL4-5, SIPR at IL6, JWICS or NMIS at TS/SCI).
The PaaS provides access to very large numbers of compute nodes of varying capability. Less capable nodes are significantly cheaper per compute hour, enabling the SaaS to select the required nodes and thereby manage both Cost and Time to deliver Value. Cloud PaaS can easily accelerate large multi-simulation trade study compute workloads by 1,000-2,000X+, but this is just the start of the benefit. The ability to select the appropriate nodes also enables the management of the analysis costs. RAPTR IDEaaS provides a cost estimator up front, which has proven accurate to within 2-3% of the actual costs, so users or customers can choose how much is enough in terms of both Cost and Time. Cloud Service Providers (CSPs) like Microsoft and Amazon also provide access to “Spot Nodes” which are made available only when not requested by an on-demand workload. These are usually discounted by ~90% and add only ~30% to typical workload compute times. Huge discounts can be achieved if you have a little schedule flexibility.
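To make the Cost/Time tradeoff concrete, here is a minimal back-of-the-envelope sketch of the kind of estimate a cost estimator performs. The node specification, hourly rate, spot discount, and spot time penalty below are illustrative placeholders, not actual CSP prices or RAPTR IDEaaS figures.

```python
# Illustrative only: rough cloud cost and wall-clock estimate for a trade study.
# Node specs, prices, and spot discount/overhead figures are assumed placeholders.

from dataclasses import dataclass

@dataclass
class NodeType:
    name: str
    vcpus: int
    on_demand_rate: float   # USD per node-hour (placeholder value)

def estimate(options: int, hours_per_option: float, node: NodeType,
             nodes: int, use_spot: bool = False) -> dict:
    """Estimate total cost and wall-clock time for running `options` simulations."""
    node_hours = options * hours_per_option
    rate = node.on_demand_rate
    wall_clock = node_hours / nodes
    if use_spot:
        rate *= 0.10          # ~90% discount (assumed)
        wall_clock *= 1.30    # ~30% longer due to interruptions (assumed)
    return {"node_hours": node_hours,
            "cost_usd": round(node_hours * rate, 2),
            "wall_clock_hours": round(wall_clock, 1)}

# Example: a 7,500-option trade study, 0.5 node-hours per option, 500 nodes.
general = NodeType("general-purpose", vcpus=8, on_demand_rate=0.40)
print(estimate(7500, 0.5, general, nodes=500))
print(estimate(7500, 0.5, general, nodes=500, use_spot=True))
```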
Infrastructure as Code (IaC) approaches are employed in the SaaS development to ensure the solution can be deployed to any PaaS.
RAPTR IDEaaS ANALYTICAL PIPELINE
The RAPTR IDEaaS analytical pipeline is divided into 5 steps. We’ll briefly discuss each of these.
MBSE Editor: RAPTR IDEaaS uses a third-party Model Based Systems Engineering (MBSE) data model editor. A number of high-quality editors are available, including Cameo, MagicDraw, Sparx EA, Innoslate, etc. Most of these editors employ Systems Modeling Language (SysML) Version 1. A challenge is that each of these editors employs a slightly different version of SysML V1 and stores the MBSE data files in proprietary formats. This falls well short of the DE goal of full data model interoperability with analytical capabilities that need access to the ASOT in the various MBSE data models.
A significant advance is on the way with the Object Management Group’s (OMG’s) finalization of SysML V2 in the fall of 2024. A number of language enhancements and simplifications are included in V2, but from a meta DE perspective V2 should be instrumental in enabling MBSE data model interoperability for two reasons. First, editors should adhere strictly to the V2 language specification and not employ language extensions. This should enable any editor to edit any MBSE SysML V2 data file, breaking the current vendor lock. Second, SysML V2 comes with an open-specification JSON API (JavaScript Object Notation, Application Programming Interface). To date, RAPTR IDEaaS has had to define a custom JSON API for a particular vendor’s SysML version. The availability of an open-spec JSON API should permit interoperability with any SysML V2 data model regardless of the editor employed. Government organizations should select their MBSE editors based upon their adherence to the SysML V2 standard to eliminate vendor lock and maximize flexibility.
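As a rough illustration of what open-spec JSON API interoperability buys, the sketch below pulls model elements from a SysML V2-style REST endpoint. The base URL, project identifier, and endpoint path are assumptions for illustration; a production repository’s API may differ.

```python
# Illustrative sketch of pulling model elements from a SysML V2 REST/JSON API.
# The endpoint path and JSON field names are assumptions, not a specific product's API.

import requests

BASE_URL = "https://sysml-repo.example.mil/api"   # hypothetical repository endpoint
PROJECT_ID = "mission-alpha"                      # hypothetical project identifier

def fetch_elements(project_id: str, element_type: str | None = None) -> list[dict]:
    """Return the JSON element records for a project, optionally filtered by type."""
    resp = requests.get(f"{BASE_URL}/projects/{project_id}/elements", timeout=30)
    resp.raise_for_status()
    elements = resp.json()
    if element_type:
        elements = [e for e in elements if e.get("@type") == element_type]
    return elements

# Example: list every PartUsage so an analysis tool can map platforms to models.
for part in fetch_elements(PROJECT_ID, element_type="PartUsage"):
    print(part.get("name"), part.get("@id"))
```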
The use of cloud-deployed MBSE editors is essential to the collaboration of large teams of teams. Some teams own the ASOTs and other teams perform analysis using the ASOT data. All of these teams need access to the MBSE editor and data models regardless of geographic location.
Analysis & Scenario Authoring: The MBSE data models contain all of the documented analytical possibilities with ASOT traceability. The analysis & scenario authoring step selects both the scenario (which provides context, often the red threat to be countered in the analysis) and the analysis missions, capabilities, measures, etc. This is performed in RAPTR IDEaaS via a web browser application which starts with one or more MBSE JSON data files and creates a JSON specific to the analysis & scenario of interest. As shown in Analysis #1, this JSON can be directly loaded and executed by the RAPTR (Rapid Analysis and Prototyping Toolkit for Resiliency) Modeling, Simulation, and Analysis (MS&A) framework, or it can be used to define a trade study.
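The sketch below shows the general shape of this authoring step: one or more MBSE JSON exports are combined with a scenario selection into a single analysis specification. The JSON schema (keys such as "scenario", "measures", and "elements") is hypothetical and is not the RAPTR IDEaaS format.

```python
# Minimal sketch of the authoring step: combine exported MBSE JSON with a scenario
# selection into one analysis specification. The schema and file names are invented.

import json
from pathlib import Path

def author_analysis(mbse_files: list[str], scenario: str,
                    measures: list[str], out_path: str) -> dict:
    """Build one analysis JSON from one or more MBSE JSON exports."""
    elements = []
    for f in mbse_files:
        elements.extend(json.loads(Path(f).read_text()))   # each export is a JSON array
    spec = {
        "scenario": scenario,    # provides the threat/context
        "measures": measures,    # e.g., measures of effectiveness to compute
        "elements": elements,    # ASOT-traceable model content
    }
    Path(out_path).write_text(json.dumps(spec, indent=2))
    return spec

# Example usage (file names are placeholders):
# author_analysis(["blue_force.json", "red_threat.json"],
#                 scenario="contested-strait",
#                 measures=["mission_success", "time_on_station"],
#                 out_path="analysis_1.json")
```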
Trade Study Authoring: Often there are questions about the specific solution being assessed in the analysis. These can be technical or rooted in any of the DOTMLPF-P concerns (Doctrine, Organization, Training, Materiel, Leadership and Education, Personnel, Facilities, and Policy). Trade study authoring can parametrically or categorically expand a base analysis into a large number of JSON options via a web browser authoring application. Each JSON data file represents a single option. A single analysis might spawn multiple trade studies. To date, customer analysis data shows that trade studies with 5K-10K options are most typical, although with IDEaaS’ horizontal scalability and cloud orchestration this number can be as large as the customer might want. There are tradeoffs between one large trade study and a series of smaller trade studies.
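A minimal sketch of parametric expansion follows: a base analysis specification is expanded into one option JSON per combination of parameter values. The parameter names and file layout are illustrative assumptions, not the RAPTR IDEaaS schema.

```python
# Sketch of parametric trade-study expansion: one option JSON per point in the
# full-factorial design. Parameter names and output layout are illustrative.

import copy
import itertools
import json
from pathlib import Path

def expand_trade_study(base_spec: dict, parameters: dict[str, list],
                       out_dir: str = "options") -> int:
    """Write one JSON option file per combination of parameter values."""
    Path(out_dir).mkdir(exist_ok=True)
    names = list(parameters)
    count = 0
    for values in itertools.product(*(parameters[n] for n in names)):
        option = copy.deepcopy(base_spec)
        option["overrides"] = dict(zip(names, values))   # this option's parameter settings
        Path(out_dir, f"option_{count:05d}.json").write_text(json.dumps(option))
        count += 1
    return count

# Example: 3 sensor ranges x 4 weapon loadouts x 2 comms architectures = 24 options.
n = expand_trade_study({"scenario": "contested-strait"},
                       {"sensor_range_km": [50, 100, 150],
                        "weapon_loadout": ["A", "B", "C", "D"],
                        "comms": ["LOS", "BLOS"]})
print(n, "option files written")
```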
Simulations, Analyses, and Wargames: Arguably the pivotal Digital Engineering move is the cloud orchestration of one or more simulations on cloud compute nodes. For large trade studies, a copy of the RAPTR framework is paired with a specific trade study option JSON, containerized, and deployed to a cloud compute node. Such simulations can fail for numerous reasons, and each failure mode is managed autonomously by the cloud orchestration capability. If a human operator is using an external User Interface (UI) to battle manage or command & control entities in a simulation, then this is a wargame. If a RAPTR IDEaaS simulation is synchronized with an external simulation, then this is a federated simulation.
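The orchestration pattern, greatly simplified, looks like the sketch below: each option is dispatched as an independent job and failed runs are retried automatically. A real deployment would use a container orchestrator (for example, Kubernetes jobs) rather than local threads; run_simulation here is a stand-in for launching a containerized RAPTR run, not the actual mechanism.

```python
# Pattern sketch only: dispatch one job per trade-study option and retry failures.
# The "simulation" is faked so the example runs standalone.

import random
from concurrent.futures import ThreadPoolExecutor

def run_simulation(option_file: str) -> str:
    """Stand-in for launching one containerized simulation against one option JSON."""
    if random.random() < 0.05:                 # simulate an occasional node/sim failure
        raise RuntimeError(f"simulation failed for {option_file}")
    return f"{option_file}: complete"

def run_with_retries(option_file: str, max_attempts: int = 3) -> str:
    for attempt in range(1, max_attempts + 1):
        try:
            return run_simulation(option_file)
        except RuntimeError:
            if attempt == max_attempts:
                return f"{option_file}: FAILED after {max_attempts} attempts"
    return ""  # unreachable; kept for completeness

options = [f"option_{i:05d}.json" for i in range(100)]
with ThreadPoolExecutor(max_workers=16) as pool:
    for result in pool.map(run_with_retries, options):
        pass  # in practice, stream each job's status back to an orchestration dashboard
```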
AI/ML/DS Assessment Insights and Dynamic Visualization: The obvious challenge with trade studies that generate large quantities of data (10s of TB are typical) is the need to process all of those data to deliver timely, actionable, decision-able insights. RAPTR IDEaaS employs various AI/ML/DS (Artificial Intelligence, Machine Learning, Data Science) approaches to maximize the processing of data in-flight (i.e., during the source simulation) and in the cloud (e.g., working with data stored in low-cost S3 buckets) while minimizing the data ingress to and egress from the cloud. Typically, IDEaaS delivers finished insights within 2-3 hours after the completion of the last simulation. RAPTR IDEaaS also employs a fully parametric approach to trade studies to address both factuals (we have something) and counterfactuals (we lack something), enabling the use of modern Causal Inference (CI) approaches. CI enables us to understand what is causally important to achieving a particular outcome. When we specify the requirements for a system to be acquired, the main point is to specify the requirements which will cause a solution to deliver the required operational capabilities. If requirements are causally important then they must be specified. If they are not, then specifying them will over-constrain the solution, drive up costs, and likely jeopardize the delivery schedule. CI drives analytical scale requirements. See Ref 3 for more information.
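The toy example below illustrates why the full-factorial factual/counterfactual design matters: when every combination is run, the effect of a single capability can be estimated by differencing the two balanced halves of the design. The capability name, factors, and outcome values are invented for illustration and are not drawn from any real study.

```python
# Toy illustration of factual vs. counterfactual comparison in a balanced design.
# All data are invented; "has_jammer" is the capability under test.

import pandas as pd

df = pd.DataFrame({
    "has_jammer":      [0, 1, 0, 1, 0, 1, 0, 1],
    "sensor_range_km": [50, 50, 100, 100, 50, 50, 100, 100],
    "comms":           ["LOS", "LOS", "LOS", "LOS", "BLOS", "BLOS", "BLOS", "BLOS"],
    "mission_success": [0.42, 0.61, 0.55, 0.78, 0.47, 0.66, 0.60, 0.83],
})

# Average effect of the jammer, with the other factors balanced across both groups.
ate = (df.loc[df.has_jammer == 1, "mission_success"].mean()
       - df.loc[df.has_jammer == 0, "mission_success"].mean())
print(f"Estimated effect of the jammer on mission success: {ate:+.2f}")
```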
VALUES & DIFFERENTIATORS DISCUSSION
Collaborative: The central goal of DE is for each analytical contributor to become as smart as all of us collectively. This happens when ASOT data is migrated from paper and document artifacts to digital artifacts which are interoperable with our analytical capabilities, and as our workforce and culture are enabled to embrace DE approaches. As we noted previously, some teams supply the Authoritative Source of Truth (ASOT), others process this ASOT into insights, and still others employ this data to make decisions that change the world for the better. There is no one analytical tool or capability that is always best. The government needs the flexibility to select the best analytical capabilities for the job, with the assumption that they will be interoperable with the pertinent ASOT. Data & analytical capabilities which adhere to open-specification language and API definitions are key enablers.
Data Ownership: Our experience is that DE provides government organizations a much-needed impetus to pay more attention to the quality and quantity of their ASOT data. Government study directors have historically paid far too much for large groups of Subject Matter Experts (SMEs) to sit in meetings because each attendee happened to know some detail essential to the analysis. The challenge is that this was both expensive and ineffective. What we really need is for the source data to be:
1. Interoperable with the Analytical Capabilities: Beginning with the end in mind (thanks, Stephen Covey), we employ MBSE because it enables us to rapidly and accurately understand the impacts of data on decision-able insights. Interoperability is what makes this possible. And the employment of open-specification language and API standards is what enables interoperability.
2. Transparently Documented: All needed data is migrated from source references to a data store which is available, inspectable, and potentially editable by all teams regardless of location. Data quality must be a collective responsibility. Current best practice is to employ a Model Based Systems Engineering (MBSE) approach, especially visual relational data editors employing modeling languages like the Systems Modeling Language (SysML). Cloud editors can permit easy web browser access and automated configuration management. Democratizing data documentation in this way disarms the usual data competition (“our data is better than your data”) and helps the collective team of teams focus on achieving the best outcomes. Permitting a spectrum of model fidelities to be represented helps ensure access to the “appropriate” model fidelity for the analysis task. This can have a huge impact on organizational effectiveness and efficiency. Once the data is documented, the source teams or individuals can be invited to fewer meetings. Better data, Faster pace, and Cheaper analysis costs (lower meeting overhead).
3. Tagged with References: All data should include its source reference(s). What we know and why we know it should be transparently inspectable. A wide variety of reference types are permissible. Often data comes from published documents. Other times it comes from an authoritative organization. Still other times it comes from an authoritative individual, analysis, database, etc. Simple best practice, outsized impact.
4. Tagged with Data Controls: Usually the very first thing that happens after a data store is assembled is that the organization realizes how valuable the information is and someone outside the org requests access. Tagging the data with data controls (CUI, FOUO, PROPIN, classified, NOFORN, other caveats, etc.) makes data portability and releasability easy. We have found that solid data control tagging approaches can enable very large MBSE data models to be prepared for external release in roughly half an hour; a minimal release filter is sketched below.
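Here is a minimal sketch of such a release filter, assuming each model element carries a simple data-control marking; the field name and marking values are placeholders rather than any specific MBSE tool’s schema.

```python
# Sketch of a release filter over a tagged data model: only elements whose
# data-control marking is in the approved set are exported. Field names are assumed.

import json
from pathlib import Path

APPROVED_MARKINGS = {"UNCLASSIFIED", "CUI"}   # release decision for this package (assumed)

def filter_for_release(model_path: str, out_path: str) -> int:
    """Write a releasable subset of the model; return the number of withheld elements."""
    elements = json.loads(Path(model_path).read_text())
    releasable = [e for e in elements
                  if e.get("dataControl", "UNMARKED") in APPROVED_MARKINGS]
    Path(out_path).write_text(json.dumps(releasable, indent=2))
    return len(elements) - len(releasable)

# Example: withheld = filter_for_release("full_model.json", "release_model.json")
```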
Rapid: Let’s be honest, you’re probably waiting too long for assessment insights that are not as useful as you hoped they would be. Rapid analytical capabilities at every step in the analysis process enable the employment of agile analytical approaches, which in turn enable high-speed convergence on the desired analytical outcomes. The faster products are generated, the more quickly customers can see them and adjust their requirements. And very often the needs for briefing products come at a fast and furious rate, so effective and efficient analytical processes are essential for the low-stress delivery of impactful, decision-able insights at the pace of need.
Scalable: As a government study director I was usually underwhelmed by the analytical capacity of our processes. Our data stores didn’t contain all of the required data, analysts could only access the vendor-locked tools on a handful of terminals, our analytical capabilities lacked the power to handle the analytical scope (simulation size or complexity, number of simulations in a trade study), available compute capabilities were outdated and non-performant, and rudimentary data science capabilities could not extract the full insights value from the analytical data. Again, does this sound familiar? Properly implemented modern cloud-based data and analytical infrastructures (such as RAPTR IDEaaS) offer scalable analytical capabilities which overcome all of these limitations, delivering rapid, actionable insights at the pace of need.
Agile: We introduced the idea of “agile” in the Rapid discussion above, but let’s expand upon it. Agile is a philosophy which shapes interactions between customers and performers. It was originally devised as a more effective way to develop software, but it should be the performance approach for every aspect of an analysis or assessment because it has a proven track record of delivering the best results. The classical Agile Manifesto states:
We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value:
Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan
That is, while there is value in the items on the right, we value the items on the left more. (Ref 4)
The ability of cloud-deployed DE capabilities to define a trade study, execute it, and deliver analytical insights within a single sprint enables customers to continuously refine their understanding and drive towards game-changing leadership decisions. It enables analytics (even very large-scale analyses) to be agile. This is important because more frequent course corrections improve the analytics’ quality and impact. They help ensure that the analytics converge upon the actionable insights that government leadership needs to make important acquisition and other decisions.
Staff-able: Crafting analytical approaches (People, Process, Data, Technology, Contractual) around commercially available skillsets delivers the best results at the right cost and operations tempo. Too many analytical approaches require highly-skilled unicorns that are nowhere to be found. I’ve seen job requisitions sit open for 5-6 months asking for software developers with Python proficiency, domain expertise, a security clearance, and analytical skills. Sound familiar? It is best to craft the analytical approach around narrower, complementary skillsets which in total enable the entire approach to work extremely well. To this end we have crafted our approach around Operations Analysts (a.k.a. ORSAs – Operations Research Systems Analysts), S/W Developers, Data Scientists, and Dynamic Visualization artists. Each of these is a commercially available skillset which is straightforward to staff. However, we virtually never hire personnel with an ORSA background. We hire personnel with deep domain knowledge and then teach them how to perform analytics. The domain expertise is the hard part. Equipping them with killer analytical infrastructure and teaching them how to use it is far easier.
Trusted: The world moves $80T+ daily in the global financial markets based upon Zero Trust (ZT) principles and practices. These are extremely well understood and commercially available. It is time to abandon our focus on Security (closed networks with traffic trusted inside the network) and embrace Trust. As noted above, this is essential to the adoption of scalable cloud-deployed capabilities because typically the user and the cloud compute are not located within a single closed network. Even if they are linked by a single network, the network often routes its traffic over DISA leased fiber optic cables which mix encrypted classified traffic with unclassified commercial traffic. ZT approaches with credential verification on both ends of every transaction are both easy to implement and essential to contemporary solutions adoption.
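One common way to implement “credential verification on both ends of every transaction” is mutual TLS, sketched below: the client presents its own certificate and validates the server against an approved CA bundle. The URL and certificate paths are placeholders; this is a pattern sketch, not the RAPTR IDEaaS implementation.

```python
# Pattern sketch of mutual TLS: both ends of the transaction are verified.
# URL and certificate/key paths are placeholders for illustration.

import requests

SERVICE_URL = "https://raptr-ideaas.example.mil/api/status"   # hypothetical service
CLIENT_CERT = ("/etc/pki/client.crt", "/etc/pki/client.key")  # this client's identity
CA_BUNDLE = "/etc/pki/approved_ca_bundle.pem"                 # CAs this client trusts

def zero_trust_get(url: str) -> dict:
    """Every request authenticates the client AND validates the server certificate."""
    resp = requests.get(url, cert=CLIENT_CERT, verify=CA_BUNDLE, timeout=15)
    resp.raise_for_status()
    return resp.json()
```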
Insights: How many times have you sent your analysts off to perform a complicated analysis and then been presented a simple “Excel” type trend or bar chart as the primary product? Analytics have three outcomes: 1. Understand where success occurs, 2. Understand why success happens, and 3. Understand how confident we are in the outcomes. The “Excel” chart might take a swing at #1, but it will fail to extract the needed value from the data to deliver all three analytical outcomes. Performant analytics must enable analysts to rapidly downselect to the successful options, understand why they are successful, and assess the confidence in this assessment. RAPTR IDEaaS employs the Trade Study Assessment (TSA) toolkit to rapidly distill lessons and deliver highly-customizable & exportable analytical products to enable data-driven leadership decisions at the pace of need.
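A minimal sketch of those three outcomes on trade-study results follows: downselect to options that meet the requirement, look at which factors distinguish them, and bootstrap a confidence interval. The data, column names, and threshold are invented for illustration and are not TSA outputs.

```python
# Toy illustration of the three insight outcomes on synthetic trade-study results.
# All data are randomly generated; the 0.75 success threshold is assumed.

import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
df = pd.DataFrame({
    "option": range(1000),
    "sensor_range_km": rng.choice([50, 100, 150], 1000),
    "mission_success": rng.uniform(0.3, 0.9, 1000),
})

# 1. Where success occurs: downselect to options meeting the requirement.
winners = df[df.mission_success >= 0.75]

# 2. Why: compare factor distributions among the winning options.
print(winners.groupby("sensor_range_km").size())

# 3. Confidence: bootstrap interval on the winners' mean success.
boot = [winners.mission_success.sample(frac=1, replace=True).mean()
        for _ in range(2000)]
print(f"mean success {np.mean(boot):.3f}, "
      f"95% CI [{np.percentile(boot, 2.5):.3f}, {np.percentile(boot, 97.5):.3f}]")
```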
References:
1. Digital Engineering Strategy, OUSD(R&E), June 2018
4. Agile Manifesto, https://agilemanifesto.org/