This is how we will get to grips with the complexity of the SDV
Chris Seiler
Abstract
The automotive industry is in the midst of the biggest transformation in its history. From the perspective of a German premium OEM, this article describes a new approach that makes this change methodically manageable. Aspects of systems engineering, variant management and object-oriented software development are combined in a data-centric approach, and integration into existing IT landscapes is demonstrated using a decentralized approach to data integration.
Introduction
Larry Launch calls Carl Component: “Hey Carl, we set up our new Coupe series for the first time today and your control unit has produced some error entries - can you please take a look at it?” “Sure, I'll come and have a look at it!”
This is how the working day of Carl Component, a component manager in the development department of an automotive manufacturer (OEM = Original Equipment Manufacturer), begins. His colleague Larry Launch is responsible for commissioning a development vehicle and has updated Carl's component to the latest software version. The component, i.e. the control unit, is first supplied with a software image (= flashed) and then equipped with a configuration suitable for the vehicle. This is also referred to as variant coding, or coding for short. After a successful update, the new software is started in the control unit. Part of the initialization sequence is the checking and plausibility testing of the input and output channels, which are usually connected to corresponding peripherals (actuators or sensors). This process is also known as self-diagnosis. If results prove to be implausible from the point of view of the control unit software, the control unit stores error entries (= DTCs, Diagnostic Trouble Codes) in its error memory. This is exactly what happened to Larry: he cannot explain the error entries and therefore asks Carl for help.
Carl arrives at the integration center and immediately finds the vehicle that Larry is currently working on. “Hi Carl, glad you're here,” Larry greets him. “I've flashed and coded your control unit according to the specifications without any problems, and now the thing is throwing errors without end. You must still have a lot of bugs in your software”. “Hi Larry, let me have a look. Actually, our component tests were all green in the last run, so what you're saying sounds strange to me”.
Carl takes a look at the error memory entry - his eyes darken and his brow furrows. There are two active DTCs listed there:
C4D2518 – There is an interruption or short circuit to positive at the output for the lighting on the rear right door sill trims
C4D2618 – There is an interruption or short circuit to positive at the output for the lighting of the rear left door sill trims
“Hmm ...” Carl thinks aloud: “Rear door sill trims, that's pins 17 and 18 on connector 2. Larry, have you checked the plug connection on connector 2 - is it seated correctly?” Larry replies: “Sure, I've already checked all the plug connections three times, everything is wired and connected correctly. It can't be the wiring”. Carl: “That's right, you're right. Connector 2 also has the connections to the camera drive in the tailgate, everything works there”.
Suddenly Carl looks up: “Wait a minute, rear door sills, this is a two-door car! Larry, there are no rear door sills on this car, the fault doesn't make any sense!” Now Larry starts to wonder: “Yes, but if that doesn't make sense, why is the error reported by the control unit?” “There must be something wrong with the coding of the software,” replies Carl.
Carl opens the authoring system for maintaining the coding rules: “I have found the error. There is no rule stored here in the system for this series, so the default setting is sent to the control unit. And this is: Switch on the monitoring function for the rear entrance lights. I'll change that quickly and then we can carry out the update again.”
And indeed, after the correction and the new update, both DTCs no longer appear in the error memory. Larry and Carl have thus rectified the error, but the underlying problem has not been solved, because errors like this occur again and again during the development process: a malfunction is detected and the cause is reactively corrected within the product documentation. However, it would be better if we could proactively use context-specific product knowledge when creating this product documentation so that such errors cannot occur in the first place.
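To make the mechanics of this anecdote concrete, here is a minimal Python sketch of the self-diagnosis step. All names, channel identifiers and the default-on behavior are illustrative assumptions, not actual control unit code; the point is how a monitoring function that defaults to “on” produces exactly the two DTCs Carl saw.

```python
# Hypothetical self-diagnosis sketch: after start-up, the control unit
# checks each monitored output channel and stores a Diagnostic Trouble
# Code (DTC) for every implausible reading.

MONITORED_CHANNELS = {
    # channel id -> DTC to store when the channel is implausible
    "sill_light_rear_right": "C4D2518",
    "sill_light_rear_left": "C4D2618",
}

def self_diagnosis(channel_readings, monitoring_enabled):
    """Return the list of DTCs to store in the error memory.

    channel_readings maps channel id -> True if the output line is
    plausible (load detected), False if open circuit / short to positive.
    monitoring_enabled maps channel id -> coding switch; note the
    default is True, mirroring the default coding in the story.
    """
    dtcs = []
    for channel, dtc in MONITORED_CHANNELS.items():
        if monitoring_enabled.get(channel, True) and not channel_readings.get(channel, False):
            dtcs.append(dtc)
    return sorted(dtcs)

# On the two-door coupe no rear sill lighting is wired up, yet the
# default coding leaves monitoring enabled -> both DTCs are stored.
readings = {"sill_light_rear_right": False, "sill_light_rear_left": False}
print(self_diagnosis(readings, monitoring_enabled={}))  # ['C4D2518', 'C4D2618']
```

With the corrected coding rule (monitoring switched off for both channels), the same readings produce no DTCs, which is exactly the fix Carl applied.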
Problem: The transformation of the automotive industry
The transformation of the automotive industry is now in full swing and continues to accelerate as several factors come together. Initially, it was purely a change in the powertrain: the electrification of the powertrain, through to the full introduction of electromobility with all its variants and technological approaches, is advancing on a broad front, especially in key markets such as China.
Established OEMs that have long been successful in the market with combustion engine technology must change. They will have to face up to new market players from the USA and China. In addition, the trend towards automation of the actual driving task will lead to more and more software-based components and functions being integrated into the vehicle. The resulting shift in focus to software as a driver of innovation is forcing established OEMs to change to an even greater extent: the age of the software-defined vehicle (SDV) is dawning. And the new market players are also better positioned in this field of technology, as they do not have to deal with legacy issues in their systems and existing fleets and have directly aligned their E/E architectures (electrics/electronics) to be software-defined.
The downside of software: cyber security is becoming increasingly important, because hacker attacks are on the rise and the attack surface grows with every additional connected service and software component. For this reason, the regulator, specifically the UNECE, has created the R155 Cyber Security Management System regulation. According to this regulation, every OEM must establish a management system and have it certified in order to prove that cyber security measures are introduced and practiced in the company's processes. Building on this, regulation R156 Software Update Management System (SUMS) was drawn up to establish a practiced and documented process at the OEMs for carrying out remedial measures in cyber security cases in the form of software updates on the affected vehicles. R156 requires comprehensive documentation and safeguarding measures for software updates so that the conformity of the vehicles with their type approval can still be proven after the update.
Software is both a solution and a problem
As shown in the example above with the configuration of the rear door sill trims, variant coding makes it possible to provide software-driven functions for products with high feature variance. This logical continuation of hardware-driven product variance now continues seamlessly in system-driven product variance. Software plays the key role here: new functions can be implemented much faster and more cost-effectively in software than in hardware. This offers many opportunities, but also entails some risks: we have to master the software and its rules.
The software-defined vehicle - what exactly is it?
Slama et al. have formulated the following explanation of the software-defined vehicle in their report - they call it “#digitalfirst - a new way of working”:
“As today's consumer preferences are in constant flux, we cannot predict with certainty today which functions will be in demand tomorrow. [...] Consequently, #digitalfirst starts with the customer experience and works backwards to the technology. [...] #digitalfirst assumes that an OEM has to go through three tectonic shifts: the shift to the north (of the vehicle API), the shift to the left (towards early testing) and the shift towards virtualized development.”
A vehicle API (Application Programming Interface) is a programming interface that abstracts and simplifies details of the vehicle hardware and its architecture so that software applications can be developed and maintained on top of it. The shift to the north means that more and more functional logic is implemented above (“north of”) this interface in order to take account of the different life cycles and boundary conditions of hardware and software. The shift to the left means that more and more of these new functions must be tested very early in the development process in order to keep pace with increasingly short-lived consumer preferences. The shift towards virtualized development contributes to the same goal.
Current approaches such as model-based systems engineering (MBSE) do not meet these requirements. The MBSE world assumes that a system can be described sufficiently well using modeling languages such as SysML (Systems Modeling Language) or UML (Unified Modeling Language) and that software development is then carried out by the respective experts in a downstream process step analogous to hardware development.
But that is not enough for the SDV. Why? Today, we already have platform development at the level of control units through the extensive use of PLE methods (product line engineering), which manifests itself in the variant coding of the control units. This enables a strategy of customized mass production, which combines the mass production of vehicles with a high degree of individualization of the product. This complexity and the variety of possible system behavior cannot be mastered today using the tools and methods of classic MBSE.
The increasing dynamism of the market environment described above is now giving customer-centric software development a central role. This tectonic shift, while maintaining the strategy of customized mass production, is causing complexity to explode.
We therefore need to break new ground and consistently establish data-centric methods, processes and tools in SDV development. For example, centrally relevant information such as requirements from regulations or architecture decisions needs to be documented in only one place in the company, and all areas that depend on this information can access it directly. This is also referred to as the “single source of truth” or an “Archimedean point” - i.e. a completely indisputable fact. This information must also be consistent, semantically correct and complete. Wherever it makes sense, contexts should be described in a formalized form rather than just as prose text. The fact that every piece of information and every aspect can evolve and change must be taken into account; consistent version and configuration management is therefore absolutely essential. This is the only way we can master this unique transformation. In essence, the question is: how do we master complexity?
Mastering complexity
As shown in the figure above, there are basically three drivers of complexity.
We need to find appropriate answers to all of these challenges so that SDV can be implemented for customized mass products. We need a systems engineering approach that is rethought with a software mindset.
What solution options do we have?
Feature-based product line engineering in accordance with ISO/IEC 26580:2021
This industry standard enabled, for the first time, the rule-based generation of application configurations and the transfer of intelligence to domain engineering. The method was published as an ISO standard in 2021, but the basic idea of domain engineering goes back to the early 1980s: the platform is defined and developed during domain engineering and is then used in product engineering to develop specific products. This approach is also the basis for the feature code-based variant coding methodology behind the door sill lighting example at the beginning. The set-based description of dependencies and variance points is what made it possible to implement the strategy of customized mass production in the first place. This method has been used successfully in the automotive industry for several decades and is the state of the art in variant management.
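The rule-based derivation described above can be sketched in a few lines of Python. The rule conditions, feature names and the coding switch are invented for illustration; the point is the division of labor: domain engineering defines the rules once, and product engineering derives a concrete coding from a feature selection.

```python
# Hypothetical sketch of feature-based variant coding in the spirit of
# ISO/IEC 26580: rules defined once in domain engineering, applied per
# product in product engineering.

CODING_RULES = [
    # (condition on the selected feature set, coding switch, value)
    (lambda features: "four_doors" in features, "rear_sill_monitoring", "on"),
    (lambda features: "four_doors" not in features, "rear_sill_monitoring", "off"),
]

def derive_coding(selected_features):
    """Apply every matching rule to produce the coding for one product."""
    coding = {}
    for condition, switch, value in CODING_RULES:
        if condition(selected_features):
            coding[switch] = value
    return coding

print(derive_coding({"four_doors"}))  # {'rear_sill_monitoring': 'on'}
print(derive_coding({"coupe"}))       # {'rear_sill_monitoring': 'off'}
```

In the anecdote at the beginning, the rule for the new series was simply missing, so a hard-coded default took over; a rule set like the one sketched here makes that gap visible.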
However, this approach has its methodological limits. The next evolutionary step towards the mass product “customized and software-defined vehicle” cannot be achieved with it.
Our new approach: Typebased Product Line Engineering
The Typebased Product Line Engineering (TPLE) approach builds on the concepts of product line engineering and object-oriented software development, both of which originate from the software world, and combines them with the paradigm of systems engineering, taking into account the diversity of product variants due to the strategy of customized mass production.
The core element of TPLE is the central knowledge graph data model, which holistically maps the problem and solution space of our product development and thus provides the basis for end-to-end configuration management of all aspects relevant to product documentation.
What is a knowledge graph?
At the heart of a knowledge graph is a knowledge model: a collection of interlinked descriptions of concepts, entities, relationships and events. Knowledge graphs contextualize data through links and semantic metadata, providing a framework for data integration, standardization, analysis and sharing.
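At its simplest, such a graph can be thought of as a set of subject-predicate-object triples that can be queried by following links. The following Python sketch uses invented triples and names purely for illustration:

```python
# Minimal knowledge graph sketch: facts as subject-predicate-object
# triples, contextualized by the predicates that link them.

TRIPLES = {
    ("Coupe", "is_a", "VehicleType"),
    ("Coupe", "has_option", "numberOfDoors=2"),
    ("RearSillLighting", "is_a", "Function"),
    ("RearSillLighting", "requires", "numberOfDoors=4"),
}

def objects(subject, predicate, triples=TRIPLES):
    """Return all objects linked to `subject` via `predicate`."""
    return {o for s, p, o in triples if s == subject and p == predicate}

print(objects("Coupe", "has_option"))           # {'numberOfDoors=2'}
print(objects("RearSillLighting", "requires"))  # {'numberOfDoors=4'}
```

Even this tiny graph already encodes the door sill insight from the introduction: a function that requires four doors is linked, via its triples, to a vehicle type that only has two.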
How exactly does TPLE work?
TPLE serves as the central product documentation backbone of the organization. Every department involved in product development has its clear touchpoint with a data-based integration approach that takes collaboration and process automation to a whole new level. The underlying IT systems are integrated based on a data mesh approach. Each data owner is responsible for providing high-quality data products that can be used by other departments in a decentralized and democratic way.
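The data product idea can be sketched as a small, explicit contract that consumers depend on instead of depending on the owner's internal IT system. All class, field and department names below are hypothetical illustrations, not an actual TPLE interface:

```python
# Sketch of a data mesh "data product": an owner, a published schema,
# and a decentralized access point that consumers can validate against.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class DataProduct:
    name: str
    owner: str                          # accountable department (data owner)
    schema: set                         # fields guaranteed to consumers
    fetch: Callable[[], List[dict]]     # decentralized access point

def validate(product: DataProduct) -> bool:
    """Check that every delivered record honors the published contract."""
    return all(product.schema <= set(record) for record in product.fetch())

# A hypothetical data product offered by the department owning coding rules:
coding_rules = DataProduct(
    name="coding-rules",
    owner="component-engineering",
    schema={"series", "switch", "value"},
    fetch=lambda: [
        {"series": "Coupe", "switch": "rear_sill_monitoring", "value": "off"},
    ],
)
print(validate(coding_rules))  # True: the product honors its contract
```

The contract is what makes decentralized use "democratic": any department can consume the product and mechanically check that the promised quality level is met.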
The TPLE data model consists of two main components:
1. the problem space
2. the solution space
The problem space contains all key aspects that are relevant as input for product development. The regulations of the individual markets are linked to the corresponding regions and countries via the markets. The features relate to both the regulations and the market. Finally, product management links feature management with market management.
In the solution space, central type management relates to both feature management and product management. The types represent the central data object for transporting engineering knowledge. With the possibilities for modeling interfaces, state machines, functions, options and component types, supplemented by inheritance, a broad spectrum of modeling tools is available. Type Lines (also called Partial Type Configurations) are an intermediate step to instantiate types with some concrete option values, while other options are still undefined to maintain some variance at this stage. A concrete type configuration can be created when all options are fully defined. These Type Configurations are ultimately the blueprints for the concrete product instances that are then built based on these blueprints.
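The progression from Type to Type Line to Type Configuration can be illustrated with a short Python sketch. The class names mirror the TPLE terms used above, but the code itself is an assumption for illustration, not the actual TPLE API:

```python
# Illustrative model of the TPLE solution-space terms: a Type declares
# options, a Type Line binds some of them (partial configuration), and a
# Type Configuration binds all of them.

class Type:
    def __init__(self, name, options):
        self.name = name
        self.options = options          # option name -> allowed values

class TypeLine:
    """Partial type configuration: some options bound, some still open."""

    def __init__(self, base, bound):
        for opt, val in bound.items():
            assert val in base.options[opt], f"illegal value for {opt}"
        self.base, self.bound = base, bound

    def open_options(self):
        return set(self.base.options) - set(self.bound)

    def configure(self, remaining):
        """Bind all remaining options to obtain a full Type Configuration."""
        assert set(remaining) == self.open_options(), "not fully defined"
        return {**self.bound, **remaining}

vehicle = Type("Vehicle", {"numberOfDoors": {2, 4}, "market": {"EU", "US"}})
eu_line = TypeLine(vehicle, {"market": "EU"})   # variance kept open
print(eu_line.open_options())                   # {'numberOfDoors'}
coupe = eu_line.configure({"numberOfDoors": 2}) # fully bound configuration
```

The resulting dictionary plays the role of the blueprint mentioned above: only once every option is bound can a concrete product instance be built from it.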
How can existing data standards be integrated into TPLE?
The TPLE Knowledge Graph serves as a superordinate data model that integrates all aspects of product documentation on both the hardware and the software side. Therefore, all known data exchange formats can be attached to specific areas of the TPLE model: as input/output, such as ReqIF (Requirements Interchange Format) for requirements; as a reference, such as the VSS (Vehicle Signal Specification); or as a format that transports data generated from the TPLE graph, such as ODX (Open Diagnostic Data Exchange), AUTOSAR XML (AUTomotive Open System Architecture Extensible Markup Language), ASAM A2L (Association for Standardization of Automation and Measuring Systems MCD-2 MC Language) or KBL (Kabelbaumliste, an XML-based standard for the uniform digital description of a wiring harness). It is even possible to automatically generate effect chains and other system specification content and export them to SysML if required.
How can nested supply chains be integrated into TPLE?
Due to the strict separation of problem and solution space within the TPLE data model, it is possible to design a nested supply chain tree with N supply chain levels: The customer's solution space contains jumping-off points for the different suppliers that lead directly to their own problem space within their own TPLE instance. They also have their own product lines and need to manage their own solution space, especially when acting as customers towards the next level of suppliers within the supply chain hierarchy.
How can TPLE be implemented?
The example with the entrance lights can now be modeled very pragmatically with the help of TPLE as shown in this figure:
By modeling the option “Number of doors: 2 doors / 4 doors” in the Type and instantiating the two Type Configurations, we now enable the semantically correct context for controlling the function “Rear door sill trim lighting monitoring function”. By linking to the respective variant coding switches, we can check the plausibility of the underlying coding rule:
Type Configuration "Vehicle Coupe" (numberOfDoors=2) -> Rear door sill trim lighting monitoring function = off
Type Configuration "Vehicle Sedan" (numberOfDoors=4) -> Rear door sill trim lighting monitoring function = on
Distributed product knowledge is made available centrally and provides support where knowledge needs to be accessed. In the first step, this knowledge is used to check the plausibility of the coding rules and to validate them. In further expansion stages, these coding rules can then be generated fully automatically from the central knowledge model.
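The plausibility check just described can be sketched in a few lines. Function and field names are illustrative, not a production rule engine; the logic, however, is exactly the two mappings listed above:

```python
# Sketch of the coding rule plausibility check: derive the semantically
# correct switch value from product knowledge, then compare it with the
# value actually stored in the coding rule.

def expected_monitoring(type_config):
    """Semantically correct coding, derived from the number of doors."""
    return "on" if type_config["numberOfDoors"] == 4 else "off"

def coding_rule_is_plausible(type_config, coded_value):
    return coded_value == expected_monitoring(type_config)

coupe = {"name": "Vehicle Coupe", "numberOfDoors": 2}
sedan = {"name": "Vehicle Sedan", "numberOfDoors": 4}

print(coding_rule_is_plausible(coupe, "off"))  # True  - rule is plausible
print(coding_rule_is_plausible(coupe, "on"))   # False - the bug from the intro
print(coding_rule_is_plausible(sedan, "on"))   # True
```

In the first expansion stage this check only validates existing rules; in later stages, `expected_monitoring` itself becomes the generator from which the coding rules are produced fully automatically.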
Impact: why is TPLE the better approach?
This approach enables us to successfully implement the strategy of customized mass production in the software-defined age. Changes to the product documentation can be implemented quickly and reliably for many configurations - while at the same time taking all regulatory requirements into account. Using Data Mesh, it is possible to map the integration of existing systems and the provision of data from them in order to ensure a seamless connection to ongoing software transformation projects. With the TPLE data model, we form the basis for setting up product configuration twins that can be optimally integrated into the idea of collaborative digital twins.
Outlook
The TPLE approach is being developed at Mercedes-Benz AG and is being used as part of the SofDCar research project funded by the German Federal Ministry for Economic Affairs and Climate Action (BMWK) to master version and variant management in the Digital Twin.
With this product-centered, internally consistent and configuration-accurate database, we are defining the basis for many other new areas of application that build on a reliable product documentation basis.
As shown above, a data mesh-based approach can help to offer data from existing IT systems as data products. This enables distributed and decentralized data exchange architectures, which in turn can be integrated at the respective connection points within the TPLE data model. (The expert for this is Thomas Theiner)
This means, for example, that the legal requirements from the regulations of the respective markets are integrated in the problem space in both Regulation Management and Market Management. (The expert for this is Nico Wägerle)
If we manage to map a large part of the validation effort for homologation completely digitally using simulation and virtualization technologies, then further speed and efficiency potential can be leveraged. (Experts for this are Nils Katzorke, Egon Wiedekind and Indrasen Raghupatruni)
The advantage of data products is that a certain level of data quality and availability in the sense of a product can be expected and demanded by the data customer. It is therefore also transparent which department accesses which data products and uses them in order to offer its own data products to others as a result of its own value creation. (Stefan Brock is an expert in this field)
The requirements contained in the Problem Space can be integrated at any level of abstraction - both the “hard” requirements from authorities, product and top management, and the recommendations and best practices that serve as guidelines for product design and development processes. A prominent example of this is the Automotive SPICE framework: the practices it contains can be pragmatically integrated by extending the TPLE data model. (Experts for Automotive SPICE are Jörg Zimmer and Dominik Feurer, experts for the design of the TPLE data model are Klaus Anwender and Christian Neusius)
Through the use of distributed ledger, crypto and Web3 technologies, it will be possible to establish decentralized identities and “web of trusts” in order to create a reliable basis for communication in interaction with different partners. (Experts for this are Jan Junge and Sebastian Becker)
This can provide added value both in the area of development cooperations and partnerships and in the context of globally distributed supply chains. It can also serve as a basis for the provision of product-related data via data marketplaces such as acentrik.io. (The expert for this is Srikanth Kaja)
These technologies can also be used directly in the vehicle to further increase the trustworthiness of the data generated from the vehicle. As a result, vehicles can be reliably developed into autonomous agents in a networked and secure Economy of Things. (The expert for this is Peter Busch)
In combination with autonomous driving, this results in very exciting synergies. See also my article: "Wie die Randbedingungen für ein autonom fahrendes Fahrzeug standardisiert beschrieben werden können" (How the boundary conditions for an autonomously driving vehicle can be described in a standardized way). (The expert for this is Jan Reich)
By using generative AI technologies based on large language models, new, additional areas of application can be opened up with the help of the database created by TPLE. For example, new configurations could be proposed within TPLE with the help of generative AI, which are optimized under certain conditions. Existing data from existing systems could also be made more easily accessible and integrated into the knowledge base. This subject area is still very little researched in conjunction with TPLE and I therefore still lack contact with the relevant experts.
These are all areas in which a lot of research work still needs to be invested. However, in my view, this outlined picture forms the basis for a successful future for the European high-tech industry.
I would be delighted if you could not only leave a “like”, but also describe in a short statement why you put the “like”, what you did not understand or what other thoughts occurred to you while reading.
Let's get into a creative exchange to make this vision a reality.
High-Tech Consultant (Electronic, Software, System Engineering, Fleet Management and Mobility)
Could a solution be to use AI agents to manage the growing complexity in Software Defined Vehicle (SDV) and Type-based Product Line Engineering (TPLE)? These intelligent agents could streamline configuration management, automate compliance tracking, and optimize variant selection in real time. For instance, an AI Configuration Management Agent could align regulatory and customer requirements with optimal configurations, recommend efficient setups, and reduce redundancies across models, regions, and custom setups. Leveraging AI’s predictive capabilities could help reduce errors and respond swiftly to regulatory changes, minimizing human error and enhancing agility. This approach aligns well with the data-centric strategy you suggest. Of course, much more thought would be needed, but that’s my quick two cents.
Lead Business Consultant @ msg | SDV Portfolio Business Growth expert
What I liked:
* I appreciate that you're asking for real feedback instead of just a "like" – it invites real conversation, something I would also like to see more of on LinkedIn and which is unfortunately often missing.
* The way you explained "flashing" and "coding" in simple terms was great – makes it easier to follow for everyone.
* You covered a lot of interesting topics and wrapped them all into a clear strategy, which I found really helpful.
What I didn’t fully understand / would love to see more of:
* Is it possible or useful to have inheritance not only in the solution space but also in the problem space?
* In A-SPICE terms, TPLE seems like a modern way to tackle SUP.8 challenges. I get that SYS.1 and SYS.2 are part of the problem space, but I’m not sure if TPLE’s solution space fully represents the architecture or if it needs to be linked to it.
* I’d love to see more about how this can be deployed within an organization – what roles and responsibilities would look like.
Consultant at CPS GmbH
Robert Hömisch: There should be a great many similarities with Spicy SE to be recognized here.