16 Ways Large Language Models (like ChatGPT) impact Configuration Management

OpenAI initiated the battle of the Large Language Models (LLMs) with ChatGPT, and with Google announcing Bard, Microsoft integrating OpenAI's model into Bing, and Baidu announcing its Ernie Bot, that battle is now in full swing. The question is: how will these Large Language Models impact Configuration Management?

I'm curious to see how this will influence the way we do Configuration Management in the coming years. Some of my fellow bloggers have already shared their thoughts on the subject: Jos Voskuil, with My three (Artificial) PLM trends for 2023, and Oleg Shilovitsky, with PLM, CHATGPT, AND LARGE LANGUAGE MODEL THOUGHTS.

What are Large Language Models (LLMs)?

Most of you will have heard something about ChatGPT. But in case you do not know what a Large Language Model is, here is the explanation from ChatGPT (prompt: 'Explain in simple terms what a large language model is'):

"A large language model is a type of computer program that is trained to understand and generate human language. Essentially, it is a really smart computer program that has been fed a huge amount of text, like books, articles, and websites, and has learned how to understand the patterns and rules of human language.?
Once the model has been trained, you can use it to do things like generate new sentences or paragraphs that sound like they were written by a human. This can be really useful for things like writing chatbot responses or even helping to write articles or stories.
Overall, a large language model is a powerful tool for working with human language in a way that mimics human understanding and creativity."

My experience with ChatGPT

I had some interesting discussions with ChatGPT, for example, when explaining the use of a CM Baseline in the context of Configuration Management. I started with amazement at the output it generates, followed by a bit of disappointment when asking more complex questions that require more in-depth knowledge and experience. But clearly, it has potential when you learn how to use it as an assistant supporting you. In the future, these capabilities will mature further.

How will it impact Configuration Management?

The impact on Configuration Management will depend on the capabilities of the LLM and especially the available training dataset. But here are some ideas.

Obvious use cases

The obvious use cases include (1) preparing the agenda for Change Control Boards and (2) writing minutes of meetings. Integrating LLMs with search engines can (3) improve search capabilities, making it easier to find specific datasets or records.
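
To make use case (2) concrete, here is a minimal sketch of how an LLM could draft Change Control Board minutes from raw notes. It assumes the OpenAI Python client; the model name, prompt, and note content are illustrative placeholders, not a recommended implementation, and the output would still need human review.

```python
# Sketch: drafting Change Control Board minutes from raw meeting notes.
# Assumes the openai Python package (>=1.0) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

raw_notes = """
CCB 2023-02-14. Present: CM lead, design engineer, quality.
CR-1042 approved, effectivity next production lot.
CR-1043 deferred, awaiting stress analysis.
Deviation DEV-88 rejected.
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system",
         "content": "You are a Configuration Management assistant. "
                    "Write formal meeting minutes with sections: Attendees, "
                    "Decisions, Action Items. Only use the notes provided."},
        {"role": "user", "content": raw_notes},
    ],
)

print(response.choices[0].message.content)  # draft minutes, to be reviewed by a human
```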

Advanced use cases

More advanced use cases include (4) extending search capabilities with other types of models, e.g., geometric search. For example, this can make it easier to find issues that might have similar causes based on the provided input.

LLMs can potentially also (5) support configuration status accounting activities, or even support performing a Physical Configuration Audit or a Functional Configuration Audit when asked to (6) compare datasets or (7) write up a report. They can even (8) support migrating configuration information from products that were documented on paper or in legacy formats.
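
A minimal sketch of use cases (6) and (7): compare an as-designed parts list with an as-built one deterministically, and let the LLM only phrase the audit findings. The part numbers, model name, and prompt are illustrative assumptions.

```python
# Sketch of use cases (6) and (7): compare datasets, then draft the audit report.
# Part numbers, model name, and prompt wording are illustrative assumptions.
from openai import OpenAI

as_designed = {"PN-100 rev B", "PN-200 rev A", "PN-300 rev C"}
as_built = {"PN-100 rev B", "PN-200 rev B", "PN-400 rev A"}

# Deterministic comparison first; the LLM only writes up the result.
missing = sorted(as_designed - as_built)
unexpected = sorted(as_built - as_designed)

findings = (
    f"Missing or wrong revision vs. as-designed: {missing}\n"
    f"Present but not in the as-designed baseline: {unexpected}"
)

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system",
         "content": "Draft a Physical Configuration Audit discrepancy report "
                    "based only on the findings provided. Do not invent findings."},
        {"role": "user", "content": findings},
    ],
)
print(response.choices[0].message.content)
```

Keeping the comparison deterministic and using the LLM only for the write-up reduces the risk of hallucinated audit results.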

When given the right prompts (a prompt is the instruction used as input for the LLM), LLMs can potentially replace business rules engines. This can be helpful for (9) autofilling specific fields on a form such as a Change Request or Deviation. Or (10) they can automatically change the value of a specific attribute based on the content of a record; for example, when a verification record is uploaded, the LLM can update the maturity status of the part depending on the results in that record. Or they can (11) support users in creating standardized names for parts and datasets.
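
A minimal sketch of use case (10), assuming the verification record is available as plain text and that the PLM system exposes some way to update the part's maturity attribute; the update_part_maturity function, model name, and statuses below are hypothetical placeholders.

```python
# Sketch of use case (10): derive a maturity status from a verification record.
# Model name, statuses, and the PLM update call are illustrative assumptions.
import json
from openai import OpenAI

client = OpenAI()

verification_record = (
    "Vibration test report VR-2023-017 for part PN-200 rev B: "
    "all acceptance criteria met, no anomalies observed."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    response_format={"type": "json_object"},
    messages=[
        {"role": "system",
         "content": "Classify the verification result. Reply as JSON with keys "
                    "'part', 'result' (pass/fail/inconclusive), and 'rationale'."},
        {"role": "user", "content": verification_record},
    ],
)

decision = json.loads(response.choices[0].message.content)

# Hypothetical PLM hook: only auto-promote on an unambiguous pass,
# otherwise route to a human for review (human in the loop).
def update_part_maturity(part: str, status: str) -> None:
    print(f"Would set maturity of {part} to '{status}'")

if decision["result"] == "pass":
    update_part_maturity(decision["part"], "verified")
else:
    print("Routing to CM analyst for review:", decision["rationale"])
```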

LLMs also provide possibilities to (12) warn you about potential dependencies with other changes, as not everything is modelled using objects and relations, and a lot is still stored in documents.
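
One way to approach use case (12) is to embed the text of open change descriptions and flag those that are semantically close to a new change, even when no explicit relation exists in the PLM system. A minimal sketch, assuming the OpenAI embeddings API; the change texts and the similarity threshold are made up for illustration.

```python
# Sketch of use case (12): flag potentially related changes via text similarity.
# Change descriptions and the similarity threshold are illustrative assumptions.
import math
from openai import OpenAI

client = OpenAI()

new_change = "CR-1050: increase wall thickness of housing PN-300 to fix cracking."
open_changes = {
    "CR-1043": "Update stress analysis for housing PN-300 after field failures.",
    "CR-1047": "Change label artwork on packaging for PN-900.",
}

def embed(texts):
    result = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return [item.embedding for item in result.data]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

vectors = embed([new_change] + list(open_changes.values()))
new_vec, open_vecs = vectors[0], vectors[1:]

for (change_id, _), vec in zip(open_changes.items(), open_vecs):
    score = cosine(new_vec, vec)
    if score > 0.5:  # arbitrary threshold for the sketch
        print(f"Possible dependency: {change_id} (similarity {score:.2f})")
```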

Replacing people?

Could it even replace the Audit and Release Analyst in the change process? Their role is to make sure the process is followed properly, that you do not release something that was not agreed upfront, and that the right people are involved in the review and validation of the updates. Maybe not entirely, as there will likely be situations where a human needs to make a final call, but it can certainly (13) support this role.

In any case, for the time being, we need to have Humans in the Loop, as you cannot hold the AI accountable.

LLMs in Products: Training Datasets

What if you use LLMs as part of your product? An LLM is trained using a training dataset, which (14) should be treated as a configuration item. As I previously wrote in 'What is the configuration when the product has an AI?':

"The training dataset is a vital part of building a usable ML Model. That means the training dataset needs to be under configuration control. The question is whether the training dataset is part of the baseline of the product or if it is part of a different baseline, like tools you use to assemble your product? The training dataset might be used to train the ML Model but is not necessarily shipped with the product and, therefore, not part of the actual baseline of the product. Like when you use a wrench to tighten a bolt or when you use a compiler to compile your code into an executable. The bolt is part of the configuration of the product, however the tool, in this case, the wrench, is not, but it is part of the different baseline and linked to the product configuration as an enabling item via the Process Plan/Bill of Process and its Operations/Work instructions. Same for the code and executable, which are part of the product configuration, while the compiler is not.?The same compiler or wrench might/can be used to compile/assemble other products."

LLMs in Products: Export Control

Also, over time, the training dataset can be improved, and the model will start to perform better. What used to be a product that could be sold without restrictions may suddenly (15) face export control limitations. More on that in: 'Export Control and Machine Learning (ML)'.


Photo by Possessed Photography on Unsplash - Modified by adding text.

LLMs in Products: (Re-)Certifications

If your product requires certification to be allowed on the market, (16) when would your product require re-certification due to the changes introduced through improved LLM performance? This is still a bit of uncharted territory, and governments will have to develop or update their rules and/or guidance regarding the use of Machine Learning models as part of products.


Please share your thoughts, and let's start the discussion.


Header Photo by DeepMind on Unsplash

This article was originally published on mdux.net. Don't forget to subscribe to this newsletter!

Maxime Gravel

Manager - Model-Based Engineering at Moog Inc.

1y

What about translating GD&T or any other symbols and/or acronyms on a model/document into human words?

Peter Ebbesmeyer

Fraunhofer IEM / OWL ViProSim

1y

Exciting! Thanks for sharing Martijn Dullaart

Rene Welker

Lifelong Learning | Engineer | Senior Consultant at Mews

1y

That’s an extensive list of use cases, thank you Martijn Dullaart. I think we are at a very early stage with lots of hype around ChatGPT. My user experience with the tool matches the one in the article. To implement LLMs or other AI tools at scale for Configuration Management, much work is needed, e.g., regarding data quality for feeding the algorithms/models, data security & ownership of SaaS solutions, and the collaboration between humans and AI. Hopefully, early use cases will support the most annoying tasks of daily work.

Martijn Dullaart

Shaping the future of CM | Book: The Essential Guide to Part Re-Identification: Unleash the Power of Interchangeability & Traceability

1y

Feel free to share your ideas/thoughts in the comments.
