Model or Agent - Knowing the Difference Can Save Millions
Sometimes I see the word ‘model’ used for analytics that diagnose the current state or predict a future state (e.g. condition or operating mode), when what is really meant is an ‘agent’, not a ‘model’. I have been part of this confusion in the past. Knowing the difference between model and agent can save lots of engineering time and money, because you don’t need to build a model or ‘digital twin’ for most analytics tasks around the plant. For most analytics use-cases you need an agent. But sure, for some use-cases you do need a model. What is the difference, and what is recommended practice? Here are my personal thoughts:
Caveat Emptor
Because value prediction uses a model, and a ‘digital twin’ is mostly a model, some people were led to believe they need to build a digital twin of their plant to do analytics. Moreover, ‘digital twin’ sometimes gets used as a catch-all term for a broad range of software capabilities including not only a process or equipment model but also operator training simulator (OTS), plant 3D model, Virtual Reality (VR), dashboards, document management, data lake, maintenance history, and mobile notifications – a whole suite of applications. This makes for a huge and costly scope when the task at hand can be very simple if done right. That is, be careful or you may end up with an unnecessarily costly solution with a long implementation period as well as high lifecycle cost. Diagnosing and predicting a state can be very easy.
Model: Calculate Numerical Values
A model is a representation. In a plant context a model refers to a mathematical model of a process or piece of equipment. The original meaning of ‘digital twin’ was simply a virtual model that looks like and performs like the process or piece of equipment it models, on which you can safely run trial & error. The model is a mathematical equation. Here are a few common use-cases for models:
Process simulation: Thermodynamics, mechanical equipment, and chemical reactions are modeled mathematically. For instance, a model of a pump may compute the resulting flow given inputs like inlet and outlet pressure as well as motor speed (a minimal sketch of such a model follows this list). Multiple pieces of equipment make up a unit. Multiple units make up the plant. In the case of operator training simulators (OTS) the model is used to simulate the various scenarios the operator may encounter and needs to practice handling when the time comes.
Inferential measurement: Direct sensing is usually best but not always possible. A mathematical model can instead infer a value from multiple simple measurements. For instance, there is no sensor to directly measure heat exchanger efficiency and fouling. A model instead computes these values from the inlet and outlet temperatures on the hot and cold side (see the effectiveness sketch after this list). A more complex example is computing Reid vapor pressure (RVP). This is sometimes referred to as a “soft sensor”. A soft sensor uses data from multiple “hard” sensors.
Consumption target: In an advanced energy management information system (EMIS) a model computes the target consumption for each energy stream based on the current production rate and other relevant variables. The actual measured consumption is then compared to this calculated target to detect overconsumption (a sketch follows a few paragraphs below).
3D simulation: Even the graphics rendering for 3D simulation computes values for the large number of triangles that make up the visual image, based on the 3D model.
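To make the pump example concrete, here is a minimal sketch of a first-principles pump model. The quadratic pump curve shape and all coefficients are illustrative assumptions, not data from any real pump:

```python
import math

# Hypothetical pump curve at rated speed: head = A - B * flow^2
# (A and B are made-up constants for illustration, not from a datasheet)
A = 50.0              # shutoff head in metres at rated speed
B = 0.002             # curve steepness, m / (m3/h)^2
RATED_SPEED = 2900.0  # rpm

def pump_flow(p_in_bar: float, p_out_bar: float, speed_rpm: float) -> float:
    """Infer flow (m3/h) from inlet/outlet pressure and motor speed
    using the pump curve scaled by the affinity laws."""
    # Differential head the pump is producing (water, approximate)
    head = (p_out_bar - p_in_bar) * 1e5 / (1000.0 * 9.81)  # metres
    # Affinity laws: shutoff head scales with the square of speed
    ratio = speed_rpm / RATED_SPEED
    available = A * ratio**2 - head
    if available <= 0:
        return 0.0  # operating beyond the scaled curve: no forward flow
    return math.sqrt(available / B)

print(pump_flow(p_in_bar=1.0, p_out_bar=4.5, speed_rpm=2900.0))
```

Chain such equipment equations together and you have a unit model; chain units and you have a plant model – which is exactly why the scope grows so quickly.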
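And for the heat exchanger soft sensor, a sketch of a fixed-equation effectiveness calculation from four temperature measurements. The simplifying assumption (the stream with the larger temperature change is the minimum-capacity stream) is mine, for illustration only:

```python
def exchanger_effectiveness(t_hot_in, t_hot_out, t_cold_in, t_cold_out):
    """Soft sensor: infer thermal effectiveness from four temperature
    measurements alone (assumes the stream with the larger temperature
    change is the minimum-capacity stream)."""
    max_span = t_hot_in - t_cold_in       # greatest possible temperature change
    d_hot = t_hot_in - t_hot_out
    d_cold = t_cold_out - t_cold_in
    return max(d_hot, d_cold) / max_span  # 1.0 = ideal, falls as fouling builds

# Trend this value over time: a steady decline suggests fouling.
clean = exchanger_effectiveness(150.0, 90.0, 30.0, 80.0)
fouled = exchanger_effectiveness(150.0, 110.0, 30.0, 65.0)
print(f"effectiveness clean={clean:.2f} fouled={fouled:.2f}")
```

Trending this single number turns four temperature transmitters into a fouling indicator, with no model maintenance required.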
The equations in the process model may be built on known first principles (1P) such as thermodynamics, or derived from statistical regression (data science) analysis of historical data. Regression on historical data is limited to conditions the plant has experienced in the past. This is usually not sufficient for operator training, as operators must practice how to intervene in abnormal situations beyond normal operation. Regression on historical data also requires that the right sensors for all the relevant variables have been in place for a long time and that their data was recorded, which is often not the case. For all these reasons, first principles are recommended for models.
Simulation is only as good as the model. If the model is not accurate, the resulting simulation is not accurate either. Therefore, if changes are made to the plant, equipment, or the process, the model must also be updated to match. As changes are often made to the plant, equipment, and the process on an ongoing basis, maintaining the model used for plant simulation is also an ongoing task. That is, deploying a model like a ‘digital twin’ is a long-term commitment that requires dedicated resources and a recurring cost budget.
A ‘digital twin’ is a long-term commitment that requires dedicated resources and recurring cost budget
Smaller models (having fewer inputs and outputs) are easier to build and maintain. For instance, the EMIS consumption target model is a single simple equation for each utility flow branch, each with only a few inputs, so it is easy to build and maintain over the years. An equipment efficiency model in, for instance, a heat exchanger app is a fixed equation the user never has to change in the first place, so there is no model maintenance. However, a large model like an entire plant or process unit is a major commitment.
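As a sketch of how small such a consumption target model can be, consider one steam branch. The coefficients are made-up placeholders; in practice they would come from design data or a baseline fit:

```python
# Hypothetical target model for one steam branch:
# target = fixed losses + variable consumption per tonne of product
BASE_LOAD = 2.0   # t/h of steam consumed regardless of rate (illustrative)
PER_TONNE = 0.15  # t of steam per tonne of product (illustrative)

def steam_target(production_rate_tph: float) -> float:
    """One equation per utility flow branch: that is the whole model."""
    return BASE_LOAD + PER_TONNE * production_rate_tph

actual = 9.4                 # measured steam flow, t/h
target = steam_target(45.0)  # at the current production rate, t/h
if actual > target * 1.05:   # 5% deadband to avoid nuisance alerts
    print(f"overconsumption: {actual:.1f} t/h vs target {target:.1f} t/h")
```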
In a first principles model the correlations between variables are hardcoded in the equations. Machine learning like regression and principal component analysis (PCA) instead tries to identify such correlations in historical datasets; they are not hardcoded.
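The contrast fits in a few lines: where the EMIS sketch above hardcodes its coefficients, a regression recovers them from history, and is only trustworthy inside the operating range it has seen. The data here is fabricated for illustration:

```python
import numpy as np

# Fabricated history: production rate vs measured steam consumption
rate = np.array([30.0, 35.0, 40.0, 45.0, 50.0])   # t/h of product
steam = np.array([6.6, 7.3, 8.1, 8.7, 9.5])        # t/h of steam

# Least-squares fit recovers base load and slope from the data itself;
# it can only be trusted inside the 30-50 t/h range it has seen.
A = np.column_stack([np.ones_like(rate), rate])
(base, slope), *_ = np.linalg.lstsq(A, steam, rcond=None)
print(f"fitted: target = {base:.2f} + {slope:.3f} * rate")
```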
Building and maintaining a ‘model’ is costly and time consuming – only do it when you really need to
The other recommendation is to use readymade, pre-engineered 1P models so you don’t have to build them yourself – these are available in performance analytics apps for several common equipment types.
Agent: Label (Classify) States
Predicting if a pump is going to fail soon or continue to run well is in analytics terms referred to as classification. Expert software which does this classification (on behalf of an expert if you will) is referred to as an ‘agent’. Classification is about the agent software ‘putting a label on’ a piece of equipment, process unit, loop, or whatever, showing it is ‘good’ or ‘bad’ or predicting it is going to ‘fail’ etc. Here are a few common use-cases for agents:
Predict equipment failure: The most primitive form of condition monitoring analytics classification is anomaly detection, which uses just two labels: either “normal” or “abnormal”. To be predictive, an agent is set up to monitor early warning ‘patterns’ such as increased vibration or reduced fluid level (a minimal sketch follows this list). Remember, increased vibration is not a failure, but it foretells failure to come if action is not taken. An early warning is predictive.
Diagnose equipment failure: With more sensors on equipment it is possible for an agent to distinguish between different failure modes. Each failure mode detected or predicted is given its own, more descriptive label such as “bearing wear”, “motor winding insulation breakdown”, “strainer plugging”, “mechanical seal failure”, or “cavitation” in the case of a pump. This is called descriptive analytics. But the same agent can also provide a prescriptive label with a recommended action for each failure mode: “lubricate, align, or replace bearing”, “rewind motor”, “clear strainer”, “replace mechanical seal”, or “check for upstream or downstream blockage like closed valves” – also in the case of a pump. This is called prescriptive analytics. Again, the same agent can provide a label with the probable cause for each failure mode: “lack of lubricant, misalignment, or loose base”, “excessive load, contaminants, abrasion, vibration, or voltage surge”, “debris in process”, “dry running, cavitation”, or “upstream or downstream blockage like closed valves” – again in the case of a pump. This is called root cause analytics. Similarly, there can be labels for the consequence of inaction for each failure mode. As you can tell, an analytics agent can be descriptive, prescriptive, and predictive all at the same time – it is a matter of what goes onto the label displayed for each state such as a failure mode (see the rule-based sketch below).
Predict process upsets: With the right sensors on the process it is possible for an analytics agent to label the current or future state of the process. In its most primitive form the labelling of the state could be either “normal” (stable) or “abnormal” (upset). With additional sensors the state of the process could be identified more precisely and given labels which are more descriptive and prescriptive, as well as with probable cause and consequences of inaction.
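In code, the two-label agent can be as simple as the following sketch. The sensor names and alarm limits are illustrative assumptions, not from any standard:

```python
def pump_anomaly_agent(vibration_mm_s: float, lube_level_pct: float) -> str:
    """Simplest classification agent: put a 'normal'/'abnormal' label
    on the pump from early-warning patterns, before failure occurs."""
    if vibration_mm_s > 7.1:    # illustrative alarm limit
        return "abnormal: increased vibration - failure likely if no action"
    if lube_level_pct < 20.0:   # illustrative low-level limit
        return "abnormal: low lubricant level - failure likely if no action"
    return "normal"

print(pump_anomaly_agent(vibration_mm_s=8.3, lube_level_pct=65.0))
```

Note there is no model of the pump anywhere in this agent: it only classifies the state and puts a label on it.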
The agent can be rule-based, built on known failure modes and effects analysis (FMEA) cause & effect (causation), or use machine learning (ML) statistical data science analysis of historical data to find correlation ‘patterns’ (such as high, low, increasing, decreasing, unstable, or flatlining etc.). Machine learning on historical data is limited to states which the plant has experienced in the past. Machine learning on historical data also requires that the right sensors for all the relevant data have been in place for a long time and recorded in the historical data, and that meticulous maintenance records of historical failures with description and time are in place, which is often not the case. For all these reasons, rule-based artificial intelligence (AI) agents are recommended whenever available. The other recommendation is for the rule-based AI agents to be readymade, pre-engineered, so you don’t have to build them – which is available for several common equipment types.
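To make the rule-based approach concrete, here is a sketch of an FMEA-derived agent producing the descriptive, prescriptive, and root-cause pump labels listed above. The rules, sensor names, and limits are purely illustrative; a real agent takes its rules from the actual FMEA:

```python
# Each rule: (condition over sensor data) -> (failure mode, action, probable cause)
# Limits are illustrative placeholders; a real agent derives rules from the FMEA.
RULES = [
    (lambda d: d["bearing_vibration"] > 7.1 and d["bearing_temp"] > 85.0,
     ("bearing wear", "lubricate, align, or replace bearing",
      "lack of lubricant, misalignment, or loose base")),
    (lambda d: d["suction_pressure_drop"] > 0.5,
     ("strainer plugging", "clear strainer", "debris in process")),
    (lambda d: d["discharge_pressure_fluct"] > 0.3 and d["noise_level"] > 90.0,
     ("cavitation", "check for upstream or downstream blockage like closed valves",
      "upstream or downstream blockage like closed valves")),
]

def diagnose(data: dict) -> dict:
    """Return descriptive, prescriptive, and root-cause labels in one pass."""
    for condition, (mode, action, cause) in RULES:
        if condition(data):
            return {"failure mode": mode, "recommended action": action,
                    "probable cause": cause}
    return {"failure mode": "none detected"}

print(diagnose({"bearing_vibration": 8.0, "bearing_temp": 92.0,
                "suction_pressure_drop": 0.1,
                "discharge_pressure_fluct": 0.05, "noise_level": 70.0}))
```

The same rule table can carry a fourth field for consequence of inaction, making the one agent descriptive, prescriptive, and root-cause all at once.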
Machine learning tries to identify such patterns in historical datasets, but they are not hardcoded.
Keep it Simple
Many use-cases do not need a ‘model’ or ‘digital twin’. Many use-cases in a plant only require an agent, and a ready-made rule-based cause & effect AI agent is very easy and low cost both to deploy and to maintain long-term. But sure, not all analytics is classification. Calculating energy efficiency uses a model, which is a first principles (1P) equation, not an agent. For each use-case, make sure to use the right approach: agent or model.
Well, that’s my personal opinion. If you are interested in digital transformation in the process industries, click “Follow” by my photo to not miss future updates. Click “Like” if you found this useful and to make sure you keep receiving updates in your feed, and “Share” it with others if you think it would be useful to them. Save the link in case you need to refer to it in the future.