If it won't break, don't sense it

Some say there is too much data and we need analytics to cope with it all. Others say there is not enough data for Digital Transformation (DX) / Industrie 4.0 and we need sensors to get the missing data points. Why this contradiction? What is the engineering best practice for determining the number of sensors you need in your Digital Operational Infrastructure (DOI) for your project? Here are my personal thoughts:

I often get the question of how many sensors are needed. I say: if it won’t break, don’t sense it. Well, everything breaks eventually, and if failure of a piece of equipment is a problem, you should monitor it.

Data and Analytics

We must distinguish between process data and equipment data:

  • Process data is measurements on the process fluid: the feedstock, the final product, and everything in between
  • Equipment data is measurements on the processing equipment: the pumps, compressors, heat exchangers, etc.

We must distinguish between live data and historical data:

  • Live data streams dynamically in real-time from the sensors
  • Historical data is stored in a database and retrieved as needed

We must distinguish between process analytics and equipment analytics:

  • Process analytics is predicting a process upset so the operator can act before product quality is affected, or predicting product properties without waiting for the lab sample report; again, being more predictive
  • Equipment analytics is predicting equipment wear and fouling so maintenance can act before equipment failure

We must distinguish between historical Big Data analytics and real-time predictive analytics:

  • Big Data analytics is processing large amounts of historical data to find correlations between cause and effect to build a model of the process
  • Real-time predictive analytics is comparing live data (effect symptoms) to a model to determine the cause early (i.e. predict)

The historian can also pass live data through to predictive analytics apps.


Lots of Process Data

Plants have a lot of data, but it is mostly process data. Live process data streams in real-time from thousands of process sensors. Many of these sensors are part of closed-loop process control strategies and interlocks. Others are for “open loop” monitoring, including operator alarms and trending. Because the process is usually fast, process data is usually sampled at a 1-second interval or faster. Most of the time the operators are not looking at these numbers, since the control system does the control and monitoring. However, even “open loop” monitoring must be fast, both to trigger alarms in a timely manner and because in those operating modes where the operator does look at these numbers, the response to actions must be real-time. The process licensor already dictates which sensors are required for process control and alarms, and the update period, to meet the performance guarantee, so the plant has already been built with a good set of process sensors.

The data from the process sensors is historized, stored in the historian for years. Alarms and events are also logged in the historian. Some plants have more than a decade’s worth of historical process data. Certain industries have regulations which dictate how long the data must be stored. The data must be stored at a reasonably fast sampling rate so as not to mask intermittent events or the true sequence of events. Many tags multiplied by a fast sampling rate over a long period of time equals large volumes of data; this is Big Data. Some say too much data. But this is only process data, and it does have value. This process data is considered underutilized because it is not looked at daily; it is mostly used in forensics. While the plant runs fine, most people are not interested in most of these historized data points. However, if there is a trip or a process upset in the plant, lots of people want to know the root cause and the sequence of events that led up to the abnormal condition so they can learn from it. There can be a lot of pressure to come up with this information. So you have to historize the data because at some point everybody wants the information. This is the reason all of this data is historized in the first place.

The most interesting fact is that for Big Data analytics, data scientists essentially use the historical data just once: to generate a model, for instance a process model built with AI/ML analytics tools. After that the historical data is not required anymore. From that point forward, it is the process model which is used for prediction on live data. Therefore, there is no need to invest heavily in a new data lake or a new platform, spending time putting a copy of all the historical data into the data lake, converting data formats, etc.
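To illustrate the point, here is a minimal sketch in Python of this train-once, then predict-on-live-data pattern, using random stand-in data and scikit-learn; the variables are hypothetical, not from a real plant:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical historian extract: rows are hourly samples, columns are
# process variables; random data standing in for real tags.
X_hist = np.random.rand(10_000, 4)
y_hist = X_hist @ np.array([2.0, -1.0, 0.5, 3.0]) + 0.1 * np.random.randn(10_000)

# Step 1: use the historical data once, to fit the model.
model = LinearRegression().fit(X_hist, y_hist)

# Step 2: from here on only live data is needed; the historical
# extract itself is never touched again.
def predict_live(live_sample: np.ndarray) -> float:
    """Apply the trained model to one live reading."""
    return float(model.predict(live_sample.reshape(1, -1))[0])

print(predict_live(np.array([0.5, 0.2, 0.9, 0.1])))
```

Once the model is fitted, each prediction needs only the live sample; the historical data can stay where it already is, in the historian.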

Not Enough Equipment Data

However, process data is not sufficient to predict problems with plant equipment like pumps, compressors, heat exchangers, cooling towers, blowers, air cooled heat exchangers, and valves. Many have tried and failed. The reason is that by the time signs of equipment wear (e.g. bearing vibration) can be seen on process variables (like discharge pressure), the problem has already gone too far (e.g. bearing failure). The fastest and most reliable way to predict equipment failure is with direct equipment sensors (e.g. accelerometer, acoustic, or position sensors).

The process licensor does not dictate sensors for equipment condition monitoring. The OEMs for turbomachinery specify protection sensors and an online protection system, but not online predictive analytics. OEMs for smaller pieces of equipment do not specify any permanently installed sensors or analytics either; only periodic inspection with portable testers. Therefore most equipment has not been fitted with sensors. For instance, pumps are shipped with pressure gauges, not pressure transmitters. However, Nowlan and Heap teach us that only 11% of assets suffer age-related failures for which time-based maintenance works. The other 89% of failures are random, and for these assets condition-based maintenance works better. In most plants this equipment is not instrumented. Daily, weekly, and monthly collection of equipment data is too infrequent to be predictive. Therefore add-on sensors for this equipment must be part of any digital transformation project. They are the foundation which predictive analytics rests upon. Indeed, high-speed analytics is best done close to the sensor, in the transmitter itself.

Since traditionally only turbomachinery is instrumented, generally only turbomachinery data is connected to the historian and historized. The historian only stores the time-series data like overall vibration, not the waveform or spectrum.

Inspection data for other equipment is collected manually and typed into the historian together with other time-series data, either through a tablet in the field or a workstation in the office. This includes IR guns and other portable testers, as well as manual gauge readings. Time-series data is analyzed in the time domain. Vibration waveform data collected manually with a portable tester is transferred to and stored in a special purpose-built waveform database, from where the waveform data is analyzed in the frequency domain using the fast Fourier transform (FFT) to predict imbalance, misalignment, looseness, etc. There is no point mixing waveform and time-series data together in a data lake only to have the algorithm tell them apart and separate them again later when it is time for analysis, so they can be analyzed in the frequency and time domain respectively.
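As a rough illustration of the frequency-domain analysis mentioned above, here is a minimal Python sketch running an FFT on one synthetic vibration waveform; the sample rate, record length, and running speed are assumptions for the example, not transmitter defaults:

```python
import numpy as np

fs = 10_240            # samples per second (assumed setting)
n = 4096               # samples in one waveform record
t = np.arange(n) / fs

# Synthetic waveform standing in for a real record: a 30 Hz component
# (1x running speed for an 1800 rpm machine) plus broadband noise.
run_speed_hz = 30.0
x = 2.0 * np.sin(2 * np.pi * run_speed_hz * t) + 0.3 * np.random.randn(n)

# Windowed FFT to a single-sided amplitude spectrum.
window = np.hanning(n)
spectrum = np.abs(np.fft.rfft(x * window)) * 2 / np.sum(window)
freqs = np.fft.rfftfreq(n, d=1 / fs)

# Elevated amplitude at 1x running speed is the classic imbalance signature.
one_x = spectrum[np.argmin(np.abs(freqs - run_speed_hz))]
print(f"1x running speed amplitude: {one_x:.2f}")
```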

Vibration transmitters provide both time-series data (overall vibration and peak value acceleration) and waveform. Time-series data goes into the historian and direct into predictive analytics. The historized data can then be used for data science analytics like ML to find correlation with other variables to build a model. The waveform goes into the waveform database for detailed analytics.


Design Methodology

There is a methodology for working out the applications (use-cases), the number of locations, and the kinds of additional sensors required. The basic steps are:

  1. Discovery session
  2. Solutions mapping
  3. Point selection
  4. Point detail

Discovery Session

To enable digital transformation of how the plant is run and maintained, from manual and paper-based procedures to new automatic, digital, software-based, and data-driven ways of working, the first step is to find out what problems need to be solved in the first place. The digital transformation journey therefore starts with a discovery session to uncover the various operational challenges faced by each department. This identifies which tasks need to be transformed: maintenance inspection, field operator rounds, emergency mustering, etc. That is, overall organizational challenges are ultimately due to lots of smaller inefficiencies throughout the plant. The purpose of the discovery session is to find these challenges.

For operational dashboards and notifications, the operations team can conduct a workshop to brainstorm the content and the format in which it is presented. Think about the KPIs of each person, like OEE, energy intensity, or one of the benchmarking KPIs defined by Solomon Associates. Next, think about what real-time index each person needs to know in order to do their job minute-by-minute to achieve their KPI by the end of the month. Based on that, the analytics software required to generate these real-time indexes is identified. In turn, the raw data required by the analytics software is identified. Lastly, given the data already available through the historian, what data is missing? That gap is the additional sensors required. The same applies to Augmented Reality (AR): what you need to display drives what analytics and sensors need to be deployed.
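The last step of that chain, finding the missing sensors, is essentially a set difference. A minimal Python sketch, with hypothetical tag names and analytics apps:

```python
# Tags already available in the historian (hypothetical).
historian_tags = {"FT-101", "TT-201", "PT-301", "TT-202"}

# Raw inputs each analytics app needs (also hypothetical).
analytics_inputs = {
    "energy_intensity": {"FT-101", "FT-102", "PT-301"},
    "heat_exchanger_fouling": {"TT-201", "TT-202", "TT-203", "TT-204", "FT-103"},
}

# The gap is the list of additional sensors required.
for app, required in analytics_inputs.items():
    missing = required - historian_tags
    print(f"{app}: missing sensors {sorted(missing) or 'none'}")
```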


Solutions Mapping

The next step is a solutions mapping exercise where the challenges are mapped to readymade solutions. These solutions are a combination of sensors on a shared standards-based wireless infrastructure and apps in a shared app framework. For challenges for which no readymade solution exists, new solutions are created. Readymade solutions are preferred because they are tried and tested, so a Proof of Concept (PoC) is not required; the solution is already proven at many other sites. This helps the plant avoid getting stuck in “pilot purgatory”. That is, the overall digital transformation vision for the enterprise is ultimately broken down into lots of smaller improvements in the work of every individual in the organization.

Point Selection

Third, the locations where each solution will be used are worked out based on selection criteria, because it is usually not possible to monitor every pump, steam trap, manual valve, and gauge in the plant. Most plants have already undergone a “HAZOP” or “criticality ranking” of their assets as part of the design and for alarm rationalization etc. This can be used as the criteria for selecting which equipment to monitor continuously with permanent sensors. Critical equipment may include those with production impact but without standby redundancy, those which are very expensive to repair, or those with very long lead-time spare parts. The most critical equipment may include turbomachinery like combustion gas turbines, steam turbines, and turbocompressors. These most critical assets have machinery protection systems, and may already have a prediction system monitoring vibration. If not, it should be added, as the protection system may not provide prediction. The second tier is ‘essential’ equipment. These may not have continuous monitoring; today they may rely entirely on manual data collection by portable tester. This equipment should also be instrumented for predictive analytics to enable condition-based maintenance. It is important to note that just monitoring vibration, and even temperature, is not enough. For a complete picture of equipment health other sensors are needed as well. The bottom tier of equipment remains as-is for now, perhaps to be instrumented in the future. A shared infrastructure for wireless sensors is an important Digital Operational Infrastructure (DOI) architecture component of any digital transformation project.
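A minimal sketch of such criteria-based point selection; the criticality fields and thresholds are illustrative assumptions, not from any standard:

```python
# Hypothetical asset register entries.
assets = [
    {"tag": "P-101A", "standby": False, "repair_cost": 120_000, "spares_lead_weeks": 30},
    {"tag": "P-205B", "standby": True,  "repair_cost": 15_000,  "spares_lead_weeks": 4},
]

def tier(asset: dict) -> str:
    """Classify an asset for continuous monitoring (illustrative criteria)."""
    if (not asset["standby"] or asset["repair_cost"] > 100_000
            or asset["spares_lead_weeks"] > 26):
        return "critical/essential: permanent sensors + predictive analytics"
    return "bottom tier: leave as-is for now"

for a in assets:
    print(a["tag"], "->", tier(a))
```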

Digital transformation starts by transforming data collection

A rigorous selection process helps ensure each solution will have a good ROI, and this will help get the DX project through the investment gate process.

It’s the value of the data, not the cost of the sensor

Point Detail

Lastly, and only for some solutions, a detail design is done for each point. For instance, the pump solution can have anywhere from 1 to 12 additional sensors depending on whether it has a strainer or not, a mechanical seal or not, the type of service, etc.
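As a hypothetical illustration (the sensor lists are my own example, not a vendor template), the point detail for a pump might be captured like this:

```python
def pump_sensor_list(has_strainer: bool, has_mech_seal: bool) -> list[str]:
    """Return the add-on sensors for one pump (illustrative lists)."""
    sensors = [
        "bearing vibration (drive end)", "bearing vibration (non-drive end)",
        "bearing temperature (drive end)", "bearing temperature (non-drive end)",
        "discharge pressure",
    ]
    if has_strainer:
        sensors += ["strainer differential pressure"]
    if has_mech_seal:
        sensors += ["seal reservoir pressure", "seal reservoir level"]
    return sensors

print(pump_sensor_list(has_strainer=True, has_mech_seal=True))
```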

Add-on Sensor Locations

Everybody wants a dashboard with at-a-glance information on what is happening in their area of responsibility. But nobody wants to collect data and type it into the system. It is boring, and outside it can be too hot or too cold, raining, windy, snowy, or icy. This is true not just for process plants; it is a universal truth. Typing in commercial transactions for financial reporting is also a drag. However, without the raw data there can be no actionable information. Therefore data collection must be automated to remove this most painful manual data entry step. Add-on sensors are thus a key component of every digital transformation project.


Here are a few common examples of DX solutions that require sensors to be deployed to digitally transform the associated task, organized by domain: reliability/maintenance/integrity, process/energy, HS&E, and production. All of these solutions improve productivity as they automate manual tasks like data collection and interpretation. There are other DX solutions for tasks that do not involve data collection, and these therefore do not require sensors to be installed.

Without the raw data there can be no actionable information

I have not repeated every pointer for every solution; each is mentioned just once, spread across the solutions for readability.

Reliability, Maintenance, and Integrity

The goals for digital transformation of reliability, maintenance, and integrity work practices are greater availability, reduced maintenance cost, extended equipment life, greater integrity, shorter turnarounds, and longer intervals between turnarounds. That is, not just tackling daily routine maintenance, but digital transformation of shutdowns/turnarounds/outages as well.

Reliability and inspection are very labor intensive, with lots of manual data collection tasks on a daily, weekly, monthly, and yearly basis. Inspection is ripe for digital transformation. Changing from paper form to an app in a tablet is a small step, but it is still manual so not a quantum leap. Sensors must be added to automate this data collection. Many of the add-on sensors that plants deploy are there to improve reliability and reduce maintenance cost to turn equipment into smart connected equipment.

The plausible equipment failure modes decide the sensor population. For readymade reliability solutions the vendor has already thought about the various failure modes of each type of equipment based on well-documented FMEAs, and has already identified the types of sensors required to pick up the symptoms associated with these failure modes. Go with these recommendations. Most types of equipment, like pumps, compressors, heat exchangers, cooling towers, blowers, air cooled heat exchangers, and valves, are already well understood, so there is no need to spend time and effort on Big Data analytics of historical data using AI/ML to find correlations for these common types of equipment. Go with readymade apps. These are easy for the existing reliability and maintenance engineers to use. Whether the condition monitoring is done on premises by in-house experts in the plant, from a corporate fleet monitoring center, or through an Industrial Internet of Things (IIoT) based connected service business model, the sensors required are the same.

With monthly inspection and testing, pumps can fail before symptoms are detected. With continuous monitoring, developing problems are instead found early and resolved before failure. In the point selection process, plants choose to monitor pumps identified as critical or essential. This may be all those above a certain horsepower/kilowatt rating, for instance 100 hp/70 kW. Those are the ones which are usually critical to the process or costly to overhaul. There is no “pump health transmitter”; a pump needs many sensors and software. Just monitoring bearing vibration and temperature is not enough. The number of sensors depends on whether there is a strainer and a mechanical seal or not. The mechanical seal flush fluid reservoir may already have pressure and level switches as per the older edition of the API standard. Digital transformation is a great opportunity to upgrade all piping plans to the 2014 edition with pressure and level transmitters in place of those. Instead of changing the associated DCS I/O cards and marshalling, a more practical way is to use wireless sensors and bypass the DCS altogether, bringing the data straight into the pump analytics app and historian.

Plants choose to monitor blowers and fans identified as critical or essential. Some may be in a dirty service like ash, prone to buildup causing imbalance. Service is another consideration: instrumenting so-called bad actors. Bearing vibration and temperature sensors are a start. If there is a filter or louver, additional sensors are required to predict more of the failure modes.

Air Cooled Heat Exchangers (ACHX), a.k.a. “fin fans”, have both fins and fans to monitor. There is an element of both reliability and energy efficiency to monitoring ACHX, which roughly doubles the value of monitoring this equipment. There are multiple cells in each ACHX. If fitted with louvers/vanes, these should also be monitored. Apart from bearing vibration and temperature, product and air temperatures at both inlet and outlet are also required. If there is a louver or adjustable fan pitch, this can also be sensed.

The larger compressors in the plant may have a machinery protection system, but no prediction. In this case prediction should also be provided. Additionally, there are many smaller compressors of various types in the plant, for process and instrument air, and for gases. Some of these may be essential but not yet monitored. Some of them may be managed by the compressor vendor as part of a maintenance contract. In this case you may want to insist the compressor vendor instruments and monitors the compressors to attain a higher service level. Multiple sensors predict bearing problems, process instability, filter plugging, vane issues, and lube oil issues.

Cooling towers have both a fan and a pump to monitor. The fan is very large and there is usually a vibration switch for protection. But by the time the switch trips the damage to the gearbox may have already gone too far. Therefore a vibration sensor is required to become predictive. There is also an element of water chemistry to a cooling tower; to prevent scaling and corrosion. Additional sensors are installed to become predictive in this area as well.

Pipes and vessels in corrosive/erosive service should be monitored to prevent loss of containment, optimizing the time of replacement. But there is also an element of production optimization to corrosion monitoring: for instance, optimizing the crude blend with the optimum percentage of low-cost high-TAN opportunity crude in a refinery for greater margin, without stressing the piping system too much. Depending on the application, UT sensors are used for wall thickness, or ER/LPR probes for corrosivity.
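For wall thickness monitoring, the analytics can be as simple as trending the UT readings and extrapolating to the minimum allowable thickness. A minimal sketch with illustrative readings:

```python
import numpy as np

# Hypothetical periodic UT wall thickness readings.
years = np.array([0.0, 0.5, 1.0, 1.5, 2.0])            # time of each reading
thickness_mm = np.array([12.0, 11.8, 11.5, 11.3, 11.0])
t_min_mm = 9.0                                          # minimum allowable thickness

# Linear fit: thickness = slope * time + intercept.
slope, intercept = np.polyfit(years, thickness_mm, 1)
corrosion_rate = -slope                                 # mm per year
remaining_years = (thickness_mm[-1] - t_min_mm) / corrosion_rate
print(f"corrosion rate {corrosion_rate:.2f} mm/y, "
      f"~{remaining_years:.1f} years to minimum thickness")
```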

There are many other manual inspection points that could be automated. Put sensors at as many of the points on the inspection round forms as possible; the paper forms or the items in the handheld terminal. Prioritize eliminating the more frequent rounds: shift rounds, daily rounds, and rounds in high-risk areas. Additional position, level, discrete contact, flow, pressure, temperature, and other sensors are installed as necessary to eliminate manual inspection.

Energy Efficiency and Emission Reduction

The goal for digital transformation of energy management work practices is lower energy consumption, and reduced emissions / carbon footprint.

Granular energy management is very labor intensive, with lots of manual data collection tasks on a daily, weekly, monthly, and yearly basis. Energy management is ripe for digital transformation. Sensors must be added to automate this data collection. Many of the add-on sensors that plants deploy are there to improve energy efficiency and reduce energy cost and carbon footprint.

A big part of energy management is automating meter readings: to get up-to-the-minute readings with finer granularity, but also to directly uncover the root causes of overconsumption and losses.

Plants like to start with energy management initiatives because the resulting savings are very soon visible on the energy bill so it is easy to demonstrate success.

Plants may already have deployed an Energy Management Information System (EMIS), but the live data input is lacking, preventing effective energy accounting and balancing. Electric power consumption for each piece of equipment is often already measured in the MCC. However, energy measurement tends to be lacking for fluids like water, compressed air, steam, fuel gas, and other gases. To understand where energy overconsumption is occurring, plants are improving granularity from plant-wide, to each area, each unit, and even down to individual pieces of equipment. To do this they measure utility flow on all the branches: first for each area, then the units, and ultimately for high-consumption equipment. This requires many flow sensors, so they focus on high-volume, high-value energy streams.
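The accounting itself is straightforward once the flow sensors are in place: compare the header meter against the sum of the branch meters. A minimal sketch with hypothetical steam flows:

```python
# Hypothetical steam flows in t/h: plant header vs. per-area branches.
plant_header_flow = 102.0
area_flows = {"area_1": 40.5, "area_2": 35.0, "area_3": 20.0}

# Whatever the branches cannot account for points to losses, or to
# where additional branch flow sensors are needed to close the balance.
unaccounted = plant_header_flow - sum(area_flows.values())
print(f"unaccounted steam: {unaccounted:.1f} t/h "
      f"({unaccounted / plant_header_flow:.1%} of header flow)")
```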

With an annual steam trap survey, traps can be blowing steam for a year before detection. With continuous monitoring, losses are found and stopped sooner. Going straight to the root causes of overconsumption, in the point selection process plants choose to monitor steam traps identified as critical by installing an acoustic sensor. This may be all those above a certain line pressure or a certain drain capacity. Those are the ones which cause the biggest losses when they fail blowing steam. Plants may also choose to monitor steam traps on steam lines critical to providing process heat, which would affect the process if they failed trapping condensate.

Plants choose to monitor pressure relief valves (PRV) identified as critical or essential. This may be those with high pressure or high capacity causing large volume losses, whether from release or passing. High-value product like hydrogen is another consideration. Lastly, plants may first choose to monitor relief valves in dual redundant service, where they can remove one for overhaul while the other remains in service. This monitoring is also done using an acoustic sensor.

For heat exchangers, plants tend to prioritize those in fouling service. Another selection criterion is those with multiple bundles, where they want to see specifically which bundle out of many is fouling so they can bypass that one for cleaning while the others remain in operation. Plants may choose to start with those which already have flow measurement. Typically plants only measure the product outlet temperature of the last bundle, so additional inlet and outlet temperature sensors are installed on both the hot and cold sides.
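With inlet and outlet temperatures and flow on one bundle, fouling can be trended through the overall heat transfer coefficient U. A minimal sketch assuming counter-current flow and illustrative readings:

```python
import math

# Hypothetical readings for one bundle.
m_hot, cp_hot = 25.0, 2.2e3        # hot-side flow kg/s, heat capacity J/(kg*K)
t_hot_in, t_hot_out = 180.0, 140.0
t_cold_in, t_cold_out = 60.0, 95.0
area = 250.0                       # heat transfer area, m^2

duty = m_hot * cp_hot * (t_hot_in - t_hot_out)   # heat duty, W
dt1 = t_hot_in - t_cold_out
dt2 = t_hot_out - t_cold_in
lmtd = (dt1 - dt2) / math.log(dt1 / dt2)         # log-mean temperature difference

u = duty / (area * lmtd)                         # a falling U trend means fouling
print(f"duty {duty / 1e6:.2f} MW, LMTD {lmtd:.1f} K, U {u:.0f} W/(m2*K)")
```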

Plants may have identified equipment inefficiency as a cause of energy overconsumption. They may choose to continuously monitor the efficiency of their gas turbines and steam turbines, high-power compressors and pumps, or their high-pressure, high-capacity boilers, as well as high-capacity cooling towers and other equipment. The purpose is to uncover plugging of filters, leaks, fouling of heat transfer surfaces, and other causes of inefficiency. This usually requires a mix of pressure, temperature, and flow sensors.

Health, Safety, and Environment

The goal for digital transformation of health, safety, and environment (HS&E) work practices is fewer incidents, faster response time, and reduced non-compliance. DX is not about the functional safety already done by the SIS. HS&E in the context of DX is about personnel safety and human factors etc. Plants add sensors for better situational awareness. As part of safety-case audits, plants may have identified gaps. Sensors can fill some of these gaps.

If safety showers and eyewash stations are not already connected to the system, they shall be equipped with activation sensors enabling faster response by the rescue team. Depending on the climate at the site they may also need pressure and temperature sensors. All of them.

Manual valves which have a safety risk shall be equipped with a position sensor for operator awareness and possibly even interlocks. Particularly when part of daily operations. This includes dyke valves, product transfer valves, isolation valves, and bypass valves etc. Some manual valves have product quality implication in that if left in the wrong position they cause cross-contamination. These shall also be instrumented; both as a preventive measure and for forensics and quality assurance.

If shutdown valves do not already have position feedback they should be fitted with sensors to provide operators a positive confirmation of successful operation. This should include automatic stroke/travel time measurement to simplify performance proof testing. Note that the feedback is not part of the SIF.
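A minimal sketch of such automatic stroke time measurement, with illustrative timestamps and an assumed maximum allowed travel time:

```python
from datetime import datetime

def stroke_time_s(leave_open: datetime, reach_closed: datetime,
                  max_allowed_s: float = 8.0) -> tuple[float, bool]:
    """Travel time between 'left open limit' and 'reached closed limit'."""
    elapsed = (reach_closed - leave_open).total_seconds()
    return elapsed, elapsed <= max_allowed_s

# Hypothetical limit switch event timestamps from one stroke.
t0 = datetime(2020, 5, 1, 10, 15, 2, 100_000)
t1 = datetime(2020, 5, 1, 10, 15, 8, 700_000)
elapsed, ok = stroke_time_s(t0, t1)
print(f"stroke time {elapsed:.1f} s, within limit: {ok}")
```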

Certain pumps, storage tanks, and other locations may be more prone to hydrocarbon or chemical leaks and spills than others. Plants may deploy leak detection sensors in the ground and in storm water drains around these pieces of equipment.

Plants with tank farms choose to monitor the Pressure/Vacuum Relief Valves (PVRV) and blanketing valves on their large tanks to verify their operation during filling, emptying, or temperature change, to prevent the tanks from imploding or exploding.

Sites will already have identified areas where there is a risk for toxic gas like H2S or CO, or for oxygen depletion. If these areas do not already have sensors to detect this condition, the sensors should be installed.

Storage tanks with hazardous liquids that only have a single level measurement should be fitted with a second independent level sensor to prevent overfill. This is not an automatic overfill protection system, but it is a second independent indication that can help prevent overfill. This may also be a good time to replace mechanical tape-and-float gauges, as well as servo gauges, with modern electronic level sensors.

Production

The goals for digital transformation of production work practices are reduced off-spec product, greater throughput, greater flexibility to handle feedstock variations and produce different product grades, reduced operations cost, and shorter lead-time for new products. Better situational awareness for operators is key to achieving these goals. Sensors must be added to automate data collection to provide this higher level of visibility to operators. Sensors on manual valves, already covered as part of HS&E above, are also an important solution for reducing off-spec product and for faster, more accurate batch switching. Similarly, corrosion monitoring, covered under integrity, enables greater flexibility to handle different feedstocks.

A more uniform temperature profile can enhance chemical reactions resulting in improved quality and yield, reducing off-spec product. Plants therefore identify critical furnaces and kilns etc. where they upgrade from single point temperature measurements to multi-point temperature profile monitoring using multi-input temperature transmitters.

There are several variations of this expression:

What you don’t measure you don’t improve

There is also an element of reliability to this solution, as it can help to detect hot spots that can damage kilns and other equipment.
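A minimal sketch of how a multipoint profile might be checked for uniformity and hot spots; the readings and the statistical threshold are illustrative assumptions:

```python
import numpy as np

# Hypothetical readings from one multi-input temperature transmitter, degC.
profile_c = np.array([612, 618, 640, 704, 633, 615, 610, 608])

# Spread indicates non-uniformity; outliers above the profile flag hot spots.
spread = profile_c.max() - profile_c.min()
hot_spots = np.where(profile_c > profile_c.mean() + 2 * profile_c.std())[0]
print(f"spread {spread} degC, hot spots at points: {hot_spots.tolist()}")
```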

Production operators can be blindsided by external disturbances that upset the process, making it go off-spec or even causing an unplanned shutdown. It could be a change in the weather or other operating conditions, changes in feedstock, or variations in fuel heating value. Operators don’t see the process upset coming, so they don’t know what action to take. Process analytics can help build a process model which is then used to predict such upsets and guide the operator to the correct action. However, data scientists will tell you they need more data to find correlations, and engineers using first-principles (1P) models will tell you they need more measurements. For this reason plants deploy additional sensors for such prediction. They need to think about what other variables, not yet measured, could be leading indicators of a coming upset, and install sensors for those. Indeed, with direct sensors in place the analytics becomes very much simpler; one user even implied that measuring directly is almost cheating.

Measuring directly is cheating!

Getting lab analysis results for a grab sample from the field takes time; it is not real-time. In the meantime the process might be producing off-spec product. An inline sensor for the product property may not be available (e.g. Reid vapor pressure). Process analytics can help build a process model which is then used to predict the product property in real-time. However, again, data scientists will tell you they need more data to find correlations, and engineers using first-principles (1P) models will tell you they need additional data points. For this reason plants deploy additional sensors to enable such inferential measurements. They need to think about what other variables, not yet measured, could have an impact on that product property, and install sensors for those.
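A minimal sketch of such an inferential (soft sensor) model, using a simple regression on stand-in data; the variables are hypothetical and this is not a validated property model:

```python
import numpy as np
from sklearn.linear_model import Ridge

# Stand-in process variables, e.g. column top temperature, pressure, reflux,
# with a synthetic product property as the lab-measured target.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = 55 + 4 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.5, size=500)

soft_sensor = Ridge(alpha=1.0).fit(X, y)

# Real-time inference between lab samples; lab results would then be
# used periodically to re-bias the model.
live = np.array([[0.4, -0.1, 0.7]])
print(f"predicted property: {soft_sensor.predict(live)[0]:.1f}")
```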

Operating companies are de-manning remote sites and offshore platforms, moving personnel onshore instead, to reduce the cost of operation, such as travel to site and the logistics associated with offshore accommodation. Therefore wellheads and wellhead control panels are now being fitted with sensors to enable centralized monitoring from an onshore location: an integrated operations (iOps) fleet management center near where people live. Wellheads and wellhead control panels often have pneumatic and hydraulic controls, so often it is pressure sensors which are installed to monitor what is going on. The wellhead controls themselves are not changed.

There are many other manual field operator round surveillance points that could be automated. Put sensors at as many of the points on the field operator round forms as possible; the paper forms or the items in the handheld terminal. Prioritize eliminating the more frequent rounds: shift rounds, daily rounds, and rounds in high-risk areas. Additional pressure, temperature, level, and flow sensors take the place of mechanical gauges, sight level glasses, variable area flow meters, and even dipsticks as necessary to eliminate manual inspection. This is particularly useful for ensuring control room awareness of the start and stop of pumps, compressors, and other equipment from a local control panel in the field.

Tank farm inspection is very labor intensive because the storage tanks are offsite and very tall, and you have to climb to the top of each tank. Plants are therefore digitally transforming many tasks around the storage tank. Indeed DX of a tank farm includes many of the solutions already discussed above. This includes solutions for product level/inventory, floating roof tilt, dyke valve position, hydrocarbon/chemical leak, overfill, sump pit level, floating roof water pooling, transfer valve line-up, and unlatched hatch etc.

A Solid Foundation of Sensors

Changing from one software package to another, or even just adding software, cannot achieve DX because data collection is still manual and much data is missing. You need add-on sensors to achieve the expected outcome. New data-driven ways of working are built on a solid foundation of additional sensors that automate the tasks which are still manual. Data scientists use machine learning (ML) tools like regression and Principal Component Analysis (PCA) to find correlations between symptoms and the issues they are trying to predict: the effects and their root causes. Often they do not find any strong correlation in the existing historical data that is not already well known from first principles (1P) and FMEA data. They will say ‘we need more data’. That is when you know you need to add more sensors to pick up those symptoms/causes. DX is not so much about software to automatically analyze manually collected data, trying to respond quicker to data only gathered once per month. DX is about collecting data automatically, more frequently, so you can immediately pick up symptoms, even without much analytics.
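A minimal sketch of this kind of exploratory analysis on a historian extract; the data here is random, which is exactly the “no strong correlation” situation described above:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Stand-in historian extract: 2000 samples of 12 existing tags.
X = np.random.default_rng(1).normal(size=(2000, 12))

# Standardize, then look at how much variance a few components capture.
pca = PCA(n_components=5).fit(StandardScaler().fit_transform(X))
print("explained variance ratios:", np.round(pca.explained_variance_ratio_, 2))
# If no components dominate and the loadings show no structure linking
# symptoms to the failure you want to predict, that is the cue described
# above: the symptom is simply not in the data - add sensors.
```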

There are several variations of this expression:

What you measure is what you get

Don’t attempt to implement all digital technologies in one go. Take a phased approach for this journey. Some say “start with the platform”, but most plants already did that long ago when the plant was built: they already have that platform, their historian. For plants that already have a historian, use it; there is no need to invest in a data lake. It is now time to add sensors for equipment data, and apps to analyze the data. If you have installed a wireless sensor network at your site you have already started DX in your plant by automating data collection. You can now populate that network with additional sensors to automate other manual tasks.

The NAMUR Open Architecture (NOA) is the most practical architecture for the Digital Operational Infrastructure (DOI) required to support DX in a plant, offshore facility, or other processing site.

Lots of sensors are required for DX. Adding all these sensors using point-to-point 4-20 mA and on-off signal wiring would not be practical. The lowest cost and most practical way is to use digital networking: wireless, fieldbus, or the future APL (single-pair Ethernet). Moreover, digitally networked sensors provide not only a measurement, but also a status/validity, so you know you can trust the measurement for use in analytics.
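A minimal sketch of using that status before analytics; the status codes are generic placeholders, not a specific fieldbus profile:

```python
from dataclasses import dataclass

@dataclass
class Reading:
    tag: str
    value: float
    status: str      # e.g. "good", "uncertain", "bad"

readings = [
    Reading("PT-301", 4.21, "good"),
    Reading("TT-202", -999.0, "bad"),   # failed sensor flagged by the network
]

# Only readings with a healthy status are trusted for analytics.
trusted = [r for r in readings if r.status == "good"]
print([r.tag for r in trusted])
```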

So my answer is: a DX roadmap and architecture without add-on sensors is incomplete. Well, that’s my personal opinion. If you are interested in how the digital ecosystem is transforming process automation, click “Follow” by my photo to not miss future updates. Click “Like” if you found this useful, and share it with others if you think it would be useful to them.

Indir Jaganjac

Senior Electrical Engineer


Ajoy Kumar, Sahat P Hutagalung, Argonne's multivariate state estimation technique (MSET), coupled with the sequential probability ratio test (SPRT). See the "Benefits and Advantages", there are some details. However, this is patented technology, so it is limited in details. This is also a very important life-saver technology, e.g. Flight AF447 crashed into the Atlantic in 2009 because of errors in the airspeed sensor (Pitot tube). https://www.ne.anl.gov/codes/mset/

Rodrigo Barroso

Architecture, Infrastructure and Operations Lead


Excellent article, thanks for sharing

Indir Jaganjac

Senior Electrical Engineer


Jonas Berge, good article. Argonne National Lab developed important algorithms for real-time sensor quality assurance: first building a nominal MSET model from historical sensor data, then comparing it with live sensor data each second, using the advanced statistical SPRT algorithm. This has been implemented in all US nuclear power plants.

