Industrial IoT - yes, but don’t forget the basics

Introduction

There is no doubt that the application of the Internet of Things to industrial challenges is happening, will accelerate, and will bring profound benefits. Data mining brings new insights into machine performance, enabling enhanced maintenance routines and extended service intervals. AI identifies inefficiencies in workflows and energy utilisation. Machine learning enhances automated visual inspection to the point where defects can be detected with almost 100% accuracy.

 It is very easy to get excited by the pace of technological advancement and the corresponding workplace revolution being ushered in by the application of these technologies, but one fundamental principle from the dawn of computing still holds true.

 The results you get out of a system are only as good as the quality of the data you enter.

In the rush to keep up and maintain competitive advantage, there is understandably a focus on the top-end systems and what insights and efficiencies they can deliver. Normally, the conversation takes place between the client management team and companies with backgrounds in IT, data science or software. In industrial applications, such groupings often lack expertise in the instrumentation and automation systems that will ultimately provide the source data for their project, and the resulting implementations can therefore be sub-optimal, from either a cost or performance perspective.

Measurement Considerations

On too many occasions, the important questions regarding measurements are ignored. For example, the need for a temperature input may be specified for a particular process, but no thought will be given to the underlying detail needed to provide an optimised measurement solution:

 ·      How often does the temperature need to be read?

·      To what extent can missing measurements be tolerated?

·      How precisely does the temperature need to be digitised?

·      How accurate and repeatable does the measurement need to be?

·      What range of temperatures needs to be accommodated?

·      What physical and environmental constraints does the installation environment impose?

 The answers to all of the above questions can make significant differences to the cost of the subsystems needed to capture, convert and communicate the temperature reading into the system. In systems where a large amount of data needs to be captured across a wide geographical area, the cost of acquiring the data can be substantial, and it is therefore vital that consideration is given to these elements early in the discussion phase in order to establish the most cost-effective solutions to factor in to the budget for the overall system.
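Answers to questions like these can be captured as a simple requirements record before any hardware is selected, so that every subsystem is costed against the same specification. The sketch below uses Python with entirely illustrative field names and values; it is not drawn from any standard.

```python
from dataclasses import dataclass

@dataclass
class MeasurementSpec:
    """Illustrative requirements record for one temperature input.
    All field names and values are hypothetical examples."""
    sample_interval_s: float   # how often the value must be read
    max_gap_s: float           # longest tolerable run of missing samples
    resolution_c: float        # smallest temperature step to be resolved (precision)
    accuracy_c: float          # permitted absolute error against a reference
    range_c: tuple             # (min, max) temperatures to accommodate
    ip_rating: str             # environmental constraint on the installation

# Example: a process temperature read once a minute
spec = MeasurementSpec(
    sample_interval_s=60.0,
    max_gap_s=300.0,
    resolution_c=0.1,
    accuracy_c=0.5,
    range_c=(-40.0, 125.0),
    ip_rating="IP67",
)
print(spec)
```

Writing the requirements down in this form makes it obvious when a proposed acquisition device is over- or under-specified for the measurement in question.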



Data Acquisition Devices are not Created Equal

In order to cater for the varying answers to questions like those above, there are a wide variety of data acquisition devices and subsystems available in the marketplace. Typically, the choice between them comes down to the following factors:

 Accuracy-Precision-Repeatability

These are the basic parameters that determine the quality of data for any measurement. To understand the difference between accuracy, precision, and repeatability, consider a clock:

·      Precision: If you want to know the time to the second, the clock has to have a second hand. Without this, even if the clock is 100% accurate to the minute, it is not suitable for the application as it does not provide enough detail in the measurement.

·      Repeatability: If the same measurement is taken 100 times, how much difference is there in the returned results? It may not matter that a clock is two minutes slow if it is always two minutes slow, as we can then compensate for the error. However, if the clock loses or gains minutes over time, then it is hard to use it for comparisons beyond the time period over which the loss/gain is not significant.

·      Accuracy: How close is the displayed time to the actual time? In many cases, accuracy is over-emphasised as the core quality of a measurement when in fact it is often the least important of the three. If looking for trend data, for example, it may not matter that a clock is always two minutes slow. A stopped clock is 100% accurate twice a day, but is of little use for time measurement.
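The clock analogy can be made concrete with a few lines of code. The Python sketch below uses invented readings of a known reference value: the systematic offset of the readings is the accuracy (a stable bias can be calibrated out), their scatter is the repeatability, and the reported step size is the precision.

```python
import statistics

# Hypothetical example: 10 readings of a reference known to be 100.0 degrees C
true_value = 100.0
readings = [99.8, 99.7, 99.8, 99.9, 99.8, 99.7, 99.8, 99.8, 99.9, 99.8]

bias = statistics.mean(readings) - true_value  # accuracy: systematic offset
spread = statistics.pstdev(readings)           # repeatability: scatter between readings
resolution = 0.1                               # precision: smallest step the sensor reports

# A stable -0.2 degree bias can be compensated for; a large spread cannot.
print(f"bias={bias:+.2f} C, spread={spread:.3f} C, resolution={resolution} C")
```

Note that the bias here is larger than the spread: this "clock" is consistently slow, which is far less damaging to trend analysis than a noisy, unrepeatable sensor would be.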

 Interfaces

Sensors come in a wide variety of shapes, sizes and interface standards. Fairly obviously, the interfaces on the data acquisition system must be compatible with those on the sensors that will be used. Similarly, there are many options for the communications hardware and software standards used to hand off the recovered sensor data to upstream systems and again, selection of the solution in any individual application can have significant performance and cost implications.

 Immunity & Isolation

The signals from sensors are susceptible to interference from the environment around them, which can in turn degrade the quality of the measurements they report. The effects of this interference can be minimised by combinations of hardware and software techniques on the data acquisition device, but these in turn make the device more expensive. Similarly, certain environments can create conditions on the measurement inputs that could damage connected devices, or even, in the case of hazardous area deployments, pose a risk to the safety of the plant. Again, careful consideration to the protection offered by the data acquisition device should be a fundamental part of the initial system specification to overcome these issues by design and ensure that the recovered data is of sufficient quality for the application.
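One common software technique for suppressing impulse interference is a running median filter, sketched below in Python. This is a generic illustration of the idea, not a substitute for proper hardware isolation in hazardous or electrically hostile environments.

```python
from collections import deque

def median_filter(stream, window=5):
    """Yield the running median of the last `window` samples.
    A simple software defence against impulse noise on a sensor input."""
    buf = deque(maxlen=window)
    for sample in stream:
        buf.append(sample)
        ordered = sorted(buf)
        yield ordered[len(ordered) // 2]

# A 500.0 spike (e.g. induced interference) never reaches the output
noisy = [20.1, 20.2, 500.0, 20.1, 20.3, 20.2]
print(list(median_filter(noisy, window=3)))
```

The trade-off mentioned above is visible even here: the filter costs buffer memory and a sort per sample on the acquisition device, and it adds a small delay before a genuine step change is reported.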

 Rate of Acquisition

Different processes change at different rates. If we are considering the level of water in a large reservoir, it is unlikely that we need to measure this more regularly than once a day – maybe even once a week. On the other hand, if we are looking at the vibration characteristics of a motor, we will need to take many measurements per second, or even per millisecond. Digitising the resulting data therefore creates different requirements for the processing capacity and data bus bandwidth of the connected data acquisition devices, leading to very significant differences in their specification and cost. This is also true for any connected communication systems and upstream edge devices. Faster acquisition leads to higher data volumes and the whole system needs to be sized accordingly.
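The impact of acquisition rate on data volume is simple arithmetic, as the Python sketch below shows; the sample sizes and rates are assumptions chosen to mirror the reservoir and vibration examples above.

```python
def daily_bytes(sample_rate_hz, bytes_per_sample, channels):
    """Raw acquisition volume per day, before any compression or reduction."""
    return sample_rate_hz * bytes_per_sample * channels * 86_400  # seconds per day

# Reservoir level: one 4-byte reading per day on a single channel
reservoir = daily_bytes(1 / 86_400, 4, 1)

# Vibration: 3-axis accelerometer sampled at 10 kHz, 2 bytes per sample
vibration = daily_bytes(10_000, 2, 3)

print(f"reservoir: {reservoir:.0f} B/day, vibration: {vibration / 1e9:.2f} GB/day")
```

Four bytes a day versus several gigabytes a day: the two applications clearly demand acquisition devices, communication links and upstream storage of entirely different specification and cost.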

 Real-Time Operation

As may be deduced from the paragraph above, although there are formal (often conflicting) definitions of what constitutes a ‘real-time’ system, in practice the term is used ambiguously. Perhaps the best way to think about it is that real-time means the system, under worst-case conditions, is fast enough to maintain control of the processes being monitored within the constraints they operate under. The resulting values will be very different for systems monitoring different processes, and should be considered alongside the time taken for any human interaction within the process decisions. There is no point reporting an event to an operator within one second if the operator is regularly out of contact and therefore unable to respond quickly. In such situations, the balance between responsiveness and implementation cost may be skewed in favour of cost reduction. Similarly, there is little point reporting an event within one second if, within that second, significant damage could be sustained or product lost. In these cases, the balance skews towards speed of response (and towards automating responses so that operator intervention is not needed).

 The first question therefore, whenever the need for a real-time system is discussed, is exactly what the user regards as ‘real-time’ for their processes and business.
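One pragmatic way to frame that question is as a latency budget: sum the worst-case delay of every stage between the physical event and the response, and compare the total with the tolerance of the process. A minimal Python sketch, with invented stage figures:

```python
def meets_deadline(worst_case_stages_ms, deadline_ms):
    """A system is 'real-time enough' for a process only if the sum of its
    worst-case stage latencies fits inside that process's tolerance for delay."""
    return sum(worst_case_stages_ms) <= deadline_ms

# Hypothetical worst-case figures: sensing, acquisition, comms, processing, actuation
stages = [5, 20, 150, 30, 10]  # milliseconds

print(meets_deadline(stages, 1000))  # loose process deadline
print(meets_deadline(stages, 100))   # tight deadline: comms alone blows the budget
```

The same hardware is "real-time" for the first process and hopelessly slow for the second, which is exactly why the user's definition must be established first.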

 Intelligence & Autonomy

Especially in higher precision, higher acquisition rate systems, the overall cost of implementation and operation can often be reduced by deploying data acquisition devices capable of pre-processing the measurements they make locally. For example, if a measured value is not changing, it may not be necessary to transmit it every time a measurement is taken, but only when a significant change occurs. If analysing vibration characteristics, it may make sense to perform the vibration analysis within the data acquisition device and transmit only the resulting derived data. Similarly, if the communications link to the data acquisition device fails, is it acceptable to lose data until the link is restored, or should the acquisition device store these measurements and send them when communication is re-established? If there are local processes where an output signal is derived from the status of a number of local inputs, should the corresponding control loop run within the data acquisition device itself, rather than rely on communication to, and intelligence within, an upstream system such as an edge gateway or Programmable Logic Controller? The optimal distribution of intelligence is one of the fundamental architectural decisions to be made within any IoT system, and the capabilities and characteristics of the data acquisition devices should be included as part of this discussion.
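Report-by-exception combined with store-and-forward can be sketched in a few lines of Python. The class below is purely illustrative (its name, fields and deadband logic are assumptions, not any vendor's API): readings are queued only when they move beyond a deadband from the last reported value, and the queue survives a communications outage until it can be flushed.

```python
class DeadbandReporter:
    """Illustrative report-by-exception with store-and-forward."""

    def __init__(self, deadband):
        self.deadband = deadband
        self.last_sent = None   # last value actually reported upstream
        self.pending = []       # queued reports, held across comms outages

    def sample(self, value):
        """Queue a reading only if it differs significantly from the last report."""
        if self.last_sent is None or abs(value - self.last_sent) > self.deadband:
            self.pending.append(value)
            self.last_sent = value

    def flush(self):
        """Drain the queue once the communications link is available."""
        sent, self.pending = self.pending, []
        return sent

r = DeadbandReporter(deadband=0.5)
for v in [20.0, 20.1, 20.2, 21.0, 21.1, 25.0]:
    r.sample(v)
queued = r.flush()
print(queued)
```

Six samples become three reports: the small fluctuations around a stable value are suppressed, and only the genuine changes consume communications bandwidth.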

Outputs are important too

All of the above considerations also apply to any output signals in the system. Specifying the correct interface characteristics, processing and communication requirements ultimately determines how quickly and accurately the system will respond to changed conditions, and therefore how it is able to influence the real-world processes to which it is connected. Considerations around the selection of communications media and protocols can be especially significant, as the effects of missing, delayed or repeated output commands often cannot be tolerated in the downstream systems, and therefore have to be designed for within the IoT system output devices to avoid incorrect, and potentially unsafe, operation of real-world assets.
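One common guard against repeated or stale output commands is a per-command sequence number, sketched below in Python. The class and field names are illustrative only: the receiver ignores any command whose sequence number is not newer than the last one applied, so a link-layer retransmit or out-of-order delivery cannot re-actuate an output.

```python
class CommandReceiver:
    """Illustrative guard against duplicated or stale output commands."""

    def __init__(self):
        self.last_seq = -1   # highest sequence number applied so far
        self.applied = []    # commands actually actuated, in order

    def receive(self, seq, command):
        """Apply a command only if its sequence number is newer than the last."""
        if seq <= self.last_seq:
            return False     # duplicate or stale (e.g. a retransmit): drop it
        self.last_seq = seq
        self.applied.append(command)
        return True

rx = CommandReceiver()
# The repeated (2, "close") and the late (1, "open") are both discarded
for seq, cmd in [(1, "open"), (2, "close"), (2, "close"), (1, "open"), (3, "open")]:
    rx.receive(seq, cmd)
print(rx.applied)
```

Real protocols layer acknowledgements and timeouts on top of this idea, but the principle is the same: the output device, not the network, is made responsible for rejecting commands that would cause incorrect actuation.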


 Conclusion

All too often in IoT system design, consideration of sensor interfacing and the resulting characteristics of the needed data acquisition devices is seen as a lower priority than the higher-order systems and application software, and yet it has a fundamental impact on installation costs and on the subsequent operating cost and performance of the whole system. Rather than leaving these choices until they are artificially constrained by decisions already made about the higher-order systems, the characteristics and capabilities of the data acquisition subsystems and devices should be an integral part of the initial architectural review and system design. Doing so ensures that the quality of the recovered data is fit for purpose, and has the potential to significantly reduce the lifetime cost of ownership whilst enhancing the performance of the entire system.

Advantech offers a huge range of wired and wireless data acquisition devices and subsystems, covering everything from simple, ‘dumb’ I/O devices through to extremely powerful, high-speed acquisition systems, in applications including factory and utility SCADA, process automation & control, vibration monitoring and AI inference (for example, for optical inspection). Our technical experts can advise on the options available for any application, together with the selection of upstream communications, edge gateways and enterprise servers. Call us!
