The industrial server as high-performance controller, automation server, and visionary production platform
Interview with Hans Beckhoff about enhanced machine intelligence with many-core control technology


With the company’s C6670 industrial server, Beckhoff has brought to market one of the most powerful, if not the most powerful, machine controllers to date, equipped with up to 36 processor cores. This immense level of performance is harnessed most effectively via TwinCAT 3.1 automation software, which exploits the potential of each individual core to the fullest. In this interview, Managing Director Hans Beckhoff explains the benefits of such many-core machine controllers for current applications and their potential for future innovations.

Hans Beckhoff, Dr. Josef Papenfort, Dr. Dirk Janssen and Stefan Hoppe at the presentation of many-core control at SPS IPC Drives 2014 (from left to right)


With up to 36 processor cores, the C6670 industrial server represents a quantum leap in processing performance for machine control. How did this idea come about?

Hans Beckhoff: Since the CPUs in PCs are among the most powerful processors available today, we offer cutting-edge performance with our PC-based control technology. We continuously push the envelope in automation, based on the advances made in modern processor technology as predicted by Moore’s Law. Current “starter processors” feature four or eight cores, but processors with up to 64 cores will be considered standard in only a few years. We believe that machine designers should be able to employ a many-core platform already today, either for highly demanding automation tasks or as a visionary operating platform.

What are the benefits of such a visionary operating platform?

Hans Beckhoff: Development towards more and more processor cores will continue unabated. When you have 10 or 20 times more computing performance at your disposal, you can base your machine control technology on a whole new set of innovative concepts. However, since three to five years is not a lot of time to develop a totally new software architecture, the users of automation technology would be well advised to begin this endeavor today. The C6670 industrial server provides the ideal platform to evaluate what a 24-core or 36-core computer can provide for the respective customer application. Machine manufacturers should take advantage of this opportunity since employing such a powerful controller already delivers tangible application benefits for sophisticated automation tasks today.


With TwinCAT 3, individual machine functions can be efficiently assigned to as many as 36 processor cores.


To what extent are control applications actually suited for such a multi-core architecture?

Hans Beckhoff: Automation technology is the ideal area for multi-core architectures, because modern machines comprise a wide range of function modules and many positioning axes. These all operate simultaneously and can be very effectively mapped via individual control programs that run side-by-side. TwinCAT 3 provides optimal support for this approach with its many-core-focused features, such as many-core PLC and motion support as well as core isolation, making the parallel control architecture easy to implement. In addition, the high-performance EtherCAT communication bus is able to transmit even huge data volumes deterministically and with short cycle times. This enables machine builders to test the parallel control architecture on their machine and use the results to develop next-generation control technologies.

Which application benefits of the C6670 industrial server can you already implement today?

Hans Beckhoff: We already encounter many highly complex automation applications, such as wind farm simulations. A single C6670 can reduce the amount of computer hardware required by taking the place of several conventional PCs. This also enables you to replace the data communication between multiple computers with much faster software-to-software communication. Particularly in machine engineering, we see the tendency to implement many more motion axes, operating them in an ever more dynamic manner and with more complex algorithms. The tremendous performance of the industrial server eliminates many restrictions in machine design. For instance, you can have 200 or more adjustable axes plus integrated measurement functions and condition monitoring features – all of which falls in line with our concept of Scientific Automation. You can even integrate vision systems – most of which still run on separate computers these days – into such a centralized computing platform and make image processing more of a standard feature on the machine.



Does this mean that you can develop more powerful machines and systems for all industries?

Hans Beckhoff: Yes, you can, especially in areas where our eXtreme Fast Control (XFC) technology is employed. Many-core control and XFC increase not only the performance of machines and systems – they also improve the product quality with their highly precise and extremely fast control processes, while minimizing the consumption of energy and raw materials. In summary, they deliver significant economic advantages as well as sustainability benefits.

Is the C6670 industrial server suited only for centralized control concepts or also for distributed designs?

Hans Beckhoff: The industrial server is mainly a central data processing unit that makes computing, storage and communication capacities available locally. With our modular and scalable control technology, however, we support both concepts as a rule. A large assembly line, for example, is ideal for an automation architecture that features small, distributed controllers. For a packaging or tooling machine with many coordinated movements and conditions, on the other hand, a centralized solution would be the better option. However, our server technology has become so powerful that these distinctions are becoming rather fluid. In concepts with a modular, aggregate-oriented design of controller and machine, the intelligence could be implemented either locally in the individual modules or in a central industrial server using appropriate software modules and fast EtherCAT communication technology.

What about applications with typical server functionalities?

Hans Beckhoff: With its enormous processing performance, the C6670 is also capable of providing true server functions in industrial applications such as those promoted via Industry 4.0. For instance, you might transfer complex mathematical functions to the industrial server in order to enable less powerful controllers to handle the condition monitoring, such as for vibration analysis tasks. This would be a so-called “service-based” concept, where complex automation services run on a powerful server in order to remove some of the workload from the actual machine controller. If you have a communication bandwidth that is sufficiently fast and deterministic, such a server could even run in the Cloud. With the C6670, however, you can provide the necessary performance on-site at the machine or line.



The Beckhoff philosophy centers on PC-based control technology. With ever more powerful PCs, it is possible to realize a central machine control system in which all PLC, motion, robotics, and CNC applications run on a single Industrial PC. Beckhoff uses the term “Scientific Automation” to describe the combination of conventional automation tasks with solutions from engineering science that go beyond the limits of conventional control. For example, it is now possible to integrate demanding applications such as image processing, measurement technology, and condition monitoring into standard control software. The goal is to gather data not only on the quality of manufactured products, but also to continuously monitor the current machine and equipment status. This is a prerequisite for fail-safe, cost-effective production.

Computing power fully leveraged with TwinCAT 3

The demand for computing power obviously increases as the complexity of an individual machine or a plant rises. Beckhoff offers a scalable range of CPUs – from ARM or Intel® Atom®-based processors for entry-level controllers, to modern “Core i” series processors, to many-core server systems for high-end control applications. For example, the C6670 industrial server with 12, 24, or 36 physical cores offers abundant computing power for demanding control tasks in large production facilities. This many-core machine control system includes two Intel® Xeon® processors, each of which combines a number of cores in a single package. Each package has its own internal cache and its own main memory. These systems therefore have two separate physical main memories, which significantly increases memory access speed. For users, and therefore also for real-time applications, these two main memories appear as a single large memory. Due to this memory architecture, such systems are sometimes referred to as “Non-Uniform Memory Access” (NUMA) systems.
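Before planning a core assignment, it can be useful to check how many logical processors and NUMA nodes a given machine actually exposes. The following minimal sketch is not TwinCAT code; it only uses the standard Win32 calls GetActiveProcessorCount and GetNumaHighestNodeNumber for illustration.

```cpp
// Illustrative only: query logical processor count and NUMA node count on
// Windows. This is plain Win32, not part of TwinCAT.
#ifndef _WIN32_WINNT
#define _WIN32_WINNT 0x0601  // GetActiveProcessorCount requires Windows 7 or later
#endif
#include <windows.h>
#include <iostream>

int main() {
    DWORD logicalCores = GetActiveProcessorCount(ALL_PROCESSOR_GROUPS);
    ULONG highestNode = 0;
    if (!GetNumaHighestNodeNumber(&highestNode)) {
        std::cerr << "NUMA query failed\n";
        return 1;
    }
    std::cout << "Logical processors: " << logicalCores << '\n'
              << "NUMA nodes:         " << (highestNode + 1) << '\n';
    return 0;
}
```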

The current TwinCAT software version 3.1 can use up to 256 cores in a targeted manner. As a result, users have the complete range of latest-generation processors available for automation applications. The number of cores and the corresponding computing power can be configured as required for running real-time applications. Cores can be set up to run Windows alongside the real-time system, or to be excluded from Windows entirely – so-called isolated cores. On a core shared with Windows, the processor time is divided into real-time and Windows time. The real-time proportion is limited by the “CpuLimit” parameter and can be set between 10 and 90 percent. Switching between real-time and Windows takes place cyclically with a freely selectable base time; task cycle times are derived as multiples of this base time. Isolated cores do not have to switch between real-time and Windows, so the full power of the processor is available for real-time applications. The use of isolated cores is recommended for fast tasks with cycle times of 100 μs or less. When using NUMA systems with many real-time cores, it makes sense to isolate a complete processor so that the cache of the isolated processor is exclusively available for real-time operations.
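As a rough illustration of the split described above, the following sketch (plain C++, not TwinCAT code; splitBasePeriod is a hypothetical helper and the numbers are example values) computes the real-time and Windows shares of one base period from a CpuLimit percentage and shows that an isolated core keeps the entire period for real-time use.

```cpp
// Illustrative arithmetic only: split one base period into real-time and
// Windows time according to a CpuLimit percentage (10..90 %). An isolated
// core has no Windows share at all.
#include <iostream>

struct CoreBudget {
    double realtime_us;
    double windows_us;
};

CoreBudget splitBasePeriod(double base_us, int cpuLimitPercent, bool isolated) {
    if (isolated)                                    // isolated core: 100 % real-time
        return {base_us, 0.0};
    if (cpuLimitPercent < 10) cpuLimitPercent = 10;  // clamp to the valid 10..90 % range
    if (cpuLimitPercent > 90) cpuLimitPercent = 90;
    double rt = base_us * cpuLimitPercent / 100.0;
    return {rt, base_us - rt};
}

int main() {
    // Example values: 200 us base time, CpuLimit = 90 %.
    CoreBudget shared   = splitBasePeriod(200.0, 90, false);
    CoreBudget isolated = splitBasePeriod(200.0, 90, true);
    std::cout << "Shared core:   " << shared.realtime_us << " us real-time, "
              << shared.windows_us << " us Windows\n";
    std::cout << "Isolated core: " << isolated.realtime_us << " us real-time\n";
    return 0;
}
```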



From TwinCAT modules to the cores

In TwinCAT, individual automation tasks are realized as modules – for motion control, PLC or C++ applications, for example. These modules are assigned to individual tasks of the TwinCAT system and executed cyclically at a user-defined sampling rate, i.e. the cycle time. The tasks are then distributed to the available real-time cores, and typically several tasks run on one core. Each task is therefore assigned a priority that defines the execution sequence. The higher the priority, the more precisely a task's cycle timing is maintained; processing of lower-priority tasks can be interrupted by higher-priority tasks. As a general rule: “The shorter the cycle time, the higher the priority.”
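This rule corresponds to a rate-monotonic priority assignment. The following sketch (illustrative only, not the TwinCAT configuration interface) simply sorts a set of tasks by cycle time and numbers their priorities accordingly; the cycle times are those of the example discussed below.

```cpp
// Illustrative only (not TwinCAT code): derive task priorities from cycle
// times following the rule "shorter cycle time -> higher priority"
// (1 = highest). Tasks with equal cycle times keep their configured order,
// hence the stable sort.
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

struct TaskCfg {
    std::string name;
    int cycle_us;      // task cycle time in microseconds
    int priority = 0;  // 1 = highest
};

int main() {
    // Values taken from the motion control example discussed below.
    std::vector<TaskCfg> tasks = {
        {"SAF", 200}, {"C++", 200}, {"PLC", 200}, {"SVB", 400},
    };
    std::stable_sort(tasks.begin(), tasks.end(),
                     [](const TaskCfg& a, const TaskCfg& b) { return a.cycle_us < b.cycle_us; });
    int prio = 1;
    for (auto& t : tasks) t.priority = prio++;
    for (const auto& t : tasks)
        std::cout << t.name << ": cycle " << t.cycle_us
                  << " us, priority " << t.priority << '\n';
    return 0;
}
```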

As an example, Figure 1 shows the execution sequence of the tasks for a typical motion control application with PLC and C++ software components. The real-time proportion is limited to 90 percent of the base period (here 200 μs), so that Windows (OS) is always allocated at least 10 percent of the computing capacity. This ensures that the Windows operating system is guaranteed to be active for a minimum time within each base period. The NC PTP motion control is divided into an SAF task (German: “Satz-Ausführungs-Task”, English: “block execution task”) with a cycle time of 200 μs and a computing time of 30 μs, and an SVB task (German: “Satz-Vorbereitungs-Task”, English: “block preparation task”) with a cycle time of 400 μs and a computing time of 100 μs. The C++ task and the PLC task both run with a cycle time of 200 μs and computing times of 40 μs and 60 μs respectively. To comply with the cycle time, the computing time obviously has to be shorter than the required cycle time, which is the case in this example. The tasks are executed according to the priorities 1, 2, 3 and 4 in the sequence SAF, C++, PLC and SVB, as indicated.

All tasks are activated at time 0 μs, and the TwinCAT real-time scheduler processes them sequentially, based on the specified priorities. The SAF, C++ and PLC tasks have a cycle time of 200 μs and are therefore reactivated at 200 μs. At this point in time, the SVB task has not yet been completely processed. The tasks with shorter cycle times, which were assigned priorities 1 to 3, take precedence over the SVB task, which has priority 4. This ensures that they comply with their cycle time, as in the previous cycle, and are not “held up” by the SVB task. Processing of the SVB task then continues. If a task repeatedly fails to complete before its next activation, a cycle timeout error (exceed) is triggered. However, the task reporting a timeout may not itself be responsible for it. It is therefore always advisable to examine the runtimes of the higher-priority tasks on the same core.
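To make the timing concrete, here is a small, self-contained sketch (again illustrative C++, not TwinCAT code) that replays the single-core schedule of this example with the quoted cycle times, computing times and priorities. It prints which task, or the Windows (OS) slice, owns the core in each segment, so the preemption of the SVB task at the 200 μs boundary becomes visible.

```cpp
// Illustrative replay of the single-core schedule from the Figure 1 example:
// base time 200 us, 90 % real-time share (180 us per period, then 20 us OS),
// tasks SAF, C++, PLC and SVB with the cycle and computing times from the
// text, listed in priority order (1 = highest).
#include <cstdio>
#include <string>
#include <vector>

struct Task {
    const char* name;
    int cycle_us;       // reactivation period
    int compute_us;     // computing time per activation
    int remaining = 0;  // work left in the current activation
};

int main() {
    const int base_us   = 200;  // base time
    const int rt_end_us = 180;  // real-time share per base period (90 %)
    const int tick      = 10;   // simulation resolution in microseconds
    std::vector<Task> tasks = {
        {"SAF", 200, 30}, {"C++", 200, 40}, {"PLC", 200, 60}, {"SVB", 400, 100},
    };
    std::string prev;
    int seg_start = 0;
    for (int t = 0; t <= 400; t += tick) {
        std::string owner = (t == 400) ? "end" : "idle";
        if (t < 400) {
            for (auto& tk : tasks)                    // cyclic reactivation
                if (t % tk.cycle_us == 0) tk.remaining += tk.compute_us;
            if (t % base_us >= rt_end_us) {
                owner = "OS";                         // Windows time slice
            } else {
                for (auto& tk : tasks)                // highest priority first
                    if (tk.remaining > 0) { owner = tk.name; tk.remaining -= tick; break; }
            }
        }
        if (owner != prev) {                          // print compressed timeline segments
            if (!prev.empty())
                std::printf("%3d-%3d us  %s\n", seg_start, t, prev.c_str());
            prev = owner;
            seg_start = t;
        }
    }
    return 0;
}
```

The printed timeline shows the SVB task split across two base periods (130-180 μs and 330-380 μs), exactly as described above, with the remaining 10 percent of each period reserved for Windows.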




In this example, the computing power of the Industrial PC is fully utilized. To extend the application, it can be distributed across two cores; Figure 2 shows a possible distribution in which all tasks except the PLC task are assigned to a separate core. Note that in the single-core configuration the PLC task is executed only after SAF and C++. Since each core calculates the execution sequence locally for the tasks assigned to it, the PLC task now starts in parallel with the SAF task running on the second core. Thanks to the additional computing power, the SVB task is completed within the first cycle, leaving more computing time available for additional tasks on both cores. This can be used either to extend the existing application or for other modules.
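A quick plausibility check, using only the values from the example and the distribution described above (the PLC task alone on one core, SAF, C++ and SVB sharing the other), shows why the SVB task now completes within the first base period. This is purely illustrative arithmetic, not TwinCAT code.

```cpp
// Illustrative load check for the two-core split. Budget per 200 us base
// period at 90 % CpuLimit is 180 us of real-time. The check is conservative:
// it charges the full 100 us of the SVB task to a single base period even
// though its cycle time is 400 us.
#include <iostream>
#include <numeric>
#include <vector>

int main() {
    const double budget_us = 200.0 * 0.9;             // real-time share per base period
    std::vector<double> coreA = {30.0, 40.0, 100.0};  // SAF, C++, SVB
    std::vector<double> coreB = {60.0};               // PLC
    double loadA = std::accumulate(coreA.begin(), coreA.end(), 0.0);
    double loadB = std::accumulate(coreB.begin(), coreB.end(), 0.0);
    std::cout << "Core A: " << loadA << " / " << budget_us << " us -> "
              << (loadA <= budget_us ? "fits" : "overloaded") << '\n';
    std::cout << "Core B: " << loadB << " / " << budget_us << " us -> "
              << (loadB <= budget_us ? "fits" : "overloaded") << '\n';
    return 0;
}
```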

Alternatively, the additional computing power can be used to increase the sampling rate of the existing application by reducing the cycle time of one or more tasks. Such an example is shown in Figure 3. On both cores, the base time is halved to 100 μs; in addition, the second core is isolated, which is indicated by the absence of an “OS” proportion in its execution sequence. On the first core, the length of a single Windows time slice remains unchanged at 20 μs, i.e. the real-time limit is now 80 percent, so 20 percent of the first core's computing power remains available for Windows. The cycle times of the SAF, C++ and PLC tasks are reduced to 100 μs, doubling their sampling rate. Although the SVB task is now interrupted more frequently, the calculations of all tasks are still completed before their next activation. With such an approach, the available bandwidth on the connected fieldbus must be adequately dimensioned, because the number of fieldbus telegrams per unit of time doubles, resulting in a higher overall fieldbus load.
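The effect on the fieldbus can be estimated directly: a cyclic task produces telegrams at a rate of one per cycle, so halving the cycle time doubles the telegram rate. A minimal sketch with the example values:

```cpp
// Illustrative only: cyclic telegram rate as a function of task cycle time.
#include <iostream>

int main() {
    const double cycleBefore_us = 200.0;       // original cycle time
    const double cycleAfter_us  = 100.0;       // halved cycle time
    double rateBefore = 1e6 / cycleBefore_us;  // telegrams per second per task
    double rateAfter  = 1e6 / cycleAfter_us;
    std::cout << "200 us cycle: " << rateBefore << " telegrams/s per task\n";
    std::cout << "100 us cycle: " << rateAfter  << " telegrams/s per task\n";
    std::cout << "Load factor:  " << rateAfter / rateBefore << "x\n";
    return 0;
}
```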

Distribution across more than one core makes sense, for example, when an application contains many computationally intensive instances of a module that can be calculated independently of one another. Condition monitoring is one such application example.

In principle, not every module needs to be assigned its own task, as in the example above. Depending on the computational requirements of the individual modules, several modules can be assigned to one task. The resulting task runtime must not exceed the required cycle time of the modules; otherwise, further modules have to be assigned to an additional task and executed on a separate core. Naturally, the behavior depends greatly on the respective application, so it is always advisable to commission the system step by step. If modules with different cycle times must be processed, they should always be assigned to separate, suitably configured tasks.
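When several modules share one task, the decisive check is that the sum of their computing times stays below the task cycle time. The sketch below is illustrative only; the module names and runtimes are made-up values.

```cpp
// Illustrative only: can a set of modules share one task?
// Criterion: the summed computing time must not exceed the task cycle time.
#include <iostream>
#include <string>
#include <vector>

struct Module {
    std::string name;
    double compute_us;  // worst-case computing time per cycle
};

bool fitsInTask(const std::vector<Module>& modules, double cycle_us) {
    double total = 0.0;
    for (const auto& m : modules) total += m.compute_us;
    return total <= cycle_us;
}

int main() {
    // Hypothetical condition-monitoring module instances sharing a 1 ms task.
    std::vector<Module> modules = {{"CM axis 1", 250.0}, {"CM axis 2", 250.0},
                                   {"CM axis 3", 250.0}, {"CM axis 4", 250.0}};
    double cycle_us = 1000.0;
    std::cout << (fitsInTask(modules, cycle_us)
                      ? "Modules fit into one task\n"
                      : "Distribute modules to an additional task/core\n");
    return 0;
}
```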

Summary

These days, gains in computing power are achieved increasingly by raising the number of cores per processor rather than by significantly increasing the processor clock speed. TwinCAT 3.1 supports this trend and enables the use of single-core systems, multi-core systems, and indeed many-core or NUMA systems from the server segment. The increased computing power can be used to migrate existing systems comprising several Industrial PCs to a single PC, or to expand an individual Industrial PC and increase its control quality. Using a typical motion control application as an example, this article has described how task cycle times can be reduced by distributing the application across a multi-core system. Another example is Scientific Automation, which can complement existing systems with sophisticated measuring or image processing applications, enabling enhanced system monitoring or optimization at runtime. Beckhoff continuously develops this technology further, enabling customers to use cutting-edge Industrial PC systems for automation in order to increase performance and ensure higher availability while retaining the benefits of centralized control systems.

Author: Dr. Henning Zabel, Real-Time Software Development, Beckhoff
