Vol.4: USB4 in Industrial: Why Not?!
USB has come a long way from a simple interface consolidating legacy ports for keyboard, mouse, and the like on PCs to a high-speed interface with data rates peaking at 40Gbps in the latest USB4 release. But who would need 40Gbps USB, especially in an industrial/embedded computing environment?! Let’s see…
At first glance, the obvious benefit is the data rate: it enables multi-GS/s data acquisition devices that can be hot-plugged to a notebook computer. Useful, but not exciting. But there’s more to USB4 than raw speed: the ability to tunnel other protocols over a USB4 link in a transparent way. For instance, USB4 can tunnel DisplayPort, allowing you to connect a monitor over a USB4 interface. Yes, USB3.2 already had a similar feature, called “Alternate Mode” on Type-C connectors. But Alternate Mode did not tunnel anything; it simply used a physical multiplexer to route DisplayPort signals instead of USB SuperSpeed to the connector. USB4 makes that multiplexer redundant. It also means that monitors built for USB3-style Alternate Mode won’t simply work over a USB4 tunnel. In the end, that makes USB4 great for docking stations, but in embedded computing it’s easier to connect your displays via regular interfaces such as HDMI and DP.
The cool thing about USB4 is that it can also tunnel PCI Express. Yes, you can tunnel a PCIe x4 interface over a single USB4 link. If you have regular-size motherboards and industrial PCs in mind, this won’t sound useful either. But think about smaller form factors like 3.5” SBCs and below: there’s no space for bulky PCIe extension connectors on those tiny boards… and even if there were, the extension boards would sit in a fixed mechanical orientation relative to the host, which would make the assembly bulky again. With USB4 cable lengths of up to 2m, you can now detach the PCIe extension from the host, which gives you a wealth of freedom for optimizing the mechanical and thermal system design. The PCIe extension could be anything: off-the-shelf cards like 10GbE NICs or purpose-built cards like FPGA-based data acquisition. They work seamlessly, without touching the device driver software (see the sketch below). Nice.
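To make the “no driver changes” point concrete: on a Linux host, a PCIe device tunneled over USB4 enumerates on the PCI bus just like a slot-mounted card, so existing drivers bind to it unchanged. Here is a minimal sketch, assuming a Linux host with the standard sysfs PCI interface (not every device exposes the link-speed attributes, hence the fallback):

```python
#!/usr/bin/env python3
"""List PCIe devices as the kernel sees them. A device tunneled over
USB4 shows up here like any other PCIe device, which is why the
existing driver stack keeps working unmodified."""
from pathlib import Path

PCI_DEVICES = Path("/sys/bus/pci/devices")  # standard Linux sysfs path

for dev in sorted(PCI_DEVICES.iterdir()):
    vendor = (dev / "vendor").read_text().strip()  # e.g. 0x8086
    device = (dev / "device").read_text().strip()
    # current_link_speed/width exist only for devices on an active link
    try:
        speed = (dev / "current_link_speed").read_text().strip()
        width = (dev / "current_link_width").read_text().strip()
        link = f"{speed}, x{width}"
    except OSError:
        link = "n/a"
    print(f"{dev.name}  vendor={vendor} device={device}  link={link}")
```

Run it before and after hot-plugging the USB4 extension and the new PCIe function should simply appear in the list, exactly as if it sat in a physical slot.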
But you could also use this topology to add dedicated acceleration hardware like GPUs or NPUs. Talking about GPUs: again, don’t think of the highest-end NVIDIA GPUs, but of MXM modules with GPUs. These engines can offer 10x or more AI acceleration compared to what is integrated in the host CPU, at a reasonable power consumption of around 30W-50W. Other accelerators, like the Hailo-8, consume far less power still. At Advantech, we have designed a USB4-to-MXM carrier board and use it to offer scalability in compact Edge AI platforms. Here are two obvious use cases (out of many) as food for thought. Some customers like NVIDIA GPUs and the related software stacks, but need more CPU performance than NVIDIA’s SoCs can deliver: they can pair an Intel Core CPU with an NVIDIA GPU MXM module. Others want to stay all-Intel but need scalable AI performance: they can use an Intel Core CPU as host and add AI horsepower with an Intel Arc GPU without disrupting their software, keeping OpenVINO as their sole AI middleware (see the sketch below).
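Here is a minimal OpenVINO sketch of what “sole AI middleware” means in practice: the same code runs on the host CPU or on an Arc GPU attached via the USB4-to-MXM carrier, with only the device string changing. The model path is hypothetical, and the exact device names depend on what enumerates on your system:

```python
import openvino as ov  # OpenVINO Python API

core = ov.Core()
print("Available devices:", core.available_devices)
# e.g. ['CPU', 'GPU.0', 'GPU.1'] -- a discrete Arc GPU attached via a
# USB4-to-MXM carrier would typically enumerate as a second GPU device.

model = core.read_model("model.xml")  # hypothetical IR model path

# Same model, same code -- only the device string changes:
on_cpu = core.compile_model(model, "CPU")
on_arc = core.compile_model(model, "GPU.1")  # assumed: the discrete Arc GPU

# Or let OpenVINO pick the best available accelerator automatically:
on_auto = core.compile_model(model, "AUTO")
```

Scaling up AI performance then becomes a deployment decision, not a software rewrite.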
Especially when upgrading existing equipment with new features, space and power constraints always come into play. No matter whether it’s a full product refresh or an add-on feature kit: legacy constraints will come back to haunt the system designer. In these applications, USB4 can be extremely helpful for integrating more, and scalable, processing or I/O connectivity in a flexible way, without compromising other system features. USB as a flexible PCIe extension. Innovative. And you can test it today with Advantech hardware.
#Advantech #USB4 #EdgeAI #MXM #OpenVINO