AAEON’s first MXM system combines the CPU, GPU, and peripheral support needed to thrive in the AI era.

An integrated Intel® Arc™ GPU combined with 13th Gen Intel® Core™ processing power and an extensive selection of high-speed interfaces make the new MXM-ACMA-PUC the premier choice for bringing industrial-grade AI inferencing to the edge. Be it AI-assisted automated optical inspection or versatile machine vision solutions for smart city use, the MXM-ACMA-PUC offers elite performance in a power-efficient system designed for longevity.

Efficiency, Even Under Heavy AI Workloads

The MXM-ACMA-PUC’s selection of 13th Gen Intel® Core™ CPUs offers up to 16 cores and 24 threads of processing power for real-time data analysis and parallel computing. For enhanced machine learning and AI workload management, the system’s integrated MXM-ACMA module also provides an Intel® Arc™ A370E embedded GPU, able to run complex defect detection algorithms for precise, high-spec AI-assisted automated optical inspection tasks.
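
As a rough illustration of how such a workload might target the integrated GPU, the sketch below uses Intel's OpenVINO runtime, which supports Arc graphics through its GPU device plugin. The model file name and input shape are hypothetical placeholders, not AAEON-supplied artifacts:

```python
# Minimal sketch: running a defect-detection model on the integrated
# Intel Arc GPU via OpenVINO. "defect_detector.xml" is a hypothetical
# OpenVINO IR model exported from a trained network.
import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("defect_detector.xml")           # hypothetical IR model
compiled = core.compile_model(model, device_name="GPU")  # targets the Arc A370E

# Dummy 224x224 RGB frame standing in for a real camera capture
frame = np.zeros((1, 3, 224, 224), dtype=np.float32)
results = compiled(frame)                                # run one inference
print(results[compiled.output(0)].shape)                 # inspect the output tensor
```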

Bringing Utility to the Edge

Offering a total of four LAN ports, three running at 2.5GbE speed, alongside multiple USB 3.2 Gen 2 ports and an array of expansion slots for Wi-Fi, 5G, and NVMe storage, the MXM-ACMA-PUC is rich in connective potential, able to accommodate a variety of peripheral devices such as cameras and sensors. Paired with the system’s broad 0°C to 50°C temperature range, slimline dimensions, and integrated AI capabilities, the MXM-ACMA-PUC makes for an efficient engine with which to power traffic monitoring, safety, and other smart city solutions.
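
As a minimal sketch of the camera-to-inference path this connectivity enables, the snippet below pulls frames from a network camera with OpenCV; the RTSP address and the inference hook are hypothetical placeholders:

```python
# Minimal sketch: reading frames from an IP camera attached over one of
# the 2.5GbE ports and handing each frame to an inference routine.
import cv2

cap = cv2.VideoCapture("rtsp://192.168.1.10/stream")  # placeholder camera URL

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # run_inference(frame)  # hypothetical hook, e.g. the OpenVINO sketch above

cap.release()
```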

Four Edge Workstations, One Device

Joining the MXM-ACMA-PUC’s high-speed Ethernet interfaces are one native HDMI 2.0 output and four GPU-based DP++ 1.4 display outputs. This configuration allows the MXM-ACMA-PUC to power four edge workstations simultaneously, with its high-performance CPU handling real-time processing, its GPU executing AI inferencing tasks, and high-speed NVMe storage providing rapid data access and minimal latency for large dataset management.
