System on Chip for Autonomous Cars


Design methodologies for the automotive sector are changing. The average number of IP cores integrated into automotive SoCs is growing from roughly 20 about five years ago to more than 100 within the next few years, and the multiple functions that used to be performed by many discrete microcontrollers are now being consolidated onto a single SoC. These trends are making an impact on the semiconductor IP providers who supply the functional ingredients that make up a chip. Advanced driver assistance systems (ADAS) and autonomous cars are driving an explosion of new silicon, and at the center of it all is the system-on-chip (SoC), a powerful semiconductor device that runs complex algorithms and integrates many hardware accelerators.

The need for more sophisticated chips calls for a more advanced methodology to stitch together all the intellectual property, to enhance on-chip communications, to help ensure functional safety, and to give designers greater control over power management and silicon-die area optimization. New application areas for automotive SoCs include ADAS, sensor fusion for autonomous driving, vision processing (front camera, object detection and recognition, surround view, etc.), advanced sensor control and processing (LiDAR, radar, etc.), and machine learning for decision-making functions in all of these domains.


The SoC interconnect plays a vital role in facilitating functional safety because it carries all the data on the chip. Consequently, on-chip communications are a critical building block in meeting the overall functional safety requirements. Selecting interconnect IP that is developed in accordance with the ISO 26262 functional safety standard can save OEMs and Tier-1 suppliers several man-months otherwise spent qualifying an automotive chip that must meet functional safety requirements.


Advances in hardware and networking will enable an entirely new kind of operating system, one that raises the level of abstraction significantly for users and developers. Such systems will enforce extreme location transparency: any code fragment can run anywhere, any data object may live anywhere, and the system manages the locality, replication, and migration of computation and data while remaining self-configuring, self-monitoring, self-tuning, scalable, and secure.


New automotive SoCs utilize multiple specialized processing units on a single chip to perform multiple simultaneous tasks such as camera vision, body control, and information display. The on-chip communications infrastructure is key to ensuring efficient data flow on the chip. And as the types and numbers of processing elements increase, the role of the interconnect and memory architecture connecting these processing elements becomes crucial.

While many of these applications will evolve from previous generations, others will require new chip architectures to address the need for high-performance computing in a small, cost- and power-efficient form factor. Instead of relying on slow, power-hungry off-chip DRAM access, automotive SoCs are increasingly adopting memory techniques that keep data close to where it will be used. Memories closely coupled to a single processing element are often implemented as internal SRAMs and are usually transparent to the running software. This approach works well for smaller systems, but an increase in the number of processing elements requires a corresponding increase in closely coupled memories. Another approach is to use RAM buffers shared by multiple processing elements. In this case, however, access must be managed at the software level, which leads to growing software complexity as the system scales up; that complexity can introduce systematic errors, which in turn produce faults that affect ISO 26262 safety goals. Finally, as systems become larger, it is often useful to implement hardware cache-coherence technology, which allows processing elements to share data without the overhead of direct software management. A newer cache-coherence technology, now widely implemented in automotive SoCs, lets processing elements efficiently share data with each other as peers in the coherent system using a specialized configurable cache called a proxy cache.
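To make the software-managed approach concrete, here is a minimal sketch in Python (standing in for firmware; the class and thread names are illustrative, not from any specific SoC). Two "processing elements" share one buffer, and every access and every ordering constraint must be coordinated explicitly in software. This is exactly the coordination burden that grows with the number of processing elements and that hardware cache coherence removes:

```python
import threading

class SharedBuffer:
    """A software-managed buffer shared by multiple processing elements.
    Every access must be explicitly serialized by software."""
    def __init__(self, size):
        self.data = [0] * size
        self.lock = threading.Lock()

    def write(self, index, value):
        with self.lock:          # software must guard every access
            self.data[index] = value

    def read(self, index):
        with self.lock:
            return self.data[index]

# Two "processing elements" (modeled as threads) sharing one buffer.
buf = SharedBuffer(8)
results = []

def producer():
    for i in range(8):
        buf.write(i, i * i)

def consumer():
    for i in range(8):
        results.append(buf.read(i))

p = threading.Thread(target=producer)
p.start()
p.join()                         # ordering is also managed in software
c = threading.Thread(target=consumer)
c.start()
c.join()
```

With two elements this is manageable; with dozens of accelerators and buffers, the locking and ordering logic multiplies, which is where the systematic-error risk comes from.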

?

Beyond the memory architecture, whether it achieves data locality with buffers or with cache coherence, what also matters is the on-chip interconnect. It optimizes the overall data flow to guarantee quality of service (QoS) and thus ensures that automotive SoCs meet their bandwidth and latency requirements. Bandwidth allocation and latency are critical factors in mission-critical automotive designs, especially when some of the processing may be non-deterministic, as with neural-network and deep-learning workloads. Automotive designs are also providing the impetus for implementing new technologies like artificial intelligence (AI), because it is impossible to manually create "if-then-else" rules that cover complex, real-world scenarios.
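One common way an interconnect enforces bandwidth allocation among initiators is weighted arbitration. The sketch below (in Python, purely illustrative; the initiator names and weights are hypothetical and not taken from any particular interconnect IP) shows the idea: each initiator receives grants in proportion to its configured share, so a high-priority traffic stream cannot be starved by a lower-priority one:

```python
def weighted_round_robin(requests, weights):
    """Grant transactions to initiators in proportion to their weights.

    requests: dict mapping initiator -> number of pending transactions
    weights:  dict mapping initiator -> relative bandwidth share
    Returns the grant order for one arbitration round.
    """
    grants = []
    credits = dict(weights)      # each initiator starts the round with its weight in credits
    pending = dict(requests)
    while any(credits[i] > 0 and pending[i] > 0 for i in pending):
        for initiator in pending:
            if credits[initiator] > 0 and pending[initiator] > 0:
                grants.append(initiator)
                credits[initiator] -= 1
                pending[initiator] -= 1
    return grants

# Illustrative: a camera pipeline configured with twice the share of a display controller.
order = weighted_round_robin({"camera": 4, "display": 4}, {"camera": 2, "display": 1})
```

Real interconnect QoS machinery adds latency regulation, priority escalation, and per-link flow control on top of this, but the proportional-share principle is the same.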


AI algorithms that can handle highly complex tasks are being incorporated into automated driving systems and other life-critical systems that must make decisions in near real time. That is why machine learning, a subset of AI, is the most publicly visible new application in self-driving cars. Machine learning enables complex tasks in ADAS and automated driving through experiential learning that would otherwise be nearly impossible with rule-based programming. But machine learning requires hardware customization for algorithm acceleration as well as for data-flow optimization. Therefore, in machine-learning-based SoC designs, ADAS and autonomous car architects are slicing the algorithms more finely by adding more types of hardware accelerators. These custom hardware accelerators act as heterogeneous processing elements and cater to specialized algorithms that enable functions such as real-time 3D mapping and LiDAR point-cloud processing. These highly specialized IP accelerators can send and receive data within near-real-time latency bounds and deliver the huge bandwidth required to identify and classify objects, meeting stringent and often conflicting QoS demands. Here, chip designers can compete and differentiate by choosing what to accelerate, how to accelerate it, and how to interconnect that functionality with the rest of the SoC design. It is also worth noting that neural networks have become the most common way to implement machine learning: in autonomous driving systems, deep learning runs on specialized hardware accelerators to classify objects such as pedestrians and road signs.
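As a toy illustration of that classification step, here is a minimal fully connected network forward pass in NumPy. The class labels are hypothetical, the weights are random rather than learned, and a real automotive perception network has millions of trained parameters running on dedicated accelerator hardware; this sketch only shows the shape of the computation (matrix multiply, nonlinearity, softmax) that such accelerators are built to speed up:

```python
import numpy as np

LABELS = ["pedestrian", "road sign", "vehicle"]  # hypothetical classes

def relu(x):
    # Elementwise nonlinearity between layers.
    return np.maximum(0.0, x)

def softmax(x):
    # Convert raw scores into a probability distribution over classes.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def classify(features, w1, b1, w2, b2):
    """One hidden layer, then a softmax over the output classes."""
    hidden = relu(features @ w1 + b1)
    return softmax(hidden @ w2 + b2)

# Random stand-in parameters (a trained network would load learned weights).
rng = np.random.default_rng(0)
w1 = rng.normal(size=(4, 8)); b1 = np.zeros(8)
w2 = rng.normal(size=(8, 3)); b2 = np.zeros(3)

features = np.array([0.5, -1.2, 0.3, 0.9])   # stand-in for extracted image features
probs = classify(features, w1, b1, w2, b2)
prediction = LABELS[int(np.argmax(probs))]
```

The accelerator's job is to execute millions of these multiply-accumulate operations per frame within the latency bounds discussed above.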

