Autonomous Driving: Types of Sensors
Introduction
The global autonomous car market is projected to grow from 5.68 billion USD in 2018 to 60 billion USD by 2030, according to the Statista Research Department [1]. Interest in self-driving cars has grown rapidly over the years, and as a result companies such as Cruise, Waymo, Zoox and Voyage were founded to tackle the next generation of transportation. An autonomous vehicle is a complex system of sensors, algorithms, machine learning systems, robust processors and actuators that together sense the environment and operate without human interaction [2]. There are six levels of driving automation, ranging from Level 0 (no automation) to Level 5 (fully autonomous), as seen in the infographic below [2].
Every level of driving automation requires the extensive use of sensors to make decisions about the data the vehicle receives. Sensors act as the eyes and ears of a vehicle and must be kept in perfect condition because of the accuracy they must maintain to keep passengers and the public safe. The following sections cover the advantages and disadvantages of the four most common sensors used for self-driving cars.
GPS Navigation
The Global Positioning System (GPS) is an integral part of navigation that uses real-time geographical data from a constellation of 24 satellites to calculate values such as the vehicle's speed, latitude and longitude. At any given time, the self-driving car must have access to at least four of those 24 satellites in order to perform 3D trilateration, as seen in the figure below [3].
If the car has access to fewer than four satellites, the GPS cannot accurately determine the car's position, as the Sat 1, Sat 2 and Sat 3 example shows: Sat 1 narrows the position to a region of the earth, Sat 2 narrows it to a smaller area segment, Sat 3 narrows it to two possible locations, and finally Sat 4 determines which of the two locations is the real one. This sensor helps the car make navigational decisions based on pre-programmed coordinates, from which the GPS can determine the shortest distance and plan appropriate routes to send to the central processing unit of the car.
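To make the geometry concrete, here is a minimal sketch of 3D trilateration in Python. It assumes idealized, noise-free range measurements and made-up satellite positions; a real GPS receiver must also solve for its own clock bias, which is one more reason a fourth satellite is required in practice.

```python
import numpy as np

def trilaterate(sat_positions, ranges):
    """Estimate a 3D position from satellite positions and measured ranges.

    Linearizes the sphere equations |x - p_i|^2 = r_i^2 by subtracting
    the first equation from the rest, then solves the resulting linear
    system. Assumes noise-free ranges and no receiver clock bias.
    """
    p = np.asarray(sat_positions, dtype=float)   # shape (n, 3)
    r = np.asarray(ranges, dtype=float)          # shape (n,)
    A = 2.0 * (p[1:] - p[0])
    b = (r[0] ** 2 - r[1:] ** 2
         + np.sum(p[1:] ** 2, axis=1) - np.sum(p[0] ** 2))
    # Least squares also handles the overdetermined case (>4 satellites).
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Hypothetical satellite positions (km) and a known receiver position (km),
# used only to generate self-consistent ranges for the demo.
sats = [(15600, 7540, 20140), (18760, 2750, 18610),
        (17610, 14630, 13480), (19170, 610, 18390)]
truth = np.array([-41.77, -16.79, 6370.06])
ranges = [np.linalg.norm(np.array(s) - truth) for s in sats]
print(trilaterate(sats, ranges))  # recovers ~[-41.77, -16.79, 6370.06]
```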
This sensor has many benefits over previous navigational solutions, such as pre-saved maps that are infrequently updated and may be missing information in specific areas such as campuses, airports and factories [4]. The GPS can detect bumps, hills and dips on the road from its altitude measurements, which can help the car avoid steep and potentially dangerous routes. Furthermore, the GPS has many more applications than just autonomous driving, ranging from aviation and military systems to agriculture and marine navigation [5]. Overall, this sensor is an essential component of self-driving cars that gives the car knowledge of its global position.
RADAR Sensors
Radio Detection and Ranging (RADAR) sensors give the car a unique advantage in range, speed measurement and poor weather conditions, where cameras and LiDAR sensors can fail to provide crucial information. RADAR sensors work by emitting a pulse at a radio frequency of 24 GHz or 77 GHz, which echoes off an object back to the sensor [6]. The time difference between releasing the signal and the antenna receiving the echo allows the sensor to determine the object's distance and speed relative to the vehicle.
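As a quick sketch of the underlying arithmetic, the distance follows directly from the round-trip time of flight; the factor of two accounts for the signal travelling to the object and back. The numbers below are illustrative, not taken from any real sensor.

```python
C = 299_792_458.0  # speed of light in m/s

def radar_distance(time_of_flight_s: float) -> float:
    """Distance to a target from the round-trip time of a RADAR pulse."""
    return C * time_of_flight_s / 2.0

# An echo received 400 nanoseconds after transmission
# corresponds to a target roughly 60 m away.
print(radar_distance(400e-9))  # ~59.96 m
```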
There are various settings of the sensor that change how the signal is released and what data can be retrieved. A pulsed signal is the most common setting: it emits a short burst at a specific frequency and records the signal's time of flight, and is mainly used to detect objects [7]. The unmodulated continuous wave emits a continual set frequency and measures the frequency reflected back by the object. The main application of this wave pattern is motion monitoring, because the sensor can measure velocity very accurately but has no way to determine distance [8]. The third common setting exploits the Doppler frequency shift: it emits the same wave as a pulsed signal but measures both the time of flight and the time between the peaks of the returned wave, i.e. its frequency. This allows the sensor to determine both the radial velocity and the distance of the object, making it the most versatile and the most common setting among autonomous vehicles [9].
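A minimal sketch of the Doppler relationship these settings rely on: the radial velocity is proportional to the measured frequency shift of the echo. The 77 GHz carrier and the shift value below are illustrative.

```python
C = 299_792_458.0  # speed of light in m/s

def radial_velocity(doppler_shift_hz: float, carrier_hz: float = 77e9) -> float:
    """Radial velocity of a target from the Doppler shift of the echo.

    The factor of two appears because the shift is applied on both the
    outbound path and the reflected path.
    """
    return doppler_shift_hz * C / (2.0 * carrier_hz)

# A ~15.4 kHz shift on a 77 GHz carrier corresponds to ~30 m/s (108 km/h).
print(radial_velocity(15_400))  # ~29.98 m/s
```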
Autonomous vehicles use radars for various detection tasks, such as penetrating poor weather conditions in which cameras struggle to function, including direct sun and harsh precipitation. Furthermore, RADAR's short-range (24 GHz) setting can be used for blind spot monitoring and parking aid, while the long-range (77 GHz) setting can be used for automatic distance control and brake assistance [6]. Unfortunately, there are a few downsides to RADAR sensors, such as not being able to tell the system what is in front of the sensor, only that something is there. Also, because this sensor relies on radio frequencies, objects that reflect those frequencies poorly, such as pedestrians, may be hard to pick up. Overall, RADARs provide more precise detection of objects close to the car, which is the reasoning behind the placement of these sensors, as seen below in Figure 3 [10].
LiDAR Sensors
Light Detection and Ranging (LiDAR) sensors allow the car to visualize objects from a few centimetres to 60 m away, making them essential for 3D mapping and computer vision. LiDAR sensors work by emitting pulsed laser waves whose reflected light energy is returned to the sensor and analyzed [11]. The hundreds of thousands of light pulses released every second cannot be seen by the human eye and do not harm an individual in any way. To analyze the returned light, the round-trip time and the speed of light are used to calculate the distance from the sensor to the object. A full 360-degree 3D map can then be created from the data received, as seen in Figure 4 below, whereas sensors such as cameras and RADARs rely on a confined field of view [12].
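As a rough sketch of how a 360-degree scan becomes a 3D map, each return can be converted from the sensor's polar measurements (range plus beam angles) into a Cartesian point. The angles and ranges below are made-up sample values, not real sensor output.

```python
import math

def lidar_return_to_point(range_m, azimuth_deg, elevation_deg):
    """Convert one LiDAR return (range + beam angles) to an (x, y, z)
    point in the sensor frame. The range itself comes from time of
    flight: range = speed_of_light * round_trip_time / 2.
    """
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)
    y = range_m * math.cos(el) * math.sin(az)
    z = range_m * math.sin(el)
    return (x, y, z)

# A few made-up returns from a single revolution of the sensor.
returns = [(12.4, 0.0, -1.0), (12.6, 90.0, -1.0), (3.2, 180.0, -5.0)]
point_cloud = [lidar_return_to_point(r, az, el) for r, az, el in returns]
for p in point_cloud:
    print("%.2f %.2f %.2f" % p)
```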
Also, unlike cameras, a LiDAR does not depend on ambient light because it generates its own light. This makes LiDARs crucial in conditions such as direct sunlight, rain, or abrupt light changes, in which cameras are unable to function at their best. LiDARs also measure distance more accurately than cameras, which need extensive computing power to estimate distance from image perception. The main disadvantage of LiDARs is their substantial cost, which can range up to $100,000 due to the rare earth metals needed for the sensor.
Cameras
Cameras are the third main sensor used in autonomous driving and act as the car's main eyes. Cameras can analyze the environment in high detail and colour, which allows for object tracking and decisions based on an object's actions. Since a camera sensor has a fixed field of view, multiple cameras are usually attached to the vehicle to obtain a 360-degree view. The camera data is then analyzed by placing 2D bounding boxes around identifiable objects such as traffic lights, pedestrians, other cars, and static objects, as seen in Figure 5 below [13].
A static object is something fixed and stagnant within the environment, such as a building, tree, sign, pothole, or lane marking. A dynamic object is something that is moving or can change, such as a car, a pedestrian, or the colours of a traffic light. These classifications can then be combined with LiDAR and RADAR data in the sensor fusion process to produce the final model of the environment. From these results, a Kalman filter and an object tracking algorithm can be used to predict dynamic objects' future actions. For example, if the sensors observe a pedestrian walking at a steady velocity in the same direction, a Kalman filter tells the system that this pedestrian will most likely keep heading in that direction. This helps the car make split-second decisions, such as waiting even though the pedestrian may not be in front of the car yet; the prediction helps prevent potential accidents from occurring, as the sketch below illustrates.
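Here is a minimal sketch of the idea, assuming a one-dimensional constant-velocity model with hand-picked noise values; real trackers run in 2D or 3D with carefully tuned covariances.

```python
import numpy as np

# Constant-velocity Kalman filter in one dimension.
# State: [position, velocity]; measurement: position only.
dt = 0.1                                  # time step (s)
F = np.array([[1.0, dt], [0.0, 1.0]])     # state transition
H = np.array([[1.0, 0.0]])                # we only measure position
Q = np.eye(2) * 0.01                      # process noise (hand-picked)
R = np.array([[0.25]])                    # measurement noise (hand-picked)

x = np.array([[0.0], [0.0]])              # initial state estimate
P = np.eye(2)                             # initial uncertainty

# Made-up noisy position measurements of a pedestrian moving at ~1.4 m/s.
rng = np.random.default_rng(0)
for k in range(50):
    z = 1.4 * dt * (k + 1) + rng.normal(0.0, 0.5)
    # Predict: project the state and uncertainty forward in time.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: blend the prediction with the new measurement.
    y = z - (H @ x)[0, 0]                 # innovation
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x = x + K * y
    P = (np.eye(2) - K @ H) @ P

print("estimated velocity: %.2f m/s" % x[1, 0])  # converges toward ~1.4
```

With the velocity estimate in hand, the planner can extrapolate the pedestrian's position a second or two ahead and decide to wait before they ever enter the car's path.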
Cameras may seem like the ultimate sensor for scanning the environment, but they have some disadvantages, such as the significant computing power needed to estimate the distance to objects. Complex neural networks can be used to train the system to better recognize objects and their properties. For example, a camera might view an amber traffic light against an orange sky and confuse the system into reporting that there is no traffic light, since it blends in with the sky. By training a neural network on such tricky scenarios, an autonomous vehicle becomes more likely to choose the appropriate action. Since cameras are vision based, any object or weather condition that obstructs the camera's view can severely degrade the environmental analysis. Heavy rain, fog or snow can prevent the system from making decisions, which is why a multitude of complementary sensors must be used to ensure the autonomy of the vehicle.
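To illustrate why distance estimation from a single camera is possible but fragile, here is a pinhole-camera sketch. It assumes the object's real-world height is known in advance, a hypothetical focal length, and a perfect bounding box, none of which hold reliably in practice.

```python
def pinhole_distance(real_height_m, pixel_height_px, focal_length_px=1000.0):
    """Estimate distance to an object of known height from its apparent
    height in the image, using the pinhole camera model:
        pixel_height = focal_length * real_height / distance
    """
    return focal_length_px * real_height_m / pixel_height_px

# A 1.7 m pedestrian spanning 85 px with an (assumed) 1000 px focal length
# comes out at 20 m; halve the pixel height and the estimate doubles.
print(pinhole_distance(1.7, 85))    # 20.0 m
print(pinhole_distance(1.7, 42.5))  # 40.0 m
```

A few pixels of bounding-box error therefore translate directly into metres of range error, which is why LiDAR's direct distance measurement is so valuable alongside cameras.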
Sensor Failure
Each sensor has its unique advantages and disadvantages in certain situations, but in the case that all sensors are at a disadvantage, there must be contingency plans in place to ensure the safety of the passengers. If the GPS, RADARs, LiDARs and cameras fail to work during travel, a High Definition (HD) map can be used to perform localization and direct the vehicle to safety until all sensors are functional again. HD maps are crucial in navigation due to their centimetre-level precision of road boundaries and curves. Therefore, even if the vehicle loses its vision, it should still be able to navigate to the side of the road or continue until the driver takes back control. This is only one example of an edge case that may occur during autonomous driving, which is why testing is an integral part of what makes self-driving reliable.
Each sensor endures thousands of tests to ensure fully functioning capability without errors, because of how important precise and proper data is for this system. The initial testing phase usually consists of testing the software components in a simulator such as Unreal Engine or the CARLA simulator. This allows a simulated car to be driven through a choice of unique environments to determine how the software will interact with certain objects. Once all tests pass virtually, the sensors can then be tested physically on a test track or in a closed environment that is safe from the public. Depending on the vehicle, each sensor is usually configured and reset regularly to ensure cleared memory for the next expedition.
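As a rough sketch of what simulator-based sensor testing can look like, the snippet below uses CARLA's Python API to spawn a vehicle with an attached camera. It assumes a CARLA server already running on localhost:2000, and API details may differ between CARLA versions.

```python
import carla

# Connect to a CARLA server assumed to be running on localhost:2000.
client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()
blueprints = world.get_blueprint_library()

# Spawn a vehicle at one of the map's predefined spawn points.
vehicle_bp = blueprints.filter("vehicle.*")[0]
spawn_point = world.get_map().get_spawn_points()[0]
vehicle = world.spawn_actor(vehicle_bp, spawn_point)

# Attach an RGB camera to the hood and record frames to disk --
# the kind of data a perception stack would then be tested against.
camera_bp = blueprints.find("sensor.camera.rgb")
camera_tf = carla.Transform(carla.Location(x=1.5, z=2.4))
camera = world.spawn_actor(camera_bp, camera_tf, attach_to=vehicle)
camera.listen(lambda image: image.save_to_disk("out/%06d.png" % image.frame))

vehicle.set_autopilot(True)  # let the traffic manager drive during the test
```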
Conclusion
In conclusion, many components play into making a vehicle autonomous, along with unforeseen circumstances that must be accounted for no matter how absurd they may be. Every sensor plays a key role in helping the central processing unit make decisions such as how fast the vehicle should go, which direction it should turn, and when certain actions should be executed. Autonomous cars will become the future of ground transportation, and it will be interesting to see how the world adapts to these changes.
“Self-driving cars are the natural extension of active safety and obviously something we should do.” ~Elon Musk
References
[1] Statista Research Department, "Projected size of the global autonomous car market from 2019 to 2023," Statista, 21 April 2021. [Online]. Available: https://www.statista.com/statistics/428692/projected-size-of-global-autonomous-vehicle-market-by-vehicle-type/.
[2] Synopsys, "What is an Autonomous Car?," [Online]. Available: https://www.synopsys.com/automotive/what-is-autonomous-car.html. [Accessed 22 April 2021].
[3] GIS Commons, "Chapter 2: Input," [Online]. Available: https://giscommons.org/chapter-2-input/. [Accessed 22 April 2021].
[4] Y. Zein, M. Darwiche and O. Mokhiamar, "GPS tracking system for autonomous vehicles," 26 December 2017. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S1110016818301091.
[5] N. Gerein, "Uses of GPS: What are GPS Systems used for?," Novatel, 9 April 2020. [Online]. Available: https://blog.novatel.com/what-are-gps-systems-used-for/.
[6] S. Khvoynitskaya, "3 types of autonomous vehicle sensors in self-driving cars," iTransition, 11 February 2020. [Online]. Available: https://www.itransition.com/blog/autonomous-vehicle-sensors.
[7] M. Skolnik, "Pulse radar," 18 November 2020. [Online]. Available: https://www.britannica.com/technology/radar/Pulse-radar.
[8] C. Wolff, "Continuous Wave Radar," radartutorial, [Online]. Available: https://www.radartutorial.eu/02.basics/Continuous%20Wave%20Radar.en.html. [Accessed 22 April 2021].
[9] M. Skolnik, "Doppler Frequency Shift," [Online]. Available: https://www.sciencedirect.com/topics/engineering/doppler-frequency-shift. [Accessed 22 April 2021].
[10] M. Ravenstahl, "Optimizing Autonomous Vehicle & ADAS Radar Systems in a Virtual World," Ansys, 25 September 2018. [Online]. Available: https://www.ansys.com/blog/optimizing-autonomous-vehicle-adas-radar-systems-virtual-world.
[11] C. Domke and Q. Potts, "LiDARs for self-driving vehicles: a technological arms race," Automotive World, 3 August 2020. [Online]. Available: https://www.automotiveworld.com/articles/lidars-for-self-driving-vehicles-a-technological-arms-race/.
[12] Velodyne, "LiDAR gets ready for automotive mass-market," 29 April 2019. [Online]. Available: https://www.i-micronews.com/how-lidar-is-getting-ready-for-the-automotive-mass-market-an-interview-with-velodyne/.
[13] Cogito, "2D Bounding Box Annotation Service for Machine Learning," [Online]. Available: https://www.cogitotech.com/bounding-box-annotation/. [Accessed 22 April 2021].