Smart Cities automotive location awareness and its implementations for safety/autonomy/synchronized movement.

In previous posts I wrote about vehicle localization, which has evolved through these steps:

  1. GPS (5-10 meter accuracy): uses several US satellites and depends entirely on good weather, since clouds in the sky interfere with the TOA (Time Of Arrival) of packets from the satellite to the vehicle or pedestrian phone.
  2. GNSS (~1.5 meter accuracy): combines US GPS/Galileo/GLONASS/BeiDou satellites; as in 1, accuracy depends on the weather.
  3. HD-GNSS (accuracy of several centimeters, the 6th digit in WGS84 format): the same method of calculating TOA from satellites, with a correction for the cloud conditions at the specific place; requires an internet connection for accurate weather input.
  4. 5G high band (>20 GHz) Release 16 localization (<10 centimeter accuracy, the 6th and even 7th digit in WGS84 format): calculates TOA from 5G ground antennas deployed in cities, with roughly 0.5-1 km (about half a mile) between antennas; does not depend on weather. (See the sketch after this list for what those WGS84 digits mean in meters.)
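To make the "WGS84 digits" concrete, here is a minimal Python sketch, a back-of-the-envelope spherical-Earth calculation not tied to any particular receiver or standard, of how many meters one unit in the Nth decimal digit of a coordinate corresponds to:

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters (spherical approximation)

def meters_per_decimal_digit(digit: int, latitude_deg: float = 0.0):
    """Approximate ground distance of one unit in the Nth decimal digit
    of a WGS84 latitude/longitude value."""
    deg = 10.0 ** -digit                                    # e.g. digit=6 -> 0.000001 degrees
    lat_m = math.radians(deg) * EARTH_RADIUS_M              # meters per unit of latitude
    lon_m = lat_m * math.cos(math.radians(latitude_deg))    # longitude shrinks toward the poles
    return lat_m, lon_m

for d in (5, 6, 7):
    lat_m, lon_m = meters_per_decimal_digit(d, latitude_deg=52.5)  # Berlin's latitude
    print(f"digit {d}: ~{lat_m:.3f} m latitude, ~{lon_m:.3f} m longitude")
```

At Berlin's latitude the 6th digit works out to roughly 11 cm of latitude, which is why it marks the threshold between GNSS-grade and lane-level localization in the list above.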

See the latest 5G tests: honda-teams-up-with-verizon-to-assess-how-5g-ultra-wideband-enhances-safety-in-autonomous-driving

Location Awareness: once a vehicle or a pedestrian's phone has calculated its own location, it shares that location (longitude/latitude/speed/heading) by broadcasting it to all surroundings via the DSRC/C-V2X protocol. This is a direct transmission of data with no need for a cloud, arriving in 1-2 milliseconds, using the 5.9 GHz frequency, to a radius of ~1 kilometer.
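As a rough illustration of the broadcast itself, here is a minimal Python sketch; the field names and JSON encoding are assumptions chosen for readability, while real deployments use the SAE J2735 Basic Safety Message over the DSRC/C-V2X stack:

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class LocationBroadcast:
    """Illustrative V2X location-awareness payload (field names are
    hypothetical; real systems use the SAE J2735 Basic Safety Message)."""
    vehicle_id: str
    latitude: float       # WGS84, 6-7 decimal digits
    longitude: float      # WGS84, 6-7 decimal digits
    speed_mps: float      # meters per second
    heading_deg: float    # 0-360, clockwise from true north
    timestamp_ms: int     # epoch milliseconds

    def encode(self) -> bytes:
        """Serialize for direct broadcast -- no cloud round-trip involved."""
        return json.dumps(asdict(self)).encode("utf-8")

msg = LocationBroadcast("car-42", 52.520008, 13.404954, 13.9, 87.5,
                        int(time.time() * 1000))
payload = msg.encode()  # handed to the 5.9 GHz radio stack for broadcast
```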

Integrating the location data on top of the HD-map, we get this:

which we can call the next Google Maps/Waze: a Bird's Eye View of the surroundings where the host vehicle moves, with lanes drawn on top of the HD-map (especially important in junctions, where no lane markings exist inside the junction), and multiple objects, vehicles and pedestrians, that share their location. Radar, as a secondary sensor that works in any weather, must be used in a mixed environment of V2V-enabled and non-enabled vehicles.

In this example:

which involves a real HD-map of Berlin, with 2 vehicles sending their location to each other; the two white lines below show the location data. Every vehicle calculates its movement based on path planning inside its lane (local path planning) and path planning ahead (the green points represent the lane segment the vehicle moves in). This is not a video, but real location data being sent to a real dynamic HD-map display; each vehicle calculates its movement based on HD-map lane coordinates.
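A minimal sketch of "movement based on HD-map lane coordinates", assuming the map exposes each lane center as a polyline of points already converted into a local metric frame (the data shapes here are illustrative, not an actual HD-map API):

```python
import math

def advance_along_lane(lane_points, seg_idx, offset_m, step_m):
    """Move a vehicle step_m meters further along a lane-center polyline.
    lane_points: (x, y) lane-center points in a local metric frame,
    converted from the HD-map's WGS84 lane coordinates.
    Returns (new segment index, offset inside it, x, y)."""
    offset = offset_m + step_m
    while seg_idx < len(lane_points) - 1:
        (x0, y0), (x1, y1) = lane_points[seg_idx], lane_points[seg_idx + 1]
        seg_len = math.hypot(x1 - x0, y1 - y0)
        if offset <= seg_len:
            t = offset / seg_len  # interpolate inside the current segment
            return seg_idx, offset, x0 + t * (x1 - x0), y0 + t * (y1 - y0)
        offset -= seg_len
        seg_idx += 1
    x_end, y_end = lane_points[-1]
    return seg_idx, 0.0, x_end, y_end  # clamped at the end of the lane

# at 10 m/s with a 100 ms update tick, the vehicle advances 1 m per call
lane = [(0.0, 0.0), (50.0, 0.0), (50.0, 30.0)]
idx, off, x, y = advance_along_lane(lane, 0, 0.0, 1.0)
```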

Implementations:

  1. Speed limitation: testing the host vehicle (own vehicle) and all surrounding vehicles against the speed-limit data in the HD-map at each vehicle's location. The calculations are done inside the vehicle or in the pedestrian's phone: each one calculates its own speed and compares it to the allowed speed at that specific location on the road. This test is much better than using cameras to "recognize" signs with "AI" (which has huge problems with weather and light, and when speed signs are covered by another vehicle or otherwise obscured), and it can handle several kinds of speed limitations. For example:

in this Bird's Eye View display, a vehicle's color changes when it exceeds the speed limit. This involves MEC calculations inside every V2V-enabled vehicle, testing not just its own speed but also the vehicles around it in a ~1 kilometer radius (the C-V2X limit), and may include testing pedestrian movement speed as well. A minimal sketch of that check follows.
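This sketch assumes a speed_limit_at(lat, lon) lookup into the HD-map, a hypothetical API introduced here purely for illustration:

```python
def check_speed_compliance(broadcasts, speed_limit_at):
    """Flag every V2V-enabled vehicle in C-V2X range that exceeds the
    HD-map speed limit at its own location. Runs entirely inside the
    vehicle (MEC); no camera or sign "recognition" involved.
    broadcasts: iterable of received location messages (dicts)
    speed_limit_at: callable (lat, lon) -> limit in m/s (assumed
    HD-map lookup API for this sketch)."""
    violations = []
    for msg in broadcasts:
        limit = speed_limit_at(msg["latitude"], msg["longitude"])
        if msg["speed_mps"] > limit:
            violations.append((msg["vehicle_id"], msg["speed_mps"], limit))
    return violations  # the display layer recolors these vehicles in the BEV

# usage with a stub map that returns 50 km/h everywhere
over = check_speed_compliance(
    [{"vehicle_id": "car-42", "latitude": 52.52, "longitude": 13.40,
      "speed_mps": 16.0}],
    speed_limit_at=lambda lat, lon: 50 / 3.6)
```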

2. Collision Detection: calculating the local path plan of the own vehicle (within its lane over the next several seconds, at 100 millisecond resolution) and of the other vehicles around it, to test for vehicles getting "very close" in the near future. This covers not just the host vehicle but all vehicles and pedestrians in a 1 kilometer radius (the C-V2X physical limitation). A vehicle will, for example, be able to recognize that a pedestrian stepping into the street 700 meters away, two streets over and out of line of sight, is about to collide with a vehicle moving in the lane next to them. For example:

the host vehicle calculates its projected movement, or local path plan, and a possible collision with a vehicle turning left. Vehicles will need to send their local path plan inside the location data (longitude/latitude/speed/heading), since the dynamic HD-map display cannot know which lane group a vehicle intends to take before a junction (turning right/left or going ahead). The most serious accidents happen between pedestrians and vehicles; by calculating the path plan of a pedestrian (via their smartphone) stepping into the street against the vehicles passing on the road, detecting a fatal accident about to occur within a 1 kilometer radius, two streets behind our own vehicle, becomes easily possible using C-V2X and location awareness.
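A minimal sketch of the core test, using straight-line constant-velocity projection as a simplified stand-in for full lane-based local path planning (the 5-second horizon and 2-meter threshold are illustrative assumptions):

```python
import math

def time_to_closest_approach(p1, v1, p2, v2, horizon_s=5.0):
    """Project two objects (vehicle or pedestrian) forward at constant
    velocity and return (time, distance) at their closest approach
    within the horizon. Positions in a local metric frame, velocities
    in m/s."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    dvx, dvy = v2[0] - v1[0], v2[1] - v1[1]
    dv2 = dvx * dvx + dvy * dvy
    # time that minimizes the squared distance, clamped to [0, horizon]
    t = 0.0 if dv2 == 0 else max(0.0, min(horizon_s, -(dx * dvx + dy * dvy) / dv2))
    dist = math.hypot(dx + dvx * t, dy + dvy * t)
    return t, dist

def predicted_collision(p1, v1, p2, v2, threshold_m=2.0):
    """True if the two projected paths get "very close" within the horizon."""
    _, dist = time_to_closest_approach(p1, v1, p2, v2)
    return dist < threshold_m

# a pedestrian stepping into the road vs. a vehicle approaching at 10 m/s
print(predicted_collision((0, 0), (10, 0), (30, -3), (0, 1)))  # True
```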

3. Autonomous movement: robotic movement based on lane coordinates from the HD-map can become available once we have full location awareness based on location sharing, with radar support to avoid collisions with non-V2V vehicles. Recognizing lanes and objects beyond line of sight, without needing to "recognize" them with visual sensors, is only possible through location sharing on top of an HD-map. The vehicle calculates the next point it needs to move to according to the lane group -> lane -> lane segment it is in, taking the middle of the two points at the end of the current lane segment. When reaching a junction, where the vehicle's position falls inside several lane segments at once, it must choose the right lane group (left/right/ahead) according to the global path plan from A to B. For example:

in this film, before or inside a junction we can see several sets of 4 green points belonging to several lane groups, and the vehicle needs to choose which lane group to move onto. Autonomous movement includes much more that requires synchronization between vehicles or with traffic management: broadcasting the path plan needed for collision detection, broadcasting the path plan needed for moving between lanes, and synchronizing with traffic in the target lane before or during the lane change (swarm movement). All of this requires V2X wireless communication.
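A minimal sketch of the lane-group choice and the next-waypoint calculation described above; the data shapes are assumptions about what an HD-map API might return, introduced only for illustration:

```python
def choose_lane_group(candidate_groups, global_plan_next_road):
    """At a junction the vehicle sits inside several overlapping lane
    segments; pick the lane group (left/right/ahead) whose exit road
    matches the next road in the global A->B path plan."""
    for group in candidate_groups:
        if group["exit_road_id"] == global_plan_next_road:
            return group
    raise ValueError("global plan matches no lane group at this junction")

def next_waypoint(lane_segment):
    """Target point = the middle of the two boundary points at the end
    of the current lane segment, as described above."""
    (lx, ly), (rx, ry) = lane_segment["end_left"], lane_segment["end_right"]
    return ((lx + rx) / 2.0, (ly + ry) / 2.0)

groups = [{"direction": "left",  "exit_road_id": "road-7"},
          {"direction": "ahead", "exit_road_id": "road-3"}]
chosen = choose_lane_group(groups, global_plan_next_road="road-3")
```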

4. Platoon movement: synchronized movement of several vehicles in the same lane, where the distance between vehicles in the platoon is about 1/10 of the normal distance between human-driven vehicles. For example:

in this film each vehicle calculates its own movement based on HD-map lane coordinates. You can see two sets of two vehicles moving "together"; the two vehicles turning right even change lanes when they recognize the lane becoming narrow. Swarm movement involves synchronized movement across several lanes: as a vehicle moves between lanes it may need to break the platoon in the other lane, which means slowing down several vehicles and accelerating several others.
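A minimal sketch of a follower controller for such a platoon, using a constant-time-gap policy; the 0.3-second time gap and the gains are illustrative assumptions, not values from this post:

```python
def platoon_speed_command(own_speed, gap_m, leader_speed,
                          time_gap_s=0.3, k_gap=0.5, k_speed=0.8):
    """Constant-time-gap follower control for platooning: hold a gap of
    time_gap_s * own_speed behind the leader (roughly 1/10 of a typical
    human-driver headway). Returns an acceleration command in m/s^2."""
    desired_gap = max(2.0, time_gap_s * own_speed)   # never closer than 2 m
    accel = (k_gap * (gap_m - desired_gap)           # close or open the gap
             + k_speed * (leader_speed - own_speed)) # match the leader's speed
    return max(-4.0, min(2.0, accel))                # comfort/braking limits

# follower at 24 m/s, 5 m behind a leader doing 25 m/s -> gentle braking
a = platoon_speed_command(own_speed=24.0, gap_m=5.0, leader_speed=25.0)
```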

There is no way real autonomous movement will exist before full location awareness exists that works in any weather, in any condition, and beyond line of sight. What we see in the current Tesla FSD is very problematic: it will not work at night, in places with bad light, in junctions, in bad weather, or in direct sun, because it tries to "recognize" lanes instead of getting that information from the HD-map. See the film below:

That approach will never reach the level seen in the previous demonstrations: a bird's-eye view around the vehicle that includes location sharing via the V2X/C-V2X protocol, once we have the right localization level (6th digit of WGS84) and an HD-map that includes lanes inside junctions, with no need to recognize visual lane markings.

Everything that we did, do, and will do is with the biggest HD-map and location provider company in the world, using their HD-maps.

Only after we have V2X wireless communication capable of sharing location and path plans, combined with good localization and an HD-map, can we talk about understanding the surroundings of a vehicle in any weather and beyond line of sight. Only after we have 100% location awareness (with radars, which work in any weather without "AI", used for redundancy to detect non-V2V-enabled vehicles and pedestrians without a 5G smartphone) can we start talking about real autonomous movement.

Using 10/20/30/40/50... visual sensors of any kind will never work for autonomous movement: cameras and radars share the same problems and rely on very problematic "AI" that must be trained for every possible scenario. We can see this in the last 10 years of effort to reach autonomous movement: we have not reached an "understanding" of the vehicle's surroundings, and we will not until vehicles start sharing their location on top of an HD-map with lanes. What we demonstrate is the next evolution of the Google Maps/Waze/HERE WeGo system, with lanes and multiple objects on a detailed HD-map, without any need for "AI", using real movement algorithms in real time, algorithms that can only run inside the vehicle (MEC, Mobile Edge Computing) or in a pedestrian's phone.

We have developed only 1% of what should be the future of Smart Cities: a future with zero accidents, full autonomy, and far less congestion and far fewer traffic jams.
