Connecting SUMO and Unity
https://youtu.be/9nSCKIz6lQI?si=Bol5bAqZ_ZwpPXyq

Short Introduction:

In traffic safety research, simulation tools are considered more straightforward and cost-effective than direct observations of real-world conditions, especially when dealing with scenarios that may not exist in reality.

The tools include traffic micro-simulation tools (e.g., SUMO) and driver simulators developed in game engines (e.g., Unity). However, both types of tool have limitations. For example, the equations used to simulate human behavior may not always reflect real-world behavior accurately, and driver simulators' lack of a realistic surrounding traffic system affects the interaction between the simulator vehicle and other vehicles.

Co-simulation allows two different simulation tools to exchange data so that each compensates for the other's limitations. However, many traffic safety researchers currently spend significant time, effort, and budget building their own co-simulation tool to integrate, for example, a traffic micro-simulation tool such as SUMO with a driver simulator built in the Unity game engine. This takes time away from the actual goal of improving traffic safety. In this article, we explain the development of an open-source traffic co-simulation tool. Development involved three tasks: 1. integration of SUMO and Unity; 2. development of a 2D and 3D environment (a 3D road environment in Unity was generated from a 2D road environment in SUMO); and 3. development of a 3D model of a simulator vehicle and a VR-based driver simulator. We named our tool SUMO2Unity and believe it can significantly help traffic safety researchers conduct future research aimed at improving traffic safety. Here is the link to the project.

Task 1. Integration of SUMO and Unity (Figure 1A)

The integration of SUMO and Unity involved programming the exchange of vehicle trajectory data and signal timing. As in other traffic simulation software, SUMO adds a 2D vehicle based on input parameters (e.g., vehicle ID and length) and updates the vehicle's trajectory (X, Y, Z) at a small fixed timestep (0.02 sec in our study). In Unity, we received these data in real time, generated a 3D vehicle with the same input parameters (e.g., vehicle ID and width), and assigned the same trajectory every 0.02 sec. Because this procedure only sends data from SUMO to Unity (one-way communication), the trajectories of each vehicle were identical in SUMO and Unity. We applied the same process to all vehicles. SUMO allows the user to take control of one vehicle in the simulation; we identified this vehicle as the "simulator vehicle." We created a simple 3D simulator vehicle inside Unity with the same input parameters (e.g., ID and width) and sent its trajectory every 0.02 sec back to SUMO. This resulted in two-way communication between SUMO and Unity: the simulator vehicle inside Unity could observe the traffic generated by SUMO, and we could move it via a keyboard interface.
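The per-timestep exchange described above can be sketched as a simple serialization round trip. This is a minimal illustration, not the actual SUMO2Unity code: the `VehicleState` message format and field names are assumptions, and a real implementation would send these messages over a socket (or via SUMO's TraCI interface) every 0.02 s.

```python
from dataclasses import dataclass, asdict
import json

STEP = 0.02  # simulation timestep in seconds, as used in the article

@dataclass
class VehicleState:
    vehicle_id: str  # same ID on both sides keeps SUMO and Unity in sync
    x: float
    y: float
    z: float

def encode_step(states):
    """Serialize all vehicle states for one timestep into a JSON message."""
    return json.dumps([asdict(s) for s in states])

def decode_step(message):
    """Rebuild vehicle states from a received JSON message."""
    return [VehicleState(**d) for d in json.loads(message)]

# SUMO -> Unity: positions of the traffic vehicles (one-way stream).
sumo_to_unity = encode_step([VehicleState("veh_0", 12.5, 3.1, 0.0)])
# Unity -> SUMO: position of the simulator vehicle (closing the two-way loop).
unity_to_sumo = encode_step([VehicleState("ego", 40.0, 3.1, 0.0)])

received = decode_step(sumo_to_unity)
```

The same encode/decode pair runs on both sides, which is what makes the trajectories identical in the two tools.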

Exchanging data for signal timing was different. We designed the signal timing in SUMO for each signalized intersection. SUMO generates the signal phase and its duration every 0.02 sec. In Unity, we received these data in real time, generated the required number of 3D traffic signals for each signalized intersection, and assigned the same signal phase and duration. Exchanging signal timing data only requires one-way communication.
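SUMO encodes a traffic-light phase as a string with one character per signal head ('G'/'g' for green, 'y' for yellow, 'r' for red). A minimal sketch of the Unity-side mapping from that phase string to the color each 3D signal should display (the function name is illustrative, not the actual SUMO2Unity API):

```python
# SUMO's phase-state convention: one character per signal head.
COLOR = {"G": "green", "g": "green", "y": "yellow", "r": "red"}

def phase_to_heads(phase_state):
    """Map a SUMO phase string to the color each 3D signal in Unity shows."""
    return [COLOR[c] for c in phase_state]

# A four-head intersection: two approaches green, two red.
heads = phase_to_heads("GGrr")
```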

A. Task 1: Integration of SUMO and Unity

Task 2. Development of a 2D and 3D environment (Figure 1B)

Generating a 3D road environment in Unity from a 2D road environment in SUMO is a difficult task. It does not require two-way communication since the environment is static (fixed), but the road location and geometry (coordinates and lane specifications) must match in both tools, or the SUMO vehicles will not move on the correct path inside Unity and vice versa. The OpenDRIVE format solves this issue. OpenDRIVE is a standard introduced by the Association for Standardization of Automation and Measuring Systems (ASAM) that defines road network specifications, such as the number of lanes and lane width, in a machine-readable format. 3D modelling tools including MathWorks RoadRunner and Blender can import/export OpenDRIVE, allowing researchers to generate a 3D road environment from a 2D road environment. We developed a 5 km by 5 km section of a town containing a 2D environment for SUMO and a 3D environment for Unity. The town was based on a 5 km by 5 km real-world location in Pickering, Ontario, Canada. We selected this area because it contains different types of roads, including two-lane roads (one lane in each direction), four-lane roads (two lanes in each direction), roads with a median barrier, roads with right- and left-turn storage lanes, and roundabouts. The town also includes three signalized intersections and six unsignalized intersections. In addition, the 3D environment includes 3D models of vehicles, traffic signals, streetlights, construction zone cones, different types of trees, and a river. This variety allows researchers to modify the environment and develop their own scenarios.
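To give a feel for why OpenDRIVE keeps the two tools consistent, here is a heavily trimmed, illustrative fragment and a small parser that extracts the driving-lane widths. The fragment is a sketch of the schema's shape, not a complete or validated OpenDRIVE file; the real standard carries far more detail (geometry records, elevation, junctions, etc.).

```python
import xml.etree.ElementTree as ET

# Illustrative (heavily trimmed) OpenDRIVE-style fragment: one road with
# two 3.5 m driving lanes on the right side.
XODR = """
<OpenDRIVE>
  <road name="MainSt" length="250.0" id="1">
    <lanes>
      <laneSection s="0.0">
        <right>
          <lane id="-1" type="driving"><width a="3.5"/></lane>
          <lane id="-2" type="driving"><width a="3.5"/></lane>
        </right>
      </laneSection>
    </lanes>
  </road>
</OpenDRIVE>
"""

def lane_widths(xodr_text):
    """Return {lane id: width in meters} for all driving lanes."""
    root = ET.fromstring(xodr_text)
    widths = {}
    for lane in root.iter("lane"):
        if lane.get("type") == "driving":
            widths[lane.get("id")] = float(lane.find("width").get("a"))
    return widths
```

Because both SUMO (via its import tools) and 3D modelling tools read the same lane specifications from the same file, the coordinates and lane geometry stay aligned between the 2D and 3D environments.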

B. Task 2: Development of a 2D and 3D environment

Task 3. Development of a 3D model of simulator vehicle and development of a VR-based driver simulator (Figure 1C)

This task included creating a realistic interior design and adding vehicle dynamics (movement of vehicle due to forces and inputs including acceleration/deceleration).

We used a 3D model of a vehicle developed by Unity. The model has an immersive and realistic interior and exterior design: the interior includes a seat, dashboard, speed gauge, steering wheel, and mirrors, and the exterior includes wheels and doors. Figure 1C1 shows the interior design of the simulator vehicle.

On its own, the vehicle is only a model and has no functionality. It is as if one purchased a vehicle in which the steering wheel does not rotate, the speedometer needle does not move, and the mirrors do not work. We used Unity scripting to add these functionalities.
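As one concrete example of such a functionality, the speedometer needle is just a mapping from the vehicle's current speed to a rotation angle on the gauge. A minimal sketch (the gauge range and sweep angle are assumptions for illustration; in SUMO2Unity this would be a Unity script rotating the needle transform):

```python
def needle_angle(speed_kmh, max_speed=240.0, sweep_deg=270.0):
    """Map speed to a needle rotation: 0 deg at rest, full sweep at max speed."""
    speed = max(0.0, min(speed_kmh, max_speed))  # clamp to the gauge range
    return sweep_deg * speed / max_speed
```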

As the 3D model of the vehicle does not move by itself, it was also necessary to assign vehicle dynamics equations. Fortunately, Unity provides a sample vehicle dynamics design for simulation and testing.
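The core of such vehicle dynamics is an update of the form F = ma integrated at each timestep. The sketch below is a deliberately simplified longitudinal model (the mass, engine force, and drag coefficient are made-up illustration values, not those of Unity's sample package):

```python
def step_longitudinal(v, throttle, dt=0.02, mass=1500.0,
                      max_force=6000.0, drag=0.4):
    """One Euler step of a simplified longitudinal model.

    v: current speed (m/s); throttle: pedal input in [0, 1].
    Net force = engine force minus quadratic aerodynamic drag.
    """
    force = throttle * max_force - drag * v * v
    a = force / mass          # Newton's second law: a = F / m
    return max(0.0, v + a * dt)  # no reversing in this simplified sketch
```

A full dynamics package adds steering, tire forces, and load transfer, but the integration loop at a fixed 0.02 s step is the same idea.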

The VR-based driver simulator added the ability to control the simulator vehicle. To train and test the algorithms required in the development of a CAV, we also added vision technology (a camera) and the ability to collect video data from the environment.

To develop a driver simulator, we used VR technology. With VR, we can control and reproduce different scenario conditions and expose drivers to dangerous driving conditions without physical risk. VR has usually been associated with high costs and heavy computational requirements, but affordable VR devices are now available. VR headsets track the head's orientation. Our VR-based driver simulator used a VR headset as the display system and a Logitech G25 Racing Wheel and pedals as the driving system. The headset was a Meta Quest 2, which provides stereoscopic vision at a 90 Hz refresh rate, a resolution of 3,664 x 1,920 pixels (1,832 x 1,920 per eye), and a field of view of 90 degrees. Figure 1C2 shows the VR-based simulator that we developed.

To allow other researchers to collect data for the development of CAV technologies (e.g., V2V technology), we needed vision technology in the VR-based simulator. With Unity, we could install a 3D model of a camera at the front of the simulator vehicle. The camera captures images/videos within a 90-degree field of view, similar to that of the cameras installed in CAVs. This allows the simulator to collect real-time images from the environment, including road geometry (lanes and markings), surrounding SUMO vehicles, roadside elements (trees and streetlights), and obstacles (traffic cones). We also provided a module that stores the vehicle state in a .txt file, including simulation time, vehicle coordinates, vehicle speed, rotational degrees (yaw and pitch), and gear. Other researchers can use our simulator vehicle model to develop a wide range of scenarios to train and test CAV technologies. For example, they can modify the 3D environment (e.g., add traffic cones to block one lane of the road due to construction) and collect a wide range of data, including captured images and details of the vehicle state.
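The vehicle-state logging described above amounts to appending one delimited row per timestep to a plain-text file. A minimal sketch (the field names and tab-separated layout are assumptions for illustration, not the exact SUMO2Unity log format):

```python
import io

# One column per state variable listed in the article.
FIELDS = ["time", "x", "y", "speed", "yaw", "pitch", "gear"]

def log_state(out, state):
    """Append one tab-separated row of the vehicle state to a text stream."""
    out.write("\t".join(str(state[f]) for f in FIELDS) + "\n")

# In practice `out` would be an open .txt file; StringIO keeps this runnable.
buf = io.StringIO()
log_state(buf, {"time": 0.02, "x": 12.5, "y": 3.1, "speed": 8.4,
                "yaw": 90.0, "pitch": 0.0, "gear": 2})
```

Logging one row every 0.02 s yields a trajectory file that can be replayed or analyzed offline alongside the captured camera images.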

C. Task 3: Development of a 3D model of simulator vehicle and development of a VR-based driver simulator

Discussion on Potential Applications

A. Road Safety in Autonomous Vehicles

Under some conditions, CAVs do not require their drivers to constantly monitor the driving environment, allowing drivers to engage in secondary activities such as reading, writing emails, and watching videos. The CAV's automated system requests that the driver resume control of the vehicle when the system encounters an unexpected situation it cannot handle (e.g., an obstacle or the absence of lane markings). This is called a take-over request. The implications of CAV drivers' secondary activities and their take-over performance when unexpected situations arise are gaining increased research interest (Zeeb et al. [1]; Sportillo et al. [2]; Happee et al. [3]). For example, Zeeb et al. [1] studied how visual-cognitive load impacts take-over performance. The study examined engagement in three different secondary tasks (writing an email, reading a news text, and watching a video clip). The authors found that the drivers' engagement in secondary tasks affected the time required to regain control of the vehicle, but that the increase in time was not statistically significant.

CAV-related studies usually use no traffic or simple traffic systems comprising only CAVs. Real-world scenarios, both present and future, may be very different. SUMO2Unity can help researchers examine more realistic scenarios and produce more reliable results.

B. Road Safety in Education and Training (Driver Simulator)

Universities and colleges offer a wide range of transportation safety courses that familiarize students with the fundamentals of the geometric design of roads and with traffic micro-simulation modeling, but learning usually takes place using books or images/videos, i.e., flat displays of information that do not provide an immersive experience. To provide such an experience, Veronez et al. [4] examined a new approach in a geometric design course featuring geometry problems (e.g., a bad superelevation design on a horizontal curve). They imported the 3D model into Unity and allowed students to use a VR-based driver simulator to drive along the corridor virtually. The results showed that 73% of the 53 undergraduate students who participated found this approach helpful for identifying and learning about geometric design problems.


References:

[1] K. Zeeb, A. Buchner, & M. Schrauf. Is take-over time all that matters? The impact of visual-cognitive load on driver take-over quality after conditionally automated driving. Accident Analysis & Prevention, 92, 230-239, 2016.

[2] D. Sportillo, A. Paljic, & L. Ojeda. Get ready for automated driving using virtual reality. Accident Analysis & Prevention, 118, 102-113, 2018.

[3] R. Happee, C. Gold, J. Radlmayr, S. Hergeth, & K. Bengler. Take-over performance in evasive manoeuvres. Accident Analysis & Prevention, 106, 211-222, 2017.

[4] M. R. Veronez, L. Gonzaga, F. Bordin, L. Kupssinsku, G. L. Kannenberg, T. Duarte, ... & F. P. Marson. RIDERS: Road inspection & driver simulation. In 2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR) (pp. 715-716), IEEE, March, 2018.
