Extension of a Lidar-Based Reference Measurement System and Installation on a Test Vehicle

Supervision:  Dominik Ernst, Hamza Alkhatib
Author:  Saurabh Borse
Year:  2022
Date:  24-01-22
Duration:  09/2021 - 04/2022
Completed:  yes

Autonomous driving requires a high level of environment understanding, which is best achieved with multiple perception sensors that capture information about the surroundings. The data from these sensors is processed in data processing architectures to extract information about objects in the vicinity of the system. This object information is highly relevant for the decision making and motion planning involved in automated driving. A system that produces erroneous or imprecise object information, or that fails to detect objects in the surroundings, can lead to misjudgments and fatal accidents. Given how critically the sensor data affects driving decisions, it is important to develop systems that validate the information extracted by these perception systems. This task of validating perception systems is taken up by the company IAV GmbH in Gaimersheim under the project named “Reference Measurement System.” At the beginning of the student thesis, the project was in the prototype stage and only partially functional, with a single camera and a lidar sensor for perception.
The student thesis aims to expand and integrate the existing functionality of the single-camera-and-lidar system into a system with two cameras and a lidar. To realize this work, the requirements for modifying the existing system were investigated and defined. The requirement of accurate spatial referencing of the additional camera was met through intrinsic calibration of the camera as well as extrinsic calibration between the camera and the lidar.
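The abstract does not name the calibration tooling. Purely as an illustration, the intrinsic parameters of the added camera could be estimated from checkerboard images, and the camera-lidar extrinsics from 2D-3D point correspondences, for example with OpenCV; the file paths, pattern dimensions, and correspondence data below are assumptions, not the thesis procedure.

```python
# Illustrative sketch only: intrinsic calibration from checkerboard images and
# camera-lidar extrinsic estimation from 2D-3D correspondences (assumed workflow).
import glob
import cv2
import numpy as np

PATTERN = (9, 6)      # inner checkerboard corners (assumed)
SQUARE_SIZE = 0.04    # checkerboard square edge length in metres (assumed)

# Template of 3D object points for one checkerboard view (z = 0 plane).
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_SIZE

obj_points, img_points, img_size = [], [], None
for path in glob.glob("calib_images/*.png"):     # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# Intrinsic calibration: camera matrix K and distortion coefficients.
rms, K, dist, _, _ = cv2.calibrateCamera(obj_points, img_points, img_size, None, None)

# Extrinsic camera-lidar calibration: given N corresponding lidar points (3D, lidar
# frame) and their pixel locations in the image, solve a PnP problem.
lidar_pts = np.load("lidar_points.npy").astype(np.float64)   # (N, 3), hypothetical
pixel_pts = np.load("pixel_points.npy").astype(np.float64)   # (N, 2), hypothetical
ok, rvec, tvec = cv2.solvePnP(lidar_pts, pixel_pts, K, dist)
R, _ = cv2.Rodrigues(rvec)     # rotation lidar -> camera
print("RMS reprojection error:", rms)
print("R =", R, "t =", tvec.ravel())
```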
Necessary adaptations were made to extend the object detection functionality of the image processing architecture to the data from the additional camera. This also included the supplementary task of upgrading the program to run on newer software development kits.
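How the image processing architecture was adapted is not detailed in the abstract. The minimal sketch below only illustrates the general idea of reusing one detector instance for a second camera stream and tagging each result with its camera ID so downstream stages can distinguish the sources; the `Detection` container, camera IDs, and `detector` callable are hypothetical placeholders.

```python
# Minimal sketch (assumed structure): one detector, two camera streams,
# detections tagged with the originating camera ID and a timestamp.
from dataclasses import dataclass
import time
import cv2

@dataclass
class Detection:
    camera_id: str
    stamp: float
    boxes: list            # e.g. [(x, y, w, h, class_id, score), ...]

class TwoCameraDetection:
    def __init__(self, detector, cam_ids=("cam_0", "cam_1")):
        self.detector = detector                              # hypothetical detector object
        self.captures = {cid: cv2.VideoCapture(i) for i, cid in enumerate(cam_ids)}

    def step(self):
        results = []
        for cid, cap in self.captures.items():
            ok, frame = cap.read()
            if not ok:
                continue
            stamp = time.monotonic()
            boxes = self.detector(frame)     # same network reused for both streams
            results.append(Detection(cid, stamp, boxes))
        return results
```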
Temporal synchronization of the added camera with respect to the lidar was achieved by modifying the triggering program on an embedded platform. Fault analysis was conducted on the embedded board to ensure the correct functioning of the trigger signals responsible for time synchronization.
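The triggering program and the embedded platform are not specified in the abstract. As a hedged sketch, a periodic camera trigger pulse could be generated from user space over the legacy Linux sysfs GPIO interface, with simple jitter logging as a basic fault check; the GPIO number, trigger rate, and pulse width below are assumptions.

```python
# Sketch only: a 10 Hz camera trigger pulse driven over the legacy Linux sysfs
# GPIO interface, with jitter logging as a simple fault check. GPIO number,
# rate, and pulse width are assumptions; the real system is board specific.
import time

GPIO = "18"        # hypothetical GPIO line wired to the camera trigger input
PERIOD = 0.1       # 10 Hz trigger rate (assumed)
PULSE = 0.001      # 1 ms high pulse (assumed)

def write(path, value):
    with open(path, "w") as f:
        f.write(value)

# Export the pin and configure it as an output (requires root privileges).
try:
    write("/sys/class/gpio/export", GPIO)
except OSError:
    pass                                   # already exported
write(f"/sys/class/gpio/gpio{GPIO}/direction", "out")
value_path = f"/sys/class/gpio/gpio{GPIO}/value"

next_edge = time.monotonic()
while True:
    # Rising edge of the trigger pulse, then a short high phase.
    t0 = time.monotonic()
    write(value_path, "1")
    time.sleep(PULSE)
    write(value_path, "0")

    # Log the deviation from the nominal trigger instant (jitter check).
    print(f"trigger jitter: {(t0 - next_edge) * 1e3:+.3f} ms")

    next_edge += PERIOD
    time.sleep(max(0.0, next_edge - time.monotonic()))
```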
Consequently, a modified kernel for real-time applications was installed on top of the operating system of the embedded board.
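Which kernel modification was used is not stated; a common choice for low-jitter triggering on embedded Linux boards is a PREEMPT_RT patched kernel. Under that assumption, the sketch below checks whether the running kernel reports real-time preemption and requests a SCHED_FIFO real-time priority for the triggering process (the priority value is an assumption).

```python
# Sketch assuming a PREEMPT_RT patched kernel: verify the kernel advertises
# real-time preemption and raise this process to a SCHED_FIFO priority.
# Requires Linux and root privileges (or CAP_SYS_NICE).
import os
import platform

def kernel_is_preempt_rt():
    # PREEMPT_RT kernels expose /sys/kernel/realtime (containing "1") and/or
    # carry "PREEMPT_RT" / "PREEMPT RT" in the kernel version string.
    try:
        with open("/sys/kernel/realtime") as f:
            if f.read().strip() == "1":
                return True
    except OSError:
        pass
    version = platform.version()
    return "PREEMPT_RT" in version or "PREEMPT RT" in version

if __name__ == "__main__":
    print("PREEMPT_RT kernel detected:", kernel_is_preempt_rt())
    try:
        # Keep the trigger loop from being preempted by ordinary workloads.
        os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(80))
        print("running with SCHED_FIFO priority 80")
    except PermissionError:
        print("insufficient privileges for SCHED_FIFO")
```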
Finally, the data processing architecture responsible for sensor fusion was extended to manage data inputs from both cameras simultaneously.
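The fusion architecture itself is not described in the abstract. As a simplified, hypothetical sketch, managing two camera inputs alongside the lidar can come down to buffering measurements per sensor and forming fusion tuples from the samples that agree in time within a tolerance; the class names, sensor labels, and tolerance value are assumptions.

```python
# Simplified, hypothetical sketch of the fusion intake stage: buffer measurements
# per sensor and emit a (lidar, cam_0, cam_1) tuple whose timestamps agree within
# a tolerance. Names and the 25 ms tolerance are assumptions, not the thesis design.
from collections import deque

TOLERANCE = 0.025   # maximum allowed timestamp offset between sensors, in seconds

class FusionIntake:
    def __init__(self, sensors=("lidar", "cam_0", "cam_1")):
        self.buffers = {name: deque(maxlen=50) for name in sensors}

    def push(self, sensor, stamp, data):
        """Store one measurement and return a fused tuple if all sensors match."""
        self.buffers[sensor].append((stamp, data))
        return self._try_match(stamp)

    def _try_match(self, ref_stamp):
        matched = {}
        for name, buf in self.buffers.items():
            if not buf:
                return None
            # Pick the buffered measurement closest in time to the reference stamp.
            stamp, data = min(buf, key=lambda item: abs(item[0] - ref_stamp))
            if abs(stamp - ref_stamp) > TOLERANCE:
                return None
            matched[name] = (stamp, data)
        return matched
```

In a ROS-based setup, the same nearest-in-time matching is often delegated to message_filters.ApproximateTimeSynchronizer rather than a hand-rolled buffer; whether the thesis system used such middleware is not stated in the abstract.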