Article
Version 2 (not peer-reviewed; preserved in Portico)
End-to-End Multimodal Sensor Dataset Collection Framework for Autonomous Vehicles
Version 1: Received: 18 May 2023 / Approved: 19 May 2023 / Online: 19 May 2023 (04:33:56 CEST)
Version 2: Received: 13 June 2023 / Approved: 29 June 2023 / Online: 29 June 2023 (08:32:46 CEST)
A peer-reviewed article of this Preprint also exists.
Gu, J.; Lind, A.; Chhetri, T.R.; Bellone, M.; Sell, R. End-to-End Multimodal Sensor Dataset Collection Framework for Autonomous Vehicles. Sensors 2023, 23, 6783.
Abstract
Autonomous driving vehicles rely on sensors for robust perception of their surroundings. Such vehicles are equipped with multiple perceptive sensors with a high level of redundancy to ensure safety and reliability in any driving condition. However, multi-sensor systems combining cameras, LiDAR, and radar introduce requirements for sensor calibration and synchronization, which are fundamental building blocks of any autonomous system. At the same time, sensor fusion and integration have become important aspects of autonomous driving research and directly determine the efficiency and accuracy of advanced functions such as object detection and path planning. Classical model-based estimation and data-driven models are the two mainstream approaches to achieving such integration. Most recent research is shifting to the latter, which shows high robustness in real-world applications but requires large quantities of data to be collected, synchronized, and properly categorized. To generalize the implementation of multi-sensor perceptive systems, we introduce an end-to-end generic sensor dataset collection framework that includes both hardware deployment solutions and sensor fusion algorithms. The framework prototype integrates a diverse set of sensors: cameras, LiDAR, and radar. Furthermore, we present a universal toolbox that calibrates and synchronizes the three sensor types based on their characteristics. The framework also includes fusion algorithms that exploit the complementary strengths of camera, LiDAR, and radar, fusing their sensory information in a manner useful for object detection and tracking research. The generality of this framework makes it applicable to any robotic or autonomous application and suitable for quick, large-scale practical deployment.
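The synchronization problem mentioned in the abstract — pairing camera, LiDAR, and radar measurements that arrive at different rates — is often solved by nearest-timestamp matching within a tolerance. The sketch below is only an illustration of that general idea (function names and the tolerance value are our assumptions, not the paper's toolbox):

```python
from bisect import bisect_left

def nearest(timestamps, t):
    """Return the value in a sorted list of timestamps closest to t."""
    i = bisect_left(timestamps, t)
    candidates = timestamps[max(i - 1, 0):i + 1]
    return min(candidates, key=lambda s: abs(s - t))

def synchronize(camera_ts, lidar_ts, radar_ts, tolerance=0.05):
    """Pair each camera frame with the nearest LiDAR and radar
    measurements; keep only triplets within the time tolerance (s)."""
    triplets = []
    for t_cam in camera_ts:
        t_lid = nearest(lidar_ts, t_cam)
        t_rad = nearest(radar_ts, t_cam)
        if abs(t_lid - t_cam) <= tolerance and abs(t_rad - t_cam) <= tolerance:
            triplets.append((t_cam, t_lid, t_rad))
    return triplets
```

For example, with a 30 Hz camera and a 10 Hz LiDAR, only roughly every third camera frame acquires a LiDAR partner within a 20 ms tolerance; the rest are dropped rather than mis-paired.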
Keywords
multimodal sensors; autonomous driving; dataset collection framework; sensor calibration and synchronization; sensor fusion
Subject
Computer Science and Mathematics, Robotics
Copyright: This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Comments (1)
Commenter: Junyi Gu
Commenter's Conflict of Interests: Author
Rewrote some confusing sentences and added more explanation for the figures and diagrams.