Version 1: Received: 12 July 2024 / Approved: 13 July 2024 / Online: 15 July 2024 (10:19:30 CEST)
How to cite:
Fang, S.; Chen, C.; Li, Z.; Zhou, M.; Wei, R. YOLO-ADual: Lightweight Traffic Sign Detection Model on Mobile Driving System. Preprints 2024, 2024071126. https://doi.org/10.20944/preprints202407.1126.v1
APA Style
Fang, S., Chen, C., Li, Z., Zhou, M., & Wei, R. (2024). YOLO-ADual: Lightweight Traffic Sign Detection Model on Mobile Driving System. Preprints. https://doi.org/10.20944/preprints202407.1126.v1
Chicago/Turabian Style
Fang, S., C. Chen, Z. Li, M. Zhou, and R. Wei. 2024. "YOLO-ADual: Lightweight Traffic Sign Detection Model on Mobile Driving System." Preprints. https://doi.org/10.20944/preprints202407.1126.v1
Abstract
Traffic sign detection plays a pivotal role in autonomous driving systems. The intricacy of detection models necessitates high-performance hardware, while real-world traffic environments exhibit considerable variability and diversity, posing challenges for effective feature extraction. It is therefore imperative to develop a detection model that is both highly accurate and lightweight. In this paper, we propose YOLO-ADual, a novel lightweight model. Our method leverages the C3Dual and ADown lightweight modules as replacements for the CSP and CBL modules in YOLOv5. The ADown module effectively mitigates feature loss during downsampling while reducing computational cost. Meanwhile, C3Dual optimizes the processing power for kernel feature extraction, improving computational efficiency while preserving network depth and feature extraction capability. Furthermore, the inclusion of the CBAM module enables the network to focus on salient information within the image, augmenting its feature representation capability. Our proposed algorithm achieves a mAP@0.5 of 70.1% while significantly reducing the number of parameters and the computational requirements to 51.83% and 64.73% of the original model, respectively. Compared to various lightweight models, our approach demonstrates competitive performance in both computational efficiency and accuracy.
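The CBAM module mentioned in the abstract applies channel attention followed by spatial attention to a feature map. The following is a minimal NumPy sketch of that idea only, not the authors' implementation: the MLP weights are random placeholders standing in for learned layers, and the spatial branch uses a simple sigmoid of pooled maps in place of the usual 7x7 convolution.

```python
import numpy as np

def channel_attention(x, reduction=4):
    """Scale each channel of x (shape C, H, W) by a learned-style weight in (0, 1)."""
    c = x.shape[0]
    avg = x.mean(axis=(1, 2))  # global average pool -> (C,)
    mx = x.max(axis=(1, 2))    # global max pool -> (C,)
    # Shared two-layer MLP; weights are random placeholders, learned in practice.
    rng = np.random.default_rng(0)
    w1 = rng.standard_normal((c // reduction, c)) * 0.1
    w2 = rng.standard_normal((c, c // reduction)) * 0.1
    def mlp(v):
        return w2 @ np.maximum(w1 @ v, 0)  # ReLU in between
    att = 1.0 / (1.0 + np.exp(-(mlp(avg) + mlp(mx))))  # sigmoid -> (C,)
    return x * att[:, None, None]

def spatial_attention(x):
    """Scale each spatial location by a weight in (0, 1) from channel-pooled maps."""
    avg = x.mean(axis=0, keepdims=True)  # (1, H, W)
    mx = x.max(axis=0, keepdims=True)    # (1, H, W)
    att = 1.0 / (1.0 + np.exp(-(avg + mx)))  # stand-in for a 7x7 conv + sigmoid
    return x * att

def cbam(x):
    # CBAM order: channel attention first, then spatial attention.
    return spatial_attention(channel_attention(x))
```

Because both attention maps are sigmoid-valued, the output keeps the input's shape and every activation is scaled by a factor in (0, 1), which is what lets the network emphasize salient channels and locations without changing the feature-map dimensions.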
Subject: Computer Science and Mathematics, Computer Vision and Graphics
Copyright:
This is an open access article distributed under the Creative Commons Attribution License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.