Preprint
Article

Sensor Fusion for Real‐Time Object Detection and Spatial Positioning in Unmanned Vehicles Using YOLOv8 and ESP32‐Cam

This version is not peer-reviewed

Submitted: 07 November 2024

Posted: 08 November 2024

Abstract
With the rise of autonomous systems, precise navigation in uncertain environments has become paramount. Unmanned vehicles in particular require accurate detection and spatial positioning of obstacles to ensure safe and efficient navigation. This research introduces a sensor-fusion-based system that integrates an ESP32 camera module and an ultrasonic sensor to detect objects and compute their position relative to the vehicle in an average of 715 milliseconds. The object detection pipeline uses the YOLOv8 algorithm to detect and classify objects in the camera's field of view. By combining the pinhole camera model with the YOLOv8 bounding box, an equation is formulated to compute the spatial coordinates of the object. The system was validated through a series of experiments involving different object types at varying distances, producing a dataset of 397 instances with a total of 3,176 values. The solution achieved a spatial detection accuracy of 89%, demonstrating its potential for reliable obstacle avoidance in unmanned vehicles.
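For illustration, the position computation described in the abstract can be sketched in a few lines of Python. This is a minimal sketch, not the paper's exact implementation: it assumes a calibrated camera with known intrinsics, and the focal lengths, principal point, frame file name, and 1.2 m ultrasonic reading below are illustrative placeholder values. Detection uses the Ultralytics YOLOv8 API.

    # Minimal sketch: fuse a YOLOv8 bounding box with an ultrasonic range
    # reading to estimate an object's position via the pinhole camera model.
    # Intrinsics and the range reading are assumed, not taken from the paper.
    from ultralytics import YOLO

    # Assumed intrinsics in pixels (would come from calibrating the ESP32-Cam)
    FX, FY = 600.0, 600.0    # focal lengths
    CX, CY = 320.0, 240.0    # principal point for a 640x480 frame

    def object_position(bbox_xyxy, range_m):
        """Estimate (X, Y, Z) in metres from a bounding box and a range.

        Under the pinhole model, a pixel (u, v) at depth Z back-projects to
        X = (u - cx) * Z / fx and Y = (v - cy) * Z / fy; the ultrasonic
        sensor supplies the depth Z directly.
        """
        x1, y1, x2, y2 = bbox_xyxy
        u, v = (x1 + x2) / 2.0, (y1 + y2) / 2.0  # bounding-box centre
        Z = range_m
        X = (u - CX) * Z / FX
        Y = (v - CY) * Z / FY
        return X, Y, Z

    model = YOLO("yolov8n.pt")        # pretrained nano model
    results = model("frame.jpg")      # frame captured from the camera
    for box in results[0].boxes:
        pos = object_position(box.xyxy[0].tolist(), range_m=1.2)
        label = results[0].names[int(box.cls)]
        print("%s at X=%.2f Y=%.2f Z=%.2f m" % ((label,) + pos))

In this formulation the ultrasonic sensor fixes the depth Z, while the bounding-box centre determines the lateral offsets X and Y through the pinhole back-projection, giving the object's full spatial coordinates relative to the vehicle.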
Keywords: 
Subject: Engineering - Mechanical Engineering
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.