Preprint Article, Version 1 (this version is not peer-reviewed)

Deep Learning Models for Predicting Drone Sound Distances: Lightweight, Fusion and Hybridization Approaches

Version 1 : Received: 28 October 2024 / Approved: 28 October 2024 / Online: 28 October 2024 (10:47:14 CET)

How to cite: Utebayeva, D.; Ilipbayeva, L.; Seidaliyeva, U.; Yembergenova, A.; Matson, E. T. Deep Learning Models for Predicting Drone Sound Distances: Lightweight, Fusion and Hybridization Approaches. Preprints 2024, 2024102156. https://doi.org/10.20944/preprints202410.2156.v1

Abstract

In recent years, the widespread use of drones in daily life and at large public events has raised serious safety concerns, particularly because of incidents, both intentional and accidental. A key requirement for preventing these risks is the ability to detect UAVs (Unmanned Aerial Vehicles) and accurately predict their distance from people and restricted areas. A pressing need is for systems that monitor flights near restricted areas and, when a suspicious drone is detected, predict its movements, since such drones may be launched for video reconnaissance or information theft. Acoustic-based recognition systems are advancing rapidly alongside developments in deep learning. This study explores deep learning model architectures for predicting UAV distances from their flight sounds. The objective is to determine whether sound-based classification can effectively predict drone movements at different distances from acoustic sensor points as the drones move from one area to another. Our experiments predicted UAV movement by classifying sounds into three main zones. The results showed that drone movement between zones could be reliably detected, with an average recognition accuracy of 90% using the hybrid CNN-BiLSTM model. Moreover, implementing such advanced acoustic sensor systems for UAV detection can improve real-time prediction accuracy, especially when integrated into a multimodal system with multi-sensor fusion methods.
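
The abstract names mel-spectrogram features and a hybrid CNN-BiLSTM classifier over three distance zones but does not spell out the architecture on this page; the following is a minimal illustrative sketch, not the authors' implementation. The helper names (melspectrogram, build_cnn_bilstm), the layer sizes, the 16 kHz sample rate and 1-second clip length, and the use of librosa rather than the Kapre layers mentioned in the keywords are all assumptions made for the example.

# Minimal sketch (assumed architecture, not the authors' exact model):
# a hybrid CNN-BiLSTM that maps drone-sound mel-spectrograms to three
# distance zones, as described in the abstract.
import numpy as np
import librosa
import tensorflow as tf
from tensorflow.keras import layers, models

N_MELS = 64        # mel bands per frame (assumed)
N_ZONES = 3        # distance zones, as in the abstract
SR = 16_000        # sample rate in Hz (assumed)

def melspectrogram(waveform: np.ndarray, sr: int = SR) -> np.ndarray:
    """Convert a mono waveform to a log-scaled mel-spectrogram, shape (frames, N_MELS)."""
    mel = librosa.feature.melspectrogram(y=waveform, sr=sr, n_mels=N_MELS)
    return librosa.power_to_db(mel, ref=np.max).T

def build_cnn_bilstm(frames: int) -> tf.keras.Model:
    """CNN front end for local spectral patterns, BiLSTM for temporal context."""
    inputs = layers.Input(shape=(frames, N_MELS, 1))
    x = layers.Conv2D(32, (3, 3), padding="same", activation="relu")(inputs)
    x = layers.MaxPooling2D((2, 2))(x)
    x = layers.Conv2D(64, (3, 3), padding="same", activation="relu")(x)
    x = layers.MaxPooling2D((2, 2))(x)
    # Collapse the mel axis so each time step becomes one feature vector.
    x = layers.Reshape((x.shape[1], x.shape[2] * x.shape[3]))(x)
    x = layers.Bidirectional(layers.LSTM(64))(x)
    outputs = layers.Dense(N_ZONES, activation="softmax")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    # One second of synthetic audio stands in for a recorded drone clip.
    clip = np.random.randn(SR).astype(np.float32)
    features = melspectrogram(clip)[np.newaxis, ..., np.newaxis]
    model = build_cnn_bilstm(frames=features.shape[1])
    print(model.predict(features))  # probabilities over the three zones

In this sketch the convolutional layers learn local time-frequency patterns of the rotor noise, while the bidirectional LSTM aggregates them over the clip before the softmax assigns one of the three distance zones.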

Keywords

UAV sound distance; CNNs; RNNs; SimpleRNN; LSTM; BiLSTM; GRU; CNN-BiLSTM; UAV sound classification; Kapre method; melspectrogram; deep learning; drone sound detection; real-time UAV sound detection; fusion; voting system; hybrid models

Subject

Engineering, Control and Systems Engineering
