Preprint
Article

LiDAR Technology for UAV Detection: From Fundamentals and Operational Principles to Advanced Detection and Classification Techniques


Submitted: 16 October 2024
Posted: 17 October 2024

Abstract
As UAVs are increasingly employed across various industries, the demand for robust and accurate detection has become crucial. LiDAR has emerged as a vital sensor technology due to its ability to offer rich 3D spatial information, particularly in applications such as security and airspace monitoring. This review systematically explores recent innovations in LiDAR-based drone detection, focusing on the principles and components of LiDAR sensors, their classification according to different parameters and scanning mechanisms, and the approaches used to process LiDAR data. The review briefly compares several deep learning approaches, including point-based, voxel-based, and hybrid models, that have improved drone detection, tracking, and classification. We also explore multi-modal sensor fusion, which combines LiDAR data with other complementary modalities to improve UAV detection and tracking in complex environments such as GNSS-denied zones.
Keywords: 
Subject: Computer Science and Mathematics - Computer Science

1. Introduction

The rapid development and increasing use of unmanned aerial vehicles (UAVs) in a variety of industries, from agriculture and medicine to environmental monitoring and entertainment, has created both new opportunities and concerns, notably in terms of airspace control and security. Since drones are capable of transporting illegal and suspicious cargo ranging from drugs to dangerous explosives, drone incidents have become more frequent in recent years. For instance, in the first quarter of this year, the FAA (Federal Aviation Administration) reported nearly 200 drone near-miss incidents near US airports, with multiple occurrences requiring pilots to take evasive action [1]. Drones have been detected bringing drugs, mobile phones, and other illicit items into correctional institutions in both Canada and the UK [2]. Likewise, in May 2023, Mexican criminal organizations employed drones to drop homemade explosives on villages along drug trafficking routes [3].
Significance of the research work. The drone incidents cited above demonstrate that timely detection of a suspicious UAV is crucial for several reasons spanning a variety of industries and applications:
  • in the field of security and defense, timely UAV detection prevents dangerous activities such as espionage, smuggling and terrorist attacks;
  • with respect to confidentiality, drone identification protects individuals and organizations from unauthorized surveillance;
  • in terms of airspace safety, timely detection of suspicious UAVs helps prevent mid-air collisions.
However, there are numerous challenges in detecting suspicious UAVs. In particular, the variety of payloads, different operating environments, the small size of UAVs, their continuous dynamic movement, and the continuous development of drone technology require complex and reliable detection methods. Currently, UAV detection, classification and tracking algorithms may employ numerous sensors, such as radars, RF (radio frequency) sensors, acoustic sensors, cameras, or a fusion of them. Radar-based UAV detection [4,5,6] is essential due to enhanced detection accuracy, efficiency, and real-time monitoring in response to growing concerns about UAV abuse in sensitive regions such as airports and military zones. Radar-based detection systems, particularly those based on micro-Doppler signatures, have proven to be quite successful in distinguishing UAVs from other flying objects, such as birds or small aircraft. These systems can recognize distinctive UAV flight patterns thanks to the micro-Doppler effect, which records the slight motion of UAV rotors. This is critical for detecting UAVs that pose potential dangers in real time [4]. Therefore, radars are becoming dominant technologies for UAV detection due to their ability to detect targets at long distances, in various weather conditions, and in complicated surroundings. However, their effectiveness may be limited by the low radar cross section (RCS) of small drones, high cost, and complexity of deployment [4,5,6].
Recent research on identifying hostile UAVs with RF sensors [7,8,9,10] has made tremendous progress, particularly in the integration of machine learning (ML) and deep learning (DL) approaches. RF sensors are capable of monitoring communication signals and spectra between drones and their operators. In [7], the authors applied a multiscale feature extraction approach for identifying UAVs by examining their RF fingerprints in real time. The system used an end-to-end DL model with residual blocks that was evaluated across various signal-to-noise ratios (SNRs) and showed excellent detection accuracy (above 97%) even in noisy environments, outperforming traditional approaches that struggle with overlapping signals. Another study [8] found that RF sensors can detect not just UAVs but also their controllers, which is essential for identifying the operator of a hostile UAV. This system combines radar and RF technology to follow unlicensed drones and their command centers, delivering real-time data to security teams. Therefore, RF sensors are an effective and passive means of detecting UAVs, particularly in circumstances where non-line-of-sight detection is critical. Their ability to recognize both the UAV and its operator, along with long-range detection, makes them extremely useful in security and surveillance applications. However, their shortcomings, such as dependency on active communication signals, sensitivity to encryption and jamming, and inability to identify fully autonomous drones that do not emit RF signals, frequently demand their integration with other sensor types (e.g., radar or optical sensors) to create a more robust UAV detection system [9,10].
Deep learning is also being used to recognize drone acoustic signatures, which is an important area of object detection development. Acoustic sensors detect the unique sound signatures produced by drone engines [11,12,13,14]. A recent study [11] demonstrated that DNNs could interpret multi-rotor UAV sounds acquired by acoustic sensors and reliably differentiate UAVs from background noise. The study compared CNN and RNN algorithms, either individually or in a voting ensemble based on late fusion. The experimental results demonstrated that CNN-based models performed best, with an accuracy of 94.7%. In another study [12], researchers evaluated UAV sound detection at various distances, examining how ambient factors such as noise and distance impact UAV detection. The outcomes demonstrated that linear discriminant analysis can be effective for UAV sound detection at short distances, while higher detection accuracy at medium and long distances was reached using the YAMNet DL model. Therefore, acoustic sensors are an inexpensive and energy-efficient alternative for UAV detection, particularly in locations with limited visual line-of-sight (LoS). Nevertheless, their performance and detection range are reduced by wind conditions and background noise [13,14].
The use of spatio-temporal information in conjunction with optical flow analysis is a common strategy in camera-based UAV detection research [15]. This approach improves small UAV detection by evaluating continuous image sequences to capture drone motion across frames [15]. The problem of distinguishing drones from other flying objects was considered in [16,17]. In [16], the authors performed real-time drone detection by separating the problem into moving object detection and classification tasks. The moving object detection task was solved using traditional image processing operations and a background subtraction algorithm, while the MobileNetv2 DL model handled the classification of the detected moving objects. While camera sensors offer rich color and texture information, are inexpensive, and are adaptable for object recognition, they have drawbacks such as sensitivity to lighting, weather, and line-of-sight conditions, and a lack of depth perception [17]. These issues frequently require the incorporation of other detection systems such as acoustic [18,19], RF [20,21], radar [22,23], as well as Light Detection and Ranging (LiDAR) [24] sensors to improve overall UAV detection reliability.
LiDAR [24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42], unlike typical camera or radar-based systems, gives exact distance measurements and high-resolution data with rich 3D representations for distinguishing small, fast-moving objects such as drones from other aerial objects based on motion and size characteristics. In addition, LiDAR sensors have demonstrated great potential for detecting, localizing, and tracking objects in the near and mid range. Because of its ability to capture precise high-resolution 3D spatial data, durability in harsh environmental conditions, and great precision in recognizing and tracking fast-moving objects, LiDAR is the preferred choice in applications requiring robustness and accuracy. As UAV technology advances, LiDAR’s performance benefits keep it at the forefront of UAV detection systems, particularly in key applications such as airspace security, surveillance, and autonomous navigation. Nevertheless, effective LiDAR data processing for UAV detection brings distinct challenges. Point cloud data’s irregularity, sparsity, and high dimensionality necessitate sophisticated analytical approaches capable of effectively recognizing different types of drones in diverse environments. Deep learning has shown tremendous promise in this domain, providing advanced approaches for feature extraction, object identification and classification. The combination of LiDAR technology and deep learning methodologies is driving drone detection innovation with enhanced system accuracy, speed, and robustness. This systematic study investigates the most recent advances in LiDAR-based object recognition, with a particular emphasis on the fundamentals of LiDAR technology, its structure and operating principle, classification types, as well as clustering-based and deep learning algorithms for LiDAR data processing. We also look at the progress of integrating LiDAR with additional sensor modalities in complex environments to increase system resilience and accuracy.

2. Understanding LiDAR: Technology, Principles, and Classifications

Light Detection and Ranging (LiDAR) is a remote sensing technique that works similarly to radar but employs light instead of radio waves. It uses the concepts of reflected light and precise timing to determine the distance to objects [24]. However, LiDAR is more than simply a distance measurement tool. LiDAR systems can generate comprehensive 3D models of the scanned environment by emitting laser pulses toward a target and measuring how long it takes for the reflected light to return. This technique is highly precise and can capture complicated scenes in real time. Therefore, it may also be employed in 3D mapping and imaging, making it both desirable in an engineering setting and a very valuable practical technology.
A common LiDAR sensor, using a laser in the 905 nm near-infrared (NIR) spectrum, employs the time-of-flight (ToF) measurement principle. ToF is defined as the time difference between the laser’s transmission (t1) and its reflection from a target object back to the sensor (t2). The idea of single-shot direct time-of-flight (dToF) measurement is the simplest to understand among the several LiDAR approaches to calculating the distance to a target object [24,25,26] (see Figure 1). Based on this idea, a light source (typically a laser diode) generates a pulse of light, which starts a timer. When the light pulse strikes a target object, it is reflected back to a sensor, which is normally located near the laser diode, and the timer is stopped.
As the time (t) between the transmitted pulse and the reflected echo is known, the distance (D) to the target object may be calculated using a constant value of the speed of light (c):
D = (ToF × c) / 2
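To make the relation concrete, the following minimal Python sketch (illustrative only, not code from any of the cited works) converts a measured round-trip time of flight into a distance:

```python
# Minimal illustration: distance from a round-trip time of flight, D = (ToF * c) / 2.
C = 299_792_458.0  # speed of light, m/s

def tof_to_distance(tof_seconds: float) -> float:
    """Return the one-way distance for a measured round-trip time of flight."""
    return tof_seconds * C / 2.0

# Example: an echo returning after 500 ns corresponds to roughly 75 m.
print(tof_to_distance(500e-9))  # ~74.95 m
```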

2.1. Key Components and Operational Principles of LiDAR systems

LiDAR devices use laser light to determine distances and create detailed 3D maps of their surroundings [24,25,26]. A LiDAR sensor, which determines the precise distance to a target object by emitting a laser, includes a laser diode (LD), an avalanche photodiode (APD), a time-to-digital converter (TDC), and signal processing modules, as depicted in Figure 1.
The primary components of a LiDAR system, which are required for data capture, processing, and interpretation in order to measure distances and detect objects, are the following:
1. Laser. The color and intensity of the laser vary depending on the sort of data being gathered, but every LiDAR payload includes a high-powered laser [24,25,26]. The laser generates pulses or continuous light beams that are emitted at a certain frequency or wavelength (usually in the NIR spectrum, such as 905 nm or 1550 nm). Depending on the transmission principle and application, lasers can be either pulsed or continuous-wave (CW). The pulsed laser is the most popular form in LiDAR systems; it emits short pulses of several nanoseconds with a high instantaneous peak power. The system determines the distance to the target object by measuring each pulse’s time-of-flight. Because the instantaneous intensity is high, this approach is particularly accurate for measuring long distances and is frequently utilized in mapping, terrain modeling, and self-driving vehicles. Instead of producing pulses, CW lasers produce a continuous beam of light and come in two variations: amplitude-modulated continuous-wave (AMCW) and frequency-modulated continuous-wave (FMCW). AMCW transmits a continuous laser and uses the phase difference between the transmitted and reflected waves to measure the distance. Because it relies on this phase shift, it is not appropriate for accurate, long-distance measurement [24,25,26]. CW LiDARs often employ the FMCW approach, which compensates for the AMCW’s inaccurate distance estimation. FMCW lasers also generate a continuous beam rather than discrete pulses, but they modulate the frequency of the light and determine range from the frequency (beat) shift between the transmitted and received signals, which also captures the Doppler shift of moving targets. They provide high accuracy and can estimate both the distance and velocity of moving objects. However, FMCW is substantially more complicated than AMCW, and its long-distance performance is affected by frequency fluctuations caused by the Doppler effect. (The standard range equations behind the pulsed, AMCW, and FMCW schemes are sketched in the code example after this component list.)
2. Scanner/Beam steering mechanism. The scanner directs and steers the laser beam across the surroundings while collecting the returning signals; it defines the LiDAR’s coverage area and affects its resolution. Rotating mirrors and oscillating systems are used to control the laser beam. Section 2.2 explains the scanning mechanisms used in LiDAR sensors.
3. Receiver (photodetector). Avalanche photodiodes (APDs) and single-photon avalanche diodes (SPADs) are examples of common detectors. These detectors capture the weak reflected light pulse and convert it into an electrical signal [24,25,26].
4. Signal processing unit. This unit of a LiDAR system is in charge of converting the raw data acquired by the photodetector into useful information such as distance measurements and 3D object mapping. It removes noise and unnecessary data captured by the photodetector, and enhances signal detection accuracy by focusing only on laser pulses returned from the objects of interest. It also amplifies the weak signal reflected by distant or tiny objects, which is essential in long-range LiDAR systems or low-reflectivity settings. After filtering and amplifying the raw data, this unit estimates the laser pulse’s ToF. Finally, it transforms the processed data into a 3D point cloud that depicts the scanned surroundings. Each point in the point cloud represents the position at which the LiDAR spotted a target object. In some cases, the signal processing unit also performs data compression and transmission tasks to handle the rapid processing of large data volumes in real-time applications [27,28,29].
5. Power supply unit. It supplies the electrical energy necessary to power all of the system’s components and ensures crucial operations such as powering the laser emitter, regulating voltage for the receiver, enabling signal processing, controlling beam steering, and managing energy efficiency [27].
6. Positioning and Orientation unit. This unit provides the accurate global position and orientation of the LiDAR sensor during data gathering. It includes GPS/GNSS (Global Positioning System/Global Navigation Satellite System), IMU (Inertial Measurement Unit), INS (Inertial Navigation System), and SLAM (Simultaneous Localization and Mapping) components. The GPS/GNSS unit enables precise positioning, georeferencing, and trajectory tracking of the LiDAR system within a global coordinate system. The IMU provides detailed information on the sensor’s orientation, acceleration, and movement, ensuring that the LiDAR data remains correct even when the sensor is in motion or vibrating. The INS fuses data from the GPS/GNSS and IMU to produce continuous and highly precise position and orientation information, particularly in challenging environments or during dynamic motion. The SLAM unit enables the LiDAR sensor to simultaneously map and localize itself within its surroundings in GPS-denied environments.
7. Control system unit. The control system coordinates all of the LiDAR sensor’s components. It handles laser emission, receiver synchronization, data gathering, and filtering operations. It also interacts with other systems, such as GPS or IMU, and sets scanning settings and power management [28,29].
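As referenced in item 1, the following sketch collects the standard range equations behind the pulsed dToF, AMCW, and FMCW schemes. These are textbook relations rather than formulas taken from a specific cited work, and the parameter values are purely illustrative:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def range_pulsed(tof_s: float) -> float:
    # Pulsed dToF: D = ToF * c / 2
    return tof_s * C / 2.0

def range_amcw(phase_shift_rad: float, mod_freq_hz: float) -> float:
    # AMCW: distance from the phase difference between transmitted and
    # received amplitude modulation; unambiguous only within c / (2 * f_mod).
    return C * phase_shift_rad / (4.0 * math.pi * mod_freq_hz)

def range_fmcw(beat_freq_hz: float, chirp_duration_s: float, bandwidth_hz: float) -> float:
    # FMCW: distance from the beat frequency of a linear chirp,
    # R = c * f_beat * T_chirp / (2 * B).
    return C * beat_freq_hz * chirp_duration_s / (2.0 * bandwidth_hz)

# Example values (illustrative only).
print(range_pulsed(200e-9))           # ~30 m
print(range_amcw(math.pi / 2, 10e6))  # ~3.75 m
print(range_fmcw(2e6, 10e-6, 1e9))    # ~3 m
```

Note that the AMCW range is unambiguous only within half the modulation wavelength, which is one reason it is unsuited to long-range measurement.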

2.2. Types of LiDAR Sensors: A Comprehensive Classification

LiDAR sensors may be classified according to a number of factors, including scanning mechanisms, operational environments, measurement techniques, and application areas.
According to their scanning methods, LiDAR systems are classified as either non-scanning or scanning LiDAR (see Figure 2). Scanning LiDAR sensors are further characterized as mechanical and non-mechanical LiDAR. Mechanical scanning LiDAR sensors use physically moving components (rotating mirrors, oscillating prisms, the entire sensor head, etc.) to steer the laser beam across the environment and measure distances to create accurate 3D maps. There are several types of mechanical scanning LiDAR: micro-electromechanical systems (MEMS) [30,31,32,33,34], optomechanical [35,36,37] and electromechanical [38] scanning LiDAR. MEMS scanning mechanisms use small movable micro-mirror plates to steer laser beams in free space and produce scanning patterns in LiDAR systems. MEMS-based scanning mirrors are generally classified as either resonant or non-resonant [34]. Resonant MEMS mirrors operate at a certain resonant frequency and provide broad scan angles and rapid scanning speeds, making them ideal for automotive applications; nevertheless, they have disadvantages such as non-uniform scan speeds and susceptibility to environmental changes. Non-resonant MEMS mirrors, in contrast, are more versatile in trajectory design but have smaller scanning angles and require more complicated controllers to maintain scan quality. MEMS scanning mechanisms are used in a wide range of applications due to their small size, low cost and power consumption, and ability to create high-resolution scans. MEMS LiDAR is known as quasi-solid-state LiDAR since its moving parts merely steer the laser beam in free space and do not move any optical components. This makes such sensors excellent for autonomous cars, robotics, space exploration, medical imaging, and mobile applications where weight, size, and energy efficiency are all important considerations. However, MEMS mirrors encounter issues such as limited range and field of view (FoV), and sensitivity to environmental factors such as temperature and vibrations, which can affect their resonant frequency and overall performance [30,31,32,33,34]. Optomechanical scanning systems often use rotating polygon mirrors and Risley prisms that physically rotate to steer the laser beam over the scene being scanned. These systems are well suited to automotive LiDAR, remote sensing, and even biological imaging since they can maintain accurate beam control over long durations and achieve a wide FoV and long range. Optomechanical systems are extremely effective, but they are heavy and contain moving components that can wear out over time, making them less suitable for small or lightweight applications than MEMS-based equivalents. However, for cases needing high power and long-range capabilities, optomechanical mechanisms continue to be the dominant choice [35,36,37]. Electromechanical scanning devices employ rotating or oscillating mirrors driven by electric motors (such as stepper motors or servo motors) to deflect a laser beam in certain directions. This enables the LiDAR system to perform 2D or 3D scanning over broad regions. These systems are among the earliest and most extensively used scanning mechanisms in LiDAR because of their simplicity, efficacy in attaining high angular resolution, large horizontal and vertical field of view, high accuracy, and long-range capabilities. They are widely utilized in self-driving cars, infrastructure monitoring, environmental mapping, remote sensing, and atmospheric studies.
While electromechanical systems are strong and reliable, they do have certain limitations. Moving parts are prone to wear and tear, which might jeopardize long-term reliability. Furthermore, these systems tend to be bulkier and heavier than newer, more compact options such as MEMS or solid-state LiDAR, which offer comparable capabilities without moving components [35,38].
Non-mechanical scanning LiDAR is also known as solid-state beam scanning since it has no moving parts such as rotating mirrors or prisms. Solid-state beam scanning LiDAR systems frequently employ optical phased arrays (OPAs) to steer the laser beams by altering the phase of the light emitted at various spots in the array [39,40]. OPA scanning techniques in LiDAR systems are gaining popularity because of their non-mechanical beam steering capabilities and are regarded as fully solid-state, with no moving components. They offer the potential for very fast scanning while also being highly precise in tuning for varied beam directions. Recent research [39] focuses on advances in photonic integrated circuits (PICs) for OPAs, which improve their compactness, speed, and energy efficiency. These integrated circuits have fast response times and compatibility with existing manufacturing processes (such as CMOS), making them suitable for mass production in sectors like automotive LiDAR. Another OPA-based LiDAR study [40] presented a target-adaptive scanning method which adjusts the scanning resolution depending on the detected target. Based on this method, high-resolution scanning is retained only for essential objects, while unimportant background regions are scanned coarsely. This strategy increases efficiency, as well as the system’s capacity to focus on critical aspects in real-time applications like autonomous driving [40]. While OPA technology has major advantages, such as no moving components, fast beam steering, compact size, and energy efficiency, it faces issues in managing beam width and reducing grating lobes [39,40].
Unlike conventional LiDAR systems, non-scanning LiDAR sensors do not use a mechanical or electronic scanning mechanism to move a laser beam across the field of view. Instead, these systems illuminate the full scene with a single light pulse and capture its reflection on a 2D sensor array, similar to a camera. Non-scanning LiDAR is commonly known as Flash LiDAR. A Flash LiDAR system employs a broad laser beam and a large photodetector array to gather 3D data in a single shot, making it well suited to real-time applications. Recent studies [41,42] have focused on the integration of SPAD (single-photon avalanche diode) sensors with Flash LiDAR systems, which allows for increased sensitivity and performance in low-light conditions. SPAD-based Flash LiDARs can operate over long distances (up to 50 kilometers) and are being studied for precision landing systems for planetary missions. Flash LiDAR thus offers primary advantages such as the lack of moving components, fast data capture, and the capacity to create high-resolution 3D maps in real time, making it an invaluable tool for terrestrial and space applications. However, because of their limits in range and resolution compared to scanning LiDAR systems, such sensors are usually employed in shorter-range applications [41,42].
The differences between these LiDAR scanning mechanisms are summarized in the brief comparison in Table 1 below.
Based on the dimension of the acquired data, LiDAR systems that collect spatial information come in three varieties:
- one-dimensional (1D) LiDAR [43,44];
- two-dimensional (2D) LiDAR [45,46,47,48,49];
- three-dimensional (3D) LiDAR [50,51].
As mentioned above in Section 2.1, a LiDAR system has several basic components such as the laser, scanner, receiver, and signal processing unit. However, a LiDAR system can still operate without a scanning mechanism; such a sensor is known as 1D LiDAR. A 1D LiDAR estimates the distance along a single fixed axis or direction: its laser is fixed and directed in one direction rather than rotating or sweeping over a larger region. In [43], a low-cost, high-precision pointing mechanism for obstacle detection on railway tracks over long distances was demonstrated by employing a gimbaling platform coupled with a 1D LiDAR sensor. The gimbal enables the LiDAR sensor to scan a large area dynamically, detecting obstructions like animals, debris, and equipment on rails to prevent accidents. The system’s actual pointing accuracy was assessed through controlled indoor and long-distance experiments. The findings showed that the system can reliably target individual points over large distances, with sufficient angular resolution to detect humans at 1500 meters using a long-range LiDAR. Overall, the study contributes significantly to transportation safety and provides a solid platform for future advances in obstacle detection technologies. Rather than using expensive and sophisticated 3D LiDAR systems, the authors of [44] developed an inexpensive indoor navigation system capable of mapping static environments and detecting objects. The proposed system employs a 1D LiDAR sensor to scan its surroundings and create 3D maps by combining data from many scans captured over time as the LiDAR-equipped vehicle moves across the surroundings. The system prototype is constructed with a LiDAR-Lite v3 sensor, two servo motors and a pan-tilt mechanism, and is primarily designed for small autonomous bots or Automated Guided Vehicles (AGVs) for which vehicle speed is not a concern. It assumes that the environment is static relative to the sensor, with only obstructions being relevant. The prototype determines obstruction coordinates by scanning 150° horizontally and 120° vertically using a fast scanning approach. Once the coordinates are spotted, the sensor focuses on the obstacle to generate a detailed map, which allows the vehicle to analyze the object’s profile and provide navigation instructions for obstacle avoidance. Based on the experimental results, obstacles within a 1-meter range are successfully detected, followed by the creation of an object profile. The use of an adaptive scanning method reduced scan time by more than half and recognized the object’s presence and shape as rapidly as possible.
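To illustrate how a single-beam sensor on a pan-tilt platform, as in [44], can build a 3D map, the sketch below converts one range reading plus the gimbal angles into a Cartesian point. The angle conventions and sweep steps are assumptions for illustration, not the authors' implementation:

```python
import math
from typing import Tuple

def point_from_pan_tilt(r: float, pan_deg: float, tilt_deg: float) -> Tuple[float, float, float]:
    """Convert a 1D range reading and gimbal angles into a Cartesian point.

    Assumed convention: pan is the azimuth about the vertical axis, tilt is the
    elevation above the horizontal plane, and the sensor sits at the origin.
    """
    pan = math.radians(pan_deg)
    tilt = math.radians(tilt_deg)
    x = r * math.cos(tilt) * math.cos(pan)
    y = r * math.cos(tilt) * math.sin(pan)
    z = r * math.sin(tilt)
    return x, y, z

# Sweeping an assumed 150° (pan) x 120° (tilt) field in coarse steps yields a
# sparse 3D map; a finer sweep around a detected obstacle refines its profile.
cloud = [point_from_pan_tilt(1.0, p, t)
         for p in range(-75, 76, 15) for t in range(-60, 61, 15)]
print(len(cloud))
```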
Recent research works have explored various aspects of 2D LiDAR technology in UAV detection and tracking. For example, in [45], the researchers examined 2D LiDAR-based UAV detection and presented a formulation for the probability of detection in various settings using a LiDAR-turret system. The proposed system relies on sparse detections instead of dense point clouds, and also performs motion estimation and active tracking. The authors offered a theoretical framework for analyzing the performance of a 2D LiDAR-based detection system and better understanding its limitations. The work includes field experiments involving the detection of multiple drones of different sizes with a modern LiDAR system and highlights its effectiveness in UAV identification and tracking using sparse data. A LiDAR-assisted UAV exploration algorithm (LAEA) for unknown surroundings was proposed in [46]. The proposed system framework has three major modules: map construction, target selection, and motion planning. The approach uses a 2D ToF (time of flight) LiDAR and a depth camera to swiftly capture contour information from the undiscovered surroundings, then combines the data from the two sensors to generate a hybrid 2D map. Using the fused multi-sensor data, the map construction module produces both high- and low-resolution 3D occupancy maps, as well as 2D occupancy maps for detecting special frontier clusters. The target selection module is responsible for frontier-based viewpoint generation, detection of tiny and isolated frontier clusters, and solving the asymmetric traveling salesman problem (ATSP). Finally, the motion planning module conducts specific azimuthal trajectory optimization utilizing the EIG (environmental information gain) optimization approach, resulting in a safe trajectory that allows the UAV to collect more information. To test the proposed algorithm’s efficacy, a simulation study using the Gazebo simulator was carried out, performing a thorough comparison between the proposed method and state-of-the-art techniques such as FUEL (Fast UAV Exploration) and FAEP (Fast Autonomous Exploration Planner). The simulation results revealed that the proposed LAEA approach outperforms these two methods in terms of flight distance and exploration time. The feasibility of the proposed approach was validated on a robotic platform outfitted with an RGB-D camera, 2D LiDAR, and an Nvidia onboard computer in two distinct real-world scenarios (indoors and outdoors). A robust approach for detecting and classifying short obstacles and potholes using a cost-effective 2D LiDAR sensor mounted on a mobile platform was proposed in [47]. Since the research goal was to detect short obstacles on the ground, the LiDAR sensor was mounted looking downward. The data were acquired using a Hokuyo UBG-04LX-F01 2D LiDAR sensor and converted into a point cloud, which is a straightforward transition from polar to Cartesian coordinates. To identify obstacles, the point cloud is segmented into lines, and based on their average height the lines are classified as either ground, pothole or obstacle. The experimental findings showed that the suggested method properly recognizes obstructions and potholes in a structured environment. Nevertheless, the point cloud is oversegmented with unnecessary lines, which might be addressed by adjusting the line refinement parameter. The authors plan to consider dynamic objects and analyze their movement in the lines in their future study.
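The processing chain described for [47] can be summarized by the following rough sketch, which converts a downward-looking 2D scan from polar to Cartesian coordinates and labels segments by average height. The mounting height, angle convention, and thresholds are assumed values, not those used by the authors:

```python
import math

SENSOR_HEIGHT = 0.5      # assumed mounting height above the ground, m
GROUND_TOLERANCE = 0.05  # assumed band around the ground plane, m

def polar_to_cartesian(ranges, angles_deg):
    """Convert one downward-looking 2D scan to (y, z) points in the sensor frame.

    Assumed convention: angle 0 points straight down, z is the height relative
    to the expected ground plane (negative z means below ground level).
    """
    points = []
    for r, a in zip(ranges, angles_deg):
        a_rad = math.radians(a)
        y = r * math.sin(a_rad)                  # lateral offset
        z = SENSOR_HEIGHT - r * math.cos(a_rad)  # height above ground
        points.append((y, z))
    return points

def classify_segment(segment):
    """Label a line segment as ground, pothole, or obstacle by average height."""
    mean_z = sum(z for _, z in segment) / len(segment)
    if mean_z < -GROUND_TOLERANCE:
        return "pothole"
    if mean_z > GROUND_TOLERANCE:
        return "obstacle"
    return "ground"

scan = polar_to_cartesian([0.5, 0.52, 0.7], [-10.0, 0.0, 10.0])
print(classify_segment(scan))
```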
Efficient moving object detection based on 2D LiDAR combined with a frame-to-frame scan matching method was presented in [48]. To ensure collision-free passage across a mapped area, the proposed SegMatch algorithm was implemented on an autonomous mobile robotic system (MRS) equipped with an LD-OEM 1000 LiDAR. The autonomous MRS is controlled through an Application Programming Interface (API) and communicates via LoRa technology. In addition, a WiFi router was installed to connect the LiDAR with an external processing unit via TCP/IP (Transmission Control Protocol/Internet Protocol). The main objective of the autonomous MRS is to execute active SLAM (simultaneous localization and mapping). To detect dynamic objects, the proposed algorithm was tested on stationary measurements, while performing SLAM, and in real-time measurements, respectively. For these purposes, the necessary data were obtained via the LiDAR scanner and then fed into the SLAM algorithm, which calculated deviations and created a 2D map. Data preprocessing was then carried out to make the data suitable for point cloud generation. Afterwards, the proposed method was applied to the processed data to detect dynamic objects. When detecting dynamic objects, the authors encountered problems such as defects on the resulting map and probable collisions of the autonomous MRS with the dynamic object. The algorithm’s advantages include a rapid response time and the ability to be employed in heavily populated areas. To gain a full perspective of the environment, the proposed approach might be enhanced with an ITS (Intelligent Transport System) architecture-based multi-agent system using several mobile robotic systems. Another study [49] presents a comprehensive analytical formalism focused on 2D LiDAR structured data representation, object detection and localization within the realm of mobile robotics. The authors described a formalized approach for LiDAR data processing and its mathematical representation, converting raw sensor data into intelligible representations suited for a variety of robotic applications. The proposed analytical formalism includes advanced algorithms for noise reduction, feature extraction, and pattern recognition, allowing a mobile robot to detect static and dynamic objects in its proximity. The efficacy of the proposed method was validated by conducting numerous experiments in a scenario with items of various configurations, sizes, and forms that properly simulates a real-world use case. The experimental outcomes demonstrated that the suggested technique can efficiently recognize and separate objects in semi-structured environments in under 50 milliseconds. Furthermore, the authors argue that the mathematical framework’s simplicity ensures minimal computing effort and high efficiency, establishing the groundwork for innovative solutions in a wide range of cases. In their future work, the authors plan to study the integration of machine learning techniques into the suggested framework for object recognition and classification tasks.
In [50], the authors studied the performance of a 3D LiDAR system for UAV detection and tracking, concentrating on the robustness of effective range estimation with respect to various drone types and shapes, as well as visibility robustness under different environmental and illumination conditions. Additionally, the potential of 3D LiDAR for UAV tracking under varying lighting conditions was assessed. The effective range estimation experiment was carried out by mounting a Livox Mid-40 3D LiDAR in an open field and flying a black UAV and a white UAV at distances up to 80 m. The results indicated that the color of the UAV had a considerable effect on its reflectivity and hence on the detection range, with the smaller white UAV being visible at a greater distance than the bigger black UAV, despite the latter’s size advantage. Since the white UAV had the greater detection range, the visibility robustness experiment was carried out with only this UAV during three distinct time periods of the day with the same background. The outcomes revealed that the number of captured LiDAR points and the reflectivity decrease as distance increases; however, at extreme distances, mean reflectivity increases due to the dominance of the UAV’s most reflective parts. This means that UAV localization remains reliable even in low-light conditions, without a reduction in detection range. A 3D UAV tracking experiment was also performed by continuous LiDAR scanning at three different time periods of the day to track the white UAV’s motion and assess its trajectory within the scan duration. The outcomes demonstrated that UAV trajectory tracking remains effective across different lighting conditions and UAV speeds, showcasing the potential of 3D LiDAR for robust UAV tracking. This preliminary research indicates that 3D LiDAR has considerable potential for robust UAV detection, localization, and tracking. Future research directions include extending the detection range to 200 meters without increasing laser power, developing a mobile system to track high-speed UAVs without sacrificing point cloud density, and developing a real-time system that integrates AI and machine learning for UAV detection and tracking across different shapes, materials, and reflectivities. The research also intends to follow drone swarms in 3D and evaluate LiDAR’s performance in harsh weather conditions such as snow, fog, and rain. The authors of [51] proposed a novel approach for multi-object tracking (MOT) in 3D LiDAR point clouds that combines short-term and long-term relations to enhance object tracking over time. The short-term relation analyzes geometrical information between detections and predictions, and exploits the fact that objects move gradually between consecutive frames. The long-term relation, on the other hand, considers the historical trajectory of tracks to assess the degree to which a long-term trajectory matches the present detection. An effective Graph Convolutional Network (GCN) approach was used to assess the matching between the present detection and existing object trajectories. By representing the relations between identified objects as a graph, the system enhances its capacity to associate objects across frames, especially in dense or cluttered settings. In addition, an inactive track list was kept to solve the issue of incorrect ID switching for objects that had been occluded for an extended duration.
This method allows the system to maintain object tracks more reliably, especially in challenging scenes, such as when several objects move close together or when partial occlusion occurs. Therefore, the proposed solution with a multi-level association mechanism successfully mitigates problems such as ID switching following occlusion, leading to increased tracking accuracy and resilience in complex environments. However, the work might benefit from further investigation into its computational efficiency and applicability to a wider range of applications. Despite these limitations, the comparison with cutting-edge LiDAR and LiDAR-camera fusion tracking systems revealed the proposed approach’s clear efficacy in improving robustness, notably in resolving ID switching and fragmentation issues.
Each type of LiDAR sensor has unique benefits and drawbacks in terms of range, size, power consumption, resolution and price. The sensor selection relies on the application’s unique needs, such as detection range, accuracy, ambient conditions, and budget.

3. Deep Learning Approaches in LiDAR Data Processing

3.1. Overview of Deep Learning Techniques

Deep learning has transformed computer vision in recent years, resulting in significant advances in 2D data processing. Researchers are now turning to deep learning algorithms to achieve comparable advances in 3D object recognition and classification with LiDAR data. By enhancing the accuracy, efficiency, and robustness of 3D object identification systems, these innovations are defining the future of a variety of sectors, including autonomous vehicles, robotics, and aerial surveying and mapping. In addition, the merging of deep learning algorithms with LiDAR technology has led to substantial breakthroughs in drone detection, enabling the identification and monitoring of drones in airspace for security and safety purposes. A number of deep learning approaches are employed to assess the spatial information acquired by LiDAR sensors and detect 3D objects within point cloud data. These approaches are primarily intended to detect and classify target objects by analyzing 3D points.
Table 2 below provides a detailed comparison of important deep learning approaches for LiDAR data processing, emphasizing their data representations, major techniques, advantages and limitations.

4. State-of-the-Art Approaches in LiDAR-Based Drone Detection

4.1. Clustering-Based Approaches in LiDAR-Based Drone Detection

Because of its simplicity and efficacy, clustering-based object detection has been widely used in the field of LiDAR data processing. Euclidean Distance Clustering (EDC) [67] is one of the most fundamental and extensively used clustering algorithms for LiDAR data. This approach groups points that are close to one another within a given distance threshold and clusters them based on proximity. The Density-Based Spatial Clustering of Applications with Noise (DBSCAN) [64] method is another common clustering technique for LiDAR-based object detection. DBSCAN groups points depending on their local density, allowing it to deal with noisy data and identify clusters of various shapes. The primary stages of any clustering-based LiDAR object detection method are shown in Figure 3.
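As an illustration of the pipeline in Figure 3, the sketch below implements a basic Euclidean Distance Clustering step over a LiDAR point cloud using region growing on a k-d tree. The distance tolerance and minimum cluster size are placeholder parameters that would need tuning for real data:

```python
import numpy as np
from scipy.spatial import cKDTree

def euclidean_distance_clustering(points: np.ndarray, tol: float = 0.5,
                                  min_cluster_size: int = 5):
    """Group points whose mutual distance is below `tol` (region growing)."""
    tree = cKDTree(points)
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        queue, cluster = [seed], [seed]
        while queue:
            idx = queue.pop()
            for neighbor in tree.query_ball_point(points[idx], r=tol):
                if neighbor in unvisited:
                    unvisited.remove(neighbor)
                    queue.append(neighbor)
                    cluster.append(neighbor)
        if len(cluster) >= min_cluster_size:
            clusters.append(points[cluster])
    return clusters

# Toy example: two well-separated groups of points form two clusters.
rng = np.random.default_rng(0)
cloud = np.vstack([rng.normal([0, 0, 10], 0.1, (30, 3)),
                   rng.normal([5, 5, 12], 0.1, (30, 3))])
print([len(c) for c in euclidean_distance_clustering(cloud)])  # e.g. [30, 30]
```

Each returned cluster can then be passed to a bounding-box or feature extraction stage for subsequent classification.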
A novel real-time obstacle detection and navigation approach for UAVs using 2D LiDAR technology was presented in [62]. The proposed obstacle detection system is lightweight and cost-effective, consisting of a 2D LiDAR sensor and a Raspberry Pi 3B, with a DJI Matrice 100 serving as the flight platform. The employed 2D LiDAR consists of a fixed part and a rotating part: the former mounts the sensor to the UAV, whereas the latter performs 360-degree environment scanning using rotational measurement devices to produce an environmental point cloud throughout the whole plane. The detection method includes point cloud correction and a clustering algorithm based on relative distance and density (CBRDD). Since the drone is a dynamic object, the LiDAR sensor is also constantly moving with the drone. Therefore, the point cloud obtained from the LiDAR is affected by the motion, and there is a difference between the ideal and actual point clouds, which necessitates correcting the obtained point clouds before putting them into practice. Point cloud correction was performed by converting the point cloud series from polar to Cartesian coordinates, then estimating the UAV position based on an IMU (Inertial Measurement Unit) velocity estimation model. The relative distance between two adjacent points and the density distribution are the main features to be extracted. The CBRDD clustering algorithm was then applied to these features to obtain the distribution information of obstructions. The experimental part includes both simulation and actual experiments to validate the proposed approach. The DBSCAN (Density-Based Spatial Clustering of Applications with Noise) algorithm was chosen for the comparison; based on the simulation outcomes, the CBRDD clustering algorithm outperformed DBSCAN on point clouds of uneven density, while in the actual experiment on uniform point clouds they showed the same result. In [63], the authors offered a robust UAV tracking and position estimation system employing fused data from Livox Avia and LiDAR 360 sensors and a clustering-based learning detection (CL-Det) approach. The LiDAR 360 used 3D point cloud data to give 360-degree coverage. The Livox Avia, on the other hand, provided focused 3D point cloud data for a specific timestamp, which often represents the origin point or drone location. Initially, the timestamps of the two sensors are aligned to ensure temporal coherence. Then, the fused LiDAR data coordinates were compared to the drone’s known ground truth positions at the respective timestamps. Furthermore, the CL-Det clustering method, particularly DBSCAN, was applied to process the LiDAR data and isolate UAV-related points from the environment points. The DBSCAN method clustered the UAV-related point clouds and selected the object of interest, assuming that the biggest non-environment cluster represents the UAV. The UAV’s position in (x, y, z) coordinates is determined by calculating the mean of the selected cluster. To address the issue of sparse LiDAR data, the authors used historical estimations to fill in the data gaps, maintaining continuity and accuracy in UAV tracking even when certain LiDAR measurements were unavailable. Therefore, the proposed multi-sensor integration and clustering-based UAV detection and tracking system demonstrates the potential for real-time, precise UAV tracking, even in sparse data conditions.
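A rough sketch of the cluster-selection step described for CL-Det [63] is given below; the environment filtering, parameter values, and fallback logic are simplified assumptions rather than the authors' exact procedure:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def estimate_uav_position(points, last_known=None, eps=1.0, min_samples=3):
    """Return the mean of the largest DBSCAN cluster as the UAV (x, y, z),
    falling back to the previous estimate when the scan is too sparse."""
    points = np.asarray(points)
    if len(points) >= min_samples:
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
        valid = labels[labels != -1]          # -1 marks noise points
        if valid.size:
            biggest = np.bincount(valid).argmax()
            return points[labels == biggest].mean(axis=0)
    return last_known                          # reuse the historical estimate
```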
Clustering-based LiDAR object detection techniques are effective; however, they have some drawbacks. These methods usually rely on manually tuned parameters, such as distance thresholds, which must be adjusted for different environments, limiting their applicability. They may also struggle in complex or cluttered environments, resulting in over- or under-segmentation. Furthermore, clustering-based techniques frequently rely on handcrafted features, making them less successful in complex scenarios than deep learning models that learn features automatically. However, hybrid approaches that integrate clustering with machine learning or deep learning detection methods show promise in overcoming these issues.

4.2. ML and DL Approaches in LiDAR-Based Drone Detection

The authors of [68] introduced an innovative airborne LiDAR-based solution for detecting and localizing drone swarms using 3D deep learning, modifying and embedding the PointPillars neural network for detecting airborne objects. The PointPillars model was modified by horizontally dividing the LiDAR FoV into distinct layers, with anchor boxes assigned to each layer and the center of each anchor box placed at the center of its corresponding layer. A scenario-based digital twin was used to replicate close encounters and critical safety events, which were then used to generate training data. Data augmentation was carried out by supplementing real-world data with high-quality synthetic drone data, which served to increase the accuracy and efficiency of both training and inference. The efficacy of the proposed method was evaluated on real-world datasets using primary evaluation metrics and demonstrated notable performance, achieving 80% recall and 96% precision.
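For context, the sketch below illustrates the pillar-grouping step that PointPillars-style detectors build on, namely binning points into vertical columns on the x-y grid before a learned encoder processes each column; it is a generic illustration, not the modified network from [68]:

```python
from collections import defaultdict
import numpy as np

def group_into_pillars(points: np.ndarray, pillar_size: float = 0.16):
    """Assign each point to a vertical pillar on the x-y grid.

    Returns a dict mapping (ix, iy) grid indices to arrays of points; a
    PointPillars-style network would then encode each pillar into a fixed-size
    feature vector and scatter it back onto a 2D pseudo-image for a CNN.
    """
    pillars = defaultdict(list)
    indices = np.floor(points[:, :2] / pillar_size).astype(int)
    for (ix, iy), point in zip(indices, points):
        pillars[(ix, iy)].append(point)
    return {key: np.stack(value) for key, value in pillars.items()}

# Toy usage: count how many pillars a random cloud occupies.
cloud = np.random.default_rng(0).uniform(-2, 2, (100, 3))
print(len(group_into_pillars(cloud)))
```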
In [69], the authors presented a novel approach for indoor human localization and activity recognition (HAR) based on an autonomous moving robot outfitted with a 2D LiDAR, which might also be useful in drone detection. The first step of the proposed method relied on collecting data points during continuous movement, transforming them into absolute coordinates, cleaning the scan, removing noisy data points, identifying the person’s associated data points, and determining their location. In the next step, human activities are recognized based on the detected subject’s related data points. The preprocessed and interpolated data points were then used to train three variations of a convolutional long short-term memory (LSTM) neural network to classify nine types of human activities such as running, standing, walking, and falling down. The proposed LSTM models’ architecture is quite simple, with only a few layers. The simulation results were compared with traditional approaches that employ static 2D LiDARs mounted on the ground and proved the efficacy of the proposed approach, which outperformed the conventional methods in detecting falling down and lying body actions with accuracies of 81.2% and 99%, respectively.

4.3. Multimodal Sensor Fusion for Enhanced UAV Detection: Leveraging LiDAR and Complementary Technologies

The challenges discussed above with drone detection based on LiDAR data demonstrate the benefit of employing several sensors rather than a single sensor for object detection. For instance, LiDAR provides precise 3D spatial information, while cameras offer detailed visual data, and their integration overcomes LiDAR’s weather sensitivity and the camera’s difficulties in low-light conditions. Therefore, several studies [70,71] attempted to fuse the strengths of these two sensors to reach accurate detection of static and dynamic obstacles, making the combination particularly useful for UAV navigation in complex environments. Park et al. [70] proposed a novel sensor fusion-based airborne object detection and position estimation technique for UAV flight safety in BVLOS (Beyond Visual Line of Sight) operations. A CNN-based YOLOv2 architecture was employed to detect aerial objects in images captured by a vision sensor, while a clustering algorithm was used to detect objects from LiDAR point cloud data. To improve the detection accuracy, a Kalman filter was then used to integrate the two sensor streams based on a multiple estimated state fusion approach. The Kalman filter leveraged two- and three-dimensional constant acceleration models for estimating the states of the vision and LiDAR sensors, respectively. The 3D position of the detected object was determined using the LiDAR depth and the image’s center point, which is the outcome of the fusion method. The suggested approach was validated using simulations in the Gazebo simulator to provide a realistic flight scenario. Based on the simulation results, in comparison to individual camera or LiDAR systems, combining 3D spatial data from LiDAR with visual information significantly increased detection speed and accuracy, as well as decreased false positives. An innovative LiDAR-camera fusion approach to detecting a variety of static and dynamic obstacles for small UAVs in a low-altitude suburban environment was designed in [71]. Because the two sensors must be calibrated prior to integrating their data, the suggested system architecture first calibrates the LiDAR and camera sensors offline. The joint calibration results in a series of incomplete and sparsely distributed unordered discrete points that require a suitable point cloud segmentation technique for their processing. Point cloud segmentation is used to assign discrete points to detected obstacles, grouping points from the same obstacle into one group to assess their spatial distribution. Furthermore, fusion detection algorithms are designed for linear and surface static obstacles, as well as dynamic obstacles. To verify the effectiveness of the proposed fusion-based obstacle detection algorithms, the authors gathered obstacle data from a UAV platform equipped with LiDAR and camera sensors, focusing on scenarios such as power line avoidance, building avoidance, and encounters with small low-altitude UAVs in suburban areas. The detection results for different obstacles were evaluated using the over-segmentation rate and Intersection over Union (IoU) values. The comparative data verification analysis revealed that the proposed LiDAR-camera fusion-based detection algorithm beats LiDAR alone in various real-world motion scenarios, significantly increasing the distance and size detection accuracy for static obstacles, as well as the state estimation accuracy for dynamic obstacles.
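The sensor fusion idea in [70] can be illustrated with a reduced Kalman filter sketch that sequentially fuses a camera-derived and a LiDAR-derived position fix. Note that this sketch uses a constant-velocity model and assumed noise values for brevity, whereas [70] employs constant-acceleration models:

```python
import numpy as np

# Simplified 3D constant-velocity Kalman filter that sequentially fuses a
# position fix derived from the camera detection and one from the LiDAR cluster.

dt = 0.1
F = np.eye(6)
F[:3, 3:] = dt * np.eye(3)                    # state: [x, y, z, vx, vy, vz]
H = np.hstack([np.eye(3), np.zeros((3, 3))])  # both sensors observe position
Q = 0.01 * np.eye(6)                          # process noise (assumed)
R_CAM = np.diag([1.0, 1.0, 4.0])              # camera: poor depth accuracy (assumed)
R_LIDAR = 0.05 * np.eye(3)                    # LiDAR: accurate ranging (assumed)

def predict(x, P):
    return F @ x, F @ P @ F.T + Q

def update(x, P, z, R):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(6) - K @ H) @ P
    return x, P

x, P = np.zeros(6), np.eye(6)
x, P = predict(x, P)
x, P = update(x, P, np.array([10.0, 2.0, 30.0]), R_CAM)    # camera-based fix
x, P = update(x, P, np.array([10.2, 1.9, 29.5]), R_LIDAR)  # LiDAR-based fix
print(x[:3])  # fused 3D position estimate
```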
Another study [72] proposes a multi-sensory strategy for detecting and classifying small flying objects like drones by combining data from several sensors. This approach uses a mobile multi-sensor platform equipped with two 360° LiDAR scanners and pan-and-tilt cameras in both the visible and thermal IR spectrum. Based on the proposed approach, the multi-sensory system first detects and tracks flying objects using 3D LiDAR data, after which both IR and visible cameras are automatically directed to the object’s position to capture 2D images. A CNN is then applied to detect the region of interest (ROI) and classify the flying object as one of eight types of UAVs or birds. The suggested multi-sensor fusion assures more robust detection by integrating the characteristics of the above technologies: radar provides long-range detection, LiDAR gives high-resolution 3D spatial information, and cameras provide optical and infrared data for accurate object detection. The first multi-modal UAV classification and 3D pose estimation algorithm for an accurate and robust anti-UAV system was presented in [73]. The proposed multi-modal anti-UAV network includes UAV type classification and 3D tracking pipelines. The UAV type classification pipeline is mostly based on image data, whereas the UAV 3D tracking (pose estimation) pipeline is primarily based on LiDAR and radar data. A dataset of four UAV types was gathered from a stereo fisheye camera, conic and peripheral 3D LiDARs, as well as mmWave radar sensors. For better performance, the classification pipeline uses sequence fusion based on feature similarity, ROI cropping and keyframe selection using YOLOv9, while the pose estimation pipeline includes dynamic point analysis, a multi-object tracking system, and trajectory completion. The classification pipeline effectively combines information across sequences by applying a soft vote strategy, which improves the UAV type detection accuracy. Point cloud-based UAV detection was evaluated using primary evaluation metrics, and noisy detections and missed trajectories are corrected using the multiple object tracker. Overall, the combination of multi-modal data with advanced DL algorithms provides accurate UAV detection, classification and tracking. Nevertheless, the issues associated with sensor alignment and calibration, as well as the large computing resources required, necessitate additional exploration in future research.

4.3.1. Integrating LiDAR for Robust UAV Detection in GNSS-Denied Environments

LiDAR technology has proven to be a highly successful technique for detecting UAVs in GNSS-denied zones, where regular GPS systems may be inaccurate or unavailable. LiDAR provides precise spatial data and real-time mapping, allowing for robust detection, tracking, and navigation capabilities even in challenging environments such as urban regions, forests, or indoor places. Several studies [74,75,76] have investigated the use of LiDAR for UAV detection and tracking in GNSS-denied environments, with an emphasis on increasing resilience using sophisticated algorithms and sensor fusion approaches.
In [74], the authors investigated the potential of LiDAR as a camera sensor for real-time UAV tracking. They proposed a novel approach to UAV navigation in GNSS-denied areas based on the combination of signal images and dense 3D point cloud data captured by an Ouster LiDAR sensor. Three distinct data sequences were captured indoors, with the distance between the LiDAR and the UAV ranging from 0.5 to 8 meters. The UAV tracking process involved two basic steps: initializing the UAV’s position and fusing image signals with dense point cloud data. The absolute pose error (APE) and velocity error were calculated using ground truth data from a MOCAP (motion capture) system to validate the accuracy of the UAV’s estimated pose and velocity. The proposed method was compared against UAV tracking approaches that rely exclusively on either Ouster LiDAR images or point clouds, and it outperformed the single-modality models. To extend the application areas and enhance detection accuracy, the authors’ future work entails the integration of Ouster LiDAR images, point clouds, and standard RGB images. Another novel multi-sensory solution with several LiDAR sensors for accurate UAV tracking in GNSS-denied environments was presented in [75]. The authors also proposed a novel multi-LiDAR dataset particularly designed for multi-UAV tracking, captured from a 3D spinning LiDAR and two low-cost solid-state LiDAR sensors with various scan patterns and FoVs, as well as an RGB-D camera. The proposed dataset includes UAVs of different sizes, ranging from micro-aerial vehicles to standard commercial UAV platforms, recorded in both indoor and outdoor environments. Sensor calibration was performed by determining extrinsic parameters, aligning the point clouds of each sensor to the reference frame, and optimizing the relative transformation between the reference frame and the LiDARs. A MOCAP system was then employed to generate accurate ground truth data. Based on environmental scenarios and trajectory patterns, the dataset was organized into structured and unstructured indoor, as well as unstructured outdoor, categories. The tracking performance was evaluated using the Root Mean Squared Error (RMSE) metric. Overall, the research emphasizes the importance of addressing UAV tracking in GNSS-denied areas, and reveals that multi-LiDAR systems may greatly improve tracking accuracy and reliability. However, it suggests future research into the computational issues involved with such systems, as well as the possible integration of additional sensor modalities to increase robustness in diverse conditions. The authors of [76] developed a robust architecture for UAV navigation in GNSS-denied environments that integrates LiDAR, camera, and IMU sensors into a single odometry system. The proposed Resilient LiDAR-Visual-Inertial Odometry (R-LVIO) system aims to decrease trajectory error and enable reliable UAV operation by estimating the UAV’s state and creating a map of the surroundings. The system framework employs robust pose estimation approaches, including hybrid point cloud registration and visual feature depth cross-validation. The extrinsic parameters of the three distinct sensors are calibrated to a single coordinate system: the IMU frame serves as the primary coordinate system, while the camera and LiDAR act as subcoordinate systems. Gaussian probability-based uncertainty was employed to represent irregular surfaces in unstructured environments.
This uncertainty is then separated into eigenvalues and eigenvectors, and a pose estimation goal function is developed to accomplish precise localization. Furthermore, unstructured hierarchical environments are utilized to evaluate the localization accuracy of the proposed system. The proposed system is built to manage real-time processing and recover from individual sensor failure, making it ideal for dynamic and difficult GNSS-denied environments. In terms of accuracy and resilience, the experimental results outperformed previous approaches.
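To illustrate how such tracking results are typically evaluated, the short Python sketch below computes a per-frame absolute pose error against a time-aligned ground-truth trajectory (e.g., from a MOCAP system) and summarizes it with an RMSE value, in the spirit of the metrics reported in [74,75]. The synthetic trajectories and function names are illustrative assumptions, not code from the cited works.

```python
import numpy as np

def absolute_pose_error(estimated: np.ndarray, ground_truth: np.ndarray) -> np.ndarray:
    """Per-frame translational APE: Euclidean distance between estimated
    and ground-truth positions (both arrays are N x 3 and time-aligned)."""
    return np.linalg.norm(estimated - ground_truth, axis=1)

def rmse(errors: np.ndarray) -> float:
    """Root Mean Squared Error over the per-frame errors."""
    return float(np.sqrt(np.mean(errors ** 2)))

# Illustrative example: a synthetic ground-truth trajectory (standing in for
# MOCAP data) and a noisy estimate (standing in for a LiDAR-based tracker).
t = np.linspace(0.0, 10.0, 200)
ground_truth = np.stack([np.cos(t), np.sin(t), 0.1 * t], axis=1)
estimated = ground_truth + np.random.normal(scale=0.02, size=ground_truth.shape)

ape = absolute_pose_error(estimated, ground_truth)
print(f"mean APE: {ape.mean():.3f} m, RMSE: {rmse(ape):.3f} m")
```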

5. Discussion and Conclusions

Due to the continuous improvement of unmanned aerial vehicle technology, the scope of drone applications is also expanding rapidly. Their ability to carry various payloads is the reason for the increasingly frequent occurrence of drone incidents, which in turn underlines the need for robust and effective drone detection systems. This review paper begins with the Introduction section, which compares the advantages and drawbacks of traditional anti-drone systems.
The fundamentals of the LiDAR sensor were explained in Section 2 by describing the structure and principle of a typical LiDAR sensor as well as its components. Furthermore, the comprehensive LiDAR classification study gives an overview of the most recent developments in LiDAR scanning mechanisms. Figure 2 depicts the detailed classification of scanning and non-scanning LiDAR sensors, while Table 1 provides brief comparative information on LiDAR sensors based on these scanning mechanisms. 1D, 2D, and 3D LiDAR sensors were also discussed based on the dimensionality of the acquired data. Related works on recent LiDAR scanning mechanisms show that optomechanical [35,36,37], MEMS [30,31,32,33,34], and Flash [41,42] LiDAR are the most widely used scanning methods, each with its own advantages and limitations. Therefore, the choice of scanning mechanism must strike a compromise between range, resolution, cost, and application requirements.
Deep learning approaches to LiDAR data processing have advanced dramatically, allowing for effective handling of unstructured 3D point clouds for a variety of applications, including object detection, segmentation, and tracking. Section 3 presents a comparative analysis of the key deep learning approaches used for LiDAR data processing, highlighting their data representations, main techniques, advantages, and limitations in Table 2. These approaches differ in how they deal with 3D data, in computational complexity, and in their suitability for real-time or high-accuracy tasks. For instance, point-based methods [52,53], such as PointNet, are simple and successful for classifying point clouds, as they process the points directly and capture features using shared MLPs. Voxel-based approaches [54,55], such as VoxelNet, convert the sparse and irregular point cloud into a volumetric 3D grid of voxels and are better suited for detecting large-scale objects. Projection-based methods [56,57] convert 3D LiDAR point clouds into 2D views, sacrificing some 3D detail for computational efficiency, whereas graph-based approaches [58,59] excel at capturing complex spatial relationships but require more computational power. Hybrid methods [60,61] combine the benefits of several of these representations, although they are frequently more complex.
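To make the point-based category concrete, the following minimal PyTorch sketch applies a shared MLP to every point and aggregates a global feature with a symmetric max-pooling operation, which is the core idea behind PointNet. It is a simplified illustration under our own naming assumptions, omitting components such as the input and feature transformation networks of the original architecture.

```python
import torch
import torch.nn as nn

class TinyPointNet(nn.Module):
    """Minimal PointNet-style classifier: shared per-point MLP + max pooling."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Shared MLP: 1x1 convolutions apply the same weights to every point.
        self.shared_mlp = nn.Sequential(
            nn.Conv1d(3, 64, kernel_size=1), nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=1), nn.ReLU(),
            nn.Conv1d(128, 256, kernel_size=1), nn.ReLU(),
        )
        # Classification head applied to the pooled global feature.
        self.head = nn.Sequential(
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (batch, num_points, 3) -> (batch, 3, num_points)
        x = self.shared_mlp(points.transpose(1, 2))
        # Symmetric function (max over points) gives order invariance.
        global_feature = torch.max(x, dim=2).values
        return self.head(global_feature)

# Illustrative usage on a random batch of 1024-point clouds.
model = TinyPointNet(num_classes=2)   # e.g., "UAV" vs. "non-UAV"
clouds = torch.randn(4, 1024, 3)      # (batch, points, xyz)
print(model(clouds).shape)            # torch.Size([4, 2])
```

The max-pooling step is what makes the classifier invariant to the ordering of points in the cloud, which is why point-based methods can consume raw, unstructured LiDAR data directly.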
State-of-the-art studies on LiDAR-based UAV detection and tracking using clustering-based and deep learning-based approaches are discussed in Section 4. Several clustering and segmentation techniques exist, such as Euclidean Distance Clustering (EDC), k-means, and DBSCAN; research works handling UAV point cloud classification and pose estimation based on the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) method were considered in [62,63]. The basic clustering-based pipeline is illustrated in Figure 3. Furthermore, works using deep learning-based UAV detection and tracking methods were explained in [68,69]. Weather conditions such as rain, fog, and snow can affect LiDAR performance by adding noise and decreasing detection accuracy. Research into multi-modal sensor fusion, which merges LiDAR with other sensor modalities, has shown promise in addressing some of these issues; therefore, the benefits of employing several sensors rather than a single sensor for UAV detection and tracking were considered in [70,71,72,73]. The ability of LiDAR technology to provide accurate spatial data and real-time mapping, enabling robust UAV detection and tracking even in challenging GNSS-denied zones, was presented in [74,75,76].
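As a concrete illustration of the clustering-based pipeline in Figure 3, the sketch below removes near-ground points and groups the remaining returns with DBSCAN, treating each resulting cluster as a candidate object whose centroid and extent could be passed to a classifier or tracker. The synthetic point cloud, the ground-removal threshold, and the DBSCAN parameters are illustrative assumptions rather than values from the cited studies.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Illustrative scene: scattered low-lying background points plus a compact
# cluster standing in for a small UAV hovering about 2 m above the ground.
rng = np.random.default_rng(0)
background = rng.uniform(low=[-20, -20, 0.0], high=[20, 20, 0.3], size=(2000, 3))
uav = rng.normal(loc=[5.0, -3.0, 2.0], scale=0.15, size=(80, 3))
cloud = np.vstack([background, uav])

# 1) Crude ground removal: drop points close to the assumed ground plane (z ~ 0).
elevated = cloud[cloud[:, 2] > 0.5]

# 2) DBSCAN clustering: eps and min_samples are tuning parameters that depend
#    on sensor resolution and target range.
labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(elevated)

# 3) Each non-noise cluster (label != -1) becomes a candidate object.
for label in set(labels) - {-1}:
    cluster = elevated[labels == label]
    centroid = cluster.mean(axis=0)
    extent = cluster.max(axis=0) - cluster.min(axis=0)
    print(f"cluster {label}: {len(cluster)} points, "
          f"centroid {np.round(centroid, 2)}, extent {np.round(extent, 2)}")
```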
Overall, innovations in clustering-based, deep learning, and multi-sensor fusion models have considerably improved UAV detection, tracking, and classification, making LiDAR a critical technology for airspace security and autonomous systems. However, a significant issue is the scarcity of large, labeled datasets required to train deep learning models for UAV detection: the majority of existing datasets focus on autonomous driving, while far fewer are available for UAV detection. In addition, issues related to weather conditions and processing efficiency still require extended future research in this field.

Author Contributions

Conceptualization, U.S.; methodology, U.S.; investigation, U.S.; resources, U.S.; writing—original draft preparation, U.S.; writing—review and editing, U.S., E.T.M.; visualization, U.S.; supervision, E.T.M., L.I. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Science Committee of the Ministry of Science and Higher Education of the Republic of Kazakhstan (Grant No. AP14971031).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ADAS Advanced driving assistance systems
APE Absolute pose error
AVG Automated Guided Vehicle
AI Artificial Intelligence
AP Average precision
APD Avalanche photo diode
API Application Programming Interface
ATSP Asymmetric traveling salesman problem
BEV Bird’s-Eye View
BiLSTM Bidirectional Long short-term memory
BVLOS Beyond Visual Line of Sight
CBRDD Clustering algorithm based on relative distance and density
CLSTM Convolutional Long Short Term Memory
CNN Convolutional Neural Networks
CRNN Convolutional recurrent neural network
DBSCAN Density-Based Spatial Clustering of Applications with Noise
DGCNN Dynamic Graph Convolutional Neural Network
DNN Deep neural network
EDC Euclidean Distance Clustering
EIG Environmental information gain
FAA Federal Aviation Administration
FAEP Fast Autonomous Exploration Planner
FMCW Frequency-modulated continuous wave
FDR False discovery rate
FNR False negative rate
FoV Field of View
FUEL Fast UAV Exploration
GAN Generative Adversarial Networks
GCN Graph Convolutional Network
GNN Graph neural network
GRU Gated Recurrent Unit
HAR Human activity recognition
IoU Intersection over Union
IEEE Institute of Electrical and Electronics Engineers
IMU Inertial Measurement Unit
ITS Intelligent Transport System
LAEA LiDAR-assisted Exploration Algorithm
LD Laser diode
LiDAR Light detection and ranging
LoS Line-of-Sight
LSTM Long short-term memory
MLP Multi-Layer Perceptron
MOCAP Motion capture
MOT Multi-object tracking
MRS Mobile robot system
MVF Multi-view fusion
NIR Near-infrared ray
OPA Optical Phased Array
PIC Photonic integrated circuit
RCS Radar cross section
RGB-D Red, Green, Blue plus Depth
R-LVIO Resilient LiDAR-Visual-Inertial Odometry
RMSE Root Mean Squared Error
ROI Region of interest
SECOND Sparsely Embedded Convolutional Detection
SNR Signal-to-noise ratio
SPAD Single Photon Avalanche Diodes
TCP/IP Transmission Control Protocol/Internet Protocol
TDC Time-to-digital converter
ToF Time of Flight

References

  1. Drones and Airplanes: A Growing Threat to Aviation Safety. Available online: https://www.skysafe.io/blog/drones-and-airplanes-a-growing-threat-to-aviation-safety (accessed on 30 March 2024).
  2. Seidaliyeva, U.; Ilipbayeva, L.; Taissariyeva, K.; Smailov, N.; Matson, E.T. Advances and Challenges in Drone Detection and Classification Techniques: A State-of-the-Art Review. Sensors 2024, 24, 125. [Google Scholar] [CrossRef]
  3. Drone Incident Review: First Half of 2023. Available online: https://d-fendsolutions.com/blog/drone-incident-review-first-half-2023/ (accessed on 8 August 2023).
  4. Yan, J.; Hu, H.; Gong, J.; Kong, D.; Li, D. Exploring Radar Micro-Doppler Signatures for Recognition of Drone Types. Drones 2023, 7, 280. [Google Scholar] [CrossRef]
  5. Rudys, S.; Laučys, A.; Ragulis, P.; Aleksiejūnas, R.; Stankevičius, K.; Kinka, M.; Razgūnas, M.; Bručas, D.; Udris, D.; Pomarnacki, R. Hostile UAV Detection and Neutralization Using a UAV System. Drones 2022, 6, 250. [Google Scholar] [CrossRef]
  6. Brighente, A.; Ciattaglia, G.; Gambi; Peruzzi, G.; Pozzebon, A.; Spinsante, S. Radar-Based Autonomous Identification of Propellers Type for Malicious Drone Detection. In Proceedings of the 2024 IEEE Sensors Applications Symposium (SAS), Naples, Italy, 2024, pp. 1–6.
  7. Alam, S.S.; Chakma, A.; Rahman, M.H.; Bin Mofidul, R.; Alam, M.M.; Utama, I.B.K.Y.; Jang, Y.M. RF-Enabled Deep-Learning-Assisted Drone Detection and Identification: An End-to-End Approach. Sensors 2023, 23, 4202. [Google Scholar] [CrossRef]
  8. Yousaf, J.; Zia, H.; Alhalabi, M.; Yaghi, M.; Basmaji, T.; Shehhi, E.A.; Gad, A.; Alkhedher, M.; Ghazal, M. Drone and Controller Detection and Localization: Trends and Challenges. Appl. Sci. 2022, 12, 12612. [Google Scholar] [CrossRef]
  9. Aouladhadj, D.; Kpre, E.; Deniau, V.; Kharchouf, A.; Gransart, C.; Gaquière, C. Drone Detection and Tracking Using RF Identification Signals. Sensors 2023, 23, 7650. [Google Scholar] [CrossRef]
  10. Lofù, D.; Gennaro, P.D.; Tedeschi, P.; Noia, T.D.; Sciascio, E.D. URANUS: Radio Frequency Tracking, Classification and Identification of Unmanned Aircraft Vehicles. IEEE Open Journal of Vehicular Technology 2023, 4, 921–935. [Google Scholar] [CrossRef]
  11. Casabianca, P.; Zhang, Y. Acoustic-Based UAV Detection Using Late Fusion of Deep Neural Networks. Drones 2021, 5, 54. [Google Scholar] [CrossRef]
  12. Tejera-Berengue, D.; Zhu-Zhou, F.; Utrilla-Manso, M.; Gil-Pita, R.; Rosa-Zurera, M. Analysis of Distance and Environmental Impact on UAV Acoustic Detection. Electronics 2024, 13, 643. [Google Scholar] [CrossRef]
  13. Utebayeva, D.; Ilipbayeva, L.; Matson, E.T. Practical Study of Recurrent Neural Networks for Efficient Real-Time Drone Sound Detection: A Review. Drones 2023, 7, 26. [Google Scholar] [CrossRef]
  14. Salman, S.; Mir, J.; Farooq, M.T.; Malik, A.N.; Haleemdeen, R. Machine Learning Inspired Efficient Audio Drone Detection using Acoustic Features. In Proceedings of the 2021 International Bhurban Conference on Applied Sciences and Technologies (IBCAST), Islamabad, Pakistan, 2021, pp. 335–339.
  15. Sun, Y.; Zhi, X.; Han, H.; Jiang, S.; Shi, T.; Gong, J.; Zhang, W. Enhancing UAV Detection in Surveillance Camera Videos through Spatiotemporal Information and Optical Flow. Sensors 2023, 23, 6037. [Google Scholar] [CrossRef]
  16. Seidaliyeva, U.; Akhmetov, D.; Ilipbayeva, L.; Matson, E.T. Real-Time and Accurate Drone Detection in a Video with a Static Background. Sensors 2020, 20, 3856. [Google Scholar] [CrossRef]
  17. Samadzadegan, F.; Dadrass Javan, F.; Ashtari Mahini, F.; Gholamshahi, M. Detection and Recognition of Drones Based on a Deep Convolutional Neural Network Using Visible Imagery. Aerospace 2022, 9, 31. [Google Scholar] [CrossRef]
  18. Jamil, S.; Fawad; Rahman, M. ; Ullah, A.; Badnava, S.; Forsat, M.; Mirjavadi, S.S. Malicious UAV Detection Using Integrated Audio and Visual Features for Public Safety Applications. Sensors 2020, 20, 3923. [Google Scholar] [CrossRef]
  19. J. Kim et al. Deep Learning Based Malicious Drone Detection Using Acoustic and Image Data. In Proceedings of the 2022 Sixth IEEE International Conference on Robotic Computing (IRC), Italy, 2022, pp. 91–92.
  20. Aledhari, M.; Razzak, R.; Parizi, R.M.; Srivastava, G. Sensor Fusion for Drone Detection. In Proceedings of the 2021 IEEE 93rd Vehicular Technology Conference (VTC2021-Spring), Helsinki, Finland, 2021, pp. 1–7.
  21. Xie, W.; Wan, Y.; Wu, G.; Li, Y.; Zhou, F.; Wu, Q. A RF-Visual Directional Fusion Framework for Precise UAV Positioning. IEEE Internet of Things Journal.
  22. Ki, M.; Cha, J.; Lyu, H. Detect and avoid system based on multi sensor fusion for UAV. In Proceedings of the 2018 International Conference on Information and Communication Technology Convergence (ICTC), Jeju, Korea (South), 2018, pp. 1107–1109.
  23. Mehta, V.; Dadboud, F.; Bolic, M.; Mantegh, I. A Deep Learning Approach for Drone Detection and Classification Using Radar and Camera Sensor Fusion. In Proceedings of the 2023 IEEE Sensors Applications Symposium (SAS), Ottawa, ON, Canada, 2023, pp. 1–6.
  24. Li, N.; Ho, C.; Xue, J.; Lim, L.; Chen, G.; Fu, Y.H.; Lee, L. A Progress Review on Solid-State LiDAR and Nanophotonics-Based LiDAR Sensors. Laser and Photonics Reviews 2022. [Google Scholar] [CrossRef]
  25. Behroozpour, B.; Sandborn, P.; Wu, M.; Boser, B. Lidar System Architectures and Circuits. IEEE Commun. Mag. 2017, 55, 135–142. [Google Scholar] [CrossRef]
  26. Alaba, S.Y.; Ball, J.E. A Survey on Deep-Learning-Based LiDAR 3D Object Detection for Autonomous Driving. Sensors 2022, 22, 9577. [Google Scholar] [CrossRef]
  27. Lee, S.; Lee, D.; Choi, P.; Park, D. Accuracy–Power Controllable LiDAR Sensor System with 3D Object Recognition for Autonomous Vehicle. Sensors 2020, 20, 5706. [Google Scholar] [CrossRef]
  28. Chen, C.; Guo, J.; Wu, H.; Li, Y.; Shi, B. Performance Comparison of Filtering Algorithms for High-Density Airborne LiDAR Point Clouds over Complex LandScapes. Remote Sens. 2021, 13, 2663. [Google Scholar] [CrossRef]
  29. Feneyrou, P.; Leviandier, L.; Minet, J.; Pillet, G.; Martin, A.; Dolfi, D.; Schlotterbeck, J.P.; Rondeau, P.; Lacondemine, X.; Rieu, A.; et al. Frequency-modulated multifunction lidar for anemometry, range finding, and velocimetry—1. Theory and signal processing. Appl. Opt. 2017, 56, 9663–9675. [Google Scholar] [CrossRef]
  30. Wang, D.; Watkins, C.; Xie, H. MEMS Mirrors for LiDAR: A Review. Micromachines 2020, 11, 456. [Google Scholar] [CrossRef]
  31. Lin, C.-H.; Zhang, H.-S.; Lin, C.-P.; Su, G.-D.J. Design and Realization of Wide Field-of-View 3D MEMS LiDAR. IEEE Sensors Journal 2022, 22, 115–120. [Google Scholar] [CrossRef]
  32. Berens, F.; Reischl, M.; Elser, S. Generation of synthetic Point Clouds for MEMS LiDAR Sensor. TechRxiv, 21 April. [CrossRef]
  33. Haider, A.; Cho, Y.; Pigniczki, M.; Köhler, M.H.; Haas, L.; Kastner, L.; Fink, M.; Schardt, M.; Cichy, Y.; Koyama, S.; et al. Performance Evaluation of MEMS-Based Automotive LiDAR Sensor and Its Simulation Model as per ASTM E3125-17 Standard. Sensors 2023, 23, 3113. [Google Scholar] [CrossRef]
  34. Yoo, H.W.; Druml, N.; Brunner, D.; et al. MEMS-based lidar for autonomous driving. Elektrotech. Inftech. 2018, 135, 408–415. [Google Scholar] [CrossRef]
  35. Li, L.; Xing, K.; Zhao, M.; Wang, B.; Chen, J.; Zhuang, P. Optical–Mechanical Integration Analysis and Validation of LiDAR Integrated Systems with a Small Field of View and High Repetition Frequency. Photonics 2024, 11, 179–10. [Google Scholar] [CrossRef]
  36. Raj, T.; Hashim, F.H.; Huddin, A.B.; Ibrahim, M.F.; Hussain, A. A Survey on LiDAR Scanning Mechanisms. Electronics 2020, 9, 741. [Google Scholar] [CrossRef]
  37. Zheng, H.; Han, Y.; Qiu, L.; Zong, Y.; Li, J.; Zhou, Y.; He, Y.; Liu, J.; Wang, G.; Chen, H.; et al. Long-Range Imaging LiDAR with Multiple Denoising Technologies. Appl. Sci. 2024, 14, 3414. [Google Scholar] [CrossRef]
  38. Wang, Zh.; Menenti, M. Challenges and Opportunities in Lidar Remote Sensing. Frontiers in Remote Sensing 2021, 2. [Google Scholar] [CrossRef]
  39. Yi, Y.; Wu, D.; Kakdarvishi, V.; Yu, B.; Zhuang, Y.; Khalilian, A. Photonic Integrated Circuits for an Optical Phased Array. Photonics 2024, 11, 243. [Google Scholar] [CrossRef]
  40. Yunhao Fu, Baisong Chen, Wenqiang Yue, Min Tao, Haoyang Zhao, Yingzhi Li, Xuetong Li, Huan Qu, Xueyan Li, Xiaolong Hu, Junfeng Song. Target-adaptive optical phased array lidar[J]. Photonics Research 2024, 12, 904. [Google Scholar] [CrossRef]
  41. Tontini, A.; Gasparini, L.; Perenzoni, M. Numerical Model of SPAD-Based Direct Time-of-Flight Flash LIDAR CMOS Image Sensors. Sensors 2020, 20, 5203. [Google Scholar] [CrossRef]
  42. Xia, Z.Q. Flash LiDAR single photon imaging over 50 km. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2023, 48, 1601–1606. [Google Scholar] [CrossRef]
  43. Assaf, E.H.; von Einem, C.; Cadena, C.; Siegwart, R.; Tschopp, F. High-Precision Low-Cost Gimballing Platform for Long-Range Railway Obstacle Detection. Sensors 2022, 22, 474. [Google Scholar] [CrossRef]
  44. Athavale, R.; et al. Low cost solution for 3D mapping of environment using 1D LIDAR for autonomous navigation. IOP Conf. Ser. Mater. Sci. Eng. 2019, 561, 012104. [Google Scholar] [CrossRef]
  45. Dogru, S.; Marques, L. Drone Detection Using Sparse Lidar Measurements. IEEE Robotics and Automation Letters 2022, 7, 3062–3069. [Google Scholar] [CrossRef]
  46. Hou, X.; Pan, Z.; Lu, L.; Wu, Y.; Hu, J.; Lyu, Y.; Zhao, C. LAEA: A 2D LiDAR-Assisted UAV Exploration Algorithm for Unknown Environments. Drones 2024, 8, 128. [Google Scholar] [CrossRef]
  47. Gonz, A.; Torres, F.
  48. Mihálik, M.; Hruboš, M.; Vestenický, P.; Holečko, P.; Nemec, D.; Malobický, B.; Mihálik, J. A Method for Detecting Dynamic Objects Using 2D LiDAR Based on Scan Matching. Appl. Sci. 2022, 12, 5641. [Google Scholar] [CrossRef]
  49. Fagundes, L.A., Jr.; Caldeira, A.G.; Quemelli, M.B.; Martins, F.N.; Brandão, A.S. Analytical Formalism for Data Representation and Object Detection with 2D LiDAR: Application in Mobile Robotics. Sensors 2024, 24, 2284. [Google Scholar] [CrossRef]
  50. Tasnim, A.A.; Kuantama, E.; Han, R.; Dawes, J.; Mildren, R.; Nguyen, P. Towards Robust Lidar-based 3D Detection and Tracking of UAVs. In Proceedings of the DroNet ’23: Ninth Workshop on Micro Aerial Vehicle Networks, Systems, and Applications, Helsinki, Finland, 18 June 2023. [Google Scholar]
  51. Cho, M.; Kim, E. 3D LiDAR Multi-Object Tracking with Short-Term and Long-Term Multi-Level Associations. Remote Sens. 2023, 15, 5486. [Google Scholar] [CrossRef]
  52. Peng, H.; Huang, D. Small Object Detection with Lightweight PointNet Based on Attention Mechanisms. J. Phys.: Conf. Ser. 2024, 2829. [Google Scholar] [CrossRef]
  53. Nong, X.; Bai, W.; Liu, G. Airborne LiDAR point cloud classification using PointNet++ network with full neighborhood features. PLoS ONE 2023, 18, e0280346. [CrossRef]
  54. Ye, M.; Xu, S.; Cao, T. HVNet: Hybrid Voxel Network for LiDAR Based 3D Object Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020; pp. 1631–1640. [Google Scholar]
  55. Chen, Y.; Liu, J.; Zhang, X.; Qi, X.; Jia, J. VoxelNeXt: Fully Sparse VoxelNet for 3D Object Detection and Tracking. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023; pp. 21674–21683.
  56. Milioto, A.; Vizzo, I.; Behley, J.; Stachniss, C. RangeNet++: Fast and Accurate LiDAR Semantic Segmentation. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 2019, pp. 4213–4220.
  57. Alnaggar, Y.; Afifi, M.; Amer, K.; ElHelw, M. Multi Projection Fusion for Real-Time Semantic Segmentation of 3D LiDAR Point Clouds. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2021; pp. 1800–1809.
  58. Chen, J.; Lei, B.; Song, Q.; Ying, H.; Chen, Z.; Wu, J. A Hierarchical Graph Network for 3D Object Detection on Point Clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020; pp. 392–401.
  59. Liu, X.; Zhang, B.; Liu, N. The Graph Neural Network Detector Based on Neighbor Feature Alignment Mechanism in LIDAR Point Clouds. Machines 2023, 11, 116. [Google Scholar] [CrossRef]
  60. Lis, K.; Kryjak, T. PointPillars Backbone Type Selection for Fast and Accurate LiDAR Object Detection. In: Chmielewski, L.J., Orłowski, A. (eds) Computer Vision and Graphics. ICCVG 2022. Lecture Notes in Networks and Systems, 2023, vol. 598. Springer, Cham.
  61. Manduhu, M.; Dow, A.; Trslic, P.; Dooly, G.; Blanck, B.; Riordan, J. Airborne Sense and Detect of Drones using LiDAR and adapted PointPillars DNN. arXiv 2023. [Google Scholar]
  62. Zheng, L.; Zhang, P.; Tan, J.; Li, F. The Obstacle Detection Method of UAV Based on 2D Lidar. IEEE Access 2019, 7, 163437–163448. [Google Scholar] [CrossRef]
  63. Xiao, J.; Pisutsin, P.; Tsao, C.W.; Feroskhan, M. Clustering-based Learning for UAV Tracking and Pose Estimation. arXiv preprint, arXiv:2405.16867.
  64. Wu, D.; Liang, Z.; Chen, G. Deep learning for LiDAR-only and LiDAR-fusion 3D perception: a survey. Intell. Robot. 2022, 2, 105–129. [Google Scholar] [CrossRef]
  65. Ding, Z.; Sun, Y.; Xu, S.; Pan, Y.; Peng, Y.; Mao, Z. Recent Advances and Perspectives in Deep Learning Techniques for 3D Point Cloud Data Processing. Robotics 2023, 12, 100. [Google Scholar] [CrossRef]
  66. Alaba, S.Y.; Ball, J.E. A Survey on Deep-Learning-Based LiDAR 3D Object Detection for Autonomous Driving. Sensors 2022, 22, 9577. [Google Scholar] [CrossRef]
  67. Sun, Z.; Li, Z.; Liu, Y. An Improved Lidar Data Segmentation Algorithm Based on Euclidean Clustering. In Proceedings of the 11th International Conference on Modelling, Identification and Control (ICMIC 2019). Lecture Notes in Electrical Engineering, 2019, vol. 582. Springer, Singapore.
  68. A. Dow et al. Intelligent Detection and Filtering of Swarm Noise from Drone Acquired LiDAR Data using PointPillars. In Proceedings of the OCEANS 2023 - Limerick, Limerick, Ireland, 2023, pp. 1-6.
  69. Bouazizi, M.; Lorite Mora, A.; Ohtsuki, T. A 2D-Lidar-Equipped Unmanned Robot-Based Approach for Indoor Human Activity Detection. Sensors 2023, 23, 2534. [Google Scholar] [CrossRef]
  70. Park, C.; Lee, S.; Kim, H.; Lee, D. Aerial Object Detection and Tracking based on Fusion of Vision and Lidar Sensors using Kalman Filter for UAV. International Journal of Advanced Smart Convergence 2020, 9, 232–238. [Google Scholar] [CrossRef]
  71. Ma, Z.; Yao, W.; Niu, Y.; et al. UAV low-altitude obstacle detection based on the fusion of LiDAR and camera. Auton. Intell. Syst. 2021, 1, 12. [Google Scholar] [CrossRef]
  72. Hammer, M.; Borgmann, B.; Hebel, M.; Arens, M. A multi-sensorial approach for the protection of operational vehicles by detection and classification of small flying objects. In Proceedings of the Electro-Optical Remote Sensing XIV, online, 2020.
  73. Deng, T.; Zhou, Y.; Wu, W.; Li, M.; Huang, J.; Liu, Sh.; Song, Y.; Zuo, H.; Wang, Y.; ue, Y.; Wang, H.; Chen, W. Multi-Modal UAV Detection, Classification and Tracking Algorithm – Technical Report for CVPR 2024 UG2 Challenge. 2024. doi:10.48550/arXiv.2405.16464.
  74. Sier, H.; Yu, X.; Catalano, I.; Queralta, J.P.; Zou, Z.; Westerlund, T. UAV Tracking with Lidar as a Camera Sensor in GNSS-Denied Environments. In Proceedings of the 2023 International Conference on Localization and GNSS (ICL-GNSS), Castellón, Spain, 2023, pp. 1–7.
  75. Catalano, I.; Yu, X.; Queralta, J.P. Towards Robust UAV Tracking in GNSS-Denied Environments: A Multi-LiDAR Multi-UAV Dataset. In Proceedings of the 2023 IEEE International Conference on Robotics and Biomimetics (ROBIO), Koh Samui, Thailand, 2023, pp. 1–7.
  76. Zhang, B.; Shao, X.; Wang, Y.; Sun, G.; Yao, W. R-LVIO: Resilient LiDAR-Visual-Inertial Odometry for UAVs in GNSS-denied Environment. Drones 2024, 8, 487. [Google Scholar] [CrossRef]
Figure 1. Structure and principle of a typical LiDAR sensor. The LD emits the laser, which is focused by a light-transmitting lens; the emitted laser is reflected back from the target object and received by the APD via the light-receiving lens. The TDC then measures the time difference between the moment the LD emits the laser and the moment the APD receives it, and converts this difference into the ToF. Finally, the signal-processing unit, also known as a microprocessor (MP), receives the ToF from the TDC and computes the distance between the LiDAR sensor and the target object [26].
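As a small numerical companion to this caption, the round-trip time measured by the TDC converts to range as d = c·ToF/2. The minimal sketch below applies this relation; the example timing value is hypothetical.

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def range_from_tof(tof_seconds: float) -> float:
    """Distance to the target from the measured round-trip time of flight."""
    return SPEED_OF_LIGHT * tof_seconds / 2.0

# Example: a round-trip time of 666.7 ns corresponds to a target ~100 m away.
print(f"{range_from_tof(666.7e-9):.2f} m")
```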
Figure 2. Types of LiDAR Sensors. Solid-state LiDAR refers to both non-scanning and non-mechanical scanning LiDAR systems. OPA and Flash LiDAR are both solid-state LiDARs since their beam-steering and scanning mechanisms do not have any moving mechanical components. MEMS is known as a quasi-solid-state scanner because, unlike typical LiDAR systems, it lacks massive mechanical moving parts but contains microscopic moving components.
Figure 3. Clustering-based object detection method pipeline.
Table 1. Comparative Analysis of LiDAR Sensors based on scanning mechanism.
LiDAR sensor type | Description | Field of View (FoV) | Scanning mechanism | Use cases | Advantages | Limitations
MEMS LiDAR [30,31,32,33,34] | uses moving micro-mirror plates to steer the laser beam in free space while the rest of the system’s components remain motionless | moderate, depends on mirror steering angle | quasi-solid-state scanning (a combination of solid-state LiDAR and mechanical scanning) | autonomous vehicles; drones and robotics; medical imaging; space exploration; mobile devices | accurate steering with minimal moving components; superior in terms of size, resolution, scanning speed, and cost | limited range and FoV; sensitivity to vibrations and environmental factors
Optomechanical LiDAR [35,36,37] | uses mechanical/moving components (mirrors, prisms or entire sensor heads) to steer the laser beam and scan the environment | wide FoV (up to 360°) | rotating and oscillating mirror, spinning prism | remote sensing; self-driving cars; aerial surveying and mapping; robotics; security | long range; high accuracy and resolution; wide FoV; fast scanning | bulky and heavy; high cost and power consumption
Electromechanical LiDAR [38] | uses electrically controlled motors or actuators to move mechanical parts that steer the laser beam in various directions | wide FoV (up to 360°) | mirror, prism or entire sensor head | autonomous vehicles; remote sensing; atmospheric studies; surveying and mapping | enhanced scanning patterns; wide FoV; moderate cost; long range; high precision and accuracy | high power consumption; limited durability; bulky
OPA LiDAR [39,40] | employs optical phased arrays (OPAs) to steer the laser beam without any moving components | flexible FoV, electronically controlled, can be narrow or wide | solid-state (non-mechanical) beam-steering mechanism | autonomous vehicles; high-precision sensing; compact 3D mapping systems | no moving parts; rapid beam steering; compact size; energy efficiency | limited steering range and beam quality; high manufacturing costs
Flash LiDAR [41,42] | employs a broad laser beam and a large photodetector array to gather 3D data in a single shot | wide FoV (up to 120° horizontally, 90° vertically) | no scanning mechanism | terrestrial and space applications; advanced driving assistance systems (ADAS) | no moving parts; instantaneous capture; real-time 3D imaging | limited range; lower resolution; sensitive to light and weather conditions
Table 2. Deep learning approaches for LiDAR data processing.
DL approach | Data representation | Main techniques | Strengths | Limitations | Examples
Point-based [52,53] | point clouds | directly processes point clouds and captures features using shared MLPs | direct processing of raw point clouds; efficient for sparse data; avoids voxelization and quantization concerns | computationally expensive due to large-scale and irregular point clouds | PointNet, PointNet++
Voxel-based [54,55] | voxel grids | converts the sparse and irregular point cloud into a volumetric 3D grid of voxels | well-structured representation; easy to use with 3D CNNs; suitable for capturing global context | high memory usage and computational cost due to voxelization; loss of precision in 3D space due to quantization; loss of detail in sparse data regions | VoxelNet, SECOND
Projection-based [56,57] | plane (image), spherical, cylindrical, BEV projection | projects the 3D point cloud onto a 2D plane | efficient processing using 2D CNNs | loss of spatial features due to 3D-to-2D projection | RangeNet++, BEV, PIXOR, SqueezeSeg
Graph-based [58,59] | adjacency matrix, feature matrices, graph Laplacian | models point clouds as a graph, where each point is regarded as a node and edges represent the interactions between them | effective for dealing with sparse, non-uniform point clouds; enables both local and global context-aware detection; ideal for capturing spatial relationships between points | high computational complexity due to large point clouds | GNN, DGCNN
Hybrid approach [60,61] | combination of raw point clouds, voxels, projections, etc. | combines several methods to improve the accuracy of 3D object detection, segmentation, and classification tasks | improved object localization and segmentation accuracy; flexibility | high memory and computational resources; complex architecture | PointPillars, MVF
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits the free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.