2.1. Key Components and Operational Principles of LiDAR Systems
LiDAR devices use laser light to determine distances and create detailed 3D maps of their surroundings [24,25,26]. A LiDAR sensor, which determines the precise distance to a target object by emitting a laser, includes a laser diode (LD), an avalanche photodiode (APD), a time-to-digital converter (TDC), and signal processing modules, as depicted in Figure 1.
These primary components of a LiDAR system are required for data capture, processing, and interpretation in order to measure distances and detect objects:
1. Laser. Every LiDAR payload includes a high-powered laser, whose wavelength and intensity vary depending on the type of data being gathered [24,25,26]. The laser generates pulses or a continuous beam emitted at a certain frequency or wavelength (usually in the NIR spectrum, such as 905 nm or 1550 nm). According to the transmission principle and application, lasers can be either pulsed or continuous-wave (CW). The pulsed laser is the most common form in LiDAR systems: it emits light with a high instantaneous peak power in short pulses of several nanoseconds, and the system determines the distance to the target object by measuring each pulse's time-of-flight (ToF). Because the instantaneous intensity is high, this approach is particularly accurate for long-distance measurement and is frequently utilized in mapping, terrain modeling, and self-driving vehicles. Instead of producing pulses, CW lasers emit a continuous beam of light and come in two variants: amplitude-modulated continuous-wave (AMCW) and frequency-modulated continuous-wave (FMCW). AMCW transmits a continuous laser and measures distance from the phase difference between the transmitted and reflected waves. Because it relies on this phase shift, it is not appropriate for accurate long-distance measurement [24,25,26]. CW LiDARs therefore often employ the FMCW approach, which compensates for the AMCW's inaccurate distance estimation. FMCW modulates the frequency of the light and measures the frequency and phase shifts of the reflected signal, including the shift induced by the Doppler effect. FMCW provides high accuracy and can estimate both the distance and velocity of moving objects. However, it is substantially more complicated than AMCW and is less suitable for long-distance measurement, since the frequency fluctuates owing to the Doppler effect.
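The three ranging principles above reduce to simple formulas. The sketch below illustrates them with placeholder parameter values (the modulation frequency, chirp bandwidth, and duration are illustrative assumptions, not tied to any specific sensor):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def pulsed_tof_distance(t_flight_s: float) -> float:
    """Pulsed LiDAR: distance from round-trip time of flight."""
    return C * t_flight_s / 2.0

def amcw_distance(phase_shift_rad: float, mod_freq_hz: float) -> float:
    """AMCW: distance from the phase difference of the modulation envelope.
    Unambiguous only within half the modulation wavelength."""
    return (C / (2.0 * mod_freq_hz)) * (phase_shift_rad / (2.0 * math.pi))

def fmcw_distance(beat_freq_hz: float, chirp_bandwidth_hz: float,
                  chirp_duration_s: float) -> float:
    """FMCW: distance from the beat frequency of a linear chirp
    (static target, i.e. no Doppler term)."""
    return C * beat_freq_hz * chirp_duration_s / (2.0 * chirp_bandwidth_hz)

print(pulsed_tof_distance(1e-6))          # 1 µs round trip ≈ 150 m
print(amcw_distance(math.pi, 10e6))       # half-cycle shift at 10 MHz ≈ 7.5 m
print(fmcw_distance(1e6, 100e6, 10e-6))   # 1 MHz beat ≈ 15 m
```

Note how the AMCW result wraps every half modulation wavelength, which is precisely the ambiguity that limits its long-range accuracy.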
2. Scanner/beam steering mechanism. The scanner directs and steers the laser beam across the surroundings while collecting the returning signals; it defines the LiDAR's coverage area and impacts its resolution. Rotating mirrors and oscillating systems are used to control the laser beam. Section 2.2 explains the scanning mechanisms used in LiDAR sensors.
3. Receiver (photodetector). Avalanche photodiodes (APDs) and single-photon avalanche diodes (SPADs) are examples of common detectors. These detectors capture the weak reflected light pulse and convert it into an electrical signal [24,25,26].
4. Signal processing unit. This unit converts the raw data acquired by the photodetector into useful information such as distance measurements and 3D object maps. It removes noise and unnecessary data captured by the photodetector and enhances detection accuracy by focusing only on laser pulses returned from the objects of interest. It also amplifies the weak signal reflected by distant or small objects, which is essential in long-range LiDAR systems or low-reflectivity settings. After filtering and amplification, this unit estimates the laser pulse's ToF. Finally, it transforms the processed data into a 3D point cloud that depicts the scanned surroundings, where each point represents the position at which the LiDAR detected a target object. In some cases, the signal processing unit also performs data compression and transmission to cope with the high data rates of real-time applications [27,28,29].
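A minimal sketch of the first two steps this unit performs, detecting the return pulse in the digitized photodetector waveform and converting its arrival time to a range, might look as follows (the ADC sample rate, threshold, and synthetic waveform are illustrative assumptions):

```python
import numpy as np

C = 299_792_458.0          # speed of light, m/s
SAMPLE_RATE = 1e9          # 1 GS/s ADC (assumed for illustration)

def estimate_range(waveform, threshold):
    """Return range in metres to the first echo above `threshold`, or None."""
    hits = np.flatnonzero(waveform > threshold)
    if hits.size == 0:
        return None                      # no echo detected
    t_flight = hits[0] / SAMPLE_RATE     # first threshold crossing, seconds
    return C * t_flight / 2.0            # round trip -> one-way distance

# Synthetic waveform: noise floor with an echo 100 samples (100 ns) in.
rng = np.random.default_rng(0)
wave = rng.normal(0.0, 0.01, 1000)
wave[100:105] += 1.0                     # simulated return pulse
print(estimate_range(wave, threshold=0.5))  # ≈ 15 m
```

Real receivers use more robust pulse discrimination (e.g. constant-fraction detection) than a fixed threshold, but the ToF-to-distance step is the same.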
5. Power supply unit. It supplies the electrical energy necessary to power all of the system's components and ensures crucial operations such as powering the laser emitter, regulating voltage for the receiver, enabling signal processing, controlling beam steering, and managing energy efficiency [27].
6. Positioning and orientation unit. This unit provides the accurate global position and orientation of the LiDAR sensor during data gathering. It includes a GPS/GNSS (Global Positioning System/Global Navigation Satellite System) unit, an IMU (Inertial Measurement Unit), an INS (Inertial Navigation System), as well as SLAM (Simultaneous Localization and Mapping). The GPS/GNSS unit enables precise positioning, georeferencing, and trajectory tracking of the LiDAR system within a global coordinate system. The IMU provides detailed information on the sensor's orientation, acceleration, and movement, keeping the LiDAR data correct even when the sensor is in motion or vibrating. The INS fuses data from the GPS/GNSS and IMU to produce continuous and highly precise position and orientation information, particularly in challenging environments or during dynamic motion. SLAM enables the LiDAR sensor to simultaneously map and localize itself in GPS-denied environments.
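The GNSS/IMU fusion performed by an INS can be caricatured in one dimension with a complementary filter: the IMU is integrated for smooth short-term motion, and absolute GNSS fixes correct the accumulating drift. The blending gain and data below are illustrative assumptions; a real INS uses a Kalman-filter formulation in full 3D.

```python
def fuse(gnss_pos, imu_accel, dt, alpha=0.98):
    """1-D complementary filter: blend dead-reckoned position (IMU)
    with absolute GNSS fixes. `alpha` weights the IMU prediction."""
    pos, vel = gnss_pos[0], 0.0
    fused = [pos]
    for z, a in zip(gnss_pos[1:], imu_accel[1:]):
        vel += a * dt                          # integrate accel -> velocity
        pred = pos + vel * dt                  # dead-reckoned prediction
        pos = alpha * pred + (1 - alpha) * z   # pull toward GNSS fix
        fused.append(pos)
    return fused
```

The design point is that the IMU dominates between fixes (high `alpha`), so short GNSS outages degrade the estimate only gradually.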
7. Control system unit. The control system coordinates all of the LiDAR sensor's components. It handles laser emission, receiver synchronization, data gathering, and filtering operations. It also interacts with other subsystems, such as the GPS or IMU, and sets scanning parameters and power management [28,29].
2.2. Types of LiDAR Sensors: A Comprehensive Classification
LiDAR sensors may be classified according to a number of factors, including scanning mechanism, operational environment, measurement technique, and application area.
According to their scanning methods, LiDAR systems are classified as non-scanning or scanning LiDAR (see Figure 2). Scanning LiDAR sensors are further divided into mechanical and non-mechanical LiDAR. Mechanical scanning LiDAR sensors use physically moving components (rotating mirrors, oscillating prisms, the entire sensor head, etc.) to steer the laser beam across the environment and measure distances to create accurate 3D maps. There are several types of mechanical scanning LiDAR: micro-electromechanical systems (MEMS) [30,31,32,33,34], optomechanical [35,36,37] and electromechanical [38] scanning LiDAR. MEMS scanning mechanisms use small movable micro-mirror plates to steer laser beams in free space and produce scanning patterns in LiDAR systems. MEMS-based scanning mirrors are generally classified as either resonant or non-resonant [34]. Resonant MEMS mirrors operate at a specific resonant frequency and provide broad scan angles and rapid scanning speeds, making them ideal for automotive applications; nevertheless, they suffer from non-uniform scan speeds and susceptibility to environmental changes. Non-resonant MEMS mirrors are more versatile in trajectory design, but they have smaller scanning angles and require more complicated controllers to maintain scan quality. MEMS scanning mechanisms are used in a wide range of applications due to their small size, low cost and power consumption, and ability to create high-resolution scans. MEMS LiDAR is known as quasi-solid-state LiDAR, since its moving parts merely steer the laser beam in free space and do not move any optical components. This makes MEMS LiDARs excellent for autonomous cars, robotics, space exploration, medical imaging, and mobile applications where weight, size, and energy efficiency are all important considerations. However, MEMS mirrors face issues such as limited range and field of view (FoV) and sensitivity to environmental factors such as temperature and vibrations, which can affect their resonant frequency and overall performance [30,31,32,33,34]. Optomechanical scanning systems often use rotating polygon mirrors and Risley prisms that physically rotate to steer the laser beam over the scene being scanned. These systems are well suited to automotive LiDAR, remote sensing, and even biological imaging, since they can maintain accurate beam control over long durations and achieve a high FoV and range. Optomechanical systems are extremely effective, but they are heavy and contain moving components that may wear out over time, making them less suitable for small or lightweight applications than MEMS-based equivalents. However, for cases requiring high power and long-range capabilities, optomechanical mechanisms remain the dominant choice [35,36,37]. Electromechanical scanning devices employ rotating or oscillating mirrors driven by electric motors (such as stepper motors or servo motors) to deflect a laser beam in specific directions. This enables the LiDAR system to perform 2D or 3D scanning over broad regions. These systems are among the earliest and most extensively used scanning mechanisms in LiDAR because of their simplicity, high angular resolution, large horizontal and vertical FoV, high accuracy, and long-range capabilities. They are widely utilized in self-driving cars, infrastructure monitoring, environmental mapping, remote sensing, and atmospheric studies. While electromechanical systems are robust and reliable, they do have limitations: moving parts are prone to wear and tear, which can jeopardize long-term reliability, and these systems tend to be bulkier and heavier than newer, more compact options such as MEMS or solid-state LiDAR, which offer comparable capabilities without moving components [35,38].
Non-mechanical scanning LiDAR is also known as solid-state beam scanning, since it has no moving parts such as rotating mirrors or prisms. Solid-state beam scanning LiDAR systems frequently employ optical phased arrays (OPAs) to steer the laser beam by altering the phase of the light emitted at various points in the array [39,40]. OPA scanning techniques in LiDAR systems are gaining popularity because of their non-mechanical beam steering capabilities and are regarded as fully solid-state, with no moving components. They offer the potential for very fast scanning while also being highly precise in tuning to varied beam directions. Recent research [39] focuses on advances in photonic integrated circuits (PICs) for OPAs, which improve their compactness, speed, and energy efficiency. These integrated circuits have fast response times and are compatible with existing manufacturing processes (such as CMOS), making them suitable for mass production in sectors like automotive LiDAR. Another OPA-based LiDAR study [40] presented a target-adaptive scanning method that adjusts the scanning resolution depending on the detected target: high-resolution scanning is reserved for essential objects, while unimportant background regions are scanned coarsely. This strategy increases efficiency and the system's capacity to focus on critical objects in real-time applications like autonomous driving [40]. While OPA technology has major advantages, such as no moving components, fast beam steering, compact size, and energy efficiency, it faces challenges in managing beam width and suppressing grating lobes [39,40].
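The phase-based steering that OPAs rely on follows the standard phased-array relation: a constant phase step between adjacent emitters tilts the far-field beam. The sketch below illustrates this relation with assumed values (1550 nm wavelength, 2 µm emitter pitch); it is a textbook formula, not the design of any cited device:

```python
import math

def steering_angle_deg(phase_step_rad: float, wavelength_m: float,
                       pitch_m: float) -> float:
    """Far-field steering angle for a linear phase gradient:
    theta = asin(dphi * lambda / (2*pi*d)) for emitter spacing d."""
    return math.degrees(math.asin(phase_step_rad * wavelength_m
                                  / (2.0 * math.pi * pitch_m)))

# 1550 nm light, 2 µm emitter pitch, quarter-cycle phase step:
print(steering_angle_deg(math.pi / 2, 1550e-9, 2e-6))  # ≈ 11.2°
```

The same relation explains the grating-lobe challenge noted above: when the pitch exceeds half the wavelength, additional angles satisfy the phase condition and spurious beams appear.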
Unlike conventional LiDAR systems, non-scanning LiDAR sensors do not use a mechanical or electronic scanning mechanism to move a laser beam across the field of view. Instead, these systems illuminate the full scene with a single light pulse and capture its reflection on a 2D sensor array, similar to a camera. Non-scanning LiDAR is commonly known as Flash LiDAR. A Flash LiDAR system employs a broad laser beam and a large photodetector array to gather 3D data in a single shot, making it well suited to real-time applications. Recent studies [41,42] have focused on integrating single-photon avalanche diode (SPAD) sensors with Flash LiDAR systems, which increases sensitivity and performance in low-light conditions. SPAD-based Flash LiDARs can operate over long distances (up to 50 kilometers) and are being studied for precision landing systems for planetary missions. Flash LiDAR thus offers key advantages, such as the lack of moving components, rapid data capture, and the capacity to create high-resolution 3D maps in real time, making it an invaluable tool for terrestrial and space applications. However, because of their limitations in range and resolution compared to scanning LiDAR systems, Flash LiDARs are usually employed in shorter-range applications [41,42].
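The single-shot nature of Flash LiDAR means depth recovery is an element-wise operation over the whole detector array rather than a sequential scan. A minimal sketch with synthetic data (array size and timing values are illustrative):

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def tof_frame_to_depth(tof_seconds: np.ndarray) -> np.ndarray:
    """Convert a 2D array of per-pixel round-trip times, captured from a
    single flash, to a depth image in metres."""
    return C * tof_seconds / 2.0

# A 4x4 detector array whose pixels all saw echoes after ~66.7 ns (~10 m):
frame = np.full((4, 4), 66.7e-9)
depth = tof_frame_to_depth(frame)
print(depth[0, 0])  # ≈ 10.0 m
```

Because every pixel is converted in parallel from one illumination pulse, frame rate is limited by readout rather than by any steering mechanism.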
The differences between these LiDAR scanning mechanisms are summarized in Table 1 below.
Based on the dimension of the acquired data, LiDAR systems come in three varieties:
- one-dimensional (1D) LiDAR [43,44];
- two-dimensional (2D) LiDAR [45,46,47,48,49];
- three-dimensional (3D) LiDAR [50,51].
As mentioned in Section 2.1, a LiDAR system has several basic components, such as the laser, scanner, receiver, and signal processing unit. However, a LiDAR system can still operate without a scanning mechanism; such a system is known as 1D LiDAR. A 1D LiDAR estimates the distance along a single fixed axis or direction: its laser is fixed and directed in one direction rather than rotating or sweeping over a larger region. In [43], a low-cost, high-precision pointing mechanism for obstacle detection on railway tracks over long distances was demonstrated by employing a gimbaling platform coupled with a 1D LiDAR sensor. The gimbal enables the LiDAR sensor to scan a large area dynamically, detecting obstructions like animals, debris, and equipment on rails to prevent accidents. The system's pointing accuracy was assessed through controlled indoor as well as long-distance experiments. The findings showed that the system can reliably target individual points over large distances, with angular resolution sufficient to detect humans at 1500 meters using long-range LiDAR. Overall, the study contributes significantly to transportation safety and provides a solid platform for future advances in obstacle detection technologies. Rather than using expensive and sophisticated 3D LiDAR systems, the authors of [44] developed an inexpensive indoor navigation system capable of mapping static environments and detecting objects. The proposed system employs a 1D LiDAR sensor to scan its surroundings and create 3D maps by combining data from many scans captured over time as the LiDAR-equipped vehicle moves through the surroundings. The system prototype is constructed with a LiDAR-Lite v3 sensor, two servo motors, and a pan-tilt mechanism, and is primarily designed for small autonomous bots or Automated Guided Vehicles (AGVs) where vehicle speed is not a concern. It assumes that the environment is static relative to the sensor, with only obstructions being relevant. The prototype determines obstruction coordinates by scanning 150° horizontally and 120° vertically using a fast scanning approach. Once the coordinates are spotted, the sensor focuses on the obstacle to generate a detailed map, which allows the vehicle to analyze the object's profile and provide navigation instructions for obstacle avoidance. In the experiments, obstacles within a 1-meter range were successfully detected, followed by the creation of an object profile. The adaptive scanning method reduced scan time by more than half while recognizing the object's presence and shape as rapidly as possible.
Recent research works have explored various aspects of 2D LiDAR technology in UAV detection and tracking. For example, in [45], the researchers examined 2D LiDAR-based UAV detection and presented a formulation for the probability of detection in various settings using a LiDAR-turret system. The proposed system relies on sparse detections instead of dense point clouds and performs motion estimation and active tracking. The authors offered a theoretical framework for analyzing the performance of a 2D LiDAR-based detection system and for better understanding its limitations. The work includes field experiments involving the detection of multiple drones of different sizes with a modern LiDAR system and highlights its effectiveness in UAV identification and tracking using sparse data.
A LiDAR-assisted UAV exploration algorithm (LAEA) for unknown surroundings was proposed in [46]. The proposed framework has three major modules: map construction, target selection, and motion planning. The approach uses a 2D ToF LiDAR and a depth camera to swiftly capture contour information from the undiscovered surroundings, then combines data from the two sensors to generate a hybrid 2D map. Using the fused multi-sensor data, the map construction module produces both high- and low-resolution 3D occupancy maps, as well as 2D occupancy maps for detecting special frontier clusters. The target selection module is responsible for frontier-based viewpoint generation, detection of tiny and isolated frontier clusters, and solving the asymmetric traveling salesman problem (ATSP). Finally, the motion planning module conducts azimuthal trajectory optimization utilizing an environmental information gain (EIG) optimization approach, resulting in a safe trajectory that allows the UAV to collect more information. To test the proposed algorithm's efficacy, a simulation study in the Gazebo simulator compared the proposed method with state-of-the-art techniques such as FUEL (Fast UAV Exploration) and FAEP (Fast Autonomous Exploration Planner). The simulation results revealed that the proposed LAEA approach outperforms those two methods in terms of flight distance and exploration time. The feasibility of the approach was validated on a robotic platform outfitted with an RGB-D camera, 2D LiDAR, and an Nvidia onboard computer in two distinct real-world scenarios (indoors and outdoors).
A robust approach to detecting and classifying short obstructions and potholes using a cost-effective 2D LiDAR sensor mounted on a mobile platform was proposed in [47]. As the research goal was to detect short obstacles on the ground, the LiDAR sensor was mounted looking downward. The data was acquired using a Hokuyo UBG-04LX-F01 2D LiDAR sensor and converted into a point cloud, a straightforward transformation from polar to Cartesian coordinates. To identify obstacles, the point cloud is segmented into lines, and based on their average height the lines are classified as ground, pothole, or obstacle. The experimental findings showed that the suggested method properly recognizes obstructions and potholes in a structured environment. Nevertheless, the point cloud is oversegmented with unnecessary lines, which might be addressed by adjusting the line refinement parameter. The authors plan to consider dynamic objects and analyze their movement across the lines in future work.
Efficient moving object detection based on 2D LiDAR combined with a frame-to-frame scan matching method was presented in [48]. To ensure collision-free passage across a mapped area, the proposed SegMatch algorithm was implemented on an autonomous mobile robotic system (MRS) equipped with an LD-OEM 1000 LiDAR. The autonomous MRS is controlled through an Application Programming Interface (API) and communicates via LoRa technology; in addition, a WiFi router connects the LiDAR with an external processing unit via TCP/IP (Transmission Control Protocol/Internet Protocol). The main objective of the autonomous MRS is to execute active SLAM. To detect dynamic objects, the proposed algorithm was tested on stationary measurements, while performing SLAM, and in real-time measurements, respectively. For these purposes, the necessary data were obtained via the LiDAR scanner and fed into the SLAM algorithm, which calculated deviations and created a 2D map. Data preprocessing was then carried out to make the data suitable for point cloud generation, after which the proposed method was applied to detect dynamic objects. When detecting dynamic objects, the authors encountered problems such as defects on the resulting map and probable collisions of the autonomous MRS with the dynamic object. The algorithm's advantages include a rapid response time and the ability to be employed in heavily populated areas. To gain a fuller view of the environment, the approach might be enhanced with an ITS (Intelligent Transport System) architecture-based multi-agent system using several mobile robotic systems.
Another study [49] presents a comprehensive analytical formalism for 2D LiDAR structured data representation, object detection, and localization within the realm of mobile robotics. The authors described a formalized approach for LiDAR data processing and its mathematical representation, converting raw sensor data into intelligible representations suited for a variety of robotic applications. The proposed analytical formalism includes algorithms for noise reduction, feature extraction, and pattern recognition, allowing a mobile robot to detect static and dynamic objects in its proximity. The efficacy of the method was validated through numerous experiments in a scenario with items of various configurations, sizes, and shapes that closely simulates a real-world use case. The outcomes demonstrated that the technique can recognize and separate objects in semi-structured environments in under 50 milliseconds. Furthermore, the authors argue that the mathematical framework's simplicity keeps the computing effort minimal, establishing the groundwork for innovative solutions in a wide range of cases. In future work, the authors plan to study the integration of machine learning techniques into the framework for object recognition and classification tasks.
In [50], the authors studied the performance of a 3D LiDAR system for UAV detection and tracking, concentrating on the robustness of effective range estimation across various drone types and shapes, as well as visibility robustness under different environmental and illumination conditions. Additionally, the potential of 3D LiDAR for UAV tracking under different lighting conditions was assessed. The effective range estimation experiment was carried out by mounting a Livox Mid-40 3D LiDAR in an open field and flying black and white UAVs at distances up to 80 m. The results indicated that the color of the UAV had a considerable effect on its reflectivity and hence the detection range, with the smaller white UAV being visible at a greater distance than the bigger black UAV, despite the latter's size advantage. Since the white UAV had the greater detection range, the visibility robustness experiment was carried out with only that UAV during three distinct periods of the day with the same background. The outcomes revealed that the number of captured LiDAR points and the reflectivity decrease as distance increases; at extreme distances, however, mean reflectivity increases due to the dominance of the UAV's most reflective parts. This means that UAV localization remains reliable even in low-light conditions, without a reduction in detection range. A 3D UAV tracking experiment was also performed by continuous LiDAR scanning at three different times of day to track the white UAV's motion and assess its trajectory within the scan duration. The outcomes demonstrated that UAV trajectory tracking remains effective across different lighting conditions and UAV speeds, showcasing the potential of 3D LiDAR for robust UAV tracking. Preliminary research thus indicates that 3D LiDAR has considerable potential for robust UAV detection, localization, and tracking. Future research directions include extending the detection range to 200 meters without increasing laser power, developing a mobile system to track high-speed UAVs without sacrificing point cloud density, and developing a real-time system that integrates AI and machine learning for UAV detection and tracking across different shapes, materials, and reflectivities. The authors also intend to track drone swarms in 3D and evaluate LiDAR's performance in harsh weather conditions such as snow, fog, and rain.
The authors of [51] proposed a novel approach for multi-object tracking (MOT) in 3D LiDAR point clouds that combines short-term and long-term relations to enhance object tracking over time. The short-term relation analyzes geometric information between detections and predictions, exploiting the fact that objects move gradually between consecutive frames. The long-term relation, on the other hand, considers the historical trajectory of tracks to assess how well a long-term trajectory matches the present detection. An effective Graph Convolutional Network (GCN) approach was used to assess the matching between present detections and existing object trajectories. By representing the relations between identified objects as a graph, the system enhances its capacity to associate objects across frames, especially in dense or cluttered settings. In addition, an inactive track list was kept to solve the issue of incorrect ID switching for objects occluded for an extended duration. This allows the system to maintain object tracks more reliably, especially in challenging scenes where several objects move close together or partial occlusion occurs. The proposed multi-level association mechanism thus successfully mitigates problems such as ID switching following occlusion, leading to increased tracking accuracy and resilience in complex environments. However, the work might benefit from further investigation into its computational efficiency and applicability to a wider range of applications. Despite these limitations, the comparison with cutting-edge LiDAR and LiDAR-camera fusion tracking systems revealed the proposed approach's clear efficacy in improving robustness, notably in resolving ID switching and fragmentation issues.
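The short-term (geometric) cue that MOT pipelines build on can be illustrated with a deliberately simplified baseline: greedy nearest-neighbour matching between track centroids and new detections. This is a stand-in for exposition only; the cited approach replaces it with a learned GCN-based matcher and adds the long-term trajectory relation. The gating distance below is an assumption.

```python
import math

def associate(tracks, detections, max_dist=2.0):
    """Greedily match 3D track centroids to detection centroids.
    Returns a list of (track_index, detection_index) pairs; detections
    farther than `max_dist` from every track are left unmatched."""
    pairs, used = [], set()
    for ti, t in enumerate(tracks):
        best, best_d = None, max_dist
        for di, d in enumerate(detections):
            if di in used:
                continue
            dist = math.dist(t, d)       # Euclidean distance in 3D
            if dist < best_d:
                best, best_d = di, dist
        if best is not None:
            pairs.append((ti, best))
            used.add(best)
    return pairs

tracks = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0)]
dets   = [(9.8, 0.1, 0.0), (0.2, 0.0, 0.0)]
print(associate(tracks, dets))  # [(0, 1), (1, 0)]
```

Exactly the failure modes of this baseline, ID switches when objects pass close together and lost tracks during occlusion, motivate the graph-based association and inactive-track handling in the cited work.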
Each type of LiDAR sensor has unique benefits and drawbacks in terms of range, size, power consumption, resolution, and price. Sensor selection depends on the application's specific needs, such as detection range, accuracy, ambient conditions, and budget.