Preprint
Article

Object Detection in Robot Autonomous Navigation Using 2D LiDAR Data: An Analytical Approach


A peer-reviewed article of this preprint also exists.

Submitted:

04 January 2024

Posted:

23 January 2024

Abstract
In mobile robotics, laser scanners have a wide spectrum of indoor and outdoor applications, in both structured and unstructured environments, due to their accuracy and precision. Most works that use this sensor adopt their own data representation and their own case-specific modeling strategies, and no common formalism is adopted. To address this issue, this manuscript presents an analytical approach for the identification and localization of objects using a 2D LiDAR. Our main contribution lies in formally defining laser sensor measurements and their representation, the identification of objects, their main properties and their location in a scene. We validate our proposal with experiments in generic semi-structured environments common in autonomous navigation, and we demonstrate its feasibility for multiple object detection and identification, strictly following its analytical representation. Finally, our proposal further encourages and facilitates the design, modeling and implementation of other applications that use laser scanners as distance sensors.
Keywords: 
Subject: Engineering - Electrical and Electronic Engineering

1. Introduction

Laser scanners (or LiDAR devices, from Light Detection And Ranging) are essential instrumentation tools. One of their great advantages lies in calculating the depth, shape and size of objects from their data. In the current context, these devices have been used to enable robots to move independently, autonomously and intelligently through indoor settings such as factory corridors and warehouses [1]. Among the various types of sensors that can be used in robotics, laser sensors are most often applied to time series [2], point clouds [3,4] and regular angular depth data [5].
A notable example of a widely applied technique that uses laser scanners is Simultaneous Localization and Mapping (SLAM), the procedure of autonomously building a map while a robot localizes itself in the environment [6]. Research on this topic within the field of mobile robotics has remained popular for a long time, and recently more effort has been directed toward the manufacture of intelligent and autonomous vehicles [7,8], a field in which many works focus on object detection methods using 3D LiDAR [9,10,11]. In turn, 2D LiDARs are preferred in many mobile robotics applications due to their low cost and high degree of accuracy [12]. The application also plays a role in the choice of sensor. For example, in places such as electrical substations, optical sensors are preferred for obtaining distance information because they do not suffer interference from the large electromagnetic fields [13].
Besides the approaches mentioned above, there are many others that motivate and drive this work's purpose, in particular the use of laser scanners for object detection and tracking (including cases in which both agent and objects are mobile) [14,15,16,17], object identification and segmentation from the local environment [18,19,20], and object feature extraction [21]. These implementations have a deep impact on autonomous robotics and decision making, using little or no previous knowledge about the environment and objects while still accurately inferring information and executing tasks based on such data.
In a similar sense, SLAM implementations frequently focus on building and self-correcting a map or CAD-model map based on laser scanner data. Generally, many such techniques apply triangulation, environment landmarks [4] and object feature detection [22] for systematic odometry error compensation on both indoor [23,24,25] and outdoor [26,27] data. When a map is already available, the use of 2D LiDAR is also attractive; for instance, a fast obstacle detection technique for mobile robot navigation using a 2D LiDAR scan and a 2D map is proposed in [28].
Nonetheless, other fields also benefit from the use of laser scanners. In the agricultural automation industry, for example, there is a variety of research on the assessment of canopy volume [29], poplar biomass [30] and trunks [31], and on crop and weed distinction [32], among other uses. From a different perspective, the robotics competition RoboCup and its educational counterpart RoboCupJunior, specifically the Rescue [33] and Rescue B [34] categories, respectively, have also benefited from using laser range data for robot navigation in unstructured environments to perform rescue operations.
Thus, there is extensive literature on 2D LiDAR data applications in detecting, locating and matching objects, as well as in map construction, matching and correction for self-localization. Yet, to the best of our knowledge, there is no clear universal consensus on a strict mathematical notation and modeling for such instruments, even though they present a lower computational cost than image recognition processes [16]. The work in [35] also states that there is a need for standardization of information extraction based on LiDAR data. The authors propose a framework going from semantic segmentation to geometric information extraction and digital modeling, but their focus is on the extraction of geometric information of roads.
In order to process laser scan information, each paper in the literature suggests its own notation, framework, and approach. This is far from ideal in scientific research and development, as well as in education, where a unified approach would be preferable. Considering all aforementioned applications, we claim that it is valuable and significant to propose and evaluate a formal mathematical definition for object detection and identification in tasks based on segmentation, tracking and feature extraction.
Despite the wide array of applications based on and benefiting from LiDAR data, there is still no rigid definition or analytical approach for the general problem of detecting objects in semi-structured environments. In other words, despite the existence of similar structures, there is a gap between the different approaches.

1.1. Related Works

The flexibility and relatively low cost of laser-based distance measurement instruments have promoted fast development in the field of robotics, especially for autonomous vehicles. The level of generality and the ability to detect and identify objects, obstacles, other robots and humans in a scene are deeply impactful in any autonomous robot's planning algorithm. In the literature, laser distance data has often been modeled and processed in polar coordinates [36,37,38,39,40,41,42,43,44,45,46,47] in order to extract features from a given setting and parse data. Feature extraction from LiDAR data often relies on Polar Scan Matching (PSM) [36,37], Support Vector Machines (SVMs) [38], Gaussian Process (GP) regression [39], and various means of clustering and matching [40,41,42,43], among other probabilistic and deterministic approaches [44,45,46,47] to model and interpret data.
Robot navigation relies on sensor input and fusion to extract environment features and deliberate upon its surroundings in order to execute a task, whether simple or complex. In that sense, LiDAR sensors are widely used in SLAM and often depend on feature mapping and tracking to achieve precision and accuracy using deterministic and probabilistic models, as seen in the literature [4,6,24,25]; similar techniques are used in research on autonomous driving [7,8].
Likewise, in applied and field autonomous robotics, the need to detect, identify and match objects and their properties is imperative for task completion, and their mathematical interpretation can be fruitful for describing as well as improving models and implementations, e.g., in forest and agricultural robotics applications [29,30,31,32]. Thus, formal investigation and modeling of the physical world for autonomous interpretation by robots is impactful.

1.2. Aims and Contributions

In such a context, a formal mathematical definition for object detection and identification in tasks based on segmentation, tracking and feature extraction is valuable for several applications within research and industry. Thus, our main contribution lies in formally defining laser sensor measurements and their representation, the identification of objects, their main properties and their location in a scene, representing each object by its mathematical notation within the set of objects that composes the whole Universe set, the latter being the complete environment surrounding an agent.
In summary, this paper deals with the problem of formalizing distance measurement and object detection with laser sweep sensors (2D LiDARs), strictly defining an object and some of its properties, applying such an approach and discussing its results and applications regarding a framework for object detection, localization and matching. To address these topics, related formalization efforts and similar works that may benefit from a modeling framework are presented. Thereupon, our contribution is laid out in three main sections of theoretical modeling and a subsequent experiment with a real robot. First, we define the scope and how to represent LiDAR scan measurements mathematically. Then, this framework is used to define and infer properties of objects in a scene. Finally, a guideline for object detection and localization is set out with an application, providing insight by applying these techniques in a realistic semi-structured environment, thus validating our proposal and investigating the advantages for modeling and applications. Our goal is to enable comprehensible, common-ground research on the advantages and possible shortcomings of LiDAR sensors in the various fields of robotics, whether educational, theoretical or applied.

2. Proposed Formalism for Object Identification and Localization

In robotics applications, a navigation environment is categorized based on the quantity and disposition of objects in the scene, as well as on the agent's freedom of movement. In this context, an environment is known as structured when the task-executing agent is previously familiar with the posture of every object and those objects do not change (or all changes are known) during task execution. In contrast, when objects move unpredictably while the agent is executing tasks, the environment is said to be unstructured. Finally, environments in which a certain degree of object mobility is admissible, such as offices, laboratories, residences, storage houses and workshops, are known as semi-structured environments.
In the specific case of semi-structured environments, the entities in the navigation scene may be mapped by an agent using a distance sensor, which in this work is a 2D LiDAR laser scanner. These entities may be fixed objects (such as walls, shelves, wardrobes, etc.) or mobile objects (e.g., boxes or even other agents).

2.1. 2D LiDAR sweep representation

A 2D LiDAR uses a LASER beam to measure distances from itself to objects in a plane surrounding the sensor. In mobile robotics applications, usually, the LASER beam rotates parallel to the ground so that the resulting measurements give the robot information about its distance to obstacles around it. Different sensors have varying ranges and resolutions, which must also be taken into account. In the definitions to follow, the subscript k denotes a discrete set of elements and n represents an element belonging to such a set, thus both being discrete.
Definition 1.
Let r be a discrete function representing a LiDAR sensor, denoted by
$$ r : \Theta_k \to D_k, \qquad \theta_n \mapsto D_n = r(\theta_n), $$
where the domain $\Theta_k$ indicates the set containing each discrete angle within the angular scan range and the codomain $D_k$ is the set of measurements assigned to each angle $\theta_k$. Such a discrete function is shown in Figure 1(a).
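As an illustration of Definition 1, a minimal Python sketch (not part of the original formalism; NumPy and the helper name make_sweep are assumptions for illustration) represents the discrete function $r$ as paired arrays of angles and range readings:

```python
import numpy as np

# Illustrative sketch: the discrete function r of Definition 1 stored as
# paired arrays, one range measurement D_n per discrete angle theta_n.
def make_sweep(theta_min_deg, theta_max_deg, resolution, ranges):
    """Return (Theta_k, D_k) as paired NumPy arrays of equal length."""
    theta_k = np.linspace(theta_min_deg, theta_max_deg, resolution)  # domain Theta_k
    d_k = np.asarray(ranges, dtype=float)                            # codomain D_k
    assert d_k.shape == theta_k.shape, "one measurement per discrete angle"
    return theta_k, d_k

# Example with synthetic data: 361 samples over a full revolution, all at 4 m.
theta_k, d_k = make_sweep(-180.0, 180.0, 361, np.full(361, 4.0))
```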
Definition 2.
Let s be a difference function given by:
$$ s : \Theta_k \to d_k, \qquad \theta_n \mapsto d_n = r(\theta_n) - r(\theta_{n-1}) = s(\theta_n), $$
where, analogously to Definition 1, $\theta_n$ is an element of the set $\Theta_k$ of all angles within the instrument's angular range and $d_k$ is the set of differences between two neighboring consecutive measurements, as shown in Figure 1(b).
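Following the same convention, the difference function $s$ of Definition 2 reduces to a first difference of consecutive readings; a minimal sketch with a hypothetical helper name:

```python
import numpy as np

def difference_function(d_k):
    """s(theta_n) = r(theta_n) - r(theta_{n-1}), as in Definition 2.
    np.diff drops the first sample, which has no left neighbour."""
    return np.diff(np.asarray(d_k, dtype=float))

# Example: a step from 4 m to 1.5 m produces a large negative difference.
s_k = difference_function([4.0, 4.0, 1.5, 1.5, 4.0])
print(s_k)  # [ 0.  -2.5  0.   2.5]
```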
Definition 3.
Let $f$ be a function coinciding with $r(\theta_n)$ $\forall\, \theta_n \in \Theta_k$, that is,
$$ f : \mathbb{R} \to \mathbb{R}, \qquad \theta \mapsto D, \quad \text{with } f(\theta_n) = r(\theta_n), $$
such that $f$ is also continuous and monotonic in the intervals $(\theta_{n-1}, \theta_n)$ for every $n = 1, \ldots, N$, and whose one-sided limits are
$$ \lim_{\theta \to \theta_n^-} f(\theta) = f(\theta_{n-1}), \qquad \lim_{\theta \to \theta_n^+} f(\theta) = f(\theta_{n+1}) $$
whenever $|s(\theta_n)| > d_{th}$, where $N = \mathrm{card}(\Theta_k)$ is the cardinality of $\Theta_k$ (the sensor's resolution) and $d_{th}$ is a case-specific threshold value (free parameter) representing the minimal distance difference for object detection. In order to automate the process of object detection and identification, a measure for $d_{th}$ may be calculated as the mean absolute difference value
$$ d_{th} = \frac{\sum_{n=1}^{N} |d_n(\theta_n)|}{N} $$
in order to separate noise from actual meaningful data, as will be further addressed. An example is presented within a simulated environment in Section 3 (Figure 4).
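A minimal sketch of this automatic threshold, assuming the differences $d_n$ are available as an array (the helper name is illustrative only):

```python
import numpy as np

def mean_abs_threshold(s_k):
    """d_th as the mean absolute difference over the sweep (Definition 3)."""
    return np.mean(np.abs(np.asarray(s_k, dtype=float)))

# With the toy differences above, d_th = (0 + 2.5 + 0 + 2.5) / 4 = 1.25,
# so only the +-2.5 m jumps exceed the threshold and count as discontinuities.
d_th = mean_abs_threshold([0.0, -2.5, 0.0, 2.5])
```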
Proposition 1.
Given a well-functioning LiDAR sensor, $\forall\, \theta_n \in \Theta_k$, $\exists\, D_n = r(\theta_n)$.
Proof. 
The LiDAR sensor attributes a distance measurement reading for each angle within its range, unless the sensor malfunctions or has manufacturing errors, which must then be assessed and corrected. □
Corollary 1.
Given that Proposition 1 is satisfied, $r : \Theta_k \to D_k$ is surjective by definition.
Corollary 2.
Proposition 1 and Corollary 1 imply that $f$ is surjective by definition, since $r$ coincides with $f$.
Above, Definition 1 states how the agent visualizes its navigation surroundings. Notice that it follows from Corollary 2 and Definition 3 that $f$ is differentiable over most of its domain. The points where $f$ is not differentiable have important properties, to be discussed when defining objects in a laser's scan data. Note that $\theta \in [\theta_{\min}, \theta_{\max}]$ and $D \in [0, D_{\max}]$, whose extreme values are specific to the model and manufacturer of the sensor device.

2.2. Defining objects

First, we define $U$ to be a set of points representing the whole environment from the robot's point of view, composed of a set of objects, a set of agents (either humans or robots in the environment) and other task-unrelated data, taken as noise. Evidently, these three sets of data comprising $U$ are pairwise disjoint.
Definition 4.
Let U be a universe set, populated by LiDAR measurements and comprised strictly of a set of objects O , a set of agents A and a set of noise S . Thus:
$$ U = \{\, A \cup S \cup O \ : \ O \cap A = O \cap S = A \cap S = O \cap A \cap S = \emptyset \,\} $$
Now, as $f$ is a continuous function, it may or may not be differentiable. However, if $f$ is differentiable at $a$, then $f$ is continuous at $a$ and it is laterally differentiable with $f'_-(a) = f'_+(a)$; in other words, the left-hand and right-hand derivatives at $a$ must exist and have equal value. By applying the concept of differentiability, objects, walls and free space can be distinguished in a LiDAR scanner reading. In particular, it follows that if there exists a point where $f(\theta)$ is not differentiable and that point does not belong to the interval of an object, then it must be the edge of a wall (a corner); otherwise, that point belongs to the edge of an object.
Definition 5.
Let $O$ be any prism-shaped object in a semi-structured environment. Then $O$ may be defined as a set of points in polar coordinates:
$$ O = \left\{\, \big(\theta, r(\theta)\big) \in \mathbb{R}^2 \ / \ \theta_i \leq \theta \leq \theta_f, \ \exists\, \theta_{i,f} \ / \ f'_-(\theta_{i,f}) \neq f'_+(\theta_{i,f}) \,\right\}, $$
where $\theta_i < \theta_f$, such that $P_i = \big(\theta_i, r(\theta_i)\big)$ is a point of discontinuity and $P_f = \big(\theta_f, r(\theta_f)\big)$ is the first point of discontinuity to the right-hand side of $P_i$, the two thus encompassing the start and final measurements of an object's body. Hence, $f(\theta)$ is continuous in the open interval $(\theta_i, \theta_f)$.
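To make Definition 5 concrete, the sketch below (an illustrative interpretation, not the paper's implementation) pairs consecutive discontinuities where $|s(\theta_n)| > d_{th}$ and returns the enclosed polar samples as candidate object intervals; intervals of background between two objects also appear and can be filtered afterwards, e.g. by their mean range:

```python
import numpy as np

def segment_objects(theta_k, d_k, d_th):
    """Split a sweep into candidate intervals O = {(theta, r(theta)) : theta_i <= theta <= theta_f},
    where theta_i, theta_f are consecutive discontinuities with |s(theta_n)| > d_th."""
    d_k = np.asarray(d_k, dtype=float)
    s_k = np.diff(d_k)                           # s(theta_n), Definition 2
    jumps = np.where(np.abs(s_k) > d_th)[0] + 1  # indices where a discontinuity occurs
    objects = []
    for i, j in zip(jumps[:-1], jumps[1:]):      # pair consecutive discontinuities
        # Note: every other interval may be background; filter by mean range if needed.
        objects.append((theta_k[i:j], d_k[i:j]))
    return objects

# Toy sweep: a 1.5 m object seen between two 4 m background readings.
theta_k = np.linspace(-180, 180, 9)
d_k = np.array([4, 4, 4, 1.5, 1.5, 1.5, 4, 4, 4], dtype=float)
objs = segment_objects(theta_k, d_k, d_th=1.0)   # one candidate interval at 1.5 m
```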
Consider a generic prismatic object and its respective polar coordinates comprised in O . Notice that, in any such set O , a discontinuity in the derivative of f ( θ ) must represent an edge, indicated with red triangles in Figure 1(c). Therefore, we can define both faces and vertices that belong to O.
Definition 6.
Let V be a set of points representing any edge of any prismatic object, such that:
$$ V_k = \left\{\, \big(\theta, r(\theta)\big) \in \mathbb{R}^2 \ / \ f'(\theta) = 0, \ \theta \in O \,\right\}, \quad \text{with } k = 0, 1, 2, \ldots, n. $$
Definition 7.
Let $O$ be any prism-shaped object, and let $F_k$ be the set of points representing the $k$-th face of such an object. Therefore, we define, in polar coordinates,
$$ F_k = \left\{\, \big(\theta, r(\theta)\big) \in \mathbb{R}^2 \ / \ \theta_k \leq \theta \leq \theta_{k+1} \,\right\}, \quad \text{with } k = 0, 1, 2, \ldots, n-1, $$
where $\theta_0 = \theta_i$, $\theta_n = \theta_f$ and every $V_k$ is located at $\big(\theta_k, f(\theta_k)\big)$.
In other words, according to Definition 7, each of the prism's edges is found at a local minimum or maximum between two faces according to the laser's readings, and all faces are found within $(\theta_i, \theta_f)$, such that $O \supseteq V_k \cup F_k$.
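Under the same assumptions, a sketch of how $V_k$ and $F_k$ could be extracted from one object interval, taking vertices where the discrete slope of $r(\theta)$ changes sign and faces as the spans between consecutive vertices (helper names are hypothetical):

```python
import numpy as np

def vertices_and_faces(theta_obj, d_obj):
    """Approximate V_k as samples where the discrete derivative of r changes sign
    (Definition 6) and F_k as the index ranges between consecutive vertices (Definition 7)."""
    d_obj = np.asarray(d_obj, dtype=float)
    slope = np.diff(d_obj)
    sign_change = np.where(np.diff(np.sign(slope)) != 0)[0] + 1
    # Include the interval boundaries theta_i and theta_f as face delimiters.
    v_idx = np.concatenate(([0], sign_change, [len(d_obj) - 1]))
    vertices = [(theta_obj[i], d_obj[i]) for i in v_idx]
    faces = [(v_idx[k], v_idx[k + 1]) for k in range(len(v_idx) - 1)]
    return vertices, faces

# Toy object: two faces meeting at a corner closest to the sensor.
theta_obj = np.array([10, 15, 20, 25, 30], dtype=float)
d_obj = np.array([2.0, 1.6, 1.2, 1.6, 2.0])
V, F = vertices_and_faces(theta_obj, d_obj)  # corner detected at theta = 20 deg
```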
Therefore, as in Figure 1(c), the function $f(\theta)$ is discontinuous at $\theta_1$ and $\theta_2$. From that, it is possible to state that every element $\theta \in [\theta_1, \theta_2]$ represents a measurement from the surface of an object (hereby defining all necessary conditions for proposing the existence of an object). Note that $O$ was defined as prism-shaped for the sake of defining faces and vertices, although the same discontinuity-based definition may be used to identify other, more unusually shaped, objects.
The above proposal could improve formalism, notation and analysis in [22] without great computational effort. Similarly, [2] could benefit from notational formalism in point-cloud temporal series as a means of representing data as a function of time and reference frame. In yet another case-oriented illustration, [5] presents a scenario where laser data are presented on the Cartesian plane for later use in extrinsic camera parameter calibration. It is worth reinforcing that our work could have been employed in all such cited situations as a guideline for laser sweep representation, highlighting of regions of interest in the data, and notation, in order to develop the state of the art in autonomous robot navigation. For further illustration and validation of the strategy's reliability, generic representative cases are presented in the following section.

3. Detection and Localization Experiments

This section demonstrates the behavior of the 2D LiDAR sensor in a real-world environment, and how the formalism proposed in this article is used to represent the scene and to identify potential objects of interest. Figure 2 shows a basic experimental setup to illustrate the materials and LiDAR measurements. The robot used in the experiments is shown in Figure 2(a): a Pioneer 3-DX controlled by a Raspberry Pi running RosAria, with an omnidirectional 2D LiDAR sensor mounted on its top. A basic setup with the robot and one static object (a box) is shown in Figure 2(b). The corresponding LiDAR measurements are shown in Figure 2(c), where the edges of the object are identified (red dots).
To further illustrate the usefulness of our proposal, we use a scenario with objects of diverse configurations, sizes and shapes to build an environment that faithfully represents a real-world use case. In this scenario, the mobile robot navigates along a super-ellipse trajectory around the objects located in the center of the environment. Measurements from the 2D LiDAR sensor are used to build views of the scene while the robot is navigating. The LiDAR sensor is configured with a depth range of 0.1 to 12 meters, a resolution of 361 measurements per revolution, and a sampling rate of one revolution per 100 ms. Following our notation, the laser's domain is $\Theta_k = [-180°, 180°]$, such that $N = \mathrm{card}(\Theta_k) = 361$ ($k = 1, 2, \ldots, 361$), and the codomain is $D \in [0.1, 12]$ m, as per Definition 3. To guide the navigation of the Pioneer 3-DX, a previously validated controller was employed [48].
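For reference, a short sketch of the sweep geometry stated above (the range-clipping helper is an illustrative addition, not part of the proposed formalism):

```python
import numpy as np

# Sensor parameters stated in the text: 361 samples per revolution over
# [-180 deg, 180 deg], ranges valid in [0.1 m, 12 m], one sweep every 100 ms.
N = 361
THETA_K = np.linspace(-180.0, 180.0, N)   # Theta_k, 1 deg angular step
D_MIN, D_MAX = 0.1, 12.0                  # codomain bounds in metres
SWEEP_PERIOD_S = 0.1

def clip_to_range(d_k):
    """Mark readings outside the sensor's valid depth range as missing (NaN)."""
    d_k = np.asarray(d_k, dtype=float)
    return np.where((d_k >= D_MIN) & (d_k <= D_MAX), d_k, np.nan)
```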
Figure 3 illustrates the experimental environment used to validate the sensory mathematical representation. In the displayed views, it is possible to verify the presence of rectangular boxes, chairs with legs and wheels, a four-legged ladder, a second mobile robot, and the walls that bound the scenario. This configuration enabled the identification of objects based on the discontinuities observed in the measurements, as conceptualized in Section 2.
Figure 3. Experimental environment employed for validating the sensory mathematical representation. Figures 3(a) and 3(b) are views of the same scenario in different conditions and angles. Figures 3(c) and 3(d) are the 2D LiDAR readings corresponding to the scenes (a) and (b), respectively.
To validate the proposed approach, the threshold $d_{th}$ from Definition 3 is computed and used to distinguish the objects in the scene. In Figure 4, the red lines represent sets of measurements of interest, each indicative of a potential object. It is important to emphasize that the vertices of the objects, i.e., their starting and ending boundaries, are derived from the difference function $s(\theta_n)$.
To illustrate the identification of objects during the robot’s navigation, the first row of Figure 5 presents three snapshots of the robot’s trajectory. The second row of Figure 5 shows the corresponding 2D LiDAR scans, while the third row of the same figure presents the 2D reconstruction of the world from the perspective of the mobile robot (with the blue bounding boxes representing the identified objects). A video1 shows the execution of this experiment.
Figure 4. Resulting selection from Figure 3 based on d t h , according to Definition 3.
Figure 5. Snapshots of the validation experiment with their corresponding 2D LiDAR readings from the robot’s perspective, along with the 2D representations in the world featuring bounding boxes of identified objects according to the proposed formalism.
It is important to note that in real-world experiments, we commonly encounter sensor noise and information losses, which are appropriately addressed through signal filtering processes. However, since this is out of the scope of this work, we chose to showcase the step-by-step implementation of the object identification process in the absence of sensor noise via a simulation. Figure 6 depicts a cluttered environment created using the CoppeliaSim simulator.
Figure 6(a) illustrates the simulated scenario, and Figure 6(b) shows the corresponding 2D LiDAR data, where the process described before was applied to detect, identify, and categorize objects. Figure 6(c) presents a LiDAR sweep ($r(\theta_n)$, as previously defined), allowing intuitive differentiation of the highest values as walls and of lower readings as objects, depending on their proximity to the robot. Upon careful analysis and comparison of Figures 6(d) and 6(e), as discussed and defined in Section 2.2, various objects are identified by setting a similar threshold difference value (as presented in Definition 3) on $s(\theta_n)$ and observing discontinuities in $f(\theta_n)$. A discontinuity occurring at an angle where the threshold is surpassed must represent the starting point of an object. Furthermore, the local minimum in each set representing an object must also represent the edge closest to the scanner, marked in Figure 6(e) with red triangles. The objects' readings are shown between two dark-blue filled circles, comprising $V_1, V_2, \ldots, V_{11}$ and thus exhibiting five fully identified objects $O_1$, $O_2$, $O_3$, $O_4$ and $O_5$.
Comparing Figures 6(a) and 6(b), one can identify the objects marked in Figure 6(e) (in anti-clockwise order): the first and second brown prismatic boxes, the wooden ladder, the potted plant, and the smaller brown box, as seen in Figure 6(a). Respectively, they are separated with color-coded bounding boxes: red for the prismatic boxes, orange for the wooden ladder, and green for the plant, according to the topology of the $F_k$ faces that connect each object's edges ($V_k$, represented as red circles). By observing the environment using the 2D LiDAR scan, the robot can identify objects of interest in the room and understand its distance to them.
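One possible way to obtain such bounding boxes, sketched under the assumption that each identified object is available as a set of polar samples in the sensor frame (the helper name is hypothetical):

```python
import numpy as np

def polar_to_bbox(theta_deg, ranges):
    """Convert an object's polar samples (theta, r(theta)) to Cartesian points in the
    sensor frame and return an axis-aligned bounding box (xmin, ymin, xmax, ymax)."""
    th = np.radians(np.asarray(theta_deg, dtype=float))
    r = np.asarray(ranges, dtype=float)
    x, y = r * np.cos(th), r * np.sin(th)
    return x.min(), y.min(), x.max(), y.max()

# Example: the toy object used in the segmentation sketch above.
bbox = polar_to_bbox([10, 15, 20, 25, 30], [2.0, 1.6, 1.2, 1.6, 2.0])
```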
Assuming the agent has a known starting point (e.g., a recharging dock) or a map linking each laser sweep to a certain position, it is also possible to locate objects by storing measurements of the semi-structured environment without any objects of interest to the robot, i.e., with no objects that should be handled by the agent. Then, one can highlight any new objects by taking the algebraic difference between readings before and after the objects were placed, where $r_w$ represents the measurements with the new objects and $r_e$ represents the original readings of the environment. This is shown in Figure 6(f), where every new $V_k$ and $F_k$ is outlined, thus locating all objects $O_k$ of interest in the environment, while all other data is considered noise. Given these features, it is possible to match and track specific objects throughout a scene. For instance, the three brown boxes are highlighted as an example of objects of interest.
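A sketch of this background-subtraction step, assuming two sweeps $r_e$ (environment without objects of interest) and $r_w$ (with the new objects) taken from the same pose and aligned sample by sample (the helper name is illustrative):

```python
import numpy as np

def new_object_mask(r_e, r_w, d_th):
    """Flag angles whose reading changed by more than d_th between the reference
    sweep r_e and the current sweep r_w; contiguous flagged runs are new objects."""
    diff = np.asarray(r_w, dtype=float) - np.asarray(r_e, dtype=float)
    return np.abs(diff) > d_th

# Example: a new object at 3 m now occludes part of a 6 m wall.
r_e = np.full(361, 6.0)
r_w = r_e.copy()
r_w[100:120] = 3.0
mask = new_object_mask(r_e, r_w, d_th=0.5)  # True exactly on the occluded angles
```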

4. Concluding Remarks

Addressing a crucial aspect of autonomous robot decision-making, the identification and localization of objects, especially those essential for achieving specific goals, play a pivotal role in advancing robotics. The lack of a formal and standardized framework in existing literature has posed challenges for algorithm comparison, optimization, and strategy development. This deficiency arises from the prevalent use of ad-hoc definitions and modeling approaches, hindering the reproducibility and advancement of results.
Our work fills this gap by introducing a rigorous mathematical formalization applicable to a broad range of contexts involving LiDAR point-cloud data. Results presented in Section 3 show that our method is able to efficiently identify and separate objects in under 50ms in semi-structured environments. Despite the necessity of setting a threshold for object detection, which may not be automatically or dynamically determined, our approach allows flexibility to tailor this parameter to the specific requirements of each application. The simplicity of our mathematical framework ensures low computational effort and efficiency, laying the foundation for creative solutions in diverse scenarios.
In conclusion, our manuscript establishes a comprehensive framework for the development and optimization of algorithms focused on autonomous object detection, localization, and matching using 2D LiDAR data. We provide essential insights into the properties of laser scanner data and offer guidelines for feature extraction, with potential applications ranging from direct implementation for specific tasks to indirect applications in machine learning processes. Overall, we anticipate that our analytical structure will inspire the development of coherent and effective methodologies for object detection, identification, and localization in various applications.

Author Contributions

Formal analysis, validation, visualization, software and original draft provided by L.A.F.J. and A.G.C.; methodology and investigation by L.A.F.J., A.G.C. and M.B.Q.; manuscript review and editing by A.S.B. and F.N.M.; supervision, conceptualization, funding acquisition and project administration by A.S.B.

Funding

The authors thank CNPq - Conselho Nacional de Desenvolvimento Científico e Tecnológico, a Brazilian agency that supports scientific and technological development, and FAPEMIG - Fundação de Amparo à Pesquisa de Minas Gerais, an agency of the State of Minas Gerais, Brazil, for scientific development, for their financial support. Mr. Fagundes-Jr, Mr. Caldeira and Mr. Quemelli thank FAPEMIG, CNPq and CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior, respectively, for their scholarships. Dr. Martins also thanks the Research Centre Biobased Economy and the Sensors and Smart Systems research group of Hanze University of Applied Sciences.

Conflicts of Interest

The authors declare no conflict of interest.

Notes

1

References

  1. Costa, P.J.; Moreira, N.; Campos, D.; Gonçalves, J.; Lima, J.; Costa, P.L. Localization and navigation of an omnidirectional mobile robot: the robot@ factory case study. IEEE Revista Iberoamericana de Tecnologias del Aprendizaje 2016, 11, 1–9. [Google Scholar] [CrossRef]
  2. Wang, J.; Xu, L.; Li, X.; Quan, Z. A Proposal to Compensate Platform Attitude Deviation’s Impact on Laser Point Cloud From Airborne LiDAR. IEEE Transactions on Instrumentation and Measurement 2013, 62, 2549–2558. [Google Scholar] [CrossRef]
  3. Huang, Z.; Zhu, J.; Yang, L.; Xue, B.; Wu, J.; Zhao, Z. Accurate 3-D Position and Orientation Method for Indoor Mobile Robot Navigation Based on Photoelectric Scanning. IEEE Transactions on Instrumentation and Measurement 2015, 64, 2518–2529. [Google Scholar] [CrossRef]
  4. Schlarp, J.; Csencsics, E.; Schitter, G. Optical scanning of a laser triangulation sensor for 3D imaging. IEEE Transactions on Instrumentation and Measurement 2019, pp. 1–1. [CrossRef]
  5. Li, Y.; Ruichek, Y.; Cappelle, C. Optimal Extrinsic Calibration Between a Stereoscopic System and a LIDAR. IEEE Transactions on Instrumentation and Measurement 2013, 62, 2258–2269. [Google Scholar] [CrossRef]
  6. Krinkin, K.; Filatov, A.; yom Filatov, A.; Huletski, A.; Kartashov, D. Evaluation of Modern Laser Based Indoor SLAM Algorithms. 2018 22nd Conference of Open Innovations Association (FRUCT). IEEE, 2018, pp. 101–106. [CrossRef]
  7. Bresson, G.; Alsayed, Z.; Yu, L.; Glaser, S. Simultaneous localization and mapping: A survey of current trends in autonomous driving. IEEE Transactions on Intelligent Vehicles 2017, 2, 194–220. [Google Scholar] [CrossRef]
  8. Gargoum, S.; El-Basyouny, K. Automated extraction of road features using LiDAR data: A review of LiDAR applications in transportation. 2017 4th International Conference on Transportation Information and Safety (ICTIS), 2017, pp. 563–574. [CrossRef]
  9. Zamanakos, G.; Tsochatzidis, L.; Amanatiadis, A.; Pratikakis, I. A comprehensive survey of LIDAR-based 3D object detection methods with deep learning for autonomous driving. Computers & Graphics 2021, 99, 153–181. [Google Scholar]
  10. Yahya, M.A.; Abdul-Rahman, S.; Mutalib, S. Object detection for autonomous vehicle with LiDAR using deep learning. 2020 IEEE 10th International conference on system engineering and technology (ICSET). IEEE, 2020, pp. 207–212.
  11. Weon, I.S.; Lee, S.G.; Ryu, J.K. Object Recognition based interpolation with 3d lidar and vision for autonomous driving of an intelligent vehicle. IEEE Access 2020, 8, 65599–65608. [Google Scholar] [CrossRef]
  12. Konolige, K.; Augenbraun, J.; Donaldson, N.; Fiebig, C.; Shah, P. A low-cost laser distance sensor. 2008 IEEE International Conference on Robotics and Automation. IEEE, 2008, pp. 3002–3008. [CrossRef]
  13. Lu, S.; Zhang, Y.; Su, J. Mobile robot for power substation inspection: a survey. IEEE/CAA Journal of Automatica Sinica 2017. [CrossRef]
  14. Mertz, C.; Navarro-Serment, L.E.; MacLachlan, R.; Rybski, P.; Steinfeld, A.; Suppé, A.; Urmson, C.; Vandapel, N.; Hebert, M.; Thorpe, C.; others. Moving object detection with laser scanners. Journal of Field Robotics 2013, 30, 17–43. [Google Scholar] [CrossRef]
  15. Azim, A.; Aycard, O. Detection, classification and tracking of moving objects in a 3D environment. 2012 IEEE Intelligent Vehicles Symposium. IEEE, 2012, pp. 802–807. [CrossRef]
  16. Lindstrom, M.; Eklundh, J.O. Detecting and tracking moving objects from a mobile platform using a laser range scanner. Proceedings 2001 IEEE/RSJ International Conference on Intelligent Robots and Systems. Expanding the Societal Role of Robotics in the the Next Millennium (Cat. No. 01CH37180). IEEE, 2001, Vol. 3, pp. 1364–1369. [CrossRef]
  17. Gómez, J.; Aycard, O.; Baber, J. Efficient Detection and Tracking of Human Using 3D LiDAR Sensor. Sensors 2023, 23, 4720. [Google Scholar] [CrossRef]
  18. Lehtomäki, M.; Jaakkola, A.; Hyyppä, J.; Kukko, A.; Kaartinen, H. Detection of vertical pole-like objects in a road environment using vehicle-based laser scanning data. Remote Sensing 2010, 2, 641–664. [Google Scholar] [CrossRef]
  19. Yang, B.; Dong, Z.; Zhao, G.; Dai, W. Hierarchical extraction of urban objects from mobile laser scanning data. ISPRS Journal of Photogrammetry and Remote Sensing 2015, 99, 45–57. [Google Scholar] [CrossRef]
  20. Gomes, T.; Matias, D.; Campos, A.; Cunha, L.; Roriz, R. A survey on ground segmentation methods for automotive LiDAR sensors. Sensors 2023, 23, 601. [Google Scholar] [CrossRef] [PubMed]
  21. Nunez, P.; Vazquez-Martin, R.; del Toro, J.C.; Bandera, A.; Sandoval, F. Feature extraction from laser scan data based on curvature estimation for mobile robotics. Proceedings 2006 IEEE International Conference on Robotics and Automation, 2006. ICRA 2006. IEEE, 2006, pp. 1167–1172. [CrossRef]
  22. Giri, P.; Kharkovsky, S. Detection of Surface Crack in Concrete Using Measurement Technique With Laser Displacement Sensor. IEEE Transactions on Instrumentation and Measurement 2016, 65, 1951–1953. [Google Scholar] [CrossRef]
  23. Xiong, X.; Adan, A.; Akinci, B.; Huber, D. Automatic creation of semantically rich 3D building models from laser scanner data. Automation in construction 2013, 31, 325–337. [Google Scholar] [CrossRef]
  24. Shen, S.; Michael, N.; Kumar, V. Autonomous multi-floor indoor navigation with a computationally constrained MAV. 2011 IEEE International Conference on Robotics and Automation. IEEE, 2011, pp. 20–25. [CrossRef]
  25. Biswas, J.; Veloso, M. Depth camera based indoor mobile robot localization and navigation. 2012 IEEE International Conference on Robotics and Automation. IEEE, 2012, pp. 1697–1702. [CrossRef]
  26. Wakita, S.; Nakamura, T.; Hachiya, H. Laser Variational Autoencoder for Map Construction and Self-Localization. 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC). IEEE, 2018, pp. 3993–3998. [CrossRef]
  27. Oria-Aguilera, H.; Alvarez-Perez, H.; Garcia-Garcia, D. Mobile LiDAR Scanner for the Generation of 3D Georeferenced Point Clouds. 2018 IEEE International Conference on Automation/XXIII Congress of the Chilean Association of Automatic Control (ICA-ACCA). IEEE, 2018, pp. 1–6. [CrossRef]
  28. Clotet, E.; Palacín, J. SLAMICP Library: Accelerating Obstacle Detection in Mobile Robot Navigation via Outlier Monitoring following ICP Localization. Sensors 2023, 23. [Google Scholar] [CrossRef]
  29. Colaço, A.F.; Trevisan, R.G.; Molin, J.P.; Rosell-Polo, J.R.; Escolà, A. Orange tree canopy volume estimation by manual and LiDAR-based methods. Advances in Animal Biosciences 2017, 8, 477–480. [Google Scholar] [CrossRef]
  30. Andújar, D.; Rosell-Polo, J.R.; Sanz, R.; Rueda-Ayala, V.; Fernández-Quintanilla, C.; Ribeiro, A.; Dorado, J.; others. A LiDAR-based system to assess poplar biomass. Gesunde Pflanzen 2016, 68, 155–162. [Google Scholar] [CrossRef]
  31. Bargoti, S.; Underwood, J.P.; Nieto, J.I.; Sukkarieh, S. A pipeline for trunk detection in trellis structured apple orchards. Journal of field robotics 2015, 32, 1075–1094. [Google Scholar] [CrossRef]
  32. Andújar, D.; Rueda-Ayala, V.; Moreno, H.; Rosell-Polo, J.; Escolà, A.; Valero, C.; Gerhards, R.; Fernandez-Quintanilla, C.; Dorado, J.; Griepentrog, H.W. Discriminating Crop, Weeds and Soil Surface with a Terrestrial LIDAR Sensor. Sensors (Basel, Switzerland) 2013, 13, 14662–75. [Google Scholar] [CrossRef]
  33. Akin, H.L.; Ito, N.; Jacoff, A.; Kleiner, A.; Pellenz, J.; Visser, A. Robocup rescue robot and simulation leagues. AI magazine 2013, 34, 78–78. [Google Scholar] [CrossRef]
  34. de Azevedo, A.M.C.; Oliveira, A.S.; Gomes, I.S.; Marim, Y.V.R.; da Cunha, M.P.C.P.; Cássio, H.; Oliveira, G.; Martins, F.N. An Omnidirectional Robot for the RoboCup Junior Rescue B Competition. WEROB - RoboCupJunior Workshop on Educational Robotics, 2013.
  35. Wang, Y.; Wang, W.; Liu, J.; Chen, T.; Wang, S.; Yu, B.; Qin, X. Framework for geometric information extraction and digital modeling from LiDAR data of road scenarios. Remote Sensing 2023, 15, 576. [Google Scholar] [CrossRef]
  36. Diosi, A.; Kleeman, L. Laser scan matching in polar coordinates with application to SLAM. 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2005, pp. 3317–3322.
  37. Ma, Y.; Anderson, J.; Crouch, S.; Shan, J. Moving Object Detection and Tracking with Doppler LiDAR. Remote Sensing 2019, 11, 1154. [Google Scholar] [CrossRef]
  38. Zhou, X.; Wang, Y.; Zhu, Q.; Miao, Z. Circular object detection in polar coordinates for 2D LIDAR data. Chinese Conference on Pattern Recognition. Springer, 2016, pp. 65–78.
  39. Wang, Y.; Li, B.; Han, B.; Zhang, Y.; Zhao, W. Laser Scan Matching in Polar Coordinates Using Gaussian Process. Chinese Intelligent Automation Conference. Springer, 2019, pp. 106–115.
  40. Catapang, A.N.; Ramos, M. Obstacle detection using a 2D LIDAR system for an Autonomous Vehicle. 2016 6th IEEE International Conference on Control System, Computing and Engineering (ICCSCE). IEEE, 2016, pp. 441–445.
  41. Vaquero, V.; Repiso, E.; Sanfeliu, A. Robust and Real-Time Detection and Tracking of Moving Objects with Minimum 2D LiDAR Information to Advance Autonomous Cargo Handling in Ports. Sensors 2018, 19, 107. [Google Scholar] [CrossRef] [PubMed]
  42. Bosse, M.; Zlot, R. Keypoint design and evaluation for place recognition in 2D lidar maps. Robotics and Autonomous Systems 2009, 57, 1211–1224. [Google Scholar] [CrossRef]
  43. Rosell, J.R.; Llorens, J.; Sanz, R.; Arnó, J.; Ribes-Dasi, M.; Masip, J.; Escolà, A.; Camp, F.; Solanelles, F.; Gràcia, F.; others. Obtaining the three-dimensional structure of tree orchards from remote 2D terrestrial LIDAR scanning. Agricultural and Forest Meteorology 2009, 149, 1505–1515. [Google Scholar] [CrossRef]
  44. Ricaud, B.; Joly, C.; de La Fortelle, A. Nonurban driver assistance with 2D tilting laser reconstruction. Journal of Surveying Engineering 2017, 143, 04017019. [Google Scholar] [CrossRef]
  45. Pelenk, B.; Acarman, T. Object detection and tracking using sensor fusion and Particle Filter. 2013 IEEE International Conference on Imaging Systems and Techniques (IST). IEEE, 2013, pp. 210–215.
  46. Huang, L.; Barth, M. A novel multi-planar LIDAR and computer vision calibration procedure using 2D patterns for automated navigation. 2009 IEEE Intelligent Vehicles Symposium. IEEE, 2009, pp. 117–122.
  47. El Madawi, K.; Rashed, H.; El Sallab, A.; Nasr, O.; Kamel, H.; Yogamani, S. RGB and LiDAR fusion based 3D Semantic Segmentation for Autonomous Driving. 2019 IEEE Intelligent Transportation Systems Conference (ITSC), 2019, pp. 7–12.
  48. Martins, F.N.; Brandão, A.S. Motion Control and Velocity-Based Dynamic Compensation for Mobile Robots. In Applications of Mobile Robots; IntechOpen, 2018.
Figure 1. Polar stem plots representing one sweep of a 2D LiDAR scan: (a) Discrete distance measurements per sample; (b) Distance difference between subsequent samples; (c) Contour given by the distance measurements. In all cases, the angle represents the orientation of the LASER beam with respect to the base of the sensor, which has a range of 360°.
Figure 2. (a) Robot used in the experiments: a Pioneer 3-DX controlled by a Raspberry Pi running RosAria with an omnidirectional 2D LiDAR sensor mounted on its top. (b) Basic setup with the robot and one static object in the surrounding environment. (c) Resulting LiDAR measurements with identification of the edges of the object (red dots).
Figure 6. (a) Simulated environment with different objects and (b) corresponding identification of objects according to our proposed framework. In addition, functions derived from LiDAR scans as defined, representing measurements from the simulated semi-structured environment in (a).
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.