1. Introduction
Citrus cultivation plays a pivotal role in the rural revitalization strategy of southern China, and China leads the world in both total citrus output and planting area. However, persistent challenges remain, such as inadequate levels of intelligence and mechanization and incomplete information infrastructure [1]. The collection of crop phenotyping data is the fundamental basis for intelligent decision-making in agriculture. The canopy of a fruit tree is the primary structure responsible for light interception during photosynthesis and respiration, and it carries most of the branches, leaves, and fruits that supply the nutrients and energy required for growth. The canopy therefore plays a crucial role in tree growth and productivity, making it an essential indicator for monitoring biomass, estimating growth, predicting yield, estimating water consumption, monitoring health status, and tracking long-term productivity [2,3,4,5,6]. Canopy measurements also allow the effects of pruning to be quantified and tree characteristics to be evaluated. Recognizing citrus canopy features is thus vital for citrus production, breeding, and management, and it is closely tied to orchard precision-management tasks such as irrigation and fertilization, moisture detection, fruit crop breeding, and yield assessment [7]. Accurate canopy volume measurements can likewise serve as a basis for determining pesticide doses and supporting spraying decisions [8,9], provide technical support for pruning robots [10], and help managers make sound agricultural production decisions [11,12,13]. As a result, the rapid and accurate estimation of canopy features plays a crucial role in monitoring fruit tree growth dynamics and optimizing orchard management [14].
Citrus trees are evergreen perennials, and their canopy characteristics are essential for understanding how they grow; these features also make it easier to analyze how canopy structure relates to yield, physiological markers, and growth conditions [15]. Currently, the most common methods for measuring canopy volume are manual measurement and noncontact automatic measurement. Manual measurement typically entails measuring the height and width of a fruit tree with tools such as tape measures and height rulers; the crown is then approximated by a geometric model chosen to match its shape and structure, and its volume is calculated with that model's volume formula. Miranda-Fuentes et al. [16] estimated the canopy volume of olive trees by assuming an ellipsoidal shape. Zheng et al. [17] computed the crown volume of Hamlin sweet orange trees by assuming an irregular columnar shape. Scapin et al. [18] determined crown volume by measuring the contour shape of citrus trees along their rows and combining that information with crown height data. Lee et al. [19] introduced an approach for measuring the geometric features of tree crowns by slicing the crown horizontally at a 20 cm spacing, measuring the circumference of each slice with a PVC pipe placed around the tree, and multiplying the area of each sliced circle by its height to obtain the slice volume. Li et al. [20] replaced the slice-volume formula in this method with the formula for the volume of a circular platform (frustum) and applied it to citrus trees; this was more accurate than approximating the entire tree by a single geometric model and better reflected the actual canopy volume. Although manual measurements are simple and convenient, they are prone to errors caused by subjective factors, and there is a considerable difference between the chosen model and the true canopy form; indeed, no single model can completely and accurately represent the true shape of a canopy.
Researchers have developed a variety of noncontact automatic measurement methods to meet the requirements of efficient and accurate measurement. Dong et al. [21] employed a camera-carrying drone to capture photos, reconstruct a single fruit tree, and extract its area and diameter. Jurado et al. [22] used a camera to capture numerous overlapping photos to estimate plant height and volume. Camera-based 3D reconstruction has the advantages of low cost and rich texture; however, cameras are easily limited by light and weather conditions, demand considerable equipment and technical expertise, involve complex data processing, and require large amounts of captured data. Dong et al. [23] mounted an RGB-D depth camera on a pole to collect data from horizontal or top-down views and reconstructed apple trees in 3D from images of both sides, obtaining canopy volume, trunk diameter, tree height, and fruit number for an orchard. Yin et al. [24] employed depth cameras to generate a three-dimensional reconstruction of a cherry tree and compute its volume. Although depth cameras achieve more precise and accurate 3D reconstruction, they are more sensitive to changes in illumination than standard cameras and also suffer from low-quality depth images and a small measuring range. Yu et al. [25] built a system based on wireless networks and ultrasonic sensors to achieve three-dimensional reconstruction and volume extraction of fruit tree canopies. Maghsoudi et al. [26] mounted ultrasonic sensors at various heights on a sprayer to detect canopy information in real time and then used a multilayer perceptron neural network to calculate canopy volume. Ultrasonic measurement is noncontact, efficient, and accurate; in practice, however, the results are easily influenced by environmental noise and temperature changes, and the detection range is limited, so multiple devices and complex setups are required. Zhang et al. [27] used oblique photography to establish a linear relationship between a crop volume model and manually determined biomass. Zhu et al. [28] used oblique photography to determine the canopy structural parameters of maize. Oblique photography covers a wide area quickly, but its accuracy is strongly affected by lighting conditions, background interference, camera distortion, and other factors; because of the complicated canopy structure of fruit trees, it cannot reconstruct the bottom of the canopy and is easily influenced by environmental changes during acquisition.
LiDAR (Light Detection and Ranging), combined with effective information management, can be employed to characterize tree canopies and generate canopy point clouds at various viewing angles and spatial resolutions [29]. This method offers high accuracy, long measurement range, high point density, independence from ambient light, and fast data acquisition, and it is widely used in agriculture. Liu et al. [30] employed a handheld laser scanner to achieve 3D canopy reconstruction and volume extraction for pomelo trees. Wang et al. [31] developed an online LiDAR detection method based on dynamic grid division of the fruit tree canopy to determine the canopy contour and volume of peach orchards. Pagliai et al. [32] created a terrestrial LiDAR-based canopy volume assessment algorithm that combines a tractor, terrestrial LiDAR, and GNSS technology to achieve precise vineyard spraying, with the advantages of higher speed and fewer postprocessing computations. Yu et al. [33] used a handheld mobile laser scanner to scan and reconstruct 367 blueberry plants and collected height, width, and volume data for each; the extracted canopy size and shape characteristics were then analyzed statistically to identify the bush structure best suited to mechanical harvesting.
The following algorithms are commonly used for point cloud-based canopy reconstruction. Lee et al. [19] employed a convex hull approach to determine the volume of Hamlin sweet orange trees, yielding more useful information for predicting tree growth and productivity. Xu et al. [34] employed a convex hull slicing approach to estimate canopy volume and surface area, reducing the computational error for complicated canopy structures. Colaço et al. [35] used the alpha-shape and convex hull algorithms to reconstruct the surface of citrus trees and compared the results with manual measurements based on cylinders or cubes; the experiment revealed that the alpha-shape algorithm better describes the distribution of the canopy and provides a more realistic representation of it. Mahmud et al. [36] employed the alpha-shape algorithm to estimate fruit tree volume, and the canopy volume calculated by this algorithm showed a significant correlation with the physically counted foliage. Chakraborty et al. [37] employed a handheld 3D LiDAR scanner to determine the volume of a grapevine canopy and found that a voxel-based algorithm accounts for changes in canopy structure more accurately than the convex hull algorithm. Fernández-Sarría et al. [38] used seven methods to reconstruct the canopy of Platanus hispanica trees and, by analyzing the volume and runtime of these techniques, assessed the residual biomass produced by pruning more quickly, which will aid future management. Wang Jia et al. [39] split the tree crown into many irregular platforms and summed their volumes to obtain the crown volume, but treating the cross-section of each point cloud slice as circular or elliptical caused large deviations in the results. Yang et al. [34] improved this platform method by calculating the area of the convex hull of each crown point cloud slice to determine the platform volume, achieving fast, high-precision reconstruction of the canopy structure. Yan et al. [40] developed an adaptive sliced convex hull approach for street tree reconstruction and volumetric computation. Liu et al. [30] proposed a slice-based alpha-shape algorithm for canopy reconstruction and volume calculation of pomelo trees.
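To make the two simplest of these estimators concrete, the sketch below computes a convex hull volume and a voxel-count volume for an (N, 3) canopy point cloud; the function names and the 0.05 m voxel size are illustrative choices, not the settings of the cited studies.

```python
# Minimal sketch (not the cited implementations): convex-hull and voxel-based
# canopy volume estimates from an (N, 3) point cloud, assumed to be in meters.
import numpy as np
from scipy.spatial import ConvexHull

def convex_hull_volume(points: np.ndarray) -> float:
    """Volume (m^3) of the 3D convex hull enclosing the canopy points."""
    return ConvexHull(points).volume

def voxel_volume(points: np.ndarray, voxel_size: float = 0.05) -> float:
    """Volume (m^3) as (number of occupied voxels) * (volume of one voxel)."""
    idx = np.floor((points - points.min(axis=0)) / voxel_size).astype(int)
    occupied = np.unique(idx, axis=0).shape[0]
    return occupied * voxel_size ** 3
```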
Most agricultural researchers still measure the canopy volume of citrus, the world's leading fruit crop, manually, which is not only time-consuming and laborious but also deviates considerably from the true volume, hindering further analysis. We summarize six typical point cloud reconstruction algorithms (PRAs) for reconstructing 160 orange trees in a citrus orchard and propose a set of preprocessing procedures for the 3D reconstruction of a large-scale citrus orchard scanned with a handheld laser scanner. These algorithms are Convex Hull (CH), Alpha-Shape (AS), Platform Convex Hull (PCH), Voxel-Based (VB), Convex Hull by Slices (CHBS), and Alpha-Shape by Slices (ASBS). We then compare the principles, geometric properties, volume values, runtimes, and pairwise linear relationships of the algorithms to analyze their reconstruction effectiveness and shortcomings. The findings demonstrate that even when the best-performing of these six algorithms (ASBS) is used to reconstruct an orange tree, the thickness and number of canopy point cloud slices cannot always match the size and shape of the canopy itself. Accordingly, we developed a dynamic slicing and reconstruction canopy volume algorithm (DR), which, unlike existing algorithms, dynamically slices the canopy based on the proportional area change and density difference between neighboring slices, thereby better capturing and reflecting the actual growth characteristics of the tree canopy. Finally, we combine the strengths of each algorithm to identify suitable operating scenarios for them, providing an effective reference for the future intelligent mechanization of orchards.
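As an illustration of how the slice-based family (CHBS/ASBS) operates, the following is a minimal CHBS-style sketch with a fixed number of horizontal slices. It is a generic interpretation under the stated assumptions, not the exact procedure evaluated in this paper, and the dynamic slicing of DR (adapting slice boundaries to area and density changes) is deliberately not reproduced here.

```python
# Minimal sketch of a slice-based convex-hull (CHBS-style) volume estimate:
# the canopy is cut into n_slices horizontal slices, and each slice contributes
# (area of the 2D convex hull of its points) x (slice thickness).
import numpy as np
from scipy.spatial import ConvexHull

def slice_convex_hull_volume(points: np.ndarray, n_slices: int = 10) -> float:
    z = points[:, 2]
    edges = np.linspace(z.min(), z.max(), n_slices + 1)
    thickness = edges[1] - edges[0]
    volume = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        xy = points[(z >= lo) & (z <= hi), :2]
        if xy.shape[0] >= 3:  # degenerate (near-empty or collinear) slices are skipped in this sketch
            volume += ConvexHull(xy).volume * thickness  # for 2D input, .volume is the hull area
    return volume
```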
Section 2 describes the preprocessing process for segmenting individual fruit trees, as well as the principles and implementation of each algorithm;
Section 3 describes the experimental process and results with a detailed analysis;
Section 4 discusses the influence of the number of slices on the reconstruction effect, followed by the application scenarios of each algorithm; and
Section 5 concludes this paper’s work and contributions.
Figure 1.
Photographs of the experimental area: (a) Satellite image of a blood orange orchard; (b) ground-level image of a blood orange orchard.
Figure 2.
Constituent part of the handheld 3D LIDAR scanning system (GoSLAM RS100).
Figure 3.
3D LiDAR map of the blood orange orchard. The gradient color by z value indicates altitude; the white dashed line represents the path traveled by the LiDAR scanning equipment during the scan, and the triangle and star markers indicate the starting and ending points of the scan paths, respectively.
Figure 4.
Preprocessing for canopy reconstruction flowchart. Note: CSF, cloth simulation framework algorithm; SOR, Statistical Outlier Removal algorithm.
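A minimal sketch of the SOR denoising and per-tree clustering steps referenced in this flowchart, assuming the Open3D library and illustrative parameter values (the paper's actual tools and settings may differ); CSF ground filtering is assumed to have been applied beforehand, e.g. via the CSF plugin in CloudCompare, and the input file name is hypothetical.

```python
# Illustrative sketch only: SOR denoising and DBSCAN-based tree segmentation
# of an above-ground orchard point cloud using Open3D.
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("orchard_above_ground.pcd")  # hypothetical file name

# Statistical Outlier Removal: drop points whose mean neighbor distance
# deviates too far from the global average.
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# DBSCAN clustering to split the orchard into individual trees (eps/min_points illustrative).
labels = np.array(pcd.cluster_dbscan(eps=0.3, min_points=50))
trees = [np.asarray(pcd.points)[labels == k] for k in range(labels.max() + 1)]
```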
Figure 5.
The experimental area after completing CSF ground filtering, with some weeds and ground points still present (indicated by white circles).
Figure 6.
Clustering results for the blood orange orchard 3D map. Each different color represents a split fruit tree, manually numbered on the original image, indicating trees from 1-160.
Figure 7.
Comparison of the data before and after denoising: (a) The data before denoising. The red dashed circle shows a few obvious noise points (including the stick used to support the branch); (b) The state after denoising.
Figure 8.
Canopy reconstruction effect diagram with (a) manual method (MM); (b) convex hull (CH); (c) convex hull by slices (CHBS); (d) voxel-based (VB); (e) alpha-shape (AS); (f) alpha-shape by slices (ASBS); (g) platform convex hull (PCH); (h) dynamic slicing and reconstruction (DR) algorithms. Note: x, y, and z denote the horizontal width, vertical width, and height, respectively.
Figure 9.
Effect of the AS algorithm reconstruction for varying α values.
Figure 10.
Flowchart of the DR algorithm.
Figure 11.
Comparative plot of volumetric values of 160 blood orange trees measured by the MM method and 7 PRAs.
Figure 12.
Lateral angle of the reconstruction using the DR algorithm. Note: α, the α-value used for AS reconstruction from the slice; Sum(V) signifies the total volume from the canopy; V, sliced point cloud volume.
Figure 13.
Comparison plot of run times for 7 PRAs measuring 160 blood orange trees.
Figure 14.
Impact of the amount of slices (N) on the reconstruction volume in the CHBS approach. (a) N is set to 10; (b) 20; (c) 30; (d) 40; (e) 50; (f) Impact of various numbers of slices on the running time of the CHBS approach.
Figure 15.
Impact of the amount of slices (N) on the reconstruction volume in the ASBS approach. (a) N is set to 10; (b) 20; (c) 30; (d) 40; (e) 50; (f) Impact of various numbers of slices on the running time of the ASBS approach.
Table 1.
Comparison of height values measured by manual and CC methods.
H (m)  | Manual | CC    | Absolute Error | Relative Error
------ | ------ | ----- | -------------- | --------------
Mean   | 1.835  | 1.814 | 0.072          | 0.039
Var.   | 0.118  | 0.130 | 0.004          | 0.001
Max    | 2.590  | 2.616 | 0.397          | 0.178
Min    | 0.880  | 0.83  | 0.001          | 0.0005
Table 2.
Comparison of the measured width between the manual and CC methods.
W (m)  | Manual | CC    | Absolute Error | Relative Error
------ | ------ | ----- | -------------- | --------------
Mean   | 1.619  | 1.719 | 0.131          | 0.081
Var.   | 0.087  | 0.114 | 0.011          | 0.004
Max    | 2.400  | 2.460 | 0.513          | 0.343
Min    | 0.730  | 0.793 | 0.0005         | 0.0003
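For context on Tables 1 and 2, the sketch below shows one simple way to read canopy height and crown width off a segmented single-tree point cloud via its axis-aligned bounding box; it is an illustration only and not necessarily the procedure behind the CC column.

```python
# Illustrative sketch: canopy height H and crown width W from an (N, 3) single-tree
# point cloud in meters. H is the vertical extent; W is the larger horizontal extent
# of the axis-aligned bounding box.
import numpy as np

def canopy_height_width(points: np.ndarray):
    mins, maxs = points.min(axis=0), points.max(axis=0)
    height = maxs[2] - mins[2]
    width = max(maxs[0] - mins[0], maxs[1] - mins[1])
    return height, width
```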
Table 3.
R2 and regression equation for various approaches.
Eq./R² | V_MM  | V_CH           | V_CHBS         | V_VB           | V_PCH          | V_AS           | V_ASBS         | V_DR
------ | ----- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | --------------
V_MM   |       | y=0.884x+0.495 | y=1.124x+0.615 | y=1.345x+0.341 | y=1.460x+0.642 | y=2.199x+0.575 | y=2.353x+0.600 | y=2.346x+0.750
V_CH   | 0.854 |                | y=1.284x+0.109 | y=1.536x-0.201 | y=1.664x+0.145 | y=2.516x+0.061 | y=2.690x+0.091 | y=2.678x+0.265
V_CHBS | 0.827 | 0.989          |                | y=1.195x-0.242 | y=1.297x+0.025 | y=1.963x-0.042 | y=2.097x-0.017 | y=2.088x+0.118
V_VB   | 0.819 | 0.979          | 0.989          |                | y=1.078x+0.233 | y=1.646x+0.161 | y=1.759x+0.183 | y=1.743x+0.304
V_PCH  | 0.820 | 0.977          | 0.995          | 0.989          |                | y=1.509x-0.048 | y=1.611x-0.027 | y=1.602x+0.077
V_AS   | 0.802 | 0.962          | 0.978          | 0.994          | 0.982          |                | y=1.068x+0.013 | y=1.053x+0.090
V_ASBS | 0.805 | 0.964          | 0.977          | 0.994          | 0.980          | 0.999          |                | y=0.985x+0.072
V_DR   | 0.794 | 0.948          | 0.962          | 0.968          | 0.962          | 0.965          | 0.964          |

Note: V_MM through V_DR denote the canopy volumes obtained by the corresponding methods; the upper triangle lists the fitted regression equations and the lower triangle the corresponding R² values for each pair of methods.
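The pairwise comparisons in Table 3 can be reproduced from two volume series with an ordinary least-squares fit; below is a minimal sketch (the function name is illustrative) that returns the slope, intercept, and R² for any pair of methods, e.g. V_MM against V_CH.

```python
# Illustrative sketch: fit y = a*x + b between two canopy-volume series and report R^2.
import numpy as np

def linear_fit_r2(x: np.ndarray, y: np.ndarray):
    a, b = np.polyfit(x, y, 1)            # slope and intercept of the least-squares line
    residuals = y - (a * x + b)
    r2 = 1.0 - residuals.var() / y.var()  # R^2 = 1 - SS_res / SS_tot
    return a, b, r2
```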
Table 4.
Comparative analysis of DR with CHBS, PCH, and ASBS.
Algorithm |        |       |       |       |
--------- | ------ | ----- | ----- | ----- | -----
CHBS      | 10     | No    | 0.187 | 0.556 | 1.872
PCH       | 5      | No    | 0.254 | 0.709 | 1.339
ASBS      | 10     | 0.05  | 0.09  | 0.664 | 0.901
DR        | 13.806 | 0.085 | 0.065 | 1.532 | 0.84