Preprint
Article

Estimation of Earth's Central Angle Threshold and Measurement Model Construction Method for Pose and Attitude Solution Based on Aircraft Scene Matching


A peer-reviewed article of this preprint also exists. This version is not peer-reviewed.

Submitted: 10 July 2023; Posted: 11 July 2023

Abstract
To address the problem of solving the pose in aircraft visual navigation with scene matching, this paper proposes a spherical EPnP pose-solving method based on measuring the central angle threshold, together with an approach for constructing the measurement model. The detailed steps are as follows. First, a positioning coordinate model of the Earth's surface is constructed, the expression for the three-dimensional coordinates of the Earth's surface is determined, and the constructed data model is solved with the EPnP pose-solving algorithm. Second, by comparing and analyzing the solved poses against those obtained with approximate plane coordinates, the critical value below which the plane calculation is acceptable is obtained. Finally, with the determined central angle threshold, a theoretical measurement model relating the flying height and field angle to the central angle is constructed. The simulation experiments show that the average positioning precision with spherical coordinates as input is 16.42 percent higher than with plane coordinates as input. When the central angle is less than 0.05 degrees and the corresponding surface extent is less than 5585 m, the positioning precision of the plane coordinates is nearly equal to that of the spherical coordinates; at that point the sphere can be treated as flat. The conclusions of this paper can theoretically guide further study of pose solving for scene matching and are of significance for both theoretical research and engineering application.
Keywords: 
Subject: Environmental and Earth Sciences - Remote Sensing

1. Introduction

Positioning is one of the most important capabilities an aircraft needs to perform a task. Current aircraft navigation systems mainly include satellite navigation, combined satellite-inertial navigation, and visual navigation. Although a satellite navigation system can provide continuous position information and thereby realize high-precision, stable navigation, it is easily jammed and spoofed by anti-satellite equipment, which leads to wrong position fixes. Moreover, when the satellite navigation system is affected by interfering signals, the navigation precision deteriorates dramatically within a very short period, giving rise to errors. As for the combined inertial-satellite navigation system, its precision is mainly determined by the precision of the inertial navigation: with high-precision inertial navigation equipment and stable satellite signals, it can provide the aircraft with stable positioning. Visual navigation, by contrast, has low accumulative error and strong anti-interference ability, and is now widely used in terminal guidance. To improve the stability and safety of the navigation system while lowering the cost of navigation equipment, visual navigation will become predominant.
Visual navigation can mainly be classified into two categories: SLAM (Simultaneous Localization and Mapping) based on image sequences, and scene matching based on a single image. Visual SLAM establishes a visual odometer by acquiring the pose transformation matrices between successive images and computing continuous pose information. However, accumulative errors always exist, and visual SLAM requires abundant feature points to match, which is hard to achieve in high-altitude flight. Scene matching, which has no accumulative error and high stability, is usually applied to the navigation of high-altitude flights and is an important approach for aircraft navigation. The process of scene matching is as follows: image matching determines the absolute positions of datum marks on the surface, and the pose is then solved inversely by a pose-solving algorithm. The precision of scene matching is therefore determined not only by the image matching but also, to a large degree, by the pose-solving algorithm.
The PnP (Perspective-n-Point) algorithm is currently one of the most widely used pose-solving algorithms, offering good solving stability and computational efficiency. Common PnP algorithms include P3P (Perspective-3-Point), EPnP, DLT (Direct Linear Transform), UPnP, and so on, each suited to different situations. Among them, P3P is hard to apply in SLAM because a unique solution is difficult to obtain and many pairs of feature points are required to refine an accurate result. The DLT and UPnP algorithms require at least six matching pairs and are very demanding of their quality, so they are hard to use in multi-modal image matching with noisy pictures. EPnP, however, needs only four correct image matching pairs to solve the pose and yields a unique solution, making it suitable for pose solving in aircraft positioning.
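For concreteness, the sketch below shows how such an EPnP solve is commonly invoked through OpenCV's solvePnP; the point pairs and intrinsic parameters are hypothetical placeholders, not values from this paper.

```python
# Minimal EPnP solve with OpenCV (hypothetical inputs, no lens distortion).
import numpy as np
import cv2

# Four or more matched pairs: 3D ground datum marks and their image points.
object_points = np.array([[0.0, 0.0, 0.0],
                          [1.0, 0.0, 0.0],
                          [0.0, 1.0, 0.0],
                          [1.0, 1.0, 0.5]], dtype=np.float64)
image_points = np.array([[320.0, 240.0],
                         [400.0, 238.0],
                         [322.0, 160.0],
                         [405.0, 165.0]], dtype=np.float64)

# Assumed pinhole intrinsics: focal lengths fx, fy and principal point u0, v0.
A = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, A, None,
                              flags=cv2.SOLVEPNP_EPNP)
R, _ = cv2.Rodrigues(rvec)  # rotation vector -> rotation matrix
```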
Aircraft image matching usually relies on monocular vision, from which depth information is hard to acquire. Consequently, in normal processing the collected images are treated as planes and the absolute coordinates obtained are points within a plane. Errors may therefore occur if the real-time pictures are treated as a plane when solving the pose; this error, ΔH, is shown in Figure 1.
As shown in Figure 1, errors arise during image matching because the sphere is approximated as a plane, which lowers the precision of pose solving. It is therefore necessary to determine the threshold value of the central angle at which a plane may replace the spherical surface, together with the resulting error and its propagation law. To address this problem, this paper puts forward a spherical EPnP pose-solving approach that measures the central angle threshold for aircraft scene matching. Based on this approach, theoretical models for the flying height and camera field angle can be constructed. The method is named EarM_EPnP, and its main innovations are as follows:
  • It puts forward aircraft pose solving with sphere-like three-dimensional data as input, which effectively improves the positioning precision over plane-coordinate solving with the EPnP algorithm.
  • It determines the critical value of the central angle at which a plane can replace the three-dimensional Earth surface in scene-matching pose solving, providing theoretical support for further study of pose solving in scene matching.
  • It derives the inherent relationship among the camera parameters, the critical angle, and the field angle, obtaining the corresponding function and providing measurement principles for subsequent research.
This paper consists of four parts. The first part is the introduction, which reviews the research status of pose solving for scene matching. The second part presents the approaches this paper puts forward. The third part covers the experiments and discussion, and the last part gives the conclusion.

2. EarM_EPnP

The flow chart of EarM_EPnP is shown in Figure 2.
Figure 2. Flow chart of EarM_EPnP.
The detailed steps in Figure 2 are described below.
  • Set up the spherical coordinate system and obtain the spherical coordinates.
  • Solve the spherical coordinates with EPnP and acquire the pose-solving results.
  • Combine the EPnP results, taking GPS precision as the datum, to determine the central angle threshold of the Earth.
  • Construct theoretical models relating the aerial photography height, the central angle, and the field angle.

2.1. Construction of spherical coordinates

This paper adopts the East-North-Up (ENU) coordinate system. As shown in Figure 1, data acquisition for the monocular vision of an aircraft is always carried out looking straight down. However, because monocular vision cannot acquire depth information, the collected datum mark coordinates can only be approximated as $(X, Y, R)$, whereas the coordinates on the Earth's surface should be $(X, Y, Z)$. X, Y, and Z satisfy
$$X^2 + Y^2 + Z^2 = R^2 \tag{1}$$
So,
$$Z = \sqrt{R^2 - X^2 - Y^2} \tag{2}$$
where R is the radius of the Earth. According to this expression, the datum mark coordinates on the Earth's surface can be written as $(X, Y, \sqrt{R^2 - X^2 - Y^2})$. For convenience in calculating and expressing the angular error relationships, polar (spherical) coordinates are adopted:
$$\begin{cases} X = R \sin\theta \cos\varphi \\ Y = R \sin\theta \sin\varphi \\ Z = R \cos\theta \end{cases} \tag{3}$$
where θ is the polar angle measured from the Z axis and φ is the azimuthal angle measured from the X axis. Since θ and φ are independent, this paper sets them to the same value.
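A minimal sketch of generating Earth-surface datum points from equation (3) with θ = φ is given below; the radius and the angle grid are illustrative assumptions.

```python
# Sketch: Earth-surface datum points per Eq. (3), with theta = phi as in the
# paper. The Earth radius and angle grid are assumed, illustrative values.
import numpy as np

R = 6_371_000.0                                   # assumed Earth radius (m)
angles = np.radians(np.linspace(0.01, 0.6, 30))   # even angle selection
theta = phi = angles                              # paper sets theta = phi

X = R * np.sin(theta) * np.cos(phi)
Y = R * np.sin(theta) * np.sin(phi)
Z = R * np.cos(theta)
spherical_points = np.column_stack([X, Y, Z])     # (X_s, Y_s, Z_s), Eq. (3)
```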

2.2. EPnP positioning posture solving for spherical coordinates

This paper adopts the approach of reference [10] to solve the pose. $f_{tran}$ is the approximate pose solution of the plane coordinate system, and $f_{Ear\text{-}EPnP}$ is the pose solution of the spherical coordinate system. The EPnP pose solutions for the two coordinate systems are:
$$f_{tran} = f_{EPnP}(X_n, Y_n, R) \tag{4}$$
$$f_{Ear\text{-}EPnP} = f_{EPnP}\left(X_n, Y_n, \sqrt{R^2 - X_n^2 - Y_n^2}\right) \tag{5}$$
where $X_n$ and $Y_n$ denote the abscissa and ordinate respectively, n is a natural number, and the points $(X_n, Y_n, Z_n)$ are non-coplanar.
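As a sketch of equations (4) and (5), the two EPnP inputs can be built from the same $(X_n, Y_n)$ pairs, differing only in the Z column; the helper name below is illustrative, not the authors' code.

```python
# Sketch of Eqs. (4)-(5): identical (X_n, Y_n), Z fixed to R for the planar
# input and Z = sqrt(R^2 - X^2 - Y^2) for the spherical input.
import numpy as np

def make_epnp_inputs(xy: np.ndarray, R: float):
    """xy is an (n, 2) array of ground-point abscissas and ordinates."""
    z_sphere = np.sqrt(R**2 - xy[:, 0]**2 - xy[:, 1]**2)
    spherical = np.column_stack([xy, z_sphere])          # input to f_Ear-EPnP
    planar = np.column_stack([xy, np.full(len(xy), R)])  # input to f_tran
    return planar, spherical
```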

2.3. Threshold value of central angle

Simulated input is generated through the spherical model. Combining with equation (3) gives $(X_s, Y_s, Z_s)$, the spherical input coordinates. Constructing the plane coordinates only requires changing Z to R, which yields the plane input coordinates $(X_f, Y_f, Z_f)$. Both depend on the input angle variable Angle. The simulation experiments were carried out under ideal conditions.
$$point = f(Angle, A, R_t) \tag{6}$$
Equation (6) describes the generation of simulated input point-pair data from the sphere model by combining the angle, the camera intrinsic matrix, and the rotation and translation. With $Angle = start + \Delta a$, the initial angle start and the increment Δa realize even point selection on the spherical surface.
$$A = \begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \tag{7}$$
Equation (7) is the camera intrinsic matrix. In computing the point sets, all steps are identical for $(X_f, Y_f, Z_f)$ and $(X_s, Y_s, Z_s)$, so it suffices to take the spherical input as the example of plane-replaced-sphere input. With the simulated input, the coordinates of the Earth's surface points are written $A'(X_s, Y_s, Z_s)$.
So,
$$A' = \begin{cases} X_s = R \sin\theta \cos\varphi \\ Y_s = R \sin\theta \sin\varphi \\ Z_s = R \cos\theta \end{cases} \tag{8}$$
$$X^S_{Img\_h} = A \times A' = \begin{bmatrix} R\sin\theta\cos\varphi \cdot f_x + R\cos\theta \cdot u_0 \\ R\sin\theta\sin\varphi \cdot f_y + R\cos\theta \cdot v_0 \\ R\cos\theta \end{bmatrix} \tag{9}$$
Equation (9) prepares for generating the two-dimensional image coordinates by multiplying the Earth-surface point coordinates by the camera intrinsic matrix.
$$[u, v]^T = X^S_{Img} = \begin{bmatrix} \dfrac{R\sin\theta\cos\varphi \cdot f_x + R\cos\theta \cdot u_0}{R\cos\theta} \\[2mm] \dfrac{R\sin\theta\sin\varphi \cdot f_y + R\cos\theta \cdot v_0}{R\cos\theta} \end{bmatrix} \tag{10}$$
The two-dimensional coordinates (u, v) are obtained by dividing the X and Y components of equation (9) by its Z component.
$$X^S_{2d\_h} = \begin{bmatrix} (X^S_{Img})^T & 1 \end{bmatrix} \tag{11}$$
Equation (11) adds one dimension to the 2D image point coordinates, which facilitates the subsequent EPnP solution. In addition, the 3D coordinates in the world coordinate system must be obtained: the observed data are not in the camera coordinate system, so the data are centered on the world-coordinate centroid and assumed to be rotated into the camera coordinate system.
$$\alpha = \frac{45}{180}\pi, \quad \beta = \frac{45}{180}\pi, \quad \gamma = \frac{45}{180}\pi \tag{12}$$
Equation (12) gives the rotations of the simulated input data about the X, Y, and Z directions; fixed angles are used instead of random variables to prevent fluctuation of the results, so α, β, and γ are all set to 45 degrees.
$$t = [0 \;\; 0 \;\; 0]^T \tag{13}$$
Equation (13) defines the translation vector t.
$$R_t = \begin{bmatrix} \cos\alpha\cos\gamma - \cos\beta\sin\alpha\sin\gamma & -\cos\beta\cos\gamma\sin\alpha - \cos\alpha\sin\gamma & \sin\alpha\sin\beta & 0 \\ \cos\gamma\sin\alpha + \cos\alpha\cos\beta\sin\gamma & \cos\alpha\cos\beta\cos\gamma - \sin\alpha\sin\gamma & -\cos\alpha\sin\beta & 0 \\ \sin\beta\sin\gamma & \cos\gamma\sin\beta & \cos\beta & 0 \end{bmatrix} \tag{14}$$
Equation (14) combines the rotation matrix built from the angles of equation (12) with the translation vector of equation (13) into the 3×4 rotation-translation matrix $R_t$, which is used to generate the world coordinates serving as input.
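A minimal sketch of building $R_t$ from equations (12)-(14) is given below; the rotation block reproduces equation (14) entry by entry (it matches a Z-X-Z Euler product $R_z(\alpha) R_x(\beta) R_z(\gamma)$).

```python
# Sketch: build the 3x4 rotation-translation matrix of Eq. (14) from the
# 45-degree angles of Eq. (12) and the zero translation of Eq. (13).
import numpy as np

a = b = g = np.radians(45.0)        # alpha = beta = gamma, Eq. (12)
ca, sa = np.cos(a), np.sin(a)
cb, sb = np.cos(b), np.sin(b)
cg, sg = np.cos(g), np.sin(g)

# Rotation block, entry by entry as in Eq. (14).
Rmat = np.array([
    [ca * cg - cb * sa * sg, -cb * cg * sa - ca * sg,  sa * sb],
    [cg * sa + ca * cb * sg,  ca * cb * cg - sa * sg, -ca * sb],
    [sb * sg,                 cg * sb,                 cb     ],
])
t = np.zeros((3, 1))                # translation vector, Eq. (13)
Rt = np.hstack([Rmat, t])           # 3x4 [R | t], Eq. (14)
```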
$$centroid = \frac{1}{n}\sum_{i=1}^{n} A'_i \tag{15}$$
Equation (15) computes the centroid. With the centroid and the rotation-translation matrix $R_t$, the world coordinates serving as input can be computed.
$$X^S_{world} = R_t \times \begin{bmatrix} X_s - centroid_x \\ Y_s - centroid_y \\ Z_s - centroid_z \\ 1 \end{bmatrix} \tag{16}$$
$$X^S_{3d\_h} = \begin{bmatrix} X_{world}^T & 1 \end{bmatrix} \tag{17}$$
Equation (17) adds one dimension to the 3D point coordinates, facilitating the EPnP solution. Equations (11) and (17) and the camera intrinsic matrix are then substituted into EPnP to solve for the pose information.
$$(R_S, T_S, X_c) = f_{EPnP}(X^S_{3d\_h}, X^S_{2d\_h}, A) \tag{18}$$
Solving equation (18) yields the rotation matrix $R_S$, the translation vector $T_S$, and the positioning coordinates $X_C$ of the camera; the planar input analogously yields $R_f$ and $T_f$.
$$X_w = (X^S_{world})^T, \quad U = (X^S_{Img})^T \tag{19}$$
After the format transformation of equation (19), the reprojection error can be computed as follows.
$$error = f_{reprojection}(X_w, U, R_S, T_S, A) \tag{20}$$
$$P = A \begin{bmatrix} R_S & T_S \end{bmatrix} \tag{21}$$
$$P \times [X_w \;\; 0]^T \tag{22}$$
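The function $f_{reprojection}$ of equation (20) is not spelled out in the paper; a minimal sketch using OpenCV's projectPoints as a stand-in is shown below, assuming $X_w$ ($n \times 3$), $U$ ($n \times 2$), and the EPnP outputs from the preceding steps.

```python
# Sketch of the reprojection check of Eqs. (20)-(22), with cv2.projectPoints
# standing in for f_reprojection.
import numpy as np
import cv2

def reprojection_error(X_w, U, rvec, tvec, A):
    projected, _ = cv2.projectPoints(X_w, rvec, tvec, A, None)
    projected = projected.reshape(-1, 2)
    # Mean pixel distance between reprojected and observed image points.
    return float(np.mean(np.linalg.norm(projected - U, axis=1)))
```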
The camera position is obtained by rotating and translating the datum mark $[0, 0, 0]^T$; the spherical and planar simulated inputs yield different solved positions $P_1$ and $P_2$.
$$P_1 = R_S \times [0 \;\; 0 \;\; 0]^T + T_S \tag{23}$$
$$P_2 = R_f \times [0 \;\; 0 \;\; 0]^T + T_f \tag{24}$$
The positioning error is the absolute value of the difference between the positions given by the two formulas above:
$$P_{error} = \mathrm{abs}(P_2 - P_1) \tag{25}$$
$P_{error}$ represents the positioning error of the plane coordinate system; $P_1$ and $P_2$ are the positioning results of the spherical and plane coordinate systems, respectively. The central angle at which $P_{error}$ reaches 1 m, 10 m, or 100 m is taken as the threshold value.
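A sketch of equations (23)-(25) and the threshold search is given below, assuming $R_S/T_S$ and $R_f/T_f$ are the EPnP outputs for the spherical and planar inputs at each central angle.

```python
# Sketch of Eqs. (23)-(25) and the central angle threshold search.
import numpy as np

def positioning_error(R_S, T_S, R_f, T_f):
    origin = np.zeros((3, 1))            # datum mark [0, 0, 0]^T
    P1 = R_S @ origin + T_S              # Eq. (23), spherical solution
    P2 = R_f @ origin + T_f              # Eq. (24), planar solution
    return np.abs(P2 - P1)               # Eq. (25)

def central_angle_threshold(angles_deg, errors, limit_m):
    """First central angle whose positioning error magnitude exceeds limit_m."""
    for angle, err in zip(angles_deg, errors):
        if np.linalg.norm(err) > limit_m:
            return angle
    return None
```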

2.4. Model of aerial photography height and field of view angle under a fixed central angle

Using the central angle threshold solved by EPnP in Section 2.3, the theoretical model relating the height of aerial photography and the field angle can be derived, as shown in Figure 3.
As seen in Figure 3, the relationship between the flying height and the central angle is described through W, whose expression is:
$$W = 2R\sin(\theta/2) \tag{26}$$
W represents the length of the line segment |AB|, R is the radius of the Earth, and θ is the central angle. The flying height of the aircraft is H:
$$H = H_1 - \Delta H \tag{27}$$
where H is the flying height and $H_1$ is the length of $|CO_1|$, i.e., the flying height plus the height error. ΔH is the length of segment |CD|, so the height error is:
$$\Delta H = R - R\cos(\theta/2) \tag{28}$$
And:
$$H_1 = \frac{W}{2\tan(\alpha/2)} \tag{29}$$
where α is the field angle of the camera carried on the aircraft. Combining equations (26)-(29) gives the relationship between the flying height H and the field angle α: equation (30) expresses H in terms of α, and equation (31) expresses α in terms of H.
$$H = \frac{R\sin(\theta/2)}{\tan(\alpha/2)} - R + R\cos(\theta/2) \tag{30}$$
$$\alpha = 2\arctan\left[\frac{R\sin(\theta/2)}{H + R - R\cos(\theta/2)}\right] \tag{31}$$
H, α, and R denote the flying height, the field angle, and the radius of the Earth, respectively, and θ is the central angle.
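Equations (30) and (31) are inverse to each other; the sketch below implements both and checks the round trip at a fixed central angle (the Earth radius value is an assumption).

```python
# Sketch of Eqs. (30)-(31): flying height from field angle and vice versa at a
# fixed central angle theta.
import numpy as np

R = 6_371_000.0  # assumed Earth radius in metres

def height_from_fov(alpha, theta):
    """Eq. (30): H = R sin(theta/2) / tan(alpha/2) - R + R cos(theta/2)."""
    return R * np.sin(theta / 2) / np.tan(alpha / 2) - R + R * np.cos(theta / 2)

def fov_from_height(H, theta):
    """Eq. (31): alpha = 2 arctan[R sin(theta/2) / (H + R - R cos(theta/2))]."""
    return 2 * np.arctan(R * np.sin(theta / 2) / (H + R - R * np.cos(theta / 2)))

theta = np.radians(0.18)                      # central angle of 0.18 degrees
H = height_from_fov(np.radians(30.0), theta)  # ~37 km for a 30-degree FOV
assert np.isclose(fov_from_height(H, theta), np.radians(30.0))  # round trip
```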

3. Simulation Experiment and Discussion

3.1. Simulation experiment

1. Experimental conditions
The experimental platform was a 64-bit Windows 10 operating system on an x64 device with an Intel Core i7-7500U CPU (base frequency 2.70 GHz, up to 2.90 GHz). Based on the code of reference [11], combined with the spherical coordinates of Section 2.1, the three-dimensional spherical coordinates are constructed. The plane-like EPnP variant sets the Z coordinate to $R_{earth}\cos\theta$.
2. Comparison between plane-coordinate and stereo-coordinate results
When the central angle is 0.6 degrees, the corresponding extent on the Earth's surface is $R_{earth} \times \sin 0.6°$, approximately 66.723 km, which is sufficient for aircraft flying heights of 30-50 km. The central angle range is therefore taken as 0-0.6 degrees. To show the merit of our approach, the experiment is divided into two cases. The precision improvement is defined as
$$\eta = \frac{P_{Case2} - P_{Case1}}{P_{Case2}} \tag{32}$$
where η is the improvement in positioning error, and $P_{Case2}$ and $P_{Case1}$ denote the positioning errors of Case 2 and Case 1, respectively. The two cases are defined as follows, with a sketch of the comparison after the definitions.
Case 1 takes the coordinates of the plane coordinate system as the data input for the EPnP solution.
Case 2 takes the coordinates of the spherical coordinate system (close to the ideal state, usable as the reference datum) as the data input for the EPnP solution.
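A sketch of the Case 1 / Case 2 sweep is given below; `simulate_points` is a hypothetical helper standing in for the point-pair generation of equation (6), and the intrinsic matrix is an illustrative value, not the authors' setup.

```python
# Sketch: solve EPnP with planar and spherical 3D inputs over a sweep of
# central angles and record the positioning error of Eq. (25).
import numpy as np
import cv2

R = 6_371_000.0
A = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

def camera_position(points_3d, points_2d):
    ok, rvec, tvec = cv2.solvePnP(points_3d, points_2d, A, None,
                                  flags=cv2.SOLVEPNP_EPNP)
    Rm, _ = cv2.Rodrigues(rvec)
    return Rm @ np.zeros((3, 1)) + tvec       # Eqs. (23)-(24)

errors = []
for angle_deg in np.linspace(0.01, 0.6, 60):
    # Hypothetical: returns spherical 3D, planar 3D and shared 2D points.
    sphere_3d, plane_3d, img_2d = simulate_points(angle_deg, A, R)
    P1 = camera_position(sphere_3d, img_2d)   # Case 2: spherical input
    P2 = camera_position(plane_3d, img_2d)    # Case 1: planar input
    errors.append(np.abs(P2 - P1))            # Eq. (25)
```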
Figure 4 depicts the relationship between the positioning error and the central angle when the EPnP algorithm is solved with plane coordinates and with spherical coordinates as inputs, again over central angles of 0-0.6 degrees. The upper-left panel shows the combined results in the X, Y, and Z directions. When the central angle is relatively small (about 0.05 degrees), the plane-solved results are nearly equal to the sphere-solved results and the errors are approximately 0; as the central angle grows, the errors grow as well. Examining the X, Y, and Z directions separately, the errors in the X and Y directions increase with the central angle, while the Z direction fluctuates comparatively strongly.
The effect of the number of points selected during EPnP solving is summarized in Table 1, which takes the spherical coordinates as datum marks and relates the number of pose-solving points to the solving time, including the average error for different numbers of point pairs. The table shows that the solving time is not strongly correlated with the number of points and does not lengthen as the number of calculation points increases; likewise, the average errors for different numbers of solving points show no clear relationship to each other. Based on the simulation, this paper therefore takes 30 as the number of solving points.
Taking precisions of 1 m, 10 m, and 100 m as examples, the corresponding angles, errors, average errors, and variances were recorded. The results are as follows.
Figure 5 shows the angles, errors, average errors, and variances for precisions of 1 m, 10 m, and 100 m. Building on the positioning errors of Figure 4, the errors of the different precision orders are examined over much smaller ranges. The upper-left panel depicts the positioning accuracy of 0-1 m over 0-0.6 degrees; the central angle corresponding to 1 m is 0.057 degrees. The upper-right panel shows the variance and average value over the range 0-0.059 degrees; here the variance relative to the average value is small, indicating that the system is stable and the error fluctuation is small. The middle-left panel depicts the range 0.050-0.181 degrees, corresponding to a positioning precision of 1-10 m; the central angle threshold for 10 m is 0.179 degrees. The middle-right panel gives the variance and average value over this range, where the variance is about 3.76 times the average value; compared with 0-0.06 degrees, the system becomes less stable and more fluctuant. The lower-left panel covers the positioning precision of 10-100 m, corresponding to the angle range 0.170-0.539 degrees, where the fluctuation is dramatic. From the relationship between the central angle and the positioning error, the central angle threshold for a precision of 100 m is 0.500 degrees. The lower-right panel describes the variance and average value over 0.170-0.539 degrees, where the variance is about 41.12 times the average value.
3. Simulation results of the theoretical models for field angle and flying height
Figure 6 depicts the relationship between the flying height and the field angle when the central angle is 0.1800 or 0.5000 degrees. From the field angle the flying height can be obtained; conversely, a camera with a suitable field angle can be chosen according to the flying height.

3.2. Discussion

In this paper, moving the simulated coordinates from the plane coordinate system to the spherical one improves the accuracy in the x and y directions, while some fluctuation remains in the z direction. Altitude can usually be measured directly with an altimeter, which would be a useful complement to the overall navigation system. The experimental results also give the relationship between the flying height and the field of view of the visual sensor under a given central angle of the Earth, providing theoretical support for the subsequent autonomous flight of aircraft.

4. Conclusion

To handle the existing problems of pose solving in aircraft scene matching, this paper introduces three-dimensional coordinates into the pose solution. The experimental results show that the high-precision three-dimensional calculation increases the overall precision by about 16 percent, providing an effective approach for pose solving in visual navigation. From the relationship between flying height and field angle, the corresponding theoretical models were derived and analyzed. These results can provide theoretical guidance for further study of flying height and trajectory improvement in aircraft scene matching, and are of significance for both theoretical research and application. The theoretical analysis of the error trends over different angle ranges will be the focus of the next stage of research.

References

  1. Xi Yan, David Acuna, and Sanja Fidler, "A Large-Scale Search Engine for Transfer Learning Data," 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020.
  2. Simon Hadfield, Karel Lebeda, and Richard Bowden, "HARD-PnP: PnP Optimization Using a Hybrid Approximate Representation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 41, no. 3, pp. 768-774, 2019. [CrossRef]
  3. Chi Xu, Lilian Zhang, Li Cheng, et al., "Pose Estimation from Line Correspondences: A Complete Analysis and a Series of Solutions," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 6, pp. 1209-1222, 2017. [CrossRef]
  4. Haoyin Zhou, Tao Zhang, and Jayender Jagadeesan, "Re-weighting and 1-Point RANSAC-Based PnP Solution to Handle Outliers," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 41, no. 12, pp. 3022-3033, 2019. [CrossRef]
  5. Bo Chen, Álvaro Parra, Jiewei Cao, Nan Li, et al., "End-to-End Learnable Geometric Vision by Backpropagating PnP Optimization," 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, pp. 124-127, August 2020. [CrossRef]
  6. Jiayan Qiu, Xinchao Wang, Pascal Fua, et al., "Matching Seqlets: An Unsupervised Approach for Locality Preserving Sequence Matching," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 43, no. 2, pp. 745-752, 2021. [CrossRef]
  7. Erik Johannes Bekkers, Marco Loog, Bart M. ter Haar Romeny, et al., "Template Matching via Densities on the Roto-Translation Group," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 2, pp. 452-466, 2018. [CrossRef]
  8. Wengang Zhou, Ming Yang, Xiaoyu Wang, et al., "Scalable Feature Matching by Dual Cascaded Scalar Quantization for Image Retrieval," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 1, pp. 159-171, 2016. [CrossRef]
  9. Dylan Campbell, Lars Petersson, Laurent Kneip, et al., "Globally-Optimal Inlier Set Maximisation for Simultaneous Camera Pose and Feature Correspondence," 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, December 2017. [CrossRef]
  10. Kiru Park, Timothy Patten, and Markus Vincze, "Pix2Pose: Pixel-Wise Coordinate Regression of Objects for 6D Pose Estimation," 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea (South), February 2019. [CrossRef]
  11. Andrea Pilzer, Stéphane Lathuilière, Dan Xu, et al., "Progressive Fusion for Unsupervised Binocular Depth Estimation Using Cycled Networks," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 42, no. 10, pp. 2380-2395, 2020. [CrossRef]
  12. Zhengxia Zou, Wenyuan Li, Tianyang Shi, et al., "Generative Adversarial Training for Weakly Supervised Cloud Matting," 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea (South), February 2019. [CrossRef]
  13. Ming Cai and Ian Reid, "Reconstruct Locally, Localize Globally: A Model Free Method for Object Pose Estimation," 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, August 2020.
  14. Hansheng Chen, Yuyao Huang, Wei Tian, et al., "MonoRUn: Monocular 3D Object Detection by Reconstruction and Uncertainty Propagation," 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, November 2021. [CrossRef]
  15. Kunpeng Li, Yulun Zhang, Kai Li, et al., "Visual Semantic Reasoning for Image-Text Matching," 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea (South), February 2019.
  16. Haoyin Zhou, Tao Zhang, and Jayender Jagadeesan, "Re-weighting and 1-Point RANSAC-Based PnP Solution to Handle Outliers," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 41, no. 12, pp. 3022-3033, 2019.
  17. Chunyu Wang, Yizhou Wang, Zhouchen Lin, et al., "Robust 3D Human Pose Estimation from Single Images or Video Sequences," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 41, no. 5, pp. 1227-1241, 2019. [CrossRef]
  18. Yeongmin Lee and Chong-Min Kyung, "A Memory- and Accuracy-Aware Gaussian Parameter-Based Stereo Matching Using Confidence Measure," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 43, no. 6, pp. 1845-1858, 2021. [CrossRef]
  19. Chi Xu, Lilian Zhang, Li Cheng, et al., "Pose Estimation from Line Correspondences: A Complete Analysis and a Series of Solutions," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 6, pp. 1209-1222, 2017. [CrossRef]
  20. Huang-Chia Shih and Kuan-Chun Yu, "A New Model-Based Rotation and Scaling-Invariant Projection Algorithm for Industrial Automation Application," IEEE Transactions on Industrial Electronics, vol. 63, no. 7, pp. 4452-4460, 2016. [CrossRef]
  21. Xiangzeng Liu, Yunfeng Ai, Bin Tian, et al., "Robust and Fast Registration of Infrared and Visible Images for Electro-Optical Pod," IEEE Transactions on Industrial Electronics, vol. 66, no. 2, pp. 1335-1344, 2019. [CrossRef]
  22. Bo Wang, Jingwei Zhu, Zhihong Deng, and Mengyin Fu, "A Characteristic Parameter Matching Algorithm for Gravity-Aided Navigation of Underwater Vehicles," IEEE Transactions on Industrial Electronics, vol. 66, no. 2, pp. 1203-1212, 2019.
  23. Shengkai Zhang, Wei Wang, and Tao Jiang, "Wi-Fi-Inertial Indoor Pose Estimation for Microaerial Vehicles," IEEE Transactions on Industrial Electronics, vol. 68, no. 5, pp. 4331-4340, 2021.
  24. Zaixing He, Zhiwei Jiang, Xinyue Zhao, et al., "Sparse Template-Based 6-D Pose Estimation of Metal Parts Using a Monocular Camera," IEEE Transactions on Industrial Electronics, vol. 67, no. 1, pp. 390-401, 2020. [CrossRef]
  25. Yi An, Lei Wang, Rui Ma, et al., "Geometric Properties Estimation from Line Point Clouds Using Gaussian-Weighted Discrete Derivatives," IEEE Transactions on Industrial Electronics, vol. 68, no. 1, pp. 703-714, 2021. [CrossRef]
  26. Jun Yu, Chaoqun Hong, Yong Rui, et al., "Multitask Autoencoder Model for Recovering Human Poses," IEEE Transactions on Industrial Electronics, vol. 65, no. 6, pp. 5060-5068, 2018. [CrossRef]
  27. Jingyu Zhang, Zhen Liu, Yang Gao, et al., "Robust Method for Measuring the Position and Orientation of Drogue Based on Stereo Vision," IEEE Transactions on Industrial Electronics, vol. 68, no. 5, pp. 4298-4308, 2021. [CrossRef]
  28. Tae-jae Lee, Chul-hong Kim, and Dong-il Dan Cho, "A Monocular Vision Sensor-Based Efficient SLAM Method for Indoor Service Robots," IEEE Transactions on Industrial Electronics, vol. 66, no. 1, pp. 318-328, 2019. [CrossRef]
  29. Yingxiang Sun, Jiajia Chen, Chau Yuen, et al., "Indoor Sound Source Localization With Probabilistic Neural Network," IEEE Transactions on Industrial Electronics, vol. 65, no. 1, pp. 6403-6413, 2018. [CrossRef]
  30. Jundong Wu, Jinhua She, Yawu Wang, et al., "Position and Posture Control of Planar Four-Link Underactuated Manipulator Based on Neural Network Model," IEEE Transactions on Industrial Electronics, vol. 67, no. 6, pp. 4721-4728, 2020. [CrossRef]
Figure 1. PnP surface error model.
Figure 3. The relationship between flying height and field angle.
Figure 4. Relationship between the central angle and the positioning error.
Figure 5. Angles, errors, average errors, and variance diagrams for precisions of 1 m, 10 m, and 100 m. (a) Precision of 1 m. (b) Precision of 10 m. (c) Precision of 100 m.
Figure 6. Relationship between flying height and field angle. (a) The central angle is 0.1800 degrees. (b) The central angle is 0.5000 degrees.
Table 1. Relationship between the number of pose-solving points and the calculating time.
Number of points    Average error (m)    Calculating time (s)
10 31.37 0.364
20 28.30 0.381
30 23.99 0.376
40 31.78 0.363
50 41.31 0.378
60 32.51 0.369
70 75.10 0.377
80 28.94 0.366
90 40.06 0.375
100 260.16 0.387
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.