Sensors
  • Article
  • Open Access

3 November 2025

A Dynamic Pose-Testing Technique of Landing Gear Combined Stereo Vision and CAD Digital Model

1 National Key Laboratory of Strength and Structural Integrity, Xi’an 710065, China
2 AVIC Aircraft Strength Research Institute of China, Xi’an 710065, China
3 School of Mechano-Electronic Engineering, Xidian University, Xi’an 710071, China
* Author to whom correspondence should be addressed.
This article belongs to the Section Optical Sensors

Abstract

The landing gear is one of the key components of an aircraft, enduring significant forces during takeoff and landing, and is influenced by various uncertain factors related to its structure. Therefore, conducting strength tests on the landing gear structure to study its ultimate load capacity is of great significance for structural design and analysis. This paper proposes a visual measurement method for the dynamic pose of landing gear that combines stereo vision with a CAD digital model. The method first establishes a measurement reference in the CAD digital model and then uses close-range photogrammetry and binocular stereo vision to unify the coordinate system of the physical landing gear model with the measurement coordinate system of the CAD model. Finally, during the motion of the landing gear, the CAD model and the physical model can be synchronized by tracking a small number of key points, thus obtaining the complete motion state of the landing gear during the test. The experimental results demonstrate that the RMSE of the angle error is less than 0.1°, and the RMSE of the trajectory error is under 0.3 mm. This level of accuracy meets the requirements for pose measurement during the landing gear retraction and extension test. Compared to existing methods, this approach offers greater environmental adaptability, effectively reducing the impact of unfavorable factors such as occlusion during testing. It allows the pose of any point on the landing gear, including its centroid, to be retrieved.

1. Introduction

The aircraft landing gear, as one of the key components of an aircraft, bears huge mechanical loads during takeoff and landing. There is a complex interaction between the uncertain factors during takeoff and landing and the structural characteristics of the landing gear []. Any structural damage may lead to incalculable losses. According to statistical analysis of previous accidents, accidents caused by the landing gear structure account for more than 66% of all aircraft accidents []. This statistic clearly reflects the importance of analyzing the static and dynamic mechanical properties of the landing gear structure. Therefore, studying the ultimate load-bearing capacity of the landing gear structure is of great practical significance for its design and analysis.
Visual measurement technology has been widely applied in the ground and airborne tests of aircraft [,,,,] due to its remarkable advantages such as non-contact nature, high measurement accuracy, and real-time performance [,]. This technology is not only used to verify the effectiveness of the design but also applied in various aspects [,] like fault detection and performance evaluation. For instance, in 2002, the Langley Research Center of the National Aeronautics and Space Administration (NASA) in the United States used cameras arranged in a stereo configuration to track and measure the marker points pasted on the aircraft model. By utilizing the changes in the three-dimensional coordinates of these points, the deformation of the aircraft model under dynamic loads was analyzed []. This innovative method has laid the foundation for subsequent research. During the period from 2006 to 2014, the AIM (Advanced In-flight Measurement Techniques) project carried out in Europe [] successfully developed a series of low-cost and high-efficiency three-dimensional optical real-time deformation measurement tools to meet industrial demands.
From the development history of pose visual measurement, pose measurement methods are often closely tied to the specific features being measured. Point features are the most basic image features for pose measurement, having complete geometric descriptions and well-defined perspective projection constraints. Classical point feature extraction methods identify points at significant changes in color or grayscale in images [,,]. Pose visual measurement based on point features [] requires establishing a matching relationship between these point features and the corresponding features in the 3D CAD model, namely the 2D-3D matching relationship. SoftPOSIT [] is a classic method for directly and synchronously performing 2D-3D matching and pose solving. However, the difficulty of selecting feature points, the large computational cost, and the insufficient robustness in complex environments remain unresolved. Deep learning-based pose measurement methods using point features [,] overcome these shortcomings: they greatly improve robustness, enable point feature extraction under harsh lighting and occlusion, and can substantially reduce the amount of computation. However, the convergence speed during training and the generality of the trained models need further research.
Point features are relatively sensitive to environmental influencing factors such as occlusion and lighting, which affects the robustness and accuracy of pose measurement. Line features are invariant to lighting changes and image noise, and have strong robustness to occlusion. Therefore, pose visual measurement methods based on line features have been widely studied [,]. Line features include straight-line features and curve features presented by object contours. For pose visual measurement methods based on straight-line features, the process can first involve the extraction [,,] and matching [,] of straight-line features, and then the Perspective-n-Line (PnL) algorithm is used to solve the target pose. For pose measurement methods based on edge features, pose parameters are optimized through continuous projection iteration until the optimization goal is achieved where the projection of the object’s 3D CAD model completely coincides with the target edge [,].
Compared with pose measurement methods based on features such as points and lines, region-based methods can utilize more target surface features, offering stronger robustness and higher accuracy. Early region-based pose measurement methods [] mainly employed probabilistic statistical methods to construct descriptions of local regions. In recent years, deep learning [,] has been gradually introduced into region-based pose measurement methods to extract and represent features, further enhancing the performance of region feature-based pose measurement methods. However, since region-based methods require processing all pixels on the target surface, this reduces the processing speed of such methods. Region feature representation methods that improve processing speed and robustness still require further research.
Using only the single features mentioned above for pose measurement results in poor performance and easy failure in scenarios such as occlusion, lighting changes, and high object symmetry. Therefore, pose visual measurement methods based on multi-feature fusion have been studied. Choi et al. [,] proposed an iterative pose estimation method that fuses edge and point features. Feature points are used to estimate the initial pose during the global pose estimation process, and then edge features are used to iteratively optimize the pose during the local pose estimation process. Pauwels et al. [] utilized a binocular vision approach to perform iterative pose estimation by fusing multi-features such as surface point features, color features, and optical flow information of the measured target. With the development of deep learning technology, Hu et al. [] applied deep learning to object pose measurement and designed a two-stream network structure. The segmentation stream and regression stream in the network target the region features and point features of images, respectively. By combining these two features, more reliable pose information is output, which can effectively handle multiple mutually occluded objects with poor texture. Zhong et al. [] proposed a polar coordinate-based local region segmentation method. By using a fusion of region distance features and color features to detect pixels in occluded parts, this method demonstrates good robustness in pose measurement for partially occluded targets.
This paper proposes a visual measurement method for the dynamic pose of the landing gear that combines the CAD model with the physical test. The method is based on close-range photogrammetry and binocular stereo vision, and aligns the coordinate system of the physical landing gear model with that of the CAD digital model. By tracking a small number of key points, the CAD digital model is driven to move synchronously with the physical model, so that the complete motion state of the landing gear during the test is obtained. This method significantly reduces the impact of factors such as occlusion during the test, enabling the real-time and accurate acquisition of the pose of the landing gear under different working conditions, which is of great significance for the research and optimization of the landing gear structure.

2. Methodology

The layout of the landing gear retraction and extension test is shown in Figure 1. The movement space of the landing gear is approximately 2 m × 2 m, and the test frequency is about 0.1 Hz. The measurement process for the dynamic pose of the landing gear, which integrates the CAD digital model with the physical model, is illustrated in Figure 2. Firstly, a measurement coordinate system is established on the CAD model, providing a reference benchmark for subsequent coordinate transformation and data processing. Then, by integrating close-range photogrammetry and binocular stereo vision, the coordinate system of the physical landing gear model is made consistent with the measurement coordinate system of the CAD model. The specific steps are: (1) arranging a certain number of circular non-coded marker points as key points in the observable area of the physical landing gear model; (2) using photogrammetry to reconstruct the 3D coordinates of the key points, and adopting the 3-2-1 coordinate transformation or a feature-based point cloud-to-CAD registration method to transform the key points into the measurement coordinate system; (3) using the binocular cameras to capture images of the landing gear and reconstruct the 3D coordinates of the key points; (4) transforming the binocular vision coordinate system into the coordinate system of the CAD model. Finally, after a load is applied to the physical landing gear model, the binocular stereo vision system tracks and reconstructs the 3D coordinates of the key points in real time (50 Hz) and drives the synchronous movement of the CAD digital model through the key points. The software runs on a laptop equipped with an Intel Core i9-14900HX processor, an RTX 5060 Ti GPU, and Windows 11.
Figure 1. Setup of the Landing Gear Retract and Extend Test.
Figure 2. Flowchart of Real-Time Measurement of Landing Gear Pose.

2.1. Construction of the Measurement Coordinate System Based on CAD Digital Model

The measurement coordinate system serves as the benchmark for determining the pose information. Accurately establishing the measurement coordinate system at the test site has always been a major challenge in visual measurement. This is especially true for irregular measurement objects, for which it is difficult to determine an appropriate measurement reference. In this paper, a measurement coordinate system is established on the CAD digital model. By arranging key points on the surface of the physical landing gear model, the coordinate systems of the digital model and the physical model are registered. In this way, the coordinate system of the physical model is unified with that of the CAD digital model, and the consistency of the measurement coordinate system is achieved.

2.2. Unification of the Coordinate System of the Physical Landing Gear Model and the Measurement Coordinate System

The unification of the coordinate system of the physical landing gear model and the measurement coordinate system established on the CAD digital model is the basis for the correct calculation of the pose. In this paper, the unification of the virtual and real coordinate systems is achieved mainly by arranging several key points (6–10 circular non-coded marker points) on the surface of the physical landing gear model. As shown in Figure 3, only a small number (not less than four) of circular non-coded marker points need to be arranged in a local area of the landing gear surface, so that these marker points can be clearly imaged by the binocular cameras throughout the movement. The marker placement rules are as follows: (1) the markers are non-collinear; (2) the markers are non-coplanar; (3) all markers must be visible throughout the entire motion of the object. These key points serve as reference points to transform the coordinate system of the physical landing gear model into the coordinate system of the CAD digital model (i.e., the measurement coordinate system).
Figure 3. Markers Arranged in Local Areas of the Landing Gear.

2.2.1. Reconstruction of the Three-Dimensional Coordinates of Key Points

To unify the coordinate system of the physical model with the coordinate system of the CAD digital model (i.e., the measurement coordinate system), the close-range photogrammetry system is first used to reconstruct the three-dimensional coordinates of the key points arranged on the surface of the physical landing gear model, and these three-dimensional coordinates are then transformed into the coordinate system of the CAD digital model.
The hardware and software configurations of the close-range photogrammetry system used are shown in Figure 4. The system hardware includes a Nikon D610 digital single-lens reflex camera, circular coded targets, circular non-coded targets, and a scale bar. The system software, which can run on the Windows platform, includes functions such as calculation mode, deformation mode, and comparison mode.
Figure 4. The close-range photogrammetry system.
In the process of close-range photogrammetry reconstruction, it is necessary to arrange some coded and non-coded marker points around and on the surface of the landing gear to facilitate relative orientation. In addition, at least one scale bar needs to be placed in the measurement field of view, as shown in Figure 4b. After reconstructing the three-dimensional coordinates of the key points, the 3-2-1 coordinate transformation method or the best-fitting method is used to transform these coordinates into the measurement coordinate system, denoted as $P_B^i$ ($1 \le i \le N$), where $N$ is the number of key points (see Figure 5).
Figure 5. Transformation of Key Points from Close-Range Photogrammetry to the Measurement Coordinate System.

2.2.2. Transformation of the Coordinate System of Binocular Stereo Vision

The binocular stereo vision system used is shown in Figure 6. The system consists of two Basler acA2440-75um cameras (image resolution 2448 × 2048, pixel size 3.45 μm) with Computar 16 mm lenses mounted on a crossbeam. LED light sources are also installed on the crossbeam to provide the illumination required for image acquisition. During the measurement process, the binocular cameras are aimed at the landing gear, and the measurement field of view covers the movement range of the marker points on the landing gear.
Figure 6. Binocular stereo vision system.
(1)
Calibration of the binocular stereo vision system
Due to the interference of various factors, there will be deviations between the actual positions of image points on the imaging plane and their theoretical positions [,]. The factors interfering with imaging mainly include four types of distortions: radial distortion, tangential distortion, image plane distortion, and internal orientation errors of the camera lens. Therefore, camera calibration [] is required to correct these distortions. In this paper, a cross target is used as the calibration object, and a camera distortion model with ten parameters is adopted to calibrate the binocular cameras.
The radial distortion can be expressed as:

$$\begin{cases} dx_r = x\,(K_1 r^2 + K_2 r^4 + K_3 r^6) \\ dy_r = y\,(K_1 r^2 + K_2 r^4 + K_3 r^6) \end{cases}$$

The tangential distortion can be expressed as:

$$\begin{cases} dx_d = B_1 (r^2 + 2x^2) + 2 B_2 x y \\ dy_d = B_2 (r^2 + 2y^2) + 2 B_1 x y \end{cases}$$

The image plane distortion can be expressed as:

$$\begin{cases} dx_m = E_1 x + E_2 y \\ dy_m = 0 \end{cases}$$

The internal orientation errors of the camera lens can be expressed as:

$$\begin{cases} dx_n = \Delta x_0 + \dfrac{x}{f}\,\Delta f \\ dy_n = \Delta y_0 + \dfrac{y}{f}\,\Delta f \end{cases}$$

where $(x, y)$ are the pixel coordinates, $r = \sqrt{x^2 + y^2}$, $K_1, K_2, K_3$ are the radial distortion coefficients, $B_1, B_2$ are the decentering (tangential) distortion coefficients, $E_1, E_2$ describe the image plane distortion, and $\Delta f, \Delta x_0, \Delta y_0$ are the errors of the internal orientation elements.

Finally, the ten-parameter distortion model can be expressed as

$$\begin{cases} dx = dx_r + dx_d + dx_m + dx_n \\ dy = dy_r + dy_d + dy_m + dy_n \end{cases}$$

That is,

$$\begin{cases} dx = \dfrac{x}{f}\Delta f + \Delta x_0 + K_1 x r^2 + K_2 x r^4 + K_3 x r^6 + B_1(r^2 + 2x^2) + 2 B_2 x y + E_1 x + E_2 y \\ dy = \dfrac{y}{f}\Delta f + \Delta y_0 + K_1 y r^2 + K_2 y r^4 + K_3 y r^6 + 2 B_1 x y + B_2(r^2 + 2y^2) \end{cases}$$
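To make the ten-parameter model concrete, the following sketch evaluates the correction $(dx, dy)$ at a given image point using the equations above. It is a minimal illustration; the function name and the assumption that the coefficients are supplied as plain scalars are ours, not part of the paper's software.

```python
def ten_parameter_distortion(x, y, f, d_f, d_x0, d_y0,
                             K1, K2, K3, B1, B2, E1, E2):
    """Evaluate the ten-parameter distortion correction (dx, dy) at image point (x, y).

    The terms follow the radial, tangential (decentering), image-plane and
    interior-orientation components given in the text."""
    r2 = x ** 2 + y ** 2
    # Radial distortion
    dx_r = x * (K1 * r2 + K2 * r2 ** 2 + K3 * r2 ** 3)
    dy_r = y * (K1 * r2 + K2 * r2 ** 2 + K3 * r2 ** 3)
    # Tangential (decentering) distortion
    dx_d = B1 * (r2 + 2 * x ** 2) + 2 * B2 * x * y
    dy_d = B2 * (r2 + 2 * y ** 2) + 2 * B1 * x * y
    # Image-plane distortion
    dx_m = E1 * x + E2 * y
    dy_m = 0.0
    # Interior-orientation errors
    dx_n = d_x0 + x / f * d_f
    dy_n = d_y0 + y / f * d_f
    return dx_r + dx_d + dx_m + dx_n, dy_r + dy_d + dy_m + dy_n
```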
To calibrate the system, we captured at least eight cross-target images from different poses using the binocular stereo vision system (Figure 7). The intrinsic and extrinsic parameters of the cameras were then derived from these images via target recognition. The final intrinsic parameters for the left and right cameras are summarized in Table 1.
Figure 7. Image of the Cross-Calibration Target Captured by the Stereo Cameras.
Table 1. Calibration results of the internal and external parameters of the binocular cameras.
(2)
Coordinate System Transformation of Binocular Stereo Vision
After calibration is completed, the binocular cameras synchronously capture one frame of the physical landing gear model, and the three-dimensional coordinates $P_A^i$ of the $N$ key points arranged on the landing gear are reconstructed. From the three-dimensional coordinates of the key points in the binocular stereo vision system and their corresponding coordinates $P_B^i$ in the measurement coordinate system (see Section 2.2.1 for details), the transformation $(R, T)$ is calculated to complete the coordinate system transformation.
$$P_B^i = R \, P_A^i + T, \quad 1 \le i \le N$$
The SVD method can be used to calculate $R$ and $T$:

$$\begin{aligned} H &= \sum_{i=1}^{N} (P_A^i - P_A^C)(P_B^i - P_B^C)^T \\ [U, S, V] &= \mathrm{SVD}(H) \\ R &= V U^T \\ T &= -R \, P_A^C + P_B^C \end{aligned}$$

where $P_A^C$ and $P_B^C$ denote the centroids of the point sets $\{P_A^i\}$ and $\{P_B^i\}$, and $U$, $S$, $V$ are the factors of the singular value decomposition.
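A minimal NumPy sketch of this SVD-based estimation of $(R, T)$ from the corresponding point sets is given below. The determinant check that guards against a reflection solution is a standard safeguard we add here; it is not spelled out in the text.

```python
import numpy as np

def rigid_transform_svd(P_A, P_B):
    """Estimate (R, T) such that P_B ~ R @ P_A + T for two Nx3 sets of corresponding points."""
    P_A = np.asarray(P_A, dtype=float)
    P_B = np.asarray(P_B, dtype=float)
    cA, cB = P_A.mean(axis=0), P_B.mean(axis=0)      # centroids P_A^C, P_B^C
    H = (P_A - cA).T @ (P_B - cB)                    # 3x3 cross-covariance matrix
    U, S, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T                                   # R = V U^T
    if np.linalg.det(R) < 0:                         # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    T = cB - R @ cA                                  # T = -R P_A^C + P_B^C
    return R, T
```

With the stereo reconstruction of the key points passed as P_A and their photogrammetry coordinates in the measurement coordinate system as P_B, rigid_transform_svd(P_A, P_B) returns the transformation used in the equation above.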

2.3. Real-Time Calculation of the Dynamic Pose of the Landing Gear

2.3.1. Real-Time Detection and Reconstruction of Key Points

(1)
Real-time detection of key points
The system detects and identifies the marker points placed on the surface of the landing gear and extracts their central coordinates. First, the images captured by the binocular stereo vision system undergo adaptive binarization to separate the background from the targets to be identified. Next, the connected domains of the targets are labeled to obtain the edge information of the marker points. Subsequently, a Zernike-moment-based subpixel edge detection algorithm refines the marker edges with an accuracy of about 0.1 pixel. Finally, ellipse fitting is used to determine the central coordinates of the marker points. As shown in Figure 8, the marker point detection algorithm consists of four steps: image binarization, connected domain labeling, subpixel edge extraction, and ellipse fitting. To enhance detection efficiency, the marker point detection algorithm is implemented with GPU programming on the CUDA platform of an RTX 5070 Ti.
Figure 8. Detection of marker points.
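The sketch below mirrors the pixel-level stages of this pipeline with OpenCV: adaptive binarization, connected-component labeling, contour extraction, and ellipse fitting. The Zernike-moment subpixel refinement and the CUDA implementation are omitted, and the threshold and area limits are illustrative assumptions rather than the values used in the paper's software.

```python
import cv2
import numpy as np

def detect_marker_centers(gray, min_area=30, max_area=5000):
    """Coarse marker detection: adaptive binarization, connected-component labeling,
    contour extraction and ellipse fitting (subpixel refinement omitted)."""
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY, 51, -10)
    num, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    centers = []
    for i in range(1, num):                              # label 0 is the background
        if not (min_area <= stats[i, cv2.CC_STAT_AREA] <= max_area):
            continue
        mask = np.uint8(labels == i)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
        if contours and len(contours[0]) >= 5:           # fitEllipse needs at least 5 points
            (cx, cy), _axes, _angle = cv2.fitEllipse(contours[0])
            centers.append((cx, cy))
    return centers
```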
(2)
Real-time Reconstruction of 3D Coordinates of Key Points
The binocular stereo vision model is shown in Figure 9. The coordinates of object point $P$ in the world coordinate system are $(x_w, y_w, z_w)$, and the coordinates of its image points $p_l$ and $p_r$ in the left and right cameras are $(u_l, v_l)$ and $(u_r, v_r)$, respectively.
Figure 9. Schematic diagram of the binocular stereo vision model.
Denoting the products of the intrinsic and extrinsic parameter matrices of the left and right cameras by the $3 \times 4$ projection matrices $[a_{ij}]$ and $[b_{ij}]$, respectively, the following relationships exist:

$$z_c^l \begin{bmatrix} u_l \\ v_l \\ 1 \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34} \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}, \qquad z_c^r \begin{bmatrix} u_r \\ v_r \\ 1 \end{bmatrix} = \begin{bmatrix} b_{11} & b_{12} & b_{13} & b_{14} \\ b_{21} & b_{22} & b_{23} & b_{24} \\ b_{31} & b_{32} & b_{33} & b_{34} \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}$$
Solving by the least squares method, the spatial coordinates of the object point are obtained as:

$$P = (A^T A)^{-1} A^T B$$

Here,

$$A = \begin{bmatrix} a_{11} - a_{31} u_l & a_{12} - a_{32} u_l & a_{13} - a_{33} u_l \\ a_{21} - a_{31} v_l & a_{22} - a_{32} v_l & a_{23} - a_{33} v_l \\ b_{11} - b_{31} u_r & b_{12} - b_{32} u_r & b_{13} - b_{33} u_r \\ b_{21} - b_{31} v_r & b_{22} - b_{32} v_r & b_{23} - b_{33} v_r \end{bmatrix}, \qquad B = \begin{bmatrix} u_l a_{34} - a_{14} \\ v_l a_{34} - a_{24} \\ u_r b_{34} - b_{14} \\ v_r b_{34} - b_{24} \end{bmatrix}$$
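A minimal NumPy sketch of this linear triangulation is shown below; Ml and Mr stand for the 3×4 projection matrices $[a_{ij}]$ and $[b_{ij}]$, and solving with a least-squares routine instead of forming $(A^T A)^{-1} A^T B$ explicitly is a numerical convenience assumed here.

```python
import numpy as np

def triangulate_point(Ml, Mr, uv_l, uv_r):
    """Least-squares triangulation of one point from the two 3x4 projection matrices
    and the matched image coordinates (u_l, v_l), (u_r, v_r)."""
    ul, vl = uv_l
    ur, vr = uv_r
    A = np.vstack([
        Ml[0, :3] - ul * Ml[2, :3],
        Ml[1, :3] - vl * Ml[2, :3],
        Mr[0, :3] - ur * Mr[2, :3],
        Mr[1, :3] - vr * Mr[2, :3],
    ])
    B = np.array([
        ul * Ml[2, 3] - Ml[0, 3],
        vl * Ml[2, 3] - Ml[1, 3],
        ur * Mr[2, 3] - Mr[0, 3],
        vr * Mr[2, 3] - Mr[1, 3],
    ])
    # Equivalent to P = (A^T A)^{-1} A^T B
    P, *_ = np.linalg.lstsq(A, B, rcond=None)
    return P                                        # (x_w, y_w, z_w)
```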

2.3.2. Dynamic Pose Calculation

Assume that the three-dimensional coordinates of the i-th key point, reconstructed from the 0th frame image captured by the binocular cameras before the landing gear begins to move, are $P_A^{0,i}$. During the movement, the coordinates of the i-th key point reconstructed from the j-th frame image are $P_A^{j,i}$. The coordinate correspondence of the key points between frames can then be expressed as:

$$P_A^{0,i} = R_j \, P_A^{j,i} + T_j$$
The rotation matrix $R_j$ and translation vector $T_j$ can be computed using the SVD method described in Section 2.2.2:

$$R_j = \begin{bmatrix} r_{j11} & r_{j12} & r_{j13} \\ r_{j21} & r_{j22} & r_{j23} \\ r_{j31} & r_{j32} & r_{j33} \end{bmatrix}, \qquad T_j = \begin{bmatrix} t_{jx} \\ t_{jy} \\ t_{jz} \end{bmatrix}$$
By performing Euler angle decomposition on the rotation matrix $R_j$, the attitude angles of the landing gear in the j-th frame image can be obtained:

$$\begin{cases} \alpha_j = \operatorname{arctan2}\left(r_{j32}, r_{j33}\right) \\ \beta_j = -\arcsin\left(r_{j31}\right) \\ \gamma_j = \operatorname{arctan2}\left(r_{j21}, r_{j11}\right) \end{cases}$$
The position of any point $Q_A^{j,i}$ on the landing gear in the j-th frame image, not limited to the key points, can be obtained from:

$$Q_A^{j,i} = R_j \, Q_A^{0,i} + T_j$$

Here, $Q_A^{0,i}$ represents the initial coordinates of the point in the CAD model. Since the coordinates of any point on the CAD digital model can be obtained, the pose of any point on the landing gear can be calculated according to Equation (14).
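The sketch below turns a frame-j rotation matrix into the three attitude angles and maps an arbitrary CAD point into frame j, following the two equations above. The Z-Y-X (yaw-pitch-roll) convention, including the sign of the arcsin term, is the standard decomposition assumed here.

```python
import numpy as np

def euler_angles_deg(Rj):
    """Attitude angles (alpha, beta, gamma) in degrees from the rotation matrix R_j,
    using the Z-Y-X Euler convention."""
    alpha = np.arctan2(Rj[2, 1], Rj[2, 2])     # roll:  atan2(r_j32, r_j33)
    beta = -np.arcsin(Rj[2, 0])                # pitch: -asin(r_j31)
    gamma = np.arctan2(Rj[1, 0], Rj[0, 0])     # yaw:   atan2(r_j21, r_j11)
    return np.degrees([alpha, beta, gamma])

def point_in_frame_j(Q0, Rj, Tj):
    """Position of an arbitrary CAD point Q_A^{0,i} in frame j: Q_A^{j,i} = R_j Q_A^{0,i} + T_j."""
    return Rj @ np.asarray(Q0, dtype=float) + np.asarray(Tj, dtype=float)
```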

2.3.3. Covariance Estimation for Pose Data

The problem of propagating uncertainty from 3D key point correspondences to the transformation parameters (rotation R and translation T)—often estimated via Singular Value Decomposition (SVD)—is frequently addressed in point cloud alignment. This is central to algorithms like Iterative Closest Point (ICP) and feature-based registration [], which aim to find the relative pose that minimizes the sum of squared distances between reference and sensed points. The covariance of the estimated transformation can be derived as follows:
$$C = \sum_{i=1}^{n} \left\| P_B^i - R \, P_A^i - T \right\|^2$$

$$\operatorname{cov}(l) = \left( \frac{\partial^2 C}{\partial l^2} \right)^{-1} \frac{\partial^2 C}{\partial m \, \partial l} \, \operatorname{cov}(m) \, \left( \frac{\partial^2 C}{\partial m \, \partial l} \right)^{T} \left( \frac{\partial^2 C}{\partial l^2} \right)^{-1}$$

where $m$ represents the $n$ sets of correspondences $(P_A^i, P_B^i)$, $l$ is the relative pose vector $l = [\alpha, \beta, \gamma, t_x, t_y, t_z]^T$, $\operatorname{cov}(l)$ is the covariance of the output pose, and $\operatorname{cov}(m)$ is the covariance of the input point vector. A detailed derivation is provided in Reference []. The Monte Carlo (MC) simulation results [] demonstrate that when the input correspondence noise (measured as the mean squared error of distances) is below $10^{-1}$, the resulting covariance of the output pose can reach the order of $10^{-3}$ under the given data scale.
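As a complement to the closed-form expression, the pose covariance can also be checked by a simple Monte Carlo simulation: noisy copies of the input correspondences are generated, the pose is re-fitted with the SVD method of Section 2.2.2, and the sample covariance of the pose vector is accumulated. The sketch below is a self-contained illustration; the noise model, trial count, and function names are our assumptions.

```python
import numpy as np

def _estimate_pose(P_A, P_B):
    """SVD pose fit (Section 2.2.2) followed by Z-Y-X Euler decomposition;
    returns l = [alpha, beta, gamma, tx, ty, tz] with angles in radians."""
    cA, cB = P_A.mean(axis=0), P_B.mean(axis=0)
    U, _, Vt = np.linalg.svd((P_A - cA).T @ (P_B - cB))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                    # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    T = cB - R @ cA
    angles = [np.arctan2(R[2, 1], R[2, 2]), -np.arcsin(R[2, 0]), np.arctan2(R[1, 0], R[0, 0])]
    return np.concatenate([angles, T])

def monte_carlo_pose_covariance(P_A, P_B, sigma, n_trials=1000, seed=0):
    """6x6 sample covariance of the pose vector l obtained by adding Gaussian noise of
    standard deviation sigma to the measured points P_A and re-fitting the pose."""
    rng = np.random.default_rng(seed)
    P_A, P_B = np.asarray(P_A, float), np.asarray(P_B, float)
    samples = [_estimate_pose(P_A + rng.normal(0.0, sigma, P_A.shape), P_B)
               for _ in range(n_trials)]
    return np.cov(np.array(samples), rowvar=False)
```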

3. Experimental Results and Discussion

The measurement accuracy of the landing gear’s dynamic pose is primarily determined by the reconstruction accuracy of close-range photogrammetry and the measurement accuracy of binocular stereo vision. Therefore, under laboratory conditions, experimental verifications were respectively conducted on the reconstruction accuracy of key points in close-range photogrammetry, the 3D reconstruction accuracy of binocular stereo vision, and the calculation accuracy of attitude angles.

3.1. Evaluation of the Reconstruction Accuracy of Key Points in Close-Range Photogrammetry

In order to verify the reconstruction accuracy of close-range photogrammetry, as shown in Figure 10, two calibrated scale bars are placed in the field of view. One of the scale bars is used as a reference for 3D reconstruction, and then the length of the other scale bar is measured and compared with its calibrated length, so as to obtain the measurement error and the RMSE value. The results are shown in Table 2. The reconstruction accuracy of close-range photogrammetry is better than 0.03 mm.
Figure 10. Verification of the accuracy of the close-range photogrammetry system.
Table 2. Reconstruction Results of the Scale in Close-Range Photogrammetry.

3.2. Evaluation of the Reconstruction Accuracy of Key Points in Binocular Stereo Vision

The calibrated bar shown in Figure 11 is utilized for verification. This calibrated bar consists of 13 circular non-coded marker points, which serve as key reference points for accuracy assessment. The verification process involves calculating the difference between the distances of marker points reconstructed by the binocular stereo vision system and the corresponding calibrated values. These differences are then statistically analyzed to compute the Root Mean Square Error (RMSE), a standard metric for evaluating the reconstruction accuracy.
Figure 11. The calibrated bar used to verify the accuracy of the binocular stereo vision system.
The results of this verification process are summarized in Table 3, where it is evident that the reconstruction error of the binocular stereo vision system is less than 0.08 mm. This indicates that the system is capable of achieving highly precise distance measurements, confirming its reliability and accuracy in practical applications.
Table 3. Measured distances of marker points on the calibrated bar using the binocular stereo vision system.

3.3. Evaluation of the Calculation Accuracy of Attitude Angles

In the laboratory environment, the binocular stereo vision system is used to repeatedly measure the included angle of the V-block as shown in Figure 12 to verify the measurement accuracy of the attitude angle. The included angle of the marble V-block is 90° ± 0.01°. Circular non-coded marker points are pasted on both sides of the V-shaped surface. Then, the binocular stereo vision is used to measure the three-dimensional coordinates of these marker points, and these points are used to fit two planes that are close to vertical. Subsequently, the included angle between the two fitted planes is calculated, and the difference between this angle and the reference angle of 90° is calculated to obtain the measurement error of the attitude angle.
Figure 12. Marble V-block.
The angular error mainly comes from two aspects: (1) the reconstruction accuracy of the binocular stereo vision; (2) the change in the included angle caused by pasting the marker points. The measurement results are shown in Table 4. The RMSE of the measured angle is 0.065°, which indicates that the measurement accuracy of the attitude angle is better than 0.1°.
Table 4. Angle Measurement Results of the V-Block.
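A minimal sketch of this plane-fit-and-angle computation is given below, assuming the marker points on each face of the V-block have already been reconstructed in 3D by the stereo system; the SVD-based plane fit and the function names are illustrative choices.

```python
import numpy as np

def fit_plane_normal(points):
    """Unit normal of the least-squares plane through an Nx3 point set
    (the right singular vector of the centered points with the smallest singular value)."""
    pts = np.asarray(points, dtype=float)
    _, _, Vt = np.linalg.svd(pts - pts.mean(axis=0))
    return Vt[-1]

def v_block_angle_deg(points_face1, points_face2):
    """Included angle (degrees) between the planes fitted to the marker points on the two
    V-block faces. The SVD normals have arbitrary sign, so the result is folded into
    [0, 90] degrees; the magnitude of the deviation from the 90 degree reference,
    which is what the RMSE in Table 4 uses, is unaffected."""
    n1 = fit_plane_normal(points_face1)
    n2 = fit_plane_normal(points_face2)
    cosang = abs(float(np.dot(n1, n2)))
    return float(np.degrees(np.arccos(np.clip(cosang, 0.0, 1.0))))
```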

3.4. Comparison of Dynamic Pose Measurement Accuracy Using a Laser Total Station

To further evaluate the dynamic pose measurement accuracy, a laser total station (Leica TZ05) is employed as a reference tool. As shown in Figure 13, several crosshair markers, which are detectable by both the laser total station and the binocular stereo vision system, are strategically placed on the surface of the object. These markers serve as key reference points to ensure accurate and consistent measurements across both systems.
Figure 13. Setup of dynamic pose measurement using a laser total station.
During the object’s motion, the binocular stereo vision system continuously measures the pose in real time, as depicted in Figure 14. The dynamic measurement process allows for the tracking of the object’s movement with high precision, enabling real-time updates of its position and orientation. When the object is stationary, both the laser total station and the binocular stereo vision system measure the pose independently. The results from both systems are then compared to assess the accuracy and reliability of the binocular stereo vision system.
Figure 14. The dynamic pose measured by the binocular stereo vision system. In the attitude plot, the red, green, and blue lines correspond to roll, pitch, and yaw, respectively. In the trajectory plot, the red, green, blue, and purple lines represent the deformations along the x-axis, y-axis, z-axis, and their combination, respectively.
Table 5 presents the attitude angle of a monitoring point on the object, as measured by both systems. It can be observed that the RMSE of the angle error is less than 0.1°. Table 6 displays the trajectory, where the RMSE is shown to be under 0.3 mm. This comparison offers valuable insights into the performance of the stereo vision system across various real-world scenarios, confirming its reliability and robustness for applications requiring precise dynamic pose measurement.
Table 5. Comparison of the Attitude Angle of a Monitoring Point on the Object.
Table 6. Comparison of the Trajectory of a Monitoring Point on the Object.

3.5. Experimental Measurement of the Dynamic Pose of the Landing Gear

A dynamic pose measurement experiment was carried out on a certain type of landing gear. The acquisition and reconstruction frame rate of the binocular stereo vision is 50 Hz, and the size of the measurement field of view is 1.6 m × 1.1 m. Figure 15 shows the three-dimensional poses during the synchronous movement of the virtual and real models of the landing gear at different moments. Figure 16 shows the movement trajectory of a certain monitoring point on the landing gear, and Figure 17 shows the change in the attitude angle of this point over time. Table 7 lists the position and attitude angle values of this monitoring point at different moments.
Figure 15. Virtual and physical poses of a certain type of landing gear at different times.
Figure 16. Motion Trajectory of a Certain Model of Landing Gear.
Figure 17. Pose Curve of a Certain Model of Landing Gear.
Table 7. Pose of a Monitoring Point on the Landing Gear.
During the structural strength test of the landing gear, the environmental conditions are generally complex. For example, factors like occlusion may be present, which adds to the uncertainty of the measurement process. The dynamic pose measurement method that combines virtual and real elements proposed in this paper only needs to observe a local area. By leveraging a small number of key points, it enables the synchronous movement of the CAD model and the physical model. In this way, the overall motion state and pose information can be obtained.
The results demonstrate that the proposed framework, which integrates close-range photogrammetry with binocular stereo vision, effectively addresses the critical challenge of occlusion in dynamic pose measurement. Unlike conventional methods that require dense optical markers and are susceptible to tracking failure, our approach establishes a stable correspondence between the physical landing gear and its CAD model by tracking a sparse set of key points. This strategy allows the CAD model to be driven synchronously with the physical motion, thereby enabling the reconstruction of the complete motion state even when partial occlusions occur. The achieved accuracy in real-time pose estimation under different working conditions confirms the robustness of this model-based approach.
Unlike the model-free, direct optical tracking methods of [,] that are susceptible to marker occlusion and data noise, our approach maintains stable pose estimation under these conditions. This capability for consistent performance in non-ideal scenarios makes it a more reliable and robust solution for the demanding environments common in landing gear research and development.

4. Conclusions

This paper proposes a visual measurement method for the dynamic pose of the landing gear that combines virtual and real elements. The method first establishes a measurement coordinate system on the virtual CAD digital model, which reduces the difficulty of establishing a measurement reference coordinate system on site. By combining close-range photogrammetry and binocular stereo vision, the coordinate system of the physical landing gear model is unified with the coordinate system of the virtual CAD digital model (i.e., the measurement coordinate system). By tracking only a small number of key points, the virtual CAD model is driven to move synchronously with the physical model, so that the complete motion state of the landing gear during the test is obtained.
Compared with existing methods, this method has the following advantages: (1) It has stronger environmental adaptability, effectively reducing the impact of adverse factors such as occlusion during the test process; (2) The strategy of combining virtual and real elements reduces the requirements for measurement equipment, improving the cost-effectiveness and efficiency of measurement; (3) Through abundant measurement data, the pose information of any point on the landing gear, including the center of mass, can be obtained. These advantages endow this method with higher reliability and flexibility in practical applications.
The current method relies on a sparse set of 6–10 key points. A primary limitation is its potential sensitivity to partial occlusions of these markers, which could interrupt tracking and pose estimation. To address this, a key direction for future research is the development of an optimal marker placement strategy that maximizes spatial distribution and ensures visibility throughout the expected motion path. Furthermore, we plan to investigate algorithmic improvements, such as incorporating redundant key points and developing robust pose estimation algorithms that can recover a stable solution even when a subset of markers is temporarily lost.

Author Contributions

Methodology, W.Z. and B.S.; software and experiments, B.S. and W.Z.; experimental validation, Y.L. and X.C.; writing—original draft preparation, W.Z. and X.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Open Fund of National Key Laboratory of Strength and Structural Integrity (LSSIKFJJ202402005).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The data presented in this study are available from the corresponding authors upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Hong, N. Fatigue life prediction of aircraft landing gear based on multi-body system simulation and local strain method. Fatigue 2000. In Proceedings of the Fourth International Conference of the Engineering Integrity Society, Cambridge, UK, 10–12 April 2000; pp. 407–414. [Google Scholar]
  2. Ossa, E.A. Failure Analysis of a Civil Aircraft Landing Gear. Eng. Fail. Anal. 2006, 13, 1177–1183. [Google Scholar] [CrossRef]
  3. Wu, Z.J.; Guo, W.B.; Li, Y.Y.; Liu, Y.; Zhang, Q. High-speed and high-efficiency three-dimensional shape measurement based on Gray-coded light. Photonics Res. 2020, 8, 819–829. [Google Scholar] [CrossRef]
  4. Liu, F.L.; Wei, Z.Z.; Zhang, G.J. An off-board vision system for relative attitude measurement of aircraft. IEEE Trans. Ind. Electron. 2022, 69, 4225–4233. [Google Scholar] [CrossRef]
  5. Anthonsen, T.N.; Henrik, J.J.; Christian, S.; Stahl, A. Motion trajectory estimation of salmon using stereo vision. IFAC PapersOnLine 2022, 55, 363–368. [Google Scholar]
  6. Gao, Z.R.; Su, Y.; Zhang, Q.C. Single-event-camera-based 3D trajectory measurement method for high-speed moving targets. Chin. Opt. Lett. 2022, 20, 061101. [Google Scholar] [CrossRef]
  7. Fan, R.Z.; Xu, T.B.; Wei, Z.Z. Estimating 6D aircraft pose from keypoints and structures. Remote Sens. 2021, 13, 663. [Google Scholar] [CrossRef]
  8. Thomas, D.F.; Ulrich, S. Real-Time Stereovision-Based Spacecraft Pose Determination Using Convolutional Neural Networks. J. Spacecr. Rocket. 2025, 62, 269–279. [Google Scholar] [CrossRef]
  9. Yin, L. A stereo vision-based real-time 3D hand pose estimation system combining nonlinear optimization. In Proceedings of the Seventh International Conference on Computer Graphics and Virtuality (ICCGV 2024), Hangzhou, China, 23–25 February 2024; Volume 13158, p. 1315808. [Google Scholar]
  10. Neogi, N.; Mohanta, D.K.; Dutaa, P.K. Defect detection of steel surfaces with global adaptive percentile thresholding of gradient image. J. Inst. Eng. 2017, 98, 557–565. [Google Scholar] [CrossRef]
  11. Sun, X.H.; Gu, J.N.; Tang, S.; Li, J. Research progress of visual inspection technology of steel products—A review. Appl. Sci. 2018, 8, 2195–2220. [Google Scholar] [CrossRef]
  12. Burner, A.W.; Lokos, W.A.; Barrows, D.A. In-flight aeroelastic measurement technique development. In Proceedings of the Optical Science and Technology, SPIE’s 48th Annual Meeting, San Diego, CA, USA, 3–8 August 2003; International Society for Optics and Photonics: Bellingham, WA, USA, 2003; pp. 186–199. [Google Scholar]
  13. Boden, F.; Lawson, N.; Henk, W.J.; Kompenhans, J. Advanced In-Flight Measurement Techniques; Springer: Berlin/Heidelberg, Germany, 2013. [Google Scholar]
  14. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
  15. Bay, H.; Ess, A.; Tuytelaars, T.; Van Gool, L. Speeded-up robust features (SURF). Comput. Vis. Image Underst. 2008, 110, 346–359. [Google Scholar] [CrossRef]
  16. Rublee, E.; Rabaud, V.; Konolige, K.; Bradski, G. ORB: An efficient alternative to SIFT or SURF. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; IEEE Press: New York, NY, USA, 2012; pp. 2564–2571. [Google Scholar]
  17. Zheng, Y.Q.; Kuang, Y.B.; Sugimoto, S.; Åström, K.; Okutomi, M. Revisiting the PnP problem: A fast, general and optimal solution. In Proceedings of the 2013 IEEE International Conference on Computer Vision, Sydney, NSW, Australia, 1–8 December 2013; IEEE Press: New York, NY, USA, 2014; pp. 2344–2351. [Google Scholar]
  18. David, P.; DeMenthon, D.; Duraiswami, R.; Samet, H. SoftPOSIT: Simultaneous pose and correspondence determination. Int. J. Comput. Vis. 2004, 59, 259–284. [Google Scholar] [CrossRef]
  19. Li, Z.G.; Wang, G.; Ji, X.Y. CDPN: Coordinates-based disentangled pose network for real-time RGB-based 6-DoF object pose estimation. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; IEEE Press: New York, NY, USA, 2020; pp. 7677–7686. [Google Scholar]
  20. Wang, G.; Manhardt, F.; Tombari, F.; Ji, X. GDR-net: Geometry-guided direct regression network for monocular 6D object pose estimation. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; IEEE Press: New York, NY, USA, 2021; pp. 16606–16616. [Google Scholar]
  21. Wang, Z.H.; Wu, F.C.; Hu, Z.Y. MSLD: A robust descriptor for line matching. Pattern Recognit. 2009, 42, 941–953. [Google Scholar] [CrossRef]
  22. Zhang, L.L.; Koch, R. Line matching using appearance similarities and geometric constraints. In Proceedings of the Joint DAGM (German Association for Pattern Recognition) and OAGM Symposium, Graz, Austria, 28–31 August 2012; Lecture Notes in Computer Science. Pinz, A., Pock, T., Bischof, H., Leberl, F., Eds.; Springer: Heidelberg, Germany, 2012; Volume 7476, pp. 236–245. [Google Scholar]
  23. Duda, R.O.; Hart, P.E. Use of the Hough transformation to detect lines and curves in pictures. Commun. ACM 1972, 15, 11–15. [Google Scholar] [CrossRef]
  24. Gioi, R.G.; Jakubowicz, J.; Morel, J.M.; Randall, G. LSD: A fast line segment detector with a false detection control. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 722–732. [Google Scholar] [CrossRef] [PubMed]
  25. Akinlar, C.; Topal, C. EDLines: A real-time line segment detector with a false detection control. Pattern Recognit. Lett. 2011, 32, 1633–1642. [Google Scholar] [CrossRef]
  26. Teulière, C.; Marchand, E.; Eck, L. Using multiple hypothesis in model-based tracking. In Proceedings of the 2010 IEEE International Conference on Robotics and Automation, Anchorage, AK, USA, 3–7 May 2010; IEEE Press: New York, NY, USA, 2010; pp. 4559–4565. [Google Scholar]
  27. Wang, B.; Zhong, F.; Qin, X.Y. Robust edge-based 3D object tracking with direction-based pose validation. Multimed. Tools Appl. 2019, 78, 12307–12331. [Google Scholar] [CrossRef]
  28. Schmaltz, C.; Rosenhahn, B.; Brox, T.; Cremers, D.; Weickert, J.; Wietzke, L.; Sommer, G. Region-based pose tracking. In Proceedings of the Iberian Conference on Pattern Recognition and Image Analysis, Girona, Spain, 6–8 June 2007; Martí, J., Benedí, J.M., Mendonça, A.M., Serrat, J., Eds.; Lecture Notes in Computer Science. Springer: Heidelberg, Germany, 2007; Volume 4478, pp. 56–63. [Google Scholar]
  29. Zhong, L.S.; Zhang, Y.; Zhao, H.; Chang, A.; Xiang, W.; Zhang, S.; Zhang, L. Seeing through the occluders: Robust monocular 6-DOF object pose tracking via model-guided video object segmentation. IEEE Robot. Autom. Lett. 2020, 5, 5159–5166. [Google Scholar] [CrossRef]
  30. Hodaň, T.; Baráth, D.; Matas, J. EPOS: Estimating 6D pose of objects with symmetries. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; IEEE Press: New York, NY, USA, 2020; pp. 11700–11709. [Google Scholar]
  31. Choi, C.; Christensen, H.I. Real-time 3D model-based tracking using edge and keypoint features for robotic manipulation. In Proceedings of the 2010 IEEE International Conference on Robotics and Automation, Anchorage, AK, USA, 3–7 May 2010; IEEE Press: New York, NY, USA, 2010; pp. 4048–4055. [Google Scholar]
  32. Choi, C.; Christensen, H.I. Robust 3D visual tracking using particle filtering on the SE(3) group. In Proceedings of the 2011 IEEE International Conference on Robotics and Automation, Shanghai, China, 9–13 May 2011; IEEE Press: New York, NY, USA, 2011; pp. 4384–4390. [Google Scholar]
  33. Pauwels, K.; Rubio, L.; Díaz, J.; Ros, E. Real-time model based rigid object pose estimation and tracking combining dense and sparse visual cues. In Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition, Portland, OR, USA, 23–28 June 2013; IEEE Press: New York, NY, USA, 2013; pp. 2347–2354. [Google Scholar]
  34. Hu, Y.L.; Hugonot, J.; Fua, P.; Salzmann, M. Segmentation-driven 6D object pose estimation. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; IEEE Press: New York, NY, USA, 2019; pp. 3385–3394. [Google Scholar]
  35. Zhong, L.S.; Zhao, X.L.; Zhang, Y.; Zhang, S.; Zhang, L. Occlusion-aware region-based 3D pose tracking of objects with temporally consistent polar-based local partitioning. IEEE Trans. Image Process. 2020, 29, 5065–5078. [Google Scholar] [CrossRef]
  36. Yu, J.; Zhang, Z.; Sun, H.; Xia, Z.; Wen, H. Reevaluating the Underlying Radial Symmetry Assumption of Camera Distortion. IEEE Trans. Instrum. Meas. 2024, 73, 1–10. [Google Scholar] [CrossRef]
  37. Chuang, J.-H.; Chen, H.-Y. Alleviating Radial Distortion Effect for Accurate, Iterative Camera Calibration Using Principal Lines. IEEE Trans. Instrum. Meas. 2024, 73, 1–11. [Google Scholar] [CrossRef]
  38. Liu, Q.; Dong, M.; Sun, P.; Yan, B.; Wang, J.; Zhu, L. All-parameter calibration method of the on-orbit multi-view dynamic photogrammetry system. Opt. Express 2023, 31, 11471–11489. [Google Scholar] [CrossRef] [PubMed]
  39. Yuan, H.R.; Taylor, C.N.; Nykl, S.L. Accurate covariance estimation for pose data from iterative closest point algorithm. Navigation 2023, 70, 19. [Google Scholar] [CrossRef]
  40. Manoj, P.S.; Bingbing, L.; Rui, Y.; Lin, W. Closed-form Estimate of 3D ICP Covariance. In Proceedings of the IAPR International Conference on Machine Vision Applications, Tokyo, Japan, 18–22 May 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 526–529. [Google Scholar]
