Article

Relative Pose Determination of Uncooperative Spacecraft Based on Circle Feature

Research Center of Satellite Technology, Harbin Institute of Technology, Harbin 150080, China
* Author to whom correspondence should be addressed.
Sensors 2021, 21(24), 8495; https://doi.org/10.3390/s21248495
Submission received: 30 November 2021 / Revised: 14 December 2021 / Accepted: 15 December 2021 / Published: 20 December 2021
(This article belongs to the Special Issue Instrument and Measurement Based on Sensing Technology in China)

Abstract

This paper investigates the problem of spacecraft relative navigation with respect to an unknown target during close-proximity operations in an on-orbit service system. The servicing spacecraft is equipped with a Time-of-Flight (ToF) camera for object recognition and feature detection. A fast and robust relative navigation strategy for the acquisition phase is presented that requires no extra information about the target, using only its natural circle features. The architecture of the proposed relative navigation strategy consists of three components. First, a point cloud segmentation method based on an auxiliary gray image is developed for fast extraction of the circle feature point cloud of the target. Second, a new parameter fitting method for circle features is proposed, in which the circle feature is computed with two different geometric models and the results are fused. Finally, a specific definition of the coordinate frame system is introduced to solve the relative pose with respect to the uncooperative target. To validate the efficiency of the segmentation, an experimental test is conducted on real-time image data acquired by the ToF camera; the total processing time is reduced by 94%. In addition, numerical simulations are carried out to evaluate the proposed navigation algorithm, which shows good robustness under different levels of noise.

1. Introduction

On-orbit servicing of failed spacecraft has received wide attention and study since the 1960s. In order to extend spacecraft lifetime in space, the USA, Europe, Japan, Canada, Russia, China, and other countries and regions have proposed many approaches to on-orbit service, such as replacement of device modules, fuel injection, and on-orbit assembly and maintenance [1,2,3,4,5]. Most of these methods rely on spacecraft rendezvous and docking (R&D) technology. High-precision relative measurement is a prerequisite for the R&D process, especially during close-proximity operations.
According to the measured object, spacecraft relative measurement can be divided into cooperative and uncooperative types. Compared with the relatively mature cooperative pose determination techniques [6], relative measurement of uncooperative targets without special recognition devices is more challenging [7]. Electro-optical sensors used for uncooperative relative measurement include passive sensors operating in the visible band, such as monocular and stereo vision systems, and active sensors operating in the infrared band, such as LIDAR [8]. Passive sensors are easily affected by lighting conditions, which makes it difficult to guarantee image quality in dynamic scenes, whereas active sensors adapt to more application scenarios in space. The Time-of-Flight (ToF) camera is a specific type of LIDAR that uses either pulsed or continuous-wave modulation to measure the time of flight with a detector array, and it can simultaneously provide intensity data and range information for every pixel at a high frame rate [9]. With low weight, low power consumption, a compact structure, and robustness to illumination changes, and with many LIDAR companies pursuing miniaturization, lower cost, and higher performance, ToF cameras have developed rapidly in the civil field and have received extensive attention in the aerospace field [10,11,12].
For uncooperative targets, the general framework of pose determination is divided into two steps: acquisition and tracking. In the acquisition step, an initial pose is solved quickly from the first dataset without a priori pose information [13]. This pose result may contain residual error, but too much error will cause the tracking step to fail. In this paper, the acquisition process is investigated. Although the acquired pose is not highly accurate, it enables fast convergence in the tracking step.
Feature-based methods are most commonly used in the acquisition process [14,15,16]. Natural features on uncooperative targets are detected by sensors, and the corresponding pose can be obtained by matching them against a benchmark such as a model [11,17]. Natural circle features such as the docking ring and nozzle are common attributes of most spacecraft, are more stable than point and line features, and have been used as reference features in many missions [6,18,19]. In this paper, the pose estimation requires neither the whole model nor a specific local model; the only requirement is the presence of circle features on the spacecraft. However, beyond blurring problems [20], actual scenes still suffer from shadows, overexposure, and other issues, and circle feature detection is often affected by the lighting conditions, the geometry of the circle feature, and the shooting perspective, resulting in poor circle detection in the images. Several studies have offered solutions to these problems from different perspectives and for different scenes [21,22,23].
In the study of circle features in this paper, the circle images are easily shadowed by the geometry of the circle features themselves. In addition, in tests of two ellipse detection algorithms [24,25], multiple independent circle feature structures are often merged into the same ellipse, as shown in Figure 1 (i.e., one ellipse detection result contains multiple structures of the actual target). Other algorithms sometimes suffer the same issue [23]. If circle feature extraction relies only on ellipse detection, the results obtained during actual target motion often do not yield a reliable pose.
For the segmentation of point clouds, some specialized segmentation algorithms have been studied on complete models [26,27]. However, the point cloud at a given moment (during the acquisition process) is the result of a single viewing angle of the target spacecraft. Due to structural occlusion, the point cloud will be incomplete, and the circle feature may also be incomplete, which makes circle feature segmentation more difficult than for simple geometric features. Moreover, the circle feature is local, and direct segmentation (or direct registration) of the point cloud results in a large processing scale and low computational efficiency [11] (as shown in Figure 2).
Therefore, image-assisted point cloud segmentation methods are considered [28]. The gray image and the depth image of the ToF camera share one optical imaging system and have the same camera parameters. Gray and depth images are associated pixel by pixel in one-to-one correspondence, which avoids the complex registration required between two different sensors and data types. In addition, the ToF camera maintains constant range-measurement accuracy as the target distance varies. In this paper, we investigate the above problems using a ToF camera and propose a fast segmentation method that uses the gray image to assist point cloud segmentation, exploiting the characteristics of the ToF camera together with several techniques. This solves the problems of large segmentation scale and structure occlusion caused by direct segmentation of point clouds, as well as the ellipse detection errors caused by shadows and multiple circles in gray images.
After the circle features are extracted, the irregular circle feature point cloud is divided into two general groups to satisfy the different circle feature requirements in different scenes, and the parameters are fitted. Finally, the relative pose is solved using the circle feature parameters. Because the relative pose solution model uses circle features, a special coordinate system is built to simplify the pose calculation. A large number of experiments are conducted to verify the effectiveness of each part of the algorithm: the circle feature extraction is verified with actual captured data, and the accuracy of the whole pose-solution framework is evaluated by numerical simulation.
The paper is organized as follows: the architecture of the proposed method and the basic calculation model are presented in Section 2. The main parts of the relative pose determination algorithm are described in Section 3 and Section 4. Section 5 presents the experimental results. Finally, Section 6 concludes the paper.

2. Architecture of the Proposed Method

The ToF camera-based relative pose calculation algorithm for uncooperative targets, which uses circle features for on-orbit service, has three main steps: (1) circle feature detection and segmentation, (2) parameter fitting of the space circle, and (3) relative pose calculation from the circle parameters. The architecture of the algorithm is shown in Figure 3. It takes the gray image and the depth image of the ToF camera as input. First, gray-image-assisted segmentation is executed as a fast initial segmentation, which includes ellipse detection and point cloud transformation; it aims to reduce the actual processing scale of the point cloud and improve the efficiency of the algorithm. After segmentation, the local point cloud containing the circle features is obtained. According to the data characteristics, the method calculates the circle parameters for each group separately and then fuses the results into a more robust estimate. Finally, the relative position and Euler angles (or the transformation matrix) are calculated from the circle parameters according to the model in Section 2.2.

2.1. Coordinate Systems

In this paper, the relative pose of the target spacecraft, which carries a circle feature on its surface, is measured using the depth image and gray image of the ToF camera. To facilitate calculation and conversion, the following coordinate systems are defined (Figure 4):
The service spacecraft body frame (SBF) is the classical principal-axes-of-inertia coordinate system. Note that the desired relative pose of the uncooperative spacecraft is expressed in the SBF.
The ToF camera reference frame (CRF) is the ToF camera coordinate system whose z-axis is directed along the optical axis.
Circle feature frame (CFF) is coincident and aligned with the target body frame (TBF), which can simplify the process of coordinate system transformation. The CFF’s xy plane is located at the end face of the circle feature far from the mounting surface, and the origin is the center of the annular torus. The z-axis is perpendicular to the circle feature and points to the direction of the target spacecraft body.
The image frame (IF) consists of two parts: the gray-image coordinate system and the depth-image coordinate system. Both share the same optical system, so the two image coordinate systems are identical. The IF takes the upper-left pixel corner of the image as its origin.
Since the installation position of the ToF camera on the servicing spacecraft is known, the transformation relationship between the SBF and CRF is constant and can be calibrated in advance. Without loss of generality, the CRF is assumed to coincide with the SBF.
In this paper, the CFF is used as the TBF, and the centroid of the target spacecraft is not considered. The relative pose of the uncooperative target can be simplified as the transformation between the CFF and CRF. This result can be applied to the initial pose acquisition.

2.2. Relative Pose Calculation Model

For a circle feature, the parameters are the center, the radius, and the normal vector of the plane. This paper will use these circle feature parameters to study the spacecraft pose solution method.
Assume that the circle feature parameters are the circle center $\mathbf{r}_0 = (x_0, y_0, z_0)$ and the unit normal vector $\mathbf{n} = (n_x, n_y, n_z)$, both measured by the ToF camera in the CRF. Since the TBF is established on the circle feature, the displacement of the uncooperative target relative to the service spacecraft is the coordinate of the circle center in the CRF, i.e., $\mathbf{T} = \mathbf{r}_0$.
The unit normal vector of the circle feature points in the same direction as the z-axis of the TBF. After this vector is translated to the origin of the CRF, the corresponding rotation angles can be obtained from the geometric relationship, and the rotation matrix of the uncooperative target relative to the service spacecraft follows.
Figure 5 shows a schematic diagram of the normal vector transferred to the CRF. For convenience of calculation, the z-axis component of the measured unit normal vector is adjusted to be positive; that is, the normal vector of the circle feature is specified to point toward the body of the uncooperative target.
The geometric relationship can be obtained from Figure 5:

$$\phi = \arctan\frac{n_y}{\sqrt{n_x^2 + n_z^2}}, \qquad \theta = \arctan\frac{n_x}{n_z}$$

where $\phi$ is the pitch angle and $\theta$ is the yaw angle.
The rotation matrix can then be obtained as $R = C_x(\phi)\,C_y(\theta)$, where

$$C_x(\phi) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\phi & \sin\phi \\ 0 & -\sin\phi & \cos\phi \end{bmatrix}, \qquad C_y(\theta) = \begin{bmatrix} \cos\theta & 0 & -\sin\theta \\ 0 & 1 & 0 \\ \sin\theta & 0 & \cos\theta \end{bmatrix}.$$
As a result, the use of circle features to calculate the position and attitude of the uncooperative spacecraft relative to the CRF can be realized. It should be noted that, due to the symmetry of the circle feature, the angle of rotation around the axis of symmetry cannot be calculated, which is not taken into account in this article. It is assumed that there is no rotation around the axis of symmetry of the circular feature.
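As an illustration of this calculation, the following Python sketch (our own, not code released with the paper) maps a fitted circle center and unit normal to the translation and rotation described above; the function name pose_from_circle and the exact sign conventions of the rotation matrices are assumptions.

```python
import numpy as np

def pose_from_circle(center, normal):
    """Relative pose from circle-feature parameters measured in the CRF.

    center: (3,) circle center r0; normal: (3,) unit normal of the circle plane.
    Returns the rotation matrix R = Cx(phi) @ Cy(theta) and the translation t.
    """
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    if n[2] < 0:                      # enforce a positive z-component (normal points toward the target body)
        n = -n
    nx, ny, nz = n
    phi = np.arctan2(ny, np.hypot(nx, nz))       # pitch angle
    theta = np.arctan2(nx, nz)                   # yaw angle
    cx = np.array([[1.0, 0.0, 0.0],
                   [0.0, np.cos(phi), np.sin(phi)],
                   [0.0, -np.sin(phi), np.cos(phi)]])
    cy = np.array([[np.cos(theta), 0.0, -np.sin(theta)],
                   [0.0, 1.0, 0.0],
                   [np.sin(theta), 0.0, np.cos(theta)]])
    R = cx @ cy                                  # roll about the symmetry axis stays unobservable
    t = np.asarray(center, dtype=float)          # translation equals the circle center in the CRF
    return R, t
```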

2.3. ToF Camera Measurement Model and Data Transformation

The sensor used in this paper is the ToF camera. The ToF camera integrates multiple functions and can collect multiple types of data, including the target distance, the intensity of the reflected laser, the spatial three-dimensional coordinates, and the target surface amplitude intensity (some types of the data are shown in Figure 6). The most typical and common data used are gray images and depth images or point clouds.
In particular, the depth image records the distance of the reflection point. If needed, the depth image can be converted into a point cloud by the camera internal parameters.
Assume that the ToF camera parameters are as follows: the focal length in pixels is $(f_x, f_y)$ and the principal point is at $(u_c, v_c)$.
Before calculating, the origin of the image coordinates must be translated to the principal point; the pixel coordinates on the translated depth image are then $(u_r, v_r) = (u, v) - (u_c, v_c)$.
The relationship between the pixel coordinates and the actual space coordinates of the target follows from the camera imaging principle: $\frac{v_r}{f_x} = \frac{x_0}{z_0}$, $\frac{u_r}{f_y} = \frac{y_0}{z_0}$.
According to the time-of-flight imaging principle of the ToF camera, the depth value at a pixel is the distance from the emitter to the reflection point. Thus, the constraint $x_0^2 + y_0^2 + z_0^2 = d^2$ holds.
The space coordinates $(x_0, y_0, z_0)$ of the target reflection point corresponding to the pixel are as follows:

$$z_0 = \frac{d}{\sqrt{1 + \dfrac{v_r^2}{f_x^2} + \dfrac{u_r^2}{f_y^2}}}, \qquad x_0 = \frac{v_r}{f_x}\, z_0, \qquad y_0 = \frac{u_r}{f_y}\, z_0$$
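For illustration, this back-projection can be written as a short numpy routine. It is a sketch under the range model above, not the authors' implementation, and depth_to_point_cloud is a hypothetical name; the pairing of $v_r$ with $f_x$ and $u_r$ with $f_y$ follows the equations in this section.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, uc, vc):
    """depth: (H, W) array of per-pixel ranges d; returns (N, 3) points in the CRF."""
    h, w = depth.shape
    v, u = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    ur = u - uc                       # shift image coordinates to the principal point
    vr = v - vc
    # d is the full range to the reflection point, so z0 = d / sqrt(1 + vr^2/fx^2 + ur^2/fy^2)
    z0 = depth / np.sqrt(1.0 + (vr / fx) ** 2 + (ur / fy) ** 2)
    x0 = (vr / fx) * z0
    y0 = (ur / fy) * z0
    pts = np.stack([x0, y0, z0], axis=-1).reshape(-1, 3)
    return pts[depth.reshape(-1) > 0]  # drop invalid (zero-range) pixels
```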

3. Initial Point Cloud Segmentation Assisted by the Gray Image

Since the point cloud of the ToF camera, which contains the depth information, is the main basis for the subsequent parameter solving, the first step in calculating the relative pose of the uncooperative target is to extract the circle feature point cloud. However, circle features in actual scenes may be extracted poorly (as described in Section 1). The following approach takes advantage of the characteristics of the ToF camera, using the gray image to assist the point cloud segmentation and employing several techniques to extract the circle feature point cloud even when the ellipse detection results are poor (the process is shown in Figure 7).
Firstly, an ellipse detection algorithm is applied to the gray image, and the union of the detection results is taken, which ensures that the region contains as many circle feature structures as possible. The ToF camera parameters are then used to transform the corresponding region of the depth image into a small-scale point cloud containing the circle feature structure. After further simple denoising and segmentation, the point cloud of the circle feature area is obtained.
As a basic technology in image processing, ellipse detection is widely used in many practical problems. However, due to the complexity of the ellipse parameters, and because images in motion scenes may suffer from motion blur, occlusion, unstable lighting conditions, and noise, ellipse detection algorithms still face challenges in efficiency, accuracy, and stability.
When selecting an ellipse detection algorithm, in addition to the effect and stability, the form of the detection result must also be considered. Current ellipse detection algorithms mainly produce two types of result: one is an ellipse arc with width [25], and the other is the complete edge of the ellipse (Figure 8). This paper adopts the efficient arc-support-based ellipse detection algorithm proposed by Lu et al. [24]; it returns a complete ellipse carrying more information, and the result is relatively stable.
When ellipse detection is complete, the parameters and edge pixel coordinates of the ellipse are obtained, and the area containing the edge pixels is the area where the circle feature is located. Since each pixel of the depth image in the ToF camera corresponds one-to-one to a pixel of the gray image, the area containing the circle feature occupies the same region in the depth image (Figure 9). Using the ToF camera parameters to convert the depth image coordinates of this region into a point cloud yields a small-scale point cloud containing the circle feature area.
In the actual detection results, because of the width and height of the ring, shadows may appear under certain lighting conditions, and arcs from different surfaces may be combined into the same ellipse, so multiple ellipses may be detected. To retain as much useful information as possible, the detected ellipse areas are merged, and the point cloud converted from the depth image in the merged region then contains a more complete circle feature.
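The masking and conversion described here can be sketched as follows. The snippet assumes ellipse parameters (center, semi-axes, orientation) from whatever detector is used (e.g., the arc-support detector of [24]); the helper names ellipse_union_mask and roi_point_cloud are illustrative, and the enlargement margin is an assumption.

```python
import numpy as np

def ellipse_union_mask(ellipses, shape, margin=1.2):
    """ellipses: list of (xc, yc, a, b, theta) in pixels; returns a boolean ROI mask."""
    h, w = shape
    v, u = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    mask = np.zeros(shape, dtype=bool)
    for xc, yc, a, b, theta in ellipses:
        du, dv = u - xc, v - yc
        ct, st = np.cos(theta), np.sin(theta)
        # rotate into the ellipse frame and test the (slightly enlarged) ellipse equation
        x = ct * du + st * dv
        y = -st * du + ct * dv
        mask |= (x / (margin * a)) ** 2 + (y / (margin * b)) ** 2 <= 1.0
    return mask

def roi_point_cloud(depth, mask, fx, fy, uc, vc):
    """Back-project only the masked depth pixels into a small local point cloud."""
    v, u = np.nonzero(mask)
    d = depth[v, u]
    ur, vr = u - uc, v - vc
    z0 = d / np.sqrt(1.0 + (vr / fx) ** 2 + (ur / fy) ** 2)
    return np.stack([(vr / fx) * z0, (ur / fy) * z0, z0], axis=1)[d > 0]
```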
The circle feature point cloud extracted using the gray-image ellipse detection result contains not only the circle feature but also other interference, such as the installation surface of the circle feature and false detections caused by the shadowed region over a small part of the ring.
The installation surface can be removed easily by plane segmentation.
The remaining part is segmented by clustering based on Euclidean distance to separate the group containing the circle features. Suppose the segmented point cloud is $P_{seg}$ with $N_{seg}$ points in total. For each internal point $p_i$ ($1 \le i \le N_{seg}$), the set of neighboring points within the minimum Euclidean distance $d_{min}$ is searched. If the point already has a label, it is skipped and the next point is processed. If the point has no label and its neighbors also have no labels, a new label is assigned to all points in the neighborhood. If the point has no label but some neighbors have labels, the smallest label among the neighbors is assigned to all of these points. After the clustering process, a total of $N_L$ labels are generated, and each point carries its label.
The points with the same label $k$, $\{p_1^k, p_2^k, \ldots, p_{n_k}^k\}$ ($1 \le k \le N_L$), are extracted to form a new point cloud $P_k$ containing $n_k$ points, so that $P_{seg} = \bigcup_{k=1}^{N_L} P_k$ and $N_{seg} = \sum_{k=1}^{N_L} n_k$. The new point clouds are sorted in descending order of point count, and the difference in count between adjacent point clouds is computed. When this difference first exceeds the threshold $n_0$, all point clouds before that position are taken as the final circle feature point cloud, $P_{circle} = \bigcup_{k=1}^{m} P_k$, where $m$ ($1 \le m < N_L$) is the first index with $n_m - n_{m+1} > n_0$.
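A minimal sketch of this clustering and cluster-selection step is given below, assuming KD-tree-based region growing with radius d_min and the descending-size selection rule with threshold n_0; the implementation details are ours, not the authors'.

```python
import numpy as np
from scipy.spatial import cKDTree

def euclidean_clusters(points, d_min):
    """Label points by region growing within radius d_min; returns an int label per point."""
    tree = cKDTree(points)
    labels = -np.ones(len(points), dtype=int)
    current = 0
    for seed in range(len(points)):
        if labels[seed] >= 0:
            continue
        stack = [seed]
        labels[seed] = current
        while stack:                               # flood-fill one cluster
            idx = stack.pop()
            for j in tree.query_ball_point(points[idx], d_min):
                if labels[j] < 0:
                    labels[j] = current
                    stack.append(j)
        current += 1
    return labels

def select_circle_clusters(points, labels, n0):
    """Keep the largest clusters until the size gap between neighbors exceeds n0."""
    sizes = np.bincount(labels)
    order = np.argsort(sizes)[::-1]                # cluster indices, descending by point count
    keep = [order[0]]
    for a, b in zip(order[:-1], order[1:]):
        if sizes[a] - sizes[b] > n0:
            break
        keep.append(b)
    return points[np.isin(labels, keep)]
```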
Suppose the offset parameter of the fitted installation-surface plane (the term independent of the normal vector) is $d_f$, with the z-axis component of the normal vector set positive. Interference from a small number of points, such as those in the shadowed region, can be removed directly by point cloud denoising. The point cloud segmentation aided by the gray image is thus completed.

4. Feature Parameters Calculation of the Non-Ideal Circle

Some studies on the fitting of circle features focus on circle features with fairly regular geometry [29,30], while actual circle features may contain multiple or more complex structures such as a docking ring or a nozzle. To make full use of the data and adapt to circle features with more structures, this paper divides the circle feature point cloud into the following two groups: the end surface away from the mounting surface and the side surface. The end surface is a flat structure, and the side surface is mainly a curved surface formed by rotation around an axis:
$$P_{circle} = P_{end} \cup P_{side}$$

4.1. Circle Feature Parameters Calculation of Each Group of Point Cloud

For the end face, since it is only a planar structure, fitting the plane normal vector is sufficient to obtain the normal vector of the target circle feature. Suppose the fitted plane is $ax + by + cz + d_0 = 0$ with normal vector $\mathbf{n}_{end} = (a, b, c)$. The radius $r$ and the center $\mathbf{r}_{0,end}$ of the circle feature can be estimated by fitting a circle to the points on the end surface, and the displacement is then $\mathbf{t}_{end} = \mathbf{r}_{0,end}$.
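A possible numpy realization of the end-face fitting is sketched below: a least-squares plane gives n_end, and an algebraic (Kasa) circle fit in that plane gives the radius r and the center r_0,end. The specific fitting choices are assumptions; the paper does not prescribe them.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through points; returns unit normal n and offset d (n·p + d = 0)."""
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c)
    n = vt[-1]
    if n[2] < 0:
        n = -n                                    # keep the z-component positive, as in the text
    return n, -n @ c

def fit_circle_on_plane(points, n):
    """Fit a 2D circle to the end-face points inside the fitted plane; return 3D center and radius."""
    c0 = points.mean(axis=0)
    e1 = np.cross(n, [1.0, 0.0, 0.0])             # build an in-plane orthonormal basis (e1, e2)
    if np.linalg.norm(e1) < 1e-6:
        e1 = np.cross(n, [0.0, 1.0, 0.0])
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(n, e1)
    xy = (points - c0) @ np.stack([e1, e2], axis=1)   # 2D coordinates in the plane
    # Kasa fit: x^2 + y^2 = 2*a*x + 2*b*y + c
    A = np.column_stack([2 * xy, np.ones(len(xy))])
    (a, b, cc), *_ = np.linalg.lstsq(A, (xy ** 2).sum(axis=1), rcond=None)
    r = np.sqrt(cc + a * a + b * b)
    center = c0 + a * e1 + b * e2                 # lift the 2D center back to 3D
    return center, r
```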
For the curved side surface, the normal vector of the target circle feature is its rotation axis. To adapt to more rotationally symmetric structures, a geometric cross-section fitting method is adopted: several planes parallel to the end face intersect the side surface, and each intersection circle is fitted to obtain its center. The axis is then the straight line on which these centers lie (as shown in Figure 10).
Let the plane containing the end face be the standard plane $\Omega_0: ax + by + cz + d_0 = 0$. The family of planes parallel to the standard plane that intersect the side surface is $\Omega: ax + by + cz + d = 0$ ($d_0 \le d \le d_f$), where $d_f$ is the plane parameter of the installation surface. Suppose the parameter of a selected section is $d_j$; taking $n$ random values between $d_0$ and $d_f$ yields a set of $n$ section planes. The center of each intersection circle is fitted, and a line is then fitted through these centers to obtain the normal vector direction $\mathbf{n}_{rot}$:
$$d_j = \mathrm{rand}(d_0, d_f), \quad j \in [1, n]$$
$$\Gamma_j = \left\{ (x_i, y_i, z_i) \;:\; \left| a x_i + b y_i + c z_i + d_j \right| < \varepsilon \right\}$$
$$O_j = \mathrm{fit\_center}\!\left(\Gamma_j\right)$$
$$L = \mathrm{fit\_line}\!\left(\{O_j\}\right) = \left\{ \mathbf{n}_{rot}\, t + \mathbf{p} \;:\; \mathbf{p} \in \{O_j\},\; t \in \mathbb{R} \right\}$$
Then, the normal vector of the space circle is $\mathbf{n}_{side} = \mathbf{n}_{rot}$.
The center of the target circle feature is the center of the end face, that is, the intersection of the side-surface axis and the end face. After finding the rotation axis of the side surface, the center can be obtained from the axis equation and the end-face plane equation as $\mathbf{r}_{0,side} = L \cap \Omega_0$. The displacement obtained from this part is then $\mathbf{t}_{side} = \mathbf{r}_{0,side}$.
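The cross-section procedure can be sketched as follows, assuming the side-surface points, the end-face plane parameters $(\mathbf{n}, d_0)$, and the mounting-plane offset $d_f$ are already available. For brevity, each section center is approximated here by the section centroid; the in-plane circle fit from the end-face sketch could be substituted for a truer implementation.

```python
import numpy as np

def side_surface_axis(points, n, d0, df, n_sections=10, eps=1e-3, seed=None):
    """Estimate the rotation axis n_rot and the end-face center from the side-surface point cloud."""
    rng = np.random.default_rng(seed)
    centers = []
    for dj in rng.uniform(min(d0, df), max(d0, df), n_sections):
        gamma = points[np.abs(points @ n + dj) < eps]    # points within eps of the section plane
        if len(gamma) >= 3:
            centers.append(gamma.mean(axis=0))           # crude center estimate of the section circle
    centers = np.asarray(centers)                        # assumes enough sections were populated
    p_mean = centers.mean(axis=0)
    _, _, vt = np.linalg.svd(centers - p_mean)
    n_rot = vt[0]                                        # dominant direction of the center line
    if n_rot[2] < 0:
        n_rot = -n_rot
    # intersect the axis with the end-face plane Omega_0: n·(p_mean + t*n_rot) + d0 = 0
    t = -(n @ p_mean + d0) / (n @ n_rot)
    center = p_mean + t * n_rot
    return n_rot, center
```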

4.2. Parameters Fusion Based on Point Cloud

After the circle feature parameter calculation, two estimates are obtained for both the normal vector and the circle center. To obtain unique circle feature parameters for determining the relative pose of the uncooperative spacecraft, information fusion is necessary.
Performing a weighted summation of the different results is the simplest and most effective approach. Since the circle feature is divided into two groups, the end surface and the side surface, and each result is solved in a different way, the weight of each estimate of the normal vector or displacement is the ratio of the number of points in the corresponding group to the total number $N$ of points in the feature point cloud before grouping. The parameter weights of the end surface and the side surface are
$$w_{end} = \frac{n_{end}}{N}, \qquad w_{side} = \frac{n_{side}}{N}$$
Then, the fused normal vector and displacement of the circle feature are
$$\mathbf{n} = w_{end}\,\mathbf{n}_{end} + w_{side}\,\mathbf{n}_{side}$$
$$\mathbf{t} = w_{end}\,\mathbf{t}_{end} + w_{side}\,\mathbf{t}_{side}$$
Then, according to the relationship between the normal vector of the circle feature parameter and the rotation matrix, the rotation matrix R can be solved.
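A short sketch of this fusion step is given below; the function name and the renormalization of the fused normal are our additions, and the fused parameters can then be passed to the pose_from_circle sketch from Section 2.2 to recover R and t.

```python
import numpy as np

def fuse_circle_parameters(n_end, t_end, n_pts_end, n_side, t_side, n_pts_side):
    """Weighted fusion of the end-surface and side-surface normal/displacement estimates."""
    N = n_pts_end + n_pts_side
    w_end, w_side = n_pts_end / N, n_pts_side / N
    n = w_end * np.asarray(n_end) + w_side * np.asarray(n_side)
    n /= np.linalg.norm(n)                    # renormalize after the weighted sum
    t = w_end * np.asarray(t_end) + w_side * np.asarray(t_side)
    return n, t
```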

5. Experiments and Numerical Simulations

5.1. Circle Feature Point Cloud Segmentation Experiment

This part tests the effect of point cloud segmentation assisted by the gray image of the ToF camera. The data, including gray images and depth images, are obtained by imaging a satellite model with a circle feature using the ToF camera. The image size is 320 × 240 pixels. After calibration, the principal point is at [165.39, 122.44] pixels, the focal length is [394.16, 393.89] pixels, and the distortion coefficients are [−0.5953, 1.2452]. For convenience of representation, the unit of the converted CRF is centimeters. The host computer used in this experiment has an Intel i7 processor with a 2.8 GHz main frequency and 8 GB of memory.
To test the efficiency improvement of the segmentation method in this paper, direct segmentation of the circle feature from the global point cloud is chosen for comparison. However, the circle feature is incomplete and contains multiple interfering geometric structures, so it is difficult to directly apply a circle/cylinder-based segmentation method [31]. Therefore, the planar background segmentation and the clustering algorithm described in Section 3 are used as a simple and effective way to separate the circle feature from the global point cloud.
As shown in Figure 2, a gray image and a point cloud are obtained simultaneously by one shot. The point cloud (76,800 points) is obtained by converting the depth image with the camera parameters. After background culling, the point cloud is clustered and the circle feature cluster is selected, as shown in Figure 11.
The process of segmentation assisted by the gray image is shown in Figure 12 and Figure 13. After ellipse detection, the region of interest is extracted and transformed into a local point cloud. The final circle feature is then obtained by simple clustering and selection. The results of the two methods are shown in Figure 14.
Table 1 shows the comparison of the two point cloud segmentation algorithms. In the direct method, data conversion takes most of the total time, and a huge number of points must be segmented, which leads to a long segmentation time. Compared with the direct method, the gray image auxiliary method adds an ellipse-detection step and fuses the data conversion with the circle feature extraction. With the gray image auxiliary method, the total time consumption is reduced by 94.72% and the number of points to be segmented is reduced by 93.53%.
The proposed method focuses on the circle feature, which occupies only part of the point cloud, so the number of points to be processed is decreased. Since the total segmentation time is proportional to the number of points in the point cloud to be segmented, the time efficiency increases at a rate similar to the reduction in point count. It can also be seen that the time improvement is larger than the reduction in point count, which means the algorithm performs well on fast segmentation in addition to the data filtering (bilateral filtering).

5.2. Relative Pose Accuracy and Robustness of Non-Ideal Circle Features of Point Clouds

In this section, a simulation test of the circle feature parameter fitting and geometric-model pose calculation is conducted to assess the algorithm's accuracy and its robustness against noise. First, 100 groups of circle feature point cloud observation data are generated randomly and corrupted with different levels of noise. Then, the algorithm is executed to solve the relative pose. Finally, the results are analyzed statistically.
In order to simulate the segmented circle feature observation point cloud, the following steps are performed:
(1)
Standard model settings: The standard model is a point cloud of a conical frustum with a resolution of 2 mm (as shown in Figure 15). The upper base, fixed on the mounting surface, has a radius of 12 cm. The lower base, which is the end face lying on the standard plane, has an outer radius of 24 cm and an inner radius of 20 cm. The height of the model is 10 cm. The standard pose of the circle feature model is fixed at the origin of the CRF, i.e., the center of the end face coincides with the origin $(0, 0, 0)$, all Euler angles are 0, and the transformation matrix is the identity matrix $I$.
(2)
Noise addition: Noise at different levels (0.01–1 times the resolution of the standard point cloud model) is added to the standard model to verify the algorithm's resistance to interference.
(3)
Model transformation: 100 groups of positions and Euler angles are generated randomly as nominal values. The final circle feature observation point clouds are generated by transforming the noise-added standard model according to these nominal values (a sketch of this data-generation procedure is given below).
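A hypothetical sketch of this data-generation procedure (steps (1)-(3)) follows; the sampling pattern of the frustum surface and the use of Gaussian noise are assumptions made only for illustration, not the authors' simulation code.

```python
import numpy as np

def make_frustum_cloud(step=0.002, r_top=0.12, r_out=0.24, r_in=0.20, height=0.10):
    """Sample the side surface and the annular end face of the frustum model (units: meters)."""
    pts = []
    for z in np.arange(0.0, height + step, step):            # side surface: radius shrinks with z
        r = r_out + (r_top - r_out) * z / height
        ang = np.arange(0.0, 2.0 * np.pi, step / r)
        pts.append(np.column_stack([r * np.cos(ang), r * np.sin(ang), np.full(ang.shape, z)]))
    for r in np.arange(r_in, r_out, step):                    # annular end face at z = 0
        ang = np.arange(0.0, 2.0 * np.pi, step / r)
        pts.append(np.column_stack([r * np.cos(ang), r * np.sin(ang), np.zeros(ang.shape)]))
    return np.vstack(pts)

def make_observation(model, noise_std, rpy, t, seed=None):
    """Add Gaussian noise, then rotate (roll, pitch, yaw in rad) and translate by the nominal pose."""
    rng = np.random.default_rng(seed)
    noisy = model + rng.normal(0.0, noise_std, model.shape)
    cr, cp, cy = np.cos(rpy)
    sr, sp, sy = np.sin(rpy)
    R = (np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]]) @
         np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]]) @
         np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]]))
    return noisy @ R.T + np.asarray(t)

# Example: one observation with 0.5 mm noise and a chosen nominal pose
model = make_frustum_cloud()
obs = make_observation(model, noise_std=0.0005, rpy=(0.1, -0.2, 0.3), t=(0.05, -0.02, 1.5), seed=0)
```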
The relative pose is obtained by the circle feature parameter fitting algorithm and the geometric model. The errors of the corresponding variables are obtained by comparing the pose results with the nominal values. The final statistical results are as follows:
It can be seen from Table 2 and Table 3 and Figure 16 and Figure 17 that, when noise on the order of 0.01–1 mm is added to the observation model (the point cloud resolution is on the order of a millimeter), the Euler angle and displacement errors increase with increasing noise. However, the Euler angle always maintains high accuracy: the total attitude angle error stays within 0.1° and its fluctuation range is within 0.1°. With respect to the displacement, when the noise is on the order of a millimeter, the error and its fluctuation range are within several millimeters; with noise below the millimeter level, the accuracy can reach sub-millimeter or better. It can be concluded that the relative pose solution method based on the circle feature point cloud has high accuracy and good robustness under strong noise.

6. Conclusions

In this paper, we propose a relative pose determination strategy for uncooperative spacecraft in close-proximity operations. The natural circle feature acquired by the Time-of-Flight (ToF) camera is used to build a pose solution model based on specific coordinate frames. To improve the efficiency of the circle feature point cloud segmentation, gray images are used to assist the segmentation process. These gray images are generated by the ToF camera together with the depth images, which can be converted into the point cloud. In addition, the point cloud extraction algorithm compensates for poor ellipse detection results in the images. A novel circle feature fitting method is developed to cope with non-ideal, or even varying, shapes of the circle feature in different on-orbit service scenarios.
The experiment showed that the proposed gray-image-assisted segmentation method dramatically decreased the computation time compared with the direct segmentation method. Numerical simulation results demonstrated that the attitude angle errors were kept within 0.1° and the target displacement errors were at the point-cloud-resolution level when the maximum noise is at the level of the point cloud resolution. When the noise is below the resolution level, higher accuracy is obtained with stronger robustness.
In future research, the information in the gray images from the ToF camera is expected to be exploited more deeply. Fusing gray and depth image data will be considered for more robust and accurate pose estimation.

Author Contributions

S.Z. provided the main idea and supervised the whole process. Y.L. performed the experiment and simulation and finished the draft manuscript. X.Z. reviewed and edited the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Yoshida, K. ETS-VII Flight Experiments For Space Robot Dynamics and Control. In Experimental Robotics VII; Springer: Berlin/Heidelberg, Germany, 2001.
  2. Goeller, M.; Oberlaender, J.; Uhl, K.; Roennau, A.; Dillmann, R. Modular robots for on-orbit satellite servicing. In Proceedings of the 2012 IEEE International Conference on Robotics and Biomimetics, ROBIO 2012, Guangzhou, China, 11–14 December 2012.
  3. Imaida, T.; Yokokohji, Y.; Doi, T.; Oda, M.; Yoshikawa, T. Ground-space bilateral teleoperation of ETS-VII robot arm by direct bilateral coupling under 7-s time delay condition. IEEE Trans. Robot. Autom. 2004, 20, 499–511.
  4. Friend, R.B. Orbital Express program summary and mission overview. In Proceedings of the Sensors and Systems for Space Applications II, Orlando, FL, USA, 15 April 2008; Volume 6958.
  5. Wang, E.; Wu, S.; Wu, Z.; Radice, G. Distributed adaptive vibration control for solar power satellite during on-orbit assembly. Aerosp. Sci. Technol. 2019, 94, 105378.
  6. Mokuno, M.; Kawano, I. In-orbit demonstration of an optical navigation system for autonomous rendezvous docking. J. Spacecr. Rockets 2011, 48, 1046–1054.
  7. Black, K.; Shankar, S.; Fonseka, D.; Deutsch, J.; Dhir, A.; Akella, M.R. Real-time, flight-ready, non-cooperative spacecraft pose estimation using monocular imagery. arXiv 2021, arXiv:2101.09553.
  8. Li, W.J.; Cheng, D.Y.; Liu, X.G.; Wang, Y.B.; Shi, W.H.; Tang, Z.X.; Gao, F.; Zeng, F.M.; Chai, H.Y.; Luo, W.B.; et al. On-orbit service (OOS) of spacecraft: A review of engineering developments. Prog. Aerosp. Sci. 2019, 108, 32–120.
  9. Christian, J.A.; Cryan, S. A survey of LIDAR technology and its use in spacecraft relative navigation. In Proceedings of the AIAA Guidance, Navigation, and Control (GNC) Conference, Boston, MA, USA, 19–22 August 2013.
  10. Foix, S.; Alenyà, G.; Torras, C. Lock-in time-of-flight (ToF) cameras: A survey. IEEE Sens. J. 2011, 11, 1917–1926.
  11. Wang, Q.; Lei, T.; Liu, X.; Cai, G.; Yang, Y.; Jiang, L.; Yu, Z. Pose estimation of non-cooperative target coated with MLI. IEEE Access 2019, 7, 153958–153968.
  12. Opromolla, R.; Fasano, G.; Rufino, G.; Grassi, M. Characterization and testing of a high-resolution time-of-flight camera for autonomous navigation. In Proceedings of the 5th IEEE International Workshop on Metrology for AeroSpace, MetroAeroSpace 2018, Rome, Italy, 20–22 June 2018.
  13. Opromolla, R.; Fasano, G.; Rufino, G.; Grassi, M. A review of cooperative and uncooperative spacecraft pose determination techniques for close-proximity operations. Prog. Aerosp. Sci. 2017, 93, 53–72.
  14. Lim, T.W.; Oestreich, C.E. Model-free pose estimation using point cloud data. Acta Astronaut. 2019, 165, 298–311.
  15. Zhang, L.; Zhu, F.; Hao, Y.; Pan, W. Rectangular-structure-based pose estimation method for non-cooperative rendezvous. Appl. Opt. 2018, 57, 6164.
  16. Song, J.; Cao, C. Pose Self-Measurement of Noncooperative Spacecraft Based on Solar Panel Triangle Structure. J. Robot. 2015, 2015, 472461.
  17. Sell, J.L.; Rhodes, A.; Woods, J.O.; Christian, J.A.; Evans, T. Pose performance of LIDAR-based navigation for satellite servicing. In Proceedings of the AIAA/AAS Astrodynamics Specialist Conference 2014, San Diego, CA, USA, 4–7 August 2014.
  18. Hu, H.; Wang, D.; Gao, H.; Wei, C.; He, Y. Vision-based position and pose determination of non-cooperative target for on-orbit servicing. Multimed. Tools Appl. 2020, 79, 14405–14418.
  19. Li, C.; Chen, D.; Gong, J.; Wu, Y. Elliptic shape prior for object 2D-3D pose estimation using circular feature. EURASIP J. Adv. Signal Process. 2020, 2020:34, 1–19.
  20. Wang, S.; Zhang, S.; Ning, M.; Zhou, B. Motion blurred star image restoration based on MEMS gyroscope aid and blur kernel correction. Sensors 2018, 18, 2662.
  21. Wu, B.; Ye, D.; Guo, Y.; Chen, G. Multiple circle recognition and pose estimation for aerospace application. Optik 2017, 145, 148–157.
  22. Ozawa, R.; Chaumette, F. Dynamic visual servoing with image moments for an unmanned aerial vehicle using a virtual spring approach. Adv. Robot. 2013, 27, 683–696.
  23. Zhang, L.; Pan, W.; Ma, X. Real-time docking ring detection based on the geometrical shape for an on-orbit spacecraft. Sensors 2019, 19, 5243.
  24. Lu, C.; Xia, S.; Shao, M.; Fu, Y. Arc-Support Line Segments Revisited: An Efficient High-Quality Ellipse Detection. IEEE Trans. Image Process. 2019, 29, 768–781.
  25. Pǎtrǎucean, V.; Gurdjos, P.; Von Gioi, R.G. A parameterless line segment and elliptical arc detector with enhanced ellipse fitting. In Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2012; Volume 7573.
  26. Vosselman, G.; Gorte, B.; Sithole, G.; Rabbani, T. Recognising structure in laser scanner point clouds. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2004, 46, 33–38.
  27. Xu, Y.; Tuttas, S.; Hoegner, L.; Stilla, U. Voxel-based segmentation of 3D point clouds from construction sites using a probabilistic connectivity model. Pattern Recognit. Lett. 2018, 102, 67–74.
  28. Liu, T.; Zhao, W.; Wang, Z.; Jia, S. An Image-Based Circle Extraction Method of Three-Dimensional Point Cloud Data. In Proceedings of the 2008 Second International Symposium on Intelligent Information Technology Application, Shanghai, China, 20–22 December 2008; Volume 2, pp. 447–451.
  29. Jiang, X.; Cheng, D.C. Fitting of 3D circles and ellipses using a parameter decomposition approach. In Proceedings of the International Conference on 3D Digital Imaging and Modeling, 3DIM, Ottawa, ON, Canada, 13–16 June 2005.
  30. Figueiredo, R.; Moreno, P.; Bernardino, A. Robust cylinder detection and pose estimation using 3D point cloud information. In Proceedings of the 2017 IEEE International Conference on Autonomous Robot Systems and Competitions, ICARSC 2017, Coimbra, Portugal, 26–28 April 2017.
  31. Nurunnabi, A.; Sadahiro, Y.; Lindenbergh, R. Robust cylinder fitting in three-dimensional point cloud data. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. ISPRS Arch. 2017, 42, 63–70.
Figure 1. Different types of ellipse-detection results.
Figure 2. Data of Time-of-Flight (ToF) camera. (a) The captured gray image; (b) The captured point cloud in the field of view (76,800 points). Obviously, the circle feature only accounts for a small part of the point cloud.
Figure 3. Illustration of coordinate frames and related vectors.
Figure 4. Definition of the coordinate system.
Figure 5. Schematic diagram of using the normal vector to solve the Euler angle.
Figure 6. Data of ToF camera. (a) point cloud (76,800 points); (b) gray image; (c) depth image.
Figure 7. The flowchart of circle feature point cloud extraction.
Figure 8. Some good results of ellipse-detection based on arc support.
Figure 9. Gray image and depth image correspond to the same area.
Figure 10. Axis fitting by the intersecting line.
Figure 11. Direct segmentation result of point cloud. (a) the result after the planar background segmentation; (b) the result after the clustering segmentation.
Figure 12. (a) The result of ellipse detection; (b) the region of interest.
Figure 13. (a) The result of point cloud segmentation assisted by gray images; (b) the result after the clustering segmentation.
Figure 14. (a) The final results of the direct segmentation; (b) final results of the segmentation assisted by gray image.
Figure 15. Standard model of circle feature.
Figure 16. Errors of angles. (a) yaw angle; (b) pitch angle; (c) the included angle.
Figure 17. Standard model of circle feature.
Table 1. Comparison of point cloud segmentation algorithm results.

| | Direct Method | Gray Image Auxiliary Method | Improvement (%) |
|---|---|---|---|
| Ellipse Detection Time (s) | — | 0.047 | — |
| Data Conversion Time (s) | 26.625281 | 1.764461 | — |
| Circle Segmentation Time (s) | 7.687266 | — | — |
| Total Time (s) | 34.312547 | 1.811461 | 94.72 |
| Divided Point Number | 76,800 | 4970 | 93.53 |
Table 2. Statistics of error mean and variance of two-axis attitude.

| Noise (mm) | Error phi Mean (°) | Error phi Variance (°) | Error Theta Mean (°) | Error Theta Variance (°) | Error Angle Mean (°) | Error Angle Variance (°) |
|---|---|---|---|---|---|---|
| 0.02 | 1.5986 × 10⁻⁵ | 2.1604 × 10⁻⁵ | 2.2504 × 10⁻⁵ | 7.1102 × 10⁻⁵ | 0.0014 | 0.0031 |
| 0.05 | 3.0365 × 10⁻⁵ | 6.8873 × 10⁻⁵ | 3.8961 × 10⁻⁵ | 6.6372 × 10⁻⁵ | 0.0025 | 0.0042 |
| 0.08 | 6.6854 × 10⁻⁵ | 9.6205 × 10⁻⁵ | 6.6996 × 10⁻⁵ | 0.0001 | 0.0052 | 0.0067 |
| 0.2 | 0.0001 | 0.0001 | 0.0002 | 0.0003 | 0.0099 | 0.0114 |
| 0.5 | 0.0002 | 0.0003 | 0.0004 | 0.0004 | 0.0225 | 0.0189 |
| 0.8 | 0.0003 | 0.0004 | 0.0010 | 0.0012 | 0.0424 | 0.0233 |
| 2 | 0.0009 | 0.0007 | 0.0017 | 0.0015 | 0.0921 | 0.0555 |
Table 3. Displacement error.

| Noise (mm) | Position Error Mean (mm) | Position Error Variance (mm) |
|---|---|---|
| 0.02 | 0.0250 | 0.0152 |
| 0.05 | 0.0620 | 0.0278 |
| 0.08 | 0.0977 | 0.0721 |
| 0.2 | 0.2909 | 0.1844 |
| 0.5 | 0.6009 | 0.3109 |
| 0.8 | 0.8829 | 0.4063 |
| 2 | 2.3774 | 1.0606 |
