A Novel Method for Space Circular Target Detection in Machine Vision

Computer-vision-based space circular target detection has a wide range of applications in visual measurement, object detection, and other fields. A space circular target is projected as an ellipse in the camera image, from which it can be localized. Traditional methods based on monocular vision use a precise calculation model to compute the center coordinate and normal vector of the space circular target from the image's elliptic parameters. However, this exact calculation approach has poor anti-interference ability in practical applications. To address this shortcoming, this paper proposes an optimization method that fits the circular target in 3D space: the image ellipse is back-projected into 3D space, and the center coordinate and normal vector of the space circular target are then detected. Unlike the traditional method, this approach is not sensitive to the image's elliptic parameters; it has stronger noise resistance and notable application value. The feasibility and effectiveness of the proposed method were verified by both simulation and practical experimental results.


Introduction
Three-dimensional (3D) measurement and 3D reconstruction [1,2] are important research topics in the field of computer vision. According to the basic principle of computer vision, it is impossible to measure 3D space via monocular vision [3]. The measurement of 3D space points usually requires two or more cameras to form a binocular or multi-vision system [4]. For the purposes of certain applications, however, 3D measurement based on monocular vision is still a research hotspot [5]. A noise adaptive filtering structure was first explored in [6], where a new monocular vision feedback control strategy based on the adaptive filtering structure was developed for robotic positioning without noise parameters. Wang et al. [7] proposed a technique to measure the absolute depth information of the object in the image using monocular vision based on the Harris-SIFT corner detection algorithm. Researchers have established a 3D object detection and pose estimation based on monocular images [8], for example, where 3D object properties are first regressed using a deep convolutional neural network and then combined with geometric constraints provided by a 2D object bounding box to produce a complete 3D bounding box.
There have been many other valuable contributions to the literature. Xiang et al. [9] captured the 3D total motion of a target person from a monocular view input. Given an image or a monocular video, body, face, and finger motions could be reconstructed by a 3D deformable mesh model. That work used 3D part orientation fields (POFs) to encode the 3D orientations of all body parts in a common 2D image space. The POFs were predicted by a fully convolutional network along with the joint confidence maps. An approach to monocular 3D human pose estimation and tracking was introduced in [10], which leveraged advances in reliable 2D pose estimation from monocular images, tracking-by-detection, and powerful modeling of 3D dynamics based on hierarchical Gaussian process latent variable models as a three-stage process to recover human poses in realistic street conditions. Monocular vision depth estimation, which plays a crucial role in understanding 3D scene geometry, is an ill-posed problem. Recently, researchers have extensively investigated monocular depth estimation based on deep learning. Michels et al. [11], for example, applied supervised learning to learn monocular depth estimates in unstructured outdoor environments; the resulting depth estimates were sufficiently reliable to drive an RC car at high speeds through the environment. Monocular vision systems for 3D reconstruction and measurement are simple in structure, require relatively small quantities of data to operate, and have generally high processing speeds. However, their 3D measurement and reconstruction accuracy is inferior to that of multi-camera systems. It is still difficult to complete real high-precision 3D measurement tasks via monocular vision.
For certain rigid objects in space, monocular vision can indeed achieve accurate 3D measurement. For example, in calibrating camera parameters, the rigid transformation relationship between the camera frame and target frame can be obtained through the correspondence between spatial 3D primitives (e.g., points, lines) and graphic 2D primitives. In other words, monocular vision can be used to estimate the position and pose of rigid objects.
Monocular vision can also be used to accurately measure some specially shaped targets, such as rectangular target positions [12]. When the size of a space rectangular target is known, the center can be derived and accurately measured by monocular vision. Linnainmaa [13] and DeMenthon and Davis [14] also achieved the position estimation of space triangular targets with known sizes via monocular vision. The central position derivation and normal measurement of space circular targets are another common research topic. The projection of a circular target in space is generally an ellipse in the image. A location method of space conic targets was proposed in [15,16]; this method is not strictly limited to circular targets and does have a certain amount of versatility, but it is solved iteratively according to the initial value during computing, creating a multisolution problem. D.H. Marimont et al. [17] proposed a space circular target measurement method accompanying a rigorous mathematical derivation. Eight groups of solutions corresponding to the center coordinate of the space circular target were gathered with a known radius, with four solutions in front of the camera and the others behind it. This method is also susceptible to redundant solutions. Kanatani and Liu [18] interpreted three 3D orthogonal lines by computing conics and then described an analytical procedure for computing the 3D geometry of a cone with a known shape from its projection. They also tested their method on real image examples. A space locating approach for space circular targets is presented in [19], where the center coordinate and normal vector of a space circle can be solved according to its image ellipse when its radius is known or unknown. 
Shiu and Ahmad [20] calculated the central position and normal direction of a space circular target and its image in the camera based on ellipse image features (e.g., major/minor axis of the image ellipse, ellipse center, angle between the long axis of the ellipse and the vertical direction). Generally, there are only two solutions for the calculation results. This method is well-suited to camera calibration and vision measurement. For example, a circular target with a known radius, when used as a calculation or measuring target, can realize the 3D position measurement of 3D space points via monocular vision.
Other researchers [21][22][23] have also analyzed computational errors in the process of space object reconstruction. Factors such as imaging distortion interference, camera inner parameter error, and feature target size [24,25] are the primary error sources in monocular-vision-based 3D target position and orientation measurement systems.
In summary, most existing space circular target location methods can accurately reveal the center and normal vector based on image characteristic parameters (which are related to the camera configuration) [19,20,26]. However, certain problems arise in the practical application of these methods. The corresponding algorithms depend on the space circular target and its elliptic projection in the image, so they are sensitive to image characteristic parameters. Subtle changes in image parameters (e.g., slight error in the long and short axes of the image ellipse, or in the angle between the long axis of the ellipse and the horizontal), for example, can cause a large deviation in the location of a space circular target [20]. The normal vector of a space circular target plane is only related to the posture of the image ellipse, but its center coordinate is relevant to the ellipse posture and circle radius. In practical applications, the radius of a space circular target is often already known, so the pose of the image ellipse is critical for accurate measurement of the space circular target.
It is necessary to detect the parameters of the imaging ellipse accurately during the image segmentation stage, but various noise factors alter the pose of the elliptic projection, thus creating measurement errors in the normal and center coordinates of the space circular target plane. When such a method is used for iris detection in an eye-tracking system, it depends on (and is sensitive to) the detection of image ellipse parameters. The anti-noise performance of the method proposed in [27] is relatively weak, so stringent requirements must be imposed on any image segmentation application. This is highly challenging in most practical engineering scenarios, such as 3D gaze tracking, dynamic micro-displacement measurement, and other fields.
This paper proposes a novel space circular target detection method based on monocular vision (also referred to here as "mono-vision"). The traditional calculation method is first used to perform segmentation and optimum fitting of the image elliptic projection to detect image ellipse parameters, then a 3D calculation model of the space circular target is built based on these parameters. Optimized parameters of the ellipse on the image are not extracted directly, but rather, edge points of the image ellipse are projected back into 3D space. The space circular target is fitted in this 3D space, then its center coordinate and normal vector are detected. Unlike the traditional method, this approach is not sensitive to image ellipse parameters; it has a strong anti-noise capacity and noteworthy potential application value. Simulation and experimental results verify the feasibility and effectiveness of the proposed method.
The rest of this paper is organized as follows. Section 2 provides a brief introduction to the traditional calculation method. The proposed method is described in Section 3. Section 4 presents simulation results, and Section 5 discusses the experiment conducted to further validate the proposed method. Section 6 contains a brief summary and concluding remarks.

Traditional Calculation Method
The perspective projection of a space circular target is an ellipse image in the camera. An accurate calculation method for the center coordinate and normal vector of a space circular target based on monocular vision is given in [19,20]. The method is implemented in the following steps.

Construction of the Cone Corresponding to Space Circular Target
As shown in Figure 1, according to the principle of pin-hole imaging, the edge of a space circular target, the center of a camera lens, and the edge of the ellipse image constitute a space elliptic cone.

Space Cone Equation
From the image, the fitted elliptic equation is au^2 + bv^2 + cuv + du + ev + f = 0, and the coefficients a-f of the equation are known. Each point (u, v) on the image ellipse defines a generatrix of the cone passing through (u, v, f0) and (0, 0, 0), where f0 is the focal length of the camera. Let the coordinate of the principal point be (0, 0), so that the origin of the u-v frame lies on the z-axis of the camera frame. The generatrix equation is

x/u = y/v = z/f0. (1)

By substituting Equation (1) into the fitted elliptic equation, the space cone equation is written as

Ax^2 + By^2 + Cxy + Dxz + Eyz + Fz^2 = 0, (2)

where A = af0^2, B = bf0^2, C = cf0^2, D = df0, E = ef0, F = f.
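As a numerical sanity check, the substitution above can be sketched as follows; the ellipse coefficients and focal length below are illustrative values, not taken from the paper.

```python
import numpy as np

def cone_from_ellipse(a, b, c, d, e, f, f0):
    """Coefficients of the cone A x^2 + B y^2 + C xy + D xz + E yz + F z^2 = 0,
    obtained by substituting u = f0*x/z, v = f0*y/z into the image ellipse
    a u^2 + b v^2 + c uv + d u + e v + f = 0 and clearing z^2."""
    return a * f0**2, b * f0**2, c * f0**2, d * f0, e * f0, f

# Example ellipse u^2 + 4 v^2 - 1 = 0 (a=1, b=4, c=d=e=0, f=-1), focal length 16
A, B, C, D, E, F = cone_from_ellipse(1.0, 4.0, 0.0, 0.0, 0.0, -1.0, 16.0)

# Any point (u, v) on the ellipse generates the ray (t*u, t*v, t*f0);
# every point of that ray must satisfy the cone equation.
u, v = 0.6, np.sqrt((1 - 0.6**2) / 4)   # a point on the ellipse
for t in (0.5, 1.0, 3.0):
    x, y, z = t * u, t * v, t * 16.0
    residual = A*x*x + B*y*y + C*x*y + D*x*z + E*y*z + F*z*z
    assert abs(residual) < 1e-9
```

The loop confirms that the cone equation vanishes along every generatrix, independent of the scale factor t.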

Establishment of a New Frame
A new frame ox'y'z' is created with the same origin as the original frame, in which the cone equation takes a standard form. For a point with coordinates (x, y, z) in the frame oxyz and (x', y', z') in the frame ox'y'z', (x, y, z)^T = P(x', y', z')^T, where the matrix P can be viewed as a rotation transformation matrix between the frames.
Equation (2) is standardized in terms of the quadratic form

Q = [ A    C/2  D/2
      C/2  B    E/2
      D/2  E/2  F   ]. (3)

In the new frame ox'y'z', the z'-axis coincides with the axis of the space elliptic cone, and the cone falls in the positive direction of the z'-axis. Assume |λ1| > |λ2|; then the major axis of the ellipse obtained by intersecting the cone with a plane parallel to the x'y' plane is parallel to the y'-axis.
After the quadratic form of the cone is standardized, its expression in the ox'y'z' frame is

λ1x'^2 + λ2y'^2 + λ3z'^2 = 0, (4)

where λ1, λ2, λ3 are the eigenvalues of Q and the sign of λ3 is opposite to that of λ1 and λ2. Comparing Equation (4) with the standard formula x^2/a^2 + y^2/b^2 - z^2/c^2 = 0 of a cone whose central axis is the z'-axis gives a^2 = ρ/λ1, b^2 = ρ/λ2, and c^2 = -ρ/λ3, where ρ is an arbitrary positive constant.
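The standardization step can be sketched with an eigendecomposition of Q. The coefficients below are the hypothetical values of the earlier example (the ellipse u^2 + 4v^2 - 1 = 0 with f0 = 16); the ordering convention (|λ1| > |λ2|, with λ3 the odd-signed eigenvalue) follows the text above.

```python
import numpy as np

A, B, C, D, E, F = 256.0, 1024.0, 0.0, 0.0, 0.0, -1.0   # illustrative cone coefficients
Q = np.array([[A,   C/2, D/2],
              [C/2, B,   E/2],
              [D/2, E/2, F  ]])

lams, vecs = np.linalg.eigh(Q)                 # eigenvalues in ascending order
signs = list(np.sign(lams))
i3 = next(i for i in range(3) if signs.count(signs[i]) == 1)   # odd-signed eigenvalue -> lambda3
i1, i2 = sorted((i for i in range(3) if i != i3),
                key=lambda i: -abs(lams[i]))    # enforce |lambda1| > |lambda2|
l1, l2, l3 = lams[i1], lams[i2], lams[i3]
P = vecs[:, [i1, i2, i3]]                      # columns e1, e2, e3 of the rotation
if np.linalg.det(P) < 0:                       # keep P a proper rotation
    P[:, 2] = -P[:, 2]

# In the new frame the cone reads l1 x'^2 + l2 y'^2 + l3 z'^2 = 0
assert np.allclose(P.T @ Q @ P, np.diag([l1, l2, l3]), atol=1e-9)
```

Because Q is symmetric, `eigh` returns orthonormal eigenvectors, so P is orthogonal; flipping one column when det(P) < 0 makes it a proper rotation without affecting the diagonalization.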

Detection of Center Coordinate and Normal Vector of Space Circular Target
In the perspective projection cone of a circular target, two circles with radius R can be found as the intersection of a cross-section plane with the cone. When λ1 > λ2 > 0, a suitably tilted cross-section plane intersects the cone in a circle; translating the section plane along the z'-axis to unit distance gives an intersection circle with radius r (Equation (5)). If the section plane is translated along the positive direction of the z'-axis by a distance S, the intersection circle has radius R, so S = R/r, as shown in Figure 2. In the x'-z' plane, the cone is projected into the two lines λ1x'^2 + λ3z'^2 = 0, i.e., x' = ±sqrt(-λ3/λ1)·z' (Equation (6)). According to Equations (5) and (6), the radius r can then be calculated. The center coordinate of the circle with radius r in Figure 2a follows from the section geometry; the inward unit normal vector of the circular target, perpendicular to the circular plane, is taken as negative. Similarly, the center coordinate of the circle with radius R in Figure 2b is obtained by scaling with S. The results converted to the original camera frame oxyz are I = P·I' and D = P·D', where

P = [e1 e2 e3] = [ e1x e2x e3x
                   e1y e2y e3y
                   e1z e2z e3z ].

According to the above equations, the center coordinate I and unit normal vector D of the space circular target are obtained.
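The final frame conversion can be sketched as follows; the rotation P and the primed-frame center and normal are illustrative values, not results from the paper.

```python
import numpy as np

# Converting a center I' and unit normal D' expressed in the frame ox'y'z'
# back to the camera frame oxyz with the rotation P = [e1 e2 e3].
# P here is an assumed rotation about the z-axis; I', D' are assumed values.
theta = np.deg2rad(30.0)
P = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
I_prime = np.array([2.0, 0.0, 50.0])      # center in ox'y'z' (hypothetical)
D_prime = np.array([0.0, 0.0, -1.0])      # unit normal in ox'y'z' (hypothetical)

I = P @ I_prime                            # center in the camera frame oxyz
D = P @ D_prime                            # normal in the camera frame oxyz
assert abs(np.linalg.norm(D) - 1.0) < 1e-12   # a rotation preserves unit length
```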

Mathematical Model of Space Circular Target Detection
A space circular target projects to an ellipse in the camera image [21]. In the proposed method, the edge of the elliptical target on the image plane is detected first. Based on the edge points of the image ellipse, a space circular cone on which the edge of the space circular target is located is created. A nonlinear equation system can then be built to solve for the center and normal vector of the space circular target by optimization according to the space geometry relations. The estimated solutions of the space circular target are then obtainable. Table 1 summarizes the notations used in this section. In Figure 3, Π is a space circular target plane and O is the optical center of the camera lens. Light emitted from the edge points of the image ellipse is back-projected into 3D space through O. The mapped points together form several candidate space circular target planes. The radius R of the space circular target is known, so real space edge points can be fitted and the relation between the points can be expressed by a nonlinear equation system. The parameters of the space circular target can be roughly estimated after solving the equations.
There are numerous ellipse edge points on the image plane of the camera. In an actual image processing scenario, the elliptical contour on the camera imaging plane is sampled at equal angles, and N image points of the elliptical edge are obtained. The set of image points C_img consisting of these sample points can accordingly be expressed as C_img = (c_1img, c_2img, ..., c_kimg, ..., c_Nimg), where k = 1, 2, ..., N. The light from each sample point c_kimg reaches a point p_k on the space circular target plane Π through the optical center O of the camera lens, so the points p_k, O, c_kimg are collinear. p_k can be expressed by the geometric relation

Op_k = t_k · (Oc_kimg / |Oc_kimg|), (11)

where t_k is a scale factor that represents the length of the vector Op_k.
Because p_k is an edge point of the space circular target and the radius R of the space circular target is known, the relationship is expressed as follows:

|Pp_k| = R, k = 1, 2, ..., N. (12)

Additionally, the N direction vectors Pp_k formed by the edge points p_k and the center P of the space circular target lie in the space circular target plane Π. Thus,

n · Pp_k = 0, k = 1, 2, ..., N. (13)

There are six unknown parameters in the nonlinear equation system formed from Equations (12) and (13), which may reveal the center P and normal vector n of the space circular target. An optimization algorithm can be applied to extract the space circular target.
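The constraints of Equations (12) and (13) can be checked on synthetic data; the center, normal, and sampling below are hypothetical values chosen for illustration, with the radius taken from the later simulation section.

```python
import numpy as np

# A space circle with known center P_true, unit normal n_true, and radius R is
# sampled; the true parameters must zero the residuals of Eq. (12) |p_k - P| = R
# and Eq. (13) n . (p_k - P) = 0 at every edge point p_k.
R = 6.5726701
P_true = np.array([10.0, -5.0, 80.0])
n_true = np.array([0.2, 0.3, -1.0]); n_true /= np.linalg.norm(n_true)

# two unit vectors spanning the circle's plane
u = np.cross(n_true, [0.0, 0.0, 1.0]); u /= np.linalg.norm(u)
v = np.cross(n_true, u)

angles = np.linspace(0.0, 2*np.pi, 16, endpoint=False)
pts = P_true + R*np.outer(np.cos(angles), u) + R*np.outer(np.sin(angles), v)

res_radius = np.linalg.norm(pts - P_true, axis=1) - R          # Eq. (12)
res_plane = (pts - P_true) @ n_true                            # Eq. (13)
assert np.allclose(res_radius, 0.0, atol=1e-9)
assert np.allclose(res_plane, 0.0, atol=1e-9)
```

An optimizer can use exactly these residuals: any candidate (P, n) that fails to zero them is penalized.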

Space Circular Target Parameter Detection Algorithm
Two parameters of the circular target, the center and the normal vector, can be found according to the known radius R of the space circular target. First, from the elliptical edge coordinates on the image, a series of coordinate values is calculated in the system (camera) frame. The direction vectors i_k of the lines between these points and the optical center of the camera are then obtained through the mapping relationship.
As shown in Figure 4, O is the optical center of the camera, point C img is the center of the image ellipse, point P is the center of the space circular target whose normal vector is n and radius is R, and the direction vector of the line between an image elliptical edge point and O is expressed as i k .
If any edge point of the image ellipse is denoted c_kimg, then for each c_kimg the direction vector of the incident light is

i_k = Oc_kimg / |Oc_kimg|. (14)

Each elliptical edge point c_kimg on the image thus corresponds to the direction vector i_k of an incident light ray. The line between any two points on the space circular target plane is perpendicular to the normal of the plane, so the following equality is satisfied:

n · Pp_k = 0. (16)

In Figure 4, the three direction vectors OP, Op_k, Pp_k formed by the points O, P, p_k satisfy

OP + Pp_k = Op_k. (17)

Combining Equations (16) and (17) gives n · (Op_k - OP) = 0 (18), and the simplified result is

n · Op_k = n · OP. (19)

Figure 4 shows that Op_k = t_k·i_k, where t_k is the distance between the points O and p_k. Substituting t_k·i_k into Equation (19) provides the expression of the scale factor:

t_k = (n · OP) / (n · i_k). (20)

According to the above relations, every edge point p_k of the space circular target, together with the optical center O of the camera and the direction vector i_k of the incident light, satisfies a distance constraint with the known radius R (Equation (22)), while the space circular target plane consisting of the N edge points satisfies a coplanarity relationship (Equation (23)). In the nonlinear equation system formed from the 2N - 2 elements of Equations (22) and (23), the parameters to be solved are the center P = [p_x, p_y, p_z] and the normal vector n = [n_x, n_y, n_z] of the space circular target, i.e., six unknown variables; the calculation results are valid when 2N - 2 ≥ 6, i.e., N ≥ 4. If N = 4, there is a unique set of solutions. When N > 4, the nonlinear equation system is overdetermined.
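Equation (20) can be verified numerically; the center and normal below are hypothetical values, with the camera optical center O at the origin as in the text.

```python
import numpy as np

# With O at the camera origin, Eq. (20) gives the scale factor
# t_k = (n . OP) / (n . i_k), so that p_k = t_k i_k lies on the target plane.
P = np.array([10.0, -5.0, 80.0])               # hypothetical circle center
n = np.array([0.2, 0.3, -1.0]); n /= np.linalg.norm(n)

u = np.cross(n, [0.0, 0.0, 1.0]); u /= np.linalg.norm(u)
p_k = P + 6.5726701 * u                        # one edge point in the plane through P with normal n
i_k = p_k / np.linalg.norm(p_k)                # Eq. (14): direction of the incident light

t_k = (n @ P) / (n @ i_k)                      # Eq. (20)
assert np.allclose(t_k * i_k, p_k, atol=1e-9)  # p_k = t_k i_k, as required
```

The relation holds because n · p_k = n · P for any point of the plane, so t_k reduces to the distance |Op_k|.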
The data are mixed with model error or measurement noise, so it is generally impossible to determine a set of unknown parameter solutions that uniquely satisfies all 2N - 2 nonlinear elements of Equations (22) and (23). The least-squares method can be applied to remedy this.

Assume that the error vector is E = [e_1, e_2, ..., e_k, ..., e_N, ..., e_2N-1]^T, whose first N items correspond to the distance constraints of Equation (22) and whose items N + 1 to 2N - 1 correspond to the coplanarity constraints of Equation (23). The objective function is

J = E^T·E = Σ e_i^2. (25)

Minimizing the squared sum J of the error vector realizes the best estimation of the center P and normal vector n of the space circular target. The measured parameter values of the space circular target are ultimately obtained. The parameter calculation algorithm (Algorithm 1) is shown below.
Algorithm 1: Detection of the center P and normal vector n of the space circular target
Input: radius R of the space circular target; optical center O of the camera; the m edge points c_kimg of the image ellipse; given error precision eps; initial center P_0 of the space circular target; random initial value n_0 of the unit normal vector
Output: the optimal solution of the center P and unit normal vector n of the space circular target
Procedure: starting from (P_0, n_0), iteratively update (P, n) to reduce the objective function J of Equation (25); stop when the decrease of J falls below eps, then return P, n
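A minimal sketch of Algorithm 1 follows, assuming SciPy's least-squares solver as a stand-in for the paper's optimizer. The residual layout (radius constraints plus a unit-norm penalty) is one reasonable reading of Equations (22), (23), and (25), not necessarily the paper's exact arrangement; the synthetic circle parameters are invented for the check.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_space_circle(i_dirs, R, P0, n0):
    """Sketch of Algorithm 1: estimate the center P and unit normal n of a space
    circle of known radius R from the unit direction vectors i_k of the
    back-projected edge rays (camera optical center O = 0)."""
    def residuals(x):
        P, n = x[:3], x[3:]
        t = (n @ P) / (i_dirs @ n)       # Eq. (20); Eq. (13) then holds by construction
        pts = t[:, None] * i_dirs        # back-projected edge points p_k
        return np.concatenate([
            np.linalg.norm(pts - P, axis=1) - R,   # Eq. (12): |P p_k| = R
            [n @ n - 1.0],                         # keep n close to unit length
        ])
    sol = least_squares(residuals, np.concatenate([P0, n0]))
    P_est, n_est = sol.x[:3], sol.x[3:]
    return P_est, n_est / np.linalg.norm(n_est)

# Synthetic check: sample a known circle, back-project, and recover it.
R = 6.5726701
P_true = np.array([10.0, -5.0, 80.0])
n_true = np.array([0.2, 0.3, -1.0]); n_true /= np.linalg.norm(n_true)
u = np.cross(n_true, [0.0, 0.0, 1.0]); u /= np.linalg.norm(u)
v = np.cross(n_true, u)
th = np.linspace(0, 2*np.pi, 16, endpoint=False)
pts = P_true + R*np.outer(np.cos(th), u) + R*np.outer(np.sin(th), v)
i_dirs = pts / np.linalg.norm(pts, axis=1, keepdims=True)

# perturbed initial guesses, mimicking P_0 and the random n_0 of Algorithm 1
P_est, n_est = fit_space_circle(i_dirs, R, P_true + 2.0, n_true + 0.05)
```

Note the two-fold ambiguity discussed later in the paper: a second plane also cuts the ray cone in a radius-R circle, so the initial guess selects which of the two solutions the local optimizer converges to.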

Simulation Experiment
As discussed in this section, the proposed method was simulated to evaluate its feasibility and performance.

Simulation Environment
The simulation experiment was conducted using the 3D modeling software Rhinoceros 6.0 and Matlab 2016a. Rhinoceros is a powerful 3D modeling package, and Matlab is mathematical software widely used for data processing and analysis. The simulation system was first built in Rhinoceros 6.0 according to the known model parameters. As shown in Figure 5, a frame was established with the camera imaging center as the origin, whose coordinate is O = [0 0 0]. The x-axis and y-axis were created in the plane passing through the origin and parallel to the image plane, with the z-axis perpendicular to the image plane. This produced a space rectangular frame oxyz. The focal length of the camera was 16 mm and the radius of the space circular target was set to 6.5726701 mm. Uniformly spaced sampling was adopted to select 16 edge points of the space circular target for the simulation. All edge points were projected onto the imaging plane through the optical center of the camera to obtain 16 elliptical edge points, and the image ellipse was then obtained by fitting them.
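The simulation setup described above (16 uniformly sampled edge points, f0 = 16 mm, R = 6.5726701 mm, pin-hole projection, conic fitting) can be sketched as follows; the circle's center and normal are illustrative, since the paper's exact pose values are not reproduced here.

```python
import numpy as np

f0, R = 16.0, 6.5726701                         # focal length (mm), target radius (mm)
P = np.array([5.0, 8.0, 120.0])                 # hypothetical circle center
n = np.array([0.3, -0.2, -1.0]); n /= np.linalg.norm(n)
u_ax = np.cross(n, [0.0, 0.0, 1.0]); u_ax /= np.linalg.norm(u_ax)
v_ax = np.cross(n, u_ax)

# 16 uniformly spaced edge points of the space circle
th = np.linspace(0, 2*np.pi, 16, endpoint=False)
pts3d = P + R*np.outer(np.cos(th), u_ax) + R*np.outer(np.sin(th), v_ax)

uv = f0 * pts3d[:, :2] / pts3d[:, 2:3]          # pin-hole projection onto z = f0

# least-squares conic fit a u^2 + b v^2 + c uv + d u + e v + f = 0:
# the coefficient vector is the null vector of the design matrix M
M = np.column_stack([uv[:, 0]**2, uv[:, 1]**2, uv[:, 0]*uv[:, 1],
                     uv[:, 0], uv[:, 1], np.ones(len(uv))])
coeffs = np.linalg.svd(M)[2][-1]                # (a, b, c, d, e, f)
assert np.allclose(M @ coeffs, 0.0, atol=1e-8)  # projections lie exactly on a conic
```

Because the perspective image of a circle is an exact conic, the smallest singular value of M is numerically zero and the fit residuals vanish up to rounding.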

Algorithm Feasibility
In general, a space circular target appears as an ellipse on the imaging plane of a camera; the connection between edge points of the image ellipse and optical center of the camera forms an elliptic cone. If a plane intersects the elliptic cone and the intercepted image is a circle with radius R, then the plane may fall into one of two positions. There are thus two solutions of the center and normal vector that may be obtained through imaging of the space circle.
The coordinates of the image elliptical edge points and the radius R of the space circular target, taken as input, were substituted in this simulation into the detection procedure for the space circular target parameters. For the given radius R, the edge points of the image ellipse were mapped through the optical center O of the camera, and two space circular planes with different directions and positions were obtained. The group of estimated values closest to the real values was selected manually, and the other group was discarded automatically according to the known parameters in the simulation model. The resulting data are listed in Table 2.

The proposed algorithm was tested based on this input to obtain the estimated values of the center and normal vector of the space circular target (Table 2). The simulated and real values are in accordance, which indicates that the proposed method is feasible.

Anti-Noise Performance
The accuracy of the center and normal vector estimation of the space circular target is determined by a single factor: the image elliptical edge points (Section 3.2). To explore the influence of this factor on the parameters of the space circular target, Gaussian white noise with signal-to-noise ratios (SNRs) ranging from 100 to 40, at an interval of 10, was added to the edge points of the image ellipse. Twenty measurement results were selected and averaged under each noise condition. The resulting data are shown in Table 3. For the same elliptical edge pixels after adding Gaussian white noise, the space circular target parameters were calculated using the proposed method and the traditional method, respectively. The Euclidean distance and the angle between vectors were used to represent the errors between the real and estimated values of the target center and normal vector, respectively. The parameters were measured 20 times under each noise condition to obtain the measurement error values listed in Tables 4 and 5 (due to space reasons, only 10 sets of data are shown here).
Figure 6 shows the variation trends of the center and normal vector of the space circular target under different values of marginal noise. When the degree of noise is small (and the SNR is large), there was almost no interference with the target center and normal estimations; the fluctuation of the polyline tended to be stable. As the noise intensity increased, the influence of the image elliptical edge points on the measured parameters increased gradually and the error value distribution grew irregular.
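Adding Gaussian white noise at a prescribed SNR can be sketched as below; the paper does not state the SNR unit, so decibels are assumed here, as is conventional.

```python
import numpy as np

def add_noise_at_snr(points, snr_db, rng):
    """Add zero-mean Gaussian noise whose power is signal_power / 10^(snr_db/10)."""
    signal_power = np.mean(points**2)
    noise_std = np.sqrt(signal_power / 10**(snr_db / 10))
    return points + rng.normal(0.0, noise_std, points.shape)

rng = np.random.default_rng(0)
pts = rng.normal(size=(2000, 2))                 # stand-in edge-point coordinates
noisy = add_noise_at_snr(pts, 40.0, rng)

# empirical SNR of the perturbation, for verification
measured_snr = 10*np.log10(np.mean(pts**2) / np.mean((noisy - pts)**2))
assert abs(measured_snr - 40.0) < 1.0            # close to the requested SNR
```

Sweeping `snr_db` from 100 down to 40 in steps of 10, as in the experiment above, then reproduces the noise conditions of Tables 3-5.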
The discreteness of the measured parameters of the space circular target, as well as the stability and concentration of the data, were assessed next. The average of 20 error values was calculated for each scenario to test the respective abilities of the two methods to solve the space circular target parameters.
The "solution effect" of the two methods is reflected in the parameter measurement errors under a consistent SNR distribution, as shown above. The estimation errors of the objective parameters are inversely proportional to the SNR. The robustness of the space circular target parameter extraction was evaluated next to further compare the proposed and traditional methods; the error calculations of the two methods were placed on the same graph for an intuitive comparison.
Independent Gaussian white noise was added to the elliptical edge as position deviation of the image elliptical edge points. As the SNR was continually adjusted, the corresponding center and normal vector of the space circular target were calculated. The average errors of the two methods for measuring these parameters are listed in Table 6; comparative graphs of their respective mean errors are shown in Figures 7 and 8. The red polyline marks the proposed algorithm and the blue polyline represents the traditional algorithm.
As shown in Figures 7 and 8, the proposed algorithm has stronger anti-interference ability than the traditional algorithm. When the SNR is less than or equal to 60, the rising trend of the red polyline is gentler and its ascending height is significantly lower than that of the blue polyline. The traditional method produced maximum errors of the center and normal vector of the space circular target of about 37 mm and 24°, respectively; the maximum errors of the proposed algorithm were 23 mm and 19°, respectively.

Proposed Algorithm Application Example
The proposed method was used to reconstruct the optical axis of a human eyeball for 3D gaze estimation according to the center and normal vector of the iris. In terms of the structure of the human eye, the iris can be regarded as a space circular target, and the optical axis of the eye can be represented by the normal vector of the iris. The better the optical axis is reconstructed, the more accurately the final point-of-regard can be estimated. The results were compared against those of a previously published optical axis and 3D gaze estimation method [28].

3D Gaze Estimation Based on Center and Normal Vector of Iris
3D gaze estimation methods work based on a geometric model and an imaging model of the eyeball [28]. Figure 9 shows a human eyeball model with a radius of about 12 mm. The sclera, iris, and pupil progress from the outer to the inner layer of the eyeball; the cornea covers the exterior of the iris. A ring in the center of the iris, the pupil, adjusts the amount of light entering the eye. The retina is located at the posterior wall of the eyeball. After entering the eyeball, light passes through a series of optical media, then reflects and refracts at various levels before reaching the retina.
A special area on the retina called the fovea contains a large number of photoreceptor cells. The symmetry axis of the eyeball is the optical axis; the fovea is not located precisely on it but instead forms a visual axis with the center of the cornea. The fixed angle between the optical axis and visual axis of the eyeball is called the kappa angle. 3D gaze estimation serves to estimate the direction of the visual axis. To align with this structure of the eyeball, a 3D gaze estimation method usually involves first reconstructing the optical axis of the eyeball and then converting it into the visual axis according to the specific kappa angle of the eye under analysis. Finally, points-of-regard are calculated according to the position of the screen.
The eyeball center E, cornea curvature center C, iris center I, and pupil center P are all located on the optical axis, which is always perpendicular to the plane where the margins of the pupil and iris are located. If the iris is regarded as a space circular target, the direction vector of the optical axis can be represented by the normal vector of the iris. Figure 10 shows various means of reconstructing the eyeball's optical axis using visual features in a 3D gaze estimation system. Given the space normal vector of the iris, if the coordinate of any one of E, C, I, and P is known, the optical axis can be reconstructed.
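The collinearity constraint described above (E, C, I, and P all lying on one line along the iris normal) can be written as a parametric line. The following is an illustrative sketch only, with hypothetical helper names, not the paper's implementation:

```python
import numpy as np

def axis_through(p0, direction):
    """Parametrize the optical axis L(t) = p0 + t*d through a known point p0
    (any one of E, C, I, P) with unit direction d given by the iris normal."""
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    return np.asarray(p0, dtype=float), d

def on_axis(point, p0, d, tol=1e-9):
    """A point lies on the axis iff its rejection from direction d vanishes."""
    v = np.asarray(point, dtype=float) - p0
    return np.linalg.norm(v - (v @ d) * d) < tol
```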
The center and normal vector of the iris were detected to reconstruct the optical axis of an eyeball in this experiment. This is a two-step process: (1) user calibration and (2) 3D gaze estimation. User calibration serves to calibrate the iris radius and kappa angle [28]; 3D gaze estimation then estimates the optical axis and visual axis of the eyeball and the points-of-regard on the screen. In the user calibration stage, the iris radius and kappa angle were calibrated using a previously published method [28]. The center and normal vector of the iris were then calculated by the proposed method to construct the optical axis of the eyeball. The visual axis of the eyeball was next constructed based on the optical axis and the calibrated kappa angle for visual direction detection. According to the position of the screen in the camera frame, determined in the system calibration step [29], the intersection points between the visual axis and the screen were calculated to determine the falling points of the user's gaze.
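The final step above, intersecting the visual axis with the screen, is a standard line-plane intersection. A minimal sketch, assuming system calibration expresses the screen as a plane n · x = c in the camera frame:

```python
import numpy as np

def intersect_line_plane(p0, d, n, c):
    """Intersect the line p0 + t*d (visual axis) with the plane n . x = c
    (screen); returns the point-of-regard, or None if the line is parallel
    to the screen plane."""
    p0, d, n = (np.asarray(v, dtype=float) for v in (p0, d, n))
    denom = n @ d
    if abs(denom) < 1e-12:
        return None  # visual axis parallel to the screen plane
    t = (c - n @ p0) / denom
    return p0 + t * d
```

For example, with a screen plane z = 500 mm in the camera frame, a ray from the cornea center along the visual axis yields the gaze point directly.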
A flowchart of the experimental procedures run on the actual system, which mainly included user calibration and 3D gaze estimation, is shown in Figure 11. The 3D line-of-sight estimation results obtained using the proposed method were compared against those obtained using the traditional calculation method.

System Establishment
As shown in Figure 12, a gaze-tracking system based on a remote single camera and single light source was adopted for the experiment. The gaze-tracking device is placed at the top of a computer screen, with the camera in the middle and the light source to its left. The focal length of the camera is 3.66 mm, the resolution is 640 × 480, and the pixel size is 2.2 µm. A source of visible light was chosen as the system illuminant in order to detect the iris margin more accurately. Before the user calibration and 3D gaze estimation experiments, camera calibration and system calibration were carried out for the above system. Camera calibration is used to obtain the internal parameters of the camera, including the lens focal length, principal point coordinate, etc. System calibration is used to determine the position of the light source and the equation of the screen in the camera frame.
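From the stated focal length and pixel size, the focal length in pixel units follows directly. The sketch below assumes square pixels and a principal point at the image center, both of which camera calibration would refine:

```python
import numpy as np

# Intrinsic matrix sketch from the stated system parameters (assumed values;
# the calibrated principal point will generally differ from the image center).
focal_mm = 3.66        # lens focal length, mm
pixel_um = 2.2         # pixel pitch, micrometers
width, height = 640, 480

f_px = focal_mm * 1000.0 / pixel_um   # focal length in pixels, ~1663.6
K = np.array([
    [f_px, 0.0,  width / 2.0],
    [0.0,  f_px, height / 2.0],
    [0.0,  0.0,  1.0],
])
```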

Eye Image Processing
It is necessary to detect eye images and extract visual features of the user's eyeballs in the user calibration and gaze estimation processes for sound optical axis reconstruction. This includes detection of the iris edge, long and short axes of the iris ellipse, and other relevant elements. Eye image processing provided a workable basis in this study for validating the proposed method on an actual gaze-tracking system.
As users gazed at the screen, the system camera captured images of their faces and eyes at a frequency of 30 Hz. The iris and the bright spots were detected in each frame of the face images through eye localization, iris location, iris segmentation and edge detection, iris fitting, bright spot location, and sub-pixel extraction of the spot center. The iris and Purkinje spot detection process is shown in Figure 13.

The iris edge was detected and subjected to ellipse fitting. The proposed algorithm uses the edge points of the iris to detect its parameters, whereas the traditional algorithm uses the iris contours obtained by fitting.
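Ellipse fitting to the detected iris edge points can be illustrated with a plain algebraic conic fit. This is a simple least-squares variant for illustration, not necessarily the fitting method used in the actual system:

```python
import numpy as np

def fit_conic(points):
    """Least-squares algebraic conic fit A x^2 + B xy + C y^2 + D x + E y + F = 0,
    taken as the smallest right singular vector of the design matrix."""
    x, y = points[:, 0], points[:, 1]
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    _, _, vt = np.linalg.svd(D)
    return vt[-1]  # conic coefficients, up to scale
```

For an ellipse the fitted coefficients satisfy B² − 4AC < 0, and the center can be recovered by solving the linear system from the conic's gradient.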

User Calibration Experiment
As mentioned above, general gaze estimation can be divided into two stages. The first is user calibration, in which structural parameters of the eyeball that differ between individuals are calibrated. The user-calibrated ocular parameters in this experiment were the iris radius and kappa angle. As shown in Figure 12, five calibration points were set on the screen; each user sat 350-600 mm in front of the computer screen and observed them sequentially. The system synchronously took images of the user's face as they gazed at each point. The face images were processed through the method described in Section 5.2.2 to extract the visual features of the eyeball, including the iris edge points and iris center. Based on the boundary points of the iris in the image, the method described in Section 3 was used to calculate the center and normal vector of the spatial iris and thus reconstruct the optical axis of the eyeball according to the calibrated iris radius.
The reconstructed optical axis of the eyeball was calibrated using two separate processes: (1) according to the iris radius calculated using the method of Ref. [29] and (2) according to the coordinates of the calibration points on the screen, with the visual axes of the eye calculated as the user gazed at the calibration points. The kappa angle between the optical axis and visual axis of the eyeball was then calculated. User calibration was performed on each subject, as recorded in Table 7. The normal range of the iris radius is 5-6.8 mm and the kappa angle is about 5°. The calibrated iris radii and kappa angles identified in this experiment are within these normal limits.
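The kappa angle computation from the calibrated axes reduces to the angle between two direction vectors; a minimal sketch:

```python
import numpy as np

def kappa_angle(optical_dir, visual_dir):
    """Angle (degrees) between the optical-axis and visual-axis directions,
    i.e., the per-user kappa angle."""
    a = np.asarray(optical_dir, dtype=float)
    a = a / np.linalg.norm(a)
    b = np.asarray(visual_dir, dtype=float)
    b = b / np.linalg.norm(b)
    # Clip guards against arccos domain errors from floating-point rounding.
    return np.degrees(np.arccos(np.clip(a @ b, -1.0, 1.0)))
```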

3D Gaze Estimation Experiment
An additional 3D gaze estimation experiment was performed after obtaining the users' specific invariant eye parameters through the calibration experiment. Nine test points were set on the screen, and the system camera again took images of the user's face as they gazed at each test point. The face images were processed by the method discussed in Section 5.2.2. The method presented in Section 3 was used to calculate the optical axis of the eyeball based on the boundary points of the iris. According to the fitted iris ellipse in the image, the traditional method of calculating the center and normal vector of the space circular target [28] was also used to reconstruct the optical axis of the eyeball. From the optical axes reconstructed by the two methods and the kappa angles obtained in the user calibration experiment, the visual axes and the points-of-regard on the screen were calculated. Table 8 shows the RMSE of the two methods' estimates of different subjects' points-of-regard in the x and y directions compared with the actual points on the screen. Based on the calibrated parameter of the eyeball structure (the iris radius), the proposed and traditional methods were separately used to calculate the center and normal vector of the space iris circle, construct the optical axis of the eyeball, and calculate points-of-regard. As shown in Table 8, the proposed method calculated gaze points in the x direction more accurately than the traditional method; its accuracy in the y direction, however, was slightly lower. This demonstrates the effectiveness and feasibility of the proposed method in calculating the center and normal vector of the iris.
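The per-axis RMSE reported in Table 8 can be computed as follows; a generic sketch, not tied to the actual experimental data:

```python
import numpy as np

def rmse_xy(estimated, actual):
    """Per-axis RMSE (x, y) between estimated points-of-regard and the true
    test-point coordinates on the screen."""
    err = np.asarray(estimated, dtype=float) - np.asarray(actual, dtype=float)
    return np.sqrt(np.mean(err ** 2, axis=0))
```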
The main reason for the lower accuracy of the point-of-regard coordinates in the y direction was an iris edge point sampling problem. In the experiment, it was necessary to extract several edge points from the detected boundary points of the iris to calculate the center and normal vector of the space iris circle. This extraction generally depends on uniform sampling; therefore, compared with the traditional method, some edge information of the image iris was inevitably lost. In other words, using only part of the edge points to calculate the center and normal vector of the space circular target is an incomplete process. Statistically speaking, the space circle detection process is more accurate when more uniform and abundant edge points are used, but this drives down the operation speed.

Conclusions
This work is oriented toward engineering applications. This paper proposed a new space circular target detection method which was designed to enhance the anti-interference ability of the traditional space circular target detection method. The image of a space circular target is an ellipse in the camera. Under the condition that radius R of the space circular target is known, a spatial cone is constructed via back-projection of the elliptical contour points. Based on the geometric relations satisfied by the target parameters, optimum fitting of the space circular target in 3D space can be realized. The center and normal vector of the space circular target can then be detected accordingly. This method is insensitive to image elliptic parameters, which gives it stronger noise resistance than the traditional method.
A simulation was conducted to evaluate the performance of the proposed algorithm. The proposed method was also tested in a 3D line-of-sight estimation experiment to determine its effectiveness and practical value. The proposed method was shown, in principle, to suppress noise interference in space circular target ellipse images more effectively than the traditional method. However, a comparison of the running times of the Matlab programs for the two methods showed that the proposed method's calculation speed is relatively slow due to its space fitting technology, which depends on the number of image ellipse edge points. In the future, we plan to improve the speed of our algorithm while fully preserving its anti-interference performance.

Informed Consent Statement:
Informed consent was obtained from all subjects involved in the study.

Data Availability Statement:
The data presented in this study are available on request from the corresponding author. The data are not publicly available because they have not been ordered and stored in a clear and manageable form.

Conflicts of Interest:
The authors declare no conflict of interest.