Radius and Orientation Measurement for Cylindrical Objects by a Light Section Sensor

In this paper, an efficient method based on a light section sensor is presented for measuring cylindrical objects’ radii and orientations in a robotic application. With this method, cylindrical objects can be measured under special conditions, such as when the cylindrical objects are welded to others, or in the presence of interference. Firstly, the measurement data are roughly identified and accurately screened to effectively recognize ellipses. Secondly, the data are smoothed and homogenized to eliminate the effect of laser line loss or jump and to reduce the influence of the inhomogeneity of the measurement data on the ellipse fitting to a minimum. Finally, the ellipse fitting is carried out to obtain the radii and orientations of the cylindrical objects. Measuring experiments and results demonstrate the effectiveness of the proposed radius and orientation measurement method for cylindrical objects.


Introduction
Cylindrical objects are widely used in industry. When cylindrical objects (such as pipes and thin rods) are processed, for example by one-side welding or single punching, warps and deformations often occur. In order to eliminate the effects of these warps and deformations, the radii and orientations of the cylindrical objects must be measured before processing.
There are many ways to measure the radius and orientation of cylindrical objects, which can be broadly classified into contact and non-contact measurements. Non-contact methods include electromagnetic induction, visual, and laser imaging measurement; among them, visual and laser measurement methods are the most widely used for cylindrical objects.
For the visual measurement of cylindrical objects, a 3D vision system using highlight patterns formed by specular reflection on the object surface has been proposed in [1]. Two light sources were employed at once to reduce the processing time, but the system can only measure one cylindrical object at a time. Wang et al. [2] presented a high-precision detection of cylindrical objects using one camera, which can only be used to measure the diameter of one small-size cylinder at a time. An efficient relative pose estimation method based on multi-microscopic vision was presented in [3] for the alignment of long cylindrical components in six DOF (degrees of freedom) of 3D space. In [4], an approach to object detection and pose estimation from a noisy RGB-D sensor (a kind of sensor that captures both color and depth information) was presented. It can also determine incomplete object poses, including those of symmetrical objects. Oleari et al. [5] presented the design and experimental evaluation of an embedded vision system for underwater object detection. The design focused on a low power budget and low cost, and hence inevitably low performance; the system proved thermally stable and capable of guaranteeing at least two hours of autonomous video acquisition.
In this paper, a new method for non-contact measurement of the radius and orientation of cylindrical objects is presented:
• This method can measure the radius and orientation in the presence of interference.
• This method can obtain the radii and orientations of several cylindrical objects in one measurement.
The rest of this paper is organized as follows: Section 2 gives an introduction of the light section sensor and the calculation of the radius and orientation. Section 3 details the measurement process and data processing method. Section 4 shows experiments and result analyses. Finally, this paper is concluded in Section 5.

Light Section Sensor
A light section sensor is a kind of laser sensor that works according to the triangulation principle. Using transmission optics, a laser beam is expanded to a line aimed at a measurement object, as shown in Figure 1. The laser lines reflected by the measurement object are detected by the receiver located behind the laser transmitter. The receiver, composed of the receiving optics and a CMOS chip, internally processes the reflected laser lines and transforms them into distance data, yielding the surface profile.
Measurement data are a series of discrete points, which have X and Y coordinates. Obviously, the light section sensor can only measure the 2D data in one operation. The 3D data can then be obtained from the process control or by calculation.
Sensors 2016, 16, 1981

Measurement Scheme
Suppose the cylindrical object under test is tilted on the horizontal plane and illuminated by a laser plane parallel to the horizontal plane, thus creating a section containing an ellipse arc, as shown in Figure 2a.
In the case of a tilted cylindrical object, because of its rotational symmetry it is not necessary to consider the angle of rotation around the axis. The orientation can therefore be determined by two linearly independent parameters: the angle φ between the cylindrical object axis and the OZ axis, and the angle θ between the projection of the major axis of the ellipse arc and the horizontal axis, as shown in Figure 2a.
The ellipse arc determined by the three points A, B, C lies on the laser plane of the sensor, which is parallel to the XOY plane. The point A (xo, yo, zo) is the center of the ellipse. The segments AB and AC are the major semi-axis and the minor semi-axis, with lengths a and b, respectively. AE is the axis of the cylindrical object. A', B' and C' are the projections of A, B, C on the XOY plane. The angle between the major axis direction and the OX direction is θ; that is, the angle between A'B' and the axis OX. The angle between AE and the axis OZ is φ. BF is one of the generatrices of the cylindrical object. The terms a, b and θ can be calculated by fitting the measurement data to an ellipse.

Radius Calculation
The value of AC is the actual radius of the cylindrical object; that is, the actual radius equals the minor semi-axis of the ellipse: r = b.
Proof. ∵ OZ ⊥ plane ABC, ∴ OZ ⊥ AC; ∵ AC ⊥ AB and AC ⊥ OZ, ∴ AC ⊥ plane ABO; ∵ AE ⊂ plane ABO, ∴ AE ⊥ AC, ∴ AC is one of the radii of the cylindrical object. □

Orientation Calculation
Because the cylindrical object is rotationally symmetric, the two linearly independent degrees of freedom θ and φ determine the orientation of the cylindrical object in space.
As shown in Figure 2a, θ is the angle between the X axis and the projection of the cylindrical object axis onto the XOY plane. As shown in Figure 2b, according to the trigonometric relations, φ can be obtained as the arccosine of the ratio EF/AB, which equals the ratio of the radius r (equal to the minor semi-axis b) to the major semi-axis a. That is, φ = cos⁻¹(b/a).
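As a quick illustration of these relations, the radius and tilt follow directly from the fitted semi-axes (the function name below is ours, not the paper's):

```python
import math

def radius_and_orientation(a, b):
    """Given the fitted major and minor semi-axes a and b (a >= b),
    return the cylinder radius r = b and the tilt phi = arccos(b/a)
    in degrees."""
    r = b
    phi = math.degrees(math.acos(b / a))
    return r, phi
```

For example, a = 2b gives φ = 60°, since cos 60° = 1/2; an untilted cylinder (a = b) gives φ = 0°.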

Fitting Method
The ellipse arc is parallel to the horizontal plane, so the equation for the ellipse arc can be simplified as:

[(xi − xo)cosθ + (yi − yo)sinθ]²/a² + [(yi − yo)cosθ − (xi − xo)sinθ]²/b² = 1, (1)

where the (xi, yi) are the boundary points. The measured points are a series of discrete points, shown in Figure 3. In order to identify the ellipse, we adopted a least squares fitting method based on the absolute Euclidean distance.
The point (xi, yi) is on the ellipse, and the corresponding measurement point is (xj, yj). The distance between them can be expressed by:

rj = √[(xj − xi)² + (yj − yi)²]. (2)

Since the point (xi, yi) is on the ellipse, it satisfies Equation (1). Because the points (xj, yj), (xi, yi) and (xo, yo) are on the same line, we have:

(yj − yo)(xi − xo) = (yi − yo)(xj − xo). (3)

Taking Equations (1) and (3) into Equation (2) and eliminating the part of Equation (2) that contains (xi, yi), rj can be rewritten as:

rj = |√[(xj − xo)² + (yj − yo)²] − ab/√(a²sin²βj + b²cos²βj)|, βj = arctan[(yj − yo)/(xj − xo)] − θ. (4)

In order to get the fitting ellipse, we employ the least squares method:

J = Σj rj². (5)

The terms a and b are initialized with the radius of the cylindrical object, and the initial value of θ is set to zero. The point with the maximum y value is recorded as (xymax, yymax); xymax is assigned to the initial value of xo, and yymax minus the cylindrical object's radius is assigned to the initial value of yo. To obtain the minimum value of J, Equation (5) is minimized iteratively; once the minimum of J is reached, the corresponding (xo, yo), a, b and θ are obtained.
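The fit can be sketched as follows. The residual implements the center-ray distance of Equation (4) and the objective of Equation (5); using scipy.optimize.least_squares as the iterative minimizer is our assumption, since the paper does not name its solver.

```python
import numpy as np
from scipy.optimize import least_squares

def signed_rj(params, pts):
    """Signed r_j: distance of each measured point from the ellipse,
    measured along the ray through the center (xo, yo); |signed_rj| is
    the r_j of Equation (4)."""
    xo, yo, a, b, theta = params
    dx, dy = pts[:, 0] - xo, pts[:, 1] - yo
    beta = np.arctan2(dy, dx) - theta            # ray angle in the ellipse frame
    rho = a * b / np.sqrt((b * np.cos(beta)) ** 2 + (a * np.sin(beta)) ** 2)
    return np.hypot(dx, dy) - rho

def fit_ellipse(pts, radius_guess):
    """Minimize J = sum r_j^2 (Equation (5)). Initialization follows the
    paper: a = b = radius_guess, theta = 0, and the center one radius
    below the point with the maximum y value."""
    top = pts[np.argmax(pts[:, 1])]
    x0 = [top[0], top[1] - radius_guess, radius_guess, radius_guess, 0.0]
    return least_squares(signed_rj, x0, args=(pts,)).x   # xo, yo, a, b, theta
```

Feeding in noise-free points sampled from a known ellipse recovers its center and semi-axes to numerical precision.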

Measurement Process
The measurement of the radius and orientation of the cylindrical object includes ellipse recognition, smoothing and homogenizing, and ellipse fitting. The whole measurement process is summarized in Figure 4. First of all, the initial poses of the light section sensor and the cylindrical object are manually adjusted into place. The ellipse recognition then extracts the measurement points belonging to the ellipse arcs, the smoothing and homogenizing step conditions those points, and the ellipse fitting fits them onto an ellipse to get the elliptic parameters and calculate the radius and orientation.


Ellipse Recognition
Ellipse recognition includes rough identification and accurate screening. In a practical environment, the measurement objects are not only cylindrical but also of other shapes, such as pipes welded side by side on a flat bar. Because the least squares method minimizes the quadratic sum of the errors over the global data, the points in the transition part between the ellipse contour and other contours have a great influence on the overall fitting result. In order to accurately fit the ellipse and precisely calculate the radius and orientation, it is necessary to exclude the non-ellipse arc points as much as possible.
The rough identification judges whether a point belongs to an ellipse, as shown in Figure 5. If a point is relatively close to the ellipse but does not actually belong to it, the rough identification is no longer effective. For example, for cylindrical objects welded together, the weld seams usually have circular contours, so their measurement points cannot be accurately identified by rough identification alone. Accurate screening removes these points.


Rough Identification
There may be one or more cylindrical objects to be measured at a time. The rough identification groups the points by comparing rj with the sensor error to determine whether those points can form an ellipse arc, and removes the points that obviously do not belong to an ellipse arc. The effect is shown in Figure 5. The rough identification flow is shown in Figure 6.
The term n is the number of measurement points. The term num is the number of points to be fitted each time, which is about 10 percent of n. The term group records the number of ellipse arcs that have been identified, while flag is a mark indicating whether the current point belongs to the start part or the end part. The arrays start[] and end[] store the start point and end point of every ellipse arc. The key to rough identification is fitting an ellipse using the points from i to i + num − 1 in one pass, looping in turn from i = 0 to i + num − 1 = n − 1. If all rj (j = i, …, i + num − 1) are less than the sensor error and flag = 0, point i is the start point of an ellipse arc. If one of the rj (j = i, …, i + num − 1) is larger than the sensor error and flag = 1, point i + num − 2 is the end point of that ellipse arc. The first few points after a start point are the start part, and the last few points before an end point are the end part. Once all ellipse arcs in the measurement range have been obtained, the rough identification is finished. Figure 7 illustrates an example of the rough identification process.
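The flag logic above can be sketched as below. For brevity, the per-window residuals rj are supplied by a caller-provided residual_fn; in the paper these come from fitting each window of num points to an ellipse, so residual_fn and the threshold are placeholders.

```python
def rough_identify(points, num, tol, residual_fn):
    """Slide a window of num points over the profile. A window whose
    residuals r_j are all below the sensor error tol opens an arc
    (start part); the first window containing a bad point closes it
    (end part), mirroring the flag logic of Figure 6."""
    n = len(points)
    start, end = [], []
    flag = 0                                    # 1 while inside an ellipse arc
    for i in range(n - num + 1):
        r = residual_fn(points[i:i + num])
        if flag == 0 and all(rj < tol for rj in r):
            start.append(i)                     # point i starts an ellipse arc
            flag = 1
        elif flag == 1 and any(rj >= tol for rj in r):
            end.append(i + num - 2)             # last point before the bad one
            flag = 0
    if flag == 1:
        end.append(n - 1)                       # arc runs to the final point
    return list(zip(start, end))
```

With a profile of 20 in-tolerance points, 10 outliers, then 20 in-tolerance points, this returns the two groups (0, 19) and (30, 49).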

Accurate Screening
After the rough identification is finished, some points may still not belong to the ellipse arc; these lie in the start part and the end part. The accurate screening obtains an approximate ellipse by fitting the points of the middle part, determines which points of the start and end parts are on the ellipse, and accurately identifies the points on the ellipse, as shown in Figure 8. Using this elliptic curve, the rj of the points in the start and end parts are calculated; if rj is larger than the sensor error, the point is discarded. As shown in Figure 8, the point in the start part and the first point in the end part are kept, while the second point in the end part is discarded. The flow of the accurate screening is shown in Figure 9.
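Once the middle part has been fitted, the screening itself reduces to a distance filter. In this sketch the fitted ellipse is represented abstractly by a residual function giving rj for a single point; both names are illustrative.

```python
def accurate_screening(start_pts, end_pts, residual_fn, tol):
    """Keep only those start-part and end-part points whose distance r_j
    to the ellipse fitted from the middle part is within the sensor
    error tol; all other points are discarded."""
    return [p for p in list(start_pts) + list(end_pts) if residual_fn(p) < tol]
```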


Smoothing and Homogenizing
During actual measurement, there are large errors caused by laser line losses or jumps, which must be excluded from the measurement data before fitting, as shown in Figure 10. Because the ellipse recognition processes the points in batches and the least squares method seeks the optimal solution over all fitted points, the ellipse recognition above cannot effectively identify single lost or jumped laser line points. Although the rj of lost laser lines and jump points within the ellipse arcs are usually larger than the sensor error, their influence cannot be eliminated by the ellipse recognition process. For this reason, smoothing and homogenization are proposed.

The smoothing and homogenization exclude the large-error points and eliminate the influence of variable resolution. The length of the laser line in the X direction changes with the distance in the Y direction, so the resolution in the X direction decreases as the distance in the Y direction increases. According to the triangulation measurement principle, the resolution in the Y direction also decreases with increasing distance. Because the fitting uses the least squares method, the uniformity of the points affects the fitting result, and it is necessary to normalize the transverse spacing so that this influence is reduced to a minimum.

Smoothing
Following the double-smoothing local linear regression method [20], we set the filter window width to 30% of the number of ellipse arc points. As described in that reference, the filter window is slid over the points in turn, and the new contour points are obtained as the optimal solutions of a quadratic polynomial curve fitted within the filter window.
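A single pass of such a local polynomial smoother can be sketched as follows; note this is a simplified stand-in for the double-smoothing method of [20], which applies a second smoothing pass, and the 30% window fraction is the only parameter taken from the text.

```python
import numpy as np

def local_quadratic_smooth(x, y, frac=0.3):
    """Replace each contour point by the value of a quadratic fitted over
    a sliding window covering frac of the points."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    w = max(3, int(round(frac * n)))            # window width in points
    y_smooth = np.empty(n)
    for i in range(n):
        lo = max(0, min(i - w // 2, n - w))     # clamp window inside the data
        coeffs = np.polyfit(x[lo:lo + w], y[lo:lo + w], 2)
        y_smooth[i] = np.polyval(coeffs, x[i])
    return y_smooth
```

On noise-free data that is itself quadratic, the smoother reproduces the input exactly, which is a convenient sanity check.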


Homogenizing
Because the highest order of the elliptic curve equation is 2, the data are processed by cubic spline interpolation in order to avoid distortion. Assuming that the number of contour points after the smoothing processing is n + 1, the points are recorded as (x0, y0), …, (xn, yn).
The piecewise cubic spline curve is as follows:

Si(x) = ai + bi(x − xi) + ci(x − xi)² + di(x − xi)³, x ∈ [xi, xi+1], (6)

where i = 0, 1, 2, …, n − 1 and ai, bi, ci, di are the parameters of the cubic spline curve. The boundary conditions (taking the natural spline) are:

S0″(x0) = 0, Sn−1″(xn) = 0. (7)

The continuity conditions are:

Si(xi+1) = Si+1(xi+1), Si′(xi+1) = Si+1′(xi+1), Si″(xi+1) = Si+1″(xi+1), i = 0, 1, …, n − 2. (8)

From Equations (6)–(8), with hi = xi+1 − xi, we have:

hi−1 ci−1 + 2(hi−1 + hi) ci + hi ci+1 = 3[(yi+1 − yi)/hi − (yi − yi−1)/hi−1], i = 1, …, n − 1. (9)

According to the iterative Equation (9), with the x coordinates of the contour points after the smoothing processing, we can obtain the four parameters ai, bi, ci, di and hence the formula Si(x). After the step interval is set (for example, 0.5 mm), the coordinates xnew predetermined by the step interval between x0 and xn are obtained. Taking each xnew into the corresponding piecewise polynomial curve yields Si(xnew), and the corresponding coordinate ynew is obtained by the polynomial interpolation. Once the homogenizing processing of the measurement data is completed, the influence of variable resolution is eliminated.
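The homogenizing step can be sketched as follows, with the natural boundary conditions assumed for Equation (7) (the paper's exact boundary choice is not recoverable from the text) and the tridiagonal system playing the role of Equation (9); the 0.5 mm step matches the example above.

```python
import numpy as np

def natural_cubic_spline(x, y):
    """Coefficients of S_i(x) = ai + bi*t + ci*t^2 + di*t^3, t = x - xi,
    with natural boundary conditions S''(x0) = S''(xn) = 0."""
    n = len(x) - 1
    h = np.diff(x)
    A = np.zeros((n + 1, n + 1))                # tridiagonal system for the ci
    rhs = np.zeros(n + 1)
    A[0, 0] = A[n, n] = 1.0                     # natural boundary: c0 = cn = 0
    for i in range(1, n):
        A[i, i - 1], A[i, i], A[i, i + 1] = h[i - 1], 2.0 * (h[i - 1] + h[i]), h[i]
        rhs[i] = 3.0 * ((y[i + 1] - y[i]) / h[i] - (y[i] - y[i - 1]) / h[i - 1])
    c = np.linalg.solve(A, rhs)
    a = y[:-1]
    b = np.diff(y) / h - h * (2.0 * c[:-1] + c[1:]) / 3.0
    d = np.diff(c) / (3.0 * h)
    return a, b, c[:-1], d

def homogenize(x, y, step=0.5):
    """Resample the smoothed contour at a fixed step in x so that the
    points are evenly spaced, removing the variable-resolution effect."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    a, b, c, d = natural_cubic_spline(x, y)
    x_new = np.arange(x[0], x[-1] + 1e-9, step)
    i = np.clip(np.searchsorted(x, x_new, side="right") - 1, 0, len(x) - 2)
    t = x_new - x[i]
    return x_new, a[i] + b[i] * t + c[i] * t ** 2 + d[i] * t ** 3
```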

Experiments and Results
In this section, we describe a series of experiments conducted to verify the correctness of the method, the validity of the recognition algorithm, and the robustness of the recognition algorithm. The light section sensor is an LPS36HI unit of Leuze Electronic (In der Braike 1, Owen, Germany). The resolution in the measurement plane is about 0.6 mm × 0.9 mm. The measurements are performed indoors under fluorescent lighting. The algorithm is written in Matlab. We employ four cylindrical objects of different radii placed at different tilt angles to test the measurement accuracy, as shown in Figure 11. Cylindrical objects welded side by side are used to verify the recognition algorithm, as shown in Figure 12. Cylindrical objects in the presence of several kinds of interference are tested to show the robustness of the recognition algorithm, as shown in Figure 13.

Single Cylindrical Object
The measurement results for single cylindrical objects with radii of 15 mm, 16 mm, 18.5 mm, and 29 mm, and angles θ and φ of 10°, 20°, and 30°, are shown in Tables 1-3. The experimental results show that the radius measurement errors are within 0.4 mm. Considering that the resolution of the sensor is 0.6 mm × 0.9 mm, this measurement accuracy is acceptable.
The θ value is obtained by fitting the ellipse directly. For the different θ values, the measurement error is randomly distributed in the 0°~0.2° range. For the different φ values, the maximum error at 30° is −0.1253°, at 20° is −0.1626°, and at 10° is −0.2140°; within each group, the errors are close to these maximum values. This is because φ is obtained from a cosine function, and the cosine function is insensitive to small angle changes near zero, so the angle error grows as φ decreases.
The measurement accuracy of the proposed method meets production requirements. Improving the accuracy of the angle φ near the small end of its range would require a sensor with higher resolution.
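The asymmetry of the φ errors follows directly from recovering the angle through arccos: since d(arccos u)/du = −1/√(1 − u²), a fixed perturbation in the cosine value maps to a larger angle error as the angle approaches 0°. A quick numeric check (the perturbation size 1e-3 is illustrative, not a value from the paper):

```python
import math

def angle_error(phi_deg, du=1e-3):
    """Angle error (deg) caused by a perturbation du in the cosine value."""
    u = math.cos(math.radians(phi_deg))
    u_perturbed = max(-1.0, min(1.0, u - du))     # clamp to arccos domain
    return math.degrees(math.acos(u_perturbed)) - phi_deg

err10 = angle_error(10.0)   # cosine is flat near 0 deg -> large angle error
err30 = angle_error(30.0)   # steeper cosine -> smaller angle error
```

The same cosine perturbation produces roughly three times the angle error at 10° as at 30° (the error scales as 1/sin φ), which matches the trend of the maximum φ errors reported above.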


Multiple Cylindrical Objects
The method can measure multiple cylindrical objects, as shown in Figure 14. The blue dots are the measurement points, the red rings are the elliptic feature points, and the blue ellipses represent the cylindrical objects' sections. In Table 4, each radius is 19 mm, θ varies from 10° and 20° to 30°, and the corresponding φ varies from 30° and 20° to 10°. Table 4 shows that the measurement errors follow almost the same distribution as those of a single cylindrical object, indicating that the proposed method can precisely measure not only single cylindrical objects but also multiple cylindrical objects.
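The final step for each recognized arc, fitting an ellipse and reading off radius and tilt, can be sketched as below. This is a minimal sketch, not the paper's exact fitting routine: it uses a plain linear least-squares conic fit and a synthetic section with the major axis along x for clarity; the center coordinates, radius of 19 mm, and 20° tilt are illustrative. The tilt is recovered from the axis ratio via cos φ = B/A, and the radius is the semi-minor axis.

```python
import numpy as np

def fit_ellipse(x, y):
    """Least-squares conic fit  a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1,
    then recover the center and the semi-axes (A >= B) from the conic."""
    D = np.column_stack([x*x, x*y, y*y, x, y])
    a, b, c, d, e = np.linalg.lstsq(D, np.ones_like(x), rcond=None)[0]
    M = np.array([[a, b/2], [b/2, c]])
    p0 = np.linalg.solve(-2*M, np.array([d, e]))   # center: conic gradient = 0
    s = p0 @ M @ p0 + 1                            # after recentring: q^T M q = s
    axes = np.sqrt(s / np.linalg.eigvalsh(M))      # semi-axis lengths
    return p0, axes.max(), axes.min()

# Synthetic cross-section of a tilted cylinder: radius r = 19 mm, tilt 20 deg.
# The light-section ellipse has semi-minor axis r and semi-major axis r/cos(phi).
r, phi = 19.0, np.radians(20.0)
t = np.linspace(0, 2*np.pi, 60, endpoint=False)
x = 40.0 + (r/np.cos(phi))*np.cos(t)
y = 25.0 + r*np.sin(t)

center, A, B = fit_ellipse(x, y)
radius_est = B                                     # semi-minor axis = radius
phi_est = np.degrees(np.arccos(B/A))               # tilt from the axis ratio
```

On noise-free synthetic points the fit recovers the center, radius, and tilt essentially exactly; with real sensor data the residuals of this fit are what the smoothing and homogenizing steps are meant to keep small and unbiased.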

Interference
In order to verify that the recognition algorithm can identify the ellipse contour points in the presence of interference, we introduce some interference into the measurement. The blue dots are the measurement points, the red rings are the elliptic feature points, the blue ellipses represent the cylindrical objects' sections, and the blue dotted lines indicate the directions of the major axes. The results show that the recognition algorithm can eliminate the interference of arbitrarily shaped objects and accurately identify the ellipse arcs from the measurement data, as shown in Figure 15.

We measured a cylindrical object many times in the same position, as shown in Figure 16. The ellipse centers fall within a rectangular box of 0.6 mm × 0.6 mm, and the ellipse axis errors are less than 0.25 mm; both are within the error limits of the sensor.
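The repeatability criterion above (all fitted centers inside a 0.6 mm × 0.6 mm box) amounts to checking the bounding-box extent of the centers from repeated measurements. A small sketch with hypothetical center coordinates (not the paper's data):

```python
import numpy as np

# Hypothetical ellipse centers (mm) from repeated measurements of one cylinder.
centers = np.array([[40.12, 25.21],
                    [40.31, 25.03],
                    [39.94, 25.38],
                    [40.22, 25.10]])

spread = centers.max(axis=0) - centers.min(axis=0)  # bounding-box extent (x, y)
within_spec = bool(np.all(spread <= 0.6))           # 0.6 mm x 0.6 mm criterion
```

The same pattern applied to the fitted semi-axes (max minus min across repeats) gives the 0.25 mm axis-error figure reported above.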

Discussion
As indicated by the measurement results, the radius errors vary from 0.05 mm to 0.4 mm and are independent of the radii of the cylindrical objects. The θ and φ errors vary from 0° to 0.2° and are independent of the theoretical angles. We implemented the routines in Matlab, and one measurement takes 1~3 s, as shown in Table 5. The measurement time depends on the number of cylindrical objects measured at the same time and on the type of interference: the more objects measured at once and the more complicated the interference, the more time is required. The surface finish of the cylindrical objects also influences the measurement. Because the sensor emits a laser beam and receives the reflected light, specular surfaces cannot be measured, and a surface with excessively high absorbance is difficult to measure even if the laser intensity is changed. Surfaces with good diffuse reflectance achieve the best measurements.

Conclusions
The main contribution of this paper is a radius and orientation measurement method for cylindrical objects based on a light section sensor that handles interference and supports multi-cylindrical-object measurement. The method identifies ellipse arcs by rough identification and accurate screening. After the ellipse recognition is finished, the data are smoothed and homogenized to eliminate the effect of laser line losses or jumps and the influence of variable resolution. The radius and orientation are obtained by fitting the smoothed and homogenized data to an ellipse. The experimental results demonstrate the effectiveness of the proposed radius and orientation measurement method for cylindrical objects.