Article

Mobile Calibration Based on Laser Metrology and Approximation Networks

by
J. Apolinar Muñoz-Rodriguez
Centro de Investigaciones en Optica, Loma del Bosque 115, Col. Lomas del campestre, C.P. 37150, Leon, Guanajuato, Mexico
Sensors 2010, 10(8), 7681-7704; https://doi.org/10.3390/s100807681
Submission received: 30 June 2010 / Revised: 20 July 2010 / Accepted: 5 August 2010 / Published: 17 August 2010

Abstract

A mobile calibration technique for three-dimensional vision is presented. In this method, vision parameters are computed automatically by approximation networks built based on the position of a camera and image processing of a laser line. The networks also perform three-dimensional visualization. In the proposed system, the setup geometry can be modified online, whereby an online re-calibration is performed based on data provided by the network and the required modifications of extrinsic and intrinsic parameters are thus determined, overcoming any calibration limitations caused by the modification procedure. The mobile calibration also avoids procedures involving references, which are used in traditional online re-calibration methods. The proposed mobile calibration thus improves the accuracy and performance of the three-dimensional vision because online data of calibrated references are not passed on to the vision system. This work represents a contribution to the field of online re-calibration, as verified by a comparison with the results based on lighting methods, which are calibrated and re-calibrated via perspective projection. Processing time is also studied.

1. Introduction

Nowadays, various lighting methods are used to perform three-dimensional vision, such as fringe pattern projection and laser line and point projection, all of which require some form of calibration. Calibration for lighting methods is performed via perspective projection models [1]. In fringe projection, the calibration is performed via perspective projection using calibrated references [2,3]. In this method, the three-dimensional vision is achieved by a phase detection algorithm. In line and point projection, the calibration is also achieved by perspective projection and the use of calibrated references [4,5], but here the three-dimensional vision is performed by laser triangulation.
In the calibration and re-calibration of lighting methods, several approaches based on perspective projection have been developed. One calibration method is performed by projecting a laser line on black and white rectangles [6,7]. The perspective projection is determined by matching the line to the known rectangles. A stereo calibration determines the perspective projection by matching a line of a grating and the use of epipolar geometry [8]. A paintbrush method performs the calibration by projecting a line on two reference planes [9]. By detecting the line on these references, the perspective projection is determined via least squares. A lighting method performs the calibration based on the coordinates of a laser line [10]. Here, the perspective projection is determined by transforming the laser line coordinates to real-world coordinates. A zigzag method performs the calibration by detecting a laser line on zigzag references [11,12]. Based on these references, the perspective projection is obtained via a transformation matrix. A vision sensor performs the calibration by projecting a laser line on a reference plane [13–17]. In this case, the perspective projection is determined by detecting the line on this reference plane. A structured light system performs the calibration by projecting a pattern of spots on a reference plane [18,19]. The perspective projection is determined by detecting the spots on this plane. Another type of calibration is performed by projecting a spot pattern and a fringe pattern [20]. In this method, the perspective projection is determined by detecting the point-to-line correspondence on a plane. Re-calibration methods have also been implemented to change the vision parameters when the base setup is modified. One such re-calibration method is performed by detecting a pattern of lines on a reference plane to determine the perspective projection [21]. Self re-calibration methods have been implemented via plane-based homography [22–24], in which the perspective projection is determined by matching the light pattern on a reference plane.
Online re-calibration methods have also been developed to change the vision parameters during the vision task [25–27]. In these methods, the perspective projection is determined by matching the light pattern on a reference plane. In the above-mentioned techniques, the vision system does not provide the data to perform the re-calibration. Typically, these online re-calibration techniques are performed by detecting a light pattern on a reference. However, in several applications such references do not exist during the vision task, so the mentioned techniques are limited by the availability of light pattern references. To overcome these limitations, a re-calibration method without online references is necessary to facilitate online modifications of the setup geometry.
The proposed mobile calibration is performed by means of a Bezier network, which provides the data needed for online re-calibration, together with laser line imaging. In this procedure, the camera orientation, focal distance, setup distances, pixel scale and image centre are determined. In addition, three-dimensional vision is performed by the network via the line shifting: the network retrieves the surface depth and provides the data for the re-calibration when the setup geometry is modified online. The extrinsic and intrinsic parameters are thus re-calibrated online and the need for references is avoided. Consequently, the mobile calibration improves the performance and the accuracy of the online re-calibration. All this constitutes a contribution to the field of re-calibration of lighting methods. This contribution is elucidated by an evaluation based on the calibration and re-calibration of lighting methods. The evaluation is based on the root mean squared error, using a contact method as reference. Finally, the processing time to produce the three-dimensional visualization is also determined.

2. Basic Theory

In lighting methods, calibration is performed based on perspective projection [6–24]. This procedure is carried out by means of calibrated references and a transformation matrix. Typically, the perspective projection model is determined based on the geometry shown in Figure 1. In this geometry, a point Pw = (xw, yw, zw) is transformed to the camera coordinates Pc = (xc, yc, zc) by Pc = R·Pw + t, where R is the rotation matrix and t is the translation vector. Here, the transformation of Pc to the image coordinates (Xu, Yu) is given by Xu = f·xc/zc and Yu = f·yc/zc. Considering radial distortion, the image coordinates are represented by Xd + Dx = Xu and Yd + Dy = Yu, where Dx = Xd(δ1r^2 + δ2r^4 + …), Dy = Yd(δ1r^2 + δ2r^4 + …) and r = (Xd^2 + Yd^2)^(1/2). In these expressions, Xd and Yd are the distorted coordinates. The pixel coordinates are also converted into real coordinates by means of a scaling factor η. Thus, the parameters to be calibrated are the matrix R, the vector t, the focal length f, the distortion coefficients δi, the image center (cx, cy) and the scaling factor η. This procedure is carried out by detecting calibrated references on a reference plane and the use of a transformation matrix [6–27]. Then, the calibration data are passed to the vision system to perform the three-dimensional visualization.
In several applications, the setup geometry is modified online to achieve good sensitivity and to avoid occlusions. In this case, a re-calibration is necessary for each modification [18,22]. In perspective projection, the translation vector t is the position vector from Ow to Oc. This vector has components in the x-, y- and z-axes from the world coordinate origin Ow to the camera coordinate origin Oc. The distances of these components are determined in the initial calibration, but the components of the vector t are modified when the camera is moved. In this case, these components are re-calibrated via calibrated references to perform the transformation from Pw to Pc [23]. The transformation Pc = R·Pw + t to the coordinates (Xu, Yu) should also be recomputed. However, in several applications calibrated references do not exist during the three-dimensional vision task, so established online re-calibration methods are limited by the availability of known references. To overcome these limitations, a re-calibration method without online references should be implemented.
In the proposed mobile calibration, a Bezier network provides the data to perform the online re-calibration and three-dimensional visualization based on a mobile setup and image processing of a laser line. The mobile setup to perform the three-dimensional vision is shown in Figure 2. This arrangement includes an electromechanical device, a CCD camera, a laser line projector and a computer to process the data. In this setup, the laser line is projected perpendicularly onto the surface and the CCD image plane is aligned parallel to the reference plane. In this geometry, the laser line reflected to the CCD camera forms an angle that varies according to the position of the reference plane in the z-axis. The orientation of the CCD camera and the orientation of the laser line are fixed. The alignment of the camera and laser line is described in Section 3. The electromechanical device moves the laser and the camera in the x-axis, y-axis and z-axis. In addition, the camera can be moved toward the laser diode along the x-axis.
In this system, a network computes the surface depth based on the line position. The geometry of this relationship is shown in Figure 3(a). In this geometry, the x-axis and y-axis are located on the reference plane and the z-axis is perpendicular to the reference plane. The focal length f is the distance between the lens and the image plane. The image center is indicated by xc on the x-axis. The distance between the laser line and the optical axis is indicated by ℓa. The surface depth is indicated by hi and zi is the distance between the lens and the object surface. The distance from the lens to the reference plane is defined by D = hi + zi. In the proposed setup, the distances ℓa and D can be modified during the visualization procedure. The laser line coordinates are indicated in the y-axis based on the geometry shown in Figure 3(b). In this geometry, a point qi of the laser line in the y-axis is indicated by yi in the image plane. Thus, the laser line coordinates are determined by qi = D·η(yc − yi)/ηf. In this expression, yi is the image row and the parameters D, f, η and yc are deduced during the mobile calibration, which is described in Section 4. In perspective projection, the surface depth is computed by zi = f·ℓa/(xc − xi) [1], based on the calibrated f and ℓa. In the proposed model, the surface depth is computed based on the line shifting in the image plane. When the laser line is projected on a surface of depth hi, the line position is moved from xA to xi in the image plane. In this case, the line shifting si is directly proportional to the surface depth hi. This line shifting is described by the following expression:
s_i = x_A - x_i \quad (1)
To compute the shift, the line positions xA and xi are detected in the image. To carry this out, the intensity maximum is measured in each row of the image. Then, first and second derivatives are computed to obtain the maximum. To detect the maximum, the pixels are approximated by a continuous function by means of Bezier curves [28]. In this case, the pixels are represented by (x0, I0), (x1, I1),…, (xn, In), where xi is the pixel position, Ii is the pixel intensity and n is the number of pixels. The Bezier curves are described by:
P(u) = \sum_{i=0}^{n} \binom{n}{i} (1-u)^{n-i} u^{i} p_i, \qquad \binom{n}{i} = \frac{n!}{i!\,(n-i)!}, \qquad 0 \le u \le 1 \quad (2)
By applying the definition of Equation (2), two equations are obtained, one for x and one for I:
x(u) = \binom{n}{0}(1-u)^{n} u^{0} x_0 + \binom{n}{1}(1-u)^{n-1} u\, x_1 + \cdots + \binom{n}{n}(1-u)^{0} u^{n} x_n, \qquad 0 \le u \le 1 \quad (3)
I(u) = \binom{n}{0}(1-u)^{n} u^{0} I_0 + \binom{n}{1}(1-u)^{n-1} u\, I_1 + \cdots + \binom{n}{n}(1-u)^{0} u^{n} I_n, \qquad 0 \le u \le 1 \quad (4)
Equation (3) represents the pixel position and Equation (4) represents the pixel intensity. Based on these equations, a continuous function is fitted to the pixels shown in Figure 4. To carry this out, the positions x0, x1, x2,…, xn are substituted into Equation (3) and the intensities I0, I1, I2,…, In are substituted into Equation (4). These two equations are evaluated in the interval 0 ≤ u ≤ 1 to fit the curve shown in Figure 4. The resulting curve is a concave function. Therefore, the second derivative I″(u) is negative and the peak is a global maximum. In this manner, the maximum is computed from the condition I′(u) = 0 [29]. To find the root of I′(u) = 0, the bisection method is applied [29]. The Bezier function is defined in the interval 0 ≤ u ≤ 1, so the initial value is defined by ui = 0 and the final value is indicated by uf = 1. Then, the middle point is computed by u* = (ui + uf)/2 to find a value u that converges to the expression I′(u) = 0. Next, the first derivative I′(u) is evaluated at the middle point u*. If the derivative I′(u = u*) is positive, then ui = u*. If the derivative I′(u = u*) is negative, then uf = u*. The next middle point u* is obtained from the last pair of values ui and uf. These steps are repeated until I′(u) = 0 is found within a set tolerance. The value u = u* where I′(u) = 0 is substituted into Equation (3) to determine the position of the intensity maximum x(u). The result is x(u) = 34.274 and the laser line position is xi = 34.274 pixels, as shown in Figure 4. Thus, the laser line position is detected. The Bezier network to perform the three-dimensional vision and calibration is described in Section 3.
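The peak-detection procedure above can be summarized in a short sketch. This is only an illustration of the technique, not the implementation used in the paper; the function names, the numerical derivative of I(u) and the synthetic example row are assumptions.

```python
import math
import numpy as np

def bezier_point(u, control):
    """Evaluate a Bezier curve at parameter u for the given control values, Equation (2)."""
    n = len(control) - 1
    return sum(math.comb(n, i) * (1 - u) ** (n - i) * u ** i * control[i]
               for i in range(n + 1))

def find_line_peak(x_pixels, intensities, tol=1e-6):
    """Sub-pixel laser line position in one image row: x(u) and I(u) follow
    Equations (3) and (4); the maximum of the concave intensity curve is found
    by bisection on a numerical derivative of I(u)."""
    du = 1e-5
    dI = lambda u: (bezier_point(min(u + du, 1.0), intensities)
                    - bezier_point(max(u - du, 0.0), intensities)) / (2 * du)
    ui, uf = 0.0, 1.0                         # Bezier parameter interval
    while uf - ui > tol:
        um = 0.5 * (ui + uf)                  # middle point
        if dI(um) > 0:                        # intensity still rising: maximum lies to the right
            ui = um
        else:                                 # intensity falling: maximum lies to the left
            uf = um
    return bezier_point(0.5 * (ui + uf), x_pixels)   # x(u*) at the intensity peak

# Synthetic example: one image row whose intensity peaks near pixel 34.3
row_x = np.arange(28.0, 42.0)
row_I = np.exp(-0.5 * ((row_x - 34.3) / 1.5) ** 2)
print(find_line_peak(row_x, row_I))           # prints a sub-pixel position near the peak
```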

3. Network Structure for Depth Contouring

The three-dimensional vision is performed by a Bezier network based on the line shifting. This network is built based on an image plane parallel to the reference plane in the x-axis and y-axis. Based on the geometry of Figure 3(a), the laser line is perpendicular to the reference plane in the x-axis. In this case, the line position along the x-axis is constant for any surface depth hi. By means of this criterion, the laser line is aligned perpendicularly to the x-axis. To carry this out, the laser line is projected on a peak reference along the y-axis. Figure 5(a) shows the laser line aligned in the y-axis on the peak reference. Then, the reference plane is moved in the z-axis, as shown in Figure 5(b). By rotating the laser diode in 0.0896 degree steps, the laser line is positioned on the reference for any depth hi of the reference plane. By image processing, the reference position and the line position are detected for each displacement of depth hi. In this case, the line position in the x-axis is the same for any position of the reference plane. Thus, the laser line is made perpendicular to the reference plane in the x-axis. Next, the image plane is aligned parallel to the reference plane in the x-axis. Based on the setup geometry of Figure 3(a), the term (ki/hi) = [ℓa/(D − hi)] = [η(xc − xi)/ηf] is obtained. In this term, η is the scale factor in millimeters, f is in pixels and (xc − xi) = si + (xc − xA). Thus, the following expression for the line shifting is obtained:
k_i = \frac{\eta (x_c - x_i)\, h_i}{\eta f} = \frac{[(x_c - x_A) + s_i]\, h_i}{f} \quad (5)
In Equation (5), f, xc and xA are constants. In this case, a linear hi produces a linear si. Conversely, a linear si produces a linear ki. Therefore dk/ds, the derivative of ki with respect to si, is a constant. Another camera orientation is an optical axis that is not perpendicular to the reference plane. In this case, a linear si does not produce a linear ki and the derivative dk/ds is not a constant.
The camera orientation along the y-axis is performed based on the geometry of Figure 3(b). In this geometry, a line pattern is moved in steps yi in the image plane in the y-axis based on the depth hi. Here, the optical axis is perpendicular to the reference plane and the term (qi/hi) = [b/(D − hi)] = [η(yc − yi)/ηf] is obtained. The position of the pattern shifting is computed by ti = (y0 − yi) = (yc − yi) − (yc − yA). Thus, the following expression is obtained:
q_i = \frac{\eta (y_c - y_i)\, h_i}{\eta f} = \frac{[(y_c - y_A) + t_i]\, h_i}{f} \quad (6)
In Equation (6), f, yc and yA are constants. In this case, a linear hi produces a linear ti. A linear ti also produces a linear qi, so the derivative dq/dt is a constant. In this manner, the camera orientation is defined by dk/ds = constant and dq/dt = constant. Due to the distortion, these derivatives are not exactly constant, but they may be considered constant. For the camera orientation in the x-axis, the reference plane is moved from h0 to h1, h2, h3,…, hn by means of the electromechanical device. For each depth hi, the line position xi is computed by the procedure described in Section 2. Then, the shifting si is computed via Equation (1), the derivative dk/ds is computed and it is evaluated with respect to the derivative dk1/ds. If dk/ds is greater than dk1/ds, the camera is rotated to the right in steps of 0.0896 degrees. Again, the reference plane is moved from h0 to h1, h2, h3,…, hn and the derivative is computed. If dk/ds is less than dk1/ds, the camera is rotated in the opposite direction. The rotation to the left and right is repeated until the minimum error of the derivative dk/ds with respect to dk1/ds is found. In this case, the derivative dk/ds is not exactly constant, but it is close to constant. This criterion is illustrated by the derivative shown by the solid line in Figure 6, where the dashed line represents dk/ds for an optical axis aligned at an angle smaller than 90° and the dotted line represents dk/ds for an optical axis aligned at an angle greater than 90°.
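The x-axis orientation loop described above can be sketched as follows. This is a minimal illustration of the decision logic, not the system's software; the hardware hooks measure_dk_ds and rotate_camera are hypothetical stand-ins for the derivative measurement over h0..hn and for the electromechanical rotation.

```python
def orient_camera_x(measure_dk_ds, rotate_camera, dk1_ds, step_deg=0.0896, max_iter=200):
    """Rotate the camera in 0.0896 degree steps until dk/ds is as close as
    possible to the target derivative dk1/ds (Section 3).

    measure_dk_ds(): moves the reference plane through h0..hn and returns the
                     measured derivative dk/ds (hypothetical hardware hook).
    rotate_camera(deg): rotates the camera by deg degrees, positive to the
                        right (hypothetical hardware hook).
    """
    best_err = abs(measure_dk_ds() - dk1_ds)
    for _ in range(max_iter):
        direction = 1.0 if measure_dk_ds() > dk1_ds else -1.0   # rotate right if dk/ds is too large
        rotate_camera(direction * step_deg)
        err = abs(measure_dk_ds() - dk1_ds)
        if err >= best_err:                  # error no longer decreases: undo the last step and stop
            rotate_camera(-direction * step_deg)
            break
        best_err = err
    return best_err
```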
For the y-axis camera orientation, dq/dt is computed. To carry this out, the position ti is computed in the image plane based on hi. This is done by detecting the corner of the laser line in the y-axis via edge detection. Figure 7(a) shows the corner position y0 of the line in the y-axis at the reference plane h0. Then, the reference plane is moved in the z-axis and the corner is detected to obtain y1. This procedure is repeated to obtain y2, y3,…, yn. Figure 7(b) shows the corner position y10 of the line in the y-axis at the reference plane h10. Based on these data, ti and the derivative dq/dt are computed. Then, this derivative is evaluated with respect to the derivative dq1/dt. The camera is rotated in the y-axis to the right or the left in the same manner as in the x-axis orientation.
The derivative dq/dt obtained in this procedure is not exactly constant, but again it is close to constant. Therefore, the camera is considered parallel to the reference plane when dk/ds and dq/dt are very close to constant according to a tolerance. In this manner, the image plane has been aligned parallel to the reference plane. At this point, the laser line has been aligned perpendicular to the reference plane and the camera is fixed. Based on an image plane parallel to the reference plane, the network is built. To carry this out, the data hi and si from the camera alignment are used. The structure of the proposed network is shown in Figure 8. This network consists of an input vector, two parametric inputs, a hidden layer and an output layer. Each layer of the network is constructed as follows: the input includes the depth hi, the line shifting si and the parametric values (u, v). The depth data h0, h1, h2,…, hn and the line shifting data s0, s1, s2,…, sn are obtained during the camera alignment by moving the reference plane in the z-axis. The line shifting si is directly proportional to the surface depth hi. In this case, the line shifting is represented by a parametric value u through the following linear combination:
u = a_0 + a_1 s \quad (7)
where a0 and a1 are constants to be determined. By means of two values si and their respective values of u, Equation (7) is determined. The Bezier curves are defined in the interval 0 ≤ u ≤ 1. Therefore, u = 0 for the first line shifting s0 and u = 1 for the last shifting sn. Substituting these values into Equation (7), two equations with two unknown constants are obtained. Solving these equations, a0 and a1 are determined. Thus, for each shifting si, a value u is computed via Equation (7).
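As a minimal worked example of Equation (7), the two endpoint conditions give a0 and a1 directly; the numeric shift values below are assumed purely for illustration.

```python
def shift_to_u_coefficients(s_first, s_last):
    """Solve Equation (7) with u(s_first) = 0 and u(s_last) = 1:
       0 = a0 + a1*s_first,  1 = a0 + a1*s_last."""
    a1 = 1.0 / (s_last - s_first)
    a0 = -s_first * a1
    return a0, a1

# Assumed example: shifts running from 0 to 42.5 pixels
a0, a1 = shift_to_u_coefficients(0.0, 42.5)
u_mid = a0 + a1 * 21.25        # a shift halfway through the range maps to u = 0.5
```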
The coordinate yi corresponds to each row of the laser line image. This coordinate is represented by a parametric value v by the following expression:
v = b_0 + b_1 y \quad (8)
where b0 and b1 are constants to be determined. Using two values yi and their respective values of v, Equation (8) is determined. Bezier curves are defined in the interval 0 ≤ v ≤ 1. In this case, v = 0 for y0 and v = 1 for yn. Substituting these two values into Equation (8), two equations with two unknown constants are obtained. Solving these equations, b0 and b1 are determined. Thus, for each coordinate yi, a value v is computed via Equation (8). The hidden layer is built from a Bezier basis function, which is described by:
\beta_{ij} = B_i(u)\, B_j(v) \quad (9)
where B_i(u) = \binom{n}{i} u^{i} (1-u)^{n-i}, B_j(v) = \binom{m}{j} v^{j} (1-v)^{m-j}, \binom{n}{i} = \frac{n!}{i!\,(n-i)!}, \binom{m}{j} = \frac{m!}{j!\,(m-j)!}.
The output layer is obtained by the summation of the neurons, each multiplied by a weight. Thus, the output response is the surface depth given by the following expression:
h(u, v) = \sum_{i=0}^{n} \sum_{j=0}^{m} w_{ij}\, h_i\, B_i(u)\, B_j(v), \qquad 0 \le u \le 1, \; 0 \le v \le 1 \quad (10)
where wij are the weights, hi is the surface depth, and Bi(u) and Bj(v) are the Bezier basis functions of Equation (9). To construct the complete network of Equation (10), the appropriate weights wij should be determined. To carry this out, the network is forced to produce the correct surface depth hi. This procedure is performed by an adjustment mechanism. Based on the reference data obtained in the camera alignment, the initial depth is h0 = 0 mm, the line position in the image plane is xA = x0 and s0 = 0. The line position x0 in the x-axis is shown in Figure 7(a). In the camera alignment, the reference plane is moved in the z-axis in steps of 2.54 mm. Thus, h10 = 25.40 mm and the line position corresponds to xi = x10, which is shown in Figure 7(b). Here, the shifting is determined by s10 = xA − xi. Then, each si and its coordinate yi are converted to values (u, v) via Equation (7) and Equation (8), respectively. Then, the depth hi and its coordinates (u, v) are substituted into Equation (10) to obtain an output H(u, v), thus giving the following system of equations:
\begin{aligned}
H(u{=}0, v{=}0) = h_0 &= w_{00} h_0 B_0(u) B_0(v) + w_{01} h_0 B_0(u) B_1(v) + \cdots + w_{0m} h_0 B_0(u) B_m(v) \\
&\quad + w_{10} h_1 B_1(u) B_0(v) + \cdots + w_{1m} h_1 B_1(u) B_m(v) + \cdots + w_{nm} h_n B_n(u) B_m(v) \\
H(u, v) = h_1 &= w_{00} h_0 B_0(u) B_0(v) + w_{01} h_0 B_0(u) B_1(v) + \cdots + w_{nm} h_n B_n(u) B_m(v) \\
&\;\;\vdots \\
H(u{=}0, v{=}1) = h_n &= w_{00} h_0 B_0(u) B_0(v) + w_{01} h_0 B_0(u) B_1(v) + \cdots + w_{nm} h_n B_n(u) B_m(v) \\
H(u{=}1, v{=}0) = h_0 &= w_{00} h_0 B_0(u) B_0(v) + w_{01} h_0 B_0(u) B_1(v) + \cdots + w_{nm} h_n B_n(u) B_m(v) \\
H(u{=}1, v{=}1) = h_n &= w_{00} h_0 B_0(u) B_0(v) + w_{01} h_0 B_0(u) B_1(v) + \cdots + w_{nm} h_n B_n(u) B_m(v)
\end{aligned} \quad (11)
This linear system of Equation (11) can be represented as:
\begin{aligned}
H_{00} &= w_{00}\,\beta_{0,0} + w_{01}\,\beta_{0,1} + \cdots + w_{nm}\,\beta_{0,nm} \\
H_{01} &= w_{00}\,\beta_{1,0} + w_{01}\,\beta_{1,1} + \cdots + w_{nm}\,\beta_{1,nm} \\
&\;\;\vdots \\
H_{nm} &= w_{00}\,\beta_{nm,0} + w_{01}\,\beta_{nm,1} + \cdots + w_{nm}\,\beta_{nm,nm}
\end{aligned} \quad (12)
This equation can be rewritten in matrix form as β W = H. Thus, the linear system is represented by the following matrix:
\begin{bmatrix}
\beta_{0,0} & \beta_{0,1} & \beta_{0,2} & \cdots & \beta_{0,nm} \\
\beta_{1,0} & \beta_{1,1} & \beta_{1,2} & \cdots & \beta_{1,nm} \\
\vdots & \vdots & \vdots & & \vdots \\
\beta_{nm,0} & \beta_{nm,1} & \beta_{nm,2} & \cdots & \beta_{nm,nm}
\end{bmatrix}
\begin{bmatrix} w_{00} \\ w_{01} \\ \vdots \\ w_{nm} \end{bmatrix}
=
\begin{bmatrix} H_{00} \\ H_{01} \\ \vdots \\ H_{nm} \end{bmatrix} \quad (13)
The linear system of Equation (13) is solved by the Cholesky method and the weights wij are determined. In this manner, the Bezier network H(u, v) has been completed. The result of this network is a model that computes the surface depth h(x, y) via the line shifting. The network is applied to the laser line shown in Figure 9(a) to obtain the surface depth. To carry this out, the shifting si is detected in each image row yi. Then, the shifting si and its coordinate yi are converted to values (u, v). Then, these values are substituted into the network of Equation (10) to compute the surface depth shown in Figure 9(b). In this figure, the symbol “Δ” indicates the data provided by a coordinate measurement machine (CMM).
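A compact sketch of assembling and solving the linear system of Equations (11)–(13) is given below. It illustrates the technique under stated assumptions rather than reproducing the paper's code: the function names are invented, and a least-squares solve is used in place of the square-system Cholesky solve described above.

```python
import math
import numpy as np

def bernstein(i, n, t):
    """Bernstein basis function B_i(t) used in Equation (9)."""
    return math.comb(n, i) * t ** i * (1 - t) ** (n - i)

def train_bezier_network(u_samples, v_samples, h_targets, h_nodes, m):
    """Assemble the system beta*W = H of Equations (11)-(13) and solve for the
    weights w_ij.  h_nodes are the depths h0..hn from the camera alignment;
    a least-squares solve stands in for the Cholesky solve of the paper."""
    n = len(h_nodes) - 1
    beta = np.array([[h_nodes[i] * bernstein(i, n, u) * bernstein(j, m, v)
                      for i in range(n + 1) for j in range(m + 1)]
                     for u, v in zip(u_samples, v_samples)])
    W, *_ = np.linalg.lstsq(beta, np.asarray(h_targets, dtype=float), rcond=None)
    return W.reshape(n + 1, m + 1)

def network_depth(u, v, W, h_nodes):
    """Evaluate Equation (10): h(u, v) = sum_ij w_ij * h_i * B_i(u) * B_j(v)."""
    n, m = W.shape[0] - 1, W.shape[1] - 1
    return sum(W[i, j] * h_nodes[i] * bernstein(i, n, u) * bernstein(j, m, v)
               for i in range(n + 1) for j in range(m + 1))
```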
To determine the accuracy, the network data are compared with the data provided by the CMM. The accuracy is computed based on the root mean squared error (rms) [30], given by:
\mathrm{rms} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (ho_i - hc_i)^2} \quad (14)
where hoi is the data provided by the CMM, hci is the data calculated by the network and n is the number of data points. For the data shown in Figure 9(b), the error is rms = 0.148 mm. The depth resolution is deduced from the detection of the minimum line shifting si. In this case, the network is built using the minimum and maximum si at the distance ℓa. For this configuration, a shifting si = 0.38 pixels is detected from the reference plane. Based on this shifting, the network computes a depth h = 0.28 mm. Thus, small details around h = 0.28 mm can be detected and the network sensitivity has been determined. The calibration of vision parameters based on the network is described in Section 4.
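Equation (14) reduces to a few lines of code; this is a minimal sketch, assuming the CMM depths and the network depths are available as equal-length arrays (the function name is illustrative).

```python
import numpy as np

def rms_error(h_cmm, h_network):
    """Root mean squared error of Equation (14): CMM data vs. network data."""
    h_cmm = np.asarray(h_cmm, dtype=float)
    h_network = np.asarray(h_network, dtype=float)
    return float(np.sqrt(np.mean((h_cmm - h_network) ** 2)))
```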

4. Parameters of the Vision System

In the lighting methods, the calibration is performed based on perspective projection [6–22]. In this model, the extrinsic and intrinsic parameters are determined based on calibrated references. Thus, the matrix R, vector t, focal length f, distortion coefficients δi, image center (cx, cy) and the scale factor η are calibrated. Typically, these lighting systems do not provide the data needed to perform the re-calibration. Any time the setup is modified, a re-calibration should be applied. This procedure provides the vision system with the ability to change the intrinsic and extrinsic parameters. This is the case, for instance, when the camera position is modified. In this case, the components of the vector t change and the distances from the origin of the world coordinates Ow to the camera coordinates Oc should be re-calibrated. Recently, self re-calibration and online re-calibration have been developed to change the vision parameters [22–27]. In these methods, the data for online re-calibration are determined by detecting a light pattern on calibrated references. This kind of re-calibration is suitable when the references exist during the vision task, but in several applications such references do not exist. In this case, the online re-calibration cannot be completed due to the lack of references, and an online re-calibration without references is thus necessary.
In the proposed mobile calibration, the data for online re-calibration are provided by the Bezier network and laser line imaging, thus avoiding the need for pattern references. The proposed vision system can be moved in the x-, y- and z-axes. In addition, the camera can be moved toward the laser line. Here, the setup geometry is modified when the camera is moved in the z-axis. The geometry is also modified when the camera is moved toward the laser line. In this case, a mobile calibration is applied to perform the online re-calibration. This procedure is performed based on the setup geometry shown in Figure 3(a). Here, the calibration is performed with the camera parallel to the reference plane. The triangulation of this geometry is described by the expression of Equation (5). Considering radial distortion, the line positions are defined by xA = XA + δxA and xi = Xi + δxi, where XA and Xi are the undistorted image coordinates and the distortion is indicated by δxA and δxi, respectively. Thus, the line shifting is defined by Si = (xc − Xi) − (xc − XA). Therefore, the projection ki of Equation (5) is rewritten as:
K_i = \frac{\eta (x_c - X_i)\, h_i}{\eta f} = \frac{[(x_c - X_A) + S_i]\, h_i}{f} \quad (15)
The distortion can be described from the terms of Equation (5) and Equation (15) by the following expression:
h_i = \frac{f K_i}{\eta [(x_c - X_A) + S_i]} = \frac{f k_i}{\eta [(x_c - x_A) + s_i]} \;\Rightarrow\; K_i [(x_c - x_A + \delta x_A) + (s_i + \delta x_i - \delta x_A)] = k_i [(x_c - x_A) + s_i] \quad (16)
From this equation, the distortion δxi is described by:
\delta x_i = \frac{k_i}{K_i} (x_c - x_A + s_i) + (x_A - x_c - s_i) \quad (17)
The distortion can be determined from the terms of the line shifting Si = (xc − Xi) − (xc − XA) and si = (xA − xi). Here, the first line shifting s1 is defined as distortion-free. In this manner, Si = i·s1, Ki = i·k1 and dKi/dS = dk1/ds for i = 1, 2,…, n. Based on these criteria, the undistorted shifting is described by the term i·s1 = (xc − xi + δxi) − (xc − xA + δxA) = (xA − xi) + (δxi − δxA). Thus, the distortion is obtained by δxi = i·s1 − (xA − xi) + δxA. Since the first line shifting is defined without distortion, δxA = 0, δx1 = 0 and S1 = s1. Thus, the distortion is defined by:
\delta x_i = i \cdot s_1 - (x_A - x_i) + \delta x_A = i \cdot s_1 - (x_A - x_i) \qquad \text{for } i = 2, 3, \ldots, n \quad (18)
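A small sketch of Equation (18), assuming the measured line positions x1,…, xn and the first shift s1 are available; the function and container names are illustrative.

```python
def x_distortion(x_A, x_positions, s1):
    """Distortion of Equation (18): delta_x_i = i*s1 - (x_A - x_i) for i = 2..n.
    x_positions holds the measured line positions x_1, x_2, ..., x_n; the first
    shift is taken as distortion-free, so delta_x_1 = 0."""
    deltas = {1: 0.0}
    for i, x_i in enumerate(x_positions[1:], start=2):
        deltas[i] = i * s1 - (x_A - x_i)
    return deltas
```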
The y-axis distortion is determined based on the setup geometry shown in Figure 3(b). The triangulation of this geometry is described by the expression of Equation (6). Considering radial distortion, yA = YA + δyA and yi = Yi + δyi, where YA and Yi are the undistorted image coordinates and the distortion is indicated by δyA and δyi, respectively. Thus, the pattern shifting is defined by Ti = (yc − YA) − (yc − Yi). In this manner, the projection qi of Equation (6) is rewritten as:
Q_i = \frac{\eta (y_c - Y_i)\, h_i}{\eta f} = \frac{[(y_c - Y_A) + T_i]\, h_i}{f} \quad (19)
The procedure used to determine the distortion in the x-axis is applied to find the distortion δyi. Thus, the distortion in the y-axis is defined by:
\delta y_i = i \cdot t_1 - (y_A - y_i) + \delta y_A = i \cdot t_1 - (y_A - y_i) \qquad \text{for } i = 2, 3, \ldots, n \quad (20)
In this manner, the distortion has been deduced. Based on an image plane parallel to the reference plane, the vision parameters are deduced. This procedure is carried out based on the setup geometry of Figure 3(a), which is described by:
\frac{\ell_a}{D - h_i} = \frac{\eta (x_c - X_i)}{f} = \frac{\eta [(x_c - X_A) + S_i]}{f} \quad (21)
In this equation, the constants D, ℓa and f are in millimeters, xc is in pixels and η is the scale factor. To determine these parameters, Equation (21) is rewritten as the following system of equations:
\begin{aligned}
h_0 &= -\frac{f \ell_a}{\eta (x_c - X_A + S_0)} + D \\
h_1 &= -\frac{f \ell_a}{\eta (x_c - X_A + S_1)} + D \\
h_2 &= -\frac{f \ell_a}{\eta (x_c - X_A + S_2)} + D \\
h_3 &= -\frac{f \ell_a}{\eta (x_c - X_A + S_3)} + D \\
h_4 &= -\frac{f \ell_a}{\eta (x_c - X_A + S_4)} + D
\end{aligned} \quad (22)
The values h0, h1,…, h4 are computed by the network based on s0, s1,…, s4. The values XA and Si are computed using the known si, δxA and δxi; then these values are substituted into Equation (22) to solve the system of equations and thus determine the constants D, ℓa, f, η and xc. The coordinate yc is computed based on the geometry of Figure 3(b). Here, the parameters η, Ti = i·t1 and Yi are known and Ti = η(yc − Yi+1) − η(yc − Y1). Thus, yc is determined by the following system of equations:
\begin{aligned}
T_1 &= \eta (y_c - Y_2) - \eta (y_c - Y_1) \\
T_2 &= \eta (y_c - Y_3) - \eta (y_c - Y_1)
\end{aligned} \quad (23)
The values T1, T2, Y1, Y2 and Y3 are collected from the camera orientation in the y-axis. These values are substituted into Equation (23) to solve the system and the value yc is thus determined. The laser line coordinates are determined based on the parameters D, η, yc and f. In this case, yi are the image coordinates of the laser line in the y-axis. Based on the geometry of Figure 3(b), the coordinates of the laser line in the y-axis are determined by the term qi = D·η(yc − yi)/ηf. In this manner, the vision parameters have been determined from the data provided by the network and image processing. The mobile setup generates online geometric modifications, giving the system the ability to overcome occlusions and attain high sensitivity. Typically, the extrinsic parameters are re-calibrated when the camera changes position [31]. In the reported methods, the online re-calibration depends on the availability of calibrated references [24–29]. The proposed mobile calibration avoids the use of references for online re-calibration. In the proposed vision system, the setup geometry [Figure 10(a)] is modified when the camera is moved toward the laser line along the x-axis, as seen in Figure 10(b). The setup geometry is also modified when the camera is moved in the z-axis [Figure 10(c)]. In these cases, the line shifting magnitude should be re-calibrated online. The distance to the object surface should also be re-calibrated online when the camera is moved in the z-axis. This procedure is carried out by computing the line shifting factor α. From the initial configuration [Figure 10(a)], the expression tanθ1 = ηf/[(XA − xc)η] describes the line position. From the geometry shown in Figure 10(b), the expression tanθ2 = ηf/[(αXA − xc)η] describes the line position. In these expressions, f and xc are known from the initial calibration and αXA is the line position in the new configuration [Figure 10(b)]. Here, the position αXA is defined by the smallest distance obtained from the term (αXj − xc).
In this case, the term αXj is the line position in each row of the image in the y-axis and the j-index is the row number of the image. Based on these data, the angles θ1 and θ2 are computed. Thus, the expression (XA − xc) tanθ1 = (αXA − xc) tanθ2 is obtained. Therefore, the factor α is determined by:
\alpha = \frac{(X_A - x_c)\tan\theta_1}{X_A \tan\theta_2} + \frac{x_c}{X_A} \quad (24)
Then, the shifting is divided by the factor α in the modified configuration. The shift si is re-calibrated online and thus processed by the network to obtain the surface depth hi. In this manner, the vision system provides an online re-calibration of the line position and the line shifting, and the need for calibrated references is avoided. In perspective projection, the extrinsic and intrinsic parameters change when the camera is moved; therefore the components of the translation vector t should be re-calibrated to perform the transformation Pc = R·Pw + t. Methods such as object-based and plane-based homography have been applied to perform the online re-calibration of vision parameters. Object-based methods perform the re-calibration based on known marks on the object [32]. Homography methods perform the re-calibration based on the detection of light pattern references [22–24]. These reported methods are limited when the references do not exist during the vision task.
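The re-calibration factor of Equation (24) can be sketched as follows; line_angle folds in the tanθ expressions of the previous paragraph, and all function and parameter names are assumptions.

```python
import math

def line_angle(f_pixels, line_position, x_c):
    """theta from tan(theta) = eta*f / ((line_position - x_c)*eta) = f / (line_position - x_c)."""
    return math.atan(f_pixels / (line_position - x_c))

def shift_factor(X_A, x_c, theta1, theta2):
    """Re-calibration factor alpha of Equation (24)."""
    return (X_A - x_c) * math.tan(theta1) / (X_A * math.tan(theta2)) + x_c / X_A

def recalibrated_shift(s_i, alpha):
    """The measured shift is divided by alpha before it is fed to the network."""
    return s_i / alpha
```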
The contribution of the mobile calibration when an occlusion appears during the vision task was also studied. Typically, occlusions appear due to the surface variation and a large distance ℓa. This criterion is studied based on the geometry shown in Figure 11, in which the point A is occluded in the initial configuration when the image plane is placed at (x0, z0). Here, an occlusion is detected when the width of the line is less than three pixels.
To avoid occlusions, the camera is moved away from the surface to (x0, z1) in the z-axis or toward the laser line to (x1, z0) in the x-axis. An occlusion is detected when the width of the line is smaller than three pixels along the laser line in the y-axis. This criterion is established according to a threshold value. When an occlusion appears, the surface scanning is stopped. Then, the camera is moved by one 1.27 mm step toward the laser line. In this camera position, the width of the laser line is evaluated. If the width of the laser line is less than three pixels, the camera is moved another step toward the laser line. This procedure is repeated until the width of the laser line is greater than three pixels. Then, the scanning is continued with the new camera position and the occluded region is thus detected based on a complete laser line. In this case, the shift magnitude is different from that of the initial configuration, so the shift should be re-calibrated based on the factor α via Equation (24). Then, the shift si is processed by the network to compute the object surface. The three-dimensional vision and calibration accuracy are described in Section 5.
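The occlusion-handling loop just described can be sketched with hypothetical hardware hooks for the line-width measurement and the camera motion; this is only an illustration of the decision rule.

```python
def avoid_occlusion(line_width, move_camera_toward_laser, step_mm=1.27, min_width=3, max_steps=200):
    """Move the camera toward the laser line in 1.27 mm steps until the imaged
    line is at least three pixels wide (Section 4).  line_width() and
    move_camera_toward_laser(mm) are hypothetical hardware hooks."""
    steps = 0
    while line_width() < min_width and steps < max_steps:
        move_camera_toward_laser(step_mm)
        steps += 1
    # after the move, the shift must be re-scaled by the factor alpha of Equation (24)
    return steps
```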

5. Experimental Results

The proposed mobile calibration is performed based on Bezier networks and laser line imaging. This automatic technique avoids calibrated references and physical measurements. Here, the three-dimensional visualization is performed by laser scanning in 1.27 mm steps along the x-axis. The scanning movement is provided by the electromechanical device, whose minimum step is 0.0245 mm. The positioning accuracy is 0.00147 mm and the positioning repeatability is a standard deviation of ±0.001 mm. The laser line is captured and digitized in 256 gray levels. From each image, the network computes the surface contour based on the line shift si. The depth resolution provided by the network is around 0.28 mm. This resolution corresponds to the distances ℓa = 174.8321 mm and D = 380.4126 mm. In this case, the depth resolution varies according to the distances ℓa and D. The depth resolution increases when ℓa is increased and D is decreased. On the other hand, the depth resolution decreases when ℓa is decreased and D is increased. An image plane parallel to the reference plane provides better sensitivity for a laser line perpendicular to the reference plane. This criterion is illustrated by the geometry of Figure 12. In this case, the image plane parallel to the reference plane provides a larger line shift than the rotated image plane, whereas the rotated image plane provides better sensitivity when the laser line is aligned at an angle. Therefore, the image plane is fixed parallel to the reference plane and perpendicular to the laser line, thus affording the best sensitivity. When the setup geometry is modified, a mobile calibration is performed based on the data provided by the network and image processing. This procedure provides high sensitivity and avoids occlusions. This was verified by the three-dimensional visualization of surfaces with small and large details.
The first test of the mobile calibration is the three-dimensional visualization of a plastic fruit, shown in Figure 13(a). To carry this out, the fruit is scanned by the vision system as shown in Figure 13(b). In this procedure, a set of images is captured by the CCD camera. From each image, the line shift si is detected via Equation (1) and converted to a value u via Equation (7). In addition, the coordinate yi of the laser line is converted to a value v via Equation (8). Then, the network computes a transverse section of the object by substituting the values (u, v) into Equation (10). In this scanning, an occlusion is detected by the broken line shown in Figure 13(c). To avoid the occlusion, the camera is moved toward the laser line. In doing so, the initial geometry is modified, so the line shift should be re-calibrated online based on the factor α via Equation (24). To carry this out, the line position αXj is detected in each row of the images in the y-axis. In this case, the term αXj is the line position and the j-index is the row number of the image in the y-axis. Here, the position αXA is the smallest distance from the term (αXj − xc). Then, the factor α is computed and the line shift is divided by this factor to achieve the online re-calibration, and the re-calibrated shift si is substituted into the network of Equation (10) to compute the surface hi. The whole surface is reconstructed from all the depth data provided by the network. Fifty-six images were processed to obtain the plastic fruit shown in Figure 13(d), where the scale is in mm.
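The per-image pipeline used in this test (sub-pixel line detection, online re-calibration by α, mapping to (u, v), and network evaluation) can be tied together in a short sketch. All parameter names are illustrative, and detect_peak and evaluate_network are callables such as the find_line_peak and network_depth helpers from the earlier sketches.

```python
def reconstruct_section(image_rows, x_A, alpha, a0, a1, b0, b1, detect_peak, evaluate_network):
    """Process one laser line image: detect the sub-pixel line position in every
    row, apply the online re-calibration by alpha, map (s_i, y_i) to (u, v) via
    Equations (7) and (8), and evaluate the network of Equation (10)."""
    depths = []
    for y_i, (row_x, row_I) in enumerate(image_rows):
        x_i = detect_peak(row_x, row_I)         # sub-pixel line position in this row
        s_i = (x_A - x_i) / alpha               # Equation (1), re-scaled per Equation (24)
        u = a0 + a1 * s_i                       # Equation (7)
        v = b0 + b1 * y_i                       # Equation (8)
        depths.append(evaluate_network(u, v))   # surface depth h(u, v), Equation (10)
    return depths
```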
To determine the accuracy, the root mean squared error (rms) is computed via Equation (14). To do this, the plastic surface is measured by the CMM. This procedure is performed by measuring a transverse section of the object in 8.0 mm steps along the y-axis. The transverse section corresponds to the position where the laser line was projected in the x-axis. Then, the network computes the surface depth of this transverse section. Forty-six transverse sections are measured by the CMM and by the network in steps of 1.27 mm. In this case, the data obtained from the CMM are n = 420. Then, the error is computed and the result is rms = 0.142 mm.
The second test of the mobile calibration is the visualization of a dummy face, shown in Figure 14(a). To carry this out, the dummy face is scanned by the vision system. In this scanning, no reference plane exists. This is established based on the absence of a line at the beginning and end of the image in the y-axis. Here, the shifting magnitude is different from that of the initial configuration, so the line shift should be re-calibrated online. In this case, the line position αXj is detected in each row of the image in the y-axis. From these line positions, the position αXA is determined as the minimum distance from the term (αXj − xc). Then, the factor α is computed via Equation (24) and the line shift is divided by this factor α to achieve the online re-calibration, and the re-calibrated shift si is converted to a value u. In addition, the coordinate yi of the laser line is converted to a value v. Then, the network computes the data hi of a transverse section of the object by substituting the values (u, v) into Equation (10).
When the object is scanned, an occlusion is detected based on the broken line shown in Figure 14(b). To avoid the occlusion, the camera is moved toward the laser line. Since the geometry and the shift magnitude have been modified, the mobile calibration performs a second online re-calibration to achieve the three-dimensional vision. To carry it out, the line position αXj is detected in each row of the image in the y-axis for the new configuration. From these line positions, αXA is determined as the minimum of the term (αXj − xc). Then, the factor α is computed via Equation (24) and the shifting is divided by this factor α to achieve the online re-calibration. This re-calibrated shifting si is converted to a value u. The coordinate yi of the laser line is also converted to a value v. Then, the network computes the surface depth hi by substituting the values (u, v) into Equation (10). In this manner, the whole surface is obtained from all the data provided by the network. One hundred and sixteen images are processed to obtain the complete dummy face shown in Figure 14(c). The scale of this figure is in mm. To determine the accuracy, the rms is computed via Equation (14). To do this, the transverse sections of the dummy face are measured by the CMM and by the network. In this case, sixty-two transverse sections are measured by the CMM in 1.00 mm steps along the x-axis to perform the evaluation. From this procedure, the obtained data are n = 1,400. Then, the error is computed and the result is rms = 0.155 mm.
The third test of the mobile calibration is the visualization of the flat surface of a metallic piece, shown in Figure 15(a). To carry this out, the metallic piece is scanned by the vision system. In this scanning, no reference plane exists during the vision procedure. This is established based on the lack of a line at the beginning and end of the y-axis in the image. Since the line shift magnitude is different from that of the initial configuration, the mobile calibration is applied to achieve the online re-calibration of the line shift. To do so, the line position αXj is detected in each row of the image in the y-axis. In this case, the line position Xj is the same along the laser line, X0 = X1 = X2 = … = Xn, and the minimum distance from the term (αXj − xc) can be obtained by means of αX0. Therefore, the reference position is determined by αXA = αX0. Then, the factor α is computed via Equation (24) and the line shift is divided by the factor α to achieve the online re-calibration. Then, the re-calibrated shift si is converted to a value u via Equation (7). The coordinate yi of the laser line is also converted to a value v via Equation (8). Then, the network computes the depth hi by substituting the values (u, v) into Equation (10). In the scanning, an occlusion is detected based on the missing line shown in Figure 15(b). To avoid the occlusion, the camera is moved toward the laser line. Here, the geometry and the shift magnitude have been modified, so the mobile calibration performs a second online re-calibration to achieve the three-dimensional visualization. To carry it out, the line position αXj is detected in each row of the image in the y-axis. In this case, the position αXj is the same along the laser line. Again, the reference position αXA is determined by αXA = αX0. Then, the factor α is computed via Equation (24) and the shift is divided by this factor α to achieve the online re-calibration, and the re-calibrated shift si is then converted to a value u. The coordinate yi of the laser line is also converted to a value v and the network computes the surface depth hi by substituting the values (u, v) into Equation (10). In this manner, the whole surface is obtained from the data provided by the network. Fifty-eight images were processed to obtain the metallic piece shown in Figure 15(c). The scale of this figure is in mm. To determine the accuracy, the metallic piece was measured by the CMM. Then, the rms is computed using the data provided by the CMM and by the network. In this procedure, the error is computed using n = 1,400 and the result is rms = 0.155 mm.
The value n has a great influence on the precision of the calculated error. To determine whether n corresponds to the desired precision, the sample size for a given confidence level [33] is calculated by:
n = \left( \frac{z_\alpha \, \sigma_x}{e} \right)^2 \quad (25)
where zα is the desired confidence coefficient, e is the error expressed as a percentage, and σx is the standard deviation. Therefore, the confidence level based on the data n is described by:
z_\alpha = \frac{e \sqrt{n}}{\sigma_x} \quad (26)
To check whether the chosen n meets the desired confidence, Equation (26) is applied. The desired confidence is 95%, which corresponds to zα = 1.96 according to the confidence table [33]. The average height of the dummy face is 44.32 mm. Therefore, the rms corresponds to an error of 0.0035, which represents a 0.35% error. To determine the error precision, the confidence is calculated via Equation (26) for n = 1,400, e = 0.35 and a standard deviation of 6.204. The result is zα = 2.110, which indicates a confidence level over 95%. The confidence levels for the plastic fruit and for the metallic piece are also greater than 95%.
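Equation (26) can be checked directly with the reported values (a sketch):

```python
import math

def confidence_coefficient(e_percent, sigma_x, n):
    """z_alpha = e*sqrt(n)/sigma_x, Equation (26)."""
    return e_percent * math.sqrt(n) / sigma_x

print(confidence_coefficient(0.35, 6.204, 1400))   # about 2.11, above the 1.96 required for 95%
```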
The error of the three-dimensional vision obtained with the calibration via Bezier networks is less than 1%. For comparison, the best accuracies reported for calibration and online re-calibration methods are as follows: a stereo calibration via perspective projection for face profiling reports an error over 1% [8]. A calibration method based on perspective projection and least squares for a laser range scanner reports an error over 1% [9]. A calibration method based on perspective projection and the invariance of the double cross-ratio of the cross-points reports an error of 2% [16]. A self-recalibration method based on plane-based homography reports an error of 1% [23]. An online re-calibration method based on homography and a reference plane reports an error of 1.5% [27]. These results indicate that the proposed mobile calibration provides better accuracy, based on its error of less than 1%. The mobile calibration also avoids external procedures and the need for calibrated references to perform the online re-calibration. The resolution provided by the proposed technique is good compared with the calibration methods based on perspective projection that use distances similar to those of our setup [6–27]. In these reports, a paintbrush laser range scanner reports 0.57 mm as the best resolution [9]. The measurement range of the proposed technique is in the interval between 0.3 mm and 280.60 mm. According to the above-mentioned techniques [6–27], the measurement range of the proposed mobile setup is good.
The computer used in this vision system is a 1.8 GHz PC. The capture rate of the camera is 34 fps. The electromechanical device is moved at 34 steps per second. Each image of the laser line is processed by the network in 0.010 s. The shape of the dummy face was reconstructed in 3.88 s, the metallic piece was reconstructed in 3.22 s and the plastic fruit was profiled in 2.59 s. This processing time is good compared with the lighting methods based on perspective projection. To demonstrate this, the processing times of the fast techniques are given as follows: for a paintbrush laser range scanner, the reported time to reconstruct a single view is 15 s [9]. In an implementation and experimental study on fast object modeling based on multiple structured light stripes, the reported time to reconstruct a single view of the object is 10 s [17]. These results indicate that the proposed mobile system provides fast three-dimensional visualization. In this procedure, physical measurements and calibrated references are avoided to perform the online re-calibration of the vision parameters, and the distances of the setup geometry are not used to compute the surface depth. The proposed mobile calibration, performed online using data provided by the network and image processing, is therefore easier to apply than the online re-calibration techniques based on references and perspective projection. In this manner, the vision system achieves good repeatability, corresponding to a standard deviation of ±0.01 mm.

6. Conclusions

A mobile calibration technique for three-dimensional vision has been presented. In this technique, a Bezier network provides the data needed to perform the mobile calibration via image processing. The network also computes the object surface based on the mobile setup. The setup geometry can thus be modified online, and the network provides the online re-calibration needed to accurately perform the three-dimensional visualization. The automatic calibration avoids the need for physical measurements and calibrated references, which are used in the lighting methods based on perspective projection. This improves the performance of the vision system and the accuracy of the three-dimensional visualization. The ability to detect the laser line with sub-pixel resolution has been achieved by using Bezier curves, and the image processing is achieved with few operations. With the automatic calibration, good repeatability is achieved in each three-dimensional visualization procedure. The technique described here should provide a valuable tool for industrial inspection and reverse engineering tasks.

Acknowledgments

J. Apolinar Muñoz Rodríguez would like to thank CONCYTEG of Guanajuato State and CONACYT of Mexico for their partial support of this research.

References

1. Klette, R.; Schluns, K.; Koschan, A. Computer Vision: Three-Dimensional Data from Images; Springer: Singapore, 1998; pp. 349–367.
2. Jia, P.; Kofman, J.; English, C. Comparison of linear and nonlinear calibration methods for phase measuring profilometry. Opt. Eng. 2005, 46, 043601:1–043601:7.
3. Breque, C.; Dupre, J.C.; Brenand, F. Calibration of a system of projection moiré for relief measuring: biomechanical applications. Opt. Lasers Eng. 2004, 41, 241–260.
4. Remondino, F.; El-Hakim, S. Image-based 3D modelling: a review. Photogrammetric Record 2006, 21, 269–291.
5. Muñoz-Rodríguez, J.A.; Rodríguez-Vera, R. Evaluation of the light line displacement location for object shape detection. J. Mod. Opt. 2003, 50, 137–154.
6. Vilaca, J.L.; Fonceca, J.C.; Pinho, A.M. Calibration procedure for 3D measurement system using two cameras and a laser line. Opt. Laser Technol. 2009, 41, 112–119.
7. Zhang, S.; Huang, P.S. Novel method for structured light system calibration. Opt. Eng. 2006, 45, 083601:1–083601:8.
8. Song, L.M.; Wang, D.N. A novel grating matching method for 3D reconstruction. NDT E Int. 2006, 39, 282–288.
9. Zagorchev, L.; Goshtasby, A. A paintbrush laser range scanner. Comput. Vis. Image Underst. 2006, 101, 65–86.
10. Song, L.; Qu, X.; Yang, Y. Application of structured lighting sensor for on-line measurement. Opt. Lasers Eng. 2005, 43, 1118–1126.
11. Huynh, D.Q. Calibration of a structured light stripe system: a novel approach. Int. J. Comput. Vis. 1999, 33, 73–86.
12. Liu, F.; Duan, F.; Ye, S. A new method for calibration of line structured light sensor using zigzag target. Meas. Technol. 1999, 7, 3–6.
13. Zhou, F.; Zhang, G.; Jiang, J. Constructing feature points for calibrating a structured light vision sensor by viewing a plane from unknown orientation. Opt. Lasers Eng. 2005, 43, 1056–1070.
14. Zhou, F.; Zhang, G. Complete calibration of a structured light stripe vision sensor through plane target of unknown orientation. Image Vision Comput. 2005, 23, 59–67.
15. McIvor, A.M. Nonlinear calibration of a laser stripe profiler. Opt. Eng. 2002, 41, 205–212.
16. Wei, Z.; Zhang, G.; Xu, Y. Calibration approach for structured-light-stripe vision sensor based on the invariance of double cross-ratio. Opt. Eng. 2003, 2956–2966.
17. Wang, G.; Hu, Z.; Wu, F.; Tsui, H.T. Implementation and experimental study on fast object modeling based on multiple structured light stripes. Opt. Lasers Eng. 2004, 42, 627–638.
18. Doignon, C.; Knittel, D. A structured light vision system for out-of-plane vibration frequencies location of a moving web. Mach. Vision Appl. 2005, 16, 289–297.
19. Chen, X.; Xi, J.; Jin, Y.; Sun, J. Accurate calibration for a camera-projector measurement system based on structured light projection. Opt. Lasers Eng. 2009, 47, 310–319.
20. Li, Z.; Shi, Y.; Wang, C.; Wang, Y. Accurate calibration method for a structured light system. Opt. Eng. 2008, 47, 053604:1–053604:9.
21. Li, Y.F.; Chen, S.Y. Automatic recalibration of an active structured light vision system. IEEE Trans. Robot. Autom. 2003, 19, 259–268.
22. Zhang, B.; Li, Y.F.; Wu, Y.H. Self-recalibration of a structured light system via plane-based homography. Pattern Recogn. 2007, 40, 1368–1377.
23. Chen, S.Y.; Li, Y.F. Self-recalibration of a color-encoded light system for automated three-dimensional measurements. Meas. Sci. Technol. 2003, 14, 33–40.
24. Zhang, B.; Li, Y. Dynamic calibration of relative pose and error analysis in a structured light system. J. Opt. Soc. Am. A 2008, 25, 612–622.
25. Canlin, L.; Ping, L.; Lizhuang, M. A camera online recalibration framework using SIFT. Visual Comput. 2010, 26, 227–240.
26. Lu, R.S.; Li, Y.F. Calibration of a 3D vision system using pattern projection. Sens. Actuat. A Phys. 2003, 104, 94–102.
27. Li, F.Y.; Zhang, B. A method for 3D measurement and reconstruction for active vision. Meas. Sci. Technol. 2004, 15, 2224–2232.
28. Mortenson, M.E. Geometric Modeling, 2nd ed.; Wiley: Salt Lake, UT, USA, 1997; pp. 83–105.
29. Hillier, F.S.; Lieberman, G.J. Introduction to Operations Research; McGraw-Hill: New York, NY, USA, 1982; pp. 754–758.
30. Gonzalez, R.C.; Wintz, P. Digital Image Processing, 2nd ed.; Addison-Wesley: Menlo Park, CA, USA, 1987; pp. 672–675.
31. Li, F.Y.; Chen, S.Y. Automatic recalibration of an active structured light vision system. IEEE Trans. Robot. 2003, 19, 259–268.
32. Peng, E.; Li, L. Camera calibration using one-directional information and its applications in both controlled and uncontrolled environments. Pattern Recogn. 2010, 43, 1188–1198.
33. Freund, J.E. Modern Elementary Statistics; Prentice Hall: Upper Saddle River, NJ, USA, 1979; pp. 249–251.
Figure 1. Geometry of the perspective projection model.
Figure 2. Mobile setup to perform the three-dimensional visualization.
Figure 3. (a) Geometry of an image plane parallel to the reference plane in the x-axis. (b) Geometry of an image plane parallel to the reference plane in the y-axis.
Figure 4. Pixels fitted to a continuous function by Bezier curves.
Figure 5. (a) Laser line aligned on a peak reference in the y-axis. (b) Setup at a different reference plane position from the initial configuration.
Figure 6. Derivative dk/ds for an optical axis perpendicular and not perpendicular to the x-axis.
Figure 7. (a) Laser line on the reference plane. (b) Laser line at 25.4 mm from the reference plane.
Figure 8. Structure of the Bezier network.
Figure 9. (a) Laser line projected on a surface. (b) Surface depth computed by the network from the laser line.
Figure 10. (a) Initial geometric configuration. (b) Geometry of the camera moved toward the laser line in the x-axis. (c) Geometry of the camera moved toward the object surface in the z-axis.
Figure 11. Geometry of an occlusion in the initial configuration.
Figure 12. Geometry of an image plane parallel and not parallel to the reference plane.
Figure 13. (a) Plastic fruit to be profiled. (b) Mobile setup for three-dimensional vision. (c) Occlusion based on the broken line. (d) Three-dimensional shape of the plastic fruit.
Figure 14. (a) Dummy face to be profiled. (b) Occlusion based on the broken line. (c) Three-dimensional shape of the dummy face.
Figure 15. (a) Metallic piece to be profiled. (b) Occlusion based on the broken line. (c) Three-dimensional shape of the metallic piece.
