Open Access: this article is freely available and re-usable.

*Sensors* **2010**, *10*(8), 7681-7704; doi:10.3390/s100807681

Article

Mobile Calibration Based on Laser Metrology and Approximation Networks

Centro de Investigaciones en Optica, Loma del Bosque 115, Col. Lomas del campestre, C.P. 37150, Leon, Guanajuato, Mexico; Tel.: +477-441-4200; Fax: +477-441-4209.

Received: 30 June 2010; in revised form: 20 July 2010 / Accepted: 5 August 2010 / Published: 17 August 2010

## Abstract

A mobile calibration technique for three-dimensional vision is presented. In this method, vision parameters are computed automatically by approximation networks built from the camera position and image processing of a laser line. The networks also perform the three-dimensional visualization. In the proposed system, the setup geometry can be modified online: an online re-calibration is performed from data provided by the network, and the required modifications of the extrinsic and intrinsic parameters are thus determined, overcoming the calibration limitations caused by the modification procedure. The mobile calibration also avoids the reference-based procedures used in traditional online re-calibration methods. The proposed mobile calibration thus improves the accuracy and performance of the three-dimensional vision, because online data of calibrated references are not passed on to the vision system. This work represents a contribution to the field of online re-calibration, as verified by a comparison with results based on lighting methods that are calibrated and re-calibrated via perspective projection. Processing time is also studied.

**Keywords:** three-dimensional vision; laser line projection; mobile calibration; Bezier networks

## 1. Introduction

Nowadays, various lighting methods are used to perform three-dimensional vision, such as fringe pattern projection and laser line and point projection, all of which require some form of calibration. Calibration for lighting methods is performed via perspective projection models [1]. In fringe projection, the calibration is performed based on calibrated references via perspective projection [2,3]. In this method, the three-dimensional vision is achieved by a phase detection algorithm. In line and point projection, the calibration is also achieved by perspective projection and the use of calibrated references [4,5], but here, the three-dimensional vision is performed by laser triangulation.

In the calibration and re-calibration of lighting methods, several methods based on perspective projection have been developed. One calibration method is performed by projecting a laser line on black and white rectangles [6,7]. The perspective projection is determined by matching the line to the known rectangles. A stereo calibration determines the perspective projection by matching a line of a grating and the use of epipolar geometry [8]. A paint brush method performs the calibration by projecting a line on two reference planes [9]. By detecting the line on these references, the perspective projection is determined via least squares. A lighting method performs the calibration based on the coordinates of a laser line [10]. Here, the perspective projection is determined by transforming the laser line coordinates to real world coordinates. A zigzag method performs the calibration by detecting a laser line on zigzag references [11,12]. Based on these references, the perspective projection is obtained via a transformation matrix. A vision sensor performs the calibration by projecting a laser line on a reference plane [13–17]. In this case, the perspective projection is determined by detecting the line on this reference plane. A structured light system performs the calibration by projecting a pattern of spots on a reference plane [18,19]. The perspective projection is determined by detecting the spots on this plane. Another type of calibration is performed by projecting a spot pattern and a fringe pattern [20]. In this method, the perspective projection is determined by detecting the point-to-line correspondence on a plane. Re-calibration methods have also been implemented to change the vision parameters when the base setup is modified. One such re-calibration method is performed by detecting a pattern of lines on a reference plane to determine the perspective projection [21]. Self re-calibration methods have been implemented via plane-based homography [22–24], in which the perspective projection is determined by matching the light pattern on a reference plane.

Online re-calibration methods have also been developed to change the vision parameters during the vision task [25–27]. In these methods, the perspective projection is determined by matching the light pattern on a reference plane. In the above-mentioned techniques, the vision system does not provide the data to perform the re-calibration. Typically, these online re-calibration techniques are performed by detecting a light pattern on a reference. However, in several applications such references do not exist during the vision task, so the mentioned techniques are limited by the availability of light pattern references. To overcome these limitations, a re-calibration method without online references is necessary to facilitate online modifications of the setup geometry.

The proposed mobile calibration is performed by means of a Bezier network, which provides the data needed for online re-calibration, and laser line imaging. In this procedure, the camera orientation, focal distance, setup distances, pixel scale and image centre are determined. In addition, three-dimensional vision is performed by the network via line shifting: the network retrieves the surface depth and provides the data for the re-calibration when the setup geometry is modified online. The extrinsic and intrinsic parameters are thus re-calibrated online and the need for references is avoided. Consequently, the mobile calibration improves the performance and the accuracy of the online re-calibration. All this constitutes a contribution to the field of re-calibration of lighting methods. This contribution is elucidated by an evaluation based on the calibration and re-calibration of lighting methods, using the root mean squared error with a contact method as reference. Finally, the processing time to produce the three-dimensional visualization is also determined.

## 2. Basic Theory

In lighting methods, calibration is performed based on perspective projection [6–24]. This procedure is carried out by means of calibrated references and a transformation matrix. Typically, the perspective projection model is determined based on the geometry shown in Figure 1. In this geometry, a point P_w = (x_w, y_w, z_w) is transformed to the camera coordinates P_c = (x_c, y_c, z_c) by P_c = **R**·P_w + **t**, where **R** is the rotation matrix and **t** is the translation vector. The transformation of P_c to the image coordinates (X_u, Y_u) is given by X_u = f·x_c/z_c and Y_u = f·y_c/z_c. Considering radial distortion, the image coordinates are represented by X_d + D_x = X_u and Y_d + D_y = Y_u, where D_x = X_d(δ_1 r² + δ_2 r⁴ + …), D_y = Y_d(δ_1 r² + δ_2 r⁴ + …) and r = (X_d² + Y_d²)^{1/2}. In these expressions, X_d and Y_d are the distorted coordinates. The pixel coordinates are also converted into real coordinates by means of a scaling factor η. Thus, the parameters to be calibrated are the matrix **R**, the vector **t**, the focal length f, the distortion coefficients δ_i, the image center (c_x, c_y) and the scaling factor η. This procedure is carried out by detecting calibrated references on a reference plane and the use of a transformation matrix [6–27]. Then, the calibration data are passed to the vision system to perform three-dimensional visualization.

In several applications, the setup geometry is modified online to achieve good sensitivity and to avoid occlusions. In this case, a re-calibration is necessary for each modification [18,22]. In perspective projection, the translation vector **t** is the position vector from O_w to O_c. This vector has components along the x-, y- and z-axes from the world coordinates O_w to the camera coordinates O_c. The distances of these components are determined in the initial calibration, but the components of vector **t** are modified when the camera is moved. In this case, these components are re-calibrated via calibrated references to perform the transformation from P_w to P_c [23]. The transformation P_c = **R**·P_w + **t** to the coordinates (X_u, Y_u) should also be recomputed. However, in several applications calibrated references do not exist during the three-dimensional vision task, so established online re-calibration methods are limited by the availability of known references. To overcome these limitations, a re-calibration method without online references should be implemented.

In the proposed mobile calibration, a Bezier network provides the data to perform the online re-calibration and three-dimensional visualization based on a mobile setup and image processing of a laser line. The mobile setup to perform the three-dimensional vision is shown in Figure 2. This arrangement includes an electromechanical device, a CCD camera, a laser line projector and a computer to process the data. In this setup, the laser line is projected perpendicularly onto the surface and the CCD image plane is aligned parallel to the reference plane. In this geometry, the laser line reflected to the CCD camera forms an angle that varies according to the position of the reference plane in the z-axis. The orientations of the CCD camera and the laser line are fixed. The alignment of the camera and laser line is described in Section 3. The electromechanical device moves the laser and the camera in the x-, y- and z-axes. In addition, the camera can also be moved toward the laser diode along the x-axis.

In this system, a network computes the surface depth based on the line position. The geometry of this relationship is shown in Figure 3(a). In this geometry, the x-axis and y-axis are located on the reference plane and the z-axis is perpendicular to the reference plane. The focal length f is the distance between the lens and the image plane. The image center is indicated by x_c on the x-axis. The distance between the laser line and the optical axis is indicated by ℓ_a. The surface depth is indicated by h_i and z_i is the distance between the lens and the object surface. The distance from the lens to the reference plane is defined by D = h_i + z_i. In the proposed setup, the distances ℓ_a and D can be modified during the visualization procedure. The laser line coordinates are indicated in the y-axis based on the geometry shown in Figure 3(b). In this geometry, a point q_i of the laser line in the y-axis is indicated by y_i in the image plane. Thus, the laser line coordinates are determined by q_i = D η (y_c − y_i)/(η f). In this expression, y_i is the image row and the parameters D, f, η and y_c are deduced during the mobile calibration, which is described in Section 4. In perspective projection, the surface depth is computed by z_i = (f·ℓ_a)/(x_c − x_i) [1], based on the calibrated f and ℓ_a. In the proposed model, the surface depth is computed based on the line shifting in the image plane. When the laser line is projected on a surface h_i, the line position is moved from x_A to x_i in the image plane. In this case, the line shifting s_i is directly proportional to the surface depth h_i. This line shifting is described by the following expression:
$${s}_{i}={x}_{A}-{x}_{i}$$
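As a minimal numerical sketch of the relations above, the following Python fragment evaluates the line shifting of Equation (1), the perspective triangulation depth z_i, and the laser line coordinate q_i. All numeric values are hypothetical; in the paper, D, f, η, x_c, y_c and ℓ_a are obtained by the mobile calibration, not set by hand.

```python
# Sketch of the Section 2 relations; all numbers are hypothetical.

def line_shifting(x_A, x_i):
    """Equation (1): s_i = x_A - x_i (pixels)."""
    return x_A - x_i

def depth_perspective(f, l_a, x_c, x_i):
    """Classic triangulation depth z_i = f * l_a / (x_c - x_i)."""
    return f * l_a / (x_c - x_i)

def laser_line_coordinate(D, f, y_c, y_i):
    """Laser line coordinate q_i = D * eta * (y_c - y_i) / (eta * f);
    the scale factor eta cancels."""
    return D * (y_c - y_i) / f

print(line_shifting(120.0, 112.5))                    # 7.5 pixels
print(depth_perspective(1200.0, 50.0, 320.0, 200.0))  # 500.0 mm
print(laser_line_coordinate(600.0, 1200.0, 240.0, 180.0))  # 30.0 mm
```
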

To compute the shift, the line positions x_A and x_i are detected in the image. To carry this out, the intensity maximum is measured in each row of the image. Then, first and second derivatives are computed to obtain the maximum. To detect the maximum, the pixels are approximated by a continuous function by means of Bezier curves [28]. In this case, the pixels are represented by (x_0, I_0), (x_1, I_1),…, (x_n, I_n), where x_i is the pixel position, I_i is the pixel intensity and n is the pixel number. The Bezier curves are described by:
$$\begin{array}{llll}P(u)=\sum _{i=0}^{n}\left(\begin{array}{c}n\\ i\end{array}\right){(1-u)}^{n-i}{u}^{i}{p}_{i},\hfill & \hfill & \left(\begin{array}{c}n\\ i\end{array}\right)=\frac{n!}{i!(n-i)!},\hfill & 0\le u\le 1\hfill \end{array}$$

By applying the definition of Equation (2), two equations are obtained, one for x and one for I:

$$x(u)=\binom{n}{0}{(1-u)}^{n}{u}^{0}{x}_{0}+\binom{n}{1}{(1-u)}^{n-1}u\,{x}_{1}+\dots+\binom{n}{n}{(1-u)}^{0}{u}^{n}{x}_{n},\quad 0\le u\le 1$$

$$I(u)=\binom{n}{0}{(1-u)}^{n}{u}^{0}{I}_{0}+\binom{n}{1}{(1-u)}^{n-1}u\,{I}_{1}+\dots+\binom{n}{n}{(1-u)}^{0}{u}^{n}{I}_{n},\quad 0\le u\le 1$$
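As an illustration, the curve fit of Equations (3) and (4) and the derivative-based search for the intensity maximum described in this section can be sketched in Python. The pixel positions and intensities below are hypothetical, and the derivative is evaluated numerically rather than in closed form.

```python
# Fit a Bezier curve to one row of (hypothetical) pixel intensities and
# locate the intensity maximum by bisection on the first derivative I'(u).
from math import comb

def bezier(points, u):
    """Evaluate a Bezier curve with the given control points at u in [0, 1]."""
    n = len(points) - 1
    return sum(comb(n, i) * (1 - u) ** (n - i) * u ** i * p
               for i, p in enumerate(points))

def d_intensity(intensities, u, eps=1e-6):
    """Numerical first derivative I'(u) by central difference."""
    return (bezier(intensities, u + eps) - bezier(intensities, u - eps)) / (2 * eps)

def peak_position(xs, intensities, tol=1e-9):
    """Bisection on I'(u) = 0, then map u back to a pixel position x(u)."""
    ui, uf = 0.0, 1.0
    while uf - ui > tol:
        um = (ui + uf) / 2.0
        if d_intensity(intensities, um) > 0.0:   # maximum lies to the right
            ui = um
        else:                                    # maximum lies to the left
            uf = um
    u = (ui + uf) / 2.0
    return bezier(xs, u)

xs = [30.0, 31.0, 32.0, 33.0, 34.0, 35.0, 36.0]   # pixel positions x_i
Is = [10.0, 40.0, 90.0, 140.0, 150.0, 90.0, 20.0]  # pixel intensities I_i
print(round(peak_position(xs, Is), 3))             # sub-pixel line position
```
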

Equation (3) represents the pixel position and Equation (4) represents the pixel intensity. Based on these equations, a continuous function is fitted to the pixels shown in Figure 4. To carry this out, the positions x_0, x_1, x_2,…, x_n are substituted into Equation (3) and the intensities I_0, I_1, I_2,…, I_n are substituted into Equation (4). These two equations are evaluated in the interval 0 ≤ u ≤ 1 to fit the curve shown in Figure 4. The result of this fit is a concave function. Therefore, the second derivative I″(u) is negative and the peak is a global maximum. In this manner, the maximum is computed from the condition I′(u) = 0 [29]. To find the root of I′(u) = 0, the bisection method is applied [29]. The Bezier function is defined in the interval 0 ≤ u ≤ 1, so the initial value is defined by u_i = 0 and the final value is indicated by u_f = 1. Then, the middle point is computed by u* = (u_i + u_f)/2 to find a value u that converges to the expression I′(u) = 0. Next, the first derivative I′(u) is evaluated at the middle point u*. If the derivative I′(u = u*) is positive, then u_i = u*. If the derivative I′(u = u*) is negative, then u_f = u*. The next middle point u* is obtained from the last pair of values u_i and u_f. These steps are repeated until I′(u) = 0 is found to within a set tolerance. The value u = u* where I′(u) = 0 is substituted into Equation (3) to determine the position of the intensity maximum x(u). The result is x(u) = 34.274 and the laser line position is x_i = 34.274 pixels, as shown in Figure 4. Thus, the laser line position is detected. The Bezier network to perform the three-dimensional vision and calibration is described in Section 3.

## 3. Network Structure for Depth Contouring

The three-dimensional vision is performed by a Bezier network based on the line shifting. This network is built based on an image plane parallel to the reference plane in the x-axis and y-axis. Based on the geometry of Figure 3(a), the laser line is perpendicular to the reference plane in the x-axis. In this case, the line position along the x-axis is constant for any surface depth h_i. By means of this criterion, the laser line is aligned perpendicularly to the x-axis. To carry this out, the laser line is projected on a peak reference along the y-axis. Figure 5(a) shows the laser line aligned in the y-axis on the peak reference. Then, the reference plane is moved in the z-axis, as shown in Figure 5(b). By rotating the laser diode in 0.0896 degree steps, the laser line is positioned on the reference for any depth h_i of the reference plane. By image processing, the reference position and the line position are detected at each displacement of depth h_i. In this case, the line position is the same in the x-axis for any position of the reference plane. Thus, a laser line perpendicular to the reference plane in the x-axis is achieved. Now, the image plane is aligned parallel to the reference plane in the x-axis. Based on the setup geometry of Figure 3(a), the term (k_i/h_i) = [ℓ_a/(D − h_i)] = [(η x_c − η x_i)/(η f)] is obtained. In this term, η is the scale factor in millimeters, f is in pixels and (x_c − x_i) = s_i + (x_c − x_A). Thus, the following expression for the line shifting is obtained:
$${k}_{i}=\frac{\eta ({x}_{c}-{x}_{i}){h}_{i}}{\eta f}=\frac{[({x}_{c}-{x}_{A})+{s}_{i}]{h}_{i}}{f}$$

In Equation (5), f, x_c and x_A are constants. In this case, a linear h_i produces a linear s_i. Conversely, a linear s_i produces a linear k_i. Therefore dk/ds, the derivative of k_i with respect to s_i, is a constant. Another camera orientation is an optical axis that is not perpendicular to the reference plane. In this case, a linear s_i does not produce a linear k_i and the derivative dk/ds is not a constant.

The camera orientation along the y-axis is established based on the geometry of Figure 3(b). In this geometry, a line pattern is moved in steps y_i in the image plane in the y-axis based on the depth h_i. In this geometry, the optical axis is perpendicular to the reference plane and the term (q_i/h_i) = [ℓ_b/(D − h_i)] = [η (y_c − y_i)/(η f)] is obtained. Here, the position of the pattern shifting is computed by t_i = (y_0 − y_i) = (y_c − y_i) − (y_c − y_A). Thus, the following expression is obtained:
$${q}_{i}=\frac{\eta ({y}_{c}-{y}_{i}){h}_{i}}{\eta f}=\frac{[({y}_{c}-{y}_{A})+{t}_{i}]{h}_{i}}{f}$$

In Equation (6), f, y_c and y_A are constants. In this case, a linear h_i produces a linear t_i. A linear t_i also produces a linear q_i, so the derivative dq/dt is a constant. In this manner, the camera orientation is defined by dk/ds = constant and dq/dt = constant. Due to distortion, these derivatives are not exactly constant, but they may be considered constant. For the camera orientation in the x-axis, the reference plane is moved from h_0 to h_1, h_2, h_3,…, h_n by means of the electromechanical device. For each depth h_i, the line position x_i is computed by the procedure described in Section 2. Then, the shifting s_i is computed via Equation (1), the derivative dk/ds is computed, and it is evaluated with respect to the derivative dk_1/ds. If dk/ds is bigger than dk_1/ds, the camera is rotated to the right in steps of 0.0896 degrees. Again, the reference plane is moved from h_0 to h_1, h_2, h_3,…, h_n and the derivative is computed. If dk/ds is less than dk_1/ds, the camera is moved in the opposite direction. The rotation to the left and right is repeated until the minimum error of the derivative dk/ds with respect to dk_1/ds is found. In this case, the derivative dk/ds is not exactly a constant, but it is close to constant. This criterion is illustrated by the derivative shown by the solid line in Figure 6, where the dashed line represents dk/ds of an optical axis aligned at an angle smaller than 90° and the dotted line is dk/ds of an optical axis aligned at an angle greater than 90°.

For the y-axis camera orientation, the derivative dq/dt is computed. To carry this out, the position t_i is computed in the image plane based on h_i. This is done by detecting the corner of the laser line in the y-axis via edge detection. Figure 7(a) shows the corner position y_0 of the line in the y-axis at the reference plane h_0. Then, the reference plane is moved in the z-axis and the corner is detected to obtain y_1. This procedure is repeated to obtain y_2, y_3,…, y_n. Figure 7(b) shows the corner position y_10 of the line in the y-axis at the reference plane h_10. Based on these data, t_i and the derivative dq/dt are computed. Then, this derivative is evaluated with respect to the derivative dq_1/dt. The camera is rotated in the y-axis to the right or the left in the same manner as in the x-axis orientation. The derivative dq/dt obtained in this procedure is not exactly a constant, but again it is close to being a constant. Therefore, the camera is parallel to the reference plane when dk/ds and dq/dt are very close to a constant within a tolerance. In this manner, the image plane has been aligned parallel to the reference plane. From here, the laser line is aligned perpendicular to the reference plane and the camera is fixed.

Based on an image plane parallel to the reference plane, the network is built. To carry this out, the camera-alignment data h_i and s_i are used. The structure of the proposed network is shown in Figure 8. This network consists of an input vector, two parametric inputs, a hidden layer and an output layer. Each layer of the network is constructed as follows: the input includes the depth h_i, the line shifting s_i and the parametric values (u, v). The depth data h_0, h_1, h_2,…, h_n and the line shifting data s_0, s_1, s_2,…, s_n are obtained in the camera alignment by moving the reference plane in the z-axis. Thus, the line shifting s_i is directly proportional to the surface depth h_i. In this case, the line shifting is represented by a parametric value u via the following linear combination (LC):

$$u={a}_{0}+{a}_{1}s$$

where a_0 and a_1 are constants to be determined. By means of two values s_i and their respective values u, Equation (7) is determined. The Bezier curves are defined in the interval 0 ≤ u ≤ 1. Therefore, u = 0 for the first line shifting and u = 1 for the last shifting s_n. Substituting these values in Equation (7), two equations with two unknown constants are obtained. Solving these equations, a_0 and a_1 are determined. Thus, for each shifting s_i, a value u is computed via Equation (7).

The coordinate y_i corresponds to each row of the laser line image. This coordinate is represented by a parametric value v by the following expression:

$$v={b}_{0}+{b}_{1}y$$

where b_0 and b_1 are constants to be determined. Using two values y_i and their respective values v, Equation (8) is determined. Bezier curves are defined in the interval 0 ≤ v ≤ 1. In this case, v = 0 for y_0 and v = 1 for y_n. Substituting these two values in Equation (8), two equations with two unknown constants are obtained. Solving these equations, b_0 and b_1 are determined. Thus, for each coordinate y_i, a value v is computed via Equation (8). The hidden layer is built by a Bezier basis function, which is described by:

$${\mathcal{B}}_{ij}={B}_{i}\left(u\right){B}_{j}\left(v\right)$$

where ${B}_{i}\left(u\right)=\binom{n}{i}{u}^{i}{(1-u)}^{n-i}$, ${B}_{j}\left(v\right)=\binom{m}{j}{v}^{j}{(1-v)}^{m-j}$, $\binom{n}{i}=\frac{n!}{i!(n-i)!}$ and $\binom{m}{j}=\frac{m!}{j!(m-j)!}$.

The output layer is obtained by the summation of the neurons, each multiplied by a weight. Thus, the output response is the surface depth given by the following expression:

$$H\left(u,v\right)=\sum _{i=0}^{n}\sum _{j=0}^{m}{w}_{ij}{h}_{i}{B}_{i}\left(u\right){B}_{j}\left(v\right),\quad 0\le u\le 1,\quad 0\le v\le 1$$

where w_ij are the weights, h_i is the surface depth, and B_i(u) and B_j(v) are the Bezier basis functions of Equation (9). To construct the complete network of Equation (10), the appropriate weights w_ij should be determined. To carry this out, the network is forced to produce the correct surface depth h_i. This procedure is performed by an adjustment mechanism. Based on the reference data obtained in the camera alignment, the initial depth is h_0 = 0 mm, the line position in the image plane is x_A = x_0 and s_0 = 0. The line position x_0 in the x-axis is shown in Figure 7(a). In the camera alignment, the reference plane is moved in the z-axis in steps of 2.54 mm. Thus, h_10 = 25.40 mm and the line position corresponds to x_i = x_10, as shown in Figure 7(b). Here, the shifting is determined by s_10 = x_A − x_i. Then, each s_i and its coordinate y_i are converted to values (u, v) via Equation (7) and Equation (8), respectively. Then, the depth h_i and its coordinates (u, v) are substituted into Equation (10) to obtain an output H(u, v), thus giving the following system of equations:
$$\begin{array}{l}\mathrm{H}\left(u=0,v=0\right)={h}_{0}={w}_{00}{h}_{0}{B}_{0}\left(u\right){B}_{0}\left(v\right)+{w}_{01}{h}_{0}{B}_{0}\left(u\right){B}_{1}\left(v\right)+\dots+{w}_{0m}{h}_{0}{B}_{0}\left(u\right){B}_{m}\left(v\right)+\dots\\ \quad+{w}_{10}{h}_{1}{B}_{1}\left(u\right){B}_{0}\left(v\right)+\dots+{w}_{1m}{h}_{1}{B}_{1}\left(u\right){B}_{m}\left(v\right)+\dots+{w}_{nm}{h}_{n}{B}_{n}\left(u\right){B}_{m}\left(v\right)\\ \vdots\\ \mathrm{H}\left(u=1,v=1\right)={h}_{n}={w}_{00}{h}_{0}{B}_{0}\left(u\right){B}_{0}\left(v\right)+{w}_{01}{h}_{0}{B}_{0}\left(u\right){B}_{1}\left(v\right)+\dots+{w}_{0m}{h}_{0}{B}_{0}\left(u\right){B}_{m}\left(v\right)+\dots\\ \quad+{w}_{10}{h}_{1}{B}_{1}\left(u\right){B}_{0}\left(v\right)+\dots+{w}_{1m}{h}_{1}{B}_{1}\left(u\right){B}_{m}\left(v\right)+\dots+{w}_{nm}{h}_{n}{B}_{n}\left(u\right){B}_{m}\left(v\right)\end{array}$$

This linear system of Equation (11) can be represented as:

$$\begin{array}{l}{H}_{00}={w}_{00}{\beta}_{0,0}+{w}_{01}{\beta}_{0,1}+\dots+{w}_{nm}{\beta}_{0,nm}\\ {H}_{01}={w}_{00}{\beta}_{1,0}+{w}_{01}{\beta}_{1,1}+\dots+{w}_{nm}{\beta}_{1,nm}\\ \vdots\\ {H}_{nm}={w}_{00}{\beta}_{nm,0}+{w}_{01}{\beta}_{nm,1}+\dots+{w}_{nm}{\beta}_{nm,nm}\end{array}$$

This equation can be rewritten in matrix form as **β W** = **H**. Thus, the linear system is represented by the following matrix:
$$\left[\begin{array}{cccc}{\beta}_{0,0}& {\beta}_{0,1}& \cdots & {\beta}_{0,nm}\\ {\beta}_{1,0}& {\beta}_{1,1}& \cdots & {\beta}_{1,nm}\\ \vdots & \vdots & \ddots & \vdots \\ {\beta}_{nm,0}& {\beta}_{nm,1}& \cdots & {\beta}_{nm,nm}\end{array}\right]\left[\begin{array}{c}{w}_{00}\\ {w}_{01}\\ \vdots \\ {w}_{nm}\end{array}\right]=\left[\begin{array}{c}{H}_{00}\\ {H}_{01}\\ \vdots \\ {H}_{nm}\end{array}\right]$$
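A minimal numerical sketch of assembling and solving this linear system follows. The training data are hypothetical, and least squares (`np.linalg.lstsq`) is used in place of the Cholesky solution applied in the paper, because the toy matrix β below is rank-deficient (the columns multiplied by h_0 = 0 vanish).

```python
# Build the matrix of Equation (13) and solve beta * W = H for the weights.
import numpy as np
from math import comb

def bernstein(n, i, t):
    """Bernstein basis B_i(t) of Equation (9)."""
    return comb(n, i) * t ** i * (1 - t) ** (n - i)

def design_matrix(us, vs, hs, n, m):
    """Rows beta_{k,(i,j)} = h_i * B_i(u_k) * B_j(v_k), as in Equation (13)."""
    return np.array([[hs[i] * bernstein(n, i, u) * bernstein(m, j, v)
                      for i in range(n + 1) for j in range(m + 1)]
                     for u, v in zip(us, vs)])

# Hypothetical training data: depths measured while moving the reference
# plane, with parametric coordinates (u, v) from Equations (7) and (8).
n = m = 2
hs = [0.0, 12.7, 25.4]                 # depths h_i in mm (h_0 = 0)
us = np.repeat([0.0, 0.5, 1.0], 3)     # 3 x 3 grid of (u, v) samples
vs = np.tile([0.0, 0.5, 1.0], 3)
H_target = 25.4 * us                   # here the depth varies with u only

beta = design_matrix(us, vs, hs, n, m)
w, *_ = np.linalg.lstsq(beta, H_target, rcond=None)
pred = beta @ w                        # network output on the training grid
print(np.max(np.abs(pred - H_target)) < 1e-6)   # True
```
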

The linear system of Equation (13) is solved by the Cholesky method and the weights w_ij are determined. In this manner, the Bezier network H(u, v) has been completed. The result of this network is a model that computes the surface depth h(x, y) via line shifting. The network is applied to the laser line shown in Figure 9(a) to obtain the surface depth. To carry this out, the shifting s_i is detected in each image row y_i. Then, the shifting s_i and its coordinate y_i are converted to values (u, v), respectively. Then, these values are substituted into the network Equation (10) to compute the surface depth shown in Figure 9(b). In this figure, the symbol “Δ” is the data provided by a coordinate measurement machine (CMM).

To determine the accuracy, the network data are compared with the data provided by the CMM. The accuracy is computed as a root mean squared error (rms) [30] by:

$$\mathit{rms}=\sqrt{\frac{1}{n}\sum _{i=1}^{n}{({\mathit{ho}}_{i}-{\mathit{hc}}_{i})}^{2}}$$

where ho_i is the data provided by the CMM, hc_i is the data calculated by the network and n is the number of data. For the data shown in Figure 9(b), the error is rms = 0.148 mm. The depth resolution is deduced from the detection of the minimum line shifting s_i. In this case, the network is built using the minimum and maximum s_i at distance ℓ_a. For this configuration, a shifting s_i = 0.38 pixels is detected from the reference plane. Based on this shifting, the network computes a depth h = 0.28 mm. Thus, small details around h = 0.28 mm can be detected and the network sensitivity has been determined. The calibration of the vision parameters based on the network is described in Section 4.

## 4. Parameters of the Vision System

In the lighting methods, the calibration is performed based on perspective projection [6–22]. In this model, the extrinsic and intrinsic parameters are determined based on calibrated references. Thus, the matrix **R**, vector **t**, focal length f, distortion δ_i, image center (c_x, c_y) and the scale factor η are calibrated. Typically, these lighting systems do not provide the data needed to perform the re-calibration. Any time the setup is modified, a re-calibration should be applied. This procedure provides the vision system with the ability to change the intrinsic and extrinsic parameters. This is the case when the camera position is modified: the components of vector **t** are changed and the distances from the origin of the world coordinates O_w to the camera coordinates O_c should be re-calibrated. Recently, self re-calibration and online re-calibration have been developed to change the vision parameters [22–27]. In these methods, the data for online re-calibration are determined by detecting a light pattern on calibrated references. This kind of re-calibration is suitable when the references exist during the vision task, but in several applications such references do not exist. In this case, the online re-calibration cannot be completed due to the lack of references, and an online re-calibration without references is thus necessary.

In the proposed mobile calibration, the data for online re-calibration are provided by the Bezier network and laser line imaging, thus avoiding the need for pattern references. The proposed vision system can be moved in the x-, y- and z-axes. In addition, the camera can be moved toward the laser line. Here, the setup geometry is modified when the camera is moved in the z-axis. The geometry is also modified when the camera is moved toward the laser line. In this case, a mobile calibration is applied to perform the online re-calibration. This procedure is performed based on the setup geometry shown in Figure 3(a). Here, the calibration is performed with the camera parallel to the reference plane. The triangulation of this geometry is described by Equation (5).
Considering radial distortion, the line position is defined by x_{A} = X_{A} + δx_{A} and x_{i} = X_{i} + δx_{i}, where X_{A} and X_{i} are the undistorted image coordinates and the distortion is indicated by δx_{A} and δx_{i}, respectively. Thus, the line shifting is defined by S_{i} = (x_{c} − X_{i}) − (x_{c} − X_{A}). Therefore, the projection k_{i} of Equation (5) is rewritten as:
$${K}_{i}=\frac{\eta ({x}_{c}-{X}_{i}){h}_{i}}{\eta f}=\frac{[({x}_{c}-{X}_{A})+{S}_{i}]{h}_{i}}{f}$$

The distortion can be described from the terms of Equation (5) and Equation (15) by the following expression:

$$\begin{array}{c}{h}_{i}=\frac{{fK}_{i}}{\eta [({x}_{c}-{X}_{A})+{S}_{i}]}=\frac{{\mathit{fk}}_{i}}{\eta [({x}_{c}-{x}_{A})+{s}_{i}]}\\ \frac{{K}_{i}}{[({x}_{c}-{x}_{A}+\delta {x}_{A})+({s}_{i}+\delta {x}_{i}-\delta {x}_{A})]}=\frac{{k}_{i}}{[({x}_{c}-{x}_{A})+{s}_{i}]}\end{array}$$

From this equation, the distortion δx_{i} is described by:
$${\delta x}_{i}=\frac{{K}_{i}}{{k}_{i}}({x}_{c}-{x}_{A}+{s}_{i})+({x}_{A}-{x}_{c}-{s}_{i})$$

The distortion can be determined from the terms of the line shifting S_{i} = (x_{c} − X_{i}) − (x_{c} − X_{A}) and s_{i} = (x_{A} − x_{i}). Here, the first line shifting s_{1} is defined without distortion. In this manner, S_{i} = i*s_{1}, K_{i} = i*k_{1} and dK_{i}/dS = dk_{1}/ds for i = 1, 2,…, n. Based on these criteria, the undistorted shifting is described by the term i*s_{1} = (x_{c} − x_{i} + δx_{i}) − (x_{c} − x_{A} + δx_{A}) = (x_{A} − x_{i}) + (δx_{i} − δx_{A}). Thus, the distortion is obtained by δx_{i} = i*s_{1} − (x_{A} − x_{i}) + δx_{A}. Since the first line shifting is defined without distortion, δx_{A} = 0, δx_{1} = 0 and S_{1} = s_{1}, so the distortion becomes δx_{i} = i*s_{1} − (x_{A} − x_{i}). Thus, the distortion is defined by:
$${\delta x}_{i}=i*{s}_{\mathit{1}}-\left({x}_{A}-{x}_{i}\right)+{\delta x}_{A}=i*{s}_{\mathit{1}}-\left({x}_{A}-{x}_{i}\right)\mathrm{\hspace{0.17em}\u200a\u200a}\text{for}\mathrm{\hspace{0.17em}\u200a\u200a}i=2,3,\dots ,n$$

The y-axis distortion is determined based on the setup geometry shown in Figure 3(b). The triangulation of this geometry is described by the expression of Equation (6). Considering radial distortion, y_{A} = Y_{A} + δy_{A} and y_{i} = Y_{i} + δy_{i}, where Y_{A} and Y_{i} are the undistorted image coordinates and the distortion is indicated by δy_{A} and δy_{i}, respectively. Thus, the pattern shifting is defined by T_{i} = (y_{c} − Y_{A}) − (y_{c} − Y_{i}). In this manner, the projection q_{i} of Equation (6) is rewritten as:
$${Q}_{i}=\frac{\eta ({y}_{c}-{Y}_{i}){h}_{i}}{\eta f}=\frac{[({y}_{c}-{Y}_{A})+{T}_{i}]{h}_{i}}{f}$$

The procedure used to determine the distortion in the x-axis is applied to find the distortion δy_{i}. Thus, the distortion in the y-axis is defined by:
$${\delta y}_{i}=i*{t}_{\mathit{1}}-\left({y}_{A}-{y}_{i}\right)+{\delta y}_{A}=i*{t}_{\mathit{1}}-\left({y}_{A}-{y}_{i}\right)\mathrm{\hspace{0.17em}\u200a\u200a}\text{for}\mathrm{\hspace{0.17em}\u200a\u200a}i=2,3,\dots ,n$$
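As a concrete illustration, the distortion estimate above reduces to a short computation: the first line shift s_{1} is taken as distortion-free, so the undistorted shift at step i is i·s_{1}, and the radial distortion is its difference with the observed shift. The following sketch uses hypothetical pixel values (x_{A}, the observed positions x_{i} and the shift s_{1} are assumptions, not measurements from this work):

```python
# Sketch of the x-axis distortion estimate: delta_x_i = i*s1 - (x_A - x_i)
# for i = 2, 3, ..., n. All pixel values below are hypothetical.

def distortion(x_a, positions, s1):
    """Radial distortion per step from the observed line positions."""
    return [i * s1 - (x_a - x_i)
            for i, x_i in enumerate(positions, start=2)]

# Reference line at x_A = 400 px; the undistorted shift is s1 = 20 px per
# step, while the observed positions include a small radial distortion.
x_a = 400.0
observed = [360.2, 340.5, 321.1]        # x_2, x_3, x_4 (assumed)
deltas = distortion(x_a, observed, 20.0)
```

The same routine applies unchanged to the y-axis case with t_{1} and the y_{i} positions.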

In this manner, the distortion has been deduced. The vision parameters are then deduced based on an image plane parallel to the reference plane. This procedure is carried out based on the setup geometry of Figure 3(a), which is described by:

$$\frac{{\ell}_{a}}{D-{h}_{i}}=\frac{\eta ({x}_{c}-{X}_{i})}{f}=\frac{\eta [({x}_{c}-{X}_{A})+{S}_{i}]}{f}$$

In this equation, the constants D, ℓ_{a} and f are in millimeters, x_{c} is in pixels and η is the scale factor. To determine these parameters, Equation (21) is rewritten as the following system of equations:
$$\begin{array}{c}{h}_{0}=D-\frac{f{\ell}_{a}}{\eta \left({x}_{c}-{X}_{A}+{S}_{0}\right)}\\ {h}_{1}=D-\frac{f{\ell}_{a}}{\eta \left({x}_{c}-{X}_{A}+{S}_{1}\right)}\\ {h}_{2}=D-\frac{f{\ell}_{a}}{\eta \left({x}_{c}-{X}_{A}+{S}_{2}\right)}\\ {h}_{3}=D-\frac{f{\ell}_{a}}{\eta \left({x}_{c}-{X}_{A}+{S}_{3}\right)}\\ {h}_{4}=D-\frac{f{\ell}_{a}}{\eta \left({x}_{c}-{X}_{A}+{S}_{4}\right)}\end{array}$$
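To illustrate how a few (S_{i}, h_{i}) pairs pin down the setup constants, note that Equation (21) has the form h_{i} = D − c/(b + S_{i}) with c = fℓ_{a}/η and b = x_{c} − X_{A}, so three pairs suffice for b, c and D (f, ℓ_{a} and η then follow from their definitions). The sketch below is not the paper's solution procedure: it solves three of the equations by a ratio-of-differences reduction and bisection, using synthetic values rather than the reported calibration data.

```python
# Minimal sketch of recovering the constants in the Equation (22) system.
# Equation (21) gives h_i = D - c/(b + S_i), and the ratio
# (h0-h1)/(h0-h2) depends only on b, which is found by bisection.

def solve_triangulation(S, h, lo=1.0, hi=5000.0):
    s0, s1, s2 = S
    h0, h1, h2 = h
    r = (h0 - h1) / (h0 - h2)

    def ratio(b):
        return (1/(b+s1) - 1/(b+s0)) / (1/(b+s2) - 1/(b+s0)) - r

    for _ in range(200):                 # bisection on b = x_c - X_A
        mid = 0.5 * (lo + hi)
        if ratio(lo) * ratio(mid) <= 0:
            hi = mid
        else:
            lo = mid
    b = 0.5 * (lo + hi)
    c = (h0 - h1) / (1/(b+s1) - 1/(b+s0))   # c = f*l_a/eta
    D = h0 + c / (b + s0)
    return D, b, c

# Synthetic example: D = 380 mm, b = 200 px, c = 5000 (mm*px).
S = [0.0, 40.0, 80.0]
h = [380 - 5000 / (200 + s) for s in S]
D, b, c = solve_triangulation(S, h)
```

With all five equations, the same reduction can be applied in a least-squares sense for robustness against noise in the h_{i}.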

The values h_{0}, h_{1},…, h_{4} are computed by the network based on s_{0}, s_{1},…, s_{4}. The values X_{A} and S_{i} are computed using the known s_{i}, δx_{A} and δx_{i}, and these values are then substituted in Equation (22) to solve the system of equations and thus determine the constants D, ℓ_{a}, f, η and x_{c}. The coordinate y_{c} is computed based on the geometry of Figure 3(b). Here, the parameters η, T_{i} = i × t_{1} and Y_{i} are known and T_{i} = η(y_{c} − Y_{i+1}) − η(y_{c} − Y_{1}). Thus, y_{c} is determined by the following system of equations:
$$\begin{array}{c}{T}_{\mathit{1}}=\eta \left({y}_{c}-{Y}_{\mathit{2}}\right)-\eta \left({y}_{c}-{Y}_{\mathit{1}}\right)\\ {T}_{\mathit{2}}=\eta \left({y}_{c}-{Y}_{\mathit{3}}\right)-\eta \left({y}_{c}-{Y}_{\mathit{1}}\right)\end{array}$$

The values T_{1}, T_{2}, Y_{1}, Y_{2} and Y_{3} are collected from the camera orientation in the y-axis. These values are substituted in Equation (23) to solve the system and thus determine the value y_{c}. The laser line coordinates are determined based on the parameters D, η, y_{c} and f. In this case, y_{i} are the image coordinates of the laser line in the y-axis. Based on the geometry of Figure 3(b), the coordinates of the laser line in the y-axis are determined by the term q_{i} = D η(y_{c} − y_{i})/η f. In this manner, the vision parameters have been determined from the data provided by the network and image processing.

The mobile setup generates online geometric modifications, giving the system the ability to overcome occlusions and attain high sensitivity. Typically, the extrinsic parameters are re-calibrated when the camera changes position [31]. In the reported methods, the online re-calibration depends on the availability of calibrated references [24–29]. The proposed mobile calibration avoids the use of references for online re-calibration. In the proposed vision system, the setup geometry [Figure 10(a)] is modified when the camera is moved toward the laser line along the x-axis, as seen in Figure 10(b). The setup geometry is also modified when the camera is moved in the z-axis [Figure 10(c)]. In these cases, the line shifting magnitude should be re-calibrated online. The distance to the object surface should also be re-calibrated online when the camera is moved in the z-axis. This procedure is carried out by computing the line shifting factor α. For the initial configuration [Figure 10(a)], the expression tanθ_{1} = η f/(X_{A} − x_{c}) η describes the line position. For the geometry shown in Figure 10(b), the expression tanθ_{2} = η f/(αX_{A} − x_{c}) η describes the line position. In these expressions, f and x_{c} are known from the initial calibration and αX_{A} is the line position in the new configuration [Figure 10(b)]. Here, the position αX_{A} is defined as the smallest distance obtained from the term (αX_{j} − x_{c}), where αX_{j} is the line position in each row of the image in the y-axis and the j-index is the row number of the image. Based on these data, the angles θ_{1} and θ_{2} are computed. Thus, the expression (X_{A} − x_{c}) tanθ_{1} = (αX_{A} − x_{c}) tanθ_{2} is obtained. Therefore, the factor α is determined by:
$$\alpha =\frac{\left({X}_{A}-{x}_{c}\right)\text{tan}{\theta}_{1}}{{X}_{A}\text{tan}{\theta}_{2}}+\frac{{x}_{c}}{{X}_{A}}$$
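The computation of α can be sketched directly from the two tangent expressions above. The numeric values below (f, x_{c}, X_{A} in pixel units and the new line position αX_{A}) are hypothetical, chosen only to exercise Equation (24):

```python
# Sketch of the line shifting factor alpha from Equation (24).
# All pixel values are hypothetical, not the paper's calibration data.

def shifting_factor(f, x_c, X_A, aX_A):
    """Compute alpha from the initial and modified line positions."""
    tan1 = f / (X_A - x_c)       # initial configuration, Figure 10(a)
    tan2 = f / (aX_A - x_c)      # modified configuration, Figure 10(b)
    return (X_A - x_c) * tan1 / (X_A * tan2) + x_c / X_A

# Camera moved so the line position changed from X_A = 520 to 572 px.
alpha = shifting_factor(f=800.0, x_c=320.0, X_A=520.0, aX_A=572.0)
s_recalibrated = 12.4 / alpha    # divide the observed shift by alpha
```

Substituting the tangents shows that the formula reduces to α = αX_{A}/X_{A}, so the re-calibrated shift is simply the observed shift rescaled to the initial geometry.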

Then, the shift is divided by the factor α in the modified configuration. The shift s_{i} is re-calibrated online and thus processed by the network to obtain the surface h_{i}. In this manner, the vision system provides an online re-calibration of the line position and the line shifting, and the need for calibrated references is avoided. In the perspective projection, the extrinsic and intrinsic parameters change when the camera is moved; therefore, components of the translation vector **t** should be re-calibrated to perform the transformation P_{c} = **R**·P_{w} + **t**. Methods such as object-based and plane-based homography have been applied to perform the online re-calibration of vision parameters. Object-based methods perform the re-calibration based on known marks on the object [32]. Homography methods perform the re-calibration based on the detection of light pattern references [22–24]. These reported methods are limited when the references do not exist during the vision task.

The contribution of the mobile calibration when an occlusion appears during the vision task was also studied. Typically, occlusions appear due to the surface variation and a large distance ℓ_{a}. This criterion is studied based on the geometry shown in Figure 11, in which the point A is occluded at the initial configuration when the image plane is placed at (x_{0}, z_{0}). Here, an occlusion is detected when the width of the line is smaller than three pixels along the laser line in the y-axis. This criterion is established according to a threshold value. To avoid occlusions, the camera is moved away from the surface to (x_{0}, z_{1}) in the z-axis or toward the laser line to (x_{1}, z_{0}) in the x-axis. When an occlusion appears, the surface scanning is stopped. Then, the camera is moved one 1.27 mm step toward the laser line and the width of the laser line is evaluated at this camera position. If the width of the laser line is still smaller than three pixels, the camera is moved another step toward the laser line. This procedure is repeated until the width of the laser line is greater than three pixels. Then, the scanning is continued with the new camera position and the occluded region is thus detected based on a complete laser line. In this case, the shift magnitude is different from the initial configuration, so the shift should be re-calibrated based on the factor α via Equation (24). Then, the shift s_{i} is processed by the network to compute the object surface. The three-dimensional vision and calibration accuracy are described in Section 5.

## 5. Experimental Results

The proposed mobile calibration is performed based on Bezier networks and laser line imaging. This automatic technique avoids calibrated references and physical measurements. Here, the three-dimensional visualization is performed by laser scanning in 1.27 mm steps along the x-axis. The scanning movement is provided by the electromechanical device, whose minimum step is 0.0245 mm. The positioning accuracy is 0.00147 mm and the positioning repeatability is a standard deviation of ±0.001 mm. The laser line is captured and digitized in 256 gray levels. From each image, the network computes the surface contour based on the line shift s_{i}. The depth resolution provided by the network is around 0.28 mm. This resolution corresponds to the distances ℓ_{a} = 174.8321 mm and D = 380.4126 mm, and it varies according to these distances: the depth resolution is increased when ℓ_{a} is increased and D is decreased, and it is decreased when ℓ_{a} is decreased and D is increased. The image plane parallel to the reference plane provides better sensitivity for a laser line perpendicular to the reference plane. This criterion is proven by the geometry of Figure 12. In this case, the image plane parallel to the reference plane provides a bigger line shift than the rotated image plane, but the rotated image plane provides better sensitivity when the laser line is aligned at an angle. Therefore, the image plane is fixed parallel to the reference plane and perpendicular to the laser line, thus affording the best sensitivity. When the setup geometry is modified, a mobile calibration is performed based on the data provided by the network and image processing. This procedure provides high sensitivity and avoids occlusions, which was verified by the three-dimensional visualization of surfaces with small and large details.

The first test of the mobile calibration is the three-dimensional visualization of a plastic fruit, shown in Figure 13(a). To carry this out, the fruit is scanned by the vision system as shown in Figure 13(b). In this procedure, a set of images is captured by the CCD camera. From each image, the line shift s_{i} is detected via Equation (1) and converted to a value u via Equation (7). In addition, the coordinate y_{i} of the laser line is converted to a value v via Equation (8). Then, the network computes a transverse section of the object by substituting the values (u, v) in Equation (10). In this scanning, an occlusion is detected by the broken line shown in Figure 13(c). To avoid the occlusion, the camera is moved toward the laser line. In doing so, the initial geometry has been modified, so the line shift should be re-calibrated online based on the factor α via Equation (24). To carry this out, the line position αX_{j} is detected in each row of the images in the y-axis, where the j-index is the row number of the image. Here, the position αX_{A} is the smallest distance from the term (αX_{j} − x_{c}). Then, the factor α is computed and the line shift is divided by this factor to achieve the online re-calibration, and the re-calibrated shift s_{i} is substituted in the network Equation (10) to compute the surface h_{i}. The whole surface is reconstructed from all the depth data provided by the network. Fifty-six images were processed to obtain the plastic fruit shown in Figure 13(d), where the scale of this figure is in mm. To determine the accuracy, a root mean squared error (rms) is computed via Equation (14). To do this, the plastic surface is measured by the CMM. This procedure is performed by measuring a transverse section of the object in 8.0 mm steps along the y-axis. The transverse section corresponds to the position where the laser line was projected on the x-axis. Then, the network computes the surface depth of this transverse section. Forty-six transverse sections are measured by the CMM and by the network in steps of 1.27 mm. In this case, the data obtained from the CMM are n = 420. Then, the error is computed and the result is an rms = 0.142 mm.
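The accuracy check above compares CMM depths with the network's depths point by point. Assuming Equation (14) is the usual root mean squared error, it can be sketched as follows (the depth arrays are made-up sample values in mm, not the reported data):

```python
import math

# Sketch of the rms error between CMM measurements and network depths,
# assuming the standard definition rms = sqrt(sum((cmm - net)^2) / n).

def rms_error(cmm, network):
    n = len(cmm)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(cmm, network)) / n)

cmm_depth = [10.00, 12.50, 15.75, 14.20]    # hypothetical CMM data (mm)
net_depth = [10.10, 12.40, 15.90, 14.10]    # hypothetical network data (mm)
rms = rms_error(cmm_depth, net_depth)
```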

The second test of the mobile calibration is the vision of a dummy face shown in Figure 14(a). To carry this out, the dummy face is scanned by the vision system. In this scanning, no reference plane exists. This was established based on the absence of a line at the end and beginning of the image in the y-axis. Here, the shifting magnitude is different from the initial configuration, so the line shift should be re-calibrated online. In this case, the line position αX_{j} is detected in each row of the image in the y-axis. From these line positions, the position αX_{A} is determined as the minimum distance from the term (αX_{j} − x_{c}). Then, the factor α is computed via Equation (24) and the line shift is divided by this factor α to achieve the online re-calibration, and the re-calibrated shift s_{i} is converted to a value u. In addition, the coordinate y_{i} of the laser line is converted to a value v. Then, the network computes the data h_{i} of a transverse section of the object by substituting the values (u, v) in Equation (10). When the object is scanned, an occlusion is detected based on the broken line shown in Figure 14(b). To avoid the occlusion, the camera is moved toward the laser line. Since the geometry and the shift magnitude have been modified, the mobile calibration performs a second online re-calibration to achieve the three-dimensional vision. To carry it out, the line position αX_{j} is detected in each row of the image in the y-axis for the new configuration. From these line positions, αX_{A} is determined as the minimum of the term (αX_{j} − x_{c}). Then, the factor α is computed via Equation (24) and the shift is divided by this factor α to achieve the online re-calibration. This re-calibrated shift s_{i} is converted to a value u. The coordinate y_{i} of the laser line is also converted to a value v. Then, the network computes the surface depth h_{i} by substituting the values (u, v) in Equation (10). In this manner, the whole surface is obtained from all the data provided by the network. One hundred and sixteen images were processed to obtain the complete dummy face shown in Figure 14(c). The scale of this figure is in mm. To determine the accuracy, the rms is computed via Equation (14). To do this, the transverse sections of the dummy face are measured by the CMM and by the network. In this case, sixty-two transverse sections are measured by the CMM in 1.00 mm steps along the x-axis to perform the evaluation. From this procedure, the obtained data are n = 1,400. Then, the error is computed and the result is an rms = 0.155 mm.

The third test of the mobile calibration is the visualization of the flat surface of a metallic piece shown in Figure 15(a). To carry this out, the metallic piece is scanned by the vision system. In this scanning, no reference plane exists during the vision procedure. This is stated based on the lack of a line at the end and beginning of the y-axis in the image. Since the line shift magnitude is different from the initial configuration, the mobile calibration is applied to achieve the online re-calibration of the line shift. To do so, the line position αX_{j} is detected in each row of the image in the y-axis. In this case, the line position X_{j} is the same along the laser line, X_{0} = X_{1} = X_{2} = … = X_{n}, and the minimum distance from the term (αX_{j} − x_{c}) can be obtained by means of αX_{0}. Therefore, the reference position is determined by αX_{A} = αX_{0}. Then, the factor α is computed via Equation (24) and the line shift is divided by the factor α to achieve the online re-calibration. Then, the re-calibrated shift s_{i} is converted to a value u via Equation (7). The coordinate y_{i} of the laser line is also converted to a value v via Equation (8). Then, the network computes the depth h_{i} by substituting the values (u, v) in Equation (10). In this scanning, an occlusion is detected based on the missing line shown in Figure 15(b). To avoid the occlusion, the camera is moved toward the laser line. Here, the geometry and the shift magnitude have been modified, so the mobile calibration performs a second online re-calibration to achieve the three-dimensional visualization. To carry it out, the line position αX_{j} is detected in each row of the image in the y-axis. In this case, the position αX_{j} is the same along the laser line. Again, the reference position αX_{A} is determined by αX_{A} = αX_{0}. Then, the factor α is computed via Equation (24) and the shift is divided by this factor α to achieve the online re-calibration, and the re-calibrated shift s_{i} is then converted to a value u. The coordinate y_{i} of the laser line is also converted to a value v, and the network computes the surface depth h_{i} by substituting the values (u, v) in Equation (10). In this manner, the whole surface is obtained from the data provided by the network. Fifty-eight images were processed to obtain the metallic piece shown in Figure 15(c). The scale of this figure is in mm. To determine the accuracy, the metallic piece was measured by the CMM. Then, the rms is computed using the data provided by the CMM and by the network.
In this procedure, the error is computed using n = 1,400 and the result is an rms = 0.155 mm. The value n has a great influence on the precision of the calculated error. To determine whether n is in accordance with the desired precision, the confidence level [33] is calculated by:

$$n={\left({z}_{\alpha}\frac{{\sigma}_{x}}{e}\right)}^{2}$$

where z_{α} is the desired confidence, e is the error expressed as a percentage, and σ_{x} is the standard deviation. Therefore, the confidence level based on the data n is described by:
$${z}_{\alpha}=\frac{e}{{\sigma}_{x}}\sqrt{n}$$
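Equation (26) can be checked numerically with the values reported in the text for the dummy face test (n = 1,400 samples, e = 0.35% error and a standard deviation of 6.204):

```python
import math

# Confidence computation from Equation (26): z_alpha = (e / sigma) * sqrt(n).
# The input values are those reported in the text for the dummy face test.

def confidence(e, sigma, n):
    return (e / sigma) * math.sqrt(n)

z = confidence(e=0.35, sigma=6.204, n=1400)
# z is about 2.11, above the z = 1.96 required for 95% confidence [33].
```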

To know whether the chosen n is in accordance with the desired confidence, Equation (26) is applied. The desired confidence is 95%, which corresponds to z_{α} = 1.96 according to the confidence table [33]. The average of the dummy face height is 44.32 mm. Therefore, the rms corresponds to an error of 0.0035, which represents a 0.35% error. To determine the error precision, the confidence is calculated via Equation (26) for n = 1,400, e = 0.35 and a standard deviation of 6.204. The result is z_{α} = 2.110, which indicates a confidence level over 95%. The confidence levels for the plastic fruit and for the metallic piece are also greater than 95%.

The error provided by the calibration via Bezier networks for the three-dimensional vision is less than 1%. For comparison, the best accuracies of the reported calibration and online re-calibration methods are as follows: a stereo calibration via perspective projection for face profiling reports an error over 1% [8]; a calibration method based on perspective projection and least squares for a laser range scanner reports an error over 1% [9]; a calibration method based on perspective projection and the invariance of the double cross-ratio of the cross-points reports an error of 2% [16]; a self plane-based homography re-calibration method reports an error of 1% [23]; and an online re-calibration method based on homography and a reference plane reports an error of 1.5% [27]. These results indicate that the proposed mobile calibration provides better accuracy, based on its error of less than 1%. The mobile calibration also avoids external procedures and the need for calibrated references to perform the online re-calibration. The resolution provided by the proposed technique is good compared with the calibration methods based on perspective projection using distances similar to our setup [6–27]. Among these reports, a paintbrush laser range scanner reports 0.57 mm as the best resolution [9]. The measurement range of the proposed technique is in the interval between 0.3 mm and 280.60 mm. According to the above-mentioned techniques [6–27], the measurement range of the proposed mobile setup is good.

The computer used in this vision system is a 1.8 GHz PC. The capture velocity of the camera is 34 fps. The electromechanical device is moved at 34 steps per second. Each image of the laser line is processed by the network in 0.010 s. The shape of the dummy face was reconstructed in 3.88 s, the metallic piece was reconstructed in 3.22 s and the plastic fruit was profiled in 2.59 s. This processing time is good compared to the lighting methods based on perspective projection. To demonstrate this, the processing times of the fast techniques are given as follows: for a paintbrush laser range scanner, the reported time to reconstruct a single view is 15 s [9]; in the implementation and experimental study on fast object modeling based on multiple structured stripes, the reported time to reconstruct a single view of the object is 10 s [17]. These results indicate that the proposed mobile system provides a fast three-dimensional visualization. In this procedure, physical measurements and calibrated references are avoided to perform the online re-calibration of the vision parameters, and the distances of the setup geometry are not used to compute the surface depth. The proposed mobile calibration, performed online using data provided by the network and image processing, is therefore easier than the online re-calibration techniques based on references and perspective projection. In this manner, the vision system achieves good repeatability, corresponding to a standard deviation of ±0.01 mm.

## 6. Conclusions

A mobile calibration technique for three-dimensional vision has been presented. In this technique, a Bezier network provides the data needed to perform the mobile calibration via image processing. The network also computes the object surface based on the mobile setup. The setup geometry can thus be modified online, and the network provides the online re-calibration needed to perform the three-dimensional visualization accurately. The automatic calibration avoids the need for physical measurements and calibrated references, which are used in the lighting methods based on perspective projection. This improves the performance of the vision system and the accuracy of the three-dimensional visualization. The ability to detect the laser line with sub-pixel resolution has been achieved by using Bezier curves, and the image processing is achieved with few operations. With the automatic calibration, good repeatability is achieved in each three-dimensional visualization procedure. The technique described here should provide a valuable tool for industrial inspection and reverse engineering tasks.

## Acknowledgments

J. Apolinar Muñoz Rodríguez would like to thank to CONCYTEG of Guanajuato State and CONACYT of Mexico, for the partial support of this research.

## References

- Klette, R; Schluns, K; Koschan, A. Computer Vision: Three-Dimensional Data from Images; Springer: Singapore, 1998; pp. 349–367. [Google Scholar]
- Jia, P; Kofman, J; English, C. Comparison of linear and nonlinear calibration methods for phase measuring profilometry. Opt Eng
**2005**, 46, 043601:1–043601:7. [Google Scholar] - Breque, C; Dupre, JC; Brenand, F. Calibration of a system of projection moiré for relief measuring: biomechanical applications. Opt. Lasers Eng
**2004**, 41, 241–260. [Google Scholar] - Remondino, F; El-Hakim, S. Image-based 3D modelling: a review. Photogrammetric Record
**2006**, 21, 269–291. [Google Scholar] - Muñoz-Rodríguez, JA; Rodríguez-Vera, R. Evaluation of the light line displacement location for object shape detection. J. Mod. Opt
**2003**, 50, 137–154. [Google Scholar] - Vilaca, JL; Fonceca, JC; Pinho, AM. Calibration procedure for 3D measurement system using two cameras and a laser line. Opt. Laser Technol
**2009**, 41, 112–119. [Google Scholar] - Zhang, S; Huang, PS. Novel method for structured light system. Opt Eng
**2006**, 45, 083601:1–083601:8. [Google Scholar] - Song, LM; Wang, DN. A novel grating matching method for 3D reconstruction. Ndt. Int
**2006**, 39, 282–288. [Google Scholar] - Zagorchev, L; Goshtasby, A. A paintbrush laser range scanner. Comput. Vis. Image Underst
**2006**, 10, 65–86. [Google Scholar] - Song, L; Qu, X; Yang, Y. Application of structured lighting sensor for on line measurement. Opt. Lasers Eng
**2005**, 43, 1118–1126. [Google Scholar] - Huynh, DQ. Calibrating a structured light stripe system: a novel approach. Int. J. Comput. Vis
**1999**, 33, 73–86. [Google Scholar] - Liu, F; Duan, F; Ye, S. A new method for calibration of line structured light sensor using zigzag target. Meas. Technol
**1999**, 7, 3–6. [Google Scholar] - Zhou, F; Zhang, G; Jiang, J. Constructing feature points for calibrating a structured light vision sensor by viewing a plane from unknown orientation. Opt. Lasers Eng
**2005**, 43, 1056–1070. [Google Scholar] - Zhou, F; Zhang, G. Complete calibration of a structured light stripe vision sensor through a plane target of unknown orientation. Image Vision Comput
**2005**, 23, 59–67. [Google Scholar] - McIvor, AM. Nonlinear calibration of a laser stripe profiler. Opt. Eng
**2002**, 41, 205–212. [Google Scholar] - Wei, Z; Zhang, G; Xu, Y. Calibration approach for structured-light-stripe vision sensor based on the invariance of double cross-ratio. Opt Eng
**2003**, 2956–2966. [Google Scholar] - Wang, G; Hu, Z; Wu, F; Tsui, HT. Implementation and experimental study on fast object modeling based on multiple structured light stripes. Opt. Lasers Eng
**2004**, 42, 627–638. [Google Scholar] - Doignon, C; Knittel, D. A structured light vision system for out plane vibration frequencies location of moving web. Mach. Vision Appl
**2005**, 16, 289–297. [Google Scholar] - Chen, X; Xi, J; Jin, Y; Sun, J. Accurate calibration for camera projector measurement system based on structured light projection. Opt. Lasers Eng
**2009**, 47, 310–319. [Google Scholar] - Li, Z; Shi, Y; Wang, C; Wang, Y. Accurate calibration method for a structured light system. Opt Eng
**2008**, 47, 053604:1–053604:9. [Google Scholar] - Li, YF; Chen, SY. Automatic recalibration of an active structured light vision system. IEEE Trans. Robot. Autom.
**2003**, 19, 259–268. [Google Scholar] - Zhang, B; Li, YF; Wu, YH. Self-recalibration of a structured light system via plane-based homography. Pattern Recogn
**2007**, 40, 1368–1377. [Google Scholar] - Chen, SY; Li, YF. Self-recalibration of a color-encoded light system for automated three-dimensional measurements. Meas. Sci. Technol
**2003**, 14, 33–40. [Google Scholar] - Zhang, B; Li, Y. Dynamic calibration of relative pose and error analysis in a structured light system. J. Opt. Soc. Am. A
**2008**, 25, 612–622. [Google Scholar] - Canlin, L; Ping, L; Lizhuang, M. A camera online recalibration framework using SIFT. Visual Comput
**2010**, 26, 227–240. [Google Scholar] - Lu, RS; Li, YF. Calibration of a 3D vision system using pattern projection. Sens. Actuat. A-Phys
**2003**, 104, 94–102. [Google Scholar] - Li, FY; Zhang, B. A method for 3D measurement and reconstruction for active vision. Meas. Sci. Technol
**2004**, 15, 2224–2232. [Google Scholar] - Mortenson, ME. Geometric Modeling, 2nd ed; Wiley: Salt Lake, UT, USA, 1997; pp. 83–105. [Google Scholar]
- Frederick, H; Lieberman, GJ. Introduction to Operations Research; McGraw-Hill: New York, NY, USA, 1982; pp. 754–758. [Google Scholar]
- Gonzalez, RC; Wintz, P. Digital image processing, 2nd ed; Addison-Wesley: Menlo Park, CA, USA, 1987; pp. 672–675. [Google Scholar]
- Li, FY; Chen, SY. Automatic recalibration of an active structured light vision system. IEEE Trans. Robot
**2003**, 19, 259–268. [Google Scholar] - Peng, E; Li, L. Camera calibration using one-directional information and its applications in both controlled and uncontrolled environments. Pattern Recogn
**2010**, 43, 1188–1198. [Google Scholar] - Freund, JE. Modern Elementary Statistics; Prentice Hall: Upper Saddle River, NJ, USA, 1979; pp. 249–251. [Google Scholar]

**Figure 3.**

**(a)**Geometry of an image plane parallel to the reference plane in x-axis.

**(b)**Geometry of an image plane parallel to the reference plane in y-axis.

**Figure 5.**

**(a)**Laser line aligned on a peak reference in y-axis.

**(b)**Setup at different reference plane from the initial configuration.

**Figure 7.**

**(a)**Laser line on the reference plane.

**(b)**Laser line at 25.4 mm from the reference plane.

**Figure 9.**

**(a)**Laser line projected on a surface.

**(b)**Surface depth computed by the network from the laser line.

**Figure 10.**

**(a)**Initial geometric configuration.

**(b)**Geometry of the camera moved toward the laser line in x-axis.

**(c)**Geometry of the camera moved toward the object surface in z-axis.

**Figure 13.**

**(a)**Plastic fruit to be profiled.

**(b)**Mobile setup for three-dimensional vision.

**(c)**Occlusion based on the broken line.

**(d)**Three-dimensional shape of the plastic fruit.

**Figure 14.**

**(a)**Dummy face to be profiled.

**(b)**Occlusion based on the broken line.

**(c)**Three-dimensional shape of the dummy face.

**Figure 15.**

**(a)**Metallic piece to be profiled.

**(b)**Occlusion based on the broken line.

**(c)**Three-dimensional shape of the metallic piece.

© 2010 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).