An Accurate Image Measurement Method Based on a Laser-Based Virtual Scale

Image measurement methods have been widely used in many areas due to their accuracy and efficiency. However, current techniques usually involve complex calibration, an elaborate optical design, or sensitivity to the test environment. In this paper, a simple optical device was designed to emit parallel beams to obtain a virtual scale for measurement purposes. The proposed theory ensures the robustness of the system when obtaining each scale in the presence of uncertainty. The scale creates a mapping from image coordinates to world coordinates. By using the moving least squares (MLS) method, a full-field scale map can be reconstructed to achieve high-precision measurement at the sub-pixel level. Experimental verifications are carried out, showing that the proposed method provides very accurate and reliable results. The proposed approach is simple in terms of equipment, and the scale can be calculated automatically. Therefore, the system proposed in this paper is a promising candidate as a tool for non-contacting measurements (e.g., crack development, geometric size) of inaccessible structures such as high-rise buildings and long-span bridges.


Introduction
Modern industry requires higher accuracy and efficiency from geometric measurement instruments [1]. Traditional geometric measurement instruments (e.g., rulers, calipers) are contacting measurement techniques and cannot measure complex geometries, fragile objects, or inaccessible places. Therefore, in the last few decades, optical technology, as a non-contacting measurement technique, has been widely used in many areas. In general, optical measurement techniques fall into two categories [2]: methods that use laser beams, such as laser Doppler vibrometry [3], electronic speckle pattern interferometry (ESPI) [4], and digital speckle shearography (DSS) [5]; and methods that use white light, also called photogrammetry or the image measurement method.
Due to the rapid development of microcomputers, memory chips, and camera sensors, the image measurement method has become an effective and practical tool for geometric measurement, which has been commonly accepted and widely used in the fields of experimental mechanics and civil engineering. Based on the type of target, these methods can be categorized into three groups [2]: the point tracking method [6,7], the digital image correlation (DIC) method [8], and the target-less method [9]. The point tracking technique uses digital cameras to distinguish the coordinates of discrete points mounted to the targets. Based on the positions of the discrete points, it can measure the geometric information of the targets. The point tracking method is usually used to monitor the displacement of a target point. Instead of measuring a set of discrete points, the DIC method provides full-field geometric information with sub-pixel accuracy [10,11]. The accuracy of the DIC method is closely related to the quality of the speckle pattern, environmental vibration, and camera calibration [12]. These factors therefore limit the application of the DIC method in scenarios such as monitoring crack development in bridges and high-rise buildings, where on-site environmental vibration continually occurs and some structures are inaccessible. As for the target-less method, researchers utilize the internal features or edges of a structure to recognize the object, or the areas of the object that need to be tracked. This method is especially useful when it is impossible to mount optical targets to the structure or spray a speckle pattern on its surface.
Even though non-contacting methods have achieved great success in the last few decades, there are still some challenges in practical applications. Harsh on-site environments limit the application of current methods. For example, camera vibration due to a passing vehicle or natural excitation in bridges makes it impossible for the DIC method to complete the test. Optical targets or calibration targets cannot be fixed to inaccessible structures such as high-rise buildings and long-span bridges. Furthermore, the accuracy of current measurement techniques usually relies on the intrinsic and extrinsic parameters of the system, which requires a strict calibration process [13,14].
To this end, we propose a simple laser setup for non-contacting two-dimensional measurements. The laser emits parallel beams to form spot pairs (recorded by the camera), which are subsequently used to reconstruct a full-field scale map. The scale map bridges the image coordinates to a world coordinate system, making sub-pixel accuracy possible. In the following sections, we first describe the equipment setup, and then the MLS method [15][16][17] is introduced to reconstruct a full-field scale map. Finally, we present two experiments to verify the proposed method.

Device Setup
To obtain the mapping from image coordinates to a world coordinate system, the following device has been developed. The device consists of four modules: (1) a semiconductor laser emitter; (2) a beam splitter; (3) an inclinometer; and (4) a digital camera, as shown in Figure 1. These four modules work independently and have been encapsulated separately. The semiconductor laser emitter is used to emit a laser beam with a circular shape. The semiconductor laser emitter in the current design should meet the following requirements: (1) sufficient output power (generally between 0.5 and 5 mW), so that the spots can be recorded by the camera; (2) small beam divergence (generally less than 1 mrad), so that the beam size does not grow too large at the target plane; and (3) a circular beam shape, which is used to obtain α in the next section. The lateral displacement beam splitter divides the laser beam into two parallel beams with a fixed distance, which are further used as the virtual scale for measurement purposes. The inclinometer is used to measure the angle of the spot pair's direction in the next section, and we suggest an accuracy better than ±(0.1° + 1%). The camera module records the image containing the laser scale. The camera is fixed, and each laser scale is recorded in a single image. Since the camera is fixed, the images share the same coordinate system. Combining all the laser scales at different locations builds the full-field scale image. Obviously, the camera quality (high CCD sensitivity, high resolution, and a good lens) affects the measuring accuracy. However, in this device, we only use an inexpensive camera. The detailed working process of the device is as follows.
A semiconductor laser emits a beam (circular shape) first, and then the beam goes through a lateral displacement beam splitter. The lateral displacement beam splitter outputs two identical parallel beams separated by a fixed distance (this distance is called beam separation and denoted as d).
The beam splitter consists of a precision rhomboid prism cemented to a right angle prism. Current manufacturing technology can easily ensure that the exiting beams are parallel within 30 arcsec, which indicates that the distance between the parallel beams (d) varies very little along the path to the target plane. The two beams are finally projected onto the target plane, forming two spots (a spot pair), which are recorded by the fixed camera. After the distance between the spots (spot pair distance, denoted as S) is obtained, it is used to establish the relation between the image coordinates and the world coordinate system. It is noted that image coordinates represent coordinates in terms of pixel values in the image plane.
Since the scale (the ratio of the image coordinates to the world coordinates) is nonlinear across the image field, we need a series of scales to construct a full-field scale map. The scales can be obtained by moving the laser and rotating the beam splitter at different locations. These scales are recorded by a fixed camera. The advantage of this device is that the system does not need the camera parameters to obtain the relationship from the image coordinates to the world coordinate system, and thus it does not need camera calibration.
In the current design, the laser emits visible light (wavelengths between 400 and 760 nm). However, the device is not limited to visible light. Invisible and even harmful laser light can also be used when an infrared camera is used. As mentioned before, the more parallel the beams separated by the splitter are, the more accurate the virtual scale will be. Therefore, more accurate beam splitters will be used as manufacturing technology develops. Further hardware improvement of the system will be done in our future work.

Spot Pair Distance (S) Calculation Theory
If the device above projects the parallel beams perpendicularly (β = π/2, as seen in Figure 2) onto the target plane, the beam separation (denoted as d, d = BC) equals the spot pair distance (denoted as S, S = AB) in the target plane, i.e., d = S. Note that this distance, or physical distance, is different from pixel distance. However, in many cases, the output angle β is not a right angle and depends on the on-site environment, and thus S should be calculated using the following theory. In this section, we use basic optical principles to obtain the spot pair distance (S). Let us assume that the two beams intersect the target plane at points A and B (with angle β), as shown in Figure 2. When the beams are perpendicular (β is equal to π/2) to the target plane, the spots are circular. When β is not equal to π/2, the circle becomes an ellipse. Obviously, the angle β can be obtained via the ratio r of the spot's semiminor axis to the semimajor axis via the expression sin(β) = r, as shown in Figure 2. Note that the distance between the beams is the length of the line BC. The projection of point C on the target plane is point O.
Since we set the angle BAO to be α, which can be adjusted by rotating the beam splitter and measured in the image coordinates, we have the following relationship according to the cosine theorem:

BO² = AB² + AO² − 2·AB·AO·cos(α)  (1)

Obviously, the angles AOC and BOC are right angles. Thus, we have

AC² = AO² + OC², BC² = BO² + OC²  (2)

The angle between lines BC and AC is a right angle. Therefore, we have

AB² = AC² + BC²  (3)

Substituting BO in Equation (1) into Equation (3), we obtain

AB = (AO² + OC²)/(AO·cos(α)) = AC/(cos(α)·cos(β))  (4)

since AO = AC·cos(β) and OC = AC·sin(β). Finally, we have the following expression for S:

S = d/√(1 − cos²(α)·cos²(β))  (5)

According to Equation (5), when cos²(α) = 0 or cos²(β) = 0, the length of the line AB is equal to BC, i.e., S = d. In practical applications, the target object may be unapproachable, and β may not be equal to π/2. In this case, we can adjust α to be π/2 by rotating the beam splitter to make S = d. As observed in Equation (5), different values of α and β lead to different S. As demonstrated in Figure 3a, where S is normalized by BC, changing the angles α and β from π/2 to 0 (or π/2 to π) increases the length of S from 1 to infinity.

Although S can be obtained by controlling either α or β to be π/2, small perturbations of α or β cannot be avoided in practical applications. The sensitivity of S to α or β can be obtained by taking the partial derivative of Equation (5) with respect to α or β as

∂S/∂α = −d·sin(α)·cos(α)·cos²(β)/(1 − cos²(α)·cos²(β))^(3/2)  (6)

It is noted that ∂S/∂β has the same form as Equation (6) after exchanging α and β. Figure 3b demonstrates the result for ∂S/∂α or ∂S/∂β when α or β is close to π/2, i.e., in the interval [π/2 − 0.1, π/2 + 0.1], which shows that perturbations of α or β have a small effect on S (the maximum perturbation of S is less than one thousandth). This calculation theory ensures the robustness of the system in the presence of uncertainty.
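Equation (5) and the sensitivity claim above can be checked numerically. The Python sketch below (the function name is our own, not part of the device software) evaluates S and the effect of a small perturbation of α and β around π/2:

```python
import math

def spot_pair_distance(d, alpha, beta):
    """Spot pair distance S per Equation (5): S = d / sqrt(1 - cos^2(a) cos^2(b))."""
    return d / math.sqrt(1.0 - math.cos(alpha) ** 2 * math.cos(beta) ** 2)

# When alpha = pi/2 (or beta = pi/2), S reduces to the beam separation d.
assert abs(spot_pair_distance(10.0, math.pi / 2, 1.0) - 10.0) < 1e-12

# Perturbing alpha and beta by 0.1 rad around pi/2 changes S by less
# than one thousandth, matching the sensitivity analysis above.
S0 = spot_pair_distance(10.0, math.pi / 2, math.pi / 2)
S1 = spot_pair_distance(10.0, math.pi / 2 + 0.1, math.pi / 2 + 0.1)
assert abs(S1 - S0) / S0 < 1e-3
```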

Theory
To obtain the real size of the object, we need to construct a mapping from the image coordinates to the world coordinate system (as seen in Figure 4). Here, for simplicity, the image coordinate origin, which is conventionally located in the upper-left corner, was shifted to the center (parallel to the world coordinates), as seen in Figure 4. Let us assume that we have a small segment ds with an incline angle θ; thus, its projections onto the horizontal segment du and the vertical segment dv can be expressed as

du = ds·cos(θ), dv = ds·sin(θ)  (7)

where θ is measured by the inclinometer cemented to the beam splitter.
The segment ds has been recorded in the image, where its pixel length is denoted as ds'. Accordingly, the horizontal pixel segment dx and the vertical pixel segment dy can be expressed as

dx = ds'·cos(θ), dy = ds'·sin(θ)  (8)

where θ = arctan((y_i1 − y_i0)/(x_i1 − x_i0)), and (x_i1, y_i1) and (x_i0, y_i0) are the coordinates of the spots' centroids in the image coordinate system. Therefore, the relation between the image coordinates and the world coordinate system can be expressed as

du = h(x, y)·dx, dv = g(x, y)·dy  (9)

where (x, y) is the middle point of ds', h(x, y) is the measured scale in the x direction, and g(x, y) is the measured scale in the y direction.

In the device, the measured scale at (x_i, y_i), which is located at the middle point of the line formed by the two centroids of the laser spots, represents the scale for ds' in an average sense. According to Equation (9), the scales at this point can be expressed as

h(x_i, y_i) = S_x/(x_i1 − x_i0)  (10)
g(x_i, y_i) = S_y/(y_i1 − y_i0)  (11)

where S_x and S_y are the horizontal and vertical projections of the spot pair distance (S).
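Equations (10) and (11) amount to a few lines of code once the spot centroids are known. The following Python sketch (function and variable names are ours) computes the local scales at the midpoint of a spot pair; a purely horizontal or vertical pair defines only one of the two scales:

```python
import math

def local_scales(c0, c1, S):
    """Local scales h and g at the midpoint of a spot pair.

    c0 = (x_i0, y_i0), c1 = (x_i1, y_i1): spot centroids in image (pixel)
    coordinates; S: physical spot pair distance. S is projected onto the
    horizontal and vertical axes using the pair's incline angle theta.
    Note: dx = 0 (vertical pair) or dy = 0 (horizontal pair) leaves only
    one scale defined; such pairs would need a zero-division guard."""
    dx = c1[0] - c0[0]
    dy = c1[1] - c0[1]
    theta = math.atan2(dy, dx)
    S_x = S * math.cos(theta)   # horizontal projection of S
    S_y = S * math.sin(theta)   # vertical projection of S
    h = S_x / dx                # Equation (10): physical units per pixel in x
    g = S_y / dy                # Equation (11): physical units per pixel in y
    mid = ((c0[0] + c1[0]) / 2, (c0[1] + c1[1]) / 2)
    return mid, h, g

# A pair spanning 40 px horizontally and 30 px vertically (50 px total)
# with S = 5 mm gives h = g = 0.1 mm/px at the midpoint (120, 215).
mid, h, g = local_scales((100.0, 200.0), (140.0, 230.0), S=5.0)
```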
Through the sets h(x_i, y_i) and g(x_i, y_i), we can reconstruct the full-field scale map. With the scale map, we can measure all the geometry inside the image. Many methods are used to reconstruct a surface [18]. Among them, piecewise low-order fitting and polynomial fitting are the two most frequently used tools. However, high-order polynomial fitting can lead to ill-conditioned matrices, while piecewise low-order fitting results in discontinuities. To overcome these difficulties, we introduce the moving least squares (MLS) method [19]. The MLS method reconstructs a continuous function from a set of unorganized point samples via a weighted least squares measure biased towards the region around the point at which the reconstructed value is requested. In the following, we use the measured scales h(x_i, y_i) to reconstruct the scale map H(x, y) in the x direction. Suppose the scale function (i.e., the MLS approximant) H(x, y) consists of a polynomial basis and undetermined coefficients as

H(x, y) = Σ_{i=1}^{m} p_i(x, y)·a_i(x, y) = p^T(x, y)·a(x, y)  (12)

where p_i(x, y) is a complete monomial basis of order m and a_i(x, y) is the undetermined coefficient.
Since the undetermined coefficients are location-dependent, Equation (12) can be rewritten in the subdomain centered at (x, y) as

H((x̄, ȳ), (x, y)) = p^T(x̄, ȳ)·a(x, y)  (13)

where (x̄, ȳ) are the points in the subdomain centered at (x, y). Since the scale is nonlinear, we use a quadratic basis as

p^T(x, y) = [1, x, y, x², xy, y²]  (14)

In this case, m is equal to 6. The weight function can be expressed as

w_I(x, y) = w(‖(x, y) − (x_I, y_I)‖)  (15)

where (x_I, y_I) is a known node, and ‖(x, y) − (x_I, y_I)‖ represents the distance between the undetermined point and the node, i.e., √((x − x_I)² + (y − y_I)²).
There are a number of weight functions used for the MLS method. Here, we use a cubic spline function as

w(l̄) = 2/3 − 4l̄² + 4l̄³,            l̄ ≤ 1/2
w(l̄) = 4/3 − 4l̄ + 4l̄² − (4/3)l̄³,  1/2 < l̄ ≤ 1
w(l̄) = 0,                            l̄ > 1  (16)

where l̄ = l/l_max is the normalized distance, l is the distance, and l_max is the influence radius. The characteristics of the cubic spline weight function are demonstrated in Figure 5, where we set the unit along the X and Y axes equal to l_max, and (x_I, y_I) equal to (0, 0). To avoid singularity in the MLS algorithm, l_max is set to a proper value to include sufficient nodes in the subdomain [20]. In this paper, the influence radius is determined by the quadrant method [21]: l_max is obtained via l_max = k·max(l_1, l_2, l_3, l_4), where k is a positive number between 1.2 and 2.5 and l_i is the nearest distance from the interpolation point (x, y) to a known node in each quadrant, respectively, as seen in Figure 6.

Note that the local approximant H((x̄, ȳ), (x, y)) can be expressed at the known nodes (i.e., (x̄, ȳ) is replaced by (x_I, y_I)). The weighted sum of squared errors at all nodes then has the following form

J = Σ_{I=1}^{N} w_I(x, y)·[p^T(x_I, y_I)·a(x, y) − h(x_I, y_I)]²  (17)

where N is the node number, and it is required that w_I(x, y) > 0. To obtain the best approximant, J can be minimized with respect to a(x, y) as

∂J/∂a(x, y) = 0  (18)

which gives

Σ_{I=1}^{N} w_I(x, y)·p(x_I, y_I)·[p^T(x_I, y_I)·a(x, y) − h(x_I, y_I)] = 0  (19)

Equation (19) has the matrix form

A(x, y)·a(x, y) = B(x, y)·h  (20)

The undetermined coefficient a(x, y) can be expressed as

a(x, y) = A⁻¹(x, y)·B(x, y)·h  (21)

where

A(x, y) = Σ_{I=1}^{N} w_I(x, y)·p(x_I, y_I)·p^T(x_I, y_I)  (22)
B(x, y) = [w_1(x, y)·p(x_1, y_1), ..., w_N(x, y)·p(x_N, y_N)]  (23)
h = [h(x_1, y_1), ..., h(x_N, y_N)]^T  (24)

Thus, the approximant based on MLS can be expressed as

H(x, y) = p^T(x, y)·A⁻¹(x, y)·B(x, y)·h  (25)

The inverse of matrix A is a critical step in the MLS algorithm. In the algorithm, we first obtain the determinant of matrix A. If it is zero, matrix A is singular and we need to increase the influence radius to include more nodes to form matrix A. Alternatively, one can determine the invertibility of matrix A by checking its rank via singular value decomposition or rank-revealing QR decomposition; a full-rank square matrix is invertible. There are many methods for matrix inversion, and we used the Gaussian elimination method in the following examples [22].
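As an illustration of the procedure above, the following Python sketch (our own minimal NumPy implementation, not the authors' code) evaluates the MLS approximant with the quadratic basis of Equation (14) and the cubic spline weight of Equation (16):

```python
import numpy as np

def cubic_spline_weight(l_norm):
    """Cubic spline weight (Equation (16)) of normalized distances (array)."""
    w = np.zeros_like(l_norm)
    m1 = l_norm <= 0.5
    m2 = (l_norm > 0.5) & (l_norm <= 1.0)
    w[m1] = 2 / 3 - 4 * l_norm[m1] ** 2 + 4 * l_norm[m1] ** 3
    w[m2] = 4 / 3 - 4 * l_norm[m2] + 4 * l_norm[m2] ** 2 - (4 / 3) * l_norm[m2] ** 3
    return w

def mls_scale(x, y, nodes, values, l_max):
    """MLS approximant H(x, y) with the quadratic basis of Equation (14).

    nodes: (N, 2) array of known scale locations (x_I, y_I);
    values: (N,) measured scales h(x_I, y_I); l_max: influence radius
    (chosen here as a fixed value rather than by the quadrant method)."""
    def basis(px, py):
        return np.array([1.0, px, py, px * px, px * py, py * py])

    dist = np.hypot(nodes[:, 0] - x, nodes[:, 1] - y)
    w = cubic_spline_weight(dist / l_max)
    P = np.array([basis(px, py) for px, py in nodes])   # (N, 6)
    A = (P * w[:, None]).T @ P                          # A = sum_I w_I p p^T
    B = (P * w[:, None]).T                              # B = [w_I p(x_I, y_I)]
    a = np.linalg.solve(A, B @ values)                  # a = A^-1 B h
    return basis(x, y) @ a
```

Because the basis is quadratic, the approximant reproduces any scale field that is itself (at most) quadratic exactly, which makes a convenient correctness check.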
The computational efficiency of the MLS method will be discussed in our future work.
It is noted that G(x, y) (the scale map in the y direction) can be obtained in the same manner. Once the scale functions H(x, y) and G(x, y) have been obtained, the length of an object in the image can be calculated via integration with ds, which can be expressed as

ds = √((H(x, y)·dx)² + (G(x, y)·dy)²)  (26)

Similarly, the area of an object in the image can be calculated via integration with dA, which is expressed as

dA = H(x, y)·dx·G(x, y)·dy  (27)
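The length integration of Equation (26) can be sketched as a discrete sum over small segments of the curve (a hypothetical Python illustration; `H` and `G` stand for the reconstructed scale maps):

```python
def curve_length(points, H, G):
    """Physical length of a polyline given in image (pixel) coordinates,
    integrated per Equation (26): ds = sqrt((H dx)^2 + (G dy)^2).

    points: list of (x, y) pixel samples along the curve;
    H, G: callables returning the scale maps at a point (assumed names)."""
    length = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        xm, ym = (x0 + x1) / 2, (y0 + y1) / 2   # evaluate scales mid-segment
        du = H(xm, ym) * (x1 - x0)
        dv = G(xm, ym) * (y1 - y0)
        length += (du * du + dv * dv) ** 0.5
    return length

# With a uniform scale of 0.1 mm/px, a 300 px horizontal segment is 30 mm.
pts = [(float(x), 50.0) for x in range(0, 301, 3)]
L = curve_length(pts, lambda x, y: 0.1, lambda x, y: 0.1)
```

As in the wire experiment later in the paper, refining the subdivision until the result stops changing indicates that the segments are sufficiently small.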

Algorithm
Since the spots have identical shapes, they can be recognized by the algorithm. An edge detection method [23,24] was used to detect the spot edges, as shown in Figure 7. Once the edges of the two spots have been detected, the pixel length between the two spots' centroids can be calculated. The centroids (x_i0, y_i0) and (x_i1, y_i1) are obtained from the set of discrete points (denoted as array(x, y)) on each edge by averaging:

(x_i0, y_i0) = (1/n)·Σ_{(x, y) ∈ array(x, y)} (x, y)  (28)

and similarly for (x_i1, y_i1), where n is the number of edge points. Subsequently, h(x_i, y_i) and g(x_i, y_i) can be calculated via Equations (10) and (11), respectively. Finally, H(x, y) and G(x, y) can be obtained via the MLS approach.
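The centroid step reduces to averaging the detected edge coordinates; a minimal Python sketch (names are ours) is:

```python
def centroid(edge_points):
    """Centroid of a detected spot edge: the mean of the discrete edge
    coordinates, as described above."""
    n = len(edge_points)
    x_c = sum(p[0] for p in edge_points) / n
    y_c = sum(p[1] for p in edge_points) / n
    return x_c, y_c

# Four edge samples of a circle centered at (10, 20) with radius 2:
c = centroid([(12.0, 20.0), (8.0, 20.0), (10.0, 22.0), (10.0, 18.0)])
# c == (10.0, 20.0)
```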
To summarize, there are two critical steps in the proposed method. The first one is the image processing algorithm, which is used to obtain a single virtual scale. The second one is the MLS algorithm, which is used to build the full-field scale map. The detailed flowchart can be seen in Figure 8. Due to the characteristics of the device (outputting two identical spots), every single virtual scale can be calculated automatically. When the full-field scale is reconstructed by the MLS algorithm, high-precision measurement is possible. Verification will be done in the next section.

Experimental Verification
To verify the accuracy and stability of the proposed method, we designed the following experiments.



Design of the Experiment
In this example, we used a dot calibration target (with 108 lattices) with known dimensions as the target. The calibration target's specifications can be found in Table 1. The experimental setup can be seen in Figure 9. The detailed specifications of the semiconductor laser emitter, lateral displacement beam splitter, and inclinometer can be found in Tables 2-4. The proposed method is used to reconstruct the full-field scale map, which is then compared with the scale map obtained by the calibration target. It is noted that different cameras have different intrinsic parameters, which is essential for measurement accuracy. Here, we used two different cameras: a Nikon digital single lens reflex (DSLR) camera and a Huawei cell phone camera. The detailed specifications of the cameras can be found in Table 5.

Experimental Procedures
We used the following steps to complete the experiment:
1. Laser spots are projected onto the target plane (calibration target) and recorded by the camera. For simplicity, we projected the vertical laser spot pair (α = π/2, θ = π/2) for comparison.
2. Repeat step 1 by moving the laser spots until enough laser spot pairs have been generated. In this case, we obtained 7 × 11 = 77 laser spot pairs.
3. Use the algorithm to get the coordinates of each laser spot pair to obtain the laser scale at different locations, as shown in Figure 10.
4. Use the MLS method to reconstruct the scale map based on the discrete laser scale points.

Meanwhile, each dot's coordinate of the calibration target is known, and the scale map can be easily obtained as follows, which is referred to as the direct method. At each lattice, the scale is equal to the ratio of the physical distance to the pixel distance between two neighboring dots. Using MLS interpolation, the scale map can then be reconstructed based on the scale at each lattice.

Verification of the Full-Field Scale Map
The full-field scale map obtained by the proposed method is compared with the direct measurement scale map obtained by the calibration target. The data processing flow chart can be seen in Figure 11. The deviation (the difference between the proposed method and the direct method) is used to evaluate the result. In Figure 11, the full-field deviation map for G(x, y) is demonstrated. The result is slightly affected by the camera angle (the angle between the camera-target direction and the calibration target's normal direction), as seen in Figure 12. The camera angles are set to 0°, 12.5°, 25°, and 45°, respectively. The deviation is within the range of ±0.5%, as shown in Figure 12 and Table 6. There are three main sources of this deviation. The first comes from the manufacturing errors of the calibration target, which are 3-5 µm. The second comes from the distortion and nonlinearity of the image, which depend on the camera's intrinsic parameters; adding more laser spot pairs to obtain more laser scales can reduce this effect. For comparison, we changed the Nikon camera to the Huawei cell phone camera and found that the error increased to ±1%, as shown in Figure 13 and Table 7. This is because the cell phone camera's quality (Huawei) is not as good as that of the Nikon DSLR camera. The third comes from the error of the spot pair distance calculation, which is discussed in the next section by an experiment.

Local Accuracy of the Laser Scale
The accuracy of the local laser scale is the foundation of the full-field scale accuracy. The spot pair distance accuracy is affected by the angles α and β, as theoretically discussed in Section 2.2. In practical applications, we usually obtain the spot pair distance by controlling the angle α. Here, an experiment was carried out for verification. The experimental setup is as follows. First, we fixed the camera and the calibration target. The target was placed at a distance of 2 m from the laser, and the camera was fixed in place at a distance of 2.5 m from the target. The camera angle was 12.5°.
By rotating the laser system to different values of α, we obtained a set of laser spot pairs, as shown in Figure 14. In the local area, the ratio of the spot pair distance to the pixel distance is considered to be linear. We compared the pixel distances (Figure 15) and picked the minimum-pixel-distance orientation as the spot pair orientation for α = π/2. In Figure 15, it can be observed that the effect of the spot pair orientation on its distance agrees with Equation (5). The measured spot pair distance (S) for α = π/2 is almost equal to d, with a difference of 0.015%. This difference comes from two factors: first, the laser beams are not perfectly parallel; second, the pixel distance in the spot pair rotating area is nonlinear.
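The selection of the α = π/2 orientation can be sketched as follows: sweep the laser orientation, take the orientation with the minimum pixel distance, and convert that distance to a local scale using the known beam separation d. The separation `d_mm`, the angle sweep, and the simulated pixel distances below are assumptions for illustration, not the experiment's values:

```python
import numpy as np

# Known physical separation of the two parallel beams (assumed value, mm).
d_mm = 50.0

# Tested laser orientations, mimicking the rotation sweep of Figure 14.
angles = np.deg2rad(np.arange(30, 151, 15))

# Simulated pixel distances: the spot pair appears shortest when the pair
# axis is at alpha = pi/2, growing as it tilts away (illustrative model).
pix = 100.0 / np.sin(angles)

k = int(np.argmin(pix))        # minimum-pixel-distance orientation ~ alpha = pi/2
scale = d_mm / pix[k]          # local scale (mm per pixel) at this spot pair
print(np.rad2deg(angles[k]), scale)
```

In the real setup the `pix` array would come from sub-pixel spot-center detection in the recorded images rather than the closed-form model used here.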

Measurement of the Curved Wire
This test measures the length of a wire formed by folding a 500-mm-long tin wire, as shown in Figure 16. The wire was attached to a board at a distance of 2 m from the laser, and the camera was fixed in place at a distance of 2.5 m from the board. The camera angle was 0°. We moved the laser so that the spots formed at different locations on the board, and the fixed camera recorded those spots. By controlling the incline angle θ, we recorded 45 values of h(xi, yi) and 45 values of g(xi, yi), as shown in Figure 17.
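The 45 scattered samples of h(xi, yi) (and likewise g(xi, yi)) can be turned into a value at any image point with a moving least squares fit. A minimal MLS sketch with a linear basis and Gaussian weights follows; the synthetic scale field, the number of samples, and the support radius are illustrative assumptions:

```python
import numpy as np

def mls_eval(xq, yq, xs, ys, vs, radius=800.0):
    """Evaluate an MLS fit of scattered scale samples (xs, ys, vs) at the
    query point (xq, yq), using a linear basis p = [1, x, y] and a
    Gaussian weight with the given support radius (pixels)."""
    w = np.exp(-((xs - xq) ** 2 + (ys - yq) ** 2) / radius ** 2)  # weights
    P = np.column_stack([np.ones_like(xs), xs, ys])               # basis at samples
    A = P.T @ (w[:, None] * P)                                    # weighted moment matrix
    b = P.T @ (w * vs)
    coeff = np.linalg.solve(A, b)
    return coeff @ np.array([1.0, xq, yq])

# Synthetic example: a mildly nonuniform scale field sampled at 45 points,
# mimicking the 45 recorded h(xi, yi) values (numbers are illustrative).
rng = np.random.default_rng(0)
xs = rng.uniform(0, 2000, 45)                  # sample pixel x-coordinates
ys = rng.uniform(0, 1500, 45)                  # sample pixel y-coordinates
true_scale = lambda x, y: 0.50 + 1e-5 * x + 2e-5 * y   # mm per pixel
vs = true_scale(xs, ys)

est = mls_eval(1000.0, 750.0, xs, ys, vs)      # reconstructed scale at image center
print(est)
```

Because the linear basis reproduces any linear field exactly, the estimate matches the synthetic field at the query point; for the real, nonlinear scale map the fit is local and weighted, which is what makes MLS suitable here.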

By using the set of h(xi, yi) and g(xi, yi) values in MLS, the full-field scale was obtained, as shown in Figure 18. According to Equation (26), the wire was divided into 100 elements and 200 elements, respectively, for integration. The integration results for the two cases are almost the same, which indicates that ds' (whose corresponding horizontal pixel segment is dx and vertical pixel segment is dy) is sufficiently small. Using the proposed method, the measured wire length was 501.02 mm, with an error of less than 0.3%. Using only the scale at the center of the image (a uniform scale), the measured wire length was 504.58 mm, with an error of less than 1%. Clearly, using MLS to reconstruct the full-field scale for the image gives much better results; this also confirms the nonlinearity of the scale across the image. The detailed results can be found in Table 8.
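The element-wise integration described above can be sketched as a discrete sum of (local scale) × (pixel segment length) along the traced curve. The curve, the scale field, and the element counts below are illustrative, not the wire data; the refinement check from 100 to 200 elements mirrors the convergence check used in the text:

```python
import numpy as np

def curve_length(px, py, scale_fn):
    """Physical length of a curve traced in the image: sum of
    scale(x, y) * ds' over small pixel segments, the discrete analogue
    of the Equation (26) integral. px, py are pixel coordinates along
    the curve; scale_fn(x, y) returns the local scale in mm/pixel."""
    dx = np.diff(px)
    dy = np.diff(py)
    ds = np.hypot(dx, dy)                  # pixel segment lengths ds'
    xm = 0.5 * (px[:-1] + px[1:])          # segment midpoints, where the
    ym = 0.5 * (py[:-1] + py[1:])          # local scale is evaluated
    return float(np.sum(scale_fn(xm, ym) * ds))   # length in mm

# Illustrative check: a half-circle of pixel radius 400 under a uniform
# 0.5 mm/pixel scale has length pi * 400 * 0.5 ~ 628.3 mm. Refining from
# 100 to 200 elements barely changes the sum, so ds' is small enough.
uniform = lambda x, y: np.full_like(x, 0.5)
t100 = np.linspace(0.0, np.pi, 101)
t200 = np.linspace(0.0, np.pi, 201)
L100 = curve_length(400 * np.cos(t100), 400 * np.sin(t100), uniform)
L200 = curve_length(400 * np.cos(t200), 400 * np.sin(t200), uniform)
print(L100, L200)
```

In the actual measurement, `scale_fn` would be the MLS-reconstructed full-field scale map rather than a uniform constant, which is exactly the difference between the 501.02 mm and 504.58 mm results reported above.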

Conclusions
In this paper, we proposed a new method for non-contacting image measurement. By designing laser-based virtual scale equipment, a set of scales is obtained to reconstruct a full-field scale map for an image. The scale map bridges image coordinates to the world coordinate system and can be used to measure the real size of an object in the image. Experimental verifications were carried out, showing that the proposed method gives very accurate and reliable results.
The method has the following advantages and potential: (1) it does not require camera calibration, which is usually complex and unique to each camera; (2) the scale can be calculated automatically, since the two spots are theoretically identical; (3) the scale calculation theory ensures the robustness of the system when obtaining each scale in the presence of uncertainty; (4) the full-field scale map can be reconstructed from a set of scale points and thus can be used to measure anything in the image; (5) the approach can achieve high-precision measurement at the sub-pixel level; and (6) the complexity of the scale map, which is nonlinear and location-dependent, can be easily handled by the proposed method. Thus, the system proposed in this paper is a promising candidate tool for non-contacting measurements and can be used in broad areas with high-accuracy requirements.