Article

High Precision Visual Dimension Measurement Method with Large Range Based on Multi-Prism and M-Array Coding

Xiao Zhou, Cong Zhou, Tingting Zhang, Xingang Mou, Jiaxin Xu and Yi He
1 School of Mechanical and Electronic Engineering, Wuhan University of Technology, Wuhan 430070, China
2 Intelligent Transport Systems Research Center, Wuhan University of Technology, Wuhan 430063, China
* Author to whom correspondence should be addressed.
Sensors 2022, 22(6), 2081; https://doi.org/10.3390/s22062081
Submission received: 17 January 2022 / Revised: 22 February 2022 / Accepted: 28 February 2022 / Published: 8 March 2022
(This article belongs to the Special Issue Sensor-Based Precision Dimensional Measurement)

Abstract

Visual dimension measurement based on a single, non-splicing lens faces a contradiction between accuracy and measurement range: the two cannot be satisfied simultaneously. In this paper, a multi-camera cooperative measurement method without mechanical motion is proposed for the dimension measurement of thin slice-like workpieces. After the multi-camera imaging system is calibrated through a simple and efficient scheme, high-precision dimension measurement over a large field of view can be completed with a single exposure. First, the images of the workpiece edges are compressed and combined by splitting and merging light through the multi-prism system, and the results are distributed to multiple cameras by changing the light path. Then, the mapping between the global measurement coordinates and the image coordinates of each camera is established on the basis of the globally unique M-array coding, and the image distortion is corrected with the coding units composed of black and white blocks. Finally, the edges are located accurately by sub-pixel edge point detection and curve fitting. Measuring a test workpiece of 24 mm × 12 mm × 2 mm with a single exposure shows that the repeated measurement accuracy reaches 0.2 µm and the absolute accuracy reaches 0.5 µm. Compared with other methods, ours achieves large-field measurement with only one exposure and without mechanical camera movement, with higher precision and faster speed.

1. Introduction

In modern production, there is a strong need for large field-of-view, high-precision dimension measurement. Existing manual contact measurement can meet the requirements of measurement range and precision at the same time, but its efficiency is low and it is subject to uncertainties introduced by the operator, the environment, the measuring equipment and other factors. To overcome the problems of traditional contact measurement, modern instruments such as coordinate measuring machines, laser scanning systems and optical microscopes, combined with image processing to scan contours, have been improved and applied to workpiece dimension measurement. These modern measuring instruments have obvious advantages, such as high precision and high stability, and their measurement accuracy can reach 1 µm or even 0.5 µm [1]. A coordinate measuring machine (CMM) used for contact measurement of the workpiece offers high accuracy and strong stability. However, the CMM is expensive, and because it is a contact method that applies measuring stress, the measurement process may damage the workpiece. Laser scanning measurement is non-contact and its accuracy can reach the micron level, but a camera and a motion mechanism are generally needed for scanning, and high-precision measurement takes a long time [2,3]. The optical microscope has high measurement accuracy but low efficiency, and professional inspection personnel are required to complete the measurement task [4]. Some scholars have proposed combining a universal tool microscope with a charge-coupled device (CCD) to replace manual alignment and reading [5]. However, because of the large magnification of the microscope, all dimensions cannot be measured at once; a workpiece that exceeds the field of view must be measured several times, so the detection efficiency is still limited. Although these modern measurement technologies are simple to operate and highly digitized, the contradiction between measurement efficiency and cost remains. At the same time, for large batches of workpieces, real-time automatic measurement cannot be completed and professional measuring personnel are required. Therefore, more efficient methods are needed to achieve fast, high-precision measurement.
In recent years, microelectronics and image processing technology have developed rapidly [6], and scholars have conducted extensive research in this field. Vision-based measurement has the advantages of non-contact operation, high precision, a large measurement range and online detection [7], so it has been used more and more widely in industrial geometric measurement, workpiece surface defect detection and surface deformation detection. For workpiece dimension measurement with machine vision, a single camera and a single exposure are suitable only for low-precision tasks or comparatively small measuring ranges. For high-precision, large-range measurement, multiple measurements must be obtained either by moving the object or the camera with a mechanical motion device [8], or by simultaneously exposing different areas with multiple cameras; both approaches require the splicing of multiple measurement results [9].
If a single camera images the workpiece in one shot, the measurement field of view and resolution are tied to the CCD device parameters. To keep the measurement precision unchanged while expanding the measurement range, the only option is a sub-pixel positioning algorithm [10,11], whose performance depends on the algorithm's limits and the shape characteristics of the measured object. With multiple imaging, precise position parameters between the fields of view must be known, which requires external precision displacement equipment or more complex calibration methods; these increase the uncertainty of the system or reduce measurement efficiency. In ref. [12], a planar array CCD and a high-precision measurement grating are used to calculate the actual position of the part contour, and a high-precision motion system images the complete edge, realizing precise measurement of the geometric quantities of the part at a detection level of 3 µm. In ref. [13], four cameras and continuous image data are matched to correct errors in the measurement data and obtain the precise boundary of large-scale panels. In ref. [14], a sub-pixel edge detection method based on a convolutional neural network (CNN) and bi-directional long short-term memory is proposed, which overcomes the resolution limitation of image sensors and the influence of noise. In ref. [15], a fully constrained, model-free distortion correction method is proposed to address the imaging distortion of a single-lens biprism system, and it performs well on the distorted images captured by that system.
In this paper, a prism assembly and an M-array coding method are used to achieve fast, high-precision measurement over a large field of view through multi-camera collaboration. First, a simple combination of prisms changes the light path so that the large background area irrelevant to the measurement is discarded; only the regions that contribute to the measurement are routed into the cameras by the designed optical path, which improves the utilization of the camera imaging plane and alleviates the contradiction between field of view and resolution. The combined prisms disturb the imaging light path of the whole field of view, which complicates the imaging relationship of each camera and makes system calibration more difficult. Therefore, a global coordinate system is established using a photo-etched glass board based on an M-array; the pixel-by-pixel mapping relationship is established by identifying the globally unique coding pattern in the M-array, and camera distortion correction and parameter calibration are completed in one step. The prism assembly uses common triangular reflection prisms and splitting prisms, so it is simple to construct. High-precision, large-area photo-etched glass boards can be produced by mature photolithography, and the calibration of the entire complex measurement optical path can be completed from a single image.
The test system constructed by our method is shown in Figure 1. A marble slab is used as the base to reduce the influence of the environment and vibration on the prism imaging system. The mounting fixtures for the light source, prism system and lens are installed on a one-piece steel frame to accurately control the imaging focal length and object distance and to increase the stability of the system.
The remainder of this paper is organized as follows. In Section 2, an enlarged imaging region method based on multiple prisms is proposed, and the design process and parameters of the prism assembly are introduced in detail. In Section 3, the coordinate system transformation and calibration algorithm based on the two-dimensional maximum area array (M-array) are introduced in detail. Section 4 presents the experimental results and analysis. Section 5 discusses the feasibility and shortcomings of the method. Section 6 summarizes and concludes our work.

2. Multi-Prism Imaging Methods

To image the regions of the workpiece that contribute to the measurement on the same camera, a prism system is designed to transform the optical path. It improves the utilization of the camera imaging plane and avoids spatial interference between cameras. By splitting and merging light, the prism system compresses and combines the parts of the workpiece that lie outside the field of view so that they are imaged on the same camera without changing the imaging magnification, which alleviates the contradiction between accuracy and range of measurement.

2.1. Field of View Allocation

The high-precision visual dimension measurement method proposed in this paper can be adapted to workpieces of various shapes and sizes while maintaining measurement accuracy, because all the imaging required for the dimension calculation is completed in a single exposure and the multiple fields of view are then stitched based on absolute coordinates. There is no need to move the camera or the target, which avoids the loss of accuracy and the measurement time caused by motion. For workpieces of different sizes and types, multiple cameras must image each region of the target simultaneously, and each camera's field of view must be allocated to the target workpiece.
To illustrate the specific parameters of this method, a practical case is presented in which the measurement object is an oil pump blade from an automobile. Its three-dimensional shape is shown in Figure 2a. The product length is about 24.940 mm, the product width is about 12.962 mm, and the required measurement precision is ±1 µm. Considering this accuracy requirement, few industrial cameras can reach the required measurement accuracy at the pixel level. The pixel size of existing industrial cameras is generally 1.4 µm to 14 µm, but the smaller the pixel, the smaller the sensor at a given resolution, which makes it harder to place the whole blade in the field of view. To avoid, as far as possible, the measurement uncertainty introduced by sub-pixel edge calculation, the blade is magnified by 1.5× to 2×, so a single camera sensor would need to be at least 19.443 mm × 37.41 mm at 1.5× and 25.924 mm × 49.88 mm at 2×. To avoid the influence of diffraction on the blade edge after magnified imaging, a blue parallel light source with a wavelength of 420 nm was selected.
According to the geometric dimensions of the oil pump blade, only the edge areas of the blade need to be placed in the camera fields of view; the middle area of the blade, which does not contribute to the measurement, does not need to be imaged. Given that the sensor size of industrial cameras currently on the market is generally 3.6 mm × 4.8 mm to 13.5 mm × 18.8 mm (1/3″ to 4/3″), at least three cameras are required to cover the edge areas in the length direction. To cover the edge areas with the minimum number of cameras, three cameras with a sensor size of more than 1″ were assigned to one length direction and two cameras with a sensor size of more than 1″ to one width direction. A Hikang MV-CH120-10UM camera was used: a grayscale area-array industrial camera with a 1.1″ sensor, a resolution of 4096 × 3000 and a pixel size of 3.45 µm × 3.45 µm. At least 10 cameras are needed to cover all four edges. The field-of-view allocation is shown in Figure 2b, where cameras must be arranged from region ① to region ⑩ to complete simultaneous imaging.
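The camera-count arithmetic above can be checked with a short script. The sketch below, written by us for illustration, uses only the blade dimensions, magnification range and sensor parameters quoted in the text; the variable names are not from the paper.

```python
import math

# Blade dimensions (mm) and magnification range quoted in Section 2.1.
blade_l, blade_w = 24.940, 12.962
for m in (1.5, 2.0):
    print(f"{m}x magnification -> image footprint "
          f"{blade_w * m:.3f} mm x {blade_l * m:.3f} mm")

# MV-CH120-10UM: 4096 x 3000 pixels at a 3.45 um pitch.
sensor_w = 4096 * 3.45e-3   # ~14.13 mm
sensor_h = 3000 * 3.45e-3   # ~10.35 mm
print(f"sensor size: {sensor_w:.2f} mm x {sensor_h:.2f} mm")

# Cameras needed to cover one magnified edge with the sensor's long side
# laid along the edge (no overlap margin assumed).
for name, edge in (("long edge", blade_l), ("short edge", blade_w)):
    n = math.ceil(edge * 1.5 / sensor_w)
    print(f"cameras along {name}: {n}")
```

At 1.5× magnification this reproduces the allocation in the text: three cameras along each long edge and two along each short edge.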

2.2. Analysis of Multi-Prism Structure

The external dimensions of a single industrial camera are generally 30 mm × 30 mm or more. Because the workpiece is small relative to the external size of the camera, the cameras cannot be arranged closely enough, without spatial interference, to realize the field-of-view allocation in Figure 2b. Therefore, the imaging light path must be expanded in space, and a light-splitting structure is designed so that multiple camera fields can be imaged at the same time. Optical splitting structures in camera systems are mainly used in multi-spectral cameras and framing cameras to collect images of different spectra or different locations. Splitting is typically carried out by a splitting prism or a splitting pyramid.
In Figure 2b, the system is determined to be a ten-field system. Figure 3 is a schematic diagram of using multiple prisms to obtain the ten fields; the size of the imaging plane in each camera field is 14.13 mm × 10.35 mm. To allow different cameras to collect images of different fields, the center of a camera can be offset from the center of the prism. The spatial design should be compact and convenient for assembling the various parts. As can be seen from Figure 3, when many fields have to be split, the optical system becomes very large and difficult to assemble.
To reduce the number of cameras and improve the imaging plane utilization of a single camera, additional optical prism devices are designed. By using a plane-mirror combination, a biprism or a diffraction grating, the two opposite edge regions of the measured object can reach the same camera imaging plane through two different light paths, thereby merging the light. As shown in Figure 4, area ① and area ⑧ are merged onto the same camera imaging plane. With the similar combinations ②⑦, ③⑥, ④⑩ and ⑤⑨, all edge imaging can be completed with five cameras. In each camera, the two opposite edges of the workpiece are collected simultaneously and distributed to the upper and lower halves of the photosensitive chip, respectively. This does not reduce the measurement resolution of the system; on the contrary, since the middle part of the workpiece adds no useful edge to the dimension measurement, presenting the two opposite edge regions in one camera improves the measurement efficiency.
According to the existing research in [16,17,18,19] and the measurement object, a biprism or a rhombic prism can be used to merge multiple light paths. With a biprism, it is difficult to adjust the angle between the two prisms so that the camera image is evenly distributed left and right while a clear image is obtained on both sides at the same time. Considering the limited space and the need for convenient assembly, this paper uses the rhombic prism to merge the light, which avoids the angle adjustment and assembly problems of a biprism system.
In this paper, to image the edge of the workpiece with complete coverage, five cameras and a prism system are used, as shown in Figure 5a. The designed prism assembly A is located in the middle of the five cameras; it splits and merges the light and ensures that the cameras do not interfere with one another. The blue parallel light source E is emitted from the bottom. After passing through the workpiece D to be measured, the light reaches the 55 mm aperture lens B and forms an enlarged image. After passing through the prism assembly A, the light is divided into two parts: one is deflected by 90°, as shown in Figure 5b, and the other travels vertically upward, as shown in Figure 5c. The light is then merged onto the respective camera imaging planes. Cameras C1, C2 and C5 image the long edges of the workpiece, while cameras C3 and C4 image the short edges. In Figure 5b,c, the lens is omitted from the rendering to show the light path more clearly; in the actual system the lens is always present for imaging.
In Figure 5b, the green light comes from a section of the long side of the workpiece and reaches the imaging plane of camera C1 through the prism assembly; the yellow light is the path to camera C2, and the blue light is the path to camera C5. To avoid spatial interference between cameras C1, C2 and C5 placed side by side, two 45° isosceles right-angle reflector prisms are used to deflect the light paths received by cameras C1 and C5 by 90°. As a result, cameras C1 and C5 are distributed vertically on either side of camera C2. In Figure 5c, the purple light path leads to camera C3 and the pink one to camera C4. To show the details of cameras C3 and C4 more clearly, only part of prism assembly A is rendered. To avoid spatial interference between cameras C3 and C4, one 45° isosceles right-angle reflector prism is used to change the direction of the light path received by camera C4.

2.3. Calculation of Prism Parameters of Merged Light

The optical imaging model of the rhombic prism used for light merging is shown in Figure 6a. The light coming out of the camera lens into the prism is not parallel, so a black shadow region appears between the two red lines after the light is merged by the rhombic prism. After determining the splitting structure and spatial layout of the multipath imaging, the size of each prism in the imaging structure must be designed according to the actual dimensions of the workpiece, so that the effective edges of the workpiece are imaged and the useless background is discarded. First, the width of the dark shadow in Figure 6a needs to be calculated: it determines whether the two opposite edges of the workpiece can be captured in the same camera simultaneously without being hidden in the shadow. The calculation model is shown in Figure 6b.
The rhombic prism used for merging light is placed on the splitting prism. In Figure 6b, β is the angle of incidence, α is the corresponding refraction angle inside the prism, Len is the distance between the lens focus and the splitting prism, H_1 is the height of the splitting prism, H_2 is the distance between the upper surface of the rhombic prism and the imaging plane of the camera, H_3 is the height of the rhombic prism, L is the length of the rhombic prism, and W_black is half of the width of the black shadow. The light enters the splitting prism from the focal point of the lens and exits from the rhombic prism. The coordinate point A(a, 0) is the point at which the light enters the rhombic prism; it can be calculated as:
a = L + H_3 − Len·tan β − H_1·tan α    (1)
Black shadows are formed because light cannot be reflected from the vertex of the left boundary of the rhombic prism. Taking the boundary position of the rhombic prism, where a = H_3 + H_3·tan α, and carrying out the geometric calculation, we obtain the half-width of the black shadow:
W_black = L·tan α + H_2·tan(arcsin(λ·sin α))    (2)
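As a numerical sanity check, the sketch below evaluates Equation (2) as reconstructed above. The example values and the reading of λ as the refractive index of the prism glass are our assumptions, not design figures from the paper.

```python
import math

def black_shadow_half_width(L, H2, alpha, n):
    """Half-width of the dark band, following Eq. (2) as written above:
    W_black = L*tan(alpha) + H2*tan(asin(n*sin(alpha)))."""
    return L * math.tan(alpha) + H2 * math.tan(math.asin(n * math.sin(alpha)))

# Placeholder values (mm / radians); not the paper's design numbers.
L = 6.5                      # rhombic prism length
H2 = 20.0                    # rhombic prism top surface to camera imaging plane
alpha = math.radians(3.0)    # refraction angle inside the glass
n = 1.52                     # assumed refractive index (lambda in the text)
print(f"W_black ~= {black_shadow_half_width(L, H2, alpha, n):.3f} mm")
```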
Finally, the prism assembly dimensions were designed as shown in Figure 7a, and the actual assembly is shown in Figure 7b. The height and length of the rhombic prism used to combine the optical paths of cameras C1, C2 and C5 are 5 mm and 6.5 mm, the height and length of the rhombic prism used to combine the optical paths of cameras C3 and C4 are 7.5 mm and 16 mm, and the height of the splitting prism is 30 mm.

3. Calibration Based on M-Array

The model of the prism imaging system is complicated because of the imaging distortion caused by manufacturing and assembly errors. To ensure measurement accuracy, point-to-point correction is designed in this paper. In the prism splitting-and-merging system, a single camera can simultaneously image two regions that lie outside its field of view, but the physical geometric distance between the two regions shown in the same image is not known. To obtain the physical distance between any pixel coordinates in the synthesized field of view, the correspondence between image coordinates and actual physical coordinates must be established.
Pseudo-random coding is a kind of random coding that can be determined in advance and reproduced repeatedly. It usually takes two forms, the pseudo-random sequence and the pseudo-random array. The M-array is a typical pseudo-random array with a specific window property, often denoted U(u, v; m, n): in an array of size u × v, any sub-window of size m × n appears only once [20]. Because of this global uniqueness of the sub-windows, the sub-windows of an M-array can be used to encode a two-dimensional coordinate system. The encoded M-array is printed on a 3-inch high-precision photo-etched glass plate with a minimum line width of 23 µm, as shown in Figure 8. The line-width accuracy of the M-array photo-etched glass plate can reach 0.25 µm. By placing the blade to be measured on the photo-etched glass board and imaging them together, all the absolute physical coordinates of the blade can be obtained along with all the edge pixel coordinates of the blade.
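The window property can be expressed directly as a lookup table from every m × n sub-window to its position in the array. The sketch below is our own illustration rather than the paper's code; with a true M-array every key occurs exactly once.

```python
import numpy as np

def build_window_lookup(M, m=5, n=5):
    """Map every m x n sub-window of a binary array to its top-left cell
    coordinate.  For a genuine M-array the window property guarantees
    that no key repeats, so `duplicates` stays zero."""
    lut, duplicates = {}, 0
    rows, cols = M.shape
    for r in range(rows - m + 1):
        for c in range(cols - n + 1):
            key = M[r:r + m, c:c + n].astype(np.uint8).tobytes()
            if key in lut:
                duplicates += 1
            else:
                lut[key] = (r, c)
    return lut, duplicates

# Toy demonstration with a random binary array (not a real M-array).
rng = np.random.default_rng(0)
toy = rng.integers(0, 2, size=(64, 64))
lut, dup = build_window_lookup(toy)
print(len(lut), "windows indexed,", dup, "duplicates")
```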

3.1. Calculate Corner Points Based on Minimum Classification Error

A local region of the M-array is shown in Figure 9a. The pattern encoded by the M-array does not strictly alternate between black and white squares; there are runs of consecutive black blocks or consecutive white blocks. Therefore, the traditional pixel-gradient-based method is not suitable for corner point calculation in this system. To establish the M-array coordinate system and correct the image, this paper proposes a minimum classification error method to calculate all the corner points of the M-array.
After the actual photo-etched glass board is imaged, each black or white square covers about 10 × 10 pixels. To find the corner points between consecutive white or black squares, the squares in the surrounding area must be used for auxiliary calculation. The minimum window of the M-array designed in this paper is 5 × 5, which means the M-array cannot be entirely black or entirely white within any 50 × 50 pixel region. We define the parameter S(x_i, y_i) as the square dispersion degree of the pixel point (x_i, y_i): taking (x_i, y_i) as the origin, the 50 × 50 pixel region to its lower right is divided at equal spacing into 25 squares, and the total pixel value of each square is computed. The dispersion degree of these 25 values is S(x_i, y_i) at the point (x_i, y_i).
The square dispersion degree within the 50 × 50 region is calculated for each candidate pixel location, as shown in Equation (3). To obtain more accurate corner coordinates, pixel interpolation within a w_A × h_A range is carried out near the point (x_i, y_i) in the image coordinate system, where f(x, y) is the pixel value, maxGate = w_A × h_A × 255, minGate = 0, and μ is the average pixel value in the interpolated region. The (x_dst, y_dst) that minimizes S(x_i, y_i) is the required corner point, where N_set is the number of candidate corner positions; N_set depends on the initial value given, and its maximum is 10 × 10 = 100. Figure 9b shows the distribution of S(x_i, y_i) near a corner point (x_i, y_i); the blue point in the xoy coordinate plane is the final corner position. The coordinates x and y corresponding to the minimum of the z-axis give the target corner point (x_dst, y_dst).
S(x_i, y_i) = Σ_{x = x_i − w_A/2}^{x_i + w_A/2} Σ_{y = y_i − h_A/2}^{y_i + h_A/2} s(x, y),  with  s(x, y) = maxGate − f(x, y) if f(x, y) > μ, and s(x, y) = f(x, y) − minGate otherwise    (3)
(x_dst, y_dst) = argmin |S(x_i, y_i)|,  i = 0, 1, …, N_set
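A simplified reading of this corner search is sketched below: for each candidate position, a cost in the spirit of Equation (3) is accumulated over a window, and the position with the minimum cost is kept. The window size, per-pixel cost form and exhaustive search are our simplifications; the paper additionally interpolates the image before evaluating S.

```python
import numpy as np

def dispersion(img, x, y, half=25):
    """Classification cost around (x, y): every pixel in the window is
    scored by its distance to the nearer class extreme (white above the
    local mean, black below).  A simplified stand-in for Eq. (3)."""
    region = img[y - half:y + half, x - half:x + half].astype(np.float64)
    mu = region.mean()
    cost = np.where(region > mu, 255.0 - region, region)
    return cost.sum()

def refine_corner(img, x0, y0, search=5):
    """Exhaustively search a small neighbourhood of the initial guess and
    return the position with the minimum dispersion."""
    candidates = [(dispersion(img, x0 + dx, y0 + dy), x0 + dx, y0 + dy)
                  for dy in range(-search, search + 1)
                  for dx in range(-search, search + 1)]
    s, x, y = min(candidates)
    return (x, y), s
```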
The actual image of the photo-etched glass board and the corner points obtained by this search are shown in Figure 9a, where the red points are the calculated sub-pixel corner points. There is no blade edge in the dark area, so the corresponding corner points do not need to be calculated.

3.2. Coordinate System Transformation and Correction

Because of the prism system, the distortion model is no longer the traditional pinhole imaging model. The distortion parameters obtained from a traditional model correspond to a global correction of the image, and a global distortion model described by a finite set of parameters is clearly unsuitable for high-precision correction [21,22]. At the same time, the physical span of the merged parts of the image is unknown because of the prism splitting and merging, so the physical coordinates over the global range are obtained by M-array decoding. The local M-array imaging and decoding diagram is shown in Figure 10. Two adjacent corner points calculated in Section 3.1 are selected arbitrarily. The coded coordinates of each corner point are formed from the 5 × 5 squares at its lower right, with a white square representing 1 and a black square representing 0. Because of the prisms and the lens, the coded image may be mirrored and rotated by multiples of 90°, so the decoding direction differs between optical paths. For example, for the 25-bit binary code in the optical path of camera C1, the lower-right corner is the most significant bit and the upper-left corner the least significant bit, with the bits read from top to bottom and from right to left. For the red point, the decimal value of the 25-bit code is 12237003, and the physical coordinates obtained from the lookup table are (739, 1618); the physical coordinates corresponding to the blue point are (738, 1618). Through the pre-coded M-array coordinate system, the corresponding physical coordinates of all pixels in the imaging range can be obtained.
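The decoding step can be sketched as follows: the 5 × 5 block of thresholded squares is read out in the camera-specific order, interpreted as a 25-bit integer, and used to query a table built offline from the etched M-array. The mirroring flags and row-major read-out below are illustrative; the actual order for each optical path must match the encoding convention.

```python
import numpy as np

def decode_window(cells, flip_lr=False, flip_ud=False):
    """Turn a 5x5 array of thresholded squares (1 = white, 0 = black) into
    a 25-bit code.  Mirroring flags stand in for the per-camera read-out
    direction; the bits are then read row-major (assumed order)."""
    bits = np.asarray(cells, dtype=np.uint8)
    if flip_lr:
        bits = bits[:, ::-1]
    if flip_ud:
        bits = bits[::-1, :]
    code = 0
    for b in bits.flatten():
        code = (code << 1) | int(b)
    return code

# The lookup table maps each code to the physical corner coordinate on the
# etched plate (in coding-grid units); the single entry below is the
# example quoted in the text.
code_table = {12237003: (739, 1618)}
```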
This paper needs to establish the mapping between the corner points and the physical coordinates of the M-array. Because the image coordinates and the physical coordinates of the M-array lie in the same plane, the mapping involves only rotation and translation. The transformation is given in Equation (4), where M̃ is the coordinate vector in the M-array coding system, H is the 3 × 3 mapping matrix, P̃ is the coordinate vector in the image coordinate system, and the parameter s expresses an arbitrary scale. The final transformation matrix satisfies Equation (5), which minimizes the transformation error; i is the index of an extracted corner point and Cnt is the total number of corner points. From the calculated transformation matrix, the physical coordinates corresponding to each pixel can be obtained, and the pixel coordinates of each corner point are corrected according to the physical coordinates encoded by the M-array.
M̃ = s·H·P̃    (4)
[x_M, y_M, 1]^T = s · [h_11, h_12, h_13; h_21, h_22, h_23; h_31, h_32, 1] · [x_p, y_p, 1]^T
min Σ_{i=0}^{Cnt} ( x_M^i − s·(h_11·x_p^i + h_12·y_p^i + h_13)/(h_31·x_p^i + h_32·y_p^i + 1) )² + ( y_M^i − s·(h_21·x_p^i + h_22·y_p^i + h_23)/(h_31·x_p^i + h_32·y_p^i + 1) )²    (5)
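A standard way to obtain such a mapping from corner correspondences is a direct linear transform (DLT) solved by least squares. The sketch below is our algebraic stand-in for the reprojection-error minimization of Equation (5); a nonlinear refinement could follow it.

```python
import numpy as np

def fit_homography(img_pts, phys_pts):
    """Least-squares DLT estimate of the 3x3 matrix H mapping image corner
    points to M-array physical coordinates (both given as (N, 2) arrays)."""
    rows = []
    for (xp, yp), (xm, ym) in zip(img_pts, phys_pts):
        rows.append([xp, yp, 1, 0, 0, 0, -xm * xp, -xm * yp, -xm])
        rows.append([0, 0, 0, xp, yp, 1, -ym * xp, -ym * yp, -ym])
    A = np.asarray(rows, dtype=float)
    _, _, vt = np.linalg.svd(A)          # solution is the right singular vector
    H = vt[-1].reshape(3, 3)             # with the smallest singular value
    return H / H[2, 2]

def map_points(H, pts):
    """Apply H to (N, 2) points and de-homogenise."""
    ph = np.c_[pts, np.ones(len(pts))] @ H.T
    return ph[:, :2] / ph[:, 2:]
```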
To avoid gross errors caused by calibrating from a single frame, the optimal transformation matrix is calculated by combining the mapping parameters of multiple M-array images taken at different positions. The M-array board is kept in the same plane, rotated and translated arbitrarily, and N clear pictures are taken. The spatial position relationship is described by the affine transformation model in Equation (6), where P_i denotes the corrected corner-point pixel coordinates of the i-th image and A_i is the affine transformation matrix from the corner coordinates of the i-th frame to those of the first frame.
A_i·P_i = P_1    (6)
[a_i, b_i, c_i; d_i, e_i, f_i; 0, 0, 1] · [x_i, y_i, 1]^T = [x_1, y_1, 1]^T
To obtain the optimal affine parameters, the parameters that minimize the error are calculated as in Equation (7). The remaining N − 1 frames are affine-transformed to the first frame in turn, giving N − 1 affine matrices A. In Equation (8), the resulting point matrices are averaged to suppress gross errors; P* denotes the image coordinates after adjustment over the N frames. The mapping matrix H from the pixel coordinate system to the physical coordinate system is then recalculated from P* instead of P_1 in Equation (4).
A_i = argmin ‖A_i·P_i − P_1‖²    (7)
P* = (1/N)·Σ_{i=1}^{N} A_i·P_i    (8)
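The multi-frame averaging of Equations (7) and (8) can be sketched with a least-squares affine fit per frame. The helper names below are ours, and the point sets are assumed to contain the same corners in the same order.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine transform (Eq. (7)): dst ~= [src, 1] @ A.T."""
    X = np.c_[src, np.ones(len(src))]                 # (N, 3)
    coef, *_ = np.linalg.lstsq(X, dst, rcond=None)    # (3, 2)
    A = np.eye(3)
    A[:2, :] = coef.T
    return A

def adjusted_corners(frames):
    """Average the corner sets of N frames after mapping each onto the
    first frame (Eq. (8))."""
    ref = np.asarray(frames[0], dtype=float)
    acc = ref.copy()
    for pts in frames[1:]:
        pts = np.asarray(pts, dtype=float)
        A = fit_affine(pts, ref)
        mapped = np.c_[pts, np.ones(len(pts))] @ A.T
        acc += mapped[:, :2]
    return acc / len(frames)
```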
After the corrected corner points are obtained, the distance between each corrected corner and its original position is calculated, and the distortion heat map in Figure 11 is drawn. Because of the light-merging effect of the rhombic prism, there are black shadows in the center of the image, so no distortion statistics are collected there. The red regions represent the largest displacement, followed by green and blue. As can be seen from Figure 11, the distortion is small near the image center and grows with the distance from the center; however, because of the manufacturing and assembly errors of the prisms, the distortion is discrete and irregular. Such distortion is difficult to describe accurately with a traditional global formula, so for high-precision measurement a point-by-point mapping is needed to obtain an accurate coordinate system transformation.
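One way to realize such a point-by-point mapping is to interpolate the corner correspondences into a dense per-pixel lookup. The sketch below uses linear scattered-data interpolation, which is our choice; the paper does not prescribe the interpolation scheme.

```python
import numpy as np
from scipy.interpolate import griddata

def pixel_to_physical_maps(img_corners, phys_corners, img_shape):
    """Dense per-pixel lookup from image coordinates (x, y) to M-array
    physical coordinates, interpolated from the corner correspondences.
    Pixels outside the convex hull of the corners (e.g. the shadow band)
    come back as NaN."""
    h, w = img_shape
    gy, gx = np.mgrid[0:h, 0:w]
    map_x = griddata(img_corners, phys_corners[:, 0], (gx, gy), method="linear")
    map_y = griddata(img_corners, phys_corners[:, 1], (gx, gy), method="linear")
    return map_x, map_y
```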

4. Experimental Results and Analysis

The workpiece to be measured is placed in the measurement area, and the edges from the five fields of view are converted to the M-array coordinate system and drawn in the same image, as shown in Figure 12. To visualize the edge details, adaptive amplification is applied in the Y-axis direction for the edges parallel to the X-axis and in the X-axis direction for the edges parallel to the Y-axis.
Since the accuracy of the M-array calibration directly affects the final result of the whole measurement system, the accuracy of the calibration method is verified first. Because the object to be measured and the M-array always lie in the same plane, the dimension calculation depends on the corner extraction of the M-array and the accuracy of the coordinate system transformation. The accumulated manufacturing error of the photo-etched glass board covering the fields of the five cameras is within 0.25 µm, so we assume that all the corner points in the same row or column lie on a physical straight line. Because of errors from the corner extraction algorithm and errors caused by distortion, the extracted corner points deviate from this line in a discrete manner.
For the M-array images captured by the five cameras, a random row of corner points is extracted from each, and their distributions in the image coordinate system are shown as the black lines in Figure 13. Through the coordinate system transformation established by our method, they are mapped to the M-array coordinate system, shown as the red lines in Figure 13; Figure 13a–e correspond to cameras C1–C5, respectively. Because of corner extraction and distortion errors, the corner points in the image coordinate system fluctuate within ±0.2 pixels of a straight line. After mapping to the decoded physical coordinate system to remove the distortion, and converting back to the same ordinate pixel units, the corner points fluctuate within ±0.1 pixels, that is, ±0.2 µm. Since the corner points are calculated based on the minimum classification error, slight fluctuations remain in the M-array coordinate system.
Assuming that the corner calculation error follows a normal distribution, a least-squares fit is carried out to remove this error in the physical coordinate system, yielding a standard straight line in physical coordinates. This physical line is then mapped back to pixel coordinates. For camera C2, the corrected pixel corner coordinates are shown as the red line in Figure 14; the red line converges completely within ±0.01 pixels, which indicates that the coordinate system transformation is accurate. The same corner-point tests were conducted for the other four cameras, and the transformation between pixel and physical coordinates fully met the accuracy requirements.
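The residual check behind Figures 13 and 14 amounts to fitting a line to one row of corners and inspecting the residual spread. The helper below is our own evaluation sketch, not the paper's code.

```python
import numpy as np

def line_fit_residuals(xs, ys):
    """Fit y = m*x + c to one row of corner points (in physical or pixel
    coordinates) and return the residuals and their maximum magnitude."""
    m, c = np.polyfit(xs, ys, 1)
    residuals = ys - (m * xs + c)
    return residuals, float(np.max(np.abs(residuals)))

# Example use: verify that a row of corrected corners stays within tolerance.
# xs, ys = corrected_row[:, 0], corrected_row[:, 1]
# _, max_dev = line_fit_residuals(xs, ys)
# print("max deviation from fitted line:", max_dev)
```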
Then, the established lookup table relating pixel coordinates to the physical coordinate system is used to measure blades. A blade is imaged simultaneously by the five cameras, as shown in Figure 15. The length is calculated from Figure 15c–e, imaged by cameras C1, C2 and C5, and the width from Figure 15a,b, imaged by cameras C3 and C4. The red rectangular boxes mark the blade edge regions used in the calculation.
The main algorithm flow of the measurement is shown in Figure 16. Because of the high measurement precision required, trigger signals must be sent to the five cameras at the same time, so that all cameras start imaging simultaneously and the influence of the external environment is avoided. Sub-pixel edge extraction and coordinate system conversion are then carried out for the five images, and finally all the blade edge points converted to the M-array coordinate system are fitted, and the final results are calculated and output.
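The final fitting-and-dimension step can be illustrated by the distance between two nominally parallel fitted edges in the M-array coordinate system. The function below is a simple stand-in we wrote for illustration; the paper's exact fitting procedure may differ.

```python
import numpy as np

def edge_distance(edge_a, edge_b):
    """Fit edge_a as a line y = m*x + c (assumes it is not near-vertical)
    and return the mean perpendicular distance of edge_b to that line,
    i.e. one dimension of the workpiece in physical units."""
    edge_a = np.asarray(edge_a, dtype=float)
    edge_b = np.asarray(edge_b, dtype=float)
    m, c = np.polyfit(edge_a[:, 0], edge_a[:, 1], 1)
    d = np.abs(m * edge_b[:, 0] - edge_b[:, 1] + c) / np.hypot(m, 1.0)
    return float(d.mean())
```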
Two experiments were carried out to evaluate the accuracy and precision of the proposed method: one workpiece was measured repeatedly to compare the results; 100 workpieces were measured and compared to the reference value obtained by a professional contact measuring instrument.
First, the same blade is placed at different positions in the field of view and measured 100 times. The consistency error distribution is shown in Figure 17a and the error histogram in Figure 17b; 97% of the measurement results fluctuate within 0.2 µm. The variation can be attributed to random changes in the external environment, including temperature, humidity and the stability of the light source, as well as to errors introduced by the coordinate conversion at different positions.
Then, 100 different blades were each measured three times by the system at different times. The same 100 blades were measured repeatedly by professional surveyors with a contact instrument, and the averaged results were used as the reference data. The repeated measurement errors of the 100 blades were obtained by subtracting the second and third test results, respectively, from the first, as shown in Figure 18. The repeated measurement error distribution of the 100 different blades lies within 0.5 µm, indicating that the batch of measured blades has little impact on the stability of the system.
The three sets of measured values were compared with the reference values, and the error distribution is shown in Figure 19. In Figure 19, abscissa 0–100 corresponds to the difference between the first test data and the reference data, 100–200 to the second, and 200–300 to the third. The ordinate unit is µm. The errors between the measurement results of our method and the reference values are within 0.5 µm, which meets the accuracy requirements.
There are several vision-based methods for high-precision dimension measurement; a comparison is given in Table 1. The high magnification of the universal tool microscope achieves a plane measurement accuracy of 0.31 µm, but its field of view limits the range to only 0.76 mm. The high resolution of the grating scale gives it an absolute advantage in two-dimensional displacement measurement; when used for dimensional measurement, its accuracy reaches 0.7 µm over a range of up to 1.1 mm. Visual measurement based on laser triangulation reaches an accuracy of 10 µm over a range of up to 250 mm. The blade workpiece can be measured manually with a twist spring instrument with an error of 1 µm, but this introduces operator uncertainty. The measurement accuracy of the method in this paper reaches 0.5 µm over 25 mm, and because of the generality of the method, the accuracy can in principle also be maintained over larger measurement ranges. For the test system used in this paper, the maximum measurement field of view reaches 30.4 mm × 22.8 mm. Comparing precision and range of measurement, the proposed method has clear advantages over existing high-precision planar geometry measurement schemes.

5. Discussion

This paper proposes a visual dimension measurement method based on multi-prism splitting and merging of light, which achieves multi-camera cooperative measurement through a simple and efficient scheme without mechanical movement. For the cuboid blades in the case above, the repeatability over 100 measurements of the same blade is within 0.2 µm. When the results of our method for 100 different blades are compared with the reference values, the measurement error is within 0.5 µm, which meets the precision requirements. The feasibility of this method for large-range, high-precision workpiece measurement is verified through multiple tests on different samples. For the measurement of a thin slice-like workpiece without mechanical movement, a single exposure realizes the measurement of a large field of view, with higher precision and faster speed than traditional schemes. Because visual measurement is fast and non-contact, this method can also be combined with mechanical picking, transporting and sorting devices to perform automatic, high-efficiency, high-precision measurement in large batches.
Since the measurement accuracy is below 1 µm, the stability of the imaging structure is essential. Slight displacements of the mechanical structure caused by ambient temperature, humidity and airflow may introduce errors into the final results. Our follow-up work will be to continuously test and collect data in order to compensate the measurement results for environmental factors. For measurement objects with a large thickness, the depth of field must be extended and images reconstructed at different depths to preserve the measurement accuracy; we will further investigate methods for objects whose thickness exceeds the available depth of field.

6. Conclusions

To address the contradiction between accuracy and range in traditional visual methods, we discard the large background region irrelevant to the measurement and maximize the use of the camera imaging planes. The edge images that contribute to the measurement are compressed and merged by splitting and merging the light with multiple prisms. A global coordinate coding board is then constructed using the globally unique window property of the M-array. The coded board and the workpiece are placed in the same plane to establish the mapping between the global measurement coordinates and the image coordinates of each camera. Because the continuous M-array coordinate system covers the whole field of view of the workpiece measurement, high-precision camera distortion correction and parameter calibration can be completed in one step. Finally, the edge positions and the dimension measurement results are obtained by detecting sub-pixel edge points and curve fitting. Compared with other traditional visual measurement methods, ours is more versatile for large-range, high-precision measurement: for different types of measurement objects, only the corresponding prism combination needs to be designed, and no other part needs to change.

Author Contributions

Conceptualization, X.Z.; methodology, X.Z. and C.Z.; software, X.Z. and C.Z.; validation, X.Z. and C.Z.; formal analysis, C.Z.; investigation, X.Z.; resources, X.Z. and X.M.; data curation, C.Z.; writing—original draft preparation, C.Z.; writing—review and editing, C.Z. and T.Z.; visualization, C.Z. and J.X.; supervision, Y.H. and X.M.; project administration, X.Z. and X.M.; funding acquisition, X.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work is sponsored by the National Key Research and Development Project (2021YFC3001502), the National Natural Science Foundation of China (5207229).

Data Availability Statement

Data collected through research presented in the paper are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Li, B. Application of machine vision technology in geometric dimension measurement of small parts. EURASIP J. Image Video Process. 2018, 2018, 127.
2. Zhang, Y.C.; Han, J.X.; Fu, X.B.; Lin, H.B. An online measurement method based on line laser scanning for large forgings. Int. J. Adv. Manuf. Technol. 2014, 70, 439–448.
3. Liu, Y.; Wang, L.; Brandt, M. An accurate and real-time melt pool dimension measurement method for laser direct metal deposition. Int. J. Adv. Manuf. Technol. 2021, 114, 2421–2432.
4. Jiang, G.M. Research on Detection Method of Jet Disc Size and Geometric Tolerance Based on CCD. Master's Thesis, Harbin Institute of Technology, Shenzhen, China, 2014.
5. Feng, G.K.; Wang, Y.Q. Universal Tool Microscope Computer Aided Measurement System Based on Machine Vision Technology. Acta Microsc. 2019, 28, 760–769.
6. Pan, B.; Qian, K.; Xie, H. Two-dimensional digital image correlation for in-plane displacement and strain measurement: A review. Meas. Sci. Technol. 2009, 20, 62001.
7. Khalili, K.; Vahidnia, M. Improving the Accuracy of Crack Length Measurement Using Machine Vision. Procedia Technol. 2015, 19, 48–55.
8. Kim, J.A.; Kim, J.W.; Kang, C.S.; Jin, J.H.; Eom, T.B. An optical absolute position measurement method using a phase-encoded single track binary code. Rev. Sci. Instrum. 2012, 83, 115115.
9. Foroosh, H.; Zerubia, J.B.; Berthod, M. Extension of phase correlation to sub-pixel registration. IEEE Trans. Image Process. 2002, 11, 188–200.
10. Cui, J.S.; Huo, J.; Yang, M. The high precision positioning algorithm of circular landmark center in visual measurement. Optik 2014, 125, 6570–6575.
11. Zhou, X.; Wang, J.; Mou, X.; Li, X.; Feng, X. Robust and high-precision vision system for deflection measurement of crane girder with camera shake reduction. IEEE Sens. J. 2020, 21, 7478–7489.
12. Liao, Q.; Mi, L.; Zhou, Y.; Xu, Z.J. Design of micro size vision precision detection system. J. Chongqing Univ. 2002, 25, 21–23.
13. Noh, D.K.; Kim, S.H.; Park, Y.J. A measurement system based on visual servoing and error correction method using multiple CCD camera module and structured target for three dimensional measurement. Proc. SPIE 2007, 6719, 671907.
14. Wang, Y.H.; Chen, Q.B.; Ding, M.; Li, J.Y. High Precision Dimensional Measurement with Convolutional Neural Network and Bi-Directional Long Short-Term Memory (LSTM). Sensors 2019, 19, 5302.
15. Qian, B.B.; Kah, B.L. Image distortion correction for single-lens stereo vision system employing a biprism. J. Electron. Imaging 2016, 25, 043024.
16. Pan, B.; Yu, L.; Zhang, Q. Review of single-camera stereo-digital image correlation techniques for full-field 3D shape and deformation measurement. Sci. China Technol. Sci. 2018, 61, 2–20.
17. Lee, D.H.; Kweon, I.S. A novel stereo camera system by a biprism. IEEE Trans. Robot. Autom. 2000, 16, 528–541.
18. Lim, K.B.; Yong, X. Virtual stereovision system: New understanding on single-lens stereovision using a biprism. J. Electron. Imaging 2005, 14, 043020.
19. Inaba, M.; Hara, T.; Inoue, H. A stereo viewer based on a single camera with view-control mechanisms. In Proceedings of the 1993 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS'93), Yokohama, Japan, 26–30 July 1993; Volume 3, pp. 1857–1865.
20. Zhou, X.; Kang, Y.; Zhang, T.; Mou, X. M-array based on non-zero maps. IEEE Access 2021, 9, 58467–58477.
21. Yin, L.H.; Li, F.M.; Liu, S.J. Image correction technology of photoelectric panoramic imaging system. Optoelectron. Eng. 2016, 43, 53–58.
22. Hu, X.; Zhou, C.; Kang, Y.; Zhou, X.; Mou, X. A High-Precision Lens Distortion Correction Method for Vision Measurement. In Proceedings of the 2021 6th International Conference on Image, Vision and Computing (ICIVC), Qingdao, China, 23–25 July 2021; pp. 317–321.
23. Liu, J.; Cui, T. A measurement method on the roundness error based on computer vision system. In Proceedings of the 2nd International Conference on Information Science and Engineering, Hangzhou, China, 4–6 December 2010; pp. 884–887.
24. Tong, Z.X.; Bai, J.C.; Kang, Z.Q. Research on a new method of high precision dimension measurement based on linear CCD fused grating ruler. Opt. Technol. 2019, 45, 275–281.
25. Ding, Y.; Zhang, X.; Kovacevic, R. A laser-based machine vision measurement system for laser forming. Measurement 2016, 82, 345–354.
Figure 1. Actual picture of the test system. (A) Cameras; (B) prisms and cameras assembly; (C) camera lens; (D) computer; (E) marble slab; (F) photo-etching glass board and object to be measured.
Figure 2. Oil pump blade and assignment of field of view. (a) Oil pump blade. (b) Field of view assignment diagram.
Figure 3. Prisms split the light.
Figure 4. Prisms merge the light.
Figure 5. Schematic diagram of spatial distribution of imaging system and detail diagram of prism light path. (a) Spatial distribution of imaging system. (b) C1, C2 and C5 light path. (c) C3, C4 light path. (A) Prism assembly; (A1) part of prism assembly A; (B) camera lens; (C1–C5) cameras; (D) work piece to be measured; (E) parallel light source.
Figure 6. Rhombic prism model. (a) Rhombic prism imaging model. (b) Geometric parameter model.
Figure 7. Detailed dimension picture and actual picture of prism assembly. (a) Dimension drawing of prism assembly; (b) Actual picture of prism assembly.
Figure 8. Actual picture of M-array photo-etching glass plate. Part I is the assembly area; part II is the M-array coding area, and the workpiece is placed in the coding area for the transformation of absolute coordinates.
Figure 9. Image corner point extraction of photo-etching glass plate. (a) Corner point extraction; (b) Corner point classification error distribution.
Figure 10. M-array decoding flow of camera C1.
Figure 11. Distortion distribution. Red represents high distortion, followed by green and blue.
Figure 12. All the edge points are converted to the M-array coordinate system.
Figure 13. Distribution of corner points on a straight line in the five camera images. (a–e) correspond to cameras C1–C5, respectively.
Figure 14. Corner points correction of image coordinate system.
Figure 15. Simultaneous imaging of the five cameras. (c–e) are the pictures imaged by cameras C1, C2 and C5; (a,b) are the pictures imaged by cameras C3 and C4.
Figure 16. Algorithm flow of dimension measurement.
Figure 17. Errors of repeated measurement. (a) Repetition errors distribution; (b) Repetition errors histogram.
Figure 18. Repeated measurement error of different blades.
Figure 19. Measurement errors of blades.
Table 1. Comparison of the method in this paper with other methods.
Method | Accuracy / Measuring Range
Universal tool microscope combined with CCD [23] | 0.31 µm / 0.76 mm
Raster coded measurement [24] | 0.7 µm / 1.1 mm
Machine vision measurement based on laser triangulation [25] | 10 µm / 250 mm
Manual twist spring instrument measurement | 1 µm / 25 mm
Method in this paper | 0.5 µm / 30.4 mm

