Symmetry
  • Article
  • Open Access

4 December 2025

Research on Multi-View Phase Shift and Highlight Region Treatment for Large Curved Parts Measurement

School of Mechanical Engineering, Southeast University, Nanjing 211189, China
* Author to whom correspondence should be addressed.
This article belongs to the Special Issue Symmetry and Asymmetry in Dynamics of Mechanical and Structural Engineering

Abstract

For large curved parts with complex surfaces, which often exhibit both symmetry and asymmetry in their geometric features, a multi-view phase shift measurement method combined with a highlight region treatment method is proposed and applied to an online measurement system. The hardware of the measuring system comprises a self-designed multi-view vision platform and a multi-view three-dimensional measurement platform composed of a rotating platform, a robot, and a linear guide rail. An overall calibration of the system is conducted to guarantee effective stitching of the measurement point clouds from each viewing angle. The system integrates multi-view 3D measurement combined with the phase shift method into the online measurement system to realize full-coverage, high-precision measurement of an impeller, addressing both its inherent symmetry (regular blade arrangement) and local asymmetry (irregular edge details), and keeps the relative error between the measured and actual dimensions within 1%. In addition, a highlight region treatment method is proposed: by adjusting the camera's exposure time to change the light intensity of the captured images, images under different exposures and their valid pixels are obtained, from which a composite image free of highlight phenomena is synthesized. Experimental results demonstrate that the proposed method achieves full-coverage measurement of the measured object and effective measurement of highlight regions.

1. Introduction

Three-dimensional measurement technology has been widely applied in various fields and can be divided into contact and non-contact measurement [1,2,3,4,5,6]. Contact measurement, represented by coordinate measuring machines (CMMs) and articulated arm measuring machines, achieves micron-level precision and is suitable for inspecting key dimensions of precision metal parts, but it suffers from slow measurement speed and potential damage to soft materials. Among non-contact technologies, optical methods are the most widely used. Laser scanning balances speed and precision for rapid scanning of medium-sized workpieces; structured light measurement acquires full-field data in one shot and adapts efficiently to 3D reconstruction of complex surfaces such as gears and impellers; and photogrammetry enables low-cost measurement of large objects. Additionally, CT scanning realizes non-destructive detection of internal part structures, while ultrasonic measurement is applicable to non-metallic materials and harsh environments.
In recent years, the structured light fringe projection 3D measurement technology [7], a type of non-contact measurement, has been widely applied in industrial measurement and reverse engineering fields. It enables efficient measurement using low-cost equipment without damaging the measured surface. The classic phase-shifted fringe structured light 3D measurement technique was first proposed by Srinivasan et al. [8] in 1984. With the rapid advancement of digital projection and image sensing technologies, it now allows pixel-level measurement of more complex objects with higher accuracy and faster speed. Phase unwrapping, a crucial step in this process, has also witnessed significant development in recent years. Building on the temporal phase unwrapping method initially proposed by Huntley [9], a variety of phase unwrapping approaches have been developed. For instance, Bergman [10] employed Gray-code fringe patterns for phase unwrapping to enhance accuracy; Zhang et al. [11] proposed a complementary Gray-code unwrapping method to address the issue of period misalignment during phase unwrapping; Wei-Hung Su [12] utilized color-coded fringes for phase order encoding; Yueyi Zhang [13] designed a fringe pattern embedded with coded light spots to reduce the number of projected fringes and improve phase unwrapping efficiency; Reich et al. [14] introduced the dual-frequency heterodyne method, which converts two sinusoidal fringe patterns of different frequencies into an equivalent single-frequency sinusoidal fringe pattern through calculation. In this case, the wrapped phase of the equivalent single-frequency fringe equals the absolute phase. Subsequently, multi-frequency and multi-wavelength heterodyne methods [15] were developed, further enhancing phase unwrapping accuracy and enabling application to more complex surfaces. Moreover, deep learning techniques have been extensively validated and successfully applied across various fields, demonstrating powerful capabilities in feature extraction and nonlinear modeling [16,17,18].
In the field of complex curved surface measurement, numerous commercial online measurement systems for complex curved parts have been widely used in automotive, aerospace, medical technology, and other industries. Products such as the Gocator series, SmartScan blue light scanning system, and ATOS Triple Scan scanner represent state-of-the-art solutions for high-speed and high-precision 3D measurement. However, compared with foreign technologies, China’s complex curved surface measurement techniques still lag in terms of measurement accuracy, scene adaptability, and measurement speed. This makes it difficult to achieve effective full-coverage and high-efficiency measurement of large complex curved parts, and related products struggle to gain core competitiveness. Therefore, researching the online measurement of large complex curved parts is of great significance for enhancing the core competitiveness of China’s manufacturing industry and breaking the foreign monopoly on the processing and measurement technologies of such parts.
In addition, highlight region processing is also an unavoidable challenge in structured light 3D measurement, and researchers have conducted extensive studies on it. Currently, for the highlight phenomenon in structured light measurement, the commonly used methods mainly include spraying white powder, adding polarizers [19], multi-exposure [20], adaptive projection [21], and other approaches [22]. These methods can reduce the impact of specular reflection regions on measurement to a certain extent. However, when considering factors such as measurement requirements, measurement costs, and versatility, these methods still have certain limitations. Therefore, there is still no widely applicable solution for structured light measurement of objects with highlight phenomena.
In this study, we focus on complex system calibration and 3D measurement technology that combines multi-view vision with the phase shift method and a highlight region treatment method. Leveraging multi-view vision and the phase shift method, and based on equipment such as a multi-view vision platform, six-axis robot, rotating platform, linear module, and online measurement platform, the integration and automation of online measurement for curved parts, which often feature both geometric symmetry and local asymmetry, are realized. We also investigate the application of the multi-exposure method to address the highlight phenomenon that occurs when measuring objects with specular reflection. Finally, experiments are conducted to verify the applicability of the proposed methods.

2. System Construction

2.1. Hardware Composition

The online measurement of large curved surfaces is based on a self-developed multi-view vision platform, equipped with three Basler industrial cameras arranged in a triangular configuration. The two lower cameras are positioned on the same horizontal plane, and a DLP projector for projecting structured light is installed between these two lower cameras. Additionally, the measurement system achieves multi-degree-of-freedom movement of the vision system through the coordinated operation of the rotating platform, ABB six-axis industrial robot, and TOYO linear guide module. This enables full-coverage and multi-angle measurement of the measured object.
The system involves multiple complex coordinate systems, including the camera coordinate system ( C V ), robot end-effector coordinate system ( C E ), robot base coordinate system ( C B ), linear guide coordinate system ( C D ), and rotating platform coordinate system ( C R ). A schematic diagram of these coordinate systems is presented in Figure 1.
Figure 1. Schematic diagram of the measurement system’s coordinate systems.

2.2. Software Composition

The measurement system primarily includes functions such as camera and projector control, 3D reconstruction of the measured object, preliminary point cloud processing, and stitching of 3D reconstructed point clouds from different angles based on system calibration parameters. Ultimately, it obtains comprehensive measurement results of the object, realizing online measurement. The architecture of the online measurement software is illustrated in Figure 2.
Figure 2. Architecture of the online measurement system.

3. System Calibration

Point clouds acquired from different measurement angles correspond to the camera coordinate systems at their respective measurement positions. To unify the coordinate systems of multi-angle point clouds, the measurement system must undergo comprehensive calibration to determine the transformation relationships between its various coordinate systems. System calibration mainly includes camera calibration, rotation axis calibration, robot hand–eye calibration, and linear guide calibration. A common reference is therefore required: in the subsequent calibration process, the left camera coordinate system is designated as the reference coordinate system, and both rotation axis calibration and hand–eye calibration are performed in this coordinate system.

3.1. Camera Calibration

Camera calibration is implemented using a checkerboard calibration board based on Zhang Zhengyou’s calibration method [23]. Let P be an arbitrary point in the world coordinate system; its coordinates in the left and right camera coordinate systems can then be expressed as $P_{l,r} = R_{l,r} P + t_{l,r}$, where $P_{l,r}$ denotes the coordinates of point P in the left and right camera coordinate systems, and $R_{l,r}$ and $t_{l,r}$ represent the rotation matrices and translation vectors of the world coordinate system relative to the left and right camera coordinate systems, respectively. As shown in Figure 3, the transformation relationship between the left and right camera coordinate systems can be expressed as $P_l = R P_r + t$, and the rotation matrix and translation vector between the left and right cameras are calculated as
$$R = R_r R_l^{T}, \qquad t = t_r - R_r R_l^{T} t_l$$
Figure 3. Stereo calibration model.
Multi-camera calibration follows the same principle as binocular calibration. Adding a third camera improves calibration accuracy, and multi-view stereo calibration can be treated as three sets of binocular stereo calibration [24]. By calibrating each of these three binocular models, the pairwise transformation relationships among the three cameras can be obtained. The multi-view vision model is depicted in Figure 4.
Figure 4. Multi-view model.
Let P be an arbitrary point in the world coordinate system, and let $P_1$, $P_2$, $P_3$ denote the coordinates of point P in the three camera coordinate systems, respectively. Through camera calibration, the rotation matrices $R_{12}$, $R_{13}$, $R_{23}$ and translation vectors $t_{12}$, $t_{13}$, $t_{23}$ between the three cameras can be acquired. The transformation relationships between the three camera coordinate systems are then expressed as
$$P_1 = R_{12} P_2 + t_{12}, \qquad P_1 = R_{13} P_3 + t_{13}, \qquad P_2 = R_{23} P_3 + t_{23}$$
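As an illustration of how these pairwise transforms can be obtained in practice, the following NumPy sketch derives the relative pose between two calibrated cameras from their extrinsics with respect to the common checkerboard frame (e.g., as returned by OpenCV calibration routines). The function and variable names are illustrative and not part of the original system.

```python
import numpy as np

def relative_pose(R_a, t_a, R_b, t_b):
    """Transform mapping points from camera a's frame into camera b's frame.

    Both cameras observe the same world (checkerboard) frame:
        P_a = R_a P + t_a,   P_b = R_b P + t_b.
    Eliminating P gives  P_b = R_ba P_a + t_ba  with
        R_ba = R_b R_a^T,   t_ba = t_b - R_b R_a^T t_a.
    """
    R_ba = R_b @ R_a.T
    t_ba = t_b - R_ba @ t_a
    return R_ba, t_ba

# Pairwise transforms among three cameras with known extrinsics (R1, t1), (R2, t2), (R3, t3):
#   R12, t12 = relative_pose(R2, t2, R1, t1)   # maps camera-2 coordinates into camera 1
#   R13, t13 = relative_pose(R3, t3, R1, t1)
#   R23, t23 = relative_pose(R3, t3, R2, t2)
```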

3.2. Rotation Axis Calibration with Adaptive Distance Weighting

During 3D measurement, the limited field of view of the camera restricts single-measurement coverage to only the point cloud of the object at a specific angle. To overcome this limitation, a rotating platform is typically used to perform 360-degree measurement of the object. The transformation relationship between point clouds from different angles is determined based on the rotation axis parameters of the turntable and the turntable’s rotation angle, ultimately achieving point cloud stitching. This method offers high speed and accuracy but is highly dependent on the rotation axis; thus, precise calibration of the rotation axis is essential. This involves determining the direction vector of the rotation axis in the camera coordinate system and the spatial coordinates of a point on the rotation axis. Therefore, this study proposes a rotation axis calibration method based on adaptive distance weighting, which uses a checkerboard calibration board. By analyzing the impact of the centers of corner trajectory circles on the accuracy of rotation axis fitting, weighting coefficients are designed to be adaptively determined based on the movement distance of the corners. These coefficients enable more accurate fitting of the rotation axis.
Rotation axis calibration primarily involves two coordinate systems: the fixed camera coordinate system ( C V ) and the calibration board coordinate system ( C B ), which changes with the rotation of the rotating platform. The checkerboard calibration board is fixed on the rotating platform and rotates with it around the rotation axis. Consequently, the rotation trajectory of each corner forms a circle, with the center of the circle lying on the rotation axis. Therefore, after calculating the center coordinates of the rotation trajectory circle for each corner, the least squares method [25] is used to fit a straight line through all these centers, thereby obtaining the linear equation of the rotation axis.
After calculating the center coordinates of each corner’s rotation trajectory circle, fitting a straight line through all these centers yields the linear equation of the rotation axis. The specific process is as follows.
(1)
Calculation of the center of the corner trajectory circle
The equation of the trajectory circle in the plane of an arbitrary corner could be given as
$$A x + B y + C z + D = 0$$
where A, B, and C are the components of the plane’s normal vector, which can be calculated from the coordinates of the corner at three different positions. Taking the positions $q_1$, $q_2$, and $q_3$ of the same corner, the spatial normal vector of the plane is obtained by computing the cross product of the vectors $\overrightarrow{q_1 q_2}$ and $\overrightarrow{q_2 q_3}$:
$$N = \overrightarrow{q_1 q_2} \times \overrightarrow{q_2 q_3}$$
where $N = [A, B, C]$. Substituting the coordinates of any corner into Equation (2) allows solving for the value of D.
Both the center of the rotation axis trajectory and the corner lie on the plane of the aforementioned trajectory circle. Moreover, the distance from different positions of the same corner on the trajectory circle to the center is equal to the circle’s radius. This relationship satisfies the following equations.
$$\left\| P_{i,j} - C_i \right\|^2 = r^2, \qquad A X_{C_i} + B Y_{C_i} + C Z_{C_i} + D = 0$$
where $P_{i,j}$ denotes the i-th corner on the calibration board at the j-th rotation position; $C_i = (X_{C_i}, Y_{C_i}, Z_{C_i})$ is the center coordinate of the trajectory circle corresponding to the i-th corner; $i = 1, 2, \ldots, m$ is the corner index and m is the total number of corners; $j = 1, 2, \ldots, n$ is the calibration board position index and n is the total number of calibration board positions; r is the radius of the trajectory circle. By substituting the coordinates of any corner at more than three positions into the equations, the center coordinate of the rotation trajectory circle corresponding to that corner can be calculated. Repeating this process for all corners yields the centers of the rotation trajectory circles for all corners.
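To make the circle-center step concrete, the following Python (NumPy) sketch solves the above constraints for a single corner by least squares: equal-radius equations are subtracted pairwise to eliminate $r^2$, and the plane constraint keeps the center in the trajectory plane. It is a minimal sketch under these assumptions; the function name and interface are illustrative.

```python
import numpy as np

def circle_center_3d(pts):
    """Least-squares center of the circle traced by one corner (pts: n x 3, n >= 3).

    The plane normal is estimated from the first three observed positions, as in the text;
    all observations of the corner are assumed to lie approximately in one plane.
    """
    pts = np.asarray(pts, dtype=float)
    q1, q2, q3 = pts[0], pts[1], pts[2]
    normal = np.cross(q2 - q1, q3 - q2)            # N = q1q2 x q2q3
    normal /= np.linalg.norm(normal)
    d = -normal @ q1                               # plane: N . x + d = 0

    # Equal-radius constraints ||P_j - C||^2 = r^2; subtracting the first from the
    # others removes r^2 and the quadratic term in C:
    #   2 (P_j - P_0) . C = ||P_j||^2 - ||P_0||^2
    A = 2.0 * (pts[1:] - pts[0])
    b = np.sum(pts[1:] ** 2, axis=1) - np.sum(pts[0] ** 2)

    # Append the plane constraint N . C = -d so the center stays in the circle plane.
    A = np.vstack([A, normal])
    b = np.append(b, -d)

    center, *_ = np.linalg.lstsq(A, b, rcond=None)
    return center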
(2)
Rotation axis fitting based on adaptive distance weighting
As previously mentioned, the centers of the corner rotation trajectory circles lie on the rotation axis. Therefore, fitting a spatial straight line through all these centers provides the linear equation of the rotation axis. Traditional methods often use the ordinary least squares method for rotation axis fitting, which minimizes the sum of squared distances from each center to the fitted line. Let the rotation axis to be fitted lie on the spatial straight line L, with unit direction vector $[u, v, w]$. Since L must pass through the centroid $[a, b, c]$ of all the centers, the linear equation of the rotation axis can be expressed as
$$\frac{x - a}{u} = \frac{y - b}{v} = \frac{z - c}{w}$$
Let $d_i$ denote the distance from the i-th center to the straight line L; then
$$d_i^2 = \left( X_{C_i} - a \right)^2 + \left( Y_{C_i} - b \right)^2 + \left( Z_{C_i} - c \right)^2 - \left[ u \left( X_{C_i} - a \right) + v \left( Y_{C_i} - b \right) + w \left( Z_{C_i} - c \right) \right]^2$$
where $(X_{C_i}, Y_{C_i}, Z_{C_i})$ is the center coordinate of the i-th corner’s trajectory circle. Using the ordinary least squares method, the sum of the squared distances from each center to the straight line L is minimized in accordance with the least squares principle:
$$\min_{u, v, w} \sum_{i=1}^{m} d_i^2 \qquad \text{s.t.} \quad u^2 + v^2 + w^2 = 1$$
It is evident from the above equation that the ordinary least squares method assumes each center contributes equally to the accuracy of the fitted straight line. In practice, however, the errors in the calculated centers differ in magnitude, so individual centers contribute differently to the rotation axis fitting accuracy and need to be distinguished. To improve the fitting accuracy, a weighting coefficient $\alpha_i$ is designed based on the distance each corner moves between successive positions. This allows the weighting coefficient corresponding to each trajectory circle center to be determined adaptively, thereby avoiding the errors associated with the ordinary least squares method.
$$\alpha_i = \frac{1}{n-1} \sum_{j=1}^{n-1} \left( \frac{l_{i,j}}{\frac{1}{m} \sum_{i=1}^{m} l_{i,j}} \right)^{p}$$
where $p > 1$. Figure 5 shows a schematic diagram of the movement distance $l_{i,j}$ in projected form, where $l_{i,j}$ is given as
$$l_{i,j} = \left\| P_{i,j+1} - P_{i,j} \right\|_2, \qquad j < n$$
Figure 5. Schematic diagram of corner movement distance.
The unit direction vector of the rotation axis is then calculated as
$$\min_{u, v, w} \sum_{i=1}^{m} \alpha_i d_i^2 \qquad \text{s.t.} \quad u^2 + v^2 + w^2 = 1$$
The problem of fitting the rotation axis is thus transformed into solving Equation (11), ultimately yielding the unit direction vector of the straight line containing the rotation axis. Combined with the known centroid of all centers (which the straight line must pass through), the rotation axis calibration is completed.
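The weighted fit has a closed-form solution: with the weights fixed, the optimal direction is the dominant eigenvector of the weighted scatter matrix of the centers about their centroid. The Python sketch below illustrates this, together with an adaptive weight computed from the corner movement distances; the exact weighting expression is our reading of the formula above, and the function names are illustrative rather than the authors' implementation.

```python
import numpy as np

def adaptive_weights(corner_tracks, p=2.0):
    """Adaptive distance weights, one per corner (illustrative reading of the weighting
    formula above: corners sweeping longer arcs receive larger weights).

    corner_tracks: (m, n, 3) array - m corners observed at n rotation positions.
    """
    tracks = np.asarray(corner_tracks, dtype=float)
    l = np.linalg.norm(np.diff(tracks, axis=1), axis=2)   # l_{i,j}, shape (m, n-1)
    rel = l / l.mean(axis=0, keepdims=True)               # distance relative to the mean over corners
    return (rel ** p).mean(axis=1)                        # average over the n-1 steps

def fit_axis_weighted(centers, weights):
    """Weighted line fit through the trajectory-circle centers.

    The line passes through the centroid of the centers; the direction minimizing the
    weighted sum of squared point-to-line distances is the eigenvector of the weighted
    scatter matrix with the largest eigenvalue.
    """
    centers = np.asarray(centers, dtype=float)
    w = np.asarray(weights, dtype=float)
    centroid = centers.mean(axis=0)                        # [a, b, c]
    diff = centers - centroid
    scatter = (w[:, None] * diff).T @ diff                 # sum_i w_i (C_i - c)(C_i - c)^T
    _, eigvecs = np.linalg.eigh(scatter)                   # ascending eigenvalues
    return centroid, eigvecs[:, -1]                        # point on axis, unit direction [u, v, w]
```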
Given the rotation axis direction vector parameters [u, v, w] and the coordinates of a center [a, b, c] on the rotation axis, the transformation matrix H [26] between the camera coordinate systems before and after the rotating platform rotates clockwise by an angle θ around the rotation axis can be derived as
$$H = \begin{bmatrix} R_H & t_H \\ 0 & 1 \end{bmatrix}$$
where $R_H$ is the rotation matrix and $t_H$ is the translation vector:
$$R_H = \begin{bmatrix} u^2 + (v^2 + w^2)\cos\theta & uv(1-\cos\theta) - w\sin\theta & uw(1-\cos\theta) + v\sin\theta \\ uv(1-\cos\theta) + w\sin\theta & v^2 + (u^2 + w^2)\cos\theta & vw(1-\cos\theta) - u\sin\theta \\ uw(1-\cos\theta) - v\sin\theta & vw(1-\cos\theta) + u\sin\theta & w^2 + (u^2 + v^2)\cos\theta \end{bmatrix}$$
$$t_H = \begin{bmatrix} \left[ a(v^2 + w^2) - u(bv + cw) \right](1-\cos\theta) + (bw - cv)\sin\theta \\ \left[ b(u^2 + w^2) - v(au + cw) \right](1-\cos\theta) + (cu - aw)\sin\theta \\ \left[ c(u^2 + v^2) - w(au + bv) \right](1-\cos\theta) + (av - bu)\sin\theta \end{bmatrix}$$
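For reference, a short NumPy sketch that assembles H from the calibrated axis direction, a point on the axis, and the rotation angle: it builds the same closed form via the Rodrigues formula and a conjugation by the translation to the axis point. The names and the sign convention for θ (the text states a clockwise rotation) are assumptions of this sketch.

```python
import numpy as np

def axis_rotation_transform(u_vec, point, theta):
    """4x4 homogeneous transform for a rotation by theta about an axis with unit
    direction u_vec passing through 'point' (a center on the rotation axis).
    Equivalent to T(point) * R_axis(theta) * T(-point).
    """
    u = np.asarray(u_vec, dtype=float)
    u = u / np.linalg.norm(u)
    p = np.asarray(point, dtype=float)

    ux, uy, uz = u
    c, s = np.cos(theta), np.sin(theta)
    # Rodrigues rotation about the axis direction (anchored at the origin)
    K = np.array([[0.0, -uz, uy],
                  [uz, 0.0, -ux],
                  [-uy, ux, 0.0]])
    R_H = c * np.eye(3) + (1.0 - c) * np.outer(u, u) + s * K
    # Translation accounting for the axis not passing through the origin
    t_H = p - R_H @ p

    H = np.eye(4)
    H[:3, :3] = R_H
    H[:3, 3] = t_H
    return H
```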

3.3. Calibration of Hand–Eye and Linear Guide

Robot hand–eye calibration is performed using the classic Tsai two-step method [27]. The underlying principle is that the transformation matrix X between the robot end-effector coordinate system ( C E ) and the camera coordinate system ( C V ) remains unchanged before and after the robot moves. This gives rise to the following equation.
$$A_i X = X B_i$$
where $A_i$ is the transformation matrix of the robot end-effector coordinate system from pose i to pose i + 1, and $B_i$ is the transformation matrix of the camera coordinate system from pose i to pose i + 1. Hand–eye calibration thus reduces to solving for X in the above equation.
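In practice, the Tsai–Lenz solution of AX = XB cited above is available off the shelf, for example in OpenCV. The following sketch is one way to obtain X with cv2.calibrateHandEye; the input lists (end-effector poses in the robot base frame, calibration board poses in the camera frame) are placeholders to be filled from the calibration run, and the function name is illustrative.

```python
import cv2
import numpy as np

def hand_eye_tsai(R_gripper2base, t_gripper2base, R_target2cam, t_target2cam):
    """Solve AX = XB for the camera-to-end-effector transform X using the Tsai-Lenz
    method as implemented in OpenCV (cv2.calibrateHandEye).

    Inputs are lists with one entry per robot pose: the end-effector pose in the robot
    base frame, and the calibration-board pose in the camera frame (e.g., from
    cv2.solvePnP on the checkerboard corners, with cv2.Rodrigues for the rotation).
    """
    R_cg, t_cg = cv2.calibrateHandEye(
        R_gripper2base, t_gripper2base,
        R_target2cam, t_target2cam,
        method=cv2.CALIB_HAND_EYE_TSAI)
    X = np.eye(4)
    X[:3, :3] = R_cg
    X[:3, 3] = t_cg.ravel()
    return X
```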
Linear guide calibration involves determining the relationship between the camera coordinate systems before and after changes in the poses of the robot and the linear guide, which serves as the foundation for subsequent multi-angle stitching and measurement. The robot base is fixed on the slide of the linear guide. When the slide receives a movement command, it moves horizontally along the linear guide and feeds back its positions before and after the movement. Since the linear guide coordinate system ( C D ) is fixed, converting the camera coordinate system ( C V ) at any pose to the linear module coordinate system ( C D ) yields the following equivalent relationship.
$$T_{C_{B1} \to C_D} \, T_{C_{E1} \to C_{B1}} \, X \, C_{V1} = T_{C_{B2} \to C_D} \, T_{C_{E2} \to C_{B2}} \, X \, C_{V2}$$
where $T_{C_{E1} \to C_{B1}}$ and $T_{C_{E2} \to C_{B2}}$ represent the transformations from the robot end-effector coordinate system to the robot base coordinate system before and after the robot pose changes, respectively. The robot base coordinate system ( C B ) translates along the Y-axis of the linear guide coordinate system ( C D ). Let $t_1$ and $t_2$ be the movement distances of the slide at any two positions. The corresponding translations of the linear module, $T_{C_{B1} \to C_D}$ and $T_{C_{B2} \to C_D}$, can then be expressed as
$$T_{C_{B1} \to C_D} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & t_1 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
$$T_{C_{B2} \to C_D} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & t_2 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
After rearrangement, the transformation relationship between the corresponding camera coordinate systems under any two poses of the robot and the linear module is
$$C_{V1} = \left( T_{C_{E1} \to C_{B1}} \, X \right)^{-1} T_{C_{B1} \to C_D}^{-1} \, T_{C_{B2} \to C_D} \, T_{C_{E2} \to C_{B2}} \, X \, C_{V2}$$

4. Three-Dimensional Measurement

4.1. Phase Shift Method

The phase shift method [28] typically employs a series of sinusoidal fringe gratings with equal periods and light intensity distributions $I_n(x, y)$. The camera captures the deformed fringe patterns modulated by the surface of the measured object, and the principal-value phase is calculated as follows:
$$\varphi(x, y) = \arctan \frac{\sum_{n=0}^{N-1} I_n(x, y) \sin\left( \frac{2 n \pi}{N} \right)}{\sum_{n=0}^{N-1} I_n(x, y) \cos\left( \frac{2 n \pi}{N} \right)}$$
Since phase calculation involves the arctangent function, the computed phase values are truncated within the range of [−π, π], resulting in discontinuities. For this reason, $\varphi(x, y)$ is also referred to as the wrapped phase, and phase unwrapping is required to restore the true phase value. For complex surfaces with large curvatures, methods such as Gray code combined with phase shift [10] and dual-frequency heterodyne [14] are commonly used. The absolute phase $\Phi(x, y)$ can be obtained from the wrapped phase and the phase order using Equation (21), where different methods correspond to different approaches for calculating the phase order $k(x, y)$.
$$\Phi(x, y) = \varphi(x, y) + 2 \pi k(x, y)$$
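A direct NumPy implementation of the wrapped-phase computation and the unwrapping relation is sketched below; np.arctan2 is used so the phase covers the full wrapped range, and the phase order k(x, y) is assumed to come from whichever unwrapping scheme is in use (Gray code, dual-frequency heterodyne, etc.). This is a sketch, not the authors' code.

```python
import numpy as np

def wrapped_phase(images):
    """Wrapped phase from N equally shifted sinusoidal fringe images.

    images: array of shape (N, H, W); the n-th image is shifted by 2*pi*n/N.
    Returns the phase per pixel; np.arctan2 resolves the quadrant ambiguity.
    """
    I = np.asarray(images, dtype=float)
    N = I.shape[0]
    n = np.arange(N).reshape(-1, 1, 1)
    num = np.sum(I * np.sin(2 * np.pi * n / N), axis=0)
    den = np.sum(I * np.cos(2 * np.pi * n / N), axis=0)
    return np.arctan2(num, den)

def absolute_phase(phi, k):
    """Absolute phase from the wrapped phase and an integer phase order k(x, y)."""
    return phi + 2 * np.pi * k
```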

4.2. Multi-View Vision

The principle of multi-view vision 3D measurement is identical to that of classical binocular vision measurement. It primarily relies on cameras positioned at two different locations to capture images of the measured object. The solved absolute phase value is used as a pixel feature to design a stereo matching algorithm, which identifies matching point pairs between the two images. The disparity between the two images is calculated using the coordinates of these matching points based on the triangulation principle. Finally, the 3D coordinates of the surface of the measured object are computed by incorporating the camera calibration parameters. By treating three-view vision as three sets of binocular vision models and combining the calibration results from the previous section, the unification of the three camera coordinate systems is achieved.
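As a sketch of the phase-matching and triangulation steps (assuming rectified images so that candidate matches lie on the same row, and that the absolute phase is monotonic along a row), the following Python snippet finds sub-pixel correspondences by phase and reconstructs 3D points with OpenCV; the names are illustrative and the matching strategy is one common choice, not necessarily the exact algorithm used here.

```python
import numpy as np
import cv2

def match_row_by_phase(phase_left_row, phase_right_row):
    """Sub-pixel right-image column with the same absolute phase as each left pixel
    (valid when the phase increases monotonically along the rectified row)."""
    cols_right = np.arange(phase_right_row.size, dtype=float)
    return np.interp(phase_left_row, phase_right_row, cols_right)

def triangulate(P_left, P_right, pts_left, pts_right):
    """Triangulate matched pixel pairs into 3D points in the left camera frame.

    P_left, P_right: 3x4 projection matrices from stereo calibration.
    pts_left, pts_right: (N, 2) arrays of matched pixel coordinates.
    """
    X_h = cv2.triangulatePoints(P_left, P_right,
                                np.asarray(pts_left, dtype=float).T,
                                np.asarray(pts_right, dtype=float).T)
    return (X_h[:3] / X_h[3]).T   # N x 3 Euclidean points
```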
To verify the feasibility of this module, 3D measurement was conducted on a polyhedral gypsum sample. The key dimensions selected on the sample are illustrated in Figure 6: a is the horizontal symmetric outer contour width, b is the vertical symmetric outer contour height, and c and d are the folded edge lengths of the left-side and right-side contours, respectively. The three sets of point cloud results obtained from the measurement are presented in Figure 7.
Figure 6. Schematic diagram of key dimensions of the polyhedral gypsum.
Figure 7. Point clouds of the polyhedral gypsum from two angles.

4.3. Point Cloud Stitching

The point cloud stitching module is based on the transformation relationships between the various coordinate systems obtained from system calibration.
$$T_{C_{V1} \to C_{V2}} = H \left( T_{C_{E1} \to C_{B1}} \, X \right)^{-1} T_{C_{B1} \to C_D}^{-1} \, T_{C_{B2} \to C_D} \, T_{C_{E2} \to C_{B2}} \, X$$
$$C_{Vi} = T_{C_{Vi} \to C_{V(i+1)}} \, C_{V(i+1)}$$
The transformation relationship between the multi-angle point clouds measured by the system after multiple changes to the movement scenario can be calculated as
$$C_{V1} = T_{C_{V1} \to C_{V2}} \, T_{C_{V2} \to C_{V3}} \cdots T_{C_{V(i-1)} \to C_{Vi}} \, C_{Vi}$$
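Rough stitching applies exactly this chain of transforms. The following NumPy sketch expresses every cloud in the first camera frame by accumulating the pairwise 4×4 matrices produced by the calibration chain; the function and argument names are illustrative, not the authors' implementation.

```python
import numpy as np

def stitch_rough(clouds, pair_transforms):
    """Rough stitching: express every cloud in the first camera frame by chaining
    the pairwise 4x4 transforms from the calibration chain.

    clouds: list of (N_k, 3) point arrays, clouds[k] expressed in camera frame k.
    pair_transforms: pair_transforms[k] maps points from frame k+1 into frame k
    (0-based indexing), as in the chained relation above.
    """
    stitched = [np.asarray(clouds[0], dtype=float)]
    T_accum = np.eye(4)
    for k, cloud in enumerate(clouds[1:]):
        T_accum = T_accum @ pair_transforms[k]          # frame k+1 -> frame 0
        pts_h = np.hstack([np.asarray(cloud, dtype=float),
                           np.ones((len(cloud), 1))])    # homogeneous coordinates
        stitched.append((pts_h @ T_accum.T)[:, :3])
    return np.vstack(stitched)
```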
By adjusting the poses of the rotating platform, robot, and linear guide multiple times and recording the pose data before and after each adjustment, the 3D measurement module acquires dense point clouds of the measured object from multiple angles. Point clouds generated under the same measurement view are first screened visually: clouds with large missing regions, dense outliers, or significant local distortion are eliminated, and clouds whose statistics (point cloud density, noise intensity, and outlier ratio) are better than the set thresholds are retained. All angle-specific point clouds are then converted to the initial camera coordinate system using the transformation relationship above, unifying the point cloud coordinate systems and completing the multi-angle point cloud stitching. Finally, post-processing operations such as noise filtering are performed on the stitched dense point clouds [29]. The complete point cloud of the polyhedral gypsum after rough and fine stitching is shown in Figure 8.
Figure 8a shows the orientations of several groups of point clouds to be stitched in their respective camera coordinate systems before stitching. Figure 8b presents the point cloud results after unifying the coordinate systems using the multi-degree-of-freedom parameter-based rough point cloud stitching method. It can be observed that rough stitching already yields a preliminary overall point cloud of the polyhedral gypsum surface, albeit with slight misalignments. Figure 8c demonstrates that after fine point cloud stitching based on the ICP algorithm, no misalignments remain and all point cloud groups are merged into a complete point cloud of the polyhedral gypsum. The accuracy of the rough and fine stitching is evaluated by comparing the key dimensions of the polyhedral gypsum obtained from physical measurement with the corresponding dimensions on the stitched point clouds; the comparison results are provided in Table 1. The comparison confirms that the multi-angle point cloud stitching achieves excellent results with high accuracy, with errors at the millimeter level. The main reason for the remaining error may be that the positions of dimensions c and d are geometrically more complex than a and b, and the point cloud may have local distortions due to highlights and occlusions.
Figure 8. Point cloud stitching results of the polyhedral gypsum.
Table 1. Comparison of selected dimensions of the polyhedral gypsum.
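The ICP-based fine stitching described above can be reproduced with an off-the-shelf implementation such as Open3D. The sketch below refines one roughly aligned pair with point-to-plane ICP, using the rough transform from the calibration chain as the initial guess; the correspondence distance and units (mm) are assumptions of this sketch, not reported system parameters.

```python
import numpy as np
import open3d as o3d

def refine_pair(source, target, init_T, max_corr_dist=1.0):
    """Fine registration of one roughly aligned point-cloud pair with point-to-plane ICP.

    source, target: open3d.geometry.PointCloud objects.
    init_T: rough 4x4 transform from the calibration chain, used as the initial guess.
    max_corr_dist: maximum correspondence distance (assumed millimetres here).
    """
    source.estimate_normals()
    target.estimate_normals()
    result = o3d.pipelines.registration.registration_icp(
        source, target, max_corr_dist, init_T,
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return result.transformation

# Usage sketch: refine each cloud against the already merged cloud, then apply the
# refined transform before merging it in.
```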

5. Highlight Regions Treatment Method

5.1. Treatment Method

Complex curved surface components are often machined from metal materials, and their surfaces exhibit specular reflection. During structured light projection measurement, highlight phenomena are therefore prone to occur in specular reflection regions. This manifests as oversaturation in the camera images, causing the loss of fringe information in the highlight regions of the captured images and ultimately leading to measurement failure in these regions. To address the impact of highlight regions on structured light measurement, this paper also investigates a processing method for highlight regions based on the multi-exposure method combined with the high dynamic range (HDR) image synthesis principle.
Highlights on the measured object are processed based on the multi-exposure method and image synthesis principle. When measuring objects with specular reflection using the phase-shifted fringe method, the grayscale range of the measurement camera is between 0 and 255. Consequently, in the captured images, pixels in the highlight fringes whose actual grayscale values exceed 255 can only be displayed as 255. This leads to a discrepancy between the theoretical and actual light intensity values, resulting in the loss of partial fringe information and ultimately causing measurement failure in these regions. In contrast, the grayscale values of normal fringes do not exceed 255, so their true values are retained, and the theoretical light intensity values are consistent with the actual ones.
To avoid the loss of fringe information caused by specular reflection, it is necessary to process the highlight regions to ensure they fall within the normal light intensity range. The simulated highlight fringe map and normal fringe map are shown in Figure 9a, while the distribution diagrams of the theoretical light intensity, actual light intensity of highlight fringes, and light intensity of normal fringes are presented in Figure 9b.
Figure 9. Highlight fringes and normal fringes.
The wrapped phases of the simulated highlight fringes and normal fringes are calculated using the four-step phase shift method, and a comparison of the two is shown in Figure 10b. As can be seen from the wrapped phase distribution, the wrapped phases of the normal fringes follow the expected regular distribution, whereas, due to the loss of partial fringe information, the wrapped phases of the highlight fringes are distorted and exhibit an abnormal wavy distribution. Subsequent phase unwrapping and point cloud calculation are difficult with such distorted phases, so the highlight phenomenon has a non-negligible impact on 3D reconstruction.
Figure 10. Comparison of light intensity and wrapped phase.
Adjusting the camera’s exposure time is the most direct and effective method to change the light intensity of captured fringe patterns. For objects with specular reflection, reducing the exposure time can effectively mitigate the oversaturation of highlight regions. However, there is no single exposure time that can completely eliminate all highlight regions. Based on this, the camera’s exposure time is adjusted from high to low; at each corresponding exposure time, four-step phase-shifted fringe projection and image acquisition are performed. Ultimately, multiple sets of fringe images under different exposure times are obtained. Among these images, high-exposure images suffer from information loss in highlight regions, while retaining relatively complete fringe information in other regions. Meanwhile, low-exposure images have less reflection in the corresponding (previously highlighted) regions, with relatively complete fringe information preserved there—but other regions suffer from fringe information loss due to insufficient brightness. Thus, the valid regions from different exposure images can be combined using an image synthesis method, resulting in a set of fully valid composite phase-shifted fringe patterns.
Let the expression of a set of four-step phase-shifted fringe patterns in the multi-exposure method be expressed as follows:
$$\begin{aligned} I_{k0}(x, y) &= A(x, y) + B(x, y) \cos\left[ \Phi(x, y) \right] \\ I_{k1}(x, y) &= A(x, y) + B(x, y) \cos\left[ \Phi(x, y) - \tfrac{\pi}{2} \right] \\ I_{k2}(x, y) &= A(x, y) + B(x, y) \cos\left[ \Phi(x, y) - \pi \right] \\ I_{k3}(x, y) &= A(x, y) + B(x, y) \cos\left[ \Phi(x, y) - \tfrac{3\pi}{2} \right] \end{aligned}$$
where k = 1, 2, …, N denotes the sequence number of the k-th group of images. Since the exposure time is adjusted from high to low, the fringe light intensity of each group has the following relationship:
$$I_{kn}(x, y) > I_{(k+1)n}(x, y), \qquad n = 0, 1, 2, 3$$
To synthesize the fringe patterns captured under multiple exposure times, processing starts from the first group (the highest exposure), and each pixel position is examined across all images of the group. If the grayscale values at a given position are below 255 in every image of the current group, those pixels are adopted at the corresponding position of the composite image, and processing moves on to the next pixel. If the pixel at that position is oversaturated (grayscale value equal to 255) in any image of the current group, the phase computed from it would contain errors; the pixels at that position are therefore discarded for this group, and the same position is checked in all images of the next (lower-exposure) group. If the grayscale values at this position are below 255 in all images of that group, its pixels are adopted; otherwise, the check continues with the following group. In effect, each pixel of the composite image takes the maximum value among the unsaturated pixels (grayscale values below 255) at the corresponding position across the different exposure groups. This is expressed as follows:
$$\begin{aligned} I_0(x, y) &= \max \left\{ I_{k0}(x, y) \;\middle|\; I_{k0}(x, y) < 255,\ I_{k1}(x, y) < 255,\ I_{k2}(x, y) < 255,\ I_{k3}(x, y) < 255 \right\} \\ I_1(x, y) &= \max \left\{ I_{k1}(x, y) \;\middle|\; I_{k0}(x, y) < 255,\ I_{k1}(x, y) < 255,\ I_{k2}(x, y) < 255,\ I_{k3}(x, y) < 255 \right\} \\ I_2(x, y) &= \max \left\{ I_{k2}(x, y) \;\middle|\; I_{k0}(x, y) < 255,\ I_{k1}(x, y) < 255,\ I_{k2}(x, y) < 255,\ I_{k3}(x, y) < 255 \right\} \\ I_3(x, y) &= \max \left\{ I_{k3}(x, y) \;\middle|\; I_{k0}(x, y) < 255,\ I_{k1}(x, y) < 255,\ I_{k2}(x, y) < 255,\ I_{k3}(x, y) < 255 \right\} \end{aligned}$$
where $I_0(x, y)$, $I_1(x, y)$, $I_2(x, y)$, and $I_3(x, y)$ denote the light intensities of the four final composite fringe images, respectively.
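The pixel-wise selection rule above maps naturally onto array operations. The NumPy sketch below fuses the exposure groups (ordered from highest to lowest exposure) as described, taking for each pixel the highest exposure whose four phase-shifted images are all unsaturated; array shapes, names, and the fallback for pixels saturated in every exposure are illustrative choices.

```python
import numpy as np

def synthesize_fringes(exposure_groups):
    """Fuse multi-exposure four-step fringe images into one unsaturated set.

    exposure_groups: array of shape (K, 4, H, W), ordered from highest to lowest
    exposure. A group is valid at a pixel only if none of its four phase-shifted
    images is saturated (value 255) there; because intensities decrease with
    exposure, taking the per-pixel maximum over valid groups picks the highest
    valid exposure, as in the selection rule above.
    """
    groups = np.asarray(exposure_groups, dtype=float)
    valid = np.all(groups < 255, axis=1)              # (K, H, W): group valid at pixel
    masked = np.where(valid[:, None], groups, -np.inf)
    composite = masked.max(axis=0)                    # (4, H, W)
    # Pixels saturated in every exposure remain invalid; mark them as saturated.
    composite[composite == -np.inf] = 255
    return composite
```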

5.2. Evaluation Method and Experiment

To evaluate the quality of the multi-exposure synthesized images and thereby assess the image synthesis algorithm, the images are evaluated based on their number of valid pixels. A pixel-by-pixel check is performed for each group of images: if the values of all images in the group at a given pixel position are greater than 0 and less than 255, both the wrapped phase of the high-frequency fringes and that of the low-frequency fringes can be calculated at this pixel. The absolute phase corresponding to the pixel is then valid, so the pixel is judged as a valid pixel and the valid pixel count is incremented by 1.
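This valid-pixel criterion can be computed in a few lines; the sketch below checks, per pixel, that every fringe image in a group stays strictly between 0 and 255 and reports the valid pixel ratio. It is a sketch under those assumptions, not the authors' implementation.

```python
import numpy as np

def valid_pixel_ratio(image_group):
    """Fraction of pixels whose value is strictly between 0 and 255 in every image of
    a group (so both high- and low-frequency wrapped phases can be demodulated there).

    image_group: array of shape (M, H, W) holding all fringe images of one group.
    """
    imgs = np.asarray(image_group)
    valid = np.all((imgs > 0) & (imgs < 255), axis=0)
    return valid.mean()
```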
In addition, experimental measurement of metal gears was conducted based on the multi-exposure method and image synthesis principle. The measurement method is implemented by combining the dual-frequency four-step phase shift method with binocular vision. Each measurement requires projecting eight fringe patterns, including four high-frequency phase-shifted fringes and four low-frequency phase-shifted fringes, where the wavelength of the low-frequency fringes is 33 pixels and that of the high-frequency fringes is 32 pixels. There is a specular reflection phenomenon in the central area of the metal gear surface, resulting in highlight regions in the captured images. Therefore, five groups of images were collected at different exposure times (100,000 μs, 80,000 μs, 60,000 μs, 40,000 μs, 20,000 μs), with eight phase-shifted fringe patterns in each group. After extracting the region of interest (ROI) from the images captured by the left camera, the results are shown in Figure 11, and their exposure intensity decreases from top to bottom.
Figure 11. Different exposure images.
Based on the above method, two groups of fringe images with different exposures were synthesized, and the results are shown in Figure 12.
Figure 12. Synthesized images.
As can be seen from Figure 12, the composite fringe images effectively eliminate the highlight regions and replace them with effective pixels from the low-exposure images. According to the image evaluation method, the effective pixel ratio of each group of images is calculated, and the results are shown in Figure 13. As the results show, the multi-exposure image synthesis method effectively improves the proportion of effective pixels.
Figure 13. Effective pixel ratio calculation results.

6. Experiment

The hardware platform of the measurement system mainly includes a six-axis industrial robot, a linear module, a rotary platform, a vision platform, an industrial computer, and an electrical cabinet. The six-axis industrial robot in the system (model: IRB1200-7/0.7) is mainly used to change the position and angle of the vision platform installed at the end, enhancing the flexibility of the vision platform. The system adopts a TOYO linear module with a stroke of approximately 1.3 m and a positional repeatability of ±0.002 mm, which is mainly used to realize the horizontal movement of the industrial robot. The rotary platform (model: TK13500EL) has a diameter of 500 mm, a rotation accuracy of 20 arcseconds, and a repeatability of 6 arcseconds. It is mainly used to change the viewing angle of the measured object, thereby realizing 360° measurement of the object. The vision platform is equipped with three Basler acA2500-14gm cameras (Basler, Ahrensburg, Germany), each with a resolution of 1944 × 2592. The three cameras are arranged in a triangular pattern: the two lower cameras are located on the same plane, with a distance of 350 mm between their optical centers and an inward deflection angle of 17.6° for each optical axis; the upper camera is 200 mm away from the bottom cameras, with its optical axis deflected downward by 20°. When the object distance is 550 mm, the common field of view of the three cameras is approximately 350 mm × 280 mm, with a focal length of about 9 mm and a camera field of view angle of approximately 35.2°, as shown in Figure 14a.
Figure 14. Physical diagram of impeller measurement.
Based on the hardware platform, the online measurement software was used to perform 3D reconstruction of an impeller with a complex surface, as shown in Figure 14. The impeller has a complex surface with numerous highlight regions and also contains shadowed regions. This makes threshold segmentation difficult for measurement using the gray code method. Therefore, the dual-frequency four-step phase shift method combined with the multi-exposure image synthesis method was adopted for the experiment.
After completing the system calibration in accordance with the overall calibration scheme of the measurement system, structured light projection measurement of the impeller was initiated. The projected patterns consist of two groups of phase shift patterns: one group of high-frequency sinusoidal fringes with a wavelength of 32 pixels and one group of low-frequency sinusoidal fringes with a wavelength of 33 pixels. The synthetic wavelength is larger than the horizontal resolution of the projector, satisfying the requirements of the dual-frequency heterodyne phase shift method. Due to the impeller’s large size and complex structure, measurement from multiple angles is necessary to achieve full coverage. Through practical testing, a total of 26 measurement angles were required, resulting in 36 point clouds from different angles. After selection and point cloud filtering, the point clouds from three of these angles are displayed in Figure 15.
Figure 15. Point clouds from different angles.
Stitching of all angle-specific point clouds was completed using the point cloud stitching module. A screenshot of the stitching process is shown in Figure 16, and the final stitching result is presented in Figure 17 (with different colors representing point clouds from different angles). After filtering the complete impeller point cloud and performing surface reconstruction, the final result is illustrated in Figure 18.
Figure 16. Multi-angle point cloud stitching process.
Figure 17. Complete impeller point cloud.
Figure 18. Impeller surface reconstruction result.
Multiple sets of measurements were conducted on the key dimensions of the impeller, and the average values were calculated. The comparison between the actual dimensions and the measured dimensions is shown in Table 2. The measurement accuracy of the key dimensions reaches the 0.1 mm level, and the relative error between the actual dimensions and the measured dimensions is within 1%, meeting the measurement requirements.
Table 2. Comparison of impeller parameters.

7. Conclusions

This research endeavors to address the intricate challenges associated with the measurement of large, complex curved parts, which are characterized by irregular surfaces, varying depths, and shadow-induced occlusions. To this end, an innovative online measurement system is proposed, which integrates multi-view vision techniques with the phase shift method. The system’s hardware architecture comprises a custom-developed multi-view vision platform, complemented by a sophisticated multi-angle 3D measurement platform. The latter is meticulously designed with a rotational platform, a robotic arm, and a linear guide module, enabling comprehensive and precise measurements from diverse perspectives.
A crucial initial step in the system’s development involves a meticulous calibration process, which serves as the cornerstone for subsequent multi-angle point cloud registration. For the calibration of the camera, rotation axis, and robotic hand–eye coordination, a standard checkerboard calibration pattern is employed. Building upon these fundamental calibrations, the system further undertakes the calibration of the entire movement scenario. This comprehensive calibration procedure facilitates the establishment of transformation matrices that accurately describe the relationship between the camera coordinate systems at any given pose of the rotational platform-robotic arm-linear guide assembly.
Leveraging the synergistic advantages of multi-view and the phase shift method, in conjunction with the custom-developed online measurement software, the system is capable of performing in situ measurements on impellers, a representative example of large, complex curved parts. Through the strategic adjustment of measurement angles and advanced point cloud registration algorithms, the system overcomes the limitations posed by shadowed and occluded regions, thereby ensuring the capture of comprehensive geometric information. Rigorous experimental evaluations demonstrate that the system achieves an impressive level of accuracy, with the relative error between the measured and actual dimensions of the impeller being confined within 1%.
Furthermore, based on the multi-exposure method and image synthesis principle, this study targets and processes the highlight regions caused by the specular reflection phenomenon of the measured object in structured light measurement. It selects valid pixels from images under different exposures for synthesis, ultimately eliminating the impact of highlight regions effectively.
The proposed measurement system exhibits remarkable adaptability across a wide range of measurement scenarios. By integrating a high-stroke linear guide, a 180° six-axis robotic arm, a 360° high-load, high-stability rotational platform, and a custom large-field-of-view multi-camera vision platform, the system maintains its stability and measurement accuracy even when confronted with the challenges presented by more complex large curved parts. This robust performance positions the system as a viable solution for overcoming the technological bottlenecks in the manufacturing of large, complex curved parts.

Author Contributions

R.S.: Visualization, Validation, Software, Formal analysis, Writing—original draft. X.L.: Investigation, Formal analysis. C.L.: Methodology, Formal analysis, Funding acquisition. Y.Z.: Conceptualization, Methodology, Writing—review and editing, Funding acquisition. All authors have read and agreed to the published version of the manuscript.

Funding

Funding for this research was provided by the National Natural Science Foundation of China (Grant No 6502008027) and Key R&D plans of Jiangsu Provincial Department of Science and Technology (Grant No BE2020020).

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

Thanks to the National Natural Science Foundation of China (Grant No 6502008027), Key R&D plans of Jiangsu Provincial Department of Science and Technology (Grant No BE2020020).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Sam, V.D.J.; Dirckx, J.J.J. Real-time structured light profilometry: A review. Opt. Lasers Eng. 2016, 87, 18–31. [Google Scholar] [CrossRef]
  2. Lv, S.; Kemao, Q. Modeling the measurement precision of Fringe Projection Profilometry. Light Sci. Appl. 2023, 12, 2480–2497. [Google Scholar] [CrossRef]
  3. Zhang, Z.; Wang, H.; Li, Y.; Li, Z.; Gui, W.; Wang, X.; Zhang, C.; Liang, X.; Li, X. Fringe-Based Structured-Light 3D Reconstruction: Principles, Projection Technologies, and Deep Learning Integration. Sensors 2025, 25, 6296. [Google Scholar] [CrossRef]
  4. Han, M.; Xing, Y.; Wang, X.; Li, X. Projection superimposition for the generation of high-resolution digital grating. Opt. Lett. 2024, 49, 4473–4476. [Google Scholar] [CrossRef]
  5. Luo, H.; Zhang, K.; Yang, N.; Tan, M.; Wang, J. A robust method for multi-view 3D data stitching based on pasted marked points. Measurement 2024, 228, 14. [Google Scholar] [CrossRef]
  6. He, C.; Zheng, H.; Ding, K.; Lin, Q. Multi-View 3D Point Cloud Stitching Algorithm Based on Robotic Arm Assistance. Laser Optoelectron. Prog. 2023, 60, 2015001. [Google Scholar]
  7. Gorthi, S.S.; Rastogi, P. Fringe projection techniques: Whither we are? Opt. Lasers Eng. 2010, 48, 133–140. [Google Scholar] [CrossRef]
  8. Srinivasan, V.; Liu, H.C.; Halioua, M. Automated phase-measuring profilometry of 3-D diffuse objects. Appl. Opt. 1984, 23, 3105. [Google Scholar] [CrossRef] [PubMed]
  9. Huntley, J.M.; Saldner, H.O. Temporal phase-unwrapping algorithm for automated interferogram analysis. Appl. Opt. 1993, 32, 3047–3052. [Google Scholar] [CrossRef] [PubMed]
  10. Bergmann, D. New approach for automatic surface reconstruction with coded light. Proc. SPIE-Int. Soc. Opt. Eng. 1995, 2572, 2–9. [Google Scholar]
  11. Zhang, Q.; Su, X.; Xiang, L.; Sun, X. 3-D shape measurement based on complementary Gray-code light. Opt. Lasers Eng. 2012, 50, 574–579. [Google Scholar] [CrossRef]
  12. Su, W.H. Color-encoded fringe projection for 3D shape measurements. Opt. Express 2007, 15, 13167–13181. [Google Scholar] [CrossRef] [PubMed]
  13. Zhang, Y.; Xiong, Z.; Wu, F. Unambiguous 3D measurement from speckle-embedded fringe. Appl. Opt. 2013, 52, 7797–7805. [Google Scholar] [CrossRef] [PubMed]
  14. Reich, C.; Ritter, R.; Thesing, J. White light heterodyne principle for 3D-measurement. Proc. SPIE-Int. Soc. Opt. Eng. 1997, 3100, 236–244. [Google Scholar]
  15. Long, J.; Xi, J.; Zhu, M.; Cheng, W.; Cheng, R.; Li, Z.; Shi, Y. Absolute phase map recovery of two fringe patterns with flexible selection of fringe wavelengths. Appl. Opt. 2014, 53, 1794–1801. [Google Scholar] [CrossRef]
  16. Wang, H.; Zhang, C.; Qian, X.; Wang, X.; Gui, W.; Gao, W.; Liang, X.; Li, X. HDRSL Net for Accurate High Dynamic Range Imaging-based Structured Light 3D Reconstruction. IEEE Trans. Image Process. 2025, 34, 5486–5499. [Google Scholar] [CrossRef]
  17. Wang, H.; He, X.; Zhang, C.; Liang, X.; Zhu, P.; Wang, X.; Gui, W.; Li, X.; Qian, X. Accelerating surface defect detection using normal data with an attention-guided feature distillation reconstruction network. Measurement 2025, 246, 116702. [Google Scholar] [CrossRef]
  18. Nguyen, H.; Novak, E.; Wang, Z. Accurate 3D reconstruction via fringe-to-phase network. Measurement 2022, 190, 110663. [Google Scholar] [CrossRef]
  19. Salahieh, B.; Chen, Z.; Rodriguez, J.J.; Liang, R. Multi-polarization fringe projection imaging for high dynamic range objects. Opt. Express 2014, 22, 10064–10071. [Google Scholar] [CrossRef]
  20. Song, Z.; Jiang, H.; Lin, H.; Tang, S. A high dynamic range structured light means for the 3D measurement of specular surface. Opt. Lasers Eng. 2017, 95, 8–16. [Google Scholar] [CrossRef]
  21. Li, D.; Kofman, J. Adaptive fringe-pattern projection for image saturation avoidance in 3D surface-shape measurement. Opt. Express 2014, 22, 9887–9901. [Google Scholar] [CrossRef] [PubMed]
  22. Tan, J.; Su, W.; He, Z.; Bai, Y.; Dong, B.; Xie, S. Generic saturation-induced phase error correction for structured light 3D shape measurement. Opt. Lett. 2022, 47, 3387–3390. [Google Scholar] [CrossRef] [PubMed]
  23. Zhang, Z. A Flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334. [Google Scholar] [CrossRef]
  24. Shuai, H.; Wu, L.; Liu, Q. Adaptive Multi-View and Temporal Fusing Transformer for 3D Human Pose Estimation. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 4122–4135. [Google Scholar] [CrossRef]
  25. Han, Q.; Xiao, Q.; Yue, Y. Study of space discrete point’s piecewise linear fitting on least square method. Ind. Instrum. Autom. 2012, 4, 107–109. [Google Scholar]
  26. Maritz, M.F. Rotations in Three Dimensions. Soc. Ind. Appl. Math. 2021, 63, 395–404. [Google Scholar] [CrossRef]
  27. Tsai, R.Y.; Lenz, R.K. A new technique for fully autonomous and efficient 3D robotics hand/eye calibration. IEEE Trans. Robot. Autom. 1989, 5, 345–358. [Google Scholar] [CrossRef]
  28. Song, W. Phase-height mapping and coordinate calibration simultaneously in phase-measuring profilometry. Opt. Eng. 2004, 43, 708–712. [Google Scholar] [CrossRef]
  29. Yan, D.; Lévy, B.; Liu, Y.; Sun, F.; Wang, W. Isotropic remeshing with fast and exact computation of restricted Voronoi diagram. Comput. Graph. Forum 2009, 28, 1445–1454. [Google Scholar] [CrossRef]
