Article

Precise Phase Measurement for Fringe Reflection Technique through Optimized Camera Response

College of Computer Science & Technology, Zhejiang University of Technology, Hangzhou 310023, China
* Author to whom correspondence should be addressed.
Sensors 2023, 23(23), 9299; https://doi.org/10.3390/s23239299
Submission received: 8 October 2023 / Revised: 9 November 2023 / Accepted: 17 November 2023 / Published: 21 November 2023
(This article belongs to the Section Sensing and Imaging)

Abstract

Fringe reflection is a robust, non-contact technique for optical measurement and specular surface characterization. The periodic alternation between dark and light cycles of the fringe pattern encodes geometric information and enables non-contact spatial measurement through phase extraction. Precisely expressing the positions of points on the fringe pattern is a fundamental requirement for accurate fringe reflection measurement. However, the nonlinear processes involved both in generating the fringe pattern on a screen and in capturing it as pixel values introduce unavoidable errors into the phase measurement and ultimately reduce the system's precision. To reduce these nonlinear errors, we focus on constructing, from the pixel values of photographs of the fringe patterns, a new quantity that responds linearly to the ideal fringe pattern. To this end, we hypothesize that the process of displaying the fringe pattern on a screen using a control function is similar to the process of capturing the pattern and converting the illumination information into pixel values, which can be described by the camera's response function. This similarity allows us to build a scaled energy quantity that has a more linear relation with the control function. We optimize the extracted camera response function with an objective designed to increase precision and reduce the quoted error. Experiments that determine the positions of points along the quartile lines of the screen verify the effectiveness of the proposed method in improving fringe reflection measurement precision.

1. Introduction

Fringe reflection, or deflectometry, is a robust technique that provides non-contact and accurate optical measurement and specular surface characterization [1,2]. The technique is derived from interferometry: a grating pattern appears distorted in the mirror image formed by an imperfect specular surface [3]. Consequently, by comparing the deformed mirror image with a standard reference pattern, the imperfection of the specular surface can be inferred [4,5]. Thanks to its high inspection speed and measurement precision, the technique is applicable in many fields, such as windshields and car bodies in the automobile industry [6], solar collectors [7,8], etc.
With the increasing demand for clean and renewable energy, solar thermal power has been receiving more and more attention due to its high sun-to-electricity conversion ratio. The solar collector, which is composed of a set of reflective mirrors of a particular shape, is one of the key components of such a system. Fabricating a high-quality solar collector (mirror facets) is a requirement not only for high efficiency but also for safety [9]. Measurement techniques capable of characterizing the quality of the solar collector are required on the production line and continue to receive research attention [10]. Fringe reflection is an ideal tool for measuring optical properties such as local surface slopes, focal length, twist errors, etc. [11]. The adaptation and improvement of this technique have been hot research topics in solar thermal power development. The pioneering application dates back to 2003, when Fontani et al. [12] introduced the reflection grating moiré method for measuring the curvature of mirrors used in a heliostat plant and achieved an accuracy of 0.1 mrad for surface gradients [13]. A milestone was achieved by Andraka and co-authors at Sandia National Laboratories, who developed the Sandia Optical Fringe Analysis Slope Tool (SOFAST) [14]. This fringe-reflection-based tool not only enables the fast characterization of mirror facet shapes and rotations in almost real time [15], but also led to the development of the Alignment Implementation for Manufacturing using Fringe Analysis Slope Technique (AIMFAST) [16].
No matter the field in which this technique is applied, the basic setup is the same, i.e., a standard fringe pattern is displayed on a screen and a camera takes photos of the distorted fringe pattern through the reflection of the mirror to be tested [17]. In order to reconstruct a full map of the specular surface, it is essential to calculate the normal vector of each reflective point. Typically, this procedure is accomplished by applying the ray-tracing algorithms to the source point on the screen, its corresponding image point in the photo and reflecting point on the mirror, as long as the positions of these three points are expressed in the same coordinate system [18]. Thanks to the algorithm that precisely retrieves the transforming matrix between the world and camera coordinate systems [19], the expression of positions in different coordinate systems can be unified. As a result, automatically determining the position of a source point on the screen in its local coordinate system is a basic requirement of precise measurement.
Phase shifting is the method most frequently used to determine the position of a point in screen coordinates. To proceed, a series of fringe patterns with a fixed phase difference in one dimension are generated and captured through the reflection of the test mirror. With the pixel values of a point recorded in the different photos, its position is calculated by applying the phase-shifting algorithm, which transforms the pixel values into the absolute phase of that particular point [10]. Further combining the periodicity of the fringe pattern, its position is then obtained. As the phase-shifting algorithm is derived under the assumption of an ideal pattern shape [20], its accuracy is sensitive to any deformation of the fringe pattern caused by the nonlinear response of the camera when capturing the projected pattern. For example, image saturation can severely affect measurement precision [21].
Because of the collective awareness of the effect of a camera's nonlinear response on measurement precision, great research effort has been dedicated to improving phase-shifting algorithms. For example, Lei et al. [22] proposed a new phase extraction algorithm based on Gram–Schmidt orthonormalization and least-squares ellipse fitting. Zhai et al. [23] developed a general phase extraction algorithm utilizing Lissajous figures and ellipse fitting. Wang et al. [24] proposed the triple N-step phase shift algorithm to compensate for errors in fringe projection. Han et al. introduced a two-step calibration procedure for the measuring phase of a deflectometry system [25]. Tounsi et al. proposed a digital four-step phase-shifting technique that enables precise phase measurement using only one fringe pattern, as the fringe patterns of different phase shifts are generated from the first-, second-, and third-order Riesz transform components of the original one [26]. In 2022, Chang et al. [27] proposed an adaptive phase shift and integrated it into a multi-surface interferometric algorithm.
Since the saturation of the captured fringes, caused by ambient light or by the characteristics of the object to be measured, is among the most severe nonlinear effects, researchers have tried to introduce different modulations to alleviate the camera's nonlinear response to the projected fringe pattern and, in particular, to avoid gray-level saturation. Chang et al. proposed the use of an infrared sinusoidal pattern and an infrared camera to avoid the interference of ambient light [28]. Waddington and Kofman [29] adaptively adjusted the maximum input gray level (MIGL) of the projected fringe pattern and composited images captured at different exposure intensities to avoid image saturation. Following this work, an adaptive fringe pattern was designed [30] that generates patterns of appropriate intensity in each region according to its local optical properties. However, this requires complex pre-calibration; similarly, the method proposed by Lin et al. [31] requires a set of uniform gray pattern sequences in order to determine the best projection gray level. Recently, Wang et al. introduced a linear interpolation to tackle pixel saturation [21].
The existing works mainly focus on the nonlinearity of the process that converts an illumination pattern into pixel values, under the assumption that the projected pattern keeps its ideal form. In fact, nonlinearity exists not only in converting the scene illumination into pixel values, but also in generating the fringe patterns on the screen. When an ideal fringe pattern generated by some control function (computer program) is displayed on a screen, deformation is also inevitable; the fringe pattern on the screen itself might not have an ideal shape in terms of gray value or brightness. This distorted fringe pattern would severely reduce the precision of the measurement system, even if the projected pattern could be captured accurately. Consequently, establishing a linear relation between the pattern illumination and the corresponding pixel values does not by itself guarantee a precise phase measurement. This paper aims to reduce the errors caused by the set of nonlinear processes involved in fringe-reflection-based measurement. The key idea is to establish a quantity that responds to an ideal fringe pattern in a linear way. In the following, we first present the principle of applying fringe reflection to the qualification of concentrating mirror facets, together with its limitations. We detail the procedure for establishing the suitable quantity in Section 3. The experimental results follow in Section 4 to validate the effectiveness of the proposed method. We end the paper with a short conclusion on the main findings and a discussion of the limitations.

2. Fringe Reflection Principle and Application Limitations

Fringe projection is an ideal technique for large-area measurements, providing high accuracy and repeatability with low system noise. When applied to measure large reflective mirror surfaces [18], a fringe projection system typically consists of a projector and a camera. As shown in Figure 1, the projector projects a fringe pattern (a sinusoidal pattern) generated by a computer program onto a large screen, which is captured by the camera through the reflection of the mirror to be measured. By pairing a target point T in the camera image with its source point S on the screen, a unit normal vector n characterizing the optical property of the mirror surface can be fully determined [18]. Extracting the spatial information of points on the screen automatically and precisely is thus a fundamental requirement for the subsequent measurement.

2.1. Four-Step Phase-Shifting Algorithm

The periodicity of the fringe pattern serves as a ruler for determining the coordinates of any point on the screen. However, due to the interference of background illumination, a direct measurement of the intensity of the projected pattern cannot provide precise spatial information. Phase shifting is a technique that eliminates this background interference. In past decades, scholars have developed a variety of phase-shifting algorithms, among which the four-step phase-shifting algorithm is the most frequently used. To proceed, a set of sinusoidal patterns of the form of Equation (1) are projected onto the screen:
$G_i(x, y) = G_0 \cos\left[ \phi(x, y) + \delta_i \right]$ (1)
where $\delta_i$ $(= 0, \pi/2, \pi, 3\pi/2)$ is the phase shift applied to the $i$th sinusoidal pattern, $x$ and $y$ are the screen coordinates, and $G_0$ is the amplitude of the fringe modulation. By recording the intensities $G_i(x, y)$ at point $(x, y)$ under illumination by the fringe patterns with phase shifts $\delta_i$, we can determine the phase $\phi$ at $(x, y)$ as
$\tan \phi(x, y) = \dfrac{G_4 - G_2}{G_1 - G_3}$ (2)
The advantage of using phase-shifting is clearly manifested in Equation (2), where the interference of background illumination is eliminated by taking the difference between the measured pattern intensities G i . Further combining the periodicity of the fringe pattern in space, the coordinates of a point can be determined as follows:
$x = \dfrac{\lambda}{2\pi}\, \phi = \dfrac{\lambda}{2\pi} \cdot \tan^{-1}\!\left( \dfrac{G_4 - G_2}{G_1 - G_3} \right)$ (3)
where λ represents the wavelength of the projected sinusoidal patterns.
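To make the recovery in Equations (2) and (3) concrete, the following minimal sketch computes the wrapped phase and the corresponding screen coordinate from four phase-shifted intensity maps. The array names G1–G4 and wavelength_px are hypothetical inputs, and arctan2 is used in place of the plain arctangent of Equation (2) to resolve the quadrant; this is a common implementation choice and not the authors' code.

```python
# Sketch of the four-step phase-shifting recovery (Equations (2) and (3)).
# G1..G4: captured intensity maps for phase shifts 0, pi/2, pi, 3*pi/2 (hypothetical inputs).
import numpy as np

def four_step_phase(G1, G2, G3, G4):
    """Wrapped phase from four phase-shifted fringe images, Eq. (2)."""
    # arctan2 resolves the quadrant; Eq. (2) itself only specifies the tangent ratio.
    return np.arctan2(G4 - G2, G1 - G3)

def screen_coordinate(G1, G2, G3, G4, wavelength_px):
    """Screen coordinate within one fringe period, Eq. (3)."""
    return wavelength_px * four_step_phase(G1, G2, G3, G4) / (2.0 * np.pi)
```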

2.2. Error Analysis in Practical Use

Since Equation (2) is derived for an ideal sinusoidal pattern, the precision of the extracted spatial information depends heavily on the quality of the sinusoidal fringes captured by the camera. When an ideal sinusoidal pattern is projected onto the screen (Figure 2a), we would expect its intensity to be captured linearly by the camera (panel (b)). However, as the camera responds to scene illumination in a nonlinear way [32], the captured pattern is distorted, i.e., the fringe pattern is no longer sinusoidal (panel (c)). Using this distorted fringe pattern in Equation (2) introduces unexpected errors into the phase measurement and, ultimately, into the extracted spatial information, especially when the illumination is very low or very high, due to dark noise and saturation, respectively (see Figure 2e).
Ideally, we project a fringe pattern with intensity range $I \in [a, b]$, generated by a computer program, onto the screen and expect the camera to map this pattern linearly into pixel values G, i.e., $G \propto I$ (Figure 2b). In reality, however, the pixel value G generated in response to illumination intensity $I_i$ takes the following form:
$G_i = c I_i + N(I_i)$ (4)
where $N(I_i)$ represents the higher-order response to illumination $I_i$, and c is a constant representing the linear part, which can be set to 1 in the analysis without loss of generality. When these $G_i$ are used to measure the phase, Equation (2) becomes
$\phi = \arctan \dfrac{\left[ I_4 + N(I_4) \right] - \left[ I_2 + N(I_2) \right]}{\left[ I_1 + N(I_1) \right] - \left[ I_3 + N(I_3) \right]}$ (5)
As can clearly be seen from Equation (5), the phase measurement is sensitive to the nonlinearity of the camera response. However, this nonlinearity stems from the fundamental physics of imaging devices [32]. Developing algorithms that reduce the effect of this nonlinearity is therefore essential for increasing the precision of fringe-pattern-based measurement [18].
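As a numerical illustration of Equation (5), the short simulation below applies the four-step formula to fringes distorted by a hypothetical gamma-style response; the gamma curve is only a stand-in for the unknown nonlinearity N(I), not the camera model used in this work.

```python
# Illustration of Eq. (5): a nonlinear camera response biases the recovered phase.
import numpy as np

def camera(I, gamma=2.2):
    """Hypothetical nonlinear response standing in for c*I + N(I)."""
    return np.clip(I, 0.0, 1.0) ** (1.0 / gamma)

phi_true = np.linspace(0.0, 2.0 * np.pi, 500, endpoint=False)
shifts = np.array([0.0, np.pi / 2, np.pi, 3 * np.pi / 2])

# Ideal fringe intensities in [0, 1] for the four phase shifts, then the distorted pixel values.
I = 0.5 + 0.5 * np.cos(phi_true[None, :] + shifts[:, None])
G = camera(I)

phi_meas = np.arctan2(G[3] - G[1], G[0] - G[2])     # Eq. (2) applied to distorted data
err = np.angle(np.exp(1j * (phi_meas - phi_true)))  # wrap the difference to (-pi, pi]
print("max phase error caused by the nonlinearity [rad]:", np.abs(err).max())
```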

3. Camera-Response-Function-Based Fringe Measurement

As mentioned above, the errors of conventional fringe pattern measurement are a direct consequence of the imperfect sinusoidal pattern captured by the camera. Unfortunately, there is no explicit knowledge of how, and to what extent, the pattern is deformed from its ideal shape, as multiple nonlinear processes are involved: the projector generates a pattern on the screen according to a control function $Y = \cos(\cdot)$, and the camera then represents the pattern with pixel values. These nonlinear responses jointly contribute to the deformation seen in Figure 2. As a result, extracting a quantity that maps the intended sinusoidal pattern linearly is essential for increasing measurement precision.

3.1. Fringe Pattern Generation and Acquisition

Recall the procedure of fringe measurement: a pattern is typically generated by a computer program according to data of the form $Y = \cos(\cdot)$. Using this information Y, the projector casts a pattern of irradiance E onto the screen. The image sensor of the camera responds to the irradiance E by producing an electronic signal, which is then digitized into a pixel value G. Although the pixel value G responds to Y monotonically, it does not do so linearly (see Figure 2d). During the whole process, the pixel value G is the only quantity we can obtain. Consequently, constructing from the pixel value G a new quantity y that is proportional to Y is a key step in fringe measurement.
However, the scene irradiance E is not the only factor that determines the resulting pixel value G; the exposure time $\delta t$ plays an equally important role. The longer the exposure time, the more energy is absorbed, so the camera response curve increases monotonically. This behavior is described by the response function $G = F(E \cdot \delta t)$, where $F(\cdot)$ describes how the camera converts the absorbed energy $E \cdot \delta t$ into a pixel value G. Since the exposure time $\delta t$ is independent of the fringe pattern, it is reasonable to exclude it from the measured quantity in order to better represent the fringe pattern. That is to say, we need to reconstruct an inverse function $F^{-1}$ that extracts the illumination information E from the pixel value G as
$E = F^{-1}(G)$ (6)
Equation (6) is known as the camera's response function. In 1997, Debevec and Malik [33] proposed a method for recovering this function, i.e., for obtaining the illumination information (irradiance E) through an explicit function $\ln E = g(G)$, with $g(\cdot)$ a polynomial function [18].
Unfortunately, the illumination information E alone does not give satisfactory results, as the key requirement of a precise fringe technique is a linear response to the controlled fringe pattern. Looking more closely at the process of pattern generation and acquisition, generating the fringe pattern from a control function Y can be regarded as the inverse of the process that converts irradiance E into a pixel value G. Consequently, a linear relationship between Y and $\ln E$ is expected and, with it, a more precise measurement.
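As a small illustration of how the inverse response of Equation (6) is used downstream, the sketch below evaluates ln E = g(G) for a polynomial g; the coefficients shown are placeholders, since the actual values come from the fit described in Section 3.2.

```python
# Evaluate the scaled energy ln E = g(G) for a polynomial response function g.
import numpy as np

a_k = np.array([-4.7, 3.5e-2, -6.0e-5, 8.0e-8])   # placeholder coefficients a_0..a_3

def scaled_energy(G, coeffs=a_k):
    """ln E = g(G) = sum_k a_k * G**k, evaluated on 8-bit pixel values."""
    G = np.asarray(G, dtype=float)
    return np.polyval(coeffs[::-1], G)             # np.polyval expects highest order first

# Usage idea: convert a captured fringe image (pixel values 0..255) into its ln E
# representation before substituting it into the phase-shifting formula of Eq. (2).
```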

3.2. Linear Response Establishment

Camera Response Function

The analysis in the previous subsection implies the importance of the illumination information E in establishing a linear relation between an ideal sinusoidal pattern and the only measurable quantity, G. The method developed by Debevec and Malik is frequently used to extract this relation [33], and its precision has been widely validated [34,35,36]. To proceed, a set of M photos of a scene is taken with different exposure times $\delta t_j$ over a short period, as shown in Figure 3. We then express the camera's response function F, which converts the total energy $E_i \delta t_j$ received at point i into the pixel value $G_{ij}$ in the jth photo, as follows:
$F^{-1}(G_{ij}) = E_i \, \delta t_j$ (7)
The positive definiteness of the pixel values and the irradiance allows us to take the logarithm of both sides of Equation (7), turning it into
$\ln F^{-1}(G_{ij}) = g(G_{ij}) = \ln E_i + \ln \delta t_j$ (8)
where $g(G_{ij})$ is the explicit form of $\ln F^{-1}(G_{ij})$, which can be approximated by a $K$th-order polynomial of G as $g(G) = \sum_{k=0}^{K} a_k G^k$. Determining the coefficients $a_k$ is then transformed into minimizing the quadratic objective function [33]
$\mathcal{O} = \sum_{i=1}^{N} \sum_{j=1}^{M} \left[ g(G_{ij}) - \ln E_i - \ln \delta t_j \right]^2 + \gamma \sum_{G = G_{min}+1}^{G_{max}-1} g''(G)^2$ (9)
where $G_{min}$ and $G_{max}$ are the smallest and largest pixel values among the total of N pixels in all images, and the regularization term with scalar $\gamma$ (= 1.0) weights the smoothness relative to the data fitting. Applying the singular value decomposition method to Equation (9), we extract $g(G)$, which is shown in Figure 3g. Apparently, there is no simple relation between the captured pixel values and the corresponding illumination energy E. With the extracted relationship $G = f(Y)$ between the control value Y and the captured pixel values G shown in Figure 2d, we obtain the relation between Y and $\ln E$ (solid curve in Figure 4a). However, when we approximate these two quantities with a linear relation $\ln E = g_o(Y)$ (dashed line in Figure 4a), large residual errors are observed.
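The following sketch sets up the least-squares problem of Equation (9) under the polynomial parameterization g(G) = Σ_k a_k G^k, with the log irradiances ln E_i as additional unknowns and a discrete second-derivative smoothness penalty. The variable names, the pixel-value normalization, and the use of numpy.linalg.lstsq instead of an explicit singular value decomposition are choices made for this illustration, not the authors' implementation.

```python
# Least-squares sketch of Eq. (9): unknowns are a_0..a_K and ln E_1..ln E_N.
import numpy as np

def fit_response(G, log_dt, K=5, gamma=1.0):
    """G: (N, M) pixel values of N points over M exposures; log_dt: (M,) values of ln(delta t_j)."""
    N, M = G.shape
    n_unknowns = (K + 1) + N                      # a_0..a_K and ln E_1..ln E_N
    powers = np.arange(K + 1)
    scale = 255.0                                 # normalize pixel values for conditioning
    rows, rhs = [], []

    # Data-fitting terms: g(G_ij) - ln E_i = ln(delta t_j)
    for i in range(N):
        for j in range(M):
            r = np.zeros(n_unknowns)
            r[:K + 1] = (G[i, j] / scale) ** powers
            r[K + 1 + i] = -1.0
            rows.append(r)
            rhs.append(log_dt[j])

    # Smoothness terms: sqrt(gamma) * g''(G) ~ 0 over the interior pixel-value range
    for v in range(int(G.min()) + 1, int(G.max())):
        r = np.zeros(n_unknowns)
        r[:K + 1] = np.sqrt(gamma) * (((v + 1) / scale) ** powers
                                      - 2.0 * (v / scale) ** powers
                                      + ((v - 1) / scale) ** powers)
        rows.append(r)
        rhs.append(0.0)

    sol, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return sol[:K + 1], sol[K + 1:]               # polynomial coefficients a_k, log irradiances ln E_i
```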
Since finding a quantity that responds linearly to the control value Y used to generate the ideal sinusoidal pattern is essential to the precision of any fringe-based measurement, we redesign the objective function used to retrieve the camera's response function $g(\cdot)$ by introducing a weight $w(G)$ into the quadratic term of Equation (9). As linearity is directly linked to precision, we want the weight $w(G)$ to help minimize the residual errors $\varepsilon$ between $g(G)$ and its linear extrapolation $g_o(G)$ (dash-dotted line in Figure 4a). Combining these requirements, we arrive at the modified quadratic objective function:
$\mathcal{O} = \sum_{i=1}^{N} \sum_{j=1}^{M} w(G_{ij}) \left[ g(G_{ij}) - \ln E_i - \ln \delta t_j + \varepsilon_i \right]^2 + \gamma \sum_{G = G_{min}+1}^{G_{max}-1} g''(G)^2$ (10)
$\varepsilon_i = g\big(f(Y_i)\big) - g_o\big(f(Y_i)\big)$ (11)
$w(G_{ij}) = \begin{cases} 0, & G_{ij} < G_{min} \ \text{or} \ G_{ij} > G_{max} \\ \dfrac{1}{1 + \varepsilon_i}, & G_{min} \le G_{ij} \le G_{max} \end{cases}$ (12)
where the weight $w(G_{ij})$ is introduced to guarantee satisfactory linearity near the middle of the control value range. As expected, the optimal solution of Equations (10)–(12), i.e., $g(G)$, shows a clear linear dependency on the control value Y, especially when Y is close to zero (see Figure 4b).
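A minimal sketch of the re-weighting in Equations (11) and (12) is given below: the residual between the first-pass response and its linear fit, evaluated at the pixel value f(Y_i) produced by each control value, defines a weight that suppresses poorly linear samples and discards values outside [G_min, G_max]. The absolute value inside the weight and the callable arguments f, g, and g_o are assumptions of this sketch.

```python
# Residuals (Eq. (11)) and weights (Eq. (12)) used in the modified objective of Eq. (10).
import numpy as np

def residuals(Y, f, g, g_o):
    """eps_i: extracted response minus its linear fit, evaluated at the pixel value f(Y_i)."""
    G = f(Y)                                  # pixel value produced by control value Y
    return g(G) - g_o(G)

def weights(G_ij, eps_i, G_min, G_max):
    """w(G_ij): zero outside [G_min, G_max], otherwise 1 / (1 + eps_i)."""
    w = 1.0 / (1.0 + np.abs(eps_i))           # abs keeps the weight positive (assumption)
    inside = (np.asarray(G_ij) >= G_min) & (np.asarray(G_ij) <= G_max)
    return np.where(inside, w, 0.0)
```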
Recall that, in applying the four-step phase-shifting principle to fringe measurement, a linear response to an ideal sinusoidal fringe pattern is a prerequisite. That is to say, to precisely measure the phase $\phi$ using Equation (2), the quantities $G_i$ must have a linear relation with the control value Y. When pixel values $G_i$ are used (see Figure 5a), for each control value $Y_i$ the corresponding value $G_i^o$ on the dotted line should be used. However, due to the nonlinear response of the camera's pixel values to the control value Y, it is $G_i$ that is used in reality. The difference between $G_i^o$ and $G_i$ causes inevitable measurement errors. This is termed the quoted error, which is an intrinsic property of a measurement system. In order to set a reasonable basis for comparing different systems, the quoted error is typically expressed as follows:
$\mu(G_i) = \dfrac{G_i^o - G_i}{L} \times 100\%$ (13)
where L is the measurement range of the system. When pixel values are used, L is defined as $G_{max} - G_{min}$. Throughout the whole measurement range, the quoted error differs from point to point. As can be seen from Figure 5b, using the pixel values G to measure the phase gives rise to high quoted errors, especially when an extremely dark pattern is used, where the original dark pattern can be severely affected by the background illumination.
Fortunately, the proposed method builds a quantity $\ln E$ that responds to an ideal sinusoidal pattern in a much more linear way. As shown in Figure 5c, the optimally fitted line lies almost on top of the real response curve. Consequently, a reduced quoted error is expected (see Figure 5c) when this quantity is used in Equation (2) to measure the phase. The advantages of introducing the new quantity $\ln E$ into fringe measurement are further confirmed by Figure 5d, where $\ln E$ reduces the maximum, median, and average quoted errors. As a result, an improvement in measurement precision is expected.
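The quoted-error comparison of Figure 5 can be reproduced with the short helper below, which implements Equation (13) for any response curve (pixel values G or scaled energy ln E) against its fitted line; the input names are hypothetical.

```python
# Quoted error of Eq. (13): deviation from the ideal linear response, normalized by range L.
import numpy as np

def quoted_error(measured, linear_fit):
    """Per-point quoted error in percent; L is the full range of the measured response."""
    measured = np.asarray(measured, dtype=float)
    linear_fit = np.asarray(linear_fit, dtype=float)
    L = measured.max() - measured.min()
    return np.abs(linear_fit - measured) / L * 100.0

# Statistics compared in Figure 5d, for example:
# mu = quoted_error(lnE_curve, lnE_line); mu.max(), np.median(mu), mu.mean()
```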

3.3. Determination of Optimal Responding Interval

A direct consequence of the weight function $w(G_{ij})$ used in Equations (10)–(12) is the relatively large quoted errors in measurements that use very bright or very dark patterns, as fitting errors beyond the extreme pixel values $G_{min}$ or $G_{max}$ do not affect the minimization of the objective function $\mathcal{O}$. Consequently, a larger deviation from the optimal linear response $g_o$ is expected when the camera responds to control values Y close to the extreme values −1 or 1. Its effect on the measurement system is quantified by the maximum quoted error $\mu_{max}$.
A simple way to alleviate this error is to use only a smaller portion of the response curve, i.e., to increase $G_{min}$ and decrease $G_{max}$. In this case, the fringe pattern, i.e., the periodic change from bright to dark, is quantified by discrete integer (pixel) values from $G_{min}$ to $G_{max}$. Recalling the principle of fringe pattern measurement, namely that the spatial periodicity with wavelength $\lambda$ is represented by changes in brightness captured by the imaging device, a reduced measurement range $[G_{min}, G_{max}]$ degrades the system's resolution ratio, which is defined as follows:
$k = \dfrac{\lambda}{G_{max} - G_{min}} \times 100\%$ (14)
Both the resolution ratio and the maximum quoted error are characteristics of a measurement system. In practice, a high resolution and a small quoted error are desired. However, these two characteristics contradict each other: expanding the measurement range to increase the resolution ratio is accompanied by an increase in the quoted error. Thus, a compromise has to be made for a practical system. To this end, we combine the requirements of high resolution and small quoted error into a unified expression $U = \mu_{max} + \eta \cdot k$, where $\eta$ is a free parameter that scales the resolution ratio k to an absolute unit.
The introduction of U allows us to express the pursuit of a better measurement system as an optimization problem, i.e.,
$\text{minimize } U(G_{min}, G_{max}) \quad \text{subject to } G_{min} \in [0, 255],\ G_{max} \in [0, 255],\ G_{min} < G_{max}$
However, there is no straightforward way to find the optimal measurement range that minimizes U. Fortunately, both $G_{min}$ and $G_{max}$ have a very limited range, which makes an exhaustive search for the optimal solution feasible. To proceed, for each pair $(G_{min}, G_{max})$, the corresponding control values $(Y_{min}, Y_{max})$ are first determined using the relation shown in Figure 2d. Instead of generating the periodic pattern with control values in the range [−1.0, 1.0], we rescale the control function Y as
$Y = \frac{1}{2} Y_{max} \left( 1 + \cos(x) \right) + \frac{1}{2} Y_{min} \left( 1 - \cos(x) \right)$
A fixed colormap with limits [−1.0, 1.0] is then used to visualize the control value Y, which guarantees that the camera responds to the darkest and brightest patterns with pixel values $G_{min}$ and $G_{max}$, respectively, as long as the camera's settings are unchanged. These preparatory steps yield a new solution of the optimization problem in Equations (10)–(12) and, finally, the integral system error U. Scanning over all possible pairs $(Y_{min}, Y_{max})$, we obtain the distribution of the integral error U in the parameter space, illustrated in Figure 6. Recalling the requirements of a precise fringe pattern measurement system, we determine the optimal measurement range to be pixel values from 27 to 247 by locating the minimum of U in the parameter space.
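The exhaustive search described above can be sketched as follows. Here, max_quoted_error(g_min, g_max) is a placeholder for re-running the weighted fit of Equations (10)–(12) on the restricted range and reading off the maximum quoted error, and eta and wavelength are assumed system constants; the rescaled pattern at the end corresponds to the expression for Y given above.

```python
# Brute-force search over (G_min, G_max) minimizing U = mu_max + eta * k.
import numpy as np

def optimal_range(max_quoted_error, wavelength, eta=1.0):
    """Return the (G_min, G_max) pair with the smallest integral error U."""
    best_pair, best_U = None, np.inf
    for g_min in range(0, 255):
        for g_max in range(g_min + 1, 256):
            k = wavelength / (g_max - g_min) * 100.0       # resolution ratio
            U = max_quoted_error(g_min, g_max) + eta * k   # integral system error
            if U < best_U:
                best_pair, best_U = (g_min, g_max), U
    return best_pair, best_U

def rescaled_pattern(x, Y_min, Y_max):
    """Control function rescaled to the interval [Y_min, Y_max]."""
    return 0.5 * Y_max * (1.0 + np.cos(x)) + 0.5 * Y_min * (1.0 - np.cos(x))
```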

4. Experimental Verification

The key advantage of the fringe reflection technique is that it provides a precise yet easy-to-implement, computer-vision-based relative position measurement. The periodic change in the brightness of the fringe pattern serves as a standard ruler that quantifies the position of a point in terms of the wavelength $\lambda$, or pattern size. Conventionally, the periodicity is captured by the pixel values G of the camera. The multiple nonlinear processes involved in generating the fringe pattern, as well as in acquiring the image, make the pixel-value representation insufficiently precise. The proposed method establishes a more linear representation of the fringe pattern and is therefore expected to give a more precise position measurement. This section presents experimental results obtained by implementing the proposed method on a personal computer (Huawei MateBook 13, manufactured in Shenzhen, China; Matlab R2020a) to validate its effectiveness.
To proceed, under the same illumination conditions, we generated a set of fringe patterns with a phase difference of $\pi/2$ using control values Y in the range [−0.893, −0.784], as determined previously (see Figure 7a). These patterns were displayed on an LCD screen (Dell D2720DS, Dell, manufactured in China) in full-screen mode. Recalling that the purpose of fringe measurement is position representation, we validated the effectiveness of the proposed method by calculating the positions of a set of preset points with known positions (1/4, 1/2, and 3/4 of the screen length, marked by dashed lines in Figure 7b). To make these positions easy to identify, we marked the top and bottom ends of the lines with colored pairs of ▽ − △ markers. Thanks to their clear color difference from the fringe patterns, these markers are easily identifiable in the photos acquired with the camera (Canon 60DES with ISO 200, manufactured in Taiwan, China; see Figure 7c).
We first demonstrate that the proposed quantity can effectively extract the phase at each pixel. To this end, we calculated $\ln E$ from the expression obtained from Equations (10)–(12) and the pixel values of the fringe pattern captured by the camera. Instead of using $G_i$, we substituted the corresponding $\ln E_i$ into Equation (2). Figure 8 shows the obtained phase at each pixel along the horizontal direction. Due to the definition of $\tan(\cdot)$, the obtained phase wraps with a period of $\pi$ (see panel (a)). After unwrapping, the obtained phase increases almost linearly with the horizontal position, i.e., the pixel number (see panel (b)).
We then calculated the phase $\phi$ of each point on the lines between the tips of the markers identified in the photos (see Figure 7c), using their pixel values G (open symbols) and the scaled energy values $\ln E$ (closed small symbols). As these lines mark the positions of 1/4, 1/2, and 3/4 of the screen length, the absolute phase values of the points on the lines are known to be $\pi$, $2\pi$, and $3\pi$. Extracting the phase $\phi$ from the fringe patterns represented by the new quantity $\ln E$ thus provides a direct test of the effectiveness of the proposed method. Figure 9 shows the phase $\phi$ of the points along these lines calculated from the pixel-based and the $\ln E$-based representations of the fringe patterns. Compared with the measurement results from pixel values (open symbols), the proposed method gives not only more accurate results but also smaller variations (black closed symbols). These advantages are also confirmed by the smaller deviations of the extracted phases from the ground-truth values $\pi$, $2\pi$, and $3\pi$. As there are multiple fringes (two wavelengths), the phases extracted by substituting the pixel values and the scaled energy quantity $\ln E$ into Equation (2) are first unwrapped, and their differences from the ground-truth values are calculated and plotted in the right panels of Figure 9. As shown in these figures, the proposed method (using the scaled energy representation) significantly reduces the phase errors, with an average error of 0.011 radian, almost one-third smaller than that of the pixel-value-based measurement.
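The error evaluation of Figure 9 can be sketched as follows: the phase along a marked line is unwrapped (the arctangent of Equation (2) wraps with period π) and compared with the known quartile value. The input phi_wrapped is assumed to be the per-point wrapped phase along one line, and the period argument of numpy.unwrap requires NumPy 1.21 or later.

```python
# Unwrap the per-point phase along a quartile line and compare it with the ground truth.
import numpy as np

def line_phase_error(phi_wrapped, ground_truth):
    """Deviation of the unwrapped phase from the known quartile value (pi, 2*pi or 3*pi)."""
    phi = np.unwrap(np.asarray(phi_wrapped, dtype=float), period=np.pi)
    # np.unwrap only removes period-pi jumps along the line; shift by the whole number of
    # periods that brings the mean closest to the known absolute value.
    phi += np.pi * np.round((ground_truth - phi.mean()) / np.pi)
    return phi - ground_truth

# Example: errs = line_phase_error(phi_quarter_line, np.pi); print(np.abs(errs).mean())
```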
We also tested the effectiveness of the optimal control value interval by carrying out the same experiment as above, except that the fringe patterns were generated with the control function $Y = \cos(\cdot)$ over the full range [−1.0, 1.0] of the control value Y. We extracted the median, average, and maximum of the measured phase errors calculated from the pixel values and from the scaled energy $\ln E$. A direct comparison is shown in Figure 10. Clearly, the optimized control interval of Y reduces the measurement errors in both cases, i.e., when using pixel values and when using the scaled energy $\ln E$. Most importantly, the combination of the optimal control interval of Y for generating the fringe pattern with the scaled energy $\ln E$ representation of the fringes, i.e., the proposed method, yields the smallest measurement error.

5. Conclusions and Discussion

The fringe pattern is a key technique upon which many non-contact and precise measurement methods have been built, and it is applicable in many different fields. The essential idea of fringe-pattern-based measurement is that positional information can be extracted from the periodicity of the fringe patterns and their changes in phase. To obtain this information, photos of the fringe patterns are taken with digital cameras, and the pixel values are typically used to represent the periodic changes in brightness. Although phase shifting eliminates the effect of background illumination and allows easy extraction of positional information, the nonlinearity of the camera's response to illumination degrades the measurement precision.
Targeting improved measurement precision, we first clarified the objective: the control value used to generate the fringe patterns is the quantity to which the camera should respond linearly. After analyzing the different physical processes involved in fringe pattern measurement, we proposed a scaled illumination energy $\ln E$ to represent the fringe pattern. By introducing into the extraction of the camera's response function a weight function w, constructed from the difference between $\ln E$ and its ideal linear fit to the control value Y, we managed to construct $\ln E$ from the measured pixel values. After determining the optimal measurement range, improvements in measurement precision were observed in comparison with the conventional pixel-value-based measurements.
Although we have not yet tested the proposed method in real applications, for example, in determining the integral quality of large reflective mirrors used in solar thermal power applications [14,18,37], its advantages are still evident. To quantify the optical quality of mirrors effectively, the normal vectors of points on the reflective surface need to be determined by pairing up the source points on the pattern-displaying screen with their reflections captured by the camera. Precisely representing the position of each source point in the local coordinate system is the basis of the subsequent procedures. The proposed method focuses on building a new quantity that better characterizes the fundamental properties of the fringe pattern by extracting the camera's response curve to illumination. Although it was implemented here on a sinusoidal pattern, it could be applied to other types of fringe patterns. The proposed method has shown its ability to provide more precise position measurements. Furthermore, the fringe technique uses only brightness to determine position. Since brightness is a continuous physical quantity, there is no limit on the number of points on the mirror surface that can be used. Ideally, the fringe pattern technique would allow the determination of the normal vectors of all points on a reflective mirror surface. This property is a clear advantage over the color-coded technique [18], where the number of surface points that can be included in the quality examination is limited by the number of colors and the coding–decoding strategy.
The limitations of the proposed method mainly stem from the preparatory procedures, which include extracting the camera's response function and determining the optimal measurement range. As the camera's response function is robust to operating parameters such as exposure time and ISO value [38], the measurement system only needs to be calibrated once, as long as the background illumination condition remains unchanged. As a result, the application of the proposed method requires a relatively stable illumination condition, i.e., the measurement might be vulnerable to external light disturbances. Fortunately, this condition can easily be guaranteed, even when the method is applied to assess the quality of the large reflective mirrors used in solar thermal power, as production lines are typically indoors.

Author Contributions

Conceptualization, J.X.; Methodology, J.X.; Software, F.H.; Formal analysis, W.Z.; Investigation, F.H. and W.Z.; Data curation, F.H.; Writing—original draft, F.H. and W.H.; Writing—review & editing, W.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Science Foundation of Zhejiang Province under grant LY22F020015, the National Natural Science Foundation of China under grant 62373327, and the National Key R&D Program of China (2022YFE0198900). The APC was funded by the Natural Science Foundation of Zhejiang Province under grant LY22F020015.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Guan, J.; Li, J.; Yang, X.; Chen, X.; Xi, J. Defect detection method for specular surfaces based on deflectometry and deep learning. Opt. Eng. 2022, 61, 061407. [Google Scholar] [CrossRef]
  2. Zhu, Z.; Li, M.; Zhou, F.; You, D. Stable 3D measurement method for high dynamic range surfaces based on fringe projection profilometry. Opt. Lasers Eng. 2023, 166, 107542. [Google Scholar] [CrossRef]
  3. Xu, Y.; Gao, F.; Jiang, X. A brief review of the technological advancements of phase measuring deflectometry. PhotoniX 2020, 1, 14. [Google Scholar] [CrossRef]
  4. Malacara, D.; Servin, M.; Malacara, Z. Interferogram Analysis for Optical Testing; CRC Press: Boca Raton, FL, USA, 2018. [Google Scholar]
  5. Li, Z.; Yin, D.; Yang, Y.; Zhang, Q.; Gong, H. Specular Surface Shape Measurement with Orthogonal Dual-Frequency Fourier Transform Deflectometry. Sensors 2023, 23, 674. [Google Scholar] [CrossRef] [PubMed]
  6. Knauer, M.C.; Kaminski, J.; Hausler, G. Phase measuring deflectometry: A new approach to measure specular free-form surfaces. In Proceedings of the Optical Metrology in Production Engineering, Strasbourg, France, 27–30 April 2004; Volume 5457, pp. 366–376. [Google Scholar]
  7. Arancibia-Bulnes, C.A.; Peña-Cruz, M.I.; Mutuberría, A.; Díaz-Uribe, R.; Sánchez-González, M. A survey of methods for the evaluation of reflective solar concentrator optics. Renew. Sustain. Energy Rev. 2017, 69, 673–684. [Google Scholar] [CrossRef]
  8. Gao, Y.; Tian, Z.; Wei, H.; Li, Y. 3D global optimization of calibration parameters of deflectometry system by using a spherical mirror. Measurement 2023, 219, 113287. [Google Scholar] [CrossRef]
  9. Natraj; Rao, B.; Reddy, K. Optical and structural optimization of a large aperture solar parabolic trough collector. Sustain. Energy Technol. Assess. 2022, 53, 102418. [Google Scholar] [CrossRef]
  10. El Ydrissi, M.; Ghennioui, H.; Farid, A.; Bennouna, E.G. A review of optical errors and available applications of deflectometry technique in solar thermal power applications. Renew. Sustain. Energy Rev. 2019, 116, 109438. [Google Scholar] [CrossRef]
  11. Iriarte-Cornejo, C.; Arancibia-Bulnes, C.; Hinojosa, J.; Peña-Cruz, M.I. Effect of spatial resolution of heliostat surface characterization on its concentrated heat flux distribution. Sol. Energy 2018, 174, 312–320. [Google Scholar] [CrossRef]
  12. Fontani, D.; Francini, F.; Jafrancesco, D.; Mercatelli, L.; Sansoni, P. Mirror shape detection by reflection grating moiré method with optical design validation. Proc. SPIE Int. Soc. Opt. Eng. 2005, 5856, 377–384. [Google Scholar]
  13. Heimsath, A.; Platzer, W.; Bothe, T.; Wansong, L. Characterization of optical components for linear Fresnel collectors by fringe reflection method. In Proceedings of the Solar Paces Conference, Las Vegas, NV, USA, 4–7 March 2008; pp. 1–8. [Google Scholar]
  14. Andraka, C.E.; Sadlon, S.; Myer, B.; Trapeznikov, K.; Liebner, C. Rapid reflective facet characterization using fringe reflection techniques. In Proceedings of the ASME 2009 3rd International Conference on Energy Sustainability Collocated with the Heat Transfer and InterPACK09 Conferences, San Francisco, CA, USA, 19–23 July 2009; Volume 48906, pp. 643–653. [Google Scholar]
  15. Finch, N.S.; Andraka, C.E. Uncertainty analysis and characterization of the SOFAST mirror facet characterization system. J. Sol. Energy Eng. Trans. ASME 2014, 136, 011003. [Google Scholar] [CrossRef]
  16. Andraka, C.E.; Yellowhair, J.; Trapeznikov, K.; Carlson, J.; Myer, B.; Stone, B.; Hunt, K. AIMFAST: An alignment tool based on fringe reflection methods applied to dish concentrators. J. Sol. Energy Eng. 2011, 133, 031018. [Google Scholar] [CrossRef]
  17. Zhang, Z.; Chang, C.; Liu, X.; Li, Z.; Shi, Y.; Gao, N.; Meng, Z. Phase measuring deflectometry for obtaining 3D shape of specular surface: A review of the state-of-the-art. Opt. Eng. 2021, 60, 020903. [Google Scholar] [CrossRef]
  18. Li, S.; Xu, J.; Lou, J.; Yang, X.; Liu, H.; Chen, S. Mirror Surface Assessment in Solar Power Applications by 2-D Coded Light. IEEE Trans. Instrum. Meas. 2020, 69, 3555–3565. [Google Scholar] [CrossRef]
  19. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334. [Google Scholar] [CrossRef]
  20. Larkin, K.; Oreb, B. Design and assessment of symmetrical phase-shifting algorithms. JOSA A 1992, 9, 1740–1748. [Google Scholar] [CrossRef]
  21. Wang, J.; Yang, Y.; Xu, P.; Liu, J. Adaptive fringe projection algorithm for image saturation suppression. Precis. Eng. 2023, 82, 140–155. [Google Scholar] [CrossRef]
  22. Lei, H.; Yao, Y.; Liu, H.; Tian, Y.; Yang, Y.; Gu, Y. Accurate phase extraction algorithm based on Gram–Schmidt orthonormalization and least square ellipse fitting method. J. Mod. Opt. 2018, 65, 1199–1209. [Google Scholar] [CrossRef]
  23. Zhai, Z.; Li, Z.; Zhang, Y.; Dong, Z.; Wang, X.; Lv, Q. An accurate phase shift extraction algorithm for phase shifting interferometry. Opt. Commun. 2018, 429, 144–151. [Google Scholar] [CrossRef]
  24. Wang, J.; Yang, Y. Triple N-step phase shift algorithm for phase error compensation in fringe projection profilometry. IEEE Trans. Instrum. Meas. 2021, 70, 7006509. [Google Scholar] [CrossRef]
  25. Han, H.; Wu, S.; Song, Z. An accurate calibration means for the phase measuring deflectometry system. Sensors 2019, 19, 5377. [Google Scholar] [CrossRef] [PubMed]
  26. Tounsi, Y.; Kumar, M.; Siari, A.; Mendoza-Santoyo, F.; Nassim, A.; Matoba, O. Digital four-step phase-shifting technique from a single fringe pattern using Riesz transform. Opt. Lett. 2019, 44, 3434–3437. [Google Scholar] [CrossRef] [PubMed]
  27. Chang, L.; Valyukh, S.; He, T.; Yu, Y. Multisurface Interferometric Algorithm and Error Analysis With Adaptive Phase Shift Matching. IEEE Trans. Instrum. Meas. 2022, 71, 1–13. [Google Scholar] [CrossRef]
  28. Chang, C.; Zhang, Z.; Gao, N.; Meng, Z. Measurement of the three-dimensional shape of discontinuous specular objects using infrared phase-measuring deflectometry. Sensors 2019, 19, 4621. [Google Scholar] [CrossRef] [PubMed]
  29. Waddington, C.; Kofman, J. Analysis of measurement sensitivity to illuminance and fringe-pattern gray levels for fringe-pattern projection adaptive to ambient lighting. Opt. Lasers Eng. 2010, 48, 251–256. [Google Scholar] [CrossRef]
  30. Li, D.; Kofman, J. Adaptive fringe-pattern projection for image saturation avoidance in 3D surface-shape measurement. Opt. Express 2014, 22, 9887. [Google Scholar] [CrossRef]
  31. Lin, H.; Gao, J.; Mei, Q.; Zhang, G.; He, Y.; Chen, X. Three-dimensional shape measurement technique for shiny surfaces by adaptive pixel-wise projection intensity adjustment. Opt. Lasers Eng. 2017, 91, 206–215. [Google Scholar] [CrossRef]
  32. Grossberg, M.D.; Nayar, S.K. Determining the camera response from images: What is knowable? IEEE Trans. Pattern Anal. Mach. Intell. 2003, 25, 1455–1467. [Google Scholar] [CrossRef]
  33. Debevec, P.E.; Malik, J. Recovering high dynamic range radiance maps from photographs. In Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques, Los Angeles, CA, USA, 3–8 August 1997; pp. 369–378. [Google Scholar]
  34. Nayar; Branzoi. Adaptive dynamic range imaging: Optical control of pixel exposures over space and time. In Proceedings of the Ninth IEEE International Conference on Computer Vision, Nice, France, 14–17 October 2003; Volume 2, pp. 1168–1175. [Google Scholar]
  35. Hsu, Y.F.; Chang, S.F. Camera Response Functions for Image Forensics: An Automatic Algorithm for Splicing Detection. IEEE Trans. Inf. Forensics Secur. 2010, 5, 816–825. [Google Scholar] [CrossRef]
  36. Shen, F.; Zhao, Y.; Jiang, X.; Suwa, M. Recovering high dynamic range by Multi-Exposure Retinex. J. Vis. Commun. Image Represent. 2009, 20, 521–531. [Google Scholar] [CrossRef]
  37. Kammel, S.; Leon, F.P. Deflectometric measurement of specular surfaces. IEEE Trans. Instrum. Meas. 2008, 57, 763–769. [Google Scholar] [CrossRef]
  38. Xu, J.; Li, S.; Ruan, Z.; Cheng, X.; Hou, X.; Chen, S. Intensive flux analysis in concentrative solar power applications using commercial camera. IEEE Trans. Instrum. Meas. 2020, 69, 501–508. [Google Scholar] [CrossRef]
Figure 1. Reflective mirror surface quality examination system, i.e., a typical application of the fringe pattern technique. A computer program generates the sinusoidal fringe pattern on a monitor screen. Any point S on the screen will be captured by the camera through the reflection of the mirror to be measured. Pairing up the point S and its image point T determines the normal vector n and, finally, the mirror quality [18].
Figure 2. Illustration of the effect of the camera's nonlinear response on phase measurement. (a) Ideal fringe pattern generated using a control function $Y = \cos(\cdot)$. (b) The expected response to the control value Y. (c) A deformed sinusoidal fringe pattern captured using a digital camera. (d) The camera's real response to the control value Y, which is clearly not linear. (e) The effect of the deformed fringe pattern on phase measurement.
Figure 3. (a–f) Photos of one scene taken with different exposure times: (a) 1/2, (b) 1/10, (c) 1/30, (d) 1/60, (e) 1/80, and (f) 1/100, respectively. (g) The camera's response function extracted from Equation (9) using these images.
Figure 4. (a) Determination of the weight function w by calculating the difference between the previously obtained $\ln E = g(G)$ and its best linear fit $g_o(G)$ to the control value Y. (b) The response of $\ln E$ extracted from Equations (10)–(12) to the control value Y. Green dots are $\ln E$ values obtained using the response function constructed from Equations (10)–(12).
Figure 5. Pixel value G (a) and scaled energy $\ln E$ (b) representations of the camera's response to the control value Y. (c) The quoted errors of these two representations at different measurement points. (d) Comparison of the maximum, mean, and median quoted errors of the two camera response representations.
Figure 6. The distribution of function U ( G m i n , G m a x ) . The abscissa is the minimum gray value of the projection, and the ordinate is the maximum gray value of the projection.
Figure 7. Illustrative description of the validation experiment. (a) The control value Y with the optimized interval $[Y_{min}, Y_{max}]$. (b) The generated fringe pattern displayed on an LCD monitor. The dashed lines mark the positions of 1/4, 1/2, and 3/4 of the screen length. (c) Fringe pattern captured using the camera. Pairs of ▽ − △ markers indicate the positions of the lines in panel (b).
Figure 8. Phase measurement using the proposed method. (a) Wrapped phase at pixels along the horizontal direction, extracted using the proposed quantity $\ln E$ from the captured fringe patterns shown in Figure 7. (b) The unwrapped phase shows a very good linear increase.
Figure 9. Panels (a–c): Phase measurements along the dashed lines in Figure 7 using the pixel value G (open symbols) and the scaled energy $\ln E$ (closed symbols). Panels (d–f): Measured phase errors of points along the dashed lines.
Figure 10. A comparison showing the effectiveness of the optimal control interval Y and the scaled energy representation in fringe pattern measurement.