Modified Gray-Level Coding Method for Absolute Phase Retrieval

Fringe projection systems have been widely applied in three-dimensional (3D) shape measurement. One of the important issues is how to retrieve the absolute phase. This paper presents a modified gray-level coding method for absolute phase retrieval. Specifically, two groups of fringe patterns are projected onto the measured objects: three phase-shift patterns for the wrapped phase, and three n-ary gray-level (nGL) patterns for the fringe order. Compared with the binary gray-level (bGL) method, which uses just two intensity values, the nGL method can generate many more unique codewords with multiple intensity values. With the assistance of the average intensity and modulation of the phase-shift patterns, the intensities of the nGL patterns are normalized to deal with ambient light and surface contrast. To reduce the codeword detection errors caused by camera/projector defocus, the nGL patterns are designed as n-ary gray-code (nGC) patterns, ensuring that at most one code changes at each point. Experiments verify the robustness and effectiveness of the proposed method for measuring isolated objects with complex surfaces.


Introduction
Optical three-dimensional (3D) sensing systems are increasingly used in many fields, such as medical science, industrial inspection, and virtual reality. Among various technologies, digital fringe projection (DFP) has been a major research subject in terms of speed, accuracy, resolution, and ease of use [1][2][3][4][5]. In a typical DFP system, pre-designed patterns are projected onto the measured objects by a projector from one viewpoint and modulated by the objects' profiles, while the corresponding deformed images are captured by a camera from another viewpoint. The phase modulation can then be calculated through suitable digital fringe analysis. Finally, the DFP system must be calibrated to recover the relationship between the phase modulation and the real 3D world coordinates [6][7][8]. Therefore, the phase extraction accuracy directly affects the 3D shape measurement result [9].

Three-Step Phase-Shift Method
Phase-shift methods have been widely used in optical metrology because of their speed and accuracy [41]. Among the various phase-shift methods, the three-step method requires the fewest patterns for phase recovery. Three sinusoidal phase-shift fringe patterns with equal phase shifts captured by the camera can be mathematically described as:

I_i^c(x, y) = A^c(x, y) + B^c(x, y) cos[φ(x, y) + 2π(i − 2)/3], i = 1, 2, 3 (1)-(3)

where A^c(x, y) denotes the average intensity, B^c(x, y) denotes the intensity modulation, and φ(x, y) denotes the modulating phase to be solved. Combining the above equations, the three variables can be calculated as [42][43][44]:

A^c(x, y) = (I_1^c + I_2^c + I_3^c)/3 (4)

B^c(x, y) = sqrt[(I_1^c − I_3^c)²/3 + (2I_2^c − I_1^c − I_3^c)²/9] (5)

φ(x, y) = arctan[√3(I_1^c − I_3^c)/(2I_2^c − I_1^c − I_3^c)] (6)

Generally, background and shadow regions can be removed with the assistance of the modulation map and a segmentation threshold. Due to the arctangent operation, the solved phase map is limited to the range [−π, π] with 2π discontinuities. Thus, phase unwrapping must be carried out to remove these discontinuities. The key to phase unwrapping is to determine the fringe orders. If the fringe orders k(x, y) are determined, the wrapped phase can be unwrapped as:

Φ(x, y) = φ(x, y) + k(x, y) × 2π (7)

where Φ(x, y) denotes the unwrapped phase or absolute phase.
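As a minimal numerical sketch of this step (assuming the common −2π/3, 0, +2π/3 shift convention implied by Equations (4)-(6)), the three quantities can be recovered per pixel with NumPy; using arctan2 rather than arctan keeps the full [−π, π] range:

```python
import numpy as np

def three_step_phase(I1, I2, I3):
    """Recover average intensity A, modulation B, and wrapped phase phi
    from three fringe images with -2*pi/3, 0, +2*pi/3 phase shifts."""
    A = (I1 + I2 + I3) / 3.0                                       # Equation (4)
    B = np.sqrt((I1 - I3)**2 / 3.0 + (2*I2 - I1 - I3)**2 / 9.0)    # Equation (5)
    # arctan2 resolves the quadrant, giving phi in [-pi, pi] (Equation (6))
    phi = np.arctan2(np.sqrt(3.0) * (I1 - I3), 2*I2 - I1 - I3)
    return A, B, phi
```

In practice, pixels where B falls below a segmentation threshold would be masked out as background or shadow, as described above.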

Intensity Normalization for Coded Patterns
In this paper, we employ extra coded patterns to calculate the fringe orders. The three coded patterns used to carry the codewords can be mathematically described as:

I_i^c(x, y) = A^c(x, y) + B^c(x, y) α_i(x, y), i = 1, 2, 3 (8)-(10)

Similarly, A^c(x, y) denotes the average intensity and B^c(x, y) denotes the intensity modulation, while α_1(x, y), α_2(x, y) and α_3(x, y) denote the coded coefficients ranging from −1 to 1. Note that A^c(x, y) and B^c(x, y) are assigned the same values as those of the phase-shift patterns, so they can be computed with Equations (4) and (5). The equations used to normalize the coded patterns can then be written as:

α_i(x, y) = [I_i^c(x, y) − A^c(x, y)]/B^c(x, y), i = 1, 2, 3 (11)-(13)

Through these equations, which take the average intensity and modulation into consideration, the influences of ambient light and surface contrast can be eliminated, and the codes can be exactly identified.
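A per-pixel normalization sketch might look as follows; the clipping and the eps guard for low-modulation pixels are additions of this sketch, not part of the paper's equations:

```python
import numpy as np

def normalize_coded(I, A, B, eps=1e-9):
    """Equations (11)-(13): recover the coded coefficient alpha in [-1, 1]
    from a captured coded pattern I, using the A and B estimated from the
    phase-shift patterns. eps guards against near-zero modulation."""
    return np.clip((I - A) / np.maximum(B, eps), -1.0, 1.0)
```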

The n-Ary Gray-Code Method
Among the various temporal phase unwrapping algorithms, the bGL method may be the simplest way to resolve phase ambiguity [45]. In this method, codewords are encoded within binary patterns used to mark the fringe orders of the phase-shift patterns. Figure 1 shows three-frame binary patterns as an example. There are two intensity values: the black stripes are assigned the logical value 0, while the white stripes are assigned the logical value 1. In general, m patterns can generate 2^m codewords, each containing m bits. The pattern images are sequentially captured by the camera, and the codeword of each pixel can then be determined through suitable threshold algorithms. This method proves to be reliable and less sensitive to surface contrast, since only binary values are used at all pixels. However, a large number of patterns must be projected to achieve high spatial resolution, which means that image acquisition takes too long and the measured objects have to remain static. Thus, this method is not suitable for real-time measurement [46].
To reduce the number of projected patterns, the nGL coding method was developed, in which more than two intensity values are encoded. Differing from the bGL method, which only uses the intensity values 0 and 255, the nGL method uses n > 2 intensity values from 0 to 255. Figure 2 shows the nGL method with n = 4 intensity levels. However, the images of nGL patterns become blurred due to camera/projector defocus, which makes codeword determination at the code boundaries difficult. This problem becomes worse if the codes change at the same place in different coded patterns.
To tackle this problem, the nGC method is used to improve the conventional nGL method, as illustrated in Figure 3. Clearly, at most one code changes at each pixel across all the nGC patterns. Moreover, no codeword appears more than once. The total number of codewords remains the same, yet the codeword detection errors occurring at the code boundaries are reduced and the phase unwrapping is improved.
In this paper, we use three nGC patterns with n = 4 intensity levels as an example. With these patterns, a total of 4^3 = 64 codewords can be encoded, as illustrated in Table 1. The codewords are also drawn in code-space, as shown in Figure 4.
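Since Table 1 is not reproduced here, the following sketch uses the standard modular n-ary Gray code as one concrete construction with the stated properties (64 unique codewords for n = 4 and three patterns, with exactly one code changing between neighboring codewords); the paper's actual code order may differ:

```python
def nary_gray(k, n=4, digits=3):
    """Map an integer fringe order k to an n-ary Gray codeword (most
    significant digit first): consecutive k differ in exactly one digit."""
    b = [(k // n**i) % n for i in range(digits)] + [0]  # base-n digits, LSD first
    return tuple((b[i] - b[i + 1]) % n for i in reversed(range(digits)))
```

For example, `[nary_gray(k) for k in range(64)]` enumerates all 4^3 = 64 codewords in an order where each neighbor pair differs in a single digit.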


The Framework of the Proposed Method
The following steps describe the framework for absolute phase retrieval using the nGC method.
Step 1: Design codewords. Let C_1, C_2 and C_3 be the code sequences for the three coded patterns. All designed 3-bit codewords are given in Table 1.
Step 2: Calculate the code coefficients. The code coefficients range from −1 to 1, while the codes range from 1 to 4. A linear mapping converts each code C_i into its coefficient α_i; with uniformly spaced levels, the codes 1, 2, 3, 4 map to the coefficients −1, −1/3, 1/3, 1.
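Assuming uniformly spaced levels (an illustrative choice, since the paper's exact spacing is not shown here), the code-to-coefficient mapping for n = 4 can be sketched as:

```python
def code_to_coeff(c, n=4):
    """Map a code c in {1, ..., n} to a coefficient in [-1, 1] with
    uniformly spaced levels; for n = 4 the levels are -1, -1/3, 1/3, 1."""
    return (2 * c - n - 1) / (n - 1)
```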
Step 3: Encode codewords into patterns. With the three code sequences, the three coded patterns used to carry them can be mathematically described as:

I_i^p(x, y) = A^p(x, y) + B^p(x, y) α_i(k), i = 1, 2, 3

where A^p(x, y) and B^p(x, y) are constants, k = ⌊x/P⌋ denotes the fringe order, and P denotes the number of pixels per fringe period.
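Under these definitions, rendering one coded pattern could look like the sketch below (A^p = B^p = 127.5, spanning the 8-bit range 0-255, is an assumption of this sketch):

```python
import numpy as np

def coded_pattern(width, height, seq, P=20, Ap=127.5, Bp=127.5):
    """Render one coded pattern: column x belongs to fringe order k = x // P,
    and its intensity is Ap + Bp * alpha_k, where seq holds the coefficients
    (already mapped to [-1, 1]) for each fringe order."""
    k = np.arange(width) // P            # fringe order per column
    row = Ap + Bp * np.asarray(seq)[k]   # one scanline of intensities
    return np.tile(row, (height, 1))     # identical rows: vertical stripes
```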
Step 4: Wrapped phase calculation. Once the deformed phase-shift patterns are captured by the camera, A^c(x, y) and B^c(x, y) can be calculated on the basis of Equations (4) and (5), and the wrapped phase φ(x, y) can be calculated on the basis of Equation (6).
Step 5: Intensity normalization. With the captured coded patterns and the A^c(x, y) and B^c(x, y) calculated in the previous step, α_1(x, y), α_2(x, y) and α_3(x, y) can be calculated on the basis of Equations (11)-(13).
Step 6: Calculate codewords. Then, C_1, C_2 and C_3 can be obtained by rounding the normalized coefficients back to the nearest code level, i.e., inverting the mapping of Step 2:

C_i = Round[(3α_i + 5)/2], i = 1, 2, 3 (20)-(22)

Here, Round(x) denotes the closest integer to the input x.
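Step 6 can be sketched by inverting the uniform mapping assumed in the Step 2 sketch; rounding gives each code a decision region of width 2/3 in coefficient space, which is what makes the detection tolerant to moderate noise:

```python
def coeff_to_code(a, n=4):
    """Recover the integer code in {1, ..., n} from a normalized
    coefficient a in [-1, 1] by rounding to the nearest level."""
    return round((a * (n - 1) + n + 1) / 2)
```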
Step 7: Determine the fringe order. By looking up (C_1, C_2, C_3) in Table 1, the index of the codeword gives the fringe order. Then, the wrapped phase φ(x, y) can be converted to the absolute phase Φ(x, y) according to Equation (7).

Simulations
In order to explore the feasibility of the proposed method, some simulations were carried out. Figure 5 shows the simulated phase-shift patterns and coded patterns. In these simulations, three nGC patterns are used to generate 64 codewords, and the phase-shift patterns have a fringe period of P = 20 pixels. A modulation function, shown in Figure 5a, is used to simulate variation of the background; it changes from 0.5 to 1.0 and modulates both the phase-shift fringe patterns and the coded fringe patterns, to which Gaussian noise is also added. Figure 5b,c show the phase-shift fringe patterns and coded fringe patterns after modulating the background intensity and adding Gaussian noise (10 dB). Figure 5c shows that it is difficult to differentiate the codes from the intensities because the background intensities change too much.
Figure 6 shows the simulated results. Figure 6a shows the wrapped phase calculated on the basis of Equation (6). Figure 6b shows the coded fringe patterns after intensity normalization, where three thresholds, −0.5, 0 and 0.5, divide the intensity into four levels unambiguously. Figure 6c shows the codes calculated from the normalized coded fringe patterns according to Equations (20)-(22). Figure 6d shows the fringe orders determined from Table 1. Figure 6e shows the absolute phase calculated according to Equation (7) after obtaining the wrapped phase and fringe orders.

Experiments
To verify the performance of the proposed method, we developed a common fringe projection system including a CMOS camera (IOI Flare 2M360-CL), a digital light processing projector (LightCrafter 4500), and a computer. Figure 7 shows the setup used in the experiments. The camera has a resolution of 1280 × 1024 pixels, and the images are delivered to the computer via a high-speed Camera Link interface. The resolution of the projector is 912 × 1140 pixels. The measured objects are two separate plaster sculptures placed in front of the fringe projection measurement system. The sinusoidal phase-shift fringe patterns and the nGC coded patterns were projected onto the measured objects by the projector sequentially, and the deformed fringe patterns were captured by the camera. For comparison, the nGL patterns were also projected and captured in the same way as the nGC patterns.

Figure 8 shows the projected patterns, including three phase-shift patterns, three nGC patterns and three nGL patterns. Figure 9 shows the captured images of these projected patterns, including the phase-shift patterns, nGC patterns and nGL patterns; the wrapped phase is calculated from the captured phase-shift images. To better illustrate the proposed method, Figure 10a shows the intensities of the three phase-shift patterns at the reference plane, while Figure 10b shows the intensities of the three nGC patterns. Figure 10c shows the codewords determined from Equations (17)-(19), and Figure 10d shows the wrapped phase calculated from Equation (6).
By looking up the position of the codeword in Table 1, the fringe order of each pixel can be determined, as shown in Figure 10e. Then, based on Equation (7), the absolute phase can be obtained, as shown in Figure 10f; the 2π discontinuities are removed and the absolute phase is continuous.
The first experiment shows the process of the nGC method. Figure 11a shows a cross-section of the reference plane without normalization, and Figure 11b shows the same section after normalization; because the surface contrast remains nearly the same across the reference plane, the intensity normalization shows little improvement there. Figure 11c shows the fringe orders calculated from the normalized patterns for the reference plane. Figure 11d shows a cross-section of the tested objects without normalization, and Figure 11e shows the same cross-section after normalization, which differs significantly from Figure 11d: it was hard to extract the codes from Figure 11d, but there was no difficulty with Figure 11e. Thus, intensity normalization can eliminate the influences of ambient light and surface contrast, which makes the nGC method more robust. Moreover, there are no fringe order errors in Figure 11c,f, which also shows the robustness of the nGC method.
Figure 11. One cross-section of the nGC method. (a) nGC patterns of the reference plane without normalization, and (b) the same patterns after normalization; (c) fringe orders calculated from the normalized patterns for the reference plane; (d) nGC patterns of the measured objects without normalization, and (e) the same patterns after normalization; (f) fringe orders calculated from the normalized patterns for the measured objects.
The next experiment shows the process of the nGL method. Figure 12a shows one cross-section of the reference plane without normalization, and Figure 12b shows the same section after normalization. Figure 12c shows the fringe orders calculated from the normalized patterns for the reference plane; obviously, sharp peaks occur at some of the code boundaries, which will lead to absolute phase errors. Figure 12d shows a cross-section of the tested objects without normalization, and Figure 12e shows the same cross-section after normalization; again, intensity normalization eliminates the influences of ambient light and surface contrast. Figure 12f shows the fringe orders calculated from the normalized patterns for the measured objects. As with Figure 12c, sharp peaks occur at the edges of some fringes in Figure 12f, which leads to absolute phase errors.
Comparing the results of the nGL method with those of the nGC method, the nGC method produces fewer errors and is therefore more robust in fringe order calculation. Figure 13 shows two results obtained with the framework proposed in this paper; the only difference is that the former uses the nGC method while the latter uses the nGL method. Figure 13a shows the absolute phase map from the nGC method, which has no obvious errors. However, the absolute phase map obtained with the nGL method, shown in Figure 13b, exhibits obvious errors at the fringe boundaries, because the pattern images blurred by projector or camera defocus can lead to incorrect codewords. The results confirm that the proposed method performs better than the nGL method. Finally, we present the 3D shape measurement result of the nGC method after correction in Figure 14.

Conclusions
This paper presents a modified gray-level method for absolute phase retrieval. With n > 2 intensity values, the proposed method can generate many more codewords than the common bGL method. An intensity normalization procedure, which takes the average intensity and modulation of the phase-shift patterns into account, is developed to deal with the problems caused by ambient light and surface contrast. Compared with the nGL method, the nGC method reduces the codeword detection errors and thereby improves the phase unwrapping. Both simulations and real experiments demonstrate that the proposed method is reliable and applicable.
