Article

Fourier-Transform-Based Surface Measurement and Reconstruction of Human Face Using the Projection of Monochromatic Structured Light

School of Science, Qingdao University of Technology, Qingdao 266525, China
*
Author to whom correspondence should be addressed.
Sensors 2021, 21(7), 2529; https://doi.org/10.3390/s21072529
Submission received: 15 March 2021 / Revised: 25 March 2021 / Accepted: 31 March 2021 / Published: 4 April 2021
(This article belongs to the Special Issue Object Detection and Identification in Any Medium)

Abstract

This work presents a new approach to surface measurement of the human face that combines the projection of monochromatic structured light, optical filtering, polarization and a Fourier-transform-based image-processing algorithm. The theoretical analyses and experimental results of this study show that the monochromatic character of the fringe pattern projected by our designed laser-beam-based optical system enables the optical filtering technique to remove the effect of background illumination; that the linearly polarized character makes it possible to employ a polarizer to eliminate the noise contributed by multiply-scattered photons; and that the high-contrast sinusoidal fringes of the projected structured light provide the conditions for accurate reconstruction from a one-shot measurement based on Fourier transform profilometry. With its portable and stable optical setup, the proposed method may find applications in indoor medical scanning of the human face and in outdoor facial recognition, without strict requirements of a dark environment or a motionless subject.

1. Introduction

The measurement and reconstruction of the three-dimensional (3D) human face has long been a challenging problem that continues to drive scientists and technologists to develop new methods with higher accuracy and better adaptability, which are necessary and important for applications such as medical imaging [1,2,3] and facial recognition [4,5].
Methods of 3D facial scanning using structured light have been investigated and developed for about three decades. The method proposed by Vázquez and Cuevas for 3D facial reconstruction and classification was based on structured-light projection and the phase-shifting technique [4]. The investigation by Dunn and co-authors showed the feasibility of an imaging system for reconstructing the surface of the human body using structured light [6]. The key part of the work by Bhatia et al. was an optical surface scanner that used white light to project coded patterns of structured light [7]. Gregory and Lipczynski studied the monitoring of the facial surface based on a structured-light technique, with a measuring system that simply consisted of a standard commercial slide projector and CCD cameras [8]. Amor and co-workers presented a face reconstruction method based on a hybrid structured-light and stereo sensor technique [9], which combined stereo triangulation, data interpolation based on cubic spline models, meshing based on Delaunay triangulation and a texture-mapping process. Yang et al. employed color structured light to image the skin [10], although this approach may share the limitations of phase-shifting methods that use a digital-light-processing (DLP) projector. In addition to the basic method combining phase shifting and coded-pattern projection, Yue and co-authors recently developed improved registration and graph-optimization algorithms [11].
There have also been investigations addressing situations in which the human body is in motion. Zhang and Huang developed a 3D shape measurement system using digital fringe projection, a high-speed CCD camera and the phase-shifting technique [12]. Kimura et al. aimed at the accurate measurement of the dynamic shape of the human foot in motion, although they indicated that their method, which used fixed characteristic patterns, required that the surface reflectance of the observed object not vary [13]. The method of Liberadzki et al. was developed for measuring the human body in motion [14] and employed projected sinusoidal patterns generated using a DLP projector, as discussed by Sitnik [15]. Note that sinusoidal patterns generated using a DLP projector are affected by the gamma effect and the defocusing issue. A 3D facial reconstruction approach based on a one-shot image of a projected structured-light line pattern was recently suggested by Wang [16], although the energy distribution of the projected pattern is one of the critical issues to be considered, and it would be a problem if background illumination exists.
3D facial reconstruction can also be realized by laser line scanning using a laser line projector [17], which is the basis of the laser 3D slit scanner. Measurements using laser line scanning capture a single line of points at a time, so either the scanner or the object must be moved to acquire additional lines of points and complete the 3D reconstruction of the observed object. Recently, Piedra-Cascón et al. [1], Zhao et al. [2] and Amornvit and Sanohkan [3] compared different facial scanners, and their conclusions show that both slit scanners and structured-lighting techniques are ill-suited for scanning dynamic scenes.
For the existing methods, including commercial technologies such as the 3dMD Face system (3dMD, Atlanta, USA) and the FaceScan system (Isravision, Darmstadt, Germany), there are strict requirements on the targets (i.e., the human participants). The individual participant has to maintain a sitting position with stable support for the head and neck and a stable mandibular position, while also keeping his/her eyes and lips naturally closed and his/her body relaxed. Usually, repeated scans or multiple images are necessary for these methods, which must mostly be carried out in a dark environment. As reported, these systems achieve their best accuracy in the middle part of the participant's face, while reconstruction deviations exist for the upper and lower parts of the face [2].
Generally speaking, the existing techniques discussed above suffer from the following main drawbacks: (i) 3D facial scans and reconstructions using structured light and the phase-shifting technique have the rigorous requirement that the human face remain still during the measuring time. (ii) If the projected pattern is a sinusoidal fringe generated by a DLP projector, the intensity deviation of the sinusoidal-fringe curve caused by the gamma effect and the defocusing issue results in reconstruction errors. (iii) For structured-pattern projection with white light, background illumination is mostly not allowed during the measurement, which means that outdoor in-situ measurement for facial recognition is impossible.
Thus, aiming at a 3D reconstruction of the human face with higher accuracy and better adaptability, we propose a method that combines the projection of a sinusoidal optical signal, optical filtering and polarization techniques, and Fourier transform profilometry (FTP). The projected sinusoidal optical pattern generated by the designed optical system is monochromatic, time-invariant and of high fringe contrast, with an adjustable spatial frequency. The monochromatic feature can be combined with an optical filter to effectively reduce the influence of environmental noise light, which has been a tough restriction on 3D measurement and reconstruction under background illumination.

2. Theoretical Analyses

2.1. The Generation and Projection of Sinusoidal Fringe Signal

Profilometry based on either Fourier transform or phase shifting relies on the projection of sinusoidal fringe patterns. However, the widely used sinusoidal pattern generated by a DLP projector suffers from limitations resulting from the requirement of precise synchronization, the speed limit of the measurement and the nonlinear gamma effect. For the purpose of generating monochromatic, high-contrast and truly sinusoidal fringe patterns, we designed and developed the laser-beam-based optical system sketched in Figure 1. The fundamental components of the sinusoidal optical-signal generator include a laser source with wavelength $\lambda = 532$ nm, a rectangular grating positioned at plane $P_0$, a Fourier-transform positive lens with focal length $f$ and an adjustable spatial-frequency filter located at plane $P_2$.
We now present a theoretical description of the generation and propagation of the sinusoidal fringe pattern based on the designed optical system. The grating at plane $P_0$ is illuminated by the optical field from the point laser source, as indicated in Figure 1. The optical field right behind the grating, $U_0(x_0, y_0)$, is given by

$$U_0(x_0, y_0) = \frac{1}{Z_0}\, e^{ikZ_0}\, e^{\frac{ik}{2Z_0}(x_0^2 + y_0^2)}\, t_0(x_0, y_0), \tag{1}$$

where $Z_0$ is the distance between the laser source and the grating and $t_0(x_0, y_0)$ is the transmission function of the grating, which has the form

$$t_0(x_0, y_0) = t_0(x_0) = \left[\operatorname{rect}\!\left(\frac{x_0}{a}\right) \ast \frac{1}{d}\operatorname{comb}\!\left(\frac{x_0}{d}\right)\right]\operatorname{circ}\!\left(\frac{2|x_0|}{H}\right), \tag{2}$$

where $x_0$ and $y_0$ are the coordinate variables of $P_0$, and $a$ and $d$ are the optical parameters of the grating. $H$ is the grating width, which is assumed, for the sake of simplicity, to be the diameter of the illumination spot of the laser. Since the propagation of the field $U_0(x_0, y_0)$ from plane $P_0$ to plane $P_1$ is within the Fresnel region, the field $U_1(x_1, y_1)$ in front of the lens at plane $P_1$ can be written as
$$U_1(x_1, y_1) = C_1 \iint_{\Sigma_0} U_0(x_0, y_0)\, e^{\frac{ik}{2Z_1}\left[(x_1 - x_0)^2 + (y_1 - y_0)^2\right]}\, dx_0\, dy_0, \tag{3}$$

where $C_1 = \frac{1}{i\lambda Z_1} e^{ikZ_1}$ is a complex constant and $Z_1$ is the distance between planes $P_0$ and $P_1$. The field $U_1'(x_1, y_1)$ after the lens at plane $P_1$ takes the form

$$U_1'(x_1, y_1) = U_1(x_1, y_1)\, t_L(x_1, y_1) = U_1(x_1, y_1)\, e^{-\frac{ik}{2f}(x_1^2 + y_1^2)},$$

where $t_L(x_1, y_1) = e^{-\frac{ik}{2f}(x_1^2 + y_1^2)}$ denotes the transmission function of the lens.
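Both transmission functions above are easy to sample numerically. As a quick, illustrative sanity check (not the experimental hardware), the grating transmission $t_0(x_0)$ — a slit of width $a$ repeated with period $d$ and truncated by the circular illumination spot of diameter $H$ — can be sketched as follows; all parameter values are assumptions chosen for illustration, and the slits are placed starting at the origin for simplicity:

```python
import numpy as np

# Illustrative sketch of the grating transmission t0(x0):
# a slit of width a repeated with period d (rect convolved with comb),
# truncated by a circular illumination spot of diameter H.
d = 0.02   # grating period (mm), i.e. 50 lp/mm (assumed)
a = 0.01   # slit width (mm) (assumed)
H = 4.0    # illumination-spot diameter (mm) (assumed)

x0 = np.linspace(-H / 2, H / 2, 4001)   # sample the x0 axis (mm)
in_slit = (x0 % d) < a                  # rect repeated with period d: open slits
in_spot = np.abs(x0) <= H / 2           # circ(2|x0|/H): illuminated region
t0 = (in_slit & in_spot).astype(float)  # binary transmission function

# The duty cycle should be close to a/d = 0.5
print(t0.mean())
```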
The propagation of the field $U_1'(x_1, y_1)$ at plane $P_1$ to $U_2(x_2, y_2)$ at plane $P_2$ can be accurately computed using Fresnel diffraction. $U_2(x_2, y_2)$ denotes the field in front of the spatial-frequency filter F and is given by

$$U_2(x_2, y_2) = C_2 \iint_{\Sigma_1} U_1'(x_1, y_1)\, e^{\frac{ik}{2Z_2}\left[(x_2 - x_1)^2 + (y_2 - y_1)^2\right]}\, dx_1\, dy_1, \tag{4}$$

where $C_2 = \frac{1}{i\lambda Z_2} e^{ikZ_2}$ is another complex constant and $Z_2$ is the distance between planes $P_1$ and $P_2$. Combining Equations (1)–(4) together with the expressions for $t_0(x_0, y_0)$ and $t_L(x_1, y_1)$ yields

$$U_2(\omega) = \frac{aC_3}{d} \sum_{m=-\infty}^{\infty} \operatorname{sinc}\!\left(\frac{am}{d}\right) \frac{J_1\left[\pi H(\omega - m/d)\right]}{\omega - m/d}, \tag{5}$$

where $J_1(\cdot)$ is the Bessel function of the first kind, $\omega = 2\pi f_x$ with the spatial frequency $f_x$ at plane $P_2$ defined by $f_x = \frac{x_2}{\lambda f}$, and $C_3$ is a complex constant given by

$$C_3 = \frac{1}{\lambda f Z_0}\, e^{i\left[k(Z_0 + Z_1 + Z_2) - \pi/2\right]}.$$
At plane $P_2$, a specially designed adjustable spatial-frequency filter is employed to select the $\pm m$-th-order spectra described in Equation (5) and allow them to pass through it. Here, we take $m = 1$. The field $U_2'(x_2, y_2)$ right behind the spatial-frequency filter then has the form

$$U_2'(x_2, y_2) = U_2'(\omega) = U_2(\omega)\big|_{m = \pm 1} = \frac{aC_3}{d}\operatorname{sinc}\!\left(\frac{a}{d}\right) \times \left\{ \frac{J_1\left[\pi H(\omega - 1/d)\right]}{\omega - 1/d} + \frac{J_1\left[\pi H(\omega + 1/d)\right]}{\omega + 1/d} \right\}. \tag{6}$$
Note that the propagation of the field $U_2'(x_2, y_2)$ from plane $P_2$ to $P_3$ can also be analyzed using Fresnel diffraction, which gives the field $U_3(x_3, y_3)$ at plane $P_3$ as

$$U_3(x_3, y_3) = C_4 \iint_{\Sigma_2} U_2'(x_2, y_2)\, e^{\frac{ik}{2Z_3}\left[(x_3 - x_2)^2 + (y_3 - y_2)^2\right]}\, dx_2\, dy_2, \tag{7}$$

where $C_4 = \frac{1}{i\lambda Z_3} e^{ikZ_3}$ is also a complex constant and $Z_3$ is the distance between planes $P_2$ and $P_3$. Considering that the size of $\Sigma_2(x_2, y_2)$ (≤4 mm in diameter) is much less than that of $\Sigma_3(x_3, y_3)$, i.e., the spot size of the sinusoidal fringe pattern (≥100 mm in diameter), we take the approximation $\lambda f (f_x^2 + f_y^2) \ll 2(f_x x_3 + f_y y_3)$ in the further derivation of Equation (7). Equation (7) then becomes

$$U_3(x_3, y_3) = C_5 \iint_{\Sigma_2} U_2'(f_x, f_y)\, e^{i2\pi\left(f_x \frac{f x_3}{Z_3} + f_y \frac{f y_3}{Z_3}\right)}\, df_x\, df_y = C_5\, \mathcal{F}^{-1}\{U_2'(f_x, f_y)\}, \tag{8}$$

where the complex parameter $C_5$ is given by

$$C_5 = \frac{\lambda f^2}{Z_3}\, e^{i(kZ_3 - \pi/2)}\, e^{\frac{ik}{2Z_3}(x_3^2 + y_3^2)}.$$
Equation (8) indicates that the field at the observation plane $P_3$ is an inverse Fourier transform of the field output from the spatial-frequency filter. Combining Equations (6) and (8), the field at the observation plane $P_3$ has the form

$$U(x_3) = C_a \cdot C_b \cdot \operatorname{circ}\!\left(\frac{2f|x_3|}{H Z_3}\right) \cdot \cos\!\left(\frac{2\pi f x_3}{d Z_3}\right), \tag{9}$$

where $C_a$ and $C_b$ are, respectively, given by

$$C_a = \frac{2f}{Z_0 Z_3 \pi} \sin\!\left(\frac{a\pi}{d}\right),$$

$$C_b = e^{i\left[k(Z_0 + Z_1 + Z_2 + Z_3) - \pi\right]}\, e^{\frac{ik}{2Z_3}(x_3^2 + y_3^2)}.$$

Note that $C_a$ is a real constant for fixed distances $Z_0$ and $Z_3$ and that $|C_b| = 1$. Equation (9) indicates the following: (i) $C_a$ represents the amplitude term; since $f$ and $Z_0$ are fixed parameters, the fringe intensity decreases as $Z_3$ increases. (ii) The circ function in Equation (9) confines the fringe pattern to a circle of radius $\frac{H Z_3}{2f}$. (iii) The cosine term in Equation (9) has only one variable, $x_3$, at plane $P_3$ along the $x_3$-axis, which indicates that the output of this system consists of monochromatic, high-contrast and truly sinusoidal fringes. The cosine term also shows that the fringe width (related to the spatial frequency of the output fringe pattern) increases as $Z_3$ increases, so the projected fringe pattern has an adjustable spatial frequency, which is useful and important for 3D surface measurement.

2.2. Imaging Technique and Reconstruction Based on One-Shot FTP

Starting with the work of Takeda and Mutoh about four decades ago [18], FTP has been an active research field focusing on 3D surface measurement [19,20,21,22]. However, applications of FTP have been limited to situations with ideal conditions, which typically require that the surface of the measured object be opaque, highly reflective and diffuse, and that there be no background illumination.
The penetration depth of green light into human skin is about 2.5 mm [23,24], which means that the laser-beam-based monochromatic structured light projected onto the human face undergoes multiple scattering within the subsurface layer. As discussed by Holroyd and Lawrence [25], it can be difficult to carry out 3D shape measurement of the human face using optical triangulation methods, i.e., structured-light-illumination methods, because subsurface scattering brings uncertainties to the reflection measurements at the face surface, whereas optical triangulation simply assumes that the direct reflection takes place at the object surface.
When a fringe pattern of structured light is projected onto the human face, the measured reflectance basically consists of three components: the intensity distribution of the directly reflected light of the projected signal on the face surface, the exiting photons that have experienced a multiple-scattering process within the subsurface of the facial skin, and the background illumination on the face surface. Generally speaking, only the pure direct reflection can be used for determining the 3D shape of the human face. However, the direct reflection is usually quite weak, i.e., only a few percent of the incident signal in the measurement of the human face, while most of the projected light and background illumination pass through the interface and undergo multiple scattering [26].

2.2.1. Removing the Effect of Background Illumination Using an Optical Filter

The component related to background illumination is an important issue for in-situ measurement and is one reason that most previous investigations have carried out their experiments in a dark environment. We use laser-beam-based monochromatic structured light at 532 nm as the projected fringe pattern, which makes it possible to employ the optical filtering technique during the measurement. Thus, an optical bandpass filter centered at 532 nm with an FWHM of 10 nm is employed for removing the effects of background illumination in the experimental setup of this investigation.
As shown in Figure 2, the background illuminance, E = 3465 Lux, has lowered the fringe contrast and increased the noise level, and the distorted fringes on the surface of the human head model are almost submerged by the background illumination, as indicated in Figure 2a. However, under the same environmental and experimental conditions, the measurement in Figure 2b completed using the optical filtering technique shows greatly improved fringe contrast with a much lower noise level, which ensures an accurate reconstruction of the observed object.
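A back-of-envelope calculation shows why a narrow bandpass filter helps so much here. Assuming (for illustration only) a spectrally flat background over the visible band and an ideal Gaussian transmission profile for the 10 nm FWHM filter, the laser line at 532 nm passes essentially untouched while the broadband background is suppressed to a few percent:

```python
import numpy as np

# Sketch of bandpass filtering: a 10 nm FWHM Gaussian filter at 532 nm
# (idealized filter shape, flat background spectrum -- both assumptions).
wl = np.linspace(400.0, 700.0, 30001)            # wavelength grid (nm)
fwhm = 10.0
sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
T = np.exp(-0.5 * ((wl - 532.0) / sigma) ** 2)   # filter transmission

signal_pass = np.interp(532.0, wl, T)            # laser line sits at the peak
dwl = wl[1] - wl[0]
background_pass = T.sum() * dwl / (wl[-1] - wl[0])  # flat-background fraction

print(f"signal transmission ~ {signal_pass:.2f}")
print(f"background suppressed to ~ {background_pass * 100:.1f}% of its level")
```

Under these assumptions the background is reduced roughly thirtyfold while the monochromatic fringe signal is preserved, which is consistent with the contrast improvement seen between Figure 2a and Figure 2b.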

2.2.2. Eliminating the Effect of Multiply-Scattered Photons Using a Polarizer

There have been some efforts toward 3D reconstruction based on multiply-scattered photons, such as the work by Ohtani et al. [27]. However, the theoretical modeling and computation based on radiative transfer and diffusion theory for the reconstruction of the observed object are inherently difficult, since it is almost impossible to describe objects such as the human face using a one-dimensional model [28].
In our proposed method, the effect of multiply-scattered photons is eliminated by employing the polarization technique, which works because the incident monochromatic structured light and the signal reflected from the human face or other object surface are linearly polarized, while the signal composed of multiply-scattered photons is unpolarized. Thus, a polarizer is mounted on the camera to carry out the measurement of the observed object with the projected fringe patterns generated using our designed laser-beam-based optical system, as sketched in Figure 1.
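A toy model makes the mechanism concrete. Assuming (purely for illustration) a polarized fringe signal riding on a stronger unpolarized subsurface glow, an ideal analyzer passes the full polarized component but only half of the unpolarized one, and the co-minus-cross-polarized difference isolates the direct reflection:

```python
import numpy as np

# Toy model (assumed intensities, ideal polarizer) of polarization gating:
# the direct fringe signal is linearly polarized, the multiply-scattered
# subsurface light is unpolarized and passes an ideal analyzer at 50%.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 1000)
direct = 1.0 + np.cos(2.0 * np.pi * x)             # polarized fringe signal
scatter = 3.0 + 0.1 * rng.standard_normal(x.size)  # unpolarized subsurface glow

no_polarizer = direct + scatter                # camera without analyzer
co_polarized = direct + 0.5 * scatter          # analyzer parallel to signal
cross_polarized = 0.5 * scatter                # analyzer perpendicular
recovered = co_polarized - cross_polarized     # = direct reflection only

def contrast(I):
    return (I.max() - I.min()) / (I.max() + I.min())

print(f"contrast without polarizer: {contrast(no_polarizer):.2f}")
print(f"contrast, co - cross      : {contrast(recovered):.2f}")
```

In the paper's setup a single analyzer on the camera already suffices (as in Figure 3c), since suppressing half of the unpolarized scatter while keeping the full polarized signal markedly improves fringe contrast.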
A spherical cap, as shown in Figure 3a, is employed as the observed target for illustrating the use of the polarization technique. The spherical cap was made of room-temperature vulcanized silicone rubber using two-component addition molding. The real part of the refractive index of silicone rubber at 532 nm is about $n_r = 1.4$–$1.52$, which is similar to that of human skin [29].
Figure 3b,c shows the measurement results and their comparison. The spherical cap was illuminated by the monochromatic sinusoidal fringe pattern at 532 nm and the background illuminance was E = 0 Lux, i.e., a dark environment. Figure 3b is the image without a polarizer, while Figure 3c is the measurement with a polarizer mounted on the camera. The relative brightness of the image in Figure 3b indicates that the multiply-scattered photons have a strong effect on the surface reflectance, and this effect depends on the thickness of the silicone rubber and the reflectivity of the bottom surface. In contrast, Figure 3c shows that the effect of multiply-scattered photons has mostly been removed.

2.2.3. Reconstruction Based on One-Shot FTP

Note that, if the background illumination and the effect of multiply-scattered photons were present, facial reconstructions from measurements with structured-light projections would fail or suffer large errors. As discussed above, the quality of the surface-reflectance measurement provides the necessary basis for developing an image-processing algorithm for the retrieval of the phase and 3D surface height of the observed object. The image-processing algorithm is based on FTP with a typical triangulation framework of 3D surface measurement; the general framework can be found in the publications by Takeda and Mutoh [18] and Maurel and co-workers [19]. The one-shot FTP-based reconstruction of the measurement in Figure 3c is given in Figure 4a, while the reconstruction error is described in Figure 4b. The reconstruction accuracy, based on the comparison between the retrieved curve and the actual shape of the spherical cap indicated in Figure 4b, is good and acceptable, and it is not affected by background illumination if the optical filtering technique is used, as discussed above.
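The core FTP step can be sketched in one dimension in the spirit of Takeda and Mutoh: a fringe profile $I(x) = a + b\cos(2\pi f_0 x + \varphi(x))$ is Fourier transformed, the $+f_0$ sideband is isolated, and the surface-induced phase $\varphi(x)$ is recovered from the angle of the inverse transform (height then follows from the triangulation geometry, which is omitted here). This is a minimal sketch on synthetic data, not the paper's implementation; the carrier frequency, phase bump and window width are all assumptions:

```python
import numpy as np

# Minimal one-shot FTP sketch: recover the phase phi(x) from a single
# fringe profile I(x) = a + b*cos(2*pi*f0*x + phi(x)).
N = 1024
x = np.arange(N)
f0 = 1.0 / 16.0                                            # carrier (cycles/sample)
phi = 1.5 * np.exp(-((x - N / 2) ** 2) / (2 * 80.0 ** 2))  # synthetic "surface" phase
fringe = 128.0 + 60.0 * np.cos(2 * np.pi * f0 * x + phi)   # the one-shot measurement

F = np.fft.fft(fringe)
k0 = int(round(f0 * N))                  # FFT bin of the carrier
window = np.zeros(N)
window[k0 - 20 : k0 + 21] = 1.0          # band-pass around the +f0 sideband only
analytic = np.fft.ifft(F * window)       # complex field ~ (b/2)*exp(i(2*pi*f0*x + phi))

wrapped = np.angle(analytic) - 2 * np.pi * f0 * x   # remove the carrier phase
recovered = np.unwrap(wrapped)                      # resolve the 2*pi ambiguities
recovered -= recovered[0]                           # fix the constant offset

err = np.max(np.abs(recovered - (phi - phi[0])))
print(f"max phase error = {err:.3f} rad")
```

Because only one image is needed, this is what allows the method to handle a moving face, in contrast to phase-shifting approaches that require several frames of a motionless subject.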

3. Experimental Results of Facial Reconstruction and Discussions

The experimental system basically consisted of two parts: the fringe-pattern generator and the measurement part. The fringe-pattern generator mainly contained the following components: the CW laser source MGL-III-532-200 mW with a wavelength of λ = 532 nm and a small divergence angle (≤1.5°), a rectangular grating with a spatial frequency of 50 lp/mm, a Fourier-transform lens with a focal length of 150 mm and a specially designed spatial-frequency filter with a high-precision carved V-shaped aperture with a slit width of 0.5 mm. The measurement part included a CCD camera, an optical filter centered at 532 nm with an FWHM of 10 nm, an optical polarizer, a computer and the image-processing software that was developed during this study. Note that the cost of each component depends on the measurement needs and the physical parameters of the individual component. For example, the cost of the CW laser source used in this work is about $2000, while it would increase to about $30,000 if we needed 10 times the output power for potential outdoor measurement and reconstruction with high accuracy.
According to the physical basis of the sinusoidal fringe pattern generated by our approach and the FTP-based method discussed above, the reconstruction of the human face can be affected by the intensity, contrast and spatial frequency of the projected fringes and by the background illuminance. The contrast of the monochromatic fringe pattern generated using the optical system sketched in Figure 1 is approximately equal to unity in a dark background environment, which can be concluded from Equation (9) and was validated by experimental results, as indicated in Figure 5. The light-intensity distribution of the sinusoidal fringe pattern generated using the developed optical system was measured using a CMOS camera with an Aptina CMOS sensor with a sensor size of 5.70 × 4.28 mm and a pixel size of 2.2 × 2.2 μm. It was confirmed that the generated fringe pattern is monochromatic, time-invariant, high contrast and truly sinusoidal, which is critically important for the measurement and reconstruction of the human face in this investigation.

3.1. The Effect of Fringe Contrast

The first part of our experimental work addressed the effect of fringe contrast and its relation to the background illuminance, as illustrated in Figure 6. Figure 6 shows the measurement and reconstruction of a human face illuminated by a sinusoidal fringe pattern generated using our designed optical system, as sketched in Figure 1, with the spatial frequency of the fringe pattern being 0.3 lp/mm and the background illuminance being E = 200 Lux. The image in Figure 6a was taken without the optical filter and polarizer, while for the image in Figure 6b, an optical polarizer and an optical bandpass filter centered at 532 nm with an FWHM of 10 nm were employed. The fringes on the face in Figure 6b could be identified, while in Figure 6a the fringes were completely buried by the background illumination. The inaccuracy of the face reconstruction in Figure 6c, obtained from the measurement in Figure 6b, is attributed mostly to the decreased fringe contrast resulting from the background illumination, as explained in Figure 7.
The experimental conditions in Figure 7 were the same as those in Figure 6 except for the background illuminance. The better reconstruction accuracy in Figure 7b compared with Figure 6c indicates that the contrast of the projected fringe pattern is affected by the background illuminance. Note that the fringe contrast in Figure 6b can be increased by increasing the output power of the laser source.
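The trade-off between laser power and background illuminance can be expressed with a simple additive model (an assumption, since real detector noise is ignored): for unity-contrast fringes $I(x) = I_0(1 + \cos(\cdot))$ on a uniform background $I_b$, the measured contrast is $C = I_0/(I_0 + I_b)$, so raising $I_0$ restores the contrast lost to the background:

```python
# Simple model (assumed unity-contrast fringes plus additive uniform
# background) of contrast degradation: C = I0 / (I0 + Ib).
def fringe_contrast(i0: float, ib: float) -> float:
    """Contrast of I0*(1+cos) fringes on top of a uniform background Ib."""
    return i0 / (i0 + ib)

for i0, ib in [(1.0, 0.0), (1.0, 1.0), (1.0, 4.0), (5.0, 4.0)]:
    print(f"I0={i0}, Ib={ib} -> contrast {fringe_contrast(i0, ib):.2f}")
```

In this model, quintupling the fringe intensity against a fixed background raises the contrast from 0.20 back to 0.56, consistent with the observation that increasing the laser output power improves the measurement.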

3.2. Reconstruction for Both Static and Dynamic Faces at Higher Spatial Frequency

To optimize the experimental conditions, we set the spatial frequency of the projected fringe pattern to 1.0 lp/mm and increased the maximum intensity of the projected fringes to about five times that used in Figure 6 and Figure 7, as shown in Figure 8.
Different measurements of a static face, shown in Figure 8a,c,e, were taken at the same spatial frequency but at different background illuminances. The results presented in Figure 8b,d,f show the same reconstruction accuracy. Under the same experimental conditions as in Figure 8c, including the position and parameters of the camera, the spatial frequency of the projected fringe pattern and the background illuminance, the measurement was completed for a shaking and moving face, as shown in Figure 9a. As indicated in Figure 9b, we could also obtain a very accurate reconstruction of the 3D surface shape of a moving human face even in the presence of background illuminance.
The ability to measure a moving human face and obtain an accurate reconstruction is an important feature of this proposed method, which is ensured by all aspects discussed above, mainly including the physical feature and accuracy of the projected fringe pattern generated using the designed laser-beam-based optical system, the optical filtering and polarization techniques and the one-shot FTP-based imaging processing algorithm.

3.3. Discussions

The experimental results and corresponding plots are summarized in Table 1 and Table 2.
Compared with the most commonly used approaches, as discussed in the Introduction, our improvements include the following aspects. The first advantage is that the individual participant does not need to be kept motionless; the proposed method can even obtain a high-accuracy reconstruction of a moving participant, as shown in Figure 9b. The second advantage is that our method needs only a one-shot measurement for facial reconstruction, while the commonly used approaches usually need repeated scans or multiple images, during which it is difficult for a participant to hold a stable position. The third advantage is that our method is capable of producing high-accuracy reconstructions in the presence of background illumination, as shown in Figure 8d,f and Figure 9b, while background lighting is not allowed in the most commonly used approaches.
High-contrast and high-quality sinusoidal fringes are critically important for 3D surface profilometry, which was validated by our experiments. Moreover, the spatial frequency and intensity of the sinusoidal fringe pattern generated using the designed laser-beam-based optical system can easily be controlled and adjusted according to the requirements of measuring different objects under different environmental conditions. The optical wave from a laser source is monochromatic and typically linearly polarized, so both the optical filtering and polarization techniques can be used for measurement and image processing, as proposed in our investigation. However, since the optical signal generated by a DLP-based projector is non-monochromatic and unpolarized, it is impossible to extend our approach to methods and applications based on structured light generated by a DLP projector.
Based on the experimental results above, we see that the accuracy of human face reconstruction is basically determined by the intensity, contrast and spatial frequency of the projected fringes and by the background illumination. The comparison of the reconstructions using two spatial frequencies, i.e., 0.3 and 1.0 lp/mm, indicated that the higher spatial frequency may yield better accuracy than the lower one within a valid range. Note that, if the spatial frequency is too high, problems related to measurement and image processing will arise, which may decrease the reconstruction accuracy. Thus, the optical parameters, including the spatial frequency of the projected fringes, should be optimized according to the measuring conditions.

4. Conclusions

In this study, we carried out both theoretical analysis and experimental investigation to describe a new approach to the measurement and reconstruction of the human face. The technical features of the proposed approach mainly comprise three parts: the generation and projection of a monochromatic sinusoidal fringe signal, the imaging technique with optical filtering and polarization, and the image-processing and reconstruction algorithm based on one-shot Fourier transform profilometry. Based on the theoretical analyses and experimental results, the concluding evaluation of the developed method includes the following aspects: (i) The projected sinusoidal fringe pattern generated by the designed laser-beam-based optical system is a monochromatic signal, which is the basis for using the optical filtering technique to remove the effect of background illumination. (ii) The linearly polarized characteristic of the projected sinusoidal fringe pattern makes it possible to use the polarization technique to effectively remove the noise contributed by multiply-scattered photons coming from the subsurface of the facial skin. (iii) The sinusoidal fringe pattern also possesses the features of high contrast, easily controlled intensity and adjustable spatial frequency, which enables accurate reconstruction of the human face using a one-shot measurement based on Fourier transform profilometry. Both the optical setup generating the sinusoidal fringe pattern and the measuring equipment, such as the CCD camera, are portable and stable, which suggests future applications in accurate medical scanning and reconstruction as well as in-situ imaging and recognition of the human face in general indoor and outdoor environments.

Author Contributions

Theory and conceptualization, B.C. and P.S.; methodology, B.C., H.L. and J.Y.; validation and investigation, H.L., J.Y. and P.S.; writing—original draft preparation, B.C.; and writing—review and editing, P.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partially supported by the Natural Science Foundation of Shandong Province under grant number ZR2016FB09.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Sketch of the sinusoidal optical signal generator.
Figure 2. The measuring result of using the filtering technique. The target, a human head model, is illuminated by a monochromatic sinusoidal fringe pattern at 532 nm under a background illuminance of E = 3365 Lux: (a) image without filtering; and (b) image with an optical bandpass filter centered at 532 nm with an FWHM of 10 nm.
Figure 3. The measuring result of using polarization technique: (a) photo of the original target; (b) image without polarizer; and (c) image with polarizer.
Figure 4. Reconstruction of the spherical cap of vulcanized silicone rubber: (a) reconstruction using the measurement of Figure 3c; and (b) error of reconstruction.
Figure 5. The generated sinusoidal fringe pattern and the comparison of intensity distribution between the generated sinusoidal signal and the standard sine curve.
Figure 6. Measurement and reconstruction of a static human face at a spatial frequency of 0.3 lp/mm and a background illuminance of E = 200 Lux: (a) face image without optical filtering and polarizer; (b) face image with optical filtering and polarizer; and (c) reconstructed face using the measurement of (b).
Figure 7. Measurement and reconstruction of a static human face at a spatial frequency of 0.3 lp/mm and a background illuminance of E = 0: (a) face image with projected fringe pattern; and (b) reconstructed face using the measurement of (a).
Figure 8. Measurement and reconstruction of a static human face at a spatial frequency of 1.0 lp/mm and different background illuminances E: (a) measurement at E = 0; (b) reconstruction using (a); (c) measurement at E = 50 Lux; (d) reconstruction using (c); (e) measurement at E = 175 Lux; and (f) reconstruction using (e).
Figure 9. Measurement and reconstruction of a dynamic human face at a spatial frequency of 1.0 lp/mm and a background illuminance of E = 50 Lux: (a) face image with projected fringe pattern; and (b) reconstructed face using the measurement of (a).
Table 1. Summary of measurement and reconstruction at a spatial frequency of 0.3 lp/mm.

| E (Lux) | Filter and Polarizer | Measurement | Reconstruction | Accuracy |
|---------|----------------------|-------------|----------------|----------|
| 0 | Yes | Figure 7a | Figure 7b | Moderate |
| 200 | No | Figure 6a | n/a | Unable to reconstruct |
| 200 | Yes | Figure 6b | Figure 6c | Low |
Table 2. Summary of measurement and reconstruction at a spatial frequency of 1.0 lp/mm.

| State | E (Lux) | Filter and Polarizer | Measurement | Reconstruction | Accuracy |
|-------|---------|----------------------|-------------|----------------|----------|
| Static | 0 | Yes | Figure 8a | Figure 8b | High |
| Static | 50 | Yes | Figure 8c | Figure 8d | High |
| Static | 175 | Yes | Figure 8e | Figure 8f | High |
| Dynamic | 50 | Yes | Figure 9a | Figure 9b | High |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Chen, B.; Li, H.; Yue, J.; Shi, P. Fourier-Transform-Based Surface Measurement and Reconstruction of Human Face Using the Projection of Monochromatic Structured Light. Sensors 2021, 21, 2529. https://doi.org/10.3390/s21072529

