Article

Adaptive Intensity Modulation for High Dynamic Range Target Measurement Based on Neighbourhood Diffusion

1
School of Electrical and Automation Engineering, East China Jiaotong University, Nanchang 330013, China
2
Shanghai Engineering Research Center of Ultra-Precision Optical Manufacturing, School of Information Science and Technology, Fudan University, Shanghai 200438, China
3
College of Intelligent Science and Technology, National University of Defense Technology, Changsha 410073, China
*
Author to whom correspondence should be addressed.
Photonics 2026, 13(2), 167; https://doi.org/10.3390/photonics13020167
Submission received: 19 January 2026 / Revised: 31 January 2026 / Accepted: 5 February 2026 / Published: 9 February 2026
(This article belongs to the Special Issue Recent Advances in Imaging and Non-Imaging Optical Technologies)

Abstract

Fringe projection profilometry has been widely adopted in various fields due to its non-contact nature, high accuracy, high speed, and full-field measurement capability. However, when measuring objects with highly reflective surfaces, saturation often occurs due to the limited dynamic range of the camera. To effectively address this issue, this paper proposes a novel adaptive fringe projection method. First, an intensity transfer model is established, which uses uniform grayscale images to compute surface reflectance coefficients and accurately determines the optimal projection intensity in the camera coordinate system. Subsequently, low-intensity orthogonal fringe patterns are employed to compute a smoothed absolute phase in saturated regions, establishing a coordinate mapping. The mapped pixel intensities are diffused into their neighborhoods, and the minimum value is taken in overlapping areas to generate an optimal projection intensity template in the projector coordinate system. Finally, adaptive fringe patterns are generated based on this template. Experimental results demonstrate that the proposed method achieves high-precision and high-completeness 3D measurement for objects with highly reflective surfaces.

1. Introduction

Fringe projection profilometry (FPP) is an active optical 3D shape measurement technique based on structured light and phase matching. Owing to its advantages of high speed and high accuracy, this method has been extensively applied across various fields [1,2,3]. In a traditional FPP system, pre-designed fringe patterns are first projected onto the surface of the object being measured. A camera then captures the deformed fringe patterns, which contain phase information modulated by the object’s surface shape [4,5]. Following phase extraction via phase decoding algorithms, the true 3D coordinates of the measured object’s surface can be calculated by integrating the phase information with pre-calibrated geometric parameters [6]. However, due to the limited dynamic range of imaging sensors, traditional FPP systems often suffer from pixel saturation when measuring objects with highly reflective surfaces [7,8]. Pixel saturation causes the loss of modulated object information, resulting in significant holes in the measured 3D point cloud within the saturated regions.
Therefore, to address the 3D measurement challenges in saturated regions caused by highly reflective surfaces, researchers have developed multiple approaches [9]. The first category is the multi-exposure method, which involves capturing a series of images under varying exposure times to obtain differently illuminated patterns. Subsequently, unsaturated intensity values at each pixel location are selected and fused to acquire an accurate phase map [10,11]. However, this method necessitates careful selection of an appropriate exposure range and requires the acquisition of multiple sets of fringe patterns, which significantly increases the total measurement time. Li et al. [12] proposed a method based on exposure map fusion, which effectively mitigates intensity saturation by acquiring multiple sets of fringe patterns under varying exposure conditions. The number of exposure maps and the fusion strategy in this method require empirical determination, and the process involves relatively complex frequency-domain operations. Feng et al. [13] implemented a reflectance-based grouping strategy for adaptive exposure control, thereby considerably improving the measurement capability for complex reflective surfaces, although at the cost of increased system complexity. The second category of approaches primarily focuses on enhancing image quality through the incorporation of additional hardware components. Salahieh et al. [14] introduced the Multi-polarization Fringe Projection (MPFP) technique, which employs polarizers to suppress specular reflections and effectively mitigate saturated regions. Sun and Luo [15] captured fringe patterns under different polarization angles to extract valid fringe information from non-saturated regions. These were then regionally fused with traditional non-polarized images to acquire complete and high-precision phase data.
The third category encompasses adaptive fringe projection (AFP). Liu et al. [16] proposed a method that locates saturated regions by projecting a sequence of uniform grayscale images and adaptively generates optimal projection intensities at the pixel level. However, this approach relies heavily on precise system calibration and pixel-wise reflectance estimation. Cai et al. [17] proposed a method that effectively suppresses imaging saturation on highly reflective surfaces by combining a grayscale correction function with an optimized mapping model. Yuan et al. [18] employed a binary search strategy to determine the optimal projection intensity in a pixel-wise manner, thereby effectively mitigating over-exposure. However, this pixel-level intensity optimization entails high computational complexity. Feng et al. [19] proposed an approach that determines a saturation threshold by integrating multiple mask images and computes the optimal projection grayscale through an interpolation-based prediction algorithm. However, the interpolation and search process involved is computationally intensive. The method of Zhang et al. [20] generates optimal fringe patterns based on uniform grayscale projection and Gray code mapping. A potential limitation is that it might overlook the projector’s nonlinearity, while the employed Gray codes are liable to decoding failures on highly reflective surfaces. Chen et al. [21] established a mapping between the captured intensity and the projected intensity using a polynomial fitting function. However, the accuracy and generalization capability of this polynomial model depend heavily on the quality of the calibration data. Wang and Yang [22] proposed a method combining inverse and adaptive fringe projection to avoid multi-exposure limitations. However, it requires high-precision pixel mapping and complex phase computation. Other methods have also been introduced to resolve issues arising during measurement. Zhang et al.
[23] employed a calibrated radiometric response model to fuse multiple fringe patterns into a single high-dynamic-range image. Sun and Zhang [24] addressed the inherent nonlinearity of projectors by proposing a joint gamma correction model for multi-color channels, which effectively compensates for color crosstalk errors. Other approaches include spray-coating [25] and fringe reflection methods [26]. In recent years, data-driven techniques have shown great potential; beyond supervised learning [27,28], untrained deep learning networks [29] and cycle generative adversarial frameworks [30] have been developed to robustly handle complex surface characteristics and large depth ranges. Furthermore, computational imaging techniques such as separable Hadamard single-pixel imaging [31] and angular Fourier slicing enhanced burst photography [32] have been explored to achieve high-resolution and high-dynamic-range reconstruction. For high-speed applications, spatiotemporal speckle projection based on VCSEL arrays [33] offers a new avenue for real-time 3D imaging. Additionally, to specifically address mutual reflections on shiny surfaces, spectral division multiplexing [34] has been proposed to physically isolate interference artifacts.
Despite the progress represented by the aforementioned methods in high-dynamic-range 3D measurement, certain inherent limitations persist. Existing adaptive projection techniques typically rely on a linearity assumption and fail to account for the effects of ambient light and projector nonlinearity. We propose an improved method that incorporates ambient light into an intensity transfer model for precise reflectance calculation, pre-processes patterns with gamma correction [24] to mitigate nonlinear effects, and employs a novel neighborhood diffusion strategy to locally modulate fringe intensity, thus effectively suppressing pixel saturation. The methodology begins by establishing an intensity transfer model that accounts for ambient light. A uniform grayscale sequence (0–180) is projected, and a grayscale detection program automatically identifies the highest non-saturated intensity level. After projection of a maximum-intensity (255) image, the model is used to compute the optimal intensity for saturated areas. Subsequently, low-intensity vertical and horizontal fringe patterns are projected to extract phase information within the saturated zones. Using a neighborhood diffusion strategy, a mapping relationship between camera and projector coordinates is established for pixels in saturated areas. This enables the determination of optimal projection intensities in the projector coordinate system, from which adaptive fringe patterns are generated. Comparative experiments with multiple existing methods demonstrate that the proposed technique acquires complete point cloud data from highly reflective surfaces while achieving high measurement accuracy.
The remainder of this paper is structured as follows: Section 2 elaborates on the principles of the proposed method, Section 3 presents the experimental results, Section 4 discusses the performance, limitations, and future directions of the proposed method, and Section 5 provides a concluding summary of the work.

2. Principle

A typical fringe projection measurement system consists of a computer, a projector, and a camera, with the system setup illustrated in Figure 1. Sinusoidal fringe patterns are projected onto the surface of the target object by the projector. The deformed fringe patterns modulated by the object surface are then captured by the camera. Subsequently, the phase information is computed by the computer [35]. Finally, by combining the calibration parameters of the monocular fringe projection system, the three-dimensional information of the object surface can be measured.

2.1. Four-Step Phase-Shifting Algorithm

The phase-shifting method provides accurate phase values and has been widely adopted in 3D shape measurement. In this paper, a four-step phase-shifting algorithm is employed for phase retrieval. The intensity expressions of the four fringe patterns captured by the camera are given as follows:
$$
\begin{aligned}
I_1(x, y) &= A(x, y) + B(x, y)\cos[\varphi(x, y)],\\
I_2(x, y) &= A(x, y) + B(x, y)\cos\!\left[\varphi(x, y) + \tfrac{\pi}{2}\right],\\
I_3(x, y) &= A(x, y) + B(x, y)\cos[\varphi(x, y) + \pi],\\
I_4(x, y) &= A(x, y) + B(x, y)\cos\!\left[\varphi(x, y) + \tfrac{3\pi}{2}\right],
\end{aligned} \tag{1}
$$
where (x, y) denotes the pixel coordinates in the camera coordinate system, A(x, y) and B(x, y) represent the average intensity and the modulation intensity, respectively, and φ(x, y) refers to the wrapped phase of the measured object. Based on Equation (1), the wrapped phase φ(x, y) can be derived as follows:
$$\varphi(x, y) = \arctan\frac{I_4(x, y) - I_2(x, y)}{I_1(x, y) - I_3(x, y)}. \tag{2}$$
The calculated wrapped phase φ(x, y) encodes depth-related information about the measured object, with its values constrained to the principal interval (−π, π] and exhibiting a periodic sawtooth-like distribution across the entire field of view. In this study, a multi-frequency temporal phase unwrapping algorithm is employed to unwrap the phase, thereby retrieving the continuous absolute phase.
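The four-step retrieval of Equations (1) and (2) can be sketched in a few lines of NumPy; the quadrant-aware arctan2 places the recovered phase in (−π, π], and the average intensity and modulation follow from the same four images (the A/B formulas are the standard four-step identities, not stated explicitly in the text):

```python
import numpy as np

def four_step_phase(I1, I2, I3, I4):
    """Recover wrapped phase, average intensity, and modulation intensity
    from four pi/2-shifted fringe images (Eqs. (1)-(2))."""
    phi = np.arctan2(I4 - I2, I1 - I3)      # wrapped phase in (-pi, pi]
    A = (I1 + I2 + I3 + I4) / 4.0           # average intensity A(x, y)
    B = 0.5 * np.hypot(I4 - I2, I1 - I3)    # modulation intensity B(x, y)
    return phi, A, B
```

Since I4 − I2 = 2B sin φ and I1 − I3 = 2B cos φ, the ratio cancels B exactly, which is why the retrieval is insensitive to reflectivity variations as long as no frame saturates.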

2.2. Adaptive Fringe Projection Method

In practical measurement scenarios, variations in surface reflectance characteristics of the object lead to differences in the camera’s imaging response. When projected fringe patterns illuminate areas with high reflectivity, localized saturation tends to occur in the captured images. To mitigate information loss caused by saturation, it is essential to dynamically adjust the projection intensity for each pixel based on its corresponding surface reflectance. The detailed procedure is illustrated in Figure 2.
The intensity transfer function of a projector deviates from linearity due to its inherent nonlinear response, which introduces errors in the calculated projection intensity. To ensure accuracy, it is necessary to compensate for this nonlinear effect. Following the multi-color channel gamma correction method described in Ref. [24], a pre-correction function can be derived to eliminate this nonlinearity. Specifically, this function characterizes the projector’s inherent gamma response, linking the input grayscale I_cg to the output intensity I_tg. By applying the inverse of this model, the mapping from the desired linear target intensity to the required input grayscale is established, thereby linearizing the projected light intensity and ensuring the accuracy of the subsequent intensity transfer model. For clarity in the subsequent discussion, we denote this pre-correction function as
$$f(I_{cg}) = I_{tg}, \tag{3}$$
where I_cg is the captured grayscale value, and I_tg is the target grayscale value to be set.
To adjust the projection intensity for saturated pixels, it is first necessary to identify the saturated regions. This is achieved by projecting a uniform pattern with a grayscale value of 255 onto the target object and capturing the resulting image using the camera. A binary mask is defined as a logical matrix. Threshold segmentation is performed according to Equation (4): a pixel is identified as saturated and marked accordingly if its intensity value I_C(x, y) in the captured image reaches or exceeds the saturation threshold t.
$$\mathrm{mask}(x, y) = \begin{cases} 1, & I_C(x, y) \ge t \\ 0, & \text{otherwise} \end{cases} \tag{4}$$
In this context, (x, y) denotes the pixel coordinates in the image captured by the camera. To account for the inherent noise of the camera sensor, the saturation threshold t is set to 250. This value incorporates a grayscale tolerance margin to prevent misclassification caused by noise interference. To determine the optimal projection intensity and analyze the influence of various illumination conditions, an intensity transfer model is established based on the optical path illustrated in Figure 1, which characterizes the relationship between the light intensity captured by the camera and that emitted by the projector. This relationship is expressed as
$$I_c(x, y) = \alpha t R(x, y)\left(I_p + \beta_1\right) + \alpha t \beta_2 + I_N, \tag{5}$$
where the camera sensitivity coefficient α and the exposure time t are intrinsic camera parameters. The term R(x, y) denotes the surface reflectivity of the object, while I_p represents the projected light intensity. Additionally, β_1 corresponds to the ambient light reflected by the object surface, and β_2 refers to the ambient light directly incident onto the camera. I_N signifies the sensor noise. By defining the optimal projection intensity as I_op and the optimal captured intensity as I_s, the above expression can be reformulated as Equation (6):
$$I_{op} = \frac{I_s - \alpha t R(x, y)\,\beta_1 - \alpha t \beta_2 - I_N}{\alpha t R(x, y)}. \tag{6}$$
To account for the effects of ambient illumination and sensor noise, a uniform pattern with zero intensity was projected onto the object surface after being processed by the pre-correction function. The corresponding intensity captured by the camera, which serves as the baseline intensity, can be expressed as
$$I_{c0}(x, y) = \alpha t R(x, y)\,\beta_1 + \alpha t \beta_2 + I_N. \tag{7}$$
Here, the surface reflectance coefficient of the object is defined as ρ(x, y) = αtR(x, y). This definition allows Equation (6) to be simplified into Equation (8):
$$I_{op} = \frac{I_s - I_{c0}(x, y)}{\rho(x, y)}, \tag{8}$$
where the optimal captured intensity I_s is set to 240 to maintain a high signal-to-noise ratio while minimizing the risk of pixel saturation. To compute ρ(x, y), a uniform grayscale image sequence ranging from 0 to 180, in increments of 20, is projected; this range is empirically optimized for highly reflective surfaces. Since saturation on such surfaces typically occurs at low illumination levels, extending the scan range to 255 provides negligible benefit for identifying the non-saturation threshold of shiny regions while unnecessarily increasing the number of projected patterns and the measurement time. The grayscale detection program automatically identifies the highest non-saturated intensity level. This intensity is referred to as the low grayscale intensity I_G, and the corresponding grayscale value captured by the camera is designated I_c1(x, y). The surface reflectance coefficient ρ(x, y) can then be expressed as
$$\rho(x, y) = \frac{I_{c1}(x, y) - I_{c0}(x, y)}{I_G}. \tag{9}$$
Then, the optimal projection intensity in the camera coordinate system can be expressed as
$$I_{op} = I_G\,\frac{I_s - I_{c0}(x, y)}{I_{c1}(x, y) - I_{c0}(x, y)}. \tag{10}$$
Owing to the nonlinear gamma response of the projector, a discrepancy exists between the intended adaptive projection intensity and the actual output intensity. Therefore, by applying the inverse of the pre-correction function f(·), the actual projection intensity required to achieve the desired intensity I_G is given by f⁻¹(I_G). Consequently, the final optimal projection intensity is formulated as
I o p = f f 1 ( I G ) I s I c 0 x , y I c 1 x , y I c 0 x , y .

2.3. Coordinate Mapping

Following the determination of the optimal projection intensity in the camera coordinate system, it is also necessary to ascertain the corresponding optimal intensity in the projector coordinate system. This is achieved by projecting two sets of orthogonal sinusoidal fringe patterns—one horizontal and one vertical—to compute the absolute phase along both directions. This process establishes a mapping relationship between the camera and projector coordinate systems. For any camera pixel with coordinates (x_c, y_c), the corresponding projector pixel coordinates (u_p, v_p) can be determined using Equation (12):
$$u_p = \frac{\Phi_v(x_c, y_c)\,W}{2\pi T}, \qquad v_p = \frac{\Phi_h(x_c, y_c)\,H}{2\pi T}, \tag{12}$$
where Φ_v(x_c, y_c) and Φ_h(x_c, y_c) represent the absolute phases obtained from the vertically and horizontally projected fringe patterns, respectively. The width and height of the projected fringe patterns are denoted as W and H, while T indicates the maximum number of fringe periods.
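Equation (12) amounts to rescaling each absolute phase by the pattern resolution and rounding to the nearest projector pixel. A minimal sketch, where the projector resolution matches the 1920 × 1080 device used later and T = 32 periods is an illustrative choice:

```python
import numpy as np

def camera_to_projector(Phi_v, Phi_h, W=1920, H=1080, T=32):
    """Map camera pixels to integer projector coordinates via Eq. (12).

    Phi_v, Phi_h: absolute phases from vertical/horizontal fringe sets.
    W, H: projector pattern width and height; T: number of fringe periods
    (T = 32 is an assumed example value, not taken from the text).
    """
    u_p = np.rint(Phi_v * W / (2.0 * np.pi * T)).astype(int)
    v_p = np.rint(Phi_h * H / (2.0 * np.pi * T)).astype(int)
    return u_p, v_p
```

The mapping inverts exactly for phases generated from known projector columns and rows, which is the self-consistency the coordinate mapping relies on.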
In fringe projection 3D measurement, the inherent pixel resolution disparity between the camera and the projector makes it difficult to establish a strict one-to-one pixel correspondence. To ensure complete coverage of saturated regions by the mapped areas, this study improves upon the traditional single-pixel mapping approach by proposing a neighborhood diffusion strategy. Specifically, after mapping pixels from the camera coordinate system to the projector coordinate system, the intensity value associated with each pixel is diffused into its surrounding 3 × 3 neighborhood. This effectively expands a single pixel into a 3 × 3 mapping region, thereby enhancing the coverage area. To adequately suppress highlight effects in saturated regions, for any overlapping pixels within these 3 × 3 mapping zones originating from different camera pixels, the minimum value among all contributing mapping regions is selected. This minimization strategy is adopted to prioritize the suppression of saturation. Since the overlapping region may correspond to object surface points with varying reflectivities, selecting the minimum intensity ensures that the projected light is sufficiently attenuated to prevent overexposure even at the most reflective point within that neighborhood. In contrast, using an average or maximum value would likely fail to eliminate saturation in these critical high-reflectivity areas. The mapping effect is shown in Figure 3. It is worth noting that while diffusion strategies (such as the 2 × 2 neighborhood used in Ref. [17]) have been explored, the proposed 3 × 3 neighborhood diffusion offers distinct advantages. First, the odd-sized 3 × 3 kernel possesses a strict geometric center, ensuring centrosymmetric intensity modulation without introducing the sub-pixel coordinate shifts inherent in even-sized kernels. Second, the expanded 3 × 3 coverage creates a more robust safety margin around saturated pixels. 
When combined with the minimization strategy for overlapping regions, this larger footprint effectively compensates for potential sub-pixel calibration errors or resolution mismatches between the camera and projector, ensuring that the high-reflectivity points are completely enveloped by the optimal projection intensity. Through coordinate mapping, the optimal projection intensity I_op defined in the camera coordinate system is transformed into the corresponding optimal intensity I_op^p in the projector coordinate system. This resulting intensity distribution is subsequently utilized for generating adaptive fringe patterns.
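The diffusion-with-minimum rule described above can be sketched directly; function and argument names are illustrative, and the loop form favors clarity over speed:

```python
import numpy as np

def diffuse_to_projector(I_op, u_p, v_p, mask, proj_shape=(1080, 1920)):
    """Neighbourhood diffusion of camera-side optimal intensities.

    Each saturated camera pixel (mask == True) writes its optimal intensity
    into the 3x3 projector neighbourhood around its mapped (u_p, v_p)
    coordinate; overlapping writes keep the minimum, and all untouched
    projector pixels remain at the maximum value 255.
    """
    Hp, Wp = proj_shape
    I_op_p = np.full(proj_shape, 255.0)
    for y, x in zip(*np.nonzero(mask)):
        u, v = u_p[y, x], v_p[y, x]
        for dv in (-1, 0, 1):            # 3x3 neighbourhood rows
            for du in (-1, 0, 1):        # 3x3 neighbourhood columns
                vv, uu = v + dv, u + du
                if 0 <= vv < Hp and 0 <= uu < Wp:
                    I_op_p[vv, uu] = min(I_op_p[vv, uu], I_op[y, x])
    return I_op_p
```

Taking the minimum in overlaps guarantees that the most reflective contributor dictates the attenuation, mirroring the rationale given above.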
A potential concern with using low-intensity patterns is the reduction in the signal-to-noise ratio (SNR). However, it is crucial to note that this low projection intensity is specifically targeted at highly reflective regions. Due to the high surface reflectance, the captured intensity in these areas remains sufficiently strong to maintain a high SNR and sinusoidal quality. Furthermore, the saturation mask defined in Equation (4) is applied to filter out low-reflectivity background areas where SNR would indeed be compromised. As evidenced in the measurement process of the metal sheet, the retrieved absolute phase within the targeted saturated regions exhibits a smooth and continuous distribution, confirming the reliability of the phase data for coordinate mapping.

2.4. Generation of Adaptive Fringe Patterns

For any saturated pixel (x_0, y_0) in the camera image where mask(x_0, y_0) = 1, the corresponding pixel (u_0, v_0) in the projector coordinate system must be identified and its intensity adjusted to mitigate overexposure. The coordinates (u_0, v_0) are computed using Equation (12). The intensity value at (u_0, v_0) in the optimal projector-side intensity map I_op^p is assigned the value of the camera-side optimal intensity I_op at (x_0, y_0), and this value is subsequently diffused into its 8-neighborhood. In cases of overlapping regions resulting from multiple saturated pixels, the minimum intensity value among all candidates is selected. After all saturated pixels have been processed, the remaining pixels in I_op^p are set to the maximum intensity value of 255. The procedure requires the projection of 35 images in total: 11 uniform grayscale images for reflectance estimation and saturation-area identification (only two of which are ultimately used to determine the optimal projection intensity) and 24 low-intensity orthogonal sinusoidal fringe patterns for absolute phase calculation and coordinate mapping. Finally, the adaptive fringe patterns are generated from the refined intensity map I_op^p using Equation (13):
$$I_{AFP}(x, y) = \frac{I_{op}^{p}}{255}\left[A(x, y) + B(x, y)\cos\!\left(\varphi(x, y) + \frac{2k\pi}{N}\right)\right], \tag{13}$$
where A(x, y) and B(x, y) represent the average intensity and the modulation intensity, respectively, φ(x, y) denotes the wrapped phase, and k = 0, 1, …, N − 1 indexes the phase-shifting steps. The fringe patterns are subsequently projected onto the target object’s surface, and the deformed fringes captured by the camera are processed to reconstruct the three-dimensional shape.
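Equation (13) scales an ordinary phase-shifted carrier by the per-pixel attenuation map. A minimal sketch with vertical fringes; the period count and A = B = 127.5 are illustrative assumptions, not values from the text:

```python
import numpy as np

def adaptive_fringes(I_op_p, N=4, periods=32, A=127.5, B=127.5):
    """Generate N phase-shifted adaptive fringe patterns via Eq. (13).

    I_op_p: projector-side optimal intensity map (255 where unmodified).
    Vertical fringes with an assumed period count; A and B are the
    average and modulation intensities of the undimmed carrier.
    """
    Hp, Wp = I_op_p.shape
    phi = 2.0 * np.pi * periods * np.arange(Wp) / Wp   # carrier phase per column
    scale = I_op_p / 255.0                             # per-pixel attenuation
    return [np.clip(scale * (A + B * np.cos(phi[None, :] + 2.0 * np.pi * k / N)),
                    0.0, 255.0)
            for k in range(N)]
```

Because the attenuation multiplies both A and B, the fringe contrast is preserved locally while the absolute brightness is reduced exactly where saturation would otherwise occur.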
To evaluate the point cloud integrity (PCI) achieved by the proposed method, a metric is introduced as defined in Equation (14), where n denotes the number of actually measured points and N represents the theoretical number of measured points [36]:
$$PCI = \frac{n}{N} \times 100\%. \tag{14}$$
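Equation (14) is a simple ratio; a one-line helper makes the later quantitative comparisons reproducible:

```python
def point_cloud_integrity(n_measured, n_theoretical):
    """Point cloud integrity (Eq. (14)) as a percentage."""
    return 100.0 * n_measured / n_theoretical
```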

3. Experiments and Results

To validate the feasibility of our method, an FPP system was constructed as illustrated in Figure 4. The system comprises a laptop computer, a CMOS camera (Daheng MER-132-43U3C, Daheng Imaging, Beijing, China) with a resolution of 1292 × 964 pixels, and a 3LCD projector (Epson CB-FH06, Seiko Epson Corporation, Suwa, Nagano, Japan) with a resolution of 1920 × 1080 pixels. Prior to measurement, the system was calibrated using a toolbox [37] implemented in MATLAB R2021b.

3.1. Experimental Validation

For the experiments, a highly reflective metal sheet was selected as the test object. A sequence of uniform grayscale images with intensity values ranging from 0 to 180, at increments of 20, was successively projected onto the target surface. A grayscale detection procedure was applied to the captured image sequence to automatically determine the highest projection intensity that did not induce saturation. The process of obtaining I_G is illustrated in Figure 5.
Subsequently, a uniform pattern with the maximum grayscale value of 255 was projected, and the resulting image was captured to identify saturated regions. These saturated areas were segmented using the thresholding criterion defined in Equation (4), and the resulting segmentation is illustrated in Figure 6b.
The grayscale image sequence was systematically analyzed. The captured image corresponding to the highest projection intensity that did not induce pixel saturation, along with the image obtained under uniform projection with zero grayscale value, were selected for processing. The surface reflectance coefficients of the object were computed using Equation (9), based on which the optimal projection intensity for saturated regions was derived via Equation (11). The resulting optimal projection intensity in the camera coordinate system is illustrated in Figure 6e.
The metal sheet was measured with both high and low projection intensities for comparative analysis. The high-intensity patterns employed a maximum grayscale value of 255, while the low-intensity patterns used the same intensity as that applied in Figure 6a. Traditional measurement methods were applied to the metal surface, with the results illustrated in Figure 7. As shown in Figure 7a,b, the measurement results based on high-intensity fringe patterns exhibit significant saturation in highly reflective regions, leading to substantial point cloud loss. In contrast, the results obtained with low-intensity patterns, shown in Figure 7c,d, successfully avoid saturation and preserve point cloud completeness in previously overexposed areas. However, due to insufficient modulation depth of the fringes captured in darker regions, point cloud loss still occurs in underexposed areas.
Given that the selected low-intensity fringe pattern (of intensity I_G) avoids saturation and enables accurate point cloud reconstruction in critical regions, a pair of orthogonal low-intensity pattern sets can be employed to establish a coordinate mapping within these areas. Sets of vertical and horizontal low-intensity fringe patterns were projected and captured by the camera. The resulting absolute phase maps in the vertical and horizontal directions are shown in Figure 8b,e, respectively. The phase values in saturated regions exhibit smooth and continuous distributions, which contributes to the accuracy of the coordinate mapping. Subsequently, the phase maps were multiplied by the saturation mask to eliminate low-quality absolute phase pixels, thereby improving the efficiency of the mapping process. The results after multiplication are presented in Figure 8c,f.
Subsequently, the coordinate mapping between the camera and the projector image coordinate systems within the saturated regions was established using Equation (12). Based on the previously obtained optimal projection intensity I_op in the camera coordinate system, the corresponding optimal intensity I_op^p in the projector coordinate system was derived. Adaptive fringe patterns were then generated and projected onto the target object, and the resulting images were captured by the camera.
The measurement result of our method is shown in Figure 9. As seen in the image captured by the camera, pixel saturation has been effectively suppressed in the red rectangular region. The unwrapped phase shown in Figure 9c exhibits a continuous and smooth distribution without phase discontinuities, so the 3D point cloud of this region can be calculated. The point cloud is complete even in highly reflective regions, while areas with lower reflectivity maintain a high signal-to-noise ratio, demonstrating the effectiveness of our method.
To validate the effectiveness of the proposed method in phase calculation for highly reflective regions, the absolute phase decoded using a 20-step phase-shifting algorithm [38] was employed as the ground truth for evaluating the phase errors of both the traditional and proposed methods. Figure 10a highlights the region of interest (marked in red) used for comparative analysis. Figure 10b illustrates the phase error within this region, with the blue solid line representing the phase error of the original system and the red solid line corresponding to the proposed approach. The results demonstrate that the proposed method effectively reduces phase decoding errors caused by saturation.
To further validate the capability of the proposed method in reconstructing highly reflective surfaces, a metal wrench was measured using both the proposed and traditional approaches. A comparative analysis of the results is presented in Figure 11. Figure 11d,h compare the measurement results of the traditional FPP method and the proposed approach. The results demonstrate that the proposed method effectively suppresses image saturation caused by high reflectivity and enables superior measurement outcomes.

3.2. Experimental Evaluation

To further validate the effectiveness and robustness of the proposed method, a computer mouse with a smooth surface and a metal bracket with complex geometry were selected as measurement objects, as illustrated in Figure 12a,b. Figure 13 presents a comparative analysis of the measurement results obtained by the conventional FPP method, the method by Feng et al. [19], the method by Cai et al. [17], and the proposed approach. Figure 13a–d display the measured point clouds of the mouse along with enlarged views of the regions marked by red rectangles, while Figure 13e–h show the corresponding results for the metal bracket. It can be observed that the point cloud generated by the proposed method is more complete and exhibits better continuity in complex and detailed areas. Table 1 summarizes the quantitative comparison of the Number of Point Clouds (NPC) and the Point Cloud Integrity (PCI) within the annotated regions for all evaluated methods. The results demonstrate that the proposed method achieves significantly superior 3D point cloud quality under the PCI metric, for both the smooth and the complex surfaces, thereby thoroughly validating its effectiveness and robustness.
To verify the measurement accuracy of the proposed method, we employed a precision-machined stepped metal block as the measurement object, as shown in Figure 12c. For comparison, the multi-exposure method and the 20-step phase-shifting algorithm were employed alongside the proposed approach. The measurement results obtained by the three methods are presented in Figure 14a–c. The metal block comprises four steps; the measured 3D point cloud of each step was fitted to a plane, and the distance between adjacent fitted planes was calculated as the measured step height. The fitting results are shown in Figure 14d–f. Figure 14g–i illustrate the measured distances between adjacent planes. Using the heights measured by a coordinate measuring machine (CMM) as the reference, the absolute errors of the measured step heights for each method are summarized in Table 2. The multi-exposure technique requires 12 images per exposure level across 6 distinct exposure settings, resulting in a total of 72 acquired images. The 20-step phase-shifting method uses 20 phase shifts with 3 distinct fringe frequencies, leading to 60 images in total. In contrast, the proposed method requires only 39 images to complete the measurement. Experimental results demonstrate that the proposed method achieves higher measurement accuracy while requiring fewer projected patterns compared to both the multi-exposure technique [11] and the 20-step phase-shifting algorithm [38].
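The plane-fitting step-height evaluation can be sketched as follows: fit a least-squares plane to each step's point cloud via SVD, then project the centroid offset onto the shared normal direction. This is a minimal illustration with synthetic planar patches, not the authors' processing pipeline; `fit_plane` and `step_height` are hypothetical helper names.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a point cloud: returns (unit normal, centroid)."""
    c = points.mean(axis=0)
    # The right singular vector with the smallest singular value is the normal.
    _, _, vt = np.linalg.svd(points - c)
    return vt[-1], c

def step_height(pts_a, pts_b):
    """Distance between two nominally parallel fitted planes:
    project the centroid offset onto the averaged normal direction."""
    n_a, c_a = fit_plane(pts_a)
    n_b, c_b = fit_plane(pts_b)
    if np.dot(n_a, n_b) < 0:      # SVD normals have an arbitrary sign; align them
        n_b = -n_b
    n = n_a + n_b
    n /= np.linalg.norm(n)
    return abs(np.dot(c_b - c_a, n))

# Synthetic example: two flat patches 5.063 mm apart (the Step 1-2 CMM value).
rng = np.random.default_rng(0)
plane1 = np.c_[rng.uniform(0, 10, size=(500, 2)), np.zeros(500)]
plane2 = np.c_[rng.uniform(0, 10, size=(500, 2)), np.full(500, 5.063)]
print(round(step_height(plane1, plane2), 3))  # -> 5.063
```

On real point clouds the patches carry measurement noise, so the fitted-plane distance, not a per-point difference, is what Table 2 compares against the CMM reference.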

4. Discussion

This study demonstrates that, by constructing an intensity transfer model that incorporates ambient light, introducing gamma-correction preprocessing, and adopting a neighborhood diffusion strategy, pixel saturation caused by surface over-reflection can be effectively suppressed, enabling complete and high-precision 3D reconstruction of highly reflective objects. The experimental results validate this hypothesis. The proposed method significantly improves point cloud completeness and measurement accuracy across various highly reflective specimens, including metal sheets, wrenches, smooth mouse surfaces, complex-geometry metal brackets, and stepped metal blocks. Compared with traditional multi-exposure techniques, the proposed method requires only a single projection cycle, substantially reducing the number of captured images and the total measurement time, while avoiding the exposure-range selection and image-fusion complexities of multi-exposure methods. In contrast to methods relying on additional hardware such as polarizers, our approach eliminates the need for extra optical components, thereby reducing system complexity and cost. Compared with adaptive projection methods based on pixel-level intensity optimization, the neighborhood diffusion strategy employed in this work ensures effective coverage of saturated regions while lowering computational complexity, enhancing both practicality and operational efficiency.
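One plausible linearized form of the intensity transfer model described above can be sketched as follows. This is an assumption-laden illustration, not the paper's exact formulation (which also includes gamma correction): captured intensity is modeled as I = k·P + I0, where P is the projected gray level, I0 the ambient term captured with the projector dark (Figure 6d), and k a per-pixel reflectance coefficient estimated from a low, unsaturated level G (Figure 6c). The function name, the target level `i_target`, and the toy values are all hypothetical.

```python
import numpy as np

def optimal_projection_intensity(i_g, i_0, g, i_target=200.0):
    """Per-pixel drive level under the assumed linear model I = k*P + I0."""
    k = (i_g - i_0) / g                        # per-pixel reflectance coefficient
    p_op = (i_target - i_0) / np.maximum(k, 1e-6)
    return np.clip(p_op, 0, 255)               # respect the 8-bit projector range

# Toy example: a highly reflective pixel (large k) receives a low drive level,
# while a dimmer pixel saturates the clip and keeps full intensity.
i_g = np.array([90.0, 40.0])   # captured under projection level G = 60
i_0 = np.array([10.0, 10.0])   # captured with the projector off (ambient only)
print(optimal_projection_intensity(i_g, i_0, g=60.0))
```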
The findings of this research offer broad applicability. In fields such as industrial inspection, reverse engineering, and quality control, 3D measurement of highly reflective surfaces has long posed a challenge. The proposed method provides a feasible software-based solution for rapid, complete, and high-precision measurement of such surfaces. Furthermore, the integration of ambient light modeling and nonlinear correction offers valuable insights for other structured-light-based high dynamic range measurement techniques. However, the method still has certain limitations. Currently, it relies on multi-image acquisition of static scenes and is not readily applicable to dynamic or real-time measurement environments. The method is also limited with respect to specific surface characteristics. For highly absorptive objects (e.g., black materials), the signal-to-noise ratio (SNR) of the captured fringes decreases significantly. While our method optimizes intensity to suppress saturation, it inherently prioritizes preventing overexposure. Consequently, in regions where highly absorptive and highly reflective surfaces coexist within a neighborhood, the diffusion strategy, which selects the minimum intensity, may leave the absorptive areas insufficiently illuminated, leading to phase errors due to low modulation. Regarding sharp edges and high-frequency surface discontinuities, the neighborhood diffusion strategy assumes a degree of local spatial continuity. At sharp edges where the phase changes abruptly, the diffused intensity values may mismatch the actual surface reflectance, potentially introducing artifacts or smoothing effects into the reconstructed phase map. Future research may be pursued in the following directions. Firstly, exploring more efficient phase extraction and mapping strategies to reduce the number of required images and further enhance measurement speed.
Secondly, developing adaptive projection techniques for dynamic scenes, combining real-time reflectance estimation with projection modulation to achieve three-dimensional measurement of highly reflective objects in motion. Finally, considering the rapid advancement of artificial intelligence, the integration of deep learning with adaptive fringe projection presents a promising avenue. AI algorithms, particularly untrained neural networks [29] and generative adversarial frameworks [30], have demonstrated significant potential in end-to-end phase retrieval and saturation correction. While our current method relies on a physical intensity transfer model to ensure accuracy, data-driven approaches could potentially accelerate the estimation of surface reflectance from a single image, bypassing the need for image sequences. Future research could explore a hybrid framework that combines the interpretability of our physical model with the inference speed of deep learning, aiming for real-time high-dynamic-range 3D measurement.
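The neighborhood diffusion step discussed above can be sketched as a sparse minimum filter: each mapped projector pixel's optimal intensity is spread over a (2r+1)×(2r+1) neighborhood, overlaps resolve to the minimum so no covered pixel is driven above its safe level, and unmapped pixels default to a base intensity. This is a conceptual sketch under assumed names and parameters (`radius`, `base`), not the authors' code.

```python
import numpy as np

def diffuse_min(intensity_map, valid_mask, radius=1, base=255.0):
    """Spread each valid pixel's intensity over its neighborhood, keeping the
    minimum where diffused regions overlap (order-independent)."""
    h, w = intensity_map.shape
    out = np.full((h, w), base)
    for y, x in zip(*np.nonzero(valid_mask)):
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        x0, x1 = max(0, x - radius), min(w, x + radius + 1)
        patch = out[y0:y1, x0:x1]
        np.minimum(patch, intensity_map[y, x], out=patch)
    return out

m = np.zeros((5, 5))
mask = np.zeros((5, 5), dtype=bool)
m[1, 1], m[1, 3] = 120.0, 80.0       # two mapped pixels with overlapping halos
mask[1, 1] = mask[1, 3] = True
print(diffuse_min(m, mask))          # the overlap column resolves to 80
```

This min-over-neighborhood behavior is exactly the trade-off noted above: it guarantees saturation suppression in overlaps, at the cost of possible under-illumination of adjacent darker regions.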

5. Conclusions

This paper proposes a novel approach for 3D measurement of highly reflective surfaces based on adaptive regional projection intensity. The method uses an intensity transfer model to determine optimal projection intensities for saturated pixels. A neighborhood diffusion strategy propagates these values and selects the minimum intensity in overlapping regions, generating an optimal projection template for accurate 3D measurement of highly reflective surfaces. The entire measurement is conducted in a single, continuous projection cycle, which is completed within approximately 1 min and 20 s, without requiring any interruption or reconfiguration of system parameters. Experimental results demonstrate that the proposed method effectively mitigates saturation on highly reflective objects such as metal sheets and wrenches. In the measurement of a smooth mouse surface, our approach achieved a PCI of 99.44% within the selected region, outperforming other comparative methods. Furthermore, it exhibited smaller height measurement errors on a stepped metal block compared to existing techniques. However, a limitation of this approach is its reliance on static images to determine the optimal projection intensity, making it unsuitable for dynamic environments. Additionally, the requirement to project horizontal and vertical fringe patterns restricts measurement speed, currently preventing real-time performance. Future work will focus on achieving real-time 3D measurement of highly reflective surfaces in dynamic scenarios.

Author Contributions

Conceptualization, X.S. and K.Z.; methodology, K.Z.; software, X.S.; validation, K.Z., X.S. and Z.L.; formal analysis, Y.Z.; investigation, X.P.; resources, X.S.; data curation, J.Z.; writing—original draft preparation, K.Z.; writing—review and editing, X.S.; visualization, L.K.; supervision, X.S.; project administration, X.S. and L.K.; funding acquisition, X.S. and X.P. All authors have read and agreed to the published version of the manuscript.

Funding

National Natural Science Foundation of China (52405591); Jiangxi Provincial Natural Science Foundation (20224BAB214053, 20242BAB25104); The National Key Laboratory Foundation (SKLLIM-G-2502); Natural Science Foundation of Hunan Province (2024JJ6460); National Natural Science Foundation of China (52305594); Research Startup Foundation for Faculty at East China Jiaotong University (558).

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Yang, S.; Wu, G.; Wu, Y.; Yan, J.; Luo, H.; Zhang, Y.; Liu, F. High-accuracy high-speed unconstrained fringe projection profilometry of 3D measurement. Opt. Laser Technol. 2020, 125, 106063.
2. Zhang, L.; Chen, Q.; Zuo, C.; Tao, T.; Zhang, Y.; Feng, S. High-dynamic-range 3D shape measurement based on time domain superposition. Meas. Sci. Technol. 2019, 30, 065004.
3. Qian, J.; Feng, S.; Xu, M.; Tao, T.; Shang, Y.; Chen, Q.; Zuo, C. High-resolution real-time 360° 3D surface defect inspection with fringe projection profilometry. Opt. Lasers Eng. 2021, 137, 106382.
4. Sun, X.; Kong, L.; Wang, X.; Peng, X.; Dong, G. Lights off the image: Highlight suppression for single texture-rich images in optical inspection based on wavelet transform and fusion strategy. Photonics 2024, 11, 623.
5. Wang, J.; Zhang, Z.; Leach, R.K.; Lu, W.; Xu, J. Predistorting projected fringes for high-accuracy 3-D phase mapping in fringe projection profilometry. IEEE Trans. Instrum. Meas. 2021, 29, 5008209.
6. Feng, S.; Zuo, C.; Zhang, L.; Tao, T.; Hu, Y.; Yin, W.; Qian, J.; Chen, Q. Calibration of fringe projection profilometry: A comparative review. Opt. Lasers Eng. 2021, 143, 106622.
7. Zhang, L.; Chen, Q.; Zuo, C.; Feng, S. Real-time high dynamic range 3D measurement using fringe projection. Opt. Express 2020, 28, 24363–24378.
8. Liu, Y.; Fu, Y.; Zhuan, Y.; Zhong, K.; Guan, B. High dynamic range real-time 3D measurement based on Fourier transform profilometry. Opt. Laser Technol. 2021, 138, 106833.
9. Zuo, C.; Feng, S.; Huang, L.; Tao, T.; Yin, W.; Chen, Q. Phase shifting algorithms for fringe projection profilometry: A review. Opt. Lasers Eng. 2018, 109, 23–59.
10. Wang, J.; Yang, Y. High-speed three-dimensional measurement technique for object surface with a large range of reflectivity variations. Appl. Opt. 2018, 57, 9172–9182.
11. Zhang, S.; Yau, S.T. High dynamic range scanning technique. Opt. Eng. 2009, 48, 033604.
12. Li, J.; Guan, J.; Chen, X.; Le, X.; Xi, J. Exposure map fusion for precise 3-D reconstruction of high dynamic range surfaces. IEEE Trans. Instrum. Meas. 2022, 71, 5022911.
13. Feng, S.; Zhang, Y.; Chen, Q.; Zuo, C.; Li, R.; Shen, G. General solution for high dynamic range three-dimensional shape measurement using the fringe projection technique. Opt. Lasers Eng. 2014, 59, 56–71.
14. Salahieh, B.; Chen, Z.; Rodriguez, J.J.; Liang, R. Multi-polarization fringe projection imaging for high dynamic range objects. Opt. Express 2014, 22, 10064–10071.
15. Sun, X.; Luo, Z.; Wang, S.; Wang, J.; Zhang, Y.; Zou, D. A Simple Polarization-Based Fringe Projection Profilometry Method for Three-Dimensional Reconstruction of High-Dynamic-Range Surfaces. Photonics 2024, 12, 27.
16. Liu, Y.; Fu, Y.; Cai, X.; Zhong, K.; Guan, B. A novel high dynamic range 3D measurement method based on adaptive fringe projection technique. Opt. Lasers Eng. 2020, 128, 106004.
17. Cai, X.; Xu, R.; Li, H.; Wang, Y.; Lu, R. High-reflective surfaces shape measurement technology based on adaptive fringe projection. Sens. Actuators A Phys. 2022, 347, 113916.
18. Yuan, H.; Li, Y.; Zhao, J.; Zhang, L.; Li, W.; Huang, Y.; Gao, X.; Xie, Q. An adaptive fringe projection method for 3D measurement with high-reflective surfaces. Opt. Laser Technol. 2024, 170, 110062.
19. Feng, W.; Tang, S.; Zhao, X.; Sun, G.; Zhao, D. Adaptive fringe projection for 3D shape measurement with large reflectivity variations by using image fusion and predicted search. Int. J. Opt. 2020, 2020, 4876876.
20. Zhang, S.; Yang, Y.; Shi, W.; Feng, L.; Jiao, L. 3D shape measurement method for high-reflection surface based on fringe projection. Appl. Opt. 2021, 60, 10555–10563.
21. Chen, C.; Gao, N.; Wang, X.; Zhang, Z. Adaptive projection intensity adjustment for avoiding saturation in three-dimensional shape measurement. Opt. Commun. 2018, 410, 694–702.
22. Wang, J.; Yang, Y. A new method for high dynamic range 3D measurement combining adaptive fringe projection and original-inverse fringe projection. Opt. Lasers Eng. 2023, 163, 107490.
23. Zhang, L.; Chen, Q.; Zuo, C.; Feng, S. High dynamic range 3D shape measurement based on the intensity response function of a camera. Appl. Opt. 2018, 57, 1378–1386.
24. Sun, X.; Zhang, Y.; Kong, L.; Peng, X.; Luo, Z.; Shi, J.; Tian, L. Multi-color channel gamma correction in fringe projection profilometry. Photonics 2025, 12, 74.
25. Palousek, D.; Omasta, M.; Koutny, D.; Bednar, J.; Koutecky, T.; Dokoupil, F. Effect of matte coating on 3D optical measurement accuracy. Opt. Mater. 2015, 40, 1–9.
26. Bothe, T.; Li, W.; von Kopylow, C.; Juptner, W.P. High-resolution 3D shape measurement on specular surfaces by fringe reflection. Opt. Metrol. Prod. Eng. 2004, 5457, 411–422.
27. Zhang, L.; Chen, Q.; Zuo, C.; Feng, S. High-speed high dynamic range 3D shape measurement based on deep learning. Opt. Lasers Eng. 2020, 134, 106245.
28. Liu, H.; Yan, N.; Shao, B.; Yuan, S.; Zhang, X. Deep learning in fringe projection: A review. Neurocomputing 2024, 581, 127493.
29. Yu, H.; Han, B.; Bai, L.; Zheng, D.; Han, J. Untrained deep learning-based fringe projection profilometry. APL Photonics 2022, 7, 016102.
30. Tan, J.; Liu, J.; Wang, X.; He, Z.; Su, W.; Huang, T.; Xie, S. Large depth range binary-focusing projection 3D shape reconstruction via unpaired data learning. Opt. Lasers Eng. 2024, 181, 108442.
31. Su, W.; Tan, J.; He, Z.; Lin, Z.; Liang, H. Separable Hadamard single-pixel imaging for high-resolution reconstruction. Opt. Lett. 2025, 50, 7468–7471.
32. Ni, Y.; Yu, Z.; Fu, S.; Meng, Z.; Gao, N.; Zhang, Z. High dynamic range three-dimensional imaging via angular Fourier slicing enhanced burst photography. Opt. Laser Technol. 2025, 192, 113955.
33. Yin, W.; Zhao, H.; Ji, Y.; Deng, Z.; Jin, Z.; Feng, S.; Zhang, X.; Wang, H.; Chen, Q.; Zuo, C. High-resolution, wide-field-of-view, and real-time 3D imaging based on spatial-temporal speckle projection profilometry with a VCSEL projector array. ACS Photonics 2024, 11, 498–511.
34. Tan, J.; Zou, T.; Chen, H.; Su, W.; He, Z. Spectral division multiplexing-based region projection for 3D measurement under mutual reflection. Opt. Lett. 2026, 51, 488–491.
35. Feng, S.; Chen, Q.; Zuo, C.; Asundi, A. Fast three-dimensional measurements for dynamic scenes with shiny surfaces. Opt. Commun. 2017, 382, 18–27.
36. Sun, J.; Zhang, Q. A 3D shape measurement method for high-reflective surface based on accurate adaptive fringe projection. Opt. Lasers Eng. 2022, 153, 106994.
37. Fetić, A.; Jurić, D.; Osmanković, D. The procedure of a camera calibration using Camera Calibration Toolbox for MATLAB. In Proceedings of the 35th International Convention MIPRO, Opatija, Croatia, 21–25 May 2012; pp. 1752–1757.
38. Chen, B.; Zhang, S. High-quality 3D shape measurement using saturated fringe patterns. Opt. Lasers Eng. 2016, 87, 83–89.
Figure 1. Schematic diagram of the 3D measurement system.
Figure 2. Flowchart of the proposed method.
Figure 3. Schematic diagram of the pixel coordinate mapping process.
Figure 4. Schematic of the experimental system setup.
Figure 5. The acquisition process of unsaturated images I G .
Figure 6. Identification of saturated regions and determination of optimal projection intensity in the camera coordinate system. (a) Image captured with maximum projection intensity; (b) Binary mask of saturated regions; (c) Uniform pattern captured under low projection intensity ( I G ); (d) Uniform pattern captured under zero projection intensity ( I 0 ); (e) Calculated optimal projection intensity distribution map ( I o p ).
Figure 7. Measurement results using the original system: (a,b) results obtained with high-projection-intensity fringe patterns; (c,d) results obtained with low-projection-intensity fringe patterns.
Figure 8. Procedure for establishing coordinate mapping: (a,d) captured orthogonal fringe patterns; (b,e) absolute phase maps; (c,f) absolute phase after multiplication with the saturation mask.
Figure 9. Experimental results of the proposed method: (a) generated adaptive fringe pattern; (b) image captured by the camera; (c) unwrapped phase of the red rectangular region marked in (b); (d) measurement result.
Figure 10. Phase error comparison: (a) image with marked comparison area; (b) phase error comparison within the red region.
Figure 11. Measurement results of the metal wrench: (a,e) show the captured uniform grayscale image under a projection intensity of 255 and the corresponding generated saturation mask; (b,c) present the projected fringe pattern and the captured image using the original system; (f,g) display the projected fringe pattern and the captured image obtained by the proposed method; (d,h) illustrate the 3D measurement results from the original system and the proposed method.
Figure 12. Experimental objects: (a) a smooth-surface mouse, (b) a geometrically complex metal bracket, and (c) a stepped metal block.
Figure 13. Measurement results: (ad) the mouse and (eh) the metal bracket measured by the traditional method, Feng’s method, Cai’s method, and the proposed method, respectively, with enlarged details provided for each.
Figure 14. Measurement results of the stepped metal block: (a) multi-exposure method; (b) 20-step phase-shifting method; (c) proposed method; (d) plane fitting result of (a); (e) plane fitting result of (b); (f) plane fitting result of (c); (g) measured distance between adjacent planes in (d); (h) measured distance between adjacent planes in (e); (i) measured distance between adjacent planes in (f).
Table 1. Comparison of measurement methods.

Method | Original System | Feng [19] | Cai [17] | Proposed
Number of images | 12 | 61 | 37 | 36 + 3
NPC of mouse | 1699 | 1832 | 1881 | 1960
PCI of mouse (N = 1971) | 86.20% | 92.95% | 95.43% | 99.44%
NPC of metal bracket | 4511 | 5457 | 5415 | 5684
PCI of metal bracket (N = 5788) | 77.93% | 94.28% | 93.55% | 98.20%
Table 2. Performance comparison of the three methods on the stepped metal block.

Step Pair | Reference Height (mm) | Multi-Exposure (mm) | 20-Step Phase-Shifting (mm) | Proposed (mm) | Multi-Exposure Error (mm) | 20-Step Error (mm) | Proposed Error (mm)
Step 1–2 | 5.063 | 5.0889 | 5.0358 | 5.0386 | 0.026 | −0.027 | −0.024
Step 2–3 | 5.019 | 5.0514 | 5.0471 | 5.0402 | 0.032 | 0.028 | 0.021
Step 3–4 | 4.992 | 4.9547 | 4.9527 | 5.0197 | −0.037 | −0.039 | 0.028
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Sun, X.; Zhou, K.; Kong, L.; Zeng, J.; Zhang, Y.; Luo, Z.; Peng, X. Adaptive Intensity Modulation for High Dynamic Range Target Measurement Based on Neighbourhood Diffusion. Photonics 2026, 13, 167. https://doi.org/10.3390/photonics13020167

