Article

Polarization-Modulated Optical Homodyne for Time-of-Flight Imaging with Standard CMOS Sensors

1
Degree Programs in Systems and Information Engineering, University of Tsukuba, Tsukuba 305-8573, Japan
2
Optech Innovation, LLC., Tsukuba 305-0031, Japan
3
Institute of Systems and Information Engineering, University of Tsukuba, Tsukuba 305-8573, Japan
*
Authors to whom correspondence should be addressed.
Sensors 2025, 25(6), 1886; https://doi.org/10.3390/s25061886
Submission received: 20 January 2025 / Revised: 3 March 2025 / Accepted: 15 March 2025 / Published: 18 March 2025
(This article belongs to the Special Issue Recent Advances in CMOS Image Sensor)

Abstract:
Indirect time-of-flight (iToF) imaging is a widely applied technique that obtains a depth image from the phase difference of amplitude-modulated signals between emitted and reflected light. On a conventional iToF sensor, the phase difference is computed via electrical correlation. However, iToF sensors face a trade-off between spatial resolution and light collection efficiency because the electrical correlation circuit in each pixel is hard to downsize. We therefore propose a novel iToF depth imaging system based on polarization-modulated optical homodyne detection with a standard CMOS sensor. A resonant photoelastic modulator is employed to modulate the polarization state, enabling optical correlation through interaction with an analyzer. The homodyne detection enhances noise resistance and sensitivity in the phase difference estimation. Furthermore, the use of a polarization camera reduces the number of required measurements. We first validate phase difference estimation in two setups, one with an avalanche photodiode and one with a CMOS sensor. The experimental results show accurate depth estimation even under challenging conditions such as a low signal-to-noise ratio, temporal intensity variations, and speckle noise. The proposed system enables high-resolution iToF depth imaging using readily available image sensors.

1. Introduction

Depth imaging has emerged as a critical technology across numerous applications, including autonomous driving, factory automation, and virtual/augmented reality. Time-of-flight (ToF) imaging represents a widely adopted approach for obtaining depth information due to its practical advantages of low cost and accessibility. By leveraging the constant speed of light, ToF systems can estimate scene distances by measuring the time interval between light emission and detection after scene interactions.
There are two primary ToF methodologies: direct ToF (dToF) and indirect ToF (iToF). dToF techniques require high-speed, precise light measurement, typically implemented through a combination of avalanche photodiodes (APDs) and time-to-digital converters. However, integrating these components into dense arrays presents significant technical challenges. Consequently, iToF approaches are generally preferred for applications demanding high spatial resolution, as they can more readily achieve dense pixel arrays while maintaining acceptable depth measurement capabilities.
iToF imaging systems [1,2,3] illuminate targets with intensity-modulated light and estimate flight time by measuring the phase difference between emitted and reflected light signals through temporal correlation operations. Each pixel in an iToF image sensor integrates both a light-sensing element and temporal correlation circuitry (Figure 1a), enabling independent depth measurements at each spatial location. However, this integrated architecture imposes fundamental constraints on the achievable spatial resolution of iToF cameras. While a higher resolution requires reducing the pixel size within fixed sensor dimensions, the temporal correlation circuitry occupies a minimum area that cannot be readily scaled down. Consequently, increasing the spatial resolution necessitates reducing the light-sensing area of each pixel, creating an inherent trade-off between spatial resolution and light collection efficiency.
Two main approaches have been developed to address this spatial resolution limitation. The first leverages compressive sensing to achieve super-resolution depth imaging that exceeds the native capabilities of iToF sensor arrays [4,5,6]. This method utilizes a digital micromirror device (DMD) with higher spatial resolution than the iToF sensor to modulate the reflected light. By applying time-varying spatial modulation patterns, a depth image at the resolution of the DMD can be reconstructed from multiple measurements. These methods use spatial encoding with DMDs and diffraction gratings to exploit sparsity and restore high-resolution depth images. However, the improvement in spatial resolution is generally modest, by a factor of only a few.
The second approach utilizes standard high-resolution CMOS image sensors. Here, the phase difference is computed not by the temporal correlation circuit on the image sensor but by optical correlation operations. Two representative optical correlation techniques are synthetic wavelength imaging and fast optical shutters. The former uses two light sources with closely spaced optical wavelengths (the frequency of the light itself) and measures the phase shift of the beat wave produced by their optical interference, enabling high-resolution depth measurement [7,8,9]. Although the frequency of the light itself is very high (THz order), the phase shift can be measured even with standard CMOS sensors by using the low-frequency beat wave. However, these techniques require two highly coherent lasers, which are expensive and bulky.
In contrast, the latter method uses a high-speed optical shutter (Figure 1b) to perform the correlation computation by charge distribution that conventional iToF cameras carry out electrically (Figure 1a). Image intensifiers [10,11], electro-optic modulators (EOMs) [12], and electro-absorption modulators (EAMs) [13,14,15,16] have been used as high-speed optical shutters. Image intensifiers require high voltages of several kV and are not suitable for implementation in consumer-grade cameras. EOMs (e.g., Kerr cells and Pockels cells) may also require voltages of several hundred volts to several kV, but in combination with resonant circuits they can be driven at low voltages (several tens of volts) at certain frequencies. They can be modulated at high frequencies in the MHz to GHz range, but the bulkiness of the material crystals and the small photosensitive area mean that depth images must be acquired by scanning. EAMs are also capable of MHz modulation at low voltages and do not require scanning to obtain a depth image. However, they are made of wafers containing multiple quantum wells, which are more complex structures than the single-crystal wafers used in EOMs.
In addition to these methods, Atalar et al. [17,18,19,20,21,22] have constructed a polarization modulation-based optical shutter with a resonant photoelastic modulator for iToF imaging. This technique uses heterodyne detection, in which slightly different frequencies are set for light source modulation and polarization modulation, the resulting beat signal is captured by the camera at the differential frequency, and waveform characteristics (phase, amplitude, and offset) are estimated from the time variation of brightness. On the other hand, there is a method of detection called homodyne detection, also known as lock-in detection, that uses a modulated signal and a demodulated signal of the same frequency. Homodyne detection typically demonstrates superior noise resistance compared to heterodyne detection’s waveform-based approach. This enhanced robustness leads to improved sensitivity and accuracy in the estimation of phase difference. Indeed, Atalar [23] acknowledges the potential advantages of incorporating homodyne techniques in future implementations.
In this paper, we propose a novel polarization-modulated optical homodyne system for iToF imaging. The proposed system mainly consists of a resonant photoelastic modulator and a standard CMOS image sensor. Driving the resonant photoelastic modulator temporally modulates the angle of linear polarization, enabling optical correlation through interaction with an analyzer. This configuration allows depth measurement with high spatial resolution using a CMOS image sensor. In this work, we apply a homodyne approach, using the identical frequency for both the light source modulation and demodulation signals, to detect the phase difference with high sensitivity and precision. Furthermore, we propose a method using a polarization image sensor as the light-receiving element, because polarization modulation enables the phase shift of the demodulated signal to be set by the angle of the detector. This method has the advantage of reducing the number of measurements for the optical homodyne method. Due to the relationship between the modulation of the linear polarization angle and the detector angle, four phase-shifted images can be obtained with two exposures by using the four polarization-angle detectors of a polarization image sensor.
Table 1 presents a comparison of different methods. The proposed method enhances the number of pixels in ToF cameras by capturing the results of optical correlation using a standard CMOS sensor. Compared to conventional iToF sensors, utilizing a high-pixel-count CMOS sensor addresses the issue of spatial resolution. Additionally, while the heterodyne method requires reconstructing phase information from the beat wave based on the intensity variations of multiple images, the proposed method employs the homodyne approach, allowing phase difference estimation from just four phase-shifted images. Furthermore, since the proposed method performs demodulation using an optical shutter based on polarization modulation, a polarization camera with analyzers at different angles enables reducing the number of measurements from four to one. This is not feasible with optical shutters based on light amplification or absorption modulators.

2. Polarization-Modulated Optical Homodyne

2.1. Principle of Polarization Modulation

Polarization modulation refers to the temporal modulation of linear polarization angles using devices capable of electrically controlling polarization states. Specifically, a polarization modulator consists of a phase modulation element that can delay the slow axis phase according to the applied voltage, combined with a quarter-wave plate (QWP). In this paper, we perform polarization modulation using a phase modulation element based on the inverse piezoelectric effect and photoelastic effect. When an electrical signal is input to this optical phase element, strain is generated in the element due to the inverse piezoelectric effect, and the phase delay of the slow axis varies through the photoelastic effect according to the magnitude of this strain.
Figure 2 shows the mechanism of the linear polarization change by the polarization modulator. First, we assume that the light reflected from a target object surface is unpolarized. The reflected light passes through a linear polarizer whose transmission axis is horizontal (x-axis), so only the horizontal component is transmitted. The linearly polarized light then passes through the optical phase modulator, which is arranged with its fast axis at 45° to the x-axis, and becomes elliptically polarized to a degree that depends on the applied voltage. By passing the elliptically polarized light through a QWP whose fast-axis direction is aligned with the original linear polarization, the light returns to linearly polarized light at a different angle.

2.2. Optical Transmission Through Polarization Modulation

When a linear polarizer (analyzer) is placed after the polarization modulator, the optical transmission varies according to the angle of the linear polarization passing through the analyzer. Figure 3 illustrates the relationship between optical transmission and both the linear polarization angle and the analyzer angle. Let θ_LP(t) be the time-varying angle of linear polarization due to polarization modulation, where θ_LP(t) varies linearly from 0° to 90° during the first half-period and from 90° to 0° during the second half-period. When an analyzer with a transmission-axis angle θ_A of 0° is positioned in this configuration, the optical transmission P(t) follows the pattern shown in Figure 3 (red line). A phase shift of 180° in θ_LP(t) results in a corresponding 180° phase shift in the optical transmission (red dashed line). Since the phase of θ_LP(t) is determined by the phase of the signal input to the optical phase modulator, it can be controlled electronically.
Furthermore, when the phase of θ_LP(t) is reset to 0° and the analyzer angle θ_A is set to 90°, the optical transmission exhibits a phase inversion (180° phase shift, blue line) compared to when θ_A is 0°. When θ_A is set to 45° and 135°, the optical transmission phase δ becomes 90° (yellow line) and 270° (green line), respectively. Thus, the optical transmission phase can also be controlled through the analyzer angle, where the phase shift in optical transmission is twice the analyzer angle (δ = 2θ_A). Since the polarization image sensor is equipped with analyzers at four polarization angles (θ_A = 0°, 45°, 90°, and 135°), intensities at the four phase shifts required for the four-bucket method can, in theory, be acquired in a single exposure. However, in practice, multiple exposures may be required due to the limited magnitude of polarization modulation, as discussed in Section 5.2.
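The relation δ = 2θ_A can be verified numerically with Malus's law. The sketch below is illustrative rather than the authors' implementation; for simplicity it rotates the polarization angle continuously (so that 2θ_LP advances through one full cycle) instead of sweeping it back and forth, and recovers the phase of the transmission waveform by lock-in against the fundamental:

```python
import math

def transmission(theta_lp, theta_a):
    # Malus's law: fraction of linearly polarized light at angle
    # theta_lp passed by an analyzer at angle theta_a (radians).
    return math.cos(theta_lp - theta_a) ** 2

def transmission_phase(theta_a, n=1024):
    # Rotate the polarization so that w = 2*theta_LP advances through
    # one full cycle, then lock in on the fundamental of the
    # transmission waveform to recover its phase in degrees.
    c = s = 0.0
    for k in range(n):
        w = 2.0 * math.pi * k / n          # w = 2 * theta_LP(t)
        p = transmission(w / 2.0, theta_a)
        c += p * math.cos(w)
        s += p * math.sin(w)
    return math.degrees(math.atan2(s, c)) % 360.0
```

For analyzer angles of 0°, 45°, 90°, and 135°, the recovered transmission phases are 0°, 90°, 180°, and 270°, matching the δ = 2θ_A relation stated above.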

2.3. iToF Depth Measurement Using Polarization-Modulated Optical Homodyne

In this subsection, we describe the method for iToF depth measurement using polarization modulation. In our method, a target object is irradiated with intensity-modulated light in the same way as in conventional iToF methods. The irradiated light L(t) is expressed by the following equation:
L(t) = A_L cos(2π f_m t) + B_L,
where t is time, A_L is the amplitude of the irradiated light, f_m is the modulation frequency, and B_L is the offset of the irradiated light.
The irradiated light is reflected at a certain point on the object surface and returns to the camera. The reflected light R(t) is expressed by the following equation:
R(t) = A_R cos(2π f_m t + φ) + B_R,
where A_R is the amplitude of the reflected light, φ is the phase difference between the irradiated light and the reflected light, and B_R is the offset of the reflected light.
The proposed method differs from conventional methods in that it calculates the phase difference using optical transmittance control by polarization modulation. Polarization modulation temporally changes the angle of the linearly polarized light, and the light transmittance varies with the angle between the polarization direction and the transmission axis of the analyzer. The light transmittance P(t) is thus given by
P(t) = α cos(2π f_d t + δ) + β,
where α is the amplitude of the light transmittance, f_d is the demodulation frequency, δ is the phase shift between the irradiated light and the light transmittance, and β is the offset of the light transmittance. Since our proposed method is based on the optical homodyne technique, f_d = f_m. Ideally, the polarization modulation sweeps the angle of linear polarization back and forth between 0° and 90°; in that case, α = β = 0.5 and the transmittance varies between 0 and 1. The intensity of the reflected light observed by each pixel is described as follows:
I_δ = lim_{n→∞} (1/n) ∫_0^{nT} R(t) P(t; δ) dt
    = A_I cos(φ − δ) + B_I,
where n is a natural number, T is the modulation period, A_I = A_R α T/2, and B_I = B_R β T.
The phase difference ϕ between incident and reflected light can be calculated by the four-bucket method [1,24] as in the conventional iToF methods. In our method, the phase is calculated using four measurements obtained by varying the optical transmission phase at 0°, 90°, 180°, and 270°. The equations for the intensities at these four phase shifts and the phase difference calculation are shown below.
I_0 = I_{δ=0} = A_I cos(φ) + B_I,
I_90 = I_{δ=π/2} = A_I sin(φ) + B_I,
I_180 = I_{δ=π} = −A_I cos(φ) + B_I,
I_270 = I_{δ=3π/2} = −A_I sin(φ) + B_I,
φ = arctan[(I_90 − I_270) / (I_0 − I_180)].
The amplitude A I and offset B I of the observed intensity can also be calculated from the above four observations as follows:
A_I = √((I_0 − I_180)² + (I_90 − I_270)²) / 2,
B_I = (I_0 + I_90 + I_180 + I_270) / 4.
Using the calculated phase difference and the speed of light c, the depth d from the camera to the target is calculated by
d = cφ / (4π f_m).
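The four-bucket recovery and the depth conversion can be sketched in a few lines. This is a minimal illustration of the equations above, using a sign convention consistent with the bucket definitions (the synthetic intensities and the 9.1° phase are illustrative, chosen because 9.1° at 3.7782 MHz corresponds to roughly 1 m of depth):

```python
import math

C_LIGHT = 299_792_458.0  # speed of light (m/s)

def four_bucket(i0, i90, i180, i270, f_m):
    # Recover phase difference, amplitude, offset, and depth from the
    # four phase-shifted intensities (minimal sketch of the equations).
    phi = math.atan2(i90 - i270, i0 - i180) % (2.0 * math.pi)
    amp = math.hypot(i0 - i180, i90 - i270) / 2.0
    off = (i0 + i90 + i180 + i270) / 4.0
    depth = C_LIGHT * phi / (4.0 * math.pi * f_m)
    return phi, amp, off, depth

# Synthesize the four buckets for a known phase difference, then recover it.
A_I, B_I, f_m = 0.4, 1.0, 3.7782e6
phi_true = math.radians(9.1)            # ~1 m of depth at 3.7782 MHz
i0 = A_I * math.cos(phi_true) + B_I
i90 = A_I * math.sin(phi_true) + B_I
i180 = -A_I * math.cos(phi_true) + B_I
i270 = -A_I * math.sin(phi_true) + B_I
phi, amp, off, depth = four_bucket(i0, i90, i180, i270, f_m)
```

Using atan2 rather than a plain arctangent resolves the quadrant of φ over the full 0° to 360° range, which a single arctan of the ratio cannot do.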

3. Implementation

3.1. Fabrication of Optical Phase Modulator

In this paper, we fabricated an optical phase modulator based on the photoelastic effect arising from the mechanical resonance of a piezoelectric material, with reference to [19]. Figure 4 shows the schematic of the optical phase modulator (a) and its photograph (b). The substrate of the optical phase modulator is a lithium niobate (LN) wafer (Y-cut), which has piezoelectric properties. A 450 nm film of indium tin oxide (ITO) was deposited on both sides of the wafer in a circular area with a radius of 6.35 mm from its center. The ITO films function as transparent conductive films. Titanium (10 nm) and gold (200 nm) were then deposited on the ITO films as electrodes. The film shapes of these materials are shown in Figure 4a; they were deposited by sputtering with metal masks. The film-formed LN wafer was connected to a circuit board using conductive paste and aluminum wire so that the optical phase modulator could be driven by an external signal.

3.2. Modulation Frequency

Since the optical phase modulator is based on the mechanical resonance characteristics of piezoelectric materials, the magnitude of the angular change in the final output linearly polarized light varies depending on the driving frequency of the optical phase modulator. In addition, as mentioned in Section 2.1, the amplitude of the light transmittance changes depending on the magnitude of the angular change in the linearly polarized light of the polarization modulator, and it affects the intensity of the reflected light observed by the camera. Since the brightness of the reflected light observed by the camera is an important factor related to the accuracy of phase estimation, it is desirable to drive the polarization modulator at a frequency that increases the amplitude of the light transmittance due to polarization modulation. Therefore, in order to determine the frequency used to drive the optical phase modulator, we conducted an experiment to examine the amplitude of the light transmittance at each drive frequency.
Figure 5 shows the experimental setup. We use a laser at 850 nm wavelength (L850P200, Thorlabs, Inc., Newton, NJ, USA) and an avalanche photodiode (APD) (APD430A/M, Thorlabs, Inc., Newton, NJ, USA) as a single-pixel camera. Between the laser and the APD, a linear polarizer (WGPF-30C, SIGMAKOKI Co., Ltd., Tokyo, Japan), the optical phase modulator, a quarter-wave plate (QWP) (WPQW-NIR-4M, SIGMAKOKI Co., Ltd., Tokyo, Japan), and an analyzer (WGPF-30C, SIGMAKOKI Co., Ltd., Tokyo, Japan) are placed. The modulation signals for the light source and the optical phase modulator are generated using two synchronized function generators (DG4000, RIGOL TECHNOLOGIES, Co., SuZhou, China). Since the output of the function generator was insufficient to drive the optical phase modulator, we used a motor driver (MTO-EV021, Marutsuelec Co., Ltd., Tokyo, Japan) to amplify the modulation signal to the extent that polarization modulation could be achieved.
In this experiment, only polarization modulation was performed, without modulating the light source, and changes in brightness due to the polarization modulator were observed using the APD. The signal detected by the APD was captured via a spectrum analyzer (DSA815, RIGOL TECHNOLOGIES, Co., SuZhou, China). By frequency analysis of the observed signal, the signal level at the driving frequency was extracted, and the frequency at which the signal level was highest was determined by a coarse-to-fine search.
The results are shown in Figure 6. The horizontal axis represents the driving frequency, and the vertical axis represents the signal level. Figure 6a presents measurement results obtained at 1 MHz intervals over a frequency range from 1 MHz to 50 MHz. The highest signal intensity was observed around 3 MHz, indicating that this range includes frequencies where brightness variations due to polarization modulation are maximized. Based on this, a finer search was performed around 3 MHz. As shown in Figure 6b, measurements taken at 100 Hz intervals from 3.7641 MHz to 3.7802 MHz revealed that the signal intensity peaked at 3.7782 MHz. This suggests that when the modulator operates at this frequency, resonance occurs, leading to the maximum polarization modulation amplitude. In contrast, the signal intensity at 3.7780 MHz, despite differing by only 200 Hz, dropped significantly, likely due to anti-resonance. Since a 20 dB decrease in signal intensity corresponds to a 90% reduction in amplitude, precise frequency tuning is crucial. Therefore, 3.7782 MHz was chosen as the driving frequency for the experiment.
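The coarse-to-fine frequency search can be sketched as follows. The measurement callback and the step sizes here are illustrative stand-ins for reading the signal level from the spectrum analyzer, not the authors' actual procedure:

```python
def find_resonance(measure_level_db, lo, hi, coarse_step, fine_step, span):
    # Coarse pass: scan the full band on a wide grid (e.g., 1 MHz steps).
    best_f, best_db = lo, float("-inf")
    f = lo
    while f <= hi:
        level = measure_level_db(f)
        if level > best_db:
            best_f, best_db = f, level
        f += coarse_step
    # Fine pass: rescan a narrow span around the coarse peak
    # on a dense grid (e.g., 100 Hz steps).
    center = best_f
    f = center - span
    while f <= center + span:
        level = measure_level_db(f)
        if level > best_db:
            best_f, best_db = f, level
        f += fine_step
    return best_f, best_db

# Synthetic resonance curve peaked at 3.7782 MHz, for demonstration only.
peak = 3.7782e6
level = lambda f: -abs(f - peak) / 1e3   # dB-like level, maximal at peak
f_best, _ = find_resonance(level, 1e6, 50e6, 1e6, 100.0, 5e5)
```

The fine span must be at least half the coarse step, so that the true peak cannot fall between the coarse grid point found and the edge of the refined window.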

3.3. The Angle of Analyzer

As described in Section 2.3, if the range of angular variation of linear polarization due to polarization modulation is from 0° to 90°, the range of variation of light transmittance is from 0 to 1. However, when the fabricated phase modulator was driven, it was found that the change in light transmittance was smaller than ideal. This is probably due to insufficient power supplied to the modulator. Since an RF amplifier with a frequency of 3.7 MHz and high output power was not available, a motor driver with a full-bridge switching circuit was used. Note that the output of the motor driver is a square wave and contains harmonics other than the modulation frequency.
When the angular variation range of linear polarization due to polarization modulation is less than 0 to 90 degrees, the analyzer angle becomes crucial for maximizing the amplitude of light transmission. Considering intensity variations of transmitted light with respect to analyzer angle for a given linearly polarized light, if the polarization angle change from the phase modulator remains constant, the modulation amplitude should be maximized when the analyzer is set at angles where the derivative of the intensity function with respect to analyzer angle is at its maximum or minimum. Conversely, if the analyzer angle is set to the angle at which the derivative of this intensity function is zero, the modulation amplitude will be minimal. In our method, the analyzer angle should be set so that the modulation amplitude is maximized.
To verify this hypothesis, we performed measurements using the same setup as in Section 3.2, driving only the polarization modulator without light source modulation. We rotated the analyzer and measured the APD output signal waveforms at various analyzer angles using an oscilloscope (DS1104Z, RIGOL TECHNOLOGIES, Co., Suzhou, China). Figure 7a shows the offset and amplitude of the observed signals plotted against the analyzer angle. The results demonstrate that the amplitude becomes minimal at angles where the derivative of the offset is zero. Conversely, the amplitude increases at angles where the absolute value of the offset derivative is large. For example, Figure 7b shows waveforms at 100° (where the derivative of the offset is zero) and 55° (where the absolute value of the offset derivative is maximal). The amplitude at 55° is approximately five times larger than at 100°, and the offset is about half as large, indicating superior modulation performance. Note that 55° is just one example; 145°, 235°, and 325°, which also have the largest absolute derivative values, are equally acceptable. In subsequent sections, experiments are conducted at analyzer angles that provide higher modulation performance.
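The relation between the offset derivative and the modulation amplitude can also be checked numerically. With a small polarization-angle swing ε about a mean angle, the transmission cos²(θ_LP − θ_A) has an offset set by the mean angle and a modulation amplitude proportional to |d(offset)/dθ_A|. The swing size and angles below are illustrative, not the measured values:

```python
import math

def offset_and_amplitude(theta_a_deg, eps_deg=5.0, n=2000):
    # Transmission through an analyzer at theta_a for a polarization
    # angle oscillating sinusoidally with small swing eps about 0.
    eps = math.radians(eps_deg)
    theta_a = math.radians(theta_a_deg)
    lo, hi, acc = float("inf"), float("-inf"), 0.0
    for k in range(n):
        w = 2.0 * math.pi * k / n
        p = math.cos(eps * math.sin(w) - theta_a) ** 2
        acc += p
        lo, hi = min(lo, p), max(hi, p)
    return acc / n, (hi - lo) / 2.0      # (offset, modulation amplitude)

off0, amp0 = offset_and_amplitude(0.0)    # offset derivative is zero here
off45, amp45 = offset_and_amplitude(45.0) # |offset derivative| is maximal
```

At θ_A = 0° the offset sits at an extremum of cos², so the small swing produces only a second-order (ε²) ripple, while at θ_A = 45° the slope of cos² is steepest and the ripple is first order in ε, which is why the steep-slope angles give far larger modulation amplitude.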

4. Validation

4.1. Phase Difference Measurement Using an APD

To validate the modulator’s capability for phase difference measurements via optical homodyne detection, we first employ an avalanche photodiode (APD) as the detector, leveraging its high temporal resolution and sensitivity. We utilize a through-beam configuration, as illustrated in Figure 5 in Section 3.2.
The light source amplitude modulation and polarization modulation are both driven at 3.7782 MHz. We systematically vary the phase difference between these two modulation signals from 0° to 360° in 1° increments. At each phase setting, we acquire data and estimate the corresponding phase difference. We capture the temporal intensity variations, which arise from the correlation between the light source and polarization modulations, using an oscilloscope (DS1104ZPlus, RIGOL TECHNOLOGIES, Co., SuZhou, China). Phase difference estimations are calculated from waveform data accumulated over a fixed time period. The oscilloscope is configured to record 3000 measurement points with a sampling rate of 350 MSa/s and a time scale of 100 ns.
Figure 8 shows the results of phase difference estimation. The horizontal axis of the graph represents the set phase difference, and the vertical axis represents the error in the estimated phase difference. The results show that the phase difference is estimated correctly for most of the set values. Additionally, as indicators of accuracy and precision, the mean absolute error (MAE) and standard deviation were calculated, resulting in 1.09° (equivalent to 0.12 m in depth) and 1.65° (equivalent to 0.18 m in depth), respectively.
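When aggregating phase errors over a 0° to 360° sweep, differences should be wrapped so that an estimate of 359° against a set value of 1° counts as a 2° error rather than 358°. The sketch below shows one wrap-aware way to compute the MAE and standard deviation; it is our assumption about the aggregation, not the authors' exact procedure:

```python
import math

def angular_error_deg(estimated, actual):
    # Signed error wrapped to (-180, 180] so that errors straddling
    # the 0/360 boundary are not inflated.
    return (estimated - actual + 180.0) % 360.0 - 180.0

def mae_and_std(pairs):
    # MAE and standard deviation of wrap-aware phase errors, where
    # pairs is a list of (estimated, set) phase differences in degrees.
    errs = [angular_error_deg(e, a) for e, a in pairs]
    n = len(errs)
    mae = sum(abs(x) for x in errs) / n
    mean = sum(errs) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in errs) / n)
    return mae, std
```

The same wrapping is needed when converting phase errors to depth errors, since the depth scale c/(4π f_m) applies to the wrapped phase.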
In addition, 100 measurements for a set phase difference (0° and 90°) were made under the same conditions as in the previous paragraph. The experimental results are shown in Table 2, where the accuracy and precision of the phase difference estimation were approximately 0.75° and 1.05°, respectively.

4.2. Phase Difference Measurement Using a CMOS Image Sensor

The experiments in Section 4.1 successfully demonstrated phase difference estimation using a single-pixel APD. Standard CMOS sensors, however, utilize photodiodes that exhibit lower sensitivity than APDs. The realization of ToF imaging with standard CMOS sensors requires both successful phase difference estimation with these less sensitive photodiodes and spatially uniform phase estimation across the sensor. To validate these capabilities, we performed phase difference estimation experiments using a camera equipped with a standard CMOS sensor.
The experimental configuration is illustrated in Figure 9. An achromatic lens (C280TMD-B, Thorlabs, Inc., Newton, NJ, USA) was used to collimate the laser beam to a diameter of approximately 6.5 mm. Light intensity was controlled using a half-wave plate to rotate the polarization angle of the polarized laser output. All optical components positioned after the polarizer were consistent with the APD-based experimental setup. We utilized a monochrome camera (BFS-U3-51S5M-C, Teledyne FLIR LLC., Wilsonville, OR, USA) equipped with a standard five-megapixel CMOS sensor (IMX250, Sony Semiconductor Solutions Corporation, Atsugi, Japan), operated without a camera lens. Light intensity was adjusted using a neutral density (ND) filter (NE230B, Thorlabs, Inc., Newton, NJ, USA) positioned before the camera.
Both the light source and polarization modulation frequencies were set to 3.7782 MHz, and the phase difference between their respective signals was controlled via a function generator. Phase estimation was performed using four phase-shifted images acquired at 400 μs exposure time for each set phase difference.
Figure 10a shows the estimated phase difference images for set phase differences of, from left to right, 90°, 180°, and 270°. The estimated phase difference was not uniform across the device plane: phase differences close to the theoretical values were estimated near the center, while different values were obtained in the peripheral regions. The histograms of the estimated phase differences (Figure 10b) show that the peaks of the distributions align with their respective set phases, confirming that accurate phase difference estimation is generally achievable using a multi-pixel camera.

5. Evaluation

5.1. Depth Imaging with a CMOS Image Sensor

First, we evaluate the performance of the proposed system with a standard CMOS image sensor in a typical depth imaging configuration. Figure 11a shows the experimental iToF imaging system setup. A laser at 637 nm wavelength (HL63142DG, Thorlabs, Inc., Newton, NJ, USA) is positioned near an objective lens (F1.9/35 mm). The image formed by the objective lens is relayed using a plano-convex lens (f = 50 mm). Based on experimental results from Section 4.2, the optical phase modulator produced accurate phase estimates near its center but exhibited different phase estimates toward the periphery. Therefore, to ensure parallel light passed only through the center of the optical phase modulator, we employed a plano-convex lens and an iris. Additionally, since the polarization state of reflected light could reduce light intensity by the analyzer, we positioned a half-wave plate (HWP) at an appropriate angle to maximize light transmission. The polarizer, optical phase modulator, QWP, and analyzer were identical to those used in previous experiments. A monochrome camera (BFS-U3-51S5M-C, Teledyne FLIR LLC., Wilsonville, OR, USA) equipped with a standard CMOS sensor (IMX250, Sony Semiconductor Solutions Corporation, Atsugi, Japan) and camera lens (F2.4/50 mm) was positioned after the analyzer, enabling the acquisition of intensity images as a result of optical correlation computations.
Two white boards (EDU-VS1/M, Thorlabs, Inc., Newton, NJ, USA) were positioned as target objects at different depths, with a separation distance of approximately 1 m between them (Figure 11b). The modulation frequency of both the light source and modulator was set to 3.7782 MHz, resulting in a theoretical phase difference of approximately 9.1° between the two board regions. Due to temporal intensity variations, which will be discussed in detail later, we configured the camera with an exposure time of 400 ms and a frame rate of 2 Hz, and images were acquired over a 60 s period. A median image was generated from the captured sequence to mitigate the effects of temporal intensity fluctuations. Furthermore, the reflective configuration intensified speckle noise effects, and the speckle patterns varied with changes in the phase difference between the light source and modulator. To minimize these speckle effects, phase difference estimation was performed using images processed with a combination of median and Gaussian filters.
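The denoising steps above can be sketched as follows: a per-pixel temporal median across the captured frames, followed by a small spatial median filter against speckle outliers. This is a minimal illustration on nested-list "images" (the Gaussian smoothing step is omitted for brevity, and nothing here is taken from the authors' processing code):

```python
from statistics import median

def temporal_median(frames):
    # Per-pixel median across a stack of frames (nested lists),
    # suppressing slow temporal intensity fluctuations.
    h, w = len(frames[0]), len(frames[0][0])
    return [[median(f[y][x] for f in frames) for x in range(w)]
            for y in range(h)]

def median3x3(img):
    # 3x3 spatial median filter with clamped borders, knocking down
    # isolated speckle outliers without blurring edges much.
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            win = [img[min(max(y + dy, 0), h - 1)]
                      [min(max(x + dx, 0), w - 1)]
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = median(win)
    return out
```

In practice this would run on full-resolution sensor frames (e.g., with an array library), but the ordering is the point: temporal filtering first removes frame-to-frame brightness drift, then spatial filtering cleans residual speckle before the four-bucket phase computation.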
The experimental results are shown in Figure 12. The estimated depth image (Figure 12a) shows that the depth difference between the two board regions is small. When the depth display range was narrowed to 1.653   m to 1.653   m , subtle differences between the front and rear boards became apparent (Figure 12b). Consequently, we defined regions of interest (ROIs) for the four phase-shifted images in both the front and rear board regions (within the white frames shown in Figure 12b) and estimated the phase using the median values within these ROIs. As a result, the estimated depth in the front board region was 1.07   m , while in the rear board region, it was 2.03   m , yielding a depth difference of 0.96   m . This indicates that the depth was estimated approximately correctly.
Figure 12c,d represent the estimated amplitude and offset images, respectively, while Figure 12e is the amplitude-to-offset ratio (AOR) image computed from them. The AOR is significantly low, with the mean and standard deviation within the white frames (shown in Figure 12e) being 0.031 and 0.014 for the front side and 0.029 and 0.012 for the rear side. Since the sensor’s dynamic range is limited, a low AOR reduces the effective range of amplitude, increasing quantization errors. The primary cause of the low AOR is the small angular change in linearly polarized light due to polarization modulation, which results from insufficient power supplied to the optical phase modulator. By inputting a properly powered sinusoidal signal into the optical phase modulator, the amplitude of the polarization-modulated light transmission can approach its ideal value, leading to improvements in both AOR and depth estimation accuracy.
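The amplitude, offset, and AOR follow directly from the same four buckets. A sketch under the same bucket model as above; the ~0.031 amplitude used in the example is illustrative, chosen to match the reported front-side mean.

```python
import numpy as np

def amplitude_offset_aor(i0, i90, i180, i270):
    """Amplitude A, offset B, and amplitude-to-offset ratio (AOR)
    from four measurements I_k = B + A*cos(phi - theta_k)."""
    amp = 0.5 * np.hypot(i0 - i180, i90 - i270)   # recovers A
    off = 0.25 * (i0 + i90 + i180 + i270)         # recovers B
    return amp, off, amp / off

# With A = 0.031 and B = 1.0, the AOR comes back as ~0.031
# regardless of the phase difference phi.
phi = 0.7
A, B = 0.031, 1.0
buckets = [B + A * np.cos(phi - np.deg2rad(t)) for t in (0, 90, 180, 270)]
amp, off, aor = amplitude_offset_aor(*buckets)
```

A low AOR means the useful signal (the amplitude) occupies only a small slice of the sensor's dynamic range, which is why quantization error grows as the AOR shrinks.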
Additionally, the amplitude image exhibits a speckled pattern caused by laser speckle. Speckle arises from interference in the wavefront scattered by a rough surface, leading to large variations between bright and dark points. When attempting to capture an overall bright image, the bright speckle points tend to saturate easily. Once brightness saturation occurs, phase calculation based on the ratio of brightness differences becomes impossible, making speckle noise reduction necessary. One possible approach is to place a rotating diffuser in front of the light source.
Furthermore, as mentioned earlier, temporal brightness fluctuations unrelated to polarization modulation were observed. These fluctuations were not only larger than the brightness variations induced by phase shifts in polarization modulation but also made it difficult to capture the four phase-shifted images within a short period. As a result, the four-phase images were significantly affected by these fluctuations, leading to incorrect phase difference estimation. To mitigate this effect, we generated a median image from a large set of images acquired over 60 s, effectively reducing the impact of temporal variations. Although the exact cause of these fluctuations has not yet been identified, one possible factor is the instability of the laser source output due to thermal effects. In this case, using a VCSEL, which offers greater thermal stability than conventional lasers, could resolve this issue. Additionally, if the previously mentioned issue of small polarization modulation amplitude is resolved, and the brightness variations induced by the correlation between light source modulation and polarization modulation become sufficiently large, this problem can also be mitigated.

5.2. Depth Imaging with a Polarization Image Sensor

Second, we evaluate an extended system with a polarization image sensor. As described in Section 2.2, the four phase shifts required for phase estimation can be achieved by varying the analyzer angle through 0°, 45°, 90°, and 135°. Thus, using a polarization image sensor enables the acquisition of four phase-shifted images in a single capture. However, when the angular variation of linear polarization due to polarization modulation is small, the intensity varies significantly depending on both the offset angle of polarization modulation and the analyzer angle. Since this intensity variation is not caused by the correlation computation from polarization modulation, the phase difference cannot be estimated from the four polarization angle images. Meanwhile, the phase shift magnitude of polarization modulation can also be set through the phase of the signal applied to the optical phase modulator. Therefore, by combining phase shift settings through electrical signals with phase shifts from analyzer angles, we can suppress intensity variations caused by the polarization modulation offset angle and analyzer angle. Since the polarization image sensor is equipped with detectors at different angles, the phase difference can be estimated with only two measurements.
The experimental setup is modified from the standard CMOS sensor configuration shown in Figure 11 by removing the analyzer and replacing the camera with a polarization camera (BFS-U3-51S5P-C, Teledyne FLIR LLC., Wilsonville, OR, USA). The target objects remained identical, with the distance between the two white boards maintained at 1 m . The phases of signals applied to the optical phase modulator are set to 0° and 180°, while analyzer angles of 0° and 135° are employed. The combinations of optical phase modulator signal phases and analyzer angles used to acquire the phase-shifted images are shown in Table 3. As with the standard camera setup, to minimize the effects of temporal intensity variations, the camera exposure time was set to 100 ms with a frame rate of 2 Hz, and images were captured over a 60 s period. The median images computed from these captured sequences were used for phase estimation calculations.
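Because each polarization-camera exposure returns all four analyzer-angle channels at once, the two captures (modulator signal phase 0° and 180°) can be rearranged into the four phase-shifted buckets. The sketch below shows this bookkeeping; the (signal phase, analyzer angle) pairing is our reading of Table 3, built from the combinations stated in the text (signal phases 0°/180°, analyzer angles 0°/135°).

```python
def assemble_buckets(shot_p0, shot_p180):
    """Build the four phase-shifted images from two polarization-camera
    captures. shot_p0 / shot_p180: dicts mapping analyzer angle [deg]
    to the corresponding channel image, captured with the modulator
    signal phase at 0 deg and 180 deg, respectively."""
    return {
        0:   shot_p0[0],      # signal 0 deg,   analyzer 0 deg
        90:  shot_p180[135],  # signal 180 deg, analyzer 135 deg
        180: shot_p180[0],    # signal 180 deg, analyzer 0 deg
        270: shot_p0[135],    # signal 0 deg,   analyzer 135 deg
    }

# Placeholder channel labels stand in for the real channel images.
shot_p0 = {0: "P0_A0", 135: "P0_A135"}
shot_p180 = {0: "P180_A0", 135: "P180_A135"}
buckets = assemble_buckets(shot_p0, shot_p180)
```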
The experimental results are presented in Figure 13. Figure 13a shows the estimated depth map, where little depth difference is observed between the two board regions. When the depth display range was constrained to 14.899 m to 4.966 m, a difference between the front and rear boards became visible (Figure 13b). As in previous experiments, ROIs were established for the four phase-shifted images in both the front and rear board regions (indicated by white frames in Figure 13b). Depth estimation using the median values within these ROIs yielded a depth of 11.2 m for the front board region and 10 m for the rear board region, resulting in a depth difference of 1.2 m. While this shows a slightly larger depth estimation error than the standard CMOS sensor experiment (which required four measurements with a fixed analyzer angle and varying optical phase modulator input phases), the use of a polarization camera successfully reduced the number of required measurements to two.
Figure 13c,d represent the estimated amplitude and offset images, respectively, while Figure 13e is the AOR image computed from them. The AOR is very low, with the mean and standard deviation within the white frames (shown in Figure 13e) being 0.019 and 0.035 for the front side and 0.019 and 0.012 for the rear side. This is even lower than in the monochrome camera case. As explained in Section 3.3, when the polarization modulation amplitude is small, an appropriate angle setting becomes crucial. However, since the polarization camera has fixed analyzer angles, the AOR was further reduced. If the AOR is improved, it will become possible to acquire all the phase-shifted images required for the four-bucket method in a single exposure.

6. Conclusions

In this paper, we proposed a novel polarization-modulated optical homodyne system for iToF imaging using standard CMOS sensors. The system employs a resonant photoelastic modulator combined with polarization optics to enable optical correlation for depth measurement, achieving high spatial resolution without specialized ToF sensor arrays. Through experimental validation, the system successfully demonstrated depth measurement with both standard CMOS and polarization image sensors, with the latter reducing the required measurements from four shots to two. For two target boards separated by 1 m, the estimated depth differences were 0.96 m and 1.2 m using the standard and polarization CMOS sensors, respectively, confirming approximately correct depth estimation.
Currently, due to insufficient driving power supplied to the modulator, the polarization modulation amplitude is reduced, necessitating analyzer angle adjustments and time-averaging processing. Additionally, despite employing the high-precision phase detection method of the homodyne approach, the depth estimation accuracy remains comparable to that of the heterodyne method [18,20]. If the modulator is driven with sufficient power and the polarization modulation amplitude becomes sufficiently large, these issues will be resolved, enabling high-precision depth measurement in a single acquisition, which is a key advantage of the homodyne method.
Additionally, the resonant photoelastic modulator has advantages such as being thin and capable of modulation over a large area. However, it has a limitation where the modulation frequency is determined by the substrate thickness. The device fabricated in this study had a thickness of 0.5 mm, resulting in a modulation frequency of approximately 3.7 MHz. Since higher modulation frequencies lead to higher depth estimation resolution, the depth resolution of this method remains low. Increasing the modulation frequency requires a thinner substrate, but this, in turn, demands higher driving power.
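The trade-off above can be made concrete with back-of-envelope arithmetic. The half-wave thickness-resonance relation f ≈ v/(2t) used below is our assumption for illustrating the thickness/frequency link; the frequency and thickness values come from the text.

```python
C = 299_792_458.0   # speed of light [m/s]
f_mod = 3.7782e6    # measured resonance frequency [Hz]
t = 0.5e-3          # substrate thickness [m]

# One-way unambiguous depth interval of an iToF system: c / (2 f).
unambiguous_range = C / (2.0 * f_mod)          # ~39.7 m

# Depth spanned by one degree of phase error at this frequency.
depth_per_degree = unambiguous_range / 360.0   # ~0.11 m per degree

# Implied acoustic velocity if the resonance is f = v / (2 t).
v_implied = 2.0 * t * f_mod                    # ~3.8 km/s
```

Halving the substrate thickness would double the modulation frequency, halving both the ambiguity interval and the depth corresponding to one degree of phase error, at the cost of higher driving power.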
While an electro-optic modulator can achieve extremely high-speed polarization modulation, its small aperture size limits its applicability to multi-pixel imaging [12]. Additionally, while image intensifiers [10,11] and electro-absorption modulators [13,14,15,16] can achieve modulation over a large area at frequencies comparable to conventional ToF cameras, they do not rely on polarization modulation. As a result, even when combined with a polarization camera, they do not enable a reduction in the number of measurements. As different types of modulators each have their own advantages and limitations, a polarization modulator capable of high-speed, large-area modulation remains essential for methods utilizing a high-speed optical shutter and a standard CMOS sensor.
Despite these current limitations, the research demonstrates the potential for achieving high-resolution depth imaging using conventional image sensors through polarization-based optical correlation techniques. The approach of enhancing the spatial resolution of iToF cameras through optical correlation operation, as demonstrated in this study, is inherently valuable. Moving forward, the development of modulation devices for ToF imaging is expected to continue, contributing to the advancement of this field.

Author Contributions

Conceptualization, A.E., T.A. and T.T.; formal analysis, A.E. and T.T.; software, A.E. and T.T.; supervision and project administration, T.A. and T.T.; funding acquisition, A.E. and T.T. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partially funded by JSPS KAKENHI (Grant Numbers JP22KJ0424 and JP23K20382) and by the Exploratory IT Human Resources Project (MITOU Program) of the Information-technology Promotion Agency, Japan (IPA) in fiscal year 2021.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding authors.

Acknowledgments

We acknowledge the NIMS Open Facility Nanofabrication Unit and the “Nanotechnology Platform” (project No. JPMXP09F21NM0092) of the Ministry of Education, Culture, Sports, Science and Technology (MEXT), Japan, for providing the equipment used in the fabrication of the optical phase modulator device. The authors are grateful to Kazuya Kitano for his contribution to the development of the hardware system.

Conflicts of Interest

Author Takahito Aoto was the CEO of the company Optech Innovation, LLC. However, this does not pose a conflict of interest, as the company had no role in the research design, data collection, analysis, or publication process. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Foix, S.; Alenya, G.; Torras, C. Lock-in Time-of-Flight (ToF) Cameras: A Survey. IEEE Sens. J. 2011, 11, 1917–1926. [Google Scholar] [CrossRef]
  2. Yasutomi, K.; Kawahito, S. Lock-in pixel based time-of-flight range imagers: An overview. IEICE Trans. Electron. 2022, 105, 301–315. [Google Scholar] [CrossRef]
  3. Gupta, M.; Nayar, S.K.; Hullin, M.B.; Martin, J. Phasor imaging: A generalization of correlation-based time-of-flight imaging. ACM Trans. Graph. (ToG) 2015, 34, 1–18. [Google Scholar] [CrossRef]
  4. Li, F.; Chen, H.; Pediredla, A.; Yeh, C.; He, K.; Veeraraghavan, A.; Cossairt, O. CS-ToF: High-resolution compressive time-of-flight imaging. Opt. Express 2017, 25, 31096–31110. [Google Scholar] [CrossRef] [PubMed]
  5. Zhang, W.; Song, P.; Wang, X.; Zheng, Z.; Bai, Y.; Geng, H. Fast lightweight framework for time-of-flight super-resolution based on block compressed sensing. Opt. Express 2022, 30, 15096–15112. [Google Scholar] [CrossRef] [PubMed]
  6. Kawachi, H.; Nakamura, T.; Iwata, K.; Makihara, Y.; Yagi, Y. Snapshot super-resolution indirect time-of-flight camera using a grating-based subpixel encoder and depth-regularizing compressive reconstruction. Opt. Contin. 2023, 2, 1368–1383. [Google Scholar] [CrossRef]
  7. Li, F.; Willomitzer, F.; Rangarajan, P.; Gupta, M.; Velten, A.; Cossairt, O. SH-ToF: Micro resolution time-of-flight imaging with superheterodyne interferometry. In Proceedings of the 2018 IEEE International Conference on Computational Photography (ICCP), Pittsburgh, PA, USA, 4–6 May 2018; pp. 1–10. [Google Scholar] [CrossRef]
  8. Li, F.; Willomitzer, F.; Balaji, M.M.; Rangarajan, P.; Cossairt, O. Exploiting wavelength diversity for high resolution time-of-flight 3D imaging. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 43, 2193–2205. [Google Scholar] [CrossRef] [PubMed]
  9. Ballester, M.; Wang, H.; Li, J.; Cossairt, O.; Willomitzer, F. Single-shot synthetic wavelength imaging: Sub-mm precision ToF sensing with conventional CMOS sensors. Opt. Lasers Eng. 2024, 178, 108165. [Google Scholar] [CrossRef]
  10. Dorrington, A.A.; Cree, M.J.; Payne, A.D.; Conroy, R.M.; Carnegie, D.A. Achieving sub-millimetre precision with a solid-state full-field heterodyning range imaging camera. Meas. Sci. Technol. 2007, 18, 2809. [Google Scholar] [CrossRef]
  11. Godbaz, J.P.; Cree, M.J.; Dorrington, A.A. Extending AMCW Lidar Depth-of-Field Using a Coded Aperture. In Computer Vision—ACCV 2010, Proceedings of the 10th Asian Conference on Computer Vision, Queenstown, New Zealand, 8–12 November 2010; Kimmel, R., Klette, R., Sugimoto, A., Eds.; Springer: Berlin/Heidelberg, Germany, 2011; pp. 397–409. [Google Scholar]
  12. Baek, S.H.; Walsh, N.; Chugunov, I.; Shi, Z.; Heide, F. Centimeter-wave free-space neural time-of-flight imaging. ACM Trans. Graph. 2023, 42, 1–18. [Google Scholar] [CrossRef]
  13. Miller, M.; Xia, H.; Beshara, M.; Menzel, S.; Ebeling, K.J.; Michalzik, R. Large-area transmission modulators for 3D time-of-flight imaging. In Proceedings of the Unconventional Optical Imaging II, Online, 6–10 April 2020; Fournier, C., Georges, M.P., Popescu, G., Eds.; International Society for Optics and Photonics, SPIE: Bellingham, WA, USA, 2020; Volume 11351, p. 113511F. [Google Scholar] [CrossRef]
  14. Miller, M.; Savic, A.; Sassiya, B.; Menzel, S.; Ebeling, K.J.; Michalzik, R. Indirect time-of-flight 3D imaging using large-area transmission modulators. In Proceedings of the Digital Optical Technologies 2021, Online, 21–26 June 2021; Kress, B.C., Peroz, C., Eds.; International Society for Optics and Photonics, SPIE: Bellingham, WA, USA, 2021; Volume 11788, p. 117880G. [Google Scholar] [CrossRef]
  15. Abdelhamid, H. Indirect Time-of-Flight 3D Imaging Using Fast Segmented Large-Area Electroabsorption Modulators. Master’s Thesis, Ulm University, Ulm, Germany, 2022. [Google Scholar]
  16. Miller, M.; Sassiya, B.; Menzel, S.; Ebeling, K.J.; Michalzik, R. 3D indirect time-of-flight imaging employing transmission electroabsorption modulators. In Proceedings of the Unconventional Optical Imaging III, Strasbourg, France, 3 April–23 May 2022; Georges, M.P., Popescu, G., Verrier, N., Eds.; International Society for Optics and Photonics, SPIE: Bellingham, WA, USA, 2022; Volume 12136, p. 121360Y. [Google Scholar] [CrossRef]
  17. Atalar, O.; Laer, R.V.; Sarabalis, C.J.; Safavi-Naeini, A.H.; Arbabian, A. Time-of-flight imaging based on resonant photoelastic modulation. Appl. Opt. 2019, 58, 2235–2247. [Google Scholar] [CrossRef] [PubMed]
  18. Atalar, O.; Yee, S.; Safavi-Naeini, A.H.; Arbabian, A. Y-Z cut lithium niobate longitudinal piezoelectric resonant photoelastic modulator. Opt. Express 2022, 30, 47103–47114. [Google Scholar] [CrossRef] [PubMed]
  19. Atalar, O.; Van Laer, R.; Safavi-Naeini, A.H.; Arbabian, A. Longitudinal piezoelectric resonant photoelastic modulator for efficient intensity modulation at megahertz frequencies. Nat. Commun. 2022, 13, 1526. [Google Scholar] [CrossRef] [PubMed]
  20. Atalar, O.; Arbabian, A. Optically isotropic longitudinal piezoelectric resonant photoelastic modulator for wide angle polarization modulation at megahertz frequencies. J. Opt. Soc. Am. A 2023, 40, 2249–2258. [Google Scholar] [CrossRef] [PubMed]
  21. Atalar, O.; Arbabian, A. Polarization-insensitive wide-angle resonant acousto-optic phase modulator. Opt. Lett. 2024, 49, 2141–2144. [Google Scholar] [CrossRef] [PubMed]
  22. Atalar, O.; Arbabian, A. Birefringence-free photoelastic modulator with centimeter-square aperture operating at 2.7 MHz with sub-watt drive power. Opt. Lett. 2024, 49, 5051–5054. [Google Scholar] [CrossRef] [PubMed]
  23. Atalar, O. Resonant Photoelastic Modulation for Time-of-Flight Imaging with Standard Image Sensors. Ph.D. Thesis, Stanford University, Stanford, CA, USA, 2022. Available online: https://purl.stanford.edu/kh488xz0210 (accessed on 31 December 2024).
  24. Lange, R.; Seitz, P. Solid-state time-of-flight range camera. IEEE J. Quantum Electron. 2001, 37, 390–397. [Google Scholar] [CrossRef]
Figure 1. Comparison of correlation operations between the conventional and proposed approaches. (a) Electrical correlation operation in conventional iToF sensors. (b) Optical correlation operation using polarization modulation.
Figure 2. The concept of our method and polarization modulation system. The blue arrows represent the polarization states.
Figure 3. The relationship between optical transmission and both the linear polarization angle and analyzer angle.
Figure 4. Fabrication of optical phase modulator. (a) Schematic diagram of the device structure. (b) Photograph of the fabricated modulator.
Figure 5. Through-beam-type experimental setup with an avalanche photodiode (APD).
Figure 6. Results of search for driving frequency. (a) Measurement at 1 MHz intervals over the range of 1 MHz to 50 MHz. (b) Measurement at 100 Hz intervals over the range of 3.7641 MHz to 3.7802 MHz. The peak frequency was 3.7782 MHz.
Figure 7. Variation in polarization modulation performance with analyzer angle. (a) Offset (blue) and amplitude (orange) of polarization-modulated light intensity versus analyzer angle. (b) Waveforms of polarization-modulated light intensity at specific analyzer angles: red shows the waveform at 55° where the offset is maximum, and green shows the waveform at 100° where the derivative of the offset is maximum.
Figure 8. Result of phase difference estimation using the APD.
Figure 9. Through-beam type experimental setup with a camera.
Figure 10. Experimental results of phase difference estimation using the CMOS imaging sensor. (a) Estimated phase difference images (90°, 180°, and 270° from left). (b) Histograms of estimated phase differences (90°, 180°, and 270° from top).
Figure 11. Experimental setup for iToF imaging with a standard CMOS sensor. (a) The iToF imaging system. (b) Overall view of the setup, including the target object.
Figure 12. Experimental results of iToF imaging using a standard CMOS sensor. (a) Estimated depth image. (b) Depth values from (a) rescaled from the −19.84 m to 19.84 m range to the −1.653 m to 1.653 m range. (c) Estimated amplitude image. (d) Estimated offset image. (e) AOR (the ratio of amplitude to offset) image. The white frames in (b) and (e) indicate the regions of interest (ROIs) in the front (right box) and rear (left box) board regions.
Figure 13. Experimental results of iToF imaging using a polarization CMOS sensor. (a) Estimated depth image. (b) Depth values from (a) rescaled from the −19.87 m to 19.87 m range to the 14.899 m to 4.966 m range. (c) Estimated amplitude image. (d) Estimated offset image. (e) AOR (the ratio of amplitude to offset) image. The white frames in (b) and (e) indicate the regions of interest (ROIs) in the front (right box) and rear (left box) board regions.
Table 1. Comparison table of methods.
Method                        | Correlation | Phase Detection | Number of Pixels  | Number of Measurements | Depth Resolution
Conventional iToF             | Electrical  | Homodyne        | Low               | 2 to 4                 | Middle
SR [4,5,6]                    | Electrical  | Homodyne        | Middle            | 2 [6], Multiple        | Middle
SWI [7,8,9]                   | Optical     | Heterodyne      | High              | 1 [9], Multiple        | High
HOS-II [10,11]                | Optical     | Heterodyne      | High              | Multiple               | Middle
HOS-EOM [12]                  | Optical     | Homodyne        | Single (scanning) | 4 × 2 (2 frequencies)  | High
HOS-EAM [13,14,15,16]         | Optical     | Heterodyne      | High              | 4                      | Middle
HOS-PEM [17,18,19,20,21,22]   | Optical     | Heterodyne      | High              | Multiple               | Low
HOS-PEM (Ours)                | Optical     | Homodyne        | High              | 4, 1 (Pol-Cam)         | Low
Abbreviations: SR, super-resolution; SWI, synthetic wavelength imaging; HOS, high-speed optical shutter; II, image intensifier; EOM, electro-optic modulator; EAM, electro-absorption modulator; PEM, photo-elastic modulator; Pol-Cam, polarization camera. Bold values indicate superior performance in each respective category.
Table 2. The result of 100 repeated measurements with a fixed set phase difference. The values in parentheses represent the phase difference converted into virtual depth.
Set Phase Difference | Mean Absolute Error (MAE) | Standard Deviation
0° (0 m)             | 0.66° (0.073 m)           | 1.1° (0.12 m)
90° (9.9 m)          | 0.84° (0.093 m)           | 1.0° (0.11 m)
Table 3. Combined setting of phase of optical phase modulator signal and detector angle for 2-shot phase estimation using polarization camera.
Four Phase Shift | Optical Phase Modulator Signal Phase | Analyzer Angle
0°               | 0°                                   | 0°
90°              | 180°                                 | 135°
180°             | 180°                                 | 0°
270°             | 0°                                   | 135°

