Article

Performance Comparison of Multipixel Biaxial Scanning Direct Time-of-Flight Light Detection and Ranging Systems With and Without Imaging Optics

1 Fraunhofer Institute for Microelectronic Circuits and Systems, 47057 Duisburg, Germany
2 Department of Electronic Science and Engineering, Kyoto University, Kyoto 615-8510, Japan
* Author to whom correspondence should be addressed.
Sensors 2025, 25(10), 3229; https://doi.org/10.3390/s25103229
Submission received: 21 March 2025 / Revised: 9 May 2025 / Accepted: 19 May 2025 / Published: 21 May 2025
(This article belongs to the Special Issue SPAD-Based Sensors and Techniques for Enhanced Sensing Applications)

Abstract

The laser pulse detection probability of a scanning direct time-of-flight light detection and ranging (LiDAR) measurement is evaluated based on the optical signal distribution on a multipixel single photon avalanche diode (SPAD) array. These detectors intrinsically suffer from dead-times after the successful detection of a single photon and, thus, allow only for limited counting statistics when multiple returning laser photons are imaged on a single pixel. By blurring the imaged laser spot, the transition from single-pixel statistics with high signal intensity to multipixel statistics with less signal intensity is examined. Specifically, a comparison is made between the boundary cases in which (i) the returning LiDAR signal is focused through optics onto a single pixel and (ii) the detection is performed without lenses using all available pixels on the sensor matrix. The omission of imaging optics reduces the overall system size and minimizes optical transfer losses, which is crucial given the limited laser emission power due to safety standards. The investigation relies on a photon rate model for interfering (background) and signal light, applied to a simulated first-photon sensor architecture. For single-shot scenarios that reflect the optimal use of the time budget in scanning LiDAR systems, the lens-less and blurred approaches can achieve comparable or even superior results to the focusing system. This highlights the potential of fully solid-state scanning LiDAR systems utilizing optical phased arrays or multidirectional laser chips.

Graphical Abstract

1. Introduction

Pulsed light detection and ranging (LiDAR) is a 3D imaging technology based on measuring the time-of-flight (ToF) of a laser signal that is emitted, reflected by the scene, and subsequently detected. Single photon avalanche diodes (SPADs) are particularly well suited for LiDAR applications due to their ability to achieve high timing resolution, sensitivity to low light levels, and the feasibility of implementing array configurations. There are various possible system implementations regarding the emission and reception geometry [1] (pp. 6–8). A camera-like arrangement of the transmission and receiving channel is known as Flash-LiDAR [2,3,4], where the entire scene is illuminated with a spatially broadened laser pulse (flash) and a spatially resolved multipixel detector determines the times-of-flight using imaging optics. The lateral resolution is, therefore, dependent on the number of pixels on the detector chip, similar to a camera. Alternatively, the time-of-flight measurements can also be performed with a spatially narrow laser spot, where only a single point of the scene is illuminated at a time. A 3D image is then created by sequentially scanning the scene. An advantage of this scanning approach is the increased permissible laser power per image point, which has to be distributed across the entire scene in the Flash-LiDAR approach [5]. On the other hand, Flash-LiDAR obtains information about the entire scene with each emitted pulse, while a scanning single-point illumination system must perform a separate measurement for each image point. For real-time applications, this places high demands on the time efficiency of the acquisition of each measurement point. For a trade-off between the permissible laser power per image point and scene acquisition time, the flash approach has been adapted to partially illuminate only scene regions or slices [6,7].
There are different approaches to implementing a single-point scanning LiDAR setup. The transmission path can be combined with the receiving path so that both are scanned jointly (coaxial geometry) across the scene using a mirror construction [8,9,10,11]. Alternatively, only the transmission path is scanned across the scene, while the receiving path continuously observes the entire scene (biaxial geometry). This scanning LiDAR implementation is especially reasonable if the laser source itself is capable of illuminating discrete spots without an external scanning mechanism [12,13,14,15]. These solid-state scanning solutions differ from optical phased array scanning [16,17,18,19] and do not require coherent LiDAR techniques but are compatible with pulsed direct ToF measurements. To ensure that the detector is not confronted with background (ambient) light from all parts of the scene, there are methods that image the scene onto a multipixel detector chip and then only evaluate the pixels onto which the laser spot is currently imaged [20,21]. Such a system is investigated here using a SPAD array with pixel-individual time-to-digital converters (TDCs) and a first-photon measurement architecture [22,23,24,25] as the detector chip. This first-photon measurement means that each pixel can generate only one timestamp per emitted laser pulse. Therefore, a distance measurement with this type of detector typically involves multiple pulses, with the resulting timestamps being evaluated in a histogram. This histogramming allows the time-of-flight of the pulses to be determined even if the detector is additionally triggered by ambient light. While imaging the scene allows uninteresting parts containing only disruptive ambient light to be rejected, using a single pixel per laser pulse is not time efficient for generating a histogram.
Therefore, this paper investigates whether a blurred imaging of the scene, in which the laser spot is smeared over multiple pixels, or even lens-less detection with a bare multipixel chip is advantageous.

2. Materials and Methods

The LiDAR setups are analyzed with respect to laser power, background light, target distance, and measurement statistics. By considering a broad range of background light intensities, the influence of various environmental conditions and sensor parameters can be addressed, such as dark count rates, sunlight reflections, and other weather-induced constant photon fluxes onto the detector, as all of these are represented by uncorrelated “background” photons. For this purpose, a model for the received light is first developed, based on which a signal-to-noise ratio (SNR) analysis and a Monte Carlo simulation of measurement scenarios are performed. The SNR and the simulations will offer insights into performance differences, including robustness against background light, the minimum required laser power, and the range of the systems.

2.1. Rate Model

The model of the received light is designed for the comparison of a focused setup with an aperture area of A and a lens-less setup with a multipixel sensor area of S. Figure 1 visualizes the contribution of the background light and laser light by the fields-of-view (FoV).
A flat wall at distance z is used as the target. The background (ambient) light emitted by the target wall and the reflection of the laser pulse are received by either the aperture and imaging optics sketched in Figure 1a or the bare detector chip as shown in Figure 1b. The detectable intensity is considered to be proportional to the receiving areas A and S. For simplicity, the rate model is set up only for the central measurement point on the optical axis. Other measurement points would follow a very similar investigation but with an angular-dependent receivable laser intensity due to the Lambertian surface characteristics.
Both systems are designed to have the same total FoV, either by the imaging optic or by barn door covers. While the focused system has equally sized pixel FoVs, the lens-less system covers the same total FoV with every pixel. Unlike in the focused case, where each pixel observes its individual patch of background light and one pixel also observes the laser signal, the lens-less approach results in an identical signal mix for every pixel. The parameters of interest are the photon rates impinging on the detectors. With knowledge of the laser wavelength, background light spectrum, and detector properties, these rates could be derived from common intensity units. In this paper, the natural unit for the received light will be rates or rates per pixel (Hz). For a detector with N pixels, the model uses a total background rate per receiving area, RB, and a laser signal rate per receiving area, RL. Table 1 shows the resulting impinging rates on the detectors.
For the focused setup, the division of the background rate per pixel by N originates from the single-pixel FoV, while for the lens-less case, it describes the pixel's area fraction of the total receiving area. In contrast to the focused setup, where the laser spot is imaged onto one pixel, the lens-less setup can use all pixels for a measurement. Multiplying the lens-less column by N yields the total received rates. These relations show that for equal sizes of A and S, the lens-less approach receives the same laser signal but gathers N times more background light. The total rate of the laser signal depends only on the receiving areas.
The transition between these two setups is the blurred imaging of the laser spot onto a pixel region, similar to a “circle of confusion” in traditional imaging optics. The background rate per pixel is independent of the blurring and corresponds to the background rate of the focused setup. The laser signal rate has to be adapted by dividing the focused rate by the number of pixels illuminated by the blurred laser spot. For every degree of blurring, the total received laser rate remains constant for a fixed size of the receiving area. When not indicated differently, the receiving areas A and S are chosen to be equal throughout this paper. Thereby, the focused and blurred setups are directly comparable to the lens-less setup.
In summary, the background rates per pixel are equal for the three setups. The laser signal per pixel (laser rate) is maximum for the focused setup and must be divided by the number of pixels of the detector for the lens-less setup. For the blurred setup, it must be divided by the number of pixels that are illuminated by the blurred laser spot on the detector.
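As a minimal sketch (in Python, with hypothetical example rates and a hypothetical pixel count), the per-pixel rates summarized above and in Table 1 can be expressed as a single rule parameterized by the number of illuminated pixels:

```python
# Per-pixel rate model of Table 1 (example values are illustrative only).
N_TOTAL = 3000   # assumed number of pixels on the detector

def per_pixel_rates(R_B, R_L, n_blur, n_total=N_TOTAL):
    """Return (background rate, laser rate) per pixel for a laser spot
    blurred over n_blur pixels; n_blur = 1 is the focused case and
    n_blur = n_total the lens-less case (receiving areas A = S assumed)."""
    r_b = R_B / n_total          # identical for all three setups
    r_l = R_L / n_blur           # laser signal shared by the illuminated pixels
    return r_b, r_l

# The total received laser rate is independent of the degree of blurring:
for n in (1, 30, N_TOTAL):
    _, r_l = per_pixel_rates(1e8, 1e7, n)
    assert abs(n * r_l - 1e7) < 1e-6
```

The loop at the end checks the invariant stated in the text: multiplying the per-pixel laser rate by the number of illuminated pixels always recovers the total received laser rate.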

2.2. Probability Density Function

This paper will focus on 2D SPAD array detectors with a first-photon measurement principle, as this sensor architecture is reasonable for biaxial direct time-of-flight-based LiDAR systems [21,22]. The architecture allows for pixel-individual ToF measurements. It also means that for each laser pulse emitted, each pixel can only generate one timestamp. For a reliable distance determination, multiple timestamps might have to be evaluated, since it is a matter of statistics whether a background photon generates a timestamp at a random time or a signal photon is measured that carries meaningful timing information. Moreover, as a result of the first-photon measurement principle, the probability of generating a timestamp is not only rate-dependent but also time-dependent, as described by the probability density function (PDF) in Equation (1).
$$\mathrm{PDF}(t) = \begin{cases} r_B \cdot e^{-r_B \cdot t}, & 0 \le t < t_{ToF} \\ (r_B + r_L) \cdot e^{-r_B \cdot t} \cdot e^{-r_L \cdot (t - t_{ToF})}, & t_{ToF} \le t < t_{ToF} + t_p \\ r_B \cdot e^{-r_B \cdot t} \cdot e^{-r_L \cdot t_p}, & t_{ToF} + t_p \le t \end{cases} \quad (1)$$
This probability density function results from the fact that, for any given time t, the probability to detect a photon is given by the instantaneous photon rate rB or rB + rL and the accumulated counter-probability of an earlier SPAD activation, given by Equation (2):
$$1 - \int_0^t \mathrm{PDF}(\tau)\, d\tau \quad (2)$$
The parameters rB and rL correspond to the background rate and laser rate impinging the detector as defined in Table 1. The traveling time tToF of the laser pulse from transmission via reflection at the target to the reception at the detector is called time-of-flight (ToF). tp corresponds to the temporal laser pulse width. The temporal laser pulse shape used is rectangular with infinitely steep slopes. The characteristics of this PDF are shown in Figure 2a.
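Equation (1) translates directly into code. The following Python sketch (the rate, time-of-flight, and pulse-width values used in the sanity check are placeholders, not system parameters from this paper) evaluates the piecewise density:

```python
import math

def first_photon_pdf(t, r_b, r_l, t_tof, t_p):
    """Probability density of Eq. (1) for the first timestamp of a
    first-photon pixel: exponentially decaying background, with the
    laser rate added during the rectangular pulse [t_tof, t_tof + t_p)."""
    if t < t_tof:
        return r_b * math.exp(-r_b * t)
    if t < t_tof + t_p:
        return (r_b + r_l) * math.exp(-r_b * t) * math.exp(-r_l * (t - t_tof))
    return r_b * math.exp(-r_b * t) * math.exp(-r_l * t_p)

# Sanity check: the density integrates to 1, i.e., a sufficiently long
# measurement window eventually yields exactly one timestamp per pixel.
norm = sum(first_photon_pdf(i * 1e-10, 1e7, 1e8, 6.7e-8, 3e-9) * 1e-10
           for i in range(40000))
```

The normalization check reflects the renewal property encoded in Equation (2): the exponential factors are exactly the survival probability of the pixel not having triggered earlier.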
For the simulated ToF measurements, arrival times of incident photons or dark counts are generated from the PDF by the inverse transformation method. Since the arrival times of background events and laser photons follow homogeneous Poisson distributions, their inverted cumulative distribution functions (CDFs) can be calculated separately by following Equations (3) and (4).
$$\mathrm{CDF}(\tau) = \int_0^{\tau} \mathrm{PDF}(t)\, dt = p \quad (3)$$
$$Q(p) = \mathrm{CDF}^{-1}(p) \quad (4)$$
$$Q_{background}(p) = -\frac{\ln(p)}{r_B} \quad (5)$$
$$Q_{laser}(p) = -\frac{\ln(p)}{r_L} + t_{ToF} \quad (6)$$
The final arrival time is the smaller value of both quantile functions in Equations (5) and (6), while the laser photon arrival time is only accepted within tToF and tToF + tp. For every laser pulse and pixel, individual arrival times are generated.
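As a brief illustration (Python; the rates in the comments are arbitrary examples), the per-pixel, per-pulse sampling procedure described above reads:

```python
import math
import random

def sample_arrival(r_b, r_l, t_tof, t_p, rng=random):
    """Draw one first-photon timestamp per pixel and pulse by
    inverse-transform sampling the two homogeneous Poisson processes
    (quantile functions of Eqs. (5) and (6)) and keeping the earlier event."""
    t_background = -math.log(rng.random()) / r_b         # Eq. (5)
    t_laser = -math.log(rng.random()) / r_l + t_tof      # Eq. (6)
    # A laser photon arrival is only accepted within the pulse window.
    if not (t_tof <= t_laser < t_tof + t_p):
        t_laser = math.inf
    return min(t_background, t_laser)
```

Histogramming many such samples over repeated pulses reproduces the shape of the PDF, as in the Monte Carlo example of Figure 2b.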

2.3. Signal-to-Noise Ratio

The constant background rate over time generates an exponentially decreasing signal in the histogram due to the reduced detection probability of a photon to hit a SPAD that was not already triggered before. With an unlimited number of measurements, a histogram would align (proportionally) with the PDF. In real scenarios with limited statistics, a histogram shows more fluctuations, which will eventually prevent the determination of the correct signal pulse position (tToF). In the Monte Carlo simulation result shown in Figure 2b based on the displayed PDF, the background and laser rates are chosen to reveal the main characteristics, with 1000 measurements for the sake of clarity.
The evaluation of the setups includes a signal-to-noise ratio analysis similar to the one derived in Ref. [26]. The SNR definition used originates from Poisson-distributed photon statistics and evaluates the number of events generated by the laser signal, NL, and the background events, NB:
$$\mathrm{SNR} = \frac{\mu}{\sigma} = \frac{N_L}{\sqrt{\sigma_L^2 + \sigma_B^2}} = \frac{N_L}{\sqrt{N_L + N_B}} \quad (7)$$
The number of background-generated events NB is evaluated at the same position in the histogram as where the laser pulse would occur. Both numbers of events are obtained by integrating Equation (1) over the temporal laser pulse width tp. Light hitting the sensor is specified as the background photon rate rB and the laser photon rate rL. The SNR includes the number of possible timestamps that can be generated in the histogram, which is the number of laser pulses M (accumulated measurements) used for a distance determination in the focused setup:
$$\mathrm{SNR}_{focused} = \sqrt{M \cdot e^{-r_B \cdot t_{ToF}}} \cdot \frac{e^{-r_B \cdot t_p} - e^{-(r_B + r_L) \cdot t_p}}{\sqrt{1 - e^{-(r_B + r_L) \cdot t_p}}} \quad (8)$$
For the lens-less setup, with every laser pulse, N (number of pixels illuminated) timestamps can be generated, leading to N times M possible events. Assuming again that A has the same size as S (compare Table 1), the SNR for the lens-less setup carries an additional factor of 1/N for the laser signal rate:
$$\mathrm{SNR}_{lens\text{-}less} = \sqrt{N \cdot M \cdot e^{-r_B \cdot t_{ToF}}} \cdot \frac{e^{-r_B \cdot t_p} - e^{-\left(r_B + \frac{r_L}{N}\right) \cdot t_p}}{\sqrt{1 - e^{-\left(r_B + \frac{r_L}{N}\right) \cdot t_p}}} \quad (9)$$
This SNR definition in Equation (9) will be used throughout this paper since it describes the focused, blurred, and lens-less setup with a specific choice of N.
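The SNRs of Equations (8) and (9) can be sketched as one Python function in which the number of illuminated pixels selects the setup (the numeric values in the usage note are illustrative only):

```python
import math

def snr(r_b, r_l, t_tof, t_p, M, n_pixels=1):
    """SNR of Eq. (9): n_pixels = 1 reduces to the focused case (Eq. (8)),
    n_pixels = N gives the lens-less case, and intermediate values a
    blurred laser spot. Rates in Hz, times in seconds."""
    r_l_eff = r_l / n_pixels  # laser rate shared by the illuminated pixels
    signal = math.exp(-r_b * t_p) - math.exp(-(r_b + r_l_eff) * t_p)
    noise = math.sqrt(1.0 - math.exp(-(r_b + r_l_eff) * t_p))
    return math.sqrt(n_pixels * M * math.exp(-r_b * t_tof)) * signal / noise
```

For example, `snr(1e6, 1e8, 6.7e-8, 3e-9, M=10)` is larger than the single-pulse value by exactly a factor of sqrt(10), reflecting the Poisson statistics underlying Equation (7).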

2.4. Laser Pulse Detection Probability

Since a scanning LiDAR system might use only a few laser pulses, the applicability of the SNR is to be verified in this paper. A simulation of measurements and their probability of a successful distance determination is used for further investigation of the performance differences. This probability describes the chance of a specified algorithm to extract the correct ToF (distance) out of the histogram. Since the ToF is the position of the laser pulse characteristic within the histogram, this probability is referred to as laser pulse detection probability (LPDP). The algorithm receives the histogram and subtracts the background characteristic that is assumed to be known. Then, the (first global) maximum of the resulting histogram is defined as the ToF measurement result, as illustrated in Figure 3.
The simulations include a finite time resolution and a temporal laser pulse width, so that a measurement is considered successful if the histogram maximum occurs within tToF and tToF + tp. The pulse width is 3 ns, which is comparatively short for a pulse-on-demand laser diode. For the discrete time binning, an arbitrary but realistic value of 1 ns bin-width (corresponding to a depth resolution of 15 cm) was chosen. These values were chosen for simplicity and clarity, because a larger pulse width or smaller time bins would smear the laser pulse over more bins, counteracting the formation of the highest possible maximum in the histogram. Depending on tToF, the laser pulse might be positioned in the regime of three or four time bins of the histogram.
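The detection algorithm described above can be sketched as follows. This is a simplified Python illustration of the procedure (background subtraction with an assumed-known exponential background, then first global maximum), not the published implementation; the example values in the sanity check are arbitrary:

```python
import math

def extract_tof_bin(histogram, r_b, bin_width, n_events):
    """Subtract the (assumed known) exponentially decaying background
    expectation from the timestamp histogram and return the index of
    the first global maximum, taken as the ToF measurement result."""
    residual = []
    for i, counts in enumerate(histogram):
        t0, t1 = i * bin_width, (i + 1) * bin_width
        # expected background events per bin: n_events * integral of
        # r_b * exp(-r_b * t) over the bin
        expected = n_events * (math.exp(-r_b * t0) - math.exp(-r_b * t1))
        residual.append(counts - expected)
    return residual.index(max(residual))

# Synthetic check: a pure-background histogram plus a small spike in bin 7.
r_b, bw, n = 1e7, 1e-9, 1000
hist = [n * (math.exp(-r_b * i * bw) - math.exp(-r_b * (i + 1) * bw))
        for i in range(50)]
hist[7] += 5.0
tof_bin = extract_tof_bin(hist, r_b, bw, n)
```

After background subtraction, only the injected laser spike survives in the residual, so the algorithm recovers bin 7 in this constructed example.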

3. Results

The figures in this section evaluate the SNR and the LPDP at a fixed time-of-flight corresponding to an object distance of ten meters, unless indicated otherwise, as in the range analysis in Section 3.3.

3.1. SNR Analysis

At first, the theoretical SNR is evaluated for an accumulation of ten laser pulses per measurement. In the later analysis, fewer pulses are used to reflect the limited time budget in scanning LiDAR setups. Using ten pulses, however, gives the SNR, as a statistical quantity, higher significance. The SNR value is displayed in Figure 4 for six orders of magnitude in laser rate (horizontal axis) reaching the detector. The vertical axis corresponds to the background rate impinging on the detector.
The individual color scales of Figure 4a,b indicate the major difference between the focused setup and the lens-less setup. The focused setup forms a plateau with a maximum SNR of around three for laser rates higher than approximately 10^9 Hz and background rates smaller than 10^7 Hz. In contrast, the lens-less SNR increases for the depicted laser rates and surpasses the focused SNR maximum by far. Higher SNR values are, in general, favorable for further data processing but do not necessarily lead to better results in LiDAR measurements: the LiDAR measurement result is rated by the chance of extracting the correct time-of-flight from the histogram data. When a measurement setup guarantees this successful time-of-flight extraction, higher SNR values do not produce better results. Therefore, the minimum SNR value that leads to a successful measurement is of interest for a system comparison.
Since no absolute SNR value can be defined that corresponds to a specific success rate of measurement for varying setups and parameters, the SNR value of 1 is used as a lower limit throughout this discussion. This SNR limit is shown in Figure 5 for the transition from a focused laser spot (1 pixel) to blurring the laser spot (2–300 pixels) to lens-less operation (3000 pixels). The gray area indicates the parameter space of background rates and laser rates where the focused setup surpasses the SNR limit. The colored curves only highlight the outer boundary of the corresponding areas that are not colored completely for visibility.
Depending on the number of pixels the laser spot is blurred on, the parameter space fulfilling the SNR limit changes, and, depending on the impinging background and laser rates, an optimum of the blurring parameter N between the boundaries N = 1 and N = Nmax might occur. However, as the ambient parameters usually vary in real environments, there is not just one optimal blur. With more pixels illuminated, the SNR limit of 1 can be reached at higher background rates with correspondingly high laser rates. For midrange background rates, the minimum laser rate must be increased compared to the focused setup. Both of these effects become less advantageous in Figure 5b with more pulses per measurement. There, the maximum background rate reached by the SNR-limited areas increases only slightly with the pixel number. Also, the need for higher laser rates is more pronounced in Figure 5b, visible, for example, in the yellow lens-less curve, which separates a larger gray area to its left than in Figure 5a. This suggests that the potential performance gain of unfocused (blurred) systems is maximized in one-shot scenarios, as will be confirmed by the LPDP results.
The curves in Figure 5 are calculated assuming the lens aperture and detector area as equal and without transmission losses. A larger receiving area of the focused and blurred setup could be represented by a diagonal shift of the curves to smaller laser and background rates with respect to the lens-less curve (3000 pixels). This shifting representation is based on the proportionality of the impinging rates to the receiving area. Transmission losses of the optics would result in a shift towards the opposite direction.

3.2. Laser Pulse Detection Probability Analysis

Simulated laser pulse detection probabilities (LPDPs) show courses similar to the lower-SNR-limit results. Figure 6 illustrates these similarities by incorporating SNR limit curves into the pulse detection probability maps. While the plateau of high detection probability mimics the SNR limit for the focused setup in Figure 6a, the lens-less setup in Figure 6b does not. In Figure 6b, the SNR limit of 3 is drawn to match the lens-less plateau of high probability. Deviations between SNR and simulation are still visible and might originate from the simplicity of the pulse detection algorithm. As defined in the methods section, the algorithm extracts the maximum value of the histogram after background subtraction. The statistical fluctuation of a histogram bin is proportional to the square root of the bin value itself. Since the background characteristic is monotonically falling, the maximum fluctuation is found at early times (the beginning of the histogram). The laser pulse characteristic therefore has to exceed the early background fluctuations, while the SNR evaluates the signal of the laser pulse and the background at the same time position.
A simulated single-shot measurement is used to compare the focused setup with a blurring onto 30 pixels. While the halved measurement statistics (one shot instead of two) significantly affect the measurements, the advantage of a non-focused system at high background rates is clearly visible. LPDP false-color representations for the two setups can be found in Figure 7a,b. For the blurred setup, the success rate for an LPDP of 90% is limited to a background rate of approximately 24 MHz, which is an order of magnitude higher than for the focused case. Also, the need for higher laser rates at medium background rates is not as pronounced as it is for the completely lens-less case, in agreement with the results for the SNR in Figure 5a.
The use of a single shot instead of two pulses means that the maximum background rate at which a successful measurement is possible is significantly lower for the focused case. A single-shot measurement with a desired pulse detection probability of more than 90% is only possible if the background trigger probability is below 10%. The trigger probability pB of a background event before the time-of-flight can be calculated from the PDF as follows:
$$p_B = 1 - e^{-r_B \cdot t_{ToF}} \quad (10)$$
by integrating Equation (1) from 0 to tToF. The corresponding background rate rB limit for the single-shot scenario is, therefore, approximately 1.6 MHz. For a two-shot scenario, one early background event can still lead to a successful measurement. The background limit is then obtained by demanding that the squared pB equal 10%. The resulting 5.7 MHz more than triples the possible background rate in the focused case, describing the main difference between Figure 6a and Figure 7a. For the sake of completeness, the minimum laser rate at negligible background rates must be a factor of two higher for the single-shot scenario than for the two-shot scenario. The calculation is based on the laser trigger counter-probability:
$$\bar{p}_L = 1 - p_L = e^{-r_L \cdot t_p} \quad (11)$$
solved for $\bar{p}_L = 0.1$ (one shot) and $\bar{p}_L^2 = 0.1$ (two shots). This comparison of single-shot and two-shot scenarios for the focused setup reveals two significant performance losses that are only partially transferable to the blurred setup. While the minimum laser rate at low background rates also doubles for a single-shot measurement, the background saturation rate changes only from 27 MHz to 34 MHz, i.e., by 26%.
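The quoted background-rate limits follow directly from Equation (10). The short Python check below reproduces them for the 10 m evaluation distance used in this section (round-trip time of flight of about 66.7 ns):

```python
import math

# Invert Eq. (10), p_B = 1 - exp(-r_B * t_tof), for the background rate r_B
# that keeps the per-shot trigger probability at the allowed level:
# p_B = 0.1 for one shot, p_B**2 = 0.1 for two shots.
T_TOF = 2 * 10.0 / 3e8                        # round-trip ToF for 10 m (s)

def background_limit(p_target, n_shots):
    p_b = p_target ** (1.0 / n_shots)         # allowed per-shot probability
    return -math.log(1.0 - p_b) / T_TOF       # invert Eq. (10) for r_B

single_shot_limit = background_limit(0.1, 1)  # ≈ 1.6 MHz
two_shot_limit = background_limit(0.1, 2)     # ≈ 5.7 MHz
```

Running the check confirms the ~1.6 MHz single-shot and ~5.7 MHz two-shot limits stated above.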

3.3. Range Limitations

The results of the SNR and LPDP so far are based on a fixed-distance evaluation. However, the described characteristics and their distinctiveness depend on the flight time. When discussing the LPDP in the context of achievable ranges, two main aspects contribute to the limitations. On the one hand, the receivable laser rate decreases quadratically with distance. On the other hand, the increase in the time-of-flight increases the chance of background-triggered events before the laser arrival. The latter aspect complicates further discussion of the LPDP for multiple distances in a false-color representation as seen before. Alternatively, a measurement is defined as successful for LPDP > 90%, so that a range value can be assigned to a tuple of laser rate, background rate, and the number of pixels illuminated by the blurred laser spot (or 3000 pixels for lens-less operation). For the exemplary range estimation in Figure 8 and Figure 9, the simulated detector properties are chosen to match the characteristics of the CSPADalpha 2D-SPAD detector [22,27] listed in Table 2.
Figure 8a shows the background-dependent course of the range for the maximum laser rate that fulfills the international safety standard [28]. For the 3 ns long laser pulse with a 940 nm wavelength and a pulse repetition rate below 12.5 kHz, the permissible laser peak power is 31 W. For the detector properties shown in Table 2, with a bandpass filter of 10 nm width and an f-number of 1.4, a sunny day (100 klx) would result in a background rate of approximately 10^8 Hz. The length of the histograms used for each data point is adapted to the range value. This avoids unusable rear sections of the histograms that could generate background events hindering a successful measurement.
The achievable range at sufficiently low background rates is independent of the degree of blurring, as visible in Figure 8a. This is because the decreased probability of a laser trigger event due to the laser rate division is compensated by the gained statistics of more pixels. The probability of no laser events being triggered is, therefore, independent of the number of pixels blurred on. The considered probabilities of zero laser events for a focused and a blurred setup are accordingly equal:
$$\bar{p}_{L,N} = \left(e^{-\frac{r_L}{N} \cdot t_p}\right)^N = e^{-r_L \cdot t_p} = \bar{p}_L \quad (12)$$
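Equation (12) can be verified numerically in a few lines of Python (the laser rate and pulse width below are arbitrary example values):

```python
import math

# Spreading the laser rate over N pixels leaves the probability of
# registering no laser event at all unchanged: (e^{-(r_L/N) t_p})^N = e^{-r_L t_p}.
r_l, t_p = 1e9, 3e-9                      # example laser rate and pulse width
p_none_focused = math.exp(-r_l * t_p)
for n in (2, 30, 3000):
    p_none_blurred = math.exp(-(r_l / n) * t_p) ** n
    assert abs(p_none_blurred - p_none_focused) < 1e-12
```

The per-pixel miss probabilities multiply across the N illuminated pixels, which is exactly why the range at low background rates in Figure 8a is independent of the degree of blurring.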
As visible for the data for 300 and 3000 pixels, the limit of negligible background rates decreases for larger numbers of pixels used. This is due to the constant background rate per pixel resulting in a proportional increase in background events with pixel number.
For medium to high background rates, the measurable maximum distance varies with the number of pixels. A conservative blurring onto 30 pixels surpasses the range of the focused setup and of a blurring onto 2 pixels for any background rate plotted. When more pixels are used, the range reduction due to the accumulated background events becomes apparent. For high background rates, however, only the use of many pixels makes a successful distance measurement possible at all. Taking the achievable range based on the LPDP criterion as a figure of merit, a specific system cannot be optimized by a single degree of blurring.
In Figure 8b, the focused system is compared to the blurring onto 30 pixels for 100%, 10%, and 1% target reflectivity. The system range in the constant regime at low background rates is proportional to the square root of the impinging laser rate, which is proportional to the target reflectivity. The gain in range of the blurred system at higher background rates persists regardless of the reflectivity.
When using more laser pulses per measurement, the general superiority of the blurred setup over the focused setup achieved in Figure 8b vanishes. The achievable ranges can be compared in Figure 9a, again for a blurring onto 30 pixels but for two, ten, and one hundred accumulated laser pulses. The target reflectivity is constant at 1%, so that a range of more than 9 m, as achieved in Figure 8b, originates only from the extra pulses accumulated. With increasing pulse accumulation, the range gain of blurring at high background rates decreases. This can be recognized by the horizontal positions of the intersections where the blurred result surpasses the corresponding focused result. Also, range losses due to blurring, which are not visible in the one-shot scenario in Figure 8a, already become apparent when two pulses instead of one are used. For more accumulation, the range discrepancy begins at lower background rates.
For a clear separation of accumulating pulses in a focused setup and the accumulation of multiple pixels in a blurred setup, Figure 9b depicts three sets of accumulations. The range of the focused setup at low background rates increases with the square root of accumulated pulses. In contrast, the single-shot scenarios are limited in their maximum range and can only reduce the loss in range at high background rates.

4. Discussion

The combination of a staring multipixel receiving channel and a scanning transmitting channel determines the signal intensity and the ambient light contribution for the detector architecture used. The investigated first-photon measurement architecture of a SPAD array that is limited to a single timestamp per laser pulse and pixel clearly benefits from statistics-increasing techniques, such as a blurred imaging optic. Even if the signal distribution onto multiple pixels does not increase the overall range of a LiDAR measurement in ideal conditions, it can maintain the range better under ambient illumination. This extra ambient light robustness decreases with the use of more laser pulses per measurement. Among direct ToF LiDAR systems, single-point scanning systems have the smallest time budget per pixel, making single-shot measurements favorable anyway. The results presented only hold true for the first-photon architecture and will differ for multi-event architectures [29,30,31]. Further investigation could elaborate the gain of blurring for these techniques, which still have finite event-depths and dead-times.
A lens-less operation that enables the use of all pixels for each measurement independent of the current laser scanning position does not optimize the range. It mostly reduces the range except for situations with extreme ambient illumination. Range reduction in comparison with focused imaging also occurs depending on the degree of blurring. An optimization of the degree of blurring could be performed based on a desired range per ambient illumination profile. Passive adaptation of the degree of blurring to the respective target distance could be achieved by utilizing the distance dependency of the circle of confusion. Active adaptation of the imaging optics could readjust the blurring for varying ambient illumination.
The assumption of a homogeneous ambient light distribution within the total FoV made in this paper needs to be reconsidered when designing a blurred staring biaxial LiDAR scanner. Parts of the scanned scene with strong ambient illumination from sunlight or artificial light sources may produce a 3D image with missing or incorrect spots in a focused LiDAR system. In a blurred system, these spots might be measured correctly thanks to the enhanced ambient light robustness. In the worst case, however, the incorrect spots might grow even larger, since the blurring distributes the bright regions to neighboring pixels.
In the scanning LiDAR implementation discussed, the spatially resolved multipixel detector is not responsible for the spatial resolution of the 3D image, since the laser source or laser scanner provides the current angular position. In principle, the detector could therefore be replaced by a silicon photomultiplier (SiPM). In a blurred scenario, however, the detector area not illuminated by the laser spot cannot be rejected as it can in a SPAD array with pixel-wise TDCs. Future research may investigate biaxial SiPM LiDAR with regard to this drawback and to possible advantages of threshold-based signal detection over the first-photon measurement principle discussed here.

Author Contributions

Conceptualization, M.L. and J.R.; methodology, K.A.; software, K.A.; validation, K.A., M.L. and A.H.; formal analysis, K.A.; writing—original draft preparation, K.A.; writing—review and editing, M.L., A.H., J.R. and M.D.Z.; supervision, M.L., M.D.Z., S.N. and A.G.; funding acquisition, M.L., J.R., M.D.Z., S.N. and A.G. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by a Fraunhofer ICON grant. This work was partially supported by the project of the Council for Science, Technology and Innovation, the Cross Ministerial Strategic Innovation Promotion Program (SIP).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
LiDAR: Light detection and ranging
SPAD: Single photon avalanche diode
ToF: Time-of-flight
TDC: Time-to-digital converter
SNR: Signal-to-noise ratio
FoV: Field of view
PDF: Probability density function
LPDP: Laser pulse detection probability
SiPM: Silicon photomultiplier

References

  1. Villa, F.; Severini, F.; Madonini, F.; Zappa, F. SPADs and SiPMs Arrays for Long-Range High-Speed Light Detection and Ranging (LiDAR). Sensors 2021, 21, 3839. [Google Scholar] [CrossRef] [PubMed]
  2. Makino, K.; Fujita, T.; Hashi, T.; Adachi, S.; Nakamura, S.; Yamamoto, K.; Baba, T.; Suzuki, Y. Development of an InGaAs SPAD 2D array for flash LIDAR. In Proceedings of the Quantum Sensing and Nano Electronics and Photonics XV, San Francisco, CA, USA, 28 January–2 February 2018; Razeghi, M., Brown, G.J., Leo, G., Lewis, J.S., Eds.; SPIE: Bellingham, WA, USA, 2018; p. 19, ISBN 9781510615656. [Google Scholar]
  3. Manuzzato, E.; Tontini, A.; Seljak, A.; Perenzoni, M. A 64 × 64-Pixel Flash LiDAR SPAD Imager with Distributed Pixel-to-Pixel Correlation for Background Rejection, Tunable Automatic Pixel Sensitivity and First-Last Event Detection Strategies for Space Applications. In Proceedings of the 2022 IEEE International Solid-State Circuits Conference (ISSCC), San Francisco, CA, USA, 20–26 February 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 96–98, ISBN 978-1-6654-2800-2. [Google Scholar]
  4. Padmanabhan, P.; Zhang, C.; Charbon, E. Modeling and Analysis of a Direct Time-of-Flight Sensor Architecture for LiDAR Applications. Sensors 2019, 19, 5464. [Google Scholar] [CrossRef] [PubMed]
  5. Burkard, R.; Viga, R.; Ruskowski, J.; Grabmaier, A. Generalized comparison of the accessible emission limits of flash- and scanning LiDAR-systems. In Proceedings of the SMACD/PRIME 2021: International Conference on SMACD and 16th Conference on PRIME, Online, 19–22 July 2021; pp. 292–295. [Google Scholar]
  6. Lemmetti, J.; Sorri, N.; Kallioniemi, I.; Melanen, P.; Uusimaa, P. Long-range all-solid-state flash LiDAR sensor for autonomous driving. In Proceedings of the High-Power Diode Laser Technology XIX, Online, 6–12 March 2021; Zediker, M.S., Ed.; SPIE: Bellingham, WA, USA, 2021; p. 22, ISBN 9781510641716. [Google Scholar]
  7. Yoo, H.W.; Druml, N.; Brunner, D.; Schwarzl, C.; Thurner, T.; Hennecke, M.; Schitter, G. MEMS-based lidar for autonomous driving. Elektrotech. Inftech. 2018, 135, 408–415. [Google Scholar] [CrossRef]
  8. Kasturi, A.; Milanovic, V.; Lovell, D.B.; Hu, F.; Ho, D.; Su, Y.; Ristic, L. Comparison of MEMS mirror LiDAR architectures. In Proceedings of the MOEMS and Miniaturized Systems, San Francisco, CA, USA, 1–6 February 2020; Piyawattanametha, W., Park, Y.-H., Zappe, H., Eds.; SPIE: Bellingham, WA, USA, 2020; p. 31, ISBN 9781510633490. [Google Scholar]
  9. Noda, S.; Tanahashi, Y.; Tanimoto, R.; Koutsuka, Y.; Kitano, K.; Takahashi, K.; Muramatsu, E.; Kawabata, C. Development of coaxial 3D-LiDAR systems using MEMS scanners for automotive applications. In Proceedings of the Optical Data Storage 2018: Industrial Optical Devices and Systems, San Diego, CA, USA, 19–23 August 2018; Katayama, R., Takashima, Y., Eds.; SPIE: Bellingham, WA, USA, 2018; p. 14, ISBN 9781510620858. [Google Scholar]
  10. Wang, Y.; Tang, P.; Shen, L.; Pu, S. LiDAR System Using MEMS Scanner-Based Coaxial Optical Transceiver. In Proceedings of the 2020 IEEE 5th Optoelectronics Global Conference (OGC), Shenzhen, China, 7–11 September 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 166–168, ISBN 978-1-7281-9860-6. [Google Scholar]
  11. Xu, F.; Qiao, D.; Song, X.; Zheng, W.; He, Y.; Fan, Q. A Semi-coaxial MEMS-based LiDAR. In Proceedings of the IECON 2019—45th Annual Conference of the IEEE Industrial Electronics Society, Lisbon, Portugal, 14–17 October 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 6726–6731, ISBN 978-1-7281-4878-6. [Google Scholar]
  12. Sakata, R.; Ishizaki, K.; de Zoysa, M.; Fukuhara, S.; Inoue, T.; Tanaka, Y.; Iwata, K.; Hatsuda, R.; Yoshida, M.; Gelleta, J.; et al. Dually modulated photonic crystals enabling high-power high-beam-quality two-dimensional beam scanning lasers. Nat. Commun. 2020, 11, 3487. [Google Scholar] [CrossRef]
  13. Sakata, R.; Ishizaki, K.; de Zoysa, M.; Kitamura, K.; Inoue, T.; Gelleta, J.; Noda, S. Photonic-crystal surface-emitting lasers with modulated photonic crystals enabling 2D beam scanning and various beam pattern emission. Appl. Phys. Lett. 2023, 122, 130503. [Google Scholar] [CrossRef]
  14. Noda, S.; Yoshida, M.; Inoue, T.; de Zoysa, M.; Ishizaki, K.; Sakata, R. Photonic-crystal surface-emitting lasers. Nat. Rev. Electr. Eng. 2024, 1, 802–814. [Google Scholar] [CrossRef]
  15. Zhuo, S.; Xia, T.; Zhao, L.; Sun, M.; Wu, Y.; Wang, L.; Yu, H.; Xu, J.; Wang, J.; Lin, Z.; et al. Solid-State dToF LiDAR System Using an Eight-Channel Addressable, 20-W/Ch Transmitter, and a 128 × 128 SPAD Receiver With SNR-Based Pixel Binning and Resolution Upscaling. IEEE J. Solid-State Circuits 2023, 58, 757–770. [Google Scholar] [CrossRef]
  16. Lei, Y.; Zhang, L.; Yu, Z.; Xue, Y.; Ren, Y.; Sun, X. Si Photonics FMCW LiDAR Chip with Solid-State Beam Steering by Interleaved Coaxial Optical Phased Array. Micromachines 2023, 14, 1001. [Google Scholar] [CrossRef] [PubMed]
  17. Muntaha, S.T.; Hokkanen, A.; Harjanne, M.; Cherchi, M.; Roussey, M.; Aalto, T. Optical beam steering and distance measurement experiments through an optical phased array and a 3D printed lens. Opt. Express 2025, 33, 3685. [Google Scholar] [CrossRef]
  18. Poulton, C.V.; Byrd, M.J.; Russo, P.; Timurdogan, E.; Khandaker, M.; Vermeulen, D.; Watts, M.R. Long-Range LiDAR and Free-Space Data Communication With High-Performance Optical Phased Arrays. IEEE J. Select. Topics Quantum Electron. 2019, 25, 1–8. [Google Scholar] [CrossRef]
  19. Lee, J.; Shin, D.; Jang, B.; Byun, H.; Lee, C.; Shin, C.; Hwang, I.; Shim, D.; Lee, E.; Kim, J.; et al. Single-Chip Beam Scanner with Integrated Light Source for Real-Time Light Detection and Ranging. In Proceedings of the 2020 IEEE International Electron Devices Meeting (IEDM), San Francisco, CA, USA, 12–18 December 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 7.2.1–7.2.4, ISBN 978-1-7281-8888-1. [Google Scholar]
  20. Sesta, V.; Severini, F.; Villa, F.; Lussana, R.; Zappa, F.; Nakamuro, K.; Matsui, Y. Spot Tracking and TDC Sharing in SPAD Arrays for TOF LiDAR. Sensors 2021, 21, 2936. [Google Scholar] [CrossRef] [PubMed]
  21. Burkard, R.; Ligges, M.; Merten, A.; Sandner, T.; Viga, R.; Grabmaier, A. Histogram formation and noise reduction in biaxial MEMS-based SPAD light detection and ranging systems. J. Opt. Microsyst. 2022, 2, 011005. [Google Scholar] [CrossRef]
  22. Haase, J.F.; Grollius, S.; Grosse, S.; Buchner, A.; Ligges, M. A 32 × 24 pixel SPAD detector system for LiDAR and quantum imaging. In Proceedings of the Photonic Instrumentation Engineering VIII, Online, 6–12 March 2021; Soskind, Y., Busse, L.E., Eds.; SPIE: Bellingham, WA, USA, 2021; p. 20, ISBN 9781510642218. [Google Scholar]
  23. Henderson, R.K.; Johnston, N.; Hutchings, S.W.; Gyongy, I.; Abbas, T.A.; Dutton, N.; Tyler, M.; Chan, S.; Leach, J. 5.7 A 256×256 40nm/90nm CMOS 3D-Stacked 120dB Dynamic-Range Reconfigurable Time-Resolved SPAD Imager. In Proceedings of the 2019 IEEE International Solid-State Circuits Conference (ISSCC), San Francisco, CA, USA, 17–21 February 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 106–108, ISBN 978-1-5386-8531-0. [Google Scholar]
  24. Padmanabhan, P.; Zhang, C.; Cazzaniga, M.; Efe, B.; Ximenes, A.R.; Lee, M.-J.; Charbon, E. 7.4 A 256×128 3D-Stacked (45nm) SPAD FLASH LiDAR with 7-Level Coincidence Detection and Progressive Gating for 100m Range and 10klux Background Light. In Proceedings of the 2021 IEEE International Solid-State Circuits Conference (ISSCC), San Francisco, CA, USA, 13–22 February 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 111–113, ISBN 978-1-7281-9549-0. [Google Scholar]
  25. Henderson, R.K.; Johnston, N.; Della Mattioli Rocca, F.; Chen, H.; Day-Uei Li, D.; Hungerford, G.; Hirsch, R.; Mcloskey, D.; Yip, P.; Birch, D.J.S. A 192 × 128 Time Correlated SPAD Image Sensor in 40-nm CMOS Technology. IEEE J. Solid-State Circuits 2019, 54, 1907–1916. [Google Scholar] [CrossRef]
  26. Buchner, A.; Hadrath, S.; Burkard, R.; Kolb, F.M.; Ruskowski, J.; Ligges, M.; Grabmaier, A. Analytical Evaluation of Signal-to-Noise Ratios for Avalanche- and Single-Photon Avalanche Diodes. Sensors 2021, 21, 2887. [Google Scholar] [CrossRef] [PubMed]
  27. Leitel, R.; Dannberg, P.; Seise, B.; Brüning, R.; Grosse, S.; Ligges, M. Enhancement of SPAD-camera sensitivity by molded microlens arrays. In Proceedings of the Optical Sensing and Detection VIII, Strasbourg, France, 7–12 April 2024; Berghmans, F., Zergioti, I., Eds.; SPIE: Bellingham, WA, USA, 2024; p. 71, ISBN 9781510673168. [Google Scholar]
  28. IEC 60825-1:2014; Safety of Laser Products—Part 1: Equipment Classification and Requirements. International Electrotechnical Commission: Berlin, Germany, 2014.
  29. Al Abbas, T.; Dutton, N.A.; Almer, O.; Finlayson, N.; Della Rocca, F.M.; Henderson, R. A CMOS SPAD Sensor With a Multi-Event Folded Flash Time-to-Digital Converter for Ultra-Fast Optical Transient Capture. IEEE Sensors J. 2018, 18, 3163–3173. [Google Scholar] [CrossRef]
  30. Fu, Y.-H.; Zhao, Z.-Y.; Zhao, Z.; Li, J.-C.; Chang, Y.-C. A SPAD Readout Circuit Based on Column-Level Event-Driven Time-to-Digital Converter. In Proceedings of the 2022 IEEE 16th International Conference on Solid-State & Integrated Circuit Technology (ICSICT), Nanjing, China, 25–28 October 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 1–3, ISBN 978-1-6654-6906-7. [Google Scholar]
  31. Sesta, V.; Incoronato, A.; Madonini, F.; Villa, F. Time-to-digital converters and histogram builders in SPAD arrays for pulsed-LiDAR. Measurement 2023, 212, 112705. [Google Scholar] [CrossRef]
Figure 1. (a) Schematic side view of the pixel FoV for the case in which the target wall is imaged in focus onto the pixel matrix; (b) detector FoV for the lens-less case. The dashed lines indicate the fraction of the wide field of view that contributes any laser signal. Note that the depicted barndoor aperture does not actually limit the FoV to the same size as in (a); the drawing is only schematic.
Figure 2. (a) Characteristic first-photon probability density function; (b) corresponding histogram simulated with 1000 measurements.
Figure 3. (a) Measurement histogram and trigger probability of the background light (red curve); the maximum position at early times does not correspond to the laser signal that is located in bin 31. (b) The same histogram with subtracted background characteristic and arrow-indicated laser signal contribution resulting in a maximum at the correct position.
Figure 4. (a) SNR result in false-color representation of the focused setup for 10 pulses per measurement at 10 m target distance; (b) SNR of the lens-less setup using all 3000 pixels of the detector.
Figure 5. (a) Regimes of SNR ≥ 1 for 2 pulses per measurement for focused and blurred laser spots on 2, 30, 300, and 3000 pixels; (b) for 100 pulses per measurement.
Figure 6. (a) Laser pulse detection probability of the focused setup in false-color representation with SNR limit of 1 in red for a two-shot measurement; (b) lens-less LPDP result with additional SNR limit curve for SNR = 3 in pink.
Figure 7. (a) LPDP of the focused setup in false-color representation for single-shot measurements; (b) LPDP for a blurring onto 30 pixels.
Figure 8. (a) Background rate dependency of LiDAR range defined by 90% LPDP for single-shot measurements; (b) LiDAR range comparison for focused and blurred laser spots at three target reflectance levels.
Figure 9. (a) Range comparison of the focused measurement and blurring onto 30 pixels for three multi-shot scenarios; (b) range results for focused multi-shot scenarios and blurred single-shot scenarios.
Table 1. Comparison of the impinging rates on the detectors.

Light Type      Focused Setup:       Lens-Less Setup:     Blurred Setup:
                Rate per Pixel       Rate per Pixel       Rate per Pixel
Background      r_B = R_B · A / N    r_B = R_B · S / N    r_B = R_B · A / N
Laser signal    r_L = R_L · A        r_L = R_L · S / N    r_L = R_L · A / N_blurred *

* N_blurred denotes the number of pixels illuminated by the blurred laser spot.
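The three columns of Table 1 can be expressed compactly in code. In this sketch, R_B and R_L are the total impinging background and laser rates, N is the pixel count, and A and S are the collection factors of the focused and lens-less configurations as used in the table; the symbol meanings are inferred from the table, since their formal definitions lie outside this section.

```python
def per_pixel_rates(R_B, R_L, N, A, S, N_blurred):
    """Per-pixel background (r_B) and laser (r_L) photon rates for the
    three receiver configurations of Table 1. N_blurred is the number
    of pixels illuminated by the blurred laser spot."""
    return {
        "focused":   {"background": R_B * A / N, "laser": R_L * A},
        "lens_less": {"background": R_B * S / N, "laser": R_L * S / N},
        "blurred":   {"background": R_B * A / N,
                      "laser": R_L * A / N_blurred},
    }
```

The table's key asymmetry is visible directly: blurring divides only the laser rate by N_blurred, while the per-pixel background rate stays the same as in the focused case.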
Table 2. Detector properties used for range estimations [22,27].

Parameter                          Value         Unit
Receiving area                     8.32 × 6.24   mm²
Fill factor (with micro lenses)    26.8          %
Quantum efficiency (at 940 nm)     2             %
Avalanche probability              50            %
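Multiplying the efficiency-related properties of Table 2 gives a first-order effective photon detection efficiency. This product is a common approximation; the paper's range model may combine the quantities differently.

```python
def effective_pde(fill_factor, quantum_efficiency, avalanche_prob):
    """First-order effective photon detection efficiency as the product
    of geometric fill factor, quantum efficiency, and avalanche
    probability (all given as fractions)."""
    return fill_factor * quantum_efficiency * avalanche_prob

# Table 2 values at 940 nm: 26.8% x 2% x 50% -> about 0.27% overall.
pde = effective_pde(0.268, 0.02, 0.50)
```

The sub-percent result illustrates why the photon budget of such systems is tight and why minimizing optical transfer losses, e.g., by omitting imaging optics, matters.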

Share and Cite

MDPI and ACS Style

Albert, K.; Ligges, M.; Henschke, A.; Ruskowski, J.; De Zoysa, M.; Noda, S.; Grabmaier, A. Performance Comparison of Multipixel Biaxial Scanning Direct Time-of-Flight Light Detection and Ranging Systems With and Without Imaging Optics. Sensors 2025, 25, 3229. https://doi.org/10.3390/s25103229
