

Amplitude modulated continuous wave (AMCW) lidar systems commonly suffer from non-linear phase and amplitude responses due to a number of known factors such as aliasing and multipath interference. In order to produce useful range and intensity information it is necessary to remove these perturbations from the measurements. We review the known causes of non-linearity, namely aliasing, temporal variation in correlation waveform shape and mixed pixel/multipath interference. We also introduce other sources of non-linearity that have been ignored in the literature, including crosstalk, modulation waveform envelope decay and non-circularly-symmetric noise statistics. An experimental study is conducted to evaluate techniques for the mitigation of non-linearity, and it is found that harmonic cancellation provides a significant improvement in phase and amplitude linearity.

Time-of-Flight (ToF) full-field range imaging cameras measure the distance to objects in the scene for every pixel in an image simultaneously. This is achieved by illuminating the scene with intensity modulated or encoded light, and imaging the backscattered light using a specialised gain modulated image sensor, allowing measurement of the illumination time-of-flight, and thus distance. This technology has matured to a level where it is generating interest in consumer and industrial applications. A number of manufacturers are offering off-the-shelf range-imaging cameras or development systems intended as demonstrators or reference designs for product designers. Although most commercial ToF cameras can achieve subcentimetre measurement precision, accuracy is often an order of magnitude worse due to non-linear range responses. These errors are a significant factor in limiting uptake of full-field range imaging technology in many application areas.

Linearity can be affected by mechanisms either external or internal to the camera. The most significant external influence is generally multipath or mixed pixel interference, where the distance measurement is perturbed by stray light or multiple returns at object edges; these are extremely scene dependent and unpredictable. This paper focuses primarily on linearity errors due to internal influences, and hence these external mechanisms will not be considered in any depth.

In the idealised case, a scene is illuminated with sinusoidally modulated light; ranging is achieved by deducing the time delay introduced by the distance travelled. Because it is very difficult to directly measure the phase and amplitude of a high frequency signal across a 2D field-of-view, Amplitude Modulated Continuous Wave lidar detectors operate by indirectly measuring the backscattered illumination. Range measurements are performed by correlating the detected illumination signal at the sensor with a reference signal, and integrating over time. By changing the phase of the reference signal, it is possible to determine the time-of-flight. Commercial range cameras typically achieve this by acquiring four images at 90 degree phase offsets. The negative fundamental bin of a Fourier transform is calculated for every pixel, revealing the amplitude and phase information, effectively performing double quadrature detection on the detected illumination.
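The four-sample calculation can be sketched as follows (a minimal Python illustration of quadrature detection; the function and the simulated values are ours, not from any particular camera API):

```python
import math

def phase_amplitude(samples):
    """Recover phase and amplitude from correlation samples taken at
    equally spaced phase offsets by evaluating the fundamental DFT bin."""
    n = len(samples)
    re = sum(s * math.cos(2 * math.pi * k / n) for k, s in enumerate(samples))
    im = sum(s * math.sin(2 * math.pi * k / n) for k, s in enumerate(samples))
    # Scale by 2/n so the amplitude of a pure sinusoid is recovered exactly.
    return math.atan2(im, re) % (2 * math.pi), 2 * math.hypot(re, im) / n

# Ideal sinusoidal correlation waveform with a background offset:
# four samples at 0, 90, 180 and 270 degree phase offsets.
true_phase, true_amp, offset = 1.2, 0.8, 2.0
samples = [offset + true_amp * math.cos(2 * math.pi * k / 4 - true_phase)
           for k in range(4)]
phase, amp = phase_amplitude(samples)
# For a pure sinusoid the estimate is exact: phase = 1.2, amp = 0.8.
```

Note that the constant background offset is automatically rejected by the DFT, which is one reason the four-step scheme is so widely used.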

Internal sources of linearity errors are predominantly influenced by the illumination and sensor modulation waveform shapes; in particular, the level of harmonic content. In an ideal case, pure sinusoidal modulation would result in excellent measurement linearity, but in real-world devices sinusoidal modulation is difficult to achieve due to nonlinear pixel and illumination modulation responses. Square wave modulation is much more practical because square signals can be generated easily with digital electronics. However, because only four samples are acquired per cycle, any odd harmonics present in the correlated waveforms are aliased onto the fundamental, interfering with the measurements and causing linearity errors. These linearity errors are generally viewed as well behaved and predictable; hence manufacturers typically mitigate these effects with fixed calibration tables. However, because the modulation waveforms can change due to factors such as operating frequency and temperature, typical calibrations are only valid for specific, limited operating conditions. In order to ensure robust calibration, given process variation, each camera needs to be individually calibrated, which can impact the efficiency of the manufacturing process. With demand for ever-increasing accuracy, high quality modelling and mitigation of linearity errors are increasing in importance.
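This aliasing can be demonstrated numerically. The Python sketch below (our own illustration, not tied to any particular camera) uses a triangle correlation waveform, which is what correlating two 50% duty cycle square waves produces, samples it at four phase offsets, and records the resulting cyclic phase error over a full phase sweep:

```python
import math

def triangle(theta):
    """Triangle wave of period 2*pi: the correlation of two square waves."""
    x = (theta / (2 * math.pi)) % 1.0
    return 1.0 - 4.0 * x if x <= 0.5 else 4.0 * x - 3.0

def measured_phase(true_phase, n=4):
    """Estimate phase from n equally spaced samples of the correlation."""
    samples = [triangle(2 * math.pi * k / n - true_phase) for k in range(n)]
    re = sum(s * math.cos(2 * math.pi * k / n) for k, s in enumerate(samples))
    im = sum(s * math.sin(2 * math.pi * k / n) for k, s in enumerate(samples))
    return math.atan2(im, re) % (2 * math.pi)

# Sweep the true phase over a full cycle and record the linearity error.
errors = []
for i in range(400):
    phi = 2 * math.pi * i / 400
    err = (measured_phase(phi) - phi + math.pi) % (2 * math.pi) - math.pi
    errors.append(err)
# The odd harmonics of the triangle wave alias onto the fundamental,
# producing a cyclic error (several hundredths of a radian here) that
# repeats four times per 2*pi cycle.
```

The four-cycle periodicity of the error is characteristic of third and fifth harmonic aliasing with four phase steps.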

Ideally, linearity errors should be addressed at their source rather than compensated for by calibration. Several techniques for addressing these error sources have recently been published, including heterodyning and harmonic cancellation. In this paper we first discuss the sources of linearity errors and the principles behind removing them. In particular, we introduce explanations for integration-time-dependent aliasing and for cyclic errors which cannot be explained by aliasing alone.

This paper discusses the causes of non-linear phase and amplitude responses, which can be categorised by plotting the phase and amplitude error over a full phase cycle, ideally using a translation stage. From a diagnostic perspective, many different properties of the system can be inferred from a phase sweep. Linear errors over a phase sweep are generally due to temperature drift over the experiment or due to mixed pixel/multipath interference. In either case, the two ends of the sweep do not join together. Errors composed of a single cycle in amplitude and phase over the entire 2π ambiguity range are typically the signature of scattered light or multipath interference.

In Section 2 we start by developing a detailed model for measurement formation, detailing both the idealised case and the practical impacts of real sampling methodologies. Section 3 gives a comprehensive overview of known and new linearity error sources in AMCW lidar systems, starting by covering the well-known problem of the aliasing of correlation waveform harmonics. Standard methods of correction for this problem are detailed, as well as their limitations; these limitations include spatiotemporal variation in correlation waveform shape, which is frequently ignored as a factor by calibration based correction methods. New linearity error sources are discussed; in particular, crosstalk and modulation envelope decay and non-circularly-symmetric noise statistics, generally caused by the presence of a second harmonic in the correlation waveform. Finally, Section 4 details advanced methods of aliasing mitigation.

Functions of a discrete variable are notated with square brackets. The imaginary unit i satisfies i² = −1, and the superscript * denotes complex conjugation.

The scene is illuminated with a modulated waveform which is reflected by the scene back to the sensor. The time-of-flight to the scene and back to the sensor is encoded as a phase shift of the received modulated signal.

Let g_i(t) denote the illumination modulation waveform, periodic at the modulation frequency f_m, and let b_ξ denote the backscattered signal returns from the scene.

The backscattered illumination received at the sensor, allowing for time-of-flight, can be modelled as a convolution of g_i with the backscattered signal returns b_ξ

The signal received at the sensor is correlated with a reference signal, the sensor modulation waveform g_s, and integrated over time. This gives the correlation waveform

By sampling the correlation waveform, that is, by making measurements of

It is useful to introduce a reference waveform

Most phase and amplitude non-linearities occur because it is not possible to perfectly measure the correlation waveform; in this section we explain the most common homodyne sampling method and some additional less common approaches.

The standard homodyne approach is to calculate the Discrete Fourier Transform (DFT) of

Now let

Let _{q}(

General simplifications of the DFT have also been attempted by taking fewer than four samples. In its simplest form, this involves taking measurements at only zero and ninety degree phase offsets; the phase measurement problem then reduces to
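As a sketch of the two-sample case (our own illustration), assuming the constant background has already been removed so that only amplitude and phase remain unknown:

```python
import math

# With only the 0 and 90 degree samples of an ideal sinusoidal
# correlation h(theta) = A*cos(theta - phi), and the background offset
# assumed removed (two samples cannot determine three unknowns):
#   s0  = A*cos(phi),   s90 = A*sin(phi)
# so the phase reduces to a single arctangent.
A, phi = 0.7, 2.1
s0 = A * math.cos(phi)
s90 = A * math.sin(phi)
phase = math.atan2(s90, s0) % (2 * math.pi)
amplitude = math.hypot(s0, s90)
```

The offset-removal assumption is the key limitation: any residual background biases both estimates.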

Other approaches to range and amplitude determination have included shape fitting methods [

In reality, it is impossible to directly measure the negative fundamental Fourier bin because the sampling process introduces systematic non-linearities in phase and amplitude. In this section we review the literature regarding these non-linearities and explain the mechanisms behind the systematic errors. These error sources include aliasing of correlation waveform harmonics due to the sampling inherent in the measurement process, temporal drift in phase and amplitude, and irregular phase steps due to either crosstalk or modulation envelope decay. Other error sources discussed include the mixed pixel/multipath interference problem and errors due to non-circularly-symmetric noise statistics in certain systems. In addition to discussing the causes of systematic error, some standard mitigation approaches are discussed, such as B-spline models for aliasing calibration. A summary table of error sources and mitigation methods is included at the end of the section.

The most common cause of non-linear phase and amplitude is aliasing of correlation waveform harmonics.

For the case of a single harmonic aliasing onto the fundamental, the perturbation _{q}(_{N}

There is a critical threshold at which there is more than one solution to the aliasing problem, given by

Although it is not typically a consideration in practice, for a more complicated waveform it is necessary to determine whether _{q}(

There are two major approaches to aliasing mitigation: the first is to try to prevent aliasing from occurring in the first place, and the second is to calibrate for the non-linearities after-the-fact. One method of preventing aliasing is to use pure sinusoidal modulation; however, this is very difficult because of the non-linear illumination and sensor modulation responses. Another approach is to adjust the duty cycles of the laser and sensor modulation so as to remove the frequencies most responsible for the perturbation. Payne

An ideal abstraction of the sensor might assume that there is one singular reference waveform; however, in reality the waveform shape changes both temporally and spatially. Typically a modulated CMOS sensor is driven from one side of the chip, and as the signal travels down the sensor columns it becomes phase shifted and attenuated. The most appropriate electrical model is that of a transmission line. Each pixel essentially acts as an RLC network and, depending on the sensor design and operating frequency, there is the potential for reflections and even standing waves. In contrast to modulated CMOS sensors, image intensifiers are typically driven from the outside, resulting in a characteristic ‘irising effect’ [

The presence of a phase shift across CMOS sensors has been acknowledged in other work: Fuchs

Not only is the phase and amplitude of the negative fundamental frequency of the sensor response waveform perturbed, but the entire waveform is filtered and phase shifted.

Another important effect is temporal variation in the amplitude and phase of the sensor response; this is probably temperature related, as phase response has been found to be a function of sensor temperature [

A number of papers in the literature show temperature-induced trends in their phase linearity data. Kahlmann and Ingensand [

An important observation is that aliasing can only cause perturbations with frequencies of

Section 3.3 discussed changes in the correlation waveform over large timescales, primarily due to temperature changes. However, this effect is not limited to long periods of time; in some systems the modulation waveform shape changes within the integration period. For example,

While previous research [

What is not immediately obvious is how this small-scale variation in illumination output can result in uneven phase steps. Given a continuously operating system taking measurements of phase steps at equally-spaced intervals with no pauses between subsequent measurements, the changes in waveform shape and modulation envelope should be identical across all of the phase steps; thus it would be adequate to merely determine the mean waveform shape over the integration period to calibrate the results. However, if the system is not continuously running, then it is possible for the first phase step to have different behaviour from the rest. This could potentially occur if captures are being triggered in hardware or software, or if all the phase steps are being accumulated and then synchronously read out together at the end of the phase step sequence. Because the changes in waveform shape are relatively small, for short integration times the change in modulation envelope is probably the biggest factor. In this circumstance, the sampling function might be modelled as
where w_1, w_2, …, w_{m−1} = 1 and w_0 ≠ 1. Immediately, the frequency response of the sampling function has changed. Whereas for ideal sampling only harmonics satisfying
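The consequence of an atypical first phase step can be illustrated numerically. In the Python sketch below (our own illustration; the 5% shortfall in the first step is an arbitrary assumed value), even a pure sinusoidal correlation waveform, which is immune to ordinary aliasing, acquires a cyclic error:

```python
import math

def measured_phase(true_phase, w0):
    """Four-step phase estimate where the first step's effective
    integration weight w0 differs from the remaining steps."""
    weights = [w0, 1.0, 1.0, 1.0]
    samples = [w * math.cos(2 * math.pi * k / 4 - true_phase)
               for k, w in zip(range(4), weights)]
    re = sum(s * math.cos(2 * math.pi * k / 4) for k, s in enumerate(samples))
    im = sum(s * math.sin(2 * math.pi * k / 4) for k, s in enumerate(samples))
    return math.atan2(im, re) % (2 * math.pi)

def max_error(w0, n=200):
    worst = 0.0
    for i in range(n):
        phi = 2 * math.pi * i / n
        err = (measured_phase(phi, w0) - phi + math.pi) % (2 * math.pi) - math.pi
        worst = max(worst, abs(err))
    return worst

# Equal weights give no error for a pure sinusoid; a 5% shortfall in
# the first step introduces a two-cycle error (about 0.01 rad here).
baseline = max_error(1.0)
perturbed = max_error(0.95)
```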

Crosstalk is a more complicated electrical phenomenon. All AMCW lidar systems require illumination and sensor modulation signals, while some sensors also have complicated drive signals, such as illumination ‘kicker’ pulses to improve switching times and give more control over the resulting illumination waveform shape. It is tremendously difficult to prevent any leakage of one signal into another, particularly when both the illumination and sensor modulation signals are driving relatively heavy loads from the same power supply in close physical proximity. Our own custom-designed system suffered issues with crosstalk between modulation signals and the readout clock of a CCD camera [

Crosstalk is difficult to model as the convolution of a sampling function with a correlation waveform because, strictly, there is no fixed correlation waveform: effectively, crosstalk results in a correlation waveform shape which is a function of the phase step. While

Without detailed knowledge of the inner workings of a particular camera it is difficult to accurately model the crosstalk process; however, we can approximate the simplest possible case: that of a perfectly sinusoidal waveform for both the illumination and the sensor modulation. In this case, crosstalk between the illumination and sensor modulation manifests as a phase and amplitude shift in the modulation. Considering only the case where the illumination signal is perturbing the sensor modulation, the sampling function is then
where A_i and φ_i are the amplitude and phase of the leaked illumination signal.
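The consequences of crosstalk can be sketched with a deliberately simplified model of our own (not the paper's equations): the leaked signal is assumed to pull each effective phase step away from its nominal position, and the correlation waveform is assumed to carry a second harmonic. With perfectly regular steps, four-step sampling rejects a second harmonic exactly; once the steps become irregular, that rejection breaks down and cyclic errors appear:

```python
import math

def measured_phase(phi, eps, a2=0.3, psi=0.7):
    """Four-step estimate where crosstalk perturbs the effective phase
    steps and the correlation waveform carries a second harmonic a2."""
    n = 4
    re = im = 0.0
    for k in range(n):
        theta = 2 * math.pi * k / n
        # Crosstalk pulls each phase step away from its nominal position.
        t = theta + eps * math.sin(theta - psi)
        s = math.cos(t - phi) + a2 * math.cos(2 * (t - phi))
        re += s * math.cos(theta)
        im += s * math.sin(theta)
    return math.atan2(im, re) % (2 * math.pi)

def max_error(eps, n=200):
    worst = 0.0
    for i in range(n):
        phi = 2 * math.pi * i / n
        e = (measured_phase(phi, eps) - phi + math.pi) % (2 * math.pi) - math.pi
        worst = max(worst, abs(e))
    return worst

# Regular steps reject the second harmonic exactly; irregular steps
# caused by crosstalk introduce cyclic phase errors.
clean = max_error(0.0)
crosstalk = max_error(0.2)
```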

It is quite difficult to identify whether similar errors are present in other published results because full information is not always provided; for example, plotting phase error against range without specifying the modulation frequency [

Mixed pixels occur around the edges of objects when a single pixel integrates light from more than one backscattering source, often resulting in highly erroneous phase and amplitude measurements. Multipath interference is the same effect but due to scattering either within the camera itself or reflections within a scene. In particular, these effects correspond to a violation of the assumption that there is only a single backscattering source within each pixel. One particularly frustrating scenario is that objects outside the field of view of the camera can result in scattered light. While a number of different restoration approaches have been posited [

It is important to be able to identify multipath interference in phase and amplitude response measurements, so that linearity calibrations do not accidentally calibrate for transient phenomena. Assuming a scattering source at a distance d_s, with a relative amplitude of a_s,

a_s = 0.5. The multipath results in a roughly single cycle error, but with the additional caveat that the two ends of the ambiguity range no longer match up. If the change in brightness of the primary return is not modelled, then the error is a single cycle sinusoid. In general, either of these possibilities is quite easily recognised in a plot of phase and amplitude response.
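A minimal complex-domain sketch of this effect (the scatterer's relative amplitude and phase below are our own illustrative values, and the primary return's brightness is held constant):

```python
import cmath, math

# Mixed pixel model: the primary return sweeps over a full phase cycle
# while a fixed scattering source is superimposed.
r, phase_s = 0.1, 2.0  # scatterer's relative amplitude and phase

def phase_error(phi):
    measured = cmath.phase(cmath.exp(1j * phi) + r * cmath.exp(1j * phase_s))
    return (measured - phi + math.pi) % (2 * math.pi) - math.pi

errors = [phase_error(2 * math.pi * i / 400) for i in range(400)]
# The perturbation is approximately a single-cycle sinusoid: the error
# changes sign exactly twice over the full 2*pi sweep.
sign_changes = sum(1 for a, b in zip(errors, errors[1:] + errors[:1])
                   if a * b < 0)
```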

Non-circularly-symmetric noise occurs when the covariance of the real and imaginary parts of a complex domain range measurement is not a scaled identity matrix. Averaging amplitude and phase, rather than averaging complex domain measurements, appears to be a relatively common practice. This averaging method causes overestimation of amplitude and systematic phase perturbations if the noise distribution of the complex domain measurement is not circularly symmetric, an effect that previous studies do not appear to have addressed. In the model that follows, φ_f is the phase delay in the correlation waveform and φ_Δ is the relative phase of the second harmonic. For greater than five phase steps, the characteristics are most similar to
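The difference between the two averaging methods is easy to demonstrate. In the Python sketch below (the noise magnitudes are arbitrary illustrative values), the noise is deliberately made non-circular by giving the real axis a much larger standard deviation than the imaginary axis:

```python
import cmath, random

random.seed(1)
true_amp, true_phase = 1.0, 0.8
signal = true_amp * cmath.exp(1j * true_phase)

# Non-circular noise: much larger standard deviation along the real
# axis than along the imaginary axis.
zs = [signal + complex(random.gauss(0, 0.3), random.gauss(0, 0.05))
      for _ in range(20000)]

# Method 1: average in the complex domain, then convert to polar.
z_mean = sum(zs) / len(zs)
amp_complex, phase_complex = abs(z_mean), cmath.phase(z_mean)

# Method 2: convert each measurement to polar, then average.
amp_polar = sum(abs(z) for z in zs) / len(zs)
phase_polar = sum(cmath.phase(z) for z in zs) / len(zs)

# Complex-domain averaging recovers the true values; polar averaging
# overestimates the amplitude and systematically perturbs the phase.
```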

A summary of all the perturbation sources and mitigation techniques covered in this section is given in the summary table at the end of the section.

In Section 3.2 we discussed standard approaches to mitigating non-linear phase and amplitude responses. However, many of the assumptions implicit in these models, such as stable temperatures, do not hold in practice. Rather than trying to calibrate out non-linear phase and amplitude responses after-the-fact, an alternative approach is to use different measurement techniques which change the sampling function so as to remove most of the amplitude and phase errors caused by aliasing. This section discusses two such methods: heterodyning and harmonic cancellation. Not only do these methods attenuate aliased harmonics in the equi-sampled case, but they also attenuate harmonics in the case of irregular phase steps, which makes them potentially applicable to the mitigation of effects such as crosstalk.

An alternative to discrete phase stepping is to integrate over a range of relative phases, also known as heterodyning [

With changing phase during integration, the correlation waveform is sampled over a period as

The sampling function then becomes a series of rectangular functions rather than Dirac deltas, giving

The Fourier transform of this sampling function can be expressed in terms of the homodyne case from

With more complicated hardware it is possible to reset the signal generators so as to achieve
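The harmonic attenuation produced by integrating while the relative phase sweeps can be sketched numerically (a simplified model of our own; the quarter-cycle sweep and the 1/9 third harmonic are assumed values). Each rectangular integration window multiplies harmonic n by a sinc-like factor, so higher harmonics are attenuated more strongly than the fundamental:

```python
import math

def correlation(theta, phi):
    """Correlation waveform with a third harmonic (1/9, triangle-like)."""
    return math.cos(theta - phi) + math.cos(3 * (theta - phi)) / 9.0

def estimate(phi, sweep):
    """Four measurements; each integrates while the relative phase sweeps
    through 'sweep' radians (sweep=0 reproduces homodyne sampling)."""
    n, re, im = 4, 0.0, 0.0
    for k in range(n):
        theta = 2 * math.pi * k / n
        m = 200  # numerical integration over the phase sweep
        s = sum(correlation(theta + sweep * (j + 0.5) / m, phi)
                for j in range(m)) / m
        re += s * math.cos(theta)
        im += s * math.sin(theta)
    # The sweep shifts the fundamental by a constant sweep/2; remove it.
    return (math.atan2(im, re) + sweep / 2) % (2 * math.pi)

def max_error(sweep, n=100):
    worst = 0.0
    for i in range(n):
        phi = 2 * math.pi * i / n
        e = (estimate(phi, sweep) - phi + math.pi) % (2 * math.pi) - math.pi
        worst = max(worst, abs(e))
    return worst

# A quarter-cycle sweep attenuates the aliased third harmonic roughly
# three times more strongly than the fundamental.
homodyne = max_error(0.0)
heterodyne = max_error(math.pi / 2)
```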

One of the major constraining factors for real-time operation of an AMCW system is read-out time; as the number of explicit phase steps increases, the overall range measurement frame-rate decreases. An alternative approach is to retain four separate measurements, but integrate over more than one phase step in order to deliberately cancel out the aliasing harmonics. While a true heterodyne approach requires specialised hardware, it is possible to achieve a similar outcome using discrete homodyne phase steps. Payne

The biggest advantage of harmonic cancellation is that, unlike calibration, it cannot be invalidated by simple changes in the correlation waveform shape due to spatial or temporal variance. The primary limitation of harmonic cancellation is the decreased efficiency of the measurements: with eight explicit phase steps, most steps contribute to both the real and imaginary parts of the resultant measurement, whereas in eight step harmonic cancellation with four explicit measurements, each sub-measurement contributes to either the real or the imaginary part only. This requires the repetition of some phase steps, which eight explicit phase steps does not. However, because most systems are accuracy limited rather than precision limited, harmonic cancellation is generally of net benefit.
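The principle can be sketched with a simple choice of sub-steps (our own choice for illustration; the published coefficients may differ): summing two sub-measurements offset by π/3 cancels the third harmonic exactly, because e^{i3(π/3)} = −1 while the fundamental contributions add constructively:

```python
import cmath, math

def correlation(theta, phi):
    """Correlation waveform with a strong third harmonic."""
    return math.cos(theta - phi) + math.cos(3 * (theta - phi)) / 9.0

def estimate(phi, offsets):
    """Four measurements, each accumulating one sub-measurement per
    listed phase offset before a single read-out."""
    re = im = 0.0
    for k in range(4):
        theta = 2 * math.pi * k / 4
        s = sum(correlation(theta + d, phi) for d in offsets)
        re += s * math.cos(theta)
        im += s * math.sin(theta)
    # Remove the constant phase offset introduced by the sub-steps.
    corr = cmath.phase(sum(cmath.exp(-1j * d) for d in offsets))
    return (math.atan2(im, re) - corr) % (2 * math.pi)

def max_error(offsets, n=200):
    worst = 0.0
    for i in range(n):
        phi = 2 * math.pi * i / n
        e = (estimate(phi, offsets) - phi + math.pi) % (2 * math.pi) - math.pi
        worst = max(worst, abs(e))
    return worst

plain = max_error([0.0])                   # ordinary four-step homodyne
cancelled = max_error([0.0, math.pi / 3])  # sub-steps cancel the 3rd harmonic
```

With the π/3 sub-step the third (and ninth) harmonics vanish to machine precision, while the aliased error of the plain four-step scheme remains.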

The sampling function in the case of harmonic cancellation with

An example of the impact of heterodyning and harmonic cancellation is given in

In this paper we have reviewed the literature on non-linear phase and amplitude responses and developed a detailed model of the measurement and sampling process inherent to AMCW lidar. Using this model, it was shown how the aliasing of correlation waveform harmonics impacts phase and amplitude response, and how standard amelioration techniques such as lookup tables and B-spline models can be invalidated by subtle effects such as temperature changes. Real data were presented showing how phase and amplitude change temporally and spatially across a full-field CMOS sensor. The mixed pixel and multipath interference problems were demonstrated to cause a roughly single cycle error over the 2π ambiguity range.

A big thank-you to Andrew Payne for allowing us to use some of his experimental data in this paper. This research was supported by the University of Waikato Strategic Investment Fund.

The phase error and amplitude response of a Canesta XZ-422 system operating at 44 MHz with a 50% illumination modulation duty cycle as a function of relative phase of the illumination modulation. The majority of the systematic error is due to aliasing of the positive third harmonic.

Spatial and long-period temporal variation in the reference waveform shape for a modulated CMOS sensor.

Variation in SwissRanger 4000 illumination modulation waveform shape within a single integration period.

Frequency content of the sampling function using the crosstalk model, with coefficients c_0 = c_2 = 0 and c_1 = −c_3, as a function of the phase shift, φ_1.

The phase and amplitude response of a Canesta XZ-422 system operating at 44 MHz with a 29% illumination modulation duty cycle as a function of relative phase of the illumination modulation. Only a portion of the remaining error is due to aliasing.

Simulation of the impact of scattered light on phase and amplitude response.

Simulation of the systematic perturbation introduced by non-circularly symmetric error in the complex domain measurement as a function of the phase shift in the correlation waveform, φ_f.

Evaluating Advanced Aliasing Mitigation Methods.

Summary of error sources and corresponding mitigation techniques.

Perturbation Source | Error Nature in Cycles/2π (using translation stage) | Countermeasures
---|---|---
Aliasing | | Adjust modulation duty cycles
Crosstalk | Prominent 2 cycle error, any additional cyclic error | Hardware redesign
Mixed pixels | Roughly one cycle error, ends may not match up | Mixed pixel restoration algorithms
Non-circularly-symmetric noise | | Average complex measurements
Temporal phase/amplitude drift | Ends do not match up | Hardware redesign