Article

A Deep Learning Approach to Lidar Signal Denoising and Atmospheric Feature Detection

1 Department of Chemical and Biochemical Engineering, University of Iowa, Iowa City, IA 52242, USA
2 Earth System Science Interdisciplinary Center, University of Maryland, College Park, MD 20740, USA
3 Iowa Technology Institute, University of Iowa, Iowa City, IA 52242, USA
* Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(24), 4060; https://doi.org/10.3390/rs17244060
Submission received: 27 October 2025 / Revised: 13 December 2025 / Accepted: 15 December 2025 / Published: 18 December 2025
(This article belongs to the Section Atmospheric Remote Sensing)

Highlights

What are the main findings?
  • A deep learning-based denoising algorithm using U-Net CNNs significantly improves the signal-to-noise ratio of ICESat-2 daytime lidar data.
  • The method enables accurate daytime cloud–aerosol discrimination and layer detection at native spatial resolution.
What are the implications of the main findings?
  • The approach allows fast processing of photon-counting lidar data, enhancing the utility of daytime observations and improving sensitivity to optically thin atmospheric features.
  • This methodology supports the development of smaller, lower-power spaceborne lidar systems capable of delivering high-quality atmospheric data products comparable to larger instruments.

Abstract

Laser-based remote sensing (lidar) is a proven technique for detecting atmospheric features such as clouds and aerosols as well as for determining their vertical distribution with high accuracy. Even simple elastic backscatter lidars can distinguish clouds from aerosols, and accurate knowledge of their vertical location is essential for air quality assessment, hazard avoidance, and operational decision-making. However, daytime lidar measurements suffer from reduced signal-to-noise ratio (SNR) due to solar background contamination. Conventional processing approaches mitigate this by applying horizontal and vertical averaging, which improves SNR at the expense of spatial resolution and feature detectability. This work presents a deep learning-based framework that enhances lidar SNR at native resolution and performs fast layer detection and cloud–aerosol discrimination. We apply this approach to ICESat-2 532 nm photon-counting data, using artificially noised nighttime profiles to generate simulated daytime observations for training and evaluation. Relative to the simulated daytime data, our method improves peak SNR by more than a factor of three while preserving structural similarity with true nighttime profiles. After recalibration, the denoised photon counts yield an order-of-magnitude reduction in mean absolute percentage error in calibrated attenuated backscatter compared with the simulated daytime data, when validated against real nighttime measurements. We further apply the trained model to a full month of real daytime ICESat-2 observations (April 2023) and demonstrate effective layer detection and cloud–aerosol discrimination, maintaining high recall for both clouds and aerosols and showing qualitative improvement relative to the standard ATL09 data products. As an alternative to traditional averaging-based workflows, this deep learning approach offers accurate, near real-time data processing at native resolution. A key implication is the potential to enable smaller, lower-power spaceborne lidar systems that perform as well as larger instruments.

1. Introduction

Laser-based remote sensing, or lidar, is a powerful tool for studying the Earth’s atmosphere. Backscatter lidar is well suited for detection of clouds and aerosol plumes, their boundaries, and internal structure [1,2,3,4]. Data from spaceborne lidars significantly advance our understanding of the global spatiotemporal distribution of aerosols and clouds [5]. Studies have shown that improving forecast models of surface air quality, aerosol long-range transport, and visibility requires vertical profile information as provided by lidar [6,7,8].
Accurate knowledge of the vertical location of aerosols and clouds has critical implications for air quality, hazard avoidance, and decision-making. Aerosols impact air quality and can act as tracers of events, and knowing the height of aerosols and vertical structure of aerosol plumes is essential for predicting aerosol dispersion, transport, and visibility [9,10]. Clouds obscure targets and can interfere with passive sensor measurements. Accurate vertical registration of clouds is a major limitation of passive sensors that has been shown to adversely impact passive microwave sounder retrievals if not properly estimated [11]. It is challenging for passive optical and passive microwave sensors to retrieve aerosol and cloud height information at even 1 km vertical resolution, and distinguishing multiple layers within an atmospheric column is difficult and often impossible for passive sensors [12,13].
Lidar is a proven method for obtaining highly accurate height registration (down to tens of meters), and even a simple backscatter lidar can be used to differentiate clouds from aerosols [14,15]. Whereas passive sensors rely on scattered photons from external sources (e.g., the sun) as a signal source, lidar uses its own self-generated light source and can operate under both day and night conditions. For a lidar, in fact, scattered solar photons are a noise source, and nighttime lidar data are invariably of higher quality, with higher signal-to-noise ratio (SNR). Operating a lidar only at night, however, is suboptimal and forgoes the potential of combining active and passive sensor data to improve aerosol and cloud retrievals [16]. Traditional lidar data processing techniques typically rely on vertical and horizontal averaging to improve daytime SNR, but these gains in feature detection capability come at the expense of spatial resolution. Previous studies have shown that the operational data processing algorithms of the Cloud–Aerosol Lidar with Orthogonal Polarization (CALIOP) and Cloud–Aerosol Transport System (CATS) can struggle to detect optically thin cloud and aerosol features during the daytime [17,18].
A primary challenge in expanding the utility of spaceborne lidar measurements is the reliable detection of cloud and aerosol features in the presence of strong solar background. During daytime operation, background light scattered into the receiver can exceed the magnitude of the atmospheric signal, substantially degrading data quality. The traditional approach to background subtraction is to take the signal measured below the Earth’s surface (i.e., guaranteed to be free of laser signal), compute an average value for the solar background, and then subtract that value from all range bins in the profile. Because the background is estimated as an average over several below-surface range bins and that single value is subtracted from every range bin in the profile, residual noise remains in the signal. To compensate, traditional lidar processing techniques typically employ extensive horizontal and/or vertical averaging to improve SNR, but these gains come at the cost of reduced spatial resolution and diminished sensitivity to thin or vertically localized features.
Removing noise to enhance feature detectability and recover optically thin layers is a task well suited to machine learning (ML) methods. Recent studies have demonstrated that ML-based denoising, layer detection, and cloud–aerosol discrimination applied to spaceborne lidar observations can yield substantial gains in SNR and classification performance [19]. For example, Yorks et al. [14] applied wavelet-based denoising to simulated lidar profiles and reported improved SNR and enhanced layer detectability, and similar wavelet approaches have been adopted operationally in the preprocessing of the Aerosol and Carbon Detection Lidar (ACDL) aboard the DQ-1 satellite [20]. Beyond wavelets, deep learning approaches have shown even greater promise. One-dimensional autoencoding Convolutional Neural Networks (CNNs) have been used to denoise simulated and real Mie-scattering lidar data [21], outperforming traditional wavelet techniques, while Selmer et al. [22] demonstrated that a Dense U-Net architecture applied to CATS 1064 nm photon count profiles improved SNR by a factor of 2.5, reduced the minimum detectable backscatter, and enabled more accurate layer detection at full spatial resolution. Together, these results indicate that modern deep learning architectures can significantly improve lidar signal quality while remaining computationally efficient.
This work presents a deep learning-based framework that enhances lidar SNR at native resolution and enables fast layer detection and cloud–aerosol discrimination. Although generally applicable to any photon-counting lidar data, in this paper, we focus on application to the Ice, Cloud, and land Elevation Satellite 2 (ICESat-2) atmospheric profiles, using artificially noised nighttime profiles to generate simulated daytime observations for training and evaluation. Relative to the simulated daytime data, our method improves peak SNR by more than a factor of three while preserving structural similarity with true nighttime profiles. After recalibration, the denoised photon counts yield an order-of-magnitude reduction in mean absolute percentage error in calibrated attenuated backscatter, when compared against real nighttime measurements. We further apply the trained model to one month of real daytime observations (April 2023) and demonstrate effective layer detection and cloud–aerosol discrimination, maintaining high recall for both clouds and aerosols and showing qualitative improvement relative to the standard ATL09 data products. It is important to emphasize that our evaluation reflects improvement relative to the current ICESat-2 processing methods. Because the training data are derived from ICESat-2 observations rather than a synthetic “truth” dataset, we cannot conclusively state that our approach is absolutely more or less accurate than the ICESat-2 processing algorithms, but instead demonstrate that the ML-based approach yields results that, if not improved, are at least comparable to traditional averaging-based workflows, while operating at native spatial resolution. Beyond improved accuracy and resolution, the ability to denoise photon-counting profiles in near real time suggests a path toward smaller, lower-power spaceborne lidar systems that perform comparably to larger instruments and deliver low-latency atmospheric data products [23].
The remainder of this paper is organized as follows. Section 2 describes the ICESat-2 atmospheric data products and the construction of the denoising and classification datasets. Section 3 details the proposed deep learning methodology for photon count denoising and cloud–aerosol discrimination. Section 4 provides computational details, including dataset construction, network architecture, and evaluation metrics. Section 5 presents performance assessment using simulated daytime data, and Section 6 evaluates the approach on real ICESat-2 daytime observations. Finally, Section 7 summarizes the conclusions and discusses implications for future spaceborne lidar systems.

2. Data Description

2.1. ICESat-2 Atmospheric Data Products

The Advanced Topographic Laser Altimeter System (ATLAS) onboard the ICESat-2 satellite operates a high-repetition-rate (10 kHz), low per-pulse energy 532 nm laser with photon-counting detectors optimized for high-resolution altimetry measurements [24,25]. ICESat-2 also records atmospheric backscatter from clouds and aerosols through a dedicated atmospheric channel. Each atmospheric profile consists of 30 m binned photon returns within a nominal 14 km vertical window, extending approximately 13.75 km above and 0.25 km below the local surface elevation derived from the onboard Digital Elevation Model (DEM) [26].
The atmospheric profiles are downlinked after onboard summation of 400 laser shots, yielding 25 Hz atmospheric profiles with 280 m along-track resolution. The time-ordered, calibrated, and binned telemetry data are contained in the ICESat-2 Level 1B (L1B) ATLAS 02 (ATL02) product, which serves as the source for higher-level products. The Level 2A (L2A) product ATLAS 04 (ATL04) includes the Normalized Relative Backscatter (NRB) profiles and computed 532 nm calibration coefficients. The NRB profiles are generated from the binned atmospheric profiles via background subtraction, range-square correction, and laser energy normalization. The Level 3A (L3A) product ATLAS 09 (ATL09) provides the Calibrated Attenuated Backscatter (CAB) profiles and identifies cloud and aerosol layers detected using the Density Dimension Algorithm (DDA) [26,27,28]. In the current ATL09 V006 release, detected layers are classified as cloud, aerosol, unknown, or blowing snow/diamond dust [26].

2.2. Solar Background Simulation

The denoising dataset was constructed using one month of ICESat-2 ATL04 (version V006) data acquired during November 2018, obtained from the National Snow and Ice Data Center (NSIDC) [29]. A single month of data was selected to ensure representative sampling of background illumination conditions and diverse atmospheric scenes while maintaining manageable data volume for efficient model training. Experiments demonstrated that the choice of training period has no major impact on model performance, owing to the global coverage of the dataset and its inherent diversity in aerosol and cloud characteristics (e.g., types, heights, optical depths), as described by Kuang et al. [5].
As illustrated in Figure 1A, artificial Poisson noise was added to nighttime profiles to emulate daytime solar background noise, thereby creating paired noisy and clean lidar curtain images for supervised training. Specifically, for each vertical bin, noisy photon counts $k$ were sampled from a Poisson distribution with mean $\lambda = s + b$, where $s$ is the true signal photon count and $b$ is the estimated solar background count derived from daytime statistics. The probability of observing $k$ counts is given by
$$P(k \mid \lambda) = \frac{\lambda^{k} e^{-\lambda}}{k!}, \quad \lambda = s + b.$$
This formulation preserves the discrete nature of photon arrivals and accurately reflects the shot noise characteristics of lidar measurements. Photon count data were obtained from the ATL04 NRB profiles. ATL04 was used instead of the raw ATL02 photon data because it includes essential corrections, such as dead-time and molecular signal folding corrections [26]. Photon counts were recovered from NRB by reversing the laser energy normalization and range-square correction, yielding background-subtracted photon counts. Additional preprocessing restored the corrected count data to the original vertical range window relative to the onboard DEM prior to denoising.
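The pairing step can be summarized with a short sketch. This is a minimal illustration rather than the production pipeline: the constant background rate and array shapes are assumptions, whereas in practice $b$ is derived per scene from daytime background statistics.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def add_solar_background(signal_counts: np.ndarray, background_rate: float) -> np.ndarray:
    """Simulate daytime photon counts from clean nighttime counts.

    Noisy counts k are drawn from Poisson(lambda = s + b), where s is the
    nighttime signal count in each bin and b is the estimated solar
    background count. Subtracting the mean background afterwards mimics
    standard background subtraction, leaving residual shot noise.
    """
    lam = np.clip(signal_counts, 0.0, None) + background_rate
    noisy = rng.poisson(lam).astype(np.float32)
    return noisy - background_rate

# Example: a nighttime curtain image (altitude bins x along-track profiles)
night = np.zeros((467, 1000), dtype=np.float32)
night[300:320, :] = 5.0                                   # synthetic cloud layer
noisy_day = add_solar_background(night, background_rate=12.0)
```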

2.3. Vertical Feature Mask

The dataset for cloud–aerosol discrimination (CAD) with denoised profiles was constructed from one month of ICESat-2 ATL09 (version V006) data collected during December 2018. Only nighttime profiles were retained to minimize solar background contamination. These profiles were preprocessed and denoised following the same procedure used for the denoising model. Each atmospheric curtain image used for CAD training comprised three input channels per vertical bin: bin altitude, denoised CAB, and relative humidity. The bin altitude and relative humidity, based on Goddard Earth Observing System Model, Version 5 (GEOS-5) forecast analysis data [26], were extracted directly from the ATL09 product. The cloud–aerosol vertical feature mask was derived from the ATL09 layer top and bottom heights and their associated classifications assigned by the operational DDA algorithm. Two binary labels were defined for each pixel: a layer detection label (layer detected vs. no layer) and a cloud–aerosol discrimination label (cloud vs. aerosol). Vertical bins without detected layers, or those labeled as unknown or blowing snow/ice by the operational algorithm, were assigned a dummy label (−100) to exclude them from the corresponding loss function during training.
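To make the labeling scheme concrete, the sketch below builds the two per-bin label maps for one profile. The variable names and class codes are hypothetical; only the dummy label (−100) and the exclusion of unknown and blowing snow/ice bins follow the description above.

```python
import numpy as np

IGNORE = -100  # dummy label excluded from the loss during training

def build_labels(layer_tops, layer_bottoms, layer_classes, n_bins):
    """Per-bin labels for one profile from ATL09 layer boundaries.

    layer_tops / layer_bottoms: bin indices of each detected layer
    layer_classes: 0 = cloud, 1 = aerosol; any other code is treated as
    unknown or blowing snow/diamond dust and excluded from both tasks.
    """
    detect = np.zeros(n_bins, dtype=np.int64)        # task 1: 0 = no layer, 1 = layer
    cad = np.full(n_bins, IGNORE, dtype=np.int64)    # task 2: defined only inside layers
    for top, bot, cls in zip(layer_tops, layer_bottoms, layer_classes):
        if cls in (0, 1):
            detect[top:bot + 1] = 1
            cad[top:bot + 1] = cls
        else:
            detect[top:bot + 1] = IGNORE
            cad[top:bot + 1] = IGNORE
    return detect, cad
```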

3. Methods

3.1. Photon Count Denoising

As illustrated in Figure 1B, the input to the denoising CNN model consists of noisy, background-subtracted atmospheric profiles arranged as two-dimensional images. Each pixel represents the photon count per bin after standard background subtraction, corresponding to a specific altitude and along-track distance. The network output is an image of the same dimensions, in which residual solar background photon counts are removed by the trained model, yielding a denoised representation of the original atmospheric signal.

3.2. Calibrated Attenuated Backscatter

Following denoising, range-square correction and laser energy normalization are applied to generate the denoised NRB. The resulting NRB is then placed into the standard ICESat-2 Atmospheric Range Window (ARW), and the original calibration constant from the ATL09 product is applied. We observed that, in addition to suppressing solar background noise, the denoising process also removes a substantial portion of the molecular signal, leaving primarily particulate backscatter. While the attenuated molecular backscatter (AMB) carries physically meaningful information, our approach suppresses these low-SNR molecular returns during denoising and subsequently restores them using modeled profiles. To recover the correct signal composition, we estimate the residual AMB for each atmospheric profile by dividing it into ten equal vertical segments and taking the minimum average backscatter among them. This residual AMB is subtracted from the profile, and a modeled AMB, based on GEOS-5 forecast analysis data [26], is added back to the remaining attenuated particulate backscatter to obtain the denoised CAB. After aligning the result with the theoretical AMB profile, we apply the aerosol scattering ratio (0.95) used in ICESat-2 data processing to compensate for signal attenuation above the calibration region. Additional post-processing is performed after layer detection: when the ICESat-2 operational algorithm fails to identify a surface bin, full attenuation is applied to all signal below the lowest detected layer. Our calibration and profile post-processing techniques are illustrated in Figure 1C.
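The segment-minimum heuristic and molecular restoration can be sketched as follows. This is a simplified illustration of the steps described above; it omits the ARW placement, the aerosol scattering ratio, and the attenuation post-processing.

```python
import numpy as np

def estimate_residual_amb(profile: np.ndarray, n_segments: int = 10) -> float:
    """Residual molecular level: the minimum segment-mean backscatter among
    ten equal vertical segments of the denoised profile."""
    segments = np.array_split(profile, n_segments)
    return min(float(seg.mean()) for seg in segments)

def restore_molecular(denoised: np.ndarray, modeled_amb: np.ndarray) -> np.ndarray:
    """Subtract the residual AMB, then add the modeled (GEOS-5 based) AMB
    back to the remaining attenuated particulate backscatter."""
    residual = estimate_residual_amb(denoised)
    return (denoised - residual) + modeled_amb
```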

3.3. Cloud–Aerosol Discrimination

The CAD approach follows the method of Oladipo et al. [31], in which layer detection and classification are treated as a multi-task learning problem. For each vertical bin in an atmospheric profile, the first task predicts the presence or absence of a layer as a binary classification. The second task classifies each detected layer bin as “cloud” or “aerosol.” As illustrated in Figure 1D, classification is performed on the denoised CAB profiles. The layer detection mask from the first task is applied to the cloud–aerosol prediction map to yield a final multi-class feature detection map in a post-processing step.
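As a concrete illustration of this post-processing step, the sketch below merges the two task outputs into a single multi-class feature map. The 0.5 decision thresholds and the class codes are assumptions, not values taken from the paper.

```python
import numpy as np

NO_LAYER, CLOUD, AEROSOL = 0, 1, 2

def merge_predictions(layer_prob: np.ndarray, cad_prob: np.ndarray,
                      thresh: float = 0.5) -> np.ndarray:
    """Combine the two task outputs into one multi-class feature mask.

    layer_prob: per-bin probability that a layer is present (task 1)
    cad_prob:   per-bin probability that a detected bin is cloud (task 2)
    """
    mask = np.full(layer_prob.shape, NO_LAYER, dtype=np.int64)
    layered = layer_prob >= thresh
    mask[layered & (cad_prob >= thresh)] = CLOUD
    mask[layered & (cad_prob < thresh)] = AEROSOL
    return mask
```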

4. Computational Details

4.1. Dataset Construction

The training dataset for denoising was assembled from paired noisy and clean lidar curtain image data (see Section 2.2). Each paired curtain image was randomly cropped to produce 128 paired image patches, each of size 467 × 467 pixels, yielding approximately 50,000 images. The resulting dataset was value-clipped, constraining photon counts to the range [0, 255]. A small subset of the data was held out for validation. For both input (noisy) and target (pristine) channels, the mean and standard deviation were computed across the training set and used to apply a standardization transform during both training and testing. At test time, either noisy nighttime or unmodified daytime profiles could be supplied to the denoising CNN, which produced denoised photon count profiles after reversing the standardization transform.
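A minimal sketch of the denoising patch extraction and standardization described above, assuming paired curtain images as 2-D NumPy arrays; exact sampling details in the production pipeline may differ.

```python
import numpy as np

PATCH = 467          # patch size from the denoising dataset
CLIP_MAX = 255.0     # photon counts value-clipped to [0, 255]

def random_paired_crops(noisy, clean, n_patches=128, rng=None):
    """Sample aligned, clipped patches from one paired curtain image."""
    rng = rng or np.random.default_rng()
    H, W = noisy.shape
    for _ in range(n_patches):
        i = rng.integers(0, H - PATCH + 1)
        j = rng.integers(0, W - PATCH + 1)
        yield (np.clip(noisy[i:i + PATCH, j:j + PATCH], 0, CLIP_MAX),
               np.clip(clean[i:i + PATCH, j:j + PATCH], 0, CLIP_MAX))

# Standardization statistics are computed once over the training set,
# applied at train and test time, and inverted after denoising.
def standardize(x, mean, std):
    return (x - mean) / std

def destandardize(x, mean, std):
    return x * std + mean
```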
For the semantic segmentation task, a separate training dataset was assembled from paired denoised CAB lidar curtain images and ATL09 vertical feature masks. Each paired curtain image was randomly cropped to yield 128 image patches of size 224 × 224 pixels per nighttime subgranule (approximately 50,000 images). A small fraction of this dataset was reserved for validation. The mean and standard deviation of each input channel were again computed and used to standardize the inputs. Data augmentation during segmentation training consisted of random horizontal and vertical flips.

4.2. Convolutional Neural Network

A U-Net CNN architecture was used for both denoising and semantic segmentation of lidar curtain profiles [32]. U-Net consists of a symmetric encoder–decoder structure that enables simultaneous learning of global contextual features and precise spatial localization [30]. The encoder (contracting path) applies a sequence of convolutional and max-pooling layers to extract progressively higher-level representations of the input signal while reducing its spatial resolution. The decoder (expanding path) reconstructs the spatial details through successive up-convolutions, with skip connections linking each encoder layer to its corresponding decoder layer. These skip connections preserve fine-scale structural information that would otherwise be lost during downsampling, thereby enhancing boundary accuracy in the reconstructed output.
For the denoising task, the U-Net was trained to map noisy lidar backscatter profiles to their corresponding noise-reduced representations, effectively removing residual solar background noise while retaining coherent atmospheric structures. For the segmentation task, the network was trained to classify each pixel in the denoised curtain profile as belonging to distinct atmospheric layers: cloud, aerosol, or no layer detected. The combination of the encoder–decoder framework and skip connections allows U-Net to perform robust feature extraction across multiple spatial scales, enabling accurate layer detection and reliable cloud–aerosol discrimination in vertically resolved lidar measurements.
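For orientation, a reduced-depth PyTorch sketch of such an architecture follows. Only the 32 initial feature maps and the doubling/halving pattern come from the text (Section 4.5); the number of encoding steps, kernel sizes, and activations are assumptions, and inputs are assumed padded to a multiple of the total downsampling factor.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class UNet(nn.Module):
    """Reduced-depth U-Net sketch: 32 initial feature maps, doubled per
    encoding step and halved per decoding step, with skip connections.
    Input height/width are assumed to be multiples of 4."""
    def __init__(self, in_ch=1, out_ch=1, base=32):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.bott = conv_block(base * 2, base * 4)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv2d(base, out_ch, 1)

    def forward(self, x):
        e1 = self.enc1(x)                     # skip connection 1
        e2 = self.enc2(self.pool(e1))         # skip connection 2
        b = self.bott(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)
```

Here out_ch = 1 suits the denoising task; for the segmentation model, the same backbone could expose two output heads, one per task, consistent with the multi-task setup in Section 3.3.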

4.3. Signal Quality Metrics

In order to assess the performance of the denoising algorithm, we use the peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) to compare the simulated daytime and denoised atmospheric profiles with the original nighttime profiles. The PSNR is defined as
$$\mathrm{PSNR} = 10 \cdot \log_{10}\left(\frac{MAX^{2}}{\mathrm{MSE}}\right)$$
where $MAX$ is the maximum possible pixel intensity (255 counts per bin) and MSE is the mean squared error between the reconstructed image and the ground truth. PSNR provides a quantitative measure of reconstruction fidelity, with higher values indicating closer agreement between the denoised and reference images. The SSIM is defined as
$$\mathrm{SSIM}(x, y) = \frac{(2\mu_x \mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)}$$
where $\mu_x$ and $\mu_y$ are the mean intensities of images $x$ and $y$, $\sigma_x^2$ and $\sigma_y^2$ are the variances, $\sigma_{xy}$ is the covariance, and $C_1$ and $C_2$ are small constants that stabilize the division. The SSIM evaluates perceptual image quality by jointly comparing luminance, contrast, and structure, with values closer to 1 indicating higher similarity.
In addition to PSNR and SSIM, we computed the mean absolute percentage error (MAPE) and coefficient of determination ($R^2$) to quantify the agreement between denoised and reference profiles on a per-profile basis. These metrics were calculated only for vertical bins where the true CAB lies within the range $10^{-7}$ to $10^{-3}\ \mathrm{m^{-1}\,sr^{-1}}$, excluding strongly attenuated regions and bins dominated by surface returns.
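These four metrics are straightforward to compute; the sketch below uses scikit-image for PSNR and SSIM and implements the masked MAPE and $R^2$ directly. The CAB bounds follow the text above; everything else is illustrative.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def signal_metrics(reference, reconstructed, data_range=255.0):
    """PSNR and SSIM between a reference and reconstructed curtain image."""
    psnr = peak_signal_noise_ratio(reference, reconstructed, data_range=data_range)
    ssim = structural_similarity(reference, reconstructed, data_range=data_range)
    return psnr, ssim

def profile_metrics(true_cab, denoised_cab, lo=1e-7, hi=1e-3):
    """MAPE and R^2 over bins with true CAB in [1e-7, 1e-3] m^-1 sr^-1."""
    valid = (true_cab >= lo) & (true_cab <= hi)
    t, d = true_cab[valid], denoised_cab[valid]
    mape = 100.0 * np.mean(np.abs((d - t) / t))
    ss_res = np.sum((t - d) ** 2)
    ss_tot = np.sum((t - t.mean()) ** 2)
    return mape, 1.0 - ss_res / ss_tot
```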

4.4. Classification Evaluation

We assessed CNN CAD performance by constructing the confusion matrix and calculating classification accuracy metrics precision and recall:
$$\text{precision} = \frac{TP}{TP + FP}$$
$$\text{recall} = \frac{TP}{TP + FN}$$
where TP, FP, FN, and TN are the true positive, false positive, false negative, and true negative classifications, respectively. Precision measures the reliability of class predictions as the ratio between the number of true positive predictions and the sum of true positive and false positive predictions, while recall quantifies the model’s ability to identify all instances of a given class as the ratio between true positives and the sum of true positives and false negatives. For the multi-class CAD problem, precision and recall are computed for each class (cloud, aerosol, and no layer detected) individually using a one-vs.-all approach. Atmospheric bins identified by the operational ATL09 product as unknown or containing blowing snow and/or diamond dust (ice) are excluded from metric calculation. Likewise, bins identified by our methodology as strongly attenuated are omitted from metric calculation at test time.
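A compact evaluation sketch using scikit-learn follows; the class and exclusion codes are hypothetical placeholders for the exclusions described above.

```python
import numpy as np
from sklearn.metrics import precision_recall_fscore_support

NO_LAYER, CLOUD, AEROSOL = 0, 1, 2
EXCLUDED = -1   # unknown / blowing snow (ATL09) or attenuated (CNN)

def cad_metrics(atl09_labels, cnn_labels):
    """Per-class (one-vs.-all) precision and recall, excluding flagged bins."""
    keep = (atl09_labels != EXCLUDED) & (cnn_labels != EXCLUDED)
    p, r, _, _ = precision_recall_fscore_support(
        atl09_labels[keep], cnn_labels[keep],
        labels=[NO_LAYER, CLOUD, AEROSOL], zero_division=0)
    return dict(zip(["no_layer", "cloud", "aerosol"], zip(p, r)))
```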

4.5. Implementation Details

The denoising CNN was trained to minimize an L1 loss function that penalizes the absolute deviation between predicted and target photon count values. We construct a U-Net CNN using 32 feature maps at the initial U-Net encoding step; the number of feature maps is doubled after each encoding step and halved after each decoding step. Training was performed with the Adam optimizer [33] using a minibatch size of 8 examples and a learning rate of $2 \times 10^{-4}$, for a total of 25 epochs. We apply weight decay of $1 \times 10^{-5}$ to regularize the network and prevent overfitting. The PSNR is evaluated on the held-out validation data once per training epoch in order to select the final model parameters by validation early stopping.
For the CAD model, we optimized the U-Net CNN using a Dice loss function [34], which directly maximizes the overlap between predicted and ground truth segmentation masks. The total training loss is the sum of the layer detection loss and the cloud–aerosol classification loss, allowing the network to jointly optimize both tasks within a multi-task learning framework. We construct a U-Net CNN using 32 feature maps at the initial U-Net encoding step. Training was performed with the Adam optimizer using a minibatch size of 16 examples and a learning rate of $2 \times 10^{-4}$, for a total of 5 epochs. No additional regularization was used during model optimization. The layer detection loss is evaluated on the held-out validation data once per training epoch in order to select the final model parameters by validation early stopping.
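A training-loop sketch for the denoising model, matching the stated hyperparameters (L1 loss, Adam, learning rate $2 \times 10^{-4}$, weight decay $1 \times 10^{-5}$, per-epoch validation PSNR for model selection). Data-loader construction and device handling are assumed, and for clarity, validation batches are assumed already de-standardized to photon-count units (0–255) before PSNR is computed.

```python
import torch
import torch.nn as nn

def train_denoiser(model, train_loader, val_loader, device="cuda",
                   epochs=25, lr=2e-4, weight_decay=1e-5):
    """L1-loss training with per-epoch validation PSNR early stopping."""
    opt = torch.optim.Adam(model.parameters(), lr=lr, weight_decay=weight_decay)
    loss_fn = nn.L1Loss()
    best_psnr, best_state = -float("inf"), None
    for epoch in range(epochs):
        model.train()
        for noisy, clean in train_loader:
            noisy, clean = noisy.to(device), clean.to(device)
            opt.zero_grad()
            loss = loss_fn(model(noisy), clean)
            loss.backward()
            opt.step()
        # Validation: mean PSNR in photon-count units (MAX = 255)
        model.eval()
        psnr_sum, n = 0.0, 0
        with torch.no_grad():
            for noisy, clean in val_loader:
                pred = model(noisy.to(device))
                mse = torch.mean((pred - clean.to(device)) ** 2)
                psnr_sum += 10.0 * torch.log10(255.0 ** 2 / mse).item()
                n += 1
        if psnr_sum / n > best_psnr:
            best_psnr = psnr_sum / n
            best_state = {k: v.clone() for k, v in model.state_dict().items()}
    model.load_state_dict(best_state)   # keep the best-validation epoch
    return model
```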
All models were built and trained using the PyTorch 1.13.1 deep learning library [35]. We utilized several open-source Python 3.7 libraries for metric evaluation and data visualization [36,37,38,39].

5. Performance Assessment Using Simulated Data

5.1. Denoising

We simulate daytime photon count profiles and apply denoising using nighttime atmospheric profiles collected by ICESat-2 in April 2023. Figure 2 presents an example test scene from 11 April 2023. Panel A shows the 532 nm nighttime photon count profiles for a three-minute segment. Panel B displays the simulated daytime profiles obtained by adding Poisson noise sampled from a representative solar background scene. Relative to the original nighttime data, the simulated daytime profiles have a PSNR of 25.64 dB and an SSIM of 0.25. Panel C shows the CNN-denoised profiles, which recover the original signal quality with a PSNR of 42.58 dB and an SSIM of 0.96. The CNN effectively removes solar background noise with minimal signal loss, particularly for regions of faint atmospheric return.
The PSNR and SSIM metrics are calculated individually for each data granule in the April 2023 test set (N = 448). We report the average metric score across the test set together with 95% confidence intervals (CIs). Across all simulated daytime test cases, the mean PSNR and SSIM values were 12.74 (95% CI [12.68, 12.80]) and 0.207 (95% CI [0.206, 0.208]), respectively. After denoising, these increased to 42.36 (95% CI [42.32, 42.40]) and 0.957 (95% CI [0.957, 0.957]), demonstrating a substantial improvement in signal fidelity and preservation of structural similarity to the original nighttime profiles across a diverse set of scenes.
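The per-granule aggregation is simple to reproduce. A minimal sketch of the mean with a normal-approximation 95% CI over granule scores follows; the exact CI construction used here is not stated in the text, so the normal approximation is an assumption.

```python
import numpy as np

def mean_ci(scores, z=1.96):
    """Mean of per-granule scores with a normal-approximation 95% CI."""
    scores = np.asarray(scores, dtype=float)
    m = scores.mean()
    half = z * scores.std(ddof=1) / np.sqrt(scores.size)
    return m, (m - half, m + half)
```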

5.2. Layer Detection and Cloud–Aerosol Discrimination

We perform additional processing on the denoised photon count data derived from simulated daytime profiles to construct the denoised CAB, as described in Section 3.2. Figure 3 shows an example of a processed scene from 9 April 2023. Panel A presents the ICESat-2 532 nm nighttime CAB profiles in units of $\mathrm{m^{-1}\,sr^{-1}}$. Panel B shows the corresponding simulated daytime CAB profiles obtained by applying the artificial solar background noise. Panel C displays the denoised 532 nm CAB profiles resulting from the CNN-based denoising of the simulated daytime data. Unless otherwise indicated, the y-axis shows altitude referenced to the WGS84 ellipsoid.
The proposed denoising framework successfully restores the majority of cloud and aerosol backscatter signals from the noise-contaminated profiles. Some optically thin scattering layers, however, are not fully recovered following the addition of the artificial background. In particular, faint scattering features observed near 5:45 UTC between 4 and 10 km altitude are not restored, and low-altitude aerosol layers near 6:00–6:15 UTC below 2 km exhibit partial recovery. The recalibration process accurately identifies regions of signal attenuation subsequent to denoising. Overall, the results demonstrate that the proposed method effectively suppresses solar background noise while preserving the structural integrity of atmospheric features.
After denoising, the performance of the CNN-based CAD model in layer detection was evaluated using ICESat-2 ATL09 data granules collected during April 2023. Figure 4 illustrates the results for a representative scene from 9 April 2023. Panel A presents the layer detection output from the operational ICESat-2 ATL09 algorithm applied to nighttime profiles. Panel B shows the corresponding results obtained using the CNN-based layer detection model applied to denoised, simulated daytime profiles. The CNN model successfully reproduces the principal cloud and aerosol layer structures identified by the operational algorithm, demonstrating strong qualitative agreement between the denoised daytime and reference nighttime retrievals.
We further evaluate the CNN model’s ability to perform CAD using ICESat-2 ATL09 data granules collected during April 2023. Figure 5 illustrates the results for the 9 April 2023 scene described previously, where the detected layers are classified as either cloud or aerosol. Panel A presents the CAD results from the operational ICESat-2 algorithm applied to nighttime profiles, while Panel B shows the corresponding output from the CNN-based method applied to denoised, simulated daytime profiles.
Several key differences between the two approaches are noteworthy. First, the CNN model does not predict the snow or ice-containing layer classes included in the operational ATL09 product, as these features occur infrequently and require additional input parameters beyond those available to the CNN in order to be identified by the operational algorithm. Similarly, the CNN model does not assign an “Unknown” class when classification confidence is low; instead, all detected layers are labeled as either cloud or aerosol. To handle signal extinction, our workflow combines CNN-based layer detection with the operational ground-bin detection to identify fully attenuated regions of the atmosphere. This approach performs well, as indicated by the agreement between attenuated regions in the nighttime and denoised daytime profiles, except in cases of partial attenuation where surface returns are still visible.
Qualitatively, the CNN-based CAD results compare favorably with the operational algorithm, though some differences remain. Optically thin layers that are lost during the solar background simulation are occasionally absent from the denoised output, leading to missed detections. Conversely, the CNN model demonstrates improved identification of aerosol layers between 6:00 and 6:15 UTC and enhanced discrimination of cloud layers embedded within these aerosol regions. These results suggest that the CNN-based approach effectively reproduces key features of the operational product while offering some improvements.
Figure 6 (left) shows the confusion matrix comparing CNN-based layer detection with the operational ICESat-2 ATL09 algorithm for the scene in Figure 4. Following the classification metric evaluation methodology described in Section 4.4, we exclude from our calculations bins identified by the CNN model as “Attenuated”. Overall, the CNN model reproduces the ATL09 layer assignments with high fidelity: over 98% of bins labeled as “No Layer” by ATL09 are also classified as “No Layer” by the CNN, and nearly 83% of bins labeled as “Layer” by ATL09 are similarly identified by the CNN. Approximately 2% of ATL09 “No Layer” bins are classified as “Layer” by the CNN. These differences largely reflect the CNN’s sensitivity to faint or optically thin layers that are occasionally missed or merged in the traditional product. We observe that 18% of ATL09 “Layer” bins are classified as “No Layer”; these differences typically occur when optically thin scattering layers are not fully recovered following the addition of the artificial background.
Figure 6 (right) presents the confusion matrix for CAD comparing the CNN model with ATL09 for the scene in Figure 5, computed over more than 44 million atmospheric bins. Following the classification metric evaluation methodology described in Section 4.4, we exclude from our calculations bins identified by the CNN model as “Attenuated” and bins identified by the ATL09 operational algorithm as either “Unknown” or containing snow and/or ice. Percentages are normalized to the number of bins in each ATL09 class, highlighting an important caveat: classes with few total bins (e.g., aerosol) appear amplified relative to “No Layer,” which dominates the data. Agreement is high for “No Layer” bins (>98%) and substantial for cloud bins (>82%). For example, 15% of ATL09 cloud bins (∼386,000) are classified as “No Layer” by the CNN, while 3% (∼74,000) are classified as aerosol. Aerosol bins constitute only 0.6% of ATL09 predictions (∼285,000), of which 41% (∼117,000) are similarly classified by the CNN, 40% (∼114,000) as “No Layer,” and 19% (∼53,000) as cloud. The CNN also identifies 0.6% (∼260,000) of ATL09 “No Layer” bins as aerosol. We observe that these additional bins detected by the CNN as “Aerosol” typically occur where the traditional data product has identified aerosol layers, but where layer detection is sparse. Discrepancies in cloud–aerosol classification are sometimes due to the CNN’s improved resolution of embedded cloud layers, as illustrated in Figure 5 from 6:00–6:15 UTC.

5.3. Single Profile Comparisons

Figure 7 shows the 9 April 2023 scene previously examined, now zoomed in to display backscatter profiles at higher vertical and horizontal resolution. The figure depicts a simulated daytime boundary-layer cloud–aerosol scene around 1–2 km above ground level. Panel A shows the original ICESat-2 532 nm nighttime CAB profiles, while Panels B and C show the simulated daytime and denoised CAB profiles, respectively. The red line in all panels indicates the central lidar curtain profile, and the white lines indicate a range of ±20 s (approximately 1000 profiles).
Figure 8 complements Figure 7 by displaying the single profile indicated by the red line, captured at 9 April 2023 06:08:16.728190 UTC. Each panel compares the original nighttime CAB profile (black), the simulated daytime profile (gray), the denoised profile (red), and the calculated attenuated molecular backscatter (AMB) profile (gray, dashed). Profiles in Panels A, B, and C are averaged over ±1 s, ±5 s, and ±20 s, respectively, to increase the signal-to-noise ratio and provide the comparison of signals over a wider spatial window.
At 1 s averaging, the denoised signal shows improved SNR relative to the untouched nighttime profiles and substantial improvement over the simulated daytime profiles, with some loss of signal above and below the thin cloud layer. With longer averaging windows, the denoised profiles closely match the nighttime profiles. We note a modest enhancement of the denoised signal relative to the nighttime profiles in the aerosol layers below 1 km; a similar enhancement is observed in the noised signal even after 20 s averaging, indicating that the enhancement stems from incomplete removal of solar background noise rather than artificial signal generation.
Quantitatively, using the 20 s averaged profiles, the mean absolute percentage deviation of the simulated daytime profile relative to the true nighttime profile is 148.6%, while the denoised profile reduces this deviation to 12.7%. The coefficient of determination, $R^2$, improves from 87.5% for the noised profile to 98.9% after denoising, demonstrating the effectiveness of the CNN-based approach in recovering the true backscatter signal.
Figure 9 shows the 9 April 2023 scene previously examined, now zoomed in to display CAD at higher vertical and horizontal resolution. Panel A shows the CAD results from the standard ICESat-2 operational algorithm applied to nighttime data, while Panel B shows the CAD results obtained by the CNN model operating on denoised simulated daytime profiles. Overall, we observe good qualitative agreement between the two methods. Comparison with the backscatter profiles indicates that the CNN model improves identification of cloud layers embedded within aerosol layers. Some quantitative differences between the CAD profiles arise from differences in layer detection. In particular, the presence of aerosol above ground level beneath the cloud layer at approximately 1 km, evident from the denoised signal above the attenuated molecular backscatter in Figure 8, is captured by the CNN-based layer detection, whereas the traditional algorithm occasionally fails to detect these optically thin layers (e.g., between 06:09 and 06:10 UTC).
Figure 10 shows the 9 April 2023 scene examined previously, now zoomed in to display a different set of backscatter profiles at higher vertical and horizontal resolution. This figure depicts a simulated daytime scene of a cloud layer located approximately 5–10 km above ground level. Panel A shows the original ICESat-2 532 nm nighttime CAB profiles, while Panels B and C show the simulated daytime and denoised CAB profiles, respectively. The red line in all panels indicates the central profile of the lidar curtain, and the white lines indicate a range of ±20 s (approximately 1000 profiles).
From the untouched nighttime profiles, the cloud layer is strongly attenuating, resulting in loss of signal below the cloud and preventing detection of the ground return. After solar background simulation and CNN-based denoising, the recovered profiles closely match the original signal, and post-processing correctly identifies the fully attenuated region. However, partially attenuated regions, such as areas below a cloud where ground returns are still present, may not be fully identified by the method, leading to some discrepancies between the true and denoised backscatter signals below the attenuating layer.
Figure 10 is complemented by Figure 11, which shows the single profile indicated by the red line in Figure 10, captured at 9 April 2023 06:17:36.728204 UTC. Each panel in Figure 11 compares the original nighttime CAB profile (black), the simulated daytime profile (gray), the denoised profile (red), and the calculated AMB profile (gray, dashed) for averaging windows of ±1, 5, and 20 s. At 1 s averaging, the denoised profiles closely match the true signal, with minor loss near the bottom of the cloud layer. As the averaging window increases, agreement further improves. In all cases, the denoising approach successfully identifies regions of signal attenuation despite the presence of residual solar background noise. Comparison with the traditional CAD in Figure 12 shows good qualitative agreement, with the CNN-based method additionally highlighting regions of fully attenuated signal.

5.4. Summary Statistics

The precision and recall metrics are calculated individually for each data granule in the nighttime April 2023 test set (N = 448), comparing the cloud–aerosol discrimination results from the operational ICESat-2 ATL09 algorithm operating on the nighttime profiles with the CNN-based model applied to denoised, simulated daytime profiles. We report the average metric score across the test set together with 95% confidence intervals.
For the “No Layer” class, the mean precision was 0.98 (95% CI [0.98, 0.98]) and the mean recall was 0.97 (95% CI [0.97, 0.97]), indicating excellent agreement between the CNN-based and operational classifications for clear-sky conditions. The “Cloud” class exhibited a mean precision of 0.75 (95% CI [0.75, 0.75]) and mean recall of 0.81 (95% CI [0.81, 0.81]), reflecting strong performance in identifying cloud layers while maintaining relatively low false positive rates. For the more challenging “Aerosol” class, the mean precision was 0.34 (95% CI [0.34, 0.35]) and mean recall was 0.37 (95% CI [0.37, 0.38]). These lower values indicate greater difficulty in distinguishing aerosol layers from simulated daytime profiles in comparison with the nighttime operational ATL09 CAD product, which is expected given their weaker backscatter signatures and greater overlap with background noise. Overall, the CNN-based classification maintains high accuracy for clear-sky and cloud conditions, with performance reductions primarily confined to aerosol detection.

6. Cloud–Aerosol Discrimination Using Real ICESat-2 Daytime Data

6.1. Denoising

We now assess qualitatively the ability of the CNN models to denoise and perform layer detection by processing real ICESat-2 daytime atmospheric profiles collected during April 2023. Figure 13 shows an example of a processed scene from 3 April 2023. Panel A displays the ICESat-2 532 nm daytime CAB profiles, while Panel B shows the corresponding denoised profiles produced by the CNN-based model. We observe excellent removal of residual solar background noise in the denoised output compared with the original background-subtracted daytime data. The denoised profiles reveal fine-scale cloud and aerosol structures that are otherwise obscured in the noisy daytime signal, demonstrating the model’s ability to recover physically consistent atmospheric features without introducing signal artifacts.

6.2. Layer Detection and Cloud–Aerosol Discrimination

Figure 14 shows the same 3 April 2023 scene, now processed for layer detection. Panel A presents the layer detection results from the standard ICESat-2 ATL09 operational algorithm, while Panel B displays the results obtained using the CNN-based layer detection method applied to the denoised CAB profiles. The CNN algorithm demonstrates strong qualitative agreement with the traditional approach, accurately identifying the major atmospheric features present in the scene. Notably, the CNN method detects a greater number of atmospheric bins containing layers compared with the operational ATL09 algorithm applied to the original daytime profiles. This difference likely reflects the improved signal quality of the denoised data, allowing for the identification of weaker or optically thin layers that are otherwise obscured by solar background noise in the unprocessed daytime observations.
Figure 15 shows the 3 April 2023 scene processed for cloud–aerosol discrimination (CAD). Panel A presents the results from the operational ICESat-2 ATL09 algorithm, while Panel B shows the CNN-based CAD output applied to the denoised 532 nm CAB profiles. The CNN-based method exhibits strong qualitative agreement with the operational algorithm, accurately distinguishing cloud and aerosol layers under daytime conditions. Consistent with the layer detection results, the CNN model identifies additional layers in regions of weak backscatter, where the operational algorithm reports sparse detections, and provides enhanced delineation of cloud layer boundaries across the scene. Notably, the CNN model improves aerosol layer identification, particularly between 22:00–22:05 UTC. Instances labeled as indeterminate or snow/ice by the operational algorithm are typically classified as mixed cloud–aerosol scenes by the CNN-based approach, as these classes are not directly predicted by our methodology. These differences indicate that denoising prior to CAD enhances sensitivity to optically thin features while maintaining robust cloud detection performance.
Figure 16 (left) shows the confusion matrix comparing CNN-based layer detection with the operational ICESat-2 ATL09 algorithm for the scene in Figure 14. Overall, the CNN model reproduces the ATL09 layer assignments with high fidelity: over 97% of bins labeled as “No Layer” by ATL09 are also classified as “No Layer” by the CNN, and over 91% of bins labeled as “Layer” by ATL09 are similarly identified by the CNN. Approximately 3% of ATL09 “No Layer” bins are classified as “Layer” by the CNN, and nearly 9% of ATL09 “Layer” bins are classified as “No Layer”. In comparison with the scene in Section 5, the reduction in ATL09 “Layer” bins classified as “No Layer” reflects an improvement in layer detection recall. The observed increase in ATL09 “No Layer” bins classified as “Layer” by the CNN may be explained by the improved signal quality of the denoised data, which allows the detection of layers missed by the ATL09 algorithm operating on unprocessed daytime observations.
Figure 16 (right) presents the confusion matrix for CAD comparing the CNN model with ATL09 for the scene in Figure 15, computed over more than 24 million atmospheric bins. For cloud detection, 88% (∼591,000) of ATL09 “Cloud” bins are correctly identified, while 8% (∼51,000) are misclassified as “No Layer” and 5% (∼31,000) as “Aerosol”. Aerosol layers exhibit the greatest variability: 51% (∼60,000) of ATL09 “Aerosol” bins are correctly classified, with 36% (∼42,000) and 13% (∼15,000) misclassified as “Cloud” and “No Layer”, respectively. Compared with the simulated daytime results (Figure 6), where artificial noise was added to nighttime data, the denoising process tended to recover but not introduce new layers, leading to more conservative classification and some loss of faint layers relative to the ATL09 data product. We observe a relative increase in layer recall when operating on real daytime data as evidenced by a reduction in the normalized percentages of ATL09 “Cloud” and “Aerosol” bins classified by the CNN model as “No Layer”. In contrast, when applied to real daytime data, the CNN model often reveals additional layers absent from the operational ATL09 output, especially in regions of weak backscatter where the daytime ATL09 layer detection may underperform, as evidenced by an increase in ATL09 “No Layer” bins classified by the CNN method as “Cloud” or “Aerosol”. Thus, while this behavior decreases “Cloud” and “Aerosol” precision (relative to ATL09), it reflects the enhanced sensitivity of our methodology when denoising is applied prior to CAD.

6.3. Single Profile Comparisons

Figure 17 shows the 3 April 2023 scene previously examined, now zoomed in to display backscatter profiles at higher vertical and horizontal resolution. The figure depicts a boundary-layer aerosol scene around 1–2 km above ground level. Panel A shows the original ICESat-2 532 nm daytime CAB profiles, while Panel B shows the denoised CAB profiles. The red line in both panels indicates the central lidar curtain profile, and the white lines indicate a range of ±20 s (approximately 1000 profiles).
Figure 18 complements Figure 17 by displaying the single profile indicated by the red line, captured at 3 April 2023 22:03:41.236742 UTC. Each panel compares the original daytime CAB profile (black), the denoised profile (red), and the calculated AMB profile (gray, dashed) for averaging windows of ±1, 5, and 20 s. Comparing the lidar curtain and single-profile views, we observe excellent agreement between the original daytime and denoised backscatter profiles in the 1–2 km vertical region across all averaging windows, and a significant qualitative increase in the SNR of the denoised profiles relative to the raw daytime profiles. Comparison with the traditional CAD in Figure 19 shows good agreement on the location of “Cloud” and “Aerosol” layers, with qualitative improvements to the assignment of cloud layers embedded in aerosol and to overall layer detection.
Figure 20 shows the 3 April 2023 scene previously examined, now zoomed in to display a different set of backscatter profiles at higher vertical and horizontal resolution. This figure depicts a daytime scene of a cloud layer located approximately 1–5 km above ground level. Panel A shows the original ICESat-2 532 nm daytime CAB profiles, while Panel B shows the denoised CAB profiles. The red line in both panels indicates the central profile of the lidar curtain, and the white lines indicate a range of ±20 s (approximately 1000 profiles). We observe that the denoising CNN model successfully reproduces the principal cloud layer structure and provides a significant increase in SNR in comparison with the raw daytime profiles.
Figure 20 is again complemented by Figure 21, where we display the single profile indicated by the red line, captured at 3 April 2023 22:16:21.236761 UTC. Each panel compares the original daytime CAB profile (black), the denoised profile (red), and the calculated AMB profile (gray, dashed) for averaging windows of ±1, 5, and 20 s. Comparing the lidar curtain and single-profile views, we observe excellent agreement between the original daytime and denoised backscatter profiles across all averaging windows. Comparison with the traditional CAD in Figure 22 shows good agreement on the location of “Cloud” layers.

6.4. Summary Statistics

The precision and recall metrics are calculated individually for each data granule in the daytime April 2023 test set (N = 893), comparing the cloud–aerosol discrimination results from the operational ICESat-2 ATL09 algorithm operating on the daytime profiles with those from the CNN-based model applied to denoised daytime profiles. We report the mean metric values across the test set together with their 95% confidence intervals.
The CNN-based model achieves high precision for the “No Layer” class (1.0, 95% CI [1.0, 1.0]) and strong recall (0.97, 95% CI [0.97, 0.97]), indicating excellent agreement with the operational ATL09 product in identifying clear-sky conditions. For cloud detection, the model attains a precision of 0.56 (95% CI [0.56, 0.56]) and a recall of 0.88 (95% CI [0.88, 0.88]), reflecting reliable recovery of most cloud layers. The reduction in cloud precision is expected, as the CNN often identifies additional layers near cloud boundaries in the denoised CAB profiles, consistent with the improved layer detection observed in Section 6.2. Aerosol classification remains the most challenging, with a precision of 0.19 (95% CI [0.19, 0.19]) and recall of 0.53 (95% CI [0.53, 0.54]), indicating moderate success in detecting aerosol layers while producing a higher false positive rate relative to ATL09. This apparent increase in false positives reflects the CNN’s enhanced sensitivity to aerosol layers in the denoised CAB profiles, which may not be represented in the ATL09 daytime data product. Overall, these results demonstrate that denoising substantially improves the model’s ability to recover atmospheric features from real daytime data. High recall scores indicate strong correspondence with operational results, while reductions in precision largely arise from the detection of additional layers not captured by ATL09, highlighting the CNN-based approach’s increased sensitivity and improved representation of cloud and aerosol structures.

7. Conclusions

We have demonstrated the application of CNN-based data processing techniques for denoising, layer detection, and CAD using ICESat-2 532 nm profiles of atmospheric backscatter. We constructed lidar signal denoising datasets from artificially noised nighttime profiles, using solar background statistics derived from operational data, suitable for supervised training of the denoising CNN. We noted the separation of molecular and particulate backscatter that results from applying the CNN-based denoising algorithm, and presented a methodology for restoring the molecular signal and producing a denoised CAB data product. Based on simulated ICESat-2 daytime 532 nm data, we observe an improvement in photon count peak signal-to-noise ratio of more than a factor of three, and an order-of-magnitude reduction in the mean absolute percentage error of the denoised CAB signal when measured against real ICESat-2 nighttime data. Layer detection and CAD were also performed on a full month of denoised ICESat-2 data (April 2023) to detect and classify layers for comparison with the standard ICESat-2 L3A data product. We maintain good cloud and aerosol recall, and observe some qualitative improvement in daytime cloud and aerosol detections compared with the standard data product.
The results presented here are specific to the ICESat-2 atmospheric data; however, the methodology is generally applicable to any photon-counting lidar data. To adapt the approach, two key prerequisites are required: availability of representative training data that captures the noise characteristics of the target instrument, and labeled examples for supervised learning, such as layer boundaries and classifications derived from trusted retrieval algorithms. Additional adjustments may include modifying input features based on available ancillary meteorological data or additional wavelength and polarization channels. The methodology may be adaptable to lidars using analog detectors, such as CALIPSO, but we have not yet explored that possibility. For photon-counting lidar, we can conclude that the deep learning method performs well and can produce data products at higher resolution than traditional threshold-based lidar retrieval methods. Because ICESat-2 data were used for training, our study cannot conclude absolutely that the ML method is more accurate than the current ICESat-2 algorithms. What we can conclude is that this ML methodology, if used in place of traditional lidar processing methods, produces data products that are at least equivalent to, if not better than, those generated using the traditional approach. Going forward, the deep learning methodology can be leveraged to enable smaller, more capable instruments that perform as well as larger sensors and can provide data products with exceptionally low latency.

Author Contributions

Conceptualization, J.G. and M.J.M.; methodology, J.G.; software, J.G.; validation, P.A.S. and S.K.; formal analysis, J.G.; investigation, J.G.; data curation, J.G.; writing—original draft preparation, J.G. and M.J.M.; writing—review and editing, J.G., M.J.M., P.A.S. and S.K.; visualization, J.G.; funding acquisition, J.G. and M.J.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the ICESat-2 project via NASA grants 80NSSC23K0191 and 80NSSC25K7315.

Data Availability Statement

The data used in this study are available at https://nsidc.org/data/icesat-2 (accessed on 10 December 2025).

Acknowledgments

The authors acknowledge the ICESat-2 project for supporting this preliminary work.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. McGill, M.; Hlavka, D.; Hart, W.; Scott, V.S.; Spinhirne, J.; Schmid, B. Cloud physics lidar: Instrument description and initial measurement results. Appl. Opt. 2002, 41, 3725–3734.
  2. McGill, M.J.; Yorks, J.E.; Scott, V.S.; Kupchock, A.W.; Selmer, P.A. The cloud-aerosol transport system (CATS): A technology demonstration on the international space station. In Proceedings of the Lidar Remote Sensing for Environmental Monitoring XV, San Diego, CA, USA, 12–13 August 2015; SPIE: Bellingham, WA, USA, 2015; Volume 9612, pp. 34–39.
  3. Leifer, I.; Melton, C.; Chatfield, R.; Cui, X.; Fischer, M.L.; Fladeland, M.; Gore, W.; Hlavka, D.L.; Iraci, L.T.; Marrero, J.; et al. Air pollution inputs to the Mojave Desert by fusing surface mobile and airborne in situ and airborne and satellite remote sensing: A case study of interbasin transport with numerical model validation. Atmos. Environ. 2020, 224, 117184.
  4. Nowottnick, E.P.; Christian, K.E.; Yorks, J.E.; McGill, M.J.; Midzak, N.; Selmer, P.A.; Lu, Z.; Wang, J.; Salinas, S.V. Aerosol detection from the cloud–aerosol transport system on the international space station: Algorithm overview and implications for diurnal sampling. Atmosphere 2022, 13, 1439.
  5. Kuang, S.; McGill, M.; Gomes, J.; Selmer, P.; Finneman, G.; Begolka, J. Global Aerosol Climatology from ICESat-2 Lidar Observations. Remote Sens. 2025, 17, 2240.
  6. Janiskova, M.; Stiller, O. Development of Strategies for Radar and Lidar Data Assimilation; European Centre for Medium-Range Weather Forecasts: Reading, UK, 2010.
  7. Sekiyama, T.; Tanaka, T.; Shimizu, A.; Miyoshi, T. Data assimilation of CALIPSO aerosol observations. Atmos. Chem. Phys. 2010, 10, 39–49.
  8. Zhang, J.; Campbell, J.R.; Reid, J.S.; Westphal, D.L.; Baker, N.L.; Campbell, W.F.; Hyer, E.J. Evaluating the impact of assimilating CALIOP-derived aerosol extinction profiles on a global mass transport model. Geophys. Res. Lett. 2011, 38, L14801.
  9. Hughes, E.; Yorks, J.; Krotkov, N.; Da Silva, A.; McGill, M. Using CATS near-real-time lidar observations to monitor and constrain volcanic sulfur dioxide (SO2) forecasts. Geophys. Res. Lett. 2016, 43, 11-089.
  10. McGill, M.J.; Swap, R.J.; Yorks, J.E.; Selmer, P.A.; Piketh, S.J. Observation and quantification of aerosol outflow from southern Africa using spaceborne lidar. S. Afr. J. Sci. 2020, 116, 1–6.
  11. Favrichon, S.; Prigent, C.; Jimenez, C.; Aires, F. Detecting cloud contamination in passive microwave satellite measurements over land. Atmos. Meas. Tech. 2019, 12, 1531–1543.
  12. Holz, R.; Ackerman, S.; Nagle, F.; Frey, R.; Dutcher, S.; Kuehn, R.; Vaughan, M.; Baum, B. Global Moderate Resolution Imaging Spectroradiometer (MODIS) cloud detection and height evaluation using CALIOP. J. Geophys. Res. Atmos. 2008, 113, D00A19.
  13. Wu, L.; Hasekamp, O.; van Diedenhoven, B.; Cairns, B.; Yorks, J.E.; Chowdhary, J. Passive remote sensing of aerosol layer height using near-UV multiangle polarization measurements. Geophys. Res. Lett. 2016, 43, 8783–8790.
  14. Yorks, J.E.; Selmer, P.A.; Kupchock, A.; Nowottnick, E.P.; Christian, K.E.; Rusinek, D.; Dacic, N.; McGill, M.J. Aerosol and cloud detection using machine learning algorithms and space-based lidar data. Atmosphere 2021, 12, 606.
  15. McGill, M.J.; Selmer, P.A.; Kupchock, A.W.; Yorks, J.E. Machine learning-enabled real-time detection of cloud and aerosol layers using airborne lidar. Front. Remote Sens. 2023, 4, 1116817.
  16. Xu, F.; Gao, L.; Redemann, J.; Flynn, C.J.; Espinosa, W.R.; da Silva, A.M.; Stamnes, S.; Burton, S.P.; Liu, X.; Ferrare, R.; et al. A combined lidar-polarimeter inversion approach for aerosol remote sensing over ocean. Front. Remote Sens. 2021, 2, 620871.
  17. Toth, T.D.; Campbell, J.R.; Reid, J.S.; Tackett, J.L.; Vaughan, M.A.; Zhang, J.; Marquis, J.W. Minimum aerosol layer detection sensitivities and their subsequent impacts on aerosol optical thickness retrievals in CALIPSO level 2 data products. Atmos. Meas. Tech. 2018, 11, 499–514.
  18. Dolinar, E.K.; Campbell, J.R.; Lolli, S.; Ozog, S.C.; Yorks, J.E.; Camacho, C.; Gu, Y.; Bucholtz, A.; McGill, M.J. Sensitivities in satellite lidar-derived estimates of daytime top-of-the-atmosphere optically thin cirrus cloud radiative forcing: A case study. Geophys. Res. Lett. 2020, 47, e2020GL088871.
  19. Lolli, S. Machine Learning Techniques for Vertical Lidar-Based Detection, Characterization, and Classification of Aerosols and Clouds: A Comprehensive Survey. Remote Sens. 2023, 15, 4318.
  20. Dai, G.; Wu, S.; Long, W.; Liu, J.; Xie, Y.; Sun, K.; Meng, F.; Song, X.; Huang, Z.; Chen, W. Aerosol and cloud data processing and optical property retrieval algorithms for the spaceborne ACDL/DQ-1. Atmos. Meas. Tech. 2024, 17, 1879–1890.
  21. Hu, M.; Mao, J.; Li, J.; Wang, Q.; Zhang, Y. A Novel Lidar Signal Denoising Method Based on Convolutional Autoencoding Deep Learning Neural Network. Atmosphere 2021, 12, 1403.
  22. Selmer, P.; Yorks, J.E.; Nowottnick, E.P.; Cresanti, A.; Christian, K.E. A Deep Learning Lidar Denoising Approach for Improving Atmospheric Feature Detection. Remote Sens. 2024, 16, 2735.
  23. Yorks, J.E.; Wang, J.; McGill, M.J.; Follette-Cook, M.; Nowottnick, E.P.; Reid, J.S.; Colarco, P.R.; Zhang, J.; Kalashnikova, O.; Yu, H.; et al. A SmallSat concept to resolve diurnal and vertical variations of aerosols, clouds, and boundary layer height. Bull. Am. Meteorol. Soc. 2023, 104, E815–E836.
  24. Abdalati, W.; Zwally, H.J.; Bindschadler, R.; Csatho, B.; Farrell, S.L.; Fricker, H.A.; Harding, D.; Kwok, R.; Lefsky, M.; Markus, T.; et al. The ICESat-2 laser altimetry mission. Proc. IEEE 2010, 98, 735–751.
  25. Markus, T.; Neumann, T.; Martino, A.; Abdalati, W.; Brunt, K.; Csatho, B.; Farrell, S.; Fricker, H.; Gardner, A.; Harding, D.; et al. The Ice, Cloud, and land Elevation Satellite-2 (ICESat-2): Science requirements, concept, and implementation. Remote Sens. Environ. 2017, 190, 260–273.
  26. Palm, S.; Yang, Y.; Herzfeld, U.; Hancock, D. ICESat-2 Algorithm Theoretical Basis Document for the Atmosphere, Part I: Level 2 and 3 Data Products; Version 6; National Aeronautics and Space Administration, Goddard Space Flight Center: Greenbelt, MD, USA, 2022.
  27. Herzfeld, U.C.; Trantow, T.M.; Harding, D.; Dabney, P.W. Surface-height determination of crevassed glaciers—Mathematical principles of an autoadaptive density-dimension algorithm and validation using ICESat-2 simulator (SIMPL) data. IEEE Trans. Geosci. Remote Sens. 2017, 55, 1874–1896.
  28. Herzfeld, U.; Hayes, A.; Palm, S.; Hancock, D.; Vaughan, M.; Barbieri, K. Detection and height measurement of tenuous clouds and blowing snow in ICESat-2 ATLAS data. Geophys. Res. Lett. 2021, 48, e2021GL093473.
  29. National Snow and Ice Data Center (NSIDC). ICESat-2 Data Products; National Snow and Ice Data Center: Boulder, CO, USA, 2025; Available online: https://nsidc.org/data/icesat-2 (accessed on 10 December 2025).
  30. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015: 18th International Conference, Munich, Germany, 5–9 October 2015; Proceedings, Part III; Springer: Berlin/Heidelberg, Germany, 2015; pp. 234–241.
  31. Oladipo, B.; Gomes, J.; McGill, M.; Selmer, P. Leveraging deep learning as a new approach to layer detection and cloud–aerosol classification using ICESat-2 atmospheric data. Remote Sens. 2024, 16, 2344.
  32. Shelhamer, E.; Long, J.; Darrell, T. Fully Convolutional Networks for Semantic Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 640–651.
  33. Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. In Proceedings of the 3rd International Conference on Learning Representations (ICLR 2015), San Diego, CA, USA, 7–9 May 2015.
  34. Milletari, F.; Navab, N.; Ahmadi, S.A. V-net: Fully convolutional neural networks for volumetric medical image segmentation. In Proceedings of the 2016 Fourth International Conference on 3D Vision (3DV), Stanford, CA, USA, 25–28 October 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 565–571.
  35. Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; et al. PyTorch: An Imperative Style, High-Performance Deep Learning Library. In Advances in Neural Information Processing Systems (NeurIPS 2019); Curran Associates, Inc.: Red Hook, NY, USA, 2019; Volume 32, pp. 8024–8035.
  36. Hunter, J.D. Matplotlib: A 2D Graphics Environment. Comput. Sci. Eng. 2007, 9, 90–95.
  37. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-learn: Machine Learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830.
  38. Virtanen, P.; Gommers, R.; Oliphant, T.E.; Haberland, M.; Reddy, T.; Cournapeau, D.; Burovski, E.; Peterson, P.; Weckesser, W.; Bright, J.; et al. SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python. Nat. Methods 2020, 17, 261–272.
  39. Harris, C.R.; Millman, K.J.; Van Der Walt, S.J.; Gommers, R.; Virtanen, P.; Cournapeau, D.; Wieser, E.; Taylor, J.; Berg, S.; Smith, N.J.; et al. Array Programming with NumPy. Nature 2020, 585, 357–362.
Figure 1. Overview of the denoising and cloud–aerosol discrimination workflow. (A) Solar background noise is simulated based on background count statistics derived from ICESat-2 daytime atmospheric profiles and combined with nighttime atmospheric profiles using an appropriate noise model to generate paired clean and noisy training data. (B) A standard U-Net Convolutional Neural Network (CNN) [30] is trained to reconstruct the original nighttime atmospheric profiles from noisy inputs. (C) A modeled molecular count signal is added to the denoised photon count data, and the Calibrated Attenuated Backscatter (CAB) profile is obtained following the ICESat-2 operational calibration algorithm. (D) A second U-Net CNN is trained to perform cloud–aerosol discrimination and layer detection following the procedure described in [31].
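As a concrete illustration of the denoising stage in panel (B) of Figure 1, the following is a minimal two-level U-Net sketch in PyTorch [35] that maps a noisy photon-count patch to a denoised patch of the same shape. The class name `TinyUNet`, the channel widths, and the 64 × 64 patch size are assumptions for illustration only and do not reproduce the trained architecture described in the paper.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    # Two 3x3 convolutions with ReLU, as in a standard U-Net stage
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """Two-level encoder-decoder with skip connections, mapping a
    noisy (1, H, W) photon-count patch to a denoised patch."""
    def __init__(self, base=32):
        super().__init__()
        self.enc1 = conv_block(1, base)
        self.enc2 = conv_block(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        self.bott = conv_block(base * 2, base * 4)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv2d(base, 1, 1)

    def forward(self, x):
        e1 = self.enc1(x)                 # skip connection 1
        e2 = self.enc2(self.pool(e1))     # skip connection 2
        b = self.bott(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)

model = TinyUNet()
noisy = torch.rand(4, 1, 64, 64)   # batch of noisy profile patches
denoised = model(noisy)            # same shape as the input
```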
Figure 2. (A) ICESat-2 532 nm nighttime photon count profiles acquired on 11 April 2023. (B) Simulated daytime photon count profiles after addition of solar background noise. (C) Denoised photon count profiles obtained by applying the CNN-based denoising model to the noised data. The peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) were computed for both the noised and denoised profiles to quantify reconstruction performance. The y-axis shows altitude referenced to the onboard digital elevation model (DEM).
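The PSNR and SSIM reported in Figure 2 can be computed with standard image-quality routines; a minimal sketch using scikit-image is shown below. The arrays passed to `score` are assumed to be 2D photon-count images (bins × profiles), with the nighttime data as the reference; the random arrays at the bottom are stand-ins only.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def score(reference, estimate):
    """PSNR (dB) and SSIM of `estimate` against the nighttime
    `reference`, both 2D photon-count arrays (bins x profiles)."""
    ref = reference.astype(float)
    est = estimate.astype(float)
    data_range = ref.max() - ref.min()
    psnr = peak_signal_noise_ratio(ref, est, data_range=data_range)
    ssim = structural_similarity(ref, est, data_range=data_range)
    return psnr, ssim

# Stand-in data; in practice, compare noised and denoised images
# against the true nighttime image
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
est = ref + 0.1 * rng.random((64, 64))
print(score(ref, est))
```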
Figure 3. (A) ICESat-2 532 nm nighttime attenuated backscatter profiles for 9 April 2023. (B) Simulated daytime profiles generated by adding solar background noise to the nighttime data. (C) Denoised 532 nm attenuated backscatter profiles obtained using the CNN-based denoising model applied to the simulated daytime profiles.
Figure 4. (A) Layer detection results from the operational ICESat-2 ATL09 algorithm applied to nighttime profiles. (B) Layer detection results from the CNN-based model applied to denoised, simulated daytime profiles.
Figure 5. (A) Cloud–aerosol discrimination (CAD) results from the operational ICESat-2 ATL09 algorithm. (B) CAD results from the CNN-based model applied to denoised 532 nm attenuated backscatter profiles.
Figure 6. (Left) Confusion matrix comparing CNN-based layer detection results (x-axis) with traditional layer detection from the ICESat-2 ATL09 data product (y-axis) for 9 April 2023. (Right) Confusion matrix comparing CNN-based cloud–aerosol discrimination (CAD) results (x-axis) with the corresponding ATL09 classifications (y-axis) for the same scene. The ATL09 data product is derived from pristine nighttime lidar profiles, whereas the CNN-based methods operate on denoised, simulated daytime profiles. Reported percentages are computed relative to the total number of instances for each class as determined by the ATL09 reference.
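The normalization described in the Figure 6 caption (percentages relative to ATL09 class totals) corresponds to row-normalizing the confusion matrix, so the diagonal entries are per-class recall. A minimal sketch with scikit-learn [37] follows; the class labels and toy arrays are illustrative, not actual retrievals.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Illustrative class names for clear air, cloud, and aerosol bins
labels = ["none", "cloud", "aerosol"]

def row_normalized_cm(atl09, cnn):
    """Confusion matrix with ATL09 as the reference (rows) and the
    CNN output as the prediction (columns). normalize='true' divides
    each row by its class total, so the diagonal is per-class recall;
    scaling by 100 gives percentages."""
    cm = confusion_matrix(atl09, cnn, labels=labels, normalize="true")
    return 100.0 * cm

# Toy example (not actual retrievals)
atl09 = np.array(["cloud", "cloud", "aerosol", "none", "aerosol"])
cnn = np.array(["cloud", "none", "aerosol", "none", "cloud"])
print(row_normalized_cm(atl09, cnn).round(1))
```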
Figure 7. Segment of the atmospheric profiles shown in Figure 3, highlighting a boundary-layer cloud–aerosol scene. The red line indicates the central profile, and the white lines indicate a range of ±20 s. (A) ICESat-2 532 nm nighttime attenuated backscatter profiles for 9 April 2023. (B) Simulated daytime profiles after application of solar background noise. (C) Denoised 532 nm profiles obtained by the CNN-based method applied to simulated daytime data.
Figure 8. Single-profile comparison of ICESat-2 532 nm atmospheric backscatter captured on 9 April 2023 at 06:08:16.728190 UTC (red line in Figure 7). Profiles shown include the original nighttime CAB (true, black), simulated daytime CAB (noised, gray), denoised CAB (red), and attenuated molecular backscatter (AMB) (gray dashed). The profiles are averaged over windows of (A) ±1 s, (B) ±5 s, and (C) ±20 s.
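The ±1 s, ±5 s, and ±20 s curves in Figure 8 are simple means over neighboring profiles around the central one. The sketch below shows one way to compute such a window average; the function name and the assumption of one profile per second are illustrative, not a statement of the actual profile rate.

```python
import numpy as np

def window_average(cab, idx, half_width):
    """Mean CAB profile over [idx - half_width, idx + half_width],
    clipped at the data boundaries.

    cab        : (n_profiles, n_bins) calibrated attenuated backscatter
    idx        : index of the central profile (the red line)
    half_width : half-width in profiles; at an assumed rate of one
                 profile per second this equals +/- half_width seconds
    """
    lo = max(0, idx - half_width)
    hi = min(cab.shape[0], idx + half_width + 1)
    return cab[lo:hi].mean(axis=0)

# e.g., the three panels of Figure 8 around a central profile index:
# for hw in (1, 5, 20):
#     mean_profile = window_average(cab, 500, hw)
```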
Figure 9. Segment of the atmospheric data shown in Figure 5, highlighting a boundary-layer cloud–aerosol scene. (A) Layer detection and Cloud–Aerosol Discrimination (CAD) using the ICESat-2 ATL09 operational algorithm. (B) Layer detection and CAD based on denoised 532 nm profiles using the CNN-based CAD method.
Figure 10. A segment of the atmospheric profiles shown in Figure 3, zoomed in to highlight an example elevated cloud scene. The red line indicates the central profile in this segment, and the white lines indicate a range of ±20 s. (A) ICESat-2 532 nm nighttime attenuated backscatter profiles for 9 April 2023. (B) Result of 532 nm attenuated backscatter profiles after solar background simulation. (C) Result of 532 nm attenuated backscatter profiles after application of CNN-based denoising method to simulated daytime profiles.
Figure 11. Single-profile comparison of ICESat-2 532 nm attenuated backscatter for 9 April 2023 08:59:06.554191 UTC (see Figure 10, red line). Profiles include nighttime CAB (black), simulated daytime CAB (gray), denoised CAB (red), and attenuated molecular backscatter (gray, dashed). The profiles are averaged over windows of (A) ±1 s, (B) ±5 s, and (C) ±20 s.
Figure 12. Segment of the atmospheric data from Figure 5, highlighting an elevated cloud scene. (A) Layer detection and CAD from the ICESat-2 ATL09 operational algorithm. (B) Layer detection and CAD based on CNN analysis of the denoised profiles.
Figure 13. (A) ICESat-2 532 nm daytime attenuated backscatter profiles for 3 April 2023. (B) Result of 532 nm attenuated backscatter profiles after application of CNN-based denoising method to daytime profiles.
Figure 14. (A) Result of layer detection using the ICESat-2 ATL09 operational algorithm. (B) Result of layer detection based on denoised 532 nm attenuated backscatter profiles using a CNN-based layer detection method.
Figure 15. (A) Result of CAD using the ICESat-2 ATL09 operational algorithm. (B) Result of CAD based on denoised 532 nm attenuated backscatter profiles using a CNN-based CAD method.
Figure 16. (Left) Confusion matrix comparing CNN-based layer detection results (x-axis) with traditional layer detection from the ICESat-2 ATL09 data product (y-axis) for 3 April 2023. (Right) Confusion matrix comparing CNN-based cloud–aerosol discrimination (CAD) results (x-axis) with the corresponding ATL09 classifications (y-axis) for the same scene. The ICESat-2 ATL09 data product is based on the daytime lidar profiles, while the CNN-based methods operate on the denoised daytime lidar profiles. Reported percentages are computed relative to the total number of instances of each class given by the ATL09 method.
Figure 17. Segment of the atmospheric profiles shown in Figure 13, highlighting an aerosol scene. The red line indicates the central profile, and the white lines indicate a range of ±20 s. (A) ICESat-2 532 nm daytime attenuated backscatter profiles for 3 April 2023. (B) Denoised ICESat-2 532 nm attenuated backscatter profiles.
Figure 18. Single-profile comparison of ICESat-2 532 nm atmospheric backscatter captured on 3 April 2023 22:03:41.236742 UTC (red line in Figure 17). Profiles shown include the original daytime CAB (black), denoised CAB (red), and attenuated molecular backscatter (AMB) (gray dashed). The profiles are averaged over windows of (A) ±1 s, (B) ±5 s, and (C) ±20 s.
Figure 19. Segment of the atmospheric data from Figure 15, highlighting an example aerosol scene. (A) Layer detection and CAD from the ICESat-2 ATL09 operational algorithm. (B) Layer detection and CAD based on CNN analysis of the denoised profiles.
Figure 20. A segment of the atmospheric profiles shown in Figure 13, zoomed in to highlight an example elevated cloud scene. The red line indicates the central profile in this segment, and the white lines indicate a range of ±20 s. (A) ICESat-2 532 nm daytime attenuated backscatter profiles for 3 April 2023. (B) Result of 532 nm attenuated backscatter profiles after application of CNN-based denoising method to daytime profiles.
Figure 21. Single-profile comparison of ICESat-2 532 nm atmospheric backscatter captured on 3 April 2023 22:16:21.236761 UTC (red line in Figure 20). Profiles shown include the original daytime CAB (black), denoised CAB (red), and attenuated molecular backscatter profiles (gray dashed). The profiles are averaged over windows of (A) ±1 s, (B) ±5 s, and (C) ±20 s.
Figure 22. A segment of the atmospheric data shown in Figure 15, zoomed in to highlight an example elevated cloud scene. (A) Layer detection and CAD from the ICESat-2 ATL09 operational algorithm. (B) Layer detection and CAD based on CNN analysis of the denoised profiles.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
