Search Results (522)

Search Parameters:
Keywords = airborne point cloud

32 pages, 33744 KB  
Article
Attention-Based Enhancement of Airborne LiDAR Across Vegetated Landscapes Using SAR and Optical Imagery Fusion
by Michael Marks, Daniel Sousa and Janet Franklin
Remote Sens. 2025, 17(19), 3278; https://doi.org/10.3390/rs17193278 - 24 Sep 2025
Viewed by 443
Abstract
Accurate and timely 3D vegetation structure information is essential for ecological modeling and land management. However, these needs often cannot be met with existing airborne LiDAR surveys, whose broad-area coverage comes with trade-offs in point density and update frequency. To address these limitations, this study introduces a deep learning framework built on attention mechanisms, the fundamental building blocks of modern large language models. The framework upsamples sparse (<22 pt/m²) airborne LiDAR point clouds by fusing them with stacks of multi-temporal optical (NAIP) and L-band quad-polarized Synthetic Aperture Radar (UAVSAR) imagery. Utilizing a novel Local–Global Point Attention Block (LG-PAB), our model directly enhances 3D point-cloud density and accuracy in vegetated landscapes by learning structure directly from the point cloud itself. Results in fire-prone Southern California foothill and montane ecosystems demonstrate that fusing both optical and radar imagery reduces reconstruction error (measured by Chamfer distance) compared to using LiDAR alone or with a single image modality. Notably, the fused model substantially mitigates errors arising from vegetation changes over time, particularly in areas of canopy loss, thereby increasing the utility of historical LiDAR archives. This research presents a novel approach for direct 3D point-cloud enhancement, moving beyond traditional raster-based methods and offering a pathway to more accurate and up-to-date vegetation structure assessments. Full article
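
For readers unfamiliar with the error metric, the symmetric Chamfer distance between two point clouds can be sketched in a few lines of NumPy. This brute-force version assumes the common squared-distance formulation and is illustrative only, not the paper's accelerated implementation:

```python
# Minimal sketch of the symmetric Chamfer distance between two clouds.
# Brute-force O(N*M); real pipelines would use a KD-tree or GPU kernel.
import numpy as np

def chamfer_distance(a: np.ndarray, b: np.ndarray) -> float:
    """a: (N, 3) predicted points, b: (M, 3) reference points."""
    # Pairwise squared distances between every point in a and every point in b.
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    # For each point, squared distance to its nearest neighbor in the other cloud.
    a_to_b = d2.min(axis=1)
    b_to_a = d2.min(axis=0)
    return a_to_b.mean() + b_to_a.mean()

pred = np.random.rand(512, 3)   # upsampled cloud (placeholder data)
ref = np.random.rand(1024, 3)   # dense reference cloud (placeholder data)
print(chamfer_distance(pred, ref))
```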

22 pages, 6748 KB  
Article
Spatial Analysis of Bathymetric Data from UAV Photogrammetry and ALS LiDAR: Shallow-Water Depth Estimation and Shoreline Extraction
by Oktawia Specht
Remote Sens. 2025, 17(17), 3115; https://doi.org/10.3390/rs17173115 - 7 Sep 2025
Viewed by 885
Abstract
The shoreline and seabed topography are key components of the coastal zone and are essential for hydrographic surveys, shoreline process modelling, and coastal infrastructure management. The development of unmanned aerial vehicles (UAVs) and optoelectronic sensors, such as photogrammetric cameras and airborne laser scanning (ALS) using light detection and ranging (LiDAR) technology, has enabled the acquisition of high-resolution bathymetric data with greater accuracy and efficiency than traditional methods using echo sounders on manned vessels. This article presents a spatial analysis of bathymetric data obtained from UAV photogrammetry and ALS LiDAR, focusing on shallow-water depth estimation and shoreline extraction. The study area is Lake Kłodno, an inland waterbody with moderate ecological status. Aerial imagery from the photogrammetric camera was used to model the lake bottom in shallow areas, while the LiDAR point cloud acquired through ALS was used to determine the shoreline. Spatial analysis of support vector regression (SVR)-based bathymetric data showed effective depth estimation down to 1 m, with a standard deviation of 0.11 m and an accuracy of 0.22 m at the 95% confidence level. However, only 44.5% of 1 × 1 m grid cells met the minimum point density threshold recommended by the National Oceanic and Atmospheric Administration (NOAA) (≥5 pts/m²), while 43.7% contained no data. In contrast, ALS LiDAR provided higher and more consistent shoreline coverage, with an average density of 63.26 pts/m², despite 27.6% of grid cells being empty. The modified shoreline extraction method applied to the ALS data achieved a mean positional accuracy of 1.24 m and 3.36 m at the 95% confidence level. The results show that UAV photogrammetry and ALS LiDAR possess distinct yet complementary strengths, making their combined use beneficial for producing more accurate and reliable maps of shallow waters and shorelines. Full article
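
A minimal sketch of what SVR-based depth estimation looks like in practice, using scikit-learn on synthetic per-pixel predictors; the feature set, hyperparameters, and data below are assumptions, not the study's configuration:

```python
# Hedged sketch of SVR-based shallow-water depth estimation: fit a
# support vector regressor on per-pixel predictors (hypothetical band
# ratios from UAV imagery) against echo-sounder reference depths.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.random((500, 3))            # e.g. blue/green ratio, brightness, texture
depth = 1.5 * X[:, 0] + 0.3 * rng.standard_normal(500)  # synthetic truth (m)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05))
model.fit(X, depth)
pred = model.predict(X)
# Standard deviation of residuals, analogous to the 0.11 m figure above.
print("residual std:", np.std(pred - depth))
```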

19 pages, 5844 KB  
Article
Cloud Particle Detection in 2D-S Imaging Data via an Adaptive Anchor SSD Model
by Shuo Liu, Dingkun Yang and Luhong Fan
Atmosphere 2025, 16(8), 985; https://doi.org/10.3390/atmos16080985 - 19 Aug 2025
Viewed by 524
Abstract
The airborne 2D-S optical array probe has been in operation for more than ten years and has collected a large number of cloud particle images. However, existing detection methods cannot detect cloud particles with high precision due to the size differences of cloud particles and the occurrence of particle fragmentation during imaging. This paper therefore proposes a novel cloud particle detection method. The key innovation is an adaptive anchor SSD module, which overcomes existing limitations by generating anchor points that adaptively align with cloud particle size distributions. Firstly, morphological transformations generate multi-scale image information through repeated dilation and erosion operations, while removing irrelevant artifacts and fragmented particles for data cleaning. After that, the method generates geometric and mass centers across multiple scales and dynamically merges these centers to form adaptive anchor points. Finally, a detection module integrates a modified SSD with a ResNet-50 backbone for accurate bounding box predictions. Experimental results show that the proposed method achieves an mAP of 0.934 and a recall of 0.905 on the test set, demonstrating its effectiveness and reliability for cloud particle detection using the 2D-S probe. Full article
(This article belongs to the Special Issue Applications of Artificial Intelligence in Atmospheric Sciences)
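
The multi-scale morphological preprocessing described above can be approximated as follows; the kernel size, number of scales, and fragment-area threshold are illustrative assumptions, not the paper's values:

```python
# Sketch of the morphological step: repeated dilation/erosion builds
# multi-scale versions of a binary particle image, then small connected
# components (fragments/artifacts) are dropped as data cleaning.
import cv2
import numpy as np

img = (np.random.rand(256, 256) > 0.97).astype(np.uint8) * 255  # fake 2D-S frame
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))

scales = []
work = img.copy()
for _ in range(3):                       # three morphological scales (assumed)
    work = cv2.dilate(work, kernel)
    work = cv2.erode(work, kernel)       # closing-like smoothing per scale
    scales.append(work.copy())

# Data cleaning: remove connected components below a minimum pixel area.
n, labels, stats, _ = cv2.connectedComponentsWithStats(scales[-1])
cleaned = np.zeros_like(img)
for i in range(1, n):
    if stats[i, cv2.CC_STAT_AREA] >= 5:  # assumed fragment threshold
        cleaned[labels == i] = 255
```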

46 pages, 12839 KB  
Article
Tree Type Classification from ALS Data: A Comparative Analysis of 1D, 2D, and 3D Representations Using ML and DL Models
by Sead Mustafić, Mathias Schardt and Roland Perko
Remote Sens. 2025, 17(16), 2847; https://doi.org/10.3390/rs17162847 - 15 Aug 2025
Viewed by 797
Abstract
Accurate classification of individual tree types is a key component in forest inventory, biodiversity monitoring, and ecological modeling. This study evaluates and compares multiple Machine Learning (ML) and Deep Learning (DL) approaches for tree type classification based on Airborne Laser Scanning (ALS) data. A mixed-species forest in southeastern Austria, Europe, served as the test site, with spruce, pine, and a grouped class of broadleaf species as target categories. To examine the impact of data representation, ALS point clouds were transformed into four distinct structures: 1D feature vectors, 2D raster profiles, 3D voxel grids, and unstructured 3D point clouds. A comprehensive dataset, combining field measurements and manually annotated aerial data, was used to train and validate 45 ML and DL models. Results show that DL models based on 3D point clouds achieved the highest overall accuracy (up to 88.1%), followed by multi-view 2D raster and voxel-based methods. Traditional ML models performed well on 1D data but struggled with high-dimensional inputs. Spruce trees were classified most reliably, while confusion between pine and broadleaf species remained challenging across methods. The study highlights the importance of selecting suitable data structures and model types for operational tree classification and outlines potential directions for improving accuracy through multimodal and temporal data fusion. Full article
(This article belongs to the Section Forest Remote Sensing)
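
Of the four data representations compared, the 3D voxel grid is the easiest to illustrate; the sketch below converts a single-tree ALS point cloud into a fixed-size occupancy grid for a 3D CNN, with the grid resolution an assumed parameter:

```python
# Minimal sketch of one representation: single-tree point cloud -> 3D
# occupancy voxel grid, preserving aspect ratio via a common scale.
import numpy as np

def voxelize(points: np.ndarray, grid: int = 32) -> np.ndarray:
    """points: (N, 3) xyz for one tree -> (grid, grid, grid) occupancy."""
    mins, maxs = points.min(axis=0), points.max(axis=0)
    scale = (maxs - mins).max() + 1e-9          # keep aspect ratio
    idx = ((points - mins) / scale * (grid - 1)).astype(int)
    vox = np.zeros((grid, grid, grid), dtype=np.float32)
    vox[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return vox

tree = np.random.rand(2000, 3) * [2, 2, 10]     # synthetic conifer-like cloud
print(voxelize(tree).sum(), "occupied voxels")
```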

25 pages, 9564 KB  
Article
Semantic-Aware Cross-Modal Transfer for UAV-LiDAR Individual Tree Segmentation
by Fuyang Zhou, Haiqing He, Ting Chen, Tao Zhang, Minglu Yang, Ye Yuan and Jiahao Liu
Remote Sens. 2025, 17(16), 2805; https://doi.org/10.3390/rs17162805 - 13 Aug 2025
Viewed by 801
Abstract
Cross-modal semantic segmentation of individual tree LiDAR point clouds is critical for accurately characterizing tree attributes, quantifying ecological interactions, and estimating carbon storage. However, in forest environments, this task faces key challenges such as high annotation costs and poor cross-domain generalization. To address these issues, this study proposes a cross-modal semantic transfer framework tailored for individual tree point cloud segmentation in forested scenes. Leveraging co-registered UAV-acquired RGB imagery and LiDAR data, we construct a technical pipeline of “2D semantic inference—3D spatial mapping—cross-modal fusion” to enable annotation-free semantic parsing of 3D individual trees. Specifically, we first introduce a novel Multi-Source Feature Fusion Network (MSFFNet) to achieve accurate instance-level segmentation of individual trees in the 2D image domain. Subsequently, we develop a hierarchical two-stage registration strategy to effectively align dense matched point clouds (MPC) generated from UAV imagery with LiDAR point clouds. On this basis, we propose a probabilistic cross-modal semantic transfer model that builds a semantic probability field through multi-view projection and the expectation–maximization algorithm. By integrating geometric features and semantic confidence, the model establishes semantic correspondences between 2D pixels and 3D points, thereby achieving spatially consistent semantic label mapping. This facilitates the transfer of semantic annotations from the 2D image domain to the 3D point cloud domain. The proposed method is evaluated on two forest datasets. The results demonstrate that the proposed individual tree instance segmentation approach achieves the highest performance, with an IoU of 87.60%, outperforming state-of-the-art methods such as Mask R-CNN, SOLOv2, and Mask2Former. Furthermore, the cross-modal semantic label transfer framework significantly outperforms existing mainstream methods in individual tree point cloud semantic segmentation across complex forest scenarios. Full article
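
At its core, the "2D semantic inference → 3D spatial mapping" step projects LiDAR points into the image plane and reads off per-pixel labels. The pinhole-camera sketch below uses made-up intrinsics and pose, and omits the paper's probabilistic multi-view fusion:

```python
# Sketch of transferring 2D semantic labels onto 3D points via a
# pinhole projection. K, R, t are assumed values, not calibrated ones.
import numpy as np

K = np.array([[1000.0, 0.0, 320.0],
              [0.0, 1000.0, 240.0],
              [0.0, 0.0, 1.0]])               # assumed camera intrinsics
R, t = np.eye(3), np.array([0.0, 0.0, 5.0])   # assumed camera pose

points = np.random.randn(1000, 3)                 # LiDAR points (world frame)
labels_2d = np.random.randint(0, 3, (480, 640))   # per-pixel semantic map

cam = points @ R.T + t            # world -> camera coordinates
z = cam[:, 2]
front = z > 1e-6                  # keep points in front of the camera
uvw = cam @ K.T
uv = np.zeros((len(points), 2), dtype=int)
uv[front] = (uvw[front, :2] / z[front, None]).astype(int)

valid = (front & (uv[:, 0] >= 0) & (uv[:, 0] < 640)
               & (uv[:, 1] >= 0) & (uv[:, 1] < 480))
point_labels = np.full(len(points), -1)           # -1 = no label transferred
point_labels[valid] = labels_2d[uv[valid, 1], uv[valid, 0]]
```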

42 pages, 8886 KB  
Article
Standard Classes for Urban Topographic Mapping with ALS: Classification Scheme and a First Implementation
by Agata Walicka and Norbert Pfeifer
Remote Sens. 2025, 17(15), 2731; https://doi.org/10.3390/rs17152731 - 7 Aug 2025
Viewed by 539
Abstract
Research regarding airborne laser scanning (ALS) point cloud semantic segmentation typically revolves around supervised machine learning, which requires time-consuming generation of training data. Therefore, the models are usually trained using one of the benchmarking datasets that cover a small area. Recently, many European countries have published classified ALS data, which can potentially be used for training models. However, a review of the classification schemes of these datasets revealed that these schemes vary substantially, therefore limiting their applicability. Thus, our goal was three-fold. First, to develop a common classification scheme that can be applied for the semantic segmentation of various ALS datasets. Second, to unify the classification scheme of existing ALS datasets. Third, to employ them for the training of a classifier that will be able to classify data from different sources and will not require additional training. We propose a classification scheme of four classes: ground and water, vegetation, buildings and bridges, and ‘other’. The developed classifier is trained jointly using ALS data from Austria, Switzerland, and Poland. A test on unseen datasets demonstrates that the achieved intersection over union accuracy varies between 90.0–97.3% for ground and water, 68.0–95.9% for vegetation, 77.6–94.8% for buildings and bridges, and 13.5–52.7% for ‘other’. As a result, we conclude that the developed method generalizes well to previously unseen data. Full article
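
Unifying heterogeneous national schemes into the proposed four classes reduces, in the simplest case, to a lookup table over source class codes. The sketch below assumes ASPRS LAS codes (2 = ground, 9 = water, 3–5 = vegetation, 6 = building, 17 = bridge deck); national datasets that deviate from that convention would need their own tables:

```python
# Sketch of remapping per-point classification codes to the four-class
# scheme: ground and water, vegetation, buildings and bridges, 'other'.
import numpy as np

GROUND_WATER, VEGETATION, BUILDING_BRIDGE, OTHER = 0, 1, 2, 3
LAS_TO_UNIFIED = {2: GROUND_WATER, 9: GROUND_WATER,
                  3: VEGETATION, 4: VEGETATION, 5: VEGETATION,
                  6: BUILDING_BRIDGE, 17: BUILDING_BRIDGE}

def remap(las_classes: np.ndarray) -> np.ndarray:
    out = np.full(las_classes.shape, OTHER, dtype=np.int64)  # default: 'other'
    for src, dst in LAS_TO_UNIFIED.items():
        out[las_classes == src] = dst
    return out

print(remap(np.array([2, 3, 6, 1, 9, 17, 64])))
```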

19 pages, 8766 KB  
Article
Fusion of Airborne, SLAM-Based, and iPhone LiDAR for Accurate Forest Road Mapping in Harvesting Areas
by Evangelia Siafali, Vasilis Polychronos and Petros A. Tsioras
Land 2025, 14(8), 1553; https://doi.org/10.3390/land14081553 - 28 Jul 2025
Cited by 1 | Viewed by 1498
Abstract
This study examined the integration of airborne Light Detection and Ranging (LiDAR), Simultaneous Localization and Mapping (SLAM)-based handheld LiDAR, and iPhone LiDAR to inspect forest road networks following forest operations. The goal is to overcome the challenges posed by dense canopy cover and ensure accurate and efficient data collection and mapping. Airborne data were collected using the DJI Matrice 300 RTK UAV equipped with a Zenmuse L2 LiDAR sensor, which achieved a high point density of 285 points/m² at an altitude of 80 m. Ground-level data were collected using the BLK2GO handheld laser scanner (HPLS) with SLAM methods (LiDAR SLAM, Visual SLAM, Inertial Measurement Unit) and the iPhone 13 Pro Max LiDAR. Data processing included generating DEMs, DSMs, and True Digital Orthophotos (TDOMs) via DJI Terra, LiDAR360 V8, and Cyclone REGISTER 360 PLUS, with additional processing and merging using CloudCompare V2 and ArcGIS Pro 3.4.0. The pairwise comparison analysis between ALS data and each alternative method revealed notable differences in elevation, highlighting discrepancies between methods. ALS + iPhone demonstrated the smallest deviation from ALS (MAE = 0.011, RMSE = 0.011, RE = 0.003%) and HPLS the largest deviation from ALS (MAE = 0.507, RMSE = 0.542, RE = 0.123%). The findings highlight the potential of fusing point clouds from diverse platforms to enhance forest road mapping accuracy. However, the selection of technology should consider trade-offs among accuracy, cost, and operational constraints. Mobile LiDAR solutions, particularly the iPhone, offer promising low-cost alternatives for certain applications. Future research should explore real-time fusion workflows and strategies to improve the cost-effectiveness and scalability of multisensor approaches for forest road monitoring. Full article
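
The pairwise comparison metrics reported above can be reproduced on co-registered elevation grids as follows; the relative-error definition (MAE over mean elevation) is an assumption, and the data are synthetic:

```python
# Sketch of the elevation comparison: MAE, RMSE, and relative error
# between a reference ALS DEM and an alternative-sensor DEM.
import numpy as np

als = np.random.rand(200, 200) * 50 + 400           # reference DEM (m)
alt = als + np.random.normal(0, 0.011, als.shape)   # e.g. ALS + iPhone surface

diff = alt - als
mae = np.abs(diff).mean()
rmse = np.sqrt((diff ** 2).mean())
re = mae / als.mean() * 100                         # assumed RE definition, %
print(f"MAE={mae:.3f} m, RMSE={rmse:.3f} m, RE={re:.3f}%")
```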

28 pages, 5373 KB  
Article
Transfer Learning Based on Multi-Branch Architecture Feature Extractor for Airborne LiDAR Point Cloud Semantic Segmentation with Few Samples
by Jialin Yuan, Hongchao Ma, Liang Zhang, Jiwei Deng, Wenjun Luo, Ke Liu and Zhan Cai
Remote Sens. 2025, 17(15), 2618; https://doi.org/10.3390/rs17152618 - 28 Jul 2025
Viewed by 596
Abstract
The existing deep learning-based Airborne Laser Scanning (ALS) point cloud semantic segmentation methods require a large amount of labeled data for training, which is not always feasible in practice. Insufficient training data may lead to over-fitting. To address this issue, we propose a novel Multi-branch Feature Extractor (MFE) and a three-stage transfer learning strategy that conducts pre-training on multi-source ALS data and transfers the model to another dataset with few samples, thereby improving the model’s generalization ability and reducing the need for manual annotation. The proposed MFE is based on a novel multi-branch architecture integrating Neighborhood Embedding Block (NEB) and Point Transformer Block (PTB); it aims to extract heterogeneous features (e.g., geometric features, reflectance features, and internal structural features) by leveraging the parameters contained in ALS point clouds. To address model transfer, a three-stage strategy was developed: (1) A pre-training subtask was employed to pre-train the proposed MFE if the source domain consisted of multi-source ALS data, overcoming parameter differences. (2) A domain adaptation subtask was employed to align cross-domain feature distributions between source and target domains. (3) An incremental learning subtask was proposed for continuous learning of novel categories in the target domain, avoiding catastrophic forgetting. Experiments were conducted with a source domain consisting of the DALES and Dublin datasets and a target domain consisting of the ISPRS benchmark dataset. The experimental results show that the proposed method achieved the highest OA of 85.5% and an average F1 score of 74.0% using only 10% of the training samples, which means the proposed framework can reduce manual annotation by 90% while keeping competitive classification accuracy. Full article
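
A minimal PyTorch sketch of the transfer idea, reduced to its core: reuse a pre-trained feature extractor, freeze its early layers, and fine-tune on few target-domain labels. The tiny MLP is a stand-in for the paper's MFE; all shapes and class counts are placeholders:

```python
# Hedged sketch of few-sample fine-tuning after pre-training.
import torch
import torch.nn as nn

extractor = nn.Sequential(nn.Linear(9, 64), nn.ReLU(), nn.Linear(64, 64))
head = nn.Linear(64, 9)                        # e.g. 9 target classes

# ... assume `extractor` was pre-trained on the source domain here ...

for p in extractor[0].parameters():            # freeze the earliest block
    p.requires_grad = False

params = [p for p in list(extractor.parameters()) + list(head.parameters())
          if p.requires_grad]
opt = torch.optim.Adam(params, lr=1e-4)

x = torch.randn(256, 9)                        # few target-domain samples
y = torch.randint(0, 9, (256,))
loss = nn.functional.cross_entropy(head(extractor(x)), y)
loss.backward()
opt.step()
```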

25 pages, 12949 KB  
Article
Enhanced Landslide Visualization and Trace Identification Using LiDAR-Derived DEM
by Jie Lv, Chengzhuo Lu, Minjun Ye, Yuting Long, Wenbing Li and Minglong Yang
Sensors 2025, 25(14), 4391; https://doi.org/10.3390/s25144391 - 14 Jul 2025
Viewed by 818
Abstract
In response to the inability of traditional remote sensing technology to accurately capture the micro-topographic features of landslide surfaces in vegetated areas under complex terrain conditions, this paper proposes a method for enhanced landslide terrain display and trace recognition based on airborne LiDAR technology. Firstly, a high-precision LiDAR-DEM is constructed using preprocessed LiDAR point cloud data, and visual images are generated using visualization methods, including hillshade, slope, openness, and Sky View Factor (SVF). Secondly, pixel-level image fusion methods are applied to the visual images to obtain enhanced display images of the landslide terrain. Finally, a threshold is determined through a fractal model, and the Mean-Shift algorithm is utilized for clustering and denoising to extract landslide traces. The results indicate that employing pixel-level image fusion technology, which combines the advantageous features of multiple terrain visualization images, effectively enhances the display of landslide micro-topography. Moreover, based on the enhanced display images, the fractal model and the Mean-Shift algorithm are applied for denoising to extract landslide traces. Compared to orthophotos, this method can effectively and accurately extract landslide traces. The findings of this study provide valuable references for the enhanced display and trace recognition of landslide terrain in densely vegetated areas within complex mountainous areas, thereby providing technical support for emergency investigations of landslide disasters. Full article
(This article belongs to the Special Issue Sensor Fusion in Positioning and Navigation)
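
One of the visualization layers (hillshade) is straightforward to derive from a LiDAR DEM; the sketch below uses the standard Horn-style formulation with conventional sun azimuth and altitude, which are defaults rather than values from the paper:

```python
# Sketch of DEM hillshading: illumination from slope and aspect.
import numpy as np

def hillshade(dem, cellsize=1.0, azimuth=315.0, altitude=45.0):
    az, alt = np.radians(azimuth), np.radians(altitude)
    dzdy, dzdx = np.gradient(dem, cellsize)     # terrain gradients
    slope = np.arctan(np.hypot(dzdx, dzdy))
    aspect = np.arctan2(-dzdx, dzdy)
    shaded = (np.sin(alt) * np.cos(slope)
              + np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shaded, 0, 1)

dem = np.random.rand(128, 128).cumsum(axis=0)   # synthetic terrain
hs = hillshade(dem)
```

Pixel-level fusion then combines such layers (slope, openness, SVF) into a single enhanced-display image, e.g. by weighted averaging of the normalized rasters.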

22 pages, 10490 KB  
Article
DFPS: An Efficient Downsampling Algorithm Designed for the Global Feature Preservation of Large-Scale Point Cloud Data
by Jiahui Dong, Maoyi Tian, Jiayong Yu, Guoyu Li, Yunfei Wang and Yuxin Su
Sensors 2025, 25(14), 4279; https://doi.org/10.3390/s25144279 - 9 Jul 2025
Viewed by 763
Abstract
This paper introduces an efficient 3D point cloud downsampling algorithm (DFPS) based on adaptive multi-level grid partitioning. By leveraging an adaptive hierarchical grid partitioning mechanism, the algorithm dynamically adjusts computational intensity in accordance with terrain complexity. This approach effectively balances the global feature retention of point cloud data with computational efficiency, making it highly adaptable to the growing trend of large-scale 3D point cloud datasets. DFPS is designed with a multithreaded parallel acceleration architecture, which significantly enhances processing speed. Experimental results demonstrate that, for a point cloud dataset containing millions of points, DFPS reduces processing time from approximately 161,665 s using the original FPS method to approximately 71.64 s at a 12.5% sampling rate, achieving an efficiency improvement of over 2200 times. As the sampling rate decreases, the performance advantage becomes more pronounced: at a 3.125% sampling rate, the efficiency improves by nearly 10,000 times. By employing visual observation and quantitative analysis (with the chamfer distance as the measurement index), it is evident that DFPS can effectively preserve global feature information. Notably, DFPS does not depend on GPU-based heterogeneous computing, enabling seamless deployment in resource-constrained environments such as airborne and mobile devices, which makes DFPS an effective and lightweight tool for providing high-quality input data for subsequent algorithms, including point cloud registration and semantic segmentation. Full article
(This article belongs to the Section Vehicular Sensing)
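
The core idea behind DFPS, localizing farthest point sampling (FPS) within grid cells so the quadratic cost applies only to small blocks, can be sketched as follows; the adaptive multi-level partitioning and multithreading of the actual algorithm are omitted:

```python
# Sketch: per-cell FPS instead of one global pass. A single flat grid
# stands in for the paper's adaptive hierarchical partitioning.
import numpy as np

def fps(pts: np.ndarray, k: int) -> np.ndarray:
    """Plain farthest point sampling of k points from pts."""
    sel = [0]
    d = np.linalg.norm(pts - pts[0], axis=1)
    for _ in range(1, k):
        sel.append(int(d.argmax()))
        d = np.minimum(d, np.linalg.norm(pts - pts[sel[-1]], axis=1))
    return pts[sel]

def grid_fps(pts: np.ndarray, cell: float, rate: float) -> np.ndarray:
    keys = np.floor(pts[:, :2] / cell).astype(int)    # 2D grid cells
    out = []
    for key in np.unique(keys, axis=0):
        block = pts[(keys == key).all(axis=1)]
        k = max(1, int(len(block) * rate))
        out.append(fps(block, k))
    return np.vstack(out)

cloud = np.random.rand(20000, 3) * [100, 100, 10]
print(grid_fps(cloud, cell=10.0, rate=0.125).shape)
```

Because each cell holds only hundreds of points, the per-block O(k·n) FPS passes stay cheap, which is where the reported speedups come from.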

17 pages, 6547 KB  
Article
Direct Estimation of Forest Aboveground Biomass from UAV LiDAR and RGB Observations in Forest Stands with Various Tree Densities
by Kangyu So, Jenny Chau, Sean Rudd, Derek T. Robinson, Jiaxin Chen, Dominic Cyr and Alemu Gonsamo
Remote Sens. 2025, 17(12), 2091; https://doi.org/10.3390/rs17122091 - 18 Jun 2025
Viewed by 1965
Abstract
Canada’s vast forests play a substantial role in the global carbon balance but require laborious and expensive forest inventory campaigns to monitor changes in aboveground biomass (AGB). Light detection and ranging (LiDAR) or reflectance observations onboard airborne or unoccupied aerial vehicles (UAVs) may address scalability limitations associated with traditional forest inventory but require simple forest structures or large sets of manually delineated crowns. Here, we introduce a deep learning approach for crown delineation and AGB estimation reproducible for complex forest structures without relying on hand annotations for training. Firstly, we detect treetops and delineate crowns with a LiDAR point cloud using marker-controlled watershed segmentation (MCWS). Then we train a deep learning model on annotations derived from MCWS to make crown predictions on UAV red, blue, and green (RGB) tiles. Finally, we estimate AGB metrics from tree height- and crown diameter-based allometric equations, all derived from UAV data. We validate our approach using 14 ha mixed forest stands with various experimental tree densities in Southern Ontario, Canada. Our results show that using an unsupervised LiDAR-only algorithm for tree crown delineation alongside a self-supervised RGB deep learning model trained on LiDAR-derived annotations leads to an 18% improvement in AGB estimation accuracy. In unharvested stands, the self-supervised RGB model performs well for height (adjusted R², Rₐ² = 0.79) and AGB (Rₐ² = 0.80) estimation. In thinned stands, the performance of both unsupervised and self-supervised methods varied with stand density, crown clumping, canopy height variation, and species diversity. These findings suggest that MCWS can be supplemented with self-supervised deep learning to directly estimate biomass components in complex forest structures as well as atypical forest conditions where stand density and spatial patterns are manipulated. Full article
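
The final AGB step reduces to evaluating allometric equations on LiDAR-derived height and crown diameter; the power-law form and coefficients below are generic placeholders, not the study's species-specific equations:

```python
# Hedged sketch of height- and crown-diameter-based allometry for
# per-tree aboveground biomass. Coefficients are hypothetical.
import numpy as np

def agb_kg(height_m: np.ndarray, crown_diam_m: np.ndarray,
           a: float = 0.05, b: float = 2.0, c: float = 1.0) -> np.ndarray:
    """AGB = a * H^b * CD^c  (assumed power-law form and coefficients)."""
    return a * height_m ** b * crown_diam_m ** c

heights = np.array([12.0, 18.5, 25.0])   # from LiDAR canopy height maxima
crowns = np.array([3.1, 4.8, 6.2])       # from delineated crown polygons
print(agb_kg(heights, crowns).sum(), "kg per plot (illustrative)")
```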

21 pages, 4282 KB  
Article
Stability Assessment of Hazardous Rock Masses and Rockfall Trajectory Prediction Using LiDAR Point Clouds
by Rao Zhu, Yonghua Xia, Shucai Zhang and Yingke Wang
Appl. Sci. 2025, 15(12), 6709; https://doi.org/10.3390/app15126709 - 15 Jun 2025
Viewed by 684
Abstract
This study aims to mitigate slope-collapse hazards that threaten life and property at the Lujiawan resettlement site in Wanbi Town, Dayao County, Yunnan Province, within the Guanyinyan hydropower reservoir. It integrates centimeter-level point-cloud data collected by a DJI Matrice 350 RTK equipped with a Zenmuse L2 airborne LiDAR (Light Detection And Ranging) sensor with detailed structural-joint survey data. First, qualitative structural interpretation is conducted with stereographic projection. Next, safety factors are quantified using the limit-equilibrium method, establishing a dual qualitative–quantitative diagnostic framework. This framework delineates six hazardous rock zones (WY1–WY6), dominated by toppling and free-fall failure modes, and evaluates their stability under combined rainfall infiltration, seismic loading, and ambient conditions. Subsequently, six-degree-of-freedom Monte Carlo simulations incorporating realistic three-dimensional terrain and block geometry are performed in RAMMS::ROCKFALL (Rapid Mass Movements Simulation—Rockfall). The resulting spatial patterns of rockfall velocity, kinetic energy, and rebound height elucidate their evolution coupled with slope height, surface morphology, and block shape. Results show peak velocities ranging from 20 to 42 m·s⁻¹ and maximum kinetic energies between 0.16 and 1.4 MJ. Most rockfall trajectories terminate within 0–80 m of the cliff base. All six identified hazardous rock masses pose varying levels of threat to residential structures at the slope foot, highlighting substantial spatial variability in hazard distribution. Drawing on the preceding diagnostic results and dynamic simulations, we recommend a three-tier “zonal defense with in situ energy dissipation” scheme: (i) install 500–2000 kJ flexible barriers along the crest and upper slope to rapidly attenuate rockfall energy; (ii) place guiding or deflection structures at mid-slope to steer blocks and dissipate momentum; and (iii) deploy high-capacity flexible nets combined with a catchment basin at the slope foot to intercept residual blocks. This staged arrangement maximizes energy attenuation and overall risk reduction. This study shows that integrating high-resolution 3D point clouds with rigid-body contact dynamics overcomes the spatial discontinuities of conventional surveys. The approach substantially improves the accuracy and efficiency of hazardous rock stability assessments and rockfall trajectory predictions, offering a quantifiable, reproducible mitigation framework for long slopes, large rock volumes, and densely fractured cliff faces. Full article
(This article belongs to the Special Issue Emerging Trends in Rock Mechanics and Rock Engineering)
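
The limit-equilibrium safety factor quantified above, in its simplest planar-sliding form, is FS = (cA + (W cos θ − U) tan φ) / (W sin θ); the sketch below uses illustrative inputs, not values from the survey:

```python
# Hedged sketch of a planar-sliding limit-equilibrium factor of safety.
import math

def factor_of_safety(W, theta_deg, c, A, phi_deg, U=0.0):
    """W: block weight (kN); theta: failure-plane dip (deg); c: cohesion
    (kPa); A: plane area (m^2); phi: friction angle (deg); U: uplift
    water force (kN). All inputs here are illustrative."""
    th, ph = math.radians(theta_deg), math.radians(phi_deg)
    resisting = c * A + (W * math.cos(th) - U) * math.tan(ph)
    driving = W * math.sin(th)
    return resisting / driving

print(factor_of_safety(W=500, theta_deg=35, c=25, A=12, phi_deg=30))
# Rainfall infiltration raises U and seismic loading adds to the driving
# term; both lower FS, matching the scenarios evaluated in the paper.
```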

26 pages, 5469 KB  
Article
SeqConv-Net: A Deep Learning Segmentation Framework for Airborne LiDAR Point Clouds Based on Spatially Ordered Sequences
by Bin Guo, Chunjing Yao, Hongchao Ma, Jie Wang and Junhao Xu
Remote Sens. 2025, 17(11), 1927; https://doi.org/10.3390/rs17111927 - 1 Jun 2025
Viewed by 1145
Abstract
Point cloud data provide three-dimensional (3D) information about objects in the real world, containing rich semantic features. Therefore, the task of semantic segmentation of point clouds has been widely applied in fields such as robotics and autonomous driving. Although existing research has made unprecedented progress, achieving real-time semantic segmentation of point clouds on airborne devices still faces challenges due to excessive computational and memory requirements. To address this issue, we propose a novel sequence convolution semantic segmentation architecture that integrates Convolutional Neural Networks (CNN) with a sequence-to-sequence (seq2seq) structure, termed SeqConv-Net. This architecture views point cloud semantic segmentation as a sequence generation task. Based on our unique perspective of spatially ordered sequences, we use Recurrent Neural Networks (RNN) to encode elevation information, then input the structured hidden states into a CNN for planar feature extraction. The results are combined with the RNN’s encoded outputs via residual connections and are fed into a decoder for sequence prediction in a seq2seq manner. Experiments show that the SeqConv-Net architecture achieves 75.5% mean Intersection over Union (mIoU) accuracy on the DALES dataset, with the total processing speed from data preprocessing to prediction being several to tens of times faster than existing methods. Additionally, SeqConv-Net can balance accuracy and speed by adjusting the hyperparameters and using different RNNs and CNNs, providing a new solution for real-time point cloud semantic segmentation in airborne environments. Full article
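
The "spatially ordered sequence" view can be made concrete by grouping points into 2D cells and ordering each group by elevation, yielding one vertical sequence per cell for the RNN encoder; cell size, sequence length, and the zero-padding scheme below are assumed simplifications:

```python
# Hedged sketch of turning a point cloud into fixed-length, elevation-
# ordered sequences, one per 2D grid cell.
import numpy as np

def to_sequences(pts: np.ndarray, cell: float = 1.0, max_len: int = 16):
    keys = np.floor(pts[:, :2] / cell).astype(int)
    seqs = {}
    for key in map(tuple, np.unique(keys, axis=0)):
        block = pts[(keys == key).all(axis=1)]
        block = block[np.argsort(block[:, 2])][:max_len]   # order by z
        pad = np.zeros((max_len - len(block), 3))
        seqs[key] = np.vstack([block, pad])                # fixed length
    return seqs

cloud = np.random.rand(5000, 3) * [10, 10, 30]
print(len(to_sequences(cloud)), "cell sequences")
```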

28 pages, 16050 KB  
Article
Advancing ALS Applications with Large-Scale Pre-Training: Framework, Dataset, and Downstream Assessment
by Haoyi Xiu, Xin Liu, Taehoon Kim and Kyoung-Sook Kim
Remote Sens. 2025, 17(11), 1859; https://doi.org/10.3390/rs17111859 - 27 May 2025
Viewed by 840
Abstract
The pre-training and fine-tuning paradigm has significantly advanced satellite remote sensing applications. However, its potential remains largely underexplored for airborne laser scanning (ALS), a key technology in domains such as forest management and urban planning. In this study, we address this gap by constructing a large-scale ALS point cloud dataset and evaluating its effectiveness in downstream applications. We first propose a simple, generalizable framework for dataset construction, designed to maximize land cover and terrain diversity while allowing flexible control over dataset size. We instantiate this framework using ALS, land cover, and terrain data collected across the contiguous United States, resulting in a dataset geographically covering 17,000+ km² (184 billion points) with diverse land cover and terrain types included. As a baseline self-supervised learning model, we adopt BEV-MAE, a state-of-the-art masked autoencoder for 3D outdoor point clouds, and pre-train it on the constructed dataset. The resulting models are fine-tuned for several downstream tasks, including tree species classification, terrain scene recognition, and point cloud semantic segmentation. Our results show that pre-trained models consistently outperform their counterparts trained from scratch across all downstream tasks, demonstrating the strong transferability of the learned representations. Additionally, we find that scaling the dataset using the proposed framework leads to consistent performance improvements, whereas datasets constructed via random sampling fail to achieve comparable gains. Full article
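
The diversity-maximizing construction can be contrasted with random sampling by a simple stratified sketch: bucket candidate tiles by (land cover, terrain) and fill the budget round-robin across strata. The strata labels and budget here are synthetic, and the paper's actual framework is more elaborate:

```python
# Hedged sketch of diversity-driven tile selection vs. random sampling.
import random
from collections import defaultdict

random.seed(0)
tiles = [{"id": i,
          "landcover": random.choice(["forest", "urban", "crop"]),
          "terrain": random.choice(["flat", "hilly", "mountain"])}
         for i in range(1000)]

strata = defaultdict(list)
for t in tiles:                       # bucket tiles by stratum
    strata[(t["landcover"], t["terrain"])].append(t)

budget, chosen = 90, []
groups = list(strata.values())
while len(chosen) < budget:           # round-robin keeps strata balanced
    for g in groups:
        if g and len(chosen) < budget:
            chosen.append(g.pop())
print(len(chosen), "tiles across", len(groups), "strata")
```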

22 pages, 5446 KB  
Article
Dense 3D Reconstruction Based on Multi-Aspect SAR Using a Novel SAR-DAISY Feature Descriptor
by Shanshan Feng, Fei Teng, Jun Wang and Wen Hong
Remote Sens. 2025, 17(10), 1753; https://doi.org/10.3390/rs17101753 - 17 May 2025
Viewed by 748
Abstract
Dense 3D reconstruction from multi-aspect angle synthetic aperture radar (SAR) imagery has gained considerable attention for urban monitoring applications. However, achieving reliable dense matching between multi-aspect SAR images remains challenging due to three fundamental issues: anisotropic scattering characteristics that cause inconsistent features across different aspect angles, geometric distortions, and speckle noise. To overcome these limitations, we introduce SAR-DAISY, a novel local feature descriptor specifically designed for dense matching in multi-aspect SAR images. The proposed method adapts the DAISY descriptor structure to SAR images specifically by incorporating the Gradient by Ratio (GR) operator for robust gradient calculation in speckle-affected imagery and enforcing multi-aspect consistency constraints during matching. We validated our method on W-band airborne SAR data collected over urban areas using circular flight paths. Experimental results demonstrate that SAR-DAISY generates detailed 3D point clouds with well-preserved structural features and high computational efficiency. The estimated heights of urban structures align with ground truth measurements. This approach enables 3D representation of complex urban environments from multi-aspect SAR data without requiring prior knowledge. Full article
(This article belongs to the Special Issue SAR Images Processing and Analysis (2nd Edition))
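
The Gradient by Ratio idea replaces intensity differences with log-ratios of local means, which stay stable under multiplicative speckle; the sketch below is a generic ratio-gradient under that assumption, with window size an assumed parameter, not the exact GR variant used in SAR-DAISY:

```python
# Hedged sketch of a ratio-based gradient for speckled SAR imagery.
import numpy as np
from scipy.ndimage import uniform_filter

def gradient_by_ratio(img: np.ndarray, half: int = 2):
    """Return (gx, gy) log-ratio gradients of a SAR amplitude image."""
    m = uniform_filter(img, size=2 * half + 1)     # local mean intensity
    eps = 1e-6                                     # avoid division by zero
    gx = np.log((np.roll(m, -half, axis=1) + eps)
                / (np.roll(m, half, axis=1) + eps))
    gy = np.log((np.roll(m, -half, axis=0) + eps)
                / (np.roll(m, half, axis=0) + eps))
    return gx, gy

sar = np.random.gamma(1.0, 1.0, (256, 256))        # speckle-like test image
gx, gy = gradient_by_ratio(sar)
```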