
Search Results (567)

Search Parameters:
Keywords = LiDAR point cloud segmentation

23 pages, 5036 KB  
Article
Distilling Vision Foundation Models into LiDAR Networks via Manifold-Aware Topological Alignment
by Yuchuan Yang and Xiaosu Xu
Computers 2026, 15(4), 234; https://doi.org/10.3390/computers15040234 - 9 Apr 2026
Viewed by 64
Abstract
LiDAR point cloud semantic segmentation is essential for autonomous driving, yet LiDAR-only methods remain constrained by sparsity and limited texture cues. We propose Cross-Modal Collaborative Manifold Distillation (CMCMD), which transfers open-world semantic priors from the DINOv3 Vision Foundation Model to a LiDAR student network. The framework combines an Adaptive Relation Convolution (ARConv) backbone with geometry-conditioned aggregation, a Unified Bidirectional Mapping Module (UBMM) for explicit 2D–3D interaction, and Manifold-Aware Topological Distillation (MATD), which aligns inter-sample affinity structures in a shared latent manifold rather than enforcing pointwise feature matching. By preserving relational topology instead of absolute feature coordinates, CMCMD mitigates negative transfer across heterogeneous modalities. Experiments on SemanticKITTI and nuScenes yield mIoU values of 72.9% and 81.2%, respectively, surpassing the compared distillation baselines and approaching the performance of multimodal fusion methods at lower inference cost. Additional evaluation on real-world campus scenes further supports the cross-domain robustness of the proposed framework. Full article
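The core idea of MATD, aligning inter-sample affinity structures rather than matching features pointwise, can be sketched in a few lines. This is an illustrative reconstruction, not the authors' implementation: the feature shapes and the plain cosine affinity are assumptions.

```python
import numpy as np

def affinity(feats: np.ndarray) -> np.ndarray:
    """Cosine affinity matrix between all sample pairs of a feature batch."""
    normed = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-8)
    return normed @ normed.T

def relational_distillation_loss(student: np.ndarray, teacher: np.ndarray) -> float:
    """Mean squared error between the two affinity structures.

    Aligning pairwise affinities (the relational topology) instead of raw
    feature coordinates lets the student keep its own embedding space,
    which mitigates negative transfer across heterogeneous modalities.
    """
    a_s, a_t = affinity(student), affinity(teacher)
    return float(np.mean((a_s - a_t) ** 2))

rng = np.random.default_rng(0)
teacher_feats = rng.normal(size=(8, 64))  # e.g. 2D VFM features, 8 samples
student_feats = rng.normal(size=(8, 32))  # 3D LiDAR features, different width
loss = relational_distillation_loss(student_feats, teacher_feats)
```

Note that the affinity matrices are N×N regardless of feature width, so the 2D teacher and the 3D student need not share an embedding dimension.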

23 pages, 5436 KB  
Article
Characterizing Pedestrian Network from Segmented 3D Point Clouds for Accessibility Assessment: A Virtual Robotic Approach
by Ali Ahmadi, Mir Abolfazl Mostafavi, Ernesto Morales and Nouri Sabo
Sensors 2026, 26(7), 2172; https://doi.org/10.3390/s26072172 - 31 Mar 2026
Viewed by 230
Abstract
This study introduces a novel virtual robotic approach for automated characterization of pedestrian network accessibility from semantically segmented 3D LiDAR point clouds. With approximately 8 million Canadians living with disabilities, scalable accessibility assessment methods are critical. The proposed methodology integrates a Tangent Bug navigation algorithm—extended from 2D to 3D point cloud environments—with a triangular virtual robot grounded in ADA and IBC accessibility standards. The robot navigates classified point cloud data to extract accessibility-related parameters at each step, including running slope, cross-slope, path width, surface type, and step height, aligned with the Measure of Environmental Accessibility (MEA) framework. Unlike existing approaches, the method characterizes not only formal sidewalk segments but also the critical transitional linkages between building entrances and the pedestrian network. Rather than evaluating features against fixed binary thresholds, it records continuous raw measurements, enabling personalized accessibility assessment tailored to individual user profiles. Quantitative validation demonstrates high accuracy for path width (NRMSE = 2.71%) and reliable slope tracking. The proposed approach is faster, more cost-effective, and more comprehensive than traditional manual methods, and its segment-independent architecture makes it well-suited for future city-scale deployment. Full article
(This article belongs to the Special Issue Advances in Wireless Sensor Networks for Smart City)
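The two slope parameters the virtual robot records can be computed directly from segmented ground points. A minimal sketch follows; the helper name and sample coordinates are hypothetical, and the ADA figures are cited only as common reference values, since the paper deliberately records raw measurements rather than applying fixed thresholds.

```python
import numpy as np

# Common ADA reference limits, in percent (context only; the paper keeps raw
# values so thresholds can be personalized per user profile).
ADA_RAMP_RUNNING_SLOPE_MAX = 8.33   # a 1:12 ramp
ADA_CROSS_SLOPE_MAX = 2.0

def slope_percent(p_a, p_b):
    """Slope (%) between two 3D points: rise over horizontal run."""
    p_a, p_b = np.asarray(p_a, float), np.asarray(p_b, float)
    run = np.linalg.norm(p_b[:2] - p_a[:2])
    return 100.0 * abs(p_b[2] - p_a[2]) / run

# Running slope: consecutive centerline points along the path.
running = slope_percent([0.0, 0.0, 0.0], [12.0, 0.0, 1.0])  # 1 m rise over 12 m
# Cross-slope: left vs. right edge of the same path cross-section.
cross = slope_percent([0.0, -0.9, 0.10], [0.0, 0.9, 0.13])  # across a 1.8 m path
```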

23 pages, 6950 KB  
Article
Under-Canopy Archaeological Mapping Using LiDAR Data and AI Methods
by Gabriele Mazzacca and Fabio Remondino
Heritage 2026, 9(4), 134; https://doi.org/10.3390/heritage9040134 - 27 Mar 2026
Viewed by 386
Abstract
Airborne laser scanning (ALS) and UAV-mounted LiDAR sensors have become well-established tools for identifying and mapping archaeological features across varying scales and contexts. Numerous algorithms have been developed over the years for generating Digital Terrain or Feature Models (DTMs/DFMs), which provide an accurate representation of the ground or structure surfaces, serving as the foundation for subsequent archaeological analyses. In this study, we present a multi-level multi-resolution (MLMR) methodology, based on machine/deep learning methods, for DFM generation through point cloud semantic segmentation. The work also compares different approaches and the impact of resolution on their performance. To this end, each approach’s performance is evaluated with a series of quantitative and qualitative analyses, with an eye on hardware limitations and time constraints. Three test sites from Mediterranean and Alpine environments, with manually annotated ground truth data, are used for the evaluation of each methodological approach. Full article

31 pages, 6307 KB  
Article
A Novel Urban Biological Parameter Estimation Method Based on LiDAR Point Cloud Single-Tree Segmentation
by Tongtong Lu, Fang Huang, Yuxin Ding, Qingzhe Lv, Hao Guan, Gongwei Li, Xiang Kang and Geer Teng
Remote Sens. 2026, 18(7), 1001; https://doi.org/10.3390/rs18071001 - 27 Mar 2026
Viewed by 333
Abstract
To address diverse urban tree structures and the difficulties of extracting and utilizing vegetation point clouds, this study proposed single-tree-scale biological parameter estimation methods for urban scenarios, enhancing the application value of point clouds in urban greening management. For single-tree segmentation, a method was constructed based on constraints from the trees’ geometric features combined with gravitational modeling, termed the CGF-CG single-tree segmentation method. This method (i) combines clustering and principal-direction analysis to extract trunk points, (ii) introduces canopy segmentation based on trunk positions, and (iii) optimizes edge point attributes via a gravitational model. Building on CGF-CG’s accurate results, an improved random forest method for single-tree biological parameter (IRF-BP) estimation (aboveground biomass, carbon storage, leaf area index, living vegetation volume) was proposed, comprising (i) correlation analysis with variable screening, (ii) adaptive feature selection and pigeon-inspired optimization to enhance model generalization, and (iii) Shapley Additive Explanations (SHAP) to improve interpretability. On this basis, a complete model for different tree species was constructed. Validation showed that CGF-CG exhibited negligible over-segmentation and under-segmentation in the selected study areas, with overall average precision, recall, and F1-score over 98.5%. Additionally, on the selected overall region, the overall mF1 score, mPTP, and mPTR of our method are 99.13%, 99.15%, and 99.12%, respectively, superior to the Forestmetrics, lidR, PyCrown, and DBSCAN methods. IRF-BP performed well, with a highest R2 of 0.81 and a lowest mean absolute percentage error of 7.5%, surpassing traditional models such as RFR, GBR, KNN, and XGB. In summary, the results provide theoretical and technical support for urban green resource management and evaluation. Full article
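The gravitational idea behind the CGF-CG edge-point optimization can be illustrated with a simple rule: an ambiguous crown-edge point is attracted to the candidate tree with the largest mass-over-squared-distance score. The mass term (here, a per-tree point count) and the exact score are assumptions for illustration, not the paper's model.

```python
import numpy as np

def assign_edge_point(point, tree_centers, tree_masses):
    """Return the index of the tree with the highest 'gravity' m / d^2.

    tree_masses is a per-tree weight, e.g. the number of points already
    assigned to that crown (an assumed proxy; the paper's exact model
    may differ).
    """
    point = np.asarray(point, float)
    centers = np.asarray(tree_centers, float)
    d2 = np.sum((centers - point) ** 2, axis=1) + 1e-9  # avoid divide-by-zero
    return int(np.argmax(np.asarray(tree_masses, float) / d2))
```

A nearby small tree can thus still lose a contested point to a larger neighbor, a behavior plain nearest-centroid assignment cannot express.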

28 pages, 22901 KB  
Article
IAMS (Interior-Anchored Mean-Shift) Algorithm for Supervoxel Segmentation of Airborne LiDAR Roof Points
by Hanyu Zhou, Liang Zhang, Zhiyue Zhang, Haiqiong Yang, Xiongfei Tang, Hongchao Ma and Chunjing Yao
Remote Sens. 2026, 18(6), 965; https://doi.org/10.3390/rs18060965 - 23 Mar 2026
Viewed by 238
Abstract
Accurate building roof classification from airborne LiDAR point clouds is fundamental to reliable three-dimensional (3D) urban reconstruction. While supervoxel-based methods offer efficiency and resilience to uneven point density, their performance is critically undermined by cross-boundary segmentation errors—a direct consequence of random seed initialization that merges geometrically similar yet semantically distinct objects. To address this root cause, this study proposes Interior-Anchored Mean-Shift (IAMS), a novel supervoxel segmentation framework that rethinks seed placement as a geometry-aware interior localization problem. By integrating local geometric consistency, point density, and spatial correlation into a unified kernel density estimator, supplemented by density-adaptive voxel weighting and a semi-variogram-driven bandwidth, IAMS reliably anchors seeds within object interiors, yielding highly homogeneous supervoxels without post-processing. Extensive experiments on three diverse airborne LiDAR datasets demonstrated that IAMS consistently outperformed state-of-the-art baselines. On the International Society for Photogrammetry and Remote Sensing (ISPRS) Vaihingen benchmark, our approach improved roof classification completeness, correctness, and quality by up to 7.1% (per-object) over the conventional Voxel Cloud Connectivity Segmentation (VCCS) algorithm while being significantly faster than recent boundary-preserving alternatives. Critically, IAMS maintains robust performance under challenging conditions, including sparse sampling and dense vegetation occlusion, making it a practical solution for real-world urban remote sensing. Full article
(This article belongs to the Section Urban Remote Sensing)
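The seed-placement idea reduces to a toy kernel density estimator: density peaks of a patch lie in its interior, away from boundaries, so taking the argmax as the seed anchors it inside the object. The plain Gaussian kernel and uniform weights below stand in for the paper's unified estimator, which additionally folds in geometric consistency, density-adaptive weighting, and a semi-variogram-driven bandwidth.

```python
import numpy as np

def interior_seed(points, bandwidth=1.0, weights=None):
    """Pick the point with the highest kernel density estimate as a seed.

    Brute-force O(n^2) Gaussian KDE, fine for a sketch. weights is an
    optional per-point term where planarity or local density could be
    injected (an assumed extension point, not the paper's exact design).
    """
    pts = np.asarray(points, float)
    if weights is None:
        weights = np.ones(len(pts))
    d2 = np.sum((pts[:, None, :] - pts[None, :, :]) ** 2, axis=-1)
    density = (np.asarray(weights, float) * np.exp(-d2 / (2 * bandwidth**2))).sum(axis=1)
    return int(np.argmax(density))
```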

23 pages, 12466 KB  
Article
Real-Time LiDAR 3D Semantic Segmentation via Multi-View and Cross-Modal Compact Featuring Two-Branch Knowledge Distillation
by Yun Zhang, Kun Qian, Zihan Zhang, Min’ao Zhang and Hai Yu
Sensors 2026, 26(6), 1860; https://doi.org/10.3390/s26061860 - 15 Mar 2026
Viewed by 483
Abstract
Simultaneous online mapping and semantic segmentation using handheld scanners supports various environmental inspection and measurement tasks. For such scanners, combining visual and LiDAR data is beneficial for improving segmentation performance. However, the direct fusion of multi-modal and multi-view features faces challenges in terms of both real-time performance and robustness. To address these challenges, this paper proposes a multi-view and cross-modal knowledge distillation method for supporting runtime LiDAR-only semantic segmentation. The proposed method hierarchically compacts multi-view and cross-modal priors and distills them into two branches to improve segmentation accuracy. In addition, we design an improved data augmentation technique based on PolarMix for rendering more realistic point cloud scenes. The experimental results on the SemanticKITTI and nuScenes datasets demonstrate that the mIoU of our approach outperforms the state-of-the-art knowledge-distillation-based methods. In addition, mapping experiments using a handheld scanner demonstrate the proposed method’s superior real-time performance and accuracy. Full article
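PolarMix-style augmentation mixes two labeled scans along the azimuth dimension. A minimal sketch of the sector-swap operation follows; the single-sector variant and the sector bounds are simplifications of the published technique, shown only to convey the idea.

```python
import numpy as np

def polar_sector_mix(scan_a, labels_a, scan_b, labels_b, start, end):
    """Replace the points of scan_a falling in azimuth sector [start, end)
    (radians) with scan_b's points from the same sector, keeping labels
    paired with their points -- a PolarMix-style scene-level augmentation."""
    def in_sector(scan):
        az = np.arctan2(scan[:, 1], scan[:, 0])
        return (az >= start) & (az < end)
    keep_a = ~in_sector(scan_a)
    take_b = in_sector(scan_b)
    mixed = np.vstack([scan_a[keep_a], scan_b[take_b]])
    mixed_labels = np.concatenate([labels_a[keep_a], labels_b[take_b]])
    return mixed, mixed_labels
```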

23 pages, 3579 KB  
Article
Plane Segmentation in Sensor-Acquired 3D Point Clouds Using Supervoxel-Based Geometric Constraints
by Xiaohua Ran, Xu Ning, Qing An and Xijiang Chen
Sensors 2026, 26(6), 1816; https://doi.org/10.3390/s26061816 - 13 Mar 2026
Viewed by 257
Abstract
Plane segmentation of real-world 3D point clouds captured by LiDAR or depth sensors remains challenging due to data sparsity, noise, and complex geometric configurations such as stepwise and intersecting non-coplanar structures. To address these issues inherent in sensor-acquired data, this paper proposes a geometry-aware plane segmentation method that leverages supervoxel boundary adjacency, normal coherence, and projection-line fitting constraints. Supervoxels were generated using the toward better boundary preserved supervoxel segmentation (TBBS) algorithm, and their natural adjacency relationships were constructed based on boundary points. Subsequently, the supervoxels were initially clustered according to their normal information. Finally, the projected point clouds of adjacent supervoxels were fitted with straight lines, and the fitting errors were calculated to optimize the clustering results. Experimental results demonstrate that this method performs well in handling stepwise non-coplanar structures, effectively segmenting planar regions with significant geometric features. It shows particular advantages in cases involving stepwise non-coplanar and intersecting planes. On benchmark datasets, the method achieves precision and recall rates of (97.7%, 94.4%, 91.2%, 80.4%, 92.3%) and (98.9%, 95.7%, 93.7%, 84.8%, 96.0%), respectively, highlighting its effectiveness and robustness for practical 3D sensing applications. Full article
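The normal-coherence step, merging adjacent supervoxels whose normals agree within a tolerance, can be sketched as a flood fill over the adjacency graph. The 10° threshold is an arbitrary illustrative value, not the paper's, and the projection-line refinement stage is omitted.

```python
import numpy as np

def cluster_by_normals(normals, adjacency, angle_deg=10.0):
    """Greedy flood-fill merging of adjacent supervoxels with coherent normals.

    adjacency[i] lists the supervoxels sharing a boundary with supervoxel i.
    Returns one cluster label per supervoxel.
    """
    normals = np.asarray(normals, float)
    normals = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    cos_thr = np.cos(np.deg2rad(angle_deg))
    n = len(normals)
    label = [-1] * n
    current = 0
    for start in range(n):
        if label[start] != -1:
            continue
        stack = [start]
        label[start] = current
        while stack:
            i = stack.pop()
            for j in adjacency[i]:
                # Merge only neighbors whose normals deviate < angle_deg.
                if label[j] == -1 and normals[i] @ normals[j] >= cos_thr:
                    label[j] = current
                    stack.append(j)
        current += 1
    return label
```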

15 pages, 3088 KB  
Article
Lightweight Semantic Segmentation Algorithm Based on Gated Visual State Space Models
by Kui Di, Jinming Cheng, Lili Zhang and Yubin Bao
Electronics 2026, 15(6), 1175; https://doi.org/10.3390/electronics15061175 - 12 Mar 2026
Viewed by 366
Abstract
LiDAR serves as the primary sensor for acquiring environmental information in intelligent driving systems. However, under adverse weather conditions, point cloud signals obtained by LiDAR suffer from intensity attenuation and noise interference, leading to a decline in segmentation accuracy. To address these issues, this paper designs a lightweight semantic segmentation system based on the Gated Visual State Space Model (VMamba), named RainMamba. Specifically, the system utilizes spherical projection to transform point clouds into 2D sequences and constructs a physical perception feature embedding module guided by the Beer–Lambert law to explicitly model and suppress spatial noise at the source. Subsequently, an uncertainty-weighted cross-modal correction module is employed to incorporate RGB images for dynamically calibrating the degraded point cloud data. Finally, a VMamba backbone is adopted to establish global dependencies with linear complexity. Experimental results on the SemanticKITTI dataset demonstrate that the system achieves an inference speed of 83 FPS, with a relative mIoU improvement of approximately 7.2% compared to the real-time baseline PolarNet. Furthermore, zero-shot evaluations on the real-world SemanticSTF dataset validate the system’s robust Sim-to-Real generalization capability. Notably, RainMamba delivers highly competitive accuracy comparable to the state-of-the-art heavy-weight model PTv3 while requiring a significantly lower parameter footprint, thereby demonstrating its immense potential for practical edge-computing deployment. Full article
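The Beer–Lambert guidance amounts to modeling range-dependent attenuation of the return intensity. A minimal sketch of the underlying physics follows; the extinction coefficient alpha is an assumed, externally estimated value, and the actual module learns a feature embedding guided by this law rather than applying the closed-form correction directly.

```python
import numpy as np

def correct_intensity(measured, distance, alpha):
    """Undo Beer-Lambert attenuation I = I0 * exp(-alpha * d) along the ray.

    alpha is the extinction coefficient of the medium (assumed known here,
    e.g. estimated per scan for a given rain rate); distance is the range
    to the return in the same units as 1/alpha.
    """
    measured = np.asarray(measured, float)
    distance = np.asarray(distance, float)
    return measured * np.exp(alpha * distance)
```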

24 pages, 4915 KB  
Article
Semantic-Guided Matching of Heterogeneous UAV Imagery and Mobile LiDAR Data Using Deep Learning and Graph Neural Networks
by Tee-Ann Teo, Hao Yu and Pei-Cheng Chen
Drones 2026, 10(3), 185; https://doi.org/10.3390/drones10030185 - 8 Mar 2026
Viewed by 349
Abstract
The integration of heterogeneous geospatial data, specifically low-cost unmanned aerial vehicle (UAV) imagery and mobile light detection and ranging (LiDAR) system point clouds, presents a significant challenge due to the substantial radiometric and structural discrepancies between the two modalities. This study proposes a novel air-to-ground semantic feature matching framework to achieve precise geometric registration between these data sources by effectively incorporating semantic-constrained deep learning-based matching. The methodology transformed the cross-sensor alignment challenge into a robust two-dimensional image matching problem. This was achieved by first using YOLOv11 for semantic segmentation of common road markings in both the UAV orthoimage and the converted LiDAR intensity image to generate highly consistent feature references. Subsequently, the SuperPoint detector and a graph neural network matcher, SuperGlue, were applied to these semantic images to establish reliable correspondence points. Experimental results confirmed that this semantic-guided strategy consistently outperformed traditional feature-based matching (i.e., scale-invariant feature transform + fast library for approximate nearest neighbors), particularly by converting the noisy LiDAR intensity image into a stabilized semantic representation. The explicit application of semantic constraints further proved effective in eliminating false matches between geometrically similar but semantically distinct objects. The final object-specific analysis demonstrated that features with clear, complex geometric structures (e.g., pedestrian crossings and directional arrows) provide the most robust matching control. In summary, the proposed framework successfully leverages semantic context to overcome cross-sensor heterogeneity, offering an automated and precise solution for the geometric alignment of mobile LiDAR data. Full article

16 pages, 5250 KB  
Article
Identification of Cypress Bark Beetle-Infested Cypress Based on LiDAR and RGB Imagery
by Ke Wu, Zhiqiang Li, Linpan Feng, Shali Shi, Liangying Zhang, Shixing Zhou, Sen Zhai and Lin Xiao
Forests 2026, 17(3), 328; https://doi.org/10.3390/f17030328 - 6 Mar 2026
Viewed by 279
Abstract
Forest pests and diseases are among the major disturbances affecting the stability of forest ecosystems. Accurate identification of insect-infested trees is therefore crucial for assessing forest health and implementing precision forestry management. This study focuses on stand-level detection of cypress trees (Cupressus funebris Endl.) that were affected by the cypress bark beetle (Phloeosinus aubei Perris); the proposed framework enables individual tree segmentation, insect-infested tree detection, and stand infestation assessment. Firstly, individual trees were extracted from Light Detection and Ranging (LiDAR) point cloud data using the layer-stacking seed point algorithm. Based on the segmented tree crowns, four vegetation indices (Visible Atmospherically Resistant Index (VARI), Visible-band Difference Vegetation Index (VDVI), Red-Green Index (RGI), and Color Index of Vegetation Extraction (CIVE)) were calculated from Unmanned Aerial Vehicle (UAV) RGB imagery. Insect-infested cypress trees were extracted through threshold segmentation. Through visual interpretation, the optimal vegetation index was determined and the infestation rate at the stand level was calculated. Based on the above framework, a total of 1368 trees were identified in the cypress stand, with a segmentation Precision of 82.51%, a Recall of 80.00%, and an F1-score of 81.24%. RGI achieved the best performance (Precision = 100.00%, Recall = 86.96%, F1-score = 93.02%) and identified 20 infested trees, accounting for 1.46% of the cypress stand. Supplementary experiments further confirm the superiority of the RGI index and the μ ± 2σ thresholding method. These results demonstrate that the proposed method enables rapid detection of infested cypress trees and effective monitoring of stand health and infestation severity, thereby supporting informed decision-making in pest control and forest management. Full article
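The index computation and the μ ± 2σ flagging are simple enough to sketch. The direction of the test, flagging the upper tail of RGI because discolored crowns shift toward red, is an assumption consistent with the index's definition rather than a detail stated in the abstract.

```python
import numpy as np

def rgi(red, green):
    """Red-Green Index: ratio of mean red to mean green reflectance per crown."""
    return np.asarray(red, float) / np.asarray(green, float)

def infested_mask(index_values, k=2.0):
    """Flag crowns whose index exceeds mu + k*sigma of the stand.

    With k = 2 this is the upper side of the mu +/- 2*sigma interval the
    paper evaluates; discolored (infested) crowns raise RGI, so they fall
    in the upper tail (assumed direction).
    """
    v = np.asarray(index_values, float)
    return v > v.mean() + k * v.std()
```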

20 pages, 3202 KB  
Article
Robust LiDAR-Based Train Detection via Point Cloud Segmentation for Railway Safety
by Yuxing Yang, Siyue Yu and Jimin Xiao
Sensors 2026, 26(5), 1514; https://doi.org/10.3390/s26051514 - 27 Feb 2026
Viewed by 316
Abstract
Ensuring railway safety requires reliable monitoring of trains in critical safety areas, such as station throat zones and railway crossings. Compared with cameras, roadside LiDAR can more reliably capture the geometry of trains under low-light, high-speed, and adverse weather conditions. However, industrial LiDAR solutions still primarily use the background comparison technique, which compares each sample against a pre-recorded clean map and then applies a size-based filter. Such approaches are highly sensitive to point cloud background changes arising from varying LiDAR installation distances, train speeds, and surface materials, often resulting in fragmented clustering and missed detections. In this paper, train detection is reformulated as a point-level semantic segmentation problem. A lightweight 3D segmentation network that directly predicts train points from raw data is designed, and clustering-based post-processing is applied to generate train-level events in real time. Experiments on real railway data under various operating conditions show that the proposed method achieves higher detection accuracy and greater robustness than traditional comparison-based methods and representative deep learning benchmark methods, and is therefore suitable for practical railway safety monitoring. Full article
(This article belongs to the Section Intelligent Sensors)

13 pages, 1371 KB  
Article
GENet: A Geometry-Enhanced Network for LiDAR Semantic Segmentation
by Yuchen Wu and Hanbing Wei
Sensors 2026, 26(5), 1460; https://doi.org/10.3390/s26051460 - 26 Feb 2026
Viewed by 289
Abstract
LiDAR has been widely applied in autonomous driving and mobile robotics. Recently, many studies have focused on real-time point cloud segmentation, aiming to achieve higher accuracy while maintaining real-time inference speed. Current real-time methods mostly rely on 2D projection, which inevitably leads to spatial information loss. To address the limitations of 2D projection methods, we propose a Geometry-Enhanced Network called GENet that exploits spatial priors. The network employs an Atrous Separable Range Attention (ASRA) module to explicitly utilize spatial priors from range images, enabling geometry-aware feature aggregation with a large receptive field at linear complexity. A Geometry-Context Modulation (GCM) mechanism is then used to calibrate semantic features, incorporating geometric priors while preserving the discriminative ability of original features across different categories. Experiments show that our method achieves efficient information fusion while maintaining real-time performance. Compared to existing methods, GENet requires fewer parameters and less computation, achieving a favorable balance between accuracy and efficiency. Full article
(This article belongs to the Special Issue Applications of Machine Learning in Automotive Engineering)
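The 2D-projection pipeline that range-image networks like GENet build on starts with the standard spherical projection of the scan. A sketch follows, with the vertical field of view set to typical 64-beam spinning-LiDAR values; these numbers are assumptions, not the paper's exact configuration.

```python
import numpy as np

def spherical_project(points, width=2048, height=64, fov_up=3.0, fov_down=-25.0):
    """Project 3D LiDAR points to (row, col) pixels of a range image.

    fov_up / fov_down are the sensor's vertical field of view in degrees.
    Columns sweep azimuth (yaw), rows sweep elevation (pitch), and depth
    becomes the pixel value of the resulting range image.
    """
    pts = np.asarray(points, float)
    depth = np.linalg.norm(pts, axis=1)
    yaw = np.arctan2(pts[:, 1], pts[:, 0])
    pitch = np.arcsin(pts[:, 2] / depth)
    fov_up_r, fov_down_r = np.deg2rad(fov_up), np.deg2rad(fov_down)
    col = ((0.5 * (1.0 - yaw / np.pi)) * width).astype(int) % width
    row = ((fov_up_r - pitch) / (fov_up_r - fov_down_r) * height).astype(int)
    row = np.clip(row, 0, height - 1)
    return row, col, depth
```

Inverting this mapping (pixel back to point index) is what lets per-pixel predictions be re-attached to the 3D points, the step where projection methods lose spatial information when several points share a pixel.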

36 pages, 698 KB  
Article
MDGroup: Multi-Grained Dual-Aware Grouping for 3D Point Cloud Instance Segmentation
by Wenyun Sun and Ruifeng Han
Electronics 2026, 15(5), 915; https://doi.org/10.3390/electronics15050915 - 24 Feb 2026
Viewed by 317
Abstract
Instance segmentation of 3D point clouds is a fundamental task for scene understanding in applications such as autonomous driving, robotics, and augmented reality. The inherent irregularity and sparsity of point clouds, compounded by scale variations and instance adhesion, pose significant challenges to accurate segmentation. Existing grouping-based methods are often limited by the loss of geometric details in single-path backbones and by error propagation near complex boundaries. To address these issues, a Multi-grained Dual-aware Grouping algorithm (MDGroup) is proposed, which explicitly integrates multi-grained feature representation with dual awareness of class and boundary. The algorithm features a Dual-Resolution 3D U-Net (DRNet) that preserves local geometric details while aggregating global semantics through adaptive alignment. A four-branch prediction scheme enhances semantic and offset estimation with boundary and directional cues, enabling fine-grained boundary modeling. Furthermore, a Hierarchical Adaptive Multi-grained Feature fusion framework (HAMF) achieves efficient cross-scale alignment by combining Class-Aware Dynamic Voxelization and Class-Aware Pyramid Scaling. Finally, a Boundary-Aware Weighted Aggregation mechanism (BAWA) refines instance grouping by dynamically weighting semantic confidence, geometric distance, boundary probability, and directional consistency. To extend the model to dynamic scenes, a Temporal Adaptive Gating (TAG) module is introduced to leverage historical frame correlations. Extensive experiments on the ScanNet v2, S3DIS, STPLS3D, SemanticKITTI, LiDAR-Net, and OCID benchmarks demonstrate that MDGroup achieves state-of-the-art performance among grouping-based methods, particularly on small objects, complex boundaries, and dynamic environments. Full article
(This article belongs to the Section Artificial Intelligence)
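The BAWA scoring idea, a weighted blend of the four cues when deciding whether a point joins a candidate instance, can be sketched as follows. The linear form, the uniform weights, and the Gaussian distance kernel are illustrative assumptions, not the paper's trained formulation.

```python
import numpy as np

def grouping_score(sem_conf, dist, boundary_prob, dir_consistency,
                   w=(1.0, 1.0, 1.0, 1.0), sigma=1.0):
    """Composite score for attaching a point to a candidate instance.

    Combines semantic confidence, proximity (Gaussian in geometric
    distance), low boundary probability, and offset-direction consistency.
    The weights w and bandwidth sigma are placeholders.
    """
    ws, wd, wb, wc = w
    proximity = np.exp(-np.asarray(dist, float) ** 2 / (2 * sigma**2))
    return (ws * np.asarray(sem_conf, float)
            + wd * proximity
            + wb * (1.0 - np.asarray(boundary_prob, float))
            + wc * np.asarray(dir_consistency, float))
```

Down-weighting points with high boundary probability is what keeps adhesion errors near complex boundaries from propagating into the final groups.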

37 pages, 34025 KB  
Article
Individual Tree Segmentation from LiDAR Point Clouds: A Mamba-Enhanced Sparse CNN Approach for Accurate Forest Inventory
by Xiangji Peng, Jizheng Yi, Rong Liu, Xiangyu Shen and Xiaoyao Li
Remote Sens. 2026, 18(4), 664; https://doi.org/10.3390/rs18040664 - 22 Feb 2026
Viewed by 567
Abstract
Individual tree segmentation is critical for automated forest inventory systems, enabling detailed individual tree records that support precision forest management. While current airborne LiDAR systems can acquire high-density, high-accuracy point clouds of dense forests, significant challenges remain in analyzing the diversity of forest samples across different regions. An improved method of instance segmentation using a Mamba-Enhanced Sparse Convolutional Neural Network is proposed to address the problem of misallocation caused by ambiguous boundaries and overlapping canopies of individual trees. An innovative offset prediction method further reduces the high error rate in low-canopy datasets. On the basis of a variety of features, the designed network customizes the HDBSCAN clustering algorithm and the W-KNN neighborhood search algorithm for fine-grained instance segmentation to achieve optimal performance. To address the lack of block coherence in the FOR-instance dataset and to reduce redundant noisy trees in some regions, this work develops a novel pipeline to simulate real woodland scenes and evaluates the robustness of the network in composite forests. Extensive validation on real and benchmark data demonstrates the method’s superior generalization capability, yielding robust segmentation results across varied forest structures. The most marked gains are achieved in low-canopy settings, confirming the method’s enhanced ability to handle complex structural overlaps. Our method provides a more comprehensive solution for the inventory management of structurally heterogeneous or regionally diverse woodlands, thereby enhancing both the automation and precision of forest resource assessment. Full article
(This article belongs to the Section Forest Remote Sensing)

21 pages, 5145 KB  
Article
Airborne LiDAR Point Cloud Building Reconstruction Based on Planar Optimal Combination and Feature Line Constraints
by Zhao Hai, Cailin Li, Baoyun Guo, Xianlong Wei, Zhuo Yang and Jinhui Zheng
ISPRS Int. J. Geo-Inf. 2026, 15(2), 92; https://doi.org/10.3390/ijgi15020092 - 20 Feb 2026
Viewed by 569
Abstract
This paper proposes a building reconstruction framework for airborne LiDAR data to address the challenge of automated modeling under conditions of uneven point cloud density and missing vertical walls, generating high-precision and structurally compact 3D building models. The method first combines adaptive resolution hypervoxels with a global graph cut optimization strategy to extract precise roof plane primitives from sparse point clouds of buildings. Subsequently, it infers building facades and internal vertical walls based on point cloud projection contours and height change detection, thereby completing the wall structures commonly missing in airborne LiDAR data. Finally, a feature line constraint term is introduced into the hypothesis-and-selection-based reconstruction framework to guide the structural optimization of candidate planes, ensuring the reconstructed model closely matches the actual building geometry. The proposed method was evaluated on multiple public airborne LiDAR datasets, demonstrating its effectiveness through qualitative and quantitative comparisons with various state-of-the-art approaches. Full article
(This article belongs to the Special Issue Spatial Data Science and Knowledge Discovery)
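Turning segmented roof patches into plane primitives relies on a standard least-squares fit. A minimal PCA-based sketch follows; this is the textbook method, offered as context rather than the paper's exact graph-cut-optimized extraction.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a point set via PCA.

    Returns (centroid, unit normal): the normal is the eigenvector of the
    covariance matrix with the smallest eigenvalue, i.e. the direction of
    least variance, which is perpendicular to a planar patch.
    """
    pts = np.asarray(points, float)
    centroid = pts.mean(axis=0)
    cov = np.cov((pts - centroid).T)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigh sorts eigenvalues ascending
    return centroid, eigvecs[:, 0]
```

The ratio of the smallest eigenvalue to the others also gives a planarity measure, useful for rejecting candidate patches that are not flat enough to serve as roof primitives.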
