
Search Results (2,195)

Search Parameters:
Keywords = LiDAR point cloud

19 pages, 14391 KB  
Article
Exploratory Analyses of Cross-Species Phenological–Structural Relationships in Urban Park Trees by Using Sentinel-2 Images and Handheld LiDAR Data
by Miao Jiang, Yi Lin and Minghua Cheng
Remote Sens. 2026, 18(8), 1192; https://doi.org/10.3390/rs18081192 - 16 Apr 2026
Viewed by 194
Abstract
Understanding the interplay between tree structure and seasonal dynamics, particularly across species, is crucial for managing urban forest ecosystems. However, balancing fine-scale inventory of trees with large-area mapping of forest ecosystems is a challenge. This study integrates multi-temporal Sentinel-2 satellite remote sensing (RS) imagery with high-density handheld light detection and ranging (LiDAR) point clouds to launch exploratory analyses of cross-species phenological–structural relationships (CSPSRs) in urban park trees. We derived plot-level phenological metrics (e.g., start of growing season, SOS) and quantified fine-scale three-dimensional (3D) tree structural attributes (e.g., tree height and trunk curvature). Then, we investigated how the 3D structural attributes of urban park trees covary with their phenological traits. The results revealed underlying CSPSRs, e.g., a weak but significant negative correlation between SOS and tree height in the study area. The derived CSPSRs demonstrate that tree structure is a key predictor of phenology, even across species. Overall, the integrated RS approach can provide a robust framework for associating the structure and phenology of trees, offering valuable insights for the ecological management of urban forests.
(This article belongs to the Special Issue Close-Range LiDAR for Forest Structure and Dynamics Monitoring)
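As an illustration of the correlation analysis this abstract describes, here is a minimal sketch relating a phenological metric (SOS) to a structural attribute (tree height); the numbers are hypothetical, not the paper's data.

```python
import numpy as np

# Hypothetical plot-level values (not from the paper): start of growing season
# (day of year) and mean tree height (m) for eight park plots.
sos = np.array([95, 98, 100, 103, 105, 108, 110, 112], dtype=float)
height = np.array([14.2, 13.8, 13.1, 12.5, 12.9, 11.8, 11.2, 10.9])

# Pearson correlation between a phenological metric and a structural
# attribute: the basic CSPSR quantity the study examines.
r = np.corrcoef(sos, height)[0, 1]
print(round(r, 3))  # strongly negative for this toy data
```

A negative `r` here would mirror the paper's reported SOS-height relationship, though the study reports only a weak (if significant) correlation on real data.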

36 pages, 7426 KB  
Article
SPICD-Net: A Siamese PointNet Framework for Autonomous Indoor Change Detection in 3D LiDAR Point Clouds
by Dalibor Šeljmeši, Vladimir Brtka, Velibor Ilić, Dalibor Dobrilović, Eleonora Brtka and Višnja Ognjenović
AI 2026, 7(4), 141; https://doi.org/10.3390/ai7040141 - 15 Apr 2026
Viewed by 139
Abstract
Reliable change detection in indoor environments remains a challenge for autonomous robotic systems using 3D LiDAR. Existing methods often require manual annotation, computationally intensive architectures, or focus on outdoor scenes. This paper presents SPICD-Net, a lightweight Siamese PointNet framework for indoor 3D change detection trained exclusively on synthetically generated anomalies, eliminating manual labeling. The framework offers three deployment-oriented contributions: a three-class Siamese formulation separating no-change, changed, and geometrically inconsistent tile pairs; a pre-FPS anomaly injection strategy that aligns synthetic training with inference-time preprocessing; and a stochastic-gated Chamfer-statistics branch that complements learned embeddings with explicit geometric cues under consumer-grade hardware constraints. Evaluated on 14 controlled simulation experiments in an indoor corridor dataset, SPICD-Net achieved aggregated Precision = 0.86, Recall = 0.82, F1-score = 0.84, and Accuracy = 0.96, with zero false positives in the no-change baseline and mean inference time of 22.4 s for a 172-tile map on a single consumer GPU. Additional robustness experiments identified registration accuracy as the main operational prerequisite. A limited real-world validation in one unseen room (four scans, 67 tiles) achieved Precision = 0.583, Recall = 1.000, and F1 = 0.737.
(This article belongs to the Special Issue Artificial Intelligence for Robotic Perception and Planning)
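The Chamfer-statistics branch mentioned above rests on the symmetric Chamfer distance between tile point clouds; a brute-force sketch on toy tiles (not the SPICD-Net implementation, which would use a spatial index for full tiles):

```python
import numpy as np

def chamfer_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Chamfer distance between two small (N, 3) point sets.

    Brute-force pairwise version for illustration only; per-tile KD-trees
    would be used at realistic point counts.
    """
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (N, M) distances
    return d.min(axis=1).mean() + d.min(axis=0).mean()

tile_t0 = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
tile_t1 = tile_t0 + np.array([0.0, 0.0, 0.5])  # every point shifted 0.5 m in z

print(chamfer_distance(tile_t0, tile_t0))  # identical tiles -> 0.0
print(chamfer_distance(tile_t0, tile_t1))  # changed tile -> 1.0 (0.5 each way)
```

Thresholding such a statistic per tile is one simple way a "changed" class can be cued geometrically alongside learned embeddings.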
19 pages, 13185 KB  
Article
TreePS: Tree-Based Positioning in Forests Using Map Matching and Co-Registration of Lidar-Derived Stem Locations
by Michael P. Salerno, Robert F. Keefe, Andrew T. Hudak and Ryer M. Becker
Forests 2026, 17(4), 483; https://doi.org/10.3390/f17040483 - 15 Apr 2026
Viewed by 258
Abstract
Artificial intelligence (AI), cloud computing, robotics, automation, and remote sensing technologies are all contributing to digital transformation in forestry. Improving on low-accuracy Global Navigation Satellite Systems (GNSS) positioning affected by multipath error and interception under forest canopies is critical for integrating smart and digital technologies into equipment in forest operations. In an era when lidar-derived individual tree locations are increasingly available in digital forest inventories, a possible alternative approach to positioning resources such as people or equipment accurately could be to match locally measured tree positions and attributes in the forest with an existing global reference map based on prior remote sensing missions, effectively using the trees themselves as satellites to circumvent the need for GNSS-based positioning. We evaluated a lidar-based alternative to GNSS positioning using predicted tree positions from local terrestrial laser scanning (TLS) matched with a global stem map derived from prior airborne laser scanning (ALS), a methodology we refer to as TreePS. The horizontal error of the TreePS system was estimated using 154 permanent single-tree inventory plots on the University of Idaho Experimental Forest with two workflows based on two common R packages (lidR v. 4.3.0, FORTLS v. 1.6.2), matching on either spatial coordinates alone or spatial coordinates plus stem DBH, using one or both segmentation routines and a custom matching algorithm. TreePS error using lidR for below- and above-canopy segmentation averaged 1.04 and 2.04 m, with 93.5% and 91.6% of plots yielding viable match solutions for spatial and spatial-plus-DBH matching, respectively. The second workflow, with both FORTLS (TLS point cloud) and lidR (ALS point cloud), had errors of 1.09 and 2.67 m but only 57.9% and 54.2% of plots with solutions using spatial and spatial-plus-DBH matching, respectively. There is room for improvement in the matching algorithm, but the TreePS methodology and similar feature-matching solutions may be useful for below-canopy positioning of equipment, people, or other resources under dense forests and in other GNSS-degraded environments to help advance smart and digital forestry.
(This article belongs to the Section Forest Operations and Engineering)
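The map-matching idea, using trees as landmarks, can be sketched as a brute-force translation search over stem positions. This is a simplified stand-in for the paper's custom matching algorithm (which also uses DBH and must handle rotation and segmentation error), with hypothetical coordinates:

```python
import numpy as np

def match_translation(local_xy, global_xy, tol=0.5):
    # Try pairing each locally scanned stem with each global-map stem as an
    # anchor; keep the translation that brings the most stems within `tol` m
    # of a global stem. Assumes a known orientation for simplicity.
    best_t, best_score = None, -1
    for l in local_xy:
        for g in global_xy:
            t = g - l
            d = np.linalg.norm((local_xy + t)[:, None] - global_xy[None, :], axis=-1)
            score = int((d.min(axis=1) < tol).sum())
            if score > best_score:
                best_t, best_score = t, score
    return best_t, best_score

# Hypothetical ALS-derived global stem map, and TLS stems in local coordinates
# relative to the unknown scanner position.
global_xy = np.array([[10.0, 20.0], [12.0, 24.0], [15.0, 21.0], [30.0, 5.0]])
local_xy = np.array([[0.0, 0.0], [2.0, 4.0], [5.0, 1.0]])

t, score = match_translation(local_xy, global_xy)
print(t, score)  # recovers the (10, 20) scanner offset; all 3 stems match
```

The recovered translation is, in effect, the below-canopy position fix that would otherwise come from GNSS.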

25 pages, 7380 KB  
Article
Integrated Air–Ground Robotic System for Autonomous Post-Blast Operations in GNSS-Denied Tunnels
by Goretti Arias-Ferreiro, Marco A. Montes-Grova, Francisco J. Pérez-Grau, Sergio Noriega-del-Rivero, Rafael Herguedas, María T. Lázaro, Amaia Castelruiz-Aguirre, José Carlos Jimenez Fernandez, Mustafa Karahan and Antonio Alonso-Cepeda
Remote Sens. 2026, 18(8), 1133; https://doi.org/10.3390/rs18081133 - 10 Apr 2026
Viewed by 470
Abstract
Post-blast operations in tunnel construction represent a critical bottleneck due to mandatory downtime and hazardous environmental conditions. This study addresses these challenges by developing and validating an integrated cyber–physical architecture that coordinates an autonomous Unmanned Aerial Vehicle (UAV) and an Autonomous Wheel Loader (AWL) under the supervision of a Digital Twin acting as the central operational digital interface. Specifically, this technology was designed to access the tunnel, evaluate post-blasting conditions, and initiate operations during mandatory exclusion periods for personnel. The system was validated in a realistic, Global Navigation Satellite System (GNSS)-denied tunnel environment emulating post-detonation visibility constraints. The results demonstrate that the aerial agent successfully navigated and mapped the excavation front in less than 8 min, establishing a shared coordinate system for the ground machinery. Through this collaborative workflow, the autonomous deployment enabled operations to commence 50% to 80% earlier than conventional manual procedures. Furthermore, the system reduced daily operational time by approximately 8%, with an estimated return on financial investment between one and seven months. Overall, the proposed framework eliminates human exposure during high-risk inspections and transforms the fragmented excavation cycle into a continuous, data-driven process.
(This article belongs to the Special Issue Mobile Laser Scanning Systems for Underground Applications)

47 pages, 3286 KB  
Review
LiDAR-Based Road Surface Damage Classification: A Survey
by Trevor Greene, Meisam Shayegh Moradi, Muhammad Umair, Nafiul Nawjis, Naima Kaabouch and Timothy Pasch
Sensors 2026, 26(8), 2338; https://doi.org/10.3390/s26082338 - 10 Apr 2026
Viewed by 215
Abstract
Unlike image-only systems that falter in shadows, glare, and low contrast, LiDAR directly records surface geometry and supports depth-aware quantification. This survey examines LiDAR-based road surface damage classification across the entire pipeline, encompassing acquisition with mobile and terrestrial laser scanning, preprocessing and representation choices, supervised, semi-supervised, and unsupervised learning techniques, as well as multisensor fusion at early, mid, and late stages. A consistent thread is measurement, not just detection: we describe how LiDAR damage classification maps to agency practices such as the Distress Identification Manual and the Pavement Condition Index. We summarize datasets and evaluation protocols for detection, segmentation, 3D reconstruction, and ride quality. We outline practical concerns for corridor-scale deployment: calibration and timing, intensity normalization, tiling/streaming, and runtime budgeting. The review concludes with open problems and outlines directions for robust, severity-aware, and scalable field systems.
(This article belongs to the Section Remote Sensors)
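Depth-aware quantification, the survey's recurring theme, ultimately reduces to measuring how far pavement points fall below a reference surface; a minimal sketch with a toy road tile (a generic illustration, not taken from any surveyed system):

```python
import numpy as np

def depth_below_plane(pts: np.ndarray, surface_idx) -> np.ndarray:
    """Fit z = ax + by + c to assumed-intact surface points, then report how
    far every point lies below that plane (positive = depression). Severity
    classes, e.g. per the Distress Identification Manual, would follow from
    thresholding these depths."""
    A = np.c_[pts[surface_idx, 0], pts[surface_idx, 1], np.ones(len(surface_idx))]
    coef, *_ = np.linalg.lstsq(A, pts[surface_idx, 2], rcond=None)
    plane_z = pts[:, 0] * coef[0] + pts[:, 1] * coef[1] + coef[2]
    return plane_z - pts[:, 2]

pts = np.array([[0, 0, 0.0], [1, 0, 0.0], [0, 1, 0.0], [1, 1, 0.0],
                [0.5, 0.5, -0.04]])  # last sample sits in a 4 cm pothole
depths = depth_below_plane(pts, [0, 1, 2, 3])
print(depths.round(3))  # pothole point is 0.04 m below the fitted plane
```

In practice the reference surface would be fitted robustly (e.g. RANSAC) so the damage itself does not bias the plane, which is why only the intact-surface indices are used here.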

23 pages, 3484 KB  
Article
IFA-ICP: A Low-Complexity and Image Feature-Assisted Iterative Closest Point (ICP) Scheme for Odometry Estimation in SLAM, and Its FPGA-Based Hardware Accelerator Design
by Jia-En Li and Yin-Tsung Hwang
Sensors 2026, 26(8), 2326; https://doi.org/10.3390/s26082326 - 9 Apr 2026
Viewed by 183
Abstract
Odometry estimation, which calculates the trajectory of a moving object across timeframes, is a critical and time-consuming function in SLAM (Simultaneous Localization and Mapping) systems. Although LiDAR-based sensing is most popular for outdoor and long-range applications because of its ranging accuracy, the sparsity of laser point clouds poses a significant challenge to feature extraction and matching in odometry estimation. In this paper, we investigate odometry estimation from two aspects: algorithm optimization, and system design/implementation. In algorithm optimization, we present an image feature-assisted odometry estimation scheme that leverages the richness of image information captured by a companion camera to enhance the accuracy of laser point cloud matching. This also serves as a screening mechanism to reduce the matching size and lower the computing complexity for a higher estimation rate. In addition, various schemes, such as an adaptive threshold in image feature point selection, principal component analysis (PCA)-based plane fitting for laser point interpolation, and Gauss–Newton optimization for calculating the transform matrix, are employed to improve the accuracy of odometry estimation. The performance of the improved odometry estimation is verified using an existing FLOAM (Fast LiDAR Odometry and Mapping) framework, with the KITTI autonomous driving dataset and its ground truth as the test bench. Simulation results indicate that the translation error and rotation error can be reduced by 16.6% and 1.3%, respectively. Computing complexity, measured as software execution time, was also reduced by 63%. In system implementation, a hardware/software (HW/SW) co-design strategy was adopted, where complexity profiling was first conducted to determine the task partitioning, and time-consuming tasks were offloaded to a hardware accelerator. This facilitates real-time execution on a resource-constrained embedded platform consisting of a microprocessor module (Raspberry Pi) and an attached FPGA board (Pynq Z2). Efficient hardware designs for customized DSP functions (adaptive threshold and PCA) were developed on the FPGA, capable of completing one data frame in 20 ms. The final system implementation met the target throughput of 10 estimations per second and can be scaled up further.
(This article belongs to the Topic Advances in Autonomous Vehicles, Automation, and Robotics)
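PCA-based plane fitting, one of the schemes listed above, takes the eigenvector with the smallest eigenvalue of the local covariance matrix as the plane normal; a generic sketch (not the paper's exact routine):

```python
import numpy as np

def pca_plane(points: np.ndarray):
    """Estimate a local plane via PCA: the covariance eigenvector with the
    smallest eigenvalue is the direction of least spread, i.e. the normal.
    This is the standard construction behind PCA-based plane fitting for
    laser point interpolation."""
    centroid = points.mean(axis=0)
    cov = np.cov((points - centroid).T)       # 3x3 covariance of the neighborhood
    eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
    normal = eigvecs[:, 0]                    # smallest-variance direction
    return centroid, normal

# Noise-free points lying on the plane z = 0
pts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0], [0.5, 0.2, 0]], float)
c, n = pca_plane(pts)
print(np.abs(n).round(3))  # normal is (0, 0, 1) up to sign
```

With the plane in hand, a sparse laser return near an image feature can be interpolated by projecting onto this local plane rather than taken at raw scan resolution.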

23 pages, 5036 KB  
Article
Distilling Vision Foundation Models into LiDAR Networks via Manifold-Aware Topological Alignment
by Yuchuan Yang and Xiaosu Xu
Computers 2026, 15(4), 234; https://doi.org/10.3390/computers15040234 - 9 Apr 2026
Viewed by 250
Abstract
LiDAR point cloud semantic segmentation is essential for autonomous driving, yet LiDAR-only methods remain constrained by sparsity and limited texture cues. We propose Cross-Modal Collaborative Manifold Distillation (CMCMD), which transfers open-world semantic priors from the DINOv3 Vision Foundation Model to a LiDAR student network. The framework combines an Adaptive Relation Convolution (ARConv) backbone with geometry-conditioned aggregation, a Unified Bidirectional Mapping Module (UBMM) for explicit 2D–3D interaction, and Manifold-Aware Topological Distillation (MATD), which aligns inter-sample affinity structures in a shared latent manifold rather than enforcing pointwise feature matching. By preserving relational topology instead of absolute feature coordinates, CMCMD mitigates negative transfer across heterogeneous modalities. Experiments on SemanticKITTI and nuScenes yield mIoU values of 72.9% and 81.2%, respectively, surpassing the compared distillation baselines and approaching the performance of multimodal fusion methods at lower inference cost. Additional evaluation on real-world campus scenes further supports the cross-domain robustness of the proposed framework.
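The core idea of MATD, aligning inter-sample affinity structures rather than pointwise features, can be sketched with cosine-similarity Gram matrices; this is a generic relational-distillation illustration, not the paper's exact loss:

```python
import numpy as np

def affinity(f: np.ndarray) -> np.ndarray:
    # Cosine-similarity Gram matrix over a batch of features: the relational
    # structure that topology-preserving distillation aligns.
    f = f / np.linalg.norm(f, axis=1, keepdims=True)
    return f @ f.T

def relational_loss(teacher_feats, student_feats):
    # MSE between teacher and student affinity matrices; note it never
    # compares features pointwise, only their pairwise relations.
    return float(np.mean((affinity(teacher_feats) - affinity(student_feats)) ** 2))

rng = np.random.default_rng(0)
teacher = rng.normal(size=(4, 16))           # e.g. foundation-model features
Q, _ = np.linalg.qr(rng.normal(size=(16, 16)))
rotated = teacher @ Q                        # new coordinates, same topology
print(relational_loss(teacher, rotated) < 1e-10)  # True: affinities preserved
```

The orthogonal rotation changes every feature coordinate yet leaves the loss at (numerically) zero, which is exactly why relational alignment tolerates heterogeneous feature spaces better than absolute matching.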

24 pages, 2660 KB  
Article
SpaA: A Spatial-Aware Network for 3D Object Detection from LiDAR Point Clouds
by Jianfeng Song, Chu Zhang, Cheng Zhang, Li Song, Ruobin Wang and Kun Xie
Remote Sens. 2026, 18(8), 1104; https://doi.org/10.3390/rs18081104 - 8 Apr 2026
Viewed by 294
Abstract
Grid-based 3D object detection methods effectively leverage mature point cloud processing techniques and convolutional neural networks for feature extraction and object localization. However, unlike in the 2D object detection domain, the uneven and sparse spatial distribution of point cloud data necessitates that detection networks possess a certain level of spatial structural perception. Learning spatial information such as point cloud density and distribution patterns can significantly benefit 3D detection networks. This paper proposes a Spatial-aware Network for 3D object detection (SpaA). Based on a 3D sparse convolution network, we designed a Variable Sparse Convolution network (VS-Conv) capable of perceiving the importance of locations. To address the issue of set abstraction operations completely ignoring spatial structure during local feature aggregation, we proposed a Spatial-aware Density-based Local Aggregation (SDLA) method. Experiments demonstrate that enhancing the spatial-awareness capability of detection networks is crucial for complex 3D object detection. Detection results on the KITTI dataset validate the effectiveness of our method. On the test set, SpaA achieved 3D AP values of 82.20%, 44.04%, and 70.34% for the Car, Pedestrian, and Cyclist categories, respectively, and a competitive 3D mAP of 67.23%, outperforming several published methods.

27 pages, 4289 KB  
Article
Online Extrinsic Calibration of Camera and LiDAR Based on Cascade Optimization
by Chuanxun Hou, Zheng He, Tong Zhao, Zhenhang Guo and Xinchun Ji
Sensors 2026, 26(7), 2282; https://doi.org/10.3390/s26072282 - 7 Apr 2026
Viewed by 324
Abstract
Accurate and stable extrinsic calibration is the foundation of high-quality fused sensing and positioning with camera and Light Detection and Ranging (LiDAR). However, traditional targetless calibration methods suffer from limitations such as poor scene adaptability and unstable convergence, which significantly restrict calibration accuracy and robustness in complex environments. To address these problems, we propose an online cascade-optimization-based extrinsic calibration method combining motion trajectory alignment and edge feature alignment. In the initial calibration stage, a hand–eye calibration algorithm is designed by minimizing the residual discrepancies between camera odometry and LiDAR odometry sequences, establishing a robust initialization for subsequent optimization. Then, to extract robust edge line features from sparse point clouds, we employ depth differences and planar edges of point clouds in the optimization process. Subsequently, principal component analysis (PCA) is applied to compute the principal direction of the extracted line features, enabling a decoupled optimization scheme that accounts for directional observability. This approach effectively mitigates the adverse effects of uneven environmental feature distributions. Experimental validation on typical urban datasets demonstrates the method’s generalizability and competitive accuracy: rotational parameter errors are constrained within 0.25°, and translational errors are maintained below 0.05 m. This affirms the method’s suitability for high-accuracy engineering applications.
(This article belongs to the Special Issue Intelligent Sensor Calibration: Techniques, Devices and Methodologies)
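Trajectory-alignment-based initialization of this kind builds on least-squares rigid alignment of two odometry sequences; a standard Kabsch/SVD sketch on synthetic paths (the paper's hand–eye formulation is more involved than this):

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rotation R and translation t with Q ~= R P + t,
    via SVD of the cross-covariance (the classic Kabsch construction)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# A synthetic "camera odometry" path and its copy seen in the LiDAR frame.
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
t_true = np.array([0.2, -0.1, 0.05])
P = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [2, 1, 0.5]], float)
Q = P @ R_true.T + t_true

R, t = kabsch(P, Q)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```

Minimizing residuals between the two sensors' trajectories in this way yields the coarse extrinsic estimate that the edge-feature stage then refines.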

20 pages, 12712 KB  
Article
Large-Scale Airborne LiDAR Point Cloud Building Extraction Based on Improved Voxelized Deep Learning Network
by Bai Xue, Yanru Song, Pi Ai, Hongzhou Li, Shuhan Liu and Li Guo
Buildings 2026, 16(7), 1450; https://doi.org/10.3390/buildings16071450 - 7 Apr 2026
Viewed by 318
Abstract
High-precision 3D building data are pivotal for smart city development, urban planning, and disaster management. However, large-scale building extraction from airborne LiDAR point clouds remains challenging due to semantic ambiguity, uneven point density, and complex architectural structures. To address these limitations, we propose a novel framework integrating geometric topology perception with cross-dimensional attention mechanisms within a Sparse Voxel Convolutional Neural Network (SPVCNN). The key contributions include: (1) an enhanced LaserMix++ multi-scale hybrid augmentation strategy featuring cross-scene block replacement, ground normal–constrained rotation, and non-uniform scaling; (2) a dual-branch SPVCNN architecture embedding a collaborative module of Geometric Self-Attention (GSA) and Cross-Space Residual Attention (CSRA) to preserve topological consistency and enable cross-dimensional feature interaction; and (3) a Boundary Enhancement Module (BEM) specifically designed to resolve boundary ambiguity and overlapping predictions. Evaluated on a 177 km² dataset covering Washington, D.C., our method significantly outperforms the baseline SPVCNN, improving accuracy by 12.04 percentage points (0.8212 to 0.9416) and Intersection over Union (IoU) by 9.96 percentage points (0.866 to 0.9656). Furthermore, it surpasses mainstream networks such as Cylinder3D and MinkResNet by over 50% in absolute accuracy gain. These results demonstrate the effectiveness of synergistically combining geometric perception with adaptive attention for robust building extraction from large-scale LiDAR data.
(This article belongs to the Section Construction Management, and Computers & Digitization)
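The sparse voxel representation underlying SPVCNN starts from a simple discretization step: map each point to an integer voxel index and keep only the occupied voxels. A generic sketch of that step:

```python
import numpy as np

def voxelize(points: np.ndarray, voxel_size: float):
    """Map each point to an integer voxel index and deduplicate: `occupied`
    lists the distinct voxels, `inverse` tells which voxel each point fell
    into. Sparse voxel CNNs convolve only over `occupied`."""
    idx = np.floor(points / voxel_size).astype(np.int64)
    occupied, inverse = np.unique(idx, axis=0, return_inverse=True)
    return occupied, inverse

pts = np.array([[0.05, 0.12, 0.00],
                [0.07, 0.11, 0.02],   # falls in the same 0.1 m voxel as above
                [0.55, 0.10, 0.00]])
vox, inv = voxelize(pts, 0.1)
print(len(vox))  # 2 occupied voxels for the 3 points
```

Everything downstream (the attention modules, the boundary refinement) operates on features attached to these occupied voxels rather than a dense 3D grid, which is what makes 177 km² scales tractable.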

17 pages, 4631 KB  
Article
Estimation of Nitrogen Status in Zanthoxylum armatum var. novemfolius Using Machine Learning Algorithms and UAV Hyperspectral and LiDAR Data Fusion
by Shangyuan Zhao, Yong Wei, Jinkun Zhao, Shuai Wang, Xin Ye, Xiaojun Shi and Jie Wang
Plants 2026, 15(7), 1119; https://doi.org/10.3390/plants15071119 - 6 Apr 2026
Viewed by 343
Abstract
Accurate monitoring of nitrogen (N) status is critical for precision N management and optimizing the yield and quality of Zanthoxylum armatum var. novemfolius (ZA). However, individual sensors often struggle to simultaneously capture the biochemical variations and complex canopy structural changes of ZA. Therefore, field experiments were conducted over two consecutive years, applying four N-application rates (0, 150, 300, and 450 kg N ha−1) to ZA. At each phenological stage, hyperspectral imagery and LiDAR point clouds were collected at three UAV flight altitudes (60 m, 80 m, and 100 m), and canopy nitrogen concentration (CNC) and aboveground nitrogen accumulation (AGNA) were measured. This study developed a framework by synergistically fusing UAV-derived hyperspectral imaging (HSI) and LiDAR data for CNC and AGNA monitoring. Results showed that the response of nitrogen status indicators to fertilization was phenology-specific: CNC showed no significant difference (p > 0.05) among treatments during the vigorous vegetative growth stage (VGS) but differed significantly (p < 0.05) during the fruit expansion stage (FES); AGNA differed significantly among treatments at VGS and FES (p < 0.05). The two-step screening yielded NDSI (732, 879) and NDSI (560, 690) as the optimal CNC indicators at VGS and FES, respectively (r = 0.83 and 0.93), whereas NDSI (711, 986) and NDSI (515, 736) were identified as the optimal AGNA indicators at VGS and FES, respectively (r = 0.91 and 0.71). Across all phenological stages, Random Forest Regression consistently delivered the highest accuracy for CNC (R2 = 0.93–0.98, RMSE = 0.87–1.02 g kg−1) and AGNA (R2 = 0.95–0.97, RMSE = 1.92–2.55 g plant−1), outperforming MLR, PLSR, and SVR. This synergistic framework provides a high-precision, non-destructive methodology for the precision N monitoring of woody crops.
(This article belongs to the Special Issue Remote Sensing for Diagnosis of Plant Health)
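The NDSI screening above uses the standard two-band normalized difference form, NDSI(λ1, λ2) = (Rλ1 − Rλ2) / (Rλ1 + Rλ2); a minimal sketch with hypothetical reflectances (the 732/879 nm pair is one of the reported optima, but these values are invented for illustration):

```python
import numpy as np

def ndsi(r1: np.ndarray, r2: np.ndarray) -> np.ndarray:
    # Normalized difference spectral index over two reflectance bands.
    return (r1 - r2) / (r1 + r2)

# Hypothetical canopy reflectances at 732 nm and 879 nm for three plots.
r_732 = np.array([0.30, 0.28, 0.35])
r_879 = np.array([0.45, 0.50, 0.40])
print(ndsi(r_732, r_879).round(3))
```

The "two-step screening" in the abstract amounts to computing this index for every candidate wavelength pair and keeping the pair whose NDSI correlates best with the measured CNC or AGNA.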

22 pages, 4917 KB  
Technical Note
Reducing Latency in Digital Twins: A Framework for Near-Real-Time Progress and Quality Reporting
by Zvonko Sigmund, Ivica Završki, Ivan Marović and Kristijan Vilibić
Buildings 2026, 16(7), 1448; https://doi.org/10.3390/buildings16071448 - 6 Apr 2026
Viewed by 444
Abstract
While Digital Twins offer transformative potential, their efficacy for real-time control is constrained by slow data acquisition and the high computational intensity required to process raw datasets like point clouds. This paper identifies these critical bottlenecks—specifically the latency between data capture and actionable insight—and proposes a refined theoretical framework for near-real-time automated progress monitoring and quality reporting. Building on the findings of the NORMENG project and informing the subsequent AutoGreenTraC project, this research synthesizes state-of-the-art advancements in reality capture, including LiDAR, SfM-MVS, and 360-degree vision. The study highlights a fundamental divergence in stakeholder requirements: the need for millimeter-level precision in quality control versus the demand for high-velocity documentation for progress monitoring. A key innovation presented is the shift toward neural rendering techniques to bypass the computational delays of traditional photogrammetry and enable immediate on-site visualization. By structuring a tiered processing hierarchy that combines lightweight edge analysis for immediate safety and progress monitoring with asynchronous high-fidelity Digital Twin updates, the framework aims to establish a single source of truth.

16 pages, 9785 KB  
Article
Experimental Assessment of Vertical Greenery Systems Using Shake Table Tests and High-Precision Terrestrial LiDAR
by Vachan Vanian, Pavlos Asteriou, Theodoros Rousakis, Ioannis P. Xynopoulos and Constantin E. Chalioris
Geotechnics 2026, 6(2), 33; https://doi.org/10.3390/geotechnics6020033 - 6 Apr 2026
Viewed by 219
Abstract
The integration of vertical greenery systems (VGSs) into existing reinforced concrete (RC) buildings raises questions regarding interface kinematics and the permanent displacement of soil-retaining elements under seismic excitation. This study experimentally investigates the residual displacement of façade-mounted living walls and rooftop planter pods anchored to a deficient RC frame under shake table excitation. A 1:3 scale reinforced concrete frame was tested in two distinct phases: initially as a deficient, unretrofitted structure (Phase A), and subsequently as a retrofitted system integrated with vertical greenery elements (Phase B). High-precision terrestrial laser scanning (TLS) was employed before and after successive seismic excitation stages to generate dense three-dimensional point clouds. Cloud-to-cloud comparison techniques were used to quantify global structural displacement and local kinematic behavior of greenery components, while results were validated against conventional displacement sensors. The RC frame exhibited millimeter-scale permanent displacements consistent with draw-wire measurements. In contrast, planter pods demonstrated configuration-dependent behavior, including up to 8 cm translational sliding and rotational responses reaching 13° under repeated excitation, whereas living wall panels remained stable. Notably, a 95% reduction in point cloud density reproduced global deformation patterns with an RMSE of 3.03 mm and quantified peak displacements with only ~2% deviation from full-resolution results. The findings demonstrate the capability of TLS-based monitoring to detect differential kinematic behavior of integrated VGSs, while highlighting the variability in performance of friction-based rooftop anchorage utilizing different robust planter pod fixing systems.
(This article belongs to the Special Issue Recent Advances in Soil–Structure Interaction)
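Cloud-to-cloud (C2C) comparison, as used above to quantify displacement, assigns each point of the later scan the distance to its nearest neighbour in the earlier scan; a brute-force sketch (real TLS clouds require a KD-tree or octree, and more refined variants compare against local surface models rather than raw points):

```python
import numpy as np

def c2c_distances(reference: np.ndarray, compared: np.ndarray) -> np.ndarray:
    # For each point of the post-event scan, distance to its nearest
    # neighbour in the pre-event scan; (M, N) pairwise brute force.
    d = np.linalg.norm(compared[:, None, :] - reference[None, :, :], axis=-1)
    return d.min(axis=1)

pre = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], float)
post = pre + np.array([0.003, 0.0, 0.0])  # 3 mm residual sliding, as a toy case

print(c2c_distances(pre, post).max())  # millimetre-scale displacement recovered
```

Aggregates of this distance field (max, RMSE per component) are the kind of quantities compared against draw-wire sensors in the study.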

23 pages, 5436 KB  
Article
Characterizing Pedestrian Network from Segmented 3D Point Clouds for Accessibility Assessment: A Virtual Robotic Approach
by Ali Ahmadi, Mir Abolfazl Mostafavi, Ernesto Morales and Nouri Sabo
Sensors 2026, 26(7), 2172; https://doi.org/10.3390/s26072172 - 31 Mar 2026
Viewed by 289
Abstract
This study introduces a novel virtual robotic approach for automated characterization of pedestrian network accessibility from semantically segmented 3D LiDAR point clouds. With approximately 8 million Canadians living with disabilities, scalable accessibility assessment methods are critical. The proposed methodology integrates a Tangent Bug [...] Read more.
This study introduces a novel virtual robotic approach for the automated characterization of pedestrian network accessibility from semantically segmented 3D LiDAR point clouds. With approximately 8 million Canadians living with disabilities, scalable accessibility assessment methods are critical. The proposed methodology integrates a Tangent Bug navigation algorithm, extended from 2D to 3D point cloud environments, with a triangular virtual robot grounded in ADA and IBC accessibility standards. The robot navigates classified point cloud data and, at each step, extracts the parameters relevant to accessibility assessment, including running slope, cross-slope, path width, surface type, and step height, aligned with the Measure of Environmental Accessibility (MEA) framework. Unlike existing approaches, the method characterizes not only formal sidewalk segments but also the critical transitional linkages between building entrances and the pedestrian network. Rather than evaluating features against fixed binary thresholds, it records continuous raw measurements, enabling personalized accessibility assessment tailored to individual user profiles. Quantitative validation demonstrates high accuracy for path width (NRMSE = 2.71%) and reliable slope tracking. The proposed approach is faster, more cost-effective, and more comprehensive than traditional manual methods, and its segment-independent architecture makes it well suited for future city-scale deployment. Full article
(This article belongs to the Special Issue Advances in Wireless Sensor Networks for Smart City)
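The per-step slope measurements the virtual robot records can be illustrated with simple geometry: running slope is rise over horizontal run along the travel direction, and cross-slope is the same ratio measured across the robot's footprint. A minimal sketch follows; the point coordinates, step size, and percent-grade convention are assumptions for illustration, not the paper's implementation:

```python
import math

def running_slope_pct(p_prev, p_next):
    """Grade along the travel direction: rise over horizontal run, in percent."""
    run = math.hypot(p_next[0] - p_prev[0], p_next[1] - p_prev[1])
    return 100.0 * (p_next[2] - p_prev[2]) / run

def cross_slope_pct(left, right):
    """Grade across the path, from ground points at the footprint's left/right edges."""
    run = math.hypot(right[0] - left[0], right[1] - left[1])
    return 100.0 * (right[2] - left[2]) / run

# One simulated step: 1 m forward with a 5 cm rise, 1 m wide with a 2 cm tilt.
# Raw values are recorded per step rather than reduced to a pass/fail flag,
# so thresholds can be applied later per user profile.
step = {
    "running_slope_pct": round(running_slope_pct((0.0, 0.0, 0.00), (1.0, 0.0, 0.05)), 2),
    "cross_slope_pct": round(cross_slope_pct((0.0, -0.5, 0.00), (0.0, 0.5, 0.02)), 2),
}
print(step)  # → {'running_slope_pct': 5.0, 'cross_slope_pct': 2.0}
```

Keeping the raw percentages (rather than comparing against a fixed ADA limit at extraction time) is what makes the later per-profile assessment possible.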

18 pages, 3933 KB  
Article
Feature Selection Based on Height Mutual Information in Airborne LiDAR Filtering
by Zhan Cai, Luying Zhao, Qiuli Chen, Zhijun He, Shaoyun Bi and Xiaolong Xu
Remote Sens. 2026, 18(7), 1031; https://doi.org/10.3390/rs18071031 - 30 Mar 2026
Viewed by 304
Abstract
Filtering constitutes a critical step in the post-processing of airborne Light Detection And Ranging (LiDAR) data. Over the past decade, machine learning has emerged as a prominent methodological paradigm across numerous disciplines, attracting significant research interest in its application to LiDAR filtering. From a machine learning perspective, filtering is essentially a binary classification task that aims to discriminate between ground and non-ground points. However, the limited information inherent in point clouds often leads to highly correlated features, particularly those derived from height data, which can compromise filtering accuracy. To address this issue, feature selection becomes imperative. In this study, we employed height-based mutual information as a criterion to identify and eliminate less discriminative features. The AdaBoost (Adaptive Boosting) algorithm was adopted as the classifier for point cloud filtering. For each point, nineteen features were derived from the raw LiDAR point cloud based on height and other geometric attributes within a defined neighborhood. The performance of the proposed feature selection approach was evaluated on benchmark datasets provided by the International Society for Photogrammetry and Remote Sensing (ISPRS). Experimental results demonstrate that the method is effective and reliable: after removing the three selected features, the average kappa coefficient improved and three categories of error decreased, with only a slight increase (0.15%) in Type II error. Full article
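Height-based mutual information can be estimated with a simple histogram plug-in estimator: discretize both variables and sum p(x,y)·log(p(x,y)/(p(x)p(y))) over the joint bins. The sketch below ranks two synthetic features by their MI with height; the feature names, the 8-bin discretization, and the synthetic data are illustrative assumptions, not the paper's nineteen features or its selection rule:

```python
import math
import random
from collections import Counter

def mutual_information(xs, ys, bins=8):
    """Histogram plug-in estimate of MI (in nats) between two samples."""
    def discretize(v):
        lo, hi = min(v), max(v)
        w = (hi - lo) / bins or 1.0  # guard against a constant feature
        return [min(int((x - lo) / w), bins - 1) for x in v]
    bx, by = discretize(xs), discretize(ys)
    n = len(xs)
    pxy, px, py = Counter(zip(bx, by)), Counter(bx), Counter(by)
    return sum((c / n) * math.log((c / n) / ((px[i] / n) * (py[j] / n)))
               for (i, j), c in pxy.items())

random.seed(0)
height = [random.uniform(0.0, 10.0) for _ in range(2000)]
features = {
    # hypothetical near-duplicate of height (redundant candidate)
    "height_noisy": [h + random.gauss(0.0, 0.1) for h in height],
    # hypothetical height-independent geometric feature
    "planarity": [random.random() for _ in height],
}
mi = {name: mutual_information(vals, height) for name, vals in features.items()}
# The height-derived feature carries far more information about height.
assert mi["height_noisy"] > mi["planarity"]
```

The ranking alone does not decide which end of the list to drop; whether high-MI features are removed as redundant or low-MI features as uninformative is the design choice the paper makes, and the classifier (AdaBoost in the study) is then trained on the retained features.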