Search Results (6,613)

Search Parameters:
Keywords = point clouds

26 pages, 3841 KB  
Article
Comparison of Regression, Classification, Percentile Method and Dual-Range Averaging Method for Crop Canopy Height Estimation from UAV-Based LiDAR Point Cloud Data
by Pai Du, Jinfei Wang and Bo Shan
Drones 2025, 9(10), 683; https://doi.org/10.3390/drones9100683 - 1 Oct 2025
Abstract
Crop canopy height is a key structural indicator that is strongly associated with crop development, biomass accumulation, and crop health. To overcome the limitations of time-consuming and labor-intensive traditional field measurements, Unmanned Aerial Vehicle (UAV)-based Light Detection and Ranging (LiDAR) offers an efficient alternative by capturing three-dimensional point cloud data (PCD). In this study, UAV-LiDAR data were acquired using a DJI Matrice 600 Pro equipped with a 16-channel LiDAR system. Three canopy height estimation methodological approaches were evaluated across three crop types: corn, soybean, and winter wheat. Specifically, this study assessed machine learning regression modeling, ground point classification techniques, percentile-based method and a newly proposed Dual-Range Averaging (DRA) method to identify the most effective method while ensuring practicality and reproducibility. The best-performing method for corn was Support Vector Regression (SVR) with a linear kernel (R2 = 0.95, RMSE = 0.137 m). For soybean, the DRA method yielded the highest accuracy (R2 = 0.93, RMSE = 0.032 m). For winter wheat, the PointCNN deep learning model demonstrated the best performance (R2 = 0.93, RMSE = 0.046 m). These results highlight the effectiveness of integrating UAV-LiDAR data with optimized processing methods for accurate and widely applicable crop height estimation in support of precision agriculture practices. Full article
(This article belongs to the Special Issue UAV Agricultural Management: Recent Advances and Future Prospects)
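The percentile-based approach evaluated above can be sketched generically: treat a low percentile of a plot's LiDAR return heights as the ground level and a high percentile as the canopy top, and take their difference as canopy height. A minimal illustration (the specific percentiles and the synthetic data are assumptions for the sketch, not the paper's settings):

```python
import numpy as np

def canopy_height_percentile(z, ground_pct=5, canopy_pct=95):
    """Estimate canopy height as the spread between an upper (canopy)
    and a lower (ground) percentile of LiDAR return heights for one
    plot or grid cell."""
    z = np.asarray(z, dtype=float)
    ground = np.percentile(z, ground_pct)   # proxy for the ground surface
    canopy = np.percentile(z, canopy_pct)   # proxy for the canopy top
    return canopy - ground

# Synthetic plot: ground returns near 0 m, canopy returns near 2 m.
rng = np.random.default_rng(42)
z = np.concatenate([rng.normal(0.0, 0.02, 500),
                    rng.normal(2.0, 0.05, 500)])
height = canopy_height_percentile(z)   # close to the true 2 m canopy
```

Using percentiles rather than the raw minimum/maximum makes the estimate robust to stray low or high returns.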

24 pages, 4942 KB  
Article
ConvNet-Generated Adversarial Perturbations for Evaluating 3D Object Detection Robustness
by Temesgen Mikael Abraha, John Brandon Graham-Knight, Patricia Lasserre, Homayoun Najjaran and Yves Lucet
Sensors 2025, 25(19), 6026; https://doi.org/10.3390/s25196026 - 1 Oct 2025
Abstract
This paper presents a novel adversarial Convolutional Neural Network (ConvNet) method for generating adversarial perturbations in 3D point clouds, enabling gradient-free robustness evaluation of object detection systems at inference time. Unlike existing iterative gradient methods, our approach embeds the ConvNet directly into the detection pipeline at the voxel feature level. The ConvNet is trained to maximize detection loss while maintaining perturbations within sensor error bounds through multi-component loss constraints (intensity, bias, and imbalance terms). Evaluation on a Sparsely Embedded Convolutional Detection (SECOND) detector with the KITTI dataset shows 8% overall mean Average Precision (mAP) degradation, while CenterPoint on NuScenes exhibits 24% weighted mAP reduction across 10 object classes. Analysis reveals an inverse relationship between object size and adversarial vulnerability: smaller objects (pedestrians: 13%, cyclists: 14%) show higher vulnerability compared to larger vehicles (cars: 0.2%) on KITTI, with similar patterns on NuScenes, where barriers (68%) and pedestrians (32%) are most affected. Despite perturbations remaining within typical sensor error margins (mean L2 norm of 0.09% for KITTI, 0.05% for NuScenes, corresponding to 0.9–2.6 cm at typical urban distances), substantial detection failures occur. The key novelty is training a ConvNet to learn effective adversarial perturbations during a one-time training phase and then using the trained network for gradient-free robustness evaluation during inference, requiring only a forward pass through the ConvNet (1.2–2.0 ms overhead) instead of iterative gradient computation, making continuous vulnerability monitoring practical for autonomous driving safety assessment. Full article
(This article belongs to the Section Sensing and Imaging)

28 pages, 6329 KB  
Article
SparsePose–NeRF: Robust Reconstruction Under Limited Observations and Uncalibrated Poses
by Kun Fang, Qinghui Zhang, Chenxia Wan, Pengtao Lv and Cheng Yuan
Photonics 2025, 12(10), 962; https://doi.org/10.3390/photonics12100962 - 28 Sep 2025
Abstract
Neural Radiance Fields (NeRF) reconstruction faces significant challenges under non-ideal conditions, such as sparse viewpoints or missing camera pose information. Existing approaches frequently assume accurate camera poses and validate their effectiveness on standard datasets, which restricts their applicability in real-world scenarios. To tackle the challenge of sparse viewpoints and the inability of Structure-from-Motion (SfM) to accurately estimate camera poses, we propose a novel approach. Our method replaces SfM with the MASt3R-SfM algorithm to robustly compute camera poses and generate dense point clouds, which serve as depth–space constraints for NeRF reconstruction, mitigating geometric information loss caused by limited viewpoints. Additionally, we introduce a high-frequency annealing encoding strategy to prevent network overfitting and employ a depth loss function leveraging Pearson correlation coefficients to extract low-frequency information from images. Experimental results demonstrate that our approach achieves high-quality NeRF reconstruction under conditions of sparse viewpoints and missing camera poses while being better suited for real-world applications. Its effectiveness has been validated on the Real Forward-Facing dataset and in real-world scenarios. Full article
(This article belongs to the Special Issue New Perspectives in Micro-Nano Optical Design and Manufacturing)
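The Pearson-correlation depth loss mentioned above rewards depth maps that preserve the relative structure of a prior rather than its absolute scale, which suits priors with unknown global scale and shift. A minimal sketch (the function name and toy data are illustrative, not from the paper):

```python
import numpy as np

def pearson_depth_loss(d_pred, d_prior, eps=1e-8):
    """Depth loss 1 - r, where r is the Pearson correlation between
    predicted and prior depth. Minimizing it enforces agreement in
    *relative* depth structure, invariant to global scale and shift."""
    d_pred = np.asarray(d_pred, float).ravel()
    d_prior = np.asarray(d_prior, float).ravel()
    a = d_pred - d_pred.mean()
    b = d_prior - d_prior.mean()
    r = (a * b).sum() / (np.sqrt((a * a).sum() * (b * b).sum()) + eps)
    return 1.0 - r

# A prediction that is a scaled + shifted copy of the prior incurs
# (almost) zero loss even though the absolute depths differ.
prior = np.linspace(1.0, 5.0, 100)
pred = 0.5 * prior + 2.0
loss = pearson_depth_loss(pred, prior)
```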

34 pages, 9527 KB  
Article
High-Resolution 3D Thermal Mapping: From Dual-Sensor Calibration to Thermally Enriched Point Clouds
by Neri Edgardo Güidi, Andrea di Filippo and Salvatore Barba
Appl. Sci. 2025, 15(19), 10491; https://doi.org/10.3390/app151910491 - 28 Sep 2025
Abstract
Thermal imaging is increasingly applied in remote sensing to identify material degradation, monitor structural integrity, and support energy diagnostics. However, its adoption is limited by the low spatial resolution of thermal sensors compared to RGB cameras. This study proposes a modular pipeline to generate thermally enriched 3D point clouds by fusing RGB and thermal imagery acquired simultaneously with a dual-sensor unmanned aerial vehicle system. The methodology includes geometric calibration of both cameras, image undistortion, cross-spectral feature matching, and projection of radiometric data onto the photogrammetric model through a computed homography. Thermal values are extracted using a custom parser and assigned to 3D points based on visibility masks and interpolation strategies. Calibration achieved 81.8% chessboard detection, yielding subpixel reprojection errors. Among twelve evaluated algorithms, LightGlue retained 99% of its matches and delivered a reprojection accuracy of 18.2% at 1 px, 65.1% at 3 px and 79% at 5 px. A case study on photovoltaic panels demonstrates the method’s capability to map thermal patterns with low temperature deviation from ground-truth data. Developed entirely in Python, the workflow integrates into Agisoft Metashape or other software. The proposed approach enables cost-effective, high-resolution thermal mapping with applications in civil engineering, cultural heritage conservation, and environmental monitoring applications. Full article

14 pages, 15260 KB  
Article
High-Performance 3D Point Cloud Image Distortion Calibration Filter Based on Decision Tree
by Yao Duan
Photonics 2025, 12(10), 960; https://doi.org/10.3390/photonics12100960 - 28 Sep 2025
Abstract
Structured Light LiDAR is susceptible to lens scattering and temperature fluctuations, resulting in some level of distortion in the captured point cloud image. To address this problem, this paper proposes a high-performance 3D point cloud Least Mean Square filter based on a Decision Tree, called the D-LMS filter for short. The D-LMS filter is an adaptive filtering compensation algorithm based on a decision tree, which can effectively distinguish the signal region from the distorted region, thereby correcting the distortion of the point cloud image and improving its accuracy. The experimental results demonstrate that the proposed D-LMS filtering algorithm significantly improves accuracy by correcting distorted areas. Compared with the 3D point cloud least mean square filter based on SVM, the accuracy of the proposed D-LMS filtering algorithm is improved from 86.17% to 92.38%, the training time is reduced by a factor of 1317, and the testing time by a factor of 1208. Full article

24 pages, 14166 KB  
Article
Robust and Transferable Elevation-Aware Multi-Resolution Network for Semantic Segmentation of LiDAR Point Clouds in Powerline Corridors
by Yifan Wang, Shenhong Li, Guofang Wang, Wanshou Jiang, Yijun Yan and Jianwen Sun
Remote Sens. 2025, 17(19), 3318; https://doi.org/10.3390/rs17193318 - 27 Sep 2025
Abstract
Semantic segmentation of LiDAR point clouds in powerline corridor environments is crucial for the intelligent inspection and maintenance of power infrastructure. However, existing deep learning methods often underperform in such scenarios due to severe class imbalance, sparse and long-range structures, and complex elevation variations. We propose EMPower-Net, an Elevation-Aware Multi-Resolution Network, which integrates an Elevation Distribution (ED) module to enhance vertical geometric awareness and a Multi-Resolution (MR) module to enhance segmentation accuracy for corridor structures with varying object scales. Experiments on real-world datasets from Yunnan and Guangdong show that EMPower-Net outperforms state-of-the-art baselines, especially in recognizing power lines and towers with high structural fidelity under occlusion and dense vegetation. Ablation studies confirm the complementary effects of the MR and ED modules, while transfer learning results reveal strong generalization with minimal performance degradation across different powerline regions. Additional tests on urban datasets indicate that the proposed elevation features are also effective for vertical structure recognition beyond powerline scenarios. Full article
(This article belongs to the Special Issue Urban Land Use Mapping Using Deep Learning)

21 pages, 11368 KB  
Article
Introducing SLAM-Based Portable Laser Scanning for the Metric Testing of Topographic Databases
by Eleonora Maset, Antonio Matellon, Simone Gubiani, Domenico Visintini and Alberto Beinat
Remote Sens. 2025, 17(19), 3316; https://doi.org/10.3390/rs17193316 - 27 Sep 2025
Abstract
The advent of portable laser scanners leveraging Simultaneous Localization and Mapping (SLAM) technology has recently enabled the rapid and efficient acquisition of detailed point clouds of the surrounding environment while maintaining a high degree of accuracy and precision, on the order of a few centimeters. This paper explores the use of SLAM systems in an uncharted application domain, namely the metric testing of a large-scale, three-dimensional topographic database (TDB). Three distinct operational procedures (point-to-cloud, line-to-cloud, and line-to-line) are developed to facilitate a comparison between the vector features of the TDB and the SLAM-based point cloud, which serves as a reference. A comprehensive evaluation carried out on the TDB of the Friuli Venezia Giulia region (Italy) highlights the advantages and limitations of the proposed approaches, demonstrating the potential of SLAM-based surveys to complement, or even supersede, the classical topographic field techniques usually employed for geometric verification operations. Full article
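Of the three procedures, point-to-cloud is the simplest to sketch: each TDB vertex is scored by its distance to the nearest point of the reference SLAM-based cloud. A brute-force illustration (at survey scale a KD-tree would replace the full distance matrix; the names and toy data are illustrative):

```python
import numpy as np

def point_to_cloud_distances(vertices, cloud):
    """For each vector-feature vertex, the Euclidean distance to its
    nearest neighbour in the reference point cloud (brute force)."""
    v = np.asarray(vertices, float)[:, None, :]   # shape (N, 1, 3)
    c = np.asarray(cloud, float)[None, :, :]      # shape (1, M, 3)
    return np.sqrt(((v - c) ** 2).sum(-1)).min(axis=1)

cloud = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0]], float)
verts = np.array([[0.1, 0, 0], [1.5, 0.5, 0]], float)
d = point_to_cloud_distances(verts, cloud)
```

Summary statistics of these distances (mean, RMSE, percentiles) then quantify the metric agreement between the TDB and the reference survey.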

21 pages, 4146 KB  
Article
Integration of Drone-Based 3D Scanning and BIM for Automated Construction Progress Control
by Nerea Tárrago Garay, Jose Carlos Jimenez Fernandez, Rosa San Mateos Carreton, Marco Antonio Montes Grova, Oskari Kruth and Peru Elguezabal
Buildings 2025, 15(19), 3487; https://doi.org/10.3390/buildings15193487 - 26 Sep 2025
Abstract
Progress control is key to correcting deviations in construction, but it is still a largely manual task carried out by personnel sent to the construction site. This work proposes to digitize and automate the procedure by combining and contrasting digital models of the actual state of the work with the theoretical planning. The models of the real situation are generated from laser scanning performed by drones, the theoretical planning is captured in the project's BIM4D models, and their combination is automated with Feature Manipulation Engine (FME) visual programming routines. A web-based digital twin platform gives the end user agile access to the service. The methodology has been validated on a residential building in the structural erection phase in Helsinki (Finland). Full article
(This article belongs to the Special Issue Robotics, Automation and Digitization in Construction)

16 pages, 3013 KB  
Article
Boosting LiDAR Point Cloud Object Detection via Global Feature Fusion
by Xu Zhang, Fengchang Tian, Jiaxing Sun and Yan Liu
Information 2025, 16(10), 832; https://doi.org/10.3390/info16100832 - 26 Sep 2025
Abstract
To address the limitation of receptive fields caused by the use of local convolutions in current point cloud object detection methods, this paper proposes a LiDAR point cloud object detection algorithm that integrates global features. The proposed method employs a Voxel Mapping Block (VMB) and a Global Feature Extraction Block (GFEB) to convert the point cloud data into a one-dimensional long sequence. It then utilizes non-local convolutions to model the entire voxelized point cloud and incorporate global contextual information, thereby enhancing the network’s receptive field and its capability to extract and learn global features. Furthermore, a Voxel Channel Feature Extraction (VCFE) module is designed to capture local spatial information by associating features across different channels, effectively mitigating the spatial information loss introduced during the one-dimensional transformation. The experimental results demonstrate that, compared with state-of-the-art methods, the proposed approach improves the average precision of vehicle, pedestrian, and cyclist targets on the Waymo subset by 0.64%, 0.71%, and 0.66%, respectively. On the nuScenes dataset, the detection accuracy for var targets increased by 0.7%, with NDS and mAP improving by 0.3% and 0.5%, respectively. In particular, the method exhibits outstanding performance in small object detection, significantly enhancing the overall accuracy of point cloud object detection. Full article

20 pages, 14512 KB  
Article
Dual-Attention-Based Block Matching for Dynamic Point Cloud Compression
by Longhua Sun, Yingrui Wang and Qing Zhu
J. Imaging 2025, 11(10), 332; https://doi.org/10.3390/jimaging11100332 - 25 Sep 2025
Abstract
The irregular and highly non-uniform spatial distribution inherent to dynamic three-dimensional (3D) point clouds (DPCs) severely hampers the extraction of reliable temporal context, rendering inter-frame compression a formidable challenge. Inspired by two-dimensional (2D) image and video compression methods, existing approaches attempt to model the temporal dependence of DPCs through a motion estimation/motion compensation (ME/MC) framework. However, these approaches represent only preliminary applications of this framework; point consistency between adjacent frames is insufficiently explored, and temporal correlation requires further investigation. To address this limitation, we propose a hierarchical ME/MC framework that adaptively selects the granularity of the estimated motion field, thereby ensuring a fine-grained inter-frame prediction process. To further enhance motion estimation accuracy, we introduce a dual-attention-based KNN block-matching (DA-KBM) network. This network employs a bidirectional attention mechanism to more precisely measure the correlation between points, using closely correlated points to predict inter-frame motion vectors and thereby improve inter-frame prediction accuracy. Experimental results show that the proposed DPC compression method achieves a significant improvement (gain of 70%) in the BD-Rate metric on the 8iFVBv2 dataset, compared with the standardized Video-based Point Cloud Compression (V-PCC) v13 method, and a 16% gain over the state-of-the-art deep learning-based inter-mode method. Full article
(This article belongs to the Special Issue 3D Image Processing: Progress and Challenges)

26 pages, 3429 KB  
Article
I-VoxICP: A Fast Point Cloud Registration Method for Unmanned Surface Vessels
by Qianfeng Jing, Mingwang Bai, Yong Yin and Dongdong Guo
J. Mar. Sci. Eng. 2025, 13(10), 1854; https://doi.org/10.3390/jmse13101854 - 25 Sep 2025
Abstract
The accurate positioning and state estimation of surface vessels are prerequisites to autonomous navigation. Recently, the rapid development of 3D LiDARs has promoted the autonomy of both land and aerial vehicles, which has attracted the interest of researchers in the maritime community. However, in traditional maritime surface multi-scenario applications, LiDAR scan matching has low point cloud scanning and matching efficiency and insufficient positional accuracy when dealing with large-scale point clouds, so it has difficulty meeting the real-time demand of low-computing-power platforms. In this paper, we use ICP-SVD for point cloud alignment in the Stanford dataset and outdoor dock scenarios and propose an optimization scheme (iVox + ICP-SVD) that incorporates the voxel structure iVox. Experiments show that the average search time of iVox is 72.23% and 96.8% lower than that of ikd-tree and kd-tree, respectively. Executed on an NVIDIA Jetson Nano (four ARM Cortex-A57 cores @ 1.43 GHz), the algorithm processes 18 k downsampled points in 56 ms on average and 65 ms in the worst case (i.e., ≥15 Hz), so every scan is completed before the next 10–20 Hz LiDAR sweep arrives. During a 73 min continuous harbor trial the CPU temperature stabilized at 68 °C without thermal throttling, confirming that the reported latency is a sustainable, field-proven upper bound rather than a laboratory best case. This dramatically improves the retrieval efficiency while effectively maintaining the matching accuracy. As a result, the overall alignment process is significantly accelerated, providing an efficient and reliable solution for real-time point cloud processing. Full article
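The ICP-SVD core referenced above solves, in closed form, for the rigid transform between corresponded point sets (the standard Kabsch/Procrustes solution); iVox only accelerates the nearest-neighbour search that produces the correspondences. A sketch of the SVD step under that standard formulation (toy data, not the paper's):

```python
import numpy as np

def rigid_align_svd(src, dst):
    """One ICP-SVD step: closed-form rotation R and translation t that
    best map corresponded points src onto dst in a least-squares sense."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)                   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.eye(3)
    D[2, 2] = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t

# Recover a known 90-degree yaw + translation from noiseless correspondences.
rng = np.random.default_rng(1)
src = rng.random((50, 3))
R_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
t_true = np.array([0.5, -0.2, 1.0])
dst = src @ R_true.T + t_true
R, t = rigid_align_svd(src, dst)
```

In a full ICP loop this step alternates with re-finding correspondences until the transform converges.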

19 pages, 4890 KB  
Article
Classifying Sex from MSCT-Derived 3D Mandibular Models Using an Adapted PointNet++ Deep Learning Approach in a Croatian Population
by Eva Shimkus, Ivana Kružić, Saša Mladenović, Iva Perić, Marija Jurić Gunjača, Tade Tadić, Krešimir Dolić, Šimun Anđelinović, Željana Bašić and Ivan Jerković
J. Imaging 2025, 11(10), 328; https://doi.org/10.3390/jimaging11100328 - 24 Sep 2025
Abstract
Accurate sex estimation is critical in forensic anthropology for developing biological profiles, with the mandible serving as a valuable alternative when crania or pelvic bones are unavailable. This study aims to enhance mandibular sex estimation using deep learning on 3D models in a southern Croatian population. A dataset of 254 MSCT-derived 3D mandibular models (127 male, 127 female) was processed to generate 4096-point clouds, analyzed using an adapted PointNet++ architecture. The dataset was split into training (60%), validation (20%), and test (20%) sets. Unsupervised analysis employed an autoencoder with t-SNE visualization, while supervised classification used logistic regression on extracted features, evaluated by accuracy, sensitivity, specificity, PPV, NPV, and MCC. The model achieved 93% cross-validation accuracy and 92% test set accuracy, with saliency maps highlighting key sexually dimorphic regions like the chin, gonial, and condylar areas. A user-friendly Gradio web application was developed for real-time sex classification from STL files, enhancing forensic applicability. This approach outperformed traditional mandibular sex estimation methods and could have potential as a robust, automated tool for forensic practice, broader population studies and integration with diverse 3D data sources. Full article
(This article belongs to the Section Medical Imaging)

16 pages, 4406 KB  
Article
Integration of Physical Features and Machine Learning: CSF-RF Framework for Optimizing Ground Point Filtering in Vegetated Regions
by Sisi Zhang, Chenyao Qu, Zhimin Wu and Wei Wang
Sensors 2025, 25(19), 5950; https://doi.org/10.3390/s25195950 - 24 Sep 2025
Abstract
Complex terrain conditions and dense vegetation cover in a vegetation area present significant challenges for point cloud data processing and the accurate extraction of ground points. This work integrates the physical characteristics between ground and non-ground points from the traditional Cloth Simulation Filter (CSF) algorithm and the strong learning capability of the machine learning Random Forest (RF) framework, developing the CSF-RF fusion algorithm for filtering ground points in vegetated areas, which can improve the accuracy of point cloud filtering in complex terrain environments. Both type I and type II errors do not exceed 0.05%, and the total error is maintained within 0.03%. Particularly in areas with dense vegetation and severe terrain undulations, the advantages are evident: the CSF-RF algorithm achieves a total error of only 0.19%, representing a 79.6% relative reduction compared with the 0.93% error of the CSF algorithm, while also reducing cases of ground point omission. Thus, it can be seen that the CSF-RF algorithm can effectively reduce vegetation interference and exhibits good stability, providing effective technical support for the accurate extraction of Digital Elevation Models (DEMs) in vegetated areas. Full article
(This article belongs to the Special Issue Application of SAR and Remote Sensing Technology in Earth Observation)

20 pages, 2911 KB  
Article
Topological Machine Learning for Financial Crisis Detection: Early Warning Signals from Persistent Homology
by Ecaterina Guritanu, Enrico Barbierato and Alice Gatti
Computers 2025, 14(10), 408; https://doi.org/10.3390/computers14100408 - 24 Sep 2025
Abstract
We propose a strictly causal early-warning framework for financial crises based on topological signal extraction from multivariate return streams. Sliding windows of daily log-returns are mapped to point clouds, from which Vietoris–Rips persistence diagrams are computed and summarised by persistence landscapes. A single, interpretable indicator is obtained as the L2 norm of the landscape and passed through a causal decision rule (with thresholds α, β and run-length parameters s, t) that suppresses isolated spikes and collapses bursts to time-stamped warnings. On four major U.S. equity indices (S&P 500, NASDAQ, DJIA, Russell 2000) over 1999–2021, the method, at a fixed strictly causal operating point (α = β = 3.1, s = 57, t = 16), attains a balanced precision–recall trade-off (F1 ≈ 0.50) with an average lead time of about 34 days. It anticipates two of the four canonical crises and issues a contemporaneous signal for the 2008 global financial crisis. Sensitivity analyses confirm the qualitative robustness of the detector, while comparisons with permissive spike rules and volatility-based baselines demonstrate substantially fewer false alarms at comparable recall. The approach delivers interpretable topology-based warnings and provides a reproducible route to combining persistent homology with causal event detection in financial time series. Full article
(This article belongs to the Special Issue Machine Learning and Statistical Learning with Applications 2025)
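The abstract does not spell out the exact (α, β, s, t) rule, but its two stated effects, suppressing isolated spikes and collapsing bursts into single time stamps, can be imitated with a run-length threshold plus a refractory period. A hypothetical stand-in for illustration, not the authors' rule:

```python
def causal_warnings(indicator, threshold, run_len, refractory):
    """Causal spike-suppressing warning rule (an illustrative stand-in
    for the paper's (alpha, beta, s, t) rule, whose exact form is not
    given in the abstract).

    A warning fires at day i when the indicator has exceeded `threshold`
    for `run_len` consecutive days; after firing, further warnings are
    suppressed for `refractory` days, so a burst yields one time stamp.
    Only past and present values are read, so the rule is strictly causal.
    """
    warnings, run, cooldown = [], 0, 0
    for i, x in enumerate(indicator):
        cooldown = max(0, cooldown - 1)
        run = run + 1 if x > threshold else 0
        if run >= run_len and cooldown == 0:
            warnings.append(i)
            cooldown = refractory
    return warnings

# Isolated spike at day 2 is ignored; the two sustained bursts each
# collapse to a single time-stamped warning.
series = [0, 0, 5, 0, 0, 5, 5, 5, 5, 0, 5, 5, 5, 0]
events = causal_warnings(series, threshold=1, run_len=3, refractory=5)
```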

32 pages, 33744 KB  
Article
Attention-Based Enhancement of Airborne LiDAR Across Vegetated Landscapes Using SAR and Optical Imagery Fusion
by Michael Marks, Daniel Sousa and Janet Franklin
Remote Sens. 2025, 17(19), 3278; https://doi.org/10.3390/rs17193278 - 24 Sep 2025
Abstract
Accurate and timely 3D vegetation structure information is essential for ecological modeling and land management. However, these needs often cannot be met with existing airborne LiDAR surveys, whose broad-area coverage comes with trade-offs in point density and update frequency. To address these limitations, this study introduces a deep learning framework built on attention mechanisms, the fundamental building block of modern large language models. The framework upsamples sparse (<22 pt/m2) airborne LiDAR point clouds by fusing them with stacks of multi-temporal optical (NAIP) and L-band quad-polarized Synthetic Aperture Radar (UAVSAR) imagery. Utilizing a novel Local–Global Point Attention Block (LG-PAB), our model directly enhances 3D point-cloud density and accuracy in vegetated landscapes by learning structure directly from the point cloud itself. Results in fire-prone Southern California foothill and montane ecosystems demonstrate that fusing both optical and radar imagery reduces reconstruction error (measured by Chamfer distance) compared to using LiDAR alone or with a single image modality. Notably, the fused model substantially mitigates errors arising from vegetation changes over time, particularly in areas of canopy loss, thereby increasing the utility of historical LiDAR archives. This research presents a novel approach for direct 3D point-cloud enhancement, moving beyond traditional raster-based methods and offering a pathway to more accurate and up-to-date vegetation structure assessments. Full article
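Chamfer distance, the reconstruction error used above, is the symmetric mean nearest-neighbour distance between two point clouds. A brute-force sketch (real evaluations would use a KD-tree; the data here are toy points):

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance: mean nearest-neighbour distance from
    a to b plus mean nearest-neighbour distance from b to a."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    d = np.sqrt(((a[:, None, :] - b[None, :, :]) ** 2).sum(-1))  # (N, M)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

a = np.array([[0, 0, 0], [1, 0, 0]], float)
b = np.array([[0, 0, 0], [1, 0, 0.5]], float)
cd = chamfer_distance(a, b)   # 0.0 + 0.5 averaged in both directions
```

Because it needs no point correspondences, Chamfer distance is a natural loss and metric for comparing an upsampled cloud against a dense reference.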
