Search Results (439)

Search Parameters:
Keywords = point cloud metrics

15 pages, 2498 KB  
Article
Panoramic Image Driven Point Cloud Initialization for 3D Reconstruction
by Haoyu Qian, Lidong Yang, Jing Wang and Muhammad Shahid Anwar
Sensors 2025, 25(22), 6840; https://doi.org/10.3390/s25226840 (registering DOI) - 8 Nov 2025
Abstract
The ability to reconstruct immersive and realistic three-dimensional scenes plays a fundamental role in advancing virtual reality, digital twins, and related fields. With the rapid development of differentiable rendering frameworks, the reconstruction quality of static scenes has been significantly improved. However, the challenge of insufficient initialization has been largely overlooked in existing studies, which at the same time rely heavily on dense multi-view imagery that is difficult to obtain. To address these challenges, we propose a pipeline for text-driven 3D scene generation, which employs panoramic images as an intermediate representation and integrates with 3D Gaussian Splatting to enhance reconstruction quality and efficiency. Our method introduces an improved point cloud initialization using Fibonacci lattice sampling of panoramic images, combined with a dense perspective pseudo-label strategy for teacher–student distillation supervision, enabling more accurate scene geometry and robust feature learning without requiring explicit multi-view ground truth. Extensive experiments validate the effectiveness of our method, which consistently outperforms state-of-the-art methods across standard reconstruction metrics. Full article
(This article belongs to the Section Sensing and Imaging)
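The Fibonacci lattice sampling mentioned in this abstract has a compact generic form: the golden-angle spiral places n nearly uniform points on the unit sphere, which could serve as ray directions for lifting panorama pixels into an initial point cloud. The sketch below is an illustration of that standard construction, not the authors' implementation:

```python
import math

def fibonacci_sphere(n):
    """Place n near-uniform points on the unit sphere via the golden-angle spiral."""
    golden_angle = math.pi * (3.0 - math.sqrt(5.0))
    pts = []
    for i in range(n):
        z = 1.0 - (2.0 * i + 1.0) / n          # midpoints of n equal-height bands
        r = math.sqrt(max(0.0, 1.0 - z * z))   # circle radius at height z
        theta = golden_angle * i               # longitude advances by the golden angle
        pts.append((r * math.cos(theta), r * math.sin(theta), z))
    return pts
```

Each direction would then be paired with the panorama pixel it passes through to seed the splatting optimization.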
32 pages, 4693 KB  
Article
GATF-PCQA: A Graph Attention Transformer Fusion Network for Point Cloud Quality Assessment
by Abdelouahed Laazoufi, Mohammed El Hassouni and Hocine Cherifi
J. Imaging 2025, 11(11), 387; https://doi.org/10.3390/jimaging11110387 - 1 Nov 2025
Viewed by 214
Abstract
Point cloud quality assessment remains a critical challenge due to the high dimensionality and irregular structure of 3D data, as well as the need to align objective predictions with human perception. To solve this, we suggest a novel graph-based learning architecture that integrates perceptual features with advanced graph neural networks. Our method consists of four main stages: First, key perceptual features, including curvature, saliency, and color, are extracted to capture relevant geometric and visual distortions. Second, a graph-based representation of the point cloud is created using these characteristics, where nodes represent perceptual clusters and weighted edges encode their feature similarities, yielding a structured adjacency matrix. Third, a novel Graph Attention Network Transformer Fusion (GATF) module dynamically refines the importance of these features and generates a unified, view-specific representation. Finally, a Graph Convolutional Network (GCN) regresses the fused features into a final quality score. We validate our approach on three benchmark datasets: ICIP2020, WPC, and SJTU-PCQA. Experimental results demonstrate that our method achieves high correlation with human subjective scores, outperforming existing state-of-the-art metrics by effectively modeling the perceptual mechanisms of quality judgment. Full article

40 pages, 11595 KB  
Article
An Automated Workflow for Generating 3D Solids from Indoor Point Clouds in a Cadastral Context
by Zihan Chen, Frédéric Hubert, Christian Larouche, Jacynthe Pouliot and Philippe Girard
ISPRS Int. J. Geo-Inf. 2025, 14(11), 429; https://doi.org/10.3390/ijgi14110429 - 31 Oct 2025
Viewed by 260
Abstract
Accurate volumetric modeling of indoor spaces is essential for emerging 3D cadastral systems, yet existing workflows often rely on manual intervention or produce surface-only models, limiting precision and scalability. This study proposes and validates an integrated, largely automated workflow (named VERTICAL) that converts classified indoor point clouds into topologically consistent 3D solids that serve as inputs for land surveyors' cadastral analysis. The approach sequentially combines RANSAC-based plane detection, polygonal mesh reconstruction, and a mesh optimization stage that merges coplanar faces, repairs non-manifold edges, and regularizes boundaries and planar faces prior to CAD-based solid generation, ensuring closed and geometrically valid solids. These modules are linked through a modular prototype (called P2M) with a web-based interface and parameterized batch processing. The workflow was tested on two condominium datasets representing a range of spatial complexities, from simple orthogonal rooms to irregular interiors with multiple ceiling levels, sloped roofs, and internal columns. Qualitative evaluation ensured visual plausibility, while quantitative assessment against survey-grade reference models measured geometric fidelity. Across eight representative rooms, models meeting qualitative criteria achieved accuracies exceeding 97% for key metrics including surface area, volume, and ceiling geometry, with a height RMSE around 0.01 m. Compared with existing automated modeling solutions, the proposed workflow can handle complex geometries while achieving comparable accuracy. These results demonstrate the workflow's capability to produce topologically consistent solids with high geometric accuracy, supporting both boundary delineation and volume calculation. The modular, interoperable design enables integration with CAD environments, offering a practical pathway toward an automated and reliable core of 3D modeling for cadastral applications.
Full article

22 pages, 6748 KB  
Article
Automated 3D Reconstruction of Interior Structures from Unstructured Point Clouds
by Youssef Hany, Wael Ahmed, Adel Elshazly, Ahmad M. Senousi and Walid Darwish
ISPRS Int. J. Geo-Inf. 2025, 14(11), 428; https://doi.org/10.3390/ijgi14110428 - 31 Oct 2025
Viewed by 649
Abstract
The automatic reconstruction of existing buildings has gained momentum through the integration of Building Information Modeling (BIM) into architecture, engineering, and construction (AEC) workflows. This study presents a hybrid methodology that combines deep learning with surface-based techniques to automate the generation of 3D models and 2D floor plans from unstructured indoor point clouds. The approach begins with point cloud preprocessing using voxel-based downsampling and robust statistical outlier removal. Room partitions are extracted via DBSCAN applied in the 2D space, followed by structural segmentation using the RandLA-Net deep learning model to classify key building components such as walls, floors, ceilings, columns, doors, and windows. To enhance segmentation fidelity, a density-based filtering technique is employed, and RANSAC is utilized to detect and fit planar primitives representing major surfaces. Wall-surface openings such as doors and windows are identified through local histogram analysis and interpolation in wall-aligned coordinate systems. The method supports complex indoor environments including Manhattan and non-Manhattan layouts, variable ceiling heights, and cluttered scenes with occlusions. The approach was validated using six datasets with varying architectural characteristics, and evaluated using completeness, correctness, and accuracy metrics. Results show a minimum completeness of 86.6%, correctness of 84.8%, and a maximum geometric error of 9.6 cm, demonstrating the robustness and generalizability of the proposed pipeline for automated as-built BIM reconstruction. Full article
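The voxel-based downsampling step named in this pipeline has a standard minimal form: hash each point to its voxel index and keep one centroid per occupied voxel. A generic sketch of that idea, not the authors' code:

```python
def voxel_downsample(points, voxel):
    """Keep one representative point (the centroid) per occupied voxel."""
    buckets = {}
    for x, y, z in points:
        key = (int(x // voxel), int(y // voxel), int(z // voxel))
        sx, sy, sz, n = buckets.get(key, (0.0, 0.0, 0.0, 0))
        buckets[key] = (sx + x, sy + y, sz + z, n + 1)
    return [(sx / n, sy / n, sz / n) for sx, sy, sz, n in buckets.values()]
```

In practice a library routine (e.g. Open3D's voxel downsampling) would be used, but the bucketing logic is the same.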

24 pages, 5353 KB  
Article
Comparative Accuracy Assessment of Unmanned and Terrestrial Laser Scanning Systems for Tree Attribute Estimation in an Urban Mediterranean Forest
by Ante Šiljeg, Katarina Kolar, Ivan Marić, Fran Domazetović and Ivan Balenović
Remote Sens. 2025, 17(21), 3557; https://doi.org/10.3390/rs17213557 - 28 Oct 2025
Viewed by 316
Abstract
Urban Mediterranean forests are key components of urban ecosystems. Accurate, high-resolution data on forest structural attributes are essential for effective management. This study evaluates the efficiency of unmanned laser scanning (ULS) and terrestrial LiDAR (TLS) systems in deriving two key tree attributes, diameter at breast height (DBH) and tree height, within a small urban park in Zadar, Croatia. Accuracy assessment of the ULS- and TLS-derived DBH was conducted based on traditional ground-based measurement (TGBM) data. For ULS, an automatic Spatix workflow was applied that classified points into a Tree class, segmented trees using trunk-based logic, and estimated DBH by fitting a circle to a 1.3 m slice; tree height was computed from the ground-normalized cloud with the Output Tree Cells tool. A semi-automatic CloudCompare/ArcMap workflow used CSF ground filtering, Connected Components segmentation, extraction of a 10 cm slice, manual trunk vectorization, and DBH calculation via Minimum Bounding Geometry. TLS scans, processed in FARO SCENE, were then analyzed in Spatix using the same automatic trunk-fitting procedure to derive DBH and height. Accuracy for DBH was evaluated against TGBM; comparative performance was summarized with standard error metrics, while ULS and TLS tree heights were compared using the Concordance Correlation Coefficient (CCC) and Bland–Altman statistics. Results indicate that the semi-automatic approach outperformed the automatic approach in deriving DBH. TLS-derived DBH values demonstrated higher consistency and agreement with TGBM, as evidenced by their strong linear correlation, minimal bias, and narrow residual spread, while ULS exhibited greater variability and systematic deviation. Tree height comparisons between ULS and TLS revealed that ULS consistently produced slightly higher and more uniform measurements.
This study highlights limitations in the evaluated techniques and proposes a hybrid approach combining ULS scanning with personal laser scanning (PLS) systems to enhance data accuracy in urban forest assessments. Full article

21 pages, 5023 KB  
Article
Robust 3D Target Detection Based on LiDAR and Camera Fusion
by Miao Jin, Bing Lu, Gang Liu, Yinglong Diao, Xiwen Chen and Gaoning Nie
Electronics 2025, 14(21), 4186; https://doi.org/10.3390/electronics14214186 - 27 Oct 2025
Viewed by 564
Abstract
Autonomous driving relies on multimodal sensors to acquire environmental information for supporting decision making and control. While significant progress has been made in 3D object detection regarding point cloud processing and multi-sensor fusion, existing methods still suffer from shortcomings—such as sparse point clouds of foreground targets, fusion instability caused by fluctuating sensor data quality, and inadequate modeling of cross-frame temporal consistency in video streams—which severely restrict the practical performance of perception systems. To address these issues, this paper proposes a multimodal video stream 3D object detection framework based on reliability evaluation. Specifically, it dynamically perceives the reliability of each modal feature by evaluating the Region of Interest (RoI) features of cameras and LiDARs, and adaptively adjusts their contribution ratios in the fusion process accordingly. Additionally, a target-level semantic soft matching graph is constructed within the RoI region. Combined with spatial self-attention and temporal cross-attention mechanisms, the spatio-temporal correlations between consecutive frames are fully explored to achieve feature completion and enhancement. Verification on the nuScenes dataset shows that the proposed algorithm achieves an optimal performance of 67.3% and 70.6% in terms of the two core metrics, mAP and NDS, respectively—outperforming existing mainstream 3D object detection algorithms. Ablation experiments confirm that each module plays a crucial role in improving overall performance, and the algorithm exhibits better robustness and generalization in dynamically complex scenarios. Full article
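The reliability-based fusion idea can be sketched as a softmax-weighted sum of per-modality RoI feature vectors; the scalar reliability scores and the simple weighting below are stand-ins for the paper's learned evaluation module, not its actual architecture:

```python
import math

def reliability_fusion(feat_cam, feat_lidar, rel_cam, rel_lidar):
    """Fuse two RoI feature vectors, weighting each modality by a softmax
    over its (here hypothetical) reliability score."""
    wc, wl = math.exp(rel_cam), math.exp(rel_lidar)
    s = wc + wl
    wc, wl = wc / s, wl / s  # normalized contribution ratios
    return [wc * c + wl * l for c, l in zip(feat_cam, feat_lidar)]
```

With equal reliability the fusion degenerates to a plain average; as one modality's score grows, its features dominate.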

29 pages, 13306 KB  
Article
Building Outline Extraction via Topology-Aware Loop Parsing and Parallel Constraint from Airborne LiDAR
by Ke Liu, Hongchao Ma, Li Li, Shixin Huang, Liang Zhang, Xiaoli Liang and Zhan Cai
Remote Sens. 2025, 17(20), 3498; https://doi.org/10.3390/rs17203498 - 21 Oct 2025
Viewed by 348
Abstract
Building outlines are important vector data for various applications, but due to uneven point density and complex building structures, extracting satisfactory building outlines from airborne light detection and ranging point cloud data poses significant challenges. Thus, a building outline extraction method based on topology-aware loop parsing and a parallel constraint is proposed. First, constrained Delaunay triangulation (DT) is used to organize scattered projected building points, and initial boundary points and edges are extracted based on the constrained DT. Subsequently, accurate semantic boundary points are obtained by parsing the topology-aware loops searched from an undirected graph. Building dominant directions are estimated through angle normalization, merging, and perpendicular pairing. Finally, outlines are regularized using the parallel constraint-based method, which simultaneously considers the fitness between the dominant direction and boundary points, and the length of line segments. Experiments on five datasets, including three datasets provided by ISPRS and two datasets with high-density point clouds and complex building structures, verify that the proposed method can extract sequential and semantic boundary points, with over 97.88% correctness. Additionally, the regularized outlines are visually clean, and most line segments are parallel or perpendicular. The RMSE, PoLiS, and RCC metrics are below 0.94 m, 0.84 m, and 0.69 m, respectively. The extracted building outlines can be used for three-dimensional (3D) building reconstruction. Full article
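The PoLiS metric reported above compares an extracted outline against a reference polygon by symmetrically averaging vertex-to-boundary distances. A minimal sketch, assuming simple polygons given as implicitly closed vertex lists:

```python
import math

def _pt_seg(p, a, b):
    """Distance from point p to segment ab."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    l2 = dx * dx + dy * dy
    t = 0.0 if l2 == 0 else max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / l2))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def _pt_poly(p, poly):
    """Distance from point p to the closest edge of a closed polygon."""
    n = len(poly)
    return min(_pt_seg(p, poly[i], poly[(i + 1) % n]) for i in range(n))

def polis(a, b):
    """Symmetric PoLiS distance between polygons a and b (vertex lists)."""
    da = sum(_pt_poly(p, b) for p in a) / (2 * len(a))
    db = sum(_pt_poly(p, a) for p in b) / (2 * len(b))
    return da + db
```

A perfect match gives 0; small rigid offsets between outline and reference produce proportionally small values.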

23 pages, 8417 KB  
Article
A Skewness-Based Density Metric and Deep Learning Framework for Point Cloud Analysis: Detection of Non-Uniform Regions and Boundary Extraction
by Cheng Li, Xianghong Hua, Wenbo Wang and Pengju Tian
Symmetry 2025, 17(10), 1770; https://doi.org/10.3390/sym17101770 - 20 Oct 2025
Viewed by 266
Abstract
This paper redefines point cloud density by utilizing statistical skewness derived from the geometric relationships between points and their local centroids. By comparing with a symmetric uniform reference model, this method can efficiently describe distribution patterns and detect non-uniform regions. Furthermore, a deep learning model trained on these skewness features achieves 85.96% accuracy in automated boundary extraction, significantly reducing omission errors compared to conventional density-based methods. The proposed framework offers an effective solution for automated point cloud segmentation and modeling. Full article
(This article belongs to the Section Computer)
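The skewness statistic underlying this density definition is the third standardized moment of point-to-local-centroid distances; neighbourhoods matching the symmetric uniform reference give a characteristic value, while boundaries and non-uniform regions deviate from it. A minimal sketch of the statistic itself (the paper's reference model and thresholds are not reproduced here):

```python
def local_skewness(distances):
    """Sample skewness g1 of point-to-local-centroid distances.
    Values far from the uniform-reference skewness flag non-uniform
    neighbourhoods such as boundaries."""
    n = len(distances)
    mean = sum(distances) / n
    m2 = sum((d - mean) ** 2 for d in distances) / n  # second central moment
    m3 = sum((d - mean) ** 3 for d in distances) / n  # third central moment
    return 0.0 if m2 == 0 else m3 / m2 ** 1.5
```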

26 pages, 1351 KB  
Review
Trends and Limitations in Transformer-Based BCI Research
by Maximilian Achim Pfeffer, Johnny Kwok Wai Wong and Sai Ho Ling
Appl. Sci. 2025, 15(20), 11150; https://doi.org/10.3390/app152011150 - 17 Oct 2025
Viewed by 647
Abstract
Transformer-based models have accelerated EEG motor imagery (MI) decoding by using self-attention to capture long-range temporal structures while complementing spatial inductive biases. This systematic survey of Scopus-indexed works from 2020 to 2025 indicates that reported advances are concentrated in offline, protocol-heterogeneous settings; inconsistent preprocessing, non-standard data splits, and sparse efficiency reporting frequently cloud claims of generalization and real-time suitability. Under session- and subject-aware evaluation on the BCIC IV 2a/2b dataset, typical performance clusters are in the high-80% range for binary MI and the mid-70% range for multi-class tasks, with gains of roughly 5–10 percentage points achieved by strong hybrids (CNN/TCN–Transformer; hierarchical attention) rather than by extreme figures often driven by leakage-prone protocols. In parallel, transformer-driven denoising—particularly diffusion–transformer hybrids—yields strong signal-level metrics but remains weakly linked to task benefit; denoise → decode validation is rarely standardized despite being the most relevant proxy when artifact-free ground truth is unavailable. Three priorities emerge for translation: protocol discipline (fixed train/test partitions, transparent preprocessing, mandatory reporting of parameters, FLOPs, per-trial latency, and acquisition-to-feedback delay); task relevance (shared denoise → decode benchmarks for MI and related paradigms); and adaptivity at scale (self-supervised pretraining on heterogeneous EEG corpora and resource-aware co-optimization of preprocessing and hybrid transformer topologies). Evidence from subject-adjusting evolutionary pipelines that jointly tune preprocessing, attention depth, and CNN–Transformer fusion demonstrates reproducible inter-subject gains over established baselines under controlled protocols.
Implementing these practices positions transformer-driven BCIs to move beyond inflated offline estimates toward reliable, real-time neurointerfaces with concrete clinical and assistive relevance. Full article
(This article belongs to the Special Issue Brain-Computer Interfaces: Development, Applications, and Challenges)

20 pages, 18957 KB  
Article
Multi-Modal Data Fusion for 3D Object Detection Using Dual-Attention Mechanism
by Mengying Han, Benlan Shen and Jiuhong Ruan
Sensors 2025, 25(20), 6360; https://doi.org/10.3390/s25206360 - 14 Oct 2025
Viewed by 667
Abstract
To address the issue of missing feature information for small objects caused by the sparsity and irregularity of point clouds, as well as the poor detection performance on small objects due to their weak feature representation, this paper proposes a multi-modal 3D object detection method based on an improved PointPillars framework. First, LiDAR point clouds are fused with camera images at the data level, incorporating 2D semantic information to enhance small-object feature representation. Second, a Pillar-wise Channel Attention (PCA) module is introduced to emphasize critical features before converting pillar features into pseudo-image representations. Additionally, a Spatial Attention Module (SAM) is embedded into the backbone network to enhance spatial feature representation. Experiments on the KITTI dataset show that, compared with the baseline PointPillars, the proposed method significantly improves small-object detection performance. Specifically, under the bird’s-eye view (BEV) evaluation metrics, the Average Precision (AP) for pedestrians and cyclists increases by 7.06% and 3.08%, respectively; under the 3D evaluation metrics, these improvements are 4.36% and 2.58%. Compared with existing methods, the improved model also achieves relatively higher accuracy in detecting small objects. Visualization results further demonstrate the enhanced detection capability of the proposed method for small objects with different difficulty levels. Overall, the proposed approach effectively improves 3D object detection performance, particularly for small objects, in complex autonomous driving scenarios. Full article
(This article belongs to the Section Sensing and Imaging)

21 pages, 4537 KB  
Article
A Registration Method for ULS-MLS Data in High-Canopy-Density Forests Based on Feature Deviation Metric
by Houyu Liang, Xiang Zhou, Tingting Lv, Qingwang Liu, Zui Tao and Hongming Zhang
Remote Sens. 2025, 17(20), 3403; https://doi.org/10.3390/rs17203403 - 11 Oct 2025
Viewed by 315
Abstract
The integration of unmanned aerial vehicle-based laser scanning (ULS) and mobile laser scanning (MLS) enables the detection of forest three-dimensional structure in high-density canopy areas and has become an important tool for monitoring and managing forest ecosystems. However, MLS faces difficulties in positioning due to canopy occlusion, making integration challenging. Due to the variations in observation platforms, ULS and MLS point clouds exhibit significant structural discrepancies and limited overlapping areas, necessitating effective methods for feature extraction and correspondence establishment between these features to achieve high-precision registration and integration. Therefore, we propose a registration algorithm that introduces a Feature Deviation Metric to enable feature extraction and correspondence construction for forest point clouds in complex regional environments. The algorithm first extracts surface point clouds using the hidden point algorithm. Then, it applies the proposed dual-threshold method to cluster individual tree features in ULS, using cylindrical detection to construct a Feature Deviation Metric from the feature points and surface point clouds. Finally, an optimization algorithm is employed to match the optimal Feature Deviation Metric for registration. Experiments were conducted in 8 stratified mixed tropical rainforest plots with complex mixed-species canopies in Malaysia and 6 structurally simple, high-canopy-density pure forest plots in northern China. Our algorithm achieved an average RMSE of 0.17 m in the eight tropical rainforest plots with an average canopy density of 0.93, and an RMSE of 0.05 m in the six northern Chinese forest plots with an average canopy density of 0.75, demonstrating high registration capability. Additionally, we conducted comparative and adaptability analyses, and the results indicate that the proposed model exhibits high accuracy, efficiency, and stability in high-canopy-density forest areas.
Moreover, it shows promise for high-precision ULS-MLS registration in a wider range of forest types in the future. Full article

17 pages, 7857 KB  
Article
Frequency-Domain Importance-Based Attack for 3D Point Cloud Object Tracking
by Ang Ma, Anqi Zhang, Likai Wang and Rui Yao
Appl. Sci. 2025, 15(19), 10682; https://doi.org/10.3390/app151910682 - 2 Oct 2025
Viewed by 379
Abstract
3D point cloud object tracking plays a critical role in fields such as autonomous driving and robotics, making the security of these models essential. Adversarial attacks are a key approach for studying the robustness and security of tracking models. However, research on the generalization of adversarial attacks for 3D point-cloud-tracking models is limited, and the frequency-domain information of the point cloud’s geometric structure is often overlooked. This frequency information is closely related to the generalization of 3D point-cloud-tracking models. To address these limitations, this paper proposes a novel adversarial method for 3D point cloud object tracking, utilizing frequency-domain attacks based on the importance of frequency bands. The attack operates in the frequency domain, targeting the low-frequency components of the point cloud within the search area. To make the attack more targeted, the paper introduces a frequency band importance saliency map, which reflects the significance of sub-frequency bands for tracking and uses this importance as attack weights to enhance the attack’s effectiveness. The proposed attack method was evaluated on mainstream 3D point-cloud-tracking models, and the adversarial examples generated from white-box attacks were transferred to other black-box tracking models. Experiments show that the proposed attack method reduces both the average success rate and precision of tracking, proving the effectiveness of the proposed adversarial attack. Furthermore, when the white-box adversarial samples were transferred to the black-box model, the tracking metrics also decreased, verifying the transferability of the attack method. Full article

20 pages, 33056 KB  
Article
Spatiotemporal Analysis of Vineyard Dynamics: UAS-Based Monitoring at the Individual Vine Scale
by Stefan Ruess, Gernot Paulus and Stefan Lang
Remote Sens. 2025, 17(19), 3354; https://doi.org/10.3390/rs17193354 - 2 Oct 2025
Viewed by 441
Abstract
The rapid and reliable acquisition of canopy-related metrics is essential for improving decision support in viticultural management, particularly when monitoring individual vines for targeted interventions. This study presents a spatially explicit workflow that integrates Uncrewed Aerial System (UAS) imagery, 3D point-cloud analysis, and Object-Based Image Analysis (OBIA) to detect and monitor individual grapevines throughout the growing season. Vines are identified directly from 3D point clouds without the need for prior training data or predefined row structures, achieving a mean Euclidean distance of 10.7 cm to the reference points. The OBIA framework segments vine vegetation based on spectral and geometric features without requiring pre-clipping or manual masking. All non-vine elements—including soil, grass, and infrastructure—are automatically excluded, and detailed canopy masks are created for each plant. Vegetation indices are computed exclusively from vine canopy objects, ensuring that soil signals and internal canopy gaps do not bias the results. This enables accurate per-vine assessment of vigour. NDRE values were calculated at three phenological stages—flowering, veraison, and harvest—and analyzed using Local Indicators of Spatial Association (LISA) to detect spatial clusters and outliers. In contrast to value-based clustering methods, LISA accounts for spatial continuity and neighborhood effects, allowing the detection of stable low-vigour zones, expanding high-vigour clusters, and early identification of isolated stressed vines. A strong correlation (R2 = 0.73) between per-vine NDRE values and actual yield demonstrates that NDRE-derived vigour reliably reflects vine productivity. The method provides a transferable, data-driven framework for site-specific vineyard management, enabling timely interventions at the individual plant level before stress propagates spatially. Full article
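The per-vine NDRE aggregation described above reduces to averaging (NIR - RedEdge) / (NIR + RedEdge) over the canopy-mask pixels of one vine; the reflectance inputs in this sketch are illustrative assumptions, and the canopy masking itself is done upstream by the OBIA step:

```python
def per_vine_ndre(pixels):
    """Mean NDRE over the canopy-mask pixels of a single vine.
    pixels: iterable of (nir, red_edge) reflectance pairs; soil and gap
    pixels are assumed to have been excluded by the canopy mask."""
    vals = [(nir - re) / (nir + re) for nir, re in pixels]
    return sum(vals) / len(vals)
```

Restricting the average to masked canopy pixels is what keeps soil signal and internal canopy gaps from biasing the per-vine vigour value.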

21 pages, 11368 KB  
Article
Introducing SLAM-Based Portable Laser Scanning for the Metric Testing of Topographic Databases
by Eleonora Maset, Antonio Matellon, Simone Gubiani, Domenico Visintini and Alberto Beinat
Remote Sens. 2025, 17(19), 3316; https://doi.org/10.3390/rs17193316 - 27 Sep 2025
Viewed by 680
Abstract
The advent of portable laser scanners leveraging Simultaneous Localization and Mapping (SLAM) technology has recently enabled the rapid and efficient acquisition of detailed point clouds of the surrounding environment while maintaining a high degree of accuracy and precision, on the order of a few centimeters. This paper explores the use of SLAM systems in an uncharted application domain, namely the metric testing of a large-scale, three-dimensional topographic database (TDB). Three distinct operational procedures (point-to-cloud, line-to-cloud, and line-to-line) are developed to facilitate a comparison between the vector features of the TDB and the SLAM-based point cloud, which serves as a reference. A comprehensive evaluation carried out on the TDB of the Friuli Venezia Giulia region (Italy) highlights the advantages and limitations of the proposed approaches, demonstrating the potential of SLAM-based surveys to complement, or even supersede, the classical topographic field techniques usually employed for geometric verification operations. Full article
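The point-to-cloud procedure can be sketched as a nearest-neighbour RMSE from TDB vertices to the reference point cloud; brute force here for clarity (a KD-tree would be used at scale), and this summary statistic is an illustrative assumption rather than the paper's exact error model:

```python
import math

def point_to_cloud_rmse(vertices, cloud):
    """RMSE of nearest-neighbour distances from TDB vector vertices to the
    SLAM reference point cloud (brute-force nearest neighbour)."""
    sq = []
    for vx, vy, vz in vertices:
        d2 = min((vx - px) ** 2 + (vy - py) ** 2 + (vz - pz) ** 2
                 for px, py, pz in cloud)
        sq.append(d2)
    return math.sqrt(sum(sq) / len(sq))
```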

20 pages, 14512 KB  
Article
Dual-Attention-Based Block Matching for Dynamic Point Cloud Compression
by Longhua Sun, Yingrui Wang and Qing Zhu
J. Imaging 2025, 11(10), 332; https://doi.org/10.3390/jimaging11100332 - 25 Sep 2025
Viewed by 509
Abstract
The irregular and highly non-uniform spatial distribution inherent to dynamic three-dimensional (3D) point clouds (DPCs) severely hampers the extraction of reliable temporal context, rendering inter-frame compression a formidable challenge. Inspired by two-dimensional (2D) image and video compression methods, existing approaches attempt to model the temporal dependence of DPCs through a motion estimation/motion compensation (ME/MC) framework. However, these approaches represent only preliminary applications of this framework; point consistency between adjacent frames is insufficiently explored, and temporal correlation requires further investigation. To address this limitation, we propose a hierarchical ME/MC framework that adaptively selects the granularity of the estimated motion field, thereby ensuring a fine-grained inter-frame prediction process. To further enhance motion estimation accuracy, we introduce a dual-attention-based KNN block-matching (DA-KBM) network. This network employs a bidirectional attention mechanism to more precisely measure the correlation between points, using closely correlated points to predict inter-frame motion vectors and thereby improve inter-frame prediction accuracy. Experimental results show that the proposed DPC compression method achieves a significant improvement (a gain of 70%) in the BD-Rate metric on the 8iFVBv2 dataset compared with the standardized Video-based Point Cloud Compression (V-PCC) v13 method, and a 16% gain over the state-of-the-art deep learning-based inter-mode method. Full article
(This article belongs to the Special Issue 3D Image Processing: Progress and Challenges)
