Search Results (121)

Search Parameters:
Keywords = SFM-MVS

21 pages, 23093 KB  
Article
Keyframe-Guided Crack Segmentation and 3D Localization for UAV-Based Monocular Inspection
by Feifei Tang, Wuyuntana Gongzhabayier, Jing Li, Tao Zhou, Yue Qiu, Yong Zhan and Qiulin Song
Symmetry 2026, 18(4), 657; https://doi.org/10.3390/sym18040657 - 15 Apr 2026
Viewed by 276
Abstract
In unmanned aerial vehicle (UAV)-based monocular inspection, cracks typically present as geometrically asymmetric, elongated, low-contrast weak targets, making accurate segmentation and spatial localization challenging. Existing methods are susceptible to missed detections and false positives when handling slender cracks, and monocular 3D reconstruction for localization is often burdened by redundant frames, resulting in limited modeling efficiency. To mitigate these issues, we propose a high-precision framework for crack segmentation and spatial localization from UAV imagery. First, Oriented FAST and Rotated BRIEF–Simultaneous Localization and Mapping, version 3 (ORB-SLAM3) is adopted for keyframe selection to suppress data redundancy and improve reconstruction stability. Second, we develop an enhanced YOLOv11-seg model by integrating the Dilation-wise Residual Segmentation (DWRSeg) module, the Weighted IoU (WIoU) loss, and the Lightweight shared convolutional separator batch-normalization detection head (LSCSBD) to strengthen feature discrimination and segmentation robustness for slender cracks, yielding high-quality crack masks. Finally, the predicted masks are projected onto the reconstructed 3D surface to obtain precise spatial localization. Our experimental results demonstrate that the proposed approach improves the segmentation mAP@50 by 7.2% over the baseline while reducing computational complexity from 10.2 to 9.8 GFLOPs. In addition, keyframe-based processing reduces the 3D modeling time by 59.4% compared to that with full-frame reconstruction. Overall, the proposed framework jointly enhances crack segmentation accuracy and substantially accelerates 3D modeling and localization, providing an effective solution for efficient UAV-based crack inspection. Full article
(This article belongs to the Special Issue Symmetry/Asymmetry in Intelligent Transportation)
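The final step of the framework above — projecting predicted crack masks onto the reconstructed 3D surface — amounts to pinhole back-projection of mask pixels. A minimal sketch follows, assuming per-pixel depth, intrinsics `K`, and a camera-to-world keyframe pose are available; all function and parameter names are hypothetical, not the paper's interface:

```python
import numpy as np

def backproject_mask(mask, depth, K, T_wc):
    """Lift 2D mask pixels to 3D world points via the pinhole model.

    mask : (H, W) bool array of crack pixels
    depth: (H, W) metric depth per pixel (e.g. rendered from the MVS surface)
    K    : (3, 3) camera intrinsics
    T_wc : (4, 4) camera-to-world pose (e.g. an ORB-SLAM3 keyframe pose)
    """
    v, u = np.nonzero(mask)                  # pixel coordinates of crack pixels
    z = depth[v, u]
    pix = np.stack([u, v, np.ones_like(u)])  # homogeneous pixels, 3 x N
    rays = np.linalg.inv(K) @ pix            # normalized camera rays
    pts_c = rays * z                         # scale by depth -> camera frame
    pts_h = np.vstack([pts_c, np.ones(pts_c.shape[1])])
    return (T_wc @ pts_h)[:3].T              # N x 3 world points
```

In practice the depth map would come from rendering the reconstructed mesh into each keyframe, so the lifted points land exactly on the modeled surface.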

22 pages, 4917 KB  
Technical Note
Reducing Latency in Digital Twins: A Framework for Near-Real-Time Progress and Quality Reporting
by Zvonko Sigmund, Ivica Završki, Ivan Marović and Kristijan Vilibić
Buildings 2026, 16(7), 1448; https://doi.org/10.3390/buildings16071448 - 6 Apr 2026
Viewed by 573
Abstract
While Digital Twins offer transformative potential, their efficacy for real-time control is constrained by the slow data acquisition and the high computational intensity required to process raw datasets like point clouds. This paper identifies these critical bottlenecks—specifically the latency between data capture and actionable insight—and proposes a refined theoretical framework for near-real-time automated progress monitoring and quality reporting. Building on the findings of the NORMENG project and informing the subsequent AutoGreenTraC project, this research synthesizes state-of-the-art advancements in reality capture, including LIDAR, SfM-MVS, and 360-degree vision. The study highlights a fundamental divergence in stakeholder requirements: the need for millimeter-level precision in quality control versus the demand for high-velocity documentation for progress monitoring. A key innovation presented is the shift toward neural rendering techniques to bypass the computational delays of traditional photogrammetry and enable immediate on-site visualization. By structuring a tiered processing hierarchy that combines lightweight edge analysis for immediate safety and progress monitoring with asynchronous high-fidelity Digital Twin updates, the framework aims to establish a single source of truth. Full article

22 pages, 1482 KB  
Article
A Reproducible Methodology for 3D Tree-Structure Mensuration and Risk-Oriented Decision Support: Integrating SfM–MVS, Field Referencing, and Rule-Based TRAQ/ALARP Logic
by Elias Milios and Kyriaki Kitikidou
Forests 2026, 17(4), 431; https://doi.org/10.3390/f17040431 - 28 Mar 2026
Viewed by 364
Abstract
This manuscript presents a transferable and reproducible methodology for quantitative 3D tree-structure mensuration and transparent, rule-based decision support for tree risk management. The workflow integrates (i) Structure-from-Motion/Multi-View Stereo (SfM–MVS) reconstruction from multi-view imagery, (ii) independent referencing to ensure metric scaling and a consistent local frame, and (iii) point cloud analytics to derive branch-level geometric descriptors (e.g., base diameter, length, inclination, slenderness, and projected reach). A clear rule-based layer operationalizes Tree Risk Assessment Qualification (TRAQ)-style risk components and As Low As Reasonably Practicable (ALARP) principles to map geometry and exposure into auditable management recommendations (e.g., monitoring intervals, pruning/weight reduction, supplemental support, and exclusion-zone planning). To provide a real-data example, the demonstration uses the public Fuji-SfM apple orchard dataset, including three neighboring trees with partially overlapping crowns for tree instance extraction and subsequent TRAQ/ALARP scenarios on an outer tree. The proposed decision layer is intentionally based on external geometry and exposure; internal decay indicators and species-specific mechanical properties (e.g., Modulus of Elasticity (MOE), Modulus of Rupture (MOR)) are outside this demonstration and should be incorporated via complementary diagnostics in operational deployments. Full article
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)
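The rule-based layer described above maps geometric descriptors and exposure to auditable recommendations. A toy sketch of such a mapping is below; the slenderness thresholds and action strings are purely illustrative assumptions, not the paper's calibrated TRAQ/ALARP values:

```python
def recommend(slenderness, target_exposure):
    """Toy rule layer mapping branch geometry and exposure to an action.

    slenderness     : branch length / base diameter (dimensionless)
    target_exposure : occupancy under the branch, one of "low" / "high"
    Thresholds are illustrative placeholders, not the paper's values.
    """
    if slenderness < 30:
        risk = "low"
    elif slenderness < 45:
        risk = "moderate"
    else:
        risk = "high"
    if risk == "high" and target_exposure == "high":
        return "weight reduction + exclusion zone"
    if risk == "high":
        return "pruning / weight reduction"
    if risk == "moderate":
        return "monitor at shortened interval"
    return "routine monitoring"
```

The point of such a layer is auditability: every recommendation traces back to an explicit geometric threshold rather than an opaque score.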

21 pages, 10174 KB  
Article
Event-Scale Quantification of Hillslope Landslide Erosion and Channel Incision During Extreme Rainfall: 2009 Typhoon Morakot
by Yi-Chin Chen
Water 2026, 18(6), 708; https://doi.org/10.3390/w18060708 - 18 Mar 2026
Viewed by 282
Abstract
Extreme rainfall events can trigger widespread landsliding and fluvial erosion, exerting a disproportionate influence on sediment production and landscape evolution in mountainous watersheds. However, hillslope–channel coupling during individual extreme events remains poorly quantified due to the scarcity of event-scale topographic observations. This study investigates event-scale hillslope–channel coupling by quantifying landslide-driven hillslope erosion and channel incision associated with Typhoon Morakot (2009) in the Sinwulu River watershed, southeastern Taiwan. High-resolution pre- and post-event digital surface models (DSMs) were reconstructed using an aerial structure-from-motion multi-view stereo (SfM–MVS) photogrammetry workflow and corrected for canopy height to derive meter-scale topographic changes. Hillslope and channel domains were delineated, and linked hillslope–channel units were used to examine spatial relationships between erosion processes and topographic and hydraulic factors. Results indicate that landslide erosion dominated sediment production during the event with watershed-average erosion of 544.35 mm, while channel responses exhibited strong spatial contrasts, with pronounced incision in upstream reaches and substantial deposition downstream of major knickpoints. Event-scale analysis provides evidence for a strong correspondence between channel incision and hillslope landslide erosion, whereas correlations with commonly used hydraulic proxies such as unit stream power are comparatively weaker. These findings highlight the value of event-scale topographic measurements for elucidating transient hillslope–channel coupling processes during extreme rainfall events. Full article
(This article belongs to the Section Water Erosion and Sediment Transport)
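The core measurement above is a DEM of difference (DoD) between the pre- and post-event surface models. A minimal sketch of deriving mean lowering (in mm, the unit used for the watershed-average erosion figure) and eroded volume is below; the sign convention and exclusion of deposition cells are simplifying assumptions, not the paper's exact processing:

```python
import numpy as np

def erosion_from_dod(dsm_pre, dsm_post, mask, cell_size=1.0):
    """DEM-of-difference erosion over a domain mask.

    dsm_pre / dsm_post : canopy-corrected surface models (m)
    mask               : bool array selecting e.g. the hillslope domain
    Returns (mean lowering in mm, eroded volume in m^3). Deposition
    (positive elevation change) is excluded here for simplicity.
    """
    dod = dsm_post - dsm_pre                   # negative = surface lowering
    lowering = np.where(dod < 0, -dod, 0.0)    # keep erosion only
    mean_mm = 1000.0 * lowering[mask].mean()
    volume = lowering[mask].sum() * cell_size ** 2
    return mean_mm, volume
```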

30 pages, 3812 KB  
Review
Video-Based 3D Reconstruction: A Review of Photogrammetry and Visual SLAM Approaches
by Ali Javadi Moghadam, Abbas Kiani, Reza Naeimaei, Shirin Malihi and Ioannis Brilakis
J. Imaging 2026, 12(3), 128; https://doi.org/10.3390/jimaging12030128 - 13 Mar 2026
Viewed by 1542
Abstract
Three-dimensional (3D) reconstruction using images is one of the most significant topics in computer vision and photogrammetry, with wide-ranging applications in robotics, augmented reality, and mapping. This study investigates methods of 3D reconstruction using video (especially monocular video) data and focuses on techniques such as Structure from Motion (SfM), Multi-View Stereo (MVS), Visual Simultaneous Localization and Mapping (V-SLAM), and videogrammetry. Based on a statistical analysis of SCOPUS records, these methods collectively account for approximately 6863 journal publications up to the end of 2024. Among these, about 80 studies are analyzed in greater detail to identify trends and advancements in the field. The study also shows that the use of video data for real-time 3D reconstruction is commonly addressed through two main approaches: photogrammetry-based methods, which rely on precise geometric principles and offer high accuracy at the cost of greater computational demand; and V-SLAM methods, which emphasize real-time processing and provide higher speed. Furthermore, the application of IMU data and other indicators, such as color quality and keypoint detection, for selecting suitable frames for 3D reconstruction is investigated. Overall, this study compiles and categorizes video-based reconstruction methods, emphasizing the critical step of keyframe extraction. By summarizing and illustrating the general approaches, the study aims to clarify and facilitate the entry path for researchers interested in this area. Finally, the paper offers targeted recommendations for improving keyframe extraction methods to enhance the accuracy and efficiency of real-time video-based 3D reconstruction, while also outlining future research directions in addressing challenges like dynamic scenes, reducing computational costs, and integrating advanced learning-based techniques. Full article
(This article belongs to the Section Computer Vision and Pattern Recognition)
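The keyframe-extraction step that the review emphasizes often ranks candidate frames by a cheap sharpness indicator. A minimal sketch under that assumption — variance of a 4-neighbour Laplacian as a blur proxy, one of several indicators the review mentions — is below; the stride/top-k policy is a hypothetical example, not any surveyed method in particular:

```python
import numpy as np

def sharpness(gray):
    """Variance of a 4-neighbour Laplacian: a cheap blur indicator
    (a sharper frame yields a higher score)."""
    g = gray.astype(float)
    lap = (-4 * g[1:-1, 1:-1] + g[:-2, 1:-1] + g[2:, 1:-1]
           + g[1:-1, :-2] + g[1:-1, 2:])
    return lap.var()

def pick_keyframes(frames, stride=5, top_k=10):
    """Sample every `stride`-th frame, then keep the sharpest `top_k`,
    returned in temporal order."""
    cand = list(range(0, len(frames), stride))
    cand.sort(key=lambda i: sharpness(frames[i]), reverse=True)
    return sorted(cand[:top_k])
```

Real pipelines typically combine such a score with geometric criteria (baseline, tracked-feature overlap) rather than sharpness alone.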

27 pages, 14900 KB  
Article
TreeDGS: Aerial Gaussian Splatting for Distant DBH Measurement
by Belal Shaheen, Minh-Hieu Nguyen, Bach-Thuan Bui, Shubham, Tim Wu, Michael Fairley, Matthew Zane, Michael Wu and James Tompkin
Remote Sens. 2026, 18(6), 867; https://doi.org/10.3390/rs18060867 - 11 Mar 2026
Viewed by 585
Abstract
Aerial remote sensing efficiently surveys large areas, but accurate direct object-level measurement remains difficult in complex natural scenes. Advancements in 3D computer vision, particularly radiance field representations such as NeRF and 3D Gaussian splatting, can improve reconstruction fidelity from posed imagery. Nevertheless, direct aerial measurement of important attributes like tree diameter at breast height (DBH) remains challenging. Trunks in aerial forest scans are distant and sparsely observed in image views; at typical operating altitudes, stems may span only a few pixels. With these constraints, conventional reconstruction methods have inaccurate breast-height trunk geometry. TreeDGS is an aerial image reconstruction method that uses 3D Gaussian splatting as a continuous scene representation for trunk measurement. After SfM–MVS initialization and Gaussian optimization, we extract a dense point set from the Gaussian field using RaDe-GS’s depth-aware cumulative-opacity integration and associate each sample with a multi-view opacity reliability score. Then, we isolate trunk points and estimate DBH using opacity-weighted solid-circle fitting. Evaluated on 10 plots with field-measured DBH, TreeDGS reaches 4.79 cm RMSE (about 2.6 pixels at this GSD) and outperforms a LiDAR baseline (7.66 cm RMSE). This shows that TreeDGS can enable accurate, low-cost aerial DBH measurement. Full article

30 pages, 16905 KB  
Article
Real-Time 2D Orthomosaic Mapping from UAV Video via Feature-Based Image Registration
by Se-Yun Hwang, Seunghoon Oh, Jae-Chul Lee, Soon-Sub Lee and Changsoo Ha
Appl. Sci. 2026, 16(4), 2133; https://doi.org/10.3390/app16042133 - 22 Feb 2026
Viewed by 698
Abstract
This study presents a real-time framework for generating two-dimensional (2D) orthomosaic maps directly from UAV video. The method targets operational scenarios in which a continuously updated 2D overview is required during flight or immediately after landing, without relying on time-consuming offline photogrammetry workflows such as structure-from-motion (SfM) and multi-view stereo (MVS). The proposed procedure incrementally registers sparsely sampled video frames on standard CPU hardware using classical feature-based image registration. Each selected frame is converted to grayscale and processed under a fixed keypoint budget to maintain predictable runtime. Tentative correspondences are obtained through descriptor matching with ratio-test filtering, and outliers are removed using random sample consensus (RANSAC) to ensure geometric consistency. Inter-frame motion is modeled by a planar homography, enabling the mapping process to jointly account for rotation, scale variation, skew, and translation that commonly occur in UAV video due to yaw maneuvers, mild altitude variation, and platform motion. Sequential homographies are accumulated to warp incoming frames into a global mosaic canvas, which is updated incrementally using lightweight blending suitable for real-time visualization. Experimental results on three UAV video sequences with different durations, flight patterns, and scene targets report representative orthomosaic-style outputs and per-step CPU runtime statistics (mean, 95th percentile, and maximum), illustrating typical operating behavior under the tested settings. The framework produces visually coherent orthomosaic-style maps in real time for approximately planar scenes with sufficient overlap and texture, while clarifying practical failure modes under weak texture, motion blur, and strong parallax. Limitations include potential drift over long sequences and the absence of ground-truth references for absolute registration-error evaluation. Full article
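The mosaicking step above chains per-frame homographies into a global canvas transform. A minimal sketch of that accumulation (with the feature matching and RANSAC estimation assumed done upstream) is below; names are illustrative:

```python
import numpy as np

def accumulate(h_list):
    """Chain inter-frame homographies H_{k-1,k} into global transforms
    H_{0,k} that warp each frame into the first frame's mosaic canvas."""
    H = np.eye(3)
    out = []
    for Hk in h_list:
        H = H @ Hk                  # H_{0,k} = H_{0,k-1} @ H_{k-1,k}
        out.append(H / H[2, 2])     # keep homographies normalized
    return out

def warp_point(H, pt):
    """Apply a homography to a single (x, y) pixel."""
    v = H @ np.array([pt[0], pt[1], 1.0])
    return v[:2] / v[2]
```

This sequential composition is also where the drift noted in the abstract's limitations accumulates: each small homography error is multiplied into every subsequent frame's transform.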

20 pages, 3878 KB  
Article
TreeSeg-Net: An End-to-End Instance Segmentation Network for Leaf-Off Forest Point Clouds Using Global Context and Spatial Proximity
by Xingmei Xu, Ruihang Zhang, Shunfu Xiao, Jiayuan Li, Xinyue Zhang, Liying Cao, Helong Yu, Yuntao Ma, Jian Zhang and Xiyang Zhao
Plants 2026, 15(4), 525; https://doi.org/10.3390/plants15040525 - 7 Feb 2026
Viewed by 574
Abstract
Forest ecosystems play a pivotal role in maintaining the balance of the global carbon cycle and conserving biodiversity. High-density point clouds derived from unmanned aerial vehicle (UAV) structure from motion (SfM) and multi-view stereo (MVS) technologies offer a cost-effective solution for data acquisition. These technologies have become efficient tools for facilitating precision forest resource management and extracting individual tree structural parameters. However, in complex forest scenarios during the leaf-off season, canopies exhibit unstructured branch network morphologies due to the absence of leaf occlusion, and adjacent crowns are heavily interlaced. Consequently, existing segmentation methods struggle to overcome challenges associated with fuzzy boundaries and instance adhesion. To address these challenges, this study proposes TreeSeg-Net, an end-to-end instance segmentation network designed to precisely separate individual trees directly from raw point clouds. The network incorporates a global context attention module (GCAM) to capture long-range feature dependencies, thereby compensating for the limitations of sparse convolution in perceiving global information. Simultaneously, a spatial proximity weighting module (SPWM) is designed. By introducing geometric center constraints and a distance penalty mechanism, this module effectively mitigates under-segmentation issues caused by the feature similarity of adjacent branches in high-canopy-density environments. Experimental results demonstrate that TreeSeg-Net achieves an average precision (AP) of 97.2% in instance segmentation tasks and a mean intersection over union (mIoU) of 99.7% in semantic segmentation tasks. Compared to mainstream networks, the proposed method exhibits superior segmentation accuracy, providing an efficient and automated technical solution for precise resource inventory in complex forest environments. Full article

24 pages, 3748 KB  
Article
Automated Recognition of Rock Mass Discontinuities on Vegetated High Slopes Using UAV Photogrammetry and an Improved Superpoint Transformer
by Peng Wan, Xianquan Han, Ruoming Zhai and Xiaoqing Gan
Remote Sens. 2026, 18(2), 357; https://doi.org/10.3390/rs18020357 - 21 Jan 2026
Viewed by 603
Abstract
Automated recognition of rock mass discontinuities in vegetated high-slope terrains remains a challenging task critical to geohazard assessment and slope stability analysis. This study presents an integrated framework combining close-range UAV photogrammetry with an Improved Superpoint Transformer (ISPT) for semantic segmentation and structural characterization. High-resolution UAV imagery was processed using an SfM–MVS photogrammetric workflow to generate dense point clouds, followed by a three-stage filtering workflow comprising cloth simulation filtering, volumetric density analysis, and VDVI-based vegetation discrimination. Feature augmentation using volumetric density and the Visible-Band Difference Vegetation Index (VDVI), together with connected-component segmentation, enhanced robustness under vegetation occlusion. Validation on four vegetated slopes in Buyun Mountain, China, achieved an overall classification accuracy of 89.5%, exceeding CANUPO (78.2%) and the baseline SPT (85.8%), with a 25-fold improvement in computational efficiency. In total, 4918 structural planes were extracted, and their orientations, dip angles, and trace lengths were automatically derived. The proposed ISPT-based framework provides an efficient and reliable approach for high-precision geotechnical characterization in complex, vegetation-covered rock mass environments. Full article

22 pages, 9357 KB  
Article
Intelligent Evaluation of Rice Resistance to White-Backed Planthopper (Sogatella furcifera) Based on 3D Point Clouds and Deep Learning
by Yuxi Zhao, Huilai Zhang, Wei Zeng, Litu Liu, Qing Li, Zhiyong Li and Chunxian Jiang
Agriculture 2026, 16(2), 215; https://doi.org/10.3390/agriculture16020215 - 14 Jan 2026
Viewed by 419
Abstract
Accurate assessment of rice resistance to Sogatella furcifera (Horváth) is essential for breeding insect-resistant cultivars. Traditional assessment methods rely on manual scoring of damage severity, which is subjective and inefficient. To overcome these limitations, this study proposes an automated resistance evaluation approach based on multi-view 3D reconstruction and deep learning–based point cloud segmentation. Multi-view videos of rice materials with different resistance levels were collected over time and processed using Structure from Motion (SfM) and Multi-View Stereo (MVS) to reconstruct high-quality 3D point clouds. A well-annotated “3D Rice WBPH Damage” dataset comprising 174 samples (15 rice materials, three replicates each, 45 pots) was established, where each sample corresponds to a reconstructed 3D point cloud from a video sequence. A comparative study of various point cloud semantic segmentation models, including PointNet, PointNet++, ShellNet, and PointCNN, revealed that the PointNet++ (MSG) model, which employs a Multi-Scale Grouping strategy, demonstrated the best performance in segmenting complex damage symptoms. To further accurately quantify the severity of damage, an adaptive point cloud dimensionality reduction method was proposed, which effectively mitigates the interference of leaf shrinkage on damage assessment. Experimental results demonstrated a strong correlation (R2 = 0.95) between automated and manual evaluations, achieving accuracies of 86.67% and 93.33% at the sample and material levels, respectively. This work provides an objective, efficient, and scalable solution for evaluating rice resistance to S. furcifera, offering promising applications in crop resistance breeding. Full article
(This article belongs to the Section Crop Protection, Diseases, Pests and Weeds)

28 pages, 14408 KB  
Article
Evaluating Neural Radiance Fields for Image-Based 3D Reconstruction: A Comparative Study with SfM-MVS
by Alessia Giaquinto, Giampaolo Ferraioli and Silvio Del Pizzo
Geomatics 2026, 6(1), 4; https://doi.org/10.3390/geomatics6010004 - 10 Jan 2026
Viewed by 1255
Abstract
Recent advances in image-based 3D reconstruction have seen a shift from traditional photogrammetric techniques to learning-based methods, with Neural Radiance Fields (NeRFs) emerging as a powerful alternative. This study evaluates NeRF (via Nerfstudio) for accurate 3D reconstruction, comparing its performance to the widely used SfM-MVS pipeline implemented in Agisoft Metashape Professional (v. 2.2.1). This work considers a diverse set of datasets with varying object scales, capture methods (including drone imagery), and lighting conditions. Several assessment analyses were conducted, including evaluation of accuracy, completeness, planarity, and density of the reconstructed point clouds. Special attention was given to the influence of shadows and surface flatness on the fidelity of reconstruction. Results show that, despite not being initially designed for metric accuracy, NeRF demonstrates promising spatial consistency, producing reconstructions in some cases comparable to those of conventional methods when provided with precise camera poses. These findings suggest that NeRF may serve as a viable tool for 3D modelling in controlled settings. The applicability of the approach to more diverse and challenging scenarios remains to be explored, with particular attention to optimizing the reconstruction pipeline in terms of pose estimation, point cloud density, and robustness to varying lighting conditions. Full article
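The accuracy assessment described above typically reduces to cloud-to-cloud distances between the evaluated reconstruction and a reference. A brute-force nearest-neighbour sketch (fine for small clouds; real comparisons use a KD-tree) is below; this is a generic metric, not the study's exact evaluation protocol:

```python
import numpy as np

def cloud_to_cloud(recon, ref):
    """Nearest-neighbour distance from each reconstructed point to a
    reference cloud; returns (RMSE, mean distance) as accuracy proxies.
    O(N*M) brute force -- suitable only for small clouds."""
    d2 = ((recon[:, None, :] - ref[None, :, :]) ** 2).sum(-1)
    nn = np.sqrt(d2.min(axis=1))
    return np.sqrt((nn ** 2).mean()), nn.mean()
```

Completeness runs the same computation in the other direction (reference to reconstruction), which is why the two are reported separately.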

21 pages, 7707 KB  
Article
Tomato Growth Monitoring and Phenological Analysis Using Deep Learning-Based Instance Segmentation and 3D Point Cloud Reconstruction
by Warut Timprae, Tatsuki Sagawa, Stefan Baar, Satoshi Kondo, Yoshifumi Okada, Kazuhiko Sato, Poltak Sandro Rumahorbo, Yan Lyu, Kyuki Shibuya, Yoshiki Gama, Yoshiki Hatanaka and Shinya Watanabe
Sustainability 2025, 17(22), 10120; https://doi.org/10.3390/su172210120 - 12 Nov 2025
Cited by 2 | Viewed by 1156
Abstract
Accurate and nondestructive monitoring of tomato growth is essential for large-scale greenhouse production; however, it remains challenging for small-fruited cultivars such as cherry tomatoes. Traditional 2D image analysis often fails to capture precise morphological traits, limiting its usefulness in growth modeling and yield estimation. This study proposes an automated phenotyping framework that integrates deep learning-based instance segmentation with high-resolution 3D point cloud reconstruction and ellipsoid fitting to estimate fruit size and ripeness from daily video recordings. These techniques enable accurate camera pose estimation and dense geometric reconstruction (via SfM and MVS), while Nerfacto enhances surface continuity and photorealistic fidelity, resulting in highly precise and visually consistent 3D representations. The reconstructed models are followed by CIELAB color analysis and logistic curve fitting to characterize the growth dynamics. When applied to real greenhouse conditions, the method achieved an average size estimation error of 8.01% compared to manual caliper measurements. During summer, the maximum growth rate (gmax) of size and ripeness were 24.14%, and 95.24% higher than in winter, respectively. Seasonal analysis revealed that winter-grown tomatoes matured approximately 10 days later than summer-grown fruits, highlighting environmental influences on phenological development. By enabling precise, noninvasive tracking of size and ripeness progression, this approach is a novel tool for smart and sustainable agriculture. Full article
(This article belongs to the Special Issue Green Technology and Biological Approaches to Sustainable Agriculture)
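The growth-dynamics step above fits logistic curves and compares maximum growth rates (gmax) across seasons. For a standard logistic curve the peak rate has a closed form, sketched below; the parameter fitting itself (e.g. via nonlinear least squares) is assumed done upstream, and the symbols are the generic logistic parameters, not the paper's fitted values:

```python
import numpy as np

def logistic(t, L, k, t0):
    """Logistic growth curve: S(t) = L / (1 + exp(-k (t - t0)))."""
    return L / (1.0 + np.exp(-k * (t - t0)))

def gmax(L, k):
    """Peak growth rate of the logistic curve, attained at t = t0:
    dS/dt|_{t0} = k * L / 4."""
    return k * L / 4.0
```

A seasonal comparison like the reported "24.14% higher in summer" is then simply `(gmax_summer - gmax_winter) / gmax_winter` on the fitted parameters.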

20 pages, 2797 KB  
Article
Seed 3D Phenotyping Across Multiple Crops Using 3D Gaussian Splatting
by Jun Gao, Chao Zhu, Junguo Hu, Fei Deng, Zhaoxin Xu and Xiaomin Wang
Agriculture 2025, 15(22), 2329; https://doi.org/10.3390/agriculture15222329 - 8 Nov 2025
Viewed by 2473
Abstract
This study introduces a versatile seed 3D reconstruction method that is applicable to multiple crops—including maize, wheat, and rice—and designed to overcome the inefficiency and subjectivity of manual measurements and the high costs of laser-based phenotyping. A panoramic video of the seed is captured and processed through frame sampling to extract multi-view images. Structure-from-Motion (SFM) is employed for sparse reconstruction and camera pose estimation, while 3D Gaussian Splatting (3DGS) is utilized for high-fidelity dense reconstruction, generating detailed point cloud models. The subsequent point cloud preprocessing, filtering, and segmentation enable the extraction of key phenotypic parameters, including length, width, height, surface area, and volume. The experimental evaluations demonstrated a high measurement accuracy, with coefficients of determination (R2) for length, width, and height reaching 0.9361, 0.8889, and 0.946, respectively. Moreover, the reconstructed models exhibit superior image quality, with peak signal-to-noise ratio (PSNR) values consistently ranging from 35 to 37 dB, underscoring the robustness of 3DGS in preserving fine structural details. Compared to conventional multi-view stereo (MVS) techniques, the proposed method can achieve significantly improved reconstruction accuracy and visual fidelity. The key outcomes of this study confirm that the 3DGS-based pipeline provides a highly accurate, efficient, and scalable solution for digital phenotyping, establishing a robust foundation for its application across diverse crop species. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
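Extracting length, width, and height from a segmented seed point cloud, as described above, is commonly done via a PCA-aligned bounding box. A minimal sketch under that assumption (not necessarily the paper's exact measurement procedure) follows:

```python
import numpy as np

def seed_dimensions(points):
    """Length/width/height of a seed point cloud as extents along its
    principal axes (PCA-aligned bounding box), sorted descending so
    that length >= width >= height."""
    c = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(c, full_matrices=False)  # rows = principal axes
    aligned = c @ vt.T                                # rotate into principal frame
    ext = aligned.max(axis=0) - aligned.min(axis=0)
    return np.sort(ext)[::-1]
```

Surface area and volume, by contrast, need a watertight surface (e.g. a mesh or alpha shape) rather than an oriented box, which is why they are usually reported separately.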

31 pages, 19756 KB  
Article
Impact of Climate Change and Other Disasters on Coastal Cultural Heritage: An Example from Greece
by Chryssy Potsiou, Sofia Basiouka, Styliani Verykokou, Denis Istrati, Sofia Soile, Marcos Julien Alexopoulos and Charalabos Ioannidis
Land 2025, 14(10), 2007; https://doi.org/10.3390/land14102007 - 7 Oct 2025
Cited by 1 | Viewed by 2587
Abstract
Protection of coastal cultural heritage is among the most urgent global priorities, as these sites face increasing threats from climate change, sea level rise, and human activity. This study emphasises the value of innovative geospatial tools and data ecosystems for timely risk assessment. Land administration systems, geospatial documentation of coastal cultural heritage sites, and the adoption of innovative techniques that combine various methodologies are crucial for timely action. The coastal management infrastructure in Greece is presented, outlining the key public authorities and national legislation as well as the available land administration and geospatial data ecosystems. We profile the Hellenic Cadastre and the Hellenic Archaeological Cadastre along with open geospatial resources, and introduce the TRIQUETRA Decision Support System (DSS), produced through the EU's Horizon project, and a Digital Twin methodology for hazard identification, quantification, and mitigation. Particular emphasis is given to Digital Twin technology, which acts as a continuously updated virtual replica of coastal cultural heritage sites; it integrates heterogeneous geospatial datasets such as cadastral information, photogrammetric 3D models, climate projections, and hazard simulations, allowing stakeholders to test future scenarios of sea level rise, flooding, and erosion and offering an advanced tool for resilience planning. The approach is validated at the coastal archaeological site of Aegina Kolona, where a UAV-based SfM-MVS survey produced high-resolution photogrammetric outputs, including a dense point cloud exceeding 60 million points, a 5 cm resolution Digital Surface Model, orthomosaics with ground sampling distances of 1 cm and 2.5 cm, and a textured 3D model built from more than 6000 nadir and oblique images.
These products provided the geospatial infrastructure for flood risk assessment under extreme rainfall events, following a multi-scale hydrologic–hydraulic modelling framework. Island-scale simulations using a 5 m Digital Elevation Model (DEM) were coupled with site-scale modelling based on the high-resolution UAV-derived DEM, allowing nested evaluation of water flow, inundation extents, and velocity patterns. This approach revealed spatially variable flood impacts on individual structures, highlighted the sensitivity of the results to watershed delineation and model resolution, and identified critical intervention windows for temporary protection measures. We conclude that integrating land administration systems, open geospatial data, and Digital Twin technology provides a practical pathway to proactive and efficient management, increasing the resilience of coastal heritage against climate change threats. Full article
(This article belongs to the Special Issue Land Modifications and Impacts on Coastal Areas, Second Edition)
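The ground sampling distances quoted for the orthomosaics follow directly from camera and flight geometry; a minimal sketch of the standard relation, with illustrative sensor values rather than those of the actual survey:

```python
def ground_sampling_distance(altitude_m: float, focal_mm: float, pixel_um: float) -> float:
    """GSD in cm/pixel: flight altitude times physical pixel size,
    divided by the lens focal length (pinhole camera model)."""
    altitude_cm = altitude_m * 100.0
    return altitude_cm * (pixel_um * 1e-6) / (focal_mm * 1e-3)

# Illustrative values: 24 mm lens, 4 um pixels, 60 m altitude
# yields roughly 1 cm/pixel
gsd = ground_sampling_distance(60, 24, 4)
```

Halving the altitude halves the GSD, which is why the site-scale products in such surveys are typically flown lower than the island-scale ones.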

25 pages, 1596 KB  
Review
A Survey of 3D Reconstruction: The Evolution from Multi-View Geometry to NeRF and 3DGS
by Shuai Liu, Mengmeng Yang, Tingyan Xing and Ran Yang
Sensors 2025, 25(18), 5748; https://doi.org/10.3390/s25185748 - 15 Sep 2025
Cited by 7 | Viewed by 9380
Abstract
Three-dimensional (3D) reconstruction is a core technology in computer vision and graphics and a key force driving cutting-edge applications such as virtual reality (VR), augmented reality (AR), autonomous driving, and the digital earth. With the rise of novel view synthesis techniques such as Neural Radiance Fields (NeRF) and 3D Gaussian Splatting (3DGS), 3D reconstruction faces unprecedented development opportunities. This article introduces the basic principles of traditional 3D reconstruction methods, including Structure from Motion (SfM) and Multi-View Stereo (MVS), and analyzes their limitations in complex scenes and dynamic environments. Focusing on implicit scene reconstruction with NeRF, it explores the advantages and challenges of using deep neural networks to learn and render high-quality 3D scenes from limited viewpoints. Based on the principles and characteristics of recent 3DGS-related techniques, the latest progress in rendering quality, rendering efficiency, sparse-view input support, and dynamic 3D reconstruction is analyzed. Finally, the main challenges and opportunities facing current 3D reconstruction and novel view synthesis are discussed in depth, along with possible future technological breakthroughs and directions. This survey aims to provide a comprehensive perspective for researchers applying 3D reconstruction in fields such as digital twins and smart cities, while opening new ideas and paths for future innovation and widespread application. Full article
(This article belongs to the Section Sensing and Imaging)
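Rendering quality in NeRF and 3DGS comparisons such as this survey is usually reported as PSNR, a simple function of mean squared error; a minimal sketch of the metric (not code from any of the surveyed works):

```python
import numpy as np

def psnr(reference: np.ndarray, rendered: np.ndarray, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((reference.astype(np.float64) - rendered.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy check: a uniform error of 0.01 on a [0, 1] image gives 40 dB
ref = np.zeros((8, 8))
out = np.full((8, 8), 0.01)
value = psnr(ref, out)
```

Each additional 10 dB corresponds to a tenfold reduction in MSE, so the 35–37 dB range reported for 3DGS reconstructions elsewhere on this page indicates per-pixel errors on the order of 1–2% of full scale.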
