Search Results (74)

Search Parameters:
Keywords = Structure-from-Motion & Multi-View Stereo (SfM-MVS)

25 pages, 1596 KB  
Review
A Survey of 3D Reconstruction: The Evolution from Multi-View Geometry to NeRF and 3DGS
by Shuai Liu, Mengmeng Yang, Tingyan Xing and Ran Yang
Sensors 2025, 25(18), 5748; https://doi.org/10.3390/s25185748 - 15 Sep 2025
Viewed by 3590
Abstract
Three-dimensional (3D) reconstruction is a core technology in computer vision and graphics and a key driver of cutting-edge applications such as virtual reality (VR), augmented reality (AR), autonomous driving, and the digital earth. With the rise of novel view synthesis techniques such as Neural Radiance Fields (NeRF) and 3D Gaussian Splatting (3DGS), 3D reconstruction faces unprecedented development opportunities. This article introduces the basic principles of traditional 3D reconstruction methods, including Structure from Motion (SfM) and Multi-View Stereo (MVS), and analyzes their limitations in complex scenes and dynamic environments. Focusing on implicit scene reconstruction with NeRF, it explores the advantages and challenges of using deep neural networks to learn and render high-quality 3D scenes from limited viewpoints. Building on the principles of the 3DGS-related techniques that have emerged in recent years, it then reviews the latest progress in rendering quality, rendering efficiency, sparse-view input support, and dynamic 3D reconstruction. Finally, the main challenges and opportunities facing current 3D reconstruction and novel view synthesis are discussed in depth, along with possible future breakthroughs and development directions. The article aims to provide a comprehensive perspective for researchers applying 3D reconstruction in fields such as digital twins and smart cities, while opening new ideas and paths for future innovation and widespread application.
(This article belongs to the Section Sensing and Imaging)
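The implicit NeRF pipelines this survey covers all reduce to the same volumetric rendering rule: each sample along a ray contributes its color in proportion to its opacity times the transmittance remaining in front of it. A minimal NumPy sketch of that compositing step (function and variable names are illustrative, not taken from the paper):

```python
import numpy as np

def render_ray(sigmas, colors, deltas):
    """Composite per-sample densities and colors along one ray.

    sigmas: (N,) volume densities at the sample points
    colors: (N, 3) RGB at the sample points
    deltas: (N,) distances between consecutive samples
    """
    # alpha_i = 1 - exp(-sigma_i * delta_i): opacity of sample i
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # T_i = prod_{j<i} (1 - alpha_j): transmittance surviving to sample i
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas  # per-sample contribution; sums to <= 1
    return (weights[:, None] * colors).sum(axis=0), weights
```

An opaque sample behind empty space receives nearly all the weight, which is why these weights are also commonly thresholded to extract depth and point clouds.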

22 pages, 6994 KB  
Article
Dynamic Quantification of PISHA Sandstone Rill Erosion Using the SFM-MVS Method Under Laboratory Rainfall Simulation
by Yuhang Liu, Sui Zhang, Jiwei Wang, Rongyan Gao, Jiaxuan Liu, Siqi Liu, Xuebing Hu, Jianrong Liu and Ruiqiang Bai
Atmosphere 2025, 16(9), 1045; https://doi.org/10.3390/atmos16091045 - 2 Sep 2025
Viewed by 699
Abstract
Soil erosion is a critical ecological challenge in semi-arid regions of China, particularly in the Yellow River Basin, where Pisha sandstone slopes undergo rapid degradation. Rill erosion, driven by rainfall and overland flow, destabilizes slopes and accelerates ecosystem degradation. To address this, we developed a multi-view stereo observation system that integrates Structure-from-Motion (SFM) and multi-view stereo (MVS) for high-precision, dynamic monitoring of rill erosion. Laboratory rainfall simulations were conducted under four inflow rates (2–8 L/min), corresponding to rainfall intensities of 30–120 mm/h. The erosion process was divided into four phases: infiltration and particle rolling, splash and sheet erosion, incipient rill incision, and mature rill networks, with erosion concentrated in the middle and lower slope sections. The SFM-MVS system achieved planimetric and vertical errors of 3.1 mm and 3.7 mm, respectively, providing approximately 25% higher accuracy and nearly 50% faster processing compared with LiDAR and UAV photogrammetry. Infiltration stabilized at approximately 6.2 mm/h under low flows (2 L/min) but declined to less than 4 mm/h under high flows (≥6 L/min), leading to intensified rill incision and coarse-particle transport (up to 21.4% of sediment). These results demonstrate that the SFM-MVS system offers a scalable and non-invasive method for quantifying erosion dynamics, with direct implications for field monitoring, ecological restoration, and soil conservation planning.
(This article belongs to the Special Issue Research About Permafrost–Atmosphere Interactions (2nd Edition))
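Erosion quantities in studies of this kind are typically obtained by differencing successive SfM-MVS DEMs of the same slope. A hedged DEM-of-difference sketch (illustrative, not the authors' code):

```python
import numpy as np

def erosion_volume(dem_before, dem_after, cell_size):
    """Net eroded volume (m^3) from two co-registered DEM grids (m).

    Positive elevation differences mean the surface lowered (erosion);
    deposition cells are ignored here for simplicity.
    """
    diff = dem_before - dem_after
    eroded = np.where(diff > 0, diff, 0.0)
    return eroded.sum() * cell_size ** 2
```

In practice the two DEMs must first be co-registered, and a minimum level of detection (derived from the planimetric/vertical errors reported above) is subtracted before summing.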

28 pages, 4026 KB  
Article
Multi-Trait Phenotypic Analysis and Biomass Estimation of Lettuce Cultivars Based on SFM-MVS
by Tiezhu Li, Yixue Zhang, Lian Hu, Yiqiu Zhao, Zongyao Cai, Tingting Yu and Xiaodong Zhang
Agriculture 2025, 15(15), 1662; https://doi.org/10.3390/agriculture15151662 - 1 Aug 2025
Viewed by 759
Abstract
To address the problems of traditional methods that rely on destructive sampling, the poor adaptability of fixed equipment, and the susceptibility of single-view measurements to occlusion, a non-destructive, portable device for three-dimensional phenotyping and biomass detection in lettuce was developed. Based on Structure-from-Motion Multi-View Stereo (SFM-MVS) algorithms, a high-precision three-dimensional point cloud model was reconstructed from multi-view RGB image sequences, and 12 phenotypic parameters, such as plant height and crown width, were accurately extracted. Regression analyses of plant height, crown width, and crown height yielded R2 values of 0.98, 0.99, and 0.99 and RMSE values of 2.26 mm, 1.74 mm, and 1.69 mm, respectively. On this basis, four biomass prediction models were developed using Adaptive Boosting (AdaBoost), Support Vector Regression (SVR), Gradient Boosting Decision Tree (GBDT), and Random Forest Regression (RFR). The RFR model based on the projected convex hull area, point cloud convex hull surface area, and projected convex hull perimeter performed best, with an R2 of 0.90, an RMSE of 2.63 g, and an RMSEn of 9.53%, indicating that RFR can accurately estimate lettuce biomass. This research achieves three-dimensional reconstruction and accurate biomass prediction of facility lettuce and provides a portable, lightweight solution for facility crop growth detection.
(This article belongs to the Section Crop Production)
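A biomass regression of the kind described above can be sketched with scikit-learn's RandomForestRegressor. The synthetic features below merely stand in for the paper's three convex hull predictors; the coefficients and data are invented for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
# Hypothetical stand-ins for projected convex hull area, point cloud
# convex hull surface area, and projected convex hull perimeter.
X = rng.uniform(0.1, 1.0, size=(60, 3))
# Invented biomass response (g) with mild noise, for demonstration only.
y = 30 * X[:, 0] + 10 * X[:, 1] + 5 * X[:, 2] + rng.normal(0, 0.5, 60)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[:45], y[:45])
pred = model.predict(X[45:])
print("R2:", r2_score(y[45:], pred),
      "RMSE:", np.sqrt(mean_squared_error(y[45:], pred)))
```

The paper's reported R2 of 0.90 and RMSE of 2.63 g came from real point-cloud predictors; this sketch only shows the model-fitting workflow.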

17 pages, 610 KB  
Review
Three-Dimensional Reconstruction Techniques and the Impact of Lighting Conditions on Reconstruction Quality: A Comprehensive Review
by Dimitar Rangelov, Sierd Waanders, Kars Waanders, Maurice van Keulen and Radoslav Miltchev
Lights 2025, 1(1), 1; https://doi.org/10.3390/lights1010001 - 14 Jul 2025
Viewed by 1139
Abstract
Three-dimensional (3D) reconstruction has become a fundamental technology in applications ranging from cultural heritage preservation and robotics to forensics and virtual reality. As these applications grow in complexity and realism, the quality of the reconstructed models becomes increasingly critical. Among the many factors that influence reconstruction accuracy, the lighting conditions at capture time remain one of the most influential, yet most widely neglected, variables. This review provides a comprehensive survey of classical and modern 3D reconstruction techniques, including Structure from Motion (SfM), Multi-View Stereo (MVS), Photometric Stereo, and recent neural rendering approaches such as Neural Radiance Fields (NeRFs) and 3D Gaussian Splatting (3DGS), while critically evaluating their performance under varying illumination conditions. We describe how lighting-induced artifacts such as shadows, reflections, and exposure imbalances compromise reconstruction quality, and how different approaches attempt to mitigate these effects. Furthermore, we identify fundamental gaps in current research, including the lack of standardized lighting-aware benchmarks and the limited robustness of state-of-the-art algorithms in uncontrolled environments. By synthesizing knowledge across fields, this review aims to provide a deeper understanding of the interplay between lighting and reconstruction and to outline future research directions that emphasize the need for adaptive, lighting-robust solutions in 3D vision systems.

31 pages, 99149 KB  
Article
Optimizing Camera Settings and Unmanned Aerial Vehicle Flight Methods for Imagery-Based 3D Reconstruction: Applications in Outcrop and Underground Rock Faces
by Junsu Leem, Seyedahmad Mehrishal, Il-Seok Kang, Dong-Ho Yoon, Yulong Shao, Jae-Joon Song and Jinha Jung
Remote Sens. 2025, 17(11), 1877; https://doi.org/10.3390/rs17111877 - 28 May 2025
Cited by 5 | Viewed by 1652
Abstract
The structure from motion (SfM) and multiview stereo (MVS) techniques have proven effective in generating high-quality 3D point clouds, particularly when integrated with unmanned aerial vehicles (UAVs). However, the impact of image quality—a critical factor for SfM–MVS techniques—has received limited attention. This study proposes a method for optimizing camera settings and UAV flight methods to minimize point cloud errors under illumination and time constraints. The effectiveness of the optimized settings was validated by comparing point clouds generated under these conditions with those obtained using arbitrary settings. The evaluation involved measuring point-to-point error levels for an indoor target and analyzing the standard deviation of cloud-to-mesh (C2M) and multiscale model-to-model cloud comparison (M3C2) distances across six joint planes of a rock mass outcrop in Seoul, Republic of Korea. The results showed that optimal settings improved accuracy without requiring additional lighting or extended survey time. Furthermore, we assessed the performance of SfM–MVS under optimized settings in an underground tunnel in Yeoju-si, Republic of Korea, comparing the resulting 3D models with those generated using Light Detection and Ranging (LiDAR). Despite challenging lighting conditions and time constraints, the results suggest that SfM–MVS with optimized settings has the potential to produce 3D models with higher accuracy and resolution at a lower cost than LiDAR in such environments.

22 pages, 64906 KB  
Article
Comparative Assessment of Neural Radiance Fields and 3D Gaussian Splatting for Point Cloud Generation from UAV Imagery
by Muhammed Enes Atik
Sensors 2025, 25(10), 2995; https://doi.org/10.3390/s25102995 - 9 May 2025
Cited by 1 | Viewed by 3165
Abstract
Point clouds continue to be the main data source in 3D modeling studies with unmanned aerial vehicle (UAV) images. Structure-from-Motion (SfM) and Multi-View Stereo (MVS) have high time costs for point cloud generation, especially on large datasets. For this reason, state-of-the-art methods such as Neural Radiance Fields (NeRF) and 3D Gaussian Splatting (3DGS) have emerged as powerful alternatives for point cloud generation. This paper explores the performance of NeRF and 3DGS methods in generating point clouds from UAV images. For this purpose, the Nerfacto, Instant-NGP, and Splatfacto methods developed in the Nerfstudio framework were used. The obtained point clouds were evaluated against a reference point cloud produced with the photogrammetric method. The effects of image size and iteration number on the performance of the algorithms were investigated in two different study areas. According to the results, Splatfacto demonstrates promising capabilities in addressing challenges related to scene complexity, rendering efficiency, and accuracy in UAV imagery.
(This article belongs to the Special Issue Stereo Vision Sensing and Image Processing)

20 pages, 10100 KB  
Article
A Method for Identifying Picking Points in Safflower Point Clouds Based on an Improved PointNet++ Network
by Baojian Ma, Hao Xia, Yun Ge, He Zhang, Zhenghao Wu, Min Li and Dongyun Wang
Agronomy 2025, 15(5), 1125; https://doi.org/10.3390/agronomy15051125 - 2 May 2025
Cited by 3 | Viewed by 1115
Abstract
To address the challenge of precise picking point localization in morphologically diverse safflower plants, this study proposes PointSafNet, a novel three-stage 3D point cloud analysis framework with distinct architectural and methodological innovations. In Stage I, we introduce a multi-view reconstruction pipeline integrating Structure from Motion (SfM) and Multi-View Stereo (MVS) to generate high-fidelity 3D plant point clouds. Stage II develops a dual-branch architecture employing Star modules for multi-scale hierarchical geometric feature extraction at the organ level (filaments and fruit balls), complemented by a Context-Anchored Attention (CAA) mechanism to capture long-range contextual information. This synergistic feature learning approach addresses morphological variations, achieving 86.83% segmentation accuracy (surpassing PointNet++ by 7.37%) and outperforming conventional point cloud models. Stage III proposes an optimized geometric analysis pipeline combining dual-centroid spatial vectorization with Oriented Bounding Box (OBB)-based proximity analysis, resolving picking coordinate localization across diverse plants with 90% positioning accuracy and 68.82% mean IoU (a 13.71% improvement). The experiments demonstrate that PointSafNet systematically integrates 3D reconstruction, hierarchical feature learning, and geometric reasoning to provide visual guidance for robotic harvesting systems in complex plant canopies. The framework's dual emphasis on architectural innovation and geometric modeling offers a generalizable solution for precision agriculture tasks involving morphologically diverse safflowers.
(This article belongs to the Section Precision and Digital Agriculture)

21 pages, 4483 KB  
Article
DEM Generation Incorporating River Channels in Data-Scarce Contexts: The “Fluvial Domain Method”
by Jairo R. Escobar Villanueva, Jhonny I. Pérez-Montiel and Andrea Gianni Cristoforo Nardini
Hydrology 2025, 12(2), 33; https://doi.org/10.3390/hydrology12020033 - 14 Feb 2025
Cited by 1 | Viewed by 2125
Abstract
This paper presents a novel methodology to generate Digital Elevation Models (DEMs) in flat areas, incorporating river channels from relatively coarse initial data. The technique primarily utilizes filtered dense point clouds derived from SfM-MVS (Structure from Motion-Multi-View Stereo) photogrammetry of available crewed aerial imagery datasets. The methodology operates under the assumption that the aerial survey was carried out during low-flow or drought conditions, so that the dry (or almost dry) riverbed is detected, although imprecisely. Direct interpolation of the detected elevation points yields unacceptable river channel bottom profiles (often exhibiting unrealistic artifacts) and even distorts the floodplain. In our Fluvial Domain Method, channel bottoms are represented as "highways": their (unknown) detailed morphology may be overlooked, but general topographic consistency is gained. For instance, we observed an 11.7% discrepancy in the river channel long profile (with respect to the measured cross-sections) and a 0.38 m RMSE in the floodplain (with respect to GNSS-RTK measurements). Unlike conventional methods that utilize active sensors (satellite and airborne LiDAR) or classic topographic surveys, each with precision, cost, or labor limitations, the proposed approach offers a more accessible, cost-effective, and flexible solution that is particularly well suited to cases with scarce base information and financial resources. However, the method's performance is inherently limited by the quality of the input data and the simplification of complex channel morphologies; it is most suitable for cases where high-resolution geomorphological detail is not critical or where direct data acquisition is not feasible. The resulting DEM, incorporating a generalized channel representation, is well suited for flood hazard modeling. A case study of the Ranchería river delta in the Northern Colombian Caribbean demonstrates the methodology.
(This article belongs to the Special Issue Hydrological Modeling and Sustainable Water Resources Management)

20 pages, 7029 KB  
Article
Three-Dimensional Reconstruction, Phenotypic Traits Extraction, and Yield Estimation of Shiitake Mushrooms Based on Structure from Motion and Multi-View Stereo
by Xingmei Xu, Jiayuan Li, Jing Zhou, Puyu Feng, Helong Yu and Yuntao Ma
Agriculture 2025, 15(3), 298; https://doi.org/10.3390/agriculture15030298 - 30 Jan 2025
Cited by 9 | Viewed by 1810
Abstract
Phenotypic traits of fungi and their automated extraction are crucial for evaluating genetic diversity, breeding new varieties, and estimating yield. However, research on the high-throughput, rapid, and non-destructive extraction of fungal phenotypic traits using 3D point clouds remains limited. In this study, a smartphone was used to capture multi-view images of shiitake mushrooms (Lentinula edodes) from three different heights and angles, and the YOLOv8x model was employed to segment the primary image regions. The segmented images were reconstructed in 3D using Structure from Motion (SfM) and Multi-View Stereo (MVS). To automatically segment individual mushroom instances, we developed a CP-PointNet++ network integrated with clustering methods, achieving an overall accuracy (OA) of 97.45% in segmentation. The computed phenotypic traits correlated strongly with manual measurements, yielding R2 > 0.8 and nRMSE < 0.09 for the pileus transverse and longitudinal diameters, R2 = 0.53 and RMSE = 3.26 mm for pileus height, R2 = 0.79 and nRMSE = 0.12 for stipe diameter, and R2 = 0.65 and RMSE = 4.98 mm for stipe height. Using these parameters, yield estimation was performed with PLSR, SVR, RF, and GRNN machine learning models, with GRNN demonstrating superior performance (R2 = 0.91). This approach is also adaptable to extracting phenotypic traits of other fungi, providing valuable support for fungal breeding initiatives.
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)

25 pages, 14926 KB  
Article
Plant Height Estimation in Corn Fields Based on Column Space Segmentation Algorithm
by Huazhe Zhang, Nian Liu, Juan Xia, Lejun Chen and Shengde Chen
Agriculture 2025, 15(3), 236; https://doi.org/10.3390/agriculture15030236 - 22 Jan 2025
Cited by 2 | Viewed by 1849
Abstract
Plant genomics has progressed significantly due to advances in information technology, but phenotypic measurement technology has not kept pace, hindering plant breeding. As maize is one of China's three main grain crops, accurately measuring plant height is crucial for assessing crop growth and productivity. This study addresses the challenges of plant segmentation and inaccurate plant height extraction in maize populations under field conditions. A three-dimensional dense point cloud was reconstructed using the structure from motion-multi-view stereo (SFM-MVS) method, based on multi-view image sequences captured by an unmanned aerial vehicle (UAV). To improve plant segmentation, we propose a column space approximate segmentation algorithm, which combines the column space method with the enclosing box technique. The proposed method achieved a segmentation accuracy exceeding 90% in dense canopy conditions, significantly outperforming traditional algorithms such as region growing (80%) and Euclidean clustering (75%). Furthermore, the extracted plant heights demonstrated a high correlation with manual measurements, with R2 values ranging from 0.8884 to 0.9989 and RMSE values as low as 0.0148 m. However, the scalability of the method for larger agricultural operations may face challenges due to the computational demands of processing large-scale datasets and potential performance variability under different environmental conditions. Addressing these issues through algorithm optimization, parallel processing, and the integration of additional data sources such as multispectral or LiDAR data could enhance its scalability and robustness. The results demonstrate that the method can accurately reflect the heights of maize plants, providing a reliable solution for large-scale, field-based maize phenotyping, with potential applications in high-throughput monitoring of crop phenotypes and precision agriculture.
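Once plants are segmented from the point cloud, height extraction typically reduces to differencing robust percentiles of the z coordinates rather than raw min/max, which are sensitive to noise points. A hedged sketch (the percentile defaults are illustrative, not the paper's):

```python
import numpy as np

def plant_height(z, ground_pct=1.0, top_pct=99.5):
    """Height of one segmented plant from its point-cloud z values (m).

    Robust percentiles suppress stray ground and canopy outlier points
    that would distort a simple max(z) - min(z) estimate.
    """
    z = np.asarray(z, dtype=float)
    return np.percentile(z, top_pct) - np.percentile(z, ground_pct)
```

With RMSE values near 0.0148 m as reported above, the choice of percentile matters; tighter percentiles trade outlier robustness against underestimating true canopy height.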

21 pages, 11620 KB  
Article
Performance Evaluation and Optimization of 3D Gaussian Splatting in Indoor Scene Generation and Rendering
by Xinjian Fang, Yingdan Zhang, Hao Tan, Chao Liu and Xu Yang
ISPRS Int. J. Geo-Inf. 2025, 14(1), 21; https://doi.org/10.3390/ijgi14010021 - 7 Jan 2025
Cited by 3 | Viewed by 7010
Abstract
This study addresses the prevalent challenges of inefficiency and suboptimal quality in indoor 3D scene generation and rendering by proposing a parameter-tuning strategy for 3D Gaussian Splatting (3DGS). Through a systematic quantitative analysis of various performance indicators under differing resolution conditions, threshold settings for the average magnitude of spatial position gradients, and adjustments to the scaling learning rate, the optimal parameter configuration of the 3DGS model, specifically tailored for indoor modeling scenarios, is determined. First, using a self-collected dataset, a comprehensive comparison was conducted among COLMAP (V3.7) (an open-source package based on Structure from Motion and Multi-View Stereo (SFM-MVS)), Context Capture (V10.2) (abbreviated as CC, a package based on oblique photography algorithms), Neural Radiance Fields (NeRF), and the currently renowned 3DGS algorithm. The key dimensions of focus included the number of images, rendering time, and overall rendering effectiveness. Subsequently, based on this comparison, rigorous qualitative and quantitative evaluations were conducted on the overall performance and detail-processing capabilities of the 3DGS algorithm. Finally, to meet the specific requirements of indoor scene modeling and rendering, targeted parameter tuning was performed on the algorithm. The results demonstrate significant performance improvements in the optimized 3DGS algorithm: the PSNR metric increases by 4.3%, and the SSIM metric improves by 0.2%. The experimental results show that the improved 3DGS algorithm exhibits superior expressive power in indoor scene rendering.

17 pages, 9384 KB  
Article
Multi-Spectral Point Cloud Constructed with Advanced UAV Technique for Anisotropic Reflectance Analysis of Maize Leaves
by Kaiyi Bi, Yifang Niu, Hao Yang, Zheng Niu, Yishuo Hao and Li Wang
Remote Sens. 2025, 17(1), 93; https://doi.org/10.3390/rs17010093 - 30 Dec 2024
Viewed by 1217
Abstract
Reflectance anisotropy in remote sensing images can complicate the interpretation of spectral signatures, and extracting precise structural information from such pixels is a promising approach. Low-altitude unmanned aerial vehicle (UAV) systems can capture imagery at centimeter-level resolution, potentially simplifying the characterization of leaf anisotropic reflectance. We propose a novel maize point cloud generation method that combines an advanced UAV cross-circling oblique (CCO) photography route with the Structure-from-Motion Multi-View Stereo (SfM-MVS) algorithm. A multi-spectral point cloud was then generated by fusing multi-spectral imagery with the point cloud using a DSM-based approach. The Rahman–Pinty–Verstraete (RPV) model was finally applied to establish maize leaf-level anisotropic reflectance models. Our results indicate a high degree of similarity between measured and estimated maize structural parameters (R2 = 0.89 for leaf length and 0.96 for plant height) based on the accurate point cloud data obtained from the CCO route. Most data points clustered around the principal plane due to a constant angle between the sun and view vectors, resulting in a limited range of view azimuths. Leaf reflectance anisotropy was characterized by the RPV model, with R2 ranging from 0.38 to 0.75 across five wavelength bands. These findings hold significant promise for decoupling plant structural information and leaf optical characteristics in remote sensing data.

28 pages, 19500 KB  
Article
Empirical Evaluation and Simulation of GNSS Solutions on UAS-SfM Accuracy for Shoreline Mapping
by José A. Pilartes-Congo, Chase Simpson, Michael J. Starek, Jacob Berryhill, Christopher E. Parrish and Richard K. Slocum
Drones 2024, 8(11), 646; https://doi.org/10.3390/drones8110646 - 6 Nov 2024
Cited by 2 | Viewed by 2345 | Correction
Abstract
Uncrewed aircraft systems (UASs) and structure-from-motion/multi-view stereo (SfM/MVS) photogrammetry are efficient methods for mapping terrain at local geographic scales. Traditionally, indirect georeferencing using ground control points (GCPs) is used to georeference the UAS image locations before further processing in SfM software. However, this is a tedious practice and unsuitable for surveying remote or inaccessible areas. Direct georeferencing is a plausible alternative that requires no GCPs; it relies on global navigation satellite system (GNSS) technology to georeference the UAS image locations. This research combined field experiments and simulation to investigate GNSS-based post-processed kinematic (PPK) positioning as a means to eliminate or reduce reliance on GCPs for shoreline mapping and charting. The study also briefly compared real-time network (RTN) and precise point positioning (PPP) performance for the same purpose. Ancillary experiments evaluated the effects of PPK base station distance and GNSS sample rate on the accuracy of derived 3D point clouds and digital elevation models (DEMs). Vertical root mean square errors (RMSEz), scaled to the 95% confidence interval under an assumption of normally distributed errors, were required to be within 0.5 m to satisfy National Oceanic and Atmospheric Administration (NOAA) requirements for nautical charting. Simulations used a Monte Carlo approach and empirical tests to examine the influence of GNSS performance on the quality of derived 3D point clouds. RTN and PPK results consistently yielded RMSEz values within 10 cm, thus satisfying NOAA requirements for nautical charting. PPP did not meet the accuracy requirements but showed promising results that prompt further investigation. PPK experiments using higher GNSS sample rates did not always provide the best accuracies. GNSS performance and model accuracies were enhanced when using base stations located within 30 km of the survey site. Results without GCPs showed a direct relationship between point cloud accuracy and GNSS performance, with R2 values reaching up to 0.97.
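The RMSEz-to-95% scaling used above follows the standard convention for normally distributed vertical errors: multiply the RMSE by 1.96. As a short sketch:

```python
import numpy as np

def rmse_z_95(errors):
    """Vertical RMSE and its 95% confidence scaling (RMSEz * 1.96),
    assuming zero-mean, normally distributed errors."""
    errors = np.asarray(errors, dtype=float)
    rmse = np.sqrt(np.mean(np.square(errors)))
    return rmse, 1.96 * rmse
```

So an RMSEz of 10 cm, as reported for RTN and PPK, corresponds to roughly 19.6 cm at 95% confidence, comfortably within the 0.5 m NOAA threshold.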

20 pages, 21985 KB  
Article
Aerial SfM–MVS Visualization of Surface Deformation along Folds during the 2024 Noto Peninsula Earthquake (Mw7.5)
by Kazuki Yoshida, Ryo Endo, Junko Iwahashi, Akira Sasagawa and Hiroshi Yarai
Remote Sens. 2024, 16(15), 2813; https://doi.org/10.3390/rs16152813 - 31 Jul 2024
Cited by 3 | Viewed by 2364
Abstract
This study aimed to map and analyze the spatial pattern of surface deformation associated with the 2024 Noto Peninsula earthquake (Mw7.5) using structure-from-motion/multi-view-stereo (SfM–MVS), an advanced photogrammetric technique. The analysis was conducted using digital aerial photographs with a ground pixel dimension of 0.2 m, captured the day after the earthquake. Horizontal locations of GCPs were determined using pre-earthquake data to remove the wide-area horizontal crustal deformation component. The elevations of the GCPs were corrected by incorporating quasi-vertical values derived from a 2.5-dimensional analysis of synthetic aperture radar (SAR) results. In the synclinorium structure area, where no active fault had previously been identified, we observed a 5 km long uplift zone (0.1 to 0.2 km in width), along with multiple scarps that reached a maximum height of 2.2 m. The area and shape of the surface deformation suggest that the induced uplift and surrounding landslides were related to fold structures and their growth. Our study thus shows the efficacy of SfM–MVS in accurately mapping earthquake-induced deformations, providing crucial data for understanding seismic activity and informing disaster-response strategies.

13 pages, 3604 KB  
Article
A Super-Resolution and 3D Reconstruction Method Based on OmDF Endoscopic Images
by Fujia Sun and Wenxuan Song
Sensors 2024, 24(15), 4890; https://doi.org/10.3390/s24154890 - 27 Jul 2024
Viewed by 2528
Abstract
In the field of endoscopic imaging, challenges such as low resolution, complex textures, and blurred edges often degrade the quality of 3D reconstructed models. To address these issues, this study introduces an innovative endoscopic image super-resolution and 3D reconstruction technique named Omni-Directional Focus and Scale Resolution (OmDF-SR). This method integrates an Omnidirectional Self-Attention (OSA) mechanism, an Omnidirectional Scale Aggregation Group (OSAG), a Dual-stream Adaptive Focus Mechanism (DAFM), and a Dynamic Edge Adjustment Framework (DEAF) to enhance the accuracy and efficiency of super-resolution processing. Additionally, it employs Structure from Motion (SfM) and Multi-View Stereo (MVS) technologies to achieve high-precision medical 3D models. Experimental results indicate significant improvements in image processing with a PSNR of 38.2902 dB and an SSIM of 0.9746 at a magnification factor of ×2, and a PSNR of 32.1723 dB and an SSIM of 0.9489 at ×4. Furthermore, the method excels in reconstructing detailed 3D models, enhancing point cloud density, mesh quality, and texture mapping richness, thus providing substantial support for clinical diagnosis and surgical planning.
(This article belongs to the Section Sensing and Imaging)
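The PSNR figures quoted in super-resolution results like these follow the standard definition, 10 log10(MAX^2 / MSE). A minimal NumPy version for reference:

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio (dB) between two same-shaped images."""
    ref = np.asarray(ref, dtype=np.float64)
    test = np.asarray(test, dtype=np.float64)
    mse = np.mean((ref - test) ** 2)  # assumes the images are not identical
    return 10.0 * np.log10(max_val ** 2 / mse)
```

For 8-bit images, a per-pixel error of 1 gray level gives about 48.1 dB, which puts the paper's 38.3 dB (×2) and 32.2 dB (×4) results in perspective.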
