Search Results (104)

Search Parameters:
Keywords = multi-view photogrammetry

21 pages, 10174 KB  
Article
Event-Scale Quantification of Hillslope Landslide Erosion and Channel Incision During Extreme Rainfall: 2009 Typhoon Morakot
by Yi-Chin Chen
Water 2026, 18(6), 708; https://doi.org/10.3390/w18060708 - 18 Mar 2026
Viewed by 154
Abstract
Extreme rainfall events can trigger widespread landsliding and fluvial erosion, exerting a disproportionate influence on sediment production and landscape evolution in mountainous watersheds. However, hillslope–channel coupling during individual extreme events remains poorly quantified due to the scarcity of event-scale topographic observations. This study investigates event-scale hillslope–channel coupling by quantifying landslide-driven hillslope erosion and channel incision associated with Typhoon Morakot (2009) in the Sinwulu River watershed, southeastern Taiwan. High-resolution pre- and post-event digital surface models (DSMs) were reconstructed using an aerial structure-from-motion multi-view stereo (SfM–MVS) photogrammetry workflow and corrected for canopy height to derive meter-scale topographic changes. Hillslope and channel domains were delineated, and linked hillslope–channel units were used to examine spatial relationships between erosion processes and topographic and hydraulic factors. Results indicate that landslide erosion dominated sediment production during the event with watershed-average erosion of 544.35 mm, while channel responses exhibited strong spatial contrasts, with pronounced incision in upstream reaches and substantial deposition downstream of major knickpoints. Event-scale analysis provides evidence for a strong correspondence between channel incision and hillslope landslide erosion, whereas correlations with commonly used hydraulic proxies such as unit stream power are comparatively weaker. These findings highlight the value of event-scale topographic measurements for elucidating transient hillslope–channel coupling processes during extreme rainfall events. Full article
(This article belongs to the Section Water Erosion and Sediment Transport)
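The erosion figures above come from differencing pre- and post-event DSMs to form a DEM of Difference (DoD). A minimal numpy sketch of that step, using invented elevation values rather than the study's data:

```python
import numpy as np

# Hypothetical pre- and post-event DSM tiles (elevations in metres on a
# regular grid); the values are illustrative, not from the study.
dsm_pre = np.array([[120.0, 121.0],
                    [119.5, 120.5]])
dsm_post = np.array([[119.2, 120.9],
                     [119.5, 121.1]])

# DEM of Difference: negative cells indicate erosion, positive deposition.
dod = dsm_post - dsm_pre

# Mean erosion depth over eroding cells, in millimetres (the same unit
# as the watershed-average erosion quoted in the abstract).
erosion_mm = float(-dod[dod < 0].mean() * 1000.0)
print(round(erosion_mm, 1))  # 450.0
```

In practice the two DSMs would first be co-registered and corrected for canopy height, as the paper describes, before differencing.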

30 pages, 3812 KB  
Review
Video-Based 3D Reconstruction: A Review of Photogrammetry and Visual SLAM Approaches
by Ali Javadi Moghadam, Abbas Kiani, Reza Naeimaei, Shirin Malihi and Ioannis Brilakis
J. Imaging 2026, 12(3), 128; https://doi.org/10.3390/jimaging12030128 - 13 Mar 2026
Viewed by 453
Abstract
Three-dimensional (3D) reconstruction using images is one of the most significant topics in computer vision and photogrammetry, with wide-ranging applications in robotics, augmented reality, and mapping. This study investigates methods of 3D reconstruction using video (especially monocular video) data and focuses on techniques such as Structure from Motion (SfM), Multi-View Stereo (MVS), Visual Simultaneous Localization and Mapping (V-SLAM), and videogrammetry. Based on a statistical analysis of SCOPUS records, these methods collectively account for approximately 6863 journal publications up to the end of 2024. Among these, about 80 studies are analyzed in greater detail to identify trends and advancements in the field. The study also shows that the use of video data for real-time 3D reconstruction is commonly addressed through two main approaches: photogrammetry-based methods, which rely on precise geometric principles and offer high accuracy at the cost of greater computational demand; and V-SLAM methods, which emphasize real-time processing and provide higher speed. Furthermore, the application of IMU data and other indicators, such as color quality and keypoint detection, for selecting suitable frames for 3D reconstruction is investigated. Overall, this study compiles and categorizes video-based reconstruction methods, emphasizing the critical step of keyframe extraction. By summarizing and illustrating the general approaches, the study aims to clarify and facilitate the entry path for researchers interested in this area. Finally, the paper offers targeted recommendations for improving keyframe extraction methods to enhance the accuracy and efficiency of real-time video-based 3D reconstruction, while also outlining future research directions in addressing challenges like dynamic scenes, reducing computational costs, and integrating advanced learning-based techniques. Full article
(This article belongs to the Section Computer Vision and Pattern Recognition)
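The review's keyframe-extraction theme can be illustrated with a simple sharpness score: ranking video frames by the variance of their Laplacian response is a common way to discard blurred frames before reconstruction. A numpy-only sketch (a simplified stand-in for library routines such as cv2.Laplacian; the frames are synthetic):

```python
import numpy as np

def laplacian_variance(gray: np.ndarray) -> float:
    """Variance of a 4-neighbour Laplacian response; higher = sharper.
    A common blur score when ranking frames for keyframe extraction."""
    g = gray.astype(float)
    lap = (-4.0 * g[1:-1, 1:-1]
           + g[:-2, 1:-1] + g[2:, 1:-1]
           + g[1:-1, :-2] + g[1:-1, 2:])
    return float(lap.var())

# Illustrative frames: a high-contrast checkerboard vs. a flat patch.
sharp = np.indices((8, 8)).sum(axis=0) % 2 * 255
blurry = np.full((8, 8), 128)
print(laplacian_variance(sharp) > laplacian_variance(blurry))  # True
```

A real keyframe selector would combine such a score with overlap and parallax criteria, as the review discusses.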

21 pages, 4699 KB  
Article
Automated Dimensional Measurement of Large-Scale Prefabricated Components Based on UAV Multi-View Images and Improved 3D Gaussian Splatting
by Zihan Xu and Dejiang Wang
Buildings 2026, 16(5), 1054; https://doi.org/10.3390/buildings16051054 - 6 Mar 2026
Viewed by 202
Abstract
The geometric dimensional accuracy of large-scale prefabricated components is critical for the successful implementation of prefabricated construction. However, traditional manual contact-based inspection methods are inefficient and are often simplified or even neglected in practice due to operational difficulties. To address this challenge, this study proposes an automated non-contact dimensional inspection system based on UAV photogrammetry. The system consists of three core modules: First, the 3D Model Generation Module utilizes UAV-captured multi-view imagery to rapidly reconstruct high-fidelity 3D models of construction sites using improved 3D Gaussian Splatting technology, while recovering true physical scales by integrating GPS metadata. Second, the Segmentation Module extracts target components from complex backgrounds through flexible target selection and achieves automated planar segmentation using the Region Growing algorithm. Finally, the Dimensional Inspection Module accurately calculates geometric dimensions using a self-developed “Measurement Tree” algorithm. Engineering validation demonstrates that the system achieves an average relative error of only 0.35% in the inspection of prefabricated bent caps, exhibiting excellent measurement accuracy and robustness. This study provides an efficient, precise, and intelligent solution for the quality control of prefabricated components, effectively bridging the gaps inherent in traditional inspection methods. Full article
(This article belongs to the Section Construction Management, and Computers & Digitization)
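Recovering true physical scale from GPS metadata, as described for the 3D Model Generation Module, reduces to comparing camera-to-camera baselines in model space with the same baselines in a metric GPS-derived frame. A sketch with made-up coordinates (not the system's actual API):

```python
import numpy as np

# Hypothetical camera centres: SfM model space (arbitrary units) vs.
# GPS-derived positions projected into a local metric frame (metres).
model_cams = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0], [2.0, 1.5, 0.0]])
gps_cams = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [10.0, 7.5, 0.0]])

# Recover a global scale as the mean ratio of pairwise baselines.
pairs = [(0, 1), (0, 2), (1, 2)]
ratios = [np.linalg.norm(gps_cams[i] - gps_cams[j])
          / np.linalg.norm(model_cams[i] - model_cams[j]) for i, j in pairs]
scale = float(np.mean(ratios))

# A component edge measured as 1.2 model units, converted to metres:
print(scale, round(1.2 * scale, 2))  # 5.0 6.0
```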

30 pages, 16905 KB  
Article
Real-Time 2D Orthomosaic Mapping from UAV Video via Feature-Based Image Registration
by Se-Yun Hwang, Seunghoon Oh, Jae-Chul Lee, Soon-Sub Lee and Changsoo Ha
Appl. Sci. 2026, 16(4), 2133; https://doi.org/10.3390/app16042133 - 22 Feb 2026
Viewed by 407
Abstract
This study presents a real-time framework for generating two-dimensional (2D) orthomosaic maps directly from UAV video. The method targets operational scenarios in which a continuously updated 2D overview is required during flight or immediately after landing, without relying on time-consuming offline photogrammetry workflows such as structure-from-motion (SfM) and multi-view stereo (MVS). The proposed procedure incrementally registers sparsely sampled video frames on standard CPU hardware using classical feature-based image registration. Each selected frame is converted to grayscale and processed under a fixed keypoint budget to maintain predictable runtime. Tentative correspondences are obtained through descriptor matching with ratio-test filtering, and outliers are removed using random sample consensus (RANSAC) to ensure geometric consistency. Inter-frame motion is modeled by a planar homography, enabling the mapping process to jointly account for rotation, scale variation, skew, and translation that commonly occur in UAV video due to yaw maneuvers, mild altitude variation, and platform motion. Sequential homographies are accumulated to warp incoming frames into a global mosaic canvas, which is updated incrementally using lightweight blending suitable for real-time visualization. Experimental results on three UAV video sequences with different durations, flight patterns, and scene targets report representative orthomosaic-style outputs and per-step CPU runtime statistics (mean, 95th percentile, and maximum), illustrating typical operating behavior under the tested settings. The framework produces visually coherent orthomosaic-style maps in real time for approximately planar scenes with sufficient overlap and texture, while clarifying practical failure modes under weak texture, motion blur, and strong parallax. Limitations include potential drift over long sequences and the absence of ground-truth references for absolute registration-error evaluation. Full article
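The mosaicking step described above, accumulating sequential inter-frame homographies into a global transform, can be sketched as follows. The per-frame motions here are illustrative pure translations; in the actual pipeline each homography would come from RANSAC-filtered feature matches:

```python
import numpy as np

def chain(h_global: np.ndarray, h_step: np.ndarray) -> np.ndarray:
    """Accumulate an inter-frame homography into the global mosaic
    transform and renormalise so h[2, 2] == 1."""
    h = h_global @ h_step
    return h / h[2, 2]

def warp_point(h: np.ndarray, xy: tuple) -> tuple:
    """Apply a homography to a 2D point (homogeneous divide included)."""
    x, y, w = h @ np.array([xy[0], xy[1], 1.0])
    return (float(x / w), float(y / w))

# Illustrative per-frame motions: pure translations of (5, 0) pixels.
step = np.array([[1.0, 0.0, 5.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
h = np.eye(3)
for _ in range(3):          # three frames after the reference frame
    h = chain(h, step)
print(warp_point(h, (0.0, 0.0)))  # (15.0, 0.0)
```

Each incoming frame is warped into the mosaic canvas with its accumulated transform; drift in this chain over long sequences is exactly the limitation the abstract notes.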

22 pages, 3790 KB  
Article
Smartphone-Based Automated Photogrammetry for Reconstruction of Residual Limb Models in Prosthetic Design
by Lander De Waele, Jolien Gooijers and Dante Mantini
Sensors 2026, 26(4), 1251; https://doi.org/10.3390/s26041251 - 14 Feb 2026
Viewed by 343
Abstract
Accurate modeling of residual limb geometry is essential for prosthetic socket design, yet current scanning techniques can be costly, operator-dependent, or impractical for repeated clinical use. This study presents a fully automated, low-cost photogrammetry workflow capable of generating metrically accurate 3D models of lower-limb residual limbs using video and still images acquired with a standard smartphone or a full-frame digital camera. The pipeline integrates adaptive frame selection, deep learning-based background removal, robust metric scaling via ArUco markers, and open-source Structure-from-Motion and Multi-View Stereo reconstruction, requiring no manual post-processing or proprietary software. Accuracy and repeatability were evaluated using four 3D-printed limb phantoms and high-resolution CT-derived meshes as ground truth. Smartphone video and full-frame camera acquisitions achieved sub-millimeter surface accuracy, volume and perimeter errors within ±1%, and high inter-session repeatability, all within clinically accepted thresholds for prosthetic socket fabrication. In contrast, smartphone still-photo reconstructions showed larger deviations and reduced stability. Acquisition time was under five minutes, and complete reconstruction required approximately 1 h and 30 min. These results demonstrate that smartphone video-based photogrammetry provides a practical, scalable, and clinically viable alternative for residual limb modeling, particularly in resource-constrained or remote care settings. Full article
(This article belongs to the Special Issue Sensors for Object Detection, Pose Estimation, and 3D Reconstruction)
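Metric scaling via ArUco markers works because the marker's printed edge length is known: the ratio of the physical edge to the reconstructed edge gives metres per model unit. A sketch with hypothetical corner coordinates (the marker size and measurements are invented):

```python
import numpy as np

# Known physical edge length of the printed ArUco marker (metres).
MARKER_EDGE_M = 0.05

# Reconstructed marker corners in (unitless) SfM model coordinates —
# illustrative values, not from the study.
corners = np.array([[0.00, 0.00, 0.0], [0.40, 0.00, 0.0],
                    [0.40, 0.40, 0.0], [0.00, 0.40, 0.0]])

# Mean reconstructed edge length over the four edges of the square.
edges = np.linalg.norm(np.roll(corners, -1, axis=0) - corners, axis=1)
scale = float(MARKER_EDGE_M / edges.mean())   # metres per model unit

# A limb circumference measured as 2.5 model units, in millimetres:
print(round(2.5 * scale * 1000.0, 1))  # 312.5
```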

23 pages, 5292 KB  
Article
Research on Rapid 3D Model Reconstruction Based on 3D Gaussian Splatting for Power Scenarios
by Huanruo Qi, Yi Zhou, Chen Chen, Lu Zhang, Peipei He, Xiangyang Yan and Mengqi Zhai
Sustainability 2026, 18(2), 726; https://doi.org/10.3390/su18020726 - 10 Jan 2026
Viewed by 811
Abstract
As core infrastructure of power transmission networks, power towers require high-precision 3D models, which are critical for intelligent inspection and digital twin applications of power transmission lines. Traditional reconstruction methods, such as LiDAR scanning and oblique photogrammetry, suffer from issues including high operational risks, low modeling efficiency, and loss of fine details. To address these limitations, this paper proposes a 3D Gaussian Splatting (3DGS)-based method for power tower 3D reconstruction to enhance reconstruction efficiency and detail preservation capability. First, a multi-view data acquisition scheme combining “unmanned aerial vehicle + oblique photogrammetry” was designed to capture RGB images acquired by Unmanned Aerial Vehicle (UAV) platforms, which are used as the primary input for 3D reconstruction. Second, a sparse point cloud was generated via Structure from Motion. Finally, based on 3DGS, Gaussian model initialization, differentiable rendering, and adaptive density control were performed to produce high-precision 3D models of power towers. Taking two typical power tower types as experimental subjects, comparisons were made with the oblique photogrammetry + ContextCapture method. Experimental results demonstrate that 3DGS not only achieves high model completeness (with the reconstructed model nearly indistinguishable from the original images) but also excels in preserving fine details such as angle steels and cables. Additionally, the final modeling time is reduced by over 70% compared to traditional oblique photogrammetry. 3DGS enables efficient and high-precision reconstruction of power tower 3D models, providing a reliable technical foundation for digital twin applications in power transmission lines. By significantly improving reconstruction efficiency and reducing operational costs, the proposed method supports sustainable power infrastructure inspection, asset lifecycle management, and energy-efficient digital twin applications. Full article

17 pages, 4360 KB  
Article
3D Gaussian Splatting in Geosciences: A Novel High-Fidelity Approach for Digitizing Geoheritage from Minerals to Immersive Virtual Tours
by Andrei Ionuţ Apopei
Geosciences 2025, 15(10), 373; https://doi.org/10.3390/geosciences15100373 - 24 Sep 2025
Cited by 1 | Viewed by 3796
Abstract
The digitization of geological heritage is essential for geoconservation, research, and education, yet traditional 3D methods like photogrammetry struggle to accurately capture specimens with complex optical properties. This paper evaluates 3D Gaussian Splatting (3DGS) as a high-fidelity alternative through a multi-scale comparative study, digitizing landscape-scale outcrops with UAVs, architectural-scale museum interiors with smartphones, and specimen-level minerals with complex lusters and transparency. The results demonstrate that 3DGS provides unprecedented realism, successfully capturing view-dependent phenomena such as the labradorescence of feldspar and the translucency of fluorite, which are poorly represented by photogrammetric textured meshes. Furthermore, the 3DGS workflow is significantly faster and eliminates the need for manual post-processing and texture painting. By enabling the creation of authentic digital twins and immersive virtual tours, 3DGS represents a transformative technology for the field. It offers powerful new avenues for enhancing public engagement and creating accessible, high-fidelity digital archives for geoeducation and geotourism. Full article
(This article belongs to the Special Issue Challenges and Research Trends of Geoheritage and Geoconservation)

21 pages, 4674 KB  
Article
CLCFM3: A 3D Reconstruction Algorithm Based on Photogrammetry for High-Precision Whole Plant Sensing Using All-Around Images
by Atsushi Hayashi, Nobuo Kochi, Kunihiro Kodama, Sachiko Isobe and Takanari Tanabata
Sensors 2025, 25(18), 5829; https://doi.org/10.3390/s25185829 - 18 Sep 2025
Cited by 1 | Viewed by 1114
Abstract
This research aims to develop a novel technique to acquire a large amount of high-density, high-precision 3D point cloud data for plant phenotyping using photogrammetry technology. The complexity of plant structures, characterized by overlapping thin parts such as leaves and stems, makes it difficult to reconstruct accurate 3D point clouds. One challenge in this regard is occlusion, where points in the 3D point cloud cannot be obtained due to overlapping parts, preventing accurate point capture. Another is the generation of erroneous points in non-existent locations due to image-matching errors along object outlines. To overcome these challenges, we propose a 3D point cloud reconstruction method named closed-loop coarse-to-fine method with multi-masked matching (CLCFM3). This method repeatedly executes a process that generates point clouds locally to suppress occlusion (multi-matching) and a process that removes noise points using a mask image (masked matching). Furthermore, we propose the closed-loop coarse-to-fine method (CLCFM) to improve the accuracy of structure from motion, which is essential for implementing the proposed point cloud reconstruction method. CLCFM solves loop closure by performing coarse-to-fine camera position estimation. By facilitating the acquisition of high-density, high-precision 3D data on a large number of plant bodies, as is necessary for research activities, this approach is expected to enable comparative analysis of visible phenotypes in the growth process of a wide range of plant species based on 3D information. Full article
(This article belongs to the Section Remote Sensors)
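The "masked matching" idea — rejecting reconstructed points whose image projections fall outside a plant mask — can be sketched in a few lines. This is a simplified, single-view version of the paper's noise-removal step, with invented coordinates:

```python
import numpy as np

def masked_filter(points_uv, points_xyz, mask):
    """Keep only 3D points whose image projection lands on the plant
    mask (True = plant). A single-view simplification of the paper's
    masked-matching noise removal."""
    u = points_uv[:, 0].round().astype(int)
    v = points_uv[:, 1].round().astype(int)
    inside = (u >= 0) & (u < mask.shape[1]) & (v >= 0) & (v < mask.shape[0])
    keep = np.zeros(len(points_xyz), dtype=bool)
    keep[inside] = mask[v[inside], u[inside]]
    return points_xyz[keep]

# 3x4 mask in which only image column 2 is plant.
mask = np.zeros((3, 4), dtype=bool)
mask[:, 2] = True
uv = np.array([[2.0, 1.0], [0.0, 0.0], [3.9, 2.2]])   # pixel coords (u, v)
xyz = np.array([[0.1, 0.2, 0.3], [9.0, 9.0, 9.0], [5.0, 5.0, 5.0]])
kept = masked_filter(uv, xyz, mask)
print(kept)  # only the point projecting onto the mask survives
```

The full method applies this test across many views, so a noise point along an object outline is rejected as soon as any view's mask disagrees with it.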

22 pages, 6994 KB  
Article
Dynamic Quantification of PISHA Sandstone Rill Erosion Using the SFM-MVS Method Under Laboratory Rainfall Simulation
by Yuhang Liu, Sui Zhang, Jiwei Wang, Rongyan Gao, Jiaxuan Liu, Siqi Liu, Xuebing Hu, Jianrong Liu and Ruiqiang Bai
Atmosphere 2025, 16(9), 1045; https://doi.org/10.3390/atmos16091045 - 2 Sep 2025
Viewed by 1043
Abstract
Soil erosion is a critical ecological challenge in semi-arid regions of China, particularly in the Yellow River Basin, where Pisha sandstone slopes undergo rapid degradation. Rill erosion, driven by rainfall and overland flow, destabilizes slopes and accelerates ecosystem degradation. To address this, we developed a multi-view stereo observation system that integrates Structure-from-Motion (SFM) and multi-view stereo (MVS) for high-precision, dynamic monitoring of rill erosion. Laboratory rainfall simulations were conducted under four inflow rates (2–8 L/min), corresponding to rainfall intensities of 30–120 mm/h. The erosion process was divided into four phases: infiltration and particle rolling, splash and sheet erosion, incipient rill incision, and mature rill networks, with erosion concentrated in the middle and lower slope sections. The SFM-MVS system achieved planimetric and vertical errors of 3.1 mm and 3.7 mm, respectively, providing approximately 25% higher accuracy and nearly 50% faster processing compared with LiDAR and UAV photogrammetry. Infiltration stabilized at approximately 6.2 mm/h under low flows (2 L/min) but declined to less than 4 mm/h under high flows (≥6 L/min), leading to intensified rill incision and coarse-particle transport (up to 21.4% of sediment). These results demonstrate that the SFM-MVS system offers a scalable and non-invasive method for quantifying erosion dynamics, with direct implications for field monitoring, ecological restoration, and soil conservation planning. Full article
(This article belongs to the Special Issue Research About Permafrost–Atmosphere Interactions (2nd Edition))
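Planimetric and vertical accuracies of the kind quoted for the SFM-MVS system are typically computed as separate RMSE terms over check points, splitting XY residuals from Z residuals. A sketch with synthetic residuals (not the study's check-point data):

```python
import numpy as np

def plan_vert_rmse(measured: np.ndarray, reference: np.ndarray):
    """Split check-point residuals into planimetric (XY) and vertical (Z)
    RMSE — the two accuracy figures commonly reported for DEM surveys."""
    d = measured - reference
    plan = float(np.sqrt((d[:, 0] ** 2 + d[:, 1] ** 2).mean()))
    vert = float(np.sqrt((d[:, 2] ** 2).mean()))
    return plan, vert

# Illustrative check points in mm: reference at the origin, measured
# positions offset by synthetic errors.
ref = np.zeros((4, 3))
mea = np.array([[3.0, 0.0, 4.0], [0.0, 3.0, -4.0],
                [-3.0, 0.0, 4.0], [0.0, -3.0, -4.0]])
print(plan_vert_rmse(mea, ref))  # (3.0, 4.0)
```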

28 pages, 9030 KB  
Article
UAV Path Planning via Semantic Segmentation of 3D Reality Mesh Models
by Xiaoxinxi Zhang, Zheng Ji, Lingfeng Chen and Yang Lyu
Drones 2025, 9(8), 578; https://doi.org/10.3390/drones9080578 - 14 Aug 2025
Cited by 2 | Viewed by 3079
Abstract
Traditional unmanned aerial vehicle (UAV) path planning methods for image-based 3D reconstruction often rely solely on geometric information from initial models, resulting in redundant data acquisition in non-architectural areas. This paper proposes a UAV path planning method via semantic segmentation of 3D reality mesh models to enhance efficiency and accuracy in complex scenarios. The scene is segmented into buildings, vegetation, ground, and water bodies. Lightweight polygonal surfaces are extracted for buildings, while planar segments in non-building regions are fitted and projected into simplified polygonal patches. These photography targets are further decomposed into point, line, and surface primitives. A multi-resolution image acquisition strategy is adopted, featuring high-resolution coverage for buildings and rapid scanning for non-building areas. To ensure flight safety, a Digital Surface Model (DSM)-based shell model is utilized for obstacle avoidance, and sky-view-based Real-Time Kinematic (RTK) signal evaluation is applied to guide viewpoint optimization. Finally, a complete weighted graph is constructed, and ant colony optimization is employed to generate a low-energy-cost flight path. Experimental results demonstrate that, compared with traditional oblique photogrammetry, the proposed method achieves higher reconstruction quality. Compared with the commercial software Metashape, it reduces the number of images by 30.5% and energy consumption by 37.7%, while significantly improving reconstruction results in both architectural and non-architectural areas. Full article
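The final planning step builds a complete weighted graph over viewpoints and searches for a low-energy tour. The paper uses ant colony optimization; the sketch below substitutes a much simpler nearest-neighbour pass over a hypothetical energy-cost matrix, just to illustrate the tour-construction objective:

```python
import numpy as np

def greedy_tour(cost: np.ndarray, start: int = 0):
    """Nearest-neighbour tour over a complete weighted viewpoint graph.
    A simple stand-in for the ant colony optimization used in the paper."""
    n = len(cost)
    unvisited = set(range(n)) - {start}
    tour, cur = [start], start
    while unvisited:
        nxt = min(unvisited, key=lambda j: cost[cur, j])  # cheapest next hop
        tour.append(nxt)
        unvisited.remove(nxt)
        cur = nxt
    return tour

# Symmetric energy-cost matrix between four hypothetical viewpoints.
cost = np.array([[0.0, 1.0, 4.0, 3.0],
                 [1.0, 0.0, 2.0, 6.0],
                 [4.0, 2.0, 0.0, 5.0],
                 [3.0, 6.0, 5.0, 0.0]])
print(greedy_tour(cost))  # [0, 1, 2, 3]
```

Metaheuristics such as ant colony optimization improve on this greedy baseline by exploring many candidate tours and reinforcing low-cost edges.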

27 pages, 5515 KB  
Article
Optimizing Multi-Camera Mobile Mapping Systems with Pose Graph and Feature-Based Approaches
by Ahmad El-Alailyi, Luca Morelli, Paweł Trybała, Francesco Fassi and Fabio Remondino
Remote Sens. 2025, 17(16), 2810; https://doi.org/10.3390/rs17162810 - 13 Aug 2025
Cited by 2 | Viewed by 3235
Abstract
Multi-camera Visual Simultaneous Localization and Mapping (V-SLAM) increases spatial coverage through multi-view image streams, improving localization accuracy and reducing data acquisition time. Despite its speed and general robustness, V-SLAM often struggles to achieve the precise camera poses necessary for accurate 3D reconstruction, especially in complex environments. This study introduces two novel multi-camera optimization methods to enhance pose accuracy, reduce drift, and ensure loop closures. These methods refine multi-camera V-SLAM outputs within existing frameworks and are evaluated in two configurations: (1) multiple independent stereo V-SLAM instances operating on separate camera pairs; and (2) multi-view odometry processing all camera streams simultaneously. The proposed optimizations include (1) a multi-view feature-based optimization that integrates V-SLAM poses with rigid inter-camera constraints and bundle adjustment; and (2) a multi-camera pose graph optimization that fuses multiple trajectories using relative pose constraints and robust noise models. Validation is conducted through two complex 3D surveys using the ATOM-ANT3D multi-camera fisheye mobile mapping system. Results demonstrate survey-grade accuracy comparable to traditional photogrammetry, with reduced computational time, advancing toward near real-time 3D mapping of challenging environments. Full article
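Pose graph optimization of the kind described minimizes residuals between measured relative poses and the relative poses predicted by the current estimates. A planar (SE(2)) sketch of that residual; the paper works with full 6-DoF trajectories and robust noise models, so this is only the core idea:

```python
import numpy as np

def se2_mat(x, y, theta):
    """Homogeneous 2D pose matrix — a planar stand-in for the SE(3)
    poses optimized in a real pose graph."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]])

def relative_pose_error(pose_i, pose_j, z_ij):
    """Residual of measurement z_ij against the current pose estimates:
    e = z_ij^-1 * (pose_i^-1 * pose_j); identity when fully consistent."""
    pred = np.linalg.inv(pose_i) @ pose_j
    return np.linalg.inv(z_ij) @ pred

# Two consistent poses and their exact relative measurement.
p0 = se2_mat(0.0, 0.0, 0.0)
p1 = se2_mat(1.0, 0.5, np.pi / 2)
z = np.linalg.inv(p0) @ p1
err = relative_pose_error(p0, p1, z)
print(np.allclose(err, np.eye(3)))  # True
```

An optimizer perturbs the poses to drive all such residuals (odometry, inter-camera, and loop-closure constraints alike) toward identity.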

17 pages, 5935 KB  
Technical Note
Merging Various Types of Remote Sensing Data and Social Participation GIS with AI to Map the Objects Affected by Light Occlusion
by Yen-Chun Lin, Teng-To Yu, Yu-En Yang, Jo-Chi Lin, Guang-Wen Lien and Shyh-Chin Lan
Remote Sens. 2025, 17(13), 2131; https://doi.org/10.3390/rs17132131 - 21 Jun 2025
Viewed by 1134
Abstract
This study proposes a practical integration of an existing deep learning model (YOLOv9-E) and social participation GIS using multi-source remote sensing data to identify asbestos-containing materials located on the sides of buildings affected by light occlusion. These objects are often undetectable by traditional vertical or oblique photogrammetry, yet their precise localization is essential for effective removal planning. By leveraging the mobility and responsiveness of citizen investigators, we conducted fine-grained surveys in community spaces that were often inaccessible using conventional methods. The YOLOv9-E model demonstrated robustness on mobile-captured images, enriched with geolocation and orientation metadata, which improved the association between detections and specific buildings. By comparing results from Google Street View and field-based social imagery, we highlight the complementary strengths of both sources. Rather than introducing new algorithms, this study focuses on an applied integration framework to improve detection coverage, spatial precision, and participatory monitoring for environmental risk management. The dataset comprised 20,889 images, with 98% used for training and validation and 2% reserved for independent testing. The YOLOv9-E model achieved an mAP50 of 0.81 and an F1-score of 0.85 on the test set. Full article
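The reported F1-score is the harmonic mean of precision and recall. A quick sketch from hypothetical detection counts (not the study's actual confusion matrix):

```python
def detection_metrics(tp: int, fp: int, fn: int):
    """Precision, recall, and F1 from detection counts — the standard
    metrics behind figures like the F1 of 0.85 quoted above."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Illustrative counts chosen to yield F1 = 0.85.
p, r, f1 = detection_metrics(tp=170, fp=30, fn=30)
print(round(p, 2), round(r, 2), round(f1, 2))  # 0.85 0.85 0.85
```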

21 pages, 33456 KB  
Article
Evolution of Rockfall Based on Structure from Motion Reconstruction of Street View Imagery and Unmanned Aerial Vehicle Data: Case Study from Koto Panjang, Indonesia
by Tiggi Choanji, Michel Jaboyedoff, Yuniarti Yuskar, Anindita Samsu, Li Fei and Marc-Henri Derron
Remote Sens. 2025, 17(11), 1888; https://doi.org/10.3390/rs17111888 - 29 May 2025
Cited by 2 | Viewed by 1615
Abstract
This study explores the growing application of 3D remote sensing in geohazard studies, particularly for rock slope monitoring. It highlights the use of cost-effective Street View Imagery (SVI) and Unmanned Aerial Vehicles (UAV) through Structure-from-Motion (SfM) photogrammetry as tools for 3D rockfall monitoring. Using multi-temporal SVI and UAV imagery from the Koto Panjang cliff in Indonesia, we quantify rockfall volume changes over seven years and assess associated geohazards. The results reveal a total rockfall retreat of 5270 m3, with an average annual rate of 7.53 m3/year. Structural analysis identified six major discontinuity sets and confirmed inherent instability within the rock mass. Kinematic simulations using SVI and UAV-derived data further assessed rockfall trajectories and potential impact zones. Results indicate that 40% of simulated rockfall deposits accumulated near existing roads, with significant differences in distribution based on scree slope angles. This emphasizes the role of scree slopes in influencing rockfall propagation. In conclusion, SVI and UAV imagery present a valuable tool for 3D point cloud reconstruction and rockfall hazard assessment, particularly in areas lacking historical data. The study showcases the effectiveness of SVI and UAV imagery in quantifying past rockfall volumes and identifies critical areas for mitigation strategies, highlighting the importance of scree slope angle in managing rockfall hazards. Full article

14 pages, 3918 KB  
Article
Transforming Monochromatic Images into 3D Holographic Stereograms Through Depth-Map Extraction
by Oybek Mirzaevich Narzulloev, Jinwon Choi, Jumamurod Farhod Ugli Aralov, Leehwan Hwang, Philippe Gentet and Seunghyun Lee
Appl. Sci. 2025, 15(10), 5699; https://doi.org/10.3390/app15105699 - 20 May 2025
Viewed by 1778
Abstract
Traditional holographic printing techniques prove inadequate when only a single 2D image is available as input. Therefore, this paper proposes a new artificial-intelligence-based process for generating digital holographic stereograms from a single black-and-white photograph. This method eliminates the need for stereo cameras, photogrammetry, or 3D models. In this approach, a convolutional neural network and a deep convolutional neural field model are used for image colorization and depth-map estimation, respectively. Subsequently, the colored image and depth map are used to generate the multiview images required for creating holographic stereograms. This method efficiently preserves the visual characteristics of the original black-and-white images in the final digital holographic portraits. This provides a new and accessible method for holographic reconstruction using limited data, enabling the generation of 3D holographic content from existing images. Experiments were conducted using black-and-white photographs of two historical figures, and highly realistic holograms were obtained successfully. This study has significant implications for cultural preservation, personal archiving, and the generation of life-like holographic images with minimal input data. By bridging the gap between historical photographic sources and modern holographic techniques, our approach opens up new possibilities for memory preservation and visual storytelling. Full article
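Generating multiview images from one colored image plus a depth map follows the depth-image-based rendering idea: each pixel shifts horizontally by a disparity inversely proportional to its depth. A heavily simplified numpy sketch (integer disparities, no hole filling, and no occlusion ordering — later writes simply overwrite earlier ones):

```python
import numpy as np

def synthesize_view(image: np.ndarray, depth: np.ndarray, baseline: float):
    """Shift each pixel horizontally by disparity = baseline / depth —
    the basic step behind producing multiview images from a single
    image + depth map. Deliberately minimal; real renderers add
    z-buffering and hole inpainting."""
    h, w = image.shape
    out = np.zeros_like(image)
    disparity = np.round(baseline / depth).astype(int)
    for v in range(h):
        for u in range(w):
            nu = u + disparity[v, u]
            if 0 <= nu < w:
                out[v, nu] = image[v, u]
    return out

# One-row toy image: nearer pixels (depth 1.0) shift more than far ones.
img = np.array([[10, 20, 30, 40]])
depth = np.array([[1.0, 1.0, 2.0, 2.0]])
view = synthesize_view(img, depth, baseline=2.0)
print(view)
```

Repeating this with a range of baselines yields the view sequence needed to expose a holographic stereogram.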

22 pages, 64906 KB  
Article
Comparative Assessment of Neural Radiance Fields and 3D Gaussian Splatting for Point Cloud Generation from UAV Imagery
by Muhammed Enes Atik
Sensors 2025, 25(10), 2995; https://doi.org/10.3390/s25102995 - 9 May 2025
Cited by 3 | Viewed by 5375
Abstract
Point clouds continue to be the main data source in 3D modeling studies with unmanned aerial vehicle (UAV) images. Structure-from-Motion (SfM) and MultiView Stereo (MVS) have high time costs for point cloud generation, especially in large data sets. For this reason, state-of-the-art methods such as Neural Radiance Fields (NeRF) and 3D Gaussian Splatting (3DGS) have emerged as powerful alternatives for point cloud generation. This paper explores the performance of NeRF and 3DGS methods in generating point clouds from UAV images. For this purpose, the Nerfacto, Instant-NGP, and Splatfacto methods developed in the Nerfstudio framework were used. The obtained point clouds were evaluated by taking the point cloud produced with the photogrammetric method as reference. In this study, the effects of image size and iteration number on the performance of the algorithms were investigated in two different study areas. According to the results, Splatfacto demonstrates promising capabilities in addressing challenges related to scene complexity, rendering efficiency, and accuracy in UAV imagery. Full article
(This article belongs to the Special Issue Stereo Vision Sensing and Image Processing)
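Evaluating generated point clouds against a photogrammetric reference is commonly done with cloud-to-cloud nearest-neighbour distances. A brute-force sketch on tiny invented clouds (real comparisons would use a KD-tree and far larger point sets):

```python
import numpy as np

def cloud_to_cloud(pred: np.ndarray, ref: np.ndarray) -> float:
    """Mean nearest-neighbour distance from each predicted point to the
    reference cloud — a brute-force version of the cloud-to-cloud
    metric used to score NeRF/3DGS outputs against photogrammetry."""
    d = np.linalg.norm(pred[:, None, :] - ref[None, :, :], axis=2)
    return float(d.min(axis=1).mean())

# Illustrative clouds: two reference points, two slightly offset
# predictions (units arbitrary).
ref = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
pred = np.array([[0.1, 0.0, 0.0], [1.0, 0.2, 0.0]])
print(round(cloud_to_cloud(pred, ref), 2))  # 0.15
```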
