Search Results (49)

Search Parameters:
Keywords = gaussian splatting

33 pages, 9679 KB  
Article
Intelligent Defect Detection of Ancient City Walls Based on Computer Vision
by Gengpei Zhang, Xiaohan Dou and Leqi Li
Sensors 2025, 25(16), 5042; https://doi.org/10.3390/s25165042 - 14 Aug 2025
Viewed by 414
Abstract
As an important tangible carrier of historical and cultural heritage, ancient city walls embody the historical memory of urban development and serve as evidence of engineering evolution. However, due to prolonged exposure to complex natural environments and human activities, they are highly susceptible to various types of defects, such as cracks, missing bricks, salt crystallization, and vegetation erosion. To enhance the capability of cultural heritage conservation, this paper focuses on the ancient city wall of Jingzhou and proposes a multi-stage defect-detection framework based on computer vision technology. The proposed system establishes a processing pipeline that includes image processing, 2D defect detection, depth estimation, and 3D reconstruction. On the processing end, the Restormer and SG-LLIE models are introduced for image deblurring and illumination enhancement, respectively, improving the quality of wall images. The system incorporates the LFS-GAN model to augment defect samples. On the detection end, YOLOv12 is used as the 2D recognition network to detect common defects based on the generated samples. A depth estimation module is employed to assist in the verification of ancient wall defects. Finally, a Gaussian Splatting point-cloud reconstruction method is used to achieve a 3D visual representation of the defects. Experimental results show that the proposed system effectively detects multiple types of defects in ancient city walls, providing both a theoretical foundation and technical support for the intelligent monitoring of cultural heritage.
(This article belongs to the Section Sensing and Imaging)

32 pages, 19346 KB  
Article
Three-Dimensional Intelligent Understanding and Preventive Conservation Prediction for Linear Cultural Heritage
by Ruoxin Wang, Ming Guo, Yaru Zhang, Jiangjihong Chen, Yaxuan Wei and Li Zhu
Buildings 2025, 15(16), 2827; https://doi.org/10.3390/buildings15162827 - 8 Aug 2025
Viewed by 381
Abstract
This study proposes an innovative method that integrates multi-source remote sensing technologies and artificial intelligence to meet the urgent needs of deformation monitoring and ecohydrological environment analysis in Great Wall heritage protection. By integrating interferometric synthetic aperture radar (InSAR) technology, low-altitude oblique photogrammetry models, and the three-dimensional Gaussian splatting model, an integrated air–space–ground system for monitoring and understanding the Great Wall is constructed. Low-altitude oblique photogrammetry combined with the Gaussian splatting model, using drone images and intelligent generation algorithms (e.g., generative adversarial networks), quickly constructs high-precision 3D models, significantly improving texture detail and reconstruction efficiency. Based on the 3D Gaussian splatting model of the AHLLM-3D network, the integration of point cloud data and a large language model achieves multimodal semantic understanding and spatial analysis of the Great Wall’s architectural structure. The results show that the multi-source data fusion method can effectively identify high-risk deformation zones (with annual subsidence reaching −25 mm) and optimize modeling accuracy through intelligent algorithms (reducing detail error by 30%), providing accurate deformation warnings and repair bases for Great Wall protection. Future studies will further combine the concept of ecological water wisdom to explore heritage protection strategies under multi-hazard coupling, promoting the digital transformation of cultural heritage preservation.

20 pages, 2776 KB  
Article
Automatic 3D Reconstruction: Mesh Extraction Based on Gaussian Splatting from Romanesque–Mudéjar Churches
by Nelson Montas-Laracuente, Emilio Delgado Martos, Carlos Pesqueira-Calvo, Giovanni Intra Sidola, Ana Maitín, Alberto Nogales and Álvaro José García-Tejedor
Appl. Sci. 2025, 15(15), 8379; https://doi.org/10.3390/app15158379 - 28 Jul 2025
Viewed by 755
Abstract
This research introduces an automated 3D virtual reconstruction system tailored for architectural heritage (AH) applications, contributing to the ongoing paradigm shift from traditional CAD-based workflows to artificial intelligence-driven methodologies. It reviews recent advancements in machine learning and deep learning—particularly neural radiance fields (NeRFs) and their successor, Gaussian splatting (GS)—as state-of-the-art techniques in the domain. The study advocates for replacing point cloud data in heritage building information modeling workflows with image-based inputs, proposing a novel “photo-to-BIM” pipeline. A proof-of-concept system is presented, capable of processing photographs or video footage of ancient ruins—specifically, Romanesque–Mudéjar churches—to automatically generate 3D mesh reconstructions. The system’s performance is assessed using both objective metrics and subjective evaluations of mesh quality. The results confirm the feasibility and promise of image-based reconstruction as a viable alternative to conventional methods. The system applies GS and Mip-Splatting, which proved superior in noise reduction, and extracts meshes via surface-aligned Gaussian splatting for efficient 3D reconstruction. This photo-to-mesh pipeline is a viable step towards HBIM.

22 pages, 3348 KB  
Article
Comparison of NeRF- and SfM-Based Methods for Point Cloud Reconstruction for Small-Sized Archaeological Artifacts
by Miguel Ángel Maté-González, Roy Yali, Jesús Rodríguez-Hernández, Enrique González-González and Julián Aguirre de Mata
Remote Sens. 2025, 17(14), 2535; https://doi.org/10.3390/rs17142535 - 21 Jul 2025
Viewed by 649
Abstract
This study presents a critical evaluation of image-based 3D reconstruction techniques for small archaeological artifacts, focusing on a quantitative comparison between Neural Radiance Fields (NeRF), its recent Gaussian Splatting (GS) variant, and traditional Structure-from-Motion (SfM) photogrammetry. The research targets artifacts smaller than 5 cm, characterized by complex geometries and reflective surfaces that pose challenges for conventional recording methods. To address the limitations of traditional methods without resorting to the high costs associated with laser scanning, this study explores NeRF and GS as cost-effective and efficient alternatives. A comprehensive experimental framework was established, incorporating ground-truth data obtained using a metrological articulated arm and a rigorous quantitative evaluation based on root mean square (RMS) error, Chamfer distance, and point cloud density. The results indicate that while NeRF outperforms GS in terms of geometric fidelity, both techniques still exhibit lower accuracy compared to SfM, particularly in preserving fine geometric details. Nonetheless, NeRF demonstrates strong potential for rapid, high-quality 3D documentation suitable for visualization and dissemination purposes in cultural heritage. These findings highlight both the current capabilities and limitations of neural rendering techniques for archaeological documentation and suggest promising future research directions combining AI-based models with traditional photogrammetric pipelines.
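The Chamfer distance used above to score the reconstructions is not defined in the listing; a common symmetric formulation (mean nearest-neighbour distance in each direction, summed) can be sketched as follows. The function name and the choice of mean-based normalization are ours, not the paper's.

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Chamfer distance between point clouds a (N,3) and b (M,3):
    mean distance from each point of a to its nearest neighbour in b,
    plus the same from b to a."""
    d_ab, _ = cKDTree(b).query(a)  # nearest-neighbour distances a -> b
    d_ba, _ = cKDTree(a).query(b)  # nearest-neighbour distances b -> a
    return float(d_ab.mean() + d_ba.mean())

# Sanity check: a cloud compared with itself has zero Chamfer distance,
# and any perturbation makes it strictly positive.
pts = np.random.default_rng(0).random((100, 3))
assert chamfer_distance(pts, pts) == 0.0
assert chamfer_distance(pts, pts + 0.5) > 0.0
```

Papers differ on whether they sum or average the two directions and whether distances are squared, so reported values are only comparable under one fixed convention.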

24 pages, 14668 KB  
Article
Metric Error Assessment Regarding Geometric 3D Reconstruction of Transparent Surfaces via SfM Enhanced by 2D and 3D Gaussian Splatting
by Dario Billi, Gabriella Caroti and Andrea Piemonte
Sensors 2025, 25(14), 4410; https://doi.org/10.3390/s25144410 - 15 Jul 2025
Viewed by 972
Abstract
This research investigates the metric accuracy of 3D transparent object reconstruction, a task where conventional photogrammetry often fails. The topic is especially relevant in cultural heritage (CH), where accurate digital documentation of glass and transparent artifacts is important. The work proposes a practical methodology using existing tools to verify metric accuracy standards. The study compares three methods, conventional photogrammetry, 3D Gaussian splatting (3DGS), and 2D Gaussian splatting (2DGS), to assess their ability to produce complete and metrically reliable 3D models suitable for measurement and geometric analysis. A transparent glass artifact serves as the case study. Results show that 2DGS captures fine surface and internal details with better geometric consistency than 3DGS and photogrammetry. Although 3DGS offers high visual quality, it introduces surface artifacts that affect metric reliability. Photogrammetry fails to reconstruct the object entirely. The study highlights that visual quality does not ensure geometric accuracy, which is critical for measurement applications. In this work, ground truth comparisons confirm that 2DGS offers the best trade-off between accuracy and appearance, despite higher computational demands. These findings suggest extending the experimentation to other sets of images featuring transparent objects, and possibly also reflective ones.

17 pages, 610 KB  
Review
Three-Dimensional Reconstruction Techniques and the Impact of Lighting Conditions on Reconstruction Quality: A Comprehensive Review
by Dimitar Rangelov, Sierd Waanders, Kars Waanders, Maurice van Keulen and Radoslav Miltchev
Lights 2025, 1(1), 1; https://doi.org/10.3390/lights1010001 - 14 Jul 2025
Viewed by 537
Abstract
Three-dimensional (3D) reconstruction has become a fundamental technology in applications ranging from cultural heritage preservation and robotics to forensics and virtual reality. As these applications grow in complexity and realism, the quality of the reconstructed models becomes increasingly critical. Among the many factors that influence reconstruction accuracy, the lighting conditions at capture time remain one of the most influential, yet widely neglected, variables. This review provides a comprehensive survey of classical and modern 3D reconstruction techniques, including Structure from Motion (SfM), Multi-View Stereo (MVS), Photometric Stereo, and recent neural rendering approaches such as Neural Radiance Fields (NeRFs) and 3D Gaussian Splatting (3DGS), while critically evaluating their performance under varying illumination conditions. We describe how lighting-induced artifacts such as shadows, reflections, and exposure imbalances compromise reconstruction quality, and how different approaches attempt to mitigate these effects. Furthermore, we identify fundamental gaps in current research, including the lack of standardized lighting-aware benchmarks and the limited robustness of state-of-the-art algorithms in uncontrolled environments. By synthesizing knowledge across fields, this review aims to provide a deeper understanding of the interplay between lighting and reconstruction, and outlines future research directions that emphasize the need for adaptive, lighting-robust solutions in 3D vision systems.

20 pages, 108154 KB  
Article
Masks-to-Skeleton: Multi-View Mask-Based Tree Skeleton Extraction with 3D Gaussian Splatting
by Xinpeng Liu, Kanyu Xu, Risa Shinoda, Hiroaki Santo and Fumio Okura
Sensors 2025, 25(14), 4354; https://doi.org/10.3390/s25144354 - 11 Jul 2025
Viewed by 667
Abstract
Accurately reconstructing tree skeletons from multi-view images is challenging. Most existing works extract skeletons from 3D point clouds, but for thin branches with low texture contrast, multi-view stereo (MVS) often produces noisy and fragmented point clouds that break branch connectivity. Leveraging recent advances in accurate mask extraction from images, we introduce a mask-guided graph optimization framework that estimates a 3D skeleton directly from multi-view segmentation masks, bypassing the reliance on point cloud quality. In our method, a skeleton is modeled as a graph whose nodes store positions and radii while its adjacency matrix encodes branch connectivity. We use 3D Gaussian splatting (3DGS) to render silhouettes of the graph and directly optimize the nodes and the adjacency matrix to fit given multi-view silhouettes in a differentiable manner. Furthermore, we use a minimum spanning tree (MST) algorithm during the optimization loop to regularize the graph to a tree structure. Experiments on synthetic and real-world plants show consistent improvements in completeness and structural accuracy over existing point-cloud-based and heuristic baseline methods.
(This article belongs to the Section Remote Sensors)
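The MST regularization step described above is not spelled out in the abstract; one minimal reading is to rebuild the adjacency matrix as the minimum spanning tree of the complete Euclidean graph over the node positions. The sketch below uses SciPy for the MST; the function name and the 0/1 adjacency encoding are our assumptions.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def project_to_tree(nodes: np.ndarray) -> np.ndarray:
    """Regularize a skeleton graph to a tree: weight the complete graph over
    the nodes by Euclidean distance, keep only its minimum spanning tree, and
    return a symmetric 0/1 adjacency matrix."""
    diff = nodes[:, None, :] - nodes[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)            # (N, N) pairwise distances
    mst = minimum_spanning_tree(dist).toarray()     # upper-triangular edge weights
    adj = (mst > 0).astype(int)
    return adj | adj.T                              # make the adjacency symmetric

# A branching point (node 1) connecting three chain tips
nodes = np.array([[0., 0., 0.], [1., 0., 0.], [2., 0., 0.], [1., 1., 0.]])
adj = project_to_tree(nodes)
assert adj.sum() // 2 == len(nodes) - 1  # a tree over N nodes has N-1 edges
```

In the paper this projection would run inside the optimization loop, after each gradient update of the node positions; here it is shown as a standalone step.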

21 pages, 6136 KB  
Article
A ROS-Based Online System for 3D Gaussian Splatting Optimization: Flexible Frontend Integration and Real-Time Refinement
by Li’an Wang, Jian Xu, Xuan An, Yujie Ji, Yuxuan Wu and Zhaoyuan Ma
Sensors 2025, 25(13), 4151; https://doi.org/10.3390/s25134151 - 3 Jul 2025
Viewed by 987
Abstract
The 3D Gaussian splatting technique demonstrates significant efficiency advantages in real-time scene reconstruction. However, when its initialization relies on traditional SfM methods (such as COLMAP), it faces clear bottlenecks: high computational resource consumption and the decoupling of camera pose optimization from map construction. This paper proposes an online 3DGS optimization system based on ROS. Through a loose-coupling architecture, it realizes real-time data interaction between the frontend SfM/SLAM module and the backend 3DGS optimization. Using ROS as middleware, the system can ingest the keyframe poses and point-cloud data generated by any frontend algorithm (such as ORB-SLAM or COLMAP). With the help of a dynamic sliding-window strategy and a rendering-quality loss function that combines L1 and SSIM, it achieves online optimization of the 3DGS map. The experimental data show that, compared with the traditional COLMAP-3DGS process, this system reduces the initialization time by 90% and achieves an average PSNR improvement of 1.9 dB on the TUM-RGBD, Tanks and Temples, and KITTI datasets.
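The abstract names a loss combining L1 and SSIM but gives no weighting. The original 3DGS paper uses (1 − λ)·L1 + λ·(1 − SSIM) with λ = 0.2, which we sketch below with a simplified single-window SSIM; the exact SSIM windowing and λ used by this system are assumptions on our part.

```python
import numpy as np

def ssim_global(x: np.ndarray, y: np.ndarray,
                c1: float = 0.01**2, c2: float = 0.03**2) -> float:
    """Simplified SSIM computed over the whole image as one window
    (real implementations use a sliding Gaussian window)."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2*mx*my + c1) * (2*cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

def render_loss(rendered: np.ndarray, target: np.ndarray, lam: float = 0.2) -> float:
    """Photometric loss in the style of standard 3DGS training:
    (1 - lam) * L1 + lam * (1 - SSIM)."""
    l1 = np.abs(rendered - target).mean()
    return (1 - lam) * l1 + lam * (1 - ssim_global(rendered, target))

img = np.random.default_rng(0).random((32, 32))
assert render_loss(img, img) < 1e-6          # identical images: (near-)zero loss
assert render_loss(img, 1.0 - img) > render_loss(img, img)
```

The L1 term keeps colors unbiased while the SSIM term rewards local structural agreement; the small constant weights c1, c2 are the conventional stabilizers for near-zero denominators.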

20 pages, 24813 KB  
Article
BrushGaussian: Brushstroke-Based Stylization for 3D Gaussian Splatting
by Zhi-Zheng Xiang, Chun Xie and Itaru Kitahara
Appl. Sci. 2025, 15(12), 6881; https://doi.org/10.3390/app15126881 - 18 Jun 2025
Viewed by 860
Abstract
We present a method for enhancing 3D Gaussian Splatting primitives with brushstroke-aware stylization. Previous approaches to 3D style transfer are typically limited to color or texture modifications, lacking an understanding of artistic shape deformation. In contrast, we focus on individual 3D Gaussian primitives, exploring their potential to enable style transfer that incorporates both color- and brushstroke-inspired local geometric stylization. Specifically, we introduce additional texture features for each Gaussian primitive and apply a texture mapping technique to achieve brushstroke-like geometric effects in a rendered scene. Furthermore, we propose an unsupervised clustering algorithm to efficiently prune redundant Gaussians, ensuring that our method seamlessly integrates with existing 3D Gaussian Splatting pipelines. Extensive evaluations demonstrate that our approach outperforms existing baselines by producing brushstroke-aware artistic renderings with richer geometric expressiveness and enhanced visual appeal.
(This article belongs to the Special Issue Technical Advances in 3D Reconstruction)
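The abstract mentions unsupervised clustering to prune redundant Gaussians without naming the algorithm. A toy version of the idea, clustering Gaussian centers with k-means and keeping one representative per cluster, can be sketched as follows; the choice of k-means, the value of k, and all names are our assumptions, not BrushGaussian's actual procedure.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def prune_gaussians(centers: np.ndarray, k: int, seed: int = 0) -> np.ndarray:
    """Toy pruning: cluster Gaussian centers (N,3) into k groups and keep,
    per group, the index of the center nearest the group centroid."""
    centroids, labels = kmeans2(centers, k, seed=seed, minit="++")
    keep = []
    for i in range(k):
        members = np.where(labels == i)[0]
        if members.size == 0:           # k-means may leave a cluster empty
            continue
        d = np.linalg.norm(centers[members] - centroids[i], axis=1)
        keep.append(members[np.argmin(d)])
    return np.array(sorted(keep))

pts = np.random.default_rng(0).random((200, 3))
kept = prune_gaussians(pts, k=20)
assert len(kept) <= 20                  # at most one survivor per cluster
```

A real pipeline would also account for opacity, scale, and rendered contribution before discarding a primitive, not position alone.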

15 pages, 72897 KB  
Article
Dual-Dimensional Gaussian Splatting Integrating 2D and 3D Gaussians for Surface Reconstruction
by Jichan Park, Jae-Won Suh and Yuseok Ban
Appl. Sci. 2025, 15(12), 6769; https://doi.org/10.3390/app15126769 - 16 Jun 2025
Viewed by 1590
Abstract
Three-Dimensional Gaussian Splatting (3DGS) has revolutionized novel-view synthesis, enabling real-time rendering of high-quality scenes. Two-Dimensional Gaussian Splatting (2DGS) improves geometric accuracy by replacing 3D Gaussians with flat 2D Gaussians. However, the flat nature of 2D Gaussians reduces mesh quality on volumetric surfaces and results in over-smoothed reconstruction. To address this, we propose Dual-Dimensional Gaussian Splatting (DDGS), which integrates both 2D and 3D Gaussians. First, we generalize the homogeneous transformation matrix based on 2DGS to initialize all Gaussians in 3D. Subsequently, during training, we selectively convert Gaussians into 2D representations based on their scale. This approach leverages the complementary strengths of 2D and 3D Gaussians, resulting in more accurate surface reconstruction across both flat and volumetric regions. Additionally, to mitigate over-smoothing, we introduce gradient-based regularization terms. Quantitative evaluations on the DTU and TnT datasets demonstrate that DDGS consistently outperforms prior methods, including 3DGS, SuGaR, and 2DGS, achieving the best Chamfer Distance and F1 score across a wide range of scenes.
(This article belongs to the Section Computing and Artificial Intelligence)
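The F1 score reported for surface reconstruction (as on the TnT benchmark) is typically the harmonic mean of a distance-thresholded precision and recall between the reconstructed and ground-truth point sets. A sketch under that assumption (threshold name tau and function name are ours):

```python
import numpy as np
from scipy.spatial import cKDTree

def fscore(pred: np.ndarray, gt: np.ndarray, tau: float) -> float:
    """F1 for surface reconstruction: precision is the fraction of predicted
    points within tau of the ground truth; recall is the converse fraction."""
    d_pg, _ = cKDTree(gt).query(pred)      # predicted -> ground truth
    d_gp, _ = cKDTree(pred).query(gt)      # ground truth -> predicted
    precision = (d_pg < tau).mean()
    recall = (d_gp < tau).mean()
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

pts = np.random.default_rng(0).random((200, 3))
assert fscore(pts, pts, tau=1e-3) == 1.0   # a perfect reconstruction scores 1
```

Unlike Chamfer distance, which averages distances, the F-score saturates at the threshold, so it rewards completeness and penalizes outliers more symmetrically.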

28 pages, 8816 KB  
Article
Reconstruction, Segmentation and Phenotypic Feature Extraction of Oilseed Rape Point Cloud Combining 3D Gaussian Splatting and CKG-PointNet++
by Yourui Huang, Jiale Pang, Shuaishuai Yu, Jing Su, Shuainan Hou and Tao Han
Agriculture 2025, 15(12), 1289; https://doi.org/10.3390/agriculture15121289 - 15 Jun 2025
Viewed by 635
Abstract
Phenotypic traits and their extraction at the seedling stage of oilseed rape play a crucial role in assessing growth, breeding new varieties, and estimating yield. Manual phenotyping is labor- and time-intensive, and the measurement process itself can structurally damage the plants. Existing crop phenotype acquisition methods are limited in throughput and accuracy, making it difficult to meet the demands of phenotype analysis. We propose an oilseed rape segmentation and phenotyping measurement method based on 3D Gaussian splatting with an improved PointNet++. The CKG-PointNet++ network integrates CGLU and FastKAN convolutional modules in the SA layer, and introduces MogaBlock and a self-attention mechanism in the FP layer to enhance local and global feature extraction. Experiments show that the method achieves 97.70% overall accuracy (OA) and 96.01% mean intersection over union (mIoU) on the oilseed rape point cloud segmentation task. The extracted phenotypic parameters were highly correlated with manual measurements: for leaf length, leaf width, leaf area, and leaf inclination, R2 reached 0.9843, 0.9632, 0.9806, and 0.8890, with RMSE of 0.1621 cm, 0.1546 cm, 0.6892 cm2, and 2.1144°, respectively. This technique provides a feasible solution for high-throughput, rapid measurement of seedling phenotypes in oilseed rape.
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
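The R2 and RMSE agreement statistics quoted above are standard; for readers reproducing such comparisons, a minimal sketch of both (function name ours, toy data illustrative only):

```python
import numpy as np

def r2_rmse(pred: np.ndarray, manual: np.ndarray) -> tuple[float, float]:
    """Agreement between predicted and manually measured trait values:
    coefficient of determination R^2 and root-mean-square error."""
    ss_res = np.sum((manual - pred) ** 2)
    ss_tot = np.sum((manual - manual.mean()) ** 2)
    rmse = float(np.sqrt(np.mean((manual - pred) ** 2)))
    return 1.0 - ss_res / ss_tot, rmse

manual = np.array([4.0, 5.5, 6.1, 7.3, 8.0])   # e.g. leaf lengths in cm (toy values)
r2, rmse = r2_rmse(manual, manual)              # a perfect predictor
assert r2 == 1.0 and rmse == 0.0
```

R2 here is computed against the manual measurements taken as ground truth; note it can go negative when predictions are worse than simply predicting the mean.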

27 pages, 75388 KB  
Article
High-Fidelity 3D Gaussian Splatting for Exposure-Bracketing Space Target Reconstruction: OBB-Guided Regional Densification with Sobel Edge Regularization
by Yijin Jiang, Xiaoyuan Ren, Huanyu Yin, Libing Jiang, Canyu Wang and Zhuang Wang
Remote Sens. 2025, 17(12), 2020; https://doi.org/10.3390/rs17122020 - 11 Jun 2025
Viewed by 2140
Abstract
In this paper, a novel optimization framework based on 3D Gaussian splatting (3DGS) for high-fidelity 3D reconstruction of space targets under exposure-bracketing conditions is studied. In the considered scenario, multi-view optical imagery captures space targets under complex and dynamic illumination, where severe inter-frame brightness variations degrade reconstruction quality by introducing photometric inconsistencies and blurring fine geometric details. Unlike existing methods, we explicitly address these challenges by integrating exposure-aware adaptive refinement and edge-preserving regularization into the 3DGS pipeline. Specifically, we propose an oriented bounding box (OBB)-guided regional densification strategy, tailored to exposure bracketing, to dynamically identify and refine under-reconstructed regions. In addition, we introduce a Sobel edge regularization mechanism to guide the learning of sharp geometric features and improve texture fidelity. To validate the framework, experiments are conducted on both a custom OBR-ST dataset and the public SHIRT dataset, demonstrating that our method significantly outperforms state-of-the-art techniques in geometric accuracy and visual quality under exposure-bracketing scenarios. The results highlight the effectiveness of our approach in enabling robust in-orbit perception for space applications.
(This article belongs to the Special Issue Advances in 3D Reconstruction with High-Resolution Satellite Data)
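The Sobel edge regularizer is not specified in the abstract; one plausible form penalizes the difference between the Sobel gradient-magnitude maps of the rendered and target images, which is what the sketch below computes. The loss definition and names are our assumptions, not the paper's exact mechanism.

```python
import numpy as np
from scipy.ndimage import sobel

def edge_loss(rendered: np.ndarray, target: np.ndarray) -> float:
    """Mean absolute difference between the Sobel gradient magnitudes of two
    grayscale images, encouraging sharp, well-aligned edges in the render."""
    def grad_mag(img: np.ndarray) -> np.ndarray:
        gx = sobel(img, axis=0)          # vertical derivative
        gy = sobel(img, axis=1)          # horizontal derivative
        return np.hypot(gx, gy)
    return float(np.abs(grad_mag(rendered) - grad_mag(target)).mean())

img = np.zeros((16, 16))
img[:, 8:] = 1.0                         # a vertical step edge
assert edge_loss(img, img) == 0.0        # identical edge maps: zero penalty
assert edge_loss(img, np.zeros((16, 16))) > 0.0
```

Because the penalty compares gradient magnitudes rather than raw intensities, it is less sensitive to the global brightness shifts that exposure bracketing introduces.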

31 pages, 6915 KB  
Review
Trends and Techniques in 3D Reconstruction and Rendering: A Survey with Emphasis on Gaussian Splatting
by Wenhe Chen, Zikai Li, Jingru Guo, Caixia Zheng and Siyi Tian
Sensors 2025, 25(12), 3626; https://doi.org/10.3390/s25123626 - 9 Jun 2025
Viewed by 2389
Abstract
Three-Dimensional Gaussian Splatting (3DGS), an important advancement in the fields of computer graphics and 3D vision, has emerged to greatly accelerate rendering in novel view synthesis. Because it directly estimates 3D shape in real time without neural networks, 3DGS has received much attention in robotics, urban mapping, autonomous navigation, and virtual/augmented reality. In view of the growing popularity of 3DGS, we conduct a systematic review of the relevant literature. We begin by surveying existing work on 3D reconstruction and rendering, outlining the historical development and recent advances from both foundational and innovation-driven perspectives. Next, we summarize the most commonly used datasets and evaluation metrics in 3D reconstruction and rendering. Finally, we summarize the current challenges and suggest potential directions for future research. Through this survey, we aim to provide researchers with a comprehensive resource for understanding and applying 3D reconstruction and rendering techniques, promoting further development and broader application in the field of 3D vision.
(This article belongs to the Section Sensing and Imaging)
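The rendering step shared by the 3DGS methods surveyed here is per-pixel, front-to-back alpha compositing of depth-sorted splats: C = Σ c_i α_i Π_{j&lt;i} (1 − α_j). A minimal single-pixel sketch (real implementations tile the screen and derive each α from the projected 2D Gaussian's falloff and opacity):

```python
import numpy as np

def composite(colors: np.ndarray, alphas: np.ndarray) -> np.ndarray:
    """Front-to-back alpha compositing of depth-sorted splats at one pixel:
    C = sum_i c_i * alpha_i * prod_{j<i} (1 - alpha_j)."""
    transmittance = 1.0                  # fraction of light not yet absorbed
    pixel = np.zeros(3)
    for c, a in zip(colors, alphas):
        pixel += transmittance * a * c
        transmittance *= (1.0 - a)       # light remaining behind this splat
    return pixel

# A fully opaque red splat in front hides the blue splat behind it
red, blue = np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])
out = composite(np.stack([red, blue]), np.array([1.0, 0.8]))
assert np.allclose(out, red)
```

Sorting splats by depth before this loop is what makes the blend order-correct; the absence of any neural network in this inner loop is the source of 3DGS's real-time speed noted above.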

15 pages, 14638 KB  
Article
Gaussian Splatting-Based Color and Shape Deformation Fields for Dynamic Scene Reconstruction
by Kaibin Bao, Wei Wu and Yongtao Hao
Electronics 2025, 14(12), 2347; https://doi.org/10.3390/electronics14122347 - 8 Jun 2025
Viewed by 1269
Abstract
The 3DGS (3D Gaussian Splatting) series of works has achieved significant success in novel view synthesis, but further research is needed for dynamic scene reconstruction tasks. In this paper, we propose a new framework based on 3DGS for handling dynamic scene reconstruction problems involving color changes. Our approach employs a multi-stage training strategy combining motion and color deformation fields to accurately model dynamic geometry and appearance changes. Additionally, we design two modular components: the Dynamic Component for capturing motion variations and the Color Component for managing material and color changes. These components flexibly adapt to different scenes, enhancing our method’s versatility. Experimental results demonstrate that our method achieves real-time rendering at 80 FPS on an RTX 4090 and achieves higher reconstruction accuracy than baseline methods such as HexPlane and Deformable3DGS. Furthermore, it reduces training time by approximately 10%, indicating improved training efficiency. These quantitative results confirm the effectiveness of our approach in delivering high-fidelity 4D reconstruction of complex dynamic environments.
(This article belongs to the Special Issue 3D Computer Vision and 3D Reconstruction)

19 pages, 8306 KB  
Article
Plant Sam Gaussian Reconstruction (PSGR): A High-Precision and Accelerated Strategy for Plant 3D Reconstruction
by Jinlong Chen, Yingjie Jiao, Fuqiang Jin, Xingguo Qin, Yi Ning, Minghao Yang and Yongsong Zhan
Electronics 2025, 14(11), 2291; https://doi.org/10.3390/electronics14112291 - 4 Jun 2025
Viewed by 760
Abstract
Plant 3D reconstruction plays a critical role in precision agriculture and plant growth monitoring, yet it faces challenges such as complex background interference, difficulty in capturing intricate plant structures, and slow reconstruction speed. In this study, we propose PlantSamGaussianReconstruction (PSGR), a novel method that integrates Grounding SAM with 3D Gaussian Splatting (3DGS) techniques. PSGR employs Grounding DINO and SAM for accurate plant–background segmentation, utilizes algorithms such as Scale-Invariant Feature Transform (SIFT) for camera pose estimation and sparse point cloud generation, and leverages 3DGS for plant reconstruction. Furthermore, a 3D–2D projection-guided optimization strategy is introduced to enhance segmentation precision. Experimental results on various multi-view plant image datasets demonstrate that PSGR effectively removes background noise in diverse environments, accurately captures plant details, and achieves peak signal-to-noise ratio (PSNR) values exceeding 30 in most scenarios, outperforming the original 3DGS approach. Moreover, PSGR reduces training time by up to 26.9%, significantly improving reconstruction efficiency. These results suggest that PSGR is an efficient, scalable, and high-precision solution for plant modeling.
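The PSNR figures quoted across these entries (e.g. "exceeding 30") follow the standard definition, 10·log10(peak²/MSE) in decibels; a minimal sketch for images normalized to [0, 1]:

```python
import numpy as np

def psnr(pred: np.ndarray, target: np.ndarray, peak: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB for images with values in [0, peak]."""
    mse = np.mean((pred - target) ** 2)
    if mse == 0:
        return float("inf")              # identical images
    return float(10.0 * np.log10(peak ** 2 / mse))

# A uniform error of 0.1 on a unit-range image gives MSE = 0.01, i.e. 20 dB
img = np.full((8, 8), 0.5)
assert abs(psnr(img, img + 0.1) - 20.0) < 1e-9
```

For 8-bit images the same formula is used with peak = 255; higher is better, and values above 30 dB generally indicate low visible error.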
