Search Results (71)

Search Parameters:
Keywords = neural radiance fields (NeRFs)

12 pages, 3315 KiB  
Article
NeRF-RE: An Improved Neural Radiance Field Model Based on Object Removal and Efficient Reconstruction
by Ziyang Li, Yongjian Huai, Qingkuo Meng and Shiquan Dong
Information 2025, 16(8), 654; https://doi.org/10.3390/info16080654 - 31 Jul 2025
Viewed by 124
Abstract
High-quality green gardens can markedly enhance the quality of life and mental well-being of their users. However, health and lifestyle constraints make it difficult for people to enjoy urban gardens, and traditional methods struggle to offer the high-fidelity experiences they need. This study introduces a 3D scene reconstruction and rendering strategy based on implicit neural representation: the efficient and removable neural radiance field model (NeRF-RE). Building on neural radiance fields (NeRF), the model incorporates a multi-resolution hash grid and a proposal network to improve training efficiency and modeling accuracy, and integrates the Segment Anything Model to safeguard public privacy. The crabapple tree, extensively utilized in urban garden design across temperate regions of the Northern Hemisphere, is taken as the test case: a dataset comprising 660 images of crabapple trees exhibiting three distinct geometric forms was collected to assess the NeRF-RE model’s performance. The results demonstrated that the ‘harvest gold’ crabapple scene had the highest reconstruction accuracy, with a PSNR, LPIPS and SSIM of 24.80 dB, 0.34 and 0.74, respectively. Compared with the Mip-NeRF 360 model, NeRF-RE not only achieved up to a 21-fold increase in training efficiency across the three crabapple forms, but was also less sensitive to dataset size in terms of reconstruction accuracy. By reconstructing real scenes with high fidelity for virtual reality, this work not only lets people enjoy the beauty of natural gardens at home but also contributes to the publicity and promotion of urban landscapes. Full article
(This article belongs to the Special Issue Extended Reality and Its Applications)
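The PSNR, LPIPS and SSIM figures quoted above are standard full-reference image-quality metrics. The sketch below shows one common way to compute them between a rendered novel view and its ground-truth photograph; it assumes the scikit-image and lpips packages and illustrative array shapes, and is not the NeRF-RE evaluation code.

```python
# A minimal sketch of the three image-quality metrics reported above (PSNR, SSIM,
# LPIPS), computed between a rendered novel view and its ground-truth photo.
# Assumes scikit-image and the lpips package; not the NeRF-RE evaluation code.
import numpy as np
import torch
import lpips
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_view(rendered: np.ndarray, reference: np.ndarray) -> dict:
    """Both inputs are HxWx3 uint8 images of the same camera view."""
    psnr = peak_signal_noise_ratio(reference, rendered, data_range=255)
    ssim = structural_similarity(reference, rendered, channel_axis=-1, data_range=255)

    def to_tensor(im: np.ndarray) -> torch.Tensor:
        # LPIPS expects NCHW float tensors scaled to [-1, 1].
        return torch.from_numpy(im).permute(2, 0, 1)[None].float() / 127.5 - 1.0

    lpips_val = lpips.LPIPS(net="alex")(to_tensor(rendered), to_tensor(reference)).item()
    return {"PSNR_dB": psnr, "SSIM": ssim, "LPIPS": lpips_val}
```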

20 pages, 2776 KiB  
Article
Automatic 3D Reconstruction: Mesh Extraction Based on Gaussian Splatting from Romanesque–Mudéjar Churches
by Nelson Montas-Laracuente, Emilio Delgado Martos, Carlos Pesqueira-Calvo, Giovanni Intra Sidola, Ana Maitín, Alberto Nogales and Álvaro José García-Tejedor
Appl. Sci. 2025, 15(15), 8379; https://doi.org/10.3390/app15158379 - 28 Jul 2025
Viewed by 213
Abstract
This research introduces an automated 3D virtual reconstruction system tailored for architectural heritage (AH) applications, contributing to the ongoing paradigm shift from traditional CAD-based workflows to artificial intelligence-driven methodologies. It reviews recent advancements in machine learning and deep learning, particularly neural radiance fields (NeRFs) and their successor, Gaussian splatting (GS), as state-of-the-art techniques in the domain. The study advocates for replacing point cloud data in heritage building information modeling (HBIM) workflows with image-based inputs, proposing a novel “photo-to-BIM” pipeline. A proof-of-concept system is presented, capable of processing photographs or video footage of ancient ruins, specifically Romanesque–Mudéjar churches, to automatically generate 3D mesh reconstructions. The system’s performance is assessed using both objective metrics and subjective evaluations of mesh quality. The results confirm the feasibility and promise of image-based reconstruction as a viable alternative to conventional methods. The system applied GS and Mip-Splatting, which proved superior in noise reduction, and extracted meshes via surface-aligned Gaussian splatting for efficient 3D reconstruction. This photo-to-mesh pipeline marks a viable step towards HBIM. Full article

27 pages, 6578 KiB  
Article
Evaluating Neural Radiance Fields for ADA-Compliant Sidewalk Assessments: A Comparative Study with LiDAR and Manual Methods
by Hang Du, Shuaizhou Wang, Linlin Zhang, Mark Amo-Boateng and Yaw Adu-Gyamfi
Infrastructures 2025, 10(8), 191; https://doi.org/10.3390/infrastructures10080191 - 22 Jul 2025
Viewed by 350
Abstract
An accurate assessment of sidewalk conditions is critical for ensuring compliance with the Americans with Disabilities Act (ADA), particularly to safeguard mobility for wheelchair users. This paper presents a novel 3D reconstruction framework based on neural radiance fields (NeRF), which utilizes monocular video input from consumer-grade cameras to generate high-fidelity 3D models of sidewalk environments. The framework enables automatic extraction of ADA-relevant geometric features, including the running slope, the cross slope, and vertical displacements, facilitating an efficient and scalable compliance assessment process. A comparative study is conducted across three surveying methods (manual measurements, LiDAR scanning, and the proposed NeRF-based approach), evaluated on four sidewalks and one curb ramp. Each method was assessed on accuracy, cost, time, level of automation, and scalability. The NeRF-based approach achieved high agreement with LiDAR-derived ground truth, delivering an F1 score of 96.52%, a precision of 96.74%, and a recall of 96.34% for ADA compliance classification. These results underscore the potential of NeRF as a cost-effective, automated alternative to traditional and LiDAR-based methods, with sufficient precision for widespread deployment in municipal sidewalk audits. Full article
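Running slope, cross slope, and vertical displacement are simple geometric quantities once a sidewalk patch has been reconstructed. The sketch below illustrates one plausible way to derive them from a point cloud and test them against commonly cited ADA limits; the thresholds, function names, and plane-fitting approach are illustrative assumptions, not the paper's pipeline.

```python
# A hedged sketch of extracting ADA-relevant sidewalk geometry from a reconstructed
# point cloud: fit a plane to a patch, read off running/cross slope, and compare
# against commonly cited ADA limits. Thresholds and function names are illustrative.
import numpy as np

def fit_plane_normal(points: np.ndarray) -> np.ndarray:
    """Unit normal of the least-squares plane through an Nx3 sidewalk patch."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[-1]                              # direction of least variance

def slopes_from_normal(normal: np.ndarray, travel_dir_xy: np.ndarray) -> tuple[float, float]:
    """Running slope (%) along the direction of travel and cross slope (%) across it."""
    g = -normal[:2] / normal[2]                # horizontal gradient of the fitted plane
    t = travel_dir_xy / np.linalg.norm(travel_dir_xy)
    c = np.array([-t[1], t[0]])                # perpendicular horizontal direction
    return abs(g @ t) * 100.0, abs(g @ c) * 100.0

def ada_compliant(running_pct: float, cross_pct: float, vertical_disp_mm: float) -> bool:
    # Roughly: 5% running slope, 2% (1:48) cross slope, 6.4 mm (1/4 in) vertical lip.
    return running_pct <= 5.0 and cross_pct <= 2.083 and vertical_disp_mm <= 6.4
```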

22 pages, 3348 KiB  
Article
Comparison of NeRF- and SfM-Based Methods for Point Cloud Reconstruction for Small-Sized Archaeological Artifacts
by Miguel Ángel Maté-González, Roy Yali, Jesús Rodríguez-Hernández, Enrique González-González and Julián Aguirre de Mata
Remote Sens. 2025, 17(14), 2535; https://doi.org/10.3390/rs17142535 - 21 Jul 2025
Viewed by 350
Abstract
This study presents a critical evaluation of image-based 3D reconstruction techniques for small archaeological artifacts, focusing on a quantitative comparison between Neural Radiance Fields (NeRF), its recent Gaussian Splatting (GS) variant, and traditional Structure-from-Motion (SfM) photogrammetry. The research targets artifacts smaller than 5 cm, characterized by complex geometries and reflective surfaces that pose challenges for conventional recording methods. To address the limitations of traditional methods without resorting to the high costs associated with laser scanning, this study explores NeRF and GS as cost-effective and efficient alternatives. A comprehensive experimental framework was established, incorporating ground-truth data obtained using a metrological articulated arm and a rigorous quantitative evaluation based on root mean square (RMS) error, Chamfer distance, and point cloud density. The results indicate that while NeRF outperforms GS in terms of geometric fidelity, both techniques still exhibit lower accuracy compared to SfM, particularly in preserving fine geometric details. Nonetheless, NeRF demonstrates strong potential for rapid, high-quality 3D documentation suitable for visualization and dissemination purposes in cultural heritage. These findings highlight both the current capabilities and limitations of neural rendering techniques for archaeological documentation and suggest promising future research directions combining AI-based models with traditional photogrammetric pipelines. Full article
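The Chamfer distance and RMS error named above are nearest-neighbour statistics between two point clouds. A minimal sketch, assuming SciPy and clouds already expressed in the same metric frame; the exact definitions used in the paper may differ.

```python
# A minimal sketch of symmetric Chamfer distance and RMS cloud-to-cloud error,
# assuming SciPy and two clouds in the same metric frame.
import numpy as np
from scipy.spatial import cKDTree

def chamfer_and_rms(cloud: np.ndarray, reference: np.ndarray) -> tuple[float, float]:
    """cloud: Nx3 reconstructed points, reference: Mx3 ground-truth points."""
    d_cr, _ = cKDTree(reference).query(cloud)   # nearest-neighbour distances cloud -> ref
    d_rc, _ = cKDTree(cloud).query(reference)   # nearest-neighbour distances ref -> cloud
    chamfer = d_cr.mean() + d_rc.mean()         # one common symmetric Chamfer definition
    rms = float(np.sqrt(np.mean(d_cr ** 2)))    # RMS error of the cloud vs. the reference
    return chamfer, rms
```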

17 pages, 610 KiB  
Review
Three-Dimensional Reconstruction Techniques and the Impact of Lighting Conditions on Reconstruction Quality: A Comprehensive Review
by Dimitar Rangelov, Sierd Waanders, Kars Waanders, Maurice van Keulen and Radoslav Miltchev
Lights 2025, 1(1), 1; https://doi.org/10.3390/lights1010001 - 14 Jul 2025
Viewed by 347
Abstract
Three-dimensional (3D) reconstruction has become a fundamental technology in applications ranging from cultural heritage preservation and robotics to forensics and virtual reality. As these applications grow in complexity and realism, the quality of the reconstructed models becomes increasingly critical. Among the many factors that influence reconstruction accuracy, the lighting conditions at capture time remain one of the most influential, yet widely neglected, variables. This review provides a comprehensive survey of classical and modern 3D reconstruction techniques, including Structure from Motion (SfM), Multi-View Stereo (MVS), Photometric Stereo, and recent neural rendering approaches such as Neural Radiance Fields (NeRFs) and 3D Gaussian Splatting (3DGS), while critically evaluating their performance under varying illumination conditions. We describe how lighting-induced artifacts such as shadows, reflections, and exposure imbalances compromise reconstruction quality, and how different approaches attempt to mitigate these effects. Furthermore, we identify fundamental gaps in current research, including the lack of standardized lighting-aware benchmarks and the limited robustness of state-of-the-art algorithms in uncontrolled environments. By synthesizing knowledge across fields, this review aims to provide a deeper understanding of the interplay between lighting and reconstruction and to outline future research directions, emphasizing the need for adaptive, lighting-robust solutions in 3D vision systems. Full article

11 pages, 5143 KiB  
Communication
Bio-Inspired 3D Affordance Understanding from Single Image with Neural Radiance Field for Enhanced Embodied Intelligence
by Zirui Guo, Xieyuanli Chen, Zhiqiang Zheng, Huimin Lu and Ruibin Guo
Biomimetics 2025, 10(6), 410; https://doi.org/10.3390/biomimetics10060410 - 19 Jun 2025
Viewed by 480
Abstract
Affordance understanding, i.e., identifying the operable parts of objects, is crucial for achieving accurate robotic manipulation. Although homogeneous objects for grasping have various shapes, they always share a similar affordance distribution. Based on this fact and inspired by human cognitive processes, we propose AFF-NeRF to address the problem of affordance generation for homogeneous objects. Our method employs deep residual networks to extract the shape and appearance features of objects, enabling it to adapt to diverse homogeneous objects. These features are then integrated into our extended neural radiance field, AFF-NeRF, to generate 3D affordance models for unseen objects from a single image. Our experimental results demonstrate that our approach outperforms baseline methods in generating affordances for unseen views of novel objects without additional training. Additionally, more stable grasps can be obtained by employing the 3D affordance models generated by our method in a grasp generation algorithm. Full article

18 pages, 43879 KiB  
Article
Using AI to Reconstruct and Preserve 3D Temple Art with Old Images
by Naai-Jung Shih
Technologies 2025, 13(6), 229; https://doi.org/10.3390/technologies13060229 - 3 Jun 2025
Viewed by 758
Abstract
How can AI help us connect to the past for conservation, and how can 17-year-old photos support renewed preservation efforts? This research uses AI to link the two through seamless 3D reconstruction of heritage from images of Gongfan Palace, Yunlin, Taiwan. AI-assisted 3D modeling was used to reconstruct details from these images across different 3D platforms, with 3DGS and NeRF models generated by Postshot®, RODIN®, and KIRI Engine®. Mesh and point models created with Zephyr® served as references and were assessed in three sets. The consistent and inconsistent reconstruction results also included AI-assisted modeling outcomes in Stable Diffusion®- and Postshot®-based animations, followed by a 3D assessment and a section-based composition analysis. The AI-assisted workflow concluded with a recursive reconstruction involving 3D models and 2D images. AI supported the 3D modeling process as an alternative approach, producing remarkable structural and visual detail. AI-trained models can be assessed, and their use extended to composition analysis by section. AI-based documentation and interpretation enable new structures and the management of resources, formats, and interfaces as part of continuous preservation efforts. Full article
(This article belongs to the Section Construction Technologies)

19 pages, 8169 KiB  
Article
Exploring the Application of NeRF in Enhancing Post-Disaster Response: A Case Study of the Sasebo Landslide in Japan
by Jinge Zhang, Yan Du, Yujing Jiang, Sunhao Zhang, Hongbin Chen and Dongqi Shang
ISPRS Int. J. Geo-Inf. 2025, 14(6), 218; https://doi.org/10.3390/ijgi14060218 - 30 May 2025
Viewed by 510
Abstract
Rapid acquisition of 3D reconstruction models of landslides is crucial for post-disaster emergency response and rescue operations. This study explores the application potential of Neural Radiance Fields (NeRF) technology for rapid post-disaster site modeling and performs a comparative analysis with traditional photogrammetry methods. Taking a landslide induced by heavy rainfall in Sasebo City, Japan, as a case study, this research utilizes drone-acquired video imagery and employs two different 3D reconstruction techniques to create digital models of the landslide area, comparing them in terms of visual realism and point cloud detail. The results indicate that the high-capacity NeRF model (NeRF 24G) approaches or even surpasses traditional photogrammetry in visual realism in certain scenarios; however, the generated point clouds are inferior in detail to those produced by traditional photogrammetry. Nevertheless, NeRF significantly reduces the modeling time: NeRF 6G can generate a point cloud with accuracy sufficient for engineering use in only 45 min, providing a 3D overview of the disaster site to support emergency response efforts. In the future, integrating the advantages of both methods could enable rapid and precise post-disaster 3D reconstruction. Full article
(This article belongs to the Topic Geotechnics for Hazard Mitigation)

22 pages, 64906 KiB  
Article
Comparative Assessment of Neural Radiance Fields and 3D Gaussian Splatting for Point Cloud Generation from UAV Imagery
by Muhammed Enes Atik
Sensors 2025, 25(10), 2995; https://doi.org/10.3390/s25102995 - 9 May 2025
Viewed by 1491
Abstract
Point clouds continue to be the main data source in 3D modeling studies with unmanned aerial vehicle (UAV) images. Structure-from-Motion (SfM) and Multi-View Stereo (MVS) have high time costs for point cloud generation, especially on large datasets. For this reason, state-of-the-art methods such as Neural Radiance Fields (NeRF) and 3D Gaussian Splatting (3DGS) have emerged as powerful alternatives for point cloud generation. This paper explores the performance of the NeRF and 3DGS methods in generating point clouds from UAV images. For this purpose, the Nerfacto, Instant-NGP, and Splatfacto methods developed in the Nerfstudio framework were used. The obtained point clouds were evaluated against a reference point cloud produced with the photogrammetric method. The effects of image size and iteration number on the performance of the algorithms were investigated in two different study areas. According to the results, Splatfacto demonstrates promising capabilities in addressing challenges related to scene complexity, rendering efficiency, and accuracy in UAV imagery. Full article
(This article belongs to the Special Issue Stereo Vision Sensing and Image Processing)
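Comparing a NeRF- or 3DGS-derived point cloud against a photogrammetric reference, as done above, typically requires aligning the clouds first and then measuring cloud-to-cloud distances. A hedged sketch with Open3D follows; the file names and ICP settings are illustrative assumptions, not the paper's workflow.

```python
# A hedged sketch: align a generated cloud to a photogrammetric reference with ICP,
# then report cloud-to-cloud error statistics. File names and settings are illustrative.
import numpy as np
import open3d as o3d

gen = o3d.io.read_point_cloud("splatfacto_points.ply")          # hypothetical export
ref = o3d.io.read_point_cloud("photogrammetry_reference.ply")   # hypothetical reference

# Refine the alignment so distances are measured in a common frame.
icp = o3d.pipelines.registration.registration_icp(
    gen, ref, max_correspondence_distance=0.5,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
gen.transform(icp.transformation)

# Per-point distance of the generated cloud to the reference.
dists = np.asarray(gen.compute_point_cloud_distance(ref))
print(f"mean error: {dists.mean():.3f}  RMS: {np.sqrt((dists ** 2).mean()):.3f}")
```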

23 pages, 35780 KiB  
Article
SatGS: Remote Sensing Novel View Synthesis Using Multi-Temporal Satellite Images with Appearance-Adaptive 3DGS
by Nan Bai, Anran Yang, Hao Chen and Chun Du
Remote Sens. 2025, 17(9), 1609; https://doi.org/10.3390/rs17091609 - 1 May 2025
Viewed by 738
Abstract
Novel view synthesis of remote sensing scenes from satellite images is a meaningful but challenging task. Due to the wide temporal span of image acquisition, satellite image collections often exhibit significant appearance variations, such as seasonal changes and shadow movements, as well as transient objects, making it difficult to reconstruct the original scene accurately. Previous work has noted that a large amount of image variation in satellite images is caused by changing light conditions. To address this, researchers have proposed incorporating the direction of solar rays into neural radiance fields (NeRF) to model the amount of sunlight reaching each point in the scene. However, this approach fails to effectively account for seasonal variations and suffers from a long training time and slow rendering speeds due to the need to evaluate numerous samples from the radiance field for each pixel. To achieve fast, efficient, and high-quality novel view synthesis for multi-temporal satellite scenes, we propose SatGS, a novel method that leverages 3D Gaussian points for scene reconstruction with an appearance-adaptive adjustment strategy. This strategy enables our model to adaptively adjust the seasonal appearance features and shadow regions of the rendered images based on the appearance characteristics of the training images and solar angles. Additionally, the impact of transient objects is mitigated through the use of visibility maps and uncertainty optimization. Experiments conducted on WorldView-3 images demonstrate that SatGS not only renders superior image quality compared to existing state-of-the-art methods but also surpasses them in rendering speed, showcasing its potential for practical applications in remote sensing. Full article
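The rendering-speed limitation attributed to NeRF above comes from the per-pixel volume-rendering quadrature: each ray is sampled many times and the samples are alpha-composited into one colour. A minimal PyTorch sketch of that compositing step follows; it is illustrative, not SatGS code.

```python
# A minimal sketch of NeRF-style alpha compositing along a single ray.
import torch

def composite_ray(sigmas: torch.Tensor, colors: torch.Tensor, deltas: torch.Tensor) -> torch.Tensor:
    """sigmas: (S,) densities, colors: (S, 3) RGB, deltas: (S,) inter-sample distances."""
    alphas = 1.0 - torch.exp(-sigmas * deltas)                   # opacity of each ray segment
    trans = torch.cumprod(1.0 - alphas + 1e-10, dim=0)
    trans = torch.cat([torch.ones_like(trans[:1]), trans[:-1]])  # transmittance to each sample
    weights = alphas * trans
    return (weights[:, None] * colors).sum(dim=0)                # final pixel colour
```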

25 pages, 15523 KiB  
Article
Comparative Analysis of Novel View Synthesis and Photogrammetry for 3D Forest Stand Reconstruction and Extraction of Individual Tree Parameters
by Guoji Tian, Chongcheng Chen and Hongyu Huang
Remote Sens. 2025, 17(9), 1520; https://doi.org/10.3390/rs17091520 - 25 Apr 2025
Cited by 1 | Viewed by 1004
Abstract
The accurate and efficient 3D reconstruction of trees is beneficial for urban forest resource assessment and management. Close-range photogrammetry (CRP) is widely used in the 3D reconstruction of forest scenes, but in practical forestry applications, challenges such as low reconstruction efficiency and poor reconstruction quality persist. Recently, novel view synthesis (NVS) technologies such as neural radiance fields (NeRF) and 3D Gaussian splatting (3DGS) have shown great potential for the 3D reconstruction of plants from a limited number of images. However, existing research typically focuses on small plants in orchards or on individual trees, and it remains uncertain whether this technology can be effectively applied to larger, more complex stands or forest scenes. In this study, we collected sequential images of urban forest plots with varying levels of complexity using imaging devices with different resolutions (smartphone and UAV cameras). The plots included one with sparse, leafless trees and another with dense foliage and more occlusions. We then performed dense reconstruction of the forest stands using the NeRF and 3DGS methods and compared the resulting point cloud models with those obtained through photogrammetric reconstruction and laser scanning. The results show that, compared to the photogrammetric method, NVS methods have a significant advantage in reconstruction efficiency. The photogrammetric method is suitable for relatively simple forest stands but is less adaptable to complex ones, producing tree point cloud models with issues such as excessive canopy noise and incorrectly reconstructed trees with duplicated trunks and canopies. In contrast, NeRF is better adapted to more complex forest stands, yielding tree point clouds of the highest quality with more detailed trunk and canopy information, although it can produce reconstruction errors in the ground area when the input views are limited. The 3DGS method has a relatively poor capability to generate dense point clouds, resulting in models with low point density, particularly sparse points in the trunk areas, which affects the accuracy of diameter at breast height (DBH) estimation. Tree height and crown diameter can be extracted from the point clouds reconstructed by all three methods, with NeRF achieving the highest accuracy for tree height; however, the accuracy of DBH extracted from photogrammetric point clouds remains higher than that from NeRF point clouds. Meanwhile, tree parameters extracted from reconstructions of higher-resolution drone images with varied perspectives are more accurate than those from ground-level smartphone images. These findings confirm that NVS methods have significant application potential for the 3D reconstruction of urban forests. Full article
(This article belongs to the Section AI Remote Sensing)
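DBH estimation, noted above as sensitive to trunk point density, is often done by fitting a circle to a thin horizontal slice of trunk points at breast height. A hedged sketch using an algebraic (Kasa) circle fit follows; the 1.3 m slice height and tolerance are illustrative assumptions, not the paper's procedure.

```python
# A hedged sketch of DBH estimation from a single-tree point cloud via a Kasa circle fit.
import numpy as np

def estimate_dbh(tree_points: np.ndarray, ground_z: float, half_width: float = 0.05) -> float:
    """tree_points: Nx3 cloud of one tree in metres; returns the diameter in metres."""
    height = tree_points[:, 2] - ground_z
    slice_xy = tree_points[np.abs(height - 1.3) < half_width, :2]  # cross-section at 1.3 m

    # Kasa fit: x^2 + y^2 = 2*a*x + 2*b*y + c, with radius^2 = c + a^2 + b^2.
    x, y = slice_xy[:, 0], slice_xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x ** 2 + y ** 2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return float(2.0 * np.sqrt(c + a ** 2 + b ** 2))
```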

22 pages, 13917 KiB  
Article
Pruning Branch Recognition and Pruning Point Localization for Walnut (Juglans regia L.) Trees Based on Point Cloud Semantic Segmentation
by Wei Zhu, Xiaopeng Bai, Daochun Xu and Wenbin Li
Agriculture 2025, 15(8), 817; https://doi.org/10.3390/agriculture15080817 - 9 Apr 2025
Cited by 2 | Viewed by 703
Abstract
Intelligent pruning technology plays a significant role in reducing management costs and improving operational efficiency. In this study, a branch recognition and pruning point localization method was proposed for dormant walnut (Juglans regia L.) trees. First, 3D point clouds of walnut trees were reconstructed from multi-view images using Neural Radiance Fields (NeRFs). Second, an improved Walnut-PointNet was used to segment the walnut tree into Trunk, Branch, and Calibration categories. Next, individual pruning branches were extracted by cluster analysis, and pruning rules were adjusted by classifying branches based on length. Finally, Principal Component Analysis (PCA) was used for length extraction, and pruning points were determined based on the pruning rules. Walnut-PointNet achieved an OA of 93.39%, an ACC of 95.29%, and an mIoU of 0.912 on the walnut tree dataset. The mean absolute errors in length extraction for the short-growing branch group and the water sprout group were 28.04 mm and 50.11 mm, respectively. The average success rate of pruning point recognition reached 89.33%, and the total time for pruning branch recognition and pruning point localization for an entire tree was approximately 16 s. This study supports the development of intelligent pruning for walnut trees. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
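The PCA-based length extraction described above amounts to projecting a segmented branch's points onto their principal axis and taking the extent of the projection. A minimal NumPy sketch follows; it is illustrative only, not the paper's Walnut-PointNet pipeline.

```python
# A minimal sketch of PCA-based branch length extraction.
import numpy as np

def branch_length(points: np.ndarray) -> float:
    """points: Nx3 cloud of a single clustered pruning branch."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    axis = vt[0]                       # principal axis (largest-variance direction)
    proj = centered @ axis
    return float(proj.max() - proj.min())
```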

39 pages, 49962 KiB  
Review
Learning-Based 3D Reconstruction Methods for Non-Collaborative Surfaces—A Metrological Evaluation
by Ziyang Yan, Nazanin Padkan, Paweł Trybała, Elisa Mariarosaria Farella and Fabio Remondino
Metrology 2025, 5(2), 20; https://doi.org/10.3390/metrology5020020 - 3 Apr 2025
Viewed by 3104
Abstract
Non-collaborative (i.e., reflective, transparent, metallic, etc.) surfaces are common in industrial production processes, where 3D reconstruction methods are applied for quantitative quality control inspections. Although the use or combination of photogrammetry and photometric stereo performs well for well-textured or partially textured objects, it usually produces unsatisfactory 3D reconstruction results on non-collaborative surfaces. To improve 3D inspection performances, this paper investigates emerging learning-based surface reconstruction methods, such as Neural Radiance Fields (NeRF), Multi-View Stereo (MVS), Monocular Depth Estimation (MDE), Gaussian Splatting (GS) and image-to-3D generative AI as potential alternatives for industrial inspections. A comprehensive evaluation dataset with several common industrial objects was used to assess methods and gain deeper insights into the applicability of the examined approaches for inspections in industrial scenarios. In the experimental evaluation, geometric comparisons were carried out between the reference data and learning-based reconstructions. The results indicate that no method can outperform all the others across all evaluations. Full article

20 pages, 8973 KiB  
Article
UE-SLAM: Monocular Neural Radiance Field SLAM with Semantic Mapping Capabilities
by Yuquan Zhang, Guangan Jiang, Mingrui Li and Guosheng Feng
Symmetry 2025, 17(4), 508; https://doi.org/10.3390/sym17040508 - 27 Mar 2025
Viewed by 1028
Abstract
Neural Radiance Fields (NeRF) have transformed 3D reconstruction by enabling high-fidelity scene generation from sparse views. However, existing neural SLAM systems face challenges such as limited scene understanding and heavy reliance on depth sensors. We propose UE-SLAM, a real-time monocular SLAM system integrating semantic segmentation, depth fusion, and robust tracking modules. By leveraging the inherent symmetry between semantic segmentation and depth estimation, UE-SLAM utilizes DINOv2 for instance segmentation and combines monocular depth estimation, radiance field-rendered depth, and an uncertainty framework to produce refined proxy depth. This approach enables high-quality semantic mapping and eliminates the need for depth sensors. Experiments on benchmark datasets demonstrate that UE-SLAM achieves robust semantic segmentation, detailed scene reconstruction, and accurate tracking, significantly outperforming existing monocular SLAM methods. The modular and symmetrical architecture of UE-SLAM ensures a balance between computational efficiency and reconstruction quality, aligning with the thematic focus of symmetry in engineering and computational systems. Full article
(This article belongs to the Section Engineering and Materials)
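The refined proxy depth described above combines monocular depth, radiance-field-rendered depth, and an uncertainty framework. One generic way to express such a fusion is inverse-variance weighting, sketched below; this is an assumed illustration of the general technique, not UE-SLAM's actual formulation.

```python
# A generic sketch of uncertainty-weighted (inverse-variance) depth fusion.
import torch

def fuse_depths(d_mono, var_mono, d_render, var_render, eps: float = 1e-8):
    """All inputs are HxW tensors; returns the fused proxy depth and its variance."""
    w_mono = 1.0 / (var_mono + eps)
    w_render = 1.0 / (var_render + eps)
    proxy = (w_mono * d_mono + w_render * d_render) / (w_mono + w_render)
    fused_var = 1.0 / (w_mono + w_render)      # lower than either input variance
    return proxy, fused_var
```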

36 pages, 8602 KiB  
Article
Multi-Agent Mapping and Tracking-Based Electrical Vehicles with Unknown Environment Exploration
by Chafaa Hamrouni, Aarif Alutaybi and Ghofrane Ouerfelli
World Electr. Veh. J. 2025, 16(3), 162; https://doi.org/10.3390/wevj16030162 - 11 Mar 2025
Viewed by 833
Abstract
This research presents an intelligent, environment-aware navigation framework for smart electric vehicles (EVs), focusing on multi-agent mapping, real-time obstacle recognition, and adaptive route optimization. Unlike traditional navigation systems that primarily minimize cost and distance, this research emphasizes how EVs perceive, map, and interact with their surroundings. Using a distributed mapping approach, multiple EVs collaboratively construct a topological representation of their environment, enhancing spatial awareness and adaptive path planning. Neural Radiance Fields (NeRFs) and machine learning models are employed to improve situational awareness, reduce positional tracking errors, and increase mapping accuracy by integrating real-time traffic conditions, battery levels, and environmental constraints. The system intelligently balances delivery speed and energy efficiency by dynamically adjusting routes based on urgency, congestion, and battery constraints. When rapid deliveries are required, the algorithm prioritizes faster routes, whereas, for flexible schedules, it optimizes energy conservation. This dynamic decision making ensures optimal fleet performance by minimizing energy waste and reducing emissions. The framework further enhances sustainability by integrating an adaptive optimization model that continuously refines EV paths in response to real-time changes in traffic flow and charging station availability. By seamlessly combining real-time route adaptation with energy-efficient decision making, the proposed system supports scalable and sustainable EV fleet operations. The ability to dynamically optimize travel paths ensures minimal energy consumption while maintaining high operational efficiency. Experimental validation confirms that this approach not only improves EV navigation and obstacle avoidance but also significantly contributes to reducing emissions and enhancing the long-term viability of smart EV fleets in rapidly changing environments. Full article
(This article belongs to the Special Issue Design Theory, Method and Control of Intelligent and Safe Vehicles)
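The urgency/energy trade-off described above can be expressed, in its simplest form, as a weighted edge cost handed to a shortest-path search. A hedged sketch with NetworkX follows; the edge attribute names and the linear weighting are illustrative assumptions, not the paper's multi-agent algorithm.

```python
# A hedged sketch: balance travel time against energy use via a weighted edge cost.
import networkx as nx

def plan_route(G: nx.DiGraph, src, dst, urgency: float):
    """urgency in [0, 1]: 1.0 favours travel time, 0.0 favours energy use."""
    def cost(u, v, attrs):
        # Hypothetical edge attributes: travel_time_s, energy_Wh.
        return urgency * attrs["travel_time_s"] + (1.0 - urgency) * attrs["energy_Wh"]
    return nx.shortest_path(G, src, dst, weight=cost)
```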
