Search Results (484)

Search Parameters:
Keywords = 3D virtual reconstruction

20 pages, 4633 KB  
Article
Teleoperation System for Service Robots Using a Virtual Reality Headset and 3D Pose Estimation
by Tiago Ribeiro, Eduardo Fernandes, António Ribeiro, Carolina Lopes, Fernando Ribeiro and Gil Lopes
Sensors 2026, 26(2), 471; https://doi.org/10.3390/s26020471 - 10 Jan 2026
Abstract
This paper presents an immersive teleoperation framework for service robots that combines real-time 3D human pose estimation with a Virtual Reality (VR) interface to support intuitive, natural robot control. The operator is tracked using MediaPipe for 2D landmark detection and an Intel RealSense D455 RGB-D (Red-Green-Blue plus Depth) camera for depth acquisition, enabling 3D reconstruction of key joints. Joint angles are computed using efficient vector operations and mapped to the kinematic constraints of an anthropomorphic arm on the CHARMIE service robot. A VR-based telepresence interface provides stereoscopic video and head-motion-based view control to improve situational awareness during manipulation tasks. Experiments in real-world object grasping demonstrate reliable arm teleoperation and effective telepresence; however, vision-only estimation remains limited for axial rotations (e.g., elbow and wrist yaw), particularly under occlusions and unfavorable viewpoints. The proposed system provides a practical pathway toward low-cost, sensor-driven, immersive human–robot interaction for service robotics in dynamic environments.
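The joint-angle computation the abstract describes (efficient vector operations over reconstructed 3D keypoints) can be sketched as follows. The function name and the three-point convention are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (radians) formed by 3D keypoints a-b-c.

    Hypothetical helper: the standard arccos of the normalized dot
    product between the two limb vectors meeting at the joint.
    """
    u = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    v = np.asarray(c, dtype=float) - np.asarray(b, dtype=float)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cos, -1.0, 1.0))  # clip guards rounding noise

# Right angle: elbow at the origin, shoulder along x, wrist along y.
angle = joint_angle([1, 0, 0], [0, 0, 0], [0, 1, 0])
```

A mapping stage would then clamp each such angle to the anthropomorphic arm's joint limits before commanding the robot.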
(This article belongs to the Section Intelligent Sensors)

29 pages, 37031 KB  
Article
Digital Replicas and 3D Virtual Reconstructions for Large Excavations in Urban Archaeology: Methods, Tools, and Techniques Drawn from the “Metro C” Case Study in Rome
by Emanuel Demetrescu, Daniele Ferdani, Bruno Fanini, Enzo D’Annibale, Simone Berto, Simona Morretta and Rossella Rea
Remote Sens. 2026, 18(2), 203; https://doi.org/10.3390/rs18020203 - 8 Jan 2026
Abstract
This contribution presents an integrated methodological pipeline for digital documentation and virtual reconstruction of large-scale urban archaeological excavations, developed through the Amba Aradam case study (Metro C line, Rome). The excavation revealed a 2nd-century A.D. military complex extending over 4770 m² at depths reaching 20 m, documented through multiple photogrammetric campaigns (2016–2018) as structures were progressively excavated and removed. We established an empirically validated texture density standard (1.26 mm²/texel) for photorealistic digital replicas suitable for immersive HMD and desktop exploration, with an explicit texture density calculation formula ensuring reproducibility. The temporal integration workflow merged 3D snapshots acquired across three excavation campaigns while maintaining geometric and chromatic consistency. Semantic documentation, through the Extended Matrix framework, recorded Virtual Stratigraphic Units linking archaeological evidence, comparative sources, and interpretative reasoning (paradata) for transparent virtual reconstruction. The complete pipeline, implemented through open-source 3DSC 1.4 and EMtools add-ons for Blender and Metashape v0.9 (available on GitHub), addresses specific challenges of documenting complex stratigraphic contexts within active construction environments where in situ preservation is not feasible. The spatial integration of the digital replica with previous archaeological data illuminated the urban evolution of Rome’s military topography during the 2nd–3rd centuries A.D., demonstrating the essential role of advanced digital documentation in contemporary urban archaeology.
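A texture-density metric of this kind amounts to mesh surface area divided by the number of usable texels. The paper gives its own explicit formula; the helper below is only an assumed form with hypothetical parameter names:

```python
def texture_density_mm2_per_texel(surface_area_mm2, tex_width, tex_height,
                                  uv_fill_ratio=1.0):
    """Real-world area covered by one texel (mm²/texel).

    Assumed form: mesh surface area divided by the texels actually
    occupied in the UV atlas. uv_fill_ratio is the fraction of the
    atlas used by UV islands (all names are illustrative).
    """
    usable_texels = tex_width * tex_height * uv_fill_ratio
    return surface_area_mm2 / usable_texels

# A 10 m² wall baked to a fully used 4096×4096 atlas:
density = texture_density_mm2_per_texel(10_000_000, 4096, 4096)
print(round(density, 3))  # 0.596
```

Lower values mean finer texture detail; a standard like 1.26 mm²/texel fixes the target so that replicas from different campaigns stay visually consistent.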

18 pages, 6560 KB  
Article
Beyond Traditional Learning with a New Reality: Geoscience Education Enhanced by 3D Reconstruction, Virtual Reality, and a Large Display
by Andreia Santos, Bernardo Marques, João Martins, Rubén Sobral, Carlos Ferreira, Fernando Almeida, Paulo Dias and Beatriz Sousa Santos
Geosciences 2026, 16(1), 28; https://doi.org/10.3390/geosciences16010028 - 4 Jan 2026
Abstract
Nowadays, despite advancements in several technological areas, the education process for many subjects shows minimal evolution from the approaches used in prior years. In light of this, some fields struggle to capture students’ attention and motivation, particularly when the subject addresses remote locations that students are unable to visit and relate to. Therefore, an opportunity exists to explore novel technologies for such scenarios. This work introduces an educational approach that integrates 3D Reconstruction, Virtual Reality (VR), and a Large Display to enrich Geoscience learning at the university level. In this teacher-centric approach, virtual replicas of real-world geological sites can be manipulated, creating an immersive yet asymmetric collaborative environment for students in the classroom. The teacher’s VR interactions are mirrored on a large display, enabling clear demonstrations of complex concepts. This allows students, who cannot physically visit these locations, to explore and understand the sites more deeply. To evaluate the effectiveness of this approach, a user study was conducted with 20 participants from Geoscience and Computer Science disciplines, comparing the VR-based method with a conventional approach. Analysis of the collected data suggests that, across multiple relevant dimensions, participants generally favored the VR condition, highlighting its potential for enhancing engagement and comprehension.

19 pages, 3159 KB  
Article
Collaborative Obstacle Avoidance for UAV Swarms Based on Improved Artificial Potential Field Method
by Yue Han, Luji Guo, Chenbo Zhao, Meini Yuan and Pengyun Chen
Eng 2026, 7(1), 10; https://doi.org/10.3390/eng7010010 - 29 Dec 2025
Abstract
This paper addresses the issues of target unreachability and local optima in traditional artificial potential field (APF) methods for UAV swarm path planning by proposing an improved collaborative obstacle avoidance algorithm. By introducing a virtual target position function to reconstruct the repulsive field model, the repulsive force exponentially decays as the UAV approaches the target, effectively resolving the problem where excessive obstacle repulsion prevents UAVs from reaching the goal. Additionally, we design a dynamic virtual target point generation mechanism based on mechanical state detection to automatically create temporary target points when UAVs are trapped in local optima, thereby breaking force equilibrium. For multi-UAV collaboration, intra-formation UAVs are treated as dynamic obstacles, and a 3D repulsive field model is established to avoid local optima in planar scenarios. Combined with a leader–follower control strategy, a hybrid potential field position controller is designed to enable rapid formation reconfiguration post-obstacle avoidance. Simulation results demonstrate that the proposed improved APF method ensures safe obstacle avoidance and formation maintenance for UAV swarms in complex environments, significantly enhancing path planning reliability and effectiveness.
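The key modification — repulsion that decays exponentially as the UAV nears the goal, so obstacles can no longer make the target unreachable — can be sketched roughly as follows. The decay factor, gains, and influence radius are assumptions, not the paper's exact formulation:

```python
import numpy as np

def apf_force(pos, goal, obstacles, k_att=1.0, k_rep=2.0, d0=3.0, lam=1.0):
    """Net artificial-potential-field force on one UAV (3D sketch).

    The repulsive term is scaled by (1 - exp(-d_goal / lam)), an
    assumed factor that vanishes at the goal, echoing the paper's
    idea of repulsion decaying as the UAV approaches the target.
    """
    pos, goal = np.asarray(pos, float), np.asarray(goal, float)
    d_goal = np.linalg.norm(goal - pos)
    force = k_att * (goal - pos)              # linear attraction toward the goal
    decay = 1.0 - np.exp(-d_goal / lam)       # -> 0 exactly at the goal
    for obs in obstacles:
        diff = pos - np.asarray(obs, float)
        d = np.linalg.norm(diff)
        if 0.0 < d < d0:                      # repel only inside influence radius d0
            force += decay * k_rep * (1.0 / d - 1.0 / d0) / d**2 * (diff / d)
    return force

# An obstacle between UAV and goal weakens, but does not reverse, the pull:
f = apf_force([0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [[1.0, 0.0, 0.0]])
```

At the goal the decay factor is zero, so no residual obstacle repulsion can hold the UAV off its target — the classic unreachability failure of plain APF.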

12 pages, 6483 KB  
Article
Synergistic Triad of Mixed Reality, 3D Printing, and Navigation in Complex Craniomaxillofacial Reconstruction
by Elijah Zhengyang Cai, Harry Ho Man Ng, Yujia Gao, Kee Yuan Ngiam, Catherine Tong How Lee and Thiam Chye Lim
Bioengineering 2026, 13(1), 10; https://doi.org/10.3390/bioengineering13010010 - 23 Dec 2025
Abstract
The craniofacial skeleton is a complex three-dimensional structure, and major reconstructive cases remain challenging. We describe a synergistic approach combining intra-operative navigation, three-dimensionally (3D) printed skull models, and mixed reality (MR) to improve predictability in surgical outcomes. A patient with previously repaired bilateral cleft lip and palate, significant midfacial retrusion, and a large maxillary alveolar gap underwent segmental Le Fort I osteotomy and advancement. Preoperative virtual planning was performed, and reference templates were uploaded onto MR glasses. Intra-operatively, the MR glasses projected the templates as holograms onto the patient’s skull, guiding osteotomy line marking and validating bony segment movement, which was confirmed with conventional navigation. The 3D-printed skull model facilitated dissection and removal of intervening bony spicules. Preoperative planning proceeded seamlessly across software platforms. Osteotomy lines marked with MR showed good concordance with conventional navigation, and final segment positioning was accurately validated. Postoperative outcomes were satisfactory, with re-established occlusion and closure of the maxillary alveolar gap. The combined use of conventional navigation, 3D-printed models, and MR is feasible and allows safe integration of MR into complex craniofacial reconstruction while further validation of the technology is ongoing.

13 pages, 2662 KB  
Article
Enhanced Drilling Accuracy in Mandibular Reconstruction with Fibula Free Flap Using a Novel Drill-Fitting Hole Guide: A 3D Simulation-Based In Vitro Comparison with Conventional Guide Systems
by Bo-Yeon Hwang, Chandong Jeen, Junha Kim and Jung-Woo Lee
Appl. Sci. 2025, 15(24), 13144; https://doi.org/10.3390/app152413144 - 14 Dec 2025
Abstract
Virtual planning and patient-specific surgical guides have become standard practice to achieve accurate mandibular reconstruction with fibula free flaps. Although these technologies have greatly improved surgical precision, slight deviations may still occur. To further minimize these inaccuracies, we focused on the drilling process and developed a novel drill-fitting hole guide (DFG) system. This in vitro study compared the DFG with two conventional guide designs—a drill-wide hole guide (DWG) and a trocar-fitting hole guide (TFG)—using 3D-printed resin models. Twenty oral and maxillofacial surgeons performed guided drilling with all three guide types, and drilling accuracy and subsequent plate positioning were evaluated using a fully digitized workflow in 3-matic software. Deviations in drill entry points and trajectories were quantified, along with plate overlap ratios (Dice coefficients) and plate angular discrepancies. The DFG achieved the highest accuracy, showing the smallest drilling point deviation (0.17 ± 0.08 mm) and angular deviation (2.41 ± 1.24°), the greatest plate overlap (0.90 ± 0.04), and the lowest plate angular misalignment (0.87 ± 0.59°). Although all guide types yielded clinically acceptable results, the DFG demonstrated significantly higher accuracy. These findings suggest that the drill-guide interface is a key factor in surgical precision that has received limited attention.
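The plate-overlap ratio reported here is a Dice coefficient; for voxelized plate masks it can be computed as below (a generic sketch, not the study's 3-matic workflow):

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice overlap between two boolean volumes (e.g. voxelized plates).

    1.0 means identical occupancy, 0.0 means no shared voxels.
    """
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * intersection / total if total else 1.0

a = np.zeros((4, 4), bool); a[:, :2] = True   # "planned" plate: left half
b = np.zeros((4, 4), bool); b[:, 1:3] = True  # "achieved" plate: middle half
print(dice_coefficient(a, b))  # 0.5
```

A value like the DFG's 0.90 therefore means that 90% of the combined planned-plus-achieved plate volume (by the Dice weighting) is shared between the two.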
(This article belongs to the Special Issue Recent Development and Emerging Trends in Dental Implants)

16 pages, 4166 KB  
Article
Preliminary Study on the Accuracy Comparison Between 3D-Printed Bone Models and Naked-Eye Stereoscopy-Based Virtual Reality Models for Presurgical Molding in Orbital Floor Fracture Repair
by Masato Tsuchiya, Izumi Yasutake, Satoru Tamura, Satoshi Kubo and Ryuichi Azuma
Appl. Sci. 2025, 15(24), 12963; https://doi.org/10.3390/app152412963 - 9 Dec 2025
Abstract
Three-dimensional (3D) printing enables accurate implant pre-shaping in orbital reconstruction but is costly and time-consuming. Naked-eye stereoscopic displays (NEDs) enable virtual implant modeling without fabrication. This study aimed to compare the reproducibility and accuracy of NED-based virtual reality (VR) pre-shaping with conventional 3D-printed models. Two surgeons pre-shaped implants for 11 unilateral orbital floor fractures using both 3D-printed and NED-based VR models with identical computed tomography data. The depth, area, and axis dimensions were measured, and reproducibility and agreement were assessed using intraclass correlation coefficients (ICCs), Bland–Altman analysis, and shape similarity metrics—Hausdorff distance (HD) and root mean square error (RMSE). Intra-rater ICCs were ≥0.80 for all parameters except depth in the VR model. The HD and RMSE revealed no significant differences between 3D (2.64 ± 0.85 mm; 1.02 ± 0.42 mm) and VR (3.14 ± 1.18 mm; 1.24 ± 0.53 mm). Inter-rater ICCs were ≥0.80 for the area and axes in both modalities, while depth remained low. Between modalities, no significant differences were found; HD and RMSE were 2.95 ± 0.94 mm and 1.28 ± 0.49 mm. The NED-based VR pre-shaping achieved reproducibility and dimensional agreement comparable to 3D printing, suggesting a feasible cost- and time-efficient alternative for orbital reconstruction. These preliminary findings suggest that NED-based pre-shaping may be feasible; however, larger studies are required to confirm whether VR can achieve performance comparable to 3D-printed models.
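The two shape-similarity metrics used, Hausdorff distance and RMSE, can be computed for sampled surface points as below — a brute-force sketch with hypothetical function names, not the study's measurement pipeline:

```python
import numpy as np

def hausdorff_distance(p, q):
    """Symmetric Hausdorff distance between two (N,3)/(M,3) point sets:
    the worst-case nearest-neighbor distance in either direction."""
    d = np.linalg.norm(np.asarray(p, float)[:, None]
                       - np.asarray(q, float)[None, :], axis=-1)
    return float(max(d.min(axis=1).max(), d.min(axis=0).max()))

def nn_rmse(p, q):
    """RMSE of nearest-neighbor distances from points p onto set q."""
    d = np.linalg.norm(np.asarray(p, float)[:, None]
                       - np.asarray(q, float)[None, :], axis=-1)
    return float(np.sqrt(np.mean(d.min(axis=1) ** 2)))

p = [[0, 0, 0], [1, 0, 0]]
q = [[0, 0, 0], [1, 3, 0]]
print(hausdorff_distance(p, q))  # 3.0
```

Hausdorff captures the single worst local deviation between two pre-shaped implants, while the nearest-neighbor RMSE summarizes average surface disagreement — which is why the study reports both.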
(This article belongs to the Special Issue Virtual Reality (VR) in Healthcare)

14 pages, 1993 KB  
Article
Reliability of Immersive Virtual Reality for Pre-Procedural Planning for TAVI: A CT-Based Validation
by Nicole Carabetta, Giuseppe Panuccio, Salvatore Giordano, Sabato Sorrentino, Giuseppe Antonio Mazza, Jolanda Sabatino, Giovanni Canino, Isabella Leo, Nadia Salerno, Antonio Strangio, Maria Petullà, Daniele Torella and Salvatore De Rosa
J. Cardiovasc. Dev. Dis. 2025, 12(12), 481; https://doi.org/10.3390/jcdd12120481 - 8 Dec 2025
Abstract
Background. Accurate anatomical assessment is essential for pre-procedural planning in structural heart disease. Advanced 3D imaging could offer improved visualization for more accurate reconstruction. We assessed the performance of a novel immersive 3D virtual reality (VEA) for the pre-procedural planning of transcatheter aortic valve implantation (TAVI) candidates. Methods. Measurement of cardiac-gated contrast-enhanced computed tomography (CT) scans was performed with the novel VEA and established tools: 3Mensio and Horos. Results. Fifty consecutive patients were included. Annular and LVOT measurements obtained with VEA were strongly correlated with those derived from standard CT analysis. The intraclass correlation coefficient (ICC) confirmed excellent consistency for annular measurements (ICC = 0.93), while the concordance correlation coefficient indicated very good overall agreement (CCC = 0.83, 95% CI 0.73–0.90). Similarly, LVOT measurements obtained with VEA showed strong correlation with CT values, with good consistency (ICC = 0.90) and good overall agreement (CCC = 0.77, 95% CI 0.64–0.86). VEA-based planning improved prosthesis size selection accuracy, achieving higher concordance with implanted valves and a significant net reclassification gain over conventional CT. Conclusions. Given the increasing use of advanced 3D cardiac imaging technologies, understanding their diagnostic accuracy to guide pre-procedural planning of TAVI is paramount. In our study, VEA provided reliable assessment of aortic root anatomy for TAVI planning. This novel 3D software provides accurate, patient-specific reconstructions of the aortic root and surrounding structures that may optimize valve sizing, improve procedural safety and enhance procedural outcomes. This provides a rationale for future studies to assess the procedural benefit derived from a three-dimensional assessment of the aortic valve geometry.
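The two agreement statistics quoted differ in what they penalize: the ICC measures consistency, while Lin's concordance correlation coefficient (CCC) also penalizes systematic bias between methods. A minimal CCC implementation (a generic sketch, not the study's statistics code):

```python
import numpy as np

def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient for paired measurements.

    Equals 1 only for perfect agreement; a constant offset between
    methods lowers it via the (mean difference)² term in the denominator.
    """
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    cov = np.mean((x - mx) * (y - my))  # population covariance
    return 2.0 * cov / (x.var() + y.var() + (mx - my) ** 2)

# Identical annulus diameters (mm) from two tools agree perfectly:
ccc_same = concordance_ccc([20, 22, 25], [20, 22, 25])
```

With a fixed bias — say one tool always reads 1 mm larger — the Pearson correlation stays 1 but the CCC drops below 1, which is exactly the behavior that makes it useful for method-comparison studies like this one.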

11 pages, 3093 KB  
Review
Artificial Intelligence and 3D Reconstruction in Complex Hepato-Pancreato-Biliary (HPB) Surgery: A Comprehensive Review of the Literature
by Andreas Panagakis, Ioannis Katsaros, Maria Sotiropoulou, Adam Mylonakis, Markos Despotidis, Aristeidis Sourgiadakis, Panagiotis Sakarellos, Stylianos Kapiris, Chrysovalantis Vergadis, Dimitrios Schizas, Evangelos Felekouras and Michail Vailas
J. Pers. Med. 2025, 15(12), 610; https://doi.org/10.3390/jpm15120610 - 8 Dec 2025
Abstract
Background: The management of complex hepato-pancreato-biliary (HPB) pathologies demands exceptional surgical precision. Traditional two-dimensional imaging has limitations in depicting intricate anatomical relationships, potentially complicating preoperative planning. This review explores the synergistic application of three-dimensional (3D) reconstruction and artificial intelligence (AI) to support surgical decision-making in complex HPB cases. Methods: This narrative review synthesized the existing literature on the applications, benefits, limitations, and implementation challenges of 3D reconstruction and AI technologies in HPB surgery. Results: The literature suggests that 3D reconstruction provides patient-specific, interactive models that significantly improve surgeons’ understanding of tumor resectability and vascular anatomy, contributing to reduced operative time and blood loss. Building upon this, AI algorithms can automate image segmentation for 3D modeling, enhance diagnostic accuracy, and offer predictive analytics for postoperative complications, such as liver failure. By analyzing large datasets, AI can identify subtle risk factors to guide clinical decision-making. Conclusions: The convergence of 3D visualization and AI-driven analytics is contributing to an emerging paradigm shift in HPB surgery. This combination may foster a more personalized, precise, and data-informed surgical approach, particularly in anatomically complex or high-risk cases. However, current evidence is heterogeneous and largely observational, underscoring the need for prospective multicenter validation before routine implementation.

19 pages, 4568 KB  
Article
Role of Computer-Assisted Surgery in the Management of Pediatric Orbital Tumors: Insights from a Leading Referral Center
by Elena Gomez Garcia, Maria Granados, Javier M. Saceda, Maria N. Moreno, Jorge Zamorano, Jose L. Cebrian and Susana Noval
Children 2025, 12(12), 1649; https://doi.org/10.3390/children12121649 - 4 Dec 2025
Abstract
Background/Objectives: Pediatric orbital tumors are rare and complex, requiring multidisciplinary care at specialized centers. Contemporary treatment paradigms emphasize centralized care delivery through experienced multidisciplinary teams to optimize patient outcomes. Recent advances in surgical planning technologies and intraoperative navigation systems have substantially enhanced surgical safety through improvement in tumor resection and reconstruction and reduction in complications, including recurrence of the lesion. Computer-aided surgical technologies enable precise virtual planning, minimally invasive approaches, and more precise reconstruction methods when necessary by means of patient-specific cutting guides, premolded orbital plates, or individual patient solutions (IPS) prostheses. Three-dimensional biomodelling visualizes tumor architecture and aids localization while preserving neurovascular structures, and real-time neuronavigation improves safety and efficacy. Methods: We conducted a retrospective analysis of 98 pediatric patients with orbital tumors treated between 2014 and 2025 at a tertiary center to evaluate the use of computer-assisted surgical technologies and the indications for treatment. Inclusion criteria comprised all cases where computer-assisted techniques were employed. Patients were classified into two groups to analyze the use of the different technologies: Group 1—intraconal or extensive periorbital lesions with eye-sparing intent treated via craniofacial approaches; Group 2—periorbital tumors with orbital wall involvement. Data collected included age, tumor type and location, technology used, adjunctive treatments, and postoperative outcomes. Results: Twelve patients underwent computer-assisted surgery. Technologies employed over the last six years included intraoperative navigation, 3D planning with/without tumor segmentation, orbital-wall reconstruction by mirroring, IPS or titanium mesh bending, and preoperative biomodelling. Patients were grouped by tumor location and treatment goals: Group 1—intraorbital lesions (primarily intraconal or 270–360° involvement), including one case of orbital encephalocele treated transcranially; Group 2—periorbital tumors with orbital-wall destruction, treated mainly via midfacial approaches. Intraoperative navigation was used in 10/12 cases (8/11 with tumor segmentation); in 3 cases with ill-defined margins, navigation localized residual tumor. Virtual surgery predominated in Group 2 (4 patients), with one case in Group 1, combined with cutting guides for margins and IPS prosthesis fitting (two patients: titanium and PEEK). In two cases, virtual plans were performed, STL models printed, and premolded titanium meshes used. No complications related to tumor persistence or orbital disturbance were observed. Conclusions: Advanced surgical technologies substantially enhance safety, efficiency, and outcomes in pediatric orbital tumors. Technology-assisted approaches represent a paradigm shift in this complex field. Additional studies are needed to establish evidence-based protocols for systematic integration of technology in pediatric orbital tumor management.
(This article belongs to the Special Issue Pediatric Oral and Facial Surgery: Advances and Future Challenges)

18 pages, 1972 KB  
Article
Automatic Reconstruction of 3D Building Models from ALS Point Clouds Based on Façade Geometry
by Tingting Zhao, Tao Xiong, Muzi Li and Zhilin Li
ISPRS Int. J. Geo-Inf. 2025, 14(12), 462; https://doi.org/10.3390/ijgi14120462 - 25 Nov 2025
Abstract
Three-dimensional (3D) building models are essential for urban planning, spatial analysis, and virtual simulations. However, most reconstruction methods based on Airborne LiDAR Scanning (ALS) rely primarily on rooftop information, often resulting in distorted footprints and the omission of façade semantics such as windows and doors. To address these limitations, this study proposes an automatic 3D building reconstruction method driven by façade geometry. The proposed method introduces three key contributions: (1) a façade-guided footprint generation strategy that eliminates geometric distortions associated with roof projection methods; (2) robust detection and reconstruction of façade openings, enabling reliable identification of windows and doors even under sparse ALS conditions; and (3) an integrated volumetric modeling pipeline that produces watertight models with embedded façade details, ensuring both structural accuracy and semantic completeness. Experimental results show that the proposed method achieves geometric deviations at the decimeter level and feature recognition accuracy exceeding 97%. On average, the reconstruction time of a single building is 91 s, demonstrating reliable reconstruction accuracy and satisfactory computational performance. These findings highlight the potential of the method as a robust and scalable solution for large-scale ALS-based urban modeling, offering substantial improvements in both structural precision and semantic richness compared with conventional roof-based approaches.
(This article belongs to the Special Issue Knowledge-Guided Map Representation and Understanding)

15 pages, 2020 KB  
Article
3D Human Reconstruction from Monocular Vision Based on Neural Fields and Explicit Mesh Optimization
by Kaipeng Wang, Xiaolong Xie, Wei Li, Jie Liu and Zhuo Wang
Electronics 2025, 14(22), 4512; https://doi.org/10.3390/electronics14224512 - 18 Nov 2025
Abstract
Three-dimensional human reconstruction from monocular vision is a key technology in Virtual Reality and digital humans. It aims to recover the 3D structure and pose of the human body from 2D images or video; however, dynamic 3D reconstruction of the human body from monocular views still suffers from low accuracy and remains a challenging problem. This paper proposes a fast reconstruction method based on Instant Human Model (IHM) generation, which achieves highly realistic 3D reconstruction of the human body in arbitrary poses. First, the efficient dynamic human body reconstruction method InstantAvatar is used to learn the shape and appearance of the human body in different poses; however, because it directly uses low-resolution voxels as the canonical-space human representation, it cannot achieve satisfactory reconstruction results across a wide range of datasets. Next, a voxel occupancy grid is initialized in the A-pose, and a voxel attention module is constructed to enhance reconstruction. Finally, the IHM method defines continuous fields on the surface, enabling highly realistic dynamic 3D human reconstruction. Experimental results show that, compared to the representative InstantAvatar method, IHM achieves a 0.1% improvement in SSIM and a 2% improvement in PSNR on the PeopleSnapshot benchmark dataset, demonstrating improvements in both reconstruction quality and detail. Specifically, through voxel attention and adaptive iterative mesh optimization, IHM produces highly realistic 3D mesh models of human bodies in various poses while remaining efficient.
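PSNR, one of the two image-quality metrics reported, is defined directly from the mean squared error between rendered and reference frames; a minimal version (generic, not the paper's evaluation code):

```python
import numpy as np

def psnr(reference, rendered, peak=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, peak]."""
    ref = np.asarray(reference, float)
    out = np.asarray(rendered, float)
    mse = np.mean((ref - out) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# A uniform error of 0.1 on a [0, 1] image gives ~20 dB:
value = psnr(np.zeros((8, 8)), np.full((8, 8), 0.1))
```

Because PSNR is logarithmic in MSE, even a 2% relative gain like the one reported corresponds to a visible reduction in per-pixel rendering error.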
(This article belongs to the Special Issue 3D Computer Vision and 3D Reconstruction)

18 pages, 5061 KB  
Article
Real-Time Live Streaming Framework for Cultural Heritage Using Multi-Camera 3D Motion Capture and Virtual Avatars
by Minjoon Kim, Taemin Hwang and Jaehyuk So
Appl. Sci. 2025, 15(22), 12208; https://doi.org/10.3390/app152212208 - 18 Nov 2025
Abstract
The preservation and digital transmission of cultural heritage have become increasingly vital in the era of immersive media. This study introduces a real-time framework for digitizing and animating traditional performing arts, with a focus on Korean traditional dance as a representative case study. The proposed approach combines three core components: (1) high-fidelity 3D avatar creation through volumetric scanning of performers, costumes, and props; (2) real-time motion capture using multi-camera edge processing; and (3) motion-to-avatar animation that integrates skeletal mapping with physics-based simulation. By transmitting only essential motion keypoints from lightweight edge devices to a central server, the system enables bandwidth-efficient streaming while reconstructing expressive, lifelike 3D avatars. Experiments with eight performers and eight cameras achieved low latency (~200 ms) and minimal network load (<1 Mbps), successfully reproducing the esthetic qualities and embodied gestures of Korean traditional performances in a virtual environment. Beyond its technical contributions, this framework provides a novel pathway for the preservation, dissemination, and immersive re-experiencing of intangible cultural heritage, ensuring that the artistry of traditional dance can be sustained and appreciated in digital form.
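The sub-1 Mbps figure is plausible from first principles: streaming raw 3D keypoints is tiny compared with streaming video. The back-of-envelope below uses assumed values (33 joints, float32 coordinates, 30 fps — none of these numbers are from the paper):

```python
def keypoint_bitrate_kbps(performers, joints=33, coords=3,
                          bytes_per_value=4, fps=30):
    """Rough uplink bitrate (kbit/s) for streaming raw 3D keypoints.

    All defaults are illustrative assumptions, not the paper's values.
    """
    bytes_per_frame = performers * joints * coords * bytes_per_value
    return bytes_per_frame * fps * 8 / 1000.0

# Eight performers, as in the experiments:
rate = keypoint_bitrate_kbps(8)
print(rate)  # 760.32 kbit/s -> under 1 Mbps
```

Even this uncompressed worst case stays under 1 Mbps, consistent with the network load the abstract reports; quantization or delta coding would shrink it further.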
8 pages, 1717 KB  
Proceeding Paper
Design and Implementation of a Virtual Reality Environment for Safety Training in Key Stages of the Brewing Industrial Process
by Kevin Mauricio Quishpe, Ney Medrano and Maria Fernanda Trujillo
Eng. Proc. 2025, 115(1), 8; https://doi.org/10.3390/engproc2025115008 - 15 Nov 2025
Viewed by 395
Abstract
This article addresses the need to modernize industrial safety training in production plants, as traditional methods have proven ineffective and tedious. An immersive virtual environment was developed in Unreal Engine 5 from 3D reconstructions, allowing users to interact with simulated hazards in a virtualized industrial space. The system was validated through surveys of operational personnel, with 100% of respondents accepting the system and recommending its implementation. The results show high perceived realism and usefulness, positioning the environment as a viable alternative to traditional safety training methodologies.
(This article belongs to the Proceedings of The XXXIII Conference on Electrical and Electronic Engineering)
23 pages, 4818 KB  
Article
Multispectral-NeRF: A Multispectral Modeling Approach Based on Neural Radiance Fields
by Hong Zhang, Fei Guo, Zihan Xie and Dizhao Yao
Appl. Sci. 2025, 15(22), 12080; https://doi.org/10.3390/app152212080 - 13 Nov 2025
Viewed by 744
Abstract
3D reconstruction technology generates three-dimensional representations of real-world objects, scenes, or environments from sensor data such as 2D images, with extensive applications in robotics, autonomous vehicles, and virtual reality systems. Traditional image-based 3D reconstruction techniques typically rely on RGB spectral information alone. With advances in sensor technology, additional spectral bands beyond RGB have been increasingly incorporated into 3D reconstruction workflows. Existing methods that integrate these expanded spectral data often suffer from high cost, low accuracy, and poor geometric fidelity. NeRF-based 3D reconstruction can effectively address these issues, producing high-precision, high-quality results. However, NeRF and improved models such as NeRFacto are trained on three-band data and cannot exploit additional spectral bands. To address this problem, we propose Multispectral-NeRF, an enhanced neural architecture derived from NeRF that effectively integrates multispectral information. Our technical contributions are threefold: expanding the hidden-layer dimensionality to accommodate 6-band spectral inputs; redesigning the residual function to compute spectral discrepancies between reconstructed and reference images; and adapting the data-compression modules to handle the increased bit depth of multispectral imagery. Experimental results confirm that Multispectral-NeRF successfully processes multi-band spectral features while accurately preserving the spectral characteristics of the original scenes.
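The residual redesign described above amounts to averaging the photometric loss over all six bands instead of three. A minimal NumPy sketch of such a multi-band residual (an illustrative stand-in, not the paper's exact formulation; the `spectral_residual` name and array shapes are assumptions):

```python
import numpy as np

def spectral_residual(pred, target):
    """Photometric loss averaged over rays and all spectral bands.

    pred, target: (n_rays, n_bands) arrays of band intensities in [0, 1];
    n_bands = 6 for the 6-band inputs described in the abstract. With
    n_bands = 3 this reduces to the standard RGB NeRF photometric loss.
    """
    return float(np.mean((pred - target) ** 2))

# Toy batch: 1024 rays, 6 spectral bands each.
rng = np.random.default_rng(0)
pred = rng.random((1024, 6))
target = rng.random((1024, 6))
loss = spectral_residual(pred, target)
```

Because the loss simply averages over the band axis, the same training loop works for any band count; only the network's output width and the loader's bit-depth handling need to change.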