Search Results (908)

Search Parameters:
Keywords = virtual reconstruction

18 pages, 3402 KB  
Article
Monocular Modeling of Non-Cooperative Space Targets Under Adverse Lighting Conditions
by Hao Chi, Ken Chen and Jiwen Zhang
Aerospace 2025, 12(10), 901; https://doi.org/10.3390/aerospace12100901 - 7 Oct 2025
Abstract
Accurate modeling of non-cooperative space targets remains a significant challenge, particularly under complex illumination conditions. A hybrid virtual–real framework is proposed that integrates photometric compensation, 3D reconstruction, and visibility determination to enhance the robustness and accuracy of monocular-based modeling systems. To overcome the breakdown of the classical photometric constancy assumption under varying illumination, a compensation-based photometric model is formulated and implemented. A point cloud–driven virtual space is constructed and refined through Poisson surface reconstruction, enabling per-pixel depth, normal, and visibility information to be efficiently extracted via GPU-accelerated rendering. An illumination-aware visibility model further distinguishes self-occluded and shadowed regions, allowing for selective pixel usage during photometric optimization, while motion parameter estimation is stabilized by analyzing angular velocity precession. Experiments conducted on both Unity3D-based simulations and a semi-physical platform with robotic hardware and a sunlight simulator demonstrate that the proposed method consistently outperforms conventional feature-based and direct SLAM approaches in trajectory accuracy and 3D reconstruction quality. These results highlight the effectiveness and practical significance of incorporating virtual space feedback for non-cooperative space target modeling. Full article
(This article belongs to the Section Astronautics & Space Science)
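The "breakdown of the classical photometric constancy assumption" that this abstract addresses is commonly handled with an affine gain-and-bias compensation between frames. The sketch below illustrates that baseline idea only; it is a minimal stand-in under assumed affine lighting, not the authors' compensation model:

```python
import numpy as np

def affine_photometric_compensation(ref, cur):
    """Estimate gain a and bias b so that cur ≈ a * ref + b (least squares),
    then return the compensated photometric residual.  A simple stand-in for
    a compensation-based photometric model under varying illumination."""
    A = np.stack([ref.ravel(), np.ones(ref.size)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, cur.ravel(), rcond=None)
    residual = cur - (a * ref + b)   # ≈ 0 where only lighting changed
    return a, b, residual
```

With a pure gain/bias lighting change the compensated residual vanishes, whereas the raw difference `cur - ref` does not, which is why direct methods add such terms before pose optimization.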
28 pages, 3840 KB  
Article
Adaptive Lag Binning and Physics-Weighted Variograms: A LOOCV-Optimised Universal Kriging Framework with Trend Decomposition for High-Fidelity 3D Cryogenic Temperature Field Reconstruction
by Jiecheng Tang, Yisha Chen, Baolin Liu, Jie Cao and Jianxin Wang
Processes 2025, 13(10), 3160; https://doi.org/10.3390/pr13103160 - 3 Oct 2025
Abstract
Biobanks rely on ultra-low-temperature (ULT) storage for irreplaceable specimens, where precise 3D temperature field reconstruction is critical to preserve integrity. This is the first study to apply geostatistical methods to ULT field reconstruction in cryogenic biobanking systems. We address critical gaps in sparse-sensor environments where conventional interpolation fails due to vertical thermal stratification and non-stationary trends. Our physics-informed universal kriging framework introduces (1) the first domain-specific adaptation of universal kriging for 3D cryogenic temperature field reconstruction; (2) eight novel lag-binning methods explicitly designed for sparse, anisotropic sensor networks; and (3) a leave-one-out cross-validation-driven framework that automatically selects the optimal combination of trend model, binning strategy, logistic weighting, and variogram model fitting. Validated on real data collected from a 3000 L operating cryogenic chest freezer, the method achieves sub-degree accuracy by isolating physics-guided vertical trends (quadratic detrending dominant) and stabilising variogram estimation under sparsity. Unlike static approaches, our framework dynamically adapts to thermal regimes without manual tuning, enabling centimetre-scale virtual sensing. This work establishes geostatistics as a foundational tool for cryogenic thermal monitoring, with direct engineering applications in biobank quality control and predictive analytics. Full article
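The LOOCV-driven selection idea can be illustrated with a toy model search: fit a vertical polynomial trend (order 0, 1, or 2 in height z), interpolate the residuals, and keep the candidate with the lowest leave-one-out error. For brevity this sketch swaps the paper's universal kriging for plain inverse-distance weighting of residuals; it shows only the selection mechanism, not the framework itself:

```python
import numpy as np

def loocv_rmse(coords, temps, trend_order):
    """LOOCV error of: polynomial trend in height z (order 0/1/2) plus
    inverse-distance interpolation of the residuals at the held-out sensor."""
    z = coords[:, 2]
    errs = []
    for i in range(len(temps)):
        mask = np.arange(len(temps)) != i
        # fit the vertical trend on the held-in sensors only
        coef = np.polyfit(z[mask], temps[mask], trend_order)
        resid = temps[mask] - np.polyval(coef, z[mask])
        # inverse-distance weighting of residuals at the held-out location
        d = np.linalg.norm(coords[mask] - coords[i], axis=1)
        w = 1.0 / np.maximum(d, 1e-9) ** 2
        pred = np.polyval(coef, z[i]) + np.sum(w * resid) / np.sum(w)
        errs.append(temps[i] - pred)
    return float(np.sqrt(np.mean(np.square(errs))))

def select_trend(coords, temps, orders=(0, 1, 2)):
    """Pick the trend order with the lowest LOOCV RMSE."""
    return min(orders, key=lambda k: loocv_rmse(coords, temps, k))
```

On a field with a genuinely quadratic vertical stratification, the selector recovers order 2, mirroring the "quadratic detrending dominant" finding reported in the abstract.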

36 pages, 462 KB  
Article
No Reproducibility, No Progress: Rethinking CT Benchmarking
by Dmitry Polevoy, Danil Kazimirov, Marat Gilmanov and Dmitry Nikolaev
J. Imaging 2025, 11(10), 344; https://doi.org/10.3390/jimaging11100344 - 2 Oct 2025
Abstract
Reproducibility is a cornerstone of scientific progress, yet in X-ray computed tomography (CT) reconstruction, it remains a critical and unresolved challenge. Current benchmarking practices in CT are hampered by the scarcity of openly available datasets, the incomplete or task-specific nature of existing resources, and the lack of transparent implementations of widely used methods and evaluation metrics. As a result, even the fundamental property of reproducibility is frequently violated, undermining objective comparison and slowing methodological progress. In this work, we analyze the systemic limitations of current CT benchmarking, drawing parallels with broader reproducibility issues across scientific domains. We propose an extended data model and formalized schemes for data preparation and quality assessment, designed to improve reproducibility and broaden the applicability of CT datasets across multiple tasks. Building on these schemes, we introduce checklists for dataset construction and quality assessment, offering a foundation for reliable and reproducible benchmarking pipelines. A key aspect of our recommendations is the integration of virtual CT (vCT), which provides highly realistic data and analytically computable phantoms, yet remains underutilized despite its potential to overcome many current barriers. Our work represents a first step toward a methodological framework for reproducible benchmarking in CT. This framework aims to enable transparent, rigorous, and comparable evaluation of reconstruction methods, ultimately supporting their reliable adoption in clinical and industrial applications. Full article
(This article belongs to the Special Issue Tools and Techniques for Improving Radiological Imaging Applications)

15 pages, 25292 KB  
Article
Reconstructing Ancient Iron-Smelting Furnaces of Guéra (Chad) Through 3D Modeling and AI-Assisted Video Generation
by Jean-Baptiste Barreau, Djimet Guemona and Caroline Robion-Brunner
Electronics 2025, 14(19), 3923; https://doi.org/10.3390/electronics14193923 - 1 Oct 2025
Abstract
This article presents an innovative methodological approach for the documentation and enhancement of ancient ironworking heritage in the Guéra region of Chad. By combining ethno-historical and archaeological surveys, 3D modeling with Blender, and the generation of images and video sequences through artificial intelligence (AI), we propose an integrated production pipeline enabling the faithful reconstruction of three types of metallurgical furnaces. Our method relies on rigorously collected field data to generate multiple and plausible representations from fragmentary information. A standardized evaluation grid makes it possible to assess the archaeological fidelity, cultural authenticity, and visual quality of the reconstructions, thereby limiting biases inherent to generative models. The results offer strong potential for integration into immersive environments, opening up perspectives in education, digital museology, and the virtual preservation of traditional ironworking knowledge. This work demonstrates the relevance of multimodal approaches in reconciling scientific rigor with engaging visual storytelling. Full article
(This article belongs to the Special Issue Augmented Reality, Virtual Reality, and 3D Reconstruction)
11 pages, 1288 KB  
Article
Intensity-Modulated Interventional Radiotherapy (Modern Brachytherapy) Using 3D-Printed Applicators with Multilayer Geometry and High-Density Shielding Materials for the NMSC Treatment
by Enrico Rosa, Sofia Raponi, Bruno Fionda, Maria Vaccaro, Antonio Napolitano, Valentina Lancellotta, Francesco Pastore, Gabriele Ciasca, Frank-André Siebert, Luca Tagliaferri, Marco De Spirito and Elisa Placidi
J. Pers. Med. 2025, 15(10), 460; https://doi.org/10.3390/jpm15100460 - 30 Sep 2025
Abstract
Background/Objectives: This study investigates the dosimetric impact of a 3D-printed applicator integrating multilayer catheter geometry and high-density shielding, designed for contact interventional radiotherapy (IRT) in non-melanoma skin cancer (NMSC) treatment. The aim is to assess its potential to enhance target coverage and reduce doses in organs at risk (OARs). Methods: A virtual prototype of a multilayer applicator was designed using 3D modeling software and realized through fused deposition modeling. Dosimetric simulations were performed using both TG-43 and TG-186 formalisms on CT scans of a water-equivalent phantom. A five-catheter array was reconstructed, and lead-cadmium-based alloy shielding of varying thicknesses (3–15 mm) was contoured. CTVs of 5 mm and 8 mm thickness were analyzed along with a neighboring OAR. Dosimetric endpoints included V95%, V100%, V150% (CTV), D2cc (OAR), and therapeutic window (TW). Results: Compared to TG-43, the TG-186 algorithm yielded lower OAR doses while maintaining comparable CTV coverage. Progressive increase in shielding thickness led to improved V95% and V100% values and a notable reduction in OAR dose, with an optimal trade-off observed between 6 and 9 mm of shielding. The TW remained above 7 mm across all configurations, supporting its use in lesions thicker than conventional guidelines recommend. Conclusions: The integration of multilayer catheter geometry with high-density shielding in a customizable 3D-printed applicator enables enhanced dose modulation and OAR sparing in superficial IRT. This approach represents a step toward personalized brachytherapy, aligning with the broader movement in radiation oncology toward patient-specific solutions, adaptive planning, and precision medicine. 
Future directions should include prototyping and mechanical testing of the applicator, experimental dosimetric validation in phantoms, and pilot clinical feasibility studies to translate these promising in silico results into clinical practice. Full article
(This article belongs to the Section Personalized Therapy in Clinical Medicine)

21 pages, 4397 KB  
Article
Splatting the Cat: Efficient Free-Viewpoint 3D Virtual Try-On via View-Decomposed LoRA and Gaussian Splatting
by Chong-Wei Wang, Hung-Kai Huang, Tzu-Yang Lin, Hsiao-Wei Hu and Chi-Hung Chuang
Electronics 2025, 14(19), 3884; https://doi.org/10.3390/electronics14193884 - 30 Sep 2025
Abstract
As Virtual Try-On (VTON) technology matures, 2D VTON methods based on diffusion models can now rapidly generate diverse and high-quality try-on results. However, with rising user demands for realism and immersion, many applications are shifting towards 3D VTON, which offers superior geometric and spatial consistency. Existing 3D VTON approaches commonly face challenges such as barriers to practical deployment, substantial memory requirements, and cross-view inconsistencies. To address these issues, we propose an efficient 3D VTON framework with robust multi-view consistency, whose core design is to decouple the monolithic 3D editing task into a four-stage cascade as follows: (1) We first reconstruct an initial 3D scene using 3D Gaussian Splatting, integrating the SMPL-X model at this stage as a strong geometric prior. By computing a normal-map loss and a geometric consistency loss, we ensure the structural stability of the initial human model across different views. (2) We employ the lightweight CatVTON to generate 2D try-on images, that provide visual guidance for the subsequent personalized fine-tuning tasks. (3) To accurately represent garment details from all angles, we partition the 2D dataset into three subsets—front, side, and back—and train a dedicated LoRA module for each subset on a pre-trained diffusion model. This strategy effectively mitigates the issue of blurred details that can occur when a single model attempts to learn global features. (4) An iterative optimization process then uses the generated 2D VTON images and specialized LoRA modules to edit the 3DGS scene, achieving 360-degree free-viewpoint VTON results. All our experiments were conducted on a single consumer-grade GPU with 24 GB of memory, a significant reduction from the 32 GB or more typically required by previous studies under similar data and parameter settings. Our method balances quality and memory requirement, significantly lowering the adoption barrier for 3D VTON technology. 
Full article
(This article belongs to the Special Issue 2D/3D Industrial Visual Inspection and Intelligent Image Processing)

17 pages, 4081 KB  
Article
Neural Network-Based Atlas Enhancement in MPEG Immersive Video
by Taesik Lee, Kugjin Yun, Won-Sik Cheong and Dongsan Jun
Mathematics 2025, 13(19), 3110; https://doi.org/10.3390/math13193110 - 29 Sep 2025
Abstract
Recently, the demand for immersive videos has surged with the expansion of virtual reality, augmented reality, and metaverse technologies. As an international standard, moving picture experts group (MPEG) has developed MPEG immersive video (MIV) to efficiently transmit large-volume immersive videos. The MIV encoder generates atlas videos to convert extensive multi-view videos into low-bitrate formats. When these atlas videos are compressed using conventional video codecs, compression artifacts often appear in the reconstructed atlas videos. To address this issue, this study proposes a feature-extraction-based convolutional neural network (FECNN) to reduce the compression artifacts during MIV atlas video transmission. The proposed FECNN uses quantization parameter (QP) maps and depth information as inputs and consists of shallow feature extraction (SFE) blocks and deep feature extraction (DFE) blocks to utilize layered feature characteristics. Compared to the existing MIV, the proposed method improves the Bjontegaard delta bit-rate (BDBR) by −4.12% and −6.96% in the basic and additional views, respectively. Full article
(This article belongs to the Special Issue Coding Theory and the Impact of AI)
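The BDBR figures quoted in this abstract come from the standard Bjontegaard metric: fit log-rate as a cubic polynomial of PSNR for both codecs and average the difference over the overlapping quality range. A minimal implementation of that standard procedure:

```python
import numpy as np

def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    """Bjontegaard delta bit-rate in percent.  Negative means the test codec
    needs fewer bits than the anchor at equal quality."""
    lr_a, lr_t = np.log(rate_anchor), np.log(rate_test)
    # cubic fit of log-rate as a function of PSNR, per curve
    p_a = np.polyfit(psnr_anchor, lr_a, 3)
    p_t = np.polyfit(psnr_test, lr_t, 3)
    lo = max(min(psnr_anchor), min(psnr_test))
    hi = min(max(psnr_anchor), max(psnr_test))
    # average curve difference via the antiderivatives of the fits
    int_a, int_t = np.polyint(p_a), np.polyint(p_t)
    avg_diff = (np.polyval(int_t, hi) - np.polyval(int_t, lo)
                - np.polyval(int_a, hi) + np.polyval(int_a, lo)) / (hi - lo)
    return float((np.exp(avg_diff) - 1.0) * 100.0)
```

A test curve that spends 10% fewer bits at every PSNR point yields a BD-rate of −10%, the same sign convention as the −4.12% / −6.96% gains reported above.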

14 pages, 2921 KB  
Article
Design and Validation of an Augmented Reality Training Platform for Patient Setup in Radiation Therapy Using Multimodal 3D Modeling
by Jinyue Wu, Donghee Han and Toshioh Fujibuchi
Appl. Sci. 2025, 15(19), 10488; https://doi.org/10.3390/app151910488 - 28 Sep 2025
Abstract
This study presents the development and evaluation of an Augmented Reality (AR)-based training system aimed at improving patient setup accuracy in radiation therapy. Leveraging Microsoft HoloLens 2, the system provides an immersive environment for medical staff to enhance their understanding of patient setup procedures. High-resolution 3D anatomical models were reconstructed from CT scans using 3D Slicer, while Luma AI was employed to rapidly capture complete body surface models. Due to limitations in each method—such as missing extremities or back surfaces—Blender was used to merge the models, improving completeness and anatomical fidelity. The AR application was developed in Unity, employing spatial anchors and 125 × 125 mm2 QR code markers to stabilize and align virtual models in real space. System accuracy testing demonstrated that QR code tracking achieved millimeter-level variation, with an expanded uncertainty of ±2.74 mm. Training trials for setup showed larger deviations in the X (left–right), Y (up-down), and Z (front-back) axes at the centimeter scale. This meant that we were able to quantify the user’s patient setup skills. While QR code positioning was relatively stable, manual placement of markers and the absence of real-time verification contributed to these errors. The system offers a radiation-free and interactive platform for training, enhancing spatial awareness and procedural skills. Future work will focus on improving tracking stability, optimizing the workflow, and integrating real-time feedback to move toward clinical applicability. Full article
(This article belongs to the Special Issue Novel Technologies in Radiology: Diagnosis, Prediction and Treatment)

21 pages, 26320 KB  
Article
Agent-Based Models of Sexual Selection in Bird Vocalizations Using Generative Approaches
by Hao Zhao, Takaya Arita and Reiji Suzuki
Appl. Sci. 2025, 15(19), 10481; https://doi.org/10.3390/app151910481 - 27 Sep 2025
Abstract
The current agent-based evolutionary models for animal communication rely on simplified signal representations that differ significantly from natural vocalizations. We propose a novel agent-based evolutionary model based on text-to-audio (TTA) models to generate realistic animal vocalizations, advancing from VAE-based real-valued genotypes to TTA-based textual genotypes that generate bird songs using a fine-tuned Stable Audio Open 1.0 model. In our sexual selection framework, males vocalize songs encoded by their genotypes while females probabilistically select mates based on the similarity between males’ songs and their preference patterns, with mutations and crossovers applied to textual genotypes using a large language model (Gemma-3). As a proof of concept, we compared TTA-based and VAE-based sexual selection models for the Blue-and-white Flycatcher (Cyanoptila cyanomelana)’s songs and preferences. While the VAE-based model produces population clustering but constrains the evolution to a narrow region near the latent space’s origin where reconstructed songs remain clear, the TTA-based model enhances the genotypic and phenotypic diversity, drives song diversification, and fosters the creation of novel bird songs. Generated songs were validated by a virtual expert using the BirdNET classifier, confirming their acoustic realism through classification into related taxa. These findings highlight the potential of combining large language models and TTA models in agent-based evolutionary models for animal communication. Full article
(This article belongs to the Special Issue Evolutionary Algorithms and Their Real-World Applications)

31 pages, 18458 KB  
Article
Leveraging NeRF for Cultural Heritage Preservation: A Case Study of the Katolička Porta in Novi Sad
by Ivana Vasiljević, Nenad Kuzmanović, Anica Draganić, Maria Silađi, Miloš Obradović and Ratko Obradović
Electronics 2025, 14(19), 3785; https://doi.org/10.3390/electronics14193785 - 24 Sep 2025
Abstract
In recent years, digital technologies have become indispensable tools for the preservation and documentation of architectural and cultural heritage. Traditional 3D modeling methods, such as photogrammetry and laser scanning, require specialized equipment and extensive manual processing. Neural Radiance Field, an AI-based technique, enables photorealistic 3D reconstructions from a limited set of 2D images. NeRF excels in cultural heritage documentation by effectively rendering reflective and translucent surfaces, which often pose challenges to conventional methods. These approaches significantly accelerate workflows, reduce costs, and minimize manual intervention, making them ideal for inaccessible or fragile sites. The application of NeRF combined with drone-acquired high-resolution images, as demonstrated in the Katolička Porta project in Novi Sad, produces highly detailed and accurate digital replicas. This integration also supports virtual restoration and texture enhancement, enabling non-invasive exploration of conservation scenarios. Katolička Porta, a historically significant site that has evolved over centuries, benefits from these advanced digital preservation techniques, which help maintain its unique architectural and cultural identity. This integration of technologies represents the future of cultural heritage conservation, offering innovative possibilities for visualization, research, and protection. Full article

22 pages, 8860 KB  
Article
Generating Multi-View Action Data from a Monocular Camera Video by Fusing Human Mesh Recovery and 3D Scene Reconstruction
by Hyunsu Kim and Yunsik Son
Appl. Sci. 2025, 15(19), 10372; https://doi.org/10.3390/app151910372 - 24 Sep 2025
Abstract
Multi-view data, captured from various perspectives, is crucial for training view-invariant human action recognition models, yet its acquisition is hindered by spatio-temporal constraints and high costs. This study aims to develop the Pose Scene EveryWhere (PSEW) framework, which automatically generates temporally consistent, multi-view 3D human action data from a single monocular video. The proposed framework first predicts 3D human parameters from each video frame using a deep learning-based Human Mesh Recovery (HMR) model. Subsequently, it applies tracking, linear interpolation, and Kalman filtering to refine temporal consistency and produce naturalistic motion. The refined human meshes are then reconstructed into a virtual 3D scene by estimating a stable floor plane for alignment, and finally, novel-view videos are rendered using user-defined virtual cameras. As a result, the framework successfully generated multi-view data with realistic, jitter-free motion from a single video input. To assess fidelity to the original motion, we used Root Mean Square Error (RMSE) and Mean Per Joint Position Error (MPJPE) as metrics, achieving low average errors in both 2D (RMSE: 0.172; MPJPE: 0.202) and 3D (RMSE: 0.145; MPJPE: 0.206) space. PSEW provides an efficient, scalable, and low-cost solution that overcomes the limitations of traditional data collection methods, offering a remedy for the scarcity of training data for action recognition models. Full article
(This article belongs to the Special Issue Advanced Technologies Applied for Object Detection and Tracking)
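The Kalman-filtering refinement step mentioned in this abstract can be sketched as a per-coordinate constant-velocity filter over a joint trajectory. The noise settings below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def kalman_smooth_1d(z, dt=1.0 / 30.0, q=1e-3, r=1e-2):
    """Constant-velocity Kalman filter over one joint coordinate: the kind of
    temporal refinement applied to jittery per-frame HMR predictions."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (pos, vel)
    H = np.array([[1.0, 0.0]])              # only position is observed
    Q = q * np.eye(2)                       # process noise (assumed)
    R = np.array([[r]])                     # measurement noise (assumed)
    x = np.array([z[0], 0.0])
    P = np.eye(2)
    out = []
    for zk in z:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the new measurement
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + (K @ (np.array([zk]) - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)
```

Run independently over each joint's x, y, z series, this suppresses frame-to-frame jitter while tracking the underlying motion; the paper combines it with tracking and linear interpolation.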

12 pages, 2022 KB  
Case Report
Implementation of Medicalholodeck® for Augmented Reality Surgical Navigation in Microsurgical Mandibular Reconstruction: Enhanced Vessel Identification
by Norman Alejandro Rendón Mejía, Hansel Gómez Arámbula, José Humberto Baeza Ramos, Yidam Villa Martínez, Francisco Hernández Ávila, Mónica Quiñonez Pérez, Carolina Caraveo Aguilar, Rogelio Mariñelarena Hernández, Claudio Reyes Montero, Claudio Ramírez Espinoza and Armando Isaac Reyes Carrillo
Healthcare 2025, 13(19), 2406; https://doi.org/10.3390/healthcare13192406 - 24 Sep 2025
Abstract
Mandibular reconstruction with the fibula free flap is the gold standard for large defects, with virtual surgical planning becoming integral to the process. The localization and dissection of critical vessels, such as the recipient vessels in the neck and the perforating vessels of the fibula flap, are demanding steps that directly impact surgical success. Augmented reality (AR) offers a solution by overlaying three-dimensional virtual models directly onto the surgeon’s view of the operative field. We report the first case in Latin America utilizing a low-cost, commercially available holographic navigation system for complex microsurgical mandibular reconstruction. A 26-year-old female presented with a large, destructive osteoblastoma of the left mandible, requiring wide resection and reconstruction. Preoperative surgical planning was conducted using DICOM data from the patient’s CT scans to generate 3D holographic models with the Medicalholodeck® software. Intraoperatively, the primary surgeon used the AR system to superimpose the holographic models onto the patient. The system provided real-time, immersive guidance for identifying the facial artery, which was anatomically displaced by the tumor mass, as well as for localizing the peroneal artery perforators for donor flap harvest. A free fibula flap was harvested and transferred. During the early postoperative course and after 3-months of follow-up, the patient presented with an absence of any clinical complications. This case demonstrates the successful application and feasibility of using a low-cost, consumer-grade holographic navigation system. Full article
(This article belongs to the Special Issue Virtual Reality Technologies in Health Care)

13 pages, 8429 KB  
Article
Advances in the Treatment of Midface Fractures: Innovative CAD/CAM Drill Guides and Implants for the Simultaneous Primary Treatment of Zygomatic-Maxillary-Orbital-Complex Fractures
by Marcel Ebeling, Sebastian Pietzka, Andreas Sakkas, Stefan Kist, Mario Scheurer, Alexander Schramm and Frank Wilde
Appl. Sci. 2025, 15(18), 10194; https://doi.org/10.3390/app151810194 - 18 Sep 2025
Abstract
Background: Midfacial trauma involving the zygomatic-maxillary-orbital (ZMO) complex poses significant reconstructive challenges due to anatomical complexity and the necessity for high-precision alignment. Traditional manual reduction techniques often result in inconsistent outcomes, necessitating revisions. Methods: This feasibility study presents two clinical cases treated using a novel, fully digital workflow incorporating computer-aided design and manufacturing (CAD/CAM) of patient-specific osteosynthesis plates and surgical drill guides. Following virtual fracture reduction and implant design, drill guides and implants were fabricated using selective laser melting. Surgical procedures included intraoral and transconjunctival approaches with intraoperative 3D imaging (mobile C-arm CT) to verify implant positioning. Postoperative results were compared to the virtual plan through image fusion. Results: Both cases demonstrated precise fit and anatomical restoration. The “one-position-fits-only” orbital implant design enabled highly accurate orbital wall reconstruction. Key procedural refinements between cases included enhanced interdisciplinary collaboration and improved guide designs, resulting in decreased planning-to-surgery intervals (<7 days) and seamless intraoperative application. Image fusion confirmed near-identical congruence between planned and achieved outcomes. Conclusions: The presented method demonstrates that fully digital, CAD/CAM-based midface reconstruction is feasible in the primary trauma setting. The technique offers reproducible precision, reduced intraoperative time, and improved functional and aesthetic outcomes. It may represent a paradigm shift in trauma care, particularly for complex ZMO fractures. Broader clinical adoption appears viable as production speed and workflow integration continue to improve. Full article
(This article belongs to the Special Issue Advances in Orthodontics and Dentofacial Orthopedics)

25 pages, 27717 KB  
Article
MCS-Sim: A Photo-Realistic Simulator for Multi-Camera UAV Visual Perception Research
by Qiming Qi, Guoyan Wang, Yonglei Pan, Hongqi Fan and Biao Li
Drones 2025, 9(9), 656; https://doi.org/10.3390/drones9090656 - 18 Sep 2025
Abstract
Multi-camera systems (MCSs) are pivotal in aviation surveillance and autonomous navigation due to their wide coverage and high-resolution sensing. However, challenges such as complex setup, time-consuming data acquisition, and costly testing hinder research progress. To address these, we introduce MCS-Sim, a photo-realistic MCS simulator for UAV visual perception research. MCS-Sim integrates vision sensor configurations, vehicle dynamics, and dynamic scenes, enabling rapid virtual prototyping and multi-task dataset generation. It supports dense flow estimation, 3D reconstruction, visual simultaneous localization and mapping, object detection, and tracking. With a hardware-in-the-loop interface, MCS-Sim facilitates closed-loop simulation for system validation. Experiments demonstrate its effectiveness in synthetic dataset generation, visual perception algorithm testing, and closed-loop simulation. Here we show that MCS-Sim significantly advances multi-camera UAV visual perception research, offering a versatile platform for future innovations. Full article

22 pages, 9837 KB  
Article
SSR-HMR: Skeleton-Aware Sparse Node-Based Real-Time Human Motion Reconstruction
by Linhai Li, Jiayi Lin and Wenhui Zhang
Electronics 2025, 14(18), 3664; https://doi.org/10.3390/electronics14183664 - 16 Sep 2025
Abstract
The growing demand for real-time human motion reconstruction in Virtual Reality (VR), Augmented Reality (AR), and the Metaverse requires high accuracy with minimal hardware. This paper presents SSR-HMR, a skeleton-aware, sparse node-based method for full-body motion reconstruction from limited inputs. The approach incorporates a lightweight spatiotemporal graph convolutional module, a torso pose refinement design to mitigate orientation drift, and kinematic tree-based optimization to enhance end-effector positioning accuracy. Smooth motion transitions are achieved via a multi-scale velocity loss. Experiments demonstrate that SSR-HMR achieves high-accuracy reconstruction, with mean joint and end-effector position errors of 1.06 cm and 0.52 cm, respectively, while operating at 267 FPS on a CPU. Full article
(This article belongs to the Special Issue AI Models for Human-Centered Computer Vision and Signal Analysis)
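The multi-scale velocity loss described in this abstract can be read as comparing frame-difference velocities at several temporal strides. This sketch is one plausible formulation under that reading, not the paper's exact loss:

```python
import numpy as np

def multiscale_velocity_loss(pred, target, scales=(1, 2, 4)):
    """Mean L2 difference of frame-to-frame joint velocities at several
    temporal strides.  pred, target: (T, J, 3) joint trajectories."""
    loss = 0.0
    for s in scales:
        v_pred = pred[s:] - pred[:-s]      # stride-s velocity
        v_tgt = target[s:] - target[:-s]
        loss += np.mean(np.linalg.norm(v_pred - v_tgt, axis=-1))
    return loss / len(scales)
```

Because only differences enter, the term is invariant to a constant positional offset and penalizes mismatched motion at both fine (stride 1) and coarse (stride 4) time scales, which is what makes transitions smooth.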
