Search Results (75)

Search Parameters:
Keywords = virtual landmarks

13 pages, 5974 KiB  
Article
Proof of Concept and Validation of Single-Camera AI-Assisted Live Thumb Motion Capture
by Huy G. Dinh, Joanne Y. Zhou, Adam Benmira, Deborah E. Kenney and Amy L. Ladd
Sensors 2025, 25(15), 4633; https://doi.org/10.3390/s25154633 - 26 Jul 2025
Viewed by 216
Abstract
Motion analysis can be useful for multiplanar assessment of hand kinematics. The carpometacarpal (CMC) joint has been traditionally difficult to capture with surface-based motion analysis but is the most commonly arthritic joint of the hand and is of particular clinical interest. Traditional 3D motion capture of the CMC joint, using multiple cameras and reflective markers together with manual goniometer measurement, has been challenging to integrate into clinical workflow. We therefore propose a markerless single-camera artificial intelligence (AI)-assisted motion capture method to provide real-time estimation of clinically relevant parameters. Our study enrolled five healthy subjects, two male and three female. Fourteen clinical parameters were extracted from thumb interphalangeal (IP), metacarpophalangeal (MP), and CMC joint motions using manual goniometry and live motion capture with the Google AI MediaPipe Hands landmarker model. Motion capture measurements were assessed for accuracy, precision, and correlation with manual goniometry. Motion capture demonstrated sufficient accuracy in 11 and precision in all 14 parameters, with a mean error of −2.13 ± 2.81° (95% confidence interval [CI]: −5.31, 1.05). Strong agreement was observed between the two modalities across all subjects, with a combined Pearson correlation coefficient of 0.97 (p < 0.001) and an intraclass correlation coefficient of 0.97 (p < 0.001). The results suggest AI-assisted live motion capture can be an accurate and practical thumb assessment tool, particularly in virtual patient encounters, for enhanced range of motion (ROM) analysis. Full article
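The joint angles this study extracts rest on one geometric step: the flexion angle at a joint defined by three landmark coordinates (e.g., the two segments meeting at the MP joint). A minimal sketch in plain Python — the coordinates below are illustrative, not values from the study:

```python
import math

def joint_angle(a, b, c):
    """Angle in degrees at landmark b, formed by segments b->a and b->c."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    return math.degrees(math.acos(dot / (n1 * n2)))

# A right angle at the middle landmark:
print(joint_angle((1, 0, 0), (0, 0, 0), (0, 1, 0)))  # 90.0
```

In practice the three points would come from a hand-landmark model's per-frame output; the flexion angle is then 180° minus this value when the "straight finger" convention is used.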

15 pages, 1800 KiB  
Article
Digital Orthodontic Setups in Orthognathic Surgery: Evaluating Predictability and Precision of the Workflow in Surgical Planning
by Olivier de Waard, Frank Baan, Robin Bruggink, Ewald M. Bronkhorst, Anne Marie Kuijpers-Jagtman and Edwin M. Ongkosuwito
J. Clin. Med. 2025, 14(15), 5270; https://doi.org/10.3390/jcm14155270 - 25 Jul 2025
Viewed by 288
Abstract
Background: Inadequate presurgical planning is a key contributor to suboptimal outcomes in orthognathic surgery. This study aims to assess the accuracy of a digital surgical planning workflow conducted prior to any orthodontic intervention. Methods: Digital planning was performed for 26 patients before orthodontic treatment (T0) and compared to the actual preoperative planning (T1). Digitized plaster casts were merged with CBCT data and converted to orthodontic setups to create a 3D virtual head model. After voxel-based registration of T0 and T1, dental arches were virtually osteotomized and repositioned according to planned outcomes. These T0 segments were then aligned with T1 planning using bony landmarks of the maxilla. Anatomical landmarks were used to construct virtual triangles on maxillary and mandibular segments, enabling assessment of positional and orientational differences. Transformations between T0 and T1 were translated into clinically meaningful metrics. Results: Significant differences were found between T0 and T1 at the dental level. T1 exhibited a greater clockwise rotation of the dental maxilla (mean: 2.85°) and a leftward translation of the mandibular dental arch (mean: 1.19 mm). In SARME cases, the bony mandible showed larger anti-clockwise roll differences. Pitch variations were also more pronounced in maxillary extraction cases, with both the dental maxilla and bony mandible demonstrating increased clockwise rotations. Conclusions: The proposed orthognathic surgical planning workflow shows potential for simulating mandibular outcomes but lacks dental-level accuracy, especially in maxillary anterior torque. While mandibular bony outcome predictions align reasonably with pretreatment planning, notable discrepancies exceed clinically acceptable thresholds. Current accuracy limits routine use; further refinement and validation in larger, homogeneous patient groups are needed to enhance clinical reliability and applicability. Full article
(This article belongs to the Special Issue Orthodontics: Current Advances and Future Options)

12 pages, 1504 KiB  
Article
Precision of the Fully Digital 3D Treatment Plan in Orthognathic Surgery
by Paula Locmele, Oskars Radzins, Martins Lauskis, Girts Salms, Anda Slaidina and Andris Abeltins
J. Clin. Med. 2025, 14(14), 4916; https://doi.org/10.3390/jcm14144916 - 11 Jul 2025
Viewed by 229
Abstract
Background/Objectives: The aim of this study was to investigate the accuracy of implementing a virtual treatment plan in orthognathic surgery. Methods: The study included 30 patients (11 males and 19 females with a mean age of 23.7 years) with a digital surgical plan. All patients underwent bimaxillary orthognathic surgery: LeFort I osteotomy of the maxilla combined with bilateral split sagittal osteotomy (BSSO) of the mandible. Eleven landmarks on the pre-surgical (planned) model and the same landmarks on the post-surgical model were used for comparison and linear difference measurements between the real and predicted outcomes in all three planes—transversal, sagittal, and vertical. Results: All median values fell within the 2 mm range in the transversal plane, and the mean displacement was 0.57 mm. In the sagittal and vertical planes, the treatment outcome in the maxilla was more precise than in the mandible. The mean displacement in the sagittal plane was −0.88 mm and that in the vertical plane was 0.44 mm. All deviations were less than 2 mm. Conclusions: The data obtained in this study show that the digital surgical plan for orthognathic surgery is clinically reliable in all planes. Full article
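The planned-versus-actual comparison described above reduces to signed per-axis differences between paired landmarks, checked against the 2 mm acceptability threshold. A minimal sketch with made-up coordinates (in mm), not data from the study:

```python
def axis_displacements(planned, actual):
    """Signed per-axis differences (actual - planned) for paired 3D landmarks."""
    diffs = {"x": [], "y": [], "z": []}
    for (px, py, pz), (ax, ay, az) in zip(planned, actual):
        diffs["x"].append(ax - px)
        diffs["y"].append(ay - py)
        diffs["z"].append(az - pz)
    return diffs

def mean(vals):
    return sum(vals) / len(vals)

planned = [(0.0, 0.0, 0.0), (10.0, 5.0, 2.0)]
actual = [(0.5, -0.2, 0.1), (10.3, 4.6, 2.5)]
d = axis_displacements(planned, actual)
print({axis: round(mean(v), 2) for axis, v in d.items()})
# {'x': 0.4, 'y': -0.3, 'z': 0.3}
assert all(abs(v) < 2.0 for vs in d.values() for v in vs)  # 2 mm criterion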

16 pages, 3593 KiB  
Article
Preservation of Synagogues in Greece: Using Digital Tools to Represent Lost Heritage
by Elias Messinas
Heritage 2025, 8(6), 211; https://doi.org/10.3390/heritage8060211 - 5 Jun 2025
Viewed by 684
Abstract
In the wake of the Holocaust and the post-war reconstruction of Greece’s historic city centers, many Greek synagogues were demolished, abandoned, or appropriated, erasing centuries of Jewish architectural and communal presence. This study presents a thirty-year-long research and documentation initiative aimed at preserving, recovering, and eventually digitally reconstructing these “lost” synagogues, both as individual buildings and within their urban context. Drawing on architectural surveys, archival research, oral histories, and previously unpublished materials, including the recently rediscovered Shemtov Samuel archive, the project grew through the use of technology. Beginning with in situ surveys in the early 1990s, it evolved into full-scale digitally enhanced architectural drawings that formed the basis for further digital exploration, 3D models, and virtual reality outputs. With the addition of these new tools to existing documentation, the project can restore architectural detail and cultural context with a high degree of fidelity, even in cases where only fragmentary evidence survives. These digital reconstructions have informed physical restoration efforts as well as public exhibitions, heritage education, and urban memory initiatives across Greece. By reintroducing “invisible” Jewish landmarks into contemporary consciousness, the study addresses the broader implications of post-war urban homogenization, the marginalization of minority heritage, and the ethical dimensions of digital preservation. This interdisciplinary approach, which bridges architectural history, digital humanities, urban studies, and cultural heritage, demonstrates the value of digital tools in reconstructing “lost” pasts and highlights the potential for similar projects in other regions facing comparable erasures. Full article

17 pages, 1829 KiB  
Article
Research on Improved Occluded-Face Restoration Network
by Shangzhen Pang, Tzer Hwai Gilbert Thio, Fei Lu Siaw, Mingju Chen and Li Lin
Symmetry 2025, 17(6), 827; https://doi.org/10.3390/sym17060827 - 26 May 2025
Viewed by 351
Abstract
The natural features of the face exhibit significant symmetry. In practical applications, faces may be partially occluded due to factors like wearing masks or glasses, or the presence of other objects. Occluded-face restoration has broad application prospects in fields such as augmented reality, virtual reality, healthcare, security, etc. It is also of significant practical importance in enhancing public safety and providing efficient services. This research establishes an improved occluded-face restoration network based on facial feature points and Generative Adversarial Networks. A facial landmark prediction network is constructed based on an improved MobileNetV3-small network. On the foundation of U-Net, dilated convolutions and residual blocks are introduced to form an enhanced generator network. Additionally, an improved discriminator network is built based on Patch-GAN. Compared to the Contextual Attention network, under various occlusions, the improved face restoration network shows a maximum increase in the Peak Signal-to-Noise Ratio of 24.47%, and in the Structural Similarity Index of 24.39%, and a decrease in the Fréchet Inception Distance of 81.1%. Compared to the Edge Connect network, under various occlusions, the improved network shows a maximum increase in the Peak Signal-to-Noise Ratio of 7.89% and in the Structural Similarity Index of 10.34%, and a decrease in the Fréchet Inception Distance of 27.2%. Compared to the LaFIn network, under various occlusions, the improved network shows a maximum increase in the Peak Signal-to-Noise Ratio of 3.4% and in the Structural Similarity Index of 3.31%, and a decrease in the Fréchet Inception Distance of 9.19%. These experiments show that the improved face restoration network yields better restoration results. Full article
(This article belongs to the Section Physics)
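The Peak Signal-to-Noise Ratio used above to compare restoration networks is derived from the mean squared error between the original and restored images. A minimal sketch on flat pixel lists (the pixel values are illustrative):

```python
import math

def psnr(original, restored, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB between two equal-sized images."""
    mse = sum((o - r) ** 2 for o, r in zip(original, restored)) / len(original)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

orig = [100, 120, 130, 140]
rest = [101, 119, 131, 139]
print(round(psnr(orig, rest), 2))  # 48.13
```

Higher PSNR means a smaller pixel-wise deviation from the ground truth, which is why the percentage increases reported above indicate better restoration.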
Show Figures

Figure 1

13 pages, 1264 KiB  
Article
Equidistant Landmarks Fail to Produce the Blocking Effect in Spatial Learning Using a Virtual Water Maze Task with Healthy Adults: A Role for Cognitive Mapping?
by Róisín Deery and Seán Commins
Brain Sci. 2025, 15(4), 414; https://doi.org/10.3390/brainsci15040414 - 19 Apr 2025
Viewed by 434
Abstract
Background/Objectives: Cue competition is a feature of associative learning, whereby during learning, cues compete with each other, based on their relative salience, to influence subsequent performance. Blocking is a feature of cue competition whereby prior knowledge of a cue (X) interferes with subsequent learning about a second cue when the two are presented together (XY). When tested with the second cue (Y) alone, participants show an impairment in responding. While blocking has been observed across many domains, including spatial learning, previous research has raised questions regarding replication and the conditions necessary for it to occur. Furthermore, two prominent spatial learning theories predict contrary results for blocking. Associative learning accounts predict that the addition of a cue will lead to a blocking effect and impaired performance upon testing, whereas cognitive map theory suggests that the novel cue will be integrated into a map with no subsequent impairment in performance. Methods: Using a virtual water maze task, we investigated the blocking effect in human participants. Results: The cue learned in phase 1 of the experiment did not interfere with learning of a subsequent cue introduced in phase 2. Conclusions: This suggests that blocking did not occur and supports a cognitive mapping approach in human spatial learning. However, the location of the cues relative to the goal, and how this might determine the learning strategy used by participants, is discussed. Full article
(This article belongs to the Section Neuropsychology)

13 pages, 3561 KiB  
Article
Research on Lightweight Facial Landmark Prediction Network
by Shangzhen Pang, Tzer Hwai Gilbert Thio, Fei Lu Siaw, Mingju Chen and Li Lin
Electronics 2025, 14(6), 1211; https://doi.org/10.3390/electronics14061211 - 19 Mar 2025
Viewed by 506
Abstract
Facial landmarks, as direct and reliable biometric features, are widely utilized in various fields, including information security, public safety, virtual reality, and augmented reality. Facial landmarks, which are discrete key points on the face, preserve expression features and maintain the topological structure between facial organs. Fast and accurate facial landmark prediction is essential in solving computer vision problems involving facial analysis, particularly in occlusion scenarios. This research proposes a lightweight facial landmark prediction network for occluded faces using an improved depthwise separable convolutional neural network architecture. The model is trained using 30,000 images from the CelebA-HQ dataset. The model is then tested under different occlusion ratios, including 10–20%, 30–40%, 40–50%, and 50–60% random occlusion, as well as 25% center occlusion. Predicting 68 facial landmarks under occlusion, the proposed method consistently achieved significant improvements. Experimental results show that the proposed lightweight facial landmark prediction method is 1.97 times faster than FAN* and 1.67 times faster than ESR*, while still achieving better prediction results with lower NMSE values across all tested occlusion ratios for both frontal and profile faces. Full article
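The NMSE criterion used above normalizes the mean landmark prediction error by a reference distance on the face. A minimal sketch, assuming interocular distance as the normalizer (a common convention, assumed here rather than stated in the abstract); the coordinates are illustrative:

```python
import math

def nmse(pred, truth, norm_dist):
    """Mean landmark error normalized by a reference distance (e.g., interocular)."""
    errs = [math.dist(p, t) for p, t in zip(pred, truth)]
    return (sum(errs) / len(errs)) / norm_dist

pred = [(0.0, 0.0), (4.0, 3.0)]
truth = [(0.0, 0.0), (0.0, 0.0)]
print(nmse(pred, truth, norm_dist=50.0))  # 0.05
```

Normalizing makes the score comparable across faces of different sizes in the image, which matters when occlusion ratios vary between test sets.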

19 pages, 7816 KiB  
Article
4D+ City Sidewalk: Integrating Pedestrian View into Sidewalk Spaces to Support User-Centric Urban Spatial Perception
by Jinjing Zhao, Yunfan Chen, Yancheng Li, Haotian Xu, Jingjing Xu, Xuliang Li, Hong Zhang, Lei Jin and Shengyong Xu
Sensors 2025, 25(5), 1375; https://doi.org/10.3390/s25051375 - 24 Feb 2025
Viewed by 883
Abstract
As urban environments become increasingly interconnected, the demand for precise and efficient pedestrian solutions in digitalized smart cities has grown significantly. This study introduces a scalable spatial visualization system designed to enhance interactions between individuals and the street in outdoor sidewalk environments. The system operates in two main phases: the spatial prior phase and the target localization phase. In the spatial prior phase, the system captures the user’s perspective using first-person visual data and leverages landmark elements within the sidewalk environment to localize the user’s camera. In the target localization phase, the system detects surrounding objects, such as pedestrians or cyclists, using high-angle closed-circuit television (CCTV) cameras. The system was deployed in a real-world sidewalk environment at an intersection on a university campus. By combining user location data with CCTV observations, a 4D+ virtual monitoring system was developed to present a spatiotemporal visualization of the mobile participants within the user’s surrounding sidewalk space. Experimental results show that the landmark-based localization method achieves a planar positioning error of 0.468 m and a height error of 0.120 m on average. With the assistance of CCTV cameras, the localization of other targets maintains an overall error of 0.24 m. This system establishes the spatial relationship between pedestrians and the street by integrating detailed sidewalk views, with promising applications for pedestrian navigation and the potential to enhance pedestrian-friendly urban ecosystems. Full article
(This article belongs to the Section Remote Sensors)
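The error figures above split a 3D localization error into a planar (xy) component and a height (z) component. That split is a one-liner each; a sketch with hypothetical estimated and ground-truth positions in metres:

```python
import math

def planar_and_height_error(est, true):
    """Split a 3D localization error into planar (xy) and height (z) parts."""
    planar = math.hypot(est[0] - true[0], est[1] - true[1])
    height = abs(est[2] - true[2])
    return planar, height

p, h = planar_and_height_error((3.3, 4.4, 1.6), (3.0, 4.0, 1.5))
print(round(p, 2), round(h, 2))  # 0.5 0.1
```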
(This article belongs to the Section Remote Sensors)

16 pages, 3353 KiB  
Article
Development of a Method to Evaluate the Dynamic Fit of Face Masks
by Katarina E. Goodge, Drew E. Brown, Margaret Frey and Fatma Baytar
Textiles 2025, 5(1), 9; https://doi.org/10.3390/textiles5010009 - 24 Feb 2025
Viewed by 989
Abstract
Evaluating designed objects in real-world use cases enables usability optimization. For functional objects such as face masks, the mask must fit the user initially and continue to fit during movements such as talking. This paper describes methodology development for dynamic fit analysis of face masks using 3D head scans. Participants were scanned while wearing Basic, Cup, and Petal model masks before and after reading a passage aloud and completed surveys across eight fit dimensions. Face and mask measurements were virtually extracted from the head scans for quantitative fit analysis, and mask overlays were inspected for qualitative fit analysis. Four of eleven facial measurements changed significantly from closed to open-mouth posture while the nasal dorsum was identified as a stable landmark and served as a reference to define a mask shift metric. The mask shift was compared to the survey results for the model masks, with the Cup design fitting best and the Petal design rated as most comfortable. Poor fit modes identified from mask overlays were fabric buckling, compressed nose and ears, and gapping between the mask and facial features. This methodology can be implemented during the analysis stage of the iterative design process and complements static fit analyses. Full article

12 pages, 5726 KiB  
Article
Computer-Assisted Evaluation of Zygomatic Fracture Outcomes: Case Series and Proposal of a Reproducible Workflow
by Simone Benedetti, Andrea Frosolini, Flavia Cascino, Laura Viola Pignataro, Leonardo Franz, Gino Marioni, Guido Gabriele and Paolo Gennaro
Tomography 2025, 11(2), 19; https://doi.org/10.3390/tomography11020019 - 18 Feb 2025
Cited by 1 | Viewed by 1166
Abstract
Background: Zygomatico-maxillary complex (ZMC) fractures are prevalent facial injuries with significant functional and aesthetic implications. Computer-assisted surgery (CAS) offers precise surgical planning and outcome evaluation. The study aimed to evaluate the application of CAS in the analysis of ZMC fracture outcomes and to propose a reproducible workflow for surgical outcome assessment using cephalometric landmarks. Methods: A retrospective cohort study was conducted on 16 patients treated for unilateral ZMC fractures at the Maxillofacial Surgery Unit of Siena University Hospital (2017–2024). Inclusion criteria included ZMC fractures classified as Zingg B or C, treated via open reduction and internal fixation (ORIF). Pre- and post-operative CT scans were processed for two- and three-dimensional analyses. Discrepancies between CAS-optimized reduction and achieved surgical outcomes were quantified using cephalometric landmarks and volumetric assessments. Results: Out of the 16 patients (69% male, mean age 48.1 years), fractures were predominantly on the right side (81%). CAS comparison between the post-operative and the contralateral side revealed significant asymmetries along the X and Y axes, particularly in the fronto-zygomatic suture (FZS), zygo-maxillary point (MP), and zygo-temporal point (ZT). Computer-assisted comparison between the post-operative and the CAS-simulated reductions showed statistical differences along all three orthonormal axes, highlighting the challenges in achieving ideal symmetry despite advanced surgical techniques. CAS-optimized reductions demonstrated measurable improvements compared to traditional methods, underscoring their utility in outcome evaluation. Conclusions: CAS technology enhances the precision of ZMC fracture outcome evaluation, allowing for detailed comparison between surgical outcomes and virtual simulations. Its application underscores the potential for improved surgical planning and execution, especially in complex cases. Future studies should focus on expanding sample size, refining workflows, and integrating artificial intelligence to automate processes for broader clinical applicability. Full article

18 pages, 1223 KiB  
Article
GazeCapsNet: A Lightweight Gaze Estimation Framework
by Shakhnoza Muksimova, Yakhyokhuja Valikhujaev, Sabina Umirzakova, Jushkin Baltayev and Young Im Cho
Sensors 2025, 25(4), 1224; https://doi.org/10.3390/s25041224 - 17 Feb 2025
Cited by 1 | Viewed by 1548
Abstract
Gaze estimation is increasingly pivotal in applications spanning virtual reality, augmented reality, and driver monitoring systems, necessitating efficient yet accurate models for mobile deployment. Current methodologies often fall short, particularly in mobile settings, due to their extensive computational requirements or reliance on intricate pre-processing. Addressing these limitations, we present Mobile-GazeCapsNet, an innovative gaze estimation framework that harnesses the strengths of capsule networks and integrates them with lightweight architectures such as MobileNet v2, MobileOne, and ResNet-18. This framework not only eliminates the need for facial landmark detection but also significantly enhances real-time operability on mobile devices. Through the innovative use of Self-Attention Routing, GazeCapsNet dynamically allocates computational resources, thereby improving both accuracy and efficiency. Our results demonstrate that GazeCapsNet achieves competitive performance by optimizing capsule networks for gaze estimation through Self-Attention Routing (SAR), which replaces iterative routing with a lightweight attention-based mechanism, improving computational efficiency. Our results show that GazeCapsNet achieves state-of-the-art (SOTA) performance on several benchmark datasets, including ETH-XGaze and Gaze360, achieving a mean angular error (MAE) reduction of up to 15% compared to existing models. Furthermore, the model maintains a real-time processing capability of 20 milliseconds per frame while requiring only 11.7 million parameters, making it exceptionally suitable for real-time applications in resource-constrained environments. These findings not only underscore the efficacy and practicality of GazeCapsNet but also establish a new standard for mobile gaze estimation technologies. Full article
(This article belongs to the Section Sensor Networks)
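The mean angular error (MAE) reported above is the average angle between predicted and ground-truth 3D gaze direction vectors. The per-sample angle can be sketched as follows (the vectors are illustrative):

```python
import math

def angular_error(g1, g2):
    """Angle in degrees between two 3D gaze direction vectors."""
    dot = sum(a * b for a, b in zip(g1, g2))
    n1 = math.sqrt(sum(a * a for a in g1))
    n2 = math.sqrt(sum(b * b for b in g2))
    cos = max(-1.0, min(1.0, dot / (n1 * n2)))  # clamp against rounding drift
    return math.degrees(math.acos(cos))

print(round(angular_error((0, 0, 1), (0, 1, 1)), 1))  # 45.0
```

Averaging this quantity over a test set yields the MAE; the clamp matters because floating-point dot products of near-parallel unit vectors can exceed 1 slightly.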

15 pages, 4304 KiB  
Article
Face and Voice Recognition-Based Emotion Analysis System (EAS) to Minimize Heterogeneity in the Metaverse
by Surak Son and Yina Jeong
Appl. Sci. 2025, 15(2), 845; https://doi.org/10.3390/app15020845 - 16 Jan 2025
Viewed by 2462
Abstract
The metaverse, where users interact through avatars, is evolving to closely mirror the real world, requiring realistic object responses based on users’ emotions. While technologies like eye-tracking and hand-tracking transfer physical movements into virtual spaces, accurate emotion detection remains challenging. This study proposes the “Face and Voice Recognition-based Emotion Analysis System (EAS)” to bridge this gap, assessing emotions through both voice and facial expressions. EAS utilizes a microphone and camera to gauge emotional states, combining these inputs for a comprehensive analysis. It comprises three neural networks: the Facial Emotion Analysis Model (FEAM), which classifies emotions using facial landmarks; the Voice Sentiment Analysis Model (VSAM), which detects vocal emotions even in noisy environments using MCycleGAN; and the Metaverse Emotion Recognition Model (MERM), which integrates FEAM and VSAM outputs to infer overall emotional states. EAS’s three primary modules—Facial Emotion Recognition, Voice Emotion Recognition, and User Emotion Analysis—analyze facial features and vocal tones to detect emotions, providing a holistic emotional assessment for realistic interactions in the metaverse. The system’s performance is validated through dataset testing, and future directions are suggested based on simulation outcomes. Full article

13 pages, 2462 KiB  
Article
The Effectiveness and Safety of Tibial-Sided Osteotomy for Fibula Untethering in Lateral Close-Wedge High Tibial Osteotomy: A Novel Technique with Video Illustration
by Keun Young Choi, Man Soo Kim and Yong In
Medicina 2025, 61(1), 91; https://doi.org/10.3390/medicina61010091 - 8 Jan 2025
Viewed by 1123
Abstract
Background and Objectives: Despite its advantages, lateral close-wedge high tibial osteotomy (LCWHTO) requires proximal tibiofibular joint detachment (PTFJD) or fibular shaft osteotomy for gap closing. These fibula untethering procedures are technically demanding and not free from the risk of neurovascular injuries. Our novel fibula untethering technique, tibial-sided osteotomy (TSO) near the proximal tibiofibular joint (PTFJ), aims to reduce technical demands and the risk of injury to the peroneal nerve and popliteal neurovascular structures. The purposes of this study were to introduce the TSO technique and compare the complexity and safety of TSO with those of radiographic virtual PTFJD, which is defined based on radiographic landmarks representing the traditional PTFJD technique. Materials and Methods: Between March and December 2023, 13 patients who underwent LCWHTO with TSO for fibula untethering were enrolled. All patients underwent MRI preoperatively and CT scanning postoperatively. The location of the TSO site on the postoperative CT scans was matched to preoperative MRI to measure the shortest distance to the peroneal nerve and popliteal artery. These values were compared with estimates of the distance between the PTFJ and neurovascular structures in the radiographic virtual PTFJD group. The protective effect of the popliteus muscle was evaluated by extending the osteotomy direction toward the posterior compartment of the knee. Results: The TSO procedure was straightforward and reproducible without producing incomplete gap closure during LCWHTO. On axial images, the distances between the surgical plane and the peroneal nerve or popliteal artery were significantly longer in the TSO group than in the radiographic virtual PTFJD group (both p = 0.001). On coronal and axial MRI, the popliteus muscle covered the posterior osteotomy plane in all patients undergoing TSO but did not cover the PTFJD plane in the radiographic virtual PTFJD group. Conclusions: Our novel TSO technique for fibula untethering during LCWHTO is reproducible and reduces the risk of neurovascular injury by placing the separation site more medially than in the PTFJD procedure. Full article
(This article belongs to the Special Issue Cutting-Edge Concepts in Knee Surgery)

11 pages, 5555 KiB  
Article
Proportional Condylectomy Using a Titanium 3D-Printed Cutting Guide in Patients with Condylar Hyperplasia
by Wenko Smolka, Carl-Peter Cornelius, Katharina Theresa Obermeier, Sven Otto and Paris Liokatis
Craniomaxillofac. Trauma Reconstr. 2025, 18(1), 7; https://doi.org/10.3390/cmtr18010007 - 3 Jan 2025
Viewed by 2143
Abstract
Background: The purpose of the study was to describe proportional condylectomy in patients with condylar hyperplasia using a titanium 3D-printed ultrathin wire mesh cutting guide placed below the planned bone resection. Methods: Eight patients with condylar hyperplasia underwent proportional condylectomy using an ultrathin titanium 3D-printed cutting guide placed below the planned bone resection. The placement of the guide was facilitated by the incorporation of anatomical landmarks. The accuracy of bone resections guided by such devices was evaluated on postoperative radiographs. The mean postoperative follow-up was 30 months. Results: Surgery could be performed in all patients in the same manner as virtually planned. The fitting accuracy of the cutting guides was judged as good. Postoperative radiographs revealed that the virtually planned shape of the newly formed condylar head after condylectomy could be achieved. Conclusions: In conclusion, the use of virtual computer-assisted planning and CAD/CAM-based cutting guides for proportional condylectomy in unilateral condylar hyperplasia of the mandible offers high accuracy and guarantees very predictable results. Full article

49 pages, 45431 KiB  
Article
Concepts Towards Nation-Wide Individual Tree Data and Virtual Forests
by Matti Hyyppä, Tuomas Turppa, Heikki Hyyti, Xiaowei Yu, Hannu Handolin, Antero Kukko, Juha Hyyppä and Juho-Pekka Virtanen
ISPRS Int. J. Geo-Inf. 2024, 13(12), 424; https://doi.org/10.3390/ijgi13120424 - 26 Nov 2024
Cited by 1 | Viewed by 3040
Abstract
Individual tree data offer potential uses for both forestry and landscape visualization, but this potential has not yet been realized on a large scale. Relying on 5 points/m² Finnish national laser scanning, we present the design and implementation of a system for producing, storing, distributing, querying, and viewing individual tree data, both in a web browser and in a game engine-mediated interactive 3D visualization, a “virtual forest”. In our experiment, 3896 km² of airborne laser scanning point clouds were processed for individual tree detection, resulting in over 100 million trees detected, but the developed technical infrastructure allows 10+ billion trees (a rough count of the log-sized trees in Finland) to be visualized in the same system. About 92% of trees wider than 20 cm in diameter at breast height (corresponding to industrial log-size trees) were detected using national laser scanning data. The relative RMSEs obtained for height, diameter, volume, and biomass (stored above-ground carbon) at the individual tree level were 4.5%, 16.9%, 30.2%, and 29.0%, respectively. The obtained RMSE and bias are low enough for operational forestry and add value over current area-based inventories. By combining the single-tree data with open GIS datasets, a 3D virtual forest was produced automatically. A comparison against georeferenced panoramic images was performed to assess the verisimilitude of the virtual scenes, with the best results obtained from sparsely grown forests on sites with clear landmarks. Both the online viewer and the 3D virtual forest can be used for improved decision-making in multifunctional forestry. Based on this work, individual tree inventory is expected to become operational in Finland in 2026 as part of the third national laser scanning program. Full article
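The relative RMSE figures above express prediction error as a percentage of the observed mean, which makes errors comparable across attributes with different units (height in metres, volume in cubic metres). A sketch with hypothetical tree heights:

```python
import math

def relative_rmse(predicted, observed):
    """RMSE of predictions as a percentage of the observed mean."""
    n = len(observed)
    rmse = math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed)) / n)
    return 100.0 * rmse / (sum(observed) / n)

heights_obs = [20.0, 22.0, 18.0, 25.0]
heights_pred = [19.5, 23.0, 18.5, 24.0]
print(round(relative_rmse(heights_pred, heights_obs), 1))  # 3.7
```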