Search Results (31)

Search Parameters:
Keywords = prosthetic vision

25 pages, 4540 KB  
Article
Vision-Guided Grasp Planning for Prosthetic Hands with AABB-Based Object Representation
by Shifa Sulaiman, Akash Bachhar, Ming Shen and Simon Bøgh
Robotics 2026, 15(1), 22; https://doi.org/10.3390/robotics15010022 - 14 Jan 2026
Viewed by 173
Abstract
Recent advancements in prosthetic technology have increasingly focused on enhancing dexterity and autonomy through intelligent control systems. Vision-based approaches offer promising results for enabling prosthetic hands to interact more naturally with diverse objects in dynamic environments. Building on this foundation, this paper presents a vision-guided grasping algorithm for a prosthetic hand, integrating perception, planning, and control for dexterous manipulation. A camera mounted on the setup captures the scene, and a Bounding Volume Hierarchy (BVH)-based vision algorithm is employed to segment an object for grasping and define its bounding box. Grasp contact points are then computed by generating candidate trajectories with the Rapidly-exploring Random Tree Star (RRT*) algorithm and selecting fingertip end poses based on the minimum Euclidean distance between these trajectories and the object’s point cloud. Each finger’s grasp pose is determined independently, enabling adaptive, object-specific configurations. A Damped Least Squares (DLS)-based inverse kinematics solver computes the corresponding joint angles, which are then transmitted to the finger actuators for execution. Our intention in this work was to present a proof-of-concept pipeline demonstrating that fingertip poses derived from a simple, computationally lightweight geometric representation, specifically an AABB-based segmentation, can be successfully propagated through per-finger planning and executed in real time on the Linker Hand O7 platform. The proposed method is validated in simulation and through experimental integration on the Linker Hand O7 platform.
(This article belongs to the Section Sensors and Control in Robotics)
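As a rough illustration of the selection step described in the abstract, the sketch below (Python, with hypothetical names and toy data) picks, from a set of candidate fingertip trajectory points, the pose nearest to the object's point cloud; the RRT* planner, per-finger constraints, and DLS inverse kinematics stage are omitted.

```python
import numpy as np

def select_fingertip_pose(trajectory_points: np.ndarray,
                          object_cloud: np.ndarray) -> np.ndarray:
    """trajectory_points: (N, 3) candidate fingertip positions (e.g. RRT* output).
    object_cloud: (M, 3) points sampled from the segmented object.
    Returns the candidate at minimum Euclidean distance to the cloud."""
    diffs = trajectory_points[:, None, :] - object_cloud[None, :, :]
    dists = np.linalg.norm(diffs, axis=2)          # (N, M) pairwise distances
    return trajectory_points[np.argmin(dists.min(axis=1))]

# Toy data: a box-shaped cloud, its axis-aligned bounding box (AABB),
# and a straight-line candidate fingertip path.
cloud = np.random.uniform([-0.02, -0.02, 0.0], [0.02, 0.02, 0.04], (500, 3))
aabb = (cloud.min(axis=0), cloud.max(axis=0))
trajectory = np.linspace([0.10, 0.0, 0.02], [0.0, 0.0, 0.02], 50)
print(select_fingertip_pose(trajectory, cloud))
```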

38 pages, 3741 KB  
Article
Hybrid Convolutional Vision Transformer for Robust Low-Channel sEMG Hand Gesture Recognition: A Comparative Study with CNNs
by Ruthber Rodriguez Serrezuela, Roberto Sagaro Zamora, Daily Milanes Hermosilla, Andres Eduardo Rivera Gomez and Enrique Marañon Reyes
Biomimetics 2025, 10(12), 806; https://doi.org/10.3390/biomimetics10120806 - 3 Dec 2025
Viewed by 694
Abstract
Hand gesture classification using surface electromyography (sEMG) is fundamental for prosthetic control and human–machine interaction. However, most existing studies focus on high-density recordings or large gesture sets, leaving limited evidence on performance in low-channel, reduced-gesture configurations. This study addresses this gap by comparing a classical convolutional neural network (CNN), inspired by Atzori’s design, with a Convolutional Vision Transformer (CViT) tailored for compact sEMG systems. Two datasets were evaluated: a proprietary Myo-based collection (10 subjects, 8 channels, six gestures) and a subset of NinaPro DB3 (11 transradial amputees, 12 channels, same gestures). Both models were trained using standardized preprocessing, segmentation, and balanced windowing procedures. Results show that the CNN performs robustly on homogeneous signals (Myo: 94.2% accuracy) but exhibits increased variability in amputee recordings (NinaPro: 92.0%). In contrast, the CViT consistently matches or surpasses the CNN, reaching 96.6% accuracy on Myo and 94.2% on NinaPro. Statistical analyses confirm significant differences in the Myo dataset. The objective of this work is to determine whether hybrid CNN–ViT architectures provide superior robustness and generalization under low-channel sEMG conditions. Rather than proposing a new architecture, this study delivers the first systematic benchmark of CNN and CViT models across amputee and non-amputee subjects using short windows, heterogeneous signals, and identical protocols, highlighting their suitability for compact prosthetic control systems.
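The balanced windowing step lends itself to a short sketch. Everything below is an illustrative assumption (window length, step size, helper name), not the paper's protocol: continuous multi-channel sEMG is sliced into fixed-length overlapping windows, and windows spanning a gesture transition are discarded.

```python
import numpy as np

def window_semg(signal: np.ndarray, labels: np.ndarray,
                win: int = 200, step: int = 100):
    """signal: (T, C) samples x channels; labels: (T,) gesture id per sample."""
    X, y = [], []
    for start in range(0, len(signal) - win + 1, step):
        seg = labels[start:start + win]
        if (seg == seg[0]).all():            # drop transition windows
            X.append(signal[start:start + win])
            y.append(seg[0])
    return np.stack(X), np.array(y)

# Toy 8-channel recording with two gestures of 1000 samples each.
sig = np.random.randn(2000, 8)
lab = np.repeat([0, 1], 1000)
X, y = window_semg(sig, lab)
print(X.shape, np.bincount(y))               # window count and per-class counts
```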

19 pages, 11348 KB  
Article
Vision-Based Grasping Method for Prosthetic Hands via Geometry and Symmetry Axis Recognition
by Yi Zhang, Yanwei Xie, Qian Zhao, Xiaolei Xu, Hua Deng and Nianen Yi
Biomimetics 2025, 10(4), 242; https://doi.org/10.3390/biomimetics10040242 - 15 Apr 2025
Cited by 1 | Viewed by 1545
Abstract
This paper proposes a grasping method for prosthetic hands based on object geometry and symmetry axis. The method utilizes computer vision to extract the geometric shape, spatial position, and symmetry axis of target objects and selects appropriate grasping modes and postures based on the extracted features. First, grasping patterns are classified based on the analysis of hand-grasping movements, and a mapping relationship between object geometry and grasp patterns is established. Then, target object images are captured using binocular depth cameras, and the YOLO algorithm is employed for object detection. The SIFT algorithm is applied to extract the object’s symmetry axis, thereby determining the optimal grasp point and initial hand posture. An experimental platform is built based on a seven-degree-of-freedom (7-DoF) robotic arm and a multi-mode prosthetic hand to conduct grasping experiments on objects with different characteristics. Experimental results demonstrate that the proposed method achieves high accuracy and real-time performance in recognizing object geometric features. The system can automatically match appropriate grasp modes according to object features, improving grasp stability and success rate.
(This article belongs to the Special Issue Human-Inspired Grasp Control in Robotics 2025)
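As a schematic of the geometry-to-grasp-mode mapping idea, the sketch below hard-codes a lookup; the shape labels, width threshold, and grasp names are invented for illustration, whereas the paper derives its mapping from an analysis of hand-grasping movements.

```python
def select_grasp_mode(shape: str, width_m: float) -> str:
    """Map a detected geometric class and object width to a grasp mode."""
    if shape == "sphere":
        return "spherical grasp"
    if shape == "cylinder":
        return "cylindrical grasp" if width_m > 0.03 else "tripod pinch"
    if shape == "box":
        return "power grasp" if width_m > 0.02 else "lateral pinch"
    return "precision pinch"                 # fallback for thin or irregular objects

print(select_grasp_mode("cylinder", 0.05))   # -> cylindrical grasp
```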

44 pages, 3233 KB  
Review
Advancements in Ocular Neuro-Prosthetics: Bridging Neuroscience and Information and Communication Technology for Vision Restoration
by Daniele Giansanti
Biology 2025, 14(2), 134; https://doi.org/10.3390/biology14020134 - 28 Jan 2025
Cited by 4 | Viewed by 8542
Abstract
Background: Neuroprosthetics for vision restoration have advanced significantly, incorporating technologies like retinal implants, cortical implants, and non-invasive stimulation methods. These advancements hold the potential to tackle major challenges in visual prosthetics, such as enhancing functionality, improving biocompatibility, and enabling real-time object recognition. Aim: The aim of this review is to provide a comprehensive analysis of the latest advancements in ocular neuroprostheses. Methods: A narrative review was conducted, focusing on the latest developments in visual neuroprosthetics. Comprehensive searches were carried out on Google Scholar, PubMed, and Scopus using specific keywords. A specific narrative checklist was applied, alongside a tailored quality assessment methodology, to evaluate the quality of the studies included. A total of sixteen relevant studies from the past three years were included in the review. Results and discussion: The integration of artificial retinas, cortical implants, high-technology-enabled prosthetics, gene therapies, nanotechnology, and bioprinting has shown significant promise in enhancing the quality and functionality of vision restoration systems, offering the potential to address complex visual impairments and improve independence and mobility for individuals with blindness. These innovations appear to have the potential to transform healthcare systems in the future by enabling more efficient and personalized therapies and prosthetic devices. However, challenges such as energy efficiency, scalability, and the neural dynamics of vision restoration persist, requiring continued interdisciplinary collaboration to refine these technologies, overcome ethical and regulatory hurdles, and ensure their effectiveness in real-world applications. Conclusions: While visual neuroprosthetics have made remarkable progress, addressing challenges related to energy consumption and regulatory and ethical concerns will be crucial for ensuring that neuroprosthetic devices can effectively meet the needs of individuals with visual impairments.
(This article belongs to the Special Issue The Convergence of Neuroscience and ICT: From Data to Insights)

17 pages, 2661 KB  
Article
Spatially Localized Visual Perception Estimation by Means of Prosthetic Vision Simulation
by Diego Luján Villarreal and Wolfgang Krautschneider
J. Imaging 2024, 10(11), 294; https://doi.org/10.3390/jimaging10110294 - 18 Nov 2024
Viewed by 1965
Abstract
Retinal prosthetic devices aim to restore some vision in visually impaired patients by electrically stimulating neural cells in the visual system. Although there have been several notable advancements in the creation of electrically stimulated small dot-like perceptions, a deeper comprehension of the physical properties of phosphenes is still necessary. This study analyzes the influence of two independent electrode array topologies on achieving single-localized stimulation while the retina is electrically stimulated: a two-dimensional (2D) hexagon-shaped array reported in clinical studies and a patented three-dimensional (3D) linear electrode carrier. For both, cell stimulation is verified in COMSOL Multiphysics by developing a lifelike 3D computational model that includes the relevant retinal interface elements and the dynamics of the voltage-gated ionic channels. The evoked percepts previously described in clinical studies using the 2D array are strongly associated with our simulation-based findings, allowing for the development of analytical models of the evoked percepts. Moreover, our findings identify differences between the visual sensations induced by the two arrays. The 2D array showed drawbacks during stimulation; similarly, state-of-the-art 2D visual prostheses provide only dot-like visual sensations in close proximity to the electrode. The 3D design could offer a technique for improving cell selectivity because it requires low-intensity threshold activation, which results in volumes of stimulation similar to the volume surrounded by a solitary retinal ganglion cell (RGC). Our research establishes a proof-of-concept technique for determining the utility of the 3D electrode array for selectively activating individual RGCs at the highest density via small-sized electrodes while maintaining electrochemical safety.
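For readers unfamiliar with simulated prosthetic vision, the sketch below renders the dot-like percepts such studies build on: an image is reduced to a coarse grid of Gaussian phosphenes. The grid size and phosphene spread are illustrative and unrelated to the specific arrays modelled in the paper.

```python
import numpy as np

def phosphene_render(img: np.ndarray, grid: int = 16, sigma: float = 0.35):
    """img: (H, W) grayscale in [0, 1]; returns a phosphene-rendered image."""
    h, w = img.shape
    cell_h, cell_w = h // grid, w // grid
    yy, xx = np.mgrid[0:h, 0:w]
    out = np.zeros((h, w))
    for cy in np.arange(cell_h / 2, h, cell_h):
        for cx in np.arange(cell_w / 2, w, cell_w):
            # Phosphene brightness = mean intensity of the underlying cell.
            y0, x0 = int(cy - cell_h / 2), int(cx - cell_w / 2)
            level = img[y0:y0 + cell_h, x0:x0 + cell_w].mean()
            out += level * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2)
                                  / (2 * (sigma * cell_h) ** 2))
    return np.clip(out, 0.0, 1.0)

demo = np.zeros((64, 64)); demo[20:44, 20:44] = 1.0   # white square
print(phosphene_render(demo).shape)                    # (64, 64)
```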

10 pages, 1629 KB  
Article
Digitization of Dentate and Edentulous Maxillectomy and Mandibulectomy Defects with Three Different Intraoral Scanners: A Comparative In Vitro Study
by Mariko Hattori, Sandra Stadler, Yuka I. Sumita, Benedikt C. Spies, Kirstin Vach, Ralf-Joachim Kohal and Noriyuki Wakabayashi
J. Clin. Med. 2024, 13(22), 6810; https://doi.org/10.3390/jcm13226810 - 13 Nov 2024
Cited by 1 | Viewed by 1620
Abstract
Objectives: The objective of this study was to compare the trueness and precision of three intraoral scanners (IOSs) for the digitization of dentate and edentulous maxillectomy and mandibulectomy defects in artificial models. Methods: Four representative defect models (a dentate and an edentulous maxillectomy model and a dentate and an edentulous mandibulectomy model) were used for digital scanning. After a reference scan of each model, they were scanned with three IOSs: CEREC AC Omnicam, True Definition, and cara TRIOS 3. For comparison, five conventional impressions with a polysiloxane material were taken and digitized with a laboratory scanner. The obtained data were evaluated with three-dimensional (3D) inspection software and superimposed with the reference scan data by using a best-fit algorithm. The mean absolute 3D deviations of the IOS data compared to the reference data (trueness) and within each IOS's datasets (precision) were analyzed. Linear mixed models and multiple pairwise comparisons were used for statistical analyses. Results: The overall comparison of the four evaluated procedures for data acquisition showed a significant difference in trueness (p < 0.0001) and precision (p < 0.0001). The average mean trueness of the IOSs ranged from 32.17 to 204.43 µm, compared to 32.07 to 64.85 µm for conventional impressions. The conventional impression and cara TRIOS 3 performed the most precisely, with no significant difference between them; CEREC AC Omnicam achieved the worst precision. Conclusions: With a suitable intraoral scanner, defective jaws, even without teeth, could be captured with satisfactory accuracy. This demonstrates the possibility of using an intraoral scanner for patients with maxillofacial defects and offers a vision of using digital technology in maxillofacial prosthetics.
(This article belongs to the Section Dentistry, Oral Surgery and Oral Medicine)
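The trueness metric (mean absolute 3D deviation against the reference scan) can be sketched as follows, assuming both scans are point clouds already superimposed by a best-fit step such as ICP, which is omitted here; the function name and toy data are hypothetical.

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_absolute_deviation_um(test_pts: np.ndarray,
                               ref_pts: np.ndarray) -> float:
    """Both arrays are (N, 3) coordinates in millimetres, pre-aligned."""
    dists, _ = cKDTree(ref_pts).query(test_pts)   # nearest-neighbour distances
    return float(dists.mean() * 1000.0)           # mm -> micrometres

ref = np.random.rand(5000, 3) * 10.0               # toy 10 mm reference patch
test = ref + np.random.normal(0, 0.03, ref.shape)  # ~30 um simulated noise
print(f"{mean_absolute_deviation_um(test, ref):.1f} um")
```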

32 pages, 15790 KB  
Review
Human–AI Collaboration for Remote Sighted Assistance: Perspectives from the LLM Era
by Rui Yu, Sooyeon Lee, Jingyi Xie, Syed Masum Billah and John M. Carroll
Future Internet 2024, 16(7), 254; https://doi.org/10.3390/fi16070254 - 18 Jul 2024
Cited by 9 | Viewed by 6825
Abstract
Remote sighted assistance (RSA) has emerged as a conversational technology aiding people with visual impairments (VI) through real-time video chat communication with sighted agents. We conducted a literature review and interviewed 12 RSA users to understand the technical and navigational challenges faced by both agents and users. The technical challenges were categorized into four groups: agents’ difficulties in orienting and localizing users, acquiring and interpreting users’ surroundings and obstacles, delivering information specific to user situations, and coping with poor network connections. We also presented 15 real-world navigational challenges, including 8 outdoor and 7 indoor scenarios. Given the spatial and visual nature of these challenges, we identified relevant computer vision problems that could potentially provide solutions. We then formulated 10 emerging problems that neither human agents nor computer vision can fully address alone. For each emerging problem, we discussed solutions grounded in human–AI collaboration. Additionally, with the advent of large language models (LLMs), we outlined how RSA can integrate with LLMs within a human–AI collaborative framework, envisioning the future of visual prosthetics.

15 pages, 5779 KB  
Article
Development of the Anthropomorphic Arm for Collaborative and Home Service Robot CHARMIE
by Fawad A. Syed, Gil Lopes and A. Fernando Ribeiro
Actuators 2024, 13(7), 239; https://doi.org/10.3390/act13070239 - 26 Jun 2024
Cited by 1 | Viewed by 3456
Abstract
Service robots are rapidly transitioning from concept to reality, making significant strides in development. Similarly, the field of prosthetics is evolving at an impressive pace, with both areas now being highly relevant in the industry. Advancements in these fields are continually pushing the boundaries of what is possible, leading to the increasing creation of individual arm and hand prosthetics, either as standalone units or combined packages. This trend is driven by the rise of advanced collaborative robots that seamlessly integrate with human counterparts in real-world applications. This paper presents an open-source, 3D-printed robotic arm that has been assembled and programmed using two distinct approaches. The first approach involves controlling the hand via teleoperation, utilizing a camera and machine learning-based hand pose estimation. This method details the programming techniques and processes required to capture data from the camera and convert it into hardware signals. The second approach employs kinematic control using the Denavit–Hartenberg method to define motion and determine the position of the end effector in 3D space. Additionally, this work discusses the assembly and modifications made to the arm and hand to create a cost-effective and practical solution. Typically, implementing teleoperation requires numerous sensors and cameras to ensure smooth and successful operation. This paper explores methods enabled by artificial intelligence (AI) that reduce the need for extensive sensor arrays and equipment. It investigates how AI-generated data can be translated into tangible hardware applications across various fields. The advancements in computer vision, combined with AI capable of accurately predicting poses, have the potential to revolutionize the way we control and interact with the world around us.
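The second approach rests on the standard Denavit–Hartenberg convention, chaining one homogeneous transform per joint to locate the end effector. The sketch below uses a toy three-joint parameter table, not the actual parameters of the arm described in the paper.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard DH homogeneous transform for one joint."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def end_effector_position(joint_angles, dh_table):
    """dh_table rows: (d, a, alpha) per joint; joint_angles: theta per joint."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_table):
        T = T @ dh_transform(theta, d, a, alpha)
    return T[:3, 3]

toy_dh = [(0.10, 0.0, np.pi / 2), (0.0, 0.25, 0.0), (0.0, 0.20, 0.0)]
print(end_effector_position([0.0, np.pi / 4, -np.pi / 6], toy_dh))
```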

18 pages, 6176 KB  
Article
On Automated Object Grasping for Intelligent Prosthetic Hands Using Machine Learning
by Jethro Odeyemi, Akinola Ogbeyemi, Kelvin Wong and Wenjun Zhang
Bioengineering 2024, 11(2), 108; https://doi.org/10.3390/bioengineering11020108 - 24 Jan 2024
Cited by 9 | Viewed by 3511
Abstract
Prosthetic technology has witnessed remarkable advancements, yet challenges persist in achieving autonomous grasping control while ensuring the user’s experience is not compromised. Current electronic prosthetics often require extensive training for users to gain fine motor control over the prosthetic fingers, hindering their usability and acceptance. To address this challenge and improve the autonomy of prosthetics, this paper proposes an automated method that leverages computer vision-based techniques and machine learning algorithms. In this study, three reinforcement learning algorithms, namely Soft Actor-Critic (SAC), Deep Q-Network (DQN), and Proximal Policy Optimization (PPO), are employed to train agents for automated grasping tasks. The results indicate that the SAC algorithm achieves the highest success rate of 99% among the three algorithms at just under 200,000 timesteps. This research also shows that an object’s physical characteristics can affect the agent’s ability to learn an optimal policy. Moreover, the findings highlight the potential of the SAC algorithm in developing intelligent prosthetic hands with automatic object-gripping capabilities.
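As an illustration of how such an agent might be trained, the sketch below uses the stable-baselines3 implementation of SAC; the paper does not state its tooling, and the Gymnasium task here is a stand-in for its grasping simulation (the paper reports SAC reaching 99% success just under 200,000 timesteps).

```python
import gymnasium as gym
from stable_baselines3 import SAC

# Stand-in continuous-control task; a grasping environment would replace it.
env = gym.make("Pendulum-v1")
model = SAC("MlpPolicy", env, verbose=0)
model.learn(total_timesteps=5_000)        # the paper's budget was ~200,000

obs, _ = env.reset()
action, _ = model.predict(obs, deterministic=True)
print(action)
```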

17 pages, 3949 KB  
Article
A Finger Vein Liveness Detection System Based on Multi-Scale Spatial-Temporal Map and Light-ViT Model
by Liukui Chen, Tengwen Guo, Li Li, Haiyang Jiang, Wenfu Luo and Zuojin Li
Sensors 2023, 23(24), 9637; https://doi.org/10.3390/s23249637 - 5 Dec 2023
Cited by 6 | Viewed by 2826
Abstract
Prosthetic attack is a problem that must be prevented in current finger vein recognition applications. To solve this problem, a finger vein liveness detection system was established in this study. The system begins by capturing short-term static finger vein videos using uniform near-infrared lighting. Subsequently, it employs Gabor filters without a direct-current (DC) component for vein area segmentation. The vein area is then divided into blocks to compute a multi-scale spatial–temporal map (MSTmap), which facilitates the extraction of coarse liveness features. Finally, these features are refined and used to predict liveness detection results with the proposed Light Vision Transformer (Light-ViT) model, whose enhanced backbone is designed by interleaving multiple MN blocks and Light-ViT blocks. This architecture effectively balances the learning of local image features, controls network parameter complexity, and substantially improves the accuracy of liveness detection. The accuracy of the Light-ViT model was verified to be 99.63% on a self-made living/prosthetic finger vein video dataset. Once the model is made lightweight, the proposed system can also be deployed directly on finger vein recognition terminals.
(This article belongs to the Special Issue AI-Driven Sensing for Image Processing and Recognition)
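The zero-DC Gabor segmentation step can be sketched with OpenCV as below; the kernel size, orientations, and Otsu thresholding are illustrative choices, not the paper's parameters.

```python
import cv2
import numpy as np

def segment_veins(gray: np.ndarray) -> np.ndarray:
    """Max response over zero-mean Gabor kernels, then Otsu threshold."""
    responses = []
    for theta in np.arange(0, np.pi, np.pi / 8):          # 8 orientations
        k = cv2.getGaborKernel((21, 21), sigma=4.0, theta=theta,
                               lambd=10.0, gamma=0.5, psi=0.0)
        k -= k.mean()                                     # remove DC component
        responses.append(cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, k))
    resp = np.max(responses, axis=0)
    resp8 = cv2.normalize(resp, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(resp8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask

print(segment_veins(np.random.randint(0, 256, (128, 128), dtype=np.uint8)).shape)
```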

17 pages, 3772 KB  
Article
A Semiautonomous Control Strategy Based on Computer Vision for a Hand–Wrist Prosthesis
by Gianmarco Cirelli, Christian Tamantini, Luigi Pietro Cordella and Francesca Cordella
Robotics 2023, 12(6), 152; https://doi.org/10.3390/robotics12060152 - 13 Nov 2023
Cited by 10 | Viewed by 3862
Abstract
Alleviating the burden on amputees in terms of high-level control of their prosthetic devices is an open research challenge. EMG-based intention detection presents some limitations due to movement artifacts, fatigue, and signal instability. The integration of exteroceptive sensing can provide a valuable solution to overcome such limitations. In this paper, a novel semiautonomous control system (SCS) for wrist–hand prostheses using a computer vision system (CVS) is proposed and validated. The SCS integrates object detection, grasp selection, and wrist orientation estimation algorithms. By combining the CVS with a simulated EMG-based intention detection module, the SCS guarantees reliable prosthesis control. Results show high accuracy in grasping and object classification (≥97%) at a fast frame analysis frequency (2.07 FPS). The SCS achieves an average angular estimation error ≤18° and stability ≤0.8° for the proposed application. Operative tests demonstrate the capability of the proposed approach to handle complex real-world scenarios and pave the way for future implementation on a real prosthetic device.
(This article belongs to the Special Issue AI for Robotic Exoskeletons and Prostheses)
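Schematically, a controller of this kind gates vision-derived grasp choices behind the user's EMG intent. Everything in the sketch below, including the class-to-grasp lookup, is a hypothetical stand-in for the paper's modules.

```python
GRASP_FOR_CLASS = {"bottle": "cylindrical", "coin": "lateral pinch",
                   "ball": "spherical"}                # hypothetical lookup

def control_step(detected_class: str, wrist_angle_deg: float,
                 emg_close_intent: bool):
    """Issue a prosthesis command only when the user signals intent."""
    if not emg_close_intent:
        return None                                     # user retains authority
    return {"grasp": GRASP_FOR_CLASS.get(detected_class, "power"),
            "wrist_deg": wrist_angle_deg}               # CVS-estimated orientation

print(control_step("bottle", 35.0, emg_close_intent=True))
```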

13 pages, 1737 KB  
Article
Experimental Evaluation of a Hybrid Sensory Feedback System for Haptic and Kinaesthetic Perception in Hand Prostheses
by Emre Sariyildiz, Fergus Hanss, Hao Zhou, Manish Sreenivasa, Lucy Armitage, Rahim Mutlu and Gursel Alici
Sensors 2023, 23(20), 8492; https://doi.org/10.3390/s23208492 - 16 Oct 2023
Cited by 6 | Viewed by 7446
Abstract
This study proposes a new hybrid multi-modal sensory feedback system for prosthetic hands that can provide not only haptic and proprioceptive feedback but also facilitate object recognition without the aid of vision. Modality-matched haptic perception was provided using a mechanotactile feedback system that can proportionally apply the gripping force through the use of a force controller. A vibrotactile feedback system was also employed to distinguish four discrete grip positions of the prosthetic hand. The system performance was evaluated with a total of 32 participants in three experiments: (i) haptic feedback, (ii) proprioceptive feedback, and (iii) object recognition with hybrid haptic–proprioceptive feedback. The results from the haptic feedback experiment showed that the participants’ ability to accurately perceive the applied force depended on its magnitude: as the feedback force increased, participants tended to underestimate the force levels, with the estimated force decreasing as a percentage of the applied force. Of the three arm locations (forearm volar, forearm ventral, and bicep) and two muscle states (relaxed and tensed) tested, the highest accuracy was obtained for the bicep location in the relaxed state. The results from the proprioceptive feedback experiment showed that participants could very accurately identify four different grip positions of the hand prosthesis (i.e., open hand, wide grip, narrow grip, and closed hand) without a single case of misidentification. In experiment 3, participants could identify objects with different shapes and stiffnesses with an overall high success rate of 90.5% across all combinations of location and muscle state. The feedback location and muscle state did not have a significant effect on object recognition accuracy. Overall, our study results indicate that the hybrid feedback system may be a very effective way to enrich a prosthetic hand user’s experience of the stiffness and shape of commonly manipulated objects.
(This article belongs to the Special Issue Sensor Technology for Improving Human Movements and Postures: Part II)
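The force-proportional mechanotactile channel implies a mapping from measured grip force to a skin-contact force setpoint for the feedback controller. The sketch below uses illustrative ranges; the paper's controller and calibration are not reproduced.

```python
def feedback_setpoint(grip_force_n: float,
                      grip_max_n: float = 40.0,
                      feedback_max_n: float = 6.0) -> float:
    """Linearly map measured grip force onto the skin-contact force range."""
    ratio = min(max(grip_force_n / grip_max_n, 0.0), 1.0)   # clamp to [0, 1]
    return ratio * feedback_max_n

for f in (5.0, 20.0, 60.0):
    print(f"{f:5.1f} N grip -> {feedback_setpoint(f):.2f} N on skin")
```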

13 pages, 3764 KB  
Case Report
Vibrotactile Feedback for a Person with Transradial Amputation and Visual Loss: A Case Report
by Gerfried Peternell, Harald Penasso, Henriette Luttenberger, Hildegard Ronacher, Roman Schlintner, Kara Ashcraft, Alexander Gardetto, Jennifer Ernst and Ursula Kropiunig
Medicina 2023, 59(10), 1710; https://doi.org/10.3390/medicina59101710 - 25 Sep 2023
Cited by 4 | Viewed by 2507
Abstract
Background and Objectives: After major upper-limb amputation, people face challenges due to losing tactile information and gripping function in their hands. While vision can confirm the success of an action, relying on it diverts attention from other sensations and tasks. This case report presents a 30-year-old man with traumatic, complete vision loss and transradial left forearm amputation. It emphasizes the importance of restoring tactile abilities when visual compensation is impossible. Materials and Methods: A prototype tactile feedback add-on system was developed, consisting of a sensor glove and upper arm cuff with related vibration actuators. Results: We found a 66% improvement in the Box and Blocks test and an overall functional score increase from 30% to 43% in the Southampton Hand Assessment Procedure with feedback. Qualitative improvements in bimanual activities, ergonomics, and reduced reliance on the unaffected hand were observed. Incorporating the tactile feedback system improved the precision of grasping and the utility of the myoelectric hand prosthesis, freeing the unaffected hand for other tasks. Conclusions: This case demonstrated improvements in prosthetic hand utility achieved by restoring peripheral sensitivity while excluding the possibility of visual compensation. Restoring tactile information from the hand and fingers could benefit individuals with impaired vision and somatosensation, improving acceptance, embodiment, social integration, and pain management.
(This article belongs to the Special Issue Innovations in Amputation Care)

18 pages, 10124 KB  
Article
A Tool to Assist in the Analysis of Gaze Patterns in Upper Limb Prosthetic Use
by Peter Kyberd, Alexandru Florin Popa and Théo Cojean
Prosthesis 2023, 5(3), 898-915; https://doi.org/10.3390/prosthesis5030063 - 8 Sep 2023
Cited by 3 | Viewed by 2151
Abstract
Gaze-tracking, where the point of regard of a subject is mapped onto the image of the scene the subject sees, can be employed to study the visual attention of the users of prosthetic hands. It can show whether the user pays greater attention to the actions of their prosthetic hand as they use it to perform manipulation tasks, compared with the general population. Conventional analysis of the video data requires a human operator to identify the key areas of interest in every frame. Computer vision techniques can assist with this process, but fully automatic systems require large training sets, and prosthetic investigations tend to be limited in numbers. However, if the assessment task is well controlled, it is possible to build a much simpler system that uses initial input from an operator to identify the areas of interest and then lets the computer track the objects throughout the task. The tool described here employs colour separation and edge detection on images of the visual field to identify the objects to be tracked. To simplify the computer’s task further, the test uses the Southampton Hand Assessment Procedure (SHAP) to define the activity spatially and temporally, reducing the search space for the computer. The work reported here concerns the development of a software tool capable of identifying and tracking the points of regard and areas of interest throughout an activity with minimal human operator input. Gaze was successfully tracked for fourteen unimpaired subjects and compared with the gaze of four users of myoelectric hands. The SHAP cutting task is described, and differences in attention were observed: the prosthesis users made a greater number of shorter fixations than the unimpaired subjects and looked ahead less to the next phase of the task.
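The colour-separation tracking the tool relies on can be sketched with standard OpenCV calls: threshold a frame in HSV around the operator-specified object colour and take the largest blob's centroid as the tracked area of interest. The HSV range and frame below are toy values.

```python
import cv2
import numpy as np

def track_object(frame_bgr: np.ndarray, hsv_lo, hsv_hi):
    """Return the (x, y) centroid of the largest blob in the HSV colour range."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    m = cv2.moments(max(contours, key=cv2.contourArea))
    if m["m00"] == 0:
        return None
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])

frame = np.zeros((240, 320, 3), dtype=np.uint8)
cv2.rectangle(frame, (100, 80), (160, 140), (0, 0, 255), -1)   # red block
print(track_object(frame, (0, 120, 120), (10, 255, 255)))      # ~ (130, 110)
```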

9 pages, 3153 KB  
Article
The Status of Digital Dental Technology Implementation in the Saudi Dental Schools’ Curriculum: A National Cross-Sectional Survey for Healthcare Digitization
by Hayam A. Alfallaj, Kelvin I. Afrashtehfar, Ali K. Asiri, Farah S. Almasoud, Ghaida H. Alnaqa and Nadia S. Al-Angari
Int. J. Environ. Res. Public Health 2023, 20(1), 321; https://doi.org/10.3390/ijerph20010321 - 25 Dec 2022
Cited by 16 | Viewed by 4412
Abstract
Objective: The primary objective of this cross-sectional national study was to investigate the status of digital dental technology (DDT) adoption in Saudi Arabian undergraduate dental education. A secondary objective was to explore the impact of dental schools’ funding sources on the incorporation of digital technologies. Methods: A self-administered questionnaire was distributed to the chairpersons of the prosthetic sciences departments of the 27 dental schools in Saudi Arabia. If a department chairperson failed to respond to the survey, a designated full-time faculty member was contacted to fill out the form. The participants were asked about the school’s sector, whether and at what level DDT was implemented in the curriculum, and their perceptions of the facilitators of and challenges to incorporating DDT. Results: Of the 27 dental schools (18 public and 8 private), 26 responded to the questionnaire (response rate: 96.3%). The geographic distribution of the respondent schools was as follows: 12 schools in the central region, 6 in the western region, and 8 in other regions. Seventeen schools secure and preserve patients’ records using electronic software, whereas nine schools use paper charts. Seventeen schools (64.4%) implemented DDT in their curricula. The schools that had not incorporated DDT into their undergraduate curricula cited its absence from the curriculum (78%), lack of expertise (66%), untrained faculty and staff (44%), and cost (33%) as reasons. Conclusions: This national study showed that digital components still need to be integrated into Saudi Arabian dental schools’ curricula and patient care. Additionally, there was no association between funding sources and DDT implementation in current curricula. Consequently, Saudi dental schools must emphasize the implementation and utilization of DDT to align with Saudi Vision 2030 for healthcare digitization and to graduate dentists competent in digital dental care.
(This article belongs to the Special Issue Health Professions Education and Clinical Training)
