Search Results (41)

Search Parameters:
Keywords = tactile aids

16 pages, 4713 KiB  
Article
Cutting-Edge Vibration Sensor Morphologically Configured by Mimicking a Tactile Cutaneous Receptor Using Magnetic-Responsive Hybrid Fluid (HF)
by Kunio Shimada
Sensors 2025, 25(11), 3366; https://doi.org/10.3390/s25113366 - 27 May 2025
Viewed by 414
Abstract
Vibration sensors are important in many engineering fields, including industry, surgery, space, and mechanics, for applications such as remote and autonomous driving. We propose a novel vibratory sensor that mimics human tactile receptors, with a configuration different from that of current sensors such as strain gauges and piezo materials. The basic principle is the perception of vibration via touch, through a cutaneous mechanoreceptor that is sensitive to vibration. We investigated the characteristics of the proposed sensor, in which the mechanoreceptor was covered in either hard rubber (such as silicon oil) or soft rubber (such as urethane), for both low- and high-frequency ranges. The fabricated sensor is based on piezoelectricity with a built-in voltage; it senses applied vibration by means of hairs in the sensor and the hardness of the outer cover. We also investigated two proposed parameters: the sensor's response time to vibratory stimuli, expressed as an equivalent firing rate (e.f.r.), and a gauge factor (GF,pe) analogous to that used in piezoresistivity. Evaluating the sensor with these parameters proved effective for designing a piezoelectricity-based sensor, and both parameters were enhanced by the hairs in the sensor and the hardness of the outer cover.
(This article belongs to the Special Issue Advancements and Applications of Biomimetic Sensors Technologies)
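The abstract above evaluates the sensor with a gauge factor defined by analogy to piezoresistivity. As a minimal sketch of that conventional definition, GF = (ΔR/R₀)/ε, assuming the paper's piezoelectric variant follows the same ratio-of-relative-change form (the function name and values here are illustrative, not taken from the paper):

```python
def gauge_factor(delta_r: float, r0: float, strain: float) -> float:
    """Classic piezoresistive gauge factor GF = (dR/R0) / strain."""
    if strain == 0:
        raise ValueError("strain must be non-zero")
    return (delta_r / r0) / strain

# Example: a 1% resistance change under 0.2% strain gives GF = 5.
print(gauge_factor(delta_r=1.0, r0=100.0, strain=0.002))
```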

18 pages, 2572 KiB  
Review
Deep Learning Approaches for 3D Model Generation from 2D Artworks to Aid Blind People with Tactile Exploration
by Rocco Furferi
Heritage 2025, 8(1), 12; https://doi.org/10.3390/heritage8010012 - 28 Dec 2024
Viewed by 2523
Abstract
An effective way to enable blind people to enjoy works of art is to reproduce tactile copies of the work that facilitate tactile exploration. This is especially important for paintings, which are inherently inaccessible to the blind unless they are transformed into 3D models. Artificial intelligence techniques are growing rapidly and have become a paramount means of solving a variety of previously hard-to-solve tasks, so it is reasonable to expect that the translation from 2D images to 3D models with such methods will continue to develop. Unfortunately, reconstructing a 3D model from a single image, especially a painting-based image, is an ill-posed problem because of depth ambiguity and the lack of a ground truth for the 3D model. To address this issue, this paper provides an overview of artificial intelligence-based methods for reconstructing 3D geometry from a single image. The survey explores the potential of Convolutional Neural Networks, Generative Adversarial Networks, Variational Autoencoders, and zero-shot methods. Through a small set of case studies, the capabilities and limitations of CNNs in creating a 3D-scene model from artworks are also examined. The findings suggest that, while deep learning models are effective for 3D retrieval from paintings, they still require post-processing and user interaction to improve the accuracy of the resulting 3D models.
(This article belongs to the Special Issue AI and the Future of Cultural Heritage)
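Whatever network estimates the relief, the final step of such pipelines is typically turning a 2D depth/height map into printable 3D geometry. A minimal sketch of that conversion, assuming a plain height-per-pixel array (not any specific method from the review):

```python
import numpy as np

def heightmap_to_mesh(height: np.ndarray, scale: float = 1.0):
    """Convert an HxW relief map into vertices and triangular faces,
    the kind of mesh a tactile copy of a painting could be printed from."""
    h, w = height.shape
    ys, xs = np.mgrid[0:h, 0:w]
    verts = np.stack([xs.ravel(), ys.ravel(),
                      scale * height.ravel()], axis=1).astype(float)
    faces = []
    for y in range(h - 1):
        for x in range(w - 1):
            i = y * w + x
            faces.append((i, i + 1, i + w))          # upper-left triangle
            faces.append((i + 1, i + w + 1, i + w))  # lower-right triangle
    return verts, np.array(faces)

verts, faces = heightmap_to_mesh(np.zeros((3, 4)))
print(len(verts), len(faces))  # 12 vertices, 12 triangles
```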

19 pages, 484 KiB  
Article
Preventing Dysgraphia: Early Observation Protocols and a Technological Framework for Monitoring and Enhancing Graphomotor Skills
by Silvia Ceccacci, Arianna Taddei, Noemi Del Bianco, Catia Giaconi, Dolors Forteza Forteza and Francisca Moreno-Tallón
Information 2024, 15(12), 781; https://doi.org/10.3390/info15120781 - 5 Dec 2024
Cited by 3 | Viewed by 2423
Abstract
Writing is first-order instrumental learning that develops throughout the life cycle, a complex process evolving from early childhood education. Identifying risk predictors of dysgraphia at age 5 could significantly reduce the impact of graphomotor difficulties in early primary school, which can affect handwriting performance to the point of illegibility. Building on the established scientific literature, this study focuses on screening processes, with particular attention to writing requirements. The paper proposes a novel prevention and intervention system based on new technologies for teachers, educators, and therapists. Specifically, it presents a pilot study testing an innovative tactile device that analyzes graphomotor performance and motor coordination in real time. The research explores whether this haptic device can serve as an effective pedagogical aid for preventing graphomotor issues in children aged 5 to 6 years. The results showed a high level of engagement and usability among young participants. Furthermore, the quality of graphomotor traces executed by children after virtual and physical training, respectively, was comparable, supporting the use of the tool as a complementary training resource for observing and enhancing graphomotor processes.

17 pages, 6147 KiB  
Article
Tactile Simultaneous Localization and Mapping Using Low-Cost, Wearable LiDAR
by John LaRocco, Qudsia Tahmina, John Simonis, Taylor Liang and Yiyao Zhang
Hardware 2024, 2(4), 256-272; https://doi.org/10.3390/hardware2040012 - 29 Sep 2024
Viewed by 1776
Abstract
Tactile maps are widely recognized as useful tools for mobility training and the rehabilitation of visually impaired individuals. However, current tactile maps lack real-time versatility and are limited by high manufacturing and design costs. In this study, we introduce ClaySight, a device and model for automatic tactile map generation in wearable devices that use low-cost laser imaging, detection, and ranging (LiDAR) to improve the immediate spatial knowledge of visually impaired individuals. Our system uses LiDAR sensors to (1) produce affordable, low-latency tactile maps, (2) function as a day-to-day wayfinding aid, and (3) provide interactivity using a wearable device. The system comprises a dynamic mapping and scanning algorithm and an interactive handheld 3D-printed device that houses the hardware. Our algorithm accommodates user specifications to dynamically interact with objects in the surrounding area and creates map models that can be rendered with haptic feedback or alternative tactile systems. Using economical components and open-source software, the ClaySight system has significant potential to enhance independence and quality of life for the visually impaired.
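The core data transformation in a system like this is rasterizing LiDAR returns into a small map that can be rendered tactilely. A minimal sketch under the assumption of a 2D scan of (angle, range) pairs; grid size, cell size, and the function name are illustrative, not ClaySight's actual parameters:

```python
import numpy as np

def scan_to_grid(angles_rad, ranges_m, cell=0.1, size=41):
    """Rasterize a 2D LiDAR scan into an occupancy grid centred on the
    sensor -- an intermediate map a wearable device could render as raised
    pins or a printed tile. Cells are `cell` metres wide."""
    grid = np.zeros((size, size), dtype=np.uint8)
    c = size // 2  # sensor sits at the grid centre
    for a, r in zip(angles_rad, ranges_m):
        x = int(round(r * np.cos(a) / cell)) + c
        y = int(round(r * np.sin(a) / cell)) + c
        if 0 <= x < size and 0 <= y < size:
            grid[y, x] = 1  # mark an obstacle cell
    return grid

grid = scan_to_grid([0.0, np.pi / 2], [1.0, 0.5])
print(grid.sum())  # 2 occupied cells
```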

11 pages, 292 KiB  
Review
Recent Advances in Robotic Surgery for Urologic Tumors
by Sen-Yuan Hong and Bao-Long Qin
Medicina 2024, 60(10), 1573; https://doi.org/10.3390/medicina60101573 - 25 Sep 2024
Cited by 3 | Viewed by 2194
Abstract
This review discusses recent advances in robotic surgery for urologic tumors, focusing on three key areas: robotic systems, assistive technologies, and artificial intelligence. The Da Vinci SP system has enhanced the minimally invasive nature of robotic surgeries, while the Senhance system offers advantages such as tactile feedback and eye-tracking capabilities. Technologies like 3D reconstruction combined with augmented reality and fluorescence imaging aid surgeons in precisely identifying the anatomical relationships between tumors and surrounding structures, improving surgical efficiency and outcomes. Additionally, the development of artificial intelligence lays the groundwork for automated robotics. As these technologies continue to evolve, we are entering an era of minimally invasive, precise, and intelligent robotic surgery.
(This article belongs to the Section Urology & Nephrology)
15 pages, 3502 KiB  
Article
Evaluation of Haptic Textures for Tangible Interfaces for the Tactile Internet
by Nikolaos Tzimos, George Voutsakelis, Sotirios Kontogiannis and Georgios Kokkonis
Electronics 2024, 13(18), 3775; https://doi.org/10.3390/electronics13183775 - 23 Sep 2024
Cited by 2 | Viewed by 1812
Abstract
Every texture in the real world provides essential information for identifying the physical characteristics of real objects. In addition to sight, humans use the sense of touch to explore their environment, and through haptic interaction we obtain unique and distinct information about the texture and shape of objects. In this paper, we enhance X3D 3D graphics files with haptic features to create 3D objects with haptic feedback. We propose haptic attributes such as static and dynamic friction, stiffness, and maximum altitude that provide the optimal user experience in a virtual haptic environment. After numerous optimization passes on the haptic textures, we propose various haptic geometrical textures for creating a virtual 3D haptic environment for the tactile Internet. These tangible geometrical textures can be attached to any geometric shape, enhancing the haptic sense. We conducted a study of user interaction with a virtual environment consisting of 3D objects enhanced with haptic textures, evaluating the realism and recognition accuracy of each generated texture. The findings help visually impaired individuals better understand their physical environment using haptic devices in conjunction with the enhanced haptic textures.
(This article belongs to the Section Computer Science & Engineering)
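A "geometrical texture" of the kind described above is, at bottom, a repeating heightfield that the haptic renderer turns into force. A small sketch of one such pattern, a sinusoidal bump grid; the parameters are illustrative and the names are not the paper's X3D attributes:

```python
import numpy as np

def bump_texture(n=64, period=8, depth=1.0):
    """Generate a sinusoidal bump heightfield in [0, depth], one simple
    'haptic geometrical texture' that could be mapped onto a shape and
    rendered together with stiffness and friction parameters."""
    x = np.arange(n)
    xx, yy = np.meshgrid(x, x)
    return 0.5 * depth * (np.sin(2 * np.pi * xx / period) *
                          np.sin(2 * np.pi * yy / period) + 1.0)

tex = bump_texture()
print(tex.shape)  # (64, 64)
```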

19 pages, 6915 KiB  
Article
Automated Crack Detection in Monolithic Zirconia Crowns Using Acoustic Emission and Deep Learning Techniques
by Kuson Tuntiwong, Supan Tungjitkusolmun and Pattarapong Phasukkit
Sensors 2024, 24(17), 5682; https://doi.org/10.3390/s24175682 - 31 Aug 2024
Viewed by 2005
Abstract
Monolithic zirconia (MZ) crowns are widely utilized in dental restorations, particularly for substantial tooth structure loss. Visual, tactile, and radiographic examinations can be time-consuming and error-prone, which may delay diagnosis. Consequently, an objective, automatic, and reliable process is required for identifying dental crown defects. This study explored the potential of transforming acoustic emission (AE) signals into continuous wavelet transform (CWT) representations, combined with a convolutional neural network (CNN), to assist in crack detection. A new CNN image segmentation model, based on multi-class semantic segmentation using Inception-ResNet-v2, was developed. Real-time detection of AE signals under crack-inducing loads provided significant insights into crack formation in MZ crowns, with pencil lead breaking (PLB) used to simulate crack propagation. The CWT and CNN models automated the crack classification process: the Inception-ResNet-v2 architecture with transfer learning categorized the cracks in MZ crowns into five groups (labial, palatal, incisal, left, and right). After 2000 epochs with a learning rate of 0.0001, the model achieved an accuracy of 99.4667%, demonstrating that deep learning significantly improves the localization of cracks in MZ crowns. This development can aid dentists in clinical decision-making by facilitating the early detection and prevention of crack failures.
(This article belongs to the Special Issue Intelligent Sensing Technologies in Structural Health Monitoring)
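The AE-to-image step above is a continuous wavelet transform. A naive sketch of that transform using unnormalized Ricker ("Mexican hat") wavelets and plain convolution, assuming a synthetic burst signal; the widths and signal are illustrative, not the paper's data or exact wavelet choice:

```python
import numpy as np

def ricker(points, width):
    """Unnormalized Ricker wavelet sampled at `points` positions."""
    t = np.arange(points) - (points - 1) / 2.0
    a = t / width
    return (1 - a**2) * np.exp(-(a**2) / 2)

def cwt_scalogram(signal, widths):
    """Naive CWT: convolve the signal with Ricker wavelets of several
    widths. Stacking |coefficients| over widths gives the kind of
    time-frequency image fed to a CNN classifier."""
    out = np.empty((len(widths), len(signal)))
    for i, w in enumerate(widths):
        wavelet = ricker(min(10 * int(w), len(signal)), w)
        out[i] = np.convolve(signal, wavelet, mode="same")
    return np.abs(out)

t = np.linspace(0, 1, 256)
burst = np.sin(2 * np.pi * 40 * t) * np.exp(-((t - 0.5) ** 2) / 0.005)
scal = cwt_scalogram(burst, widths=[1, 2, 4, 8])
print(scal.shape)  # (4, 256)
```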

20 pages, 16964 KiB  
Article
A Wearable Visually Impaired Assistive System Based on Semantic Vision SLAM for Grasping Operation
by Fei Fei, Sifan Xian, Ruonan Yang, Changcheng Wu and Xiong Lu
Sensors 2024, 24(11), 3593; https://doi.org/10.3390/s24113593 - 2 Jun 2024
Cited by 4 | Viewed by 2103
Abstract
Because of the absence of visual perception, visually impaired individuals encounter various difficulties in their daily lives. This paper proposes a visual aid system designed specifically for visually impaired individuals, aiming to assist and guide them in grasping target objects within a tabletop environment. The system employs a visual perception module that incorporates a semantic visual SLAM algorithm, achieved through the fusion of ORB-SLAM2 and YOLO V5s, enabling the construction of a semantic map of the environment. In the human–machine cooperation module, a depth camera is integrated into a wearable device worn on the hand, while a vibration array feedback device conveys directional information of the target to visually impaired individuals for tactile interaction. To enhance the system’s versatility, a Dobot Magician manipulator is also employed to aid visually impaired individuals in grasping tasks. The performance of the semantic visual SLAM algorithm in terms of localization and semantic mapping was thoroughly tested. Additionally, several experiments were conducted to simulate visually impaired individuals’ interactions in grasping target objects, effectively verifying the feasibility and effectiveness of the proposed system. Overall, this system demonstrates its capability to assist and guide visually impaired individuals in perceiving and acquiring target objects.
(This article belongs to the Section Sensors Development)
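Fusing a detector with a depth camera, as this system does, hinges on back-projecting a detected pixel plus its depth reading into a 3D point the hand or manipulator can be guided toward. A minimal sketch of that standard pinhole back-projection; the intrinsics are illustrative, not the system's calibration:

```python
import numpy as np

def backproject(u, v, depth_m, fx, fy, cx, cy):
    """Back-project pixel (u, v) with measured depth into a 3D point in
    the camera frame -- the step that turns a YOLO bounding-box centre
    plus a depth reading into a graspable target location."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

# A pixel at the principal point maps straight ahead along the optical axis.
p = backproject(320, 240, 0.5, fx=600, fy=600, cx=320, cy=240)
print(p)  # [0.  0.  0.5]
```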

10 pages, 1485 KiB  
Article
Developing a Virtual Reality Simulation System for Preoperative Planning of Robotic-Assisted Thoracic Surgery
by Hideki Ujiie, Ryohei Chiba, Aogu Yamaguchi, Shunsuke Nomura, Haruhiko Shiiya, Aki Fujiwara-Kuroda, Kichizo Kaga, Chad Eitel, Tod R. Clapp and Tatsuya Kato
J. Clin. Med. 2024, 13(2), 611; https://doi.org/10.3390/jcm13020611 - 21 Jan 2024
Cited by 19 | Viewed by 3573
Abstract
Background. Robotic-assisted thoracic surgery (RATS) is now standard for lung cancer treatment, offering advantages over traditional methods. However, RATS’s minimally invasive approach poses challenges like limited visibility and tactile feedback, affecting surgeons’ navigation through complex anatomy. To enhance preoperative familiarization with patient-specific anatomy, we developed a virtual reality (VR) surgical navigation system. Using head-mounted displays (HMDs), this system provides a comprehensive, interactive view of the patient’s anatomy pre-surgery, aiming to improve preoperative simulation and intraoperative navigation. Methods. We integrated 3D data from preoperative CT scans into Perspectus VR Education software, displayed via HMDs for interactive 3D reconstruction of pulmonary structures. This detailed visualization aids in tailored preoperative resection simulations. During RATS, surgeons access these 3D images through TilePro™ multi-display for real-time guidance. Results. The VR system enabled precise visualization of pulmonary structures and lesion relations, enhancing surgical safety and accuracy. The HMDs offered true 3D interaction with patient data, facilitating surgical planning. Conclusions. VR simulation with HMDs, akin to a robotic 3D viewer, offers a novel approach to developing robotic surgical skills. Integrated with routine imaging, it improves preoperative planning, safety, and accuracy of anatomical resections. This technology particularly aids in lesion identification in RATS, optimizing surgical outcomes.
(This article belongs to the Special Issue Latest Advances in Thoracic Surgery)

14 pages, 5642 KiB  
Article
Data-Driven Contact-Based Thermosensation for Enhanced Tactile Recognition
by Tiancheng Ma and Min Zhang
Sensors 2024, 24(2), 369; https://doi.org/10.3390/s24020369 - 8 Jan 2024
Viewed by 1811
Abstract
Thermal feedback plays an important role in tactile perception, greatly influencing fields such as autonomous robot systems and virtual reality. The further development of intelligent systems demands enhanced thermosensation, such as measuring the thermal properties of objects to support more accurate perception, but this remains challenging in contact-based scenarios. For this reason, this study uses the concept of semi-infinite equivalence to design a thermosensation system. A discrete transient heat transfer model was established, and a data-driven method was introduced that integrates the model with a back-propagation (BP) neural network containing dual hidden layers to facilitate accurate calculation for contact materials. The network was trained on thermophysical data for 67 types of materials generated by the heat transfer model. An experimental setup employing flexible thin-film devices was constructed to measure three solid materials under various heating conditions. Results indicated that measurement errors stayed within 10% for thermal conductivity and 20% for thermal diffusivity. This approach not only enables quick, quantitative calculation and identification of contact materials but also simplifies the measurement process by eliminating the need for initial temperature adjustments and minimizing errors due to model complexity.
(This article belongs to the Special Issue The Advanced Flexible Electronic Devices)
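The "semi-infinite equivalence" idea rests on a classic result: when two semi-infinite bodies touch, the interface temperature is set by their thermal effusivities, e = √(kρc). A sketch of that textbook relation, which is why copper feels colder than wood at the same temperature; the material values below are rough illustrative figures, not the paper's data:

```python
import math

def effusivity(k, rho, c):
    """Thermal effusivity e = sqrt(k * rho * c)."""
    return math.sqrt(k * rho * c)

def contact_temperature(t1, e1, t2, e2):
    """Interface temperature of two semi-infinite solids in contact:
    T = (e1*T1 + e2*T2) / (e1 + e2)."""
    return (e1 * t1 + e2 * t2) / (e1 + e2)

e_skin = effusivity(0.37, 1000, 3600)   # roughly skin-like
e_cu = effusivity(400, 8960, 385)       # copper
e_wood = effusivity(0.15, 500, 1600)    # softwood
# Copper at 20 C pulls 34 C skin much closer to 20 C than wood does:
t_cu = contact_temperature(34, e_skin, 20, e_cu)
t_wood = contact_temperature(34, e_skin, 20, e_wood)
print(round(t_cu, 1), round(t_wood, 1))
```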

10 pages, 2153 KiB  
Article
Characterizing the Impact of Compression Duration and Deformation-Related Loss of Closure Force on Clip-Induced Spinal Cord Injury in Rats
by Po-Hsuan Lee, Heng-Juei Hsu, Chih-Hao Tien, Chi-Chen Huang, Chih-Yuan Huang, Hui-Fang Chen, Ming-Long Yeh and Jung-Shun Lee
Neurol. Int. 2023, 15(4), 1383-1392; https://doi.org/10.3390/neurolint15040088 - 13 Nov 2023
Cited by 1 | Viewed by 1867
Abstract
The clip-induced spinal cord injury (SCI) rat model is pivotal in preclinical SCI research. However, the literature exhibits variability in compression duration and pays limited attention to clip deformation-related loss of closure force. We aimed to investigate the impact of compression duration on SCI severity and the influence of clip deformation on closure force. Rats received T10-level clip-induced SCI with durations of 1, 5, 10, 20, and 30 s, and a separate group underwent T10 transection. Outcomes included functional, histological, and electrophysiological assessments and inflammatory cytokine analysis. A tactile pressure mapping system quantified clip closure force after open–close cycles. Our results showed a positive correlation between compression duration and the severity of functional, histological, and electrophysiological deficits. Remarkably, even a brief 1-s compression caused significant deficits comparable to moderate-to-severe SCI. Somatosensory evoked potential (SSEP) waveforms were abolished at durations over 20 s, and clip closure force decreased after five open–close cycles. This study offers critical insights into regulating SCI severity in rat models: understanding compression duration and clip fatigue is essential for designing and interpreting experiments that use the clip-induced SCI model.

13 pages, 4551 KiB  
Article
Wearable Capacitive Tactile Sensor Based on Porous Dielectric Composite of Polyurethane and Silver Nanowire
by Gen-Wen Hsieh and Chih-Yang Chien
Polymers 2023, 15(18), 3816; https://doi.org/10.3390/polym15183816 - 19 Sep 2023
Cited by 8 | Viewed by 2207
Abstract
In recent years, implementing wearable, biocompatible tactile sensing elements with sufficient response in healthcare, medical detection, and electronic skin/amputee prosthetics has been an intriguing but challenging quest. Here, we propose a flexible all-polyurethane capacitive tactile sensor that uses a salt crystal-templated porous elastomeric framework filled with silver nanowires as the composite dielectric material, sandwiched between a pair of polyurethane films covered with silver nanowire networks as electrodes. With the aid of the cubic air pores and conducting nanowires, the fabricated capacitive tactile sensor markedly enhances both sensor compressibility and effective relative dielectric permittivity across a broad pressure regime (from a few Pa to tens of thousands of Pa). The silver nanowire–porous polyurethane sensor improves sensitivity by 4 to 60 times compared with a flat polyurethane device, and an ultrasmall external stimulus as light as 3 mg, equivalent to an applied pressure of ∼0.3 Pa, can be clearly recognized. Our all-polyurethane capacitive tactile sensor based on a porous dielectric framework hybridized with conducting nanowires reveals versatile potential applications in physiological activity detection, arterial pulse monitoring, and spatial pressure distribution, paving the way for wearable electronics and artificial skin.
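The first-order picture behind such a sensor is the parallel-plate model C = ε₀εᵣA/d: pressure shrinks the gap d and, with a porous composite, also raises εᵣ, so capacitance grows on both counts. A sketch with illustrative geometry and permittivity values (not the paper's measurements):

```python
def capacitance_pF(eps_r, area_mm2, gap_um, eps0=8.854e-12):
    """Parallel-plate capacitance C = eps0 * eps_r * A / d, in pF."""
    area = area_mm2 * 1e-6   # mm^2 -> m^2
    gap = gap_um * 1e-6      # um  -> m
    return eps0 * eps_r * area / gap * 1e12  # F -> pF

c0 = capacitance_pF(eps_r=4.0, area_mm2=100.0, gap_um=200.0)  # at rest
c1 = capacitance_pF(eps_r=4.4, area_mm2=100.0, gap_um=180.0)  # compressed
# Baseline capacitance and the percentage change under load:
print(round(c0, 2), round((c1 - c0) / c0 * 100, 1))
```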

15 pages, 6130 KiB  
Review
Advancements in Oral Maxillofacial Surgery: A Comprehensive Review on 3D Printing and Virtual Surgical Planning
by Jwa-Young Kim, Yong-Chan Lee, Seong-Gon Kim and Umberto Garagiola
Appl. Sci. 2023, 13(17), 9907; https://doi.org/10.3390/app13179907 - 1 Sep 2023
Cited by 22 | Viewed by 10876
Abstract
This comprehensive review explores the advancements in Orthognathic and Oral Maxillofacial Surgery, focusing on the integration of 3D Printing and Virtual Surgical Planning (VSP). Traditional surgical methods, while effective, come with inherent risks and complications, and can lead to variability in outcomes due to the reliance on the surgeon’s skill and experience. The shift towards patient-centric care necessitates personalized surgical methods, which can be achieved through advanced technology. The amalgamation of 3D printing and VSP revolutionizes surgical planning and implementation by providing tactile 3D models for visualization and planning, and accurately designed surgical guides for execution. This convergence of digital planning and physical modeling facilitates a more predictable, personalized, and precise surgical process. However, the adoption of these technologies presents challenges, including the need for extensive software training and the steep learning curve associated with computer-aided design programs. Despite these challenges, the integration of 3D printing and VSP paves the way for advanced patient care in orthognathic and oral maxillofacial surgery.
(This article belongs to the Special Issue Clinical Applications for Dentistry and Oral Health, 2nd Volume)

21 pages, 5600 KiB  
Article
Elderly Fall Detection Based on GCN-LSTM Multi-Task Learning Using Nursing Aids Integrated with Multi-Array Flexible Tactile Sensors
by Tong Li, Yuhang Yan, Minghui Yin, Jing An, Gang Chen, Yifan Wang, Chunxiu Liu and Ning Xue
Biosensors 2023, 13(9), 862; https://doi.org/10.3390/bios13090862 - 31 Aug 2023
Cited by 6 | Viewed by 2324
Abstract
Because elderly individuals are physically frail, falls can lead to severe bodily injuries, and effective fall detection can significantly reduce such incidents. However, current fall detection methods rely heavily on visual and multi-sensor devices, which incur higher costs and complex wearable designs, limiting their wide-ranging applicability. In this paper, we propose a fall detection method based on nursing aids integrated with multi-array flexible tactile sensors. We design a multi-array capacitive tactile sensor, arrange the sensors on the foot based on plantar force analysis, and measure tactile sequences from the sole of the foot to build a dataset. We then construct a fall detection model based on a graph convolutional network and long short-term memory network (GCN-LSTM), in which the GCN and LSTM modules separately extract spatial and temporal features from the tactile sequences, enabling detection on foot tactile data and walking states for specific future time steps. In experiments, the model predicts the tactile data of the foot at the next time step with a mean squared error (MSE) of 0.0716 and achieves a fall detection accuracy of 96.36%. Moreover, it can detect falls up to five time steps (0.2-s intervals) into the future with high confidence, outperforming the other baseline algorithms. In additional experiments on different ground types and ground morphologies, the model showed robust generalization capabilities.
(This article belongs to the Section Biosensor and Bioelectronic Devices)
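The spatial half of a GCN-LSTM is a graph-convolution layer, H' = ReLU(D^(-1/2)(A+I)D^(-1/2) H W), where the graph would link neighbouring plantar sensor cells. A minimal NumPy sketch with an illustrative three-node graph and random features (shapes and values are not the paper's):

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    """One symmetric-normalized graph-convolution layer with ReLU."""
    a_hat = adj + np.eye(adj.shape[0])             # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))  # D^(-1/2)
    norm = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(norm @ feats @ weight, 0.0)  # ReLU

rng = np.random.default_rng(0)
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)  # 3 sensor nodes in a chain
h = gcn_layer(adj, rng.standard_normal((3, 4)), rng.standard_normal((4, 8)))
print(h.shape)  # (3, 8): per-node spatial features for the LSTM stage
```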

17 pages, 2967 KiB  
Article
Frame-Based Slip Detection for an Underactuated Robotic Gripper for Assistance of Users with Disabilities
by Lennard Marx, Ásgerdur Arna Pálsdóttir and Lotte N. S. Andreasen Struijk
Appl. Sci. 2023, 13(15), 8620; https://doi.org/10.3390/app13158620 - 26 Jul 2023
Viewed by 2015
Abstract
Stable grasping is essential for assistive robots aiding individuals with severe motor–sensory disabilities in their everyday lives. Slip detection can prevent unstably grasped objects from falling out of the gripper and causing accidents. Recent research on slip detection focuses on tactile sensing; however, not every robot arm can be equipped with such sensors. In this paper, we propose a slip detection method based solely on data collected by a RealSense D435 RGB-depth (RGB-D) camera. By utilizing Farneback optical flow (OF) to estimate the motion field of the grasped object relative to the gripper while removing potential background noise, the algorithm can perform in a multitude of environments. The algorithm was evaluated on a dataset of 28 daily objects, each lifted 30 times, for a total of 840 frame sequences. Our proposed slip detection method achieves an accuracy of up to 82.38% and a recall of up to 87.14%, comparable to state-of-the-art approaches that use only camera data. When objects whose movements are challenging for vision-based methods to detect, such as untextured or transparent objects, are excluded, the proposed method performs even better, with an accuracy of up to 87.19% and a recall of up to 95.09%.
(This article belongs to the Special Issue Trajectory Planning for Intelligent Robotic and Mechatronic Systems)
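Once a dense flow field is available (e.g. from OpenCV's Farneback estimator, which returns an HxWx2 array), the slip decision itself can be as simple as thresholding the mean flow magnitude inside the grasped-object region, with the mask playing the role of the paper's background-noise removal. A sketch under those assumptions; the threshold and function name are illustrative, not the paper's exact rule:

```python
import numpy as np

def slip_detected(flow, grasp_mask, threshold_px=0.5):
    """Flag slip when mean optical-flow magnitude inside the grasped
    region exceeds a pixel threshold. `flow` is an HxWx2 field of
    per-pixel (dx, dy) displacements."""
    mag = np.linalg.norm(flow, axis=2)
    return float(mag[grasp_mask].mean()) > threshold_px

flow = np.zeros((8, 8, 2))
mask = np.zeros((8, 8), dtype=bool)
mask[2:6, 2:6] = True            # grasped-object region
flow[2:6, 2:6, 1] = 1.0          # object drifting 1 px downward per frame
print(slip_detected(flow, mask))  # True
```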
