Search Results (361)

Search Parameters:
Keywords = communicative gesture

26 pages, 3647 KB  
Article
Study on Auxiliary Rehabilitation System of Hand Function Based on Machine Learning with Visual Sensors
by Yuqiu Zhang and Guanjun Bao
Sensors 2026, 26(3), 793; https://doi.org/10.3390/s26030793 - 24 Jan 2026
Viewed by 233
Abstract
This study aims to assess hand function recovery in stroke patients during the mid-to-late Brunnstrom stages and to encourage active participation in rehabilitation exercises. To this end, a deep residual network (ResNet) integrated with Focal Loss is employed for gesture recognition, achieving a Macro F1 score of 91.0% and a validation accuracy of 90.9%. Leveraging the millimetre-level precision of Leap Motion 2 hand tracking, a mapping relationship for hand skeletal joint points was established, and a static assessment gesture data set containing 502,401 frames was collected through analysis of the FMA scale. The system implements an immersive augmented reality interaction through the Unity development platform; C# algorithms were designed for real-time motion range quantification. Finally, the paper designs a rehabilitation system framework tailored for home and community environments, including system module workflows, assessment modules, and game logic. Experimental results demonstrate the technical feasibility and high accuracy of the automated system for assessment and rehabilitation training. The system is designed to support stroke patients in home and community settings, with the potential to enhance rehabilitation motivation, interactivity, and self-efficacy. This work presents an integrated research framework encompassing hand modelling and deep learning-based recognition. It offers the possibility of feasible and economical solutions for stroke survivors, laying the foundation for future clinical applications. Full article
(This article belongs to the Section Biomedical Sensors)
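The recognition pipeline described above pairs a deep residual network with Focal Loss for gesture classification; a minimal PyTorch sketch of that combination follows. The gesture count, gamma value, and ResNet-18 backbone are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

class FocalLoss(nn.Module):
    """Focal Loss: down-weights easy examples so training concentrates
    on hard, easily confused gestures."""
    def __init__(self, gamma: float = 2.0):
        super().__init__()
        self.gamma = gamma

    def forward(self, logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        ce = F.cross_entropy(logits, targets, reduction="none")  # per-sample cross-entropy
        pt = torch.exp(-ce)                                      # probability of the true class
        return ((1.0 - pt) ** self.gamma * ce).mean()

# Hypothetical setup: 10 assessment gestures on a standard ResNet-18 backbone.
num_gestures = 10
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, num_gestures)
criterion = FocalLoss(gamma=2.0)

# One illustrative training step on dummy image data.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_gestures, (8,))
loss = criterion(model(images), labels)
loss.backward()
```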

19 pages, 3206 KB  
Article
Human-Centered Collaborative Robotic Workcell Facilitating Shared Autonomy for Disability-Inclusive Manufacturing
by YongKuk Kim, DaYoung Kim, DoKyung Hwang, Juhyun Kim, Eui-Jung Jung and Min-Gyu Kim
Electronics 2026, 15(2), 461; https://doi.org/10.3390/electronics15020461 - 21 Jan 2026
Viewed by 87
Abstract
Workers with upper-limb disabilities face difficulties in performing manufacturing tasks requiring fine manipulation, stable handling, and multistep procedural understanding. To address these limitations, this paper presents an integrated collaborative workcell designed to support disability-inclusive manufacturing. The system comprises four core modules: a JSON-based collaboration database that structures manufacturing processes into robot–human cooperative units; a projection-based augmented reality (AR) interface that provides spatially aligned task guidance and virtual interaction elements; a multimodal interaction channel combining gesture tracking with speech and language-based communication; and a personalization mechanism that enables users to adjust robot behaviors—such as delivery poses and user-driven task role switching—which are then stored for future operations. The system is implemented using ROS-style modular nodes with an external WPF-based projection module and evaluated through scenario-based experiments involving workers with upper-limb impairments. The experimental scenarios illustrate that the proposed workcell is capable of supporting step transitions, part handover, contextual feedback, and user-preference adaptation within a unified system framework, suggesting its feasibility as an integrated foundation for disability-inclusive human–robot collaboration in manufacturing environments. Full article
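The JSON-based collaboration database described above structures a manufacturing process into robot–human cooperative units; the sketch below shows one hypothetical unit record and how a module might load it. All field names and values are assumptions for illustration, not taken from the paper.

```python
import json

# Hypothetical record for one robot-human cooperative unit.
unit_json = """
{
  "step_id": 3,
  "description": "Deliver housing part to the worker",
  "robot_action": {"type": "handover", "delivery_pose": [0.42, -0.10, 0.25]},
  "human_action": {"type": "insert_part", "guidance": "projected outline"},
  "role_switchable": true
}
"""

unit = json.loads(unit_json)

def operator_prompt(unit: dict) -> str:
    """Render a cooperative unit as a short prompt for the AR projection."""
    return (f"Step {unit['step_id']}: {unit['description']} "
            f"(robot: {unit['robot_action']['type']})")

print(operator_prompt(unit))
```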

32 pages, 4599 KB  
Article
Adaptive Assistive Technologies for Learning Mexican Sign Language: Design of a Mobile Application with Computer Vision and Personalized Educational Interaction
by Carlos Hurtado-Sánchez, Ricardo Rosales Cisneros, José Ricardo Cárdenas-Valdez, Andrés Calvillo-Téllez and Everardo Inzunza-Gonzalez
Future Internet 2026, 18(1), 61; https://doi.org/10.3390/fi18010061 - 21 Jan 2026
Viewed by 83
Abstract
Integrating people with hearing disabilities into schools is one of the biggest problems that Latin American societies face. Mexican Sign Language (MSL) is the main language and culture of the deaf community in Mexico. However, its use in formal education is still limited by structural inequalities, a lack of qualified interpreters, and a lack of technology that can support personalized instruction. This study outlines the conceptualization and development of a mobile application designed as an adaptive assistive technology for learning MSL, utilizing a combination of computer vision techniques, deep learning algorithms, and personalized pedagogical interaction. The suggested system uses convolutional neural networks (CNNs) and pose-estimation models to recognize hand gestures in real time with 95.7% accuracy. It then gives the learner instant feedback by changing the difficulty level. A dynamic learning engine automatically changes the level of difficulty based on how well the learner is doing, which helps them learn signs and phrases over time. The Scrum agile methodology was used during the development process. This meant that educators, linguists, and members of the deaf community all worked together to design the product. Early tests show favorable sign recognition accuracy and appropriate levels of user engagement and motivation. This proposal aims to enhance inclusive digital ecosystems and foster linguistic equity in Mexican education through scalable, mobile, and culturally relevant technologies, in addition to its technical contributions. Full article
(This article belongs to the Special Issue Machine Learning Techniques for Computer Vision—2nd Edition)
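The dynamic learning engine described above adjusts difficulty from the learner's recent recognition results; one way such a rule could look is sketched below. The window size, thresholds, and class structure are illustrative assumptions, not the application's actual logic.

```python
from collections import deque

class AdaptiveDifficulty:
    """Tracks recent sign-recognition outcomes and adjusts the lesson level."""
    def __init__(self, window: int = 10, up: float = 0.85, down: float = 0.50):
        self.results = deque(maxlen=window)  # 1 = sign recognized correctly, 0 = not
        self.level = 1
        self.up, self.down = up, down

    def record(self, correct: bool) -> int:
        self.results.append(1 if correct else 0)
        if len(self.results) == self.results.maxlen:
            rate = sum(self.results) / len(self.results)
            if rate >= self.up:
                self.level += 1          # learner is doing well: harder signs/phrases
                self.results.clear()
            elif rate <= self.down and self.level > 1:
                self.level -= 1          # learner is struggling: easier material
                self.results.clear()
        return self.level

# Nine correct attempts and one miss push the learner up one level.
engine = AdaptiveDifficulty()
for outcome in [True] * 9 + [False]:
    level = engine.record(outcome)
print("current level:", level)
```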

26 pages, 7469 KB  
Article
Generalized Vision-Based Coordinate Extraction Framework for EDA Layout Reports and PCB Optical Positioning
by Pu-Sheng Tsai, Ter-Feng Wu and Wen-Hai Chen
Processes 2026, 14(2), 342; https://doi.org/10.3390/pr14020342 - 18 Jan 2026
Viewed by 270
Abstract
Automated optical inspection (AOI) technologies are widely used in PCB and semiconductor manufacturing to improve accuracy and reduce human error during quality inspection. While existing AOI systems can perform defect detection, they often rely on pre-defined camera positions and lack flexibility for interactive inspection, especially when the operator needs to visually verify solder pad conditions or examine specific layout regions. This study focuses on the front-end optical positioning and inspection stage of the AOI workflow, providing an automated mechanism to link digitally generated layout reports from EDA layout tools with real PCB inspection tasks. The proposed system operates on component-placement reports exported by EDA layout environments and uses them to automatically guide the camera to the corresponding PCB coordinates. Since PCB design reports may vary in format and structure across EDA tools, this study proposes a vision-based extraction approach that employs Hough transform-based region detection and a CNN-based digit recognizer to recover component coordinates from visually rendered design data. A dual-axis sliding platform is driven through a hierarchical control architecture, where coarse positioning is performed via TB6600 stepper control and Bluetooth-based communication, while fine alignment is achieved through a non-contact, gesture-based interface designed for clean-room operation. A high-resolution autofocus camera subsequently displays the magnified solder pads on a large screen for operator verification. Experimental results show that the proposed platform provides accurate, repeatable, and intuitive optical positioning, improving inspection efficiency while maintaining operator ergonomics and system modularity. Rather than replacing defect-classification AOI systems, this work complements them by serving as a positioning-assisted inspection module for interactive and semi-automated PCB quality evaluation. Full article
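The vision-based extraction stage described above combines Hough transform-based region detection with a CNN digit recognizer to recover coordinates from rendered reports; the OpenCV sketch below illustrates that two-step idea. The thresholds, patch size, and the `digit_model` object are placeholders, not the paper's implementation.

```python
import cv2
import numpy as np

def find_table_lines(report_img: np.ndarray) -> np.ndarray:
    """Detect straight lines in a rendered EDA placement report with the
    probabilistic Hough transform; these delimit the coordinate table cells."""
    gray = cv2.cvtColor(report_img, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=100, maxLineGap=5)
    return lines if lines is not None else np.empty((0, 1, 4), dtype=np.int32)

def read_coordinate_cell(cell_img: np.ndarray, digit_model) -> str:
    """Split a cropped cell into digit patches and classify each one with a
    CNN digit recognizer (digit_model stands in for the trained network)."""
    gray = cv2.cvtColor(cell_img, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    digits = []
    for c in sorted(contours, key=lambda c: cv2.boundingRect(c)[0]):  # left to right
        x, y, w, h = cv2.boundingRect(c)
        patch = cv2.resize(binary[y:y + h, x:x + w], (28, 28))
        digits.append(str(digit_model.predict(patch)))
    return "".join(digits)
```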

28 pages, 4162 KB  
Article
Linguistic and Material Ways of Communicating with Cows—The Dung Pusher as a Semiotic Resource
by Anni Jääskeläinen
Animals 2026, 16(2), 201; https://doi.org/10.3390/ani16020201 - 9 Jan 2026
Viewed by 352
Abstract
This study examines how farm workers working with cattle talk to and interact with these non-human animals. This study presents linguistic animal studies and multi-species pragmatics, and it is based on fieldwork, interviews, and video recordings from several types of Finnish dairy farms. This study concentrates especially on one facet of human–cattle interaction: how humans use dung pushers and other sticks when communicating with cows. Thus, it draws on the materiality of language. It is shown how objects, bodies, and spaces, as well as words and linguistic constructions, are meaningful in human–animal interaction. Videoed recordings are analysed with multimodal conversation analysis. It is shown how dung pushers and snow stakes are used when steering cows, making them stand up, and pointing at things. It is then shown how these objects become ‘meaning-carriers’ for humans and for cows. For example, the dung pusher acquires four different meaning qualities for the human participants in the cattle barns: floor-cleaner quality, shepherd’s-crook quality, pointer quality, and weapon quality. The study examines how the cows’ and humans’ Umwelts, the subjective meaning universes of these species and their constituent individuals, influence interaction on farms and how and why the dung pusher becomes a semiotic resource. Full article
(This article belongs to the Special Issue Structures of Human–Animal Interaction)

19 pages, 2708 KB  
Article
A TPU-Based 3D Printed Robotic Hand: Design and Its Impact on Human–Robot Interaction
by Younglim Choi, Minho Lee, Seongmin Yea, Seunghwan Kim and Hyunseok Kim
Electronics 2026, 15(2), 262; https://doi.org/10.3390/electronics15020262 - 7 Jan 2026
Viewed by 279
Abstract
This study outlines the design and evaluation of a biomimetic robotic hand tailored for Human–Robot Interaction (HRI), focusing on improvements in tactile fidelity driven by material choice. Thermoplastic polyurethane (TPU) was selected over polylactic acid (PLA) based on its reported elastomeric characteristics and mechanical compliance described in prior literature. Rather than directly matching human skin properties, TPU was perceived as providing a softer and more comfortable tactile interaction compared to rigid PLA. The robotic hand was anatomically reconstructed from an open-source model and integrated with AX-12A and MG90S actuators to simplify wiring and enhance motion precision. A custom PCB, built around an ATmega2560 microcontroller, enables real-time communication with ROS-based upper-level control systems. Angular displacement analysis of repeated gesture motions confirmed the high repeatability and consistency of the system. A repeated-measures user study involving 47 participants was conducted to compare the PLA- and TPU-based prototypes during interactive tasks such as handshakes and gesture commands. The TPU hand received significantly higher ratings in tactile realism, grip satisfaction, and perceived responsiveness (p < 0.05). Qualitative feedback further supported its superior emotional acceptance and comfort. These findings indicate that incorporating TPU in robotic hand design not only enhances mechanical performance but also plays a vital role in promoting emotionally engaging and natural human–robot interactions, making it a promising approach for affective HRI applications. Full article

17 pages, 1312 KB  
Article
RGB Fusion of Multiple Radar Sensors for Deep Learning-Based Traffic Hand Gesture Recognition
by Hüseyin Üzen
Electronics 2026, 15(1), 140; https://doi.org/10.3390/electronics15010140 - 28 Dec 2025
Viewed by 332
Abstract
Hand gesture recognition (HGR) systems play a critical role in modern intelligent transportation frameworks by enabling reliable communication between pedestrians, traffic operators, and autonomous vehicles. This work presents a novel traffic hand gesture recognition method that combines nine grayscale radar images captured from multiple millimeter-wave radar nodes into a single RGB representation through an optimized rotation–shift fusion strategy. This transformation preserves complementary spatial information while minimizing inter-image interference, enabling deep learning models to more effectively utilize the distinctive micro-Doppler and spatial patterns embedded in radar measurements. Extensive experimental studies were conducted to verify the model’s performance, demonstrating that the proposed RGB fusion approach provides higher classification accuracy than single-sensor or unfused representations. In addition, the proposed model outperformed state-of-the-art methods in the literature with an accuracy of 92.55%. These results highlight its potential as a lightweight yet powerful solution for reliable gesture interpretation in future intelligent transportation and human–vehicle interaction systems. Full article
(This article belongs to the Special Issue Advanced Techniques for Multi-Agent Systems)
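The fusion step described above maps nine grayscale radar images into one RGB representation through rotation and shift; the NumPy/SciPy sketch below gives one plausible reading of that idea, assigning three rotated and shifted frames to each colour channel. The grouping, angles, and offsets are assumptions for illustration, not the optimized strategy reported in the paper.

```python
import numpy as np
from scipy.ndimage import rotate, shift

def fuse_to_rgb(frames: np.ndarray, angles, offsets) -> np.ndarray:
    """Fuse nine grayscale radar images (shape 9 x H x W, values in [0, 1])
    into one RGB image: three frames per colour channel, each rotated and
    shifted slightly before averaging to limit mutual interference."""
    assert frames.shape[0] == 9
    channels = []
    for ch in range(3):                       # R, G, B
        group = []
        for k in range(3):
            idx = ch * 3 + k
            img = rotate(frames[idx], angle=angles[idx], reshape=False, order=1)
            img = shift(img, shift=offsets[idx], order=1)
            group.append(img)
        channels.append(np.mean(group, axis=0))
    return np.clip(np.stack(channels, axis=-1), 0.0, 1.0)

# Dummy example: nine 64x64 radar frames with small illustrative parameters.
frames = np.random.rand(9, 64, 64)
angles = [0, 5, -5] * 3
offsets = [(0, 0), (2, 0), (0, 2)] * 3
rgb = fuse_to_rgb(frames, angles, offsets)
print(rgb.shape)  # (64, 64, 3)
```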

14 pages, 281 KB  
Article
Wine Inebriation: Representation of Judah’s Cultural Trauma in Proverbs 23:29–35
by Shirley S. Ho
Religions 2026, 17(1), 24; https://doi.org/10.3390/rel17010024 - 25 Dec 2025
Viewed by 244
Abstract
Regarding Judah’s exilic realities and forced migration experience, this article proposes that the sage responsible for this poem functioned as a carrier group in articulating a narrative of collective trauma. The paper begins by summarizing key components of cultural trauma theory as developed by Jeffrey C. Alexander. It also situates the shared socio-historical context of the final textual forms of Jeremiah and Proverbs within the exilic/post-exilic realities of the Judahite community. It next traces the trope of wine inebriation across several Jeremiah texts, focusing especially on Jeremiah 25:15–29 to show how this motif is integrally woven into the book’s overarching themes of indictment, judgment, and exile. A conventional wisdom reading of Proverbs 23:29–35 yields a moralistic warning about the self-destructive cycle of wine intoxication of the fools in the book of Proverbs. But a cultural trauma hermeneutic of the poem—when paired with intertextual echoes of Jeremiah 25:15–29—opens the poem to a deeper reading. Within this framework, the sapiential poem emerges as a creative, dramatic and theologically rich act of trauma storytelling, depicting foolish Judah’s metaphorical intoxication as an embodiment of exilic indictment, woes and suffering, yet gesturing toward the possibility of healing and restoration through wisdom reflection and re-narration of their past. Full article
21 pages, 575 KB  
Article
Characterizing Autism Traits in Toddlers with Down Syndrome: Preliminary Associations with Language, Executive Functioning, and Other Developmental Domains
by Tiffany Chavers Edgar, Claudia Schabes, Marianne Elmquist, Miriam Kornelis, Lizbeth Finestack and Audra Sterling
Behav. Sci. 2026, 16(1), 39; https://doi.org/10.3390/bs16010039 - 24 Dec 2025
Viewed by 353
Abstract
Children with Down syndrome (DS) show considerable variability in social-communication and cognitive profiles, and a subset meet criteria for co-occurring autism. In the present study, we examined the associations between developmental domains and autistic trait severity in toddlers with DS. Participants included 38 toddlers (M = 4.19 years, SD = 0.99) who completed a home-based assessment, including measures of language, fine motor, and visual reception skills. Caregivers also completed standardized questionnaires on communication and executive functioning. Multiple regression analyses tested the degree of association between these developmental domains and autistic traits. Fewer words produced, fewer gestures, and more impaired fine motor and visual reception scores were significantly associated with higher autism trait severity, whereas executive function domains were not significantly associated. Preliminary findings indicate that variability in language and nonverbal developmental skills contributes to the expression of autism traits in DS, underscoring the need for early, multidomain assessment approaches to support accurate identification and tailored intervention. Full article
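The multiple regression analysis described above relates developmental domains to autistic trait severity; the statsmodels sketch below shows the general form of such a model on simulated data. Variable names mirror the abstract, but the values and coefficients are invented purely for illustration and carry no relation to the study's results.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated stand-in data: 38 toddlers, predictors named after the abstract's domains.
rng = np.random.default_rng(0)
n = 38
df = pd.DataFrame({
    "words_produced": rng.integers(0, 200, n),
    "gestures": rng.integers(0, 60, n),
    "fine_motor": rng.normal(50, 10, n),
    "visual_reception": rng.normal(50, 10, n),
})
df["autism_trait_severity"] = (
    8 - 0.01 * df["words_produced"] - 0.02 * df["gestures"]
    - 0.03 * df["fine_motor"] + rng.normal(0, 1, n)
)

# Ordinary least squares with all four developmental predictors.
X = sm.add_constant(df[["words_produced", "gestures", "fine_motor", "visual_reception"]])
model = sm.OLS(df["autism_trait_severity"], X).fit()
print(model.summary().tables[1])   # coefficient and p-value per domain
```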

26 pages, 10862 KB  
Article
Recurrent Neural Networks for Mexican Sign Language Interpretation in Healthcare Services
by Armando de Jesús Becerril-Carrillo, Héctor Julián Selley-Rojas and Elizabeth Guevara-Martínez
Sensors 2026, 26(1), 27; https://doi.org/10.3390/s26010027 - 19 Dec 2025
Viewed by 543
Abstract
In Mexico, the Deaf community faces persistent communication barriers that restrict their integration and access to essential services, particularly in healthcare. Even though approximately two million individuals use Mexican Sign Language (MSL) as their primary form of communication, technological tools for supporting effective interaction remain limited. While recent research in sign-language recognition has led to important advances for several languages, work focused on MSL, particularly for healthcare scenarios, remains scarce. To address this gap, this study introduces a health-oriented dataset of 150 signs, with 800 synthetic video sequences per word, totaling more than 35 GB of data. This dataset was used to train recurrent neural networks with regularization and data augmentation. The best configuration achieved a maximum precision of 98.36% in isolated sign classification, minimizing false positives, which is an essential requirement in clinical applications. Beyond isolated recognition, the main contribution of this study is its exploratory evaluation of sequential narrative inference in MSL. Using short scripted narratives, the system achieved a global sequential recall of 45.45% under a realistic evaluation protocol that enforces temporal alignment. These results highlight both the potential of recurrent architectures in generalizing from isolated gestures to structured sequences and the substantial challenges posed by continuous signing, co-articulation, and signer-specific variation. While not intended for clinical deployment, the methodology, dataset, and open-source implementation presented here establish a reproducible baseline for future research. This work provides initial evidence, tools, and insights to support the long-term development of accessible technologies for the Deaf community in Mexico. Full article
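The recurrent models described above classify isolated MSL signs from video-derived sequences; a minimal PyTorch sketch of such a classifier follows. The 126-dimensional per-frame landmark vector (e.g., two hands of 21 keypoints with three coordinates each), layer sizes, and dropout are assumptions, not the paper's architecture; only the 150-sign output matches the abstract.

```python
import torch
import torch.nn as nn

class SignLSTM(nn.Module):
    """Classifies a sequence of per-frame landmark vectors into one of 150 signs."""
    def __init__(self, feat_dim: int = 126, hidden: int = 128, num_signs: int = 150):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, num_layers=2,
                            batch_first=True, dropout=0.3)
        self.head = nn.Linear(hidden, num_signs)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        _, (h_n, _) = self.lstm(x)      # h_n: (num_layers, batch, hidden)
        return self.head(h_n[-1])       # logits over the 150 signs

# Dummy batch: 4 clips, 60 frames, 126 landmark features per frame.
clips = torch.randn(4, 60, 126)
logits = SignLSTM()(clips)
print(logits.shape)  # torch.Size([4, 150])
```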

36 pages, 7640 KB  
Article
Predicting and Synchronising Co-Speech Gestures for Enhancing Human–Robot Interactions Using Deep Learning Models
by Enrique Fernández-Rodicio, Christian Dondrup, Javier Sevilla-Salcedo, Álvaro Castro-González and Miguel A. Salichs
Biomimetics 2025, 10(12), 835; https://doi.org/10.3390/biomimetics10120835 - 13 Dec 2025
Viewed by 437
Abstract
In recent years, robots have started to be used in tasks involving human interaction. For this to be possible, humans must perceive robots as suitable interaction partners. This can be achieved by giving the robots an animate appearance. One of the methods that can be utilised to endow a robot with a lively appearance is giving it the ability to perform expressions on its own, that is, combining multimodal actions to convey information. However, this can become a challenge if the robot has to use gestures and speech simultaneously, as the non-verbal actions need to support the message communicated by the verbal component. In this manuscript, we present a system that, based on a robot’s utterances, predicts the corresponding gesture and synchronises it with the speech. A deep learning-based prediction model labels the robot’s speech with the types of expressions that should accompany it. Then, a rule-based synchronisation module connects different gestures to the correct parts of the speech. For this, we have tested two different approaches: (i) using a combination of recurrent neural networks and conditional random fields; and (ii) using transformer models. The results show that the proposed system can properly select co-speech gestures under the time constraints imposed by real-world interactions. Full article
(This article belongs to the Special Issue Intelligent Human–Robot Interaction: 4th Edition)
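The prediction model described above labels the robot's utterance with the gesture types that should accompany it; the PyTorch sketch below shows a bidirectional recurrent tagger in that spirit. The vocabulary, label set, and layer sizes are illustrative, and the conditional random field and transformer variants evaluated in the paper are omitted.

```python
import torch
import torch.nn as nn

class GestureTagger(nn.Module):
    """Labels each token of the robot's utterance with the type of co-speech
    gesture that should accompany it."""
    def __init__(self, vocab: int = 5000, emb: int = 64,
                 hidden: int = 128, num_labels: int = 5):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.rnn = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, num_labels)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        h, _ = self.rnn(self.embed(token_ids))
        return self.out(h)              # per-token logits over gesture types

# Dummy utterance of 12 tokens; labels could be e.g. beat / deictic / iconic / none.
tokens = torch.randint(0, 5000, (1, 12))
per_token_logits = GestureTagger()(tokens)
print(per_token_logits.shape)  # torch.Size([1, 12, 5])
```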

27 pages, 5048 KB  
Article
Living Counter-Maps: A Board Game as Critical Design for Relational Communication in Dementia Care
by Shital Desai, Sheryl Peris, Ria Saraiya and Rachel Remesat
Societies 2025, 15(12), 347; https://doi.org/10.3390/soc15120347 - 11 Dec 2025
Viewed by 461
Abstract
Dementia disrupts communication not only as a cognitive process but as a relational practice, leaving people living with dementia (PLwD) at risk of exclusion when language fragments. This study examines how communication closeness, the felt sense of being understood, emotionally attuned, and socially connected, might be supported through Research in and through Design (Ri&tD). Drawing on formative mixed-reality studies and a participatory co-design workshop with PLwD, caregivers, and stakeholders, we iteratively developed a series of playful artifacts culminating in Neighbourly, a tactile board game designed to support relational interaction through rule-based, multimodal play. Across this design genealogy, prototypes were treated as Living Counter-Maps: participatory mappings that made patterns of gesture, rhythm, shared attention, and material engagement visible and discussable. Through iterative interpretation and synthesis, the study identifies three guiding principles for designing for communication closeness: supporting co-regulation rather than correction, enabling multimodal reciprocity, and providing a shared material focus for joint agency. The paper consolidates these insights in the Living Counter-Maps Framework, which integrates counter-mapping and Ri&tD as a methodological approach for studying and designing relational communication in dementia care. Full article

16 pages, 1701 KB  
Article
Research on YOLOv5s-Based Multimodal Assistive Gesture and Micro-Expression Recognition with Speech Synthesis
by Xiaohua Li and Chaiyan Jettanasen
Computation 2025, 13(12), 277; https://doi.org/10.3390/computation13120277 - 1 Dec 2025
Viewed by 429
Abstract
Effective communication between deaf–mute and visually impaired individuals remains a challenge in the fields of human–computer interaction and accessibility technology. Current solutions mostly rely on single-modal recognition, which often leads to issues such as semantic ambiguity and loss of emotional information. To address these challenges, this study proposes a lightweight multimodal fusion framework that combines gestures and micro-expressions, which are then processed through a recognition network and a speech synthesis module. The core innovations of this research are as follows: (1) a lightweight YOLOv5s improvement structure that integrates residual modules and efficient downsampling modules, which reduces the model complexity and computational overhead while maintaining high accuracy; (2) a multimodal fusion method based on an attention mechanism, which adaptively and efficiently integrates complementary information from gestures and micro-expressions, significantly improving the semantic richness and accuracy of joint recognition; (3) an end-to-end real-time system that outputs the visual recognition results through a high-quality text-to-speech module, completing the closed-loop from “visual signal” to “speech feedback”. We conducted evaluations on the publicly available hand gesture dataset HaGRID and a curated micro-expression image dataset. The results show that, for the joint gesture and micro-expression tasks, our proposed multimodal recognition system achieves a multimodal joint recognition accuracy of 95.3%, representing a 4.5% improvement over the baseline model. The system was evaluated in a locally deployed environment, achieving a real-time processing speed of 22 FPS, with a speech output latency below 0.8 s. The mean opinion score (MOS) reached 4.5, demonstrating the effectiveness of the proposed approach in breaking communication barriers between the hearing-impaired and visually impaired populations. Full article
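The attention-based fusion module described above adaptively weights gesture and micro-expression features before joint classification; the sketch below shows one simple gated form of that idea. Feature dimensions, the number of joint classes, and the gating layer are assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Adaptively weights gesture and micro-expression feature vectors
    before joint classification."""
    def __init__(self, dim: int = 256, num_classes: int = 20):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * dim, 2), nn.Softmax(dim=-1))
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, gesture_feat: torch.Tensor, expr_feat: torch.Tensor) -> torch.Tensor:
        weights = self.gate(torch.cat([gesture_feat, expr_feat], dim=-1))  # (batch, 2)
        fused = weights[:, :1] * gesture_feat + weights[:, 1:] * expr_feat
        return self.classifier(fused)

# Dummy 256-d features produced by the two detection branches.
g = torch.randn(4, 256)
e = torch.randn(4, 256)
print(AttentionFusion()(g, e).shape)  # torch.Size([4, 20])
```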

16 pages, 5099 KB  
Article
Semi-Interpenetrating Highly Conductive and Transparent Hydrogels for Wearable Sensors and Gesture-Driven Cryptography
by Dan Li, Hong Li, Yilin Wei, Lu Jiang, Hongqing Feng and Qiang Zheng
Micro 2025, 5(4), 53; https://doi.org/10.3390/micro5040053 - 23 Nov 2025
Viewed by 623
Abstract
Developing conductive hydrogels that balance high conductivity, stretchability, transparency, and sensitivity for next-generation wearable sensors remains challenging due to inherent trade-offs. This study introduces a straightforward approach to fabricate a semi-interpenetrating double-network hydrogel comprising polyvinyl alcohol (PVA), polyacrylamide (PAM), and lithium chloride (LiCl) to overcome these limitations. Leveraging hydrogen bonding for energy dissipation and chemical cross-linking for structural integrity, the design achieves robust mechanical properties. The incorporation of 1 mol/L LiCl significantly enhances ionic conductivity, while also providing plasticizing and moisture-retention benefits. The optimized hydrogel exhibits impressive ionic conductivity (0.47 S/m, 113% enhancement), excellent mechanical performance (e.g., 0.177 MPa tensile strength, 730% elongation, 0.68 MJ m−3 toughness), high transparency (>85%), and superior strain sensitivity (gauge factors ~1). It also demonstrates rapid response/recovery and robust fatigue resistance. Functioning as a wearable sensor, it reliably monitors diverse human activities and enables novel, secure data handling applications, such as finger-motion-driven Morse code interfaces and gesture-based password systems. This accessible fabrication method yields versatile hydrogels with promising applications in health tracking, interactive devices, and secure communication technologies. Full article
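The finger-motion Morse code interface mentioned above turns timed pulses from the hydrogel strain sensor into characters; the sketch below decodes such pulse and gap durations under assumed thresholds. The timing values, the tiny Morse table, and the pulse-extraction step (omitted here) are illustrative only.

```python
# Decode finger-bend pulse durations (seconds) into Morse characters.
# Thresholds are illustrative assumptions; the actual readout in the paper
# is a resistance change of the hydrogel sensor.
MORSE = {".-": "A", "-...": "B", "-.-.": "C", "...": "S", "---": "O"}

def pulses_to_text(pulse_durations, gaps, dash_threshold=0.4, letter_gap=0.8):
    symbols, letters = [], []
    for duration, gap in zip(pulse_durations, gaps):
        symbols.append("-" if duration >= dash_threshold else ".")
        if gap >= letter_gap:                      # a long pause ends the letter
            letters.append(MORSE.get("".join(symbols), "?"))
            symbols = []
    if symbols:
        letters.append(MORSE.get("".join(symbols), "?"))
    return "".join(letters)

# Three short presses, pause, three long, pause, three short -> "SOS".
durations = [0.1, 0.1, 0.1, 0.6, 0.6, 0.6, 0.1, 0.1, 0.1]
gaps      = [0.2, 0.2, 1.0, 0.2, 0.2, 1.0, 0.2, 0.2, 1.0]
print(pulses_to_text(durations, gaps))  # SOS
```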

13 pages, 245 KB  
Case Report
Noncontact Gesture-Based Switch Improves Communication Speed and Social Function in Advanced Duchenne Muscular Dystrophy: A Case Report
by Daisuke Nishida, Takafumi Kinoshita, Tatsuo Hayakawa, Takashi Nakajima, Yoko Kobayashi, Takatoshi Hara, Ikushi Yoda and Katsuhiro Mizuno
Healthcare 2025, 13(22), 2989; https://doi.org/10.3390/healthcare13222989 - 20 Nov 2025
Viewed by 559
Abstract
Augmentative and alternative communication (AAC) enables digital access for individuals with severe motor impairment. Conventional contact-based switches rely on residual voluntary movement, limiting efficiency. We report the clinical application of a novel, researcher-developed noncontact assistive switch, the Augmentative Alternative Gesture Interface (AAGI), in a 39-year-old male with late-stage Duchenne Muscular Dystrophy (DMD) retaining minimal motion. The AAGI converts subtle, noncontact gestures into digital inputs, enabling efficient computer operations. Before intervention, the participant used a conventional mechanical switch, achieving 12 characters per minute (CPM) in a 2 min text entry task and was unable to perform high-speed ICT tasks such as gaming or video editing. After 3 months of AAGI use, the input speed increased to 30 CPM (a 2.5-fold gain), and previously inaccessible tasks became feasible. The System Usability Scale (SUS) improved from 82.5 to 90.0, indicating enhanced usability, while the Short Form 36 (SF-36) Social Functioning (+13) and Mental Health (+4) demonstrated meaningful gains. Daily living activities remained stable. This case demonstrates that the AAGI system, developed by our group, can substantially enhance communication efficiency, usability, and social engagement in advanced DMD, highlighting its potential as a practical, patient-centered AAC solution that extends digital accessibility to individuals with severe motor disabilities. Full article
(This article belongs to the Special Issue Applications of Assistive Technologies in Health Care Practices)