Search Results (622)

Search Parameters:
Keywords = signed language

24 pages, 898 KB  
Article
A Unified Morphosyntactic Analysis of Reduplication as Inclusion
by Ludovico Franco and Paolo Lorusso
Languages 2026, 11(3), 38; https://doi.org/10.3390/languages11030038 - 27 Feb 2026
Abstract
This paper proposes a unified analysis of reduplication as the lexical spell-out of a relational part–whole/inclusion predicate (⊆) in morphosyntax. Adopting the framework of Manzini and colleagues, we argue that reduplicative morphology—across diverse languages and domains—encodes a subset relation, whereby an event, individual, or property is interpreted as included in a larger set or continuum of similar instances. We bring evidence from a range of typologically diverse languages (Tagalog, Bikol, Malay, Fulfulde, Italian, and sign languages) to show that reduplication correlates with non-maximality: plural number (members of a set), distributivity (individuals/events taken one by one), iterative aspect (sub-events in a larger event), and evaluative attenuation or intensification (a degree as part of a scale). The analysis is developed in a formal syntactic representation where reduplication is triggered by an elementary inclusion operator (⊆) at the X or XP level. We show that a single semantic primitive (⊆) can account for the varied meanings of reduplication in nominal, verbal, and adjectival domains. We discuss the implications of this unified approach, suggesting that reduplication is not a mere iconic or phonological process, but rather the surface reflex of a fundamental grammatical operation of inclusion. Full article
(This article belongs to the Special Issue Morpho(phono)logy/Syntax Interface)
24 pages, 1406 KB  
Article
Linguistic Landscape as a Resource in EGAP Courses: A Case Study
by Maria Yelenevskaya
Educ. Sci. 2026, 16(3), 359; https://doi.org/10.3390/educsci16030359 - 25 Feb 2026
Viewed by 66
Abstract
This article explores the incorporation of linguistic landscape (LL) studies into English for General Academic Purposes (EGAP) courses, emphasizing its potential to enhance language learning through real-world engagement. This study highlights the growing interest in LL as a sociolinguistic phenomenon that reflects urban multilingualism and cultural dynamics. The goal of this article is to analyze pedagogical benefits of integrating LL into language education, such as fostering critical thinking, pragmatic competence, intercultural awareness among students, and creating situations in which the target language is used in natural communication. Through a case study conducted at the Guangdong Technion–Israel Institute of Technology, the author presents specific classroom activities and reports on how they can be combined with fieldwork conducted by students. The goal of the tasks was to let students analyze language use in public spaces, classifying the surrounding signs into top-down and bottom-up, and informative and regulatory, and discuss how social prestige of languages is reflected in multilingual signs. In documenting written language in public places, creating their own signs and assessing their peers’ work, students were practicing both receptive and productive skills. Most of the work was done in small groups, which contributed to the students’ ability to collaborate with peers. The findings suggest that LL projects can effectively bridge classroom learning with lived language experiences, although challenges remain in implementation due to time constraints and pedagogical ideologies. Full article
(This article belongs to the Special Issue Innovation and Design in Multilingual Education)
26 pages, 969 KB  
Article
Student Learning Outcome Prediction via Sheaflet-Based Graph Learning and LLM
by Dongmei Zhang, Zhanle Zhu, Yukang Cheng and Yongchun Gu
Appl. Sci. 2026, 16(3), 1658; https://doi.org/10.3390/app16031658 - 6 Feb 2026
Viewed by 191
Abstract
Accurately modeling the interactions between students and learning content is a central challenge in achieving personalized and adaptive learning in online education. However, existing methods often struggle to simultaneously capture the multi-scale structural dependencies and the rich semantic information embedded in educational materials. To bridge this gap, we propose EduSheaf—a unified framework that integrates large language models (LLMs) with a sheaflet-based signed graph neural network. Specifically, LLMs are employed to extract fine-grained semantic embeddings from multiple-choice questions (MCQs), thereby enriching graph representations with contextual knowledge. A signed graph is then constructed to encode student–MCQ interactions, where correct and incorrect responses are represented as positive and negative edges. On top of this, a novel sheaflet-based signed graph neural network performs multi-frequency learning through low-pass and high-pass filters, enabling the joint modeling of global consensus and local variations, while sheaf structures enforce edge-level consistency. Extensive experiments on multiple real-world educational datasets demonstrate that EduSheaf consistently outperforms state-of-the-art baselines, including both semantic-enhanced and signed graph models, in terms of prediction accuracy and robustness. Ablation studies further reveal the complementary roles of semantic embeddings and multi-frequency graph filters. Full article
(This article belongs to the Special Issue Generative AI for Intelligent Knowledge Systems and Adaptive Learning)
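The signed student–MCQ graph described in this abstract — correct responses as positive edges, incorrect ones as negative edges — can be sketched in a few lines. The function name and the triple-based response format below are illustrative assumptions, not details from the EduSheaf paper:

```python
def build_signed_graph(responses):
    """Build a signed edge map from (student_id, question_id, is_correct) triples.

    Correct answers become +1 edges, incorrect answers become -1 edges,
    mirroring the positive/negative edge encoding the abstract describes.
    """
    edges = {}
    for student, question, is_correct in responses:
        edges[(student, question)] = 1 if is_correct else -1
    return edges

graph = build_signed_graph([
    ("s1", "q1", True),
    ("s1", "q2", False),
    ("s2", "q1", True),
])
```

The sheaflet filters then operate on this signed adjacency structure; only the edge-sign encoding itself is shown here.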
16 pages, 257 KB  
Article
Sign Language and Educational Exclusion: Testimonies of Deaf Individuals Schooled Between 1960 and 1980
by Iván Vázquez-Villar, Rosa Espada-Chavarria and Ricardo Moreno-Rodriguez
Disabilities 2026, 6(1), 15; https://doi.org/10.3390/disabilities6010015 - 6 Feb 2026
Viewed by 667
Abstract
This study explores the educational trajectories of elderly deaf people in Spain who were educated between 1960 and 1980. The research was based on biographical-narrative methodology as a qualitative research technique. The data analysis was structural, using code identification and a system of categories and dimensions. Based on the stories and testimonies of 18 deaf people over the age of 65 living in Galicia, the stereotypes, prejudices and academic barriers in their school experience are analysed. The testimonies reveal an exclusionary education system, marked by a lack of accessibility, an absence of sign language interpreters, and the imposition of oralism as the only means of teaching. These conditions negatively affected the participants’ personal development, self-esteem, and employment opportunities. Discriminatory attitudes on the part of teachers and the school community were also identified. However, some highlighted key support and the informal use of sign language as positive elements. The study emphasises that, although there have been improvements in the education of deaf people, further progress is needed in the development of inclusive education policies that recognise sign language and promote accessibility and equity in the education of deaf people. Full article
16 pages, 268 KB  
Article
Unspoken, Yet Lived: Reflections on Sexual and Reproductive Health and Rights Among Youth with Disabilities in Gulu, Northern Uganda
by Muriel Mac-Seing, Bryan Eryong, Emma Ajok, Peace Anena, Priscilla Lakot, Prisca Aciro, Caesar Okello, Christopher Opworwot and Martin Daniel Ogenrwot
Youth 2026, 6(1), 17; https://doi.org/10.3390/youth6010017 - 6 Feb 2026
Viewed by 345
Abstract
Background: Youth with disabilities remain among the most overlooked groups in global sexual and reproductive health and rights (SRHR) discourses, including in sub-Saharan Africa. Yet, their SRHR needs are often ignored. This reflexive article aims to illuminate and recenter the experiences and perspectives of youth with disabilities living in Gulu City and Gulu District, Northern Uganda, exploring what matters to them regarding SRHR and their broader life aspirations. Methods: We adopted a qualitative, reflexive and participatory approach. Data were collected among six Ugandan young co-researchers with different disabilities (physical, visual, hearing, and albinism), who interacted with two Ugandan research assistants and a Canadian researcher involved in a larger SRHR research project. They engaged in in-person and virtual WhatsApp and Microsoft Teams exchanges over weeks, with the support of three Ugandan Sign Language interpreters. We thematically analyzed data, informed by the Intersectionality-based Policy Analysis and Structural Health Vulnerabilities and Agency frameworks. Results: Our analysis revealed four main findings: (1) the persistent feeling of social discrimination, stigma, and exclusion, including from parents, (2) inaccessible SRHR information and services, and knowledge gaps, (3) gender- and disability-based violence, and (4) youth with disabilities’ aspirations for SRHR and in life. Conclusions: The voices of youth with disabilities in Gulu underscore the value of disability equity-focused research. They reminded us that they are intelligent, capable, and thoughtful citizens with agency whose SRHR and broader well-being must be acknowledged and respected. Their perspectives carry critical implications for SRHR programming, policy, and research. Full article
17 pages, 7804 KB  
Article
A 3D Camera-Based Approach for Real-Time Hand Configuration Recognition in Italian Sign Language
by Luca Ulrich, Asia De Luca, Riccardo Miraglia, Emma Mulassano, Simone Quattrocchio, Giorgia Marullo, Chiara Innocente, Federico Salerno and Enrico Vezzetti
Sensors 2026, 26(3), 1059; https://doi.org/10.3390/s26031059 - 6 Feb 2026
Viewed by 249
Abstract
Deafness poses significant challenges to effective communication, particularly in contexts where access to sign language interpreters is limited. Hand configuration recognition represents a fundamental component of sign language understanding, as configurations constitute a core cheremic element in many sign languages, including Italian Sign Language (LIS). In this work, we address configuration-level recognition as an independent classification task and propose a machine vision framework based on RGB-D sensing. The proposed approach combines MediaPipe-based hand landmark extraction with normalized three-dimensional geometric features and a Support Vector Machine classifier. The first contribution of this study is the formulation of LIS hand configuration recognition as a standalone, configuration-level problem, decoupled from temporal gesture modeling. The second contribution is the integration of sensor-acquired RGB-D depth measurements into the landmark-based feature representation, enabling a direct comparison with estimated depth obtained from monocular data. The third contribution consists of a systematic experimental evaluation on two LIS configuration sets (6 and 16 classes), demonstrating that the use of real depth significantly improves classification performance and class separability, particularly for geometrically similar configurations. The results highlight the critical role of depth quality in configuration-level recognition and provide insights into the design of robust vision-based systems for LIS analysis. Full article
(This article belongs to the Special Issue Sensing and Machine Learning Control: Progress and Applications)
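The "normalized three-dimensional geometric features" this abstract mentions can be illustrated with a minimal sketch: translate the 21 hand landmarks so the wrist is the origin, then scale by a reference bone length so the features are invariant to hand position and size. The landmark indices follow MediaPipe's hand model (0 = wrist, 9 = middle-finger MCP); the exact feature set and the SVM classifier used in the paper are omitted, so treat this as one plausible normalization, not the authors' implementation:

```python
import math

def normalize_landmarks(points):
    """Normalize 21 (x, y, z) hand landmarks for position/scale invariance.

    Translates so the wrist (index 0) is the origin, then divides by the
    wrist-to-middle-MCP distance (index 9) as a hand-size reference.
    """
    wx, wy, wz = points[0]
    shifted = [(x - wx, y - wy, z - wz) for x, y, z in points]
    mx, my, mz = shifted[9]
    scale = math.sqrt(mx * mx + my * my + mz * mz) or 1.0
    return [(x / scale, y / scale, z / scale) for x, y, z in shifted]
```

Vectors normalized this way can be fed to any off-the-shelf classifier; the paper's comparison of real versus estimated depth happens upstream, in how the z coordinate is obtained.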
22 pages, 763 KB  
Article
Comparative Evaluation of LSTM and 3D CNN Models in a Hybrid System for IoT-Enabled Sign-to-Text Translation in Deaf Communities
by Samar Mouti, Hani Al Chalabi, Mohammed Abushohada, Samer Rihawi and Sulafa Abdalla
Informatics 2026, 13(2), 27; https://doi.org/10.3390/informatics13020027 - 5 Feb 2026
Viewed by 393
Abstract
This paper presents a hybrid deep learning framework for real-time sign language recognition (SLR) tailored to Internet of Things (IoT)-enabled environments, enhancing accessibility for Deaf communities. The proposed system integrates a Long Short-Term Memory (LSTM) network for static gesture recognition and a 3D Convolutional Neural Network (3D CNN) for dynamic gesture recognition. Implemented on a Raspberry Pi device using MediaPipe for landmark extraction, the system supports low-latency, on-device inference suitable for resource-constrained edge computing. Experimental results demonstrate that the LSTM model achieves its highest stability and performance for static signs at 1000 training epochs, yielding an average F1-score of 0.938 and an accuracy of 86.67%. In contrast, at 2000 epochs, the model exhibits a catastrophic performance collapse (F1-score of 0.088) due to overfitting and weight instability, highlighting the necessity of careful training regulation. Despite this, the overall system achieves consistently high classification performance under controlled conditions. In contrast, the 3D CNN component maintains robust and consistent performance across all evaluated training phases (500–2000 epochs), achieving up to 99.6% accuracy on dynamic signs. When deployed on a Raspberry Pi platform, the system achieves real-time performance with a frame rate of 12–15 FPS and an average inference latency of approximately 65 ms per frame. The hybrid architecture effectively balances recognition accuracy with computational efficiency by routing static gestures to the LSTM and dynamic gestures to the 3D CNN. This work presents a detailed epoch-wise comparative analysis of model stability and computational feasibility, contributing a practical and scalable IoT-enabled solution for inclusive, real-time sign-to-text communication in intelligent environments. Full article
(This article belongs to the Section Machine Learning)
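The hybrid architecture routes static gestures to the LSTM and dynamic gestures to the 3D CNN. A toy version of that routing decision is sketched below, classifying a clip as static or dynamic by how much its landmarks move between frames; the threshold and function names are illustrative assumptions, not values from the paper:

```python
def route_gesture(frames, motion_threshold=0.05):
    """Decide which branch a clip should go to.

    frames: list of per-frame landmark lists, each a list of (x, y) tuples.
    Returns "static" (LSTM branch) when mean inter-frame landmark motion
    is below the threshold, else "dynamic" (3D CNN branch).
    """
    total = 0.0
    for prev, cur in zip(frames, frames[1:]):
        total += sum(abs(cx - px) + abs(cy - py)
                     for (px, py), (cx, cy) in zip(prev, cur))
    mean_motion = total / max(len(frames) - 1, 1)
    return "static" if mean_motion < motion_threshold else "dynamic"
```

In a deployment like the Raspberry Pi setup described, this gate runs cheaply on the MediaPipe landmarks before either heavyweight model is invoked.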
17 pages, 980 KB  
Article
Dual-View Sign Language Recognition via Front-View Guided Feature Fusion for Automatic Sign Language Training
by Siyuan Jing and Gaorong Yan
Information 2026, 17(2), 158; https://doi.org/10.3390/info17020158 - 5 Feb 2026
Viewed by 223
Abstract
The foundation of an automatic sign language training (ASLT) system lies in word-level sign language recognition (WSLR), which refers to the translation of captured sign language signals into sign words. However, two key issues need to be addressed in this field: (1) the number of sign words in all public sign language datasets is too small, and the words do not match real-world scenarios, and (2) only single-view sign videos are typically provided, which makes solving the problem of hand occlusion difficult. In this work, we design an efficient algorithm for WSLR which is trained on our recently released NationalCSL-DP dataset. The algorithm first performs frame-level alignment of dual-view sign videos. A two-stage deep neural network is then employed to extract the spatiotemporal features of the signers, including hand motions and body gestures. Furthermore, a front-view guided early fusion (FvGEF) strategy is proposed for effective fusion of features from different views. Extensive experiments were carried out to evaluate the algorithm. The results show that the proposed algorithm significantly outperformed existing dual-view sign language recognition algorithms. Compared with several state-of-the-art methods, the proposed algorithm achieves Top-1 accuracy on the NationalCSL6707 dataset that is 10.29 and 11.38 higher than MViT and CNN + Transformer, respectively. Full article
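The abstract's first step, frame-level alignment of the dual-view videos, can be sketched generically as pairing each front-view frame with the side-view frame nearest in time. This is one plausible reading of "frame-level alignment"; the actual NationalCSL-DP procedure may differ:

```python
import bisect

def align_frames(front_ts, side_ts):
    """Pair each front-view timestamp with the nearest side-view timestamp.

    Both timestamp lists are assumed sorted ascending.
    """
    pairs = []
    for t in front_ts:
        i = bisect.bisect_left(side_ts, t)
        # Nearest neighbour is either the insertion point or the one before it.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(side_ts)]
        best = min(candidates, key=lambda j: abs(side_ts[j] - t))
        pairs.append((t, side_ts[best]))
    return pairs
```

Once frames are paired this way, per-view features can be extracted and fused (in the paper, via the front-view guided early fusion strategy).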
20 pages, 8475 KB  
Article
Characterizing the Spatial Distribution of Imprinted Signs on Old Forestry Tools Across the Alpine Region
by Barbara Vinceti, Onorio Zanier and Pietro Piussi
Heritage 2026, 9(2), 49; https://doi.org/10.3390/heritage9020049 - 29 Jan 2026
Viewed by 312
Abstract
The presence of distinctive imprinted signs on old forestry tools reflects a little-documented tradition practiced by artisanal blacksmiths in the Alpine region until the early 20th century. These marks, hammered onto tools such as axes and pickaroons, carried meanings that intertwined craftsmanship, ownership, and local identity. This element of material culture is rarely mentioned in the literature. This study examined imprinted signs on 331 tools from 88 locations across the Alpine regions of Italy, from Friuli-Venezia Giulia to Valle d’Aosta, with supplementary observations in other countries. The objectives were to record the geographic distribution of imprints, interpret their potential meanings, and preserve evidence of a disappearing tradition. The spatial distribution of the markings corresponded to the Alpine territory and overlapped with a shared cultural region inhabited by three ethnic groups, although similar signs were recorded as far as the Carpathian regions. The meanings of certain imprints, such as religious symbols or representations of the tree of life, are recognizable, whereas those of other common signs remain unknown. The findings suggest that the imprints may reflect a distinct cultural practice and a symbolic language whose full significance has yet to be understood and would require further ethnographic investigations. Full article
20 pages, 875 KB  
Review
On the Coexistence of Captions and Sign Language as Accessibility Solutions in Educational Settings
by Francesco Pavani and Valerio Leonetti
Audiol. Res. 2026, 16(1), 20; https://doi.org/10.3390/audiolres16010020 - 29 Jan 2026
Viewed by 329
Abstract
Background/Objectives: In mainstream educational settings, deaf and hard-of-hearing (DHH) students may have limited or no access to the spoken lectures and discussions that are central to the hearing majority classroom. Yet, engagement in these educational and social exchanges is fundamental to their learning and inclusion. Two primary visual accessibility solutions can support this need: real-time speech-to-text transcriptions (i.e., captioning) and high-quality sign language interpreting. Their combined use (or coexistence), however, raises concerns of competition between concurrent streams of visual information. This article examines the empirical evidence concerning the effectiveness of using both captioning and sign language simultaneously in educational settings. Specifically, it investigates whether this combined approach leads to better or worse content learning for DHH students, when compared to using either visual accessibility solution in isolation. Methods: A review of all English language studies in peer-reviewed journals until August 2025 was performed. Eligible studies used an experimental design to compare content learning when using sign language and captions together, versus using sign language or captions on their own. Databases Reviewed: EMBASE, PubMed/MEDLINE, and PsycInfo. Results: A total of four studies met the criteria for inclusion. This limited evidence is insufficient to decide on the coexistence of captioning and sign language. Yet, it underscores the potential of captions for content access in education for DHH, even when sign language is available. Conclusions: The present article reveals the lack of evidence in favor or against its coexistence with sign language. With the aim to be constructive for future research, the discussion offers considerations on the attentional demands of simultaneous visual accessibility resources, the diversity of DHH learners, and the impact of current and forthcoming technological advancements. 
Full article
16 pages, 1699 KB  
Article
A Comparative Assessment of ChatGPT, Gemini, and DeepSeek Accuracy: Examining Visual Medical Assessment in Internal Medicine Cases with and Without Clinical Context
by Rayah Asiri, Azfar Athar Ishaqui, Salman Ashfaq Ahmad, Muhammad Imran, Khalid Orayj and Adnan Iqbal
Diagnostics 2026, 16(3), 388; https://doi.org/10.3390/diagnostics16030388 - 26 Jan 2026
Viewed by 530
Abstract
Background and Aim: Large language models (LLMs) demonstrate significant potential in assisting with medical image interpretation. However, the diagnostic accuracy of general-purpose LLMs on image-based internal medicine cases and the added value of brief clinical history remain unclear. This study evaluated three general-purpose LLMs (ChatGPT, Gemini, and DeepSeek) on expert-curated cases to quantify diagnostic accuracy with image-only input versus image plus brief clinical context. Methods: We conducted a comparative evaluation using 138 expert-curated cases from Harrison’s Visual Case Challenge. Each case was presented to the models in two distinct phases: Phase 1 (image only) and Phase 2 (image plus a brief clinical history). The primary endpoint was top-1 diagnostic accuracy for the textbook diagnosis, comparing performance with versus without a brief clinical history. Secondary/Exploratory analyses compared models and assessed agreement between model-generated differential lists and the textbook differential. Statistical analysis included Wilson 95% confidence intervals, McNemar’s tests, Cochran’s Q with Benjamini–Hochberg correction, and Wilcoxon signed-rank tests. Results: The inclusion of clinical history substantially improved diagnostic accuracy for all models. ChatGPT’s accuracy increased from 50.7% in Phase 1 to 80.4% in Phase 2. Gemini’s accuracy improved from 39.9% to 72.5%, and DeepSeek’s accuracy rose from 30.4% to 75.4%. In Phase 2, diagnostic accuracy reached at least 65% across most disease nature and organ system categories. However, agreement with the reference differential diagnoses remained modest, with average overlap rates of 6.99% for ChatGPT, 36.39% for Gemini, and 32.74% for DeepSeek. Conclusions: The provision of brief clinical history significantly enhances the diagnostic accuracy of large language models on visual internal medicine cases. 
In this benchmark, performance differences between models were smaller in Phase 2 than in Phase 1. While diagnostic precision improves markedly, the models’ ability to generate comprehensive differential diagnoses that align with expert consensus is still limited. These findings underscore the utility of context-aware, multimodal LLMs for educational support and structured diagnostic practice in supervised settings while also highlighting the need for more sophisticated, semantics-sensitive benchmarks for evaluating diagnostic reasoning. Full article
(This article belongs to the Special Issue Deep Learning in Medical Imaging: Challenges and Opportunities)
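The Methods name Wilson 95% confidence intervals for the accuracy figures. The standard Wilson score interval for k successes out of n trials is easy to compute directly; the 70/138 example below is only illustrative (roughly the 50.7% Phase 1 accuracy over 138 cases), not a value recomputed from the paper's data:

```python
import math

def wilson_interval(k, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - half, center + half
```

Unlike the naive normal-approximation interval, the Wilson interval stays inside [0, 1] and behaves sensibly for proportions near 0 or 1, which matters for the per-category accuracies reported here.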
37 pages, 2397 KB  
Article
MedROAD V2: An AI-Integrated Electronic Medical Record System with Advanced Clinical Decision Support
by Pierre Boulanger
AI Med. 2026, 1(1), 4; https://doi.org/10.3390/aimed1010004 - 23 Jan 2026
Viewed by 534
Abstract
Despite widespread adoption, Electronic Medical Record (EMR) systems remain limited in providing intelligent clinical decision support, particularly for early detection of patient deterioration. We present MedROAD V2 (Medical Records Organization, Analysis, and Display), an open-source EMR that integrates AI-driven physiological analysis with comprehensive patient management. The system combines continuous vital sign monitoring and laboratory data using an ensemble of the following four complementary machine learning models: gradient boosting for supervised prediction, isolation forests for anomaly detection, autoencoders for pattern recognition, and Long Short-Term Memory networks for temporal modeling. A novel framework couples these predictions with a large language model (Claude AI) to generate explainable differential diagnoses grounded in medical literature. Validation on the MIMIC-IV database demonstrated excellent 12 h deterioration prediction. MedROAD demonstrates that combining quantitative prediction with natural language explanation can enhance clinical decision support while extending quality care to populations that would otherwise lack access. Full article
(This article belongs to the Special Issue Machine Learning Applications for Risk Stratification in Healthcare)
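The abstract does not specify how the four model outputs (gradient boosting, isolation forest, autoencoder, LSTM) are combined into a single deterioration score; a weighted average over per-model risk scores is one minimal possibility, sketched here purely as an assumption:

```python
def ensemble_risk(scores, weights=None):
    """Combine per-model risk scores in [0, 1] into one ensemble score.

    scores: dict mapping model name -> risk score.
    weights: optional dict of per-model weights; defaults to equal weighting.
    """
    if weights is None:
        weights = {name: 1.0 for name in scores}
    total_w = sum(weights[name] for name in scores)
    return sum(scores[name] * weights[name] for name in scores) / total_w
```

In a MedROAD-style pipeline, a fused score like this would be what the LLM layer is asked to explain against the patient's vitals and labs.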
10 pages, 193 KB  
Review
Attention to Elderspeak: A Call for Dignity-Affirming Communication in Advanced Nursing Care
by Takahiko Nagamine
Clin. Pract. 2026, 16(1), 21; https://doi.org/10.3390/clinpract16010021 - 22 Jan 2026
Cited by 1 | Viewed by 277
Abstract
Elderspeak is a form of communication overaccommodation directed toward older adults, characterized by simplified language and an elevated pitch. While typically well-intentioned, it is rooted in ageist stereotypes and linked to negative health outcomes. A literature search was conducted in PubMed, CINAHL, and PsycINFO (2018–2025), yielding 24 key articles focusing on acute and surgical settings. The purpose of this narrative review is to synthesize current evidence on Elderspeak within acute care hospitals and propose a research framework and intervention strategies. Elderspeak is a key determinant of resistiveness to care (RTC), particularly in acute settings where it is triggered by functional impairment. Exposure increases patient distress and negatively impacts vital signs and cooperation with medical interventions. Inconsistent measurement is being addressed through standardized schemes like the Iowa Coding Scheme for Elderspeak (ICodE). This paper proposes that future research must employ mixed-methods, longitudinal designs to capture the impact of Elderspeak on long-term outcomes. Drawing on the ICodE, we propose a qualitative self-reflection tool for clinicians to enhance awareness in high-stakes acute settings. Eliminating Elderspeak is a foundational necessity for patient safety and dignity-affirming care in advanced nursing. Full article
32 pages, 4599 KB  
Article
Adaptive Assistive Technologies for Learning Mexican Sign Language: Design of a Mobile Application with Computer Vision and Personalized Educational Interaction
by Carlos Hurtado-Sánchez, Ricardo Rosales Cisneros, José Ricardo Cárdenas-Valdez, Andrés Calvillo-Téllez and Everardo Inzunza-Gonzalez
Future Internet 2026, 18(1), 61; https://doi.org/10.3390/fi18010061 - 21 Jan 2026
Viewed by 324
Abstract
Integrating people with hearing disabilities into schools is one of the biggest problems that Latin American societies face. Mexican Sign Language (MSL) is the main language and culture of the deaf community in Mexico. However, its use in formal education is still limited by structural inequalities, a lack of qualified interpreters, and a lack of technology that can support personalized instruction. This study outlines the conceptualization and development of a mobile application designed as an adaptive assistive technology for learning MSL, utilizing a combination of computer vision techniques, deep learning algorithms, and personalized pedagogical interaction. The suggested system uses convolutional neural networks (CNNs) and pose-estimation models to recognize hand gestures in real time with 95.7% accuracy. It then gives the learner instant feedback by changing the difficulty level. A dynamic learning engine automatically changes the level of difficulty based on how well the learner is doing, which helps them learn signs and phrases over time. The Scrum agile methodology was used during the development process. This meant that educators, linguists, and members of the deaf community all worked together to design the product. Early tests show that sign recognition accuracy and indicators of user engagement and motivation show favorable performance and are at appropriate levels. This proposal aims to enhance inclusive digital ecosystems and foster linguistic equity in Mexican education through scalable, mobile, and culturally relevant technologies, in addition to its technical contributions. Full article
(This article belongs to the Special Issue Machine Learning Techniques for Computer Vision—2nd Edition)
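The "dynamic learning engine" this abstract describes adjusts difficulty automatically from learner performance. A minimal rule of that kind is sketched below; the accuracy thresholds, level range, and step size are illustrative assumptions, not parameters from the application:

```python
def adjust_difficulty(level, recent_accuracy, low=0.6, high=0.85,
                      min_level=1, max_level=10):
    """Step the difficulty level up or down based on recent sign accuracy.

    High recent accuracy advances the learner one level; low accuracy
    drops one level; anything in between holds the level steady.
    """
    if recent_accuracy >= high:
        level += 1
    elif recent_accuracy < low:
        level -= 1
    return max(min_level, min(max_level, level))
```

A rule this simple already gives the "instant feedback by changing the difficulty level" behaviour the abstract mentions; a production engine would likely smooth over several sessions rather than react to a single accuracy reading.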
31 pages, 643 KB  
Review
Emotional Intelligence Measurement Tools and Deaf and Hard-of-Hearing People—Scoping Review
by Petra Potmesilova, Milon Potmesil, Ling Guo, Veronika Ruzickova, Gabriela Spinarova and Jana Kvintova
Disabilities 2026, 6(1), 10; https://doi.org/10.3390/disabilities6010010 - 16 Jan 2026
Viewed by 382
Abstract
Background: Emotions—including joy, sadness, fear, and anger—are fundamental expressions of human experience. For children and adults who are deaf or hard-of-hearing, emotional experiences and communication can differ due to linguistic and communication-related factors. Methods: This scoping review identifies instruments that are suitable for assessing emotional intelligence in the context of the lived and cultural experiences of individuals who are deaf or hard-of-hearing. A comprehensive search was conducted in April 2024 following the JBI methodology. Results: Out of 3091 articles, 21 studies were included. Two adapted methods were identified: the Meadow/Kendall Social–Emotional Assessment Inventory and ISEAR-D. Assessments supported by sign language revealed no significant differences in age or gender. Conclusions: The authors recommend further development of screening instruments that reflect the specific experiences of the population who are deaf or hard-of-hearing. Full article