Search Results (654)

Search Parameters:
Keywords = sign language

44 pages, 7975 KB  
Article
A Validated Design Guideline for Mobile Applications Grounded in the Participation of Deaf Users for Accessible Development
by Andrés Eduardo Fuentes-Cortázar and José Rafael Rojano-Cáceres
Computers 2026, 15(5), 278; https://doi.org/10.3390/computers15050278 - 27 Apr 2026
Viewed by 47
Abstract
Mobile devices are widely used, yet accessibility for people with disabilities remains a critical challenge. Deaf users who rely primarily on sign language (SL) frequently encounter barriers when interacting with applications not designed for their communication needs. This study proposes a design guide for developing mobile applications tailored to sign language users. The guide was developed through the active participation of three groups: Deaf individuals, usability and user experience (UX) experts, and mobile application developers. Based on their contributions, thirteen design guidelines were defined, addressing sign language integration, visual feedback, navigation, content presentation, and interface design. The guidelines were validated through usability and UX evaluations conducted with the three participant groups. A mobile application was subsequently developed following the proposed guidelines to assess their practical applicability. The evaluation results indicate that the guide effectively supports the development of more accessible and usable mobile applications for Deaf users. Incorporating sign language-centered design principles significantly improves usability and user experience for individuals with hearing disabilities, contributing to more inclusive mobile application development. Full article
(This article belongs to the Section Human–Computer Interactions)

24 pages, 983 KB  
Article
Letter Position Dyslexia in Fingerspelling: Similar Error Patterns in Reading Written and Fingerspelled Words
by Naama Friedmann, Neta Haluts and Doron Levy
Behav. Sci. 2026, 16(5), 654; https://doi.org/10.3390/bs16050654 - 26 Apr 2026
Viewed by 110
Abstract
Letter position dyslexia is a deficit in letter position encoding in the orthographic-visual analysis stage. We report here the first cases of deaf signers who show letter position dyslexia in both written and fingerspelled words. Their error pattern was identical in the two modes of presentation, written and fingerspelled: in both modes, they had almost only within-word letter transpositions; their transpositions involved middle letters and almost never exterior letters; they had errors of doubled letters; and their errors occurred almost only in migratable words. They showed no transpositions in reading multi-digit numbers. These results indicate that despite the temporal separation between the fingerspelled letters, the reading of fingerspelling uses the same cognitive processes, and specifically, the same letter-position-encoding mechanism, as the reading of written words. Full article
(This article belongs to the Special Issue Understanding Dyslexia and Developmental Language Disorders)
15 pages, 685 KB  
Review
Ocular Clues to Liver Disease: A Strategic Diagnostic Lens
by Muhammad Dahshan, Hassan Dahshan, Ayhan Basoglu and Huseyin Kadikoy
Diseases 2026, 14(5), 152; https://doi.org/10.3390/diseases14050152 - 24 Apr 2026
Viewed by 199
Abstract
Background/Objectives: Hepatic diseases frequently present with ocular manifestations that aid diagnosis, provide prognostic data, and guide therapy. Despite the clear utility of the liver–eye axis, the literature lacks reviews that categorize these manifestations by etiology. This review evaluates current evidence to identify ocular findings that serve as clinical tools for diagnosis, prognosis, and therapeutic monitoring of hepatic pathologies. Methods: A narrative review was conducted using PubMed and Google Scholar to identify English-language articles addressing ocular manifestations associated with liver disease. The primary search encompassed publications from 2000 to 2025, with inclusion of select foundational works published prior to 2000 when they represented seminal studies establishing diagnostic criteria, pathophysiological mechanisms, or natural history data not superseded by subsequent research. Search terms included combinations of liver, hepatic, hepatitis, cirrhosis, cholestasis, eye, ocular, retina, cornea, sclera, conjunctiva, ophthalmic manifestations, and specific disease names. All study designs were eligible. Society guidelines, systematic reviews, and studies from high-impact journals were prioritized. The final selection comprised 59 references representing the most authoritative sources across the spectrum of hepatic conditions. Results: A spectrum of ocular findings linked to distinct hepatic conditions was identified. Manifestations with established clinicopathologic associations were categorized into congenital and acquired etiologies. Congenital liver pathologies included metabolic disorders (Wilson disease, galactosemia, lysosomal storage disorders) and syndromic/genetic causes (Alagille syndrome, hereditary hemochromatosis). 
Acquired liver diseases encompassed infectious (hepatitis B/C), drug-induced and iatrogenic (interferon, immune checkpoint inhibitors), nutritional (vitamin A deficiency), neoplastic (metastatic hepatocellular carcinoma), and cirrhotic causes. Conclusions: Specific ocular signs raise clinical suspicion for underlying liver disease and warrant targeted hepatic evaluation. Recognizing these associations facilitates earlier diagnosis and improves outcomes. Systematic screening for these signs is supported in at-risk populations, and prospective validation studies should establish their sensitivity and specificity. Full article
(This article belongs to the Special Issue Viral Hepatitis: Diagnosis, Treatment and Management—2nd Edition)

16 pages, 758 KB  
Article
Large Language Models in Medical and Dental Education: A Cross-Sectional Comparison of AI-Generated and Faculty-Authored Prosthodontic Materials
by Alexia-Ecaterina Cârstea, Lucian-Toma Ciocan, Vlad-Gabriel Vasilescu, Ana-Maria Cristina Țâncu, Marina Imre, Andreea-Cristiana Didilescu and Silviu-Mirel Pițuru
Dent. J. 2026, 14(5), 249; https://doi.org/10.3390/dj14050249 - 23 Apr 2026
Viewed by 174
Abstract
Background/Objectives: This study aimed to compare AI-generated educational material with faculty-authored content in Dental Prostheses Technology, evaluating perceived clarity, accuracy, structure, usefulness, and overall instructional quality across different age and professional groups. Methods: An analytical cross-sectional study was conducted using two versions of the first three chapters of a prosthodontics textbook: the original faculty-authored text and a reformulated version generated by ChatGPT 5.2 (OpenAI). Images were removed and formatting standardized to ensure a text-only comparison. An anonymized online questionnaire based on a five-point Likert scale assessed clarity, accuracy, readability, usefulness and structure. To reduce potential bias, participants were unaware of the authorship of the evaluated materials (human-authored vs. AI-generated). A total of 130 participants independently reviewed both documents. Data were analyzed using Wilcoxon signed-rank, Mann–Whitney U, and Friedman tests. Results: Both materials received favorable evaluations across all dimensions. The AI-generated version demonstrated a statistically significant advantage in clarity (Z = −2.107, p = 0.035; r = 0.19), while no significant differences were observed for structure, accuracy, readability, or usefulness. Generational differences emerged: younger participants valued improved clarity but reported reduced usefulness, mid-career participants showed the greatest improvement in perceived accuracy, and senior professionals reported substantial gains in usefulness and readability. Conclusions: AI-generated educational material demonstrates pedagogical equivalence to faculty-authored content, with clarity representing its principal advantage. Large language models may serve as effective complementary tools in dental education, particularly for restructuring complex content. Full article
(This article belongs to the Special Issue Dental Education: Innovation and Challenge)
8 pages, 2823 KB  
Proceeding Paper
Innovative Filipino Sign Language Translation and Interpretation with MediaPipe
by Zylwyn A. Alejo, Nathan Cyvel Jann R. Fuentes, Maria Patricia Z. Lungay, Alpha Isabel D. Maniquez, Paul Emmanuel G. Empas and John Paul T. Cruz
Eng. Proc. 2026, 134(1), 75; https://doi.org/10.3390/engproc2026134075 - 22 Apr 2026
Viewed by 338
Abstract
Filipino Sign Language (FSL) serves as a vital means of communication for the Deaf and hard-of-hearing in the Philippines. However, its societal use remains limited due to the scarcity of qualified interpreters and the general lack of FSL literacy among the population. Therefore, this study aims to address the gap between FSL development and automated FSL translation by employing machine learning and computer vision techniques. A model was trained using the FSL-105 dataset, which comprises video clips of gestures related to greetings and colors, and utilized MediaPipe for real-time detection of hand, face, and body landmarks. Through iterative training with transfer learning, the model’s performance improved from an initial accuracy of 80% to a final accuracy of 98.75%. The results demonstrate that the MediaPipe-based model can reliably interpret FSL gestures, positioning it as a potentially accessible assistive tool for the Deaf and hard of hearing community. This technology holds promise for applications in education, healthcare, and public service, offering new opportunities to promote the social inclusion of Filipino Deaf communities through more inclusive communication. Full article
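The landmark-based pipeline this abstract describes can be illustrated with a minimal sketch. Assumptions: MediaPipe's hand model emits 21 landmarks per hand, each with (x, y, z) coordinates, and a sequence model consumes one flattened feature vector per frame. The simulated arrays below merely stand in for real MediaPipe output; none of the names or values come from the paper.

```python
import numpy as np

N_LANDMARKS = 21   # landmarks per hand in MediaPipe's hand model
N_COORDS = 3       # x, y, z per landmark
N_FRAMES = 30      # frames sampled from one gesture clip

rng = np.random.default_rng(0)
# Simulated per-frame landmarks standing in for MediaPipe detections:
# shape (frames, landmarks, coords)
clip = rng.random((N_FRAMES, N_LANDMARKS, N_COORDS))

# Flatten each frame into a single 63-dimensional feature vector,
# yielding a (time, features) sequence suitable for a recurrent model
sequence = clip.reshape(N_FRAMES, N_LANDMARKS * N_COORDS)

print(sequence.shape)  # (30, 63)
```

A trained classifier (the paper's transfer-learned model) would then map such a sequence to one of the gesture labels.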

27 pages, 3995 KB  
Article
Video-Based Arabic Sign Language Recognition with Mediapipe and Deep Learning Techniques
by Dana El-Rushaidat, Nour Almohammad, Raine Yeh and Kinda Fayyad
J. Imaging 2026, 12(4), 177; https://doi.org/10.3390/jimaging12040177 - 20 Apr 2026
Viewed by 348
Abstract
This paper addresses the critical communication barrier experienced by deaf and hearing-impaired individuals in the Arab world through the development of an affordable, video-based Arabic Sign Language (ArSL) recognition system. Designed for broad accessibility, the system eliminates specialized hardware by leveraging standard mobile or laptop cameras. Our methodology employs Mediapipe for real-time extraction of hand, face, and pose landmarks from video streams. These anatomical features are then processed by a hybrid deep learning model integrating Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs), specifically Bidirectional Long Short-Term Memory (BiLSTM) layers. The CNN component captures spatial features, such as intricate hand shapes and body movements, within individual frames. Concurrently, BiLSTMs model long-term temporal dependencies and motion trajectories across consecutive frames. This integrated CNN-BiLSTM architecture is critical for generating a comprehensive spatiotemporal representation, enabling accurate differentiation of complex signs where meaning relies on both static gestures and dynamic transitions, thus preventing misclassification that CNN-only or RNN-only models would incur. Rigorously evaluated on the author-created JUST-SL dataset and the publicly available KArSL dataset, the system achieved 96% overall accuracy for JUST-SL and an impressive 99% for KArSL. These results demonstrate the system’s superior accuracy compared to previous research, particularly for recognizing full Arabic words, thereby significantly enhancing communication accessibility for the deaf and hearing-impaired community. Full article
(This article belongs to the Section Computer Vision and Pattern Recognition)

15 pages, 1071 KB  
Review
Early Warning Signs, Effects, Risk Factors, and Diagnostic Indicators of Toxoplasmosis in Pregnant Women in Africa: A Scoping Review
by Cherotich Jesca Tangus, Ndichu Maingi, James Chege Nganga, Davis Karanja Njuguna, Kariuki Njaanake, Bruno Enagnon Lokonon, Gloria Ivy Mensah, Kennedy Kwasi Addo, Andrée Prisca Ndjoug Ndour and Bassirou Bonfoh
Trop. Med. Infect. Dis. 2026, 11(4), 104; https://doi.org/10.3390/tropicalmed11040104 - 17 Apr 2026
Viewed by 216
Abstract
Toxoplasmosis is a widely distributed zoonosis caused by the protozoan parasite Toxoplasma gondii. Infection during pregnancy is a major public health concern due to its potential impact on both maternal health and fetal development. Early detection of maternal infection is critical to prevent adverse outcomes; however, maternal signs are often subtle, non-specific or absent, complicating timely diagnosis. This scoping review aimed to map and synthesise existing evidence on early maternal signs, pregnancy and foetal outcomes, frequently assessed risk factors, and diagnostic approaches of toxoplasmosis in expectant mothers in Africa. The review was done in accordance with the PRISMA-ScR guidelines. A literature search of PubMed, Scopus, ResearchGate, and Google Scholar was performed to identify studies published between 2000 and 2025. Retrieved records were managed using Zotero (version 8.0.4) for deduplication and screening. Only English-language studies conducted in Africa and reporting relevant maternal or clinical data were included. A total of 28 cross-sectional studies were included. Lymphadenopathy (25.0%) was the most frequently reported maternal early sign, followed by flu-like illness, asymptomatic infection, low-grade or mild fever, and fatigue or malaise (each 10.7%). Congenital anomalies (50.0%) and miscarriage or spontaneous abortion (42.9%) were the most commonly reported foetal and pregnancy outcomes. Frequently reported risk factors were exposure to cat faeces (57.1%) and ingestion of undercooked or raw meat (42.9%). Diagnostic approaches were commonly enzyme-based immunoassays (78.6%), with limited use of RDTs and molecular methods. These findings suggest the need for improved early detection and prevention strategies in high-risk, low-resource African settings. Enhancing routine screening, health education, and access to appropriate diagnostics are considered. 
Future studies should consider adopting standardised reporting and integrating sensitive, affordable, rapid diagnostic approaches to enhance early detection and reduce the burden of congenital toxoplasmosis. Full article

16 pages, 450 KB  
Article
The Effects of Computer-Assisted Writing on Written Language Production in Students with Specific Learning Difficulties: Implications for Sustainable Digital Education
by Georgios Polydoros, Ilias Vasileiou, Zoe Krokou and Alexandros-Stamatios Antoniou
Computers 2026, 15(4), 251; https://doi.org/10.3390/computers15040251 - 17 Apr 2026
Viewed by 320
Abstract
This study investigated the effects of computer-assisted writing on the written language production of secondary school students with Specific Learning Difficulties (SLD), particularly dyslexia. Writing is a complex cognitive process requiring the coordination of spelling, lexical retrieval, syntactic organization, transcription, and revision, areas in which students with SLD often experience persistent difficulties. The study compared handwritten and computer-based texts produced by 40 students with SLD and 20 students without learning difficulties using a counterbalanced design, with an interval of approximately two weeks between the two writing sessions. In the handwriting condition, students used printed reference materials, whereas in the computer-based condition they had access to general-purpose digital tools, including spell-checkers, electronic dictionaries, online resources, and word-processing software. Written texts were evaluated using the Spelling Accuracy Index and holistic scores assigned by independent raters. Data were analyzed using descriptive statistics and non-parametric tests (Mann–Whitney U and Wilcoxon signed-rank tests). The findings revealed statistically significant improvements in favor of computer-based writing for both groups, with particularly strong gains among students with SLD. Computer-written texts demonstrated higher spelling accuracy and received higher evaluation scores, indicating improved performance in the assessed writing outcomes. The findings suggest that computer-assisted writing may support written language production in secondary school students with SLD, particularly in relation to spelling accuracy and overall text evaluation, and may offer a useful avenue for more inclusive writing instruction. Full article
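The Wilcoxon signed-rank test this abstract applies to paired handwritten vs. computer-based scores can be sketched in a few lines. The scores below are hypothetical, not the study's data; the function implements the standard statistic W = min(W+, W−) with zero differences dropped and tied absolute differences given averaged ranks.

```python
def wilcoxon_signed_rank(x, y):
    """Return the Wilcoxon signed-rank statistic W = min(W+, W-)
    for paired samples x and y (zero differences are dropped,
    tied absolute differences get averaged ranks)."""
    diffs = [b - a for a, b in zip(x, y) if b - a != 0]
    abs_sorted = sorted(abs(d) for d in diffs)

    def avg_rank(v):
        # average of the 1-based positions of v among sorted |d| (tie handling)
        idx = [i + 1 for i, a in enumerate(abs_sorted) if a == v]
        return sum(idx) / len(idx)

    w_pos = sum(avg_rank(abs(d)) for d in diffs if d > 0)
    w_neg = sum(avg_rank(abs(d)) for d in diffs if d < 0)
    return min(w_pos, w_neg)

# Hypothetical holistic writing scores for five students (illustrative only)
handwritten = [60, 55, 70, 62, 58]
computer    = [68, 61, 72, 70, 57]
print(wilcoxon_signed_rank(handwritten, computer))  # 1.0
```

A small W relative to the sample size indicates that the paired differences lean consistently in one direction, which is how the study detects gains under the computer-based condition.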

20 pages, 248 KB  
Article
Challenges and Professionalization in Teaching English to Deaf and Hard-of-Hearing Students: A Qualitative Study of Teacher Perspectives
by Kristin Gross, Melanie Kellner and Katharina Urbann
Educ. Sci. 2026, 16(4), 635; https://doi.org/10.3390/educsci16040635 - 16 Apr 2026
Viewed by 202
Abstract
This qualitative study investigates the challenges teachers face when teaching English as a foreign language (EFL) to deaf (in this article, deaf (lower case) refers to the audiological condition of hearing loss, whereas Deaf (capitalized) is used to denote individuals who identify as members of the Deaf community and share a common sign language and distinct cultural values) and hard-of-hearing (DHH) students in German schools for the Deaf. The study is situated within a structural–theoretical professionalization framework, which focuses on the relationship between institutional conditions, teacher education structures, and professional action. Semi-structured interviews were conducted with 16 teachers of DHH students and the data were examined using qualitative content analysis. The findings reveal five central areas of challenge: (1) heterogeneity of the student body; (2) limited time (for preparing and adapting materials); (3) restricted subject-matter and sign-language competence, including missing links between EFL didactics and Deaf education in teacher training; (4) uncertainties surrounding the language design of EFL instruction, particularly the role of American Sign Language (ASL), German Sign Language (DGS), and written English; and (5) the lack of consistent, accessible exam formats and standards. Teachers report substantial insecurity due to the absence of coherent concepts, policy frameworks, and specialized training pathways, which fosters divergent classroom practices and tensions within teaching staff. The results highlight an urgent need for systematic integration of Deaf education, sign language training, and EFL pedagogy in teacher education, as well as for evidence-based guidelines on language classroom practice and assessment for DHH learners. Full article
9 pages, 1166 KB  
Proceeding Paper
Development of Transactional Filipino Sign Language Recognition System Using MediaPipe and Gated Recurrent Units
by Angela Cardano, Franz Railey Columna and Jocelyn Villaverde
Eng. Proc. 2026, 134(1), 47; https://doi.org/10.3390/engproc2026134047 - 14 Apr 2026
Viewed by 218
Abstract
Persistent communication barriers for the deaf and hard-of-hearing community in the Philippines are addressed in this study by developing a Filipino Sign Language Recognition (SLR) system. The system focuses on transactional signs commonly used in commercial environments such as markets and public facilities, thereby filling a gap left by existing SLR models. A vision-based approach was adopted, employing MediaPipe for landmark detection and Gated Recurrent Units for translating signs into text. To train the model, a custom dataset comprising 1065 video samples of 26 transactional signs was created, accounting for subtle variations in individual signing styles. The complete system was implemented on a Raspberry Pi 5 equipped with a webcam and touchscreen display. When evaluated on unseen data, the system achieved a recognition accuracy of 87%, demonstrating its potential for real-world applications in supporting commercial interactions for deaf and hard-of-hearing individuals. Full article

19 pages, 356 KB  
Article
Screening for Superficial Oral Mucosal Lesions in Sjögren’s Disease Using Natural Language Processing (NLP) Approaches
by Jose Ramon Herrera, Balaji Kolasani, Sandeepkumar Gaddam, Aishwarya Kunam, Devon Roese, George J. Eckert, Grace Gomez Felix Gomez and Thankam P. Thyvalikakath
Oral 2026, 6(2), 44; https://doi.org/10.3390/oral6020044 - 14 Apr 2026
Viewed by 358
Abstract
Background/Objectives: Superficial oral mucosal (SOM) lesions are prevalent among patients with Sjögren’s disease (SjD) due to mucosal dryness. Given the limited evidence on screening and referral for SOMs, and the presence of relevant information only in dental clinical notes, a natural language processing (NLP) pipeline was developed to screen for SOMs among SjD patients. This retrospective study analyzed dental clinical notes from 180 linked electronic dental and health records, including both with and without a diagnosis of SjD. Materials and Methods: An annotation schema with four classes (SOMs, signs and symptoms of dry mouth, treatment for xerostomia, referral to specialists) was inductively created using the Extensible Human Oracle Suite of Tools (eHOST) to manually annotate clinical notes. Relevant keyterms were retrieved using a rule-based approach with Python’s Natural Language Toolkit (NLTK). SjD and control groups were compared using Fisher’s Exact tests. Four annotators reviewed ninety-three records. Results: SjD patients (mean age 54.8 ± 11.7 years) had fewer total visits across 15 years but had more dental visits per year (10.2 ± 13.3) than controls. SjD patients were more likely to have oral candidiasis (p = 0.041), exhibit signs and symptoms of dry mouth (p = 0.004), receive treatments for xerostomia (p < 0.001), be treated with cholinergic agonists (p = 0.005), and be referred to a specialist (p = 0.046), but findings were not significant for all SOMs. Additionally, SjD patients had a higher proportion of sialadenitis (p = 0.045), rheumatoid arthritis (p = 0.001), systemic lupus erythematosus (p < 0.001), myalgia/myositis/fibromyalgia (p = 0.010), and anxiety/nervousness (p = 0.004). Conclusions: These findings encourage the feasibility of using text mining from dental clinical notes for screening and management of oral conditions. Full article
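The rule-based keyterm retrieval this abstract describes can be sketched as a simple class-by-keyterm screen. The study used Python's NLTK and the eHOST annotation schema; the sketch below substitutes plain regular expressions for brevity, and both the class names and keyterm lists are illustrative, not the study's actual lexicon.

```python
import re

# Illustrative keyterms per annotation class (hypothetical, not the study's)
KEYTERMS = {
    "dry_mouth_signs": [r"dry mouth", r"xerostomia", r"decreased saliva"],
    "xerostomia_treatment": [r"saliva substitute", r"pilocarpine"],
    "referral": [r"referred to", r"referral"],
}

def screen_note(note):
    """Return the set of schema classes whose keyterms appear in a note."""
    text = note.lower()
    return {
        label
        for label, patterns in KEYTERMS.items()
        if any(re.search(p, text) for p in patterns)
    }

note = "Patient reports dry mouth; started pilocarpine and referred to oral medicine."
print(sorted(screen_note(note)))
# ['dry_mouth_signs', 'referral', 'xerostomia_treatment']
```

Aggregating such per-note flags across patient groups is what enables the Fisher's Exact comparisons reported in the abstract.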

19 pages, 9603 KB  
Article
Understanding Modality-Specific Vulnerabilities in Vision–Language Models Under Adversarial Attacks
by Maisha Binte Rashid and Pablo Rivas
AI 2026, 7(4), 135; https://doi.org/10.3390/ai7040135 - 9 Apr 2026
Viewed by 486
Abstract
Vision–language models (VLMs), such as Contrastive Language–Image Pretraining (CLIP), are increasingly deployed in real-world applications, including content moderation, misinformation detection, and fraud analysis, making their robustness to adversarial attacks a critical concern. While adversarial robustness has been widely studied in unimodal models, modality-specific vulnerabilities in multimodal models remain underexplored. In this work, we analyze CLIP by applying gradient-based adversarial attacks to its vision and language modalities, both independently and jointly, and evaluating performance on two multimodal classification benchmarks: the Facebook Hateful Memes dataset and a large-scale Suspicious Car Parts dataset. Using Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD) attacks along with multiple adversarial retraining strategies, we show that adversarial perturbations on the image modality consistently cause the most severe and unstable performance degradation. These results demonstrate that the vision modality is the primary vulnerability in CLIP, highlighting the need for modality-specific defense strategies that focus more on the weaker modality in multimodal systems. Full article
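The FGSM attack this abstract applies to CLIP can be illustrated on a toy model where the gradient is available in closed form; attacking CLIP itself would require the actual network and autograd. The weights, input, and label below are hypothetical, and the loss is a standard logistic loss, not the paper's objective.

```python
import numpy as np

def fgsm(x, w, y, eps):
    """One FGSM step: perturb x by eps in the direction of the sign of
    the gradient of the logistic loss L = log(1 + exp(-y * w.x))."""
    margin = y * w @ x
    grad = -y * w / (1.0 + np.exp(margin))  # analytic dL/dx
    return x + eps * np.sign(grad)

# Hypothetical linear model and input (illustrative only)
w = np.array([0.5, -1.0, 2.0])
x = np.array([1.0, 2.0, -0.5])
y = 1.0  # true label in {-1, +1}

x_adv = fgsm(x, w, y, eps=0.1)
print(np.abs(x_adv - x))  # every coordinate shifted by exactly eps
```

Because FGSM moves each input coordinate by a fixed step along the loss gradient's sign, the perturbation stays inside an eps-ball while maximally increasing the loss to first order; PGD simply iterates this step with projection.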
(This article belongs to the Section AI Systems: Theory and Applications)

24 pages, 4042 KB  
Article
Memory Cueing and Augmented Sensory Feedback in Virtual Reality as an Assistive Technology for Enhancing Hand Motor Performance
by Zachary Marvin, Sophie Dewil, Yu Shi, Noam Y. Harel and Raviraj Nataraj
Technologies 2026, 14(4), 217; https://doi.org/10.3390/technologies14040217 - 8 Apr 2026
Viewed by 414
Abstract
Neurological injuries and disorders affecting hand motor control can severely impair the ability to perform activities of daily living and substantially reduce quality of life. Technologies such as virtual reality (VR) are increasingly used to address fundamental challenges in therapy, including motivation and engagement; further, programmable features of digital interfaces offer additional opportunities to personalize and optimize motor training. In this proof-of-concept study, we developed and evaluated a novel VR-based training framework to support improved dexterity and hand function using physiological (sensory-driven) and cognitive (memory) cues designed to promote greater task-relevant neural engagement. The proposed approach leverages the integration of augmented sensory feedback (ASF) with memory-anchored cues for motor learning of target hand gestures. Using a within-subjects design, thirteen neurotypical adults completed four training conditions: (1) control (baseline gesture-matching in VR), (2) visual ASF (enhanced visualization and feedback of gesture accuracy), (3) memory-anchored cues (associating gestures with semantically meaningful entities, loosely analogous to American Sign Language), and (4) hybrid multimodal (visual ASF + memory-anchored cues). Training with the hybrid condition produced the fastest skill acquisition (9.3 trials to reach an 80% accuracy threshold) and the steepest initial learning slope (1.86 ± 0.12%/trial), with all conditions differing significantly in initial slope (all p < 0.002). Post-training assessment showed that the hybrid condition achieved the highest gesture accuracy (95.2%), greatest normalized post-training accuracy gain (14.3% above baseline), fastest execution time to target gesture (1.14 s), and lowest variability in gestural kinematics (SD = 3.9%). 
Both ASF and memory-anchored cue conditions each also independently outperformed the control condition on gesture accuracy (both p ≤ 0.002), with omnibus ANOVAs indicating significant condition effects across metrics. Together, these findings suggest that pairing ASF cues with memory-based cognitive scaffolding can yield additive benefits for motor skill acquisition and stability. Pending validation in clinical populations, such approaches may inform the design of VR-based motor training frameworks for rehabilitation. Full article

30 pages, 1286 KB  
Article
Large Language Model Recommendations for Empiric Antibiotics Versus Clinician Prescribing: A Non-Interventional Paired Retrospective Antimicrobial Stewardship Analysis
by Ninel Iacobus Antonie, Vlad Alexandru Ionescu, Gina Gheorghe, Loredana-Crista Tiucă and Camelia Cristina Diaconu
Antibiotics 2026, 15(4), 368; https://doi.org/10.3390/antibiotics15040368 - 2 Apr 2026
Viewed by 468
Abstract
Background/Objectives: Antimicrobial resistance (AMR) remains a major global health threat, strengthening the case for antimicrobial stewardship strategies that limit unnecessary broad-spectrum empiric therapy while preserving timely escalation when clinically warranted. Before any clinical deployment of large language model (LLM)-based antibiotic decision support can be considered, structured offline evaluation is needed to assess whether model outputs align with auditable stewardship constraints under real-world admission contexts. We therefore evaluated whether post hoc LLM-generated empiric antibiotic recommendations showed greater concordance with a pre-specified stewardship benchmarking framework than clinician-initiated regimens in a retrospective shadow-mode setting. Methods: Single-center retrospective paired evaluation at the Clinical Emergency Hospital of Bucharest (Internal Medicine, 2020–2024). The unit of analysis was the admission (N = 493), with paired 24 h empiric regimens (clinician-prescribed vs. post hoc LLM-recommended via the OpenAI API; not visible to clinicians; no influence on care). Local laboratory-derived epidemiology was precomputed from microbiology exports and provided as structured prompt context to approximate information parity with clinicians’ implicit local-ecology knowledge. Primary (prespecified) endpoint: any contextual guardrail violation (unjustified carbapenem/antipseudomonal/anti-MRSA use under prespecified structured severity/MDR-risk rules), exact McNemar test. Key secondary (prespecified) endpoint: Δ contextual guardrail penalty (LLM − Clin), sign test and Wilcoxon signed-rank (ties reported). Ethics committee approval was obtained. Results: Guardrail violations occurred in 17.0% of clinician regimens vs. 4.9% of LLM regimens (paired RD −12.2%; matched OR 0.216, 95% CI 0.127–0.367; McNemar exact p = 1.60 × 10⁻¹⁰). Δ penalty had a median of 0 with 398/493 ties; among non-ties, improvements (Δ < 0) exceeded adverse shifts (79 vs. 16; sign-test p = 3.47 × 10⁻¹¹). Conclusions: In this offline, non-interventional paired evaluation, LLM-generated empiric regimens showed greater concordance with a pre-specified stewardship benchmarking framework than clinician empiric regimens for the same admissions. These findings should not be interpreted as evidence of clinical superiority, patient safety, or causal effectiveness, but rather as process-level benchmarking within a rule-based stewardship construct. As such, reproducible guardrail-based benchmarking may serve as an early pre-implementation step to identify alignment and potential failure modes before prospective, safety-governed evaluation. Full article
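The exact McNemar test used for the primary endpoint reduces to a two-sided binomial test on the discordant pair counts. A stdlib-only sketch; the discordant counts in the usage line are hypothetical, since the abstract reports only the marginal violation rates:

```python
from math import comb

def mcnemar_exact(b: int, c: int) -> float:
    """Exact two-sided McNemar p-value from the two discordant-pair counts:
    b = pairs where only the clinician regimen violated, c = pairs where
    only the LLM regimen violated. Under H0 the discordant pairs split
    50/50, so the p-value is twice the smaller binomial tail (capped at 1).
    """
    n = b + c
    k = min(b, c)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Hypothetical discordant counts (not reported in the abstract)
print(mcnemar_exact(70, 10))
```

Concordant pairs (both violate, or neither does) carry no information under this test, which is why only the discordant counts enter the calculation.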
(This article belongs to the Section Antibiotics Use and Antimicrobial Stewardship)

42 pages, 2464 KB  
Article
Energy-Aware Multilingual Evaluation of Large Language Models
by I. de Zarzà, Mauro Liz, J. de Curtò and Carlos T. Calafate
Electronics 2026, 15(7), 1395; https://doi.org/10.3390/electronics15071395 - 27 Mar 2026
Viewed by 537
Abstract
The rapid deployment of Large Language Models (LLMs) in multilingual, production-scale systems has made inference-time energy consumption a critical yet systematically under-evaluated dimension of model quality. While accuracy-centric benchmarks dominate current evaluation practice, they fail to capture the energy cost of reasoning, particularly across languages and task complexities where consumption profiles diverge substantially. In this work, we present a comprehensive energy–performance evaluation of five instruction-tuned LLMs, spanning Transformer, Grouped-Query Attention, and State Space Model architectures, across thirteen typologically diverse languages and multiple task difficulty levels under controlled GPU-level energy measurement on NVIDIA H200 hardware. Our analysis encompasses 65 model–language configurations totaling over 5100 individual inference runs, supported by rigorous non-parametric statistical testing (Friedman tests, pairwise Wilcoxon signed-rank with Holm correction, and paired Cohen’s d effect sizes). We report four principal findings. First, energy consumption varies up to threefold across models under identical workloads (χ² = 49.42, p = 4.78 × 10⁻¹⁰, Friedman test), stratifying into three distinct energy regimes driven by architecture and generation dynamics rather than parameter count. Second, energy expenditure and reasoning performance are only weakly coupled, as confirmed by Spearman rank correlation analysis (r_s = 0.109, p = 0.386). Third, task category and difficulty level introduce substantial and model-dependent variation in both energy demand and performance, with cross-lingual performance variance amplifying at higher difficulty levels. Fourth, language choice acts as a measurable deployment parameter: Romance languages on average achieve lower energy consumption than English across multiple models, while model efficiency rankings shift across languages, yielding language-dependent Pareto-optimal frontiers.
We formalize these trade-offs through multi-objective Pareto analysis and introduce a composite AI Energy Score metric that captures reasoning quality per unit of energy. Of the 65 evaluated configurations, only four are Pareto-optimal: three Mistral-7B configurations at the low-energy extreme and one Phi-4-mini-instruct configuration at the high-performance end, while three of the five models are entirely dominated across all language configurations. These findings provide actionable guidelines for energy-aware model selection in multilingual deployments and support the integration of AI Energy Scores as a standard complementary criterion in LLM evaluation frameworks. Full article
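A Pareto-optimal frontier over (energy, score) pairs, as used in the multi-objective analysis above, keeps exactly the configurations that no other configuration beats on both axes. A minimal sketch with invented sample points; none of the values are from the paper:

```python
def pareto_front(points):
    """Return the points not dominated by any other point.
    A point (e_j, s_j) dominates (e_i, s_i) if it uses no more
    energy AND scores no less, and is strictly better on at least
    one of the two axes."""
    front = []
    for i, (e_i, s_i) in enumerate(points):
        dominated = any(
            e_j <= e_i and s_j >= s_i and (e_j < e_i or s_j > s_i)
            for j, (e_j, s_j) in enumerate(points)
            if j != i
        )
        if not dominated:
            front.append((e_i, s_i))
    return front

# Invented (energy in kJ, accuracy score) configurations
configs = [(1.0, 0.5), (2.0, 0.9), (3.0, 0.8), (1.5, 0.4)]
print(pareto_front(configs))
```

Here (3.0, 0.8) is dominated by (2.0, 0.9) and (1.5, 0.4) by (1.0, 0.5), leaving a two-point frontier; applying the same rule per language is what yields the language-dependent frontiers described above.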
(This article belongs to the Special Issue Data-Related Challenges in Machine Learning: Theory and Application)