Search Results (530)

Search Parameters:
Keywords = signed language

27 pages, 836 KiB  
Article
Early Language Access and STEAM Education: Keys to Optimal Outcomes for Deaf and Hard of Hearing Students
by Marie Coppola and Kristin Walker
Educ. Sci. 2025, 15(7), 915; https://doi.org/10.3390/educsci15070915 - 17 Jul 2025
Abstract
This paper offers an overview of a large study of language and cognitive development in deaf and hard of hearing children. Specifically, we investigated how acquiring a signed or spoken language (language modality) and when a child’s access to language begins (i.e., at birth or later in development) influence cognitive development. We conducted in-person behavioral assessments with 404 children 3–10 years old (280 deaf and hard of hearing; 124 typically hearing). The tasks measured a range of abilities along a continuum of how strongly they depend on language input, such as general vocabulary and number words (strongly dependent) vs. skills such as tracking sets of two to three objects and standardized ‘nonverbal’ picture-similarity tasks (relatively independent of language). Overall, the timing of children’s access to language predicted more variability in their performance than language modality. These findings help refine our theories about how language influences development and suggest how a STEAM pedagogical approach may ameliorate the impacts of later access to language. These results underscore children’s need for language early in development. That is, deaf and hard of hearing children must receive fully accessible language input as early as possible through sign language, accompanied by hearing technology aimed at improving access to spoken language, if desired. Full article
(This article belongs to the Special Issue Full STEAM Ahead! in Deaf Education)

19 pages, 709 KiB  
Article
Fusion of Multimodal Spatio-Temporal Features and 3D Deformable Convolution Based on Sign Language Recognition in Sensor Networks
by Qian Zhou, Hui Li, Weizhi Meng, Hua Dai, Tianyu Zhou and Guineng Zheng
Sensors 2025, 25(14), 4378; https://doi.org/10.3390/s25144378 - 13 Jul 2025
Viewed by 127
Abstract
Sign language is a complex and dynamic visual language that requires the coordinated movement of various body parts, such as the hands, arms, and limbs, making it an ideal application domain for sensor networks to capture and interpret human gestures accurately. To address the intricate task of precise and efficient sign language recognition (SLR) from raw videos, this study introduces a novel deep learning approach: a multimodal framework for SLR. Specifically, feature extraction models are built on two modalities: skeleton and RGB images. We first propose a Multi-Stream Spatio-Temporal Graph Convolutional Network (MSGCN) that relies on three modules: a decoupling graph convolutional network, a self-emphasizing temporal convolutional network, and a spatio-temporal joint attention module. These modules are combined to capture the spatio-temporal information in multi-stream skeleton features. Second, we propose a 3D ResNet model based on deformable convolution (D-ResNet) to model complex spatial and temporal sequences in the raw images. Finally, a gating-mechanism-based Multi-Stream Fusion Module (MFM) merges the results of the two modalities. Extensive experiments on the public datasets AUTSL and WLASL achieve competitive results compared to state-of-the-art systems. Full article
(This article belongs to the Special Issue Intelligent Sensing and Artificial Intelligence for Image Processing)
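The gating-based fusion of two modality streams described in the abstract can be illustrated in a few lines. This is a minimal sketch with a single-layer sigmoid gate and made-up shapes, not the paper's actual MFM; all names and dimensions here are illustrative assumptions.

```python
import numpy as np

def gated_fusion(skeleton_logits, rgb_logits, gate_weights, gate_bias):
    """Fuse two modality score vectors with a sigmoid gate.

    The gate is computed from the concatenated inputs; the single-layer
    gate and the shapes are illustrative assumptions, not the paper's MFM.
    """
    x = np.concatenate([skeleton_logits, rgb_logits])
    gate = 1.0 / (1.0 + np.exp(-(gate_weights @ x + gate_bias)))  # sigmoid in (0, 1)
    # Per-class convex combination of the two streams.
    return gate * skeleton_logits + (1.0 - gate) * rgb_logits

# Toy example: 4-class logits from each stream.
rng = np.random.default_rng(0)
skel = rng.normal(size=4)
rgb = rng.normal(size=4)
W = rng.normal(size=(4, 8)) * 0.1
b = np.zeros(4)
fused = gated_fusion(skel, rgb, W, b)
```

Because the gate lies in (0, 1), each fused score is a convex combination of the corresponding skeleton and RGB scores, so neither modality can be entirely discarded.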

15 pages, 1449 KiB  
Article
Cochlear Implant in Children with Congenital CMV Infection: Long-Term Results from an Italian Multicentric Study
by Francesca Forli, Silvia Capobianco, Stefano Berrettini, Francesco Lazzerini, Rita Malesci, Anna Rita Fetoni, Serena Salomè, Davide Brotto, Patrizia Trevisi, Leonardo Franz, Elisabetta Genovese, Andrea Ciorba and Silvia Palma
Children 2025, 12(7), 908; https://doi.org/10.3390/children12070908 - 10 Jul 2025
Viewed by 186
Abstract
Background/Objectives: Congenital cytomegalovirus (cCMV) infection is the most common non-genetic cause of sensorineural hearing loss (SNHL) in children. In cases of severe-to-profound SNHL, cochlear implantation (CI) is a widely used intervention, but outcomes remain variable due to possible neurodevelopmental comorbidities. This study aimed to evaluate the long-term auditory and language outcomes in children with cCMV after CI and to explore clinical and radiological predictors of post-CI performance. Methods: Fifty-three children with cCMV and bilateral severe-to-profound SNHL who underwent CI at five tertiary referral centers in Italy were included in the study. Auditory and language outcomes were assessed pre- and post-implantation using the Categories of Auditory Performance II (CAP-II) scale, the Nottingham 3-Level Classification, and the Bates Language Development Scale. Brain MRI abnormalities were classified according to the Alarcón classification. Correlations were explored between outcome scores and symptomatic status at birth, MRI findings, and neurodevelopmental comorbidities. Results: At birth, 40 children (75.5%) were symptomatic and 13 (24.5%) asymptomatic. Neurodevelopmental comorbidities were present in 19 children (35.8%). MRI was normal in 15 (28.3%), mildly abnormal in 26 (49%), and moderately to severely abnormal in 12 (22.6%). Auditory and language outcomes improved significantly post-CI (p < 0.001), though the outcomes varied widely. Twenty-five children (47%) reached CAP level ≥ 6, and thirteen (23%) reached Bates Level 6. Symptomatic status at birth correlated weakly with worse CAP (ρ = −0.291, p = 0.038) and Bates (ρ = −0.310, p = 0.028) scores. Higher Alarcón scores were significantly associated with neurodevelopmental comorbidities, though not directly with post-CI auditory and language outcomes. Finally, the presence of neurodevelopmental disabilities was generally associated with lower results, though without statistical significance. Conclusions: CI provides substantial auditory and language benefit in children with cCMV, even in cases of severe neurodevelopmental comorbidities. MRI and developmental assessments, as well as perinatal history for clinical signs and symptoms, are helpful in guiding expectations and personalizing post-implantation support. Full article
(This article belongs to the Special Issue Treatment Strategies for Hearing Loss in Children)
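The correlations reported above (ρ values) are Spearman rank correlations. A minimal sketch of the no-ties formula ρ = 1 − 6Σd²/(n(n²−1)) follows; the input vectors are invented for illustration and are not the study's data.

```python
def spearman_rho(x, y):
    """Spearman rank correlation via the no-ties formula.

    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)), where d_i is the
    difference between the ranks of x_i and y_i. Exact only when
    neither vector contains ties.
    """
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Hypothetical example: Alarcón MRI scores vs. CAP-II levels for five
# children (invented numbers, chosen to show an inverse association).
alarcon = [0, 1, 2, 3, 5]
cap = [8, 7, 5, 6, 3]
rho = spearman_rho(alarcon, cap)  # strongly negative
```

A perfectly reversed ordering gives ρ = −1, which is a quick sanity check on the implementation.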

17 pages, 5876 KiB  
Article
Optimization of Knitted Strain Sensor Structures for a Real-Time Korean Sign Language Translation Glove System
by Youn-Hee Kim and You-Kyung Oh
Sensors 2025, 25(14), 4270; https://doi.org/10.3390/s25144270 - 9 Jul 2025
Viewed by 172
Abstract
Herein, an integrated system is developed based on knitted strain sensors for real-time translation of sign language into text and audio voices. To investigate how the structural characteristics of the knit affect the electrical performance, the position of the conductive yarn and the presence or absence of elastic yarn are set as experimental variables, and five distinct sensors are manufactured. A comprehensive analysis of the electrical and mechanical performance, including sensitivity, responsiveness, reliability, and repeatability, reveals that the sensor with a plain-plated-knit structure, no elastic yarn included, and the conductive yarn positioned uniformly on the back exhibits the best performance, with a gauge factor (GF) of 88. The sensor exhibited a response time of less than 0.1 s at 50 cycles per minute (cpm), demonstrating that it detects and responds promptly to finger joint bending movements. Moreover, it exhibits stable repeatability and reliability across various angles and speeds, confirming its optimization for sign language recognition applications. Based on this design, an integrated textile-based system is developed by incorporating the sensor, interconnections, snap connectors, and a microcontroller unit (MCU) with built-in Bluetooth Low Energy (BLE) technology into the knitted glove. The complete system successfully recognized 12 Korean Sign Language (KSL) gestures in real time and output them as both text and audio through a dedicated application, achieving a high recognition accuracy of 98.67%. Thus, the present study quantitatively elucidates the structure–performance relationship of a knitted sensor and proposes a wearable system that accounts for real-world usage environments, thereby demonstrating the commercialization potential of the technology. Full article
(This article belongs to the Section Wearables)
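The gauge factor (GF) quoted above is the standard sensitivity metric for resistive strain sensors: relative resistance change divided by applied strain. A small sketch with invented resistance values (not measurements from the study) shows how a GF of 88 arises.

```python
def gauge_factor(r0, r, strain):
    """Gauge factor GF = (ΔR / R0) / ε for a resistive strain sensor.

    r0: unstrained resistance, r: strained resistance, strain: ε as a fraction.
    """
    return ((r - r0) / r0) / strain

# Illustrative numbers only: a sensor whose resistance rises from
# 100 Ω to 188 Ω at 1% strain has GF = 88, matching the order of
# sensitivity reported for the plain-plated-knit sensor.
gf = gauge_factor(100.0, 188.0, 0.01)
```

Higher GF means a larger electrical signal per unit of finger-joint bending, which is what makes the sensor usable for gesture detection.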

14 pages, 701 KiB  
Article
Early Access to Sign Language Boosts the Development of Serial Working Memory in Deaf and Hard-of-Hearing Children
by Brennan P. Terhune-Cotter and Matthew W. G. Dye
Behav. Sci. 2025, 15(7), 919; https://doi.org/10.3390/bs15070919 - 7 Jul 2025
Viewed by 237
Abstract
Deaf and hard-of-hearing (DHH) children are often reported to show deficits on working memory (WM) tasks. These deficits are often characterized as contributing to their struggles to acquire spoken language. Here we report a longitudinal study of a large (N = 103) sample of DHH children who acquired American Sign Language (ASL) as their first language. Using an n-back working memory task, we show significant growth in WM performance across the 7–13-year-old age range. Furthermore, we show that children with early access to ASL from their DHH parents demonstrate faster WM growth and that this group difference is mediated by ASL receptive skills. The data suggest the important role of early access to perceivable natural language in promoting typical WM growth during the middle school years. We conclude that the acquisition of a natural visual–gestural language is sufficient to support the development of WM in DHH children. Further research is required to determine how the timing and quality of ASL exposure may play a role, or whether the effects are driven by acquisition-related corollaries, such as parent–child interactions and maternal stress. Full article
(This article belongs to the Special Issue Language and Cognitive Development in Deaf Children)
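The n-back task mentioned above asks participants to flag any stimulus that matches the one presented n steps earlier. A short scoring sketch follows; the toy stimuli and helper names are my own illustration, not the study's materials or scoring procedure.

```python
def nback_targets(stimuli, n):
    """Indices in an n-back stream where the item matches the one n steps back."""
    return [i for i in range(n, len(stimuli)) if stimuli[i] == stimuli[i - n]]

def nback_accuracy(stimuli, n, responses):
    """Fraction of scoreable positions classified correctly.

    `responses` holds the indices the participant marked as targets;
    a position is correct when marked-status equals target-status.
    """
    targets = set(nback_targets(stimuli, n))
    marked = set(responses)
    correct = sum(1 for i in range(n, len(stimuli))
                  if (i in targets) == (i in marked))
    return correct / (len(stimuli) - n)

# Toy 2-back stream (letters stand in for whatever stimuli were used).
stream = ["A", "B", "A", "C", "A", "C"]
acc = nback_accuracy(stream, 2, [2, 4])  # participant missed the target at index 5
```

Raising n increases the number of items that must be held and updated in working memory, which is why growth in n-back performance indexes WM development.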

13 pages, 1519 KiB  
Article
ChatGPT Performance Deteriorated in Patients with Comorbidities When Providing Cardiological Therapeutic Consultations
by Wen-Rui Hao, Chun-Chao Chen, Kuan Chen, Long-Chen Li, Chun-Chih Chiu, Tsung-Yeh Yang, Hung-Chang Jong, Hsuan-Chia Yang, Chih-Wei Huang, Ju-Chi Liu and Yu-Chuan (Jack) Li
Healthcare 2025, 13(13), 1598; https://doi.org/10.3390/healthcare13131598 - 3 Jul 2025
Viewed by 258
Abstract
Background: Large language models (LLMs) like ChatGPT are increasingly being explored for medical applications. However, their reliability in providing medication advice for patients with complex clinical situations, particularly those with multiple comorbidities, remains uncertain and under-investigated. This study aimed to systematically evaluate the performance, consistency, and safety of ChatGPT in generating medication recommendations for complex cardiovascular disease (CVD) scenarios. Methods: In this simulation-based study (21 January–1 February 2024), ChatGPT 3.5 and 4.0 were prompted 10 times for each of 25 scenarios, representing five common CVDs paired with five major comorbidities. A panel of five cardiologists independently classified each unique drug recommendation as “high priority” or “low priority”. Key metrics included physician approval rates, the proportion of high-priority recommendations, response consistency (Jaccard similarity index), and error pattern analysis. Statistical comparisons were made using Z-tests, chi-square tests, and Wilcoxon Signed-Rank tests. Results: The overall physician approval rate for GPT-4 (86.90%) was modestly but significantly higher than that for GPT-3.5 (85.06%; p = 0.0476) based on aggregated data. However, a more rigorous paired-scenario analysis of high-priority recommendations revealed no statistically significant difference between the models (p = 0.407), indicating the advantage is not systematic. A chi-square test confirmed significant differences in error patterns (p < 0.001); notably, GPT-4 more frequently recommended contraindicated drugs in high-risk scenarios. Inter-model consistency was low (mean Jaccard index = 0.42), showing the models often provide different advice. Conclusions: While demonstrating high overall physician approval rates, current LLMs exhibit inconsistent performance and pose significant safety risks when providing medication advice for complex CVD cases. Their reliability does not yet meet the standards for autonomous clinical application. Future work must focus on leveraging real-world data for validation and developing domain-specific, fine-tuned models to enhance safety and accuracy. Until then, vigilant professional oversight is indispensable. Full article
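The consistency metric reported above, the Jaccard similarity index, is the size of the intersection of two recommendation sets divided by the size of their union. A minimal sketch follows; the drug lists are invented examples, not data from the study.

```python
def jaccard(a, b):
    """Jaccard similarity |A ∩ B| / |A ∪ B| between two sets."""
    a, b = set(a), set(b)
    # Two empty sets are conventionally treated as identical.
    return len(a & b) / len(a | b) if (a or b) else 1.0

# Hypothetical drug lists from two runs of the same prompt
# (illustrative names only).
run1 = {"aspirin", "atorvastatin", "metoprolol", "lisinopril"}
run2 = {"aspirin", "atorvastatin", "ramipril"}
sim = jaccard(run1, run2)  # 2 shared drugs out of 5 distinct ones
```

A mean index of 0.42 across repeated prompts means that, on average, well under half of the distinct drugs recommended for a scenario appeared in both of any two runs.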

34 pages, 3185 KiB  
Article
A Student-Centric Evaluation Survey to Explore the Impact of LLMs on UML Modeling
by Bilal Al-Ahmad, Anas Alsobeh, Omar Meqdadi and Nazimuddin Shaikh
Information 2025, 16(7), 565; https://doi.org/10.3390/info16070565 - 1 Jul 2025
Viewed by 298
Abstract
Unified Modeling Language (UML) diagrams serve as essential tools for visualizing system structure and behavior in software design. With the emergence of Large Language Models (LLMs) that automate various phases of software development, there is growing interest in leveraging these models for UML diagram generation. This study presents a comprehensive empirical investigation into the effectiveness of GPT-4-turbo in generating four fundamental UML diagram types: Class, Deployment, Use Case, and Sequence diagrams. We developed a novel rule-based prompt-engineering framework that transforms domain scenarios into optimized prompts for LLM processing. The generated diagrams were then synthesized using PlantUML and evaluated through a rigorous survey involving 121 computer science and software engineering students across three U.S. universities. Participants assessed both the completeness and correctness of LLM-assisted and human-created diagrams by examining specific elements within each diagram type. Statistical analyses, including paired t-tests, Wilcoxon signed-rank tests, and effect size calculations, validate the significance of our findings. The results reveal that while LLM-assisted diagrams achieve meaningful levels of completeness and correctness (ranging from 61.1% to 67.7%), they consistently underperform compared to human-created diagrams. The performance gap varies by diagram type, with Sequence diagrams showing the closest alignment to human quality and Use Case diagrams exhibiting the largest discrepancy. This research contributes a validated framework for evaluating LLM-generated UML diagrams and provides empirically-grounded insights into the current capabilities and limitations of LLMs in software modeling education. Full article
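The Wilcoxon signed-rank test used above compares paired scores (here, human vs. LLM ratings on the same scenario) by ranking the absolute differences. A pure-Python sketch of the W statistic follows; the score lists are invented for illustration, not the survey data.

```python
def wilcoxon_w(x, y):
    """Wilcoxon signed-rank statistic W = min(W+, W-) for paired samples.

    Zero differences are dropped; ties in |d| receive average ranks.
    Only the statistic is computed here, not the p-value.
    """
    diffs = [a - b for a, b in zip(x, y) if a != b]
    ranked = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(ranked):
        j = i
        while j + 1 < len(ranked) and abs(diffs[ranked[j + 1]]) == abs(diffs[ranked[i]]):
            j += 1
        avg = (i + j) / 2 + 1  # average of the 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[ranked[k]] = avg
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return min(w_plus, w_minus)

# Hypothetical completeness scores (%) for human vs. LLM-assisted diagrams.
human = [80, 75, 90, 85, 70, 88]
llm   = [67, 62, 66, 61, 64, 65]
w = wilcoxon_w(human, llm)  # every pair favors the human diagram, so W = 0
```

A W of 0 is the most extreme possible value and corresponds to every paired difference pointing the same way; in practice one would use a library routine (e.g. SciPy's `wilcoxon`) to also obtain the p-value.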

13 pages, 554 KiB  
Article
Correlating Patient Symptoms and CT Morphology in AI-Detected Incidental Pulmonary Embolisms
by Selim Abed, Lucas Brandstetter and Klaus Hergan
Diagnostics 2025, 15(13), 1639; https://doi.org/10.3390/diagnostics15131639 - 27 Jun 2025
Viewed by 322
Abstract
Background/Objectives: Incidental pulmonary embolisms (IPEs) are often asymptomatic and are discovered on scans performed for unrelated reasons, so they can go underdiagnosed and undertreated. Artificial intelligence (AI) has emerged as a possible aid to radiologists in identifying IPEs. This study aimed to assess the clinical and radiological significance of IPEs detected by a deep learning AI algorithm and to correlate them with thrombotic burden, CT morphologic signs of right heart strain, and clinical symptoms. Methods: We retrospectively evaluated 13,603 contrast-enhanced thoracic and abdominal CT scans performed over one year at a tertiary care hospital using a CE- and FDA-cleared AI algorithm. Natural language processing (NLP) tools were used to determine whether IPEs were reported by radiologists. We scored confirmed IPEs using the Mastora, Qanadli, Ghanima, and Kirchner scores, and analyzed morphologic indicators of right heart strain and clinical parameters such as symptomatology, administered anticoagulation, and 6-month outcomes. Results: AI identified 41 IPE cases, of which 61% occurred in oncologic patients. Most emboli were segmental, with no signs of right heart strain. Only 10% of patients were symptomatic. Thrombotic burden scores were similar between oncologic and non-oncologic groups. Four deaths occurred—all in oncologic patients. One untreated case experienced a recurrence of pulmonary embolism. Despite frequent detection, many IPEs were clinically silent. Conclusions: AI can effectively detect IPEs that are missed on initial review. However, most AI-detected IPEs are clinically silent. Integrating AI findings with morphologic and clinical criteria is crucial to avoid overtreatment and to guide appropriate management. Full article
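Of the thrombotic burden scores named above, the Qanadli CT obstruction index is the simplest to state: each clot contributes (number of obstructed segmental branches) × (1 for partial, 2 for complete occlusion), and the total is expressed as a percentage of the maximum score of 40. The sketch below is a simplified illustration of that tally, with invented inputs, not the study's scoring pipeline.

```python
def qanadli_index(clots):
    """Qanadli CT obstruction index as a percentage.

    `clots` is a list of (n, d) pairs: n = number of segmental-artery
    branches obstructed by the clot, d = 1 for partial or 2 for complete
    occlusion. The maximum possible score is 40.
    """
    score = sum(n * d for n, d in clots)
    return 100.0 * score / 40.0

# Hypothetical burden: one partially occluded segmental artery (1 × 1)
# and one clot completely obstructing three segmental branches (3 × 2).
idx = qanadli_index([(1, 1), (3, 2)])  # (1 + 6) / 40 → 17.5%
```

Mostly segmental emboli, as reported in the Results, correspond to small per-clot contributions and hence low index values.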

17 pages, 1529 KiB  
Systematic Review
Iatrogenic Pneumopericardium After Pericardiocentesis: A Systematic Review and Case Report
by Andreas Merz, Hong Ran, Cheng-Ying Chiu, Henryk Dreger, Daniel Armando Morris and Matthias Schneider-Reigbert
J. Cardiovasc. Dev. Dis. 2025, 12(7), 246; https://doi.org/10.3390/jcdd12070246 - 26 Jun 2025
Viewed by 284
Abstract
Background: Pneumopericardium is the presence of air within the pericardial cavity. We report a case of iatrogenic pneumopericardium following pericardiocentesis in a patient with primary cardiac angiosarcoma. Additionally, we provide a systematic review of pericardiocentesis-associated pneumopericardium to offer a comprehensive overview and evaluate the role of echocardiography in its diagnosis. Methods: The PubMed database was searched from inception until January 2025 to perform a systematic review following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines to evaluate articles on iatrogenic pneumopericardium following pericardiocentesis published in the English language. The Joanna Briggs Institute (JBI) Critical Appraisal Checklist for Case Reports was used to appraise the included case reports. Results: Of the 108 search results obtained, after screening and a backward citation search, 37 articles were selected for inclusion in this review, accounting for a total of 37 patients. According to the JBI Critical Appraisal Checklist for Case Reports, 7 case reports were of high quality and 12 of low quality. The overall quality of evidence from the case reports was moderate, and 51.6% of patients developed hemodynamic compromise or showed signs of cardiac tamponade. The main underlying cause of pneumopericardium was a problem with the catheter drainage system; 64.9% of cases required decompressive therapy. Conclusions: Pneumopericardium can occur as a complication after pericardiocentesis and must therefore be considered in symptomatic patients. While detection by transthoracic echocardiography is difficult and relies on non-validated signs, chest X-ray and computed tomography can provide a definitive diagnosis. Full article
(This article belongs to the Special Issue The Role of Echocardiography in Cardiovascular Diseases)

18 pages, 4982 KiB  
Article
Unsupervised Clustering and Ensemble Learning for Classifying Lip Articulation in Fingerspelling
by Nurzada Amangeldy, Nazerke Gazizova, Marek Milosz, Bekbolat Kurmetbek, Aizhan Nazyrova and Akmaral Kassymova
Sensors 2025, 25(12), 3703; https://doi.org/10.3390/s25123703 - 13 Jun 2025
Viewed by 348
Abstract
This paper presents a new methodology for analyzing lip articulation during fingerspelling, aimed at extracting robust visual patterns that can overcome the inherent ambiguity and variability of lip shape. The proposed approach is based on unsupervised clustering of lip movement trajectories to identify consistent articulatory patterns across different time profiles. The methodology is not limited to a single model; rather, it includes the exploration of varying cluster configurations, an assessment of their robustness, and a detailed analysis of the correspondence between individual alphabet letters and specific clusters. In contrast to direct classification based on raw visual features, this approach pre-tests clustered representations using a model-based assessment of their discriminative potential. This structured approach enhances the interpretability and robustness of the extracted features, highlighting the importance of lip dynamics as an auxiliary modality in multimodal sign language recognition. The obtained results demonstrate that trajectory clustering can serve as a practical method for generating features, providing more accurate and context-sensitive gesture interpretation. Full article
(This article belongs to the Section Intelligent Sensors)

31 pages, 2868 KiB  
Article
Optimized Scheduling for Multi-Drop Vehicle–Drone Collaboration with Delivery Constraints Using Large Language Models and Genetic Algorithms with Symmetry Principles
by Mingyang Geng and Anping Chen
Symmetry 2025, 17(6), 934; https://doi.org/10.3390/sym17060934 - 12 Jun 2025
Viewed by 433
Abstract
With the rapid development of e-commerce and globalization, logistics distribution systems have become integral to modern economies, directly impacting transportation efficiency, resource utilization, and supply chain flexibility. However, solving the Vehicle and Multi-Drone Cooperative Delivery Problem with Delivery Restrictions is challenging due to complex constraints, including limited payloads, short endurance, regional restrictions, and multi-objective optimization. Traditional optimization methods, particularly genetic algorithms, struggle to address these complexities, often relying on static rules or single-objective optimization that fails to balance exploration and exploitation, resulting in local optima and slow convergence. The concept of symmetry plays a crucial role in optimizing the scheduling process, as many logistics problems inherently possess symmetrical properties. By exploiting these symmetries, we can reduce the problem’s complexity and improve solution efficiency. This study proposes a novel and scalable scheduling approach to address the Vehicle and Multi-Drone Cooperative Delivery Problem with Delivery Restrictions, tackling its high complexity, constraint handling, and real-world applicability. Specifically, we propose a logistics scheduling method called Loegised, which integrates large language models with genetic algorithms while incorporating symmetry principles to enhance the optimization process. Loegised includes three innovative modules: a cognitive initialization module to accelerate convergence by generating high-quality initial solutions, a dynamic operator parameter adjustment module to optimize crossover and mutation rates in real-time for better global search, and a local optimum escape mechanism to prevent stagnation and improve solution diversity. The experimental results on benchmark datasets show that Loegised achieves an average delivery time of 14.80, significantly outperforming six state-of-the-art baseline methods, with improvements confirmed by Wilcoxon signed-rank tests (p < 0.001). In large-scale scenarios, Loegised reduces delivery time by over 20% compared to conventional methods, demonstrating strong scalability and practical applicability. These findings validate the effectiveness and real-world potential of symmetry-enhanced, language model-guided optimization for advanced logistics scheduling. Full article

26 pages, 3917 KiB  
Article
Multimodal Existential Negation in Ecuadorian Highland Kichwa
by Simeon Floyd
Languages 2025, 10(6), 138; https://doi.org/10.3390/languages10060138 - 11 Jun 2025
Viewed by 1140
Abstract
Conventionalized or symbolic “emblematic” visual expressions are the types of “gesture” that most closely resemble lexical and grammatical elements seen in spoken languages or in sign languages in the visual modality. The relationship between conventionalization in the visual modality and in morphosyntax is a topic that remains only partially explored, with more research focused on iconic and indexical aspects of visual expression than on symbolic aspects. However, the culture-specific nature of symbolic gestures makes them an important phenomenon for the study of cultural variation at the intersection of modality and linguistic diversity. This study examines the relationship of a specific area of morphosyntax, negation and syntactic polarity, to an element of the visual modality, a practice of visual existential negation used by speakers of Imbabura Kichwa, a variety of Ecuadorian Highland Kichwa, a Quechuan language spoken in the Ecuadorian Andes. A data set of natural speech recordings will illustrate this open-handed rotating gesture that expresses negative existence: “there is none”. This gesture will be analyzed in terms of its form, meaning, and combination with spoken elements in discourse context, finding that in this variety of Kichwa, this practice is associated with a specific verb root meaning “to lack” or “to not exist”. This discussion will be framed in the wider context of the areal distribution of similar types of visual existential negation in other languages of Ecuador, reflecting the diversity of multimodal conventionalization across speech communities. Full article

27 pages, 6771 KiB  
Article
A Deep Neural Network Framework for Dynamic Two-Handed Indian Sign Language Recognition in Hearing and Speech-Impaired Communities
by Vaidhya Govindharajalu Kaliyaperumal and Paavai Anand Gopalan
Sensors 2025, 25(12), 3652; https://doi.org/10.3390/s25123652 - 11 Jun 2025
Viewed by 471
Abstract
Language is the medium through which people communicate effectively, and sign language bridges communication gaps for the hearing- and speech-impaired; recognizing it, however, remains difficult because gestures must be identified from varied, hard-to-distinguish hand and palm configurations. To address this challenge, a novel Enhanced Convolutional Transformer with Adaptive Tuna Swarm Optimization (ECT-ATSO) recognition framework is proposed for two-handed sign language. To improve both model generalization and image quality, images are preprocessed prior to prediction, and the proposed dataset is organized to handle multiple dynamic words. Feature graining is employed to obtain local features, and a ViT transformer architecture then captures global features from the preprocessed images. After concatenation, this yields a feature map that is divided into individual words using an Inverted Residual Feed-Forward Network (IRFFN). An enhanced form of the Tuna Swarm Optimization (TSO) algorithm optimally tunes the Enhanced Convolutional Transformer (ECT) model to handle the problem dimensions and convergence parameters, and a mutation operator is introduced to overcome local optima when updating tuna positions. Performance of the proposed framework is measured through dataset visualization, recognition accuracy, and convergence behavior, demonstrating the best effectiveness compared to alternative state-of-the-art methods. Full article
(This article belongs to the Section Intelligent Sensors)
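The mutation-augmented swarm update described in the abstract can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the spiral-foraging update of TSO is simplified to a decaying pull toward the current best tuna, and `enhanced_tso`, `sphere`, and all parameter values are hypothetical names chosen for the sketch.

```python
import numpy as np

def enhanced_tso(objective, dim, n_tuna=20, iters=100, mutation_rate=0.1, seed=0):
    # Minimal TSO-style loop: each tuna is pulled toward the current best;
    # a mutation operator perturbs random coordinates to escape local optima.
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5.0, 5.0, (n_tuna, dim))
    best = min(pos, key=objective).copy()
    for t in range(iters):
        w = 1.0 - t / iters  # step size decays from 1 toward 0
        pos = pos + w * rng.random((n_tuna, dim)) * (best - pos)
        # Mutation operator: re-perturb a random subset of coordinates so the
        # swarm does not collapse onto a local optimum of the objective.
        mask = rng.random((n_tuna, dim)) < mutation_rate
        pos = pos + mask * rng.normal(0.0, w, (n_tuna, dim))
        cand = min(pos, key=objective)
        if objective(cand) < objective(best):
            best = cand.copy()
    return best

def sphere(x):
    # Standard benchmark objective; global minimum 0 at the origin.
    return float(np.sum(x ** 2))

best = enhanced_tso(sphere, dim=5)
```

In the paper's setting the objective would be the ECT model's validation loss over its hyperparameters rather than a benchmark function.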
16 pages, 2108 KiB  
Article
One Possible Path Towards a More Robust Task of Traffic Sign Classification in Autonomous Vehicles Using Autoencoders
by Ivan Martinović, Tomás de Jesús Mateo Sanguino, Jovana Jovanović, Mihailo Jovanović and Milena Djukanović
Electronics 2025, 14(12), 2382; https://doi.org/10.3390/electronics14122382 - 11 Jun 2025
Viewed by 573
Abstract
The increasing deployment of autonomous vehicles (AVs) has exposed critical vulnerabilities in traffic sign classification systems, particularly against adversarial attacks that can compromise safety. This study proposes a dual-purpose defense framework based on convolutional autoencoders to enhance robustness against two prominent white-box attacks: [...] Read more.
The increasing deployment of autonomous vehicles (AVs) has exposed critical vulnerabilities in traffic sign classification systems, particularly against adversarial attacks that can compromise safety. This study proposes a dual-purpose defense framework based on convolutional autoencoders to enhance robustness against two prominent white-box attacks: Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD). Experiments on the German Traffic Sign Recognition Benchmark (GTSRB) dataset show that, although these attacks can significantly degrade system performance, the proposed models are capable of partially recovering lost accuracy. Notably, the defense demonstrates strong capabilities in both detecting and reconstructing manipulated traffic signs, even under low-perturbation scenarios. Additionally, a feature-based autoencoder is introduced, which—despite a high false positive rate—achieves perfect detection in critical conditions, a tradeoff considered acceptable in safety-critical contexts. These results highlight the potential of autoencoder-based architectures as a foundation for resilient AV perception while underscoring the need for hybrid models integrating vision-language frameworks for real-time, fail-safe operation. Full article
(This article belongs to the Special Issue Autonomous and Connected Vehicles)
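FGSM, the first of the two white-box attacks studied above, perturbs an input by taking a single step of size epsilon in the direction of the sign of the loss gradient with respect to that input. A minimal sketch on a toy linear classifier with a hinge-style loss; the classifier, weights, and values are illustrative assumptions, not the study's models:

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon):
    # Fast Gradient Sign Method: step epsilon along the sign of the loss
    # gradient w.r.t. the input, then clip back to the valid pixel range.
    return np.clip(x + epsilon * np.sign(grad), 0.0, 1.0)

# Toy linear classifier: score = w @ x; true label y in {-1, +1}.
w = np.array([0.8, -0.5, 0.3])
x = np.array([0.6, 0.4, 0.7])  # a "clean" input with features in [0, 1]
y = 1
# Gradient of the hinge-style loss max(0, 1 - y * (w @ x)) w.r.t. x is -y * w
grad = -y * w
x_adv = fgsm_perturb(x, grad, epsilon=0.1)
clean_score, adv_score = w @ x, w @ x_adv
```

PGD, the second attack, is essentially this step iterated several times with a projection back into an epsilon-ball around the clean input after each step; the autoencoder defense then tries to detect such perturbed inputs or reconstruct the clean sign from them.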

26 pages, 8022 KiB  
Article
Toward a Recognition System for Mexican Sign Language: Arm Movement Detection
by Gabriela Hilario-Acuapan, Keny Ordaz-Hernández, Mario Castelán and Ismael Lopez-Juarez
Sensors 2025, 25(12), 3636; https://doi.org/10.3390/s25123636 - 10 Jun 2025
Viewed by 637
Abstract
This paper describes ongoing work on the creation of a recognition system for Mexican Sign Language (LSM). We propose a general sign decomposition divided into three parts: hand configuration (HC), arm movement (AM), and non-hand gestures (NHGs). This paper focuses [...] Read more.
This paper describes ongoing work on the creation of a recognition system for Mexican Sign Language (LSM). We propose a general sign decomposition divided into three parts: hand configuration (HC), arm movement (AM), and non-hand gestures (NHGs). This paper focuses on the AM features and reports the approach developed to analyze visual patterns in arm joint movements (wrists, shoulders, and elbows). For this research, a proprietary dataset, one that does not limit the recognition of arm movements, was developed with active participation from the deaf community and LSM experts. We analyzed two case studies involving three sign subsets. For each sign, the pose was extracted to generate shapes of the joint paths during the arm movements, which were fed to a CNN classifier. YOLOv8 was used for pose estimation and visual pattern classification. The proposed approach, based on pose estimation, shows promising results for constructing CNN models that classify a wide range of signs. Full article
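The intermediate step of turning joint paths into shapes a CNN can classify might look like the sketch below: a hypothetical `joint_paths_to_image` helper rasterizes normalized 2-D joint trajectories (as a pose estimator such as YOLOv8 would emit per frame) into a binary trajectory image. This is an assumption-laden illustration of the general idea, not the authors' pipeline.

```python
import numpy as np

def joint_paths_to_image(keypoints, size=64):
    # keypoints: array (frames, joints, 2) of [0, 1]-normalized coordinates.
    # Draws each joint's path as connected line segments on a binary image,
    # so the arm movement of a sign becomes a 2-D shape for a CNN.
    img = np.zeros((size, size), dtype=np.uint8)
    pts = np.clip((keypoints * (size - 1)).astype(int), 0, size - 1)
    for j in range(pts.shape[1]):          # one path per joint
        for f in range(pts.shape[0] - 1):  # connect consecutive frames
            x0, y0 = pts[f, j]
            x1, y1 = pts[f + 1, j]
            n = max(abs(int(x1) - int(x0)), abs(int(y1) - int(y0)), 1)
            xs = np.linspace(x0, x1, n + 1).round().astype(int)
            ys = np.linspace(y0, y1, n + 1).round().astype(int)
            img[ys, xs] = 255
    return img

# Example: a circular wrist motion over 30 frames traces a ring-shaped path
t = np.linspace(0, 2 * np.pi, 30)
traj = np.stack([0.5 + 0.3 * np.cos(t), 0.5 + 0.3 * np.sin(t)], axis=-1)[:, None, :]
shape_img = joint_paths_to_image(traj)
```

Distinct signs produce visually distinct trajectory shapes, which is what makes an image classifier a natural fit for the AM component.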
