Search Results (14)

Search Parameters:
Keywords = Mexican sign language

27 pages, 3490 KB  
Article
Multimodal Minimal-Angular-Geometry Representation for Real-Time Dynamic Mexican Sign Language Recognition
by Gerardo Garcia-Gil, Gabriela del Carmen López-Armas and Yahir Emmanuel Ramirez-Pulido
Technologies 2026, 14(1), 48; https://doi.org/10.3390/technologies14010048 - 8 Jan 2026
Abstract
Current approaches to dynamic sign language recognition commonly rely on dense landmark representations, which impose high computational cost and hinder real-time deployment on resource-constrained devices. To address this limitation, this work proposes a computationally efficient framework for real-time dynamic Mexican Sign Language (MSL) recognition based on a multimodal minimal angular-geometry representation. Instead of processing complete landmark sets (e.g., MediaPipe Holistic with up to 468 keypoints), the proposed method encodes the relational geometry of the hands, face, and upper body into a compact set of 28 invariant internal angular descriptors. This representation substantially reduces feature dimensionality and computational complexity while preserving linguistically relevant manual and non-manual information required for grammatical and semantic discrimination in MSL. A real-time end-to-end pipeline is developed, comprising multimodal landmark extraction, angular feature computation, and temporal modeling using a Bidirectional Long Short-Term Memory (BiLSTM) network. The system is evaluated on a custom dataset of dynamic MSL gestures acquired under controlled real-time conditions. Experimental results demonstrate that the proposed approach achieves 99% accuracy and 99% macro F1-score, matching state-of-the-art performance while using dramatically fewer features. The compactness, interpretability, and efficiency of the minimal angular descriptor make the proposed system suitable for real-time deployment on low-cost devices, contributing toward more accessible and inclusive sign language recognition technologies. Full article
(This article belongs to the Special Issue Image Analysis and Processing)
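The angular-descriptor idea above (encoding relational geometry as interior joint angles rather than raw landmark sets) can be sketched as follows. The count of 28 descriptors is the paper's; the specific landmark triplets below are hypothetical placeholders, not the authors' actual choices:

```python
import numpy as np

def joint_angle(a, b, c):
    """Interior angle (radians) at point b, formed by segments b->a and b->c.

    Angles are invariant to translation, rotation, and uniform scale,
    which is what makes them attractive as compact descriptors.
    """
    u = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    v = np.asarray(c, dtype=float) - np.asarray(b, dtype=float)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

# Hypothetical (a, b, c) landmark-index triplets; the paper's actual
# 28 hand/face/upper-body descriptors are not specified here.
ANGLE_TRIPLETS = [(0, 5, 8), (0, 9, 12)]

def angle_features(landmarks):
    """Map an (N, 3) landmark array to a small vector of joint angles."""
    return np.array([joint_angle(landmarks[i], landmarks[j], landmarks[k])
                     for i, j, k in ANGLE_TRIPLETS])
```

Per frame, such a vector would replace the full landmark set as input to the temporal (BiLSTM) model.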
26 pages, 10862 KB  
Article
Recurrent Neural Networks for Mexican Sign Language Interpretation in Healthcare Services
by Armando de Jesús Becerril-Carrillo, Héctor Julián Selley-Rojas and Elizabeth Guevara-Martínez
Sensors 2026, 26(1), 27; https://doi.org/10.3390/s26010027 - 19 Dec 2025
Viewed by 412
Abstract
In Mexico, the Deaf community faces persistent communication barriers that restrict their integration and access to essential services, particularly in healthcare. Even though approximately two million individuals use Mexican Sign Language (MSL) as their primary form of communication, technological tools for supporting effective interaction remain limited. While recent research in sign-language recognition has led to important advances for several languages, work focused on MSL, particularly for healthcare scenarios, remains scarce. To address this gap, this study introduces a health-oriented dataset of 150 signs, with 800 synthetic video sequences per word, totaling more than 35 GB of data. This dataset was used to train recurrent neural networks with regularization and data augmentation. The best configuration achieved a maximum precision of 98.36% in isolated sign classification, minimizing false positives, which is an essential requirement in clinical applications. Beyond isolated recognition, the main contribution of this study is its exploratory evaluation of sequential narrative inference in MSL. Using short scripted narratives, the system achieved a global sequential recall of 45.45% under a realistic evaluation protocol that enforces temporal alignment. These results highlight both the potential of recurrent architectures in generalizing from isolated gestures to structured sequences and the substantial challenges posed by continuous signing, co-articulation, and signer-specific variation. While not intended for clinical deployment, the methodology, dataset, and open-source implementation presented here establish a reproducible baseline for future research. This work provides initial evidence, tools, and insights to support the long-term development of accessible technologies for the Deaf community in Mexico. Full article

22 pages, 8469 KB  
Article
Virtual Trainer for Learning Mexican Sign Language Using Video Similarity Analysis
by Felipe de Jesús Rivera-Cervantes, Diana-Margarita Córdova-Esparza, Juan Terven, Julio-Alejandro Romero-González, Jaime-Rodrigo González-Rodríguez, Mauricio-Arturo Ibarra-Corona and Pedro-Alfonso Ramírez-Pedraza
Technologies 2025, 13(12), 540; https://doi.org/10.3390/technologies13120540 - 21 Nov 2025
Viewed by 526
Abstract
Learning Mexican Sign Language (MSL) benefits from interactive systems that provide immediate feedback without requiring specialized sensors. This work presents a virtual training platform that operates with a conventional RGB camera and applies computer vision techniques to guide learners in real time. A dataset of 335 videos was recorded across 12 lessons with professional interpreters and used as the reference material for practice sessions. From each video, 48 keypoints corresponding to hands and facial landmarks were extracted using MediaPipe, normalized, and compared with user trajectories through Dynamic Time Warping (DTW). A sign is accepted when the DTW distance is below a similarity threshold, allowing users to receive quantitative feedback on performance. Additionally, an experimental baseline using video embeddings generated by the Qwen2.5-VL, VideoMAEv2, and VJEPA2 models and classified via Matching Networks was evaluated for scalability. Results show that the DTW-based module provides accurate and interpretable feedback for guided practice with minimal computational cost, while the embedding-based approach serves as an exploratory baseline for larger-scale classification and semi-automatic labeling. A user study with 33 participants evidenced feasibility and perceived usefulness (all category means significantly above neutral; Cronbach’s α=0.81). Overall, the proposed framework offers an accessible, low-cost, and effective solution for inclusive MSL education and represents a promising foundation for future multimodal sign-language learning tools. Full article
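The DTW-based acceptance step described above can be sketched as below; the Euclidean frame distance and the threshold value are illustrative assumptions, not the paper's actual settings:

```python
import numpy as np

def dtw_distance(ref, user):
    """Classic dynamic-programming DTW between two (T, D) keypoint tracks.

    Frames are compared with Euclidean distance; the cumulative cost of
    the optimal warping path is returned, tolerating differences in speed.
    """
    n, m = len(ref), len(user)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(ref[i - 1] - user[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

# Hypothetical acceptance rule: the paper's similarity threshold is not
# stated here, so 50.0 is a placeholder.
def sign_accepted(ref, user, threshold=50.0):
    return dtw_distance(ref, user) <= threshold
```

In practice the distance itself can be surfaced to the learner as the quantitative feedback score.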

26 pages, 8022 KB  
Article
Toward a Recognition System for Mexican Sign Language: Arm Movement Detection
by Gabriela Hilario-Acuapan, Keny Ordaz-Hernández, Mario Castelán and Ismael Lopez-Juarez
Sensors 2025, 25(12), 3636; https://doi.org/10.3390/s25123636 - 10 Jun 2025
Cited by 1 | Viewed by 1575
Abstract
This paper describes ongoing work surrounding the creation of a recognition system for Mexican Sign Language (LSM). We propose a general sign decomposition that is divided into three parts, i.e., hand configuration (HC), arm movement (AM), and non-hand gestures (NHGs). This paper focuses on the AM features and reports the approach created to analyze visual patterns in arm joint movements (wrists, shoulders, and elbows). For this research, a proprietary dataset—one that does not limit the recognition of arm movements—was developed, with active participation from the deaf community and LSM experts. We analyzed two case studies involving three sign subsets. For each sign, the pose was extracted to generate shapes of the joint paths during the arm movements and fed to a CNN classifier. YOLOv8 was used for pose estimation and visual pattern classification purposes. The proposed approach, based on pose estimation, shows promising results for constructing CNN models to classify a wide range of signs. Full article

24 pages, 9841 KB  
Article
Mexican Sign Language Recognition: Dataset Creation and Performance Evaluation Using MediaPipe and Machine Learning Techniques
by Mario Rodriguez, Outmane Oubram, A. Bassam, Noureddine Lakouari and Rasikh Tariq
Electronics 2025, 14(7), 1423; https://doi.org/10.3390/electronics14071423 - 1 Apr 2025
Cited by 4 | Viewed by 2955
Abstract
In Mexico, around 2.4 million people (1.9% of the national population) are deaf, and Mexican Sign Language (MSL) support is essential for people with communication disabilities. Research and technological prototypes of sign language recognition have been developed to support public communication systems without human interpreters. However, most of these systems and research are closely related to American Sign Language (ASL) or other sign languages, for which the recognition of letters and words has reached the highest levels of accuracy. The objective of the current study is to develop and evaluate a sign language recognition system tailored to MSL. The research aims to achieve accurate recognition of dactylology and the first ten numerical digits (1–10) in MSL. A database of MSL dactylology and numeration was created with a camera, covering the 29 different characters of MSL's dactylology and the first ten digits. Then, MediaPipe was applied for feature extraction on both hands (21 points per hand). Once the features were extracted, machine learning and deep learning techniques were applied to recognize MSL signs. The recognition of MSL patterns in the context of static (29 classes) and continuous (10 classes) signs yielded accuracies of 92% with a Support Vector Machine (SVM) and 86% with a Gated Recurrent Unit (GRU), respectively. The trained algorithms assume full scenarios with both hands visible; therefore, signing must occur under these conditions. To improve accuracy, increasing the number of samples is suggested. Full article
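A minimal sketch of the landmark-to-feature step (21 MediaPipe points per hand, as in the paper); the wrist-centering and scale normalization shown are one common preprocessing choice, not necessarily the authors' exact pipeline, and the resulting vector would then feed the SVM or GRU classifier:

```python
import numpy as np

def hand_feature_vector(left, right):
    """Flatten two hands' MediaPipe landmarks (21 points, x/y/z each)
    into one feature vector.

    Each hand is translated so the wrist (landmark 0) sits at the origin
    and scaled by its largest coordinate magnitude, so the classifier
    sees position-independent coordinates.
    """
    feats = []
    for hand in (left, right):
        hand = np.asarray(hand, dtype=float).reshape(21, 3)
        hand = hand - hand[0]               # translate wrist to origin
        scale = float(np.abs(hand).max()) or 1.0
        feats.append((hand / scale).ravel())
    return np.concatenate(feats)            # 2 * 21 * 3 = 126 features
```

Any off-the-shelf classifier (e.g. an SVM) can then be fit on stacks of these 126-dimensional vectors.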

19 pages, 12541 KB  
Article
Advanced Hybrid Neural Networks for Accurate Recognition of the Extended Alphabet and Dynamic Signs in Mexican Sign Language (MSL)
by Arturo Lara-Cázares, Marco A. Moreno-Armendáriz and Hiram Calvo
Appl. Sci. 2024, 14(22), 10186; https://doi.org/10.3390/app142210186 - 6 Nov 2024
Cited by 1 | Viewed by 1305
Abstract
The Mexican deaf community primarily uses Mexican Sign Language (MSL) for communication, but significant barriers arise when interacting with hearing individuals unfamiliar with the language. Learning MSL requires a substantial commitment of at least 18 months, which is often impractical for many hearing people. To address this gap, we present an MSL-to-Spanish translation system that facilitates communication through a spelling-based approach, enabling deaf individuals to convey any idea while simplifying the AI’s task by limiting the number of signs to be recognized. Unlike previous systems that focus exclusively on static signs for individual letters, our solution incorporates dynamic signs, such as “k”, “rr”, and “ll”, to better capture the nuances of MSL and enhance expressiveness. The proposed Hybrid Neural Network-based algorithm integrates these dynamic elements effectively, achieving an F1 score of 90.91%, precision of 91.25%, recall of 91.05%, and accuracy of 91.09% in the extended alphabet classification. These results demonstrate the system’s potential to improve accessibility and inclusivity for the Mexican deaf community. Full article

17 pages, 13756 KB  
Communication
Sign Language Interpreting System Using Recursive Neural Networks
by Erick A. Borges-Galindo, Nayely Morales-Ramírez, Mario González-Lee, José R. García-Martínez, Mariko Nakano-Miyatake and Hector Perez-Meana
Appl. Sci. 2024, 14(18), 8560; https://doi.org/10.3390/app14188560 - 23 Sep 2024
Cited by 4 | Viewed by 2892
Abstract
According to the World Health Organization (WHO), 5% of people around the world have hearing disabilities, which limits their capacity to communicate with others. Recently, scientists have proposed systems based on deep learning techniques to create sign language-to-text translators, expecting these to help deaf people communicate; however, the performance of such systems is still too low for practical scenarios. Furthermore, the proposed systems are language-oriented, which leads to particular problems related to the signs of each language. To address this problem, in this paper, we propose a system based on a Recursive Neural Network (RNN) focused on Mexican Sign Language (MSL) that uses the spatial tracking of hands and facial expressions to predict the word that a person intends to communicate. To achieve this, we trained four RNN-based models using a dataset of 600 clips, each 30 s long, with 30 clips per word. We conducted two experiments: the first was designed to determine the best-suited model for the target application and to measure the accuracy of the resulting system in offline mode; in the second, we measured the accuracy of the system in online mode. We assessed the system's performance using the precision, recall, F1-score, and the number of errors during online scenarios. The results indicate an accuracy of 0.93 in offline mode and a higher performance in online mode compared to previously proposed approaches. These results underscore the potential of the proposed scheme in scenarios such as teaching, learning, commercial transactions, and daily communication between deaf and non-deaf people. Full article
(This article belongs to the Section Computing and Artificial Intelligence)

24 pages, 8029 KB  
Article
Real-Time Machine Learning for Accurate Mexican Sign Language Identification: A Distal Phalanges Approach
by Gerardo García-Gil, Gabriela del Carmen López-Armas, Juan Jaime Sánchez-Escobar, Bryan Armando Salazar-Torres and Alma Nayeli Rodríguez-Vázquez
Technologies 2024, 12(9), 152; https://doi.org/10.3390/technologies12090152 - 4 Sep 2024
Cited by 6 | Viewed by 4958
Abstract
Effective communication is crucial in daily life, and for people with hearing disabilities, sign language is no exception, serving as their primary means of interaction. Various technologies, such as cochlear implants and mobile sign language translation applications, have been explored to enhance communication and improve the quality of life of the deaf community. This article presents a new, innovative method that uses real-time machine learning (ML) to accurately identify Mexican sign language (MSL) and is adaptable to any sign language. Our method is based on analyzing six features that represent the angles between the distal phalanges and the palm, thus eliminating the need for complex image processing. Our ML approach achieves accurate sign language identification in real-time, with an accuracy and F1 score of 99%. These results demonstrate that a simple approach can effectively identify sign language. This advance is significant, as it offers an effective and accessible solution to improve communication for people with hearing impairments. Furthermore, the proposed method has the potential to be implemented in mobile applications and other devices to provide practical support to the deaf community. Full article

22 pages, 12633 KB  
Article
MediaPipe Frame and Convolutional Neural Networks-Based Fingerspelling Detection in Mexican Sign Language
by Tzeico J. Sánchez-Vicinaiz, Enrique Camacho-Pérez, Alejandro A. Castillo-Atoche, Mayra Cruz-Fernandez, José R. García-Martínez and Juvenal Rodríguez-Reséndiz
Technologies 2024, 12(8), 124; https://doi.org/10.3390/technologies12080124 - 1 Aug 2024
Cited by 10 | Viewed by 5018
Abstract
This research proposes implementing a system to recognize the static signs of the Mexican Sign Language (MSL) dactylological alphabet using the MediaPipe frame and Convolutional Neural Network (CNN) models to correctly interpret the letters that represent the manual signals coming from a camera. The development of these types of studies allows the implementation of technological advances in artificial intelligence and computer vision in teaching Mexican Sign Language (MSL). The best CNN model achieved an accuracy of 83.63% over the sets of 336 test images. In addition, considering samples of each letter, the following results are obtained: an accuracy of 84.57%, a sensitivity of 83.33%, and a specificity of 99.17%. The advantage of this system is that it could be implemented on low-consumption equipment, carrying out the classification in real-time, contributing to the accessibility of its use. Full article

16 pages, 5042 KB  
Article
Towards a Bidirectional Mexican Sign Language–Spanish Translation System: A Deep Learning Approach
by Jaime-Rodrigo González-Rodríguez, Diana-Margarita Córdova-Esparza, Juan Terven and Julio-Alejandro Romero-González
Technologies 2024, 12(1), 7; https://doi.org/10.3390/technologies12010007 - 5 Jan 2024
Cited by 15 | Viewed by 5022
Abstract
People with hearing disabilities often face communication barriers when interacting with hearing individuals. To address this issue, this paper proposes a bidirectional Sign Language Translation System that aims to bridge the communication gap. Deep learning models such as recurrent neural networks (RNN), bidirectional RNN (BRNN), LSTM, GRU, and Transformers are compared to find the most accurate model for sign language recognition and translation. Keypoint detection using MediaPipe is employed to track and understand sign language gestures. The system features a user-friendly graphical interface with modes for translating between Mexican Sign Language (MSL) and Spanish in both directions. Users can input signs or text and obtain corresponding translations. Performance evaluation demonstrates high accuracy, with the BRNN model achieving 98.8% accuracy. The research emphasizes the importance of hand features in sign language recognition. Future developments could focus on enhancing accessibility and expanding the system to support other sign languages. This Sign Language Translation System offers a promising solution to improve communication accessibility and foster inclusivity for individuals with hearing disabilities. Full article
(This article belongs to the Section Assistive Technologies)

13 pages, 1112 KB  
Article
Exploring a Novel Mexican Sign Language Lexicon Video Dataset
by Víctor Martínez-Sánchez, Iván Villalón-Turrubiates, Francisco Cervantes-Álvarez and Carlos Hernández-Mejía
Multimodal Technol. Interact. 2023, 7(8), 83; https://doi.org/10.3390/mti7080083 - 19 Aug 2023
Cited by 6 | Viewed by 5817
Abstract
This research explores a novel Mexican Sign Language (MSL) lexicon video dataset containing the dynamic gestures most frequently used in MSL. Each gesture consists of a set of different versions of videos recorded under uncontrolled conditions. The MX-ITESO-100 dataset is composed of a lexicon of 100 gestures and 5000 videos from three participants, covering different grammatical elements. Additionally, the dataset is evaluated with a two-step neural network model, achieving an accuracy greater than 99%, and thus serves as a benchmark for future training of machine learning models in computer vision systems. Finally, this research promotes an inclusive environment within society and organizations, in particular for people with hearing impairments. Full article

25 pages, 5556 KB  
Article
Use of Spherical and Cartesian Features for Learning and Recognition of the Static Mexican Sign Language Alphabet
by Homero V. Rios-Figueroa, Angel J. Sánchez-García, Candy Obdulia Sosa-Jiménez and Ana Luisa Solís-González-Cosío
Mathematics 2022, 10(16), 2904; https://doi.org/10.3390/math10162904 - 12 Aug 2022
Cited by 6 | Viewed by 3701
Abstract
The automatic recognition of sign language is very important to allow for communication by hearing impaired people. The purpose of this study is to develop a method of recognizing the static Mexican Sign Language (MSL) alphabet. In contrast to other MSL recognition methods, which require a controlled background and permit changes only in 2D space, our method only requires indoor conditions and allows for variations in the 3D pose. We present an innovative method that can learn the shape of each of the 21 letters from examples. Before learning, each example in the training set is normalized in the 3D pose using principal component analysis. The input data are created with a 3D sensor. Our method generates three types of features to represent each shape. When applied to a dataset acquired in our laboratory, an accuracy of 100% was obtained. The features used by our method have a clear, intuitive geometric interpretation. Full article
(This article belongs to the Section E1: Mathematics and Computer Science)
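The 3D pose normalization via principal component analysis can be sketched as follows; this shows the standard PCA alignment idea and omits the axis sign/order disambiguation a full pipeline would also need:

```python
import numpy as np

def pca_normalize(points):
    """Center a 3D point cloud and rotate it into its principal axes,
    removing translation and (most of) the 3D pose before shape learning.

    A sketch of the normalization idea only; resolving the remaining
    sign/order ambiguity of the axes is left out.
    """
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    # Eigenvectors of the covariance matrix give the principal axes;
    # eigh returns them in ascending eigenvalue order.
    _, vecs = np.linalg.eigh(np.cov(centered.T))
    return centered @ vecs[:, ::-1]    # largest-variance axis first
```

After this step, all training examples of a letter share a canonical orientation, so shape features can be compared directly.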

16 pages, 2079 KB  
Article
Automatic Recognition of Mexican Sign Language Using a Depth Camera and Recurrent Neural Networks
by Kenneth Mejía-Peréz, Diana-Margarita Córdova-Esparza, Juan Terven, Ana-Marcela Herrera-Navarro, Teresa García-Ramírez and Alfonso Ramírez-Pedraza
Appl. Sci. 2022, 12(11), 5523; https://doi.org/10.3390/app12115523 - 29 May 2022
Cited by 27 | Viewed by 5215
Abstract
Automatic sign language recognition is a challenging task in machine learning and computer vision. Most works have focused on recognizing sign language using hand gestures only. However, body motion and facial gestures play an essential role in sign language interaction. Taking this into account, we introduce an automatic sign language recognition system based on multiple gestures, including hands, body, and face. We used a depth camera (OAK-D) to obtain the 3D coordinates of the motions and recurrent neural networks for classification. We compare multiple model architectures based on recurrent networks such as Long Short-Term Memories (LSTM) and Gated Recurrent Units (GRU) and develop a noise-robust approach. For this work, we collected a dataset of 3000 samples from 30 different signs of the Mexican Sign Language (MSL) containing features coordinates from the face, body, and hands in 3D spatial coordinates. After extensive evaluation and ablation studies, our best model obtained an accuracy of 97% on clean test data and 90% on highly noisy data. Full article
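The noise-robust evaluation above suggests a training-time augmentation along these lines; the noise level and number of noisy copies are illustrative assumptions, not the authors' settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_with_noise(sequences, sigma=0.01, copies=2):
    """Jitter 3D keypoint sequences with Gaussian noise so a recurrent
    model sees imperfect tracking during training.

    Returns the original sequences plus `copies` noisy variants of each.
    """
    out = list(sequences)
    for seq in sequences:
        for _ in range(copies):
            out.append(seq + rng.normal(0.0, sigma, size=seq.shape))
    return out
```

The augmented list can then be fed to an LSTM or GRU classifier exactly as the clean data would be.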

24 pages, 3755 KB  
Article
Optimization of Convolutional Neural Networks Architectures Using PSO for Sign Language Recognition
by Jonathan Fregoso, Claudia I. Gonzalez and Gabriela E. Martinez
Axioms 2021, 10(3), 139; https://doi.org/10.3390/axioms10030139 - 29 Jun 2021
Cited by 52 | Viewed by 7113
Abstract
This paper presents an approach to designing convolutional neural network architectures using the particle swarm optimization (PSO) algorithm. Adjusting the hyper-parameters and finding the optimal network architecture of convolutional neural networks represents an important challenge. Network performance, and achieving efficient learning models for a particular problem, depends on the hyper-parameter values chosen, and this implies exploring a huge and complex search space. Heuristic-based searches suit these types of problems; therefore, the main contribution of this research work is to apply the PSO algorithm to find the optimal parameters of the convolutional neural networks, which include the number of convolutional layers, the filter size used in the convolutional process, the number of convolutional filters, and the batch size. This work describes two optimization approaches: in the first, the parameters obtained by PSO are kept under the same conditions in each convolutional layer, and the objective function evaluated by PSO is given by the classification rate; in the second, the PSO generates different parameters per layer, and the objective function is composed of the recognition rate in conjunction with the Akaike information criterion, the latter helping to find the best network performance with the minimum number of parameters. The optimized architectures are implemented in three case studies on sign language databases, which include the Mexican Sign Language alphabet, the American Sign Language MNIST, and the American Sign Language alphabet. According to the results, the proposed methodologies achieved favorable results with a recognition rate higher than 99%, showing competitive results compared to other state-of-the-art approaches. Full article
(This article belongs to the Special Issue Various Deep Learning Algorithms in Computational Intelligence)
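The PSO loop underlying this approach can be sketched generically; here a toy objective stands in for "1 − validation accuracy", and all coefficients, bounds, and swarm sizes are illustrative defaults rather than the authors' settings. Discrete CNN hyper-parameters (layer counts, filter sizes) would be obtained by rounding a particle's position before evaluating it:

```python
import numpy as np

rng = np.random.default_rng(1)

def pso(objective, bounds, n_particles=10, iters=60, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer over a continuous box.

    Each particle tracks its personal best; the swarm shares a global
    best; velocities blend inertia with pulls toward both.
    """
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, float(pbest_val.min())

# Toy objective standing in for "1 - validation accuracy" of a CNN
# trained with the hyper-parameters encoded in p.
best, val = pso(lambda p: ((p - 3.0) ** 2).sum(), bounds=[(0, 10), (0, 10)])
```

In the paper's setting, each objective evaluation would train and validate a candidate CNN, which is why keeping the swarm and iteration counts modest matters.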
