Search Results (12)

Search Parameters:
Keywords = deaf-mutism

18 pages, 7147 KiB  
Article
A Novel Sustainable and Cost-Effective Triboelectric Nanogenerator Connected to the Internet of Things for Communication with Deaf–Mute People
by Enrique Delgado-Alvarado, Muhammad Waseem Ashraf, Shahzadi Tayyaba, José Amir González-Calderon, Ricardo López-Esparza, Ma. Cristina Irma Pérez-Pérez, Victor Champac, José Hernandéz-Hernández, Maximo Alejandro Figueroa-Navarro and Agustín Leobardo Herrera-May
Technologies 2025, 13(5), 188; https://doi.org/10.3390/technologies13050188 - 7 May 2025
Viewed by 1102
Abstract
Low-cost and sustainable technological systems are required to improve communication between deaf–mute and non-deaf–mute people. Herein, we report a novel low-cost and eco-friendly triboelectric nanogenerator (TENG) composed of recycled and waste components. This TENG can be connected to a smartphone using the Internet of Things (IoT), which allows the transmission of information from deaf–mute to non-deaf–mute people. The proposed TENG can harvest kinetic energy and convert it into electrical energy, with advantages such as a compact portable design, light weight, cost-effective fabrication, good voltage stability, and easy signal processing. In addition, this nanogenerator uses recycled and waste materials: radish leaf, polyimide tape, and a polyethylene terephthalate (PET) sheet. The TENG reaches an output power density of 340.3 µW/m² with a load resistance of 20.5 MΩ at 23 Hz, and it maintains stable performance even after 41,400 working cycles. The device can power a digital calculator and a chronometer, as well as light 116 ultra-bright blue commercial LEDs. This TENG can convert the finger movements of a deaf–mute person into electrical signals that are transmitted as text messages to a smartphone. Thus, the proposed TENG can be used as a low-cost wireless communication device for deaf–mute people, contributing to a more inclusive society.
(This article belongs to the Special Issue Technological Advances in Science, Medicine, and Engineering 2024)
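For readers curious what the finger-to-smartphone path could look like in software, here is a minimal, hypothetical sketch: a thresholded TENG pulse on each finger channel is mapped to a preset phrase and published over MQTT, a common IoT transport. The abstract does not specify the protocol or the message set; the broker address, the channel-to-phrase map, and the `read_channel_voltage` helper are all assumptions.

```python
# Hypothetical sketch of the TENG-to-smartphone signalling path.
# All names here (broker, topic, phrases, ADC helper) are assumptions,
# not details taken from the published design.
import time
import paho.mqtt.publish as publish

BROKER = "broker.example.org"         # assumed broker address
THRESHOLD_V = 1.5                     # assumed trigger level for one TENG pulse
PHRASES = {0: "yes", 1: "no", 2: "help", 3: "water", 4: "thank you"}

def read_channel_voltage(channel: int) -> float:
    """Placeholder for the ADC read of one finger channel of the TENG."""
    raise NotImplementedError("wire this to the actual acquisition hardware")

while True:
    for channel, phrase in PHRASES.items():
        if read_channel_voltage(channel) > THRESHOLD_V:
            # the smartphone app would subscribe to this topic and show the text
            publish.single("teng/messages", phrase, hostname=BROKER)
    time.sleep(0.05)  # simple polling / debounce interval
```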

13 pages, 565 KiB  
Review
The Inheritance of Hearing Loss and Deafness: A Historical Perspective
by Alessandro Martini, Andrea Cozza and Valerio Maria Di Pasquale Fiasca
Audiol. Res. 2024, 14(1), 116-128; https://doi.org/10.3390/audiolres14010010 - 26 Jan 2024
Cited by 2 | Viewed by 3665
Abstract
Although the term “genetics” is a relatively recent proposition, introduced in 1905 by the English biologist William Bateson, who rediscovered Mendel’s principles of inheritance and spread them in the scientific community, the influence of heredity has been recognized since the dawn of human civilization, first in agricultural crops and animal breeding and later in familial dynasties. In this concise review, we outline the evolution of the idea of hereditary hearing loss, up to the current knowledge of molecular genetics and epigenetics.
(This article belongs to the Special Issue Genetics of Hearing Loss—Volume II)

10 pages, 246 KiB  
Review
Congenital Deafness and Deaf-Mutism: A Historical Perspective
by Andrea Cozza, Valerio Maria Di Pasquale Fiasca and Alessandro Martini
Children 2024, 11(1), 51; https://doi.org/10.3390/children11010051 - 30 Dec 2023
Cited by 3 | Viewed by 4849
Abstract
Hearing loss is the most common sensory deficit and one of the most common congenital abnormalities. The estimated prevalence of moderate and severe hearing loss is 0.1–0.3% in healthy newborns and 2–4% in newborns admitted to the neonatal intensive care unit. Early detection and prompt treatment are therefore of utmost importance to prevent the unwanted sequelae of hearing loss on normal language development. Congenital deafness is today addressed on the one hand with hearing screening at birth, and on the other with the early (at around 3 months of age) fitting of hearing aids or, in case of lack of benefit, a cochlear implant. Molecular genetics, antibody tests for some viruses, and diagnostic imaging have largely contributed to an effective etiological classification, and a correct diagnosis with timely fitting of hearing aids or cochlear implants benefits deaf children. The association between congenital deafness and “mutism”, with all its social and legislative consequences for how deaf-mute people have been regarded since ancient times, persisted until the end of the nineteenth century, when new methods of language rehabilitation developed on one side and sign language on the other. Only in the last decades of the twentieth century did the diffusion of universal newborn hearing screening, the discovery of the genetic causes of over half of congenital deafness cases, and cochlear implants allow thousands of children born deaf to develop normal speech. Below, we analyze the evolution of the relationship between deafness and deaf-mutism over the centuries, with particular attention to the nineteenth century.
(This article belongs to the Section Pediatric Otolaryngology)
18 pages, 4335 KiB  
Article
Toward a Vision-Based Intelligent System: A Stacked Encoded Deep Learning Framework for Sign Language Recognition
by Muhammad Islam, Mohammed Aloraini, Suliman Aladhadh, Shabana Habib, Asma Khan, Abduatif Alabdulatif and Turki M. Alanazi
Sensors 2023, 23(22), 9068; https://doi.org/10.3390/s23229068 - 9 Nov 2023
Cited by 10 | Viewed by 2452
Abstract
Sign language recognition, an essential interface between the hearing and deaf-mute communities, faces challenges with high false positive rates and computational costs, even with advanced deep learning techniques. Our proposed solution is a stacked encoded model, combining artificial intelligence (AI) with the Internet of Things (IoT), which refines feature extraction and classification to overcome these challenges. We leverage a lightweight backbone model for preliminary feature extraction and use stacked autoencoders to further refine these features. Our approach harnesses the scalability of big data and shows notable improvements in accuracy, precision, recall, and F1-score, together with a favorable complexity analysis. The model’s effectiveness is demonstrated on the ArSL2018 benchmark dataset, where it outperforms state-of-the-art approaches. Additional validation through an ablation study with pre-trained convolutional neural network (CNN) models affirms its efficacy across all evaluation metrics. Our work paves the way for the sustainable development of high-performing, IoT-based sign-language-recognition applications.
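As a rough illustration of the architecture the abstract outlines (a lightweight backbone feeding refined features to a classifier), here is a minimal Keras sketch. The layer widths, the class count, and the choice of MobileNetV2 as the backbone are assumptions, and the paper's layer-wise autoencoder pretraining is folded into a single supervised model for brevity.

```python
# Minimal sketch: lightweight backbone -> autoencoder-style refinement -> softmax.
# Sizes and the backbone choice are assumptions, not the paper's exact design.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 32  # assumed class count for an ArSL2018-style alphabet task

backbone = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, pooling="avg", weights="imagenet")
backbone.trainable = False  # preliminary feature extractor, kept frozen

features = backbone.output                           # pooled feature vector
x = layers.Dense(512, activation="relu")(features)   # encoder layer 1
x = layers.Dense(256, activation="relu")(x)          # encoder layer 2 (bottleneck)
x = layers.Dense(512, activation="relu")(x)          # mirrored layer refining features
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = models.Model(backbone.input, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```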

20 pages, 12115 KiB  
Article
Interpretation of Bahasa Isyarat Malaysia (BIM) Using SSD-MobileNet-V2 FPNLite and COCO mAP
by Iffah Zulaikha Saiful Bahri, Sharifah Saon, Abd Kadir Mahamad, Khalid Isa, Umi Fadlilah, Mohd Anuaruddin Bin Ahmadon and Shingo Yamaguchi
Information 2023, 14(6), 319; https://doi.org/10.3390/info14060319 - 31 May 2023
Cited by 5 | Viewed by 5309
Abstract
This research proposes a study on two-way communication between deaf/mute and normal people using an Android application. Despite advancements in technology, there is still a lack of mobile applications that facilitate two-way communication between deaf/mute and normal people, especially using Bahasa Isyarat Malaysia (BIM). The project consists of three parts. The first is BIM letter recognition, covering single BIM letters and combined letters that form a word; a MobileNet pre-trained model is trained on a total of 87,000 images across 29 classes, with a 10% test and 90% training split. The second is BIM word hand gestures, comprising five classes trained on the SSD-MobileNet-V2 FPNLite 320 × 320 pre-trained model (a speed of 22 s/frame and a COCO mAP of 22.2), using a total of 500 images for the five classes; the first training run was set to 2000 steps and the second and third runs to 2500 steps. The third part is the Android application, developed in Android Studio, which integrates the BIM letter and BIM word gesture features with the trained models converted to TensorFlow Lite, and also provides speech-to-text conversion within the application. After training, BIM letter recognition reaches 99.75% accuracy, while BIM word hand gestures reach 61.60%. The suggested system is validated by these simulations and tests.
(This article belongs to the Section Information and Communications Technology)
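Since the Android pipeline described in the abstract hinges on converting the trained detectors to TensorFlow Lite, a minimal conversion sketch follows; the SavedModel path and the quantization choice are assumptions, not details from the paper.

```python
# Sketch of the TensorFlow Lite conversion step used to ship a trained model
# inside an Android app. The model path is a placeholder.
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("bim_letters_savedmodel")
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # quantize for mobile use
tflite_model = converter.convert()

with open("bim_letters.tflite", "wb") as f:
    f.write(tflite_model)  # bundle this file as an asset of the Android app
```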

21 pages, 19480 KiB  
Article
A Machine Learning Based Full Duplex System Supporting Multiple Sign Languages for the Deaf and Mute
by Muhammad Imran Saleem, Atif Siddiqui, Shaheena Noor, Miguel-Angel Luque-Nieto and Enrique Nava-Baro
Appl. Sci. 2023, 13(5), 3114; https://doi.org/10.3390/app13053114 - 28 Feb 2023
Cited by 6 | Viewed by 4392
Abstract
This manuscript presents a full duplex communication system for the Deaf and Mute (D-M) based on Machine Learning (ML). These individuals, who generally communicate through sign language, are an integral part of our society, and their contribution is vital. They face communication difficulties mainly because others generally do not know sign language and so cannot communicate with them. The work presents a solution to this problem through a system enabling the non-deaf and mute (ND-M) to communicate with D-M individuals without needing to learn sign language. The system is low-cost, reliable, easy to use, and based on a commercial off-the-shelf (COTS) Leap Motion Device (LMD). Hand gesture data from D-M individuals is acquired using the LMD and processed using a Convolutional Neural Network (CNN); a supervised ML algorithm completes the processing and converts the hand gesture data into speech. A new dataset for the ML-based algorithm is created and presented in this manuscript. It combines three sign language datasets: American Sign Language (ASL), Pakistani Sign Language (PSL), and Spanish Sign Language (SSL). The proposed system automatically detects the sign language and converts it into an audio message for the ND-M. Similarities between the three sign languages are also explored, and further research could help create more datasets combining multiple sign languages. The ND-M can communicate by recording their speech, which is then converted into text and hand gesture images. The system can be upgraded in the future to support more sign language datasets, and it provides a training mode that helps D-M individuals improve their hand gestures and see how accurately the system detects them. The proposed system has been validated through a series of experiments, with hand gesture detection accuracy exceeding 95%.
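The gesture-to-speech direction could look roughly like the sketch below: a CNN classifies a preprocessed Leap Motion frame and the predicted label is spoken aloud. The model file, the label vocabulary, and the use of pyttsx3 for speech synthesis are illustrative assumptions; the paper does not publish this code.

```python
# Hedged sketch of the gesture-to-speech direction: classify one preprocessed
# Leap Motion frame with a CNN, then speak the predicted label.
# Model path and label set are placeholders.
import numpy as np
import tensorflow as tf
import pyttsx3

LABELS = ["hello", "thanks", "yes", "no"]              # assumed vocabulary
model = tf.keras.models.load_model("gesture_cnn.h5")   # placeholder model file
engine = pyttsx3.init()

def speak_gesture(frame: np.ndarray) -> None:
    """frame: preprocessed Leap Motion hand data, e.g. shape (1, H, W, 1)."""
    probs = model.predict(frame, verbose=0)[0]
    engine.say(LABELS[int(np.argmax(probs))])  # speak the top-scoring label
    engine.runAndWait()
```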

19 pages, 9418 KiB  
Article
A Novel Machine Learning Based Two-Way Communication System for Deaf and Mute
by Muhammad Imran Saleem, Atif Siddiqui, Shaheena Noor, Miguel-Angel Luque-Nieto and Pablo Otero
Appl. Sci. 2023, 13(1), 453; https://doi.org/10.3390/app13010453 - 29 Dec 2022
Cited by 11 | Viewed by 7977
Abstract
Deaf and mute people are an integral part of society, and it is particularly important to provide them with a platform to communicate without the need for any training or learning. These people rely on sign language, but effective communication requires that others understand sign language too, and learning it is a challenge for those with no impairment. Another challenge is to have a system in which hand gestures of different languages are supported. In this manuscript, a system is presented that provides communication between deaf and mute (DnM) and non-deaf and mute (NDnM) people. The hand gestures of DnM people are acquired and processed using deep learning, and multiple language support is achieved using supervised machine learning. The NDnM people are provided with an audio interface where the hand gestures are converted into speech and generated through the sound card interface of the computer; speech from NDnM people is acquired using microphone input and converted into text. The system is easy to use and low cost. It is modular and can be enhanced by adding data to support more languages in the future. A supervised machine learning dataset is defined and created that provides automated multi-language communication between DnM and NDnM people. It is expected that this system will support DnM people in communicating effectively with others and restoring a feeling of normalcy in their daily lives. The hand gesture detection accuracy of the system is more than 90% for most gestures, while for certain scenarios it is between 80% and 90% due to variations in hand gestures between DnM people. The system is validated and evaluated using a series of experiments.
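The NDnM-to-DnM direction, speech captured at a microphone and converted to text before being mapped to gesture images, might be prototyped as below. The SpeechRecognition library and the free Google web API are stand-ins chosen for illustration; the paper does not name its speech-to-text component.

```python
# Sketch of the microphone-to-text half of a two-way system.
# Library choice (SpeechRecognition + Google web API) is an assumption.
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)  # calibrate to room noise
    audio = recognizer.listen(source)

try:
    text = recognizer.recognize_google(audio)
    print("Recognized:", text)  # next step: look up matching gesture images
except sr.UnknownValueError:
    print("Speech was not intelligible.")
```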

16 pages, 5051 KiB  
Article
A Sign Language Recognition System Applied to Deaf-Mute Medical Consultation
by Kun Xia, Weiwei Lu, Hongliang Fan and Qiang Zhao
Sensors 2022, 22(23), 9107; https://doi.org/10.3390/s22239107 - 24 Nov 2022
Cited by 16 | Viewed by 8226
Abstract
Deaf-mute people objectively face difficulty in seeking medical treatment. Due to the lack of sign language interpreters, most hospitals in China currently cannot interpret sign language, and normal medical treatment is a luxury for deaf people. In this paper, we propose a sign language recognition system, Heart-Speaker, applied to the deaf-mute consultation scenario. The system provides a low-cost solution to the difficult problem of treating deaf-mute patients. The doctor only needs to point the Heart-Speaker at the deaf patient, and the system automatically captures the sign language movements and translates their semantics. When a doctor issues a diagnosis or asks a patient a question, the system displays the corresponding sign language video and subtitles, meeting the needs of two-way communication between doctors and patients. The system uses the MobileNet-YOLOv3 model to recognize sign language; it meets the requirements of running on embedded terminals and provides favorable recognition accuracy. We performed experiments to verify the accuracy of the measurements: the results show that Heart-Speaker's sign language recognition accuracy reaches 90.77%.
(This article belongs to the Section Sensing and Imaging)
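For a sense of how a YOLOv3-family detector such as the paper's MobileNet-YOLOv3 is typically run on a video frame, here is a generic OpenCV inference sketch. The cfg/weights files and the class list are placeholders: the published model itself is not distributed with this listing.

```python
# Generic OpenCV inference sketch for a YOLOv3-style (Darknet) detector.
# Model files and class list are placeholders, not the paper's artifacts.
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("mobilenet_yolov3.cfg", "mobilenet_yolov3.weights")
classes = [line.strip() for line in open("sign_classes.txt")]

frame = cv2.imread("patient_sign.jpg")
blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(net.getUnconnectedOutLayersNames())

for out in outputs:
    for det in out:                 # det = [cx, cy, w, h, objectness, class scores...]
        scores = det[5:]
        cls = int(np.argmax(scores))
        if scores[cls] > 0.5:       # report confident sign detections
            print("sign:", classes[cls], "confidence:", float(scores[cls]))
```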

20 pages, 25144 KiB  
Article
Sign Language Recognition Method Based on Palm Definition Model and Multiple Classification
by Nurzada Amangeldy, Saule Kudubayeva, Akmaral Kassymova, Ardak Karipzhanova, Bibigul Razakhova and Serikbay Kuralov
Sensors 2022, 22(17), 6621; https://doi.org/10.3390/s22176621 - 1 Sep 2022
Cited by 15 | Viewed by 4532
Abstract
Technologies for pattern recognition are used in various fields. One of the most relevant and important directions is the application of pattern recognition technology, such as gesture recognition, to socially significant tasks such as developing real-time automatic sign language interpretation systems. More than 5% of the world's population, about 430 million people including 34 million children, are deaf-mute and not always able to use the services of a live sign language interpreter, and almost 80% of people with disabling hearing loss live in low- and middle-income countries. The development of low-cost automatic sign language interpretation systems, without expensive sensors or special cameras, would improve the lives of people with disabilities and contribute to their unhindered integration into society. To this end, this article analyzes gesture recognition methods in the context of their use in automatic gesture recognition systems, in order to determine the most suitable ones. Based on this analysis, an algorithm built on a palm definition model and linear models for recognizing the shapes of the numbers and letters of Kazakh sign language is proposed. The advantage of the proposed algorithm is that it recognizes 41 of the 42 letters of the Kazakh sign alphabet; previously, only the Russian letters within the Kazakh alphabet had been recognized. In addition, a unified function for configuring the frame depth map mode has been integrated into our system, which has improved recognition performance and can be used to create a multimodal database of gesture-word video data for the gesture recognition system.
(This article belongs to the Section Physical Sensors)
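The paper's palm definition model is not available as code; the sketch below shows an analogous landmark-plus-linear-model pipeline, using MediaPipe hand landmarks as the feature source, that mirrors the "linear models for recognizing shapes" idea. Every library choice here is an assumption.

```python
# Analogous sketch (not the paper's method): hand landmarks flattened into a
# feature vector and classified with a linear model.
from typing import Optional
import cv2
import mediapipe as mp
import numpy as np
from sklearn.svm import LinearSVC

hands = mp.solutions.hands.Hands(static_image_mode=True, max_num_hands=1)

def landmark_features(bgr_image: np.ndarray) -> Optional[np.ndarray]:
    """Flatten the 21 (x, y, z) hand landmarks into a 63-d feature vector."""
    result = hands.process(cv2.cvtColor(bgr_image, cv2.COLOR_BGR2RGB))
    if not result.multi_hand_landmarks:
        return None  # no hand found in this image
    points = result.multi_hand_landmarks[0].landmark
    return np.array([[p.x, p.y, p.z] for p in points]).ravel()

# With feature matrix X and letter labels y collected for the alphabet:
# clf = LinearSVC().fit(X, y)
# clf.predict(landmark_features(img).reshape(1, -1))
```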

15 pages, 1105 KiB  
Article
Robust Hand Gesture Recognition Using HOG-9ULBP Features and SVM Model
by Jianyong Li, Chengbei Li, Jihui Han, Yuefeng Shi, Guibin Bian and Shuai Zhou
Electronics 2022, 11(7), 988; https://doi.org/10.3390/electronics11070988 - 23 Mar 2022
Cited by 27 | Viewed by 5309
Abstract
Hand gesture recognition is an area of study that attempts to identify human gestures through mathematical algorithms, and can be used in several fields, such as communication between deaf-mute people, human–computer interaction, intelligent driving, and virtual reality. However, changes in scale and angle, as well as complex skin-like backgrounds, make gesture recognition quite challenging. In this paper, we propose a robust recognition approach for multi-scale as well as multi-angle hand gestures against complex backgrounds. First, hand gestures are segmented from complex backgrounds using the single Gaussian model and K-means algorithm. Then, the HOG feature and an improved 9ULBP feature are fused into the HOG-9ULBP feature, which is invariant in scale and rotation and enables accurate feature extraction. Finally, SVM is adopted to complete the hand gesture classification. Experimental results show that the proposed method achieves the highest accuracy of 99.01%, 97.50%, and 98.72% on the self-collected dataset, the NUS dataset, and the MU HandImages ASL dataset, respectively.
(This article belongs to the Special Issue Recent Advanced Applications of Rehabilitation and Medical Robotics)
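A compact stand-in for the feature-plus-classifier stage is sketched below, with plain HOG from scikit-image substituting for the paper's fused HOG-9ULBP descriptor (the improved 9ULBP feature is not reproduced here).

```python
# Sketch of a HOG + SVM gesture classifier; plain HOG stands in for the
# paper's fused HOG-9ULBP feature, which is not reproduced here.
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.svm import SVC

def hog_features(gray: np.ndarray) -> np.ndarray:
    """HOG descriptor of a segmented gesture image, resized for scale robustness."""
    patch = resize(gray, (128, 128))
    return hog(patch, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))

# With grayscale training images and labels collected separately:
# X = np.stack([hog_features(img) for img in training_images])
# clf = SVC(kernel="rbf").fit(X, labels)
```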

25 pages, 10390 KiB  
Article
Elderly Care Based on Hand Gestures Using Kinect Sensor
by Munir Oudah, Ali Al-Naji and Javaan Chahl
Computers 2021, 10(1), 5; https://doi.org/10.3390/computers10010005 - 26 Dec 2020
Cited by 27 | Viewed by 6055
Abstract
Technological advances have made hand gestures an important research field, especially for health care and assistive applications for elderly people, where a camera provides natural interaction with the assisting system through specific gestures. In this study, we proposed three different scenarios using a Microsoft Kinect V2 depth sensor and then evaluated the effectiveness of the outcomes. The first scenario used joint tracking combined with a depth threshold to enhance hand segmentation and efficiently recognise the number of extended fingers. The second utilised the metadata parameters provided by the Kinect V2 depth sensor, which supplied 11 parameters related to the tracked body and information about three gestures for each hand. The third used a simple convolutional neural network with joint tracking by depth metadata to recognise and classify five hand gesture categories. In this study, deaf-mute elderly people performed five different hand gestures, each tied to a specific request: water, a meal, the toilet, help, or medicine. The request was then sent via the Global System for Mobile Communications (GSM) as a text message to the care provider's smartphone, because the elderly subjects could not execute any activity independently.
(This article belongs to the Special Issue Artificial Intelligence for Health)
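The first scenario's depth-window segmentation and finger counting could be prototyped roughly as follows; Kinect frame acquisition is left as a placeholder, and the convexity-defect heuristic is a common stand-in rather than the paper's exact method.

```python
# Rough sketch of depth-threshold hand segmentation plus finger counting.
# Kinect V2 frame acquisition is out of scope; the defect heuristic is generic.
import cv2
import numpy as np

def count_fingers(depth_mm: np.ndarray, hand_depth_mm: int, window_mm: int = 80) -> int:
    """depth_mm: uint16 Kinect depth frame; hand_depth_mm: tracked hand joint depth."""
    # keep only pixels within a depth window around the tracked hand joint
    mask = cv2.inRange(depth_mm, hand_depth_mm - window_mm, hand_depth_mm + window_mm)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    hand = max(contours, key=cv2.contourArea)
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return 0
    # deep convexity defects sit between extended fingers: roughly count - 1
    deep = sum(1 for i in range(defects.shape[0]) if defects[i, 0, 3] > 10000)
    return deep + 1

# A request label (e.g. "water") derived from the count would then be sent
# as an SMS through a GSM modem, which is outside the scope of this sketch.
```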

29 pages, 3836 KiB  
Review
Hand Gesture Recognition Based on Computer Vision: A Review of Techniques
by Munir Oudah, Ali Al-Naji and Javaan Chahl
J. Imaging 2020, 6(8), 73; https://doi.org/10.3390/jimaging6080073 - 23 Jul 2020
Cited by 453 | Viewed by 57574
Abstract
Hand gestures are a form of nonverbal communication that can be used in several fields, such as communication between deaf-mute people, robot control, human–computer interaction (HCI), home automation, and medical applications. Research papers on hand gestures have adopted many different techniques, including those based on instrumented sensor technology and computer vision, and the hand sign can be classified under many headings, such as posture versus gesture, dynamic versus static, or a hybrid of the two. This paper reviews the literature on hand gesture techniques and introduces their merits and limitations under different circumstances. In addition, it tabulates the performance of these methods, focusing on computer vision techniques and covering their similarities and differences, the hand segmentation technique used, classification algorithms and their drawbacks, the number and types of gestures, the dataset used, the detection range (distance), and the type of camera used. The paper is a thorough general overview of hand gesture methods with a brief discussion of some possible applications.
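As one concrete taste of the vision-based techniques the review tabulates, here is a classic skin-colour hand segmentation step by HSV thresholding; the threshold values are common heuristics, not figures taken from the review.

```python
# Classic skin-colour segmentation by HSV thresholding, one of the
# vision-based techniques surveyed. Threshold values are common heuristics.
import cv2
import numpy as np

frame = cv2.imread("hand.jpg")
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

lower = np.array([0, 40, 60], dtype=np.uint8)     # heuristic skin-tone bounds
upper = np.array([25, 255, 255], dtype=np.uint8)
mask = cv2.inRange(hsv, lower, upper)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

hand_only = cv2.bitwise_and(frame, frame, mask=mask)  # keep skin-coloured pixels
cv2.imwrite("hand_segmented.png", hand_only)
```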
