Search Results (43)

Search Parameters:
Keywords = American Sign Language (ASL)

14 pages, 701 KiB  
Article
Early Access to Sign Language Boosts the Development of Serial Working Memory in Deaf and Hard-of-Hearing Children
by Brennan P. Terhune-Cotter and Matthew W. G. Dye
Behav. Sci. 2025, 15(7), 919; https://doi.org/10.3390/bs15070919 - 7 Jul 2025
Viewed by 351
Abstract
Deaf and hard-of-hearing (DHH) children are often reported to show deficits on working memory (WM) tasks. These deficits are often characterized as contributing to their struggles to acquire spoken language. Here we report a longitudinal study of a large (N = 103) sample of DHH children who acquired American Sign Language (ASL) as their first language. Using an n-back working memory task, we show significant growth in WM performance across the 7–13-year-old age range. Furthermore, we show that children with early access to ASL from their DHH parents demonstrate faster WM growth and that this group difference is mediated by ASL receptive skills. The data suggest the important role of early access to perceivable natural language in promoting typical WM growth during the middle school years. We conclude that the acquisition of a natural visual–gestural language is sufficient to support the development of WM in DHH children. Further research is required to determine how the timing and quality of ASL exposure may play a role, or whether the effects are driven by acquisition-related corollaries, such as parent–child interactions and maternal stress. Full article
(This article belongs to the Special Issue Language and Cognitive Development in Deaf Children)

22 pages, 7640 KiB  
Article
Bilingual Sign Language Recognition: A YOLOv11-Based Model for Bangla and English Alphabets
by Nawshin Navin, Fahmid Al Farid, Raiyen Z. Rakin, Sadman S. Tanzim, Mashrur Rahman, Shakila Rahman, Jia Uddin and Hezerul Abdul Karim
J. Imaging 2025, 11(5), 134; https://doi.org/10.3390/jimaging11050134 - 27 Apr 2025
Cited by 2 | Viewed by 1722
Abstract
Communication through sign language effectively helps both hearing- and speaking-impaired individuals connect. However, interlingual communication between Bangla Sign Language (BdSL) and American Sign Language (ASL) is hindered by the absence of a unified system. This study introduces a detection system that incorporates these two sign languages to enhance the flow of communication for their users. We developed and tested a deep learning-based sign-language detection system that recognizes both the BdSL and ASL alphabets concurrently in real time. The approach uses a YOLOv11 object detection architecture trained on an open-source dataset of 9556 images containing 64 different letter signs from both languages. Data preprocessing was applied to enhance the performance of the model, and evaluation criteria including precision, recall, and mAP were computed to assess it. The proposed method achieves a precision of 99.12% and an average recall of 99.63% within 30 epochs. The results show that the proposed model outperforms current sign language recognition (SLR) techniques and can be used in assistive communication technologies and human–computer interaction systems. Full article
(This article belongs to the Section Computer Vision and Pattern Recognition)
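A minimal training sketch of the detection step described above, assuming the Ultralytics YOLO implementation and a hypothetical dataset configuration file bdsl_asl.yaml listing the 64 letter-sign classes; the authors' exact settings are not given in the abstract.

```python
# Hedged sketch: fine-tune a YOLO11 detector on a combined BdSL + ASL alphabet
# dataset. Assumes the Ultralytics package and a hypothetical bdsl_asl.yaml that
# points at the 9556 annotated images with 64 letter-sign classes.
from ultralytics import YOLO

model = YOLO("yolo11n.pt")          # pretrained YOLO11 nano checkpoint
model.train(
    data="bdsl_asl.yaml",           # hypothetical dataset config (train/val paths, 64 names)
    epochs=30,                      # the abstract reports results after 30 epochs
    imgsz=640,
)
metrics = model.val()               # precision / recall / mAP on the validation split
print(metrics.box.map50)            # mAP@0.5
```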

22 pages, 8938 KiB  
Article
Enhancing Hand Gesture Image Recognition by Integrating Various Feature Groups
by Ismail Taha Ahmed, Wisam Hazim Gwad, Baraa Tareq Hammad and Entisar Alkayal
Technologies 2025, 13(4), 164; https://doi.org/10.3390/technologies13040164 - 19 Apr 2025
Cited by 2 | Viewed by 1133
Abstract
Human gesture image recognition is the process of identifying, deciphering, and classifying human gestures in images or video frames using computer vision algorithms. These gestures range from simple hand motions, body positions, and facial expressions to complex composite gestures. Two significant problems affecting the performance of human gesture image recognition methods are ambiguity and invariance. Ambiguity occurs when gestures have the same shape but different orientations, while invariance guarantees that gestures are correctly classified even when scale, lighting, or orientation varies. To overcome these issues, hand-crafted features can be combined with deep learning to greatly improve the performance of hand gesture image recognition models. This combination improves the model's overall accuracy and dependability in identifying a variety of hand movements by enhancing its capacity to capture both shape and texture properties. Thus, in this study, we propose a hand gesture recognition method that combines ResNet50 feature extraction with the Tamura texture descriptor and uses the adaptability of a generalized additive model (GAM) to represent intricate interactions between the features. Experiments were carried out on publicly available datasets containing images of American Sign Language (ASL) gestures. As Tamura-ResNet50-OptimizedGAM achieved the highest accuracy rate on the ASL datasets, it is believed to be the best option for human gesture image recognition. According to the experimental results, the accuracy rate was 96%, higher than that of the state-of-the-art techniques currently in use. Full article
(This article belongs to the Section Information and Communication Technologies)
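A rough sketch of the feature-fusion idea described above: deep ResNet-50 features concatenated with a simplified Tamura contrast measure. A scikit-learn classifier stands in for the authors' optimized GAM, and the image paths and labels are assumptions.

```python
# Hedged sketch of deep + handcrafted feature fusion: ResNet-50 penultimate
# features concatenated with a simplified Tamura contrast descriptor, then fed
# to a stand-in classifier (the authors' optimized GAM configuration is not
# specified in the abstract).
import numpy as np
import torch
from torchvision import models, transforms
from PIL import Image
from scipy.stats import kurtosis
from sklearn.linear_model import LogisticRegression

resnet = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
resnet.fc = torch.nn.Identity()          # keep the 2048-d penultimate features
resnet.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def tamura_contrast(gray: np.ndarray) -> float:
    # Tamura contrast: standard deviation normalised by the fourth root of kurtosis.
    k = kurtosis(gray.ravel(), fisher=False)
    return float(gray.std() / (k ** 0.25 + 1e-8))

def fused_features(path: str) -> np.ndarray:
    img = Image.open(path).convert("RGB")
    with torch.no_grad():
        deep = resnet(preprocess(img).unsqueeze(0)).squeeze(0).numpy()
    texture = np.array([tamura_contrast(np.asarray(img.convert("L"), dtype=np.float32))])
    return np.concatenate([deep, texture])

# X = np.stack([fused_features(p) for p in image_paths]); y = labels   # assumed data
# clf = LogisticRegression(max_iter=1000).fit(X, y)                    # GAM stand-in
```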

24 pages, 9841 KiB  
Article
Mexican Sign Language Recognition: Dataset Creation and Performance Evaluation Using MediaPipe and Machine Learning Techniques
by Mario Rodriguez, Outmane Oubram, A. Bassam, Noureddine Lakouari and Rasikh Tariq
Electronics 2025, 14(7), 1423; https://doi.org/10.3390/electronics14071423 - 1 Apr 2025
Cited by 3 | Viewed by 1149
Abstract
In Mexico, around 2.4 million people (1.9% of the national population) are deaf, and Mexican Sign Language (MSL) support is essential for people with communication disabilities. Research and technological prototypes of sign language recognition have been developed to support public communication systems without human interpreters. However, most of these systems and studies are closely tied to American Sign Language (ASL) or other sign languages, for which the highest levels of accuracy in letter and word recognition have been achieved. The objective of the current study is to develop and evaluate a sign language recognition system tailored to MSL, aiming for accurate recognition of dactylology and the first ten numerical digits (1–10). A database covering the 29 characters of MSL dactylology and the first ten digits was created with a camera. MediaPipe was then applied for feature extraction on both hands (21 points per hand). Once the features were extracted, machine learning and deep learning techniques were applied to recognize MSL signs. Recognition of MSL patterns for static signs (29 classes) and continuous signs (10 classes) yielded accuracies of 92% with a Support Vector Machine (SVM) and 86% with a Gated Recurrent Unit (GRU), respectively. The trained algorithms assume full scenes with both hands visible, so signing must be performed under these conditions to be recognized. To improve accuracy, it is suggested to increase the number of samples. Full article
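A minimal sketch of the static-sign pipeline described above, assuming MediaPipe Hands for the 21 landmarks per hand and a scikit-learn SVM; dataset paths and label names are assumptions.

```python
# Hedged sketch: MediaPipe Hands extracts 21 landmarks per hand, the (x, y, z)
# coordinates for both hands are flattened into a fixed-length vector, and an
# SVM classifies the 29 static characters. Data loading is assumed.
import cv2
import mediapipe as mp
import numpy as np
from sklearn.svm import SVC

mp_hands = mp.solutions.hands

def landmark_vector(image_bgr: np.ndarray) -> np.ndarray:
    """Return a 2 hands x 21 landmarks x 3 coords = 126-d feature vector."""
    with mp_hands.Hands(static_image_mode=True, max_num_hands=2) as hands:
        result = hands.process(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB))
    features = np.zeros(2 * 21 * 3, dtype=np.float32)   # zeros if a hand is missing
    if result.multi_hand_landmarks:
        for h, hand in enumerate(result.multi_hand_landmarks[:2]):
            for l, lm in enumerate(hand.landmark):
                features[(h * 21 + l) * 3:(h * 21 + l) * 3 + 3] = (lm.x, lm.y, lm.z)
    return features

# X = np.stack([landmark_vector(cv2.imread(p)) for p in image_paths])  # assumed paths
# y = labels                                                           # 29 static classes
# clf = SVC(kernel="rbf").fit(X, y)
```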

21 pages, 5202 KiB  
Article
Real-Time American Sign Language Interpretation Using Deep Learning and Keypoint Tracking
by Bader Alsharif, Easa Alalwany, Ali Ibrahim, Imad Mahgoub and Mohammad Ilyas
Sensors 2025, 25(7), 2138; https://doi.org/10.3390/s25072138 - 28 Mar 2025
Cited by 1 | Viewed by 5996
Abstract
Communication barriers pose significant challenges for the Deaf and Hard-of-Hearing (DHH) community, limiting their access to essential services, social interactions, and professional opportunities. To bridge this gap, assistive technologies leveraging artificial intelligence (AI) and deep learning have gained prominence. This study presents a real-time American Sign Language (ASL) interpretation system that integrates deep learning with keypoint tracking to enhance accessibility and foster inclusivity. By combining the YOLOv11 model for gesture recognition with MediaPipe for precise hand tracking, the system achieves high accuracy in identifying ASL alphabet letters in real time. The proposed approach addresses challenges such as gesture ambiguity, environmental variations, and computational efficiency. Additionally, this system enables users to spell out names and locations, further improving its practical applications. Experimental results demonstrate that the model attains a mean Average Precision (mAP@0.5) of 98.2%, with an inference speed optimized for real-world deployment. This research underscores the critical role of AI-driven assistive technologies in empowering the DHH community by enabling seamless communication and interaction. Full article
(This article belongs to the Special Issue Sensor Systems for Gesture Recognition (3rd Edition))
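A rough sketch of how such a real-time loop could look, assuming a hypothetical fine-tuned checkpoint asl_yolo11.pt and OpenCV webcam capture; the authors' MediaPipe keypoint-tracking stage is not reproduced here.

```python
# Hedged sketch of a real-time loop in the spirit of the system above: a YOLO11
# detector (assumed fine-tuned on ASL alphabet letters) runs on webcam frames
# and the detected letters are appended to a spelled-out string.
import cv2
from ultralytics import YOLO

model = YOLO("asl_yolo11.pt")        # hypothetical fine-tuned checkpoint
cap = cv2.VideoCapture(0)
spelled = []

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model.predict(frame, conf=0.5, verbose=False)
    for box in results[0].boxes:
        spelled.append(model.names[int(box.cls)])   # detected letter class name
    cv2.imshow("ASL", results[0].plot())            # draw boxes and labels on the frame
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
print("".join(spelled))
```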

19 pages, 251 KiB  
Article
Insights from a Pre-Pandemic K-12 Virtual American Sign Language Program for a Post-Pandemic Online Era
by Casey W. Guynes, Nora Griffin-Shirley, Kristen Guynes and Leigh Kackley
Educ. Sci. 2024, 14(8), 892; https://doi.org/10.3390/educsci14080892 - 15 Aug 2024
Viewed by 1188
Abstract
In the past five years, the number of virtual American Sign Language (ASL) classes has dramatically increased from being a novel option to being a common course delivery mode across the country. Yet, little is known regarding virtual ASL course design and the implementation of evidence-based practices. Broadly, this programmatic case study sought insights from a small population of experienced virtual ASL teachers who had been teaching ASL online prior to the crisis-teaching phenomenon that laid the foundation for virtual ASL as it stands today. More specifically, the qualitative design utilized questionnaires, semi-structured interviews, member checks, and document reviews of five teachers who had been teaching ASL virtually to K-12 students prior to the onset of the COVID-19 pandemic. Rich qualitative data, analyzed through directed and summative content analysis, revealed many themes specific to virtual ASL education, including differences from traditional ASL instruction, specific job responsibilities, limitations, advantages, disadvantages, and suggestions for improvement. Additionally, aligning with previous literature, we explored teacher, student, and programmatic characteristics that were perceived to be conducive to virtual students' success. Finally, all participants expressed broader concerns that continue to exist in the field of ASL education. Implications for stakeholders, including K-12 ASL students, their families, teachers, administrators, and teacher training programs, are addressed, followed by suggestions for future research. Full article
(This article belongs to the Topic Advances in Online and Distance Learning)
20 pages, 7762 KiB  
Article
Applying Swin Architecture to Diverse Sign Language Datasets
by Yulia Kumar, Kuan Huang, Chin-Chien Lin, Annaliese Watson, J. Jenny Li, Patricia Morreale and Justin Delgado
Electronics 2024, 13(8), 1509; https://doi.org/10.3390/electronics13081509 - 16 Apr 2024
Cited by 2 | Viewed by 3043
Abstract
In an era where artificial intelligence (AI) bridges crucial communication gaps, this study extends AI’s utility to American and Taiwan Sign Language (ASL and TSL) communities through advanced models like the hierarchical vision transformer with shifted windows (Swin). This research evaluates Swin’s adaptability across sign languages, aiming for a universal platform for the unvoiced. Utilizing deep learning and transformer technologies, it has developed prototypes for ASL-to-English translation, supported by an educational framework to facilitate learning and comprehension, with the intention to include more languages in the future. This study highlights the efficacy of the Swin model, along with other models such as the vision transformer with deformable attention (DAT), ResNet-50, and VGG-16, in ASL recognition. The Swin model’s accuracy across various datasets underscores its potential. Additionally, this research explores the challenges of balancing accuracy with the need for real-time, portable language recognition capabilities and introduces the use of cutting-edge transformer models like Swin, DAT, and video Swin transformers for diverse datasets in sign language recognition. This study explores the integration of multimodality and large language models (LLMs) to promote global inclusivity. Future efforts will focus on enhancing these models and expanding their linguistic reach, with an emphasis on real-time translation applications and educational frameworks. These achievements not only advance the technology of sign language recognition but also provide more effective communication tools for the deaf and hard-of-hearing community. Full article
(This article belongs to the Special Issue Applications of Deep Learning Techniques)
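A minimal fine-tuning sketch in the spirit of the study above, assuming the timm implementation of Swin-Tiny and an ImageFolder-style directory asl_train/ with 26 letter classes; the authors' actual Swin variant and training schedule are not specified in the abstract.

```python
# Hedged sketch: fine-tune a Swin transformer for ASL alphabet classification
# using the timm library. Dataset layout and hyperparameters are assumptions.
import timm
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

model = timm.create_model("swin_tiny_patch4_window7_224", pretrained=True, num_classes=26)

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
loader = DataLoader(datasets.ImageFolder("asl_train/", transform=tfm),
                    batch_size=32, shuffle=True)

opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = torch.nn.CrossEntropyLoss()

model.train()
for images, labels in loader:        # one pass shown; real training runs several epochs
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()
```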

4 pages, 447 KiB  
Proceeding Paper
ASL Fingerspelling Classification for Use in Robot Control
by Kevin McCready, Dermot Kerr, Sonya Coleman and Emmett Kerr
Eng. Proc. 2024, 65(1), 12; https://doi.org/10.3390/engproc2024065012 - 6 Mar 2024
Viewed by 942
Abstract
This paper proposes a gesture-based control system for industrial robots. To achieve that goal, the performance of an image classifier trained on three different American Sign Language (ASL) fingerspelling image datasets is considered. Then, the three are combined into a single larger dataset, and the classifier is trained on that. The result of this process is then compared with the original three. Full article
(This article belongs to the Proceedings of The 39th International Manufacturing Conference)
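A small sketch of the dataset-combination step described above, assuming three ImageFolder-style directories with matching class-folder names; the actual datasets used by the authors are not named in the abstract.

```python
# Hedged sketch: merge three ASL fingerspelling image datasets with ConcatDataset
# so a single classifier can be trained on the union. Directory names are
# assumptions; label indices line up only if all three use the same class folders.
from torch.utils.data import ConcatDataset, DataLoader
from torchvision import datasets, transforms

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

combined = ConcatDataset([
    datasets.ImageFolder("asl_dataset_a/", transform=tfm),   # assumed paths
    datasets.ImageFolder("asl_dataset_b/", transform=tfm),
    datasets.ImageFolder("asl_dataset_c/", transform=tfm),
])
loader = DataLoader(combined, batch_size=64, shuffle=True)
```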

11 pages, 4139 KiB  
Proceeding Paper
Hand Gesture Recognition in Indian Sign Language Using Deep Learning
by Harsh Kumar Vashisth, Tuhin Tarafder, Rehan Aziz, Mamta Arora and Alpana
Eng. Proc. 2023, 59(1), 96; https://doi.org/10.3390/engproc2023059096 - 21 Dec 2023
Cited by 13 | Viewed by 9959
Abstract
Sign languages are important for the deaf and hard-of-hearing communities, as they provide a means of communication and expression. However, many people outside of the deaf community are not familiar with sign languages, which can lead to communication barriers and exclusion. Each country and culture has its own sign language, and some countries have multiple sign languages. Indian Sign Language (ISL) is a visual language used by the deaf and hard-of-hearing community in India. It is a complete language, with its own grammar and syntax, and is used to convey information through hand gestures, facial expressions, and body language. Over time, ISL has evolved into its own distinct language, with regional variations and dialects. Recognizing hand gestures in sign languages is a challenging task due to the high variability in hand shapes, movements, and orientations. ISL uses a combination of one-handed and two-handed gestures, which makes it fundamentally different from other common sign languages like American Sign Language (ASL). This paper aims to address the communication gap between specially abled (deaf) people who can only express themselves through Indian Sign Language and those who do not understand it, thereby improving accessibility and communication for sign language users. This is achieved by implementing convolutional neural networks (CNNs) on our self-made dataset, a necessary step, as none of the existing datasets fulfills the need for real-world images. We achieved a loss of 0.0178 and 99% accuracy on our dataset. Full article
(This article belongs to the Proceedings of Eng. Proc., 2023, RAiSE-2023)
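A minimal sketch of a CNN classifier of the kind the abstract describes, written in PyTorch; the authors' architecture, input resolution, and class count are not given, so the 64x64 input and 35-class head are assumptions.

```python
# Hedged sketch of a small CNN for static ISL gesture classification.
import torch
import torch.nn as nn

class ISLNet(nn.Module):
    def __init__(self, num_classes: int = 35):     # assumed class count
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, 256), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(256, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = ISLNet()
logits = model(torch.randn(1, 3, 64, 64))   # smoke test on a dummy 64x64 image
print(logits.shape)                         # torch.Size([1, 35])
```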

20 pages, 459 KiB  
Article
Sign Language Studies with Chimpanzees in Sanctuary
by Mary Lee Jensvold, Kailie Dombrausky and Emily Collins
Animals 2023, 13(22), 3486; https://doi.org/10.3390/ani13223486 - 11 Nov 2023
Cited by 1 | Viewed by 5269
Abstract
Adult chimpanzees Tatu and Loulis lived at the Fauna Foundation sanctuary. They had acquired signs of American Sign Language (ASL) while young and continued to use them as adults. Caregivers with proficiency in ASL maintained daily sign language records during interactions and passive observation. Sign checklists were records of daily vocabulary use. Sign logs were records of signed interactions with caregivers and other chimpanzees. This study reports sign use from eight years of these records. Tatu and Loulis used a majority of their base vocabularies consistently over the study period. They used signs that they had acquired decades earlier and new signs. Their utterances served a variety of communicative functions, including responses, conversational devices, requests, and descriptions. They signed to caregivers, other chimpanzees, including those who did not use signs, and to themselves privately. This indicates the importance of a stimulating and interactive environment to understand the scope of ape communication and, in particular, their use of sign language. Full article

26 pages, 3814 KiB  
Article
SDViT: Stacking of Distilled Vision Transformers for Hand Gesture Recognition
by Chun Keat Tan, Kian Ming Lim, Chin Poo Lee, Roy Kwang Yang Chang and Ali Alqahtani
Appl. Sci. 2023, 13(22), 12204; https://doi.org/10.3390/app132212204 - 10 Nov 2023
Cited by 4 | Viewed by 2223
Abstract
Hand gesture recognition (HGR) is a rapidly evolving field with the potential to revolutionize human–computer interactions by enabling machines to interpret and understand human gestures for intuitive communication and control. However, HGR faces challenges such as the high similarity of hand gestures, real-time performance, and model generalization. To address these challenges, this paper proposes the stacking of distilled vision transformers, referred to as SDViT, for hand gesture recognition. An initially pretrained vision transformer (ViT) featuring a self-attention mechanism is introduced to effectively capture intricate connections among image patches, thereby enhancing its capability to handle the challenge of high similarity between hand gestures. Subsequently, knowledge distillation is proposed to compress the ViT model and improve model generalization. Multiple distilled ViTs are then stacked to achieve higher predictive performance and reduce overfitting. The proposed SDViT model achieves a promising performance on three benchmark datasets for hand gesture recognition: the American Sign Language (ASL) dataset, the ASL with digits dataset, and the National University of Singapore (NUS) hand gesture dataset. The accuracies achieved on these datasets are 100.00%, 99.60%, and 100.00%, respectively. Full article
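A minimal sketch of the knowledge-distillation step described above, using timm ViT backbones; the temperature, loss weighting, and model choices are assumptions, and the stacking of several distilled students is not shown.

```python
# Hedged sketch: a student ViT is trained to match a teacher's softened
# predictions blended with the hard labels (standard knowledge distillation).
import timm
import torch
import torch.nn.functional as F

# The teacher is assumed to have been fine-tuned on the 26 ASL classes already;
# here a pretrained backbone with a fresh 26-way head stands in for it.
teacher = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=26).eval()
student = timm.create_model("vit_tiny_patch16_224", pretrained=True, num_classes=26)

def distillation_loss(images: torch.Tensor, labels: torch.Tensor,
                      T: float = 4.0, alpha: float = 0.5) -> torch.Tensor:
    """Soft-target KL term (teacher vs. student) blended with the hard-label loss."""
    with torch.no_grad():
        teacher_logits = teacher(images)
    student_logits = student(images)
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# loss = distillation_loss(batch_images, batch_labels)   # assumed training batch
# loss.backward()
```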

15 pages, 4273 KiB  
Article
Fusion of Attention-Based Convolution Neural Network and HOG Features for Static Sign Language Recognition
by Diksha Kumari and Radhey Shyam Anand
Appl. Sci. 2023, 13(21), 11993; https://doi.org/10.3390/app132111993 - 3 Nov 2023
Cited by 8 | Viewed by 2876
Abstract
The deaf and hearing-impaired community expresses their emotions, communicates with society, and enhances the interaction between humans and computers using sign language gestures. This work presents a strategy for efficient feature extraction that combines two different methods: a convolutional block attention module (CBAM)-based convolutional neural network (CNN) and the standard handcrafted histogram of oriented gradients (HOG) feature descriptor. The proposed framework aims to enhance accuracy by extracting meaningful features and resolving issues like rotation, similar hand orientation, etc. The HOG feature extraction technique provides a compact feature representation that conveys meaningful information about sign gestures. The CBAM attention module is incorporated into the structure of the CNN to enhance feature learning using spatial and channel attention mechanisms. The final feature vector is then formed by concatenating these features and is provided to the classification layers to predict static sign gestures. The proposed approach is validated on two publicly available static databases, Massey American Sign Language (ASL) and Indian Sign Language (ISL). The model’s performance is evaluated using precision, recall, F1-score, and accuracy. Our proposed methodology achieved 99.22% and 99.79% accuracy for the ASL and ISL datasets, respectively. The acquired results signify the efficiency of the feature fusion and attention mechanism, and our network achieved higher accuracy than earlier studies. Full article
(This article belongs to the Special Issue Research on Image Analysis and Computer Vision)
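A small sketch of the deep-plus-handcrafted feature fusion described above; a plain pretrained ResNet-18 stands in for the authors' CBAM-augmented CNN, whose exact design is not given in the abstract, and the HOG parameters and image path are assumptions.

```python
# Hedged sketch: handcrafted HOG features concatenated with deep CNN features
# before classification.
import numpy as np
import torch
from PIL import Image
from skimage.feature import hog
from torchvision import models, transforms

cnn = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
cnn.fc = torch.nn.Identity()                 # 512-d deep features
cnn.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def fused_vector(path: str) -> np.ndarray:
    img = Image.open(path).convert("RGB")
    gray = np.asarray(img.convert("L").resize((128, 128)), dtype=np.float32)
    hog_feat = hog(gray, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    with torch.no_grad():
        deep_feat = cnn(preprocess(img).unsqueeze(0)).squeeze(0).numpy()
    return np.concatenate([deep_feat, hog_feat])   # fused vector for a classifier head

# vec = fused_vector("sign.png")   # assumed image path
```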

11 pages, 2587 KiB  
Article
An AI-Based Framework for Translating American Sign Language to English and Vice Versa
by Vijayendra D. Avina, Md Amiruzzaman, Stefanie Amiruzzaman, Linh B. Ngo and M. Ali Akber Dewan
Information 2023, 14(10), 569; https://doi.org/10.3390/info14100569 - 15 Oct 2023
Cited by 3 | Viewed by 5748
Abstract
In this paper, we propose a framework to convert American Sign Language (ASL) to English and English to ASL. Within this framework, we use a deep learning model along with rolling average prediction, which captures image frames from videos and classifies the signs in those frames. The classified frames are then used to construct ASL words and sentences to support people with hearing impairments. We also use the same deep learning model to capture signs from people who are deaf and convert them into ASL words and English sentences. Based on this framework, we developed a web-based tool for real-life use, which we also present as a proof of concept. In our evaluation, we found that the deep learning model converts the image signs into ASL words and sentences with high accuracy, and the tool was found to be very useful for people who are deaf or hard of hearing. The main contribution of this work is the design of a system to convert ASL to English and vice versa. Full article
(This article belongs to the Section Artificial Intelligence)
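A minimal sketch of the rolling-average prediction idea described above: per-frame class probabilities are averaged over a sliding window before a sign is emitted, smoothing out single-frame misclassifications. The window size, label list, and classifier call are assumptions.

```python
# Hedged sketch of rolling-average prediction over video frames.
from collections import deque
import numpy as np

WINDOW = 10
recent = deque(maxlen=WINDOW)           # holds the last WINDOW probability vectors

def rolling_prediction(frame_probs: np.ndarray, labels: list[str]) -> str:
    """frame_probs: softmax output of the per-frame sign classifier."""
    recent.append(frame_probs)
    avg = np.mean(recent, axis=0)       # average probabilities over the window
    return labels[int(np.argmax(avg))]

# for frame in video_frames:                        # assumed video loop
#     probs = sign_model.predict(frame)             # hypothetical classifier call
#     print(rolling_prediction(probs, class_names))
```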

20 pages, 1550 KiB  
Article
Deep Learning Technology to Recognize American Sign Language Alphabet
by Bader Alsharif, Ali Salem Altaher, Ahmed Altaher, Mohammad Ilyas and Easa Alalwany
Sensors 2023, 23(18), 7970; https://doi.org/10.3390/s23187970 - 19 Sep 2023
Cited by 31 | Viewed by 9750
Abstract
Historically, individuals with hearing impairments have faced neglect, lacking the necessary tools to facilitate effective communication. However, advancements in modern technology have paved the way for the development of various tools and software aimed at improving the quality of life for hearing-disabled individuals. This research paper presents a comprehensive study employing five distinct deep learning models to recognize hand gestures for the American Sign Language (ASL) alphabet. The primary objective of this study was to leverage contemporary technology to bridge the communication gap between hearing-impaired individuals and individuals with no hearing impairment. The models utilized in this research (AlexNet, ConvNeXt, EfficientNet, ResNet-50, and VisionTransformer) were trained and tested using an extensive dataset comprising over 87,000 images of ASL alphabet hand gestures. Numerous experiments were conducted, involving modifications to the architectural design parameters of the models to obtain maximum recognition accuracy. The experimental results of our study revealed that ResNet-50 achieved an exceptional accuracy rate of 99.98%, the highest among all models. EfficientNet attained an accuracy rate of 99.95%, ConvNeXt achieved 99.51% accuracy, AlexNet attained 99.50% accuracy, while VisionTransformer yielded the lowest accuracy of 88.59%. Full article
(This article belongs to the Collection Machine Learning and AI for Sensors)
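A minimal evaluation sketch for a fine-tuned ResNet-50 of the kind reported above; the checkpoint path, test-set layout, and the 29-class head (assuming the widely used 87,000-image ASL alphabet dataset layout) are assumptions rather than the authors' exact setup.

```python
# Hedged sketch: evaluate a fine-tuned ResNet-50 on a held-out ASL alphabet split.
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

model = models.resnet50()
model.fc = torch.nn.Linear(model.fc.in_features, 29)                       # assumed class count
model.load_state_dict(torch.load("resnet50_asl.pt", map_location="cpu"))   # assumed checkpoint
model.eval()

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
loader = DataLoader(datasets.ImageFolder("asl_alphabet_test/", transform=tfm), batch_size=64)

correct = total = 0
with torch.no_grad():
    for images, labels in loader:
        preds = model(images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
print(f"test accuracy: {correct / total:.4%}")
```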

22 pages, 397 KiB  
Article
A Transition to Multimodal Multilingual Practice: From SimCom to Translanguaging
by Julia Silvestri and Jodi L. Falk
Languages 2023, 8(3), 190; https://doi.org/10.3390/languages8030190 - 11 Aug 2023
Viewed by 3262
Abstract
Historically, the field of deaf education has revolved around language planning discourse, but little research has been conducted on Deaf and Hard of Hearing (DHH) students with additional disabilities as dynamic multilingual and multimodal language users. The current study focuses on the language planning process at a school serving DHH and Deaf–Blind students with varied additional disabilities. A previous Total Communication philosophy at the school was implemented in practice as Simultaneous Communication (SimCom) and later revised as a multimodal-multilingual approach with the goal of separating American Sign Language (ASL) and English and using multimodal communication such as tactile ASL and Augmentative and Alternative Communication (AAC). To implement this philosophy without reverting to SimCom, the school employed a language planning process using action research to reflect on cycles of improvement. A grounded theory approach was used to identify and analyze themes over a three-year period of language planning and professional development in multimodal communication. Triangulated data include language planning artifacts and an online survey of staff perceptions, analyzed by coding concepts and categories, relating concepts to define translanguaging mechanisms and attitudes, and developing an overarching theory of how a school comes to value translanguaging after three years of valuing complete access to language. In the context of a multilingual, multimodal language planning cycle, developing a shared language ideology guided by how Deaf, DeafBlind, and Deaf-Disabled (DDBDD) people use language emerged as an overarching theme that promoted dynamic languaging and understanding of strategies for effective communication. Full article
(This article belongs to the Special Issue Translanguaging in Deaf Communities)