Review

Advancing Personalized and Inclusive Education for Students with Disability Through Artificial Intelligence: Perspectives, Challenges, and Opportunities

by Samia Ahmed 1, Md. Sazzadur Rahman 2, M. Shamim Kaiser 2,* and A. S. M. Sanwar Hosen 3,*

1 Department of Information and Communication Technology (ICT), Bangladesh University of Professionals, Dhaka 1216, Bangladesh
2 Institute of Information Technology, Jahangirnagar University, Dhaka 1342, Bangladesh
3 Department of Artificial Intelligence and Big Data, Woosong University, Daejeon 34606, Republic of Korea
* Authors to whom correspondence should be addressed.
Digital 2025, 5(2), 11; https://doi.org/10.3390/digital5020011
Submission received: 10 January 2025 / Revised: 6 March 2025 / Accepted: 21 March 2025 / Published: 27 March 2025
(This article belongs to the Collection Multimedia-Based Digital Learning)

Abstract: Students with disabilities often face challenges in participating in classroom activities alongside their peers. Assistive technologies powered by Artificial Intelligence (AI) and Machine Learning (ML) can provide vital support to ensure inclusive and equitable learning environments. In this paper, we identify AI- and ML-powered inclusive education tools and technologies, explore the factors required for developing personalized learning plans using AI, and propose a real-time personalized learning framework. The tools and factors were identified through our exploration of academic databases (including Google Scholar), blog sites, company sites, and the tools and techniques used in specialized centers. This study proposes a system model that includes engagement and adaptive learning components and uses Bloom’s taxonomy to continuously track the learner’s development. We compiled a comprehensive list of AI- and ML-powered inclusive education tools and technologies and determined key factors for developing personalized learning plans, including emotional state, student progress, preferences, learning styles, and outcomes. Based on this research, AI-based inclusive education has the potential to improve the educational experience of students with disabilities by creating a more equitable and inclusive learning environment.

1. Introduction

Educational equity goes beyond offering equal opportunities to all students, irrespective of individual needs; it ensures that all students receive the resources and support essential to realizing their full potential. Students with disabilities, who experience various physical and cognitive impairments, face significant barriers when accessing traditional educational systems [1]. The range of disabilities covers various conditions, including neurodevelopmental disorders such as ADHD and learning disabilities, mobility impairments, motor disabilities such as speech impairment, sensory disabilities, and acquired conditions such as traumatic brain injuries [2,3]. Students with different disabilities require specific support systems to actively participate in classroom activities.
Traditional educational systems have struggled to establish learning environments that meet the inclusivity needs of disabled students. Inclusive education represents a transformative method for ensuring that all students can participate fully in the mainstream educational environment regardless of their individual needs [4]. The current fast pace of progress in Artificial Intelligence (AI) and Machine Learning (ML) creates unique possibilities to correct the deficiencies present in traditional schooling systems. AI-driven assistive technologies alongside adaptive learning systems create opportunities for inclusive education by allowing students with disabilities to engage actively and succeed together with their peers [5]. These systems dynamically adjust to students’ distinct learning styles and emotional states while tracking their progress in order to deliver personalized feedback and teaching strategies that meet individual needs.
To address individual variances in learning ability, pace, and preferences, ref. [6] examined the present usage of AI technology in personalizing learning experiences to satisfy various student requirements. They investigated how AI can lessen educational disparities, and also looked at how AI-powered tools affect student motivation, performance, and long-term academic achievement. A number of technical tools used to promote inclusive teaching methods were highlighted by [7], who considered papers from 2013 to 2023. The authors of [8] sought to identify and examine methods and tools that encourage students with a range of disabilities to participate. In accordance with PRISMA criteria, a systematic review was carried out to collect research inquiries from 159 research studies. The application of AI in fostering collaborative learning settings, its role in personalized learning, and its potential to further improve accessibility and inclusion for students with disabilities were covered in [9]. The authors of [10] investigated the use of AI to improve inclusive education for children with impairments. Personalized learning platforms, speech recognition, augmented communication devices, accessible content conversion, predictive analytics, social robots, virtual reality, smart assistive devices, and automated accessibility features in educational technology are just a few of the AI applications covered in that study. In [11], the authors examined issues with inclusive education for students with disabilities and how they relate to the desire for innovation. Inclusion and assistance for pupils with impairments were described in [12], whose purpose was to provide information and offer an insightful analysis of the current state of learning analytics and inclusivity. In [13], the authors reviewed AI applications in inclusive education in India, emphasizing important developments, effective uses, and difficulties. They offered information on how AI might be used to support inclusive education in India and meet the various demands of students.
The present study aims to examine the transformative power of AI in generating inclusive educational practices for students with disabilities in the context of developing countries. We focus on the utilization of AI/ML technologies to develop personalized learning plans that address students’ challenges and foster their academic success. This study addresses key issues in creating personalized learning experiences, including individual needs, learning styles, abilities, and preferences. In addition, it identifies and analyzes the assistive technologies required to support diverse disabilities within inclusive classroom environments. Based on these findings, we propose an AI-based personalized learning model that adapts to real-time feedback, student progress, and emotional states. Our approach aims to enhance the educational experience for students with disabilities. To implement it, we evaluate existing assistive technologies, drawing from academic databases, industry innovations, and insights from specialized institutions in developing countries. Our investigation addresses ethical considerations for technology-enhanced education, including data privacy, algorithmic bias, and equity, while also exploring challenges and opportunities in implementing inclusive policies supported by digital solutions. Furthermore, our study aligns with the principles of “Education 4.0” [14], which emphasizes personalized experiential learning. By integrating assistive tools with these principles, this research investigates how to cultivate engaging and effective environments for students with disabilities. Finally, this work seeks to advance equitable education in developing countries by:
  • Identifying critical factors for designing personalized learning experiences tailored to individual student needs.
  • Evaluating assistive tools required to support diverse disabilities in inclusive classrooms.
  • Proposing a technology-driven framework for personalized learning tailored to students with disabilities.
  • Analyzing ethical challenges and future directions for implementing individualized learning in higher education settings.

2. Methodology

This study examines journals, book chapters, and conference publications from January 2019 to December 2024 using data from the IEEE Xplore, SpringerLink, PubMed, and Google Scholar databases. To narrow down the search results, the search strategy employed terms such as “inclusive education”, “personalized learning”, “disability”, and “special needs”, combined with Boolean operators. Publications that presented empirical research, theoretical frameworks, or thorough reviews on the topic within the stated time frame were included, while duplicate items were eliminated. The initial screening focused on titles and abstracts to find potentially relevant studies, which were then fully evaluated to determine eligibility. The analysis gave preference to papers that specifically addressed the selected keywords and associated issues, as well as frequently cited papers, while the date range ensured coverage of recent advancements.
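For illustration, a query of the kind described might be constructed as follows (a representative example, not the authors’ exact search string):

```
("inclusive education" OR "personalized learning") AND ("disability" OR "special needs")
```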

3. Assistive Tools

In general, accessing and participating in conventional education programs is difficult for individuals with physical limitations. However, with the help of assistive technologies they may participate in regular classes. Additionally, assistive technology provides disabled pupils with the chance to learn independently. There are numerous applications and assistive technologies that can provide disabled children with a promising future. Figure 1 visualizes some of these assistive tools.

3.1. Hearing Impairment

Students who have difficulty hearing require assistive devices. Various types of assistive tools are used in classrooms, including hearing aids, Frequency-Modulated (FM) amplification systems, speech-to-text converters, and more.

3.1.1. Hearing Aids and Cochlear Implants

Electronic devices such as cochlear implants and hearing aids can help deaf people to hear sounds and improve their communication. Hearing aids enhance sounds, while cochlear implants directly stimulate the auditory nerve. In Bangladesh, hearing aids are used for hearing impaired students. HI-CARE is a wholesale dealer for hearing aids imported from Singapore, and more recently from England. For primary and secondary education, a typical curriculum is used. Students in grades 9 and 10 who continue their education take tests at the Open University, which permits some flexibility in meeting their needs [15].

3.1.2. Sign Language

Some schools use sign language and lip-reading for hearing impaired students. Old French Sign Language dates back to the seventeenth century and served as the foundation for American Sign Language (ASL) [16]. Various sign language apps are available that can help deaf individuals learn sign language, communicate with others, and translate written text into sign language. Popular sign language apps include SignSchool, ASL Dictionary, and Hand Talk. A number of different sign languages have also been detected using various ML algorithms [17,18,19,20,21,22,23,24,25].

3.1.3. FM Listening Systems

These tools are used in conjunction with a microphone and a small transmitter device. The speaker’s voice is forwarded directly to the listener’s ear by the instrument; as a result, the child’s or speaker’s voice becomes louder [26]. Wireless radio-frequency communication, which avoids the signal spillover issue, was first introduced by the Fredericia School in Denmark in 1959. Children were provided with AM receivers set to the proper frequency channel for their class. Later, FM transmissions took the place of AM transmissions. An AM system was first used in classrooms in the United States (US) in 1963 by the Electronics Futures Company of New Haven, Connecticut, and a portable FM system followed a few years later [27]. The benefit of these strategies for children with listening issues is that they can hear what the speaker or teacher is saying [26].

3.1.4. Tape Recorders

Children with listening difficulties utilize these tools to record spoken material from the speaker or teacher’s lesson. Children who have trouble processing, interpreting, or remembering what they hear might use these recorders to listen to oral presentations repeatedly [28].

3.1.5. Infrared Hearing Systems

Infrared hearing systems represent a popular alternative to induction loop devices for assisting people with hearing impairments in communicating.

3.1.6. Visual Alerts

Signaling devices can be used for visual alerts. An event or action such as a phone call, doorbell, or fire alarm can be conveyed by a visual alert. For deaf people who may not be able to hear auditory messages, visual alerts such as flashing lights can be very useful [29]. An Arduino-powered call bell-driven LED system for the deaf is one specially designed device that provides a visual indication when a call bell is pressed. This technology offers an effective means of communication for people with hearing impairments. In circumstances where there might not be sufficient auditory signals, it can ensure that they can receive messages promptly and reliably [30].

3.1.7. Vibrotactile Aids

A mechanical device affixed to the head close to the ear can assist deaf people in detecting and interpreting sounds using their sense of touch. Vibration techniques can inform those with hearing loss that there may be a sound to which they must respond.

3.1.8. Audio–Visual Speech Recognition

The speech recognition process can be enhanced by Audio–Visual Speech Recognition (AVSR), which includes the video modality, primarily using the speaker’s lip movement to convey information. Several authors [31,32,33] have proposed different ML-based approaches for AVSR. LipNet [34] is an end-to-end trained model for lipreading.

3.1.9. Speech-to-Text Converters

Speech-to-text conversion offers textual captions for both audio and video information. It can be applied to live events, online videos, movies, and television programs. Deaf people can freely access AVSR content thanks to closed captioning. The first system to identify a single person’s spoken numbers was Audrey, a speech recognition system created at Bell Laboratories in 1952. In 1962, IBM released a product that could recognize sixteen English words. Laboratories in the Soviet Union, the United States, England, and Japan developed devices that could detect four vowels and nine consonants. The Carnegie Mellon “Harpy” speech recognition engine recognized 1011 utterances between 1971 and 1976. The first commercial speech recognition companies to successfully interpret the utterances of multiple people were Bell Labs and Thresholds Science [35]. More recently, Dragon NaturallySpeaking, Okay Google, Hey Siri, and Cortana are notable speech-to-text converters [32]. In [36], the authors used Google Speech-to-Text for hearing impaired persons.

3.1.10. Audio Looping

Another kind of amplification system is audio looping. It has been implemented to regulate the teacher’s voice volume, ensure consistent auditory cues across classroom, home, and school environments, better manage background noise, and offer maximum mobility inside the classroom [37]. A magnetic induction loop communication system was patented in 1937 by the British inventor Joseph Poliakoff, while Denmark set the standard for the use of inductive loop systems in deaf classrooms [27].

3.1.11. Gesture Recognition

The ability to recognize gestures is crucial for deaf individuals. They can interact with the blind using gesture recognition. In [38], the authors created a method for identifying gestures.

3.1.12. Online Handwritten Character Recognition (OHCR)

OHCR allows a computer to read and understand handwriting input by accepting a string of coordinate pairs from the pressure of a stylus or finger touching a pressure-sensitive digital tablet. It does this by detecting the movement of the stylus or fingertip. In [39], the authors implemented OHCR as an assistive tool in a web-based learning application to help deaf students with learning.

3.1.13. Augmented Reality Glasses

Through real-time transcription, voice emotion detection, sound indication capabilities, and classroom assistance tools, ref. [40] created an intelligent software solution for augmented reality glasses that would help students in their educational pursuits.

3.1.14. Speech Enhancement (SE) Techniques

The Speech Enhancement (SE) approach was created to help Hearing Aid (HA) users perceive speech as being of higher overall quality. The suggested approach was put into practice on a smartphone as a real-time SE application [41].

3.1.15. BridgeApp

This mobile communication app is designed to help people who are deaf, hard of hearing, mute, or without disabilities communicate offline. People without disabilities and the deaf/mute can use this as a tool to break down communication barriers. The developed system utilizes both Filipino Sign Language (FSL) and ASL to facilitate communication between users in their daily activities. BridgeApp was created based on the individual needs of the deaf community, taking into account their particular daily activities [42].

3.2. Vision Impairment

3.2.1. Braille

A Braille display is a hardware tool that converts digital text into Braille characters. It enables blind pupils to use a refreshable Braille display to read and write text on a computer screen. Louis Braille developed the Braille alphabet at the start of the nineteenth century by adapting Charles Barbier de la Serre’s military-purpose language. Today, Braille is one of the most popular reading aids for the blind [43]. To help blind students in Bangladesh learn math Braille using Nemeth code, an interactive assistive mobile phone application has been proposed [44].

3.2.2. Optical Character Recognition (OCR)

With the aid of OCR technology, blind pupils can scan books or other printed materials and have the text interpreted and read aloud in synthetic or digital voice [37].

3.2.3. Reading Assistive Tool

DAISY (Digital Accessible Information System) is a technical standard for computerized text, magazines, and digital audio books. DAISY is made specifically for use with printed materials and is intended to be a complete audio replacement for those with print difficulties, such as dyslexic students, blind students, and people with poor eyesight. Books, magazines, newspapers, journals, computerized text, and synchronized presentations of text and audio are all examples of DAISY multimedia. The DAISY format, which is based on the MP3 and XML formats, contains sophisticated features such as the ability for users to search, set bookmarks, precisely navigate line-by-line, and control speaking pace without distortion. In addition, DAISY offers tables, references, and supplementary pieces of information that are accessible audibly. DAISY enables visually challenged listeners to navigate through complex materials such as encyclopedias and textbooks [45]. In [46], the authors investigated how visually impaired students perceived and experienced the use of Microsoft’s Reading Progress program to improve their English reading abilities, specifically their pronunciation and fluency. AI-WEAR, a smart text reader for students with visual impairments, combines a Raspberry Pi with Google Assistant and audio–visual calling [47].

3.2.4. Screen Magnifier

Screen magnifiers such as Lunar and ZoomText can be used by students who have poor eyesight. In France, both computer and whiteboard magnifiers are available [48].

3.2.5. Text-to-Speech Converters

Text-to-speech converters are software programs that convert text to speech so that blind people can hear it. Text-to-speech conversion can be used to read out text from books, documents, and web pages. The transition from printed books to talking books started in 1935 [49]. A few years later, electronic text-to-speech was introduced, allowing blind people to use their improved hearing to consume written content [50]. Popular text-to-speech software programs include NaturalReader and Read and Write [48], while many authors have proposed various text-to-speech converters [51,52,53,54,55]. In [56], the authors developed an approach for converting text to speech and speech to text. By creating a text-to-speech system, ref. [57] made it possible for blind pupils to take tests on their own. ChatGPT can translate written text into speech, helping children who struggle with reading or vision difficulties [58].

3.2.6. Screen Reader

Screen readers are software applications that read the information on the user’s computer screen out loud. They make it possible for blind students to access digital text and to use software programs such as word processors, email clients, and web browsers. Well-known screen readers include JAWS, NVDA, VoiceOver, TalkBack, and Narrator [48].

3.2.7. Obstacle Avoider

For visually impaired people, a tool that can avoid obstacles is very important. Researchers [59,60,61,62,63] have proposed several obstacle avoidance tools; [64] created a portable robot that can help blind individuals navigate a motorway with obstacles, while [65] developed the LTE-connected Smart Blind Stick to identify obstacles.

3.2.8. Electronic Notetakers

Electronic notetakers are portable devices that allow students to take notes in Braille or on a keyboard. They can be used to record lectures, take notes, and read electronic documents.

3.2.9. Object Detection

Blind people have an extremely hard time navigating the world. An “Assistive Voice Guidance” system was developed by [66] to help blind individuals learn information about others in real time. The vision aid developed by [67] includes text-to-speech conversion, distance estimation, and object detection.

3.2.10. Gesture Recognizer

Gesture recognition is very important for blind people, as it allows them to communicate with deaf or mute individuals. In [38], the authors developed an approach for gesture recognition.

3.2.11. Tactile Diagrams

Tactile diagrams are raised line drawings that allow blind students to access visual information such as graphs and charts. They can be created using specialized software and embossing machines. Visually impaired students learn about their surroundings through kinesthetic and tactile input. Such input should be viewed as an additional system that facilitates learning, rather than as “lesser senses” to be used in the absence of vision [68].

3.2.12. Human Activity Recognizer (HAR)

Due to the loss of sight, which is essential for comprehending and interacting with the environment, visually impaired pupils encounter a variety of practical challenges. To understand the environment of the classroom, HAR is very important. Several authors [20,69,70,71] have suggested different models for human activity recognition.

3.3. Speech Impairment

Students with speech impairments often struggle in a conventional classroom; assistive tools can be vital for them.

3.3.1. Speech Synthesizers

Text-to-speech synthesis research started in 1947, and in 1960 the first English-language text-to-speech synthesizer was created [72]. For students who struggle with verbal communication, advanced speech synthesizers can serve as substitute voices and a compensatory aid. These devices provide them with understandable speaking voices. Students using portable systems can join in class discussions [73].

3.3.2. Visual Speech Recognition (VSR)

Lip reading is a VSR technique that uses visual signals generated by lip movement to interpret and comprehend spoken words. LipTalk is a cutting-edge visual voice recognition program created especially for use by those with speech impairments [74].

3.3.3. Automatic Speech Recognition (ASR)

In [75], the authors presented a mobile application that enables people with speech impairments to record their audio input for the purpose of improving the speech model. This method enabled the creation of the first database of speech samples from native Italian speakers with dysarthria, taking into account Italian as the primary language. A gamified e-tutor system that teaches statistics to senior high school students with speech impairments was developed in the Philippines using speech recognition [76].

3.4. Mobility Impairment

Students with locomotor disabilities face problems in conventional classrooms, and assistive tools are very important for easy movement.

3.4.1. Wheelchair

People with physical limitations can undertake daily mobility-related tasks in wheelchairs. Wheelchairs have been used to transport individuals with disabilities since the sixth century BCE, although mass production of these devices only began in 1933 [77]. Standing wheelchairs are now available, which allow users to converse with other standing people and prevent secondary issues such as pressure sores. In [78], the authors proposed a hand gesture recognition-based wheelchair for people with impaired mobility.

3.4.2. Basic Adaptive Keyboard

Basic keyboard modifications that make it easier for students with physical disabilities to use computers include reducing the number of keys on the keyboard, arranging the letter keys alphabetically, and using larger keys that are easier to see and touch in place of the standard keys [37].

3.4.3. Prosthetic

An artificial device known as a prosthesis substitutes for a missing bodily component that may have been lost due to trauma, illness, or congenital abnormality. Prosthetics are designed to replace the missing bodily part’s lost functionality. A prosthetic device’s primary purpose is to give a disabled person a tool that can replace one or more limbs. In [79,80], the authors proposed robotic arms for disabled persons.

3.4.4. Robotic Exoskeleton

Exoskeleton technology has assisted in the rehabilitation of people with various levels of handicap. In general, an exoskeleton can be thought of as a rigid mechanical frame with joints that allow the human operator to move. In [81], the authors proposed an RNN-based robotic exoskeleton for mobility impaired persons.

3.4.5. Home Modification

Home improvements can improve accessibility for those with physical disabilities. This can involve lowering counters and enlarging entrances in addition to the installation of grab bars, stairlifts, and ramps.

3.5. Neurodevelopmental Impairment

3.5.1. Reading Assistive Tool

Dyslexic students can use a reading assistive tool. ReaDys is a prototype intelligence-assisted speech-to-text system that aids dyslexic children in their reading processes. The speech-based system was developed using Microsoft Speech, and an Application Programming Interface (API) was designed specifically for children who struggle with dyslexia. The prototype was completed using Microsoft Visual Studio and a Windows Forms application [82].

3.5.2. Digital Notepad

Students with learning difficulties may be able to enhance their working memory as well as their visual and auditory learning capacity to supplement information processing during lectures and review by using the digital note-taking tools [83]. In [84], the authors proposed a digital notepad called “JollyMate” for dyslexic students.

3.5.3. Listening Devices

An FM transmitter and receiver-based listening device can be used as an assistive tool for students with ADHD. In [85], a wireless device was used that was later adapted to FM transmission. Teachers then used small microphones and transmitter devices to deliver lectures, allowing hyperactive students to concentrate on the lectures without outside noise.

3.5.4. Humanoid Robots

Humanoid robots can help when teaching students with Autism Spectrum Disorder (ASD). Robots can interact with students and improve their social interaction skills. A humanoid robot named “Kaspar” has been developed to assist ASD students [86]. It was first developed and utilized for ASD children in [87].

3.5.5. CareMate

CareMate is a web application designed for people with severe autism that uses UI/UX principles to teach functional skills. It has shown notable increases in post-test scores [88].
Table 1 summarizes the reviewed assistive tools along with their features and limitations.

4. AI- and ML-Based Assistive Tools and Technologies

AI and ML technologies have the potential to provide effective personalized learning environments, and various algorithms are used for personalized learning. AI-based assistive tools involve creating and deploying AI solutions to overcome the obstacles encountered by individuals with impairments in a range of spheres of life. AI-based assistive technologies make participation in regular classes possible for disabled students, and can also allow students with disabilities to pursue autonomous learning.

4.1. Convolutional Neural Network (CNN)

4.1.1. Sign Language from Images

Convolutional Neural Networks (CNNs) have been developed to discern visual patterns directly from pixel images with minimal preprocessing. CNNs have been shown to recognize Bengali Sign Language (BdSL). Typically, hearing-impaired students use interpreters to communicate with others; however, interpreters are not always easily available. CNNs can play a critical role in recognizing and interpreting BdSL in such settings, providing a more inclusive and accessible mode of communication, and can be used to detect sign languages from images [17]. The Deep Convolutional Neural Network (DCNN) is an advanced variant of CNN with a higher number of hidden layers than a standard CNN. A DCNN was specifically designed for an assistive system aimed at detecting and interpreting Arabic sign language for individuals who are hearing impaired or deaf [18]. By leveraging the power of DCNN technology, this system can facilitate effective communication and bridge the gap between individuals with hearing disabilities and the wider Arabic-speaking community.
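As a rough illustration of how such a classifier is structured, the following Keras sketch builds a small image CNN; the 64×64 RGB input size and the number of sign classes are assumptions for demonstration, not the architectures of [17,18]:

```python
# Minimal sketch of a CNN sign-language image classifier (illustrative only).
from tensorflow.keras import layers, models

NUM_CLASSES = 38  # assumed size of the sign alphabet

model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),
    layers.Conv2D(32, 3, activation="relu"),          # learn local visual patterns
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),  # one probability per sign
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=10)    # given a labeled sign dataset
```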

4.1.2. Text-to-Speech (TTS) Technology

Models based on CNN architectures can be used for TTS technology, trained on image datasets. An image of Braille input is provided to the classifier, which recognizes it and converts the text into English audio output [51].

4.1.3. Audio–Visual Speech Recognition (AVSR)

In [31], a deep denoising autoencoder was used for audio feature extraction and a CNN consisting of seven layers was used for visual feature extraction. Word recordings were used as training data for audio speech recognition, while raw mouth area images were used for visual recognition. In [32], the authors proposed a novel deep learning-based audiovisual speech recognition model for lip reading.

4.1.4. Obstacle Avoidance

For those with visual impairments, a real-time CNN-based object detection system was proposed in [59]. The prototype identified visual items and communicated the discovered information through sound. The whole process was developed using an image dataset.

4.1.5. Object Detection

A real-time CNN-based system was developed in [66] based on a wearable camera that takes pictures of persons approaching the blind user. The system analyzes these photos using a CNN algorithm to identify whether the approaching person is recognized or unknown. In [67], the authors developed an assistive technology that incorporates real-time YOLOv8 object detection for persons with visual impairments.
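As a hedged sketch of how such real-time detection can be assembled with the publicly available ultralytics package, the snippet below runs a pretrained YOLOv8 model on camera frames; the camera index, confidence threshold, and the print-out standing in for audio feedback are assumptions, not details of [66,67]:

```python
# Sketch: real-time object detection on camera frames with a pretrained YOLOv8.
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")      # small model pretrained on COCO
cap = cv2.VideoCapture(0)       # wearable camera / webcam (assumed device 0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame, conf=0.5)          # detect objects in this frame
    for box in results[0].boxes:
        label = model.names[int(box.cls)]     # e.g. "person", "chair"
        print(label)                          # a real system would speak this via TTS
cap.release()
```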

4.1.6. Speech Enhancement (SE)

To increase the overall quality of speech, ref. [41] proposed an SE approach based on a multi-objective learning CNN.

4.2. Recurrent Neural Network (RNN)

4.2.1. Sign Language from Image

The Recurrent Neural Network (RNN) is a type of Artificial Neural Network (ANN) designed to process sequential or time series data. A specific type of RNN known as Long Short-Term Memory (LSTM) is commonly utilized. LSTM is particularly useful for sign language recognition, as it can effectively capture and interpret the sequential nature of sign language gestures. To evaluate RNN-based models, Kinect 2.0 is utilized with a Chinese sign language vocabulary, and LSTM is used to create a complete model. This system requires no hand-crafted feature design and receives the movement trajectories of four skeleton joints as input [19].
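A minimal Keras sketch of this kind of sequence model is given below; the sequence length, the use of four joints with 3D coordinates, and the vocabulary size are illustrative assumptions rather than the exact configuration of [19]:

```python
# Sketch of an LSTM classifier over skeleton-joint trajectories.
from tensorflow.keras import layers, models

SEQ_LEN, N_JOINTS, VOCAB = 60, 4, 100   # 60 frames, 4 joints (x, y, z), 100 signs

model = models.Sequential([
    layers.Input(shape=(SEQ_LEN, N_JOINTS * 3)),  # flattened 3D coordinates per frame
    layers.LSTM(128),                             # captures temporal gesture dynamics
    layers.Dense(VOCAB, activation="softmax"),    # one score per sign word
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```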

4.2.2. Human Activity Recognition (HAR)

Personalized learning model technology in education is valuable for identifying hidden trends in learner data that can be used to improve online learning environments. Research on full-path recommendations for personalized learning is particularly crucial for the creation of cutting-edge e-learning systems. A method for creating a personalized learning full-path recommendation model using LSTM neural networks and clustering algorithms has been provided; this approach uses learning resource datasets such as course data, learner base data, learner type data, and learner behavior data [89]. LSTM architecture is also utilized to recognize human activity. Along with the RNN, view adaptation schemes are incorporated to detect human activity, allowing the network to adjust to the best observation angles from beginning to end [69].

4.2.3. Gesture Recognition

An RNN-based LSTM technique is used for hand gesture recognition based on Electromyogram (EMG) signals, which are collected from muscles using electrodes. After collection, these signals are preprocessed and LSTM is applied for classification [78].

4.2.4. Obstacle Avoidance

For obstacle avoidance, an RNN based on the Beetle Antennae Olfactory metaheuristic optimization method is implemented. To calculate the penalty term using the GJK algorithm, the 3D geometries of the manipulator and obstacle are used directly. The controller may operate for an obstacle of any shape, as the GJK algorithm is used to calculate the distance between the manipulator and the obstructions [60].

4.2.5. Text to Speech (TTS) Technology

RNN technology is applied to Mandarin TTS that uses a prosodic information synthesizer. It uses a four-layer compact RNN to produce prosodic data simultaneously. Many human phonologic rules were taught to the RNN prosody synthesizer. The RNN was trained on real utterances and their corresponding texts [52].

4.2.6. Audio–Visual Speech Recognition (AVSR)

Vocabulary test sets created from segmented utterances are used for AVSR. Three different techniques have been followed: audio-only, visual-only, and audio–visual. For audio-only models, an RNN-T model with a five-layer stack of bidirectional LSTMs is used; for visual-only models, convolutional filters are used [33].

4.2.7. Robotic Exoskeletons

To categorize the user’s movements and regulate the trajectories of an exoskeleton, an RNN and an Adaptive Non-singular Fast Terminal Sliding Mode Controller (ANFTSMC) method are combined. EMGs from the sternocleidomastoid and biceps brachii muscles are employed for training [81].

4.2.8. Reading Assistive Tool

An “AI-WEAR” reading aid for blind or visually impaired students was created with LSTM in [47].

4.3. CNN and RNN Hybrid Approach

4.3.1. Image Captionbot

Image caption generation can make use of Deep Learning (DL) algorithms. For classifying images and identifying features in images, VGG16, one of the best CNN architectures, can be employed. For text description, an embedding layer and LSTM can be utilized. A network for creating image captions is created by combining these two networks [90].

4.3.2. Human Activity Recognition (HAR)

LSTM can be used to recognize human behaviors. In [70], the authors proposed a model that uses CNN as well as RNN for HAR. The model has a single LSTM layer.

4.3.3. Obstacle Avoidance

The combination of RNN and CNN is employed for wayfinding and obstacle detection to aid visually impaired individuals. CNN plays a crucial role in identifying obstacles in the environment, while RNN is utilized to predict the appropriate navigation paths [61]. By leveraging the capabilities of both RNN and CNN, visually impaired individuals can benefit from improved obstacle detection and reliable navigation assistance.

4.3.4. Text To Speech (TTS) Technology

Approaches combining CNN and LSTM are used for TTS technology. Captions are generated from images using the CNN-LSTM method: the CNN extracts the relevant visual information, while the LSTM generates the caption from the visual input. This intelligent system then converts the captions into voice [53].

4.3.5. Audio–Visual Speech Recognition (AVSR)

People who are hard of hearing can greatly benefit from assistive technology employing AVSR. Three modules make up AVSR: audio speech recognition, visual speech recognition, and multimodal fusion. Audio speech recognition is the process of turning spoken words into text; the neural network model is trained using an audio dataset. A visual speech recognition system uses lip movement to automatically identify the uttered words; the model is trained using frames from collected videos. This hybrid process uses speech-to-text based on an RNN-GRU model and visual speech-to-text based on a CNN model [32].

4.4. Support Vector Machine (SVM)

A Support Vector Machine (SVM) can be utilized to develop an assistive device capable of extracting text from printed images, enabling individuals with visual impairments to access and read documents. This technology offers a user-friendly solution that allows them to interact with printed materials more easily and independently. By harnessing the power of SVM, accessibility can be enhanced, empowering individuals with visual impairments to engage with printed information effortlessly [91].
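As a hedged illustration of the SVM classification step at the core of such a reader, the scikit-learn sketch below trains a kernel SVM on the bundled digits dataset, which stands in for scanned character glyphs; the kernel and hyperparameters are assumptions, not values from [91]:

```python
# Sketch: kernel-SVM character classification on 8x8 glyph images.
from sklearn import datasets, svm
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

digits = datasets.load_digits()                    # 8x8 images of handwritten digits
X = digits.images.reshape(len(digits.images), -1)  # flatten pixels into feature vectors
X_train, X_test, y_train, y_test = train_test_split(
    X, digits.target, test_size=0.2, random_state=0)

clf = svm.SVC(kernel="rbf", C=10, gamma=0.001)     # hyperparameters are illustrative
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```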

4.4.1. Gesture Recognition

SVM is utilized for gesture recognition. An Active Shape Model (ASM) is used for gesture detection and SVM is used for classification; the ASM is applied to an image dataset, after which SVM classification is performed [92].

4.4.2. Sign Language from Image

To detect Indian Sign Language (ISL) from images, SVM is applied. First, the images are preprocessed; then, segmentation and feature extraction are applied; finally, SVM is used for classification [20].

4.4.3. Human Activity Recognition (HAR)

SVM is also used for HAR classification. After data preprocessing and feature extraction, SVM is applied along with other classification methods to an EMG-based lower limb dataset, for which SVM works best [20].

4.4.4. Control of Robotic Arm

To operate an assistive robot arm and interact with human arm movements, an EMG-based classification approach is used. The system, which uses an SVM as its foundation, categorizes movements of the upper human limbs using EMG data from the brachioradialis, biceps, and anterior deltoid as the input features. Using a seven-axis Denso robot arm, the suggested method is implemented as a control system to replicate human arm movements [80].

4.4.5. Text To Speech (TTS) Technology

SVM was applied in developing a mobile application named “Let’s Talk” for TTS technology. It uses the Google Speech API to convert speech to text and text to speech. The SVM kernel classifier is used in the camera API to classify the raw images of hand gestures for sign language [54].

4.4.6. Obstacle Avoidance

Using the multi-sensor principle, an electronic white stick has been designed to identify obstacles obstructing those with visual impairments. Real-time images captured from a USB webcam are used for this prototype; following preprocessing, the system uses statistical analysis to detect objects. A hyperplane-based SVM linear classifier differentiates objects from floors. When headphones are plugged into the audio jack, an alarm message is sent regarding any object seen by the USB webcam and ultrasonic sensor; similarly, a buzzer sounds when an IR sensor detects an object at close range [62].

4.4.7. DytectiveU

DytectiveU offers customized game-based activities to improve cognitive abilities associated with dyslexia. This tool uses an SVM algorithm [86].

4.5. Reinforcement Learning

In general, a reinforcement learning agent can detect and comprehend its surroundings, act, and learn through trial and error.
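The trial-and-error loop can be made concrete with a toy tabular Q-learning sketch; the five-state environment and reward scheme below are invented for illustration and do not correspond to any cited system:

```python
# Toy Q-learning: an agent learns by acting, observing rewards, and updating estimates.
import random

N_STATES, N_ACTIONS = 5, 2
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2      # learning rate, discount, exploration rate

def step(state, action):
    """Hypothetical environment: acting (action 1) in the last state pays off."""
    if state == N_STATES - 1 and action == 1:
        return 0, 1.0                   # episode resets with a reward
    return min(state + action, N_STATES - 1), 0.0

state = 0
for _ in range(2000):
    if random.random() < EPS:           # explore
        action = random.randrange(N_ACTIONS)
    else:                               # exploit the current value estimates
        action = max(range(N_ACTIONS), key=lambda a: Q[state][a])
    nxt, reward = step(state, action)
    # Q-learning update: move toward reward plus discounted future value
    Q[state][action] += ALPHA * (reward + GAMMA * max(Q[nxt]) - Q[state][action])
    state = nxt
print(Q)   # learned values favor moving toward the rewarding state
```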

4.5.1. Sign Language

Video is used for the detection of sign language using reinforcement learning. Sign language features are extracted from videos: first, a CNN-based 3D-ResNet is used for feature extraction; then, a transformer for continuous sign language recognition is applied; finally, a reinforcement learning method is used to train the system end-to-end [21].

4.5.2. Personalized Environment

A personalized learning environment for special needs students can be built in the classroom. It can activate the required learning material according to the particular needs of the students [93].

4.5.3. Obstacle Avoidance

With the help of reinforcement learning, an assistive device prototype was developed for visually impaired people. The device is helpful while walking on the sidewalk and can deal with obstacles that are either static or dynamic in nature. The system rests on three concepts: finding a “free path”, conducting a dialogue, and implementing both in an assistance system. In simulated robotic free-path modeling, a robot known as the Sidewalk Obstacle Avoidance Agent (SOAA) is trained to avoid obstacles by taking the free path. Another agent in charge of natural dialogue, the Sidewalk Obstacle dialogue Agent (SOCA), is also trained. To develop this system, important sidewalk obstacles were identified, which was made possible by the creation of the “AS” image database of roadblocks annotated by representative users. Finally, captions for the images were generated [63].

4.5.4. Kaspar Robot

Reinforcement learning algorithms have been used to help students with ASD improve their capacity for social connection. Some reports suggest that Kaspar’s humanoid form and characteristics make it more engaging for ASD pupils.

4.6. Decision Tree (DT)

When it comes to investigating the relationship between independent and dependent variables, the Decision Tree (DT) method is considered more effective than other algorithms. This property allows DT algorithms to provide useful insights into the underlying patterns and linkages in a dataset. Using DT-based personalized education approaches to improve student performance in the classroom has yielded encouraging outcomes [94]. By personalizing the educational experience for individual students using DT-based tactics, learning outcomes can be improved and academic achievement encouraged.
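For a concrete feel for DT-based personalization, the sketch below fits a small scikit-learn tree on invented student features and prints its learned rules; the feature names and toy data are hypothetical, not taken from [94]:

```python
# Sketch: a tiny decision tree over invented student features.
from sklearn.tree import DecisionTreeClassifier, export_text

# features per student: [hours_studied, prior_score, engagement_level]
X = [[2, 55, 1], [8, 80, 3], [5, 65, 2], [9, 90, 3], [1, 40, 1], [6, 75, 2]]
y = [0, 1, 0, 1, 0, 1]   # 1 = met the learning objective (toy labels)

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["hours", "prior", "engagement"]))
print(tree.predict([[4, 60, 2]]))   # prediction for a new student profile
```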

4.6.1. Human Activity Recognition (HAR)

HAR is a very important task for assistive technology, and DT has been applied to it using an EMG-based lower limb dataset containing data from three separate exercises performed by 22 male individuals. After data preparation and feature extraction, DT was applied along with other classification algorithms [20].

4.6.2. Gesture Recognition

For hand gesture identification, a wearable smart sEMG recorder with an integrated Gradient Boosting Decision Tree (GBDT) is used. An Analogue Front End (AFE) chip for neural signal acquisition is used to acquire the sEMG signal. Due to its simplicity, GBDT is employed in this work for classification; GBDT is a boosting algorithm that uses DTs as its weak learners [95].

4.7. Naive Bayes (NB)

4.7.1. Gesture Recognition

Five hand gestures are recognized using the Naive Bayes (NB) approach. The movement of the fingers is identified using EMG, with a Myo armband used to obtain the signal. The components of this system include a muscle sensor, a computer, a remote control, and a moving robot [96].

4.7.2. Sign Language Recognition

A Leap Motion Controller (LMC) and the NB algorithm are used for Indonesian sign language detection. Right-hand gesture features are collected through the LMC, and classification uses the NB algorithm [22].

4.7.3. Human Activity Recognition (HAR)

For HAR, networks of body sensors are used. Wireless body sensors and inertial sensors on cellphones are used to monitor human activities. The training dataset was made up of motion sensor readings collected from publicly available data. For performance analysis, NB and other clustering and classification techniques were used [71].
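A hedged sketch of the NB classification step for sensor-based HAR is shown below; the two accelerometer-style features and the tiny dataset are invented for illustration, not drawn from [71]:

```python
# Sketch: Gaussian Naive Bayes over motion-sensor summary features.
from sklearn.naive_bayes import GaussianNB

# features per time window: [mean acceleration, acceleration variance]
X = [[0.10, 0.010], [0.90, 0.300], [0.20, 0.020],
     [1.10, 0.400], [0.15, 0.015], [1.00, 0.350]]
y = [0, 1, 0, 1, 0, 1]   # 0 = sitting, 1 = walking (toy labels)

nb = GaussianNB().fit(X, y)
print(nb.predict([[0.95, 0.33]]))   # -> [1], i.e. walking
```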

4.8. Random Forest (RF)

4.8.1. Sign Language Recognition

A set of sensor gloves was created that can translate motions made in ISL into audible speech. Numerous sensors and modules, including Arduino Nano micro-controllers, flex sensors, touch sensors, Inertial Measurement Units (IMUs), and radio-frequency and Bluetooth modules, were attached to the gloves. These sensors quantify the states of both hands as a series of numerical data to record their movement. The sensor data are then forwarded to a Random Forest (RF) classifier to increase gesture recognition accuracy [23].
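The classification stage of such a glove can be sketched with scikit-learn’s RandomForestClassifier; the five flex-sensor values per sample and the gesture labels below are invented for illustration:

```python
# Sketch: Random Forest over glove flex-sensor readings.
from sklearn.ensemble import RandomForestClassifier

# each row: five flex-sensor bend values for one gesture sample (toy data)
X = [[10, 80, 85, 90, 88], [85, 15, 12, 10, 14], [50, 55, 52, 48, 51]] * 4
y = [0, 1, 2] * 4        # each label indexes a signed word

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(rf.predict([[12, 78, 83, 91, 86]]))   # -> [0], the first gesture class
```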

4.8.2. Gesture Recognition

A unique wearable gesture recognition system was developed and tested with five individuals. The premise behind this research is that accurate gesture detection is possible using the combined data from numerous electrodes wrapped around the limb. Completely encircling the limb with sensors should reduce the impact of motion artifacts and eliminate the requirement to place sensors exactly over specific muscle groups. The training data were classified using the RF method [97].

4.9. K-Nearest Neighbour (KNN)

4.9.1. Sign Language Recognition from Images

K-Nearest Neighbour (KNN) has been used to build a computerized Bangla sign language recognition system for diverse, complicated contexts. Camera images are used for training; after preprocessing, classification is performed using KNN [24].
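The KNN step itself is simple; the sketch below classifies simulated preprocessed feature vectors, with the feature dimensionality and class count assumed for illustration rather than taken from [24]:

```python
# Sketch: KNN classification of preprocessed sign-image feature vectors.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 64))      # 120 feature vectors (stand-ins for real images)
y = rng.integers(0, 6, size=120)    # 6 assumed sign classes

knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
print(knn.predict(X[:3]))           # new images would be preprocessed the same way
```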

4.9.2. Gesture Recognition

An Adaptive Network-based Fuzzy Inference System (ANFIS) and the KNN algorithm were applied for hand gesture identification. The ANFIS learning algorithm divides images into groups, and KNN recognizes hand gestures. Real-time pictures captured by cameras were used for this work; no gloves or wrist sensors were used while capturing the images [98].

4.9.3. JollyMate

A digital notebook was created using audiovisual aids to help dyslexic pupils learn to write letters and numbers in the English language. This model classifies handwritten text using Neural Network (NN) and KNN techniques [84].

4.10. Hidden Markov Models (HMM)

4.10.1. Intelligent Dyslexic System

Ndombo et al. [99] proposed an intelligent system for dyslexic students. It enhances reading, writing, and phonological awareness, the three components of English language literacy. They used gaming middleware in conjunction with an ML approach called the Hidden Markov Model (HMM) to assist dyslexic children with recognizing letters.

4.10.2. YUSR

This program was introduced to help dyslexic pupils learn Arabic. To aid with phonic, reading, and spelling challenges, it uses an HMM algorithm for automated voice recognition that was trained on 500 samples [100].
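As a hedged sketch of the HMM machinery behind such voice-recognition aids, the snippet below fits a GaussianHMM from the third-party hmmlearn package to synthetic MFCC-like frames; the feature dimension, state count, and data are all assumptions:

```python
# Sketch: fitting an HMM to speech-like feature sequences with hmmlearn.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 13))      # e.g. 13 MFCC features per frame (synthetic)
lengths = [100, 100]                # two utterances of 100 frames each

hmm = GaussianHMM(n_components=3, n_iter=20, random_state=1).fit(X, lengths)
states = hmm.predict(X[:100])       # most likely hidden-state path for one utterance
print(states[:10])
```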

4.11. Computer Vision

To create assistive technology that can recognize and categorize objects, faces, and emotions, AI-powered computer vision can be used. With the use of this technology, persons who are visually impaired may better navigate their surroundings and recognize people and objects [101].

4.11.1. Obstacle Avoidance

Computer vision is utilized in electrical wheelchairs for obstacle detection. The technology has a distance threshold that will sound an alarm when a wheelchair is getting close to an obstruction. A smartphone camera mounted on the back of a wheelchair serves as an alert system. The YOLOv3 model was employed to detect objects. To increase the system’s detection effectiveness, the researchers created an algorithm to identify barriers including pillars, doors, and wall edges. As a result, the system can choose between edge detection and obstacle detection thanks to the use of two methods [102].

4.11.2. Gesture Recognition

On the embedded Raspberry Pi platform, a vision-based algorithm was created to recognize and categorize dynamic hand motions in real-time. Convex extraction, contour detection, and rule-based classification were the three basic techniques. The apparatus can find six distinct movements with both hands in a variety of positions. Real-time image frames were used for this work [103].

4.11.3. Sign Language from Images

Thanks to computer vision, machines can extract significant features from collected photos. Image processing and ML can be thought of as subsets of this huge field. Computer vision along with a CNN is applied for real-time sign language recognition, with the CNN used to extract features. For this work, real-time photos taken by a camera are used [25].

4.11.4. Text-to-Speech (TTS) Technology

Assistive technologies make it easier for the blind to move around by locating the path, detecting impediments in front of them, and avoiding them. The computer vision technique of object detection is used, with YOLOv3 employed to detect objects. The system can also read text with the aid of OCR, which uses Python and an API to recognize the text. The final output for the user is text converted to voice using TTS, based on pictures captured by a camera [55].

4.12. Virtual Reality (VR)

The application of Virtual Reality (VR) for disabled people is increasing, and VR can be very helpful for visually impaired people. VR can be incorporated into Electronic Travel Aids (ETAs) to improve their performance. VR platforms are made to test and refine the viability of ETA devices, which consist of various equipment including wearable haptic feedback devices [104].

4.13. Augmented Reality (AR)

AI-driven Augmented Reality (AR) can be used to create assistive technologies that give people with disabilities access to real-time environmental information. For those who have visual impairments, an AR headset can offer real-time navigational assistance. Virtual objects are incorporated into the real landscape using AR technology. Identification tags, webcam devices, and picture processing equipment are the main elements of AR [105].

Glasses

Utilizing augmented reality, ref. [40] created glasses that can help students in their educational journey by providing classroom assistive tools, speech emotion recognition, real-time transcription, and sound indication features.

4.14. Human–Computer Interaction (HCI)

Human–Computer Interaction (HCI) plays a crucial role in the design and development of assistive technologies designed to enhance the functional capabilities of individuals with disabilities.

5. Inclusive Learning and Assistive Tools for Personalized Learning

Inclusive education [106] ensures equitable access to quality education for all students in the same class or learning space by eliminating barriers arising from diverse learning styles, language, or disabilities. It ensures that all learners can join the mainstream learning space at their full potential [107]. A student’s full potential can be realized by providing physical, social, and academic accessibility. Inclusive practices offer differentiated instruction, assistive technologies, and collaborative learning to support these accessibilities. In differentiated instruction, the instructor is responsible for providing teaching that fulfills each student’s needs; for example, a numerical exercise can be presented to students in written, audio, visual, or hands-on format. Students in the class may need assistive tools such as text-to-speech and speech-to-text to empower them based on their needs and ensure an equitable environment. Collaborative learning can also be ensured through group activities that include all students; a student with autism might work with peers on a science project, contributing through visual diagrams rather than verbal presentations. Personalized learning [108] is a teaching approach that adapts learning material and its pace to a student’s interests, learning styles, preferences, and main strengths. A teacher is allowed to modify learning content, the evaluation process, and activities depending on a student’s needs. On the other hand, Universal Design for Learning (UDL) [109] provides a framework for designing learning material in multiple formats such as written text, audio, video, interactive content, or hands-on experiments. In addition, assistive tools can present a book as an audiobook for students with visual impairments and offer speech-to-text for those with motor disabilities; such tools empower students to access learning material based on their needs or learning styles. By blending these inclusive education practices with personalized learning and assistive tools, we can fulfill the demands of all students and foster autonomy, confidence, and equitable growth.

6. Personalized Inclusive Learning Model

With the increasing focus on educational equity, personalized learning has emerged as a prominent area of research, recognizing the unique needs and abilities of individual students. By leveraging ML techniques, personalized learning models can deliver tailored learning experiences based on students’ interests, abilities, performances, learning styles, preferences, and demographics. These models incorporate adaptive algorithms to dynamically adjust the curriculum, content, and instructional strategies, aiming to optimize learning outcomes. Evaluations of personalized learning approaches encompass comprehensive metrics, including academic achievements, engagement levels, and student satisfaction. The findings highlight the potential of personalized learning models to enhance students’ success and cultivate a more efficient and enjoyable learning environment.
The general structure of the proposed model for the system is shown in Figure 2. The whole structure can be divided into two basic parts: the front-end and the back-end, connected through an Application Programming Interface (API). The front-end of the system contains the whole user interface and is where all interactions with students take place. The back-end contains all the rule-based and machine intelligence-based algorithms that make decisions, such as what to present to each student and when to provide it, based on the student’s interactions with the system. The whole environment is proposed as an adaptive e-learning system.

6.1. Engagement and Emotion Detection

A student engagement system is an AI-driven method designed to quantify student engagement in the learning process (primarily e-learning). It employs a variety of tools, techniques, and strategies to actively engage students in their educational experience, encouraging their motivation, participation, and interest in learning. The proposed system comprises subsystems for emotion detection, eye-gaze and head posture detection, and activity recognition. Figure 3 shows the engagement and emotion detection subsystem. The system utilizes input devices such as a webcam, mouse, keyboard, and touchpad. By analyzing the video frames captured by the webcam, it detects and tracks various parameters, including eye gaze, head posture, and facial features associated with emotional expressions and sentiments. Emotion detection algorithms classify learners’ emotional states (e.g., frustration, interest, boredom) to adjust learning content dynamically. AI activity detection can analyze user interactions with the keyboard, mouse, and touchpad; by capturing and processing input data from these devices, AI algorithms can identify and classify specific activities or behaviors performed by the user. Real-time feedback mechanisms provide personalized interventions to maintain motivation and focus.
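A toy fragment of such a pipeline is sketched below: it derives a crude engagement proxy (is a face visible to the webcam?) using OpenCV’s stock Haar cascade. The richer gaze, posture, and emotion analysis described above is beyond this sketch, and the parameters are assumptions:

```python
# Toy engagement proxy: check whether a face is visible in one webcam frame.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)           # webcam (assumed device 0)

ok, frame = cap.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    engaged = len(faces) > 0        # crude proxy: learner is facing the screen
    print("engaged:", engaged)
cap.release()
```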

6.2. Learner Profile Identification

AI-based adaptive e-learning systems are designed to personalize the learning experience for individual students. These systems utilize AI algorithms to analyze and understand the unique characteristics, preferences, and learning styles of each learner. Through interactions, the system creates a profile for each learner by collecting data on their demographic information, cognitive abilities, learning preferences (visual, auditory, kinesthetic, and so on), previous performance and progress, and assistive needs for students with special needs. These data can include assessment results, interaction patterns, and feedback. The learner profile identification process is illustrated in Figure 4, where a personalized learning profile is created for each learner based on learning style and type.

6.3. Pedagogy Approach and Difficulty Level Selection

By combining data from the keyboard, mouse, and touchpad and leveraging AI algorithms, the system can detect and understand user activities in real time or retrospectively. AI algorithms (a reinforcement learning model in this case) analyze the learner’s data acquired from the engagement app to identify their preferred learning style, whether visual, auditory, kinesthetic, or a combination. This information helps in tailoring the content and instructional strategies to the student’s specific needs and serves as an indication of the student’s present level of subject matter comprehension. Based on the learner’s profile and identified learning style, the system adapts the delivery of learning materials such as text, images, videos, or interactive simulations, ensuring that the content aligns with the learner’s preferences and enhances their engagement and comprehension. The reinforcement learning algorithm provides personalized feedback and assessment based on the learner’s performance and progress. The system follows Bloom’s taxonomy [110] to adjust the difficulty level of questions, suggest additional resources or remedial exercises, and offer targeted feedback that addresses specific learning gaps. Learning objectives are categorized under this framework according to cognitive level, ranging from recalling fundamental information (Remembering) through using knowledge to solve problems (Applying) to producing new knowledge (Creating).
The system continuously monitors the learner’s progress, analyzing their responses, time spent on tasks, and mastery of concepts. It identifies a student’s proficiency when they can respond properly and swiftly to simple questions, then presents increasingly difficult assignments. If a student struggles or takes longer to respond, the system slows the pace, offers more resources, and returns to simpler concepts. The system dynamically adjusts the content on the basis of the student’s needs; for example, if it observes that the learner performs better with animations and spends more time interacting with visual elements, it switches to an interactive simulation rather than presenting a long text. AI algorithms dynamically adapt the learning pathway, adjusting the content, pacing, and difficulty level to optimize the learning experience and promote mastery. This information is used to modify the content’s Bloom level and complexity to reflect the student’s development. This flexible method keeps tasks within the learner’s ideal range of difficulty, neither too easy (which could bore them) nor too challenging (which could frustrate them). If a student regularly performs well, the system may introduce more difficult subjects or pose challenges at higher Bloom levels, such as analyzing or evaluating material. If a learner is having trouble, the system may provide more resources, revisit earlier Bloom levels for reinforcement, or present alternative explanations.
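One simple way to realize this adaptation loop is a bandit-style reinforcement learner that treats each Bloom level as an action and the quality of the student’s response as the reward; the epsilon value and reward definition below are illustrative assumptions rather than the model’s specification.

```python
# Epsilon-greedy bandit sketch for choosing a Bloom-level difficulty.
# The reward signal (1 for a correct, timely answer; 0 otherwise) is assumed.
import random

BLOOM_LEVELS = ["Remembering", "Understanding", "Applying",
                "Analyzing", "Evaluating", "Creating"]

class DifficultySelector:
    def __init__(self, epsilon: float = 0.1):
        self.epsilon = epsilon
        self.value = {lvl: 0.0 for lvl in BLOOM_LEVELS}  # estimated reward per level
        self.count = {lvl: 0 for lvl in BLOOM_LEVELS}

    def choose(self) -> str:
        if random.random() < self.epsilon:           # explore occasionally
            return random.choice(BLOOM_LEVELS)
        return max(self.value, key=self.value.get)   # otherwise exploit the best level

    def update(self, level: str, reward: float) -> None:
        self.count[level] += 1
        # Incremental mean: running estimate of each level's payoff.
        self.value[level] += (reward - self.value[level]) / self.count[level]

selector = DifficultySelector()
level = selector.choose()        # pick a difficulty for the next task
selector.update(level, 1.0)      # e.g., the student answered correctly and quickly
```

In practice the reward would have to balance correctness against challenge (for instance, penalizing tasks answered instantly), so that the policy does not collapse onto the easiest level.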

6.4. Feedback

The system generates insights and recommendations for educators based on aggregated data from multiple learners. This helps educators to identify common learning challenges, refine instructional strategies, and provide targeted support to individual students.
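A minimal sketch of this aggregation step is given below, assuming learner records arrive as a list of per-topic scores; the field names and the 0.5 mastery threshold are illustrative choices.

```python
# Aggregate per-topic performance across learners to flag common difficulties.
# The record shape and 0.5 mastery threshold are assumptions for this sketch.
from collections import defaultdict

records = [
    {"learner": "stu-001", "topic": "fractions", "score": 0.4},
    {"learner": "stu-002", "topic": "fractions", "score": 0.3},
    {"learner": "stu-001", "topic": "geometry",  "score": 0.9},
]

scores_by_topic = defaultdict(list)
for r in records:
    scores_by_topic[r["topic"]].append(r["score"])

for topic, scores in scores_by_topic.items():
    mean = sum(scores) / len(scores)
    if mean < 0.5:  # flag topics where the cohort as a whole is struggling
        print(f"Common difficulty: {topic} (mean score {mean:.2f})")
```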

7. Challenges and Future Research Direction

AI-based personalized inclusive education presents opportunities to enhance learning experiences, promote accessibility, and improve outcomes for students with disabilities. However, several challenges remain when employing AI to ensure personalized learning. Figure 5 shows a pictorial representation of these challenges, which are as follows.
  • Data Bias: In personalized learning, data bias occurs when the collected data do not represent the whole population. Social reluctance to accept assistive tools or personalized learning facilities often prevents the actual situation from being reported, resulting in data bias.
  • Lack of Accessibility of Infrastructure and Resources: A lack of infrastructure and resources is a major obstacle to developing assistive technology, as adequate resources for building such technology are often unavailable. The situation is worse in rural areas, where schools and educational institutions often lack proper hardware and reliable internet access, and sometimes the infrastructure is missing altogether. For example, if a student with a hearing disability requires speech-to-text conversion, then even when an assistive device is available, infrastructure such as a mobile phone or computer as well as a good internet connection is needed. Many remote areas have no electricity or network connections, which makes using most current assistive tools very difficult.
  • Data Privacy and Security: Data privacy and security are major challenges for AI-based personalized learning systems. Complex personalization algorithms frequently necessitate the collection of large amounts of data, raising concerns over potential invasions of individual privacy. Federated learning and data encryption can help address these issues: federated learning trains models on decentralized data without transferring information to central servers, so the data can remain local and secure (see the sketch after this list), while secure cloud storage can protect student data. Strict government policies also need to be followed.
  • Personalization Accuracy: Because personalization is tailored to each learner, different users will have different measures of personalization accuracy. As a result, there is no standardized measurement, which can cause confusion among users and evaluators.
  • Collaboration and Stakeholder Engagement: The creation of a comprehensive and personalized learning environment requires cooperation between educators, administrators, parents, and students. Keeping parents updated on the trajectory of their child’s development, areas of strength, and areas for growth is essential to preserving a positive family atmosphere for learning. Any miscommunication can cause serious problems among the stakeholders.
  • Lack of Awareness: Insufficient knowledge about personalized learning is another challenge. To utilize these technologies, people must first know that they exist. Due to a lack of awareness, families may not use assistive technology and might consider their children with special needs a burden; even when they know about these tools, they may not know how to operate them.
  • Financial Barrier: Cost is another challenge. While there are plenty of assistive tools intended to make life easier for people with disabilities, they are not free. Schools feel this problem directly: many lack the funds to sponsor technology in their classrooms, and they lack not only the tools but also the support needed to train educators in using these technologies. Most schools suffer not because they do not want new technology, but because they cannot finance significant projects; the affected parties remain the children with disabilities and the staff.
  • Dialect Differences: There are many languages worldwide, and each language has different dialects. Because of these variances, speech recognition or sign language tools developed for one region might not work well in another, and different accents can also cause trouble for speech-to-text converters.
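To make the federated learning point above concrete, the sketch below shows federated averaging (FedAvg), where each institution trains a model locally and only parameter vectors, never raw student data, leave the premises; the weight arrays and sample counts are illustrative.

```python
# Federated averaging (FedAvg) sketch: a coordinator combines locally trained
# model weights without ever seeing raw student data. Values are illustrative.
import numpy as np

def fedavg(client_weights: list[np.ndarray], client_sizes: list[int]) -> np.ndarray:
    """Average client parameters, weighted by each client's local dataset size."""
    coeffs = np.array(client_sizes) / sum(client_sizes)
    return (coeffs[:, None] * np.stack(client_weights)).sum(axis=0)

# Three schools each train the same small model on their own students' data.
school_a = np.array([0.2, -0.1, 0.5])
school_b = np.array([0.3,  0.0, 0.4])
school_c = np.array([0.1, -0.2, 0.6])

global_model = fedavg([school_a, school_b, school_c], [120, 80, 200])
print("aggregated global weights:", global_model)
```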
AI-driven personalized inclusive education provides several opportunities to enhance learning experiences and outcomes for students with disabilities. Figure 6 presents different research opportunities for personalized education. These opportunities are as follows:
  • Personalized Learning: Based on a student’s abilities, preferences, and needs, AI can be used to create a personalized learning environment customized to that student. Adaptive content, instructional materials, and assessments can be generated to match individual requirements, promoting more effective and engaging learning. There is scope to develop AI-based personalized learning models for students with different needs.
  • Accessibility Enhancements: AI technologies have the potential to greatly improve accessibility for students with disabilities. Text-to-speech, closed captioning, and image recognition can help students with visual or hearing impairments access educational content (a minimal text-to-speech sketch follows this list). AI-based tools can also potentially help students with communication difficulties to communicate and interact.
  • Supportive Learning Environment: AI-powered virtual assistants and chatbots have the capability to provide on-demand support and assistance specifically tailored to students with disabilities. These virtual agents can deliver explanations, respond to inquiries, and offer guidance, thereby fostering a supportive and inclusive learning environment. Designing AI-powered virtual assistants and chatbots in local languages presents a valuable opportunity for the research community.
  • Adaptive Content and Resources: AI has the capability to dynamically adapt content, resources, and materials to cater to the specific needs of students with special requirements. By offering alternative formats based on individual preferences, such as audio, visual, or tactile, AI can make learning more accessible and engaging. It is crucial to design adaptive models that cater to the diverse abilities of students.
  • Assistive Technologies: AI-powered assistive technologies have the potential to provide support to students with disabilities in multiple ways. For example, speech recognition and synthesis technologies can assist students with speech impairments in effective communication. Text-to-speech and optical character recognition can facilitate access to written content for students with visual impairments. Designing such assistive technologies while considering the local context creates opportunities for the designers to address specific needs and requirements.
  • Real-time Feedback and Assessment: AI can provide immediate and personalized feedback to students, allowing them to track their progress and identify areas for improvement. It can also offer real-time assessment, enabling teachers to adjust their instructional strategies and interventions based on individual student needs.
  • Virtual Mentors and Tutors: AI-powered virtual mentors or tutors can provide additional support and guidance to students with disabilities. These virtual agents can assist students in problem-solving, offer explanations, and engage them in interactive learning activities, fostering a supportive and inclusive learning environment.
  • Data Analytics for Intervention: AI’s ability to analyze vast amounts of educational data makes it possible to identify patterns and trends related to student performance, engagement, and progress. Educators and policymakers can leverage these data to develop targeted interventions, implement evidence-based practices, and improve educational outcomes.
  • Data-driven Decision-Making: AI can analyze vast amounts of educational data, such as student performance and engagement data, to generate insights for educators, policymakers, and parents. Data-driven decision-making can lead to evidence-based, personalized interventions and continuous improvement of educational practices.
  • Collaboration and Community Building: AI-based platforms can facilitate collaboration and community building among students, teachers, and parents. Through online forums, virtual classrooms, and social learning networks, students with disabilities can connect with peers, share experiences, and receive peer support.
  • Continuous Improvement: AI offers data-driven approaches that allow for continuous monitoring and evaluation of educational interventions. This iterative process helps to identify what works best for students with disabilities, enabling educators to refine and improve their instructional practices over time.
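As one concrete accessibility example referenced above, the sketch below converts a short passage to speech with the pyttsx3 library, an offline text-to-speech engine; the speech rate and sample passage are illustrative choices.

```python
# Offline text-to-speech sketch using pyttsx3 (pip install pyttsx3).
# The rate value and sample passage are illustrative choices.
import pyttsx3

engine = pyttsx3.init()
engine.setProperty("rate", 150)  # slower speech can aid comprehension

passage = "Photosynthesis converts light energy into chemical energy."
engine.say(passage)              # queue the passage for speaking
engine.runAndWait()              # block until speech has finished
```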
Figure 6. Future research opportunities for personalized education.

8. Discussion

Traditional educational institutions fail to meet the needs of all students, particularly those who have different requirements than other students. This leads to the placement of students with disabilities in special schools or distinct settings. These special institutes may provide the required setting; however, such separation isolates these students from others, resulting in social embarrassment, low self-esteem, and missed opportunities for social development. Inclusive educational approaches ensure that all students in the education setting have equal access to a quality education. With the use of assistive equipment, students with impairments can overcome these barriers and participate in regular classrooms. AI has the ability to change the educational landscape by offering personalized and inclusive experiences to students with special needs: educational content, delivery techniques, and teaching strategies can all be adjusted based on the preferences and learning style of each student. By improving accessibility, enabling personalized learning, and fostering inclusion, AI has the potential to transform how students with disabilities are taught while also ensuring their integration into mainstream educational environments.
AI-driven personalized education represents a paradigm shift away from traditional teaching approaches. Using AI technologies, instructors can create personalized learning experiences for each student, with AI algorithms tailoring the learning process based on a student’s learning style, strengths, weaknesses, progress, and preferences. This ensures that students receive the appropriate level of challenge, reinforcement, and feedback to help them grow and develop; for students with disabilities, such personalized AI-driven systems can significantly enhance the learning experience. Existing reviews have examined the impact of AI on personalized and inclusive learning and have identified AI-based tools and algorithms that affect the education of students with disabilities; however, they have only identified challenges without proposing a theoretical framework for inclusive education. In this work, we have considered recent studies and thoroughly discussed the assistive tools and technologies used for inclusive learning, analyzed the AI-based algorithms used to develop these tools, and proposed a personalized inclusive learning model for students with disabilities.
The proposed model has the potential to transform individualized education by using AI algorithms to customize feedback, instructional strategies, and content according to each student’s unique learning preferences and development. Its engagement detection module actively aids learners with disabilities and supports inclusion by combining activity analysis, eye-gaze tracking, and emotion identification. A learner’s profile is created based on learning style and type, and, following Bloom’s taxonomy, the system dynamically modifies task complexity to maximize understanding and proficiency. It also provides instructors with data-driven insights to pinpoint learning difficulties and improve their pedagogical approaches. However, successful deployment requires careful consideration of ethical implications, proper training for teachers, and usable technological solutions, as well as solving issues such as protecting data privacy, reducing algorithmic bias, and preserving scalability.
To fully utilize AI in supporting inclusive education, collaboration between educators, researchers, policymakers, and technology developers is essential.

Author Contributions

Conceptualization, S.A. and M.S.R.; methodology, S.A.; formal analysis, S.A. and M.S.K.; investigation, A.S.M.S.H.; resources, S.A.; data curation, S.A.; writing—original draft preparation, S.A.; writing—review and editing, M.S.K. and A.S.M.S.H.; visualization, S.A.; supervision, M.S.K. and A.S.M.S.H.; project administration, M.S.R.; funding acquisition, A.S.M.S.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Woosong University Academic Research Fund 2025, South Korea.

Acknowledgments

This research is supported by the University Grants Commission of Bangladesh.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Luborsky, M. The cultural adversity of physical disability: Erosion of full adult personhood. J. Aging Stud. 1994, 8, 239–253. [Google Scholar] [PubMed]
  2. Soetan, A.; Onojah, A.; Alaka, T.; Onojah, A. Attitude of hearing impaired students towards assistive technology utilization in Oyo state adopting the survey method. Indones. J. Community Spec. Needs Educ. 2021, 1, 103–118. [Google Scholar]
  3. University of Rochester. Common Disabilities. Available online: https://www.rochester.edu/college/disability/faculty/common-disabilities.html (accessed on 7 January 2025).
  4. Moriña, A. Inclusive education in higher education: Challenges and opportunities. In Postsecondary Educational Opportunities for Students with Special Education Needs; Taylor & Francis Group: Milton, UK, 2019; pp. 3–17. [Google Scholar]
  5. Chalkiadakis, A.; Seremetaki, A.; Kanellou, A.; Kallishi, M.; Morfopoulou, A.; Moraitaki, M.; Mastrokoukou, S. Impact of artificial intelligence and virtual reality on educational inclusion: A systematic review of technologies supporting students with disabilities. Educ. Sci. 2024, 14, 1223. [Google Scholar] [CrossRef]
  6. Shireesha, M.; Jeevan, J. The Role of Artificial Intelligence in Personalized Learning: A Pathway to Inclusive Education. Libr. Prog.-Libr. Sci. Inf. Technol. Comput. 2024, 44, 21746. [Google Scholar]
  7. Toto, G.A.; Marinelli, C.V.; Cavioni, V.; di Furia, M.; Traetta, L.; Iuso, S.; Petito, A. What is the Role of Technologies for Inclusive Education? A Systematic Review. In Higher Education Learning Methodologies and Technologies Online; Casalino, G., Di Fuccio, R., Fulantelli, G., Raviolo, P., Rivoltella, P.C., Taibi, D., Toto, G.A., Eds.; Springer: Cham, Switzerland, 2024; pp. 533–565. [Google Scholar]
  8. Navas-Bonilla, C.R.; Guerra-Arango, J.A.; Oviedo-Guado, D.A.; Murillo-Noriega, D.E. Inclusive education through technology: A systematic review of types, tools and characteristics. Front. Educ. 2025, 10, 1527851. [Google Scholar]
  9. Adeleye, O.O.; Eden, C.A.; Adeniyi, I.S. Innovative teaching methodologies in the era of artificial intelligence: A review of inclusive educational practices. World J. Adv. Eng. Technol. Sci. 2024, 11, 069–079. [Google Scholar] [CrossRef]
  10. Ramya, M.M. Advancing inclusive learning through systematic AI integration for children with disabilities. Innov. Res. 2024, 2, 6–10. [Google Scholar] [CrossRef]
  11. Lestari, A.D.S.; Murwani, F.; Wardana, L.; Wati, A. Problems of Inclusive Learning in Fostering Entrepreneurial Motivation in Students with Disabilities: Systematic Literature Review (SLR). J. Educ. Anal. 2024, 3, 161–180. [Google Scholar]
  12. Khalil, M.; Slade, S.; Prinsloo, P. Learning analytics in support of inclusiveness and disabled students: A systematic review. J. Comput. High. Educ. 2024, 36, 202–219. [Google Scholar]
  13. Gupta, M.; Kaul, S. AI in Inclusive Education: A Systematic Review of Opportunities and Challenges in the Indian Context. Mier J. Educ. Stud. Trends Pract. 2024, 14, 429–461. [Google Scholar]
  14. Udvaros, J.; Forman, N. Artificial intelligence and Education 4.0. In Proceedings of the 17th International Technology, Education and Development Conference, Valencia, Spain, 6–8 March 2023; pp. 6309–6317. [Google Scholar]
  15. Ackerman, P.; Thormann, M.; Huq, S.; Baten, E. Assessment of Educational Needs of Disabled Children in Bangladesh; Technical Report; Creative Associates International Inc./USAID: Washington, DC, USA, 2005. [Google Scholar]
  16. Reagan, T. Historical linguistics and the case for sign language families. Sign Lang. Stud. 2021, 21, 427–454. [Google Scholar]
  17. Rafi, A.; Nawal, N.; Bayev, N.; Nima, L.; Shahnaz, C.; Fattah, S. Image-based Bengali sign language alphabet recognition for deaf and dumb community. In Proceedings of the 2019 IEEE Global Humanitarian Technology Conference (GHTC), Seattle, WA, USA, 17 October 2019; pp. 1–7. [Google Scholar]
  18. Latif, G.; Mohammad, N.; AlKhalaf, R.; AlKhalaf, R.; Alghazo, J.; Khan, M. An automatic Arabic sign language recognition system based on deep CNN: An assistive system for the deaf and hard of hearing. Int. J. Comput. Digit. Syst. 2020, 9, 715–724. [Google Scholar]
  19. Liu, T.; Zhou, W.; Li, H. Sign language recognition with long short-term memory. In Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25 September 2016; pp. 2871–2875. [Google Scholar]
  20. Raghuveera, T.; Deepthi, R.; Mangalashri, R.; Akshaya, R. A depth-based Indian sign language recognition using Microsoft Kinect. Sādhanā 2020, 45, 34. [Google Scholar]
  21. Zhang, Z.; Pu, J.; Zhuang, L.; Zhou, W.; Li, H. Continuous sign language recognition via reinforcement learning. In Proceedings of the 2019 IEEE International Conference on Image Processing (ICIP), Taipei, Taiwan, 22 September 2019; pp. 285–289. [Google Scholar]
  22. Wibowo, M.; Nurtanio, I.; Ilham, A. Indonesian sign language recognition using leap motion controller. In Proceedings of the 2017 11th International Conference on Information & Communication Technology and System (ICTS), Surabaya, Indonesia, 31 October 2017; pp. 67–72. [Google Scholar]
  23. Ajay, S.; Potluri, A.; George, S.; Gaurav, R.; Anusri, S. Indian sign language recognition using random forest classifier. In Proceedings of the 2021 IEEE International Conference on Electronics, Computing and Communication Technologies (CONECCT), Bangalore, India, 9–11 July 2021; pp. 1–6. [Google Scholar]
  24. Haque, P.; Das, B.; Kaspy, N. Two-handed bangla sign language recognition using principal component analysis (PCA) and KNN algorithm. In Proceedings of the 2019 International Conference on Electrical, Computer and Communication Engineering (ECCE), Cox’s Bazar, Bangladesh, 7–9 February 2019; pp. 1–4. [Google Scholar]
  25. Raval, J.; Gajjar, R. Real-time sign language recognition using computer vision. In Proceedings of the 2021 3rd International Conference on Signal Processing and Communication (ICPSC), Coimbatore, India, 13–14 May 2021; pp. 542–546. [Google Scholar]
  26. Jagota, U. Role of Assistive Technology in Inclusive Classrooms. 2018. Available online: https://www.jetir.org/view?paper=JETIR1806409 (accessed on 7 January 2025).
  27. Levitt, H. Historically, the paths of hearing aids and telephones have often intertwined. Hear. J. 2007, 60, 20–24. [Google Scholar]
  28. Riviere, A. Assistive Technology: Meeting the Needs of Adults with Learning Disabilities. 1996. Available online: https://files.eric.ed.gov/fulltext/ED401686.pdf (accessed on 7 January 2025).
  29. Dhanjal, A.S.; Singh, W. Tools and Techniques of Assistive Technology for Hearing Impaired People. In Proceedings of the 2019 International Conference on Machine Learning, Big Data, Cloud and Parallel Computing (COMITCon), Faridabad, India, 14–16 February 2019; pp. 205–210. [Google Scholar] [CrossRef]
  30. Anshu, A. Leveraging Internet of Things (IoT) to Enhance Accessibility and Independence for People with Disabilities. LatIA 2025, 3, 114. [Google Scholar]
  31. Noda, K.; Yamaguchi, Y.; Nakadai, K.; Okuno, H.; Ogata, T. Audio-visual speech recognition using deep learning. Appl. Intell. 2015, 42, 722–737. [Google Scholar]
  32. Kumar, L.; Renuka, D.; Rose, S.; Wartana, I. Deep learning based assistive technology on audio visual speech recognition for hearing impaired. Int. J. Cogn. Comput. Eng. 2022, 3, 24–30. [Google Scholar]
  33. Makino, T.; Liao, H.; Assael, Y.; Shillingford, B.; Garcia, B.; Braga, O.; Siohan, O. Recurrent neural network transducer for audio-visual speech recognition. In Proceedings of the 2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), Singapore, 14–18 December 2019; pp. 905–912. [Google Scholar]
  34. Jishnu, T.; Antony, A. LipNet: End-to-End Lipreading. Indian J. Data Min. (IJDM) 2024, 4, 1–4. [Google Scholar]
  35. Vyapari, R.R.; Nimbhore, D.S.S. Marathi isolated speech recognition for diseases using HTK in healthcare sector. Int. J. Adv. Res. Eng. Appl. Sci. 2023, 12, 1–17. [Google Scholar]
  36. Yadava, T.; Nagaraja, B.G.; Reddy, S.; Rohan, K.; Mohamed, L.M. Advancements in Speech-to-Text Systems for the Hearing Impaired. In Proceedings of the 2024 IEEE North Karnataka Subsection Flagship International Conference (NKCon), Bagalkote, India, 21–22 September 2024; pp. 1–6. [Google Scholar] [CrossRef]
  37. Hasselbring, T.; Glaser, C. Use of computer technology to help students with special needs. Future Child. 2000, 10, 102–122. [Google Scholar] [CrossRef]
  38. Gophika, T.; Manoj, A.; Maraikar, S. Machine language based gesture recognition system to aid visually impaired people. BioGecko 2023, 7, 2145–2153. [Google Scholar]
  39. Samonte, M.J.C.; Garcia, A.R.I.; Valencia, B.J.D.; Ocampo, M.J.S. Using online handwritten character recognition in assistive tool for students with hearing and speech impairment. In Proceedings of the 2020 11th International Conference on E-Education, E-Business, E-Management, and E-Learning, Osaka, Japan, 10–12 January 2020; pp. 189–194. [Google Scholar]
  40. Ridha, A.M.; Shehieb, W. Assistive technology for hearing-impaired and deaf students utilizing augmented reality. In Proceedings of the 2021 IEEE Canadian Conference on Electrical and Computer Engineering (CCECE), Virtual, 12–17 September 2021; pp. 1–5. [Google Scholar]
  41. Bhat, G.S.; Shankar, N.; Reddy, C.K.; Panahi, I.M. A real-time convolutional neural network based speech enhancement for hearing impaired listeners using smartphone. IEEE Access 2019, 7, 78421–78433. [Google Scholar] [CrossRef] [PubMed]
  42. Samonte, M.J.C.; Gazmin, R.A.; Soriano, J.D.S.; Valencia, M.N.O. BridgeApp: An Assistive Mobile Communication Application for the Deaf and Mute. In Proceedings of the 2019 International Conference on Information and Communication Technology Convergence (ICTC), Jeju, Republic of Korea, 16–18 October 2019; pp. 1310–1315. [Google Scholar] [CrossRef]
  43. Jiménez, J.; Olea, J.; Torres, J.; Alonso, I.; Harder, D.; Fischer, K. Biography of louis braille and invention of the braille alphabet. Surv. Ophthalmol. 2009, 54, 142–149. [Google Scholar] [CrossRef]
  44. Nahar, L.; Sulaiman, R.; Jaafar, A. An interactive math braille learning application to assist blind students in Bangladesh. Assist. Technol. 2022, 34, 157–169. [Google Scholar] [CrossRef]
  45. Bhattacharjee, V.; Shiblee, S. COVID-19, Technology-Based Education and Disability: The Case of Bangladesh, Emerging Practices in Inclusive Digital Learning for Students with Disabilities. 2021. Available online: https://unesdoc.unesco.org/ark:/48223/pf0000377665 (accessed on 7 January 2025).
  46. Attachoo, B. Voices Beyond Text: Unravelling Perceptions and Experiences of Thai Students with Visual Impairment using Microsoft’s Reading Progress-A Narrative Inquiry Approach. LEARN J. Lang. Educ. Acquis. Res. Netw. 2025, 18, 700–722. [Google Scholar] [CrossRef]
  47. Llorca, A.A.; Gueta, H.M.; Villarica, M.V.; Mercado, M.A.T. AI-Wear: Smart Text Reader for Blind/Visually Impaired Students Using Raspberry Pi with Audio-Visual Call and Google Assistance. Int. J. Adv. Res. Comput. Sci. 2023, 14, 119. [Google Scholar] [CrossRef]
  48. Marion, H. Technology for Inclusion. 2020. Available online: https://unesdoc.unesco.org/ark:/48223/pf0000373655/PDF/373655eng.pdf (accessed on 7 January 2025).
  49. Philips, D. Talking books: The encounter of literature and technology in the audio book. Convergence 2007, 13, 293–306. [Google Scholar] [CrossRef]
  50. Ohna, S.E. Open your eyes: Deaf studies talking. Scand. J. Disabil. Res. 2010, 12, 141–146. [Google Scholar] [CrossRef]
  51. Revelli, V.; Sharma, G. Automate extraction of braille text to speech from an image. Adv. Eng. Softw. 2022, 172, 103180. [Google Scholar] [CrossRef]
  52. Chen, S.; Hwang, S.; Wang, Y. An RNN-based prosodic information synthesizer for Mandarin text-to-speech. IEEE Trans. Speech Audio Process. 1998, 6, 226–239. [Google Scholar] [CrossRef]
  53. Ganesan, J.; Azar, A.; Alsenan, S.; Kamal, N.; Qureshi, B.; Hassanien, A. Deep learning reader for visually impaired. Electronics 2022, 11, 3335. [Google Scholar] [CrossRef]
  54. Seebun, G.; Nagowah, L. Let’s talk: An assistive mobile technology for hearing and speech impaired persons. In Proceedings of the 2020 3rd International Conference on Emerging Trends in Electrical, Electronic and Communications Engineering (ELECOM), Balaclava, Mauritius, 25–27 November 2020; pp. 210–215. [Google Scholar]
  55. Abraham, L.; Mathew, N.; George, L.; Sajan, S. VISION-wearable speech based feedback system for the visually impaired using computer vision. In Proceedings of the 2020 4th International Conference on Trends in Electronics and Informatics (ICOEI)(48184), Tirunelveli, India, 15–17 June 2020; pp. 972–976. [Google Scholar]
  56. Islam, M.Z.; Based, M.A. Speech Recognition System for Speech to Text and Text to Speech for Autistic Person. Barrister Shameem Haider Patwary 2019, 11, 86. [Google Scholar]
  57. Swarnamba, S.; Revanna, B. Efficient Examination Method for Blind People using MATLAB and Embedded System. Int. J. Eng. Res. Technol. (IJERT) 2024, 13, 1–4. [Google Scholar]
  58. Ayala, S. ChatGPT as a Universal Design for Learning Tool Supporting College Students with Disabilities. Educ. Renaiss. 2023, 12, 22–41. [Google Scholar]
  59. Wong, Y.; Lai, J.; Ranjit, S.; Syafeeza, A.; Hamid, N. Convolutional neural network for object detection system for blind people. J. Telecommun. Electron. Comput. Eng. (JTEC) 2019, 11, 1–6. [Google Scholar]
  60. Khan, A.; Li, S.; Luo, X. Obstacle avoidance and tracking control of redundant robotic manipulator: An RNN-based metaheuristic approach. IEEE Trans. Ind. Inform. 2019, 16, 4670–4680. [Google Scholar]
  61. Ahmed, F.; Mahmud, M.; Yeasin, M. RNN and CNN for way-finding and obstacle avoidance for visually impaired. In Proceedings of the 2019 2nd International Conference on Data Intelligence and Security (ICDIS), South Padre Island, TX, USA, 28–30 June 2019; pp. 225–228. [Google Scholar]
  62. Patel, C.; Mistry, V.; Desai, L.; Meghrajani, Y. Multisensor-based object detection in indoor environment for visually impaired people. In Proceedings of the 2018 Second International Conference on Intelligent Computing and Control Systems (ICICCS), Madurai, India, 14–15 June 2018; pp. 1–4. [Google Scholar]
  63. Ahmed, F.; Mahmud, M.S.; Moinuddin, K.A.; Hyder, M.I.; Yeasin, M. Virtual Experience to Real World Application: Sidewalk Obstacle Avoidance Using Reinforcement Learning for Visually Impaired. arXiv 2020, arXiv:2009.12877. [Google Scholar]
  64. Chaki, S.; Ahmed, S.; Biswas, M.; Tamanna, I. A framework of an obstacle avoidance robot for the visually impaired people. In Proceedings of Trends in Electronics and Health Informatics: TEHI 2021; Springer: Singapore, 2022; pp. 269–280. [Google Scholar]
  65. Ahmed, A.I.; Abdulrazzaq, A.Q.; Ali, A.H.A.; Al-Bayati, H.N.A.; Alamro, L.; Balina, O.; Al-Obaidi, M.I.J. The LTE-Connected Smart Blind Stick Will Completely Transform Mobility for the Blind. In Proceedings of the 2024 36th Conference of Open Innovations Association (FRUCT), Lappeenranta, Finland, 30 October–1 November 2024; pp. 272–282. [Google Scholar]
  66. Sabarika, M.; Santhoshkumar, R.; Dharson, R.; Jayamani, S. Assistive Voice Guidance System for Blind Individuals using Deep Learning Techniques. In Proceedings of the 2024 2nd International Conference on Artificial Intelligence and Machine Learning Applications Theme: Healthcare and Internet of Things (AIMLA), Namakkal, India, 15–16 March 2024; pp. 1–4. [Google Scholar]
  67. More, P.; Sangamkar, S. Vision Aid For Blind People Using YOLOV8. In Proceedings of the 2024 2nd International Conference on Networking, Embedded and Wireless Systems (ICNEWS), Bangalore, India, 22–23 August 2024; pp. 1–8. [Google Scholar]
  68. ara Zaman, R.; Haider, M.S. Mainstreaming Students with Visual Impairment in Secondary Science Education (IX-X): Curriculum Consideration and Assistive Technologies. Bangladesh J. Educ. Res. 2015, 1, 29–41. [Google Scholar]
  69. Zhang, P.; Lan, C.; Xing, J.; Zeng, W.; Xue, J.; Zheng, N. View adaptive recurrent neural networks for high performance human action recognition from skeleton data. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2117–2126. [Google Scholar]
  70. Shiranthika, C.; Premakumara, N.; Chiu, H.; Samani, H.; Shyalika, C.; Yang, C. Human activity recognition using CNN & LSTM. In Proceedings of the 2020 5th International Conference on Information Technology Research (ICITR), Moratuwa, Sri Lanka, 2–4 December 2020; pp. 1–6. [Google Scholar]
  71. Chetty, G.; White, M. Body sensor networks for human activity recognition. In Proceedings of the 2016 3rd International Conference on Signal Processing and Integrated Networks (SPIN), Noida, India, 11–12 February 2016; pp. 660–665. [Google Scholar]
  72. Islam, N. Development of a Bangla Text to Speech Converter. 1995. Available online: http://lib.buet.ac.bd:8080/xmlui/bitstream/handle/123456789/3826/Full%20Thesis.pdf?sequence=1 (accessed on 7 January 2025).
  73. Ahmad, F. Use of assistive technology in inclusive education: Making room for diverse learning needs. Transcience 2015, 6, 62–77. [Google Scholar]
  74. Paul, S.; Lakhani, D.; Aryan, D.; Das, S.; Varshney, R. Lip Reading System for Speech-Impaired Individuals. Int. J. Multidiscip. Res. (IJFMR) 2024, 6, IJFMR240218643. [Google Scholar] [CrossRef]
  75. Mulfari, D.; Meoni, G.; Marini, M.; Fanucci, L. Machine learning assistive application for users with speech disorders. Appl. Soft Comput. 2021, 103, 107147. [Google Scholar]
  76. Samonte, M.J.C.; Guce, F.C.D.; Peraja, J.M.P.; Sambile, G.D.V. Assistive gamification and speech recognition e-tutor system for speech impaired students. In Proceedings of the 2nd International Conference on Image and Graphics Processing, Singapore, 23–25 February 2019; pp. 37–41. [Google Scholar]
  77. Woods, B.; Watson, N. The social and technological history of wheelchairs. Int. J. Ther. Rehabil. 2004, 11, 407–410. [Google Scholar]
  78. Adebayo, O.; Adetiba, E.; Ajayi, O. Hand gesture recognition-based control of motorized wheelchair using electromyography sensors and recurrent neural network. IOP Conf. Ser. Mater. Sci. Eng. 2021, 1107, 012063. [Google Scholar]
  79. Giuffrida, G.; Meoni, G.; Fanucci, L. A YOLOv2 convolutional neural network-based human–machine interface for the control of assistive robotic manipulators. Appl. Sci. 2019, 9, 2243. [Google Scholar] [CrossRef]
  80. Liao, L.; Tseng, Y.; Chiang, H.; Wang, W. EMG-based control scheme with SVM classifier for assistive robot arm. In Proceedings of the 2018 International Automatic Control Conference (CACS), Taoyuan, Taiwan, 4–7 November 2018; pp. 1–5. [Google Scholar]
  81. Fuentes-Alvarez, R.; Hernandez, J.; Matehuala-Moran, I.; Alfaro-Ponce, M.; Lopez-Gutierrez, R.; Salazar, S.; Lozano, R. Assistive robotic exoskeleton using recurrent neural networks for decision taking for the robust trajectory tracking. Expert Syst. Appl. 2022, 193, 116482. [Google Scholar]
  82. Ramlan, S.; Isa, I.; Harron, N.; Saod, A.; Azid, M.; Lepas, B. Reading Assistive Tool (ReaDys) for Dyslexic Children: Speech Recognition Performance. J. Creat. Pract. Lang. Learn. Teach. (CPLT) 2023, 11, 57–73. [Google Scholar]
  83. Belson, S.; Hartmann, D.; Sherman, J. Digital note taking: The use of electronic pens with students with specific learning disabilities. J. Spec. Educ. Technol. 2013, 28, 13–24. [Google Scholar]
  84. Khakhar, J.; Madhvanath, S. Jollymate: Assistive technology for young children with dyslexia. In Proceedings of the 2010 12th International Conference on Frontiers in Handwriting Recognition, Kolkata, India, 16–18 November 2010; pp. 576–580. [Google Scholar]
  85. Weswibul, S.; Teeravarunyou, S. Assistive Tool for Attention Deficit Hyperactivity Disorder Children in Classroom. 2020. Available online: https://soad.kmutt.ac.th/wp-content/uploads/2018/10/2009IntC6.pdf (accessed on 7 January 2025).
  86. Barua, P.; Vicnesh, J.; Gururajan, R.; Oh, S.; Palmer, E.; Azizan, M.; Kadri, N.; Acharya, U. Artificial intelligence enabled personalised assistive tools to enhance education of children with neurodevelopmental disorders—A review. Int. J. Environ. Res. Public Health 2022, 19, 1192. [Google Scholar] [CrossRef]
  87. Pal, S. Artificial Intelligence is An Assistive Tool for Children with Autism Spectrum Disorder. Sci. Approach Self Reliant India 2024, 2, 25. [Google Scholar]
  88. Samonte, M.J.C.; Arpilleda, J.A.T.; Cunanan, T.S.; Frias, T.V. CareMate: An Assistive Web Application for Learners with Severe Autism Spectrum Disorder. In International Conference on Advances in Education and Information Technology; Springer: Singapore, 2024; pp. 279–294. [Google Scholar]
  89. Zhou, Y.; Huang, C.; Hu, Q.; Zhu, J.; Tang, Y. Personalized learning full-path recommendation model based on LSTM neural networks. Inf. Sci. 2018, 444, 135–152. [Google Scholar]
  90. Safiya, K.; Pandian, R. Image Captionbot for Assistive Technology. Math. Stat. Eng. Appl. 2022, 71, 1629–1634. [Google Scholar]
  91. Gamage, N.; Jayadewa, K.; Jayakody, J. Document reader for vision impaired elementary school children to identify printed images. In Proceedings of the 2019 International Conference on Advancements in Computing (ICAC), Malabe, Sri Lanka, 5–7 December 2019; pp. 279–284. [Google Scholar]
  92. Yuan, Y. Image-Based Gesture Recognition with Support Vector Machines; University of Delaware: Newark, DE, USA, 2008. [Google Scholar]
  93. Shawky, D.; Badawi, A. Towards a personalized learning experience using reinforcement learning. In Machine Learning Paradigms: Theory and Application; Springer: Cham, Switzerland, 2019; pp. 169–187. [Google Scholar]
  94. Lin, C.; Yeh, Y.; Hung, Y.; Chang, R. Data mining for providing a personalized learning path in creativity: An application of decision trees. Comput. Educ. 2013, 68, 199–210. [Google Scholar] [CrossRef]
  95. Song, W.; Han, Q.; Lin, Z.; Yan, N.; Luo, D.; Liao, Y.; Zhang, M.; Wang, Z.; Xie, X.; Wang, A.; et al. Design of a flexible wearable smart sEMG recorder integrated gradient boosting decision tree based hand gesture recognition. IEEE Trans. Biomed. Circuits Syst. 2019, 13, 1563–1574. [Google Scholar] [CrossRef] [PubMed]
  96. Simatupang, I.; Pamungkas, D.; Risandriya, S. Naïve Bayes classifier for hand gestures recognition. In Proceedings of the 3rd International Conference on Applied Engineering (ICAE 2020), Batam, Indonesia, 7–8 October 2020; pp. 110–114. [Google Scholar]
  97. Dankovich, L.; Bergbreiter, S. Gesture recognition via flexible capacitive touch electrodes. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 9028–9034. [Google Scholar]
  98. Mufarroha, F.; Utaminingrum, F. Hand gesture recognition using adaptive network based fuzzy inference system and K-nearest neighbor. Int. J. Technol. 2017, 8, 559–567. [Google Scholar] [CrossRef]
  99. Mpia, N.D.; Ojo, S.; Osunmakinde, I. An intelligent integrative assistive system for dyslexic learners. J. Assist. Technol. 2013, 7, 172–187. [Google Scholar] [CrossRef]
  100. Taileb, M.; Al-Saggaf, R.; Al-Ghamdi, A.; Al-Zebaidi, M.; Al-Sahafi, S. YUSR: Speech recognition software for dyslexics. In Design, User Experience, and Usability. Health, Learning, Playing, Cultural, and Cross-Cultural User Experience: Second International Conference, DUXU 2013, Held as Part of HCI International 2013, Las Vegas, NV, USA, 21–26 July 2013, Proceedings, Part II; Springer: Berlin/Heidelberg, Germany, 2013; pp. 296–303. [Google Scholar]
  101. Sivan, S.; Darsan, G. Computer vision based assistive technology for blind and visually impaired people. In Proceedings of the 7th International Conference on Computing Communication and Networking Technologies, Dallas, TX, USA, 6–8 July 2016; pp. 1–8. [Google Scholar]
  102. Patthanajitsilp, P.; Chongstitvatana, P. Obstacles detection for electric wheelchair with computer vision. In Proceedings of the 2022 14th International Conference on Knowledge and Smart Technology (KST), Chon Buri, Thailand, 26–29 January 2022; pp. 97–101. [Google Scholar]
  103. Ganokratanaa, T.; Pumrin, S. Hand Gesture Recognition Algorithm for Smart Cities based on Wireless Sensor. Int. J. Online Eng. 2017, 13, 58–75. [Google Scholar] [CrossRef]
  104. Ricci, F.; Boldini, A.; Ma, X.; Beheshti, M.; Geruschat, D.; Seiple, W.; Rizzo, J.; Porfiri, M. Virtual reality as a means to explore assistive technologies for the visually impaired. PLoS Digit. Health 2023, 2, e0000275. [Google Scholar] [CrossRef]
  105. Chien-Yu, L.; Chao, J.; Wei, H. Augmented reality-based assistive technology for handicapped children. In Proceedings of the 2010 International Symposium on Computer, Communication, Control and Automation (3CA), Tainan, Taiwan, 5–7 May 2010; Volume 1, pp. 61–64. [Google Scholar]
  106. Malak, M.S.; Begum, H.A.; Habib, M.A.; Shaila, M.; Roshid, M.M. Inclusive Education in Bangladesh: Policy and Practice; Australian Association for Research in Education: Melbourne, Australia, 2013. [Google Scholar]
  107. Ahmmed, M.; Sharma, U.; Deppeler, J. Variables affecting teachers’ attitudes towards inclusive education in Bangladesh. J. Res. Spec. Educ. Needs 2012, 12, 132–140. [Google Scholar] [CrossRef]
  108. Zualkernan, I.A. Personalized learning for the developing world: Issues, constraints, and opportunities. In The Future of Ubiquitous Learning: Learning Designs for Emerging Pedagogies; Springer: Berlin/Heidelberg, Germany, 2016; pp. 241–258. [Google Scholar]
  109. Rahaman, M.M.; Das, A.; Zaman, R.A. Accessibility and Inclusion of Students with Disabilities in University of Dhaka: Transforming the University in Line with Sustainable Development Goals. Teach. World J. Educ. Res. 2023, 49, 205–225. [Google Scholar] [CrossRef]
  110. Arapi, P.; Moumoutzis, N.; Mylonakis, M.; Theodorakis, G.; Christodoulakis, S. A pedagogy-driven personalization framework to support automatic construction of adaptive learning experiences. In Proceedings of the Advances in Web Based Learning–ICWL 2007: 6th International Conference, Edinburgh, UK, 15–17 August 2007; Revised Papers 6. Springer: Berlin/Heidelberg, Germany, 2008; pp. 55–65. [Google Scholar]
Figure 1. Visual representation of some examples of assistive tools.
Figure 2. Personalized learning model. The figure provides an illustration of the suggested personalized learning model. With the use of AI tools and techniques, the model is designed to guarantee a personalized learning experience. Based on the learner’s profile (learning style and type), cognitive preferences, and level of engagement with the specific learning and evaluation material, the personalized learning model offers dynamic learning and evaluation materials.
Figure 3. Engagement and emotion detection app. The engagement and emotion detection app uses AI-driven facial recognition and activity analysis to determine a student’s engagement with specific learning content, and uses behavioral monitoring to track the learner’s emotional state.
Figure 4. Example learning profile; a personalized learning profile is created for each learner based on their learning style and type.
Figure 5. Challenges of personalized learning systems.
Table 1. Brief descriptions of assistive tools for different disabilities, including features, details, and limitations.

| Disability | Assistive Tool | Feature | Limitation |
|---|---|---|---|
| Hearing Impairment | Hearing Aids and Cochlear Implants | Cochlear implants stimulate the auditory nerve, and hearing aids enhance sounds. | Both hearing aids and cochlear implants require maintenance. |
| | Sign Language | Helps deaf people to communicate. | Different countries have different sign languages, making global communication difficult. |
| | FM Listening Systems | The speaker’s voice is forwarded directly to the listener’s ear by the instrument. | Limited working range. |
| | Tape Recorders | Record spoken material from the speaker or teacher’s lesson. | Do not work in real time and lack modern features. |
| | Infrared Hearing Systems | Allow for multiple users. | Obstructions can disturb audio transmission. |
| | Visual Alert | A phone call, doorbell, or fire alarm can be conveyed by a visual alert. | Not suitable for all users. |
| | Vibrotactile Aid | A mechanical device affixed to the head close to the ear. | Expensive compared to other tools. |
| | Speech-to-Text Converter | Offers textual captions for both audio and video information. | May struggle with different accents and dialects. |
| | Audio Loop | Regulates the teacher’s voice volume and ensures consistency in auditory cues. | Expensive setup. |
| | Gesture Recognizer | Helps in identifying people’s gestures. | - |
| Visual Impairment | Braille | Enables blind pupils to read and write text on a computer screen. | Requires fine motor skills. |
| | Optical Character Recognition | Lets blind students scan printed text and hear the contents through a synthetic voice. | Requires high-quality images and text for recognition. |
| | DAISY Reader | Can be used as a complete audio replacement for blind students. | - |
| | Screen Magnifier | Enlarges letters for those who have poor eyesight. | Requires proper training to use. |
| | Text-to-Speech Converter | Can read out text from books or any printed document. | May lack emotional intonation. |
| | Screen Reader | Reads the information on the user’s computer screen out loud. | May lack emotional intonation. |
| | Obstacle Avoidance | Provides information about obstacles along the way. | - |
| | Electronic Note-Takers | Enable students to take notes in Braille or on a keyboard. | Not available in rural areas. |
| | Gesture Recognizer | Helps visually impaired students communicate with deaf and mute students. | - |
| | Tactile Diagrams | Allow blind students to access visual information such as graphs and charts. | Lack fine detail. |
| | Human Activity Recognizer | Helps blind students perceive their environment. | - |
| | iPad | Boosts visual attentiveness and conversation. | Not available in rural areas. |
| Speech Impairment | Speech Synthesizers | Provide understandable speaking voices. | Lack of emotion. |
| Mobility Impairment | Wheelchair | Provides facility for mobility-related activities. | Many educational institutions are not wheelchair accessible. |
| | Basic Adaptive Keyboard | Helps physically disabled students to use computers. | - |
| | Prosthetics | Can replace missing body parts. | Many devices lack modern features. |
| | Home Modification | Lowering or elevating equipment, enlarging entrances, installing stairlifts and ramps, etc. | - |
| | Robotic Exoskeletons | Assist in rehabilitation. | High cost. |
| Neurodevelopmental Impairment | ReaDys Synthesizer | Aids dyslexic children in their reading processes. | - |
| | Digital Notepad | Helps disabled students with learning. | High cost. |
| | Listening Devices | Help students with ADHD concentrate on class lectures. | - |
| | Humanoid Robots | Assist students with ASD in improving their social interaction skills. | High cost. |