Review

Supporting Disabilities Using Artificial Intelligence and the Internet of Things: Research Issues and Future Directions

1 Department of Computer Science, College of Computer Science and Engineering, Taibah University, Madinah 42353, Saudi Arabia
2 King Salman Center for Disability Research, Riyadh 11614, Saudi Arabia
3 College of Computer Science and Engineering, Taibah University, Yanbu 46522, Saudi Arabia
* Author to whom correspondence should be addressed.
Disabilities 2026, 6(1), 3; https://doi.org/10.3390/disabilities6010003
Submission received: 26 July 2025 / Revised: 11 December 2025 / Accepted: 18 December 2025 / Published: 29 December 2025

Abstract

Adaptive technologies have become markedly more sophisticated with Artificial Intelligence (AI) and the Internet of Things (IoT), offering transformative solutions that help people with disabilities live more independent lives. In this article, we discuss the potential of AI and IoT to address issues related to Down Syndrome (DS), Autism Spectrum Disorder (ASD), Mobility Impairment (MI), Hearing Impairment (HI), Attention-Deficit/Hyperactivity Disorder (ADHD), and Visual Impairment (VI). In addition, we propose an analytical framework for evaluating AI and IoT disability assistance prototypes. The framework consists of three layers: the Disability Monitoring, Disability Analysis, and Disability Assistance layers. In each layer, a set of dimensions is identified (e.g., technology, data, security, customization, and response time) and used as criteria to evaluate the research prototypes. Moreover, we evaluate 30 representative AI and IoT disability assistance research prototypes published from 2020 to 2024. The evaluation offers valuable insights into the new strategies, technologies, and approaches that will define AI and IoT disability support in the future. While these technologies show promise in enabling access, autonomy, and interaction, major open research issues remain, such as data privacy, security, cost, scalability, and real-time response. Finally, we discuss future research directions to tackle these issues and help people with disabilities enhance their quality of life and independence.

1. Introduction

Assistive technologies have been revolutionary for people with disabilities over the past several years, giving them more independence, a better quality of life, and greater social integration. Artificial Intelligence (AI) and the Internet of Things (IoT), incorporated into assistive technologies, have further changed this arena by bringing about cutting-edge devices and systems designed to meet the individual needs of people with disabilities. These systems rely on data, decision-making, and interconnected devices to address needs such as communication difficulties, mobility problems, and sensory loss [1,2,3].
AI and IoT will increasingly hold a place in assistive technology because they can deliver innovative, responsive, and personalized solutions. AI-powered models like machine learning (ML) and deep learning (DL) perform well with big data to provide insights and predictions, while IoT enables devices and environments to communicate efficiently. This combination has allowed for the creation of novel apps addressing disabilities such as Down Syndrome (DS), Autism Spectrum Disorder (ASD), Mobility Impairment (MI), Hearing Impairment (HI), Attention-Deficit/Hyperactivity Disorder (ADHD), and Visual Impairment (VI) [1,2].
Nonetheless, despite these developments, significant challenges remain to be overcome for the widespread adoption of AI and IoT in assistive technology ecosystems. These challenges are multi-dimensional and include issues related to user acceptance, clinical adoption, industrial deployment, and regulatory readiness, all of which contribute to the lack of real-world implementation. Security, privacy, real-time processing, affordability, personalization, scalability, reliability, interoperability, and limited accessibility in resource-constrained settings, among others, continue to limit their impact [4,5]. The diversity of disabilities and user needs also calls for flexible and inclusive design considerations. Acknowledging and addressing these multi-level challenges are crucial for harnessing the full potential of AI and IoT for sustainable assistive technologies [6,7].
The research questions we set out to address in this study are as follows.
  • RQ1: Among the six selected disability categories (i.e., Down Syndrome, Autism Spectrum Disorder, Mobility Impairment, Hearing Impairment, Attention-Deficit/Hyperactivity Disorder, and Visual Impairment), which ones are most frequently targeted in AI and IoT research prototypes published between 2020 and 2024?
  • RQ2: Which monitoring, analysis, and assistance technologies, models, and data modalities are most commonly adopted, and in which operational settings?
  • RQ3: To what extent do the reviewed prototypes address or offer support for security, privacy, personalization, cost-efficiency, and response time?
  • RQ4: What are the cross-cutting gaps and directions in the AI–IoT assistive technology landscape?
In this work, we present a comprehensive survey of AI and IoT applications in assistive technologies. The survey covers several AI and IoT solutions that support different types of disabilities, including Down Syndrome, Autism Spectrum Disorder, Mobility Impairment, Hearing Impairment, Attention-Deficit/Hyperactivity Disorder, and Visual Impairment. The survey presents the research potential, issues, and future research directions of AI and IoT solutions. In a nutshell, the salient contributions of this work include the following.
  • The survey describes the application of AI and IoT for assistive technologies with regard to facilitating accessibility, communication, and independence for different types of disabilities, including Down Syndrome, Autism Spectrum Disorder, Mobility Impairment, Hearing Impairment, Attention-Deficit/Hyperactivity Disorder, and Visual Impairment.
  • An analytical framework is proposed for evaluating AI and IoT disability assistance prototypes. The framework consists of three different layers: the Disability Monitoring, Disability Analysis, and Disability Assistance layers. In each layer, a set of dimensions are identified (e.g., technology, data, security, customization, and response time) and used as criteria to evaluate the research prototypes.
  • The survey evaluates 30 AI and IoT disability assistance research prototypes published from 2020 to 2024 that demonstrate the latest trends in the field. The evaluation offers valuable insights into the new strategies, technologies, and approaches that will define future AI- and IoT-based disability support.
  • The survey identifies significant research issues in AI and IoT-assisted technology (e.g., security, scalability, cost-effectiveness, and user-centric design) and explores research directions to address them.
The remaining sections of this article are organized to enable a smooth transition from basic understanding to analysis and future research directions. Section 2 provides the background necessary for a better understanding of the proposed work by first introducing key ideas and the evolution of related work in assistive systems, aided by recent advances in AI and IoT. In Section 3, this research is then situated in the current literature in the field of AI- and IoT-enabled assistive systems with an emphasis on presenting some of the issues and gaps in the existing body of literature as a motivating factor to look for a holistic framework to unify the analysis and comparison of AI- and IoT-enabled assistive technologies. Section 4 then details the proposed three-layered framework to support cross-comparisons of the studied assistive technology prototypes in the context of addressing multiple disability types. On that basis, in Section 5, a systematic analysis of 30 research prototypes using the framework is proposed for multiple applications, and the results are consolidated to provide a holistic view for a better understanding of the prototypes proposed in research. Section 6 is then devoted to the discussion of the open issues and future research directions. Section 7 finally concludes the article and summarizes the main contributions and impact.

2. Background

The conceptual foundation of assistive technology research is deeply rooted in early studies on human–computer interaction (HCI) and accessibility. Pioneering works such as Stephanidis’s concept of Universal Usability [8] and Abascal and Nicolle’s principles of Inclusive Design [9] emphasized the design of technologies that accommodate diverse user needs, abilities, and contexts. Similarly, Cook and Polgar’s comprehensive framework in Assistive Technologies: Principles and Practice [10] formalized the clinical and ergonomic aspects of assistive device development, establishing relationships among user capabilities, task demands, and environmental constraints. These foundational theories continue to inform modern AI- and IoT-enabled systems, guiding the development of adaptive, intelligent, and user-centered assistive solutions.
Assistive technologies have diversified in recent years due to advances in human–computer interfaces, machine learning (ML), the Internet of Things (IoT), and deep learning (DL). These developments have brought new innovations that improve the lives and independence of people with disabilities. This section explores how different technologies have been developed and applied to assist people with disabilities in the past, including contributions from human–computer interaction, IoT, ML, and DL. Addressing disabilities including Down Syndrome, Autism Spectrum Disorder, Mobility Impairment, Hearing Impairment, Attention-Deficit/Hyperactivity Disorder, and Visual Impairment, this section highlights the revolutionary effect of technology on making everyday things and spaces accessible and usable for individuals with varied needs.

2.1. Assistive Technologies and Disability

Assistive technologies (ATs) are specialized machines or equipment that help people with disabilities become more independent and lead better lives [1]. These devices are specially designed for particular disabilities, such as Down Syndrome, Autism Spectrum Disorder, Mobility Impairment, Hearing Impairment, Attention-Deficit/Hyperactivity Disorder, and Visual Impairment. These disabilities and the ATs that help to overcome the associated difficulties are discussed below.
Down Syndrome is a genetic condition that results from having an extra copy of chromosome 21, which can lead to developmental delays and learning disabilities [11]. Individuals with Down Syndrome can also experience speech and language delays, cognitive dysfunction, and motor problems. Augmentative and Alternative Communication (AAC) devices, such as speech-generating devices and communication boards, are assistive technology tools that can support speech and language development for individuals with Down Syndrome. AAC is not a therapy in itself but is commonly used in therapy programs (e.g., speech or language therapy) to support communication, social, and cognitive engagement in children with Down Syndrome [12]. Moreover, educational software and apps for language learning, memory training, and thinking skills are promising technologies for cognitive rehabilitation in individuals with Down Syndrome [13,14]. In one study, computerized programs increased literacy and numeracy in individuals with Down Syndrome [12].
Autism Spectrum Disorder encompasses various neuro-developmental disorders associated with difficulties in social interaction, communication, and repetitive behavior [15,16]. Because Autism Spectrum Disorder is a spectrum, individuals experience these challenges to varying degrees. As with Down Syndrome, AAC devices aid speech in non-verbal people with Autism Spectrum Disorder, and AAC has been linked to better performance in speech production and social communication [17]. Furthermore, educational programs and apps that teach social signals, facial expressions, and social skills have been shown to improve social functioning in people with Autism Spectrum Disorder [18]. In addition, devices such as weighted vests, sensory swings, and noise-canceling headphones can address the sensory processing challenges commonly seen in Autism Spectrum Disorder [19].
Mobility Impairments are physical conditions that restrict movement, balance, or sensation. They can be the result of congenital conditions, accidents, or musculoskeletal or nervous system disorders [20]. Manual, powered, and smart wheelchairs, as well as scooters, are mobility options for those with varying degrees of movement limitation. In addition, prosthetics replace limbs that have been lost, and orthotics support and augment functional limbs; advances in materials and technologies have made these devices more practical and comfortable [21]. Moreover, environmental control systems give people with severe Mobility Impairment a way to control household appliances, lights, and other devices via voice commands or other inputs, thereby promoting independence [22].
Hearing Impairment can be partial or total, and either or both ears can be affected. The condition may influence communication, social behavior, and speech development [23]. Hearing aids or amplification devices can help people with residual hearing; these days, hearing aids are highly individualized and adjustable to each patient’s hearing loss. Additionally, surgically implanted devices can restore a sense of sound for patients with severe to profound deafness, and there is now evidence of meaningful gains in speech perception and quality of life for people with cochlear implants [24]. Assistive Listening Devices (ALDs), such as FM, infrared, and induction loop systems, boost the signal-to-noise ratio for easier listening in noisy environments [25]. Furthermore, online transcription services and applications convert speech to text so that people with Hearing Impairment can follow what is being said [26].
Attention-Deficit/Hyperactivity Disorder is a neuro-developmental condition marked by inattention, hyperactivity, and impulsivity [27]. These symptoms can affect academic performance, job performance, and everyday activities. Digital planners, reminder apps, and task management software help people with Attention-Deficit/Hyperactivity Disorder plan their time, organize work, and stay on task. Moreover, noise-canceling headphones keep sound from interrupting concentration in noisy environments. In addition, gamified learning and interactive educational programs can maintain focus and improve learning outcomes for students with Attention-Deficit/Hyperactivity Disorder [28].
Visual Impairment refers to conditions ranging from partial sight loss to total blindness that reduce a person’s capacity to perform tasks requiring vision [29]. Screen readers and magnification programs translate on-screen text into synthesized speech or enlarge it so that users can access digital content. Also, Braille displays render on-screen text in real time, and Braille embossers produce tactile Braille documents from digital content. Optical Character Recognition (OCR) systems convert printed text into digital text that screen readers can read aloud or present on Braille displays. Furthermore, GPS navigation apps and devices provide audio directions and environmental information that help users with Visual Impairment navigate unfamiliar terrain [30].
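To make the OCR-to-speech pathway described above more concrete, the short Python sketch below illustrates how a printed page could be captured and read aloud. It is an illustrative example only, not one of the prototypes reviewed in this survey; it assumes the third-party pytesseract (with the Tesseract engine installed), Pillow, and pyttsx3 packages, and the file name scanned_page.png is a hypothetical placeholder.

```python
# Illustrative sketch only: convert a printed page into speech for a user
# with Visual Impairment. Assumes pytesseract (with the Tesseract engine
# installed), Pillow, and pyttsx3; "scanned_page.png" is a hypothetical file.
from PIL import Image
import pytesseract
import pyttsx3


def read_page_aloud(image_path: str) -> str:
    """Run OCR on a scanned page and speak the recognized text."""
    text = pytesseract.image_to_string(Image.open(image_path))
    engine = pyttsx3.init()   # initialize the text-to-speech engine
    engine.say(text)          # queue the recognized text for speech
    engine.runAndWait()       # block until speech output finishes
    return text


if __name__ == "__main__":
    print(read_page_aloud("scanned_page.png"))
```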
Conventional assistive technologies are mainly based on hardwired mechanical/electronic responses with fixed, pre-programmed commands and lack learnable adaptation and customization. In contrast, AI- and IoT-enabled solutions incorporate adaptive learning algorithms, data-driven personalization, and context-aware responsiveness to environmental stimuli and user behavior. For example, AI-driven diagnostic systems (e.g., CNN- and LSTM-based models) achieve 15–30% better accuracy in identifying and classifying disabilities compared to rule-based detection systems [31,32], while IoT-enabled monitoring systems provide continuous, remote, real-time access to physiological or behavioral data [7,33]. In general, these solutions provide multimodal feedback, speech and gesture recognition, and predictive decision-making, which significantly increase independence and decrease the need for external caregiver support. Table 1 provides a comparative baseline between AI/IoT-based assistive systems and traditional approaches with respect to their functional, responsive, scalable, and personalized attributes.

2.2. Internet of Things (IoT)

IoT has transformed many industries by enabling interconnected devices to communicate and share data in real time. This technological transformation has played an enormous role in AT and has helped people with disabilities improve their quality of life.
IoT has received a major boost from other emerging technologies, such as edge computing, which reduces latency and bandwidth constraints to enable real-time decision-making [34]. Moreover, Artificial Intelligence (AI) algorithms aggregate the vast amounts of data generated by IoT devices for predictive maintenance, anomaly analysis, and customized services [2]. IoT connectivity increased with the introduction of 5G networks, which enable higher data transfer speeds and lower latency, making communication between IoT devices faster and more secure [35]. Furthermore, IoT has become more secure as security protocols ensuring data integrity and privacy have been implemented; such protocols will become increasingly necessary as the number of connected devices grows [36].
Innovators in AT have also incorporated IoT. For instance, in smart homes, IoT-connected devices enable people with Mobility Impairment to control appliances, lights, and locks via voice commands or a mobile application, fostering independence [37]. Wearable health monitors (e.g., smartwatches and fitness trackers that measure health status and activity) give people with disabilities and their caregivers real-time health information, which is especially valuable for those with chronic disease or disability [38]. In addition, IoT is used to build communication devices for people with speech impairment or Hearing Impairment that deliver real-time translation and transcription services [7]. Furthermore, IoT-enabled apps help individuals with Visual Impairment interact with the world by providing real-time audio or tactile feedback [39].
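As a simplified illustration of the IoT data flow described above, the following sketch simulates a wearable publishing heart-rate readings that a caregiver application could subscribe to in real time. It assumes the paho-mqtt client library; the broker address and topic name are hypothetical placeholders, and the readings are randomly generated.

```python
# Illustrative sketch only: a simulated wearable publishing heart-rate
# readings over MQTT. Assumes the paho-mqtt package; the broker address,
# topic name, and readings are hypothetical placeholders.
import json
import random
import time

import paho.mqtt.client as mqtt

BROKER = "broker.example.org"               # hypothetical MQTT broker
TOPIC = "wearables/patient42/heart_rate"    # hypothetical topic

client = mqtt.Client()
client.connect(BROKER, 1883)
client.loop_start()                         # handle network traffic in background

for _ in range(5):                          # publish a few simulated readings
    reading = {"bpm": random.randint(60, 100), "timestamp": time.time()}
    client.publish(TOPIC, json.dumps(reading), qos=1)
    time.sleep(1)

client.loop_stop()
client.disconnect()
```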

2.3. Human–Computer Interaction (HCI)

HCI is an interdisciplinary field concerned with the design and use of computer technology, focusing on the interfaces between users and computers. The evolution of HCI has had a significant impact on the design of assistive technologies that are more accessible and usable for people with disabilities [40]. HCI has elevated traditional AT to enable people with disabilities to lead better lives. Through user-centered design, HCI ensures that these technologies are tailored to specific users’ requirements, making them more accessible and usable.
For instance, speech recognition technology is being used to help people with Mobility Impairment operate devices and communicate [41]. Similarly, screen readers and magnifiers make online content accessible to people with Visual Impairment [42]. In addition, HCI research has enabled the development of brain–computer interfaces (BCIs), which provide communication channels for patients with severe MI [43]. The integration of AI and HCI techniques into AT has transformed the experiences of people with disabilities, enabling systems to learn and respond to users’ preferences and habits. This flexibility allows adaptive devices to be personalized for users with disabilities [44].

2.4. Machine Learning (ML)

In the past decade, ML has made significant advances across many areas, including AT for people with disabilities. ML models have played an essential role in advancing traditional assistive technology, providing people with disabilities with innovative AT [45]. For example, Support Vector Machines (SVMs) work well for classification and regression problems with high-dimensional data; they construct an optimal separating hyperplane between classes of data points. SVMs have been employed in Morse code systems that allow patients with Mobility Impairment to communicate [46].
Decision Trees (DTs) produce tree-like structures for classification and regression tasks. DTs are used in assistive device selection software, which allows people with disabilities to select appropriate technologies based on their needs [47]. A Multi-Layer Perceptron (MLP) is a type of neural network that is well suited to modeling non-linear relations. MLPs are widely used in communication technology for people with speech impairments [48]. Extreme Gradient Boosting (XGBoost) increases prediction accuracy by iteratively fitting decision trees. XGBoost is used to predict assistive technology adoption and to analyze health information to create personalized treatments [49]. K-Nearest Neighbor (KNN) performs classification and regression based on proximity. KNN is used in navigation systems that help people with Visual Impairment navigate their environment [50]. Knowledge Models (KMs) organize domain-specific data for reasoning and inference. KMs are integral to building cognitive assistive technology [51]. Fuzzy models handle uncertainty using membership degrees and can therefore be used in adaptive interfaces for assistive tasks [52]. Random Forest (RF) combines multiple decision trees to improve accuracy and reduce overfitting. RF is employed in developing customized AT for various disabilities [53]. Mel-Frequency Cepstral Coefficients (MFCCs) extract features from audio signals that are useful for speech recognition. MFCCs play a critical role in assistive devices for people with speech loss [32].
ML models are changing AT. For instance, speech recognition for Hearing Impairment uses MLPs and MFCCs to convert speech to text, enabling communication with users with Hearing Impairment [32]. In smart homes, DTs and RFs are used to deliver accessible environments where people with Mobility Impairment can operate appliances via voice commands or mobile applications [53]. In social interaction tools for Autism Spectrum Disorder, SVMs and MLPs play an essential role in helping caregivers identify patients’ feelings and social signals [48]. In navigation for Visual Impairment, KNN-based systems guide users by transforming visual data into audio or haptic feedback [50]. In behavioral monitoring for Attention-Deficit/Hyperactivity Disorder, XGBoost models assess patterns to suggest therapy [49].
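As a toy illustration of the classification role that such ML models play, the sketch below trains a Random Forest, of the kind used for sensor-based smart-home control, on synthetic feature vectors and reports held-out accuracy. It uses scikit-learn; the features and activity labels are randomly generated placeholders rather than data from any reviewed prototype.

```python
# Illustrative sketch only: a Random Forest classifying synthetic "activity"
# feature vectors, standing in for the sensor-based classification tasks
# described above. The data and labels are randomly generated placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))        # 300 samples, 8 hypothetical sensor features
y = rng.integers(0, 3, size=300)     # 3 hypothetical activity classes

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

print("Held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```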

2.5. Deep Learning (DL)

DL, a subset of ML, has revolutionized several fields by enabling machines to analyze and comprehend data patterns. DL’s evolution has affected traditional assistive technologies, helping people with disabilities enhance their quality of life. Different DL models have different properties and uses [54,55]. For example, Artificial Neural Networks (ANNs) are computational models inspired by the human brain. ANNs have layers of nodes (i.e., neurons) that process input data into actionable outputs. ANNs are a core part of DL and are used in AT to predict the most suitable assistive technology for people with disabilities based on their preferences [56].
Convolutional Neural Networks (CNNs) are neural networks designed for structured grid data, such as images. They use convolutional layers to automatically and flexibly learn spatial feature hierarchies from input data. CNNs are used to create AT for people with Visual Impairment for object detection and navigation [57]. Long Short-Term Memory (LSTM) is a type of Recurrent Neural Network (RNN) that learns from sequential data. LSTM is especially useful for time-series data such as speech or language recognition and is used for real-time speech recognition and synthesis in speech aids for speech-impaired people [58]. You Only Look Once (YOLO) is a real-time object detection model that frames detection as a regression problem, predicting bounding boxes and class probabilities directly from entire images. YOLO is used in AT to recognize and identify real-world objects, guiding users with Visual Impairment to find their way and recognize objects [59].
DL models have been key in refining traditional assistive technologies that improve the lives of people with disabilities. For instance, speech recognition for Hearing Impairment uses LSTM networks to convert spoken words into text, enabling people with Hearing Impairment to communicate [58]. In predictive text and communication systems, ANNs and LSTMs are used to predict words or phrases from user input patterns, enabling faster and more effective communication for patients with Mobility Impairment [56]. CNN models are used in environmental control systems (e.g., smart homes) so that people with Mobility Impairment can manage home appliances via voice commands or other interfaces, thereby helping maintain autonomy [57]. In Autism Spectrum Disorder interventions, LSTMs and CNNs are used to create apps that help patients identify emotions and social signals to support greater social engagement [58]. In Visual Impairment aids, YOLO and CNN models support users in recognizing objects and moving around, using images as inputs [59]. In Attention-Deficit/Hyperactivity Disorder management, ANNs and LSTMs are employed in systems that track behaviors and deliver interventions to assist with focus and organization [56].
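To give a concrete flavor of these DL models, the following sketch defines and trains a small CNN of the kind used in vision aids for users with Visual Impairment. It uses TensorFlow/Keras; the images, labels, and class count are random placeholders, and the network is deliberately far simpler than those in the prototypes reviewed later.

```python
# Illustrative sketch only: a small CNN for object recognition, of the kind
# used in vision aids for users with Visual Impairment. The images, labels,
# and class count are random placeholders, not a real dataset.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

num_classes = 5                                     # hypothetical object classes
model = tf.keras.Sequential([
    layers.Input(shape=(64, 64, 3)),                # small RGB input images
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

X = np.random.rand(200, 64, 64, 3).astype("float32")    # placeholder images
y = np.random.randint(0, num_classes, size=200)          # placeholder labels
model.fit(X, y, epochs=2, batch_size=32, verbose=0)

print("Predicted class index:", int(model.predict(X[:1], verbose=0).argmax()))
```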
AT, driven by HCI, IoT, ML, and DL technologies, has changed the way people with disabilities relate to their world and to each other. From communication aids for speech-impaired users to navigation aids for users with Visual Impairment, these technologies are bridging gaps in accessibility. The integration of user-centered design, real-time data exchange, adaptive algorithms, and intelligent interfaces demonstrates that cross-disciplinary collaboration is critical for the design of viable assistive technologies [60]. Such advances not only improve the lives of people with disabilities but also pave the way for future efforts focused on inclusivity, independence, and personalization.

3. Related Work

Over the past few years, the care and support of people with disabilities has attracted significant research attention, driven by the rapid development of Artificial Intelligence (AI) and the Internet of Things (IoT). Hence, new trends concerning the adoption of these technologies in assistive systems are continually being developed and discovered by researchers, with a focus on independence, communication, mobility, and overall quality of life. The existing literature describes the potential and pitfalls of using these technologies to promote access, healthcare, and self-sufficiency [61]. Researchers across fields, from medicine and rehabilitation to education and smart cities, report on how AI and IoT are addressing access problems, with data privacy, interoperability, cost-effectiveness, and adoption cited as open research issues [62,63,64]. In this section, we present an overview of 20 major surveys on assistive technologies for disability published in the literature from 2020 to 2024.
Habbal et al. [6] discuss Industry 4.0’s privacy requirements and challenges and present blockchain and AI privacy solutions across various industries, such as assistive technologies. The authors propose a taxonomy of privacy preservation that identifies data and network privacy solutions needed by underprivileged groups, such as people with disabilities. This paper also argues that with AI and IoT, the demand for flexible, privacy-informed models is becoming a requirement of modern life. They stress that we still have more research to do on privacy if we are to make a digital future inclusive.
Baker and Xiang [65] present an Artificial Intelligence of Things (AIoT) healthcare survey on AI, IoT, and the advancements, challenges, and prospects of IoT healthcare systems. They describe an AIoT architecture for better healthcare, remote monitoring, care planning, and the Internet of Medical Things (IoMT) that helps with the ongoing needs of people with disabilities. This survey uncovers a gap in healthcare IoT apps regarding data privacy, and explainable AI is identified as a potential area of growth. Their research provides an AIoT foundation for assistive use cases but needs further investigation to address issues of AI precision and data privacy in sensitive healthcare systems.
Modi and Singh [66] present an information modeling-based survey on assistive technologies from 2005 to 2020. They are interested in identifying emerging trends in assistive technology, especially the shift toward smart and wearable devices in rehabilitation. The main issues mentioned are privacy concerns with data stored in the cloud and latency issues in remote rehabilitation. They also propose fog computing to reduce latency and enhance data privacy by processing data closer to the end user.
Zdravkova et al. [67] present a review of AI communication and learning systems for children with disabilities spanning 2011 to 2021. The research focuses on how AI can be used in the design of assistive technologies, such as Augmentative and Alternative Communication (AAC), Natural Language Processing (NLP), and conversational AI. Key issues include the ethical questions raised by using AI in sensitive areas, such as special needs education, and the limited personalization of current tools. As solutions, the authors suggest clearer ethical principles and stronger Machine Learning (ML) algorithms to personalize tools.
Maskeliūnas et al. [68] report a survey on IoT applications for Ambient Assisted Living (AAL) environments (i.e., for elderly care) using articles published between 2011 and 2019. The paper also focuses on how IoT can improve the autonomy and well-being of older people through smart home technologies. The main research issues are the high cost of IoT products and seniors’ resistance to technology. They recommend addressing these problems through more user-centric design and further research into cost-effective IoT solutions to increase the adoption of technology in AAL systems.
Barua et al. [33] present a review of personalized AI assistive software for educational improvement for children with neuro-developmental problems from 2008 to 2021. They focus on ML solutions for supporting children with Attention-Deficit/Hyperactivity Disorder, Autism, and Dyslexia. One of the main problems is the limited personalization offered by current AI solutions and their inability to handle the complexities of real-time learning. As an intervention, the authors suggest greater model flexibility and the use of AI tools, such as reinforcement learning, to provide more targeted education.
Khalid et al. [69] present a survey of the effects of AI-based solutions in the rehabilitation industry across research articles from 2010 to 2023. In this survey, the authors highlight how AI is reshaping the field of rehabilitation by identifying apps in seven areas: personal apps, neurological and developmental disorder rehabilitation, Virtual Reality (VR) apps, telerehabilitation, etc. This review also highlights the accessibility, affordability, and flexibility of AI solutions, which could be improved through greater IoT/VR integration to enable better real-time intervention and performance. Possible research directions would be adaptive treatment via reinforcement learning and immersive VR for neurocognitive therapy.
Wambua and Oduor [70] reviewed IoT research for students with disabilities, focusing on articles published between 2010 and 2021. Their study examines how IoT-connected education technologies, such as smart classrooms and digital accessibility devices, enable learning for students with disabilities. The problems outlined include data security, the lack of accessibility standards, and limited access to tools. To maintain data protection and accessibility, the authors suggest adopting universal design and IoT device security standards. Further research is possible on broader IoT use cases in multi-sensory learning and performance monitoring in education.
Semary et al. [7] examine how IoT can support the lives of people with disabilities, reviewing research from 2005 to 2024. The authors break down IoT applications into categories such as assistive devices, smart home adapters, and health-monitoring systems. In addition, the authors identify research issues such as data privacy, interoperability, and the affordability of IoT products. Some of the proposed solutions include universal IoT standards to improve compatibility and adaptive security protocols to protect confidential information. According to the authors, IoT developments must be accessible and affordable in order to be more inclusive for people with disabilities.
Taimoor and Rehman [71] reported on reliable and resilient AI and IoT healthcare services that address personalized healthcare needs, based on studies conducted between 2015 and 2021. This overview breaks the healthcare landscape into reliability, resilience, and personalization, with the goal of closing the loop in monitoring patients with complex medical needs. The main research challenges are data replication across devices and maintaining service consistency despite environmental and technical limitations. For these reasons, they propose adopting a three-layer IoT architecture and stronger security protocols to enhance resilience and customization in healthcare delivery, especially in remote patient monitoring platforms.
De Freitas et al. [72] analyze studies on AIoT deployments in assistive technology from 2010 to 2022. The focus of their review is on AIoT in equipment for daily tasks for people with disabilities. In particular, the review focuses on visual assistance for mobility, with Deep Neural Networks (DNNs) being the most commonly used models (81%) owing to their pattern recognition precision. Device costs, energy usage, and data privacy are the key issues discussed in this work, for which the authors suggest low-power models and improved data governance as future research directions.
Lavric et al. [73] investigated assistive technologies for people with Visual Impairment, including AI and Visible Light Communications (VLC), and they reviewed articles from 2015 to 2024. VLC’s specialty in indoor navigation and data transfer can be helpful for people with Visual Impairment, according to the review. However, several research issues remain, such as outdoor coverage and device synchronization. The authors call for greater integration of VLC with AI to accelerate data processing and improve user input, and they stress the need for long-term, affordable VLC devices to enable broader adoption.
Nasr et al. [74] present a detailed analysis of AI- and IoT-enabled smart healthcare solutions, drawing on papers published between 2013 and 2021. The documents analyzed specialize in wearable technologies, disease prediction, and AAL systems for the elderly and people with disabilities. Scalability, data security, and interoperability are named as significant challenges for smart healthcare ecosystems in this research. In response to these research challenges, the authors recommend fog and edge computing to reduce latency and enhance data privacy, as well as blockchain for decentralized patient data storage.
Thilakarathne et al. [75] present a systematic review of publications on IoT applications in healthcare from 2012 to 2020. The work focuses on remote health monitoring and chronic disease management applications where the IoT’s power can help alleviate the healthcare burden for seniors. In addition, research issues such as network bandwidth, privacy, and high costs are listed. Possible solutions include blockchain for secure data transfer and AI algorithms for improved predictive analysis. This article outlines how we can better leverage IoT and AI in home healthcare systems.
Kumar et al. [76] investigated how IoT and Digital Twins can improve medical training for physically impaired students, covering studies from 2015 to 2024. In this paper, the authors used Digital Twins to emulate real-world clinical situations and provide students with hands-on experience without physical restrictions. Among the most significant research issues are the computational resources and energy required to run simulations and the synchronization of data between virtual and physical systems. The authors propose the use of distributed systems to achieve higher data rates and greater interaction, fostering an inclusive learning culture.
Alshamrani [77] presents a survey of IoT and AI implementations for remote healthcare monitoring, covering articles from 2008 to 2021. In this survey, IoT and ML are discussed in the context of assessing the use cases of patient tracking in smart city infrastructure (e.g., body temperature and heart rate monitoring). Among the identified research issues are the difficulty in integrating different IoT devices and data security. The author recommends implementing new network protocols and strong encryption to protect patient information. Moreover, the author advises that further research should standardize IoT device interoperability for healthcare applications in smart cities.
Domingo [78] provides an overview of ML and 5G applications in assistive technology for people with disabilities, drawing on studies published between 2010 and 2021. The survey focuses on integrating ML and 5G more effectively to support people with visual, auditory, and physical disabilities. Some of the most important issues addressed are keeping latency low and transmitting data securely, especially for high-speed applications such as navigation and object detection. The author points to 5G network slicing and personalized ML algorithms as ways to reduce data traffic and enable real-time interaction, and recommends further research into low-energy, scalable technologies to support wider deployment.
Perez et al. [79] conducted a survey of IoT systems for older people and people with disabilities from 2010 to 2023. These studies focus on IoT use cases in smart homes, including wearable devices, health monitoring, and disaster response systems, to help users maintain independence. The main issues discussed here include IoT device costs, data privacy, and the need for adequate network infrastructure for continuous monitoring. According to the authors, local data processing and fog computing can be used to enhance privacy and reduce costs by scaling IoT systems up and down as users’ requirements change.
Joudar et al. [80] systematically review AI-based early diagnosis and treatment of Autism Spectrum Disorder for research works published between 2017 and 2022. This article focuses on AI for Autism Spectrum Disorder triage, clinical diagnosis, and telemedicine, specifically Magnetic Resonance Imaging (MRI) and Electroencephalographic (EEG) data for diagnostic accuracy. Issues identified in this systematic review include (i) poor-quality Autism Spectrum Disorder datasets, (ii) the lack of explainability of AI models, and (iii) cross-modal data integration. The authors suggest that combining explainable AI (XAI) and AutoML enables more transparent, reliable AI models in clinical applications.
Zhou et al. [81] explore smart cities that include people with disabilities, and the studies covered in this survey paper were published from 2015 to 2023. This work analyses, through a sociotechnical lens and within the Quadruple Helix Model framework, the capacity of government, industry, academia, and citizens to promote inclusion in cities. The research found that there is insufficient stakeholder engagement and accessible design in public infrastructure, which discourages people with disabilities from participating. Recommendations from this survey paper include cross-sector collaborations and the use of IoT and AI to enable flexible public spaces that are responsive to individual accessibility requirements, thereby fostering social inclusion in smart cities.
Though there have been previous surveys of AI and IoT use in assistive technologies, they have not generally included a broad understanding of the diverse needs of people with disabilities or an evaluation framework for such technologies. Our survey distinguishes itself by systematically addressing these gaps through a structured literature selection and evaluation process that spans six major disability categories. Firstly, our survey offers a detailed discussion of how AI and IoT are making accessibility, communication, and independence easier for those with disabilities across multiple conditions, including Down Syndrome, Autism Spectrum Disorder, Mobility Impairment (MI), Hearing Impairment, Attention-Deficit/Hyperactivity Disorder, and Visual Impairment. Secondly, an analytical framework with three layers is presented, including disability monitoring, disability analysis, and disability assistance, along with specific dimensions such as technology, data, security, personalization, and response time, in order to evaluate AI and IoT research prototypes holistically. Thirdly, the survey assesses 30 research prototypes published from 2020 to 2024 to provide insights into the trends, strategies, and technologies shaping the future of AI and IoT disability support. Finally, our survey highlights research gaps (e.g., security, scalability, cost, and user experience) and suggests future research directions to advance the field. Such contributions make our work a notable step forward in systematic research and innovation of AI and IoT solutions for multiple disabilities.

4. AI and IoT Disability Assistance Analytical Framework

To understand the benefits and drawbacks of Artificial Intelligence (AI) techniques and Internet of Things (IoT) technologies in disability assistance, we propose a generic AI and IoT framework for disability assistance. In addition, we outline criteria for evaluating recently proposed research prototypes. The results of this evaluation will help identify open challenges in assistive technologies for people with disabilities. The proposed analytical framework can be related to, and used in conjunction with, existing concepts and models of disability, such as the World Health Organization’s International Classification of Functioning, Disability and Health (ICF). While the ICF defines important concepts of disability, such as body functions, activities and participation, and environmental factors, our framework operationalizes these concepts as technological dimensions that cut across the three layers: Monitoring, Analysis, and Assistance. For instance, the Monitoring Layer corresponds to the ICF’s body functions and environment (i.e., monitoring of the user’s body functions and environment), the Analysis Layer to activities and participation (i.e., analysis of the user’s activities and context), and the Assistance Layer to the actual support and assistive technologies (i.e., assistive technologies that enable users’ participation and provide support). In this way, our proposed framework is situated in a global context. It can serve as the foundation for a rigorous, comprehensive approach to multidisciplinary disability research and to the design of technologies for people with disabilities. Figure 1 depicts the generic architecture for disability assistance that we propose in this section.

4.1. Layers of the AI and IoT Disability Assistance Analytical Framework

A standard disability assistance system consists of three primary layers: (i) the Disability Monitoring Layer, (ii) the Disability Analysis Layer, and (iii) the Disability Assistance Layer. Each layer in the system has a specific role, as detailed below.
1. Disability Monitoring Layer: This layer aggregates and pre-processes data from different sources so that it is suitable for subsequent analysis. In AI-powered assistance for people with disabilities, this layer includes collecting data on disability types, technologies, and environments. The gathered data can come from cameras, sensors, IoT hardware, and user input, and it is stored for processing in later stages.
2. Disability Analysis Layer: This is the layer where information obtained in the Disability Monitoring Layer is processed and turned into insights that can be used to make decisions. It uses cutting-edge methods, models, and algorithms to understand data for specific types of disability, enabling effective decision-making and assistance. This layer ensures that raw data is converted into actionable insights through features such as pattern recognition, anomaly or object detection, and predictive analysis.
3. Disability Assistance Layer: This is the layer where we target disability assistance according to the findings of the Analysis Layer. It focuses on personalized interventions in real time, delivered via assistive technologies. This layer ensures that all received data is converted into real-world applications (i.e., mobility assistance, communication enhancement, and diagnostics).
To summarize, a typical disability assistance system has three main layers, each with a distinct purpose. The Disability Monitoring Layer takes input from cameras, sensors, IoT devices, and user feedback and pre-processes it. The Disability Analysis Layer then applies models and algorithms to this data to derive valuable decision-making insights through methods such as pattern analysis and predictive analytics. Finally, the Disability Assistance Layer takes this knowledge and applies it to intervention, personalizing and delivering real-time assistance through assistive mobility, communication, and diagnostic technologies.
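A minimal, purely illustrative sketch of how the three layers could be chained in software is shown below; all function names, data fields, and threshold values are hypothetical and stand in for the technologies discussed in the remainder of this section.

```python
# Illustrative sketch only: a skeletal pipeline mirroring the three layers of
# the proposed framework. All function names, fields, and thresholds are
# hypothetical placeholders.
from dataclasses import dataclass
from typing import Any, Dict


@dataclass
class Insight:
    label: str          # e.g., "obstacle_ahead" or "attention_drop"
    confidence: float


def monitoring_layer() -> Dict[str, Any]:
    """Disability Monitoring Layer: collect and pre-process raw data."""
    return {"source": "camera", "frame": b"...", "timestamp": 0.0}


def analysis_layer(raw: Dict[str, Any]) -> Insight:
    """Disability Analysis Layer: turn raw data into an actionable insight.
    A real system would invoke an ML/DL model here."""
    return Insight(label="obstacle_ahead", confidence=0.93)


def assistance_layer(insight: Insight) -> None:
    """Disability Assistance Layer: deliver personalized, real-time support."""
    if insight.label == "obstacle_ahead" and insight.confidence > 0.8:
        print("Audio alert: obstacle detected ahead, please stop.")


if __name__ == "__main__":
    assistance_layer(analysis_layer(monitoring_layer()))
```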

4.2. Criteria for Evaluating Disability Assistance Prototypes

For each layer of the proposed generic AI and IoT disability assistance analytical framework, we identified a collection of criteria for detecting, diagnosing, and assisting people with disabilities in multiple environments, as detailed below.

4.2.1. Disability Monitoring Layer

Below are the criteria and attributes of the Disability Monitoring Layer:
1. Focus: This criterion refers to the main disability domain that the system measures. It indicates which impairments the data are being acquired and processed for. Different disabilities call for different data sources and approaches for effective assistance. Examples include the following.
  • Down Syndrome (DS): Down Syndrome prototypes collect image data (e.g., from fixed cameras) for real-time monitoring, behavioral profiling, or other diagnostic purposes.
  • Autism Spectrum Disorder (ASD): Sensors (e.g., loop detectors, radar, and microwave sensors) are installed in spaces to gather activity and movement data in real time.
  • Mobility Impairment (MI): Global Positioning System (GPS) or navigational devices are used to track movement patterns, travel distances, and travel times.
  • Hearing Impairment (HI): Mobile devices and other interfaces collect text and audio information for real-time support and data analysis.
  • Attention-Deficit/Hyperactivity Disorder (ADHD): Information is extracted from the user interface app or intelligent systems to analyze behavioral dynamics and provide adaptive support.
  • Visual Impairment (VI): Advanced imaging technologies and weather information enhance orientation and situational awareness.
2. Technology: This refers to the technologies or tools used to collect and analyze information about each disability type. These include the following.
  • Internet of Things (IoT): General-purpose embedded devices that collect and share real-time data, used for disabilities such as Autism Spectrum Disorder, Mobility Impairment, or Hearing Impairment.
  • Raspberry Pi (RP): Small single-board computers, primarily used in Down Syndrome and Visual Impairment applications.
  • Arduino (AR): Microcontroller boards used in Mobility Impairment-related projects for sensor integration and real-time data monitoring.
  • Bluetooth (BT): A communication medium that allows data to be sent and received between devices, mostly used in Autism Spectrum Disorder systems.
  • Long-Range Communication (LoRa): A communication medium effective for Mobility Impairment and Visual Impairment applications, providing wide-area coverage.
  • Radio Frequency Identification (RFID): Tags and readers used in Autism Spectrum Disorder or Mobility Impairment systems to track and identify people’s locations.
3. Data Source: This criterion focuses on where the data comes from, and it depends on the technology and disability orientation.
  • Cameras (C): These gather visual information for analysis, such as obstacles or behavior.
  • Sensors (S): These collect environmental and activity information for detecting trends or anomalies.
  • Scanned Images (SI): These capture static images using medical devices (e.g., Ultrasound and Magnetic Resonance Imaging (MRI)) for deeper analysis, especially in Autism Spectrum Disorder or Mobility Impairment settings.
  • Text Inputs (TI): These provide user-specific inputs in the form of text or a command.
  • Microphones (M): These capture audio data for Hearing Impairment applications.
4. Data Type: This criterion defines the type of data collected, which is one of the following.
  • Images (I): Images from cameras for recognition of patterns or detection of objects.
  • People with Disabilities Location Data (DL): This denotes geographical locations mostly for Mobility Impairment and Autism Spectrum Disorder.
  • Video Frames (VF): These provide ordered sequences of image frames for dynamic analysis, mostly in Mobility Impairment and Visual Impairment systems.
  • Time-Series Data (TS): This represents trends over time and is very helpful when looking for patterns, especially in Autism Spectrum Disorder or Attention-Deficit/Hyperactivity Disorder prototypes.
  • Text Data (TD): This represents written content related to people with disabilities’ activity or symptoms, commonly used in Hearing Impairment and Attention-Deficit/Hyperactivity Disorder applications.
  • Audio Data (AD): This denotes audio data captured from speakers, mainly used for Hearing Impairment and Visual Impairment applications.
5. Environment: The environment criterion specifies where the data is collected and interpreted. Examples include the following.
  • Indoor (ID): Indoor systems cover enclosed spaces such as a home, clinic, or school.
  • Outdoor (OD): Outdoor systems cover open areas such as parks, streets, or public places.
To sum up, the Disability Monitoring Layer ensures that data collection is tailored to each disability type, supported by new technologies and custom environments. This layer, with its systematic separation of sources, types, and environments, provides a solid foundation for valid and efficient AI- and IoT-based disability assistance systems.

4.2.2. Disability Analysis Layer

The Disability Analysis Layer criteria and feature sets are as follows.
1. Techniques: The criteria here are about how the data is processed and interpreted. Different methods depend on the type of disability and the data. Examples include the following.
  • Machine Learning (ML): ML recognizes patterns and predicts and classifies disabilities like Down Syndrome, Attention-Deficit/Hyperactivity Disorder, and Hearing Impairment.
  • Deep Learning (DL): This is useful for more challenging data such as images, video frames, and sequences in Autism Spectrum Disorder, Visual Impairment, and Hearing Impairment prototypes.
  • Human–Computer Interaction (HCI): This develops interactive systems for those with disabilities such as Attention-Deficit/Hyperactivity Disorder and Hearing Impairment that are user-friendly and accessible.
2. Model: This is the model of the computations used for the analysis of data. The choice of model depends on the data type and task complexity. Examples include the following.
  • Support Vector Machine (SVM): This is useful for classification, most often used in Down Syndrome analysis.
  • Convolutional Neural Network (CNN): This has been proven for Autism Spectrum Disorder, Visual Impairment, and Hearing Impairment image and video analysis.
  • Extreme Gradient Boosting (XGBoost): High level of predictive power, especially in Attention-Deficit/Hyperactivity Disorder and Down Syndrome.
  • Artificial Neural Network (ANN): This is useful for classification, most often used in Mobility Impairment analysis.
  • K-Nearest Neighbor (KNN): A simple classification algorithm to perform MI or Hearing Impairment analysis.
  • You Only Look Once (YOLO): This has a proven higher level of object detection commonly used in Visual Impairment or Mobility Impairment detection tasks.
  • Long Short-Term Memory (LSTM): This handles sequential data, such as in behavioral analysis or visual perception tasks.
  • Decision Tree (DT): Simple classification algorithm mostly used in Attention-Deficit/Hyperactivity Disorder analysis.
  • Knowledge Model (KM): A model useful for prediction, most often used in Mobility Impairment analysis.
  • Random Forest (RF): A model useful for classification, most often used in Attention-Deficit/Hyperactivity Disorder analysis.
3. Parameters: This criterion is about the filtered features or attributes within the data. Examples include the following.
  • Facial Recognition (FR): Commonly used in Down Syndrome for facial features recognition.
  • Children (CH): Limited to features or attributes of children, mostly used in Autism Spectrum Disorder and Down Syndrome prototypes.
  • Adults/Elderly (A/E): Limited to features or attributes of adults or the elderly, mostly used in Autism Spectrum Disorder and Mobility Impairment prototypes.
  • Navigation Routes (NR): Plans a route and creates access for people with disabilities, particularly used in Visual Impairment and Mobility Impairment.
  • Body Organs (BO): This examines the body’s inner organs for predictive or diagnostic tasks, primarily used in Attention-Deficit/Hyperactivity Disorder, Autism Spectrum Disorder, or Down Syndrome.
  • Drawing (DR): Examines drawing patterns for people with disabilities, mainly used for children with Autism Spectrum Disorder.
  • Body Motors (BM): This examines body motor function and mechanics, especially in Mobility Impairment and Attention-Deficit/Hyperactivity Disorder.
  • Behavior (B): This observes and monitors behavior, in particular in systems associated with Attention-Deficit/Hyperactivity Disorder.
  • Visual Perception (VP): Queries vision-based data for Visual Impairment and Hearing Impairment use cases.
4. Architecture: The organizational strategy for how data is structured and processed.
  • Centralized (C): Data is centralized in terms of pattern recognition, anomaly or object detection, and predictive analysis. The centralized architecture is easier to manage, requires high computational resources, and is the most common architecture used for AI and IoT prototype development.
  • Decentralized (D): Data is decentralized in terms of pattern recognition, anomaly or object detection, and predictive analysis. This enables local processing for real-time use, supports scalability in terms of the number of people with disabilities, and ensures the system’s availability and security. This architectural design is primarily used in Autism Spectrum Disorder and Mobility Impairment prototypes.
5. Security and Privacy: Data is processed under appropriate security conditions to ensure the confidentiality and privacy of people with disabilities and the integrity of their data.
  • Supporting Security (SS): Implements protection of people with disabilities’ data from intrusion and unauthorized access, such as access control methods and encryption.
  • Supporting Privacy (SP): Supports privacy, especially in child- and adult-sensitive applications, by using anonymization techniques.
  • Supporting Security and Privacy (SSP): A combination of security and privacy methodologies for total protection.
In summary, the Disability Analysis Layer is responsible for deciphering data with sophisticated tools, models, and frameworks, keeping in mind key parameters and security and privacy. This layer enables the conversion of unstructured data into meaningful data that can be targeted and efficiently provided for a wide variety of disabilities.

4.2.3. Disability Assistance Layer

The disability assistance layer criteria and features are as follows:
1. Technology: This criterion identifies the technologies employed to deliver disability assistance, tailored to the specific needs of users. Examples include the following.
  • Assistive Robots (ARB): Physical and cognitive support, especially in mobility and rehabilitation tasks.
  • Assistive Chatbots (AC): These provide conversational support for communication or cognitive rehabilitation.
  • Mobility Devices (MD): These support people with mobility impairments by helping them move, enhancing mechanical mobility, and assisting with wayfinding.
  • Navigation Devices (ND): These help people with disabilities find routes and navigate their surroundings easily.
  • Hearing Devices (HD): These provide hearing and communication support to Hearing Impairment users or those having difficulties in hearing.
  • Visual Devices (VD): They improve visual perception and navigation for Visual Impairment users.
  • Diagnostic/Detection Assistance (DA): These are targeted towards diagnostic and anomaly or object detection in real time.
  • Metaverse (MV): This provides virtual worlds in which avatars represent people with disabilities and helps in cognitive or physical therapy.
  • Augmented Reality (AR): This provides augmented support for diagnostics, therapy, or communication.
2. Type of Assistance: This determines the type of support individuals receive based on their disability. Examples include the following.
  • Mobility Assistance (MA): This assists people with Mobility Impairment in physical movement.
  • Rehabilitation (R): Therapeutic interventions for rehabilitation or adaptation.
  • Navigation Assistance (NA): This guides people with disabilities through unfamiliar environments in a comfortable and controlled manner.
  • Cognitive Rehabilitation (CR): This improves cognitive function through guided engagement.
  • Diagnostic/Detection (D): This allows people with disabilities to detect objects or to identify hazards in real time.
  • Communication Assistance (CA): This improves the communication skills of people with disabilities, especially in cases of speech or hearing loss.
3. Personalization: This criterion denotes the customization of assistance based on the needs of people with disabilities.
  • Supporting Personalization (SP): Individualized services based on personal preferences and needs.
  • Not Supporting (NS): Standardized solutions that may not be flexible to particular user scenarios.
4. Cost: This criterion refers to the economic feasibility of the disability assistance system.
  • Cost Effective (CE): Solutions that deliver integrated functionality at an affordable price and are easy to use.
  • Uneconomical (UE): The system may offer richer functionality, but it is not affordable.
5. Response Time: This criterion represents the speed and efficiency of assistance.
  • Strong Emphasis (SE): Response-optimized for real-time support.
  • No Strong Emphasis (NSE): The system has possible delays or lacks real-time communication.
In summary, the Disability Assistance Layer is dedicated to translating data insights into practical support for people with disabilities. Advanced technologies enable this layer, which prioritizes personalization, cost-efficiency, and response speed, resulting in practical, helpful support for a variety of disabilities.

5. Research Prototypes

In this section, we provide a detailed summary of 30 representative AI- and IoT-based research prototypes that have been developed with the aim of assisting with disabilities. These prototypes were carefully selected from the literature based on criteria including scientific impact, methodological novelty, technological maturity, and practical relevance to assistive use cases. Collectively, they showcase the diversity of current research efforts spanning the six disability types and the range of assistive application domains where AI and IoT are being applied. The evaluation of these studies provides an insight not only into the state of the art but also into the emerging trends that can empower people with disabilities through the use of intelligent, connected, and inclusive technologies. In addition, this evaluation highlights common research themes, ongoing challenges, and potential opportunities in AI- and IoT-enabled disability assistance.

5.1. Research Strategy

We applied a scoping review methodology based on the PRISMA-ScR statement [82] and the systematic review protocol by Kitchenham and Charters [83]. Peer-reviewed papers were retrieved from IEEE Xplore, ACM Digital Library, ScienceDirect, and MDPI based on keywords such as “AI assistive technology”, “IoT disability”, and “assistive system”. Thirty eligible prototypes (2020–2024) were selected if (i) they explicitly address one or more disability categories (Down Syndrome, Autism Spectrum Disorder, Mobility Impairment, Hearing Impairment, Attention-Deficit/Hyperactivity Disorder, Visual Impairment), (ii) they use AI or IoT as part of their system, and (iii) they report design or evaluation data. The percentage values in Table 2, Table 3 and Table 4 were calculated by normalizing frequencies across all 30 prototypes, treating each technique or data source as occurring once per category. This evaluation setting provides reproducible, measurable grounding for our Research Questions (RQs) in Section 1.
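For transparency, the normalization behind the percentage figures in Table 2, Table 3 and Table 4 can be reproduced with a simple frequency count. The snippet below is a minimal illustration of this calculation, assuming a hypothetical per-prototype coding sheet rather than the actual annotations used in this study.

```python
from collections import Counter

# Hypothetical coding sheet: one list of criteria labels per prototype.
# A prototype may satisfy several criteria per dimension, which is why
# the percentages reported in the tables can exceed 100% in total.
prototype_annotations = [
    ["IoT", "Camera"],           # prototype 1
    ["IoT", "Sensor", "RFID"],   # prototype 2
    ["Raspberry Pi", "Camera"],  # prototype 3
    # ... one entry per prototype, 30 in total
]

n_prototypes = 30
counts = Counter(label
                 for labels in prototype_annotations
                 for label in set(labels))

for label, count in counts.most_common():
    print(f"{label}: {100 * count / n_prototypes:.2f}%")
```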
Figure 2a shows the distribution of the 30 research prototypes by year. It can be observed that from 2020 to 2024, the number of studies focusing on this area increased, and more assistive technologies empowered by AI and IoT were developed. The number of studies in 2024 is the highest, reflecting researchers increasingly implementing deep learning, edge computing, and sensor fusion in their work. Figure 2b shows the distribution of these research prototypes by publisher. The figure shows that most studies are published in IEEE venues, followed by Springer, MDPI, and Elsevier. In addition, a small number of related studies were found in other well-known journals, such as Diagnostics and Applied Neuropsychology: Child, which account for around 7.7% of the studied dataset. The statistics show that most studies focus on the engineering and implementation of AI–IoT systems from a computer science perspective. However, other fields, such as medical diagnostics, neuropsychology, and human factors research, are also paying increasing attention to this area.

5.2. Overview of Major AI and IoT Disability Assistance Research Prototypes

To better illuminate key research prototypes in AI and IoT applications for disability, we describe prominent prototypes and studies. These research works have made substantial contributions to assistive technologies for individuals with disabilities, ranging from Down Syndrome and Autism Spectrum Disorder to Mobility Impairment, Hearing Impairment, Attention-Deficit/Hyperactivity Disorder, and Visual Impairment. In presenting this overview, we aim to show the range and breadth of research that goes into developing novel AI solutions, spanning multiple subject areas, machine learning (ML) and deep learning (DL) approaches, and practical use cases. This review can be used to identify innovations, trends, and principal conclusions from researchers seeking to promote accessibility and empowerment for people with disabilities. With this overview, we can identify critical themes, research challenges, and opportunities to continue advancing AI-enabled disability assistance.

5.2.1. Down Syndrome

In paper [84], the authors present an AI-driven support robot for children with Down Syndrome that fosters cognitive and language development, including speech training, individual feedback, and language recognition libraries that the robot implements to support language learning. Raspberry Pi and 3D printing are used to support children’s language learning in a playful, engaging, and straightforward way. The project is carried out using the V-model methodological approach, linking the two major layers, hardware and software, and a third layer, user interaction, to ensure efficiency and consistency in linguistic and emotional development. The benefits are (i) personalized assistance for cognitive and language acquisition and (ii) customized feedback tailored to a child’s learning progress and emotional state, which may enhance motivation and self-confidence. Meanwhile, the obstacles are a lack of scalability to different settings, production costs, and the need for broader clinical validation in multilingual settings and further customization for long-term home and classroom usage.
In research paper [85], the authors propose an IoT-based information system for people with Down Syndrome to help them navigate the transportation network independently. The researchers base the information system on the Comprehensive Assistive Technology (CAT) model and suggest deploying IoT sensors, such as Bluetooth Low-Energy (BLE) beacons, cameras, and sound sensors, at traffic junctions, bus stops, and train stations. The IoT infrastructure collects real-time data on location, travel times, and traffic conditions and provides simplified navigation and safety alerts through a mobile application. Parents and caregivers can also be informed of users’ locations remotely and receive SOS alerts in emergency situations. To design human-centered features, the researchers conducted a survey of 38 Down Syndrome users aged 21–66 in Zagreb, which showed that people with Down Syndrome are heavy smartphone users but lack navigation skills, underscoring the need for IoT assistance. The system is shown to enhance independence and mobility, but the authors note that challenges such as infrastructure costs, maintenance, and connectivity in rural areas remain.
The article [31] introduces a face-based approach for automatic detection of Down Syndrome using Deep Convolutional Neural Networks (DCNNs). The method leverages unconstrained 2D facial images to differentiate between subjects with and without Down Syndrome, achieving high accuracy and specificity. The main contributions are (i) full automation of the diagnostic process with no manual subjectivity and (ii) broader accessibility in under-resourced clinical settings. The study also highlights robustness to pose and illumination variations, making it applicable in low-cost clinical imaging devices. However, the dataset’s diversity is limited, and generalization across ethnic and age groups should be validated for clinical adoption.
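To make the classification step in [31] concrete, the sketch below defines a small convolutional network for binary classification of 2D facial images. The architecture, input size, and hyperparameters are our own illustrative assumptions and do not reproduce the authors’ DCNN.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Minimal binary CNN over 2D face crops (illustrative architecture only).
def build_face_classifier(input_shape=(128, 128, 3)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.3),
        layers.Dense(1, activation="sigmoid"),  # Down Syndrome vs. control
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy", tf.keras.metrics.AUC()])
    return model

model = build_face_classifier()
model.summary()
```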
A similar approach is taken by the authors of the study [86]. There, ML models are applied to Down Syndrome screening during both trimesters. Multiple classifiers, including K-Nearest Neighbors (KNN), Support Vector Machines (SVMs), Random Forests (RFs), and eXtreme Gradient Boosting (XGBoost), are applied to ultrasound and biochemical data, with XGBoost achieving the highest predictive accuracy. The model is shown to outperform conventional rule-based screening in terms of sensitivity and specificity. The advantages of the approach are that (i) it is cost-effective and flexible to local resource constraints in low-to-middle-income settings and (ii) it allows early non-invasive diagnosis. Its limitations include a reliance on high-quality ultrasound data and the need for validation in diverse healthcare infrastructures to ensure robustness.
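The screening pipeline in [86] can be sketched as a comparison of standard classifiers on tabular marker data. The snippet below is our own illustration on placeholder data; the actual ultrasound and biochemical features, preprocessing, and tuning of the study are not reproduced.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from xgboost import XGBClassifier

# Placeholder data: rows = pregnancies, columns = screening markers,
# with a weak synthetic signal so the comparison is not trivial.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=500) > 0).astype(int)

models = {
    "KNN": make_pipeline(StandardScaler(), KNeighborsClassifier()),
    "SVM": make_pipeline(StandardScaler(), SVC()),
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "XGBoost": XGBClassifier(n_estimators=200, eval_metric="logloss"),
}

for name, clf in models.items():
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean ROC-AUC = {auc:.3f}")
```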
The work presented in article [87] for accurate diagnosis of Down Syndrome using fetal ultrasound is innovative in several ways. It takes an interdisciplinary approach, combining data analytics and DL for imaging, segmentation, and feature extraction, aiming to improve diagnostic accuracy. The study has potential for clinical interpretability, leveraging CNN architectures to achieve high detection sensitivity. Its strengths include (i) the application of state-of-the-art DL techniques to improve non-invasive image-based diagnostics and (ii) the cross-domain linkages it creates between clinical and IT professionals to enable more effective early prenatal screening methods. However, the need for large annotated datasets and dependence on high-resolution ultrasound systems may limit the scalability of such solutions in low-resource clinical settings. As such, federated or transfer learning-based models may be necessary for greater accessibility and generalization.

5.2.2. Autism Spectrum Disorder

An AI monitoring system for individuals with Autism Spectrum Disorder was proposed in [88]. The platform receives sensor inputs and reads emotions from facial expressions to offer caregivers tailored interaction and notifications. The key advantages of the proposed system include (i) continuous monitoring to fine-tune the way they learn, which helps to maintain cognitive growth, and (ii) mobile services for periods such as COVID-19 lockdowns when in-person therapy is not available. On the other hand, the system’s reliance on consistent sensor input may limit its robustness across varied real-world environments, such as homes, classrooms, and outdoor settings, where fluctuations in lighting, noise, or movement can affect sensor readings.
An AI screening platform, Autism AI, that uses a CNN for Autism Spectrum Disorder diagnosis is presented in [89]. It is a mobile data collection system and real-time assessment platform powered by a cloud-based AI model that is more accurate than traditional diagnostic systems. The advantages of the proposed system include (i) high levels of accuracy, sensitivity, and specificity of screening and (ii) easy access via mobile applications. In contrast, the paper also suggests that model tuning may be needed to ensure the findings are reliable across different demographic groups.
An IoT and AI app called PandaSays, for automatically processing the affective state of children with autism, was developed in [90]. The tool scans children’s drawings for emotions such as joy, sadness, and anger, which are then classified using neural networks such as MobileNet and ResNet50. The most salient advantages of the proposed approach are (i) a non-verbal indicator of emotions for children, which will benefit the caregiver and therapist, and (ii) connection to the Alpha 1 Pro robot that will carry out certain activities according to the child’s emotional state, thus increasing the value of the experience. On the other hand, there are drawbacks such as low accuracy and overfitting, so models need to be improved, and the data collected should be increased to enhance accuracy.
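To illustrate the kind of transfer-learning classifier described above, the sketch below fine-tunes a frozen MobileNetV2 backbone on a hypothetical folder of drawing images grouped by emotion. The directory name, image size, class set, and training settings are assumptions for illustration, not details of PandaSays.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Hypothetical dataset layout: drawings/<emotion>/<image>.png
train_ds = tf.keras.utils.image_dataset_from_directory(
    "drawings", image_size=(224, 224), batch_size=32)
num_classes = len(train_ds.class_names)  # e.g., joy, sadness, anger

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze pretrained features to limit overfitting

model = models.Sequential([
    layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 preprocessing
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```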
An IoT/AI system named AutBot is presented in [91] for diagnosing and treating Autism Spectrum Disorder. It is a software system available as an Android app that combines emotion recognition and performance analysis to help clinicians track children with Autism Spectrum Disorder across a range of activities. The proposed system’s principal components include (i) an Application module and Arduino Uno hardware, (ii) an Emotion module based on OpenVino, and (iii) Face Detector and Autism Card reader modules. This research shows that AutBot accurately detects emotions, thereby improving communication between parents and healthcare providers. However, issues such as dependence on stable connectivity and non-transparent functions can impact its application in real-world environments.
In research work [92], DL and transfer learning were used to detect Autism Spectrum Disorder early from brain image data. The experiment draws on the Autism Brain Imaging Data Exchange (ABIDE) dataset and uses a CNN with transfer learning for better classification. The key contributions of this research work are (i) achieving diagnostic precision of 81.56% using a CNN model on ABIDE data for early Autism Spectrum Disorder diagnosis, and (ii) presenting an efficient algorithm for improving diagnostic quality across multiple image datasets. However, the limitations of the research work include reliance on high-quality imaging data, which may limit generalization, and the need for large datasets to improve model efficiency.

5.2.3. Mobility Impairment

In study [93], an AI- and IoT-enabled exoskeleton platform is proposed for people with paralysis recovering in smart cities. This exoskeleton is mapped using AI-based Simultaneous Localization and Mapping (SLAM) for real-time navigation and movement, which utilizes LoRa sensors and the Message Queuing Telemetry Transport (MQTT) protocol to enable data exchange between caregivers and patients. It is a modular system structure that uses a Global Bayesian Artificial Neural Network (ANN) approach with a detector in a closed loop for decision-making. In addition, IoT is used to send sensor data, empowering user autonomy and safety while enabling caregivers to monitor and guide movement from anywhere. The advantages of the proposed platform include (i) improved real-time scalability and (ii) less need for human intervention. On the other hand, the platform has high deployment costs and complex technology, which may create challenges to mass adoption. In addition, an environment with excessive noise or near silence can impair the application’s ability to function, because voice- or sound-based inputs become unreliable in such conditions.
In research work [94], Boxly, a self-healing clinic design, harnesses the Metaverse, AI, and IoT for rehabilitation therapy for people with mobility impairments. The clinic also uses a CNN to monitor and recognize posture during exercises, enabling therapy to adjust in real time based on patients’ movements. Metaverse-enabled Virtual Reality (VR) training transports patients to a VR world, while IoT sensors monitor movement and inform clinicians. The system’s advantages are (i) enhanced patient motivation with VR-based therapy and (ii) precise body positioning tracking with AI algorithms. Privacy and data security issues, as well as the cost of VR/IoT components, can limit mass adoption.
Vourganas et al. [95] propose a hybrid model combining XGBoost and K-Nearest Neighbors (k-NN) to enable patients who have recovered from COVID-19 to be rehabilitated at home in an unsupervised, AI-compatible manner, aligned with ART (Accountability, Responsibility, Transparency) principles. This AI, tailored to patients’ needs, administers the Timed Up and Go (TUG) and Five-Time Sit-to-Stand (FTSTS) tests, using artificial ambient intelligence (AmI) to evaluate users and provide instructions for recovery during daily living activities. To avoid dataset bias, synthetic data is combined with experimental data to form a weighted training dataset. The hybrid model achieves 100% accuracy on FTSTS and up to 83% on TUG tests, providing real-time progress tracking, and it gradually adapts to users’ requirements through continuous retraining. The advantages of the proposed model include (i) high model performance and low computational overhead appropriate for IoT and (ii) increased patient motivation and interaction via tailored feedback. However, there are some minor misclassifications in the TUG test subcategories (e.g., difficulty distinguishing between turning and standing phases), which may require fine-tuning the dataset or including additional sensory inputs to make the system as accurate as possible. Furthermore, the dataset is small and quite biased; therefore, increasing the number of participants and the dataset as a whole would be beneficial.
In study [96], an AI- and IoT-based fall detection algorithm for the elderly using Radio Frequency Identification (RFID) tags embedded in smart floors is proposed. The proposed system uses multiple ML classifiers, such as KNN, RF, XGBoost, and Gated Recurrent Units (GRUs), to process RFID sensor data and distinguish falls from normal motion. The most successful fall detection, 99.5%, was recorded by KNN, which was also the algorithm used for the final model. This proposed system provides (i) live notifications to caregivers and (ii) precision without wearables. The issues with the proposed system include potential technical interference and questions about how best to deploy it across different environments.
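The core classification step of such a smart-floor system can be sketched as follows, assuming each RFID reading window has already been summarized into a fixed-length feature vector (e.g., signal-strength statistics). The features and labels below are placeholders, not the study’s data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import classification_report

# Placeholder features: one row per time window of RFID readings,
# labels: 1 = fall, 0 = normal activity (weak synthetic signal).
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 12))
y = (X[:, 0] - X[:, 3] + rng.normal(scale=1.0, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=1)

clf = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```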
In [97], the authors propose an AI-powered body-tracking system to assess Orientation and Mobility (O&M) in visually impaired individuals. The system’s human detection and tracking algorithm is built on You Only Look Once (YOLOv8) and analyzes video data of participants crossing a white line in an indoor environment that simulates real-world scenarios. YOLOv8 captures precise position information to analyze deviation from the route plan. The main features of the proposed system include (i) measurement of participant velocity with minimum deviation from the traced line and (ii) the application of the data (e.g., real-time speed prediction and total path length calculation). This approach yields a useful measure of spatial patterns in visually impaired individuals, providing better metrics for O&M performance evaluation. In contrast, the challenges are environmental constraints and computational resource demands, which would need to be addressed for real-time use in different environments.
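A simplified version of this kind of analysis can be sketched with a pretrained YOLOv8 person detector: track the participant’s centroid per frame, measure deviation from a reference line, and estimate speed. The video path, reference line position, and pixel-to-metre scale below are assumptions, and the sketch does not reproduce the authors’ full pipeline.

```python
import cv2
import numpy as np
from ultralytics import YOLO

model = YOLO("yolov8n.pt")            # pretrained detector, class 0 = person
cap = cv2.VideoCapture("om_trial.mp4")  # hypothetical recording of one trial
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
line_x, px_per_m = 640, 100.0         # reference line (pixels) and scale
prev_c, deviations, speeds = None, [], []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    boxes = model(frame, classes=[0], verbose=False)[0].boxes.xyxy.cpu().numpy()
    if len(boxes) == 0:
        continue
    x1, y1, x2, y2 = boxes[0]         # assume a single participant in view
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
    deviations.append(abs(cx - line_x) / px_per_m)
    if prev_c is not None:
        # Frame-to-frame displacement times frame rate approximates speed.
        speeds.append(np.hypot(cx - prev_c[0], cy - prev_c[1]) / px_per_m * fps)
    prev_c = (cx, cy)

cap.release()
print(f"mean deviation: {np.mean(deviations):.2f} m, "
      f"mean speed: {np.mean(speeds):.2f} m/s")
```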

5.2.4. Hearing Impairment

In [98], a mobile app to help people with Hearing Impairment read and understand the soundscape is presented. The app identifies sounds using Mel-Frequency Cepstral Coefficients (MFCC) and Indian Sign Language (ISL) gestures using a CNN and an RNN. The authors use a dataset that includes 8700 urban sound samples from the UrbanSound8K dataset, including car horns and sirens. The app runs in real time on phone hardware and does not require any third-party IoT devices, making it an affordable option for users in crowded environments where audio cues can be missed. The main features of the proposed app include (i) low cost and mobile accessibility without the need for IoT hardware and (ii) utility in noisy, busy areas where important sounds might otherwise go unnoticed. However, the model only recognizes the small number of sound classes present in the dataset, and its recognition accuracy degrades under extreme noise conditions. In environments with very loud or unusually low background noise levels, the model struggles to identify and classify sounds, limiting its usefulness.
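The MFCC front end of such a sound classifier can be sketched briefly. The snippet below is an illustration, assuming a hypothetical clip path and simple per-coefficient pooling; the app’s actual feature pipeline and model are not reproduced.

```python
import numpy as np
import librosa

# Minimal MFCC front end for urban-sound classification (illustrative).
def mfcc_features(path, sr=22050, n_mfcc=40):
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # (n_mfcc, frames)
    # Summarize each coefficient over time so every clip yields a
    # fixed-length vector suitable for a small CNN or dense classifier.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

features = mfcc_features("siren_example.wav")  # hypothetical clip
print(features.shape)  # (80,)
```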
The research work in [99] is an Indian Sign Language (ISL) sign-to-text system based on a CNN and OpenCV (i.e., a library of computer vision functions). The proposed approach works for real-time hand gesture detection in ISL. The model performs image pre-processing, landmark detection, and feature extraction for ISL gesture detection from webcam-based video inputs. Their dataset contains 80,000 images, including alphabets and numbers, and their experiments achieved 98.02% accuracy. It works on affordable computer vision hardware, such as webcams, making it usable for education and speech aids for people with Hearing Impairment. The key contributions of this research work are (i) accuracy up to 98.02% using a CNN, enabling real-time communication for ISL users, and (ii) easy installation on simple computer vision devices such as webcams, making it suitable for wide-ranging educational and support use. In contrast, the downsides are that the model (i) is restricted to Indian Sign Language, so it cannot be applied to other sign languages without retraining, and (ii) needs consistent lighting for accurate gesture recognition, so its performance may suffer under changing lighting conditions.
In study [100], the authors propose a wearable assistive device with obstacle detection, Global Positioning System (GPS) navigation, and AI-based facial recognition to aid the visually and auditorily impaired. The proposed approach is trained with a CNN for face recognition and uses an ultrasonic detector for obstacle detection. Driven by Sony’s Spresense microcontroller and using a Long-Range (LoRa) Wide Area Network (WAN) and the Global System for Mobile communications (GSM) for connectivity, it communicates with caregivers through GPS-based live location information and SOS alerts. The proposed device operates autonomously and includes bone-conduction audio for auditory feedback, so people with visual and auditory impairments can move comfortably without hindering their other senses. The benefits of the proposed approach include (i) a full-bodied, all-inclusive assistive device (i.e., navigation, obstacle detection, crisis notification) in one unit and (ii) audio feedback via bone conduction so people with auditory impairments can receive guidance regardless of their hearing capacity. However, the limitations of the research work include (i) high implementation costs, since it requires special components that may not be available everywhere, and (ii) the need for reliable network connectivity (GSM, LoRa WAN), which might not be available in all remote or urban areas.
In research work [101], an integration of CNN and Long Short-Term Memory (LSTM) networks is proposed for the real-time detection of Arabic Sign Language (ArSL) gestures. The CNN extracts spatial properties from still images, and the LSTM captures the temporal dynamics of hand movements. The authors used a dataset of 4000 images and 500 videos covering 20 ArSL words in their evaluation. The proposed approach achieves up to 94.4% accuracy in spatial feature extraction and 82.7% in temporal pattern recognition. This means the proposed approach can be easily applied in public environments with limited interpreter services. It is designed for use with standard video processors running on a computer. The benefits of the proposed approach include (i) filling a major accessibility gap for Arabic-speaking deaf people by allowing for real-time ArSL recognition, and (ii) deployment in public environments with limited interpreter resources based on high spatial (94.4%) and temporal (82.7%) recognition. In contrast, the drawbacks include (i) the limited 20-word vocabulary, which may not cover more general communication needs, and (ii) its computational demands (i.e., it needs significant computation for real-time performance and thus may not be portable).
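The CNN-plus-LSTM pattern described above can be sketched as a per-frame CNN wrapped in a TimeDistributed layer followed by an LSTM over the frame sequence. The frame count, resolution, and architecture below are our own assumptions; only the 20-word vocabulary size follows the paper’s setting.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_FRAMES, H, W, NUM_WORDS = 16, 64, 64, 20

# Small CNN applied independently to every frame of a gesture clip.
frame_cnn = models.Sequential([
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
])

inputs = layers.Input(shape=(NUM_FRAMES, H, W, 3))
x = layers.TimeDistributed(frame_cnn)(inputs)  # spatial features per frame
x = layers.LSTM(128)(x)                        # temporal pattern recognition
outputs = layers.Dense(NUM_WORDS, activation="softmax")(x)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```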
In study [102], the authors propose a novel approach that integrates AI and human–computer interaction (HCI) techniques to produce a digital picture book that supports deaf and hard-of-hearing children and enhances literacy in both sign and spoken languages. In particular, the resulting app supports vocabulary and reading with image processing and ML. There are Augmented Reality (AR) overlays to help word recognition and facilitate communication between deaf children and other children. It runs on tablets and smartphones and provides an interactive, accessible option for inclusive early education. The strengths of the proposed approach are (i) inclusive learning that connects hearing-impaired children with others and (ii) interactivity through AR that makes learning fun and accessible on tablets and smartphones. However, it is limited in that (i) it targets only early reading, which could limit its application as children grow, and (ii) it depends heavily on AR-enabled devices, which might be difficult to access in resource-limited settings.

5.2.5. Attention-Deficit/Hyperactivity Disorder

In [103], a hybrid model combining ML algorithms and knowledge models is proposed to provide decision support in the diagnosis of Attention-Deficit/Hyperactivity Disorder in adults. Decision Trees (DTs), SVMs, RFs, and Naive Bayes (NB) are employed for classification, while the knowledge model incorporates clinical expertise in the form of if–then rules. The dataset, comprising 69 patients with Attention-Deficit/Hyperactivity Disorder from the UK National Health Service, includes a wide range of demographic information and screening results. Although it is not integrated with IoT, the proposed system achieved 95% diagnostic accuracy with the ML component, further enhanced by integrating clinical observations. It can also help diagnose the condition by addressing uncertainty in complex symptoms. The research approach highlights the importance of considering both data-driven inference and human expert reasoning to obtain clinically reproducible support for diagnosis and to provide a new direction for clinically relevant decision-making by integrating data-driven ML with domain-specific knowledge bases. Moreover, the use of hybrid reasoning can limit bias in diagnosis and improve the model’s interpretability for broader psychiatric assessment.
In research work [104], the authors use a Fuzzy Inference System (FIS) to develop a system for detecting Attention-Deficit/Hyperactivity Disorder in children. They use trapezoidal and triangular membership functions to model levels of Attention-Deficit/Hyperactivity Disorder and apply the Centroid Decision-Making (CDM) method to determine the child’s most dominant symptom level. By using their custom Attention-Deficit/Hyperactivity Disorder symptoms dataset, they can identify early signs of Attention-Deficit/Hyperactivity Disorder using non-expert tools. These tools are accessible to both parents and school teachers. The authors claim 100% diagnostic accuracy for their tools, which is ideal for early identification. The transparency of the fuzzy model stems from its interpretability and linguistic rule base, which mimic human cognition and inference, thereby enhancing understanding for both caregivers and clinicians. The authors also state that they plan to extend the proposed system framework with Sugeno or Tsukamoto fuzzy models to accommodate multi-dimensional sets of symptoms and account for dynamic behavior tracking for real-time monitoring of children’s behavior.
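The fuzzy machinery described above (trapezoidal/triangular membership functions and centroid defuzzification) can be illustrated on a single hypothetical symptom scale. The membership parameters, score range, and aggregation below are our own assumptions; the study’s actual rule base is not reproduced.

```python
import numpy as np
import skfuzzy as fuzz

# Illustrative fuzzy scoring of one symptom scale (0-10).
x = np.arange(0, 10.01, 0.01)
low = fuzz.trapmf(x, [0, 0, 2, 4])       # trapezoidal: "low symptom level"
moderate = fuzz.trimf(x, [3, 5, 7])      # triangular: "moderate"
high = fuzz.trapmf(x, [6, 8, 10, 10])    # trapezoidal: "high"

score = 6.2  # hypothetical questionnaire score from a parent/teacher
memberships = {
    "low": fuzz.interp_membership(x, low, score),
    "moderate": fuzz.interp_membership(x, moderate, score),
    "high": fuzz.interp_membership(x, high, score),
}
print("dominant level:", max(memberships, key=memberships.get))

# Centroid defuzzification of the aggregated output surface.
aggregated = np.fmax(np.fmax(memberships["low"] * low,
                             memberships["moderate"] * moderate),
                     memberships["high"] * high)
print("crisp severity estimate:", fuzz.defuzz(x, aggregated, "centroid"))
```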
In study [105], ML models such as RF, NB, and SVM were used to predict Attention-Deficit/Hyperactivity Disorder, and Shapley Additive Explanations (SHAPs) were used to enhance the models’ interpretability. The dataset includes 694 children (6–16 years old) with Attention-Deficit/Hyperactivity Disorder and controls, whose attributes were mapped to the features of the fourth edition of the Wechsler Intelligence Scale for Children (WISC-IV). The model achieved 90% accuracy, and SHAP values were used to highlight predictor contributions, making the model’s decision path more interpretable for clinicians. Also, SHAP visualizations were used to single out attention span, working memory, and verbal comprehension as critical markers, thus steering neuropsychological assessment toward targeted interventions and minimizing subjective clinical judgment.
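The explainability step can be sketched with a tree explainer over a tree-based classifier. The feature names and data below are placeholders standing in for WISC-IV-style index scores; the study’s actual features and model are not reproduced.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

# Placeholder ADHD/control classifier over cognitive index scores.
rng = np.random.default_rng(2)
feature_names = ["working_memory", "processing_speed",
                 "verbal_comprehension", "perceptual_reasoning"]
X = rng.normal(100, 15, size=(300, len(feature_names)))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=20, size=300) > 150).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=2).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
# Depending on the SHAP version this is a list per class or a 3D array.
vals = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]

# Mean absolute SHAP value per feature approximates its overall contribution.
importance = np.abs(vals).mean(axis=0)
for name, value in sorted(zip(feature_names, importance), key=lambda t: -t[1]):
    print(f"{name}: {value:.3f}")
```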
The research work [106] proposes a DL pipeline designed to build an Attention-Deficit/Hyperactivity Disorder diagnosis tool based on electroencephalogram (EEG) biomarker data. The pipeline uses CNN and LSTM models together with a feature selection method, the Least Absolute Shrinkage and Selection Operator (LASSO), that provides an optimal choice of features. The data used for model training consists of EEGs acquired from 61 children with Attention-Deficit/Hyperactivity Disorder and 60 control children during a visual attention task, with relevant time–frequency and entropy features extracted. The trained CNN model achieved 97.75% accuracy, a significant improvement in predictive performance compared to classical models. Additionally, the study integrates recursive feature elimination and CatBoost classifiers to enhance the reliability and clinical interpretability of the extracted biomarkers. However, the computational complexity of the EEG preprocessing hinders the system’s scalability to real-time and edge computing applications, which may be investigated in future work (for example, through edge computing-based approaches for Attention-Deficit/Hyperactivity Disorder diagnostics).
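The LASSO-based selection stage can be illustrated on precomputed feature vectors. The snippet below uses placeholder data with a weak synthetic signal; the study’s actual EEG time–frequency and entropy features, and the downstream CNN/LSTM classifier, are not reproduced.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

# Placeholder data: 121 children x 200 candidate EEG-derived features,
# labels 1 = ADHD, 0 = control (treated as continuous targets for selection).
rng = np.random.default_rng(3)
X = rng.normal(size=(121, 200))
y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=121) > 0).astype(int)

X_scaled = StandardScaler().fit_transform(X)
lasso = LassoCV(cv=5, random_state=3).fit(X_scaled, y)

# Features with non-zero LASSO coefficients are kept for the classifier.
mask = lasso.coef_ != 0
print(f"kept {mask.sum()} of {X.shape[1]} features")
X_selected = X_scaled[:, mask]
```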
In [107], the authors propose Engineering Education (EE) games as an intervention to treat Attention-Deficit/Hyperactivity Disorder in children. More specifically, the authors conduct a bibliometric analysis of the Web of Science (WoS) database and map keyword co-occurrence using VOSviewer (version 1.6.20) to examine research patterns at the intersection of Attention-Deficit/Hyperactivity Disorder treatment and educational games. By identifying intersections among themes, the authors suggest that EE games can serve as gamified solutions that both educate and treat Attention-Deficit/Hyperactivity Disorder. An Augmented Reality–Engineering Education (AR-EE) game prototype was simulated and tested with children with Attention-Deficit/Hyperactivity Disorder, and engagement and cognitive performance were recorded via EEG. The primary advantages of the proposed method are that (i) AR can offer more interactive and engaging experiences (e.g., this could lead to better attention in Attention-Deficit/Hyperactivity Disorder-affected children) and (ii) learning and therapy are combined on the same platform. The limitations of the research include the small sample size on which the methods were tested and the need for more extensive clinical validation to support EE games’ treatment efficacy for Attention-Deficit/Hyperactivity Disorder on a broader scale. Furthermore, EEG tests require specialized equipment that may not be available outside the clinic.

5.2.6. Visual Impairment

Paper [108] proposes an AI-driven image captioning system designed for people with Visual Impairment that creates descriptive captions for pictures. It combines computer vision and Natural Language Processing by using CNNs for image feature extraction and LSTM networks for contextual captioning. The proposed system is implemented in Python (version 3.10) and TensorFlow (version 2.10) and is trained on a dataset of labeled images. In particular, the system converts images into descriptive text, which is then translated into Braille for improved accessibility. The method has promising performance for rich scene description but also has the limitation of high computational complexity for CNN and LSTM models, which may affect real-time response. Furthermore, the system depends on high-quality training data and may not handle unknown objects across multiple environments well.
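A skeleton of a CNN-encoder/LSTM-decoder captioner in this spirit is sketched below. The backbone choice, vocabulary size, embedding width, and caption length are our own assumptions, and the Braille conversion step described in the paper is not shown.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB, MAXLEN, EMBED = 5000, 20, 256

# Encoder: a frozen CNN backbone turns the image into a feature vector.
cnn = tf.keras.applications.MobileNetV2(include_top=False, pooling="avg",
                                        input_shape=(224, 224, 3),
                                        weights="imagenet")
cnn.trainable = False
image_in = layers.Input(shape=(224, 224, 3))
img_feat = cnn(image_in)
h0 = layers.Dense(EMBED, activation="relu")(img_feat)  # LSTM initial hidden state
c0 = layers.Dense(EMBED, activation="relu")(img_feat)  # LSTM initial cell state

# Decoder: the LSTM predicts the next caption word given previous words
# and the image features supplied as its initial state.
caption_in = layers.Input(shape=(MAXLEN,))
x = layers.Embedding(VOCAB, EMBED, mask_zero=True)(caption_in)
x = layers.LSTM(EMBED)(x, initial_state=[h0, c0])
out = layers.Dense(VOCAB, activation="softmax")(x)

model = models.Model([image_in, caption_in], out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```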
The research in [109] introduces a smart navigation system for people with Visual Impairment that uses a Raspberry Pi 3 B+ as the central controller, along with ultrasonic sensors, GPS, and a camera module. The system uses DL based on the TensorFlow object recognition framework and OpenCV for image classification. GPS shows the user’s location in real time, and ultrasonic sensors detect obstacles, with results delivered via a Bluetooth headset with audio feedback. One of the main features of the proposed system is that it provides accurate object detection and real-time advice, enabling user autonomy. On the other hand, the system requires constant internet access for GPS and TensorFlow model updates, which can be unreliable, and its dependence on environmental conditions (e.g., lighting) affects object detection.
The research work in [110] describes an assistive system for navigation, object detection, and text recognition. It uses a Raspberry Pi 3 B+, the Google Maps API, ultrasonic sensors, a General Packet Radio Service (GPRS) module, and Micro Electro Mechanical System (MEMS) accelerometers for navigation and safety. A CNN detects objects, and Optical Character Recognition (OCR) parses text in images. It helps with navigation and autonomy by delivering interactive maps and real-time environmental audio cues. Though it works well in controlled environments, it depends on stable network connectivity for GPS and performs poorly in low-light areas. In addition, adding many sensors makes the hardware more expensive and increases complexity.
In the study [111], the authors introduce an AI-based voice assistant for people with Visual Impairment to use online content and services. The tool includes speech recognition and GingerIt AI to spot grammar mistakes for better voice communication. It converts sound into text, which is helpful for users who want to write emails and search for data. This is good for fundamental interactions, but the system’s reliance on accurate voice recognition may prove problematic in noisy environments. Moreover, the Chatbot currently does not use any advanced Natural Language Processing (NLP), so it only responds to standard queries and predefined commands, which might make complex interactions impossible to use.
The paper in [112] presents AI-Vision, a three-layered accessible image search system to help people with Visual Impairment in China. AI-Vision is an Android app that enables people with Visual Impairment to obtain complete image information at three levels (i.e., general image description, local object description, and metadata). The algorithm uses computer vision (i.e., image segmentation, object recognition, and NLP) to produce multi-level image descriptions. In a 7-day diary experiment conducted with 10 people with Visual Impairment, the usability and efficacy of AI-Vision were evaluated, revealing that the tool improved image comprehension. The main benefits of AI-Vision are (i) its multi-layer exploration of the image and (ii) its ability to respond to user requirements in low-resource countries where this kind of technology is relatively rare. However, the trade-offs are dependence on high-quality image processing to provide descriptions and a reliable internet connection (which could prevent it from being usable in regions with limited access). Furthermore, the proposed app is tied to the Android platform and is not accessible to users of other operating systems.

5.3. Evaluation of Major AI and IoT Disability Assistance Research Prototypes

Our comprehensive survey on AI and IoT disability assistance uses an in-depth analysis of 30 research prototypes that demonstrate the latest trends in the field. These prototypes were all published between 2020 and 2024, underscoring the significance and timeliness of the findings. By examining recent papers, this analysis ensures that the insights it provides are based on the most recent research and reflect current trends and developments in AI- and IoT-based disability assistance. Such research prototypes are an ideal way to assess progress and innovation in AI and IoT disability assistance, offering valuable insights into the new strategies, technologies, and approaches that will define AI and IoT disability support in the future.
We proceeded with the evaluation by applying the three-layer classification (Monitoring, Analysis, and Assistance) to the 30 selected studies in a bottom-up fashion and examined which layer(s) each research system focused on or fulfilled. The purpose of this was to quantitatively map and compare the contributions of the different projects in assistive technologies across various disability types and in terms of technological focus, functional coverage, and integration with other layers. As depicted in Table 2, Table 3 and Table 4 and Figure 3, Figure 4 and Figure 5, this resulted in the quantified distributions given in this and the following section. These reveal and help to characterize the research focus and the relative under- or oversaturation of different solution types (e.g., the prevalence of monitoring-related solutions and the relative lack of cross-layer integrated assistive solutions).
Table 2 summarizes the comprehensive assessment of central AI and IoT disability assistance research prototypes considering the Disability Monitoring Layer. It organizes these prototypes by their focus use (e.g., Down Syndrome, Autism Spectrum Disorder, Visual Impairment, Hearing Impairment, etc.), the technology used (i.e., IoT, Raspberry Pi, RFID, etc.), data source (cameras, sensors, or scanned images), data type (e.g., images, text, audio, etc.), and operational environments (i.e., indoor or outdoor). This systematic comparison showcases how novel technologies and various data sources are used to monitor people with disabilities. For instance, IoT technology is prevalent across a wide range of disability assistance research prototypes, demonstrating variety in both data collection and environmental adaptability.
Table 3 summarizes the comprehensive assessment of central AI and IoT disability assistance research prototypes considering the Disability Analysis Layer. Research prototypes are assessed by techniques (e.g., ML, DL, or HCI), models (e.g., Support Vector Machines, Convolutional Neural Networks, etc.), parameter types (e.g., facial recognition, body motors, behavior, etc.), the structure of the system (i.e., centralized or decentralized), and security and privacy support (i.e., supporting security, privacy, or neither). The analysis shows that DL methods, such as CNNs, dominate the majority of research prototypes and have proven to be efficient for more complex data types. Furthermore, very few research prototypes address privacy and security, a gap that should not remain the norm in AI and IoT disability assistance research.
Table 4 summarizes the comprehensive assessment of major AI and IoT disability assistance research prototypes considering the Disability Assistance Layer. The technology that each research prototype utilizes is assessed (e.g., assistive robots, navigational devices, mobility devices, etc.), type of assistance offered (e.g., diagnostic, mobility, cognitive rehabilitation, or communication), personalization (e.g., enabling personalization depending on the people with disabilities’ need or not), cost-efficiency (i.e., is the research prototype proposed cost-effective or not), and response time (i.e., has the proposed research prototype significantly reduced assistive technology latency in responding to the people with disabilities or not). One such trend involves diagnostic support and personalization through research prototypes of assistive robots and devices. However, the varying costs and response times also highlight the difficulty of finding a compromise between cost and accessibility in AI- and IoT-based disability assistance systems.
To evaluate the effectiveness of AI and IoT research prototypes in providing disability support, we conducted a quantitative analysis across all levels of disability support (i.e., monitoring, analysis, and assistance). These 30 prototypes were analyzed and shown to be helpful for the specific needs of people with disabilities. By using multiple technologies and techniques, including DL models such as CNNs and data gathering from various sources, these prototypes accommodate all types of disabilities. Every prototype is evaluated based on its adoption of different technological implementations, data types, and environmental adaptations. This shows how these prototypes perform across several dimensions and how they can significantly improve quality of life for people with disabilities.
Figure 3 depicts the quantitative assessment of AI and IoT prototypes against the Disability Monitoring Layer assessment dimension, including focus, technology, data source, data type, and environment, each with a set of criteria. It is worth noting that the data exceeds 100% in most dimensions because many AI and IoT research prototypes focus on multiple disabilities (e.g., Visual Impairment and Mobility Impairment), adopt multiple technologies, gather data from various sources, and use different types of data.
In the focus dimension, we observe diversity in AI and IoT research prototypes for disabilities, with most disabilities (i.e., Attention-Deficit/Hyperactivity Disorder, Hearing Impairment, Mobility Impairment, Autism Spectrum Disorder, and Down Syndrome) scoring 16.67%. This supports the claim that our survey is comprehensive, and the data used for this analysis covers most disability areas. The depicted data shows a wide range in technology adoption for developing assistive solutions for people with disabilities, with IoT as the dominant technology at 43.33%, followed by RP at 13.33%. Other technologies such as BT, RFID, AR, and LoRa scored 6.67%, indicating comparatively lower adoption in disability assistive solutions. Based on the data presented in the data source, the most commonly used sources for disability data are C and S, with 50%, followed by TI, SI, and M, with 23.33%, 13.33%, and 6.67%, respectively. Among the different types of data used in assistive disability, the most common is I, reaching 60%, followed by TD at 43.33%. Other data types, including DL, VF, TS, and AD, scored 16.67%, 13.33%, 6.67%, and 6.67%, respectively. For the environment dimension, we observe that most AI and IoT research prototypes are developed for ID environments, with a 90% score.
Figure 4 illustrates the quantitative assessment of AI and IoT prototypes against the Disability Analysis Layer assessment dimension, including technique, model, parameters, architecture, and the support of security and privacy, where each dimension has a set of criteria. It is worth noting that the data exceeds 100% in most dimensions because many AI and IoT disability research prototypes use multiple techniques (e.g., different methods for disability data analysis, such as ML and DL) and leverage various models (e.g., CNNs and LSTMs).
Among techniques, DL leads significantly with 63.33%, followed by ML at 33.33%, while HCI is used sparingly at 6.67%. The model choices prominently feature CNNs at 53.33%, followed by LSTMs at 10.0%, and KNN and XGBoost at 6.67% each. Other models, including SVM, ANN, YOLO, DT, KM, RF, Fuzzy, Multi-Layer Perceptron (MLP), Mel-Frequency Cepstral Coefficient (MFCC), and OpenCV, are used less frequently, each scoring 3.33% (i.e., all of these models are grouped as others in the figure, scoring 33.33% in total). Regarding the parameters dimension, we observe that the main parameter chosen by researchers in the field of assistive disability is Children (CH), reaching approximately 40.0%, while Adults (A) and the Elderly (E) receive less attention, at 6.67% and 3.33%, respectively (i.e., cases where the researchers explicitly emphasized these parameters in their work). The parameters that attracted the most researcher interest were Behavior (B), which scored 56.67%, and Body Motors (BMs), which scored 30.0%. Based on the parameter dimension results, we also note that Facial Recognition (FR), Body Organs (BO), and Visual Perception (VP) received the least attention, each at 16.67%, followed by Navigation Routes (NR) at 13.33%. In the architecture dimension, we observe that 73.33% of AI and IoT research prototypes on disabilities use a centralized architecture. Finally, in the security and privacy dimension, we observe that 73.33% of the AI and IoT research prototypes on disabilities lack support for security or privacy (i.e., no access control methods or anonymization techniques are used). Only 13.33% of AI and IoT research prototypes on disabilities support Security (SS), and 10.0% support Privacy (SP). Surprisingly, we observe that only 3.33% of assistive solutions for disabilities support both Security and Privacy (SSP).
Figure 5 illustrates the quantitative assessment of AI and IoT prototypes against the Disability Assistance Layer assessment dimension, including technology, type of assistance, personalization, cost, and response time, each with a set of criteria. It is worth mentioning that the data exceeds 100% in most of the dimensions because most of the AI and IoT disabilities research prototypes adopt multiple assistive technologies (e.g., Assistive Robots (ARB) and Augmented Reality (AR)) and provide several assistance types (e.g., Diagnostic or Detection (D) and Communication Assistance (CA)).
In the technology dimension, we observe that the most adopted technology for providing disability assistance is Diagnostic or Detection Assistance (DA) at 53.33%, followed by Navigation Devices (ND) and Assistive Chatbots (AC), scoring 16.67% each. Augmented Reality (AR) and Visual Devices (VD) score 13.33% each, while Assistive Robots (ARB) score 10.0%. On the other hand, the least adopted technologies for providing disability assistance are Mobility Devices (MD), Hearing Devices (HD), and the Metaverse (MV), each at 3.33%.
Given the diversity of disability assistance types, we observe that the most commonly provided type of assistance is Diagnostic or Detection (D) assistance, accounting for 63.33%, followed by Navigation Assistance (NA) and Rehabilitation (R), each at 16.67%. The three least common disability assistance types are Communication Assistance (CA), Mobility Assistance (MA), and Cognitive Rehabilitation (CR), reaching 13.33%, 10.0%, and 6.67%, respectively. For the personalization dimension, we observe, somewhat surprisingly, that 53.33% of the AI and IoT research prototypes on disabilities support Personalization (SP). In the cost dimension, we observe that 63.33% of the AI and IoT research prototypes for disabilities are cost-effective (CE). Finally, we assess whether each proposed research prototype significantly reduces assistive technology latency in responding to people with disabilities. We observe that 70.0% of the AI and IoT research prototypes on disabilities place no strong emphasis (NSE) on reducing assistive technology latency in responding to people with disabilities.
This quantitative assessment highlights the enormous achievements and varied approaches of AI and IoT assistive technology created for the people with disabilities community. These prototypes not only adopt different technologies and designs but also cater to different types of disabilities, demonstrating how the broad application of AI and IoT can positively affect this field. On the other hand, the findings of this assessment identify areas for improvement, such as security, privacy, and latency. From these findings, we argue that future research should address these gaps and work towards more integrated, secure, and responsive assistive services. Moreover, by analyzing multiple research prototypes and extracting success factors, it is possible to synthesize and identify the following observations. These trends are observable across many types of disabilities, so they are not specific to any one type.
  • Common Success Factors: multimodal data acquisition with the help of IoT sensors, AI-based personalization for an adaptive user experience, and real-time feedback loops that maintain user engagement and autonomy. These elements are associated with higher usability and therapeutic impact and are present in many systems with positive results, regardless of the disability they serve, such as Autism Spectrum Disorder, Down Syndrome, or Visual Impairment.
  • Recurring Limitations: limited diversity of datasets, the absence of longitudinal trials, high costs of development and maintenance, and low involvement of users in the co-design process during development. Many systems show promising technical results in specific use cases, but few are validated at scale, and many face legal and regulatory barriers to real-world deployment.
  • Emerging Trends: in the long term, we can identify several emerging trends, such as the use of a hybrid AI–IoT architecture for continuous monitoring, explainable AI to increase system interpretability, and edge computing for privacy-preserving inference. Taken together, these observations show an increasing interest in the field in developing more intelligent, ethical, and aware assistive ecosystems that can scale in size and accessibility while remaining affordable.

6. Open Issues and Research Directions in AI and IoT Assistive Disability

In the rapidly growing domains of Artificial Intelligence (AI) and the Internet of Things (IoT), significant progress has been made in assistive technologies for people with disabilities. On the other hand, as much as these technologies could increase autonomy and the standard of living, they also pose unique challenges and open research issues that need to be addressed to make them effective and ubiquitous. In this section, we identify open research issues by analyzing 30 AI and IoT research prototypes for disability support and provide direct answers to RQ3 and RQ4. We address open issues including security and privacy, real-time processing, cost, customization, scalability, and other challenges that existing AI and IoT assistive technologies have not yet addressed. Moreover, we discuss possible research directions through which to address these issues and enable AI and IoT to fully transform the lives of people with disabilities.
AI and IoT technologies are providing a level of opportunity never before seen in the area of assistive technology. However, for such technologies to reach their full potential, it is also essential to consider associated risks and ethical and regulatory considerations. Thus, this section also provides insight into risks, limitations, and ethical and regulatory considerations, rather than only listing open research issues and trends. It is worth noting that most of the open research issues listed below concern technology transfer and large-scale deployment, not the development of a working prototype. However, their consideration during prototype design at the research level is becoming increasingly important.
1. Security and Privacy: This is one of the main open issues revealed by the evaluation, where only 13.33% of research prototypes explicitly address security; even fewer, 10.0%, address privacy, while just 3.33% address both. This highlights a critical gap in the security and privacy of assistive technologies. Because some data in disability support technologies is very personal (e.g., health data and identifiers), inadequate security and privacy protections may hinder adoption of and confidence in these technologies. These areas should be enhanced not only for convenience but also to meet global data privacy standards and support user trust and safety. Developing more flexible, secure, and private assistive disability solutions is essential to address the security and privacy research issue. Assistive disability solutions must be supported by end-to-end encryption, anonymization, and secure data transmission mechanisms [113]. Research should also focus on blockchain-based decentralized data storage to give people with disabilities greater control over their data. Advancing security and privacy models will safeguard people with disabilities and increase trust in assistive technologies for people with disabilities.
2. Response Time and Real-Time Processing: The evaluation results show that 70.0% of research prototypes do not prioritize lowering latency, which indicates an alarming research gap in current AI and IoT assistive technologies. Whether these technologies are for emergency scenarios or for navigation systems for people with Visual Impairment, assistive technologies need real-time responses. Lag time is a critical factor in the practical usability and trustworthiness of assistive technology, so solutions must provide instant feedback and response without compromise. As immediate responses in assistive technology are important, future research on efficient real-time data processing offers attractive possibilities [114]. Future research needs to develop faster algorithms and use edge computing techniques that bring computation closer to the data source, reducing latency. This kind of technology would revolutionize how devices that require instant feedback operate (e.g., in real-time communication and emergency services). Moreover, federated learning can play an important role in expediting the learning and identification processes that are required in most assistive disability applications; a minimal federated averaging sketch illustrating this direction is given below.
3. Cost-Effectiveness: Even though 63.33% of research prototypes are economically sound, 36.67% are still beyond the affordability of many people with disabilities who could benefit from them. Often, high-end technologies are expensive (e.g., advanced IoT sensors, Raspberry Pi, and Arduino), and this can prevent mass adoption, especially among people with disabilities. In addition, research and development must focus on reducing the cost of these technologies by using cheaper materials and manufacturing processes and possibly subsidizing them or establishing funding structures that lower the cost for people with disabilities. Future research must focus on cost-saving mechanisms to make AI and IoT assistive technology more accessible. These might include using cheaper, more resilient materials; changing supply chains and production methods; or adopting business models that leverage economies of scale [72]. Collaboration with government and non-profit organizations might also reduce prices, making these technologies accessible to a broader audience with disabilities.
4. Personalization and User-Centric Design: Based on the evaluation results, the research prototypes support personalization in slightly more than half (53.33%) of cases, which leaves considerable room for improvement. Individualizing assistive technologies is critical because disabilities vary widely in type and severity. Only 6.67% of current research prototypes apply human–computer interaction (HCI) principles, which could be valuable in supporting personalization. Moreover, prototypes that do not consider the specific demands and preferences of people with disabilities are less likely to be successful and widely accepted. Personalization is not merely a matter of configuration; it requires incorporating user feedback throughout the design process and using adaptive algorithms that can learn from and adapt to the behavior and interests of people with disabilities. One promising research direction for bringing personalization and user-centric design to AI and IoT assistive technologies is applying HCI theory to adaptive systems. This can take the form of adaptive user interfaces that evolve based on preferences and capacities, context-aware systems that respond to environmental and emotional triggers, and emotion recognition that personalizes device behavior based on the user's emotional state. Real-time user feedback loops can further optimize personalization, and involving users in the design process through participatory design will help ensure that technologies are well matched to people with disabilities [115,116]. Such HCI-oriented research will enable assistive technologies that are not only efficient but also intuitive and attuned to the needs of people with disabilities, leading to a better user experience and greater satisfaction.
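The following sketch gives one minimal, hypothetical example of an adaptive personalization loop of the kind discussed above: interface settings are nudged within safe bounds according to explicit user feedback. The profile fields, feedback encoding, and step size are illustrative assumptions only.

```python
from dataclasses import dataclass

@dataclass
class InterfaceProfile:
    """Hypothetical user-interface settings adapted over time."""
    font_scale: float = 1.0   # 1.0 = default text size
    speech_rate: float = 1.0  # 1.0 = default speech speed

def adapt(profile: InterfaceProfile, feedback: dict, step: float = 0.1) -> InterfaceProfile:
    """Nudge settings toward feedback: -1 means 'too small/slow', +1 means 'too large/fast'."""
    profile.font_scale = min(2.0, max(0.5, profile.font_scale - step * feedback.get("text", 0)))
    profile.speech_rate = min(2.0, max(0.5, profile.speech_rate - step * feedback.get("speech", 0)))
    return profile

profile = InterfaceProfile()
for fb in [{"text": -1}, {"text": -1, "speech": +1}]:  # text too small twice, speech too fast once
    profile = adapt(profile, fb)
print(profile)  # font_scale drifts up, speech_rate drifts down
```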
5. Scalability: This is another major open issue exposed by the evaluation: the majority of research prototypes (73.33%) are centralized, difficult to scale, and more likely to run into availability and security issues. As assistive technologies improve, it remains difficult to scale them to more people with disabilities or to different geographical areas and cultures. Scaling problems can make these technologies unusable for the global disability community. Future research on AI and IoT assistive technologies for people with disabilities should focus on decentralized architectures that allow assistive technologies to scale easily and adapt to different scenarios [6]. This means modular architectures that can be tailored to different people with disabilities, as well as cloud-based systems that can be readily deployed to new users and regions.
6. Reliability: Most assistive devices are not designed to operate reliably over long periods and under varying environmental conditions. This weakness can lead to more frequent breakdowns, causing not only inconvenience but also additional maintenance expenses that reduce the dependability of these technologies for those who use them every day. Future research should focus on making assistive devices more robust in both physical and functional terms. This involves using more resilient materials, maintenance-free designs, and self-diagnosing devices that alert users to issues before they fail [117].
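As an illustrative sketch of the self-diagnosis idea, the hypothetical class below tracks sensor heartbeats and battery level and raises maintenance warnings before an outright failure. The thresholds and checks are assumptions chosen for the example, not requirements derived from the surveyed prototypes.

```python
import time

class SelfCheck:
    """Hypothetical self-diagnosis: flag a device before it fails outright."""

    def __init__(self, heartbeat_timeout_s: float = 5.0, battery_warn: float = 0.2):
        self.heartbeat_timeout_s = heartbeat_timeout_s
        self.battery_warn = battery_warn
        self.last_heartbeat = time.monotonic()

    def heartbeat(self) -> None:
        """Called whenever the sensor reports in."""
        self.last_heartbeat = time.monotonic()

    def diagnose(self, battery_level: float) -> list:
        """Return a list of maintenance warnings, empty if the device looks healthy."""
        issues = []
        if time.monotonic() - self.last_heartbeat > self.heartbeat_timeout_s:
            issues.append("sensor unresponsive: schedule maintenance")
        if battery_level < self.battery_warn:
            issues.append("battery low: replace or recharge soon")
        return issues

check = SelfCheck()
check.heartbeat()
print(check.diagnose(battery_level=0.15))  # warns before the device actually stops working
```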
7. Interoperability: Existing assistive technology research prototypes are often composed of a host of stand-alone devices and platforms. This lack of interoperability can reduce the overall impact of assistive solutions, as people with disabilities may have to work with systems that do not interact with one another or share information seamlessly. To address platform interoperability, future research must focus on universal integration architectures that enable different devices and software to interact with each other [118]. This would lead to a more integrated set of assistive technologies that are more efficient and friendlier to people with disabilities.
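One lightweight way to realize such integration, sketched below under simplifying assumptions, is an adapter layer that translates heterogeneous vendor payloads into a single shared schema before they reach downstream assistive services. The vendor message formats and schema fields shown are hypothetical.

```python
from typing import Callable, Dict

# Hypothetical shared schema consumed by downstream assistive services.
COMMON_SCHEMA_KEYS = ("device_id", "kind", "value", "unit")

def from_vendor_a(msg: dict) -> dict:
    """Translate a (hypothetical) wearable payload into the shared schema."""
    return {"device_id": msg["id"], "kind": "heart_rate", "value": msg["hr"], "unit": "bpm"}

def from_vendor_b(msg: dict) -> dict:
    """Translate a (hypothetical) smart-cane payload into the shared schema."""
    return {"device_id": msg["serial"], "kind": msg["type"], "value": msg["reading"], "unit": msg["u"]}

ADAPTERS: Dict[str, Callable[[dict], dict]] = {"vendor_a": from_vendor_a, "vendor_b": from_vendor_b}

def normalize(source: str, msg: dict) -> dict:
    """Route a raw message through the right adapter and validate the result."""
    record = ADAPTERS[source](msg)
    assert all(k in record for k in COMMON_SCHEMA_KEYS)
    return record

print(normalize("vendor_a", {"id": "band-1", "hr": 78}))
print(normalize("vendor_b", {"serial": "cane-9", "type": "distance", "reading": 1.4, "u": "m"}))
```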
8. Poor Accessibility in Limited Infrastructures: Rural and under-resourced regions have less access to AI and IoT technologies owing to insufficient infrastructure, cost, and limited technological literacy. This leaves people with disabilities living in these areas without access to cutting-edge assistive technologies [119]. Such technologies must be locally driven and adaptable to rural and underdeveloped contexts. Researchers can focus on developing low-cost technology that does not depend on internet connectivity and on creating offline functions that people with disabilities can use without a network.
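The sketch below illustrates, hypothetically, a store-and-forward pattern that supports such offline operation: events are buffered locally while the network is unavailable and synchronized opportunistically once connectivity returns. The event structure and synchronization logic are simplified assumptions.

```python
import json
from collections import deque

class OfflineQueue:
    """Hypothetical store-and-forward buffer: keep working without a network,
    then sync when connectivity becomes available."""

    def __init__(self):
        self.pending = deque()

    def record(self, event: dict) -> None:
        """Buffer an event locally (a real device would persist this to storage)."""
        self.pending.append(json.dumps(event))

    def sync(self, online: bool) -> int:
        """Deliver buffered events while online; return how many were sent."""
        sent = 0
        while online and self.pending:
            _payload = self.pending.popleft()  # would be transmitted to the server here
            sent += 1
        return sent

queue = OfflineQueue()
queue.record({"type": "navigation_hint", "text": "door ahead"})
print(queue.sync(online=False))  # 0: nothing lost, just deferred
print(queue.sync(online=True))   # 1: delivered once the network is back
```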
9. Training and User Education: Based on the evaluation results, only 3.33% of the research prototypes focus on elderly users of assistive technologies. New technologies come with a learning curve that challenges many users, especially the elderly and those who are not tech-savvy. Practical training and education are essential for users to take full advantage of AI and IoT assistive technologies, yet they are often neglected. Researchers need to develop easy-to-learn training modules and documentation that accommodate diverse user needs and abilities (e.g., non-tech-savvy users) [120]. Future research should also investigate adaptive learning algorithms that adjust training speed and complexity based on user progress, together with feedback mechanisms that improve user adoption and satisfaction.
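A minimal sketch of such adaptive pacing is shown below: the training level is raised only when recent performance is consistently high and lowered when the learner struggles. The accuracy thresholds and level range are illustrative assumptions rather than validated parameters.

```python
def next_difficulty(current: int, recent_scores: list, low: float = 0.5, high: float = 0.85) -> int:
    """Hypothetical pacing rule: raise the training level only when the learner is
    consistently succeeding, and step back when they are struggling."""
    if not recent_scores:
        return current
    accuracy = sum(recent_scores) / len(recent_scores)
    if accuracy >= high:
        return min(current + 1, 10)
    if accuracy <= low:
        return max(current - 1, 1)
    return current

level = 3
for session in ([1, 1, 1, 0, 1], [0, 0, 1, 0, 0]):  # 1 = task completed, 0 = not completed
    level = next_difficulty(level, session)
    print("next level:", level)
```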
The open research issues in AI and IoT assistive technologies for people with disabilities provide a clearer picture of the challenges and potential of this field. Resolving these challenges is not only about enabling technological advances but also about making them accessible, reliable, and personalized for people with disabilities. Future researchers need to focus on secure, fast, low-cost, and tailored solutions that scale easily and can be used across diverse environments and by diverse groups of people with disabilities. Through interdisciplinary research and user-centric design, researchers can pursue a vision of assistive technology that is not only functional but also empowers people with disabilities to achieve greater independence and a higher standard of living. This will require constant innovation, dedication, and a close understanding of the complex requirements of people with disabilities.

Limitations, Risks, and Ethical and Regulatory Considerations

AI- and IoT-enabled assistive systems, while holding tremendous potential to transform the domain, are not without limitations, risks, and barriers to adoption in real-world disability settings. One of the most prominent challenges is privacy and security, as many assistive technologies rely on continuous data collection via cameras, microphones, and biosensors, which are prone to misuse, hacking, or data leakage [6]. Another significant issue is algorithmic bias: AI models trained on biased or non-diverse datasets may exhibit suboptimal performance or even discriminate against users from underrepresented groups, leading to misclassifications and unfair outcomes. Technical failures are a further concern; these may include unreliable sensor performance, network latency, or cloud connectivity issues, which reduce system dependability, particularly in assistive scenarios where failure may have significant consequences, such as mobility support or medical monitoring applications [7]. In addition, socioeconomic factors, such as cost and accessibility barriers, can limit the adoption of AI- and IoT-enabled assistive technologies, especially in low-income communities where assistive devices may be unaffordable or hard to access. User adoption challenges, including a lack of digital literacy, resistance to technology, and cultural stigmas, can also hinder the uptake of such technologies. To address some of these limitations and barriers, recent research has suggested Explainable AI (XAI) approaches to improve transparency, Federated Learning (FL) to maintain data privacy, and low-cost edge-based architectures for cost-effective and scalable deployment [74,78]. Nevertheless, it is crucial to address these risks and limitations holistically to ensure that AI- and IoT-enabled assistive systems are ethical, dependable, and socially equitable.
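To illustrate how Federated Learning can keep raw data on the user's device, the following simplified sketch performs a few rounds of federated averaging over two hypothetical clients with a toy linear model; only model updates, never the underlying data, are shared with the aggregator. The data, model, and learning rate are invented for demonstration purposes.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights: np.ndarray, X: np.ndarray, y: np.ndarray, lr: float = 0.1) -> np.ndarray:
    """One local gradient step on a linear model; raw data never leaves the client."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_average(updates, sizes) -> np.ndarray:
    """Server aggregates client updates weighted by each client's sample count."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(updates, sizes))

# Two hypothetical clients (e.g., wearable devices) with private local datasets.
global_w = np.zeros(3)
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)),
           (rng.normal(size=(35, 3)), rng.normal(size=35))]

for _ in range(5):  # a few federated rounds
    updates = [local_update(global_w.copy(), X, y) for X, y in clients]
    global_w = federated_average(updates, [len(y) for _, y in clients])

print("aggregated model weights:", global_w)
```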
The convergence of AI and IoT for assistive technologies also raises ethical and regulatory questions, especially as these systems increasingly engage with sensitive health and behavior data. A key issue is data privacy and user consent, as real-time monitoring and data exchange between cloud and edge networks require compliance with policies such as the General Data Protection Regulation (GDPR) and national privacy laws. Another concern is algorithmic fairness and transparency, since biases in training data can perpetuate existing inequalities and may disproportionately impact marginalized or underrepresented disability groups. Accountability and liability also remain challenging questions, particularly in clarifying responsibility when autonomous systems malfunction or make harmful recommendations. Finally, embedding ethics-by-design principles throughout the design and development process can help ensure consent, accessibility, and inclusion. Regulations will need to be updated or developed to consider cross-border data flows, interoperability standards, and certification processes that can validate the safety, reliability, and explainability of AI- and IoT-enabled assistive devices. This will involve policy dialogue between researchers, clinicians, and regulatory authorities to ensure that emerging assistive technologies are both innovative and ethically sound.

7. Conclusions

This article offers a comprehensive survey of existing Artificial Intelligence (AI) and Internet of Things (IoT) assistive technology applications for cognitive, physical, and sensory disabilities. These assistive technologies aim to help people with Down Syndrome, Autism Spectrum Disorder, Mobility Impairment, Hearing Impairment, Attention-Deficit/Hyperactivity Disorder, and Visual Impairment. Evaluating 30 of the latest research prototypes across all three categories under a unified analytical lens sheds light on nascent design principles, common challenges, and cross-domain opportunities for future inclusive, adaptive, and data-driven assistive systems. To provide a comprehensive survey of AI and IoT applications in assistive technologies, we apply a scoping review methodology. To give the scoping review structure and clarity, we also articulate detailed Research Questions (RQs) that direct and underpin the quantitative analyses and synthesis in this article. The study quantifies observable patterns and design trends in the 30 AI and IoT research prototypes for disability assistance. The results in Section 5 and Section 6 collectively answer the four research questions presented in Section 1. More concretely, the results demonstrate (i) the coverage of disability types (RQ1); (ii) the prevalent technologies and models used (RQ2); (iii) the extent of security, personalization, and cost considerations (RQ3); and (iv) open issues for future research on scalable, privacy-preserving, and user-centered assistive systems (RQ4). Taken together, these results provide an empirically grounded and transparent starting point for more in-depth investigations into AI- and IoT-enabled disability assistance.
Overall, our analysis of 30 research prototypes demonstrates how AI and IoT can be used to provide intelligent, adaptive, and individualized services that increase independence, connectivity, and access. However, even though significant progress has been made in the field, data privacy, security, cost, scalability, and real-time response remain challenging issues. AI and IoT assistive technologies need secure, scalable, and user-centered designs supported by blockchain, federated learning, and edge computing to meet adoption expectations. By fostering innovation and inclusive use, AI and IoT can redefine assistive technologies and help people with disabilities live better and more independent lives.

Author Contributions

Conceptualization, A.N. and H.A.; methodology, A.N., H.A. and E.-S.A.; software, A.N., H.A. and E.-S.A.; validation, H.A. and E.-S.A.; formal analysis, A.N., H.A. and T.H.N.; investigation, A.N., E.-S.A. and T.H.N.; resources, A.N., E.-S.A. and T.H.N.; data curation, A.N., H.A. and E.-S.A.; writing—original draft preparation, A.N. and H.A.; writing—review and editing, E.-S.A. and T.H.N.; visualization, A.N. and T.H.N.; supervision, A.N.; project administration, T.H.N.; funding acquisition, A.N. and T.H.N. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the King Salman Center for Disability Research through Research Group no. KSRG-2024-140.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data generated or analyzed during this study are included in the published paper.

Acknowledgments

The authors extend their appreciation to the King Salman Center for Disability Research for funding this work.

Conflicts of Interest

The authors declare no conflicts of interest.

Disability Language/Terminology Positionality Statement

Throughout the review, we have attempted to use person-first language (e.g., “people with disabilities”) to be respectful of individuals’ humanity and to be consistent with international standards and documents (e.g., the UN Convention on the Rights of Persons with Disabilities). This approach is also consistent with our theoretical perspective, which is influenced by the social model of disability and focuses on the social construction of functional limitations. We recognize, however, that many communities and people themselves prefer identity-first wording. We have attempted to strike a balance between these concerns and overall consistency, clarity, and inclusiveness, considering the diverse types of disabilities represented in our survey and the international readership it is likely to attract.

References

  1. Hersh, M.A.; Johnson, M.A. Disability and assistive technology systems. In Assistive Technology for Visually Impaired and Blind People; Springer: Berlin/Heidelberg, Germany, 2008; pp. 1–50. [Google Scholar]
  2. Kushwah, R.; Batra, P.K.; Jain, A. Internet of things architectural elements, challenges and future directions. In Proceedings of the 2020 6th International Conference on Signal Processing and Communication (ICSC), Noida, India, 5–7 March 2020; IEEE: New York, NY, USA, 2020; pp. 1–5. [Google Scholar]
  3. Papanastasiou, G.; Drigas, A.; Skianis, C.; Lytras, M.; Papanastasiou, E. Patient-centric ICTs based healthcare for students with learning, physical and/or sensory disabilities. Telemat. Inform. 2018, 35, 654–664. [Google Scholar] [CrossRef]
  4. Hudda, S.; Haribabu, K. A review on WSN based resource constrained smart IoT systems. Discov. Internet Things 2025, 5, 56. [Google Scholar] [CrossRef]
  5. Marques, G.; Pitarma, R.M.; Garcia, N.; Pombo, N. Internet of things architectures, technologies, applications, challenges, and future directions for enhanced living environments and healthcare systems: A review. Electronics 2019, 8, 1081. [Google Scholar] [CrossRef]
  6. Habbal, A.; Hamouda, H.; Alnajim, A.M.; Khan, S.; Alrifaie, M.F. Privacy as a Lifestyle: Empowering assistive technologies for people with disabilities, challenges and future directions. J. King Saud-Univ.-Comput. Inf. Sci. 2024, 36, 102039. [Google Scholar] [CrossRef]
  7. Semary, H.; Al-Karawi, K.A.; Abdelwahab, M.M.; Elshabrawy, A. A Review on Internet of Things (IoT)-Related Disabilities and Their Implications. J. Disabil. Res. 2024, 3, 20240012. [Google Scholar] [CrossRef]
  8. Stephanidis, C.; Salvendy, G. Designing for Usability, Inclusion and Sustainability in Human-Computer Interaction; CRC Press: Boca Raton, FL, USA, 2024. [Google Scholar]
  9. Abascal, J.; Nicolle, C. Moving towards inclusive design guidelines for socially and ethically aware HCI. Interact. Comput. 2005, 17, 484–505. [Google Scholar] [CrossRef]
  10. Cook, A.M.; Polgar, J.M. Cook & Hussey’s Assistive Technologies; Elsevier Health Sciences: Amsterdam, The Netherlands, 2008. [Google Scholar]
  11. Rafii, M.S.; Kleschevnikov, A.M.; Sawa, M.; Mobley, W.C. Down syndrome. Handb. Clin. Neurol. 2019, 167, 321–336. [Google Scholar]
  12. McNaughton, D.; Light, J. The iPad and mobile technology revolution: Benefits and challenges for individuals who require augmentative and alternative communication. Augment. Altern. Commun. 2013, 29, 107–116. [Google Scholar] [CrossRef]
  13. Buzzi, M.C.; Buzzi, M.; Perrone, E.; Senette, C. Personalized technology-enhanced training for people with cognitive impairment. Univers. Access Inf. Soc. 2019, 18, 891–907. [Google Scholar] [CrossRef]
  14. Reddy, K.J. Cognitive Training Programs. In Innovations in Neurocognitive Rehabilitation: Harnessing Technology for Effective Therapy; Springer: Berlin/Heidelberg, Germany, 2025; pp. 171–209. [Google Scholar]
  15. Mukherjee, S.B. Autism spectrum disorders—Diagnosis and management. Indian J. Pediatr. 2017, 84, 307–314. [Google Scholar] [CrossRef]
  16. Noor, A.; Almukhalfi, H.; Souza, A.; Noor, T.H. Harnessing YOLOv11 for Enhanced Detection of Typical Autism Spectrum Disorder Behaviors Through Body Movements. Diagnostics 2025, 15, 1786. [Google Scholar] [CrossRef] [PubMed]
  17. Schlosser, R.W.; Wendt, O. Effects of augmentative and alternative communication intervention on speech production in children with autism: A systematic review. Am. J. -Speech-Lang. Pathol. 2008, 17, 212–230. [Google Scholar] [CrossRef] [PubMed]
  18. Parsons, S.; Mitchell, P. The potential of virtual reality in social skills training for people with autistic spectrum disorders. J. Intellect. Disabil. Res. 2002, 46, 430–443. [Google Scholar] [CrossRef] [PubMed]
  19. Touch, D. Autistic Disorder, College Students, and Animals. J. Child Adolesc. Psychopharmacol. 1992, 2, 1. [Google Scholar] [CrossRef]
  20. Corkery, M.; Wilmarth, M.A. Impaired Joint Mobility, Motor. In Musculoskeletal Essentials: Applying the Preferred Physical Therapist Practice Patterns; SLACK Incorporated: Thorofare, NJ, USA, 2006; p. 101. [Google Scholar]
  21. Highsmith, M.J.; Kahle, J.T.; Miro, R.M.; Orendurff, M.S.; Lewandowski, A.L.; Orriola, J.J.; Sutton, B.; Ertl, J.P. Prosthetic interventions for people with transtibial amputation: Systematic review and meta-analysis of high-quality prospective literature and systematic reviews. J. Rehabil. Res. Dev. 2016, 53, 157–184. [Google Scholar] [CrossRef]
  22. Argall, B.D. Autonomy in rehabilitation robotics: An intersection. Annu. Rev. Control. Robot. Auton. Syst. 2018, 1, 441–463. [Google Scholar] [CrossRef]
  23. Humes, L.E.; Pichora-Fuller, M.K.; Hickson, L. Functional consequences of impaired hearing in older adults and implications for intervention. In Aging and Hearing: Causes and Consequences; Springer International Publishing: Cham, Switzerland, 2020; pp. 257–291. [Google Scholar]
  24. Peterson, N.R.; Pisoni, D.B.; Miyamoto, R.T. Cochlear implants and spoken language processing abilities: Review and assessment of the literature. Restor. Neurol. Neurosci. 2010, 28, 237–250. [Google Scholar] [CrossRef]
  25. Corey, R.M.; Singer, A.C. Immersive Enhancement and Removal of Loudspeaker Sound Using Wireless Assistive Listening Systems and Binaural Hearing Devices. In Proceedings of the ICASSP 2023—2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Rhodes Island, Greece, 4–10 June 2023; IEEE: New York, NY, USA, 2023; pp. 1–2. [Google Scholar]
  26. Kuhn, K.; Kersken, V.; Reuter, B.; Egger, N.; Zimmermann, G. Measuring the accuracy of automatic speech recognition solutions. ACM Trans. Access. Comput. 2024, 16, 1–23. [Google Scholar] [CrossRef]
  27. Frank-Briggs, A.I. Attention Deficit Hyperactivity Disorder (ADHD). J. Pediatr. Neurol. 2011, 9, 291–298. [Google Scholar] [CrossRef]
  28. Cibrian, F.L.; Lakes, K.D.; Schuck, S.E.; Hayes, G.R. The potential for emerging technologies to support self-regulation in children with ADHD: A literature review. Int. J. Child-Comput. Interact. 2022, 31, 100421. [Google Scholar] [CrossRef]
  29. Cattaneo, Z.; Vecchi, T.; Cornoldi, C.; Mammarella, I.; Bonino, D.; Ricciardi, E.; Pietrini, P. Imagery and spatial processes in blindness and visual impairment. Neurosci. Biobehav. Rev. 2008, 32, 1346–1360. [Google Scholar] [CrossRef] [PubMed]
  30. Kim, H.N.; Smith-Jackson, T.L.; Kleiner, B.M. Accessible haptic user interface design approach for users with visual impairments. Univers. Access Inf. Soc. 2014, 13, 415–437. [Google Scholar] [CrossRef]
  31. Qin, B.; Liang, L.; Wu, J.; Quan, Q.; Wang, Z.; Li, D. Automatic identification of Down syndrome using facial images with deep convolutional neural network. Diagnostics 2020, 10, 487. [Google Scholar] [CrossRef] [PubMed]
  32. Shahamiri, S.R.; Salim, S.S.B. Artificial neural networks as speech recognisers for dysarthric speech: Identifying the best-performing set of MFCC parameters and studying a speaker-independent approach. Adv. Eng. Inform. 2014, 28, 102–110. [Google Scholar] [CrossRef]
  33. Barua, P.D.; Vicnesh, J.; Gururajan, R.; Oh, S.L.; Palmer, E.; Azizan, M.M.; Kadri, N.A.; Acharya, U.R. Artificial intelligence enabled personalised assistive tools to enhance education of children with neurodevelopmental disorders—A review. Int. J. Environ. Res. Public Health 2022, 19, 1192. [Google Scholar] [CrossRef]
  34. Yi, S.; Li, C.; Li, Q. A survey of fog computing: Concepts, applications and issues. In Proceedings of the 2015 Workshop on Mobile Big Data, Brussels, Belgium, 22–23 June 2015; pp. 37–42. [Google Scholar]
  35. Agiwal, M.; Roy, A.; Saxena, N. Next generation 5G wireless networks: A comprehensive survey. IEEE Commun. Surv. Tutor. 2016, 18, 1617–1655. [Google Scholar] [CrossRef]
  36. Sicari, S.; Rizzardi, A.; Grieco, L.A.; Coen-Porisini, A. Security, privacy and trust in Internet of Things: The road ahead. Comput. Netw. 2015, 76, 146–164. [Google Scholar] [CrossRef]
  37. Chan, M.; Estève, D.; Escriba, C.; Campo, E. A review of smart homes—Present state and future challenges. Comput. Methods Programs Biomed. 2008, 91, 55–81. [Google Scholar] [CrossRef]
  38. Patel, S.; Park, H.; Bonato, P.; Chan, L.; Rodgers, M. A review of wearable sensors and systems with application in rehabilitation. J. Neuroeng. Rehabil. 2012, 9, 21. [Google Scholar] [CrossRef]
  39. Willis, S.; Helal, S. RFID information grid for blind navigation and wayfinding. In Proceedings of the Ninth IEEE International Symposium on Wearable Computers (ISWC’05), Osaka, Japan, 18–21 October 2005; IEEE: New York, NY, USA, 2005; pp. 34–37. [Google Scholar]
  40. Rubio-Tamayo, J.L.; Gertrudix Barrio, M.; García García, F. Immersive environments and virtual reality: Systematic review and advances in communication, interaction and simulation. Multimodal Technol. Interact. 2017, 1, 21. [Google Scholar] [CrossRef]
  41. Fitzpatrick, G. A short history of human computer interaction: A people-centred perspective. In Proceedings of the 2018 ACM SIGUCCS Annual Conference, Orlando, FL, USA, 7–10 October 2018; p. 3. [Google Scholar]
  42. Myers, B.A. A brief history of human-computer interaction technology. Interactions 1998, 5, 44–54. [Google Scholar] [CrossRef]
  43. Awuah, W.A.; Ahluwalia, A.; Darko, K.; Sanker, V.; Tan, J.K.; Pearl, T.O.; Ben-Jaafar, A.; Ranganathan, S.; Aderinto, N.; Mehta, A.; et al. Bridging minds and machines: The recent advances of brain-computer interfaces in neurological and neurosurgical applications. World Neurosurg. 2024, 189, 138–153. [Google Scholar] [CrossRef] [PubMed]
  44. Asif, M. AI and Human Interaction: Enhancing User Experience Through Intelligent Systems. Front. Artif. Intell. Res. 2024, 1, 209–249. [Google Scholar]
  45. Malviya, R.; Rajput, S. AI-Driven Innovations in Assistive Technology for People with Disabilities. In Advances and Insights into AI-Created Disability Supports; Springer: Berlin/Heidelberg, Germany, 2025; pp. 61–77. [Google Scholar]
  46. Kranthi, B.J.; Suhas, G.; Varma, K.B.; Reddy, G.P. A two-way communication system with Morse code medium for people with multiple disabilities. In Proceedings of the 2020 IEEE 7th Uttar Pradesh Section International Conference on Electrical, Electronics and Computer Engineering (UPCON), Prayagraj, India, 27–29 November 2020; IEEE: New York, NY, USA, 2020; pp. 1–6. [Google Scholar]
  47. Chi, C.F.; Tseng, L.K.; Jang, Y. Pruning a decision tree for selecting computer-related assistive devices for people with disabilities. IEEE Trans. Neural Syst. Rehabil. Eng. 2012, 20, 564–573. [Google Scholar] [CrossRef]
  48. Khorosheva, T.; Novoseltseva, M.; Geidarov, N.; Krivosheev, N.; Chernenko, S. Neural network control interface of the speaker dependent computer system «Deep Interactive Voice Assistant DIVA» to help people with speech impairments. In Proceedings of the Third International Scientific Conference “Intelligent Information Technologies for Industry”(IITI’18), Sochi, Russia, 17–21 September 2018; Springer: Cham, Switzerland, 2019; Volume 1, pp. 444–452. [Google Scholar]
  49. Kiangala, S.K.; Wang, Z. An effective adaptive customization framework for small manufacturing plants using extreme gradient boosting-XGBoost and random forest ensemble learning algorithms in an Industry 4.0 environment. Mach. Learn. Appl. 2021, 4, 100024. [Google Scholar] [CrossRef]
  50. Singh, Y.; Kaur, L.; Neeru, N. A new improved obstacle detection framework using IDCT and CNN to assist visually impaired persons in an outdoor environment. Wirel. Pers. Commun. 2022, 124, 3685–3702. [Google Scholar] [CrossRef]
  51. Spoladore, D.; Negri, L.; Arlati, S.; Mahroo, A.; Fossati, M.; Biffi, E.; Davalli, A.; Trombetta, A.; Sacco, M. Towards a knowledge-based decision support system to foster the return to work of wheelchair users. Comput. Struct. Biotechnol. J. 2024, 24, 374–392. [Google Scholar] [CrossRef]
  52. Gao, X.; Yang, T.; Peng, J. Logic-enhanced adaptive network-based fuzzy classifier for fall recognition in rehabilitation. IEEE Access 2020, 8, 57105–57113. [Google Scholar] [CrossRef]
  53. Namoun, A.; Humayun, M.A.; BenRhouma, O.; Hussein, B.R.; Tufail, A.; Alshanqiti, A.; Nawaz, W. Service selection using an ensemble meta-learning classifier for students with disabilities. Multimodal Technol. Interact. 2023, 7, 42. [Google Scholar] [CrossRef]
  54. Gheisari, M.; Ebrahimzadeh, F.; Rahimi, M.; Moazzamigodarzi, M.; Liu, Y.; Dutta Pramanik, P.K.; Heravi, M.A.; Mehbodniya, A.; Ghaderzadeh, M.; Feylizadeh, M.R.; et al. Deep learning: Applications, architectures, models, tools, and frameworks: A comprehensive survey. CAAI Trans. Intell. Technol. 2023, 8, 581–606. [Google Scholar]
  55. Abdolrasol, M.G.; Hussain, S.S.; Ustun, T.S.; Sarker, M.R.; Hannan, M.A.; Mohamed, R.; Ali, J.A.; Mekhilef, S.; Milad, A. Artificial neural networks based optimization techniques: A review. Electronics 2021, 10, 2689. [Google Scholar] [CrossRef]
  56. Sarker, I.H. Deep learning: A comprehensive overview on techniques, taxonomy, applications and research directions. SN Comput. Sci. 2021, 2, 420. [Google Scholar] [CrossRef]
  57. Ashar, A.A.K.; Abrar, A.; Liu, J. A Survey on Deep Learning-based Smart Assistive Aids for Visually Impaired Individuals. In Proceedings of the 2023 7th International Conference on Information System and Data Mining, Atlanta, GA, USA, 10–12 May 2023; pp. 90–95. [Google Scholar]
  58. ZainEldin, H.; Gamel, S.A.; Talaat, F.M.; Aljohani, M.; Baghdadi, N.A.; Malki, A.; Badawy, M.; Elhosseini, M.A. Silent no more: A comprehensive review of artificial intelligence, deep learning, and machine learning in facilitating deaf and mute communication. Artif. Intell. Rev. 2024, 57, 188. [Google Scholar] [CrossRef]
  59. Leo, M.; Furnari, A.; Medioni, G.G.; Trivedi, M.; Farinella, G.M. Deep learning for assistive computer vision. In Proceedings of the European Conference on Computer Vision (ECCV) Workshops, Munich, Germany, 8–14 September 2018; pp. 1–12. [Google Scholar]
  60. Zhang, X.; Huang, X.; Ding, Y.; Long, L.; Li, W.; Xu, X. Advancements in Smart Wearable Mobility Aids for Visual Impairments: A Bibliometric Narrative Review. Sensors 2024, 24, 7986. [Google Scholar] [CrossRef] [PubMed]
  61. Haux, R.; Hein, A.; Kolb, G.; Künemund, H.; Eichelberg, M.; Appell, J.E.; Appelrath, H.J.; Bartsch, C.; Bauer, J.M.; Becker, M.; et al. Information and communication technologies for promoting and sustaining quality of life, health and self-sufficiency in ageing societies–outcomes of the Lower Saxony Research Network Design of Environments for Ageing (GAL). Inform. Health Soc. Care 2014, 39, 166–187. [Google Scholar] [CrossRef] [PubMed]
  62. Stan, I.E.; D’Auria, D.; Napoletano, P. A Systematic Literature Review of Innovations, Challenges, and Future Directions in Telemonitoring and Wearable Health Technologies. IEEE J. Biomed. Health Inform. 2025, 99, 1–22. [Google Scholar] [CrossRef]
  63. Vrančić, A.; Zadravec, H.; Orehovački, T. The role of smart homes in providing care for older adults: A systematic literature review from 2010 to 2023. Smart Cities 2024, 7, 1502–1550. [Google Scholar] [CrossRef]
  64. Bollineni, C.; Sharma, M.; Hazra, A.; Kumari, P.; Manipriya, S.; Tomar, A. IoT for Next-Generation Smart Healthcare: A Comprehensive Survey. IEEE Internet Things J. 2025, 12, 32616–32639. [Google Scholar] [CrossRef]
  65. Baker, S.; Xiang, W. Artificial intelligence of things for smarter healthcare: A survey of advancements, challenges, and opportunities. IEEE Commun. Surv. Tutor. 2023, 25, 1261–1293. [Google Scholar] [CrossRef]
  66. Modi, N.; Singh, J. A survey of research trends in assistive technologies using information modelling techniques. Disabil. Rehabil. Assist. Technol. 2022, 17, 605–623. [Google Scholar] [CrossRef]
  67. Zdravkova, K.; Krasniqi, V.; Dalipi, F.; Ferati, M. Cutting-edge communication and learning assistive technologies for disabled children: An artificial intelligence perspective. Front. Artif. Intell. 2022, 5, 970430. [Google Scholar] [CrossRef]
  68. Maskeliūnas, R.; Damaševičius, R.; Segal, S. A review of internet of things technologies for ambient assisted living environments. Future Internet 2019, 11, 259. [Google Scholar] [CrossRef]
  69. Khalid, U.b.; Naeem, M.; Stasolla, F.; Syed, M.H.; Abbas, M.; Coronato, A. Impact of AI-powered solutions in rehabilitation process: Recent improvements and future trends. Int. J. Gen. Med. 2024, 17, 943–969. [Google Scholar] [CrossRef] [PubMed]
  70. Wambuaa, R.N.; Oduorb, C.D. Implications of Internet of Things (IoT) on the Education for students with disabilities: A Systematic Literature Review. Int. J. Res. Public 2022, 102, 378–407. [Google Scholar] [CrossRef]
  71. Taimoor, N.; Rehman, S. Reliable and resilient AI and IoT-based personalised healthcare services: A survey. IEEE Access 2021, 10, 535–563. [Google Scholar] [CrossRef]
  72. de Freitas, M.P.; Piai, V.A.; Farias, R.H.; Fernandes, A.M.; de Moraes Rossetto, A.G.; Leithardt, V.R.Q. Artificial intelligence of things applied to assistive technology: A systematic literature review. Sensors 2022, 22, 8531. [Google Scholar] [CrossRef]
  73. Lavric, A.; Beguni, C.; Zadobrischi, E.; Căilean, A.M.; Avătămăniței, S.A. A comprehensive survey on emerging assistive technologies for visually impaired persons: Lighting the path with visible light communications and artificial intelligence innovations. Sensors 2024, 24, 4834. [Google Scholar] [CrossRef]
  74. Nasr, M.; Islam, M.M.; Shehata, S.; Karray, F.; Quintana, Y. Smart healthcare in the age of AI: Recent advances, challenges, and future prospects. IEEE Access 2021, 9, 145248–145270. [Google Scholar] [CrossRef]
  75. Thilakarathne, N.N.; Kagita, M.K.; Gadekallu, T.R. The role of the internet of things in health care: A systematic and comprehensive study. Int. J. Eng. Manag. Res. 2020, 10, 145–159. [Google Scholar] [CrossRef]
  76. Kumar, A.; Saudagar, A.K.J.; Khan, M.B. Enhanced Medical Education for Physically Disabled People through Integration of IoT and Digital Twin Technologies. Systems 2024, 12, 325. [Google Scholar] [CrossRef]
  77. Alshamrani, M. IoT and artificial intelligence implementations for remote healthcare monitoring systems: A survey. J. King Saud Univ.-Comput. Inf. Sci. 2022, 34, 4687–4701. [Google Scholar] [CrossRef]
  78. Domingo, M.C. An overview of machine learning and 5G for people with disabilities. Sensors 2021, 21, 7572. [Google Scholar] [CrossRef] [PubMed]
  79. Perez, A.J.; Siddiqui, F.; Zeadally, S.; Lane, D. A review of IoT systems to enable independence for the elderly and disabled individuals. Internet Things 2023, 21, 100653. [Google Scholar] [CrossRef]
  80. Joudar, S.S.; Albahri, A.S.; Hamid, R.A.; Zahid, I.A.; Alqaysi, M.E.; Albahri, O.S.; Alamoodi, A.H. Artificial intelligence-based approaches for improving the diagnosis, triage, and prioritization of autism spectrum disorder: A systematic review of current trends and open issues. Artif. Intell. Rev. 2023, 56, 53–117. [Google Scholar] [CrossRef]
  81. Zhou, S.; Loiacono, E.T.; Kordzadeh, N. Smart cities for people with disabilities: A systematic literature review and future research directions. Eur. J. Inf. Syst. 2023, 33, 845–862. [Google Scholar] [CrossRef]
  82. Tricco, A.C.; Lillie, E.; Zarin, W.; O’Brien, K.K.; Colquhoun, H.; Levac, D.; Moher, D.; Peters, M.D.J.; Horsley, T.; Weeks, L.; et al. PRISMA extension for scoping reviews (PRISMA-ScR): Checklist and explanation. Ann. Intern. Med. 2018, 169, 467–473. [Google Scholar] [CrossRef]
  83. Kitchenham, B.; Charters, S. Guidelines for Performing Systematic Literature Reviews in Software Engineering; Technical Report, EBSE Technical Report; Keele University: Newcastle, UK, 2007. [Google Scholar]
  84. Oyuela, F.Z.; Paz, Ó.A. AI-powered backup robot designed to enhance cognitive abilities in children with Down syndrome. In Proceedings of the 2023 IEEE Central America and Panama Student Conference (CONESCAPAN), Guatemala, 26–29 September 2023; IEEE: New York, NY, USA, 2023; pp. 134–138. [Google Scholar]
  85. Peraković, D.; Periša, M.; Cvitić, I.; Zorić, P. Application of the Internet of Things Concept to Inform People with Down Syndrome. In Proceedings of the XL Simpozijum o Novim Tehnologijama u Poštanskom i Telekomunikacionom Saobraćaju–PosTel 2022, Beograd, Serbia, 29–30 November 2022; pp. 279–288. [Google Scholar]
  86. Do, H.D.; Allison, J.J.; Nguyen, H.L.; Phung, H.N.; Tran, C.D.; Le, G.M.; Nguyen, T.T. Applying machine learning in screening for Down Syndrome in both trimesters for diverse healthcare scenarios. Heliyon 2024, 10, e34476. [Google Scholar] [CrossRef]
  87. Mavaluru, D.; Ravula, S.R.; Auguskani, J.P.L.; Dharmarajlu, S.M.; Chellathurai, A.; Ramakrishnan, J.; Mugaiahgari, B.K.M.; Ravishankar, N. Advancing Fetal Ultrasound Diagnostics: Innovative Methodologies for Improved Accuracy in Detecting Down Syndrome. Med. Eng. Phys. 2024, 126, 104132. [Google Scholar] [CrossRef]
  88. Al Banna, M.H.; Ghosh, T.; Taher, K.A.; Kaiser, M.S.; Mahmud, M. A monitoring system for patients of autism spectrum disorder using artificial intelligence. In Proceedings of the International Conference on Brain Informatics, Padua, Italy, 19 September 2020; Springer: Berlin/Heidelberg, Germany, 2020; pp. 251–262. [Google Scholar]
  89. Shahamiri, S.R.; Thabtah, F. Autism AI: A new autism screening system based on artificial intelligence. Cogn. Comput. 2020, 12, 766–777. [Google Scholar] [CrossRef]
  90. Popescu, A.L.; Popescu, N.; Dobre, C.; Apostol, E.S.; Popescu, D. IoT and AI-based application for automatic interpretation of the affective state of children diagnosed with autism. Sensors 2022, 22, 2528. [Google Scholar] [CrossRef]
  91. Shelke, N.A.; Rao, S.; Verma, A.K.; Kasana, S.S. Autism Spectrum Disorder Detection Using AI and IoT. In Proceedings of the 2022 Fourteenth International Conference on Contemporary Computing, Noida, India, 4–6 August 2022; pp. 213–219. [Google Scholar]
  92. Ashraf, A.; Zhao, Q.; Bangyal, W.H.; Iqbal, M. Analysis of brain imaging data for the detection of early age autism spectrum disorder using transfer learning approaches for Internet of Things. IEEE Trans. Consum. Electron. 2023, 70, 4478–4489. [Google Scholar] [CrossRef]
  93. Jacob, S.; Alagirisamy, M.; Xi, C.; Balasubramanian, V.; Srinivasan, R.; Parvathi, R.; Jhanjhi, N.; Islam, S.M. AI and IoT-enabled smart exoskeleton system for rehabilitation of paralyzed people in connected communities. IEEE Access 2021, 9, 80340–80350. [Google Scholar] [CrossRef]
  94. Aldolaim, R.J.; Gull, H.; Iqbal, S.Z. Boxly: Design and Architecture of a Smart Physical Therapy Clinic for People Having Mobility Disability Using Metaverse, AI, and IoT Technologies in Saudi Arabia. In Proceedings of the 2024 IEEE International Conference on Information Technology, Electronics and Intelligent Communication Systems (ICITEICS), Bangalore, India, 28–29 June 2024; IEEE: New York, NY, USA, 2024; pp. 1–5. [Google Scholar]
  95. Vourganas, I.; Stankovic, V.; Stankovic, L. Individualised responsible artificial intelligence for home-based rehabilitation. Sensors 2020, 21, 2. [Google Scholar] [CrossRef] [PubMed]
  96. Alharbi, H.A.; Alharbi, K.K.; Hassan, C.A.U. Enhancing elderly fall detection through IoT-enabled smart flooring and AI for independent living sustainability. Sustainability 2023, 15, 15695. [Google Scholar] [CrossRef]
  97. Morollón Ruiz, R.; Garcés, J.A.C.; Soo, L.; Fernández, E. The Implementation of Artificial Intelligence Based Body Tracking for the Assessment of Orientation and Mobility Skills in Visual Impaired Individuals. In Proceedings of the International Work-Conference on the Interplay Between Natural and Artificial Computation, Olhâo, Portugal, 31 May–3 June 2024; Springer: Berlin/Heidelberg, Germany, 2024; pp. 485–494. [Google Scholar]
  98. Ozarkar, S.; Chetwani, R.; Devare, S.; Haryani, S.; Giri, N. AI for accessibility: Virtual assistant for hearing impaired. In Proceedings of the 2020 11th International Conference on Computing, Communication and Networking Technologies (ICCCNT), Kharagpur, India, 1–3 July 2020; IEEE: New York, NY, USA, 2020; pp. 1–7. [Google Scholar]
  99. Vinitha, I.; Kalyani, G.; Prathyusha, K.; Pavan, D. AI-Enabled Sign Language Detection: Bridging Communication Gaps for the Hearing Impaired. In Proceedings of the 2024 Control Instrumentation System Conference (CISCON), Manipal, India, 2–3 August 2024; IEEE: New York, NY, USA, 2024; pp. 1–5. [Google Scholar]
  100. Rajesh Kannan, S.; Ezhilarasi, P.; Rajagopalan, V.G.; Krishnamithran, S.; Ramakrishnan, H.; Balaji, H.K. Integrated AI based smart wearable assistive device for visually and hearing-impaired people. In Proceedings of the 2023 International Conference on Recent Trends in Electronics and Communication (ICRTEC), Mysore, India, 10–11 February 2023; IEEE: New York, NY, USA, 2023; pp. 1–6. [Google Scholar]
  101. Noor, T.H.; Noor, A.; Alharbi, A.F.; Faisal, A.; Alrashidi, R.; Alsaedi, A.S.; Alharbi, G.; Alsanoosy, T.; Alsaeedi, A. Real-Time Arabic Sign Language Recognition Using a Hybrid Deep Learning Model. Sensors 2024, 24, 3683. [Google Scholar] [CrossRef]
  102. Bekeš, E.R.; Galzina, V.; Kolar, E.B. Using human-computer interaction (hci) and artificial intelligence (ai) in education to improve the literacy of deaf and hearing-impaired children. In Proceedings of the 2024 47th MIPRO ICT and Electronics Convention (MIPRO), Opatija, Croatia, 20–24 May 2024; IEEE: New York, NY, USA, 2024; pp. 1375–1380. [Google Scholar]
  103. Tachmazidis, I.; Chen, T.; Adamou, M.; Antoniou, G. A hybrid AI approach for supporting clinical diagnosis of Attention Deficit Hyperactivity Disorder (ADHD) in adults. Health Inf. Sci. Syst. 2020, 9, 1. [Google Scholar] [CrossRef]
  104. Ristiyanti, N.; Dirgantoro, B.; Setianingsih, C. Behavioral disorder test to identify Attention-Deficit/Hyperactivity Disorder (ADHD) in children using fuzzy algorithm. In Proceedings of the 2021 IEEE International Conference on Internet of Things and Intelligence Systems (IoTaIS), Bandung, Indonesia, 23–24 November 2021; IEEE: New York, NY, USA, 2021; pp. 234–240. [Google Scholar]
  105. Navarro-Soria, I.; Rico-Juan, J.R.; Juárez-Ruiz de Mier, R.; Lavigne-Cervan, R. Prediction of attention deficit hyperactivity disorder based on explainable artificial intelligence. Appl. Neuropsychol. Child 2024, 14, 474–487. [Google Scholar] [CrossRef]
  106. Alkahtani, H.; Aldhyani, T.H.; Ahmed, Z.A.; Alqarni, A.A. Developing System-Based Artificial Intelligence Models for Detecting the Attention Deficit Hyperactivity Disorder. Mathematics 2023, 11, 4698. [Google Scholar] [CrossRef]
  107. Tan, Z.; Liu, Z.; Gong, S. Potential attempt to treat attention deficit/hyperactivity disorder (adhd) children with engineering education games. In Proceedings of the International Conference on Human-Computer Interaction, Copenhagen, Denmark, 23–28 July 2023; Springer: Berlin/Heidelberg, Germany, 2023; pp. 166–184. [Google Scholar]
  108. Wadhwa, V.; Gupta, B.; Gupta, S. AI based automated image caption tool implementation for visually impaired. In Proceedings of the 2021 International Conference on Industrial Electronics Research and Applications (ICIERA), New Delhi, India, 22–24 December 2021; IEEE: New York, NY, USA, 2021; pp. 1–6. [Google Scholar]
  109. Shandu, N.E.; Owolawi, P.A.; Mapayi, T.; Odeyemi, K. AI based pilot system for visually impaired people. In Proceedings of the 2020 International Conference on Artificial Intelligence, Big Data, Computing and Data Communication Systems (icABCD), Durban, South Africa, 6–7 August 2020; IEEE: New York, NY, USA, 2020; pp. 1–7. [Google Scholar]
  110. Ratheesh, R.; Sri Rakshaga, S.R.; Asan Fathima, A.; Dhanusha, S.; Harini, A. AI-Based Smart Visual Assistance System for Navigation, Guidance, and Monitoring of Visually Impaired People. In Proceedings of the 2024 Ninth International Conference on Science Technology Engineering and Mathematics (ICONSTEM), Chennai, India, 4–5 April 2024; IEEE: New York, NY, USA, 2024; pp. 1–10. [Google Scholar]
  111. Abhishek, S.; Sathish, H.; Kumar, A.; Anjali, T. Aiding the visually impaired using artificial intelligence and speech recognition technology. In Proceedings of the 2022 4th International Conference on Inventive Research in Computing Applications (ICIRCA), Coimbatore, India, 21–23 September 2022; IEEE: New York, NY, USA, 2022; pp. 1356–1362. [Google Scholar]
  112. Zhao, K.; Lai, R.; Guo, B.; Liu, L.; He, L.; Zhao, Y. AI-Vision: A Three-Layer Accessible Image Exploration System for People with Visual Impairments in China. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2024, 8, 1–27. [Google Scholar] [CrossRef]
  113. Ahmad, I.; Asghar, Z.; Kumar, T.; Li, G.; Manzoor, A.; Mikhaylov, K.; Shah, S.A.; Höyhtyä, M.; Reponen, J.; Huusko, J.; et al. Emerging technologies for next generation remote health care and assisted living. IEEE Access 2022, 10, 56094–56132. [Google Scholar] [CrossRef]
  114. Elmannai, W.; Elleithy, K. Sensor-based assistive devices for visually-impaired people: Current status, challenges, and future directions. Sensors 2017, 17, 565. [Google Scholar] [CrossRef]
  115. Herskovitz, J. DIY Assistive Software: End-User Programming for Personalized Assistive Technology. ACM SIGACCESS Access. Comput. 2024, 4, 1. [Google Scholar] [CrossRef]
  116. Torkamaan, H.; Tahaei, M.; Buijsman, S.; Xiao, Z.; Wilkinson, D.; Knijnenburg, B.P. The Role of Human-Centered AI in User Modeling, Adaptation, and Personalization—Models, Frameworks, and Paradigms. In A Human-Centered Perspective of Intelligent Personalized Environments and Systems; Springer: Cham, Switzerland, 2024; pp. 43–83. [Google Scholar]
  117. Andò, B.; Baglio, S.; Castorina, S.; Marletta, V.; Crispino, R. On the assessment and reliability of assistive technology: The case of falls and postural sway monitoring. IEEE Instrum. Meas. Mag. 2021, 24, 5–12. [Google Scholar] [CrossRef]
  118. Hazra, A.; Adhikari, M.; Amgoth, T.; Srirama, S.N. A comprehensive survey on interoperability for IIoT: Taxonomy, standards, and future directions. ACM Comput. Surv. (CSUR) 2021, 55, 1–35. [Google Scholar] [CrossRef]
  119. Zaharudin, R.; Izhar, N.A.; Hwa, D.L. Evaluating mobile application as assistive technology to improve students with learning disabilities for communication, personal care and physical function. Int. J. Learn. Teach. Educ. Res. 2024, 23, 19–37. [Google Scholar] [CrossRef]
  120. Cha, H.T.; Lee, Y.H.; Wang, Y.F. Developing Assistive Technology Products Based on Experiential Learning for Elderly Care. SN Comput. Sci. 2024, 5, 502. [Google Scholar] [CrossRef]
Figure 1. AI and IoT disability assistance analytical framework.
Figure 2. Distribution of published research prototypes (a) by year and (b) by publisher.
Figure 3. Quantitative assessment of AI and IoT disabilities research prototypes within the Disability Monitoring Layer.
Figure 4. Quantitative assessment of AI and IoT disabilities research prototypes within the Disability Analysis Layer.
Figure 5. Quantitative assessment of AI and IoT disabilities research prototypes within the Disability Assistance Layer.
Table 1. Comparative baseline between AI/IoT-based and traditional assistive technologies.
Criterion | Traditional Assistive Technologies | AI/IoT-Based Assistive Technologies
Adaptability | Static, predefined functions | Learns user behavior through AI models and adaptive feedback
Data Processing | Manual or device-specific | Automated, real-time data analytics via IoT and cloud
Personalization | Limited or user-configured only | Dynamic personalization through ML and user-context modeling
Response Time | Delayed or event-triggered | Real-time response via edge/IoT computation
Scalability | Difficult to expand | Modular and easily extensible via connected devices
Interactivity | One-way (device to user) | Bidirectional communication using speech, gesture, or environmental sensors
Maintenance | Frequent manual calibration | Self-learning updates reduce maintenance frequency
Accessibility | Local use only | Remote accessibility via IoT networks
Table 2. Evaluation of major AI and IoT disabilities research prototypes considering the Disability Monitoring Layer.
Prototype | Focus | Technology | Data Source | Data Type | Environment
[84]DSRPCIID
[85]DSIoT, BTSTD, DLOD
[31]DSNACIID
[86]DSNASIIID
[87]DSNASIIID
[88]ASDIoTSI, DLID
[89]ASDIoTSI, TIIID
[90]ASDIoT, RP, BTC, SI, TDID
[91]ASDIoT, AR, RFIDC, SI, TDID
[92]ASDIoTSII, TSID
[93]MIIoT, AR, LoRaC, SI, TDID
[94]MIIoTC, SI, TDID
[95]MIIoTSTDID
[96]MIIoT, RFIDSTDID
[97]VI, MIIoTCVFID
[98]HINAC, MVF, ADOD
[99]HINACI, VFID
[100]HI, VIIoT, LoRaC, SI, DLOD
[101]HINACI, VFID
[102]HINATITDID
[103]ADHDNATITDID
[104]ADHDNATITDID
[105]ADHDNATITDID
[106]ADHDNATITDID
[107]ADHDIoTTITDID
[108]VINACIID, OD
[109]VIRPC, SI, DLID, OD
[110]VIRPC, SI, DLID, OD
[111]VINAMADID
[112]VINACI, TSID, OD
Abbreviations. Focus: DS = Down Syndrome; ASD = Autism Spectrum Disorder; MI = Mobility Impairment; HI = Hearing Impairment; ADHD = Attention-Deficit/Hyperactivity Disorder; VI = Visual Impairment. Technology: IoT = Internet of Things; RP = Raspberry Pi; AR = Arduino; BT = Bluetooth; LoRa = Long Range Communication; RFID = Radio Frequency Identification; NA = Not Applicable. Data Source: C = Camera; S = Sensors; SI = Scanned Images; TI = Text Input; M = Microphone. Data Type: I = Images; DL = People with Disabilities Location; VF = Video Frames; TS = Time-Series; TD = Text Data; AD = Audio Data. Environment: ID = In-Door; OD = Out-Door.
Table 3. Evaluation of major AI and IoT disabilities research prototypes considering the Disability Analysis Layer.
Prototype | Technique | Model | Parameters | Architecture | Security and Privacy (S & P)
[84]MLSVMFR, CHCSS
[85]MLNACH, NRCN
[31]DLCNNFRCN
[86]MLXGBoostCH, BOCN
[87]DLCNNCH, BOCSSP
[88]DLCNNFRDCN
[89]DLCNNCH, A, BMDCN
[90]DLCNNCH, DRDCN
[91]DLCNNFR, CHDCN
[92]DLCNNBOCN
[93]DLANNBMDCSS
[94]DLCNNBMDCN
[95]DLXGBoost, KNNBMCSS
[96]MLKNNE, BMCSP
[97]DLYOLOv8NRCN
[98]DLCNN, LSTMBMCN
[99]DLCNNBMCN
[100]DLCNNFR, NRDCN
[101]DLCNN, LSTMBMDCSS
[102]HCINACH, BMCN
[103]MLDT, KMA, BCSP
[104]MLFuzzyCH, BCN
[105]MLRFCH, BCN
[106]ML, DLCNN, M-Layer PerceptronCH, BOCN
[107]HCINACH, BOCN
[108]DLCNN, LSTMVPCN
[109]DLCNNVPCN
[110]DLCNNNR, VPCN
[111]MLMel Freq. Cepstral Coeff.VPCN
[112]MLOpenCVVPCSP
Abbreviations. Technique: ML = Machine Learning; DL = Deep Learning; HCI = Human–Computer Interaction. Model: SVM = Support Vector Machine; CNN = Convolutional Neural Network; XGBoost = Extreme Gradient Boosting; ANN = Artificial Neural Network; KNN = K-Nearest Neighbor; YOLO = You Only Look Once; LSTM = Long Short-Term Memory; DT = Decision Tree; KM = Knowledge Model; RF = Random Forest; NA = Not Applicable. Parameters: FR = Facial Recognition; CH = Children; A/E = Adults/Elderly; NR = Navigation Routes; BO = Body Organs; DR = Drawing; BM = Body Motors; B = Behavior; VP = Visual Perception. Architecture: C = Centralized; D = Decentralized. S & P: SS = Supporting Security; SP = Supporting Privacy; SSP = Supporting Security and Privacy; N = None.
Table 4. Evaluation of major AI and IoT disabilities research prototypes considering the Disability Assistance Layer.
Prototype | Technology | Type of Assistance | Personalization | Cost | Response Time
[84] | ARB | CR | SP | UE | NSE
[85] | ND | NAS | SP | CE | NSE
[31] | DA | D | SP | CE | NSE
[86] | DA | D | SP | CE | NSE
[87] | DA | D | SP | CE | NSE
[88] | DA | D | SP | CE | NSE
[89] | DA | D | SP | CE | SE
[90] | AC, DA | D | SP | UE | SE
[91] | DA | D | SP | UE | NSE
[92] | DA | D | SP | CE | NSE
[93] | ND, MD | MA, R | SP | UE | SE
[94] | MV, ARB | MA, R | SP | CE | NSE
[95] | DA | MA, R | SP | UE | SE
[96] | DA | D | NS | UE | SE
[97] | ND | NAS | SP | CE | SE
[98] | AC, HD | NAS, CA | SP | CE | SE
[99] | DA | CA | NS | CE | NSE
[100] | ARB | NAS | SP | UE | SE
[101] | AC | CA | NS | CE | SE
[102] | AR | CR, CA | NS | CE | NSE
[103] | DA | D | NS | CE | NSE
[104] | DA | D | NS | CE | NSE
[105] | DA | D | NS | CE | NSE
[106] | DA | D | NS | UE | NSE
[107] | DA | D | NS | UE | NSE
[108] | VD | D | NS | CE | NSE
[109] | ND, VD | NAS, D | NS | UE | NSE
[110] | ND, VD | NAS, D | NS | UE | NSE
[111] | AC, VD | D | NS | CE | NSE
[112] | AC | D | NS | CE | NSE
Abbreviations. Technology: ARB = Assistive Robot; AC = Assistive Chatbot; MD = Mobility Devices; ND = Navigation Devices; HD = Hearing Devices; VD = Visual Devices; DA = Diagnostic/Detection Assistance; MV = Metaverse; AR = Augmented Reality. Type of Assistance: MA = Mobility Assistance; R = Rehabilitation; NAS = Navigation Assistance; CR = Cognitive Rehabilitation; D = Diagnostic/Detection; CA = Communication Assistance. Personalization: SP = Supporting Personalization; NS = Not Supporting. Cost: CE = Cost Effective; UE = Uneconomical. Response Time: SE = Strong Emphasis; NSE = No Strong Emphasis.
