Review

Evolution and Emerging Trends in Intelligent Wheelchair Control: A Comprehensive Review

Department of Mechanical and Measurement & Control Engineering, Idaho State University, Pocatello, ID 83209, USA
* Author to whom correspondence should be addressed.
Machines 2026, 14(1), 33; https://doi.org/10.3390/machines14010033
Submission received: 4 November 2025 / Revised: 8 December 2025 / Accepted: 16 December 2025 / Published: 25 December 2025
(This article belongs to the Section Automation and Control Systems)

Abstract

As wheelchair technology evolves and assumes a more prominent role in assistive technology, the rise of intelligent control systems necessitates a comprehensive review from an engineering perspective. In this work, we analyze the development of and emerging trends in intelligent wheelchair control, with a specific focus on classifying and comparing model-driven and data-driven control methodologies. Findings from a range of past contributions are examined, including conventional control theories, rule-based systems, and modern data-driven approaches encompassing supervised, unsupervised, and reinforcement learning control algorithms. The analysis indicates that while model-driven methods offer interpretability, data-driven techniques, in particular those leveraging machine learning, provide superior adaptability for navigating complex and dynamic environments. We further highlight key supporting systems, including sensors, actuators, and human-machine interfaces, and identify important functionalities such as autonomous navigation and obstacle avoidance. Our findings point to several future objectives that need to be addressed, such as energy efficiency, robustness in unpredictable settings, and the computational requirements and associated demands of data-driven methods. One of the highlighted fields of study in this work is the integration of reinforcement learning and sensor fusion, which may hold promising results for future wheelchair technologies.

1. Introduction

In 1655, a watchmaker named Stephan Farffler designed and built the first self-propelled wheelchair [1]. Early wheelchairs nevertheless required human assistance or a significant amount of upper body strength to operate. Consequently, with a desire to assist people with quadriplegia, George Klein built the first power wheelchair after World War II [2]. Although the purpose of such power wheelchairs was to assist individuals with mobility impairments, they soon proved valuable to people suffering from fatigue or pain as well. The world’s first commercial wheelchair was then developed by an American wheelchair company, Everest & Jennings [3]. According to past literature reviews, the first recorded intelligent wheelchair is widely credited to the work of Madarasz and colleagues in the early 1980s [4]. They presented one of the first autonomous wheelchairs equipped with sensors and on-board control for navigation and obstacle avoidance.
Modern-day wheelchairs come with various drive system options, such as front-wheel, center-wheel, or all-wheel drive, and some models even include features such as stair climbing or standing capabilities [5,6]. To support these varied designs, power wheelchairs offer numerous control interfaces, including hand-controlled joysticks, sip-and-puff systems, touch screens, voice control, and eye gaze systems. Despite significant advancements in intelligent wheelchair technology, there remain many opportunities for further automation tailored to patients with limited mobility and physical limitations. Leaman and La exemplified the potential for instrumenting more advanced technology into wheelchairs, highlighting how innovative solutions could enhance the independence and quality of life of users requiring more comprehensive assistance [7,8].
In terms of the power source, the most popular wheelchairs nowadays incorporate different types of batteries, for example, sealed lead-acid, gel, and lithium-ion. However, other power sources, such as fuel cells, solar power, and manual-electric hybrid systems, are also being commercialized. Recent research trends in this field include innovative electrode and electrolyte materials [9], battery management systems (BMS), fast-charging technologies, integration into smart grids and energy storage systems [10], lithium-sulfur batteries, and cobalt-free lithium-ion batteries [11]. Finally, seating arrangements have also been improved, featuring cushions to prevent pressure sores, motorized backrests, lateral supports, and adjustable footrests for comfort [12].
Components that transform a regular wheelchair into an intelligent wheelchair include sensor and perception systems, control systems, and navigation and path planning modules. Due to recent advances in machine learning (ML) and artificial intelligence (AI), researchers have increasingly used powered wheelchairs as test benches to evaluate new algorithms, making existing setups more efficient and user-friendly. The integration of modern ML and AI approaches has revolutionized this research domain from multiple aspects. Autonomous wheelchairs are now capable of object detection using algorithms such as YOLO (You Only Look Once) and deep neural networks, with researchers reporting high detection accuracy [13,14]. Beyond perception, path planning and navigation, human-machine interaction, and multi-sensor fusion have also benefited greatly from AI-driven innovations. Consequently, keeping track of this rapidly evolving field and identifying the remaining research gaps has become a key challenge.
Although a few recent review papers have summarized the progress made in intelligent wheelchair research [12,15,16], few have analyzed these advancements from a control engineering perspective. For instance, Ref. [12] was published in 2017 and reflects the state of the art at that time, but its in-depth discussion of the control techniques proposed until then was limited. Similarly, while [15] analyzed several control approaches, the number of reviewed works remained relatively small. The authors of [16] focused mainly on user feedback criteria. Our paper aims to address these limitations by highlighting past and ongoing control algorithms implemented in automated wheelchair setups, including key components and functionalities, and by examining their existing shortcomings.
Conventional control approaches often struggle to manage highly nonlinear and complex systems under uncertain real-world conditions, and they may lack the capability to generate the continuous action spaces required to safely maneuver a wheelchair out of unfamiliar situations. This means that, although termed intelligent, such wheelchairs are not smart enough to handle many real-world challenges, such as operating under low-light conditions, tolerating sensor faults, avoiding unseen obstacles, or navigating unknown paths. This work reviews how modern nonlinear and AI-augmented control systems are being developed to overcome these limitations. The major contribution of this study is to provide a systematic classification of research efforts from a control engineer’s point of view, discuss their current limitations, and propose key future research directions necessary to advance fully autonomous wheelchair technologies.

2. Early Control Systems and Literature Search Methodology

Understanding the development of intelligent wheelchair technologies requires first examining the early control systems that preceded modern autonomous and AI-based approaches. This section covers traditionally controlled wheelchairs as well as semi-automated and early intelligent systems before explaining the search methodology used for this review.

2.1. Early Control Mechanisms (Traditional/Manual)

Evidence of wheeled furniture used for human transport dates back to ancient civilizations, as found in stone carvings from Ancient China and Greece [17]. Manual wheelchairs were common throughout history as a basic means of mobility assistance. For example, King Philip II of Spain became immobile due to severe gout and used a crafted wheelchair with human assistance for mobility [18]. In the 18th century, push-rim-propelled wheelchairs became popular; however, maneuvering this type of chair requires significant physical effort from the user [19]. To offset this, different propulsion techniques such as crank-propelled, lever-propelled, and geared wheelchairs were developed, offering more leverage to users and improving their quality of life [20]. A detailed study on the stability of this type of wheelchair is presented in [21].
Traditional wheelchairs inherently come with physical and environmental limitations along with functional inconveniences. Research has shown that prolonged use of manual wheelchairs can contribute to significant restrictions in participation across various life domains [22]. With limited customization options, traditional wheelchairs pose significant challenges for individuals with additional impairments. The adverse effects on mental and physical well-being and overall quality of life have driven researchers to introduce more advanced assistive technologies such as powered wheelchairs. These limitations motivated the development of more sophisticated control mechanisms, including proportional joystick controls and intelligent sensing and control systems [23,24,25].

2.2. Semi-Automated and Early Intelligent Systems

The first powered wheelchair prototypes were developed in the early 1950s by Canadian engineer George Klein under a government-supported research initiative to assist World War II veterans with mobility impairments [26]. The incorporation of key technologies such as a joystick control interface, tighter turning mechanisms, and independent wheel drives offered a more convenient way to control wheelchairs. Besides proportional joysticks, isometric joysticks were developed to offer more accurate control by sensing force rather than displacement [27]. In the 1980s, research into intelligent wheelchairs flourished, integrating powered wheelchairs with computers and various sensors to provide navigation assistance [28].
Early intelligent wheelchairs utilized a variety of sensing devices, primarily for obstacle detection, contact detection, line following, and mapping. Wakaumi et al. proposed an automated wheelchair that used infrared sensors and magnetic ferrite marker tape for line following and obstacle detection. A robotic wheelchair system called Wheelesley, proposed by Yanco, featured autonomous navigation and collision avoidance by employing a combination of infrared sensors, sonar technology, and computer vision [29]. Several other similar prototypes, such as TAO [30], RobChair [31], and NavChair [32], were also developed utilizing various sensors and computer vision. Luo et al. proposed an automatic intelligent wheelchair system with different operating modes that follows a guiding service robot in an unknown environment [33]. Although functional, these early intelligent systems often faced significant limitations related to sensor accuracy and robustness when dealing with dynamic environments.
A comprehensive and unbiased methodology was followed to select the literature at the outset of this work. The following subsections outline the search strategy, databases used, inclusion and exclusion criteria, screening procedure, and the overall scope of the review.

2.3. Search Strategy

A multi-stage search process was conducted to identify relevant publications on intelligent wheelchairs, control architectures, sensing technologies, human–machine interfaces, and emerging data-driven methods. Boolean operators were used to refine queries and capture both classical and emerging research directions. Representative search strings included:
  • “Intelligent wheelchair” AND (“control” OR “navigation” OR “autonomy”)
  • “Wheelchair” AND (“machine learning” OR “reinforcement learning” OR “sensor fusion”)
  • “Smart wheelchair” AND (“path planning” OR “obstacle avoidance” OR “human–robot interaction”)
  • “Assistive mobility” AND (“MPC” OR “PID” OR “fuzzy” OR “data-driven control”)

2.3.1. Inclusion and Exclusion Criteria

The selection process screened titles/abstracts using the inclusion/exclusion criteria below; full texts of shortlisted papers were then reviewed.

2.3.2. Screening and Selection Procedure

The search initially yielded 1336 research articles across seven databases using the strings above. Duplicates were removed, then titles and abstracts were screened against the criteria in Table 1. Full-text reviews were conducted for shortlisted papers. The numbers of selected articles by source are summarized below in Table 2.

3. Intelligent Control Systems in Wheelchairs

Designing a controller successfully involves several steps: designing the controller based on a model representation of the system, simulating it before applying it to the real system, and validating performance across diverse scenarios. Intelligent control systems in wheelchairs can be categorized into two main approaches: model-driven and data-driven. Figure 1 shows a block representation of an intelligent wheelchair with its different subsystems.
Here, $y$ represents the measured states and $\hat{y}$ the states estimated using sensors and perception algorithms. Figure 1 shows how an intelligent controller updates its control parameters based on the error $e$ between the estimated state $\hat{y}$ and the true or reference state $x_{\mathrm{ref}}$. State estimation involves sensor feedback and various perception algorithms. The control signal $u$ is then fed into the actuators, which in turn generate the mechanical force or torque $(F, \tau)$ required for wheelchair movement. The parameter adjustment compensates for errors in the state estimation process.
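To make this loop concrete, the following minimal Python sketch mirrors the structure of Figure 1. The class and function names (ProportionalController, actuator_model) are illustrative placeholders, not components of any cited system.

```python
class ProportionalController:
    """Toy stand-in for the intelligent controller block in Figure 1."""
    def __init__(self, gain):
        self.gain = gain

    def update(self, e):
        # A real intelligent controller would also adjust its parameters here.
        return self.gain * e

def control_step(x_ref, y_hat, controller, actuator_model):
    e = x_ref - y_hat             # error between reference and estimated state
    u = controller.update(e)      # controller computes the control signal u
    return actuator_model(u)      # actuators convert u into force/torque (F, tau)

# One loop iteration with a toy linear actuator model
f_tau = control_step(x_ref=1.0, y_hat=0.8,
                     controller=ProportionalController(gain=5.0),
                     actuator_model=lambda u: 0.2 * u)
```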

3.1. Model-Driven Approaches

An early and fundamental approach in developing intelligent wheelchairs is the model-driven approach, which relies on mathematical models based on wheelchair dynamics, environmental interactions, and predefined control logic. Researchers formulate control laws and rules to dictate the wheelchair’s actions by understanding the system’s behavior. This can either be achieved by operating on a set of predefined rules or by designing a controller based on fundamental control theories.

3.1.1. Control Theory-Based Methods

Classical control methods have been extensively applied to wheelchair movement and stability. Designing such controllers requires an accurate mathematical representation of the system. PID (Proportional-Integral-Derivative) controllers have been widely used due to their simplicity and effectiveness in maintaining desired trajectories and speeds [23,34]. In [35], the authors formulated a mathematical model for a DC servo motor-supported electric wheelchair (EW) and observed a smooth angular speed response with immediate reaction, indicating that the PID controller’s derivative term had been tuned precisely. Castillo et al. [36] developed a cost-effective wheelchair prototype capable of climbing stairs using four X-type wheel rims; the wheelchair features an adjustable seat that uses IMU sensors and a PID controller to maintain horizontal balance by detecting the seat angle. The project in [37] successfully designed and implemented a smart wheelchair system featuring a PID-controlled automatic seat inclination adjustment at 3 degrees. However, when designing a PID controller, researchers often struggle to identify a sufficient set of control parameters and tune them properly [38]. Low-cost control architectures are still relevant, such as the semi-automatic wheelchair described in [39], which uses an Arduino-based motor control system to regulate wheel speed and direction. In practice, PID controllers often struggle to adapt to sudden changes in system dynamics, such as variations in torque limits during stair climbing or fluctuations in load.

Another conventional but more advanced controller is model predictive control (MPC), which requires a reasonably accurate mathematical model of a dynamic system. MPC allows systems to anticipate future states and optimize control actions accordingly [40,41]. In [42], MPC was used to manage the dynamic behavior of a vehicle: the controller was fed various inputs, such as lateral acceleration and steering wheel angle, and the roll angle was then predicted using long short-term memory (LSTM) cells. Moreover, in [43], the authors presented a robust velocity control method for an ergometer equipped with a manual wheelchair. They introduced a new dynamic model for simulating the wheelchair and ergometer, proposed an explicit MPC (eMPC) approach, and compared its performance to a conventional PI controller. Experimental results demonstrate that the eMPC approach effectively meets velocity tracking requirements under various conditions, outperforming the PI controller. However, the performance of MPC depends heavily on the accuracy of the mathematical model. If the model is not accurate enough, the controller will struggle to predict future states, and performance may decline in an undefined nonlinear environment.
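As an illustration of the classical baseline discussed above, the sketch below implements a discrete-time PID loop for angular speed regulation. The gains, sampling period, and setpoint are illustrative values, not those used in [35] or [37].

```python
class PID:
    """Discrete PID controller (gains here are illustrative)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt                   # accumulate error
        derivative = (error - self.prev_error) / self.dt   # rate of change
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: regulate angular speed toward 0.5 rad/s at a 50 Hz control rate
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.02)
measured_omega = 0.3                       # stand-in sensor reading [rad/s]
u = pid.update(error=0.5 - measured_omega)
```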

3.1.2. Rule-Based Systems

Model-driven approaches offer predictability and interpretability; however, they often require precise mathematical models of the system dynamics, which can be challenging to develop for complex wheelchair-environment interactions. Hence, for highly nonlinear systems such as a wheelchair, a fuzzy controller is often more suitable. Rule-based control systems operate on predefined sets of logical rules derived from expert knowledge and practical experience, using if-then logic to make navigation and control decisions based on sensor inputs and environmental conditions. Because they rely on a direct mapping of conditions to actions, these systems offer a decision-making process that is transparent and easily traceable [44]. The primary driving forces behind these systems are computational simplicity and ease of implementation. In [45], the researchers designed and implemented a fuzzy PID controller to regulate the angular speed of a wheelchair and showed that it performs better than a regular PID controller in both dynamic and steady-state conditions. Furthermore, in [46], Aulia et al. developed a fuzzy PID control system to regulate speed and maintain stability at an angle not exceeding 10°; the system demonstrated 80% effectiveness across users of varying weights. These early implementations demonstrated the viability of such approaches for basic navigation tasks, though they often struggled with edge cases and novel scenarios not covered by the predefined rules [23,47].
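A zero-order Sugeno-style fuzzy rule base can be sketched in a few lines. The membership breakpoints and rule outputs below are illustrative assumptions, not the designs of [45] or [46].

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_speed_correction(error):
    """Three if-then rules map a speed error to a correction:
    error negative -> slow down; near zero -> hold; positive -> speed up."""
    mu_neg  = tri(error, -2.0, -1.0, 0.0)
    mu_zero = tri(error, -1.0,  0.0, 1.0)
    mu_pos  = tri(error,  0.0,  1.0, 2.0)
    num = mu_neg * (-0.5) + mu_zero * 0.0 + mu_pos * 0.5
    den = mu_neg + mu_zero + mu_pos
    return num / den if den > 0 else 0.0   # weighted-average defuzzification

correction = fuzzy_speed_correction(0.4)   # small positive error -> mild speed-up
```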

3.2. Emerging Data-Driven Approaches

Traditional model-driven approaches such as PID or MPC are often limited by their heavy dependence on an accurate system representation. Similarly, rule-based control approaches such as fuzzy logic are sensitive to the distribution of their membership functions and the number of rules. Due to these complexities, researchers have leaned toward data-driven approaches, which offer superior adaptability and decision-making capabilities. Among data-driven methodologies, neural networks have emerged as the most widely adopted approach. The first significant research on the use of neural networks for controlling a wheelchair was conducted in the early 1990s. Specifically, one of the pioneering studies was by Chen et al. [48] in 1993, who explored the use of hybrid learning algorithms for Gaussian potential function networks to control a robotic wheelchair. This work laid the groundwork for integrating neural network-based control systems into assistive mobility devices. However, the first commercially available wheelchair to implement neural network technology for control and navigation was introduced much later. A significant leap in commercial application occurred when CNNs were applied to intelligent electric wheelchairs to enhance navigation and obstacle detection; one notable example is the work by Sutikno et al. in 2020, which implemented CNNs for navigating wheelchairs on a Raspberry Pi platform [49]. Despite these advancements, there remain many opportunities for improvement. Intelligent wheelchairs incorporating such neural networks or data-driven approaches can be broadly categorized into three classes, which are reviewed in the following subsections.

3.2.1. Supervised Learning

Supervised learning approaches involve direct human supervision of decision-making via data extracted from sensors such as EEG, EMG, vision sensors (camera, LiDAR), and sonar. In such an approach, a model is trained either to follow the user’s intention or to perceive its surroundings with assistive sensor technology.
  • In user-intent-recognition-based wheelchair control systems, models are trained to follow user commands. Commands can be direct, such as joystick manipulation or voice control [47,50], or indirect, requiring interpretation of biological signals. Brain-computer interfaces (BCIs) using EEG signals have been developed to enable wheelchair control for users with severe motor disabilities [51,52]. Similarly, gaze-based control systems use eye-tracking technology to interpret user intent from eye movements, offering hands-free operation for individuals with limited motor function [52]. In addition to these approaches, researchers have explored fully vision-driven systems for users with severe disabilities; for example, ref. [53] proposes an autonomous wheelchair controlled solely through eye orientation. Voice-controlled systems have gained popularity due to their intuitive nature and ease of implementation, allowing users to navigate through vocal commands while the system intelligently manages collision avoidance [50,51]. The objective of semi-automatic wheelchair systems is to assist users as needed by following their intentions from joystick commands and adjusting control signals to ensure safety [47,54]. In these implementations, users actively participate in controlling the wheelchair, and the system intervenes only when necessary to prevent collisions or ensure safe navigation. Recent work has also examined how to improve the efficiency of EEG-based decision-making in wheelchair control; for example, ref. [55] simulates a Pac-Man-style indoor navigation task operated solely through thought commands using a visual oddball paradigm.
  • Environment perception systems utilize deep learning for semantic segmentation and object recognition to assist users in navigating both indoor and outdoor environments [56]. Visual surveying approaches have been developed for autonomous corridor following and doorway passage, using computer vision to extract environmental features such as vanishing points and wall boundaries [57]. These systems demonstrate the ability to navigate safely while enhancing mobility for users with visual or cognitive impairments. Hybrid control architectures, where control is shared between the user and the autonomous system, have gained significant traction [24,58,59,60]. In hybrid architectures, learning-based algorithms typically integrate both sensor inputs, such as LiDAR and cameras, and user-intent signals (for example, joystick, voice, or EEG) to enable the controller to adapt to dynamic environments. Figure 2 represents a simple hybrid model formation. As shown, the control mode selector decides whether the user or the intelligent controller will control the wheelchair. This control mode selection can be made manually based on the user’s intent, or it can be decided autonomously. In an automated hybrid approach, a trained model continuously updates the shared control ratio between the human and autonomous subsystems, ensuring smoother transitions and reducing delays in decision-making (a minimal blending sketch is given after this list). Research has shown that hybrid navigation decision control mechanisms combining user commands with autonomous navigation improve safety and reduce cognitive load by intelligently switching control based on environmental and wheelchair parameters [24,60].
    Comparative studies have evaluated different control paradigms, including signal filtering, signal blending, and autonomy switching, demonstrating the importance of matching control strategies to user capabilities and preferences [24].
  • Multimodal interfaces combine various input methods (e.g., human signals, various sensors, camera, LiDAR, radar) to provide flexible and robust control options. Meena et al. [61] proposed a multimodal interface that addressed the “Midas touch” problem in gaze-controlled wheelchairs by integrating additional input modalities. Multimodal systems are becoming increasingly popular due to their enhanced reliability and accuracy, as systems can make decisions based on multiple factors. Recent implementations integrate occupant, wheelchair, and environment sensing to support multimodal control [62]. For example, Cui et al. designed an IoT-enabled wheelchair that integrates a PAJ7620 gesture sensor for hand movements, GPS and inertial sensors for location and orientation, and LiDAR plus environmental sensors (temperature, humidity, and light) for road and climate awareness [62]. Their platform allows users to steer with a conventional handle, perform gestures, or issue commands via a mobile app, with sensor data transmitted over MQTT so caregivers can monitor or control the chair remotely [62]. Such multimodal architectures illustrate how modern systems fuse user perception, state estimation, and environmental awareness. To reduce cognitive load and enable safe navigation, these systems increasingly adopt shared-control strategies. Occupant state detection can rely on head-pose or eye tracking, as well as EMG or EEG signals, to infer user intent [51,52,62]. At the same time, onboard sensors (inertial, LiDAR, vision, range) support fully autonomous navigation. Shared-control frameworks use MPC to blend joystick commands with sensor inputs, adjusting the weighting between user and autonomous plans to ensure collision-free trajectories while respecting user intent [40,41,60,63]. Agent-based architectures have also been proposed, utilizing multiple specialized agents for tasks such as voice recognition, EEG analysis, collision detection, and accelerometer monitoring to enable comprehensive wheelchair control [51]. These examples highlight the rich spectrum of control signals and software frameworks, from joysticks, voice commands, and gestures to EEG-driven interfaces, MPC, and neuro-fuzzy algorithms [50]. More problem-specific research was presented in [64], where a health-monitoring setup was incorporated into the intelligent wheelchair system to provide live health data and alerts. Motivated by the experience of COVID-19, researchers equipped the smart wheelchair with mobile application-based health monitoring, and the results showed that the system successfully monitored and transmitted the assigned patient’s condition.
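The following minimal sketch illustrates the shared-control blending idea referenced in the list above. The linear blend and the distance-based authority rule are simplified assumptions for illustration, not the MPC formulations of [40,41,60,63].

```python
import numpy as np

def shared_control(u_user, u_auto, alpha):
    """Blend user and autonomous velocity commands.
    alpha in [0, 1]: 0 = full user control, 1 = full autonomy.
    A trained model or safety monitor would update alpha online."""
    return (1.0 - alpha) * np.asarray(u_user) + alpha * np.asarray(u_auto)

def authority_from_risk(min_obstacle_dist, d_safe=0.5, d_free=2.0):
    """Raise autonomy smoothly as the nearest obstacle gets closer
    (illustrative distance thresholds in meters)."""
    return float(np.clip((d_free - min_obstacle_dist) / (d_free - d_safe),
                         0.0, 1.0))

# Example: user pushes forward, planner suggests veering left
u = shared_control(u_user=[0.8, 0.0], u_auto=[0.4, 0.6],
                   alpha=authority_from_risk(min_obstacle_dist=1.0))
```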

3.2.2. Unsupervised Learning

Unsupervised learning models do not require prelabeled data; instead, patterns in the data are analyzed to make the correct decision. For example, in independent component analysis (ICA), a multivariate signal is separated into independent non-Gaussian components. This technique has been applied in [65] to detect unusual sitting postures or movements based on camera data.
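As a hedged illustration of the ICA idea (not the actual pipeline of [65]), the sketch below separates synthetic multivariate posture features into independent components with scikit-learn and flags frames with atypical component activity.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Stand-in data: 6 posture features per frame; Laplace noise is non-Gaussian,
# which is the setting ICA assumes.
rng = np.random.default_rng(0)
X = rng.laplace(size=(1000, 6))

ica = FastICA(n_components=3, random_state=0)
S = ica.fit_transform(X)                 # independent, non-Gaussian components

# Flag frames whose components deviate strongly from their typical range
z = np.abs((S - S.mean(axis=0)) / S.std(axis=0))
unusual_frames = np.where(z.max(axis=1) > 4.0)[0]
```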

3.2.3. Reinforcement Learning in Intelligent Wheelchairs

A more recent yet less explored branch of data-driven approaches is reinforcement learning (RL). RL is a reward-and-penalty-based approach that penalizes or rewards an action based on the observed states of an environment [66]. Such an approach is very effective in highly nonlinear and high-dimensional systems where finding an accurate mathematical model is a daunting task.
Figure 3 shows the steps of a basic model-free RL algorithm. Common components of an RL algorithm include the observation space $s_t$ (state variables that the agent can observe), the action space $a_t$ (possible actions that the agent can take), and the reward function $R_t$ (designed by control engineers to guide learning). As shown in Figure 3, the agent takes an action $a_t$ and receives feedback in the form of a reward $R_t$ and a new state $s_{t+1}$. The agent stores this collected data to learn the system dynamics. In model-free RL, since the transition model is unknown, the agent starts learning it from data collected through some initial random actions. Over time, the agent learns the optimal policy $\pi^*$ through trial and error via policy-based or value-based methods, arriving at an optimal policy through Q-function optimization or methods such as policy iteration and value iteration. Thus, without prior knowledge, RL can adapt to the nonlinearity of a high-dimensional system through continuous interaction with the environment (plant).
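The agent-environment loop in Figure 3 can be illustrated with tabular Q-learning, one of the simplest value-based, model-free methods. The toy one-dimensional corridor environment and all hyperparameters below are illustrative.

```python
import numpy as np

n_states, n_actions = 10, 2              # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1       # learning rate, discount, exploration

def step(s, a):
    """Toy environment: +1 for reaching the rightmost cell, small cost otherwise."""
    s_next = max(0, min(n_states - 1, s + (1 if a == 1 else -1)))
    r = 1.0 if s_next == n_states - 1 else -0.01
    return s_next, r, s_next == n_states - 1

for episode in range(500):
    s, done = 0, False
    while not done:
        # Epsilon-greedy action selection
        a = np.random.randint(n_actions) if np.random.rand() < eps \
            else int(Q[s].argmax())
        s_next, r, done = step(s, a)
        # Value-based update toward the Bellman target
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

policy = Q.argmax(axis=1)                # greedy policy after learning
```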
Modern simulation frameworks (MATLAB, Unity, Gazebo, CARLA) and rich datasets available from various web sources (Roboflow, Kaggle, Stanford Vision Lab, etc.) have not only improved the training of RL agents but also enhanced model efficiency. Advanced simulation frameworks allow researchers to develop more realistic and reproducible test benches, with which they can emulate critical real-world scenarios while avoiding physical damage. Including critical scenarios in the training phase enables an RL controller to work more efficiently in real-world applications. For example, researchers in [67] utilized the Unity platform to design a prosthetic hand and an automated RL-based hand actuation controller. Likewise, in [68], researchers demonstrated the design and implementation of an intelligent wheelchair using a soft actor-critic (SAC)-based RL approach on the MATLAB Simulink platform. Successful training leading to consistent validation performance also depends on the diversity, size, and accuracy of the dataset used during training; for instance, in [69], a Kaggle dataset was used to train an RL agent to prescribe dynamic treatment strategies. Leveraging these technologies, researchers may engage model-free reinforcement learning algorithms in designing more efficient and accurate RL controllers for highly nonlinear, high-dimensional systems such as an intelligent wheelchair.
In a recent work [68], researchers proposed an SAC-based RL controller that was able to follow a predefined pathway with good accuracy. Deep reinforcement learning approaches have also been used to generate collision-free mappings, demonstrating that model-free RL algorithms can be trained to navigate wheelchairs to target locations based on real-time sensor data alone [70]. These approaches show promise for autonomous navigation in dynamic environments without requiring explicit environmental models. The user posture adaptation capabilities of RL have also been explored, with systems learning optimal seating configurations based on user preferences [71]. All the mentioned works reflect the potential of RL controllers in dealing with highly nonlinear systems. However, designing a proper reward function that ensures the algorithm learns in a meaningful way within time constraints remains a challenge to be addressed.
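To illustrate why reward design matters, the sketch below shows one hypothetical shaped reward for path following. The terms and weights are assumptions for illustration, not those used in [68] or [70].

```python
def path_following_reward(cross_track_err, heading_err, speed, collided):
    """Hypothetical shaped reward for wheelchair path following."""
    if collided:
        return -100.0                     # large penalty; episode would end here
    r = 1.0                               # alive bonus encourages progress
    r -= 2.0 * abs(cross_track_err)       # stay close to the reference path
    r -= 0.5 * abs(heading_err)           # point along the path
    r += 0.2 * speed                      # mild incentive to keep moving
    return r

r = path_following_reward(cross_track_err=0.1, heading_err=0.05,
                          speed=0.6, collided=False)
```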
To provide a clearer understanding of how various major control algorithms differ in terms of their operating principles, experimental validation, and application scope, a comparative summary is presented in Table 3. It also emphasizes the trade-offs between simplicity, adaptability, and computational complexity that guide the choice of control strategies in real-world wheelchair implementations.
To complement the qualitative discussion quantitatively, a comparison of performance metrics for PID, fuzzy PID, MPC, eMPC, and RL-based controllers on intelligent wheelchairs would have been a good strategy. However, direct numerical comparison remains difficult, since the studies employ different tasks, environments, and implementation settings (simulation or hardware). For example, Kim et al. [73] presented an intelligent wheelchair with vision and ultrasonic sensors. The authors compared an online occupancy grid mapping approach with two learning-based path recommendation approaches (NN and SVM). They trained their data-driven models on 80,000 real indoor-outdoor images, where the SVM-based classifier achieved path recommendation accuracies of 88% indoors and 92% outdoors; the NN variant reached 83.8% and 89%, while a VFH (Vector Field Histogram) baseline reached only 68% and about 69%. In [37], the authors implemented a PID controller that increased the PWM output when the seat deviated from 3° and decreased it as the seat approached the setpoint. In [35], the authors reported object detection latency of 0.52 ms in simulation and 0.5-1 s in real tests. The authors of [74] proposed a robust door-passing strategy and validated their approach with 10 trials starting from different positions and with different heading angles; the proposed strategy successfully completed all trials. In [68], researchers proposed an RL-based controller that successfully tracked a predefined path; in their experiment, the authors compared an optimized PID controller against an SAC-based RL controller and showed that the RL controller completed all the paths (the training path and four different validation paths) in comparatively less time. Hence, there is no universal performance metric applicable to a wheelchair’s performance given the differing research objectives and methodologies. In general, however, model-driven controllers (PID, MPC) achieve low tracking error and fast convergence in well-modeled scenarios, whereas data-driven controllers show improved performance in highly nonlinear or cluttered environments at the cost of higher computational requirements.

3.3. Hierarchical Taxonomy

In this review, we describe "intelligent wheelchair control" within a three-layer control hierarchy that is common in control engineering (a minimal structural sketch in code follows the list):
  • Low-level control: controllers that act directly on motors and actuators to regulate velocities, torques, and positions. Typical methods include PID and related classical controllers, model predictive control for trajectory tracking, and local stability oriented designs.
  • Mid-level control: controllers responsible for path planning, obstacle avoidance, localization, and dynamic safety margins. These modules transform user commands into feasible, collision-free trajectories using sensor fusion, optimization-based MPC, and various data-driven methods for perception and decision-making.
  • High-level control: algorithms that interpret user intent, allocate authority between user and autonomy, and manage human–robot interaction. Examples include brain–computer interfaces, gaze and gesture-based interfaces, shared control schemes, and reinforcement learning frameworks that adapt behavior to user preferences and context.
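The minimal sketch below, with hypothetical class and method names, shows how the three layers compose into one control cycle; the placeholder return values stand in for real planners and intent decoders.

```python
class LowLevelController:
    """Regulates motors, e.g., via PID/MPC loops (placeholder behavior)."""
    def track(self, v_ref, omega_ref):
        print(f"tracking v={v_ref} m/s, omega={omega_ref} rad/s")

class MidLevelPlanner:
    """Turns a goal plus sensing into a feasible velocity reference."""
    def plan(self, goal, scan):
        return 0.3, 0.0          # placeholder (v_ref [m/s], omega_ref [rad/s])

class HighLevelInterface:
    """Decodes user intent and allocates authority (placeholder logic)."""
    def intent(self, joystick, gaze):
        return "kitchen"         # placeholder goal

def control_cycle(hi, mid, low, sensors):
    goal = hi.intent(sensors["joystick"], sensors["gaze"])
    v_ref, omega_ref = mid.plan(goal, sensors["lidar"])
    low.track(v_ref, omega_ref)

control_cycle(HighLevelInterface(), MidLevelPlanner(), LowLevelController(),
              {"joystick": (0.0, 0.0), "gaze": None, "lidar": []})
```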
This hierarchical taxonomy shown in Figure 4 distinguishes our review from existing surveys [12,15,16], which often group contributions primarily by sensing modality, interface type, or application scenario. Here, we aim instead to provide a structured map for control strategies that clarifies how different algorithms and system architectures contribute at each level of the control stack and how emerging intelligent approaches can be integrated into existing wheelchair platforms.

4. Key Technologies and Components

4.1. Sensors and Perception Systems

Modern-day sensors and perception systems are instrumental in executing various intelligent actions. They not only enable the system to effectively understand and interact with its surroundings but also provide feedback once an action is taken. As shown in Figure 1, system outputs are first measured using a set of sensors. The measured data are then utilized by the perception system to estimate the full state $\hat{x}$ or the required states $\hat{y}$ needed to make the system minimally realizable. The selection of sensor type in an intelligent system is primarily determined by the nature of the variable to be measured, the accuracy and update rate required by the control loop, the characteristics of the operating environment, and the sensing range needed for safe navigation or interaction. Practical considerations such as cost, power consumption, robustness, and ease of integration also play vital roles. For safety-critical platforms such as intelligent wheelchairs, sensor selection must further consider medical safety standards, reliability, and user comfort. Together, these criteria ensure that the chosen sensor provides the required information with sufficient fidelity for the system’s perception and control tasks.

In addition to the classical ultrasonic and infrared sensors, recent intelligent wheelchairs make use of laser-based ranging (2D and 3D LiDAR), RGB-D cameras, IMUs, GPS receivers, and even sonar or radar modules [40,71,75,76]. They also incorporate sophisticated perception software such as sensor fusion, computer vision, machine learning, and brain-computer interfaces. For instance, recent work in [77] demonstrates a multiple-control wheelchair in which a Raspberry Pi executes a deep learning-based real-time object detection framework using OpenCV to support omnidirectional navigation and IoT-enabled monitoring. Based on our synthesis of recent studies, control signals and software frameworks vary widely: manual joysticks and voice control remain common [23], but many systems also rely on vision-based cues [57,76], LiDAR scans [59,71], and shared control paradigms [60,63]. For example, offline trajectory computation combined with LiDAR or ultrasonic feedback can plan safe paths [47]; visual servoing uses vanishing points and wall angles for corridor following [57]; fuzzy potential and model predictive control blend user commands with autonomous planners [40,41]; and brain-computer or multimodal interfaces decode EEG, EMG, head, or eye movements to enable users with severe disabilities to drive [50,51,52,62]. These diverse approaches informing sensor and perception system design are summarized in Table 4 and Table 5.

Figure 5 shows how these two subsystems work together in an intelligent controller. The sensors primarily collect data capturing the continuous interaction between the system and the environment. Based on the collected data, perception algorithms identify the system’s state, localize objects, interpret user or controller commands, and track path deviation. A properly calibrated sensor and perception module provides essential feedback to the controller, which then actuates motion commands to navigate autonomously.
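As a simple instance of the sensor fusion techniques listed above, the sketch below implements a complementary filter that fuses gyroscope integration (smooth but drifting) with an absolute heading source such as a magnetometer (noisy but drift-free). The blending weight and rates are illustrative assumptions.

```python
def complementary_heading(heading_prev, gyro_rate, mag_heading, dt, k=0.98):
    """Fuse a drifting gyro estimate with a noisy absolute heading.
    k close to 1 trusts the smooth gyro short-term; (1 - k) corrects drift."""
    gyro_est = heading_prev + gyro_rate * dt   # dead-reckoned heading [rad]
    return k * gyro_est + (1.0 - k) * mag_heading

# Example: 50 Hz updates during a slow 0.1 rad/s turn
heading = 0.0
for _ in range(100):
    heading = complementary_heading(heading, gyro_rate=0.1,
                                    mag_heading=0.2, dt=0.02)
```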

4.2. Actuators and Mechanics

The most commonly utilized actuators in wheelchairs are DC motors, which are widely used in industrial applications for their simplicity, reliability, and ease of control. Such motors provide forward and backward motion and use differential wheel speeds to produce turning motion as well. However, efficient and precise speed and direction control remains a key challenge, especially under dynamic load conditions. Researchers in [91] used two DC motors with gear adjustment and rotary encoders; the setup provided high torque output and simple installation, offering a robust and reliable way to control the wheelchair’s movement. Design innovations in wheel configurations, such as omnidirectional wheels with suspension mechanisms, have been explored to reduce slip and vibration while enabling more flexible maneuverability [23].
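The differential-speed concept mentioned above maps a desired body velocity and turn rate to individual wheel speeds. The sketch below uses standard differential-drive kinematics; the wheel radius and track width are illustrative values.

```python
def diff_drive_wheel_speeds(v, omega, wheel_radius=0.15, track_width=0.55):
    """Map body velocity v [m/s] and yaw rate omega [rad/s] to left/right
    wheel angular speeds [rad/s] (illustrative geometry)."""
    v_l = v - omega * track_width / 2.0   # left wheel linear speed
    v_r = v + omega * track_width / 2.0   # right wheel linear speed
    return v_l / wheel_radius, v_r / wheel_radius

# Equal speeds drive straight; unequal ("differential") speeds turn the chair
wl, wr = diff_drive_wheel_speeds(v=0.8, omega=0.4)
```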
Apart from the wheel motors, other kinds of actuators are also used, for example, linear actuators for adjusting the seating and footplate positions [92,93]. These actuators convert a motor’s rotational motion into a linear push-and-pull movement. Another example is a stair-climbing wheelchair that employs a kinematic mechanism design to switch locomotion modes and adjust its pose with changing terrain, using linear actuators for this purpose [94]. Advanced robotic standing wheelchairs have been developed with unified control schemes managing both platform mobility and standing actions, requiring sophisticated actuator coordination [95].

4.3. Human–Machine Interface (HMI)

In recent times, many researchers have worked on human-machine interface (HMI)-based intelligent wheelchairs, utilizing signals such as brain signals, tongue movement, hand gestures, and eye movement. Voice-controlled wheelchairs have gained increasing popularity due to their simple construction and easy implementation. Iskanderani et al. [96] introduced a voice-controlled intelligent wheelchair that allowed users to navigate with vocal commands. The proposed system intelligently exchanged control between the user and the wheelchair to avoid collisions and ensure safety. Similar voice recognition systems utilizing adaptive neuro-fuzzy inference systems (ANFIS) have demonstrated effective motor control while processing voice commands and sensor data simultaneously [50]. Agent-based architectures have also integrated voice recognition modules alongside other control modalities to provide comprehensive control options [51].
Hand gesture recognition has been explored as a control method for intelligent wheelchairs. Souza et al. [64] used IMU and EMG sensors to detect hand gestures, enabling users to command their wheelchairs. This approach is particularly beneficial for users who may find voice control challenging due to speech impairments. Multimodal sensing systems have extended gesture control capabilities by integrating PAJ7620 gesture sensors with other input modalities, such as handles and mobile app commands [62].
Gaze-based control systems represent another significant advancement in wheelchair interfaces. Eye tracking technology combined with augmented reality visualization enables efficient semi-autonomous navigation for domestic use, with systems accounting for uncertainties in gaze detection to ensure safe operation [52]. Brain–computer interfaces (BCIs) using EEG signals have been developed to enable wheelchair control for users with severe motor disabilities, with agent-based systems processing neural signals to infer user intentions [97].
Another alternative control system, for users with dexterity disabilities who cannot operate conventional interfaces, was developed in [25], exploring various input methods beyond traditional joysticks. Shared control paradigms, in which control is distributed between the user and the autonomous system, have also gained traction recently [58]. Hybrid navigation decision control mechanisms combine user commands with autonomous navigation. In more recent work, force feedback joysticks have been integrated into shared control architectures to provide haptic cues to users, improving situational awareness and control precision [60].

5. Applications and Functionalities of Intelligent Wheelchairs

5.1. Automatic Navigation and Path Planning

Navigation and path planning are crucial features of intelligent wheelchairs, enabling autonomous travel through environments while avoiding obstacles. Path planning refers to the initial phase of finding a collision-free path, and its effectiveness relies on sensor feedback, algorithm performance, and computational models. Researchers have proposed combinations of manual and automatic control, using regular vision to detect target objects and incorporating computer vision-based local path planning methods inspired by angular navigation and angle potential fields [98]. Advanced localization techniques have been developed specifically for crowded and narrow environments: improved particle filter algorithms enable accurate localization even in dynamic environments where traditional methods fail [47]. These systems integrate multiple path planning approaches, including A* algorithms for global planning and local curve methods for navigating narrow spaces such as doorways [57]. Visual tracking and navigation fusion approaches have demonstrated robust performance in complex environments; semi-supervised online boosting enables reliable target tracking, while $\eta^3$-spline path planning ensures smooth obstacle avoidance trajectories [59]. Convolutional neural networks combined with depth cameras have been applied for real-time object detection and navigation strategy development, providing effective autonomous driving capabilities in indoor environments [86]. Fuzzy potential model predictive control (FPMPC) has been applied to autonomous navigation in crowded environments, utilizing Monte Carlo optimization to handle discontinuities and achieve high-quality solutions; these systems predict future obstacle positions and dynamically replan trajectories to ensure safe navigation [40].

From a deployment perspective, closed-loop control of autonomous wheelchairs must typically run with update periods on the order of a few tens of milliseconds. In practice, this corresponds to control-loop frequencies in the tens of hertz, while common sensors such as 2D/3D LiDAR and RGB-D cameras provide measurements at roughly 10-30 Hz. This budget is compatible with comfortable indoor driving speeds and is consistent with real-time implementations [40,41]. Laser Data Receivers (LDR) acquire environmental data to build maps and identify obstacles [61], and Spatial Information Processors (SIP) calculate distances to obstacles, angles toward obstacles relative to the navigation direction, and other relevant parameters. Popular path planning and navigation approaches include graph-based path planning [99], probabilistic roadmaps [100], deep reinforcement learning [101], and simultaneous localization and mapping (SLAM) [71,102]. The gmapping package, combined with the navfn algorithm for global planning and the elastic band technique for local optimization, has shown effective performance in autonomous wheelchair systems [71]. A related implementation is presented in [103], where the authors integrate STM32-based motor control with a Raspberry Pi to achieve autonomous navigation using the Gmapping SLAM algorithm. Natural language-controlled navigation represents an emerging frontier in wheelchair autonomy; comprehensive surveys have identified various approaches for interpreting user commands, ranging from simple metric instructions to complex route descriptions and intention-based control [90].
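As a concrete instance of the graph-based global planners cited above, the sketch below implements A* on a small occupancy grid with a Manhattan heuristic. Real systems would plan over SLAM-built maps rather than this hand-coded grid.

```python
import heapq

def astar(grid, start, goal):
    """A* over a 4-connected occupancy grid (1 = obstacle, 0 = free)."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, None)]
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, cur, parent = heapq.heappop(open_set)
        if cur in came_from:                 # already expanded with better cost
            continue
        came_from[cur] = parent
        if cur == goal:                      # reconstruct path back to start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0
                    and g + 1 < g_cost.get(nxt, float("inf"))):
                g_cost[nxt] = g + 1
                heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt, cur))
    return None                              # goal unreachable

# Toy 3x3 map with a wall across the middle row
path = astar([[0, 0, 0], [1, 1, 0], [0, 0, 0]], start=(0, 0), goal=(2, 0))
```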
Autonomous docking capabilities have been developed to enable wheelchairs to automatically connect with transport systems, expanding their utility in institutional settings [104].

5.2. Obstacle Avoidance

Obstacle avoidance is an essential feature of modern automatic wheelchairs, allowing them to guide themselves autonomously without human intervention and navigate planned paths safely. Researchers have demonstrated sensor fusion combining laser sensors and cameras [105]; their proposed models detect obstacles and navigate environments autonomously by combining laser and optical flow data to build virtual barriers for fully independent obstacle detection and navigation. Advanced path planning algorithms compute waypoints, desired speeds, and attitudes. Systems using modified hybrid reciprocal velocity obstacle (HRVO) algorithms autonomously avoid obstacles while considering wheelchair dynamics, including user influence on motion [106]. Model predictive control approaches incorporate obstacle avoidance constraints directly into optimization formulations, enabling real-time collision avoidance while maintaining passenger comfort [40,41]. Multiple control paradigms have been compared for their effectiveness in obstacle avoidance, including signal filtering, signal blending with immediate goals, and autonomy switching strategies [24]. Camera-based systems utilizing YOLOv7 for object detection have been implemented for autonomous wheelchair movement, demonstrating effective obstacle recognition and avoidance in real-world scenarios [76]. These vision-based approaches complement LiDAR and ultrasonic sensing to provide comprehensive environmental awareness.
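A minimal reactive scheme conveys the flavor of scan-based avoidance; it is far simpler than the HRVO or MPC formulations cited above, and the thresholds and speeds are illustrative assumptions.

```python
import numpy as np

def reactive_steer(ranges, angles, v_max=0.8, d_stop=0.4, d_slow=1.5):
    """Head toward the direction with the most free space and slow down
    as the obstacle straight ahead gets closer (illustrative thresholds)."""
    best = int(np.argmax(ranges))                 # most open direction
    heading = angles[best]
    d_front = ranges[np.argmin(np.abs(angles))]   # range straight ahead
    if d_front < d_stop:
        return 0.0, heading                       # stop, rotate toward free space
    v = v_max * min(1.0, (d_front - d_stop) / (d_slow - d_stop))
    return v, heading

# Synthetic 180-degree scan with an obstacle directly ahead
angles = np.linspace(-np.pi / 2, np.pi / 2, 181)
ranges = np.full(181, 3.0)
ranges[80:101] = 0.6
v_cmd, heading_cmd = reactive_steer(ranges, angles)
```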

5.3. Decision-Making and User-Intent Recognition

Once a collision-free path is planned, the next phase is to follow the intended path; effective path-following controllers help maintain preferred trajectories. Researchers have proposed soft actor-critic RL controllers that follow defined paths, comparing results with genetic algorithm-optimized PID controllers and demonstrating higher accuracy though lower energy efficiency [68]. User-intent recognition has been extensively studied across multiple modalities. EEG-based brain-computer interfaces enable control for users with severe motor disabilities by interpreting brain activity patterns [51,52]. Agent-based architectures have integrated multiple analysis modules, including voice recognition, EEG interpretation, collision detection, and accelerometer analysis, to comprehensively decode user intent [51]. Adaptive neuro-fuzzy inference systems (ANFIS) have been applied to process voice commands combined with sensor data, generating appropriate motor control signals while ensuring obstacle avoidance [50]. Gaze-driven control systems demonstrate advanced intent recognition by interpreting eye movements to control wheelchair navigation; systems have successfully implemented direct drive control via gaze positioning and semi-autonomous control through point-of-interest selection, accounting for uncertainties in gaze detection to ensure safe operation [52]. Comparative studies examining different control paradigms and interfaces have revealed that user performance, effort, and preference vary significantly based on the control strategy employed and the user’s specific capabilities [24]. This research emphasizes the importance of adaptive systems that can accommodate individual user needs and preferences.

5.4. Safety and Reliability Features

Sensor fusion and multimodal interfaces are becoming increasingly important for wheelchair safety and reliability. Multimodal interfaces combine various input methods (human signals, sensors, cameras, LiDAR, radar) to provide more robust control options [51,61,62]. Meena et al. addressed the “Midas touch” problem in gaze-controlled wheelchairs by integrating additional input modalities [61]. Multimodal systems are becoming popular because they make systems more reliable and accurate, allowing decisions based on multiple factors. IoT-enabled wheelchairs integrate comprehensive sensor suites, including gesture sensors, GPS, inertial sensors, LiDAR, and environmental monitoring capabilities [62]. These systems enable real-time remote monitoring by caregivers and adaptive control based on environmental conditions. Collision avoidance agents continuously analyze proximity sensor data to prevent accidents and take control when necessary to ensure user safety [51]. Shared control strategies explicitly prioritize safety by blending user commands with autonomous planners. Model predictive control approaches incorporate safety constraints directly into optimization formulations, ensuring collision-free trajectories while respecting user intent [40,41,60]. Force feedback mechanisms provide haptic cues to users when approaching obstacles or unsafe conditions, enhancing situational awareness without completely overriding user control [60]. Experimental assessments of different control paradigms have demonstrated that systems incorporating autonomy switching with high-level goals can significantly reduce collision risks while maintaining acceptable user satisfaction [24]. The integration of unified control schemes for robotic standing wheelchairs ensures stability during transitions between sitting and standing positions, addressing critical safety concerns for rehabilitation applications [95]. Health monitoring integration represents an emerging safety feature, with systems incorporating vital sign monitoring and alert capabilities to track user wellbeing during operation [64].

5.5. Human–Robot Interaction

Human–robot interaction in intelligent wheelchairs extends beyond basic control to encompass comprehensive communication and adaptation. Natural language interfaces enable intuitive high-level command interpretation, allowing users to specify destinations, routes, and navigation preferences using everyday speech. Comprehensive surveys of natural language-controlled wheelchairs have identified key challenges in language understanding, spatial reasoning, and dialogue management [90]. Multimodal interaction frameworks combine voice, gesture, gaze, and physiological signals to provide flexible communication channels suited to individual user capabilities [51,52,62]. Agent-based architectures facilitate sophisticated human–machine interaction by coordinating multiple specialized processing modules for different input modalities [51]. Adaptive systems learn user preferences over time, adjusting control parameters and interface characteristics to match individual needs. Reinforcement learning approaches have been applied to optimize user posture preferences, demonstrating the potential for long-term personalization [71]. Neuromorphic integration enables adaptive control of wheelchair-mounted robotic arms with online learning capabilities, extending functionality beyond mobility to manipulation tasks [88]. Design considerations for users with dexterity disabilities have led to alternative control systems that accommodate varying levels of motor function [25]. These systems recognize that one-size-fits-all approaches are insufficient and that personalization based on individual capabilities is essential for effective human–robot interaction.

6. Current Limitations and Future Directions

Ongoing research on intelligent wheelchairs shows promising advancements; however, several limitations must be addressed to enhance reliability, functionality, and user experience. By exploring sensor fusion, optimizing controllers, integrating outdoor adaptability, managing energy efficiently, and incorporating IoT, future researchers can develop more advanced and user-friendly intelligent wheelchair systems.

6.1. Dynamic Scene

6.1.1. Limitation

Much of the intelligent wheelchair research has focused on adapting to predefined environments only [47,57,61,71,99]. A significant limitation of these works is that they overlook the effectiveness of the designed system in dynamic or unknown environments, particularly under varying lighting conditions and other environmental factors. Object detection using computer vision may struggle with localization in environments that lack distinct features [57]. In [34], researchers utilized a data-driven system identification approach to obtain a nonlinear model of wheelchair dynamics; they selected an NLARX structure based on real input-output measurements and validated their GA-optimized PID controller both in simulation and on a real-world prototype. However, such system identification-based approaches are highly dependent on the collected data and may be vulnerable to conditions absent during data collection.

6.1.2. Future Direction

Future research should focus on designing intelligent wheelchairs capable of operating in dynamic or unpredictable situations. Adaptive frameworks such as reinforcement learning (RL) can be employed to capture changes in lighting, moving obstacles, and environmental variations in real time. RL uses online learning to facilitate continuous adaptation and would enable the system to improve its performance as it encounters new environments. Furthermore, to expand the wheelchair’s utility beyond predefined indoor spaces, controllers can be trained on complex outdoor scenarios such as road damage detection, road corner detection, and speed breaker detection to handle real-world mobility challenges. An interesting opportunity for future research is recognizing far-field outdoor objects, i.e., detecting and identifying objects located at a considerable distance from the sensor [107].

6.2. Energy Management

6.2.1. Limitation

Recent research has explored sophisticated sensors and algorithms such as neural networks for features including object detection, obstacle avoidance, and autonomous navigation. Although the integration of numerous sensors and complex algorithms enhances system robustness, it significantly increases energy consumption. While advanced Energy Management Strategies (EMS) have been proposed for powertrains [108], such strategies have hardly been explored in intelligent wheelchairs. In [68], the authors proposed an intelligent wheelchair that optimized energy consumption using a genetic algorithm-optimized PID controller. However, their methodology was validated only in simulation, and they did not extensively study energy consumption from an active source. Furthermore, the balance between computational complexity, control performance, and energy efficiency requires further investigation [40,41]. In conventional powered wheelchairs, the traction motors dominate energy consumption, with an average electrical power on the order of 100–300 W during normal indoor driving and substantially higher peaks when climbing ramps or traversing rough terrain [109]. In contrast, sensing and computation usually consume around 5–30 W. Although the motor power is higher, continuous operation of the sensing and computation stack can noticeably reduce driving range.
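A back-of-envelope calculation illustrates this effect. The battery capacity, motor load, and sensing load below are illustrative values chosen from the ranges cited above, not measurements from any specific system:

```python
# Illustrative driving-range estimate (assumed, typical-order numbers only):
# a common 24 V, 50 Ah wheelchair battery holds about 1.2 kWh.
battery_wh = 24 * 50          # 1200 Wh
motor_w    = 200              # mid-range of the 100-300 W traction figure above
sensing_w  = 20               # mid-range of the 5-30 W sensing/compute load

hours_motor_only = battery_wh / motor_w                 # ~6.0 h
hours_with_stack = battery_wh / (motor_w + sensing_w)   # ~5.45 h

print(f"runtime without sensing stack: {hours_motor_only:.2f} h")
print(f"runtime with sensing stack:    {hours_with_stack:.2f} h")
print(f"range reduction: {100 * (1 - hours_with_stack / hours_motor_only):.1f} %")
```

Under these assumptions, a continuously active 20 W perception stack trims roughly 9% off the driving range, which is non-negligible for users who depend on the chair all day.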

6.2.2. Future Direction

One key feature that future researchers should address is the design of efficient and precise speed and direction control algorithms from an energy consumption perspective. In an intelligent wheelchair, most movements are actuated via DC motors, so the control algorithm should minimize energy usage without degrading performance. Moreover, such approaches should be validated through hardware implementation rather than software simulation alone, since real-world nonlinearities (friction, battery characteristics, motor heating, variable loads) often degrade performance compared with ideal simulation. Recently, neuromorphic spiking neural networks (SNNs) have shown promise as an energy-efficient framework for real-time, machine-learning-driven adaptive control, closely mimicking the naturally sparse activation of biological neurons. Ehrlich et al. [88] designed a wheelchair-mounted robotic arm and used an SNN-driven adaptive control mechanism deployed on Intel's Loihi chip. Although SNNs are conceptually energy-efficient, the authors did not report any direct power measurements or any comparison against a methodology optimized for energy consumption. Hence, future work can focus on this research opportunity.

6.3. Sensor Fusion

6.3.1. Limitation

Most intelligent wheelchairs rely heavily on sensor feedback, which makes system functionality vulnerable to sensor malfunction [50,59,61,96,98]. Recent research often depends on specific sensor types (LiDAR, RGB camera, sonar, depth camera) for safe navigation. Simple sensor fusion has been explored, such as combining laser and camera systems [59,105]. However, noisy data injected by a malfunctioning sensor may endanger the user. For example, the researchers in [96] reported that their system's accuracy dropped to 87.5%, and one of their findings was that the Google API they used decoded the command "Right" as "Write". Similarly, object detection in low light, or object duplication due to mirror reflections, can be problematic for an RGB camera in indoor environments such as hospitals or residential buildings. In such scenarios, algorithms can be fed by multiple sensors to make the system less error-prone. Although multi-sensor fusion might enhance system reliability and performance, comprehensive implementations that fuse RGB, thermal, LiDAR, and depth cameras remain limited.

6.3.2. Future Direction

Integration of multiple sophisticated sensors, such as LIDAR, RGB camera, thermal camera, and radar, can revolutionize object detection and navigation systems [62]. Barriuso et al. [51] integrated different sensors such as camera, ultrasound sensor, EEG device, accelerometer to collect data and used a Sensor Gateway Agent to normalize collected data and transmit it to other organizations that request it. Fusing the data from multiple sensors is crucial to maintain system accuracy and reliability, especially in challenging conditions like low-light environments or confounding mirror reflections. While this technology is not novel, it remains underexplored in the context of intelligent wheelchairs. Another effective method that can be explored in future work is transfer learning. Transfer learning can be utilized to adapt a pretrained model (such as a visual or inertial encoder) to new sensor inputs.
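The core benefit of fusion, namely that a combined estimate can be more trustworthy than either sensor alone, can be shown with a minimal static one-dimensional example. The sketch below fuses two noisy range readings weighted by inverse noise variance; the sensor values and variances are hypothetical, and full EKF/UKF pipelines such as those compared in [84] extend the same principle to dynamic, multi-dimensional states.

```python
def fuse(z1, var1, z2, var2):
    """Minimum-variance fusion of two noisy range readings (static 1-D case).
    Each sensor is weighted by the inverse of its noise variance."""
    w1 = var2 / (var1 + var2)
    w2 = var1 / (var1 + var2)
    z_fused = w1 * z1 + w2 * z2
    var_fused = (var1 * var2) / (var1 + var2)   # always <= min(var1, var2)
    return z_fused, var_fused

# Hypothetical readings: LiDAR is precise; an RGB-D estimate degrades in low light.
lidar_range, lidar_var   = 2.05, 0.01    # metres, metres^2
camera_range, camera_var = 2.30, 0.25

dist, var = fuse(lidar_range, lidar_var, camera_range, camera_var)
print(f"fused distance: {dist:.3f} m (variance {var:.4f})")
```

The fused variance is always below that of the best individual sensor, so even a degraded camera reading still contributes information instead of simply being discarded.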

6.4. Higher Dimensionality

6.4.1. Limitation

Traditional wheelchairs often suffer from slippage and vibration caused by inefficient mechanical design, including suboptimal suspension systems and motor–wheel assemblies [23]. The integration of actuators for standing functionality further increases the dimensionality of the problem [95]. In such cases, deriving mathematical models and designing controllers using conventional approaches becomes laborious. Moreover, autonomous docking systems require precise mechanical alignment and control [104]. Few studies have addressed these factors, and designing a controller for such higher-dimensional systems remains a limitation.

6.4.2. Future Direction

RL can be explored here, since it accommodates higher-dimensional problems; model-free RL approaches, in particular, do not require any prior knowledge of the system dynamics. Transfer learning is another interesting prospect in this regard: controllers trained on a specific model can be reused to train for a higher-dimensional, more complex system. Despite progress in individual components (sensing, control, interfaces), the successful integration and control of an intelligent wheelchair with multiple capabilities (navigation, manipulation, health monitoring, communication) remains an open challenge.
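As a minimal sketch of the transfer-learning idea, the snippet below freezes a (hypothetical) pretrained encoder and retrains only a small control head for the new, more complex target system. The layer sizes, data, and supervision signal are placeholders; in practice the encoder would come from the source wheelchair's trained policy and the targets from demonstrations or an RL objective.

```python
import torch
import torch.nn as nn

# Hypothetical pretrained policy: feature encoder + low-level control head.
encoder = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 32))
head = nn.Linear(32, 4)        # e.g., 4 actuator commands on the target system

# Transfer learning: freeze the encoder learned on the source wheelchair,
# retrain only the head for the higher-dimensional target configuration.
for p in encoder.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

features = torch.randn(16, 64)   # stand-in batch of sensor-derived features
targets = torch.randn(16, 4)     # stand-in supervision (e.g., demonstrations)

for _ in range(100):
    pred = head(encoder(features))
    loss = loss_fn(pred, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Because only the small head is trained, far fewer target-system samples are needed than when designing a controller from scratch, which is the main appeal for high-dimensional wheelchair platforms.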

6.5. Computational Requirements

6.5.1. Limitation

Advanced control strategies, such as model predictive control and machine learning-based data-driven approaches, estimate control parameters from large numbers of collected data points and require significant computational resources. The most basic reason why real-time implementation on embedded systems often struggles is that the controller's update rate does not match the sensors' sampling rates [40,41,71]. Real-world hardware has limited processing speed and memory, which hinders proper synchronization with modern high-resolution, high-rate sensors. Moreover, the field lacks standardized evaluation metrics and benchmark environments for comparing different intelligent wheelchair systems; research efforts employ diverse experimental setups, making direct comparison difficult [23,24,47].

6.5.2. Future Direction

In [110], the authors demonstrated how local computation slows down the control loop. In future work, researchers may explore a multirate sensor fusion approach to generate the desired outcome while accommodating slower processors. Another need is to optimize computational demands to enable reliable real-time implementation of advanced control strategies. This implies implementing control and perception algorithms that can run within a few megabytes of memory on microcontrollers or low-power SoCs and still deliver control updates within 10–20 ms even when multiple sensors are active. Approaches such as model compression, quantization, and tinyML-oriented architectures could be used to deploy learning-based controllers on hardware platforms that currently host only classical control laws. At the same time, the field would benefit from more standardized datasets and evaluation protocols. This is crucial because most wheelchair studies rely on customized setups and metrics, which complicates direct comparison among approaches. One possible solution is to develop shared, publicly available datasets collected in different indoor and outdoor scenarios to enable more systematic cross-comparison of results. Future systems can also benefit from new computing paradigms such as edge computing and federated learning. With edge computing, some perception or mapping tasks can be handled by nearby devices instead of the wheelchair itself, which can reduce delay and save battery power. Federated learning can help wheelchairs improve their navigation and control models by learning from each other while keeping all user data stored locally and private. In [111], the authors showed how smart wheelchair control algorithms can benefit from federated learning when active assistance is needed. Another recent study [112] proposed a novel federated learning architecture built on edge computing for resource-constrained IoT devices, which offered five times better performance than existing solutions. Although this was validated only in simulation, future researchers can implement it in practical scenarios.
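A minimal sketch of the multirate idea is given below: a cheap prediction-and-control path runs at every tick, while an expensive perception path runs at a fraction of that rate and softly corrects the state, so slow perception never stalls the control loop. All function bodies and numbers here are placeholders standing in for real estimation, perception, and feedback laws.

```python
import time

CONTROL_DT = 0.01    # 100 Hz control loop (cheap: encoder/IMU dead reckoning)
PERCEPTION_N = 10    # run the expensive vision/LiDAR path every 10th tick

state = {"x": 0.0, "v": 0.0}

def predict(state, dt):
    """Cheap dead-reckoning step a microcontroller can always afford."""
    state["x"] += state["v"] * dt

def perceive_and_correct(state):
    """Stand-in for a slow perception pipeline (camera/LiDAR localization)."""
    measured_x = state["x"] + 0.02                      # hypothetical measurement
    state["x"] = 0.7 * state["x"] + 0.3 * measured_x    # soft correction

def control(state):
    """Fast feedback law; must finish well inside CONTROL_DT."""
    target_v = 0.5
    state["v"] += 0.1 * (target_v - state["v"])

for tick in range(100):
    predict(state, CONTROL_DT)
    if tick % PERCEPTION_N == 0:      # slow sensor path, multirate style
        perceive_and_correct(state)
    control(state)
    time.sleep(CONTROL_DT)            # placeholder for a real-time scheduler
```

The design choice is that missed or late perception updates degrade estimate quality gracefully instead of violating the control deadline, which matches the 10–20 ms update target discussed above.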

6.6. User Safety and Preventing Accidents

6.6.1. Limitation

While past works have explored various shared control strategies and obstacle avoidance algorithms [24,60], they have rarely subjected their designs to safety validation across diverse age groups and environments. In [23], the authors did not include any elderly or disabled users and instead suggested such tests for future work. In fact, very few studies have validated their proposed designs with patient feedback [113], and even those require further validation in larger clinical trials and face potential real-world implementation challenges such as environmental adaptability and user acceptance. In [24], the researchers also conducted user experiments, albeit with limited scope, and did not provide extensive patient testing. In [61], the researchers developed a wheelchair with a hybrid navigation decision control mechanism, providing for user input through vocal commands as well as autonomous navigation capabilities. However, that work overlooked how to balance the user's authority against the intelligent controller's intervention so as to prevent both accidents and excessive constraint of user freedom.

6.6.2. Future Direction

A primary safety requirement is an emergency control loop that stops the wheelchair immediately in case of a main processing unit failure or the failure of any sensor equipment. Moreover, the researchers in [114] have encouraged the use of shared control solutions that acknowledge the user's intention while ensuring a safe means of transportation. To validate the effectiveness and safety of future designs, researchers must move beyond limited lab experiments and include extensive patient testing and larger clinical trials. Apart from technical safety aspects, future research should also propose clear ethical and regulatory standards for intelligent wheelchair control design. Future endeavors can consider [115] as a reference, in which Wangmo et al. propose that intelligent assistive technologies must address broader ethical priorities such as patient autonomy, informed consent, data management, and distributive justice, especially since such devices often collect sensitive health and behavioral data and operate in close proximity to the user. In parallel, standardized regulatory frameworks are needed to ensure safe human–machine interaction during autonomous operation, for example by defining how clinical validation results, application domains, and validation limitations must be communicated to users.
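One common way to realize such an emergency loop is a heartbeat watchdog: the main computer must signal liveness every control cycle, and an independent low-level unit cuts drive power when the signal stops. The sketch below illustrates the pattern; the class, timeout value, and actuator calls are hypothetical placeholders, and in a real system the check would run on a separate safety MCU so that a crash of the main computer cannot disable it.

```python
import time

WATCHDOG_TIMEOUT = 0.2   # seconds; stop if no heartbeat within 200 ms (assumed)

class EmergencyStopLoop:
    """Independent safety loop: halts the drive motors whenever the main
    controller (or a critical sensor pipeline) stops sending heartbeats."""

    def __init__(self):
        self.last_heartbeat = time.monotonic()
        self.stopped = False

    def heartbeat(self):
        # Called by the main processing unit every healthy control cycle.
        self.last_heartbeat = time.monotonic()

    def check(self):
        if time.monotonic() - self.last_heartbeat > WATCHDOG_TIMEOUT:
            self.emergency_stop()

    def emergency_stop(self):
        if not self.stopped:
            self.stopped = True
            self.set_motor_command(0.0)   # cut drive torque
            self.engage_brakes()          # hypothetical actuator call

    def set_motor_command(self, value):   # placeholder: a real system would
        print(f"motor command -> {value}")  # talk to a motor driver (CAN/PWM)

    def engage_brakes(self):
        print("brakes engaged")

loop = EmergencyStopLoop()
loop.heartbeat()
time.sleep(0.3)    # simulate the main unit freezing
loop.check()       # -> emergency stop fires
```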

6.7. Privacy Issues with Data Collection

6.7.1. Limitation

The Internet of Things (IoT) refers to a network of physical devices that are interconnected and embedded with sensors, processing software, and network connectivity [116]. IoT integration enables remote monitoring and data exchange for intelligent wheelchairs [62]. However, little substantial research has been conducted on securing the collected data or addressing privacy concerns. IoT-enabled wheelchairs collect sensitive information, including location data, health metrics, and usage patterns, that requires robust security measures. The integration of brain–computer interfaces and physiological monitoring further amplifies privacy concerns, as these systems access intimate neural and biological data [51,52,82].

6.7.2. Future Direction

Researchers should focus on the implementation of enhanced encryption, such as homomorphic encryption or secure multi-party computation, to ensure data security. Data anonymization protocols can also be used to protect user identity. Another possible solution is to employ decentralized security architectures such as blockchain technology to give users greater control over who accesses their data.
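As a simple illustration of on-device anonymization, the sketch below replaces the user identifier with a keyed hash and coarsens GPS coordinates before any telemetry leaves the wheelchair. The key, field names, and record format are hypothetical, and such a layer complements, rather than replaces, the heavier cryptographic schemes mentioned above.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"device-local-secret"   # hypothetical key; never leaves the chair

def pseudonymize(record, key=SECRET_KEY):
    """Replace the user identifier with a keyed hash and coarsen location
    before any telemetry is transmitted: a basic anonymization layer, not a
    substitute for full homomorphic or multi-party computation schemes."""
    out = dict(record)
    out["user_id"] = hmac.new(key, record["user_id"].encode(),
                              hashlib.sha256).hexdigest()[:16]
    out["lat"] = round(record["lat"], 2)   # ~1 km grid instead of exact GPS
    out["lon"] = round(record["lon"], 2)
    return out

sample = {"user_id": "patient-042", "lat": 42.87113, "lon": -112.44455,
          "heart_rate": 78}
print(json.dumps(pseudonymize(sample)))
```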

6.8. Personalization for Individual Needs

6.8.1. Limitation

One of the major concerns regarding existing autonomous wheelchairs is that over-reliance on autonomous features may erode users' residual motor skills and independence. On the other hand, previous studies have shown that user performance, effort, and preference vary significantly across different control paradigms and interfaces [24], and their authors emphasized the importance of matching control strategies to individual user capabilities. In [117], Zhang et al. demonstrated a setup in which the user controlled the vehicle's start, stop, and directional commands using a steady-state visually evoked potential (SSVEP) brain–computer interface (BCI); the BCI detected the user's intention to turn left, turn right, or go forward based on their focus on specific checkerboards. One limitation of this work is its reliance on precise and deliberate eye closure for stop commands, which can be challenging for some users, especially those with involuntary eye movements or difficulty maintaining sustained eye closure. Similarly, the wheelchair proposed in [61] depends heavily on the user's voice commands. Systems developed for users with severe motor disabilities may differ substantially from those designed for users with partial motor function [25,52,90]. Finally, users may require training sessions before operating intelligent wheelchairs, and the learning curve for advanced control interfaces, particularly BCI and gaze-based systems, requires careful consideration [51,52].

6.8.2. Future Direction

Systems should be designed to support and enhance user capabilities rather than replace them entirely. The appropriate level of autonomy should be adaptable to individual user needs and preferences [24,25], and control parameters, interface characteristics, and autonomy levels should be user-configurable to match individual requirements. Reinforcement learning approaches show promise for learning user preferences over time [71], but more comprehensive personalization frameworks are needed. Model-based reinforcement learning approaches that incorporate system identification and fuzzy reward structures represent promising directions for improving sample efficiency and control performance in nonlinear wheelchair systems [118]. Alternative control systems are essential for users with dexterity disabilities who cannot operate conventional interfaces [25], and natural language interfaces must accommodate varying speech capabilities and communication styles [50,90]. The development of multimodal interfaces that support switching between control modalities (joystick, voice, gesture, gaze, BCI) based on user preference and capability is an important direction; a sketch of such an arbitration layer is given below. An advanced hybrid system design can also be explored in which AI-based modules are trained to predict optimal control actions, or the switching thresholds for handing control between the user and the intelligent controller, by mapping environmental features and user states. However, interface complexity must be managed to avoid overwhelming users with options.
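The following minimal sketch shows one way such modality arbitration could work: each input channel reports a command and a confidence, a per-user capability profile weights the channels, and control falls back to the autonomous planner when nothing is trustworthy. The weights, thresholds, and modality names are hand-coded stand-ins for what an AI-based module would learn.

```python
def arbitrate(inputs, profile):
    """Pick the control modality whose input is currently most reliable.
    inputs:  {modality: (command, confidence in [0, 1])}
    profile: {modality: user-specific capability weight in [0, 1]}"""
    best = max(inputs, key=lambda m: inputs[m][1] * profile.get(m, 0.0))
    command, conf = inputs[best]
    if conf * profile.get(best, 0.0) < 0.3:   # nothing trustworthy:
        return "autonomous_assist", None      # hand over to the planner
    return best, command

# Hypothetical profile for a user with limited hand dexterity but clear speech.
user_profile = {"joystick": 0.2, "voice": 0.9, "gaze": 0.6}
live_inputs = {"joystick": ("forward", 0.4),
               "voice":    ("turn left", 0.8),
               "gaze":     ("forward", 0.5)}

modality, cmd = arbitrate(live_inputs, user_profile)
print(modality, cmd)   # -> voice, turn left
```

Replacing the fixed profile and threshold with learned values is exactly where the reinforcement learning and hybrid AI modules discussed above would plug in.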
Based on the discussion in this section, we can categorize the present limitations into three major categories: technical, ethical, and user-centered. Table 6 summarizes the limitations and corresponding future directions under these categories.

7. Discussion and Conclusions

This review systematically explores the historical and technological evolution of intelligent wheelchair design, from conventional to recent data-driven approaches. Section 2 reviews early manual and semi-automated wheelchairs and traces the development of the first intelligent prototypes from the basic mechanical wheelchair. Section 3 analyzes different control mechanisms, compares model-driven approaches (PID, MPC, fuzzy logic) with data-driven approaches (supervised learning, unsupervised learning, and reinforcement learning), and discusses their individual successes and shortcomings. Section 4 and Section 5 then cover, respectively, the hardware layers, such as sensors, actuators, and perception systems, and real-world applications and functionalities, such as path planning, obstacle avoidance, decision-making, safety, and human–robot interaction. Finally, Section 6 identifies key limitations of existing approaches and prospective future directions for improving the overall performance of autonomous intelligent wheelchairs.

Overall, the review reveals that future research in autonomous wheelchair controller design must address the existing limitations of present designs and incorporate real-world challenges to a greater extent. While conventional control algorithms have achieved remarkable success, the next generation of intelligent wheelchairs must be more reliable, operate with higher precision, and maintain the best possible performance in diverse operating conditions. With the help of modern sensors, actuator technologies, and AI algorithms, achieving precise motion control is no longer a formidable barrier. However, keeping pace with the latest sensor technology requires more robust computational resources, and integrating high-performance sensors and complex control logic, along with those computational resources, raises serious concerns regarding energy consumption, cost, and system complexity. Such growing complexity adds dimensions to the system, and to handle the highly nonlinear nature of wheelchair dynamics (variable user weight, terrain slope, battery state), greater emphasis must be placed on machine learning-driven control. Hence, future designs should incorporate energy-optimized control strategies and should adapt to the system's dynamic changes. RL can be the next milestone in this field, since it enables controllers that adapt to novel situations, learn from user behavior, and optimize for multi-objective criteria such as comfort, safety, energy efficiency, and speed. Since intelligent wheelchairs must serve residential, medical, and commercial settings, controllers pretrained in one environment can be fine-tuned for another, eliminating the need to design from scratch for each new environment. Moreover, such control strategies must be validated on real hardware, since real-world nonlinearities often degrade simulated performance. We hope this review helps future researchers set their goals and, to some extent, contributes to the field of biomedical engineering.

Author Contributions

Conceptualization, A.G., N.F., K.R.C. and M.P.S.; methodology, A.G., N.F., K.R.C. and M.P.S.; formal analysis: A.G., N.F. and K.R.C.; investigation, A.G., N.F. and K.R.C.; resources, A.G., N.F., K.R.C. and M.P.S.; data curation, A.G., N.F. and K.R.C.; writing—original draft preparation, A.G., N.F. and K.R.C.; writing—review and editing, A.G., N.F., K.R.C. and M.P.S.; supervision, M.P.S.; project administration, M.P.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial Intelligence
ANFIS: Adaptive Neuro-Fuzzy Inference System
AR: Augmented Reality
ATRS: Automated Transport and Retrieval System
BCI: Brain–Computer Interface
BMS: Battery Management System
CNN: Convolutional Neural Network
DDS: Driving Decision Strategy
DNN: Deep Neural Network
eMPC: Explicit Model Predictive Controller
EEG: Electroencephalography
EMG: Electromyography
EMS: Energy Management Strategy
EW: Electric Wheelchair
FPMPC: Fuzzy Potential Model Predictive Control
GPS: Global Positioning System
GA: Genetic Algorithm
HMI: Human–Machine Interface
HRVO: Hybrid Reciprocal Velocity Obstacle
ICA: Independent Component Analysis
IMU: Inertial Measurement Unit
IoT: Internet of Things
LSTM: Long Short-Term Memory
LiDAR: Light Detection and Ranging
LDR: Laser Data Receiver
MPC: Model Predictive Control
ML: Machine Learning
NLP: Natural Language Processing
PID: Proportional–Integral–Derivative
PI: Proportional–Integral
ROS: Robot Operating System
RADAR: Radio Detection and Ranging
RGB-D: Red Green Blue–Depth
RL: Reinforcement Learning
SONAR: Sound Navigation and Ranging
SAC: Soft Actor–Critic
SIP: Spatial Information Processor
SNN: Spiking Neural Network
SVM: Support Vector Machine
SLAM: Simultaneous Localization and Mapping
SSVEP: Steady-State Visually Evoked Potential
VFH: Vector Field Histogram
YOLO: You Only Look Once

References

  1. HISTORY.PHYSIO. History of the Wheelchair. Available online: https://history.physio/history-of-the-wheelchair/ (accessed on 23 October 2025).
  2. University of Toronto Faculty of Applied Science & Engineering. The Maker: George Klein and the First Electric Wheelchair. Available online: https://news.engineering.utoronto.ca/maker-george-klein-first-electric-wheelchair/ (accessed on 15 December 2025).
  3. Tremblay, M. Going back to Civvy Street: A historical account of the impact of the Everest and Jennings wheelchair for Canadian World War II veterans with spinal cord injury. Disabil. Soc. 1996, 11, 149–170. [Google Scholar] [CrossRef]
  4. Simpson, R.C. Smart wheelchairs: A literature review. J. Rehabil. Res. Dev. 2005, 42, 423–436. [Google Scholar] [CrossRef]
  5. Motion Wheelchair. Standing Wheelchairs. Available online: https://motionwheelchairs.com.au/collections/standing-wheelchairs (accessed on 23 October 2025).
  6. Toyota Motor East Japan Inc. All Wheel Drive Powered Wheelchair. Available online: https://www.toyota-ej.co.jp/english/products/patrafour.html (accessed on 23 October 2025).
  7. Leaman, J.; La, H.M. ichair: Intelligent powerchair for severely disabled people. In Proceedings of the ISSAT International Conference on Modeling of Complex Systems and Environments (MCSE), Da Nang, Vietnam, 8–10 June 2015; pp. 8–10. [Google Scholar]
  8. Michele Chandler. America’s Next Great Inventions. The Mercury News. Available online: https://www.mercurynews.com/2007/03/21/americas-next-great-inventions/ (accessed on 23 October 2025).
  9. Itani, K.; De Bernardinis, A. Review on new-generation batteries technologies: Trends and future directions. Energies 2023, 16, 7530. [Google Scholar] [CrossRef]
  10. González, M.; Anseán, D. Advanced Battery Technologies: New Applications and Management Systems; MDPI: Basel, Switzerland, 2021. [Google Scholar]
  11. Biba, J. 9 New Battery Technologies to Watch. Available online: https://builtin.com/hardware/new-battery-technologies (accessed on 23 October 2025).
  12. Leaman, J.; La, H.M. A comprehensive review of smart wheelchairs: Past, present, and future. IEEE Trans. Hum.-Mach. Syst. 2017, 47, 486–499. [Google Scholar] [CrossRef]
  13. Gupta, C.; Gill, N.S.; Gulia, P.; Kumar, A.; Karamti, H.; Moges, D.M. An optimized YOLO NAS based framework for realtime object detection. Sci. Rep. 2025, 15, 32903. [Google Scholar] [CrossRef]
  14. Murat, A.A.; Kiran, M.S. A comprehensive review on YOLO versions for object detection. Eng. Sci. Technol. Int. J. 2025, 70, 102161. [Google Scholar] [CrossRef]
  15. Maheswari, M.M.P.; Karthick, A.; Perumal, A. A Review on Smart Wheelchair Monitoring and Controlling System. Int. J. Res. Anal. Rev. (IJRAR) 2022, 9, 49–54. [Google Scholar]
  16. Sahoo, S.K.; Choudhury, B.B. A Review on Human-Robot Interaction and User Experience in Smart Robotic Wheelchairs. J. Technol. Innov. Energy 2023, 2, 39–55. [Google Scholar] [CrossRef]
  17. Science Museum. The History of the Wheelchair. Available online: https://www.sciencemuseum.org.uk/objects-and-stories/history-wheelchair (accessed on 23 October 2025).
  18. Kwok, T. Give Me Gout or Give Me Death: The Rise of Gout in the Eighteenth Century. In Proceedings of the 17th Annual History of Medicine Days, Calgary, AB, Canada, 7–8 March 2008. [Google Scholar]
  19. van der Woude, L.H.V.; de Groot, S.; Janssen, T.W.J. Manual wheelchairs: Research and innovation in rehabilitation, sports, daily life and health. Med. Eng. Phys. 2006, 28, 905–915. [Google Scholar] [CrossRef]
  20. Flemmer, C.L.; Flemmer, R.C. A review of manual wheelchairs. Disabil. Rehabil. Assist. Technol. 2016, 11, 177–187. [Google Scholar] [CrossRef] [PubMed]
  21. Tomlinson, J.D. Managing Maneuverability and Rear Stability of Adjustable Manual Wheelchairs: An Update. Phys. Ther. 2000, 80, 904–911. [Google Scholar] [CrossRef]
  22. Domingues, I.; Pinheiro, J.; Silveira, J.; Francisco, P.; Jutai, J.; Correia Martins, A. Psychosocial Impact of Powered Wheelchair, Users’ Satisfaction and Their Relation to Social Participation. Technologies 2019, 7, 73. [Google Scholar] [CrossRef]
  23. Kundu, A.S.; Mazumder, O.; Lenka, P.K.; Bhaumik, S. Design and Performance Evaluation of 4 Wheeled Omni Wheelchair with Reduced Slip and Vibration. Procedia Comput. Sci. 2017, 105, 289–295. [Google Scholar] [CrossRef]
  24. Erdogan, A.; Argall, B.D. The Effect of Robotic Wheelchair Control Paradigm and Interface on User Performance, Effort and Preference: An Experimental Assessment. Robot. Auton. Syst. 2017, 94, 282–297. [Google Scholar] [CrossRef]
  25. Oliver, S.; Khan, A. Design and Evaluation of an Alternative Wheelchair Control System for Dexterity Disabilities. Healthc. Technol. Lett. 2019, 6, 109–114. [Google Scholar] [CrossRef]
  26. Woods, B.; Watson, N. A Short History of Powered Wheelchairs. Assist. Technol. 2003, 15, 164–180. [Google Scholar] [CrossRef] [PubMed]
  27. Cooper, R.; Widman, L.; Jones, D.; Robertson, R.; Ster, J. Force sensing control for electric powered wheelchairs. IEEE Trans. Control Syst. Technol. 2000, 8, 112–117. [Google Scholar] [CrossRef]
  28. Desai, S.; Mantha, S.S.; Phalle, V.M. Advances in smart wheelchair technology. In Proceedings of the 2017 International Conference on Nascent Technologies in Engineering (ICNTE), Mumbai, India, 27–28 January 2017; pp. 1–7. [Google Scholar] [CrossRef]
  29. Yanco, H.A. Wheelesley: A Robotic Wheelchair System: Indoor Navigation and User Interface. In Assistive Technology and Artificial Intelligence; Springer: Berlin/Heidelberg, Germany, 1998; pp. 256–268. [Google Scholar]
  30. Gomi, T. The TAO Project: Intelligent wheelchairs for the handicapped. AAAI Tech. Rep. 1996, 5, 28–37. [Google Scholar]
  31. Pires, G.; Nunes, U.; de Almeida, A. RobChair—A Semi-Autonomous Wheelchair for Disabled People. In Proceedings of the 3rd IFAC Symposium on Intelligent Autonomous Vehicles 1998 (IAV’98), Madrid, Spain, 25–27 March 1998; IFAC Proceedings Volumes. Volume 31, pp. 509–513. [Google Scholar] [CrossRef]
  32. Levine, S.; Bell, D.; Jaros, L.; Simpson, R.; Koren, Y.; Borenstein, J. The NavChair Assistive Wheelchair Navigation System. IEEE Trans. Rehabil. Eng. 1999, 7, 443–451. [Google Scholar] [CrossRef]
  33. Luo, R.; Chen, T.M.; Lin, M.H. Automatic guided intelligent wheelchair system using hierarchical grey-fuzzy motion decision-making algorithms. In Proceedings of the 1999 IEEE/RSJ International Conference on Intelligent Robots and Systems. Human and Environment Friendly Robots with High Intelligence and Emotional Quotients (Cat. No.99CH36289), Kyongju, Republic of Korea, 17–21 October 1999; Volume 2, pp. 900–905. [Google Scholar] [CrossRef]
  34. Shamseldin, M.A.; Khaled, E.; Youssef, A.; Mohamed, D.; Ahmed, S.; Hesham, A.; Elkodama, A.; Badran, M. A New Design Identification and Control Based on GA Optimization for An Autonomous Wheelchair. Robotics 2022, 11, 101. [Google Scholar] [CrossRef]
  35. Balambica, V.; Anto, A.; Achudhan, M.; Deepak, V.; Juzer, M.; Selvan, T. PID sensor controlled automatic wheelchair for physically disabled people. Turk. J. Comput. Math. Educ. 2021, 12, 5848–5857. [Google Scholar]
  36. Castillo, B.D.; Kuo, Y.F.; Chou, J.J. Novel design of a wheelchair with stair climbing capabilities. In Proceedings of the 2015 International Conference on Intelligent Informatics and Biomedical Sciences (ICIIBMS), Okinawa, Japan, 28–30 November 2015; pp. 208–215. [Google Scholar]
  37. Poma, S.Z.; Serna, A.J.C.; Delion, J.C.G.; de Leon, E.J.C.; Larroca, F.H.P. Design and implementation of an automatic control system applying PID in the positioning of an electric wheelchair. In Proceedings of the 4th South American International Industrial Engineering and Operations Management Conference, Lima, Peru, 9–11 May 2023. [Google Scholar]
  38. Copot, C.; Muresan, C.; Ionescu, C.M.; Vanlanduit, S.; De Keyser, R. Calibration of UR10 robot controller through simple auto-tuning approach. Robotics 2018, 7, 35. [Google Scholar] [CrossRef]
  39. Mahadevaswamy, U.B.; Rohith, M.N.; Arivazhagan, R. Development of a Semi-Automatic Wheelchair System for Improved Mobility and User Independence. In Proceedings of the 2024 3rd International Conference on Automation, Computing and Renewable Systems (ICACRS), Pudukkottai, India, 4–6 December 2024; pp. 36–43. [Google Scholar] [CrossRef]
  40. Kawaguchi, E.; Sekiguchi, K.; Nonaka, K. Self-driving Electric Wheelchair in Crowded Environments Using a Fuzzy Potential Model Predictive Control. IFAC-PapersOnLine 2023, 56, 11827–11833. [Google Scholar] [CrossRef]
  41. Ceravolo, E.; Gabellone, M.; Farina, M.; Bascetta, L.; Matteucci, M. Model Predictive Control of an Autonomous Wheelchair. IFAC-PapersOnLine 2017, 50, 9821–9826. [Google Scholar] [CrossRef]
  42. Blume, S.; Sieberg, P.M.; Maas, N.; Schramm, D. Neural roll angle estimation in a model predictive control system. In Proceedings of the 2019 IEEE Intelligent Transportation Systems Conference (ITSC), Auckland, New Zealand, 27–30 October 2019; pp. 1625–1630. [Google Scholar]
  43. Bentaleb, T.; Nguyen, V.T.; Sentouh, C.; Conreur, G.; Poulain, T.; Pudlo, P. A real-time multi-objective predictive control strategy for wheelchair ergometer platform. In Proceedings of the 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC), Bari, Italy, 6–9 October 2019; pp. 2397–2404. [Google Scholar]
  44. Nasir, N.M.; Ghani, N.M.A.; Nasir, A.N.K.; Ahmad, M.A.; Tokhi, M.O. Neuro-modelling and fuzzy logic control of a two-wheeled wheelchair system. J. Low Freq. Noise Vib. Act. Control 2025, 44, 588–602. [Google Scholar] [CrossRef]
  45. Wu, W.; Ma, X.; Gao, X. The application of fuzzy PID control in intelligent wheelchair system. In Proceedings of the 2010 Second WRI Global Congress on Intelligent Systems, Wuhan, China, 16–17 December 2010; Volume 1, pp. 230–232. [Google Scholar]
  46. Aulia, M.; Arifin, A.; Purwanto, D. Control of Wheelchair on the Ramp Trajectory Using Bioelectric Impedance with Fuzzy-PID Controller. In Proceedings of the 1st International Conference on Electronics, Biomedical Engineering, and Health Informatics: ICEBEHI, Surabaya, Indonesia, 8–9 October 2020; Springer: Berlin/Heidelberg, Germany, 2021; pp. 421–437. [Google Scholar]
  47. Wang, J.; Chen, W.; Liao, W. An Improved Localization and Navigation Method for Intelligent Wheelchair in Narrow and Crowded Environments. In Proceedings of the 13th IFAC Symposium on Large Scale Complex Systems: Theory and Applications, Shanghai, China, 7–10 July 2013; IFAC Proceedings Volumes. Volume 46, pp. 389–394. [Google Scholar]
  48. Chen, C.L.; Chen, W.C.; Chang, F.Y. Hybrid learning algorithm for Gaussian potential function networks. Iee Proc. (Control Theory Appl.) 1993, 140, 442–448. [Google Scholar] [CrossRef]
  49. Sutikno; Anam, K.; Sujanarko, B. Design of electrical wheelchair navigation for disabled patient using convolutional neural networks on Raspberry Pi 3. Aip Conf. Proc. 2020, 2278, 020048. [Google Scholar]
  50. Abdulghani, M.M.; Al-Aubidy, K.M.; Ali, M.M.; Hamarsheh, Q.J. Wheelchair Neuro Fuzzy Control and Tracking System Based on Voice Recognition. Sensors 2020, 20, 2872. [Google Scholar] [CrossRef]
  51. Barriuso, A.L.; Pérez-Marcos, J.; Villarrubia González, D.J.; de Paz, J.F. Agent-Based Intelligent Interface for Wheelchair Movement Control. Sensors 2018, 18, 1511. [Google Scholar] [CrossRef] [PubMed]
  52. Maule, L.; Luchetti, A.; Zanetti, M.; Tomasin, P.; Pertile, M.; Tavernini, M.; Guandalini, G.M.; De Cecco, M. RoboEye, an Efficient, Reliable and Safe Semi-Autonomous Gaze Driven Wheelchair for Domestic Use. Technologies 2021, 9, 16. [Google Scholar] [CrossRef]
  53. Masud, U.; Abdualaziz Almolhis, N.; Alhazmi, A.; Ramakrishnan, J.; Ul Islam, F.; Razzaq Farooqi, A. Smart Wheelchair Controlled Through a Vision-Based Autonomous System. IEEE Access 2024, 12, 65099–65116. [Google Scholar] [CrossRef]
  54. Carlson, T.; Demiris, Y. Collaborative control for a robotic wheelchair: Evaluation of performance, attention, and workload. IEEE Trans. Syst. Man Cybern. Part B (Cybern.) 2012, 42, 876–888. [Google Scholar] [CrossRef] [PubMed]
  55. Heskebeck, F.; Tufvesson, P. Automatic Control of a Wheelchair Using a Brain Computer Interface and Real-Time Decision-Making. In Proceedings of the 2024 European Control Conference (ECC), Stockholm, Sweden, 5–28 June 2024; pp. 3892–3897. [Google Scholar] [CrossRef]
  56. Mohamed, E.; Sirlantzis, K.; Howells, G. Indoor/Outdoor Semantic Segmentation Using Deep Learning for Visually Impaired Wheelchair Users. IEEE Access 2021, 9, 147914–147932. [Google Scholar] [CrossRef]
  57. Pasteau, F.; Narayanan, V.K.; Babel, M.; Chaumette, F. A Visual Servoing Approach for Autonomous Corridor Following and Doorway Passing in a Wheelchair. Robot. Auton. Syst. 2016, 75, 28–40. [Google Scholar] [CrossRef]
  58. Bi, L.; Fan, X.a.; Jie, K.; Teng, T.; Ding, H.; Liu, Y. Using a head-up display-based steady-state visually evoked potential brain–computer interface to control a simulated vehicle. IEEE Trans. Intell. Transp. Syst. 2013, 15, 959–966. [Google Scholar] [CrossRef]
  59. Lei, S.; Li, Z. Fusing Visual Tracking and Navigation for Autonomous Control of an Intelligent Wheelchair. In Proceedings of the 3rd IFAC Conference on Intelligent Control and Automation Science ICONS, Chengdu, China, 2–4 September 2013; Volume 46, pp. 549–554. [Google Scholar] [CrossRef]
  60. Nguyen, V.T.; Sentouh, C.; Pudlo, P.; Popieul, J.C. Model-based Shared Control Approach for a Power Wheelchair Driving Assistance System Using a Force Feedback Joystick. Front. Control Eng. 2023, 4, 1058802. [Google Scholar] [CrossRef]
  61. Meena, Y.K.; Cecotti, H.; Wong-Lin, K.; Prasad, G. A multimodal interface to solve the Midas-Touch problem in gaze-controlled wheelchair. In Proceedings of the 2017 39th annual international conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Jeju Island, Republic of Korea, 11–15 July 2017; pp. 905–908. [Google Scholar]
  62. Cui, J.; Cui, L.; Huang, Z.; Li, X.; Han, F. IoT Wheelchair Control System Based on Multi-Mode Sensing and Human-Machine Interaction. Micromachines 2022, 13, 1108. [Google Scholar] [CrossRef] [PubMed]
  63. Fadelli, I. Smart Robotic Wheelchair Offers Enhanced Autonomy and Control. TechXplore. 2025. Available online: https://techxplore.com/news/2025-02-smart-robotic-wheelchair-autonomy.html (accessed on 15 December 2025).
  64. D’Souza, L.; Kulal, B.G.; Shrinivas, U.; Shetty, P.M.; Singh, C.; Bekal, A. Design & Implementation of Novel AI Based Hand Gestured Smart Wheelchair. In Proceedings of the 2023 International Conference on Applied Intelligence and Sustainable Computing (ICAISC), Dharwad, India, 16–17 June 2023; pp. 1–6. [Google Scholar]
  65. Zolotas, M.; Demiris, Y. Disentangled sequence clustering for human intention inference. In Proceedings of the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Kyoto, Japan, 23–27 October 2022; pp. 9814–9820. [Google Scholar]
  66. Dev, A.; Chowdhury, K.R.; Schoen, M.P. Q-Learning Based Control for Swing-Up and Balancing of Inverted Pendulum. In Proceedings of the 2024 Intermountain Engineering, Technology and Computing (IETC), Logan, UT, USA, 13–14 May 2024; pp. 209–214. [Google Scholar]
  67. Done, C.; Palmer, J.; Oakey, K.; Gupta, A.; Thiros, C.; Franklin, J.; Schoen, M.P. Reinforcement Learning-Driven Prosthetic Hand Actuation in a Virtual Environment Using Unity ML-Agents. Virtual Worlds 2025, 4, 53. [Google Scholar]
  68. Gupta, A.; Schoen, M.P. Analysis of Simulated Autonomous Wheelchair Driving using GA-PID and RL based Controllers. In Proceedings of the 2025 Intermountain Engineering, Technology and Computing (IETC), Orem, UT, USA, 9–10 May 2025; pp. 1–6. [Google Scholar] [CrossRef]
  69. Liu, S.; See, K.C.; Ngiam, K.Y.; Celi, L.A.; Sun, X.; Feng, M. Reinforcement learning for clinical decision support in critical care: Comprehensive review. J. Med. Internet Res. 2020, 22, e18477. [Google Scholar] [CrossRef]
  70. Chatzidimitriadis, S.; Sirlantzis, K. Deep reinforcement learning for autonomous navigation in robotic wheelchairs. In Proceedings of the International Conference on Pattern Recognition and Artificial Intelligence, Xiamen China, 23–25 September 2022; Springer: Cham, Switzerland, 2022; pp. 271–282. [Google Scholar]
  71. Ryu, H.Y.; Kwon, J.S.; Lim, J.H.; Kim, A.H.; Baek, S.J.; Kim, J.W. Development of an Autonomous Driving Smart Wheelchair for the Physically Weak. Appl. Sci. 2022, 12, 377. [Google Scholar] [CrossRef]
  72. Torres-Vega, J.G.; Cuevas-Tello, J.C.; Puente, C.; Nunez-Varela, J.; Soubervielle-Montalvo, C. Towards an Intelligent Electric Wheelchair: Computer Vision Module. In Intelligent Sustainable Systems: Selected Papers of WorldS4 2022; Springer: Berlin/Heidelberg, Germany, 2023; Volume 1, pp. 253–261. [Google Scholar]
  73. Kim, E.Y. Wheelchair navigation system for disabled and elderly people. Sensors 2016, 16, 1806. [Google Scholar] [CrossRef]
  74. Wang, S.; Chen, L.; Hu, H.; McDonald-Maier, K. Doorway passing of an intelligent wheelchair by dynamically generating bezier curve trajectory. In Proceedings of the 2012 IEEE International Conference on Robotics and Biomimetics (ROBIO), Guangzhou, China, 11–14 December 2012; pp. 1206–1211. [Google Scholar]
  75. Zhang, Z.; Xu, P.; Wu, C.; Yu, H. Smart Nursing Wheelchairs: A New Trend in Assisted Care and the Future of Multifunctional Integration. Biomimetics 2024, 9, 492. [Google Scholar] [CrossRef]
  76. Sarker, M.A.B.; Sola-Thomas, E.; Jamieson, C.; Imtiaz, M.H. Autonomous Movement of Wheelchair by Cameras and YOLOv7. Eng. Proc. 2023, 31, 60. [Google Scholar] [CrossRef]
  77. Chatterjee, S.; Majumdar, A.; Roy, S. A New Control Approach for a Multi-Controlled Wheelchair Utilizing Deep Learning Framework and Raspberry Pi. IETE J. Res. 2025, 71, 1338–1355. [Google Scholar] [CrossRef]
  78. Zhou, Y.; Yan, Z.; Yang, Y.; Wang, Z.; Lu, P.; Yuan, P.F.; He, B. Bioinspired sensors and applications in intelligent robots: A review. Robot. Intell. Autom. 2024, 44, 215–228. [Google Scholar] [CrossRef]
  79. Haddad, M.; Sanders, D.; Tewkesbury, G.; Langner, M.; Simandjuntak, S. Intelligent user interface to control a powered wheelchair using infrared sensors. In Proceedings of the SAI Intelligent Systems Conference, Virtual, 15–16 July; Springer: Cham, Switzerland, 2021; pp. 640–649. [Google Scholar]
  80. Rodrigues, N.; Sousa, A.; Reis, L.P.; Coelho, A. Intelligent wheelchairs rolling in pairs using reinforcement learning. In Proceedings of the Iberian Robotics Conference, Zaragoza, Spain, 23–25 November 2022; Springer: Cham, Switzerland, 2022; pp. 274–285. [Google Scholar]
  81. Xia, X.; Hashemi, E.; Xiong, L.; Khajepour, A.; Xu, N. Autonomous vehicles sideslip angle estimation: Single antenna GNSS/IMU fusion with observability analysis. IEEE Internet Things J. 2021, 8, 14845–14859. [Google Scholar] [CrossRef]
  82. Zhang, X.; Li, J.; Zhang, R.; Liu, T. A Brain-Controlled and User-Centered Intelligent Wheelchair: A Feasibility Study. Sensors 2024, 24, 3000. [Google Scholar] [CrossRef] [PubMed]
  83. Kulvicius, T.; Zhang, D.; Poustka, L.; Bölte, S.; Jahn, L.; Flügge, S.; Kraft, M.; Zweckstetter, M.; Nielsen-Saines, K.; Wörgötter, F.; et al. Deep learning empowered sensor fusion to improve infant movement classification. arXiv 2024, arXiv:2406.09014. [Google Scholar] [CrossRef]
  84. Fariña, B.; Toledo, J.; Acosta, L. Sensor fusion algorithm selection for an autonomous wheelchair based on EKF/UKF comparison. Int. J. Mech. Eng. Robot. Res. 2023, 12, 1–7. [Google Scholar] [CrossRef]
  85. Sahu, D.; Khan, G.; Akhter, P. A Survey on Computer Vision and Its Applications. Int. J. Sci. Res. Eng. Manag. 2024, 8, 1–13. [Google Scholar] [CrossRef]
  86. Farheen, N.; Jaman, G.G.; Schoen, M.P. Object Detection and Navigation Strategy for Obstacle Avoidance Applied to Autonomous Wheel Chair Driving. In Proceedings of the 2022 Intermountain Engineering, Technology and Computing (IETC), Orem, UT, USA, 13–14 May 2022; pp. 1–5. [Google Scholar] [CrossRef]
  87. Kumari, E.V.; Swetha, K.; Navya, S. A Driving Decision Strategy (DDS) Based on Machine learning for an autonomous vehicle. In Proceedings of the 2022 International Conference on Advancements in Smart, Secure and Intelligent Computing (ASSIC), Bhubaneswar, India, 9–20 November 2022; pp. 1–5. [Google Scholar] [CrossRef]
  88. Ehrlich, M.; Zaidel, Y.; Weiss, P.L.; Melamed Yekel, A.; Gefen, N.; Supic, L.; Ezra Tsur, E. Adaptive Control of a Wheelchair Mounted Robotic Arm with Neuromorphically Integrated Velocity Readings and Online-Learning. Front. Neurosci. 2022, 16, 1007736. [Google Scholar] [CrossRef]
  89. El Bouazzaoui, I.; Chghaf, M.; Rodriguez, S.; Nguyen, D.D.; El Ouardi, A. An Extended HOOFR SLAM Algorithm Using IR-D Sensor Data for Outdoor Autonomous Vehicle Localization. J. Intell. Robot. Syst. 2023, 109, 56. [Google Scholar] [CrossRef]
  90. Williams, T.; Scheutz, M. The State-of-the-Art in Autonomous Wheelchairs Controlled through Natural Language: A Survey. Robot. Auton. Syst. 2017, 96, 171–183. [Google Scholar] [CrossRef]
  91. Thai, P.Q.; Tai, V.; Tien, L. Design and implementation of an electric wheelchair operating in different terrains. Int. J. Mech. Eng. Robot. Res. 2020, 9, 797–802. [Google Scholar] [CrossRef]
  92. Hsu, P.E.; Hsu, Y.L.; Lu, J.M.; Chang, C.H. Seat adjustment design of an intelligent robotic wheelchair based on the stewart platform. Int. J. Adv. Robot. Syst. 2013, 10, 168. [Google Scholar] [CrossRef]
  93. Singkhleewon, N.; Asawasilapakul, T. A development of adjustable standing and automatic stop electric wheelchair prototype operated with a smartphone. Syst. Rev. Pharm. 2020, 11, 564–570. [Google Scholar]
  94. Sharma, B.; Pillai, B.M.; Borvorntanajanya, K.; Suthakorn, J. Modeling and design of a stair climbing wheelchair with pose estimation and adjustment. J. Intell. Robot. Syst. 2022, 106, 66. [Google Scholar] [CrossRef]
  95. Ortiz, J.S.; Palacios-Navarro, G.; Andaluz, V.H.; Recalde, L.F. Three-Dimensional Unified Motion Control of a Robotic Standing Wheelchair for Rehabilitation Purposes. Sensors 2021, 21, 3057. [Google Scholar] [CrossRef] [PubMed]
  96. Iskanderani, A.I.; Tamim, F.R.; Rana, M.M.; Ahmed, W.; Mehedi, I.M.; Aljohani, A.J.; Latif, A.; Shaikh, S.A.L.; Shorfuzzaman, M.; Akther, F.; et al. Voice Controlled Artificial Intelligent Smart Wheelchair. In Proceedings of the 2020 8th International Conference on Intelligent and Advanced Systems (ICIAS), Kuching, Malaysia, 13–15 July 2021; pp. 1–5. [Google Scholar]
  97. Dev, A.; Rahman, M.A.; Mamun, N. Design of an EEG-Based Brain Controlled Wheelchair for Quadriplegic Patients. In Proceedings of the 2018 3rd International Conference for Convergence in Technology (I2CT), Pune, India, 6–8 April 2018; pp. 1–5. [Google Scholar] [CrossRef]
  98. Deng, X.; Yu, Z.L.; Lin, C.; Gu, Z.; Li, Y. A bayesian shared control approach for wheelchair robot with brain machine interface. IEEE Trans. Neural Syst. Rehabil. Eng. 2019, 28, 328–338. [Google Scholar] [CrossRef] [PubMed]
  99. Dang, T.; Mascarich, F.; Khattak, S.; Papachristos, C.; Alexis, K. Graph-based path planning for autonomous robotic exploration in subterranean environments. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 3–8 November 2019; pp. 3105–3112. [Google Scholar]
  100. Irani, B.; Wang, J.; Chen, W. A localizability constraint-based path planning method for autonomous vehicles. IEEE Trans. Intell. Transp. Syst. 2018, 20, 2593–2604. [Google Scholar] [CrossRef]
  101. Li, H.; Zhang, Q.; Zhao, D. Deep reinforcement learning-based automatic exploration for navigation in unknown environment. IEEE Trans. Neural Netw. Learn. Syst. 2019, 31, 2064–2076. [Google Scholar] [CrossRef]
  102. Li, Z.; Xiong, Y.; Zhou, L. ROS-based indoor autonomous exploration and navigation wheelchair. In Proceedings of the 2017 10th International Symposium on Computational Intelligence and Design (ISCID), Hangzhou, China, 9–10 December 2017; Volume 2, pp. 132–135. [Google Scholar]
  103. Chen, Y.; Ji, H.; Wu, Q.; Wang, L. Design of intelligent wheelchair control system with multi-control integration. In Proceedings of the 2024 IEEE 6th Advanced Information Management, Communicates, Electronic and Automation Control Conference (IMCEC), Chongqing, China, 24–26 May 2024; Volume 6, pp. 911–915. [Google Scholar] [CrossRef]
  104. Gao, C.; Miller, T.H.; Spletzer, J.R.; Hoffman, I.; Panzarella, T. Autonomous Docking of a Smart Wheelchair for the Automated Transport and Retrieval System (ATRS). J. Field Robot. 2008, 25, 203–222. [Google Scholar] [CrossRef]
  105. Arnay, R.; Hernández-Aceituno, J.; Toledo, J.; Acosta, L. Laser and optical flow fusion for a non-intrusive obstacle detection system on an intelligent wheelchair. IEEE Sens. J. 2018, 18, 3799–3805. [Google Scholar] [CrossRef]
  106. Jung, Y.; Kim, Y.; Lee, W.H.; Bang, M.S.; Kim, Y.; Kim, S. Path planning algorithm for an autonomous electric wheelchair in hospitals. IEEE Access 2020, 8, 208199–208213. [Google Scholar] [CrossRef]
  107. Rozsa, Z.; Sziranyi, T. Object detection from a few LIDAR scanning planes. IEEE Trans. Intell. Veh. 2019, 4, 548–560. [Google Scholar] [CrossRef]
  108. Ruan, J.; Wu, C.; Cui, H.; Li, W.; Sauer, D.U. Delayed deep deterministic policy gradient-based energy management strategy for overall energy consumption optimization of dual motor electrified powertrain. IEEE Trans. Veh. Technol. 2023, 72, 11415–11427. [Google Scholar] [CrossRef]
  109. Chen, P.C.; Koh, Y.F. Residual traveling distance estimation of an electric wheelchair. In Proceedings of the 2012 5th International Conference on BioMedical Engineering and Informatics, Chongqing, China, 16–18 October 2012; pp. 790–794. [Google Scholar] [CrossRef]
  110. Ballotta, L.; Schenato, L.; Carlone, L. Computation-communication trade-offs and sensor selection in real-time estimation for processing networks. IEEE Trans. Netw. Sci. Eng. 2020, 7, 2952–2965. [Google Scholar] [CrossRef]
  111. Casado, F.E.; Demiris, Y. Federated Learning from Demonstration for Active Assistance to Smart Wheelchair Users. In Proceedings of the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Kyoto, Japan, 23–27 October 2022; pp. 9326–9331. [Google Scholar] [CrossRef]
  112. Mughal, F.R.; He, J.; Das, B.; Dharejo, F.A.; Zhu, N.; Khan, S.B.; Alzahrani, S. Adaptive federated learning for resource-constrained IoT devices through edge intelligence and multi-edge clustering. Sci. Rep. 2024, 14, 28746. [Google Scholar] [CrossRef] [PubMed]
  113. He, S.; Zhang, R.; Wang, Q.; Chen, Y.; Yang, T.; Feng, Z.; Zhang, Y.; Shao, M.; Li, Y. A P300-based threshold-free brain switch and its application in wheelchair control. IEEE Trans. Neural Syst. Rehabil. Eng. 2016, 25, 715–725. [Google Scholar] [CrossRef] [PubMed]
  114. Miller, C.X.; Gebrekristos, T.; Young, M.; Montague, E.; Argall, B. An analysis of human-robot information streams to inform dynamic autonomy allocation. In Proceedings of the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Prague, Czech Republic, 27 September–1 October 2021; pp. 1872–1878. [Google Scholar]
  115. Wangmo, T.; Lipps, M.; Kressig, R.W.; Ienca, M. Ethical concerns with the use of intelligent assistive technology: Findings from a qualitative study with professional stakeholders. BMC Med. Ethics 2019, 20, 98. [Google Scholar] [CrossRef] [PubMed]
  116. Alam, F.; Gupta, A.; Saha, A.; Salimullah, S.M. Establishing an Internet of Things (IoT)-enabled solar-powered smart water treatment system. In Proceedings of the 2023 International Conference on Electrical, Computer and Communication Engineering (ECCE), Chittagong, Bangladesh, 23–25 February 2023; pp. 01–06. [Google Scholar]
  117. Zhang, R.; Li, Y.; Yan, Y.; Zhang, H.; Wu, S.; Yu, T.; Gu, Z. Control of a wheelchair in an indoor environment based on a brain–computer interface and automated navigation. IEEE Trans. Neural Syst. Rehabil. Eng. 2015, 24, 128–139. [Google Scholar] [CrossRef] [PubMed]
  118. Farheen, N.; Jaman, G.G.; Schoen, M.P. Model-Based Reinforcement Learning with System Identification and Fuzzy Reward. In Proceedings of the 2024 Intermountain Engineering, Technology and Computing (IETC), Logan, UT, USA, 13–14 May 2024; pp. 80–85. [Google Scholar] [CrossRef]
Figure 1. Simplified representation of an intelligent wheelchair.
Figure 2. Simplified representation of a hybrid control system in an intelligent wheelchair.
Figure 3. Flow chart showing steps of a basic RL algorithm.
Figure 4. Hierarchical taxonomy for intelligent wheelchair control.
Figure 5. Contribution of the sensing perception block in an intelligent wheelchair.
Table 1. Paper selection methodology.

Area Specification | Inclusion Criteria | Exclusion Criteria
Scope | Intelligent, autonomous, or semi-autonomous wheelchairs; control algorithms relevant to mobility or navigation; sensing, perception, HMI, or user-intent frameworks for wheelchairs | Areas outside the inclusion criteria; autonomous driving of vehicles/other machines
Database | IEEE Xplore/Access, ScienceDirect, Springer, Frontiers, MDPI, SAGE, Wiley Online Library | Other journals and online databases
Language | English | Languages other than English
Journal/Conference type | Research article | Book, chapter, review article
Selected Time Period | 2001–2025 | Before 2001
Table 2. Screening summary by digital library.

Digital Library | 1st Selected Research Articles | Screening | Included Research Articles
IEEE | 474 | 116 | 55
MDPI | 228 | 41 | 10
Springer | 330 | 35 | 9
ScienceDirect | 227 | 23 | 4
Frontiers | 18 | 9 | 2
SAGE | 17 | 5 | 1
Wiley Library | 42 | 8 | 2
Total | 1336 | 237 | 83
Table 3. Comparison of Intelligent Control Design Approaches for Wheelchair Systems.

Approach | Algorithm | Input/Output Format | Control Principle | Experimental Validation
Model-driven | PID | Input: tracking/state error; Output: control signal to actuators | Linear feedback; minimizes error through proportional, integral, and derivative gain tuning. | DC motor-based electric wheelchair achieving smooth angular response and fast reaction [35]; seat-leveling stair-climbing prototype using IMU-based PID [36]; automatic seat inclination system at 3° [37].
Model-driven | MPC | Input: reference and estimated state; Output: optimized control sequence | Predictive optimization based on a dynamic model to anticipate future states. | Ergometer-equipped manual wheelchair using explicit MPC (eMPC) showing superior velocity tracking vs. a PI controller [43]; neural MPC for vehicle dynamics control using LSTM prediction [42].
Model-driven | Fuzzy PID | Input: error and change of error; Output: updated PID gains | Combines rule-based fuzzy inference to tune PID controller gains. | Improved steady-state and dynamic performance over conventional PID [45]; stable control within ±10° slope with 80% effectiveness across user weights [46].
Data-driven | Supervised Learning (user-intent recognition) | Input: user signals via EEG, voice, joystick, etc.; Output: motion command | Learns mapping between user signals and control commands via labeled data. | BCI-controlled and gaze-based wheelchair navigation validated on users with motor disabilities [51,52]; voice-controlled navigation with collision avoidance [50].
Data-driven | Supervised Learning (environment perception) | Input: camera/LiDAR measurements; Output: waypoints or environment features | Learns object recognition and obstacle avoidance via labeled data. | CNN-based Raspberry Pi wheelchair for indoor–outdoor navigation [72]; visual servoing for autonomous corridor following and doorway passage [56,57].
Data-driven | Reinforcement Learning (RL/SAC/Deep RL) | Input: observation space; Output: action space | Agent learns an optimal policy to control a nonlinear system through continuous interaction (action, state change, and reward) with the surroundings. | SAC-based RL controller following a predefined path in simulation [68]; deep RL for collision-free navigation [70]; posture adaptation through RL learning optimal seating configurations [71].
Table 4. List of sensors used in intelligent wheelchairs.

| Sensor Type | Working Principle | Application | Reference |
| --- | --- | --- | --- |
| Ultrasonic sensor | Emit sound waves and measure the time-of-flight of echoes to estimate distance to nearby objects. | Short-range obstacle detection and avoidance. | [50,78] |
| Infrared sensor | Emit infrared light and measure reflections from surfaces to detect the presence of objects or temperature; performance can vary with ambient light and temperature. | Proximity sensing and short-range detection. | [79] |
| 2D LiDAR | Use laser beams to scan a plane and create a two-dimensional map. | Flat map creation and obstacle navigation. | [47,59,75,80] |
| 3D LiDAR | Emit laser pulses to generate detailed three-dimensional point clouds of the environment; provide holistic spatial information but require significant computation. | High-resolution mapping and navigation in complex environments. | [75,80] |
| RGB-D camera | Capture color images and depth information simultaneously, providing both visual context and 3D structure; sensitive to lighting conditions. | Object detection, obstacle recognition, and path following. | [24,52,59,71,72,75] |
| GPS | Receive satellite signals to determine geographic position; accurate outdoors but limited indoors. | Outdoor navigation and large-scale positioning. | [62,75] |
| IMU | Measure acceleration, orientation, and angular velocity using accelerometers and gyroscopes; cumulative errors lead to drift over time. | Motion tracking, stabilization, and enhancing navigation accuracy. | [51,62,75,81] |
| Sonar sensor | Emit acoustic waves and measure reflections; robust to lighting but provide limited spatial resolution. | Distance measurement and obstacle detection. | [75] |
| Radar and tilt sensors | Use radio waves to measure range/velocity and inclinometers to sense slope angles; can detect ramps and uneven terrain. | Navigating ramps and uneven ground. | [62] |
| Gesture and bio-sensors | Recognize hand/body gestures (e.g., PAJ7620) or biosignals for user interaction. | Occupant perception and gesture-based control. | [51,52,62,82] |
| Environmental sensors | Measure environmental parameters such as temperature, humidity, and ambient light (e.g., DHT11, BH1750) to complement navigation sensors by monitoring conditions. | Monitoring environment and adapting control/comfort. | [62] |
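As a brief illustration of the time-of-flight principle shared by the ultrasonic and sonar entries in Table 4, the sketch below converts a round-trip echo delay into a range estimate and a stop decision. The speed-of-sound constant and the 0.5 m stopping threshold are illustrative assumptions, not values from the cited systems.

```python
# Time-of-flight ranging, as used by the ultrasonic and sonar entries above.
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C

def tof_distance(echo_delay_s: float) -> float:
    """Convert a round-trip echo delay into a one-way distance in meters."""
    return SPEED_OF_SOUND * echo_delay_s / 2.0

def is_obstacle(echo_delay_s: float, stop_threshold_m: float = 0.5) -> bool:
    """Flag an obstacle when the estimated range falls below a safety
    threshold (the 0.5 m default is an illustrative assumption)."""
    return tof_distance(echo_delay_s) < stop_threshold_m

print(tof_distance(0.0029))  # 2.9 ms round trip -> ~0.50 m
print(is_obstacle(0.0029))   # True: just inside the stopping threshold
```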
Table 5. List of perception and control systems for intelligent wheelchairs.

| Perception and Control System | Working Principle | Application | Reference |
| --- | --- | --- | --- |
| Multi-mode sensor fusion | Integrates occupant, wheelchair, and environment data (e.g., LiDAR, cameras, ultrasonic, GPS, IMU, gesture/bio sensors) to yield comprehensive maps and state estimates. | Autonomous navigation, remote monitoring, occupant state estimation, and environmental awareness. | [47,51,59,62,75,83,84] |
| Visual sensing and computer vision | Utilizes cameras and LiDAR with algorithms (e.g., image-based visual servoing, AR markers, deep nets such as YOLO) to extract features like keypoints, walls, and objects. | Corridor following, doorway passage, object recognition, and semantic navigation. | [47,52,57,59,62,75,76,85,86] |
| Machine learning and neuro-fuzzy control | Applies ML techniques (voice-command classifiers, adaptive/neuro-fuzzy inference systems, spiking neural networks) to process sensor streams (voice, gestures, EEG/EMG) and generate control actions; includes intent prediction. | Voice/gesture recognition, adaptive motor control, and prediction of user intent. | [51,59,62,82,87,88] |
| Model predictive and optimization-based control | Employs model predictive control or similar optimization techniques to predict optimal trajectories based on user inputs and sensor data; cost functions balance safety and user intent; may include fuzzy potentials. | Shared-control navigation, obstacle avoidance, and path optimization. | [40,41,62,63,75,89] |
| Natural language and multimodal interface | Uses speech recognition, natural language processing, and multimodal inputs (e.g., handles, gestures, mobile app) to interpret high-level commands; can include remote IoT connectivity. | Human-friendly control interfaces, remote supervision, and assistance for varied abilities. | [50,51,62,90] |
| Brain–computer interface (BCI) | Acquires and interprets EEG/EMG signals using signal processing and classification (P300, SSVEP, motor imagery) to decode user intention; may employ spiking networks or SVMs. | Mobility control for users with severe motor disabilities. | [51,52,82] |
| Shared control and signal blending | Blends user commands with an autonomous planner to ensure safety; includes filtering/blending/authority switching and frameworks like ROS for control arbitration. | Balancing user input and autonomy to reduce cognitive load and improve safety. | [24,41,60,63,75] |
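The shared-control row in Table 5 describes blending user commands with an autonomous planner under an authority-switching scheme. The following minimal sketch shows one common form, linear authority blending keyed to obstacle proximity; the (v, ω) command format and the distance thresholds are illustrative assumptions rather than values from the cited frameworks.

```python
import numpy as np

def blend_commands(user_cmd, planner_cmd, min_obstacle_dist,
                   d_safe=1.5, d_stop=0.3):
    """Linear authority blending of (v, omega) velocity commands.

    Authority shifts from the user toward the autonomous planner as the
    nearest obstacle gets closer; at or below d_stop the planner takes
    over entirely. Thresholds are illustrative, not from the cited work.
    """
    alpha = np.clip((min_obstacle_dist - d_stop) / (d_safe - d_stop), 0.0, 1.0)
    return alpha * np.asarray(user_cmd) + (1.0 - alpha) * np.asarray(planner_cmd)

# User pushes straight ahead; the planner wants to veer around a nearby obstacle.
cmd = blend_commands(user_cmd=[0.8, 0.0], planner_cmd=[0.3, 0.6],
                     min_obstacle_dist=0.9)
print(cmd)  # [0.55, 0.3]: an equal-authority mix at this distance
```

Smoother variants filter alpha over time or switch authority discretely; the trade-off is between responsiveness to the user and protection against abrupt hand-offs.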
Table 6. Limitations and corresponding future directions.

| Challenge Category | Challenge Item | Existing Limitations | Proposed Future Directions |
| --- | --- | --- | --- |
| Technical | Dynamic scene | Dependency on predefined environments; lack of dynamic and challenging test scenes [45,53,57,67,92]. | Explore dynamic scenes such as dynamic obstacle avoidance, road-damage and road-corner detection, and far-field object recognition; integrate adaptive frameworks such as RL. |
| Technical | Energy management | High energy consumption from complex sensors and algorithms [40,41]; lack of real-world validation [68]. | Develop efficient DC motor control algorithms minimizing energy use; explore neuromorphic SNN control [88]. |
| Technical | Sensor fusion | Lack of complex multi-sensor fusion; errors from malfunction or noisy data [96,105]. | Employ multimodal sensor fusion (LiDAR, RGB, thermal, radar); apply transfer learning for robustness [62,75]. |
| Technical | Higher dimensionality | Controller design is difficult for systems with standing/docking actuators [95,104]. | Use model-free reinforcement learning and transfer learning to manage higher-dimensional control [118]. |
| Technical | Computational requirements | Controller update rates lag behind sensor data sampling; hardware limits [41,71]. | Apply model compression, quantization, tinyML, or multi-rate fusion for real-time execution [110]. |
| Ethical | User safety and preventing accidents | Lack of validation across diverse users and environments [23,24]; absence of ethical safety guidelines. | Establish ethical and regulatory standards for safety certification, informed consent, and user transparency [115]. |
| Ethical | Privacy issues and data collection | Lack of research methodologies focusing on securing sensitive neural and biological data [51,52]. | Implement homomorphic encryption, decentralized security (blockchain), and anonymization [116]. |
| User-centered | Personalization for individual needs | Variation in user capability is often ignored [117]; users may require training to adapt to the proposed systems. | Adaptive autonomy using reinforcement learning and hybrid control [25,90]. |