Evolution and Emerging Trends in Intelligent Wheelchair Control: A Comprehensive Review
Abstract
1. Introduction
2. Early Control Systems and Literature Search Methodology
2.1. Early Control Mechanisms (Traditional/Manual)
2.2. Semi-Automated and Early Intelligent Systems
2.3. Search Strategy
- “Intelligent wheelchair” AND (“control” OR “navigation” OR “autonomy”)
- “Wheelchair” AND (“machine learning” OR “reinforcement learning” OR “sensor fusion”)
- “Smart wheelchair” AND (“path planning” OR “obstacle avoidance” OR “human–robot interaction”)
- “Assistive mobility” AND (“MPC” OR “PID” OR “fuzzy” OR “data-driven control”)
2.3.1. Inclusion and Exclusion Criteria
2.3.2. Screening and Selection Procedure
3. Intelligent Control Systems in Wheelchairs
3.1. Model-Driven Approaches
3.1.1. Control Theory-Based Methods
3.1.2. Rule-Based Systems
3.2. Emerging Data-Driven Approaches
3.2.1. Supervised Learning
- In user-intent-recognition-based wheelchair control systems, models are trained to follow user commands. Commands can be direct, such as joystick manipulation or voice control [47,50], or indirect, requiring interpretation of biological signals. BCIs using EEG signals have been developed to enable wheelchair control for users with severe motor disabilities [51,52]. Similarly, gaze-based control systems use eye-tracking technology to interpret user intent from eye movements, offering hands-free operation for individuals with limited motor function [52]. Beyond these approaches, researchers have explored fully vision-driven systems for users with severe disabilities. For example, ref. [53] proposes an autonomous wheelchair controlled solely through eye orientation. Voice-controlled systems have gained popularity due to their intuitive nature and ease of implementation, allowing users to navigate through vocal commands while the system intelligently manages collision avoidance [50,51]. Semi-automatic wheelchair systems aim to assist users as needed by inferring their intentions from joystick commands and adjusting control signals to ensure safety [47,54]. In these implementations, users actively control the wheelchair, and the system intervenes only when necessary to prevent collisions or ensure safe navigation. Recent work has also examined how to improve the efficiency of EEG-based decision-making in wheelchair control. For example, ref. [55] simulates a Pac-Man-style indoor navigation task operated solely through thought commands using a visual oddball paradigm.
- Environment perception systems use deep learning for semantic segmentation and object recognition to help users navigate both indoor and outdoor environments [56]. Visual servoing approaches have been developed for autonomous corridor following and doorway passing, using computer vision to extract environmental features such as vanishing points and wall boundaries [57]. These systems demonstrate the ability to navigate safely while enhancing mobility for users with visual or cognitive impairments. Hybrid control architectures, in which control is shared between the user and the autonomous system, have gained significant traction [24,58,59,60]. In hybrid architectures, learning-based algorithms typically integrate sensor inputs, such as LiDAR and cameras, with user-intent signals (for example, joystick, voice, or EEG) to enable the controller to adapt to dynamic environments. Figure 2 represents a simple hybrid model formation. As shown, the control mode selector decides whether the user or the intelligent controller drives the wheelchair; this selection can be made manually based on the user’s intent or decided autonomously. In an automated hybrid approach, a trained model continuously updates the shared-control ratio between the human and autonomous subsystems, ensuring smoother transitions and reducing decision-making delay. Research has shown that hybrid navigation decision control mechanisms combining user commands with autonomous navigation improve safety and reduce cognitive load by intelligently switching control based on environmental and wheelchair parameters [24,60]. Comparative studies have evaluated different control paradigms, including signal filtering, signal blending, and autonomy switching, demonstrating the importance of matching control strategies to user capabilities and preferences [24].
- Multimodal interfaces combine multiple input methods (e.g., human signals, onboard sensors, cameras, LiDAR, radar) to provide flexible and robust control options. Meena et al. [61] proposed a multimodal interface that addressed the “Midas touch” problem in gaze-controlled wheelchairs by integrating additional input modalities. Multimodal systems are becoming increasingly popular because of their enhanced reliability and accuracy, as decisions can be based on multiple factors. Recent implementations integrate occupant, wheelchair, and environment sensing to support multimodal control [62]. For example, Cui et al. designed an IoT-enabled wheelchair that integrates a PAJ7620 gesture sensor for hand movements, GPS and inertial sensors for location and orientation, and LiDAR plus environmental sensors (temperature, humidity, and light) for road and climate awareness [62]. Their platform allows users to steer with a conventional handle, perform gestures, or issue commands via a mobile app, with sensor data transmitted over MQTT so caregivers can monitor or control the chair remotely [62]. Such multimodal architectures illustrate how modern systems fuse user perception, state estimation, and environmental awareness. To reduce cognitive load and enable safe navigation, these systems increasingly adopt shared-control strategies. Occupant-state detection can rely on head-pose or eye tracking, as well as EMG or EEG signals, to infer user intent [51,52,62]. At the same time, onboard sensors (inertial, LiDAR, vision, range) support fully autonomous navigation. Shared-control frameworks use MPC to blend joystick commands with sensor inputs, adjusting the weighting between user and autonomous plans to ensure collision-free trajectories while respecting user intent [40,41,60,63].
Agent-based architectures have also been proposed, utilizing multiple specialized agents for tasks such as voice recognition, EEG analysis, collision detection, and accelerometer monitoring to enable comprehensive wheelchair control [51]. These examples highlight the rich spectrum of control signals and software frameworks, from joysticks, voice commands, and gestures to EEG-driven interfaces, MPC, and neuro-fuzzy algorithms [50]. More problem-specific research was conducted in [64], where a health-monitoring setup was incorporated into the intelligent wheelchair system to provide live health data and alerts. Motivated by experiences during the COVID-19 pandemic, the researchers equipped the smart wheelchair with mobile application-based health monitoring, and the results showed that the system successfully monitored and transmitted the condition of the assigned patient.
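To make the shared-control blending described above concrete, the following is a minimal sketch of a linear blend between a user command and an autonomous plan, with user authority scaled by obstacle clearance. The function names, thresholds, and the linear authority rule are illustrative assumptions for this sketch, not the implementation of any cited system.

```python
def blend_commands(user_cmd, auto_cmd, alpha):
    """Linearly blend user and autonomous (v, omega) commands.

    alpha = 1.0 gives full user authority; alpha = 0.0 full autonomy.
    """
    v = alpha * user_cmd[0] + (1.0 - alpha) * auto_cmd[0]
    w = alpha * user_cmd[1] + (1.0 - alpha) * auto_cmd[1]
    return (v, w)


def authority_from_clearance(clearance_m, safe_m=1.5, stop_m=0.3):
    """Map obstacle clearance to a user-authority weight in [0, 1].

    Full user control beyond safe_m; authority shrinks linearly and
    reaches zero at stop_m, where the autonomous plan takes over.
    """
    if clearance_m >= safe_m:
        return 1.0
    if clearance_m <= stop_m:
        return 0.0
    return (clearance_m - stop_m) / (safe_m - stop_m)


# Example: the user steers straight toward a wall 0.9 m away, while the
# local planner proposes a slower, turning, collision-free command.
user = (1.0, 0.0)   # (v m/s, omega rad/s) from the joystick
auto = (0.4, 0.6)   # command from the autonomous local planner
alpha = authority_from_clearance(0.9)
v, w = blend_commands(user, auto, alpha)
```

Smoother authority schedules (e.g., sigmoid weighting, or an MPC cost that penalizes deviation from the user command) follow the same pattern of continuously trading authority rather than hard-switching modes.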
3.2.2. Unsupervised Learning
3.2.3. Reinforcement Learning in Intelligent Wheelchairs
3.3. Hierarchical Taxonomy
- Low-level control: controllers that act directly on motors and actuators to regulate velocities, torques, and positions. Typical methods include PID and related classical controllers, model predictive control for trajectory tracking, and stability-oriented local designs.
- Mid-level control: controllers responsible for path planning, obstacle avoidance, localization, and dynamic safety margins. These modules transform user commands into feasible trajectories and enable obstacle avoidance, using sensor fusion, optimization-based MPC, and several data-driven methods for perception and decision-making.
- High-level control: algorithms that interpret user intent, allocate authority between user and autonomy, and manage human–robot interaction. Examples include brain–computer interfaces, gaze and gesture-based interfaces, shared control schemes, and reinforcement learning frameworks that adapt behavior to user preferences and context.
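The three layers above can be illustrated as a minimal pipeline in which a high-level intent is turned into a mid-level speed reference and then a low-level motor effort. All class and function names, gains, and the simple slow-down profile here are hypothetical choices for illustration, not taken from a specific cited controller.

```python
class PIDVelocityController:
    """Low level: regulate wheel velocity toward a setpoint."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv


def plan_velocity(goal_dist_m, v_max=1.0, slow_radius_m=2.0):
    """Mid level: speed profile that slows linearly near the goal."""
    return v_max * min(1.0, goal_dist_m / slow_radius_m)


def interpret_intent(command):
    """High level: map a discrete user command to a goal distance (m)."""
    goals = {"forward": 5.0, "stop": 0.0}
    return goals.get(command, 0.0)


# Pipeline: user intent -> desired speed -> motor effort.
goal = interpret_intent("forward")      # goal 5.0 m ahead
v_ref = plan_velocity(goal)             # capped at v_max
pid = PIDVelocityController(kp=2.0, ki=0.5, kd=0.1, dt=0.05)
effort = pid.step(v_ref, measured=0.0)  # first control action from rest
```

In a real system each layer runs at its own rate (the PID loop fastest, intent interpretation slowest), and the mid level would also enforce obstacle-dependent safety margins before passing a reference downward.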
4. Key Technologies and Components
4.1. Sensors and Perception Systems
4.2. Actuators and Mechanics
4.3. Human–Machine Interface (HMI)
5. Applications and Functionalities of Intelligent Wheelchairs
5.1. Automatic Navigation and Path Planning
5.2. Obstacle Avoidance
5.3. Decision-Making and User-Intent Recognition
5.4. Safety and Reliability Features
5.5. Human–Robot Interaction
6. Current Limitations and Future Directions
6.1. Dynamic Scene
6.1.1. Limitation
6.1.2. Future Direction
6.2. Energy Management
6.2.1. Limitation
6.2.2. Future Direction
6.3. Sensor Fusion
6.3.1. Limitation
6.3.2. Future Direction
6.4. Higher Dimensionality
6.4.1. Limitation
6.4.2. Future Direction
6.5. Computational Requirements
6.5.1. Limitation
6.5.2. Future Direction
6.6. User Safety and Preventing Accidents
6.6.1. Limitation
6.6.2. Future Direction
6.7. Privacy Issues with Data Collection
6.7.1. Limitation
6.7.2. Future Direction
6.8. Personalization for Individual Needs
6.8.1. Limitation
6.8.2. Future Direction
7. Discussion and Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
| AI | Artificial Intelligence |
| ANFIS | Adaptive Neuro-Fuzzy Inference System |
| AR | Augmented Reality |
| ATRS | Automated Transport and Retrieval System |
| BCI | Brain–Computer Interface |
| BMS | Battery Management System |
| CNN | Convolutional Neural Networks |
| DDS | Driving Decision Strategy |
| DNN | Deep Neural Network |
| eMPC | explicit Model Predictive Controller |
| EEG | Electroencephalography |
| EMG | Electromyography |
| EMS | Energy Management Strategy |
| EW | Electric Wheelchair |
| FPMPC | Fuzzy Potential Model Predictive Control |
| GPS | Global Positioning System |
| GA | Genetic Algorithm |
| HMI | Human–Machine Interface |
| HRVO | Hybrid Reciprocal Velocity Obstacle |
| ICA | Independent Component Analysis |
| IMU | Inertial Measurement Units |
| IoT | Internet of Things |
| LSTM | Long Short Term Memory |
| LiDAR | Light Detection and Ranging |
| LDR | Laser Data Receivers |
| MPC | Model Predictive Control |
| ML | Machine Learning |
| NLP | Natural Language Processing |
| PID | Proportional Integral Derivative |
| PI | Proportional Integral |
| ROS | Robot Operating System |
| RADAR | Radio Detection And Ranging |
| RGB-D | Red Green Blue – Depth |
| RL | Reinforcement Learning |
| SONAR | Sound Navigation and Ranging |
| SAC | Soft Actor Critic |
| SIP | Spatial Information Processor |
| SNN | Spiking Neural Network |
| SVM | Support Vector Machine |
| SLAM | Simultaneous Localization And Mapping |
| SSVEP | Steady State Visually Evoked Potential |
| VFH | Vector Field Histogram |
| YOLO | You Only Look Once |
References
- HISTORY.PHYSIO. History of the Wheelchair. Available online: https://history.physio/history-of-the-wheelchair/ (accessed on 23 October 2025).
- University of Toronto Faculty of Applied Science & Engineering. The Maker: George Klein and the First Electric Wheelchair. Available online: https://news.engineering.utoronto.ca/maker-george-klein-first-electric-wheelchair/ (accessed on 15 December 2025).
- Tremblay, M. Going back to Civvy Street: A historical account of the impact of the Everest and Jennings wheelchair for Canadian World War II veterans with spinal cord injury. Disabil. Soc. 1996, 11, 149–170. [Google Scholar] [CrossRef]
- Simpson, R.C. Smart wheelchairs: A literature review. J. Rehabil. Res. Dev. 2005, 42, 423–436. [Google Scholar] [CrossRef]
- Motion Wheelchair. Standing Wheelchairs. Available online: https://motionwheelchairs.com.au/collections/standing-wheelchairs (accessed on 23 October 2025).
- Toyota Motor East Japan Inc. All Wheel Drive Powered Wheelchair. Available online: https://www.toyota-ej.co.jp/english/products/patrafour.html (accessed on 23 October 2025).
- Leaman, J.; La, H.M. iChair: Intelligent powerchair for severely disabled people. In Proceedings of the ISSAT International Conference on Modeling of Complex Systems and Environments (MCSE), Da Nang, Vietnam, 8–10 June 2015; pp. 8–10. [Google Scholar]
- Michele Chandler. America’s Next Great Inventions. The Mercury News. Available online: https://www.mercurynews.com/2007/03/21/americas-next-great-inventions/ (accessed on 23 October 2025).
- Itani, K.; De Bernardinis, A. Review on new-generation batteries technologies: Trends and future directions. Energies 2023, 16, 7530. [Google Scholar] [CrossRef]
- González, M.; Anseán, D. Advanced Battery Technologies: New Applications and Management Systems; MDPI: Basel, Switzerland, 2021. [Google Scholar]
- Biba, J. 9 New Battery Technologies to Watch. Available online: https://builtin.com/hardware/new-battery-technologies (accessed on 23 October 2025).
- Leaman, J.; La, H.M. A comprehensive review of smart wheelchairs: Past, present, and future. IEEE Trans. Hum.-Mach. Syst. 2017, 47, 486–499. [Google Scholar] [CrossRef]
- Gupta, C.; Gill, N.S.; Gulia, P.; Kumar, A.; Karamti, H.; Moges, D.M. An optimized YOLO NAS based framework for realtime object detection. Sci. Rep. 2025, 15, 32903. [Google Scholar] [CrossRef]
- Murat, A.A.; Kiran, M.S. A comprehensive review on YOLO versions for object detection. Eng. Sci. Technol. Int. J. 2025, 70, 102161. [Google Scholar] [CrossRef]
- Maheswari, M.M.P.; Karthick, A.; Perumal, A. A Review on Smart Wheelchair Monitoring and Controlling System. Int. J. Res. Anal. Rev. (IJRAR) 2022, 9, 49–54. [Google Scholar]
- Sahoo, S.K.; Choudhury, B.B. A Review on Human-Robot Interaction and User Experience in Smart Robotic Wheelchairs. J. Technol. Innov. Energy 2023, 2, 39–55. [Google Scholar] [CrossRef]
- Science Museum. The History of the Wheelchair. Available online: https://www.sciencemuseum.org.uk/objects-and-stories/history-wheelchair (accessed on 23 October 2025).
- Kwok, T. Give Me Gout or Give Me Death: The Rise of Gout in the Eighteenth Century. In Proceedings of the 17th Annual History of Medicine Days, Calgary, AB, Canada, 7–8 March 2008. [Google Scholar]
- van der Woude, L.H.V.; de Groot, S.; Janssen, T.W.J. Manual wheelchairs: Research and innovation in rehabilitation, sports, daily life and health. Med. Eng. Phys. 2006, 28, 905–915. [Google Scholar] [CrossRef]
- Flemmer, C.L.; Flemmer, R.C. A review of manual wheelchairs. Disabil. Rehabil. Assist. Technol. 2016, 11, 177–187. [Google Scholar] [CrossRef] [PubMed]
- Tomlinson, J.D. Managing Maneuverability and Rear Stability of Adjustable Manual Wheelchairs: An Update. Phys. Ther. 2000, 80, 904–911. [Google Scholar] [CrossRef]
- Domingues, I.; Pinheiro, J.; Silveira, J.; Francisco, P.; Jutai, J.; Correia Martins, A. Psychosocial Impact of Powered Wheelchair, Users’ Satisfaction and Their Relation to Social Participation. Technologies 2019, 7, 73. [Google Scholar] [CrossRef]
- Kundu, A.S.; Mazumder, O.; Lenka, P.K.; Bhaumik, S. Design and Performance Evaluation of 4 Wheeled Omni Wheelchair with Reduced Slip and Vibration. Procedia Comput. Sci. 2017, 105, 289–295. [Google Scholar] [CrossRef]
- Erdogan, A.; Argall, B.D. The Effect of Robotic Wheelchair Control Paradigm and Interface on User Performance, Effort and Preference: An Experimental Assessment. Robot. Auton. Syst. 2017, 94, 282–297. [Google Scholar] [CrossRef]
- Oliver, S.; Khan, A. Design and Evaluation of an Alternative Wheelchair Control System for Dexterity Disabilities. Healthc. Technol. Lett. 2019, 6, 109–114. [Google Scholar] [CrossRef]
- Woods, B.; Watson, N. A Short History of Powered Wheelchairs. Assist. Technol. 2003, 15, 164–180. [Google Scholar] [CrossRef] [PubMed]
- Cooper, R.; Widman, L.; Jones, D.; Robertson, R.; Ster, J. Force sensing control for electric powered wheelchairs. IEEE Trans. Control Syst. Technol. 2000, 8, 112–117. [Google Scholar] [CrossRef]
- Desai, S.; Mantha, S.S.; Phalle, V.M. Advances in smart wheelchair technology. In Proceedings of the 2017 International Conference on Nascent Technologies in Engineering (ICNTE), Mumbai, India, 27–28 January 2017; pp. 1–7. [Google Scholar] [CrossRef]
- Yanco, H.A. Wheelesley: A Robotic Wheelchair System: Indoor Navigation and User Interface. In Assistive Technology and Artificial Intelligence; Springer: Berlin/Heidelberg, Germany, 1998; pp. 256–268. [Google Scholar]
- Gomi, T. The TAO Project: Intelligent wheelchairs for the handicapped. AAAI Tech. Rep. 1996, 5, 28–37. [Google Scholar]
- Pires, G.; Nunes, U.; de Almeida, A. RobChair—A Semi-Autonomous Wheelchair for Disabled People. In Proceedings of the 3rd IFAC Symposium on Intelligent Autonomous Vehicles 1998 (IAV’98), Madrid, Spain, 25–27 March 1998; IFAC Proceedings Volumes. Volume 31, pp. 509–513. [Google Scholar] [CrossRef]
- Levine, S.; Bell, D.; Jaros, L.; Simpson, R.; Koren, Y.; Borenstein, J. The NavChair Assistive Wheelchair Navigation System. IEEE Trans. Rehabil. Eng. 1999, 7, 443–451. [Google Scholar] [CrossRef]
- Luo, R.; Chen, T.M.; Lin, M.H. Automatic guided intelligent wheelchair system using hierarchical grey-fuzzy motion decision-making algorithms. In Proceedings of the 1999 IEEE/RSJ International Conference on Intelligent Robots and Systems. Human and Environment Friendly Robots with High Intelligence and Emotional Quotients (Cat. No.99CH36289), Kyongju, Republic of Korea, 17–21 October 1999; Volume 2, pp. 900–905. [Google Scholar] [CrossRef]
- Shamseldin, M.A.; Khaled, E.; Youssef, A.; Mohamed, D.; Ahmed, S.; Hesham, A.; Elkodama, A.; Badran, M. A New Design Identification and Control Based on GA Optimization for An Autonomous Wheelchair. Robotics 2022, 11, 101. [Google Scholar] [CrossRef]
- Balambica, V.; Anto, A.; Achudhan, M.; Deepak, V.; Juzer, M.; Selvan, T. PID sensor controlled automatic wheelchair for physically disabled people. Turk. J. Comput. Math. Educ. 2021, 12, 5848–5857. [Google Scholar]
- Castillo, B.D.; Kuo, Y.F.; Chou, J.J. Novel design of a wheelchair with stair climbing capabilities. In Proceedings of the 2015 International Conference on Intelligent Informatics and Biomedical Sciences (ICIIBMS), Okinawa, Japan, 28–30 November 2015; pp. 208–215. [Google Scholar]
- Poma, S.Z.; Serna, A.J.C.; Delion, J.C.G.; de Leon, E.J.C.; Larroca, F.H.P. Design and implementation of an automatic control system applying PID in the positioning of an electric wheelchair. In Proceedings of the 4th South American International Industrial Engineering and Operations Management Conference, Lima, Peru, 9–11 May 2023. [Google Scholar]
- Copot, C.; Muresan, C.; Ionescu, C.M.; Vanlanduit, S.; De Keyser, R. Calibration of UR10 robot controller through simple auto-tuning approach. Robotics 2018, 7, 35. [Google Scholar] [CrossRef]
- Mahadevaswamy, U.B.; Rohith, M.N.; Arivazhagan, R. Development of a Semi-Automatic Wheelchair System for Improved Mobility and User Independence. In Proceedings of the 2024 3rd International Conference on Automation, Computing and Renewable Systems (ICACRS), Pudukkottai, India, 4–6 December 2024; pp. 36–43. [Google Scholar] [CrossRef]
- Kawaguchi, E.; Sekiguchi, K.; Nonaka, K. Self-driving Electric Wheelchair in Crowded Environments Using a Fuzzy Potential Model Predictive Control. IFAC-PapersOnLine 2023, 56, 11827–11833. [Google Scholar] [CrossRef]
- Ceravolo, E.; Gabellone, M.; Farina, M.; Bascetta, L.; Matteucci, M. Model Predictive Control of an Autonomous Wheelchair. IFAC-PapersOnLine 2017, 50, 9821–9826. [Google Scholar] [CrossRef]
- Blume, S.; Sieberg, P.M.; Maas, N.; Schramm, D. Neural roll angle estimation in a model predictive control system. In Proceedings of the 2019 IEEE Intelligent Transportation Systems Conference (ITSC), Auckland, New Zealand, 27–30 October 2019; pp. 1625–1630. [Google Scholar]
- Bentaleb, T.; Nguyen, V.T.; Sentouh, C.; Conreur, G.; Poulain, T.; Pudlo, P. A real-time multi-objective predictive control strategy for wheelchair ergometer platform. In Proceedings of the 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC), Bari, Italy, 6–9 October 2019; pp. 2397–2404. [Google Scholar]
- Nasir, N.M.; Ghani, N.M.A.; Nasir, A.N.K.; Ahmad, M.A.; Tokhi, M.O. Neuro-modelling and fuzzy logic control of a two-wheeled wheelchair system. J. Low Freq. Noise Vib. Act. Control 2025, 44, 588–602. [Google Scholar] [CrossRef]
- Wu, W.; Ma, X.; Gao, X. The application of fuzzy PID control in intelligent wheelchair system. In Proceedings of the 2010 Second WRI Global Congress on Intelligent Systems, Wuhan, China, 16–17 December 2010; Volume 1, pp. 230–232. [Google Scholar]
- Aulia, M.; Arifin, A.; Purwanto, D. Control of Wheelchair on the Ramp Trajectory Using Bioelectric Impedance with Fuzzy-PID Controller. In Proceedings of the 1st International Conference on Electronics, Biomedical Engineering, and Health Informatics: ICEBEHI, Surabaya, Indonesia, 8–9 October 2020; Springer: Berlin/Heidelberg, Germany, 2021; pp. 421–437. [Google Scholar]
- Wang, J.; Chen, W.; Liao, W. An Improved Localization and Navigation Method for Intelligent Wheelchair in Narrow and Crowded Environments. In Proceedings of the 13th IFAC Symposium on Large Scale Complex Systems: Theory and Applications, Shanghai, China, 7–10 July 2013; IFAC Proceedings Volumes. Volume 46, pp. 389–394. [Google Scholar]
- Chen, C.L.; Chen, W.C.; Chang, F.Y. Hybrid learning algorithm for Gaussian potential function networks. IEE Proc. Control Theory Appl. 1993, 140, 442–448. [Google Scholar] [CrossRef]
- Sutikno; Anam, K.; Sujanarko, B. Design of electrical wheelchair navigation for disabled patient using convolutional neural networks on Raspberry Pi 3. AIP Conf. Proc. 2020, 2278, 020048. [Google Scholar]
- Abdulghani, M.M.; Al-Aubidy, K.M.; Ali, M.M.; Hamarsheh, Q.J. Wheelchair Neuro Fuzzy Control and Tracking System Based on Voice Recognition. Sensors 2020, 20, 2872. [Google Scholar] [CrossRef]
- Barriuso, A.L.; Pérez-Marcos, J.; Villarrubia González, D.J.; de Paz, J.F. Agent-Based Intelligent Interface for Wheelchair Movement Control. Sensors 2018, 18, 1511. [Google Scholar] [CrossRef] [PubMed]
- Maule, L.; Luchetti, A.; Zanetti, M.; Tomasin, P.; Pertile, M.; Tavernini, M.; Guandalini, G.M.; De Cecco, M. RoboEye, an Efficient, Reliable and Safe Semi-Autonomous Gaze Driven Wheelchair for Domestic Use. Technologies 2021, 9, 16. [Google Scholar] [CrossRef]
- Masud, U.; Abdualaziz Almolhis, N.; Alhazmi, A.; Ramakrishnan, J.; Ul Islam, F.; Razzaq Farooqi, A. Smart Wheelchair Controlled Through a Vision-Based Autonomous System. IEEE Access 2024, 12, 65099–65116. [Google Scholar] [CrossRef]
- Carlson, T.; Demiris, Y. Collaborative control for a robotic wheelchair: Evaluation of performance, attention, and workload. IEEE Trans. Syst. Man Cybern. Part B (Cybern.) 2012, 42, 876–888. [Google Scholar] [CrossRef] [PubMed]
- Heskebeck, F.; Tufvesson, P. Automatic Control of a Wheelchair Using a Brain Computer Interface and Real-Time Decision-Making. In Proceedings of the 2024 European Control Conference (ECC), Stockholm, Sweden, 25–28 June 2024; pp. 3892–3897. [Google Scholar] [CrossRef]
- Mohamed, E.; Sirlantzis, K.; Howells, G. Indoor/Outdoor Semantic Segmentation Using Deep Learning for Visually Impaired Wheelchair Users. IEEE Access 2021, 9, 147914–147932. [Google Scholar] [CrossRef]
- Pasteau, F.; Narayanan, V.K.; Babel, M.; Chaumette, F. A Visual Servoing Approach for Autonomous Corridor Following and Doorway Passing in a Wheelchair. Robot. Auton. Syst. 2016, 75, 28–40. [Google Scholar] [CrossRef]
- Bi, L.; Fan, X.-A.; Jie, K.; Teng, T.; Ding, H.; Liu, Y. Using a head-up display-based steady-state visually evoked potential brain–computer interface to control a simulated vehicle. IEEE Trans. Intell. Transp. Syst. 2013, 15, 959–966. [Google Scholar] [CrossRef]
- Lei, S.; Li, Z. Fusing Visual Tracking and Navigation for Autonomous Control of an Intelligent Wheelchair. In Proceedings of the 3rd IFAC Conference on Intelligent Control and Automation Science ICONS, Chengdu, China, 2–4 September 2013; Volume 46, pp. 549–554. [Google Scholar] [CrossRef]
- Nguyen, V.T.; Sentouh, C.; Pudlo, P.; Popieul, J.C. Model-based Shared Control Approach for a Power Wheelchair Driving Assistance System Using a Force Feedback Joystick. Front. Control Eng. 2023, 4, 1058802. [Google Scholar] [CrossRef]
- Meena, Y.K.; Cecotti, H.; Wong-Lin, K.; Prasad, G. A multimodal interface to solve the Midas-Touch problem in gaze-controlled wheelchair. In Proceedings of the 2017 39th annual international conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Jeju Island, Republic of Korea, 11–15 July 2017; pp. 905–908. [Google Scholar]
- Cui, J.; Cui, L.; Huang, Z.; Li, X.; Han, F. IoT Wheelchair Control System Based on Multi-Mode Sensing and Human-Machine Interaction. Micromachines 2022, 13, 1108. [Google Scholar] [CrossRef] [PubMed]
- Fadelli, I. Smart Robotic Wheelchair Offers Enhanced Autonomy and Control. TechXplore. 2025. Available online: https://techxplore.com/news/2025-02-smart-robotic-wheelchair-autonomy.html (accessed on 15 December 2025).
- D’Souza, L.; Kulal, B.G.; Shrinivas, U.; Shetty, P.M.; Singh, C.; Bekal, A. Design & Implementation of Novel AI Based Hand Gestured Smart Wheelchair. In Proceedings of the 2023 International Conference on Applied Intelligence and Sustainable Computing (ICAISC), Dharwad, India, 16–17 June 2023; pp. 1–6. [Google Scholar]
- Zolotas, M.; Demiris, Y. Disentangled sequence clustering for human intention inference. In Proceedings of the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Kyoto, Japan, 23–27 October 2022; pp. 9814–9820. [Google Scholar]
- Dev, A.; Chowdhury, K.R.; Schoen, M.P. Q-Learning Based Control for Swing-Up and Balancing of Inverted Pendulum. In Proceedings of the 2024 Intermountain Engineering, Technology and Computing (IETC), Logan, UT, USA, 13–14 May 2024; pp. 209–214. [Google Scholar]
- Done, C.; Palmer, J.; Oakey, K.; Gupta, A.; Thiros, C.; Franklin, J.; Schoen, M.P. Reinforcement Learning-Driven Prosthetic Hand Actuation in a Virtual Environment Using Unity ML-Agents. Virtual Worlds 2025, 4, 53. [Google Scholar]
- Gupta, A.; Schoen, M.P. Analysis of Simulated Autonomous Wheelchair Driving using GA-PID and RL based Controllers. In Proceedings of the 2025 Intermountain Engineering, Technology and Computing (IETC), Orem, UT, USA, 9–10 May 2025; pp. 1–6. [Google Scholar] [CrossRef]
- Liu, S.; See, K.C.; Ngiam, K.Y.; Celi, L.A.; Sun, X.; Feng, M. Reinforcement learning for clinical decision support in critical care: Comprehensive review. J. Med. Internet Res. 2020, 22, e18477. [Google Scholar] [CrossRef]
- Chatzidimitriadis, S.; Sirlantzis, K. Deep reinforcement learning for autonomous navigation in robotic wheelchairs. In Proceedings of the International Conference on Pattern Recognition and Artificial Intelligence, Xiamen, China, 23–25 September 2022; Springer: Cham, Switzerland, 2022; pp. 271–282. [Google Scholar]
- Ryu, H.Y.; Kwon, J.S.; Lim, J.H.; Kim, A.H.; Baek, S.J.; Kim, J.W. Development of an Autonomous Driving Smart Wheelchair for the Physically Weak. Appl. Sci. 2022, 12, 377. [Google Scholar] [CrossRef]
- Torres-Vega, J.G.; Cuevas-Tello, J.C.; Puente, C.; Nunez-Varela, J.; Soubervielle-Montalvo, C. Towards an Intelligent Electric Wheelchair: Computer Vision Module. In Intelligent Sustainable Systems: Selected Papers of WorldS4 2022; Springer: Berlin/Heidelberg, Germany, 2023; Volume 1, pp. 253–261. [Google Scholar]
- Kim, E.Y. Wheelchair navigation system for disabled and elderly people. Sensors 2016, 16, 1806. [Google Scholar] [CrossRef]
- Wang, S.; Chen, L.; Hu, H.; McDonald-Maier, K. Doorway passing of an intelligent wheelchair by dynamically generating bezier curve trajectory. In Proceedings of the 2012 IEEE International Conference on Robotics and Biomimetics (ROBIO), Guangzhou, China, 11–14 December 2012; pp. 1206–1211. [Google Scholar]
- Zhang, Z.; Xu, P.; Wu, C.; Yu, H. Smart Nursing Wheelchairs: A New Trend in Assisted Care and the Future of Multifunctional Integration. Biomimetics 2024, 9, 492. [Google Scholar] [CrossRef]
- Sarker, M.A.B.; Sola-Thomas, E.; Jamieson, C.; Imtiaz, M.H. Autonomous Movement of Wheelchair by Cameras and YOLOv7. Eng. Proc. 2023, 31, 60. [Google Scholar] [CrossRef]
- Chatterjee, S.; Majumdar, A.; Roy, S. A New Control Approach for a Multi-Controlled Wheelchair Utilizing Deep Learning Framework and Raspberry Pi. IETE J. Res. 2025, 71, 1338–1355. [Google Scholar] [CrossRef]
- Zhou, Y.; Yan, Z.; Yang, Y.; Wang, Z.; Lu, P.; Yuan, P.F.; He, B. Bioinspired sensors and applications in intelligent robots: A review. Robot. Intell. Autom. 2024, 44, 215–228. [Google Scholar] [CrossRef]
- Haddad, M.; Sanders, D.; Tewkesbury, G.; Langner, M.; Simandjuntak, S. Intelligent user interface to control a powered wheelchair using infrared sensors. In Proceedings of the SAI Intelligent Systems Conference, Virtual, 15–16 July; Springer: Cham, Switzerland, 2021; pp. 640–649. [Google Scholar]
- Rodrigues, N.; Sousa, A.; Reis, L.P.; Coelho, A. Intelligent wheelchairs rolling in pairs using reinforcement learning. In Proceedings of the Iberian Robotics Conference, Zaragoza, Spain, 23–25 November 2022; Springer: Cham, Switzerland, 2022; pp. 274–285. [Google Scholar]
- Xia, X.; Hashemi, E.; Xiong, L.; Khajepour, A.; Xu, N. Autonomous vehicles sideslip angle estimation: Single antenna GNSS/IMU fusion with observability analysis. IEEE Internet Things J. 2021, 8, 14845–14859. [Google Scholar] [CrossRef]
- Zhang, X.; Li, J.; Zhang, R.; Liu, T. A Brain-Controlled and User-Centered Intelligent Wheelchair: A Feasibility Study. Sensors 2024, 24, 3000. [Google Scholar] [CrossRef] [PubMed]
- Kulvicius, T.; Zhang, D.; Poustka, L.; Bölte, S.; Jahn, L.; Flügge, S.; Kraft, M.; Zweckstetter, M.; Nielsen-Saines, K.; Wörgötter, F.; et al. Deep learning empowered sensor fusion to improve infant movement classification. arXiv 2024, arXiv:2406.09014. [Google Scholar] [CrossRef]
- Fariña, B.; Toledo, J.; Acosta, L. Sensor fusion algorithm selection for an autonomous wheelchair based on EKF/UKF comparison. Int. J. Mech. Eng. Robot. Res. 2023, 12, 1–7. [Google Scholar] [CrossRef]
- Sahu, D.; Khan, G.; Akhter, P. A Survey on Computer Vision and Its Applications. Int. J. Sci. Res. Eng. Manag. 2024, 8, 1–13. [Google Scholar] [CrossRef]
- Farheen, N.; Jaman, G.G.; Schoen, M.P. Object Detection and Navigation Strategy for Obstacle Avoidance Applied to Autonomous Wheel Chair Driving. In Proceedings of the 2022 Intermountain Engineering, Technology and Computing (IETC), Orem, UT, USA, 13–14 May 2022; pp. 1–5. [Google Scholar] [CrossRef]
- Kumari, E.V.; Swetha, K.; Navya, S. A Driving Decision Strategy (DDS) Based on Machine learning for an autonomous vehicle. In Proceedings of the 2022 International Conference on Advancements in Smart, Secure and Intelligent Computing (ASSIC), Bhubaneswar, India, 19–20 November 2022; pp. 1–5. [Google Scholar] [CrossRef]
- Ehrlich, M.; Zaidel, Y.; Weiss, P.L.; Melamed Yekel, A.; Gefen, N.; Supic, L.; Ezra Tsur, E. Adaptive Control of a Wheelchair Mounted Robotic Arm with Neuromorphically Integrated Velocity Readings and Online-Learning. Front. Neurosci. 2022, 16, 1007736. [Google Scholar] [CrossRef]
- El Bouazzaoui, I.; Chghaf, M.; Rodriguez, S.; Nguyen, D.D.; El Ouardi, A. An Extended HOOFR SLAM Algorithm Using IR-D Sensor Data for Outdoor Autonomous Vehicle Localization. J. Intell. Robot. Syst. 2023, 109, 56. [Google Scholar] [CrossRef]
- Williams, T.; Scheutz, M. The State-of-the-Art in Autonomous Wheelchairs Controlled through Natural Language: A Survey. Robot. Auton. Syst. 2017, 96, 171–183. [Google Scholar] [CrossRef]
- Thai, P.Q.; Tai, V.; Tien, L. Design and implementation of an electric wheelchair operating in different terrains. Int. J. Mech. Eng. Robot. Res. 2020, 9, 797–802. [Google Scholar] [CrossRef]
- Hsu, P.E.; Hsu, Y.L.; Lu, J.M.; Chang, C.H. Seat adjustment design of an intelligent robotic wheelchair based on the stewart platform. Int. J. Adv. Robot. Syst. 2013, 10, 168. [Google Scholar] [CrossRef]
- Singkhleewon, N.; Asawasilapakul, T. A development of adjustable standing and automatic stop electric wheelchair prototype operated with a smartphone. Syst. Rev. Pharm. 2020, 11, 564–570. [Google Scholar]
- Sharma, B.; Pillai, B.M.; Borvorntanajanya, K.; Suthakorn, J. Modeling and design of a stair climbing wheelchair with pose estimation and adjustment. J. Intell. Robot. Syst. 2022, 106, 66. [Google Scholar] [CrossRef]
- Ortiz, J.S.; Palacios-Navarro, G.; Andaluz, V.H.; Recalde, L.F. Three-Dimensional Unified Motion Control of a Robotic Standing Wheelchair for Rehabilitation Purposes. Sensors 2021, 21, 3057. [Google Scholar] [CrossRef] [PubMed]
- Iskanderani, A.I.; Tamim, F.R.; Rana, M.M.; Ahmed, W.; Mehedi, I.M.; Aljohani, A.J.; Latif, A.; Shaikh, S.A.L.; Shorfuzzaman, M.; Akther, F.; et al. Voice Controlled Artificial Intelligent Smart Wheelchair. In Proceedings of the 2020 8th International Conference on Intelligent and Advanced Systems (ICIAS), Kuching, Malaysia, 13–15 July 2021; pp. 1–5. [Google Scholar]
- Dev, A.; Rahman, M.A.; Mamun, N. Design of an EEG-Based Brain Controlled Wheelchair for Quadriplegic Patients. In Proceedings of the 2018 3rd International Conference for Convergence in Technology (I2CT), Pune, India, 6–8 April 2018; pp. 1–5. [Google Scholar] [CrossRef]
- Deng, X.; Yu, Z.L.; Lin, C.; Gu, Z.; Li, Y. A bayesian shared control approach for wheelchair robot with brain machine interface. IEEE Trans. Neural Syst. Rehabil. Eng. 2019, 28, 328–338. [Google Scholar] [CrossRef] [PubMed]
- Dang, T.; Mascarich, F.; Khattak, S.; Papachristos, C.; Alexis, K. Graph-based path planning for autonomous robotic exploration in subterranean environments. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 3–8 November 2019; pp. 3105–3112. [Google Scholar]
- Irani, B.; Wang, J.; Chen, W. A localizability constraint-based path planning method for autonomous vehicles. IEEE Trans. Intell. Transp. Syst. 2018, 20, 2593–2604. [Google Scholar] [CrossRef]
- Li, H.; Zhang, Q.; Zhao, D. Deep reinforcement learning-based automatic exploration for navigation in unknown environment. IEEE Trans. Neural Netw. Learn. Syst. 2019, 31, 2064–2076. [Google Scholar] [CrossRef]
- Li, Z.; Xiong, Y.; Zhou, L. ROS-based indoor autonomous exploration and navigation wheelchair. In Proceedings of the 2017 10th International Symposium on Computational Intelligence and Design (ISCID), Hangzhou, China, 9–10 December 2017; Volume 2, pp. 132–135. [Google Scholar]
- Chen, Y.; Ji, H.; Wu, Q.; Wang, L. Design of intelligent wheelchair control system with multi-control integration. In Proceedings of the 2024 IEEE 6th Advanced Information Management, Communicates, Electronic and Automation Control Conference (IMCEC), Chongqing, China, 24–26 May 2024; Volume 6, pp. 911–915. [Google Scholar] [CrossRef]
- Gao, C.; Miller, T.H.; Spletzer, J.R.; Hoffman, I.; Panzarella, T. Autonomous Docking of a Smart Wheelchair for the Automated Transport and Retrieval System (ATRS). J. Field Robot. 2008, 25, 203–222. [Google Scholar] [CrossRef]
- Arnay, R.; Hernández-Aceituno, J.; Toledo, J.; Acosta, L. Laser and optical flow fusion for a non-intrusive obstacle detection system on an intelligent wheelchair. IEEE Sens. J. 2018, 18, 3799–3805. [Google Scholar] [CrossRef]
- Jung, Y.; Kim, Y.; Lee, W.H.; Bang, M.S.; Kim, Y.; Kim, S. Path planning algorithm for an autonomous electric wheelchair in hospitals. IEEE Access 2020, 8, 208199–208213. [Google Scholar] [CrossRef]
- Rozsa, Z.; Sziranyi, T. Object detection from a few LIDAR scanning planes. IEEE Trans. Intell. Veh. 2019, 4, 548–560. [Google Scholar] [CrossRef]
- Ruan, J.; Wu, C.; Cui, H.; Li, W.; Sauer, D.U. Delayed deep deterministic policy gradient-based energy management strategy for overall energy consumption optimization of dual motor electrified powertrain. IEEE Trans. Veh. Technol. 2023, 72, 11415–11427. [Google Scholar] [CrossRef]
- Chen, P.C.; Koh, Y.F. Residual traveling distance estimation of an electric wheelchair. In Proceedings of the 2012 5th International Conference on BioMedical Engineering and Informatics, Chongqing, China, 16–18 October 2012; pp. 790–794. [Google Scholar] [CrossRef]
- Ballotta, L.; Schenato, L.; Carlone, L. Computation-communication trade-offs and sensor selection in real-time estimation for processing networks. IEEE Trans. Netw. Sci. Eng. 2020, 7, 2952–2965. [Google Scholar] [CrossRef]
- Casado, F.E.; Demiris, Y. Federated Learning from Demonstration for Active Assistance to Smart Wheelchair Users. In Proceedings of the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Kyoto, Japan, 23–27 October 2022; pp. 9326–9331. [Google Scholar] [CrossRef]
- Mughal, F.R.; He, J.; Das, B.; Dharejo, F.A.; Zhu, N.; Khan, S.B.; Alzahrani, S. Adaptive federated learning for resource-constrained IoT devices through edge intelligence and multi-edge clustering. Sci. Rep. 2024, 14, 28746. [Google Scholar] [CrossRef] [PubMed]
- He, S.; Zhang, R.; Wang, Q.; Chen, Y.; Yang, T.; Feng, Z.; Zhang, Y.; Shao, M.; Li, Y. A P300-based threshold-free brain switch and its application in wheelchair control. IEEE Trans. Neural Syst. Rehabil. Eng. 2016, 25, 715–725. [Google Scholar] [CrossRef] [PubMed]
- Miller, C.X.; Gebrekristos, T.; Young, M.; Montague, E.; Argall, B. An analysis of human-robot information streams to inform dynamic autonomy allocation. In Proceedings of the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Prague, Czech Republic, 27 September–1 October 2021; pp. 1872–1878. [Google Scholar]
- Wangmo, T.; Lipps, M.; Kressig, R.W.; Ienca, M. Ethical concerns with the use of intelligent assistive technology: Findings from a qualitative study with professional stakeholders. BMC Med. Ethics 2019, 20, 98. [Google Scholar] [CrossRef] [PubMed]
- Alam, F.; Gupta, A.; Saha, A.; Salimullah, S.M. Establishing an Internet of Things (IoT)-enabled solar-powered smart water treatment system. In Proceedings of the 2023 International Conference on Electrical, Computer and Communication Engineering (ECCE), Chittagong, Bangladesh, 23–25 February 2023; pp. 1–6. [Google Scholar]
- Zhang, R.; Li, Y.; Yan, Y.; Zhang, H.; Wu, S.; Yu, T.; Gu, Z. Control of a wheelchair in an indoor environment based on a brain–computer interface and automated navigation. IEEE Trans. Neural Syst. Rehabil. Eng. 2015, 24, 128–139. [Google Scholar] [CrossRef] [PubMed]
- Farheen, N.; Jaman, G.G.; Schoen, M.P. Model-Based Reinforcement Learning with System Identification and Fuzzy Reward. In Proceedings of the 2024 Intermountain Engineering, Technology and Computing (IETC), Logan, UT, USA, 13–14 May 2024; pp. 80–85. [Google Scholar] [CrossRef]





| Area Specification | Inclusion Criteria | Exclusion Criteria |
|---|---|---|
| Scope | | |
| Database | IEEE Xplore/Access, ScienceDirect, Springer, Frontiers, MDPI, SAGE, Wiley Online Library | Other journals and online databases |
| Language | English | Languages other than English |
| Journal/Conference type | Research articles | Books, book chapters, review articles |
| Selected Time Period | 2001–2025 | Before 2001 |
| Digital Library | Initially Selected Articles | After Screening | Included Articles |
|---|---|---|---|
| IEEE | 474 | 116 | 55 |
| MDPI | 228 | 41 | 10 |
| Springer | 330 | 35 | 9 |
| ScienceDirect | 227 | 23 | 4 |
| Frontiers | 18 | 9 | 2 |
| SAGE | 17 | 5 | 1 |
| Wiley Online Library | 42 | 8 | 2 |
| Total | 1336 | 237 | 83 |
| Approach | Algorithm | Input/Output Format | Control Principle | Experimental Validation |
|---|---|---|---|---|
| Model-driven | PID | Input: tracking/state error; Output: control signal to actuators | Linear feedback that minimizes error through proportional, integral, and derivative gain tuning. | DC-motor electric wheelchair achieving smooth angular response and fast reaction [35]; seat-leveling stair-climbing prototype using IMU-based PID [36]; automatic seat-inclination system at 3° [37]. |
| Model-driven | MPC | Input: reference and estimated state; Output: optimized control sequence | Predictive optimization over a dynamic model to anticipate future states. | Ergometer-equipped manual wheelchair using explicit MPC (eMPC) with superior velocity tracking vs. a PI controller [43]; neural MPC for vehicle dynamics control using LSTM prediction [42]. |
| Model-driven | Fuzzy PID | Input: error and change of error; Output: updated PID gains | Rule-based fuzzy inference tunes the PID controller gains online. | Improved steady-state and dynamic performance over conventional PID [45]; stable control within ±10° slope with 80% effectiveness across user weights [46]. |
| Data-driven | Supervised learning (user-intent recognition) | Input: user signals via EEG, voice, joystick, etc.; Output: motion command | Learns a mapping from user signals to control commands from labeled data. | BCI-controlled and gaze-based wheelchair navigation validated on users with motor disabilities [51,52]; voice-controlled navigation with collision avoidance [50]. |
| Data-driven | Supervised learning (environment perception) | Input: camera/LiDAR measurements; Output: waypoints or environment features | Learns object recognition and obstacle avoidance from labeled data. | CNN-based Raspberry Pi wheelchair for indoor–outdoor navigation [72]; visual servoing for autonomous corridor following and doorway passage [56,57]. |
| Data-driven | Reinforcement learning (RL/SAC/deep RL) | Input: observation space; Output: action space | An agent learns an optimal control policy for a nonlinear system through continuous interaction with its surroundings (actions, state transitions, and rewards). | SAC-based RL controller following a predefined path in simulation [68]; deep RL for collision-free navigation [70]; posture adaptation through RL learning optimal seating configurations [71]. |
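
As a concrete illustration of the PID principle summarized in the table, the sketch below drives a first-order motor model to a speed setpoint. The gains, time constant, and simulation settings are illustrative assumptions, not values from the cited studies.

```python
# Minimal discrete PID speed controller for a wheelchair drive motor.
# All parameters (gains, time constant, step size) are illustrative
# assumptions, not values drawn from any cited experiment.

def pid_step(error, state, kp, ki, kd, dt):
    """One PID update; state = (integral, previous_error)."""
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    u = kp * error + ki * integral + kd * derivative
    return u, (integral, error)

def simulate(setpoint=1.0, kp=2.0, ki=1.0, kd=0.05, dt=0.01, steps=2000):
    """Drive an assumed first-order motor model (tau * dv/dt = u - v)
    toward the speed setpoint [m/s] using Euler integration."""
    tau = 0.5            # assumed motor time constant [s]
    v = 0.0              # wheel speed [m/s]
    state = (0.0, 0.0)   # (integral, previous error)
    for _ in range(steps):
        u, state = pid_step(setpoint - v, state, kp, ki, kd, dt)
        v += dt * (u - v) / tau
    return v

if __name__ == "__main__":
    print(f"final speed: {simulate():.3f} m/s")  # converges near 1.0
```

The integral term removes steady-state error, which is why the loop settles at the setpoint rather than just near it, mirroring the "linear feedback, gain tuning" description in the table.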
| Sensor Type | Working Principle | Application | Reference |
|---|---|---|---|
| Ultrasonic sensor | Emit sound waves and measure the time-of-flight of echoes to estimate distance to nearby objects. | Short-range obstacle detection and avoidance. | [50,78] |
| Infrared sensor | Emit infrared light and measure reflections from surfaces to detect nearby objects (or sense temperature radiatively); performance can vary with ambient light and temperature. | Proximity sensing and short-range detection. | [79] |
| 2D LiDAR | Use laser beams to scan a plane and create a two-dimensional map. | Flat map creation and obstacle navigation. | [47,59,75,80] |
| 3D LiDAR | Emit laser pulses to generate detailed three-dimensional point clouds of the environment; provide holistic spatial information but require significant computation. | High-resolution mapping and navigation in complex environments. | [75,80] |
| RGB-D camera | Capture color images and depth information simultaneously, providing both visual context and 3D structure; sensitive to lighting conditions. | Object detection, obstacle recognition, and path following. | [24,52,59,71,72,75] |
| GPS | Receive satellite signals to determine geographic position; good outdoors but limited indoors. | Outdoor navigation and large-scale positioning. | [62,75] |
| IMU | Measure acceleration, orientation, and angular velocity using accelerometers and gyroscopes; cumulative errors lead to drift over time. | Motion tracking, stabilization, and enhancing navigation accuracy. | [51,62,75,81] |
| Sonar sensor | Emit acoustic waves and measure reflections; robust to lighting but provide limited spatial resolution. | Distance measurement and obstacle detection. | [75] |
| Radar and tilt sensors | Use radio waves to measure range/velocity and inclinometers to sense slope angles; can detect ramps and uneven terrain. | Navigating ramps and uneven ground. | [62] |
| Gesture and bio-sensors | Recognize hand/body gestures (e.g., PAJ7620) or biosignals for user interaction. | Occupant perception and gesture-based control. | [51,52,62,82] |
| Environmental sensors | Measure environmental parameters (e.g., temperature, humidity, ambient light; e.g., DHT11, BH1750) to complement navigation sensors by monitoring conditions. | Monitoring environment and adapting control/comfort. | [62] |
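
The ultrasonic row above describes the time-of-flight principle; a minimal sketch of that conversion follows. The speed of sound and the safety threshold are assumptions for illustration, not values from the cited systems.

```python
# Time-of-flight distance estimate for an ultrasonic sensor: the echo
# travels to the object and back, so one-way distance is half the
# round-trip path. Speed of sound (~343 m/s at 20 degrees C) is assumed.

SPEED_OF_SOUND = 343.0  # m/s, assumed ambient conditions

def echo_to_distance(echo_time_s: float) -> float:
    """Convert a round-trip echo time [s] to object distance [m]."""
    return SPEED_OF_SOUND * echo_time_s / 2.0

def obstacle_alert(echo_time_s: float, threshold_m: float = 0.5) -> bool:
    """Flag obstacles closer than a hypothetical safety threshold."""
    return echo_to_distance(echo_time_s) < threshold_m

if __name__ == "__main__":
    # A ~2.9 ms echo corresponds to roughly half a meter.
    print(round(echo_to_distance(0.0029), 3))
```

In practice the assumed sound speed varies with temperature, which is one reason such sensors are confined to the short-range detection role listed in the table.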
| Perception and Control System | Working Principle | Application | Reference |
|---|---|---|---|
| Multi-mode sensor fusion | Integrates occupant, wheelchair, and environment data (e.g., LiDAR, cameras, ultrasonic, GPS, IMU, gesture/bio sensors) to yield comprehensive maps and state estimates. | Autonomous navigation, remote monitoring, occupant state estimation, and environmental awareness. | [47,51,59,62,75,83,84] |
| Visual sensing and computer vision | Utilizes cameras and LiDAR with algorithms (e.g., image-based visual servoing, AR markers, deep nets such as YOLO) to extract features like keypoints, walls, and objects. | Corridor following, doorway passage, object recognition, and semantic navigation. | [47,52,57,59,62,75,76,85,86] |
| Machine learning and neuro-fuzzy control | Applies ML techniques (voice-command classifiers, adaptive/neuro-fuzzy inference systems, spiking neural networks) to process sensor streams (voice, gestures, EEG/EMG) and generate control actions; includes intent prediction. | Voice/gesture recognition, adaptive motor control, and prediction of user intent. | [51,59,62,82,87,88] |
| Model predictive and optimization-based control | Employs model predictive control or similar optimization techniques to predict optimal trajectories based on user inputs and sensor data; cost functions balance safety and user intent; may include fuzzy potentials. | Shared-control navigation, obstacle avoidance, and path optimization. | [40,41,62,63,75,89] |
| Natural language and multimodal interface | Uses speech recognition, natural language processing, and multimodal inputs (e.g., handles, gestures, mobile app) to interpret high-level commands; can include remote IoT connectivity. | Human-friendly control interfaces, remote supervision, and assistance for varied abilities. | [50,51,62,90] |
| Brain–computer interface (BCI) | Acquires and interprets EEG/EMG signals using signal processing and classification (P300, SSVEP, motor imagery) to decode user intention; may employ spiking networks or SVMs. | Mobility control for users with severe motor disabilities. | [51,52,82] |
| Shared control and signal blending | Blends user commands with an autonomous planner to ensure safety; includes filtering/blending/authority switching and frameworks like ROS for control arbitration. | Balancing user input and autonomy to reduce cognitive load and improve safety. | [24,41,60,63,75] |
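
The shared-control row above describes blending user commands with an autonomous planner under an authority-switching rule. The sketch below shows one possible linear blend; the distance thresholds and the blending law are illustrative assumptions, not a published control law from the cited works.

```python
# Sketch of shared-control signal blending: the user's command is mixed
# with an autonomous planner's command, with the planner gaining authority
# as obstacles get closer. Thresholds and the linear law are assumptions.

def authority(obstacle_dist_m: float, d_min: float = 0.3,
              d_max: float = 1.5) -> float:
    """User authority alpha in [0, 1]: full autonomy below d_min,
    full user control beyond d_max, linear in between."""
    if obstacle_dist_m <= d_min:
        return 0.0
    if obstacle_dist_m >= d_max:
        return 1.0
    return (obstacle_dist_m - d_min) / (d_max - d_min)

def blend(u_user: float, u_auto: float, obstacle_dist_m: float) -> float:
    """Blended velocity command: u = alpha*u_user + (1 - alpha)*u_auto."""
    a = authority(obstacle_dist_m)
    return a * u_user + (1.0 - a) * u_auto

if __name__ == "__main__":
    # Far from obstacles the user's command passes through; near an
    # obstacle the planner's slower, avoiding command dominates.
    print(blend(1.0, 0.2, obstacle_dist_m=2.0))  # -> 1.0
    print(blend(1.0, 0.2, obstacle_dist_m=0.2))  # -> 0.2
```

Smoothly reducing user authority near obstacles, rather than switching abruptly, is one way such systems pursue the table's stated goal of lowering cognitive load while preserving safety.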
| Category | Item | Existing Limitations | Proposed Future Directions |
|---|---|---|---|
| Technical | Dynamic scenes | Dependency on predefined environments; lack of dynamic and challenging test scenes [45,53,57,67,92]. | Explore dynamic scenarios such as dynamic obstacle avoidance, road-damage and road-corner detection, and far-field object recognition; integrate adaptive frameworks such as RL. |
| Technical | Energy management | High energy consumption from complex sensors and algorithms [40,41]; lack of real-world validation [68]. | Develop efficient DC-motor control algorithms that minimize energy use; explore neuromorphic SNN control [88]. |
| Technical | Sensor fusion | Little complex multi-sensor fusion; errors from sensor malfunction or noisy data [96,105]. | Employ multimodal sensor fusion (LiDAR, RGB, thermal, radar); apply transfer learning for robustness [62,75]. |
| Technical | Higher dimensionality | Controller design is difficult for systems with standing/docking actuators [95,104]. | Use model-free reinforcement learning and transfer learning to manage higher-dimensional control [118]. |
| Technical | Computational requirements | Controller update rates lag behind sensor sampling rates; hardware limits [41,71]. | Apply model compression, quantization, tinyML, or multi-rate fusion for real-time execution [110]. |
| Ethical | User safety and accident prevention | Lack of validation across diverse users and environments [23,24]; absence of ethical safety guidelines. | Establish ethical and regulatory standards for safety certification, informed consent, and user transparency [115]. |
| Ethical | Privacy and data collection | Few research methodologies focus on securing sensitive neural and biological data [51,52]. | Implement homomorphic encryption, decentralized security (blockchain), and anonymization [116]. |
| User-centered | Personalization for individual needs | User capability variation is ignored [117]; users may require training to adapt to the system. | Adaptive autonomy using reinforcement learning and hybrid control [25,90]. |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content. |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Gupta, A.; Chowdhury, K.R.; Farheen, N.; Schoen, M.P. Evolution and Emerging Trends in Intelligent Wheelchair Control: A Comprehensive Review. Machines 2026, 14, 33. https://doi.org/10.3390/machines14010033