Editorial

Ethical Horizons in Robotic Rehabilitation: Ensuring Safe AI Use Under the EU AI Act

by
Rocco Salvatore Calabrò
IRCCS Centro Neurolesi “Bonino-Pulejo”, 98123 Messina, Italy
Med. Sci. 2025, 13(4), 317; https://doi.org/10.3390/medsci13040317
Submission received: 30 November 2025 / Accepted: 11 December 2025 / Published: 14 December 2025
(This article belongs to the Section Neurosciences)
Artificial intelligence (AI) is reshaping robotic rehabilitation and shifting practice beyond pre-programmed repetitive movement patterns toward data-driven and personalised therapeutic interventions for people with neurological and musculoskeletal impairments [1,2]. Contemporary AI-enabled robotic platforms, including end-effector gait trainers, overground exoskeletons and upper-limb workstations, integrate models that continuously adjust assistance, trajectories and feedback in response to performance, physiology and context, with the promise of improving recovery while easing pressure on overstretched rehabilitation services [1,2]. Wearable robots for gait rehabilitation rely increasingly on intention-detection algorithms that infer user goals from kinematic patterns, surface electromyography or inertial sensor streams, and these systems modulate assistance in real time to maintain an assist-as-needed paradigm that aims to maximise active participation [3,4]. High-performance wearable robots that follow human-in-the-loop optimisation further blur boundaries between assistive and restorative technologies and promise to reconstruct motor function and augment sensory feedback [5].
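The assist-as-needed paradigm described above can be illustrated with a minimal sketch: assistance scales with the shortfall between a target level of user contribution and the effort the system currently estimates, so the robot fills the gap without taking over. All names, thresholds and the proportional rule here are illustrative assumptions, not the control law of any specific device.

```python
def assist_gain(user_effort: float, target_effort: float = 0.7,
                k: float = 2.0) -> float:
    """Return a robot assistance gain in [0, 1] under a toy
    assist-as-needed rule.

    user_effort: estimated fraction of the task the user performs
        (0..1), e.g. inferred from EMG or kinematic error.
    target_effort: desired user contribution; assistance covers only
        the deficit, preserving active participation.
    k: proportional scaling of the effort deficit (assumed value).
    """
    deficit = max(0.0, target_effort - user_effort)  # no "anti-assist"
    return min(1.0, k * deficit)                     # clamp to full assist
```

A user meeting the target receives no assistance (`assist_gain(0.7)` is `0.0`), while a passive user saturates the gain at `1.0`; real controllers add filtering, safety envelopes and rate limits omitted here.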
Evidence from narrative reviews, systematic reviews and controlled trials indicates that robot-assisted gait training can enhance walking speed, endurance and functional ambulation in stroke survivors compared with conventional therapy alone, particularly when integrated into intensive multidisciplinary programmes [1,2,6,7]. Similar benefits emerge for exoskeleton-assisted walking in people with spinal cord injury and for AI-assisted rehabilitation in musculoskeletal disorders, supporting gains in pulmonary function, pain, range of motion and walking parameters within unified therapeutic frameworks [8,9,10]. Clinical heterogeneity remains a persistent feature and effect sizes vary across devices, protocols and patient subgroups, so AI-driven robotic rehabilitation cannot be treated as a panacea and demands rigorous evaluation [1,6,7,9]. Technological developments of this kind illustrate both the transformative potential and the ethical fragility of AI in rehabilitation, because robots that become more autonomous, more intimately coupled to the body and more deeply embedded in care pathways expose patients to amplified risks of opaque decision making, malfunction, bias and misuse. The European Union Artificial Intelligence Act (AI Act), in force since 2024, represents a landmark regulatory response to these concerns through a harmonised risk-based framework for AI systems across sectors [11]. Healthcare applications, including AI systems embedded in robotic rehabilitation devices, fall into the high-risk category because they directly affect patient safety and fundamental rights and therefore face stringent obligations [11,12].
High-risk status requires providers to implement comprehensive risk management, rely on high-quality and representative data, undergo ex ante conformity assessment, maintain post-market monitoring and secure meaningful human oversight throughout the lifecycle of the system [11,12,13,14]. Clinicians and developers can view these requirements as codified expressions of core bioethical principles that demand concrete technical and organisational implementation. Safety represents the foundational concern whenever AI assumes partial control of electromechanical systems that move fragile bodies during high-intensity training sessions. Real-time controllers that misinterpret noise as human intention or fail to detect loss of balance may cause falls, joint overstress or other adverse events that undermine the basic rehabilitative purpose of these devices [3,4,5]. The AI Act's focus on continuous risk management, incident reporting and post-market surveillance aligns with a safety culture that already characterises rehabilitation practice, yet operationalising algorithmic vigilance at the bedside remains challenging [11,12,13,14]. Robust safety culture in AI-enabled robotic rehabilitation requires continuous logging of system behaviour, structured analysis of adverse events and near misses and feedback loops between clinicians, engineers and manufacturers so that usability problems rapidly inform updates to algorithms and protocols. High-risk AI systems in rehabilitation therefore belong inside learning health systems that interrogate not only clinical outcomes but also the behaviour of the AI itself. Human oversight closely follows safety as a core requirement for responsible AI-enabled rehabilitation. Robots that implement advanced control strategies can support clinical decision making but must not displace clinician responsibility for therapeutic choices.
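Continuous logging of system behaviour, as called for above, can be approached with an append-only stream of timestamped, machine-readable records that clinicians and engineers can later interrogate together. The sketch below is a minimal illustration under assumed field names (`event`, `assist_gain` and similar are hypothetical), not the logging schema of any real device or standard.

```python
import json
from datetime import datetime, timezone

def log_event(record: dict) -> str:
    """Serialise one device event as a single JSON line with a UTC
    timestamp, suitable for an append-only audit log.

    Structured JSON lines make adverse events and near misses easy to
    filter, aggregate and replay during post-market analysis.
    """
    entry = {"timestamp": datetime.now(timezone.utc).isoformat(),
             **record}  # caller-supplied fields are assumed, not fixed
    return json.dumps(entry, sort_keys=True)
```

In practice such records would be written to tamper-evident storage and linked to session and device identifiers so that algorithm updates can be traced back to the incidents that motivated them.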
The AI Act demands design features that enable human operators to understand the role of the system, recognise erroneous outputs and intervene or interrupt operation when necessary [11,12]. Rehabilitation therapists therefore need at least conceptual insight into how assistance parameters are adapted, which triggers drive major changes in support or task difficulty and how the system responds to unusual events such as sudden increases in spasticity or fatigue. International guidelines for trustworthy medical AI highlight human-centred design, explicit allocation of clinical responsibility and training for professionals who deploy AI-enabled tools [11,12,13,14]. Meaningful oversight depends on digital literacy among rehabilitation staff, transparent documentation of algorithms and failure modes and user interfaces that highlight clinically relevant information, because otherwise human operators risk becoming passive monitors of systems they cannot properly control.
Data governance forms a second major ethical axis where robotic rehabilitation intersects with the AI Act. Robotic devices function as mobile data platforms and continuously capture movement trajectories, force profiles and physiological signals, producing longitudinal datasets that support personalisation, monitoring, algorithm retraining, secondary research and commercial development [2,3,4,9]. These datasets are powerful instruments for clinical and scientific progress and also constitute highly identifying and sensitive records, since de-identified gait signatures or muscle activation patterns may permit reidentification when linked with other sources. The AI Act and the General Data Protection Regulation mandate data minimisation, privacy by design, strong cybersecurity and clear specification of purposes for which data are processed, and these legal norms closely reflect long-standing ethical concerns about informational harm and loss of control [11,13,15]. Analyses of healthcare data mining warn that secondary use of clinical data without robust governance can erode trust and expose patients to discrimination or exploitation [15,16,17,18]. Responsible data governance for robotic rehabilitation therefore combines consent and communication about data flows with technical strategies such as federated learning and privacy-preserving analytics that support collaboration and bias mitigation.
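Federated learning, mentioned above as a privacy-preserving strategy, can be reduced to a simple core: each site shares only aggregate model parameters and its sample count, never raw patient recordings, and a coordinator combines them by weighted averaging. This is a deliberately minimal sketch of that aggregation step (plain parameter lists, no secure channels or differential privacy), not a complete federated system.

```python
def federated_average(site_params: list[list[float]],
                      site_sizes: list[int]) -> list[float]:
    """Combine per-site model parameter vectors by sample-size-weighted
    averaging (the aggregation step of federated averaging).

    Raw patient data never leaves a site: only its parameter vector
    and the number of patients it was trained on are shared.
    """
    total = sum(site_sizes)
    n_params = len(site_params[0])
    return [sum(p[i] * s for p, s in zip(site_params, site_sizes)) / total
            for i in range(n_params)]
```

Larger centres therefore pull the global model toward their estimates in proportion to their cohort size, which is itself a governance choice worth making explicit when sites differ systematically in case mix.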
Algorithmic bias and fairness emerge as particularly insidious ethical challenges because unfair models can silently undermine rehabilitation as a vehicle for equity and inclusion. Studies of AI in medicine document how models reproduce disparities when they learn from skewed datasets, rely on biased proxies for health or encode structural inequities in subtle patterns [18,19]. Datasets used to train controllers and decision support algorithms for robotic rehabilitation often arise from small samples in specialised centres and typically overrepresent younger and well-resourced patients with higher baseline function [1,3,6]. Device design itself frequently reflects the anthropometrics and movement patterns of average-sized adults and models calibrated on these bodies may perform poorly in older adults, women and people with atypical morphology or severe multimorbidity. Allocation of robotic rehabilitation, titration of training intensity and rules for reducing assistance may consequently disfavour people who diverge from the majority profile, including patients with cognitive deficits or limited prior technology use. Fairness-focused reviews argue that mitigation of these risks requires curated and diverse training cohorts, bias-aware performance evaluation across clinically relevant subgroups and algorithmic strategies such as reweighting, fairness constraints or robust optimisation [18,19]. The AI Act's requirement that datasets for high-risk systems be relevant, representative, free of errors and complete offers a legal basis for demanding evidence of equitable performance and for conformity assessments that examine distributional as well as average effects [11,12,13].
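Bias-aware evaluation across clinically relevant subgroups can begin with something very simple: reporting performance per subgroup rather than a single average, so that a strong overall figure cannot conceal failure in a minority group. The sketch below is an illustrative disaggregation of error rates; the subgroup labels and the choice of error rate as the metric are assumptions for the example, not a prescribed fairness audit.

```python
from collections import defaultdict

def subgroup_error_rates(predictions: list, labels: list,
                         groups: list) -> dict:
    """Return the error rate per subgroup.

    Disaggregated reporting is the first step of bias-aware evaluation:
    a model with low average error may still fail badly in one subgroup.
    """
    totals = defaultdict(int)
    errors = defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        totals[group] += 1
        errors[group] += int(pred != label)  # count misclassifications
    return {g: errors[g] / totals[g] for g in totals}
```

A conformity assessment in the spirit of the AI Act's representativeness requirement would then set acceptance thresholds on the worst-performing subgroup, not only on the pooled figure.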
Transparency and explainability, long recognised as ethical desiderata in medical AI, present distinctive challenges in robotic rehabilitation because controllers for exoskeletons and gait trainers often rely on layered architectures that combine low-level feedback with higher-level predictive modules, and these designs obscure the causal chain from sensor input to motor output [4,5].
Rehabilitation remains a domain where patient agency and understanding carry particular weight and individuals who undertake arduous therapy programmes reasonably expect to know what they will be undergoing and why. Mapping reviews and scoping reviews on AI ethics and explainability show that different stakeholders require different styles of transparency, with clinicians needing interpretable performance metrics and documentation of failure modes and patients benefiting more from plain-language explanations of system purpose, limitations and circumstances in which clinicians might override recommendations [16,17,20]. Informed consent for AI-mediated robotic rehabilitation must reflect these insights and describe the role of AI in adjusting assistance and task parameters, acknowledge that no algorithm is infallible and outline available safeguards, including human oversight and emergency stop mechanisms. Patients who live with cognitive impairment, aphasia or limited digital literacy may require accessible multimodal communication, involvement of family or legally authorised representatives and opportunities to revisit decisions over time, because such approaches better respect autonomy than single, dense consent forms that are quickly forgotten.
Ethical horizons for AI-enabled robotic rehabilitation extend beyond compliance with the AI Act. Regulatory provisions define high-risk categories and essential requirements but do not replace professional and moral responsibilities of clinicians, researchers and manufacturers. Conceptual frameworks in bioethics and digital health argue that ethical AI in healthcare represents a continuous process of reflection and negotiation rather than static satisfaction of technical checklists [16,17]. That process begins with problem formulation, including choices about which patient groups receive priority, which outcomes are optimised and how individual benefit is balanced against costs at a system level. Participatory and co-design approaches that involve patients, caregivers, rehabilitation professionals, engineers and ethicists from early stages of development entail negotiating values and aligning system functions with real-world needs and preferences instead of abstract engineering goals [2,3,4]. Evaluation practices must also evolve because traditional clinical trials, though indispensable for establishing safety and efficacy, seldom capture the ethical and social impacts of AI-enabled robotics. Outcomes such as therapeutic alliance, perceived autonomy, trust in technology and stigma associated with visible assistive devices often influence adherence and long-term benefit yet rarely appear as primary endpoints. Systematic reviews report inconsistent documentation of adverse events, usability issues and discontinuation reasons in robotic rehabilitation trials and this inconsistency obscures how AI-related failures manifest in real practice [6,7,9]. 
Qualitative methods, patient-reported outcomes and extended follow-up can generate richer accounts of lived experience with robotic systems and linking these approaches to AI Act post-market monitoring requirements supports informed decisions about updating or withdrawing technologies when unacceptable patterns arise [11,12,13,14].
Global and regional dimensions of AI governance shape the future of robotic rehabilitation. Technologies interact with reimbursement schemes, workforce shortages, infrastructural constraints and cultural attitudes toward disability and technology, and comparative analyses of regulation reveal substantial variation in how jurisdictions balance innovation and protection [12,13,14]. The EU AI Act stands out as a stringent attempt to foreground fundamental rights and safety in AI deployment and this stance creates both challenges and opportunities for manufacturers and healthcare organisations that operate across borders. Compliance with high-risk requirements may impose significant costs, particularly for small and medium-sized enterprises that develop innovative rehabilitation devices, yet alignment with this framework can function as a mark of trustworthiness and facilitate broader diffusion, echoing the Brussels effect previously observed with the General Data Protection Regulation [11,12,13,14]. The rehabilitation community now possesses an opportunity to contribute proactively to AI Act interpretation and implementation so that guidance documents, conformity assessment practices and technical standards reflect specific features of robotic rehabilitation rather than generic assumptions about software-only medical AI. AI-enabled robotic rehabilitation consequently stands at a pivotal ethical juncture. Technologies that allow robots to infer movement intention, adapt assistance and learn from large cohorts can accelerate recovery and extend access, yet can also introduce opacity, bias and new vulnerabilities into complex care pathways [1,2,3,4,5,18,19].
The EU AI Act supplies a timely and robust legal scaffold for managing these risks by classifying rehabilitation AI as high-risk and imposing demands for safety, data governance, transparency and human oversight [11,12,13,14]. Genuine ethical alignment requires more than regulatory compliance, however, and the development and deployment of robotic rehabilitation should embed ethics throughout the lifecycle of AI systems, including participatory design, representative data collection, transparent implementation, clinician education and reflexive post-market monitoring. Collaborative engagement by clinicians, engineers, ethicists, regulators and patients can enable robotic rehabilitation to evolve not only as a powerful tool to restore movement but also as a paradigm of human-centred and rights-respecting AI in clinical practice.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The author declares no conflicts of interest.

References

1. Abbas, G.H.; Speksnijder, C.; Ramnarain, D.; Parmar, C.; Parmar, A.; Ahmad, S.; Pouwels, S. AI-Driven Rehabilitation Robotics: Advancements in and Impacts on Patient Recovery. Cureus 2025, 17, e94273.
2. Alshami, A.; Nashwan, A.; AlDardour, A.; Qusini, A. Artificial Intelligence in rehabilitation: A narrative review on advancing patient care. Rehabilitacion 2025, 59, 100911.
3. Cha, J.M.; Hong, J.; Yoo, J.; Rha, D.W. Wearable Robots for Rehabilitation and Assistance of Gait: A Narrative Review. Ann. Rehabil. Med. 2025, 49, 187–195.
4. Coser, O.; Tamantini, C.; Soda, P.; Zollo, L. AI-based methodologies for exoskeleton-assisted rehabilitation of the lower limb: A review. Front. Robot. AI 2024, 11, 1341580.
5. Xia, H.; Zhang, Y.; Rajabi, N.; Taleb, F.; Yang, Q.; Kragic, D.; Li, Z. Shaping high-performance wearable robots for human motor and sensory reconstruction and enhancement. Nat. Commun. 2024, 15, 1760.
6. Lee, J.H.; Kim, G. Effectiveness of Robot-Assisted Gait Training in Stroke Rehabilitation: A Systematic Review and Meta-Analysis. J. Clin. Med. 2025, 14, 4809.
7. Hu, M.M.; Wang, S.; Wu, C.Q.; Li, K.P.; Geng, Z.H.; Xu, G.H.; Dong, L. Efficacy of robot-assisted gait training on lower extremity function in subacute stroke patients: A systematic review and meta-analysis. J. Neuroeng. Rehabil. 2024, 21, 165.
8. Xiang, X.N.; Zong, H.Y.; Ou, Y.; Yu, X.; Cheng, H.; Du, C.P.; He, H.C. Exoskeleton-assisted walking improves pulmonary function and walking parameters among individuals with spinal cord injury: A randomized controlled pilot study. J. Neuroeng. Rehabil. 2021, 18, 86.
9. Luo, Z.; Wang, Y.; Zhang, T.; Wang, J. Effectiveness of AI-assisted rehabilitation for musculoskeletal disorders: A network meta-analysis of pain, range of motion, and functional outcomes. Front. Bioeng. Biotechnol. 2025, 13, 1660524.
10. Ben Abdallah, I.; Bouteraa, Y.; Alotaibi, A. AI-driven hybrid rehabilitation: Synergizing robotics and electrical stimulation for upper-limb recovery after stroke. Front. Bioeng. Biotechnol. 2025, 13, 1619247.
11. European Union. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act). Off. J. Eur. Union 2024, 206, 1–120. Available online: http://data.europa.eu/eli/reg/2024/1689/oj (accessed on 15 November 2025).
12. Aboy, M.; Minssen, T.; Vayena, E. Navigating the EU AI Act: Implications for regulated digital medical products. npj Digit. Med. 2024, 7, 237.
13. Palaniappan, K.; Lin, E.Y.T.; Vogel, S. Global Regulatory Frameworks for the Use of Artificial Intelligence (AI) in the Healthcare Services Sector. Healthcare 2024, 12, 562.
14. Palaniappan, K.; Lin, E.Y.T.; Vogel, S.; Lim, J.C.W. Gaps in the Global Regulatory Frameworks for the Use of Artificial Intelligence (AI) in the Healthcare Services Sector and Key Recommendations. Healthcare 2024, 12, 1730.
15. Ahmed, M.M.; Okesanya, O.J.; Oweidat, M.; Othman, Z.K.; Musa, S.S.; Lucero-Prisno, D.E., III. The ethics of data mining in healthcare: Challenges, frameworks, and future directions. BioData Min. 2025, 18, 47.
16. Morley, J.; Machado, C.C.V.; Burr, C.; Cowls, J.; Joshi, I.; Taddeo, M.; Floridi, L. The ethics of AI in health care: A mapping review. Soc. Sci. Med. 2020, 260, 113172.
17. Gorelik, A.J.; Li, M.; Hahne, J.; Wang, J.; Ren, Y.; Yang, L.; Zhang, X.; Liu, X.; Wang, X.; Bogdan, R.; et al. Ethics of AI in healthcare: A scoping review demonstrating applicability of a foundational framework. Front. Digit. Health 2025, 7, 1662642.
18. Chen, R.J.; Wang, J.J.; Williamson, D.F.K.; Chen, T.Y.; Lipkova, J.; Lu, M.Y.; Sahai, S.; Mahmood, F. Algorithmic fairness in artificial intelligence for medicine and healthcare. Nat. Biomed. Eng. 2023, 7, 719–742.
19. Norori, N.; Hu, Q.; Aellen, F.M.; Faraci, F.D.; Tzovara, A. Addressing bias in big data and AI for health care: A call for open science. Patterns 2021, 2, 100347.
20. Lekadir, K.; Frangi, A.F.; Porras, A.R.; Glocker, B.; Cintas, C.; Langlotz, C.P.; Weicken, E.; Asselbergs, F.W.; Prior, F.; Collins, G.S.; et al. FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare. BMJ 2025, 388, e081554. Erratum in BMJ 2025, 388, r340. https://doi.org/10.1136/bmj.r340.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
