Review

Artificial Intelligence and the Human–Computer Interaction in Occupational Therapy: A Scoping Review

by Ioannis Kansizoglou 1,*, Christos Kokkotis 2, Theodoros Stampoulis 2, Erasmia Giannakou 2, Panagiotis Siaperas 3, Stavros Kallidis 2, Maria Koutra 2, Paraskevi Malliou 2, Maria Michalopoulou 2 and Antonios Gasteratos 1
1 Laboratory of Robotics and Automation, Department of Production and Management Engineering, School of Engineering, Democritus University of Thrace, 67100 Xanthi, Greece
2 Department of Physical Education and Sport Science, School of Physical Education, Sport Science and Occupational Therapy, Democritus University of Thrace, 69100 Komotini, Greece
3 Occupational Therapy Department, Metropolitan College of Athens, 10672 Athens, Greece
* Author to whom correspondence should be addressed.
Algorithms 2025, 18(5), 276; https://doi.org/10.3390/a18050276
Submission received: 18 March 2025 / Revised: 3 May 2025 / Accepted: 6 May 2025 / Published: 8 May 2025
(This article belongs to the Collection Feature Papers in Evolutionary Algorithms and Machine Learning)

Abstract:
Occupational therapy (OT) is a client-centered health profession focused on enhancing individuals’ ability to perform meaningful activities and daily tasks, particularly for those recovering from injury, illness, or disability. As a core component of rehabilitation, it promotes independence, well-being, and quality of life through personalized, goal-oriented interventions. Identifying and measuring the role of artificial intelligence (AI) in human–computer interaction (HCI) within OT is critical for improving therapeutic outcomes and patient engagement. Despite AI’s growing significance, the integration of AI-driven HCI in OT remains relatively underexplored in the existing literature. This scoping review identifies and maps current research on the topic, highlighting applications and proposing directions for future work. A structured literature search was conducted using the Scopus and PubMed databases. Articles were included if their primary focus was on the intersection of AI, HCI, and OT. Out of 55 retrieved articles, 26 met the inclusion criteria. This work highlights three key findings: (i) machine learning, robotics, and virtual reality are emerging as prominent AI-driven HCI techniques in OT; (ii) the integration of AI-enhanced HCI offers significant opportunities for developing personalized therapeutic interventions; (iii) further research is essential to evaluate the long-term efficacy, ethical implications, and patient outcomes associated with AI-driven HCI in OT. These insights aim to guide future research efforts and clinical applications within this evolving interdisciplinary field. In conclusion, AI-driven HCI holds considerable promise for advancing OT practice, yet further research is needed to fully realize its clinical potential.

1. Introduction

Occupational therapy (OT) is a client-centered healthcare profession dedicated to enabling individuals with physical, cognitive, or emotional impairments to participate in meaningful daily activities [1]. For every service user, irrespective of age or health, independence in daily life and a sense of competence hold particular meaning and value for quality of life. According to the World Federation of Occupational Therapists (WFOT): “In OT, occupations refer to the daily activities that people do as individuals, in families, and with communities to occupy time and bring meaning and purpose to life. Occupations include things people want to, need to, and are expected to do” [2]. Hence, a broad range of occupations are classified as activities of daily living (ADLs), instrumental activities of daily living (IADLs), health management, rest and sleep, education, work, play, leisure, and social participation. There are many specific occupations within each of these nine broad categories. OT interventions aim to enhance independence, functional ability, and overall well-being through personalized strategies tailored to each individual’s needs. Assistive technologies (AT) play a pivotal role in this process, enabling people with disabilities of all ages to engage independently in everyday occupations of their choice, such as ADLs [3]. In recent years, there has been a rapid increase in research exploiting artificial intelligence (AI) to enhance participation in meaningful activities, a key outcome for rehabilitation [4]. Integrating AI into rehabilitation interventions holds promise for improving patient functional independence and quality of life, motivating the development of innovative, technology-supported therapy methods.
The convergence of AI with human–computer interaction (HCI) has catalyzed groundbreaking rehabilitation tools. Integrating AI with electronics, robotics, and software has essentially revolutionized assistive technology, resulting in innovations such as mind-controlled exoskeletons, bionic limbs, intelligent wheelchairs, and smart home assistants [5]. These AI-driven solutions allow users to regain mobility and autonomy in ways not previously possible with traditional therapy alone. A scoping review of AI in rehabilitation found a growing body of work using machine learning (ML), deep learning (DL), computer vision, and other AI techniques to support therapy outcomes, strengthening the potential of AI-powered interventions across diverse groups of service users [4]. Such advances indicate that AI has become an integral component of modern diagnosis and therapy methodologies, complementing the clinical expertise of therapists with data-driven personalization and automation [6].
Meanwhile, robotic technologies are at the forefront of AI-driven HCI applications in rehabilitation. For instance, powered prosthetic limbs with microprocessor control have shown significant benefits for amputees: a recent user-centered survey of transfemoral amputees found that advanced robotic prostheses, when appropriately prescribed, markedly improved patients’ mobility, sense of autonomy, and overall health [7]. In elder care, social and assistive robots are emerging to support daily living tasks [8]. Focus group studies reveal a strong interest and demand for caregiving robots among older adults and their caregivers, provided these systems are introduced with proper training and ethically tailored to user needs [9]. Such robots must be customized to match the user’s abilities, beliefs, and preferences, underscoring the importance of involving end-users in design and implementation. Likewise, in neurodevelopmental therapy for conditions like attention-deficit/hyperactivity disorder (ADHD), interactive robots have been employed to assist children in therapeutic exercises. Literature reviews on robotics in ADHD and autism care highlight promising technical results, e.g., improved engagement and attention, but also emphasize that current robot-assisted interventions require better scalability, more natural human–machine interaction, and improved data processing to be effective in real-world clinical settings [10]. These examples illustrate how robotic HCI systems, from intelligent prostheses to socially assistive robots, can be integrated into OT practice to augment traditional rehabilitation techniques.
Beyond robotics, other AI-driven interfaces are advancing rehabilitation and OT outcomes. Wearable sensor systems, for example, can continuously monitor users and automatically recognize personal needs, providing objective data to therapists and caregivers [11]. Meanwhile, immersive technologies such as virtual reality (VR) combined with human–computer interfaces open new frontiers in physical therapy. VR provides an engaging, simulated environment for people to practice tasks, while interfaces translate user behavior into control commands, thus enabling hands-free interaction with smart home devices or assistive tools. Integrating VR with HCI has been shown to improve the accuracy and speed of neural command issuance by providing real-time feedback and adaptive training scenarios [12]. This synergy leads to more intuitive control of devices and greater user independence and efficiency in managing Internet-of-things (IoT) tasks, especially for individuals with severe mobility limitations. Similar VR and brain–computer interface (BCI) systems are explored for neurorehabilitation and cognitive training, with researchers highlighting the value of adaptive signal processing and multimodal feedback (e.g., combining brain signals with eye-tracking or motion data) to enhance user engagement and therapeutic effectiveness [13,14]. Incorporating advanced AI techniques into these virtual interfaces is poised to further personalize rehabilitation as ML algorithms adjust to each user’s behavioral patterns and progress.
In recent years, AI has gained significant traction across various medical fields, demonstrating its potential to enhance diagnostics [15], prognostics [16], treatment planning [17], surgery [18], and patient monitoring [19]. Numerous reviews have mapped these broad medical applications, highlighting how ML, DL, and natural language processing transform healthcare delivery. Such applications of AI in healthcare range from medical imaging [20,21] and precision medicine [22,23] to predictive analytics [24], big data [25], and robotic surgery [26], thus offering enhanced accuracy and efficiency in clinical decision-making [27]. However, while the integration of AI into traditional medical disciplines has been extensively studied, its role in supporting patient-centered interventions such as OT, particularly through HCI technologies, remains underexplored. This scoping review aims to bridge this gap by uniquely focusing on the intersection of AI, HCI, and OT, identifying and mapping the current research, highlighting applications, and proposing directions for future work.
In this context, the current scoping review was conducted to (i) explore the application of AI-driven HCI technologies within OT, (ii) identify emerging trends and techniques, such as ML, robotics, and VR, and (iii) highlight existing research gaps to inform future scientific approaches. Unlike previous literature that broadly addresses AI applications in healthcare, this review specifically focuses on the integration of AI and HCI in the field of OT and on how these technologies enhance therapeutic interventions and patient engagement across different domains of human occupation. By providing an in-depth analysis of the methodologies, therapeutic contexts, and results of selected studies, this scoping review aims to illuminate the current state of AI-driven HCI in OT and guide future research efforts in this evolving field.
The remainder of this paper is organized as follows: Section 2 describes the methodology used for the scoping review, including the search strategy, inclusion and exclusion criteria, and data extraction process. Section 3 presents the results of the thematic and bibliometric analysis of the included articles. Section 4 discusses the principal findings, open issues, future research perspectives, and limitations of the study. Finally, Section 5 concludes the paper by summarizing the main insights and highlighting directions for future research.

2. Materials and Methods

In accordance with Joanna Briggs Institute guidelines, this scoping review was registered on the Open Science Framework (OSF) on 5 March 2025 [28]; as a synthesis of previously published studies, it did not require ethical approval. Furthermore, this scoping review adhered to the 22-item Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) [29]. Finally, a bibliometric analysis of the selected articles was conducted [30], with the main aim of providing a performance analysis that identifies key research themes, notable authors, and the evolution of the field over time, informing future research directions.

2.1. Literature Searches

The literature search was performed using the PubMed and Scopus databases, selected for their comprehensive coverage of healthcare and OT research, as well as their alignment with the PRISMA-ScR guidelines [29]. To refine the search, relevant keywords related to “artificial intelligence”, “human–computer interaction”, and “occupational therapy” were combined using the Boolean operator AND. Articles were included in the review if these keywords appeared in the title, abstract, or as designated keywords. The complete search strings for each database are provided (https://doi.org/10.17605/OSF.IO/9WFQD).
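As an illustration only (the exact archived strings are available at the OSF link above), the following sketch shows how the three concept blocks could be combined with the Boolean operator AND into a Scopus-style query restricted to titles, abstracts, and keywords:

```python
# Illustrative sketch only: the exact search strings used in the review are
# archived on OSF. This merely demonstrates combining the three concept
# blocks with the Boolean operator AND for a hypothetical Scopus-style query.
concepts = [
    "artificial intelligence",
    "human-computer interaction",
    "occupational therapy",
]

# Scopus's TITLE-ABS-KEY field code restricts matches to title, abstract, or keywords.
scopus_query = " AND ".join(f'TITLE-ABS-KEY("{concept}")' for concept in concepts)
print(scopus_query)
```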

2.2. Eligibility Criteria

2.2.1. Inclusion Criteria

In this scoping review, only peer-reviewed articles published in journals or conference proceedings between 1 January 2015 and 7 March 2025 were considered. This time frame reflects the rapid technological developments in AI and HCI, and preliminary exploration of the literature indicated a limited presence of relevant studies prior to 2015. The review exclusively included studies focusing on the intersection of AI and HCI within OT.

2.2.2. Exclusion Criteria

The following categories were excluded: (i) non-English articles, (ii) review articles, conference reviews, and books, (iii) studies focusing on HCI and/or AI applications outside OT, (iv) research centered on therapeutic contexts without a clear inclusion of AI and HCI, (v) articles examining AI-driven interactions for individual professionals (e.g., therapists or caregivers) rather than patient-focused therapy, and (vi) studies that did not specifically address AI integration in HCI through practical interaction scenarios but instead relied solely on offline databases.

2.3. Data Extraction

Initially, two authors (I.K. and C.K.) independently screened the titles, keywords, and abstracts of all 41 retrieved publications. In cases of disagreement during the selection process, a third independent reviewer was consulted to resolve conflicts and establish consensus. For each included article, the following information was extracted: author, year of publication, methodology (e.g., ML, robotics, VR, or other), sample characteristics (e.g., number of participants, therapy sessions, or case studies), type of data used (e.g., sensor-based, motion tracking, or other), therapeutic context, and key outcomes.

2.4. Assessed Outcomes

The studies included in this review were categorized into three main groups: (i) the prevalent AI technologies utilized to improve HCI in OT, (ii) the effectiveness of AI-driven HCI in various therapeutic interventions, and (iii) contextual factors influencing the adoption of AI in HCI. This classification was based on the most common research objectives identified across the studies. Articles exploring multiple aspects were assigned to more than one category. The first category includes studies focusing on the identification, development, and implementation of AI technologies within HCI for OT. After categorizing the articles, the following information was extracted from each study: author, year of publication, methodology (ML, robotics, VR, or other), sample characteristics (number of participants, therapy sessions, or case studies), type of data used (sensor-based, motion tracking, or other), therapeutic context, and key outcomes.

3. Results

3.1. Search Results

The initial search retrieved 55 titles from the selected databases. After removing duplicates, 52 unique articles remained. We then applied the exclusion criteria from Section 2.2.2, eliminating 11 conference reviews, review articles, and books. The titles, abstracts, and keywords of the remaining 41 articles were then reviewed, and 14 articles were excluded due to their lack of relevance to the study’s objectives. Finally, after a full-text assessment, only 26 articles met the inclusion criteria and were selected for the scoping review, as shown in Figure 1.
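The screening flow above can be checked arithmetically; the counts below are taken from the text, and the single full-text exclusion is inferred from the difference between the 27 full-text-assessed and 26 included articles:

```python
# Screening flow of the review; counts taken from the text above.
retrieved = 55
after_deduplication = 52                  # 3 duplicate records removed
excluded_by_type = 11                     # conference reviews, review articles, books
title_abstract_screened = after_deduplication - excluded_by_type
excluded_at_abstract = 14                 # not relevant to the study's objectives
full_text_assessed = title_abstract_screened - excluded_at_abstract
included = 26                             # final number after full-text assessment

# Inferred: one article was excluded during the full-text assessment.
excluded_at_full_text = full_text_assessed - included
print(title_abstract_screened, full_text_assessed, excluded_at_full_text)
```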
Subsequently, the studies included in this scoping review were classified into four categories based on their application domain of AI and HCI: (i) motor function (5 studies), (ii) autism spectrum disorder (ASD) and developmental conditions (10 studies), (iii) elderly support and assisted living (AL) (8 studies), and (iv) virtual companionship and emotional expression (3 studies). Among the 26 studies, only one focused mainly on predicting functional outcomes spanning two of these domains; it was assigned to the motor function category.

3.1.1. Motor Function After Stroke

The studies included in Table 1 explore various AI and HCI applications for motor recovery after stroke, leveraging advanced technologies such as digital twins, VR, exoskeletons, and robotic-assisted rehabilitation. More specifically, Lauer et al. introduced a rehabilitation therapy configuration combining exoskeletons, serious-game VR, and ML models such as logistic regression and convolutional neural networks (CNNs) [31]. Their approach demonstrated a high detection accuracy (96%) for unnatural supportive movements, emphasizing the potential of integrating immersive VR with real-time motion tracking for more effective rehabilitation. Similarly, Li et al. focused on upper-limb motor ability recognition using a seven-degree-of-freedom (DoF) robotic arm, ML algorithms, such as particle swarm optimization–support vector machine (PSO–SVM) and long short-term memory (LSTM) cells, and multimodal data sources, including electromyography (EMG) and kinematic signals [32]. Their results showed promising recognition rates across three training stages and a mean absolute error (MAE) of 0.3890 in torque estimation, indicating the feasibility of AI-driven motion analysis for personalized rehabilitation.
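The MAE reported by Li et al. measures the average magnitude of estimation error. A minimal stand-alone definition, applied here to hypothetical torque values rather than the study’s data, is:

```python
def mean_absolute_error(y_true, y_pred):
    """Average absolute deviation between targets and predictions."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical torque targets and estimates, for illustration only.
true_torques = [1.2, 0.8, 1.5, 0.9]
estimated    = [1.0, 1.1, 1.4, 0.7]
print(mean_absolute_error(true_torques, estimated))  # mean of 0.2, 0.3, 0.1, 0.2
```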
Other works examined alternative rehabilitation strategies, such as teleoperation, VR-based exercises, and patient-specific assessments. Connan et al. [33] investigated the use of a bimanual humanoid robot for daily task execution in stroke patients, finding that interactive ML and real-time EMG and inertial measurement unit (IMU)-based motion tracking reduced perceived task difficulty by 32% between initial and final repetitions. Both works [34,35] explored VR-based rehabilitation and assessment approaches using Kalman filter-based kinematic estimation. The latter demonstrated functional improvements in stroke patients, with a 14.5% increase in Fugl–Meyer upper extremity (FMUE) scores and a 9.6% improvement in the Wolf motor function test–functional ability score (WMFT–FAS). The other study assessed upper extremity motor function using the VR-based VOTA system, revealing a moderate correlation (Spearman’s ρ = 0.56) between task duration and WMFT-TIME.
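Spearman’s ρ, the rank-correlation statistic reported for the VOTA study, can be sketched in plain Python; this minimal version assumes no tied values and uses hypothetical data, not the study’s measurements:

```python
def rank(values):
    # Rank positions 1..n; ties are not handled in this minimal sketch.
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0] * len(values)
    for position, index in enumerate(order, start=1):
        ranks[index] = position
    return ranks

def spearman_rho(x, y):
    # rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)), where d_i is the rank difference.
    n = len(x)
    d2 = sum((rx - ry) ** 2 for rx, ry in zip(rank(x), rank(y)))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical task durations vs. clinical scores, for illustration only.
print(spearman_rho([10, 12, 15, 20, 25], [2, 1, 4, 3, 5]))
```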
These studies collectively highlight the growing role of AI and HCI technologies in enhancing stroke rehabilitation outcomes, offering novel and personalized approaches to motor function recovery. Yet, it is important to note that the existing research primarily focuses on isolated technologies and small-scale trials. More comprehensive studies are needed to evaluate the long-term impact of these interventions, their integration into diverse clinical settings, and their cost-effectiveness. Finally, further exploration of AI models’ adaptability and generalizability across different patient populations would provide more robust evidence for the widespread adoption of these technologies [36].
Table 1. Findings and characteristics of included AI and HCI studies for upper-limb motor function after stroke.
| Authors | Application Domain | Technology | Input Data | Validation Method | Employed Subjects | Results |
| --- | --- | --- | --- | --- | --- | --- |
| Lauer et al., 2024 [31] | Exoskeleton and serious game-based stroke rehabilitation therapy configuration | Digital twins, exoskeleton, serious-game VR, logistic regression, and CNNs | EMG, RGB, and motion tracking | Real test on young adults | 8 patients and 6 therapists | 96% detection of unnatural supportive movements |
| Li et al., 2023 [32] | Patient upper-limb motor ability recognition and space reshaping | 7-DoF robotic arm, particle swarm optimization (PSO)–SVM, and LSTM | EMG and kinematic data | Real test on young adults | 10 healthy | 73.47, 61.61, and 68.07% recognition of three training stages, and 0.3890 MAE on torque estimation |
| Connan et al., 2021 [33] | Teleoperation of a bimanual humanoid robot for daily task execution | Humanoid robot TORO and interactive ML | EMG and IMU-based motion tracking | Real test on young adults | 2 patients and 7 healthy | 32% perceived difficulty decrease between first and last repetition of daily tasks |
| Adams et al., 2017 [35] | VR exercises for upper extremity recovery after stroke | VR (SaeboVR), Kalman filter-based kinematic pose estimation, and linear mixed model | Kinect sensory data | Real test on adults | 15 patients | 14.5% FMUE test and 9.6% WMFT–FAS |
| Adams et al., 2014 [34] | VR exercises for upper extremity assessment after stroke | VR (VOTA) and unscented Kalman filter | Kinect sensory data | Real test on adults | 14 patients | 0.56 Spearman correlation between VOTA-duration and WMFT-TIME |

3.1.2. ASD and Developmental Conditions

The studies listed in Table 2 highlight various AI and HCI technologies used to support children and individuals with developmental conditions, particularly focusing on ASD and cerebral palsy (CP). One key application domain is robotic assistance for children with CP, where technologies such as the LOLA2 robotic platform, incorporating CNNs, were evaluated for their impact on independence and self-esteem. Lagos et al. [37] observed improvements in competence and adaptability, achieving a functional independence measure (FIM) score of 98 across nine patients. Similarly, Chandrashekhar et al. [38] used the SIPPC robot to explore the relationship between cognition and delayed motor skill improvement in infants with CP, finding a notable connection between cognitive abilities and motor learning progression.
In the domain of socially assistive robotics for children with ASD, several studies focused on engagement and interaction with robotic agents. Jain et al. [40] employed socially assistive robots integrated with CNNs and online reinforcement learning (RL) to foster engagement, achieving a 65% interaction rate in human–robot interaction (HRI) and a 90% area under the receiver operating characteristic (AUROC) value in engagement detection, a well-established metric in binary classification tasks. To achieve that, cutting-edge AI-enabled human modeling platforms have been adopted, including OpenFace [47], OpenPose [48], and Praat [49]. Hortsmann et al. [39] examined the design of successful interaction systems, emphasizing the importance of AI-based engagement detection as well as the robot’s features, appearance, and functionality when working with children with ASD. Furthermore, AlSadoun et al. [41] proposed a virtual agent framework for art therapy aimed at users with communication impairments, focusing on facial landmark extraction using artificial neural networks (ANNs) for therapeutic purposes.
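The AUROC metric reported by Jain et al. equals the probability that a randomly chosen positive example is scored above a randomly chosen negative one. A minimal sketch, with hypothetical engagement labels and scores rather than the study’s data:

```python
def auroc(labels, scores):
    """Probability that a random positive outranks a random negative (ties count 0.5)."""
    positives = [s for label, s in zip(labels, scores) if label == 1]
    negatives = [s for label, s in zip(labels, scores) if label == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in positives for n in negatives)
    return wins / (len(positives) * len(negatives))

# Hypothetical engagement labels and classifier scores, for illustration only:
# 3 of the 4 positive-negative pairs are ranked correctly.
print(auroc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))
```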
Finally, a significant body of work focused on mental well-being promotion and game-based interventions for children with ASD and developmental delays. Fang et al. [42] developed a VR-serious game with procedural content generation (PCG) AI to promote mental well-being. In a similar vein, Li et al. [43] employed a robotic agent combined with RL and CNNs to assist in engaging children with ASD in a game environment, resulting in minor improvements in social responsiveness.
These studies reflect a growing interest in utilizing AI, robotics, and HCI technologies to foster engagement, support recovery, and improve cognitive and emotional outcomes for individuals with developmental conditions. However, while many studies show promising results, there remains a need for standardization of evaluation metrics and longitudinal studies to establish sustained efficacy in real-world educational and therapeutic settings. In addition, attention should be given to the accessibility, scalability, and cost-effectiveness of these technologies to ensure that they can be implemented in real-world clinical settings [50].

3.1.3. Elderly Support and AL

The studies illustrated in Table 3 focus on various AI and HCI technologies designed for elderly support and AL, with several applications geared towards enhancing daily activities and mobility. One major area of focus is robotic assistance for elderly care, where robots are deployed for tasks such as household activities and personal assistance. To that end, Tsui et al. [51] explored the factors influencing the use of AI-enabled humanoid robots, gathering insights from caregivers and care receivers through questionnaires and focus groups. Their findings highlighted the importance of robots in performing household tasks and providing communication support. Similarly, Tiersen et al. [52] examined smart home systems for individuals with dementia and their caregivers, utilizing tablet-based puzzles, chatbots, smartwatches, and ambient sensors. Their study emphasized how participatory design processes can foster more effective, inclusive, and rapid innovation in public health applications.
Several studies have investigated specific application domains in AT for mobility and physical support, specifically related to the elderly. To that end, Ranieri et al. [53] used the TIAGo robot, shown in Figure 2, combined with CNN, LSTM, and temporal convolutional network (TCN) models for multi-modal activity recognition, achieving an impressive 98.61% accuracy in action recognition under realistic scenarios, further enhancing assistive robots’ effectiveness in daily living. Furthermore, Coviello et al. [54] introduced the ASTRO robot, which uses decision trees for walking assistance, resulting in 94.2% driving path accuracy and positive user experiences. These studies highlight the growing role of AI and robotics in improving the quality of life of elderly individuals by providing both physical support and enhancing interaction in everyday tasks.
Table 3. Findings and characteristics of included AI and HCI studies for elderly support and AL.
| Authors | Application Domain | Technology | Input Data | Validation Method | Employed Subjects | Results |
| --- | --- | --- | --- | --- | --- | --- |
| Tsui et al., 2025 [51] | Factors of robotic assistants for elderly | AI-enabled humanoid robots | Focus groups and questionnaires | Real test on older adults | 82 caregivers and care receivers | Robots performing household and communicating |
| Fei et al., 2024 [55] | Visually impaired assistive system for grasping | Dobot magician manipulator and YOLOv5 CNN | RGB-D images, speech data, and vibration data | Real test on healthy subjects | n/a | 99.3% mAP object detection, 37 s assisted grasping, and 24 s active grasping |
| Shahria et al., 2024 [56] | Object detection for AAL assistive robotic arms | Dataset | RGB images | No | n/a | 112,000 images from COCO, Open Images, LVIS, and Roboflow Universe |
| Ranieri et al., 2021 [53] | Multi-modal ADL recognition | TIAGo robot, RALT living-lab, and CNN-LSTM/TCN | RGB images and IMU data | Real test on adults | 16 healthy | 98.61% action recognition on real data |
| Try et al., 2021 [57] | Robotic manipulation for assisted drinking | Kinova Jaco robotic arm, face and dlib landmark detection | RGB images and distance sensory data | Real test on adults | 9 healthy | 99.54% cup delivery success rate |
| Tiersen et al., 2021 [52] | Smart home systems for people with dementia and caregivers | Tablet-based puzzles, chatbots, smartwatches, ambient sensors, and physiological measurement devices | Semi-structured interviews, focus groups, and workshops | Real test on older adults | 35 caregivers and 35 care receivers | Participatory design processes foster more effective, inclusive, and rapid innovation in public health sectors |
| Erickson et al., 2019 [58] | Robotic manipulation for assisted dressing and bathing | PR2 robot, 7-DoF robotic arm, and ANN | Capacitive sensory data | Real test on adults | 4 healthy | <2.6 cm distance and <6 N applied force |
| Coviello et al., 2019 [54] | Walking support for elderly | ASTRO robot and decision trees | Laser scan data, FSR sensory data, and questionnaire | Real test on adults | 7 healthy | 94.2% driving path accuracy and positive user experience |
Figure 2. Indicative robotic platforms used in AI and HCI applications for assistive purposes, including (a) the TIAGo assistive robot [53] and (b) the Kinova Jaco robotic arm [57].
Finally, further studies investigated robotic manipulation and support systems designed to improve independence in ADLs. Erickson et al. [58] employed the PR2 robot with a seven-DoF arm and a simple ANN to assist in dressing and bathing tasks, achieving precision with minimal applied force. Meanwhile, Try et al. [57] focused on robotic manipulation for assisted drinking, employing the Kinova Jaco robotic arm with face and landmark detection to successfully deliver cups with a success rate of 99.54%, as depicted in Figure 2. This research demonstrates significant advances in robotic assistance for elderly individuals with mobility limitations. Focusing on a different domain, Fei et al. [55] used the Dobot magician manipulator integrated with the YOLOv5 architecture for a visually impaired assistive system, achieving 99.3% mean average precision (mAP) in object detection and reducing the time for assisted grasping.
Collectively, these studies demonstrate that elderly support systems perform remarkably in controlled settings. Nonetheless, scalability, robustness under real-world variability, and user acceptance remain critical areas for future research [59]. In particular, future work should more rigorously assess system usability over extended periods and under conditions of user fatigue or cognitive decline to ensure sustained effectiveness and adoption in real-world elderly care settings.

3.1.4. Emotional Expression and Virtual Companionship

The studies in Table 4 explore the applications of AI and HCI in emotional expression and virtual companionship. To begin with, Winkle [60] explored assistive social robotics for engagement and therapy, emphasizing the importance of AI-driven social behaviors, interaction modalities, and design principles. This study, based on focus groups, interviews, and observations, provides valuable insights into how robotics can be effectively integrated into therapeutic settings. Subsequently, Zentner et al. [61] investigated how lighting variations impact emotion recognition during HCI scenarios using VR and CNN-based models on EEG data, achieving a recognition rate of 41.63%. Their findings highlight the challenges of accurately detecting emotions in dynamic environments, an essential aspect of adaptive HCI systems. Finally, Xie et al. [62] examined the role of virtual interactions in mitigating social anxiety, analyzing chatbot-based textual and vocal communication patterns in a longitudinal study with 618 undergraduate students. Their results indicate that frequent engagement in HCI reduces online social anxiety but not offline anxiety, while emotional expression plays a key role in lowering anxiety levels.
Collectively, these studies underline the growing potential of AI-enhanced HCI for emotional support and social engagement. However, the relatively modest emotion recognition rates and the limited transferability of improvements from online to offline contexts point to key challenges. Future research should aim to improve emotion detection accuracy under real-world conditions and evaluate the long-term psychological impacts of sustained interaction with AI companions [63].
To sum up, Figure 3 illustrates the integration of AI and HCI technologies within OT, based on the literature reviewed. The three layers capture, from right to left, (1) key application domains, (2) enabling HCI technologies, and (3) associated AI methods, offering a structured view of how these components align to support diverse assistive needs.

3.2. Bibliometric Analysis

In this section, we perform a bibliometric performance analysis that provides insights into the volume, distribution, and temporal trends of research on the integration of AI and HCI within OT. This analysis offers a comprehensive overview of the trends, key authors, and research outputs in the field, helping to highlight gaps and opportunities for future research. As already mentioned, a total of 26 studies and 8 review articles were identified as suitable for inclusion in the analysis. The bar chart in Figure 4 depicts the annual publication frequency from 2015 to 2025. The yearly distribution of all included articles is illustrated in blue, while the distribution of review articles is shown in red. A discernible upward trajectory is evident, as indicated by the dotted trend line, which denotes a moving average over the previous two years. The number of publications has increased steadily, from two in 2015 to eight in 2024, notwithstanding a minor reduction in 2022 and 2023.
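The trend line described above can be reproduced as a trailing two-year moving average. The sketch below uses hypothetical annual counts: only the endpoints (2 in 2015, 8 in 2024) and the overall total of 34 records (26 studies plus 8 reviews) come from the review, while the intermediate values are invented for illustration:

```python
# Reproduce the dotted trend line: a moving average of the previous two years.
years = list(range(2015, 2025))

# Hypothetical annual publication counts (endpoints and total taken from the
# review; intermediate values are illustrative only).
counts = [2, 2, 3, 3, 4, 3, 4, 2, 3, 8]
assert sum(counts) == 34  # 26 studies + 8 review articles

# Trailing two-year moving average; undefined for the first two years.
trend = [None if i < 2 else (counts[i - 1] + counts[i - 2]) / 2
         for i in range(len(counts))]

for y, c, t in zip(years, counts, trend):
    print(y, c, t)
```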
Table 5 below highlights the key contributors and their notable impact in the field, listing the authors with at least 50 citations along with the number of documents they have authored. Adams, Lichter, Ellington, White, and Diamond lead the rankings, each with 162 citations across two documents. Other prominent contributors include Jain, Thiagarajan, Shi, Clabaugh, and Matarić, each with 115 citations from a single document. Krepkovich follows with 88 citations in one document.
Table 6 presents the academic sources with at least two published documents and their corresponding citation counts. The Sensors journal stands out as the primary source, with four documents and a total of 100 citations, underscoring its substantial contribution to the field. Following closely is Lecture Notes in Computer Science with three documents and 12 citations. Another notable source is IEEE Transactions on Neural Systems and Rehabilitation Engineering, with two documents and 162 citations.

4. Discussion

4.1. Principal Findings

In summary, we observe that the integration of AI and HCI in OT practices has expanded significantly in recent years, addressing the diverse needs of people across multiple domains. The reviewed studies have been categorized into four major areas: upper-limb motor function after stroke, ASD and developmental conditions, elderly support and AL, and emotional expression with virtual companionship. Across these domains, various technologies have been employed, evolving with advancements in AI-driven approaches and interactive systems.
Figure 5 illustrates the comparative distribution of studies across the four main application domains over the examined years. The patterns highlighted in Figure 5 concern both the proportional distribution of research across domains and the temporal evolution of the studied areas, emphasizing the dynamic progression of AI and HCI integration within OT. Notably, research related to ASD and developmental conditions maintained a consistent presence between 2015 and 2022, indicating sustained scientific interest in these areas. In contrast, studies focusing on elderly support and AL show a pronounced increase from 2021 onwards, reflecting the growing societal need to address challenges associated with aging populations. In addition, emotional expression and virtual companionship appear to be more recent research focuses, with most studies published between 2024 and 2025, suggesting an emerging recognition of technology’s role in supporting emotional well-being. Research on motor function rehabilitation after stroke is more sporadically distributed, with studies emerging in 2014, 2017, 2023, and 2024, indicating ongoing but less concentrated efforts in this area.
In the domain of upper-limb motor function rehabilitation, earlier studies primarily utilized VR and traditional motion-tracking techniques for assessment and training, e.g., Kinect-based sensory data. Yet, more recent works have incorporated DT, serious-game VR, and AI-driven motion analysis, such as CNNs, to improve stroke rehabilitation strategies. The trend suggests an increasing reliance on robotic solutions, like exoskeletons and interactive AI models, for real-time patient adaptation and feedback, highlighting a shift towards more adaptive and personalized physical therapy methods [31,32].
For individuals with ASD and other developmental conditions, socially assistive robotics and serious-game applications have been consistently utilized, with an increasing emphasis on AI-driven engagement detection. Early studies focused on tangible interfaces and game-based interventions, whereas more recent research integrates DL techniques, such as CNNs and RL, to optimize engagement levels [37]. Furthermore, AI-driven emotion recognition and adaptive interaction models have gained prominence, demonstrating a move towards personalized therapy experiences that cater to individual behavioral responses [39].
The domain of elderly support and AL has seen an evolution from rule-based smart home automation towards more sophisticated AI-driven robotic assistance and multi-modal activity recognition. Earlier works emphasized decision trees and basic sensor-based automation, while recent studies have also incorporated cutting-edge DL models and real-world participatory design approaches. The trend indicates a growing preference for AI-augmented humanoid robots and assistive robotic arms aiming to enhance ADL recognition, grasping, and manipulation tasks for elderly individuals [51].
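The contrast between early rule-based automation and learned models can be illustrated with a hand-written, decision-tree-style ADL classifier over ambient-sensor features. All feature names and thresholds below are hypothetical:

```python
def classify_adl(features):
    """Hand-crafted, decision-tree-style ADL classifier.

    Mirrors early rule-based ambient systems: a fixed cascade of threshold
    tests over sensor readings. Learned approaches (decision trees fitted to
    data, or DL models over raw sensor streams) replace these hand-tuned
    rules with data-driven splits.
    """
    if features["kitchen_motion"] and features["stove_power_w"] > 50:
        return "cooking"
    if features["bathroom_motion"]:
        return "bathing" if features["humidity_pct"] > 70 else "toileting"
    if features["bed_pressure"]:
        return "sleeping"
    return "idle"

sample = {"kitchen_motion": False, "bathroom_motion": True,
          "humidity_pct": 82, "bed_pressure": False, "stove_power_w": 0}
print(classify_adl(sample))  # -> bathing
```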
Finally, in the field of emotional expression and virtual companionship, there is a notable transition from traditional chatbot-based interventions towards more immersive VR and AI-driven emotion recognition systems. While earlier research focused on structured social interactions using chatbots and socially assistive robotics, more recent works exploit EEG-based emotion recognition and advanced DL models for real-time adaptation. The increased use of VR and conversational AI indicates a trend towards enhancing emotional well-being through more natural and engaging AI–human interactions [61,62].

4.2. Open Issues and Future Perspectives

Despite the aforementioned advancements of AI and HCI, several challenges and future research directions remain. To begin with, the emergence of large language models (LLMs) presents new opportunities for enhancing virtual companionship, adaptive learning, and personalized therapeutic interventions, yet their integration into clinical and assistive settings requires further validation [64]. Additionally, the applicability of AI-driven solutions in real-world scenarios is often hindered by high implementation costs, technical complexity, and the need for seamless integration into existing healthcare infrastructures [65].
Moreover, as AI-driven HCI technologies are increasingly integrated into OT, several ethical concerns must be addressed to ensure their responsible and equitable application [66]. One key issue is data privacy, where the collection, storage, and processing of sensitive patient data raise significant concerns about how these data are protected and whether individuals’ rights are adequately safeguarded [67]. It is of the utmost importance to ensure that AI systems adhere to strict data privacy regulations, such as the General Data Protection Regulation (GDPR) in Europe [68], and maintain patients’ confidentiality. Another crucial ethical challenge is algorithmic bias. If not properly designed, AI algorithms may inadvertently perpetuate existing biases in healthcare, leading to inequitable treatment outcomes for certain groups, particularly those from marginalized or under-represented populations [69]. It is essential that AI systems are regularly audited and tested to ensure they are fair and unbiased in their decision-making [70]. Finally, technological accessibility remains a pressing issue. Although AI technologies hold great promise for enhancing OT interventions, they may not be equally accessible to all patients, particularly in low-resource settings or among individuals with limited technological literacy [71]. Efforts to develop user-friendly and accessible AI tools are necessary to ensure that these advancements benefit a broad range of patients, including those with physical or cognitive impairments [72].
Building on the aforementioned concerns, several specific strategies can be proposed to mitigate the associated risks. To protect data privacy, robust encryption standards, secure data storage practices, and strict access control measures should be implemented [73]. Meanwhile, the use of diverse, representative datasets and regular audits of AI models are critical to ensure fairness and transparency, thus countering algorithmic bias [74]. Finally, promoting equitable access requires the design of affordable, intuitive technologies alongside initiatives that improve digital literacy among patients and providers [75]. Interdisciplinary collaboration among AI researchers, healthcare professionals, and policymakers will therefore be essential to address these challenges comprehensively, ensuring that the integration of AI and HCI into OT is not only innovative but also ethical, inclusive, and sustainable [76].
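As one concrete instance of the privacy measures listed above, patient identifiers can be pseudonymized with a keyed hash before analysis, so that records remain linkable across sessions without exposing direct identifiers. A minimal sketch using only the Python standard library; the key handling and identifier format are placeholders, and a real deployment would additionally require encryption at rest, access control, and managed key storage:

```python
import hmac
import hashlib

def pseudonymize(patient_id: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA-256).

    The token is stable for a given key, enabling record linkage, yet
    infeasible to invert without the key. This is a sketch of one
    privacy-preserving step, not a complete GDPR-compliance solution.
    """
    return hmac.new(secret_key, patient_id.encode(), hashlib.sha256).hexdigest()

key = b"store-me-in-a-key-vault"  # placeholder; never hard-code in practice
token = pseudonymize("patient-0042", key)
print(token[:16])

assert token == pseudonymize("patient-0042", key)  # deterministic linkage
assert token != pseudonymize("patient-0043", key)  # distinct patients differ
```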

4.3. Review Limitations

A key limitation of this scoping review is the exclusion of gray literature and the reliance on only two online databases for the literature search. This approach may have contributed to the relatively small number of included studies, potentially limiting the scope of insights obtained. Additionally, restricting the search to the last 10 years may have excluded earlier studies; however, preliminary exploration showed minimal relevant work prior to this period. Furthermore, the exclusion of rehabilitation studies primarily focused on injury-related outcomes represents another constraint, as it may have led to the omission of valuable perspectives on AI applications within the broader rehabilitation landscape. Expanding future searches to encompass additional databases, gray literature, and a wider range of studies could offer a more comprehensive understanding of AI applications in OT.

5. Conclusions

Ultimately, the integration of AI and HCI in OT has shown significant progress in recent years, leveraging advanced technologies to enhance the quality of life, well-being, support, and interaction across diverse groups of clients. This paper has reviewed recent research in the field, categorizing studies into four distinct areas: upper-limb motor function after stroke, ASD and developmental conditions, elderly support and AL, and emotional expression with virtual companionship. Across these four categories, a clear shift has been observed towards DL-based methodologies, immersive virtual environments, and AI-powered robotics in OT applications. The increasing reliance on CNNs, RL, and multi-modal data integration highlights a broader trend towards AI personalization, moving towards more adaptive, interactive, and user-centered solutions for individuals’ daily well-being. At the same time, while the open issues discussed above pose strong limitations to the widespread adoption of AI and HCI in OT, they also represent key challenges that will shape future research and innovation in the field. Addressing these issues will not only enhance the reliability and effectiveness of AI-driven interventions but also pave the way for more equitable and personalized OT solutions in the years to come.

Author Contributions

Conceptualization, I.K., C.K., T.S., E.G. and P.S.; methodology, I.K., C.K., T.S. and E.G.; validation, P.S., S.K. and M.K.; formal analysis, P.S. and P.M.; data curation, I.K., C.K., T.S. and E.G.; writing—original draft preparation, I.K., C.K., T.S. and E.G.; writing—review and editing, P.S., P.M., M.M. and A.G.; visualization, S.K., P.M. and M.K.; supervision, M.M. and A.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ADL  Activity of Daily Living
AI  Artificial Intelligence
ANN  Artificial Neural Network
ASD  Autism Spectrum Disorder
CNN  Convolutional Neural Network
DL  Deep Learning
DT  Digital Twin
EEG  Electroencephalography
GDPR  General Data Protection Regulation
HCI  Human–Computer Interaction
LLM  Large Language Model
mAP  Mean Average Precision
OT  Occupational Therapy
RL  Reinforcement Learning
VR  Virtual Reality

References

  1. Cowen, K.; Collins, T.; Carr, S.; Wilson Menzfeld, G. The role of Occupational Therapy in community development to combat social isolation and loneliness. Br. J. Occup. Ther. 2024, 87, 434–442. [Google Scholar] [CrossRef]
  2. World Federation of Occupational Therapists. Definitions of Occupational Therapy from Member Organisations; World Health Organization: Geneva, Switzerland, 2013.
  3. Pancholi, S.; Wachs, J.P.; Duerstock, B.S. Use of artificial intelligence techniques to assist individuals with physical disabilities. Annu. Rev. Biomed. Eng. 2024, 26, 1–24. [Google Scholar] [CrossRef]
  4. Kaelin, V.C.; Valizadeh, M.; Salgado, Z.; Parde, N.; Khetani, M.A. Artificial intelligence in rehabilitation targeting the participation of children and youth with disabilities: Scoping review. J. Med. Internet Res. 2021, 23, e25745. [Google Scholar] [CrossRef]
  5. Oikonomou, K.M.; Papapetros, I.T.; Kansizoglou, I.; Sirakoulis, G.C.; Gasteratos, A. Navigation with Care: The ASPiDA Assistive Robot. In Proceedings of the 2023 18th International Workshop on Cellular Nanoscale Networks and their Applications (CNNA), Xanthi, Greece, 28–30 September 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 1–4. [Google Scholar]
  6. Kansizoglou, I.; Tsintotas, K.A.; Bratanov, D.; Gasteratos, A. Drawing-aware Parkinson’s disease detection through hierarchical deep learning models. IEEE Access 2025, 13, 21880–21890. [Google Scholar] [CrossRef]
  7. Fanciullacci, C.; McKinney, Z.; Monaco, V.; Milandri, G.; Davalli, A.; Sacchetti, R.; Laffranchi, M.; De Michieli, L.; Baldoni, A.; Mazzoni, A.; et al. Survey of transfemoral amputee experience and priorities for the user-centered design of powered robotic transfemoral prostheses. J. Neuroeng. Rehabil. 2021, 18, 168. [Google Scholar] [CrossRef]
  8. Keroglou, C.; Kansizoglou, I.; Michailidis, P.; Oikonomou, K.M.; Papapetros, I.T.; Dragkola, P.; Michailidis, I.T.; Gasteratos, A.; Kosmatopoulos, E.B.; Sirakoulis, G.C. A survey on technical challenges of assistive robotics for elder people in domestic environments: The aspida concept. IEEE Trans. Med. Robot. Bionics 2023, 5, 196–205. [Google Scholar] [CrossRef]
  9. Sawik, B.; Tobis, S.; Baum, E.; Suwalska, A.; Kropińska, S.; Stachnik, K.; Pérez-Bernabeu, E.; Cildoz, M.; Agustin, A.; Wieczorowska-Tobis, K. Robots for elderly care: Review, multi-criteria optimization model and qualitative case study. Healthcare 2023, 11, 1286. [Google Scholar] [CrossRef]
  10. Berrezueta-Guzman, J.; Robles-Bykbaev, V.E.; Pau, I.; Pesántez-Avilés, F.; Martín-Ruiz, M.L. Robotic technologies in ADHD care: Literature review. IEEE Access 2021, 10, 608–625. [Google Scholar] [CrossRef]
  11. Zhang, Y.; D’Haeseleer, I.; Coelho, J.; Vanden Abeele, V.; Vanrumste, B. Recognition of bathroom activities in older adults using wearable sensors: A systematic review and recommendations. Sensors 2021, 21, 2176. [Google Scholar] [CrossRef]
  12. Piszcz, A.; Rojek, I.; Mikołajewski, D. Impact of Virtual Reality on Brain–Computer Interface Performance in IoT Control—Review of Current State of Knowledge. Appl. Sci. 2024, 14, 10541. [Google Scholar] [CrossRef]
  13. Ku, J.; Kang, Y.J. Novel virtual reality application in field of neurorehabilitation. Brain Neurorehabilit. 2018, 11, e5. [Google Scholar] [CrossRef]
  14. Piszcz, A. BCI in VR: An immersive way to make the brain-computer interface more efficient. Stud. I Mater. Inform. Stosow. 2021, 13, 11–16. [Google Scholar]
  15. Shen, J.; Zhang, C.J.; Jiang, B.; Chen, J.; Song, J.; Liu, Z.; He, Z.; Wong, S.Y.; Fang, P.H.; Ming, W.K.; et al. Artificial intelligence versus clinicians in disease diagnosis: Systematic review. JMIR Med. Inform. 2019, 7, e10010. [Google Scholar] [CrossRef]
  16. Ochella, S.; Shafiee, M.; Dinmohammadi, F. Artificial intelligence in prognostics and health management of engineering systems. Eng. Appl. Artif. Intell. 2022, 108, 104552. [Google Scholar] [CrossRef]
  17. Wang, C.; Zhu, X.; Hong, J.C.; Zheng, D. Artificial intelligence in radiotherapy treatment planning: Present and future. Technol. Cancer Res. Treat. 2019, 18, 1533033819873922. [Google Scholar] [CrossRef]
  18. Zhang, Y.; Weng, Y.; Lund, J. Applications of explainable artificial intelligence in diagnosis and surgery. Diagnostics 2022, 12, 237. [Google Scholar] [CrossRef]
  19. Shaik, T.; Tao, X.; Higgins, N.; Li, L.; Gururajan, R.; Zhou, X.; Acharya, U.R. Remote patient monitoring using artificial intelligence: Current state, applications, and challenges. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 2023, 13, e1485. [Google Scholar] [CrossRef]
  20. Pinto-Coelho, L. How artificial intelligence is shaping medical imaging technology: A survey of innovations and applications. Bioengineering 2023, 10, 1435. [Google Scholar] [CrossRef]
  21. Tang, X. The role of artificial intelligence in medical imaging research. BJR|Open 2019, 2, 20190031. [Google Scholar] [CrossRef] [PubMed]
  22. Bhinder, B.; Gilvary, C.; Madhukar, N.S.; Elemento, O. Artificial intelligence in cancer research and precision medicine. Cancer Discov. 2021, 11, 900–915. [Google Scholar] [CrossRef] [PubMed]
  23. Carini, C.; Seyhan, A.A. Tribulations and future opportunities for artificial intelligence in precision medicine. J. Transl. Med. 2024, 22, 411. [Google Scholar] [CrossRef] [PubMed]
  24. Badawy, M.; Ramadan, N.; Hefny, H.A. Healthcare predictive analytics using machine learning and deep learning techniques: A survey. J. Electr. Syst. Inf. Technol. 2023, 10, 40. [Google Scholar] [CrossRef]
  25. Khan, Z.F.; Alotaibi, S.R. Applications of artificial intelligence and big data analytics in m-health: A healthcare system perspective. J. Healthc. Eng. 2020, 2020, 8894694. [Google Scholar] [CrossRef] [PubMed]
  26. Moglia, A.; Georgiou, K.; Georgiou, E.; Satava, R.M.; Cuschieri, A. A systematic review on artificial intelligence in robot-assisted surgery. Int. J. Surg. 2021, 95, 106151. [Google Scholar] [CrossRef]
  27. Khalifa, M.; Albadawy, M.; Iqbal, U. Advancing clinical decision support: The role of artificial intelligence across six domains. Comput. Methods Programs Biomed. Update 2024, 5, 100142. [Google Scholar] [CrossRef]
  28. Peters, M.D.; Marnie, C.; Tricco, A.C.; Pollock, D.; Munn, Z.; Alexander, L.; McInerney, P.; Godfrey, C.M.; Khalil, H. Updated methodological guidance for the conduct of scoping reviews. JBI Evid. Implement. 2021, 19, 3–10. [Google Scholar] [CrossRef]
  29. Tricco, A.C.; Lillie, E.; Zarin, W.; O’Brien, K.K.; Colquhoun, H.; Levac, D.; Moher, D.; Peters, M.D.; Horsley, T.; Weeks, L.; et al. PRISMA extension for scoping reviews (PRISMA-ScR): Checklist and explanation. Ann. Intern. Med. 2018, 169, 467–473. [Google Scholar] [CrossRef] [PubMed]
  30. Ding, Z.; Ji, Y.; Gan, Y.; Wang, Y.; Xia, Y. Current status and trends of technology, methods, and applications of Human–Computer Intelligent Interaction (HCII): A bibliometric research. Multimed. Tools Appl. 2024, 83, 69111–69144. [Google Scholar] [CrossRef]
  31. Lauer-Schmaltz, M.W.; Cash, P.; Hansen, J.P.; Das, N. Human digital twins in rehabilitation: A case study on exoskeleton and serious-game-based stroke rehabilitation using the ethica methodology. IEEE Access 2024, 12, 180968–180991. [Google Scholar] [CrossRef]
  32. Li, X.; Lu, Q.; Chen, P.; Gong, S.; Yu, X.; He, H.; Li, K. Assistance level quantification-based human-robot interaction space reshaping for rehabilitation training. Front. Neurorobot. 2023, 17, 1161007. [Google Scholar] [CrossRef]
  33. Connan, M.; Sierotowicz, M.; Henze, B.; Porges, O.; Albu-Schäffer, A.; Roa, M.A.; Castellini, C. Learning to teleoperate an upper-limb assistive humanoid robot for bimanual daily-living tasks. Biomed. Phys. Eng. Express 2021, 8, 015022. [Google Scholar] [CrossRef]
  34. Adams, R.J.; Lichter, M.D.; Krepkovich, E.T.; Ellington, A.; White, M.; Diamond, P.T. Assessing upper extremity motor function in practice of virtual activities of daily living. IEEE Trans. Neural Syst. Rehabil. Eng. 2014, 23, 287–296. [Google Scholar] [CrossRef] [PubMed]
  35. Adams, R.J.; Lichter, M.D.; Ellington, A.; White, M.; Armstead, K.; Patrie, J.T.; Diamond, P.T. Virtual activities of daily living for recovery of upper extremity motor function. IEEE Trans. Neural Syst. Rehabil. Eng. 2017, 26, 252–260. [Google Scholar] [CrossRef] [PubMed]
  36. Senadheera, I.; Hettiarachchi, P.; Haslam, B.; Nawaratne, R.; Sheehan, J.; Lockwood, K.J.; Alahakoon, D.; Carey, L.M. AI applications in adult stroke recovery and rehabilitation: A scoping review using AI. Sensors 2024, 24, 6585. [Google Scholar] [CrossRef] [PubMed]
  37. Lagos, M.; Pousada, T.; Fernández, A.; Carneiro, R.; Martínez, A.; Groba, B.; Nieto-Riveiro, L.; Pereira, J. Outcome measures applied to robotic assistive technology for people with cerebral palsy: A pilot study. Disabil. Rehabil. Assist. Technol. 2024, 19, 3015–3022. [Google Scholar] [CrossRef]
  38. Chandrashekhar, R.; Wang, H.; Rippetoe, J.; James, S.A.; Fagg, A.H.; Kolobe, T.H. The impact of cognition on motor learning and skill acquisition using a robot intervention in infants with cerebral palsy. Front. Robot. AI 2022, 9, 805258. [Google Scholar] [CrossRef]
  39. Horstmann, A.C.; Mühl, L.; Köppen, L.; Lindhaus, M.; Storch, D.; Bühren, M.; Röttgers, H.R.; Krajewski, J. Important preliminary insights for designing successful communication between a robotic learning assistant and children with autism spectrum disorder in Germany. Robotics 2022, 11, 141. [Google Scholar] [CrossRef]
  40. Jain, S.; Thiagarajan, B.; Shi, Z.; Clabaugh, C.; Matarić, M.J. Modeling engagement in long-term, in-home socially assistive robot interventions for children with autism spectrum disorders. Sci. Robot. 2020, 5, eaaz3791. [Google Scholar] [CrossRef]
  41. AlSadoun, W.; Alwahaibi, N.; Altwayan, L. Towards intelligent technology in art therapy contexts. In Proceedings of the HCI International 2020-Late Breaking Papers: Multimodality and Intelligence: 22nd HCI International Conference, HCII 2020, Copenhagen, Denmark, 19–24 July 2020; Proceedings 22. Springer: Cham, Switzerland, 2020; pp. 397–405. [Google Scholar]
  42. Fang, Z.; Paliyawan, P.; Thawonmas, R.; Harada, T. Towards an angry-birds-like game system for promoting mental well-being of players using art-therapy-embedded procedural content generation. In Proceedings of the 2019 IEEE 8th Global Conference on Consumer Electronics (GCCE), Osaka, Japan, 15–18 October 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 947–948. [Google Scholar]
  43. Li, M.; Li, X.; Xie, L.; Liu, J.; Wang, F.; Wang, Z. Assisted therapeutic system based on reinforcement learning for children with autism. Comput. Assist. Surg. 2019, 24, 94–104. [Google Scholar] [CrossRef]
  44. Irani, A.; Moradi, H.; Vahid, L.K. Autism screening using a video game based on emotions. In Proceedings of the 2018 2nd National and 1st International Digital Games Research Conference: Trends, Technologies, and Applications (DGRC), Tehran, Iran, 29–30 November 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 40–45. [Google Scholar]
  45. Bonillo, C.; Cerezo, E.; Marco, J.; Baldassarri, S. Designing therapeutic activities based on tangible interaction for children with developmental delay. In Proceedings of the Universal Access in Human-Computer Interaction. Users and Context Diversity: 10th International Conference, UAHCI 2016, Held as Part of HCI International 2016, Toronto, ON, Canada, 17–22 July 2016; Proceedings, Part III 10. Springer: Cham, Switzerland, 2016; pp. 183–192. [Google Scholar]
  46. Zidianakis, E.; Zidianaki, I.; Ioannidi, D.; Partarakis, N.; Antona, M.; Paparoulis, G.; Stephanidis, C. Employing ambient intelligence technologies to adapt games to children’s playing maturity. In Proceedings of the Universal Access in Human-Computer Interaction, Access to Learning, Health and Well-Being: 9th International Conference, UAHCI 2015, Held as Part of HCI International 2015, Los Angeles, CA, USA, 2–7 August 2015; Proceedings, Part III 9. Springer: Cham, Switzerland, 2015; pp. 577–589. [Google Scholar]
  47. Baltrusaitis, T.; Zadeh, A.; Lim, Y.C.; Morency, L.P. OpenFace 2.0: Facial Behavior Analysis Toolkit. In Proceedings of the 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition, Xi’an, China, 15–19 May 2018; pp. 59–66. [Google Scholar]
  48. Cao, Z.; Hidalgo, G.; Simon, T.; Wei, S.E.; Sheikh, Y. Openpose: Realtime multi-person 2d pose estimation using part affinity fields. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 43, 172–186. [Google Scholar] [CrossRef]
  49. Boersma, P. Praat, a system for doing phonetics by computer. Glot. Int. 2001, 5, 341–345. [Google Scholar]
  50. Mahmoudi Asl, A.; Molinari Ulate, M.; Franco Martin, M.; van der Roest, H. Methodologies used to study the feasibility, usability, efficacy, and effectiveness of social robots for elderly adults: Scoping review. J. Med. Internet Res. 2022, 24, e37434. [Google Scholar] [CrossRef]
  51. Tsui, K.M.; Baggett, R.; Chiang, C. Exploring Embodiment Form Factors of a Home-Helper Robot: Perspectives from Care Receivers and Caregivers. Appl. Sci. 2025, 15, 891. [Google Scholar] [CrossRef]
  52. Tiersen, F.; Batey, P.; Harrison, M.J.; Naar, L.; Serban, A.I.; Daniels, S.J.; Calvo, R.A. Smart home sensing and monitoring in households with dementia: User-centered design approach. JMIR Aging 2021, 4, e27047. [Google Scholar] [CrossRef]
  53. Ranieri, C.M.; MacLeod, S.; Dragone, M.; Vargas, P.A.; Romero, R.A.F. Activity recognition for ambient assisted living with videos, inertial units and ambient sensors. Sensors 2021, 21, 768. [Google Scholar] [CrossRef]
  54. Coviello, L.; Cavallo, F.; Limosani, R.; Rovini, E.; Fiorini, L. Machine learning based physical human-robot interaction for walking support of frail people. In Proceedings of the 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Berlin, Germany, 23–27 July 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 3404–3407. [Google Scholar]
  55. Fei, F.; Xian, S.; Yang, R.; Wu, C.; Lu, X. A Wearable Visually Impaired Assistive System Based on Semantic Vision SLAM for Grasping Operation. Sensors 2024, 24, 3593. [Google Scholar] [CrossRef]
  56. Shahria, M.T.; Rahman, M.H. Activities of Daily Living Object Dataset: Advancing Assistive Robotic Manipulation with a Tailored Dataset. Sensors 2024, 24, 7566. [Google Scholar] [CrossRef]
  57. Try, P.; Schöllmann, S.; Wöhle, L.; Gebhard, M. Visual sensor fusion based autonomous robotic system for assistive drinking. Sensors 2021, 21, 5419. [Google Scholar] [CrossRef]
  58. Erickson, Z.; Clever, H.M.; Gangaram, V.; Turk, G.; Liu, C.K.; Kemp, C.C. Multidimensional capacitive sensing for robot-assisted dressing and bathing. In Proceedings of the 2019 IEEE 16th International Conference on Rehabilitation Robotics (ICORR), Toronto, ON, Canada, 24–28 June 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 224–231. [Google Scholar]
  59. Søraa, R.A.; Tøndel, G.; Kharas, M.W.; Serrano, J.A. What do older adults want from social robots? A qualitative research approach to human-robot interaction (HRI) studies. Int. J. Soc. Robot. 2023, 15, 411–424. [Google Scholar] [CrossRef]
  60. Winkle, K. Social robots for motivation and engagement in therapy. In Proceedings of the 19th ACM International Conference on Multimodal Interaction, Glasgow, UK, 13–17 November 2017; pp. 614–617. [Google Scholar]
  61. Zentner, S.; Barradas Chacon, A.; Wriessnegger, S.C. The Impact of Light Conditions on Neural Affect Classification: A Deep Learning Approach. Mach. Learn. Knowl. Extr. 2024, 6, 199–214. [Google Scholar] [CrossRef]
  62. Xie, Z.; Wang, Z. Longitudinal examination of the relationship between virtual companionship and social anxiety: Emotional expression as a mediator and mindfulness as a moderator. Psychol. Res. Behav. Manag. 2024, 17, 765–782. [Google Scholar] [CrossRef]
  63. Kansizoglou, I.; Misirlis, E.; Tsintotas, K.; Gasteratos, A. Continuous emotion recognition for long-term behavior modeling through recurrent neural networks. Technologies 2022, 10, 59. [Google Scholar] [CrossRef]
  64. Chang, Y.; Wang, X.; Wang, J.; Wu, Y.; Yang, L.; Zhu, K.; Chen, H.; Yi, X.; Wang, C.; Wang, Y.; et al. A survey on evaluation of large language models. ACM Trans. Intell. Syst. Technol. 2024, 15, 39. [Google Scholar] [CrossRef]
  65. Aliferis, C.; Simon, G. Lessons Learned from Historical Failures, Limitations and Successes of AI/ML in Healthcare and the Health Sciences. Enduring Problems, and the Role of Best Practices. In Artificial Intelligence and Machine Learning in Health Care and Medical Sciences: Best Practices and Pitfalls; Springer: Cham, Switzerland, 2024; pp. 543–606. [Google Scholar]
  66. Naik, N.; Hameed, B.; Shetty, D.K.; Swain, D.; Shah, M.; Paul, R.; Aggarwal, K.; Ibrahim, S.; Patil, V.; Smriti, K.; et al. Legal and ethical consideration in artificial intelligence in healthcare: Who takes responsibility? Front. Surg. 2022, 9, 862322. [Google Scholar] [CrossRef]
  67. Yadav, N.; Pandey, S.; Gupta, A.; Dudani, P.; Gupta, S.; Rangarajan, K. Data privacy in healthcare: In the era of artificial intelligence. Indian Dermatol. Online J. 2023, 14, 788–792. [Google Scholar] [CrossRef]
  68. Voigt, P.; Von dem Bussche, A. The EU General Data Protection Regulation (GDPR). In A Practical Guide, 1st ed.; Springer International Publishing: Cham, Switzerland, 2017; Volume 10, pp. 10–5555. [Google Scholar]
  69. Chen, R.J.; Wang, J.J.; Williamson, D.F.; Chen, T.Y.; Lipkova, J.; Lu, M.Y.; Sahai, S.; Mahmood, F. Algorithmic fairness in artificial intelligence for medicine and healthcare. Nat. Biomed. Eng. 2023, 7, 719–742. [Google Scholar] [CrossRef]
  70. Nazer, L.H.; Zatarah, R.; Waldrip, S.; Ke, J.X.C.; Moukheiber, M.; Khanna, A.K.; Hicklen, R.S.; Moukheiber, L.; Moukheiber, D.; Ma, H.; et al. Bias in artificial intelligence algorithms and recommendations for mitigation. PLoS Digit. Health 2023, 2, e0000278. [Google Scholar] [CrossRef]
  71. Chemnad, K.; Othman, A. Digital accessibility in the era of artificial intelligence—Bibliometric analysis and systematic review. Front. Artif. Intell. 2024, 7, 1349668. [Google Scholar] [CrossRef]
  72. Guo, J.; Li, B. The application of medical artificial intelligence technology in rural areas of developing countries. Health Equity 2018, 2, 174–181. [Google Scholar] [CrossRef] [PubMed]
  73. Hazra, R.; Chatterjee, P.; Singh, Y.; Podder, G.; Das, T. Data Encryption and Secure Communication Protocols. In Strategies for E-Commerce Data Security: Cloud, Blockchain, AI, and Machine Learning; IGI Global: Hershey, PA, USA, 2024; pp. 546–570. [Google Scholar]
  74. Longpre, S.; Mahari, R.; Chen, A.; Obeng-Marnu, N.; Sileo, D.; Brannon, W.; Muennighoff, N.; Khazam, N.; Kabbara, J.; Perisetla, K.; et al. A large-scale audit of dataset licensing and attribution in AI. Nat. Mach. Intell. 2024, 6, 975–987. [Google Scholar] [CrossRef]
  75. Ruh, D.M. Patient perspectives on digital health. In Digital Health; Elsevier: Amsterdam, The Netherlands, 2025; pp. 481–501. [Google Scholar]
  76. Siala, H.; Wang, Y. SHIFTing artificial intelligence to be responsible in healthcare: A systematic review. Soc. Sci. Med. 2022, 296, 114782. [Google Scholar] [CrossRef]
Figure 1. Flowchart illustrating the methodology used for the article selection process.
Figure 3. Overview of the AI and HCI integration in OT: a three-layered representation highlighting the intersection of research domains, technological enablers, and targeted therapeutic outcomes.
Figure 4. Bar plot depicting the annual distribution of document counts for all the included records (blue) and review articles (red). The dotted line indicates the temporal trend over the study period.
Figure 5. Distribution of the 26 included studies across the four application domains and the corresponding years, illustrating both the proportional distribution of research across domains and the temporal evolution of the studied areas.
Table 2. Findings and characteristics of included AI and HCI studies for patients with ASD and developmental conditions.
| Authors | Application Domain | Technology | Input Data | Validation Method | Employed Subjects | Results |
|---|---|---|---|---|---|---|
| Lagos et al., 2024 [37] | Robotic assistants for people with CP | LOLA2 robotic platform and CNN | RGB images | Real test on adults | 9 patients | Independence (FIM = 98), competence (0.25), adaptability (0.33), and self-esteem (0.25) |
| Chandrashekhar et al., 2022 [38] | Cognition on motor learning in infants with CP | SIPPC robot | MDI and MOCS | Real test on infants | 63 patients and healthy | Relationship between cognitive ability and delayed motor skills improvement |
| Hortsmann et al., 2022 [39] | Design of successful interaction between robots and children with ASD | Robots and speech recognition | Speech and interviews | Real test on children | 7 patients and 6 therapists | Need for AI-based engagement detection, features, appearance, and functions |
| Jain et al., 2020 [40] | Engagement with socially assistive robotics for children with ASD | Socially assistive robotics, CNN, and online RL | RGB images, speech, and game scores | Real test on children | 7 patients | 65% engagement in HRI and 90% AUROC in engagement detection |
| AlSadoun et al., 2020 [41] | Art therapy for users with impaired communication skills | Virtual agent, ANN, and facial landmark extraction | RGB images | No | — | Framework of the smart art therapy system |
| Fang et al., 2019 [42] | Art therapy game for mental well-being promotion | VR-serious game, PCG AI, and CNN | Game scores and game images | No | — | Design and implementation of the AI-enabled game |
| Li et al., 2019 [43] | Assisted game for engaging children with ASD | RL, CNN, SVR, and robotic agent | RGB images | Real test on children | 11 patients | Minor decrease in SRS scores |
| Irani et al., 2018 [44] | Assisted game for screening children with ASD | Serious game and Gaussian SVM | RGB images and game scores | Real test on children | 23 patients and 22 healthy | 93.3% ASD detection accuracy |
| Bonillo et al., 2016 [45] | Activities for children with developmental delay | Serious game, tangible interfaces, and tabletops | RGB images, videos, and game scores | Real test on children | 10 patients | 4 definite tangible games |
| Zidianakis et al., 2015 [46] | Ambient intelligent games adapting to children's maturity | Serious game, augmented artifacts, and VR | Force pressure, accelerometer, tactile sensor, and IR camera | No | — | Design and implementation of games |
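Several of the included studies report detection performance as the area under the ROC curve, e.g., the 90% AUROC in engagement detection by Jain et al. [40]. As a purely illustrative reminder of what that metric measures (this is a sketch, not code from any included study, and the engagement scores below are hypothetical), AUROC can be computed via its rank-sum (Mann–Whitney U) formulation:

```python
def auroc(scores, labels):
    """AUROC via the rank-sum (Mann-Whitney U) formulation: the probability
    that a randomly chosen positive sample receives a higher score than a
    randomly chosen negative one, counting ties as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical per-frame engagement scores: 1 = engaged, 0 = disengaged.
print(auroc([0.9, 0.8, 0.6, 0.3, 0.2], [1, 1, 0, 1, 0]))  # ≈ 0.833
```

An AUROC of 0.5 corresponds to chance-level ranking and 1.0 to perfect separation, which is why it is a common threshold-free summary for engagement and screening classifiers such as those in Table 2.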
Table 4. Findings and characteristics of included AI and HCI studies for emotional expression and virtual companionship.
| Authors | Application Domain | Technology | Input Data | Validation Method | Employed Subjects | Results |
|---|---|---|---|---|---|---|
| Zentner et al., 2024 [61] | Emotion recognition during HCI under lighting variations | VR and CNN | EEG | Real test on adults | 30 healthy | 41.63% emotion recognition |
| Xie et al., 2024 [62] | Virtual interactions and emotional expression alleviating social anxiety | Chatbots and textual and vocal interaction | Frequency, mindfulness, social anxiety, emotional expression patterns, and questionnaires | Real test on undergraduate students | 618 participants | Frequency of HCI decreases online social anxiety, and emotion expression reduces social anxiety |
| Winkle et al., 2017 [60] | Socially assistive robotics in engagement and therapy | Socially assistive robotics | Focus groups, interviews, and observations | No | — | Design of AI, social behaviors, and interaction modalities |
Table 5. Authors with over 50 citations and their corresponding document counts.
| Author | Citations | Documents |
|---|---|---|
| Adams, R.J. | 162 | 2 |
| Lichter, M.D. | 162 | 2 |
| Ellington, A. | 162 | 2 |
| White, M. | 162 | 2 |
| Diamond, P.T. | 162 | 2 |
| Jain, S. | 115 | 1 |
| Thiagarajan, B. | 115 | 1 |
| Shi, Z. | 115 | 1 |
| Clabaugh, C. | 115 | 1 |
| Matarić, M.J. | 115 | 1 |
| Krepkovich, E.T. | 88 | 1 |
| Ranieri, C.M. | 85 | 1 |
| MacLeod, S. | 85 | 1 |
| Dragone, M. | 85 | 1 |
| Vargas, P.A. | 85 | 1 |
| Romero, R.A.F. | 85 | 1 |
| Armstead, K. | 74 | 1 |
| Patrie, J.T. | 74 | 1 |
| Tiersen, F. | 67 | 1 |
| Batey, P. | 67 | 1 |
| Harrison, M.J.C. | 67 | 1 |
| Naar, L. | 67 | 1 |
| Serban, A.I. | 67 | 1 |
| Daniels, S.J.C. | 67 | 1 |
| Calvo, R.A. | 67 | 1 |
Table 6. Key academic sources with a minimum of two (2) documents and their citation impact.
| Source | Documents | Citations |
|---|---|---|
| Sensors | 4 | 100 |
| Lecture Notes in Computer Science | 3 | 12 |
| IEEE Transactions on Neural Systems and Rehabilitation Engineering | 2 | 162 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Kansizoglou, I.; Kokkotis, C.; Stampoulis, T.; Giannakou, E.; Siaperas, P.; Kallidis, S.; Koutra, M.; Malliou, P.; Michalopoulou, M.; Gasteratos, A. Artificial Intelligence and the Human–Computer Interaction in Occupational Therapy: A Scoping Review. Algorithms 2025, 18, 276. https://doi.org/10.3390/a18050276


Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
