Search Results (250)

Search Parameters:
Keywords = HCI (human–computer interaction)

13 pages, 1520 KiB  
Article
Designing a Patient Outcome Clinical Assessment Tool for Modified Rankin Scale: “You Feel the Same Way Too”
by Laura London and Noreen Kamal
Informatics 2025, 12(3), 78; https://doi.org/10.3390/informatics12030078 - 4 Aug 2025
Abstract
The modified Rankin Scale (mRS) is a widely used outcome measure for assessing disability in stroke care; however, its administration is often affected by subjectivity and variability, leading to poor inter-rater reliability and inconsistent scoring. Originally designed for hospital discharge evaluations, the mRS has evolved into an outcome tool for disability assessment and clinical decision-making. Inconsistencies persist due to a lack of standardization and cognitive biases during its use. This paper presents design principles for creating a standardized clinical assessment tool (CAT) for the mRS, grounded in human–computer interaction (HCI) and cognitive engineering principles. Design principles were informed in part by an anonymous online survey conducted with clinicians across Canada to gain insights into current administration practices, opinions, and challenges of the mRS. The proposed design principles aim to reduce cognitive load, improve inter-rater reliability, and streamline the administration process of the mRS. By focusing on usability and standardization, the design principles seek to enhance scoring consistency and improve the overall reliability of clinical outcomes in stroke care and research. Developing a standardized CAT for the mRS represents a significant step toward improving the accuracy and consistency of stroke disability assessments. Future work will focus on real-world validation with healthcare stakeholders and exploring self-completed mRS assessments to further refine the tool. Full article

20 pages, 980 KiB  
Article
Dynamic Decoding of VR Immersive Experience in User’s Technology-Privacy Game
by Shugang Li, Zulei Qin, Meitong Liu, Ziyi Li, Jiayi Zhang and Yanfang Wei
Systems 2025, 13(8), 638; https://doi.org/10.3390/systems13080638 - 1 Aug 2025
Viewed by 170
Abstract
The formation mechanism of Virtual Reality (VR) Immersive Experience (VRIE) is notably complex; this study aimed to dynamically decode its underlying drivers by innovatively integrating Flow Theory and Privacy Calculus Theory, focusing on Perceptual-Interactive Fidelity (PIF), Consumer Willingness to Immerse in Technology (CWTI), and the applicability of Loss Aversion Theory. To achieve this, we analyzed approximately 30,000 user reviews from Amazon using Latent Semantic Analysis (LSA) and regression analysis. The findings reveal that user attention’s impact on VRIE is non-linear, suggesting an optimal threshold, and confirm PIF as a central influencing mechanism; furthermore, CWTI significantly moderates users’ privacy calculus, thereby affecting VRIE, while Loss Aversion Theory showed limited explanatory power in the VR context. These results provide a deeper understanding of VR user behavior, offering significant theoretical guidance and practical implications for future VR system design, particularly in strategically balancing user cognition, PIF, privacy concerns, and individual willingness. Full article
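The review-mining pipeline described above (Latent Semantic Analysis over review text, followed by regression) can be illustrated with a minimal LSA sketch. The toy corpus, vocabulary handling, and the choice of k = 2 latent dimensions below are illustrative assumptions, not the authors' setup:

```python
import numpy as np

# Toy corpus standing in for VR user reviews (illustrative only).
reviews = [
    "immersive experience feels real",
    "privacy concerns about data",
    "real immersive presence",
    "data privacy worries",
]

# Build a term-document count matrix.
vocab = sorted({w for r in reviews for w in r.split()})
index = {w: i for i, w in enumerate(vocab)}
A = np.zeros((len(vocab), len(reviews)))
for j, r in enumerate(reviews):
    for w in r.split():
        A[index[w], j] += 1.0

# LSA: a truncated SVD projects each document into a k-dimensional
# latent semantic space, where regression features can be derived.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
doc_topics = (np.diag(s[:k]) @ Vt[:k]).T  # one k-dim vector per review
print(doc_topics.shape)  # (4, 2)
```

In practice the study's ~30,000 Amazon reviews would be tokenized and weighted (e.g., TF-IDF) before the decomposition; the count matrix here keeps the sketch self-contained.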

81 pages, 11973 KiB  
Article
Designing and Evaluating XR Cultural Heritage Applications Through Human–Computer Interaction Methods: Insights from Ten International Case Studies
by Jolanda Tromp, Damian Schofield, Pezhman Raeisian Parvari, Matthieu Poyade, Claire Eaglesham, Juan Carlos Torres, Theodore Johnson, Teele Jürivete, Nathan Lauer, Arcadio Reyes-Lecuona, Daniel González-Toledo, María Cuevas-Rodríguez and Luis Molina-Tanco
Appl. Sci. 2025, 15(14), 7973; https://doi.org/10.3390/app15147973 - 17 Jul 2025
Viewed by 881
Abstract
Advanced three-dimensional extended reality (XR) technologies are highly suitable for cultural heritage research and education. XR tools enable the creation of realistic virtual or augmented reality applications for curating and disseminating information about cultural artifacts and sites. Developing XR applications for cultural heritage requires interdisciplinary collaboration involving strong teamwork and soft skills to manage user requirements, system specifications, and design cycles. Given the diverse end-users, achieving high precision, accuracy, and efficiency in information management and user experience is crucial. Human–computer interaction (HCI) design and evaluation methods are essential for ensuring usability and return on investment. This article presents ten case studies of cultural heritage software projects, illustrating the interdisciplinary work between computer science and HCI design. Students from institutions such as the State University of New York (USA), Glasgow School of Art (UK), University of Granada (Spain), University of Málaga (Spain), Duy Tan University (Vietnam), Imperial College London (UK), Research University Institute of Communication & Computer Systems (Greece), Technical University of Košice (Slovakia), and Indiana University (USA) contributed to creating, assessing, and improving the usability of these diverse cultural heritage applications. The results include a structured typology of CH XR application scenarios, detailed insights into design and evaluation practices across ten international use cases, and a development framework that supports interdisciplinary collaboration and stakeholder integration in phygital cultural heritage projects. Full article
(This article belongs to the Special Issue Advanced Technologies Applied to Cultural Heritage)

37 pages, 618 KiB  
Systematic Review
Interaction, Artificial Intelligence, and Motivation in Children’s Speech Learning and Rehabilitation Through Digital Games: A Systematic Literature Review
by Chra Abdoulqadir and Fernando Loizides
Information 2025, 16(7), 599; https://doi.org/10.3390/info16070599 - 12 Jul 2025
Viewed by 505
Abstract
The integration of digital serious games into speech learning (rehabilitation) has demonstrated significant potential in enhancing accessibility and inclusivity for children with speech disabilities. This review of the state of the art examines the role of serious games, Artificial Intelligence (AI), and Natural Language Processing (NLP) in speech rehabilitation, with a particular focus on interaction modalities, engagement autonomy, and motivation. We have reviewed 45 selected studies. Our key findings show how intelligent tutoring systems, adaptive voice-based interfaces, and gamified speech interventions can empower children to engage in self-directed speech learning, reducing dependence on therapists and caregivers. The diversity of interaction modalities, including speech recognition, phoneme-based exercises, and multimodal feedback, demonstrates how AI and Assistive Technology (AT) can personalise learning experiences to accommodate diverse needs. Furthermore, the incorporation of gamification strategies, such as reward systems and adaptive difficulty levels, has been shown to enhance children’s motivation and long-term participation in speech rehabilitation. The gaps identified show that despite advancements, challenges remain in achieving universal accessibility, particularly regarding speech recognition accuracy, multilingual support, and accessibility for users with multiple disabilities. This review advocates for interdisciplinary collaboration across educational technology, special education, cognitive science, and human–computer interaction (HCI). Our work contributes to the ongoing discourse on lifelong inclusive education, reinforcing the potential of AI-driven serious games as transformative tools for bridging learning gaps and promoting speech rehabilitation beyond clinical environments. Full article

17 pages, 2108 KiB  
Article
Designing for Dyads: A Comparative User Experience Study of Remote and Face-to-Face Multi-User Interfaces
by Mengcai Zhou, Jingxuan Wang, Ono Kenta, Makoto Watanabe and Chacon Quintero Juan Carlos
Electronics 2025, 14(14), 2806; https://doi.org/10.3390/electronics14142806 - 12 Jul 2025
Viewed by 322
Abstract
Collaborative digital games and interfaces are increasingly used in both research and commercial contexts, yet little is known about how the spatial arrangement and interface sharing affect the user experience in dyadic settings. Using a two-player iPad pong game, this study compared user experiences across three collaborative gaming scenarios: face-to-face single-screen (F2F-OneS), face-to-face dual-screen (F2F-DualS), and remote dual-screen (Rmt-DualS) scenarios. Eleven dyads participated in all conditions using a within-subject design. After each session, the participants completed a 21-item user experience questionnaire and took part in brief interviews. The results from a repeated-measure ANOVA and post hoc paired t-tests showed significant scenario effects for several experience items, with F2F-OneS yielding higher engagement, novelty, and accomplishment than remote play, and qualitative interviews supported the quantitative findings, revealing themes of social presence and interaction. These results highlight the importance of spatial and interface design in collaborative settings, suggesting that both technical and social factors should be considered in multi-user interface development. Full article
(This article belongs to the Special Issue Innovative Designs in Human–Computer Interaction)
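The post hoc paired t-tests mentioned above follow the standard formula for within-subject comparisons. A minimal sketch (not the authors' analysis code), with illustrative questionnaire scores:

```python
import numpy as np

def paired_t(a, b):
    """Paired t statistic for one questionnaire item across two
    scenarios (the follow-up to a repeated-measures ANOVA).
    Returns t and degrees of freedom; p-values would require a
    t-distribution CDF (e.g., scipy.stats), omitted here."""
    d = np.asarray(a, float) - np.asarray(b, float)
    n = d.size
    t = d.mean() / (d.std(ddof=1) / np.sqrt(n))
    return t, n - 1

# Hypothetical per-dyad engagement ratings: F2F-OneS vs. Rmt-DualS.
t_stat, df = paired_t([5, 6, 7, 8], [4, 4, 6, 6])
```

With 11 dyads the real analysis would have df = 10 per item; the four-sample vectors here are purely for illustration.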

21 pages, 2624 KiB  
Article
GMM-HMM-Based Eye Movement Classification for Efficient and Intuitive Dynamic Human–Computer Interaction Systems
by Jiacheng Xie, Rongfeng Chen, Ziming Liu, Jiahao Zhou, Juan Hou and Zengxiang Zhou
J. Eye Mov. Res. 2025, 18(4), 28; https://doi.org/10.3390/jemr18040028 - 9 Jul 2025
Viewed by 306
Abstract
Human–computer interaction (HCI) plays a crucial role across various fields, with eye-tracking technology emerging as a key enabler for intuitive and dynamic control in assistive systems like Assistive Robotic Arms (ARAs). By precisely tracking eye movements, this technology allows for more natural user interaction. However, current systems primarily rely on the single gaze-dependent interaction method, which leads to the “Midas Touch” problem. This highlights the need for real-time eye movement classification in dynamic interactions to ensure accurate and efficient control. This paper proposes a novel Gaussian Mixture Model–Hidden Markov Model (GMM-HMM) classification algorithm aimed at overcoming the limitations of traditional methods in dynamic human–robot interactions. By incorporating sum of squared error (SSE)-based feature extraction and hierarchical training, the proposed algorithm achieves a classification accuracy of 94.39%, significantly outperforming existing approaches. Furthermore, it is integrated with a robotic arm system, enabling gaze trajectory-based dynamic path planning, which reduces the average path planning time to 2.97 milliseconds. The experimental results demonstrate the effectiveness of this approach, offering an efficient and intuitive solution for human–robot interaction in dynamic environments. This work provides a robust framework for future assistive robotic systems, improving interaction intuitiveness and efficiency in complex real-world scenarios. Full article
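As a rough illustration of real-time eye movement classification, the sketch below uses a velocity-threshold rule (I-VT), a deliberate simplification of the paper's GMM-HMM with SSE-based features; the threshold value and synthetic gaze trace are assumptions:

```python
import numpy as np

def classify_gaze(x, y, t, vel_thresh=100.0):
    """Label each gaze sample 'fixation' or 'saccade' from its
    point-to-point velocity (I-VT rule) -- a simple stand-in for
    the paper's GMM-HMM classifier. vel_thresh is in screen units
    per second."""
    dx, dy, dt = np.diff(x), np.diff(y), np.diff(t)
    v = np.hypot(dx, dy) / dt              # sample-to-sample speed
    labels = np.where(v > vel_thresh, "saccade", "fixation")
    return np.append(labels, labels[-1])   # pad to input length

# Synthetic trace: slow drift, one large jump, then drift again.
x = np.array([0.0, 0.1, 0.2, 0.3, 100.0, 100.1, 100.2])
labels = classify_gaze(x, np.zeros(7), np.arange(7) * 0.01)
```

A probabilistic model like the paper's GMM-HMM replaces the hard threshold with learned emission distributions and transition probabilities, which is what improves robustness in dynamic interaction.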

21 pages, 480 KiB  
Perspective
Towards Predictive Communication: The Fusion of Large Language Models and Brain–Computer Interface
by Andrea Carìa
Sensors 2025, 25(13), 3987; https://doi.org/10.3390/s25133987 - 26 Jun 2025
Viewed by 770
Abstract
Integration of advanced artificial intelligence with neurotechnology offers transformative potential for assistive communication. This perspective article examines the emerging convergence between non-invasive brain–computer interface (BCI) spellers and large language models (LLMs), with a focus on predictive communication for individuals with motor or language impairments. First, I will review the evolution of language models—from early rule-based systems to contemporary deep learning architectures—and their role in enhancing predictive writing. Second, I will survey existing implementations of BCI spellers that incorporate language modeling and highlight recent pilot studies exploring the integration of LLMs into BCI. Third, I will examine how, despite advancements in typing speed, accuracy, and user adaptability, the fusion of LLMs and BCI spellers still faces key challenges such as real-time processing, robustness to noise, and the integration of neural decoding outputs with probabilistic language generation frameworks. Finally, I will discuss how fully integrating LLMs with BCI technology could substantially improve the speed and usability of BCI-mediated communication, offering a path toward more intuitive, adaptive, and effective neurotechnological solutions for both clinical and non-clinical users. Full article
(This article belongs to the Section Biomedical Sensors)

18 pages, 1498 KiB  
Article
Speech Emotion Recognition on MELD and RAVDESS Datasets Using CNN
by Gheed T. Waleed and Shaimaa H. Shaker
Information 2025, 16(7), 518; https://doi.org/10.3390/info16070518 - 21 Jun 2025
Viewed by 1066
Abstract
Speech emotion recognition (SER) plays a vital role in enhancing human–computer interaction (HCI) and can be applied in affective computing, virtual support, and healthcare. This research presents a high-performance SER framework based on a lightweight 1D Convolutional Neural Network (1D-CNN) and a multi-feature fusion technique. Rather than employing spectrograms as image-based input, frame-level characteristics (Mel-Frequency Cepstral Coefficients, Mel-Spectrograms, and Chroma vectors) are calculated throughout the sequences to preserve temporal information and reduce the computing expense. The model attained classification accuracies of 94.0% on MELD (multi-party talks) and 91.9% on RAVDESS (acted speech). Ablation experiments demonstrate that the integration of complimentary features significantly outperforms the utilisation of a singular feature as a baseline. Data augmentation techniques, including Gaussian noise and time shifting, enhance model generalisation. The proposed method demonstrates significant potential for real-time emotion recognition using audio only in embedded or resource-constrained devices. Full article
(This article belongs to the Special Issue Artificial Intelligence Methods for Human-Computer Interaction)
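The frame-level multi-feature fusion described above amounts to concatenating per-frame feature matrices before the 1D-CNN. A minimal sketch, where the 13/64/12 feature dimensions are typical values and not necessarily the paper's:

```python
import numpy as np

def fuse_frames(mfcc, mel, chroma):
    """Frame-level fusion: concatenate MFCC, Mel-spectrogram, and
    chroma features per frame into one sequence, preserving temporal
    order for a 1D-CNN. All inputs are (n_frames, dim)."""
    assert mfcc.shape[0] == mel.shape[0] == chroma.shape[0]
    return np.concatenate([mfcc, mel, chroma], axis=1)

T = 120                                   # frames in one utterance
fused = fuse_frames(np.zeros((T, 13)),    # 13 MFCCs
                    np.zeros((T, 64)),    # 64 mel bands
                    np.zeros((T, 12)))    # 12 chroma bins
print(fused.shape)  # (120, 89)
```

Keeping the sequence axis (rather than rendering a 2D spectrogram image) is what lets the 1D-CNN stay lightweight while retaining temporal information.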

25 pages, 1822 KiB  
Article
Emotion Recognition from Speech in a Subject-Independent Approach
by Andrzej Majkowski and Marcin Kołodziej
Appl. Sci. 2025, 15(13), 6958; https://doi.org/10.3390/app15136958 - 20 Jun 2025
Cited by 1 | Viewed by 632
Abstract
The aim of this article is to critically and reliably assess the potential of current emotion recognition technologies for practical applications in human–computer interaction (HCI) systems. The study made use of two databases: one in English (RAVDESS) and another in Polish (EMO-BAJKA), both containing speech recordings expressing various emotions. The effectiveness of recognizing seven and eight different emotions was analyzed. A range of acoustic features, including energy features, mel-cepstral features, zero-crossing rate, fundamental frequency, and spectral features, were utilized to analyze the emotions in speech. Machine learning techniques such as convolutional neural networks (CNNs), long short-term memory (LSTM) networks, and support vector machines with a cubic kernel (cubic SVMs) were employed in the emotion classification task. The research findings indicated that the effective recognition of a broad spectrum of emotions in a subject-independent approach is limited. However, significantly better results were obtained in the classification of paired emotions, suggesting that emotion recognition technologies could be effectively used in specific applications where distinguishing between two particular emotional states is essential. To ensure a reliable and accurate assessment of the emotion recognition system, care was taken to divide the dataset in such a way that the training and testing data contained recordings of completely different individuals. The highest classification accuracies for pairs of emotions were achieved for Angry–Fearful (0.8), Angry–Happy (0.86), Angry–Neutral (1.0), Angry–Sad (1.0), Angry–Surprise (0.89), Disgust–Neutral (0.91), and Disgust–Sad (0.96) in the RAVDESS. In the EMO-BAJKA database, the highest classification accuracies for pairs of emotions were for Joy–Neutral (0.91), Surprise–Neutral (0.80), Surprise–Fear (0.91), and Neutral–Fear (0.91). Full article
(This article belongs to the Special Issue New Advances in Applied Machine Learning)
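The subject-independent protocol described above requires that training and testing sets contain recordings of completely different individuals. A minimal grouped-split sketch; the record format and test ratio are illustrative assumptions:

```python
import random

def subject_independent_split(records, test_ratio=0.25, seed=0):
    """Split (speaker_id, clip) records so no speaker appears in
    both train and test -- the subject-independent protocol.
    Illustrative sketch; the record format is assumed."""
    speakers = sorted({spk for spk, _ in records})
    rng = random.Random(seed)
    rng.shuffle(speakers)
    n_test = max(1, int(len(speakers) * test_ratio))
    test_spk = set(speakers[:n_test])
    train = [r for r in records if r[0] not in test_spk]
    test = [r for r in records if r[0] in test_spk]
    return train, test

# 8 hypothetical speakers, 3 clips each.
records = [(f"spk{i}", f"clip{j}") for i in range(8) for j in range(3)]
train_set, test_set = subject_independent_split(records)
```

Splitting by speaker rather than by clip is what prevents the classifier from exploiting speaker identity, which is why subject-independent accuracies are typically lower than subject-dependent ones.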

27 pages, 6771 KiB  
Article
A Deep Neural Network Framework for Dynamic Two-Handed Indian Sign Language Recognition in Hearing and Speech-Impaired Communities
by Vaidhya Govindharajalu Kaliyaperumal and Paavai Anand Gopalan
Sensors 2025, 25(12), 3652; https://doi.org/10.3390/s25123652 - 11 Jun 2025
Viewed by 554
Abstract
Sign language bridges communication gaps for the hearing- and speech-impaired, yet recognizing it remains difficult: hand gestures must be identified from varied, ambiguous palm configurations. This challenge is addressed with a novel Enhanced Convolutional Transformer with Adaptive Tuna Swarm Optimization (ECT-ATSO) recognition framework proposed for double-handed sign language. In order to improve both model generalization and image quality, preprocessing is applied to images prior to prediction, and the proposed dataset is organized to handle multiple dynamic words. Feature graining is employed to obtain local features, and the ViT transformer architecture is then utilized to capture global features from the preprocessed images. After concatenation, this generates a feature map that is then divided into various words using an Inverted Residual Feed-Forward Network (IRFFN). Using an enhanced form of the Tuna Swarm Optimization (TSO) algorithm, the proposed Enhanced Convolutional Transformer (ECT) model is optimally tuned to handle the problem dimensions and convergence parameters. To escape local optima when adjusting positions during the tuna update process, a mutation operator was introduced. The performance of the suggested framework is measured through dataset visualization, recognition accuracy, and convergence, demonstrating the best effectiveness compared with alternative state-of-the-art methods. Full article
(This article belongs to the Section Intelligent Sensors)

26 pages, 1708 KiB  
Article
Research on Task Complexity Measurements in Human–Computer Interaction in Nuclear Power Plant DCS Systems Based on Emergency Operating Procedures
by Ensheng Pang and Licao Dai
Entropy 2025, 27(6), 600; https://doi.org/10.3390/e27060600 - 4 Jun 2025
Viewed by 606
Abstract
Within the scope of digital transformation in nuclear power plants (NPPs), task complexity in human–computer interaction (HCI) has become a critical factor affecting the safe and stable operation of NPPs. This study systematically reviews and analyzes existing complexity sources and assessment methods and suggests that complexity is primarily driven by core factors such as the quantity of, variety of, and relationships between elements. By innovatively introducing Halstead’s E measure, this study constructs a quantitative model of dynamic task execution complexity (TEC), addressing the limitations of traditional entropy-based metrics in analyzing interactive processes. By combining entropy metrics and the E measure, a task complexity quantification framework is established, encompassing both the task execution and intrinsic dimensions. Specifically, Halstead’s E measure focuses on analyzing operators and operands, defining interaction symbols between humans and interfaces to quantify task execution complexity (TEC). Entropy metrics, on the other hand, measure task logical complexity (TLC), task scale complexity (TSC), and task information complexity (TIC) based on the intrinsic structure and scale of tasks. Finally, the weighted Euclidean norm of these four factors determines the task complexity (TC) of each step. Taking the emergency operating procedures (EOP) for a small-break loss-of-coolant accident (SLOCA) in an NPP as an example, the entropy and E metrics are used to calculate the task complexity of each step, followed by experimental validation using NASA-TLX task load scores and step execution time for regression analysis. The results show that task complexity is significantly positively correlated with NASA-TLX subjective scores and task execution time, with the determination coefficients reaching 0.679 and 0.785, respectively. 
This indicates that the complexity metrics have high explanatory power, showing that the complexity quantification model is effective and has certain application value in improving human–computer interfaces and emergency procedures. Full article
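Halstead's E measure and the weighted Euclidean norm used for the final TC score can be sketched directly; the token sets and the equal weights below are illustrative choices, not taken from the paper:

```python
import math
from collections import Counter

def halstead_effort(operators, operands):
    """Halstead metrics over a tokenized task step: eta1/eta2 count
    distinct operators/operands, N1/N2 their total occurrences.
    Effort E = D * V, with volume V = N * log2(eta) and difficulty
    D = (eta1 / 2) * (N2 / eta2)."""
    ops, opr = Counter(operators), Counter(operands)
    eta1, eta2 = len(ops), len(opr)
    N1, N2 = sum(ops.values()), sum(opr.values())
    eta, N = eta1 + eta2, N1 + N2
    V = N * math.log2(eta)            # volume
    D = (eta1 / 2) * (N2 / eta2)      # difficulty
    return D * V                      # effort E

def task_complexity(tec, tlc, tsc, tic, w=(0.25, 0.25, 0.25, 0.25)):
    """Weighted Euclidean norm of the four complexity factors
    (TEC, TLC, TSC, TIC), mirroring the paper's TC definition;
    the weights here are illustrative."""
    return math.sqrt(sum(wi * c ** 2 for wi, c in zip(w, (tec, tlc, tsc, tic))))

# Hypothetical EOP step: operators are actions, operands are plant items.
E = halstead_effort(["press", "read", "press"],
                    ["valve", "pump", "valve", "gauge"])
```

The entropy-based factors (TLC, TSC, TIC) would be computed separately from the task's structure and then combined with TEC through `task_complexity`.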

31 pages, 1751 KiB  
Article
Enhancing User Experiences in Digital Marketing Through Machine Learning: Cases, Trends, and Challenges
by Alexios Kaponis, Manolis Maragoudakis and Konstantinos Chrysanthos Sofianos
Computers 2025, 14(6), 211; https://doi.org/10.3390/computers14060211 - 29 May 2025
Viewed by 1878
Abstract
Online marketing environments are rapidly being transformed by Artificial Intelligence (AI), chiefly through Machine Learning (ML), which has significant potential in content personalization, enhanced usability, and hyper-targeted marketing, and which will reconfigure how businesses reach and serve customers. This study systematically examines ML in the Digital Marketing (DM) industry, focusing on its effect on human–computer interaction (HCI). This research methodically elucidates how machine learning can be applied to automating user-engagement strategies that improve user experience (UX) and customer retention, and to optimizing recommendations derived from consumer behavior. The objective of the present study is to critically analyze the functional and ethical considerations of ML integration in DM and to evaluate its implications for data-driven personalization. Through selected case studies, the investigation also provides empirical evidence of the implications of ML applications for UX and customer loyalty, as well as associated ethical aspects. These include algorithmic bias, concerns about data privacy, and the need for greater transparency in ML-based decision-making processes. This research also contributes to the field by delivering actionable, data-driven strategies for marketing professionals and offering them frameworks to deal with the evolving responsibilities and tasks that accompany the introduction of ML technologies into DM. Full article

38 pages, 3310 KiB  
Article
SteXMeC: A Student eXperience Evaluation Methodology with Cultural Aspects
by Nicolás Matus, Federico Botella and Cristian Rusu
Appl. Sci. 2025, 15(10), 5314; https://doi.org/10.3390/app15105314 - 9 May 2025
Viewed by 440
Abstract
Cultural factors shape students’ expectations and perceptions within diverse educational settings. The perceived quality of a Higher Education Institution (HEI) is crucial to its success, with student satisfaction determined mainly by their overall experiences. The concept of Student eXperience (SX) can be analyzed through the lens of Customer eXperience (CX) from a Human–Computer Interaction (HCI) perspective, positioning students as the “customers” of the institution. SX encompasses academic and physical interactions and students’ emotional, social, and psychological responses toward an institution’s systems, products, and services. By accounting for factors such as emotions, socioeconomic status, disabilities, and, importantly, cultural background, SX provides a comprehensive measure of student experiences. Building upon our previous SX model and Hofstede’s national culture model, we have developed a Student eXperience evaluation methodology that serves as a diagnostic tool to assess both student satisfaction and how effectively HEIs serve a diverse student population. This methodology ensures that all students, regardless of their background, are considered in the evaluation process, facilitating the early identification of institutional strengths and weaknesses. Incorporating cultural aspects into the assessment delivers more precise results. Furthermore, our approach supports HEIs in promoting equity, diversity, and inclusion by addressing the needs of minority students and students with disabilities, as well as reducing gender disparities. These objectives align with UNESCO’s Sustainable Development Goals, contributing to fostering an equitable learning environment. By adopting such inclusive evaluation practices, HEIs can enhance the perceived quality of education and their responsiveness to the needs of an increasingly multicultural student body. Full article
(This article belongs to the Special Issue Human-Computer Interaction in Smart Factory and Industry 4.0)

19 pages, 755 KiB  
Review
Artificial Intelligence and the Human–Computer Interaction in Occupational Therapy: A Scoping Review
by Ioannis Kansizoglou, Christos Kokkotis, Theodoros Stampoulis, Erasmia Giannakou, Panagiotis Siaperas, Stavros Kallidis, Maria Koutra, Paraskevi Malliou, Maria Michalopoulou and Antonios Gasteratos
Algorithms 2025, 18(5), 276; https://doi.org/10.3390/a18050276 - 8 May 2025
Viewed by 1042
Abstract
Occupational therapy (OT) is a client-centered health profession focused on enhancing individuals’ ability to perform meaningful activities and daily tasks, particularly for those recovering from injury, illness, or disability. As a core component of rehabilitation, it promotes independence, well-being, and quality of life through personalized, goal-oriented interventions. Identifying and measuring the role of artificial intelligence (AI) in the human–computer interaction (HCI) within OT is critical for improving therapeutic outcomes and patient engagement. Despite AI’s growing significance, the integration of AI-driven HCI in OT remains relatively underexplored in the existing literature. This scoping review identifies and maps current research on the topic, highlighting applications and proposing directions for future work. A structured literature search was conducted using the Scopus and PubMed databases. Articles were included if their primary focus was on the intersection of AI, HCI, and OT. Out of 55 retrieved articles, 26 met the inclusion criteria. This work highlights three key findings: (i) machine learning, robotics, and virtual reality are emerging as prominent AI-driven HCI techniques in OT; (ii) the integration of AI-enhanced HCI offers significant opportunities for developing personalized therapeutic interventions; (iii) further research is essential to evaluate the long-term efficacy, ethical implications, and patient outcomes associated with AI-driven HCI in OT. These insights aim to guide future research efforts and clinical applications within this evolving interdisciplinary field. In conclusion, AI-driven HCI holds considerable promise for advancing OT practice, yet further research is needed to fully realize its clinical potential. Full article
(This article belongs to the Collection Feature Papers in Evolutionary Algorithms and Machine Learning)

14 pages, 894 KiB  
Review
Artificial Intelligence as Assessment Tool in Occupational Therapy: A Scoping Review
by Christos Kokkotis, Ioannis Kansizoglou, Theodoros Stampoulis, Erasmia Giannakou, Panagiotis Siaperas, Stavros Kallidis, Maria Koutra, Christina Koutra, Anastasia Beneka and Evangelos Bebetsos
BioMedInformatics 2025, 5(2), 22; https://doi.org/10.3390/biomedinformatics5020022 - 28 Apr 2025
Viewed by 2215
Abstract
Occupational therapy (OT) is vital in improving functional outcomes and aiding recovery for individuals with long-term disabilities, particularly those resulting from neurological diseases. Traditional assessment methods often rely on clinical judgment and individualized evaluations, which may overlook broader, data-driven insights. The integration of artificial intelligence (AI) presents a transformative opportunity to enhance assessment precision and personalize therapeutic interventions. Additionally, advancements in human–computer interaction (HCI) enable more intuitive and adaptive AI-driven assessment tools, improving user engagement and accessibility in OT. This scoping review investigates current applications of AI in OT, particularly regarding the evaluation of functional outcomes and support for clinical decision-making. The literature search was conducted using the PubMed and Scopus databases. Studies were included if they focused on AI applications in evaluating functional outcomes within OT assessment tools. Out of an initial pool of 85 articles, 13 met the inclusion criteria, highlighting diverse AI methodologies such as support vector machines, deep neural networks, and natural language processing. These were primarily applied in domains including motor recovery, pediatric developmental assessments, and cognitive engagement evaluations. Findings suggest that AI can significantly improve evaluation processes by systematically integrating diverse data sources (e.g., sensor measurements, clinical histories, and behavioral analytics), generating precise predictive insights that facilitate tailored therapeutic interventions and comprehensive assessments of both pre- and post-treatment strategies. This scoping review also identifies existing gaps and proposes future research directions to optimize AI-driven assessment tools in OT. Full article
