Search Results (1,629)

Search Parameters:
Keywords = human–machine interactions

29 pages, 8202 KB  
Article
Continuous Lower-Limb Joint Angle Prediction Under Body Weight-Supported Training Using AWDF Model
by Li Jin, Liuyi Ling, Zhipeng Yu, Liyu Wei and Yiming Liu
Fractal Fract. 2025, 9(10), 655; https://doi.org/10.3390/fractalfract9100655 (registering DOI) - 11 Oct 2025
Abstract
Exoskeleton-assisted bodyweight support training (BWST) has demonstrated enhanced neurorehabilitation outcomes, and within such systems joint motion prediction serves as the critical foundation for adaptive human–machine interactive control. However, joint angle prediction under dynamic unloading conditions remains unexplored. This study introduces an adaptive wavelet-denoising fusion (AWDF) model to predict lower-limb joint angles during BWST. Using a custom human-tracking bodyweight support system, time-series surface electromyography (sEMG) and inertial measurement unit (IMU) data were collected from ten adults across graded bodyweight support levels (BWSLs) ranging from 0% to 40%. Systematic comparative experiments evaluated joint angle prediction performance among five models: the sEMG-based model, kinematic fusion model, wavelet-enhanced fusion model, late fusion model, and the proposed AWDF model, tested across prediction time horizons of 30–150 ms and BWSL gradients. Experimental results demonstrate that increasing BWSLs prolonged gait cycle duration and modified muscle activation patterns, with a concomitant decrease in the fractal dimension of sEMG signals. Extending the prediction time degraded joint angle estimation accuracy, with 90 ms identified as the optimal tradeoff between system latency and prediction advancement. Crucially, this study reveals an enhancement in prediction performance with increased BWSLs. The proposed AWDF model demonstrated robust cross-condition adaptability for hip and knee angle prediction, achieving average root mean square errors (RMSEs) of 1.468° and 2.626°, Pearson correlation coefficients (CCs) of 0.983 and 0.973, and adjusted R² values of 0.992 and 0.986, respectively. This work establishes the first computational framework for BWSL-adaptive joint prediction, advancing human–machine interaction in exoskeleton-assisted neurorehabilitation. Full article
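The abstract reports three agreement metrics (RMSE, Pearson CC, adjusted R²). As a minimal pure-Python sketch of how such metrics are computed from measured and predicted joint angles (an illustration, not the authors' code):

```python
import math

def rmse(y_true, y_pred):
    # Root mean square error, in the same units as the angles (degrees)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def pearson_cc(y_true, y_pred):
    # Pearson correlation coefficient between measured and predicted series
    n = len(y_true)
    mt, mp = sum(y_true) / n, sum(y_pred) / n
    cov = sum((t - mt) * (p - mp) for t, p in zip(y_true, y_pred))
    st = math.sqrt(sum((t - mt) ** 2 for t in y_true))
    sp = math.sqrt(sum((p - mp) ** 2 for p in y_pred))
    return cov / (st * sp)

def adjusted_r2(y_true, y_pred, n_features):
    # R-squared penalized for the number of model inputs
    n = len(y_true)
    mt = sum(y_true) / n
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mt) ** 2 for t in y_true)
    r2 = 1 - ss_res / ss_tot
    return 1 - (1 - r2) * (n - 1) / (n - n_features - 1)
```

A perfect prediction yields RMSE 0, CC 1, and adjusted R² 1, which is the upper bound the reported 1.468°/0.983/0.992 values approach.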

30 pages, 1428 KB  
Review
Healthcare 5.0-Driven Clinical Intelligence: The Learn-Predict-Monitor-Detect-Correct Framework for Systematic Artificial Intelligence Integration in Critical Care
by Hanene Boussi Rahmouni, Nesrine Ben El Hadj Hassine, Mariem Chouchen, Halil İbrahim Ceylan, Raul Ioan Muntean, Nicola Luigi Bragazzi and Ismail Dergaa
Healthcare 2025, 13(20), 2553; https://doi.org/10.3390/healthcare13202553 - 10 Oct 2025
Abstract
Background: Healthcare 5.0 represents a shift toward intelligent, human-centric care systems. Intensive care units generate vast amounts of data that require real-time decisions, but current decision support systems lack comprehensive frameworks for safe integration of artificial intelligence. Objective: We developed and validated the Learn–Predict–Monitor–Detect–Correct (LPMDC) framework as a methodology for systematic artificial intelligence integration across the critical care workflow. The framework improves predictive analytics, continuous patient monitoring, intelligent alerting, and therapeutic decision support while maintaining essential human clinical oversight. Methods: Framework development employed systematic theoretical modeling integrating Healthcare 5.0 principles, comprehensive literature synthesis covering 2020–2024, clinical workflow analysis across 15 international ICU sites, technology assessment of mature and emerging AI applications, and multi-round expert validation by 24 intensive care physicians and medical informaticists. Each LPMDC phase was designed with specific integration requirements, performance metrics, and safety protocols. Results: LPMDC implementation and aggregated evidence from prior studies demonstrated significant clinical improvements: 30% mortality reduction, 18% ICU length-of-stay decrease (7.5 to 6.1 days), 45% clinician cognitive load reduction, and 85% sepsis bundle compliance improvement. Machine learning algorithms achieved an 80% sensitivity for sepsis prediction three hours before clinical onset, with false-positive rates below 15%. Additional applications demonstrated effectiveness in predicting respiratory failure, preventing cardiovascular crises, and automating ventilator management. Digital twin technology enabled personalized treatment simulations, while the integration of the Internet of Medical Things provided comprehensive patient and environmental surveillance. Implementation challenges were systematically addressed through phased deployment strategies, staff training programs, and regulatory compliance frameworks. Conclusions: The Healthcare 5.0-enabled LPMDC framework provides the first comprehensive theoretical foundation for systematic AI integration in critical care while preserving human oversight and clinical safety. The cyclical five-phase architecture enables processing beyond traditional cognitive limits through continuous feedback loops and system optimization. Clinical validation demonstrates measurable improvements in patient outcomes, operational efficiency, and clinician satisfaction. Future developments incorporating quantum computing, federated learning, and explainable AI technologies offer additional advancement opportunities for next-generation critical care systems. Full article
(This article belongs to the Section Artificial Intelligence in Healthcare)
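The reported alerting figures (80% sensitivity, false-positive rate below 15%) follow from standard confusion-matrix arithmetic; a small sketch with hypothetical counts, not data from the study:

```python
def alert_metrics(tp, fp, tn, fn):
    """Sensitivity (recall) and false-positive rate from confusion counts."""
    sensitivity = tp / (tp + fn)   # fraction of true sepsis cases flagged
    fpr = fp / (fp + tn)           # fraction of non-sepsis cases falsely flagged
    return sensitivity, fpr

# Illustrative counts chosen to match the reported thresholds:
sens, fpr = alert_metrics(tp=80, fp=14, tn=86, fn=20)
```

With these hypothetical counts, sensitivity is 0.80 and the false-positive rate is 0.14, i.e. at the 80% / below-15% levels the abstract cites.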

23 pages, 1428 KB  
Article
Digital Organizational Resilience in Latin American MSMEs: Entangled Socio-Technical Systems of People, Practices, and Data
by Alexander Sánchez-Rodríguez, Reyner Pérez-Campdesuñer, Gelmar García-Vidal, Yandi Fernández-Ochoa, Rodobaldo Martínez-Vivar and Freddy Ignacio Alvarez-Subía
Systems 2025, 13(10), 889; https://doi.org/10.3390/systems13100889 - 10 Oct 2025
Viewed by 26
Abstract
This study develops a systemic framework to conceptualize digital organizational resilience in micro, small, and medium-sized enterprises (MSMEs) as an emergent property of entangled socio-technical systems. Building on theories of distributed cognition, sociomateriality, and resilience engineering, this paper argues that resilience does not reside in isolated elements—such as leadership, technologies, or procedures—but in their dynamic interplay. Four interdependent dimensions—human, technological, organizational, and institutional—are identified as constitutive of resilience capacities. The research design is conceptual and exploratory in nature. Two theory-driven conceptual statements are formulated: first, that natural language mediation in human–machine interaction enhances coordination and adaptability; and second, that distributed cognition and prototyping practices strengthen collective problem-solving and adaptive capacity. These conceptual statements are not statistically tested but serve as conceptual anchors for the model and as guiding directions for future empirical studies. Empirical illustrations from Ecuadorian MSMEs ground the framework in practice. The evidence highlights three insights: (1) structural fragility, as micro and small firms dominate the economy but face high mortality and financial vulnerability; (2) uneven digitalization, with limited adoption of BPM, ERP, and AI due to skill and resource constraints; and (3) disproportionate gains from modest interventions, such as optimization models or collaborative prototyping. This study contributes to organizational theory by positioning MSMEs as socio-technical ecosystems, providing a conceptual foundation for future empirical validation. Full article

30 pages, 21831 KB  
Article
Optimizing University Campus Functional Zones Using Landscape Feature Recognition and Enhanced Decision Tree Algorithms: A Study on Spatial Response Differences Between Students and Visitors
by Xiaowen Zhuang, Yi Cai, Zhenpeng Tang, Zheng Ding and Christopher Gan
Buildings 2025, 15(19), 3622; https://doi.org/10.3390/buildings15193622 (registering DOI) - 9 Oct 2025
Viewed by 87
Abstract
As universities become increasingly open, campuses are no longer only places for study and daily life for students and faculty, but also essential spaces for public visits and cultural identity. Traditional perception evaluation methods that rely on manual surveys are limited by sample size and subjective bias, making it challenging to reveal differences in experiences between groups (students/visitors) and the complex relationships between spatial elements and perceptions. To address this, this study uses a comprehensive open university in China as a case study and proposes a research framework that combines street-view image semantic segmentation, perception survey scores, and interpretable machine learning with sample augmentation. First, full-sample modeling is used to identify key image semantic features influencing perception indicators (nature, culture, aesthetics), and then to compare how students and visitors differ in their perceptions and preferences across campus spaces. To overcome the imbalance in survey data caused by group–space interactions, the study applies the CTGAN method, which expands minority samples through conditional generation while preserving distribution authenticity, thereby improving the robustness and interpretability of the model. Based on this, attribution analysis with an interpretable decision tree algorithm further quantifies each semantic feature's contribution, direction of effect, and thresholds with respect to perceptions, uncovering heterogeneity in perception mechanisms across groups. The results provide methodological support for perception evaluation of campus functional zones and offer data-driven, human-centered references for campus planning and design optimization. Full article
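The paper balances group–space survey cells with CTGAN, which generates synthetic minority samples by conditional generation. The stdlib sketch below illustrates only the balancing objective, using naive bootstrap duplication as a stand-in (it is not CTGAN and does not synthesize new samples):

```python
import random
from collections import Counter

def balance_by_group(samples, key, rng=random.Random(0)):
    """Upsample every (group, zone) cell to the size of the largest cell
    by duplicating existing samples. CTGAN would instead generate new,
    distribution-preserving samples for the minority cells."""
    cells = Counter(key(s) for s in samples)
    target = max(cells.values())
    by_cell = {}
    for s in samples:
        by_cell.setdefault(key(s), []).append(s)
    balanced = []
    for items in by_cell.values():
        balanced.extend(items)
        # Draw with replacement to fill the cell up to the target size
        balanced.extend(rng.choices(items, k=target - len(items)))
    return balanced
```

For example, with three student-at-gate records and one visitor-at-gate record, the visitor cell is duplicated up to three, giving six records in total.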

15 pages, 3254 KB  
Article
Rodent Social Behavior Recognition Using a Global Context-Aware Vision Transformer Network
by Muhammad Imran Sharif, Doina Caragea and Ahmed Iqbal
AI 2025, 6(10), 264; https://doi.org/10.3390/ai6100264 - 8 Oct 2025
Viewed by 254
Abstract
Animal behavior recognition is an important research area that provides insights into areas such as neural functions, gene mutations, and drug efficacy, among others. The manual coding of behaviors based on video recordings is labor-intensive and prone to inconsistencies and human error. Machine learning approaches have been used to automate the analysis of animal behavior with promising results. Our work builds on existing developments in animal behavior analysis and state-of-the-art approaches in computer vision to identify rodent social behaviors. Specifically, our proposed approach, called Vision Transformer for Rat Social Interactions (ViT-RatSI), leverages the existing Global Context Vision Transformer (GC-ViT) architecture to identify rat social interactions. Experimental results using five behaviors of the publicly available Rat Social Interaction (RatSI) dataset show that the ViT-RatSI approach can accurately identify rat social interaction behaviors. When compared with prior results from the literature, the ViT-RatSI approach achieves best results for four out of five behaviors, specifically for the “Approaching”, “Following”, “Moving away”, and “Solitary” behaviors, with F1 scores of 0.81, 0.81, 0.86, and 0.94, respectively. Full article
(This article belongs to the Special Issue AI in Bio and Healthcare Informatics)
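Per-behavior F1 scores like those reported (0.81, 0.81, 0.86, 0.94) are the harmonic mean of per-class precision and recall; a compact sketch of the computation (illustrative, not the authors' evaluation code):

```python
def per_class_f1(y_true, y_pred):
    """Per-class F1 for a multi-class behavior classifier.
    F1 = 2*TP / (2*TP + FP + FN) for each class."""
    scores = {}
    for c in set(y_true) | set(y_pred):
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        scores[c] = 2 * tp / (2 * tp + fp + fn) if tp else 0.0
    return scores
```

Per-class F1 is the right report here because behavior frequencies are typically imbalanced, so overall accuracy alone would hide weak classes.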

24 pages, 1623 KB  
Review
Beyond the Resistome: Molecular Insights, Emerging Therapies, and Environmental Drivers of Antibiotic Resistance
by Nada M. Nass and Kawther A. Zaher
Antibiotics 2025, 14(10), 995; https://doi.org/10.3390/antibiotics14100995 - 4 Oct 2025
Viewed by 296
Abstract
Antibiotic resistance remains one of the most formidable challenges to modern medicine, threatening to outpace therapeutic innovation and undermine decades of clinical progress. While resistance was once viewed narrowly as a clinical phenomenon, it is now understood as the outcome of complex ecological and molecular interactions that span soil, water, agriculture, animals, and humans. Environmental reservoirs act as silent incubators of resistance genes, with horizontal gene transfer and stress-induced mutagenesis fueling their evolution and dissemination. At the molecular level, advances in genomics, structural biology, and systems microbiology have revealed intricate networks involving plasmid-mediated resistance, efflux pump regulation, integron dynamics, and CRISPR-Cas interactions, providing new insights into the adaptability of pathogens. Simultaneously, the environmental dimensions of resistance, from wastewater treatment plants and aquaculture to airborne dissemination, highlight the urgency of adopting a One Health framework. Yet, alongside this growing threat, novel therapeutic avenues are emerging. Innovative β-lactamase inhibitors, bacteriophage-based therapies, engineered lysins, antimicrobial peptides, and CRISPR-driven antimicrobials are redefining what constitutes an “antibiotic” in the twenty-first century. Furthermore, artificial intelligence and machine learning now accelerate drug discovery and resistance prediction, raising the possibility of precision-guided antimicrobial stewardship. This review synthesizes molecular insights, environmental drivers, and therapeutic innovations to present a comprehensive landscape of antibiotic resistance. By bridging ecological microbiology, molecular biology, and translational medicine, it outlines a roadmap for surveillance, prevention, and drug development while emphasizing the need for integrative policies to safeguard global health. Full article
(This article belongs to the Special Issue Antimicrobial Resistance and Environmental Health, 2nd Edition)

15 pages, 1613 KB  
Article
EEG-Powered UAV Control via Attention Mechanisms
by Jingming Gong, He Liu, Liangyu Zhao, Taiyo Maeda and Jianting Cao
Appl. Sci. 2025, 15(19), 10714; https://doi.org/10.3390/app151910714 - 4 Oct 2025
Viewed by 227
Abstract
This paper explores the development and implementation of a brain–computer interface (BCI) system that utilizes electroencephalogram (EEG) signals for real-time monitoring of attention levels to control unmanned aerial vehicles (UAVs). We propose an innovative approach that combines spectral power analysis and machine learning classification techniques to translate cognitive states into precise UAV command signals. This method overcomes the limitations of traditional threshold-based approaches by adapting to individual differences and improving classification accuracy. Through comprehensive testing with 20 participants in both controlled laboratory environments and real-world scenarios, our system achieved an 85% accuracy rate in distinguishing between high and low attention states and successfully mapped these cognitive states to vertical UAV movements. Experimental results demonstrate that our machine learning-based classification method significantly enhances system robustness and adaptability in noisy environments. This research not only advances UAV operability through neural interfaces but also broadens the practical applications of BCI technology in aviation. Our findings contribute to the expanding field of neurotechnology and underscore the potential for neural signal processing and machine learning integration to revolutionize human–machine interaction in industries where dynamic relationships between cognitive states and automated systems are beneficial. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
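A common way to derive an attention feature from EEG is band power taken from the signal's spectrum. The abstract names spectral power analysis but not the exact features, so the bands and the beta/alpha ratio below are illustrative assumptions:

```python
import numpy as np

def band_power(signal, fs, low, high):
    """Average spectral power of `signal` within [low, high] Hz via FFT."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= low) & (freqs <= high)
    return psd[mask].mean()

def beta_alpha_ratio(signal, fs):
    """Ratio of beta (13-30 Hz) to alpha (8-13 Hz) power; higher values
    are commonly associated with focused attention. The bands are an
    assumption, not the paper's specified feature set."""
    return band_power(signal, fs, 13, 30) / band_power(signal, fs, 8, 13)
```

Such a scalar feature could then be fed to the classifier that maps attention states to vertical UAV commands, rather than compared against a fixed threshold.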

21 pages, 4623 KB  
Article
Combining Neural Architecture Search and Weight Reshaping for Optimized Embedded Classifiers in Multisensory Glove
by Hiba Al Youssef, Sara Awada, Mohamad Raad, Maurizio Valle and Ali Ibrahim
Sensors 2025, 25(19), 6142; https://doi.org/10.3390/s25196142 - 4 Oct 2025
Viewed by 215
Abstract
Intelligent sensing systems are increasingly used in wearable devices, enabling advanced tasks across various application domains including robotics and human–machine interaction. Ensuring these systems are energy autonomous is highly demanded, despite strict constraints on power, memory and processing resources. To meet these requirements, embedded neural networks must be optimized to achieve a balance between accuracy and efficiency. This paper presents an integrated approach that combines Hardware-Aware Neural Architecture Search (HW-NAS) with optimization techniques—weight reshaping, quantization, and their combination—to develop efficient classifiers for a multisensory glove. HW-NAS automatically derives 1D-CNN models tailored to the NUCLEO-F401RE board, while the additional optimization further reduces model size, memory usage, and latency. Across three datasets, the optimized models not only improve classification accuracy but also deliver an average reduction of 75% in inference time, 69% in flash memory, and more than 45% in RAM compared to NAS-only baselines. These results highlight the effectiveness of integrating NAS with optimization techniques, paving the way towards energy-autonomous wearable systems. Full article
(This article belongs to the Special Issue Feature Papers in Smart Sensing and Intelligent Sensors 2025)
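One of the optimizations named above, quantization, can be sketched as symmetric 8-bit weight quantization, which is a common way to shrink embedded CNN weights by roughly 4x versus float32. This is a generic illustration; the paper's exact scheme is not given in the abstract:

```python
def quantize_int8(weights):
    """Symmetric 8-bit quantization: map floats to ints in [-127, 127]
    using a single per-tensor scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights; error is at most ~scale/2."""
    return [qi * scale for qi in q]
```

Combined with reshaping, such quantization is what drives the reported flash and RAM reductions, at the cost of a bounded rounding error per weight.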

14 pages, 2088 KB  
Article
Flexible, Stretchable, and Self-Healing MXene-Based Conductive Hydrogels for Human Health Monitoring
by Ruirui Li, Sijia Chang, Jiaheng Bi, Haotian Guo, Jianya Yi and Chengqun Chu
Polymers 2025, 17(19), 2683; https://doi.org/10.3390/polym17192683 - 3 Oct 2025
Viewed by 337
Abstract
Conductive hydrogels (CHs) have attracted significant attention in the fields of flexible electronics, human–machine interaction, and electronic skin (e-skin) due to their self-adhesiveness, environmental stability, and multi-stimuli responsiveness. However, integrating these diverse functionalities into a single conductive hydrogel system remains a challenge. In this study, polyvinyl alcohol (PVA) and polyacrylamide (PAM) were used as the dual-network matrix, lithium chloride and MXene were added, and a simple immersion strategy was adopted to synthesize a multifunctional MXene-based conductive hydrogel in a glycerol/water (1:1) binary solvent system. The prepared PVA/PAM/LiCl/MXene hydrogel exhibits excellent tensile properties (~1700%), high electrical conductivity (1.6 S/m), and good self-healing ability. Furthermore, it possesses multimodal sensing performance, including humidity sensitivity (sensitivity of −1.09/% RH), temperature responsiveness (heating sensitivity of 2.2 and cooling sensitivity of 1.5), and fast pressure response/recovery times (220 ms/230 ms). In addition, the hydrogel has successfully achieved real-time monitoring of human joint movements (elbow and knee bending) and physiological signals (pulse, breathing), and enabled monitoring of spatial pressure distribution via a 3 × 3 sensor array. The performance and versatility of this hydrogel make it a promising candidate for next-generation flexible sensors, which can be applied in the fields of human health monitoring, electronic skin, and human–machine interaction. Full article
(This article belongs to the Special Issue Semiflexible Polymers, 3rd Edition)

19 pages, 9302 KB  
Article
Real-Time Face Gesture-Based Robot Control Using GhostNet in a Unity Simulation Environment
by Yaseen
Sensors 2025, 25(19), 6090; https://doi.org/10.3390/s25196090 - 2 Oct 2025
Viewed by 390
Abstract
Unlike traditional control systems that rely on physical input devices, facial gesture-based interaction offers a contactless and intuitive method for operating autonomous systems. Recent advances in computer vision and deep learning have enabled the use of facial expressions and movements for command recognition in human–robot interaction. In this work, we propose a lightweight, real-time facial gesture recognition method, GhostNet-BiLSTM-Attention (GBA), which integrates GhostNet and BiLSTM with an attention mechanism, is trained on the FaceGest dataset, and is integrated with a 3D robot simulation in Unity. The system is designed to recognize predefined facial gestures such as head tilts, eye blinks, and mouth movements with high accuracy and low inference latency. Recognized gestures are mapped to specific robot commands and transmitted to a Unity-based simulation environment via socket communication across machines. This framework enables smooth and immersive robot control without the need for conventional controllers or sensors. Real-time evaluation demonstrates the system’s robustness and responsiveness under varied user and lighting conditions, achieving a classification accuracy of 99.13% on the FaceGest dataset. The GBA holds strong potential for applications in assistive robotics, contactless teleoperation, and immersive human–robot interfaces. Full article
(This article belongs to the Special Issue Smart Sensing and Control for Autonomous Intelligent Unmanned Systems)
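The recognizer-to-Unity link described above amounts to mapping each recognized gesture to a robot command and serializing it for socket transport. The table and message format below are hypothetical, since the abstract does not specify them:

```python
import json

# Hypothetical gesture-to-command table; the paper's exact mapping
# is not given in the abstract.
GESTURE_COMMANDS = {
    "head_tilt_left": "turn_left",
    "head_tilt_right": "turn_right",
    "eye_blink": "stop",
    "mouth_open": "move_forward",
}

def encode_command(gesture, confidence, min_confidence=0.9):
    """Build the JSON payload a recognizer could send over a socket to
    the Unity simulation; low-confidence or unknown gestures are dropped
    so the robot never acts on an uncertain classification."""
    if confidence < min_confidence or gesture not in GESTURE_COMMANDS:
        return None
    return json.dumps({"cmd": GESTURE_COMMANDS[gesture], "conf": confidence})
```

Dropping low-confidence frames is one simple way to keep a 99%-accuracy classifier from issuing spurious commands on the remaining misclassified frames.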

30 pages, 6459 KB  
Article
FREQ-EER: A Novel Frequency-Driven Ensemble Framework for Emotion Recognition and Classification of EEG Signals
by Dibya Thapa and Rebika Rai
Appl. Sci. 2025, 15(19), 10671; https://doi.org/10.3390/app151910671 - 2 Oct 2025
Viewed by 296
Abstract
Emotion recognition using electroencephalogram (EEG) signals has gained significant attention due to its potential applications in human–computer interaction (HCI), brain computer interfaces (BCIs), mental health monitoring, etc. Although deep learning (DL) techniques have shown impressive performance in this domain, they often require large datasets and high computational resources and offer limited interpretability, which hinders practical deployment. To address these issues, this paper presents a novel frequency-driven ensemble framework for electroencephalogram-based emotion recognition (FREQ-EER), an ensemble of lightweight machine learning (ML) classifiers with a frequency-based data augmentation strategy tailored for effective emotion recognition in low-data EEG scenarios. Our work focuses on the targeted analysis of specific EEG frequency bands and brain regions, enabling a deeper understanding of how distinct neural components contribute to the emotional states. To validate the robustness of the proposed FREQ-EER, the widely recognized DEAP (database for emotion analysis using physiological signals) dataset, SEED (SJTU emotion EEG dataset), and GAMEEMO (database for an emotion recognition system based on EEG signals and various computer games) were considered for the experiment. On the DEAP dataset, classification accuracies of up to 96% for specific emotion classes were achieved, while on the SEED and GAMEEMO, it maintained 97.04% and 98.6% overall accuracies, respectively, with nearly perfect AUC values confirming the framework's efficiency, interpretability, and generalizability. Full article
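An ensemble of lightweight classifiers is typically combined by voting. The abstract does not state FREQ-EER's exact combination rule, so the hard majority vote below (ties broken in favor of the first-listed classifier) is an illustrative assumption:

```python
from collections import Counter

def majority_vote(predictions):
    """Hard majority vote over class labels; on a tie, the label that
    appears first in the prediction list wins (illustrative policy)."""
    counts = Counter(predictions)
    top = counts.most_common(1)[0][1]
    for p in predictions:
        if counts[p] == top:
            return p

def ensemble_predict(classifiers, x):
    """Run every per-band classifier on sample x and vote on the label."""
    return majority_vote([clf(x) for clf in classifiers])
```

Voting over per-band classifiers keeps each member small and interpretable (each band's vote is inspectable), which matches the framework's stated emphasis on low-resource, interpretable models.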

27 pages, 3071 KB  
Review
From Trust in Automation to Trust in AI in Healthcare: A 30-Year Longitudinal Review and an Interdisciplinary Framework
by Kelvin K. L. Wong, Yong Han, Yifeng Cai, Wumin Ouyang, Hemin Du and Chao Liu
Bioengineering 2025, 12(10), 1070; https://doi.org/10.3390/bioengineering12101070 - 1 Oct 2025
Viewed by 318
Abstract
Human–machine trust has shifted over the past three decades from trust in automation to trust in AI, while research paradigms, disciplines, and problem spaces have expanded. Centered on AI in healthcare, this narrative review offers a longitudinal synthesis that traces and compares phase-specific changes in theory and method, providing design guidance for human–AI systems at different stages of maturity. From a cross-disciplinary view, we introduce an Interdisciplinary Human-AI Trust Research (I-HATR) framework that aligns explainable AI (XAI) with human–computer interaction/human factors engineering (HCI/HFE). We distill three core categories of determinants of human–AI trust in healthcare: user characteristics, AI system attributes, and contextual factors. We also summarize the main measurement families and their evolution from self-report to behavioral and psychophysiological approaches, with growing use of multimodal and dynamic evaluation. Finally, we outline key trends, opportunities, and practical challenges to support the development of human-centered, trustworthy AI in healthcare, emphasizing the need to bridge actual trustworthiness and perceived trust through shared metrics, uncertainty communication, and trust calibration. Full article

18 pages, 386 KB  
Article
Do Perceived Values Influence User Identification and Attitudinal Loyalty in Social Robots? The Mediating Role of Active Involvement
by Hua Pang, Zhen Wang and Lei Wang
Behav. Sci. 2025, 15(10), 1329; https://doi.org/10.3390/bs15101329 - 28 Sep 2025
Viewed by 314
Abstract
With the rapid advancement of artificial intelligence, the deployment of social robots has significantly broadened, extending into diverse fields such as education, medical services, and business. Despite this expansive growth, there remains a notable scarcity of empirical research addressing the underlying psychological mechanisms that influence human–robot interactions. To address this critical research gap, the present study proposes and empirically tests a theoretical model designed to elucidate how users’ multi-dimensional perceived values of social robots influence their attitudinal responses and outcomes. Based on questionnaire data from 569 social robot users, the study reveals that users’ perceived utilitarian value, emotional value, and hedonic value all exert significant positive effects on active involvement, thereby fostering their identification and reinforcing attitudinal loyalty. Among these dimensions, emotional value emerged as the strongest predictor, underscoring the pivotal role of emotional orientation in cultivating lasting human–robot relationships. Furthermore, the findings highlight the critical mediating function of active involvement in linking perceived value to users’ psychological sense of belonging, thereby elucidating the mechanism through which perceived value enhances engagement and promotes sustained long-term interaction. These findings extend the conceptual boundaries of human–machine interaction, offer a theoretical foundation for future explorations of user psychological mechanisms, and inform strategic design approaches centered on emotional interaction and user-oriented experiences, providing practical guidance for optimizing social robot design in applications. Full article
26 pages, 962 KB  
Article
Conceptualisation of Digital Wellbeing Associated with Generative Artificial Intelligence from the Perspective of University Students
by Michal Černý
Eur. J. Investig. Health Psychol. Educ. 2025, 15(10), 197; https://doi.org/10.3390/ejihpe15100197 - 27 Sep 2025
Abstract
Digital wellbeing has been studied extensively in educational contexts, yet few studies have been conducted within the paradigm of generative AI, a field with the potential to significantly influence students’ sentiments and dispositions in this domain. This study analyses 474 student recommendations (information science and library science) for digital wellbeing in the context of generative artificial intelligence. The research is grounded in pragmatism, which rejects the separation of thinking and acting and treats both as one interpretive whole. The research method is thematic analysis: students proposed rules for digital wellbeing in the context of generative AI, which were then analysed against established theory. The study identifies four areas that warrant research attention: societal expectations of the positive benefits of using generative AI, particular ways of interacting with generative AI, its risks, and students’ adaptive strategies. The research shows that, in this context, risks must be considered part of the environment in which students seek to achieve balance through adaptive strategies. The key adaptive elements included critical and creative thinking, autonomy, care for others, responsibility, and a reflected ontological difference between humans and machines. Full article
23 pages, 1708 KB  
Review
Grasping in Shared Virtual Environments: Toward Realistic Human–Object Interaction Through Review-Based Modeling
by Nicole Christoff, Nikolay N. Neshov, Radostina Petkova, Krasimir Tonchev and Agata Manolova
Electronics 2025, 14(19), 3809; https://doi.org/10.3390/electronics14193809 - 26 Sep 2025
Abstract
Virtual communication that transmits all human senses is the next step in the development of telecommunications. Achieving this vision requires real-time data exchange with low latency, which in turn necessitates the implementation of the Tactile Internet (TI). TI will enable the transmission of high-quality tactile data, especially when combined with audio and video signals, making interactions in virtual environments more realistic. In this context, advances in realism increasingly depend on accurate simulation of the grasping process and of hand–object interactions. To address this, this paper methodically presents the challenges of human–object interaction in virtual environments, together with a detailed review of the datasets used in grasp modeling and of the integration of physics-based and machine learning approaches. Based on this review, we propose a multi-step framework that simulates grasping as a series of biomechanical, perceptual, and control processes. The proposed model aims to support realistic human interaction with virtual objects in immersive settings and to enable integration into applications such as remote manipulation, rehabilitation, and virtual learning. Full article
(This article belongs to the Section Computer Science & Engineering)