Search Results (433)

Search Parameters:
Keywords = emotions in feedback

19 pages, 2708 KB  
Article
A TPU-Based 3D Printed Robotic Hand: Design and Its Impact on Human–Robot Interaction
by Younglim Choi, Minho Lee, Seongmin Yea, Seunghwan Kim and Hyunseok Kim
Electronics 2026, 15(2), 262; https://doi.org/10.3390/electronics15020262 - 7 Jan 2026
Viewed by 24
Abstract
This study outlines the design and evaluation of a biomimetic robotic hand tailored for Human–Robot Interaction (HRI), focusing on improvements in tactile fidelity driven by material choice. Thermoplastic polyurethane (TPU) was selected over polylactic acid (PLA) based on its reported elastomeric characteristics and mechanical compliance described in prior literature. Rather than directly matching human skin properties, TPU was perceived as providing a softer and more comfortable tactile interaction compared to rigid PLA. The robotic hand was anatomically reconstructed from an open-source model and integrated with AX-12A and MG90S actuators to simplify wiring and enhance motion precision. A custom PCB, built around an ATmega2560 microcontroller, enables real-time communication with ROS-based upper-level control systems. Angular displacement analysis of repeated gesture motions confirmed the high repeatability and consistency of the system. A repeated-measures user study involving 47 participants was conducted to compare the PLA- and TPU-based prototypes during interactive tasks such as handshakes and gesture commands. The TPU hand received significantly higher ratings in tactile realism, grip satisfaction, and perceived responsiveness (p < 0.05). Qualitative feedback further supported its superior emotional acceptance and comfort. These findings indicate that incorporating TPU in robotic hand design not only enhances mechanical performance but also plays a vital role in promoting emotionally engaging and natural human–robot interactions, making it a promising approach for affective HRI applications. Full article
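
The abstract reports a repeated-measures comparison of the PLA and TPU prototypes with p < 0.05 but does not name the statistical test. A minimal sketch of how such a paired, within-subject comparison could be run, with a paired t-test and made-up ratings standing in for the study's actual method and data:

```python
# Hypothetical within-subject comparison of tactile-realism ratings for the
# PLA and TPU prototypes (illustrative numbers, not the study's data).
from scipy.stats import ttest_rel

pla_ratings = [3, 2, 4, 3, 3, 2, 4, 3, 2, 3]   # each participant rates PLA...
tpu_ratings = [4, 4, 5, 4, 3, 4, 5, 4, 4, 4]   # ...and then TPU (paired design)

stat, p_value = ttest_rel(tpu_ratings, pla_ratings)
print(f"t = {stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("TPU ratings differ significantly from PLA ratings at alpha = 0.05")
```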

22 pages, 632 KB  
Review
“Your Digital Doctor Will Now See You”: A Narrative Review of VR and AI Technology in Chronic Illness Management
by Albert Łukasik, Milena Celebudzka and Arkadiusz Gut
Healthcare 2026, 14(2), 143; https://doi.org/10.3390/healthcare14020143 - 6 Jan 2026
Viewed by 76
Abstract
This narrative review examines how immersive virtual and mixed-reality (VR/MR) technologies, combined with AI-driven virtual agents, can support the prevention and long-term management of chronic illness. Chronic diseases represent a significant global health burden, and conventional care models often struggle to sustain patient engagement, motivation, and adherence over time. To address this gap, we conducted a narrative review of reviews and meta-analyses. We selected empirical studies published between 2020 and 2025, identified through searches in PubMed, Web of Science, and Google Scholar. The aim was to capture the state of the art in the integrated use of VR/MR and AI in chronic illness care, and to identify key opportunities, challenges, and considerations relevant to clinical practice. The reviewed evidence indicates that VR/MR interventions consistently enhance engagement, motivation, symptom coping, and emotional well-being, particularly in rehabilitation, pain management, and psychoeducation. At the same time, AI-driven conversational agents and virtual therapists add adaptive feedback, personalization, real-time monitoring, and continuity of care between clinical visits. However, persistent challenges are also reported, including technical limitations such as latency and system dependence, ethical concerns related to data privacy and algorithmic bias, as well as psychosocial risks such as emotional overattachment or discomfort arising from avatar design. Overall, the findings suggest that the most significant clinical value emerges when VR/MR and AI are deployed together rather than in isolation. When implemented with patient-centered design, clinician oversight, and transparent governance, these technologies can meaningfully support more engaging, personalized, and sustainable chronic illness management. Full article

24 pages, 2362 KB  
Article
Attention Bidirectional Recurrent Neural Zero-Shot Semantic Classifier for Emotional Footprint Identification
by Karthikeyan Jagadeesan and Annapurani Kumarappan
Computation 2026, 14(1), 8; https://doi.org/10.3390/computation14010008 - 2 Jan 2026
Viewed by 125
Abstract
Exploring emotions in organizational settings, particularly in feedback on organizational welfare programs, is critical for understanding employee experiences and improving organizational policies. Recognizing the emotions a conversation leaves behind (i.e., its emotional footprint) is a key task if a machine is to comprehend the full context of the conversation. While fine-tuning pre-trained models has consistently produced state-of-the-art results in emotional footprint recognition, the potential of zero-shot learned models in this area remains largely unexplored. The objective is to identify, once a conversation has ended, the emotional footprint of each participant with improved accuracy, shorter training time, and a minimal error rate. To address these gaps, this work proposes a method called the Attention Bidirectional Recurrent Neural Zero-Shot Semantic Classifier (ABRN-ZSSC) for emotional footprint identification. The method comprises two stages. First, raw data from a Two-Party Conversation with Emotional Footprint and Emotional Intensity are processed by an Attention Bidirectional Recurrent Neural Network to identify the emotional footprint of each party near the conclusion of the conversation. Second, a Zero-Shot Learning-based classifier is trained on the identified footprints to classify emotions accurately and precisely. We verify the utility of both stages (emotional footprint identification and classification) through an extensive experimental evaluation on two corpora across four aspects (training time, accuracy, precision, and error rate) for varying sample sizes. Experimental results demonstrate that ABRN-ZSSC outperforms two existing baseline models on emotion inference tasks across the datasets, achieving gains of 10% in precision, 17% in accuracy, and 8% in recall, along with a 19% reduction in training time and an 18% reduction in error rate compared to conventional methods. Full article
(This article belongs to the Section Computational Social Science)
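
The zero-shot stage described above can be illustrated in a generic way: an utterance is scored against natural-language descriptions of candidate emotion labels using a sentence-embedding model, so labels unseen during training can still be assigned. A minimal sketch assuming an off-the-shelf encoder, not the paper's ABRN-ZSSC architecture:

```python
# Zero-shot emotion scoring by embedding similarity (illustrative only; this is
# a generic technique, not the ABRN-ZSSC model from the paper).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence encoder works

# Candidate emotional-footprint labels, described in natural language so that
# unseen labels can be added without retraining.
labels = {
    "satisfaction": "the speaker sounds satisfied and appreciative",
    "frustration": "the speaker sounds frustrated or annoyed",
    "indifference": "the speaker sounds neutral and disengaged",
}

utterance = "Honestly, the welfare program helped my family a lot this year."

utt_emb = model.encode(utterance, convert_to_tensor=True)
label_embs = model.encode(list(labels.values()), convert_to_tensor=True)

scores = util.cos_sim(utt_emb, label_embs)[0]      # cosine similarity per label
best = max(zip(labels.keys(), scores.tolist()), key=lambda kv: kv[1])
print(f"predicted footprint: {best[0]} (similarity {best[1]:.2f})")
```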

24 pages, 2678 KB  
Article
“Trigger the Mind, Target the Gold”: Development and Validation of an ACPT (Acceptance and Commitment Performance Training) for Elite Shooters
by Suyoung Hwang, Woori Han and Eun-Surk Yi
Behav. Sci. 2026, 16(1), 52; https://doi.org/10.3390/bs16010052 - 27 Dec 2025
Viewed by 372
Abstract
Acceptance and Commitment Therapy (ACT) has been widely applied in clinical contexts; however, its systematic adaptation to elite sports, particularly precision-based disciplines such as shooting, remains underexplored. The present study aimed to develop and preliminarily validate an ACT-based psychological training program—the Acceptance and Commitment Performance Training for Shooters (ACPT-S)—by reframing ACT from a therapeutic intervention into a performance-oriented training framework. Using a multiphase formative evaluation design, a needs assessment was first conducted with 28 elite and collegiate shooters to identify sport-specific psychological demands. Based on these findings, a ten-session ACPT-S program was developed by integrating the six core ACT processes with shooter-specific routines, embodied exercises, and performance-relevant metaphors. The program was subsequently examined through two pilot studies: Phase 1 with four collegiate/corporate athletes and Phase 2 with 15 national-level shooters. Data were collected via session reflections, focus group interviews, and expert panel evaluations, and the Content Validity Ratio (CVR) analysis was used to assess conceptual clarity and implementation feasibility. The results indicated that ACPT-S was perceived as both feasible and contextually appropriate, with athletes reporting improvements in attentional focus, emotional acceptance, value-based motivation, and reduced anxiety. Qualitative analyses demonstrated strong engagement with ACT principles and their functional integration into shooting performance contexts, while all program components achieved CVR scores of ≥0.80, indicating a strong expert consensus. Program refinements were guided by feedback related to activity sequencing, metaphor resonance and personalization strategies. Overall, this study reconceptualizes ACT as a performance-enhancement framework rather than a purely clinical approach and introduces the ACPT-S as a novel, theory-driven, and scalable psychological training model for precision sports, providing a robust foundation for future longitudinal and comparative research. Full article
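
The expert-panel criterion cited above (CVR ≥ 0.80) refers to Lawshe's Content Validity Ratio, CVR = (n_e − N/2) / (N/2), where n_e is the number of panelists rating a component essential and N is the panel size. A small sketch of the computation with an invented panel:

```python
# Lawshe's Content Validity Ratio, the statistic reported with a >= 0.80
# threshold in the abstract (panel numbers below are illustrative).
def content_validity_ratio(n_essential: int, n_panelists: int) -> float:
    """CVR = (n_e - N/2) / (N/2); ranges from -1 (none essential) to +1 (all)."""
    half = n_panelists / 2
    return (n_essential - half) / half

# Example: 9 of 10 experts rate a program component "essential".
print(content_validity_ratio(9, 10))   # 0.8 -> meets the 0.80 consensus cut-off
```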

18 pages, 515 KB  
Article
A Conceptual Model for Designing Anxiety-Reducing Digital Games in Mathematics Education
by Ljerka Jukić Matić, Sonia Palha and Jenni Huhtasalo
Educ. Sci. 2026, 16(1), 34; https://doi.org/10.3390/educsci16010034 - 27 Dec 2025
Viewed by 348
Abstract
This paper presents a conceptual model for creating digital educational games that aim to reduce mathematics anxiety (MA) and promote positive emotional engagement in mathematics education. No empirical data were collected or analyzed; the proposed model is based on a synthesis of theory and empirical findings from prior studies. Drawing on Control-Value Theory and recent meta-analyses and systematic reviews, the model identifies key psychological mechanisms underlying MA and proposes game features that address both cognitive and emotional domains. Adaptive difficulty and feedback, safe error handling, narrative, collaborative play, emotional regulation tools, mastery-oriented low-stakes practice, and non-competitive progress tracking are all discussed in terms of their theoretical foundation and empirical support. The paper explains how these features can improve learners’ perceived control and value, reducing anxiety while increasing motivation, self-efficacy, and engagement. The proposed model combines game design principles with evidence-based intervention strategies to provide guidance for the future development and evaluation of anxiety-reducing digital math games. This framework is intended to help researchers and practitioners create digital games that effectively support students with high math anxiety and improve mathematics education outcomes. Full article
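
The adaptive-difficulty feature discussed above can be realized, for example, with a staircase rule that lowers difficulty immediately after an error and raises it only after a run of successes, keeping anxious learners within a manageable range. The sketch below is a hypothetical illustration of that principle, not code from the paper:

```python
# Staircase difficulty adjustment: drop quickly after errors, rise slowly after
# streaks of success. Hypothetical sketch of the design principle only.
class StaircaseDifficulty:
    def __init__(self, level: int = 3, min_level: int = 1, max_level: int = 10):
        self.level = level
        self.min_level = min_level
        self.max_level = max_level
        self._streak = 0

    def update(self, answered_correctly: bool) -> int:
        if answered_correctly:
            self._streak += 1
            if self._streak >= 3:               # raise only after 3 correct in a row
                self.level = min(self.level + 1, self.max_level)
                self._streak = 0
        else:
            self._streak = 0
            self.level = max(self.level - 1, self.min_level)  # drop immediately
        return self.level

difficulty = StaircaseDifficulty()
for outcome in [True, True, True, False, True]:
    print(difficulty.update(outcome))           # prints 3 3 4 3 3
```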

31 pages, 5478 KB  
Article
An Intelligent English-Speaking Training System Using Generative AI and Speech Recognition
by Ching-Ta Lu, Yen-Ju Chen, Tai-Ying Wu and Yen-Yu Lu
Appl. Sci. 2026, 16(1), 189; https://doi.org/10.3390/app16010189 - 24 Dec 2025
Viewed by 433
Abstract
English is the first foreign language most Taiwanese encounter, yet few achieve proficient speaking skills. This paper presents a generative AI-based English-speaking training system designed to enhance oral proficiency through interactive AI agents. The system employs ChatGPT version 5.2 to generate diverse and tailored conversational scenarios, enabling learners to practice in contextually relevant situations. Spoken responses are captured via speech recognition and analyzed by a large language model, which provides intelligent scoring and personalized feedback to guide improvement. Learners can automatically generate scenario-based scripts according to their learning needs. The D-ID AI system then produces a virtual character for the AI agent, whose lip movements are synchronized with the conversation, thereby creating realistic video interactions. Because learners practice with an AI agent, the system maintains controlled emotional expression, reduces communication anxiety, and helps learners adapt to non-native interaction, fostering more natural and confident speech production. Accordingly, the proposed system supports compelling, immersive, and personalized language learning. The experimental results indicate that repeated practice with the proposed system substantially improves English speaking proficiency. Full article
(This article belongs to the Section Applied Neuroscience and Neural Engineering)
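
The recognize-then-score loop described above (speech recognition feeding a large language model that returns scores and personalized feedback) might be wired up roughly as follows. The client library, model names, and rubric prompt are assumptions for illustration only, and the D-ID avatar step is omitted:

```python
# Rough sketch of a recognize-then-score loop, assuming the OpenAI Python
# client; model names and the rubric prompt are illustrative placeholders and
# are not taken from the paper.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def transcribe(audio_path: str) -> str:
    """Turn the learner's spoken response into text."""
    with open(audio_path, "rb") as f:
        result = client.audio.transcriptions.create(model="whisper-1", file=f)
    return result.text

def score_response(scenario: str, learner_text: str) -> str:
    """Ask an LLM to grade the response and return personalized feedback."""
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You are an English-speaking coach. Score the learner's "
                        "reply from 1-10 for fluency, grammar, and relevance to "
                        "the scenario, then give two concrete suggestions."},
            {"role": "user",
             "content": f"Scenario: {scenario}\nLearner said: {learner_text}"},
        ],
    )
    return completion.choices[0].message.content

feedback = score_response("Ordering coffee at a cafe",
                          transcribe("learner_reply.wav"))
print(feedback)
```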

14 pages, 808 KB  
Article
UnderstandingDelirium.ca: A Mixed-Methods Observational Evaluation of an Internet-Based Educational Intervention for the Public and Care Partners
by Randi Shen, Dima Hadid, Stephanie Ayers, Sandra Clark, Rebekah Woodburn, Roland Grad and Anthony J. Levinson
Geriatrics 2025, 10(6), 168; https://doi.org/10.3390/geriatrics10060168 - 16 Dec 2025
Viewed by 230
Abstract
Background/Objectives: Delirium, an acute cognitive disturbance, is often unrecognized by family or friend care partners, contributing to delayed interventions and negative health outcomes. UnderstandingDelirium.ca is an e-learning lesson developed to address this gap by improving delirium knowledge among the public, patients, and family/friend care partners. Our objective was to evaluate the acceptability, intention to use, and perceived impact of Understanding Delirium e-learning among public users. Methods: A convergent mixed-methods observational evaluation combining survey-based quantitative data and thematic analysis was conducted. The survey included the Net Promoter Score (NPS), the short-form Information Assessment Method for patients and consumers (IAM4all-SF), and an open-text feedback item. Descriptive statistics were used to summarize IAM4all-SF responses, assessing perceived relevance, understandability, intended use, and anticipated benefit. Open-text comments were analyzed thematically by two independent reviewers who reached consensus through discussion. Subgroup analysis of qualitative themes was performed by age, gender, and NPS category. Results: Among 629 survey respondents, over 90% of respondents agreed that the lesson was relevant, understandable, likely to be used, and beneficial. The NPS was rated ‘excellent’ (score of 71), and lesson uptake included over 7000 unique users with a 35% completion rate. Qualitative analysis revealed themes of high educational value, emotional resonance, and perceived gaps in prior healthcare communication. Respondents emphasized the lesson’s clarity, intent to share, and potential for wider dissemination. Conclusions: UnderstandingDelirium.ca is a promising, guideline-aligned digital intervention that has potential to enhance delirium literacy and reduce care partner distress. Findings suggest that the Understanding Delirium e-learning can effectively improve public delirium literacy and should be integrated into care partner and clinical workflows. Full article
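
The Net Promoter Score of 71 reported above is conventionally computed from the 0-10 "likelihood to recommend" item as the percentage of promoters (ratings of 9-10) minus the percentage of detractors (ratings of 0-6). A quick sketch with made-up ratings:

```python
# Net Promoter Score: % promoters (9-10) minus % detractors (0-6).
def net_promoter_score(ratings):
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / len(ratings))

sample = [10, 9, 9, 10, 8, 7, 10, 9, 3, 9]   # illustrative survey responses
print(net_promoter_score(sample))            # 60 (7 promoters, 1 detractor, n = 10)
```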

28 pages, 14015 KB  
Article
Evaluating Passenger Behavioral Experience in Metro Travel: An Integrated Model of One-Way and Interactive Behaviors
by Ning Song, Xuemei He, Fan Liu and Anjie Tian
Sustainability 2025, 17(24), 11257; https://doi.org/10.3390/su172411257 - 16 Dec 2025
Viewed by 352
Abstract
With the continuous expansion of urban metro systems, balancing passenger experience and operational efficiency has become a central concern in contemporary public transportation design. However, most existing metro service studies continue to focus on perceptual comfort or isolated usability tasks and lack an integrated, behavior-centered perspective that accounts for the full travel chain and diverse user groups. This study develops the Bi-directional Service Behavioral Experience Model (BSBEM), which systematically integrates one-way navigation behaviors and interactive operational behaviors within a unified dual-path framework to identify behavioral patterns and experiential disparities across user groups. Based on the People–Touchpoints–Environments–Messages–Services–Time–Emotion (POEMSTI) behavioral observation framework, this study employs a mixed-method approach combining video-based behavioral coding, usability testing, and subjective evaluation. An empirical study conducted at Beidajie Station on Xi’an Metro Line 2 involved three representative passenger groups: high-frequency commuters, urban leisure travelers, and special-care passengers. Multi-source data were collected to capture temporal, spatial, and interactional dynamics throughout the travel process. Results show that high-frequency commuters demonstrate the highest operational fluency, urban leisure travelers exhibit greater visual dependency and exploratory pauses, and special-care passengers are most affected by accessibility and feedback latency. Further analysis reveals a positive correlation between route complexity and interaction delay, highlighting discontinuous information feedback as a key experiential bottleneck. By jointly modeling one-way and interactive behaviors and linking group-specific patterns to concrete metro touchpoints, this research extends behavioral evaluation in metro systems and offers a novel behavior-based perspective along with empirical evidence for inclusive, adaptive, and human-centered service design. Full article

16 pages, 425 KB  
Article
Supporting the Community’s Health Advocates: Initial Insights into the Implementation of a Dual-Purpose Educational and Supportive Group for Community Health Workers
by Marcie Johnson, Kimberly Hailey-Fair, Elisabeth Vanderpool, Victoria DeJaco, Rebecca Chen, Christopher Goersch, Ursula E. Gately, Amanda Toohey and Panagis Galiatsatos
Healthcare 2025, 13(24), 3288; https://doi.org/10.3390/healthcare13243288 - 15 Dec 2025
Viewed by 279
Abstract
Background/Objectives: Community health workers (CHWs) play a critical role in advancing health equity by bridging gaps in care for underserved populations. However, limited institutional support, inconsistent training, and lack of integration contribute to high rates of burnout. The Lunch and Learn program was launched in Maryland in fall 2023 as a virtual continuing education and peer-support initiative designed to foster professional development, enhance connections among CHWs, and align with Maryland state CHW certification requirements. This article describes the program’s first year of implementation as a proof-of-concept and model for scalable CHW workforce support. Methods: The program offered twice-monthly, one-hour virtual sessions that included expert-led presentations, Q&A discussions, and dedicated peer-support time. Participant engagement was assessed using attendance metrics, post-session surveys, and annual feedback forms to identify trends in participation, learning outcomes, and evolving professional priorities. Results: Participation increased over time with the program’s listserv expanding from 29 to 118 members and average session attendance more than doubling. CHWs highlighted the program’s value in meeting both educational and emotional support needs. Conclusions: The Lunch and Learn program demonstrates a promising model for addressing burnout through education and community connection. As an adaptable, CHW-informed initiative, it supports both professional growth and well-being. Ongoing development will focus on expanding access, incorporating experiential learning assessments, and advocating for sustainable funding to ensure long-term program impact and CHW workforce stability. Full article

19 pages, 3468 KB  
Article
Sensory Representation of Neural Networks Using Sound and Color for Medical Imaging Segmentation
by Irenel Lopo Da Silva, Nicolas Francisco Lori and José Manuel Ferreira Machado
J. Imaging 2025, 11(12), 449; https://doi.org/10.3390/jimaging11120449 - 15 Dec 2025
Viewed by 297
Abstract
This paper introduces a novel framework for sensory representation of brain imaging data, combining deep learning-based segmentation with multimodal visual and auditory outputs. Structural magnetic resonance imaging (MRI) predictions are converted into color-coded maps and stereophonic/MIDI sonifications, enabling intuitive interpretation of cortical activation patterns. High-precision U-Net models efficiently generate these outputs, supporting clinical decision-making, cognitive research, and creative applications. Spatial, intensity, and anomalous features are encoded into perceivable visual and auditory cues, facilitating early detection and introducing the concept of “auditory biomarkers” for potential pathological identification. Despite current limitations, including dataset size, absence of clinical validation, and heuristic-based sonification, the pipeline demonstrates technical feasibility and robustness. Future work will focus on clinical user studies, the application of functional MRI (fMRI) time-series for dynamic sonification, and the integration of real-time emotional feedback in cinematic contexts. This multisensory approach offers a promising avenue for enhancing the interpretability of complex neuroimaging data across medical, research, and artistic domains. Full article
(This article belongs to the Section Medical Imaging)
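
The intensity-to-pitch and position-to-pan mapping described above can be sketched as follows; the frequency range, region statistics, and synthesis details are illustrative assumptions rather than the authors' pipeline:

```python
# Illustrative sonification of segmented regions: mean intensity drives pitch,
# horizontal centroid drives stereo pan. Not the authors' pipeline.
import numpy as np
from scipy.io import wavfile

SR = 44100

def region_tone(mean_intensity: float, x_centroid: float, dur: float = 0.4):
    """mean_intensity in [0, 1] -> 220-880 Hz; x_centroid in [0, 1] -> pan."""
    freq = 220.0 + 660.0 * mean_intensity
    t = np.linspace(0.0, dur, int(SR * dur), endpoint=False)
    mono = 0.3 * np.sin(2 * np.pi * freq * t)
    left, right = mono * (1.0 - x_centroid), mono * x_centroid
    return np.stack([left, right], axis=1)

# Three fake regions: (mean intensity, horizontal centroid)
regions = [(0.2, 0.1), (0.6, 0.5), (0.9, 0.9)]
audio = np.concatenate([region_tone(i, x) for i, x in regions])
wavfile.write("regions_sonified.wav", SR, (audio * 32767).astype(np.int16))
```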

6 pages, 346 KB  
Article
A Structured Approach to History and Physical Examination in Oncology for Medical Learners
by Leenah Abojaib, Aashvi Patel and Beatrice T. B. Preti
Int. Med. Educ. 2025, 4(4), 54; https://doi.org/10.3390/ime4040054 - 11 Dec 2025
Viewed by 326
Abstract
In oncology, traditional H&P templates centered on a single chief complaint often fail to address the longitudinal care needs and emotional complexities of cancer patients, leaving learners unprepared for sensitive conversations such as breaking bad news or discussing treatment goals. To address this, we conducted a literature review of specialty-focused H&P tools in child psychiatry and gynecology and, drawing on our experiences as two first-year medical students in an outpatient oncology clinic, developed an oncology H&P template to guide novice clinicians. The guide incorporates structured prompts for rapport-building; detailed oncologic and family cancer history; functional independence assessments; treatment goals; emotional wellbeing; support networks; and responding to emotion. After initial pilot testing by the two developers under supervisor guidance, the template was distributed to five then ten additional students and disseminated via the ASCO online forum and Twitter. Feedback from ten oncologists and oncology trainees highlighted the template’s value in gathering review of systems, past treatment details, functional status, and cancer history. Our findings suggest that this oncology-tailored tool enhances interview flow, promotes comprehensive data collection, and supports empathetic patient engagement. Integration into routine oncology training is planned, with future adaptations for specific oncological subspecialties and potential use in other medical specialties. Full article

38 pages, 2967 KB  
Article
Exploring the Impact of Affective Pedagogical Agents: Enhancing Emotional Engagement in Higher Education
by Marta Arguedas, Thanasis Daradoumis, Santi Caballe, Jordi Conesa and Elvis Ortega-Ochoa
Computers 2025, 14(12), 542; https://doi.org/10.3390/computers14120542 - 10 Dec 2025
Cited by 1 | Viewed by 548
Abstract
This study examines the influence of pedagogical agents on enhancing emotional engagement in higher education settings through the provision of cognitive and affective feedback. The research focuses on students in a collaborative “Database Systems and Design” course, comparing the effects of feedback from a human teacher (control group) to those of an Affective Pedagogical Tutor (APT) (experimental group). Emotional engagement was measured through key positive emotions such as motivation, curiosity, optimism, confidence, and satisfaction, as well as the reduction in negative emotions like boredom, anger, insecurity, and anxiety. Results suggest that APT feedback was associated with higher levels of emotional engagement compared to teacher feedback. Cognitive feedback from the APT was perceived as supporting learning outcomes by offering detailed, task-specific guidance, while affective feedback further supported emotional regulation and positive emotional states. Students interacting with the APT reported feeling more motivated, curious, and optimistic, which contributed to sustained participation and greater confidence in their work. At the same time, boredom and anger were notably reduced in the experimental group. These findings illustrate the potential of affective pedagogical agents to complement educational experiences by fostering positive emotional states and mitigating barriers to engagement. By integrating affective and cognitive feedback, pedagogical agents can create more emotionally supportive and engaging learning environments, particularly in collaborative and complex academic tasks. Full article

30 pages, 1289 KB  
Article
AI-Enabled Microlearning and Case Study Atomisation: ICT Pathways for Inclusive and Sustainable Higher Education
by Hassiba Fadli
Sustainability 2025, 17(24), 11012; https://doi.org/10.3390/su172411012 - 9 Dec 2025
Viewed by 688
Abstract
The integration of Artificial Intelligence (AI) into higher education offers new opportunities for inclusive and sustainable learning. This study investigates the impact of an AI-enabled microlearning cycle (comprising short instructional videos, formative quizzes, and structured discussions) on student engagement, inclusivity, and academic performance in postgraduate management education. A mixed-methods design was applied across two cohorts (2023, n = 138; 2024, n = 140). Data included: (1) survey responses on engagement, accessibility, and confidence (5-point Likert scale); (2) learning analytics (video views, quiz completion, forum activity); (3) academic results; and (4) qualitative feedback from open-ended questions. Quantitative analyses used Wilcoxon signed-rank tests, regressions, and subgroup comparisons; qualitative data underwent thematic analysis. Findings revealed significant improvements across all dimensions (p < 0.001), with large effect sizes (r = 0.35–0.48). Engagement, accessibility, and confidence increased most, supported by behavioural data showing higher video viewing (+19%), quiz completion (+21%), and forum participation (+65%). Regression analysis indicated that forum contributions (β = 0.39) and video engagement (β = 0.31) were the strongest predictors of grades. Subgroup analysis confirmed equitable outcomes, with non-native English speakers reporting slightly higher accessibility gains. Qualitative themes highlighted interactivity, real-world application, and inclusivity, but also noted quiz-related anxiety and a need for industry tools. The AI-enabled microlearning model enhanced engagement, equity, and academic success, aligning with SDG 4 (Quality Education) and SDG 10 (Reduced Inequalities). By combining Cognitive Load Theory, Kolb’s experiential learning, and Universal Design for Learning, it offers a scalable framework that advances both pedagogical and ecological sustainability. Future research should explore emotional impacts, AI co-teaching models, and cross-disciplinary applications. Full article
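
The pre/post analysis reported above (Wilcoxon signed-rank tests with effect sizes of r = 0.35-0.48) can be sketched as follows. The Likert scores are invented, and recovering Z from the two-sided p-value is one common way to obtain the effect size r = Z / sqrt(N):

```python
# Pre/post comparison of the kind reported in the abstract: Wilcoxon signed-rank
# test plus an r effect size. Scores below are illustrative 5-point Likert
# responses, not the study's data.
import numpy as np
from scipy.stats import wilcoxon, norm

pre  = np.array([3, 2, 3, 4, 2, 3, 3, 2, 4, 3, 2, 3])
post = np.array([4, 4, 4, 5, 3, 4, 2, 3, 5, 4, 3, 4])

res = wilcoxon(pre, post)                    # paired, non-parametric
z = norm.isf(res.pvalue / 2)                 # |Z| recovered from the two-sided p
r = z / np.sqrt(len(pre))                    # effect size r = Z / sqrt(N)
print(f"p = {res.pvalue:.4f}, effect size r = {r:.2f}")
```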

18 pages, 1443 KB  
Review
Empathy by Design: Reframing the Empathy Gap Between AI and Humans in Mental Health Chatbots
by Alastair Howcroft and Holly Blake
Information 2025, 16(12), 1074; https://doi.org/10.3390/info16121074 - 4 Dec 2025
Viewed by 1985
Abstract
Artificial intelligence (AI) chatbots are now embedded across therapeutic contexts, from the United Kingdom’s National Health Service (NHS) Talking Therapies to widely used platforms like ChatGPT. Whether welcomed or not, these systems are increasingly used for both patient care and everyday support, sometimes even replacing human contact. Their capacity to convey empathy strongly influences how people experience and benefit from them. However, current systems often create an “AI empathy gap”, where interactions feel impersonal and superficial compared to those with human practitioners. This paper, presented as a critical narrative review, cautiously challenges the prevailing narrative that empathy is a uniquely human skill that AI cannot replicate. We argue this belief can stem from an unfair comparison: evaluating generic AIs against an idealised human practitioner. We reframe capabilities seen as exclusively human, such as building bonds through long-term memory and personalisation, not as insurmountable barriers but as concrete design targets. We also discuss the critical architectural and privacy trade-offs between cloud and on-device (edge) solutions. Accordingly, we propose a conceptual framework to meet these targets. It integrates three key technologies: Retrieval-Augmented Generation (RAG) for long-term memory; feedback-driven adaptation for real-time emotional tuning; and lightweight adapter modules for personalised conversational styles. This framework provides a path toward systems that users perceive as genuinely empathic, rather than ones that merely mimic supportive language. While AI cannot experience emotional empathy, it can model cognitive empathy and simulate affective and compassionate responses in coordinated ways at the behavioural level. However, because these systems lack conscious, autonomous ‘helping’ intentions, these design advancements must be considered alongside careful ethical and regulatory safeguards. Full article
(This article belongs to the Special Issue Internet of Things (IoT) and Cloud/Edge Computing)
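
The long-term-memory component of the proposed framework can be illustrated generically: snippets from earlier conversations are retrieved by similarity to the new message and prepended to the prompt so the agent can respond with continuity. In the sketch below, a TF-IDF retriever stands in for the dense vector store a production RAG system would use, and the memory snippets and prompt wording are invented:

```python
# Minimal retrieval-augmented prompt construction (illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

memory = [
    "User said their anxiety spikes before weekly team meetings.",
    "User mentioned that walking their dog in the evening helps them unwind.",
    "User prefers short, practical suggestions over long explanations.",
]

def build_prompt(user_message: str, top_k: int = 2) -> str:
    vec = TfidfVectorizer().fit(memory + [user_message])
    sims = cosine_similarity(vec.transform([user_message]), vec.transform(memory))[0]
    recalled = [memory[i] for i in sims.argsort()[::-1][:top_k]]   # most similar first
    context = "\n".join(f"- {m}" for m in recalled)
    return (f"Relevant things the user has shared before:\n{context}\n\n"
            f"User now says: {user_message}\n"
            f"Reply with a brief, empathic, practical response.")

print(build_prompt("I have that big meeting tomorrow and I'm panicking."))
```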

20 pages, 31486 KB  
Article
Design and Implementation of a Companion Robot with LLM-Based Hierarchical Emotion Motion Generation
by Yoongu Lim, Jaeuk Cho, Duk-Yeon Lee, Dongwoon Choi and Dong-Wook Lee
Appl. Sci. 2025, 15(23), 12759; https://doi.org/10.3390/app152312759 - 2 Dec 2025
Viewed by 538
Abstract
Recently, human–robot interaction (HRI) with social robots has attracted significant attention. Among them, companion robots, which exhibit pet-like behaviors and interact with people primarily through non-verbal means, particularly require the generation of appropriate gestures. This paper presents the design and implementation of a companion cat robot, named PEPE, with a large language model (LLM)-based hierarchical emotional motion generation method. To design the cat-like companion robot, an analysis of feline emotional behaviors was conducted to identify the body parts and motions essential for effective emotional expression. Based on this analysis, the required degrees of freedom (DoFs) and structural configuration for PEPE were derived. To generate expressive gestures efficiently and reliably, a hierarchical LLM-based emotional motion generation method was proposed. The process defines the robot’s structural features, establishes a gesture generation code format, and incorporates emotion-based guidelines grounded in feline behavioral analysis to mitigate LLM hallucination and ensure physical feasibility. The proposed method was implemented on the physical robot, and eight emotional gestures were generated—Happy, Angry, Sad, Fearful, Joyful, Excited, Positive Feedback, and Negative Feedback. A user study with 15 participants was conducted to validate the system. The high-arousal gestures—Angry, Joyful, and Excited—were rated significantly above the neutral clarity threshold (p < 0.05), demonstrating clear user recognition. Meanwhile, low-arousal gestures exhibited neutral-level perceptions consistent with their subtle motion profiles. These results confirm that the proposed LLM-based framework effectively generates expressive, physically executable gestures for a companion robot. Full article
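
The physical-feasibility safeguard described above, in which LLM-generated gestures are checked against the robot's structural limits before execution, can be illustrated with a simple clamp-and-report validator. The joint names, limits, and example keyframes are hypothetical and are not PEPE's actual configuration:

```python
# Feasibility check for LLM-proposed gestures: clamp any joint target that
# exceeds its range of motion and report what was corrected. Hypothetical
# joints and limits, for illustration only.
JOINT_LIMITS_DEG = {          # per-joint (min, max) range of motion in degrees
    "neck_pitch": (-30, 30),
    "tail_yaw":   (-60, 60),
    "ear_left":   (0, 45),
    "ear_right":  (0, 45),
}

def validate_gesture(keyframes):
    """Clamp out-of-range targets and report which joints were corrected."""
    corrected = []
    for frame in keyframes:
        for joint, angle in frame["angles"].items():
            lo, hi = JOINT_LIMITS_DEG[joint]
            if not lo <= angle <= hi:
                corrected.append((frame["t"], joint, angle))
                frame["angles"][joint] = max(lo, min(hi, angle))
    return keyframes, corrected

# Example "Excited" gesture as an LLM might emit it (one angle out of range).
gesture = [
    {"t": 0.0, "angles": {"neck_pitch": 10, "tail_yaw": 40, "ear_left": 30, "ear_right": 30}},
    {"t": 0.5, "angles": {"neck_pitch": 20, "tail_yaw": 75, "ear_left": 45, "ear_right": 45}},
]
safe_gesture, fixes = validate_gesture(gesture)
print(fixes)   # [(0.5, 'tail_yaw', 75)] -> clamped to 60 before execution
```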