Article

Exploring the Impact of Affective Pedagogical Agents: Enhancing Emotional Engagement in Higher Education

1 Faculty of Education, Valencian International University, 46002 Valencia, Spain
2 IN3-Department of Computer Science, Multimedia and Telecommunications, Open University of Catalonia, 08018 Barcelona, Spain
3 Research Centre of Gameful Realities, Tampere University, Kalevantie 4, 33100 Tampere, Finland
* Author to whom correspondence should be addressed.
Computers 2025, 14(12), 542; https://doi.org/10.3390/computers14120542
Submission received: 7 October 2025 / Revised: 30 November 2025 / Accepted: 1 December 2025 / Published: 10 December 2025

Abstract

This study examines the influence of pedagogical agents on enhancing emotional engagement in higher education settings through the provision of cognitive and affective feedback. The research focuses on students in a collaborative “Database Systems and Design” course, comparing the effects of feedback from a human teacher (control group) to those of an Affective Pedagogical Tutor (APT) (experimental group). Emotional engagement was measured through key positive emotions such as motivation, curiosity, optimism, confidence, and satisfaction, as well as the reduction in negative emotions like boredom, anger, insecurity, and anxiety. Results suggest that APT feedback was associated with higher levels of emotional engagement compared to teacher feedback. Cognitive feedback from the APT was perceived as supporting learning outcomes by offering detailed, task-specific guidance, while affective feedback further supported emotional regulation and positive emotional states. Students interacting with the APT reported feeling more motivated, curious, and optimistic, which contributed to sustained participation and greater confidence in their work. At the same time, boredom and anger were notably reduced in the experimental group. These findings illustrate the potential of affective pedagogical agents to complement educational experiences by fostering positive emotional states and mitigating barriers to engagement. By integrating affective and cognitive feedback, pedagogical agents can create more emotionally supportive and engaging learning environments, particularly in collaborative and complex academic tasks.

1. Introduction

Emotions are vital in fostering motivation, self-regulated learning, and cognitive development. They influence students’ ability to sustain interest in educational activities and improve overall learning outcomes [1,2]. Emotional learning research explores multiple domains, including emotional awareness and affective feedback. Key strategies include fostering the ability to identify and manage one’s emotions [3], understanding others’ emotions [4], and transforming negative emotions into positive ones through self-motivation. Additionally, methods like conflict recovery and emotional attachment to peers and studies enhance socio-emotional learning [5,6]. These approaches demonstrate the value of integrating emotional skills into education to enhance both academic performance and social-emotional growth [7].

1.1. Animated Pedagogical Agents

A pedagogical agent is an intelligent computer character designed to guide learners in educational environments. These agents interact socially with learners and assume roles such as tutor or co-learner. Animated pedagogical agents are extensively used to provide personalized feedback, improving students’ emotional states and academic performance [8]. Beyond academic support, some agents are designed to offer social, emotional, and relational feedback, fostering long-term socio-emotional relationships. However, initial designs relied on task-related feedback with limited emotional support, often through scheduled motivational messages like praise, which could become counterproductive over time.
Pedagogical agents aim to create engaging, safe, and supportive learning environments while assisting learners in overcoming difficulties, achieving objectives, and enhancing self-reflection about their learning process [9]. Some approaches use Multiple Intelligent Pedagogical Agents with roles like Expert, Motivator, or Mentor, which are positively received for aiding learning and motivation. Emotional multi-agent systems have also been developed, enabling agents to collaborate in promoting flexible, dynamic affective communication based on learners’ cognitive and emotional needs [10].
Advanced agents can detect and process users’ emotions in real time, offering complex motivational feedback. However, implementing such systems involves costly computational modeling with a high risk of failure [11]. Research on pedagogical agents yields mixed results. Some reviews question their efficiency, arguing that they do not necessarily outperform simpler teaching tools [12]. In contrast, meta-analyses highlight their potential for small positive effects on learning under specific conditions [13,14], emphasizing the importance of contextual and pedagogical features, while [15] calls for more research on diverse learning scenarios. Recent reviews by [16] confirm that pedagogical agents improve learning, but further studies are needed to determine which features and conditions are most effective.

1.2. Cognitive Feedback

Cognitive feedback plays a crucial role in helping students adjust their cognition and behavior to improve learning outcomes [17]. However, recent studies have questioned the effectiveness of computerized cognitive feedback, noting that it does not always enhance learning [18]. Theoretical models suggest that feedback is most effective when it directly targets students’ cognition and motivation within a digital environment [19,20].
The content of feedback significantly impacts its effectiveness. Research distinguishes between global feedback, which indicates whether an answer is correct or incorrect, and elaborated feedback, which provides detailed explanations, hints, or examples. Elaborated feedback engages students in deeper cognitive processes, leading to better performance in subsequent tasks [21,22]. However, some studies show that elaborated feedback may not always benefit learners, as seen in sixth-grade students performing textual comprehension tasks.
The success of cognitive feedback is influenced by factors such as task complexity, learner characteristics, and feedback format [23,24]. For instance, ref. [25] found that elaborated verbal feedback from animated agents outperformed simple feedback in improving learning. Ref. [26] further observed that students perceive and process feedback differently depending on its type, emphasizing the importance of tailoring feedback to individual needs.
Despite extensive research, little attention has been given to how students perceive cognitive feedback in digital learning environments and its impact on learning outcomes [27]. Therefore, future research should explore these perceptions and the conditions under which cognitive feedback can maximize its impact, particularly in computer-based learning settings.

1.3. Affective Feedback

Affective feedback has gained recognition for its role in supporting students’ emotional development and engagement in learning. Early e-learning systems incorporated emotion-based feedback to motivate learners and enhance their mood [28]. Strategies included empathetic responses, such as parallel-empathetic (mirroring the user’s emotions), reactive-empathetic (responding to the user’s affective state), and task-based strategies (adjusting tasks to address emotions) [29]. For instance, AutoTutor used affective feedback to address confusion and boost learning for students with low baseline knowledge [30].
Research indicates that pedagogical agents providing affective feedback can enhance motivation, enjoyment, and perceived usefulness, although the believability of such feedback varies [31]. Some studies suggest that the enthusiasm of pedagogical agents positively influences learners’ emotional states and cognitive outcomes [32]. However, the focus of many studies has been limited to specific positive emotions, such as motivation and satisfaction [33].
Emotion regulation strategies in Intelligent Tutoring Systems (ITSs) have shown promise in improving learning. Coping strategies, such as problem-focused and emotion-focused approaches, effectively regulate negative emotions [34]. Instructed reappraisal, which reframes negative situations positively, has also been linked to higher engagement and learning gains.
Feedback must be appropriately designed to avoid counterproductive effects. For example, overly encouraging messages may fail to sustain engagement if their content does not align with students’ needs [35,36]. Practical insights suggest that combining motivational, instructional, and emotional feedback can reduce boredom and off-task behavior [37].
Text-based affective feedback remains popular due to its accessibility and ease of use across devices. Effective text-based feedback should be timely, concise, visually appealing, and easy to read [38]. Personalized feedback, addressing frustration or negative emotions, has been shown to improve emotional states and learning outcomes [39]. Overall, integrating various feedback types tailored to learners’ emotional and cognitive needs is essential for enhancing engagement and learning outcomes.

1.4. Wrapping Up

Both cognitive and affective feedback contribute significantly to learning by fostering reflection and improving engagement. However, their impact depends on context, personalization, and timing. This study highlights the need for further exploration of animated pedagogical agents’ potential in balancing these feedback types. By comparing their effectiveness with human teachers in a higher education “Database Systems and Design” course, this research underscores the importance of tailoring feedback to meet the emotional and cognitive needs of students.
A central motivation for this exploratory study is to examine how cognitive and affective feedback relate to students’ emotional states and whether these relationships contribute to enhancing learners’ emotional engagement. Recent research in online and technology-mediated learning shows that students’ perceptions of feedback are closely associated with their cognitive and emotional engagement [40], and that affective feedback delivered by adaptive learning systems can influence learning engagement and self-directed learning [41]. Moreover, a recent scoping review of affective intelligent tutoring systems highlights substantial variability in the emotional effects of affective support, revealing gaps in how cognitive and affective mechanisms jointly shape learners’ emotional responses [42]. Building upon these insights, the present study incorporates cognitive feedback, affective feedback, and students’ emotional-state variables within a unified analytical framework to clarify how these components interact and to determine whether the Affective Pedagogical Tutor (APT) produces the expected enhancing effects on emotional engagement compared to human teacher feedback.
In recent years, there has been increasing academic interest in understanding how emotionally intelligent systems can influence students’ engagement and learning processes. However, empirical studies exploring the emotional impact of affective pedagogical agents in real university settings remain limited. This study addresses this gap by investigating how the use of an Affective Pedagogical Tutor (APT) influences students’ emotional states and their reception of cognitive and affective feedback in a higher education course. Our aim is to provide initial exploratory evidence that can guide future research in this area.

2. Materials and Methods

This section outlines the design and implementation of an experimental study conducted within the “Database Systems and Design” course, aimed at evaluating the impact of affective and cognitive feedback delivered by an Affective Pedagogical Tutor (APT) compared to a human teacher. The study involved 130 undergraduate students, randomly assigned to control and experimental groups, and structured around a three-week collaborative project. Emotional engagement was assessed through a combination of forum-based discourse analysis and post-activity surveys. The APT operated within an Activity Theory framework, integrating real-time emotion analysis and time-sensitive feedback mechanisms. Independent variables included affective and cognitive feedback, while the dependent variable was students’ emotional states. Data collection instruments were designed to capture perceptions of feedback and emotional responses using Likert-type scales. Statistical procedures included reliability testing (Cronbach’s alpha), normality verification (K-S test and Box–Cox transformation), and descriptive analyses, ensuring methodological rigor and validity for subsequent inferential analyses.

2.1. Research Aims

This study examines whether both affective and cognitive APT feedback supported students’ emotional engagement significantly more than the human teacher’s feedback.

2.1.1. Learning Scenario and Course Context

The study was conducted at a private university in Spain within an undergraduate Computer Science program. The instructional intervention took place in the “Database Systems and Design” course, selected for its emphasis on both conceptual understanding and practical application. This context provided an appropriate setting to examine the influence of cognitive and affective feedback on students’ emotional engagement.
Designing emotionally engaging learning experiences in technical subjects requires the careful alignment of authentic tasks, collaborative work, and feedback that addresses both cognitive and affective dimensions. In this study, the learning activities were intentionally structured to immerse students in a realistic, application-oriented scenario while they received different types of feedback from either a human instructor or an AI-based tutor.
Within this setting, students worked in teams on database-related challenges supported by the institution’s learning management system (LMS). The intervention was designed so that both the control group and the experimental group operated under comparable technological and instructional conditions, differing only in the source and nature of the feedback they received. Emotional engagement and students’ perceptions of feedback were systematically monitored and assessed through standardized questionnaires.
The learning scenario followed a project-based learning (PBL) methodology. Students collaborated to design and implement a relational database system based on realistic case studies, such as inventory management, academic record tracking, or event scheduling. Each team developed a complete solution using MySQL, including requirement analysis, entity–relationship modeling, normalization, SQL query development, and the production of both a technical report and a live demonstration of the functional database. Collaboration was facilitated through Moodle discussion forums and shared online documents.
Participants were randomly assigned to one of two conditions. The control group received feedback exclusively from the human instructor, whereas the experimental group interacted with the Affective Pedagogical Tutor (APT), an intelligent conversational agent embedded in the LMS. Both groups engaged in equivalent learning activities, ensuring the validity of between-group comparisons.
The instructional intervention was deployed across nine consecutive sessions, each lasting approximately two hours and distributed across two consecutive days per week. As illustrated in Figure 1, the learning process was organized into three consecutive blocks:
Block 1 (Sessions 1–3): Introduction and context.
This initial phase provided students with an overview of the project and the theoretical foundations of emotional engagement, feedback strategies, and collaborative learning. During these sessions, participants were also introduced to the technological tools and feedback modalities used in the study.
Block 2 (Sessions 4–6): Core collaborative activity.
In this phase, students engaged in problem-based teamwork to develop their assigned database projects. The experimental group interacted with the Affective Pedagogical Tutor (APT), receiving cognitive and affective feedback, whereas the control group was supported exclusively by the human instructor. Feedback was integrated throughout the workflow to guide task progression and strengthen engagement.
Block 3 (Sessions 7–9): Consolidation and reflection.
The final phase involved advanced simulations and reflective exercises designed to reinforce learning and promote the application of feedback strategies in authentic pedagogical contexts. These sessions helped consolidate students’ understanding and prepare them for the concluding evaluation.
Sessions were held in two-hour blocks delivered across two consecutive days (e.g., Monday–Tuesday, Tuesday–Wednesday, Thursday–Friday), resulting in three sessions per week during a three-week period. Students’ emotional states were continuously monitored through an emotion-analysis model applied to their written contributions in the LMS forums.
Upon completing Session 9, all participants filled out evaluation instruments assessing their emotional engagement, perceived feedback, and overall learning experience. This structured and controlled design enabled a systematic comparison between the control and experimental groups under equivalent instructional conditions.
To complement the block-based description presented above, Table 1 provides a structured overview of the sub-phases within each block, detailing the activities conducted at each step, the roles of the instructor and the Affective Pedagogical Tutor (APT), and the types of data collected across the three-week intervention.
In summary, the study presents a coherent instructional scenario conducted within a single undergraduate course and structured across nine sessions over a three-week period. The learning design follows a consistent sequence of introduction, collaborative development, and consolidation, with participants randomly assigned to either human-delivered or APT-mediated feedback under equivalent technological conditions. Data collection procedures, including forum-based emotion analysis and post-intervention surveys, were aligned with this timeline to provide a clear and internally consistent account of the learning activities and assessment moments throughout the intervention.
The Affective Pedagogical Tutor (APT) used in this study was developed within a broader framework grounded in Activity Theory (AT), which provides a conceptual lens for analyzing learning as a socially mediated and tool-supported activity [43]. In this context, the APT operates as a mediating agent that interacts with students through emotionally responsive feedback, adapting its interventions based on the evolving emotional states of learners. The emotion analysis model integrated into the framework processes student-generated content—such as forum posts, chat messages, and collaborative documents—using discourse analysis techniques and sentiment detection tools to identify affective patterns over time [44].
A key innovation of this framework is the incorporation of the time factor as a dynamic variable that influences emotional transitions and learning outcomes. Emotions are not treated as isolated events but as trajectories that unfold throughout the learning process. The system monitors how long students remain in specific emotional states and identifies critical thresholds where prolonged negative emotions (e.g., frustration, anxiety) may hinder engagement or performance. This temporal sensitivity enables the APT to intervene at optimal moments, offering tailored cognitive and affective feedback that promotes emotional regulation and supports learning continuity [1].
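To make this temporal mechanism concrete, the following minimal Python sketch illustrates how the time a learner spends in a negative emotional state could be tracked against a critical threshold before an affective intervention is triggered. The state labels, the 20-minute threshold, and the trigger_affective_feedback routine are hypothetical placeholders chosen for illustration; they are not the APT’s actual implementation, which the paper does not specify at the code level.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import Dict

# Hypothetical configuration: which states count as negative and how long they
# may persist before the tutor intervenes (values are illustrative only).
NEGATIVE_STATES = {"insecure", "bored", "anxious", "outraged"}
CRITICAL_DURATION = timedelta(minutes=20)

@dataclass
class EmotionTracker:
    """Tracks how long each student remains in their current emotional state."""
    current_state: Dict[str, str] = field(default_factory=dict)
    state_since: Dict[str, datetime] = field(default_factory=dict)

    def update(self, student_id: str, detected_state: str, timestamp: datetime) -> bool:
        """Record a newly detected state; return True when an intervention is due."""
        if self.current_state.get(student_id) != detected_state:
            # Emotional transition: restart the clock for this student.
            self.current_state[student_id] = detected_state
            self.state_since[student_id] = timestamp
            return False
        elapsed = timestamp - self.state_since[student_id]
        return detected_state in NEGATIVE_STATES and elapsed >= CRITICAL_DURATION

def trigger_affective_feedback(student_id: str, state: str) -> None:
    # Placeholder for the APT's affective intervention (e.g., an encouraging message).
    print(f"APT -> {student_id}: supportive message addressing '{state}'")

# Usage: feed the tracker with (student, state, time) tuples produced by the emotion model.
tracker = EmotionTracker()
events = [
    ("s01", "anxious", datetime(2025, 3, 3, 10, 0)),
    ("s01", "anxious", datetime(2025, 3, 3, 10, 25)),
]
for sid, state, ts in events:
    if tracker.update(sid, state, ts):
        trigger_affective_feedback(sid, state)
```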
Figure 2 presents a visual representation of the Emotion Analysis Model, which extends the Activity Theory scenario by integrating emotional, cognitive, and temporal dimensions. The model includes components such as the student learning profile, rhetorical structure analysis, fuzzy logic for emotion interpretation, and ontology-based context modeling. These elements work together to provide both the teacher and the APT with actionable insights into students’ emotional states, allowing for timely and personalized pedagogical interventions. By embedding emotion awareness and time-sensitive feedback into the learning environment, the APT fosters a more adaptive and emotionally supportive educational experience [45].
The experimental study was conducted in a private university. Prior to its implementation, permission was formally requested and obtained from the institution’s management team. In line with ethical research standards, all participating students were informed about the purpose and nature of the study and subsequently signed an informed consent form. This document, reviewed and validated by both an expert psychologist and a legal advisor, guaranteed that participation was entirely voluntary, ensured confidentiality of the data collected, and clarified that the participants could withdraw at any time without any consequences. In compliance with the institutional regulations, the name of the university has been omitted from this manuscript.

2.1.2. Research Questions

The main research hypothesis we deal with in this work is the following:
“Affective and cognitive feedback provided by Affective Pedagogical Tutors (APTs) significantly enhances students’ emotional engagement compared to human teacher feedback. Furthermore, there is a significant relationship between both cognitive and affective APT feedback types and students’ emotional states.”
We break down this hypothesis into the following specific research questions:
RQ1. Did both affective and cognitive APT feedback support students’ emotional engagement significantly more than the human teacher’s feedback?
RQ2. Is there a significant relationship between all APT cognitive and affective feedback types and students’ emotional states?

2.1.3. Definition of Variables for the Learning Situation

The study involves both independent and dependent variables, which are integral to understanding the dynamics of the learning situation where teaching and learning processes occur. The independent variables are affective feedback (A) and cognitive feedback (C), both of which hold equal importance in shaping the learning environment. These variables influence the dependent variable, defined as students’ emotional states (E), as shown in Table 2. The focus is on emotional states that sustain students’ interest in activities and learning, which are further detailed in another section of the study. The learning situation emphasizes the balance and interaction between affective and cognitive elements to create a student-centered teaching environment.

2.2. Method

This section outlines the methodological procedures followed to carry out the experimental study, detailing the characteristics of the participants and the steps undertaken during the data collection process. By providing a comprehensive overview of both the control and experimental groups, we aim to clarify the sampling strategy, the context in which the study was conducted, and the protocols implemented to ensure consistency and reliability throughout the research.

2.2.1. Participant Profile and Research Procedure

The study involved a total of 130 undergraduate students enrolled in the “Database Systems and Design” course, part of a Computer Science degree program. This course was selected for its emphasis on collaborative learning and the practical application of database concepts.
Participants consisted of second-year undergraduate students enrolled in a Computer Science program. The group presented a relatively homogeneous academic profile in terms of cognitive abilities, prior academic performance, and familiarity with digital tools. As part of their curriculum, students had completed foundational courses in programming, information systems, and introductory database concepts, which ensured adequate cognitive readiness for the project-based tasks included in the study. Owing to their continuous engagement with digital learning environments and the institutional Learning Management System (LMS), participants demonstrated sufficient digital literacy to effectively interact with the technological tools and the Affective Pedagogical Tutor (APT). Furthermore, participants possessed the necessary competencies to complete open-ended tasks, respond meaningfully to survey questions, and provide reflective and critical feedback, ensuring the relevance and validity of the data collected.
In addition to the academic profile, demographic information was collected to contextualize the sample. Participants’ ages ranged from 19 to 23 years (M = 20.4, SD = 1.1). Of the 128 students who began the intervention, 39% were female (n = 50) and 61% were male (n = 78). All participants were enrolled full-time in the Computer Science program at a private Spanish university and shared similar cultural and linguistic backgrounds. A total of 115 students ultimately completed the course and responded to the questionnaire—57 in the Control Group (CG) and 58 in the Experimental Group (EG). Of these final participants, 45 were female (39.1%) and 70 were male (60.9%). Distributed by group, the Control Group included 22 female and 35 male students, whereas the Experimental Group included 23 female and 35 male students, maintaining proportions comparable to the initial sample. These demographic characteristics suggest that no major cohort differences were likely to confound the interpretation of emotional engagement outcomes. A summary of the demographic distribution is presented in Table 3.
This table summarizes the demographic characteristics of the initial cohort of 128 students and the final sample of 115 students who completed the study. It includes age distribution, average age, gender composition, and academic year, as well as the distribution of these variables across the Control Group (CG) and the Experimental Group (EG).
Table 2. Dependent and Independent Variables of the Study.
Main Teaching Session Involving Student-Centered Learning Activities
Independent Variables | A = Affective Feedback; C = Cognitive Feedback
Dependent Variables | E = Students’ emotional states (we focus on those emotional states that maintain students’ interest toward the activities and learning, as shown in Table 4b)
Table 3. Demographic Characteristics of Participants in the Initial Sample (n = 128), Final Sample (n = 115), and by Group (CG and EG).
Category | Initial Sample (n = 128) | Final Sample (n = 115) | CG (n = 57) | EG (n = 58)
Age: 19–20 years | 61 (47.7%) | 55 (47.8%) | 27 (47.4%) | 28 (48.3%)
Age: 21–23 years | 67 (52.3%) | 60 (52.2%) | 30 (52.6%) | 30 (51.7%)
Mean age (SD) | 20.4 (±1.1) | 20.4 (±1.1) | 20.4 (±1.1) | 20.5 (±1.1)
Gender: Female | 50 (39.0%) | 45 (39.1%) | 22 (38.6%) | 23 (39.7%)
Gender: Male | 78 (61.0%) | 70 (60.9%) | 35 (61.4%) | 35 (60.3%)
Academic year: 2nd-year undergraduate | 128 (100%) | 115 (100%) | 57 (100%) | 58 (100%)
Note. CG = Control Group; EG = Experimental Group.
Moreover, Figure 3 presents the demographic composition of the final sample (n = 115) in terms of gender and age. The top row displays the proportion of male and female students in the total sample and across the control (CG, n = 57) and experimental (EG, n = 58) groups. The bottom row shows the age distribution of participants, categorized into 19–20 years and 21–23 years, for the total sample and for each group. These visual summaries illustrate the demographic equivalence between CG and EG, supporting the comparability of groups for subsequent analyses.
Out of the initial 130 students assessed for eligibility, 2 students (1.54%) were excluded from the study because they withdrew from the course before the intervention began. The remaining 128 participants (98.46%) were randomly assigned to one of two groups: the control group (n = 64), which received feedback from a human teacher, and the experimental group (n = 64), which received affective and cognitive feedback from an Affective Pedagogical Tutor (APT).
To facilitate meaningful collaboration, both groups were further divided into teams of four members. Once the learning experience was completed, 7 students from the control group (10.94%) and 6 students from the experimental group (9.37%) did not respond to the final post-experience questionnaire. Consequently, the number of valid cases available for analysis was 57 students in the control group (89.06%) and 58 students in the experimental group (90.63%).
As shown in Figure 4, the CONSORT-style flow diagram illustrates the recruitment, random assignment, and retention of participants throughout the study.
The learning activity was structured around a three-week project in which students designed and implemented a relational database system for a simulated organization. Each team worked on a unique case study, such as managing inventory, scheduling events, or handling student records. The project required students to analyze requirements, model data using entity-relationship diagrams, normalize tables, and write SQL queries. Collaboration occurred in Moodle forums, where students exchanged ideas and posted updates. Emotional states were monitored throughout the activity using an emotion analysis model, which processed students’ written contributions and triggered feedback interventions.
To provide structured support throughout the learning activity, we categorized the feedback delivered by both tutors into cognitive and affective types, as detailed in Table 4a. These categories were designed to address instructional and emotional needs, enhancing students’ engagement and performance [17,19,20,28,29,34,37,39]. Cognitive feedback included strategies such as clarifying objectives, organizing content, and enriching knowledge, while affective feedback focused on fostering a positive emotional climate and supporting motivation and confidence. In parallel, Table 4b presents the emotional states monitored during the study, selected based on Pekrun’s achievement emotions framework [2] and the PAD model [47]. These included positive emotions like motivation, curiosity, and confidence, as well as negative emotions such as boredom, insecurity, and anxiety. Together, these tables provided the foundation for evaluating how feedback types influenced students’ emotional engagement.
Table 4a presents the types of cognitive and affective feedback provided during the nine teaching sessions. These feedback categories were designed to address both instructional and emotional needs of students, supporting their learning process and engagement. Cognitive feedback focuses on clarifying course objectives, organizing content, enriching knowledge, and facilitating group work and individualized learning. These strategies align with established models of instructional design and formative feedback [17,19,20], which emphasize the importance of elaborated and task-specific guidance to improve performance and self-regulation.
Affective feedback, on the other hand, aimed to foster a positive emotional climate, reduce anxiety, and enhance motivation and confidence. This included helping students deal with difficulties, supporting final evaluations, and maintaining interest in the activity. The role of affective feedback in learning has been widely recognized in the literature [28,29,34], particularly in the context of Intelligent Tutoring Systems (ITSs) and pedagogical agents. Studies have shown that emotionally responsive feedback can improve learners’ mood, reduce off-task behavior, and increase persistence in challenging tasks [37,39].
Both the human teacher and the APT delivered feedback independently, using their own expressions while adhering to the predefined categories. This ensured consistency in the type of support provided across groups, while allowing for variation in tone and delivery. The feedback types were mapped to specific pedagogical functions, enabling a structured comparison of their impact on students’ emotional states and learning outcomes.
Table 4. (a) The cognitive and affective feedback types provided in the teaching sessions 1–9. (b) Students’ emotional states.
The Cognitive and Affective Feedback Types Provided in the Teaching Sessions 1–9 (a)
Cognitive Feedback | Affective Feedback
Make the course objectives clearer and more understandable (3.1) | Create a satisfactory working climate in the group (3.9)
Provide students with appropriate and complementary information to increase their ability to complete their work (3.2) | Be subtle enough not to interfere and affect the duration of the course negatively (3.11)
Organize and present the contents in a more orderly manner (3.3) | Guide students to better communicate their individual results in the group (3.12)
Build on students’ existing knowledge based on their level and needs (3.4) | Help students complete the activity successfully (3.13)
Enrich the knowledge presented with novel elements (3.5) | Support students to deal with the final evaluation successfully (3.15)
Enable students to work more effectively in small groups (3.6) | Help students acquire skills and attitudes (3.16)
Foster individualized learning within working groups (3.7) | Enable students to better face their difficulties (3.17)
Provide more support to practical aspects (3.8) | Offer students possibilities to make the best decision in cases of doubt (3.18)
Support students’ learning with effective instructional procedures (3.10) | Trigger and maintain students’ interest in the activity and their learning (3.19)
Ensure the accomplishment of the learning objectives according to the criteria set by the course (3.14) | —
Students’ Emotional States (b)
Code | Emotional State
E.1 | Motivated
E.2 | Curious
E.3 | Confident
E.4 | Pleased
E.5 | Optimistic/Challenging (Stimulated)
E.6 | Insecure or Embarrassed
E.7 | Bored
E.8 | Anxious or Dismayed
E.9 | Outraged
Table 4b outlines the emotional states considered in this study. The selection of these states was based on Pekrun’s framework on achievement emotions [2] and the PAD (Pleasure-Arousal-Dominance) model [47], which provide a multidimensional understanding of affective experiences in learning. These models distinguish between activating and deactivating emotions, as well as positive and negative valence, allowing for a nuanced analysis of students’ emotional engagement.
The positive emotional states included motivation, curiosity, confidence, pleasure, and optimism/stimulation. Motivation and curiosity are widely recognized as key drivers of engagement and deep learning [48]. Curiosity, in particular, has been linked to increased exploration and persistence in problem-solving tasks [48]. Confidence and pleasure contribute to a sense of competence and satisfaction, reinforcing students’ willingness to participate and take risks in learning activities [49].
Optimism and stimulation were grouped together due to their shared role in promoting resilience and a challenge-oriented mindset. Optimism has been shown to enhance persistence and reduce the perception of threat in demanding situations [50,51]. These emotional states are especially relevant in project-based learning environments, such as the one used in this study, where students face complex tasks over extended periods.
Negative emotional states included insecurity/embarrassment, boredom, anxiety/dismay, and outrage. These emotions can hinder learning by reducing attention, increasing avoidance behaviors, and impairing self-regulation [2]. The emotion analysis model used in this study aimed to detect these states in real time and trigger appropriate feedback interventions to mitigate their impact.
By monitoring and responding to these emotional states, the APT and the human teacher were able to provide targeted support that aligned with students’ affective needs. This approach reflects current research on affect-aware learning environments and the importance of emotional regulation in educational success [34,37,39].

2.2.2. Instrument Design and Data Collection Procedure

To assess the impact of cognitive and affective feedback on students’ emotional engagement and perceived learning outcomes, a structured questionnaire was administered immediately after the completion of the learning activity. This timing ensured that students could reflect on their experience holistically, considering both the instructional and emotional dimensions of the feedback received throughout the project.
The questionnaire was designed to capture students’ perceptions in a comprehensive and focused manner. It was divided into two main sections, each aligned with the core variables of the study. The first section consisted of 19 items, each corresponding to one of the cognitive or affective feedback types described in Table 4a. These items aimed to evaluate how each feedback strategy, whether delivered by the human teacher or the Affective Pedagogical Tutor (APT), contributed to students’ understanding, motivation, and ability to complete the learning tasks.
The second section of the questionnaire focused on students’ emotional responses to the feedback received. It included 9 items, each targeting a specific emotional state identified in Table 4b, such as motivation, curiosity, confidence, pleasure, and anxiety. These emotional states were selected based on established theoretical frameworks, including Pekrun’s achievement emotions and the PAD (Pleasure-Arousal-Dominance) model, which provide a multidimensional understanding of affective experiences in educational contexts.
The purpose of the first section was to analyze the balance and interaction between cognitive and affective feedback elements in fostering a student-centered learning environment. By comparing the mean scores of feedback types across the control group (human teacher) and the experimental group (APT), the study aimed to determine which feedback strategies were perceived as more effective in supporting learning and engagement.
The second section served to measure the dependent variable—students’ emotional states—by examining whether specific feedback types elicited emotional responses such as feeling motivated, curious, confident, or anxious. The questions were formulated in a clear and direct manner, ensuring that each item was explicitly linked to the emotional state under investigation. This approach allowed for a precise evaluation of how feedback influenced students’ emotional engagement during the learning process.
All items in the questionnaire were rated using a five-point Likert-type scale, ranging from 1 (“Almost never”) to 5 (“Almost always”). This scale enabled the collection of quantitative data suitable for statistical analysis, including comparisons between groups and assessments of internal consistency. The simplicity and clarity of the scale also facilitated students’ responses, minimizing ambiguity and ensuring reliable data collection.
To complement the quantitative data, each section of the questionnaire also included an open-ended question to gather qualitative feedback from students.
In Part A, students were asked: “In what ways did the APT’s feedback help you better understand the task, organize your work, or feel more confident in completing the activity? Please describe specific moments or examples.” And in Part B, students were asked:
“How did the APT’s feedback influence your emotional experience during the activity (e.g., motivation, anxiety, boredom)? Feel free to share how it affected your engagement or helped you overcome challenges.”
The complete version of the questionnaire, including all items related to both the human teacher and the APT, is provided in the Appendix A of this manuscript. This instrument played a central role in capturing students’ subjective experiences and evaluating the effectiveness of the feedback strategies implemented during the study. To enrich the data collection, each section of the questionnaire also included an open-ended question, previously mentioned, allowing students to express their qualitative opinions about the experience and provide deeper insights into how the APT influenced their learning and emotional engagement.

2.2.3. Statistical Assumptions and Research Analyses

Analytical Procedures
To strengthen the transparency and reproducibility of the study, the analytical workflow is clarified here. The analyses conducted in this section followed a sequential structure: (a) verification of statistical assumptions through distributional diagnostics and data transformation when required; (b) comparison of post-intervention differences between the control and experimental groups using independent-samples t-tests with associated effect sizes; (c) examination of relationships among cognitive feedback, affective feedback, and emotional-state variables through Pearson correlations; and (d) identification of latent emotional dimensions using Principal Component Analysis (PCA), followed by hierarchical clustering to derive engagement profiles. This integrated description contextualizes the subsequent subsections and clarifies how each analytic technique contributes to addressing the research questions.
Before conducting inferential analyses, we examined the statistical assumptions required for the use of parametric tests. Given that Likert-type data may exhibit mild asymmetries in distribution, and that the analyses included t-tests, correlation analyses, PCA and cluster analysis, the distributional characteristics of the dataset were assessed comprehensively.
Normality assessment
Normality was assessed using the Kolmogorov–Smirnov (K–S) test, a procedure commonly recommended when group sizes exceed 50 participants. Given that both the control and experimental groups met this condition, the K–S test was applied to all items to verify whether their empirical distributions significantly deviated from a normal reference distribution. This test evaluates the degree of deviation from normality through its associated p-value: values greater than 0.05 indicate that the null hypothesis of normality cannot be rejected, whereas p-values below this threshold suggest statistically significant departures from a normal distribution.
In this study, the K–S test was applied to all items related to cognitive feedback, affective feedback, and students’ emotional states across both the control group (CG) and the experimental group (EG). The results showed that most items exhibited p-values above the 0.05 threshold, indicating approximate normality. Only two items showed significant deviations: item 3.11 in the CG and item E.4 in the EG. These findings confirm that, overall, the data satisfy the normality assumption required for parametric analyses. As detailed in Appendix B, the Box–Cox transformation was applied to stabilize variance and correct mild skewness, and the transformed variables showed improved adherence to normality, as reflected by the predominance of non-significant p-values reported in Table 5.
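As a sketch of this diagnostic step, the code below runs a K–S test on each item before and after a Box–Cox transformation, assuming a Python/SciPy workflow and a pandas DataFrame with one column per questionnaire item. The column names and the randomly generated responses are illustrative only; the paper does not specify the software used, and the real analysis covered 19 feedback items and 9 emotional-state items per group.

```python
import numpy as np
import pandas as pd
from scipy import stats

# Illustrative Likert-type responses (1-5) for two hypothetical items.
rng = np.random.default_rng(42)
df = pd.DataFrame({
    "item_3_1": rng.integers(1, 6, size=57),
    "item_E_1": rng.integers(1, 6, size=57),
}).astype(float)

results = {}
for col in df.columns:
    x = df[col].to_numpy()
    # K-S test against a normal distribution with the item's own mean and SD.
    _, p_raw = stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1)))
    # Box-Cox requires strictly positive data, which holds for 1-5 Likert scores.
    transformed, lam = stats.boxcox(x)
    _, p_bc = stats.kstest(transformed, "norm",
                           args=(transformed.mean(), transformed.std(ddof=1)))
    results[col] = {"p_raw": p_raw, "p_boxcox": p_bc, "lambda": lam}

# p-values above 0.05 indicate no significant departure from normality.
print(pd.DataFrame(results).T.round(3))
```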
Skewness and kurtosis
To complement the K–S test, we computed skewness and kurtosis values for each item in both groups (see Table 6). Skewness reflects the asymmetry of a distribution, while kurtosis describes the weight of its tails relative to a normal curve; we report excess kurtosis values, centered at 0 for a normal distribution. Following established criteria, values between −1 and +1 are considered excellent indicators of normality [52], while values within the broader range of −2 to +2 remain acceptable for most parametric analyses [53]. In larger samples and multivariate contexts, even wider thresholds—absolute skewness < 2 and kurtosis < 7—are considered valid [54].
In our dataset, nearly all variables fell within these recommended ranges. For example, cognitive feedback items in the CG showed skewness ranging from −0.53 to 0.19 and kurtosis from −1.14 to −0.86, while emotional items ranged from −0.41 to −0.06 in skewness and −0.99 to −0.73 in kurtosis. Similar patterns were observed in the EG. Combined with the results of the K–S test, these findings reinforce the assumption of approximate univariate and multivariate normality, justify the use of parametric statistical techniques, and support the application of the Box–Cox transformation to refine the distributional properties where necessary.
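The skewness and kurtosis screening described above could be implemented along the lines of the following sketch, which checks each item against the ±1 and ±2 thresholds cited in the text. The function name and the synthetic item scores are illustrative assumptions, not part of the study’s materials.

```python
import numpy as np
from scipy import stats

def normality_screen(x: np.ndarray) -> dict:
    """Skewness and excess kurtosis with the +/-1 ('excellent') and +/-2 ('acceptable') checks."""
    sk = float(stats.skew(x))
    ku = float(stats.kurtosis(x))  # Fisher definition: excess kurtosis, 0 for a normal curve
    return {
        "skewness": round(sk, 2),
        "excess_kurtosis": round(ku, 2),
        "excellent": abs(sk) < 1 and abs(ku) < 1,
        "acceptable": abs(sk) < 2 and abs(ku) < 2,
    }

# Illustrative item scores (not the study's data).
rng = np.random.default_rng(0)
item = rng.integers(1, 6, size=58).astype(float)
print(normality_screen(item))
```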
Reliability analysis
To ensure the reliability of the questionnaires and the consistency of the collected data, Cronbach’s alpha coefficients were calculated for both the control and experimental groups. The results indicated a high level of internal consistency, with alpha values of 0.978 for the control group and 0.972 for the experimental group. These values exceed the commonly accepted threshold of 0.70 recommended in the literature [55], confirming the reliability of the instruments used to measure cognitive feedback, affective feedback, and students’ emotional engagement.
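For reference, Cronbach’s alpha can be computed directly from the respondents-by-items score matrix, as in the short sketch below. The synthetic data will of course not reproduce the reported coefficients of 0.978 and 0.972; the matrix shape and values are placeholders.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a respondents x items matrix of scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Illustrative 57 respondents x 19 items matrix of Likert scores (not the study data).
rng = np.random.default_rng(1)
scores = pd.DataFrame(rng.integers(1, 6, size=(57, 19)).astype(float))
print(f"alpha = {cronbach_alpha(scores):.3f}")
```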
Descriptive and inferential analyses
Once reliability and normality were verified, a range of statistical analyses was conducted to examine the effects of cognitive and affective feedback on students’ emotional engagement: descriptive statistics (means, standard deviations, and observed ranges for all items), correlation analysis, t-tests, a comparative analysis of effect sizes using Cohen’s d and Hedges’ g, and multivariate analyses such as Principal Component Analysis (PCA) and cluster analysis. Although these analyses allow for a rigorous comparison between groups, it is important to acknowledge that the post-test-only design—without pre-intervention emotional measures—limits the strength of causal claims derived from these comparisons, and the results should therefore be interpreted as indicative rather than strictly causal. These methods were employed to comprehensively assess and compare the emotional responses of students in the control group (who received feedback from the human teacher) with those in the experimental group (who interacted with the Affective Pedagogical Tutor, APT). This combination of analyses supported a robust interpretation of the magnitude, direction, and underlying structure of the observed emotional responses.
The rationale underlying the selection of each statistical analysis deserves explicit mention. Parametric tests were chosen in line with the structure of the research questions and the continuous nature of the variables after transformation: independent-samples t-tests addressed RQ1 by comparing post-intervention emotional responses between groups, while correlation analyses examined the associations between cognitive/affective feedback and students’ emotional states.
Principal Component Analysis (PCA) and cluster analysis played a complementary role in addressing RQ2. Rather than serving solely as exploratory tools, they were used as theoretically grounded procedures to identify latent emotional patterns and emergent student profiles: PCA reduced item-level complexity into meaningful underlying components, and hierarchical clustering grouped students according to similarities in these component scores, providing a coherent structural interpretation of emotional engagement within the dataset.
Finally, because the study followed a post-test-only design with no pre-intervention emotional measures, the results are interpreted as indicative rather than strictly causal.
All statistical procedures were selected based on the structure of the research questions (RQ1 and RQ2) and the nature of the variables involved. For RQ1, which examined differences between the control and experimental groups, independent-samples t-tests were used because the variables were continuous (post-transformation), normally distributed, and independent across groups, and effect sizes were additionally computed using Cohen’s d and Hedges’ g to quantify the magnitude of those differences. These measures were included because, in educational and psychological research, statistical significance alone may not reflect the practical importance of an effect; Cohen’s d [56] provides an interpretable standardized measure of group separation, while Hedges’ g [57] offers a bias-corrected estimate particularly suitable for groups of slightly different sizes, thereby strengthening the robustness and interpretability of the inferential results.
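A minimal sketch of this RQ1 comparison, assuming a SciPy-based workflow, is shown below: an independent-samples t-test per item followed by Cohen’s d (pooled-SD standardization) and Hedges’ g (small-sample bias correction). The group data are simulated placeholders and do not correspond to the study’s scores.

```python
import numpy as np
from scipy import stats

def cohens_d(a: np.ndarray, b: np.ndarray) -> float:
    """Standardized mean difference using the pooled standard deviation."""
    n1, n2 = len(a), len(b)
    pooled_sd = np.sqrt(((n1 - 1) * a.var(ddof=1) + (n2 - 1) * b.var(ddof=1)) / (n1 + n2 - 2))
    return (a.mean() - b.mean()) / pooled_sd

def hedges_g(a: np.ndarray, b: np.ndarray) -> float:
    """Cohen's d with the small-sample correction factor J = 1 - 3 / (4*df - 1)."""
    df = len(a) + len(b) - 2
    return cohens_d(a, b) * (1 - 3 / (4 * df - 1))

# Simulated item scores for the experimental (n = 58) and control (n = 57) groups.
rng = np.random.default_rng(7)
eg = rng.normal(4.1, 0.7, 58)
cg = rng.normal(3.4, 0.8, 57)

t_stat, p_val = stats.ttest_ind(eg, cg)
print(f"t = {t_stat:.2f}, p = {p_val:.4f}, "
      f"d = {cohens_d(eg, cg):.2f}, g = {hedges_g(eg, cg):.2f}")
```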
For RQ2, which explored relationships among emotional, cognitive, and affective constructs and latent emotional patterns, correlation analyses, Principal Component Analysis (PCA) and cluster analysis were conducted. Emotional-state variables were treated as dependent measures, and the type of feedback (cognitive or affective; human or APT system) served as the independent factor. PCA was used to identify latent emotional dimensions, while clustering facilitated the detection of patterns across emotional profiles.
To ensure methodological transparency in the derivation of engagement profiles, we provide additional details regarding the multivariate procedures. Principal Component Analysis (PCA) was conducted using the Kaiser criterion (eigenvalues > 1) and inspection of the scree plot to determine the number of components to retain. A varimax rotation was applied to improve interpretability and clarify the loading structure of emotional-state variables. Cluster analysis was subsequently performed using hierarchical clustering with Ward’s method and squared Euclidean distance, which is recommended for identifying compact and well-separated clusters. The resulting cluster solution yielded two stable profiles, with cluster sizes reported in the Results section. These methodological decisions ensure a coherent and theoretically grounded identification of latent emotional engagement profiles.
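The sketch below illustrates one way such a pipeline could be assembled in Python: PCA with the Kaiser criterion, a compact generic varimax rotation (scikit-learn does not provide one), and Ward hierarchical clustering into two profiles. The emotional-state matrix is randomly generated, so the retained components and cluster sizes are illustrative rather than the study’s results.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

def varimax(loadings: np.ndarray, max_iter: int = 100, tol: float = 1e-6) -> np.ndarray:
    """Minimal varimax rotation of a p x k loading matrix."""
    p, k = loadings.shape
    rotation = np.eye(k)
    variance = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        u, s, vt = np.linalg.svd(
            loadings.T @ (rotated**3 - (1.0 / p) * rotated @ np.diag(np.diag(rotated.T @ rotated)))
        )
        rotation = u @ vt
        if s.sum() < variance * (1 + tol):
            break
        variance = s.sum()
    return loadings @ rotation

# Illustrative respondents x emotional-state matrix (9 Likert items), not the study data.
rng = np.random.default_rng(3)
X = StandardScaler().fit_transform(rng.integers(1, 6, size=(115, 9)).astype(float))

pca = PCA().fit(X)
scores = pca.transform(X)
n_keep = max(int((pca.explained_variance_ > 1).sum()), 1)  # Kaiser criterion (eigenvalue > 1)
rotated_loadings = varimax(pca.components_[:n_keep].T)     # loadings used for interpretation

# Ward hierarchical clustering of the retained component scores into two profiles.
Z = linkage(scores[:, :n_keep], method="ward")
profiles = fcluster(Z, t=2, criterion="maxclust")
print("components retained:", n_keep, "| cluster sizes:", np.bincount(profiles)[1:])
```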
Multiple comparisons and significance criteria
It is important to note that conducting multiple independent t-tests across the 19 feedback items and the 9 emotional-state items increases the risk of Type I error inflation. Although we did not apply a formal correction procedure (e.g., Bonferroni or Holm adjustments), the consistently large effect sizes and the coherent patterns observed across cognitive, affective, and emotional variables partially mitigate—but do not fully eliminate—this risk. This consideration should therefore be taken into account when interpreting the inferential results.
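For reference only, the sketch below shows how a Holm adjustment of the kind mentioned above could be applied to the full set of 28 item-level p-values; this correction was not part of the analyses actually reported, and the p-values here are random placeholders.

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

# Placeholder p-values for the 19 feedback items and 9 emotional-state items.
rng = np.random.default_rng(11)
raw_p = rng.uniform(0.0001, 0.08, size=28)

reject, p_holm, _, _ = multipletests(raw_p, alpha=0.05, method="holm")
print(f"significant before correction: {(raw_p < 0.05).sum()}, after Holm: {reject.sum()}")
```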
Statistical significance was set at α = 0.05. In accordance with APA 7 guidelines, effect sizes were reported for all inferential analyses. Cohen’s d was included for independent-samples t-tests, Pearson’s r for correlation analyses, and explained variance percentages were reported for components extracted in PCA.
Justification of parametric approach
Finally, although Likert-type variables can depart from strict interval-level assumptions, the combined diagnostics (K–S tests, skewness and kurtosis thresholds, and the Box–Cox–improved distributions reported in Appendix B) indicated that the data met the requirements for parametric testing. Given the sample sizes in both groups (>50), t-tests are generally robust to moderate deviations from normality, and the observed distributional properties further support their use. For this reason, and consistent with established methodological recommendations, parametric analyses were retained as the primary inferential approach, with results demonstrating stability across the examined distributional conditions.
Together, these methodological decisions—supported by reliability evidence, normality verification, validated assumptions, and appropriate parametric analyses—ensure a rigorous and transparent statistical foundation for evaluating the impact of cognitive and affective feedback on students’ emotional engagement. The Results section presents the outcomes in detail.

3. Results

This section presents the results related to the study’s two research questions. For RQ1, independent samples t-tests revealed that students in the experimental group who received feedback from the APT reported significantly higher scores across most cognitive and affective feedback items, as well as in positive emotional states such as motivation, curiosity, confidence, pleasure, and stimulation. Effect sizes were moderate to large, indicating a notable association between APT feedback and emotional engagement. For RQ2, bivariate analyses using Pearson’s correlations demonstrated strong associations between specific feedback types and emotional states, while multivariate analyses (PCA and cluster analysis) identified two dominant engagement profiles—one characterized by high emotional engagement and positive feedback, and another by lower engagement and negative feedback. These results confirm that affective feedback is a key driver of emotional engagement and that cognitive feedback contributes meaningfully to instructional clarity and emotional regulation.

3.1. The Results Regarding RQ1: Did Both Affective and Cognitive APT Feedback Support Students’ Emotional Engagement Significantly More than the Human Teacher’s Feedback?

The results regarding RQ1 suggest that both cognitive and affective feedback provided by the APT were associated with higher emotional engagement compared to feedback from a human teacher.
As shown in Table 7a, cognitive feedback items such as 3.2 (providing complementary information), 3.4 (building on students’ existing knowledge), and 3.10 (supporting learning with effective instructional procedures) yielded statistically significant differences with large effect sizes. One student commented, “The APT gave me just the right information at the right time—it felt tailored to my needs and helped me complete the task with confidence.” Another noted, “It built on what I already knew and made me feel capable of solving the problem.” A third student added, “The way the APT structured the content helped me stay focused and avoid confusion—it was much clearer than usual.”
In Table 7b, affective feedback items like 3.13 (helping students complete the activity successfully), 3.15 (supporting final evaluation), and 3.19 (maintaining interest in the activity) also showed significant differences. A student shared, “Before the final presentation, the APT’s messages helped me calm down and focus—I felt supported, not judged.” Another added, “The emotional tone of the feedback made me feel like someone was genuinely interested in my progress.” A third participant stated, “The APT kept me engaged even when the task got repetitive—it felt like it understood when I was losing motivation.” One student emphasized, “I usually get nervous before evaluations, but the APT’s feedback made me feel prepared and reassured.”
Table 7c reveals that positive emotional states such as motivation (E1), confidence (E3), and pleasure (E4) were significantly higher in the experimental group. One participant reflected, “I stayed motivated throughout the project because the APT kept reminding me of my progress—it was like having a coach.” Regarding negative emotional states, although differences were not statistically significant, students reported meaningful emotional support. For example, one student experiencing anxiety (E8) noted, “When I felt overwhelmed, the APT’s messages helped me pause and reframe the situation—it didn’t solve the problem, but it made me feel less alone.” Another student who reported insecurity (E6) shared, “I usually hesitate to ask questions, but the APT’s tone made me feel safe to express doubts without fear of being wrong.” A participant who struggled with boredom (E7) stated, “The APT kept the activity dynamic—it felt more interactive and less monotonous than usual.” These qualitative insights reinforce the quantitative findings, indicating that students perceived the APT feedback not only as improving instructional clarity and emotional support but also as helping them cope with negative emotional states, fostering a more balanced and engaging learning experience.
To enhance the clarity of the results and facilitate a more straightforward interpretation, the main body of the article presents Table 7a–c containing the essential statistics for each item: means and standard deviations for the CG and EG groups, p-values from the independent-samples t-tests, and effect sizes (Cohen’s d and Hedges’ g). In addition, the corresponding graphical representations are included to visually compare the differences between groups.
To avoid overloading the Results section, Appendix C provides the complete set of data used in the analysis, organized in full detail.
To complement the statistical results reported in Table 7a–c, Figure 5 provides an integrated visual representation of the differences between the control group (CG) and the experimental group (EG). The figure summarizes, for each item, the mean scores of both groups together with the corresponding p-values and effect sizes (Cohen’s d and Hedges’ g), allowing a direct visual comparison of statistical significance and magnitude of the effects. This combined graphical format facilitates a clearer interpretation of the patterns observed across cognitive feedback, affective feedback, and emotional-state measures.
As illustrated in Figure 6, the comparative analysis of effect sizes using Cohen’s d and Hedges’ g across cognitive feedback, affective feedback, and emotional states reveals consistently large effects in favor of the Affective Pedagogical Tutor (APT). Cohen’s d values ranged from 0.76 to 0.92, while Hedges’ g values were slightly lower due to the sample-size correction, ranging from 0.74 to 0.90. These values indicate a substantial association between APT feedback and students’ emotional engagement, with the strongest effects observed in emotional states, followed by cognitive feedback and affective feedback.
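The relationship between the two effect-size metrics can be made concrete with a short computation. The sketch below uses the standard pooled-SD formulation of Cohen’s d and the Hedges–Olkin small-sample correction; applied to the summary statistics for item E4 in Table A1c, it matches the reported values up to rounding. It is illustrative only and not the authors’ analysis code.

```python
# Minimal sketch of how Cohen's d and Hedges' g relate, using the pooled-SD
# formulation for two independent groups. The numbers below are the summary
# statistics for item E4 in Table A1c, used purely for illustration.
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d with pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean2 - mean1) / pooled_sd

def hedges_g(d, n1, n2):
    """Small-sample correction applied to Cohen's d (Hedges & Olkin)."""
    correction = 1 - 3 / (4 * (n1 + n2) - 9)
    return d * correction

d = cohens_d(2.84, 1.344, 57, 4.27, 0.550, 58)  # CG vs. EG, item E4
g = hedges_g(d, 57, 58)
print(f"d = {d:.3f}, g = {g:.3f}")  # g comes out slightly smaller than d
```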
For cognitive feedback (d ≈ 0.85; g ≈ 0.83), students highlighted improvements in clarity and organization: “The APT helped me understand the objectives and organize my work better.” Another noted, “It structured the content so clearly that I avoided confusion and saved time.” A third added, “The explanations felt personalized, which made me more confident in completing the tasks.”
For affective feedback (d ≈ 0.76; g ≈ 0.74), comments reflected emotional support: “I felt accompanied and less anxious before the evaluation thanks to the APT’s messages.” Another student shared, “The encouraging tone kept me engaged even when the project became challenging.” A third remarked, “It felt like someone cared about my progress, which motivated me to keep going.”
The largest effects were found in emotional states (d ≈ 0.92; g ≈ 0.90), with students reporting increased motivation and confidence: “I stayed motivated throughout the project because the APT reminded me of my progress.” Another participant said, “I felt more optimistic and less frustrated when facing complex tasks.” Even for negative emotions, students mentioned mitigation: “When I felt insecure, the APT gave me confidence to ask questions without fear.” One more added, “It helped me manage boredom by making the activity feel more interactive.”
The alignment between both metrics reinforces the robustness of these results and highlights the pedagogical relevance of integrating affective and cognitive strategies to foster emotional engagement and improve learning outcomes.
In sum, the results for RQ1 provide evidence suggesting that the Affective Pedagogical Tutor (APT) feedback was associated with greater emotional engagement compared to feedback from a human teacher. Large effect sizes across cognitive feedback, affective feedback, and emotional states, as illustrated in Figure 6, indicate a notable association between APT feedback and emotional engagement. Cognitive feedback improved clarity, organization, and instructional support, while affective feedback fostered motivation and emotional regulation. The strongest effects were observed in positive emotional states such as motivation, confidence, and pleasure, with qualitative comments highlighting increased persistence and reduced anxiety. These results underscore the pedagogical relevance of integrating affective and cognitive strategies within intelligent tutoring systems to create emotionally supportive learning environments that promote engagement and optimize learning outcomes.

3.2. The Results Regarding RQ2: Is There a Significant Relationship Between All APT Cognitive and Affective Feedback Types and Students’ Emotional States?

To answer RQ2, we analyzed the associations between students’ emotional states after the activity and, respectively, the cognitive and the affective feedback, within the experimental group that interacted with the APT. The analysis was carried out from both a bivariate and a multivariate perspective. From the bivariate point of view, Pearson’s correlation coefficients were calculated between the variables. This analysis was then extended with a multivariate approach by performing a Principal Component Analysis (PCA) on both the cognitive and affective feedback items. Subsequently, a cluster analysis was conducted to group items with similar correlation profiles, providing an overall view of the relationships among the full set of variables.

3.2.1. Bivariate Analysis

Table 8a shows a strong positive relationship between most students’ emotional states (engagement) and APT cognitive feedback. In particular, the emotional state “motivated” (E1) was significantly correlated to cognitive feedback items 3.1, 3.2, 3.3, 3.5, 3.10 and 3.14. The emotional state “curious” (E2) was significantly correlated to cognitive feedback items 3.1, 3.2, 3.3, 3.6 and 3.10. The emotional state “confident” (E3) was significantly correlated to cognitive feedback item 3.10. The emotional state “pleased” (E4) was significantly correlated to cognitive feedback items 3.1, 3.3, 3.4, 3.5, 3.10 and 3.14. The emotional state “optimistic/challenging/stimulated” (E5) was significantly correlated to cognitive feedback items 3.4, 3.8 and 3.10. The emotional state “insecure or embarrassed” (E6) was not significantly correlated to any cognitive feedback item. The emotional state “bored” (E7) was significantly correlated to cognitive feedback items 3.1, 3.2, 3.3, 3.4, 3.5 and 3.10. The emotional state “anxious or dismayed” (E8) was not significantly correlated to any cognitive feedback item. Finally, the emotional state “outraged” (E9) was significantly correlated to cognitive feedback items 3.1 and 3.4.
In addition, in Table 8b, we observe a statistically significant positive correlation between almost all students’ emotional states (engagement) and APT affective feedback. In particular, the emotional state “motivated” (E1) was significantly correlated to affective feedback items 3.9, 3.12, 3.13, 3.15, 3.17 and 3.19. The emotional state “curious” (E2) was significantly correlated to affective feedback items 3.12, 3.13 and 3.17. The emotional state “confident” (E3) was significantly correlated to affective feedback item 3.15. The emotional state “pleased” (E4) was significantly correlated to affective feedback items 3.9, 3.13, 3.15, 3.17 and 3.19. The emotional state “optimistic/challenging/stimulated” (E5) was significantly correlated to affective feedback items 3.15, 3.17, 3.18 and 3.19. The emotional state “insecure or embarrassed” (E6) was significantly correlated to affective feedback items 3.9 and 3.11. The emotional state “bored” (E7) was significantly correlated to affective feedback items 3.12, 3.13, 3.15, 3.17 and 3.18. The emotional state “anxious or dismayed” (E8) was significantly correlated to affective feedback items 3.16 and 3.18. Finally, the emotional state “outraged” (E9) was significantly correlated to affective feedback items 3.13, 3.15 and 3.19.
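The item-by-item correlations reported above can be obtained with a routine of the following form. This is a minimal sketch; the DataFrame `df_eg` and the column names are hypothetical placeholders for the experimental-group responses, not the authors’ actual variable names.

```python
# Sketch of the bivariate step: Pearson correlations (with p-values) between
# each feedback item and each emotional-state item in the experimental group.
import pandas as pd
from scipy import stats

def feedback_emotion_correlations(df, feedback_cols, emotion_cols):
    """Return two DataFrames: Pearson r and the associated p-values."""
    r = pd.DataFrame(index=feedback_cols, columns=emotion_cols, dtype=float)
    p = r.copy()
    for f in feedback_cols:
        for e in emotion_cols:
            r.loc[f, e], p.loc[f, e] = stats.pearsonr(df[f], df[e])
    return r, p

# Example usage (assuming df_eg holds the experimental-group responses):
# r_cog, p_cog = feedback_emotion_correlations(
#     df_eg, ["item_3_1", "item_3_2", "item_3_10"], ["E1", "E2", "E4"])
# significant = p_cog < 0.05   # mask of item-emotion pairs with p < 0.05
```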
To enhance the interpretability of the correlation patterns, Figure 7 provides a combined visualization of the relationships between cognitive and affective feedback items and students’ emotional states. The heatmaps and correlation matrices reveal two clear clusters of associations: (a) strong positive correlations between supportive, clarity-oriented feedback and positive emotions (motivation, curiosity, pleasure, optimism), and (b) weaker or negative correlations involving negative emotional states such as insecurity, anxiety, and boredom. These patterns provide the structural basis for the multidimensional organization later identified through PCA and clustering in Section 3.2.2.
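A visualization of this kind can be produced with standard plotting tools. The sketch below is not the script used to generate Figure 7; it simply shows one way to render a signed correlation matrix with a diverging colormap, reusing the hypothetical `r_cog` DataFrame from the previous sketch.

```python
# Illustrative sketch (not the authors' plotting script) of a signed
# correlation heatmap in the spirit of Figure 7, using matplotlib only.
import matplotlib.pyplot as plt

def plot_correlation_heatmap(r_matrix, row_labels, col_labels, title):
    """Draw a correlation heatmap with a diverging colormap (-1 to +1)."""
    fig, ax = plt.subplots(figsize=(8, 6))
    im = ax.imshow(r_matrix, cmap="RdBu_r", vmin=-1.0, vmax=1.0)
    ax.set_xticks(range(len(col_labels)))
    ax.set_xticklabels(col_labels, rotation=45, ha="right")
    ax.set_yticks(range(len(row_labels)))
    ax.set_yticklabels(row_labels)
    ax.set_title(title)
    fig.colorbar(im, ax=ax, label="Pearson r")
    fig.tight_layout()
    return fig

# Example (reusing the hypothetical r_cog DataFrame from the sketch above):
# plot_correlation_heatmap(r_cog.to_numpy(), list(r_cog.index),
#                          list(r_cog.columns),
#                          "Cognitive feedback items vs. emotional states")
```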
In conclusion, the results reveal a statistically significant positive correlation between most students’ emotional states (engagement) and both cognitive and affective feedback provided by the APT system. Specifically, the emotional states of “motivated” (E1), “curious” (E2), “confident” (E3), “pleased” (E4), “optimistic/challenging/stimulated” (E5), “bored” (E7), and “outraged” (E9) showed significant associations with various cognitive and affective feedback items. Notably, E1 and E4 were consistently correlated with a wide range of feedback items across both dimensions, suggesting a strong link between these positive emotional states and the reception of feedback. In contrast, “insecure or embarrassed” (E6) and “anxious or dismayed” (E8) showed limited or no significant correlations, indicating a weaker connection to the feedback provided. These findings highlight the nuanced interplay between students’ emotional engagement and the type of feedback they receive, which is further discussed in the following section.

3.2.2. Multivariate Analysis

While the bivariate analysis provided valuable insights into the individual relationships between emotional states and specific feedback items, a multivariate approach allows for a more comprehensive understanding of the underlying structure of these associations. To this end, we conducted a Principal Component Analysis (PCA) and cluster analysis to identify latent dimensions and engagement profiles that may not be evident through pairwise correlations alone.
This analysis complements the previous findings by providing a multidimensional perspective that uncovers latent structures beyond pairwise correlations. Principal Component Analysis (PCA) revealed that the first two components account for most of the variance in both datasets (over 60% for cognitive feedback and approximately 58% for affective feedback), while the remaining components contribute little, indicating that the structure is concentrated in two dominant dimensions. The first component primarily captures positive engagement patterns, with high loadings for emotions such as motivated, pleased, and optimistic, alongside feedback items related to support and goal achievement (e.g., items 3.1, 3.13, 3.15, and 3.19). The second component reflects a secondary dimension associated with curiosity and confidence, linked to feedback strategies emphasizing organization and communication (e.g., items 3.3 and 3.12), as shown in Figure 8. Conversely, negative emotional states such as insecurity, anxiety, and boredom cluster at the opposite end of these dimensions, suggesting weaker associations with supportive feedback. These findings indicate that emotional engagement and feedback strategies are not randomly distributed but structured around latent factors that differentiate high-engagement profiles from low-engagement ones. In summary, the correlation structure is organized around two key dimensions, namely Dimension 1 (high positive emotional involvement combined with supportive feedback) and Dimension 2 (organization and communication versus less activating emotions), as shown in Figure 9, providing a foundation for the cluster analysis discussed below.
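A minimal sketch of this PCA step is shown below. It assumes standardized item responses in a hypothetical DataFrame and is intended only to show how the reported variance shares and loadings can be obtained in principle, not to replicate the authors’ exact pipeline.

```python
# Sketch of the PCA step described above, using scikit-learn. Variable names
# are hypothetical; the input is a DataFrame of item responses.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def first_two_components(df_items):
    """Fit PCA on standardized items; return variance shares, loadings, scores."""
    X = StandardScaler().fit_transform(df_items)
    pca = PCA(n_components=2).fit(X)
    explained = pca.explained_variance_ratio_            # per-component share
    loadings = pd.DataFrame(pca.components_.T,
                            index=df_items.columns,
                            columns=["PC1", "PC2"])
    return explained, loadings, pca.transform(X)

# Example usage (df_eg and cognitive_items are hypothetical names):
# explained, loadings, scores = first_two_components(df_eg[cognitive_items])
# print(explained.sum())                     # cumulative share of PC1 + PC2
# print(loadings.sort_values("PC1", ascending=False).head())
```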
The combined correlation data for cognitive and affective feedback items were analyzed using PCA and clustering techniques. Figure 10 illustrates the PCA projection, which reveals two principal components capturing the main variance in the dataset and visualizes the distribution of feedback items according to their emotional engagement profiles. Items were grouped using K-means and hierarchical clustering, both producing consistent results with a silhouette score of 0.498, indicating moderate cluster separation and good internal cohesion.
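The clustering step can be sketched as follows. Here each row of the hypothetical `profile_matrix` is one feedback item’s vector of correlations with the emotional states E1–E9; the function compares K-means and agglomerative solutions and reports a silhouette score, mirroring the procedure described above without claiming to reproduce the exact value of 0.498.

```python
# Sketch of the item-clustering step: K-means and agglomerative clustering
# applied to the items' correlation profiles, with a silhouette score to
# gauge separation. Input and variable names are hypothetical.
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.metrics import silhouette_score

def cluster_items(profile_matrix, k=2, seed=0):
    """Cluster rows (items) of a correlation-profile matrix into k groups."""
    km_labels = KMeans(n_clusters=k, n_init=10,
                       random_state=seed).fit_predict(profile_matrix)
    hc_labels = AgglomerativeClustering(n_clusters=k).fit_predict(profile_matrix)
    return {
        "kmeans": km_labels,
        "hierarchical": hc_labels,
        "silhouette_kmeans": silhouette_score(profile_matrix, km_labels),
        # Naive agreement check; cluster labels may be permuted between methods.
        "raw_label_agreement": float(np.mean(km_labels == hc_labels)),
    }

# Example usage: each row of profile_matrix = one item's correlations with E1-E9.
# results = cluster_items(r_all.to_numpy(), k=2)
```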
Cluster 1 primarily includes cognitive feedback items such as 3.1, 3.2, 3.3, 3.4, and 3.5, along with affective items like 3.9, 3.12, and 3.13. These strategies are strongly associated with positive emotional states such as motivation, curiosity, and pleasure, as evidenced by high correlations with E.1, E.2, and E.4. This cluster represents feedback that emphasizes clarity, organization, and a supportive climate, which are critical for fostering engagement and sustaining interest in learning activities.
Cluster 2 contains cognitive items such as 3.6, 3.8, 3.10, and 3.14, together with affective items 3.15, 3.16, 3.17, 3.18, and 3.19. These feedback strategies are linked to confidence, optimism, and persistence, as shown by strong correlations with E.3 and E.5. They focus on practical support, evaluation, and emotional regulation, making them particularly relevant for students who need reassurance or assistance in overcoming challenges.
The clustering results confirm two dominant engagement profiles: one driven by motivational and clarity-oriented feedback, and another centered on support and resilience-building strategies. Both profiles are essential for creating emotionally intelligent learning environments that adapt to students’ needs.
Overall, the multivariate analysis, illustrated in Figure 8, Figure 9 and Figure 10, provides a deeper understanding of the structural relationships between feedback types and students’ emotional states. PCA revealed that two principal components explain most of the variance in both cognitive and affective feedback datasets, highlighting two dominant dimensions: one associated with high positive emotional involvement and supportive feedback, and another linked to organization and communication versus less activating emotions. Building on these findings, cluster analysis identified two distinct engagement profiles. The first cluster groups feedback strategies that promote clarity, organization, and a positive climate, strongly correlated with motivation, curiosity, and pleasure. The second cluster includes items focused on practical support, evaluation, and persistence, associated with confidence and optimism. These results confirm that feedback strategies are not randomly distributed but structured around latent factors that differentiate high-engagement from low-engagement profiles, offering valuable insights into designing emotionally intelligent learning environments.
To facilitate an integrated view of the main outcomes, Table 9 summarizes the overall patterns observed across cognitive feedback, affective feedback and students’ emotional states, integrating the detailed results reported in Table 7a–c and Table 8a,b.

4. Discussion

The present study examined whether the cognitive and affective feedback provided by an animated pedagogical tutor (APT) enhanced students’ emotional engagement compared with a human teacher’s feedback. Students’ emotional engagement was accounted for by nine emotional states, namely motivated, curious, confident, pleased, optimistic/challenging (stimulated), insecure (embarrassed), bored, anxious (dismayed), and outraged; these are the emotions that arise in learning situations and are linked to instruction, learning, and achievement [2,47].

4.1. Discussion of RQ1 Findings

The first objective (RQ1) of the study was to investigate whether both the affective and cognitive APT feedback supported students’ emotional engagement significantly more than the human teacher’s feedback.
The results indicate that affective and cognitive feedback delivered by the Affective Pedagogical Tutor (APT) was linked to higher emotional engagement compared to traditional teacher feedback. Students interacting with the APT reported higher levels of motivation, curiosity, confidence, enjoyment, and optimism, suggesting an important role of affective feedback in supporting emotional regulation and persistence, while cognitive feedback contributed to clarity and task scaffolding. These findings align with research emphasizing the synergy between cognitive and affective strategies in promoting engagement and learning outcomes in digital environments [58,59]. Emotional design studies confirm that pedagogical agents exhibiting enthusiastic or empathic behaviors foster positive emotions and retention, although their impact on transfer performance remains mixed [59,60]. The absence of consistent differences in negative emotions echoes findings that affective interventions primarily enhance positive states without exacerbating negative affect [61]. Slight increases in insecurity or anxiety may reflect heightened emotional investment and perceived challenge, a phenomenon also reported in systematic reviews on empathic conversational agents [62]. Large effect sizes across variables suggest consistent patterns, though results should be interpreted cautiously given the study design. Qualitative insights further highlight that students perceived APT feedback as personalized, supportive, and instrumental in reducing anxiety and sustaining interest. Overall, these outcomes underscore the importance of designing pedagogical agents that integrate emotional responsiveness with adaptive cognitive support to create humanized and effective learning experiences [63,64].
To optimize emotional engagement, educators should integrate clarity-oriented cognitive feedback with affective strategies that promote motivation and confidence. Effective feedback must be personalized, timely, and supportive, addressing both instructional objectives and students’ emotional needs. Embedding these approaches into intelligent tutoring systems can help reduce anxiety, sustain interest, and foster persistence, ultimately contributing to adaptive and emotionally intelligent learning environments. Recent research emphasizes that affective intelligent tutoring systems and pedagogical agents significantly enhance engagement when emotional responsiveness is combined with adaptive cognitive support [58]. Meta-analyses confirm that elaborate and affective feedback improves motivation and higher-order learning outcomes in digital contexts [35,65]. Furthermore, studies on conversational agents highlight the importance of personalization and emotional cues for sustaining learner motivation and reducing frustration [61,63]. These findings underscore the need for integrated feedback strategies that combine instructional clarity with emotional support to create humanized and effective learning experiences.

4.2. Discussion of RQ2 Findings

The second objective (RQ2) investigated the relationships between the cognitive and affective feedback provided by the animated agent (APT) and students’ emotional engagement. The findings for RQ2 indicate a statistically significant association between both types of APT feedback and students’ emotional engagement. Through both bivariate and multivariate analyses, the study demonstrates that these feedback modalities not only shape emotional states but are also organized around latent dimensions that reflect distinct engagement profiles.
The bivariate analysis revealed robust positive correlations between cognitive feedback and several positive emotional states, including motivation (E1), curiosity (E2), confidence (E3), pleasure (E4), and stimulation/optimism (E5). Similarly, affective feedback exhibited significant associations with these same emotional states, while also showing differentiated effects on negative emotions such as boredom (E7), insecurity (E6), and anxiety (E8).
The multivariate analysis, employing Principal Component Analysis (PCA) and cluster analysis, further substantiated these findings by identifying two dominant engagement profiles. The first profile, characterized by high emotional engagement, was associated with feedback strategies that emphasized clarity, organization, and emotional support. The second profile reflected a more resilience-oriented approach, linked to feedback that supported persistence, evaluation readiness, and emotional coping.
Importantly, the PCA results indicated that the first two components accounted for the majority of variance in both cognitive and affective feedback datasets, suggesting that emotional engagement is structured around core dimensions of positive and negative affect.
In summary, the results of RQ2 highlight the pedagogical importance of integrating both cognitive and affective feedback within intelligent tutoring systems. The APT’s ability to adapt its feedback based on students’ emotional states exemplifies the potential of emotionally responsive agents to personalize learning experiences and foster sustained engagement. These findings contribute to the growing body of literature advocating for emotionally intelligent educational technologies and underscore the need for future research to explore dynamic personalization and multimodal emotion detection in pedagogical agents [61,63].

5. Conclusions

This study provides empirical evidence suggesting that integrating cognitive and affective feedback through an Affective Pedagogical Tutor (APT) may support students’ emotional engagement in higher education. Given that the study employed a post-test–only design without pre-intervention measures of emotional engagement, the findings should be interpreted cautiously and do not allow for strong causal claims about “significant enhancement.” However, the comparative analysis between feedback provided by a human teacher and that delivered by the APT indicated that the latter was associated with higher levels of positive emotional states such as motivation, curiosity, confidence, pleasure, and optimism—emotions recognized as key drivers of deep learning and sustained academic engagement [2,66].
Findings from RQ1 demonstrated that cognitive feedback from the APT was associated with improvements in instructional clarity, task organization, and knowledge enrichment, while affective feedback contributed to emotional regulation, reduced anxiety, and sustained engagement. These associations align with prior research on the role of pedagogical agents in supporting both cognitive and emotional dimensions of learning [58,61,64]. These findings directly address RQ1, showing that both the cognitive and affective feedback delivered by the APT were associated with higher levels of emotional engagement than feedback from the human teacher. This is consistent with the significant differences observed across motivation, curiosity, confidence, pleasure, and optimism (Table 7a–c), as well as with the moderate-to-large effect sizes reported. Therefore, the results support the initial hypothesis that APT feedback contributes more strongly to students’ positive emotional states.
RQ2 confirmed that students’ emotional states showed significant correlations with specific feedback strategies. Multivariate analyses revealed two dominant engagement profiles: one characterized by motivational and clarity-oriented feedback, and another centered on resilience-building and evaluative support. These profiles reflect the structured nature of emotional engagement and underscore the importance of tailoring feedback to learners’ affective and cognitive needs [62].
Moreover, the study supports the conclusion that affective feedback may play an important role in mitigating negative emotional states such as boredom, insecurity, and anxiety—effects that cognitive feedback alone cannot fully achieve. This reinforces the need for emotionally intelligent learning environments that integrate both instructional and emotional support [65,67].
In summary, the synergy between cognitive and affective feedback appears essential for optimizing both academic performance and emotional well-being. While cognitive strategies provide structure and clarity, affective strategies sustain engagement and foster emotional resilience. The APT’s ability to deliver personalized, timely, and emotionally responsive feedback illustrates its potential as a complementary pedagogical tool in higher education.
The findings also respond to RQ2, as the correlation analyses, PCA, and cluster analysis reveal consistent relationships between specific feedback strategies and students’ emotional-state patterns. These multivariate results show that cognitive and affective feedback operate together to shape emotional engagement, confirming that both types of feedback are meaningfully linked to students’ emotional experiences.
These findings contribute to the growing body of research advocating for emotion-aware educational technologies and highlight the importance of designing feedback systems that address both instructional and emotional dimensions of learning. Future research should employ more robust experimental designs, including pre- and post-intervention measures and longitudinal approaches, to more accurately assess the causal impact of adaptive feedback mechanisms on learners’ evolving emotional and cognitive needs. In doing so, intelligent tutoring systems may further promote inclusive, engaging, and humanized digital learning experiences [68].

6. Limitations of the Study and Directions for Future Research

Despite its contributions, this study presents several limitations that warrant consideration. First, the research was conducted within a single course context, which restricts the generalizability of findings across disciplines and instructional formats—a limitation also noted in recent reviews of affective intelligent tutoring systems [65]. The study took place in a three-week collaborative project in a computer science course at a private Spanish university with voluntary student participation, which may favor students who are more comfortable with technology-mediated, group-based learning and thus further limit the transferability of the results. Future studies should explore the effectiveness of affective pedagogical agents across diverse academic domains and learning modalities, including asynchronous and individual learning environments.
Second, the reliance on self-reported measures introduces potential biases related to social desirability and recall accuracy, echoing concerns raised in prior studies on emotion-aware educational technologies [61,62]. Although the instrument used in this study demonstrated high reliability (Cronbach’s alpha > 0.97), self-reporting is inherently subjective. Future research should incorporate multimodal emotion recognition systems—such as biometric sensors, facial expression analysis, and voice tone detection—to complement self-reported data and provide a more nuanced understanding of learners’ emotional trajectories [64,66].
In addition, the exclusive reliance on post-activity self-reports implies that some of the associations identified in RQ2 between feedback and emotional states may partly reflect shared method variance and should therefore be interpreted as indicative rather than definitive.
Third, the study’s cross-sectional, post-test–only design limits the ability to draw causal conclusions regarding changes in emotional engagement or academic performance, highlighting the need for longitudinal research to assess the durability and evolution of any observed effects. The absence of pre-measures of emotional engagement further restricts the strength of claims that can be made about potential “enhancement” effects.
Fourth, although the study examined both cognitive and affective feedback types, it did not isolate their individual effects. The interaction between these feedback types was central to the study’s design, but further research is needed to disentangle their specific contributions to emotional engagement and learning outcomes. Experimental designs that manipulate feedback types independently could help identify which strategies are most effective under different conditions [58].
Fifth, the emotional states analyzed were based on a predefined set of nine emotions derived from established models [2]. While these models offer a solid foundation, emotions in learning are dynamic and complex. Future research could benefit from expanding the emotional taxonomy to include additional states such as frustration, pride, empathy, or flow, and examine how these interact with feedback and engagement over time [61].
Finally, while the APT incorporated predefined feedback categories, its adaptability to individual learner profiles was limited and should not be interpreted as demonstrating transformative or extensive adaptive capabilities, particularly when compared to emerging conversational agents powered by generative AI. Advances in Conversational Pedagogical Agents (CPA) and affect-aware AI systems offer promising directions for developing more flexible, context-sensitive agents capable of dynamically adjusting feedback based on learners’ emotional and cognitive profiles [68].
In conclusion, future research should adopt more rigorous designs—including pre- and post-intervention measures, multimodal data collection, and longitudinal approaches—to strengthen the interpretability of findings and avoid overstating effects beyond what the methodology can support. These efforts will be essential for realizing the full potential of affective pedagogical agents in fostering emotionally intelligent and inclusive educational experiences.
In particular, future work should replicate the comparative design of RQ1 and the feedback–emotion associations examined in RQ2 across different institutions, disciplines, and course formats in order to assess the robustness and boundary conditions of the present findings.

Author Contributions

M.A.: Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing—original draft, Writing—review and editing. T.D.: Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing—original draft, Writing—review and editing. S.C.: Supervision, Validation, Writing—review and editing. J.C.: Supervision, Validation, Writing—review and editing. E.O.-O.: Supervision, Validation, Writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially supported by the following projects: CARUOC: Conversational Agents and Recommenders for UOC, funded by the Research Accelerator Program of the Universitat Oberta de Catalunya. REGRANAPIA: Repositories Operated with Grammars: Navigation, Personalization and Intelligence Aspects, funded by the Spanish Ministry of Science and Innovation (PID2021-123048NB-I00). LExDigTeach: Use of Learning Analytics in Digital Environments: Impact on the Improvement of University Teaching Practice, funded by the Spanish Ministry of Science and Innovation (PID2020-115115GB-I00). The funders had no role in the study design, data collection, analysis, interpretation of results, or the decision to publish this manuscript.

Data Availability Statement

The anonymized datasets generated and analyzed during the current study are available from the corresponding author upon reasonable request. All requests for data access should be directed to martaarg@uoc.edu and will be evaluated in accordance with institutional and ethical guidelines.

Acknowledgments

We thank the participating university for granting access to the learning environment and facilitating the implementation of the study. We are also grateful to the students who voluntarily took part in the experiment, and to the faculty members and psychologists who assisted in designing the consent procedures and validating the research protocol. Their contributions were essential to the successful completion of this study.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
E.1	Motivated
E.2	Curious
E.3	Confident
E.4	Pleased
E.5	Optimistic/Challenging (Stimulated)
E.6	Insecure or Embarrassed
E.7	Bored
E.8	Anxious or Dismayed
E.9	Outraged
PAD	Pleasure-Arousal-Dominance
E	Students’ Emotional States
C	Cognitive Feedback
A	Affective Feedback
CG	Control Group
EG	Experimental Group
APT	Affective Pedagogical Tutor
CPA	Conversational Pedagogical Agents

Appendix A

The questionnaires used in this study.
Note: For the sake of saving space, we only provide questions that concern the virtual agent (APT). The same questions were also asked with respect to the human teacher.
Part A
Has the APT managed to:
  • make the course objectives clearer and more understandable to you?
  • provide you appropriate and complementary information to increase your ability to complete your work?
  • organize and present the contents in a more orderly manner?
  • build on your existing knowledge based on your level and needs?
  • enrich the knowledge presented with novel elements?
  • make you work more effectively in your group?
  • foster individualized learning while working in your group?
  • provide you more support to practical aspects?
  • support your learning with effective instructional procedures?
  • ensure the accomplishment of the learning objectives according to the criteria set by the course?
  • create a satisfactory working climate in your group?
  • be subtle enough not to interfere and affect the duration of the course negatively?
  • guide you to better communicate your individual results in your group?
  • help you complete the activity successfully?
  • support you to deal with the final evaluation successfully?
  • help you acquire skills and attitudes?
  • make you face your difficulties better?
  • offer you possibilities to make the best decision in case of doubt?
  • trigger and maintain your interest in the activity and learning?
Open-ended question:
In what ways did the APT’s feedback help you better understand the task, organize your work, or feel more confident in completing the activity? Please, describe specific moments or examples.
Part B
Specify how each of the above APT interventions has:
  • motivated you to carry out the activity.
  • inspired your curiosity in the activity and the course.
  • built your confidence in completing the activity better.
  • brought you joy or satisfaction to keep on with the activity.
  • helped you feel more optimistic, challenged or stimulated in your efforts to complete the activity.
  • helped you overcome insecurity or embarrassment while doing the activity.
  • assisted you to cope with feelings of boredom when they arose during the activity.
  • relieved anxiety or feelings of being stuck or dismayed during the activity.
  • helped you deal with your anger or with issues that made you feel outraged.
Open-ended question:
How did the APT’s feedback influence your emotional experience during the activity (e.g., motivation, anxiety, boredom)? Feel free to share how it affected your engagement or helped you overcome challenges.

Appendix B. Box–Cox Transformation Procedure

The Box–Cox transformation [69] was applied to improve the distributional properties of the Likert-type variables prior to conducting parametric statistical analyses. This procedure is commonly used to reduce skewness, stabilize variance, and approximate normality in continuous or ordinal-interval data that exhibit non-normal patterns.
Transformation Formula
The Box–Cox transformation is defined as:
$$
y^{(\lambda)} =
\begin{cases}
\dfrac{y^{\lambda} - 1}{\lambda}, & \text{if } \lambda \neq 0,\\[4pt]
\ln(y), & \text{if } \lambda = 0,
\end{cases}
$$
where
  • $y$ represents the original variable (requiring strictly positive values),
  • $y^{(\lambda)}$ is the transformed variable, and
  • $\lambda$ is an exponent parameter estimated via maximum likelihood.
Estimation of λ
For each item, the value of λ was estimated independently by maximizing the log-likelihood of the transformed data under the assumption of normality. This item-level estimation allowed the transformation to be adapted to the particular distributional characteristics of each variable.
Procedure
  • Verification of positive values:
    All variables were checked to ensure that the raw values were strictly positive, satisfying the requirements of the Box–Cox transformation.
  • Estimation of λ:
    An optimal λ was computed for each item using maximum likelihood optimization.
  • Application of the transformation:
    Each variable was transformed using its corresponding estimated λ .
  • Post-transformation diagnostics:
    The Kolmogorov–Smirnov test was applied to the transformed variables.
    Skewness and kurtosis values were computed to assess distributional symmetry and tail behavior.
    The transformed items demonstrated improved conformity to normality, with non-significant K–S results for nearly all variables (see Table 3).
Interpretation Considerations
The Box–Cox transformation does not alter the substantive meaning of the variables or the interpretation of group comparisons. Instead, it enhances statistical robustness by ensuring that the assumptions underlying parametric methods (e.g., t-tests, correlation analysis) are met. The analyses reported in the main manuscript were conducted using the transformed variables when appropriate, ensuring reliable and interpretable results.
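For completeness, the item-level procedure described in this appendix can be sketched with standard scientific-Python tools. The code below is illustrative rather than the authors’ implementation; the DataFrame name `df_items` is a hypothetical placeholder.

```python
# Sketch of the item-level Box-Cox procedure described in Appendix B:
# estimate lambda per item by maximum likelihood, transform, and re-run
# the post-transformation diagnostics (K-S test, skewness, kurtosis).
import numpy as np
import pandas as pd
from scipy import stats

def boxcox_transform_items(df_items):
    """Apply Box-Cox per item and return transformed data, lambdas, diagnostics."""
    transformed, lambdas, diagnostics = {}, {}, {}
    for col in df_items.columns:
        y = df_items[col].astype(float).to_numpy()
        assert np.all(y > 0), f"{col}: Box-Cox requires strictly positive values"
        y_t, lam = stats.boxcox(y)  # lambda estimated via maximum likelihood
        ks_stat, ks_p = stats.kstest(y_t, "norm",
                                     args=(y_t.mean(), y_t.std(ddof=1)))
        transformed[col] = y_t
        lambdas[col] = lam
        diagnostics[col] = {"ks_p": ks_p,
                            "skewness": stats.skew(y_t),
                            "kurtosis": stats.kurtosis(y_t)}
    return pd.DataFrame(transformed), lambdas, diagnostics
```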

Appendix C. Complete Statistical Tables for RQ1

This appendix presents the full set of statistical tables used in the analysis conducted to address Research Question 1 (RQ1). It includes all item-level values corresponding to cognitive feedback (Table A1a), affective feedback (Table A1b), and emotional-state measures (Table A1c). The tables report the complete numerical outputs for each variable, including descriptive statistics, independent-samples t-test results, effect sizes, and all additional calculations performed during the analysis. By compiling these results in an extended format, Appendix C provides a comprehensive overview of the statistical information supporting the findings summarized in the main text.
Table A1. (a) Mean values of students’ learning outcomes and t-test related to cognitive feedback. (b) Mean values of students’ learning outcomes and t-test related to affective feedback. (c) Mean values of students’ emotional states and t-test outcomes related to students’ emotional states (E).
(a) Mean values of students’ learning outcomes and t-test related to cognitive feedback

| Item | CG (n = 57) Mean (SD) | CG Min–Max | EG (n = 58) Mean (SD) | EG Min–Max | Levene’s F (1) | Levene’s p (1) | t (2) | df (2) | p-Value (2) | Cohen’s d | Hedges’ g |
|------|----------------------|------------|----------------------|------------|----------------|----------------|-------|--------|-------------|-----------|-----------|
| 3.1  | 3.58 (1.387) | 1–5 | 4.27 (0.550) | 3–5 | 11.416 | <0.001 (b) | −1.975 | 72.934 | 0.052 | 0.656 | 0.652 |
| 3.2  | 3.16 (1.068) | 1–5 | 4.05 (0.785) | 2–5 | 2.243 | 0.137 (a) | −2.938 | 113 | 0.004 | 0.951 | 0.945 |
| 3.3  | 3.26 (1.368) | 1–5 | 4.18 (0.588) | 3–5 | 11.416 | 0.001 (b) | −2.574 | 75.729 | 0.012 | 0.876 | 0.871 |
| 3.4  | 3.21 (1.228) | 1–5 | 4.23 (0.612) | 3–5 | 7.844 | 0.006 (b) | −3.059 | 81.882 | 0.003 | 1.054 | 1.047 |
| 3.5  | 3.16 (1.463) | 1–5 | 4.23 (0.612) | 3–5 | 11.416 | <0.001 (b) | −2.773 | 74.745 | 0.007 | 0.957 | 0.951 |
| 3.6  | 3.63 (1.212) | 1–5 | 4.18 (0.664) | 3–5 | 9.202 | 0.003 (b) | −1.720 | 86.514 | 0.089 | 0.564 | 0.560 |
| 3.7  | 3.05 (1.353) | 1–5 | 3.77 (0.752) | 2–5 | 7.546 | 0.007 (b) | −1.996 | 87.262 | 0.049 | 0.659 | 0.655 |
| 3.8  | 3.16 (1.119) | 1–4 | 4.18 (0.501) | 3–5 | 10.010 | 0.002 (b) | −3.421 | 77.290 | 0.001 | 1.180 | 1.172 |
| 3.10 | 3.37 (1.165) | 1–5 | 4.09 (0.811) | 2–5 | 3.925 | 0.050 (b) | −2.202 | 99.802 | 0.030 | 0.718 | 0.714 |
| 3.14 | 3.58 (1.121) | 1–5 | 4.23 (0.528) | 3–5 | 10.010 | 0.002 (b) | −2.210 | 79.374 | 0.030 | 0.744 | 0.739 |

(b) Mean values of students’ learning outcomes and t-test related to affective feedback

| Item | CG (n = 57) Mean (SD) | CG Min–Max | EG (n = 58) Mean (SD) | EG Min–Max | Levene’s F (1) | Levene’s p (1) | t (2) | df (2) | p-Value (2) | Cohen’s d | Hedges’ g |
|------|----------------------|------------|----------------------|------------|----------------|----------------|-------|--------|-------------|-----------|-----------|
| 3.9  | 3.53 (1.219) | 1–5 | 4.14 (0.710) | 2–5 | 6.369 | 0.013 (b) | −1.868 | 89.762 | 0.065 | 0.613 | 0.609 |
| 3.11 | 3.37 (1.383) | 1–5 | 4.09 (0.750) | 2–5 | 10.010 | 0.002 (b) | −1.970 | 85.991 | 0.052 | 0.649 | 0.644 |
| 3.12 | 3.11 (1.150) | 1–5 | 4.14 (0.640) | 2–5 | 5.479 | 0.021 (b) | −3.186 | 87.330 | 0.002 | 1.109 | 1.102 |
| 3.13 | 3.47 (1.219) | 1–5 | 4.18 (0.733) | 2–5 | 5.763 | 0.018 (b) | −2.140 | 91.517 | 0.035 | 0.707 | 0.703 |
| 3.15 | 3.53 (1.389) | 1–5 | 4.00 (0.690) | 2–5 | 11.416 | 0.001 (b) | −1.325 | 81.733 | 0.189 | 0.430 | 0.427 |
| 3.16 | 3.53 (1.264) | 1–5 | 4.05 (0.785) | 2–5 | 6.102 | 0.015 (b) | −1.520 | 93.326 | 0.132 | 0.495 | 0.492 |
| 3.17 | 3.16 (1.015) | 1–4 | 4.05 (0.844) | 2–5 | 1.934 | 0.167 (a) | −2.938 | 113 | 0.004 | 0.954 | 0.948 |
| 3.18 | 3.26 (1.485) | 1–5 | 3.95 (0.722) | 2–5 | 11.416 | <0.001 (b) | −1.797 | 80.759 | 0.076 | 0.593 | 0.589 |
| 3.19 | 3.26 (1.447) | 1–5 | 4.05 (0.722) | 3–5 | 11.416 | 0.001 (b) | −2.066 | 81.937 | 0.042 | 0.693 | 0.688 |

(c) Mean values of students’ emotional states and t-test outcomes related to students’ emotional states (E)

| Item | CG (n = 57) Mean (SD) | CG Min–Max | EG (n = 58) Mean (SD) | EG Min–Max | Levene’s F (1) | Levene’s p (1) | t (2) | df (2) | p-Value (2) | Cohen’s d | Hedges’ g |
|------|----------------------|------------|----------------------|------------|----------------|----------------|-------|--------|-------------|-----------|-----------|
| E1 | 2.63 (1.212) | 1–5 | 3.91 (0.921) | 1–5 | 3.395 | 0.068 (a) | −3.379 | 113 | <0.001 | 1.191 | 1.183 |
| E2 | 2.68 (1.250) | 1–5 | 3.50 (1.102) | 1–5 | 1.066 | 0.304 (a) | −2.171 | 113 | 0.032 | 0.696 | 0.692 |
| E3 | 2.79 (1.548) | 1–5 | 4.05 (0.950) | 2–5 | 6.865 | 0.010 (b) | −2.876 | 92.655 | 0.005 | 0.983 | 0.977 |
| E4 | 2.84 (1.344) | 1–5 | 4.27 (0.550) | 3–5 | 11.416 | <0.001 (b) | −3.427 | 73.981 | <0.001 | 1.397 | 1.388 |
| E5 | 2.84 (1.425) | 1–5 | 4.00 (0.756) | 2–5 | 11.416 | <0.001 (b) | −2.959 | 84.884 | 0.004 | 1.019 | 1.013 |
| E6 | 2.63 (1.257) | 1–5 | 2.68 (1.287) | 1–4 | 0.374 | 0.542 (a) | −0.126 | 113 | 0.900 | 0.039 | 0.039 |
| E7 | 3.32 (1.493) | 1–5 | 2.73 (1.202) | 1–4 | 1.366 | 0.245 (a) | 1.381 | 113 | 0.170 | −0.436 | −0.433 |
| E8 | 2.95 (1.268) | 1–5 | 3.32 (1.129) | 1–5 | 0.177 | 0.675 (a) | −0.982 | 113 | 0.328 | 0.308 | 0.306 |
| E9 | 2.84 (1.344) | 1–5 | 2.50 (1.300) | 1–4 | 0.081 | 0.777 (a) | 0.822 | 113 | 0.413 | −0.257 | −0.255 |

Note. (1) Results obtained from Levene’s test; a: equal variances assumed; b: equal variances not assumed; (2) Results obtained from the independent-samples t-test.
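The per-item testing pipeline summarized in Table A1 (Levene’s test followed by the pooled or Welch t-test, as indicated by footnotes (a) and (b)) can be sketched as follows. The code is illustrative only and the variable names are hypothetical.

```python
# Sketch of the per-item testing pipeline underlying Table A1: Levene's test
# decides between the pooled (equal-variance) and Welch t-tests, mirroring
# the (a)/(b) footnotes in the table.
from scipy import stats

def compare_groups(cg_scores, eg_scores, alpha=0.05):
    """Levene's test, then the appropriate independent-samples t-test."""
    lev_stat, lev_p = stats.levene(cg_scores, eg_scores)
    equal_var = lev_p >= alpha  # (a) equal variances assumed, (b) not assumed
    t_stat, t_p = stats.ttest_ind(cg_scores, eg_scores, equal_var=equal_var)
    return {"levene_F": lev_stat, "levene_p": lev_p,
            "equal_variances_assumed": equal_var,
            "t": t_stat, "p": t_p}

# Example usage (cg_item and eg_item are hypothetical arrays of item scores):
# print(compare_groups(cg_item, eg_item))
```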

References

  1. Arguedas, M.; Daradoumis, T.; Xhafa, F. Analyzing how emotion awareness influences students’ motivation, engagement, self-regulation, and learning outcome. Educ. Technol. Soc. 2016, 19, 87–103. [Google Scholar]
  2. Pekrun, R.; Goetz, T.; Titz, W.; Perry, R.P. Academic emotions in students’ self-regulated learning and achievement: A program of qualitative and quantitative research. Educ. Psychol. 2002, 37, 91–105. [Google Scholar] [CrossRef]
  3. Kort, B.; Reilly, R. Analytical models of emotions, learning, and relationships: Towards an affect-sensitive cognitive machine. In Proceedings of the Conference on Intelligent Tutoring Systems, Biarritz, France and San Sebastian, Spain, 2–7 June 2002; pp. 955–962. [Google Scholar]
  4. Pekrun, R. Progress and open problems in educational emotion research. Learn. Instr. 2005, 15, 497–506. [Google Scholar] [CrossRef]
  5. D’Mello, S.K.; Taylor, R.S.; Graesser, A.C. Monitoring affective trajectories during complex learning. In Proceedings of the Annual Meeting of the Cognitive Science Society, Nashville, TN, USA, 1–4 August 2007; pp. 203–208. [Google Scholar]
  6. Han, H.; Johnson, S.D. Relationship between students’ emotional intelligence, social bond, and interactions in online learning. Internet High. Educ. 2012, 15, 2–10. [Google Scholar]
  7. Devis-Rozental, C.; Eccles, S.; Mayer, P. Developing socio-emotional intelligence in higher education learners. J. Furth. High. Educ. 2017, 41, 146–162. [Google Scholar]
  8. Atkinson, R.K. Optimizing learning from examples using animated pedagogical agents. J. Educ. Psychol. 2002, 94, 416–427. [Google Scholar] [CrossRef]
  9. Daradoumis, T.; Arguedas, M. Cultivating students’ reflective learning in metacognitive activities through an affective pedagogical agent. Educ. Technol. Soc. 2020, 23, 19–31. [Google Scholar]
  10. Ammar, M.B.; Neji, M.; Alimi, A.M.; Gouarderes, G. Emotional multi-agent system for e-learning. Int. J. Knowl. Soc. Res. 2010, 1, 54–61. [Google Scholar]
  11. Scholten, M.R.M.; Kelders, S.M.; Van Gemert-Pijnen, J.E.W.C. Self-guiding eHealth to support standardized health care and knowledge dissemination: A concept design. Health Policy Technol. 2017, 6, 356–364. [Google Scholar]
  12. Heidig, S.; Clarebout, G. Do pedagogical agents make a difference to students’ motivation and learning? Educ. Res. Rev. 2011, 6, 27–54. [Google Scholar] [CrossRef]
  13. Schroeder, N.L.; Adesope, O.O.; Gilbert, R.B. A meta-analytic review of the modality and contiguity effects in multimedia learning. Educ. Psychol. Rev. 2013, 25, 569–590. [Google Scholar]
  14. Schroeder, N.L.; Romine, W.L.; Craig, S.D. Cognitive load, split attention, and the impact of technology on mathematics achievement. Comput. Educ. 2017, 105, 124–133. [Google Scholar]
  15. Dinçer, S.; Doğanay, A. The effects of multiple-pedagogical-agent learning environments on academic achievement, motivation, and cognitive load. Comput. Hum. Behav. 2017, 74, 191–200. [Google Scholar]
  16. Martha, S.G.; Santoso, H.B. The effectiveness of pedagogical agents in learning: A systematic review. Int. J. Emerg. Technol. Learn. 2019, 14, 98–110. [Google Scholar]
  17. Hattie, J.; Timperley, H. The power of feedback. Rev. Educ. Res. 2007, 77, 81–112. [Google Scholar] [CrossRef]
  18. Fyfe, E.R. Providing feedback on computer-based algebra homework in middle school classrooms. J. Educ. Psychol. 2016, 108, 603–617. [Google Scholar] [CrossRef]
  19. Narciss, S. Designing and evaluating informative tutoring feedback for digital learning environments. Digit. Educ. Rev. 2013, 22, 17–28. [Google Scholar]
  20. Timmers, C.F.; Walraven, A.; Veldkamp, B.P. The effect of feedback on metacognition, self-regulation, and performance: A systematic review of studies in secondary and higher education. Educ. Res. Rev. 2015, 14, 7–24. [Google Scholar]
  21. Finn, B.; Thomas, R.C.; Rawson, K.A. Learning more from feedback: Elaborating feedback with examples enhances concept learning. Learn. Instr. 2018, 54, 104–113. [Google Scholar] [CrossRef]
  22. Lachner, A.; Burkhart, C.; Nückles, M. Mind the gap! Automated concept mapping supports memory and transfer of knowledge from expository science texts. Learn. Instr. 2017, 49, 25–36. [Google Scholar]
  23. Fyfe, E.R.; Rittle-Johnson, B. Feedback both helps and hinders learning: The causal role of prior knowledge. J. Educ. Psychol. 2016, 108, 82–97. [Google Scholar] [CrossRef]
  24. Van der Kleij, F.M.; Feskens, R.C.W.; Eggen, T.J.H.M. Effects of feedback in computer-based learning environments: A meta-analysis. Rev. Educ. Res. 2015, 85, 475–511. [Google Scholar] [CrossRef]
  25. Lin, L.; Atkinson, R.K.; Christopherson, M.; Joseph, S.S.; Harrison, C. Verbal feedback by animated pedagogical agents and its impact on learners. Comput. Educ. 2013, 62, 263–275. [Google Scholar]
  26. Attali, Y.; Laitusis, C.; Stone, E. Feedback content and learning outcomes. Educ. Assess. 2016, 21, 344–363. [Google Scholar]
  27. Wang, S.; Gong, J.; Xu, J.; Hu, X. The role of elaborated feedback in learning: A meta-analysis. J. Educ. Psychol. 2019, 111, 763–781. [Google Scholar]
  28. Mao, X.; Li, Z. Implementing emotion-based user-aware e-learning. In Proceedings of the CHI’09 Extended Abstracts on Human Factors in Computing Systems, Boston, MA, USA, 4–9 April 2009; pp. 3787–3792. [Google Scholar] [CrossRef]
  29. Robison, J.L.; McQuiggan, S.W.; Lester, J.C. Modeling task-based vs. affect-based feedback behavior in pedagogical agents: A quantitative analysis. In Proceedings of the 2009 Conference on Artificial Intelligence in Education: Building Learning Systems that Care: From Knowledge Representation to Affective Modelling, Brighton, UK, 6–10 July 2009; pp. 13–20. [Google Scholar]
  30. D’Mello, S.K.; Lehman, B.; Graesser, A.C. A motivationally supportive affect-sensitive intelligent tutoring system. Int. J. Artif. Intell. Educ. 2011, 21, 77–105. [Google Scholar] [CrossRef]
  31. Guo, Y.R.; Goh, D.H.L. Evaluation of affective embodied agents in an information literacy game. Comput. Educ. 2016, 103, 59–75. [Google Scholar] [CrossRef]
  32. Liew, T.W.; Mat Zin, N.A.; Sahari, N. Exploring the affective, motivational and cognitive effects of pedagogical agent enthusiasm in a multimedia learning environment. Hum.-Centric Comput. Inf. Sci. 2017, 7, 9. [Google Scholar] [CrossRef]
  33. Lin, L.; Ginns, P.; Wang, T.; Zhang, P. Pedagogical agents in educational contexts: A meta-analysis of their effects on learning and affective outcomes. J. Comput. Assist. Learn. 2020, 36, 303–319. [Google Scholar] [CrossRef]
  34. Malekzadeh, M.; Mustafa, M.B.; Lahsasna, A. A review of emotion regulation in intelligent tutoring systems. J. Educ. Technol. Soc. 2015, 18, 435–445. [Google Scholar]
  35. Cabestrero, R.; Crespo, A.; Pérez-Sánchez, M.; Fernández-Escribano, M.; Ledesma, J.C. Effects of encouraging feedback on student engagement: A field study. Learn. Instr. 2018, 56, 59–67. [Google Scholar] [CrossRef]
  36. Henrie, C.R.; Halverson, L.R.; Graham, C.R. Measuring student engagement in technology-mediated learning: A review. Comput. Educ. 2015, 90, 36–53. [Google Scholar] [CrossRef]
  37. Holmes, W.; Mavrikis, M.; Hansen, A.; Grawemeyer, B. Purpose and Level of Feedback in an Exploratory Learning Environment for Fractions. In Artificial Intelligence in Education; AIED 2015; Conati, C., Heffernan, N., Mitrovic, A., Verdejo, M., Eds.; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2015; Volume 9112. [Google Scholar] [CrossRef]
  38. Grawemeyer, B.; Mavrikis, M.; Holmes, W.; Gutiérrez-Santos, S.; Wiedmann, M.; Rummel, N. Affective learning: Improving engagement and enhancing learning with affect-aware feedback. User Model. User-Adapt. Interact. 2017, 27, 119–158. [Google Scholar] [CrossRef]
  39. Rajendran, R.; Iyer, S.; Murthy, S. Personalized affective feedback to address students’ frustration in ITS. IEEE Trans. Learn. Technol. 2018, 12, 87–97. [Google Scholar] [CrossRef]
  40. Mayordomo, R.M.; Espasa, A.; Guasch, T.; Martínez-Melo, M. Perception of online feedback and its impact on cognitive and emotional engagement with feedback. Educ. Inf. Technol. 2022, 27, 7947–7971. [Google Scholar] [CrossRef]
  41. Liu, H.L.; Wang, T.H.; Lin, H.C.K.; Lai, C.F.; Huang, Y.M. The influence of affective feedback adaptive learning system on learning engagement and self-directed learning. Front. Psychol. 2022, 13, 858411. [Google Scholar] [CrossRef]
  42. Fernández-Herrero, J. Evaluating recent advances in affective intelligent tutoring systems: A scoping review of educational impacts and future prospects. Educ. Sci. 2024, 14, 839. [Google Scholar] [CrossRef]
  43. Engeström, Y.; Miettinen, R.; Punamäki, R.L. Perspectives on Activity Theory; Cambridge University Press: Cambridge, UK, 1999. [Google Scholar] [CrossRef]
  44. Arguedas, M.; Casillas, L.; Xhafa, F.; Daradoumis, T.; Peña, A.; Caballé, S. A fuzzy-based approach for classifying students’ emotional states in online collaborative work. In Proceedings of the 2016 10th International Conference on Complex, Intelligent, and Software Intensive Systems (CISIS), Fukuoka, Japan, 6–8 July 2016; pp. 61–68. [Google Scholar] [CrossRef]
  45. Arguedas, M.; Xhafa, F.; Casillas, L.; Daradoumis, T.; Peña, A.; Caballé, S. A model for providing emotion awareness and feedback using fuzzy logic in online learning. Soft Comput. 2018, 22, 963–977. [Google Scholar] [CrossRef]
  46. Arguedas, M.M.; Daradoumis, T. Exploring learners’ emotions over time in virtual learning. eLearn Cent. Res. Pap. Ser. 2013, 6, 29–39. [Google Scholar]
  47. Mehrabian, A.; O’Reilly, E. Effects of personality and situational variables on communication accuracy. Psychol. Rep. 1980, 46, 815–824. [Google Scholar] [CrossRef]
  48. Pluck, G.; Johnson, H.L. Stimulating curiosity to enhance learning. Educ. Sci. Psychol. 2011, 19, 24–31. [Google Scholar]
  49. Icekson, T.; Roskes, M.; Moran, S. Effects of optimism on persistence and motivation in goal pursuit. Motiv. Emot. 2014, 38, 102–111. [Google Scholar] [CrossRef]
  50. Brown, J.D.; Marshall, M.A. The three faces of self-esteem. In Blackwell Handbook of Social Psychology: Group Processes; Hogg, M., Tindale, R., Eds.; Blackwell: London, UK, 2001; pp. 232–256. [Google Scholar] [CrossRef]
  51. Chang, E.C. Hope, problem-solving ability, and coping in a college student population: Some implications for theory and practice. J. Clin. Psychol. 1998, 54, 953–962. [Google Scholar] [CrossRef]
  52. George, D.; Mallery, P. SPSS for Windows Step by Step: A Simple Guide and Reference, 10th ed.; Pearson: London, UK, 2010. [Google Scholar]
  53. Gravetter, F.J.; Wallnau, L.B. Essentials of Statistics for the Behavioral Sciences, 8th ed.; Wadsworth Cengage Learning: Boston, MA, USA, 2014. [Google Scholar]
  54. West, S.G.; Finch, J.F.; Curran, P.J. Structural equation models with nonnormal variables: Problems and remedies. In Structural Equation Modeling: Concepts, Issues, and Applications; Hoyle, R.H., Ed.; Sage Publications: Thousand Oaks, CA, USA, 1995; pp. 56–75. [Google Scholar]
  55. Nunnally, J.C.; Bernstein, I.H. Psychometric Theory, 3rd ed.; McGraw-Hill: Columbus, OH, USA, 1994. [Google Scholar]
  56. Cohen, J. Statistical Power Analysis for the Behavioral Sciences, 2nd ed.; Routledge: Boca Raton, FL, USA, 1988. [Google Scholar] [CrossRef]
  57. Hedges, L.V.; Olkin, I. Statistical Methods for Meta-Analysis; Academic Press: Cambridge, MA, USA, 2014. [Google Scholar]
  58. Arguedas, M.; Daradoumis, T.; Caballé, S. Measuring the effects of pedagogical agent cognitive and affective feedback on students’ academic performance. Front. Artif. Intell. 2024, 7, 1495342. [Google Scholar] [CrossRef] [PubMed]
  59. Beege, M.; Schneider, B. Emotional design of pedagogical agents: The influence of enthusiasm and model-observer similarity. Educ. Technol. Res. Dev. 2023, 71, 859–880. [Google Scholar] [CrossRef]
  60. Wang, Y.; Lang, Y.; Gong, S.; Xie, K.; Cao, Y. The impact of emotional and elaborated feedback of a pedagogical agent on multimedia learning. Front. Psychol. 2022, 13, 810194. [Google Scholar] [CrossRef] [PubMed]
  61. Lang, Y.; Gong, S.; Hu, X.; Xiao, B.; Wang, Y.; Jiang, T. The roles of pedagogical agent’s emotional support: Dynamics between emotions and learning strategies in multimedia learning. J. Educ. Comput. Res. 2024, 62, 1485–1516. [Google Scholar] [CrossRef]
  62. Ortega-Ochoa, E.; Arguedas, M.; Daradoumis, T. Empathic pedagogical conversational agents: A systematic literature review. Br. J. Educ. Technol. 2024, 55, 886–909. [Google Scholar] [CrossRef]
  63. Tang, X.; Jiang, L.; Liu, G.; Li, H. The interaction effect of pedagogical agent and emotional feedback on effective learning: A 2 × 2 factorial experiment in online formative assessment. Front. Psychol. 2025, 16, 1610550. [Google Scholar] [CrossRef]
  64. Liu, L.; Ji, Y.; Gao, Y.; Li, T.; Xu, W. A Data-Driven Adaptive Emotion Recognition Model for College Students Using an Improved Multifeature Deep Neural Network Technology. Comput. Intell. Neurosci. 2022, 2022, 1343358. [Google Scholar] [CrossRef]
  65. Khediri, N.; Ben Ammar, M.; Kherallah, M. A real-time multimodal intelligent tutoring emotion recognition system (MITERS). Multimed. Tools Appl. 2024, 83, 57759–57783. [Google Scholar] [CrossRef]
  66. Schneider, S.; Krieglstein, F.; Beege, M.; Rey, G.D. The impact of video lecturers’ nonverbal communication on learning–An experiment on gestures and facial expressions of pedagogical agents. Comput. Educ. 2022, 176, 104350. [Google Scholar] [CrossRef]
  67. Cabestrero, R.; Quirós, P.; Santos, O.C.; Salmeron-Majadas, S.; Uria-Rivas, R.; Boticario, J.G.; Arnau, D.; Arevalillo-Herráez, M.; Ferri, F.J. Some insights into the impact of affective information when delivering feedback to students. Behav. Inf. Technol. 2018, 37, 1252–1263. [Google Scholar] [CrossRef]
  68. Yusuf, H.; Money, A.; Daylamani-Zad, D. Pedagogical AI conversational agents in higher education: A conceptual framework and survey of the state of the art. Educ. Technol. Res. Dev. 2025, 73, 815–874. [Google Scholar] [CrossRef]
  69. Box, G.E.P.; Cox, D.R. An analysis of transformations. J. R. Stat. Soc. Ser. B (Methodol.) 1964, 26, 211–252. [Google Scholar] [CrossRef]
Figure 1. Three-week timeline of the learning intervention, outlining the sequence of sessions, the structure of the three instructional blocks, and the moment when students completed the emotional engagement survey.
Figure 2. Emotion analysis model based on an extended Activity Theory scenario integrating emotional, cognitive, and temporal dimensions. Adapted from [46].
Figure 3. Gender and age distribution of participants in the final sample, the control group (CG), and the experimental group (EG).
Figure 4. CONSORT-style flow diagram illustrating the progression of participants throughout the study.
Figure 5. Combined line plots comparing the control group (CG) and the experimental group (EG) across cognitive feedback items (Table 7a), affective feedback items (Table 7b), and students’ emotional-state items (Table 7c). Each panel displays group means together with the corresponding p-values from independent-samples t-tests, Cohen’s d, and Hedges’ g, giving an integrated visual summary of statistical significance and effect size for every analyzed item. Because Cohen’s d and Hedges’ g take nearly identical values for all items, their lines overlap almost perfectly, so one line may visually obscure the other even though both measures are plotted. Note. CG = Control Group; EG = Experimental Group.
Figure 6. Comparative visualization of Cohen’s d and Hedges’ g effect size estimates across all variables analyzed in the study.
Figure 7. Combined heatmap and correlation matrix visualization of the associations between cognitive/affective feedback items and students’ emotional states. The top panels display heatmaps illustrating the strength and direction of correlations for cognitive feedback (left) and affective feedback (right); the bottom panels present the corresponding correlation matrices, which show consistent patterns across feedback types. Positive emotional states (motivation, curiosity, pleasure, optimism) exhibit strong positive associations with clarity-, support-, and organization-oriented feedback items, whereas negative emotional states (insecurity, anxiety, boredom) show weaker or inverse associations. This figure complements Table 8a,b with an integrated visual summary of the correlation structure.
Figure 8. Variance explained by PCA components: cognitive vs. affective feedback.
Figure 9. Combined PCA biplot: exploring latent dimensions of engagement.
Figure 10. Cluster comparison visualization. Note: colors represent clusters from K-means; symbols represent clusters from hierarchical clustering; facets separate cognitive and affective feedback items.
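Figures 8–10 report the PCA variance decomposition, the combined biplot, and the comparison of K-means and hierarchical clusterings. As a point of orientation only, the sketch below shows how such an analysis could be reproduced with scikit-learn; the matrix shape (58 students × 19 feedback items), the two-component solution, and the two-cluster setting are illustrative assumptions rather than the authors’ exact pipeline.

```python
# Minimal PCA + clustering sketch (illustrative only; not the authors' actual pipeline).
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
# Hypothetical stand-in for the EG item-response matrix (58 students x 19 Likert items).
responses = rng.integers(1, 6, size=(58, 19)).astype(float)

# Standardize items, then extract two principal components (cf. Figures 8 and 9).
X = StandardScaler().fit_transform(responses)
pca = PCA(n_components=2).fit(X)
scores = pca.transform(X)
print("Variance explained per component:", pca.explained_variance_ratio_)

# Compare K-means and hierarchical (Ward) clusterings of the component scores (cf. Figure 10).
kmeans_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scores)
hier_labels = AgglomerativeClustering(n_clusters=2, linkage="ward").fit_predict(scores)
print("Cluster agreement (adjusted Rand index):", adjusted_rand_score(kmeans_labels, hier_labels))
```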
Table 1. Summary of the instructional intervention protocol across the three-week learning sequence.
Step | Sessions/Week | Description of Activities | Instructor Role | APT Role | Data Collected
Step 1—Orientation and initial briefing | Session 1, Week 1 | Overview of objectives, emotional-engagement concepts, and tools; baseline information collected. | Leads introduction, collects baseline information. | Not active. | Prior SQL experience, initial motivation responses.
Step 2—Baseline instruction and initial practice | Sessions 2–3, Week 1 | Introductory SQL tasks. CG: instructor-only feedback. EG: APT cognitive and affective feedback. | Monitors task completion and provides feedback to the CG. | Analyzes text and triggers feedback in the EG. | Completion times, help requests, APT outputs, emotional-linguistic indicators.
Step 3—Collaborative project development | Sessions 4–6, Week 2 | Team SQL project; emotional states monitored every 15 min. | Facilitates teamwork, evaluates SQL. | Detects frustration/engagement and issues feedback. | Forum messages, SQL errors, emotional timestamps, APT interventions.
Step 4—Consolidation and reflection | Sessions 7–8, Week 3 | Advanced SQL tasks and reflection; threshold-based feedback. | Supports reflection and monitors progress. | Triggers feedback at emotional thresholds. | Reflective notes, emotional updates, performance indicators, intervention logs.
Step 5—Post-test evaluation | Session 9, Week 3 | Final surveys on emotional engagement and feedback perception. | Supervises assessment. | Not active. | Survey responses, interaction logs, emotional-state data export.
Note. CG = Control Group; EG = Experimental Group.
Table 5. Results of the Kolmogorov–Smirnov (K–S) test for normality applied to the control and experimental groups across the cognitive feedback, affective feedback, and students’ emotional states variables.
Cognitive feedback: Variable | CG p-value | EG p-value
3.1 | 0.3756 | 0.5340
3.2 | 0.3223 | 0.6344
3.3 | 0.8239 | 0.2007
3.4 | 0.6922 | 0.4315
3.5 | 0.2276 | 0.5522
3.6 | 0.1819 | 0.8702
3.7 | 0.4488 | 0.8588
3.8 | 0.2931 | 0.6593
3.10 | 0.5911 | 0.5809
3.14 | 0.1652 | 0.2593
Affective feedback: Variable | CG p-value | EG p-value
3.9 | 0.6934 | 0.246
3.11 | 0.0230 | 0.2383
3.12 | 0.9414 | 0.5375
3.13 | 0.4675 | 0.8211
3.15 | 0.3847 | 0.1454
3.16 | 0.2438 | 0.7254
3.17 | 0.3951 | 0.3981
3.18 | 0.1332 | 0.2723
3.19 | 0.5580 | 0.8034
Students’ emotional states: Variable | CG p-value | EG p-value
E.1 | 0.2290 | 0.7122
E.2 | 0.7780 | 0.7999
E.3 | 0.4293 | 0.5486
E.4 | 0.8857 | 0.0231
E.5 | 0.8885 | 0.2224
E.6 | 0.7305 | 0.5581
E.7 | 0.8104 | 0.5046
E.8 | 0.7190 | 0.2898
E.9 | 0.8481 | 0.5915
Note. CG = Control Group; EG = Experimental Group.
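For readers who wish to replicate the normality screening in Table 5, the sketch below shows one way to run a per-item Kolmogorov–Smirnov test with SciPy. Testing against a normal distribution whose parameters are estimated from the sample, as done here, amounts to a Lilliefors-type approximation; the exact K–S implementation used in the study is not specified, so this setup is an assumption.

```python
# Hedged example: per-item K-S normality check (illustrative; the paper's exact procedure is assumed).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical 5-point Likert responses for one item in one group (n = 57).
item_scores = rng.integers(1, 6, size=57).astype(float)

# K-S test against a normal distribution with the sample mean and SD.
# (Estimating parameters from the data makes this a Lilliefors-style approximation.)
mean, sd = item_scores.mean(), item_scores.std(ddof=1)
stat, p_value = stats.kstest(item_scores, "norm", args=(mean, sd))
print(f"K-S statistic = {stat:.4f}, p = {p_value:.4f}")
```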
Table 6. Skewness and kurtosis values for each item in the control and experimental groups.
Cognitive feedback: Item | 3.1 | 3.2 | 3.3 | 3.4 | 3.5 | 3.6 | 3.7 | 3.8 | 3.10 | 3.14
CG Skewness | −0.19 | −0.31 | −0.36 | −0.46 | −0.09 | 0.19 | −0.53 | −0.27 | −0.11 | −0.12
CG Kurtosis | −0.92 | −0.99 | −1.14 | −0.86 | −0.91 | −1.06 | −1.03 | −1.09 | −1.01 | −0.92
EG Skewness | −0.28 | −0.14 | −0.40 | −0.35 | −0.08 | 0.22 | −0.39 | −0.33 | −0.10 | −0.24
EG Kurtosis | −0.84 | −1.02 | −0.89 | −0.78 | −0.96 | −0.97 | −1.06 | −1.01 | −1.05 | −0.96
Affective feedback: Item | 3.9 | 3.11 | 3.12 | 3.13 | 3.15 | 3.16 | 3.17 | 3.18 | 3.19
CG Skewness | −0.19 | 0.39 | −0.45 | −0.24 | −0.21 | −0.11 | 0.05 | 0.10 | −0.11
CG Kurtosis | −0.99 | −0.63 | −0.87 | −0.89 | −1.03 | −0.95 | −1.12 | −0.96 | −0.87
EG Skewness | −0.25 | 0.11 | −0.31 | −0.19 | −0.35 | −0.27 | 0.03 | −0.02 | −0.13
EG Kurtosis | −0.99 | −0.84 | −0.91 | −0.93 | −1.08 | −0.93 | −1.04 | −0.87 | −0.89
Students’ emotional states: Item | E.1 | E.2 | E.3 | E.4 | E.5 | E.6 | E.7 | E.8 | E.9
CG Skewness | −0.33 | −0.41 | −0.27 | −0.06 | −0.12 | −0.19 | −0.17 | −0.28 | −0.29
CG Kurtosis | −0.81 | −0.92 | −0.87 | −0.73 | −0.91 | −0.99 | −0.94 | −0.97 | −0.95
EG Skewness | −0.35 | −0.43 | −0.29 | 0.14 | −0.18 | −0.21 | −0.24 | −0.27 | −0.22
EG Kurtosis | −0.85 | −0.91 | −0.88 | −0.54 | −0.83 | −0.91 | −0.98 | −0.95 | −0.91
Note. CG = Control Group; EG = Experimental Group.
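The skewness and kurtosis values in Table 6 all fall within roughly ±1.2, a range commonly treated as acceptable for approximate normality [53,54]. A minimal SciPy sketch for computing these descriptives per item follows; whether the study used Fisher’s (excess) or Pearson’s kurtosis convention, and bias-corrected estimators, is assumed here rather than stated in the table.

```python
# Hedged example: item-level skewness and kurtosis (illustrative; estimator conventions assumed).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
item_scores = rng.integers(1, 6, size=57).astype(float)  # hypothetical Likert item, n = 57

skew = stats.skew(item_scores, bias=False)
# fisher=True returns excess kurtosis (normal distribution = 0), matching the sign pattern in Table 6.
kurt = stats.kurtosis(item_scores, fisher=True, bias=False)
print(f"skewness = {skew:.2f}, kurtosis = {kurt:.2f}")
```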
Table 7. (a) Mean values of students’ learning outcomes and t-test results related to cognitive feedback. (b) Mean values of students’ learning outcomes and t-test results related to affective feedback. (c) Mean values of students’ emotional states and t-test results related to students’ emotional states (E).
(a) Cognitive feedback: Control Group (n = 57) vs. Experimental Group (n = 58), t-test for equality of means.
Item | CG Mean | CG SD | EG Mean | EG SD | p-value | Cohen’s d | Hedges’ g
3.1 | 3.58 | 1.387 | 4.27 | 0.550 | 0.052 | 0.656 | 0.652
3.2 | 3.16 | 1.068 | 4.05 | 0.785 | 0.004 | 0.951 | 0.945
3.3 | 3.26 | 1.368 | 4.18 | 0.588 | 0.012 | 0.876 | 0.871
3.4 | 3.21 | 1.228 | 4.23 | 0.612 | 0.003 | 1.054 | 1.047
3.5 | 3.16 | 1.463 | 4.23 | 0.612 | 0.007 | 0.957 | 0.951
3.6 | 3.63 | 1.212 | 4.18 | 0.664 | 0.089 | 0.564 | 0.560
3.7 | 3.05 | 1.353 | 3.77 | 0.752 | 0.049 | 0.659 | 0.655
3.8 | 3.16 | 1.119 | 4.18 | 0.501 | 0.001 | 1.180 | 1.172
3.10 | 3.37 | 1.165 | 4.09 | 0.811 | 0.030 | 0.718 | 0.714
3.14 | 3.58 | 1.121 | 4.23 | 0.528 | 0.030 | 0.744 | 0.739
(b) Affective feedback: Control Group (n = 57) vs. Experimental Group (n = 58), t-test for equality of means.
Item | CG Mean | CG SD | EG Mean | EG SD | p-value | Cohen’s d | Hedges’ g
3.9 | 3.53 | 1.219 | 4.14 | 0.710 | 0.065 | 0.613 | 0.609
3.11 | 3.37 | 1.383 | 4.09 | 0.750 | 0.052 | 0.649 | 0.644
3.12 | 3.11 | 1.150 | 4.14 | 0.640 | 0.002 | 1.109 | 1.102
3.13 | 3.47 | 1.219 | 4.18 | 0.733 | 0.035 | 0.707 | 0.703
3.15 | 3.53 | 1.389 | 4.00 | 0.690 | 0.189 | 0.430 | 0.427
3.16 | 3.53 | 1.264 | 4.05 | 0.785 | 0.132 | 0.495 | 0.492
3.17 | 3.16 | 1.015 | 4.05 | 0.844 | 0.004 | 0.954 | 0.948
3.18 | 3.26 | 1.485 | 3.95 | 0.722 | 0.076 | 0.593 | 0.589
3.19 | 3.26 | 1.447 | 4.05 | 0.722 | 0.042 | 0.693 | 0.688
(c) Students’ emotional states (E): Control Group (n = 57) vs. Experimental Group (n = 58), t-test for equality of means.
Item | CG Mean | CG SD | EG Mean | EG SD | p-value | Cohen’s d | Hedges’ g
E1 | 2.63 | 1.212 | 3.91 | 0.921 | 0.001 | 1.191 | 1.183
E2 | 2.68 | 1.250 | 3.50 | 1.102 | 0.032 | 0.696 | 0.692
E3 | 2.79 | 1.548 | 4.05 | 0.950 | 0.005 | 0.983 | 0.977
E4 | 2.84 | 1.344 | 4.27 | 0.550 | 0.001 | 1.397 | 1.388
E5 | 2.84 | 1.425 | 4.00 | 0.756 | 0.004 | 1.019 | 1.013
E6 | 2.63 | 1.257 | 2.68 | 1.287 | 0.900 | 0.039 | 0.039
E7 | 3.32 | 1.493 | 2.73 | 1.202 | 0.170 | −0.436 | −0.433
E8 | 2.95 | 1.268 | 3.32 | 1.129 | 0.328 | 0.308 | 0.306
E9 | 2.84 | 1.344 | 2.50 | 1.300 | 0.413 | −0.257 | −0.255
Note. SD = Standard Deviation; CG = Control Group; EG = Experimental Group.
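As a reproducibility check, the effect sizes in Table 7a–c follow directly from the reported means, standard deviations, and group sizes when Cohen’s d is computed with a pooled standard deviation and Hedges’ g applies the usual small-sample correction [56,57]. The sketch below illustrates this for item 3.1 (CG: M = 3.58, SD = 1.387, n = 57; EG: M = 4.27, SD = 0.550, n = 58); the helper functions are illustrative, and it is assumed that the authors used these standard formulas, although the result matches the reported values (d ≈ 0.656, g ≈ 0.652).

```python
# Reproducing Cohen's d and Hedges' g for item 3.1 in Table 7a from the reported summary statistics.
import math

def cohens_d(m1, s1, n1, m2, s2, n2):
    # Pooled standard deviation across the two groups.
    pooled_sd = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m2 - m1) / pooled_sd

def hedges_g(d, n1, n2):
    # Small-sample bias correction J = 1 - 3 / (4*(n1 + n2) - 9).
    return d * (1 - 3 / (4 * (n1 + n2) - 9))

d = cohens_d(3.58, 1.387, 57, 4.27, 0.550, 58)   # item 3.1, CG vs. EG
g = hedges_g(d, 57, 58)
print(f"Cohen's d = {d:.3f}, Hedges' g = {g:.3f}")  # ~0.656 and ~0.652, as in Table 7a
```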
Table 8. (a) Pearson correlations between APT cognitive feedback and students’ emotional engagement in the Experimental Group (EG, n = 58). (b) Pearson correlations between APT affective feedback and students’ emotional engagement in the Experimental Group (EG, n = 58).
(a) APT cognitive feedback (EG, n = 58).
Item | E.1 | E.2 | E.3 | E.4 | E.5 | E.6 | E.7 | E.8 | E.9
3.1 | 0.531 * | 0.592 ** | 0.137 | 0.618 ** | 0.358 | −0.094 | −0.576 ** | −0.203 | −0.634 **
3.2 | 0.477 * | 0.581 ** | 0.189 | 0.328 | 0.346 | −0.285 | −0.556 * | −0.404 | −0.253
3.3 | 0.565 * | 0.506 * | 0.421 | 0.749 ** | 0.393 | −0.134 | −0.696 ** | −0.120 | −0.308
3.4 | 0.428 | 0.444 | 0.258 | 0.560 * | 0.496 * | −0.379 | −0.553 * | −0.242 | −0.517 *
3.5 | 0.473 * | 0.454 | 0.187 | 0.691 ** | 0.439 | −0.208 | −0.609 ** | −0.025 | −0.382
3.6 | 0.432 | 0.579 ** | 0.045 | 0.372 | −0.261 | −0.094 | −0.424 | −0.049 | −0.072
3.7 | −0.157 | 0.109 | −0.021 | −0.026 | 0.149 | −0.086 | −0.421 | −0.128 | −0.209
3.8 | 0.209 | 0.197 | 0.341 | 0.313 | 0.644 ** | −0.035 | −0.298 | −0.268 | −0.352
3.10 | 0.535 * | 0.542 * | 0.507 * | 0.571 * | 0.506 * | −0.320 | −0.582 ** | −0.362 | −0.422
3.14 | 0.493 * | 0.415 | 0.202 | 0.617 ** | 0.373 | −0.353 | −0.348 | −0.134 | −0.341
(b) APT affective feedback (EG, n = 58).
Item | E.1 | E.2 | E.3 | E.4 | E.5 | E.6 | E.7 | E.8 | E.9
3.9 | 0.628 ** | 0.444 | 0.327 | 0.630 ** | 0.435 | −0.519 * | −0.341 | −0.233 | −0.455
3.11 | 0.251 | 0.071 | 0.168 | 0.362 | 0.370 | −0.461 * | −0.221 | −0.083 | −0.146
3.12 | 0.468 * | 0.605 ** | 0.138 | 0.407 | 0.350 | −0.125 | −0.603 ** | −0.263 | −0.384
3.13 | 0.614 ** | 0.578 ** | 0.350 | 0.692 ** | 0.429 | −0.206 | −0.667 ** | −0.414 | −0.528 *
3.15 | 0.485 * | 0.325 | 0.700 ** | 0.612 ** | 0.718 ** | −0.296 | −0.594 ** | −0.362 | −0.608 **
3.16 | 0.206 | 0.217 | 0.315 | 0.182 | 0.357 | −0.291 | −0.417 | −0.467 * | −0.439
3.17 | 0.547 * | 0.655 ** | 0.341 | 0.590 ** | 0.556 * | −0.126 | −0.585 ** | −0.123 | −0.388
3.18 | 0.180 | 0.287 | 0.170 | 0.105 | 0.467 * | −0.243 | −0.491 * | −0.612 ** | −0.312
3.19 | 0.565 * | 0.448 | 0.348 | 0.765 ** | 0.479 * | −0.066 | −0.452 | 0.008 | −0.463 *
Note. * p < 0.05, ** p < 0.01, two-tailed.
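The coefficients in Table 8a,b are standard Pearson correlations between each feedback item and each emotional-state item within the EG (n = 58), flagged at the two-tailed significance levels given in the note. The following sketch shows how such a matrix could be computed with SciPy; the arrays and item counts are placeholders, not the study’s data.

```python
# Hedged example: Pearson correlation matrix with two-tailed significance flags (placeholder data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
feedback = rng.integers(1, 6, size=(58, 10)).astype(float)   # e.g., cognitive feedback items 3.1-3.14
emotions = rng.integers(1, 6, size=(58, 9)).astype(float)    # emotional-state items E.1-E.9

for i in range(feedback.shape[1]):
    row = []
    for j in range(emotions.shape[1]):
        r, p = stats.pearsonr(feedback[:, i], emotions[:, j])
        flag = "**" if p < 0.01 else "*" if p < 0.05 else ""
        row.append(f"{r:+.3f}{flag}")
    print("item", i + 1, " ".join(row))
```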
Table 9. Summary of empirical patterns, statistical evidence, and substantive conclusions across cognitive feedback, affective feedback, emotional states, and multivariate associations. The table synthesizes the main findings reported in Table 7a–c and Table 8a,b, as well as the PCA and clustering results described in Section 3.2.2. It highlights consistent advantages of the experimental group (EG) over the control group (CG) in cognitive and affective feedback perceptions, stronger positive emotional engagement, and systematic feedback–emotion associations. The multivariate analysis further reveals two distinct engagement profiles that differentiate students with high positive involvement from those with lower emotional activation.
Domain | Empirical Pattern (CG vs. EG) | Statistical Evidence Reported | Substantive Conclusion
Cognitive feedback items (Table 7a) | EG shows higher mean scores than CG across all cognitive feedback items. | Several items show statistically significant differences in favor of EG, with moderate to large effect sizes (Cohen’s d, Hedges’ g), as reported in Table 7a. | APT cognitive feedback improves clarity, organization, and perceived instructional support compared to human-teacher feedback.
Affective feedback items (Table 7b) | EG consistently reports higher affective feedback ratings than CG. | Multiple items present significant differences with moderate to large effect sizes, as shown in Table 7b. | APT affective feedback enhances students’ motivation and emotional regulation, supporting higher emotional engagement.
Positive emotional states (E1–E5, Table 7c) | EG reports higher levels of motivation, curiosity, confidence, pleasure, and stimulation. | Significant differences and large effect sizes for several positive emotions in favor of EG, as indicated in Table 7c. | APT feedback is strongly associated with increases in positive emotional engagement.
Negative emotional states (E6–E9, Table 7c) | Differences between CG and EG are smaller and often non-significant. | Effect sizes are small and p-values non-significant for some negative emotions (e.g., anxiety, boredom), as shown in Table 7c. | APT feedback helps students cope with negative states, but improvements are more subtle than for positive emotions.
Feedback–emotion associations (Table 8a,b) | Strong links between specific feedback items and emotional states within EG. | Significant Pearson correlations between cognitive/affective feedback and emotional items, reported in Table 8a,b. | Emotional engagement is systematically related to both cognitive and affective feedback components.
Engagement profiles (PCA + clustering) | Two dominant profiles: one highly engaged with positive feedback and one with lower engagement and more negative emotions. | PCA and cluster analysis identify two main patterns of emotional engagement, as described in Section 3.2.2 and related figures. | APT feedback contributes to differentiating high-engagement vs. low-engagement student profiles.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
