
Re-Evaluating Components of Classical Educational Theories in AI-Enhanced Learning: An Empirical Study on Student Engagement

1 Institute of Information Technology, University of Dunaújváros, Táncsics M. Street 1/a, H-2400 Dunaújváros, Hungary
2 Teacher Training Center, University of Dunaújváros, Táncsics M. Street 1/a, H-2400 Dunaújváros, Hungary
3 Department of Business Information Technology, Budapest Business University, Buzogány u. 10-12, H-1149 Budapest, Hungary
* Author to whom correspondence should be addressed.
Educ. Sci. 2024, 14(9), 974; https://doi.org/10.3390/educsci14090974
Submission received: 22 July 2024 / Revised: 22 August 2024 / Accepted: 26 August 2024 / Published: 3 September 2024
(This article belongs to the Special Issue ChatGPT as Educative and Pedagogical Tool: Perspectives and Prospects)

Abstract: The primary goal of this research was to empirically identify and validate the factors influencing student engagement in a learning environment where AI-based chat tools, such as ChatGPT or other large language models (LLMs), are intensively integrated into the curriculum and teaching–learning process. Traditional educational theories provide a robust framework for understanding diverse dimensions of student engagement, but the integration of AI-based tools offers new personalized learning experiences, immediate feedback, and resource accessibility that necessitate a contemporary exploration of these foundational concepts. Exploratory Factor Analysis (EFA) was utilized to uncover the underlying factor structure within a large set of variables, and Confirmatory Factor Analysis (CFA) was employed to verify the factor structure identified by EFA. Four new factors were identified: “Academic Self-Efficacy and Preparedness”, “Autonomy and Resource Utilization”, “Interest and Engagement”, and “Self-Regulation and Goal Setting”. Based on these factors, a new engagement measurement scale was developed to comprehensively assess student engagement in AI-enhanced learning environments.

1. Introduction

Traditional educational theories provide a robust framework for understanding diverse dimensions of student engagement, including self-efficacy, self-regulation, intrinsic motivation, autonomy, competence, relatedness, and other important components. However, in the rapidly evolving landscape of education, the integration of AI-based tools offers new personalized learning experiences, immediate feedback, and resource accessibility that can potentially transform educational practices [1]. These unique capabilities of AI technologies necessitate a contemporary exploration of the foundational concepts of student engagement.
In exploring the impact of AI-based chat tools on student engagement, it is essential to ground the discussion in established educational theories. Two pivotal frameworks in this domain are Bandura’s Social Cognitive Theory (SCT) [2] and Self-Determination Theory (SDT) by Deci and Ryan [3]. These theories offer comprehensive insights into the psychological constructs that underpin effective learning and student engagement [4].
What follows can be considered the first part of the literature discussion, providing an overview of these foundational theories before delving into a more detailed literature review in the subsequent section.

1.1. Bandura’s Social Cognitive Theory (SCT)

Albert Bandura’s Social Cognitive Theory (SCT) emphasizes the importance of observing, modeling, and imitating the behaviors, attitudes, and emotional reactions of others [2,5,6]. SCT considers how environmental and cognitive factors interact to influence human learning and behavior. One of the core concepts of SCT is reciprocal determinism [7], which refers to the dynamic and reciprocal interaction of personal factors, behavior, and the environment. According to Bandura [2,7], a person’s behavior is influenced by and influences their environment and personal factors, such as cognitive skills, attitudes, and beliefs.
Another fundamental aspect of SCT is observational learning (modeling). This occurs when individuals learn by observing the behaviors of others and the outcomes of those behaviors [8,9]. The importance of models—such as parents, peers, teachers, and media figures—is highlighted in this process, which involves key stages like attention, retention, reproduction, and motivation [2,10].
Central to SCT is the concept of self-efficacy, which is the belief in one’s ability to succeed in specific situations or accomplish tasks. Bandura [5,11] identified self-efficacy as a crucial factor in how people approach goals, tasks, and challenges. Higher self-efficacy leads to greater motivation and persistence, while lower self-efficacy can result in avoidance and a lack of effort [12].
Outcome expectations also play a significant role in SCT. These refer to the anticipated consequences of a person’s behavior. People are more likely to engage in behaviors they believe will lead to positive outcomes and avoid behaviors expected to result in negative consequences [2].
Behavioral capability involves having the knowledge and skills necessary to perform a behavior. Bandura [2] emphasized that effective learning requires understanding what to do (knowledge) and how to do it (skills). Reinforcement can be direct, vicarious, or self-reinforced, and it plays a critical role in the learning process. Direct reinforcement involves rewards or punishments following a behavior, vicarious reinforcement occurs when individuals observe others being rewarded or punished, and self-reinforcement involves individuals rewarding themselves for meeting their own standards [2].
The concept of self-regulation involves setting goals, monitoring progress, and adjusting behaviors to achieve personal objectives. It includes self-observation, the judgment of one’s behavior against personal standards or external criteria, and self-reaction, which involves rewarding or punishing oneself based on performance [2,13]. Lastly, moral disengagement explains how individuals justify unethical behavior, allowing them to engage in harmful or unethical actions without feeling guilt through mechanisms such as euphemistic labeling, moral justification, and displacement of responsibility [2,14].
By integrating these concepts, Bandura’s Social Cognitive Theory provides a comprehensive framework for understanding how people learn from their social environment and how personal and environmental factors influence behavior.
To comprehensively understand and measure these psychological concepts, various metrics and scales have been developed. Perceived self-efficacy is concerned with people’s beliefs in their abilities to produce given attainments [15]. According to Bandura [16], there is no single measure of the construct, and a “one measure fits all” approach is not feasible: individuals may report different shades of self-efficacy depending on the task and domain. In his psychometric analysis of the construct, Bandura provides a number of narrow or broad domain-specific scales: Self-Efficacy to Regulate Exercise, Self-Efficacy to Regulate Eating Habits, Problem-Solving Self-Efficacy, the Children’s Self-Efficacy Scale, the Teacher Self-Efficacy Scale, etc. These questionnaires usually consist of statements about one or more abilities, which respondents judge in terms of their confidence in performing them.
There is also another psychometric conception of self-efficacy that considers it as a domain-general, trait-like construct, i.e., one that is considered to be fairly constant across contexts. This approach is operationalized in the 10-item questionnaire of the Generalized Self-Efficacy Scale [15,16].
Several other instruments have been developed to measure self-efficacy. Without wishing to be exhaustive, we will mention here only a few examples that appear more frequently in research papers.
The 150 Likert scale items of the Self-Efficacy Survey [17] map self-efficacy in the following 10 functional areas: intellectual, family, educational, professional, social, religious, erotic, moral, life, and health. The New General Self-Efficacy Scale [18] has eight items (five-point Likert scales) and conceptualizes the construct as a trait. The Self-Efficacy Scale [19], published in 1982, has two subscales. Seventeen items measure the factor of General Self-Efficacy and six items measure Social Self-Efficacy. Some questionnaires measure academic self-efficacy specifically, which is related to the subjects’ educational challenges, e.g., [20,21,22,23,24].

1.2. Self-Determination Theory (SDT)

Self-Determination Theory (SDT), developed by Deci and Ryan [3,25,26], is another essential framework in educational psychology that focuses on human motivation and personality. SDT posits that people are motivated to grow and change by three innate and universal psychological needs: autonomy, competence, and relatedness [27]. Autonomy refers to feeling in control of one’s own behaviors and goals. Competence involves feeling effective in one’s activities and achieving desired outcomes. Relatedness is the need to feel connected to others [28].
According to SDT, when these needs are satisfied, individuals experience greater motivation, well-being, and optimal functioning. In the context of education, SDT emphasizes the importance of creating learning environments that support these needs, thereby enhancing intrinsic motivation and engagement [3,29].
Autonomy in learning can be fostered by allowing students to have a choice in their learning activities and encouraging self-initiative. Competence can be supported by providing opportunities for students to experience success and by giving constructive feedback. Relatedness can be enhanced by fostering a sense of community within the classroom, where students feel cared for and connected to others [4,26].
The Self-Determination Scale (SDS), designed by Sheldon and Deci [30], is a tool for measuring individual differences in self-determined behavior. The construct can be viewed as a relatively enduring (trait-like) aspect of individuals that reflects two key factors: firstly, how aware they are of their own feelings and self (self-contact), and secondly, how much they perceive choice in their behavior (choicefulness). The SDS comprises 10 items, divided into two five-item subscales. Respondents indicate on a five-point Likert scale which of two statements they feel is more accurate for themselves. The first subscale assesses self-awareness, while the second covers the perception of choice in one’s own actions. Subscale scores can be considered separately or combined into an aggregate score. Sheldon’s studies identified a correlational relationship between self-determination and creativity [31].
The AIR Self-Determination Assessment (24 Likert scales and 3 open-ended questions) was developed primarily for the study of school students [32]. One of the factors measured is capacity, which refers to students’ knowledge, skills and perceptions that enable their self-determination. The second factor, opportunity, refers to the chances to realize their potential and use their knowledge and skills, i.e., their capacity. Both teacher and parent versions of this questionnaire are available.
The Arc’s Self-Determination Scale [33] is a tool used to assess adolescents with disabilities. It is a 72-item questionnaire that covers four broad domains of self-determination: autonomy, self-regulation, psychological empowerment and self-realization. The items are of various types, including open-ended questions, Likert-scale responses and binary choices (agree/disagree), and the total score is out of 100 points.
A number of additional questionnaires and assessment tools are employed in self-determination studies. The central research website [34] lists over 40 such instruments, which assess various aspects of motivation, self-regulation, and autonomy needs, among other factors related to self-determination.

1.3. Goal of the Research

The primary goal of this research is to empirically identify and validate the factors that influence student engagement when using AI-based chat tools, such as ChatGPT and other large language models (LLMs). This analysis is based on students’ responses to the questions of a comprehensive survey conducted during their studies. To achieve this, we employed statistical tools such as Exploratory Factor Analysis (EFA) and Confirmatory Factor Analysis (CFA) to uncover and confirm the underlying factor structures within the survey data.
It is important to clarify that when we speak of “factors”, “components”, or “dimensions” (these three notions are frequently used interchangeably) in relation to SCT or SDT, we are referring to theoretical constructs that are identified as influential in social cognitive processes and are designed to explain human psychological functioning in various domains, including educational settings. In contrast, the statistical notion of factors (sometimes also called components, dimensions or constructs) in Exploratory Factor Analysis (EFA) or Confirmatory Factor Analysis (CFA) refers to underlying (latent) structures of variables identified within data sets through statistical methods. EFA is used to uncover the underlying factor structure of a relatively large set of variables, identifying groups of variables that are highly correlated with each other. CFA, on the other hand, is used to confirm the factor structure identified by EFA (or other hypothesized factor structure), ensuring that the data fits the proposed model.
A further goal is to seek the alignments of the potentially newly identified latent statistical factors with the components of SCT and SDT theories. This alignment may extend the components of these traditional theories to better incorporate the present educational environments shaped by advanced AI technologies.

1.4. The Research Questions

  • What empirically validated factors can be identified through the use of EFA and CFA in the context of students using AI-based chat tools?
  • How do these empirically identified factors align with the theoretical constructs of Bandura’s Social Cognitive Theory and Deci and Ryan’s Self-Determination Theory?
  • What novelties, if any, emerge from the use of AI-based chat tools that extend beyond the traditional components of SCT and SDT?
  • How can we develop a new scale for the systematic quantification of the factors identified by EFA and CFA?
It should be emphasized that the primary goal of this paper is to determine the latent factor structure of student engagement in the new AI-enhanced learning environment. The actual levels of student engagement factors and their changes over time will be addressed in a subsequent longitudinal analysis study.

2. Literature Review

The integration of AI-based chat tools into educational settings represents a significant shift in how students engage with learning materials and processes. This literature review examines the impact of these technologies through the lens of established educational theories, specifically Bandura’s Social Cognitive Theory (SCT) and Deci and Ryan’s Self-Determination Theory (SDT).

2.1. The Role of AI in Enhancing Educational Practices

AI technologies, particularly large language models (LLMs) like ChatGPT, have been recognized for their potential to transform educational practices by offering personalized learning experiences, immediate feedback, and accessible resources [35,36]. These tools are designed to support a more interactive and engaging learning environment, which can cater to the individual needs of students. For instance, Luckin et al. (2016) highlighted the role of AI in providing tailored educational content that can adapt to the pace and style of each learner [1].

2.2. Self-Efficacy and AI Tools

Self-efficacy, a core component of SCT, refers to an individual’s belief in their ability to succeed in specific situations [2,37,38]. Research has shown that AI-based tools can enhance self-efficacy by providing immediate feedback and support, helping students to understand their progress and areas for improvement [39]. Holmes, Bialik, and Fadel (2019) discussed how AI tools can foster a sense of achievement and confidence in students by enabling them to tackle complex problems with guided assistance [40,41].

2.3. Intrinsic Motivation and Personalized Learning

Intrinsic motivation, central to SDT, involves engaging in an activity for its inherent satisfaction rather than for some separable consequence [3]. AI-based chat tools can significantly boost intrinsic motivation by making learning more enjoyable and engaging [40]. Ryan and Deci (2000) [42] emphasize that when students feel competent and autonomous, their intrinsic motivation increases. AI tools can provide a customized learning experience that aligns with these principles by offering choices and adapting to students’ needs [43,44].

2.4. Self-Regulation and AI Technologies

Self-regulation, another critical aspect of SCT, involves setting goals, monitoring progress, and adjusting behaviors to achieve personal objectives [2,45]. AI-based tools can support self-regulation by providing structured learning paths and real-time analytics on performance [46]. Studies have shown that these technologies can help students develop better study habits and time management skills, leading to improved academic outcomes [47,48].

2.5. Autonomy, Competence, and Relatedness

SDT posits that autonomy, competence, and relatedness are essential for fostering intrinsic motivation [3]. AI-based chat tools can enhance these psychological needs by allowing students to control their learning process, providing feedback to build competence, and facilitating collaborative learning environments that enhance relatedness [49,50]. For example, Ng et al. (2021) and Hornberger et al. (2023) found that AI tools can create a sense of community among learners, which is crucial for maintaining engagement and motivation [51,52].

2.6. Potential Challenges and Ethical Considerations

Despite the numerous benefits, the integration of AI in education is not without challenges. Concerns about data privacy, the potential for over-reliance on technology, and the risk of diminishing critical thinking skills are prominent in the literature [53,54]. Guerra-Carrillo, Katovich, and Bunge (2017) noted that while AI tools can enhance learning, they should be used to complement, rather than replace, traditional teaching methods [55]. Additionally, ethical considerations regarding the use of student data and the potential for biased algorithms must be addressed to ensure equitable and fair educational practices [56,57].

2.7. Empirical Evidence Supporting AI Integration

Empirical studies have provided robust evidence supporting the integration of AI in education. For instance, Ray (2023) demonstrated that AI tools could substantially enhance medical education by providing detailed knowledge and simulation-based learning experiences [58]. Similarly, Farrokhnia et al. (2023) conducted a SWOT analysis, identifying strengths such as improved information accessibility and personalized learning but also noting potential threats like academic integrity issues [59]. Tlili et al. (2023) emphasized the need for vigilant implementation and robust usage guidelines to maximize the benefits of AI in educational settings [57]. In the collection of works by Khine [60], several aspects of academic self-efficacy are discussed and empirically investigated.
The existing literature underscores the transformative potential of AI-based chat tools in education, particularly in enhancing self-efficacy, intrinsic motivation, and self-regulation. However, despite these promising implications, there has been a noticeable lack of comprehensive studies that explore this topic through the lens of established learning theories. This gap in the research is significant because understanding how the empirically determined factors of learning engagement align with these foundational principles is crucial for effectively integrating these technologies into educational environments.
Our hope is that this research is both timely and necessary, addressing this critical gap by systematically investigating the alignment of these empirically determined factors of learning engagement with the core concepts of SCT and SDT within an AI-based chat-supported educational environment.
By conducting this comprehensive study, we aim to provide educators and policymakers with valuable insights into the practical considerations of integrating AI-based chat tools in educational settings. This will not only help in creating more engaging and effective learning environments but also ensure that the use of such technologies is aligned with the established principles of educational psychology, promoting the responsible and effective use of AI in education.

3. Circumstances of the Survey Data

3.1. Circumstances

This research undertakes a contemporary exploration of the foundational concepts of student engagement. First, we created learning environments that integrated AI-based chat tools into the educational process. The involvement of several teachers was crucial, as they modified the curriculum across various university subjects to incorporate these tools effectively. This comprehensive approach ensured that the AI-based chat tools were embedded in both classroom activities and at-home study practices, providing a holistic learning experience. The research encompassed a wide range of subjects, from engineering and information technology to social sciences and teacher training, necessitating the collaboration of numerous educators and authors to design and implement the modified curricula. This broad scope was essential to capture the diverse impacts of AI tools on student engagement across different academic disciplines.
To collect the necessary data, we developed a comprehensive set of survey questions designed to address our specific research questions. These questions were meticulously crafted to cover a wide range of engagement dimensions, ensuring that we could empirically identify and validate the factors that influence student engagement in AI-enhanced learning environments. The survey was administered to a large and diverse sample of university students, allowing us to gather robust data for analysis.
The starting set of questions, from which those relevant to the given purpose were selected, consisted of 30 statements (see Table A1 in Appendix A). In the following, when we refer to individual questions, we use the notation Q1, …, Q30. The statements were answered on a five-point Likert scale, from (1) strongly disagree to (5) strongly agree.
In addition, the questionnaire included demographic questions regarding the respondent’s gender, age, field of study, and language of study. These questions can serve as the basis for later multi-group analysis, where the goal is to investigate whether the constructs are valid across different layers of the studied population. This analysis will help ensure that the identified factors of student engagement are consistent and reliable across various demographic segments, providing a comprehensive understanding of how AI-based chat tools impact diverse groups of students.
Other contextual questions were also asked throughout the semester, mainly referring to students’ habits in using AI-based chats, the perceived usefulness for different subjects, etc. The analysis of the responses to these contextual questions will be studied in a subsequent paper.
The studied population consisted of university students from two Hungarian universities: the University of Dunaújváros and Budapest Business University. Filling out the questionnaire was voluntary, and both full-time and part-time students participated. We received 716 responses; after the usual data cleaning, 642 valid responses remained. Answers came from students representing several disciplines of study: Economics, Engineering, Information Technology, Social Sciences, and Teacher Training.
We received answers in Hungarian (603) and English (39). The English-speaking students study in Hungary either with the help of Erasmus or other scholarship programs or are self-financed. The nationalities of these respondents were diverse: Chinese, Turkish, Portuguese, etc.
Of the respondents, 223 were women and 416 were men; three students preferred not to answer the gender question. Although we surveyed university students, the questionnaire did not reach only the 18–24 age group: since the respondents included correspondence learners and postgraduate students, the oldest respondent was 58 years old.
We used Google Forms to collect data and IBM SPSS Statistics V29, IBM SPSS AMOS V29, and Minitab V22 applications for data analysis.
The survey was conducted towards the end of the semester, ensuring that all participating students had ample exposure to courses where the curriculum had been modified to incorporate the use of AI-based chat tools. These modifications were designed to familiarize students with the capabilities and functionalities of AI-based chat tools, integrating their use into both classroom teaching–learning activities and home study assignments.
In this project, 12 teachers from various disciplines were actively involved to ensure the effective integration of AI-based chat tools into their curricula. Each teacher took on the task of revising their course materials and instructional strategies to encourage students to use AI-based chat tools intensively, both in the classroom and at home. To facilitate this integration, the teachers participated in several preparatory workshops and training sessions focused on the pedagogical benefits and practical implementation of AI technologies. These sessions provided them with the necessary knowledge and skills to adapt their teaching methods and curriculum to incorporate AI-based chat tools effectively while also addressing the limitations and possible biases inherent in using these tools. The active involvement of teachers and comprehensive curriculum revisions were critical to the success of this project.

3.2. Justification for Using the Selected 30 Survey Questions

In designing the survey, we carefully selected 30 questions to cover a wide range of educational concepts. The broad coverage ensures that our investigation is grounded in robust theoretical foundations, incorporating traditional concepts such as self-efficacy, intrinsic motivation, self-regulation, autonomy, competence, relatedness, etc. By examining a broad spectrum of established concepts, we can identify areas where AI-based chat tools either fill existing gaps or introduce entirely new dimensions not adequately addressed by traditional theories. This process helps in extending these theoretical frameworks to better incorporate modern educational technologies. A wide-ranging approach allows for a holistic analysis of how AI-based tools impact different facets of learning, ensuring no significant aspect of student engagement is overlooked. One possible categorization, given as an example, is detailed in Table A2 in Appendix A; this table categorizes the questions across the different educational concepts.
The selection of these questions was guided by several other key considerations. Practical constraints were considered to maintain the survey’s feasibility. Keeping the survey concise was necessary to ensure a high response rate and maintain the quality of responses. Longer surveys can lead to participant fatigue, potentially compromising the accuracy and reliability of the data collected. Additionally, the scope of this study, including time constraints and available resources, required a focused approach. By limiting the number of questions, we ensured that the survey could be administered efficiently and effectively within the available timeframe.
The specificity to AI-based learning contexts was also a crucial factor. The selected questions are specifically tailored to the context of AI-based chat tools, making them highly relevant to the study’s objectives. This focus allows for a nuanced understanding of how these tools impact students’ learning experiences and outcomes. By concentrating on these core concepts and considerations, we ensured that the survey is both comprehensive and focused, providing valuable insights into the dimensions of student engagement with AI-based chat tools.

4. Data Analysis—Assessing Construct Validity and Reliability

4.1. Exploratory Factor Analysis

Exploratory Factor Analysis (EFA) is a powerful statistical method utilized to uncover the underlying structure within a large set of variables [61,62]. It plays a crucial role in the process of developing scales as it helps determine the number of latent factors and elucidates how observed variables are associated with these factors. This technique is especially useful when there is no predefined theoretical framework or when the structure of the data is not well-known. In the context of evaluating student engagement when using AI-based chat tools in their studies, EFA aids in identifying coherent clusters of behaviors and refining the survey by pinpointing the most representative items for each factor.
In this study, EFA and CFA were performed on two distinct datasets to ensure the robustness and validity of the findings. From the 642 responses collected, 400 were randomly assigned to the EFA, and the remaining 242 were used for CFA. Through principal component analysis (PCA) within EFA, the scree plot revealed a four-factor structure that underlies the behavioral dimensions of university students. These four components, each with eigenvalues close to or greater than one, account for a significant portion of the variance in the responses, collectively explaining 60.786% of the total variance.
We chose PCA over other factor extraction methods because it provided the highest proportion of explained total variance in the sample data. We applied several other extraction methods, but the factors determined by PCA were the most effective in capturing the underlying structure of the data, making it the most suitable choice for our analysis. This allowed us to maximize the explained variance and enhance the robustness of our findings.
Analyzing the pattern matrix in Table 1, we observe the factor loadings after applying a Promax rotation with Kaiser Normalization.
Factor loadings are coefficients that represent the correlation between an observed variable and a latent factor, indicating the degree to which the variable is associated with the factor. The Promax rotation, an oblique method, allows for correlation among factors, which is consistent with the concept that different dimensions of engagement are interrelated.
Questions that did not load sufficiently on any factor (factor loading < 0.4) were excluded from the factor structure: Q4, Q7, Q9, Q10, Q14, Q15, and Q18. Their exclusion suggests that these items either do not correlate strongly with the defined dimensions or may overlap with other survey items, reducing their usefulness in the final questionnaire. This refinement results in a more robust and focused measure that can effectively assess the specific domains; a reproducible sketch of the extraction step is given below.
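For readers who want to reproduce this step outside of IBM SPSS, the following sketch shows the same workflow (random split, principal-component extraction, Promax rotation, suppression of loadings below 0.4) using the open-source factor_analyzer package. This is an illustrative analogue under assumed file and column names (survey_responses.csv, Q1…Q30), not the authors’ actual analysis code.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

# Hypothetical input: the cleaned data set of 642 responses, columns Q1..Q30.
df = pd.read_csv("survey_responses.csv")
efa_df = df.sample(n=400, random_state=42)   # random subsample used for EFA
cfa_df = df.drop(efa_df.index)               # remaining 242 rows held out for CFA

# Principal-component extraction with an oblique (Promax) rotation,
# mirroring the settings described in the text.
fa = FactorAnalyzer(n_factors=4, method="principal", rotation="promax")
fa.fit(efa_df)

loadings = pd.DataFrame(fa.loadings_, index=efa_df.columns,
                        columns=[f"F{i}" for i in range(1, 5)])
# Suppress weak loadings, mirroring the < 0.4 exclusion rule above.
print(loadings.where(loadings.abs() >= 0.4))
print(fa.get_factor_variance())  # SS loadings, proportion, cumulative variance
```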
In what follows, we will interpret these latent factors (components) in detail and provide corresponding names based on the supporting items. For each component, we will also evaluate the alignment with the components of Bandura’s Social Cognitive Theory (SCT) and Deci and Ryan’s Self-Determination Theory (SDT), providing a comprehensive understanding of how these theoretical frameworks intersect with our empirical findings.
  • Component 1: Academic Self-Efficacy and Preparedness
  • Q8: Seeing the potential of AI-based chat makes me more optimistic that my academic performance will improve.
  • Q26: I am confident in my learning abilities when using AI-based chat.
  • Q27: I believe that with the help of AI-based chat, I can successfully complete difficult tasks.
  • Q28: When using AI-based chat, I am persistent in solving challenging problems.
  • Q29: After using AI-based chat, I feel prepared for exams and assessments.
  • Q30: I am confident that I can learn independently through AI-based chat.
Interpretation: This component reflects the correlation between students’ confidence in their ability to succeed academically and their use of AI-based chat tools. It suggests a strong association between the belief in one’s competence to manage and overcome academic challenges and the use of these tools. The inclusion of questions about optimism (Q8), confidence in learning abilities (Q26), and preparedness for exams (Q29) indicates a significant relationship with self-efficacy. Optimism (Q8) shows a correlation with students’ positive outlook on their academic future, potentially influenced by the supportive features of AI tools. Confidence in learning abilities (Q26) suggests that students’ perceived capability in their academic pursuits is linked to using AI-based resources. Preparedness for exams (Q29) implies that students believe these tools are associated with better preparation for assessments, indicating a sense of readiness. Additionally, the relationship between independent learning (Q30) and the use of AI-based tools suggests that students feel they can handle their studies more independently, contributing to a sense of preparedness and academic resilience. However, it is important to note that these correlations do not necessarily imply causation or a positive impact; the influence could run in either direction.
Alignment: This component aligns closely with Bandura’s concept of self-efficacy, which suggests that individuals with higher self-efficacy are more likely to engage in and persist with challenging tasks. Bandura’s theory posits that self-efficacy influences the choices individuals make, the effort they put forth, their perseverance in the face of obstacles, and their resilience to adversity. Higher self-efficacy leads to greater motivation and persistence, which are crucial for academic success. By fostering a strong belief in personal capabilities, AI-based tools help students to engage more deeply with their studies, tackle difficult subjects with confidence, and persist through challenges. This component, therefore, reflects a critical aspect of SCT by demonstrating how technological tools can influence students’ self-efficacy and overall academic performance.
  • Component 2: Autonomy and Resource Utilization
  • Q1: When using AI-based chat, I look forward to learning new topics.
  • Q2: I feel the world is opening up to me when I learn using AI-based chats.
  • Q3: When using AI-based chat, I feel that I am in control of the learning process and the pace.
  • Q5: When using AI-based chat, I feel more independent in my learning, and this is important for my academic success.
  • Q6: AI-based chat provides me with a resource, a tool to help me clarify issues that are confusing to me.
  • Q16: It is easy for me to understand new learning materials when I use AI-based chat.
Interpretation: This component reflects the correlation between students’ sense of autonomy in their learning process and their use of AI-based chat tools. It suggests a significant association between how autonomous students feel and how effectively they utilize the resources provided by these tools. The inclusion of questions about looking forward to learning new topics (Q1), feeling in control of the learning process (Q3), and feeling more independent (Q5) indicates a strong relationship with autonomy. Specifically, looking forward to learning new topics (Q1) shows a correlation with students’ motivation, driven by the opportunities AI-based tools provide for exploring new areas of knowledge. Feeling in control of the learning process (Q3) suggests that students’ perception of guiding their own educational journey is linked to the use of AI tools, allowing them to make decisions that best suit their learning styles and needs. Feeling more independent (Q5) implies that students’ reliance on their initiative and the resources available through AI tools is associated with a reduced dependence on traditional instructional methods.
Alignment: According to Self-Determination Theory (SDT), autonomy is crucial for intrinsic motivation, fostering a sense of volition and choice in learning activities. SDT posits that when students feel autonomous, they are more likely to engage in learning for inherent satisfaction and interest rather than for external rewards. The inclusion of items such as the ability to use AI tools to clarify confusing issues (Q6) and easily understand new materials (Q16) indicates a strong correlation with effective resource utilization. Clarifying confusing issues (Q6) suggests that students can independently resolve their doubts, potentially leading to a more profound learning experience. Easily understanding new materials (Q16) suggests that AI tools aid in comprehension and the mastery of content, which may empower students to tackle and understand new information independently. This correlation supports the notion of autonomous learning by indicating that students have the tools they need to manage their educational experience effectively, aligning with SDT’s emphasis on the importance of autonomy in fostering intrinsic motivation and effective learning.
  • Component 3: Interest and Engagement
  • Q11: When using AI-based chat, as my understanding of the course material grows, so does my interest.
  • Q12: For me, it is enjoyable when I share and discuss my AI-based chat learning experiences with my peers.
  • Q13: When using AI-based chat, I am willing to make more effort to achieve better results.
  • Q17: When using AI-based chat, I can accurately recall information I have heard/seen before.
  • Q25: When using AI-based chat, I regularly reflect on what I have learned and any misconceptions I may have had.
Interpretation: This component reflects the correlation between student interest and engagement in learning activities facilitated by AI-based tools. Questions about increasing interest in course material (Q11), enjoyment in sharing AI-based chat experiences with peers (Q12), and willingness to put in more effort (Q13) indicate a significant relationship with intrinsic motivation. Increasing interest in course material (Q11) shows a correlation with AI tools making the learning content more appealing and engaging, which can sustain students’ attention and curiosity. Enjoyment in sharing AI-based chat experiences with peers (Q12) suggests a link with the social aspect of learning, where students find pleasure in discussing and exchanging ideas facilitated by AI tools. Willingness to put in more effort (Q13) implies that students’ motivation to exert more effort in their studies is associated with finding the learning process enjoyable and rewarding.
Alignment: Intrinsic motivation, as explained in Self-Determination Theory (SDT), involves engaging in activities for their inherent enjoyment and satisfaction. SDT posits that intrinsic motivation is driven by internal rewards, such as the pleasure and interest derived from the activity itself, rather than external rewards. The inclusion of items such as reflecting on learned material and identifying misconceptions (Q25) suggests a strong correlation with active cognitive engagement, where students critically evaluate their understanding and rectify any errors or misconceptions. Accurately recalling information (Q17) indicates a strong association with the effectiveness of AI tools in enhancing memory and retention—key indicators of deep learning. This component suggests that AI-based tools are correlated with both the enjoyment and depth of learning, fostering a more engaging and intrinsically motivated learning environment. By making the learning process more enjoyable and cognitively stimulating, AI tools are associated with a richer and more fulfilling educational experience, aligning with SDT’s emphasis on the importance of intrinsic motivation for effective learning.
  • Component 4: Self-Regulation and Goal Setting
  • Q21: Using AI-based chat, I develop new learning habits.
  • Q22: I plan my learning process effectively with the use of AI-based chat.
  • Q23: I manage my learning materials in a systematic way with the use of AI-based chat.
  • Q24: I set realistic learning goals with the use of AI-based chat.
Interpretation: This component focuses on the correlation between self-regulatory behaviors and goal-setting strategies that students develop through the use of AI-based chat tools. Questions related to developing new learning habits (Q21), planning the learning process effectively (Q22), managing learning materials systematically (Q23), and setting realistic goals (Q24) highlight the processes of self-regulation. Developing new learning habits (Q21) suggests a correlation with AI tools introducing students to more effective study techniques and routines, potentially helping them adapt and refine their learning strategies over time. Planning the learning process effectively (Q22) indicates an association with the importance of organization and foresight, allowing students to allocate their time and resources efficiently to meet their academic goals. Managing learning materials systematically (Q23) emphasizes the role of AI tools in helping students keep their study resources organized, making it easier to track and retrieve information when needed. Setting realistic goals (Q24) points to the significance of goal setting in the learning process, where students set achievable targets that guide their efforts and keep them motivated.
Alignment: According to Social Cognitive Theory (SCT), self-regulation involves monitoring one’s own progress and making necessary adjustments to achieve personal objectives. SCT emphasizes that self-regulation is a dynamic process that includes setting goals, self-monitoring, self-evaluation, and self-reinforcement. The inclusion of items related to developing new learning habits (Q21), planning the learning process effectively (Q22), managing learning materials systematically (Q23), and setting realistic goals (Q24) indicates a strong correlation with these self-regulatory processes. The structured support provided by AI-based tools, such as real-time feedback, reminders, and personalized learning paths, helps students cultivate these skills. This component suggests that AI tools are associated with enhancing students’ ability to regulate their own learning processes, making them more organized, goal-oriented, and self-directed learners. By facilitating the development of self-regulatory behaviors, AI tools are correlated with students taking greater control of their learning, which may lead to improved academic outcomes and a stronger sense of personal efficacy.
  • Reliability of the survey questions
Reliability is a critical measure in assessing the consistency and stability of the components derived from the survey. It reflects the degree to which the items within each component are correlated, ensuring that they collectively measure the underlying construct accurately. In this study, we evaluated the reliability of each component using Cronbach’s alpha, a widely accepted statistic for internal consistency. A higher Cronbach’s alpha indicates greater reliability, with values above 0.7 generally considered acceptable for social science research.
The reliability statistics for each component are summarized in Table 2 below.
For all components, the Cronbach’s alpha exceeds 0.8, indicating excellent reliability. These results underscore the robustness of the survey instruments, confirming that the items within each component are consistently measuring the intended constructs. This high reliability is crucial for ensuring the validity of findings and their implications for understanding student engagement in AI-enhanced learning environments.
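For reference, Cronbach’s alpha for a set of k items is α = k/(k − 1) · (1 − Σᵢ σᵢ² / σ²_total), where σᵢ² are the individual item variances and σ²_total is the variance of the summed scale. A minimal Python sketch follows; the DataFrame and column names are assumptions for illustration.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Internal consistency of a set of Likert items (rows = respondents)."""
    k = items.shape[1]
    item_variance_sum = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variance_sum / total_variance)

# Example: reliability of Component 4 (Self-Regulation and Goal Setting).
# alpha_c4 = cronbach_alpha(df[["Q21", "Q22", "Q23", "Q24"]])
```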

4.2. Confirmatory Factor Analysis

Confirmatory Factor Analysis (CFA), as a specific type of structural equation modeling (SEM), tests the validity of hypothesized factor structures and measures the relationships between latent and observed variables within a comprehensive model [63,64,65,66]. CFA was employed to validate the factor structure identified by EFA, accommodating inter-factor correlations as suggested by the Promax rotation. CFA offers a variety of advanced tools to optimize model fit, including the ability to model correlations between error terms of observed variables. Incorporating these error term correlations into the CFA model enhances our understanding of the data structure, leading to potential improvements in model fit.
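The CFA reported here was run in IBM SPSS AMOS. As a rough open-source analogue, the hypothesized four-factor measurement model can be specified in lavaan-style syntax with the semopy package, which estimates covariances among the latent factors by default, consistent with the oblique Promax solution. The factor names below are ours, the item assignments follow the components listed in Section 4.1, and the commented error-covariance line illustrates the syntax only; it is not one of the actual correlated error pairs from Figure 1.

```python
import semopy

# Four-factor measurement model from the EFA solution.
MODEL_DESC = """
SelfEfficacy   =~ Q8 + Q26 + Q27 + Q28 + Q29 + Q30
Autonomy       =~ Q1 + Q2 + Q3 + Q5 + Q6 + Q16
Interest       =~ Q11 + Q12 + Q13 + Q17 + Q25
SelfRegulation =~ Q21 + Q22 + Q23 + Q24
"""
# Correlated error terms can be added with lines such as:  Q22 ~~ Q23

model = semopy.Model(MODEL_DESC)
model.fit(cfa_df)                 # the 242-response hold-out sample
stats = semopy.calc_stats(model)  # chi2, CFI, TLI, RMSEA, GFI, AGFI, ...
print(stats.T)
```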

4.2.1. The Path Diagram

In structural equation modeling (SEM), path diagrams serve to visually map out the hypothesized relationships between variables. Latent variables, which cannot be directly measured, are illustrated as circles or ovals, whereas observed variables, which can be measured, are depicted as squares or rectangles. In our research, these observed variables represent the answers collected from the survey.
The diagram uses arrows to indicate the direction and nature of the relationships between variables. Single-headed arrows denote a proposed causal influence from one variable to another, while double-headed arrows show correlations. Path coefficients, which are displayed alongside these arrows, quantify the strength and direction of these relationships, much like regression coefficients, indicating the expected change in the dependent variable for a one-unit change in the independent variable, assuming all other variables remain constant. Error terms, represented by circles labeled “e”, capture the unexplained variance in the observed variables, indicating the model’s imperfections. Additionally, curved double-headed arrows show correlations among exogenous variables (those not influenced by other variables within the model).
Path diagrams provide a detailed depiction of the theoretical framework that aims to explain the data relationships, making them essential for SEM analysis. Figure 1 illustrates the final structural equation model, developed through a rigorous selection process to identify the best-fitting model.
In this diagram, certain error terms associated with items within the same factor are connected by curved arrows, indicating correlations between these error terms. These correlations suggest that there are unique variances shared by these items that are not captured by the latent factors, potentially due to method effects or similarities in the wording of the questions.

4.2.2. Model Fit Metrics

The evaluation of model fit indicates excellent results across several key metrics. Key indices such as the Comparative Fit Index (CFI) and Tucker–Lewis Index (TLI) are 0.956 and 0.947, respectively, both exceeding the 0.90 threshold, indicating excellent model fit. The Root-Mean-Square Error of Approximation (RMSEA) is 0.047, with a 90% confidence interval of 0.036 to 0.058 and a PCLOSE of 0.654, all of which signify a close fit to the data.
The Chi-square value is 267.197 with 174 degrees of freedom and a p-value of 0.000; this is significant but expected given the large sample size. The Chi-square-to-degrees-of-freedom ratio (CMIN/DF) is 1.536, falling well within the acceptable range and suggesting a good fit.
Overall, the model demonstrates robust fit statistics, confirming its suitability for representing the relationships among the observed and latent variables in the study.

4.2.3. Model Fit across Cultural and Demographic Groups

The responding students represented a wide range of demographic segments. While the model exhibited an excellent fit for this diverse group as a whole, conclusions about distinct segments required separate validation: it is vital to confirm that the model’s structural validity holds for each individual segment, ensuring accurate and meaningful interpretations within different population subsets.
A multi-group Confirmatory Factor Analysis (CFA) was conducted on the original dataset of 642 university students. When the CFA model fit is good, performing the multi-group analysis on the full original dataset is recommended, as it yields more robust multi-group reliability results. Four distinct group comparisons were performed:
  • Gender: Female vs. male.
  • Age group: Under 24 years old, 24 to 30 years old, 30 to 40 years old, and over 40 years old.
  • Academic discipline: Technical, which includes Engineering and Information Technology, and Social, which encompasses Economics, Social Sciences, and Teacher Training.
  • Language and cultural background: English-speaking international students vs. Hungarian students.
The model fit evaluation across various demographics shows strong results. For gender, both male and female groups exhibit a good fit, with RMR at 0.047, GFI at 0.911, and AGFI at 0.882. High baseline indices (NFI: 0.905, IFI: 0.954, CFI: 0.953) and an RMSEA of 0.037 further affirm robustness, supported by a Hoelter’s critical N value of 386 at the 0.05 level. For age groups, the model demonstrates a good fit across all categories (under 24, 24–30, 30–40, over 40), with RMR at 0.072, GFI at 0.862, AGFI at 0.817, and robust baseline indices (NFI: 0.842, IFI: 0.929, CFI: 0.927). An RMSEA of 0.033 and a Hoelter’s critical N value of 414 at the 0.05 level confirm an excellent fit. The field-of-discipline groups, divided into Technical and Social, also show a strong fit, with RMR at 0.048, GFI at 0.912, AGFI at 0.883, and high baseline indices (NFI: 0.905, IFI: 0.953, CFI: 0.952). An RMSEA of 0.037 and a Hoelter’s critical N value of 384 at the 0.05 level indicate suitability for larger samples. The language groups (Hungarian vs. English) also indicate a very good fit, with RMR at 0.078, GFI at 0.909, AGFI at 0.879, and high baseline indices (NFI: 0.896, IFI: 0.943, CFI: 0.942). An RMSEA of 0.041 and a Hoelter’s critical N value of 346 at the 0.05 level suggest reliability for multi-group analysis.
Overall, the model fit evaluation across gender, age, field of discipline, and language groups consistently demonstrates a strong and robust fit, confirmed by multiple indices, including high baseline comparisons, low RMSEA values, and suitable Hoelter’s critical N values, indicating the model’s reliability and suitability for diverse demographic analyses.
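Outside AMOS, this per-group validation step can be approximated by refitting the same measurement model within each demographic subset and comparing fit indices, as in the sketch below. The grouping column name is hypothetical, and unlike AMOS’s multi-group machinery, this simple loop imposes no cross-group invariance constraints.

```python
import semopy

# MODEL_DESC as defined in the CFA sketch above; df is the full 642-row
# data frame, with "gender" as a hypothetical demographic column.
for group, subset in df.groupby("gender"):
    m = semopy.Model(MODEL_DESC)
    m.fit(subset)  # semopy uses only the variables named in MODEL_DESC
    stats = semopy.calc_stats(m)
    print(group, stats[["CFI", "TLI", "RMSEA"]].to_string(index=False))
```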

5. Novelties Compared to Traditional Components

The newly identified components offer several novel insights compared to traditional components of SCT (Bandura’s Social Cognitive Theory) and SDT (Self-Determination Theory).

5.1. Integration of AI Tools into Self-Efficacy and Preparedness

Self-efficacy, a core construct of Bandura’s Social Cognitive Theory (SCT), traditionally focuses on an individual’s belief in their ability to succeed in specific tasks or situations. The component “Academic Self-Efficacy and Preparedness” specifically highlights how AI-based tools enhance students’ confidence in their academic abilities and their preparedness for exams and assessments. This extension of self-efficacy includes the direct impact of AI tools on students’ perception of their academic readiness, which is not typically addressed in traditional SCT.

5.2. Enhanced Autonomy through Resource Utilization

Autonomy (SDT) involves feeling in control of one’s own behavior and goals, and Behavioral Capability (SCT) refers to having the knowledge and skills necessary to perform a behavior. The component “Autonomy and Resource Utilization” not only emphasizes students’ sense of control over their learning but also highlights the importance of effectively utilizing AI tools as resources. This dual focus on autonomy and resource utilization represents a more nuanced understanding of how technological tools can empower students to take control of their learning process.

5.3. Deepened Engagement through AI Interaction

Intrinsic Motivation (SDT) involves engaging in activities for their inherent satisfaction, and Observational Learning (SCT) refers to learning by observing others. The component “Interest and Engagement” underscores the role of AI tools in making learning more enjoyable and intrinsically motivating. This component highlights the active engagement and interest generated by AI tools, suggesting that these technologies can enhance students’ intrinsic motivation by making the learning process more interactive and rewarding.

5.4. Structured Self-Regulation and Goal Setting with AI Support

Self-Regulation (SCT) involves setting goals, monitoring progress, and adjusting behaviors to achieve objectives, and Competence (SDT) refers to feeling effective in one’s activities. The component “Self-Regulation and Goal Setting” focuses on how AI tools support students in developing structured self-regulatory behaviors and effective goal-setting strategies. This component extends traditional self-regulation by emphasizing the specific role of AI in helping students to organize their learning, manage time efficiently, and set realistic goals.

5.5. Context-Specific Learning Experiences

Outcome Expectations (SCT) involve anticipated consequences of a behavior, and Relatedness (SDT) refers to feeling connected to others. The components identified in this study highlight context-specific experiences unique to AI-based learning environments. For instance, the role of AI in providing immediate feedback, personalized learning paths, and facilitating peer discussions through AI tools represents a contextual adaptation of traditional concepts. These experiences are specifically tailored to modern educational settings where AI plays a crucial role, providing a more comprehensive understanding of how AI-based chat tools influence student engagement and extending traditional educational theories to better accommodate the evolving landscape of technology-enhanced learning.

6. Development and Use of a New Scale for Student Engagement

The development of a scale in research serves multiple critical purposes, primarily focusing on the measurement of latent variables that are not directly observable. In the context of this educational research, developing a new scale (named the AI-Enhanced Learning Engagement Scale or AIELE Scale) allows for the systematic quantification of the abstract concepts identified by EFA and CFA. This quantification is essential for conducting rigorous empirical analyses and drawing meaningful conclusions about the phenomena under study.
In defining a scale with correlated factors, providing the items and the factor score weights is the best practice (Table 3). This approach ensures that the computed scores are accurate and reflect the unique contributions of each item to the latent constructs, accounting for the correlations between factors.
To obtain the overall score for a given factor, each individual’s responses to the survey questions are multiplied by the corresponding factor score weights and the products are then summed.
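In symbols (our notation, not the paper’s): writing x_i for an individual’s Likert response to question Q_i and w_{f,i} for the factor score weight of Q_i on factor f taken from Table 3, the score on factor f is the weighted sum over all 21 retained items, since with correlated factors every item contributes to every factor score:

Score_f = Σ_{i=1}^{21} w_{f,i} · x_i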
For example, if an individual’s responses to the survey questions are as follows: Q1: 4, Q2: 3, Q3: 5, Q5: 2, Q6: 4, Q8: 3, Q11: 5, Q12: 4, Q13: 3, Q16: 2, Q17: 4, Q21: 5, Q22: 3, Q23: 4, Q24: 3, Q25: 4, Q26: 5, Q27: 3, Q28: 4, Q29: 5, and Q30: 4, the calculation proceeds as follows:
Factor 1 Score = (4 × 0.014) + (3 × 0.024) + (5 × 0.014) + (2 × 0.013) + (4 × 0.016) + (3 × 0.095) + (5 × 0.024) + (4 × 0.009) + (3 × 0.015) + (2 × 0.038) + (4 × 0.018) + (5 × 0.021) + (3 × 0.016) + (4 × 0.024) + (3 × 0.023) + (4 × 0.015) + (5 × 0.088) + (3 × 0.065) + (4 × 0.099) + (5 × 0.132) + (4 × 0.14) = 3.551.
Thus, the individual’s score on Factor 1 is 3.551.
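In code, the scoring reduces to a dot product between the response vector and the factor’s weight vector. The sketch below re-uses the Factor 1 weights and responses quoted above (item order Q1, Q2, Q3, Q5, Q6, Q8, Q11, Q12, Q13, Q16, Q17, Q21–Q30, as in the worked example; the full weight table is Table 3).

```python
import numpy as np

# Factor 1 (Academic Self-Efficacy and Preparedness) score weights, in the
# item order Q1, Q2, Q3, Q5, Q6, Q8, Q11, Q12, Q13, Q16, Q17, Q21..Q30.
WEIGHTS_F1 = np.array([0.014, 0.024, 0.014, 0.013, 0.016, 0.095, 0.024,
                       0.009, 0.015, 0.038, 0.018, 0.021, 0.016, 0.024,
                       0.023, 0.015, 0.088, 0.065, 0.099, 0.132, 0.140])

# The example respondent's answers, in the same item order.
responses = np.array([4, 3, 5, 2, 4, 3, 5, 4, 3, 2, 4,
                      5, 3, 4, 3, 4, 5, 3, 4, 5, 4])

factor1_score = float(responses @ WEIGHTS_F1)  # weighted sum over all 21 items
print(round(factor1_score, 3))  # -> 3.551
```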
Determining these newly identified, more comprehensive factors and students’ factor scores based on this new scale can provide numerous practical benefits. Personalized learning experiences are enhanced by understanding each student’s strengths and weaknesses, allowing educators to provide individualized support and create adaptive learning paths. This ensures that students are both engaged and appropriately challenged. Educational outcomes improve through targeted interventions for struggling students and by designing engaging activities that boost intrinsic motivation.
Instructional strategies are informed by factor scores, guiding educators to incorporate techniques that enhance autonomy, control, and resource utilization. Providing feedback based on these scores encourages student self-reflection and self-regulation, promoting awareness and the improvement of learning processes. Institutional policies and practices benefit from insights gained through factor scores, aiding in curriculum development and effective resource allocation to areas needing the most support, such as academic skills and learning habit development.
Student support services are enhanced as academic advisors and counselors can offer more personalized guidance, and peer mentoring programs can be developed by identifying students with high scores in certain areas to support peers needing help. A growth mindset is fostered by highlighting strengths and areas for improvement, encouraging persistence and the setting of realistic goals.
Self-regulated learning is promoted through teaching students to use their factor scores to develop effective strategies, such as time management and goal setting. This empowers students, increasing their autonomy and self-efficacy. Understanding factors contributing to academic success helps institutions create supportive learning environments that foster engagement, motivation, and confidence. Leveraging AI-based tools and other technologies ensures these environments are aligned with the factors driving student success.

7. Conclusions

The integration of AI-based chat tools into educational environments has been shown to significantly impact various dimensions of student engagement, as identified through our empirical study. By grounding our research in Bandura’s Social Cognitive Theory and Deci and Ryan’s Self-Determination Theory, we validated several new components of student engagement. Firstly, the component of Academic Self-Efficacy and Preparedness highlights the association of students’ confidence and optimism in their academic performance with the supportive features of AI tools. Secondly, the component of Autonomy and Resource Utilization reflects how AI tools provide students with control over their learning process and the necessary resources to clarify complex topics independently. Thirdly, the component of Interest and Engagement captures the association of interest and willingness to invest effort in learning with the engaging and interactive nature of AI-based tools. Lastly, the component of Self-Regulation and Goal Setting underscores the role of AI tools in helping students develop effective learning habits, plan their study processes, manage their materials systematically, and set realistic goals.
The development of the AI-Enhanced Learning Engagement Scale (AIELE Scale) represents a significant advancement in the measurement of student engagement within AI-enhanced educational environments. This new scale systematically quantifies the abstract concepts identified through Exploratory Factor Analysis (EFA) and Confirmatory Factor Analysis (CFA). The AIELE Scale offers practical benefits such as enhancing personalized learning, improving educational outcomes through targeted interventions, and developing instructional strategies that foster autonomy and resource utilization. It can support institutional policies and student services by providing insights for curriculum development and resource allocation. By fostering self-regulated learning and a growth mindset, the AIELE Scale empowers students, increasing their engagement, motivation, and confidence in a technology-enhanced learning environment.
Looking to the future, there is a clear need for further research to expand on the findings of this study. One critical avenue is conducting longitudinal studies to examine how student engagement with AI tools evolves over extended periods. Of particular interest is the transition from the phase in which students used these tools only occasionally in independent learning to the phase in which AI use was consciously integrated into their courses over a semester. Such studies would provide deeper insight into the long-term impacts of AI integration on educational outcomes. In fact, a longitudinal study has already been conducted as a continuation of this research, and a paper detailing its findings is currently under review in the same journal. This ongoing research will contribute to a more comprehensive understanding of the sustained effects of AI tools on student engagement and learning success.
Additionally, future research could explore the differential impacts of AI tools across various student demographics, such as age, gender, and field of study, to determine whether certain groups benefit more from AI integration than others. Investigating the role of AI in collaborative learning environments versus individual learning contexts could also yield valuable insights, particularly in understanding how AI tools influence group dynamics and peer learning.
Another promising area for future research is the development and evaluation of AI tools tailored to specific disciplines or learning styles. Customizing AI tools to meet the unique needs of different subjects or adapting them to various learning preferences could further enhance their effectiveness and student engagement.
Furthermore, an exciting direction for future research is the possible use of students’ factor scores, derived from the AIELE Scale, as predictors in a machine learning model to forecast students’ performance in their studies. By leveraging these factor scores alongside other well-known predictors, educators and institutions could potentially improve the accuracy of performance forecasts, identify students at risk of underperforming, and intervene early with tailored support, thereby improving overall educational outcomes.
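A minimal sketch of this idea follows, assuming scikit-learn is available; the data, labels, and all names in the snippet are hypothetical illustrations rather than a model validated in this study:

```python
# Hypothetical sketch: AIELE factor scores as features in a model that
# flags students at risk of underperforming. Data here are random dummies;
# a real application would use actual factor scores and observed outcomes.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_students = 200

# Feature matrix: four AIELE factor scores plus one conventional predictor.
X = np.column_stack([
    rng.uniform(1.0, 5.0, size=(n_students, 4)),  # Factor 1-4 scores
    rng.uniform(2.0, 5.0, size=n_students),       # e.g., prior GPA
])
y = rng.integers(0, 2, size=n_students)           # 1 = at risk (dummy labels)

model = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5)       # 5-fold cross-validation
print(scores.mean())  # near chance level on dummy data, by construction
```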
In summary, while this study provides a solid foundation for understanding the role of AI in student engagement, the field is ripe for further exploration across these various dimensions.

Author Contributions

Conceptualization, L.B.; validation, L.B.; formal analysis, L.B.; resources, L.B., G.Á., A.B.-B., T.F., G.G., A.J., L.Z.J., E.K. (Edina Kocsó), E.K. (Endre Kovács), E.M., A.I.M.K. and G.S.; data curation, L.B.; writing—original draft preparation, L.B.; writing—review and editing, L.B.; writing—part of the literature review, L.Z.J.; project administration, L.B. and E.K. (Edina Kocsó). All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Our university does not have a dedicated ethics committee; however, we adhere to rigorous internal guidelines that ensure the anonymity and voluntary nature of all participation.

Informed Consent Statement

Prior to participation, all individuals were informed about the study’s aims and the anonymous and voluntary nature of the survey through an initial statement in the questionnaire. This statement also made clear that, by participating, respondents consented to the anonymized use of their data solely for statistical analysis. We ensured that it is technically impossible to identify any participant from the data collected, maintaining the strictest levels of confidentiality and data protection.

Data Availability Statement

The data presented in this study are available at https://drive.google.com/file/d/1_-tc9M5hV7jc7KTEMc9pP6BtleeRs6Yf/view?usp=sharing (accessed on 17 July 2024).

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. The survey questions for engagement.
Q1. When using AI-based chat, I look forward to learning new topics.
Q2. I feel the world is opening up to me when I learn using AI-based chats.
Q3. When using AI-based chat, I feel that I am in control of the learning process and the pace.
Q4. I appreciate being able to choose how and when I use AI-based chat in my learning.
Q5. When using AI-based chat, I feel more independent in my learning and this is important for my academic success.
Q6. AI-based chat provides me with a resource, a tool to help me clarify issues that are confusing to me.
Q7. When using AI-based chat, I feel that I have the skills and knowledge to successfully complete my studies.
Q8. Seeing the potential of AI-based chat makes me more optimistic that my academic performance will improve.
Q9. AI-based chat provides the resources that are important for my academic success.
Q10. Learning with AI-based chat motivates and inspires me to study.
Q11. When using AI-based chat, as my understanding of the course material grows, so does my interest.
Q12. For me, it is enjoyable when I share and discuss my AI-based chat learning experiences with my peers.
Q13. When using AI-based chat, I am willing to make more effort to achieve better results.
Q14. I am engaged more deeply with the learning materials when I use AI-based chat.
Q15. I use AI-based chat to get additional resources and information to help my learning.
Q16. It is easy for me to understand new learning materials when I use AI-based chat.
Q17. When using AI-based chat, I can accurately recall information I have heard/seen before.
Q18. When using AI-based chat in my studies, I can effectively identify key concepts.
Q19. I feel able to apply the knowledge gained from AI-based chat to real-life situations.
Q20. I am a good problem solver in my studies when I use AI-based chat.
Q21. Using AI-based chat, I develop new learning habits.
Q22. I plan my learning process effectively with the use of AI-based chat.
Q23. I manage my learning materials in a systematic way with the use of AI-based chat.
Q24. I set realistic learning goals with the use of AI-based chat.
Q25. When using AI-based chat, I regularly reflect on what I have learned and any misconceptions I may have had.
Q26. I am confident in my learning abilities when using AI-based chat.
Q27. I believe that with the help of AI-based chat, I can successfully complete difficult tasks.
Q28. When using AI-based chat, I am persistent in solving challenging problems.
Q29. After using AI-based chat, I feel prepared for exams and assessments.
Q30. I am confident that I can learn independently through AI-based chat.
Table A2. One possible categorization of the survey questions to theoretical concepts.
[The table maps each of the thirty questions (Q1–Q30) to one or more of the following concepts: Self-Efficacy, Intrinsic Motivation, Self-Regulation, Autonomy, Competence, Relatedness, Observational Learning, Outcome Expectations, Behavioral Capability, and Reinforcement.]

References

  1. Luckin, R.; Holmes, W.; Griffiths, M.; Forcier, L.B. Intelligence Unleashed: An Argument for AI in Education; Pearson Education: London, UK, 2016. [Google Scholar]
  2. Bandura, A. Social Foundations of Thought and Action: A Social Cognitive Theory; Prentice-Hall: Englewood Cliffs, NJ, USA, 1986. [Google Scholar]
  3. Deci, E.L.; Ryan, R.M. Intrinsic Motivation and Self-Determination in Human Behavior; Plenum: New York, NY, USA, 1985. [Google Scholar]
  4. Carver, C.S.; Scheier, M.F. Perspectives on Personality, 8th ed.; Pearson: London, UK, 2017. [Google Scholar]
  5. Bandura, A. Self-Efficacy: The Exercise of Control; W.H. Freeman: New York, NY, USA, 1997. [Google Scholar]
  6. Bandura, A. Social Cognitive Theory: An Agentic Perspective. Annu. Rev. Psychol. 2001, 52, 1–26. [Google Scholar] [CrossRef]
  7. Bandura, A. The Self System in Reciprocal Determinism. Am. Psychol. 1978, 33, 344–358. [Google Scholar] [CrossRef]
  8. Bandura, A. Social Learning Theory; Prentice-Hall: Englewood Cliffs, NJ, USA, 1977. [Google Scholar]
  9. Bandura, A.; Walters, R. Social Learning and Personality Development; Holt, Rinehart & Winston: New York, NY, USA, 1963. [Google Scholar]
  10. Bandura, A.; Ross, D.; Ross, S.A. Transmission of Aggression Through the Imitation of Aggressive Models. J. Abnorm. Soc. Psychol. 1961, 63, 575–582. [Google Scholar] [CrossRef]
  11. Bandura, A. Self-efficacy: Toward a Unifying Theory of Behavioral Change. Psychol. Rev. 1977, 84, 191–215. [Google Scholar]
  12. Caprara, G.; Fida, R.; Vecchione, M.; Del Bove, G.; Vecchio, G.; Barbaranelli, C.; Bandura, A. Longitudinal Analysis of the Role of Perceived Self-Efficacy for Self-Regulatory Learning in Academic Continuance and Achievement. J. Educ. Psychol. 2008, 100, 525–534. [Google Scholar] [CrossRef]
  13. Bandura, A. Toward a Psychology of Human Agency. Perspect. Psychol. Sci. 2006, 1, 164–180. [Google Scholar] [CrossRef] [PubMed]
  14. Bandura, A. Selective Moral Disengagement in the Exercise of Moral Agency. J. Moral Educ. 2002, 31, 101–119. [Google Scholar] [CrossRef]
  15. Schwarzer, R.; Jerusalem, M. Generalized Self-Efficacy Scale. In Measures in Health Psychology: A User’s Portfolio. Causal and Control Beliefs; Weinman, J., Wright, S.C., Johnson, M., Eds.; Nfer-Nelson: Windsor, UK, 1995; pp. 35–37. [Google Scholar]
  16. Bandura, A. Guide for Constructing Self-Efficacy Scales. In Self-Efficacy Beliefs of Adolescents; Pajares, F., Urdan, T., Eds.; Information Age Publishing: Greenwich, CT, USA, 2006; pp. 307–337. [Google Scholar]
  17. Panc, T.; Mihalcea, A.; Panc, I. Self-Efficacy Survey: A New Assessment Tool. Procedia—Soc. Behav. Sci. 2012, 33, 880–884. [Google Scholar] [CrossRef]
  18. Chen, G.; Gully, S.M.; Eden, D. Validation of a New General Self-Efficacy Scale. Organ. Res. Methods 2001, 4, 62–83. [Google Scholar] [CrossRef]
  19. Sherer, M.; Maddux, J.E.; Mercandante, B.; Prentice-Dunn, S.; Jacobs, B.; Rogers, R.W. The Self-Efficacy Scale: Construction and Validation. Psychol. Rep. 1982, 51, 663–671. [Google Scholar] [CrossRef]
  20. Van Zyl, L.E.; Klibert, J.; Shankland, R.; See-To, E.W.K.; Rothmann, S. The General Academic Self-Efficacy Scale: Psychometric Properties, Longitudinal Invariance, and Criterion Validity. J. Psychoeduc. Assess. 2022, 40, 777–789. [Google Scholar] [CrossRef]
  21. Dever, B.V.; Kim, S.Y. Measurement Equivalence of the PALS Academic Self-Efficacy Scale. Eur. J. Psychol. Assess. 2016, 32, 61–67. [Google Scholar] [CrossRef]
  22. Nielsen, T.; Dammeyer, J.; Vang, M.L.; Makransky, G. Gender Fairness in Self-Efficacy? A Rasch-Based Validity Study of the General Academic Self-Efficacy Scale (GASE). Scand. J. Educ. Res. 2018, 62, 664–681. [Google Scholar] [CrossRef]
  23. Zimmerman, B.J.; Bandura, A.; Martinez-Pons, M. Self-Motivation for Academic Attainment: The Role of Self-Efficacy Beliefs and Personal Goal Setting. Am. Educ. Res. J. 1992, 29, 663–676. [Google Scholar] [CrossRef]
  24. Owen, S.V.; Froman, R.D. Development of a College Academic Self-Efficacy Scale. In Proceedings of the Annual Meeting of the National Council on Measurement in Education, New Orleans, LA, USA, 5–9 April 1988. [Google Scholar]
  25. Ryan, R.M. (Ed.) The Oxford Handbook of Self-Determination Theory, 1st ed.; Oxford University Press: Oxford, UK, 2023. [Google Scholar]
  26. Deci, E.L.; Ryan, R.M. Motivation, Personality, and Development within Embedded Social Context: An Overview of Self-Determination Theory. In Oxford Handbook of Human Motivation; Ryan, R.M., Ed.; Oxford University Press: Oxford, UK, 2012; pp. 85–107. [Google Scholar]
  27. Ryan, R.M.; Deci, E.L. Self-Determination Theory: Basic Psychological Needs in Motivation, Development, and Wellness; Guilford Publishing: New York, NY, USA, 2017. [Google Scholar]
  28. Deci, E.L.; Ryan, R.M. Self-Determination Theory: A Consideration of Human Motivational Universals. In The Cambridge Handbook of Personality Psychology; Corr, P.J., Matthews, G., Eds.; Cambridge University Press: Cambridge, UK, 2009; pp. 441–456. [Google Scholar]
  29. Ryan, R.M.; Deci, E.L. Self-Determination Theory and the Role of Basic Psychological Needs in Personality and the Organization of Behavior. In Handbook of Personality; John, O.P., Robins, R.W., Pervin, L.A., Eds.; The Guilford Press: New York, NY, USA, 2008; pp. 654–678. [Google Scholar]
  30. Sheldon, K.M.; Deci, E. Self-Determination Scale (SDS). APA PsycTests. 1993. Available online: https://psycnet.apa.org/doiLanding?doi=10.1037%2Ft53985-000 (accessed on 21 July 2024).
  31. Sheldon, K.M. Creativity and self-determination in personality. Creat. Res. J. 1995, 8, 61–72. [Google Scholar] [CrossRef]
  32. The American Institutes for Research (AIR). AIR Self-Determination Scale. 1994. Available online: https://www.ou.edu/zarrow/AIR%20User%20Guide.pdf (accessed on 21 July 2024).
  33. Wehmeyer, M.L. Arc’s Self-Determination Scale. APA PsycTests. 1995. Available online: https://www.thearc.org/wp-content/uploads/forchapters/SD%20Scale%20Procedural%20Guidelines.pdf (accessed on 21 July 2024).
  34. Self-Determination Theory. Available online: https://www.selfdeterminationtheory.org (accessed on 17 July 2024).
  35. Chen, L.; Chen, P.; Lin, Z. Artificial intelligence in education: A review. IEEE Access 2020, 8, 75264–75278. [Google Scholar] [CrossRef]
  36. George, B.; Wooden, O. Managing the strategic transformation of higher education through artificial intelligence. Admin. Sci. 2023, 13, 196. [Google Scholar] [CrossRef]
  37. Bandura, A. Perceived Self-Efficacy in Cognitive Development and Functioning. Educ. Psychol. 1993, 28, 117–148. [Google Scholar] [CrossRef]
  38. Bandura, A.; Barbaranelli, C. Multifaceted Impact of Self-Efficacy Beliefs on Academic Functioning. Child Dev. 1996, 67, 1206–1222. [Google Scholar] [CrossRef]
  39. Grassini, S. Shaping the future of education: Exploring the potential and consequences of AI and ChatGPT in educational settings. Educ. Sci. 2023, 13, 692. [Google Scholar] [CrossRef]
  40. Holmes, W.; Bialik, M.; Fadel, C. Artificial Intelligence in Education: Promises and Implications for Teaching and Learning; Center for Curriculum Redesign: Boston, MA, USA, 2019. [Google Scholar]
  41. Lo, C.K. What is the impact of ChatGPT on education? A rapid review of the literature. Educ. Sci. 2023, 13, 410. [Google Scholar] [CrossRef]
  42. Ryan, R.M.; Deci, E.L. Self-Determination Theory and the Facilitation of Intrinsic Motivation, Social Development, and Well-Being. Am. Psychol. 2000, 55, 68–78. [Google Scholar] [CrossRef]
  43. Rawas, S. ChatGPT: Empowering lifelong learning in the digital age of higher education. Educ. Inf. Technol. 2024, 29, 6895–6908. [Google Scholar] [CrossRef]
  44. Giansanti, D. Precision Medicine 2.0: How digital health and AI are changing the game. J. Pers. Med. 2023, 13, 1057. [Google Scholar] [CrossRef] [PubMed]
  45. Bandura, A. Human Agency in Social Cognitive Theory. Am. Psychol. 1989, 44, 1175–1184. [Google Scholar] [CrossRef]
  46. Ng, D.T.K.; John, B.; Gaiser, E.; Weible, J. The impact of artificial intelligence on learner–instructor interaction in online learning. Int. J. Educ. Technol. High. Educ. 2021, 18, 54. [Google Scholar] [CrossRef]
  47. Yu, Y.; Zhuang, Y.; Zhang, J.; Meng, Y.; Ratner, A.J.; Krishna, R.; Shen, J.; Zhang, C. Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias. arXiv 2023, arXiv:2306.15895. [Google Scholar] [CrossRef]
  48. Borji, A. A categorical archive of ChatGPT failures. arXiv 2023, arXiv:2302.03494. [Google Scholar] [CrossRef]
  49. Zajko, M. Artificial intelligence, algorithms, and social inequality: Sociological contributions to contemporary debates. Sociol. Compass 2022, 16, e12962. [Google Scholar] [CrossRef]
  50. Tlili, A.; John, B.; Gaiser, E.; Weible, J. What if the devil is my guardian angel: ChatGPT as a case study of using chatbots in education. Smart Learn. Environ. 2023, 10, 15. [Google Scholar] [CrossRef]
  51. Imran, A. Why addressing digital inequality should be a priority. Electron. J. Inf. Syst. Dev. Ctries. 2023, 89, e12255. [Google Scholar] [CrossRef]
  52. Hill, C.; Lawton, W. Universities, the digital divide and global inequality. J. High. Educ. Policy Manag. 2018, 40, 598–610. [Google Scholar] [CrossRef]
  53. Breier, M. From ‘financial considerations’ to ‘poverty’: Towards a reconceptualisation of the role of finances in higher education student drop out. High. Educ. 2010, 60, 657–670. [Google Scholar] [CrossRef]
  54. Akour, M.; Alenezi, M. Higher education future in the era of digital transformation. Educ. Sci. 2022, 12, 784. [Google Scholar] [CrossRef]
  55. Guerra-Carrillo, B.; Katovich, K.; Bunge, S.A. Does higher education hone cognitive functioning and learning efficacy? Findings from a large and diverse sample. PLoS ONE 2017, 12, e0182276. [Google Scholar] [CrossRef]
  56. Tlili, A.; John, B.; Gaiser, E.; Weible, J. Analyzing the students’ views, concerns, and perceived ethics about chat GPT usage. Comput. Educ. Artif. Intell. 2023, 5, 100180. [Google Scholar] [CrossRef]
  57. Farrokhnia, M.; Ateser, I.; Pazouki, A.; Noroozi, O. A SWOT analysis of ChatGPT: Implications for educational practice and research. Innov. Educ. Teach. Int. 2024, 61, 460–474. [Google Scholar] [CrossRef]
  58. Ray, P.P. ChatGPT: A comprehensive review on background, applications, key challenges, bias, ethics, limitations and future scope. Internet Things Cyber-Physical Syst. 2023, 3, 121–154. [Google Scholar] [CrossRef]
  59. Sallam, M. ChatGPT utility in healthcare education, research, and practice: Systematic review on the promising perspectives and valid concerns. Healthcare 2023, 11, 887. [Google Scholar] [CrossRef]
  60. Khine, M.; Nielsen, T. Academic Self-Efficacy in Education: Nature, Assessment, and Research; Springer: Berlin/Heidelberg, Germany, 2022. [Google Scholar]
  61. Fabrigar, L.R.; Wegener, D.T. Exploratory Factor Analysis; Oxford University Press: Oxford, UK, 2012. [Google Scholar]
  62. Costello, A.B.; Osborne, J.W. Best practices in exploratory factor analysis: Four recommendations for getting the most from your analysis. Pract. Assess. Res. Eval. 2005, 10, 1–9. [Google Scholar]
  63. Kline, R.B. Principles and Practice of Structural Equation Modeling, 4th ed.; Guilford Press: New York, NY, USA, 2015. [Google Scholar]
  64. Byrne, B.M. Structural Equation Modeling with AMOS: Basic Concepts, Applications, and Programming, 3rd ed.; Routledge: New York, NY, USA, 2016. [Google Scholar]
  65. Brown, T.A. Confirmatory Factor Analysis for Applied Research, 2nd ed.; Guilford Press: New York, NY, USA, 2015. [Google Scholar]
  66. Schumacker, R.E.; Lomax, R.G. A Beginner’s Guide to Structural Equation Modeling, 4th ed.; Routledge: New York, NY, USA, 2016. [Google Scholar]
Figure 1. The path diagram.
Table 1. Pattern matrix.

Item | Component 1 | Component 2 | Component 3 | Component 4
Q1 Learning Enthusiasm | | 0.728 | |
Q2 New Methods Openness | | 0.880 | |
Q3 Learning Pace Control | | 0.564 | |
Q5 Independence Importance | | 0.630 | |
Q6 Clarification Tool Use | | 0.651 | |
Q8 Performance Optimism | 0.460 | | |
Q11 Tech Curriculum Connection | | | 0.620 |
Q12 Learning Experience Share | | | 0.689 |
Q13 Effort for Results | | | 0.770 |
Q16 New Material Understanding | | 0.405 | |
Q17 Learned Info Recall | | | 0.688 |
Q19 Knowledge Application | | | |
Q20 Problem Solving Skill | | | |
Q21 New Habits Openness | | | | 0.798
Q22 Learning Planning | | | | 0.707
Q23 Material Management | | | | 0.582
Q24 Realistic Goals Setting | | | | 0.435
Q25 Misconception Review | | | 0.536 |
Q26 Learning Abilities Confidence | 0.709 | | |
Q27 Difficult Tasks Completion | 0.791 | | |
Q28 Challenges Persistence | 0.705 | | |
Q29 Exam Preparedness | 0.811 | | |
Q30 Autonomous Learning Confidence | 0.680 | | |
Extraction method: principal component analysis. Rotation method: Promax with Kaiser normalization.
Table 2. The value of Cronbach’s alpha for the components.

Component | Cronbach’s Alpha | Number of Question Items
1. Academic Self-Efficacy and Preparedness | 0.882 | 6
2. Autonomy and Resource Utilization | 0.841 | 6
3. Interest and Engagement | 0.801 | 5
4. Self-Regulation and Goal Setting | 0.846 | 4
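For reference, a compact Python sketch (ours; it assumes NumPy and Likert-type responses) of how a Cronbach’s alpha such as those reported in Table 2 would be computed from raw item responses:

```python
# Illustrative sketch: Cronbach's alpha for the items of one component.
# `data` is an (n_respondents x k_items) array of Likert responses.
import numpy as np

def cronbach_alpha(data):
    k = data.shape[1]                         # number of items
    item_vars = data.var(axis=0, ddof=1)      # per-item sample variances
    total_var = data.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Example call on random stand-in data (real input: the survey responses):
rng = np.random.default_rng(1)
print(cronbach_alpha(rng.integers(1, 6, size=(100, 6)).astype(float)))
```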
Table 3. Factor score weights for the survey items.

Item | Factor 1 | Factor 2 | Factor 3 | Factor 4
Q1 | 0.014 | 0.061 | 0.026 | 0.002
Q2 | 0.024 | 0.106 | 0.044 | 0.004
Q3 | 0.014 | 0.063 | 0.026 | 0.002
Q5 | 0.013 | 0.058 | 0.024 | 0.002
Q6 | 0.016 | 0.069 | 0.029 | 0.002
Q8 | 0.095 | 0.017 | 0.021 | 0.012
Q11 | 0.024 | 0.035 | 0.140 | 0.021
Q12 | 0.009 | 0.014 | 0.055 | 0.008
Q13 | 0.015 | 0.022 | 0.088 | 0.013
Q16 | 0.038 | 0.169 | 0.071 | 0.006
Q17 | 0.018 | 0.026 | 0.105 | 0.016
Q21 | 0.021 | 0.005 | 0.035 | 0.178
Q22 | 0.016 | 0.004 | 0.027 | 0.138
Q23 | 0.024 | 0.005 | 0.038 | 0.198
Q24 | 0.023 | 0.005 | 0.038 | 0.195
Q25 | 0.015 | 0.022 | 0.089 | 0.013
Q26 | 0.088 | 0.016 | 0.020 | 0.011
Q27 | 0.065 | 0.012 | 0.014 | 0.008
Q28 | 0.099 | 0.018 | 0.022 | 0.012
Q29 | 0.132 | 0.024 | 0.029 | 0.016
Q30 | 0.140 | 0.025 | 0.031 | 0.017
