Article

The Role of Cognitive Processes and SDG Awareness in Student Engagement and Mathematics Learning Outcomes in Higher Education

Department of Mathematics, Rajamangala University of Technology Suvarnabhumi, Phra Nakhon Sri Ayutthaya 13000, Thailand
*
Author to whom correspondence should be addressed.
Sustainability 2026, 18(4), 2087; https://doi.org/10.3390/su18042087
Submission received: 12 January 2026 / Revised: 7 February 2026 / Accepted: 17 February 2026 / Published: 19 February 2026

Abstract

Higher education is increasingly integrating artificial intelligence (AI) and sustainability-oriented learning, aligning with the United Nations Sustainable Development Goals (SDGs). Nonetheless, there is limited empirical evidence explaining how cognitive processes, AI-supported learning (AI-SL), and SDG awareness (SDGA) jointly relate to learner engagement (ENG) and learning outcomes (LO) across academic disciplines. This study examines the associations among working memory (WM), metacognition (MET), reasoning ability (REA), AI-SL, SDGA, ENG, and LO in higher education. Survey data were collected from undergraduate students across nine campuses of a public university system in Thailand. Multi-group structural equation modeling (MG-SEM) was employed to compare students enrolled in STEM and social science programs. Measurement model evaluation indicated satisfactory reliability and convergent and discriminant validity, with partial scalar invariance supported across groups, enabling meaningful structural comparisons. The results showed that ENG was strongly associated with LO in both groups. MET emerged as the strongest cognitive correlate of ENG overall, while REA was more strongly associated with ENG among STEM students, and SDGA showed a stronger association among social science students. AI-SL was positively associated with ENG and LO, with discipline-specific differences in the strength of the effect. The model explained substantial variance in ENG (R2 = 0.48) and LO (R2 = 0.52). These findings emphasize the value of integrating cognitive processes (WM, MET, and REA), AI-SL, and sustainability-oriented content within discipline-sensitive instructional designs in higher education.

1. Introduction

The education sector is changing as artificial intelligence (AI) is increasingly integrated into higher education, aligning with the global agenda of the United Nations’ Sustainable Development Goals (SDGs). Universities are now expected not only to deliver disciplinary knowledge but also to develop students’ cognitive, technological, and sustainability-related competencies that prepare them to address complex global challenges [1,2]. Mathematics and other quantitatively intensive subjects are central to this agenda, as they foster problem-solving, modelling, and evidence-based decision-making skills that directly support SDG4 (Quality Education), SDG9 (Industry, Innovation, and Infrastructure), and SDG13 (Climate Action) [3,4]. Contemporary mathematics education research further highlights that advanced problem-solving and analytical thinking are essential for success in modern, technology-enhanced learning environments [5,6].
From a cognitive science perspective, higher-order cognitive processes play a crucial role in advanced mathematics learning. Working memory (WM) enables the temporary storage and manipulation of information required for complex tasks and has been consistently identified as a strong predictor of mathematics achievement across educational levels [7,8]. Metacognition (MET), encompassing planning, monitoring, and self-evaluation of learning, supports students in regulating strategies and adapting their approaches, thereby enhancing reasoning and academic performance [9,10]. Reasoning ability (REA) represents a core mathematical competence, particularly in STEM contexts, facilitating abstract thinking, conceptual understanding, and knowledge transfer [2,6]. WM, MET, and REA can potentially form a cognitive foundation that guides how students engage with complex mathematical ideas in digital and AI-supported learning environments.
AI-supported learning (AI-SL) systems are now widely adopted in universities worldwide. AI-SL offers intelligent tutoring, automated feedback, adaptive assessment, personalized learning pathways, and advanced analytics that support both learners and instructors [11,12,13]. Recent studies indicate that generative AI can improve learning efficiency and academic outcomes, while persistent concerns remain about academic integrity, cognitive offloading, and independent thinking [14]. Scholarly work, hence, stresses the need for responsible AI integration guided by institutional policies that encourage ethical use and sustained student engagement [15]. Empirical research also indicates that AI-SL effectiveness depends not only on technological sophistication but also on learners’ cognitive readiness and self-regulatory capacity [16].
Recent work by Putthidech et al. [17] builds on earlier studies and reports that students’ views of AI-based learning systems, e.g., adaptability, feedback accuracy, and trust, bear a strong relation to calculus achievement through cognitive and emotional pathways. The investigation, however, did not clarify the separate contributions of working memory, metacognition, and reasoning ability, and it left out sustainability-oriented competencies, including Sustainable Development Goal awareness (SDGA).
Sustainability has likewise become a strategic priority in higher education. Integrating sustainability content into STEM education promotes systems thinking and sustainability literacy [4,18,19,20], while learning about the SDGs has been shown to enhance student motivation, prosocial learning, and engagement [21,22]. SDGA may strengthen learner engagement (ENG) by increasing the perceived relevance of academic content to real-world and societal challenges. When students recognize the social purpose and sustainability implications of their learning, they are more likely to invest greater cognitive effort and emotional commitment. Moreover, SDG-informed instruction can foster systems thinking, encouraging learners to integrate knowledge across domains and engage more deeply with complex and ill-structured problems. Despite these promising insights, relatively few studies have examined how SDGA interacts with cognitive processes (WM, MET, and REA) and AI-SL to jointly shape ENG and learning outcomes (LO) in higher education.
Relationships among learning factors may vary across academic disciplines. STEM programmes place greater emphasis on abstract reasoning and algorithmic thinking, which aligns more closely with working memory and reasoning ability, whereas social science programmes prioritize reflective and conceptual thinking that relates more strongly to metacognition [23,24]. Systematic comparisons of disciplinary differences within a single analytical framework remain limited. The present study seeks to clarify how cognitive processes comprising WM, MET, and REA, AI-supported learning, and Sustainable Development Goal awareness relate to learning engagement and learning outcomes. A multi-group structural equation modeling analysis compares STEM and social science students to examine disciplinary variation. The central argument holds that linking cognitive, technological, and sustainability dimensions is essential for understanding learning engagement and outcomes in contemporary higher education.
Proposed hypotheses are grouped into three themes aligned with the study objectives. H1–H6 examine the links between WM, MET, and REA and both ENG and LO. H7–H11 address the direct roles of AI-SL, SDGA, and ENG in ENG and LO, while H12–H16 test indirect paths through ENG. H17 and H18 investigate disciplinary differences by testing through MG-SEM whether structural relations among WM, MET, REA, AI-SL, SDGA, ENG, and LO differ between STEM and social science students. All hypotheses (H1–H18) follow a consistent format and align directly with the proposed conceptual framework.
H1. 
WM is positively associated with ENG.
H2. 
MET is positively associated with ENG.
H3. 
REA is positively associated with ENG.
H4. 
WM is positively associated with LO.
H5. 
MET is positively associated with LO.
H6. 
REA is positively associated with LO.
H7. 
AI-SL is positively associated with ENG.
H8. 
AI-SL is positively associated with LO.
H9. 
SDGA is positively associated with ENG.
H10. 
SDGA is positively associated with LO.
H11. 
ENG is positively associated with LO.
H12. 
ENG mediates the relationship between WM and LO.
H13. 
ENG mediates the relationship between MET and LO.
H14. 
ENG mediates the relationship between REA and LO.
H15. 
ENG mediates the relationship between AI-SL and LO.
H16. 
ENG mediates the relationship between SDGA and LO.
H17. 
The relationships between cognitive processes (WM, MET, and REA) and ENG differ between STEM and social science students.
H18. 
The effects of AI-SL and SDGA on ENG and LO differ between STEM and social science students.
The study anticipates that AI-supported learning may show stronger associations with engagement and outcomes among STEM students, whereas SDG awareness may play a larger role for social science students.

2. Materials and Methods

The study was approved by the Institutional Review Board of Rajamangala University of Technology Suvarnabhumi on Ethics in Human Research (approval code IRB-RUS-2568-104).

2.1. Research Design

This study used a survey-based research design to examine the links among cognitive processes, AI-supported learning (AI-SL), Sustainable Development Goal awareness (SDGA), learner engagement (ENG), and learning outcomes (LO) in higher education. Undergraduate students from nine campuses within the Rajamangala University of Technology system in Thailand completed a structured questionnaire, and data were collected at a single point in time.
Following data collection, Structural Equation Modeling (SEM) served as the main analytical method for the theoretically specified relationships among latent constructs. The analysis began with a confirmatory factor analysis (CFA) to validate the measurement model, followed by estimation of the structural model to examine association patterns among constructs.
The analysis compared STEM students with Social Science students to examine disciplinary differences. Measurement invariance testing was conducted to confirm that both groups interpreted the survey items in a comparable way before any group comparisons were made. Data were collected at a single point in time, so the findings indicate associations rather than causal effects. The design supports testing theoretically guided relationships and identifying links that can be examined more rigorously through longitudinal or experimental research.

2.2. Measures

All study constructs were measured with a structured self-report questionnaire. This instrument captured students’ perceptions, cognitive processes (WM, MET, and REA), and learning behaviors. Instruments were adapted from the established literature and modified for use in higher education in Thailand. Items were reviewed for clarity, relevance, and content validity before data collection. To record responses, a five-point Likert scale was used, ranging from 1 (‘strongly disagree’) to 5 (‘strongly agree’), unless noted. Higher scores indicate greater levels of each construct.

2.2.1. Cognitive Factors

(a)
Working Memory (WM)
WM is a core cognitive system that supports the temporary storage and manipulation of information required for complex learning, reasoning, and problem-solving. In higher education, WM enables students to process academic material and manage cognitively demanding tasks, particularly in mathematics and technology-supported learning environments. Recent research consistently demonstrates that individual differences in WM capacity are strongly associated with ENG, problem-solving efficiency, and academic achievement [8,25].
Building upon this, maintaining information during academic tasks (WM1) refers to students’ ability to keep task-relevant information actively available while performing learning activities. This enables learners to retain instructions, key concepts, or problem conditions long enough for meaningful cognitive processing. When students can do this effectively, they are better equipped to sustain attention and avoid cognitive disruption during demanding academic tasks [25].
Updating information while learning (WM2) reflects students’ capacity to revise and replace information in WM as new or more accurate input becomes available. This process supports cognitive flexibility, enabling learners to adjust their understanding and strategies in response to feedback or changing task requirements. Effective updating has been identified as a central executive function that facilitates deep learning and adaptive problem solving in dynamic learning environments [26,27].
Managing multiple ideas simultaneously (WM3) captures the ability to coordinate and integrate multiple pieces of information. This capability is essential for higher-order reasoning, conceptual integration, and decision-making in complex academic contexts. Furthermore, research grounded in cognitive load theory suggests that students who efficiently manage multiple ideas experience reduced cognitive overload and improved learning performance [28,29,30].
Tracking multistep problem-solving processes (WM4) refers to students’ capacity to monitor and retain sequential steps in tasks that demand extended reasoning. This aspect of WM is especially important in mathematics and analytical disciplines, where solutions depend on interconnected steps and intermediate results. Strong multistep tracking ability supports logical coherence and accuracy in problem solving and relates to stronger mathematical reasoning and academic performance [6,7]. Sample items include “I can remember important details even when I am focused on other things at the moment” and “I am able to update and adjust information in my mind while working through a task.” The scale is scored such that higher scores indicate stronger WM capacity. A pilot study found that the WM scale demonstrated acceptable internal consistency, with a Cronbach’s alpha of 0.83, supporting its suitability for research in higher education contexts.
(b)
Metacognition (MET)
MET refers to individuals’ awareness and regulation of their own cognitive processes during learning. It involves the ability to plan, monitor, and evaluate one’s understanding and strategies to achieve learning goals effectively [31]. In higher education, MET is widely recognised as a key component of self-regulated learning and has been shown to enhance academic engagement, problem-solving efficiency, and LO, particularly in complex and technology-supported learning environments [10,27].
Planning learning strategies (MET1) reflects students’ ability to set goals, select appropriate strategies, and allocate cognitive resources before engaging in learning tasks. Effective planning helps learners approach academic activities with clear intentions and structured methods, which supports deeper engagement and more efficient learning processes. Research indicates that students who actively plan their learning tend to demonstrate higher levels of academic control and persistence [10].
Complementing planning, monitoring understanding (MET2) refers to students’ ongoing awareness of their comprehension and performance during learning. This ability enables learners to detect confusion, identify errors, and recognise when additional effort or alternative strategies are needed. Continuous monitoring is essential for maintaining learning effectiveness, especially in complex tasks that require sustained attention and adaptive thinking [27,32].
Evaluating learning progress (MET3) captures students’ ability to reflect on their LO and assess whether their goals have been achieved. In turn, this evaluative process enables learners to assess the effectiveness of their strategies and identify areas for improvement. Furthermore, empirical studies show that reflective evaluation supports long-term learning and improves academic performance in higher education contexts [10,33].
Adjusting strategies when ineffective (MET4) refers to students’ capacity to modify learning approaches when current strategies do not lead to successful outcomes. Adaptive regulation enables flexible responses to challenges and changing task demands. Strategic adjustment characterizes effective self-regulated learners and relates to stronger engagement and resilience in demanding academic contexts [27,32]. Sample items include “I monitor my reading comprehension throughout my work on all course assignments” and “I change my learning approach when I understand that my current methods are not leading to successful results.” The assessment evaluates MET ability by capturing awareness and regulation of cognitive processes through a scoring system in which higher ratings reflect stronger skills. Pilot testing indicated high internal consistency for the MET items, with a Cronbach’s alpha of 0.88, supporting their use in higher education research.
(c)
Reasoning Ability (REA)
REA refers to individuals’ capacity to think logically, identify relationships among concepts, and draw valid conclusions to solve problems. In higher education, REA is a fundamental cognitive competence that underpins problem-solving, decision-making, and knowledge transfer across disciplines. Recent studies highlight that strong reasoning skills are particularly critical in mathematics, STEM-related fields, and technology-enhanced learning environments, where learners must analyse information systematically and apply concepts to novel situations [2,6].
Building on this foundation, logical reasoning and inference (REA1) reflects students’ ability to evaluate arguments, identify logical relationships, and draw sound conclusions from available information. This process enables students to interpret data, assess the validity of solutions, and avoid reasoning errors in academic tasks. Empirical evidence shows strong associations between logical reasoning and both academic engagement and performance in analytically demanding subjects [6].
In addition, analytical problem-solving (REA2) means students can break down complex problems into manageable components and use systematic strategies to reach solutions. This skill helps learners structure their thinking, compare approaches, and select solution paths. Research suggests that analytical problem-solving is key to engagement and improved outcomes in higher education, especially in problem-based and AI-supported learning contexts [2,27].
Applying reasoning to new situations (REA3) refers to students’ capacity to reason in unfamiliar contexts. The construct reflects flexible thinking and the ability to extend reasoning beyond routine tasks. Mathematics education research links cross-context reasoning with advanced cognitive competence and regards it as essential for meaningful learning and long-term academic success [6,24]. Sample items include “The AI tools I use enable me to learn complex subjects at a better level” and “AI tools, including essay grading software and automated math solvers, provide feedback that helps me enhance my learning activities.” Higher REA is indicated by higher scores on the scale. A Cronbach’s alpha of 0.86 indicated strong internal consistency for the REA scale, supporting its use in higher education research.

2.2.2. AI-Supported Learning (AI-SL)

AI-SL refers to students’ perceptions of how artificial intelligence–based tools and systems support, enhance, and personalise their learning processes in higher education. In contemporary university settings, AI-SL plays an increasingly important role by providing adaptive learning pathways, automated feedback, intelligent tutoring, and real-time learning analytics. These features are designed to support students’ cognitive processing, sustain engagement, and improve LO, particularly in mathematically and analytically demanding subjects [11,12].
Perceived usefulness of AI tools for learning (AI-SL1) reflects students’ beliefs that AI systems help them better understand course content. Learners who perceive AI tools as useful are more likely to integrate them into routines and stay engaged. Prior studies suggest that perceived usefulness (the belief that technology improves learning) is a key factor in students’ acceptance of AI-based learning systems. This perception also affects continued use in higher education [11,17].
The accuracy of AI-generated feedback (AI-SL2) captures how correct, clear, and relevant students find the feedback produced by AI systems. Accurate, timely feedback helps students spot errors, refine understanding, and use adaptive learning strategies. Studies show that high-quality AI feedback improves learning and supports deeper engagement, especially in problem-solving and quantitative tasks [12,14].
Trust in AI-supported learning systems (AI-SL3) reflects students’ confidence in the reliability and fairness of AI tools. Learners must trust AI recommendations and feedback in order to rely on them. Evidence shows that trust in AI increases ENG and mediates the link between AI use and LO [17].
AI assistance in understanding complex content (AI-SL4) reflects students’ perceptions that AI tools explain difficult concepts, guide problem-solving, and reduce cognitive load. This dimension shows how AI-SL supports students’ WM, MET, and REA [16,29], and studies indicate that such assistance helps sustain engagement and improve academic performance in higher education [11,27]. Example items include “The AI tools I use help me understand complex topics more clearly” and “AI-generated feedback helps me improve my learning strategies.” Higher scores on the AI-SL scale indicate stronger perceived support from AI-based learning systems. Pilot testing revealed high internal consistency, with a Cronbach’s alpha exceeding 0.90, supporting the scale’s suitability for research on AI-enhanced learning in higher education.

2.2.3. SDG Awareness (SDGA)

SDGA encompasses students’ comprehension of the UN Sustainable Development Goals (SDGs) and awareness of sustainability challenges at both global and societal scales. In higher education, SDGA has emerged as an important LO that supports students’ motivation, ethical responsibility, and ENG with socially relevant academic content. Recent studies indicate that integrating SDG-related knowledge into university curricula is positively related to students’ engagement in learning and contributes to more meaningful, purpose-driven learning experiences [1,22].
Building on this foundation, understanding of the Sustainable Development Goals (SDG1) reflects students’ knowledge of the core principles, objectives, and scope of the SDGs. This dimension captures learners’ awareness of how sustainability goals relate to education, society, and future professional roles. Students who clearly understand the SDGs tend to place greater value on sustainability-oriented learning and show higher levels of academic engagement [22].
In addition to understanding the SDGs, awareness of global sustainability challenges (SDG2) involves students recognizing the major environmental, social, and economic issues the goals address, such as climate change, inequality, and resource scarcity. This awareness enables learners to connect academic content with real-world problems and fosters systems thinking. Empirical evidence shows that exposure to global sustainability challenges is positively related to students’ critical thinking and motivation to engage in interdisciplinary learning [18,21].
Responsibility for sustainable development (SDG3) reflects students’ personal and social responsibility. The construct encourages support for sustainable development through individual actions and future career choices. The dimension captures the attitudinal and ethical aspects of SDGA. Higher education research links a strong sense of sustainability responsibility with ENG, prosocial orientations, and long-term societal commitment [4,22]. Sample items include “I understand the importance of the United Nations Sustainable Development Goals” and “I take it as my duty to help solve worldwide sustainability problems which affect the planet.” The SDGA assessment yields higher scores for stronger performance. Pilot testing reported acceptable internal consistency for the SDGA scale, with Cronbach’s alpha of 0.87, supporting its suitability for higher education research.

2.2.4. Learner Engagement (ENG)

ENG refers to the degree to which students are actively involved in their learning activities through behavioral participation, cognitive investment, and emotional commitment. In higher education, ENG is widely recognised as a central mechanism linking instructional practices, cognitive processes (WM, MET, and REA), and LO. Recent research consistently demonstrates that higher levels of engagement are associated with improved academic performance, persistence, and deeper learning, particularly in technology-enhanced and student-centred learning environments [34,35].
Active participation in learning activities (ENG1) means students are visibly involved in academic tasks, such as joining discussions, completing assignments, and putting effort into coursework. Here, “active participation” refers to the direct, purposeful actions students take to engage with materials, peers, or instructors. This readiness to engage has been linked to higher achievement and persistence in higher education [34].
In addition to active participation, sustained attention and effort (ENG2) means students stay focused, concentrated, and persistent with academic tasks, even when they are challenging. Sustained engagement supports deeper thinking and lowers the chance of disengagement in complex situations. Studies suggest that students who sustain effort achieve better outcomes and remain motivated [35].
Cognitive and emotional involvement in learning (ENG3) reflects students’ psychological investment, including interest, enjoyment, and perceived relevance of the content. This dimension combines cognitive engagement and emotional commitment, enhancing meaningful learning and long-term success. Recent research shows that emotional and cognitive engagement are critical for deep understanding and adaptive learning in higher education [34,36].

2.2.5. Learning Outcomes (LO)

LO refers to the extent to which students achieve intended academic goals as a result of their learning experiences. In higher education, learning outcomes encompass both perceived learning gains and demonstrated academic performance, reflecting students’ cognitive development, problem-solving ability, and mastery of course content. Recent research emphasizes that LO are shaped not only by instructional quality but also by students’ cognitive processes (WM, MET, and REA), ENG, and self-regulation, particularly in technology-supported learning environments [35,36].
Achievement of learning objectives (LO1) reflects students’ perceived attainment of course goals and competencies. This dimension captures whether learners believe they have met academic standards. Studies indicate that perceived achievement of learning objectives closely relates to motivation, engagement, and persistence and serves as an important indicator of instructional effectiveness [35].
Improvement in subject understanding (LO2) refers to students’ perceived growth in conceptual knowledge and comprehension of course material. This dimension reflects deeper learning beyond rote memorization, emphasizing meaningful understanding and the ability to explain or apply concepts. Empirical evidence suggests that improvements in subject understanding are strongly associated with active engagement and cognitively demanding learning activities, particularly in mathematics and problem-based learning contexts [27,36].
Confidence in problem-solving ability (LO3) refers to students’ judgments about their capacity to solve academic problems. The construct signals readiness to transfer learning to new challenges. Research links stronger confidence with higher achievement, sustained engagement, and positive academic progress in higher education [6,17]. Sample items include “I have developed better skills to tackle problems that the course presents” and “I have reached all the learning targets that the subject established.” Higher values on perceived and objective measures indicate stronger LO. The Perceived Learning Scale demonstrated strong internal consistency, with a Cronbach’s alpha of 0.91, supporting its use in higher education research.
The constructs and measurement items are shown in Table 1, along with item codes, brief descriptions, number of items, and source references.

2.2.6. Content Validity and Expert Review

All measurement items underwent a systematic expert review before data collection. The questionnaire was developed by adapting validated instruments from previous studies by Schraw and Moshman [37]. These instruments were refined to fit higher education, AI-SL, and sustainability education. Clarity and construct relevance were emphasized. Alignment between theoretical definitions and observed indicators was also ensured. Building upon this foundation, the following steps enhanced the instrument’s rigor.
A panel of three experts in educational psychology, mathematics education, and instructional technology evaluated each item independently. Item–Objective Congruence (IOC) values were calculated to assess content validity. Items with IOC values below 0.50 were revised or removed, based on expert feedback. Minor wording adjustments enhanced conceptual clarity and contextual appropriateness. After incorporating these revisions, additional validation steps were taken.
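The IOC screening described above reduces to a simple calculation: each expert rates an item +1 (congruent with its objective), 0 (unsure), or −1 (incongruent), and the IOC index is the mean rating across experts, with items below 0.50 flagged for revision. The sketch below uses hypothetical ratings, not the panel’s actual scores.

```python
import pandas as pd

# Hypothetical ratings from a three-expert panel (+1 congruent, 0 unsure, -1 incongruent)
ratings = pd.DataFrame(
    {"expert_1": [1, 1, 0, 1], "expert_2": [1, 0, 0, 1], "expert_3": [1, 1, -1, 1]},
    index=["item_1", "item_2", "item_3", "item_4"],
)

ioc = ratings.mean(axis=1)        # Item-Objective Congruence per item
flagged = ioc[ioc < 0.50]         # items to revise or remove per the 0.50 cutoff
print(ioc.round(2))
print("Revise or remove:", list(flagged.index))
```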
The revised instrument was pilot-tested with undergraduate students who were not included in the final sample to assess item clarity and response consistency. Reliability analysis from the pilot data indicated acceptable internal consistency across all constructs. These combined phases of expert review and pilot testing confirmed that the measurement instrument was valid, reliable, and suitable for investigating cognitive processes (WM, MET, and REA), ENG, and LO in higher education settings.

2.2.7. Pilot Testing

A pilot study was conducted with 50 undergraduate students from Rajamangala University of Technology Suvarnabhumi. These students were excluded from the final study sample. The pilot testing was intended to assess the appropriateness of the measurement tools prior to extensive data collection. It specifically evaluated item clarity, suitability for higher education and AI-SL environments, and response variability across all constructs.
The results showed that all questionnaire items were clearly understood by participants. The items were appropriately worded for the target population and showed sufficient response variance. No items were reported as confusing or ambiguous. Only minor wording refinements were made to enhance clarity. Reliability analysis indicated strong internal consistency, with Cronbach’s alpha coefficients exceeding 0.82 for all constructs.
Due to the sample size limitations, factor analysis was not performed during pilot testing. Stable factor structure assessment requires larger samples. Confirmatory factor analysis (CFA) was therefore conducted on the full dataset (N = 762) to examine construct validity and the factorial structure, following established guidelines for structural equation modeling (SEM). The combination of pilot reliability findings and expert content validation demonstrated strong measurement quality across the study instruments.
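Reliability coefficients such as those reported for the pilot data can be recomputed directly from raw item responses. The following is a minimal sketch, assuming a pandas DataFrame whose columns hold the Likert responses for one scale; the demo data are random and the snippet is illustrative rather than the authors’ actual scoring workflow.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of Likert items (rows = respondents, columns = items)."""
    items = items.dropna()
    k = items.shape[1]                          # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical usage with four items of one scale (random demo data)
rng = np.random.default_rng(0)
demo = pd.DataFrame(rng.integers(1, 6, size=(50, 4)), columns=["WM1", "WM2", "WM3", "WM4"])
print(round(cronbach_alpha(demo), 2))
```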

2.3. Participants

Undergraduate students from mathematics, general education, and interdisciplinary courses at the nine campuses of the Rajamangala University of Technology system in Thailand participated in the study, offering academic and institutional diversity. Including these campuses ensured a range of educational environments and approaches, including AI-based learning systems, thereby strengthening the generalizability of the findings.
The research employed stratified sampling by dividing undergraduates into two strata: STEM fields (engineering, industrial technology) and applied/social sciences (business administration, humanities, social sciences). A total of 800 participants were randomly selected, including 412 from STEM and 388 from the social sciences. After screening for missing values, identifying multivariate outliers using Mahalanobis distance, and checking for response inconsistencies, 762 valid responses remained for analysis.
The research sample size exceeded the minimum requirements for SEM. With 35 model parameters, Kline [38] suggests at least 350–400 participants. For multi-group SEM (MG-SEM), Hair et al. [39] recommend a minimum of 200 participants per group to achieve analytical stability and power. The final dataset thus met these criteria, supporting reliable SEM and Multi-Group analyses of configural, metric, and scalar invariance.
Participants ranged in age from 18 to 24 years old (M = 20.6, SD = 1.4); 56% were female, 43% were male, and 1% identified as other. All participants provided electronic consent, and the study complied with the university’s research ethics guidelines.

2.4. Data Collection Procedure

The Institutional Review Board of Rajamangala University of Technology Suvarnabhumi approved the study. The target population included undergraduate students from nine campuses in the university system. Before collecting data, we obtained permission from academic units. We informed participants about the study’s purpose, voluntary participation, and data confidentiality.
The questionnaire was administered using an online survey platform, which facilitated access across campuses and ensured standardized data collection procedures. To achieve adequate representation of students from both STEM and Social Science disciplines, a stratified sampling approach was employed. The survey link was then distributed through official university channels and course instructors during the academic semester.
Participants provided electronic informed consent before responding to the questionnaire. Only students who agreed to participate continued to the survey. The questionnaire took about 15–20 min to complete. Data collection lasted for four weeks to allow sufficient participation time. After data collection, responses were screened for missing values, inconsistencies, and outliers. Consequently, incomplete or invalid responses were excluded. As a result, the final dataset was suitable for statistical analyses, including CFA and SEM.

2.5. Data Analysis

Data analysis used a two-stage SEM approach. Prior to hypothesis testing, researchers checked the dataset for missing values, outliers, and distributional issues. The expectation–maximization method handled minimal missing data, while skewness and kurtosis were used to assess univariate normality. Researchers identified multivariate outliers using Mahalanobis distance. Variance inflation factors (VIFs) were used to assess multicollinearity; all values were below the acceptable threshold.
The first stage consisted of CFA to assess the measurement model. This process examined factor loadings, which indicate the strength of the relationship between each item and its underlying factor, alongside construct reliability and validity. Convergent validity was evaluated using standardized factor loadings, composite reliability (CR), and average variance extracted (AVE), while discriminant validity was assessed using the Fornell–Larcker criterion and the heterotrait–monotrait (HTMT) ratio. Model fit was assessed using indices including the comparative fit index (CFI), the Tucker–Lewis index (TLI), the root mean square error of approximation (RMSEA), and the standardized root mean square residual (SRMR). The second stage used SEM to examine the proposed relationships among cognitive processes (WM, MET, and REA), AI-SL, SDGA, ENG, and LO. Direct and indirect effects were estimated as standardized path coefficients, and mediation effects, in which a third variable transmits part of the association between two others, were examined using bootstrapping with 5000 resamples and bias-corrected confidence intervals.
MG-SEM analysis compared STEM (Science, Technology, Engineering, and Mathematics) and Social Science students to examine differences between disciplines. Researchers sequentially tested measurement invariance at three levels: configural (the same basic factor structure), metric (equal factor loadings), and scalar (equal item intercepts). After confirming these levels, they made structural comparisons. Analyses adhered to established SEM guidelines for educational research.

2.5.1. Preliminary Screening and Assumptions

The research team conducted data screening to verify that the data met SEM requirements, following established procedures. The team allowed for less than 5% missing data points. They determined these values could be imputed using the expectation–maximization method [40]. The researchers used skewness values between −2 and 2 and kurtosis values between −2 and 2 to check for univariate normality [38]. VIF values below 5 indicated the absence of significant multicollinearity [39]. The Mahalanobis distance test (p < 0.001) identified multivariate outliers using the chi-square distribution method by Kline [38]. The analysis used robust maximum likelihood estimation (MLR) when researchers detected non-normality, as this method performs best for such data [41]. This process established that the data fulfilled all necessary conditions for CFA and MG-SEM analysis.
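The screening rules above (skewness and kurtosis within ±2, VIF below 5, and a Mahalanobis chi-square cutoff at p < 0.001) translate directly into a few checks. The sketch below assumes a pandas DataFrame of item scores named df; it illustrates the decision rules rather than reproducing the authors’ actual workflow, which was run in jamovi.

```python
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.outliers_influence import variance_inflation_factor

def screen(df: pd.DataFrame, alpha: float = 0.001) -> None:
    X = df.dropna().to_numpy(dtype=float)

    # Univariate normality: skewness and excess kurtosis within +/- 2
    print("skewness ok:", bool(np.all(np.abs(stats.skew(X, axis=0)) < 2)))
    print("kurtosis ok:", bool(np.all(np.abs(stats.kurtosis(X, axis=0)) < 2)))

    # Multivariate outliers: squared Mahalanobis distance vs. chi-square cutoff
    centered = X - X.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(X, rowvar=False))
    d2 = np.einsum("ij,jk,ik->i", centered, inv_cov, centered)
    cutoff = stats.chi2.ppf(1 - alpha, df=X.shape[1])
    print("multivariate outliers:", int((d2 > cutoff).sum()))

    # Multicollinearity: VIF below 5 for every indicator (intercept added for the regressions)
    Xc = np.column_stack([np.ones(len(X)), X])
    vifs = [variance_inflation_factor(Xc, i) for i in range(1, Xc.shape[1])]
    print("max VIF:", round(max(vifs), 2))
```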

2.5.2. Confirmatory Factor Analysis (CFA)

The CFA evaluated the measurement model of all latent variables using the approach of Hair et al. [39]. The model fit assessment included several indicators: CFI and TLI values above 0.90, RMSEA below 0.08, and SRMR below 0.08, indicating acceptable model fit [42]. Convergent validity was established through factor loadings, all of which exceeded 0.50; CR values surpassed 0.70, and AVE values exceeded 0.50 [43]. Two criteria were applied to establish discriminant validity: the Fornell–Larcker criterion required the square root of each AVE to exceed the inter-construct correlations, and the HTMT ratio had to remain below 0.85 [44]. All constructs met the required thresholds, confirming their reliability and validity for subsequent SEM analyses.
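Composite reliability and AVE follow standard formulas applied to the standardized loadings: CR = (Σλ)2 / [(Σλ)2 + Σ(1 − λ2)] and AVE = Σλ2 / k. A minimal sketch with placeholder loadings (not the study’s values):

```python
import numpy as np

def cr_ave(loadings):
    """Composite reliability (CR) and average variance extracted (AVE) from standardized loadings."""
    lam = np.asarray(loadings, dtype=float)
    error_var = 1.0 - lam**2                     # standardized error variances
    cr = lam.sum()**2 / (lam.sum()**2 + error_var.sum())
    ave = np.mean(lam**2)
    return cr, ave

# Hypothetical loadings for a four-indicator construct
cr, ave = cr_ave([0.78, 0.81, 0.74, 0.85])
print(f"CR = {cr:.2f} (threshold 0.70), AVE = {ave:.2f} (threshold 0.50)")
```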

2.5.3. Measurement Invariance Testing

The research employed MG-SEM to assess STEM students relative to Social Science students. Measurement invariance was examined in three hierarchical steps following Vandenberg and Lance [45]:
(a)
Configural Invariance
Configural invariance was first examined to determine whether the STEM and Social Science groups shared the same underlying factor structure. This step evaluates whether the pattern of fixed and free parameters is equivalent across groups, without requiring parameter equality. The configural model therefore serves as the baseline for the subsequent invariance tests. The model showed acceptable fit in both groups, indicating that the factor structure was the same across the two groups, consistent with the guidance of Byrne [41].
(b)
Metric Invariance
Following the establishment of configural invariance, metric invariance was examined by constraining factor loadings to be equal across the two groups. This step evaluates whether the indicators relate to their latent constructs equivalently in both groups, which is a prerequisite for comparing structural relationships. Metric invariance was assessed by examining changes in global fit indices rather than relying solely on chi-square difference tests, which are sensitive to sample size. Consistent with contemporary methodological recommendations, invariance was supported when the change in the comparative fit index (ΔCFI) did not exceed 0.01 and the change in the root mean square error of approximation (ΔRMSEA) did not exceed 0.015 [46,47]. Constraining the factor loadings did not cause a major deterioration in model fit, which supported metric invariance and indicated that the latent constructs had similar measurement properties across academic fields.
(c)
Scalar Invariance
Scalar invariance was evaluated next by additionally constraining item intercepts to be equal across groups. Scalar invariance indicates that observed score differences between groups stem from true differences in the latent variables rather than from measurement artifacts. As at the metric level, invariance was evaluated using ΔCFI ≤ 0.01 and ΔRMSEA ≤ 0.015; the decision rules are sketched below. The results supported partial scalar invariance, enabling comparisons of latent means and structural parameters across groups. This level of invariance makes the observed group differences in the constructs meaningful to interpret.
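A minimal sketch of the ΔCFI/ΔRMSEA decision rules applied to the three nested models. The fit values are illustrative placeholders chosen to mirror the pattern reported in Section 3.6 (metric invariance retained, full scalar invariance initially rejected); they are not taken from the fitted models directly.

```python
def invariance_held(fit_less_constrained: dict, fit_more_constrained: dict,
                    d_cfi_max: float = 0.01, d_rmsea_max: float = 0.015) -> bool:
    """True if adding equality constraints does not meaningfully worsen model fit."""
    d_cfi = fit_less_constrained["cfi"] - fit_more_constrained["cfi"]
    d_rmsea = fit_more_constrained["rmsea"] - fit_less_constrained["rmsea"]
    return d_cfi <= d_cfi_max and d_rmsea <= d_rmsea_max

# Illustrative fit indices for the configural, metric, and scalar models
configural = {"cfi": 0.930, "rmsea": 0.057}
metric     = {"cfi": 0.928, "rmsea": 0.058}
scalar     = {"cfi": 0.916, "rmsea": 0.061}

print("metric invariance:", invariance_held(configural, metric))   # True
print("scalar invariance:", invariance_held(metric, scalar))       # False -> free intercepts, retest
```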

2.5.4. Structural Model Evaluation and Multi-Group Analysis

After confirming adequate measurement invariance, the structural model was estimated to examine the hypothesized relationships among cognitive processes (WM, MET, and REA), AI-SL, SDGA, ENG, and LO. The research used SEM with robust maximum likelihood estimation to address minor violations of normality in the data. Model fit was evaluated using multiple indices, including CFI, TLI, RMSEA, and SRMR, to determine whether the model met acceptable standards of adequacy.
The team examined path coefficients to determine the strength and direction of relationships among variables. This analysis allowed them to evaluate how WM, MET, REA, AI-SL, and SDGA affected student ENG and LO. They used 5000 bootstrapped resamples to assess indirect effects and created bias-corrected confidence intervals for mediation analysis. Moreover, the study compared effects across academic subjects through MG-SEM analyses of STEM and Social Science students. Sequential constraints were applied to structural paths, and changes in fit indices (ΔCFI ≤ 0.01) were tested for significant group differences. This approach enables researchers to detect effects specific to each group when examining responses to cognitive, technological, and sustainability factors.
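The bias-corrected bootstrap used for the mediation tests can be illustrated outside the latent model. The sketch below is a simplified, observed-variable analogue, assuming numpy arrays of composite scores for a predictor x, the mediator (ENG), and the outcome (LO); the actual analysis estimated the indirect effects within the SEM using 5000 resamples. An indirect effect is taken as significant when the resulting interval excludes zero.

```python
import numpy as np
from scipy import stats

def indirect_effect(x, m, y):
    """a*b indirect effect: slope of m on x, times slope of y on m controlling for x."""
    a = np.polyfit(x, m, 1)[0]
    design = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(design, y, rcond=None)[0][2]
    return a * b

def bc_bootstrap_ci(x, m, y, n_boot=5000, alpha=0.05, seed=1):
    """Bias-corrected bootstrap confidence interval for the indirect effect."""
    rng = np.random.default_rng(seed)
    est = indirect_effect(x, m, y)
    boots = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, len(x), len(x))          # resample cases with replacement
        boots[i] = indirect_effect(x[idx], m[idx], y[idx])
    z0 = stats.norm.ppf((boots < est).mean())          # bias-correction constant
    z_lo, z_hi = stats.norm.ppf([alpha / 2, 1 - alpha / 2])
    lo, hi = stats.norm.cdf([2 * z0 + z_lo, 2 * z0 + z_hi])
    return est, np.quantile(boots, [lo, hi])
```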

2.5.5. Additional Diagnostics

SEM was conducted using the lavaan engine in Jamovi (version 2.6.26). Maximum likelihood (ML) estimation was used. This estimator was appropriate given the adequate sample size and distributional properties. The skewness and kurtosis values of the observed variables were within recommended thresholds. Bias-corrected bootstrapping with 5000 resamples strengthened the assessment of mediation effects by providing robust standard errors and confidence intervals for indirect effects. Additional diagnostics included multicollinearity checks using VIF. Multiple goodness-of-fit indices were also evaluated. Table 2 summarizes the complete data analysis procedure, detailing analytical steps, statistical techniques, software, and rationales. These procedures enhanced the robustness, reliability, and replicability of the analytical results.

3. Results

3.1. Sample Characteristics

This study surveyed undergraduate students from nine Rajamangala University of Technology campuses in Thailand. We ensured representation from both STEM and Social Science fields to examine disciplinary differences in cognitive processes (WM, MET, and REA), engagement (ENG), and learning outcomes (LO). We collected data using a structured questionnaire and screened responses for completeness, consistency, and outliers before analysis.
After screening, we retained 762 valid responses for analysis. This sample size exceeds the minimum requirements for structural equation modeling and multi-group SEM (MG-SEM), ensuring sufficient statistical power. Table 3 summarizes the sample’s demographics. The sample included 412 STEM students (54.1%) and 350 Social Science students (45.9%), showing a balanced distribution. Participants ranged from 18 to 24 years old (mean = 20.6, SD = 1.4). In terms of gender, 56.0% were female, 43.0% male, and 1.0% identified as other. This profile reflects the undergraduate diversity at Rajamangala University of Technology. The distribution across disciplines, age groups, and gender supports the sample’s representativeness. This diversity provides a strong basis for confirmatory factor analysis, structural equation modeling, and comparisons between STEM and Social Science students, discussed in the following sections.

3.2. Preliminary Analysis

Table 4 displays descriptive statistics (means, standard deviations, skewness, and kurtosis) for all variables. Mean scores ranged from 3.62 to 4.15 (SD = 0.60–0.78), indicating adequate variability across items. These values suggest that participants generally reported moderate to high levels across cognitive processes (WM, MET, and REA), SDG awareness (SDGA), ENG, and LO. No severe univariate outliers were detected, and multivariate outliers were examined using Mahalanobis distance (p < 0.001). The distributional properties of the data met the assumptions required for maximum likelihood estimation, as all skewness values were below 2 and absolute kurtosis values were well below the recommended threshold of 7.
Missing data comprised less than 2% of responses and was handled using full information maximum likelihood (FIML) estimation, which yields unbiased parameter estimates under missing-at-random conditions without requiring data imputation.
To assess multicollinearity, variance inflation factors (VIFs) were examined for all observed variables. All VIF values were below 2.50, indicating that multicollinearity was not a concern in the subsequent analyses.

3.3. Assessment of Common Method Bias (Harman’s Single-Factor Test)

Common method bias was assessed using Harman’s single-factor test [48,49]. All measurement items were entered into an unrotated exploratory factor analysis. The results showed that multiple factors emerged. The first factor accounted for approximately 32% of the total variance. This value falls below the recommended 50% threshold, suggesting that common method bias was not a significant concern [49,50]. In addition, procedural remedies were applied, including ensuring respondent anonymity and using clear, concise item wording. These steps further reduced the risk of common method bias.
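Harman’s single-factor check can be approximated by extracting a single unrotated component from all measurement items and inspecting the share of total variance it captures. A minimal sketch, assuming a DataFrame containing every item; it uses the first eigenvalue of the item correlation matrix as the unrotated single-factor approximation, which is a common simplification rather than the authors’ exact EFA specification.

```python
import numpy as np
import pandas as pd

def harman_single_factor_share(items: pd.DataFrame) -> float:
    """Proportion of total variance captured by the first unrotated component."""
    corr = items.dropna().corr().to_numpy()
    eigvals = np.linalg.eigvalsh(corr)[::-1]   # eigenvalues sorted largest first
    return float(eigvals[0] / eigvals.sum())

# Common method bias is flagged when the share exceeds 0.50; the study reports ~0.32.
# share = harman_single_factor_share(all_items_df)
```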

3.4. Descriptive Statistics and Correlations

Table 5 presents the descriptive statistics and Pearson correlations among Working Memory (WM), Metacognition (MET), Reasoning Ability (REA), AI-SL, SDGA, ENG, and LO. All constructs had a mean above the midpoint of the five-point Likert scale, suggesting generally positive perceptions from participants. Moving from descriptive to relational findings, the correlation analysis revealed moderate to strong positive associations among all observed variables, with correlation coefficients ranging from r = 0.42 to r = 0.74. In particular, AI-SL items showed consistently strong correlations with ENG and LO. This suggests that perceived AI support is closely linked to students’ active involvement in learning and perceived academic success. These correlations support the theoretical framework and justify proceeding with CFA and SEM analyses.

3.5. Measurement Model

3.5.1. Factor Loadings and Convergent Validity

All standardized factor loadings ranged from 0.72 to 0.87 and exceeded the recommended threshold of 0.50 for practical significance [39]. This result indicates that all indicators effectively measured their respective latent constructs. Internal consistency was supported, as the Composite Reliability (CR) values exceeded 0.70 and the Average Variance Extracted (AVE) values exceeded 0.50, meeting the convergent validity criteria established by Fornell and Larcker [43].
The CFA results indicated that the measurement model fit the data well, meeting common standards. Key indicators included: χ2/df = 2.31 (<3; [38]), CFI = 0.94 and TLI = 0.93 (≥0.90), GFI = 0.91 and AGFI = 0.89 (≥0.85), RMSEA = 0.055 (≤0.06–0.08), and SRMR = 0.046 (≤0.08), based on Hu and Bentler [42]. The results in Table 6 indicate that the measurement model exhibits adequate reliability, convergent validity, and structural adequacy.

3.5.2. Discriminant Validity (Fornell–Larcker Criterion)

The results of the Fornell–Larcker analysis provided strong evidence of discriminant validity among all constructs. The diagonal values, representing the square roots of the Average Variance Extracted (√AVE), ranged from 0.74 to 0.82 and were consistently higher than the correlations between each pair of constructs. The results show that each latent variable explains more variance in its indicators than in any other construct, which fulfills the Fornell and Larcker [43] criterion.
As shown in Table 7, ENG correlated most strongly with LO (0.55), and its correlation with AI-SL was also substantial (0.51). This pattern is consistent with the view that student engagement is a key driver of learning success in technology-based educational environments. All inter-construct correlations remained below the corresponding √AVE values, providing further evidence of discriminant validity and confirming that the constructs are sufficiently distinct for use in the structural model.
Discriminant validity was assessed using the Heterotrait–Monotrait (HTMT) ratio. Table 8 shows that all HTMT values were below the 0.85 threshold, indicating adequate validity. The highest value, 0.83, was between AI-SL and LO, which is theoretically plausible given their close relationship. These results confirm that all constructs are empirically distinct.
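The HTMT values in Table 8 are ratios of average between-construct item correlations to the geometric mean of average within-construct item correlations. A minimal sketch, assuming an item-level DataFrame and a mapping from construct names to their item columns; the names used are placeholders.

```python
import numpy as np
import pandas as pd
from itertools import combinations

def htmt(items: pd.DataFrame, blocks: dict) -> pd.DataFrame:
    """Heterotrait-monotrait ratio for every pair of constructs."""
    corr = items.corr().abs()

    def mean_within(cols):
        # average correlation over unique item pairs within one construct
        return float(np.mean([corr.loc[a, b] for a, b in combinations(cols, 2)]))

    names = list(blocks)
    out = pd.DataFrame(np.nan, index=names, columns=names)
    for a, b in combinations(names, 2):
        hetero = corr.loc[blocks[a], blocks[b]].to_numpy().mean()
        out.loc[a, b] = hetero / np.sqrt(mean_within(blocks[a]) * mean_within(blocks[b]))
    return out

# Hypothetical usage; values above 0.85 would signal a discriminant validity problem
# htmt(df, {"ENG": ["ENG1", "ENG2", "ENG3"], "LO": ["LO1", "LO2", "LO3"]})
```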

3.6. Measurement Invariance

Measurement Invariance Testing

The results of the measurement invariance analysis provide strong evidence that the measurement model operates consistently across STEM and Social Science student groups. The configural model demonstrated good fit to the data (χ2 = 1125.40, df = 480, CFI = 0.93, RMSEA = 0.057), indicating that both groups shared an identical factor structure and conceptualization of the constructs. The present findings provide a reliable foundation for upcoming tests of invariance.
The analysis of metric invariance, which examines whether factor loadings are equal across groups, provided support: the loadings could be constrained across groups with only small changes in the fit indices (ΔCFI = 0.002; ΔRMSEA = 0.001). The chi-square difference test indicated a small but statistically significant decrement in fit (Δχ2 = 22.82, Δdf = 13, p = 0.042); however, following Cheung and Rensvold [51], invariance is retained when the change in the Comparative Fit Index (CFI) is less than 0.01. These results indicate that the relationships between indicators and latent constructs are similar across groups, permitting direct comparison of the structural path coefficients.
The assessment of scalar invariance, which tests whether item intercepts are equivalent across groups (Table 9), did not initially achieve full support, given a notable change in model fit (ΔCFI = 0.012) and a statistically significant chi-square difference (Δχ2 = 48.33, p = 0.001). After freeing two non-invariant intercepts, partial scalar invariance was achieved. Following Vandenberg and Lance [45], partial invariance permits analysis of latent mean differences and group comparisons of structural relationships. These invariance results confirm that the MG-SEM analysis produces valid results and allows examination of how cognitive elements, AI-based educational tools, and awareness of the Sustainable Development Goals relate to engagement and outcomes across academic disciplines.
Partial scalar invariance indicates that a limited number of item intercepts differ between groups, while most remain equivalent, allowing meaningful comparisons of latent means and structural relationships across STEM and social science students.

3.7. Structural Model Analysis

3.7.1. Direct Effects (Pooled Sample)

The structural model analysis, as presented in Table 10, showed that all proposed relationships were statistically significant, indicating that cognitive processes, AI-supported learning, and SDG awareness are meaningfully associated with learner engagement and learning outcomes. Among the cognitive processes (WM, MET, and REA), MET showed the strongest positive association with ENG (MET → ENG, β = 0.31, p < 0.001), suggesting that students who actively plan and monitor their thinking tend to show higher levels of engagement. WM (WM → ENG, β = 0.24, p < 0.001) and REA (REA → ENG, β = 0.27, p < 0.001) were also significantly and positively related to ENG.
Table 10 further shows that AI-SL was positively associated with ENG (AI-SL → ENG, β = 0.29, p < 0.001), consistent with the idea that individualized, adaptive feedback supports participation. SDGA was likewise positively related to ENG (SDGA → ENG, β = 0.26, p < 0.001); students who valued sustainability goals reported greater participation, possibly reflecting stronger motivation and a clearer sense of purpose.
ENG showed the strongest direct association with LO (ENG → LO, β = 0.44, p < 0.001), underscoring its role as the primary link between cognitive processes, motivation, and academic success. WM (WM → LO, β = 0.18, p = 0.004), MET (MET → LO, β = 0.22, p < 0.001), and REA (REA → LO, β = 0.20, p = 0.003) also had significant direct associations with LO.
Taken together, these results indicate that cognitive processes, AI-supported learning, and SDG awareness are associated with better learning outcomes largely through their links with active student engagement.

3.7.2. Multi-Group Structural (MG-SEM) Comparisons (STEM vs. Social Science)

Following the establishment of partial scalar invariance, MG-SEM was conducted to examine structural differences between STEM and Social Science students; the results are summarized in Table 11. Structural path differences were examined using chi-square difference tests comparing constrained and unconstrained models for each specific path. Significant group differences were observed in several key relationships. The effect of REA on ENG was significantly stronger for STEM students (β = 0.34) than for Social Science students (β = 0.18; Δχ2 = 6.18, p = 0.013). In contrast, MET exerted a stronger influence on ENG among Social Science students (β = 0.37) compared to STEM students (β = 0.22; Δχ2 = 7.31, p = 0.007). Additionally, the direct effect of AI-SL on LO was stronger in the STEM group (β = 0.21) than in the Social Science group (β = 0.09; Δχ2 = 4.56, p = 0.033). Finally, SDGA showed a significantly greater effect on ENG among Social Science students (β = 0.33) than STEM students (β = 0.18; Δχ2 = 5.92, p = 0.015). These findings indicate discipline-specific patterns in how cognitive, technological, and sustainability-related factors relate to ENG and LO.
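Each path comparison above rests on a chi-square difference test between a model constraining the path to equality across groups and one that frees it. The arithmetic is shown below; the model-level chi-square values are placeholders chosen so that the difference reproduces the reported REA → ENG statistic (Δχ2 = 6.18, Δdf = 1, p ≈ 0.013).

```python
from scipy import stats

def chi_square_difference(chi2_constrained, df_constrained, chi2_free, df_free):
    """Fit decrement (and its p-value) caused by an equality constraint on one path."""
    d_chi2 = chi2_constrained - chi2_free
    d_df = df_constrained - df_free
    return d_chi2, d_df, stats.chi2.sf(d_chi2, d_df)

# Placeholder model chi-squares; only the difference mirrors the REA -> ENG test
d_chi2, d_df, p = chi_square_difference(1162.10, 494, 1155.92, 493)
print(f"delta chi2 = {d_chi2:.2f}, delta df = {d_df}, p = {p:.3f}")
```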

3.8. Effect Size Comparison Across Groups

The effect size analysis, reported in Table 12, further clarified the practical significance of the group differences observed in the structural model. For the REA → ENG path, a medium effect size was observed among STEM students (f2 = 0.12), compared with only a small effect among Social Science students (f2 = 0.04). This suggests that strong reasoning skills help students sustain engagement in analytically demanding STEM coursework. Building on these findings, the MET → ENG path was examined across academic domains.
The MET → ENG path showed a larger practical effect for Social Science students (f2 = 0.14, medium) than for STEM students (f2 = 0.05, small). This pattern suggests that students who draw on self-regulation and reflective thinking tend to be more engaged in social science coursework. The effect size results therefore corroborate the multi-group path comparisons, indicating that the observed disciplinary differences are both statistically significant and practically meaningful.
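For readers unfamiliar with the metric, the f2 values in Table 12 can be read through Cohen's local effect-size formula, which (assuming the standard computation) expresses the change in explained variance in ENG when the focal predictor is removed, relative to the unexplained variance:

```latex
f^{2} = \frac{R^{2}_{\text{included}} - R^{2}_{\text{excluded}}}{1 - R^{2}_{\text{included}}}
```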

3.9. Explained Variance by Group

The structural model explained a substantial share of the variance in both ENG and LO across groups, as summarized in Table 13. For ENG, the model explained more variance among STEM students (R2 = 0.52) than among Social Science students (R2 = 0.44), suggesting that the combination of cognitive processes (WM, MET, and REA), AI-SL, and SDGA accounts for more of the engagement observed in analytically demanding fields. Explained variance in LO was comparable across groups: 64% for STEM students and 60% for Social Science students. These results indicate that, although discipline-specific factors shape student engagement, the proposed model explains both engagement and learning outcomes well for all participants and performs consistently across academic fields in higher education.

3.10. Indirect Effects (Pooled Sample)

The bootstrapped indirect effects reported in Table 14 show that ENG was a key mediator linking cognitive processes (WM, MET, and REA), AI-SL, and SDGA to LO. WM had a small but significant indirect effect on LO (β = 0.11, 95% CI [0.06, 0.18]). MET produced the strongest mediated effect (β = 0.14, 95% CI [0.09, 0.22]), indicating that students with stronger regulatory skills achieved better outcomes through higher engagement. REA showed an indirect effect of 0.12 (95% CI [0.07, 0.19]). The two non-cognitive predictors also showed significant indirect effects: AI-SL (β = 0.13, 95% CI [0.08, 0.19]) and SDGA (β = 0.12, 95% CI [0.07, 0.18]) were positively related to achievement through increased engagement. ENG thus appears to be the principal channel through which learning resources, technological support, and sustainability awareness translate into better academic outcomes.
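To illustrate the mediation logic, the following minimal sketch shows how a bootstrapped indirect effect (e.g., MET → ENG → LO) can be estimated. It uses synthetic data and ordinary regressions in place of the latent-variable model fitted in Jamovi (lavaan), and a percentile interval in place of the bias-corrected interval used in the study; the path coefficients in the data-generating step are hypothetical.

```python
# Minimal sketch of a bootstrapped indirect effect (predictor -> ENG -> LO).
# Synthetic data and simple OLS regressions stand in for the latent-variable
# model estimated in the study; 5000 resamples mirror the reported procedure,
# but a plain percentile interval replaces the bias-corrected interval.
import numpy as np

rng = np.random.default_rng(42)
n = 762                                                       # total sample size (Table 3)
met = rng.normal(size=n)                                      # hypothetical MET scores
eng = 0.5 * met + rng.normal(scale=0.8, size=n)               # hypothetical a-path
lo = 0.6 * eng + 0.2 * met + rng.normal(scale=0.8, size=n)    # hypothetical b- and c'-paths

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                        # slope of M on X (a-path)
    design = np.column_stack([np.ones_like(m), m, x])
    b = np.linalg.lstsq(design, y, rcond=None)[0][1]  # slope of Y on M, controlling X (b-path)
    return a * b

boot_estimates = np.array([
    indirect_effect(met[idx], eng[idx], lo[idx])
    for idx in (rng.integers(0, n, size=n) for _ in range(5000))
])
ci_low, ci_high = np.percentile(boot_estimates, [2.5, 97.5])  # percentile 95% CI
print(f"indirect effect = {indirect_effect(met, eng, lo):.3f}, "
      f"95% CI [{ci_low:.3f}, {ci_high:.3f}]")
```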
In Figure 1, arrows denote the hypothesized direct effects among the latent constructs, and the associated indicators represent the measurement items used to operationalize each construct. The paths link the cognitive processes (WM, MET, and REA), AI-SL, and SDGA to ENG and LO.

4. Discussion

The findings provide insight into how students' cognitive processes, AI-supported learning, and sustainability awareness relate to classroom engagement and academic achievement in higher education. In the pooled sample, metacognition was the strongest cognitive correlate of learner engagement (ENG), consistent with evidence that planning and monitoring skills help students sustain deep involvement in academic tasks [10,27]. Working memory (WM) and reasoning ability (REA) were also strongly associated with engagement [6,8], indicating that students draw on both cognitive capacity and analytical skill to handle demanding, mathematics-intensive coursework. Together, these processes appear to build the cognitive readiness students need to make sense of difficult content when solving problems and using technology-based learning tools [52].
Engagement showed the strongest direct association with learning outcomes, supporting contemporary accounts that treat student engagement as the principal link between classroom activity and academic success [34,35]. The mediation analysis reinforced this role: the associations between learning outcomes (LO) and cognitive processes, AI-based educational support, and SDG knowledge operated largely through engagement [53]. In other words, students appear to benefit from these resources to the extent that they translate them into sustained, effortful participation. This is consistent with research on self-regulated learning in technology-supported environments, which views engagement as an observable expression of students' cognitive regulation and active participation when using AI tools [11,36,52,54].
AI-supported learning (AI-SL) was positively associated with both engagement and learning outcomes. This aligns with evidence that intelligent tutoring systems, adaptive feedback, and automated explanations help students stay motivated and support their problem-solving and learning activities [12,14]. The indirect effect of AI-SL on outcomes through engagement suggests that AI adds educational value primarily by creating digital learning environments that keep students actively involved during learning activities [54].
SDG awareness (SDGA) was also related to engagement and learning outcomes. This result extends prior work on sustainability education by showing that students who are aware of global sustainability issues report greater academic participation and a stronger sense of purpose [4,22,53]. Sustainability competencies may thus act as motivational resources that help students pursue academic goals while attending to the social and ethical implications of their learning.
The multi-group analyses showed that academic discipline substantially moderated these relationships. Among STEM students, engagement was more strongly tied to reasoning ability and AI-supported tools, consistent with STEM curricula that require analytical problem-solving supported by technology [24]. Among Social Science students, metacognition and SDGA were the stronger correlates, plausibly because their programs emphasize reflection on one's own work and on its social implications [18,23]. These results suggest that educational design should be discipline-sensitive, aligning AI use and sustainability content with how students in each field typically learn.
Theoretically, the study contributes by combining cognitive, technological, and sustainability perspectives in a single structural model that can guide institutions in designing learning environments that integrate AI technology, SDG principles, and inclusive design. Practically, the results highlight the importance of instructional design that considers discipline-specific needs. For STEM programmes, learning environments may benefit from strengthening reasoning-focused tasks within AI-enhanced systems, such as problem-based activities supported by adaptive feedback and intelligent tutoring. In contrast, Social Science programmes may place greater emphasis on metacognitive scaffolding and SDG-focused reflective activities, encouraging students to connect course content with social relevance and sustainability challenges. Such targeted approaches may help maximize learner engagement and learning outcomes across disciplinary contexts.
The study used a cross-sectional design, so the observed relationships represent associations rather than causal effects. The structural model identified significant associations among cognitive processes (WM, MET, REA), AI-SL, SDGA, ENG, and LO, yet causal claims remain unwarranted. Data collection relied on a single online, self-reported survey. Procedural steps and statistical tests reduced common method bias, but complete elimination remains unlikely. Unmeasured factors, including prior academic achievement, instructor practices, course difficulty, and institutional context, may also affect ENG and LO. Future work should adopt longitudinal or experimental designs, gather data from multiple sources, or use objective performance indicators to clarify causal pathways and lower potential method bias.

5. Conclusions

This study offers new insight into how students' cognitive processes, use of AI-supported learning tools, and sustainability awareness relate to academic performance in higher education. Working memory (WM), metacognition (MET), and reasoning ability (REA) emerged as foundational to how students engaged (ENG) with challenging academic tasks. ENG was the strongest correlate of academic success, serving as the link through which students' abilities, motivations, and technology experiences translated into better outcomes. These results are consistent with theoretical models that position engagement as the primary driver of learning success in self-regulated, technology-supported educational systems.
The findings further indicate that AI-supported learning is associated with greater student participation and, in turn, better academic achievement, suggesting that AI tools add value both by providing individualized feedback and by strengthening students' engagement with coursework. SDGA was likewise positively related to participation and LO, consistent with the view that sustainability competencies support achievement by fostering motivation, purposeful learning, and subject relevance.
The multi-group analysis revealed discipline-specific patterns: STEM students' outcomes were more strongly tied to reasoning skills and AI tools, whereas Social Science students benefited more from MET and knowledge of the SDGs. These results indicate that instructional strategies should account for how students learn in each discipline, tailoring cognitive, technological, and sustainability supports accordingly. Overall, the study contributes a framework that integrates cognitive, technological, and sustainability perspectives into a unified model for designing AI-based educational environments that enhance student engagement and academic achievement across disciplines.

Author Contributions

Conceptualization, A.P., W.S. and P.K.; methodology, A.P., L.S. and S.P.; software, W.S.; validation, A.P., P.K. and L.S.; formal analysis, A.P. and S.P.; investigation, L.S. and S.P.; resources, A.P. and W.S.; data curation, A.P. and P.K.; writing—original draft preparation, A.P. and L.S.; writing—review and editing, A.P. and W.S.; visualization, A.P.; supervision, A.P. and S.P.; project administration, A.P. and L.S.; funding acquisition, A.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

This study was exempted from ethical review by the institutional committee because it was carried out in compliance with international guidelines for human research protection, including the Declaration of Helsinki, the Belmont Report, the CIOMS Guidelines, the International Conference on Harmonization Good Clinical Practice (ICH-GCP) guidelines, and 45 CFR 46.101(b).

Informed Consent Statement

Informed consent was obtained from all participants involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available because they contain sensitive information that must remain protected under privacy laws and contractual confidentiality agreements.

Acknowledgments

The authors used ChatGPT-4 to assist with reference formatting and to improve the clarity of the manuscript. All AI-generated content was reviewed and edited by the authors, who take full responsibility for the content of this publication.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
WM: Working Memory
WM1: Maintaining information during tasks
WM2: Updating information while learning
WM3: Managing multiple ideas simultaneously
WM4: Tracking multistep problem-solving processes
MET: Metacognition
MET1: Planning learning strategies
MET2: Monitoring understanding
MET3: Evaluating learning progress
MET4: Adjusting strategies when ineffective
REA: Reasoning Ability
REA1: Logical reasoning and inference
REA2: Analytical problem-solving
REA3: Applying reasoning to new situations
AI-SL: AI-Supported Learning
AI-SL1: Usefulness of AI tools for learning
AI-SL2: Accuracy of AI-generated feedback
AI-SL3: Trust in AI-supported learning systems
AI-SL4: AI assistance in understanding complex content
SDGA: SDG Awareness
SDG1: Understanding the Sustainable Development Goals
SDG2: Awareness of global sustainability challenges
SDG3: Responsibility toward sustainable development
ENG: Learner Engagement
ENG1: Active participation in learning activities
ENG2: Sustained attention and effort
ENG3: Cognitive and emotional involvement in learning
LO: Learning Outcomes
LO1: Achievement of learning objectives
LO2: Improvement in subject understanding
LO3: Confidence in problem-solving ability

References

  1. UNESCO. Education for Sustainable Development: A Roadmap; United Nations Educational: Paris, France, 2023. [Google Scholar]
  2. OECD. Education at a Glance 2022; OECD Publishing: Paris, France, 2022. [Google Scholar]
  3. OECD. An OECD learning framework 2030. In The Future of Education and Labor; Springer International Publishing: Cham, Switzerland, 2019; pp. 154–196. [Google Scholar]
  4. Tarlochan, F.; Alduais, A.; Chaaban, Y.; Du, X. Integrating sustainability into STEM education and career development: A scientometric and narrative review. Int. J. STEM Educ. 2025, 12, 62. [Google Scholar] [CrossRef]
  5. English, L.D.; Watson, J.M. Development of probabilistic reasoning in data-rich contexts. ZDM–Math. Educ. 2018, 50, 737–752. [Google Scholar]
  6. Niss, M.; Højgaard, T. Mathematical competencies revisited. Educ. Stud. Math. 2019, 102, 9–28. [Google Scholar] [CrossRef]
  7. Jenks, K.M.; van Lieshout, E.C.; de Moor, J.M. Cognitive correlates of mathematical achievement in children with cerebral palsy and typically developing children. Br. J. Educ. Psychol. 2012, 82, 120–135. [Google Scholar] [CrossRef]
  8. Friso-van den Bos, I.; van der Ven, S.H.G.; Kroesbergen, E.H.; van Luit, J.E.H. Working memory and mathematics in primary school children: A meta-analysis. Educ. Psychol. Rev. 2018, 30, 29–44. [Google Scholar] [CrossRef]
  9. Wang, C.Y.; Gao, B.L.; Chen, S.J. The effects of metacognitive scaffolding of project-based learning environments on students’ metacognitive ability and computational thinking. Educ. Inf. Technol. 2024, 29, 5485–5508. [Google Scholar] [CrossRef]
  10. Thi-Nga, H.; Thi-Binh, V.; Nguyen, T.T. Metacognition in mathematics education: From academic chronicle to future research scenario–A bibliometric analysis with the Scopus database. Eurasia J. Math. Sci. Technol. Educ. 2024, 20, em2427. [Google Scholar] [CrossRef]
  11. Zawacki-Richter, O.; Marín, V.I.; Bond, M.; Gouverneur, F. Systematic review of AI in higher education. Int. J. Educ. Technol. High. Educ. 2019, 16, 39. [Google Scholar] [CrossRef]
  12. Chen, L.; Chen, P.; Lin, Z. Artificial intelligence in education: A review. IEEE Access 2020, 8, 75264–75278. [Google Scholar] [CrossRef]
  13. Yaseen, S.G.; Al-Rashid, A.; Al-Hamad, A.Q.; Al-Majali, M. The impact of adaptive learning technologies on student engagement in higher education: The mediating role of digital literacy. Sustainability 2025, 17, 1133. [Google Scholar] [CrossRef]
  14. Mao, J.; Chen, B.; Liu, J.C. Generative artificial intelligence in education and its implications for assessment. TechTrends 2024, 68, 58–66. [Google Scholar] [CrossRef]
  15. Jin, Y.; Yan, L.; Echeverria, V.; Gašević, D.; Martinez-Maldonado, R. Generative AI in higher education: A global perspective of institutional adoption policies and guidelines. Comput. Educ. Artif. Intell. 2025, 8, 100348. [Google Scholar] [CrossRef]
  16. Rigopouli, K.; Kotsifakos, D.; Psaromiligkos, Y. Vygotsky’s creativity options and ideas in 21st-century technology-enhanced learning design. Educ. Sci. 2025, 15, 257. [Google Scholar] [CrossRef]
  17. Putthidech, A.; Bootwisas, N.; Intarasat, U.; Boonman, P.; Singchua, W. The influence of AI-based learning perceptions on learning outcomes in calculus: The mediating role of cognitive and emotional responses using a PLS-SEM approach. Eurasia J. Math. Sci. Technol. Educ. 2025, 21, em2738. [Google Scholar] [CrossRef] [PubMed]
  18. Hassan, M.M.; Ahmad, A.R. Systematic literature review on the sustainability of higher education institutions (HEIs): Dimensions, practices and research gaps. Cogent Educ. 2025, 12, 2549789. [Google Scholar] [CrossRef]
  19. Spyropoulou, N.; Ioannou, M.; Kameas, A. Impact Framework for Transforming STEAM Education: A Multi-Level Approach to Evidence-Based Reform. Educ. Sci. 2025, 15, 1552. [Google Scholar] [CrossRef]
  20. Okulich-Kazarin, V.; Kazarin, A.; Hordiienko, V. When artificial intelligence tools meet non-violent learning environments: Implications for sustainable education and SDG 4.3. Sustainability 2024, 16, 7695. [Google Scholar] [CrossRef]
  21. Mintz, K.; Tal, T. The place of content and pedagogy in shaping sustainability learning outcomes in higher education. Environ. Educ. Res. 2018, 24, 207–229. [Google Scholar] [CrossRef]
  22. Grigorescu, A.; Munteanu, I.; Dumitrica, C.D.; Lincaru, C. Development of a Green Competency Matrix Based on Civil Servants’ Perception of Sustainable Development Expertise. Sustainability 2023, 15, 13913. [Google Scholar] [CrossRef]
  23. Neumann, R.; Parry, S.; Becher, T. Teaching and learning in their disciplinary contexts: A conceptual analysis. Stud. High. Educ. 2002, 27, 405–417. [Google Scholar] [CrossRef]
  24. Ellefson, M.R.; Baker, S.T.; Gibson, J.L. Lessons for successful cognitive developmental science in educational settings: The case of executive functions. J. Cogn. Dev. 2019, 20, 253–277. [Google Scholar] [CrossRef]
  25. Cowan, N. Short-term memory based on activated long-term memory: A review in response to Norris (2017). Psychol. Bull. 2019, 145, 822–847. [Google Scholar] [CrossRef]
  26. Miyake, A.; Friedman, N.P. The nature and organization of individual differences in executive functions. Curr. Dir. Psychol. Sci. 2019, 21, 8–14. [Google Scholar] [CrossRef]
  27. Tezer, M. Metacognitive Engagement in AI-Supported Learning: Frameworks, Challenges, and Transformations; IntechOpen: London, UK, 2026; Available online: https://www.intechopen.com/chapters/1236663 (accessed on 2 November 2025).
  28. Cowan, N. Working memory underpins cognitive development, learning, and education. Educ. Psychol. Rev. 2014, 26, 197–223. [Google Scholar] [CrossRef]
  29. Harvey, S.; Cope, E. Making Learning Happen in Teaching Games for Understanding with Cognitive Load Theory. Educ. Sci. 2025, 15, 631. [Google Scholar] [CrossRef]
  30. Sweller, J. Cognitive load theory and educational technology. Educ. Technol. Res. Dev. 2020, 68, 1–16. [Google Scholar] [CrossRef]
  31. De Vincenzo, C.; Carpi, M. Cognitive study strategies and motivational orientations among university students: A latent profile analysis. Educ. Sci. 2024, 14, 792. [Google Scholar] [CrossRef]
  32. Panadero, E. A review of self-regulated learning. Front. Psychol. 2017, 8, 422. [Google Scholar] [CrossRef] [PubMed]
  33. Rivas, S.F.; Saiz, C.; Ossa, C. Metacognitive strategies and development of critical thinking in higher education. Front. Psychol. 2022, 13, 913219. [Google Scholar] [CrossRef]
  34. Fredricks, J.A.; Filsecker, M.; Lawson, M.A. Student engagement, context, and adjustment: Addressing definitional, measurement, and methodological issues. Learn. Instr. 2016, 43, 1–4. [Google Scholar] [CrossRef]
  35. Reschly, A.; Pohl, A.J.; Christenson, S. Student Engagement; Springer International Publishing: Cham, Switzerland, 2020. [Google Scholar]
  36. Bond, M.; Bedenlier, S.; Marín, V.I.; Händel, M. Emergency remote teaching in higher education: Mapping the first global online semester. Educ. Technol. Res. Dev. 2021, 18, 50. [Google Scholar] [CrossRef]
  37. Schraw, G.; Moshman, D. Metacognitive theories. Educ. Psychol. Rev. 1995, 7, 351–371. [Google Scholar] [CrossRef]
  38. Kline, R.B. Principles and Practice of Structural Equation Modeling; Guilford Publications: New York, NY, USA, 2023. [Google Scholar]
  39. Hair, J.F., Jr.; Hult, G.T.M.; Ringle, C.M.; Sarstedt, M.; Danks, N.P.; Ray, S. Partial Least Squares Structural Equation Modeling (PLS-SEM) Using R: A Workbook; Springer Nature: Cham, Switzerland, 2021. [Google Scholar]
  40. Little, R.J.; Rubin, D.B. Statistical Analysis with Missing Data, 3rd ed.; John Wiley & Sons: Hoboken, NJ, USA, 2019. [Google Scholar]
  41. Byrne, B.M. Structural Equation Modeling with AMOS: Basic Concepts, Applications, and Programming, 3rd ed.; Routledge: New York, NY, USA, 2016. [Google Scholar]
  42. Hu, L.T.; Bentler, P.M. Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Struct. Equ. Model. 1999, 6, 1–55. [Google Scholar] [CrossRef]
  43. Fornell, C.; Larcker, D.F. Evaluating structural equation models with unobservable variables and measurement error. J. Mark. Res. 1981, 18, 39–50. [Google Scholar] [CrossRef]
  44. Henseler, J.; Ringle, C.M.; Sarstedt, M. A new criterion for assessing discriminant validity in variance-based structural equation modeling. J. Acad. Mark. Sci. 2015, 43, 115–135. [Google Scholar] [CrossRef]
  45. Vandenberg, R.J.; Lance, C.E. A review and synthesis of measurement invariance literature. Organ. Res. Methods 2000, 3, 4–70. [Google Scholar] [CrossRef]
  46. Davidov, E.; Meuleman, B.; Cieciuch, J.; Schmidt, P.; Billiet, J. Measurement invariance in cross-national research. Sociol. Methods Res. 2018, 47, 55–86. [Google Scholar]
  47. Sırgancı, G.; Uyumaz, G.; Yandı, A. Measurement invariance testing with alignment method: Many groups comparison. Int. J. Assess. Tools Educ. 2020, 7, 657–673. [Google Scholar] [CrossRef]
  48. Harman, H.H. Modern Factor Analysis; University of Chicago Press: Chicago, IL, USA, 1976. [Google Scholar]
  49. Podsakoff, P.M.; MacKenzie, S.B.; Lee, J.Y.; Podsakoff, N.P. Common method biases in behavioral research: A critical review of the literature and recommended remedies. J. Appl. Psychol. 2003, 88, 879–903. [Google Scholar] [CrossRef]
  50. Fuller, C.M.; Simmering, M.J.; Atinc, G.; Atinc, Y.; Babin, B.J. Common methods variance detection in business research. J. Bus. Res. 2016, 69, 3192–3198. [Google Scholar] [CrossRef]
  51. Cheung, G.W.; Rensvold, R.B. Evaluating goodness-of-fit indexes for testing measurement invariance. Struct. Equ. Modeling 2002, 9, 233–255. [Google Scholar] [CrossRef]
  52. Moșoi, A.A.; Maican, C.I.; Cazan, A.M.; Sumedrea, S. Do students need to think hard? The interplay of AI and cognitive abilities in solving problems. Educ. Inf. Technol. 2025, 30, 24337–24364. [Google Scholar] [CrossRef]
  53. Fang, K.; Li, L.; Wu, Y. Research on student engagement in distance learning in sustainability science to design an online intelligent assessment system. Front. Psychol. 2023, 14, 1282386. [Google Scholar] [CrossRef] [PubMed]
  54. Deslis, D.; Moutsios-Rentzos, A.; Kaskaouti, P.; Giakoumi, M. Digital Storytelling in Teaching and Learning Mathematics: A PRISMA Systematic Literature Review. Educ. Sci. 2025, 15, 1548. [Google Scholar] [CrossRef]
Figure 1. Structural equation model diagram.
Table 1. Summary of Constructs and Measurement Items.
Construct | Item Code | Brief Description | Number of Items | Source
Working Memory (WM) | WM1 | Maintaining information during academic tasks | 4 items | [25]
 | WM2 | Updating information while learning | | [9,26]
 | WM3 | Managing multiple ideas simultaneously | | [25,30]
 | WM4 | Tracking multistep problem-solving processes | | [6,7]
Metacognition (MET) | MET1 | Planning learning strategies | 4 items | [10]
 | MET2 | Monitoring understanding | | [9,32]
 | MET3 | Evaluating learning progress | | [10,33]
 | MET4 | Adjusting strategies when ineffective | | [9,26]
Reasoning Ability (REA) | REA1 | Logical reasoning and inference | 3 items | [2,6]
 | REA2 | Analytical problem-solving | | [6]
 | REA3 | Applying reasoning to new situations | | [2,9]
AI-Supported Learning (AI-SL) | AI-SL1 | Perceived usefulness of AI tools for learning | 4 items | [11,12]
 | AI-SL2 | Accuracy of AI-generated feedback | | [12,14]
 | AI-SL3 | Trust in AI-supported learning systems | | [11,17]
 | AI-SL4 | AI assistance in understanding complex content | | [9,11]
SDG Awareness (SDGA) | SDG1 | Understanding the Sustainable Development Goals | 3 items | [22]
 | SDG2 | Awareness of global sustainability challenges | | [18,21]
 | SDG3 | Responsibility toward sustainable development | | [4,22]
Learner Engagement (ENG) | ENG1 | Active participation in learning activities | 3 items | [34]
 | ENG2 | Sustained attention and effort | | [35]
 | ENG3 | Cognitive and emotional involvement in learning | | [34,36]
Learning Outcomes (LO) | LO1 | Achievement of learning objectives | 3 items | [35]
 | LO2 | Improvement in subject understanding | | [9,36]
 | LO3 | Confidence in problem-solving ability | | [6,17]
Table 2. Summary of Data Analysis Procedures and Methodological Choices.
Stage | Analysis Step | Method/Technique | Software | Rationale
Data screening | Missing data, normality, outliers | EM imputation, skewness–kurtosis, Mahalanobis distance | Jamovi | Ensure data suitability for SEM and reduce bias
Reliability assessment | Internal consistency | Cronbach’s alpha | Jamovi | Confirm scale reliability prior to CFA
Measurement model | Construct validity and model fit | CFA | Jamovi (lavaan) | Validate factor structure and measurement quality
Convergent validity | Indicator reliability | Factor loadings, CR, AVE | Jamovi | Ensure constructs adequately represent latent variables
Discriminant validity | Construct distinctiveness | Fornell–Larcker, HTMT ratio | Jamovi | Confirm theoretical separation among constructs
Structural model | Hypothesis testing | SEM with ML estimation | Jamovi (lavaan) | Examine direct relationships among latent variables
Mediation analysis | Indirect effects | Bias-corrected bootstrapping (5000 resamples) | Jamovi | Obtain robust estimates of mediation effects
Group comparison | Disciplinary differences | MG-SEM and invariance testing | Jamovi (lavaan) | Compare structural paths between STEM and Social Science students
Model robustness | Additional diagnostics | VIF, fit indices comparison | Jamovi | Strengthen credibility and replicability of results
Table 3. Demographic Characteristics of the Participants.
Variable | Category | n | %
Academic Discipline | STEM | 412 | 54.1
 | Social Science | 350 | 45.9
Gender | Female | 427 | 56
 | Male | 328 | 43
 | Other | 7 | 1
Age (years) | Mean | 20.6 |
 | Standard Deviation | 1.4 |
Total Sample | | 762 | 100
Table 4. Descriptive Statistics of Observed Variables.
Observed Variable | Mean | SD | Skewness | Kurtosis
WM1 | 3.98 | 0.65 | −0.41 | 0.32
WM2 | 3.92 | 0.67 | −0.38 | 0.28
WM3 | 3.85 | 0.70 | −0.29 | 0.21
WM4 | 3.78 | 0.72 | −0.22 | 0.19
MET1 | 4.05 | 0.62 | −0.46 | 0.35
MET2 | 4.12 | 0.61 | −0.52 | 0.41
MET3 | 4.08 | 0.63 | −0.48 | 0.38
MET4 | 3.97 | 0.66 | −0.37 | 0.30
REA1 | 3.76 | 0.74 | −0.25 | 0.17
REA2 | 3.82 | 0.71 | −0.31 | 0.23
REA3 | 3.69 | 0.78 | −0.18 | 0.14
SDG1 | 4.10 | 0.64 | −0.55 | 0.46
AI-SL1 | 4.15 | 0.60 | −0.58 | 0.44
AI-SL2 | 4.08 | 0.62 | −0.51 | 0.39
AI-SL3 | 3.98 | 0.65 | −0.42 | 0.33
AI-SL4 | 4.02 | 0.63 | −0.47 | 0.36
SDG2 | 4.06 | 0.66 | −0.49 | 0.39
SDG3 | 3.98 | 0.68 | −0.42 | 0.33
ENG1 | 3.88 | 0.70 | −0.34 | 0.26
ENG2 | 3.94 | 0.69 | −0.39 | 0.31
ENG3 | 3.91 | 0.71 | −0.36 | 0.29
LO1 | 3.84 | 0.73 | −0.28 | 0.20
LO2 | 3.90 | 0.72 | −0.33 | 0.24
LO3 | 3.62 | 0.76 | −0.15 | 0.11
Table 5. Descriptive Statistics and Correlations among Observed Variables.
Latent Construct | Mean | SD | WM | MET | REA | AI-SL | SDGA | ENG | LO
WM | 3.88 | 0.64 | 1.00
MET | 4.05 | 0.61 | 0.56 | 1.00
REA | 3.76 | 0.69 | 0.52 | 0.58 | 1.00
AI-SL | 4.06 | 0.62 | 0.60 | 0.63 | 0.57 | 1.00
SDGA | 4.05 | 0.66 | 0.46 | 0.51 | 0.49 | 0.54 | 1.00
ENG | 3.91 | 0.68 | 0.60 | 0.63 | 0.59 | 0.69 | 0.55 | 1.00
LO | 3.79 | 0.71 | 0.57 | 0.61 | 0.58 | 0.74 | 0.54 | 0.71 | 1.00
Table 6. Measurement Model Statistics Including Indicator Loadings, CR, and AVE.
Indicator | Loading (λ) | CR | AVE
Working Memory (WM)
WM1 | 0.78 | 0.86 | 0.55
WM2 | 0.72
WM3 | 0.74
WM4 | 0.80
Metacognition (MET)
MET1 | 0.81 | 0.89 | 0.62
MET2 | 0.78
MET3 | 0.84
MET4 | 0.79
Reasoning Ability (REA)
REA1 | 0.76 | 0.87 | 0.57
REA2 | 0.82
REA3 | 0.75
AI-Supported Learning (AI-SL)
AI-SL1 | 0.83 | 0.92 | 0.67
AI-SL2 | 0.86
AI-SL3 | 0.79
AI-SL4 | 0.85
SDG Awareness (SDGA)
SDG1 | 0.80 | 0.88 | 0.60
SDG2 | 0.77
SDG3 | 0.81
Engagement (ENG)
ENG1 | 0.82 | 0.90 | 0.64
ENG2 | 0.79
ENG3 | 0.84
Learning Outcomes (LO)
LO1 | 0.87 | 0.91 | 0.66
LO2 | 0.82
LO3 | 0.80
Model fit: df = 352, χ2 = 813.12, χ2/df = 2.31, CFI = 0.94, TLI = 0.93, GFI = 0.91, AGFI = 0.89, RMSEA = 0.055, SRMR = 0.046.
Table 7. Discriminant Validity Assessment Using the Fornell–Larcker Criterion.
Factor | WM | MET | REA | AI-SL | SDGA | ENG | LO
WM | 0.74
MET | 0.42 | 0.79
REA | 0.38 | 0.45 | 0.75
AI-SL | 0.36 | 0.41 | 0.39 | 0.82
SDGA | 0.33 | 0.38 | 0.36 | 0.48 | 0.77
ENG | 0.40 | 0.47 | 0.44 | 0.51 | 0.46 | 0.80
LO | 0.35 | 0.43 | 0.40 | 0.49 | 0.42 | 0.55 | 0.81
Note: Diagonal values are the square roots of the AVE.
Table 8. HTMT Ratios among Latent Constructs.
Construct | WM | MET | REA | AI-SL | SDGA | ENG
WM |
MET | 0.74
REA | 0.71 | 0.76
AI-SL | 0.69 | 0.73 | 0.72
SDGA | 0.65 | 0.68 | 0.66 | 0.70
ENG | 0.72 | 0.75 | 0.73 | 0.83 | 0.71
Table 9. Measurement Invariance Across STEM and Social Science Groups.
Model | Chi-Square (χ2) | df | CFI | RMSEA | χ2 Difference | df Difference | p-Value | ΔCFI | ΔRMSEA | Decision
Configural | 1125.40 | 480 | 0.930 | 0.057 | - | - | - | - | - | Supported
Metric | 1148.22 | 493 | 0.928 | 0.058 | 22.82 | 13 | 0.042 | 0.002 | 0.001 | Supported
Scalar | 1196.55 | 506 | 0.917 | 0.062 | 48.33 | 13 | 0.001 | 0.012 | 0.004 | Partial Invariance
Table 10. Standardized Direct Effects.
Hypothesis | Path | Std. Estimate (β) | Bootstrap Median | 95% CI | p-Value
H1 | WM → ENG | 0.24 | 0.24 | [0.15, 0.33] | <0.001
H2 | MET → ENG | 0.31 | 0.31 | [0.22, 0.40] | <0.001
H3 | REA → ENG | 0.27 | 0.27 | [0.18, 0.36] | <0.001
H4 | WM → LO | 0.18 | 0.18 | [0.20, 0.37] | 0.004
H5 | MET → LO | 0.22 | 0.22 | [0.17, 0.35] | <0.001
H6 | REA → LO | 0.20 | 0.20 | [0.35, 0.52] | 0.003
H7 | AI-SL → ENG | 0.29 | 0.29 | [0.06, 0.29] | <0.001
H8 | SDGA → ENG | 0.26 | 0.26 | [0.12, 0.32] | <0.001
H9 | ENG → LO | 0.44 | 0.44 | [0.08, 0.31] | <0.001
Table 11. Significant Group Differences in Path Coefficients.
Path | STEM β | Social Sci β | Δχ2 | df | p-Value | Decision
REA → ENG | 0.34 | 0.18 | 6.18 | 1 | 0.013 | Different
MET → ENG | 0.22 | 0.37 | 7.31 | 1 | 0.007 | Different
AI-SL → LO | 0.21 | 0.09 | 4.56 | 1 | 0.033 | Different
SDGA → ENG | 0.18 | 0.33 | 5.92 | 1 | 0.015 | Different
Table 12. Effect Sizes (f2) for Group-Differentiated Structural Paths.
Path | f2 (STEM) | Effect Size (STEM) | f2 (Social Science) | Effect Size (Social Science)
REA → ENG | 0.12 | Medium | 0.04 | Small
MET → ENG | 0.05 | Small | 0.14 | Medium
Table 13. Explained Variance (R2) by Group.
Outcome Variable | R2 (Pooled Sample) | R2 (STEM) | R2 (Social Science)
Learner Engagement (ENG) | 0.48 | 0.52 | 0.44
Learning Outcomes (LO) | 0.62 | 0.64 | 0.60
Table 14. Bootstrapped Indirect Effects.
Hypothesis | Path | Std. Estimate (β) | 95% CI | Result
H10 | WM → ENG → LO | 0.11 | [0.06, 0.18] | Significant
H11 | MET → ENG → LO | 0.14 | [0.09, 0.22] | Significant
H12 | REA → ENG → LO | 0.12 | [0.07, 0.19] | Significant
H13 | AI-SL → ENG → LO | 0.13 | [0.08, 0.19] | Significant
H14 | SDGA → ENG → LO | 0.12 | [0.07, 0.18] | Significant