4.2. Quantitative Analysis
Building on the qualitative insights, the quantitative phase operationalizes the identified factors (enablers, challenges, strategies, and psychological variables) into measurable constructs to validate their reliability, prioritize their relative importance, and model their interrelationships. This phase aims to provide empirical substantiation for the conceptual framework developed earlier, translating stakeholder perspectives into statistically testable dimensions of GAI integration in MEE.
Two structured surveys were designed: one targeting students and another targeting faculty members. While both instruments shared the same content and construct structure, they were tailored to capture the distinct experiential lenses of learners and educators regarding GAI use in MEE. The surveys collectively reached 105 participants (61 students and 44 faculty members) representing diverse engineering backgrounds. Inclusion extended beyond purely mechanical engineering participants to encompass related disciplines such as industrial, electrical, chemical, and aerospace engineering, ensuring a comprehensive view of GAI adoption across ME-relevant curricula (e.g., thermodynamics, mechanics of materials, simulation, design, AutoCAD, programming, and research-based courses). This interdisciplinary inclusion enhances the ecological validity of findings by recognizing the overlapping competencies central to modern mechanical engineering education.
To achieve a robust and multidimensional understanding, this phase employs several complementary analytical techniques. The RII quantifies and ranks the perceived significance of each factor, allowing prioritization across stakeholder groups. CFA and Cronbach’s α assess construct validity and internal reliability, ensuring that the latent variables derived from the qualitative phase are statistically sound and theoretically coherent. Finally, PLS-SEM is applied to test the hypothesized relationships among the constructs and to evaluate how enablers, challenges, strategies, and psychological factors collectively shape stakeholders’ perception of successful GAI integration.
Additionally, based on interview insights indicating differences in how students employ GAI tools across tasks, the student survey included an application-based ranking exercise. This component identifies the domains, such as design, computation, simulation, research, and writing, where GAI usage is most valued, providing a quantitative foundation for targeted pedagogical recommendations.
While the inclusion of non-ME participants may limit strict generalizability, it enriches the dataset by reflecting the interdisciplinary reality of contemporary engineering education, where computational, design, and analytical skills transcend departmental boundaries. This design thus ensures that the quantitative phase does not merely replicate the qualitative findings but tests them systematically, establishing the empirical backbone of the study’s theoretical and practical contributions.
4.2.1. Utilization of GAI Tools
The quantitative analysis extends the qualitative findings by quantifying how students prioritize the use of GAI tools across key academic domains in MEE. As interviews revealed diverse familiarity levels and faculty support for a gradual, guided integration, this survey aimed to determine where GAI has the greatest learning impact. Students rated their likelihood of using GAI tools across six domains on a five-point Likert scale (1 = not likely at all, 5 = very likely): academic writing and research, brainstorming and creativity, computation, design and simulation (including coding), collaboration, and reading and studying.
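As a small illustration of how such ratings translate into the domain-level percentages discussed below, the following sketch (using hypothetical column names and example responses, not the study’s data) computes the share of respondents answering “likely” or “very likely” in each domain:

```python
import pandas as pd

# Hypothetical student responses: one row per respondent, one column per
# application domain, values on the 1-5 Likert scale described above.
responses = pd.DataFrame({
    "writing_research":  [5, 4, 5, 3, 4],
    "brainstorming":     [4, 5, 4, 4, 3],
    "computation":       [3, 4, 5, 4, 4],
    "design_simulation": [5, 5, 4, 2, 4],
    "collaboration":     [2, 3, 4, 2, 3],
    "reading_studying":  [5, 4, 5, 4, 5],
})

# Share of respondents rating 4 ("likely") or 5 ("very likely") per domain, in percent.
likely_share = (responses >= 4).mean().sort_values(ascending=False) * 100
print(likely_share.round(1))
```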
As shown in Figure 2, students exhibit a clear hierarchy of preference. The strongest engagement occurs in design, simulation, and coding, where 77% of respondents indicate they are “likely” or “very likely” to employ GAI tools. This dominant score highlights how GAI aligns naturally with mechanical engineering’s analytical and modeling focus. The integration of GAI in these areas reflects students’ appreciation of its ability to automate code generation, improve simulation accuracy, and visualize complex mechanical phenomena. In essence, students view GAI as a cognitive amplifier, a means to bridge theory and practice rather than a substitute for learning effort.
In contrast, collaboration receives the lowest adoption score (only 44% likely or very likely), revealing an important socio-technical limitation. While students perceive GAI as efficient for individual tasks, they remain skeptical of its capacity to mediate teamwork, negotiation, and group creativity, core components of engineering collaboration. This gap reinforces earlier qualitative insights that human interaction, empathy, and communication remain irreplaceable competencies even in technology-enhanced learning environments.
Beyond technical tasks, students also demonstrate strong reliance on GAI for learning support and ideation. For reading and studying, 89% rate themselves as likely or very likely users, indicating that GAI serves as a scaffolding tool for conceptual understanding. Similarly, academic writing and research (87%) and brainstorming and creativity (87%) show widespread use for idea generation, summarization, paraphrasing, and literature exploration. These findings reflect a shift from using GAI purely for productivity to viewing it as a collaborative cognitive partner that enhances creativity and comprehension.
Meanwhile, computation (83%) occupies a middle position, revealing that students increasingly value GAI’s analytical functions, such as solving equations or visualizing results, but still combine it with traditional analytical reasoning. The pattern collectively demonstrates that students integrate GAI along a cognitive-to-practical continuum: from conceptual clarification (reading, brainstorming) to applied problem-solving (computation, simulation, design).
The strong preference for GAI in simulation, coding, and computation underscores that students associate value with measurable academic outcomes, speed, accuracy, and visualization, rather than collaborative or reflective functions. The lower ratings for collaboration and ethical awareness suggest that affective and interpersonal dimensions of learning remain underdeveloped in current GAI usage. This points to a dual challenge for educators: to embed GAI in ways that reinforce critical thinking and teamwork, while preserving authenticity and academic integrity.
Overall, Figure 2 indicates that GAI tools are most effective when introduced first in high-impact technical contexts, such as simulation labs and computational design tasks, followed by progressive inclusion in conceptual and collaborative domains. This staged approach aligns with the Extended TAM framework, suggesting that perceived usefulness (in computation and design) drives early adoption, while attitudinal and ethical readiness must evolve for full integration.
4.2.2. Relative Importance Index (RII)
To address the second research question, the RII was applied to systematically quantify and rank the perceived importance of the identified enablers, challenges, strategies, and psychological factors influencing the integration of GAI in MEE. As a robust normalization technique, RII allows a comparative understanding of how various stakeholder groups, students and faculty, prioritize determinants relative to their maximum perceived importance [42]. Beyond serving as a descriptive ranking tool, the RII analysis provides a diagnostic perspective on where alignment or divergence exists between stakeholders, establishing a foundation for inferential validation in subsequent CFA and PLS-SEM analyses.
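For reference, the RII follows its standard normalized form. With the five-point scale used in this study,

\[ \mathrm{RII} = \frac{\sum_{i=1}^{N} w_i}{A \times N}, \qquad 0 \le \mathrm{RII} \le 1, \]

where \(w_i\) is the rating assigned by respondent \(i\), \(A\) is the highest possible rating (here 5), and \(N\) is the number of respondents. For instance, an item rated 4 by every respondent would yield RII = 4/5 = 0.80.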
Figure 3 and Table 5 reveal that both faculty and students agree on the centrality of tool availability and user willingness as the dominant enablers of GAI integration in MEE. Yet, while their overall rankings align, the underlying rationales differ markedly. Students rank the availability of GAI tools to enhance time efficiency and reduce workload as the top enabler (RII = 0.8689), viewing GAI primarily as a performance amplifier that supports comprehension, simulation, and design activities. Faculty, by contrast, prioritize students’ willingness to adopt GAI (RII = 0.8636), interpreting readiness as a pedagogical prerequisite for successful technology diffusion.
This divergence underscores a micro–macro duality: students perceive GAI’s value through its immediate cognitive utility, whereas faculty evaluate it through systemic and behavioral readiness lenses. Moreover, both groups rate institutional and governing body support as moderately high but not dominant, signaling recognition that structural facilitation is necessary but insufficient without individual engagement. The relatively low ranking of collaboration and teamwork through GAI suggests a gap in awareness regarding GAI’s potential to mediate peer-assisted learning, an opportunity for future curriculum innovation.
In essence, the enablers analysis illustrates that MEE stakeholders view GAI not merely as a technological adjunct but as a transformational enabler, contingent on institutional legitimacy and personal agency. The interpretation of “enabling conditions” thus spans pragmatic (students) and organizational (faculty) dimensions.
As depicted in Figure 4, both groups perceive GAI’s integration as constrained more by ethical and cognitive concerns than by technical or financial barriers. Faculty identify ethical misuse and plagiarism as the most critical challenge (RII = 0.9273), reflecting apprehension about maintaining academic integrity and assessment fairness. Students, on the other hand, rank over-reliance on GAI reducing critical thinking skills as their foremost concern (RII = 0.8426), highlighting a self-awareness of potential cognitive erosion.
This divergence is telling: faculty focus on external accountability, while students exhibit concern for internal learning authenticity. Such dual apprehensions converge around a shared recognition that unregulated GAI use may compromise educational quality. Challenges such as bias in AI-generated outputs, inadequate faculty training, and integration issues with existing software (e.g., AutoCAD, MATLAB) occupy the mid-tier, indicating that technical limitations, while acknowledged, are secondary to ethical and pedagogical risks.
Interestingly, both groups assign the lowest RII to high cost and technical limitations, implying that MEE environments, accustomed to software-driven workflows, no longer perceive infrastructure as the primary barrier. Instead, the challenge matrix reflects a human-centered tension: balancing innovation with responsibility. The implication is that GAI governance frameworks, not hardware investments, will determine the sustainability of adoption.
Figure 5 shows a notable convergence between students and faculty regarding strategies for mitigating these challenges. Both rank clear policies and guidelines regulating GAI use as the most important strategy (RII ≈ 0.87), underscoring a shared demand for institutional clarity and ethical governance. However, the nature of their priorities diverges thereafter. Faculty emphasize awareness of student behavior and academic integrity initiatives, aligning with their concern for plagiarism and oversight, whereas students prioritize engineering-specialized GAI tools and tailoring applications to individual needs, emphasizing relevance and personalization.
The high ranking of training and support programs and gradual adoption further highlights an emerging consensus that GAI integration must follow a phased, capacity-building trajectory rather than abrupt implementation. This phased adoption aligns with the Extended TAM logic adopted in this study, where facilitating conditions (e.g., training and policy) directly enhance perceived usefulness and ease of use. Collectively, these strategic patterns point toward a two-level integration roadmap: macro-level policy and micro-level personalization, both of which are essential to align institutional frameworks with learner needs.
The analysis of psychological factors (Figure 6) provides crucial insight into the emotional and cognitive undercurrents shaping GAI adoption. Both faculty and students identify the impact of GAI on students’ critical thinking as the most significant psychological concern (RII = 0.82–0.84), reflecting a deep-seated anxiety about automation displacing reasoning. Ethical alignment follows closely, suggesting that users perceive morality as a necessary companion to innovation.
However, the group-specific differences are revealing. Students report heightened motivation and perceive GAI as a cognitive scaffold that enhances creativity and efficiency, whereas faculty demonstrate stress, skepticism, and role anxiety, particularly around job displacement and loss of human interaction. These asymmetries signify an intergenerational divide in digital adaptability and confidence, a theme that reappears in qualitative narratives.
Standard deviation values above 1.0 (Table 5) further reveal heterogeneity in emotional responses, implying that psychological readiness is unevenly distributed across participants. This heterogeneity justifies the need for targeted faculty development and student digital ethics workshops, ensuring balanced psychological engagement rather than polarized reactions to GAI technologies.
The comparative synthesis across Figure 3, Figure 4, Figure 5, Figure 6, and Table 5 reveals that while faculty and students share optimism toward GAI’s potential, they operate under distinct motivational logics. Students’ enthusiasm reflects an efficiency-oriented adoption mindset, driven by pragmatic learning benefits, whereas faculty exhibit a prudence-oriented stance, emphasizing ethical safeguards and pedagogical control.
This duality creates both tension and opportunity. If unaddressed, misaligned priorities may lead to fragmented adoption; yet, if strategically harmonized, they can produce a complementary adoption ecosystem, where faculty provide governance and mentorship while students drive experimentation and innovation.
From a policy perspective, three implications emerge:
Shift from awareness to accountability: Establish measurable ethical frameworks beyond generic guidelines.
Move from adoption to adaptation: Develop discipline-specific GAI tools that reflect mechanical engineering contexts rather than generic educational templates.
Balance regulation with innovation: Encourage supervised experimentation to preserve creativity while maintaining academic integrity.
Thus, the RII analysis transcends its statistical purpose by revealing a multilayered adoption landscape, where cognitive, ethical, and structural dimensions intersect. These findings form the empirical foundation for the forthcoming CFA and PLS-SEM, which formally test construct reliability and structural relationships within the extended Technology Acceptance framework.
4.2.3. Confirmatory Factor Analysis (CFA)
After identifying the four latent variables (Enablers, Challenges, Strategies, and Psychological Factors), CFA was conducted to verify the reliability and validity of the measurement model. Cronbach’s α was first calculated to assess internal consistency; values ≥ 0.50 were deemed acceptable for exploratory research [43]. All constructs surpassed this threshold, confirming baseline reliability and justifying further validation.
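For completeness, Cronbach’s α was obtained from its standard definition,

\[ \alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right), \]

where \(k\) is the number of items in a construct, \(\sigma^{2}_{Y_i}\) the variance of item \(i\), and \(\sigma^{2}_{X}\) the variance of the summed construct score.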
CFA was then performed to examine the convergent and discriminant validity of the constructs and to evaluate the model’s goodness-of-fit. Key indices included:
Chi-square (χ²) for overall model adequacy,
Standardized Root Mean Square Residual (SRMR) for average standardized residuals,
Root Mean Square Error of Approximation (RMSEA) for discrepancy per degree of freedom, and
Bentler Comparative Fit Index (CFI) to compare the hypothesized and null models.
All computations were conducted using SAS statistical software (SAS 9.4).
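The indices above follow their conventional definitions. Denoting the hypothesized model’s chi-square and degrees of freedom by \(\chi^{2}_{m}\) and \(df_{m}\), the null (baseline) model’s by \(\chi^{2}_{0}\) and \(df_{0}\), and the sample size by \(N\), they are commonly computed as

\[ \mathrm{RMSEA} = \sqrt{\frac{\max\left(\chi^{2}_{m} - df_{m},\, 0\right)}{df_{m}\,(N-1)}}, \qquad \mathrm{CFI} = 1 - \frac{\max\left(\chi^{2}_{m} - df_{m},\, 0\right)}{\max\left(\chi^{2}_{0} - df_{0},\, \chi^{2}_{m} - df_{m},\, 0\right)}, \]

while SRMR is the square root of the average squared discrepancy between the observed and model-implied standardized residual correlations.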
Initial reliability results indicated moderate cohesion, prompting refinement to improve construct clarity. Five low-loading indicators were removed: C2 (High costs and technical limitations of GAI tools), C6 (Integration challenges with engineering software), C9 (GAI reducing human factors in the teaching process), P1 (Concerns about GAI replacing teaching roles), and P5 (Stress caused by adopting GAI tools) (Table 6). These items exhibited weak item–total correlations (0.21–0.32) and negatively influenced internal consistency. Their exclusion enhanced parsimony and aligned with earlier RII results, which also ranked these variables among the least important for both stakeholder groups.
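A minimal sketch of how such corrected item–total correlations can be screened is shown below; the data frame, item codes, and the 0.3 screening threshold are illustrative assumptions rather than the study’s exact procedure or data.

```python
import pandas as pd

def corrected_item_total(items: pd.DataFrame) -> pd.Series:
    """Correlation of each item with the sum of the remaining items in its construct."""
    total = items.sum(axis=1)
    return pd.Series({col: items[col].corr(total - items[col]) for col in items.columns})

# Hypothetical usage: 'challenges' is a respondent-by-item matrix (columns C1..C9,
# 1-5 Likert values). Items falling below a ~0.3 corrected correlation (e.g., C2, C6,
# and C9 in this study) would be flagged for removal before re-estimating the CFA.
# weak_items = corrected_item_total(challenges)[lambda s: s < 0.3].index.tolist()
```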
Table 7 shows that all refined constructs achieved acceptable internal consistency. Enablers (α = 0.81) and Strategies (α = 0.83) display the strongest cohesion, reflecting participants’ stable agreement on tangible facilitators such as institutional support, tool availability, and clear policies. Challenges (α = 0.63) and Psychological Factors (α = 0.69) present moderate consistency, as expected in exploratory studies where perceptions are heterogeneous [44].
This variance itself is analytically meaningful: it reflects diverse experiential standpoints between students and faculty, particularly in the more subjective domains of risk perception and emotional adaptation.
Post-refinement CFA yielded a significantly improved fit (Table 8). The chi-square statistic decreased from 919.32 to 634.02 and the degrees of freedom from 517 to 367, reducing overfitting and enhancing parsimony. SRMR fell from 0.111 to 0.107 and RMSEA from 0.087 to 0.084, while CFI rose from 0.621 to 0.703.
Although CFI remains below the 0.90 benchmark, these values demonstrate adequate fit for an exploratory model given the modest sample size (N = 105) and the novelty of GAI constructs within MEE. This is consistent with methodological guidance indicating that moderate CFI values (≈0.7–0.8) are acceptable in early-stage or small-sample SEM studies, especially when models involve emerging constructs and heterogeneous respondents [43]. SRMR and RMSEA within moderate ranges confirm acceptable residual variance, and improvement across all indices indicates the refined structure better captures latent relationships among the constructs.
To further validate construct reliability, Composite Reliability (CR) and Average Variance Extracted (AVE) were computed. CR values exceeded 0.70 for Enablers (0.77) and Strategies (0.78), confirming strong internal consistency [44]. Conversely, Challenges (0.46) and Psychological Factors (0.49) fell below the threshold, indicating partial reliability, consistent with the exploratory stage of conceptualization. AVE values ranged from 0.15 to 0.29, below the recommended 0.50 criterion [45], implying limited convergent validity.
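Both indices follow the usual loadings-based expressions: with standardized loadings \(\lambda_i\) for the \(k\) indicators retained in a construct,

\[ \mathrm{CR} = \frac{\left(\sum_{i=1}^{k} \lambda_i\right)^{2}}{\left(\sum_{i=1}^{k} \lambda_i\right)^{2} + \sum_{i=1}^{k}\left(1 - \lambda_i^{2}\right)}, \qquad \mathrm{AVE} = \frac{1}{k}\sum_{i=1}^{k} \lambda_i^{2}. \]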
Such results are methodologically acceptable in formative, cross-disciplinary domains where constructs are new and respondents interpret items diversely. Retaining these variables was theoretically necessary to maintain the four-pillar structure derived from the Extended TAM, ensuring conceptual completeness for subsequent modeling.
Two major insights emerge:
Construct Asymmetry in Reliability—Higher internal consistency for Enablers and Strategies suggests participants share clearer mental models of what facilitates GAI integration (e.g., tool availability, institutional policies) than of what inhibits or psychologically influences it. This pattern mirrors early adoption dynamics, where enabling conditions crystallize before resistance factors are fully articulated.
Theoretical Validation of Stakeholder Perceptions—The refined four-factor structure confirms that faculty and students conceptualize GAI integration through parallel cognitive frameworks. Enablers and Strategies map onto the “perceived usefulness” and “facilitating conditions” dimensions of TAM, while Challenges and Psychological Factors correspond to “perceived ease of use” and “attitudinal intention.” The CFA thus empirically grounds the qualitative insights within an established adoption theory.
Overall, this section addresses the third research question by verifying that the proposed factors and their constructs exhibit acceptable reliability and validity through the statistical analyses, including CFA, Cronbach’s α, composite reliability, and average variance extracted.
The improved reliability and model fit justify advancing to PLS-SEM. CFA has verified that the constructs are empirically distinct yet conceptually interdependent, providing a solid foundation for testing the causal pathways among enablers, challenges, strategies, and perceptions of successful GAI integration in MEE.
Hence, this stage elevates the study from descriptive factor validation to a theoretically anchored measurement model, linking statistical robustness with pedagogical insight.
4.2.4. Partial Least Squares Structural Equation Modeling (PLS-SEM)
To examine the last research question, PLS-SEM was employed to test the structural relationships among the latent constructs (Enablers, Challenges, Strategies, and Psychological Factors) and the higher-order construct Perception of Successful GAI Integration. PLS-SEM was chosen because it emphasizes variance explanation in exploratory contexts and performs well with moderate samples (N = 105; 61 students and 44 faculty), unlike covariance-based SEM (CB-SEM), which requires larger, normally distributed datasets [46]. Familiarity with GAI was not modeled as a predictor because both groups had already demonstrated high familiarity in earlier phases.
Guided by the Extended TAM and qualitative insights, five hypotheses were tested to capture direct and indirect relationships among constructs, as shown in Figure 7:
H1 Strategies → Perception (positive)
H2 Challenges → Perception (negative)
H3 Enablers → Perception (positive)
H4 Psychological factors → Perception (effect present)
H5 Enablers, Challenges, and Strategies → Psychological factors (effect present)
Conceptually, Enablers and Strategies reflect perceived usefulness and facilitating conditions; Challenges represent ease-of-use barriers; and Psychological Factors capture attitudes and intentions toward adoption.
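In equation form, and corresponding to the hypotheses above, the structural model can be written as two simultaneous relations (β and γ denote standardized path coefficients and ζ structural residuals):

\[ \mathrm{Psychological} = \gamma_{1}\,\mathrm{Enablers} + \gamma_{2}\,\mathrm{Challenges} + \gamma_{3}\,\mathrm{Strategies} + \zeta_{1} \quad (\text{H5}), \]
\[ \mathrm{Perception} = \beta_{1}\,\mathrm{Strategies} + \beta_{2}\,\mathrm{Challenges} + \beta_{3}\,\mathrm{Enablers} + \beta_{4}\,\mathrm{Psychological} + \zeta_{2} \quad (\text{H1–H4}). \]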
The model was estimated using PROC CALIS (SAS). Because this research is exploratory and the sample is modest, p-values between 0.05 and 0.10 were treated as trend-level significance [21]. The Goodness-of-Fit Index (GFI = 0.64) is acceptable for early-stage modeling. Standardized loadings for Enablers, Challenges, and Strategies ranged from 0.34 to 0.71 (p ≈ 0.065–0.0946), while Psychological Factors produced weaker loadings (p ≈ 0.76), indicating limited contribution.
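For readers who wish to reproduce a comparable analysis without SAS, the sketch below specifies the same measurement and structural layout in the open-source semopy package; note that this is a covariance-based estimator rather than the PLS algorithm, and the indicator names are hypothetical placeholders for the survey items.

```python
import pandas as pd
import semopy

# Lavaan-style model syntax; indicator names (E1..., C1..., S1..., P..., Q...)
# are placeholders, not the study's actual variable names.
MODEL_DESC = """
Enablers   =~ E1 + E2 + E3 + E4
Challenges =~ C1 + C3 + C4 + C5
Strategies =~ S1 + S2 + S3 + S4
Psych      =~ P2 + P3 + P4
Perception =~ Q1 + Q2 + Q3

Psych      ~ Enablers + Challenges + Strategies          # H5
Perception ~ Strategies + Challenges + Enablers + Psych  # H1-H4
"""

def estimate(df: pd.DataFrame) -> None:
    """Fit the structural model on a respondent-by-item data frame and print results."""
    model = semopy.Model(MODEL_DESC)
    model.fit(df)
    print(model.inspect())             # path and loading estimates with p-values
    print(semopy.calc_stats(model).T)  # chi-square, df, CFI, RMSEA, GFI, etc.
```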
Table 9 summarizes all paths, and Figure 8 illustrates the structural relationships among constructs and their standardized effects, visually highlighting how strategies, enablers, and challenges shape perception while psychological factors remain weakly connected. Strategies, Challenges, and Enablers all exert discernible effects on Perception (although, contrary to the hypothesized negative direction, the Challenges path emerges as positive, as discussed below), while Psychological Factors exert no meaningful direct effect.
Strategies → Perception: β ≈ 0.80, p = 0.0867 (strongest)
Challenges → Perception: β ≈ 0.82, p = 0.0686
Enablers → Perception: β ≈ 0.66, p = 0.0687
Psychological Factors → Perception: β ≈ 0.22, p = 0.8875
Enablers/Challenges/Strategies → Psychological Factors: non-significant (p > 0.70)
The results highlight several meaningful patterns.
Institutional scaffolding precedes attitudinal change: Strategies, policy clarity, training, and gradual roll-out show the greatest influence on perceived integration success, implying that visible institutional action shapes acceptance more strongly than individual motivation. This sequencing is consistent with early-stage adoption models where external conditions precede attitude formation.
Challenges act as indirect catalysts: Although hypothesized as negative, the positive coefficient for Challenges suggests that acknowledging and managing difficulties (e.g., over-reliance, ethical misuse) signals maturity and feasibility. In other words, when institutions visibly address risks, stakeholders interpret adoption as credible and controlled.
Psychological neutrality at early adoption: The null effect of Psychological Factors mirrors the mixed qualitative responses: students reported enthusiasm and efficiency gains, while faculty reported stress and integrity worries, opposing effects that cancel out statistically. Structural and cognitive determinants dominate until consistent norms emerge.
The evidence points to several practical priorities:
Lead with structure: Develop clear governance, training programs, and phased pilots; these yield the strongest influence on perceived success.
Treat risk as design input: Integrate safeguards (assessment redesign, academic-integrity workflows) into early adoption stages.
Sequence adoption intelligently: Start in high-utility domains such as design, simulation, and coding before extending to reflective and collaborative activities.
Follow with attitudinal work: Ethics awareness and motivation initiatives should complement, not precede, structural reforms.
Accepting trend-level significance (p = 0.05–0.10) is appropriate for exploratory PLS-SEM with modest N [47]. Coefficients here provide directional evidence rather than confirmatory proof, underscoring the need for larger, discipline-balanced replications and refined measurement, particularly for the Psychological Factors construct. Future work may test mediation or moderation effects once measurement stability improves.