Article

Generative AI in Mechanical Engineering Education: Enablers, Challenges, and Implementation Pathways

1 Department of Industrial Engineering, American University of Sharjah, Sharjah P.O. Box 26666, United Arab Emirates
2 Social Science Division, New York University, Abu Dhabi P.O. Box 129188, United Arab Emirates
* Authors to whom correspondence should be addressed.
Sustainability 2025, 17(23), 10817; https://doi.org/10.3390/su172310817
Submission received: 16 October 2025 / Revised: 14 November 2025 / Accepted: 25 November 2025 / Published: 2 December 2025

Abstract

Generative Artificial Intelligence (GAI) is rapidly transforming higher education, yet its integration within Mechanical Engineering Education (MEE) remains insufficiently explored, particularly regarding the perspectives of faculty and students on its enablers, challenges, strategies, and psychological dimensions. This study addresses this gap through a sequential mixed-methods design that combines semi-structured interviews with faculty and students and a survey (N = 105) comprising 61 students and 44 faculty members, primarily from universities in the UAE. Quantitative analyses employed the Relative Importance Index (RII) to prioritize factors, Confirmatory Factor Analysis (CFA) to test construct validity, and Partial Least Squares Structural Equation Modeling (PLS-SEM) to examine interrelationships. Results indicate convergence across groups: the top enablers include students’ willingness and tool availability for time efficiency; the main challenges concern ethical misuse and over-reliance reducing critical thinking; and the most effective strategies involve clear policies, training, and gradual adoption. CFA confirmed construct reliability after excluding low-loading items (SRMR ≈ 0.11; RMSEA ≈ 0.08; CFI ≈ 0.70). PLS-SEM revealed that enablers, challenges, and strategies significantly influence overall perceptions of successful integration, whereas psychological factors exert no significant effect. The study offers empirically grounded priorities and validated measures to guide curriculum design, faculty development, and policy formulation for the responsible and effective adoption of GAI in MEE.

1. Introduction

Artificial Intelligence (AI) and its subsets, such as Generative Artificial Intelligence (GAI), are rapidly transforming various academic fields by enhancing learning experiences and supporting the development of critical academic and professional skills. AI systems can observe their environment and respond with actions aimed at maximizing the likelihood of achieving specific goals. These systems process and analyze data, enabling them to self-learn and adapt progressively. GAI, a subset of AI, operates through advanced techniques including Natural Language Processing (NLP), Large Language Models (LLMs), and Generative Adversarial Networks (GANs), allowing it to comprehend, interpret, and generate content meaningfully [1]. Tools such as ChatGPT, GitHub Copilot, and Gemini have revolutionized educational sectors, including medicine, nursing, computer science, and engineering, by providing personalized learning, identifying knowledge gaps, and strengthening core competencies such as creativity, innovation, and problem-solving [2,3].
While the broader impact of GAI in education is increasingly evident, its integration within Mechanical Engineering Education (MEE) remains limited and insufficiently explored. The perspectives of key stakeholders (faculty members and students) on the enablers, challenges, strategies, and psychological factors influencing this integration are rarely investigated. As a result, there is a growing need to understand how these factors shape the responsible and effective adoption of GAI in MEE, a discipline that requires both conceptual understanding and applied design skills.
Mechanical Engineering (ME) is a broad, multidisciplinary field that applies principles of physics, design, manufacturing, and maintenance to develop mechanical systems. It encompasses diverse areas, including manufacturing robotics, aerospace engineering, and computer-aided design (CAD) [4]. ME plays a critical role in addressing global challenges such as climate change and sustainability through innovations in energy systems, mobility, and manufacturing. In an era of rapid technological change, ME programs must not only emphasize fundamental engineering principles but also leverage emerging digital technologies to cultivate adaptability, creativity, and problem-solving skills in students [5,6]. This underscores the importance of maintaining a curriculum that aligns with technological advancement and equips students to thrive in evolving industrial ecosystems.
Despite rapid advances in GAI, the perspectives of faculty and students in MEE regarding the enablers, challenges, strategies, and psychological factors shaping its integration remain largely unexplored. To address this gap, the present study seeks to identify, prioritize, validate, and model these factors in relation to stakeholders’ overall perception of successful GAI integration in MEE. Specifically, it aims to bridge the current gap between technological capability and educational readiness, providing empirical insight into how GAI can be effectively and ethically implemented in mechanical engineering contexts. Additionally, the incorporation of GAI in MEE aligns with the aims of educational sustainability and Sustainable Development Goal 4 (SDG 4), which emphasizes quality education and promotes inclusive, resilient, and future-ready engineering education. By ensuring ethical AI integration and digital literacy, this study contributes to the long-term sustainability of higher education systems by equipping graduates for rapidly evolving technological ecosystems [7].
The study is guided by the following research questions:
RQ1. What enablers, challenges, strategies, and psychological factors do faculty and students perceive as relevant to GAI integration in MEE?
RQ2. Which of these key factors are most crucial to stakeholders?
RQ3. Do the proposed factors and their constructs exhibit reliability and validity?
RQ4. How do enablers, challenges, strategies, and psychological factors relate to stakeholders’ overall perception of successful integration?
To address these questions, the study adopts a sequential exploratory mixed-methods approach. Initially, a comprehensive literature review was conducted to examine the current state of AI applications in MEE and identify opportunities for GAI adoption. This was followed by semi-structured interviews with faculty and students to capture stakeholder perspectives regarding the key factors influencing GAI integration. The insights derived from these qualitative phases informed the development of a structured survey instrument for quantitative analysis. The quantitative phase employed the Relative Importance Index (RII) to prioritize factors, Confirmatory Factor Analysis (CFA) and Cronbach’s α to assess construct validity and reliability, and Partial Least Squares Structural Equation Modeling (PLS-SEM) to examine structural relationships between constructs and stakeholders’ overall perceptions.
By empirically determining which factors matter most and how they shape perceptions, the findings aim to inform curriculum design, faculty development, and institutional policy for the responsible integration of GAI in MEE. Practically, the study provides insights that can help programs amplify high-impact enablers, mitigate priority challenges, and implement targeted strategies when embedding GAI into courses and laboratory activities.
This research contributes to the literature by (i) identifying and prioritizing key factors influencing GAI integration in MEE using RII; (ii) validating the constructs of enablers, challenges, strategies, and psychological factors through CFA and Cronbach’s α; (iii) modeling their interrelationships through PLS-SEM to determine their impact on stakeholders’ overall perception of successful GAI integration.
The remainder of this paper is organized as follows: Section 2 presents the related literature and theoretical framework underpinning the study; Section 3 details the methodology, data collection instruments, and ethical considerations; Section 4 reports the qualitative and quantitative findings (RII, CFA, and PLS-SEM analyses); Section 5 concludes the paper by summarizing key findings, practical implications, and directions for future research.

2. Literature Review

Building on the research gap established in the introduction, this section provides a comprehensive examination of existing studies on the integration of AI and, more specifically, GAI within MEE. The purpose of this review is to synthesize and categorize the main insights from prior research to identify the enablers, challenges, and strategic and psychological factors that influence the effective adoption of GAI in MEE. Establishing these factors is essential for understanding both the opportunities and barriers that accompany GAI adoption and for developing empirically grounded implementation pathways suited to engineering education contexts.
The literature review is structured into three interrelated subsections. Section 2.1 examines prior work on the application of AI in MEE and highlights the existing research gaps, emphasizing the limited focus on GAI and the need for stakeholder-centered perspectives. Section 2.2 summarizes the factors derived from prior literature, grouped as enablers, challenges, and strategies, forming the conceptual foundation for this study’s qualitative and quantitative phases. Finally, Section 2.3 introduces the Extended Technology Acceptance Model (TAM) as the theoretical framework guiding empirical investigation. This model provides a lens for interpreting how the identified factors relate to users’ perceptions, attitudes, and intentions toward adopting GAI in MEE.
Together, these subsections establish a structured understanding of the current state of AI and GAI integration in engineering education and provide the theoretical basis for the mixed-methods analysis that follows.

2.1. AI and MEE and Gaps in the Literature

In an era of rapid technological advancement, the integration of AI into higher education, particularly in engineering disciplines, has gained remarkable traction among scholars and practitioners. Within MEE, AI has emerged as a transformative force that enhances learning experiences, automates assessments, and strengthens analytical and problem-solving skills. Existing studies highlight both the enabling factors and challenges associated with AI adoption, emphasizing its applications in personalized learning, assessment, and project management. For example, ref. [8] used machine learning algorithms to predict student success, improve retention, and identify at-risk learners through exam-pattern analysis. Similarly, ref. [9] applied AI to optimize project team allocation using domain requirements, team experience, and task complexity, improving collaboration and fairness. The work in [10] introduced MathBot, a contextual bandit-based chatbot that explained equations, delivered practice problems, and generated adaptive feedback, demonstrating how reinforcement learning can tailor educational content to individual learners. Collectively, these studies demonstrate AI’s potential to automate time-consuming processes, personalize learning, and enhance cognitive efficiency, reinforcing its increasing relevance across engineering education.
Beyond assessment and personalization, AI is increasingly integrated into immersive and hands-on technologies, including virtual reality (VR), augmented reality (AR), and robotics, fields integral to mechanical engineering practice. As observed in [11], embedding AI into these environments deepens conceptual understanding and improves spatial reasoning. The study in [12] developed an AI-powered VR simulation using deep reinforcement learning to create adaptive, interactive design environments that respond to user behavior, enhancing learners’ comprehension of complex CAD models. Likewise, ref. [13] proposed a modular robotic platform with AI-enabled autonomy and adaptability, designed to promote multidisciplinary learning in MEE. The platform was shown to improve collaboration, motivation, and comprehension while providing safer, more flexible laboratory experiences. These integrations underscore the pedagogical and psychological value of combining AI with immersive technologies to enhance engagement and experiential learning.
AI applications have also expanded within additive manufacturing and engineering drawing, key pillars of MEE. The study in [14] underscored the role of 3D printing in fostering creativity, confidence, and design skills through hands-on learning. Similarly, refs. [15,16] demonstrated that AI integration in manufacturing enables improved process control, real-time monitoring, predictive modeling, and innovative material design, reducing cost, waste, and production time. Meanwhile, ref. [17] highlighted how deep learning enhances the digitization of engineering drawings, improving accuracy in detecting, classifying, and converting 2D elements into 3D models. Together, these studies reveal AI’s transformative role in bridging design and manufacturing, fostering intelligent automation, and preparing students for the evolving demands of Industry 4.0.
A growing body of research, such as [18,19], emphasizes aligning engineering curricula with emerging technologies to close the gap between academia and industry. AI-enabled tools, such as translation systems and CAD-based online platforms, facilitate remote learning and multilingual access to scientific content. Post-pandemic shifts toward digital education have further accelerated this need for technological alignment. As [18] showed, AI-enhanced simulation tools improve precision and automation in finite element analysis, while [19,20] proposed intelligent systems that personalize CAD instruction and address skill mismatches between academic preparation and industrial expectations.
However, while AI’s educational potential is substantial, its ethical and pedagogical implications warrant careful consideration. As noted in [21], issues such as academic dishonesty, plagiarism, and over-reliance on automation highlight the necessity for clear institutional policies and faculty training on ethical use. Recent research on GAI further accentuates this duality. Studies such as [22] demonstrate that GAI enhances creativity, productivity, and innovation in areas ranging from product design to process systems engineering. Likewise, students report significant benefits from tools like ChatGPT, including assistance with data analysis, programming, writing, and translation [23,24]. Yet, as [25,26] caution, these technologies raise legitimate concerns about accuracy, reliability, bias, and ethical misuse, underscoring the need for robust governance frameworks tailored to disciplinary contexts.
Despite the expanding literature on AI applications in MEE, a pronounced research gap remains concerning GAI integration. As highlighted by [11], few studies systematically investigate how GAI can be embedded within mechanical engineering curricula or how faculty and students perceive the associated enablers, challenges, strategies, and psychological dimensions. Existing research tends to be descriptive, focusing on isolated case studies rather than developing validated, theory-informed models of adoption. Commonly cited enablers include time efficiency, workload reduction, and alignment with industry-relevant skills, whereas challenges center on ethical misuse, cost, and integration with existing systems. Proposed strategies emphasize academic integrity safeguards and policy development, while psychological factors, such as motivation, confidence, and cognitive engagement, remain understudied.
Current literature affirms the transformative role of AI in MEE but lacks a comprehensive, stakeholder-centered understanding of GAI adoption. There is limited empirical validation of key factors or modeling of their interrelationships in shaping perceptions of successful integration. This gap provides the rationale for the present study, which seeks to identify, prioritize, and validate these factors through a mixed-methods approach. The next section (Section 2.2), therefore, summarizes the key enablers, challenges, and strategies extracted from the literature, forming the conceptual basis for subsequent qualitative and quantitative analyses.

2.2. Literature-Derived Factors

Drawing upon the comprehensive synthesis of prior research on AI in MEE conducted in [11], a structured set of key factors influencing the adoption of AI and GAI was identified. These factors reflect recurring patterns across the literature addressing how AI affects learning design, technological adaptation, and stakeholder engagement. Consistent with [11], the extracted factors were categorized into three overarching dimensions (enablers, challenges, and strategies), each also encompassing associated psychological aspects, such as motivation, cognitive engagement, and ethical awareness. This classification captures the multifaceted nature of AI integration, where technological readiness, institutional support, and human behavior intersect to shape implementation outcomes.
Enablers represent the technological and pedagogical drivers that promote successful integration. They include the automation of repetitive academic tasks, real-time feedback mechanisms, and the personalization of learning content. Studies reviewed in [11] also emphasized that such enablers enhance time efficiency, conceptual understanding, and cognitive development while reducing instructor workload. Additional factors such as collaboration, spatial reasoning, and the alignment of learning outcomes with industry-relevant skills were found to strengthen both student engagement and employability. Collectively, these enablers illustrate how AI can serve as both a catalyst for pedagogical innovation and a bridge between academic learning and industrial application.
Challenges, conversely, encapsulate the institutional and ethical barriers impeding AI adoption. Prominent among these are the high cost of integrating AI infrastructure, concerns about academic integrity and ethical misuse, and the difficulty of embedding AI tools within existing educational platforms. These limitations, as outlined in [11], underscore the need for universities to balance innovation with responsible governance and equitable access. Without systematic frameworks and ethical safeguards, even advanced technologies risk undermining educational authenticity and widening the digital divide.
Strategies denote the institutional, instructional, and policy-oriented approaches proposed in the literature to mitigate these challenges. As highlighted in [11], successful integration requires the development of clear academic policies, training initiatives, and evaluation frameworks that ensure GAI tools enhance rather than replace human learning. Effective strategies also include embedding AI within teamwork-based projects, integrating plagiarism detection systems for AI-generated content, and promoting collaborative and critical learning cultures. These interventions are crucial to cultivating a responsible innovation mindset among students and educators, ensuring that AI implementation strengthens academic integrity and cognitive growth.
The three categories collectively depict how AI influences the broader educational ecosystem, from learning efficiency and instructional design to ethical behavior and institutional planning. Table 1 summarizes the factors derived from [11], which serve as the conceptual foundation for the empirical phase of this research. In the following stages, these factors are further examined and validated using the Relative Importance Index (RII), CFA, and PLS-SEM to determine their relative significance and interrelationships.
Although the literature highlights these factors extensively, few studies have examined their relative importance or causal relationships in shaping stakeholders’ perceptions of successful GAI integration. This gap underscores the need for empirical prioritization and validation, forming the rationale for the present study. Accordingly, the next section (Section 2.3) introduces the Extended TAM, which provides the theoretical lens for analyzing how these enablers, challenges, and strategies influence adoption behavior within MEE.

2.3. Technology Acceptance Model (TAM)

The adoption and sustained use of emerging technologies in education are frequently examined through TAMs, which provide a robust theoretical foundation for analyzing user acceptance, behavioral intention, and resistance [36]. In higher education, such frameworks are crucial for explaining how faculty and students perceive, adopt, and continuously utilize intelligent and digital technologies in teaching and learning environments. Over the past decades, several variants of TAM have been developed, including the extended TAM [37], TAM2 [38], and the Unified Theory of Acceptance and Use of Technology (UTAUT) [39]. Each version extends the original model by integrating additional constructs, such as social influence, facilitating conditions, and external variables, to better account for contextual and organizational factors influencing technology use.
For this study, the Extended TAM (Figure 1) is adopted as the conceptual lens because it preserves the original model’s core constructs of perceived usefulness and perceived ease of use while explicitly accommodating external variables such as institutional policies, training, and resource availability. This flexibility makes the Extended TAM particularly suitable for exploring the complex, multi-factorial process of GAI adoption within MEE. Unlike the traditional TAM, which focuses primarily on individual perceptions, the Extended TAM captures broader institutional and pedagogical dynamics, making it an ideal framework for modeling stakeholder perceptions and behavioral responses.
Accordingly, the study’s four empirically derived factor groups align directly with the Extended TAM’s theoretical dimensions:
  • Enablers correspond to perceived usefulness and facilitating conditions, reflecting how GAI tools enhance efficiency, creativity, and learning outcomes.
  • Challenges relate to perceived ease-of-use barriers, encompassing technical constraints, ethical risks, and implementation costs.
  • Strategies map onto external influences, including institutional policies, training initiatives, and governance structures that shape adoption behaviors.
  • Psychological factors align with attitude and behavioral intention, representing cognitive, emotional, and motivational aspects that drive individual acceptance.
The Extended TAM thus provides a coherent theoretical foundation for this research, enabling the systematic examination of both technological determinants (usefulness and ease of use) and contextual moderators (policy, training, and ethics) that influence adoption outcomes. It integrates well with the study’s mixed-methods design by linking qualitative insights with quantitative validation through Relative Importance Index (RII), CFA, and PLS-SEM. Guided by this framework, the subsequent sections, Section 3 (Methodology) and Section 4 (Results and Discussion), investigate how the identified enablers, challenges, strategies, and psychological factors collectively shape faculty and student perceptions of successful GAI integration in MEE.

3. Methodology

The purpose of this study is to identify and validate the enablers, challenges, strategies, and psychological factors influencing the integration of GAI in MEE from the perspectives of both faculty and students. Guided by the Extended TAM (Section 2.3), the research follows a sequential exploratory mixed-methods design, in which qualitative findings inform quantitative testing. This approach allows for a deep exploration of stakeholder perceptions and empirical validation of the constructs that underpin GAI adoption in MEE.

3.1. Research Design

The study was conducted in three sequential phases:
  • Literature review to identify preliminary factors related to AI and GAI adoption in MEE;
  • Qualitative phase to refine and contextualize these factors through stakeholder interviews; and
  • Quantitative phase to validate and model their interrelationships statistically.
The study adopted a sequential mixed-methods design to capture both generalizability and depth. The qualitative phase (semi-structured interviews) explored emerging themes and contextual refinements, which subsequently informed the quantitative survey items. This sequential structure supports methodological triangulation and coherence between the inductive and deductive stages, ensuring both conceptual richness and empirical robustness. The choice of a mixed-methods design follows the Extended TAM, examining the adoption of a technology (i.e., GAI) by validating exploratory findings through quantitative analysis [40,41].

3.2. Literature Review

As detailed in Section 2.2, the synthesis of prior work, including [11], yielded an initial list of potential enablers, challenges, and strategies, together with embedded psychological aspects such as motivation, engagement, and ethical awareness. These literature-derived factors provided the conceptual basis for the qualitative interview protocol.

3.3. Qualitative Phase

3.3.1. Data Collection

The qualitative phase comprised semi-structured interviews with 13 faculty members and 33 students from universities in the United Arab Emirates. Participants were selected through purposive and snowball sampling to ensure relevance to MEE and related engineering programs. Interviews lasted about 60 min and were conducted either face-to-face or online. All participants signed informed consent forms, and confidentiality was strictly maintained.
The interview guide, developed from the literature-derived factors and aligned with the Extended TAM constructs, combined open-ended questions with short rating tasks on a five-point Likert scale (1 = strongly disagree; 5 = strongly agree) to gauge the perceived importance of each factor.

3.3.2. Data Analysis for Qualitative Phase

Interview transcripts were analyzed using thematic analysis. A hybrid coding approach was applied: deductive codes based on literature categories and inductive codes emerging from participants’ narratives. Coding reliability was ensured through iterative cross-checking among the research team. Data saturation occurred at the 13th faculty and 33rd student interview, when no new substantive codes appeared. The resulting factor set informed the design of the quantitative survey instruments.

3.4. Quantitative Phase

3.4.1. Survey Design and Sampling

Two structured questionnaires, one for faculty and one for students, were developed to operationalize the refined factors using five-point Likert items. The faculty survey emphasized institutional and strategic aspects of GAI integration, whereas the student survey focused on their prioritization of GAI applications across MEE domains such as design, simulation, computation, coding, and research.
A cluster sampling strategy ensured diversity across institutions, academic ranks, and study levels. A total of 105 valid responses were obtained, providing a sufficient sample for structural equation modeling.

3.4.2. Data Analysis for Quantitative Phase

Quantitative analysis involved multiple complementary techniques:
  • RII: to rank the perceived significance of each factor group (enablers, challenges, strategies, psychological factors) and to compare faculty–student priorities.
  • CFA and Cronbach’s alpha: to assess construct validity, convergent validity, and internal consistency of measurement items.
  • PLS-SEM: to evaluate causal relationships among constructs and their overall influence on stakeholder perceptions of successful GAI integration.
This multi-stage analytical process provided statistically robust evidence aligned with the Extended TAM framework and enabled the identification of both direct and mediated effects among constructs.

3.5. Ethical Considerations

All participation was voluntary. Respondents received full study information, signed written informed consent, and retained the right to withdraw at any time. Anonymity and confidentiality were preserved throughout data handling. Ethical approval was granted by the Institutional Review Board (IRB) of the host university prior to commencing data collection.
The ethical protocols were deployed across both qualitative and quantitative phases. In the qualitative phase, semi-structured interviews were conducted with participants in a confidential and voluntary manner, ensuring anonymity in all transcripts and subsequent analyses. Similarly, in the quantitative phase, structured surveys were distributed anonymously to avoid collecting identifiable data and to protect respondents’ privacy. This unified ethical framework emphasized transparency, voluntariness, and data protection across both methods, hence reinforcing the integrity and reliability of the mixed-methods design.
Given that the study included both faculty members and students, special care was taken to minimize any perceived hierarchical impact or pressure to participate. Recruitment was conducted through neutral communication channels, such as email invitations or face-to-face invitations, and participation was completely voluntary. Students were explicitly informed that their decision to participate or not would not affect their academic standing or relationship with faculty members. This fostered a psychologically safe environment and upheld the ethical principles of respect, autonomy, and non-coercion throughout both phases of the research.

3.6. Integration of Findings

Findings from the qualitative and quantitative phases were integrated to construct a comprehensive, evidence-based framework for responsible GAI adoption in MEE. Integration enabled the prioritization and validation of the most influential factors and the development of actionable recommendations for gradual, stakeholder-driven implementation. This mixed-methods integration ensures that the study’s conclusions are empirically grounded, theoretically consistent, and directly applicable to curriculum design, policy formation, and faculty development initiatives in MEE.

4. Results and Analysis

This section presents the study’s empirical outputs in a manner consistent with the sequential exploratory mixed-methods design outlined in Section 3. Guided by the Extended TAM framework, we organize the evidence around the four focal constructs of the study (enablers, challenges, strategies, and psychological factors) to address the research questions on stakeholders’ perceptions of GAI integration in MEE.
Section 4.1 reports the qualitative analysis, detailing themes that emerged from semi-structured interviews with faculty and students and explaining how these themes refined the factor set derived from the literature. Section 4.2 presents the quantitative analysis, where we operationalize the refined constructs and evaluate them using the RII, CFA, and PLS-SEM to assess prioritization, measurement quality, and structural relationships.

4.1. Qualitative Analysis

The qualitative phase represents the exploratory foundation of this mixed-methods study, offering a deeper interpretation of how faculty and students perceive the integration of GAI into MEE. Using thematic analysis of semi-structured interviews, this phase moves beyond description to explain why certain perceptions emerge and how they relate to institutional, pedagogical, and psychological conditions. The findings refine the literature-derived factors (Section 2.2) and shape the conceptual structure validated in the quantitative stage.
Ethical integrity guided all stages of the research: every participant signed an informed-consent form, anonymity was secured through coded identifiers (F# = faculty; S# = student), and ethical clearance was obtained from the IRB at the American University of Sharjah.

4.1.1. Semi-Structured Interviews

A total of 46 participants were interviewed, 33 students and 13 faculty members, as summarized in Table 2. Among students, 21 (63.6%) majored in Mechanical Engineering and 12 (36.4%) in closely allied disciplines such as Civil, Chemical, Electrical, Industrial, or Computer Engineering. Among faculty, 8 (61.5%) taught in MEE and 5 (38.5%) in related areas. Faculty had a median of 16 years of teaching experience (IQR = 6–22), and students included 55% undergraduates and 45% postgraduates.
Given time constraints, purposive and convenience sampling ensured both relevance and diversity: while most participants were in MEE, faculty from supportive disciplines (e.g., programming, physics) were included because their courses underpin MEE skill development. Each 60 min interview included visual prompts of GAI-generated content in design, simulation, and computation. This design encouraged participants to reflect critically rather than report superficially, enabling comparison between technical and pedagogical viewpoints. Table 2 presents the distribution of participants by group and discipline.
Participants’ Familiarity with GAI Tools
Before examining determinants, participants rated their familiarity with ChatGPT, Gemini, and GitHub Copilot (Likert 1–5) and listed other tools. Familiarity was nearly universal: 12 of 13 faculty and all 33 students actively used GAI. ChatGPT was dominant (students: 24 “extremely familiar”; faculty: 7). Students also mentioned Llama, Quillbot, NotebookLM, GitHub Copilot, ChatSpot, and Jules; faculty listed Microsoft Copilot, Perplexity AI, DALL-E, Llama, and Scite AI.
What matters analytically is how this familiarity translates into perceived legitimacy and inevitability. As one student emphasized, “I have a strong understanding of GAI and its transformative potential in MEE … for predictive maintenance and adaptive control” (S2). Faculty echoed inevitability: “If I am not aware of the tool, I wouldn’t know what students are using … I cannot stop them from using GAI” (F11). Such statements reveal agency asymmetry; faculty recognize a loss of gatekeeping control, while students frame GAI as a right or expectation of modern learning.
Patterns of use were diverse (see Table 3): most students used GAI for research (97%), writing (94%), idea generation (91%), problem-solving (85%), coding (82%), and visualization (76%). These percentages show that GAI has already become embedded in core cognitive tasks, not peripheral ones. Faculty usage, in turn, was instrumental, focused on preparing material and automating repetitive work. The contrast reveals an early-stage pedagogical gap: students treat GAI as an extension of learning cognition, while faculty view it as an instructional aid requiring control and oversight.
Despite enthusiasm, ethical unease persisted. A minority (3 faculty; 4 students) warned against dependency and moral shortcuts: “Useful in projects … but not in assignments due to ethical concerns and loss of critical thinking” (F2). This duality of widespread adoption versus concern for integrity sets the stage for the constructs analyzed below.
Enablers, Challenges, and Psychological Factors
Drawing on the Extended TAM, interview codes were grouped as enablers (usefulness/facilitating conditions), challenges (ease-of-use barriers), strategies (external influences), and psychological factors (attitudes/intentions). This mapping moves analysis from surface description to explanation of why these categories matter.
Enablers: Both groups prioritized time efficiency and workload reduction, but the rationale differed. Students associated efficiency with personal mastery (“These tools save time and let me understand faster”), while faculty saw it as systemic productivity, easing grading and preparation. Institutional support, faculty willingness, and alignment with industry skills (e.g., CAD and simulation) were recurrent in faculty narratives. The distinction reflects two levels of “usefulness”: micro-level (individual learning) and macro-level (curricular modernization). Thus, GAI’s perceived usefulness operates across hierarchical layers of adoption, which has implications for policy design.
Challenges: Ethical misuse and the erosion of critical thinking dominated discussions, but again, motives diverged. Students feared “losing originality,” whereas faculty feared “losing integrity of assessment.” This difference points to moral complementarity; both value authenticity, but from different positions in the learning ecosystem. Secondary challenges (bias, lack of training, and integration issues) expose the technological–pedagogical gap between fast-moving student experimentation and slower institutional adaptation. The quote “The biggest challenge is knowing when students should and should not use GAI” (F12) captures this uncertainty in boundary-setting, emphasizing the need for contextual, course-specific guidelines rather than blanket restrictions.
Psychological Factors: The emotional landscape of GAI adoption was complex. Faculty expressed stress and role insecurity, “For older faculty, it can be challenging and stressful without training” (F07), indicating that change management must accompany technological training. Students exhibited optimism and cognitive stimulation (“GAI simplifies complex concepts and personalizes learning” (S3)) but also admitted over-reliance and procrastination risks (“GAI tools kill creativity and passion” (S9)). Comparing both groups reveals a generation-anchored divide: faculty anxiety centers on identity and control, while student anxiety concerns authenticity and cognitive dependence. These findings highlight that successful GAI integration must address emotional readiness as much as technical capability.
Strategies
Faculty proposed strategies that mirror the external-influence constructs of the Extended TAM. Their recommendations (assessment redesign, training, gradual adoption, and policy enforcement) reflect a shift from reactive concern to proactive governance. “Assessments should be carefully planned to uphold academic integrity” (F01) and “Support from higher management is essential” (F08) demonstrate a consensus that top-down facilitation is essential for sustainable change.
Equally notable is the emergence of dual-role factors. Training and institutional support appeared both as existing enablers and desired strategies, signifying that these mechanisms are simultaneously present yet insufficient. Faculty recognize their partial existence but demand institutional reinforcement, a pattern characteristic of early-stage adoption cycles in higher education.
These insights converge into a strategic model of ethical integration: building AI literacy, piloting controlled implementations, ensuring discipline-specific customization, and maintaining continuous ethical reflection through formal curricula.
The final code structure summarizing these constructs appears in Table 4, which lists eight enablers (E1–E8), nine challenges (C1–C9), eight strategies (S1–S8), and nine psychological factors (P1–P9). This framework directly informed the survey design and the quantitative validation phase that follows.
Accordingly, the literature review and the qualitative phase together address the first research question by identifying the enablers, challenges, strategies, and psychological factors relevant to integrating GAI in MEE.
Hence, the themes identified in the qualitative phase informed the development of the quantitative survey instrument. Each category of enablers, challenges, strategies, and psychological factors was operationalized into measurable items reflecting the subthemes derived from faculty and student interviews. For example, concerns such as “role anxiety” and “cognitive dependence” were translated into items measuring apprehension about GAI replacing teaching roles and over-reliance on AI for cognitive tasks. Similarly, enablers such as “tool adaptability” and “time efficiency” informed items assessing perceived usefulness and ease of incorporation. This systematic translation ensures that the quantitative constructs are grounded in participants’ lived experience, improving the transparency and validity of the mixed-methods design.

4.1.2. Trustworthiness and Link to Quantitative Phase

To ensure analytical credibility, a hybrid coding approach combining deductive and inductive strategies was used, and data saturation occurred at the 13th faculty and 33rd student interview. Cross-checking among coders and the inclusion of negative cases enhanced trustworthiness. While the inclusion of non-ME participants broadens contextual insight, it slightly limits generalizability, addressed later through the larger quantitative sample.
The qualitative findings thus move beyond enumeration to interpret how attitudes, structures, and emotional factors interact. They reveal that GAI adoption is neither purely technical nor purely attitudinal but a multilayered negotiation among efficiency, ethics, and identity. These insights form the theoretical and empirical bridge to Section 4.2, where RII, CFA, and PLS-SEM quantitatively test the relationships among the identified constructs.

4.2. Quantitative Analysis

Building on the qualitative insights, the quantitative phase operationalizes the identified factors (enablers, challenges, strategies, and psychological variables) into measurable constructs to validate their reliability, prioritize their relative importance, and model their interrelationships. This phase aims to provide empirical substantiation for the conceptual framework developed earlier, translating stakeholder perspectives into statistically testable dimensions of GAI integration in MEE.
Two structured surveys were designed: one targeting students and another targeting faculty members. While both instruments shared the same content and construct structure, they were tailored to capture the distinct experiential lenses of learners and educators regarding GAI use in MEE. The surveys collectively reached 105 participants (61 students and 44 faculty members) representing diverse engineering backgrounds. Inclusion extended beyond purely mechanical engineering participants to encompass related disciplines such as industrial, electrical, chemical, and aerospace engineering, ensuring a comprehensive view of GAI adoption across ME-relevant curricula (e.g., thermodynamics, mechanics of materials, simulation, design, AutoCAD, programming, and research-based courses). This interdisciplinary inclusion enhances the ecological validity of findings by recognizing the overlapping competencies central to modern mechanical engineering education.
To achieve a robust and multidimensional understanding, this phase employs several complementary analytical techniques. The RII quantifies and ranks the perceived significance of each factor, allowing prioritization across stakeholder groups. CFA and Cronbach’s α assess construct validity and internal reliability, ensuring that the latent variables derived from the qualitative phase are statistically sound and theoretically coherent. Finally, PLS-SEM is applied to test the hypothesized relationships among the constructs and to evaluate how enablers, challenges, strategies, and psychological factors collectively shape stakeholders’ perception of successful GAI integration.
Additionally, based on interview insights indicating differences in how students employ GAI tools across tasks, the student survey included an application-based ranking exercise. This component identifies the domains, such as design, computation, simulation, research, and writing, where GAI usage is most valued, providing a quantitative foundation for targeted pedagogical recommendations.
While the inclusion of non-ME participants may limit strict generalizability, it enriches the dataset by reflecting the interdisciplinary reality of contemporary engineering education, where computational, design, and analytical skills transcend departmental boundaries. This design thus ensures that the quantitative phase does not merely replicate the qualitative findings but tests them systematically, establishing the empirical backbone of the study’s theoretical and practical contributions.

4.2.1. Utilization of GAI Tools

The quantitative analysis extends the qualitative findings by quantifying how students prioritize the use of GAI tools across key academic domains in MEE. As interviews revealed diverse familiarity levels and faculty support for a gradual, guided integration, this survey aimed to determine where GAI has the greatest learning impact. Students rated their likelihood of using GAI tools across six domains on a five-point Likert scale (1 = not likely at all, 5 = very likely): academic writing and research, brainstorming and creativity, computation, design and simulation (including coding), collaboration, and reading and studying.
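The domain percentages reported next are top-two-box shares, i.e., the proportion of respondents selecting “likely” (4) or “very likely” (5). A minimal sketch of this aggregation in Python, using hypothetical responses rather than the study’s data:

```python
import pandas as pd

# Hypothetical five-point Likert responses (1 = not likely at all,
# 5 = very likely); columns name three of the six surveyed domains.
responses = pd.DataFrame({
    "design_simulation_coding": [5, 4, 5, 3, 4, 5],
    "collaboration":            [2, 3, 4, 2, 5, 3],
    "reading_studying":         [5, 5, 4, 4, 3, 5],
})

# Top-two-box share: percentage of respondents answering 4 or 5
top_two_box = (responses >= 4).mean() * 100
print(top_two_box.round(1))
```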
As shown in Figure 2, students exhibit a clear hierarchy of preference. The strongest engagement occurs in design, simulation, and coding, where 77% of respondents indicate they are “likely” or “very likely” to employ GAI tools. This dominant score highlights how GAI aligns naturally with mechanical engineering’s analytical and modeling focus. The integration of GAI in these areas reflects students’ appreciation of its ability to automate code generation, improve simulation accuracy, and visualize complex mechanical phenomena. In essence, students view GAI as a cognitive amplifier, a means to bridge theory and practice rather than a substitute for learning effort.
In contrast, collaboration receives the lowest adoption score (only 44% likely or very likely), revealing an important socio-technical limitation. While students perceive GAI as efficient for individual tasks, they remain skeptical of its capacity to mediate teamwork, negotiation, and group creativity, core components of engineering collaboration. This gap reinforces earlier qualitative insights that human interaction, empathy, and communication remain irreplaceable competencies even in technology-enhanced learning environments.
Beyond technical tasks, students also demonstrate strong reliance on GAI for learning support and ideation. For reading and studying, 89% rate themselves as likely or very likely users, indicating that GAI serves as a scaffolding tool for conceptual understanding. Similarly, academic writing and research (87%) and brainstorming and creativity (87%) show widespread use for idea generation, summarization, paraphrasing, and literature exploration. These findings reflect a shift from using GAI purely for productivity to viewing it as a collaborative cognitive partner that enhances creativity and comprehension.
Meanwhile, computation (83%) occupies a middle position, revealing that students increasingly value GAI’s analytical functions, such as solving equations or visualizing results, but still combine it with traditional analytical reasoning. The pattern collectively demonstrates that students integrate GAI along a cognitive-to-practical continuum: from conceptual clarification (reading, brainstorming) to applied problem-solving (computation, simulation, design).
The strong preference for GAI in simulation, coding, and computation underscores that students associate value with measurable academic outcomes, speed, accuracy, and visualization, rather than collaborative or reflective functions. The lower ratings for collaboration and ethical awareness suggest that affective and interpersonal dimensions of learning remain underdeveloped in current GAI usage. This points to a dual challenge for educators: to embed GAI in ways that reinforce critical thinking and teamwork, while preserving authenticity and academic integrity.
Overall, Figure 2 indicates that GAI tools are most effective when introduced first in high-impact technical contexts, such as simulation labs and computational design tasks, followed by progressive inclusion in conceptual and collaborative domains. This staged approach aligns with the Extended TAM framework, suggesting that perceived usefulness (in computation and design) drives early adoption, while attitudinal and ethical readiness must evolve for full integration.

4.2.2. Relative Importance Index (RII)

To address the second research question, the RII was applied to systematically quantify and rank the perceived importance of the identified enablers, challenges, strategies, and psychological factors influencing the integration of GAI in MEE. As a robust normalization technique, RII allows a comparative understanding of how various stakeholder groups, students and faculty, prioritize determinants relative to their maximum perceived importance [42]. Beyond serving as a descriptive ranking tool, the RII analysis provides a diagnostic perspective on where alignment or divergence exists between stakeholders, establishing a foundation for inferential validation in subsequent CFA and PLS-SEM analyses.
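The RII for an item is commonly defined as RII = ΣW / (A × N), where W are the respondents’ Likert weights, A is the highest possible weight (5 here), and N is the number of respondents; values approach 1 as an item is rated uniformly important. A minimal sketch, using hypothetical ratings rather than the study’s raw data:

```python
import numpy as np

def relative_importance_index(ratings, max_scale=5):
    """RII = sum(W) / (A * N): sum of Likert weights over the maximum
    attainable total, yielding a value in (0, 1]."""
    ratings = np.asarray(ratings)
    return ratings.sum() / (max_scale * len(ratings))

# Hypothetical ratings for one enabler item (not the study's raw data)
student_ratings = [5, 4, 5, 3, 4, 5, 5, 4]
print(f"RII = {relative_importance_index(student_ratings):.4f}")
```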
Figure 3 and Table 5 reveal that both faculty and students agree on the centrality of tool availability and user willingness as the dominant enablers of GAI integration in MEE. Yet, while their overall rankings align, the underlying rationales differ markedly. Students rank the availability of GAI tools to enhance time efficiency and reduce workload as the top enabler (RII = 0.8689), viewing GAI primarily as a performance amplifier that supports comprehension, simulation, and design activities. Faculty, by contrast, prioritize students’ willingness to adopt GAI (RII = 0.8636), interpreting readiness as a pedagogical prerequisite for successful technology diffusion.
This divergence underscores a micro–macro duality: students perceive GAI’s value through its immediate cognitive utility, whereas faculty evaluate it through systemic and behavioral readiness lenses. Moreover, both groups rate institutional and governing body support as moderately high but not dominant, signaling recognition that structural facilitation is necessary but insufficient without individual engagement. The relatively low ranking of collaboration and teamwork through GAI suggests a gap in awareness regarding GAI’s potential to mediate peer-assisted learning, an opportunity for future curriculum innovation.
In essence, the enablers analysis illustrates that MEE stakeholders view GAI not merely as a technological adjunct but as a transformational enabler, contingent on institutional legitimacy and personal agency. The interpretation of “enabling conditions” thus spans pragmatic (students) and organizational (faculty) dimensions.
As depicted in Figure 4, both groups perceive GAI’s integration as constrained more by ethical and cognitive concerns than by technical or financial barriers. Faculty identify ethical misuse and plagiarism as the most critical challenge (RII = 0.9273), reflecting apprehension about maintaining academic integrity and assessment fairness. Students, on the other hand, rank over-reliance on GAI as reducing critical thinking skills as their foremost concern (RII = 0.8426), highlighting a self-awareness of potential cognitive erosion.
This divergence is telling: faculty focus on external accountability, while students exhibit concern for internal learning authenticity. Such dual apprehensions converge around a shared recognition that unregulated GAI use may compromise educational quality. Challenges such as bias in AI-generated outputs, inadequate faculty training, and integration issues with existing software (e.g., AutoCAD, MATLAB) occupy the mid-tier, indicating that technical limitations, while acknowledged, are secondary to ethical and pedagogical risks.
Interestingly, both groups assign the lowest RII to high cost and technical limitations, implying that MEE environments, accustomed to software-driven workflows, no longer perceive infrastructure as the primary barrier. Instead, the challenge matrix reflects a human-centered tension: balancing innovation with responsibility. The implication is that GAI governance frameworks, not hardware investments, will determine the sustainability of adoption.
Figure 5 shows a notable convergence between students and faculty regarding strategies for mitigating these challenges. Both rank clear policies and guidelines regulating GAI use as the most important strategy (RII ≈ 0.87), underscoring a shared demand for institutional clarity and ethical governance. However, the nature of their priorities diverges thereafter. Faculty emphasize awareness of student behavior and academic integrity initiatives, aligning with their concern for plagiarism and oversight, whereas students prioritize engineering-specialized GAI tools and tailoring applications to individual needs, emphasizing relevance and personalization.
The high ranking of training and support programs and gradual adoption further highlights an emerging consensus that GAI integration must follow a phased, capacity-building trajectory rather than abrupt implementation. This phased adoption aligns with the Extended TAM logic adopted in this study, where facilitating conditions (e.g., training and policy) directly enhance perceived usefulness and ease of use. Collectively, these strategic patterns point toward a two-level integration roadmap: macro-level policy and micro-level personalization, both of which are essential to align institutional frameworks with learner needs.
The analysis of psychological factors (Figure 6) provides crucial insight into the emotional and cognitive undercurrents shaping GAI adoption. Both faculty and students identify the impact of GAI on students’ critical thinking as the most significant psychological concern (RII = 0.82–0.84), reflecting a deep-seated anxiety about automation displacing reasoning. Ethical alignment follows closely, suggesting that users perceive morality as a necessary companion to innovation.
However, the group-specific differences are revealing. Students report heightened motivation and perceive GAI as a cognitive scaffold that enhances creativity and efficiency, whereas faculty demonstrate stress, skepticism, and role anxiety, particularly around job displacement and loss of human interaction. These asymmetries signify an intergenerational divide in digital adaptability and confidence, a theme that reappears in qualitative narratives.
Standard deviation values above 1.0 (Table 5) further reveal heterogeneity in emotional responses, implying that psychological readiness is unevenly distributed across participants. This heterogeneity justifies the need for targeted faculty development and student digital ethics workshops, ensuring balanced psychological engagement rather than polarized reactions to GAI technologies.
The comparative synthesis across Figure 3, Figure 4, Figure 5 and Figure 6 and Table 5 reveals that while faculty and students share optimism toward GAI’s potential, they operate under distinct motivational logics. Students’ enthusiasm reflects an efficiency-oriented adoption mindset, driven by pragmatic learning benefits, whereas faculty exhibit a prudence-oriented stance, emphasizing ethical safeguards and pedagogical control.
This duality creates both tension and opportunity. If unaddressed, misaligned priorities may lead to fragmented adoption; yet, if strategically harmonized, they can produce a complementary adoption ecosystem, where faculty provide governance and mentorship while students drive experimentation and innovation.
From a policy perspective, three implications emerge:
  • Shift from awareness to accountability: Establish measurable ethical frameworks beyond generic guidelines.
  • Move from adoption to adaptation: Develop discipline-specific GAI tools that reflect mechanical engineering contexts rather than generic educational templates.
  • Balance regulation with innovation: Encourage supervised experimentation to preserve creativity while maintaining academic integrity.
Thus, the RII analysis transcends its statistical purpose by revealing a multilayered adoption landscape, where cognitive, ethical, and structural dimensions intersect. These findings form the empirical foundation for the forthcoming CFA and PLS-SEM, which formally test construct reliability and structural relationships within the extended Technology Acceptance framework.

4.2.3. Confirmatory Factor Analysis (CFA)

After identifying the four latent variables (Enablers, Challenges, Strategies, and Psychological Factors), CFA was conducted to verify the reliability and validity of the measurement model. Cronbach’s α was first calculated to assess internal consistency; values ≥ 0.50 were deemed acceptable for exploratory research [43]. All constructs surpassed this threshold, confirming baseline reliability and justifying further validation.
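As a companion to this reliability check, the following minimal sketch implements the standard Cronbach’s alpha formula; the response matrix is hypothetical and serves only to illustrate the computation applied to each construct’s items.

```python
import numpy as np

def cronbach_alpha(item_matrix):
    """Cronbach's alpha for an (n_respondents x k_items) response matrix.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    """
    x = np.asarray(item_matrix, dtype=float)
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1).sum()   # per-item variances, summed
    total_var = x.sum(axis=1).var(ddof=1)     # variance of respondents' totals
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

# Hypothetical responses: five respondents rating three items of one construct
demo = [[4, 5, 4],
        [3, 3, 4],
        [5, 5, 5],
        [2, 3, 2],
        [4, 4, 5]]
print(round(cronbach_alpha(demo), 3))
```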
CFA was then performed to examine the convergent and discriminant validity of the constructs and to evaluate the model’s goodness-of-fit. Key indices included:
  • Chi-square (χ2) for overall model adequacy,
  • Standardized Root Mean Square Residual (SRMR) for average standardized residuals,
  • Root Mean Square Error of Approximation (RMSEA) for discrepancy per degree of freedom, and
  • Bentler Comparative Fit Index (CFI) to compare the hypothesized and null models.
All computations were conducted using SAS statistical software (SAS 9.4).
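For reference, the RMSEA and CFI indices follow their standard definitions, where χ²_M and df_M belong to the hypothesized model, χ²_0 and df_0 to the null model, and N is the sample size; substituting the refined-model values reported later in Table 8 (χ² = 634.02, df = 367, N = 105) into the RMSEA expression reproduces the reported estimate of approximately 0.084.

```latex
\mathrm{RMSEA} = \sqrt{\frac{\max\left(\chi^2_M - df_M,\, 0\right)}{df_M\,(N-1)}},
\qquad
\mathrm{CFI} = 1 - \frac{\max\left(\chi^2_M - df_M,\, 0\right)}{\max\left(\chi^2_0 - df_0,\ \chi^2_M - df_M,\ 0\right)}
```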
Initial reliability results indicated moderate cohesion, prompting refinement to improve construct clarity. Five low-loading indicators were removed (Table 6): C2 (high costs and technical limitations of GAI tools), C6 (integration challenges with engineering software), C9 (GAI reducing human factors in the teaching process), P1 (concerns about GAI replacing teaching roles), and P5 (stress caused by adopting GAI tools).
These items exhibited weak item–total correlations (0.21–0.32) and negatively influenced internal consistency. Their exclusion enhanced parsimony and aligned with the earlier RII results, which also ranked these variables among the least important for both stakeholder groups.
Table 7 shows that all refined constructs achieved acceptable internal consistency. Enablers (α = 0.81) and Strategies (α = 0.83) display the strongest cohesion, reflecting participants’ stable agreement on tangible facilitators such as institutional support, tool availability, and clear policies. Challenges (α = 0.63) and Psychological Factors (α = 0.69) present moderate consistency, expected in exploratory studies where perceptions are heterogeneous [44].
This variance itself is analytically meaningful: it reflects diverse experiential standpoints between students and faculty, particularly in the more subjective domains of risk perception and emotional adaptation.
Post-refinement CFA yielded a significantly improved fit (Table 8). The Chi-square statistic decreased from 919.32 to 634.02 and the degrees of freedom from 517 to 367, reducing overfitting and enhancing parsimony. SRMR fell from 0.111 to 0.107 and RMSEA from 0.087 to 0.084, while CFI rose from 0.621 to 0.703.
Although CFI remains below the 0.90 benchmark, these values demonstrate adequate fit for an exploratory model given the modest sample size (N = 105) and the novelty of GAI constructs within MEE. This is consistent with methodological guidance indicating that moderate CFI values (≈0.7–0.8) are acceptable in early-stage or small-sample SEM studies, especially when models involve emerging constructs and heterogeneous respondents [43]. SRMR and RMSEA within moderate ranges confirm acceptable residual variance, and improvement across all indices indicates the refined structure better captures latent relationships among the constructs.
To further validate construct reliability, Composite Reliability (CR) and Average Variance Extracted (AVE) were computed. CR values exceeded 0.70 for Enablers (0.77) and Strategies (0.78), confirming strong internal consistency [44]. Conversely, Challenges (0.46) and Psychological Factors (0.49) fell below the threshold, indicating partial reliability, consistent with the exploratory stage of conceptualization. AVE values ranged from 0.15 to 0.29, below the recommended 0.50 criterion [45], implying limited convergent validity.
Such results are methodologically acceptable in formative, cross-disciplinary domains where constructs are new and respondents interpret items diversely. Retaining these variables was theoretically necessary to maintain the four-pillar structure derived from the Extended TAM, ensuring conceptual completeness for subsequent modeling.
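Both indices follow the usual formulas for standardized loadings, sketched below; the example loadings are hypothetical rather than the study’s CFA estimates, which are summarized in the text but not tabulated.

```python
import numpy as np

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    where the error variance of a standardized loading lambda is 1 - lambda^2."""
    lam = np.asarray(loadings, dtype=float)
    squared_sum = lam.sum() ** 2
    return squared_sum / (squared_sum + (1.0 - lam**2).sum())

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    lam = np.asarray(loadings, dtype=float)
    return float((lam**2).mean())

# Hypothetical standardized loadings for a five-item construct
lam = [0.45, 0.52, 0.38, 0.61, 0.47]
print(f"CR  = {composite_reliability(lam):.2f}")   # ~0.61
print(f"AVE = {average_variance_extracted(lam):.2f}")  # ~0.24
```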
Two major insights emerge:
  • Construct Asymmetry in Reliability—Higher internal consistency for Enablers and Strategies suggests participants share clearer mental models of what facilitates GAI integration (e.g., tool availability, institutional policies) than of what inhibits or psychologically influences it. This pattern mirrors early adoption dynamics, where enabling conditions crystallize before resistance factors are fully articulated.
  • Theoretical Validation of Stakeholder Perceptions—The refined four-factor structure confirms that faculty and students conceptualize GAI integration through parallel cognitive frameworks. Enablers and Strategies map onto the “perceived usefulness” and “facilitating conditions” dimensions of TAM, while Challenges and Psychological Factors correspond to “perceived ease of use” and “attitudinal intention.” The CFA thus empirically grounds the qualitative insights within an established adoption theory.
Overall, this section addresses the third research question: the statistical analyses (Cronbach’s alpha, CFA, composite reliability, and average variance extracted) verified that the proposed factors and their constructs exhibit acceptable reliability and validity.
The improved reliability and model fit justify advancing to PLS-SEM. CFA has verified that the constructs are empirically distinct yet conceptually interdependent, providing a solid foundation for testing the causal pathways among enablers, challenges, strategies, and perceptions of successful GAI integration in MEE.
Hence, this stage elevates the study from descriptive factor validation to a theoretically anchored measurement model, linking statistical robustness with pedagogical insight.

4.2.4. Partial Least Squares Structural Equation Modeling (PLS-SEM)

To address the final research question, PLS-SEM was employed to examine the structural relationships among the latent constructs (Enablers, Challenges, Strategies, and Psychological Factors) and the higher-order construct Perception of Successful GAI Integration. PLS-SEM was chosen because it focuses on variance explanation in exploratory contexts and performs well with moderate samples (N = 105; 61 students and 44 faculty), unlike covariance-based SEM (CB-SEM), which requires larger, normally distributed datasets [46]. Familiarity with GAI was not modeled as a predictor because both groups had already demonstrated high familiarity in earlier phases.
Guided by the Extended TAM and qualitative insights, five hypotheses were tested to capture direct and indirect relationships among constructs as shown in Figure 7:
  • H1 Strategies → Perception (positive)
  • H2 Challenges → Perception (negative)
  • H3 Enablers → Perception (positive)
  • H4 Psychological factors → Perception (effect present)
  • H5 Enablers, Challenges, and Strategies → Psychological factors (effect present)
Conceptually, Enablers and Strategies reflect perceived usefulness and facilitating conditions; Challenges represent ease-of-use barriers; and Psychological Factors capture attitudes and intentions toward adoption.
The model was estimated using PROC CALIS (SAS). Because this research is exploratory and the sample is modest, p-values between 0.05 and 0.10 were treated as trend-level significance [21]. The Goodness-of-Fit Index (GFI = 0.64) is acceptable for early-stage modeling. Standardized loadings for Enablers, Challenges, and Strategies ranged from 0.34 to 0.71 (p ≈ 0.065–0.099), while Psychological Factors produced weaker loadings (p ≈ 0.76), indicating limited contribution.
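Because only the SAS workflow is reported, the sketch below shows one way the hypothesized structure could be specified in Python’s open-source semopy package, which accepts lavaan-style syntax; it is an illustrative analogue rather than the authors’ code, and the PERC1–PERC3 indicators, the input file name, and the item-coded column names are assumptions for illustration.

```python
# A sketch (not the authors' SAS code) of the hypothesized structure in
# Python's semopy package, which accepts lavaan-style model syntax.
import pandas as pd
from semopy import Model

MODEL_DESC = """
# Measurement model: retained indicators coded as in Table 4
Enablers      =~ E1 + E2 + E3 + E4 + E5 + E6 + E7 + E8
Challenges    =~ C1 + C3 + C4 + C5 + C7 + C8
Strategies    =~ S1 + S2 + S3 + S4 + S5 + S6 + S7 + S8
Psychological =~ P2 + P3 + P4 + P6 + P7 + P8 + P9
# Hypothetical overall-perception items (placeholders; the paper treats
# Perception as a higher-order construct)
Perception =~ PERC1 + PERC2 + PERC3
# Structural model: H1-H4 as direct paths into Perception, H5 as paths
# from the other constructs into Psychological Factors
Perception ~ Enablers + Challenges + Strategies + Psychological
Psychological ~ Enablers + Challenges + Strategies
"""

df = pd.read_csv("survey_items.csv")  # hypothetical file of item responses
model = Model(MODEL_DESC)
model.fit(df)
print(model.inspect())  # parameter estimates and p-values, cf. Table 9
```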
Table 9 summarizes all paths, and Figure 8 illustrates the structural relationships among constructs and their standardized effects, visually highlighting how strategies, enablers, and challenges shape perception while psychological factors remain weakly connected. Strategies, Challenges, and Enablers each influence Perception at the trend level, whereas Psychological Factors exert no meaningful direct effect; notably, the Challenges path emerged positive rather than negative as hypothesized in H2, a reversal examined below.
  • Strategies → Perception: β ≈ 0.80, p = 0.0867 (strongest)
  • Challenges → Perception: β ≈ 0.82, p = 0.0686
  • Enablers → Perception: β ≈ 0.66, p = 0.0687
  • Psychological Factors → Perception: β ≈ 0.22, p = 0.8875
  • Enablers/Challenges/Strategies → Psychological Factors: non-significant (p > 0.70)
The results highlight several meaningful patterns.
  • Institutional scaffolding precedes attitudinal change: Strategies (policy clarity, training, and gradual roll-out) show the greatest influence on perceived integration success, implying that visible institutional action shapes acceptance more strongly than individual motivation. This sequencing is consistent with early-stage adoption models where external conditions precede attitude formation.
  • Challenges act as indirect catalysts: Although hypothesized as negative, the positive coefficient for Challenges suggests that acknowledging and managing difficulties (e.g., over-reliance, ethical misuse) signals maturity and feasibility. In other words, when institutions visibly address risks, stakeholders interpret adoption as credible and controlled.
  • Psychological neutrality at early adoption: The null effect of Psychological Factors mirrors the mixed qualitative responses: students reported enthusiasm and efficiency gains, whereas faculty reported stress and integrity worries; these opposing effects cancel out statistically. Structural and cognitive determinants dominate until consistent norms emerge.
The evidence points to several practical priorities:
  • Lead with structure: Develop clear governance, training programs, and phased pilots; these yield the strongest influence on perceived success.
  • Treat risk as design input: Integrate safeguards (assessment redesign, academic-integrity workflows) into early adoption stages.
  • Sequence adoption intelligently: Start in high-utility domains such as design, simulation, and coding before extending to reflective and collaborative activities.
  • Follow with attitudinal work: Ethics awareness and motivation initiatives should complement, not precede, structural reforms.
Accepting trend-level significance (p = 0.05–0.10) is appropriate for exploratory PLS-SEM with modest N [47]. Coefficients here provide directional evidence rather than confirmatory proof, underscoring the need for larger, discipline-balanced replications and refined measurement, particularly for the Psychological Factors construct. Future work may test mediation or moderation effects once measurement stability improves.

5. Discussion

This study investigated the integration of GAI in MEE by examining the roles of enablers, challenges, strategies, and psychological factors influencing stakeholders’ perceptions. Through an integrated analysis combining RII, CFA, and PLS-SEM, the research uncovered how these constructs collectively shape the perceived success of GAI adoption. The results reveal that institutional and pedagogical factors exert a stronger influence on perception than psychological readiness, underscoring the need for strategic and evidence-driven frameworks to govern GAI use in technical education.
The results highlight a multidimensional but asymmetric pattern of influence. The RII and PLS-SEM analyses demonstrate that strategies and enablers are the most influential predictors of GAI integration, while psychological factors play a negligible role. This suggests that early adoption in MEE is driven more by structural preparedness, training, policies, and tool accessibility than by individual emotions or attitudes. Such a pattern aligns with the Extended TAM, where perceived usefulness and facilitating conditions outweigh attitudinal factors in shaping behavioral intention.
Specifically, the highest-ranked enablers (availability of user-friendly GAI tools and students’ willingness to engage with them) emphasize pragmatic engagement rather than novelty or curiosity. Faculty members, however, stress institutional support and alignment with industry skills, pointing to a more system-oriented perspective. This duality illustrates a classic tension in technology adoption: students are driven by immediate utility, while faculty focus on sustainability and integration. These findings reinforce earlier evidence [11,36] linking AI adoption in engineering to perceived relevance, practical value, and institutional readiness.
Conversely, the main challenges (ethical misuse, over-reliance, and bias) reflect concerns that GAI might compromise integrity and critical thinking, echoing warnings in [21,25,26]. Interestingly, cost and technical barriers rank among the least significant obstacles, suggesting that the barrier to integration in MEE is cultural and ethical rather than infrastructural. This observation indicates a shift: as technological maturity increases, moral governance and pedagogical design become the true determinants of successful adoption.
The CFA results confirmed internal consistency across constructs, validating the reliability of the identified factors, while PLS-SEM demonstrated that enablers, challenges, and strategies each exert a trend-level yet meaningful influence on perception. The lack of significance for psychological factors suggests that cognitive and emotional responses have not yet consolidated, which is typical of early diffusion stages when organizational factors dominate. As [37,38,39] note, emotional acceptance becomes salient only once institutional systems stabilize and users gain sustained exposure.
This study’s findings both confirm and extend previous literature. Consistent with [22,24], structured strategies such as training, ethical guidance, and gradual adoption are vital in mediating AI acceptance. However, the present research extends prior work by establishing discipline-specific evidence: in MEE, the impact of GAI depends on its capacity to enhance computation, design, and simulation tasks, core engineering functions rarely emphasized in broader educational AI research. The insight that GAI’s effectiveness is domain-specific refines the one-size-fits-all assumptions often implicit in earlier adoption models.
The observed insignificance of psychological factors diverges from traditional TAM studies (e.g., [38,39]), where attitude and intention typically exert strong predictive power. This deviation implies that engineering disciplines, characterized by analytical rigor and problem-solving orientation, prioritize functionality and efficiency over affective readiness. In other words, engineers adopt technologies not because they “feel good” about them, but because they demonstrably work. This finding introduces a new lens through which TAM can be interpreted in technical education, one that emphasizes performance rationality over emotional motivation.
From a practical standpoint, these results suggest that successful GAI integration in MEE hinges on coordinated institutional, pedagogical, and ethical interventions:
  • Institutional Governance and Training: Given that strategies were the strongest predictor of perception, institutions must establish clear policies, ethical guidelines, and continuous professional training. This ensures that faculty feel confident managing GAI usage, mitigating misuse, and maintaining academic integrity.
  • Curriculum Alignment: Students’ strong preference for design, coding, and simulation (Figure 2) indicates where GAI can add immediate educational value. Integration should begin in such high-impact courses before scaling up. Embedding reflective assessments and hands-on validation can prevent over-reliance while reinforcing analytical reasoning.
  • Ethical and Cognitive Safeguards: The prevalence of ethical and cognitive concerns demands explicit instruction on AI ethics, critical-thinking resilience, and data transparency. GAI should be framed as a co-creator, a partner in learning, not a substitute for reasoning.
  • Stakeholder Collaboration: The divergent priorities of students and faculty underscore the need for institutional dialogs and shared decision-making in policy design. Co-creation of guidelines promotes balanced ownership and accountability.
These implications suggest a pragmatic roadmap: start small with controlled experimentation, scale based on evidence, and continuously recalibrate policies to balance innovation with integrity.
This research extends the theoretical discourse in three key ways:
  • Extension of TAM to GAI Contexts: By empirically integrating four constructs, enablers, challenges, strategies, and psychological factors, this study extends the Extended TAM to include institutional and pedagogical dimensions critical for AI governance in engineering education.
  • Bridging Descriptive and Structural Analytics: The combination of RII, CFA, and PLS-SEM provides a methodological bridge between descriptive prioritization and causal modeling, allowing both practical relevance and theoretical rigor.
  • Discipline-Specific Behavioral Model: The findings reveal that in highly technical fields, adoption is instrumental rather than affective, suggesting the emergence of an “Engineering-Specific Adoption Logic,” where efficacy and structure override emotion and intuition.
Building on this adoption logic, the study also opens promising avenues for further inquiry. While the findings are robust, they highlight opportunities for deepening the work. The current sample, though sufficient for exploratory SEM, sets the stage for future research to validate the proposed model across larger, cross-institutional populations. Similarly, the use of self-reported perceptions offers a valuable snapshot of current attitudes, while also creating scope for longitudinal and behavioral studies that can track evolving patterns of adoption over time.
In addition, incorporating external moderating factors such as institutional culture, digital literacy, and resource allocation could significantly enrich future models and extend their explanatory power. There is also considerable potential for experimental studies that directly assess the impact of specific GAI tools (e.g., ChatGPT, GitHub Copilot) on measurable learning outcomes in design or computation courses. Multidisciplinary collaboration among educational researchers, cognitive scientists, and software developers could further catalyze the development of adaptive GAI systems tailored to engineering pedagogy.
This research provides a nuanced and empirically grounded account of how GAI adoption unfolds in MEE. By showing that structural enablers and governance mechanisms currently outweigh psychological attitudes, it signals a paradigm shift toward performance- and policy-driven models of technology adoption. These insights not only strengthen the conceptual foundation for future work but also point toward the design of responsible, scalable, and pedagogically effective frameworks for integrating GAI into higher education. Furthermore, these findings highlight the study’s contribution to sustainable education by showing how responsible and evidence-based GAI integration can strengthen the inclusivity and resilience of engineering education systems.
Although this study makes a significant contribution to engineering education, several limitations should be acknowledged. First, the sample size (N = 105) was modest and primarily drawn from universities in the UAE, which may restrict the generalizability of the findings to other regional or institutional contexts. Additionally, the dependence on self-reported data introduces the potential for response bias, while the cross-sectional design limits the ability to capture evolving perceptions over time. Despite the comprehensiveness of the mixed-methods approach, the study is exploratory in nature and thus aims to establish initial theoretical and empirical grounding rather than definitive causal relationships. Moreover, the moderate CFI value (≈0.7) obtained in the CFA reflects the partial model fit typical of early-stage frameworks involving emerging constructs. Future research should therefore expand the sample geographically and longitudinally, deploy comparative or experimental designs, and further validate the model across diverse educational settings.

6. Conclusions

This study explored the integration of GAI in MEE by systematically examining how faculty and students perceive its enablers, challenges, strategies, and psychological factors. Responding to a clear gap in the literature, where most prior studies addressed general AI applications without discipline-specific or stakeholder-driven validation, this research employed a sequential exploratory mixed-methods design grounded in the Extended TAM. Through qualitative interviews and quantitative validation (RII, CFA, and PLS-SEM), the study developed an empirically supported framework capturing the multidimensional dynamics shaping GAI adoption in MEE.
The findings underscore that strategic and enabling factors, not merely attitudes, drive successful GAI adoption. Institutional readiness, faculty training, policy clarity, and tool accessibility exerted the strongest influence on stakeholders’ perceptions, highlighting the central role of structured implementation over psychological readiness. Faculty emphasized integrity safeguards, pedagogical adaptation, and professional support, while students prioritized usability, efficiency, and real-world application in tasks such as design, coding, and simulation. Collectively, these insights reveal that effective GAI integration in MEE depends on aligning institutional frameworks with learners’ functional needs, ensuring that innovation complements rather than replaces critical engineering reasoning.
The absence of a statistically significant relationship between psychological factors and perception suggests that emotional or behavioral dispositions exert a limited impact on GAI integration in ME contexts. These findings suggest that stakeholders’ assessments are more rational and performance-driven, shaped by structural conditions rather than affective readiness. Similarly, the findings refine TAM by implying that in analytically oriented disciplines, behavioral intention may stem directly from perceived usefulness and facilitating conditions, while affective attitude plays a secondary role. Practically, this highlights that successful adoption depends on institutional support and demonstrated utility rather than motivational interventions.
Theoretically, this research contributes by adapting and extending the Extended TAM to the context of GAI in technical education, demonstrating that instrumental value and institutional facilitation are stronger predictors of adoption than affective or motivational factors. Methodologically, it exemplifies how mixed methods can operationalize abstract constructs into measurable indicators, enhancing the reliability and transferability of technology-adoption studies in engineering disciplines.
Practically, the findings provide clear, evidence-based recommendations for policymakers, educators, and curriculum designers:
  • Institutional Governance: Establish transparent, ethical and operational frameworks guiding responsible GAI use in coursework and research.
  • Faculty Empowerment: Deliver targeted professional development that enhances confidence and ethical competence in GAI-integrated teaching.
  • Pedagogical Design: Embed GAI in high-impact learning tasks, such as design, computation, and simulation, using process-based assessments that reinforce originality and reasoning.
  • Ethical Literacy: Foster student awareness of plagiarism, bias, and over-reliance risks through active learning and ethical decision-making modules.
  • Collaborative Ecosystems: Encourage cross-disciplinary collaborations between educators, students, and technology providers to ensure adaptive, context-relevant integration.
While exploratory in scope, this study’s limitations open productive avenues for future research. Expanding the sample across multiple universities and cultural contexts would enhance generalizability, while longitudinal studies could track the evolving acceptance and pedagogical impact of GAI as tools mature. Additionally, since this study focuses on the incorporation of GAI within the UAE context, it is imperative for future research to explore similar models across different countries to validate the findings and account for potential variations in technology accessibility, institutional infrastructure, and educational practices.
Future research should also engage software developers and institutional leaders to bridge pedagogical needs with technological capabilities, exploring how GAI’s interoperability with engineering platforms (e.g., AutoCAD, MATLAB, or simulation tools) shapes learning quality and innovation capacity. These steps would refine the model proposed here and inform long-term policy and curriculum transformation.
Ultimately, this study reaffirms that the promise of GAI in engineering education lies not in automation but in augmentation, empowering both educators and learners to extend cognitive reach, enhance creativity, and deepen analytical reasoning. By embedding GAI within thoughtful pedagogical design and robust ethical governance, Mechanical Engineering programs can evolve toward a future-ready paradigm of learning, where technology amplifies human ingenuity rather than substitutes it. This study advances the sustainability of higher education by showing how the integration of GAI into MEE can reinforce the core principles of ethical responsibility, inclusivity, and lifelong learning that define sustainable educational practice.

Author Contributions

Conceptualization, M.A.; Methodology, M.A. and V.A.; Software, S.S.; Validation, M.A., V.A. and Z.B.; Data curation, M.A., V.A. and Z.B.; Writing—original draft, M.A.; Writing—review & editing, M.A., V.A. and Z.B.; Visualization, M.A. and S.S.; Supervision, V.A. and Z.B.; Funding acquisition, V.A. and Z.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board of The American University of Sharjah (protocol code #24-090, 19 September 2024).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
GAI: Generative Artificial Intelligence
MEE: Mechanical Engineering Education
RII: Relative Importance Index
CFA: Confirmatory Factor Analysis
PLS-SEM: Partial Least Squares Structural Equation Modeling
AI: Artificial Intelligence
NLP: Natural Language Processing
LLMs: Large Language Models
GANs: Generative Adversarial Networks
ME: Mechanical Engineering
CAD: Computer-Aided Design
TAM: Technology Acceptance Model
UTAUT: Unified Theory of Acceptance and Use of Technology
SRMR: Standardized Root Mean Square Residual
RMSEA: Root Mean Square Error of Approximation
CFI: Bentler Comparative Fit Index
SEM: Structural Equation Modeling

References

  1. Triguero, I.; Molina, D.; Poyatos, J.; Del Ser, J.; Herrera, F. General purpose artificial intelligence systems (GPAIS): Properties, definition, taxonomy, societal implications and responsible governance. Inf. Fusion 2024, 103, 102135. [Google Scholar] [CrossRef]
  2. Sandhu, R.; Channi, H.K.; Ghai, D.; Cheema, G.S.; Kaur, M. An introduction to generative AI tools for education 2030. In Integrating Generative AI in Education to Achieve Sustainable Development Goals; IGI Global: Mumbai, India, 2024; pp. 1–25. [Google Scholar]
  3. Bahroun, Z.; Anane, C.; Ahmed, V.; Zacca, A. Transforming Education: A Comprehensive Review of Generative Artificial Intelligence in Educational Settings through Bibliometric and Content Analysis. Sustainability 2023, 15, 12983. [Google Scholar] [CrossRef]
  4. Davim, J.P. Mechanical Engineering Education, 1st ed.; Wiley: Hoboken, NJ, USA, 2012. [Google Scholar] [CrossRef]
  5. Pollack, E.R.; Sarrafian, G.A.; Grimm, M.J. What do mechanical engineers do? A content analysis of mechanical engineers’ job descriptions. In Proceedings of the 2021 ASEE Annual Conference, Virtual, 26–29 July 2021; Available online: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85124563325&partnerID=40&md5=c0d65daa614e13b6f1ba836b873a71dc (accessed on 24 November 2025).
  6. Prabhu, T. Principles of Mechanical Engineering: Vital Concepts of Mechanical Engineering, 3rd ed.; Nestfame Creations Pvt Ltd.: Maharashtra, India, 2019. [Google Scholar]
  7. Awini, G.; Mensah, K.; Majeed, M.; Mahmoud, M.A.; Braimah, S.M. The use of innovative pedagogies in attaining UN Sustainable Development Goal 4: Quality education for learning outcomes in emerging markets. In Digital Analytics Applications for Sustainable Training and Education; Apple Academic Press: Oakville, ON, USA, 2024; Available online: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85200925364&partnerID=40&md5=064b6b6971f10d3183b8753a72c3cc13 (accessed on 24 November 2025).
  8. Kuzilek, J.; Zdrahal, Z.; Fuglik, V. Student success prediction using student exam behaviour. Future Gener. Comput. Syst. 2021, 125, 661–671. [Google Scholar] [CrossRef]
  9. Belapurkar, G.; Chauhan, A.; Panwar, A.; Fernandes, V.; Arya, K. Automated theme allotment to optimise learning outcomes in robotic competition. In Proceedings of the 2019 IEEE International Conference on Engineering, Technology and Education (TALE), Yogyakarta, Indonesia, 10–13 December 2019. [Google Scholar] [CrossRef]
  10. Cai, W.; Grossman, J.; Lin, Z.J.; Sheng, H.; Wei, J.T.-Z.; Williams, J.J.; Goel, S. Bandit algorithms to personalize educational chatbots. Mach. Learn. 2021, 110, 2389–2418. [Google Scholar] [CrossRef]
  11. Alghazo, M.; Ahmed, V.; Bahroun, Z. Exploring the applications of artificial intelligence in mechanical engineering education. Front. Educ. 2025, 9, 1–24. [Google Scholar] [CrossRef]
  12. Lin, M.; Shan, L.; Zhang, Y. Research on robot arm control based on Unity3D machine learning. J. Phys. Conf. Ser. 2020, 1633, 012007. [Google Scholar] [CrossRef]
  13. Wei, Z.; Berry, C. Design of a modular educational robotics platform for multidisciplinary education. In Proceedings of the ASEE Conference and Exposition, Salt Lake City, UT, USA, 24–27 June 2018; pp. 1–19. [Google Scholar]
  14. Singhal, I.; Tyagi, B.; Chowdhary, R.; Saggar, A.; Raj, A.; Sahai, A.; Fayazfar, H.; Sharma, R.S. Augmenting Mechanical Design Engineering with Additive Manufacturing; Springer Nature: Berlin/Heidelberg, Germany, 2022; Volume 8. [Google Scholar]
  15. Johnson, N.; Vulimiri, P.; To, A.; Zhang, X.; Brice, C.; Kappes, B.; Stebner, A. Invited review: Machine learning for materials developments in metals additive manufacturing. Addit. Manuf. 2020, 36, 101641. [Google Scholar] [CrossRef]
  16. Guo, A.X.; Cheng, L.; Zhan, S.; Zhang, S.; Xiong, W.; Wang, Z.; Wang, G.; Cao, S.C. Biomedical applications of the powder-based 3D printed titanium alloys: A review. J. Mater. Sci. Technol. 2022, 125, 252–264. [Google Scholar] [CrossRef]
  17. Moreno-Garcia, C.F.; Elyan, E.; Jayne, C. New trends on digitisation of complex engineering drawings. Neural Comput. Appl. 2019, 31, 1695–1712. [Google Scholar] [CrossRef]
  18. Chen, D.; You, C.; Su, M. Development of professional competencies for artificial intelligence in finite element analysis. Interact. Learn. Environ. 2020, 30, 1265–1272. [Google Scholar] [CrossRef]
  19. Brazina, J.; Stepanek, V.; Holub, M.; Vetiska, J.; Bradac, F. Application of industry 4.0 trends in the teaching process. In Proceedings of the 2022 20th International Conference on Mechatronics—Mechatronika (ME), Pilsen, Czech Republic, 7–9 December 2022. [Google Scholar]
  20. Afanasyev, A.; Voit, N.; Ionova, I.; Ukhanova, M.; Yepifanov, V. Development of the Intelligent System of Engineering Education for Corporate Use in the University and Enterprises. In Teaching and Learning in a Digital World; Advances in Intelligent Systems and Computing; Springer: Berlin/Heidelberg, Germany, 2017; pp. 716–727. [Google Scholar] [CrossRef]
  21. Lesage, J.; Brennan, R.; Eaton, S.E.; Moya, B.; McDermott, B.; Wiens, J.; Herrero, K. Exploring natural language processing in mechanical engineering education: Implications for academic integrity. Int. J. Mech. Eng. Educ. 2023, 52, 88–105. [Google Scholar] [CrossRef]
  22. Decardi-Nelson, B.; Alshehri, A.S.; You, F. Generative artificial intelligence in chemical engineering spans multiple scales. Front. Chem. Eng. 2024, 6, 1458156. [Google Scholar] [CrossRef]
  23. Weng, J. Critical thinking with AI: Navigating ChatGPT in engineering education. In Proceedings of the SEFI 2024—52nd Annual Conference of the European Society for Engineering, Proceedings: Educating Responsible Engineers, Lausanne, Switzerland, 2–5 September 2024. [Google Scholar] [CrossRef]
  24. Belim, P.; Bhatt, N.; Lathigara, A.; Durani, H. Enhancing Level of Pedagogy for Engineering Students Through Generative AI. J. Eng. Educ. Transform. 2025, 38, 463–470. [Google Scholar] [CrossRef]
  25. Daniel, S.; Nikolic, S.; Sandison, C.; Haque, R.; Grundy, S.; Belkina, M.; Lyden, S.; Hassan, G.M.; Neal, P. Engineering assessment in the age of generative artificial intelligence: A critical analysis. In Proceedings of the 2024 World Engineering Education Forum—Global Engineering Deans Council, WEEF-GEDC 2024, Sydney, Australia, 2–5 December 2024. [Google Scholar] [CrossRef]
  26. Robledo-Rella, V.; Toh, B. Artificial intelligence in physics courses to support active learning. In Proceedings of the 10th International Conference on E-Society, E-Learning and E-Technologies, ICSLT 2024, Rome, Italy, 21–23 June 2024. [Google Scholar] [CrossRef]
  27. Shyr, W.; Yang, F.; Liu, P.; Hsieh, Y.; You, C.; Chen, D. Development of assessment indicators for measuring the student learning effects of artificial intelligence-based robot design. Comput. Appl. Eng. Educ. 2019, 27, 863–868. [Google Scholar] [CrossRef]
  28. Cerra, P.P.; Álvarez, H.F.; Parra, B.B.; Busón, S.C. Boosting computer-aided design pedagogy using interactive self-assessment graphical tools. Comput. Appl. Eng. Educ. 2023, 31, 26–46. [Google Scholar] [CrossRef]
  29. Clark, Q.M.; Clark, J.V. Personalized learning tool for thermodynamics. In Proceedings of the 2018 IEEE Frontiers in Education Conference (FIE), San Jose, CA, USA, 3–6 October 2018. [Google Scholar] [CrossRef]
  30. Auerbach, J.E.; Concordel, A.; Kornatowski, P.M.; Floreano, D. Inquiry-Based Learning with RoboGen: An Open-Source Software and Hardware Platform for Robotics and Artificial Intelligence. IEEE Trans. Learn. Technol. 2019, 12, 356–369. [Google Scholar] [CrossRef]
  31. Klarić, Š.; Lockley, A.; Pisačić, K. The Application of Virtual Tools in Teaching Dynamics in Engineering. Teh. Glas. 2023, 17, 98–103. [Google Scholar] [CrossRef]
  32. Caroline, M.; Sawatzki, G. Work in Progress: Using Cost-effective Educational Robotics Kits in Engineering Education. In Proceedings of the 2021 ASEE Annual Conference, Virtual, 26–29 July 2021. [Google Scholar]
  33. Huang, J.; Wensveen, S.; Funk, M. Experiential Speculation in Vision-Based AI Design Education: Designing Conventional and Progressive AI Futures. Int. J. Des. 2023, 17, 17. [Google Scholar] [CrossRef]
  34. Mamedova, L.; Rukovich, A.; Likhouzova, T.; Vorona-Slivinskaya, L. Online education of engineering students: Educational platforms and their influence on the level of academic performance. Educ. Inf. Technol. 2023, 28, 15173–15187. [Google Scholar] [CrossRef] [PubMed]
  35. Kahangamage, U.; Leung, R.C.K. Remodelling an engineering design subject to enhance students’ learning outcomes. Int. J. Technol. Des. Educ. 2020, 30, 799–814. [Google Scholar] [CrossRef]
  36. Oyetade, K.E.; Harmse, A.; Zuva, T. Technology adoption factors in education: A review. In Proceedings of the 2020 International Conference on Artificial Intelligence, Big Data, Computing and Data Communication Systems, icABCD 2020, Durban, South Africa, 6–7 August 2020. [Google Scholar] [CrossRef]
  37. Chang, C.; Yan, C.; Tseng, J. Perceived convenience in an extended technology acceptance model: Mobile technology and English learning for college students. Australas. J. Educ. Technol. 2012, 28, 809–826. [Google Scholar] [CrossRef]
  38. Venkatesh, V.; Davis, F. A Theoretical Extension of the Technology Acceptance Model: Four Longitudinal Field Studies. Manag. Sci. 2000, 46, 186–204. [Google Scholar] [CrossRef]
  39. Venkatesh, V.; Morris, M.G.; Davis, G.B.; Davis, F.D. User Acceptance of Information Technology: Toward a Unified View. MIS Q. (Manag. Inf. Syst. Q.) 2003, 27, 425–478. [Google Scholar] [CrossRef]
  40. Minc, S.D.; Chandanabhumma, P.P.; Sedney, C.L.; Haggerty, T.S.; Davidov, D.M.; Pollini, R.A. Mixed methods research: A primer for the vascular surgeon. Semin. Vasc. Surg. 2022, 35, 447–455. [Google Scholar] [CrossRef]
  41. Venkatesh, V.; Brown, S.A.; Bala, H. Bridging the qualitative-quantitative divide: Guidelines for conducting mixed methods research in information systems. MIS Q. Manag. Inf. Syst. 2013, 37, 21–54. [Google Scholar] [CrossRef]
  42. Rajgor, M.; Paresh, C.; Dhruv, P.; Chirag, P.; Dhrmesh, B. RII & IMPI: Effective techniques for finding delay in construction project. Int. Res. J. Eng. Technol. (IRJET) 2016, 3, 1173–1177. [Google Scholar]
  43. Hair, J.F., Jr. Multivariate Data Analysis, 8th ed.; Pearson: London, UK, 2019. [Google Scholar]
  44. Hamid, M.R.A.; Sami, W.; Sidek, M.H.M. Discriminant validity assessment: Use of fornell & larcker criterion versus HTMT criterion. J. Phys. Conf. Ser. 2017, 890, 012163. [Google Scholar] [CrossRef]
  45. Huang, C.-C.; Wang, Y.-M.; Wu, T.-W.; Wang, P.-A. An Empirical Analysis of the Antecedents and Performance Consequences of Using the Moodle Platform. Int. J. Inf. Educ. Technol. 2013, 3, 217–221. [Google Scholar] [CrossRef]
  46. Hair, J.; Alamer, A. Partial Least Squares Structural Equation Modeling (PLS-SEM) in second language and education research: Guidelines using an applied example. Res. Methods Appl. Linguist. 2022, 1, 100027. [Google Scholar] [CrossRef]
  47. Wagenmakers, E.; Verhagen, J.; Ly, A.; Matzke, D.; Steingroever, H.; Rouder, J.N.; Morey, R.D. The need for bayesian hypothesis testing in psychological science. In Psychological Science Under Scrutiny: Recent Challenges and Proposed Solutions; Lilienfeld, S.O., Waldman, I.D., Eds.; Wiley: Hoboken, NJ, USA, 2017. [Google Scholar] [CrossRef]
Figure 1. Extended Technology Acceptance Model (TAM).
Figure 2. Students’ diverse usage of GAI tools (Quantitative).
Figure 3. Enablers RII.
Figure 4. Challenges RII.
Figure 5. Strategies RII.
Figure 6. Psychological factors RII.
Figure 7. Hypothesis of PLS-SEM.
Figure 8. Effect of factors shaping perceptions.
Table 1. Summary of Factors Derived From the Literature [11].
Category | Factors | Key References
Enablers | Automating processes to minimize workload and improve time efficiency; Ensuring a safer alternative to traditional lab environments; Personalizing learning content and providing real-time feedback; Enhancing collaboration and engagement among learners; Providing accessible tools that simplify complex concepts and support learning; Boosting motivation, comprehension, and memory; Developing problem-solving skills, critical thinking, sustained attention, and cognitive efficiency; Enhancing spatial reasoning and visualization skills; Aligning learning outcomes with industry-relevant skills; Encouraging active involvement of institutional management; Fostering a mindset geared towards critical thinking; Simplifying learning processes and improving the overall educational experience; Digitizing engineering drawings. | [10,12,17,18,27,28,29,30,31]
Challenges | Managing high costs associated with technology integration; Addressing ethical concerns related to AI implementation; Ensuring seamless integration with existing learning platforms. | [12,21,32,33]
Strategies | Defining clear and structured academic policies for technology use; Incorporating AI tools in ways that stimulate critical thinking; Implementing advanced plagiarism detection tools to identify AI-generated content; Promoting teamwork and a collaborative learning culture. | [19,20,21,34,35]
Table 2. Students and Faculty semi-structured interviews profile.
Group | n | % of Total | ME Discipline | Non-ME Discipline
Students | 33 | 71.7% | 21 (63.6%) | 12 (36.4%)
Faculty | 13 | 28.3% | 8 (61.5%) | 5 (38.5%)
Table 3. Students’ diverse usage of GAI tools.
Purpose | Students (n = 33)
Writing (summarizing, paraphrasing) | 31 (94%)
Idea generation/brainstorming | 30 (91%)
Research support (searching, reviewing) | 32 (97%)
Problem solving (analytical tasks, simulations) | 28 (85%)
Coding/debugging | 27 (82%)
Visualization (diagrams, graphs, images) | 25 (76%)
Table 4. Qualitative findings.
Category | Code | Factor
Enablers | E1 | Institutional and governing bodies’ support for GAI integration
 | E2 | Faculty attitudes (willingness or resistance) toward GAI adoption
 | E3 | Students’ attitudes (willingness or resistance) to adopt GAI tools in learning
 | E4 | Availability of GAI tools enhances teaching and learning
 | E5 | GAI tools enhancing industry-aligned skills (e.g., spatial skills, industry’s technologies)
 | E6 | The utilization of GAI tools to foster collaboration and teamwork among students
 | E7 | GAI tools are available to enhance time efficiency and to reduce workload in research and courses
 | E8 | Adaptability of GAI tools to online learning environments
Challenges | C1 | Concerns about students misusing GAI (e.g., plagiarism, hindering critical skill development)
 | C2 | High costs and technical limitations of GAI tools
 | C3 | Inadequate training for faculty to use GAI tools effectively
 | C4 | Biases or inaccuracies in GAI-generated outputs
 | C5 | Over-reliance on GAI reducing critical thinking skills
 | C6 | Integration challenges with software like AutoCAD and MATLAB
 | C7 | Generational gaps affecting GAI adoption by faculty
 | C8 | Improper adoption of GAI, potentially leading to missed opportunities for enhancing teaching and learning or creating inequities for individuals unfamiliar with these tools
 | C9 | GAI reducing human factors (e.g., empathy) in the teaching process
Strategies | S1 | Implementing academic integrity strategies and promoting ethical utilization to address GAI misuse
 | S2 | Training and support programs to enhance faculty adoption of GAI tools
 | S3 | Focusing on gradual adoption of GAI tools prior to full-scale implementation immediately
 | S4 | Stating clear policies and guidelines to regulate GAI use
 | S5 | Tailoring GAI tools to individual needs to enhance teaching and learning outcomes
 | S6 | Implementation of specialized GAI tools tailored for educational purposes to enhance their classroom
 | S7 | Exploring for engineering-specialized GAI tools for better accuracy
 | S8 | Faculty awareness of students’ behavior (i.e., to distinguish students’ authentic work from GAI-generated solutions through participation, exams, and assignment comparisons)
Psychological Factors | P1 | Concern about GAI replacing teaching roles
 | P2 | Confidence in integrating GAI tools
 | P3 | Ethical alignment of GAI use in learning
 | P4 | Impact of GAI on students’ critical thinking
 | P5 | Stress caused by adopting GAI tools
 | P6 | Concerns about GAI reducing human interaction in teaching and learning
 | P7 | Motivation from GAI tools for teaching and learning
 | P8 | Concerns about difficulty in tracking GAI tool usage by students
 | P9 | Concerns about data privacy and security when using GAI tools
Table 5. Students and Faculty RII, STD, Ranking, and Importance Level.
Factor | RII (Students) | STD (Students) | Rank (Students) | Imp. Level (Students) | RII (Faculty) | STD (Faculty) | Rank (Faculty) | Imp. Level (Faculty)
Enablers
GAI tools availability to enhance time-efficiency and to reduce workload in research and courses | 0.8689 | 1.1162 | 1 | H | 0.8182 | 1.0585 | 2 | H
Students’ attitudes (willingness or resistance) to adopt GAI tools in learning | 0.8459 | 1.1199 | 2 | H | 0.8636 | 1.1102 | 1 | H
Availability of GAI tools to enhance students’ learning | 0.8033 | 0.9017 | 3 | H | 0.7636 | 1.1668 | 4 | H
Adaptability of GAI tools to online learning environments | 0.8000 | 0.9397 | 4 | H | 0.7727 | 0.9092 | 3 | H
GAI tools enhancing industry-aligned skills (e.g., spatial skills, industry technologies) | 0.7770 | 0.9678 | 5 | H | 0.7227 | 1.3846 | 6 | H
Institutional and governing bodies support for GAI integration | 0.7180 | 1.0889 | 6 | H | 0.7000 | 1.0525 | 7 | H
Faculty attitudes (willingness or resistance) toward GAI adoption | 0.6894 | 0.8142 | 7 | M | 0.7273 | 1.2241 | 5 | H
The utilization of GAI tools to foster collaboration and teamwork among students | 0.6918 | 0.9832 | 8 | M | 0.6773 | 1.0695 | 8 | M
Challenges
Over-reliance on GAI reducing critical thinking skills | 0.8426 | 1.0467 | 1 | H | 0.8409 | 0.9344 | 2 | H
Concerns about students misusing GAI (e.g., plagiarism, hindering critical skill development) | 0.8131 | 1.0934 | 2 | H | 0.9273 | 0.7803 | 1 | H
Biases or inaccuracies in GAI-generated outputs | 0.7475 | 0.9339 | 3 | H | 0.7364 | 1.2069 | 3 | H
Inadequate training for faculty to use GAI tools effectively | 0.7246 | 0.9644 | 4 | H | 0.7182 | 1.1159 | 4 | H
Generational gaps affecting GAI adoption by faculty | 0.7148 | 0.9507 | 5 | H | 0.6909 | 1.1911 | 8 | M
Improper adoption of GAI, potentially leading to missed opportunities for enhancing teaching and learning or creating inequities for individuals unfamiliar with these tools | 0.6885 | 1.2088 | 6 | M | 0.7000 | 1.1093 | 6 | H
GAI reducing human factors (e.g., empathy) in the teaching process | 0.6885 | 1.0402 | 6 | M | 0.7136 | 1.2119 | 5 | H
Integration challenges with software like AutoCAD and MATLAB | 0.6295 | 1.0087 | 8 | M | 0.7000 | 1.0455 | 6 | H
High costs and technical limitations of GAI tools | 0.6131 | 1.2045 | 9 | M | 0.5364 | 1.1493 | 9 | M
Strategies
Stating clear policies and guidelines to regulate GAI use | 0.8230 | 1.0509 | 1 | H | 0.8727 | 1.0888 | 1 | H
Exploring for engineering-specialized GAI tools for better accuracy | 0.8000 | 0.9257 | 2 | H | 0.8227 | 0.9997 | 3 | H
Tailoring GAI tools to individual needs to enhance teaching and learning outcomes | 0.7869 | 0.9075 | 3 | H | 0.8136 | 0.9173 | 4 | H
Training and support programs to enhance faculty adoption of GAI tools | 0.7803 | 1.0181 | 4 | H | 0.8045 | 0.9976 | 5 | H
Focusing on gradual adoption of GAI tools prior to full-scale implementation immediately | 0.7803 | 0.9638 | 4 | H | 0.7955 | 1.0391 | 6 | H
Implementation of specialized GAI tools tailored for educational purposes to enhance their classroom use | 0.7672 | 1.0030 | 6 | H | 0.7909 | 0.8378 | 8 | H
Implementing academic integrity strategies and promoting ethical utilization to address GAI misuse | 0.7443 | 0.9487 | 7 | H | 0.7955 | 1.0333 | 6 | H
Faculty awareness of students’ behavior (i.e., to distinguish students’ authentic work from GAI-generated solutions through participation, exams, and assignment comparisons) | 0.7213 | 1.1871 | 8 | H | 0.8727 | 1.0227 | 1 | H
Psychological Factors
Impact of GAI on students’ critical thinking | 0.8230 | 1.2135 | 1 | H | 0.8455 | 1.3263 | 1 | H
Ethical alignment of GAI use in teaching/learning | 0.7639 | 0.8855 | 2 | H | 0.7727 | 1.1081 | 2 | H
Motivation from GAI tools for learning | 0.7607 | 0.9748 | 3 | H | 0.6682 | 1.0330 | 7 | M
Concerns about GAI reducing human interaction in teaching/learning | 0.7574 | 1.0181 | 4 | H | 0.6955 | 1.3552 | 6 | M
Concerns about difficulty in tracking GAI tool usage by students | 0.7443 | 1.2038 | 5 | H | 0.7591 | 1.1532 | 3 | H
Concerns about data privacy and security when using GAI tools | 0.7377 | 1.1272 | 6 | H | 0.7364 | 1.0754 | 4 | H
Confidence in integrating GAI tools | 0.7115 | 0.9630 | 7 | H | 0.7136 | 1.0105 | 5 | H
Concerns about GAI replacing teaching roles | 0.6328 | 1.1566 | 8 | M | 0.5818 | 1.3601 | 9 | M
Stress caused by adopting GAI tools | 0.5738 | 1.2851 | 9 | M | 0.5909 | 1.2497 | 8 | M
Table 6. Excluded factors for statistical significance.
Code | Factor | Correlation with Total
C2 | High costs and technical limitations of GAI tools | 0.28519
C6 | Integration challenges with engineering software | 0.215237
C9 | GAI reducing human factors in the teaching process | 0.210664
P1 | Concerns about GAI replacing teaching roles | 0.30493
P5 | Stress caused by adopting GAI tools | 0.320615
Table 7. Cronbach’s Alpha of factors.
Construct (After) | Correlation with Total | Cronbach α
Enablers 0.809218
Institutional and governing bodies support for GAI integration 0.446457 0.798742
Faculty attitudes (willingness or resistance) toward GAI adoption 0.40188 0.80505
Students’ attitudes (willingness or resistance) to adopt GAI tools in learning. 0.405736 0.804508
Availability of GAI tools enhance teaching and learning 0.539651 0.785215
GAI tools enhancing industry-aligned skills (e.g., Spatial skills, Industry’s technologies). 0.649457 0.768672
The utilization of GAI tools to foster collaboration and teamwork among students. 0.55102 0.783533
GAI tools availability to enhance time-efficiency and to reduce workload in research and courses. 0.606341 0.775247
Adaptability of GAI tools to online learning environments. 0.591839 0.777435
Challenges 0.630724
Concerns about students misusing GAI tools (e.g., plagiarism, hindering critical skill development) 0.425484 0.514247
Inadequate training for faculty to use GAI tools effectively 0.341687 0.550307
Biases or inaccuracies in GAI-generated outputs 0.306981 0.564746
Over-reliance on GAI reducing critical thinking skills 0.252545 0.586822
Generational gaps affecting GAI adoption by faculty 0.389855 0.529788
Improper adoption of GAI, potentially leading to missed opportunities for enhancing teaching and learning or creating inequities for individuals unfamiliar with these tools 0.283681 0.57428
Strategies 0.826939
Implementing academic integrity strategies and promoting ethical utilization to address GAI misuse 0.608717 0.798864
Training and support programs to enhance faculty adoption of GAI tools. 0.626743 0.796364
Focusing on gradual adoption of GAI tools prior to full-scale implementation immediately 0.550523 0.806826
Stating clear policies and guidelines to regulate GAI use 0.511806 0.812033
Tailoring GAI tools to individual needs to enhance teaching and learning outcomes 0.555949 0.806091
Implementation of specialized GAI tools tailored for educational purposes to enhance their classroom 0.508249 0.812508
Exploring for engineering-specialized GAI tools for better accuracy 0.538493 0.808452
Faculty awareness of students’ behavior (i.e., to distinguish students’ authentic work from GAI-generated solutions through participation, exams, and assignment comparisons) 0.497223 0.813976
Psychological factors 0.692188
Confidence in integrating GAI tools 0.305266 0.648713
Ethical alignment of GAI use in teaching 0.398204 0.621964
Impact of GAI on students’ critical thinking 0.418051 0.616094
Concerns about GAI reducing human interaction in learning 0.383502 0.626277
Motivation from GAI tools for teaching and learning 0.269473 0.658694
Concerns about difficulty in tracking GAI tool usage by students 0.522038 0.584408
Concerns about data privacy and security when using GAI tools 0.323436 0.643578
Table 8. CFA final model.
Fit Summary (After)
Chi-Square | 634.0241
Chi-Square DF | 367
Pr > Chi-Square | <0.0001
Standardized RMR (SRMR) | 0.1066
RMSEA Estimate | 0.0836
Bentler Comparative Fit Index | 0.7033
Table 9. PLS-SEM findings.
Path | Standardized Estimate | p Value for the Unstandardized Estimate
Enablers ⟶ E1 0.45387 0.0799
Enablers ⟶ E2 0.41538 0.0848
Enablers ⟶ E3 0.49486 0.0759
Enablers ⟶ E4 0.64681 0.0672
Enablers ⟶ E5 0.71269 0.065
Enablers ⟶ E6 0.62189 0.0682
Enablers ⟶ E7 0.71455 0.065
Enablers ⟶ E8 0.67448 0.0662
Challenges ⟶ C1 0.61558 0.068
Challenges ⟶ C3 0.39297 0.088
Challenges ⟶ C4 0.44678 0.0804
Challenges ⟶ C5 0.42881 0.0826
Challenges ⟶ C7 0.44545 0.0805
Challenges ⟶ C8 0.3431 0.0985
Strategies ⟶ S1 0.68142 0.0879
Strategies ⟶ S2 0.7148 0.0869
Strategies ⟶ S3 0.6453 0.0891
Strategies ⟶ S4 0.61755 0.0903
Strategies ⟶ S5 0.56692 0.0927
Strategies ⟶ S6 0.5446 0.094
Strategies ⟶ S7 0.56511 0.0928
Strategies ⟶ S8 0.53562 0.0946
Psychological ⟶ P2 0.39805 0.7599
Psychological ⟶ P3 0.53992 0.7596
Psychological ⟶ P4 0.45213 0.7598
Psychological ⟶ P6 0.49121 0.7597
Psychological ⟶ P7 0.3833 0.76
Psychological ⟶ P8 0.60948 0.7596
Psychological ⟶ P9 0.44782 0.7598
Enablers ⟶ Psychological 0.27065 0.7281
Challenges ⟶ Psychological 0.9945 0.7677
Strategies ⟶ Psychological −0.46905 0.7614
Perception ⟶ Enablers 0.6556 0.0687
Perception ⟶ Challenges 0.81937 0.0686
Perception ⟶ Strategies 0.80273 0.0867
Perception ⟶ Psychological 0.2181 0.8875