Examining the Influence of AI on Python Programming Education: An Empirical Study and Analysis of Student Acceptance Through TAM3
Abstract
1. Introduction
1.1. Background and Motivation
1.2. Theoretical Framing
1.3. Context and Research Gap
1.4. Research Purpose and Contribution
2. Literature Review
2.1. AI Adoption in K–12 and Higher Education
2.2. ChatGPT and Conversational AI Tools
2.3. Broader Applications of TAM and UTAUT in EdTech
2.4. Faculty and Institutional Perspectives on AI Adoption
2.5. Thematic Synthesis and Observed Gaps
- PU and PEOU consistently emerge as the strongest predictors of technology acceptance across all contexts.
- Constructs such as trust, ethics, and sociocultural variables (e.g., gender, language, institutional support) are becoming more prominent, indicating a shift toward more nuanced models of technology adoption.
- Studies on AI chatbots and generative AI reveal mixed cognitive outcomes, suggesting that while these tools may enhance engagement and access, they require carefully designed instructional scaffolding to avoid superficial learning.
2.6. Research Objectives and Questions
2.6.1. Research Objectives
2.6.2. Research Questions
- RQ1: Which TAM3 constructs significantly influence the intention of female computer science students to use PyChatAI?
- RQ2: How do behavioural intention and perceived usefulness predict the actual usage of the chatbot in programming education?
- RQ3: What are the indirect (mediated) effects through PEOU and PU by which antecedents such as trust and facilitating conditions influence behavioural intention and actual usage?
- RQ4: Does the use of a bilingual AI assistant enhance user perceptions of both usability and educational value?
2.6.3. Research Hypotheses
3. PyChatAI Learning Tool: Design and Features
3.1. Overview of PyChatAI
3.2. Functional Components and Features
- Ask a Question
- Functionality: Users can pose Python-related queries and receive AI-generated responses with animated delivery.
- Use Case: Ideal for asking about syntax rules, logic flows, or specific programming functions.
- Pedagogical Role: Facilitates self-directed learning and reinforces understanding through direct AI interaction.
- Clear Response
- Functionality: Resets the conversation interface.
- Use Case: Allows students to begin a new query session without distraction.
- Pedagogical Role: Supports cognitive reset and iterative questioning, minimising overload.
- Learn Python
- Functionality: Offers structured lessons categorised by topics.
- Use Case: Students choose modules such as loops, inheritance, or file handling to receive explanations and mini exercises.
- Pedagogical Role: Delivers microlearning experiences for just-in-time concept reinforcement.
- Solve a Problem
- Functionality: Accepts problem descriptions and outputs guided solutions.
- Use Case: Prompts such as “reverse a string” are broken into logic steps with corresponding Python code.
- Pedagogical Role: Cultivates computational thinking and debugging skills through modelled reasoning.
- Generate Comments
- Functionality: Analyses submitted code and returns annotated lines.
- Use Case: Suitable for reinforcing understanding of code structure and logic.
- Pedagogical Role: Enhances code readability and promotes reflection through automated explanation.
- Prompts
- Functionality: Offers categorised example prompts.
- Use Case: Helps users formulate better queries by referencing model examples grouped by topic (e.g., data structures, OOP).
- Pedagogical Role: Scaffolds effective engagement and supports inquiry-based learning.
- Help
- Functionality: Provides access to usage guidelines in a help module.
- Use Case: Users can review functional descriptions and interaction tips.
- Pedagogical Role: Reduces cognitive load and enhances PEOU by supporting autonomy and system familiarity.
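To make the “Solve a Problem” and “Generate Comments” features concrete, the following is an illustrative sketch of the kind of stepwise, commented solution those modes are described as producing for a prompt like “reverse a string”; it is not actual PyChatAI output.

```python
# Illustrative sketch only: a stepwise, line-annotated solution of the kind
# the "Solve a Problem" / "Generate Comments" modes might return.
def reverse_string(text: str) -> str:
    result = ""                   # Step 1: start with an empty accumulator
    for ch in text:               # Step 2: visit each character left to right
        result = ch + result      # Step 3: prepend, so earlier chars drift right
    return result                 # Step 4: the accumulator holds the reversal

print(reverse_string("python"))   # nohtyp
```

Breaking the task into numbered steps with per-line comments mirrors the modelled-reasoning role described above: the learner sees both the logic decomposition and its direct mapping to Python code.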
3.3. Educational Alignment
- PEOU: Simple interface, bilingual support, structured prompts, and clear guidance reduce interaction friction.
- PU: Real-time answers, task-based modules, and annotated code outputs enhance learning efficiency and effectiveness.
- Trust and Self-Efficacy: Clear explanations and consistent AI responses build confidence, especially among novice learners.
- Facilitating Conditions: Mobile accessibility, offline guidance, and multi-language support remove common adoption barriers.
4. Research Methodology
4.1. Study Design
4.1.1. Participant Distribution and Group Configuration
4.1.2. Phase Timeline and Workflow
- Phase 1—Pre-Test (2 Weeks):
- Phase 2—Intervention (4 Months):
- Phase 3—Post-Test (2 Weeks):
4.1.3. TAM3 Integration and Measurement Mapping
4.1.4. Clarified Summary
4.1.5. Justification for Design
4.2. Measurement Model Analysis
4.2.1. Common Method Bias
4.2.2. Measurement Model Evaluation
4.2.3. Model Fit
- Chi-square to degrees of freedom ratio (CMIN/DF) = 1.231 (threshold < 3),
- Comparative Fit Index (CFI) = 0.980 (threshold > 0.95),
- Standardised Root Mean Square Residual (SRMR) = 0.061 (threshold < 0.08),
- Root Mean Square Error of Approximation (RMSEA) = 0.039 (threshold < 0.06),
- PClose = 0.934 (threshold > 0.05).
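For readers checking the ratio-based indices, CMIN/DF and RMSEA follow directly from the model chi-square, its degrees of freedom, and the sample size. The sketch below uses hypothetical chi-square, df, and N values (none of which are reported in this section) chosen only so the arithmetic reproduces indices consistent with those listed above:

```python
import math

# Hypothetical inputs for illustration only; the paper reports the
# resulting fit indices, not these underlying quantities.
chi_square, df, n = 123.1, 100, 153

cmin_df = chi_square / df  # CMIN/DF, threshold < 3
# Population-discrepancy form of RMSEA, threshold < 0.06
rmsea = math.sqrt(max(chi_square - df, 0) / (df * (n - 1)))

print(round(cmin_df, 3))  # 1.231
print(round(rmsea, 3))    # 0.039
```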
4.2.4. Construct Reliability
4.2.5. Convergent Validity
4.2.6. Discriminant Validity
- The HTMT value between PU and IU was 0.521,
- Between PU and AUB was 0.401,
- Between TR and IU was 0.351,
- Between TR and AUB was 0.255.
 | PU | FC | PV | PEOU | HM | CF | SI | TR | IU | AUB
---|---|---|---|---|---|---|---|---|---|---
PU | — | | | | | | | | |
FC | 0.076 | — | | | | | | | |
PV | 0.120 | 0.008 | — | | | | | | |
PEOU | 0.352 | 0.313 | 0.002 | — | | | | | |
HM | 0.204 | 0.012 | 0.048 | 0.171 | — | | | | |
CF | 0.001 | 0.081 | 0.098 | 0.028 | 0.052 | — | | | |
SI | 0.494 | 0.088 | 0.141 | 0.122 | 0.046 | 0.000 | — | | |
TR | 0.249 | 0.033 | 0.077 | 0.428 | 0.070 | 0.018 | 0.048 | — | |
IU | 0.521 | 0.187 | 0.113 | 0.441 | 0.039 | 0.061 | 0.081 | 0.351 | — |
AUB | 0.401 | 0.025 | 0.028 | 0.374 | 0.220 | 0.037 | 0.083 | 0.255 | 0.453 | —
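The HTMT values above follow Henseler et al.’s criterion: the mean heterotrait–heteromethod correlation between two constructs’ items, divided by the geometric mean of each construct’s monotrait–heteromethod correlations. A minimal sketch on a synthetic item correlation matrix (the function and data are illustrative, not the study’s actual items):

```python
import numpy as np

def htmt(corr, idx_a, idx_b):
    """HTMT ratio for two constructs, given a full item correlation
    matrix and the item indices belonging to each construct."""
    corr = np.asarray(corr, dtype=float)
    # Mean heterotrait-heteromethod correlation: items of A vs. items of B.
    hetero = corr[np.ix_(idx_a, idx_b)].mean()
    # Mean monotrait-heteromethod correlation: off-diagonal entries of
    # each construct's own item block.
    def mono(idx):
        block = corr[np.ix_(idx, idx)]
        return block[~np.eye(len(idx), dtype=bool)].mean()
    return hetero / np.sqrt(mono(idx_a) * mono(idx_b))

# Synthetic 4-item example: items 0-1 load on construct A, items 2-3 on B.
corr = np.array([
    [1.0, 0.8, 0.3, 0.3],
    [0.8, 1.0, 0.3, 0.3],
    [0.3, 0.3, 1.0, 0.9],
    [0.3, 0.3, 0.9, 1.0],
])
print(round(htmt(corr, [0, 1], [2, 3]), 3))  # 0.354, well below the 0.85 cutoff
```

Values below the conservative 0.85 cutoff, as in the matrix above, indicate that each pair of constructs is empirically distinct.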
5. Structural Model Assessment
5.1. Structural Model Results (Direct Effects)
5.2. Indirect Effects/Mediation Analysis
5.2.1. Key Mediation Paths
5.2.2. Non-Significant Mediation Paths
5.2.3. Additional Supporting Paths
6. Discussion
6.1. Theoretical Implications
6.2. Practical Implications of Effect Sizes and Statistical Significance
6.3. Limitations and Future Research Directions
7. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Appendix A
First Author | Objective | Participants | Design | Theory |
---|---|---|---|---|
Han et al. [20] | Analyse the factors that influence students’ use of AI-based learning systems | 66 students from two middle schools in China | 260 questionnaires were used | TAM3 |
Goh et al. [24] | To explore the leveraging of the ChatGPT paradigm in conceptual research | The study used ChatGPT-generated responses for analysis | Two experimental studies in which ChatGPT was prompted with related scenarios | TAM |
Bouteraa et al. [25] | To determine the role of student integrity in adopting ChatGPT | 41 measurement items based on 921 individuals | Online survey | SCT, UTA, and UTAUT |
Al-Mamary et al. [21] | To determine students’ intention to adopt an LMS | 228 students at the University of Hail | Online questionnaire | TAM |
Kleine et al. [26] | Identify factors associated with the usage of AI chatbots by university students | Study 1 (N = 72), Study 2 (N = 153) | Survey forms | TAM and TAM3 |
Duy et al. [30] | To determine the factors influencing students’ use of AI | 390 students at the University | Survey using the stratified sampling method | TAM and theory of planned behaviour |
Castillo et al. [31] | To identify the factors that shape students’ acceptance of technologies | 326 samples from the students | Self-administered questionnaire | TAM |
Madni et al. [32] | To determine the factors that lead students to adopt different technologies in higher education | Existing studies from 2016 to 2021 | Review of Web of Science, Scopus, ScienceDirect, Taylor & Francis, Springer, Google Scholar, IEEE Xplore, and the ACM Digital Library | Designed Adoption Model |
Sánchez et al. [22] | To capture students’ perceptions of being assessed by machine learning models | 30 Likert-type items obtained from previous theories | Questionnaire | TAM |
Helmiatin et al. [23] | To investigate how AI can be used by policymakers in education | 348 lecturers and 39 administrative personnel | Survey | PLS-SEM Model |
Moon et al. [27] | To analyse elementary teachers’ perceptions of ChatGPT | Responses from 100 elementary teachers based on their profiles | Questionnaire | TAM3 model |
Yazdanpanah et al. [33] | To demonstrate orthodontists’ acceptance of teleorthodontic technology | Responses gathered from 300 Iranian orthodontists | Adapted questionnaire | TAM3 model |
Faqih et al. [34] | To investigate mobile commerce adoption in Jordan | Data collected from 14 private Jordanian universities, yielding 425 valid datasets | Paper-based questionnaire | TAM3 model |
AlGerafi et al. [10] | To evaluate the intention of Chinese higher education students toward AI-based robots for study | Analysis performed on 425 valid datasets | Questionnaires | TAM3 model |
Kavitha et al. [39] | To assess the intention and willingness to use AI in higher pedagogical and professional domains | 400 responses collected from academics ranging from assistant professors to professors | Cross-sectional survey design | TAM based on CB-SEM |
Almulla et al. [29] | To assess the impact of learning satisfaction and adoption of ChatGPT by university students | 252 university students in Saudi Arabia | Questionnaire | TAM, PLS-SEM, and an SEM approach |
Qashou et al. [36] | To analyse the adoption of mobile learning by university students | 402 university students involved in the survey | Questionnaire | TAM |
Li et al. [37] | To determine the important factors that influence students to adopt intelligent methods in physical education | 502 responses from university students | Questionnaire Star software | TAM |
Ma et al. [35] | To assess the factors influencing teacher education students’ adoption of AI | 359 samples collected from a survey | Survey questionnaire | TAM and SEM models |
Goli et al. [40] | To examine various factors which affect an individual’s behavioural intention to use chatbots | Data collected from 378 individuals | A questionnaire was designed | TAM2, TAM3, UTAUT2, and DeLone and McLean models |
Dewi et al. [38] | To analyse the user acceptance level of the e-Report system | 93 samples collected from 1201 teachers | Questionnaire was distributed | TAM3 and the DeLone & McLean methods |
Handoko et al. [41] | Examine the factors influencing students’ use of AI and its impact on learning satisfaction | 130 university students | Quantitative study using convenience sampling and SEM | UTAUT |
Musyaffi et al. [42] | Examine the factors influencing accounting students’ acceptance of AI in learning | 147 higher-education students | Quantitative study using a survey, analysed in SmartPLS 4.0 with the partial least squares approach | TAM and Information Systems Success Model |
Chiu et al. [43] | Compare the effectiveness of traditional programming instruction versus Robot-Assisted Instruction (RAI) in enhancing student learning and satisfaction | 81 junior high school students (41 in a traditional programming course, 40 in an RAI course using Dash robots) | Quantitative study using a questionnaire survey and pre-test/post-test analysis based on the UTAUT model | UTAUT |
Pan et al. [44] | Investigate key factors influencing college students’ willingness to use AI Coding Assistant Tools (AICATs) by extending the Technology Acceptance Model | 303 Chinese college students, 251 valid responses after data cleaning | Quantitative study using an online survey, factor analysis, and SEM | TAM with extended variables: Perceived Trust, Perceived Risk, Self-Efficacy, and Dependence Worry |
Ho et al. [45] | Investigate the impact of generative AI intervention in teaching App Inventor programming courses, comparing UI materials designed by teachers and by generative AI | 6 students in a preliminary test, with plans to expand the sample size in future research | Experimental study with a single-group pre-test/post-test design comparing traditional teacher-designed UI materials with generative-AI-generated materials | TAM, User Satisfaction Model |
Liu et al. [46] | Summarise the current status of AI-assisted programming technology in education and explore the efficiency mechanisms of AI–human paired programming | Not specified; literature-review-based study | Systematic literature review following thematic coding methodology | TAM, Learning Analytics, Grounded Theory |
Ferri et al. [47] | To ascertain risk managers’ intentions to use artificial intelligence in performing their tasks and to examine the factors affecting their motivation to do so | 782 Italian risk managers surveyed, with 208 full responses (response rate of 26.59%) | Integrated theoretical framework combining TAM3 and UTAUT; PLS-SEM applied to data from a Likert-based questionnaire | TAM3 and UTAUT |
Kleine et al. [26] | Investigate factors influencing AI chatbot usage intensity among university students | University students (Study 1: N = 72, Study 2: N = 153) | Daily diary study (5 days); multilevel SEM; between-person and within-person variation | TAM, TAM3, Cognitive Load Theory, Task-Technology Fit Theory (TTF), Social Influence Theory |
Cippelletti et al. [48] | Examine the perceived acceptability of collaborative robots (“cobots”) among French workers | 442 French workers | Quantitative study | TAM3, the Meaning of Work Inventory, and the Ethical, Legal and Social Implications (ELSI) scale |
Abdi et al. [49] | To explore the predictors of behavioural intention to use ChatGPT for academic purposes among university students in Somalia | 299 university students who have used ChatGPT for academic purposes | Quantitative study | TAM |
References
- Alanazi, M.; Soh, B.; Samra, H.; Li, A. PyChatAI: Enhancing Python Programming Skills—An Empirical Study of a Smart Learning System. Computers 2025, 14, 158. [Google Scholar] [CrossRef]
- Alanazi, M.; Soh, B.; Samra, H.; Li, A. The Influence of Artificial Intelligence Tools on Learning Outcomes in Computer Programming: A Systematic Review and Meta-Analysis. Computers 2025, 14, 185. [Google Scholar] [CrossRef]
- Aznar-Díaz, I.; Cáceres-Reche, M.P.; Romero-Rodríguez, J.M. Artificial intelligence in higher education: A bibliometric study on its impact in the scientific literature. Educ. Sci. 2019, 9, 51. [Google Scholar] [CrossRef]
- Fan, G.; Liu, D.; Zhang, R.; Pan, L. The impact of AI-assisted pair programming on student motivation, programming anxiety, collaborative learning, and programming performance: A comparative study with traditional pair programming and individual approaches. Int. J. STEM Educ. 2025, 12, 16. [Google Scholar] [CrossRef]
- Bassner, P.; Frankford, E.; Krusche, S. Iris: An AI-driven virtual tutor for computer science education. In Proceedings of the 2024 on Innovation and Technology in Computer Science Education, Milan, Italy, 8–10 July 2024; Volume 1, pp. 394–400. [Google Scholar]
- Kashive, N.; Powale, L.; Kashive, K. Understanding user perception toward artificial intelligence (AI) enabled e-learning. Int. J. Inf. Learn. Technol. 2020, 38, 1–19. [Google Scholar] [CrossRef]
- Ghimire, A.; Edwards, J. Generative AI adoption in the classroom: A contextual exploration using the Technology Acceptance Model (TAM) and the Innovation Diffusion Theory (IDT). In Proceedings of the 2024 Intermountain Engineering, Technology and Computing (IETC), Logan, UT, USA, 13–14 May 2024; pp. 129–134. [Google Scholar]
- Aldraiweesh, A.A.; Alturki, U. The influence of Social Support Theory on AI acceptance: Examining educational support and perceived usefulness using SEM analysis. IEEE Access 2025, 13, 18366–18385. [Google Scholar] [CrossRef]
- Daleiden, P.; Stefik, A.; Uesbeck, P.M.; Pedersen, J. Analysis of a randomized controlled trial of student performance in parallel programming using a new measurement technique. ACM Trans. Comput. Educ. (TOCE) 2020, 20, 1–28. [Google Scholar] [CrossRef]
- Algerafi, M.A.; Zhou, Y.; Alfadda, H.; Wijaya, T.T. Understanding the factors influencing higher education students’ intention to adopt artificial intelligence-based robots. IEEE Access 2023, 11, 99752–99764. [Google Scholar] [CrossRef]
- Almogren, A.S. Creating a novel approach to examine how King Saud University’s Arts College students utilize AI programs. IEEE Access 2025, 13, 63895–63917. [Google Scholar] [CrossRef]
- Binyamin, S.; Rutter, M.; Smith, S. The Moderating Effect of Gender and Age on the Students’ Acceptance of Learning Management Systems in Saudi Higher Education. Int. J. Educ. Technol. High. Educ. 2020, 17, 1–18. [Google Scholar] [CrossRef]
- Alanazi, A.; Li, A.; Soh, B. Barriers and Potential Solutions to Women in Studying Computer Programming in Saudi Arabia. Int. J. Enhanc. Res. Manag. Comput. Appl. 2023, 12, 55–70. [Google Scholar]
- Alharbi, A. English Medium Instruction in Saudi Arabia: A Systematic Review. Lang. Teach. Res. Q. 2024, 42, 21–37. [Google Scholar] [CrossRef]
- Chan, C.K.Y.; Hu, W. Students’ voices on generative AI: Perceptions, benefits, and challenges in higher education. Int. J. Educ. Technol. High. Educ. 2023, 20, 43. [Google Scholar] [CrossRef]
- Groothuijsen, S.; van den Beemt, A.; Remmers, J.C.; van Meeuwen, L.W. AI chatbots in programming education: Students’ use in a scientific computing course and consequences for learning. Comput. Educ. Artif. Intell. 2024, 7, 100290. [Google Scholar] [CrossRef]
- Gu, X.; Cai, H. Predicting the future of artificial intelligence and its educational impact: A thought experiment based on social science fiction. Educ. Res. 2021, 42, 137–147. [Google Scholar]
- Essel, H.B.; Vlachopoulos, D.; Essuman, A.B.; Amankwa, J.O. ChatGPT effects on cognitive skills of undergraduate students: Receiving instant responses from AI-based conversational large language models (LLMs). Comput. Educ. Artif. Intell. 2024, 6, 100198. [Google Scholar] [CrossRef]
- Garcia, M.B.; Revano, T.F.; Maaliw, R.R.; Lagrazon, P.G.G.; Valderama, A.M.C.; Happonen, A.; Qureshi, B.; Yilmaz, R. Exploring student preference between AI-powered ChatGPT and human-curated Stack Overflow in resolving programming problems and queries. In Proceedings of the 2023 IEEE 15th International Conference on Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment, and Management (HNICEM), Coron, Philippines, 19–23 November 2023; pp. 1–6. [Google Scholar]
- Han, J.; Liu, G.; Liu, X.; Yang, Y.; Quan, W.; Chen, Y. Continue using or gathering dust? A mixed method research on the factors influencing the continuous use intention for an AI-powered adaptive learning system for rural middle school students. Heliyon 2024, 10, e33251. [Google Scholar] [CrossRef]
- Al-Mamary, Y.H.S. Why do students adopt and use learning management systems?: Insights from Saudi Arabia. Int. J. Inf. Manag. Data Insights 2022, 2, 100088. [Google Scholar] [CrossRef]
- Sánchez-Prieto, J.C.; Cruz-Benito, J.; Therón Sánchez, R.; García-Peñalvo, F.J. Assessed by machines: Development of a TAM-based tool to measure AI-based assessment acceptance among students. Int. J. Interact. Multimed. Artif. Intell. 2020, 6, 80. [Google Scholar] [CrossRef]
- Helmiatin; Hidayat, A.; Kahar, M.R. Investigating the adoption of AI in higher education: A study of public universities in Indonesia. Cogent Educ. 2024, 11, 2380175. [Google Scholar] [CrossRef]
- Goh, T.T.; Dai, X.; Yang, Y. Benchmarking ChatGPT for prototyping theories: Experimental studies using the Technology Acceptance Model. BenchCouncil Trans. Benchmarks Stand. Eval. 2023, 3, 100153. [Google Scholar]
- Bouteraa, M.; Bin-Nashwan, S.A.; Al-Daihani, M.; Dirie, K.A.; Benlahcene, A.; Sadallah, M.; Zaki, H.O.; Lada, S.; Ansar, R.; Fook, L.M.; et al. Understanding the diffusion of AI-generative (ChatGPT) in higher education: Does students’ integrity matter? Comput. Hum. Behav. Rep. 2024, 14, 100402. [Google Scholar]
- Kleine, A.K.; Schaffernak, I.; Lermer, E. Exploring predictors of AI chatbot usage intensity among students: Within- and between-person relationships based on the Technology Acceptance Model. Comput. Hum. Behav. Artif. Hum. 2025, 3, 100113. [Google Scholar]
- Moon, J.; Kim, S.B. Analyzing elementary teachers’ perception of ChatGPT. In Proceedings of the 2023 5th International Workshop on Artificial Intelligence and Education (WAIE), Tokyo, Japan, 28–30 September 2023; pp. 43–47. [Google Scholar]
- Sitanaya, I.H.; Luhukay, D.; Syahchari, D.H. Digital education revolution: Analysis of ChatGPT’s influence on student learning in Jakarta. In Proceedings of the 2024 IEEE International Conference on Technology, Informatics, Management, Engineering and Environment (TIME-E), Bali, Indonesia, 7–9 August 2024; Volume 5, pp. 69–73. [Google Scholar]
- Almulla, M. Investigating influencing factors of learning satisfaction in AI ChatGPT for research: University students’ perspective. Heliyon 2024, 10, e32220. [Google Scholar] [CrossRef]
- Duy, N.B.P.; Phuong, T.N.M.; Chau, V.N.M.; Nhi, N.V.H.; Khuyen, V.T.M.; Giang, N.T.P. AI-assisted learning: An empirical study on student application behavior. Multidiscip. Sci. J. 2025, 7, 2025275. [Google Scholar]
- Castillo-Vergara, M.; Álvarez-Marín, A.; Villavicencio Pinto, E.; Valdez-Juárez, L.E. Technological acceptance of Industry 4.0 by students from rural areas. Electronics 2022, 11, 2109. [Google Scholar] [CrossRef]
- Madni, S.H.H.; Ali, J.; Husnain, H.A.; Masum, M.H.; Mustafa, S.; Shuja, J.; Maray, M.; Hosseini, S. Factors influencing the adoption of IoT for e-learning in higher educational institutes in developing countries. Front. Psychol. 2022, 13, 915596. [Google Scholar] [CrossRef]
- Yazdanpanahi, F.; Shahi, M.; Vossoughi, M.; Davaridolatabadi, N. Investigating the effective factors on the acceptance of teleorthodontic technology based on the Technology Acceptance Model 3 (TAM3). J. Dent. 2024, 25, 68. [Google Scholar]
- Faqih, K.M.; Jaradat, M.I.R.M. Assessing the moderating effect of gender differences and individualism–collectivism at individual-level on the adoption of mobile commerce technology: TAM3 perspective. J. Retail. Consum. Serv. 2015, 22, 37–52. [Google Scholar] [CrossRef]
- Ma, S.; Lei, L. The factors influencing teacher education students’ willingness to adopt artificial intelligence technology for information-based teaching. Asia Pac. J. Educ. 2024, 44, 94–111. [Google Scholar] [CrossRef]
- Qashou, A. Influencing factors in M-learning adoption in higher education. Educ. Inf. Technol. 2021, 26, 1755–1785. [Google Scholar]
- Li, X.; Tan, W.H.; Bin, Y.; Yang, P.; Yang, Q.; Xu, T. Analysing factors influencing undergraduates’ adoption of intelligent physical education systems using an expanded TAM. Educ. Inf. Technol. 2025, 30, 5755–5785. [Google Scholar]
- Dewi, Y.M.; Budiman, K. Analysis of user acceptance levels of the e-Rapor system users in junior high schools in Rembang District using the TAM 3 and DeLone McLean. J. Adv. Inf. Syst. Technol. 2024, 6, 256–269. [Google Scholar]
- Kavitha, K.; Joshith, V.P. Artificial intelligence powered pedagogy: Unveiling higher educators’ acceptance with extended TAM. J. Univ. Teach. Learn. Pract. 2025, 21. [Google Scholar] [CrossRef]
- Goli, M.; Sahu, A.K.; Bag, S.; Dhamija, P. Users’ acceptance of artificial intelligence-based chatbots: An empirical study. Int. J. Technol. Hum. Interact. (IJTHI) 2023, 19, 1–18. [Google Scholar]
- Handoko, B.L.; Thomas, G.N.; Indriaty, L. Adoption and utilization of artificial intelligence to enhance student learning satisfaction. In Proceedings of the 2024 International Conference on ICT for Smart Society (ICISS), Bandung, Indonesia, 4–5 September 2024; pp. 1–6. [Google Scholar]
- Musyaffi, A.M.; Baxtishodovich, B.S.; Afriadi, B.; Hafeez, M.; Adha, M.A.; Wibowo, S.N. New challenges of learning accounting with artificial intelligence: The role of innovation and trust in technology. Eur. J. Educ. Res. 2024, 13, 183–195. [Google Scholar] [CrossRef]
- Chiu, M.Y.; Chiu, F.Y. Using UTAUT to explore the acceptance of high school students in programming learning with robots. In Proceedings of the 2024 21st International Conference on Information Technology Based Higher Education and Training (ITHET), Paris, France, 6–8 November 2024; pp. 1–7. [Google Scholar]
- Pan, Z.; Xie, Z.; Liu, T.; Xia, T. Exploring the key factors influencing college students’ willingness to use AI coding assistant tools: An expanded technology acceptance model. Systems 2024, 12, 176. [Google Scholar] [CrossRef]
- Ho, C.L.; Liu, X.Y.; Qiu, Y.W.; Yang, S.Y. Research on innovative applications and impacts of using generative AI for user interface design in programming courses. In Proceedings of the 2024 International Conference on Information Technology, Data Science, and Optimization, Taipei, Taiwan, 22–24 May 2024; pp. 68–72. [Google Scholar]
- Liu, J.; Li, S. Toward artificial intelligence–human paired programming: A review of the educational applications and research on artificial intelligence code-generation tools. J. Educ. Comput. Res. 2024, 62, 1385–1415. [Google Scholar] [CrossRef]
- Ferri, L.; Maffei, M.; Spanò, R.; Zagaria, C. Uncovering risk professionals’ intentions to use artificial intelligence: Empirical evidence from the Italian setting. Manag. Decis. 2023. [Google Scholar] [CrossRef]
- Cippelletti, E.; Fournier, É.; Jeoffrion, C.; Landry, A. Assessing cobot’s acceptability of French workers: Proposition of a model integrating the TAM3, the ELSI and the meaning of work scales. Int. J. Hum.–Comput. Interact. 2024, 41, 2763–2775. [Google Scholar] [CrossRef]
- Abdi, A.N.M.; Omar, A.M.; Ahmed, M.H.; Ahmed, A.A. The predictors of behavioral intention to use ChatGPT for academic purposes: Evidence from higher education in Somalia. Cogent Educ. 2025, 12, 2460250. [Google Scholar] [CrossRef]
- Venkatesh, V.; Bala, H. Technology acceptance model 3 and a research agenda on interventions. Decis. Sci. 2008, 39, 273–315. [Google Scholar] [CrossRef]
- Fuller, C.M.; Simmering, M.J.; Atinc, G.; Atinc, Y.; Babin, B.J. Common methods variance detection in business research. J. Bus. Res. 2016, 69, 3192–3198. [Google Scholar] [CrossRef]
- Podsakoff, P.M.; MacKenzie, S.B.; Lee, J.-Y.; Podsakoff, N.P. Common method biases in behavioral research: A critical review of the literature and recommended remedies. J. Appl. Psychol. 2003, 88, 879–903. [Google Scholar] [CrossRef]
- Hu, L.T.; Bentler, P.M. Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Struct. Equ. Model. Multidiscip. J. 1999, 6, 1–55. [Google Scholar] [CrossRef]
- Joreskog, K.; Sorbom, D. Structural Equation Modelling: Guidelines for Determining Model Fit; University Press of America: Lanham, MD, USA, 1993. [Google Scholar]
- Barrett, P. Structural equation modelling: Adjudging model fit. Personal. Individ. Differ. 2007, 42, 815–824. [Google Scholar] [CrossRef]
- Fornell, C.; Larcker, D.F. Evaluating structural equation models with unobservable variables and measurement error. J. Mark. Res. 1981, 18, 39–50. [Google Scholar] [CrossRef]
- Hair, J.F.B.; William, C.; Babin, B.J.; Anderson, R.E. Multivariate Data Analysis, 7th ed.; Pearson: London, UK, 2009. [Google Scholar]
- Hair, J.F.; Gabriel, M.; Patel, V. AMOS covariance-based structural equation modeling (CB-SEM): Guidelines on its application as a marketing research tool. Braz. J. Mark. 2014, 13. [Google Scholar] [CrossRef]
- Henseler, J.; Ringle, C.M.; Sarstedt, M. A new criterion for assessing discriminant validity in variance-based structural equation modeling. J. Acad. Mark. Sci. 2015, 43, 115–135. [Google Scholar] [CrossRef]
- Cheung, G.W.; Cooper-Thomas, H.D.; Lau, R.S.; Wang, L.C. Reporting reliability, convergent and discriminant validity with structural equation modeling: A review and best-practice recommendations. Asia Pac. J. Manag. 2024, 41, 745–783. [Google Scholar] [CrossRef]
First Author | Objective | Theory |
---|---|---|
Han et al. [20] | Analyse the factors that influence students’ use of AI-based learning systems | TAM3 |
Goh et al. [24] | Explore the leveraging of the ChatGPT paradigm in conceptual research | TAM |
Bouteraa et al. [25] | Determine the role of student integrity in adopting ChatGPT | SCT, UTA, UTAUT |
Al-Mamary et al. [21] | Examine students’ intention to adopt LMS | TAM |
Kleine et al. [26] | Identify factors associated with AI chatbot usage among university students | TAM, TAM3 |
Duy et al. [30] | Determine factors influencing students’ use of AI | TAM, TPB |
Castillo et al. [31] | Identify factors capturing students’ acceptance of technologies | TAM |
Madni et al. [32] | Review factors influencing technology adoption in higher education | Adoption Model |
Sánchez et al. [22] | Explore students’ perceptions of ML-based assessment | TAM |
Helmiatin et al. [23] | Investigate AI adoption in education policy | PLS-SEM |
Moon et al. [27] | Assess teachers’ perceptions of ChatGPT | TAM3 |
Yazdanpanah et al. [33] | Examine orthodontists’ acceptance of teleorthodontics | TAM3 |
Faqih et al. [34] | Investigate mobile commerce adoption | TAM3 |
AlGerafi et al. [10] | Evaluate intention of students to use AI-based robots | TAM3 |
Kavitha et al. [39] | Assess intention to adopt AI in higher education | TAM |
Almulla et al. [29] | Assess ChatGPT adoption among university students | TAM, PLS-SEM |
Qashou et al. [36] | Analyse adoption of mobile learning | TAM |
Li et al. [37] | Examine factors influencing adoption of AI in physical education | TAM |
Ma et al. [35] | Investigate teacher education students’ AI adoption | TAM, SEM |
Goli et al. [40] | Explore determinants of chatbot adoption | TAM2, TAM3, UTAUT2, DeLone and McLean |
Dewi et al. [38] | Analyse user acceptance of e-Report system | TAM3, DeLone and McLean |
Handoko et al. [41] | Examine AI use and learning satisfaction | UTAUT |
Musyaffi et al. [42] | Investigate accounting students’ AI adoption | TAM, IS Success Model |
Chiu et al. [43] | Compare traditional vs. robot-assisted programming instruction | UTAUT |
Pan et al. [44] | Assess adoption of AI coding assistant tools | TAM (extended) |
Ho et al. [45] | Investigate generative AI in programming instruction | TAM, User Satisfaction |
Liu et al. [46] | Review AI-assisted programming technologies | TAM, Learning Analytics |
Ferri et al. [47] | Explore AI adoption among risk managers | TAM3, UTAUT |
Kleine et al. [26] | Investigate chatbot usage intensity | TAM, TAM3, TTF, CLT, SI |
Cippelletti et al. [48] | Examine acceptance of collaborative robots | TAM3, ELSI |
Abdi et al. [49] | Predictors of ChatGPT use in academia | TAM |
Group | Pre-Test | PyChatAI Intervention | Post-Test | Description |
---|---|---|---|---|
G1 | ✔ | ✔ | ✔ | Experimental group with pre-test |
G2 | ✔ | ✘ | ✔ | Control group with pre-test |
G3 | ✘ | ✔ | ✔ | Experimental group without pre-test |
G4 | ✘ | ✘ | ✔ | Control group without pre-test |
Construct | Items | Loadings | Cronbach’s Alpha | Composite Reliability | AVE (Average Variance Extracted) |
---|---|---|---|---|---|
Perceived Usefulness | q1–q3 | 0.863–0.895 | 0.911 | 0.910 | 0.772 |
Perceived Ease of Use | q4–q6 | 0.881–0.958 | 0.927 | 0.927 | 0.810 |
Social Influence | q7–q9 | 0.888–0.959 | 0.964 | 0.964 | 0.900 |
Facilitating Conditions | q10–q12 | 0.880–0.917 | 0.943 | 0.945 | 0.850 |
Hedonic Motivation | q13–q15 | 0.935–0.939 | 0.956 | 0.956 | 0.879 |
Trust | q16–q18 | 0.948–0.966 | 0.948 | 0.948 | 0.859 |
Perceived Value | q19–q21 | 0.926–0.974 | 0.938 | 0.939 | 0.836 |
Cognitive Factors | q22–q24 | 0.920–0.975 | 0.975 | 0.975 | 0.928 |
Intention to Use | q25–q27 | 0.774–0.921 | 0.891 | 0.895 | 0.740 |
Actual Usage Behaviour | q28–q30 | 0.853–0.872 | 0.896 | 0.896 | 0.742 |
Hypothesis | Path | Unstandardised Coefficients | p Value | Results Summary |
---|---|---|---|---|
H1 | SI → PEOU | −0.064 | 0.137 | Not Supported |
H2 | PV → PEOU | −0.032 | 0.568 | Not Supported |
H3 | CF → PEOU | −0.026 | 0.521 | Not Supported |
H4 | FC → PEOU | 0.225 | <0.001 | Supported |
H5 | HM → PEOU | 0.142 | 0.004 | Supported |
H6 | TR → PEOU | 0.354 | <0.001 | Supported |
H7 | PEOU → PU | 0.467 | <0.001 | Supported |
H8 | SI → PU | 0.369 | <0.001 | Supported |
H9 | PV → PU | 0.165 | 0.004 | Supported |
H10 | PU → IU | 0.446 | <0.001 | Supported |
H11 | SI → IU | −0.087 | 0.106 | Not Supported |
H12 | PEOU → IU | 0.243 | 0.005 | Supported |
H13 | IU → AUB | 0.406 | <0.001 | Supported |
Share and Cite
Alanazi, M.; Li, A.; Samra, H.; Soh, B. Examining the Influence of AI on Python Programming Education: An Empirical Study and Analysis of Student Acceptance Through TAM3. Computers 2025, 14, 411. https://doi.org/10.3390/computers14100411