1. Introduction
The rapid advancements in artificial intelligence (AI) have significantly transformed various domains, including education [1]. As educational institutions strive to enhance learning outcomes, the integration of AI tools offers promising avenues for personalized instruction, increased student engagement, and improved educational efficiency.
ChatGPT 4.0, a conversational AI model developed by OpenAI, is increasingly being integrated into educational settings to support both teaching and learning. ChatGPT can assist students by answering questions, providing explanations, and offering tutoring on a wide range of subjects. Its ability to process and generate human-like text enables it to break down complex ideas into simpler terms, making it a valuable resource for learning and comprehension.
Since ChatGPT’s launch in November 2022, its user base has grown rapidly. Monthly views of the ChatGPT website surged from 152.7 million in November 2022 to a peak of 1900 million in May 2023. In the months that followed, the figure stabilized above 1500 million, indicating the tool’s significant impact across various industries and in everyday life [2].
ChatGPT’s adoption rates are notably higher among younger age groups, particularly those aged 18 to 34, who use the tool at almost twice the rate of older age groups. Specifically, 13.5% of Gen Z and millennials use ChatGPT, compared to 7.9% of Gen X, 7.2% of baby boomers, and 5.3% of the silent generation [3]. This discrepancy might be due to lower awareness among older age groups and the quicker embrace of AI tools by students compared to their teachers.
In the education sector, ChatGPT dominates with a 70% usage rate. Both educators and students use ChatGPT for various purposes: for instance, 33% use AI for research, 18% to break down complex ideas, and 15% to learn new skills, highlighting its versatility in educational contexts [4].
College students report using ChatGPT and other AI technologies at a rate of 43%. Of those who have used AI, 50% claim to have utilized these tools for assignments or exams, and roughly one in five college students (22% of all survey respondents) use AI to complete their tasks [5]. The use of ChatGPT among students for academic purposes is substantial: 89% report using it for homework, 53% for essays, and 48% for take-home exams. These figures demonstrate the profound impact of AI tools on students’ academic work [3].
This research is motivated by the growing adoption of popular AI tools such as ChatGPT, Quillbot, Grammarly, and Perplexity among students at Multimedia University. These tools are designed to facilitate various academic tasks, from summarizing lengthy articles to generating interactive presentations. Despite the benefits, the integration of AI in education raises concerns about its impact on student motivation and engagement. The use of AI tools by students for assignments may affect their interest and commitment to learning. Traditional teaching methods often employ a one-size-fits-all approach, failing to address individual learning styles and needs. While AI can offer personalized and adaptive learning experiences, there are concerns about its impact on critical thinking and problem-solving skills. Additionally, the use of AI raises security and privacy issues, as personal data may be collected and analyzed.
By integrating the Technology Acceptance Model (TAM) and the General Attitudes towards Artificial Intelligence Scale (GAAIS), this research aims to evaluate how prevalent AI tools are in students’ academic lives, to understand students’ perceptions of AI in academia and what influences their intention to use AI tools in education, and to measure the actual use of these tools. The determinants include perceived usefulness, perceived ease of use, security and privacy, and both positive and negative attitudes towards AI tools. Through a structured survey administered to students, the research analyzes their experiences and opinions, providing insights into the effectiveness and challenges associated with AI integration in academic settings.
The findings of this study are expected to contribute to the broader discourse on AI in education, offering recommendations for educators and policymakers on how to integrate AI tools responsibly and ethically. Ultimately, this research seeks to ensure that AI tools are leveraged to enhance learning outcomes while addressing concerns related to privacy, security, and the development of critical thinking skills among students.
2. Related Works
The integration of artificial intelligence (AI) into educational settings has significantly transformed the landscape of teaching and learning. AI tools such as adaptive learning systems, intelligent tutoring systems, and automated grading software have become increasingly prevalent in academic environments. These tools offer personalized learning experiences, provide instant feedback, and assist educators in managing administrative tasks, thereby enhancing overall educational efficiency and effectiveness. As AI technology continues to evolve, it is crucial to understand its impact on students’ academic performance in order to maximize its potential in education.
The Technology Acceptance Model (TAM), as developed and extended by Venkatesh et al. [6,7], provides a theoretical framework for understanding users’ acceptance and use of technology. TAM posits that perceived usefulness and perceived ease of use are the primary determinants of technology acceptance, influencing users’ attitudes, intentions, and actual usage. This model has been widely applied in various contexts, including education, to study the adoption of different technological innovations.
In parallel, the General Attitudes towards Artificial Intelligence Scale (GAAIS) offers insights into users’ overall attitudes towards AI, capturing both positive and negative perceptions [8]. Given the increasing reliance on AI tools in academic settings, examining students’ attitudes towards these tools is essential for identifying potential barriers to adoption and for developing strategies to mitigate concerns related to security and privacy.
Several studies have explored the role of AI in enhancing academic performance. For instance, one study examined the factors influencing the acceptance of AI-based educational applications among university students, highlighting the significance of perceived usefulness and ease of use in predicting behavioral intentions [9], while another investigated the effects of AI-powered learning environments on students’ academic performance, finding that these tools can significantly enhance learning outcomes [10].
The application of TAM in educational technology research is well documented. A prior meta-analysis of TAM studies affirmed the robustness of perceived usefulness and ease of use as predictors of technology acceptance [11]; its findings underscore the relevance of TAM for understanding students’ acceptance of AI tools in academic settings. Some studies have expanded TAM by incorporating additional constructs such as security and privacy concerns [12], reflecting the evolving landscape of digital learning environments [13].
More recent research has investigated the impact of AI tools such as ChatGPT on student performance. For example, studies have shown significant improvements in student performance when AI tools were integrated into instruction, although they also emphasize the importance of critical thinking and information verification [14]. The GAAIS has been used to understand students’ overall attitudes towards AI, finding a generally positive reception but also noting concerns about AI’s potential drawbacks.
Research on the relationship between AI tool use and students’ academic performance suggests that learning motivation plays a crucial mediating role, highlighting the importance of fostering motivation alongside the integration of AI tools in education [15]. In addition, work on AI tools in higher education has weighed both their potential benefits and academic integrity concerns, leading to arguments for a balanced approach to integration that emphasizes innovative assessment design and the inclusion of student voices in discussions about AI in education [16].
On the other hand, combining AI tools with problem-based learning (PBL) can enhance higher-order thinking skills. AI tools are able to simulate real-world problems to generate dynamic case studies [17], offer personalized advice [18], and automate feedback [19]. This combination must be handled with caution, as over-reliance would undermine the actual goals of PBL and, as a result, students may prioritize algorithmic answers over deep inquiry [20]. Educators play an important role in mitigating this problem and should emphasize learning processes over AI-generated outcomes. If used appropriately, AI tools can transform PBL from static scenarios into adaptive learning experiences while preserving critical thinking [21].
The transformative impact of AI on educational practices has been acknowledged, while concerns have also been raised about its effects on analytical skills, motivation, quality of learning, and academic integrity [22]. Hence, the ongoing integration of AI in education requires careful consideration of these factors to ensure that the benefits are maximized while potential drawbacks are mitigated.
3. Research Method
3.1. Research Design
This research employs a quantitative research design, focusing on the analysis of data gathered through structured surveys to understand the impact of AI tools on enhancing student academic performance. The survey method was chosen due to its efficiency in collecting data from a large number of respondents, ensuring a comprehensive analysis of students’ opinions and experiences related to AI tools in an academic context.
The survey was administered using Google Forms, a widely used and accessible platform that facilitates easy distribution and collection of responses. The questionnaire was designed based on two established frameworks: the Technology Acceptance Model (TAM) and the General Attitudes towards Artificial Intelligence Scale (GAAIS). Integrating the two helps capture the various dimensions of students’ interaction with AI tools, including perceived usefulness, perceived ease of use, security and privacy, and positive and negative attitudes towards AI in academia, as well as how these factors shape behavioral intention and the actual use of AI tools.
The Technology Acceptance Model (TAM) was employed to assess students’ acceptance and use of AI tools. TAM includes several constructs, each measured by multiple items on a Likert scale ranging from one (strongly disagree) to five (strongly agree). The key constructs measured in this study were perceived usefulness, perceived ease of use, behavioral intention to use, actual use, and security and privacy.
Perceived usefulness measures the degree to which students believe that using AI tools enhances their academic performance. Sample items for this construct include statements such as “I think using AI tools helps me perform better in academic settings” and “I think using AI tools enhances the coursework quality”. Perceived ease of use assesses how effortless students perceive the use of AI tools to be, with sample items like “I think AI tools are easy to use” and “I can use AI tools effectively without assistance from others”. Behavioral intention to use measures students’ intention to use AI tools in the future. Sample items for this construct include “I intend to use AI tools frequently in my studies” and “I anticipate using AI tools to assist me in future coursework”. Actual use captures the frequency and extent of AI tool usage by students, with items such as “I regularly use AI tools to assist with my academic tasks” and “I rely on AI tools for completing complex academic projects”. Lastly, security and privacy assess students’ concerns regarding the security and privacy of AI tools. Sample items include “I feel that my personal information is secure when using AI tools” and “I believe that AI tools protect the privacy of my information”.
The General Attitudes towards Artificial Intelligence Scale (GAAIS) was integrated with TAM to gauge students’ overall attitudes towards AI tools in an academic setting. The GAAIS consists of items designed to capture both positive and negative attitudes, measured on a Likert scale ranging from one (strongly disagree) to five (strongly agree).
Positive attitudes towards AI in academia capture students’ favorable views of and positive experiences with AI tools. Sample items for this construct include “There are many beneficial applications of artificial intelligence tools” and “The use of artificial intelligence is exciting and leads to better grades in academics”. Negative attitudes towards AI in academia capture students’ unfavorable views, concerns, and perceived disadvantages related to the application of AI in academia. Sample items for this construct include “I think that artificial intelligence is dangerous in terms of stealing people’s privacy and spying on people” and “I think that over-reliance on artificial intelligence can affect students’ critical thinking skills.”
3.2. Participants
The participants were 202 students from Multimedia University, Malaysia. The inclusion criteria required participants to be currently enrolled students, regardless of their faculty, year of study, or prior experience with AI tools. This diverse participant pool helps ensure that the findings reflect the broader student body, capturing a wide range of perspectives and experiences. The demographic data collected from the participants included gender (male, female), age (18 to 20, 21 to 22, 23 or older), education level (foundation, diploma, degree), faculty (business and accounting, engineering, law, information technology and computer science), year of study (first year, second year, third year, fourth year), AI tools experience (yes, no), and types of AI tools used (ChatGPT, Quillbot, Grammarly, Perplexity, others specified by respondents).
3.3. Data Collection
Data were collected through a structured questionnaire that included sections on the demographic profile, the Technology Acceptance Model (TAM), and the General Attitudes towards Artificial Intelligence Scale (GAAIS). The demographic profile section captured basic information to contextualize the responses. The TAM section assessed perceived usefulness, perceived ease of use, behavioral intention to use, actual use or adoption, and security and privacy concerns. The GAAIS section gauged both positive and negative attitudes towards AI tools in an academic setting. Each section of the questionnaire was designed to capture specific aspects of students’ interactions with AI tools, ensuring a comprehensive understanding of their experiences and perceptions. The use of Likert scales (ranging from one to five) allowed for the quantification of subjective responses, facilitating statistical analysis.
3.4. Data Analysis
The collected data were analyzed using various statistical techniques to identify patterns and relationships between variables. Descriptive statistics provided an overview of the demographic distribution and general trends in AI tool usage.
To ensure the reliability and validity of the data, several tests were conducted. Reliability was assessed using Cronbach’s alpha, which measures internal consistency; a Cronbach’s alpha value above 0.5 was considered acceptable in this study, suggesting that the items of each construct consistently measure the same underlying idea. This analysis was applied to each construct. Validity was evaluated through both content and construct validity. Content validity was ensured by carefully designing the questionnaire based on established frameworks and consulting with experts in the field to confirm that all relevant aspects were covered. Construct validity was assessed using factor analysis, which examined the relationships between the questionnaire items and the underlying constructs they were intended to measure. Factor loadings greater than 0.5, composite reliability (CR) values above 0.7, and average variance extracted (AVE) values above 0.5 were considered acceptable, indicating a strong association between each item and its respective factor.
Inferential statistics, including regression analysis, correlation analysis, and partial least squares structural equation modeling (PLS-SEM), were used to test the hypotheses outlined in the conceptual model. These analyses helped in understanding the factors influencing students’ acceptance and actual use of AI tools, as well as their overall attitudes towards these technologies.
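For reference, the reliability and convergent validity indices cited above follow their standard definitions, where k is the number of items in a construct, σi² is the variance of item i, σT² is the variance of the summed scale score, and λi is the standardized loading of item i (this notation is ours and is not part of the survey instrument):

\[
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_i^{2}}{\sigma_T^{2}}\right), \qquad
\mathrm{CR} = \frac{\left(\sum_{i=1}^{k}\lambda_i\right)^{2}}{\left(\sum_{i=1}^{k}\lambda_i\right)^{2} + \sum_{i=1}^{k}\left(1-\lambda_i^{2}\right)}, \qquad
\mathrm{AVE} = \frac{\sum_{i=1}^{k}\lambda_i^{2}}{k}.
\]

Under these definitions, the CR > 0.7, AVE > 0.5, and loading > 0.5 cut-offs correspond to widely used rules of thumb for internal consistency and convergent validity.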
5. Pilot Study
A preliminary analysis was carried out to provide insights into the feasibility of the study design and to inform the sample size estimation, ultimately contributing to more robust and credible research findings. A total of 62 responses were collected, and the respondents’ demographic data were analyzed using descriptive statistics. Cronbach’s alpha and factor loadings were then computed for each construct to test its reliability.
5.1. Demographics
The pilot study involved a sample of 62 students from Multimedia University. The demographic data collected included gender, age, education level, faculty, year of study, experience using AI tools, and types of AI tools used. Male respondents (79%) outnumbered female respondents (21%) by nearly four to one. The majority of respondents (72.60%) were in the 21–22 age range, while the smallest age group, 18–20, comprised only 8.10% of participants. Regarding education level, 77.40% of respondents were degree students, 19.40% were diploma students, and only 3.20% were foundation students. The students’ fields of study ranged from Business and Accounting to Law, with Information Technology and Computer Science being the most represented field at 66.10%. All respondents (100%) had experience using AI tools in their academic work, with ChatGPT being the most popular, used by 100% of respondents, followed by Quillbot (87.10%), Grammarly (66.10%), and Perplexity (27.40%), the least used of the four. Additionally, 3.20% of respondents reported using AI tools beyond the four mentioned.
5.2. Perceived Usefulness
Perceived usefulness assesses the degree to which students believed that using ChatGPT would enhance their academic performance. Participants responded on a five-point Likert scale ranging from one (strongly disagree) to five (strongly agree). The internal consistency of the scale, as measured by Cronbach’s alpha, was 0.844, indicating good reliability. The factor loadings for the items were all above 0.7, indicating a strong relationship between the items and the underlying construct.
5.3. Perceived Ease of Use
Perceived ease of use assesses how effortless participants found using ChatGPT. Responses were recorded on a five-point Likert scale ranging from one (strongly disagree) to five (strongly agree). The scale demonstrated high reliability with a Cronbach’s alpha of 0.868. The factor loadings for the items are all above 0.7, demonstrating that each item is a good indicator of the construct.
5.4. Behavioral Intention to Use
Behavioral intention to use captures students’ intentions to use ChatGPT in their academic activities in the near future. Responses were collected on a five-point Likert scale from one (strongly disagree) to five (strongly agree). The reliability of this scale was confirmed with a Cronbach’s alpha of 0.861. The factor loadings for the items are all above 0.7, confirming that each item effectively captures the construct.
5.5. Actual Use
The actual use or adoption measure provided insight into the extent to which AI tools were integrated into students’ study routines. Responses were collected on a five-point Likert scale from one (strongly disagree) to five (strongly agree). The reliability of this scale was confirmed with a Cronbach’s alpha of 0.825. The factor loadings for the items range from 0.689 to 0.832, with only one item below 0.7, demonstrating that each item is still a solid indicator of the construct.
5.6. Security and Privacy
Security and privacy evaluate students’ perceptions of the security and privacy risks associated with using AI tools. Participants responded on a five-point Likert scale ranging from one (strongly disagree) to five (strongly agree). The scale had a Cronbach’s alpha of 0.944, indicating strong internal consistency. The factor loadings for the items are robust, all above 0.8, demonstrating that each item is a strong indicator of the construct.
5.7. Positive Attitudes Towards AI in Academia
Positive attitudes towards AI in academia assess the extent to which students viewed AI tools as beneficial for their academic experience. Responses were collected on a five-point Likert scale from one (strongly disagree) to five (strongly agree). The scale demonstrated good reliability with a Cronbach’s alpha of 0.817. The factor loadings for the items are all above 0.7, confirming that each item effectively captures the construct.
5.8. Negative Attitudes Towards AI in Academia
Negative attitudes towards AI in academia evaluate the concerns and apprehensions students had about the use of AI tools in their academic activities. Participants responded on a five-point Likert scale ranging from one (strongly disagree) to five (strongly agree). The internal consistency of this scale was confirmed by a Cronbach’s alpha of 0.852. The factor loadings for the items are all above 0.7, demonstrating that each item is a reliable representation of the underlying construct.
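As an illustration of how the Cronbach’s alpha values reported in this pilot study can be computed, the following minimal Python sketch calculates alpha for a single construct. The item names (PU1 to PU4) and the randomly generated responses are hypothetical placeholders, not the actual pilot data.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for one construct (rows = respondents, columns = Likert items)."""
    k = items.shape[1]                               # number of items in the construct
    item_variances = items.var(axis=0, ddof=1)       # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
    return float((k / (k - 1)) * (1 - item_variances.sum() / total_variance))

# Hypothetical pilot responses: 62 respondents, four perceived-usefulness items rated 1-5.
rng = np.random.default_rng(7)
pilot = pd.DataFrame(rng.integers(1, 6, size=(62, 4)), columns=["PU1", "PU2", "PU3", "PU4"])
print(f"Cronbach's alpha (perceived usefulness): {cronbach_alpha(pilot):.3f}")
```

The same function can be applied to each construct in turn to reproduce the per-construct reliability checks described above.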
7. Discussion
The goal of this research is to ascertain the prevalence of AI tool application among students in their academic activities, to understand students’ perceptions of AI in academia and what influences their intention to use AI tools in education, and to measure the actual use of these tools. The conceptual model consisted of seven constructs: perceived usefulness, perceived ease of use, security and privacy, positive attitudes towards AI in academia, negative attitudes towards AI in academia, behavioral intention to use AI, and actual use or adoption of AI tools. Six hypotheses were proposed; five were supported and one was rejected.
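In structural form, the hypothesized relationships can be summarized by two equations (our shorthand notation for H1 to H6; the study estimates these paths with regression analysis and PLS-SEM):

\[
\mathrm{BI} = \beta_1\,\mathrm{PU} + \beta_2\,\mathrm{PEOU} + \beta_3\,\mathrm{SP} + \beta_4\,\mathrm{PA} + \beta_5\,\mathrm{NA} + \varepsilon_1 \quad (\text{H1--H5}), \qquad
\mathrm{AU} = \beta_6\,\mathrm{BI} + \varepsilon_2 \quad (\text{H6}),
\]

where PU denotes perceived usefulness, PEOU perceived ease of use, SP security and privacy, PA and NA positive and negative attitudes towards AI in academia, BI behavioral intention to use AI, and AU actual use or adoption.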
First, we aim to determine the popularity of AI tool usage among students. The demographic analysis confirmed that all 202 respondents had experience using AI tools, indicating a 100% usage rate. The most commonly used AI tool was ChatGPT. This widespread adoption suggests that AI tools have become integral to students’ academic routines.
Second, we seek to evaluate the impact of AI tools on students’ academic lives amid AI security and privacy concerns. The results from regression analysis, correlation analysis, and PLS-SEM confirm that all hypotheses are supported except H5. In the integrated TAM and GAAIS model, the constructs are perceived usefulness, perceived ease of use, behavioral intention to use, security and privacy, positive attitudes towards AI, and negative attitudes towards AI, providing a comprehensive understanding of students’ perceptions and intentions.
The hypothesis that perceived usefulness positively impacts behavioral intention to use AI tools (H1) is strongly supported by the data. The regression analysis (β = 0.395, t = 6.88, p < 0.001), Pearson correlation (r = 0.66, p < 0.001), and PLS-SEM (path coefficient = 0.381, R² = 0.658, f² = 0.260, t = 6.795, p < 0.001) consistently show significant positive relationships. This indicates that students who see AI tools as beneficial have a greater propensity to use them. Past studies have consistently shown that higher perceived usefulness leads to a greater intention to use technology, because users are more likely to adopt tools they find beneficial for their tasks [6,25]. A previous study also confirms that, in the context of e-learning, perceived usefulness significantly impacts students’ intention to engage with digital learning tools [24]. The strong support for H1 suggests that the perceived academic benefits of AI tools are a critical motivator for students’ intention to use them.
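As an illustration of how the reported β, t, and r statistics could be obtained from composite construct scores, the sketch below runs an ordinary least squares regression and a Pearson correlation with statsmodels and SciPy. The DataFrame and its column names (PU, PEOU, SP, PA, NA, BI) are hypothetical placeholders for the mean score of each construct, not the study’s actual dataset, and whether the original analysis used standardized scores is an assumption we do not make here.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import pearsonr

# Hypothetical composite scores (mean of each construct's Likert items) for 202 respondents.
rng = np.random.default_rng(42)
df = pd.DataFrame(
    rng.uniform(1, 5, size=(202, 6)),
    columns=["PU", "PEOU", "SP", "PA", "NA", "BI"],
)

# Multiple regression of behavioral intention on the five predictors (cf. H1-H5).
X = sm.add_constant(df[["PU", "PEOU", "SP", "PA", "NA"]])
model = sm.OLS(df["BI"], X).fit()
print(model.summary())  # reports coefficients (beta), t statistics, and p-values

# Pearson correlation between perceived usefulness and behavioral intention (cf. H1).
r, p = pearsonr(df["PU"], df["BI"])
print(f"r = {r:.2f}, p = {p:.3f}")
```

A PLS-SEM estimation of the same paths would additionally yield the path coefficients, R², and f² effect sizes reported in this section, typically via dedicated SEM software.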
The hypothesis that perceived ease of use favorably influences behavioral intention to use AI tools (H2) is also supported. The regression results (β = 0.150, t = 2.548, p = 0.012), correlation result (r = 0.56, p < 0.001), and PLS-SEM (path coefficient = 0.166, R² = 0.658, f² = 0.048, t = 2.833, p = 0.002) indicate significant positive effects. This suggests that students are more inclined to use AI tools if they find them simple to use [26]. According to one study, students’ perceived ease of use and satisfaction both increase when they believe the tools are user-friendly, making them more likely to intend to use the tools in the future [27]. The positive findings for H2 imply that the simplicity and user-friendliness of AI tools are crucial factors in fostering students’ willingness to adopt these technologies.
The hypothesis that security and privacy positively impact behavioral intention to use AI tools (H3) is supported by the regression analysis (β = 0.134, t = 2.645, p = 0.009), correlation analysis (r = 0.30, p < 0.001), and PLS-SEM (path coefficient = 0.131, R² = 0.658, f² = 0.040, t = 2.266, p = 0.012). These results underscore that perceptions of security and privacy are significant predictors of the intention to use AI tools. The results are also consistent with previous studies, which highlighted the importance of privacy in the adoption of new technologies [28] and suggested that privacy concerns significantly affect users’ willingness to adopt them [29]. The acceptance of H3 indicates that students value the protection of their personal information, which significantly influences their intention to use AI tools.
The hypothesis that positive attitudes towards AI positively influence behavioral intention (H4) is also supported by the data. The regression analysis (β = 0.334, t = 6.077, p < 0.001), correlation result (r = 0.62, p < 0.001), and PLS-SEM (path coefficient = 0.369, R² = 0.658, f² = 0.264, t = 5.150, p < 0.001) all show significant positive relationships. This suggests that positive attitudes towards AI enhance students’ intentions to use these tools. A previous study likewise demonstrated that positive perceptions of AI significantly influence technology adoption in the educational field [30]. The strong support for H4 demonstrates that students who view AI in a positive light are more motivated to incorporate these tools into their academic routines.
Contrary to expectations, the hypothesis that negative attitudes towards AI negatively impact behavioral intention (H5) is rejected. The regression results (β = −0.007, t = −0.135, p = 0.892) show a negative but negligible coefficient. The correlation analysis (r = 0.08, p = 0.124) shows essentially no linear relationship, with an r value very close to 0, and the PLS-SEM results (path coefficient = 0.018, R² = 0.658, f² = 0.001, t = 0.257, p = 0.399) show a positive but negligible path coefficient. Although the three methods differ in the sign of the estimated relationship, the hypothesis is rejected because all three yield p-values greater than 0.05, meaning the relationship is not statistically significant. This rejection could be due to the specific context of academic use, where the benefits of AI might outweigh negative attitudes. Some studies have found that, in the educational field, perceived usefulness and ease of use have a stronger influence than negative attitudes [31], and that even when negative attitudes were present, they did not substantially reduce the overall desire to use technology [32]; pragmatic needs can overshadow negative beliefs [23]. The lack of support for H5 suggests that negative perceptions of AI do not significantly deter students from intending to use these tools in their studies.
The hypothesis that behavioral intention to use AI tools positively influences actual use or adoption (H6) is strongly supported. The regression analysis (β = 0.739, t = 15.507, p < 0.001), correlation analysis (r = 0.74, p < 0.001), and PLS-SEM (path coefficient = 0.802, R² = 0.643, f² = 1.802, t = 19.023, p < 0.001) all show significant positive relationships. The results suggest that students’ intention strongly predicts their actual use of AI tools in academic activities. A previous study on mobile payment apps found that users with a strong intention to use the app tend to use it more frequently, highlighting the direct relationship between behavioral intention and actual usage [28]. The strong support for H6 demonstrates that students with a strong desire to use AI tools are more likely to adopt them in their daily academic life [33,34].
To conclude, the analyses yield consistent results across all three methods, providing robust support for the hypotheses related to perceived usefulness, perceived ease of use, security and privacy, positive attitudes towards AI, and the relationship between behavioral intention and actual use. The only hypothesis not supported is that negative attitudes towards AI negatively impact behavioral intention to use AI, highlighting the specific context of academic use, where the perceived benefits of AI tools may overshadow negative attitudes.
8. Implications
The results offer valuable insights for future research. The issues with reliability and validity in measuring constructs such as perceived ease of use, actual use or adoption, and positive attitudes towards AI in academia indicate that better survey instruments are needed. Future studies should focus on creating more accurate and reliable instruments to capture these important aspects. Additionally, expanding the research to other universities and cultural contexts can help determine whether these findings hold true elsewhere, enriching our understanding of how students perceive and use AI tools in education.
Educators could leverage the findings of this research to guide students’ use of AI tools to improve their academic performance. They could integrate AI tools into the existing curriculum, aligning them with learning objectives and outcomes, since perceived usefulness and ease of use were found to be positive factors. Such curriculum integration saves learning time and reduces resistance to studying. However, security and privacy concerns over AI tools should be addressed. Policymakers could develop national guidelines to prevent the misuse of personal data and students’ information. They should also support campaigns that promote positive attitudes towards AI tools.
For educators and technology developers, this research underscores the importance of making AI tools both useful and easy to use. Improving the usability and perceived benefits of these tools can encourage more students to adopt them. Educational institutions should implement clear guidelines and robust security measures to build trust and confidence among students. On the other hand, the use of AI tools also has long-term effects on students’ critical thinking and problem-solving abilities. Specifically, AI tools reduce cognitive burdens for routine work such as calculations and grammar checks, allowing students to focus more on theoretical concepts instead of grammatical errors. In addition, AI tools could enhance students’ exposure to complex problem solving through AI-based assistants and simulations such as GitHub Copilot.
The application of AI technologies in academic settings has the potential to significantly enhance learning experiences and outcomes. However, it is crucial to balance these benefits with the potential downsides, such as the impact on students’ critical thinking and problem-solving abilities. For example, over-reliance may affect students’ creativity, as they merely obtain outcomes and ignore the process of learning. Moreover, the validity of AI outputs raises concerns, since inaccurate information can misdirect students’ learning. Society also needs to consider the moral ramifications of AI in education, including concerns about data privacy and the risk of over-reliance on technology.
Policymakers and educators should collaborate to create guidelines that ensure AI is used responsibly and effectively in education, maximizing benefits while minimizing risks. This is because the use of AI tools is inevitable, and the benefits to students’ academic performance outweigh the risks. Hence, the use of AI tools could be regulated through pedagogical learning guidelines that ensure AI enhances, rather than replaces, learning. Effective adoption of AI tools can be supported through workshops and training to ensure that students and educators are AI-literate. Comprehensive policies for AI tool applications in universities should be clearly stated.
9. Conclusions
This research explores the factors that influence students at Multimedia University, Malaysia, to adopt and use AI tools. Given the rapid integration of AI technologies in education, it is essential to understand how students perceive and intend to use these tools. This insight is crucial for making the most of AI’s potential to enhance learning experiences. The study builds on the TAM and expands it with the GAAIS to provide a thorough analysis of what drives AI adoption in academics.
In developing the conceptual model, several key variables were identified: perceived usefulness, perceived ease of use, security and privacy, and both positive and negative attitudes towards AI as independent variables. Behavioral intention to use AI was considered a mediator, while the actual use or adoption of AI tools was the dependent variable. The hypotheses were tested using various analytical methods, including regression analysis, Pearson correlation, and PLS-SEM. Before hypothesis testing, the reliability and validity of each construct were ensured through preliminary analyses such as Cronbach’s alpha and factor analysis.
The findings reveal that perceived usefulness, ease of use, security and privacy, and positive attitudes towards AI significantly boost students’ intentions to use AI tools. Interestingly, negative attitudes towards AI did not have a notable impact on the behavioral intention. Additionally, a strong intention to use AI tools is a good predictor of their actual use. These insights can greatly benefit other researchers and students by offering a clear framework for understanding what drives AI adoption in educational settings. By emphasizing the importance of usefulness, ease of use, and security, the research highlights the need for educational institutions and technology developers to focus on these areas to promote the effective integration of AI tools in academic environments.