Article

Evaluating the Impact of Artificial Intelligence Tools on Enhancing Student Academic Performance: Efficacy Amidst Security and Privacy Concerns

Faculty of Information Science and Technology, Multimedia University, Jalan Ayer Keroh Lama, Malacca 75450, Malaysia
* Author to whom correspondence should be addressed.
Big Data Cogn. Comput. 2025, 9(5), 131; https://doi.org/10.3390/bdcc9050131
Submission received: 19 March 2025 / Revised: 3 May 2025 / Accepted: 12 May 2025 / Published: 15 May 2025
(This article belongs to the Special Issue Security, Privacy, and Trust in Artificial Intelligence Applications)

Abstract

The rapid advancements in artificial intelligence (AI) have significantly transformed various domains, including education, by introducing innovative tools that reshape teaching and learning processes. This research investigates the perceptions and attitudes of students towards the use of AI tools in their academic activities, focusing on constructs such as perceived usefulness, perceived ease of use, security and privacy concerns, and both positive and negative attitudes towards AI. On the basis of the Technology Acceptance Model (TAM) and the General Attitudes towards Artificial Intelligence Scale (GAAIS), this research seeks to identify the factors influencing students’ behavioral intentions and actual adoption of AI tools in educational settings. A structured survey was administered to students at Multimedia University, Malaysia, capturing their experiences and opinions on widely used AI tools such as ChatGPT, Quillbot, Grammarly, and Perplexity. Hypothesis testing was used to evaluate the statistical significance of the relationships between these constructs and the behavioral intention to use and actual use of AI tools. The findings reveal a high level of engagement with AI tools among university students, primarily driven by their perceived benefits in enhancing academic performance, improving efficiency, and facilitating personalized learning experiences. The findings also uncover significant concerns related to data security, privacy, and potential over-reliance on AI tools, which may hinder the development of critical thinking and problem-solving skills.

1. Introduction

The rapid advancements in artificial intelligence (AI) have significantly transformed various domains including education [1]. As educational institutions strive to enhance learning outcomes, the integration of AI tools offers promising avenues for personalized instruction, increased student engagement, and improved educational efficiency.
ChatGPT 4.0, a conversational AI model developed by OpenAI, is increasingly being integrated into educational settings to support both teaching and learning. ChatGPT can assist students by answering questions, providing explanations, and offering tutoring on a wide range of subjects. Its ability to process and generate human-like text enables it to break down complex ideas into simpler terms, making it a valuable resource for learning and comprehension.
Since ChatGPT’s launch in November 2022, its user base has grown exponentially. Monthly views of the ChatGPT website surged from 152.7 million in November 2022 to a peak of 1.9 billion in May 2023, and in the following months settled steadily above 1.5 billion, indicating its significant impact across various industries and everyday life [2].
ChatGPT’s adoption rates are notably higher among younger age groups, particularly those aged 18 to 34, who utilize the tool at rates almost twice as high as older age groups. Specifically, 13.5% of Gen Z and millennials use ChatGPT, compared to 7.9% of Gen X, 7.2% of baby boomers, and 5.3% of the silent generation [3]. This discrepancy might be due to the older age groups’ lack of awareness and the quicker embrace of AI tools by students compared to their teachers.
In the education sector, ChatGPT dominates with a 70% usage rate. Both educators and students use ChatGPT for various purposes. For instance, 33% use AI for research, 18% to break down complex ideas, and 15% to learn new skills, highlighting its versatility in educational contexts [4].
College students report using ChatGPT and other AI technologies at a rate of 43%. Of those who have used AI, 50% claim to have utilized these tools for assignments or exams. One in five college students, or 22% of all survey respondents, use AI to complete their tasks [5]. The usage of ChatGPT among students for academic purposes is significant: a large majority of students use ChatGPT to aid their academic work, with 89% using it for homework, 53% for essays, and 48% for at-home exams. These substantial numbers demonstrate the profound impact of AI tools on students’ academic performance [3].
This research is motivated by the growing adoption of popular AI tools such as ChatGPT, Quillbot, Grammarly, and Perplexity among students at Multimedia University. These tools are designed to facilitate various academic tasks, from summarizing lengthy articles to generating interactive presentations. Despite the benefits, the integration of AI in education raises concerns about its impact on student motivation and engagement. The use of AI tools by students for assignments may affect their interest and commitment to learning. Traditional teaching methods often employ a one-size-fits-all approach, failing to address individual learning styles and needs. While AI can offer personalized and adaptive learning experiences, there are concerns about its impact on critical thinking and problem-solving skills. Additionally, the use of AI raises security and privacy issues, as personal data may be collected and analyzed.
By integrating the Technology Acceptance Model (TAM) and the General Attitudes towards Artificial Intelligence Scale (GAAIS), this research aims to evaluate the prevalence of AI tool use among students in their academic life, understand students’ perceptions of AI in academia and what influences their intention to use AI tools in education, and measure the actual use of these tools. The determinants include perceived usefulness, perceived ease of use, security and privacy, and both positive and negative attitudes towards AI tools. Through a structured survey administered to students, the research analyzes their experiences and opinions, providing insights into the effectiveness and challenges associated with AI integration in academic settings.
The findings of this study are expected to contribute to the broader discourse on AI in education, offering recommendations for educators and policymakers on how to integrate AI tools responsibly and ethically. Ultimately, this research seeks to ensure that AI tools are leveraged to enhance learning outcomes while addressing concerns related to privacy, security, and the development of critical thinking skills among students.

2. Related Works

The integration of artificial intelligence (AI) into educational settings has significantly transformed the landscape of teaching and learning. AI tools such as adaptive learning systems, intelligent tutoring systems, and automated grading software have become increasingly prevalent in academic environments. These tools offer personalized learning experiences, provide instant feedback, and assist educators in managing administrative tasks, thereby enhancing overall educational efficiency and effectiveness. As AI technology continues to evolve, it is crucial to understand its impact on students’ academic performance in order to maximize its potential in education.
The Technology Acceptance Model (TAM), introduced by Davis and later extended by Venkatesh et al. [6,7], provides a theoretical framework for understanding users’ acceptance and use of technology. TAM posits that perceived usefulness and perceived ease of use are the primary determinants of technology acceptance, influencing users’ attitudes, intentions, and actual usage. This model has been widely applied in various contexts, including education, to study the adoption of different technological innovations.
In parallel, the General Attitudes towards Artificial Intelligence Scale (GAAIS) offers insights into users’ overall attitudes towards AI, capturing both positive and negative perceptions [8]. Given the increasing reliance on AI tools in academic settings, examining students’ attitudes towards these tools is essential for identifying potential barriers to adoption and for developing strategies to mitigate concerns related to security and privacy.
Several studies have explored the role of AI in enhancing academic performance. For instance, one study examined the factors influencing the acceptance of AI-based educational applications among university students, highlighting the significance of perceived usefulness and ease of use in predicting behavioral intentions [9], while another investigated the effects of AI-powered learning environments on students’ academic performance, finding that these tools can significantly enhance learning outcomes [10].
The application of TAM in educational technology research is well documented. A meta-analysis of TAM studies affirmed the robustness of perceived usefulness and ease of use as predictors of technology acceptance [11]; these findings underscore the relevance of TAM in understanding students’ acceptance of AI tools in academic settings. Some studies have expanded TAM by incorporating additional constructs such as security and privacy concerns [12], reflecting the evolving landscape of digital learning environments [13].
More recent research investigated the impact of AI tools like ChatGPT on student performance. For example, studies have shown significant improvements in student performance when AI tools were integrated into instruction, although they also emphasize the importance of critical thinking and information verification [14]. The GAAIS has been used to understand students’ overall attitudes towards AI, finding a generally positive reception but also noting concerns about AI’s potential drawbacks.
The relationship between the use of AI tools and students’ academic performance suggests that learning motivation plays a crucial mediating role, highlighting the importance of fostering motivation alongside integrating AI tools in education [15]. In addition, research on AI tools in higher education has weighed the potential benefits against academic integrity concerns, leading to arguments for a balanced approach to integration that emphasizes innovative assessment design and the inclusion of student voices in discussions about AI in education [16].
On the other hand, the combination of AI tools and problem-based learning (PBL) can enhance higher-order thinking skills. AI tools are able to provide simulations of real-world problems to generate dynamic case studies [17], offer personalized advice [18], and automate feedback [19]. This relationship must be handled with caution, as over-reliance would undermine the actual goals of PBL and, as a result, students may prioritize algorithmic answers over deep inquiry [20]. Educators play an important role in mitigating this problem and should emphasize learning processes over AI-generated outcomes. If used appropriately, AI tools can transform PBL from static scenarios into adaptive learning experiences while preserving critical thinking [21].
The transformative impact of AI on educational practices has been acknowledged, while concerns have also been raised about its impact on analytical skills, motivation, quality of learning, and academic integrity [22]. Hence, the ongoing integration of AI in education requires careful consideration of these factors to ensure that the benefits are maximized while mitigating potential drawbacks.

3. Research Method

3.1. Research Design

This research employs a quantitative research design, focusing on the analysis of data gathered through structured surveys to understand the impact of AI tools on enhancing student academic performance. The survey method was chosen due to its efficiency in collecting data from a large number of respondents, ensuring a comprehensive analysis of students’ opinions and experiences related to AI tools in an academic context.
The survey was administered using Google Forms, a widely used and accessible platform that facilitates easy distribution and collection of responses. The questionnaire was designed based on two established frameworks: the Technology Acceptance Model (TAM) and the General Attitudes towards Artificial Intelligence Scale (GAAIS). This integration helps in understanding the various dimensions of students’ interaction with AI tools, including perceived usefulness, perceived ease of use, security and privacy, and positive and negative attitudes towards AI in academia, as they relate to behavioral intention and the actual use of AI tools.
The Technology Acceptance Model (TAM) was employed to assess students’ acceptance and use of AI tools. TAM includes several constructs, each measured by multiple items on a Likert scale ranging from one (strongly disagree) to five (strongly agree). The key constructs measured in this study were perceived usefulness, perceived ease of use, behavioral intention to use, actual use, and security and privacy.
Perceived usefulness measures the degree to which students believe that using AI tools enhances their academic performance. Sample items for this construct include statements such as “I think using AI tools helps me perform better in academic settings” and “I think using AI tools enhances the coursework quality”. Perceived ease of use assesses how effortless students perceive the use of AI tools to be, with sample items like “I think AI tools are easy to use” and “I can use AI tools effectively without assistance from others”. Behavioral intention to use measures students’ intention to use AI tools in the future. Sample items for this construct include “I intend to use AI tools frequently in my studies” and “I anticipate using AI tools to assist me in future coursework”. Actual use captures the frequency and extent of AI tool usage by students, with items such as “I regularly use AI tools to assist with my academic tasks” and “I rely on AI tools for completing complex academic projects”. Lastly, security and privacy assess students’ concerns regarding the security and privacy of AI tools. Sample items include “I feel that my personal information is secure when using AI tools” and “I believe that AI tools protect the privacy of my information”.
General Attitudes Towards Artificial Intelligence Scale (GAAIS) was integrated with TAM to gauge students’ overall attitudes towards AI tools in an academic setting. GAAIS consists of items designed to capture both positive and negative attitudes, measured on a Likert scale ranging from one (strongly disagree) to five (strongly agree).
Positive attitudes towards AI in academia capture students’ favorable views and positive experiences with AI tools. Sample items for this construct include “There are many beneficial applications of artificial intelligence tools” and “The use of artificial intelligence is exciting and leads to better grades in academics”. Negative attitudes towards AI in academia capture students’ unfavorable views, issues, and possible disadvantages related to the application of AI in academia. Sample items for this construct include “I think that artificial intelligence is dangerous in terms of stealing people’s privacy and spying on people” and “I think that over-reliance on artificial intelligence can affect students’ critical thinking skills.”

3.2. Participants

The participants were 202 students from Multimedia University, Malaysia. The inclusion criteria required participants to be currently enrolled students, regardless of their faculty, year of study, or prior experience with AI tools. This diverse participant pool ensures that the findings are representative of the broader student body, capturing a wide range of perspectives and experiences. The demographic data collected from the participants included gender (male, female), age (18 to 20, 21 to 22, 23 or older), education level (foundation, diploma, degree), faculty (business and accounting, engineering, law, information technology and computer science), year of study (first year, second year, third year, fourth year), AI tool experience (yes, no), and types of AI tools used (ChatGPT, Quillbot, Grammarly, Perplexity, others specified by respondents).

3.3. Data Collection

Data was collected through a structured questionnaire that included sections on demographic profile, Technology Acceptance Model (TAM), and General Attitudes towards Artificial Intelligence Scale (GAAIS). The demographic profile section captured basic information to contextualize the responses. The TAM section assessed perceived usefulness, the perceived ease of use, behavioral intention to use, actual use or adoption, and security and privacy concerns. The GAAIS section gauged both positive and negative attitudes towards AI tools in an academic setting. Each section of the questionnaire was designed to capture specific aspects of students’ interactions with AI tools, ensuring a comprehensive understanding of their experiences and perceptions. The use of Likert scales (ranging from one to five) allowed for the quantification of subjective responses, facilitating statistical analysis.

3.4. Data Analysis

The collected data were analyzed using various statistical techniques to identify patterns and relationships between different variables. Descriptive statistics provided an overview of the demographic distribution and general trends in AI tool usage. To ensure the reliability and validity of the data, several tests were conducted. Reliability was assessed using Cronbach’s alpha, which measures internal consistency. A Cronbach’s alpha value > 0.5 is thought to be appropriate, suggesting that the components of each construct consistently evaluate the same fundamental idea. This analysis was applied to each construct to ensure its reliability. Validity was evaluated through both content and construct validity. Content validity was ensured by carefully designing the questionnaire based on established frameworks and consulting with experts in the field to ensure that all relevant aspects were covered. Construct validity was assessed using factor analysis, which examined the relationships between the questionnaire items and the underlying constructs they were intended to measure. A high factor loading with values greater than 0.5, Composite Reliability (CR) with values larger than 0.7, and Average Variance Extracted (AVE) with values above 0.5 are considered acceptable, indicating a strong association between each item and its respective factor. Inferential statistics, including regression analysis, correlation analysis, and partial least squares structural equation modeling (PLS-SEM), were used to test the hypotheses outlined in the conceptual model. These analyses helped in understanding the factors influencing students’ acceptance and the actual use of AI tools, as well as their overall attitudes towards these technologies.
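To make the reliability and validity metrics described above concrete, the following minimal Python sketch (illustrative only, not the authors’ analysis code; the construct items, loadings, and data are hypothetical) computes Cronbach’s alpha from item responses and Composite Reliability and AVE from standardized factor loadings.

```python
# Illustrative sketch only (not the authors' analysis code); items, loadings,
# and data below are hypothetical examples.
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of the summed scale)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

def composite_reliability(loadings: np.ndarray) -> float:
    # CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)
    return loadings.sum() ** 2 / (loadings.sum() ** 2 + (1 - loadings ** 2).sum())

def average_variance_extracted(loadings: np.ndarray) -> float:
    # AVE = mean of the squared standardized loadings
    return float((loadings ** 2).mean())

# Hypothetical responses: 202 students answering five Likert items (1-5) of one construct.
# Random, uncorrelated data are used here, so alpha will be near zero; real survey
# items measuring the same construct would correlate and yield a higher alpha.
rng = np.random.default_rng(0)
items = pd.DataFrame(rng.integers(1, 6, size=(202, 5)),
                     columns=[f"PU{i}" for i in range(1, 6)])
print("Cronbach's alpha:", round(cronbach_alpha(items), 3))

# Hypothetical standardized loadings for the same five items
loadings = np.array([0.72, 0.68, 0.75, 0.70, 0.66])
print("CR:", round(composite_reliability(loadings), 3))
print("AVE:", round(average_variance_extracted(loadings), 3))
```

The thresholds quoted above (loadings and AVE above 0.5, CR above 0.7) would then be applied to the values such a script produces for each construct.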

4. Conceptual Model

4.1. Research Conceptual Model

The conceptual model was created based on the integration of TAM and GAAIS. The model outlines several key relationships among the constructs that may affect the use of AI in academics, as illustrated in Figure 1.
Firstly, it is posited that perceived usefulness, or the belief that AI tools enhance academic performance, impacts behavioral intention to use AI. When people view AI as beneficial, they have a higher probability of developing a strong desire to implement it. Similarly, perceived ease of use, or the belief that AI tools are easy to use, also impacts behavioral intention to use AI. If users find AI intuitive and user friendly, they are more inclined to plan on using it. Additionally, security and privacy concerns greatly impact behavioral intention to use AI: higher confidence in the security and privacy of AI systems leads to a stronger intention to adopt AI. Furthermore, attitudes towards AI in academia, whether positive or negative, directly impact the behavioral intention to use this technology. Finally, behavioral intention to use has a direct effect on the actual use or adoption of AI. Therefore, a strong desire to implement AI tools is driven by perceptions of usefulness, ease of use, and security, along with positive attitudes, leading to the actual implementation and usage of AI in academic environments.

4.2. Hypotheses Development

The research hypotheses outline the relationships between various factors impacting the application of AI tools in enhancing students’ academic performance. These hypotheses aim to examine the impacts, both direct and indirect, of these variables on behavioral intention to use AI and the actual use of AI. The following six hypotheses were developed:
H1: 
Perceived usefulness positively impacts behavioral intention to use AI.
Perceived usefulness favorably affects the behavioral intention to use AI in academic settings. When individuals believe that AI will improve their academic performance or productivity, they are more likely to intend to use it. This belief in the utility of AI fosters a stronger behavioral intention towards its adoption.
H2: 
Perceived ease of use positively impacts behavioral intention to use AI.
The perceived ease of use favorably affects the behavioral intention to use AI in academic settings. If individuals find AI easy and effortless to use, the likelihood that they will use it is higher. Ease of use reduces the perception of complexity and increases the likelihood of intending to adopt AI.
H3: 
Security and privacy positively impact behavioral intention to use AI.
The perceived security and privacy of AI favorably impact the behavioral intention to use AI in academic settings. It is essential for users to have faith in the security and privacy of AI. When individuals believe their information is secure and their private data are safeguarded, they are more likely to form a stronger behavioral intention to use AI.
H4: 
Positive attitudes towards AI positively impact behavioral intention to use AI.
H5: 
Negative attitudes towards AI negatively impact behavioral intention to use AI.
Positive or negative attitudes towards AI significantly influence the behavioral intention to use AI in academic settings. People who have favorable opinions about AI are more likely to intend to use it, while negative attitudes may hinder this intention. Attitudes therefore play a significant role in determining the intention to engage with AI technologies.
H6: 
Behavioral intention to use AI positively impacts Actual Use of AI.
Behavioral intention to use AI favorably affects the actual use or adoption of AI in academic settings. A strong intention to use AI leads to its actual implementation and usage. Behavioral intention serves as a direct precursor to actual adoption of AI technologies.
These hypotheses collectively investigate the pathways through which perceived usefulness, ease of use, security and privacy, and overall attitudes towards AI influence the behavioral intention to use AI, which subsequently leads to actual adoption of AI in academic environments.
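For illustration, the hypothesized pathways can be summarized as two structural equations (the notation below is introduced here and is not taken from the original model figure), where PU, PEU, SP, PA, and NA denote perceived usefulness, perceived ease of use, security and privacy, positive attitudes, and negative attitudes, BI denotes behavioral intention to use, and AU denotes actual use:

BI = β1·PU + β2·PEU + β3·SP + β4·PA + β5·NA + ε1
AU = β6·BI + ε2

H1 to H4 expect β1 through β4 to be positive, H5 expects β5 to be negative, and H6 expects β6 to be positive.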

5. Pilot Study

A preliminary analysis was carried out to provide insights into the feasibility of the study design and to inform the sample size estimation, ultimately contributing to more robust and credible research findings. A total of 62 responses were collected, and the respondents’ demographic data were analyzed using descriptive statistics. Cronbach’s alpha and factor loadings were then computed for each construct to test its reliability.

5.1. Demographics

The pilot study involved a sample of 62 students from Multimedia University. The demographic data collected included gender, age, education level, faculty, year of study, experience using AI tools, and types of AI tools used. Male respondents (79%) outnumbered female respondents (21%) by nearly four to one. The majority of respondents were in the 21–22 age range (72.60%), and the smallest age group was 18–20, comprising only 8.10% of the participants. Regarding education level, 77.40% of respondents were degree students, 19.40% were diploma students, and only 3.20% were foundation students. The students’ majors or fields of study ranged from Business and Accounting to Law, with Information Technology and Computer Science being the most represented field at 66.10%. All respondents (100%) had experience using AI tools in their academics, with ChatGPT being the most popular, used by 100% of the respondents, followed by Quillbot (87.10%) and Grammarly (66.10%), while Perplexity (27.40%) was the least used. Additionally, 3.20% of the respondents reported using AI tools beyond the four mentioned.

5.2. Perceived Usefulness

Perceived usefulness assesses the degree to which students believed that using ChatGPT would enhance their academic performance. Participants responded on a five-point Likert scale ranging from one (strongly disagree) to five (strongly agree). The internal consistency of the scale, as measured by Cronbach’s alpha, was 0.844, indicating good reliability. The factor loadings for the items were all above 0.7, indicating a strong relationship between the items and the underlying construct.

5.3. Perceived Ease of Use

The perceived ease of use assesses how effortless participants found using ChatGPT. Responses were recorded on a five-point Likert scale ranging from 1 (strongly disagree) to 5 (strongly agree). The scale demonstrated high reliability with a Cronbach’s alpha of 0.868. The factor loadings for the items are all above 0.7, demonstrating that each item is a good indicator of the construct.

5.4. Behavioral Intention to Use

Behavioral intention to use captures students’ intentions to use ChatGPT in their academic activities in the near future. Responses were collected on a five-point Likert scale from one (strongly disagree) to five (strongly agree). The reliability of this scale was confirmed with a Cronbach’s alpha of 0.861. The factor loadings for the items are all above 0.7, confirming that each item effectively captures the construct.

5.5. Actual Use

The actual use or adoption measure provided insight into the extent to which AI tools were integrated into students’ study routines. Responses were collected on a five-point Likert scale from one (strongly disagree) to five (strongly agree). The reliability of this scale was confirmed with a Cronbach’s alpha of 0.825. The factor loadings for the items range from 0.689 to 0.832, with only one item below 0.7, demonstrating that each item is still a solid indicator of the construct.

5.6. Security and Privacy

Security and privacy evaluate students’ perceptions of the security and privacy risks associated with using AI tools. Participants responded on a five-point Likert scale ranging from one (strongly disagree) to five (strongly agree). The scale had a Cronbach’s alpha of 0.944, indicating strong internal consistency. The factor loadings for the items are robust, all above 0.8, demonstrating that each item is a strong indicator of the construct.

5.7. Positive Attitudes Towards AI in Academic

Positive attitudes towards AI in academia assess the extent to which students viewed AI tools as beneficial for their academic experience. Responses were collected on a five-point Likert scale from one (strongly disagree) to five (strongly agree). The scale demonstrated good reliability with a Cronbach’s alpha of 0.817. The factor loadings for the items are all above 0.7, confirming that each item effectively captures the construct.

5.8. Negative Attitudes Towards AI in Academic

Negative attitudes towards AI in academia evaluate the concerns and apprehensions students had about the use of AI tools in their academic activities. Participants responded on a five-point Likert scale ranging from one (strongly disagree) to five (strongly agree). The internal consistency of this scale was confirmed by a Cronbach’s alpha of 0.852. The factor loadings for the items are all above 0.7, demonstrating that each item is a reliable representation of the underlying construct.

6. Results

6.1. Demographics

The demographic profile of the respondents provides essential insights, as shown in Table 1. Out of 202 respondents, 66% were male, and 34% were female, indicating a higher representation of males. The majority of respondents were aged between 21–22 years, accounting for 64.90% of the sample, while 19.80% were aged 18–20 years, and 15.30% were 23 years or older. Regarding educational levels, 55.90% were pursuing a degree, 31.70% were diploma students, and 12.40% were at the foundation level. The faculty distribution shows a significant concentration in the Information Technology and Computer Science faculty, with 61.90% of respondents, followed by 25.20% from Business and Accounting, 8.90% from Law, and 4.00% from Engineering. All respondents had experience using AI tools, with ChatGPT being the most popular (100%), followed by Quillbot (79.21%), Grammarly (53.47%), and Perplexity (16.83%).

6.2. Reliability and Validity

The reliability and validity of each construct were analyzed using several metrics, including Cronbach’s alpha (α), Average Variance Extracted (AVE), and Composite Reliability (CR), as shown in Table 2. These metrics help assess how well the items measure their respective constructs and the overall consistency of the constructs. Overall, the internal consistency of the constructs lies between 0.539 and 0.851, indicating moderate to strong reliability. The moderate reliability reflects the early adoption stage of a technology [8,23,24], as is the case with the use of AI tools in academics in this study.

6.3. Hypotheses Test

The model predicting behavioral intention to use proved to be applicable: 65.8% of the variance in behavioral intention to use AI was explained by perceived usefulness, perceived ease of use, security and privacy, positive attitudes towards AI, and negative attitudes towards AI. In addition, 64.3% of the variance in actual use was explained by behavioral intention to use AI. These high R2 values suggest that the model is quite effective in explaining the factors influencing behavioral intention and actual use of AI tools in enhancing academic performance, as shown in Figure 2.
To obtain more comprehensive insight into the results, linear regression, Pearson correlation, and PLS-SEM were employed to test the relationship between each independent variable and its dependent variable, as shown in Table 3, Table 4, and Table 5, respectively. The results indicate that perceived usefulness, perceived ease of use, security and privacy, and positive attitudes towards AI in academia were positively and significantly correlated with behavioral intention to use, supporting H1, H2, H3, and H4. Negative attitudes towards AI in academia were not statistically significant, with all p-values greater than 0.05, leading to the rejection of H5. Lastly, behavioral intention to use has a strong and positive correlation with actual use, supporting H6.
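As a rough illustration of how the regression and correlation results reported in Table 3 and Table 4 could be produced, the following hypothetical sketch (not the authors’ code; the file name and the column names PU, PEU, SP, PA, NA, BI, and AU are assumptions) fits ordinary least squares models and Pearson correlations on per-respondent construct scores. The bootstrapped PLS-SEM path estimates in Table 5 would require a dedicated PLS tool and are not covered here.

```python
# Hypothetical sketch: testing H1-H6 with OLS regression and Pearson correlation
# on mean construct scores. File name and column names are assumed for illustration.
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import pearsonr

df = pd.read_csv("survey_construct_scores.csv")  # hypothetical per-respondent mean scores

# H1-H5: regress behavioral intention (BI) on the five antecedent constructs
bi_model = smf.ols("BI ~ PU + PEU + SP + PA + NA", data=df).fit()
print(bi_model.summary())  # beta coefficients, t-values, p-values, and R-squared

# H6: regress actual use (AU) on behavioral intention
au_model = smf.ols("AU ~ BI", data=df).fit()
print(au_model.summary())

# Pearson correlations between each antecedent and BI (analogous to Table 4)
for predictor in ["PU", "PEU", "SP", "PA", "NA"]:
    r, p = pearsonr(df[predictor], df["BI"])
    print(f"{predictor} -> BI: r = {r:.2f}, p = {p:.3f}")
r, p = pearsonr(df["BI"], df["AU"])
print(f"BI -> AU: r = {r:.2f}, p = {p:.3f}")
```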

7. Discussion

The goal of this research is to ascertain the prevalence of AI tool use among students in their academic activities, understand students’ perceptions of AI in academia and what influences their intention to use AI tools in education, and measure the actual use of these tools. The conceptual model consisted of seven constructs: perceived usefulness, perceived ease of use, security and privacy, positive attitudes towards AI in academia, negative attitudes towards AI in academia, behavioral intention to use AI, and actual use or adoption of AI tools. Six hypotheses were proposed; five of these are accepted and one is rejected.
Primarily, we aim to determine the popularity of AI tool usage among students. The demographic analysis confirmed that all 202 respondents had experience using AI tools, indicating a 100% usage rate. The most commonly used AI tool was ChatGPT. This widespread adoption suggests that AI tools have become integral to students’ academic routines.
Secondarily, we seek to evaluate the impact of AI tools in academia on enhancing students’ academic lives amidst AI security and privacy concerns. The results from regression analysis, correlation, and PLS-SEM confirmed that all hypotheses are accepted except H5. According to the integration model of TAM and GAAIS, the constructs are perceived usefulness, the perceived ease of use, behavioral intention to use, security and privacy, positive attitudes towards AI, and negative attitudes towards AI, providing a comprehensive understanding of students’ perceptions and intentions.
The hypothesis that perceived usefulness positively impacts behavioral intention to use AI tools (H1) is strongly validated by the data. The regression analysis (β = 0.395, t = 6.88, p < 0.001), Pearson correlation (r = 0.66, p < 0.001), and PLS-SEM (path coefficient = 0.381, R2 = 0.658, f2 = 0.260, t = 6.795, p = 0.000) consistently show significant positive relationships, indicating that students who see AI tools as beneficial have a greater propensity to use them. Past studies have consistently shown that higher perceived usefulness leads to a greater desire to use technology because users are more likely to use tools they find beneficial for their tasks [6,25]. A previous study likewise confirms that, in the context of e-learning, perceived usefulness significantly impacts students’ intention to engage with digital learning tools [24]. The strong support for H1 suggests that the perceived academic benefits of AI tools are a critical motivator for students’ intention to use them.
The hypothesis that perceived ease of use favorably influences behavioral intention to use AI tools (H2) is also validated. The regression results (β = 0.150, t = 2.548, p = 0.012), correlation result (r = 0.56, p < 0.001), and PLS-SEM (path coefficient = 0.166, R2 = 0.658, f2 = 0.048, t = 2.833, p = 0.002) indicate significant positive effects. This suggests that students are more inclined to use AI tools if they find them simple to operate [26]. According to a previous study, students’ perceived ease of use and satisfaction both increase when they believe the tools are user-friendly, making them more likely to intend to use the tools in the future [27]. The positive findings for H2 imply that the simplicity and user-friendliness of AI tools are crucial factors in fostering students’ willingness to adopt these technologies.
The hypothesis that security and privacy positively impact behavioral intention to use AI tools (H3) is validated through the regression analysis (β = 0.134, t = 2.645, p = 0.009), correlation analysis (r = 0.30, p < 0.001), and PLS-SEM (path coefficient = 0.131, R2 = 0.658, f2 = 0.040, t = 2.266, p = 0.012). These results underscore that concerns about security and privacy are significant predictors of AI tool usage intention. The results are also consistent with previous studies, which highlighted the importance of privacy in the adoption of new technologies [28], suggesting that privacy concerns significantly affect users’ willingness to adopt new technologies [29]. The acceptance of H3 indicates that students value the protection of their personal information, which significantly influences their intention to use AI tools.
The hypothesis that positive attitudes towards AI positively influence behavioral intention (H4) is also supported by the data. The regression analysis (β = 0.334, t = 6.077, p < 0.001), correlation result (r = 0.62, p < 0.001), and PLS-SEM (path coefficient = 0.369, R2 = 0.658, f2 = 0.264, t = 5.150, p = 0.000) all show significant positive relationships. This suggests that positive attitudes towards AI enhance students’ intentions to use these tools. A study also demonstrated that positive perceptions of AI significantly influence technology adoption in educational field [30]. The strong support for H4 demonstrates that students who view AI in a positive light are more motivated to incorporate these tools into their academic routines.
Contrary to expectations, the hypothesis that negative attitudes towards AI negatively impact behavioral intention (H5) is rejected. The regression results (β = −0.007, t = −0.135, p = 0.892) show a negative relationship with a very small β, the correlation analysis (r = 0.08, p = 0.124) shows essentially no linear relationship, with an r value very close to 0, and PLS-SEM (path coefficient = 0.018, R2 = 0.658, f2 = 0.001, t = 0.257, p = 0.399) shows a positive relationship with a very small path coefficient. Although the three methods differ in the sign of the estimated effect, the hypothesis is rejected because all three yield p-values greater than 0.05, which is not statistically significant. This rejection could be due to the specific context of academic use, where the benefits of AI might outweigh negative attitudes. Some studies found that, in the educational field, perceived usefulness and ease of use have a greater impact than negative attitudes [31]. Even when negative attitudes were present, they did not substantially affect the overall desire to use technology [32]; pragmatic needs overshadow negative beliefs [23]. The lack of support for H5 suggests that negative perceptions about AI do not significantly deter students from intending to use these tools in their studies.
The hypothesis that behavioral intention to use AI tools positively influences actual use or adoption (H6) is strongly supported. The regression analysis (β = 0.739, t = 15.507, p < 0.001), correlation analysis (r = 0.74, p < 0.001), and PLS-SEM (path coefficient = 0.802, R2 = 0.643, f2 = 1.802, t = 19.023, p = 0.000) all show significant positive relationships. The results suggest that students’ intention notably impacts their actual use of AI tools in academic activities. A previous study on mobile payment apps found that users with a strong intention to use the app tend to use it more frequently, highlighting the direct relationship between behavioral intention and actual usage [28]. The strong support for H6 demonstrates that students with a strong intention to use AI tools are more likely to adopt them in their daily academic life [33,34].
To conclude, the results obtained from these analyses show a consistent result across all three methods, providing robust support for the hypotheses related to perceived usefulness, the perceived ease of use, security and privacy, positive attitudes towards AI, and the relationship between behavioral intention and actual use. The only hypothesis not supported is that negative attitudes towards AI negatively impact behavioral intention to use AI, highlighting the specific situation of academic use, where the perceived benefits of AI tools might overshadow negative attitudes.

8. Implications

The results offer insightful information for future research. The issues with reliability and validity in measuring constructs like perceived ease of use, actual use or adoption, and positive attitudes towards AI in academia indicate that better survey tools are needed. Future studies should focus on creating more accurate and reliable instruments to capture these important aspects. Additionally, expanding the research to other universities and cultural contexts can help determine if these findings hold true elsewhere, enriching our understanding of how students perceive and use AI tools in education.
Educators could leverage the findings of this research to enhance students’ use of AI tools in enhancing their academic performance. They could integrate the existing curriculum with AI tools, aligning the learning objectives and outcomes, as perceived usefulness and the ease of use are considered positive factors. The curriculum integration saves learning time and reduces study resistance. However, security and privacy concerns over AI tools should be addressed. Policymakers could develop national guidelines to prevent the misuse of personal data and students’ information. They should also advocate campaigns to promote positive attitudes towards AI tools.
For educators and technology developers, this research underscores the importance of making AI tools both useful and easy to use. Improving the usability and perceived benefits of these tools can encourage more students to adopt them. Educational institutions should implement clear guidelines and robust security measures to build trust and confidence among students. At the same time, the use of AI tools has long-term effects on students’ critical thinking and problem-solving abilities. Specifically, AI tools reduce cognitive burdens for routine work such as calculations and grammar checks, so students can focus more on theoretical concepts instead of grammatical errors. AI tools could also enhance students’ exposure to complex problem solving through AI-based assistants and simulations such as GitHub Copilot.
The application of AI technologies in academic settings has the possibility to significantly enhance learning experiences and outcomes. However, it is crucial to balance these benefits with the potential downsides, such as the impact on students’ critical thinking and problem-solving abilities. For example, over-reliance may impact students’ creativity as they merely obtain the outcomes and ignore the process of learning. Moreover, the validity of AI outputs has raised concerns due to inaccurate information which misdirects students’ learning process. Society needs to consider the moral ramifications of AI in education, including concerns about data privacy and the risk of over-reliance on technology.
Policymakers and educators should collaborate to create guidelines that ensure AI is used responsibly and effectively in education, maximizing benefits while minimizing risks. This is because using AI tools is inevitable, and the benefits that enhance students’ academic performance outweigh AI risks. Hence, the use of AI tools could be regulated through creating pedagogical learning guidelines to ensure AI is enhancing students’ learning and not replacing learning. Effective adoption of AI tools can be implemented through workshops and training to ensure that students and educators are AI-literate. Comprehensive policies for AI tools applications in universities should be clearly stated.

9. Conclusions

This research explores the factors that influence students at Multimedia University, Malaysia, to adopt and use AI tools. Given the rapid integration of AI technologies in education, it is essential to understand how students perceive and intend to use these tools. This insight is crucial for making the most of AI’s potential to enhance learning experiences. The study builds on the TAM and expands it with the GAAIS to provide a thorough analysis of what drives AI adoption in academics.
In developing the conceptual model, several key variables were identified: perceived usefulness, the perceived ease of use, security and privacy, and both positive and negative attitudes towards AI as independent variables. Behavioral intention to use AI was considered a mediator, while the actual use or adoption of AI tools was the dependent variable. The hypothesis was tested using various analytical methods, including regression analysis, Pearson correlation, and PLS-SEM. Before diving into hypothesis testing, the reliability and validity of each construct were ensured through preliminary analyses such as Cronbach’s alpha and factor analysis.
The findings reveal that perceived usefulness, ease of use, security and privacy, and positive attitudes towards AI significantly boost students’ intentions to use AI tools. Interestingly, negative attitudes towards AI did not have a notable impact on the behavioral intention. Additionally, a strong intention to use AI tools is a good predictor of their actual use. These insights can greatly benefit other researchers and students by offering a clear framework for understanding what drives AI adoption in educational settings. By emphasizing the importance of usefulness, ease of use, and security, the research highlights the need for educational institutions and technology developers to focus on these areas to promote the effective integration of AI tools in academic environments.

Author Contributions

Conceptualization, H.-F.N.; methodology, H.-F.N. and C.-C.T.; validation, C.-C.T.; formal analysis, J.T.K.P.; data curation, J.T.K.P.; writing—original draft preparation, J.T.K.P.; writing—review and editing, C.-C.T.; supervision, H.-F.N.; All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author(s).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Rodway, P.; Schepman, A. The impact of adopting AI educational technologies on projected course satisfaction in university students. Comput. Educ. Artif. Intell. 2023, 5, 100150. [Google Scholar] [CrossRef]
  2. Rousseau, H.-P. From Gutenberg to Chat GPT: The Challenge of the Digital University; No. 2023rb-02; CIRANO: Montreal, QC, Canada, 2023. [Google Scholar]
  3. Lee, V.R.; Pope, D.; Miles, S.; Zárate, R.C. Cheating in the age of generative AI: A high school survey study of cheating behaviors before and after the release of ChatGPT. Comput. Educ. Artif. Intell. 2024, 7, 100253. [Google Scholar] [CrossRef]
  4. von Garrel, J.; Mayer, J. Artificial Intelligence in studies—Use of ChatGPT and AI-based tools among students in Germany. Humanit. Soc. Sci. Commun. 2023, 10, 799. [Google Scholar] [CrossRef]
  5. Welding, L. Half of College Students Say Using AI Is Cheating. BestColleges. 2023. Available online: https://ideas.repec.org/p/cir/cirbur/2023rb-02.html (accessed on 30 December 2023).
  6. Davis, F.D. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 1989, 13, 319–340. [Google Scholar] [CrossRef]
  7. Venkatesh, V.; Morris, M.G.; Davis, G.B.; Davis, F.D. User acceptance of information technology: Toward a unified view. MIS Q. 2003, 27, 425–478. [Google Scholar] [CrossRef]
  8. Schepman, A.; Rodway, P. Initial validation of the general attitudes towards Artificial Intelligence Scale. Comput. Hum. Behav. Rep. 2020, 1, 100014. [Google Scholar] [CrossRef] [PubMed]
  9. Alenezi, A.M. The relationship of students’ emotional intelligence and the level of their readiness for online education: A contextual study on the example of university training in Saudi Arabia. Образование и наука 2020, 22, 89–109. [Google Scholar]
  10. Salido, V. Impact of AI-Powered Learning Tools on Student Understanding and Academic Performance. Unpublished manuscript, 2023. Available online: https://www.researchgate.net/publication/376260972_Impact_of_AI-Powered_Learning_Tools_on_Student_Understanding_and_Academic_Performance (accessed on 30 December 2023).
  11. King, W.R.; He, J. A meta-analysis of the technology acceptance model. Inf. Manag. 2006, 43, 740–755. [Google Scholar] [CrossRef]
  12. Alsyouf, A.; Lutfi, A.; Alsubahi, N.; Alhazmi, F.N.; Al-Mugheed, K.; Anshasi, R.J.; Alharbi, N.I.; Albugami, M. The use of a technology acceptance model (TAM) to predict patients’ usage of a personal health record system: The role of security, privacy, and usability. Int. J. Environ. Res. Public Health 2023, 20, 1347. [Google Scholar] [CrossRef]
  13. Farooq, A.; Ahmad, F.; Khadam, N.; Lorenz, B.; Isoaho, J. The impact of perceived security on intention to use e-learning among students. In Proceedings of the 2020 IEEE 20th International Conference on Advanced Learning Technologies (ICALT), Tartu, Estonia, 6–9 July 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 360–364. [Google Scholar]
  14. Alneyadi, S.; Wardat, Y. ChatGPT: Revolutionizing student achievement in the electronic magnetism unit for eleventh-grade students in Emirates schools. Contemp. Educ. Technol. 2023, 15, ep448. [Google Scholar] [CrossRef]
  15. Caratiquit, K.D.; Caratiquit, L.J.C. ChatGPT as an academic support tool on the academic performance among students: The mediating role of learning motivation. J. Social, Humanit. Educ. 2023, 4, 21–33. [Google Scholar] [CrossRef]
  16. Sullivan, M.; Kelly, A.; McLaughlan, P. ChatGPT in higher education: Considerations for academic integrity and student learning. J. Appl. Learn. Teach. 2023, 6, 1–10. [Google Scholar] [CrossRef]
  17. Rajput, R. Use of Artificial Intelligence to Solve Problems in the Classroom. In New Technological Applications in the Flipped Learning Model; IGI Global Scientific Publishing: Hershey, PA, USA, 2025; pp. 137–168. [Google Scholar]
  18. Baskara, F.X.R. Generative AI as a Catalyst for Sustainable Learning: Proposing an Adaptive Pedagogical Strategy. Proceeding Int. Conf. Relig. Sci. Educ. 2025, 4, 463–470. [Google Scholar]
  19. Al-Kamzari, F.; Alias, N. A systematic literature review of project-based learning in secondary school physics: Theoretical foundations, design principles, and implementation strategies. Humanit. Soc. Sci. Commun. 2025, 12, 286. [Google Scholar] [CrossRef]
  20. Nguyen, H.; Hayward, J. Applying Generative Artificial Intelligence to Critiquing Science Assessments. J. Sci. Educ. Technol. 2025, 34, 199–214. [Google Scholar] [CrossRef]
  21. Hadi, W.; AlShaikh-Hasan, M.N. AI-Driven Curriculum Transformation and Faculty Development in Developing Universities. In Revolutionizing Urban Development and Governance with Emerging Technologies; IGI Global Scientific Publishing: Hershey, PA, USA, 2025; pp. 289–322. [Google Scholar]
  22. Grassini, S. Shaping the future of education: Exploring the potential and consequences of AI and ChatGPT in educational settings. Educ. Sci. 2023, 13, 692. [Google Scholar] [CrossRef]
  23. Al-Emran, M.; Al-Sharafi, M.A. Revolutionizing education with industry 5.0: Challenges and future research agendas. IJITLS 2022, 6, 1–5. [Google Scholar]
  24. Marikyan, D.; Papagiannidis, S. Technology acceptance model: A review. In TheoryHub Book; Papagiannidis, S., Ed.; TheoryHub: Newcastle upon Tyne, UK, 2024; ISBN 9781739604400. Available online: https://open.ncl.ac.uk (accessed on 30 December 2023).
  25. Balakrishnan, V.; Gan, C.L. Students’ learning styles and their effects on the use of social media technology for learning. Telemat. Inform. 2016, 33, 808–821. [Google Scholar] [CrossRef]
  26. Boubker, O. From chatting to self-educating: Can AI tools boost student learning outcomes? Expert Syst. Appl. 2024, 238, 121820. [Google Scholar] [CrossRef]
  27. Bélanger, F.; Crossler, R.E. Privacy in the digital age: A review of information privacy research in information systems. MIS Q. 2011, 35, 1017–1041. [Google Scholar] [CrossRef]
  28. Kaur, J.; Singh, P. Study habits and academic performance: A comparative analysis. Eur. J. Mol. Clin. Med. 2020, 7, 6161–6166. [Google Scholar]
  29. Salloum, S.A.; Alhamad, A.Q.M.; Al-Emran, M.; Monem, A.A.; Shaalan, K. Exploring students’ acceptance of e-learning through the development of a comprehensive technology acceptance model. IEEE Access 2019, 7, 128445–128462. [Google Scholar] [CrossRef]
  30. Luik, P.; Taimalu, M. Predicting the intention to use technology in education among student teachers: A path analysis. Educ. Sci. 2021, 11, 564. [Google Scholar] [CrossRef]
  31. Kim, H.J.; Yi, P.; Hong, J.I. Students’ academic use of mobile technology and higher-order thinking skills: The role of active engagement. Educ. Sci. 2020, 10, 47. [Google Scholar] [CrossRef]
  32. Liu, C.-H.; Chen, Y.-T.; Kittikowit, S.; Hongsuchon, T.; Chen, Y.-J. Using unified theory of acceptance and use of technology to evaluate the impact of a mobile payment app on the shopping intention and usage behavior of middle-aged customers. Front. Psychol. 2022, 13, 830842. [Google Scholar] [CrossRef] [PubMed]
  33. Esiyok, E.; Gokcearslan, S.; Kucukergin, K.G. Acceptance of educational use of AI chatbots in the context of self-directed learning with technology and ICT self-efficacy of undergraduate students. Int. J. Hum.–Comput. Interact. 2024, 41, 641–650. [Google Scholar] [CrossRef]
  34. Shahzad, M.F.; Xu, S.; Asif, M. Factors affecting generative artificial intelligence, such as ChatGPT, use in higher education: An application of technology acceptance model. Br. Educ. Res. J. 2025, 51, 489–513. [Google Scholar] [CrossRef]
Figure 1. Conceptual model.
Figure 2. Model fitness.
Table 1. Respondents’ profile.

Demographics | Item | Frequency (N = 202) | Percentage (%)
Gender | Female | 68 | 34%
Gender | Male | 134 | 66%
Age | 18–20 | 40 | 19.80%
Age | 21–22 | 131 | 64.90%
Age | 23 or older | 31 | 15.30%
Education | Degree | 113 | 55.90%
Education | Diploma | 64 | 31.70%
Education | Foundation | 25 | 12.40%
Faculty | Business and Accounting | 51 | 25.20%
Faculty | Engineering | 8 | 4.00%
Faculty | Information Technology and Computer Science | 125 | 61.90%
Faculty | Law | 18 | 8.90%
Years of study | First year | 32 | 15.80%
Years of study | Second year | 85 | 42.10%
Years of study | Third year | 77 | 38.10%
Years of study | Fourth year | 8 | 4.00%
Experience using AI tools | Yes | 202 | 100%
Experience using AI tools | No | 0 | 0%
Types of AI tools used | ChatGPT | 202 | 100%
Types of AI tools used | Quillbot | 160 | 79.21%
Types of AI tools used | Grammarly | 108 | 53.47%
Types of AI tools used | Perplexity | 34 | 16.83%
Types of AI tools used | Others | 2 | 1.00%
Table 2. Reliability and validity test.

Construct / Item | Loading | α | CR | AVE
Perceived usefulness (PU) | – | 0.658 | 0.576 | 0.407
PU1 | 0.654 | – | – | –
PU2 | 0.662 | – | – | –
PU3 | 0.567 | – | – | –
PU4 | 0.686 | – | – | –
PU5 | 0.694 | – | – | –
Perceived ease of use (PEU) | – | 0.539 | 0.823 | 0.483
PEU1 | 0.695 | – | – | –
PEU2 | 0.675 | – | – | –
PEU3 | 0.646 | – | – | –
PEU4 | 0.538 | – | – | –
PEU5 | 0.659 | – | – | –
Behavioral intention to use (BI) | – | 0.629 | 0.817 | 0.472
BI1 | 0.761 | – | – | –
BI2 | 0.642 | – | – | –
BI3 | 0.664 | – | – | –
BI4 | 0.694 | – | – | –
BI5 | 0.717 | – | – | –
Actual use (AU) | – | 0.585 | 0.834 | 0.504
AU1 | 0.816 | – | – | –
AU2 | 0.791 | – | – | –
AU3 | 0.800 | – | – | –
AU4 | 0.248 | – | – | –
AU5 | 0.503 | – | – | –
Security and privacy (SP) | – | 0.851 | 0.878 | 0.592
SP1 | 0.798 | – | – | –
SP2 | 0.826 | – | – | –
SP3 | 0.745 | – | – | –
SP4 | 0.795 | – | – | –
SP5 | 0.826 | – | – | –
Positive attitudes towards AI in academia (PA) | – | 0.592 | 0.739 | 0.416
PA1 | 0.743 | – | – | –
PA2 | 0.780 | – | – | –
PA3 | 0.689 | – | – | –
PA4 | 0.472 | – | – | –
Negative attitudes towards AI in academia (NA) | – | 0.757 | 0.824 | 0.541
NA1 | 0.696 | – | – | –
NA2 | 0.809 | – | – | –
NA3 | 0.732 | – | – | –
NA4 | 0.814 | – | – | –
Table 3. Linear regression test.

Hypothesis | Association | β | t-Value | p-Value | Decision
H1 | Perceived usefulness → Behavioral intention to use | 0.395 | 6.88 | <0.001 | Accepted
H2 | Perceived ease of use → Behavioral intention to use | 0.150 | 2.548 | 0.012 | Accepted
H3 | Security and privacy → Behavioral intention to use | 0.134 | 2.645 | 0.009 | Accepted
H4 | Positive attitudes towards AI in academia → Behavioral intention to use | 0.334 | 6.077 | <0.001 | Accepted
H5 | Negative attitudes towards AI in academia → Behavioral intention to use | −0.007 | −0.135 | 0.892 | Rejected
H6 | Behavioral intention to use → Actual use or adoption | 0.739 | 15.507 | <0.001 | Accepted
Table 4. Pearson correlation test.

Hypothesis | Association | r | p-Value | Decision
H1 | Perceived usefulness → Behavioral intention to use | 0.66 | <0.001 | Accepted
H2 | Perceived ease of use → Behavioral intention to use | 0.56 | <0.001 | Accepted
H3 | Security and privacy → Behavioral intention to use | 0.30 | <0.001 | Accepted
H4 | Positive attitudes towards AI in academia → Behavioral intention to use | 0.62 | <0.001 | Accepted
H5 | Negative attitudes towards AI in academia → Behavioral intention to use | 0.08 | 0.124 | Rejected
H6 | Behavioral intention to use → Actual use or adoption | 0.74 | <0.001 | Accepted
Table 5. PLS-SEM test.

Hypothesis | Association | Path Coefficient | R2 | f2 | t-Value | p-Value | Decision
H1 | Perceived usefulness → Behavioral intention to use | 0.381 | 0.658 | 0.260 | 6.795 | 0.000 | Accepted
H2 | Perceived ease of use → Behavioral intention to use | 0.166 | 0.658 | 0.048 | 2.833 | 0.002 | Accepted
H3 | Security and privacy → Behavioral intention to use | 0.131 | 0.658 | 0.040 | 2.266 | 0.012 | Accepted
H4 | Positive attitudes towards AI in academia → Behavioral intention to use | 0.369 | 0.658 | 0.264 | 5.150 | 0.000 | Accepted
H5 | Negative attitudes towards AI in academia → Behavioral intention to use | 0.018 | 0.658 | 0.001 | 0.257 | 0.399 | Rejected
H6 | Behavioral intention to use → Actual use or adoption | 0.802 | 0.643 | 1.802 | 19.023 | 0.000 | Accepted
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
