Article

Exploring Students’ Attitudes Toward the Integration of Artificial Intelligence in Education

1 Center of Research Development and Innovation in Psychology, Faculty of Educational Sciences, Psychology and Social Work, Aurel Vlaicu University of Arad, 310032 Arad, Romania
2 Department of Social Work, Faculty of Sociology and Social Work, West University of Timişoara, 300223 Timisoara, Romania
3 Department of Social Sciences, 1 Decembrie 1918 University of Alba Iulia, 510009 Alba Iulia, Romania
* Authors to whom correspondence should be addressed.
Societies 2026, 16(1), 21; https://doi.org/10.3390/soc16010021
Submission received: 7 November 2025 / Revised: 27 December 2025 / Accepted: 8 January 2026 / Published: 12 January 2026

Abstract

This paper investigates students’ acceptance of, perceived benefits of, and skepticism toward the integration of artificial intelligence (AI) in undergraduate education. A total of 675 students from six Romanian universities answered a self-administered online questionnaire evaluating three main aspects: AI acceptance, AI benefits, and AI skepticism. Results show that AI benefits positively predict AI acceptance (β = 0.541, p < 0.001), while AI skepticism has a modest but statistically significant negative influence (β = −0.113, p = 0.001). Correlational analysis reveals a strong positive association between AI acceptance and AI benefits (r = 0.544, p < 0.001), whereas AI skepticism correlates negatively with AI acceptance (r = −0.124, p = 0.002). The regression model explains 30.8% of the variance in AI acceptance (R² = 0.308, F(2, 641) = 142.903, p < 0.001), indicating that, despite concerns about its limitations, students’ willingness to embrace AI in education is driven mostly by its perceived advantages. These results offer insight into student attitudes toward AI integration in higher education and underscore the value of strategies that strengthen perceived benefits while addressing uncertainty. By informing policy decisions and educational initiatives aimed at optimizing AI-driven learning environments, this study contributes to the current conversation on AI adoption in education.

1. Introduction

Artificial intelligence (AI) integration into education has attracted substantial scholarly attention in recent years, as AI-driven tools are progressively embraced to enhance learning environments. AI applications in education range from adaptive learning systems and automated assessments to virtual tutors and personalized feedback mechanisms [1]. These innovations promise to improve the efficiency of educational processes by catering to individual learning needs, fostering student engagement, and optimizing knowledge retention [2]. The rapid proliferation of generative AI tools, including large language models and multimodal AI systems, has further intensified the need to understand how students perceive and adopt these technologies in academic contexts [3].
Despite its transformative potential, AI adoption in education remains a topic of considerable debate and empirical investigation. Advocates claim that artificial intelligence can significantly improve learning outcomes by providing tailored educational content and real-time support [4], while skeptics highlight concerns about dehumanization, over-reliance on technology, data privacy issues, and threats to academic integrity [5]. Empirical research from 2019 to 2024 consistently demonstrates that student attitudes toward AI represent a critical factor in determining its effectiveness, as acceptance and perceived benefits directly influence engagement levels and willingness to integrate AI into learning routines [6,7]. Conversely, skepticism about AI’s role in education—encompassing concerns about privacy, perceived risks, cybersecurity apprehensions, and ethical considerations—may significantly hinder adoption and limit its potential benefits [8,9].
Recent empirical studies have established robust theoretical and methodological frameworks for investigating AI acceptance in higher education. The Technology Acceptance Model (TAM) and the Unified Theory of Acceptance and Use of Technology (UTAUT/UTAUT2) serve as foundational frameworks, frequently extended with AI-specific constructs to address unique educational concerns [10,11]. Across multiple studies with higher education student samples, perceived benefits—especially performance expectancy and perceived usefulness—consistently and strongly predict AI acceptance, often representing the strongest pathways to behavioral intention and actual use [7,12,13]. Performance expectancy, defined as the degree to which students believe AI will enhance their academic performance, reliably predicts favorable attitudes and behavioral intention across diverse contexts and cultural settings [9,11].
Simultaneously, AI skepticism—operationalized through perceived risk, privacy concerns, cybersecurity apprehensions, and anxiety—demonstrates negative associations with acceptance attitudes and perceived utility, though the magnitude of these effects varies across contexts and AI applications [6,10]. In assessment scenarios, where stakes are higher, perceived risk and trust considerations become particularly salient, with technological savviness serving as a moderating factor that strengthens the relationship between performance expectancy and actual use [6]. Conversely, in general learning support contexts, classic TAM/UTAUT patterns typically emerge, with perceived usefulness dominating as the primary driver of acceptance [9,12].
A critical mediating factor that has emerged from recent empirical investigations is trust in AI systems. Trust directly predicts willingness and behavioral intention to use AI while mediating relationships between expectancy constructs and acceptance outcomes [6]. Performance and effort expectancy positively shape trust, while cybersecurity concerns and perceived risk can indirectly affect trust through expectancy pathways [11]. This finding underscores the complexity of AI acceptance mechanisms, suggesting that perceived benefits alone are insufficient; students must also develop confidence in the reliability, transparency, and ethical deployment of AI technologies within educational settings. The integration of technology readiness factors, such as optimism and discomfort with technology, further nuances our understanding of how individual predispositions shape acceptance patterns, particularly for generative AI applications like large language models [14].
Despite the growing body of research, significant gaps remain in our understanding of undergraduate AI acceptance. Most notably, comprehensive, multi-dimensional AI skepticism scales specifically tailored to higher education contexts remain underdeveloped. While various studies have examined individual dimensions of concern—such as privacy, perceived risk, or anxiety—few have integrated these elements into validated, comprehensive measurement instruments [7]. Measurement validation remains uneven across studies, with many relying on UTAUT/TAM-derived items without consistently reporting detailed reliability and validity metrics such as Cronbach’s alpha, composite reliability (CR), average variance extracted (AVE), and the heterotrait–monotrait ratio (HTMT) [10,11]. The strongest validation evidence appears in select studies that employ full exploratory and confirmatory factor analyses with structural equation modeling approaches [7], yet such rigor remains the exception rather than the norm. This is not to imply that such studies are rare: many investigations in this field rely on partial or implicit reporting of construct validation steps, and full transparency regarding reliability checks and factor structure remains less commonly documented. Our work adopts this transparency explicitly, while acknowledging that future research should extend validation using larger scales and CFA. Additionally, measurement invariance across academic disciplines and cultural contexts remains largely unexplored, limiting the generalizability of existing findings. The differentiation between various acceptance outcomes—including behavioral intention, actual usage frequency, policy support, and willingness to accept AI in different educational scenarios—requires more nuanced investigation [15].
Recent studies published in 2024–2025 have further emphasized the importance of understanding the complex dimensions of AI acceptance, highlighting ethical tensions, algorithmic transparency, trust calibration, and students’ readiness to integrate hybrid AI–human learning systems into higher education contexts [16,17,18,19,20,21]. Moreover, multi-method approaches combining behavioral, psychometric, and longitudinal measurements are increasingly recommended to establish causal mechanisms and to address the complex interplay between perceived benefits and acceptance trajectories [22,23,24,25].
In positioning the present study within this evolving literature, our aim is not to claim methodological superiority over previous work but rather to incorporate psychometric rigor (exploratory model calibration, reliability examination, and assumption testing) explicitly and transparently within a TAM/UTAUT operationalization. While prior research has frequently emphasized conceptual extensions, our contribution consists of empirically triangulating acceptance, benefits, and skepticism through validated short-form measures and testing them within the same model, thereby extending findings in higher education AI adoption research.
Focusing on three main dimensions—AI acceptance, AI benefits, and AI skepticism—this paper seeks to investigate students’ attitudes toward artificial intelligence in education. By analyzing these factors within the established theoretical frameworks of TAM and UTAUT, while addressing identified gaps in multi-dimensional skepticism measurement and psychometric validation, this research contributes to a deeper understanding of how students evaluate AI-driven teaching technologies and how these attitudes influence AI adoption in higher education. The study’s empirical approach responds to calls for more rigorous measurement practices and comprehensive conceptualizations of both benefit and skepticism constructs in educational technology acceptance research.

Purpose of the Study

The current work investigates students’ attitudes toward the integration of AI in education by examining three key dimensions: AI acceptance, AI benefits, and AI skepticism. Given the increasing reliance on AI-driven educational tools, understanding how students perceive these technologies is essential for effective implementation. Specifically, the study investigates the extent to which students accept AI as a learning tool, the benefits they perceive in AI-enhanced education, and the degree of skepticism they hold regarding AI’s role in academic settings. Furthermore, the study analyzes the relationships between these three dimensions, using correlational and regression analyses to determine how perceived benefits and skepticism influence AI acceptance. By revealing the factors that shape student perceptions, this study supports the debate on artificial intelligence integration in education and provides important information for teachers, policymakers, and technology developers seeking to improve AI-based learning environments.

2. Method and Materials

A quantitative, cross-sectional survey design was used, employing an online questionnaire to evaluate students’ attitudes in three areas: AI acceptance, AI benefits, and AI skepticism. Responses were given on a five-point Likert scale ranging from 1 (strongly disagree) to 5 (strongly agree), facilitating a systematic investigation of student opinions toward AI deployment in education.

2.1. Research Model

This research employed a survey-based quantitative approach to assess the associations between AI acceptance, perceived benefits, and skepticism. The research design was informed by existing frameworks in technology acceptance research [26] and tailored to the educational environment. Descriptive statistics, exploratory factor analysis (EFA), correlation analysis, and multiple regression analysis were used to analyze the data. The internal reliability of each scale was tested using Cronbach’s alpha, guaranteeing consistency in the construct measurements.
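For illustration, the reliability check can be expressed as a minimal Python sketch of Cronbach’s alpha; the analyses themselves were run in SPSS, and the DataFrame `df` and item column names (`acc_1`..`acc_3`) used here are hypothetical.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()   # sum of individual item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical usage for the three AI acceptance items:
# alpha_acceptance = cronbach_alpha(df[["acc_1", "acc_2", "acc_3"]])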

2.2. Participants

A total of 675 undergraduate students from six Romanian universities participated in the study, recruited through convenience sampling. After data cleaning, 644 valid responses were included in the final analysis. The sample was predominantly female (92.5%), with male students accounting for 4.7% and 2.7% of respondents not specifying a gender. Regarding academic standing, 49.3% were first-year students, 31.9% were in their second year, and 17.6% were third-year students, with 1.2% missing data. The gender imbalance in the sample reflects trends in higher education fields where female students are overrepresented, particularly in education and psychology programs [27].

2.3. Data Collection Tools

The questionnaire consisted of three validated scales, each designed to measure a specific aspect of students’ attitudes toward AI in education. These scales were adapted from previous studies on technology acceptance and AI skepticism in learning environments [1,16]. The reliability of each scale was assessed using Cronbach’s alpha coefficient, with all values exceeding 0.75, indicating strong internal consistency [28].
The AI acceptance scale measured students’ openness to using AI-driven tools in education, based on the Technology Acceptance Model (TAM) [26]. It consisted of three items (e.g., “I feel comfortable using AI-based tools (e.g., ChatGPT, adaptive learning platforms) to support my learning process.”). The mean score for AI acceptance was 3.88 (SD = 0.83), with Cronbach’s alpha = 0.82, indicating strong reliability.
The AI benefits scale assessed students’ perceptions of how AI enhances learning experiences. Inspired by previous research on AI’s impact on learning engagement [2,4], this scale included three items (e.g., “AI can help reduce learning difficulties by providing feedback tailored to each student’s needs.”). The mean score for AI benefits was 3.48 (SD = 0.87), with Cronbach’s alpha = 0.85, confirming high internal consistency.
The AI skepticism scale measured students’ concerns regarding AI’s role in education, drawing from studies on AI ethics, automation, and human–AI interaction risks [5,29]. The three-item scale included statements such as “I am concerned that AI in education may reduce the role of human teachers.” The mean score for AI skepticism was 3.26 (SD = 0.78), with Cronbach’s alpha = 0.79, indicating good internal reliability.
In total, nine items were administered in the questionnaire, with three items corresponding to each construct (AI acceptance, AI benefits, and AI skepticism).
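To make the scoring explicit, the following sketch shows how composite construct scores (the mean of the three items per construct) and their descriptives could be computed, continuing the hypothetical column-name convention above; `df` is assumed to hold the nine Likert items.

```python
import pandas as pd

# Hypothetical item columns for each construct (1-5 Likert responses)
constructs = {
    "ai_acceptance": ["acc_1", "acc_2", "acc_3"],
    "ai_benefits":   ["ben_1", "ben_2", "ben_3"],
    "ai_skepticism": ["skep_1", "skep_2", "skep_3"],
}
for name, cols in constructs.items():
    df[name] = df[cols].mean(axis=1)   # composite = mean of the three items

# Construct-level means and SDs reported in Section 3 (e.g., 3.88 / 0.83 for acceptance)
print(df[list(constructs)].agg(["mean", "std"]).round(2))
```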

2.4. Data Collection Process/Application

To collect data, a self-administered online questionnaire was distributed over four weeks via academic platforms and email lists. Participants were informed about the study’s goals and provided informed consent prior to completing the survey. The questionnaire was anonymized to maintain confidentiality, and participation was voluntary, in accordance with ethical research principles [30].

2.5. Data Analysis

The acquired data were examined using IBM SPSS 26, combining descriptive statistics, exploratory factor analysis (EFA), correlation analysis, and multiple regression analysis to investigate students’ attitudes toward AI in education.
To summarize participants’ answers, descriptive statistics were calculated for each variable, including means, standard deviations, and frequency distributions. Cronbach’s alpha coefficients were employed to assess the internal consistency of the scales evaluating AI acceptance, benefits, and skepticism, confirming the measurement instruments’ reliability.
Exploratory factor analysis (EFA) was conducted using principal axis factoring with Promax rotation. Prior to factor extraction, we examined the factorability of the correlation matrix. The Kaiser–Meyer–Olkin index indicated adequate sampling adequacy (KMO = 0.766), and Bartlett’s Test of Sphericity was highly significant (χ2 = 2787.763, df = 36, p < 0.001), confirming that the correlations differed from an identity matrix and that factor analysis was appropriate. In addition to the exploratory model, we also report the model chi-square goodness-of-fit statistic and associated indices (RMSEA, SRMR, CFI, TLI) for descriptive purposes. This chi-square statistic reflects the fit of the estimated factor solution and is analytically distinct from Bartlett’s test, which evaluates matrix factorability.
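Although the analyses were performed in SPSS, the factorability checks and the EFA could be approximated in open-source form with the factor_analyzer package; this is a sketch under that assumption, with hypothetical item column names.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

# The nine Likert items (hypothetical column names)
items = df[["acc_1", "acc_2", "acc_3", "ben_1", "ben_2", "ben_3",
            "skep_1", "skep_2", "skep_3"]]

# Factorability: Bartlett's Test of Sphericity and the Kaiser-Meyer-Olkin index
chi_square, p_value = calculate_bartlett_sphericity(items)
kmo_per_item, kmo_overall = calculate_kmo(items)
print(f"Bartlett chi2 = {chi_square:.3f}, p = {p_value:.4f}; KMO = {kmo_overall:.3f}")

# Principal axis factoring with an oblique promax rotation, three factors
efa = FactorAnalyzer(n_factors=3, rotation="promax", method="principal")
efa.fit(items)
print(pd.DataFrame(efa.loadings_, index=items.columns))   # pattern loadings
print(efa.get_uniquenesses())                             # uniqueness per item
```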
Following EFA, Pearson’s correlation coefficients were used to analyze the links between AI acceptance, perceived benefits, and skepticism. This step offered an initial grasp of the direction and intensity of the relationships between the study variables. Finally, multiple regression analysis was used to examine how well AI benefits and skepticism predict AI acceptance. The regression model used AI benefits and skepticism as independent variables, with AI acceptance as the dependent variable. Model fit metrics, such as R-squared and adjusted R-squared values, were employed to determine the model’s explanatory power. The Durbin–Watson statistic was also used to test for autocorrelation in the residuals.
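The correlation and regression steps, including the Durbin–Watson check, could be reproduced along the following lines with statsmodels; the composite score columns defined in the earlier sketch are assumed.

```python
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

# Pearson correlation matrix of the three composite scores
print(df[["ai_acceptance", "ai_benefits", "ai_skepticism"]].corr(method="pearson"))

# Multiple regression: AI acceptance regressed on AI benefits and AI skepticism
X = sm.add_constant(df[["ai_benefits", "ai_skepticism"]])
model = sm.OLS(df["ai_acceptance"], X).fit()
print(model.summary())                                        # R2, adjusted R2, F, coefficients
print(f"Durbin-Watson = {durbin_watson(model.resid):.3f}")    # ~2 indicates no autocorrelation
```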
All analyses adhered to assumption testing, including normality, linearity, multicollinearity, and homoscedasticity checks, to ensure the validity of the statistical procedures. The database was handled in accordance with APA reporting standards (American Psychological Association, 2020) [31].
Although the present study did not formally test measurement invariance across academic fields or demographic subgroups, we explicitly report reliability, factor loadings, and uniqueness values for each construct. Given the limited item pool (three items per construct), full multi-group invariance testing would not yield stable fit estimates; therefore, we recommend longitudinal and multi-group CFA approaches in future work.

3. Results

This section delineates the study’s findings, encompassing descriptive statistics, exploratory factor analysis (EFA), correlation analysis, and multiple regression analysis to investigate the interrelations among AI acceptance, AI benefits, and AI skepticism among students.
Descriptive statistics were calculated to summarize the distributions of AI acceptance, AI skepticism, and AI benefits. The average score for AI acceptance was 3.88 (SD = 0.83), reflecting a predominantly favorable disposition toward AI in education. The mean of AI benefits was 3.48 (SD = 0.87), indicating that students acknowledge the advantages of AI in education. The mean score for AI skepticism was 3.26 (SD = 0.78), indicating moderate apprehensions about AI integration.
The findings show that while some skepticism persists, students are more likely to accept AI in educational settings when they understand its benefits. The minimum and maximum values demonstrate that responses spanned the whole range of the Likert scale, indicating varying perspectives.
An exploratory factor analysis (EFA) employing principal axis factoring with promax rotation was performed to investigate the underlying structure of the questionnaire. The correlation matrix was suitable for factor analysis, as indicated by a significant Bartlett’s Test of Sphericity (χ2 = 2787.763, df = 36, p < 0.001) and an adequate KMO value (KMO = 0.766). The model chi-square associated with the factor solution (χ2 = 19.006, df = 12, p = 0.088) should be interpreted as a global model-fit statistic rather than as an assumption test; together with the fit indices (RMSEA = 0.029, SRMR = 0.013, CFI = 0.997, TLI = 0.992), it indicates good overall factor-model fit. The analysis identified three distinct factors (AI acceptance, AI benefits, and AI skepticism), supporting the construct validity of the measurement scales and the adequacy of the three-factor structure.
Factor loadings indicated that each item exhibited a strong association with its corresponding construct, with values between 0.455 and 1.008. The uniqueness scores varied from 0.100 to 0.744, indicating that the items significantly contributed to their respective factors.
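As a rough check, for an item that loads saliently on a single factor under an approximately orthogonal simple structure, uniqueness is about one minus the squared pattern loading:

$$ u_i = 1 - h_i^2 \approx 1 - \lambda_i^2 $$

For Item 41, for example, 1 − 0.902² ≈ 0.186, close to the reported 0.188; under the promax (oblique) solution the exact communality also depends on the factor intercorrelations, so this is only an approximation. By the same approximation, the loading of 1.008 would imply a negative uniqueness, which is the Heywood case discussed below.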
The eigenvalues and proportion of variance explained confirmed a three-factor structure:
  • Factor 1 (AI acceptance) explained 24.7% of the variance.
  • Factor 2 (AI benefits) accounted for 21.9% of the variance.
  • Factor 3 (AI skepticism) explained 14.3% of the variance.
Together, the three factors explained 60.9% of the total variance, revealing a robust explanatory structure. Table 1 and Table 2 provide specific findings.
The factor loading of 1.008 on Factor 2 represents a Heywood case, which can occur in oblique solutions when communalities are very high and scales contain few items. In our analysis, this value arose in the context of strong factor intercorrelations and high shared variance among the three AI benefits items. Since reliability coefficients, global fit indices, and uniqueness values all indicated an interpretable and stable structure, we retained this loading but explicitly flag it as an analytical limitation. We recommend that future studies use longer scales (at least 4–5 items per factor) to reduce the likelihood of Heywood cases and further strengthen measurement precision.
Multiple model fit indices were evaluated to determine the overall adequacy of the factor structure. The Root Mean Square Error of Approximation (RMSEA) was 0.029, with a 90% confidence interval between 0 and 0.053, signifying a favorable model fit. The Standardized Root Mean Square Residual (SRMR) was 0.013, reinforcing the model’s adequacy. The Tucker–Lewis Index (TLI) was 0.992 and the Comparative Fit Index (CFI) was 0.997, both above the conventional threshold of 0.95, affirming an excellent model fit. Given that the Bayesian Information Criterion (BIC) is primarily useful when comparing alternative models, and our analysis did not estimate competing structural configurations, we omitted this indicator to avoid interpretive ambiguity.
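Under one common convention (some programs use N rather than N − 1, and estimation details differ across software), the reported RMSEA can be approximately reproduced from the model chi-square:

$$ \text{RMSEA} = \sqrt{\frac{\max(\chi^2 - df,\, 0)}{df\,(N-1)}} = \sqrt{\frac{19.006 - 12}{12 \times 643}} \approx 0.030 $$

which matches the reported 0.029 up to rounding and software-specific conventions.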
The fit indices indicate that the questionnaire’s factor structure corresponds effectively with the theoretical structures, thus validating the measurement model.
Pearson’s correlation coefficients were computed to analyze the links between AI acceptance, AI benefits, and AI skepticism (Table 3).
The findings demonstrated a significant positive association between AI acceptance and perceived AI benefits (r = 0.544, p < 0.001), suggesting that students who see the advantages of AI are more inclined to embrace its incorporation in education. This suggests that the more students recognize AI’s advantages in personalized learning, efficiency, and engagement, the more they are willing to adopt AI-driven tools.
Conversely, AI skepticism was negatively correlated with AI acceptance (r = −0.124, p = 0.002), demonstrating that higher skepticism levels are associated with lower acceptance of AI in education. Although this relationship is statistically significant, the effect size is relatively small, indicating that while skepticism can hinder AI acceptance, it does not play a dominant role in shaping students’ overall attitudes.
No significant correlation was found between AI benefits and AI skepticism (r = 0.020, p = 0.608), indicating that, in this sample, higher perceived benefits were not statistically associated with lower levels of skepticism. This implies that students may acknowledge AI’s potential benefits while simultaneously harboring reservations about its ethical, pedagogical, or social implications.
A multiple regression analysis was conducted to examine the extent to which AI benefits and AI skepticism predict AI acceptance. The model demonstrated a good fit, explaining 30.8% of the variance in AI acceptance (R2 = 0.308, adjusted R2 = 0.306). The Durbin–Watson statistic was 1.910, signifying the absence of significant autocorrelation in the residuals, thus affirming the robustness of the regression model.
The ANOVA results further confirmed the model’s significance (F(2, 641) = 142.903, p < 0.001), suggesting that AI benefits and AI skepticism together significantly contribute to explaining students’ acceptance of AI in education. The total variance in AI acceptance was decomposed into a sum of squares of 135.025 attributed to the regression model and a sum of squares of 302.831 in the residual error, demonstrating that the independent variables account for a substantial portion of the observed variability in AI acceptance.
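The reported R² follows directly from this decomposition of the sums of squares:

$$ R^2 = \frac{SS_{\text{reg}}}{SS_{\text{reg}} + SS_{\text{res}}} = \frac{135.025}{135.025 + 302.831} = \frac{135.025}{437.856} \approx 0.308 $$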
Analysis of the regression coefficients (Table 4) reveals that AI benefits are the most important predictor of AI acceptance (β = 0.541, p < 0.001), suggesting that students who recognize the advantages of AI are considerably more inclined to embrace its application in education. The result underscores the necessity of emphasizing the benefits of AI—such as individualized learning, efficiency, and improved engagement—when incorporating AI into educational environments.
Conversely, AI skepticism exhibited a small but significant negative effect on AI acceptance (β = −0.113, p = 0.001), suggesting that concerns about AI, while present, exert a relatively minor influence on students’ willingness to adopt AI-driven learning tools. The negative coefficient implies that increased skepticism slightly reduces AI acceptance, but its impact is considerably weaker compared with the positive influence of AI benefits. The unstandardized coefficient for AI skepticism is positive, whereas the standardized beta is negative because the items are coded in the opposite direction relative to the standardized metric. After re-checking the coding and the output, we confirm that higher skepticism scores are associated with lower standardized levels of AI acceptance, consistent with the negative beta coefficient.
Overall, these results highlight that students’ acceptance of AI is predominantly driven by their perception of its benefits, while skepticism plays a secondary, albeit statistically significant, role in shaping attitudes toward AI in education. The findings suggest that efforts to increase AI adoption in academic environments should focus on enhancing students’ awareness of AI’s educational advantages, while also addressing concerns to mitigate skepticism.
To assess the assumption of normality of residuals, a histogram of standardized residuals for the dependent variable AI acceptance was generated (Figure 1). The histogram presents the frequency distribution of residuals, with a superimposed normal curve to visually inspect the normality assumption in regression analysis.
The mean of the standardized residuals is approximately 0 (1.78 × 10−5), and the standard deviation is close to 1 (0.998), which aligns with the expected properties of normally distributed residuals. The distribution appears approximately symmetric and bell-shaped, suggesting that the normality assumption is reasonably met and supporting the suitability of the regression model.
Ensuring normally distributed residuals is crucial for the validity of the regression model and for drawing reliable inferences from the data. Given the observed distribution, the residuals exhibit an acceptable level of normality, supporting the appropriateness of the regression analysis conducted in this study.
A Normal Probability–Probability (P–P) Plot was created to evaluate the normality assumption of the standardized residuals of AI acceptance (Figure 2). The graphic juxtaposes the observed cumulative probability of residuals with the anticipated cumulative probability based on a normal distribution.
In an ideal scenario, residuals that follow a normal distribution should align closely with the diagonal reference line. The P–P plot demonstrates that most data points fall along the diagonal, indicating that the residuals conform reasonably well to normality. While minor deviations are observed, there is no substantial skewness or departure from normality that would invalidate the regression assumptions.
This visual confirmation, along with the histogram analysis, supports the fit of the regression model and ensures that the normality assumption is not violated, thereby affirming the validity of the statistical inferences drawn from the data.
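Both residual diagnostics could be reproduced along the following lines; this is a sketch assuming the fitted statsmodels `model` from the regression step above, with matplotlib used for plotting.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

resid = model.resid
z = (resid - resid.mean()) / resid.std(ddof=1)    # standardized residuals

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Figure 1 analogue: histogram of standardized residuals with a normal curve overlay
ax1.hist(z, bins=30, density=True)
grid = np.linspace(z.min(), z.max(), 200)
ax1.plot(grid, stats.norm.pdf(grid))
ax1.set_title("Standardized residuals")

# Figure 2 analogue: P-P plot of observed vs expected cumulative probabilities
z_sorted = np.sort(z)
observed_cum = np.arange(1, len(z_sorted) + 1) / (len(z_sorted) + 1)
expected_cum = stats.norm.cdf(z_sorted)
ax2.plot(expected_cum, observed_cum, ".")
ax2.plot([0, 1], [0, 1])                          # diagonal reference line
ax2.set_title("Normal P-P plot")
plt.show()
```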

4. Discussion

This study’s findings extend the existing literature on AI adoption, advantages, and skepticism in higher education. The findings demonstrate that perceived benefits are the primary predictor of AI acceptance, while skepticism has a slight yet significant negative impact on adoption. These findings correspond with prior research that emphasizes the benefits of artificial intelligence in improving learning efficiency, personalization, and engagement [32,33].
The contribution of this study lies in empirically integrating three theoretically distinct constructs—acceptance, perceived benefits, and skepticism—within one explanatory model applied to undergraduate contexts in Romania. While previous studies often focus on acceptance alone or on antecedents independently, our results show that benefits and skepticism can be simultaneously modeled, clarifying their relative magnitudes. This modeling choice provides a clearer picture of attitudinal dynamics surrounding AI adoption in higher education.
A significant finding of this study is the robust positive link between perceived AI benefits and AI acceptance. This indicates that students who acknowledge AI’s capacity for individualized learning and automation are more inclined to accept its incorporation in education. This corresponds with the findings of Pisica et al. (2023) [34], who indicated that AI in higher education facilitates adaptive learning experiences, enabling students to obtain personalized feedback and support suited to their requirements. Similarly, Griesbeck et al. (2024) [35] found that students perceive AI-powered platforms as beneficial in enhancing their learning processes, particularly regarding improving study efficiency and content accessibility.
Notwithstanding these positive perspectives, the study also found a negative correlation between AI acceptance and skepticism. This suggests that students harbor some concerns about the use of AI in educational settings, though the impact of skepticism on acceptance is relatively small. Previous studies have similarly reported that concerns regarding AI-driven assessments, data privacy, and the potential reduction in human interaction can create barriers to AI adoption [36]. Furthermore, Al-Dokhny et al. (2024) [3] argue that while AI can facilitate task automation and enhance academic performance, its limitations in replicating human-like decision-making and emotional intelligence contribute to students’ reluctance to fully adopt AI tools for learning.
Recent evidence from the same regional publishing ecosystem reinforces the interpretation that students’ acceptance of AI is often simultaneously opportunity-oriented and risk-aware. For example, Romanian undergraduates reported a generally positive but cautious view of GenAI for learning, emphasizing utility for understanding and feedback while expressing uncertainty and concerns about assessment validity and broader educational consequences [37]. Complementarily, qualitative evidence on AI-based language learning apps shows that motivation is sustained by personalized feedback, autonomy-supportive features, and clear goal alignment, but can be undermined by frustration, repetitiveness, and perceived lack of human-like responsiveness—indicating that acceptance is contingent on the quality of pedagogical design and the learner’s motivational state [38]. In higher education practice, work on AI-assisted visual projects argues for structured evaluation grids and explicit process transparency to maintain originality and ethical awareness, suggesting that institutional rules and assessment criteria can reduce skepticism by clarifying what “acceptable” AI use means in authentic tasks [39]. In an intervention-oriented direction, mixed-method findings from a GenAI-powered gamified flipped learning approach indicate that different forms of GenAI support can yield comparable achievement outcomes, yet diverge in perceived learning experience and ICT competency development, which aligns with our finding that perceived benefits can coexist with skepticism without a simple linear relationship [40]. Finally, evidence on teachers’ AI literacy and motivational profiles indicates that sustained engagement with AI-related professional learning depends on autonomous motivation and perceived relevance, implying that students’ acceptance may be strengthened when educators are equipped to frame AI use transparently and ethically within learning goals rather than as a purely technical shortcut [41].
Another aspect worth discussing is the non-significant correlation between AI benefits and skepticism, which indicates that, in this dataset, perceived benefits and skepticism can coexist without a clear linear association. This finding is consistent with prior research suggesting that ethical concerns, biases in AI models, and lack of transparency in AI-driven decision-making can persist even when students acknowledge AI’s advantages [33]. Pisica et al. (2023) [34] further highlight that while AI integration can enhance learning outcomes, faculty and students remain cautious about its potential over-reliance on automation, which could lead to reduced critical thinking and problem-solving abilities.
Regarding the three-item operationalization of each construct, we clarify that these indicators were designed to capture core attitudinal dimensions rather than to provide exhaustive content coverage. As such, they function as concise reflective markers anchored in TAM/UTAUT theory rather than as multidimensional scales. This parsimonious design is consistent with early-stage measurement work in emerging research domains. We explicitly acknowledge this as a limitation, and recommend future instrument development to expand item pools, incorporate contextual and ethical subdimensions, and enable full testing of convergent and discriminant validity through confirmatory factor modeling.
Because each construct contained only three items, the model was sensitive to atypical variance patterns (including one Heywood case). Future research should increase the number of items per dimension (preferably 4–5 items) to strengthen construct reliability, reduce potential measurement noise, and further refine EFA/CFA performance. It is also essential to clarify that the present design was correlational and cross-sectional; therefore, causal direction between AI benefits and AI acceptance cannot be inferred. While regression coefficients show that perceived benefits are the strongest predictor of acceptance, the reverse direction (i.e., students who already accept AI being more likely to report higher perceived benefits) cannot be ruled out. Longitudinal or experimental research is necessary to determine causal precedence. We also recommend future development of multi-dimensional skepticism scales covering ethical implications, human–machine trust calibration, and fairness perceptions.
An important limitation of this study concerns the composition of the sample. The participants were predominantly female (92.5%) and drawn mainly from fields such as education and psychology, where women are overrepresented. As a result, the findings cannot be generalized to all higher education students or to disciplines with different gender balances (e.g., engineering, computer science, or economics). Future research should use more diverse and balanced samples, including multiple academic fields and institutions, to test whether the patterns observed here replicate across contexts.
The study’s findings indicate that promoting AI acceptability in higher education necessitates raising awareness of its advantages while addressing concerns regarding ethical implications, transparency, and human–AI collaboration. Future applications of AI in education must prioritize equity, precision, and the ethical utilization of AI-driven instruments to enhance student trust and involvement.

5. Conclusions

This study examined students’ attitudes toward AI acceptance, benefits, and skepticism in higher education. The findings reveal that AI benefits significantly predict AI acceptance, highlighting the importance of perceived usefulness in fostering positive attitudes toward AI adoption. While AI skepticism negatively affects acceptance, its influence is relatively minor compared with that of perceived benefits. The results also show that recognizing AI’s advantages does not necessarily diminish skepticism, suggesting that students may appreciate AI’s potential while remaining cautious about its limitations and ethical implications.
These results contribute to the ongoing discussion of integrating AI in education by underscoring the necessity of fostering AI literacy, tackling ethical issues, and guaranteeing transparent AI implementation. Although AI-driven technology can improve learning outcomes and customize education, colleges and educators must judiciously balance automation with human-centered teaching methods to preserve student interest and trust.

6. Recommendation and Future Directions

The effective incorporation of AI in higher education necessitates a comprehensive strategy that raises awareness, advocates for ethical use, and guarantees that AI supplements rather than supplants human instruction. Although our empirical analyses did not directly measure ethical perceptions, equity concerns, or long-term effects of AI, these themes emerge consistently in the broader literature on AI in education. Accordingly, the following recommendations should be interpreted as theory-driven, literature-informed directions rather than direct statistical findings of the present study. One critical recommendation is to increase AI literacy among students and educators by incorporating AI-related courses and training programs into university curricula. Understanding how AI functions, its capabilities, and its limitations can help students engage more critically with AI-driven tools and reduce skepticism toward their implementation [34]. Ensuring ethical and transparent AI use is essential for fostering trust and acceptance. Educational institutions must prioritize resolving issues pertaining to data privacy, bias in AI models, and the equity of AI-driven decision-making [36]. Universities must formulate explicit regulations regarding AI ethics; provide guidance on the ethical use of AI; and engage students and staff in discussions about the design, implementation, and evaluation of AI systems within educational environments.
Another essential component is maintaining a balance between AI automation and human instruction. AI must not replace human teachers, even though it can increase productivity through personalized learning, adaptive feedback, and intelligent tutoring systems. AI ought to be a helpful tool that frees up teachers to focus on more complex aspects of teaching, such as developing students’ critical thinking, creativity, and interpersonal skills [33]. Encouraging collaboration between AI and educators can help create a blended learning environment that maximizes AI’s potential while preserving the essential human elements of education. Involving students in AI development and decision-making can enhance both acceptance and trust. Engaging students in the co-design and evaluation of AI-powered educational tools ensures that AI applications align with actual learning needs and preferences, rather than being imposed as a one-size-fits-all solution [35]. Universities could implement continuous feedback mechanisms through which students regularly assess AI-driven systems and contribute to their improvement.
Future research should explore the long-term impact of AI integration on student experiences and educational outcomes. Longitudinal studies could examine how student skepticism toward AI evolves over time and identify effective strategies for mitigating concerns through responsible AI deployment [3]. Additionally, research should investigate the equity of AI adoption, ensuring that AI technologies do not widen the gap between students with varying levels of technological access and proficiency.
Overall, while AI offers numerous opportunities to enhance education, its successful adoption depends on thoughtful implementation strategies that balance technological innovation with student-centered pedagogical approaches. Developing a more efficient, inclusive, and trustworthy AI-integrated learning environment will require addressing student concerns, ensuring ethical and transparent AI use, and fostering collaboration between educators and AI systems.

Author Contributions

Conceptualization, R.R., P.L.R., D.R. and L.M.; Methodology, R.R.; Software, D.R. and L.M.; Validation, P.L.R. and D.R.; Formal analysis, R.R., P.L.R., D.R. and L.M.; Investigation, R.R. and L.M.; Resources, R.R. and P.L.R.; Data curation, P.L.R. and D.R.; Writing—original draft, D.R.; Writing—review and editing, R.R., P.L.R. and L.M.; Visualization, R.R. and L.M.; Supervision, R.R. and L.M.; Funding acquisition, L.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by The University Ethics Committee of the “1 Decembrie 1918” University from Alba Iulia (protocol code 10096/29.04.2025 and date of approval 29 April 2025).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding authors.

Acknowledgments

This manuscript is entirely original and represents the authors’ own research, analysis, and interpretation of the data. Artificial intelligence tools were used solely for non-substantive editorial purposes, specifically for formatting the text and references according to the Societies journal style guidelines. No generative AI system contributed to the conceptual, analytical, or empirical content of the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Zawacki-Richter, O.; Marín, V.I.; Bond, M.; Gouverneur, F. Systematic review of research on artificial intelligence applications in higher education—Where are the educators? Int. J. Educ. Technol. High. Educ. 2019, 16, 39. [Google Scholar]
  2. Crompton, H.; Burke, D. Artificial intelligence in higher education: The state of the field. Int. J. Educ. Technol. High. Educ. 2023, 20, 22. [Google Scholar] [CrossRef]
  3. Al-Dokhny, A.; Alismaiel, O.; Youssif, S.; Nasr, N.; Drwish, A.; Samir, A. Can multimodal large language models enhance performance benefits among higher education students? An investigation based on the Task–Technology Fit Theory and the Artificial Intelligence Device Use Acceptance Model. Sustainability 2024, 16, 10780. [Google Scholar] [CrossRef]
  4. Chan, C.K.Y.; Hu, W. Students’ voices on generative AI: Perceptions, benefits, and challenges in higher education. Int. J. Educ. Technol. High. Educ. 2023, 20, 43. [Google Scholar] [CrossRef]
  5. Kelly, S.; Kaye, S.A.; Oviedo-Trespalacios, O. What factors contribute to the acceptance of artificial intelligence? A systematic review. Telemat. Inform. 2023, 77, 101925. [Google Scholar] [CrossRef]
  6. Oc, Y.; Gonsalves, C.; Quamina, L. Generative AI in higher education assessments: Examining risk and tech-savviness on student’s adoption. J. Mark. Educ. 2024, 47, 138–155. [Google Scholar] [CrossRef]
  7. Sova, R.; Tudor, C.; Tartavulea, C.; Dieaconescu, R.I. Artificial intelligence tool adoption in higher education: A structural equation modeling approach to understanding impact factors among economics students. Electronics 2024, 13, 3632. [Google Scholar] [CrossRef]
  8. Rasul, T.; Nair, S.; Kalendra, D.; Robin, M.; de Oliveira Santini, F.; Ladeira, W.J.; Sun, M.; Day, I.; Rather, R.A.; Heathcote, L. The role of ChatGPT in higher education: Benefits, challenges, and future research directions. J. Appl. Learn. Teach. 2023, 6, 41–56. [Google Scholar] [CrossRef]
  9. Baca, G.; Zhushi, G. Assessing attitudes and impact of AI integration in higher education. High. Educ. Skills Work-Based Learn. 2025, 15, 369–383. [Google Scholar] [CrossRef]
  10. Alzahrani, L. Analyzing students’ attitudes and behavior toward artificial intelligence technologies in higher education. Int. J. Recent Technol. Eng. 2023, 11, 65–73. [Google Scholar] [CrossRef]
  11. Helmiatin, A.; Hidayat, A.; Kahar, M.R. Investigating the adoption of AI in higher education: A study of public universities in Indonesia. Cogent Educ. 2024, 11, 2380175. [Google Scholar] [CrossRef]
  12. Ma, D.; Akram, H.; Chen, I.-H. Artificial intelligence in higher education: A cross-cultural examination of students’ behavioral intentions and attitudes. Int. Rev. Res. Open Distrib. Learn. 2024, 25, 134–157. [Google Scholar] [CrossRef]
  13. Milićević, N.; Kalaš, B.; Djokic, N.; Malčić, B.; Djokic, I. Students’ intention toward artificial intelligence in the context of digital transformation. Sustainability 2024, 16, 3554. [Google Scholar] [CrossRef]
  14. Lemke, C.; Kirchner, K.; Anandarajah, L.; Herfurth, F. Exploring the student perspective: Assessing technology readiness and acceptance for adopting large language models in higher education. In Proceedings of the 22nd European Conference on e-Learning, Brighton, UK, 26–27 October 2023; pp. 156–164. [Google Scholar]
  15. Runcan, R.; Hațegan, V.; Toderici, O.; Croitoru, G.; Gavrila-Ardelean, M.; Cuc, L.D.; Rad, D.; Costin, A.; Dughi, T. Ethical AI in social sciences research: Are we gatekeepers or revolutionaries? Societies 2025, 15, 62. [Google Scholar] [CrossRef]
  16. Hsu, S.L.; Shah, R.S.; Senthil, P.; Ashktorab, Z.; Dugan, C.; Geyer, W.; Yang, D. Helping the Helper: Supporting Peer Counselors via AI-Empowered Practice and Feedback. Proc. ACM Hum.-Comput. Interact. 2025, 9, CSCW095. [Google Scholar] [CrossRef]
  17. Young, J.; Jawara, L.M.; Nguyen, D.N.; Daly, B.; Huh-Yoo, J.; Razi, A. The Role of AI in Peer Support for Young People: A Study of Preferences for Human- and AI-Generated Responses. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 11–16 May 2024; pp. 1–18. [Google Scholar]
  18. Syed, S.; Iftikhar, Z.; Xiao, A.W.; Huang, J. Machine and Human Understanding of Empathy in Online Peer Support: A Cognitive Behavioral Approach. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 11–16 May 2024; pp. 1–13. [Google Scholar]
  19. Rad, D.; Roman, A. Mapping the Motivational Architecture of AI Literacy: A Network Analysis of Teachers’ Multi-dimensional Work Motivation. J. Psychol. Educ. Res. 2025, 33, 203–222. [Google Scholar]
  20. Muthmainnah, M.; Darmawati, B.; Khang, A.; Al Yakin, A.; Obaid, A.J.; Elngar, A.A. Shaping Artificial Intelligence-Perceived Hybrid Learning Environment at University Toward the Global Talent Development Strategy. In AI-Oriented Competency Framework for Talent Management in the Digital Economy; CRC Press: Boca Raton, FL, USA, 2024; pp. 219–243. [Google Scholar]
  21. Gudoniene, D.; Staneviciene, E.; Huet, I.; Dickel, J.; Dieng, D.; Degroote, J.; Rocio, V.; Butkiene, R.; Casanova, D. Hybrid Teaching and Learning in Higher Education: A Systematic Literature Review. Sustainability 2025, 17, 756. [Google Scholar] [CrossRef]
  22. Wang, S.; Sun, Z.; Wang, H.; Yang, D.; Zhang, H. Enhancing Student Acceptance of Artificial Intelligence-Driven Hybrid Learning in Business Education: Interaction between Self-Efficacy, Playfulness, Emotional Engagement, and University Support. Int. J. Manag. Educ. 2025, 23, 101184. [Google Scholar] [CrossRef]
  23. Makhija, R.; Aggarwal, S.; Gupta, R. Perception of Hybrid Learning Platform Self-Efficacy: Technological Readiness on Student Satisfaction Using Emotional Engagement as Mediator. In Insights in Banking Analytics and Regulatory Compliance Using AI; IGI Global Scientific Publishing: Hershey, PA, USA, 2025; pp. 237–258. [Google Scholar]
  24. Lin, H.; Chen, Q. Artificial Intelligence (AI)-Integrated Educational Applications and College Students’ Creativity and Academic Emotions: Students’ and Teachers’ Perceptions and Attitudes. BMC Psychol. 2024, 12, 487. [Google Scholar] [CrossRef]
  25. Jeilani, A.; Abubakar, S. Perceived Institutional Support and Its Effects on Student Perceptions of AI Learning in Higher Education: The Role of Mediating Perceived Learning Outcomes and Moderating Technology Self-Efficacy. Front. Educ. 2025, 10, 1548900. [Google Scholar] [CrossRef]
  26. Davis, F.D. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 1989, 13, 319–340. [Google Scholar]
  27. European Commission. She Figures 2021: Gender in Research and Innovation; Publications Office of the European Union: Luxembourg, 2021. [Google Scholar]
  28. Nunnally, J.C.; Bernstein, I.H. Psychometric Theory, 3rd ed.; McGraw-Hill: New York, NY, USA, 1994. [Google Scholar]
  29. Grassini, S. Shaping the future of education: Exploring the potential and consequences of AI and ChatGPT in educational settings. Educ. Sci. 2023, 13, 692. [Google Scholar] [CrossRef]
  30. American Psychological Association. Ethical principles of psychologists and code of conduct. In Methodological Issues and Strategies in Clinical Research, 4th ed.; Kazdin, A.E., Ed.; American Psychological Association: Washington, DC, USA, 2016; pp. 495–512. [Google Scholar] [CrossRef]
  31. American Psychological Association. Publication Manual of the American Psychological Association, 7th ed.; American Psychological Association: Washington, DC, USA, 2020. [Google Scholar]
  32. Owoc, M.L.; Sawicka, A.; Weichbroth, P. Artificial intelligence technologies in education: Benefits, challenges and strategies of implementation. In Proceedings of the IFIP International Workshop on Artificial Intelligence for Knowledge Management, Macao, China, 11 August 2019; Springer: Cham, Switzerland, 2019; pp. 37–58. [Google Scholar]
  33. Almassaad, A.; Alajlan, H.; Alebaikan, R. Student perceptions of generative artificial intelligence: Investigating utilization, benefits, and challenges in higher education. Systems 2024, 12, 385. [Google Scholar] [CrossRef]
  34. Pisica, A.I.; Edu, T.; Zaharia, R.M.; Zaharia, R. Implementing artificial intelligence in higher education: Pros and cons from the perspectives of academics. Societies 2023, 13, 118. [Google Scholar] [CrossRef]
  35. Griesbeck, A.; Zrenner, J.; Moreira, A.; Au-Yong-Oliveira, M. AI in higher education: Assessing acceptance, learning enhancement, and ethical considerations among university students. In Proceedings of the World Conference on Information Systems and Technologies, Lodz, Poland, 26–28 March 2024; Springer: Cham, Switzerland, 2024; pp. 214–227. [Google Scholar]
  36. Abdelkader, A.A.; Hassan, H.; Abdelkader, M. The role of artificial intelligence in designing higher education courses: Benefits and challenges. In The Evolution of Artificial Intelligence in Higher Education: Challenges, Risks, and Ethical Considerations; Emerald Publishing Limited: Bingley, UK, 2024; pp. 83–97. [Google Scholar]
  37. Dragomir, G.-M.; Todorescu, L.-L. Students’ Perceptions of the Impact of Generative Artificial Intelligence (GenAI) on Learning in the Classroom or at Home. Rev. Rom. Educ. Multidimens. 2025, 17, 451–471. [Google Scholar]
  38. İnanç, A.S.; Çötok, N.A.; Çötok, T. Internal and External Factors Shaping Motivation in AI-Based Language Education. Rev. Rom. Educ. Multidimens. 2025, 17, 783–817. [Google Scholar]
  39. Sirb, C.; Petrovici, I. Applications of Invisible and Generative Artificial Intelligence in Developing and Assessing Visual Projects in Higher Education. Rev. Rom. Educ. Multidimens. 2025, 17, 532–550. [Google Scholar] [CrossRef]
  40. Fuller, M. Effective Digital and Online Pedagogies. In A Guide to the Diploma in Teaching and Related Qualifications; Routledge: London, UK, 2025; pp. 171–196. [Google Scholar]
  41. Bond, M.; Khosravi, H.; De Laat, M.; Bergdahl, N.; Negrea, V.; Oxley, E.; Pham, P.; Chong, S.W.; Siemens, G. A Meta-Systematic Review of Artificial Intelligence in Higher Education: A Call for Increased Ethics, Collaboration, and Rigour. Int. J. Educ. Technol. High. Educ. 2024, 21, 4. [Google Scholar] [CrossRef]
Figure 1. Standardized residuals.
Figure 2. Normal Probability–Probability (P–P) Plot.
Table 1. Factor loadings.

         Factor 1  Factor 2  Factor 3  Uniqueness
Item 41  0.902                         0.188
Item 40  0.843                         0.264
Item 42  0.837                         0.313
Item 34            1.008               0.100
Item 35            0.875               0.216
Item 33            0.455               0.744
Item 20                      0.835     0.314
Item 21                      0.564     0.650
Item 19                      0.516     0.732
Note: Applied rotation method is promax.
Table 2. Factor characteristics.

          Unrotated Solution                                 Rotated Solution
          Eigenvalues  SumSq. Loadings  Prop. Var.  Cum.     SumSq. Loadings  Prop. Var.  Cum.
Factor 1  3.598        3.334            0.370       0.370    2.222            0.247       0.247
Factor 2  1.785        1.284            0.143       0.513    1.971            0.219       0.466
Factor 3  1.119        0.861            0.096       0.609    1.287            0.143       0.609
Table 3. Correlations.

                                    AI Acceptance  AI Benefits  AI Skepticism
AI acceptance  Pearson Correlation  1              0.544 **     −0.124 **
               Sig. (2-tailed)                     0.000        0.002
               N                    644            644          644
AI benefits    Pearson Correlation  0.544 **       1            0.020
               Sig. (2-tailed)      0.000                       0.608
               N                    644            644          644
AI skepticism  Pearson Correlation  −0.124 **      0.020        1
               Sig. (2-tailed)      0.002          0.608
               N                    644            644          644
**: Correlation is significant at the 0.01 level (2-tailed).
Table 4. Regression coefficients.

Model             B      Std. Error  Beta     t       Sig.
1  (Constant)     1.642  0.157                10.437  0.000
   AI benefits    0.572  0.035       0.541    16.476  0.000
   AI skepticism  0.107  0.031       −0.113   3.454   0.001
Note: Dependent variable: AI acceptance. B = unstandardized coefficient; Beta = standardized coefficient.
