Article

Is the Rise of Artificial Intelligence Redefining Italian University Students’ Learning Experiences? Perceptions, Practices, and the Future of Education

Department of Clinical and Experimental Sciences, University of Brescia, Viale Europa 11, 25123 Brescia, Italy
* Author to whom correspondence should be addressed.
Educ. Sci. 2026, 16(2), 258; https://doi.org/10.3390/educsci16020258
Submission received: 17 December 2025 / Revised: 28 January 2026 / Accepted: 3 February 2026 / Published: 6 February 2026
(This article belongs to the Special Issue The State of the Art and the Future of Education)

Abstract

Background: The rapid diffusion of generative Artificial Intelligence (AI) in higher education is reshaping students’ learning practices and raising concerns about unequal access and educational equity. In the Italian university context, where institutional guidelines on AI use are still developing, examining how students adopt and perceive tools such as ChatGPT is particularly relevant. Methods: This quantitative study investigated patterns of ChatGPT use and perceptions among Italian university students, with specific attention to its perceived support for learning and the development of transversal skills. Data were collected through an online survey. Differences across socio-demographic and academic characteristics were analysed using Mann–Whitney and Kruskal–Wallis tests, while associations between ChatGPT use, students’ perceptions, and study-related outcomes were examined using Spearman’s rho coefficients. Results: Students perceived ChatGPT as a useful tool, particularly in supporting the development of analytical, writing, and digital skills. Significant differences emerged across student groups. Higher levels of use and more positive perceptions were reported by freshmen, students studying in urban areas, and those with stronger economic resources. Conclusions: ChatGPT adoption and its perceived benefits vary by academic experience and socio-economic background. As the findings are based on self-reported perceptions, they reflect perceived rather than measured learning outcomes, highlighting the need for further research using objective indicators.

1. Introduction

In recent years, generative Artificial Intelligence (AI), and in particular large-language-model chatbots such as ChatGPT, has rapidly gained traction in higher education, providing students with new forms of study support and assistance (Kasneci et al., 2023; Zhai, 2023). International research shows that university students widely experiment with AI chatbots for tasks such as drafting and summarizing texts, clarifying course content, and supporting exam preparation, often valuing their immediacy, flexibility, and perceived usefulness (Gamage et al., 2023; Arum et al., 2025; Ravšelj et al., 2025). Students often describe ChatGPT as an always-available study companion that supports their learning, particularly in information-heavy or text-intensive disciplines (Bikanga Ada, 2024). At the same time, the educational use of ChatGPT raises concerns related to accuracy, academic integrity, and overreliance on automated support, as well as broader ethical issues such as bias, transparency, and data privacy (Aristovnik et al., 2024; Benke & Szőke, 2024). Existing studies suggest that students’ engagement with AI tools is not uniform but shaped by contextual factors, including institutional guidance, faculty attitudes, and national academic cultures. Where policies are unclear or instructors express ambivalence, students tend to adopt more cautious or fragmented patterns of use, whereas explicit guidelines and pedagogical modelling foster more integrated and reflective uses of AI (Petricini et al., 2025).
Despite the rapid expansion of this literature, empirical evidence from Italy remains limited. To date, only a small number of studies have investigated Italian university students’ use and perception of ChatGPT (or similar AI chatbots), and these studies are confined to health-related degree programs (Angelini et al., 2024; Tortella et al., 2025). A study conducted among medical students reported that nearly all respondents were aware of ChatGPT and that a substantial majority (around 79%) used it for study support; 81% considered AI suitable for educational purposes, 78% found it helpful, and 89% intended to continue using it (Angelini et al., 2024). Another nationwide cross-sectional survey of physiotherapy students conducted in 2024 found that although 95.3% had heard of AI chatbots, more than half (53.7%) had never used them for academic purposes; among the users, “learning support” was the most common function, whereas use during internships was rare (Tortella et al., 2025). While these studies indicate high awareness alongside cautious adoption, their narrow disciplinary focus and the absence of analyses on socio-economic inequalities limit their generalizability. This gap is particularly relevant given the Italian context, characterized by marked socio-economic disparities, a persistent digital divide, and the absence of uniform institutional orientations regarding AI use in higher education.
Against this background, the present study aims to provide a broader and more systematic analysis of ChatGPT use and perceptions among Italian university students across multiple disciplines. The study examines patterns of use, perceived usefulness for learning support and the development of transversal skills, as well as differences across socio-demographic and academic characteristics. It further explores associations between ChatGPT use, students’ perceptions, and study-related outcomes. Finally, the study investigates how students evaluate ChatGPT in comparison with their professors, particularly with regard to ease of interaction and clarity of information.
Based on these aims, three hypotheses were formulated: (1) ChatGPT use is associated with perceptions of its academic capabilities, with variations across socio-demographic and academic groups; (2) ChatGPT use and perceptions are associated with study-related outcomes, including motivation; and (3) students’ ChatGPT use is associated with evaluations of its ease of interaction and clarity compared with professors.

2. Materials and Methods

2.1. Study Design

This paper is part of a larger global project named “Students’ Perception of ChatGPT” led by the Faculty of Public Administration at the University of Ljubljana. The overarching project aims to deepen understanding of how higher education students worldwide perceive and use ChatGPT-3.5/4 within educational settings. A detailed description of the project and its main results is available in a separate publication (Ravšelj et al., 2025). As part of this project, a large-scale online survey was conducted between October and December 2023, involving 23,218 students from 109 countries and territories.
The survey involved students enrolled in university degree programs at the host universities who were at least 18 years old and legally able to provide voluntary, informed consent to complete an anonymous online questionnaire. Students were recruited using a convenience sampling method, a non-probability sampling technique in which participants are selected based on their availability and ease of access rather than through random selection. The survey was promoted during lectures and via official university communication channels. Its content was designed in cooperation with international partners to capture key dimensions related to ChatGPT. An initial version was tested with Slovenian students (Aristovnik et al., 2024), and subsequent revisions were informed by pilot feedback to enhance clarity, reliability, and relevance. To ensure global accessibility, the final survey instrument was translated by native speakers into six additional languages: Italian, Spanish, Turkish, Japanese, Arabic, and Hebrew. The Italian version of the survey was developed following a standard translation and back-translation procedure to ensure semantic equivalence with the original instrument. The survey was administered through the web platform 1KA (One Click Survey; https://www.1ka.si/d/en; accessed on 9 October 2023), which complies with the General Data Protection Regulation (GDPR), ensuring informed consent procedures and safeguarding participant anonymity and data confidentiality.
This article presents a quantitative study based on data collected from a public university in Northern Italy, which enrolls approximately 17,500 students and offers undergraduate and master’s degree programs, as well as doctoral and specialization schools. Its academic structure comprises eight departments across Social sciences, STEM and Health sciences. Data collection in Italy followed institutional procedures identical to those of the global project. Italian students participated in traditional or blended learning and were recruited via official university channels (e.g., Instagram, website). A convenience sample of students was used to allow efficient recruitment and data collection, which is appropriate given the exploratory aims of the study. This approach enabled the examination of patterns of ChatGPT use and perceptions, and their associations with study-related outcomes within the target population, while acknowledging that findings may not be generalizable to all Italian university students.

2.2. Online Survey

The questionnaire comprised 42 primarily closed-ended questions designed to explore students’ perceptions of their initial encounters with ChatGPT. It was organized into 11 thematic sections. In addition to socio-demographic and general study-related information, the survey addressed a wide range of topics connected to ChatGPT, including usage, capabilities, regulation and ethical concerns, satisfaction and attitude, study issues and outcomes, skills development, labor market and skills mismatch, and emotions. Items concerning frequency and agreement were assessed using a five-point Likert scale, ranging from 1 (strongly disagree/never) to 5 (strongly agree/always) (Croasmun & Ostrom, 2011). Details on the construction of the questionnaire and the associated dataset are available in a separate publication (Ravšelj et al., 2025) and in the Mendeley Data repository (Ravšelj et al., 2024).

2.3. Statistical Analysis

Descriptive statistics (means, standard deviations, frequencies, and percentages) were first computed for individual Likert-type items.
To evaluate whether predefined sets of items could be meaningfully combined into composite indices, separate Principal Component Analyses (PCA) were conducted for the following item groups. The list of individual items is reported in the Supplementary Materials (see Table S1).
  • Q18—How often do you use ChatGPT for the following tasks?
  • Q19—How much do you agree with the following statements related to the capabilities of ChatGPT?
  • Q24—How much do you agree with the following statements related to your satisfaction with ChatGPT?
  • Q26—How much do you agree with the following statements related to learning and academic enhancement addressed with ChatGPT?
  • Q27—How much do you agree with the following statements related to personal and professional development addressed with ChatGPT?
  • Q28—How much do you agree with the following statements related to the ability of ChatGPT to facilitate proficiency and communication skills development?
  • Q29—How much do you agree with the following statements related to the ability of ChatGPT to facilitate analytical and problem-solving skills development?
Prior to each PCA, sampling adequacy was assessed using the Kaiser–Meyer–Olkin (KMO) measure and Bartlett’s test of sphericity, with KMO values ≥ 0.60 and a significant Bartlett test indicating suitability for component analysis. PCA was selected as a data-reduction technique to derive composite indices summarizing item groups corresponding to theoretically predefined constructs, rather than modeling latent variables. Components were retained based on eigenvalues greater than 1 and inspection of scree plots. For most item groups, a single dominant component was retained; in the case of Q24, two components emerged, with the first component used to define the main composite scale. These analyses were conducted in an exploratory manner to inform the construction of composite indices.
Items were considered acceptable if they showed primary component loadings ≥ 0.40 on the first component and corrected item–total correlations ≥ 0.30. Items not meeting these criteria were removed. Internal consistency of each resulting composite scale was evaluated using Cronbach’s alpha. Composite scores were computed as the mean of the retained items, with higher scores indicating stronger endorsement of the construct reflected in each scale. The scales were treated as reflective constructs, assuming that each item reflects the underlying construct of interest (e.g., satisfaction, perceived capability). No items required reverse-coding. Missing responses were minimal, with at most one missing value per item included in the composite indices; therefore, participants were retained, and composite scores were computed based on available item responses without applying imputation procedures. Detailed PCA results, including KMO values, Bartlett’s test statistics, eigenvalues, percentage of variance explained, and ranges of component loadings, are reported in Table S2.
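To make this procedure concrete, the sketch below reimplements its main steps in Python; the analyses themselves were conducted in SPSS, so this is a reader’s reconstruction rather than the authors’ code. The item block and its column names are placeholders, not the actual questionnaire variables, and the sampling-adequacy checks (KMO, Bartlett’s test) are assumed to have been run beforehand.

    import numpy as np
    import pandas as pd

    def composite_index(items: pd.DataFrame):
        """Derive one composite index from a block of 5-point Likert items,
        following the criteria described above: a single retained first
        component, loadings >= 0.40, corrected item-total r >= 0.30."""
        complete = items.dropna()                 # PCA on complete cases
        R = np.corrcoef(complete.T)               # item correlation matrix
        eigvals, eigvecs = np.linalg.eigh(R)
        order = np.argsort(eigvals)[::-1]         # components by eigenvalue
        eigvals, eigvecs = eigvals[order], eigvecs[:, order]
        # Loadings on the first (dominant) component, retained per the
        # eigenvalue > 1 and scree-plot criteria described in the text
        loadings = eigvecs[:, 0] * np.sqrt(eigvals[0])
        loadings *= np.sign(loadings.sum())       # orient component positively

        # Corrected item-total correlation: each item vs. sum of the others
        totals = complete.sum(axis=1)
        item_total = np.array([complete[c].corr(totals - complete[c])
                               for c in complete.columns])

        keep = (np.abs(loadings) >= 0.40) & (item_total >= 0.30)
        kept = complete.loc[:, keep]

        # Cronbach's alpha for the retained items
        k = kept.shape[1]
        alpha = k / (k - 1) * (1 - kept.var(ddof=1).sum()
                               / kept.sum(axis=1).var(ddof=1))

        # Composite = mean of available responses (pandas skips NaN by
        # default), mirroring the no-imputation rule described above
        scores = items.loc[:, keep].mean(axis=1)
        return list(items.columns[keep]), float(alpha), scores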
The final composite scales were as follows:
  • Q18—Academic use of ChatGPT: higher scores indicate more frequent use of ChatGPT for academic tasks (Cronbach’s α = 0.83).
  • Q19—Perceived capabilities of ChatGPT: higher scores indicate that students perceive ChatGPT as having greater abilities in supporting academic tasks and facilitating learning (Cronbach’s α = 0.83).
  • Q24—Satisfaction with ChatGPT: higher scores indicate greater satisfaction with ChatGPT’s performance in academic contexts. Items Q24b, Q24c, and Q24d were removed because they did not load adequately on the primary component; after their removal, Cronbach’s α was 0.80.
  • Q26—Learning and academic enhancement through ChatGPT: higher scores indicate that students perceive ChatGPT as having greater abilities to enhance their learning and academic performance (Cronbach’s α = 0.92).
  • Q27—Personal and professional development through ChatGPT: higher scores indicate that students perceive ChatGPT as having greater abilities to support their personal and professional growth (Cronbach’s α = 0.92).
  • Q28—Communication and proficiency skills supported by ChatGPT: higher scores indicate that students perceive ChatGPT as having greater abilities to facilitate the development of communication and proficiency skills (Cronbach’s α = 0.85).
  • Q29—Analytical and problem-solving skills supported by ChatGPT: higher scores indicate that students perceive ChatGPT as having greater abilities to facilitate analytical and problem-solving skill development (Cronbach’s α = 0.87).
Following scale construction, the distribution of the composite indices was examined using skewness and kurtosis values, histograms, and Q–Q plots. As several variables showed non-normal distributions, non-parametric tests were used for inferential analyses. In addition to the composite scales, four single-item Likert-type variables were analysed as ordinal measures: perceived ease of interacting with ChatGPT, clarity of the information provided (both in comparison with instructors), academic motivation, and self-reported study success (1 = strongly disagree, 5 = strongly agree). Associations between composite scales and socio-demographic or academic categorical variables were examined using the Mann–Whitney U test and the Kruskal–Wallis test. Confidence intervals are not provided due to the non-parametric nature of these tests; effect sizes provide an indication of practical significance. Correlations among composite scales and ordinal variables were evaluated using Spearman’s rho. Each statistical test was chosen to correspond to the specific research hypotheses regarding the relationships between ChatGPT usage patterns, perceptions, and outcomes such as satisfaction, learning enhancement, and skill development. Given the exploratory nature of the study and the number of statistical comparisons performed, p-values should be interpreted cautiously as descriptive indicators of association rather than confirmatory evidence. Statistical significance was set at p < 0.05. All analyses were conducted using IBM SPSS Statistics, version 29.
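For illustration, the sketch below shows how an equivalent inferential pipeline could be run with SciPy instead of SPSS. The variables are simulated stand-ins, not the study data, and the group structure is arbitrary.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    use = rng.integers(1, 6, 117).astype(float)   # composite-score stand-in
    motivation = rng.integers(1, 6, 117)          # ordinal single item
    urban = rng.integers(0, 2, 117).astype(bool)  # binary grouping variable
    econ = rng.integers(0, 3, 117)                # three-level grouping variable

    # Distribution screening that motivated the non-parametric approach
    print("skewness:", stats.skew(use), "kurtosis:", stats.kurtosis(use))

    # Two-group comparison: Mann-Whitney U
    u_stat, p_mw = stats.mannwhitneyu(use[urban], use[~urban],
                                      alternative="two-sided")

    # Three-group comparison: Kruskal-Wallis H
    h_stat, p_kw = stats.kruskal(use[econ == 0], use[econ == 1],
                                 use[econ == 2])

    # Monotonic association between ordinal measures: Spearman's rho
    rho, p_rho = stats.spearmanr(use, motivation)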

3. Results

The mean age of the sample was 20.75 years (SD = 3.87). Gender distribution was balanced, with 50.4% identifying as male, 47.9% as female, and 1.7% preferring not to disclose their gender. Approximately 32% of participants lived in an urban area, while the remainder resided in suburban or rural areas. Overall, 28.2% of students were employed, mainly in part-time positions (20.5%). Regarding economic status, 18.8% reported being below or significantly below the average, whereas most students described their economic condition as average (64.1%). Table 1 reports the academic characteristics of the sample.
Concerning students’ perceptions of their academic experience, 55.6% agreed or strongly agreed with the statement “I am successful in my studies”, and 59.8% agreed or strongly agreed with “I am motivated to study.” More than half of students (54.7%) reported regularly attending classes. Regarding learning preferences, most participants indicated a preference for blended learning (71.8%), whereas only 3.4% would prefer fully online learning.
Table 2 summarizes patterns of ChatGPT use and general impressions. When asked whether interacting with ChatGPT was easier than interacting with others, 24.1% of students reported it was easier than interacting with professors; this percentage decreased to 11.2% when the comparison was made with university peers. Additionally, 19.8% of students perceived the information obtained from ChatGPT as clearer than that provided by professors.
The most frequent academic tasks for which ChatGPT was used were summarizing information, study assistance, and research assistance. Commonly reported forms of academic support included help with understanding instructions, summarizing extensive information, simplifying complex content, and providing information efficiently.
Regarding skills development, students reported that ChatGPT primarily supported digital content creation skills, information literacy, and foreign language proficiency (related to communication skills). Furthermore, they indicated support for artificial intelligence literacy, programming skills, and data analysis skills (related to analytical and problem-solving abilities). Detailed information on the specific tasks, academic capabilities, and perceived skill support is provided in the Supplementary Materials (Tables S3 and S4).
Descriptive statistics for the composite indices are reported in Table 3. Academic use of ChatGPT (Q18) was low, with a median of 1.83 (IQR = 1.33–2.33), whereas all other indices (Q19–Q29) had medians ranging from 3.11 to 3.60, with Interquartile Ranges (IQRs) ranging from 2.50–3.56 to 3.30–4.08. These findings suggest limited academic use but generally positive evaluations of ChatGPT’s perceived capabilities and support for learning and personal development.
Despite these perceived benefits, students also expressed ethical concerns: 44.4% believed they should consult their professors before using ChatGPT, and around one in four felt they should disclose their use of ChatGPT to professors. Specific concerns included potential misinformation, promotion of cheating, privacy violations, and plagiarism; however, overall levels of concern were relatively low (see Table S5, Supplementary Materials).
Non-parametric comparisons were conducted to examine whether composite indices differed across key socio-demographic and academic characteristics.
As shown in Table 4, no significant differences emerged across gender for any outcome. Freshmen reported higher academic use of ChatGPT (MW Z = −2.634, p = 0.008) and greater perception of its support for analytical and problem-solving skills development (MW Z = −2.018, p = 0.044). Regarding field of study, the only significant difference was observed for perceived support of analytical and problem-solving skills (MW Z = −2.392, p = 0.017), with social sciences students reporting higher scores than STEM and Health sciences students. Employment status was not significantly associated with ChatGPT use or perceptions. The largest differences were observed for area of residence and economic status. Students living in urban areas reported higher academic use of ChatGPT (MW Z = −2.306, p = 0.021), greater perceived academic support (MW Z = −2.239, p = 0.025), higher satisfaction with its academic use (MW Z = −2.340, p = 0.019), and stronger perceptions that ChatGPT supports learning and academic performance (MW Z = −3.023, p = 0.003), personal and professional growth (MW Z = −3.374, p < 0.001), and analytical and problem-solving skill development (MW Z = −3.306, p < 0.001). Regarding economic status, students reporting above-average conditions showed higher academic use (KW H = 6.730, p = 0.035) and satisfaction (KW H = 7.261, p = 0.027) compared to those with average or below-average conditions. Students reporting below-average economic conditions perceived ChatGPT as less capable of supporting learning and academic performance (KW H = 13.285, p = 0.001) and promoting personal and professional growth (KW H = 13.106, p = 0.001) compared with average- and above-average-status peers.
Effect sizes were calculated for all non-parametric comparisons (r for Mann–Whitney tests, ε² for Kruskal–Wallis tests). Effect sizes were generally negligible to small for Mann–Whitney tests (|r| = 0.005–0.312), with a few moderate effects. For Kruskal–Wallis tests, effect sizes were mostly small (ε² = 0.010–0.046), with a few approaching medium magnitude (ε² = 0.097–0.099). These values indicate that while some differences were statistically significant, the practical significance was modest.
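For readers who wish to verify these values, the effect sizes in Table 4 can be reproduced from the reported test statistics as follows. This is a reconstruction, not the authors’ code; notably, the Kruskal–Wallis column is reproduced by the H-based estimate (H − k + 1)/(n − k), an index sometimes reported as η²H rather than ε².

    import math

    def mw_r(z: float, n: int) -> float:
        """Mann-Whitney effect size r = Z / sqrt(N)."""
        return z / math.sqrt(n)

    def kw_effect(h: float, n: int, k: int = 3) -> float:
        """H-based effect size (H - k + 1) / (n - k); reproduces the
        Kruskal-Wallis column of Table 4."""
        return (h - k + 1) / (n - k)

    # Spot checks against Table 4:
    print(round(mw_r(-2.634, 116), 3))       # first-year, academic use: -0.245
    print(round(kw_effect(13.285, 117), 3))  # economic status, learning: 0.099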
Spearman’s rank-order correlations (ρ) were used to examine associations among ChatGPT use, perceptions, and study-related outcomes (Table 5). Academic use was positively associated with perceived capabilities, satisfaction, learning and academic support, personal and professional development, communication and proficiency skills, and analytical/problem-solving skills, with correlations ranging from weak to moderate (ρ = 0.220–0.448). The strongest associations were observed for learning and academic support (ρ = 0.448) and personal and professional development (ρ = 0.438). Academic use was also modestly related to perceptions of ease of interaction and clarity of information compared with professors (ρ = 0.257; 0.287) and to study motivation (ρ = 0.282).
Perceptions and satisfaction with ChatGPT were strongly interrelated. Students who perceived ChatGPT as highly capable in supporting academic tasks and facilitating learning also reported higher satisfaction and greater perceived benefits across learning support, personal and professional development, communication and proficiency skills, and analytical/problem-solving skills, with correlations mostly in the moderate-to-strong range (ρ = 0.514–0.664).
When considering comparisons with professors, both ease of interaction and clarity of information were positively associated with other positive perceptions of ChatGPT (ρ = 0.205–0.552), suggesting that students who found ChatGPT easier to use or clearer than professors also tended to report higher satisfaction and perceived learning benefits.
Finally, study-related outcomes showed weaker links with ChatGPT engagement. Self-reported academic success showed no significant associations with ChatGPT-related variables, whereas study motivation was modestly positively associated with academic use (ρ = 0.282) and some perceptions of ChatGPT, including perceived capabilities (ρ = 0.216) and learning support (ρ = 0.219).

4. Discussion

This study aimed to explore patterns of use and perceptions related to ChatGPT in a sample of Italian university students. Overall, students reported a generally positive experience with ChatGPT, perceiving it as a tool offering multiple potential benefits. However, some concerns regarding its use were also highlighted. This duality reflects recent international reports indicating that students regard generative AI both as a learning support and as a potential academic risk (Dos, 2025).
In this study, nearly one in four students reported finding it easier to interact with ChatGPT than with their professors, and almost one in five perceived ChatGPT’s explanations as clearer. These findings are consistent with recent surveys in which students favored AI-generated explanations for their clarity and immediacy (Fußhöller et al., 2025; Ravšelj et al., 2025). From a pedagogical perspective, such perceptions may be interpreted in light of theories of instructional scaffolding and guided learning (Vygotskij, 1934), as ChatGPT can provide immediate, structured, and adaptive explanations that align with learners’ momentary needs. Students’ emphasis on ChatGPT’s capabilities, particularly in summarizing, simplifying complex content, and providing information efficiently, aligns with the cognitive offloading hypothesis, which suggests that AI tools can reduce mental workload and streamline information processing (Grinschgl & Neubauer, 2022; Gerlich, 2025).
Another interesting result is the variety of skills for which students reported potential benefits from using ChatGPT. Beyond information literacy and foreign-language proficiency, students mentioned AI literacy, programming, and data analysis, skills increasingly recognized as essential for the future labor market (World Economic Forum, 2023). These findings suggest that students may perceive ChatGPT not merely as a writing or summarizing tool, but also as a resource that could support the development of skills relevant to employability. Further research is needed to substantiate these perceptions. Students also noted some potential benefits in the domain of analytical and problem-solving skills. While ChatGPT does not directly teach these competencies, it provides structured explanations and helps break down complex tasks, which may support certain students’ metacognitive processes (Contel & Cusi, 2025). The relationship between ChatGPT use and potential improvements in metacognitive skills is a promising area that warrants further investigation in future studies.
Despite its perceived usefulness, students reported some ethical concerns regarding ChatGPT. Although overall levels of concern were relatively low, the most frequently reported worries related to the possibility that ChatGPT might provide inaccurate information, facilitate cheating or plagiarism, or raise privacy issues. Similar concerns have been emphasized in multiple international studies, which show that accuracy and misinformation remain among students’ most salient worries (Ravšelj et al., 2025) and that students often require clear policies on AI use (Johnston et al., 2024). Notably, nearly half of the Italian students believed they should consult professors before using ChatGPT, suggesting a perceived need for institutional guidance. This finding mirrors observations from other universities, where a lack of clear AI-use policies has prompted students to seek explicit approval or reassurance from instructors (Johnston et al., 2024). These patterns have important implications for educational equity, as students’ ability to benefit from AI tools may depend on access to guidance and support, potentially disadvantaging those with lower digital literacy or weaker institutional connections. Universities therefore have a responsibility to provide clear policies, structured training, and equitable access to AI resources, ensuring that all students can engage safely and effectively with emerging technologies.
A striking contribution of this study is the identification of contextual differences in AI adoption, most notably along socio-economic and residence-related lines. Students living in urban areas and those reporting above-average economic conditions reported higher usage, greater satisfaction, and stronger perceived benefits. While these findings are consistent with the idea that AI adoption is shaped by the digital divide, this study did not directly measure technological access or related socio-economic factors. Future research should investigate whether students with greater access to technology and resources are indeed more likely to engage deeply with AI tools. Nonetheless, the observed patterns suggest that students with fewer resources or living in less connected areas may have more limited engagement with AI tools. Students reporting below-average economic conditions also perceived ChatGPT as less capable of supporting learning and personal development, highlighting a potential risk of exacerbating existing educational inequalities (Carter et al., 2020). From a theoretical and policy-oriented perspective, these disparities point to the need to frame AI adoption in higher education not merely as a matter of individual choice or innovation, but as a structural issue shaped by socio-economic stratification and territorial inequalities, calling for institutional and public interventions aimed at ensuring equitable opportunities for learning and participation. Moreover, universities should consider implementing strategies to promote equitable engagement, including structured guidance on responsible AI use, training to improve digital literacy, and targeted initiatives to support students with fewer technological resources. Clear institutional policies, accessible training, and support mechanisms could help ensure that all students, regardless of socio-economic background or geographic location, can benefit from generative AI tools safely and effectively. Future research should investigate how technological access, digital skills, and institutional support interact with socio-economic conditions to influence students’ use of AI in learning, and whether these factors contribute to narrowing or widening existing educational disparities.
This study shows that freshmen reported higher usage and stronger beliefs in ChatGPT’s ability to support analytical skills. While this study did not directly measure students’ study strategies or the development of underlying skills, it is possible that first-year students may rely more on external support such as ChatGPT, which offers structured explanations, step-by-step examples, and decomposition of complex tasks. This hypothesis warrants further investigation in future research, ideally using appropriate proxies or direct measures of foundational skills and study strategies.
The correlational analyses revealed a coherent pattern: greater use of ChatGPT was linked to more positive perceptions, higher satisfaction, and stronger perceived learning support. These moderate correlations are consistent with cross-sectional research suggesting that increased familiarity with AI tools is linked to higher levels of trust and perceived usefulness among students (Zhang et al., 2025). However, academic success showed no association with ChatGPT use or perceptions, indicating that students do not directly attribute their academic performance to generative AI. Similar findings have been reported in other surveys, where AI use and its perceived usefulness for schoolwork were not significantly associated with academic achievement, despite students reporting benefits in task completion and study efficiency (Klarin et al., 2024).
Taken together, these findings suggest that Italian university students perceive ChatGPT as a useful academic support tool, particularly valued for clarity, efficiency, and its potential applicability across a variety of skills. Overall levels of concern regarding ethical and pedagogical issues were relatively low, although students most frequently mentioned risks related to inaccurate information, cheating, plagiarism, and privacy. This combination of moderate perceived benefits and modest concerns may reflect the current transitional phase of AI integration in higher education, where institutional guidance, pedagogical frameworks, and digital literacy initiatives are still evolving.
Several limitations should be acknowledged. The use of convenience sampling and voluntary participation may have introduced selection bias, as students with greater interest in ChatGPT were likely overrepresented. Recruitment via lectures and institutional channels likely resulted in a sample of relatively engaged students, which calls for caution when interpreting the positive perceptions associated with ChatGPT use. In addition, the study does not fully address potential common method bias or social desirability effects, which may have influenced self-reported measures. The relatively small sample sizes for subgroup analyses limit the robustness of these comparisons. Moreover, the study was conducted at a single university in Northern Italy, which restricts the generalizability of the findings to other academic and cultural contexts. The cross-sectional design precludes causal inferences regarding ChatGPT use and perceived benefits, and the assessments of skill development relied on students’ self-reports rather than objective performance measures. Another limitation concerns the composite indices: although they represent theoretically distinct domains, some degree of shared variance is expected due to their common focus on students’ experiences with ChatGPT. Accordingly, these constructs should be interpreted as related but non-redundant dimensions rather than as fully independent factors. Finally, data were collected during the early phase of ChatGPT adoption, and both usage patterns and perceptions are likely to evolve as students and institutions gain more experience with generative AI tools.

5. Conclusions

This study suggests that ChatGPT is beginning to integrate into the academic routines of Italian university students and is generally perceived as a useful tool, particularly for its clarity, efficiency, and perceived support across a range of skills, including information literacy, analytical reasoning, and problem-solving. Although ethical concerns related to misinformation, plagiarism, and privacy issues were overall limited, their presence underscores the need for clear and shared institutional guidelines for the use of generative AI in academic contexts. The most salient differences observed across student groups—particularly the higher levels of use and perceived benefits reported by urban students and those with stronger economic resources—suggest that AI adoption in higher education should not be understood merely as a matter of individual choice or technological innovation. Rather, it emerges as a structural phenomenon shaped by socio-economic stratification and territorial inequalities, thereby calling for coordinated institutional and public interventions aimed at ensuring equitable opportunities for learning and participation. In this context, universities can play a crucial role by adopting equity-oriented policies, including tailored support for students with fewer resources, sustained investment in accessible and reliable digital infrastructures, and the provision of guidance on effective, ethical, and responsible AI-supported study practices. Beyond regulatory measures, the findings highlight the importance of structured AI literacy initiatives for both students and faculty. Such initiatives could promote a more critical and reflective use of generative AI, addressing not only technical skills but also ethical awareness and pedagogical integration. Future research would benefit from longitudinal and mixed-methods designs to assess how patterns of AI use and perceptions evolve over time and to evaluate more directly the relationship between generative AI engagement and learning outcomes. Taken together, these results suggest that generative AI is becoming an increasingly visible component of higher education and that proactive, evidence-informed institutional action may be important to support its responsible, effective, and equitable integration.

Supplementary Materials

The following supporting information can be downloaded at https://www.mdpi.com/article/10.3390/educsci16020258/s1. Table S1: Selected questionnaire items considered for composite index construction. Table S2: Principal component analysis and reliability indices for composite indices. Table S3: Students’ reported frequency of ChatGPT use for academic tasks. Table S4: Students’ agreement on ChatGPT’s ability to support learning and skill development. Table S5: Students’ reported ethical and other concerns related to ChatGPT.

Author Contributions

Conceptualization, C.B.; Formal analysis, J.D.; Investigation, C.B. and A.G.; Data curation, J.D.; Writing—original draft, C.B. and J.D.; Writing—review and editing, C.B., J.D. and A.G.; Supervision, C.B. and A.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was performed in accordance with the ethical standards as laid down in the 1964 Declaration of Helsinki and its later amendments. Students were informed that their participation was confidential, anonymous, not compulsory, and that their personal data would be respected. The study protocol was approved by the Ethics Committee of the University of Verona (Ethical Clearance Number: 1816/2023 Prot n. 466258, 23 November 2023), which conducted the translation from Italian.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data that support the findings of this study can be accessed in the Mendeley Data repository (https://data.mendeley.com/datasets/ymg9nsn6kn/2; accessed on 13 August 2024).

Acknowledgments

The authors thank the members of the CovidSocLab Team for including them in the CovidSocLab initiatives and, in particular, in the project titled “Students’ Perceptions of ChatGPT.” Moreover, the authors express their gratitude to the anonymous university students who participated in the survey for providing valuable insights into their early perceptions of ChatGPT.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Angelini, M., Ugolini, E. M., & Dimitrio, A. E. (2024). ChatGPT: Una risorsa formativa per gli studenti di medicina italiani? Recenti Progressi in Medicina, 115(11), 556–557. [Google Scholar] [CrossRef] [PubMed]
  2. Aristovnik, A., Umek, L., Brezovar, N., Keržič, D., & Ravšelj, D. (2024). The role of ChatGPT in higher education: Some reflections from public administration students. In S. K. S. Cheung, F. L. Wang, N. Paoprasert, P. Charnsethikul, K. C. Li, & K. Phusavat (Eds.), Technology in education: Innovative practices for the new normal. 6th international conference on technology in education, ICTE 2023, Hong Kong, China, 19–21 December 2023, proceedings (Volume 1974, pp. 254–263). Springer. Communications in Computer and Information Science. Available online: https://www.springerprofessional.de/en/technology-in-education-innovative-practices-for-the-new-normal/26280922 (accessed on 2 February 2026).
  3. Arum, R., Calderon Leon, M., Li, X., & Lopes, J. (2025). ChatGPT early adoption in higher education: Variation in student usage, instructional support, and educational equity. AERA Open, 11(1), 12. [Google Scholar] [CrossRef]
  4. Benke, E., & Szőke, A. (2024). Academic integrity in the time of artificial intelligence: Exploring student attitudes. Italian Journal of Sociology of Education, 16(2), 91–108. [Google Scholar] [CrossRef]
  5. Bikanga Ada, M. (2024). It helps with crap lecturers and their low effort: Investigating computer science students’ perceptions of using ChatGPT for learning. Education Sciences, 14(10), 1106. [Google Scholar] [CrossRef]
  6. Carter, L., Liu, D., & Cantrell, C. (2020). Exploring the intersection of the digital divide and artificial intelligence: A hermeneutic literature review. AIS Transactions on Human-Computer Interaction, 12(4), 253–275. [Google Scholar] [CrossRef]
  7. Contel, F., & Cusi, A. (2025). Investigating the role of ChatGPT in supporting metacognitive processes during problem-solving activities. Digital Experiences in Mathematics Education, 11(1), 167–191. [Google Scholar] [CrossRef]
  8. Croasmun, J. T., & Ostrom, L. (2011). Using Likert-type scales in the social sciences. Journal of Adult Education, 40(1), 19–22. [Google Scholar]
  9. Dos, I. (2025). A systematic review of research on ChatGPT in higher education. The European Educational Researcher, 8(2), 59–76. [Google Scholar] [CrossRef]
  10. Fußhöller, A., Lechner, F., Schlicker, N., Muehlensiepen, F., Mayr, A., Kuhn, S., Hirsch, M. C., & Knitza, J. (2025). Perceptions, usage, and educational impact of ChatGPT among medical students in Germany: Cross-sectional mixed methods survey. JMIR Formative Research, 11(9), e81484. [Google Scholar] [CrossRef] [PubMed]
  11. Gamage, K. A. A., Dehideniya, S. C. P., Xu, Z., & Tang, X. (2023). ChatGPT and higher education assessments: More opportunities than concerns? Journal of Applied Learning & Teaching, 6(2), 358–369. [Google Scholar] [CrossRef]
  12. Gerlich, M. (2025). AI tools in society: Impacts on cognitive offloading and the future of critical thinking. Societies, 15(1), 6. [Google Scholar] [CrossRef]
  13. Grinschgl, S., & Neubauer, A. C. (2022). Supporting cognition with modern technology: Distributed cognition today and in an AI-enhanced future. Frontiers in Artificial Intelligence, 5, 908261. [Google Scholar] [CrossRef] [PubMed]
  14. Johnston, H., Wells, R. F., Shanks, E. M., Boey, T., & Parsons, B. N. (2024). Student perspectives on the use of generative artificial intelligence technologies in higher education. International Journal for Educational Integrity, 20, 2. [Google Scholar] [CrossRef]
  15. Kasneci, E., Sessler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., Gasser, U., Groh, G., Günnemann, S., Hüllermeier, E., Krusche, S., Kutyniok, G., Michaeli, T., Nerdel, C., Pfeffer, J., Poquet, O., Poquet, M., Schmidt, A., Seidel, T., … Kasneci, G. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences, 103, 102274. [Google Scholar] [CrossRef]
  16. Klarin, J., Hoff, E., Larsson, A., & Daukantaitė, D. (2024). Adolescents’ use and perceived usefulness of generative AI for schoolwork: Exploring their relationships with executive functioning and academic achievement. Frontiers in Artificial Intelligence, 7, 1415782. [Google Scholar] [CrossRef] [PubMed]
  17. Petricini, T., Zipf, J., & Wu, H. (2025). RESEARCH-AI: Communicating academic honesty: Teacher messages and student perceptions about generative AI. Frontiers in Communication, 10, 1544430. [Google Scholar] [CrossRef]
  18. Ravšelj, D., Aristovnik, A., Keržič, D., Tomaževič, N., Umek, L., Brezovar, N., Abdulla, A. A., Akopyan, A., Aldana Segura, M. W., AlHumaid, J., Allam, M. F., Alló, M., Andoh, R. P. K., Andronic, O., Arthur, Y. D., Aydın, F., Badran, A., Balbontín-Alvarado, R., Ben Saad, H., … Troitschanskaia, O. (2024). Higher education students’ early perceptions of ChatGPT: Global survey data. Mendeley Data, Version 1. Elsevier. [Google Scholar] [CrossRef]
  19. Ravšelj, D., Keržič, D., Tomaževič, N., Umek, L., Brezovar, N., Iahad, N. A., Abdulla, A. A., Akopyan, A., Segura, M. W. A., AlHumaid, J., Allam, M. F., Alló, M., Andoh, R. P. K., Andronic, O., Arthur, Y. D., Aydın, F., Badran, A., Balbontín-Alvarado, R., Ben Saad, H., … Aristovnik, A. (2025). Higher education students’ perceptions of ChatGPT: A global study of early reactions. PLoS ONE, 20(2), e0315011. [Google Scholar] [CrossRef] [PubMed]
  20. Tortella, F., Palese, A., Turolla, A., Castellini, G., Pillastrini, P., Landuzzi, M. G., Cook, C., Galeoto, G., Giovannico, G., Rodeghiero, L., Gianola, S., & Rossettini, G. (2025). Knowledge and use, perceptions of benefits and limitations of artificial intelligence chatbots among Italian physiotherapy students: A cross-sectional national study. BMC Medical Education, 25, 572. [Google Scholar] [CrossRef] [PubMed]
  21. Vygotskij, L. S. (1934). Pensiero e linguaggio (L. Mecacci, Ed.). Laterza. [Google Scholar]
  22. World Economic Forum. (2023). The future of jobs report 2023. Cologny/Geneva, Switzerland. Available online: https://www.weforum.org/publications/the-future-of-jobs-report-2023/ (accessed on 30 April 2023).
  23. Zhai, X. (2023). ChatGPT and AI: The game changer for education. In ChatGPT: Reforming education on five aspects (pp. 16–17). Shanghai Education. Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4389098 (accessed on 2 February 2026). [Google Scholar]
  24. Zhang, Y., Guo, J., Wang, Y., Li, S., Yang, Q., Zhang, J., & Lu, Z. (2025). Understanding trust and willingness to use GenAI tools in higher education: A SEM-ANN approach based on the S-O-R framework. Systems, 13(10), 855. [Google Scholar] [CrossRef]
Table 1. Academic characteristics.
n (%)
Student status (N = 117)
Full-time students 113 (96.6)
Part-time students 4 (3.4)
Level of study (N = 117)
Undergraduate 102 (87.2)
Postgraduate 15 (12.6)
Field of study (N = 116)
Social sciences 39 (33.6)
STEM and Health sciences 77 (66.4)
First-year student (N = 116)
Yes 91 (78.4)
No 25 (21.6)
Perceived difficulty of the study program (N = 117)
Easy 10 (8.5)
Just about right 22 (18.8)
Challenging 85 (72.7)
Table 2. Students’ experience with ChatGPT.
n (%)
Extent of use of ChatGPT (N = 117)
Rarely 41 (35.0)
Occasionally 31 (26.5)
Moderately 29 (24.8)
Considerably 10 (8.5)
Extensively 6 (5.1)
Experience with ChatGPT (N = 116)
Bad 7 (6.0)
Neutral 32 (27.6)
Good 66 (56.9)
Very good 11 (9.5)
Version of ChatGPT used (N = 116)
ChatGPT-3.5 (free version) 105 (90.5)
ChatGPT-4 (with a subscription) 5 (4.3)
Both 6 (5.2)
Where they first learned about ChatGPT (N = 117)
From mainstream media/news 24 (20.5)
On social media 48 (41.0)
In class and/or at work 13 (11.1)
From friends and/or family 32 (27.4)
Table 3. Composite indices scores.

Composite Indices | Mean (SD) | Median | IQR | Range
Q18—Academic use of ChatGPT | 1.91 (0.69) | 1.83 | 1.33–2.33 | 1–5
Q19—Perceived capabilities of ChatGPT | 3.66 (0.58) | 3.60 | 3.30–4.08 | 1–5
Q24—Satisfaction with ChatGPT | 3.33 (0.78) | 3.50 | 2.75–4.00 | 1–5
Q26—Learning and academic enhancement through ChatGPT | 3.35 (0.77) | 3.35 | 2.93–3.80 | 1–5
Q27—Personal and professional development through ChatGPT | 3.17 (0.74) | 3.20 | 2.80–3.60 | 1–5
Q28—Communication and proficiency skills supported by ChatGPT | 3.25 (0.67) | 3.33 | 2.89–3.67 | 1–5
Q29—Analytical and problem-solving skills supported by ChatGPT | 3.05 (0.74) | 3.11 | 2.50–3.56 | 1–5
Table 4. Non-parametric comparisons of ChatGPT use and perceptions across grouping variables.

Outcome Variable | Test Statistic | p | Effect Size

Gender: Male (n = 58) vs. Female (n = 56)
Academic use of ChatGPT | −0.054 a | 0.957 | −0.005
Perceived capabilities of ChatGPT | 0.182 a | 0.856 | 0.017
Satisfaction with ChatGPT | 1.037 a | 0.300 | 0.097
Learning and academic enhancement through ChatGPT | 1.380 a | 0.167 | 0.129
Personal and professional development through ChatGPT | 1.136 a | 0.256 | 0.106
Communication and proficiency skills supported by ChatGPT | 1.455 a | 0.146 | 0.136
Analytical and problem-solving skills supported by ChatGPT | 0.381 a | 0.703 | 0.036

First-year student: Yes (n = 91) vs. No (n = 25)
Academic use of ChatGPT | −2.634 a | 0.008 | −0.245
Perceived capabilities of ChatGPT | −0.540 a | 0.589 | −0.050
Satisfaction with ChatGPT | −0.235 a | 0.814 | −0.022
Learning and academic enhancement through ChatGPT | −0.105 a | 0.916 | −0.010
Personal and professional development through ChatGPT | −0.632 a | 0.528 | −0.059
Communication and proficiency skills supported by ChatGPT | −1.442 a | 0.149 | −0.134
Analytical and problem-solving skills supported by ChatGPT | −2.018 a | 0.044 | −0.187

Field of study: Social sciences (n = 39) vs. STEM and Health sciences (n = 77)
Academic use of ChatGPT | −0.908 a | 0.364 | −0.084
Perceived capabilities of ChatGPT | −0.334 a | 0.739 | −0.031
Satisfaction with ChatGPT | 0.445 a | 0.656 | 0.041
Learning and academic enhancement through ChatGPT | −0.740 a | 0.459 | −0.069
Personal and professional development through ChatGPT | −1.755 a | 0.079 | −0.163
Communication and proficiency skills supported by ChatGPT | −1.060 a | 0.289 | −0.098
Analytical and problem-solving skills supported by ChatGPT | −2.392 a | 0.017 | −0.222

Area of residence: Urban (n = 38) vs. Suburban or Rural (n = 79)
Academic use of ChatGPT | −2.306 a | 0.021 | −0.213
Perceived capabilities of ChatGPT | −2.239 a | 0.025 | −0.207
Satisfaction with ChatGPT | −2.340 a | 0.019 | −0.216
Learning and academic enhancement through ChatGPT | −3.023 a | 0.003 | −0.279
Personal and professional development through ChatGPT | −3.374 a | <0.001 | −0.312
Communication and proficiency skills supported by ChatGPT | −1.687 a | 0.092 | −0.156
Analytical and problem-solving skills supported by ChatGPT | −3.306 a | <0.001 | −0.306

Employment status: Have a job (n = 33) vs. Do not have a job (n = 84)
Academic use of ChatGPT | −0.931 a | 0.352 | −0.086
Perceived capabilities of ChatGPT | −0.276 a | 0.783 | −0.026
Satisfaction with ChatGPT | −0.304 a | 0.761 | −0.028
Learning and academic enhancement through ChatGPT | −0.264 a | 0.792 | −0.024
Personal and professional development through ChatGPT | 0.775 a | 0.450 | 0.072
Communication and proficiency skills supported by ChatGPT | −0.362 a | 0.717 | −0.033
Analytical and problem-solving skills supported by ChatGPT | 0.288 a | 0.773 | 0.027

Economic status: Below average (n = 22) vs. Average (n = 75) vs. Above average (n = 20)
Academic use of ChatGPT | 6.730 b | 0.035 | 0.041
Perceived capabilities of ChatGPT | 3.099 b | 0.212 | 0.010
Satisfaction with ChatGPT | 7.261 b | 0.027 | 0.046
Learning and academic enhancement through ChatGPT | 13.285 b | 0.001 | 0.099
Personal and professional development through ChatGPT | 13.106 b | 0.001 | 0.097
Communication and proficiency skills supported by ChatGPT | 5.440 b | 0.066 | 0.030
Analytical and problem-solving skills supported by ChatGPT | 5.897 b | 0.052 | 0.034
Note: a Mann–Whitney U standardized Z test, with effect size r. b Kruskal–Wallis H test (df = 2), with effect size ε2. Effect size interpretation follows Cohen’s conventions: r = 0.10 small, 0.30 medium, 0.50 large; ε2 = 0.01 small, 0.06 medium, 0.14 large. Positive and negative r values indicate the direction of the effect, while ε2 is always positive.
Table 5. Spearman correlations.

Variable | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10
1. Academic use of ChatGPT
2. Perceived capabilities of ChatGPT | 0.284 **
3. Satisfaction with ChatGPT | 0.422 *** | 0.612 ***
4. Learning and academic enhancement through ChatGPT | 0.448 *** | 0.664 *** | 0.548 ***
5. Personal and professional development through ChatGPT | 0.438 *** | 0.603 *** | 0.458 *** | 0.825 ***
6. Communication and proficiency skills supported by ChatGPT | 0.293 ** | 0.523 *** | 0.382 *** | 0.577 *** | 0.550 ***
7. Analytical and problem-solving skills supported by ChatGPT | 0.220 * | 0.514 *** | 0.371 *** | 0.516 *** | 0.539 *** | 0.518 ***
8. Ease of interaction—ChatGPT easier than professors a | 0.257 ** | 0.165 | 0.336 *** | 0.300 ** | 0.235 * | 0.186 * | 0.141
9. Clarity of information—ChatGPT clearer than professors a | 0.287 ** | 0.125 | 0.552 *** | 0.205 * | 0.168 | 0.243 ** | 0.177 | 0.532 ***
10. Self-reported academic success b | 0.110 | 0.177 | 0.142 | 0.106 | 0.114 | −0.017 | 0.065 | −0.050 | −0.119
11. Study motivation b | 0.282 ** | 0.216 * | 0.111 | 0.219 * | 0.154 | 0.144 | 0.124 | −0.043 | −0.118 | 0.441 ***
a “Ease of interaction” and “Clarity of information” are ordinal variables measured on a 5-point Likert scale (1 = Strongly disagree, 5 = Strongly agree). b “Motivation to study” and “Self-reported study success” are ordinal variables measured on a 5-point Likert scale (1 = Strongly disagree, 5 = Strongly agree, higher scores indicate greater motivation or perceived academic success). * p < 0.05; ** p < 0.01; *** p < 0.001.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Buizza, C.; Dagani, J.; Ghilardi, A. Is the Rise of Artificial Intelligence Redefining Italian University Students’ Learning Experiences? Perceptions, Practices, and the Future of Education. Educ. Sci. 2026, 16, 258. https://doi.org/10.3390/educsci16020258

AMA Style

Buizza C, Dagani J, Ghilardi A. Is the Rise of Artificial Intelligence Redefining Italian University Students’ Learning Experiences? Perceptions, Practices, and the Future of Education. Education Sciences. 2026; 16(2):258. https://doi.org/10.3390/educsci16020258

Chicago/Turabian Style

Buizza, Chiara, Jessica Dagani, and Alberto Ghilardi. 2026. "Is the Rise of Artificial Intelligence Redefining Italian University Students’ Learning Experiences? Perceptions, Practices, and the Future of Education" Education Sciences 16, no. 2: 258. https://doi.org/10.3390/educsci16020258

APA Style

Buizza, C., Dagani, J., & Ghilardi, A. (2026). Is the Rise of Artificial Intelligence Redefining Italian University Students’ Learning Experiences? Perceptions, Practices, and the Future of Education. Education Sciences, 16(2), 258. https://doi.org/10.3390/educsci16020258
