1. Introduction
In recent years, generative Artificial Intelligence (AI), and in particular large-language-model chatbots such as ChatGPT, has rapidly gained traction in higher education, providing students with new forms of study support and assistance (Kasneci et al., 2023; Zhai, 2023). International research shows that university students widely experiment with AI chatbots for tasks such as drafting and summarizing texts, clarifying course content, and supporting exam preparation, often valuing their immediacy, flexibility, and perceived usefulness (Gamage et al., 2023; Arum et al., 2025; Ravšelj et al., 2025). Students often describe ChatGPT as an always-available study companion that supports their learning, particularly in information-heavy or text-intensive disciplines (Bikanga Ada, 2024). At the same time, the educational use of ChatGPT raises concerns related to accuracy, academic integrity, and overreliance on automated support, as well as broader ethical issues such as bias, transparency, and data privacy (Aristovnik et al., 2024; Benke & Szőke, 2024). Existing studies suggest that students’ engagement with AI tools is not uniform but shaped by contextual factors, including institutional guidance, faculty attitudes, and national academic cultures. Where policies are unclear or instructors express ambivalence, students tend to adopt more cautious or fragmented patterns of use, whereas explicit guidelines and pedagogical modelling foster more integrated and reflective uses of AI (Petricini et al., 2025).
Despite the rapid expansion of this literature, empirical evidence from Italy remains limited. To date, only a small number of studies have investigated Italian university students’ use and perception of ChatGPT (or similar AI chatbots), and these studies are confined to health-related degree programs (Angelini et al., 2024; Tortella et al., 2025). A study conducted among medical students reported that nearly all respondents were aware of ChatGPT and that a substantial majority (around 79%) used it for study support; 81% considered AI suitable for educational purposes, 78% found it helpful, and 89% intended to continue using it (Angelini et al., 2024). Another nationwide cross-sectional survey of physiotherapy students conducted in 2024 found that although 95.3% had heard of AI chatbots, more than half (53.7%) had never used them for academic purposes; among the users, “learning support” was the most common function, whereas use during internships was rare (Tortella et al., 2025). While these studies indicate high awareness alongside cautious adoption, their narrow disciplinary focus and the absence of analyses on socio-economic inequalities limit their generalizability. This gap is particularly relevant given the Italian context, characterized by marked socio-economic disparities, a persistent digital divide, and the absence of uniform institutional orientations regarding AI use in higher education.
Against this background, the present study aims to provide a broader and more systematic analysis of ChatGPT use and perceptions among Italian university students across multiple disciplines. The study examines patterns of use, perceived usefulness for learning support and the development of transversal skills, as well as differences across socio-demographic and academic characteristics. It further explores associations between ChatGPT use, students’ perceptions, and study-related outcomes. Finally, the study investigates how students evaluate ChatGPT in comparison with their professors, particularly with regard to ease of interaction and clarity of information.
Based on these aims, three hypotheses were formulated: (1) ChatGPT use is associated with perceptions of its academic capabilities, with variations across socio-demographic and academic groups; (2) ChatGPT use and perceptions are associated with study-related outcomes, including motivation; and (3) students’ ChatGPT use is associated with evaluations of its ease of interaction and clarity compared with professors.
2. Materials and Methods
2.1. Study Design
This paper is part of a larger global project named “Students’ Perception of ChatGPT” led by the Faculty of Public Administration at the University of Ljubljana. The overarching project aims to deepen understanding of how higher education students worldwide perceive and use ChatGPT-3.5/4 within educational settings. A detailed description of the project and its main results is available in a separate publication (Ravšelj et al., 2025). As part of this project, a large-scale online survey was conducted between October and December 2023, involving 23,218 students from 109 countries and territories.
The survey involved students enrolled in university degree programs at the host universities who were at least 18 years old and legally able to provide voluntary, informed consent to complete an anonymous online questionnaire. Students were recruited using a convenience sampling method, a non-probability sampling technique in which participants are selected based on their availability and ease of access rather than through random selection. The survey was promoted during lectures and via official university communication channels. Its content was designed in cooperation with international partners to capture key dimensions related to ChatGPT. An initial version was tested with Slovenian students (Aristovnik et al., 2024), and subsequent revisions were informed by pilot feedback to enhance clarity, reliability, and relevance. To ensure global accessibility, the final survey instrument was translated by native speakers into six additional languages: Italian, Spanish, Turkish, Japanese, Arabic, and Hebrew. The Italian version of the survey was developed following a standard translation and back-translation procedure to ensure semantic equivalence with the original instrument. The survey was administered through the web platform 1KA (One Click Survey; https://www.1ka.si/d/en; accessed on 9 October 2023), which complies with the General Data Protection Regulation (GDPR), ensuring informed consent procedures and safeguarding participant anonymity and data confidentiality.
This article presents a quantitative study based on data collected from a public university in Northern Italy, which enrolls approximately 17,500 students and offers undergraduate and master’s degree programs, as well as doctoral and specialization schools. Its academic structure comprises eight departments across Social sciences, STEM and Health sciences. Data collection in Italy followed institutional procedures identical to those of the global project. Italian students participated in traditional or blended learning and were recruited via official university channels (e.g., Instagram, website). A convenience sample of students was used to allow efficient recruitment and data collection, which is appropriate given the exploratory aims of the study. This approach enabled the examination of patterns of ChatGPT use and perceptions, and their associations with study-related outcomes within the target population, while acknowledging that findings may not be generalizable to all Italian university students.
2.2. Online Survey
The questionnaire comprised 42 primarily closed-ended questions designed to explore students’ perceptions of their initial encounters with ChatGPT. It was organized into 11 thematic sections. In addition to socio-demographic and general study-related information, the survey addressed a wide range of topics connected to ChatGPT, including usage, capabilities, regulation and ethical concerns, satisfaction and attitude, study issues and outcomes, skills development, labor market and skills mismatch, and emotions. Items concerning frequency and agreement were assessed using a five-point Likert scale, ranging from 1 (strongly disagree/never) to 5 (strongly agree/always) (Croasmun & Ostrom, 2011). Details on the construction of the questionnaire and the associated dataset are available in a separate publication (Ravšelj et al., 2025) and in the Mendeley Data repository (Ravšelj et al., 2024).
2.3. Statistical Analysis
Descriptive statistics (means, standard deviations, frequencies, and percentages) were first computed for individual Likert-type items.
To evaluate whether predefined sets of items could be meaningfully combined into composite indices, separate Principal Component Analyses (PCA) were conducted for the following item groups; the list of individual items is reported in the Supplementary Materials (see Table S1).
Q18—How often do you use ChatGPT for the following tasks?
Q19—How much do you agree with the following statements related to the capabilities of ChatGPT?
Q24—How much do you agree with the following statements related to your satisfaction with ChatGPT?
Q26—How much do you agree with the following statements related to learning and academic enhancement addressed with ChatGPT?
Q27—How much do you agree with the following statements related to personal and professional development addressed with ChatGPT?
Q28—How much do you agree with the following statements related to the ability of ChatGPT to facilitate proficiency and communication skills development?
Q29—How much do you agree with the following statements related to the ability of ChatGPT to facilitate analytical and problem-solving skills development?
Prior to each PCA, sampling adequacy was assessed using the Kaiser–Meyer–Olkin (KMO) measure and Bartlett’s test of sphericity, with KMO values ≥ 0.60 and a significant Bartlett test indicating suitability for component analysis. PCA was selected as a data-reduction technique to derive composite indices summarizing item groups corresponding to theoretically predefined constructs, rather than modeling latent variables. Components were retained based on eigenvalues greater than 1 and inspection of scree plots. For most item groups, a single dominant component was retained; in the case of Q24, two components emerged, with the first component used to define the main composite scale. These analyses were conducted in an exploratory manner to inform the construction of composite indices.
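As an illustration only, the adequacy checks and retention rule described above can be reproduced outside SPSS. The following is a minimal Python sketch under our own assumptions (the analyses themselves were run in SPSS; `items`, a respondents × items matrix, is a hypothetical input), using the standard formulas for Bartlett’s test of sphericity and the KMO measure.

```python
import numpy as np
from scipy import stats

def bartlett_sphericity(items: np.ndarray) -> tuple[float, float]:
    """Bartlett's test of sphericity on a respondents x items matrix."""
    n, p = items.shape
    R = np.corrcoef(items, rowvar=False)
    # chi2 = -(n - 1 - (2p + 5)/6) * ln|R|, with p(p - 1)/2 degrees of freedom
    chi2 = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    return chi2, stats.chi2.sf(chi2, p * (p - 1) / 2)

def kmo_overall(items: np.ndarray) -> float:
    """Overall Kaiser-Meyer-Olkin measure of sampling adequacy."""
    R = np.corrcoef(items, rowvar=False)
    R_inv = np.linalg.inv(R)
    # Anti-image (partial) correlations from the inverse correlation matrix
    A = -R_inv / np.sqrt(np.outer(np.diag(R_inv), np.diag(R_inv)))
    off = ~np.eye(R.shape[0], dtype=bool)
    return (R[off] ** 2).sum() / ((R[off] ** 2).sum() + (A[off] ** 2).sum())

def n_components_kaiser(items: np.ndarray) -> int:
    """Kaiser criterion: number of correlation-matrix eigenvalues > 1."""
    eigvals = np.linalg.eigvalsh(np.corrcoef(items, rowvar=False))
    return int((eigvals > 1).sum())
```

Under these assumptions, an item group would be considered suitable when `kmo_overall` returns at least 0.60 and Bartlett’s p-value is significant, and it would be reduced to a single composite when `n_components_kaiser` returns 1, subject to scree-plot inspection as described above.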
Items were considered acceptable if they showed primary component loadings ≥ 0.40 on the first component and corrected item–total correlations ≥ 0.30. Items not meeting these criteria were removed. Internal consistency of each resulting composite scale was evaluated using Cronbach’s alpha. Composite scores were computed as the mean of the retained items, with higher scores indicating stronger endorsement of the construct reflected in each scale. The scales were treated as reflective constructs, assuming that each item reflects the underlying construct of interest (e.g., satisfaction, perceived capability). No items required reverse-coding. Missing responses were minimal, with at most one missing value per item included in the composite indices; therefore, participants were retained, and composite scores were computed from available item responses without imputation. Detailed PCA results, including KMO values, Bartlett’s test statistics, eigenvalues, percentage of variance explained, and ranges of component loadings, are reported in Table S2.
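The item-screening and scoring rules just described can likewise be sketched in a few lines. The thresholds below are those stated in the text, while the function names and the pandas layout (one column per Likert item) are illustrative assumptions, not the authors’ code.

```python
import pandas as pd

def corrected_item_total(df: pd.DataFrame) -> pd.Series:
    """Correlation of each item with the sum of the remaining items."""
    return pd.Series(
        {c: df[c].corr(df.drop(columns=c).sum(axis=1)) for c in df.columns}
    )

def cronbach_alpha(df: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total).
    Computed on complete cases for simplicity."""
    d = df.dropna()
    k = d.shape[1]
    return k / (k - 1) * (1 - d.var(ddof=1).sum() / d.sum(axis=1).var(ddof=1))

def composite_score(df: pd.DataFrame, loadings: pd.Series) -> pd.Series:
    """Retain items loading >= 0.40 on the first component with corrected
    item-total r >= 0.30, then average the available responses per student
    (no imputation, matching the missing-data handling described above)."""
    itc = corrected_item_total(df)
    keep = [c for c in df.columns if loadings[c] >= 0.40 and itc[c] >= 0.30]
    return df[keep].mean(axis=1, skipna=True)
```

For the Q24 scale, for example, this screening would drop items Q24b–Q24d before the remaining items are averaged, consistent with the α = 0.80 reported below.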
The final composite scales were as follows:
Q18—Academic use of ChatGPT: higher scores indicate more frequent use of ChatGPT for academic tasks (Cronbach’s α = 0.83).
Q19—Perceived capabilities of ChatGPT: higher scores indicate that students perceive ChatGPT as having greater abilities in supporting academic tasks and facilitating learning (Cronbach’s α = 0.83).
Q24—Satisfaction with ChatGPT: higher scores indicate greater satisfaction with ChatGPT’s performance in academic contexts. Items Q24b, Q24c, and Q24d were removed because they did not load adequately on the primary component; after their removal, Cronbach’s α was 0.80.
Q26—Learning and academic enhancement through ChatGPT: higher scores indicate that students perceive ChatGPT as having greater abilities to enhance their learning and academic performance (Cronbach’s α = 0.92).
Q27—Personal and professional development through ChatGPT: higher scores indicate that students perceive ChatGPT as having greater abilities to support their personal and professional growth (Cronbach’s α = 0.92).
Q28—Communication and proficiency skills supported by ChatGPT: higher scores indicate that students perceive ChatGPT as having greater abilities to facilitate the development of communication and proficiency skills (Cronbach’s α = 0.85).
Q29—Analytical and problem-solving skills supported by ChatGPT: higher scores indicate that students perceive ChatGPT as having greater abilities to facilitate analytical and problem-solving skill development (Cronbach’s α = 0.87).
Following scale construction, the distribution of the composite indices was examined using skewness and kurtosis values, histograms, and Q–Q plots. As several variables showed non-normal distributions, non-parametric tests were used for inferential analyses. In addition to the composite scales, four single-item Likert-type variables were analysed as ordinal measures: perceived ease of interacting with ChatGPT, clarity of the information provided (both in comparison with instructors), academic motivation, and self-reported study success (1 = strongly disagree, 5 = strongly agree). Associations between composite scales and socio-demographic or academic categorical variables were examined using the Mann–Whitney U test and the Kruskal–Wallis test. Confidence intervals are not provided given the non-parametric nature of these tests; effect sizes provide an indication of practical significance. Correlations among composite scales and ordinal variables were evaluated using Spearman’s rho. Each statistical test was chosen to correspond to the specific research hypotheses regarding the relationships between ChatGPT usage patterns, perceptions, and outcomes such as satisfaction, learning enhancement, and skill development. Given the exploratory nature of the study and the number of statistical comparisons performed, p-values should be interpreted cautiously as descriptive indicators of association rather than confirmatory evidence. Statistical significance was set at p < 0.05. All analyses were conducted using IBM SPSS Statistics, version 29.
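As a concrete, hedged counterpart to the SPSS procedures above, the following Python sketch shows the same family of tests together with the effect sizes reported in the Results. The group data are hypothetical, and the Z statistic is derived from U via the normal approximation (without tie correction, so values can differ slightly from SPSS output).

```python
import numpy as np
from scipy import stats

def mann_whitney_with_r(x, y):
    """Mann-Whitney U test plus the effect size r = |Z| / sqrt(N)."""
    u, p = stats.mannwhitneyu(x, y, alternative="two-sided")
    n1, n2 = len(x), len(y)
    z = (u - n1 * n2 / 2) / np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    return u, z, p, abs(z) / np.sqrt(n1 + n2)

def kruskal_with_eps2(*groups):
    """Kruskal-Wallis H test plus epsilon-squared, H / ((n^2 - 1) / (n + 1))."""
    h, p = stats.kruskal(*groups)
    n = sum(len(g) for g in groups)
    return h, p, h / ((n ** 2 - 1) / (n + 1))

# Hypothetical usage: a composite index compared across area of residence,
# and a Spearman correlation between two composite indices.
rng = np.random.default_rng(0)
urban, rural = rng.normal(3.4, 0.8, 120), rng.normal(3.1, 0.8, 240)
u, z, p, r = mann_whitney_with_r(urban, rural)
rho, p_rho = stats.spearmanr(rng.normal(3, 1, 200), rng.normal(3, 1, 200))
```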
3. Results
The mean age of the sample was 20.75 years (SD = 3.87). Gender distribution was balanced, with 50.4% identifying as male, 47.9% as female, and 1.7% preferring not to disclose their gender. Approximately 32% of participants lived in an urban area, while the remainder resided in suburban or rural areas. Overall, 28.2% of students were employed, mainly in part-time positions (20.5%). Regarding economic status, 18.8% reported being below or significantly below the average, whereas most students described their economic condition as average (64.1%).
Table 1 reports the academic characteristics of the sample.
Concerning students’ perceptions of their academic experience, 55.6% agreed or strongly agreed with the statement “I am successful in my studies”, and 59.8% agreed or strongly agreed with “I am motivated to study.” More than half of students (54.7%) reported regularly attending classes. Regarding learning preferences, most participants indicated a preference for blended learning (71.8%), whereas only 3.4% would prefer fully online learning.
Table 2 summarizes patterns of ChatGPT use and general impressions. When asked whether interacting with ChatGPT was easier than interacting with others, 24.1% of students reported it was easier than interacting with professors; this percentage decreased to 11.2% when the comparison was made with university peers. Additionally, 19.8% of students perceived the information obtained from ChatGPT as clearer than that provided by professors.
The most frequent academic tasks for which ChatGPT was used included summarizing information, study assistance, and research assistance. Commonly reported forms of academic support included help with understanding instructions, summarizing extensive information, simplifying complex content, and providing information efficiently.
Regarding skills development, students reported that ChatGPT primarily supported digital content creation skills, information literacy, and foreign language proficiency (related to communication skills). Furthermore, they indicated support for artificial intelligence literacy, programming skills, and data analysis skills (related to analytical and problem-solving abilities). Detailed information on the specific tasks, academic capabilities, and perceived skill support is provided in the Supplementary Materials (Tables S3 and S4).
Descriptive statistics for the composite indices are reported in Table 3. Academic use of ChatGPT (Q18) was low, with a median of 1.83 (IQR = 1.33–2.33), whereas all other indices (Q19–Q29) had medians ranging from 3.11 to 3.60, with interquartile ranges (IQRs) ranging from 2.50–3.56 to 3.30–4.08. These findings suggest limited academic use but generally positive evaluations of ChatGPT’s perceived capabilities and support for learning and personal development.
Despite these perceived benefits, students also expressed ethical concerns: 44.4% believed they should consult their professors before using ChatGPT, and around one in four felt they should disclose their use of ChatGPT to professors. Specific concerns included potential misinformation, promotion of cheating, privacy violations, and plagiarism; however, overall levels of concern were relatively low (see Table S5, Supplementary Materials).
Non-parametric comparisons were conducted to examine whether composite indices differed across key socio-demographic and academic characteristics.
As shown in Table 4, no significant differences emerged across gender for any outcome. Freshmen reported higher academic use of ChatGPT (MW Z = −2.634, p = 0.008) and greater perception of its support for analytical and problem-solving skills development (MW Z = −2.018, p = 0.044). Regarding field of study, the only significant difference was observed for perceived support of analytical and problem-solving skills (MW Z = −2.392, p = 0.017), with social sciences students reporting higher scores than STEM and Health sciences students. Employment status was not significantly associated with ChatGPT use or perceptions. The largest differences were observed for area of residence and economic status. Students living in urban areas reported higher academic use of ChatGPT (MW Z = −2.306, p = 0.021), greater perceived academic support (MW Z = −2.239, p = 0.025), higher satisfaction with its academic use (MW Z = −2.340, p = 0.019), and stronger perceptions that ChatGPT supports learning and academic performance (MW Z = −3.023, p = 0.003), personal and professional growth (MW Z = −3.374, p < 0.001), and analytical and problem-solving skill development (MW Z = −3.306, p < 0.001). Regarding economic status, students reporting above-average conditions showed higher academic use (KW H = 6.730, p = 0.035) and satisfaction (KW H = 7.261, p = 0.027) compared to those with average or below-average conditions. Students reporting below-average economic conditions perceived ChatGPT as less capable of supporting learning and academic performance (KW H = 13.285, p = 0.001) and promoting personal and professional growth (KW H = 13.106, p = 0.001) compared with average- and above-average-status peers.
Effect sizes were calculated for all non-parametric comparisons (r for Mann–Whitney tests, ε² for Kruskal–Wallis tests). Effect sizes were generally negligible to small for Mann–Whitney tests (|r| = 0.005–0.312), with a few moderate effects. For Kruskal–Wallis tests, effect sizes were mostly small (ε² = 0.010–0.046), with a few approaching medium magnitude (ε² = 0.097–0.099). These values indicate that while some differences were statistically significant, the practical significance was modest.
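For reference, the effect-size definitions assumed in this reporting are the conventional ones (our notation, not taken from the original analysis scripts):

$$r = \frac{|Z|}{\sqrt{N}}, \qquad \varepsilon^{2} = \frac{H}{(N^{2} - 1)/(N + 1)},$$

where N is the total number of observations entering each comparison. To illustrate the reading of |r| with a hypothetical N of 300, a Mann–Whitney Z of −3.374 would correspond to r ≈ 3.374/√300 ≈ 0.19, a small-to-moderate effect.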
Spearman’s rank-order correlations (ρ) were used to examine associations among ChatGPT use, perceptions, and study-related outcomes (Table 5). Academic use was positively associated with perceived capabilities, satisfaction, learning and academic support, personal and professional development, communication and proficiency skills, and analytical/problem-solving skills, with correlations ranging from weak to moderate (ρ = 0.220–0.448). The strongest associations were observed for learning and academic support (ρ = 0.448) and personal and professional development (ρ = 0.438). Academic use was also modestly related to perceptions of ease of interaction and clarity of information compared with professors (ρ = 0.257 and 0.287, respectively) and to study motivation (ρ = 0.282).
Perceptions and satisfaction with ChatGPT were strongly interrelated. Students who perceived ChatGPT as highly capable in supporting academic tasks and facilitating learning also reported higher satisfaction and greater perceived benefits across learning support, personal and professional development, communication and proficiency skills, and analytical/problem-solving skills, with correlations mostly in the moderate-to-strong range (ρ = 0.514–0.664).
When considering comparisons with professors, both ease of interaction and clarity of information were positively associated with other positive perceptions of ChatGPT (ρ = 0.205–0.552), suggesting that students who found ChatGPT easier to use or clearer than professors also tended to report higher satisfaction and perceived learning benefits.
Finally, study-related outcomes showed weaker links with ChatGPT engagement. Self-reported academic success showed no significant associations with ChatGPT-related variables, whereas study motivation was modestly positively associated with academic use (ρ = 0.282) and some perceptions of ChatGPT, including perceived capabilities (ρ = 0.284) and learning support (ρ = 0.219).
4. Discussion
This study aimed to explore patterns of use and perceptions related to ChatGPT in a sample of Italian university students. Overall, students reported a generally positive experience with ChatGPT, perceiving it as a tool offering multiple potential benefits. However, some concerns regarding its use were also highlighted. This duality reflects recent international reports indicating that students regard generative AI both as a learning support and as a potential academic risk (Dos, 2025).
In this study, nearly one in four students reported finding it easier to interact with ChatGPT than with their professors, and almost one in five perceived ChatGPT’s explanations as clearer. These findings are consistent with recent surveys in which students favored AI-generated explanations for their clarity and immediacy (Fußhöller et al., 2025; Ravšelj et al., 2025). From a pedagogical perspective, such perceptions may be interpreted in light of theories of instructional scaffolding and guided learning (Vygotskij, 1934), as ChatGPT can provide immediate, structured, and adaptive explanations that align with learners’ momentary needs. Students’ emphasis on ChatGPT’s capabilities, particularly in summarizing, simplifying complex content, and providing information efficiently, aligns with the cognitive offloading hypothesis, which suggests that AI tools can reduce mental workload and streamline information processing (Grinschgl & Neubauer, 2022; Gerlich, 2025).
Another interesting result is the variety of skills for which students reported potential benefits from using ChatGPT. Beyond information literacy and foreign-language proficiency, students mentioned AI literacy, programming, and data analysis, skills increasingly recognized as essential for the future labor market (World Economic Forum, 2023). These findings suggest that students may perceive ChatGPT not merely as a writing or summarizing tool, but also as a resource that could support the development of skills relevant to employability. Further research is needed to substantiate these perceptions. Students also noted some potential benefits in the domain of analytical and problem-solving skills. While ChatGPT does not directly teach these competencies, it provides structured explanations and helps break down complex tasks, which may support certain students’ metacognitive processes (Contel & Cusi, 2025). The relationship between ChatGPT use and potential improvements in metacognitive skills is a promising area that warrants further investigation in future studies.
Despite its perceived usefulness, students reported some ethical concerns regarding ChatGPT. Although overall levels of concern were relatively low, the most frequently reported worries related to the possibility that ChatGPT might provide inaccurate information, facilitate cheating or plagiarism, or raise privacy issues. Similar concerns have been emphasized in multiple international studies, which show that accuracy and misinformation remain among students’ most salient worries (Ravšelj et al., 2025) and that students often require clear policies on AI use (Johnston et al., 2024). Notably, nearly half of the Italian students believed they should consult professors before using ChatGPT, suggesting a perceived need for institutional guidance. This finding mirrors observations from other universities, where a lack of clear AI-use policies has prompted students to seek explicit approval or reassurance from instructors (Johnston et al., 2024). These patterns have important implications for educational equity, as students’ ability to benefit from AI tools may depend on access to guidance and support, potentially disadvantaging those with lower digital literacy or weaker institutional connections. Universities therefore have a responsibility to provide clear policies, structured training, and equitable access to AI resources, ensuring that all students can engage safely and effectively with emerging technologies.
A notable contribution of this study is the identification of contextual differences in AI adoption, most strikingly the socio-economic and residence-related differences. Students living in urban areas and those reporting above-average economic conditions reported higher usage, greater satisfaction, and stronger perceived benefits. While these findings are consistent with the idea that AI adoption could be shaped by the digital divide, this study did not directly measure technological access or related socio-economic factors. Future research should investigate whether students with greater access to technology and resources are indeed more likely to engage deeply with AI tools. Nonetheless, the observed patterns suggest that students with fewer resources or living in less connected areas may have more limited engagement with AI tools. Students reporting below-average economic conditions also perceived ChatGPT as less capable of supporting learning and personal development, highlighting a potential risk of exacerbating existing educational inequalities (Carter et al., 2020). From a theoretical and policy-oriented perspective, these disparities point to the need to frame AI adoption in higher education not merely as a matter of individual choice or innovation, but as a structural issue shaped by socio-economic stratification and territorial inequalities, calling for institutional and public interventions aimed at ensuring equitable opportunities for learning and participation. Moreover, universities should consider implementing strategies to promote equitable engagement, including structured guidance on responsible AI use, training to improve digital literacy, and targeted initiatives to support students with fewer technological resources. Clear institutional policies, accessible training, and support mechanisms could help ensure that all students, regardless of socio-economic background or geographic location, can benefit from generative AI tools safely and effectively. Future research should investigate how technological access, digital skills, and institutional support interact with socio-economic conditions to influence students’ use of AI in learning, and whether these factors contribute to narrowing or widening existing educational disparities.
This study shows that freshmen reported higher usage and stronger beliefs in ChatGPT’s ability to support analytical skills. While this study did not directly measure students’ study strategies or the development of underlying skills, it is possible that first-year students may rely more on external support such as ChatGPT, which offers structured explanations, step-by-step examples, and decomposition of complex tasks. This hypothesis warrants further investigation in future research, ideally using appropriate proxies or direct measures of foundational skills and study strategies.
The correlational analyses revealed a coherent pattern: greater use of ChatGPT was linked to more positive perceptions, higher satisfaction, and stronger perceived learning support. These moderate correlations are consistent with cross-sectional research suggesting that increased familiarity with AI tools is linked to higher levels of trust and perceived usefulness among students (Zhang et al., 2025). However, academic success showed no association with ChatGPT use or perceptions, indicating that students do not directly attribute their academic performance to generative AI. Similar findings have been reported in other surveys, where AI use and its perceived usefulness for schoolwork were not significantly associated with academic achievement, despite students reporting benefits in task completion and study efficiency (Klarin et al., 2024).
Taken together, these findings suggest that Italian university students perceive ChatGPT as a useful academic support tool, particularly valued for clarity, efficiency, and its potential applicability across a variety of skills. Overall levels of concern regarding ethical and pedagogical issues were relatively low, although students most frequently mentioned risks related to inaccurate information, cheating, plagiarism, and privacy. This combination of moderate perceived benefits and modest concerns may reflect the current transitional phase of AI integration in higher education, where institutional guidance, pedagogical frameworks, and digital literacy initiatives are still evolving.
Several limitations should be acknowledged. The use of convenience sampling and voluntary participation may have introduced selection bias, as students with greater interest in ChatGPT were likely overrepresented. Recruitment via lectures and institutional channels likely resulted in a sample of relatively engaged students, which calls for caution when interpreting the positive perceptions associated with ChatGPT use. In addition, the study does not fully address potential common method bias or social desirability effects, which may have influenced self-reported measures. The relatively small sample sizes for subgroup analyses limit the robustness of these comparisons. Moreover, the study was conducted at a single university in Northern Italy, which restricts the generalizability of the findings to other academic and cultural contexts. The cross-sectional design precludes causal inferences regarding ChatGPT use and perceived benefits, and the assessments of skill development relied on students’ self-reports rather than objective performance measures. Another limitation concerns the composite indices: although they represent theoretically distinct domains, some degree of shared variance is expected due to their common focus on students’ experiences with ChatGPT. Accordingly, these constructs should be interpreted as related but non-redundant dimensions rather than as fully independent factors. Finally, data were collected during the early phase of ChatGPT adoption, and both usage patterns and perceptions are likely to evolve as students and institutions gain more experience with generative AI tools.
5. Conclusions
This study suggests that ChatGPT is beginning to integrate into the academic routines of Italian university students and is generally perceived as a useful tool, particularly for its clarity, efficiency, and perceived support across a range of skills, including information literacy, analytical reasoning, and problem-solving. Although ethical concerns related to misinformation, plagiarism, and privacy issues were overall limited, their presence underscores the need for clear and shared institutional guidelines for the use of generative AI in academic contexts. The most salient differences observed across student groups—particularly the higher levels of use and perceived benefits reported by urban students and those with stronger economic resources—suggest that AI adoption in higher education should not be understood merely as a matter of individual choice or technological innovation. Rather, it emerges as a structural phenomenon shaped by socio-economic stratification and territorial inequalities, thereby calling for coordinated institutional and public interventions aimed at ensuring equitable opportunities for learning and participation. In this context, universities can play a crucial role by adopting equity-oriented policies, including tailored support for students with fewer resources, sustained investment in accessible and reliable digital infrastructures, and the provision of guidance on effective, ethical, and responsible AI-supported study practices. Beyond regulatory measures, the findings highlight the importance of structured AI literacy initiatives for both students and faculty. Such initiatives could promote a more critical and reflective use of generative AI, addressing not only technical skills but also ethical awareness and pedagogical integration. Future research would benefit from longitudinal and mixed-methods designs to assess how patterns of AI use and perceptions evolve over time and to evaluate more directly the relationship between generative AI engagement and learning outcomes. Taken together, these results suggest that generative AI is becoming an increasingly visible component of higher education and that proactive, evidence-informed institutional action may be important to support its responsible, effective, and equitable integration.