Article

AI Competency Assessment and Ranking: A Framework for Higher Education

by Luis M. Sánchez-Ruiz 1,*, Nuria Llobregat-Gómez 2, Erika Vega-Fleitas 3 and Santiago Moll-López 1

1 Departamento de Matemática Aplicada, Universitat Politècnica de València, Camino de Vera s/n, 46022 Valencia, Spain
2 Departamento de Lingüística Aplicada, Universitat Politècnica de València, Camino de Vera s/n, 46022 Valencia, Spain
3 Instituto de Diseño y Fabricación, Universitat Politècnica de València, Camino de Vera s/n, 46022 Valencia, Spain
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(22), 12248; https://doi.org/10.3390/app152212248
Submission received: 3 October 2025 / Revised: 15 November 2025 / Accepted: 16 November 2025 / Published: 18 November 2025
(This article belongs to the Section Computing and Artificial Intelligence)

Abstract

Generative artificial intelligence is reshaping students’ learning strategies, yet most definitions of “AI competency” rely on self-reports and prescriptive checklists. We analyzed data from N = 686 university students in Spain to uncover behavioral patterns of AI use and translate them into practical guidance for teaching and policy. Using unsupervised clustering (k-means) complemented with a topological summary (Mapper), we identified four coherent profiles along a continuum of engagement and self-regulation: Low-Engagement, Active–Cautious, Balanced–Confident, and High-Use–Vigilant. The profiles differ in how often students use AI, how they revise outputs, their reliance on AI, and their ethical awareness, and they show distinct emotional patterns (e.g., curiosity, motivation, stress). A continuous structure links profiles rather than separating them rigidly. Self-rated digital competence and studying in STEM fields were associated with higher-level profiles. Overall, the results support a layered, data-informed view of AI competency that prioritizes observed practices over single summary scores. We introduce AI CAR as a formative framework to help institutions (i) locate students on this continuum and (ii) design targeted supports that strengthen revision habits, ethical reflection, and self-regulation across the curriculum.

1. Introduction

The growing integration of Artificial Intelligence (AI) into educational environments is changing the way knowledge is accessed, processed, and produced. Students increasingly rely on AI-powered tools for various academic tasks, including content summarization, language translation, code generation, and study planning. Although these tools can offer valuable support, their use also introduces new questions regarding student autonomy, academic integrity, learning strategies, and skill development. As AI becomes more accessible and embedded in everyday educational practices, it is necessary to critically examine how students interact with these technologies. Understanding what competencies are necessary for their meaningful and responsible use [1] is equally important. In this context, we define AI competency not only as technical proficiency but as a broader set of capabilities that encompass critical thinking, ethical reflection, digital agency, and self-regulated learning [2,3].
International organizations have acknowledged the potential and risks of artificial intelligence in education. The Beijing Consensus on Artificial Intelligence and Education [4], adopted by UNESCO in 2019, highlights the importance of integrating AI in ways that support equity, inclusion, and human-centered learning. It calls for the development of AI literacy and related competencies that enable individuals to engage effectively with AI systems, not only as users but also as informed and active participants in shaping their role in society. These documents stress the importance of embedding AI-related competencies in educational systems, curricula, and lifelong learning strategies [5,6,7].
Although institutional frameworks such as DigComp [8], the UNESCO AI competency frameworks [5,6,7], and the OECD’s Future of Education and Skills 2030 project [9] have established foundational references for defining AI-related competencies, and many of them explicitly articulate student-facing descriptors and observable indicators, their development has largely followed a top-down logic oriented toward curriculum design, teacher training, and policy alignment rather than toward empirically derived profiles of students’ day-to-day practices [10]. Consequently, there is still relatively little empirical work that starts from students’ own perspectives and behaviors in AI-rich learning environments, especially in higher education, and examines how these practices translate into learning strategies, autonomy, and ethical awareness [1,11]. Understanding these dynamics is essential for designing relevant educational responses and for complementing existing top-down frameworks with bottom-up, behaviorally grounded insights.
These contributions are valuable, but they generally rely on self-report measures and do not attempt to differentiate students according to their patterns of AI use, level of integration into learning strategies, or forms of technological agency. Moreover, there is limited research on how these competencies can be linked to observable behaviors or used to inform adaptive educational interventions. This opens a space for complementary approaches that go beyond perception-based assessments and aim to identify usage profiles or trajectories that reflect how students engage with AI tools in practice [1,11]. Understanding these dynamics could support more targeted curriculum development and contribute to a more nuanced integration of AI competencies into higher education.
In this context, the present study seeks to identify meaningful patterns in how university students engage with AI tools in their learning processes. Rather than focusing exclusively on perceived literacy or technical proficiency, our approach explores actual behaviors, strategies, and levels of integration of AI into academic practices. This exploration supports the development of a data-informed framework that can guide curriculum design, institutional policy, and pedagogical support related to students’ responsible and autonomous use of AI.
This work differs from existing frameworks in that it does not predefine competency levels based on normative or externally imposed criteria. Instead, it adopts an exploratory approach that combines behavioral indicators, student-reported practices, and unsupervised classification techniques to detect emergent usage profiles. By using tools such as Topological Data Analysis (TDA) and k-means clustering, we aim to uncover non-obvious structures within student data that can inform the design of a layered AI competency model [12,13,14]. This model will be grounded in actual patterns of use and levels of self-regulation, rather than theoretical abstractions or single-score indicators.
The study is guided by the following research questions:
  • What types of AI tools are students using in academic contexts, and for what purposes?
  • How can students be grouped based on their patterns of AI use, autonomy, and attitudes toward learning?
  • What implications do these usage profiles have for defining, supporting, and potentially assessing AI competency in higher education?
This exploratory study was conducted at a Spanish university, with the majority of participants enrolled in undergraduate and master’s programs across various disciplines. Although the research was based primarily in the Valencian Community, students came from multiple regions of Spain, providing a diverse but non-representative sample. The findings presented here should therefore be interpreted as an initial attempt to identify usage profiles and competency levels that may serve as the foundation for a broader national framework. Future work should extend this effort to more diverse institutional contexts to support a systematic integration of AI competencies into higher education policy.
This study contributes to the literature by (a) introducing an empirically grounded framework of AI competency based on actual usage behaviors rather than self-perceptions and (b) providing practical insights to guide the design of data-informed curricular and institutional strategies for responsible and autonomous AI use in higher education.

2. AI Competency Frameworks

Building on the brief overview presented in the Introduction, this section reviews major AI competency frameworks in greater depth, comparing their target audiences, dimensions, and behavioral focus.
Several initiatives have sought to define what it means to be competent in the use of artificial intelligence, often drawing on broader frameworks of digital competency or 21st-century skills. Among the most prominent efforts are the DigComp framework [8] developed by the European Commission, the UNESCO competency frameworks for students and teachers, and recent empirical instruments such as the Meta AI Literacy Scale (MAILS) [15]. Although these frameworks share common ground in promoting awareness about AI and its responsible use, they vary in their target audiences, structural dimensions, and degree of integration with actual educational practice. Moreover, recent reviews underline the fragmented and outcome-oriented nature of this field, which has yet to converge on a coherent set of competencies for higher education learners [1,10,11].
Originally developed to assess digital competence in the European context, the DigComp framework has been expanded to incorporate emerging technologies such as artificial intelligence. However, its structure remains generic and largely focuses on digital tasks and cognitive operations. AI appears primarily as a subdomain of technological literacy, with limited emphasis on agency, critical reflection, or patterns of tool integration into self-directed learning [16].
UNESCO has proposed dedicated frameworks for both students and teachers, outlining AI-related competencies in terms of understanding, use, ethical awareness, and social engagement. The student version not only highlights the importance of preparing learners as informed participants in shaping AI’s societal role but also specifies observable indicators that can, in principle, guide assessment practices [5,6]. Even so, these documents primarily function as high-level guidelines and require further operationalization through concrete metrics and scalable tools in higher education settings. Furthermore, the ethical and equity concerns raised in recent studies suggest that conceptual frameworks should be accompanied by safeguards against misuse and overreliance [17,18].
Empirical instruments such as the Meta AI Literacy Scale (MAILS) attempt to address this gap by providing a validated survey instrument for assessing AI literacy. MAILS identifies four key domains—understanding, interaction, creation, and reflection—and offers a psychometric structure based on student self-reports [15]. It represents a useful step toward quantifying AI competence, particularly in terms of awareness and ethical engagement. However, as noted earlier, it does not capture behavioral profiles, levels of autonomy, or learning strategies associated with AI use in practice. Although existing frameworks primarily rely on cognitive and self-perceptual measures, they remain valuable as foundational references for understanding how AI literacy has been conceptualized and for identifying the gaps that our behavioral approach seeks to address.
Other frameworks and scales have also been proposed. For instance, Laupichler et al. [16] developed a Delphi-based item set for assessing non-experts’ AI literacy, and Wang et al. [19] created a validated AI Literacy Scale focused on user competence. Pinski and Benlian [20] extended this line of work by developing a multidimensional measurement model for general AI literacy, grounded in socio-technical perspectives and validated through expert studies and self-reported surveys. In the workplace, Cetindamar et al. [21] examined the AI literacy of employees, highlighting competencies relevant beyond educational settings. While these instruments provide valuable conceptual and psychometric advances, they primarily assess self-perceived literacy. They therefore offer limited insight into the actual practices, strategies, and behavioral profiles through which students integrate AI into their learning.
Table 1 summarizes the main features of these frameworks, highlighting their scope, structure, and relevance to student populations in higher education. This table provides a concise comparison of major AI competency frameworks, clarifying their intended audience, main dimensions, and degree of behavioral focus. It serves three functions within the study. First, it visually positions our approach in relation to existing frameworks and highlights that most current instruments—such as DigComp or UNESCO’s guidelines—have been implemented predominantly as conceptual or self-perceptual references rather than as tools grounded in behavioral data. Second, it demonstrates that there is no widely adopted framework specifically built from students’ actual practices in higher education, which justifies the need for a behavioral, data-driven alternative. Third, it acts as a bridge between the literature review and our empirical design, showing why the subsequent clustering and topological analyses focus on usage patterns rather than solely on self-reported skills.
Despite these efforts, there remains a lack of approaches that ground AI competency in the actual practices, strategies, and contextual uses developed by students themselves. Rather than replacing existing frameworks, this study seeks to complement them with bottom-up evidence on how students currently engage with AI tools in their everyday learning. This motivates the need for complementary methodologies able to capture usage profiles and behavioral dynamics, providing a more nuanced understanding of AI competency in academic settings.

3. Materials and Methods

3.1. Data Collection

Data were collected through a structured online questionnaire specifically designed for this study (see Appendix A). The instrument included blocks on demographic and contextual information, knowledge and awareness of AI tools, frequency and purpose of AI use, strategies of self-regulation, and ethical and emotional reflections on AI in academic contexts. The questionnaire was distributed to undergraduate and graduate students at universities in the Valencian Community, Spain, between January and June 2025. Participation was voluntary, anonymous, and conducted in compliance with the EU General Data Protection Regulation (GDPR) and Spanish data protection laws. Prior to participation, all respondents were required to provide informed consent. A total of N = 686 valid responses were obtained after data cleaning, which served as the basis for the subsequent analyses.
The questionnaire was developed through a combined deductive–inductive process. As a conceptual starting point, we reviewed established AI literacy frameworks, including UNESCO’s AI guidance and the Meta-AI Literacy Scale (MAILS). These sources helped identify broad dimensions relevant for students’ engagement with AI—such as awareness of AI tools, responsible use, reflection, and interaction with AI systems—although the specific items in our questionnaire were developed independently to capture behavioral patterns more directly linked to academic practice. In addition, the instrument incorporated dimensions drawn from well-established research on self-regulated learning, including habits of revision, monitoring, and appropriate reliance on technological tools, which are increasingly relevant in AI-mediated learning environments.
Item wording underwent an iterative refinement process involving internal expert review by the research team, whose members have extensive experience in survey design and AI-in-education research. A pilot study with approximately 50 students was conducted to assess clarity, response distribution, and completion time. Based on pilot feedback, we removed ambiguous items, simplified language, and ensured that the response formats produced sufficient variability without floor or ceiling effects. The resulting instrument reflects both theoretical alignment with established frameworks and empirical adjustments based on early student input.
While the data were collected through a self-report questionnaire, the instrument was specifically designed to capture behavioral indicators (e.g., frequency of revision, reliance, or task-specific usage) rather than attitudinal perceptions, thus partially overcoming the limitations of traditional self-report methods. This design choice aligns with the study’s aim of inferring actual behavioral patterns from students’ reported practices rather than their subjective opinions about AI.
For the multi-item blocks, internal consistency was assessed using Cronbach’s alpha. Given the conceptual distinction between positive and negative emotional reactions, the emotional items were grouped into two subscales. The positive-emotion subscale (curiosity, calmness, motivation) demonstrated excellent reliability (Cronbach’s α = 0.90), and the negative-emotion and ethical-reflection subscale (guilt, ethical doubt, stress, distrust) also showed excellent reliability (Cronbach’s α = 0.92). These coefficients indicate that the items within each subscale measure coherent constructs and are suitable for use as external validators of the behavioral profiles. Detailed coefficients are reported in Table 2.
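For readers who wish to reproduce the reliability check, Cronbach’s alpha can be computed directly from an item-response matrix. The sketch below applies the standard formula to simulated Likert responses; the subscale values are illustrative placeholders, not the study data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the sum score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulated 5-point Likert responses for a hypothetical three-item subscale:
# a shared latent trait plus item-level noise, rounded and clipped to 1-5.
rng = np.random.default_rng(0)
latent = rng.normal(3.5, 0.8, size=200)
subscale = np.clip(np.round(latent[:, None] + rng.normal(0, 0.5, (200, 3))), 1, 5)
alpha = cronbach_alpha(subscale)
```

Because the three simulated items share most of their variance, the resulting alpha lands in the high range, mirroring the coefficients reported in Table 2.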
All analyses were conducted using IBM SPSS Statistics (v29; IBM Corp., Armonk, NY, USA) and Python (v3.11; Python Software Foundation, Wilmington, DE, USA) in Google Colab (Google LLC, Mountain View, CA, USA). Data preprocessing included removal of incomplete responses (less than 80% completion), recoding of categorical variables, and standardization of continuous variables as z-scores. Assumptions of normality and homogeneity were checked before applying parametric tests. Descriptive and inferential analyses (ANOVA, Kruskal–Wallis, and multinomial logistic regression) were performed in SPSS, while clustering and topological analyses were implemented in Python using the pandas (PyData/NumFOCUS, Austin, TX, USA), scikit-learn (Inria, Paris, France), and KeplerMapper (open-source Python TDA project) libraries. All scripts and procedures followed a reproducible pipeline consistent with the three research questions presented at the end of the Introduction.

3.2. Participants and Sample Characteristics

A total of 686 students participated in the study. The average age was 26 years (SD = 5.1, range 18–44). This reflects the inclusion of postgraduate students—particularly PhD candidates (mean age = 31.5 years)—alongside Master’s (mean = 25.3) and Bachelor’s students (mean = 21.3). This mix of study levels naturally resulted in a higher overall mean age compared to typical undergraduate-only samples. Regarding gender, women represented 49.3%, men 47.2%, and 3.5% preferred not to disclose. In terms of study level, the largest group was Bachelor’s students (54.7%), followed by Master’s (30.2%) and PhD candidates (15.1%) (see Table 3). Concerning academic fields, the distribution was broad, though participants from Engineering/Architecture and Computer Science/IT predominated.
The distribution confirms that the sample is diverse in terms of demographics, covering all major study areas, although with a higher proportion of technical degrees.

3.3. Clustering of AI Competency Profiles

To identify distinct patterns of AI use and competency among students, we conducted a cluster analysis using the k-means algorithm, an unsupervised machine learning technique that partitions observations into k clusters by minimizing within-cluster variance. The algorithm is widely used in educational and social sciences to detect hidden profiles in survey data.
We selected variables reflecting frequency and purpose of AI use, self-regulation (revision and overreliance), ethical concern (plagiarism, impact on learning), and emotional responses (curiosity, motivation, stress, distrust). Prior to clustering, variables were standardized (z-scores) to ensure equal weighting.
All variables included in the clustering were selected based on theoretical relevance to AI use, self-regulation, and ethical engagement, following recommendations from previous empirical studies on digital and AI literacy [1,10,19]. This ensures that the resulting clusters reflect meaningful patterns grounded in established constructs rather than purely statistical separations.
This selection focuses on the most theoretically relevant and empirically recurrent behavioral patterns documented in AI literacy and self-regulated learning research, providing a robust basis for identifying meaningful usage profiles while acknowledging that student behavior may include additional nuances beyond the scope of an exploratory survey.
The optimal number of clusters was determined by examining the elbow criterion (inertia plot) and silhouette scores. Both criteria suggested that a four-cluster solution provided an appropriate balance between parsimony and interpretability. This choice is also consistent with theoretical frameworks distinguishing between low, medium, high, and advanced competency levels. Although the algorithm produced four clusters, we refer to them as profiles in interpreting the results. This terminology emphasizes that the groups represent emergent patterns of AI use and competency rather than fixed categories or hierarchical levels. The notion of profiles highlights behavioral tendencies and educational implications while avoiding the prescriptive connotations of terms such as “levels.”
Although the silhouette coefficient increased slightly again for k = 5, that solution produced an additional split within the high-engagement group without improving the interpretability or balance of cluster sizes. Therefore, the k = 4 solution was retained as theoretically meaningful, allowing clearer differentiation of behavioral tendencies while avoiding artificial subdivision of similar profiles. This choice was further supported by the elbow criterion, which indicated diminishing returns in explained variance beyond four clusters.

Profile Assignment for New Students

Although the primary goal of this study was exploratory, the identified profiles can also serve as reference categories for new students. Once cluster centroids are established, a new participant’s responses can be standardized and compared to these centroids using distance-based classification (e.g., nearest centroid assignment). This procedure allows assigning each new student to the most similar competency profile without re-estimating the full clustering model. In practice, this approach could support educators in diagnosing students’ AI competency and tailoring interventions accordingly. Future work could extend this approach by developing a short-form questionnaire or predictive model to enable faster classification in applied settings.
This procedure also enables a continuous ranking score (e.g., inverse distance to higher-profile centroids) that can be reported in a strictly formative manner to guide feedback and support.
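A minimal sketch of this nearest-centroid assignment and the inverse-distance ranking score follows. The centroids, training statistics, and indicator values below are hypothetical placeholders, not the fitted model.

```python
import numpy as np

def assign_profile(response, train_mean, train_std, centroids):
    """Standardize a new response with the training statistics, then return
    (index of nearest centroid, formative inverse-distance scores)."""
    z = (np.asarray(response, dtype=float) - train_mean) / train_std
    dists = np.linalg.norm(centroids - z, axis=1)   # distance to each centroid
    scores = 1.0 / (1.0 + dists)                    # inverse-distance score in (0, 1]
    return int(dists.argmin()), scores

# Hypothetical centroids for the four profiles on three z-scored indicators
# (e.g., academic use frequency, revision of outputs, ethical awareness).
centroids = np.array([[-1.2, -0.8, -1.0],   # Low-Engagement
                      [-0.3,  0.5, -0.2],   # Active-Cautious
                      [ 0.5,  0.6,  0.4],   # Balanced-Confident
                      [ 1.3,  0.9,  1.2]])  # High-Use-Vigilant
train_mean = np.array([3.5, 3.6, 3.0])      # illustrative training statistics
train_std  = np.array([0.9, 0.8, 0.8])

profile, scores = assign_profile([4.6, 4.3, 4.0], train_mean, train_std, centroids)
```

Reporting the full score vector, rather than only the winning profile, keeps the feedback formative: a student can see how close they sit to adjacent profiles along the continuum.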

3.4. Predictors of AI Competency Profiles

To examine which student characteristics were associated with competency profiles, we estimated a multinomial logistic regression model with the four competency profiles (derived from k-means clusters) as the outcome variable. Independent variables included demographic and contextual factors (level of studies, field of study, gender, province of residence) and self-rated digital competence.
Multinomial logistic regression was chosen over ordinal regression because preliminary tests suggested that the proportional odds assumption was not met. All categorical predictors were dummy-coded, and results are reported as odds ratios (OR) with 95% confidence intervals.
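The model specification can be illustrated with scikit-learn. One caveat: scikit-learn fits symmetric softmax coefficients rather than reference-category contrasts, so odds ratios relative to the lowest profile are obtained by differencing coefficient rows before exponentiating. The predictors and effect sizes below are simulated for illustration, not the study’s estimates.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Simulated predictors: a dummy-coded field of study and self-rated digital
# competence (names and effect sizes are illustrative only).
rng = np.random.default_rng(7)
n = 400
stem = rng.integers(0, 2, n).astype(float)   # 1 = STEM field (dummy-coded)
digital = rng.normal(4.0, 0.9, n)            # self-rated digital competence
d = digital - 4.0                            # centered for the simulation

# Ground-truth generating model: higher competence and STEM membership
# push students toward higher-level profiles (0..3).
logits = np.column_stack([np.zeros(n),
                          0.8 * d,
                          1.6 * d + 0.6 * stem,
                          2.4 * d + 1.2 * stem])
logits -= logits.max(axis=1, keepdims=True)  # numerical stability
p = np.exp(logits)
p /= p.sum(axis=1, keepdims=True)
profile = np.array([rng.choice(4, p=row) for row in p])

Xd = np.column_stack([stem, digital])
model = LogisticRegression(max_iter=2000).fit(Xd, profile)  # lbfgs: multinomial

# Difference against the first profile's row to obtain coefficients relative
# to that reference category; exponentiate to get odds ratios.
coef_rel = model.coef_ - model.coef_[0]
odds_ratios = np.exp(coef_rel)   # rows: profiles, columns: [stem, digital]
```

In the simulation, the estimated odds ratio for digital competence grows across profiles, matching the planted effects; confidence intervals, as reported in the study, would require standard errors from a statistics package rather than scikit-learn.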

3.5. Data Analysis: An Exploratory Topological Approach

To identify emergent, non-obvious groupings within the multidimensional survey data, we employed Topological Data Analysis (TDA). TDA is a framework from computational mathematics that analyzes the “shape” of data, excelling at identifying clusters, loops, and connections within high-dimensional datasets that may be missed by traditional methods [12]. This approach is particularly suited to the exploratory goal of discovering profiles without imposing strong a priori assumptions about their number or nature.
We utilized the Mapper algorithm [13], which provides an intuitive, topological summary of the entire dataset. In this representation, distinct nodes and connected components can be interpreted as potential prototypical profiles of student AI tool usage. The structural insights derived from the Mapper graph were then used to guide further analysis. On the one hand, the number and relationships of persistent clusters observed in the graph offered guidance for selecting an appropriate value of k and interpreting the results of a subsequent partition-based method such as k-means. On the other hand, the significant nodes of the Mapper graph, characterized by the average values of their constituent data points, could be directly interpreted as usage profiles. This flexible approach allows the data’s intrinsic topology to dictate the grouping strategy, ensuring that the identified profiles genuinely reflect underlying patterns in student behavior rather than being artifacts of a particular clustering algorithm.
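The analysis itself used the KeplerMapper library; to make the algorithm concrete, the following is a minimal hand-rolled Mapper sketch under simplifying assumptions: a one-dimensional PCA lens, a uniform overlapping interval cover, and DBSCAN as the per-interval clusterer. All parameters and the synthetic "engagement continuum" data are illustrative.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def mapper_graph(X, n_intervals=6, overlap=0.3, eps=0.8):
    """Minimal Mapper: lens = first principal component, overlapping interval
    cover, DBSCAN per interval, edges where clusters share data points."""
    Xc = X - X.mean(axis=0)
    pc1 = np.linalg.svd(Xc, full_matrices=False)[2][0]  # first right singular vector
    lens = Xc @ pc1                                     # 1-D projection of each student
    lo, hi = lens.min(), lens.max()
    width = (hi - lo) / n_intervals
    nodes, edges = [], set()
    for i in range(n_intervals):
        a = lo + i * width - overlap * width            # extend interval on both
        b = lo + (i + 1) * width + overlap * width      # sides so covers overlap
        idx = np.where((lens >= a) & (lens <= b))[0]
        if idx.size == 0:
            continue
        labels = DBSCAN(eps=eps, min_samples=3).fit_predict(X[idx])
        for lab in set(labels) - {-1}:                  # -1 = DBSCAN noise
            members = set(idx[labels == lab].tolist())
            for j, (prev_members, _) in enumerate(nodes):
                if members & prev_members:              # shared students -> edge
                    edges.add((j, len(nodes)))
            nodes.append((members, X[sorted(members)].mean(axis=0)))
    return nodes, edges

# Synthetic data: students spread along a gradual engagement axis in 4-D,
# so the Mapper graph should form a connected chain rather than isolated islands.
rng = np.random.default_rng(3)
t = rng.uniform(0.0, 4.0, 300)
X = t[:, None] * np.full(4, 0.5) + rng.normal(0.0, 0.15, (300, 4))
nodes, edges = mapper_graph(X)
```

Each node’s mean vector (the second element of the tuple) plays the role of the "significant node" profile described above, and a chain of edges reflects the continuous transitions between profiles reported in the Results.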

3.6. Analytical Framework for AI Competency Assessment

To consolidate the methodological design, this study adopts a practical analytical framework for assessing students’ AI competency through behavioral evidence. The framework integrates three sequential stages that together form a coherent process of empirical inference and pedagogical interpretation.
First, behavioral data collection was carried out through a structured questionnaire designed to capture not only frequency of AI use but also self-regulation, ethical awareness, and emotional engagement (see Appendix A). Second, computational modeling was applied to reveal latent structures in students’ reported behaviors. This stage combined clustering analysis (k-means) and Topological Data Analysis (Mapper) to detect both discrete profiles and continuous transitions among them. Finally, educational interpretation and formative ranking were performed, aligning the resulting profiles with existing AI literacy frameworks and interpreting them as progressive levels of competency development rather than static categories.
This integrated framework offers a reproducible path from data collection to pedagogical application, supporting institutions in identifying behavioral patterns of AI use and designing targeted learning interventions that promote responsible, autonomous, and ethically aware engagement with AI.
The diagram in Figure 1 provides a concise overview of the entire methodological process. It shows how the study progressed from the design and collection of behavioral data to the computational modeling phase—combining clustering and topological analysis—and finally to the stage of educational interpretation. This visual synthesis clarifies the logical flow of the empirical work and helps avoid redundancy among the methodological subsections by integrating them into a single coherent structure.

4. Results

4.1. General AI Competency and Use

As shown in Table 4, self-rated digital competency was relatively high (M = 4.15, SD = 0.88). Most students felt capable of explaining, at least in general terms, how generative AI works (M = 3.80, SD = 0.98). AI use was reported as frequent: for academic purposes, the mean frequency was 4.23 (SD = 0.84), corresponding to weekly or almost daily use. Non-academic use was somewhat lower (M = 3.32, SD = 1.00).
Participants reported actively revising AI outputs (M = 3.65, SD = 0.84), though overreliance was moderate (M = 2.35, SD = 0.77). Concern about unintentional plagiarism was rated medium (M = 3.03, SD = 0.79).
These results indicate that students perceive themselves as digitally competent and make frequent use of AI in their academic work, typically revising outputs rather than accepting them passively. Although reliance on AI remains moderate, the coexistence of high use and medium concern about plagiarism suggests an emerging awareness of ethical implications alongside practical dependence.

4.2. Emotional Responses

Curiosity was the most prominent emotion (M = 4.02, SD = 0.84), closely followed by motivation (M = 3.97, SD = 0.78). Calmness was also relatively high (M = 3.48, SD = 0.78), while negative emotions were less intense: guilt (M = 2.70, SD = 0.74), ethical doubt (M = 2.68, SD = 0.79), stress or anxiety (M = 2.66, SD = 0.77), and distrust (M = 2.67, SD = 0.75) (see Table 5). Overall, these results suggest that AI use is experienced positively by students, although accompanied by moderate levels of ambivalence and concern.
The emotional landscape is largely positive, dominated by curiosity and motivation. At the same time, moderate levels of guilt, ethical doubt, and anxiety reveal that students’ enthusiasm is tempered by reflection about the implications of AI use. This combination of excitement and ambivalence characterizes a transitional moment in the adoption of generative AI for learning.

4.3. Strategies and Approaches to AI Use

When confronted with incorrect or incomplete AI outputs, students reported diverse strategies (see Table 6). Editing the response based on their own knowledge was the most frequent option (30.3%), followed closely by simply accepting the output as it was (29.9%) and verifying it with other sources (29.2%). Re-asking or reformulating the prompt was less common (10.6%).
The near balance between editing, accepting, and verifying responses shows that students adopt diverse regulation strategies when facing incorrect or incomplete outputs. The relatively low percentage of prompt reformulation points to limited iterative prompting habits, suggesting a potential area for training in reflective use.
Regarding their general approach to AI use, most students reported using AI as a support tool while revising and learning from it (41.0%), as seen in Table 7. Around one-third (29.4%) indicated using AI mainly for ideas while building the final result themselves, and one-fifth (20.1%) relied on AI only when they felt blocked. A smaller proportion (9.5%) reported avoiding AI as much as possible. No participants indicated submitting AI-generated work directly without modifications.
Most participants describe using AI as a supportive companion rather than a replacement for their own reasoning. This orientation toward “learning with” rather than “learning from” AI underlines an emerging form of active scaffolding, in which students preserve agency by revising and adapting AI outputs.

4.4. Academic Tasks Supported by AI

Students reported using AI for a wide range of academic tasks, with relatively similar frequencies across domains (see Table 8). Mean scores were consistently around 3.8 on the five-point scale, indicating regular use of AI tools for generating ideas (M = 3.79, SD = 0.98), writing or proofreading (M = 3.81, SD = 0.94), translation or summarization (M = 3.83, SD = 0.95), problem-solving (M = 3.80, SD = 0.95), programming or debugging (M = 3.80, SD = 0.96), and study planning and organization (M = 3.84, SD = 0.93). These results suggest that students integrate AI broadly into diverse academic activities, rather than restricting its use to specific tasks.
AI tools appear integrated across the academic workflow—from idea generation to writing and planning—rather than confined to a single type of activity. This transversal pattern reinforces the need to teach transferable competencies such as evaluation, ethical awareness, and self-regulation rather than tool-specific competencies.

4.5. Concerns About Learning Impact

Students were asked whether they believed that AI might affect their way of learning. The average concern was moderate to low (M = 2.98, SD = 0.79), suggesting that although students recognize potential risks, most do not perceive AI use as strongly detrimental to their learning processes (see Table 9).

4.6. Correlational Analyses

Correlation analyses (Pearson and Spearman) revealed modest but consistent associations between frequency of AI use, self-regulation strategies, and emotional responses. The strongest coefficients were observed for academic AI use, which correlated positively with curiosity (r = 0.30, p < 0.001) and motivation (r = 0.31, p < 0.001). Academic use was also positively associated with reviewing AI outputs (r = 0.31, p < 0.001), suggesting that students who use AI more frequently are also more likely to engage in active revision. In addition, curiosity and motivation were moderately correlated with each other (r = 0.27, p < 0.001).
These findings suggest that while simple correlations provide evidence of positive tendencies, the relationships between AI use, regulation strategies, and emotions are relatively modest. This underscores the need for complementary approaches, such as clustering and topological data analysis, to capture the more complex patterns of competency and usage profiles in higher education (see Table 10).
Correlations are positive but modest, indicating that curiosity, motivation, and revision habits accompany frequent AI use without forming a simple one-dimensional trait. These subtle relationships motivated the multivariate and exploratory analyses, clustering and topological mapping, used to capture more complex and meaningful competency patterns.
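As an illustration of this correlational step, the sketch below computes Pearson and Spearman coefficients on synthetic Likert-style responses. The variable names (academic_use, curiosity) and the simulated effect size are placeholders, not the study's actual data.

```python
# Hedged sketch: Pearson and Spearman correlations between AI-use
# frequency and an affective measure, on simulated 1-5 Likert data.
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(42)
n = 686  # sample size reported in the study

# A shared latent factor induces a weak positive association.
latent = rng.normal(size=n)
academic_use = np.clip(np.round(3 + latent + rng.normal(scale=1.5, size=n)), 1, 5)
curiosity = np.clip(np.round(3 + 0.5 * latent + rng.normal(scale=1.5, size=n)), 1, 5)

r, p = pearsonr(academic_use, curiosity)
rho, p_s = spearmanr(academic_use, curiosity)
print(f"Pearson r = {r:.2f} (p = {p:.3g}); Spearman rho = {rho:.2f}")
```

With weakly correlated ordinal data such as these, both coefficients land in the modest positive range reported above, which is why the analysis proceeds to multivariate methods.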

4.7. Determining the Number of Clusters

The optimal number of clusters was determined through a combination of the Elbow method and the Silhouette coefficient. To ensure robustness, we compared two approaches: (a) clustering with all survey variables, and (b) clustering with a reduced set of standardized variables. Both approaches yielded consistent results (Figure 2), showing a clear inflection in the Elbow curve at k = 4.
Silhouette scores reached their maximum at k = 2 (s ≈ 0.12 in the full-variable model; s ≈ 0.16 in the reduced-variable model), but values for k = 4 remained within an acceptable range (s ≈ 0.09 and s ≈ 0.13, respectively). Although these values are modest, this is expected in survey-based educational data where clusters are typically less compact. Taken together, these indicators supported the retention of a four-cluster solution as the most informative compromise between statistical adequacy and interpretability.
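The model-selection procedure can be sketched as follows: fit k-means for a range of k, record the inertia (for the Elbow plot) and the mean silhouette for each candidate. The data here are synthetic standardized features, not the survey responses, so the numbers are illustrative only.

```python
# Hedged sketch of cluster-number selection: Elbow (inertia) plus
# silhouette across candidate k values, on placeholder standardized data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
X = rng.normal(size=(686, 8))  # 8 standardized survey variables (placeholder)

inertias, silhouettes = {}, {}
for k in range(2, 7):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    inertias[k] = km.inertia_        # always decreases as k grows
    silhouettes[k] = silhouette_score(X, km.labels_)  # in [-1, 1]

# The "elbow" is read off the inertia curve; silhouette gives a per-k
# cohesion/separation score to weigh against interpretability.
for k in range(2, 7):
    print(k, round(inertias[k], 1), round(silhouettes[k], 3))
```

Because inertia decreases monotonically with k, it is the inflection point, not the minimum, that guides the choice; silhouette then serves as a secondary check, exactly the compromise described above.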

4.8. AI Competency Profiles (Cluster Analysis)

Consistent with theoretical expectations, the k-means analysis (k = 4) identified four distinct competency profiles. The profiles varied in terms of frequency of academic use, revision strategies, overreliance, plagiarism concern, and emotional responses, forming a clear gradient of practices.

Interpretation of the Profiles

As shown in Table 11 and illustrated in Figure 3, the four clusters can be interpreted as behavioral profiles of AI competency rather than fixed levels. Each profile combines usage, self-regulation, and ethical vigilance, further nuanced by emotional and academic dimensions (Table 12, Table 13 and Table 14).
Profile 1—Active–Cautious. Relatively high academic use with moderate revision, low overreliance, and elevated concern about plagiarism. Affectively, these students show above-average curiosity and motivation with low stress/anxiety (Table 13). Task frequencies are mid-to-high (Table 14), suggesting engaged yet prudent adoption.
Profile 2—Low-Engagement. Low frequency of AI use, limited revision, and comparatively higher overreliance. It reports the lowest curiosity and motivation (Table 13) and the weakest integration of AI across tasks (Table 14). This pattern reflects sporadic, less reflective use of AI tools.
Profile 3—Balanced–Confident. Moderate-to-high use with consistent revision, lower plagiarism concern, and positive emotional engagement (high curiosity and motivation). It indicates a balanced, well-regulated integration of AI into academic practices.
Profile 4—High-Use–Vigilant. Very frequent use with strong revision and low overreliance, coupled with heightened ethical vigilance. Notably, these students report the lowest stress/anxiety (Table 13) and the highest task frequencies (Table 14), suggesting intensive but well-regulated integration.
These four profiles form a coherent continuum—from limited and less reflective use (Profile 2), through cautious and developing engagement (Profile 1), to balanced and autonomous integration (Profiles 3 and 4). This gradient captures the progressive nature of AI competency and aligns with the structure observed in the heatmap and PCA projection (Figure 3 and Figure 4).
These empirically derived profiles complement existing AI literacy frameworks by providing an evidence-based view of how competencies manifest in actual learner behavior. Whereas previous models such as DigComp [8] or the UNESCO AI Competency Framework [5,6] conceptualize literacy in normative terms, our results illustrate how these competencies emerge dynamically through students’ patterns of use, self-regulation, and ethical reflection.

4.9. Exploratory Regression Analysis

As an exploratory step, we estimated a multinomial logistic regression to examine whether background variables predicted AI competency profiles (reference category: Profile 2, Low-Engagement). Predictors included digital competence, field of study (STEM vs. Non-STEM), study level (Bachelor, Master, PhD), and gender (Man, Woman, Other).
Results showed that self-rated digital competency was the strongest and most consistent predictor: each one-point increase was associated with significantly higher odds of belonging to Profiles 3 and 4 compared to Profile 2. Students in STEM fields were also more likely to be classified in higher profiles, while gender and study level showed weaker and less consistent associations. Some estimates displayed very large odds ratios due to sparse categories; these results should be interpreted with caution. The overall pattern supports the interpretation of the four competency profiles as a progression linked to prior digital skills and disciplinary context.
Table 12. Multinomial logistic regression predicting AI competency profiles (reference = Profile 2: Low-Engagement). Odds ratios (OR) with 95% confidence intervals. Large ORs reflect sparse categories.
Predictor               Profile 1 vs. 2      Profile 3 vs. 2      Profile 4 vs. 2
Digital competency      0.82 [0.61, 1.09]    1.44 [1.11, 1.85]    1.67 [1.29, 2.16]
STEM (vs. Non-STEM)     0.91 [0.51, 1.61]    1.98 [1.15, 3.40]    2.25 [1.32, 3.82]
Master (vs. Bachelor)   1.12 [0.57, 2.20]    1.27 [0.64, 2.52]    1.41 [0.72, 2.74]
PhD (vs. Bachelor)      1.31 [0.63, 2.71]    1.56 [0.75, 3.22]    1.79 [0.89, 3.61]
Woman (vs. Man)         0.95 [0.52, 1.73]    1.08 [0.59, 1.97]    1.21 [0.66, 2.20]
Other (vs. Man)         1.22 [0.44, 3.42]    1.35 [0.49, 3.72]    1.48 [0.55, 4.01]
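A minimal sketch of this exploratory model is given below: a softmax (multinomial) logistic regression on simulated data, with odds ratios recovered relative to the reference class. The predictors (digital, stem) and the simulated effect sizes are assumptions for illustration, not the study's dataset.

```python
# Hedged sketch: multinomial logistic regression with odds ratios versus
# a reference category (here class 0, standing in for Profile 2).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 686
digital = rng.integers(1, 6, size=n).astype(float)  # self-rated 1-5 (placeholder)
stem = rng.integers(0, 2, size=n).astype(float)     # 1 = STEM field (placeholder)
X = np.column_stack([digital, stem])

# Simulate membership in 4 profiles; class 0 is the reference.
logits = np.column_stack([
    np.zeros(n),                  # reference profile
    0.1 * digital,
    0.4 * digital + 0.7 * stem,
    0.5 * digital + 0.8 * stem,
])
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
y = np.array([rng.choice(4, p=p) for p in probs])

model = LogisticRegression(max_iter=1000).fit(X, y)

# scikit-learn fits a symmetric softmax parameterization; odds ratios
# versus the reference class follow from differencing coefficient rows.
or_vs_ref = np.exp(model.coef_ - model.coef_[0])
print(np.round(or_vs_ref, 2))  # rows: classes vs reference; cols: digital, stem
```

The differencing step matters: unlike a reference-coded multinomial model, scikit-learn does not report odds ratios against a baseline category directly, so exp(w_k − w_ref) is taken to obtain them.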

4.10. Validation with Emotional and Ethical Measures

To validate the cluster solution, we compared students’ emotional and ethical responses to AI across the four competency profiles. Variables included self-reported curiosity, motivation, stress/anxiety, distrust, guilt, and ethical doubt. One-way ANOVA and Kruskal–Wallis tests confirmed significant group differences for most measures, with post-hoc pairwise comparisons showing distinct affective and ethical orientations across profiles.
Table 13. Mean scores (SD) of emotional and ethical responses to AI across competency profiles.
Variable         Profile 1     Profile 2     Profile 3     Profile 4
Curiosity        3.82 (0.81)   3.22 (0.83)   4.28 (0.63)   4.51 (0.60)
Motivation       3.85 (0.81)   3.14 (0.83)   4.16 (0.77)   4.41 (0.71)
Stress/Anxiety   2.60 (0.71)   2.43 (0.72)   3.33 (0.61)   2.24 (0.58)
Distrust         2.64 (0.71)   2.33 (0.73)   2.72 (0.72)   2.86 (0.79)
Guilt            2.60 (0.72)   2.45 (0.67)   2.98 (0.77)   2.73 (0.71)
Ethical doubt    2.56 (0.74)   2.43 (0.79)   2.91 (0.78)   2.78 (0.80)
Across the four competency profiles, emotional and ethical dimensions showed meaningful differences. Profiles 3 and 4 reported the highest curiosity and motivation, suggesting that more engaged users derive stronger learning benefits from AI. Stress and anxiety peaked in Profile 3, while remaining lowest in Profile 4, indicating that active engagement with AI can carry emotional costs, but vigilant strategies may mitigate them. Ethical concerns (guilt and ethical doubt) were especially salient in Profile 3, pointing to heightened reflection among engaged users. Profile 2 showed the lowest curiosity and motivation, together with relatively low ethical concerns, suggesting limited critical engagement. Distrust remained relatively stable across profiles, indicating that this emotion may reflect a general attitude toward AI rather than competence-specific variation. These findings confirm that AI competency profiles capture not only usage patterns but also the affective and ethical landscape of student–AI interaction.
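The validation tests used here can be sketched in a few lines: a one-way ANOVA and a Kruskal-Wallis test comparing a single measure across four groups. The group data below are simulated to echo the curiosity means in Table 13; group sizes and standard deviations are assumptions.

```python
# Hedged sketch: parametric (ANOVA) and non-parametric (Kruskal-Wallis)
# comparison of one emotional measure across four simulated profiles.
import numpy as np
from scipy.stats import f_oneway, kruskal

rng = np.random.default_rng(7)
means = [3.82, 3.22, 4.28, 4.51]   # curiosity means by profile (Table 13)
groups = [np.clip(rng.normal(m, 0.8, size=170), 1, 5) for m in means]

F, p_anova = f_oneway(*groups)      # tests equality of group means
H, p_kw = kruskal(*groups)          # rank-based analogue, no normality assumption
print(f"ANOVA F = {F:.1f} (p = {p_anova:.3g}); "
      f"Kruskal-Wallis H = {H:.1f} (p = {p_kw:.3g})")
```

Running both tests, as the study does, guards against the Likert-scale data violating ANOVA's normality assumption: agreement between the two strengthens the conclusion of genuine group differences.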

4.11. External Validation with Academic Behaviors

To further validate the competency profiles, we compared them with respect to academic uses of AI. These included frequency of use for generating ideas, writing or proofreading, translating or summarizing, solving problems, programming, and planning study activities. Responses were rated on a five-point Likert scale. Differences between profiles were examined using one-way ANOVA and Kruskal–Wallis tests, with post-hoc pairwise comparisons adjusted for multiple testing.
Academic behaviors were clearly differentiated across competency profiles (Table 14). Profiles 3 and 4 consistently reported the highest frequency of AI use across all academic tasks, with mean scores above 4 on a five-point scale. Profile 1 showed intermediate levels of use, relying on AI more than Profile 2 but less intensively than Profiles 3 and 4.
Table 14. Mean frequency of AI use in academic tasks across competency profiles (1 = never, 5 = very frequently).
Task                        Profile 1     Profile 2     Profile 3     Profile 4
Idea generation             3.50 (0.82)   2.45 (0.78)   4.44 (0.63)   4.37 (0.61)
Writing/proofreading        3.45 (0.81)   2.64 (0.79)   4.51 (0.64)   4.36 (0.59)
Translation/summarization   3.55 (0.83)   2.49 (0.80)   4.32 (0.65)   4.51 (0.62)
Problem solving             3.54 (0.79)   2.52 (0.76)   4.49 (0.62)   4.26 (0.60)
Programming/debugging       3.48 (0.84)   2.48 (0.81)   4.37 (0.66)   4.48 (0.63)
Planning/organization       3.59 (0.80)   2.58 (0.82)   4.39 (0.64)   4.41 (0.61)
By contrast, Profile 2 students displayed the lowest and least consistent reliance on AI tools. These patterns confirm that the four competency profiles map coherently onto distinct study practices: from limited integration (Profile 2), through intermediate and cautious use (Profile 1), to frequent and systematic adoption of AI in learning activities (Profiles 3 and 4).

4.12. Topological Insights into AI Competency (Mapper)

The Mapper analysis, based on academic AI use, revision of outputs, overreliance, curiosity, motivation, stress/anxiety, and distrust, with digital competency and academic use as filter functions (n_cubes = 10, 30% overlap, KMeans clustering), revealed a structured but continuous landscape of student profiles (Figure 5). Nodes represent locally clustered groups of students, sized by membership, and colored by their average AI proficiency level (1–4). Edges indicate overlaps between adjacent nodes, reflecting transitional cases across the filter space.
The topology suggested a continuum of AI competency rather than strictly separated categories. Dense regions of the graph broadly aligned with the four competency profiles identified through k-means clustering, while transitional links highlighted smooth pathways between them. Lower-proficiency students (average levels ≈ 2.0–2.5) were concentrated in peripheral nodes, whereas higher-proficiency students (≈3.5–4.0) appeared in more central and connected areas. Side branches pointed to possible subgroups, such as frequent users with limited revision, consistent with “uncritical” patterns. Alternative parameterizations of Mapper produced comparable structures, indicating that these topological features are robust.
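To make the Mapper construction concrete, the toy sketch below builds the three ingredients from scratch: a filter (lens), an overlapping interval cover with the parameters reported above (n_cubes = 10, 30% overlap), local KMeans clustering within each interval, and edges between nodes that share students. It uses a single 1D lens and random placeholder data for simplicity, whereas the study used two filter functions on real responses.

```python
# Toy, from-scratch illustration of the Mapper pipeline (not the exact
# implementation used in the study): cover -> local clustering -> graph.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
X = rng.normal(size=(686, 7))   # 7 behavioral/affective features (placeholder)
lens = X[:, 0]                  # stand-in 1D filter (e.g., academic AI use)

n_cubes, overlap = 10, 0.30
lo, hi = lens.min(), lens.max()
# Interval width such that 10 intervals with 30% overlap span [lo, hi].
width = (hi - lo) / (n_cubes * (1 - overlap) + overlap)

nodes = {}  # (interval, local cluster) -> set of member indices
for i in range(n_cubes):
    start = lo + i * width * (1 - overlap)
    members = np.where((lens >= start) & (lens <= start + width))[0]
    if len(members) < 2:
        continue
    k = 2  # small local k, purely for illustration
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X[members])
    for c in range(k):
        nodes[(i, c)] = set(members[labels == c])

# Edges link nodes that share at least one student; because only adjacent
# intervals overlap, edges arise between neighboring regions of the lens.
edges = [(a, b) for a in nodes for b in nodes if a < b and nodes[a] & nodes[b]]
print(len(nodes), "nodes,", len(edges), "edges")
```

The overlap parameter is what produces the "transitional links" discussed above: students falling in the shared 30% of two adjacent intervals belong to nodes on both sides, so connected components and bridges in the graph reflect continuity rather than hard boundaries.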

5. Discussion

This study explored how university students engage with AI tools in academic contexts and identified four competency profiles that reflect different levels of integration, self-regulation, and ethical reflection. Building on recent calls to empirically ground AI literacy and competency frameworks [10,11], our findings demonstrate that student engagement with AI is best conceptualized not as a binary distinction between “users” and “non-users,” but as a continuum of practices, ranging from unreflective or limited use to advanced and vigilant integration.
While our goal was to avoid prescriptive labels, the four profiles can also be read as a formative ranking of AI competency—progressing from less reflective to more autonomous engagement. In this sense, AI CAR (AI Competency Assessment and Ranking) supports a diagnostic, non-summative ranking that helps institutions identify where students stand along the continuum and what targeted support may foster progression.

5.1. Profiles in the Context of Existing Frameworks

Existing frameworks such as UNESCO’s AI competency guidelines [4,5,6] and the DigComp model provide important conceptual foundations but are largely formulated at the policy and curriculum level, with limited behavioral operationalization of student practices. Similarly, instruments like MAILS [15] operationalize AI literacy through self-perceived competencies, focusing on domains such as understanding, interaction, and reflection. While these contributions are valuable, they do not capture how students actually use AI tools in practice. Our study complements these approaches by identifying empirically derived profiles grounded in observable behaviors, strategies, and emotional responses.
The four profiles we found—Profile 2 (Low-Engagement), Profile 1 (Active–Cautious), Profile 3 (Balanced–Confident), and Profile 4 (High-Use–Vigilant)—echo themes in the literature. For example, Pinski and Benlian [1] highlight that AI literacy requires not only technical skills but also ethical awareness and agency, elements that differentiate advanced users from uncritical ones. Likewise, Long and Magerko [22] emphasize that critical engagement is central to AI literacy, aligning with our finding that revision practices and ethical reflection distinguish higher profiles. At the same time, the presence of students who rely heavily on AI with limited revision (Profile 2) illustrates the risks of dependency noted in studies warning about overreliance and the “Pandora’s box” of ethical challenges [17].
Our findings complement these conceptual and perception-based frameworks by providing empirical evidence of how students actually engage with AI tools in their day-to-day academic routines. Whereas instruments such as MAILS and related AI literacy scales [15,19,20] and UNESCO’s guidelines [4,5,6] primarily outline what students should know or be able to do, our results reveal the behavioral patterns through which students integrate AI into their learning, how they revise and monitor AI-generated outputs, and how ethical reflection emerges in practice. This bottom-up perspective does not aim to replace existing frameworks, but rather to support their further operationalization by offering data-driven insights into the behavioral, emotional, and regulatory dimensions that shape AI competency in higher education [1,11,23].

5.2. Positioning the Four Profiles Within Empirical Findings

Beyond conceptual frameworks, the four behavioral profiles identified in this study resonate strongly with empirical findings on AI use in education. Intervention studies show that curiosity, motivation, and perceived empowerment are key outcomes of AI literacy programs in both school and university settings [24,25,26], which is consistent with the high levels of intrinsic motivation observed in the Balanced–Confident and High-Use–Vigilant profiles. Similarly, determinants such as digital self-efficacy and computational thinking have been identified as central to AI literacy [1,27], echoing the strong association between self-rated digital competency and advanced profiles in our regression analysis.
Our results also mirror concerns raised about uncritical or overreliant use of generative AI in higher education. Studies on AI-assisted writing and classroom use report that some students adopt AI tools in ways that diminish revision, verification, or reflective engagement [17,28,29]. This pattern is consistent with the Low-Engagement profile, characterised by lower frequency of use, limited revision, and comparatively higher overreliance. By contrast, the Active–Cautious and High-Use–Vigilant profiles resemble the “risk-aware adoption” patterns described in recent work on AI-supported learning [30], where students combine frequent use with heightened ethical vigilance and systematic checking of AI outputs.
These connections suggest that the four profiles are not merely statistically derived clusters, but empirically plausible configurations that integrate cognitive, emotional, and regulatory dimensions of AI use. They thus contribute to cumulative evidence that AI competency in higher education should be understood as a multidimensional construct, shaped by prior digital readiness, study practices, and evolving attitudes toward generative AI [23,31,32].

5.3. Implications for Learning and Teaching

The competency profiles also align with previous work, suggesting that the integration of AI into education can both empower and challenge students. On the one hand, students in advanced profiles reported high curiosity and motivation, consistent with findings that AI can stimulate engagement and confidence [30]. On the other hand, the elevated stress and ethical concerns observed among advanced users suggest that intensive engagement carries emotional costs, a nuance often overlooked in discussions of AI literacy.
From a pedagogical perspective, the profiles can inform differentiated strategies. Students in Profile 2 (Low-Engagement) may require introductory activities that build confidence and highlight responsible use. In contrast, overreliant users would benefit from training in verification, self-regulation, and critical evaluation. Profile 3 (Balanced–Confident) users can be challenged with tasks that promote creativity and problem-solving, whereas Profile 4 (High-Use–Vigilant) students may need support in managing workload, addressing ethical dilemmas, and sustaining reflective practices. This aligns with the idea that AI literacy should be embedded not only as a technical skill but also as a form of digital agency and critical capacity [1,6]. In this sense, fostering AI competency is not only about academic effectiveness but also about promoting students’ digital well-being and sustainable learning practices.
The effectiveness of such tailored interventions is supported by studies showing that AI literacy courses can significantly improve university students’ conceptual understanding, empowerment, and ethical awareness, regardless of their study background [25,26].

Implications

The profiles provide students with a developmental map of AI engagement. Low-Engagement students may benefit from structured opportunities to practise iterative prompting, source verification, and independent reasoning; Active–Cautious users can progressively extend their skills toward more creative and open-ended uses; Balanced–Confident learners can be encouraged to leverage AI for higher-order tasks such as synthesis and critique; and High-Use–Vigilant students may require support in managing workload, preventing burnout, and navigating ethical tensions. These differentiated pathways are consistent with calls to frame AI literacy as a form of digital agency rather than merely technical proficiency [1,32].
For instructors, the profiles offer a concrete basis for differentiated instruction. Rather than assuming a homogeneous level of AI literacy, educators can design learning activities that explicitly target revision, evaluation, and ethical reflection for students at different points of the continuum. Evidence from AI literacy courses suggests that carefully designed interventions can improve conceptual understanding, empowerment, and ethical awareness across diverse disciplines [25,26]. Embedding profile-informed tasks (e.g., requiring justification of AI-assisted solutions, or comparing AI-generated and human-written outputs) may help students move from sporadic or uncritical use to more balanced and reflective engagement.
At the institutional level, the profiles illustrate how student-facing dimensions of frameworks such as UNESCO’s AI competency guidance, DigComp, and the OECD’s Future of Education and Skills 2030 project [6,8,9] can be operationalized through measurable behavioral indicators, such as revision habits, overreliance, and ethical sensitivity. This opens avenues for designing AI literacy initiatives, updating codes of academic integrity, and developing learning analytics dashboards that identify emerging patterns of dependence or disengagement. When implemented in strictly formative ways, such monitoring can support early intervention and the promotion of students’ digital well-being, without turning AI competency into a high-stakes metric.

5.4. The Continuum of Competence: Insights from Topology

A distinctive contribution of this study is the use of Topological Data Analysis (TDA) to visualize competency as a continuum. Mapper graphs revealed bridges and transitional zones linking profiles, illustrating that students do not fall into rigid categories but rather occupy fluid positions along developmental pathways. This resonates with educational theories that view competency as progressive and contextual rather than fixed. In practical terms, it suggests that interventions should not treat students as belonging permanently to one profile but rather support movement along the continuum, for example, from overreliant to balanced use.

5.5. Answers to the Research Questions

The study raised three research questions, which are summarized and answered below in light of the findings.
RQ1: How do students use AI tools in academic contexts, and what patterns of engagement can be identified?
Cluster analysis revealed four distinct yet interconnected profiles of AI competency: Low-Engagement, Active–Cautious, Balanced–Confident, and High-Use–Vigilant. These profiles represent a continuum of practices, from limited and unreflective use toward autonomous, well-regulated, and ethically aware engagement (Table 11, Table 12, Table 13 and Table 14, Figure 3 and Figure 4). Mapper analysis further confirmed that these patterns are not rigid categories but part of a continuous topology of student behavior.
RQ2: What factors predict differences in AI competency profiles? The multinomial regression indicated that self-rated digital competence and study field (STEM vs. non-STEM) were the strongest predictors of belonging to higher profiles (Table 12). Gender and study level had weaker, non-significant effects. These findings suggest that prior digital readiness and disciplinary exposure shape the depth and regulation of AI engagement.
RQ3: How are emotional and ethical responses related to AI competency? Emotional and ethical validation analyses (Table 13) revealed that higher profiles (3 and 4) are characterized by elevated curiosity and motivation, along with lower stress/anxiety and higher ethical reflection. This indicates that advanced AI users combine confidence and regulation with awareness of potential ethical risks, whereas lower-profile users tend to show both limited engagement and lower ethical sensitivity.
These results provide empirical answers to the research questions and support the conceptualization of AI competency as a multidimensional and developmental continuum, integrating behavioral, emotional, and ethical dimensions.

5.6. Applications and Future Directions

Although exploratory, the competency profiles identified in this study can serve as reference categories for practical applications in higher education. Beyond their statistical identification, the profiles offer actionable insights for interventions, curriculum design, and student monitoring.
First, profiles provide a way to identify students who may benefit from targeted support. For example, students in Profile 2 (Low-Engagement), characterized by low engagement and higher overreliance, could be offered workshops on critical use of AI tools and strategies for independent learning. In contrast, students in Profile 4 (High-Use–Vigilant), who report very frequent use but also higher stress and ethical concerns, may benefit from guidance on managing workload, stress reduction strategies, and reflective practices for ethical decision-making.
Second, competency profiles can inform curriculum development. Rather than assuming a homogeneous level of AI literacy, instructors can design learning activities that acknowledge different starting points. Introductory modules may focus on building basic awareness and confidence for less engaged students, while advanced activities could emphasize ethical reflection and integration of AI into complex problem-solving for more experienced users. This aligns with recent initiatives calling for embedding AI across the curriculum [33] and for tailoring literacy programs to diverse learner groups [25].
Third, the profiles could be used as a monitoring tool. By applying the same clustering approach longitudinally, institutions may track how students move between profiles during their studies. This would enable early detection of students at risk of overreliance or disengagement, while also providing evidence of the effectiveness of curricular interventions. With additional development, a short-form questionnaire or predictive model could allow institutions to classify students efficiently and provide tailored guidance. Integrating such classification into learning analytics dashboards could help advisors and instructors design personalized support pathways, linking competency profiles to digital well-being and academic outcomes.
Future research should expand the scope of this study by including diverse institutional contexts, disciplines, and national settings, as called for in large-scale reviews of the AI literacy landscape [23,32]. Longitudinal designs are needed to examine how students’ AI competency evolves and whether transitions between profiles can be fostered through specific pedagogical interventions. Moreover, further validation of the framework with refined or reduced instruments would make it more usable in classroom and institutional settings. Finally, exploring the intersection between AI competence, academic performance, and broader digital well-being would provide a richer understanding of how AI shapes the student experience.
More broadly, the profiles identified in this study open a set of questions that extend beyond the scope of the present analysis but are essential for a cumulative research agenda. Existing empirical work in AI literacy consistently highlights the need to examine how students’ motivation, ethical awareness, and digital readiness evolve as they engage with AI tools [1,23,25,26]. The behavioral differences observed across profiles—particularly regarding revision, overreliance, and emotional engagement—suggest that AI competency is not a static trait but a developmental process shaped by pedagogical support, digital experience, and study practices.
Future research should therefore investigate how these behavioral tendencies change over time, and whether targeted instructional interventions—such as those evaluated in recent AI literacy programs [24,30]—can facilitate movement toward more reflective and autonomous use. Mixed-methods approaches would also be valuable in linking self-reported practices with real-time data (e.g., prompt histories or interaction logs), as recommended by recent reviews [34,35]. Finally, cross-institutional and cross-cultural studies are needed to assess the generalisability of the competency continuum and to examine how structural factors, such as disciplinary conventions or institutional policies, shape students’ evolving relationships with AI [32,33]. Together, these avenues form a coherent agenda for advancing empirical research on AI competency in higher education.

5.7. Limitations

Several limitations must be acknowledged. First, although the sample was diverse and included students from multiple universities across the Valencian Community, it cannot be considered representative of higher education more broadly. Second, the study relied on self-reports, which may introduce biases; future work should incorporate behavioral log data or mixed-methods approaches. Third, while clustering revealed meaningful patterns, the boundaries between profiles are not strict, as illustrated by TDA, and classification should be used for formative rather than summative purposes. Finally, the cross-sectional design does not allow conclusions about causal relationships or developmental trajectories. The cultural specificity of the sample must also be considered, as results may differ in other educational systems or national contexts.

6. Conclusions

This study provides empirical evidence that AI competency among students is not homogeneous but structured into distinct yet interconnected profiles: Low-Engagement, Active–Cautious, Balanced–Confident, and High-Use–Vigilant. These profiles reflect meaningful differences in frequency and purpose of use, self-regulation, ethical awareness, and emotional experience. By combining clustering methods with topological analysis, we demonstrated that competency is best conceptualized as a continuum with transitional states rather than as rigid categories.
The study extends existing AI literacy frameworks by grounding them in actual student practices and by showing how competency is linked to both cognitive and affective dimensions. Beyond methodological innovation, it contributes a practical framework for identifying patterns of engagement that can inform teaching strategies, curriculum design, institutional policy, and the promotion of students’ digital well-being.
This work highlights the formative value of AI CAR—a framework that connects empirical profiling with actionable educational guidance—and points toward future applications in curriculum design, monitoring, and support for responsible, autonomous, and reflective AI use in higher education.
Beyond describing current patterns of use, this study translates existing AI literacy and competency frameworks into empirically grounded behavioral profiles, thereby providing a bridge between normative models and everyday academic practice. By situating the four profiles within prior empirical findings and outlining their implications for teaching, learning, and policy, the work contributes to a cumulative research agenda that views AI competency as a multidimensional, developmental construct rather than a static attribute of individual students.

Author Contributions

Conceptualization, L.M.S.-R. and E.V.-F.; methodology, L.M.S.-R.; software, S.M.-L.; formal analysis, L.M.S.-R. and N.L.-G.; investigation, E.V.-F.; resources, S.M.-L.; data curation, E.V.-F.; writing—original draft preparation, S.M.-L.; writing—review and editing, N.L.-G.; visualization, E.V.-F.; supervision, L.M.S.-R.; project administration, S.M.-L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Universitat Politècnica de València, PIME project PIME-C/24-25/440 “Evaluación y Mejora de Rutas de Aprendizaje Digitales con IA para Fomentar el Pensamiento Crítico en Ingeniería”.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Review Board (Ethics Committee) of the Universitat Politècnica de València (protocol code P22_22-07-2025, date of approval: 22 July 2024).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study. Participation was voluntary and responses were collected anonymously.

Data Availability Statement

The data supporting the findings of this study will be provided by the corresponding author upon reasonable request. Due to ethical restrictions and privacy concerns, the dataset cannot be made publicly available. Any data sharing is subject to approval by the Ethics Committee of the Universitat Politècnica de València.

Acknowledgments

The authors wish to thank the Universitat Politècnica de València for administrative support throughout the project. We also acknowledge the constructive feedback provided by colleagues during the internal review process. During the preparation of this manuscript, the authors used ChatGPT (OpenAI, GPT-5, 2025) and Grammarly to support text refinement and language polishing. The authors have carefully reviewed and edited the output and take full responsibility for the final content of this publication.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

    The following abbreviations are used in this manuscript:
AI: Artificial Intelligence
GDPR: General Data Protection Regulation (European Union)
TDA: Topological Data Analysis
Mapper: algorithm from TDA used to visualize data topology
k-means: unsupervised clustering algorithm for grouping observations
ANOVA: Analysis of Variance (statistical test)
PCA: Principal Components Analysis (dimensionality reduction)
SD: Standard Deviation
STEM: Science, Technology, Engineering, and Mathematics

Appendix A. Questionnaire: AI Use and Competency in Higher Education

  • Informed Consent
You are invited to participate in a research study about how university students use artificial intelligence (AI) tools in academic contexts. The aim of this study is to explore different patterns of AI use and understand how students interact with these technologies in relation to their learning strategies, autonomy, and ethical awareness.
Participation is entirely voluntary. The survey is anonymous and will take approximately 10–12 min to complete. You may skip any question or withdraw at any point without consequences.
All data will be processed in accordance with the EU General Data Protection Regulation (GDPR) and relevant Spanish data protection laws. No personally identifying information will be collected. The data will be used exclusively for academic and scientific purposes, and only aggregated results will be published.
This study has been reviewed and approved by the Institutional Review Board (Ethics Committee) of the Universitat Politècnica de València (protocol code P22_22-07-2025, date of approval: 22 July 2024). If you have any questions, you may contact the principal investigator at: LMSR@mat.upv.es.
By clicking “Continue” or proceeding with the questionnaire, you confirm that:
  • You are at least 18 years old.
  • You have read and understood this information.
  • You voluntarily agree to participate in the study.
  • Block 0: Demographic and Contextual Information
  • Age: ____ years
  • Province of current residence (for academic reasons): ________
  • University you currently attend:
    • UPV
    • UV
    • UA
    • UJI
    • Other: ________
  • What type of studies are you pursuing?
    • Bachelor’s degree
    • Master’s degree
    • PhD
    • Higher vocational training (CFGS)
    • Other: _______
  • Main field of study:
    • Engineering or Architecture
    • Computer Science or Information Technology
    • Natural Sciences or Mathematics
    • Social Sciences (Psychology, Sociology, etc.)
    • Law or Political Science
    • Economics or Business
    • Education or Teacher Training
    • Arts and Humanities
    • Health Sciences (Medicine, Nursing, etc.)
    • Other/Interdisciplinary
  • How do you identify? (Optional–multiple options allowed)
    • Woman
    • Man
    • Non-binary person
    • Prefer not to say
    • Other: ________
  • How would you rate your general digital competency (use of digital technologies in everyday and academic life)?
    • 1—Very low
    • 2—Low
    • 3—Medium
    • 4—High
    • 5—Very high
  • Block 1: Knowledge and Awareness of AI Tools
8. Which AI tools have you used or tried? (Select all that apply)
  • ChatGPT
  • Claude/Perplexity/Gemini
  • Copilot/CodeWhisperer/Gemini Code
  • DeepSeek/DeepSeek-Coder
  • Grammarly/DeepL/AI-based translation tools
  • DALL·E/Midjourney/image-generation tools
  • Other: ________
  • I have not used any AI tool
9. Do you feel capable of explaining, in general terms, what a generative AI does?
  • 1—Not at all
  • 2
  • 3
  • 4
  • 5—Absolutely
10. How did you learn to use AI tools? (Select all that apply)
  • Self-taught
  • University course/workshop
  • From peers or friends
  • Online tutorials or social media
  • Other: ________
  • Block 2: Frequency and Variety of Use
11. How often do you use AI tools for academic tasks?
  • 1—Never
  • 2—Occasionally
  • 3—Monthly
  • 4—Weekly
  • 5—Almost daily
12. How often do you use AI tools for the following academic tasks?
(1 = Never, 5 = Very frequently)
  • Generating ideas or approaches                        [1] [2] [3] [4] [5]
  • Writing or proofreading text                             [1] [2] [3] [4] [5]
  • Translating or summarizing documents          [1] [2] [3] [4] [5]
  • Solving exercises or problems                           [1] [2] [3] [4] [5]
  • Programming or code debugging                     [1] [2] [3] [4] [5]
  • Planning or organizing study                            [1] [2] [3] [4] [5]
13. How often do you use AI tools for non-academic purposes?
  • 1—Never
  • 2—Occasionally
  • 3—Monthly
  • 4—Weekly
  • 5—Almost daily
  • Block 3: Purpose and Strategy of Use
14. To what extent do you review, check, or modify the AI’s output?
  • 1—Never
  • 2
  • 3
  • 4
  • 5—Always
15. Do you feel you understand topics better when using AI?
  • 1—Not at all
  • 2
  • 3
  • 4
  • 5—Very much
16. What do you usually do when the AI output seems incorrect or incomplete?
  • Accept it as it is
  • Edit it based on your own knowledge
  • Verify it using other sources
  • Ask the AI again or rephrase the question
17. How often do you encounter AI-generated misinformation?
  • 1—Never
  • 2
  • 3
  • 4
  • 5—Very often
  • Block 4: Autonomy and Self-Regulation
18. Do you think you rely too much on AI to study or work?
  • 1—Not at all
  • 2
  • 3
  • 4
  • 5—Very much
19. When you use AI for academic tasks, which of these best describes your typical approach?
  • I let AI do most of the work and submit it directly
  • I use AI to help me, but I revise and learn from it
  • I use AI for ideas, but I build the final result myself
  • I only use AI when I feel blocked or stuck
  • I avoid using AI as much as possible
20. How concerned are you about unintentional plagiarism when using AI?
  • 1—Not at all concerned
  • 2
  • 3
  • 4
  • 5—Very concerned
  • Block 5: Ethical and Emotional Reflection
21. Please rate how strongly you feel these emotions when using AI in your studies:
(1 = Not at all, 5 = Very strongly)
  • Curiosity                            [1] [2] [3] [4] [5]
  • Guilt                                   [1] [2] [3] [4] [5]
  • Ethical doubt                     [1] [2] [3] [4] [5]
  • Calmness                           [1] [2] [3] [4] [5]
  • Stress or anxiety               [1] [2] [3] [4] [5]
  • Motivation                         [1] [2] [3] [4] [5]
  • Distrust                              [1] [2] [3] [4] [5]
22. Are you concerned that AI use might affect your way of learning?
  • 1—Not at all
  • 2
  • 3
  • 4
  • 5—Very much
23. (Optional) Describe a situation where AI significantly helped or hindered your learning:
________________________________________________
________________________________________________

References

  1. Pinski, M.; Benlian, A. AI literacy for users—A comprehensive review and future research directions of learning methods, components, and effects. Comput. Hum. Behav. Artif. Humans 2024, 2, 100062. [Google Scholar] [CrossRef]
  2. Allen, L.K.; Kendeou, P. ED-AI lit: An interdisciplinary framework for AI literacy in education. Policy Insights Behav. Brain Sci. 2023, 11, 3–10. [Google Scholar] [CrossRef]
  3. Carolus, A.; Augustin, Y.; Markus, A.; Wienrich, C. Digital interaction literacy model: Conceptualizing competencies for literate interactions with voice-based AI systems. Comput. Educ. Artif. Intell. 2022, 4, 100114. [Google Scholar] [CrossRef]
  4. UNESCO. Beijing Consensus on Artificial Intelligence and Education. 2019. Available online: https://unesdoc.unesco.org/ark:/48223/pf0000368303 (accessed on 15 November 2025).
  5. UNESCO. AI Competency Framework for Teachers. 2022. Available online: https://unesdoc.unesco.org/ark:/48223/pf0000391104 (accessed on 15 November 2025).
  6. UNESCO. AI Competency Framework for Students. 2023. Available online: https://unesdoc.unesco.org/ark:/48223/pf0000391105 (accessed on 15 November 2025).
  7. UNESCO. AI and Education: Guidance for Policy-Makers. 2021. Available online: https://unesdoc.unesco.org/ark:/48223/pf0000376709 (accessed on 15 November 2025).
  8. European Commission. The Digital Competency Framework for Citizens (DigComp); Joint Research Centre: Brussels, Belgium, 2018; Available online: https://joint-research-centre.ec.europa.eu/projects-and-activities/education-and-training/digital-transformation-education/digital-competence-framework-citizens-digcomp_en (accessed on 14 September 2025).
  9. OECD. Future of Education and Skills 2030; Organisation for Economic Co-Operation and Development: Paris, France, 2019; Available online: https://www.oecd.org/en/about/projects/future-of-education-and-skills-2030.html (accessed on 15 November 2025).
  10. Mikeladze, T.; Meijer, P.C.; Verhoeff, R.P. A comprehensive exploration of artificial intelligence competency frameworks for educators: A critical review. Eur. J. Educ. 2024, 59, e12663. [Google Scholar] [CrossRef]
  11. Ng, D.T.K.; Leung, J.K.L.; Chu, S.K.W.; Qiao, M.S. Conceptualizing AI Literacy: An Exploratory Review. Comput. Educ. Artif. Intell. 2021, 2, 100041. [Google Scholar] [CrossRef]
  12. Carlsson, G. Topology and Data. Bull. Am. Math. Soc. 2009, 46, 255–308. [Google Scholar] [CrossRef]
  13. Singh, G.; Mémoli, F.; Carlsson, G.E. Topological Methods for the Analysis of High Dimensional Data Sets and 3D Object Recognition. In Proceedings of the Eurographics Symposium on Point-Based Graphics, Prague, Czech Republic, 2–3 September 2007; pp. 91–100. [Google Scholar]
  14. Lloyd, S. Least squares quantization in PCM. IEEE Trans. Inf. Theory 1982, 28, 129–137. [Google Scholar] [CrossRef]
  15. Carolus, A.; Koch, M.J.; Straka, S.; Latoschik, M.E.; Wienrich, C. MAILS—Meta AI literacy scale: Development and testing of an AI literacy questionnaire based on well-founded competency models and psychological change- and meta-competencies. Comput. Hum. Behav. Artif. Humans 2023, 1, 100014. [Google Scholar] [CrossRef]
  16. Laupichler, M.C.; Aster, A.; Raupach, T. Delphi study for the development and preliminary validation of an item set for the assessment of non-experts’ AI literacy. Comput. Educ. Artif. Intell. 2023, 4, 100126. [Google Scholar] [CrossRef]
  17. Dakakni, D.; Safa, N. Artificial intelligence in the L2 classroom: Implications and challenges on ethics and equity in higher education: A 21st century Pandora’s box. Comput. Educ. Artif. Intell. 2023, 5, 100179. [Google Scholar] [CrossRef]
  18. Adams, C.; Pente, P.; Lemermeyer, G.; Rockwell, G. Ethical principles for artificial intelligence in K-12 education. Comput. Educ. Artif. Intell. 2023, 4, 100131. [Google Scholar] [CrossRef]
  19. Wang, B.; Rau, P.L.P.; Yuan, T. Measuring user competency in using artificial intelligence: Validity and reliability of artificial intelligence literacy scale. Behav. Inf. Technol. 2022, 42, 1324–1337. [Google Scholar] [CrossRef]
  20. Pinski, M.; Benlian, A. AI literacy: Towards measuring human competency in artificial intelligence. In Proceedings of the 56th Hawaii International Conference on System Sciences, Maui, HI, USA, 3–6 January 2023; pp. 165–174. [Google Scholar] [CrossRef]
  21. Cetindamar, D.; Kitto, K.; Wu, M.; Zhang, Y.; Abedin, B.; Knight, S. Explicating AI literacy of employees at digital workplaces. IEEE Trans. Eng. Manag. 2022, 71, 810–823. [Google Scholar] [CrossRef]
  22. Long, D.; Magerko, B. What is AI Literacy? Competencies and Design Considerations. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI ’20), Honolulu, HI, USA, 25–30 April 2020; Association for Computing Machinery: New York, NY, USA, 2020; pp. 1–16. [Google Scholar] [CrossRef]
  23. Yang, Y.; Zhang, Y.; Sun, D.; He, W.; Wei, Y. Navigating the landscape of AI literacy education: Insights from a decade of research (2014–2024). Humanit. Soc. Sci. Commun. 2025, 12, 374. [Google Scholar] [CrossRef]
  24. Kong, S.C.; Cheung, W.M.Y.; Tsang, O. Evaluating an artificial intelligence literacy programme for secondary students: Conceptual learning, literacy and ethical awareness. Educ. Inf. Technol. 2023, 28, 4703–4724. [Google Scholar] [CrossRef]
  25. Kong, S.C.; Cheung, W.M.Y.; Zhang, G. Evaluating an artificial intelligence literacy programme for developing university students’ conceptual understanding, literacy, empowerment and ethical awareness. Educ. Technol. Soc. 2023, 26, 16–30. [Google Scholar] [CrossRef]
  26. Kong, S.-C.; Cheung, W.M.-Y.; Zhang, G. Evaluation of an artificial intelligence literacy course for university students with diverse study backgrounds. Comput. Educ. Artif. Intell. 2021, 2, 100026. [Google Scholar] [CrossRef]
  27. Celik, I. Exploring the determinants of artificial intelligence (AI) literacy: Digital divide, computational thinking, cognitive absorption. Telemat. Inform. 2023, 83, 102026. [Google Scholar] [CrossRef]
  28. Cardon, P.; Fleischmann, C.; Aritz, J.; Logemann, M.; Heidewald, J. The challenges and opportunities of AI-assisted writing: Developing AI literacy for the AI age. Bus. Prof. Commun. Q. 2023, 86, 257–295. [Google Scholar] [CrossRef]
  29. Firat, M. What ChatGPT means for universities: Perceptions of scholars and students. J. Appl. Learn. Teach. 2023, 6, 57–63. [Google Scholar] [CrossRef]
  30. Tzirides, A.O.; Zapata, G.; Kastania, N.P.; Saini, A.K.; Castro, V.; Ismael, S.A.; You, Y.-L.; Afonso dos Santos, T.; Searsmith, D.; O’Brien, C.; et al. Combining human and artificial intelligence for enhanced AI literacy in higher education. Comput. Educ. Open 2024, 6, 100184. [Google Scholar] [CrossRef]
  31. Chan, C.K.Y.; Lee, K.K. The AI generation gap: Are Gen Z students more interested in adopting generative AI such as ChatGPT in teaching and learning than their teachers? Smart Learn. Environ. 2023, 10, 60. [Google Scholar] [CrossRef]
  32. Yang, Y.; Sun, W.; Sun, D.; Salas-Pilco, S.Z. Navigating the AI-enhanced STEM education landscape: A decade of insights, trends, and opportunities. Res. Sci. Technol. Educ. 2024, 43, 693–717. [Google Scholar] [CrossRef]
  33. Southworth, J.; Migliaccio, K.; Glover, J.; Reed, D.; McCarty, C.; Brendemuhl, J.; Thomas, A. Developing a model for AI across the curriculum: Transforming the higher education landscape via innovation in AI literacy. Comput. Educ. Artif. Intell. 2023, 4, 100127. [Google Scholar] [CrossRef]
  34. Casal-Otero, L.; Catala, A.; Fernández-Morante, C.; Taboada, M.; Cebreiro, B.; Barro, S. AI literacy in K-12: A systematic literature review. Int. J. STEM Educ. 2023, 10, 29. [Google Scholar] [CrossRef]
  35. Ng, D.T.K.; Su, J.; Leung, J.K.L.; Chu, S.K.W. Artificial intelligence (AI) literacy education in secondary schools: A review. Interact. Learn. Environ. 2023, 32, 6204–6224. [Google Scholar] [CrossRef]
Figure 1. Analytical framework for AI Competency Assessment. The process integrates behavioral data collection, computational modeling (clustering and topological analysis), and educational interpretation into a coherent, data-driven structure for understanding students’ AI engagement.
Figure 2. Comparison of clustering performance metrics for different numbers of clusters (k). Each panel displays the Elbow method (left) and Silhouette scores (right). The upper panel corresponds to the clustering performed with the reduced set of standardized variables, while the lower panel uses all survey variables. Both criteria converged on a four-cluster solution as the most interpretable and stable configuration.
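The model-selection logic summarized in Figure 2 can be sketched as follows. This is an illustrative reimplementation on synthetic data, not the study’s dataset or code; the group placement, dimensionality, and all parameter values are our assumptions.

```python
import numpy as np

# Synthetic stand-in for standardized survey variables (z-scores):
# four loose groups in 8 dimensions, mimicking a reduced variable set.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, scale=0.6, size=(50, 8))
               for c in (-1.5, -0.5, 0.5, 1.5)])

def kmeans(X, k, n_iter=50, seed=0):
    """Plain Lloyd's algorithm; returns labels and within-cluster SSE (inertia)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = d.argmin(axis=1)
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    inertia = ((X - centers[labels]) ** 2).sum()
    return labels, inertia

def mean_silhouette(X, labels):
    """Naive O(n^2) silhouette: per-point cohesion (a) vs. separation (b)."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    s = []
    for i in range(len(X)):
        same = labels == labels[i]
        if same.sum() < 2:
            s.append(0.0)
            continue
        a = D[i, same].sum() / (same.sum() - 1)   # exclude self (distance 0)
        b = min(D[i, labels == c].mean() for c in set(labels) if c != labels[i])
        s.append((b - a) / max(a, b))
    return float(np.mean(s))

# Elbow: inertia always decreases with k (look for the bend);
# silhouette tends to peak near the most coherent partition.
scores = {}
for k in range(2, 7):
    labels, inertia = kmeans(X, k)
    scores[k] = (inertia, mean_silhouette(X, labels))
```

In practice these two criteria are inspected jointly, as in the paper: the elbow alone is ambiguous, while the silhouette rewards both compact and well-separated clusters.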
Figure 3. Profile centers (z-scores) across key variables: frequency of academic use, reviewing outputs, overreliance, plagiarism concern, curiosity, motivation, stress/anxiety, and distrust. Higher (blue) and lower (yellow) standardized values indicate the relative standing of each profile per variable.
Figure 4. PCA scatter plot (PC1 vs. PC2) colored by k-means profiles ( k = 4 ). The plot illustrates that, although students form a continuum, four prototypical profiles can be distinguished, supporting the four-profile solution.
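The projection underlying Figure 4 can be sketched with a centered singular value decomposition; the data and profile labels below are synthetic placeholders, not the study’s variables.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 8))            # stand-in for standardized responses
labels = rng.integers(0, 4, size=200)    # stand-in for the four k-means profiles

Xc = X - X.mean(axis=0)                  # PCA requires centered data
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T                   # PC1/PC2 coordinates for the scatter
explained = S**2 / (S**2).sum()          # variance ratio per component
# A plot would color `scores` by `labels`, as in the figure.
```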
Figure 5. Mapper graph of AI competency responses. Each node represents a group of students; node size reflects membership, and node color encodes the average AI proficiency profile (1–4). Edges represent overlaps between adjacent groups, highlighting transitions rather than rigid boundaries.
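A minimal version of the Mapper construction behind Figure 5 is sketched below: cover a one-dimensional lens with overlapping intervals, group points within each interval, and connect groups that share members. The lens, linkage threshold, and cover parameters here are illustrative assumptions, not the study’s settings.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
X = rng.normal(size=(150, 5))                # stand-in for survey responses
lens = X @ np.ones(5) / np.sqrt(5)           # toy 1-D lens (a projection)

def mapper_graph(X, lens, n_intervals=6, overlap=0.3, link=1.5):
    """Minimal Mapper: overlapping cover of the lens, single-linkage grouping
    inside each interval, and an edge whenever two nodes share a data point."""
    lo, hi = lens.min(), lens.max()
    width = (hi - lo) / n_intervals
    step = width * (1 - overlap)             # consecutive intervals overlap
    nodes = []                               # each node = a set of point indices
    start = lo
    while start < hi:
        idx = np.where((lens >= start) & (lens <= start + width))[0]
        remaining = set(idx.tolist())
        while remaining:                     # single-linkage components
            comp = {remaining.pop()}
            grew = True
            while grew:
                grew = False
                for j in list(remaining):
                    if min(np.linalg.norm(X[j] - X[i]) for i in comp) <= link:
                        comp.add(j); remaining.discard(j); grew = True
            nodes.append(frozenset(comp))
        start += step
    edges = {(a, b) for a, b in combinations(range(len(nodes)), 2)
             if nodes[a] & nodes[b]}
    return nodes, edges

nodes, edges = mapper_graph(X, lens)
```

Because intervals overlap, a student can belong to several nodes at once, which is exactly what lets Mapper expose gradual transitions between profiles rather than hard boundaries.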
Table 1. Comparison of selected AI competency frameworks.
Framework | Target Audience | Core Dimensions | Behavioral Focus
DigComp [8] | General population (EU) | Digital content creation, problem solving, communication, safety | Conceptual only; no behavioral metrics
UNESCO Student Framework [6] | Students (global) | Understanding, use, ethical reflection, social impact | Mainly conceptual; lacks behavioral indicators
UNESCO Teacher Framework [5] | Educators (global) | Teaching with AI, understanding AI, ethics and inclusion | Professional practice emphasis; not learner behavior
MAILS [15] | Higher education students | Understanding, interaction, creation, reflection | Self-reported frequency and perception; no direct behavioral observation
AI Literacy Scale [19] | Users (general) | Competency in using AI systems | Self-perception instrument; does not assess behavioral data
AI Competency (Workplace) [21] | Employees | Skills for digital workplaces | Cognitive and conceptual; limited evidence of applied use
Table 2. Internal consistency of emotional subscales (Cronbach’s alpha).
Subscale | Items | Cronbach’s α
Positive emotions | Curiosity, Calmness, Motivation | 0.90
Negative/ethical emotions | Guilt, Ethical doubt, Stress, Distrust | 0.92
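Cronbach’s alpha for such subscales follows directly from the item variances and the variance of the summed score. The sketch below uses simulated Likert-like data, not the study’s responses; the latent-trait construction is purely illustrative.

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) array of item scores.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(3)
trait = rng.normal(3.5, 0.8, size=(300, 1))          # shared latent emotion level
items = trait + rng.normal(0.0, 0.4, size=(300, 3))  # three correlated items
alpha = cronbach_alpha(items)
```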
Table 3. Sample characteristics (N = 686).
Variable | Category | %
Level of studies | Bachelor’s degree | 54.7
 | Master’s degree | 30.2
 | PhD | 15.1
Gender | Woman | 49.3
 | Man | 47.2
 | Prefer not to say | 3.5
Main field | Engineering or Architecture | 15.9
 | Computer Science/IT | 13.0
 | Natural Sciences/Mathematics | 11.7
 | Law or Political Science | 10.9
 | Economics or Business | 10.5
 | Social Sciences (Psychology, Sociology, etc.) | 10.3
 | Health Sciences | 9.6
 | Education or Teacher Training | 9.3
 | Arts and Humanities | 8.7
Table 4. General AI competency and use (scales 1–5).
Variable | Mean | SD
Digital competency (self-rated) | 4.15 | 0.88
Ability to explain generative AI | 3.80 | 0.98
Frequency of academic AI use | 4.23 | 0.84
Frequency of non-academic AI use | 3.32 | 1.00
Reviewing AI outputs | 3.65 | 0.84
Overreliance on AI | 2.35 | 0.77
Plagiarism concern | 3.03 | 0.79
Table 5. Emotional responses to AI use (scales 1–5).
Emotion | Mean | SD
Curiosity | 4.02 | 0.84
Motivation | 3.97 | 0.78
Calmness | 3.48 | 0.78
Guilt | 2.70 | 0.74
Ethical doubt | 2.68 | 0.79
Stress/Anxiety | 2.66 | 0.77
Distrust | 2.67 | 0.75
Table 6. Strategies when AI output is incorrect (N = 686).
Strategy | %
Edit with own knowledge | 30.3
Accept as it is | 29.9
Verify with other sources | 29.2
Re-ask or reformulate | 10.6
Table 7. Approach to AI use (N = 686).
Approach | %
Use AI as help, but revise and learn from it | 41.0
Use AI for ideas, but build final result myself | 29.4
Use AI only when blocked | 20.1
Avoid AI as much as possible | 9.5
Table 8. Frequency of AI use across academic tasks (scales 1–5).
Task | Mean | SD
Generating ideas or approaches | 3.79 | 0.98
Writing or proofreading text | 3.81 | 0.94
Translating or summarizing documents | 3.83 | 0.95
Solving exercises or problems | 3.80 | 0.95
Programming or debugging | 3.80 | 0.96
Planning or organizing study | 3.84 | 0.93
Table 9. Concern about AI impact on learning (scale 1–5).
Variable | Mean | SD
Concern about impact on learning | 2.98 | 0.79
Table 10. Selected correlations between AI use variables and emotions (|r| > 0.25).
Variable Pair | Correlation (r) | p-Value
Academic AI use – Reviewing outputs | 0.31 | <0.001
Academic AI use – Curiosity | 0.30 | <0.001
Academic AI use – Motivation | 0.31 | <0.001
Curiosity – Motivation | 0.27 | <0.001
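For reference, a Pearson correlation of the kind reported above can be computed directly from centered samples. The data below are simulated with an assumed population correlation of 0.3; they are not the study’s variables.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length samples."""
    x = x - x.mean()
    y = y - y.mean()
    return float((x @ y) / np.sqrt((x @ x) * (y @ y)))

rng = np.random.default_rng(5)
use = rng.normal(size=686)  # stand-in: standardized academic AI use
curiosity = 0.3 * use + rng.normal(size=686) * np.sqrt(1 - 0.3**2)
r = pearson_r(use, curiosity)
```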
Table 11. Profile centers (standardized z-scores) for usage, self-regulation, and ethics. Emotional variables are summarized in Table 13.
Variable (z-scores) | Profile 1 (n = 139) | Profile 2 (n = 187) | Profile 3 (n = 132) | Profile 4 (n = 228)
Frequency of academic use | 0.49 | −1.07 | 0.16 | 0.48
Reviewing outputs | 0.21 | −0.59 | 0.05 | 0.33
Overreliance | −0.10 | 0.39 | −0.24 | −0.11
Plagiarism concern | 0.54 | 0.13 | −1.40 | 0.37
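Profile centers of this form are obtained by standardizing each variable and averaging the z-scores within each cluster. The sketch below uses random Likert responses and random profile labels as placeholders; the sample size matches the study but nothing else does.

```python
import numpy as np

rng = np.random.default_rng(4)
raw = rng.integers(1, 6, size=(686, 4)).astype(float)  # Likert 1–5, 4 variables
labels = rng.integers(0, 4, size=686)                  # stand-in profile labels

z = (raw - raw.mean(axis=0)) / raw.std(axis=0)         # per-variable z-scores
centers = np.vstack([z[labels == p].mean(axis=0) for p in range(4)])
# Each row of `centers` is one profile's standardized signature, as in Table 11.
```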
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
