
Framing and Evaluating Task-Centered Generative Artificial Intelligence Literacy for Higher Education Students

1 Department of Mathematics, Science, and Technology Education, Tel Aviv University, Tel-Aviv 6997801, Israel
2 School of Mechanical Engineering, Tel Aviv University, Tel-Aviv 6997801, Israel
3 Department of Labor Studies, Tel Aviv University, Tel-Aviv 6997801, Israel
4 Department of Art History, Tel Aviv University, Tel-Aviv 6997801, Israel
* Author to whom correspondence should be addressed.
Systems 2025, 13(7), 518; https://doi.org/10.3390/systems13070518
Submission received: 27 May 2025 / Revised: 16 June 2025 / Accepted: 24 June 2025 / Published: 27 June 2025

Abstract

The rise of generative artificial intelligence (GenAI) demands new forms of literacy among higher education students. This paper introduces a novel task-centered generative artificial intelligence literacy framework, developed collaboratively with academic and administrative staff at a large research university in Israel. The framework identifies eight skills, informed by the six cognitive domains of Bloom’s Taxonomy. Based on this framework, we developed a tool for measuring students’ GenAI literacy and surveyed 1667 students. Findings from the empirical phase show moderate GenAI use and medium–high literacy levels, with significant variations by gender, discipline, and age. Notably, 82% of students support formal GenAI instruction, favoring integration within curricula as preparation for broader participation in digital society. The study offers actionable insights for educators and policymakers aiming to integrate GenAI into higher education responsibly and effectively.

1. Introduction

The emergence of generative artificial intelligence (GenAI) has reignited the discourse about the transformative changes that various fields, including education, will undergo. With a host of GenAI-based applications that can ease students’ burden of carrying out almost any task related to their academic studies, e.g., reading learning materials, writing essays, coding, or solving problems, higher education institutions have found themselves challenged by issues of integrity and assessment [1,2,3]. Overall, higher education stakeholders have faced important questions about how to integrate these technologies into teaching and learning in meaningful, ethical, and pedagogically sound ways [4,5]. At the core of this discussion lies the issue of better preparing students for our digitally saturated world, with GenAI being its newest addition.
Literacy in the context of digital technologies has evolved in recent decades to include a range of skills, competencies, and dispositions. Whilst several frameworks have sought to articulate the skills needed for the digital or information age [6,7,8,9], the emergence of GenAI, with the unique affordances and obstacles it brings, calls for an updated conceptualization [10]. Specifically, there is a growing recognition that students must be aware of GenAI tools, understand their capabilities and limitations, use them ethically, and integrate them effectively and efficiently into their academic work. However, existing definitions of AI literacy are often either overly broad or insufficiently grounded in the realities of student learning, leaving a gap in actionable guidance for both learners and educators.
In this paper, we address this gap by proposing a task-centered framework for GenAI literacy in the context of learning in higher education. Our framework is grounded in Bloom’s Revised Taxonomy and revolves around tasks students encounter in academic settings. Therefore, the purpose of this paper is twofold. First, to present the development of a Task-Centered Generative Artificial Intelligence Literacy framework for higher education students; second, to present the use of this framework to study students’ GenAI literacy and its associations with demographic-, academic-, and experience-related personal variables.

2. Literature Review

2.1. Preparing Students for the Digital Age

Over the last three decades or so, education researchers and practitioners have been pondering how to prepare students for the post-industrial, digitally saturated age, which is characterized by heavy, constant use of information and communication technologies (ICT). It has been agreed upon that the “3Rs” (reading, writing, arithmetic) no longer suffice as the basic skills required of new generations of learners, employees, and citizens, and that a new set of skills needed to be defined. However, the question of which skills should form this new set has received many different answers. Furthermore, the very term “skill” has been debated, and others, e.g., literacy or competency, have been suggested as more suitable alternatives.
These definitions, frameworks, taxonomies of skills, or their derivatives assume that living in an ICT-saturated world necessitates new ways of thinking about how this world behaves and how we should behave in it. Importantly, they are not technology-dependent, and overall shed new light on humans’ way of thinking when surrounded by technology. For example, one of the first new skills suggested for the digital age was visual thinking [11]. Moving from analog to digital imagery, with the latter being characterized by sheer volume and speed, enabled, maybe even necessitated, new ways of thinking, as individuals were conveniently equipped with the power of visual language. Visual thinking, as defined by Dake, included the understanding and implementation of greater flexibility and manipulability, interactive dialogues with images, expanded ways of storing and retrieving, multilevel expression, and use and reuse of existing images.
These early components of a “new” thinking skill for the digital age, from over 30 years ago, echo later definitions of other new skills. For example, storing and retrieving images is considered integral to personal information management, an important digital-age skill which is strongly associated with learning [12]; and the notion of use and reuse is integral to computational thinking [13], a thinking skill that stemmed from humans’ interactions with computers. More than that, visual literacy would later be considered one of a few digital literacies for the digital era in at least two important frameworks, each of which expands on some other components of Dake’s original viewpoints [6,8].
Notably, a few terms have been used in the context of defining what is required for effectively and efficiently managing today’s and tomorrow’s life. (Not to mention that a few terms have been used to describe what it is that we must prepare for, among which are “digital age”, “digital era”, “information age”, “the 21st-century”, and “the fourth industrial revolution”.) The most common terms are skills, literacies, and competencies [14]. Going beyond mastering knowledge, a skill is the ability to perform a certain physical or mental task that is functionally related to attaining a performance goal [15]. Continuing even further, competency is a set of skills, behaviors and attitudes needed to perform an activity or process in a competent manner [16]. As for literacy—this term was originally used in the context of reading and writing, and mostly revolved around the ability to understand and use information contained in various kinds of textual materials [17]; later, this term was used in other contexts, e.g., when mathematical literacy or digital literacy were defined, the understanding and use of these domains were still crucial, hence literacy inherently involves critical examination of related materials. Obviously, these three concepts are intertwined.
A few years ago, a comprehensive scoping review of skills, competencies, and literacies attributed to the new industrial landscape highlighted their large variety, including a mix of soft skills (e.g., creativity, collaboration, or interpersonal skills) and hard skills (e.g., data literacy, programming, or mathematical knowledge). Interestingly, some skills identified in the literature were coded in this review as “unclear/vague”, e.g., multi-purpose skills, flexibility to perform adaptive abilities, or global awareness, and some were coded as “outliers”, e.g., user experience design skills, or investigative and experimental skills [18]. Hence, there is no single agreed-upon framework, and existing frameworks differ greatly from one another. Even if we focus on the term “digital literacy” alone, it may consist of several distinct aspects, including critical, cognitive, social, operative, emotional, and projective dimensions [19]. This conundrum of concepts and definitions, which may represent either the richness of the topic or its ubiquitousness, is a challenge when suggesting a new framework. It also raises the question of why a new classification should be added at all to this already crowded field, a question to which we return in the next section.

2.2. Skills, Literacies, and Competencies for the Age of Generative Artificial Intelligence

Artificial Intelligence (AI) is not a new field, and its history as an academic discipline goes back to the 1950s [20]; much earlier than that, it was imagined in mythological traditions and fictional texts, especially within dystopian contexts, which helps explain the fear and even resentment we encounter among faculty, students, and the broader public. Definitions of AI vary and take different perspectives. Some take the perspective of a human interacting with an artificial machine, like the Turing Test, which states that a machine would be considered intelligent if that human could not distinguish the machine from a human; others take a comparative approach and state that a machine is intelligent if a human performing the same things it performs would be considered intelligent; and some are more operational and process-oriented, noting that the intelligence of a system lies in its ability to correctly interpret external data, to learn from it, and to use this learning to achieve specific goals and tasks through adaptation [21].
The history of AI in education is as long as the history of AI itself, as soon after this technological advancement was achieved, it found its way into the education field. The first common implementation of AI in education was probably that of intelligent tutoring systems, which tailored personalized learning paths to students. Today, AI seamlessly serves as the basis for numerous advancements in education at all levels, from student and teacher levels to course and school levels to district and national levels, and for various tasks, including assessment and grading, personalization and adaptation, mass teaching, management, and much more [22,23].
The more recent thread of Generative AI (GenAI), with its core use of Large Language Models, has yet again been associated with the potential to dramatically impact education at large. GenAI is largely defined by its ability to produce text, images, video, audio, and other forms of data. The current GenAI boom is mostly associated with the launch of OpenAI’s ChatGPT, a user-facing chatbot, in November 2022. Since then, we have witnessed a surge in applications for almost every aspect of life, with the education field being no exception. Evidence for the wide impact of GenAI applications on our daily life is the fact that various dictionaries chose GenAI-related terms as Word of the Year for 2023, e.g., the Macquarie Dictionary’s “Generative AI”, the Cambridge Dictionary’s “hallucinate”, and Merriam-Webster’s take on the artificial era, “authentic” [24,25,26]. Due to its high popularity and seemingly new nature, it is no surprise that a new mix of skills, literacies, and competencies has already been suggested for handling the GenAI-saturated world.
A common, broad definition of AI literacy is “the ability to be aware of and comprehend AI technology in practical applications; to be able to apply and exploit AI technology for accomplishing tasks proficiently; and to be able to analyze, select, and critically evaluate the data and information provided by AI, while fostering awareness of one’s own personal responsibilities and respect for reciprocal rights and obligations” ([27] p. 1326). Building on this, Bozkurt [28] defines GenAI literacy as follows:
AI literacy is the comprehensive set of competencies, skills, and fluency required to understand, apply, and critically evaluate AI technologies, involving a flexible approach that includes foundational knowledge (Know What), practical skills for effective real-world applications (Know How), and a deep understanding of the ethical and societal implications (Know Why), enabling individuals to engage with AI technologies in a responsible, informed, ethical, and impactful manner.
As comprehensive as this definition is, we find it too removed from the practicalities of learning and teaching. Like other suggested frameworks, e.g., [29], it is too abstract for teachers and instructors who might want to teach or assess such literacy, and it is too general for students to understand how to act according to it. Furthermore, when designing an assessment tool based on such a definition, the items may be too broad or context-independent [30]. Recent competence-based GenAI frameworks demonstrate a move toward defining GenAI literacy in a more actionable way [31,32]; however, the former is too broad to concretize, and the latter is not learning-focused.
Hence, we identified a gap in the current literature about GenAI literacy. We believe that a framework for GenAI literacy that would speak to students, teachers and instructors, and education scholars is still missing. Therefore, we take a task-centered approach in which GenAI literacy is concrete and operationalizable and is about actual and potential uses of GenAI in familiar contexts of tasks that are integral to learning. The characteristics of our framework are detailed below, under Section 3.1.

3. Developing a Task-Centered Generative Artificial Intelligence Literacy Framework

In this section, we present our Task-Centered Generative Artificial Intelligence Literacy framework. We start by describing its characteristics regarding learning theories, relevance to students’ academic experience and to instructors’ views; continue by detailing methodological issues of its development; and finally introduce the resulting framework in the wider context of other relevant literacies and skills.

3.1. Characteristics of the Framework

Early in the process of developing a new GenAI literacy, we decided that such a framework should be based on the following characteristics: relevance to learning theories, relevance to students’ actual learning experience, relevance to instructors across disciplines, and being actionable for instructors.

3.1.1. Relevance to Learning Theories: Bloom’s Revised Taxonomy

After reviewing various learning theories, we decided to base the framework on the cognitive domain of Bloom’s Revised Taxonomy of education goals. This taxonomy, both in its original form and in its revised form, has been widely used in numerous educational contexts and for multiple purposes, including instructional design and assessment. The original taxonomy included six categories denoted by nouns: knowledge, comprehension, application, analysis, synthesis, and evaluation. The revised taxonomy defined six categories of observable knowledge, skills, attitudes, behaviors, and abilities; the six categories were defined by verbs: remember, understand, apply, analyze, evaluate, and create [33,34]. Bloom’s taxonomy has shown great relevance to the digital era, with many applications for choosing appropriate technologies for achieving educational goals and for assessing technology integration in education [35,36,37,38]. Therefore, we found it suitable for our framework as well.
It is common to depict Bloom’s Taxonomy using a pyramid, at the bottom of which is “remember” and at the top of which is “create”, which denotes a hierarchical order between the categories; however, this image was not originally presented by Bloom, and the relationship between the levels of thinking is not necessarily hierarchical. In their original publication, although overall arguing, albeit quite cautiously, for a hierarchical, cumulative order of the categories, Bloom and his colleagues explicitly asserted that evaluation, then the last category, is not necessarily the last step in thinking or problem solving, but rather can quite possibly be “the prelude to the acquisition of new knowledge, a new attempt at comprehension or application, or a new analysis and synthesis” ([33], p. 185). Later, when Churches [36] presented Bloom’s Digital Taxonomy, he again asserted that learning processes need not begin at the lower taxonomic levels but can be initiated at any point. When the original taxonomy was revised for use in biology education, while empirically examining hundreds of assessment items, its non-hierarchical structure was explicitly emphasized [39].
Empirical evidence of student performance has raised the question of hierarchy time and again. When Bloom and his colleagues compared students’ performance across tasks categorized into different classes of their taxonomy, they found that it was more common for individuals to have low scores on complex problems and high scores on less complex problems than the reverse; however, they explicitly and honestly stated that this evidence was “not entirely satisfactory” ([33] p. 19). Later, it was shown that individuals can indeed demonstrate higher performance on higher learning levels without necessarily demonstrating competency on lower levels [40].
Early attempts to test for the validity of the hierarchical notion of the Taxonomy showed either moderate support or inconsistency with the suggested order [41,42]. Indeed, in the revised taxonomy, “the requirement of a strict hierarchy has been relaxed” ([34] p. 215)—an assertion that was later demonstrated empirically [43]. A relatively recent review showed that the assumed hierarchical order has little empirical evidence [44], and a large-scale analysis of an international comparative test in mathematics showed no difference in student performance between lower-order and higher-order tasks [45]. Furthermore, in the context of higher education settings, all of Bloom’s Taxonomy could be considered equally important as instructors’ and students’ approach can be shifted at any time to increase engagement and learning [46]. Therefore, we will not assume a hierarchical order of Bloom’s Revised Taxonomy’s components.

3.1.2. Relevance to Students’ Actual Learning Experience: A Task-Centered View

We assume that students experience learning in many ways, regardless of their course of study. Learning strategies, motivations, personal characteristics, and many other factors make it almost impossible to imagine a single learning experience to which we refer while imagining the use of GenAI-based tools. Therefore, we decided to center our framework on the notion of tasks students must complete as part of their studies.
Throughout their course of study, students face many different tasks, depending, among other factors, on the discipline, the level of the course, and the stage of the learning process. Examples of such tasks include reading a paper, solving a problem, designing an experiment, formulating an argument, writing a piece of text, analyzing data, or designing a visual artifact. Note that we mostly refer here to tasks that are directly related to the academic aspect of student life, but our framework may also be relevant to meta-cognitive aspects, e.g., regulating one’s learning, managing large projects, or adopting problem-solving strategies, and even to socio-emotional and motivational aspects, like persisting in a task until its completion, settling disputes, or coping with failure. However, here we will mostly refer to academic aspects.

3.1.3. Relevance to Instructors Across Disciplines

Having the framework revolve around students’ tasks also makes it relevant to instructors across disciplines, as they can think of tasks they are familiar with while using it. Therefore, we made sure the framework is phrased in a content-independent manner that emphasizes the learning that should occur.

3.1.4. Being Actionable for Instructors

To be useful as an aid for instructors, the framework needed to be actionable; that is, we wanted it to be relatively easy to translate into actual pedagogical experiences. Of course, we do not assume that such a translation is a trivial task, and as with any new technology, it may take dedicated training and strong motivation for this process to succeed.

3.2. Methodology

The GenAI literacy framework was developed as part of a wider project held within Tel Aviv University, a large research university in Israel. This endeavor was initiated and led by the Center for Innovation in Teaching and Learning at Tel Aviv University. The university is truly multidisciplinary, with about 30,000 undergraduate and graduate students and about 4500 faculty members across all academic disciplines.
A few faculty members were invited by the Center to lead each of the working groups defined under this project; the first author led the GenAI Literacy group. Three other working groups were part of the larger learning community, discussing GenAI tools as an aid to instructors; writing and assessment in text-heavy domains; and writing and solving problems in numerical-heavy domains.
A Call for Participation in the learning community was sent to all faculty members and relevant administrative staff, and each respondent chose the working group in which they wished to take part. Each participant received from the Center a license for a GenAI-based system of their choice for the duration of the process. The whole process was led by the Center staff, and working group leaders took part in extra meetings for coordination and process management.

3.2.1. Participants

Eleven faculty members and five administrative staff took an active role, albeit to various extents, in the working group. The faculty, with different levels of experience in teaching and technology integration in teaching, were from the Humanities and Arts (4), Social Sciences (2), Life Sciences (1), Exact Sciences (2), and Medicine (2); administrative staff were from the campus’ libraries (2), techno-pedagogues (2), and heads of administration within faculties (1).

3.2.2. Process

The group met about ten times between March and August 2024 for one-hour brainstorming and working sessions, most of which were held remotely. Together, the group members discussed what GenAI literacy is, reviewed relevant literature and documents from other institutions worldwide, and eventually defined and refined their own framework.
Part of the defining and refining process of the group included authentic experience in the participants’ classes, and reflection and discussion about this process during group meetings. That is, participants tried to translate the literacy components into pedagogical interventions, which was what we strived for by setting up a definition in the first place. This, along with the other characteristics of the framework, is discussed in the following section.

3.3. GenAI Literacy Framework

Overall, our framework refers to GenAI literacy as the ability to effectively and efficiently use GenAI-based tools to simplify, enrich, or improve learning-related tasks. As we discuss the role GenAI plays for higher education students in the broader context of their academic studies, we find the term “literacy” suitable as a meta-level integrative construct; and as our goal is to identify what is required to use GenAI in performing academic tasks, we use the term “skill” to refer to specific operational abilities. Our framework is based on the six categories of the cognitive domain of Bloom’s Revised Taxonomy; for simplicity’s sake, each category was mapped to one or two skills, for a total of eight skills.
The framework revolves around tasks students face as part of their course of study; that is, the term “task” serves as a placeholder for any task one can think of regarding students’ learning. Looking at it from this perspective, we emphasize that the task is being carried out by students, and GenAI-based tools are to be considered as aids to this process. This perspective is in line with the notion of GenAI as an aid to complete tasks at various degrees of complexity [47,48] while keeping the human in command [47].
We now present the skills included in our framework by their relatedness to Bloom’s Taxonomy categories. This is summarized in Table 1. While presenting them, we make links between these newly defined skills and existing skills and literacies for the digital era [6,7,8,9].

3.3.1. Know

Two skills were identified under this category. The first is to become familiar with GenAI-based tools that can assist in performing a given task: one cannot be literate in using GenAI-based tools without knowing which tools could be harnessed for a given task. Indeed, familiarity with technology, or lack thereof, is an important factor in students’ adoption of new educational technologies [49,50]. Using such tools in educational contexts may be encouraged by their habitual use in other daily contexts [51] or by instructors integrating them into the curriculum [52].
The second skill is to stay updated on innovations in the world of GenAI. Like any other technology, GenAI is constantly changing and developing; unlike many other technologies, however, it changes and develops faster than ever. Therefore, being literate in the field of GenAI requires constant updating, which can be achieved in many different ways and should involve both educators and students [53].

3.3.2. Understand

Under this category, we highlight the need to understand how to make the most of GenAI-based tools. This includes understanding their various features, their different use cases, and, overall, the role of learners in the human-GenAI partnership. This way, one can effectively consider what they can achieve by using these tools. As has been demonstrated with previous technologies, students and educators may use technology in a rather shallow manner, so it may not fulfill its potential to meaningfully support learning, and they may even be discouraged from using it [54,55]. Learning about the various uses of an existing tool may be a result of its constant use, hence the importance of continuing to experiment with such tools [56].

3.3.3. Apply

Here, we identified two skills. First, to formulate prompts that lead to the desired results. Prompts serve as the main means of communication between users and GenAI-based tools; hence, they enable the production of the tools’ output. But more than merely being a means for completing the task, mastering prompting has the power to enhance learning by fostering personalized, engaging, and equitable educational experiences [57]. Being familiar with how prompts work and how to engineer them will help learners to effectively and efficiently use GenAI tools as part of working on a task; indeed, it is a skill that needs to be taught and practiced [58,59].
The second skill is to use GenAI-based tools ethically in the context of a given task. The underlying assumption of our framework is that learners would use GenAI-based tools as part of their own process of completing a task; that is, the process would be led by the learners, and the final product would be considered the learners’ own (see the Create-related skill below). This is one aspect of working ethically with these tools. Another major ethical aspect of using GenAI-based tools is to comply with copyright laws and privacy regulations when uploading materials to these tools, i.e., not inputting copyrighted materials or sensitive information into them. Also, it is important to be aware of potential biases in the products of GenAI-based tools and to check their accuracy and credibility [60,61]. Additionally, it is important to be transparent when reporting on a task, mentioning how and to what extent GenAI-based tools have been used. Finally, it is imperative that students be aware that using GenAI-based tools may compromise their own privacy, as institutions can gain control over the data that is gathered while using these tools [47,62]. These are just some of the ethics-related issues that may arise when working with GenAI-based tools, which makes this an important skill [63,64,65]; implementing it will, overall, make learners accountable for their use of these tools. More profoundly, ethics-related factors may influence students’ very use of these tools [66].

3.3.4. Analyze

Here, we highlight the need to compare the outputs of different GenAI-based tools for a given task. The notion of comparison is of great importance in learning, with roots in literacy research; Olson and Astington [67] concluded that talking about text may be as important as the skills of reading and writing, and comparing texts was found to be powerful in vocabulary acquisition [68]. In our context, this skill is crucially important, as GenAI-based tools do not work in a deterministic manner, hence may produce unexpected results. Furthermore, different tools may work differently, even when given the same prompt, depending on the model they use and other features (e.g., the temperature parameter, which controls the “creativity”, or randomness, of the generated text). Therefore, analyzing different outputs—even from different runs of the same tool—will promote a better understanding of the use of GenAI-based tools, and will enhance their use.
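To illustrate how such a comparison might be made systematic, the following sketch (our own illustration, not part of the framework; the simple word-overlap measure is a deliberate stand-in for the richer qualitative comparison students would actually perform) flags pairs of outputs whose wording diverges substantially and therefore deserve closer reading:

```python
def jaccard(a: str, b: str) -> float:
    """Jaccard similarity of the word sets of two texts (0 = disjoint, 1 = identical sets)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    if not wa and not wb:
        return 1.0
    return len(wa & wb) / len(wa | wb)

def flag_divergent(outputs: list[str], threshold: float = 0.5) -> list[tuple[int, int]]:
    """Return index pairs of outputs whose word overlap falls below the threshold."""
    pairs = []
    for i in range(len(outputs)):
        for j in range(i + 1, len(outputs)):
            if jaccard(outputs[i], outputs[j]) < threshold:
                pairs.append((i, j))
    return pairs

# Hypothetical outputs from three tools (or runs) given the same prompt.
outputs = [
    "The experiment shows a significant effect of sleep on memory",
    "The experiment shows a significant effect of sleep on recall",
    "Caffeine is the main driver of the observed memory differences",
]
print(flag_divergent(outputs))  # → [(0, 2), (1, 2)]
```

The first two outputs largely agree, while the third contradicts them; flagging the divergent pairs directs the learner’s attention to exactly the disagreement that needs resolving.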

3.3.5. Evaluate

The relevant skill here is to verify the accuracy of the output given by GenAI-based tools by cross-checking it against other sources and prior knowledge. Generally, assessing information credibility is a demanding task, which is influenced by a wide range of factors [69,70,71]. Due to the known hallucinations and biases of GenAI-based tools, it is crucial to fact-check their outputs by comparing them to non-GenAI-based sources. This may also help learners identify areas in which these tools are beneficial. Indeed, credibility assessment of AI-generated content may be driven by different motivations and can be handled in various ways [72,73].

3.3.6. Create

The relevant skill here is to produce quality output for a given task using GenAI-based tools; importantly, completing a task successfully and achieving a high-quality human-led product often requires the integration of various GenAI-based tools [74]. This skill also has to do with self-regulated learning, which refers to the extent to which students intentionally and strategically adapt learning activities to achieve their learning goals [75,76,77], and which becomes ever more important in the context of technology-enhanced learning [78,79].
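The eight skills described above can be sketched as a compact mapping (an illustrative encoding of the framework for readers who prefer code; the dictionary name and the skill phrasings are our own shorthand, with Table 1 remaining the authoritative presentation):

```python
# Illustrative encoding of the task-centered GenAI literacy framework:
# each Bloom's Revised Taxonomy category maps to one or two skills.
GENAI_LITERACY_FRAMEWORK = {
    "Know": [
        "Become familiar with GenAI-based tools that can assist in a given task",
        "Stay updated on innovations in the world of GenAI",
    ],
    "Understand": [
        "Understand how to make the most of GenAI-based tools",
    ],
    "Apply": [
        "Formulate prompts that lead to the desired results",
        "Use GenAI-based tools ethically in the context of a given task",
    ],
    "Analyze": [
        "Compare the outputs of different GenAI-based tools for a given task",
    ],
    "Evaluate": [
        "Verify GenAI output against other sources and prior knowledge",
    ],
    "Create": [
        "Produce quality output for a given task using GenAI-based tools",
    ],
}

total_skills = sum(len(v) for v in GENAI_LITERACY_FRAMEWORK.values())
print(total_skills)  # → 8 skills across the six categories
```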

3.4. Comparing Our Framework with Existing Frameworks

Of the many existing frameworks for the skills, literacies, or competencies required for the digitally saturated world, we present three that seem to us most relevant to today’s higher education world and to the new wave of technological advancements on which we focus here. The first was developed in the early 2000s by the non-profit Partnership for 21st Century Skills, which included members of the national business community, education leaders, and policymakers. This framework extends the traditional Three R’s (reading, writing, arithmetic) with three new components: learning and innovation skills, life and career skills, and information, media, and technology skills; the learning and innovation skills are known as the Four C’s, namely, critical thinking, communication, collaboration, and creativity—and this is the component to which we will refer later [9]. Due to the importance of these skills across educational, occupational, and civic contexts, they have been widely accepted and are still seen as relevant in the long journey towards modernizing education [80]. Notably, each of these skills has a meaningful interface with GenAI: critical thinking may inform humans’ effective use of GenAI, and reciprocally, GenAI use may support the promotion of humans’ critical thinking [81,82,83]; human–machine communication lies at the core of using GenAI-based applications, and interpersonal communication may be meaningfully impacted by the use of GenAI [84,85]; collaboration-wise, GenAI is often looked at as an aid to humans [48]; and creativity may play an important role in promoting the synergy between humans and GenAI [86,87,88].
The second framework we find relevant is Eshet-Alkalai’s conceptual framework for survival skills in the digital era, which was later revised and extended [6,7]. This holistic framework, which includes six types of literacies relevant across domains and contexts, refers to how people should better navigate in a digitally saturated world. These six types are as follows: photo-visual literacy—understanding messages from graphical displays; reproduction literacy—utilizing digital tools to create new, meaningful materials from existing ones; branching literacy—constructing knowledge from non-linear online spaces; information literacy—critically assessing online information; socio-emotional literacy—understanding and applying norms of communication in digital environments; and real-time thinking—processing large volumes of stimuli simultaneously.
The third framework is Mioduser, Nachmias, and Forkosh-Baruch’s new set of literacies for the knowledge society [8]. This set includes seven literacies: multimodal information processing—understand, produce, and negotiate meanings in a culture made up of words, images, and sounds; navigating the info space—understand when to use information, where to find it and how to retrieve it, and how to decode and communicate it; interpersonal communication—how to effectively, efficiently, and ethically use various communication channels; visual literacy—use images to advance thinking, reasoning, decision making, and learning; hyper-literacy—the ability to deal with non-linear knowledge representations; personal information management (PIM)—storing information items to later retrieve them; and coping with complexity—perceive phenomena as complex, understand them and cope with this complexity. Despite some overlap with Eshet’s framework [7], we distinguish between them due to Mioduser et al.’s [8] focus on knowledge construction across contexts and their broader view on the skills needed by individuals to learn, work, socially interact, and cope with the needs of everyday life. Both sets of literacies echo the new affordances and challenges that characterize the digital era, and due to their holistic nature and their relevance to learning settings, we find them both related to our goal of defining GenAI literacy. On the one hand, they both seem particularly relevant to the current GenAI era, which has already extended people’s abilities to engage with visuals [89,90], has implications for information retrieval and assessment [91,92], and may impact socio-emotional learning [93,94]. On the other hand, issues of coping with complexity, real-time thinking, and hyper-literacy should be revisited in the current wave of tools that are able to analyze huge volumes of data, decrease complexity in a matter of seconds, and whose interface is largely linear.
Mapping our suggested framework onto these three frameworks yields an interesting insight: our framework has some complex relationships with them, yet it depicts a new construct that is neither fully included in nor fully inclusive of any of the three. The Know-related skill of becoming familiar with GenAI-based tools may require learners to be proficient, to some degree at least, in two of Mioduser et al.'s literacies [8]—navigating the infospace, and personal information management—for finding relevant tools and keeping track of them over time for personal use. The other Know-related skill, i.e., staying updated, resonates with Mioduser et al.'s coping with complexity, as the continuously developing GenAI phenomenon is surely a complex one; it also requires a high level of information literacy [6], considering the flood—even the tsunami—of information available regarding this phenomenon. The Understand-related skill, considering the ever-evolving world of GenAI-based tools and the constant appearance of new tools and features, requires coping with complexity [8] and being creative [9] in finding efficient solutions to task-related problems. The Apply-related skill of prompting—being related to communication (with machines), collaboration (again, with machines), creativity (to obtain the desirable outcome effectively and efficiently), and critical thinking (to examine outcomes)—requires all The Four C's [9]; furthermore, as prompting is an iterative process during which previous prompts are fine-tuned and prompts suggested by others are modified, this process requires reproduction literacy [6]. The other Apply-related skill, that is, working ethically, requires a high degree of critical thinking [9], coping with complexity [8], and information literacy [6]. The Analyze-related skill requires critical thinking to compare different outputs [9], and the Evaluate-related skill requires information literacy [6] and critical thinking [9].
Finally, the Create-related skill requires coping with complexity, in the face of the various options available and potential paths to be taken [8], as well as critical thinking while examining the output along the way, and creativity in incorporating various GenAI-based tools in completing the task [9].

4. Exploring GenAI Literacy Among University Students

In this section, we report on an empirical study whose goal was to explore the extent to which university students are GenAI literate according to the framework we developed. Previous studies of digital literacy have identified four factors affecting students' digital literacy: age, gender, family socioeconomic status, and parents' education level [95]. In the context of our study, we decided to focus on age, gender, and the degree towards which students study, omitting the socioeconomic factor; additionally, we added the factor of academic discipline in order to deepen our understanding of differences between subjects. Therefore, we set up the following research questions:
(RQ1) To what extent do higher-education students use GenAI-based tools for academic purposes, and how is it associated with demographic- and academic-related personal variables?
(RQ2) How is GenAI Literacy characterized among higher-education students, and how is it associated with demographic-, academic-, and experience-related personal variables?
(RQ3) To what extent and in which forms do higher-education students wish GenAI Literacy to be taught during their academic studies, and how is it associated with demographic-, academic-, and experience-related personal variables?

4.1. Methodology

4.1.1. Research Field and Research Population

This study was conducted in a large, multidisciplinary research university in Israel, with about 30,000 undergraduate and graduate students and about 4500 faculty members; it was approved by the institution's Ethics Committee (0010309-1). Our sample included N = 1667 students who filled out an online, anonymous survey. Participants included 695 males (42%) and 877 females (53%), with 95 missing values for gender (5%); the average age was 31.6 (SD = 11.7, N = 1597). Participants came from faculties across the campus, with N = 804 (48%) from the STEM disciplines and N = 863 (52%) from the Humanities and Social Sciences. Over half of the participants were studying towards a bachelor's degree (N = 889, 53%), 30% towards a master's degree (N = 499), and 14% towards a Ph.D. (N = 230); the rest (N = 49, 3%) were studying under other programs.

4.1.2. Research Variables

Independent variables included:
Gender
Age
Education Level [bachelor’s, master’s, Ph.D., other]
Faculty—participants could choose from 10 values, based on the faculties that exist in the studied university. Later, we coded these into two broad categories: (1) STEM, including the faculties of Engineering, Medical and Health Sciences, Exact Sciences, Life Sciences, and Neurosciences; (2) Humanities and Social Sciences, also including the faculties of Arts, Law, and Management.
Dependent variables included:
GenAI Literacy (1–5)—based on the framework we developed (see next section)
GenAI Use—answer to the question “To what extent do you actually use GenAI tools for carrying out tasks related to your studies?” [5-point Likert scale, from 1—“To a Very Low Extent” to 5—“To a Very High Extent”]
Teaching GenAI—answer to the question “Do you think the use of GenAI tools should be officially taught at Tel Aviv University?” [Yes/No]
Teaching GenAI Approach—only for those who responded “Yes” under Teaching GenAI, answer to the question “In which courses should the use of GenAI tools be taught?” [In a Dedicated Course for That Topic, As Part of a Few Existing Courses, In Every Course]
Importance of Teaching GenAI—only for those who responded “Yes” under Teaching GenAI, answer to the question “Why is it important to teach the use of GenAI tools at the university?” (multiple choice) [So I Could Study at The University More Efficiently, To Prepare Me for the Job Market, So I Will Be Ready for a Technology-Saturated World, Without Regard to My Job]

4.1.3. Research Tool, Procedure, and Analysis

Data was collected using an online, anonymous self-report questionnaire (in Google Forms). The questionnaire included items for all variables, as described above, with a dedicated questionnaire to measure GenAI Literacy, as described below. Data collection took place during July–August 2024, towards the end of the Spring Semester and the beginning of the Summer Semester. An email with an invitation to participate was sent out via a mailing list of all students at the university. All statistical analyses were performed using JASP software, Version 0.19.3.
The tool for measuring GenAI Literacy consisted of eight items measuring the extent of agreement with the presented statements, each of which was ranked on a 5-point Likert scale from 1 to 5 [“Disagree Strongly”, “Disagree”, “Neither Agree nor Disagree”, “Agree”, “Agree Strongly”]. The items were developed following our definition of GenAI Literacy, while going through rounds of discussions and refinements within the working group. Eventually, two categories were assigned two items each, and four categories—a single item each. This questionnaire is presented in Table 2.
As our GenAI Literacy framework is task-focused, we asked the participants to first choose a task to which they would refer. This was carried out by the following stimulus: “Choose one task you are required to deal with during your studies. You can choose from the list or write down another task below. Refer to this task when ranking your agreement with the items below”; the given list was as follows: Studying New Material; Working on Home Assignments; Searching, Reading, or Summarizing Articles; Writing or Editing Texts; Summarizing Lectures; Studying Towards an Exam; Time Management.
For checking the structure of the questionnaire, we ran both Principal Component Analysis and Exploratory Factor Analysis, both using Oblimin oblique rotation. Principal Component Analysis resulted in a single component (χ2 = 595, at p < 0.001, df = 20), with component loadings of 0.80 or above for all items. Exploratory Factor Analysis resulted in a single factor (χ2 = 484, at p < 0.001, df = 20), with factor loadings of 0.76 or above for all items. Testing for reliability, we used McDonald’s ω [96,97], which yielded a very high value of 0.94 (N = 1528). Therefore, we used the average of all items to compute a single GenAI Literacy variable.
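For a unidimensional scale such as ours, McDonald’s ω can be computed directly from standardized factor loadings as ω = (Σλ)² / [(Σλ)² + Σ(1 − λ²)]. The sketch below illustrates this computation with hypothetical loadings in the range reported above (not the study’s actual loadings, which are not published item by item):

```python
import numpy as np

def mcdonalds_omega(loadings):
    """McDonald's omega for a unidimensional scale, computed from
    standardized factor loadings: the squared sum of loadings divided
    by itself plus the summed unique (error) variances 1 - loading^2."""
    lam = np.asarray(loadings, dtype=float)
    common = lam.sum() ** 2
    unique = (1.0 - lam ** 2).sum()
    return common / (common + unique)

# Hypothetical loadings for the eight items, all at or above the
# 0.76 floor reported for the EFA solution.
loadings = [0.80, 0.82, 0.85, 0.78, 0.90, 0.84, 0.76, 0.88]
print(round(mcdonalds_omega(loadings), 2))  # → 0.95
```

With loadings of this magnitude, ω lands in the mid-0.90s, consistent with the very high reliability (0.94) reported for the eight-item scale.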

4.2. Findings

4.2.1. Using GenAI-Based Tools and Demographics, Academic Profile (RQ1)

Overall, we found a medium average level of GenAI-based tool use for academic purposes, M = 2.9 (SD = 1.4, N = 1667). We tested its associations with demographic- and academic-related personal variables.
  • Demographics (Gender, Age)
Testing for gender differences, we found that males (M = 3.0, SD = 1.3, N = 695) scored higher than females (M = 2.8, SD = 1.4, N = 877), with t(1570) = 3.1, at p < 0.01, which denotes a small effect size of Cohen’s d = 0.16.
Age was negligibly negatively associated with GenAI use, with r = −0.08, at p < 0.01.
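The effect sizes above can be recomputed from the reported summary statistics alone, using the pooled-SD formula d = (M₁ − M₂)/s_pooled. The sketch below feeds in the rounded group means and SDs for the gender comparison, so it only approximates the published d = 0.16 (the study itself used JASP on the raw data):

```python
import math

def cohens_d(m1, sd1, n1, m2, sd2, n2):
    """Cohen's d for two independent groups, using the pooled
    standard deviation weighted by each group's degrees of freedom."""
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    return (m1 - m2) / math.sqrt(pooled_var)

# Rounded summary statistics for GenAI use: males vs. females.
d = cohens_d(3.0, 1.3, 695, 2.8, 1.4, 877)
print(round(d, 2))  # → 0.15, close to the reported 0.16
```

The small discrepancy against the reported value comes entirely from rounding the means and SDs to one decimal place.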
  • Academic Characteristics (Faculty, Education Level)
Testing for differences by academic discipline, we found that STEM students (M = 3.0, SD = 1.3, N = 804) scored higher on GenAI use than Humanities and Social Sciences students (M = 2.7, SD = 1.4, N = 863), with t(1665) = 5.0, at p < 0.001, which denotes a small effect size of Cohen’s d = 0.24.
Using an ANOVA test, we found no association between GenAI use and Education Level, with F(3) = 1.3, at p = 0.29 (N = 1667).

4.2.2. GenAI Literacy and Demographics, Academic Profile, GenAI Use (RQ2)

Overall, we found a medium–high average value of GenAI Literacy, M = 3.3 (SD = 1.1, N = 1665). We tested its associations with demographics- and academic-related personal variables.
  • Demographics (Gender, Age)
Testing for gender differences, we found that males (M = 3.5, SD = 1.1, N = 694) scored higher than females (M = 3.1, SD = 1.1, N = 876), with t(1568) = 6.8, at p < 0.001, which denotes a small–medium effect size of Cohen’s d = 0.35.
Age was weakly negatively associated with GenAI Literacy, with r = −0.15, at p < 0.001.
  • Academic Characteristics (Faculty, Education Level)
Testing for differences by academic discipline, we found that STEM students (M = 3.4, SD = 1.1, N = 804) scored higher on GenAI Literacy than Humanities and Social Science students (M = 3.2, SD = 1.2, N = 861), with t(1663) = 4.4, at p < 0.001, which denotes a small effect size of Cohen’s d = 0.22.
Using the ANOVA test, we found no association between GenAI Literacy and Education Level, with F(3) = 2.5, at p = 0.06 (N = 1665); a post-hoc Tukey test revealed no significant differences between any pair of Education Levels. Omitting the “Other” Education Level, i.e., considering only bachelor’s, master’s, and Ph.D. academic programs, we found a significant association, with F(2) = 3.4, at p < 0.05 (N = 1617); however, a post-hoc Tukey test again revealed no significant difference between any pair of Education Levels.
  • Experience in Using GenAI
Not surprisingly, GenAI Literacy was found to be strongly and significantly positively correlated with GenAI Use, with r = 0.65, at p < 0.001 (N = 1665).

4.2.3. Teaching GenAI Literacy (RQ3)

A vast majority of participants thought that the use of GenAI-based tools should be taught as part of their university studies (1370 of 1667, 82%).
  • Demographics (Gender, Age)
There was a significant difference based on gender, with higher proportions among females (749 of 877, 85%) than among males (544 of 695, 78%), with χ2 = 14.5, at p < 0.001, Cramer’s V = 0.09, indicating a small effect size.
The average age of those who responded “Yes” was higher than that of those who responded “No” (M = 32.4, SD = 12.1, N = 1312, compared with M = 27.9, SD = 8.9, N = 285, respectively). This difference is significant, with t(1595) = 6.0, at p < 0.001, which denotes a small–medium effect size of Cohen’s d = 0.39.
  • Academic Characteristics (Faculty, Education Level)
There was a significant difference based on academic discipline, with higher proportions among Humanities and Social Sciences (739 of 863, 86%) than among STEM disciplines (631 of 804, 78%), with χ2 = 14.5, at p < 0.001, Cramer’s V = 0.09, indicating a small effect size.
There was a decreasing rate of “Yes” respondents when considering Education Level, from Ph.D. students (206 of 230, 90%) to master’s students (431 of 499, 86%) to bachelor’s students (697 of 889, 78%); finally, the response of “Yes” among participants who study in other programs was the lowest (36 of 49, 73%). This difference is significant, with χ2 = 25.8, at p < 0.001, Cramer’s V = 0.12, indicating a small–medium effect size.
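The chi-square statistics above can be reproduced from the reported counts, which form 2×2 contingency tables of “Yes”/“No” responses. A minimal sketch in plain NumPy (rather than the JASP software used in the study), applied to the discipline comparison:

```python
import numpy as np

def chi_square_independence(table):
    """Pearson's chi-square test of independence for a contingency
    table, plus Cramer's V as an effect size."""
    obs = np.asarray(table, dtype=float)
    row = obs.sum(axis=1, keepdims=True)   # row totals, shape (r, 1)
    col = obs.sum(axis=0, keepdims=True)   # column totals, shape (1, c)
    n = obs.sum()
    expected = row @ col / n               # expected counts under independence
    chi2 = ((obs - expected) ** 2 / expected).sum()
    k = min(obs.shape) - 1                 # k = 1 for a 2x2 table
    cramers_v = np.sqrt(chi2 / (n * k))
    return chi2, cramers_v

# "Yes"/"No" counts on teaching GenAI, by academic discipline:
# Humanities & Social Sciences: 739 Yes of 863; STEM: 631 Yes of 804.
table = [[739, 863 - 739], [631, 804 - 631]]
chi2, v = chi_square_independence(table)
print(round(chi2, 1), round(v, 2))  # → 14.5 0.09
```

Running this on the discipline counts recovers the reported χ2 = 14.5 and Cramer’s V = 0.09; the gender comparison can be checked the same way from its reported counts.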

4.2.4. How and Why Should the Use of GenAI Be Taught?

Of those participants who stated that the use of GenAI should be taught as part of their course of study at the university, over 40% thought it should be taught in a dedicated course (600 of 1367, 44%), and almost 40% thought it should be taught across relevant courses (528 of 1367, 39%); the other 17% (239 of 1367) thought it should be taught in every course of their studies.
As for the reasons to teach the use of GenAI—again, only among those who stated that it should be taught in some form—81% (1112 of 1370) stated that it was important for navigating today’s digitally saturated world regardless of their job, 76% (1043 of 1370) stated that the reason was to help them better study at the university, and 58% (793 of 1370) stated that it was important for being prepared for the job market. Testing for associations with academic discipline, we found that better studying was chosen more commonly among STEM participants (498 of 631, 79%) than among Humanities and Social Sciences students (545 of 739, 74%), with χ2 = 5.0, at p < 0.05. Preparedness for the job market was also chosen more commonly among STEM participants (413 of 631, 65%) than among Humanities and Social Sciences students (380 of 739, 51%), with χ2 = 27.5, at p < 0.001. Preparedness for today’s digitally saturated world showed no difference, with χ2 = 1.5, at p = 0.22.
Regarding age, we found that those who thought it was important for today’s digitally saturated world were younger than those who did not choose this response (M = 32.0, SD = 11.5, N = 1062, compared with M = 34.0, SD = 14.2, N = 250, respectively), with t = 2.3, at p < 0.05, which denotes a small effect size with Cohen’s d = 0.16. Those who thought it was important for their studies were younger than those who did not choose this response (M = 31.9, SD = 11.8, N = 998, compared with M = 34.0, SD = 12.8, N = 314, respectively), with t = 2.7, at p < 0.01, which denotes a small effect size with Cohen’s d = 0.18. Finally, those who thought it was important for the job market were younger than those who did not choose this response (M = 29.9, SD = 9.2, N = 757, compared with M = 35.8, SD = 14.5, N = 555, respectively), with t = 9.0, at p < 0.001, which denotes a medium effect size with Cohen’s d = 0.50.
No differences were found based on Education Level, with χ2 = 2.3, at p = 0.51, or gender, with χ2 = 0.06, at p = 0.81.

5. Discussion

In this paper, we introduced the development of a task-centered generative artificial intelligence (GenAI) literacy framework for higher-education students. The development of this framework should be seen as part of the ongoing discussion within higher-education institutions around the world about the ways by which GenAI may change the landscape of academic studies. This important discourse usually involves a mix of concerns—mostly about student dishonesty and about the need to modify assessment—and of hopes that the new technology would finally make academia reconsider its societal and professional role and help it shift towards preparing students for the current digitally saturated era [98,99,100].
Our attempt at framing GenAI literacy in the context of higher-education studies is also part of an already long line of research, despite the relatively short time of the current GenAI boom [28,30,32,101]. Our unique approach helped us to define a framework that is relevant and actionable across disciplines and for all students and instructors. Choosing a task-centric perspective that builds on Bloom’s Revised Taxonomy [34], we made sure that our framework is relevant for instructors for both designing pedagogical interventions that would support this literacy and assessing it. Considering this, our exploration of GenAI literacy, using an assessment tool that is directly derived from our framework, helped us highlight some important issues regarding its association with demographic-, academic-, and experience-related issues, as we will discuss below.

5.1. A New GenAI Literacy for Higher Education Students

Our framework sheds some important light on what is currently required from higher-education students, as seen by a cohort of campus-wide faculty and staff. It is task-focused and refers to GenAI literacy as the ability to effectively and efficiently use GenAI-based tools to simplify, enrich, or improve learning-related tasks. Overall, we identified eight skills, mapped to the six cognitive levels as follows: become familiar with GenAI-based tools that can assist in performing a given task; stay updated on innovations in the world of GenAI (both related to Know); understand how to make the most of GenAI-based tools (Understand); formulate prompts that lead to the desired results; use GenAI-based tools ethically in the context of a given task (both related to Apply); compare the outputs of different GenAI-based tools for a given task (Analyze); verify the accuracy of the output given by GenAI-based tools by cross-checking with other sources and prior knowledge (Evaluate); produce quality output for a given task using GenAI-based tools (Create).
Linking this framework with existing frameworks of skills, competencies, and literacies for the digital age [6,7,8,9] helps us emphasize its uniqueness. On the one hand, we observe that task-centered GenAI literacy is closely related to various facets of existing frameworks, hence the importance of referring to these foundational skills, including “soft skills”, in higher education institutions [102,103]. On the other hand, the distinctive mix of such skills when particularly discussing students’ use of GenAI-based tools highlights that this technology, like many others, should be examined through the lens of its own characteristics; as such, the integration of a specific technology into education should carefully consider affordances and challenges related to that technology and to its integration. Even in the seemingly narrow perspective of GenAI, the “How?” of the implementation has an impact on students’ performance, higher-order thinking, and perceptions [104]. In a broader sense, we point out the constant need for examining what new technologies bring to education arenas, and what it requires from students and educators.
It is true that our framework lacks independent or external validation. Interestingly, a recent GenAI literacy assessment test that underwent a comprehensive validation allows us to examine our work in light of an external tool. GLAT [30] is a 20-item assessment tool that is loosely built on the grounds of Bloom’s Taxonomy, considering three dimensions, namely, Know and Understand, Use and Apply, and Evaluate and Create; on top of these, Ethics is another dimension of GLAT. The tool’s multiple-choice questions ask about various aspects of using GenAI-based tools, and in a sense, they are even more practically focused than our framework. For example, under Use and Apply, Item 11 asks, “When using generative AI to create a marketing pitch, which of the following strategies is least likely to be effective?”, with response options that suggest prompting techniques, e.g., “Supplying the AI with information about the target audience”, “Asking the AI to include unique selling points and benefits”, “Requesting the AI to use persuasive language techniques”, and “Providing the AI with a list of competitors’ products” (the correct answer). This explicit scenario resembles our more general Apply-category skills of “Formulate prompts that lead to the desired results” and “Use GenAI-based tools ethically in the context of a given task”. Another example is Item 16, under Evaluate and Create: “As a student using a Large Language Model (LLM) to gather information for an assignment, how should you approach the information it provides?”; the response options refer to the comparison between LLM-generated text and other internet-based sources, which echoes our own skill of “Verify the accuracy of the output given by GenAI-based tools by cross-checking with other sources and prior knowledge”, under the Evaluate category. Furthermore, GLAT was found to be unidimensional, in line with our underlying assumption.
Finally, GLAT scores were relatively high, with an average of 12.8 out of 15 (SD = 4.14, N = 355 higher education students), which echoes our medium–high scores among our research population.

5.2. GenAI Literacy Among Higher-Education Students

The use of GenAI for academic purposes was found to be medium overall, which echoes findings regarding higher-education students in the UK [105], as well as another recent UK study, according to which about half of the participating students had used or considered using GenAI-based tools for academic purposes [106]. Our data demonstrated weak negative correlations of age with both GenAI use and GenAI Literacy, and Arowosegbe et al.’s analysis of age-dependent perceptions of GenAI may shed light on this [105]; their analysis revealed that young students (18–24 y/o) tend to be less positive towards GenAI than students in the 25–34 y/o age group, whereas older students (35–44 y/o) are more neutral than positive. Their study also revealed that students’ perception of GenAI overall increased from bachelor’s to master’s to Ph.D. students—an effect that we did not observe in our analyses regarding either GenAI use or GenAI Literacy. It is worth mentioning that in the Israeli context, where this study was conducted, university students tend to be older than in many other countries, mostly due to post-high-school mandatory military service for large parts of the population [107].
Based on our findings, female students had less experience with GenAI and reported lower GenAI Literacy than males, which is in line with a previous exploration [101]. O’Dea et al.’s findings differ from ours regarding the association with academic discipline; while we found that STEM students reported higher GenAI Literacy levels, they found that Business and Education students expressed a higher level of comfort with AI compared with students from other disciplines, including computing and technology. Besides age-related issues, as mentioned above, this discrepancy may be attributed to cultural issues, as their analysis also showed differences between students from the UK and Hong Kong, with students from Hong Kong reporting higher levels of GenAI literacy than UK students. Overall, Hong Kong students reported higher GenAI literacy than found in our study, and for them, GenAI use was only weakly correlated with GenAI literacy [108].
A vast majority of students in our sample thought that they should be taught how to use GenAI-based tools as part of their academic studies. Together with the medium experience and medium–high GenAI literacy reported, this finding has important implications for higher education: students expect institutions to adopt this new technology. This is in line with recent studies according to which higher-education students expect institutions to embrace GenAI as a means to make their learning more effective and to prepare them for the future, though not without considering concerns about accuracy, privacy, ethics, and the impact on personal and societal aspects [1,105,106,108].

5.3. Teaching GenAI Literacy in Higher-Education

So, how should GenAI use be taught? In our sample, there was a tendency towards either a dedicated course or integrating GenAI into a few relevant courses, with a very high proportion of participants suggesting one of these two approaches; this rate is similar to that found while surveying students in Romania, where only about 20% stated that they were not interested in learning or using any AI models [109]. It was shown that a single course, even a single workshop, may improve students’ GenAI literacy [3,110]; however, integrating GenAI as part of existing curricula requires much consideration as to where, when, to what extent, and how to implement this integration [5,111,112,113]. Indeed, recent literature reviews of the integration of GenAI in higher education showed that modes of incorporation differ widely between implementations, benefiting either students, instructors, or both, and facilitating different aspects of the educational experience, hence fostering a host of skills and competencies in learners [4,114].
Interestingly, students in our sample, when thinking about the reasons to teach GenAI as part of their higher education, appreciated most the purpose of being ready to live in a digitally saturated world, regardless of their job. Ranked second was being assisted during academic studies, and third, with a meaningful gap, was preparing for the job market. This broad view may represent students’ understanding of the role new technologies play in shaping our future society, across contexts; they are aware of the broader societal implications of GenAI, of which learning and employability are only part of a bigger picture. Therefore, higher education stakeholders should keep in mind that it is not enough to teach students how to use GenAI-based tools for their current academic purposes or for serving their future jobs; rather, students should be taught how to use this technology responsibly and transparently, as well as shown how to discern when and where to utilize AI tools [115]. This is in line with current views of the important societal roles of higher education, which go beyond learning and working [102,103].
Students’ appreciation of the wider role of GenAI in our society, as depicted in our findings, makes it important to emphasize ethical considerations while using this technology. Although explicitly stated in only one skill within our framework (i.e., the Apply skill of “Use GenAI-based tools ethically in the context of a given task”), we are aware that “ethical use” is a wider lens through which the whole use of GenAI in education should be examined. Indeed, ethical implications include various aspects such as copyright and authorship, transparency, responsibility, academic integrity, bias, sustainable practice, and equity [116,117]. These should be further explored and taught at all levels of educational settings. One of the challenges in this regard is that many higher education institutions still lack a clear GenAI policy [118].

6. Conclusions and Implications

In this paper, we presented the development and study of a task-centered generative artificial intelligence (GenAI) literacy for higher education students. We believe that the way we framed this literacy has the major strength of being anchored in the literature, relevant in actual educational settings, and actionable for instructors. Nevertheless, we must note some limitations. First, our study took place in one institution within one country; hence, certain educational, technological, and cultural factors may have impacted the resulting framework and the way it is manifested in our population; even within this local context, our research sample may not be representative of the studied institution. Second, the structure of, and the dynamics within, the group of campus-wide academic and administrative staff that was gathered as a working group in the first place, and which together defined the framework, may have had an impact on the resulting framework. Moreover, the higher education system in Israel has some unique characteristics. Specifically, a high portion of students begin their academic studies after military service—sometimes after another gap year—that is, they are more mature than immediate high-school graduates; also, higher education institutions in Israel have high autonomy in research and teaching. Taken together, these factors may have shaped both the framework design and the survey responses. Therefore, we recommend replicating this working process in other settings in order to enrich our understanding of how higher education staff refer to students’ literacy in the GenAI era.
Still, we believe that our work is of significance, and that it has a few important implications. At the theoretical level, our task-centered approach paves the way for many studies that will explore associations between GenAI literacy and different tasks, across demographics, disciplines, and educational contexts. In a broader sense, this point of view may help in understanding the integration of technology in education in a more nuanced manner. At the practical level, our framework can help higher education educators and policymakers define educational goals and design pedagogical interventions and assessment in the context of students’ use of GenAI-based tools. This may also lead to relevant professional development of educators for supporting these processes.

Author Contributions

Conceptualization, A.H.; methodology, A.H., M.T., Y.R., L.L. and T.C.; software, A.H.; validation, M.T.; formal analysis, A.H. and M.T.; investigation, A.H.; resources, A.H., M.T., Y.R., L.L. and T.C.; data curation, A.H.; writing—original draft preparation, A.H. and M.T.; writing—review and editing, Y.R., L.L. and T.C.; visualization, A.H.; supervision, A.H.; project administration, A.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflict of interest.

Table 1. The resulting GenAI Literacy Framework, which is based on Bloom’s Revised Taxonomy.
Bloom’s Category | Skill(s)
Know | Become familiar with GenAI-based tools that can assist in performing a given task
Know | Stay updated on innovations in the world of GenAI
Understand | Understand how to make the most of GenAI-based tools
Apply | Formulate prompts that lead to the desired results
Apply | Use GenAI-based tools ethically in the context of a given task
Analyze | Compare the outputs of different GenAI-based tools for a given task
Evaluate | Verify the accuracy of the output given by GenAI-based tools by cross-checking with other sources and prior knowledge
Create | Produce quality output for a given task using GenAI-based tools
Table 2. GenAI Literacy Questionnaire—the anchor reads “To what extent do you agree with each of the following statements? Every time a ‘task’ is mentioned—refer to the task you chose above.”
Item # | Category | Item
1 | Know | I am familiar with GenAI tools that can help me accomplish this task
2 | Know | I am up to date on new GenAI tools that can help me with this task
3 | Understand | I understand how to make optimal use of GenAI tools to complete this task
4 | Apply | I know how to write prompts that will give me the desired results for carrying out this task
5 | Apply | I know how to make ethical use of GenAI tools for the purpose of carrying out this task
6 | Analyze | I know how to compare the outputs of different GenAI tools when performing this task
7 | Evaluate | I know how to assess the correctness of the output of GenAI tools when performing this task, referring to the limitations of AI, to other sources, and to my previous knowledge
8 | Create | I know how to produce an optimal outcome for this task using a variety of GenAI tools

Share and Cite

MDPI and ACS Style

Hershkovitz, A.; Tabach, M.; Reich, Y.; Lurie, L.; Cholcman, T. Framing and Evaluating Task-Centered Generative Artificial Intelligence Literacy for Higher Education Students. Systems 2025, 13, 518. https://doi.org/10.3390/systems13070518

