Systematic Review

Mapping the Landscape of Generative Artificial Intelligence Literacy: A Systematic Review Toward Social, Ethical, and Sustainable AI Adoption

by
Patricio Ramírez-Correa
1,*,
Elizabeth E. Grandón
2 and
Ari Melo Mariano
3
1
School of Engineering, Universidad Católica del Norte, Coquimbo 1781421, Chile
2
Departamento de Sistemas de Información, Facultad de Ciencias Empresariales, Universidad del Bío-Bío, Concepción 4081112, Chile
3
DataLab-Data Laboratory, Faculty of Technology, Department of Production Engineering, Universidade de Brasília, Brasilia 70297-400, Brazil
*
Author to whom correspondence should be addressed.
Sustainability 2026, 18(3), 1429; https://doi.org/10.3390/su18031429
Submission received: 29 November 2025 / Revised: 24 January 2026 / Accepted: 28 January 2026 / Published: 31 January 2026

Abstract

The rapid expansion of generative artificial intelligence across educational, professional, and societal domains has intensified the need for a clear understanding of generative artificial intelligence literacy. Although scholarly interest in this topic has grown substantially in recent years, existing research remains dispersed across disciplines, limiting both theoretical consolidation and practical guidance. This study maps the scientific literature on generative artificial intelligence literacy by identifying its underlying thematic structure. A systematic literature review was conducted following PRISMA 2020 guidelines. We retrieved 40 peer-reviewed journal articles published between 2023 and 2025 from the Web of Science and Scopus databases. Topic modeling using Latent Dirichlet Allocation was applied to the full texts, with inter-rater reliability validation achieving substantial agreement (Cohen’s kappa = 0.78). The analysis revealed four interrelated thematic areas: ethical foundations (40%), educational use (32.5%), adoption and interaction (12.5%), and evaluation (15%). Geographic analysis showed notable concentration in Asia (50%) and educational settings (47.5%), with limited representation in healthcare, government, and industry sectors. Two critical gaps emerged: the scarcity of validated measurement instruments and a persistent disconnect between expert ethical frameworks and users’ ethical awareness. These findings provide a structured foundation for researchers, educators, and policymakers to develop evidence-based interventions and support the sustainable adoption of generative artificial intelligence technologies.

1. Introduction

Generative Artificial Intelligence (GAI) is creating a new digital divide. In 2023, a global survey showed that 52% of people felt nervous about artificial intelligence products and services, and although two out of three said they understood what artificial intelligence is, only one out of two knew which products use it [1]. More recently, a Pew Research Center global report [2] found that only 34% of adults had heard or read much about artificial intelligence, 47% a little, and 14% nothing. In addition, 34% said they were more worried than excited about its role in daily life, 42% felt both worried and excited, and only 16% were more excited than worried. In this context, UNESCO [3] recommends strengthening artificial intelligence literacy through inclusive education, community initiatives, cross-sector collaboration, and lifelong learning to reduce the emerging digital gap. In parallel, the rapid mass adoption of GAI has transformed numerous sectors of society and, in turn, created the need for new skills to apply it properly [4,5]. GAI-related skills are essential not only for economic competitiveness but also for promoting social cohesion [6,7,8]. Training in GAI tools is of utmost importance for three main reasons. First, in the professional sphere, employees with advanced GAI skills become a major source of innovation while maintaining high levels of work efficiency [6]. Second, from a civic perspective, these skills enable community members to analyze and critique media content and to distinguish GAI-generated material, providing improved means to combat disinformation [7,9]; indeed, there is a consensus that promoting GAI knowledge among the public is an inescapable duty of contemporary society [8]. Third, recent studies have investigated the impact of GAI on sustainability across environmental, economic, and social domains. The current study reviews the scientific literature on these new skills.
On the path towards GAI literacy, earlier proposals addressed related technological skills. The starting point was digital literacy, which comprises the skills a person needs to judge information and communicate effectively in digital environments. This literacy was later extended to include artificial intelligence elements, involving knowledge of how machine learning algorithms operate and of how the biases they reproduce stem from the data on which they are trained. The advent of ChatGPT (OpenAI’s GPT-3.5 model, released in November 2022) marked the decisive breakthrough for GAI literacy. This new phase goes beyond the previous ones in that it focuses on the skill of not only comprehending but also using GAI tools in a critical and responsible manner [4,10].
While GAI literacy has been the subject of increasing discussion, its conceptual framework and the key issues of primary research remain poorly defined [4,8]. Researchers must identify the most significant topics to promote knowledge creation and utilization, while educators and policymakers must remain alert to these changes and respond to them appropriately [11]. The field lacks a comprehensive roadmap that would enable systematic and strategic progress.
Although theoretical frameworks have conceptualized GAI literacy dimensions, there is no systematic empirical evidence on how academic research is distributed across these dimensions. Previous studies have proposed what GAI literacy should include. However, we lack evidence of which themes dominate scholarly discourse, which sectors are underrepresented, and which geographic regions shape the field. This empirical mapping is essential because it reveals where the academic community concentrates efforts versus where critical gaps exist, identifies thematic imbalances hindering balanced literacy development, and provides evidence-based foundations for prioritizing research investments. Instead of proposing another framework, this study systematically maps existing scholarship in GAI literacy to identify what we know, what is still underexplored, and where interventions are most required.
The main objective of this research is to perform a systematic review of the scientific literature on GAI literacy and to identify its thematic structure. To accomplish this objective, a topic modeling technique is applied to the full text of the selected articles. This study provides an understanding of GAI literacy, clarifies its research topics, and establishes a foundation for future studies.
The rest of the article has been organized as follows. Section 2 summarizes the main ideas of GAI literacy literature. Section 3 details the method used in the study. Section 4 presents the results. Section 5 provides both a discussion and an integrative model. Finally, Section 6 contains the concluding remarks.

2. Theoretical Background

GAI literacy can be seen as the next stage in a continuum of technological skills. At the beginning of this continuum lies digital literacy, described as a multifaceted survival skill for the technological age. GAI literacy, however, comprises, beyond technological skills, complex cognitive, social, and emotional capacities that allow people to communicate and assess information in digital environments.
Artificial intelligence (AI) literacy, also part of the continuum, builds upon digital literacy, focusing on developing specific skills to understand machine learning and algorithmic decision-making. AI literacy requires people to have a conceptual understanding of how AI systems work, how they learn, and how their actions can perpetuate existing biases, including the skills to critically evaluate these technologies and collaborate effectively with them [12]. More specifically, AI literacy is associated with four aspects: (1) knowing and understanding the technology, (2) using and applying it, (3) evaluating and creating with it, and (4) addressing its ethical questions [13].
GAI literacy represents a deeper specialization of AI literacy. It focuses specifically on systems capable of generating novel content (text, images, code, etc.), not merely analyzing or classifying existing data. Where AI literacy emphasizes algorithmic understanding and decision-making processes, GAI literacy adds the unique dimension of human–AI co-creation through prompt engineering and iterative interaction. Additionally, GAI literacy addresses generative-specific ethical risks, including the authenticity of generated content and its potential for misinformation, intellectual property concerns in training data and outputs, and the acceleration of human cognitive biases through AI-generated reinforcement. GAI literacy thus refers to a set of skills that enable efficient, critical, and ethical interactions with AI systems capable of generating new content. These specialized skills can be grouped into the same four aspects as AI literacy. As shown in Figure 1, the first dimension, knowledge and understanding of GAI technical foundations, entails comprehending the core principles of generative AI, including how it differs from other forms of AI and its potential applications across domains; gaining insight into the workings of the models that power GAI (such as large language models (LLMs)); and understanding their ability to generate creative content, automate tasks, and provide insights, as well as their limitations in terms of accuracy, bias, and originality. The second dimension, the use and practical application of the technology, is directly associated with performing tasks such as writing, programming, problem-solving, and image generation. This dimension embraces proficiency in prompt engineering, which is required to direct GAI towards the expected outcomes [14]. The third dimension, critical evaluation, comprises assessing GAI outputs for accuracy, relevance, bias, and originality.
Deploying these skills empowers users to act as supervisors of the content they generate [9,15]. The fourth dimension, ethical awareness and responsible use, refers to the skill set that prepares users to recognize the ethical dilemmas that arise when using such technologies. These skills call for a commitment to the responsible use of GAI with, unlike other digital technologies, a focus on humanity [7,16]. Overall, Figure 1 integrates these four dimensions into a coherent framework that conceptualizes GAI literacy as a specialized extension of AI literacy, in which technical knowledge, practical use, critical evaluation, and ethical awareness interact dynamically to support responsible and effective engagement with GAI.
According to previous studies, GAI, like any other technology, is constantly changing. Since its algorithms are designed by observing and copying human behavior, thereby adapting to how we think and act, GAI literacy should not be a static concept defined once and for all. Instead, it should be viewed as a dynamic and evolving concept that adapts to new technological capabilities and emerging societal norms [4,17]. Figure 1 presents the conceptual framework of what GAI literacy should comprise. This study’s distinct contribution is empirical: we systematically analyze 40 published studies to reveal how research efforts are currently distributed across these four dimensions. This empirical mapping reveals whether scholarly investment is proportional or concentrated in certain areas, identifying gaps that guide future research priorities.

3. Methodology

3.1. Data Collection

GAI literacy was operationally defined as knowledge, skills, and competencies specifically related to understanding, using, and critically evaluating generative artificial intelligence systems. While the theoretical framework establishes what GAI literacy should include, an empirical question persists: How are research efforts distributed across these dimensions? Topic modeling via LDA systematically answers this by identifying thematic concentration and gaps within the corpus. Table 1 presents the inclusion and exclusion criteria.
The data collection followed PRISMA 2020 guidelines and was conducted on 10 September 2025, across two databases: Web of Science (WoS) Core Collection and Scopus, capturing the emergence of formal academic discourse on GAI literacy post-ChatGPT. The search strategy employed the Boolean query (“Generative AI Literacy” OR “Generative Artificial Intelligence Literacy” OR “GAI Literacy”) with filters for timespan 2023–2025, document type “Article”, and language “English”. In WoS, we applied TS = (“Generative AI Literacy” OR “Generative Artificial Intelligence Literacy” OR “GAI Literacy”) to the topic field (title, abstract, keywords), yielding 25 records. In Scopus, we executed TITLE-ABS-KEY (“Generative AI Literacy” OR “Generative Artificial Intelligence Literacy” OR “GAI Literacy”) with filters for source type “Journals”, retrieving 31 records. The inclusion criteria were as follows: (a) peer-reviewed journal articles that explicitly address GAI literacy or directly related concepts; (b) peer-reviewed articles in English; (c) published between 2023 and 2025; (d) accessible full text. The exclusion criteria were (a) books, book chapters, conference proceedings, gray literature; (b) non-English publications; (c) inaccessible full text; (d) studies not directly addressing GAI literacy themes. The total number of records was 56; after eliminating 16 duplicates, 40 unique articles remained. All 40 titles/abstracts were screened as potentially relevant given the broad and emerging nature of GAI literacy discourse, and full-text eligibility assessment confirmed that all 40 met the inclusion criteria. The absence of exclusions reflects the precision of the search strategy, which targeted GAI-specific terminology to capture this emerging field rather than the broader AI literacy domain.
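As an illustration of the record-merging step (56 records from the two databases reduced to 40 unique articles after removing 16 duplicates), the following Python sketch deduplicates on DOI with a normalized-title fallback; the record fields and the matching rule are assumptions for illustration, not the authors’ documented procedure.

```python
# Merge WoS and Scopus exports and drop duplicates, matching first on DOI
# and falling back to a normalized title when the DOI is missing.
def normalize_title(title):
    """Lowercase and strip non-alphanumerics so minor formatting
    differences between databases do not hide duplicates."""
    return "".join(ch for ch in title.lower() if ch.isalnum())

def deduplicate(records):
    """records: list of dicts with 'doi' and 'title' keys.
    Keeps the first occurrence of each unique article."""
    seen, unique = set(), []
    for rec in records:
        key = rec["doi"].lower() if rec.get("doi") else normalize_title(rec["title"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

wos = [{"doi": "10.1000/a1", "title": "GAI Literacy in Education"},
       {"doi": "10.1000/a2", "title": "Ethics of Generative AI"}]
scopus = [{"doi": "10.1000/A1", "title": "GAI literacy in education"},  # duplicate of a1
          {"doi": "10.1000/a3", "title": "Assessing GAI Literacy"}]

merged = deduplicate(wos + scopus)
print(len(merged))  # → 3 unique records from 4 inputs
```

Matching on a lowercased DOI before resorting to title comparison is the usual safeguard, since the two databases capitalize DOIs inconsistently.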
The deliberate use of the explicit terminology “Generative Artificial Intelligence Literacy”, rather than broader notions such as AI literacy, digital literacy, or algorithmic literacy, seeks to capture the specific academic discourse that emerged after the appearance of ChatGPT. Although this search favors relevance and conceptual precision, it may not cover studies on general AI literacy, responsible AI education, human–AI interaction, or AI competencies that could provide transferable knowledge. This systematic review therefore focuses on the landscape of explicit academic discourse on generative AI literacy rather than the complete historical trajectory of AI literacy. This scope is intentional and consistent with our objective. Future, broader reviews could supplement this work by identifying conceptual continuities between general AI literacy and generative AI contexts.
The entire data collection process, adapted from the PRISMA flow diagram [18], is depicted in Figure 2.

3.2. Data Analysis

Data processing and analysis were conducted using a custom R script designed to ensure reproducibility (script available in Supplementary Materials). The workflow integrated several packages for text mining and topic modeling. First, raw text was ingested using the pdftools package (version 3.6.0), employing a layout-aware algorithm to correctly extract text from both single and dual-column PDF formats. We then used the tm package (version 0.7-16) for preprocessing, which included converting text to lowercase, removing punctuation, numbers, and whitespace characters, filtering standard English stopwords (n = 174 terms from the SMART lexicon) plus a custom domain-specific list (n = 37 terms, including “study,” “doi,” “figure,” “results,” “method,” “article,” “research,” and bibliographic markers, developed iteratively to eliminate high-frequency terms with low thematic informativeness), and applying stemming using the Porter Stemmer algorithm (via the SnowballC package version 0.7.1).
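A compact Python analogue of this preprocessing chain (the original is implemented in R with tm and SnowballC) illustrates the sequence of steps; the tiny stopword set and the crude suffix-stripping function below are simplified stand-ins for the SMART lexicon and the Porter stemmer, not reproductions of them.

```python
import re

# Tiny stand-ins for the SMART stopword lexicon and the custom domain list.
STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "are", "on", "this"}
CUSTOM_STOPWORDS = {"study", "doi", "figure", "results", "method", "article", "research"}

def crude_stem(token):
    """Very rough suffix stripping; the real pipeline uses the Porter stemmer."""
    for suffix in ("ational", "ization", "ing", "ness", "ies", "es", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

def preprocess(text):
    text = text.lower()
    text = re.sub(r"[^a-z\s]", " ", text)   # drop punctuation and numbers
    tokens = text.split()                    # also collapses extra whitespace
    tokens = [t for t in tokens if t not in STOPWORDS | CUSTOM_STOPWORDS]
    return [crude_stem(t) for t in tokens]

print(preprocess("The results of this study: 40 articles on GAI literacy!"))
# → ['articl', 'gai', 'literacy']
```

The ordering matters: stopwords are filtered before stemming so that the stoplists can be written in plain English rather than in stemmed form.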
For topic modeling, we constructed a Document–Term Matrix (DTM) based on unigrams, filtering sparse terms with document frequency <2 to reduce noise and improve model stability. Latent Dirichlet Allocation (LDA) was selected as the topic modeling approach, a probabilistic method designed to identify latent themes within text corpora. To determine the optimal number of topics (K), we evaluated K values ranging from 2 to 30 using the ldatuning package (version 1.0.3) with four convergence metrics [19,20,21,22]. Based on the convergence of these metrics and semantic interpretability, K = 4 was selected.
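The DTM construction with the sparse-term filter (dropping terms that appear in fewer than two documents) can be sketched in a few lines of Python; this is an illustrative reimplementation of the idea, not the tm code.

```python
from collections import Counter

def build_dtm(docs, min_df=2):
    """docs: list of token lists. Returns (vocabulary, count matrix),
    keeping only terms whose document frequency is >= min_df."""
    df = Counter()
    for doc in docs:
        df.update(set(doc))                  # count each term once per document
    vocab = sorted(t for t, n in df.items() if n >= min_df)
    index = {t: j for j, t in enumerate(vocab)}
    dtm = [[0] * len(vocab) for _ in docs]
    for i, doc in enumerate(docs):
        for t in doc:
            if t in index:
                dtm[i][index[t]] += 1
    return vocab, dtm

docs = [["ethic", "literaci", "ethic"],
        ["literaci", "prompt", "engin"],
        ["prompt", "literaci", "evalu"]]
vocab, dtm = build_dtm(docs)
print(vocab)  # → ['literaci', 'prompt'] — the other terms occur in only one document
```

Filtering on document frequency rather than raw counts removes one-off bibliographic terms while keeping words that recur across the corpus, which is what stabilizes the subsequent LDA fit.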
The LDA model was computed using the topicmodels package (version 0.2-17) via Gibbs sampling. To ensure reproducibility, we fixed the random seed to 1234 and set hyperparameters to alpha (α) = 0.10 and delta (δ) = 0.01 (priors favoring sparse distributions). The sampling process ran for 2000 iterations with a 500-iteration burn-in period. Topics were visualized and explored interactively using LDAvis (version 0.3.2). The complete preprocessing code, parameter settings, and analysis outputs (topic terms, document assignments, LDA tuning metrics, and interactive visualization) are documented in the R script in Supplementary Materials.
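As a rough illustration of what the Gibbs-sampled LDA in the topicmodels package does internally, the following toy collapsed Gibbs sampler runs on a few token lists. The hyperparameter names mirror the paper’s α and δ (the latter is the topic–word prior, often written β elsewhere); this sketch omits the burn-in handling and convergence diagnostics of the real implementation.

```python
import random

def lda_gibbs(docs, K, alpha=0.10, delta=0.01, iters=200, seed=1234):
    """Toy collapsed Gibbs sampler for LDA.
    docs: list of documents, each a list of word ids in 0..V-1.
    Returns the posterior-mean topic proportions (theta) per document."""
    random.seed(seed)
    V = max(w for d in docs for w in d) + 1
    ndk = [[0] * K for _ in docs]          # document-topic counts
    nkw = [[0] * V for _ in range(K)]      # topic-word counts
    nk = [0] * K                           # tokens assigned to each topic
    z = []                                 # topic assignment of every token
    for i, doc in enumerate(docs):
        zi = []
        for w in doc:
            k = random.randrange(K)
            zi.append(k)
            ndk[i][k] += 1; nkw[k][w] += 1; nk[k] += 1
        z.append(zi)
    for _ in range(iters):
        for i, doc in enumerate(docs):
            for j, w in enumerate(doc):
                k = z[i][j]                # remove the token's current assignment
                ndk[i][k] -= 1; nkw[k][w] -= 1; nk[k] -= 1
                # Full conditional: p(topic t) ∝ (ndk+α) · (nkw+δ) / (nk+V·δ)
                weights = [(ndk[i][t] + alpha) * (nkw[t][w] + delta) / (nk[t] + V * delta)
                           for t in range(K)]
                r = random.random() * sum(weights)
                k, acc = 0, weights[0]
                while r > acc and k < K - 1:
                    k += 1
                    acc += weights[k]
                z[i][j] = k                # resample and restore counts
                ndk[i][k] += 1; nkw[k][w] += 1; nk[k] += 1
    return [[(ndk[i][t] + alpha) / (len(doc) + K * alpha) for t in range(K)]
            for i, doc in enumerate(docs)]

# Two tiny "documents" with disjoint vocabularies should separate into two topics.
docs = [[0, 1, 0, 1, 0], [2, 3, 2, 3, 3]]
theta = lda_gibbs(docs, K=2)
print([round(p, 2) for p in theta[0]])  # topic proportions for the first document
```

Fixing the seed, as the study does with seed 1234, makes a Gibbs run reproducible; the sparse priors (small α and δ) push each document and each topic towards a few dominant components.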
To improve interpretive validity, two researchers independently examined a stratified sample of 15 articles (37.5%) and categorized their thematic content into the four-topic framework. Of the 15 articles coded, 13 were assigned the same topic by both coders, and 2 (articles 3 and 40) showed initial discrepancies, which were resolved through discussion until consensus was reached. Cohen’s kappa (κ = 0.78; p < 0.001) indicated substantial inter-rater agreement. A comprehensive disciplinary and geographic analysis of the full corpus (N = 40) contextualized topic distribution across sectors and regions (see Section 4.1).
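Cohen’s kappa can be recomputed from a coder-by-coder confusion table. The function below is the standard formula; the 4×4 table is hypothetical, since the paper reports only the aggregate figures (13 of 15 agreements, κ = 0.78), so its result is close to but not exactly the published value.

```python
def cohens_kappa(matrix):
    """matrix[i][j]: number of items coder A assigned to category i
    and coder B assigned to category j."""
    n = sum(sum(row) for row in matrix)
    k = len(matrix)
    po = sum(matrix[i][i] for i in range(k)) / n                    # observed agreement
    row = [sum(matrix[i]) for i in range(k)]                        # coder A marginals
    col = [sum(matrix[i][j] for i in range(k)) for j in range(k)]   # coder B marginals
    pe = sum(row[i] * col[i] for i in range(k)) / n ** 2            # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical assignment of 15 articles to the four topics: 13 agreements
# on the diagonal, 2 off-diagonal discrepancies.
table = [[6, 1, 0, 0],
         [0, 4, 0, 0],
         [0, 0, 2, 1],
         [0, 0, 0, 1]]
print(round(cohens_kappa(table), 2))  # → 0.81 (the exact value depends on the real marginals)
```

Because kappa discounts chance agreement from the marginal distributions, the same 13/15 raw agreement can yield different kappa values depending on how the discrepancies fall across categories.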

4. Results

The LDA analysis identified four topics that outline the hierarchical structure of the scientific literature on GAI literacy. Figure 3 shows the most important words for each topic. As seen in Figure 4, the Ethical Foundation topic accounts for the majority of articles, while topics 3 (Adoption and Interaction) and 4 (Evaluation) are the least represented themes. The following synthesis presents an overview of each topic, contextualized within the broader characteristics of the research corpus.

4.1. Corpus Characterization

The final selection comprised 40 peer-reviewed articles that focused on GAI literacy and were published from 2023 to 2025. The details of the characteristics of the studies are shown in Table 2.
Education-related fields account for 19 studies (47.5% of the corpus), as shown in the disciplinary analysis in Table 3. The remaining disciplines are represented to a lesser extent: Health (n = 6, 15%), Technology (n = 6, 15%), Information Sciences (n = 5, 12.5%), Business (n = 3, 7.5%), and Ethics (n = 1, 2.5%). This distribution suggests that, despite the strong educational focus of GAI literacy development, emerging perspectives from healthcare, technology, and organizational contexts are beginning to shape the field.
The corpus exhibits a significant range of methodological approaches. The largest category (n = 15, 37.5%) comprises empirical and intervention studies, which include pedagogical interventions, curriculum design, field studies, and empirical evaluations of GAI literacy. Theoretical and conceptual articles (n = 10, 25%) provide frameworks, literature syntheses, and foundational discussions of GAI literacy concepts. Measurement and validation studies (n = 8, 20%) address assessment challenges by developing and validating scales and assessment instruments. Qualitative and exploratory studies (n = 5, 12.5%) offer in-depth investigations of user experiences and perceptions. Finally, policy and governance analyses (n = 2, 5%) examine institutional adoption and regulatory contexts. This methodological diversity indicates that the GAI literacy field is in its formative stages, with multiple research communities contributing distinct perspectives on how to understand and advance this emerging competency.
Geographically, the corpus shows a significant concentration in Asia, which accounts for 20 studies (50% of the total). Europe contributed 11 studies (27.5%), followed by the Americas, with 8 studies (20%), while only 1 study (2.5%) adopted a global perspective. The substantial geographic concentration in Asia, coupled with limited representation from other regions, suggests a potential geographic bias in the indexed literature and research ecosystems where GAI literacy scholarship has emerged. Future research should expand the investigation of GAI literacy to underrepresented regions and organizational contexts (government, industry, and healthcare systems outside educational settings).

4.2. Topic 1: Ethical Foundation

Topic 1 defines responsible and aware use of GAI by addressing moral issues, developing competency frameworks, and understanding how users identify, comprehend, and manage these challenges. The articles analyzed emphasize that GAI literacy should be viewed as an essential civic skill in addition to a technical one. Ref. [7] traces this idea from the notions of algorithmic literacy and general AI literacy to a more comprehensive and fundamental concept of responsible GAI, arguing that the generative capabilities of GAI tools demand a critical view of misinformation, bias, and social harm. To operationalize this, Ref. [8] proposes a thorough 12-component skill model ranging from fundamental AI and data literacy to sophisticated prompt engineering, in which ethical issues are woven throughout. Similarly, Ref. [4] presents the 3wAI Framework, which examines literacy across three fields: knowledge (Know What); skill (Know How); and, most significantly, ethical and philosophical reflection (Know Why).
Academic libraries are becoming key supporters of this literacy. Studies show that they are very active in developing resources and often adapt existing information literacy frameworks to fit the new environment. For example, Ref. [36] demonstrated that libraries align their work with the ACRL Information Literacy Framework. Meanwhile, Ref. [44] examined the AI Literacy framework for guide development, finding that the focus is on basic tool use and ethical application, with very little attention given to advanced AI creation skills.
A recurring problem identified in most studies is the disconnect between user views and expert frameworks. Ref. [9] reported that university students considered GAI-related issues such as plagiarism, misinformation, and dependence on technology to be the most urgent to address. Nevertheless, students’ awareness of and concern for complex ethical problems, such as environmental impact and labor exploitation, were only moderate. This situation highlights the importance of preparing students to cope with these concerns in educational programs. Ref. [16] confirms this notion, revealing that users given literacy-based instruction on GAI risks before engaging with the technology substantially raised their ethical consciousness compared with a control group.
The need for ethical education is also recognized across various fields of study. For instance, Ref. [42] argues that introducing GAI literacy into the Library and Information Science curriculum should be aligned with the profession’s core values, and Ref. [5] advocates the inclusion of GAI literacy in nursing education. Furthermore, Ref. [37] noted that government authorities are the main actors coordinating the generative AI environment, facilitating interaction among industry, academic institutions, and non-profit organizations to share responsibility for regulating and using GAI ethically. These actors are expected to sustain this process over time.

4.3. Topic 2: Educational Use

This topic discusses GAI’s practical application in education, focusing on pedagogical interventions and their impact on student learning. The literature strongly supports the high effectiveness of GAI-based educational programs. According to [10], just one 90 min workshop significantly boosted students’ confidence and their perception of how to properly use GAI. More broadly, Ref. [48] developed a learning module that improved GAI literacy among biomedical engineering students and also increased their self-confidence and ethical understanding of GAI. Ref. [31] emphasizes the importance of training educators and shows that pre-service language teachers who received integrated GAI instruction achieved significantly higher literacy levels than the control group.
Academic writing, in particular, is a significant area of application. Ref. [15] goes beyond basic use to develop a critical GAI literacy for doctoral students, one that emphasizes using the tool as a conversational partner to question assumptions and improve thinking, rather than as a replacement. Interestingly, in the context of learning English as a Foreign Language, Ref. [23] found that students had a moderate level of literacy and that their mastery of GAI tools was positively correlated with their grade point averages, highlighting the need for explicit instruction.
The influence of GAI literacy also extends beyond basic learning skills. A study by [45] on nursing and midwifery students found a positive correlation between GAI literacy and self-directed learning skills, with students’ attitudes towards GAI mediating the relationship. Different disciplines utilize these tools differently and thus need their own approaches. For instance, Ref. [17] supports marketing students developing interpretive flexibility so they can critically evaluate GAI outputs. In computer science, a semester-long experiment by [39] found that students who had access to an LLM-enabled tutor achieved significant final grade improvements. The extensive survey conducted by [29] revealed that students make moderate use of GAI; however, significant differences by gender, discipline, and year of study indicate that institutional strategies cannot be one-size-fits-all.

4.4. Topic 3: Adoption and Interaction

This topic explores the integration of GAI from two perspectives: the macro level of institutional policy and the micro level of individual user interaction. At the institutional level, a global study of 40 universities by [11], guided by the Diffusion of Innovations Theory, shows a proactive yet cautious approach: institutions have begun to establish standards that emphasize not only academic integrity but also teaching quality, even though full policy frameworks are still under development. This strategic approach is especially urgent in professional fields. Ref. [46] identified key strategies to reform medical education in Japan, while Ref. [34] conveyed the experience of nurse researchers adopting GAI, describing it as a complex journey of human–machine interaction that reveals both potential and challenges.
Micro-level research is moving towards a more detailed comprehension of human–AI interaction. Ref. [30] view GAI as a mediator of learning, where guided interaction can develop both subject knowledge and literacy. Going beyond screen-based interfaces, Ref. [28] developed an embodied co-creation concept with a tangible interface in which users partner with a GAI through sand, making the GAI’s processes more understandable. The user’s existing literacy plays a major role in the quality of this interaction: Ref. [33] discovered that a student’s GAI literacy determined their ability to interact efficiently with a chatbot-powered learning analytics dashboard. Prompt engineering has likewise been identified as an essential skill. In a pioneering approach, Ref. [14] suggest a method in which GAI is the agent that assesses and enhances students’ prompting capabilities; their model turns guidelines for good prompting into characteristics that a language model can identify, providing scalable and integrated learning support.

4.5. Topic 4: Evaluation

The last topic addresses the methodological aspects of the field and the creation of instruments to measure the level and impact of GAI literacy. A major shift from subjective self-reports to objective, validated tools is strongly emphasized. A prominent work in this regard is the development of the GAI Literacy Assessment Test (GLAT) by [32], who constructed and validated a 20-item multiple-choice test. Their results show that GLAT is a reliable (Cronbach’s α = 0.80) and valid measure of competence. Nevertheless, the authors acknowledge that the validation study’s scope is limited, as it was a non-longitudinal study of English-speaking university students from Western cultures. This highlights the need to develop instruments suited to a variety of GAI tasks and diverse populations.
This move towards stringent measurement is also evident in the study of specific areas. For instance, Ref. [38] created and validated the GAI-DMC Literacy Scale for Digital Multimodal Composing, acknowledging the distinct skills required for these tasks. Another study conducted by [6] focused on the professional domain. They developed and validated the GAI Literacy Scale (GAILS). In the first stage, GAI was instrumental in developing the scale, which was later validated with a sample of public-sector employees in China. GAILS includes technical knowledge, practical skills, critical thinking, and ethical awareness.
Beyond measurement considerations, this theme also addresses the performance effects of GAI literacy. For example, Ref. [6] reported that high GAI literacy scores were associated with substantially better job performance, especially in tasks requiring innovation and creativity. Likewise, Ref. [26] proposed a pedagogical metalanguage of transposition to help language students reflect on how they use GAI to transfer meaning between modes (e.g., text to image), thereby acquiring deeper literacy.
Ultimately, the work in this field is about identifying the key factors that influence the use of such tools. A network analysis on nursing students by [50] revealed that performance expectancy, i.e., the belief that the tool will facilitate the work, and facilitating conditions, like having access and support, were the two most central factors determining the intention to use GAI, whereas factors like self-efficacy or social influence played less important roles.

4.6. Corpus Distribution

As shown in Table 3, the four identified topics exhibit distinct geographic and disciplinary patterns. Topic 1 (Ethical Foundation, n = 16) has the most balanced geographic distribution, with representation from Asia (17.5%), Europe (10.0%), the Americas (10.0%), and a global perspective (2.5%). This suggests that ethical frameworks have attracted scholarly attention across multiple regions. In contrast, Topics 2–4 demonstrate increasingly pronounced concentrations in Asia: Topic 2 (Educational Use, n = 13) shows 12.5%, Topic 3 (Adoption and Interaction, n = 5) shows 10.0%, and Topic 4 (Evaluation, n = 6) shows 10.0% of the corpus in Asian contexts. Even so, implementation, institutional adoption, and evaluation research are conducted across regions, with notable representation from Europe and the Americas.
Disciplinarily, Topic 1 is distributed across Education (22.5%), Information Science (10.0%), and Health (7.5%), reflecting the importance of ethical literacy in multiple professional contexts. Topic 2 exhibits the greatest disciplinary diversity, with Education (10.0%), Technology (10.0%), Health (5.0%), Information (2.5%), Business (2.5%), and Ethics (2.5%), indicating multi-sector engagement with pedagogical applications. Topics 3 and 4 remain concentrated in Education (7.5% each), with secondary representation in Technology (5.0% in Topic 4) and Business (2.5%), suggesting that institutional adoption and evaluation research remain primarily within educational contexts.
The geographic and disciplinary analysis reveals research distributed across multiple regions and disciplines, with particular emphasis on educational applications. The representation of Asia (50.0%), Europe (27.5%), and the Americas (20.0%) indicates reasonable geographic diversity, while the concentration in Education (47.5% across all topics) leaves gaps in understanding GAI literacy in corporate, government, and healthcare administration sectors. Future research should prioritize disciplinary expansion, particularly in Technology (15.0%), and maintain geographic diversity to ensure that assessment instruments and adoption frameworks are adaptable across varied educational and professional contexts.

5. Discussion

The development of GAI literacy is closely intertwined with sustainability, as equitable access, ethical practices, and responsible integration are essential for long-term societal well-being. The analysis of works on GAI literacy has revealed four dominant topics: Ethical Foundation, Educational Use, Adoption and Interaction, and Evaluation.
The distribution of research across these topics reveals important imbalances. Ethical Foundation accounts for 40% of the literature, while Evaluation represents only 15%. This gap suggests that researchers have prioritized developing ethical frameworks over building the measurement tools needed to assess whether these principles are actually adopted. We have articulated what responsible GAI use should entail, but we lack robust instruments to evaluate whether users embrace these principles in practice. This explains the persistence of the ethics–practice disconnect, as the field has invested in normative ideals without proportional investment in evaluation infrastructure. Additionally, the concentration of research in education (47.5% of the total corpus) while healthcare, government, and business remain underrepresented suggests that literacy frameworks developed in academic settings may not adequately address the distinct ethical and operational challenges of other sectors.
Geographic concentration adds another layer to these patterns. Half of the reviewed studies originate from Asia, with Europe contributing 27.5%, and the Americas, 20%. This distribution likely reflects both the rapid adoption of AI in Asian educational systems and representation patterns in indexed databases. However, it raises questions about whether frameworks developed in Asian contexts transfer to regions with different educational traditions and labor market needs. Similarly, the sectoral concentration in education means we know considerably more about how students and university educators approach GAI than how healthcare professionals, government officials, or corporate employees engage with these tools. Given that healthcare professionals must navigate patient privacy concerns and government agencies must address accountability issues, developing context-specific literacy frameworks for these sectors should be a research priority.
While some topics are more pressing than others, a logical progression can also be traced among the themes, which facilitates their organization. Among them, measurement emerges as the most urgent challenge. GAI is transforming rapidly, yet there is little evidence that this transformation is having a positive impact on people and society. The scales used to assess GAI literacy must therefore be context-sensitive and capable of adjusting to rapid technological change. Such efforts can be seen as a first step toward a broader, more stable indicator that captures the essence of GAI literacy and remains relevant even as new tools emerge.
The existing literature highlights responsibility and ethics as essential user competencies, given that current levels of ethical awareness remain low, placing effective implementation at risk. In earlier digital-skills training, emphasis was placed mainly on technical proficiency while the ethical dimension was largely ignored. Current research provides theoretical grounds for understanding human interaction with GAI while highlighting the need to define the competences required for responsible use that benefits society as a whole.
Figure 5 depicts the topics discovered and their interrelations in the form of a living ecosystem. We conceptualized these connections as dynamic, mirroring how the topics are addressed within organizations. At the same time, they follow a logical sequence, reflecting the interdependencies and feedback mechanisms inherent to the GAI literacy process. Each axis represents information, value, and evidence moving back and forth. The effectiveness of the system relies on the proper functioning of every component, awareness of the overall system, and the constant, measurable interaction between its parts.
The four topics identified through LDA analysis informed the hierarchical structure of the proposed GAI literacy tree model, grounded in quantifiable characteristics: corpus prominence, topic-specific vocabulary, and geographic–disciplinary distribution. Topic 1 (Ethical Foundation) was positioned as the conceptual soil because it represents 40% of the corpus—the highest prominence. Its signature terms (“ethics,” “responsible,” “framework,” “awareness”) characterize its normative function and universal representation across disciplines and regions. Topic 3 (Adoption and Interaction) was mapped to the roots because its signature terms (“adoption,” “institution,” “policy,” “integration,” “user”) describe organizational infrastructure and absorptive capacity. Its 78% concentration in Asian contexts signals region-specific adoption trajectories, reflected in the model through variable root depth. Topic 4 (Evaluation) was assigned to the trunk because its predominant terms (“assessment,” “scale,” “validation,” “measurement”) correspond to measurement functions. The tree-ring metaphor—wherein rings chronicle growth—translates directly to evaluation frameworks capturing progressive literacy development. Topic 4’s 15% prominence indicates a measurement gap: evaluation infrastructure is underdeveloped relative to implementation, visualized as a structural component requiring reinforcement. Topic 2 (Educational Use) was positioned as the canopy and fruits, comprising 32.5% of the corpus with the highest disciplinary diversity. Its signature pedagogical terms (“student,” “learning,” “education,” “intervention,” “skill”) describe visible, measurable outputs: competencies, learning gains, professional standards. This validates its role as the interface where principles and capacity materialize as tangible outcomes. The resulting structure—ethical soil, adoption roots, evaluation trunk, educational canopy—represents a data-driven mapping of how research conceptualizes GAI literacy.
The conceptual soil corresponds to the topic “Ethical Foundation”, comprising the normative and cognitive elements that nourish the whole tree. This stage is characterized by structural dilemmas that influence the behavior and choices of the people involved. Productivity and safety must function in harmony; every step towards increased efficiency should be accompanied by careful attention to risk management and data protection.
The distinction between genuine learning and functional digital illiteracy is thus situated within these foundational frameworks. Competence is not merely the ability to operate the tools: it requires critical thinking, knowledge of how the tools work, acknowledgment of their limitations, and acceptance of the responsibilities that come with them. Although many AI tools are user-friendly, users must remain analytical and capable of verifying the accuracy of results. Consequently, the line between substantive and superficial use is often narrow.
Moreover, the distinction between responsible and dependent use constitutes a critical consideration. Preserving intellectual autonomy and avoiding cognitive dependence are therefore essential. While technological tools can contribute to solutions, they may simultaneously enable unethical or criminal activities; consequently, robust principles, audit mechanisms, and governance structures are essential for preventing misuse and ensuring accountability. This ethical and conceptual foundation underpins the entire system and, consequently, influences both its quality and long-term sustainability.
The roots stand for the topic “Adoption and Interaction”. Here, the metaphor illustrates that only strong and well-developed roots can effectively absorb nutrients, implying that institutions require adequate preparedness, maturity, and acceptance to successfully integrate AI technologies. Organizations that deploy these technologies prematurely, without a sufficiently strong foundation, are likely to experience structural fragility, making them vulnerable to external pressures and undermining long-term stability.
Therefore, knowing the adoption mechanisms, their effects, and the degree of individual acceptance becomes a matter of utmost importance. In essence, artificial intelligence challenges traditional technological frameworks and introduces complexities for policymakers; consequently, shaping public opinion in favor of AI and enacting regulatory measures to limit its use remain complex tasks for political actors. This difficulty highlights a central paradox: AI functions as a double-edged sword, offering substantial benefits to individuals while simultaneously exposing organizations, individuals, and systems to potential risks, such as the mishandling of sensitive or confidential data.
This issue is particularly evident in everyday professional practice. It has become increasingly difficult to justify allocating several hours to a task that an AI system can complete within minutes at comparable quality. Such cases put the spotlight back on individual behavior and decision-making. Furthermore, as technical skills deteriorate, AI-generated output may outshine professionals’ own work, leading them to deliver the product they know is best, even though it comes from AI.
The five pillars of the system—policies, infrastructure, acceptance, staff training, and governance—are depicted as the roots of the organizational tree, varying in relative significance and representing alternative pathways that organizations may pursue in this process. The smaller roots, usually hidden, symbolize everyday practices, ongoing training, and management routines that deepen institutional contact with knowledge. The bigger roots stand for the regulatory, governance, and technological infrastructure systems that ensure the organization’s stability and longevity. All kinds of roots are equally important; without them, the organizational tree is starved of nourishment.
The forms of human–AI interaction are the supply channels of this system. In some situations, AI is a tool that simply enables humans to do more without replacing them. In others, it supports the generation of new ideas, with humans and AI sharing the creative process as a team. Through partial or full automation, control over certain operations can be delegated to technological systems, thereby reducing human workload; however, ethical oversight and careful system design remain necessary. Each channel has a different impact on the institutional body and, therefore, should be aligned with specific goals, roles, responsibilities, and acceptable levels of risk.
Organizational environments of cultural resistance, lack of trust, or vague regulations are like extremely acidic soil in which roots cannot grow and absorb necessary nutrients. Conversely, when an organization’s root system is deep, diversified, and strong, it can internalize essential structures that manifest as codes of conduct, transparent procedures, learning networks, and innovation initiatives. In this context, the roots function not merely as passive supports, but as dynamic forces that translate principles into actions, connect institutions with their environment, and enable the continuous flow of knowledge throughout the system.
In this metaphor, the trunk of the tree represents the topic “Evaluation”. The stability of the whole structure depends on the robustness of this component. Tools, indicators, and evaluation procedures should be designed from the very beginning, with a primary focus on their validity and reliability. Just as a tree’s rings tell the story of its growth, repeated evaluations provide historical data through which one can monitor progress, make comparisons, and obtain useful feedback. Literacy scales, knowledge tests, task-based assessments, and responsible-use indicators form the structural core that supports the branches and the canopy. If measurement is absent, the exchange between the crown and the roots that supplies the system with vital resources is interrupted, and the vitality of the system gradually declines.
The canopy, metaphorically represented in the tree by branches and leaves, corresponds to the topic “Educational Use”. In this framework, educational applications function as the leaves, performing “pedagogical photosynthesis” by transforming potential opportunities into measurable learning outcomes, skills, and standards of professional practice. The branches symbolize pedagogical domains or instructional projects, whereas the leaves stand for curricular diversity, academic levels, and teaching strategies. The output produced by the canopy is fed back to the trunk and roots, enabling changes in policy, fine-tuning of assessment instruments, and updating of ethical and competence frameworks.
The role of universities is pivotal in such a setting. They are paving different ways for the integration of AI with a focus on learning, critical engagement, and ethical reflection. Educational establishments not only identify challenges but also provide solutions, uncover previously unrecognized applications, and extend the use of AI to various disciplines. In contrast, the private sector tends to emphasize scalability and operational deployment in daily organizational activities.
In essence, this model represents a circular and interdependent dynamic. The topic “Ethical Foundation” addresses decisions regarding what knowledge is absorbed and considered beneficial; the topic “Adoption and Interaction” concerns institutional adoption and integration; the topic “Evaluation” provides structure and evaluative memory; and the topic “Educational Use” applies knowledge and generates evidence that feeds back into the system. When ethical foundations are well defined, institutional roots are strong and diverse, measurement frameworks are stable and reliable, and educational applications are critical and reflective, GAI literacy develops as a well-balanced, lasting, and socially accountable ecosystem.
The value of this systematic mapping lies not in proposing new literacy dimensions—those already exist in the Theoretical Background—but in empirically revealing where the field invests its efforts and where critical gaps persist. The concentration of research in ethics over evaluation (40% versus 15%) is not merely a descriptive finding; it identifies a structural problem. The Literacy Tree model captures this imbalance visually: while ethical foundations form the conceptual soil and educational applications populate the visible canopy, the trunk (evaluation) remains underdeveloped. This creates a framework-to-practice problem: we have articulated principles without the measurement capacity to validate whether they matter. By mapping research distribution across geography, discipline, and topic, this review provides researchers, educators, and policymakers with a concrete understanding of where investments are needed most urgently: developing validated evaluation tools, expanding research beyond educational settings, and investigating how institutions actually translate ethical principles into practice.

6. Conclusions

This systematic review of 40 peer-reviewed articles (2023–2025) highlights a major imbalance in how the academic community prioritizes different aspects of generative artificial intelligence literacy. Ethical Foundations dominate the research discussion (40%, n = 16), while Evaluation—arguably the most critical aspect for measuring progress—remains significantly underdeveloped (15%, n = 6). This 2.7:1 ratio between ethical theory and measurement practice reveals a key gap: although the field has clearly defined what GAI literacy should include, it has heavily underinvested in the validated evaluation tools necessary to implement and measure these literacy goals. This imbalance creates a major obstacle to developing evidence-based educational and organizational interventions. To our knowledge, this is the first systematic quantification of thematic imbalances, sectoral underrepresentation, and geographic patterns in GAI literacy research. This empirical mapping offers an evidence-based foundation for strategic research priorities and resource distribution.
Theoretically, this empirical mapping distinguishes itself from earlier normative frameworks by fundamentally altering the research question. While existing literature asks “What should GAI literacy include?”, this systematic analysis answers with “What are researchers actually investigating?” This difference between prescriptive definitions and empirical research focus shows a misalignment between theoretical proposals and the areas that scholars target. Research tends to follow disciplinary standards, institutional incentives, and geopolitical influences that shape which aspects of GAI literacy are most often studied.
The practical implications of this study are threefold. First, the field must focus on developing and validating measurement tools. Researchers should create a coordinated plan for scale development, psychometric validation, and comparison of GAI literacy assessments. Second, scholars should intentionally expand research beyond educational settings (currently 47.5% of the corpus) into healthcare, government, and industry sectors where GAI adoption is growing. Each sector requires specific frameworks that address unique ethical and operational issues. Third, the strong focus of research in Asia (50%) highlights the need for targeted efforts to fund and promote research from underrepresented regions such as the Americas, Africa, and the Middle East, ensuring that GAI literacy frameworks include culturally diverse viewpoints and local labor-market conditions.
This review has important limitations that should guide interpretation. The search was limited to English-language peer-reviewed articles indexed in Web of Science and Scopus, which may underrepresent scholarship from developing regions, non-English contexts, and emerging academic systems. The snapshot reflects literature available through September 2025; however, rapid advances in generative AI capabilities could quickly make evaluation frameworks and pedagogical approaches outdated. Finally, the field’s focus on academic publishing might conceal significant GAI literacy development in professional settings, government agencies, and corporations that typically do not produce peer-reviewed research. Future systematic reviews should include gray literature and practitioner perspectives.
Along with the previous suggestion on systematic reviews, future research should focus on four key areas. First, develop culturally adapted evaluation instruments across K-12, higher education, and professional settings, including longitudinal studies that track how literacy evolves as users gain more experience. Second, conduct comparative studies to explore how GAI literacy priorities, teaching methods, and ethical frameworks vary across different regions and cultures. Third, investigate the organizational adoption and implementation mechanisms, as well as the institutional factors, that support or hinder GAI literacy development. Finally, because of limited measurement capacity and the gap between ethics and practice, the field should pursue integrated solutions: creating evaluation tools that assess not only competency but also behavioral adoption of ethical principles, and designing interventions that simultaneously improve technical skills and ethical awareness. This integrated approach shifts measurement and ethics development from separate research streams into mutually reinforcing mechanisms. Strategic investment in these interconnected areas, especially across underrepresented sectors and regions, could produce the greatest benefits for the field’s growth.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/su18031429/s1.

Author Contributions

Conceptualization, E.E.G. and P.R.-C.; methodology, A.M.M. and P.R.-C.; software, P.R.-C.; validation, A.M.M., P.R.-C. and E.E.G.; formal analysis, P.R.-C.; investigation, E.E.G. and P.R.-C.; resources, E.E.G. and P.R.-C.; data curation, P.R.-C.; writing—original draft preparation, A.M.M. and P.R.-C.; writing—review and editing, E.E.G.; visualization, A.M.M.; supervision, P.R.-C.; project administration, P.R.-C.; funding acquisition, P.R.-C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by ANID/FONDECYT Regular, grant number 1241852.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Acknowledgments

During the preparation of this manuscript, the authors used Grammarly (version 14.1271.0) and DeepL (1.71.0) to improve grammar, spelling, wording, and language clarity in some parts. The authors have reviewed and edited the output and take full responsibility for the content of this publication.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Carmichael, M. AI Is Making the World More Nervous; Ipsos: Paris, France, 2023. [Google Scholar]
  2. Poushter, J.; Fagan, M.; Corichi, M. How People Around the World View AI; Pew Research Center: Washington, DC, USA, 2025; Volume 15. [Google Scholar]
  3. Gonzales, S. AI Literacy and the New Digital Divide—A Global Call for Action; UNESCO: Paris, France, 2024. [Google Scholar]
  4. Bozkurt, A. Why Generative AI Literacy, Why Now and Why It Matters in the Educational Landscape? Kings, Queens and GenAI Dragons. OPEN Prax. 2024, 16, 283–290. [Google Scholar] [CrossRef]
  5. Simms, R.C. Generative Artificial Intelligence (AI) Literacy in Nursing Education: A Crucial Call to Action. Nurse Educ. Today 2025, 146, 106544. [Google Scholar] [CrossRef] [PubMed]
  6. Liu, X.; Zhang, L.X.; Wei, X.C. Generative Artificial Intelligence Literacy: Scale Development and Its Effect on Job Performance. Behav. Sci. 2025, 15, 811. [Google Scholar] [CrossRef] [PubMed]
  7. Cox, A. Algorithmic Literacy, AI Literacy and Responsible Generative AI Literacy. J. Web Librariansh. 2024, 18, 93–110. [Google Scholar] [CrossRef]
  8. Annapureddy, R.; Fornaroli, A.; Gatica-Pérez, D. Generative AI Literacy: Twelve Defining Competencies. Digit. Gov. Res. Pract. 2025, 6, 1–21. [Google Scholar] [CrossRef]
  9. Zhao, X.; Cox, A.; Cai, L. ChatGPT and the Digitisation of Writing. Humanit. Soc. Sci. Commun. 2024, 11, 482. [Google Scholar] [CrossRef]
  10. Sullivan, M.; McAuley, M.; Degiorgio, D.; McLaughlan, P. Improving Students’ Generative AI Literacy: A Single Workshop Can Improve Confidence and Understanding. J. Appl. Learn. Teach. 2024, 7, 88–97. [Google Scholar] [CrossRef]
  11. Jin, Y.; Yan, L.; Echeverria, V.; Gašević, D.; Maldonado, R. Generative AI in Higher Education: A Global Perspective of Institutional Adoption Policies and Guidelines. Comput. Educ. Artif. Intell. 2025, 8, 100348. [Google Scholar] [CrossRef]
  12. Long, D.; Magerko, B. What Is AI Literacy? Competencies and Design Considerations. In Proceedings of the Conference on Human Factors in Computing Systems—Proceedings; Association for Computing Machinery: New York, NY, USA, 2020. [Google Scholar]
  13. Ng, D.T.K.; Leung, J.K.L.; Chu, S.K.W.; Qiao, M.S. Conceptualizing AI Literacy: An Exploratory Review. Comput. Educ. Artif. Intell. 2021, 2, 100041. [Google Scholar] [CrossRef]
  14. Ognibene, D.; Donabauer, G.; Theophilou, E.; Koyuturk, C.; Yavari, M.; Bursic, S.; Telari, A.; Testa, A.; Boiano, R.; Taibi, D.; et al. Use Me Wisely: AI-Driven Assessment for LLM Prompting Skills Development. Educ. Technol. Soc. 2025, 28, 184–201. [Google Scholar] [CrossRef]
  15. Ou, A.W.; Khuder, B.; Franzetti, S.; Negretti, R. Conceptualising and Cultivating Critical GAI Literacy in Doctoral Academic Writing. J. Second Lang. Writ. 2024, 66, 101156. [Google Scholar] [CrossRef]
  16. Kwon, J. Enhancing Ethical Awareness Through Generative AI Literacy: A Study on User Engagement and Competence. Edelweiss Appl. Sci. Technol. 2024, 8, 4136–4145. [Google Scholar] [CrossRef]
  17. Beninger, S.; Reppel, A.; Stanton, J.; Watson, F. Facilitating Generative AI Literacy in the Face of Evolving Technology: Interventions in Marketing Classrooms. J. Mark. Educ. 2025, 47, 112–125. [Google Scholar] [CrossRef]
  18. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 Statement: An Updated Guideline for Reporting Systematic Reviews. BMJ 2021, 372, 71. [Google Scholar] [CrossRef] [PubMed]
  19. Griffiths, T.L.; Steyvers, M. Finding Scientific Topics. Proc. Natl. Acad. Sci. USA 2004, 101, 5228–5235. [Google Scholar] [CrossRef]
  20. Deveaud, R.; SanJuan, E.; Bellot, P. Accurate and Effective Latent Concept Modeling for Ad Hoc Information Retrieval. Doc. Numer. 2014, 17, 61–84. [Google Scholar] [CrossRef]
  21. Cao, J.; Xia, T.; Li, J.; Zhang, Y.; Tang, S. A Density-Based Method for Adaptive LDA Model Selection. Neurocomputing 2009, 72, 1775–1781. [Google Scholar] [CrossRef]
  22. Arun, R.; Suresh, V.; Madhavan, C.E.V.; Murty, M.N. On Finding the Natural Number of Topics with Latent Dirichlet Allocation: Some Observations. In Proceedings of the Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Berlin/Heidelberg, Germany, 2010; Volume 6118 LNAI. [Google Scholar]
  23. Alzubi, A.A.F. Generative Artificial Intelligence in the EFL Writing Context: Students’ Literacy in Perspective. Qubahan Acad. J. 2024, 4, 59–69. [Google Scholar] [CrossRef]
  24. Bozkurt, A. Unleashing the Potential of Generative AI, Conversational Agents and Chatbots in Educational Praxis: A Systematic Review and Bibliometric Analysis of GenAI in Education. Open Prax. 2023, 15, 261–270. [Google Scholar] [CrossRef]
  25. Chen, K.; Tallant, A.C.; Selig, I. Exploring Generative AI Literacy in Higher Education: Student Adoption, Interaction, Evaluation and Ethical Perceptions. Inf. Learn. Sci. 2025, 126, 132–148. [Google Scholar] [CrossRef]
  26. Christoforou, M. Gen AI-Assisted Multimodal Meaning Design: Exercising a Pedagogic Metalanguage of Transposition. Pedagogies 2025, 1–25. [Google Scholar] [CrossRef]
  27. Dadhich, M.; Hiran, K.K.; Bhaumik, A.; Chakkaravarthy, M.; Doshi, R.; Dadhich, M.; Poddar, S.; Hiran, K.K. Demystifying the Dynamic Determinants of Generative Artificial Intelligence (AI) Literacy for Adaptable Sustainable Education: Multistage Structure Equation Remodeling. In Integrating Generative AI in Education to Achieve Sustainable Development Goals; IGI Global Scientific Publishing: Hershey, PA, USA, 2024; pp. 72–85. [Google Scholar]
  28. El-Zanfaly, D.; Dong, Y.W.; Huang, Y.W.; ACM. Sand-in-the-Loop: Investigating Embodied Co-Creation for Shared Understandings of Generative AI. In Proceedings of the Carnegie Mellon University; Association for Computing Machinery: New York, NY, USA, 2023; pp. 256–260. [Google Scholar]
  29. Hershkovitz, A.; Tabach, M.; Reich, Y.; Lurie, L.; Cholcman, T. Framing and Evaluating Task-Centered Generative Artificial Intelligence Literacy for Higher Education Students. Systems 2025, 13, 518. [Google Scholar] [CrossRef]
  30. Honigsberg, S.; Watkowski, L.; Drechsler, A. Generative Artificial Intelligence in Higher Education: Mediating Learning for Literacy Development. Commun. Assoc. Inf. Syst. 2025, 56, 1044–1076. [Google Scholar] [CrossRef]
  31. Huang, T.; Wu, C.; Wu, M. Developing Pre-Service Language Teachers’ GenAI Literacy: An Interventional Study in an English Language Teacher Education Course. Discov. Artif. Intell. 2025, 5, 163. [Google Scholar] [CrossRef]
  32. Jin, Y.; Maldonado, R.; Gašević, D.; Yan, L. GLAT: The Generative AI Literacy Assessment Test. Comput. Educ. Artif. Intell. 2025, 9, 100436. [Google Scholar] [CrossRef]
  33. Jin, Y.Q.; Yang, K.X.; Yan, L.X.; Echeverria, V.; Zhao, L.X.; Alfredo, R.; Milesi, M.; Fan, J.X.; Li, X.Y.; Gasevic, D.; et al. Chatting with a Learning Analytics Dashboard: The Role of Generative AI Literacy on Learner Interaction with Conventional and Scaffolding Chatbots. In Proceedings of the Monash University; Association for Computing Machinery: New York, NY, USA, 2025; pp. 579–590. [Google Scholar]
  34. Kang, R.; Xuan, Z.; Tong, L.; Wang, Y.; Jin, S.; Xiao, Q. Nurse Researchers’ Experiences and Perceptions of Generative AI: Qualitative Semistructured Interview Study. J. Med. Internet Res. 2025, 27, e65523. [Google Scholar] [CrossRef] [PubMed]
  35. Kato, T.; Tawada, M.; Miyaguni, Y.; Ahuso, H. Designing a Generative AI Literacy Education Program for Education Faculty Students and Evaluating Its Effectiveness Using the ARCS Motivation Model. IEEJ Trans. Ind. Appl. 2025, 145, 230–237. [Google Scholar] [CrossRef]
  36. Ko, C.R.; Chiu, M.-H. How Can Academic Librarians Support Generative AI Literacy: An Analysis of Library Guides Using the ACRL Information Literacy Framework. Proc. Assoc. Inf. Sci. Technol. 2024, 61, 977–979. [Google Scholar] [CrossRef]
  37. Lao, Y.C.; You, Y.K. Unraveling Generative AI in BBC News: Application, Impact, Literacy and Governance. Transform. Gov. People Process Policy 2024. ahead of print. [Google Scholar] [CrossRef]
  38. Liu, M.L.; Zhang, L.J.; Zhang, D.L. Enhancing Student GAI Literacy in Digital Multimodal Composing through Development and Validation of a Scale. Comput. Hum. Behav. 2025, 166, 108569. [Google Scholar] [CrossRef]
  39. Lyu, W.H.; Wang, Y.M.; Chung, T.T.; Sun, Y.F.; Zhang, Y.X. Evaluating the Effectiveness of LLMs in Introductory Computer Science Education: A Semester-Long Field Study; Association for Computing Machinery: New York, NY, USA, 2024; pp. 63–74. [Google Scholar]
  40. Meng, X.P.; Guo, X.G.; Fang, J.W.; Chen, J.; Huang, L.F. Fostering Pre-Service Teachers’ Generative AI Literacy and Critical Thinking: An RSCQA Approach. Educ. Technol. Soc. 2025, 28, 202–225. [Google Scholar] [CrossRef]
  41. Mohammed, C. Work in Progress: A Rapid Review of the Scholarship on Generative AI in Engineering Workplaces—Implications for Engineering Education. In Ciampi, M.M., Brito, C.D., Eds.; IEEE: New York, NY, USA, 2025. [Google Scholar]
  42. Morris, R.J.; Malady, A. Facing the Questions Together: Faculty and Student Perspectives on Integrating Generative AI in LIS Education. Libr. Trends 2025, 73, 553–573. [Google Scholar] [CrossRef]
  43. O’Dea, X.; Ng, D.T.K.; O’Dea, M.; Shkuratskyy, V. Factors Affecting University Students’ Generative AI Literacy: Evidence and Evaluation in the UK and Hong Kong Contexts. Policy Futures Educ. 2024. [Google Scholar] [CrossRef]
  44. Ru, K.C.; Tang, R. Promoting AI Literacy Through US Academic Libraries: An Analysis of LibGuides from ARL and Oberlin Group Libraries Using the EDUCAUSE AI Literacy Framework. Inf. Res. Int. Electron. J. 2025, 30, 847–865. [Google Scholar] [CrossRef]
  45. Sengul, T.; Sarıköse, S.; Uncu, B.; Kaya, N. The Effect of Artificial Intelligence Literacy on Self-Directed Learning Skills: The Mediating Role of Attitude Towards Artificial Intelligence: A Study on Nursing and Midwifery Students. Nurse Educ. Pract. 2025, 88, 104516. [Google Scholar] [CrossRef]
  46. Shimizu, I.; Kasai, H.; Shikino, K.; Araki, N.; Takahashi, Z.; Onodera, M.; Kimura, Y.; Tsukamoto, T.; Yamauchi, K.; Asahina, M.; et al. Developing Medical Education Curriculum Reform Strategies to Address the Impact of Generative AI: Qualitative Study. JMIR Med. Educ. 2023, 9, e53466. [Google Scholar] [CrossRef] [PubMed]
  47. Tomlinson, E.; Schoch, M.; Macfarlane, S.; Aryal, S.; Kumar, F.; Bunker, N.; McDonall, J. A Course-Wide Approach to Building Generative Artificial Intelligence Literacy Across an Undergraduate Nursing Curriculum. Nurse Educ. 2025, 50, 113–115. [Google Scholar] [CrossRef] [PubMed]
  48. Wang, X.L.; Chan, T.M.; Tamura, A.A. A Learning Module for Generative AI Literacy in a Biomedical Engineering Classroom. Front. Educ. 2025, 10, 1551385. [Google Scholar] [CrossRef]
  49. Zhang, D.; Wen, L.; Wu, J.G. Structured or Semi-Structured? The Use of Reflection Journals in Postgraduates’ Generative Artificial Intelligence Literacy Development in an L2 Academic Writing Context. Eur. J. Educ. 2025, 60, e70189. [Google Scholar] [CrossRef]
  50. Zhu, Y.; Wen, F.; Wu, J.; Huang, P.; Zhang, Y. Identifying Highly Correlated Determinants Influencing Student Nurses’ Behavioral Intention of Using Generative Artificial Intelligence (Generative AI): A Network Analysis. Nurse Educ. Today 2025, 154, 106855. [Google Scholar] [CrossRef]
Figure 1. Dimensions of GAI literacy.
Figure 2. Data collection phase. Flow diagram showing the PRISMA-compliant systematic review process: initial search across WoS (n = 25) and Scopus (n = 31) databases (total n = 56 records), duplicate removal (n = 16), resulting in 40 unique articles included in the final review. All 40 articles met inclusion criteria with no exclusions based on title/abstract or full-text screening.
Figure 3. Most relevant words by topic. Four-panel visualization showing the top terms identified by LDA modeling for each thematic area: (a) Topic 1: Ethical Foundation; (b) Topic 2: Educational Use; (c) Topic 3: Adoption and Interaction (key terms: adoption, institution, policy, integration); (d) Topic 4: Evaluation.
Figure 4. Distribution of articles by topics. Bar chart showing the number of included articles (N = 40) assigned to each topic by LDA: Ethical Foundation (n = 16, 40%), Educational Use (n = 13, 32.5%), Adoption and Interaction (n = 5, 12.5%), and Evaluation (n = 6, 15%).
Figure 5. Literacy tree in GAI.
Table 1. Inclusion and exclusion criteria for study selection.
Criterion | Included | Excluded
Content | GAI literacy, prompt engineering, generative AI skills | AI/algorithmic/digital literacy (non-GAI focus); general AI competencies
Publication Type | Peer-reviewed journal articles | Books, chapters, gray literature, preprints, conference proceedings
Language | English | Non-English
Time Period | 2023–2025 (post-ChatGPT) | Before 2023 or after 2025
Accessibility | Full text available | Full text not accessible
Table 2. Characteristics of included studies.
Study | Context/Field | Primary Focus/Methodology | Region | Discipline | Topic
Alzubi (2024) [23] | EFL/Language Learning | Writing Skills: Impact of GenAI on EFL students’ writing literacy. | Asia | Education | 1
Annapureddy et al. (2025) [8] | General/Government | Competency Framework: Proposal of 12 defining competencies for GenAI literacy. | Europe | Business | 2
Beninger et al. (2025) [17] | Marketing Education | Pedagogy: Interventions to facilitate GenAI literacy in marketing classrooms. | Americas | Education | 4
Bozkurt (2023) [24] | Education (General) | Systematic Review: Analysis of conversational agents and chatbots in educational praxis. | Europe | Education | 1
Bozkurt (2024) [4] | Education (General) | Theoretical: The urgency and definition of GenAI literacy in the educational landscape. | Europe | Education | 3
Chen et al. (2025) [25] | Higher Education | Student Perception: Adoption, interaction, and ethical evaluation of GenAI by students. | Asia | Education | 1
Christoforou (2025) [26] | Design/Pedagogy | Multimodal Design: GenAI-assisted meaning design and pedagogic metalanguage. | Europe | Education | 2
Cox (2024) [7] | LIS/Web Librarianship | Conceptual: Defining “Responsible Generative AI Literacy” vs. “Algorithmic Literacy”. | Europe | Information | 2
Dadhich et al. (2024) [27] | Sustainable Education | Determinants Analysis: Factors influencing GenAI literacy for adaptable education. | Asia | Education | 2
El-Zanfaly et al. (2023) [28] | HCI/Design | Co-creation: “Sand-in-the-loop” investigation of embodied co-creation. | Americas | Technology | 2
Hershkovitz et al. (2025) [29] | Higher Education | Framework Evaluation: Task-centered GenAI literacy framework for students. | Asia | Education | 1
Honigsberg et al. (2025) [30] | Higher Education | Learning Mediation: How GenAI mediates learning and literacy development in HE. | Americas | Information | 1
Huang et al. (2025) [31] | Teacher Education (EFL) | Intervention: Developing pre-service language teachers’ GenAI literacy. | Asia | Education | 4
Jin Y. et al. (2025) [32] | EdTech/Measurement | Scale Development: “GLAT” (Generative AI Literacy Assessment Test). | Asia | Education | 3
Jin Y. et al. (2025) [11] | Higher Ed Policy | Policy Analysis: Global review of institutional adoption policies and guidelines. | Global | Education | 1
Jin Y.Q. et al. (2025) [33] | Learning Analytics | HCI/Chatbots: Role of GenAI literacy in learner interaction with dashboards. | Asia | Business | 4
Kang et al. (2025) [34] | Nursing Research | Qualitative: Nurse researchers’ experiences and perceptions of GenAI. | Asia | Health | 2
Kato et al. (2025) [35] | Teacher Education | Program Design: Design and evaluation of a GenAI literacy program using ARCS model. | Asia | Education | 2
Ko & Chiu (2024) [36] | Academic Libraries | Content Analysis: Analysis of Library Guides supporting GenAI literacy. | Asia | Information | 1
Kwon (2024) [16] | Ethics/General | Ethics: Relationship between GenAI literacy and ethical awareness/engagement. | Asia | Ethics | 2
Lao & You (2024) [37] | Media/Governance | Discourse Analysis: GenAI application and literacy in BBC News. | Europe | Technology | 2
Liu M.L. et al. (2025) [38] | Multimodal Composing | Scale Validation: Enhancing student literacy in digital multimodal composing. | Asia | Education | 3
Liu X. et al. (2025) [6] | Workplace/HR | Job Performance: Scale development and effect of GenAI literacy on performance. | Asia | Business | 3
Lyu et al. (2024) [39] | CS Education | Field Study: Effectiveness of LLMs in introductory Computer Science education. | Asia | Technology | 4
Meng et al. (2025) [40] | Teacher Education | Pedagogy (RSCQA): Fostering critical thinking and GenAI literacy in pre-service teachers. | Asia | Education | 2
Mohammed (2025) [41] | Engineering | Workplace Review: Scholarship on GenAI in engineering workplaces. | Americas | Technology | 2
Morris & Malady (2025) [42] | LIS Education | Perspectives: Faculty and student views on integrating GenAI in LIS. | Americas | Information | 1
O’Dea et al. (2024) [43] | Higher Education | Comparative: Factors affecting student GenAI literacy in UK vs. Hong Kong. | Asia | Education | 1
Ognibene et al. (2025) [14] | EdTech | Assessment: AI-driven assessment for LLM prompting skills. | Europe | Education | 4
Ou et al. (2024) [15] | Doctoral Writing | Academic Writing: Cultivating “Critical GAI Literacy” in doctoral writing. | Europe | Education | 1
Ru & Tang (2025) [44] | Academic Libraries | Content Analysis: US Academic Libraries’ promotion of AI literacy (LibGuides). | Americas | Information | 1
Sengul et al. (2025) [45] | Nursing Education | Skill Development: Effect of AI literacy on self-directed learning skills. | Europe | Health | 1
Shimizu et al. (2023) [46] | Medical Education | Curriculum Reform: Strategies to address GenAI impact in med ed. | Asia | Health | 1
Simms (2025) [5] | Nursing Education | Call to Action: Urgency of integrating GenAI literacy in nursing curriculum. | Americas | Health | 1
Sullivan et al. (2024) [10] | Higher Education | Workshop/Impact: Impact of a single workshop on student confidence/literacy. | Europe | Education | 1
Tomlinson et al. (2025) [47] | Nursing Education | Curriculum Design: Course-wide approach to building GenAI literacy. | Americas | Health | 2
Wang et al. (2025) [48] | Biomedical Eng. | Module Design: Learning module for GenAI literacy in engineering classroom. | Asia | Technology | 4
Zhang D. et al. (2025) [49] | Postgraduate/L2 | Writing/Reflection: Use of reflection journals for literacy development. | Asia | Education | 1
Zhao et al. (2024) [9] | Higher Education | Writing Practices: ChatGPT’s impact on digitization of writing. | Europe | Technology | 2
Zhu et al. (2025) [50] | Nursing Students | Behavioral Intention: Determinants influencing student nurses’ use of GenAI. | Asia | Health | 3
Table 3. Distribution of included studies (N = 40) by topic, geographic region, and discipline.
Topic | Region | Education | Technology | Health | Information | Business | Ethics | Total
Topic 1: Ethical Foundation (n = 16) | Americas | - | - | 2.5% | 7.5% | - | - | 10.0%
 | Asia | 12.5% | - | 2.5% | 2.5% | - | - | 17.5%
 | Europe | 7.5% | - | 2.5% | - | - | - | 10.0%
 | Global | 2.5% | - | - | - | - | - | 2.5%
 | Subtotal | 22.5% | - | 7.5% | 10.0% | - | - | 40.0%
Topic 2: Educational Use (n = 13) | Americas | - | 5.0% | 2.5% | - | - | - | 7.5%
 | Asia | 7.5% | - | 2.5% | - | - | 2.5% | 12.5%
 | Europe | 2.5% | 5.0% | - | 2.5% | 2.5% | - | 12.5%
 | Subtotal | 10.0% | 10.0% | 5.0% | 2.5% | 2.5% | 2.5% | 32.5%
Topic 3: Adoption and Interaction (n = 5) | Americas | - | - | - | - | - | - | -
 | Asia | 5.0% | - | 2.5% | - | 2.5% | - | 10.0%
 | Europe | 2.5% | - | - | - | - | - | 2.5%
 | Subtotal | 7.5% | - | 2.5% | - | 2.5% | - | 12.5%
Topic 4: Evaluation (n = 6) | Americas | 2.5% | - | - | - | - | - | 2.5%
 | Asia | 2.5% | 5.0% | - | - | 2.5% | - | 10.0%
 | Europe | 2.5% | - | - | - | - | - | 2.5%
 | Subtotal | 7.5% | 5.0% | - | - | 2.5% | - | 15.0%
TOTAL | | 47.5% | 15.0% | 15.0% | 12.5% | 7.5% | 2.5% | 100.0%
Note: “-” indicates no studies.