1. Introduction
The publication of scientific research is the result of a collective process involving various actors, including researchers, editors-in-chief, editorial committees, and peer reviewers. All of them share the responsibility of ensuring that the knowledge disseminated is reliable and relevant to the academic community (Bhavsar et al., 2025; Morales & Jaimes, 2024). Despite this, the decision to publish an article does not depend solely on the individual will of one or more of these participants; rather, it is mediated by compliance with guidelines and policies defined by journals and publishers. These guidelines and policies govern the evaluation and acceptance of manuscripts, seeking to safeguard the integrity, transparency, and trustworthiness of the scientific record (Bhavsar et al., 2025). As Fonseca-Mora and Aguaded (2014) point out, editorial policies play a central role in establishing criteria for the admission of knowledge and in promoting responsible behavior among the actors involved.
In this scenario, the emergence of artificial intelligence and, in particular, of chatbots and large language models (LLMs) has introduced significant changes in the processes of production, evaluation, and dissemination of scientific knowledge, generating a growing debate about their limits and acceptable uses in academic writing (Vuong et al., 2023; Yoo, 2025; Z. Wang & Gong, 2026). One of the most notable developments has been the substantial increase in research explicitly mentioning ChatGPT (Van Noorden & Perkel, 2023; Nandi et al., 2025; S. J. Kim, 2024).
These tools have demonstrated the ability to generate, correct, and synthesize scientific texts with high levels of coherence, which has accelerated their adoption by researchers and students (Lund & Wang, 2023; Yoo, 2025). However, their incorporation is not neutral, as it places strain on key principles of scientific communication, particularly those related to authorship, originality, and academic responsibility—issues that become critical when considering whether a system such as ChatGPT can be regarded as an “author” in the strict sense (Lund & Naheem, 2024; Teixeira da Silva, 2023). The rapid expansion of these practices has pressured publishers and journals to update their policies, incorporating specific guidelines on the use of AI in manuscript preparation and, in some cases, in peer review processes (El Harrath, 2025; Tlili et al., 2025; Bhavsar et al., 2025).
The widespread adoption of these tools has also raised concerns about their effects on the dynamics of scientific production, as they facilitate the rapid generation of texts with increasingly homogeneous styles and reduced traceability of human intellectual contribution (Vuong et al., 2023; Lindebaum & Fleming, 2024). In parallel, there has been sustained growth in publications that explicitly refer to ChatGPT, suggesting that the phenomenon is no longer experimental but rather widespread and cross-cutting across different fields (Nandi et al., 2025). This process poses clear risks to the quality of academic debate, insofar as it may reinforce a logic of productivity based on volume and weaken reflexivity, scientific responsibility, and the sense of “original scholarship” (Lindebaum & Fleming, 2024).
In this context, recent literature agrees that, although artificial intelligence tools may contribute to improving certain formal aspects of manuscripts, their use entails substantive risks, as these systems are prone to generating erroneous information, reproducing content without proper attribution, and facilitating practices akin to plagiarism, in addition to lacking the ability to assume responsibility for the content produced (Dergaa et al., 2023; Kooli, 2023; Lubowitz, 2023; Thorp, 2023; Van Dis et al., 2023).
Faced with this scenario, editorial boards confront a complex dilemma: how to leverage the technological benefits of AI without compromising academic integrity, particularly when the actual capacity for oversight and policy enforcement may be limited (Tlili et al., 2025; Lindebaum & Fleming, 2024). Furthermore, AI-generated text detection systems present significant limitations and do not guarantee consistent results, thereby increasing editorial uncertainty (Guleria et al., 2023). Along these lines, it has been observed that ChatGPT-generated content has already begun to infiltrate published articles, even in leading journals, confirming that addressing this issue cannot rely solely on automated control tools but instead requires explicit editorial criteria and verifiable transparency practices (Strzelecki, 2024).
The cross-cutting nature of the debate on the use of artificial intelligence in scientific research and publication is reflected in the diversity of actors that have begun to formulate specific guidelines and orientations, ranging from organizations linked to editorial ethics to large scientific publishers that have progressively adjusted their policies in recent years (Bhavsar et al., 2025). However, the coexistence of multiple regulatory frameworks and the heterogeneity of criteria regarding scope, level of detail, and transparency requirements have led to a fragmented scenario in which editorial guidelines are not always clear or consistent across journals and disciplines (Bhavsar et al., 2025; El Harrath, 2025). In this context, international organizations such as the World Association of Medical Editors (WAME), the International Committee of Medical Journal Editors (ICMJE), and the Committee on Publication Ethics (COPE), as well as regional databases such as SciELO, publishing industry associations such as the International Association of Scientific, Technical & Medical Publishers (STM), and large publishers and journals, have established principles aimed at safeguarding transparency, authorship responsibility, and the integrity of the editorial process when artificial intelligence tools are used.
The aim of this article is to analyze the current state of guidelines regulating the use of artificial intelligence tools by authors in Latin American journals indexed in Scopus and classified according to the Scimago Journal Rank (SJR). Examining the level of development of these policies provides evidence of the regulatory maturity of the regional publishing system, as well as its alignment with global publishing trends. This analysis is particularly relevant given the diversity, expansion, and progressive consolidation of the Latin American scientific journal ecosystem, as well as the challenges it faces in terms of editorial governance, standardization of practices, and the safeguarding of scientific integrity.
The relevance of this study is reinforced in a context in which the integrity and credibility of science are subject to increasingly intense scrutiny, both publicly and academically (Ehsan & Raza, 2025). In parallel, several studies have documented an increase in retractions linked to the publication of automatically generated content or content lacking scientific traceability, highlighting tensions and limitations in current editorial control mechanisms (Lei et al., 2024; Martínez-Rojas & Zahn-Muñoz, 2025). This scenario underscores the importance of analyzing how these challenges are being addressed in the Latin American region.
The contribution of this study lies in addressing a significant empirical gap, as most of the literature on editorial policies and the use of artificial intelligence focuses on high-impact publishers and journals, primarily located in Europe and North America. The limited evidence available for the Latin American context restricts our understanding of how journals in the region are responding to these challenges. The analysis presented here identifies progress, tensions, and critical areas for improvement in editorial guidelines on AI, providing elements to strengthen editorial governance and to foster more transparent practices in scientific communication.
To address this objective, the study is guided by the following research questions:
- (RQ1) To what extent have Latin American journals indexed in Scopus and classified according to the Scimago Journal Rank (SJR) incorporated explicit guidelines on the use of artificial intelligence?
- (RQ2) What are the main characteristics, scope, and regulatory approaches of these guidelines?
- (RQ3) Are there significant differences in the adoption of AI guidelines according to structural variables such as journal quartile, country, and field of knowledge?
Based on the existing literature, it is expected that the adoption of AI-related editorial guidelines in the region remains uneven and incipient, with higher levels of implementation in journals with greater visibility and impact.
The article is structured as follows. Following the introduction, the state of the art regarding the implications, risks, and strengths of editorial guidelines on the use of artificial intelligence is presented. The methodological section then details the approach adopted, as well as the data collection and analysis procedures. The study results are then presented, followed by the discussion and conclusions.
2. Literature Review
2.1. Editorial Governance and Scientific Policies as Regulatory Mechanisms
Scientific communication develops through a set of institutionalized norms, procedures, and practices that guide the production, validation, and dissemination of knowledge. In this context, editorial policies play a key role as governance mechanisms, as they establish the criteria by which knowledge is accepted, define authorship frameworks, delimit responsibilities, and structure processes of evaluation and editorial decision-making (Fonseca-Mora & Aguaded, 2014). Their function is not limited to operational aspects but also fulfills a normative role by setting standards of quality, integrity, and transparency that sustain the credibility of the scientific record.
From a governance perspective, editorial policies can be conceived as coordination and control mechanisms through which scientific journals respond to transformations in the scientific and technological environment. In contexts of rapid change, these policies acquire particular relevance as instruments of institutional adaptation, translating general principles of academic integrity into concrete and enforceable rules for authors, reviewers, and editors (Bhavsar et al., 2025). Under this approach, the analysis of editorial guidelines makes it possible not only to identify their explicit normative content but also to evaluate the degree of maturity, internal coherence, and regulatory capacity of publishing systems.
In recent years, this interest has materialized in sustained growth in empirical studies aimed at examining the degree of implementation of guidelines on the use of artificial intelligence in scientific journals and publishers. The available literature reveals heterogeneous approaches that vary according to the disciplines and publishing contexts analyzed. These include research focused on music journals (Tortop, 2025), studies examining higher education institutions and their positions on the use of AI in academic research (Rana, 2025), as well as reviews of guidelines issued by international organizations and journals linked to academic societies (Huh, 2023).
At the publishing level, notable contributions include AI policy audits in publishers belonging to the International Association of Scientific, Technical and Medical Publishers (STM) (Bhavsar et al., 2025), studies focused on bioethics and health humanities journals (Bobier et al., 2025), analyses of major global scientific publishers (De Veiga, 2025), and comparative policy reviews of leading journals in specific disciplines, such as applied linguistics (El Harrath, 2025), library and information science (E. Kim, 2024), and dentistry journals indexed in the Web of Science (Queiroz et al., 2025). Taken together, these studies demonstrate an uneven adoption of AI guidelines and reinforce the need for comparative analyses to understand regulatory variation across disciplines, publishers, and regions.
2.2. Generative Artificial Intelligence and Academic Writing
Large language models (LLMs), such as ChatGPT and other similar tools, have begun to be progressively incorporated into different phases of the academic writing process, particularly in the drafting, editing, and synthesis of scientific texts (Yoo, 2025). However, their relevance transcends the technical capabilities they offer, as, from an editorial perspective, generative artificial intelligence has become an emerging object of regulation. In this regard, its use poses challenges that strain and call into question the traditional normative frameworks upon which scientific publishing has historically been structured.
Recent literature converges on the idea that the use of artificial intelligence in academic writing cannot be reduced to a merely instrumental function. Rather, its incorporation introduces substantive ambiguities concerning authorship—given that large language models lack personhood (Montemayor, 2023)—as well as issues related to content originality and the attribution of intellectual responsibility in scientific production (Pigola et al., 2023; Sampaio et al., 2024a). In this scenario, particularly when clearly defined editorial criteria are lacking, the boundary between legitimate technical assistance and the substantive generation of content tends to become blurred. This lack of clarity hinders both the evaluation of human intellectual contributions and the traceability of the processes involved in the production of scientific knowledge.
In this context, editorial guidelines play a key role in establishing acceptable thresholds of use, defining disclosure obligations, and specifying the conditions under which the use of AI tools is compatible with the principles of scientific communication. The heterogeneity observed in these guidelines reflects different approaches to editorial governance, ranging from minimal and general regulations to more detailed and restrictive policies (El Harrath, 2025).
2.3. Authorship, Responsibility, and Transparency in the Use of AI
The notion of authorship constitutes one of the foundational pillars of the scientific publishing system, insofar as it links the attribution of academic merit to intellectual responsibility for published content. From an ethical and regulatory standpoint, authorship entails not only the production of knowledge but also the capacity to assume accountability for it—answering for errors, omissions, or improper practices. In this sense, authorship presupposes agency and moral responsibility, conditions that artificial intelligence systems do not fulfill (Teixeira da Silva, 2023).
This limitation is consistent with Montemayor’s (2023) argument that artificial intelligence, lacking personhood and genuine agency, cannot be considered a moral subject capable of value alignment in the same way as human actors. Consequently, AI remains an “it” rather than a “who”: a tool that may assist in the production of scientific texts but cannot bear authorship, precisely because it is unable to assume responsibility or be held accountable for the knowledge it helps generate.
In response to these concerns, international editorial ethics bodies—such as the World Association of Medical Editors (WAME), the Committee on Publication Ethics (COPE), and the International Committee of Medical Journal Editors (ICMJE)—have established guidelines that explicitly exclude AI systems from authorship and reaffirm the full responsibility of human authors for published content.
Similarly, the Open Access Scholarly Publishers Association (OASPA) and the International Organization for Standardization (ISO) emphasize transparency, accountability, and responsible AI use, reinforcing that any use of these tools must be clearly disclosed and remain under human oversight (COPE, 2023; ICMJE, 2023; WAME, 2023).
However, empirical evidence indicates that the degree of adoption and explicit communication of these requirements varies considerably among publishers and journals, generating scenarios of normative ambiguity that hinder their uniform and consistent application (Bhavsar et al., 2025). This variability reinforces the need to analyze editorial policies not only in terms of their formal existence but also with regard to their clarity, scope, and operational effectiveness.
2.4. Emerging Risks: Productivism, Homogenization, and Scientific Fraud
From an editorial governance perspective, the adoption of generative artificial intelligence tools in the absence of clear regulatory frameworks, or under insufficient regulatory conditions, may lead to risks that compromise the integrity of the scientific publishing process. Several studies have identified technical limitations and risks associated with their use in the production of academic articles, including concerns regarding content originality, the emergence of new forms of plagiarism, and the weakening of the principles of academic transparency (Tlili et al., 2025; Yoo, 2025). In this regard, it has been observed that the generation of texts with limited or inadequate human intervention can negatively affect the reliability of the scientific record.
Likewise, multiple studies have warned that generative models are prone to producing erroneous information or so-called “hallucinations,” a phenomenon that has been empirically documented (Rana, 2025; Guleria et al., 2023). These hallucinations may manifest in practices such as the generation of misleading or entirely fabricated citations, the invention of non-existent authors or sources, and the production of falsified or unverifiable data. In addition, concerns have been raised regarding biases embedded in these models, given that AI systems lack the critical capacity to discriminate between high- and low-quality information, fictitious data, or biased interpretations, thereby contributing to the reproduction of errors and distortions in scientific texts (Lindebaum & Fleming, 2024).
Several studies have also documented the presence of automatically generated content in scientific journals, as well as an increase in retractions associated with texts produced without clear traceability or containing nonexistent references, highlighting the limitations of editorial systems in responding to these emerging practices (Strzelecki, 2024; Lei et al., 2024). In this context, concerns have been raised about forms of “high-tech plagiarism,” in which AI tools paraphrase existing works and present them as new, without critical reflection or original contribution (Guleria et al., 2023), as well as the generation of fictitious bibliographic references, as documented in studies on ChatGPT and similar tools (Giray, 2024; Alkaissi & McFarlane, 2023).
Furthermore, several authors have argued that excessive reliance on generative technologies may progressively erode human agency in knowledge production processes. This risk is compounded by the increasing homogenization of scientific discourse and the potential trivialization of intellectual work, insofar as the critical and creative contributions of authors become diluted (Lindebaum & Fleming, 2024; Mezzadri, 2025). Taken together, these risks reinforce the argument that managing the use of artificial intelligence in scientific publishing cannot rely solely on automated detection tools, but instead requires clear regulatory frameworks and explicit editorial criteria that complement—rather than replace—human judgment and expertise in scientific evaluation. Current strategies include mandatory disclosure of AI use in manuscript preparation, strengthened authorship criteria and accountability requirements, the use of AI-detection and plagiarism-screening tools, and the development of editorial guidelines for reviewers and editors (Tlili et al., 2025).
3. Materials and Methods
The study adopts a predominantly quantitative methodological approach, complemented by a descriptive documentary analysis. This strategy enabled a systematic examination of the presence, characteristics, and scope of editorial guidelines on the use of artificial intelligence tools, as well as an analysis of patterns and differences among journals according to structural variables. The documentary analysis focused on publicly available editorial policies (e.g., author guidelines, ethical statements, and submission instructions), which were systematically reviewed and coded using a predefined analytical framework.
The study population consisted of 1119 scientific journals from 17 Latin American countries indexed in the Scimago Journal Rank (SJR) 2025. Data collection was conducted between July and October 2025. Journals were selected based on their inclusion in the SJR database, and no additional sampling was applied. For each journal, variables such as country of origin, subject area, quartile ranking, publisher type, and the explicit presence or absence of AI-related editorial guidelines were recorded.
Data were analyzed using descriptive statistics to identify frequencies and distributions, as well as comparative analyses to explore differences across journal characteristics. To ensure consistency, the coding process followed standardized criteria, and ambiguous cases were resolved through iterative review.
3.1. Data Collection and Systematization
In the first stage, the official Scimago Journal Rank database for 2025 was downloaded, from which all indexed Latin American journals were identified. From this database, the following structural variables were extracted: journal name, ISSN, country of publication, publisher, quartile, SJR score, CiteScore, and areas and categories of knowledge. Subsequently, a systematic review of each journal’s official website was conducted to identify the existence of rules, policies, or guidelines regarding the use of artificial intelligence tools. When available, the review included the following sections: editorial policy, journal presentation, ethical policies or best practices, instructions for authors, and guidelines for reviewers and editors.
During data collection, challenges related to link rot and inaccessible webpages were identified. To address these issues, alternative strategies were employed, including accessing archived versions of websites (e.g., via web archives), navigating through publisher platforms, and consulting updated links provided in indexing databases. When no accessible or verifiable information could be retrieved after these steps, cases were recorded as missing data and excluded from specific analyses where appropriate. The collected information was then systematized in a database specifically designed for this study using Microsoft Excel. To ensure consistency, the coding process followed standardized criteria and iterative review procedures among researchers, including cross-checking and the discussion of ambiguous cases. However, formal inter-coder reliability measures (e.g., Cohen’s kappa) were not calculated, which represents a limitation of the study.
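By way of illustration, the extraction and filtering step can be sketched programmatically. The following is a minimal Python sketch, not the procedure actually used (the study systematized its data in Microsoft Excel): the file name, the Region column, and the other column names follow the public SJR CSV export but are assumptions to be verified against the actual download.

```python
import pandas as pd

# Load the public SJR export (semicolon-delimited, comma decimals).
# File name and column names are illustrative assumptions.
sjr = pd.read_csv("scimagojr_2025.csv", sep=";", decimal=",")

# Keep Latin American journals only; recent exports include a Region
# column (otherwise, filter on a list of the 17 countries).
latam = sjr[sjr["Region"] == "Latin America"].copy()

# Structural variables retained for coding, mirroring Section 3.1.
cols = ["Title", "Issn", "Country", "Publisher",
        "SJR Best Quartile", "SJR", "Categories", "Areas"]
latam = latam[cols]

# Empty fields to be completed during the manual website review.
latam["has_ai_guideline"] = pd.NA  # explicit AI guideline present?
latam["permits_ai"] = pd.NA        # permits or prohibits AI use?

latam.to_excel("latam_journals_coding.xlsx", index=False)
print(f"{len(latam)} Latin American journals retained")
```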
3.2. Coding Variables and Criteria
As an initial coding criterion, it was determined whether each journal included any explicit reference to the use of artificial intelligence in its editorial policies. In cases where such an explicit reference was identified, the coding process recorded whether the journal permitted or prohibited the use of artificial intelligence tools by authors. For journals that permitted the use of AI, the following variables were additionally recorded: (i) the section of the website in which the guideline was located; (ii) permitted uses; (iii) prohibited uses; (iv) requirements for authors to declare the use of AI; (v) adherence to external policies or guidelines (e.g., COPE, SciELO, WAME, ICMJE, Open Access Scholarly Publishers Association, or others); (vi) differentiated regulations according to role (authors, reviewers, and editors); (vii) declaration of the use of software for detecting AI-generated content; and (viii) qualitative observations relevant to the interpretation of the policies.
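To make the coding scheme concrete, the record for a single journal can be represented as a simple data structure. The sketch below is our own illustration; the field names are hypothetical and simply mirror variables (i) through (viii) above.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AIGuidelineRecord:
    """One coded journal; fields mirror variables (i)-(viii)."""
    issn: str
    mentions_ai: bool                       # explicit AI reference?
    permits_ai: Optional[bool] = None       # permitted vs. prohibited
    website_section: Optional[str] = None   # (i) location of guideline
    permitted_uses: list[str] = field(default_factory=list)    # (ii)
    prohibited_uses: list[str] = field(default_factory=list)   # (iii)
    requires_disclosure: Optional[bool] = None                  # (iv)
    external_policies: list[str] = field(default_factory=list)  # (v)
    role_specific_rules: dict[str, bool] = field(default_factory=dict)  # (vi)
    declares_detection_software: Optional[bool] = None           # (vii)
    notes: str = ""                                              # (viii)

# Example record for a hypothetical journal.
record = AIGuidelineRecord(
    issn="0000-0000",
    mentions_ai=True,
    permits_ai=True,
    website_section="instructions for authors",
    permitted_uses=["grammar and style correction"],
    prohibited_uses=["content generation"],
    requires_disclosure=True,
    external_policies=["COPE"],
    role_specific_rules={"reviewers": True, "editors": False},
)
```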
3.3. Data Analysis
Data analysis was conducted in two stages. In the first stage, descriptive statistical techniques—absolute and relative frequencies—were applied to characterize the set of journals and to describe the distribution of the variables analyzed.
In the second stage, inferential statistical techniques were used to explore the existence of statistically significant differences in the adoption of AI guidelines according to journal quartile, country of publication, and field of knowledge.
Statistical analyses were performed using SPSS version 21.
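Although the analyses were run in SPSS, both stages can be reproduced with open-source tooling. The sketch below, in Python with pandas and SciPy, computes adoption frequencies, a chi-square test of independence for adoption by quartile (an assumed choice, since the text does not name the test applied to the categorical comparisons), and the Mann–Whitney U comparison reported in the Results section; the input file and column names carry over from the earlier sketch and remain hypothetical.

```python
import pandas as pd
from scipy.stats import chi2_contingency, mannwhitneyu

df = pd.read_excel("latam_journals_coding.xlsx")  # coded dataset (hypothetical file)

# Stage 1: descriptive frequencies of explicit AI-guideline adoption.
print(df["has_ai_guideline"].value_counts(normalize=True).round(3))

# Stage 2a: chi-square test of independence between SJR quartile and
# adoption (assumed procedure; the paper does not name the test used).
table = pd.crosstab(df["SJR Best Quartile"], df["has_ai_guideline"])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")

# Stage 2b: Mann-Whitney U comparing SJR scores of journals with and
# without explicit guidelines, as reported in the Results section.
has = df["has_ai_guideline"].astype("boolean").fillna(False)
with_g = df.loc[has, "SJR"].dropna()
without_g = df.loc[~has, "SJR"].dropna()
u, p = mannwhitneyu(with_g, without_g, alternative="two-sided")
print(f"U = {u:.0f}, p = {p:.4g}")
```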
3.4. Methodological Considerations
Although the research design allowed for broad coverage of the Latin American publishing ecosystem indexed in SJR, the results should be interpreted with caution, as the analysis was based exclusively on information publicly available on journal websites at the time of data collection. Consequently, the absence of explicit guidelines does not necessarily imply the absence of internal policies, but rather a lack of publication or visibility in official editorial communication channels.
4. Results
Figure 1 shows that, of the 1119 Latin American journals indexed in Scopus and classified according to the Scimago Journal Rank (SJR), 72.8% (815 journals) do not include explicit guidelines regarding the use of AI on their websites, whereas 27.2% (304 journals) explicitly address the use of AI. This finding indicates that the vast majority of journals have not yet formally incorporated this issue into their published editorial policies. Among the journals that have established AI guidelines, 97.4% permit the use of artificial intelligence tools, while 2.6% (8 journals) explicitly prohibit their use.
Within the subset of journals that have established guidelines and permit the use of AI (n = 296), variability is observed in the degree of specificity with which usage scenarios for authors are defined. Some journals explicitly communicate the conditions under which AI tools may be used, whereas others provide only general statements or make no specific reference to usage limits.
Figure 2 shows that 54.4% of the 296 journals explicitly specify the permitted uses of AI tools, while 34.1% specify the uses that are prohibited for authors. Permitted uses are thus defined more clearly than prohibited ones.
Table 1 presents a qualitative analysis that synthesizes the criteria established by journals for authors’ use of AI into five categories, distinguishing among permitted uses, prohibited uses, and areas of divergence. Overall, journals tend to accept AI as a research support tool while restricting functions that compromise human control over intellectual processes.
Writing support: AI is permitted for tasks such as spelling, grammar, and style correction, as well as for improving text clarity and readability. Its use is prohibited for content generation or for writing entire sections. However, some variation exists among journals, as some allow the creation of abstracts, whereas others maintain strict restrictions.
Reference management: AI is permitted as an auxiliary tool for searching for references and standardizing citations. By contrast, the invention of references or their inclusion without prior verification is expressly prohibited.
Methodological support: In this category, regulations are more restrictive. While the use of AI is accepted for organizing chapters, assisting with technical processes, or supporting preliminary structuring, autonomous research design, hypothesis formulation, and data manipulation are prohibited. Areas of divergence are primarily concentrated in database creation, data analysis, and the production of primary data.
Production of graphic and visual elements: Basic optimization of tables or figures and the standardization of formats are permitted. However, altering scientific images and the use of AI-generated images or videos are prohibited.
Research development: In this category, greater heterogeneity is observed in editorial guidelines. While some journals accept the use of AI for technical support or reproducible analyses, others prohibit it, particularly when it involves the autonomous generation of results or the drawing of conclusions without human review. Areas of divergence focus on facilitating data processing and synthesis.
In contrast to the guidelines directed at authors, a smaller subset of journals provides guidelines for editorial roles. In this context, Figure 3 shows that, among the journals that permit the use of AI (n = 296), 24.3% include guidelines governing the use of AI by reviewers, while 16.9% do so for editors.
The qualitative analysis of guidelines for reviewers and editors reveals a high degree of convergence around regulations grounded in ethics, transparency, confidentiality, and the preservation of human control and critical judgment (Table 2). Both roles share core priorities, such as safeguarding confidentiality, copyright, and manuscript integrity, and most journals explicitly prohibit the uploading, processing, or evaluation of content using AI tools.
In both cases, the tasks that may be supported by AI are auxiliary in nature, including style review, plagiarism detection, assistance in drafting evaluation reports, and support for formatting and metadata management. Under no circumstances are these tools intended to replace human critical judgment.
Table 3 presents the distribution of website sections in which journals include their guidelines on the use of artificial intelligence (AI). The largest proportion of journals include their guidelines within their editorial policies (36.2%). Second, sections devoted to ethics policies, best practices, or research integrity account for 19.4%, reflecting a close association between AI regulation and editorial ethical principles. Third, 18.8% of journals include a dedicated section specifically addressing AI policies or guidelines.
Of the 304 journals with AI guidelines, only 19 (6.25%) report using software to detect content generated by AI tools (Figure 4). An even smaller number of journals specify which software or tools are employed. Among the tools reported, for example, Caderno CRH uses ZeroGPT; Revista Venezolana de Oncología uses SurgeGraph; and Revista Enfermagem reports the use of GPTZero, AI Text Classifier, AI Content Detector, and OpenAI Detector.
Table 4 presents the frequency and percentage of journals that permit the use of AI and details whether they require a declaration from authors and the manuscript section in which such a declaration is recommended.
Within the AI declaration category, 65.54% (194 journals) specify where the use of AI should be declared. By contrast, 27.03% (80 journals) indicate that a declaration is required but do not specify the section in which it should be included, while 7.43% (22 journals) do not mention the need to declare the use of AI in their guidelines. Overall, 92.6% (274 journals) require some form of declaration regarding the use of AI tools by authors.
With respect to the sections in which journals specify this declaration (n = 194), the majority recommend including the mention of AI use in the section detailing the methods (47.9%). This is followed by declarations in a separate section within the manuscript (38.1%), generally suggested to appear before the references. In addition, 27.8% of journals request that the declaration be included in the cover letter or at the time of manuscript submission. To a lesser extent, declarations appear in the acknowledgments (12.9%), the abstract (5.2%), or footnotes (4.6%), while 7.7% group them under other categories (e.g., authors’ contributions, acknowledgments of assistance, introduction, side notes, appendices, references, or any other format deemed appropriate by the authors).
Figure 5 presents the percentage of adherence to artificial intelligence (AI) policies among Latin American journals, including policies issued by organizations, publishers, or journals. Overall, 54.1% (160 journals) of the analyzed entities adhere to AI policies, whereas 45.9% (136 journals) do not.
Table 5 shows that the most frequently cited AI policies are those issued by the Committee on Publication Ethics (COPE) (43.8%), followed by SciELO (24.4%) and the World Association of Medical Editors (WAME) (20.6%). Less frequently cited are the International Committee of Medical Journal Editors (ICMJE) (16.3%) and Elsevier’s AI policy (PERK) (15.6%).
In addition, 8.8% of the adopted policies or guidelines fall under the category of “Other,” which includes policies from a variety of sources, such as IEEE, Nature, PLOS, Emerald, Taylor & Francis, BioMed Central, Science, and the U.S. AI Committee. Furthermore, some journals adopt scientific articles as reference guidelines for the ethical and responsible use of AI, including those by Penabad-Camacho et al. (2024), Peres (2024), Sampaio et al. (2024a, 2024b), and Pigola et al. (2023).
Table 6 shows a decreasing trend in the proportion of journals that incorporate guidelines on the use of AI as the ranking quartile declines. In the top quartile (Q1), 32.3% of journals have such guidelines, whereas the proportion decreases to 29.9% in Q2, 28.3% in Q3, and 24.0% in Q4.
Overall, these results suggest that journals with greater visibility and impact tend to integrate guidelines related to the use of AI more frequently, which may reflect a stronger emphasis on transparency, traceability, and scientific integrity.
The results of the Mann–Whitney U test indicate statistically significant differences between journals with and without explicit guidelines on the use of artificial intelligence across all three indicators analyzed (p < 0.001).
These findings suggest that the adoption of guidelines on the use of AI is associated with journals exhibiting greater impact and institutional maturity. However, this association does not imply causality; rather, it likely reflects the greater organizational and regulatory capacity of journals that are better positioned to respond to emerging challenges in scientific communication (Table 7).
Table 8 presents the number of journals per country and the percentage that have adopted AI guidelines. Brazil has the largest number of journals in the region and the highest proportion of journals with such guidelines (35.0%). Colombia follows, with 31.7% of journals reporting AI guidelines, and is also the second country in terms of the number of journals. Ecuador stands out with a proportion of 33.3%; however, it has only nine journals indexed in Scopus and classified according to the Scimago Journal Rank (SJR).
Chile (12.8%), Argentina (17.9%), and Mexico (20.2%) have a substantial number of journals in the region; however, their levels of AI guideline adoption are comparatively lower.
5. Discussion
The results indicate that the adoption of guidelines regulating the use of artificial intelligence tools in Latin American scientific journals indexed in Scopus and classified according to the Scimago Journal Rank (SJR) remains at an early stage, as fewer than one third of the journals analyzed explicitly report guidelines in this area. This regulatory lag is particularly significant given that generative artificial intelligence not only intervenes in instrumental writing tasks but also places pressure on structural principles of the publishing system, such as academic integrity, authorial responsibility, and the transparency of the publication process.
These findings are consistent with previous studies reporting a gradual adoption of AI-related guidelines. Bhavsar et al. (2025), for example, found that 34.6% of academic publishers in the scientific, technical, and medical (STM) sector had publicly available guidelines on the use of AI chatbots. Similarly, Bobier et al. (2025) identified that only 16% of a sample of 50 bioethics and health humanities journals had clear AI guidelines. By contrast, Queiroz et al. (2025) reported a substantially higher level of adoption, noting that 71% of dentistry journals indexed in the Web of Science had established AI policies. Among journals that have incorporated guidelines, a predominantly permissive stance is observed, suggesting that AI is primarily conceived as a tool to support research and manuscript preparation. In this sense, the guidelines appear to serve a regulatory rather than a prohibitive function.
This regulatory orientation can be justified by the rapid and irreversible integration of AI technologies into scientific workflows, which renders outright prohibition both impractical and potentially counterproductive. Rather than restricting technological progress, regulatory frameworks enable the establishment of clear boundaries, ensure transparency, and preserve accountability, thereby allowing innovation to coexist with the ethical and epistemic standards of scientific production.

However, this regulatory openness coexists with a significant degree of uncertainty, as authorization of AI use is not always accompanied by a clear definition of the boundaries between legitimate technical assistance and the substantive generation of academic content. This situation gives rise to so-called “gray areas” in editorial governance. Categories such as methodological support, data analysis, and abstract writing reveal normative ambiguities that may lead to divergent interpretations by authors, thereby weakening the regulatory effectiveness of these guidelines, particularly in contexts where editorial oversight capacities are limited.
A second relevant finding concerns the marked asymmetry in regulatory coverage across different roles in the editorial process. While most guidelines focus on regulating authors’ use of AI, policies applicable to reviewers and editors remain considerably less developed, despite their central role in evaluation and editorial decision-making. From an editorial governance perspective, this gap is critical, as the integrity of the publication process depends not only on authorial conduct but also on practices related to peer review, confidentiality, and editorial judgment. In this sense, AI policies for reviewers are essential within the scholarly value chain, as they help safeguard the confidentiality of manuscripts, prevent the unauthorized use or leakage of unpublished data into AI systems, and preserve the independence and critical rigor of peer review. These findings are consistent with those reported by Queiroz et al. (2025), who found that in dentistry journals indexed in the Web of Science, policies directed at reviewers (88.4%) and editors (46.4%) were less frequent than those directed at authors.
From an ethical standpoint, the use of artificial intelligence in peer review and editorial processes raises significant concerns regarding the handling of confidential materials. The potential uploading or processing of unpublished content in external AI systems entails risks related to data leakage, unauthorized use, and loss of control over sensitive scientific information. In this context, the absence of explicit editorial guidelines governing these practices represents a significant weakness, underscoring the need to establish clear procedural rules that safeguard confidentiality, ensure responsible use, and preserve the integrity of the evaluation process.
In this context, the guidelines identified tend to reaffirm that scientific evaluation and editorial decision-making remain fundamentally human activities, grounded in critical judgment, ethical principles, confidentiality, and academic responsibility. This orientation is consistent with the recommendations issued by international organizations and major scientific publishers.
The low proportion of journals reporting the use of AI-generated content detection tools suggests that the regional editorial response relies more heavily on regulatory transparency mechanisms—particularly requirements for the disclosure of AI use—than on automated detection strategies. This approach is consistent with recent literature that highlights the technical limitations of AI detection tools and their relatively high false-positive rates (J. Wang et al., 2023). Yoo (2025), for instance, argues that the critical issue is not the identification of linguistic patterns that may indicate AI authorship, but rather the assessment of whether the core ideas of a manuscript were generated by human authors. From this perspective, the prevailing detection premise is conceptually flawed.
The analysis by quartile and impact indicators further shows that journals with greater visibility and impact tend to incorporate AI guidelines more frequently. This pattern suggests that the adoption of editorial policies in this area may be associated with higher levels of organizational maturity, editorial professionalization, more consolidated scientific ecosystems, and closer alignment with international standards, in line with previous findings (Bhavsar et al., 2025; E. Kim, 2024).
However, these results should be interpreted with caution. The observed differences across quartiles may not solely reflect variations in editorial maturity at the journal level, but could also be influenced by the role of large publishing houses that implement centralized and standardized editorial policies across their journal portfolios.
Finally, this study presents several limitations that should be considered when interpreting the results. Data collection and categorization were conducted manually, which may introduce a degree of bias and subjectivity. Moreover, the analysis relied exclusively on information publicly available on journals’ official websites at the time of data collection; therefore, the absence of explicit AI-related guidelines does not necessarily imply the absence of internal policies. The study was also limited to journals indexed in Scopus and classified according to the Scimago Journal Rank (SJR), which constrains the generalizability of the findings to journals included in other databases, such as SciELO or Latindex. Additionally, as the data were collected at a specific point in time, it was not possible to examine the evolution of editorial policies, a particularly relevant limitation in a rapidly changing regulatory landscape.
These findings should also be interpreted in light of methodological constraints related to the analytical strategy. The study adopts a primarily descriptive and univariate approach, which limits the ability to assess the combined effects of variables such as quartile, country, publisher type, and field of knowledge. Furthermore, variables such as publisher characteristics and disciplinary variation were not systematically controlled, which may influence the observed patterns and should be addressed in future research.
In light of the heterogeneity observed in editorial guidelines, an important implication of this study is the need to advance toward more standardized policy frameworks for the use of artificial intelligence in scientific publishing. The development of a model AI policy for Latin American journals—incorporating clear distinctions between permitted and prohibited uses, disclosure requirements, and role-specific responsibilities—could contribute to reducing normative ambiguity and strengthening editorial governance in the region.
6. Conclusions
The present study analyzed the current state of editorial guidelines on the use of artificial intelligence tools in Latin American scientific journals indexed in the Scimago Journal Rank, providing empirical evidence in an area that has been scarcely explored at the regional level, as previous research has largely focused on publishers, individual journals, or specific fields of knowledge. The results indicate that, although the debate surrounding the use of AI in scientific publishing is increasingly recognized, its translation into explicit editorial policies remains incipient and heterogeneous.
A key finding is the gap between the rapid incorporation of AI tools into scientific production—documented in multiple studies cited in this work—and the capacity of journals in the region to regulate their use in a clear, coherent, homogeneous, and transparent manner. A substantial number of journals lack explicit guidelines, and an even smaller proportion provide policies that clearly and precisely define permitted and prohibited uses.
Existing guidelines reflect an emerging consensus on excluding AI systems from authorship and preserving human intellectual responsibility, alongside a regulated openness to the use of these tools as forms of technical support. These shared principles are consistent with global frameworks for AI use, such as those promoted by COPE and WAME, among others. However, the persistence of gray areas regarding both permitted and prohibited uses, the limited regulation applicable to reviewers and editors, and the low adoption of declared mechanisms for detecting AI-generated content reveal significant weaknesses in the editorial governance of this phenomenon.
From an ethical perspective, the use of artificial intelligence in peer review and editorial processes raises concerns about the confidentiality of unpublished manuscripts. Uploading or processing manuscripts in external AI systems may expose sensitive scientific information to risks such as data leakage or unauthorized use. In the absence of clear editorial policies regulating these practices, it becomes necessary for journals to establish explicit guidelines that protect confidentiality and ensure the integrity of the evaluation process.
From an editorial governance perspective, the findings suggest that strengthening clear and accessible policies represents a growing challenge and should be considered a priority in the Latin American context. Advancing toward more detailed and homogeneous regulations would not only reduce ambiguities but also reinforce the integrity of scientific production and trust in scientific communication. In conclusion, this study contributes to clarifying the current state of editorial regulation concerning the use of artificial intelligence in Latin America, identifying progress, prevailing trends, and unresolved challenges that require attention in the short term. The results offer relevant insights for editors, editorial boards, reviewers, and authors interested in promoting responsible practices in a context of accelerated transformation of scientific production.