Article

Ethical Considerations for the Use of Artificial Intelligence in Linguistics Journal Publishing: Combining Hybrid Thematic Analysis and Critical Discourse Analysis

School of Foreign Studies, Northwestern Polytechnical University, Xi’an 710072, China
* Author to whom correspondence should be addressed.
Publications 2025, 13(4), 61; https://doi.org/10.3390/publications13040061
Submission received: 2 September 2025 / Revised: 17 November 2025 / Accepted: 24 November 2025 / Published: 25 November 2025

Abstract

The immense potential of artificial intelligence (AI) in academic journal publishing has significantly impacted scholarly communication between stakeholders, leading to increased research into ethical considerations for AI use in academic publishing. Due to the contextual nature of ethics and the ontological status of language as linguistics’ own object of inquiry, the conceptual framework and underlying ideologies of AI ethics in linguistics deserve attention. In this study, we address the call for these ethical considerations by combining a hybrid thematic analysis (HTA) of the ethical guidelines available on 144 Social Sciences Citation Index (SSCI) linguistics journals’ and 11 corresponding publishers’ websites as of 31 October 2025, and a critical discourse analysis (CDA) case study on Language Testing, a representative journal with self-developed AI ethical guidelines. Through the HTA, we identified seven themes: accountability, authorship, citation practices, copyright, long-term governance, human agency, and transparency. The role allocation analysis in the CDA demonstrated that the AI ethical guidelines independently established by the linguistics journal expand the scope of stakeholders to include the sources of research data and technology, covering the informed consent of research participants and the responsibilities of the AI tool operators. Moreover, AI tools are given a beneficialized role, suggesting a more technology-function-oriented perspective and reflecting deeper trust in AI’s involvement. Through these findings, our study contributes to the broader understanding of ethical governance in relation to AI usage in discipline-based communication, highlighting the need for a more dialogic and diverse framework to share responsibility among stakeholders to promote the ethical use of AI.

1. Introduction

The rapid development and emergence of large language models (LLMs) and generative artificial intelligence (AI) tools have transformed scholarly publishing, as they are able to play a significant role in the preparation of manuscripts or even in the knowledge production processes related to their publication (Garcia, 2025; Mututa & Tomaselli, 2025; Resnik & Hosseini, 2025). Publishers have begun releasing ethical guidelines for AI usage to regulate responsible conduct, aiming to protect research integrity and fairness in scholarly communication (Kim, 2024; Lund & Naheem, 2024; Resnik & Hosseini, 2025). However, it is critical to determine what constitutes these frameworks and what practices are required to ensure their realization (Jeon et al., 2025; Jobin et al., 2019), with related research being conducted across different academic disciplines (Bobier et al., 2025; Hosier & Cantwell-Jurkovic, 2025; Kuteeva & Andersson, 2024).
In the field of linguistics journal publishing, existing studies have explored researchers’ perspectives, practices, challenges, and ethical standards when using AI to write manuscripts and conduct research (Casal & Kessler, 2023; Farangi & Nejadghanbar, 2024; Kuteeva & Andersson, 2024). The need to explore these ethical frameworks and their practice specifically in linguistics has been widely discussed due to the field’s distinctive epistemological, methodological, and ideological characteristics, yet research in this area is still developing (Curry et al., 2025; Moorhouse et al., 2025). Therefore, we aim to explore the ethical frameworks of AI usage in linguistics journal publication and what roles stakeholders play in the discourse surrounding publishers’ AI guidelines. In this study, we seek to address the following two questions: (1) What themes emerge from publishers’ ethical guidelines for AI use in linguistic academic journal publishing? (2) How do exclusive guidelines for linguistics journals discursively allocate ethical responsibilities among publishing stakeholders?

2. Literature Review

2.1. AI Ethical Frameworks

Publishing ethics was first defined as an inquiry that seeks to identify the element of obligation in conduct and then examines the underlying values (Dewey, 1969). Global associations such as the Committee on Publication Ethics (COPE), European Association of Science Editors (EASE), International Committee of Medical Journal Editors (ICMJE), and World Association of Medical Editors (WAME) outline general frameworks for publishing ethics, mainly centered around the following domains: (a) human rights, privacy, and confidentiality; (b) cultures and heritage; (c) registering clinical trials; (d) animals in research; (e) biosecurity; and (f) reporting guidelines (COPE Council, 2021; EASE, 2024; ICMJE, 2025; WAME, 2023). However, these guidelines do not provide specific advice on the ethics of AI usage (Kocak, 2024); recent research has begun updating the original frameworks to tailor them to AI applications (Hosseini & Resnik, 2025; Jeon et al., 2025; Jeyaraman et al., 2023). The most commonly debated principles in the existing literature include transparency, privacy, accountability, and fairness, which serve as foundational principles for ensuring the practice of AI ethics in scholarly communication (Cohen et al., 2024; Lund & Naheem, 2024). Table 1 presents other elements in the literature that have been suggested for incorporation into AI ethical frameworks. A trend that emerges from these studies is the recommendation for building ethical frameworks for AI usage in individual disciplines.

2.2. Stakeholders for AI Ethics in Journal Publishing

Since publication is a long-term process, AI ethics considerations should cover not only all publication procedures (including author submission, peer review, and editorial decisions) but also research design, data collection and analysis, and manuscript writing (Lund et al., 2023; Pratiwi et al., 2025; Sharma, 2024). This continuous and iterative process could engage various stakeholders, including technologists, ethicists, policymakers, and the public at large (Jeon et al., 2025). This could allow researchers, academic publishers, editorial teams, and even academia at large to become more involved in ethical considerations for AI practice, meaning that the influence of AI could extend to very early research stages such as design, consent, and recruitment, potentially leading to dialogue on authorship, plagiarism, and transparency (Hosier & Cantwell-Jurkovic, 2025). Stakeholder collaboration and the division of labor have emerged as significant issues in ethical implementation and oversight (Kamali et al., 2024). However, few studies have focused on the role of stakeholders in ethical implementation through bottom-up approaches compared to those focusing on ethical principles; stakeholders are significant given their ability to establish, maintain, and execute ethical guidelines for AI usage and promote knowledge ecology (Hosier & Cantwell-Jurkovic, 2025; Nam & Bai, 2023).

2.3. AI Ethical Discourse

As ethics are believed to be culturally constructed and developed by professionals, discourse analysis (DA) has been employed to explore their underlying ideologies in various fields (Häußler, 2021; Holden, 2020; Saxén, 2018). Such ethical texts use broad, abstract concepts to represent power relations and hierarchies embedded in established discourse, practices, and attitudes toward ethics. Critical discourse analysis (CDA) provides a means to explore the relationships between discursive texts, events, and practices and wider social and cultural structures, relations, and processes, especially social and cultural understandings and beliefs shaped by the use of language (Fairclough, 1993, 2001). Ethics in the field of AI are closely related to control over the standards and social impacts of this technology, as an AI-dominated future might face the danger of reinforcing and legitimizing the existing concentrations of resources and power (Jobin et al., 2019; Lund et al., 2023; Talib, 2025). Jeon et al. (2025) identified three narratives of ethical discourse on AI usage by interviewing early-career social scientists in order to explore how research ethics could be updated for social science research practice: as an equalizer, as a meritocracy, and as a community. The equalizer narrative reflects views on whether AI can help reduce existing inequalities in the social sciences; the meritocracy narrative highlights the idea of AI use as an individual skill or capability; and the community narrative relates to concerns and opportunities about how AI may influence the development of the academic community. Therefore, analyzing AI ethics through DA can assist our construction of values and trustworthiness for developing relevant policy recommendations and stances on AI technology, and its implications for human life (Stamboliev & Christiaens, 2025; Talib, 2025).

2.4. Academic Publishing Ethics in Linguistics in the Context of GenAI

Existing research on AI ethics in linguistics publishing mainly focuses on three topics: the current status of ethical guideline implementation, the main challenges in the implementation process, and potential guidance frameworks. In terms of status, Farangi and Nejadghanbar (2024) conducted a study on Iranian applied linguistics and revealed widespread uncertainty regarding AI use, coupled with a lack of clear regulations. Although region-specific, these issues reflect broader global concerns such as overreliance on AI, lack of transparency, and ambiguous ethical boundaries. Two major challenges cited were the paradigm shifts caused by AI and the difficulty of detecting AI-generated content. In terms of the paradigm shifts, De Costa (2024) warns applied linguists against using AI as a “shortcut” under productivity pressures and urges discourse analysts to remain vigilant about AI-generated political and social media content. Consoli and Ganassin (2025) further emphasize that while AI can analyze language patterns and produce human-like responses, it lacks empathy, lived experience, and a moral compass. These limitations call for renewed critical reflection among researchers. Moreover, identifying AI-assisted writing remains a practical hurdle. Casal and Kessler (2023) found that reviewers correctly identified AI-generated abstracts only 38.9% of the time, indicating significant challenges in maintaining textual integrity and authenticity in scholarly communication. In terms of frameworks, guidelines for AI usage, rather than for AI ethics per se, have been proposed as a disciplinary framework in TESOL research (Moorhouse et al., 2025). The framework builds on Plonsky’s (2024) four elements of study quality—transparency, methodological rigor, ethics, and societal value—and adds a fifth crucial element: human accountability. Ethics has thus been treated as one element of transparent AI use rather than as the basis of a framework for ethical AI usage in its own right.
Ethical content and meaning vary considerably across disciplines (Bakiner, 2023). In linguistics, which employs situated theories and methodologies, there is a growing need to reexamine academic publishing ethics within the context of AI adoption (Kubanyiova, 2008). Unlike other fields, where language is a research tool, in linguistics, language itself is the object of study, making ethical concerns inseparable from how meaning and representation are constructed. Linguistics research is also highly situated and relational, featuring shifting researcher–participant dynamics that demand context-sensitive, micro-ethical approaches (Kubanyiova, 2008; Rice, 2006). Moreover, as Ortega (2005) notes, linguistic inquiry values epistemological diversity and recognizes that knowledge about language is always mediated by ideology, identity, and power. These features render universal or technocratic AI ethical frameworks insufficient for this field. Finally, while other disciplines have established comprehensive AI ethics codes, linguistics lacks a coherent framework for addressing how AI-generated or AI-analyzed language should be treated in publishing. Given its focus on meaning, communication, and human–machine interaction, studying AI ethics in linguistics journals is essential for understanding how technological mediation reshapes the epistemology of language research. Moreover, while existing analyses of ethical frameworks focus on these dimensions, the underlying ideologies and responsibilities remain underexplored. Additionally, a significant gap persists between the ethical aspirations and current AI practices of publications. Therefore, we aim to explore the emerging themes of ethical AI usage guidelines and their underlying responsibilities and ideology by combining hybrid thematic analysis and critical discourse analysis.

2.5. Analytical Framework

This study is guided by two complementary theoretical frameworks: the micro–meso–macro movements in CDA (Talib & Fitzgerald, 2016) and the micro–macro traditions in qualitative research (Simmons-Mackie, 2014). Together, these perspectives shape the design of the research questions and inform how we interpret the findings. The multi-level CDA framework has three levels: themes emerging from text analysis at the micro level, the examination of specific policy empirical material at the meso level, and the long-term discursive patterns of large-scale orders of texts at the macro level. The qualitative inquiry framework explains micro perspectives that focus on language and meaning within texts, as well as macro perspectives regarding how texts help form and reinforce group identity.
Within these two frameworks, our study engages the micro and meso levels of the CDA model while drawing on both micro and macro perspectives from qualitative research. A macro-level CDA analysis is not included because it would require longitudinal resources across many years, and such data are not currently available. Current AI ethics guidelines in linguistics publishing are still relatively new, with many having been introduced after 2023, and updates are infrequent and seldom timestamped. For example, Language Testing introduced its AI guidelines in 2024, and journals from Taylor & Francis did not update their AI ethical guidelines between November 2024 and October 2025. These conditions make it impossible to trace diachronic patterns across the field. Beyond being a limitation of the study, this absence also points to a clear direction for future research once a more substantial historical record becomes available. At the same time, the analysis incorporates both micro and macro perspectives from qualitative research because the study attends to textual features while considering how responsibilities, identities, and expectations are constructed within the cultural and social context of scholarly publishing.
These theoretical perspectives also provide the foundation for our research questions. In the first question, we focus on the themes that emerge across the AI ethics guidelines of linguistics journals. This focus aligns with the micro level in CDA and with the micro tradition in qualitative research, both of which emphasize close attention to textual themes. Through the second question, we examine how responsibilities are distributed among stakeholders. This focus reflects meso-level concerns in CDA while connecting to the macro perspective in qualitative inquiry, where issues of cultural context, social structure, and group identity are central. Given the attention to those issues within academic publishing, the research questions together form a theoretically and methodologically coherent design that reflects both the aims of the study and the nature of the available data.

3. Methodology

3.1. Research Design

In this study, we employed both hybrid thematic analysis (HTA) and critical discourse analysis (CDA) to examine the meaning-making and social construction embedded in publishers’ ethical guidelines for AI use (Alejandro & Zhao, 2024). Through the HTA, recurring ethical themes were identified, whereas the CDA enabled a deeper exploration of how these guidelines discursively distribute ethical responsibilities among publishing stakeholders. This contributes to understanding how such discourses reflect underlying ethical ideologies that shape power and hierarchies within the publishing ecosystem.
Hybrid thematic analysis was adopted in the current study because existing AI ethical frameworks offer valuable references but are not fully applicable to linguistics. Its strength lies in combining the deductive application of established theories with the inductive generation of codes from new data (Proudfoot, 2023). In this way, our hybrid approach involved using pre-ordinate themes guided by AI ethical frameworks derived from the literature, representing its deductive aspect; at the same time, it allowed new themes to emerge directly from the data, reflecting its inductive dimension. This integration of theory and data enables a more complete and contextually grounded analysis (Roberts et al., 2019).
In order to examine the ethical ideologies behind discourses about AI applications in linguistic publishing, we also conducted a CDA on journals’ and publishers’ ethical guidelines for AI use as a supplementary methodology. Critical discourse analysis provides a means to explore relationships between discursive texts, events, and practices and wider social and cultural structures, relations, and processes (Fairclough, 1993). This allowed us to investigate hidden cultural and social constructions, tensions, and power relations among stakeholders (Holden, 2020), since the aim of CDA is to determine the ideologies behind the process of discursive production in specific contexts, which are usually opaque to the general public (Häußler, 2021; Saxén, 2018; Talib, 2025).
To systematically analyze the socio-semantic discursive structures that shape the textual construction of stakeholder identities, this study employed role allocation derived from van Leeuwen’s (2008) social actors theory. This analytical lens is widely applied to examine how responsibilities are distributed in the discursive representation of controversial social actions (Ahlstrand, 2021; Fevyer & Aldred, 2022). van Leeuwen’s (2008) role allocation framework provides a socio-semantic lens for interpreting how stakeholders are positioned within discourse. It conceptualizes stakeholder roles as activated, subjected, or beneficialized, depending on whether they are represented as agents of action and regulation, recipients of institutional or ethical control, or beneficiaries of protection and support. Building on this model, the present study synthesized role representations into a conceptual framework that captures how power and responsibility are discursively constructed within the AI ethics ecosystem. This framework shows how ethical discourse in academic publishing reinforces hierarchical moral relations and legitimizes institutional governance under the banner of ethical accountability.
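To make this coding scheme concrete, the following minimal sketch illustrates how role-allocation annotations of this kind could be recorded and tallied in Python. The record structure, field names, and example annotations are our illustrative assumptions rather than the instrument actually used in this study, in which coding was carried out in NVivo.

from collections import Counter
from dataclasses import dataclass
from enum import Enum

class Role(Enum):
    ACTIVATED = "activated"            # represented as an agent of action or regulation
    SUBJECTED = "subjected"            # represented as a recipient of institutional or ethical control
    BENEFICIALIZED = "beneficialized"  # represented as a beneficiary of protection or support

@dataclass(frozen=True)
class Annotation:
    source: str       # e.g., "Language Testing" or "Sage" (illustrative values)
    paragraph: int    # the paragraph-level coding unit
    stakeholder: str  # e.g., "authors", "AI tools", "COPE"
    role: Role

# Hypothetical annotations, not actual coded data from the study.
annotations = [
    Annotation("Language Testing", 1, "authors", Role.ACTIVATED),
    Annotation("Language Testing", 1, "AI tools", Role.SUBJECTED),
    Annotation("Sage", 1, "COPE", Role.ACTIVATED),
]

# Tally the role distribution per stakeholder, analogous to an N/pct. table.
counts = Counter((a.stakeholder, a.role) for a in annotations)
totals = Counter(a.stakeholder for a in annotations)
for (stakeholder, role), n in counts.items():
    print(f"{stakeholder}: {role.value} N={n} pct={100 * n / totals[stakeholder]:.0f}%")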

3.2. Data Collection

Data collection consisted of four major steps. The first step involved identifying the target linguistics journals by referring to journal impact and ranking indicators provided by Clarivate, which is widely recognized across academic disciplines as one of the major authoritative institutions for journal evaluation (Chen et al., 2021; Pearson, 2021; Vega-Arce et al., 2019). On the Master Journal List platform of Clarivate (https://mjl.clarivate.com/search-results, accessed on 28 December 2024), the ‘Core Collection’ was filtered by selecting the Social Sciences Citation Index (SSCI) and further refining the ‘Category’ to ‘Linguistics’, resulting in a final list of 195 linguistics journals considered high-quality within this disciplinary domain. The webpage listing the 195 linguistics journals is an interactive platform. By clicking ‘View profile page’, users can access not only detailed information about each journal but also information about its publisher. In addition, there are two ‘Visit Site’ links: one directs users to the official homepage of the journal, and the other leads to the publisher’s official website. Using this approach, we located the official websites of all 195 linguistics journals along with those of their 45 affiliated publishers, and subsequently examined each website to explore whether AI ethical guidelines were provided.
The second step focused on examining the availability of AI ethical guidelines across linguistics journals. Based on the ‘Primary Language’ attribute provided in Clarivate, we found that 178 out of the 195 journals are published in English, while the remaining journals use other languages in which the authors of this study are not proficient. Therefore, this study exclusively targeted the 178 English-language journals, for which all sections related to instructions for authors on their official websites were systematically reviewed.1 Through close reading, all AI-related ethical statements were extracted and saved in a structured Word file that included journal names, extracted text from journal webpages, and the corresponding data collection dates. Data collection was conducted from the first day of the project (1 November 2024) to 31 October 2025, and follow-up tracking was conducted every three months until the final round of major revisions of this manuscript. The 178 journals were categorized into three groups: journals that neither provide their own AI ethical guidelines nor link to publishers’ guidelines (34 journals), journals that directly link to their publishers’ AI ethical guidelines (134 journals), and journals that formulate their own AI ethical norms or offer supplementary journal-level guidance (10 journals). Detailed analytical results are presented in Section 4.1, and the complete list of journals is included in Appendix A, Table A1.
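To illustrate the grouping logic behind these counts, the short Python sketch below reproduces the three-way categorization and the associated percentage calculations. The record format and field names are hypothetical; the actual screening was conducted through close reading of journal webpages rather than programmatically.

# Hypothetical journal records; in the study, grouping was done by close reading.
journals = [
    {"name": "System", "own_or_supplementary": False, "links_to_publisher": True},
    {"name": "Language Testing", "own_or_supplementary": True, "links_to_publisher": True},
    {"name": "Hypothetical Journal", "own_or_supplementary": False, "links_to_publisher": False},
]

groups = {"no_guidelines": [], "publisher_link_only": [], "journal_level": []}
for journal in journals:
    if journal["own_or_supplementary"]:
        groups["journal_level"].append(journal["name"])        # 10 journals in the study
    elif journal["links_to_publisher"]:
        groups["publisher_link_only"].append(journal["name"])  # 134 journals
    else:
        groups["no_guidelines"].append(journal["name"])        # 34 journals

for label, members in groups.items():
    print(f"{label}: {len(members)} ({100 * len(members) / len(journals):.2f}%)")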
Third, the corresponding publishers’ official websites were examined to locate AI ethical guidelines. Among the 144 journals that provided AI ethical guidelines, 143 explicitly hyperlinked to the general AI ethical guidelines issued by their corresponding publishers. All of these publisher-level guidelines were carefully examined, and the relevant AI ethical guidelines were extracted and systematically recorded in a structured Word file for analysis. The process followed the same data collection timeframe and tracking schedule as applied for journal-level data. In total, these guidelines involved 11 publishers, whose detailed information is presented in Appendix A, Table A1. These publisher-issued guidelines were also treated as essential analytical sources, as they represent the AI ethical standards formally acknowledged and endorsed by linguistics journals.
Finally, all retrieved AI ethical statements were systematically categorized into three analyzable files prepared for NVivo import: (1) AI ethical guidelines set by publishers, (2) AI ethical guidelines set by journals, and (3) supplementary requirements specified by journals (see Table 2 for details). Because many journals under the same publisher referred to identical publisher-level AI ethics policies, all overlapping content was merged and refined to avoid redundancy. When multiple journals drew on the same publisher-issued guidelines, only one version of that policy text was retained for analysis, and the subheadings in the file were renamed using the publisher’s name, followed by a list of all journals associated with that shared policy. This approach ensured analytic coherence, preserved the traceability between individual journals and the corresponding publisher-level guidelines, and maintained a manageable, non-duplicated text corpus for NVivo-based qualitative analysis.

3.3. Data Analysis

The data analysis was divided into three main phases. The first phase consisted of a statistical analysis of AI ethical guidelines at both the journal and publisher levels, focusing on the proportional distribution and basic characteristics of the journal set and publisher set, which provided a contextual foundation for understanding the overall landscape of the linguistics field. The second phase adopted HTA to identify and interpret thematic patterns, aiming to answer the first research question. The third phase applied CDA informed by the social actors framework to explore stakeholder role allocation, aiming to answer the second research question. NVivo 15 was used to support coding in these phases. SPSS 29 was used for statistical calculations, including analyzing the percentage of journals and publishers that provided or did not provide AI ethical guidelines within their respective categories, as well as assessing inter-coder consistency. Throughout the entire data analysis process, both authors of the current study were fully involved at every stage, particularly during the coding procedures, in order to ensure analytic accuracy and quality.
With respect to the first research question, and following Fereday and Muir-Cochrane’s (2006) procedures for HTA and O’Connor and Joffe’s (2020) steps for inter-coder agreement, the two authors of the current study conducted three rounds of coding: (1) a sample subset of data from File 1, (2) the full dataset of File 1, and (3) the data in File 2 and File 3. Prior to coding, preparatory work and preliminary decisions were made. Both authors of the current study carefully reviewed all literature included in the literature review and thoroughly examined all content in File 1 to develop a coding frame that covered all elements presented in Table 1. Given that File 1 contained 4521 words, the two authors deemed the workload manageable and, in order to ensure accuracy, agreed to double-code 100% of File 1 using sentence-level units of analysis, with an acceptable Cohen’s Kappa threshold of ≥0.80.
Subsequently, the two authors independently and blindly coded a randomly selected longer text segment from the publisher Taylor & Francis Online (1207 words). After coding, they compared their coded results item by item and provided justifications for any discrepancies. For the four instances of disagreement, consultation was sought from a third researcher in the project team who had extensive experience publishing thematic analysis studies. The third researcher agreed with the first author’s interpretation, and therefore the authors agreed that, after completing all coding, they would (a) calculate Cohen’s Kappa, which must remain above 0.80 to indicate satisfactory agreement and (b) resolve disagreements through discussion; if consensus could not be reached, the first author’s coding decision would prevail. As File 2 and File 3 consisted of journal-specific ethical content, they were coded in the third round to examine thematic saturation and to determine whether new themes emerged beyond those identified in File 1.
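For readers unfamiliar with the agreement statistic, the following minimal Python sketch shows how Cohen’s Kappa can be computed from two coders’ sentence-level labels. The example labels are invented for illustration; the study itself used SPSS 29 for this calculation.

from collections import Counter

def cohen_kappa(coder_a, coder_b):
    """Cohen's Kappa: chance-corrected agreement between two coders."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    p_o = sum(x == y for x, y in zip(coder_a, coder_b)) / n  # observed agreement
    # Expected chance agreement from each coder's marginal label distribution.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum((freq_a[lab] / n) * (freq_b[lab] / n) for lab in set(coder_a) | set(coder_b))
    return (p_o - p_e) / (1 - p_e)

# Invented sentence-level theme labels for eight coding units.
coder_1 = ["authorship", "transparency", "accountability", "copyright",
           "transparency", "authorship", "human agency", "transparency"]
coder_2 = ["authorship", "transparency", "accountability", "copyright",
           "transparency", "citation practices", "human agency", "transparency"]

print(f"kappa = {cohen_kappa(coder_1, coder_2):.2f}")  # 0.84 here; a value below 0.80 would trigger renegotiation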
For Research Question 2, a role allocation framework within CDA was employed. The analysis focused on two primary data sources: the AI ethical guidelines of Language Testing (744 words) and the AI ethical guidelines issued by its publisher, Sage (749 words). The two authors of the current study jointly conducted negotiated paragraph-level coding. Using the opening paragraph of Language Testing as an example, both authors independently identified all stakeholders appearing in the paragraph while working in the same room, and subsequently discussed and compared all annotations. After reaching consensus on stakeholder identification, the authors proceeded to analyze the role allocation of each stakeholder within the paragraph by categorizing them as activated, subjected, or beneficialized. In cases where agreement could not be reached after reasoned discussion, a third researcher with expertise in role allocation was consulted. To maintain concentration and ensure accuracy, each collaborative coding session was limited to three hours, and each session required the completion of one full paragraph (coding unit) before ending. A total of three sessions (nine hours) were completed to finish all coding. Additionally, a one-hour consultation meeting was held with another experienced researcher to clarify unresolved issues, and a two-thirds agreement was adopted as the final coding decision rule.
The inter-coder consistency results for each analytic phase are presented in Table 3, with agreement levels ranging from strong to almost perfect. The values reported represent the level of agreement between the two authors after discussion, but prior to consulting the third researcher. With regard to the involvement of the third researcher, as well as the final coding decisions, both authors fully agreed upon all coding procedures, decision rules, and adjudication principles applied throughout the study.

4. Findings

This section details the findings, which consist of three dimensions: (a) the state of existence and source of AI ethical guidelines in SSCI linguistics journals through descriptive statistics, (b) themes that emerged from AI ethical guidelines in linguistic academic journal publishing through thematic analysis, and (c) a CDA case study of Language Testing, the only journal in the field of SSCI linguistics that both refers to publishers’ AI ethical guidelines and developed its own AI ethical guidelines. Based on the social actors model, this study compares how the guidelines independently proposed by the journal (i.e., Language Testing) and the general guidelines of the referenced publisher (i.e., Sage) address role allocation.

4.1. Descriptive Statistics of SSCI Linguistics Journals and Their Publishers’ AI Ethical Guidelines

The field of linguistics has 195 SSCI journals from 45 publishers. Based on the requirements of discourse analysis, this study only investigated the 178 journals from 29 publishers that have author guidelines available in English. Among them, 144 journals have AI ethical guidelines, accounting for 80.90% (see Table 4). Most of these journals (93.06%) refer to their publishers’ general ethical guidelines, 5.56% have additional or emphasized content, and two journals (Language Testing and Digital Scholarship in the Humanities) have independent journal-specific ethical guidelines. However, only Language Testing also refers to the publisher’s guidelines.
Fourteen publishers have AI ethical guidelines, accounting for 48.28% of the publishers, but three publishers’ AI ethical guidelines (Oxford University Press, Wiley, and Hopkins Press) were not mentioned by their affiliated linguistics journals (see Table 5).

4.2. Emerging Themes in AI Ethics for Academic Journal Publishing

Themes emerged from three types of source texts during the analysis of AI ethical guidelines in the field of linguistics: (a) general AI ethical guidelines on publishers’ information webpages explicitly linked to by journals, (b) additional guidance on journals’ webpages that supplements publishers’ guidelines, and (c) exclusive AI ethics norms formulated by journals themselves. Based on the hybrid thematic analysis, seven themes emerged: accountability, authorship, citation practices, copyright, long-term governance, human agency, and transparency. Five of these themes are derived from the literature, while the human agency and long-term governance themes are derived from the data in the first type of source text. For the other two types of source text (additional content and linguistics journals’ exclusive guidelines), a saturation check was conducted on the seven themes, and no new themes were found. This result indicates that the AI ethical guidelines of linguistics journals are primarily based on the publishers’ general guidelines, despite some targeted adjustments—for example, Language Testing added content related to ethical consent from research participants under the theme of “transparency.” The specific details about emerging themes are presented in Table 6.

4.3. The Discursive Construction of Stakeholders’ Responsibilities in AI Ethical Guidelines

Language Testing is the only journal in the field of SSCI linguistics that both refers to the publisher’s (i.e., Sage) AI ethical guidelines and develops its own. Based on van Leeuwen’s (2008) social actors theory, we explored the role allocations of all stakeholders in the AI ethical guidelines discourse of the journal (Language Testing) and the publisher (Sage) in order to compare the two (see Table 7). For Language Testing, eight stakeholders were found in the discourse, including research participants and the organization that runs the genAI tools—stakeholders not mentioned in Sage’s general guidelines. This finding not only indicates that Language Testing more heavily emphasizes third-party responsibilities but also marks a shift in AI ethics policies, from initially focusing on individual behavioral norms (such as what authors should do) to constructing a broader ecosystem of ethics and accountability that covers both the source of research data (participants) and the source of technological services (AI companies).
In Table 7, the role distribution of different stakeholders in the AI ethical guidelines is displayed, with data presented in frequency (N) and percentage (pct.). Authors, editors, reviewers, publishers, journals, research participants, and the organization that runs the genAI tools play a significant “activated” role, which shows a co-governance model that shifts from a “passive restriction” to an “active empowerment” responsibility framework. Thus, the core intention of the policy is no longer just to tell these stakeholders what they cannot do, but to explicitly authorize and expect them to do what is needed. AI tools primarily play a subjected role, being the objects of management. Overall, no significant difference emerged in the stakeholders between Sage and Language Testing. However, the role allocated to COPE differs between the two: in Sage’s guidelines, COPE is fully in the activated role, while in Language Testing’s guidelines it plays a more subjected role. Sage (100% activated) views COPE as an “activated” authority and partner. This view means that Sage positions COPE as an active, referable source of guidelines within the publisher’s policy framework. Specifically, Sage’s policy actions reflect a relationship of collaboration and citation because they involve adopting, citing, and integrating COPE’s recommendations, using them as a basis and support for internal rules. Language Testing (67% subjected) views COPE as a “complied-with” norm and regulator, which indicates the journal places its policies under COPE’s authority and constraints. Language Testing thus acknowledges COPE’s higher authority and the need to ensure that the journal’s policies comply with, align with, and adhere to the standards set by COPE. This acknowledgement reflects a relationship of subordination and compliance.

4.3.1. “Activated” Roles

For Sage, among the 9 stakeholders, research participants and the organization that runs the genAI tools do not play activated roles. Among the remaining 7 stakeholders, the journal and COPE constitute the highest proportion at 100%, while AI tools represent the lowest at 41%. This finding indicates that the journal and COPE have a dominant and active role in the process, whereas the participation of AI tools remains relatively limited, reflecting the limited application scenarios approved within the current framework.
For Language Testing, among the 9 stakeholders, publishers do not play activated roles. Among the remaining 8 stakeholders, editors, reviewers, research participants, and the organization that runs the genAI tools constitute the highest proportion at 100%, while AI tools represent the lowest at 27%. Compared with Sage’s heavier focus on COPE norms and the journals’ features, Language Testing has distinct characteristics—its research mostly centers on linguistic ontology and empirical studies involving research participants. Unlike in other disciplines with diverse data sources, the language use of participants could serve as the core data source in linguistics research. The gradual integration of AI technology into research means that AI use for analysis or revision will likely trigger the critical issue of participant data leakage. For this reason, Language Testing not only aims to engage more stakeholders in the governance and negotiation of rules to form a co-construction model, but also specifically defines such scenarios through the following detailed content: “The use of genAI to analyze qualitative data provided by research participants normally requires informed consent from those participants that their data will be analyzed using genAI tools in the ethics approval process.” This approach fully demonstrates the unique attributes of linguistics research and related journals, but can also be regarded as an important shift in the approach of linguistics journals in the context of AI use.

4.3.2. “Subjected” Roles

For Sage, among the 9 stakeholders, 5 (AI tools, authors, editors, reviewers, publishers) play subjected roles, while the other 4 (journal, COPE, research participants, the organization that runs the genAI tools) do not. Among those with subjected roles, AI tools have the highest proportion at 59%, whereas authors have the lowest at 7%. This pattern is the same for Language Testing, where AI tools have the most subjected role at 68%, while authors have the least subjected role, at 11%. For example, in the following extract, authors are the actors in relation to using genAI, but genAI could join authors in playing active roles in relation to data analysis. However, the activated role of authors is re-emphasized through their responsibility for checking the accuracy of the analysis. This pattern suggests that AI tools are discursively constructed as the most regulated and constrained entities within the current publishing ethics discourse, reflecting concerns about the accountability and potential risks of AI tools. In contrast, authors may still be subjected to certain restrictions but are granted comparatively greater agency and professional trust, indicating a hierarchical distribution of responsibility and control across stakeholders.
Language Testing states that “in all cases, any uses of genAI for data analysis or coding must be noted in the manuscript and the authors bear responsibility for checking the accuracy of the analysis.”

4.3.3. “Beneficialized” Roles

Beneficialized roles are relatively limited in AI ethical guidelines compared to activated and subjected roles. For Sage, 3 stakeholders (authors, editors, reviewers) play beneficialized roles, while for Language Testing, 2 stakeholders (AI tools, authors) do. The main difference is whose role should be beneficialized; AI tools play a new “beneficialized” role in Language Testing. This difference reflects fundamental differences in policy perspective: human-centered vs. technology-function-centered. For Sage, its policy explicitly acknowledges and aims to safeguard the human members within the academic community (authors, editors, reviewers) so that they can derive positive benefits from the use of AI. This perspective is “human-centered,” focusing on how technology serves and improves human work and collaboration. For Language Testing, its policy uniquely positions the AI tools themselves as beneficiaries, suggesting a more technology-function-oriented perspective. This view could mean the policy recognizes AI technology as an emerging entity whose proper use and development deserve a certain level of protection or recognition within the academic framework.

5. Discussion

5.1. Ethical Considerations in Linguistics Journal Publishing

This study reveals that AI ethical guidelines in the linguistics field have diversified guidance styles: Some journals refer to the ethical guidelines set by publishers, while others independently formulate their own ethical frameworks. Additionally, some journals both reference publisher standards and supplement the guidelines with additional content. However, some journals do not refer to their publishers’ guidelines even when such guidelines exist. These differences indicate that the development of AI ethical guidelines within the field of linguistics is moving toward diversification and individualization. Similar trends are emerging in other disciplines, further demonstrating that the development of AI ethics is deeply rooted in the publisher’s positioning and the journal’s characteristics (Bobier et al., 2025; Kim, 2024). The academic publishing community is adopting a pragmatic approach of layered governance and contextualized application to address the complex issue of AI ethics. It has become a prevalent trend for publishers to provide a foundational framework, while individual journals make personalized supplements and implementations based on the unique needs of their respective academic fields.
The HTA yielded a seven-dimensional ethical framework (accountability, authorship, citation practices, copyright, long-term governance, human agency, and transparency) for AI use in linguistics journal publishing, which highlights two themes newly added relative to traditional ethical frameworks. The first is “human agency”, which emphasizes that LLMs cannot replace human creativity and decision-making ability; the second newly added theme is “long-term governance”, which consists of two dimensions: policy evolution and regulation reflexivity. This framework clearly reflects two key trends: first, the human–machine relationship is shifting from an opposing model of explicit prohibition of technology to an effective human–machine integration model centered on affirming human creativity; second, policy formulation is dynamic and adaptable, with potential iterations, so a long-term perspective is needed when viewing governance strategies for genAI technological mechanisms and ethics (Batool et al., 2025; Liu & Maas, 2021).
Although these themes are not exclusively manifested in the field of linguistics, as some are related to the overall transformation and development of AI technology, journals like Language Testing that independently formulate their own AI ethical guidelines still demonstrate nuanced variations in how these themes are articulated. For example, when introducing guidelines on the theme of ‘transparency’, Language Testing outlines the differences between basic AI tools and genAI tools to distinguish editorial correction from generating materials. It states, “Note that genAI does not include grammar-checking tools, citation software, plagiarism detectors, or text-to-speech or speech-to-text tools. Use of these basic AI tools to progress or refine manuscripts does not need to be disclosed or cited in manuscripts submitted to Language Testing”. In addition, when introducing guidelines on the theme of ‘authorship’, Language Testing has made a subtle adjustment regarding when genAI cannot be considered an author. The policy replaces the commonly used terms “should not” and “must not” with “may not.” Despite explicitly prohibiting treating AI as an author, the policy leaves room for potential use in the future or in extremely special circumstances.

5.2. Ethical Responsibilities Among Stakeholders

The two main findings on stakeholders’ interests in, benefits from, and concerns about AI in the linguistics field are as follows: first, the inclusion of research participants as stakeholders, which reflects the contextualization of language research and the characteristics of language as the subject of study (Kubanyiova, 2008; Ortega, 2005); second, the designation of AI tools with a beneficialized role, which offers new insights into the attitude toward AI tools and the human–AI relationship in the era of genAI (De Costa, 2024; Moorhouse et al., 2025; Sharma, 2024).
The policy stating “The use of genAI to analyze qualitative data provided by research participants normally requires informed consent from those participants whose data will be analyzed using genAI tools in the ethics approval process” recognizes AI analysis of qualitative data as an independent and high-risk ethical process, one distinct from neutral technological tools like AI transcription. Specifically, the policy highlights the unique risks associated with this form of data processing, which necessitates separate ethical review and approval. The policy also transforms the principle of “informed consent” into a concrete, pre-procedural requirement, ensuring researchers cannot assume broad consent covers AI analysis or inform participants retroactively. Instead, participants must give authorization based on a clear understanding of how their data will be processed. This requirement reflects the journal’s awareness of the potential risks, including privacy and confidentiality concerns, when qualitative data (e.g., interview transcripts containing personal information) is input into third-party AI platforms. For instance, the data could be used for model training, raising risks of leakage and secondary use. The regulation sets an overall higher ethical standard in qualitative research, emphasizing that respecting individuals and protecting data dignity remain non-negotiable ethical boundaries when pursuing technological efficiency.
The policy stating “Authors should demonstrate criticality in their use of genAI tools, for example, by acknowledging limitations” transforms “critical use” from an attitude to a specific, demonstrable action. The policy goes beyond the vague request for authors to maintain a critical stance and instead specifies a verifiable, mandatory action. Authors can no longer simply hold skepticism internally. Instead, they must publicly acknowledge and explain the specific limitations of the AI tools used in a paper or methodology section. This requirement shifts an internal thought process into an academic communication and obligation. While most policies focus on the output of AI tools (e.g., content accuracy and ownership), the requirement to acknowledge limitations forces authors to examine and understand the inherent flaws of AI technology. As mentioned earlier, Language Testing uniquely lists AI tools as a beneficialized role. The requirement to acknowledge limitations aligns with this categorization by encouraging an honest recognition of the tools’ limitations, which itself is a form of respect for the tools and their technological realities. This policy helps avoid potential damage to the journal’s reputation caused by misuse or over-reliance. Ultimately, the policy represents a deeper level of transparency that considers not only whether AI is used, but also the effectiveness and trustworthiness of the use.

5.3. Methodological Reflection of Combining HTA and CDA

The integration of HTA and CDA was methodologically productive in bridging descriptive, interpretive, and analytical layers of research inquiry. Thematically, recurring categories—authorship, transparency, privacy, accountability, fairness, human agency, and ethical governance—can be used to update the frameworks of Jobin et al. (2019) and Hosseini and Horbach (2023). Discourse analysis, however, revealed the linguistic and ideological means by which these categories are constructed and legitimized. By combining inductive pattern recognition with contextual interpretation, this hybrid method enabled a deeper understanding of how ethics is textualized and why certain subject positions (e.g., “the responsible author”) dominate policy language.
This methodological synthesis aligns with Jeon et al. (2025) and Yao et al. (2025), who advocate institutionally grounded and multi-layered approaches to AI ethics research. The HTA–CDA approach provides both empirical grounding and critical reflexivity, acknowledging that ethical discourse is simultaneously normative and persuasive. It therefore complements bibliometric and policy studies (Austin & Medina Riveros, 2025; Huang & Gadavanij, 2025) by contextualizing quantitative trends within their rhetorical and ideological environments. As a methodological contribution, this study demonstrates how interpretive analysis can itself embody ethical awareness—treating textual inquiry as an act of responsibility toward meaning and representation.

6. Conclusions

In the current study, we explored the ethical considerations needed for using AI in academic journal publication and the ethical responsibilities of stakeholders from the perspective of linguistics journals. This study reveals a diversification of AI ethics guidance in linguistics, with journals adopting, adapting, or even disregarding publisher guidelines, indicating a trend towards individualization. Through thematic analysis, the study identifies seven core themes (accountability, authorship, citation practices, copyright, long-term governance, human agency, and transparency), with two newly added themes: human agency and long-term governance. The new themes represent a dialectical relationship: On one hand, they affirm human creativity and intellectual property, but on the other, they recognize the dynamic changes fueled by technological development. While the emergence of these themes is fundamentally rooted in the academic reflections prompted by AI advancement, the ethical norms of linguistics journals still predominantly reference publisher-led requirements. Yet the research on role allocation showed that the stakeholders involved in linguistics journals are increasingly diverse, with increasing emphasis on the practicality, trustworthiness, and benefit of AI technologies. Involving research participants as stakeholders in qualitative studies and assigning AI tools a beneficialized role have been the key features of AI ethical guidelines for linguistics journals compared with publishers’ general guidelines.
Future studies should explore exclusive AI ethical guidelines from more journals that have independently developed AI ethical guidelines. Since Language Testing is currently the only such case, this study involved only a single CDA case study. The sample size could, however, be expanded as more journals develop their own guidelines, and long-term mechanisms for attention to AI ethics should be established in linguistics journals. In addition, this study focused on the role allocation of stakeholders, but this focus might not be the only perspective for AI ethics research. More diversified CDA analyses on the hierarchical structure, power, and identity associated with ethical norms could be employed to further explore the ideologies of these ethics, and empirical studies (e.g., interviews with editors, authors, reviewers, etc.) could also be employed. In addition, due to the features of discourse analysis and the researchers’ linguistic limitations, this study does not cover multilingual journals. English remains the dominant language for international publications. Thus, future studies could extend to publishers and journals in different languages, conducting long-term tracking and documenting changes in ethical guidelines and temporal features. This study also suggests that publishers and journals could clearly record the updated versions, update times, and core changes of AI ethical guidelines on official websites, providing dynamic and ongoing reference for authors, reviewers, and researchers, in line with the long-term dynamic nature of academic research and submissions. Since the academic focus of Language Testing has led to its own dialogical AI ethical framework, other journals and academic fields could promote the development of more diversified and domain-specific AI ethical frameworks for the respective journals and disciplines.

Author Contributions

Conceptualization, X.W. and X.Z.; methodology, X.W. and X.Z.; formal analysis, X.Z.; writing—original draft preparation, X.W. and X.Z.; writing—review and editing, X.W.; supervision, X.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the NPU Research Scheme for Education and Teaching Reform, grant number 2024JGWZ09.

Data Availability Statement

All the data discussed in this paper are drawn from the open guidelines and policies on ethics published on the official websites of publishers, which are publicly accessible. However, these materials are subject to updates in response to AI developments. The data referenced in this study are limited to those available prior to 31 October 2025. The corresponding publisher websites have been included in the references section for further consultation.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

The following table presents 144 SSCI journals in linguistics published under 12 publishers, all of which include ethics-related requirements on their websites.
Table A1. Publishers and their journals.
Ethical Guidelines Categories / Journals / Publishers
Journals do not have their own AI ethical guidelines but reference or link to their publishers’ AI ethical guidelines:
American Journal of Speech-Language Pathology
Journal of Speech Language and Hearing Research
Language Speech and Hearing Services in Schools
American Speech-Language-Hearing Association (3 Journals)
Annual Review of Applied Linguistics
Applied Psycholinguistics
Bilingualism-Language and Cognition
English Language & Linguistics
English Today
Journal of Child Language
Journal of French Language Studies
Journal of Germanic Linguistics
Journal of Linguistics
Journal of the International Phonetic Association
Language and Cognition
Language in Society
Language Teaching
Language Variation and Change
Natural Language Processing
Nordic Journal of Linguistics
Phonology
ReCALL
Signs and Society
Studies in Second Language Acquisition
Cambridge University Press & Assessment (20 Journals)
Applied Linguistics Review
Cognitive Linguistics
Corpus Linguistics and Linguistic Theory
Dialectologia Et Geolinguistica
Folia Linguistica
Indogermanische Forschungen
IRAL-International Review of Applied Linguistics in Language Teaching
Journal of African Languages and Linguistics
Journal of Politeness Research-Language Behaviour Culture
Linguistics
Linguistic Typology
Phonetica
Poznan Studies in Contemporary Linguistics
Text & Talk
Theoretical Linguistics
Zeitschrift Fur Sprachwissenschaft
De Gruyter Brill (16 Journals)
Assessing Writing
Brain and Language
English for Specific Purposes
Journal of Communication Disorders
Journal of English for Academic Purposes
Journal of Fluency Disorders
Journal of Memory and Language
Journal of Neurolinguistics
Journal of Phonetics
Journal of Pragmatics
Journal of Second Language Writing
Language & Communication
Language Sciences
Lingua
Linguistics and Education
System
Elsevier (16 Journals)
Babel-Revue Internationale De La Traduction
Diachronica
English World-Wide
Functions of Language
Gesture
Historiographia Linguistica
Interaction Studies
International Journal of Corpus Linguistics
Interpreting
Journal of Historical Pragmatics
Journal of Language and Politics
Journal of Pidgin and Creole Languages
Language Problems & Language Planning
Linguistic Approaches to Bilingualism
Narrative Inquiry
Pragmatics
Pragmatics and Society
Pragmatics & Cognition
Review of Cognitive Linguistics
Revista Espanola De Linguistica Aplicada
Spanish in Context
Studies in Language
Target-International Journal of Translation Studies
Terminology
Translation and Interpreting Studies
John Benjamins (25 Journals)
Computational Linguistics
Linguistic Inquiry
MIT Press (2 Journals)
Aphasiology
Australian Journal of Linguistics
Clinical Linguistics & Phonetics
Computer-Assisted Language Learning
Current Issues in Language Planning
European Journal of English Studies
Innovation in Language Learning and Teaching
International Journal of Bilingual Education and Bilingualism
International Journal of Multilingualism
International Journal of Speech-Language Pathology
International Multilingual Research Journal
Interpreter and Translator Trainer
Journal of Language Identity and Education
Journal of Multilingual and Multicultural Development
Journal of Quantitative Linguistics
Language Acquisition
Language and Education
Language and Intercultural Communication
Language Assessment Quarterly
Language Awareness
Language Cognition and Neuroscience
Language Culture and Curriculum
Language & History
Language Learning and Development
Language Matters
Metaphor and Symbol
Perspectives-Studies in Translation Theory and Practice
Research on Language and Social Interaction
Social Semiotics
Southern African Linguistics and Applied Language Studies
Translation Studies
Translator
Taylor & Francis (32 Journals)
Child Language Teaching & Therapy
Communication Disorders Quarterly
First Language
International Journal of Bilingualism
Journal of English Linguistics
Language and Literature
Language and Speech
Language Teaching Research
RELC Journal
Second Language Research
Sage (10 Journals)
Argumentation
Journal of Comparative Germanic Linguistics
Journal of East Asian Linguistics
Journal of Psycholinguistic Research
Linguistics and Philosophy
Natural Language & Linguistic Theory
Natural Language Semantics
Springer (7 Journals)
Canadian Modern Language Review-Revue Canadienne Des Langues Vivantes
Gender and Language
International Journal of Speech Language and the Law
University of Toronto Press (3 Journals)
The journals primarily follow their publishers' AI ethical guidelines but additionally emphasize or supplement certain related requirements on their own journal pages.
Intercultural Pragmatics
Linguistic Review
Linguistics Vanguard
Multilingua-Journal of Cross-Cultural and Interlanguage Communication
Probus
De Gruyter Brill
Journal of Language and Social Psychology
Sage
Language Policy
Springer
English Teaching: Practice and Critique
Emerald
While referring to their publishers' AI ethical guidelines, the journals have also developed their own AI ethical guidelines.
Language Testing
Sage
Journals do not refer to the publisher's AI ethical guidelines but provide a brief explanation of the journals' own relevant requirements.
Digital Scholarship in the Humanities
Oxford University Press

Note

1. This decision was also made because we conducted a CDA, for which a precise understanding of linguistic features is essential. Although translated versions could convey the general meaning, they would inevitably distort the grammatical and lexical nuances crucial to discourse-level interpretation. Since our team is proficient only in English and the number of available non-English guidelines was relatively limited (17 journals, or 8.7% of the total), we analyzed only English texts to ensure data reliability. This choice also constitutes one of the study's acknowledged limitations.

References

1. Ahlstrand, J. L. (2021). Strategies of ideological polarisation in the online news media: A social actor analysis of Megawati Soekarnoputri. Discourse & Society, 32(1), 64–80.
2. Alejandro, A., & Zhao, L. (2024). Multi-method qualitative text and discourse analysis: A methodological framework. Qualitative Inquiry, 30(6), 461–473.
3. Austin, T., & Medina Riveros, R. A. (2025). Ethics for researching language and education: What the discourse of professional guidelines reveals. Research Methods in Applied Linguistics, 4(2), 100221.
4. Bakiner, O. (2023). What do academics say about artificial intelligence ethics? An overview of the scholarship. AI and Ethics, 3(2), 513–525.
5. Batool, A., Zowghi, D., & Bano, M. (2025). AI governance: A systematic literature review. AI and Ethics, 5(3), 3265–3279.
6. Bobier, C., Rodger, D., & Hurst, D. (2025). Artificial intelligence policies in bioethics and health humanities: A comparative analysis of publishers and journals. BMC Medical Ethics, 26(1), 79.
7. Casal, J. E., & Kessler, M. (2023). Can linguists distinguish between ChatGPT/AI and human writing? A study of research ethics and academic publishing. Research Methods in Applied Linguistics, 2(3), 100068.
8. Chen, S., Qiu, J., Arsenault, C., & Larivière, V. (2021). Exploring the interdisciplinarity patterns of highly cited papers. Journal of Informetrics, 15(1), 101124.
9. Cohen, M., Khavkin, M., Movsowitz Davidow, D., & Toch, E. (2024). ChatGPT in the public eye: Ethical principles and generative concerns in social media discussions. New Media & Society. Advance online publication.
10. Consoli, S., & Ganassin, S. (2025). Reflexivity as a means to address researcher vulnerabilities. Applied Linguistics Review, 16(6), 2521–2544.
11. COPE Council. (2021, September 24). COPE discussion document: Artificial intelligence (AI) in decision making. Available online: https://publicationethics.org/guidance/discussion-document/artificial-intelligence-ai-decision-making (accessed on 26 August 2025).
12. Curry, N., McEnery, T., & Brookes, G. (2025). A question of alignment—AI, GenAI and applied linguistics. Annual Review of Applied Linguistics, 45, 315–336.
13. De Costa, P. I. (2024). What's ethics got to do with applied linguistics? Revisiting the past, considering the present, and being optimistic about the future of our field. Research Methods in Applied Linguistics, 3(1), 100103.
14. Dewey, J. (1969). The ethics of democracy. In J. A. Boydston (Ed.), The early works of John Dewey (p. 246). Southern Illinois University Press.
15. EASE. (2024, September 25). Recommendations on the use of AI in scholarly communication. Available online: https://ease.org.uk/2024/09/recommendations-on-the-use-of-ai-in-scholarly-communication/ (accessed on 17 October 2025).
16. Fairclough, N. (1993). Critical discourse analysis and the marketization of public discourse: The universities. Discourse & Society, 4(2), 133–168.
17. Fairclough, N. (2001). Language and power. Routledge.
18. Farangi, M. R., & Nejadghanbar, H. (2024). Investigating questionable research practices among Iranian applied linguists: Prevalence, severity, and the role of artificial intelligence tools. System, 125, 103427.
19. Fereday, J., & Muir-Cochrane, E. (2006). Demonstrating rigor using thematic analysis: A hybrid approach of inductive and deductive coding and theme development. International Journal of Qualitative Methods, 5(1), 80–92.
20. Fevyer, D., & Aldred, R. (2022). Rogue drivers, typical cyclists, and tragic pedestrians: A critical discourse analysis of media reporting of fatal road traffic collisions. Mobilities, 17(6), 759–779.
21. Garcia, M. B. (2025). ChatGPT as an academic writing tool: Factors influencing researchers' intention to write manuscripts using generative artificial intelligence. International Journal of Human–Computer Interaction.
22. Hagendorff, T. (2020). The ethics of AI ethics: An evaluation of guidelines. Minds and Machines, 30(1), 99–120.
23. Häußler, H. (2021). The underlying values of data ethics frameworks: A critical analysis of discourses and power structures. International Journal of Libraries and Information Studies, 71(4), 307–319.
24. Holden, A. C. L. (2020). Exploring the evolution of a dental code of ethics: A critical discourse analysis. BMC Medical Ethics, 21(1), 45.
25. Hosier, A., & Cantwell-Jurkovic, L. (2025). AI and library and information science publishing: A survey of journal editors. Library Trends, 73(3), 243–266.
26. Hosseini, M., & Horbach, S. P. J. M. (2023). Fighting reviewer fatigue or amplifying bias? Considerations and recommendations for use of ChatGPT and other large language models in scholarly peer review. Research Integrity and Peer Review, 8(4), 1–9.
27. Hosseini, M., & Resnik, D. B. (2025). Guidance needed for using artificial intelligence to screen journal submissions for misconduct. Research Ethics, 21(1), 1–8.
28. Hosseini, M., Resnik, D. B., & Holmes, K. (2023). The ethics of disclosing the use of artificial intelligence tools in writing scholarly manuscripts. Research Ethics, 19(4), 449–465.
29. Huang, X., & Gadavanij, S. (2025). Power and marginalization in discourse on AI in education (AIEd): Social actors' representation in China Daily (2018–2023). Humanities and Social Sciences Communications, 12(1), 412.
30. ICMJE. (2025, April 1). Recommendations for the conduct, reporting, editing, and publication of scholarly work in medical journals. Available online: https://www.icmje.org/recommendations/ (accessed on 17 October 2025).
31. Jeon, J., Kim, L., & Park, J. (2025). The ethics of generative AI in social science research: A qualitative approach for institutionally grounded AI research ethics. Technology in Society, 81, 102836.
32. Jeyaraman, M., Balaji, S., Jeyaraman, N., & Yadav, S. (2023). Unraveling the ethical enigma: Artificial intelligence in healthcare. Cureus, 15(8), e43262.
33. Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399.
34. Kamali, J., Alpat, M. F., & Bozkurt, A. (2024). AI ethics as a complex and multifaceted challenge: Decoding educators' AI ethics alignment through the lens of activity theory. International Journal of Educational Technology in Higher Education, 21(1), 62.
35. Kardes, G., & Tuna Oran, N. (2025). Perspectives on the use of ChatGPT in academic publications. Science and Public Policy, 52(2), 321–325.
36. Kim, S. J. (2024). Research ethics and issues regarding the use of ChatGPT-like artificial intelligence platforms by authors and reviewers: A narrative review. Science Editing, 11(2), 96–106.
37. Kocak, Z. (2024). Publication ethics in the era of artificial intelligence. Journal of Korean Medical Science, 39(33), e249.
38. Kubanyiova, M. (2008). Rethinking research ethics in contemporary applied linguistics: The tension between macroethical and microethical perspectives in situated research. The Modern Language Journal, 92(4), 503–518.
39. Kuteeva, M., & Andersson, M. (2024). Diversity and standards in writing for publication in the age of AI—between a rock and a hard place. Applied Linguistics, 45(3), 561–567.
40. Liu, H. Y., & Maas, M. M. (2021). 'Solving for X?' Towards a problem-finding framework to ground long-term governance strategies for artificial intelligence. Futures, 126, 102672.
41. Lund, B. D., & Naheem, K. T. (2024). Can ChatGPT be an author? A study of artificial intelligence authorship policies in top academic journals. Learned Publishing, 37(1), 13–21.
42. Lund, B. D., Wang, T., Mannuru, N. R., Nie, B., Shimray, S., & Wang, Z. (2023). ChatGPT and a new academic reality: Artificial Intelligence-written research papers and the ethics of the large language models in scholarly publishing. Journal of the Association for Information Science and Technology, 74(5), 570–581.
43. Moorhouse, B. L., Nejadghanbar, H., & Yeo, M. A. (2025). Study quality in the age of AI: A disciplinary framework for using GenAI in TESOL research. TESOL Quarterly.
44. Mututa, A., & Tomaselli, K. (2025). Research cultures in the modern university: Artificial intelligence and its imperatives on scientific knowledge. Education as Change, 29, 1–19.
45. Nam, B. H., & Bai, Q. (2023). ChatGPT and its ethical implications for STEM research and higher education: A media discourse analysis. International Journal of STEM Education, 10(1), 66.
46. O'Connor, C., & Joffe, H. (2020). Intercoder reliability in qualitative research: Debates and practical guidelines. International Journal of Qualitative Methods, 19, 1609406919899220.
47. Ortega, L. (2005). For what and for whom is our research? The ethical as transformative lens in instructed SLA. The Modern Language Journal, 89(3), 427–443.
48. Pearson, W. S. (2021). Quoted speech in linguistics research article titles: Patterns of use and effects on citations. Scientometrics, 126(4), 3421–3442.
49. Plonsky, L. (2024). Study quality as an intellectual and ethical imperative: A proposed framework. Annual Review of Applied Linguistics, 44, 4–18.
50. Pratiwi, H., Suherman, Hasruddin, & Ridha, M. (2025). Between shortcut and ethics: Navigating the use of artificial intelligence in academic writing among Indonesian doctoral students. European Journal of Education, 60(2), e70083.
51. Proudfoot, K. (2023). Inductive/deductive hybrid thematic analysis in mixed methods research. Journal of Mixed Methods Research, 17(3), 308–326.
52. Resnik, D. B., & Hosseini, M. (2025). Disclosing artificial intelligence use in scientific research and publication: When should disclosure be mandatory, optional, or unnecessary? Accountability in Research.
53. Rice, K. (2006). Ethical issues in linguistic fieldwork: An overview. Journal of Academic Ethics, 4(1), 123–155.
54. Roberts, K., Dowell, A., & Nie, J. B. (2019). Attempting rigour and replicability in thematic analysis of qualitative research data: A case study of codebook development. BMC Medical Research Methodology, 19(1), 66.
55. Saxén, S. (2018). Same principles, different worlds: A critical discourse analysis of medical ethics and nursing ethics in Finnish professional texts. HEC Forum, 30(1), 31–55.
56. Sharma, S. (2024). Benefits or concerns of AI: A multistakeholder responsibility. Futures, 157, 103328.
57. Simmons-Mackie, N. (2014). Micro and macro traditions in qualitative research. In M. J. Ball, N. Müller, & R. L. Nelson (Eds.), Handbook of qualitative research in communication disorders (pp. 17–38). Psychology Press.
58. Stamboliev, E., & Christiaens, T. (2025). How empty is trustworthy AI? A discourse analysis of the ethics guidelines of trustworthy AI. Critical Policy Studies, 19(1), 39–56.
59. Talib, N. (2025). Rethinking ethics in AI policy: A method for synthesising Graham's critical discourse analysis approaches and the philosophical study of valuation. Critical Discourse Studies, 22(2), 210–225.
60. Talib, N., & Fitzgerald, R. (2016). Micro–meso–macro movements; a multi-level critical discourse analysis framework to examine metaphors and the value of truth in policy texts. Critical Discourse Studies, 13(5), 531–547.
61. van Leeuwen, T. (2008). Discourse and practice: New tools for critical discourse analysis. Oxford University Press.
62. Vega-Arce, M., Salas, G., Núñez-Ulloa, G., Pinto-Cortez, C., Fernandez, I. T., & Ho, Y.-S. (2019). Research performance and trends in child sexual abuse research: A Science Citation Index Expanded-based analysis. Scientometrics, 121(3), 1505–1525.
63. WAME. (2023, May 31). Chatbots, generative AI, and scholarly manuscripts. Available online: https://wame.org/page3.php?id=106 (accessed on 17 October 2025).
64. Yao, M., Wei, Y., & Liu, H. (2025). AI practices and ethical concerns: An analysis of undeclared uses of AI in published research articles. Ethics & Behavior.
Table 1. Elements suggested for AI ethical frameworks.
Reference | Discipline | Elements Suggested for AI Ethical Frameworks
Cohen et al. (2024) | N/A | transparency, privacy, accountability, and fairness
Hagendorff (2020) | N/A | accountability, explainability, privacy, justice, fairness, robustness, and safety
Hosseini et al. (2023) | N/A | authorship, plagiarism, transparency, and accountability
Jobin et al. (2019) | N/A | transparency, justice and fairness, non-maleficence, responsibility, and privacy
Kim (2024) | Library and information science | presence of AI use policy, guidance on declaration of AI use, referral to COPE for ethical AI use, AI use in editing
Kocak (2024) | Medicine | authorship, AI disclosure, transparency and responsibility, and ethical use of AI
Kardes and Tuna Oran (2025) | N/A | authorship, plagiarism, and errors in references
Lund et al. (2023) | N/A | authorship, copyright, plagiarism, citation practices
Note: N/A means that the researchers did not specify in their articles whether the proposed frameworks apply to any particular discipline.
Table 2. Data collection files and subheadings.
No. | File Name | Subheadings in the File
1 | AI ethical guidelines set by publishers | American Speech–Language–Hearing Association
Cambridge University Press & Assessment
De Gruyter Brill
Elsevier
John Benjamins Publishing
MIT Press
Sage
Springer
Taylor & Francis
University of Toronto Press
2 | AI ethical guidelines set by journals | Language Testing
Digital Scholarship in the Humanities
3 | Supplementary requirements specified by the journals | Intercultural Pragmatics, etc., under De Gruyter Brill
Journal of Language and Social Psychology
Language Policy
English Teaching: Practice and Critique
Table 3. Inter-coder reliability.
Analysis Domains | Cohen's Kappa
Hybrid thematic analysis | 0.87
Saturation check | 0.98
Role allocation in CDA | 0.91
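For reference, the Cohen's kappa values above adjust the two coders' observed agreement for the agreement expected by chance:

\kappa = \frac{p_o - p_e}{1 - p_e}

where p_o is the observed proportion of coding decisions on which the two coders agree and p_e is the agreement expected from each coder's marginal category frequencies alone. By the conventional benchmark, values above 0.80 indicate almost perfect agreement, a level all three domains reach.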
Table 4. Journals with available AI ethical guidelines for reference (number = 144, proportion = 80.90%).
Number of Linguistics SSCI Journals That Meet the Relevant Category Criteria | Proportion
Journals do not have their own AI ethical guidelines but reference or link to publishers' AI ethical guidelines in English. (n = 134) | 93.06%
Journals primarily follow their publishers' AI ethical guidelines but additionally emphasize or supplement certain related requirements on journal pages. (n = 8) | 5.56%
While referring to publishers' AI ethical guidelines, the journals have also developed their own AI ethical guidelines. (n = 1) | 0.69%
Journals do not refer to the publisher's AI ethical guidelines but provide a brief explanation of the journals' own relevant requirements. (n = 1) | 0.69%
Table 5. Publishers with available AI ethical guidelines (number = 14, proportion = 48.28%).
Number of Publishers for Linguistics SSCI Journals Meeting the Relevant Category Criteria | Proportion
Publishers' AI ethical guidelines are mentioned and referenced by the journals. (n = 11) | 78.57%
Publishers' AI ethical guidelines are not mentioned or referenced by the journals. (n = 3) | 21.43%
Table 6. Emerging themes from ethical guidelines for AI use in academic journal publishing.
Themes | Codes | Extract Exemplar
Accountability | Authors' accountability | Authors are ultimately responsible and accountable for the contents of the work. (Elsevier)
Accountability | Reviewers' accountability | Peer reviewers are accountable for the accuracy and views expressed in their reports, and the peer review process operates on a principle of mutual trust between authors, reviewers, and editors. (Springer)
Authorship | AI non-authorship | Authors must not list or cite AI and AI-assisted technologies as an author or co-author on the manuscript since authorship implies responsibilities and tasks that can only be attributed to and performed by humans.
Authorship | Authorship criteria | Artificial intelligence (AI) tools do not meet University of Toronto Press's definition for authorship, given the level of accountability required. (University of Toronto Press)
Citation practices | Citation requirements | To ensure transparency, we expect any such use to be declared and described fully to readers, and to comply with our plagiarism policy and best practices regarding citation and acknowledgements. (Cambridge University Press)
Citation practices | Citation method | The author(s) must describe the content created or modified as well as appropriately cite the name and version of the AI tool used; any additional works drawn on by the AI tool should also be appropriately cited and referenced. (English Teaching: Practice and Critique)
Citation practices | Exemption from citation | Use of these basic AI tools to progress or refine manuscripts does not need to be disclosed or cited in manuscripts submitted to Language Testing. (Language Testing)
Copyright | Copyright requirements | Authors should be aware of copyright restrictions before uploading any published or unpublished documents or extracts into genAI tools. (Language Testing)
Copyright | Copyright scope | Taylor & Francis supports the responsible use of Generative AI tools that respect high standards of data security, confidentiality, and copyright protection in cases such as: idea generation and idea exploration, language improvement, interactive online search with LLM-enhanced search engines, literature classification, and coding assistance. (Taylor & Francis)
Long-term governance | Policy evolution | As we expect things to develop rapidly in this field in the near future, we will review this policy regularly and adapt it if necessary. (Springer)
Long-term governance | Regulation reflexivity | We are actively evaluating compliant AI tools and may revise this policy in the future. (Elsevier)
Human agency | Review | Therefore, the responsibility for peer review lies exclusively with humans. (De Gruyter)
Human agency | Decision | Editors must not use ChatGPT or other Generative AI to generate decision letters or summaries of unpublished research. (Sage)
Human agency | Creativity | While AI may assist in routine tasks such as grammar checking or formatting, the intellectual and creative content of the manuscript must reflect the authors' own work. (Language Policy)
Transparency | Reviewers' disclosure | If any part of the evaluation of the claims made in the manuscript was in any way supported by an AI tool, we ask peer reviewers to declare the use of such tools transparently in the peer review report. (Springer)
Transparency | Authors' disclosure | COPE guidelines require authors to explicitly and transparently disclose any use of AI tools in the Methods section (e.g., which AI tools were used, how they were used, and for which purpose). (Language Testing)
Note: In the source tables, the coded keywords are set in bold. Journal names in parentheses indicate that the extracts are taken from the journals' own webpages rather than from the publishers' webpages.
Table 7. Role allocation in the ethical guidelines for AI use.
Stakeholders | Activated: Sage | Activated: Language Testing | Subjected: Sage | Subjected: Language Testing | Beneficialized: Sage | Beneficialized: Language Testing
Each cell reports n, pct.
AI tools | 12, 41% | 10, 27% | 17, 59% | 25, 68% | N/A | 2, 5%
Authors | 12, 80% | 16, 84% | 1, 7% | 2, 11% | 2, 13% | 1, 5%
Editors | 10, 84% | 1, 100% | 1, 8% | N/A | 1, 8% | N/A
Reviewers | 7, 78% | 1, 100% | 1, 11% | N/A | 1, 11% | N/A
Publishers | 6, 86% | N/A | 1, 14% | N/A | N/A | N/A
Journal | 1, 100% | 2, 67% | N/A | 1, 33% | N/A | N/A
COPE | 1, 100% | 1, 33% | N/A | 2, 67% | N/A | N/A
Research participants | N/A | 2, 100% | N/A | N/A | N/A | N/A
Organization that runs the genAI tools | N/A | 1, 100% | N/A | N/A | N/A | N/A
Note: N/A means that the ethical guidelines do not contain information related to the roles of these stakeholders.
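The percentages in Table 7 appear to be row-normalized within each guideline. The following minimal Python sketch (not the authors' script; the function name is ours) reproduces them under the assumption that each percentage is a stakeholder's count in one role divided by that stakeholder's total role mentions in the corresponding guideline:

# Minimal sketch: reproduce one row of Table 7, assuming each percentage is
# the count for a role divided by the stakeholder's total role mentions.
def role_percentages(activated: int, subjected: int, beneficialized: int) -> dict:
    counts = {"activated": activated, "subjected": subjected, "beneficialized": beneficialized}
    total = sum(counts.values())
    return {role: round(100 * n / total) for role, n in counts.items()}

# "AI tools" in the Sage guidelines: 12 activated, 17 subjected, 0 beneficialized.
print(role_percentages(12, 17, 0))
# -> {'activated': 41, 'subjected': 59, 'beneficialized': 0}, matching the 41%/59% reported.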
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
