Article

Human-Centered AI and the Future of Translation Technologies: What Professionals Think About Control and Autonomy in the AI Era

by
Miguel A. Jiménez-Crespo
Department of Spanish and Portuguese, Rutgers University, 15 Seminary Place 5th Floor, New Brunswick, NJ 08901, USA
Information 2025, 16(5), 387; https://doi.org/10.3390/info16050387
Submission received: 10 March 2025 / Revised: 21 April 2025 / Accepted: 29 April 2025 / Published: 7 May 2025
(This article belongs to the Special Issue Human and Machine Translation: Recent Trends and Foundations)

Abstract

Two key pillars of human-centered AI (HCAI) approaches are “control” and “autonomy”. To date, little is known about professional translators’ attitudes towards these concepts in the AI era. This paper explores this issue through a survey study of US-based professional translators conducted in mid-2024. Methodologically, this paper presents a qualitative analysis of open-ended questions through thematic coding to identify themes related to (1) present conceptualizations of control and autonomy over translation technologies, (2) future attitudes towards control and autonomy in the AI era, (3) main threats and challenges, and (4) recommendations to developers to enhance perceptions of control and autonomy. The results show that professionals perceive control and autonomy differently in the present and in the future. The main present-oriented themes are usability, the ability to turn technologies on and off or to reject jobs that require specific technologies, collaboration with developers, and differences between working with LSPs and working with private clients. In terms of future attitudes, the most frequent themes are post-editing, quality, communicating with or informing clients, LSPs, or society at large, and creativity or rates. Overall, the study helps identify how professionals conceptualize control and autonomy and what specific issues could help foster the development of truly human-centered AI in the translation profession.

1. Introduction

Recent AI-driven developments have quickly gained traction, with ChatGPT reaching 250 million users worldwide [1] and new systems such as DeepSeek quickly topping global app store charts [2]. These technologies are having a profound impact on contemporary societies, with wide-ranging positive and negative consequences for lay end users, professionals, and society at large [3]. In this context, translation and interpreting appears in labor reports as a field at high risk of AI exposure [4], even though reports show modest levels of actual implementation in the language industry [5]. Professional users of translation and interpreting technologies identify GenAI as a threat, with 61.67% of respondents in a recent survey perceiving GenAI as a threat to the profession [6], while 46% of language service providers (LSPs) expressed negative attitudes towards it [7]. Such negative attitudes towards technology, or anxiety over automation [8], are often attributed to the perceived impact on professionals’ economic, professional, and social standing [9]. These concerns often lead to resistance to adoption [10] and decreased job motivation and satisfaction among professionals [11].
In the human-centered AI (HCAI) paradigm, it is argued that the involvement of end users in the development of technologies is key to reducing negative attitudes towards automation and facilitating higher adoption rates. A key principle of HCAI approaches is for end users to be part of the “process of conceiving, designing, testing, deploying, and iterating” technologies [12]. Regis et al. also stress the importance of “involv[ing] potential users from the early stages of product and service development” because an “inclusive R&D process is imperative” [13]. These end users need to be incorporated in the initial stages of AI development and deployment, because doing so at later stages “results in issues and missed opportunities, which may be expensive to recover from due to the cost, time, resources, and energy spent” [14]. Guidelines and standards indicate that it is necessary to understand users’ conceptions of AI and its perceived impact in order to improve attitudes towards these technologies [15]. Today, with the rapid integration of AI into a wide range of human endeavors, user opinions and attitudes are key to developing AI technologies that are not perceived as imposed or restrictive. This has not necessarily been the case with other technologies such as Neural Machine Translation (NMT). Scholars have argued that users have been involved mainly to improve MT systems, rather than to improve usability or user experience [16]. Consequently, users are forced to undergo a process of “human adaptation”, in which technologies are developed first and users are expected to adapt to an existing technology or workflow. In his study on MT user experience, Briva Iglesias [17] argues that research should also move in the opposite direction, developing human-centered technologies from the outset, in order to foster their adoption. This approach would reduce the likelihood of technology rejection and better enable users to leverage the advances these technologies offer.
This perspective aligns closely with the objectives of the present study, which examines the attitudes of US-based professional translators towards two key issues in HCAI, control and autonomy [18,19], as well as how the development and implementation of AI technologies should be informed by these end users. Generally, the attitudes of professional translators towards MT and translation technologies have been widely studied, e.g., [8,20,21,22,23]; however, the specific focus of this paper, attitudes towards control and autonomy within the context of HCAI, represents a new and relatively unexplored area.
The data for this paper were compiled as part of a larger study on professional attitudes towards control and autonomy in an AI-driven future. A previous publication has already reported quantitative findings based on Likert-scale responses from this study [24]. The present paper reports on a qualitative analysis of the open-ended questions that were compiled in the same survey using a thematic analysis approach, in which the participants’ responses were coded to gain a richer understanding of their views on control and autonomy, including what ideas they can suggest for a more user-centric translation technology and how they perceive it might change in the near future with AI implementations.
The remainder of the paper is structured as follows. First, the paper reviews the notion of HCAI and recent research on attitudes, opinions, and expectations regarding the development of human-centered technologies. It then overviews the notion of control and autonomy in relation to translation technologies and describes the methodology for the study. This is followed by a description of the thematic analysis and the themes and subthemes identified. The analysis is then grouped under three main areas: (1) control and autonomy in the present and the future, (2) the perceived impact of AI, and (3) user recommendations for developers and LSPs on how to foster control and autonomy in future AI implementations.

2. Human-Centered AI and the Development of AI Technologies in the Language Industry

Despite ongoing epistemological discussions [14,19,25,26,27], HCAI is often defined using Shneiderman’s 2020 seminal work as follows:
HCAI focuses on amplifying, augmenting, and enhancing human performance in ways that make systems reliable, safe, and trustworthy. These systems also support human self-efficacy, encourage creativity, clarify responsibility, and facilitate social participation.
[18]
The main goal of HCAI is “to align AI solutions with human values, ethical principles, and legal requirements to ensure safety and security, enabling trustworthy AI” [26]. It takes “aspects of the human user/partner/operator, their values and agency into account” [25]. The centrality of human agents in this area means that end users’ needs and expectations are essential to the development of AI technologies. From a developmental perspective, these end users (professionals, citizens, students, policymakers, etc.) should be central to the “process of conceiving, designing, testing, deploying, and iterating” technologies [12]. This underscores that users should be involved in the initial stages of AI development and deployment, because incorporating them at later stages “results in issues and missed opportunities, which may be expensive to recover from due to the cost, time, resources, and energy spent” [14].
In this paper, the focus is on two key issues in HCAI: (1) autonomy and (2) control [18,28]. Developing “human-centered” technologies implies that, from the moment of inception, users’ control and autonomy should be front and center to increase user satisfaction and rates of adoption [19], leading to genuine human-centeredness. As Väänänen et al. [29] indicate, AI technologies offer innumerable benefits, but they also “pose a threat to human autonomy by over-optimizing the workflow, hyper-personalization, or by not giving users sufficient choice, control, or decision-making opportunities” [29]. “Autonomy” and “control” are closely related notions in the field of Human–Computer Interaction (HCI). The notion of “human control” comes from psychological approaches and is closely related to the sense of agency [30]. This sense of agency, also known as “perceived control”, is the subjective experience of being the initiator of a behavior and actively in control. Therefore, human control over technology “stems from perceived control over one’s actions to make decisions and influence events” [31]. It has a strong association with technologies and interfaces that support an internal locus of control, that is, the perception of users that they control the outcome. This locus of control can be internal or external [32], depending on whether the person believes the outcome is due to their own behavior or not. Thus, the HCI literature has shown that putting users at the center is key to designing interfaces and workflows that support this internal “locus of control”. In HCAI, Shneiderman [18,33] proposed a two-dimensional framework that pairs various levels of human control with levels of computer automation, aiming simultaneously for the highest possible levels of both.
In fact, one of the golden rules of interface design in HCAI is that developers need to “[k]eep users in control” [33]. This means that new AI technologies should be developed in ways that do not “[jeopardize] human control, agency, and autonomy” [29]. In addition, “autonomy” in this study is understood as either the artificial agent or the human(s) having control and making fully independent decisions.
The benefits of this approach are many. The HCI literature shows that users who perceive a lack of control respond with increased stress, anxiety, and low self-esteem. Users can also experience more anger and hostility towards technology [34] or towards other human agents who force them to use it, an effect that has been widely reported in the translation technology literature [10,35]. Therefore, designers of human–computer interfaces need to consider users’ sense of control as an important determinant of the overall usability of any system [34]. In this context, if the language industry or technological giants intend to “augment” the cognitive abilities of human translators (professionals and non-professionals alike), notions such as control and autonomy in relation to AI and automation deserve a closer look. This is especially important at a time when language tech companies are rushing to include AI applications in translation workflows [36]. Autonomy and control often appear as desired qualities that, according to surveys of professional translators’ attitudes towards language technologies, should be embedded in emergent technologies [16]. Here, it is undeniable that translators should perceive themselves at the center of the cognitive system and retain autonomy and the locus of control while translating.
In a previous publication, Jiménez-Crespo [24] identified that professional translators, freelancers and in-house alike, do feel in control of translation technologies and how they are integrated into their workflows. This initial study reported the results of responses to questions answered using a Likert scale (1 to 100) in the same survey. It also reported on questions related to frequency of technology use and perceived command of tools. Overall, participants reported high levels of perceived control (M = 76.08) and autonomy (M = 73.13). Participants reported higher levels of perceived control when asked if they could turn on/off any technology depending on the type of task, content, moment of the day, perceived cognitive load, etc. (M = 81.35). Respondents also reported moderate levels of forced or imposed use of translation technologies (M = 48.09), in line with the study by Sakamoto et al. [11] where translators reported that translation technologies do not dominate their work (average score of 2.83 out of 5). Looking towards the future, participants reported lower levels of control over translation technologies in the AI-driven era (M = 47.17), in line with the existing tendency to attribute lower levels of control over newer technologies [37]. Nevertheless, respondents attributed the future locus of control in the AI era to other human actors in the overall translation workflow (M = 73.44%), including developers, big tech, LSPs or clients. This points to the fact that in this study, the fears of future loss of agency are attributed to other economic players in the translation ecosystem, rather than AI or translation technologies themselves. This paper offers a qualitative analysis of the open-ended responses collected from participants in the same online survey.

3. The Study: Research Questions and Methodology

3.1. Research Questions

Building on the theoretical background of HCAI, this paper addresses three main research questions:
  • RQ1: What are the attitudes towards the human–AI interface in the context of translation technology use, both in the present and the future?
  • RQ2: What do “autonomy” and “control” mean for professional translators using translation technologies in the age of AI?
  • RQ3: What types of future implementations or features of translation technologies will enhance professional translators’ self-perceived sense of “control” and “autonomy” in their interactions with these tools?

3.2. The Survey

The survey for this study was designed using the online tool Qualtrics. Ethical approval was obtained from the Institutional Review Board at Rutgers University (protocol Pro2024000427, approved on 8 March 2024). The pilot survey was checked for content and face validity to ensure usability and clarity through expert reviews and a pilot study. The final survey was available from 10 May to 16 June 2024, and additional responses were collected after the elaboration of the previous study [24]. A combination of convenience and snowball sampling [38] was used. Participants were recruited online through mass mailings distributed by professional associations in the USA (e.g., ATA Language Technology Division and Spanish Division, ATA regional chapters), social networks (e.g., LinkedIn), and professional forums by the researcher and administrators of professional mailing lists and social media sites. The only requirement for participation was to be a full-time translator based in the United States with at least 2 years of experience.
The survey comprised three sections: a demographic section (twelve questions), a section with numerical Likert-scale responses (six questions), and a set of open-ended questions (ten questions). For the present qualitative analysis, eight of the open-ended questions were selected; one had been previously reported [24]. Prior to participation, participants were given a brief introduction outlining the purpose of the study, its duration, data handling, privacy protocols, and the voluntary nature of participation. Informed consent was then obtained online, and participants proceeded with the study. The questions analyzed in this study were grouped into three primary areas. The first group includes questions related to present views on control and autonomy over translation technologies. The number of valid responses (N) is indicated for each question.
  • Q16a. Do you feel that you have autonomy as a professional in terms of how technology is used and integrated in your day-to-day work? Please explain. (N = 34)
  • Q20a. Would you like to have more control over the type of translation technology integrations that you work with? Please explain. (N = 36)
  • Q22. Imagine that developers of translation technologies, or those who set up translation workflows, were asking the opinion of translators about what “control” over translation technologies means, or how they should implement it. What would be your response? (N = 45)
The second block of questions relates to future attitudes and views related to control and autonomy in the AI era.
  • Q23a. Do you think you will have control over the integration of AI in the translation process? Please Explain. (N = 25)
  • Q25. Which part or subcomponents of the translation process do you think you might lose control over when AI gets more integrated in the workflow? (N = 45)
  • Q27. Human-Centered AI involves a high degree of autonomy of the human agent(s). If you would develop AI applications for translation, what would “autonomy” mean for you? (N = 38)
  • Q28. In your opinion, what are the main challenges translators might face in the age of automation and AI? (N = 41)
The last block relates to potential input for developers who are implementing or will implement AI technologies for professional translation purposes.
  • Q26. If you had to provide input to design an AI technology tool to augment your capacities to translate better, more efficiently, or faster, how would you describe it? (N = 35)
In total, 298 responses were collected and analyzed, an average of 37.25 responses per question (SD = 6.20). The author initially examined all responses to the open-ended questions using thematic content analysis [39]. The coding scheme was developed inductively from patterns in the responses, resulting in an initial set of themes and subthemes based on similar responses across the dataset. This initial set was then applied by an additional researcher, and the two coders met to discuss any differences and refine the scheme, resulting in 30 codes for themes and subthemes. The author and the additional researcher then categorized all responses in NVivo for Mac Release 1 (Version 1.7) using the revised coding scheme, discussed any remaining differences, and agreed on coding decisions in order to ensure intercoder reliability.
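The study resolved coding disagreements through discussion and does not report an agreement statistic. For readers who wish to quantify intercoder agreement in a comparable two-coder setup, a common choice is Cohen’s kappa, which corrects raw percentage agreement for chance. The following is a minimal sketch; the theme labels and coder assignments are invented for illustration and are not the study’s data:

```python
from collections import Counter


def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders' categorical labels over the same items."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Observed proportion of items on which the two coders agree
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement expected from each coder's marginal label frequencies
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)


# Hypothetical codes assigned by two coders to ten open-ended responses
a = ["usability_UX", "forced", "quality", "forced", "tech_on_off",
     "usability_UX", "quality", "forced", "tech_on_off", "quality"]
b = ["usability_UX", "forced", "quality", "quality", "tech_on_off",
     "usability_UX", "quality", "forced", "tech_on_off", "forced"]
print(round(cohens_kappa(a, b), 3))  # → 0.73
```

In this toy example, raw agreement is 0.80, but kappa is lower (about 0.73) because some agreement is expected by chance given the label distributions.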

3.3. Demographic Data

Data were collected from 50 individuals who completed the survey (36 females and 13 males). The ages of the participants ranged from 24 to 76 (M = 49.36, SD = 15.76), with professional translation experience ranging from 2 to 50 years (M = 15.76, SD = 11.81). In terms of current employment, 64% of the participants stated they were full-time freelance translators, 22% were part-time freelance translators, and 14% worked in-house. Participants reported a wide range of working languages in addition to English, such as Spanish (N = 18), French (N = 13), German (N = 6), Portuguese (N = 3), and Ukrainian (N = 2). Additionally, the following languages were each reported by one participant (N = 1): Arabic, Belarusian, Cape Verdean Creole, Catalan, Finnish, Hebrew, Haitian Creole, Italian, Polish, Russian, and Swedish. Participants were asked about their perceived level of translation technology competence on a Likert scale from 0 to 100, where 0 indicated “extremely poor”, 50 indicated “average”, and 100 indicated “excellent”. The mean response was 63.57 (SD = 25.30), which implies that participants considered themselves, on average, regular users of translation technology but far from experts. The main specializations reported by participants were in fields of translation related to law (52%), medicine (45%), education (28%), government institutions (28%), economy/finance (18%), science (12%), localization (7%), and media/subtitling (6%).

3.4. Themes and Subthemes

As previously noted, the bottom-up inductive analysis of the open-ended questions was iterated and resulted in a range of themes and subthemes. The following list includes the main themes identified across the questions. The themes (18) and subthemes (9) are listed here in order of frequency.
  • Usability_UX: Discussions on usability and user experience (UX).
      1a. Adaptive_interactive: References to adaptive or interactive technologies, both NMT and LLMs.
      1b. Configuration: The ability of translators to fully configure existing translation tools.
      1c. Speed: References to gains in translation speed using technology.
  • PE_Transfer: References to PE, either from NMT systems or LLMs.
      2a. Transfer: Direct mentions of the transfer stage of the translation process or the ability to create the initial interim translation proposal rather than being offered translation suggestions.
      2b. Override_locked_seg: A subtheme within the “PE” theme in which translators discuss their dislike of locked segments or the inability to override suggestions from NMT, TM, or AI.
  • Forced: Whether or not translators are forced to use any technology, and their attitudes towards the perceived imposition of translation technologies.
  • Quality: Issues related to the translation quality of the final products or its implications.
  • Communicating: Any issue related to how translators communicate or discuss the implications of using AI, NMT, or other technologies with clients, LSPs, users, or society at large. This includes issues related to the perception of translators and translation in society, and the impact on their loss of recognition or status.
  • Collaboration_devs: References to reasons why developers or workflow designers should consult with actual end users and involve translators in the development process.
  • Replacement: Concerns about the potential replacement of translators by any type of technology.
      7a. Human_superiority: References to beliefs in human superiority over machines in translation tasks.
  • Control: General references to human control over translation technologies.
      8a. Control_final: A subtheme within the control theme addressing human control over the final product.
  • Tech_on_off: Any reference to the ability of translators to turn any type of technology on or off for projects or at any point during the translation process.
  • Terminology: Any reference to issues related to terminology during the translation process.
  • Diff_clients_LSPs: References to differences in levels of control and autonomy when translators work with LSPs versus directly with clients.
  • AI_companies: References to AI or tech companies, mostly as those in control of processes, development, and integrations.
  • Job_conditions: Mentions of professional working conditions for translators.
      13a. Rates_competition: A subtheme within the job_conditions theme related to translation rates or competition among translators that lowers them.
  • Creativity: References to the importance of, or threats to, the creative dimensions of translation.
  • TM: References to TM, either due to improvements or to losing TM technologies due to AI.
  • Unsure: Mentions of uncertainty or inability to respond, a common feature in AI surveys (e.g., [3]).
  • Education_up_down: The impact of technologies on translators’ education, training, and skills. This can refer both to the need to up-skill and to the deskilling effect of relying on translation technology.
      17a. Instructions_knowledge: References to instructions, guides, and resources available to learn about tools and technologies.
In addition, some other themes and subthemes, such as data biases (N = 2) or ethics (N = 2), showed a very low frequency in this study even though they often appear in both TS and HCAI research. However, given that this is a user survey, the results match those of surveys on users’ attitudes towards AI, which found that although “ethics, privacy, and security were important developer priorities, [they] were not key aspects of AI user experiences” [3].

4. Results

The analysis is organized into the three areas previously described: (1) present attitudes towards control and autonomy, (2) future attitudes in the AI era, including perceived threats, and (3) recommendations for developers.

5. Control and Autonomy over Technologies: The Present

The first set of questions addresses the current perceptions and attitudes of professional translators towards control and autonomy. This section includes three questions: Q16a, Q20a, and Q22. The first two are follow-ups to the Likert-scale questions (Q16 and Q20) with a “please explain” field, as reported in the previous study [24]. Table 1 presents a contrastive analysis of the most frequent themes and subthemes in questions related exclusively either to control or to autonomy at present.
The analysis shows that users conceptualize current control and autonomy in different ways. “Control” is primarily conceptualized by respondents in terms of tool usability (usability_UX), the ability to turn technologies on and off or decide when to use them (tech_on_off), or the need to collaborate with developers to foster the sense of control in end users (collaboration_devs). In terms of “autonomy”, the themes relate more to whether translators are forced to use technologies (forced), their ability to select or reject work to preserve their autonomy (reject_select_work), and the differences between LSPs and personal clients in terms of whether they are forced to work with specific technologies (diff_clients_LSPs). Among all the themes, only usability_UX appears in both categories; however, while it is the primary theme under control, it is much less prominent in questions related to autonomy. Thus, control is more closely related to the actual use of technology during translation, while autonomy is more related to overall job conditions and job-related issues. The following section describes each of the main themes in more detail.

5.1. Control

The issue of control is perceived as a critical component that assists the translator whenever necessary in producing high-quality and culturally sensitive translations. The analysis shows that translators perceive that, if they are responsible for the final outcome of the translation, they should have control over the global process, including what types of technologies can be used, as well as how and when they are implemented. Q22 asks respondents to define what control means; the most frequent themes, in order of frequency, are tech_on_off, configuration, privacy, rates_competition, and quality. The first, tech_on_off, refers to the ability of users to have full control over when to incorporate and turn technologies on and off. This ability to control precisely which technologies to incorporate into any workflow, as well as when to turn them on and off at any given moment during specific tasks, is central to how professionals define control over their workflows. The following statement from one participant (P10) exemplifies this need for complete control over technologies:
I want to have complete control, as the final translation is my work, and the tools I use and the way I use them should be only under my control.
(P10)
The main argument put forward is that the final translation is the user’s responsibility, and as such, they should also be responsible for any and all technological decisions. The purpose of this control is to produce high-quality outputs and a degree of adaptation to the context, culture, and end user, deciding “when and where to use them” (P5). As previously described, a perceived lack of control by users often leads to frustration, stress, and anxiety [34]. This is supported by participants’ comments that often indicate that forcing technologies upon translators “will likely frustrate their user base in the long run” (P33). This participant also indicates that AI, tech, and workflow developers should offer more control to users in order to reduce the likelihood of negative attitudes among translators, one of the main issues with new technological developments [10]. The reasons why users should have control over technologies can be based on factors such as the type of document or project (P29). The focus often turns to control over what type of features should be turned on and off and when, such as “term recognition” (P40) or AI suggestions.
In Q20a, which asked whether professionals would like more control over the technologies they use, the theme usability_UX emerged as the most frequent, followed by human control (both positive and negative), collaboration with developers (collaboration_devs), better training or tech instructions (instructions_knowledge), and concerns about being forced to use technologies (forced).

5.1.1. Usability_UX

Usability, user experience, and customization are key issues in the perception of control [34,40]. These notions have received attention in recent TS literature [17]. They are directly related to issues of control over translation technologies because users are often forced to work with technologies that are perceived as lacking usability or user-friendliness (P32). Customization and user experience are also closely related in participants’ responses, such as the following:
There should be a balance between customization (control and options) and the user experience for the translator (easy to use is better).
(P16)
Usability and UX are linked to whether professionals can adjust tools and apps to their preferences, workflows, and desired configurations, because these translators perceive that technologies “should be best-suited to [users’] workflows” (P24). To achieve this goal, respondents emphasized the need for collaboration with developers (collaboration_devs), a key theme in the analysis, and stressed that tools should be tested with actual translators, as indicated by participant P7:
[D]evelopers should test their product involving actual translators working across industries and make changes accordingly.
(P7)
The involvement and collaboration of end users with the development of tools are key principles for technology to be human-centered [12,26]. This has not been the case historically [41], with developers of MT systems being more interested in efficiency gains than in creating human-centered tools. As one participant noted,
I could be wrong, but I get the impression AI companies are not including translators in conversations related to design and functionality, but they only want translators to do language proofreading to help perfect AI’s language output.
(P15) [emphasis own]
Nevertheless, recent studies suggest that such a dialogue is emerging [42]. In their study on attitudes towards ChatGPT, the authors conclude that their corpus data show “the emergence of a dialogue between developers and professionals”, a development perceived as beneficial by translators. For example, participant P25 claims that automation can be good if translators “have a voice in how it’s implemented” (P25). Respondents would like to be involved in the development process beyond having their output used to train AI systems, extending this involvement to user experience and user interfaces [43].

5.1.2. Instructions, Learning and Knowledge

Another key theme that emerges when participants discuss control over technologies is related to tool instructions and learning to use modern technologies. This includes discourses related to up-skilling to better understand AI, AI tools, and new implementations or workflows. Notably, the need for better instructions and training materials is linked to both the obligation to use tools and the time and effort involved. Participant P23, for example, succinctly indicates this need for better guides, tech support and training, especially when tech tools are required:
We need good user guides and tech support for all translation technologies, especially those we are requested or required to use.
(P23)
Similarly, participants indicate the need to focus on usability and user-friendliness (P7) in the products (P45), training materials, instructions and guides. They also request “more training” (P45). In some cases, participants highlight the time and effort to learn or adapt to new tools, as well as compensation needs for the time necessary to learn to use them:
I would like to use the tools I am most familiar with to do the work required. I would prefer not to have to learn the use of new software applications unless I am being paid for my time to learn their use.
(P12)
When discussing autonomy, this last participant noted the time and money invested in learning a new tool for specific clients, indicating the effort required to comply with requests by single clients: “[it] is neither efficient nor advantageous to change such a tool at the request of an individual client” (P12). The connection between learning and instruction in terms of control is also linked to education on how AI works and how it produces translations and makes decisions, enabling professionals to generate higher-quality outputs (P8). The effect of AI use on translators’ skills was also prominent in the participants’ responses. This is part of the theme education_up_down, where two main trends were observed: (1) the need to up-skill and (2) the potential for deskilling or losing skills if AI is frequently used or integrated in the system. This last issue, deskilling due to technology use, was often identified in previous studies [44,45].

5.2. Autonomy

Autonomy for translators is mostly conceptualized in terms of whether they have full control over the range of technologies they use, or whether these are imposed by third parties such as LSPs. The role of LSPs and key stakeholders in deciding whether to post-edit, and how, has been examined in studies such as Nitzke et al. [46], who describe the factors that lead to workflow decisions on MTPE. Professionals in this study conceptualize “autonomy” through three main subthemes: (1) not being forced to use technology (forced), as well as the interrelated issue of (2) the ability to reject or select work (select_reject_work), and (3) the differences between working with LSPs or direct clients (diff_LSP_clients). The first theme, “forced”, appears equally in positive and negative terms. In some cases, participants indicate that they are not obligated to use technologies or that they are satisfied with the technologies that LSPs or their companies provide. The negative association with this theme is linked to discourses suggesting that preserving autonomy depends on tools not being imposed upon translators. The following statement embodies this common forced use of technology that, in this case, is welcomed and not seen in a negative light by the respondent.
I don’t have autonomy. But I can give an input in the type of needs I have to perform my job. And I am happy with the technology we use in our team.
(P16)

5.2.1. Choice to Select or Reject Work

Autonomy is primarily conceptualized by respondents as the ability to select or reject work assignments, regardless of the technologies involved. This is often related to the imposition or forced use of technology that they might not know or be willing to use, an issue that falls under the notion of translator’s agency [10]. This theme is most commonly found in responses by freelancers and is associated with years of experience (see Jiménez-Crespo [24]). As previously mentioned, the average number of years of experience was 15 (M = 15.76, SD = 11.81). Participants often explain that they can reject any work that involves the imposition of any tool, highlighting their agency (P1). This also means that they can avoid working “for clients who control [their] technology” (P18). This is often conceptualized in terms of the ability to decide for themselves rather than having decisions imposed on them:
Ability to decide which ones are better and when, and not to depend on clients or others to impose.
(P29)
The reasons why freelancers often conceptualize autonomy as the choice to reject or select work assignments relate to not having access to certain technologies if they are not provided by the LSP. They can also be due to usability issues if translators do not feel comfortable using any given tool (P44). Participants thus echo findings from Nitzke et al. [46], in that “working conditions and prospects for highly qualified and technology-savvy translators in the high-end segment are good despite claims to the contrary”. Some participants relate this imposed lack of autonomy to the power imbalance inherent in their labor relationship with those they work for. As this participant puts it,
If a client wants to dictate which tech I use, that puts me on a slippery slope of being considered an employee.
(P20)
Nevertheless, there is also a perception that this ability to select or reject work might not be sustainable in the future. For example, some participants question whether, despite having high-end clients and a sought-after language pair, client demands in terms of specific technology use might need to be met in the future. P43 wonders “if at some point I will have to yield to more client demands”, while P46 writes that even though clients do not require any technologies, “this may change in the future” (P46). This anticipated loss of some degree of autonomy for established professionals is thus conceptualized as a requirement for the future. Autonomy is conceptualized as an evolving phenomenon, with those who still perceive themselves as autonomous potentially being “left in the dust” (P31).

5.2.2. Differences Between Working with LSP or Direct Translation Clients

LSPs are often perceived as the agents that impose technology use and workflows upon translators. One of the most common subthemes is that working with direct clients allows professionals the autonomy to select their preferred technologies. This theme is reiterated throughout the various sections of the study. Participant P34 succinctly explains this issue:
There is still a growing practice in the business where the multinational clients and multinational agencies are the ones pushing the terms of how technology is use and integrated in the day-to-day work of us, freelance translators. I can only have autonomy when I am dealing with a direct client that does not know/care what software and tools are available for me to do my work to the best I can.
(P34)
This participant thus perceives that autonomy derives from working with direct clients, where technology use can be established by the translator. However, the data also show that some LSPs allow freedom in choosing tools and processes, and machine translation use might be up to the translators themselves (P28). This is especially true for participants with more years of experience or expertise.

6. Control and Autonomy over Technologies: The Future

The following set of questions (Q23a, Q25, Q27, Q28) inquired about attitudes and perceptions of control and autonomy in an AI-driven future.

6.1. Control and the Future

Question Q25 directly inquired about the type of control that might be lost with future AI integrations. Responses show that the top three themes and subthemes were PE, creativity, and transfer. Through the iterative bottom-up thematic analysis, “transfer” was classified as a subtheme within the “PE” theme, since PE and transfer represent the same phenomenon viewed from different perspectives. The perceived loss of the ability to “transfer” the initial translation means that translators work with translation candidates prepopulated by MT or LLMs. Conceptually, this means that they are forced by LSPs or clients to post-edit. This shift from translation from scratch to PE is reflected in a response from participant P34. When asked about what would be lost in the AI age for professionals, P34 directly addressed this concern:
The power to negotiate fare rates, the ability to translate from scratch if all the agencies are asking is postediting, quality of the final result.
(P34) [emphasis own]
This sense of loss has also been identified in previous studies [47,48], and it is conceptualized as the inability to craft the initial round of translation. Respondents described it as losing “the actual conversion of one language to another” (P15), the “translation step” (P45) or “the act of translating […] I feel humans will become proofreaders” (P14). Responses reflect a predominantly negative perspective, indicating that the provision of pre-processed files with AI is perceived as worse than previous technological workflows. P20 expects that AI use will be required by LSPs and, even worse, that translators will receive pre-processed AI files that need to be post-edited:
[…] pre-processed files (segments pre-populated and sometimes locked for editing) where the pre-processing is automatically generated from AI (rather than TMs).
(P20) [emphasis own]
TMs are perceived in this statement as “human” processes, and the phasing out of TM technologies emerges as a subtheme in the analysis. Participants expressed the “transfer” theme in different ways, but most often associated it with the initial or first round of translation. Formulations about what is “lost” include “the initial production of a draft” (P16), “the initial translation” (P20), “the initial round of translation” (P41) or “the initial translation step” (P45). In these formulations, respondents indicate that it is “translation” that will be lost, signaling that PE might not be translation at all. This perceived loss of the ability to convey meaning is often associated with the second most frequent theme identified in this survey item, losing the creative potential of the translator in the theme “creativity”, as P43 puts it:
Translating! AI is not creative, and I work in the creative fields of translation. I don’t want to see AI suggestions, because they will block my own creativity (studies have shown this to be true). So I am not interested in integrating AI into my workflow. I intend to produce “hand-crafted” translations as long as I can, and I think I work in fields where this approach is valued.
(P43)
This participant refers directly to published literature demonstrating that, in fact, PE leads to the loss of creativity [31]. It can also hinder the production of more creative translations once a translation candidate has been proposed by an MT or LLM system. This has recently become one of the most popular research trends across disciplines such as translation studies, literary studies, and computational linguistics [49,50,51].

6.2. Autonomy and the Future

Future-looking questions describe “autonomy” according to two main themes, (1) control over the process and especially the final product (control_final) and (2) the ability to turn technologies on and off, or to integrate them when needed and at the sole discretion of professionals (tech_on_off). Professionals perceive themselves as the final gatekeepers of the translation products, and therefore, they believe that they should have full autonomy over any and all decisions regarding the translation. This is precisely what it means to be “autonomous” from the moment they receive the translation or project. They insist on retaining their agency and decision-making ability in all aspects of the final product. This includes deciding on the type of technology that is integrated into the process, as observed in previous studies [21,48]. The agency of the translator is reflected in the use of the terms “final” (P16) or “last word” (P17), including “final decision for all the steps of the translation” (P29). It also means that during any specific passage, moment, or part of a project, assistance could be turned on or off, as P43 indicates:
Autonomy for me would mean that with a click of a button I could turn AI intervention on or off.
(P43)
Regarding AI integration, respondents report that they welcome AI suggestions in the translation process but wish to maintain ultimate control over the final output. For example, participant P44 expressed a preference for AI “just making suggestions”, while “autonomy is me creating the translation” (P44). Similarly, respondent P34 supports AI for inspiration or augmentation, in line with recent findings in similar studies [6], while translators “create the translation from scratch”. Suggested translation renderings are therefore welcomed, but respondents express their resistance to automatically approved segments:
That nothing is translated without the user clicking a check box to indicate the translation is human approved.
(P31)
As the translator, to be able to change anything you didn’t think was correct.
(P28)
This ties into one of the subthemes identified within the PE theme, override_locked_segments. Participants express their dissatisfaction with locked segments that they cannot edit, and would like to have “the ability to override a machine’s approval” of any accepted segment (P41). Here, respondents use formulations such as “the ability to turn AI assistance off” or receiving suggestions “only when I want to”. This is conceptualized in relation to several variables, such as the “text”, “the project” or “translators’ preferences” for any specific moment or task. This issue is presented in combination with the possibility of choosing whether or not to post-edit, or of working without suggestions whenever the translator decides to do so, which one respondent describes as “human-only mode” (P7). Deciding whether or not they receive suggestions or work with the TM-MT paradigm [20], including AI suggestions, emerges as a key issue in terms of autonomy. In addition, another respondent (P42) rightly indicates that nothing is fully automatic, given that humans set the parameters for automation:
Nothing is automatic, all autonomy is first decided by a human.
(P42)
This response is related to the previously mentioned issue that respondents primarily blame other human agents for their perceived lack of autonomy and control [24]. Also, the fact that translators are aware that workflows are set up by humans is consistent with one of the key issues in the AI augmentation approach: those integrating technologies into current or future workflows must establish “which tasks to automate, which tasks to augment, and which tasks to leave to humans” [52]. In practice, LSPs and/or translation managers often decide on these levels of automation. However, professionals prefer to retain control themselves, being able to decide when to PE, when to translate from scratch, or when and how to integrate LLM suggestions. This, as Ruffo, Daems and Macken [53] indicate, might be more important than any time or efficiency gains for literary translators.

7. Impact of AI in the Future of the Profession: Main Challenges of AI

This section presents an overview of the main themes and subthemes identified in the dataset related to how professionals conceptualize the future of the AI-driven profession. The six main themes, in order of frequency, are (1) communication with different stakeholders in the translation process (e.g., clients, LSPs and the general public), (2) the replacement of translators and the impact on work conditions, (3) quality issues, (4) the move towards PE, (5) the need to educate themselves and (6) rates and competition.

7.1. Communicating

The most frequent theme when discussing future challenges is communicating, or in other words, explaining the translator’s side and perspective to other parties in the translation ecosystem, including the wider public and stakeholders. This has been observed with previous technologies and in similar survey studies, as technologies “can add uncertainty to translation services and in turn exacerbate issues relating to miscommunication” [8]. In the present study, this is often attributed to the “overblown confidence in the capacity of AI” (P46). Communication needs extend across the broader translation network, an issue that has been studied from an extended and distributed cognitive perspective in TS [54]. This communication need starts with clients and LSPs, with “client education” being one of the “biggest challenges” (P23). Respondents often argue that clients do not understand the “human factor” in the translation process:
[…] lack of understanding from the client side that a “human” translation is more valuable than a machine/AI.
(P45)
This lack of understanding of the human side of the process also extends to clients, customers, and the general public. Responses indicate that clients tend to expect “miracles” “in terms of quality and cost-savings” with AI use, blaming other “humans” for the difficulties they encounter (P20). Interestingly, respondents, including participant P1, indicate that the difficulty of communicating the perspective of the translator when new technologies emerge has also arisen with earlier technologies:
Most likely the same challenges we’ve been dealing with all along. People who speak only one language don’t understand what translation really is or how it works. If a client wants you to use a tool that they think is best, but it is actually not appropriate for the task, how do you explain this to them?
(P1)
This lack of communication on the translator’s perspective is directly related to the perceived low status of translators reported in previous studies [55,56,57]. As Ruokonen indicates, “[T]here is convincing empirical evidence that translator status is, indeed, rather low” [58], and this remains a concern for translation professionals when handling public messaging in the AI era.

7.2. Replacement and Rates

Lower translation rates and economic conditions appear in most studies that have surveyed professional attitudes towards translation technologies [59,60,61]. This study further confirms that this issue remains a concern for professionals in the new paradigm that combines PE and AI integrations. As indicated in the latest ELIS report (2024), professionals often conflate AI and MT when assigning blame for lower rates, as the two are considered to be “equivalent in the sense that both reduce the appreciation and therefore also the financial compensation, for human language work” [7]. This conflation of AI and MTPE with decreased translation rates is evident in the analyzed data:
Clients might approach translators with machine post-editing assignments rather than translation jobs to save money.
(P7)
Other participants report concerns about lower rates and a tighter market with increased competition:
Economic challenges: a tighter market for translators with lower rates. (…) Now translators will be hired for less money to revise or check AI writing or translation.
(P43)
In some cases, this fear of lower rates is also linked to fear of replacement, the second most frequent theme, and the potential disappearance of professional translation work. This is perceived as the “main challenge” if companies use AI without human control or post-editing (P47). Others attribute potential rate reductions to industry hype about the capabilities of AI applications:
Downward pressure on rates without commensurate gains in efficiency or reductions in actual labor expenditure due to overblown confidence in the capacity of AI. Indeed, a bad tool can often *reduce* efficiency or *increase* labor, if my experiences with MTPE are any indication.
(P46)
Nevertheless, recent studies do not show a decrease in rates in the AI age, with 70.34% of respondents to a recent survey indicating similar or increasing rates [6]. In addition, as shown by other studies, the fear of lower rates is linked to competition from other translators who accept certain conditions that affect the entire profession:
Lower and lower rates for translation (translators using AI accept lower rates and that lowers the rates across the board).
(P31)
Although attitudes toward PE and automation are mostly negative in the dataset, some positive attitudes also emerge, such as this response by participant P25:
Many translators also feel like automation and AI is here to steal their livelihood. I personally don’t feel that way, as I understand automation can be good if we have a voice in how it’s implemented.
(P25)
Ultimately, this positive attitude is closely tied to translators’ ability to control automation and its implementation. As indicated in a recent study on claims of AI augmentation in collaborative platforms by Jiménez-Crespo [62], translators can only be augmented from an HCAI perspective if the locus of control resides in human participants, and if they retain full agency.

8. Recommendations for Developers

Q26 asked translators to provide recommendations for developers or for those who implement the technology-driven workflows often imposed on translators. It asked participants to explain how translators could be augmented in the sense of Engelbart [63] or Shneiderman [28]. As previously discussed, collaboration with developers emerges as one key theme in this study, but this question specifically elicited concrete suggestions from participants. The main themes in this area were adaptive–interactive technologies, easy configuration (configuration) and broad references to TM (TM). In this last theme, one of the subthemes was “better_TM”, because in this question participants consistently indicated that any tool capable of “augmenting” their work would need to be an improved version of a TM. This improvement mainly involves adaptable TM systems, given that the theme often co-occurs with the broader adaptive_interactive one. Adaptive capabilities suggested by respondents range from context, terminology, dialectal variation, register and document type all the way to user preferences for interaction. These themes indicate that translators perceive adaptive and personalized technologies as a key factor in feeling “augmented”, in line with the work of Briva-Iglesias and O’Brien [43]. In this sense, the capacity of a tool to adapt to an existing user goes back to the debate between customization and generalization. Respondent P10 clearly and succinctly includes all these issues in their response:
(1) Is developed and released after thorough testing by human translators (2) Offers flexibility: Functions can be activated/deactivated at will (3) Can interact with external sources: For example, allowing the translators to plug in terminology databases (4) Is helpful in catching inadvertent mistakes (typos, grammatical mistakes, glaring misinterpretations) (5) Gives the translator freedom to edit the target language as he/she wishes.
This response includes some of the main themes in the overall study, such as usability_UX, tech_on_off, and collaboration_devs. It also includes integration with existing technologies such as spell checkers and terminology databases. Other responses mentioned interactive capabilities, suggesting that AI suggestions should include actual sources found online, similar to those offered by Retrieval-Augmented Generation (RAG) systems (e.g., Google AI Overviews). In addition, across the survey, participants recommend improvements in usability, configuration, and the ability to enable or disable technology support at their discretion. One final suggestion from users is improving speed, as user-perceived latency (UPL) can negatively impact the overall interaction experience.

9. Conclusions

This study set out to investigate professional translators’ attitudes and perceptions about control and autonomy in the AI era. The paper has shown that professionals conceptualize control and autonomy differently, and that their perspectives on the future of AI-driven control and autonomy in translation technologies vary. Overall, the analysis also showed that some themes and subthemes recur across most questions, and many overlap with those in other general AI surveys, such as “user experience”, “functionality” or “unsure” [3]. Similarly, the perspective of professionals also matches previous general AI surveys in that AI represents “a threat to human autonomy … by not giving users sufficient choice, control, or decision-making opportunities” [29]. Here, the high frequency of themes such as control, forced use or tech_on_off supports this observation.
In this study, participants often report having sufficient levels of control and autonomy, but only under certain conditions: (1) LSPs and clients let them control the use of technology in their workflows, (2) they work with user-friendly, adaptable and configurable technology, and (3) they decide when and how to use PE or AI support, or can even reject projects that do not respect their autonomy. Among the barriers and ways to overcome potential problems, participants report the difficulty of communicating with human actants in the overall translation ecosystem, including the wider public; the need to collaborate with developers and workflow managers on tool optimizations; and the need to preserve decent work conditions [9]. In general, as Nunes Vieira and Alonso indicate in their study on attitudes towards MTPE, modern technologies “add uncertainty to translation services and in turn exacerbate problems related to miscommunication and fragmentation of work” [8]. Respondents perceive this communication gap in the translation ecosystem and thus express their desire to have their opinion heard by developers, workflow designers, and translation managers, those who implement flexible or forced technologies downstream. Professionals express a willingness to solve these communication and fragmentation issues in the development of translation technologies, placing control and autonomy as key components of a positive and satisfying career. In sum, flexible, adaptive, usable, and customizable tools and tool environments developed with the input of translators, paired with translators’ control and autonomy over the outcome of the translation process, emerge as an ideal combination to create future positive AI-driven translation environments. Furthermore, translators perceive other human actants, rather than AI or algorithms, as the real threat to their future professions and livelihoods, including other fellow translators and big tech.
If HCAI represents a “second Copernican” revolution that intends to put humans at the heart of AI [3], then this goal will only be achieved if all human participants collaborate with AI with a shared commitment to the common good. As this study highlights, issues of control and autonomy in the age of AI are primarily a human-centered problem that requires human solutions.
Finally, the study has several limitations that should be acknowledged. First and foremost, the study used a combination of convenience sampling and snowball sampling to recruit participants. This might pose a limitation in the sample selection process, as only those translators active in their respective professional associations might have been reached for participation. This fact might have introduced some bias into the sample, and future replications of the study in different geographical locations could benefit from random sampling techniques to ensure broader representativeness of the sample. In addition, the lack of compensation for taking the survey could have resulted in participation bias. Future longitudinal studies could track changes in translator attitudes and perceptions over time as AI technologies evolve, in line with the changes observed in large surveys such as the yearly European ELIS study [7,64]. Comparative studies between different geographical areas could also be conducted, as well as between different translation specializations, such as technical vs. literary translation.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Review Board of Rutgers University, protocol code Pro2024000427 approved on 8 March 2024.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author due to confidentiality issues.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Ray, T. OpenAI Plans to Offer its 250 Million ChatGPT Users Even More Services; ZDNET: New York, NY, USA, 2023. [Google Scholar]
  2. Mehta, I. Deepseek Reaches no 1 on US Play Store; TechCrunch: San Francisco, CA, USA, 2025. [Google Scholar]
  3. Bingley, W.J.; Curtis, C.; Lockey, S.; Bialkowski, A.; Gillespie, N.; Haslam, S.A.; Ko, R.K.; Steffens, N.; Wiles, J.; Worthy, P. Where is the human in human-centered AI? Insights from developer priorities and user experiences. Comput. Hum. Behav. 2023, 141, 107617. [Google Scholar] [CrossRef]
  4. Felten, E.W.; Raj, M.; Seamans, R. Occupational Heterogeneity in Exposure to Generative AI; SSRN Scholarly Paper; SSRN: Rochester, NY, USA, 2023. [Google Scholar]
  5. GALA. AI and Automation Barometer Report 2024; Technical Report; Globalization and Localization Association: Seattle, WA, USA, 2024. [Google Scholar]
  6. Rivas Ginel, M.I.; Sader Feghali, L.; Accogli, F. Exploring translators’ perceptions of AI. ELC Surv. 2024. [Google Scholar] [CrossRef]
  7. ELIS Research. European Language Industry Survey 2024; Technical Report; European Union of Associations of Translation Companies: Brussels, Belgium, 2024. [Google Scholar]
  8. Nunes Vieira, L.; Alonso, E. Translating perceptions and managing expectations: An analysis of management and production perspectives on machine translation. Perspectives 2020, 28, 163–184. [Google Scholar] [CrossRef]
  9. Fırat, G.; Gough, J.; Moorkens, J. Translators in the platform economy: A decent work perspective. Perspectives 2024, 32, 422–440. [Google Scholar] [CrossRef]
  10. Ruokonen, M.; Koskinen, K. Dancing with technology: Translators’ narratives on the dance of human and machinic agency in translation work. Translator 2017, 23, 310–323. [Google Scholar] [CrossRef]
  11. Sakamoto, A.; Van Laar, D.; Moorkens, J.; Carmo, F.D. Measuring translators’ quality of working life and their career motivation: Conceptual and methodological aspects. Transl. Spaces 2024, 13, 54–74. [Google Scholar] [CrossRef]
  12. Vallor, S. Defining human-centered AI: An interview with Shannon Vallor. In Human-Centered AI; Régis, K., Denis, J.-L., Axente, M.L., Kishimoto, A., Eds.; Chapman and Hall/CRC: Boca Raton, FL, USA, 2024; pp. 13–20. [Google Scholar]
  13. Régis, C.; Denis, J.L.; Axente, M.L.; Kishimoto, A. (Eds.) Human-Centered AI: A Multidisciplinary Perspective for Policy-Makers, Auditors, and Users; CRC Press: Boca Raton, FL, USA, 2024. [Google Scholar]
  14. Winslow, B.; Garibay, O. Human-centered AI. In Human-Computer Interaction in Intelligent Environments; Stephanidis, C., Salvendy, G., Eds.; CRC Press: Boca Raton, FL, USA, 2024; pp. 108–140. [Google Scholar]
  15. Walsh, T. Machines Behaving Badly: The Morality of AI; La Trobe University Press: Bundoora, Australia, 2022. [Google Scholar]
  16. Nunes Vieira, L.; Ragni, V.; Alonso, E. Translator autonomy in the age of behavioural data. Transl. Cogn. Behav. 2021, 4, 124–146. [Google Scholar] [CrossRef]
  17. Briva-Iglesias, V. Fostering Human-Centered, Augmented Machine Translation: Analysing Interactive Post-Editing. Doctoral Dissertation, Dublin City University, Dublin, Ireland, 2024. [Google Scholar]
  18. Shneiderman, B. Human-centered artificial intelligence: Three fresh ideas. AIS Trans. Hum.-Comput. Interact. 2020, 12, 109–124. [Google Scholar] [CrossRef]
  19. Ozmen Garibay, O.; Winslow, B.; Andolina, S.; Antona, M.; Bodenschatz, A.; Coursaris, C.; Falco, G.; Fiore, S.M.; Garibay, I.; Grieman, K.; et al. Six human-centered artificial intelligence grand challenges. Int. J. Hum.–Comput. Interact. 2023, 39, 391–437. [Google Scholar] [CrossRef]
  20. Bundgaard, K. Translator attitudes towards translator-computer interaction—Findings from a workplace study. Hermes 2017, 56, 125–144. [Google Scholar] [CrossRef]
  21. Rossi, C.; Chevrot, J.P. Uses and perceptions of machine translation at the European Commission. J. Spec. Transl. (JoSTrans) 2019, 31, 177–200. [Google Scholar]
  22. Brogueira, J. Portuguese translators’ attitude to MT and its impact on their profession. L10N J. 2023, 2, 24–35. [Google Scholar]
  23. Prieto Ramos, F. Patterns of human-machine interaction in legal and institutional translation: From hype to fact. Polissema Rev. Let. ISCAP 2024, 24, 1–27. [Google Scholar]
  24. Jiménez-Crespo, M.A. Exploring translators’ attitudes towards control and autonomy in the Human-Centered AI era: Quantitative results from a survey study. Tradumatica 2024, 20, 276–301. [Google Scholar] [CrossRef]
  25. Capel, T.; Brereton, M. What is human-centered about human-centered AI? A map of the research landscape. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, Hamburg, Germany, 23–28 April 2023; pp. 1–23. [Google Scholar]
  26. Schmager, S.; Pappas, I.; Vassilakopoulou, P. Defining human-centered AI: A comprehensive review of HCAI literature. In Proceedings of the Mediterranean Conference on Information Systems, Madrid, Spain, 6–9 September 2023. [Google Scholar]
  27. Schmager, S.; Pappas, I.O.; Vassilakopoulou, P. Understanding Human-Centred AI: A review of its defining elements and a research agenda. Behav. Inf. Technol. 2025, 1–40. [Google Scholar] [CrossRef]
  28. Shneiderman, B. Human-Centered AI; Oxford University Press: Oxford, UK, 2022. [Google Scholar]
  29. Väänänen, K.; Sankaran, S.; Gutierrez Lopez, M.; Zhang, C. Editorial: Respecting human autonomy through human-centered AI. Front. Artif. Intell. 2021, 4, 807566. [Google Scholar] [CrossRef] [PubMed]
  30. Moore, J.W. What is the sense of agency and why does it matter? Front. Psychol. 2016, 7, 1272. [Google Scholar] [CrossRef]
  31. Alfredo, R.; Echeverria, V.; Jin, Y.; Yan, L.; Swiecki, Z.; Gašević, D.; Martinez-Maldonado, R. Human-centred learning analytics and AI in education: A systematic literature review. Comput. Educ. Artif. Intell. 2024, 5, 100215. [Google Scholar] [CrossRef]
  32. Rotter, J.B. Generalized expectancies for internal versus external control of reinforcement. Psychol. Monogr. Gen. Appl. 1966, 80, 1. [Google Scholar] [CrossRef]
  33. Shneiderman, B.; Plaisant, C. Designing the User Interface: Strategies for Effective Human-Computer Interaction, 6th ed.; Pearson: Upper Saddle River, NJ, USA, 2016. [Google Scholar]
  34. Hinds, P. User Control and Its Many Facets: A Study of Perceived Control in Human-Computer Interaction; Technical Report; Hewlett Packard Laboratories: Palo Alto, CA, USA, 1998. [Google Scholar]
  35. Herbert, S.; do Carmo, F.; Gough, J.; Carnegie-Brown, A. From responsibilities to responsibility: A study of the effects of translation workflow automation. J. Spec. Transl. 2023, 40, 9–35. [Google Scholar]
  36. Nimdzi. The 2024 NIMDZI 100: The Ranking of the Largest Language Service Providers in the World; Technical Report; Nimdzi: Mercer Island, WA, USA, 2024. [Google Scholar]
  37. Sieger, L.N.; Detjen, H. Exploring Users’ Perceived Control over Technology. In Proceedings of the Mensch und Computer 2021, Ingolstadt, Germany, 5–8 September 2021; pp. 344–348. [Google Scholar]
  38. Biernacki, P.; Waldorf, D. Snowball sampling: Problems and techniques of chain referral sampling. Sociol. Methods Res. 1981, 10, 141–163. [Google Scholar] [CrossRef]
  39. Braun, V.; Clarke, V. Using thematic analysis in psychology. Qual. Res. Psychol. 2006, 3, 77–101. [Google Scholar] [CrossRef]
  40. Xu, W.; Gao, Z. Enabling human-centered AI: A methodological perspective. In Proceedings of the 2024 IEEE 4th International Conference on Human-Machine Systems (ICHMS), Toronto, ON, Canada, 15–17 May 2024. [Google Scholar]
  41. O’Brien, S. Translation as human–computer interaction. Transl. Spaces 2012, 1, 101–122. [Google Scholar] [CrossRef]
  42. Rivas Ginel, M.I.; Moorkens, J. A year of ChatGPT: Translators’ attitudes and degree of adoption. Tradumàtica Tecnol. Traducció 2024, 22, 258–275. [Google Scholar] [CrossRef]
  43. Briva-Iglesias, V.; O’Brien, S.; Cowan, B.R. The impact of traditional and interactive post-editing on machine translation user experience, quality, and productivity. Transl. Cogn. Behav. 2023, 6, 60–86. [Google Scholar] [CrossRef]
  44. LeBlanc, M. Translators on translation memory (TM). Results of an ethnographic study in three translation services and agencies. Transl. Interpret. Int. J. Transl. Interpret. Res. 2013, 5, 1–13. [Google Scholar] [CrossRef]
  45. Cadwell, P.; Castilho, S.; O’Brien, S.; Mitchell, L. Human factors in machine translation and post-editing among institutional translators. Transl. Spaces 2016, 5, 222–243. [Google Scholar] [CrossRef]
  46. Nitzke, J.; Canfora, C.; Hansen-Schirra, S.; Kapnas, D. Decisions in projects using machine translation and post-editing: An interview study. J. Spec. Transl. 2024, 41, 127–148. [Google Scholar] [CrossRef]
  47. Pielmeier, H.; O’Mara, P. The State of the Linguist Supply Chain; Translators and Interpreters in 2020; CSA Research, Technical Report. 2020. Available online: https://www.studocu.com/latam/document/universidad-de-la-republica/histologia/the-state-of-the-linguist-supply-chain-2020/92564975 (accessed on 16 April 2025).
  48. Girletti, S. Beyond the assembly line: Exploring salaried linguists’ satisfaction with translation, revision and PE tasks. Tradumàtica 2024, 22, 207–237. [Google Scholar] [CrossRef]
  49. Guerberof-Arenas, A.; Toral, A. Creativity in Translation: Machine Translation as a Constraint for Literary Texts. Transl. Spaces 2022, 11, 184–212. [Google Scholar] [CrossRef]
  50. Kenny, D.; Winters, M. Customization, personalization and style in literary machine translation. In Translation, Interpreting and Technological Change: Innovations in Research, Practice and Training; Bloomsbury Publishing: London, UK, 2024; p. 59. [Google Scholar]
  51. Resende, N.; Hadley, J. The translator’s canvas: Using LLMs to enhance poetry translation. In Proceedings of the 16th Conference of the Association for Machine Translation in the Americas (Volume 1: Research Track), Chicago, IL, USA, 28 September–2 October 2024. [Google Scholar]
  52. Sadiku, M.; Musa, S. A Primer on Multiple Intelligences; Springer: Cham, Switzerland, 2021. [Google Scholar]
  53. Ruffo, P.; Daems, J.; Macken, L. Measured and perceived effort: Assessing three literary translation workflows. Tradumàtica 2024, 22, 238–257. [Google Scholar] [CrossRef]
  54. Risku, H.; Rogl, R.; Pein-Weber, C. Mutual dependencies: Centrality in translation networks. J. Spec. Transl. 2016, 25, 1–22. [Google Scholar]
  55. Dam, H.V.; Zethsen, K.Z. Translator status—Helpers and opponents in the ongoing battle of an emerging profession. Target Int. J. Transl. Stud. 2010, 22, 194–211. [Google Scholar] [CrossRef]
  56. Ruokonen, M. Realistic but not pessimistic: Finnish translation students’ perceptions of translator status. J. Spec. Transl. 2016, 25, 188–212. [Google Scholar]
  57. Liu, C.F.M. Translator professionalism in Asia. Perspectives 2021, 29, 1–19. [Google Scholar] [CrossRef]
  58. Ruokonen, M. Studying Translator Status: Three Points of View. In Haasteena Näkökulma: Point of View as Challenge; Eronen, M., Rodi-Risberg, M., Eds.; VAKKI Publications; University of Vaasa: Vaasa, Finland, 2013; Volume 2, pp. 327–338. [Google Scholar]
  59. Läubli, S.; Orrego-Carmona, D. When Google Translate is better than some human colleagues, those people are no longer colleagues. In Proceedings of the Translating and the Computer, London, UK, 16–17 November 2017; pp. 59–69. [Google Scholar]
  60. Cadwell, P.; O’Brien, S.; Teixeira, C.S. Resistance and accommodation: Factors for the (non-) adoption of machine translation among professional translators. Perspectives 2018, 26, 301–321. [Google Scholar] [CrossRef]
  61. Alvarez-Vidal, S.; Oliver, A.; Badia, T. Post-editing for professional translators: Cheer or fear? Tradumàtica 2020, 18, 49–69. [Google Scholar] [CrossRef]
  62. Jiménez-Crespo, M.A. Augmentation and translation crowdsourcing: Are collaborative translators minds really “augmented”? Transl. Cogn. Behav. 2024, 7, 291–310. [Google Scholar] [CrossRef]
  63. Engelbart, D.C. Augmenting human intellect: A conceptual framework. In Augmented Education in the Global Age; Routledge: New York, NY, USA, 2023; pp. 13–29. [Google Scholar]
  64. ELIS Research. European Language Industry Survey 2025. 2025. Available online: http://elis-survey.org/wp-content/uploads/2025/03/ELIS-2025_Report.pdf (accessed on 16 April 2025).
Table 1. Contrastive analysis of main themes in questions related to current control and autonomy attitudes by professional translators towards technologies.
Control:
1. Usability_UX
2. Tech_on_off
3. Collaboration_devs
4. Configuration
5. Instructions_knowledge
6. Rates_competition
7. Privacy

Autonomy:
1. Forced
2. Reject_select work
3. Diff_clients_LSPs
4. Control
5. Quality
6. Usability_UX