Article

Why Do Swiss HR Departments Dislike Algorithms in Their Recruitment Process? An Empirical Analysis

by
Guillaume Revillod
Swiss Graduate School of Public Administration, University of Lausanne, 1015 Lausanne, Switzerland
Adm. Sci. 2024, 14(10), 253; https://doi.org/10.3390/admsci14100253
Submission received: 9 August 2024 / Revised: 3 October 2024 / Accepted: 6 October 2024 / Published: 9 October 2024

Abstract

This study investigates the factors influencing the aversion of Swiss HRM departments to algorithmic decision-making in the hiring process. Based on a survey of 324 private and public HR professionals, it explores how privacy concerns, general attitude toward AI, perceived threat, personal development concerns, and personal well-being concerns, as well as control variables such as gender, age, time with organization, and hierarchical position, influence their algorithmic aversion. Its aim is to understand the algorithmic aversion of HR employees in both the private and public sectors. The analysis rests on three PLS-SEM structural equation models. Its main findings are that privacy concerns are generally important in explaining aversion to algorithmic decision-making in the hiring process, especially in the private sector. Positive and negative general attitudes toward AI are also very important, especially in the public sector. Perceived threat also has a positive impact on algorithmic aversion among private and public sector respondents. While personal development concerns explain algorithmic aversion in general, they are most important for public actors. Finally, personal well-being concerns explain algorithmic aversion in both the private and public sectors, but more so in the latter, while our control variables were never statistically significant. Overall, this article makes a significant contribution to explaining the causes of HR departments’ aversion to recruitment decision-making algorithms. It can enable practitioners to anticipate these various points in order to minimize the reluctance of HR professionals when considering the implementation of this type of tool.

1. Introduction

Artificial intelligence (AI) now enables machines and information systems to perform tasks that require human intelligence (Cao et al. 2021; Strohmeier 2022). Over time, the rise of “computational information processing capabilities” and “big data analytics technologies” (Cao et al. 2021, p. 1) has made it possible for AI to perform increasingly complex processes that normally require cognitive abilities, such as making judgments, detecting emotions, or performing certain tasks that were once considered uniquely human.
One of the most debated aspects of AI in the workplace is its role in decision-making, where it is often hailed for its efficiency and accuracy compared to human judgment (Agrawal et al. 2017; Dietvorst et al. 2015; Mahmud et al. 2022; Metcalf et al. 2019). This is a fortiori the case in the field of HRM, where its predictive power and speed, among other things, accelerate decision-making, reduce the workload of HR departments, and enable them to devote more time to higher-value-added activities (Malik et al. 2023; Song and Wu 2021; Strohmeier 2020, p. 55). Yet, increased use of AI in HRM also comes with significant concerns, including issues related to transparency, equal treatment, and privacy (Rodgers et al. 2023; Strohmeier 2020). Therefore, despite the potential benefits of using algorithms in decision-making, their full potential can only be realized if individuals accept their use (Strohmeier 2020, p. 57) by overcoming their algorithm aversion (AA), that is, their propensity to prefer a decision made by a human to one made by an algorithm1 (Strohmeier 2020, p. 57; Dietvorst et al. 2015, 2018; Melick 2020).
Surprisingly, few studies have focused on this aversion among HR professionals. In their recent literature review, Mahmud et al. (2022) provide a comprehensive overview of the factors contributing to AA, but only Lee (2018) deals, in part, with algorithmic decision-making in HRM. Subsequent studies, such as that by Dargnies et al. (2024), have not focused on HR employees either. Thus, the factors explaining AA for this specific population remain largely unexplored. This study aims to address this gap. By means of a questionnaire addressed to several hundred HR professionals, we sought to answer the following research question: to what extent do their concerns and general attitudes influence their aversion to algorithmic recruitment decisions?
To answer this question, we drew from the framework proposed by Mahmud et al. (2022), the integrated AI acceptance–avoidance model (IAAAM) from Cao et al. (2021), and the General Attitudes towards Artificial Intelligence Scale (GAAIS) by Schepman and Rodway (2020, 2023) to develop our conceptual model. Furthermore, given the distinct psychological profiles of public versus private sector employees (Anderfuhren-Biget et al. 2010; Emery and Giauque 2016), we explored whether HR professionals’ aversion varied across sectors. In short, this study contributes to the broader conversation on the digitalization of HR (Malik et al. 2020), a pressing challenge for organizations in Switzerland, particularly public administrations (Emery and Giauque 2023). Our findings inform the ongoing debate on how best to balance the efficiency gains of algorithmic decision-making with the concerns of those who implement and are affected by its decisions. In achieving this, this research advances both theory and practice. Theoretically, it fills a gap in the literature by focusing specifically on HR professionals’ aversion to algorithmic decision-making, an area that has received limited attention in prior studies. Furthermore, drawing on an innovative research model, this study provides new insights into the factors influencing HR professionals’ AA. Practically, our findings offer valuable guidance for organizations in managing the implementation of AI in HR processes, helping them to better navigate the concerns that HR professionals have towards algorithmic recruitment systems and to foster broader acceptance.
The remainder of this paper is organized as follows. Section 2 presents the literature review and the theoretical framework and hypotheses of this study. Section 3 describes our methods. Section 4 presents our results, and Section 5 interprets and discusses them. Finally, Section 6 concludes the paper by explaining its limitations and proposing new avenues to explore to delve deeper into the subject of HR professionals’ aversion to algorithms used in HR.

2. Literature Review, Theoretical Framework, and Hypotheses

2.1. AI and Algorithmic Decision-Making in HR

Currently, the literature for managers and scientists emphasizes the importance, benefits, advantages, and challenges of integrating more sophisticated artificial intelligence algorithms and tools to automate, assist, or support HR decision-making (Stone and Lukaszewski 2024; Strohmeier 2022; Prikshat et al. 2023). Since the first study of a tool based on artificial intelligence by Lawler and Elliot (1993), several academic studies have focused on AI tools in areas such as employee engagement (Campion and Campion 2024; Jantan et al. 2010a, 2010b; Rodney et al. 2019; Strohmeier and Piazza 2015), as well as within other processes inherent to HR departments (Emery and Gonin 2009), such as performance management (Duggan et al. 2020; Jia et al. 2018; Johnson et al. 2022; Kim et al. 2021; Varma et al. 2024; Vrontis et al. 2021), career development (Binns et al. 2018; Höddinghaus et al. 2021), tailored training programs (Alabi et al. 2024; Johnson et al. 2022), and so-called “cross-functional” processes (Emery and Gonin 2009, p. 46), where AI is used to predict psychosocial risks (Merhbene et al. 2022) and turnover intentions (Alabi et al. 2024; Kang et al. 2020) and to assess well-being at work (Mer and Virdi 2023).
Notwithstanding this scientific output, there is limited understanding of what AI-based HRM tools, instruments, or applications actually are (Prikshat et al. 2023, p. 5). Initially, the definition of AI was far from unanimous (Grosz et al. 2016), a fortiori in the field of HRM (Strohmeier 2022). However, this did not prevent some authors from venturing a definition. For Meijerink et al. (2021), HR AI is defined as a category of algorithms,2 that is, a set of coded procedures for transforming input data into a desired output based on specified calculations (Strohmeier 2020, p. 54; Gillespie 2014), enabling an information system to perform HRM activities that would normally require the knowledge and intervention of a human being. For Strohmeier (2022), an HR AI tool is any information system that, when inserted into an HR process, not only imitates natural intelligence but also evolves according to the data that feed it.3 Its aim is to either completely replace the performance of a task previously carried out by the HR department or produce a result that can then be used to inform the HR department’s4 choices; in other words, it helps with HRM decision-making (Park et al. 2021; Strohmeier 2020, p. 54).
At the turn of the century, the question of whether AI systems, and thus the algorithms on which they are based, are better suited to helping decision-makers or replacing them was unresolved. Bader et al. (1988) identified six different roles played by technical systems: critic, assistant, second opinion, consultant, tutor, and automaton. However, while some researchers defend the idea that AI can create a man–machine symbiosis reaching a higher level of intelligence than either of them operating separately (Syam and Courtney 1994), others, in the medical field in particular, firmly believe that AI systems are merely instruments that support decision-making (Spyropoulos and Papagounos 1995). Twenty-five years later, their positions have changed little. Ghosh and Kandasamy (2020, p. 1138), for example, believe that AI systems are too opaque for clinical decisions to be made based on their results, whereas Pee et al. (2019) assert that the relationship between humans and AI-based medical imaging systems can be cooperative, collaborative, or competitive. Many elements explain these different perspectives, which are sometimes critical of AI systems. For example, Zuboff (2019) considers AI tools as technologies with explosive social consequences due to the lack of public control over their use. In HRM, Böhmer and Schinnenburg (2023) highlight several ambiguities surrounding AI systems. First, their role oscillates between a tool that supports HR activities and a system that replaces employees,5 in order to gain effectiveness and efficiency, with potentially unprecedented personal and social consequences. Second, their opacity,6 which often prevents the decision-maker from understanding how a result was produced by an AI system, even when that result appears to be correct. Third, concerning their performance,7 although the work of HR departments can be optimized by means of AI tools, the latter are also inclined to exert pressure on employees by closely monitoring their performance, as Varma et al. (2024) suggest. Finally, data,8 although their use is likely to improve HRM decision-making (Coron 2023; Pessach et al. 2020), can be difficult to acquire in sufficient quantity and variety to feed HR AI systems (Garcia-Arroyo et al. 2019) and might also create or amplify conflicts inherent in human rights, data protection, and the different storage principles that prevail within organizations or political structures (Blum and Kainer 2019; Huff and Götz 2020). In addition to these four major ambiguities, there are also potential ethical issues (Rodgers et al. 2023), particularly in recruitment (Chen 2023; Köchling and Wehner 2020; van den Broek et al. 2019), and practical limitations, such as the compatibility of AI with organizational processes or the AI skills of HR professionals (Schinnenburg and Brüggemann 2018).
All these elements, as Cao et al. (2021) point out, contribute to shaping both individuals’ appetite for AI tools and the algorithms on which they are based and their aversion to them. The latter is the focus of this study.

2.2. Research Model and Hypotheses

2.2.1. Hiring Process AA (Dependent Variable)

Several studies have shown that algorithms far surpass humans in terms of decision-making (Dietvorst et al. 2015; Mahmud et al. 2022). However, when given a choice, most individuals prefer to rely on decisions made by humans rather than by algorithms (Dietvorst et al. 2015; Prahl and Van Swol 2021). In the literature, preferring a decision made by a human to one made by an algorithm is referred to as AA (Dietvorst et al. 2015, 2018; Melick 2020). In practice, AA is observed even when human and algorithmic decisions are equally accurate (Berger et al. 2021; Bogert et al. 2021). In addition, individuals are usually convinced that their decisions are more reliable than those produced by algorithms and, thus, display a certain aversion to them from the outset (Kawaguchi 2021; Litterscheidt and Streich 2020; Sultana et al. 2021). Finally, some people harbor an intrinsic aversion to algorithms, regardless of their performance, because they are fundamentally distrustful of them (Kawaguchi 2021; Prahl and Van Swol 2017). Perceived as a behavioral anomaly, AA has attracted the attention of many researchers, giving rise to a wealth of studies on its determinants. Mahmud et al. (2022) review more than 80 studies and propose a theoretical framework on which this study is based. According to Mahmud et al. (2022), the factors influencing AA can be divided into four main groups: (1) algorithmic factors, (2) individual factors, (3) task factors, and (4) high-level factors. Each of these four main dimensions is then subdivided into several others, which we do not summarize here due to space considerations. Figure 1 summarizes the conceptual framework of Mahmud et al. (2022).
Although Mahmud et al. (2022) provide a rather exhaustive summary of the factors influencing AA, their study has two limitations: on the one hand, their work does not exhaust all potential predictors of AA; on the other hand, it incorporates only a small number of HRM-related texts.9 To address these two limitations, as well as to respond to the call to empirically test their general theoretical framework (Mahmud et al. 2022, p. 17), we tested new individual explanatory factors10 and focused specifically on aversion to algorithms that are part of the staff hiring process11 (Emery and Gonin 2009; Melick 2020).

2.2.2. Explanatory Factors for AA in the Hiring Process (Independent Variables)

This conceptual framework explains the aversion of Swiss HR departments to algorithmic recruitment decisions by actors’ concerns about data privacy and security (Privacy Concerns Section), their general attitude towards AI (General Attitude towards AI Scale Section), and the threat that AI poses to them (Perceived Threat and Its Determinants Section), as well as, from a more personal point of view, by the concerns raised by algorithmic decisions in terms of personal development (Personal Development Concerns Section) and personal well-being (Personal Well-Being Concerns Section). In line with our research question, these factors correspond to the main AI-related gaps in the literature. The inclusion of the GAAIS is justified by its novelty.

Privacy Concerns

Tools such as algorithms used in recruitment, which can automate all or part of the decision-making process, are generally based on digital data (Newell and Marabelli 2020) collected on a large scale, either inside or outside organizations (Araujo et al. 2020). However, as the literature shows (Araujo et al. 2020; Thurman et al. 2019), the use of these data often raises confidentiality or security concerns, commonly known as privacy concerns (PC). According to Mahmud et al. (2022), the latter is positively associated with aversion to algorithmic decision-making. Although no study has yet addressed this issue, we believe that this link between PC and AA can also be found in HRM, where AI algorithms and systems feed on a wide range of information, including personal data such as age, nationality, level of education, and place of residence, as well as information about the activities of job applicants or employees in the workplace (Strohmeier 2022). Hence, our first hypothesis is as follows:
Hypothesis 1 (H1): 
HR employees’ PC are positively associated with their AA.

General Attitude towards AI Scale

In general, perceptions of and attitudes towards algorithms are strongly linked to AA (Workman 2005). Some individuals are naturally hostile to algorithmic decisions (Kawaguchi 2021; Önkal et al. 2009; Prahl and Van Swol 2017). Individual causes of this aversion include a lack of trust in algorithms (Zhang et al. 2021); the perception that algorithms are less competent and empathetic than humans when it comes to informing and making decisions (Luo et al. 2019)12; an egocentric bias that makes individuals prefer their own decisions over not only those of other humans but also those of algorithms (Sutherland et al. 2016); feeling responsible for the consequences of a decision (Van Dongen and Van Maanen 2013); or the motivation to follow algorithmic decisions (Mahmud et al. 2022, p. 11).
However, to date, no study has tested whether the GAAIS (Schepman and Rodway 2020, 2023) influences this dependent variable. This is what we propose in this paper. In their initial validation, Schepman and Rodway (2020) divide the GAAIS into two groups13: one grouping positive attitudes toward AI, such as perceptions of its innovativeness and individuals’ optimism towards it,14 and the other grouping negative attitudes toward AI, such as discomfort or insecurity felt by individuals towards AI systems.15 Two subscales were thus created, GAAIS+ and GAAIS−, which we used to test their significance for AA in recruitment. Thus, our second and third hypotheses are as follows:
Hypothesis 2 (H2): 
HR employees’ GAAIS+ is negatively associated with their AA.
Hypothesis 3 (H3): 
HR employees’ GAAIS− is positively associated with their AA.

Perceived Threat and Its Determinants

Mahmud et al. (2022, p. 17) were the first to propose a more in-depth study of AA using the IAAAM (Cao et al. 2021). The latter extends the unified theory of acceptance and use of technology (UTAUT), initially outlined by Venkatesh et al. (2003) and then reformulated and augmented numerous times (Venkatesh et al. 2016; Venkatesh 2022). Cao et al. (2021) incorporate a construct called perceived threat (PT), which synthesizes the extent of individuals’ fear, worry, and anxiety about algorithm-based decisions and, thus, AI. Their results show that PT is significantly associated with individuals’16 attitudes toward the use of AI in organizational decision-making. In this study, we tested whether, in addition to influencing individuals’ attitudes, PT can also explain their decision-making preferences within the recruitment process. Because PT is a negative perception, we hypothesized a positive link with aversion to decisions produced by algorithms used in the recruitment process, as follows:
Hypothesis 4 (H4): 
HR employees’ PT is positively associated with their AA.
According to Chen and Zahedi (2016) and Cao et al. (2021), PT can be explained by perceived susceptibility (PSUS)—the belief that a person is likely to make bad decisions—and perceived severity (PSEV)—the belief that a tool or technology generates or reproduces stereotypes, biases, or discrimination. According to these authors, both PSUS and PSEV are positively associated with PT. However, these associations are not always significant (Carpenter et al. 2019; Zahedi et al. 2015). In this study, ethical concerns, as well as the potentially serious consequences associated with algorithmic decision-making (Dwivedi et al. 2021; European Commission 2020; Shrestha et al. 2019) in the field of HR (Strohmeier 2022) prompted us to test these relationships to better understand the antecedents of the threats posed by algorithms inserted into the recruitment process from the viewpoint of HR departments. Our hypotheses are as follows:
Hypothesis 4a (H4a): 
HR employees’ PSUS is positively associated with their PT.
Hypothesis 4b (H4b): 
HR employees’ PSEV is positively associated with their PT.

Personal Development Concerns

Although HR AI tools can take over certain tasks—some of them unpleasant—previously devolved to HR departments, such as processing pay slips (Mohamed et al. 2022; Persson and Wallo 2022; Saukkonen et al. 2019), welcoming employees (Baudoin et al. 2019, p. 5), and sorting and selecting job applications (Schuetz and Venkatesh 2020; Zu and Wang 2019), as well as describe and summarize large volumes of data, predict trends, develop scenarios, optimize decision-making, and, in short, direct HR activity more strategically toward tasks with higher added value or requiring more social skills (Garcia-Arroyo et al. 2019; Malik et al. 2020, 2023; Pessach et al. 2020), it cannot be ruled out that the use of algorithms may give rise to certain concerns among HR professionals that are linked to the development of their skills or their future (Nolan et al. 2016). As Böhmer and Schinnenburg (2023, p. 19)17 point out, "AI can take over tasks formerly performed by workers and destroy jobs or reduce task complexity, leading to intellectual impoverishment of work". In the literature, the question of whether individuals believe AI promotes the creation of new learning opportunities or, on the contrary, removes them is summarized by the concept of PDEV, that is, "an individual’s concerns regarding the degree of preventing personal learning from own experience by the use of AI" (Cao et al. 2021, p. 5). In this respect, we believe that the AA of HR departments can be partly explained by the view that algorithmic decision-making in recruitment prevents them from learning from their own experiences in this area. Therefore, our fifth hypothesis is as follows:
Hypothesis 5 (H5): 
HR employees’ PDEV are positively associated with their AA.

Personal Well-Being Concerns

Just as it could affect their personal development, the introduction of algorithms to make recruitment decisions could negatively affect HR employees’ well-being. Indeed, the literature identifies several outcomes linked to the use of AI systems within organizations. Brougham and Haar (2018) show that the use of STARA (Smart Technologies, AI, Robotics, and Algorithms) negatively affects employees’ perspectives, which, in turn, harms their well-being by promoting depression and cynicism in the workplace. Other studies have found anxiety (Meuter et al. 2003; van Esch et al. 2021) and stress (Tarafdar et al. 2019) to be associated with the use of technical systems. Others, although more positive, find that the use of algorithms in HR analytics decision-making promotes employee resilience (Xiao et al. 2023). However, even before feeling the effects of the introduction of a new technical system, individuals harbor personal well-being concerns (PWB), defined as "an individual’s concerns regarding the degree of personal anxiety and stress caused by the use of AI" (Cao et al. 2021, p. 5). In this case, we expect the following:
Hypothesis 6 (H6): 
HR employees’ PWB are positively associated with their AA.

2.2.3. Control Variables

Our control variables—age, gender, time with organization, and hierarchical position—were selected based on prior research linking these demographics to AA (Logg et al. 2019; Mahmud et al. 2022). Age18 and hierarchical position19 are often associated with differing levels of technological acceptance (Araujo et al. 2020; Campion 1989; Hill 1981, p. 105; Logg et al. 2019; Mahmud et al. 2022, p. 13). Gender has been shown to influence perceptions of AI fairness (Köchling and Wehner 2020) and therefore AA. Time spent in an organization20 is another important factor, as longer-tenured employees may be more resistant to changes involving AI (Delaney and D’Agostino 2015; Kozlowski 1987).

2.2.4. Final Research Model

Based on the above developments, Figure 2 summarizes our conceptual research model. As can be seen, it is an extension of the conceptual framework of Mahmud et al. (2022), which summarizes the factors likely to influence algorithmic aversion. Unlike the latter, however, we focus specifically on AA within the recruitment process. In addition, we attempt to explain AA in HR departments by new factors derived from the work of Cao et al. (2021) and Schepman and Rodway (2020, 2023), who focus specifically on AI-based instruments and, in particular, on AI in the decision-making process. This allows us both to refine our understanding of AA and to investigate, through a hypothetico-deductive method, predictors of AA that have never been tested before.

3. Methods

3.1. Data Collection and Sample Characteristics

This study is based on a survey of private and public HR professionals in Switzerland conducted between November 2022 and March 2023.
The associations HR Vaud (N = 777), HR Tessin (N = 270), and the Zürcher Gesellschaft für Personalmanagement (N ≅ 600) all agreed to distribute the questionnaire to their networks and to carry out at least one follow-up survey at three-week intervals after the first distribution of the questionnaire. HR Genève (N = 720) and HR Valais (N = 330) distributed the questionnaire to their networks only once.
Since 1848, the Swiss federal political system has consisted of three levels of governance: the federal state, the cantons, and the communes. The principle of subsidiarity (Sciarini 2023) confers broad autonomy on the cantons and communes in their policies and in the way they organize their public administrations, particularly in terms of infrastructure and information systems (Ladner et al. 2019). However, contextual differences can be observed in this area, which is why we surveyed the Federal Personnel Office (OFPER) (N = 1), the 26 cantonal HR departments, and 168 of Switzerland’s 2136 municipalities. Regarding the latter, we chose to restrict ourselves to those with more than 10,000 inhabitants (OFS 2021), as the size of a municipality also determines whether it has an HR department (Ladner and Haus 2021). This admittedly arbitrary threshold allowed us to ensure that the respondents were established members of the HR function. Each public authority was invited to participate in our questionnaire three times, at three-week intervals, by e-mail and post. Finally, 324 responses were received, for a return rate of 11.20%.21 Given the heavy workloads of Swiss HR professionals, this rate was deemed acceptable. To respect Switzerland’s linguistic diversity, the questionnaire was translated into three of the four languages officially recognized by the Swiss Confederation (German, French, and Italian), as well as English. The individual characteristics of the respondents are shown in Table 1. Overall, this sample was characterized by a predominance of experienced professionals in managerial roles, with a slight overrepresentation of French-speaking Swiss.

3.2. Preventing Bias

Organizational behavior research often faces methodological biases, particularly when researchers rely on self-administered questionnaires (Podsakoff et al. 2012). In some cases, this can threaten the validity of the observed relationships between variables, as well as the conclusions inferred from them (Pandey et al. 2008). Good questionnaire design, a clear data collection strategy, and post hoc data analysis are three ways of mitigating potential measurement biases and verifying that they are neither present nor influential in the data (Podsakoff et al. 2012). To this end, we guaranteed complete anonymity for all respondents (Pandey et al. 2008). The invitation to complete the questionnaire was accompanied by a description of the aims of our study and a reminder of the essential rules of scientific ethics. The respondents were also asked to answer freely and were informed that none of the information gathered would be passed on to anyone else. Although not strictly required when using the partial least squares structural equation modeling (PLS-SEM) method (Hair et al. 2021, pp. 11–12), post hoc statistical tests (skewness and kurtosis) were nevertheless carried out to check the normality of our variables (Rouanet and Leclerc 1970). Subsequently, our measurement and structural models were tested to ensure that our results met the standards for using PLS-SEM in HRM (Ringle et al. 2020). The results are presented in Appendix A and Appendix B.
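For illustration only, a minimal sketch of this kind of skewness and kurtosis screening is given below. The item names, the pandas/scipy tooling, and the flagging threshold of 2 are assumptions made for the example; the study's actual values are those reported in Appendix A.

```python
# Minimal sketch of a post hoc skewness/kurtosis screening of survey items.
# Item column names ("aa_1", "pc_1", ...) are hypothetical placeholders.
import pandas as pd
from scipy.stats import skew, kurtosis

def screen_items(items: pd.DataFrame, threshold: float = 2.0) -> pd.DataFrame:
    """Return skewness and excess kurtosis per item and flag values beyond the threshold."""
    stats = pd.DataFrame({
        "skewness": items.apply(lambda col: skew(col.dropna())),
        "kurtosis": items.apply(lambda col: kurtosis(col.dropna())),  # excess kurtosis (0 = normal)
    })
    stats["flagged"] = (stats["skewness"].abs() > threshold) | (stats["kurtosis"].abs() > threshold)
    return stats

# Hypothetical usage on the Likert-scale item columns of the survey data frame:
# item_cols = survey[["aa_1", "aa_2", "pc_1", "pc_2", "pc_3", "pc_4", "pc_5"]]
# print(screen_items(item_cols))
```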

3.3. Measurements

3.3.1. Dependent Variable

Our dependent variable, AA to AI in recruitment (Melick 2020, p. 44), is a latent construct (Williams and O’Boyle 2008) whose two items were measured on a five-point Likert scale, ranging from (1) “strongly disagree” to (5) “strongly agree”. This is a type of ordinal scale that, following Blaikie (2003, p. 24) and Anderfuhren-Biget et al. (2010, p. 223), we treat as continuous when applying PLS-SEM (Hair et al. 2021).

3.3.2. Independent Variables

Most of our independent variables were measured on a five-point Likert scale, ranging from (1) “strongly disagree” to (5) “strongly agree”. This is notably the case for the GAAIS+ and GAAIS− constructs, measured by 12 and 8 items, respectively, for a total of 20 items, as selected by Schepman and Rodway (2020), which relate to respondents’ general attitude toward AI and which space does not allow us to detail here (see Appendix A). PT was measured by means of four indicators reflecting the extent of an individual’s fear, concern, and/or anxiety about the risks that, in his or her view, HR AI represents (Cao et al. 2021; Chen and Zahedi 2016). PSUS was measured using the same three indicators as Cao et al. (2021) regarding the possibility of AI making bad decisions at a given time or in the future. PSEV was adapted from the seven indicators of Cao et al. (2021) to capture the extent to which an individual perceives that HR AI (1) perpetuates cultural stereotypes, (2) amplifies discrimination, (3) reproduces institutional biases, (4) intensifies systemic biases, (5) errs due to the difficulty of specifying the HR problem to be solved, (6) uses inadequate architectures or models to solve HR problems, or (7) produces poor results due to insufficient training of the data on which it is based. PDEV was measured using four items concerning the impact of HR AI-assisted decision-making on (1) employees’ ability to learn, (2) their career development, (3) loss of control over their personal development, and (4) loss of opportunity to learn from their own experiences (Cao et al. 2021). PWB was measured by the six indicators developed by Cao et al. (2021) based on the work of Agogo and Hess (2018) and Brougham and Haar (2018), concerning how AI-assisted decision-making leads HR employees to feel (1) relaxed, (2) anxious, (3) redundant, (4) useless, (5) inferior, and (6) satisfied with their job. The reuse of Cao et al.’s (2021) indicators was relevant to studying HR employees’ AA, as these authors used them to study managers’ attitudes and behaviors towards using artificial intelligence for organizational decision-making. Finally, the PC construct was measured on a four-point Likert scale22 using five items reflecting (1) individuals’ discomfort with sharing information without prior authorization, (2) their concerns about the misuse of personal information, (3) their inherent fears about the security of data storage, (4) their concerns about the use of personal data collected within organizations, and (5) their opinions of organizations’ propensity to share data collected without authorization (Araujo et al. 2020; Baek and Morimoto 2012). In short, we borrowed our indicators from the most recent texts available, on the criterion that the measurement scales had been previously validated at least in an exploratory way. Our control variables, namely, age, gender, seniority, and position in the hierarchy, were single-item constructs. Appendix A details the distribution, skewness, kurtosis, and exact wording of all items used in this study.

3.4. Analysis Procedure

This section briefly summarizes our analyses, which are presented in full in Appendix B, using PLS-SEM (Hair et al. 2021). To carry out our research, we produced three models in line with the recommendations of Hair et al. (2021, p. 157). First, we grouped the respondents from both the private and public sectors (N = 324). Second, we focused on the private sector: the observations of the public respondents were removed and the same model was rerun, reducing the sample from 324 to 157. In the third and final model, we focused solely on the public sector respondents (N = 154); the remaining 13 respondents could not be assigned to either sector. In conducting this research, we employed a hypothetico-deductive approach, beginning with the formulation of hypotheses based on existing theory. These hypotheses were then tested empirically through PLS-SEM, allowing us to validate or refute them. This approach ensured a rigorous, theory-driven analysis while also providing flexibility to refine our models based on the empirical evidence gathered from the different sectoral groups.
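For illustration, the three-model split described above could be sketched as follows. The file name, the "sector" column, and its labels are hypothetical placeholders; the actual estimation was carried out with a dedicated PLS-SEM tool.

```python
# Sketch of the full-sample / private-only / public-only split (hypothetical names).
import pandas as pd

survey = pd.read_csv("hr_survey_2022_2023.csv")      # hypothetical file name

model_1 = survey                                       # full sample (N = 324)
model_2 = survey[survey["sector"] == "private"]        # private sector only (N = 157)
model_3 = survey[survey["sector"] == "public"]         # public sector only (N = 154)

# The same PLS-SEM specification is then re-estimated on each subset,
# as recommended by Hair et al. (2021).
print(len(model_1), len(model_2), len(model_3))
```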

3.4.1. Preliminary Considerations

Statistically, the use of PLS-SEM structural equations is justified when a theoretical model includes latent constructs and involves testing complex relationships between them derived from a theoretical framework (Hair et al. 2021, p. 22), which is the case in this study. That said, the data used must meet certain requirements to ensure that the PLS-SEM method retains sufficient statistical power and that the results obtained can be generalized beyond the sample considered (Hair et al. 2021, p. 15). The 10-times rule (Hair et al. 2021, p. 16) and the inverse square root method (Hair et al. 2021, pp. 17–18) were considered here. The number of iterations required for a model to converge must also be below 300 (Hair et al. 2021, p. 82), which was the case for our three models.
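As a hedged illustration of these two checks, the sketch below implements the 10-times rule and the inverse square root method. The constant 2.486 is the value commonly reported for a 5% significance level and 80% power in the inverse square root method (Kock and Hadaya's formulation, as summarized in Hair et al. 2021); the minimum path coefficient of 0.15 and the count of ten arrows pointing at the AA construct (six predictors plus four controls) are illustrative assumptions rather than values reported by the study.

```python
# Sketch of the two minimum-sample-size heuristics mentioned above.

def min_n_inverse_sqrt(min_path_coefficient: float, z_constant: float = 2.486) -> float:
    """Minimum N so that the smallest relevant path coefficient can reach significance
    (constant 2.486 assumes a 5% significance level and 80% power)."""
    return (z_constant / abs(min_path_coefficient)) ** 2

def min_n_ten_times(max_arrows_into_construct: int) -> int:
    """10-times rule: 10 times the largest number of structural paths pointing at any construct."""
    return 10 * max_arrows_into_construct

# Illustrative check with assumed inputs:
print(min_n_inverse_sqrt(0.15))   # ~274.7 -> a sample of N = 324 would satisfy it
print(min_n_ten_times(10))        # 100 -> also satisfied
```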
Some common limitations are associated with PLS-SEM, but they do not apply to our study. Concerns about small sample sizes affecting statistical power were mitigated by our adherence to sample size guidelines (N = 324, N = 157, N = 154). Potential biases related to self-reported data were addressed using validated measurement scales (Coltman et al. 2008; Hanafiah 2020). Moreover, while non-normality of data can be an issue, PLS-SEM is robust in such cases (Hair et al. 2021), and our data distribution presented no significant issues. Finally, although traditional SEM methods like CB-SEM are sometimes preferred, PLS-SEM was chosen for its flexibility and suitability for handling moderately small samples and complex models.

3.4.2. Evaluation of Measurement Models

The evaluation of our measurement models depended on the type of latent construct used. In our case, these were reflective latent constructs (Coltman et al. 2008; Hanafiah 2020), as they exist independently of the items used to measure them (Borsboom et al. 2004). Typically, perceptual, attitudinal, or personality trait measurement scales are reflective constructs (Coltman et al. 2008, p. 1252). Further, reflective constructs assume that causality runs from the concept to the indicator (ibid.). Their indicators must also share a common theme and be interchangeable (Coltman et al. 2008, p. 1253), which was the case in our study.
Empirically, the evaluation of a model composed of reflective constructs involves various tests, for which we refer to the thresholds commonly accepted in the literature (Cheung et al. 2023; Hair et al. 2021). These tests are divided into four stages. The first examines the reliability of the indicators. In this respect, we produced the indicator loadings reproduced in Appendix B, Table A1, and checked whether they met the commonly accepted threshold of 0.708 (Hair et al. 2021). The second assesses the internal consistency of the constructs. At this stage, we produced the following metrics: Cronbach’s alpha (α), the composite reliability ρC, and ρA, reported in Appendix B, Table A2, and checked whether they met the commonly accepted threshold of 0.70 (ibid.). The third examines the convergent validity of each conceptual measure. The relevant metric here is the average variance extracted (AVE), which must be greater than 0.50 (ibid.); it is also reported in Appendix B, Table A2. The fourth and final stage examines the discriminant validity of the constructs, that is, the extent to which they differ from one another. To this end, we produced heterotrait–monotrait ratios of correlations (HTMT), which we bootstrapped to check that they were significantly below the 0.85 threshold (ibid.). The HTMT values are reported in Appendix B, Table A3. Ultimately, all our latent constructs met the evaluation criteria inherent in our measurement models.
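To make the convergent-validity and internal-consistency metrics concrete, the sketch below computes the AVE and the composite reliability ρC from a set of standardized indicator loadings. The loadings shown are hypothetical; the study's actual values are reported in Appendix B.

```python
# Sketch of AVE and composite reliability computed from standardized loadings.
import numpy as np

def ave(loadings: np.ndarray) -> float:
    """Average variance extracted: mean of squared standardized loadings (target > 0.50)."""
    return float(np.mean(loadings ** 2))

def composite_reliability(loadings: np.ndarray) -> float:
    """Composite reliability rho_C = (sum of loadings)^2 / [(sum of loadings)^2 + sum of error variances]
    (target > 0.70)."""
    num = np.sum(loadings) ** 2
    return float(num / (num + np.sum(1 - loadings ** 2)))

example_loadings = np.array([0.78, 0.81, 0.74, 0.85])      # hypothetical 4-item construct
print(round(ave(example_loadings), 3))                      # ~0.634 -> above the 0.50 threshold
print(round(composite_reliability(example_loadings), 3))    # ~0.873 -> above the 0.70 threshold
```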

3.4.3. Evaluation of Structural Models

Hair et al. (2021, p. 116) propose the following systematic approach, which we used to assess the quality of our structural models: first, examine whether they contain collinearity problems; second, assess the significance and relevance of the structural relationships; and finally, evaluate their explanatory and predictive power. As Table A4 in Appendix B shows, the variance inflation factor (VIF) was employed to assess potential collinearity issues. A VIF value above 5 for any latent construct signals a collinearity problem, and concerns can emerge even at lower VIF values, specifically between 3 and 5 (Hair et al. 2021, p. 117). Therefore, it is preferable for VIF values to remain below 3, which was the case for all three of our models. The second step in evaluating our structural models was to examine the significance of the path coefficients and their relevance (Hair et al. 2021, p. 117). Hair et al. (2021, p. 125) recommend inspecting the bootstrapped paths and setting the number of bootstrap samples to 10,000, which we did. Table A5 in Appendix B shows this analysis. Finally, we evaluated the predictive power of our models using the root-mean-square error (RMSE) (Hair et al. 2021, p. 120). According to the thresholds outlined by Hair et al. (2021, p. 129), the predictive power of model 1 was moderate, that of model 2 was strong, allowing us to draw conclusions with confidence beyond our sample, and that of model 3 was also moderate. Full details of these analyses are provided in Appendix B, Table A6. Ultimately, there was no reason to suspect that our structural models were unreliable with respect to these criteria (Cheung et al. 2023; Hair et al. 2021). We present and interpret our results below.
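As an illustration of the collinearity check, the sketch below computes VIF values from latent variable scores. The construct names are hypothetical; in the study, these diagnostics were produced by the PLS-SEM software and are reported in Table A4.

```python
# Sketch of a VIF check on (hypothetical) latent variable scores.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

def vif_table(latent_scores: pd.DataFrame) -> pd.Series:
    """VIF per predictor construct; values below 3 are preferred, above 5 signal collinearity."""
    X = sm.add_constant(latent_scores)  # add intercept column for the auxiliary regressions
    return pd.Series(
        [variance_inflation_factor(X.values, i) for i in range(1, X.shape[1])],
        index=latent_scores.columns,
    )

# Hypothetical usage with construct scores exported from the PLS-SEM tool:
# scores = pd.DataFrame({"PC": ..., "GAAIS_pos": ..., "GAAIS_neg": ..., "PT": ..., "PDEV": ..., "PWB": ...})
# print(vif_table(scores))
```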

4. Results

4.1. Description of Results

This section identifies the determinants of AA for both private and public players (N = 324). It then looks specifically at the results for the private sector (N = 157) and, finally, the results for the public sector (N = 154)23 obtained by dividing our dataset, as recommended by Hair et al. (2021, p. 157).

4.1.1. Complete Database

Table 2 provides a detailed summary of our results for all the respondents (N = 324).
Our model explained 55.6%24 of the variance in the AA dependent variable, whereas PSUS and PSEV explained 36.9% of the variance in the PT predictor.
The first column of this table shows that all the main independent variables were significantly related to AA. HR departments’ PC were positively correlated with AA (β = 0.074 *); GAAIS+ was negatively correlated with AA (β = −0.211 ***); GAAIS− was positively associated with AA (β = 0.414 ***); HR employees’ PT vis-à-vis HR AI was positively related to AA (β = 0.169 ***); PDEV was positively correlated with AA (β = 0.177 ***); and PWB was also positively correlated with AA (β = 0.181 ***). By contrast, the control variables (i.e., age, gender, respondents’ seniority, and hierarchical level) were not significantly associated with this dependent variable.
Turning to the second column of the table, which relates to the explanatory factors for the PT associated with algorithmic HRM decision-making, our data show a first statistically significant relationship between PSUS and PT (β = 0.402 ***) and a second statistically significant association between PSEV and PT (β = 0.369 ***).

4.1.2. Private Sector vs. Public Sector

Table 3 provides a comparative summary of our results for the respondents from the private (N = 157) and public (N = 154) sectors.
As this table shows, there were notable differences between the AA of the private and public respondents. First, 65.3% of the variance in the AA of the former was explained by our predictors, compared to 50.1% for the latter. Conversely, 38.4% of the variance in PT among public respondents was explained by PSUS and PSEV, versus 35.5% among private respondents. Among private respondents, PC was significantly related to AA in recruitment (β = 0.079 *) but not among public respondents (β = 0.092). Next, GAAIS+ was not significantly associated with AA among the former (β = −0.101), while it explained part of the variance in AA among our public sector respondents (β = −0.296 ***). GAAIS− was significantly associated with AA in both groups (β = 0.521 ***; β = 0.373 ***); the size of these coefficients makes GAAIS− a particularly strong predictor of AA in both sectors. Somewhat surprisingly, PT was significantly linked to the AA of private HR employees (β = 0.269 ***) but not to that of their public sector counterparts (β = 0.091), while PDEV followed the opposite pattern, not being significantly linked to the AA of private respondents (β = 0.100) but significantly associated with AA among public respondents (β = 0.249 ***). Finally, PWB was significantly related to the AA of both private (β = 0.153 *) and public (β = 0.172 *) HR departments, with a slightly stronger coefficient for the latter. As in our first model, no control variable was significantly linked to the AA of private or public respondents.
Our results show that the predictors of PT were significant for both private and public employees. However, PSUS was more strongly related to PT among the private respondents (β = 0.430 ***) than among the public sector respondents (β = 0.400 ***). Conversely, PSEV explained a slightly greater share of the variance in PT among public employees (β = 0.364 ***) than among the private sector respondents (β = 0.352 ***).

4.2. Confirmed/Invalidated Assumptions

Our results confirm several of our hypotheses while invalidating others. The results are summarized in Table 4, which indicates with a “yes” each time a hypothesis is validated (p-value ≤ 0.05 at least) and with a “no” each time a hypothesis is invalidated (p-value > 0.05).

5. Interpretation of Results and Discussion

This study examined the reasons behind the aversion of HR departments to algorithmic decision-making in the recruitment process. In general, PC were positively related to this aversion. This result is interesting, as it is consistent with the literature (Araujo et al. 2020; Thurman et al. 2019; Vimalkumar et al. 2021), indicating that the association between PC and AA observed in other fields, such as media (Thurman et al. 2019) or expert systems (Araujo et al. 2020), also prevails for algorithms deployed within the staff hiring process. However, when we look at the difference between private and public HR employees, we notice that, for a relatively similar average level of PC,25 this association is only significant within private HR departments, despite a stronger coefficient of association among the respondents belonging to the public sector. While the reason for this difference is unknown, there are several possible explanations, such as more frequent or high-profile26 abuses of data protection and confidentiality in private organizations (Kim and Bodie 2020; Tiwari 2022). Given the strict legal frameworks within public organizations (LIPAD 2020; LPrD 2007), it could also be that their employees are less concerned about data security and confidentiality breaches27. Further research could, thus, focus on the reasons behind this association and even on the moderators that amplify or diminish it.
Overall, a generally positive attitude toward AI was systematically linked to a lower level of AA regarding recruitment. However, as our study shows, this result differed between the private and public sectors: the GAAIS+ of the former was not associated with a lower level of AA, whereas that of the latter was. One possible explanation could relate to the distinct operational environment of the public sector. Public organizations pursue a different goal from private organizations; while the former are interested in producing public goods (Browning et al. 2009) or "public value" (Ritz et al. 2022, p. 528), the latter are guided by market signals as well as economic considerations such as profitability and profit-seeking (Ritz et al. 2022). Therefore, public sector HR employees may perceive algorithms as tools for delivering public goods and services more equitably and effectively, which could enhance their positive attitudes toward AI and, in turn, reduce their aversion to algorithms for decision-making in recruitment. Conversely, a negative general attitude toward AI was systematically associated with a higher level of AA, whether in the private or the public sector. Nonetheless, the higher correlation in the private sector implies that GAAIS− is more relevant in explaining AA for this group. This stronger correlation can be attributed to several factors specific to the private sector. First, the private sector’s focus on competition, efficiency, and profitability often amplifies concerns about the risks associated with AI. Employees may perceive that AI, especially in recruitment, poses a threat to their job security, autonomy, or fairness in decision-making (Brynjolfsson and McAfee 2014). These concerns contribute to a more pronounced negative attitude, which, in turn, intensifies AA. Additionally, the private sector tends to adopt AI technologies more aggressively, exposing employees to these tools more frequently in critical functions such as recruitment and performance evaluation. This heightened exposure increases the likelihood that employees will witness or anticipate negative outcomes, such as biased hiring decisions or dehumanization of the recruitment process, further fueling AA (Davenport and Ronanki 2018). As our results and the literature show (Acquisti et al. 2015), privacy concerns are also more prevalent in the private sector, where breaches or misuse of personal information can have severe financial and reputational consequences. These worries can exacerbate skepticism toward AI in recruitment, where sensitive personal data are processed. The potential for such tools to mishandle information or make biased decisions only heightens the negative attitudes, reinforcing the correlation between GAAIS− and AA. Thus, these results confirm the relevance of the link between general attitudes toward AI and individuals’ algorithm preferences (Workman 2005; Mahmud et al. 2022), here applied to algorithms embedded in the staff hiring process. For the designers of such tools, this means paying particular attention to how they develop them. Faulty design, such as Amazon’s recruitment algorithm being revealed as gender-biased (Dastin 2022), could worsen the general attitude of HR departments towards these tools and, as a result, increase AA. From a managerial perspective, this influence of the general attitude towards AI thus implies a certain degree of uncertainty for organizations tempted to equip themselves with it.
Indeed, the general attitude of actors towards AI depends on a multitude of factors external to organizations and that are difficult for them to control (Mahmud et al. 2022; Schepman and Rodway 2023). As shown by the wording of our two latent constructs (Appendix A), GAAIS refers, in particular for GAAIS+, to economic opportunities, improved performance thanks to AI, desirability of AI, or positive emotions associated with AI such as excitement or being impressed by it; for GAAIS−, it refers, for example, to concerns about AI such as unethical use, the fact that it makes mistakes, or negative emotions such as being uncomfortable with it or finding something sinister. As such, an exogenous shock28 inducing a change in relation to these different elements would be enough to vary the attitude of actors toward AI and, as a result, transform their willingness to prefer human to algorithmic decision-making. To address these external influences and improve the acceptance of AI tools, organizations could take a proactive approach by engaging in transparent communication, showcasing positive AI use cases29, and actively managing their public relations strategies. For instance, by emphasizing AI’s benefits for diversity (Azoulay et al. 2020), HR departments could counteract negative media coverage that might otherwise heighten skepticism or concerns. Additionally, fostering a culture of trust and inclusion, where employees feel heard and reassured about the ethical use of AI, may also mitigate the effects of societal trends or media scrutiny. Organizations could further invest in continuous AI education and training for employees, helping them understand how AI operates and dispelling myths that often fuel AA. By addressing these factors, companies can better manage the narrative surrounding AI tools and reduce resistance within the workforce.
Similar to Cao et al. (2021), who focus on another outcome, namely the intention to use AI for organizational decision-making, our study reveals that the threat posed by AI in HRM—characterized in particular by the belief that it is likely to make bad decisions (PSUS) as well as the belief that it generates or reproduces stereotypes, biases, or discrimination (PSEV)—predicts a significant share of the aversion of HR departments to algorithmic decision-making in recruitment. Again, this result is consistent with the literature. In extant studies, despite the many strategic advantages offered by HR AI (Prikshat et al. 2023; Strohmeier 2022), there are numerous examples of abuses linked to algorithmic decision-making in HR. Amazon’s male-biased recruitment algorithm (Dastin 2022) can be cited as an example, as can the threat of dehumanization of the hiring process posed by algorithmic selection (Fritts and Cabrera 2021). In any case, this implies that both the designers of these tools and organizations must minimize the various threats potentially represented by algorithms in recruitment. The significance of the links between PSUS and PT and between PSEV and PT also sheds light on the way forward. It is a question of putting in place the necessary safeguards to mitigate the risk of the algorithms used making bad decisions. In this respect, many players are calling for induced biases to be corrected in the data (Dastin 2022; Datta et al. 2014; Lambrecht and Tucker 2019; Raub 2018). Historically, men have occupied managerial positions more often; without appropriate correctives, an algorithm can therefore systematically exclude women or other categories of people from the shortlist of candidates selected for the next stage of the recruitment process (Broecke 2023). As the significance of the link between PSEV and PT shows, there are many issues to be addressed: cultural stereotypes, institutional biases, systemic biases, the quality of the results produced, which depends on sufficient training of the models used, and the suitability of the models for the process to be addressed, particularly given the difficulty of specifying the terms leading to a result.30 “Building entirely bias-free algorithms may not be possible, therefore the focus should be on reducing bias, and not eliminating it,” as Broecke (2023, p. 46) reminds us. He also mentions that humans constantly make HR decisions based on biased judgments, an assertion Amadieu (2013) does not refute. In addition to the technical means of correcting these biases, organizations could play the transparency card, which generally reduces reticence toward AI (Langer and König 2023). Regular audits and publicity31 of the algorithms used could also reduce the threat that individuals imagine they entail (Park et al. 2021; Zhou et al. 2023). Finally, Nagtegaal (2021) pointed out that the more familiar individuals are with algorithms, the less averse they are to them. In short, minimizing the PT of the algorithms used in recruitment requires working on these different points.
As hypothesized, the respondents’ aversion to algorithms can also be explained by their concerns about personal development. On closer examination, however, this concern is essentially the prerogative of public sector employees. For the latter, the use of AI should not be detrimental to developing skills and opportunities to learn on the job. This attitude makes sense in light of the literature and the trajectory taken by employment relationships within modern public organizations, particularly in Switzerland (Emery and Gonin 2009). Once confined to alienating employment relationships and forms of work organization32 (Engels 1845) that were, at the very least, not conducive to their personal fulfillment and development (Lalive D’Épinay and Garcia 1988), Swiss employees today evolve in a work environment that allows them to expect personalized and motivating working conditions33 within an organization involved in their career development (Emery and Gonin 2009). With a relatively higher level of PDEV than private HR employees,34 however, public HR actors are more concerned about AI challenging their learning opportunities and skill development, leading to greater AA. Does this mean that their expectations of their organizations are too high? Public organizations must keep their operational activities in focus and cannot be held entirely responsible for training everyone (Emery and Gonin 2009). However, individual responsibility for skills development should not overshadow the still insufficient strategic orientation of HR departments in Swiss public organizations, which too rarely implement systematic career and skills development plans for their employees (Emery and Gonin 2009). Add to this the competition from algorithms, and it is easy to see why public employees do not simply accept the rhetoric that AI will, for example, free up their time to focus on higher-value-added tasks or complement them in their work, but instead harbor concerns about the development of their skills. This line of thought would, however, benefit from further empirical research.
Our results also show that respondents’ AA can be explained by the well-being concerns they have about HR AI. This result is interesting considering the literature. In their work on the factors that lead managers to develop AA and on the best ways to overcome it, Mahmud et al. (2023, p. 5) identify the risks associated with technological innovations as the main factor explaining AA. While they emphasize—based on innovation resistance theory (Ram and Sheth 1989) and subsequent developments (Arif et al. 2020; Leong et al. 2020)—the importance of physical and psychological risks, they do not focus on actors’ concerns about AI-induced changes to their well-being. In this sense, our work complements Mahmud et al.’s (2023) model by highlighting—among both private and public actors—the role played by stress, anxiety, feelings of inferiority, and feelings of uselessness in the aversion to algorithmic decision-making systems in recruitment.
Finally, the lack of statistical significance for the control variables—age, gender, time with the organization, and hierarchical position—suggests that, in this context, demographic factors do not play a direct role in shaping algorithmic aversion. This indicates that traditional demographics, while relevant in other studies, may be less pertinent when examining AA in the context of Swiss HRM. In Swiss organizations, employees across different demographic groups tend to receive similar support and opportunities to adapt to technological change (Emery and Gonin 2009). Such an environment likely evens out algorithmic aversion across these groups, making factors such as age, tenure, or hierarchical position less impactful.

6. Conclusions

6.1. Practical Implications

Given the findings relating to PC, several practical implications can be drawn for HR practitioners and policymakers aiming to minimize algorithmic aversion in recruitment processes. First, addressing privacy concerns should be a priority, particularly in the private sector, where incidents of data misuse may be more prominent (Dastin 2022). To reduce algorithmic aversion, HR departments should enhance data protection measures and transparently communicate safeguards, supported by regular audits. This can foster trust among employees and candidates, addressing key privacy concerns. Second, improving AI transparency is crucial (Langer and König 2023). HR practitioners should offer clear explanations of how recruitment algorithms work, their benefits, and their limitations. This can be achieved through training sessions and information campaigns, which can help reduce fears and misunderstandings surrounding AI systems (Binns et al. 2018). Promoting a culture of transparency will also encourage employees to view AI as a supportive tool rather than a threat. Third, policymakers should develop clear governance frameworks that align with existing legal regulations, especially around data privacy and ethical AI use. This is particularly important in the public sector, where stringent legal frameworks already exist but could benefit from clearer guidelines on the ethical use of AI (Leicht-Deobald et al. 2022). Finally, fostering collaboration between HR and IT departments is key to ensuring AI systems are both user-friendly and customized to the organization’s specific needs. Establishing continuous feedback loops through which HR professionals can voice concerns and suggest improvements could enhance trust in the system and reduce aversion.
Given the findings on the varying influence of general attitudes toward AI on algorithmic aversion, several practical implications arise for HR practitioners and policymakers, differentiated by sector. First, in the public sector, where GAAIS+ significantly reduces AA, organizations should emphasize AI as a tool for enhancing transparency, equity, and efficiency. Framing algorithms in the recruitment process as tools that support merit-based hiring and reduce human biases will resonate with the public sector’s commitment to fairness and ethical standards (Raisch and Krakowski 2021). Public institutions can also leverage educational initiatives that highlight AI’s alignment with legal regulations, reinforcing the perception that AI tools contribute to ethical and compliant decision-making (Floridi and Cowls 2022). Second, in the private sector, where GAAIS+ has less impact on AA, HR should focus on engaging employees in the AI implementation process. This can be achieved by involving HR staff in the design and deployment of AI systems, creating a sense of ownership and reducing anxiety about job displacement. Offering pilot programs where employees can test AI tools and provide feedback will help build trust and foster a positive relationship with AI (Tambe et al. 2019). By demonstrating that AI is a complement to, rather than a replacement for, human skills, companies can mitigate fears and create a more collaborative work environment. Third, addressing negative attitudes toward AI is crucial for both sectors. In the public sector, HR departments should focus on transparency and compliance, showing how AI tools ensure fair and accountable recruitment practices. In the private sector, organizations should actively dispel concerns that AI is purely profit-driven. Instead, HR departments should communicate how AI enhances fairness in hiring by eliminating biases, ensuring that AI is perceived as a tool for ethical improvement rather than exploitation (Binns et al. 2018).
Regarding PT, our results highlight the need for organizations to implement safeguards in their AI recruitment tools to minimize the risk of poor decisions and biases (PSUS and PSEV). Practically, this involves ensuring transparency in AI systems by conducting regular audits and publicly sharing the results to reduce concerns among HR professionals. Additionally, organizations should prioritize bias correction in the data used for AI training to prevent the perpetuation of stereotypes or discriminatory practices. Familiarizing HR employees with AI tools through training programs can also help reduce their aversion, fostering greater acceptance of algorithmic decision-making in recruitment processes. Lastly, promoting transparency in algorithmic decisions and addressing cultural biases directly can contribute to minimizing PT and improving acceptance of AI in HRM.
Given the impact of PDEV on AA, at least two practical implications arise for public sector HR practitioners. First, public organizations could proactively integrate AI into personalized career development pathways. As noted by Emery and Gonin (2009), the Swiss public sector has evolved towards working environments where employees expect continuous growth opportunities. HR departments should leverage AI to support this trajectory by embedding AI tools into upskilling initiatives, helping employees identify skill gaps, and offering tailored learning programs. This could reframe AI as a partner in career advancement rather than as a competitor and could thus mitigate aversion to it. Second, highlighting successful initiatives where AI has already contributed to job enrichment could alleviate concerns about its impact on skill development. Public sector organizations could therefore showcase real examples of AI integration that have improved employee learning, positioning decision-making algorithms as tools that enhance their professional capabilities. Demonstrating these successes will help frame AI as a catalyst for skill development rather than as a threat.
Finally, given the significant role of well-being concerns in shaping algorithmic aversion, several practical implications arise for HR practitioners aiming to reduce stress and anxiety associated with AI in recruitment. First, HR departments could invest in work design adjustments that mitigate stress. This could involve restructuring tasks to ensure AI complements human work, providing employees with opportunities to develop skills alongside AI rather than feeling displaced by it. Creating clear pathways where AI supports professional growth could reduce feelings of uselessness or inferiority. Second, organizations could implement programs that address AI-related anxiety. Providing resources such as stress management workshops and mental health support specifically targeted at AI integration could help employees cope with the emotional impact of technological change (Mahmud et al. 2022). Third, involving employees in the co-design of AI systems, giving them a role in shaping how AI is integrated into their workflow, could reduce feelings of inferiority and foster a sense of ownership and empowerment (Vrontis et al. 2021), reducing their aversion to it.

6.2. Limitations and Future Research

In short, our work has enabled us to identify new factors that explain aversion to algorithmic decision-making in recruitment. When HR professionals prefer decisions made by humans to those made by algorithms, this stems from doubts about the confidentiality and privacy guaranteed by algorithms, from their general positive or negative attitude toward AI, from the various threats they perceive in algorithms, and from concerns about their personal development or well-being. However, there were notable differences between private and public actors. Among the former, AA was significantly influenced by PC and PT; among the latter, AA varied greatly according to GAAIS+ and to the personal development concerns linked to HR AI. This study therefore responds, on the one hand, to the calls of Mahmud et al. (2022, 2023) to expand the set of explanatory factors of AA and to test new predictors in the context of specific algorithmic decision-making, as is the case in HRM. On the other hand, and in contrast to Dargnies et al. (2024) or Lee (2018)—who focus essentially on the aversion of the general public to hiring algorithms—our study focuses on the point of view of HR employees. Additionally, our empirical results have allowed us to successfully unify the IAAAM proposed by Cao et al. (2021) with the GAAIS+ and GAAIS− constructs developed by Schepman and Rodway (2020, 2023) while incorporating the concept of privacy concerns specifically into the context of HRM. This integrative approach not only enhances our understanding of algorithmic aversion but also demonstrates the utility of these models in explaining HR professionals’ resistance to algorithmic decision-making, particularly in recruitment.
However, there are several limitations to our work, starting with its cross-sectional nature (Connelly 2016), which prevents us from drawing causal inferences from the results. Indeed, our results are limited to describing the observed relationships between our variables at a given point in time, which makes it difficult to predict how the different factors studied in this article influence HR employees’ aversion to recruitment algorithms. While potential biases in survey responses, such as social desirability bias or prior exposure to AI, may have influenced the participants’ perceptions, the risk of these biases unduly skewing the overall findings is likely mitigated by the diversity of the respondents and the robust methodology employed. As a result, we believe that these biases, while possible, do not fundamentally undermine the validity of our conclusions. Additionally, although the sample is restricted to HR professionals in Switzerland, the moderate-to-strong predictive power of our models (Appendix B) suggests that these results may have broader relevance beyond our immediate context.
Future research could extend these findings by conducting longitudinal studies that explore how attitudes toward AI evolve over time, particularly as exposure to recruitment algorithms increases or as new technologies emerge in HR. Building on the current findings, future research should particularly examine how concerns about confidentiality, personal development, and well-being—each of which emerged as a significant predictor of AA in this study—develop over time and potentially interact with broader organizational culture shifts as AI use becomes more widespread in recruitment. Another promising avenue for further investigation is to explore this phenomenon across different industries in order to assess whether similar patterns of algorithmic aversion exist. Expanding the geographical scope to include a cross-cultural analysis would also provide deeper insights into how national or regional cultural factors shape the acceptance or resistance toward AI in recruitment. Finally, experimental designs that test interventions to reduce algorithmic aversion, such as increasing transparency or user control, could offer practical strategies to improve AI adoption in HR practices (Langer and König 2023). These interventions could directly address some of the key drivers of aversion highlighted in this study, such as concerns over privacy and personal development, thus building a bridge between our findings and actionable measures for organizations.
Compared with the literature review by Mahmud et al. (2022), our study focused on a limited number of explanatory factors for AA, most of which are individual and had not previously been studied in relation to AA. The aim was to remain sufficiently innovative while, above all, avoiding overfitting (Ozili 2023). Researchers interested in extending this line of thinking could also examine cultural differences in AA. In this respect, Gao et al. (2020) find, for example, that Chinese respondents are less concerned about the privacy risks associated with the use of technology and consequently display lower AA. In addition, differences in institutional environments and resources, particularly between developed countries, such as Switzerland,35 and developing countries (Yamakawa et al. 2008), could influence AA factors. For example, Lennartz et al. (2021) suggest that individuals in developing countries might show more appetite for disruptive technologies, such as AI, and thus be less averse to algorithmic decision-making. Having investigated new and unique causes of AA, researchers could also study, following the example of Jia et al. (2023), the impact of AA on job performance and on other job and well-being outcomes. Indeed, understanding how algorithmic aversion influences long-term HR strategies will be key. As AI tools increasingly automate repetitive tasks, HR professionals may need to shift toward more strategic roles, focusing on talent development and organizational culture while acquiring new skills in AI literacy and data analytics. However, this transition could also trigger concerns about job displacement and insecurity, as automation takes over technical tasks traditionally managed by HR. HR professionals will also likely face growing challenges related to ethical and legal considerations. Ensuring transparency, fairness, and privacy in algorithmic decisions will become critical, especially as regulatory frameworks evolve, demanding greater accountability from organizations. In any case, this study establishes a basis from which researchers interested in the antecedents and effects of aversion to algorithmic decision-making in both private and public HR can develop their own research questions.

Funding

This research received no external funding.

Institutional Review Board Statement

As anonymized questionnaires were used, this research does not fall within the scope of the Swiss Federal Law on Research Involving Human Subjects, and we did not need to consult an ethics committee before administering it.

Informed Consent Statement

Informed consent was obtained from all subjects involved in this study.

Data Availability Statement

Due to confidentiality guarantees given to the respondents and in order to comply with the legal requirements for the storage of scientific data in the canton of Vaud, complete data are only available on request.

Conflicts of Interest

The author declares no conflicts of interest.

Appendix A. Variables, Mean, Standard Deviation, Skewness, and Kurtosis

| Concept | Item | Mean | SD | Skew. | Kurt. | Wording |
|---|---|---|---|---|---|---|
| Dependent variable | | | | | | |
| Algorithmic aversion (Melick 2020) | aa1 | 2.91 | 1.19 | 0.070 | −0.968 | Selecting new employees using professional judgement of the hiring manager is more effective than selecting new employees using a formula designed to predict job performance. |
| | aa2 | 2.93 | 1.22 | 0.023 | −1.03 | Using hiring managers’ review of resumes is more likely to identify high quality applicants than using computerized text analysis to screen resumes. |
| Independent variables | | | | | | |
| Privacy concerns (Baek and Morimoto 2012) | pc_1 (R) | 3.39 | 0.840 | −1.45 | 4.52 | I feel uncomfortable when information is shared without permission. |
| | pc_2 | 1.72 | 0.928 | 1.22 | 3.59 | I am concerned about misuse of personal information. |
| | pc_3 | 1.69 | 0.918 | 1.24 | 3.61 | I feel fear that information may not be safe while stored. |
| | pc_4 | 1.70 | 0.927 | 1.25 | 3.65 | I believe that personal information is often misused. |
| | pc_5 | 1.70 | 0.933 | 1.22 | 3.51 | I think my organization shares information without permission. |
| GAAIS+ (Schepman and Rodway 2020) | gaais_1 | 2.91 | 1.35 | 0.084 | −1.22 | For routine transactions, I would rather interact with an artificially intelligent system than with a human. |
| | gaais_2 | 2.95 | 1.36 | 0.017 | −1.23 | Artificial intelligence can provide new economic opportunities for this country. |
| | gaais_4 | 3.04 | 1.31 | −0.039 | −1.16 | Artificially intelligent systems can help people feel happier. |
| | gaais_5 | 2.92 | 1.33 | 0.025 | −1.18 | I am impressed by what artificial intelligence can do. |
| | gaais_7 | 3.00 | 1.35 | −0.058 | −1.23 | I am interested in using artificially intelligent systems in my daily life. |
| | gaais_11 | 3.04 | 1.35 | −0.048 | −1.22 | Artificial intelligence can have positive impacts on people’s wellbeing. |
| | gaais_12 | 3.07 | 1.38 | −0.068 | −1.27 | Artificial intelligence is exciting. |
| | gaais_13 | 3.12 | 1.34 | −0.085 | −1.19 | An artificially intelligent agent would be better than an employee in many routine jobs. |
| | gaais_14 | 3.08 | 1.39 | −0.076 | −1.28 | There are many beneficial applications of artificial intelligence. |
| | gaais_16 | 3.09 | 1.35 | −0.042 | −1.24 | Artificially intelligent systems can perform better than humans. |
| | gaais_17 | 3.07 | 1.36 | −0.049 | −1.25 | Much of society will benefit from a future full of artificial intelligence. |
| | gaais_18 | 3.07 | 1.36 | −0.089 | −1.20 | I would like to use artificial intelligence in my own job. |
| GAAIS− (Schepman and Rodway 2020) | gaais_3 | 2.90 | 1.24 | 0.063 | −1.05 | Organizations use artificial intelligence unethically. |
| | gaais_6 | 2.89 | 1.24 | 0.086 | −1.02 | I think artificially intelligent systems make many errors. |
| | gaais_8 | 2.90 | 1.27 | 0.097 | −1.07 | I find artificial intelligence sinister. |
| | gaais_9 | 2.90 | 1.26 | 0.049 | −1.08 | Artificial intelligence might take control of people. |
| | gaais_10 | 2.85 | 1.25 | 0.128 | −1.05 | I think artificial intelligence is dangerous. |
| | gaais_15 | 2.84 | 1.29 | 0.146 | −1.10 | I shiver with discomfort when I think about future uses of artificial intelligence. |
| | gaais_19 | 2.85 | 1.29 | 0.119 | −1.12 | People like me will suffer if artificial intelligence is used more and more. |
| | gaais_20 | 2.91 | 1.27 | 0.065 | −1.09 | Artificial intelligence is used to spy on people. |
| Perceived threat (Cao et al. 2021) | pt_1 | 2.92 | 1.25 | −0.006 | −1.10 | My fear of exposure to AI in HR’s risks is high. |
| | pt_2_R (R) | 2.96 | 1.30 | 0.097 | −1.15 | The extent of my worry about AI in HR’s risks is low. |
| | pt_3 | 2.90 | 1.27 | −4.93 | −1.11 | The extent of my anxiety about potential loss due to AI in HR’s risks is high. |
| | pt_4 | 2.97 | 1.30 | −0.023 | −1.16 | I’m very concerned about the potential misuse of HR AI. |
| Perceived susceptibility (Cao et al. 2021) | psus_1 | 3.02 | 1.32 | −0.05 | −1.19 | AI is likely to make bad HR decisions in the future. |
| | psus_2 | 2.94 | 1.31 | 0.023 | −1.15 | The chances of AI making bad HR decisions are great. |
| | psus_3 | 3.00 | 1.40 | −0.022 | −1.31 | AI may make bad HR decisions at some point. |
| Perceived severity (Cao et al. 2021) | psev_1 | 2.92 | 1.32 | 0.033 | −1.20 | AI in HR may perpetuate cultural stereotypes in available data. |
| | psev_2 | 2.89 | 1.36 | 0.100 | −1.25 | AI in HR may amplify discrimination in available data. |
| | psev_3 | 3.04 | 1.34 | −0.065 | −1.21 | AI in HR may be prone to reproducing institutional biases in available data. |
| | psev_4 | 2.95 | 1.34 | 0.024 | −1.20 | AI in HR may have a propensity for intensifying systemic bias in available data. |
| | psev_5 | 2.92 | 1.34 | 0.041 | −1.19 | AI in HR may have the wrong objective due to the difficulty of specifying the objective explicitly. |
| | psev_6 | 2.98 | 1.39 | −0.004 | −1.31 | AI in HR may use inadequate structures such as problematic models. |
| | psev_7 | 2.98 | 1.34 | 0.006 | −1.22 | AI in HR may perform poorly due to insufficient training. |
| Personal development concerns (Cao et al. 2021) | pdev_1_R (R) | 3.03 | 1.30 | −0.051 | −1.10 | AI HR supported or automatised decision-making would have a positive impact on my learning ability. |
| | pdev_2_R (R) | 2.98 | 1.34 | −0.035 | −1.19 | AI HR supported or automatised decision-making would have a positive impact on my career development. |
| | pdev_3 | 3.00 | 1.31 | −0.040 | −1.13 | I would hesitate to use HR AI for fear of losing control of my personal development. |
| | pdev_4 | 2.97 | 1.32 | 0.001 | −1.14 | It scares me to think that I could lose the opportunity to learn from my own experience using AI HR supported or automatised decision-making. |
| Personal well-being concerns (Cao et al. 2021) | pwb_1 (R) | 3.12 | 1.35 | −0.081 | −1.22 | AI in HR makes me feel relaxed. |
| | pwb_2 | 2.87 | 1.39 | 0.106 | −1.27 | AI in HR makes me feel anxious. |
| | pwb_3 | 2.94 | 1.37 | 0.056 | −1.24 | AI in HR makes me feel redundant. |
| | pwb_4 | 2.91 | 1.38 | 0.076 | −1.25 | AI in HR makes me feel useless. |
| | pwb_5 | 2.85 | 1.37 | 0.061 | −1.26 | AI in HR makes me feel inferior. |
| | pwb_6 (R) | 3.03 | 1.36 | −0.051 | −1.22 | AI in HR would increase my job satisfaction. |
| Control variables | | | | | | |
| Private/Public | PP | 1.49 | 0.500 | 0.019 | 1.00 | Your organization is: 1: Private; 2: Public or semi-public |
| Age | age | 4.98 | 0.822 | −0.526 | 2.78 | What age bracket do you belong to? 1: Under 18; 2: 18–25 years old; 3: 26–34 years old; 4: 35–44 years; 5: 45–54 years old; 6: 55–64 years old; 7: 65 and over |
| Gender | genre | 0.53 | 0.499 | −0.139 | 1.01 | Please indicate your gender below: 1: Male; 2: Female; 3: I don’t recognize myself in any of these categories |
| Time with organization | anciennete | 3.60 | 1.31 | −0.507 | 2.01 | How long have you worked for your organization? 1: Less than one year; 2: From more than 1 to less than 3 years; 3: From more than 3 to less than 5 years; 4: From more than 5 to less than 10 years; 5: More than 10 years |
| Hierarchical position | hierarchie | 2.83 | 1.24 | −0.543 | 1.65 | What position do you occupy in your organization’s hierarchy? 1: HR manager/HR specialist (without managerial function); 2: Local manager; 3: Middle management; 4: Executive |

Appendix B. Analyses

Table A1. Indicator loadings and reliability.

Model 1: both private and public respondents (N = 324)

| Construct | Indicator | Loading |
|---|---|---|
| PC | pc_1 | 0.955 |
| PC | pc_2 | 0.787 |
| PC | pc_3 | 0.864 |
| PC | pc_4 | 0.823 |
| PC | pc_5 | 0.846 |
| GAAIS+ | gaais_1 | 0.975 |
| GAAIS+ | gaais_2 | 0.754 |
| GAAIS+ | gaais_4 | 0.744 |
| GAAIS+ | gaais_17 | 0.764 |
| GAAIS+ | gaais_7 | 0.727 |
| GAAIS+ | gaais_11 | 0.795 |
| GAAIS+ | gaais_18 | 0.727 |
| GAAIS+ | gaais_13 | 0.795 |
| GAAIS+ | gaais_14 | 0.713 |
| GAAIS+ | gaais_16 | 0.722 |
| GAAIS+ | gaais_5 | 0.786 |
| GAAIS+ | gaais_12 | 0.802 |
| GAAIS− | gaais_3 | 0.963 |
| GAAIS− | gaais_6 | 0.837 |
| GAAIS− | gaais_8 | 0.811 |
| GAAIS− | gaais_9 | 0.784 |
| GAAIS− | gaais_10 | 0.765 |
| GAAIS− | gaais_15 | 0.760 |
| GAAIS− | gaais_19 | 0.766 |
| GAAIS− | gaais_20 | 0.752 |
| PT | pt_1 | 0.962 |
| PT | pt_2 | 0.840 |
| PT | pt_3 | 0.877 |
| PT | pt_4 | 0.890 |
| PSUS | psus_1 | 0.959 |
| PSUS | psus_2 | 0.916 |
| PSUS | psus_3 | 0.888 |
| PSEV | psev_1 | 0.975 |
| PSEV | psev_2 | 0.833 |
| PSEV | psev_3 | 0.816 |
| PSEV | psev_4 | 0.874 |
| PSEV | psev_5 | 0.864 |
| PSEV | psev_6 | 0.835 |
| PSEV | psev_7 | 0.832 |
| PDEV | pdev_1 | 0.969 |
| PDEV | pdev_2 | 0.896 |
| PDEV | pdev_3 | 0.887 |
| PDEV | pdev_4 | 0.882 |
| PWB | pwb_1 | 0.974 |
| PWB | pwb_2 | 0.860 |
| PWB | pwb_3 | 0.879 |
| PWB | pwb_4 | 0.844 |
| PWB | pwb_5 | 0.863 |
| PWB | pwb_6 | 0.874 |
| AA | aa_1 | 0.963 |
| AA | aa_2 | 0.952 |
Note that the indicator loadings and reliability of models 2 and 3 also respect the commonly accepted thresholds of 0.708 and 0.50 (Hair et al. 2021). For reasons of brevity, we do not report the reliability of our indicators in detail: mathematically, when a loading exceeds 0.708, its squared value (which is how indicator reliability was computed in R Studio) is necessarily higher than 0.50.
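For readers who wish to reproduce this step, the sketch below illustrates how such a model can be specified and estimated in R with the seminr package, which implements the Hair et al. (2021) workflow and whose function names appear elsewhere in this appendix. It is a minimal, hypothetical illustration: `hr_data` stands for the (non-public) survey data set, only three constructs are shown, and the exact specification used for our models may differ.

```r
# Minimal sketch of the PLS-SEM estimation step with seminr (Hair et al. 2021).
# 'hr_data' is a placeholder for the survey data; the construct list is abbreviated.
library(seminr)

measurement_model <- constructs(
  composite("PC",   multi_items("pc_", 1:5)),    # privacy concerns
  composite("PDEV", multi_items("pdev_", 1:4)),  # personal development concerns
  composite("AA",   multi_items("aa_", 1:2))     # algorithmic aversion
)

structural_model <- relationships(
  paths(from = c("PC", "PDEV"), to = "AA")
)

pls_model <- estimate_pls(
  data              = hr_data,
  measurement_model = measurement_model,
  structural_model  = structural_model
)

model_summary <- summary(pls_model)
model_summary$loadings      # indicator loadings, as reported in Table A1
model_summary$loadings^2    # indicator reliability = squared loadings (>0.50 expected)
model_summary$reliability   # alpha, rhoC, AVE, and rhoA, as reported in Table A2
```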
Table A2. Assessment of the constructs’ internal consistency, reliability, and convergent validity.

Model 1: both private and public respondents (N = 324)

| Constructs | α | rhoC | AVE | rhoA |
|---|---|---|---|---|
| Privacy concerns | 0.909 | 0.932 | 0.734 | 0.923 |
| GAAIS+ | 0.851 | 0.920 | 0.608 | 0.947 |
| GAAIS− | 0.922 | 0.937 | 0.652 | 0.935 |
| Perceived threat | 0.915 | 0.940 | 0.798 | 0.929 |
| Personal development concerns | 0.929 | 0.950 | 0.826 | 0.951 |
| Personal well-being concerns | 0.943 | 0.955 | 0.781 | 0.952 |
| Perceived susceptibility | 0.911 | 0.944 | 0.850 | 0.926 |
| Perceived severity | 0.942 | 0.953 | 0.745 | 0.952 |
| Algorithmic aversion | 0.910 | 0.957 | 0.917 | 0.919 |
Note that Cronbach’s alpha (α), rhoC, AVE, and rhoA of models 2 and 3 also respect the commonly accepted thresholds of 0.70—for α, rhoC, and rhoA—and 0.50—for AVE (Cheung et al. 2023; Hair et al. 2021).
Table A3. Discriminant validity.

Model 1: both private and public respondents (N = 324)

| | PC | GAAIS+ | GAAIS− | PT | PDEV | PWB | AGE | TWO | H | G | PSUS | PSEV |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| GAAIS+ | 0.065 | | | | | | | | | | | |
| GAAIS− | 0.134 | 0.348 | | | | | | | | | | |
| PT | 0.069 | 0.216 | 0.311 | | | | | | | | | |
| PDEV | 0.060 | 0.111 | 0.150 | 0.230 | | | | | | | | |
| PWB | 0.036 | 0.284 | 0.230 | 0.221 | 0.163 | | | | | | | |
| AGE | 0.046 | 0.069 | 0.066 | 0.014 | 0.044 | 0.028 | | | | | | |
| TWO | 0.013 | 0.093 | 0.080 | 0.075 | 0.018 | 0.068 | 0.127 | | | | | |
| H | 0.027 | 0.036 | 0.044 | 0.121 | 0.019 | 0.068 | 0.300 | 0.266 | | | | |
| G | 0.085 | 0.044 | 0.056 | 0.062 | 0.075 | 0.085 | 0.149 | 0.154 | 0.345 | | | |
| PSUS | 0.068 | 0.144 | 0.160 | 0.536 | 0.232 | 0.162 | 0.038 | 0.055 | 0.043 | 0.014 | | |
| PSEV | 0.066 | 0.116 | 0.151 | 0.498 | 0.116 | 0.061 | 0.022 | 0.068 | 0.109 | 0.033 | 0.272 | |
| AA | 0.177 | 0.487 | 0.661 | 0.445 | 0.353 | 0.420 | 0.035 | 0.072 | 0.012 | 0.112 | 0.234 | 0.199 |
As can be seen, all HTMT values are below the 0.85 threshold (Hair et al. 2021), so there is no reason to suspect that one latent construct in our model measures exactly the same dimension as another. Models 2 and 3 show the same pattern. In addition, Henseler et al. (2015) suggest using bootstrap confidence intervals to determine whether the HTMT values are significantly different from 1 and below our threshold of 0.85. For the sake of brevity, the results of this procedure are not reported here; the resulting confidence intervals nevertheless confirm the discriminant validity of our various constructs.
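As an illustration of how this check can be run, the following hypothetical sketch, reusing the `pls_model` object from the estimation sketch under Table A1, extracts the HTMT ratios and the bootstrap confidence intervals suggested by Henseler et al. (2015) via seminr; the seed value is illustrative.

```r
# Sketch of the HTMT-based discriminant validity check (Henseler et al. 2015),
# reusing the 'pls_model' object estimated in the sketch under Table A1.
library(seminr)

summary(pls_model)$validity$htmt   # HTMT ratios (Table A3); values should stay below 0.85

# Bootstrap confidence intervals to check whether HTMT values differ significantly from 1
boot_model <- bootstrap_model(seminr_model = pls_model, nboot = 10000, seed = 123)
summary(boot_model, alpha = 0.05)$bootstrapped_HTMT
```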
Table A4. Collinearity issues (VIF).

Model 1: private and public respondents (N = 324)

| Endogenous construct | Predictor | VIF |
|---|---|---|
| AA | PC | 1.030 |
| AA | GAAIS+ | 1.196 |
| AA | GAAIS− | 1.236 |
| AA | PT | 1.187 |
| AA | PDEV | 1.079 |
| AA | PWB | 1.145 |
| AA | AGE | 1.118 |
| AA | TWO | 1.106 |
| AA | H | 1.306 |
| AA | G | 1.165 |
| PT | PSUS | 1.069 |
| PT | PSEV | 1.068 |
The variance inflation factor (VIF) is used to examine possible collinearity problems. For each latent construct, a value greater than 5 indicates a collinearity problem. However, collinearity concerns may arise at lower VIF values, between 3 and 5 (Hair et al. 2021, p. 117). Ideally, then, VIF values should be below 3, which is the case here for our three models.
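The corresponding check in seminr is a one-liner; the sketch below again assumes the `pls_model` object defined in the earlier estimation sketch.

```r
# Sketch of the collinearity check: VIF values of the predictor constructs
# for each endogenous construct (Table A4), based on the 'pls_model' object above.
library(seminr)

summary(pls_model)$vif_antecedents
```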
Table A5. Significance and relevance of the models.

Bootstrapped paths: Model 1 (N = 324)

| Path | Est. | Boot. Mean | Boot. SD | T-Stat. | 2.5% CI | 97.5% CI |
|---|---|---|---|---|---|---|
| PC → AA | 0.074 | 0.077 | 0.031 | 2.391 * | 0.016 | 0.137 |
| GAAIS+ → AA | −0.211 | −0.212 | 0.049 | −4.251 *** | −0.310 | −0.115 |
| GAAIS− → AA | 0.414 | 0.413 | 0.053 | 7.765 *** | 0.311 | 0.519 |
| PT → AA | 0.169 | 0.170 | 0.049 | 3.410 *** | 0.073 | 0.266 |
| PDEV → AA | 0.177 | 0.175 | 0.047 | 3.702 *** | 0.082 | 0.269 |
| PWB → AA | 0.181 | 0.180 | 0.050 | 3.630 *** | 0.085 | 0.280 |
| AGE → AA | −0.005 | −0.006 | 0.038 | −0.150 | −0.080 | 0.068 |
| TWO → AA | −0.022 | −0.021 | 0.038 | −0.583 | −0.095 | 0.052 |
| H → AA | −0.018 | −0.018 | 0.037 | −0.492 | −0.091 | 0.056 |
| G → AA | −0.044 | −0.043 | 0.039 | −1.117 | −0.120 | 0.034 |
| PSUS → AA | 0.401 | 0.402 | 0.055 | 7.268 *** | 0.294 | 0.510 |
| PSEV → AA | 0.369 | 0.370 | 0.054 | 6.787 *** | 0.263 | 0.475 |
AA: algorithmic aversion; PT: perceived threat; PC: privacy concerns; GAAIS+: general attitude toward artificial intelligence, positive; GAAIS−: general attitude toward artificial intelligence, negative; PDEV: personal development concerns; PWB: personal well-being concerns; AGE: age; TWO: time with organization; H: hierarchical position; G: gender; PSUS: perceived susceptibility; PSEV: perceived severity. Two-tailed significance thresholds: * t-value ≥ 1.960 (95% confidence interval); ** t-value ≥ 2.576 (99% confidence interval); *** t-value ≥ 3.291 (99.9% confidence interval).
Hair et al. (2021, p. 125) recommend inspecting bootstrapped paths and setting the number of bootstrap resamples to 10,000, which was carried out for our three models. For the sake of brevity, however, we report only the first. The next step is to examine the coefficient of determination (R2) of the endogenous constructs (Hair et al. 2021, p. 118); as our path coefficients and R2 values are described in the article, we do not repeat them here.
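The bootstrapping step itself can be sketched as follows, again assuming the `pls_model` object from the estimation sketch under Table A1; the seed value is illustrative.

```r
# Sketch of the bootstrapping procedure recommended by Hair et al. (2021):
# 10,000 resamples, then inspection of the bootstrapped path coefficients.
library(seminr)

boot_model <- bootstrap_model(seminr_model = pls_model, nboot = 10000, seed = 123)

boot_summary <- summary(boot_model, alpha = 0.05)  # 95% two-tailed intervals
boot_summary$bootstrapped_paths                    # estimates, bootstrap means and SDs,
                                                   # t-statistics, and CIs (Table A5)
```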
Table A6. Predictive power of our models.

| Model | Out-of-sample RMSE | aa_1 | aa_2 | pt_1 | pt_2 | pt_3 | pt_4 |
|---|---|---|---|---|---|---|---|
| Model 1 | PLS | 0.791 | 0.910 | 0.946 | 1.094 | 1.033 | 1.085 |
| Model 1 | LM | 0.834 | 0.955 | 0.914 | 1.170 | 1.049 | 1.140 |
| Model 2 | PLS | 0.777 | 0.800 | 0.989 | 1.115 | 1.033 | 1.111 |
| Model 2 | LM | 0.943 | 0.970 | 1.009 | 1.308 | 1.229 | 1.249 |
| Model 3 | PLS | 0.814 | 0.968 | 0.909 | 1.075 | 1.034 | 1.043 |
| Model 3 | LM | 1.005 | 1.130 | 0.866 | 1.095 | 0.905 | 1.092 |
To produce these values, we first generated predictions using the predict_pls() function. We performed this procedure with k = 10 folds and ten repetitions, setting noFolds = 10 and reps = 10, and used the predict_DA approach (Hair et al. 2021, p. 129). This procedure indicated that the predictive power of model 1 was moderate, which is quite good considering that our survey reached around 12% of our total HR population. For model 2, the procedure indicated that the predictive power of the model was strong, which means that our conclusions can be applied with confidence beyond our sample. Finally, the predictive power of our third model was also moderate according to the thresholds described by Hair et al. (2021, p. 129).
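A sketch of this out-of-sample prediction procedure, under the same assumptions as the previous sketches (seminr, with the `pls_model` object defined under Table A1), is given below.

```r
# Sketch of the PLSpredict procedure (Hair et al. 2021, p. 129):
# k = 10 folds, ten repetitions, direct-antecedents (predict_DA) approach.
library(seminr)

predict_results <- predict_pls(
  model     = pls_model,
  technique = predict_DA,
  noFolds   = 10,
  reps      = 10
)

summary(predict_results)   # PLS vs. LM out-of-sample RMSE per indicator (Table A6)
```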

Notes

1
Strohmeier (2020, p. 54) differentiates between “algorithmic decision-making in HRM” (ADM) and “human decision-making in HRM” (HDM).
2
In its simplest sense, an algorithm is a set of instructions expressed in a particular computer language such as Java, Python, or C++ that are used to solve a well-defined problem (Casilli 2019; Introna 2016, p. 21). Just like a recipe, it is then used to produce a result based on instructions. This conception is similar to that of Gillespie (2014, p. 1), who defines an algorithm as a “set of encoded procedures for transforming input data into a desired output, based on specified calculations”. Note that, depending on the data they are required to process, algorithms are divided into several fields that constitute artificial intelligence techniques such as natural language processing or machine learning (Prikshat et al. 2023, p. 5; Strohmeier 2022).
3
Mahmud et al. (2023, p. 17) differentiate between “rule-based AI decision systems” and “machine-learning-based decision systems”. The first term refers to the definition of an algorithm by Gillespie (2014). The latter refers to the evolutionary capacity of the decision system, enabling it to evolve over time.
4
For a non-HR reference, see Acharya et al. (2018).
5
“Job design ambiguity” (Böhmer and Schinnenburg 2023, p. 17).
6
“Transparency ambiguity” (Böhmer and Schinnenburg 2023, p. 18).
7
“Performance ambiguity” (ibid.).
8
“Data ambiguity” (ibid.).
9
Note, for example, Lee (2018). However, more recent studies, specifically those concerned with AA in HRM, such as those by Cecil et al. (2024), Fleiss et al. (2024), or Dargnies et al. (2024), appeared after the synthesis by Mahmud et al. (2022). However, the latter are not interested in the same predictors as our work, not in the same type of decision, and not in the same actors.
10
Mahmud et al. (2022, p. 11) defined the latter as: “Individual factors related to algorithm aversion consist of a wide range of factors including psychological factors, personality traits, demography, and individuals’ familiarity with algorithms and tasks. By psychological factors, we imply those factors related to individuals’ reasoning, logic, thinking, and emotion in connection to algorithmic decisions”.
11
The original definition of the latter is: “The process by which the organization defines the profiles of the positions to be filled, selects the most suitable people to fill them and then ensures their integration” (Emery and Gonin 2009, p. 43).
12
This result is consistent with those of Berger et al. (2021) and Önkal et al. (2009).
13
This distinction was confirmed three years later by Schepman and Rodway (2023).
14
“For the general attitudes, items that loaded onto the positive factor expressed societal or personal benefits of AI, or a preference of AI over humans in some contexts (e.g., in routine transactions), with some items capturing emotional matters (AI being exciting, impressive, making people happier, enhancing their wellbeing)” (Schepman and Rodway 2020, p. 11).
15
“In the negative subscale, more items were eliminated from the initial pool, and those that were retained were dominated by emotions (sinister, dangerous, discomfort, suffering for “people like me”), and dystopian views of the uses of AI (unethical, error prone-ness, taking control, spying)”. (ibid.)
16
Attitude is defined as “an individual’s positive or negative feelings about using AI for organizational decision-making” (Cao et al. 2021, p. 5).
17
Generally, for the impacts of the introduction of new IT and AI tools on individuals, see Agogo and Hess (2018), Vimalkumar et al. (2021), and Zhang (2013).
18
For the details: Older people tend to perceive algorithmic decisions as less useful (Araujo et al. 2020) and, as a result, to trust them less (Lourenço et al. 2020). Thurman et al. (2019) show, for example, that older people prefer news recommendations made to them by human beings rather than by algorithms. However, this link between age and algorithm aversion is not entirely clear-cut. In the medical field, Ho et al. (2005) show that older people trust algorithms more than humans when it comes to decision support systems. Logg et al. (2019), on the other hand, point out that there is no association between age and algorithm aversion. In addition to these considerations, there are also studies in the field of AI-related digital inequalities. Lutz (2019) suggests that age is not in itself a barrier to the propensity of individuals to be comfortable with technology, but that context may explain much of individuals’ appetite for technical objects. In this respect, the Swiss context appears to be relatively favorable to the use of new technologies (Equey and Fragnière 2008). That said, we believe that the relative novelty of algorithms (Strohmeier 2022) encourages mistrust and, thus, the AA of older individuals.
19
For the details: A higher hierarchical level of respondents is sometimes associated with a greater appetite for information technology (El-Attar 2006). According to Campion (1989) or Sainsaulieu and Leresche (2023, p. 19), this is essentially due to the desire of managers, executives, or hierarchical superiors to increase control over their employees’ activities. The introduction of decision-making algorithms into the hiring process would enable them to do just that.
20
For the details: The introduction of new technologies can represent a challenge for long-serving employees with well-established work routines. For example, new technologies can lead to changes in job responsibilities, increased workloads, additional training to learn how to use them, and even have a profound impact on an organization’s policies. People with certain unique or rare skills now threatened or called into question by a technical system may see their introduction as a challenge to their power (Crozier and Friedberg 1977). In short, since AI-based decision-making systems necessarily introduce change into organizations (Malik et al. 2020, 2023), and since the most senior individuals within organizations are the most likely to have mastered the rules, to possess habits, or to control areas of uncertainty—which, incidentally, they would not like to see called into question—we believe that seniority is positively associated with algorithmic aversion.
21
324 × 100/2892 = 11.203%, where 324 is the total number of responses out of a potential 2892 respondents. Respondents took an average of 25 min to complete the survey.
22
A four-point Likert scale, unlike an odd-point scale—such as a five-point scale—does not offer a mid-point. This forces respondents to express a clearer opinion, avoiding the tendency to choose a neutral position. This is particularly useful for measuring attitudes, sentiments or opinions about sensitive subjects such as, here, privacy concerns related to AI, where neutrality could mask real feelings (Fowler 2013).
23
For 13 missing observations.
24
The remaining 44.4% of the unexplained variance suggests the presence of other influential factors not captured by this model. To address this, several potential drivers of algorithmic aversion could be considered. Those interested in completing our analysis could thus refer to the determinants identified by Mahmud et al. (2022), which are, however, not summarized in full in our theoretical framework, for the sake of brevity. The same reasoning applies to the other variances reported below.
25
The average latent construct in the private sector is 1.72, compared with 1.62 in the public sector.
26
“Misuse of personal data” (Tiwari 2022).
27
This stricter regulatory framework in public organizations might indeed lead to lower concerns about data security (LIPAD 2020; LPrD 2007). For example, research by Veale and Edwards (2018) on the implementation of General Data Protection Regulation (GDPR) in public institutions shows that compliance with data protection laws is often more rigorous in the public sector due to higher levels of accountability and transparency requirements. Furthermore, studies focusing on public administration, such as those by Bannister and Connolly (2014), emphasize that public sector employees typically operate under stricter internal audit and compliance measures, reducing their perceived vulnerability to data breaches. This could explain why, despite a higher coefficient of association between PC and AA, the relationship is not statistically significant in the public sector. On the other hand, private organizations, while also subject to GDPR, may not always apply the same level of stringent data protection practices, especially when balancing cost efficiency and compliance (Tiwari 2022). Therefore, HR employees in the private sector may experience higher concerns regarding algorithmic decision-making, especially in relation to the potential misuse of personal data during recruitment processes (Kim and Bodie 2020).
28
Such as the Cambridge Analytica scandal (Isaak and Hanna 2018).
29
Such as bias reduction in hiring (Köchling and Wehner 2020), predictive analytics for workforce planning (Alabi et al. 2024), and improvements in talent management and employee engagement (Tambe et al. 2019; Meijerink et al. 2021).
30
As our use of latent constructs illustrates, HR issues such as motivation and commitment are difficult to measure in the real world.
31
In other words, make them accessible to the public.
32
According to Seeman (1967), job alienation consists of five dimensions: powerlessness, meaninglessness, normlessness, isolation, and self-estrangement. This definition has inspired numerous studies, including Soffia et al. (2022).
33
Emery and Gonin (2009, pp. 332–33) briefly review the different ways in which humans are viewed within organizations: first, as an economic individual driven by greed (Taylor [1911] 1957); then, as a social individual drawn to and nourished by the interactions they experience within organizations (Deal and Kennedy 1983; Schein 2010); afterwards, as an individual in search of self-fulfillment (Emery and Gonin 2009, p. 332); and finally, as a complex person whose motivations within their organization are influenced by factors as diverse and varied as their individual career path, their hopes, their ambitions, and so on. This complexity is not without its contradictions, as Wüthrich et al. (2008) explain, requiring the organization to adapt its management to each individual and to personalize working conditions as much as possible.
34
The average latent construct in the private sector is 2.90, compared with 3.06 in the public sector.
35
We use the definitions and rankings of the United Nations Development Programme (2024).

References

  1. Acharya, Abhilash, Sanjay Kumar Singh, Vijay Pereira, and Poonam Singh. 2018. Big data, knowledge co-creation and decision making in fashion industry. International Journal of Information Management 42: 90–101. [Google Scholar] [CrossRef]
  2. Acquisti, Alessandro, Laura Brandimarte, and George Loewenstein. 2015. Privacy and human behavior in the age of information. Science 347: 509–14. [Google Scholar] [CrossRef] [PubMed]
  3. Agogo, David, and Traci J. Hess. 2018. “How does tech make you feel?” a review and examination of negative affective responses to technology use. European Journal of Information Systems 27: 570–99. [Google Scholar] [CrossRef]
  4. Agrawal, Ajay, Joshua Gans, and Avi Goldfarb. 2017. How AI will change the way we make decisions. Harvard Business Review 26: 1–7. [Google Scholar]
  5. Alabi, Khadijat Oyindamola, Adegoke A. Adedeji, Samia Mahmuda, and Sunday Fowomo. 2024. Predictive Analytics in HR: Leveraging AI for Data-Driven Decision Making. International Journal of Research in Engineering, Science and Management 7: 137–43. [Google Scholar]
  6. Amadieu, Jean-François. 2013. DRH: Le Livre Noir. Paris: Média Diffusion. [Google Scholar]
  7. Anderfuhren-Biget, Simon, Frédéric Varone, David Giauque, and Adrian Ritz. 2010. Motivating employees of the public sector: Does public service motivation matter? International Public Management Journal 13: 213–46. [Google Scholar] [CrossRef]
  8. Araujo, Theo, Natali Helberger, Sanne Kruikemeier, and Claes H. de Vreese. 2020. In AI we trust? Perceptions about automated decision-making by artificial intelligence. AI & Society 35: 611–23. [Google Scholar]
  9. Arif, Imtiaz, Wajeeha Aslam, and Yujong Hwang. 2020. Barriers in adoption of internet banking: A structural equation modeling-Neural network approach. Technology in Society 61: 101231. [Google Scholar] [CrossRef]
  10. Azoulay, Eva, Pascal Lefebvre, and Rachel Marlin. 2020. Using artificial intelligence to diversify the recruitment process at L’Oréal. Le Journal de l’école de Paris du Management 142: 16–22. [Google Scholar]
  11. Bader, Jon, John Edwards, Chris Harris-Jones, and David Hannaford. 1988. Practical engineering of knowledge-based systems. Information and Software Technology 30: 266–77. [Google Scholar] [CrossRef]
  12. Baek, Tae Hyun, and Mariko Morimoto. 2012. Stay away from me. Journal of Advertising 41: 59–76. [Google Scholar] [CrossRef]
  13. Bannister, Frank, and Regina Connolly. 2014. ICT, public values and transformative government: A framework and programme for research. Government Information Quarterly 31: 119–28. [Google Scholar] [CrossRef]
  14. Baudoin, Emmanuel, Caroline Diard, Myriam Benabid, and Karim Cherif. 2019. Digital Transformation of the HR Function. Paris: Dunod. [Google Scholar]
  15. Berger, Benedikt, Martin Adam, Alexander Rühr, and Alexander Benlian. 2021. Watch me improve-algorithm aversion and demonstrating the ability to learn. Business & Information Systems Engineering 63: 55–68. [Google Scholar]
  16. Binns, Reuben, Max Van Kleek, Michael Veale, Ulrik Lyngs, Jun Zhao, and Nigel Shadbolt. 2018. ‘It’s Reducing a Human Being to a Percentage’ Perceptions of Justice in Algorithmic Decisions. Paper presented at 2018 CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, April 21–26; pp. 1–14. [Google Scholar]
  17. Blaikie, Norman. 2003. Analyzing Quantitative Data: From Description to Explanation. London: Sage. [Google Scholar]
  18. Blum, Benjamin, and Friedemann Kainer. 2019. Rechtliche Aspekte beim Einsatz von KI in HR: Wenn Algorithmen entscheiden. Personal Quarterly 71: 22–27. [Google Scholar]
  19. Bogert, Eric, Aaron Schecter, and Richard T. Watson. 2021. Humans rely more on algorithms than social influence as a task becomes more difficult. Scientific Reports 11: 8028. [Google Scholar] [CrossRef] [PubMed]
  20. Böhmer, Nicole, and Heike Schinnenburg. 2023. Critical exploration of AI-driven HRM to build up organizational capabilities. Employee Relations: The International Journal 45: 1057–82. [Google Scholar] [CrossRef]
  21. Borsboom, Denny, Gideon J. Mellenbergh, and Jaap van Heerden. 2004. The concept of validity. Psychological Review 111: 1061. [Google Scholar] [CrossRef]
  22. Broecke, Stijn. 2023. Artificial intelligence and labor market matching. OECD Working Papers on Social Issues, Employment and Migration 284: 1–52. [Google Scholar]
  23. Brougham, David, and Jarrod Haar. 2018. Smart technology, artificial intelligence, robotics, and algorithms (STARA): Employees’ perceptions of our future workplace. Journal of Management & Organization 24: 239–57. [Google Scholar]
  24. Browning, Vicky, Fiona Edgar, Brendan Gray, and Tony Garrett. 2009. Realising competitive advantage through HRM in New Zealand service industries. The Service Industries Journal 29: 741–60. [Google Scholar] [CrossRef]
  25. Brynjolfsson, Erik, and Andrew McAfee. 2014. The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. New York: W. W. Norton & Company. [Google Scholar]
  26. Campion, Emily D., and Michael A. Campion. 2024. Impact of machine learning on personnel selection. Organizational Dynamics 53: 101035. [Google Scholar] [CrossRef]
  27. Campion, Michael G. 1989. Technophilia and technophobia. Australasian Journal of Educational Technology 5: 1–10. [Google Scholar] [CrossRef]
  28. Cao, Guangming, Yanqing Duan, John S. Edwards, and Yogesh K. Dwivedi. 2021. Understanding managers’ attitudes and behavioral intentions toward using artificial intelligence for organizational decision-making. Technovation 106: 102312. [Google Scholar] [CrossRef]
  29. Carpenter, Darrell, Diana K. Young, Paul Barrett, and Alexander J. McLeod. 2019. Refining technology threat avoidance theory. Communications of the Association for Information Systems 44: 1–40. [Google Scholar] [CrossRef]
  30. Casilli, Antonio. A. 2019. Waiting for the Robots-Investigating the Work of the Click. Paris: Média Diffusion. [Google Scholar]
  31. Cecil, Julia, Eva Lermer, Matthias F. C. Hudecek, Jan Sauer, and Susanne Gaube. 2024. Explainability does not mitigate the negative impact of incorrect AI advice in a personnel selection task. Scientific Reports 14: 9736. [Google Scholar] [CrossRef]
  32. Chen, Yan, and Fatemeh Mariam Zahedi. 2016. Individuals’ internet security perceptions and behaviors. MIS Quarterly 40: 205–22. [Google Scholar] [CrossRef]
  33. Chen, Zhisheng. 2023. Ethics and discrimination in artificial intelligence-enabled recruitment practices. Humanities and Social Sciences Communications 10: 567. [Google Scholar] [CrossRef]
  34. Cheung, Gordon W., Helena D. Cooper-Thomas, Rebecca S. Lau, and Linda C. Wang. 2023. Reporting reliability, convergent and discriminant validity with structural equation modeling: A review and best-practice recommendations. Asia Pacific Journal of Management 41: 745–783. [Google Scholar] [CrossRef]
  35. Coltman, Tim, Timothy M. Devinney, David F. Midgley, and Sunil Venaik. 2008. Formative versus reflective measurement models: Two applications of formative measurement. Journal of Business Research 61: 1250–62. [Google Scholar] [CrossRef]
  36. Connelly, Lynne M. 2016. Cross-sectional survey research. Medsurg Nursing 25: 369. [Google Scholar]
  37. Coron, Clotilde. 2023. Quantifying Human Resources: The Contributions of Economics and Sociology of Conventions. In Handbook of Economics and Sociology of Conventions. Cham: Springer International Publishing, pp. 1–19. [Google Scholar]
  38. Crozier, Michel, and Erhard Friedberg. 1977. L’acteur et le système. Paris: Seuil. [Google Scholar]
  39. Dargnies, Marie-Pierre, Rustamdjan Hakimov, and Dorothea Kübler. 2024. Aversion to hiring algorithms: Transparency, gender profiling, and self-confidence. Management Science. [Google Scholar] [CrossRef]
  40. Dastin, Jeffrey. 2022. Amazon scraps secret AI recruiting tool that showed bias against women. In Ethics of Data and Analytics. Boca Raton: Auerbach Publications, pp. 296–99. [Google Scholar]
  41. Datta, Amit, Michael Carl Tschantz, and Anupam Datta. 2014. Automated experiments on ad privacy settings: A tale of opacity, choice, and discrimination. arXiv arXiv:1408.6491. [Google Scholar]
  42. Davenport, Thomas H., and Rajeev Ronanki. 2018. Artificial intelligence for the real world. Harvard Business Review 96: 108–16. [Google Scholar]
  43. Deal, Terrence E., and Allan A. Kennedy. 1983. Corporate cultures: The rites and rituals of corporate life. Business Horizons 26: 82–85. [Google Scholar] [CrossRef]
  44. Delaney, Rob, and Robert D’Agostino. 2015. The Challenges of Integrating New Technology into an Organization. Philadelphia: La Salle University. [Google Scholar]
  45. Dietvorst, Berkeley J., Joseph P. Simmons, and Cade Massey. 2015. Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General 144: 114. [Google Scholar] [CrossRef]
  46. Dietvorst, Berkeley J., Joseph P. Simmons, and Cade Massey. 2018. Overcoming algorithm aversion: People will use imperfect algorithms if they can (even slightly) modify them. Management Science 64: 1155–70. [Google Scholar] [CrossRef]
  47. Duggan, James, Ultan Sherman, Ronan Carbery, and Anthony McDonnell. 2020. Algorithmic management and app-work in the gig economy: A research agenda for employment relations and HRM. Human Resource Management Journal 30: 114–32. [Google Scholar] [CrossRef]
  48. Dwivedi, Yogesh K., Laurie Hughes, Elvira Ismagilova, Gert Aarts, Crispin Coombs, Tom Crick, Yanqing Duan, Rohita Dwivedi, John Edwards, Aled Eirug, and et al. 2021. Artificial Intelligence (AI): Multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy. International Journal of Information Management 57: 101994. [Google Scholar] [CrossRef]
  49. El-Attar, Sanabel El-Hakeem. 2006. User involvement and perceived usefulness of information technology. Ph.D. thesis, Mississippi State University, Mississippi State, MS, USA. [Google Scholar]
  50. Emery, Yves, and David Giauque. 2016. Administration and public agents in a post-bureaucratic environment. In The Actor and Bureaucracy in the 21st Century. Los Angeles: University of California, pp. 33–62. [Google Scholar]
  51. Emery, Yves, and David Giauque. 2023. Chapter 12: Human resources management. In Modèle IDHEAP d’administration publique: Vue d’ensemble. Edited by N. Soguel, P. Bundi, T. Mettler and S. Weerts. Lausanne: EPFL PRESS. [Google Scholar]
  52. Emery, Yves, and François Gonin. 2009. Gérer les Ressources Humaines: Des Théories aux Outils, un Concept Intégré par Processus, Compatible avec les Normes de Qualité. Lausanne: PPUR Presses Polytechniques. [Google Scholar]
  53. Engels, Friedrich. 1845. The Condition of the Working Class. Leipzig: Otto Wigand. [Google Scholar]
  54. Equey, Catherine, and Emmanuel Fragnière. 2008. Elements of perception regarding the implementation of ERP systems in Swiss SMEs. International Journal of Enterprise Information Systems (IJEIS) 4: 1–8. [Google Scholar] [CrossRef]
  55. European Commission. 2020. White Paper on Artificial Intelligence—A European Approach to Excellence and Trust. Available online: https://commission.europa.eu/publications/white-paper-artificial-intelligence-european-approach-excellence-and-trust_en (accessed on 15 May 2024).
  56. Fleiss, Jürgen, Elisabeth Bäck, and Stefan Thalmann. 2024. Mitigating algorithm aversion in recruiting: A study on explainable AI for conversational agents. ACM SIGMIS Database: The DATABASE for Advances in Information Systems 55: 56–87. [Google Scholar] [CrossRef]
  57. Floridi, Luciano, and Josh Cowls. 2022. A unified framework of five principles for AI in society. In Machine Learning and the City: Applications in Architecture and Urban Design. Hoboken: Wiley, pp. 535–45. [Google Scholar]
  58. Fowler, Floyd J., Jr. 2013. Survey Research Methods, 4th ed. Thousand Oaks: SAGE Publications. [Google Scholar]
  59. Fritts, Megan, and Frank Cabrera. 2021. AI recruitment algorithms and the dehumanization problem. Ethics and Information Technology 23: 791–801. [Google Scholar] [CrossRef]
  60. Gao, Shuqing, Lingnan He, Yue Chen, Dan Li, and Kaisheng Lai. 2020. Public perception of artificial intelligence in medical care: Content analysis of social media. Journal of Medical Internet Research 22: e16649. [Google Scholar] [CrossRef] [PubMed]
  61. García-Arroyo, José A., Amparo Osca Segovia, and José M. Peiró. 2019. Meta-analytical review of teacher burnout across 36 societies: The role of national learning assessments and gender egalitarianism. Psychology & Health 34: 733–53. [Google Scholar]
  62. Ghosh, Adarsh, and Devasenathipathy Kandasamy. 2020. Interpretable artificial intelligence: Why and when. American Journal of Roentgenology 214: 1137–38. [Google Scholar] [CrossRef] [PubMed]
  63. Gillespie, Tarleton. 2014. The relevance of algorithms. In Media Technologies: Essays on Communication, Materiality, and Society. Cambridge: MIT Press, pp. 167–94. [Google Scholar]
  64. Grosz, Barbara J., Russ Altman, Eric Horvitz, Alan Mackworth, Tom Mitchell, Deirdre Mulligan, and Yoav Shoham. 2016. Artificial Intelligence and Life in 2030: One Hundred Year Study on Artificial Intelligence. Stanford: Stanford University. [Google Scholar]
  65. Hair, Joseph F. Jr, G. Tomas M. Hult, Christian M. Ringle, Marko Sarstedt, Nicholas P. Danks, and Soumya Ray. 2021. Partial Least Squares Structural Equation Modeling (PLS-SEM) Using R: A Workbook. Berlin: Springer Nature, p. 197. [Google Scholar]
  66. Hanafiah, Mohd Hafiz. 2020. Formative vs. reflective measurement model: Guidelines for structural equation modeling research. International Journal of Analysis and Applications 18: 876–89. [Google Scholar]
  67. Henseler, Jörg, Christian M. Ringle, and Marko Sarstedt. 2015. A new criterion for assessing discriminant validity in variance-based structural equation modeling. Journal of the Academy of Marketing Science 43: 115–35. [Google Scholar] [CrossRef]
  68. Hill, Stephen. 1981. Competition and Control at Work. London: Heinemann Educational Books. [Google Scholar]
  69. Ho, Geoffrey, Dana Wheatley, and Charles T. Scialfa. 2005. Age differences in trust and reliance of a medication management system. Interacting with Computers 17: 690–710. [Google Scholar] [CrossRef]
  70. Höddinghaus, Miriam, Dominik Sondern, and Guido Hertel. 2021. The automation of leadership functions: Would people trust decision algorithms? Computers in Human Behavior 116: 106635. [Google Scholar] [CrossRef]
  71. Huff, Jonas, and Thomas Götz. 2020. Was datengestütztes Personalmanagement kann und darf. Personalmagazin 1: 48–52. [Google Scholar]
  72. Introna, Lucas D. 2016. Algorithms, governance, and governmentality: On governing academic writing. Science, Technology, & Human Values 41: 17–49. [Google Scholar]
  73. Isaak, Jim, and Mina J. Hanna. 2018. User data privacy: Facebook, Cambridge Analytica, and privacy protection. Computer 51: 56–59. [Google Scholar] [CrossRef]
  74. Jantan, Hamidah, Abdul Razak Hamdan, and Zulaiha Ali Othman. 2010a. Human talent prediction in HRM using C4.5 classification algorithm. International Journal on Computer Science and Engineering 2: 2526–34. [Google Scholar]
  75. Jantan, Hamidah, Abdul Razak Hamdan, and Zulaiha Ali Othman. 2010b. Intelligent techniques for decision support system in human resource management. Decision Support Systems 78: 261–76. [Google Scholar]
  76. Jia, Qiong, Yue Guo, Rong Li, Yurong Li, and Yuwei Chen. 2018. A conceptual artificial intelligence application framework in human resource management. Paper presented at 18th International Conference on Electronic Business, ICEB, Guilin, China, December 2–6; pp. 106–14. [Google Scholar]
  77. Jia, Qiong, Shan Wang, and Wenxue Chen. 2023. Research on the relationship between digital job crafting, algorithm aversion, and job performance in the digital transformation. Paper presented at International Conference on Electronic Busines, Chiayi, Taiwan, October 19–23. [Google Scholar]
  78. Johnson, Brad A. M., Jerrell D. Coggburn, and Jared J. Llorens. 2022. Artificial intelligence and public human resource management: Questions for research and practice. Public Personnel Management 51: 538–62. [Google Scholar] [CrossRef]
  79. Kang, In Gu, Ben Croft, and Barbara A. Bichelmeyer. 2020. Predictors of turnover intention in U.S. federal government workforce: Machine learning evidence that perceived comprehensive HR practices predict turnover intention. Public Personnel Management 50: 009102602097756. [Google Scholar] [CrossRef]
  80. Kawaguchi, Kohei. 2021. When will workers follow an algorithm? A field experiment with a retail business. Management Science 67: 1670–95. [Google Scholar] [CrossRef]
  81. Kim, Pauline T., and Matthew T. Bodie. 2020. Artificial intelligence and the challenges of workplace discrimination and privacy. ABA Journal of Labor & Employment Law 35: 289. [Google Scholar]
  82. Kim, Sunghoon, Ying Wang, and Corine Boon. 2021. Sixty years of research on technology and human resource management: Looking back and looking forward. Human Resource Management 60: 229–47. [Google Scholar] [CrossRef]
  83. Köchling, Alina, and Marius Claus Wehner. 2020. Discriminated by an algorithm: A systematic review of discrimination and fairness by algorithmic decision-making in the context of HR recruitment and HR development. Business Research 13: 795–848. [Google Scholar] [CrossRef]
  84. Kozlowski, Steve W. J. 1987. Technological innovation and strategic HRM: Facing the challenge of change. Human Resource Planning 10: 69. [Google Scholar]
  85. Ladner, Andreas, and Alexander Haus. 2021. Aufgabenerbringung der Gemeinden in der Schweiz: Organisation, Zuständigkeiten und Auswirkungen. Lausanne: IDHEAP Institut de Hautes Études en Administration Publique. [Google Scholar]
  86. Ladner, Andreas, Nicolas Keuffer, Harald Baldersheim, Nikos Hlepas, Pawel Swianiewicz, Kristof Steyvers, and Carmen Navarro. 2019. Patterns of Local Autonomy in Europe. London: Palgrave Macmillan, pp. 229–30. [Google Scholar]
  87. Lalive D’Épinay, Christian, and Carlos Garcia. 1988. Le Mythe du Travail en Suisse. Genève: Georg éditeur. [Google Scholar]
  88. Lambrecht, Anja, and Catherine Tucker. 2019. Algorithmic bias? An empirical study of apparent gender-based discrimination in the display of STEM career ads. Management Science 65: 2966–81. [Google Scholar] [CrossRef]
  89. Langer, Markus, and Cornelius J. König. 2023. Introducing a multi-stakeholder perspective on opacity, transparency and strategies to reduce opacity in algorithm-based human resource management. Human Resource Management Review 33: 100881. [Google Scholar] [CrossRef]
  90. Lawler, John J., and Robin Elliot. 1993. Artificial intelligence in HRM: An experimental study of an expert system. Paper presented at 1993 Conference on Computer Personnel Research, St Louis, MO, USA, April 1–3; pp. 473–80. [Google Scholar]
  91. Lee, Min Kyung. 2018. Understanding perception of algorithmic decisions: Fairness, trust, and emotion in response to algorithmic management. Big Data and Society 5: 2053951718756684. [Google Scholar] [CrossRef]
  92. Leicht-Deobald, Ulrich, Thorsten Busch, Christoph Schank, Antoinette Weibel, Simon Schafheitle, Isabelle Wildhaber, and Gabriel Kasper. 2022. The challenges of algorithm-based HR decision-making for personal integrity. In Business and the Ethical Implications of Technology. Cham: Springer Nature Switzerland, pp. 71–86. [Google Scholar]
  93. Lennartz, Simon, Thomas Dratsch, David Zopfs, Thorsten Persigehl, David Maintz, Nils Große Hokamp, and Daniel Pinto dos Santos. 2021. Use and control of artificial intelligence in patients across the medical workflow: Single-center questionnaire study of patient perspectives. Journal of Medical Internet Research 23: e24221. [Google Scholar] [CrossRef]
  94. Leong, Lai Ying, Teck Soon Hew, Keng-Boon Ooi, and June Wei. 2020. Predicting mobile wallet resistance: A two-staged structural equation modeling-artificial neural network approach. International Journal of Information Management 51: 102047. [Google Scholar] [CrossRef]
  95. LIPAD. 2020. Law on Public Information, Access to Documents and Protection of Personal Data. Available online: https://silgeneve.ch/legis/index.aspx (accessed on 5 July 2024).
  96. Litterscheidt, Rouven, and David J. Streich. 2020. Financial education and digital asset management: What’s in the black box? Journal of Behavioral and Experimental Economics 87: 101573. [Google Scholar] [CrossRef]
  97. Logg, Jennifer M., Julia A. Minson, and Don A. Moore. 2019. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151: 90–103. [Google Scholar] [CrossRef]
  98. Lourenço, Carlos J. S., Benedict G. C. Dellaert, and Bas Donkers. 2020. Whose algorithm says so: The relationships between type of firm, perceptions of trust and expertise, and the acceptance of financial robo-advice. Journal of Interactive Marketing 49: 107–24. [Google Scholar] [CrossRef]
  99. LPrD. 2007. Personal Data Protection Act. Available online: https://prestations.vd.ch/pub/blv-publication/actes/consolide/172.65?key=1543934892528&id=cf9df545-13f7-4106-a95b-9b3ab8fa8b01 (accessed on 4 July 2024).
  100. Luo, Xueming, Siliang Tong, Zheng Fang, and Zhe Qu. 2019. Frontiers: Machines vs. humans: The impact of artificial intelligence chatbot disclosure on customer purchases. Marketing Science 38: 937–47. [Google Scholar] [CrossRef]
  101. Lutz, Christoph. 2019. Digital inequalities in the age of artificial intelligence and big data. Human Behavior and Emerging Technologies 1: 141–48. [Google Scholar] [CrossRef]
  102. Mahmud, Hasan, A. K. M. Najmul Islam, and Ranjan Kumar Mitra. 2023. What drives managers towards algorithm aversion and how to overcome it? Mitigating the impact of innovation resistance through technology readiness. Technological Forecasting and Social Change 193: 122641. [Google Scholar] [CrossRef]
  103. Mahmud, Hasan, A. K. M. Najmul Islam, Syed Ishtiaque Ahmed, and Kari Smolander. 2022. What influences algorithmic decision-making? A systematic literature review on algorithm aversion. Technological Forecasting and Social Change 175: 121390. [Google Scholar] [CrossRef]
  104. Malik, Ashish, Pawan Budhwar, and Bahar Ali Kazmi. 2023. Artificial intelligence (AI)-assisted HRM: Towards an extended strategic framework. Human Resource Management Review 33: 100940. [Google Scholar] [CrossRef]
  105. Malik, Ashish, N. R. Srikanth, and Pawan Budhwar. 2020. Digitisation, artificial intelligence (AI) and HRM. In Human Resource Management: Strategic and International Perspectives, 3rd ed. Edited by Jonathan Crashaw, Pawan Budhwar and Angela Davis. Los Angeles: SAGE Publications Ltd., pp. 88–111. [Google Scholar]
  106. Meijerink, Jeroen, Mark Boons, Anne Keegan, and Janet Marler. 2021. Algorithmic human resource management: Synthesizing developments and cross-disciplinary insights on digital HRM. The International Journal of Human Resource Management 32: 2545–62. [Google Scholar] [CrossRef]
  107. Melick, Sarah Ruth. 2020. Development and Validation of a Measure of Algorithm Aversion. Bowling Green: Bowling Green State University. [Google Scholar]
  108. Mer, Akansha, and Amarpreet Singh Virdi. 2023. Navigating the paradigm shift in HRM practices through the lens of artificial intelligence: A post-pandemic perspective. In The Adoption and Effect of Artificial Intelligence on Human Resources Management, Part A. Bingley: Emerald Publishing Limited, pp. 123–54. [Google Scholar]
  109. Merhbene, Ghofrane, Sukanya Nath, Alexandre R. Puttick, and Mascha Kurpicz-Briki. 2022. BurnoutEnsemble: Augmented intelligence to detect indications for burnout in clinical psychology. Frontiers in Big Data 5: 863100. [Google Scholar] [CrossRef]
  110. Metcalf, Lynn, David A. Askay, and Louis B. Rosenberg. 2019. Keeping humans in the loop: Pooling knowledge through artificial swarm intelligence to improve business decision making. California Management Review 61: 84–109. [Google Scholar] [CrossRef]
  111. Meuter, Matthew L., Amy L. Ostrom, Mary Jo Bitner, and Robert Roundtree. 2003. The influence of technology anxiety on consumer use and experiences with self-service technologies. Journal of Business Research 56: 899–906. [Google Scholar] [CrossRef]
  112. Mohamed, Syaiful Anwar, Moamin A. Mahmoud, Mohammed Najah Mahdi, and Salama A. Mostafa. 2022. Improving efficiency and effectiveness of robotic process automation in human resource management. Sustainability 14: 3920. [Google Scholar] [CrossRef]
  113. Nagtegaal, Rosanna. 2021. The impact of using algorithms for managerial decisions on public employees’ procedural justice. Government Information Quarterly 38: 101536. [Google Scholar] [CrossRef]
  114. Newell, Sue, and Marco Marabelli. 2020. Strategic opportunities (and challenges) of algorithmic decision-making: A call for action on the long-term societal effects of ‘datification’. In Strategic Information Management. London: Routledge, pp. 430–49. [Google Scholar]
  115. Nolan, Kevin P., Nathan T. Carter, and Dev K. Dalal. 2016. Threat of technological unemployment: Are hiring managers discounted for using standardized employee selection practices? Personnel Assessment and Decisions 2: 4. [Google Scholar] [CrossRef]
  116. OFS. 2021. Regional Portraits and Key Figures for All Municipalities. Available online: https://www.bfs.admin.ch/bfs/fr/home/statistiques/statistique-regions/portraits-regionaux-chiffres-cles/cantons/uri.html (accessed on 3 November 2023).
  117. Önkal, Dilek, Paul Goodwin, Mary Thomson, Sinan Gönül, and Andrew Pollock. 2009. The relative influence of advice from human experts and statistical methods on forecast adjustments. Journal of Behavioral Decision Making 22: 390–409. [Google Scholar] [CrossRef]
  118. Ozili, Peterson K. 2023. The acceptable R-square in empirical modelling for social science research. In Social Research Methodology and Publishing Results: A Guide to Non-Native English Speakers. Hershey: IGI Global, pp. 134–43. [Google Scholar]
  119. Pandey, Sanjay K., Bradley E. Wright, and Donald P. Moynihan. 2008. Public service motivation and interpersonal citizenship behavior in public organizations: Testing a preliminary model. International Public Management Journal 11: 89–108. [Google Scholar] [CrossRef]
  120. Park, Hyanghee, Daehwan Ahn, Kartik Hosanagar, and Joonhwan Lee. 2021. Human-AI interaction in human resource management: Understanding why employees resist algorithmic evaluation at workplaces and how to mitigate burdens. Paper presented at 2021 CHI Conference on Human Factors in Computing Systems, Yokohama, Japan, May 8–13; pp. 1–15. [Google Scholar]
  121. Pee, L. G., Shan L. Pan, and Lili Cui. 2019. Artificial intelligence in healthcare robots: A social informatics study of knowledge embodiment. Journal of the Association for Information Science and Technology 70: 351–69. [Google Scholar] [CrossRef]
  122. Persson, Marcus, and Andreas Wallo. 2022. Automation and public service values in human resource management. In Service Automation in the Public Sector: Concepts, Empirical Examples and Challenges. Cham: Springer International Publishing, pp. 91–108. [Google Scholar]
  123. Pessach, Dana, Gonen Singer, Dan Avrahami, Hila Chalutz Ben-Gal, Erez Shmueli, and Irad Ben-Gal. 2020. Employees recruitment: A prescriptive analytics approach via machine learning and mathematical programming. Decision Support Systems 134: 113290. [Google Scholar] [CrossRef]
  124. Podsakoff, Philip M., Scott B. MacKenzie, and Nathan P. Podsakoff. 2012. Sources of method bias in social science research and recommendations on how to control it. Annual Review of Psychology 63: 539–69. [Google Scholar] [CrossRef]
  125. Prahl, Andrew, and Lyn Van Swol. 2017. Understanding algorithm aversion: When is advice from automation discounted? Journal of Forecasting 36: 691–702. [Google Scholar] [CrossRef]
  126. Prahl, Andrew, and Lyn Van Swol. 2021. Out with the humans, in with the machines?: Investigating the behavioral and psychological effects of replacing human advisors with a machine. Human-Machine Communication 2: 209–34. [Google Scholar] [CrossRef]
  127. Prikshat, Verma, Ashish Malik, and Pawan Budhwar. 2023. AI-augmented HRM: Antecedents, assimilation and multilevel consequences. Human Resource Management Review 33: 100860. [Google Scholar] [CrossRef]
  128. Raisch, Sebastian, and Sebastian Krakowski. 2021. Artificial intelligence and management: The automation–augmentation paradox. Academy of Management Review 46: 192–210. [Google Scholar] [CrossRef]
  129. Ram, Sudha, and Jagdish N. Sheth. 1989. Consumer resistance to innovations: The marketing problem and its solutions. Journal of Consumer Marketing 6: 5–14. [Google Scholar] [CrossRef]
  130. Raub, McKenzie. 2018. Bots, Bias and Big Data: Artificial Intelligence, Algorithmic Bias and Disparate Impact Liability in Hiring Practices. Arkansas Law Review 71: 529. Available online: https://scholarworks.uark.edu/alr/vol71/iss2/7 (accessed on 8 July 2024).
  131. Ringle, Christian M., Marko Sarstedt, Rebecca Mitchell, and Siegfried P. Gudergan. 2020. Partial least squares structural equation modeling in HRM research. The International Journal of Human Resource Management 31: 1617–43. [Google Scholar] [CrossRef]
  132. Ritz, Adrian, Kristina S. Weißmüller, and Timo Meynhardt. 2022. Public value at cross points: A comparative study on employer attractiveness of public, private, and nonprofit organizations. Review of Public Personnel Administration 43: 528–56. [Google Scholar] [CrossRef]
  133. Rodgers, Waymond, James M. Murray, Abraham Stefanidis, William Y. Degbey, and Shlomo Y. Tarba. 2023. An artificial intelligence algorithmic approach to ethical decision-making in human resource management processes. Human Resource Management Review 33: 100925. [Google Scholar] [CrossRef]
  134. Rodney, Harriet, Katarina Valaskova, and Pavol Durana. 2019. The artificial intelligence recruitment process: How technological advancements have reshaped job application and selection practices. Psychosociological Issues in Human Resource Management 7: 42–47. [Google Scholar]
  135. Rouanet, Henri, and Bruno Leclerc. 1970. The role of the normal distribution in statistics. Mathématiques et Sciences Humaines 32: 57–74. [Google Scholar]
  136. Sainsaulieu, Ivan, and Jean-Philippe Leresche. 2023. C’est qui ton chef?! Sociologie du leadership en Suisse. Lausanne: PPUR Presses Polytechniques. [Google Scholar]
  137. Saukkonen, Juha, Pia Kreus, Nora Obermayer, Óscar Rodríguez Ruiz, and Maija Haaranen. 2019. AI, RPA, ML and other emerging technologies: Anticipating adoption in the HRM field. Paper presented at ECIAIR 2019 European Conference on the Impact of Artificial Intelligence and Robotics, Oxford, UK, October 31–November 1; Manchester: Academic Conferences and Publishing Limited, vol. 287. [Google Scholar]
  138. Schein, Edgar H. 2010. Organizational Culture and Leadership. Hoboken: John Wiley & Sons, vol. 2. [Google Scholar]
  139. Schepman, Astrid, and Paul Rodway. 2020. Initial validation of the general attitudes toward Artificial Intelligence Scale. Computers in Human Behavior Reports 1: 100014. [Google Scholar] [CrossRef]
  140. Schepman, Astrid, and Paul Rodway. 2023. The General Attitudes toward Artificial Intelligence Scale (GAAIS): Confirmatory validation and associations with personality, corporate distrust, and general trust. International Journal of Human-Computer Interaction 39: 2724–41. [Google Scholar] [CrossRef]
  141. Schinnenburg, Heike, and Christoph Brüggemann. 2018. Predictive HR-Analytics. Möglichkeiten und Grenzen des Einsatzes im Personalbereich. ZFO-Zeitschrift Führung und Organisation 87: 330–36. [Google Scholar]
  142. Schuetz, Sebastian, and Viswanath Venkatesh. 2020. The rise of human machines: How cognitive computing systems challenge assumptions of user-system interaction. Journal of the Association for Information Systems 21: 460–82. [Google Scholar] [CrossRef]
  143. Sciarini, Pascal. 2023. Politique Suisse: Institutions, Acteurs, Processus. Lausanne: Presses Polytechniques et Universitaires Romandes. [Google Scholar]
  144. Seeman, Melvin. 1967. On the personal consequences of alienation in work. American Sociological Review 32: 273–85. [Google Scholar] [CrossRef]
  145. Shrestha, Yash Raj, Shiko M. Ben-Menahem, and Georg von Krogh. 2019. Organizational decision-making structures in the age of artificial intelligence. California Management Review 61: 66–83. [Google Scholar] [CrossRef]
  146. Soffia, Magdalena, Alex J. Wood, and Brendan Burchell. 2022. Alienation is not ‘Bullshit’: An empirical critique of Graeber’s theory of BS jobs. Work, Employment and Society 36: 816–40. [Google Scholar] [CrossRef]
  147. Song, Yuegang, and Ruibing Wu. 2021. Analysing human-computer interaction behaviour in human resource management system based on artificial intelligence technology. Knowledge Management Research & Practice 19: 1–10. [Google Scholar]
  148. Spyropoulos, Basilis, and George Papagounos. 1995. A theoretical approach to artificial intelligence systems in medicine. Artificial Intelligence in Medicine 7: 455–65. [Google Scholar] [CrossRef] [PubMed]
  149. Stone, Dianna L., and Kimberly M. Lukaszewski. 2024. Artificial intelligence can enhance organizations and our lives: But at what price? Organizational Dynamics 53: 101038. [Google Scholar] [CrossRef]
  150. Strohmeier, Stefan, ed. 2022. Handbook of Research on Artificial Intelligence in Human Resource Management. Cheltenham: Edward Elgar Publishing. [Google Scholar]
  151. Strohmeier, Stefan. 2020. Algorithmic Decision Making in HRM. In Encyclopedia of Electronic HRM. Berlin and Boston: De Gruyter Oldenbourg, pp. 54–60. [Google Scholar]
  152. Strohmeier, Stefan, and Franca Piazza. 2015. Artificial intelligence techniques in human resource management—A conceptual exploration. In Intelligent Techniques in Engineering Management: Theory and Applications. Berlin: Springer, pp. 149–72. [Google Scholar]
  153. Sultana, Sharifa, Md. Mobaydul Haque Mozumder, and Syed Ishtiaque Ahmed. 2021. Chasing Luck: Data-driven Prediction, Faith, Hunch, and Cultural Norms in Rural Betting Practices. Paper presented at 2021 CHI Conference on Human Factors in Computing Systems, Yokohama, Japan, May 8–13; pp. 1–17. [Google Scholar]
  154. Sutherland, Steven C., Casper Harteveld, and Michael E. Young. 2016. Effects of the advisor and environment on requesting and complying with automated advice. ACM Transactions on Interactive Intelligent Systems (TiiS) 6: 1–36. [Google Scholar] [CrossRef]
  155. Syam, Sid S., and James F. Courtney. 1994. The case for research in decision support systems. European Journal of Operational Research 73: 450–57. [Google Scholar] [CrossRef]
  156. Tambe, Prasanna, Peter Cappelli, and Valery Yakubovich. 2019. Artificial intelligence in human resources management: Challenges and a path forward. California Management Review 61: 15–42. [Google Scholar] [CrossRef]
  157. Tarafdar, Monideepa, Cary L. Cooper, and Jean-François Stich. 2019. The technostress trifecta—techno eustress, techno distress and design: Theoretical directions and an agenda for research. Information Systems Journal 29: 6–42. [Google Scholar] [CrossRef]
  158. Taylor, Frederick Winslow. 1957. La Direction Scientifique des Entreprises. Paris: Dunod. First published 1911. [Google Scholar]
  159. Thurman, Neil, Judith Moeller, Natali Helberger, and Damian Trilling. 2019. My friends, editors, algorithms, and I: Examining audience attitudes to news selection. Digital Journalism 7: 447–69. [Google Scholar] [CrossRef]
  160. Tiwari, Prakhar. 2022. Misuse of personal data by social media giants. Jus Corpus LJ 3: 1041. [Google Scholar]
  161. United Nations Development Programme. 2024. Human Development Report 2023/2024: Unstuck: Rethinking Cooperation in a Polarized World. Available online: https://hdr.undp.org/system/files/documents/global-report-document/hdr2023-24snapshoten.pdf (accessed on 4 July 2024).
  162. van den Broek, Elmira, Anastasia Sergeeva, and Marleen Huysman. 2019. Hiring algorithms: An ethnography of fairness in practice. In Association for Information Systems. Athens: AIS Electronic Library (AISeL). [Google Scholar]
  163. van Dongen, Kees, and Peter-Paul van Maanen. 2013. A framework for explaining reliance on decision aids. International Journal of Human-Computer Studies 71: 410–24. [Google Scholar] [CrossRef]
  164. van Esch, Patrick, J. Stewart Black, and Denni Arli. 2021. Job candidates’ reactions to AI-enabled job application processes. AI and Ethics 1: 119–30. [Google Scholar] [CrossRef]
  165. Varma, Arup, Vijay Pereira, and Parth Patel. 2024. Artificial intelligence and performance management. Organizational Dynamics 53: 101037. [Google Scholar] [CrossRef]
  166. Veale, Michael, and Lilian Edwards. 2018. Clarity, surprises, and further questions in the Article 29 Working Party draft guidance on automated decision-making and profiling. Computer Law & Security Review 34: 398–404. [Google Scholar]
  167. Venkatesh, Viswanath. 2022. Adoption and use of AI tools: A research agenda grounded in UTAUT. Annals of Operations Research 308: 641–52. [Google Scholar] [CrossRef]
  168. Venkatesh, Viswanath, Michael G. Morris, Gordon B. Davis, and Fred D. Davis. 2003. User acceptance of information technology: Toward a unified view. MIS Quarterly 27: 425–78. [Google Scholar] [CrossRef]
  169. Venkatesh, Viswanath, James Y. L. Thong, and Xin Xu. 2016. Unified theory of acceptance and use of technology: A synthesis and the road ahead. Journal of the Association for Information Systems 17: 328–76. [Google Scholar] [CrossRef]
  170. Vimalkumar, M., Sujeet Kumar Sharma, Jang Bahadur Singh, and Yogesh K. Dwivedi. 2021. ‘Okay Google, what about my privacy?’: User’s privacy perceptions and acceptance of voice-based digital assistants. Computers in Human Behavior 120: 106763. [Google Scholar] [CrossRef]
  171. Vrontis, Demetris, Michael Christofi, Vijay Pereira, Shlomo Tarba, Anna Makrides, and Eleni Trichina. 2021. Artificial intelligence, robotics, advanced technologies and human resource management: A systematic review. The International Journal of Human Resource Management 33: 1237–66. [Google Scholar] [CrossRef]
  172. Williams, Larry J., and Ernest H. O’Boyle, Jr. 2008. Measurement models for linking latent variables and indicators: A review of human resource management research using parcels. Human Resource Management Review 18: 233–42. [Google Scholar] [CrossRef]
  173. Workman, Michael. 2005. Expert decision support system use, disuse, and misuse: A study using the theory of planned behavior. Computers in Human Behavior 21: 211–31. [Google Scholar] [CrossRef]
  174. Wüthrich, Hans A., Dirk Osmetz, and Stefan Kaduk. 2008. Musterbrecher: Führung neu Leben. Berlin: Springer. [Google Scholar]
  175. Xiao, Qijie, Jiaqi Yan, and Greg J. Bamber. 2023. How does AI-enabled HR analytics influence employee resilience: Job crafting as a mediator and HRM system strength as a moderator. Personnel Review. [Google Scholar] [CrossRef]
  176. Yamakawa, Yasuhiro, Mike W. Peng, and David L. Deeds. 2008. What drives new ventures to internationalize from emerging to developed economies? Entrepreneurship Theory and Practice 32: 59–82. [Google Scholar] [CrossRef]
  177. Zahedi, Fatemeh Mariam, Ahmed Abbasi, and Yan Chen. 2015. Fake-website detection tools: Identifying elements that promote individuals’ use and enhance their performance. Journal of the Association for Information Systems 16: 2. [Google Scholar] [CrossRef]
  178. Zhang, Lixuan, Iryna Pentina, and Yuhong Fan. 2021. Who do you choose? Comparing perceptions of human vs robo-advisor in the context of financial services. Journal of Services Marketing 35: 634–46. [Google Scholar] [CrossRef]
  179. Zhang, Ping. 2013. The affective response model: A theoretical framework of affective concepts and their relationships in the ICT context. MIS Quarterly 37: 247–74. [Google Scholar] [CrossRef]
  180. Zhou, Yu, Lijun Wang, and Wansi Chen. 2023. The dark side of AI-enabled HRM on employees based on AI algorithmic features. Journal of Organizational Change Management 36: 1222–41. [Google Scholar] [CrossRef]
  181. Zu, Shicheng, and Xiulai Wang. 2019. Resume information extraction with a novel text block segmentation algorithm. International Journal on Natural Language Computing 8: 29–48. [Google Scholar] [CrossRef]
  182. Zuboff, Shoshana. 2019. Surveillance capitalism and the challenge of collective action. In New Labor Forum. Los Angeles: SAGE Publications, vol. 28, pp. 10–29. [Google Scholar]
Figure 1. Main determinants of AA according to the framework developed by Mahmud et al. (2022).
Figure 2. Full theoretical model of the determinants of the AA of Swiss HR departments to AI recruitment tools.
Table 1. Individual characteristics of our respondents.
All values are percentages of respondents.
Gender: Woman 49.69; Man 43.21; Other 0.00; NA 7.10
Linguistic area: French-speaking Switzerland 44.44; German-speaking Switzerland 42.90; Italian-speaking Switzerland 6.17; NA 6.48
Hierarchical position: Employee 25.31; Proximity executive 4.32; Middle manager 23.77; Executive manager 39.51; NA 7.10
Time with organization (years): <1 7.10; 1–3 15.43; 3–5 15.74; 5–10 22.22; >10 31.48; NA 8.02
Age: <18 0.00; 18–25 0.00; 26–34 4.63; 35–44 17.59; 45–54 43.83; 55–64 25.93; >65 0.00; NA 8.02
Level of activity (federalism): International 30.86; Federal 27.47; Cantonal 14.51; Communal 20.06; NA 7.10
Private/Public: Private 48.46; Public or semi-public 47.53; NA 4.01
Table 2. Path coefficients, significance, and R2 (both private and public sectors).
                 AA            PT
R2               0.569         0.372
R2 adjusted      0.556         0.369
PC               0.074 *       /
GAAIS+           −0.211 ***    /
GAAIS−           0.414 ***     /
PT               0.169 ***     /
PDEV             0.177 ***     /
PWB              0.181 ***     /
AGE              −0.006        /
TWO              −0.022        /
H                −0.019        /
G                −0.044        /
PSUS             /             0.402 ***
PSEV             /             0.369 ***
AA: algorithmic aversion; PT: perceived threat; PC: privacy concerns; GAAIS+: general attitude toward artificial intelligence, positive; GAAIS−: general attitude toward artificial intelligence, negative; PDEV: personal development concerns; PWB: personal well-being concerns; AGE: age; TWO: time with organization; H: hierarchical position; G: gender; PSUS: perceived susceptibility; PSEV: perceived severity. Significance is based on two-tailed t-values: * t ≥ 1.960 (95% confidence level); ** t ≥ 2.576 (99% confidence level); *** t ≥ 3.291 (99.9% confidence level).
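To make the significance convention above concrete, the following minimal Python sketch illustrates how bootstrap resampling of a single standardized path coefficient yields a t-value that can be compared against the two-tailed thresholds of 1.960 and 3.291. It is not the article's actual PLS-SEM estimation; the data, sample size, and variable roles are simulated assumptions used only for illustration.

```python
# Illustrative sketch only: NOT the study's PLS-SEM pipeline.
# Shows how a bootstrap t-value relates to the two-tailed thresholds
# reported under Tables 2 and 3 (t >= 1.960 for *, t >= 3.291 for ***).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical data: one standardized predictor and one outcome
# (their interpretation as, e.g., GAAIS- and AA is an assumption).
n = 324                                    # sample size matching the survey
x = rng.normal(size=n)
y = 0.4 * x + rng.normal(scale=0.9, size=n)

def path_coefficient(x, y):
    """Standardized OLS slope, used here as a stand-in for a PLS path."""
    xs = (x - x.mean()) / x.std()
    ys = (y - y.mean()) / y.std()
    return float(np.dot(xs, ys) / len(xs))

# Bootstrap the coefficient to estimate its standard error.
boot = []
for _ in range(5000):
    idx = rng.integers(0, n, size=n)
    boot.append(path_coefficient(x[idx], y[idx]))
boot = np.asarray(boot)

estimate = path_coefficient(x, y)
t_value = estimate / boot.std(ddof=1)

# Two-tailed critical values behind the asterisks in the tables.
print("t-value:", round(t_value, 3))
print("95% threshold:", round(stats.norm.ppf(0.975), 3))    # ~1.960 -> *
print("99.9% threshold:", round(stats.norm.ppf(0.9995), 3)) # ~3.291 -> ***
```

In practice, PLS-SEM software bootstraps the full measurement and structural model at once; the sketch only isolates the logic behind the asterisks.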
Table 3. Path coefficients, significance, and R2 (private sector VS public sector).
                 AA (Private)   AA (Public)   PT (Private)   PT (Public)
R2               0.675          0.533         0.367          0.392
R2 adjusted      0.653          0.501         0.358          0.384
PC               0.079 *        0.092         /              /
GAAIS+           −0.101         −0.296 ***    /              /
GAAIS−           0.521 ***      0.373 ***     /              /
PT               0.269 ***      0.091         /              /
PDEV             0.100          0.249 ***     /              /
PWB              0.153 *        0.172 *       /              /
AGE              −0.030         −0.011        /              /
TWO              0.031          −0.047        /              /
H                0.052          −0.035        /              /
G                0.042          −0.085        /              /
PSUS             /              /             0.430 ***      0.400 ***
PSEV             /              /             0.352 ***      0.364 ***
Two-tailed significance: * t-value ≥ 1.960 (95% confidence level); *** t-value ≥ 3.291 (99.9% confidence level). Abbreviations as in Table 2.
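For readers comparing the R2 and R2 adjusted rows in Tables 2 and 3, the adjusted value penalizes model fit for the number of predictors of each endogenous construct. As a reminder, the standard textbook definition (not a formula quoted from the article, though PLS-SEM software typically follows it) is:

\[ R^2_{\text{adj}} = 1 - \left(1 - R^2\right)\frac{n - 1}{n - k - 1}, \]

where \(n\) is the number of observations and \(k\) is the number of predictor constructs of the endogenous construct.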
Table 4. Confirmed/invalidated hypotheses.
Hypo.         Wording                                                        Confirmation
H1            HR employees’ PC are positively associated with their AA.      Yes
  H1private                                                                  Yes
  H1public                                                                   No
H2            HR employees’ GAAIS+ is negatively associated with their AA.   Yes
  H2private                                                                  No
  H2public                                                                   Yes
H3            HR employees’ GAAIS− is positively associated with their AA.   Yes
  H3private                                                                  Yes
  H3public                                                                   Yes
H4            HR employees’ PT is positively associated with their AA.       Yes
  H4private                                                                  Yes
  H4public                                                                   No
H4a           HR employees’ PSUS is positively associated with their PT.     Yes
  H4aprivate                                                                 Yes
  H4apublic                                                                  Yes
H4b           HR employees’ PSEV is positively associated with their PT.     Yes
  H4bprivate                                                                 Yes
  H4bpublic                                                                  Yes
H5            HR employees’ PDEV are positively associated with their AA.    Yes
  H5private                                                                  No
  H5public                                                                   Yes
H6            HR employees’ PWB are positively associated with their AA.     Yes
  H6private                                                                  Yes
  H6public                                                                   Yes
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
