Article

The Impact of Artificial Intelligence Replacing Humans in Making Human Resource Management Decisions on Fairness: A Case of Resume Screening

1 School of Business, Hohai University, Nanjing 211100, China
2 Department of Construction and Real Estate, School of Civil Engineering, Southeast University, Nanjing 210096, China
3 Research Center of Smart City, Nanjing Tech University, Nanjing 211816, China
* Author to whom correspondence should be addressed.
Sustainability 2024, 16(9), 3840; https://doi.org/10.3390/su16093840
Submission received: 26 March 2024 / Revised: 27 April 2024 / Accepted: 30 April 2024 / Published: 2 May 2024
(This article belongs to the Section Psychology of Sustainability and Sustainable Development)

Abstract

A growing number of organizations have used artificial intelligence (AI) in place of human resource (HR) staff to make decisions, yet the fairness perceptions of the people affected by those decisions remain unclear. Given that an organization’s sustainability is significantly influenced by individuals’ perceptions of fairness, this study takes a resume-screening scenario as an example to explore the impact of replacing humans with AI on applicants’ perceptions of fairness. The study uses online scenario experiments, analyzed in SPSS: 189 and 214 people, respectively, participated in two experiments with one independent variable (decision-maker: AI versus human), two dependent variables (procedural and distributive fairness), and two moderating variables (outcome favorability and the expertise of AI). The results show that applicants tend to view resume screening by AI as less fair than screening by humans, and that outcome favorability and the expertise of AI both moderate this effect. This study reveals the impact on fairness of substituting AI for humans in decision-making, and the proposed model can help organizations use AI to screen resumes more effectively. Future research can explore human–AI collaboration in making human resource management decisions.

1. Introduction

Artificial intelligence (AI) is being rapidly deployed across functional modules and, in the digital era, poses substantial challenges as well as opportunities for human resource management [1]. In 2015, Amazon created an AI-based efficiency detection and evaluation system for warehouse management that tracks employees’ work status and feeds it into performance reviews. In August 2021, the Russian online payment company Xsolla dismissed 150 employees for inefficiency and attitude problems identified through an algorithmically determined “digital footprint”. AI applications save labor costs, increase the efficacy of human resource management, and play a significant role in the innovative growth and digital transformation of enterprises. It remains unclear, however, how people’s perceptions of and responses to AI making these decisions differ from their reactions to conventional human resource managers.
To achieve organizational goals and enhance sustainability, organizations need to ensure that employees feel they are treated fairly in decision-making. A recent study indicated that perceived lack of impartiality in decision-making is a critical driver of employee turnover in the technology sector, costing the sector $16 billion yearly [2]. Fairness is a crucial component of the long-term, steady growth of organizations, as well as of the rights and interests of individuals. First, fairness guarantees that everyone involved in the decision-making process receives the respect and consideration they deserve; only then can decisions accurately reflect the requirements and interests of all stakeholders and become more rational and viable [3]. Second, fairness improves the acceptability of decisions [4]. A fair decision-making process increases the likelihood that the parties involved will ultimately accept the decision, which reduces friction and internal conflict while strengthening team cohesion and enabling members to work as a unit to carry decisions out. In addition, fair decision-making helps organizations convey positive values and a sense of social responsibility, which enhances their brand image and promotes sustainable business growth. Careful consideration should therefore be given to the perceived fairness of the individuals affected by decisions, to help corporations make more successful decisions.
Most prior research on how decisions made by AI affect people’s perceptions or behavioral attitudes has been conducted in marketing, where AI replaces humans in advising consumers or recommending products [5,6]; comparable studies in human resource management are scarce. The boundary conditions under which decision-making takes place have also received little attention. Beyond the decision-making process itself, various decision-related factors can drive an individual’s psychological perception of a decision [7], so it is crucial to identify which of these factors strengthen or weaken effects on perceptions of fairness. Furthermore, prior studies have predominantly examined the influence of the decision-making procedure on individual views, with less emphasis on the decision outcome [8]. Yet attitudes toward decision-making may be shaped by the outcome: “outcome bias” holds that people prioritize assessing a decision’s outcome over scrutinizing its process, because the outcome is what interests them most. In other words, the outcome of a decision may affect the degree to which the decision-maker’s identity influences people’s perceptions of fairness [9,10]. Finally, earlier research paid little attention to how the decision-maker’s traits influence people’s psychological perceptions of the decision-making process and its outcomes.
This study examines decisions made by artificial intelligence (AI) in human resource management (HRM), breaking fairness into two dimensions and using a specific resume-screening scenario to investigate how applicants’ perceptions of procedural and distributive fairness change when AI replaces human reviewers. It also considers decision outcomes and decision-maker traits jointly. Two online scenario experiments show that AI resume screening produces lower perceptions of both types of fairness than human screening, and they reveal moderating effects of outcome favorability and the expertise of AI.

2. Literature Review

2.1. Artificial Intelligence in Human Resource Management

The phrase “artificial intelligence” was first used in a proposal for a Dartmouth College summer seminar, which described it as “the ability to make machines behave in the same way that humans behave intelligently” [11]. Put simply, artificial intelligence (AI) is a broad, general term for the application of computational techniques to simulate human intelligence. An increasing number of organizations have implemented algorithms in the workplace in recent years, and some have begun using AI to support human resource managers in tasks including employee performance management, promotion, interviewing, and resume screening [1]. A key factor behind the growing use of AI in managerial and organizational decision-making is its ability to sort through vast amounts of information quickly and efficiently [8]. A related meta-analysis also found that AI performs, on average, 10% more accurately than human judgment [12]. These results suggest that AI can make certain decisions more efficiently than humans and that AI-supported HRM decision-making is a significant trend for enterprises.
Driven by the advance of digital technology in human resource management, companies are moving toward Digital Recruitment 3.0, a phase centered on applying artificial intelligence to recruiting and selection [13]. “AI recruitment” refers to the techniques and tools that enterprises use to select talent by processing data and making hiring assessments and decisions with technologies such as machine learning, natural language processing, and emotion recognition [13,14]. Under this definition, intelligent resume screening, online assessments, and video interviews are the primary AI recruitment scenarios, and natural language processing, deep learning, and speech and emotion detection are among its essential technologies. Several scholars further divide AI recruitment into four major components: outreach, screening, assessment, and co-ordination [13]. Together these cover the complete process, from identifying suitable applicants to making the hiring decision, and in each of them AI recruitment operates differently from conventional human recruitment [14]. For example, AI can transform how job information is released: it can target applicants by applying natural language processing to user data mined from social media platforms such as Facebook and LinkedIn, and it can conduct applicant interviews and evaluations within virtual reality environments and games. AI recruitment can also help enterprises access top talent as a competitive advantage and save substantial time compared with traditional human recruitment. However, prior research has mostly examined the benefits of AI from the viewpoint of organizations rather than from the perspectives of the individuals affected by AI’s decisions, a distinction that matters greatly for whether AI can be adopted broadly and durably in enterprises. Taking resume screening as an example, AI can save over 80% of the time spent on traditional manual techniques, yet prior research has not asked whether applicants are willing to have their resumes reviewed by AI, or how their psychological evaluations change when they are. This study aims to address these issues.

2.2. Applicants and Fairness Perception

The study of fairness in organizations has long been a popular topic in organizational science. Adams proposed that fairness is the equity of decision outcomes within an organization (e.g., salary distributions, promotions, and performance reviews), emphasizing distributive outcomes. He further framed fairness as an exchange relationship between the inputs and outputs of two parties: inputs include the quality and quantity of work, effort, knowledge, skills, and loyalty, while outputs include wages, bonuses, promotions, performance reviews, and status. The ratio of outputs to inputs determines the fairness of a distribution. Subsequent research gradually recognized that, even when distributional outcomes are the focus, individuals’ concerns about how those outcomes are arrived at also affect their perceptions of fairness, shifting attention to the fairness of the decision-making process, or procedural fairness [15]. Employees frequently wonder how their superiors make decisions, particularly in difficult circumstances; procedural fairness concerns the impartiality of the procedures used in reaching those decisions [16].
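Adams’ comparison can be written compactly as a ratio test; the following is a minimal formalization of the prose above, with O and I as our own shorthand for outputs and inputs rather than Adams’ original notation:

```latex
% Equity theory: a distribution feels fair when output/input ratios match.
\frac{O_{\mathrm{self}}}{I_{\mathrm{self}}}
\;=\;
\frac{O_{\mathrm{other}}}{I_{\mathrm{other}}}
\quad\Longrightarrow\quad \text{perceived distributive fairness}
```

An imbalance in either direction (over- or under-reward) is experienced as distributive unfairness.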
Applicants’ responses throughout the selection process significantly shape their subsequent attitudes, intentions, and behaviors. Some research shows that an unfavorable applicant response can set off a chain of negative consequences, including poorer organizational attitudes (such as low organizational attractiveness), unwanted behaviors or behavioral intentions (such as referral and litigation intentions) [17], and even effects on the applicant’s actual, not merely expected, acceptance of a job offer [18]. Negative reactions can also shorten an applicant’s attention span [19], which may hurt interview performance. Conversely, a favorable response sets off a positive chain reaction that can improve attitudes, behaviors, and behavioral intentions. Among the many applicant responses studied (such as motivation, anxiety, and efficacy), fairness impressions rank highest [20]. Research shows that procedural and distributive fairness can affect an organization’s attractiveness, an applicant’s likelihood of accepting a job offer, and whether the applicant recommends the organization to others [21].
According to expectation theory, applicants’ expectations reflect their beliefs about the future [22], and the influence of expectations on applicants’ responses has been examined in numerous studies [23]. Fairness has proven a strong predictor of interview efficacy and job application motivation, underscoring once more its significance for applicants [24]. The fairness heuristic suggests that early events (such as the selection process) shape a heuristic that thereafter serves as the basis for an individual’s assessment of how fairly the organization acts [20]. Because resume screening is typically the applicant’s first interaction with an organization, the outcome of this interaction heavily influences the applicant’s impression of fairness. Taken together, these studies indicate that applicants’ perceptions of fairness shape their subsequent attitudes and behaviors toward the organization. Recent years have seen more organizations use AI to replace humans in HRM decision-making, but research on this topic is still in its infancy, and little of it focuses on resume-screening scenarios to examine how AI-made decisions affect applicants’ fairness perceptions. This study therefore asks whether replacing human resume screening with AI affects applicants’ views of fairness, and which screener elicits higher perceptions of fairness.

2.3. Artificial Intelligence and Fairness Perception

In academia, there has been debate over how fair decisions produced by AI are perceived to be. Some studies suggest that AI decisions are seen as less fair than those made by humans, while others have found the reverse. For instance, using AI to review political contextual content is thought to be less fair than using humans [25], yet warehouse employees believe that AI-allocated tasks are fairer than human-allocated ones [26].
This debate can be partly explained by the fact that AI performs tasks of varying kinds with diverse results. More specifically, decisions made by AI are seen as less fair than human ones when the task calls for uniquely human abilities (such as incorporating emotions into subjective evaluation), but fairer than human ones when the task calls for mechanical skills (such as processing vast volumes of quantitative data for objective assessment) [8]. Task complexity also shapes the outcome of the debate: numerous studies show that when AI is assigned high-complexity tasks (such as those involving multiple components or stages), its decisions are viewed as less fair than humans’ [27], whereas for certain basic tasks AI decisions are thought to be fairer [28]. However, individuals define the nature and complexity of a task differently, so research on how AI-made decisions affect fairness may be less accurate when carried out across a wide range of tasks rather than within a particular scenario. In contrast to earlier work, this study therefore focuses on the resume-screening task within recruitment in the human resource management field, to test more precisely the influence of AI-made decisions on fairness and to elucidate the relationship between AI and fairness in this task.

3. Research Method

The research model is designed around the gaps in previous studies. According to prior research, decisions made by AI may result in either higher or lower perceptions of fairness than those made by humans, and these studies reached clearly conflicting conclusions [25,26]. Most existing studies on AI and fairness also treat fairness as a single construct, without examining its dimensions in detail [2,8]. Moreover, a decision’s impact on people depends not only on the decision-maker but also on factors surrounding the decision [7]; it is therefore necessary to explore the factors that may interact with the decision-maker to influence perceptions of fairness, that is, the boundary conditions that previous studies have neglected. In particular, people may focus more on the outcome of a decision than on its process [9,10].
Accordingly, the research model takes the decision-maker (human versus AI) as the independent variable, two fairness perceptions (procedural and distributive fairness) as dependent variables, and outcome favorability and the expertise of AI as moderating variables. For applicants, the result of resume screening matters greatly: it determines whether they advance in the application process and, ultimately, whether they get the job. As discussed above, moreover, people may judge decisions that produce favorable outcomes more positively [29]. This study therefore adds outcome favorability to the model as a moderating variable and hypothesizes that it negatively moderates the relationship between the decision-maker and fairness. In addition, people may view the expertise of AI differently depending on their characteristics (such as gender, age, and education), a factor that could also affect the results, so this study treats the expertise of AI as a second moderating variable and hypothesizes that it moderates the same relationship. AI is now widely used across human resource management, but each HRM scenario has distinct characteristics and needs; this study therefore takes the resume-screening scenario as an example to explore the likely impact of human versus AI resume screeners on applicants’ procedural and distributive fairness.
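In regression form, the moderation structure described here can be sketched as follows (the notation is ours, not the paper’s):

```latex
% Moderated regression: the interaction coefficient carries the moderation effect.
\mathrm{Fairness} \;=\; \beta_0 + \beta_1\,\mathrm{Screener} + \beta_2\,M
                  + \beta_3\,(\mathrm{Screener}\times M) + \varepsilon
```

Here Screener is coded 0 for human and 1 for AI, M is the moderator (outcome favorability in Study 1, the expertise of AI in Study 2), and the sign and significance of the interaction coefficient beta_3 capture the hypothesized moderation.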
The research methods are as follows. This study tests the hypothesized model with two scenario experiments in which participants with employment experience play the role of applicants. The first experiment manipulates both the resume screener and the decision outcome in a 2 × 2 between-subjects design, examining the main effect and the moderating effect of outcome favorability. The second experiment manipulates the resume screener and the expertise of AI, re-examining the main effect and testing the moderating effect of the expertise of AI. In the data analysis, a reliability analysis of the scales is carried out to ensure the consistency of participants’ scores. In the results, analysis of variance (ANOVA) and regression analysis of the experimental data verify that different resume screeners produce different perceptions of fairness and confirm the roles of the two moderators. In the discussion, we explain how AI screens resumes and why this can be perceived as unfair. The overall research framework is shown in Figure 1.

4. Research Hypothesis

4.1. The Impact of Decisions Made by Humans or Artificial Intelligence on Applicants’ Perceptions of Fairness

This study centers on resume-screening scenarios, which require decision-makers to assess all aspects of a resume comprehensively by integrating experience and expertise with emotional intelligence, rather than relying solely on data analysis. Resume screening therefore demands skills regarded as uniquely human, and people would perceive procedural fairness as inferior if it were carried out by AI [1]. Concerns have also long been raised about the “interpretability” and “transparency” of AI. For laypeople, AI has always had a black-box effect: they cannot understand its decision-making process or assess whether its decisions are fair in terms of distribution or procedure [30]. Fairness heuristic theory and signaling theory hold that cues and signals issued by the organization serve as the basis for people’s assessments of how fair a decision is [1,3]. In contrast to humans, the criteria AI follows when evaluating resumes are opaque, and applicants have no means of knowing how, or by what standards, their materials will be judged. People may therefore believe that the organization has ambiguous policies for handling and assessing information, perceive this as a signal that decisions affecting them are not made openly and transparently, and use this cue of ambiguous rules as the basis for their fairness assessments, which in turn lowers their perception of procedural fairness [3].
On the other hand, people assess the fairness of a current screening outcome against their prior experiences. Compared with the traditional method, AI is a new experience that may feel distinctly different from what they knew before [1]. In the traditional method, humans process the information to reach the decision outcome, whereas AI generates the outcome using an algorithmic rule unfamiliar to the person. This may violate people’s internal standards of fairness and equity and lead them to feel that the outcomes are insufficiently fair. Furthermore, algorithmic reductionism holds that AI quantifies individuals’ qualitative traits and evaluates them in isolation. This is incompatible with human resource work, which requires a thorough examination of human characteristics, and it seriously violates the principle that just processes should rest on accurate information [2]. Based on these points, the following hypotheses are proposed:
Hypothesis 1a (H1a):
When AI screens resumes instead of humans, people will develop a lower perception of procedural fairness.
Hypothesis 1b (H1b):
When AI screens resumes instead of humans, people will develop a lower perception of distributive fairness.

4.2. The Moderating Role of Outcome Favorability

Economists claim that, because people are motivated by their own interests, they are typically rather indifferent to the interests of the group; when asked to respond to a decision, they give more weight to whether it benefits them personally than to its effect on everyone else [31]. This implies that the decision’s outcome is especially significant and that different outcomes can have different effects. According to behavioral decision theory, humans decide in highly complex and uncertain environments with limited knowledge, limited computing capacity, and bounded rationality, so they typically aim for merely satisfactory outcomes rather than ideal ones. The theory further holds that, when people receive poor outcomes, they reconstruct the decision-making process, re-examine the decision context, and conduct a more thorough decision analysis to find out why they received the results they did [9]. In other words, negative outcomes make people scrutinize the decision-making process more closely, which is precisely when AI’s potentially unfair flaws are highlighted and people react more strongly to perceived unfairness [10].
Research indicates that individuals evaluate a decision’s quality according to whether its outcome is positive or negative, sometimes independently of the decision-making process; this emphasis on evaluating the outcome rather than the process is known as “outcome bias” [32]. Outcome bias colors individuals’ perceptions of the decision-making process when they concentrate on the result rather than on the process or quality of the decision [33], and decisions made by AI exhibit this bias as well. More precisely, people view as fairer the judgment that benefits them, even when the procedure behind both judgments is identical, and this bias is exacerbated when there is insufficient information to assess the decision’s quality [29]. It follows that decisions with positive outcomes are seen as fairer and yield a variety of other benefits. Formosa et al. indicate that individuals feel treated more courteously and respectfully when decisions have positive consequences, which raises their sense of fairness [34]. A study conducted in a judicial setting found that plaintiffs who obtain a favorable result (e.g., a judge granting their request) perceive court officials’ decisions as fairer and develop more positive emotions toward them [35,36]. It has also been found that, when people see AI making judgments that benefit them, they see the AI as fairer; this perception even offsets the negative effect of learning that the AI is highly prejudiced against a particular group [7]. On this basis, this study proposes the following hypotheses:
Hypothesis 2a (H2a):
Outcome favorability negatively moderates the relationship between the resume screener and procedural fairness. That is, the effect of the resume screener on procedural fairness is diminished under a positive outcome and strengthened under a negative one.
Hypothesis 2b (H2b):
Outcome favorability negatively moderates the relationship between the resume screener and distributive fairness. That is, the effect of the resume screener on distributive fairness is diminished under a positive outcome and strengthened under a negative one.

4.3. The Moderating Role of Expertise in Artificial Intelligence

Previous studies have paid little attention to the possibility that people’s perceptions of the expertise of AI vary with their traits (e.g., knowledge, age, or personality), which could affect how fair they feel the technology is. One interesting study found that when a technology is labeled as “expert”, people’s approval of the information it generates increases and their views of it are affected, amounting to an unconscious reaction [37]. Related research shows that humans frequently take information from authoritative sources at face value and quickly accept textual cues of competence, leading to unconscious acceptance of machine-generated content tagged as “expert” [38]. A replication using smartphones and apps yielded similar findings: mobile advertisements from specialized hardware and software agents led to higher purchase intentions [39]. People trust specialized machines more regardless of their actual performance, just as they trust specialists in particular fields more than generalists with broad knowledge and experience.
Although this inference has not yet been applied to AI, this study predicts that people will judge specialist AI more favorably than general AI, based on previous findings. In other words, by demonstrating the expertise of AI, people will regard AI more favorably, reducing the competence gap between humans and AI and reducing the negative impact of AI decisions on the sense of fairness. Therefore, this study argues that the expertise of AI is a very important boundary condition that determines whether the effects of various screeners on fairness are greater or lesser. On this basis, this study proposes the following hypotheses:
Hypothesis 3a (H3a):
The expertise of AI positively moderates the relationship between the resume screener and procedural fairness. That is, the effect of the resume screener on procedural fairness is diminished when the expertise of AI is high and strengthened when it is low.
Hypothesis 3b (H3b):
The expertise of AI positively moderates the relationship between the resume screener and distributive fairness. That is, the effect of the resume screener on distributive fairness is diminished when the expertise of AI is high and strengthened when it is low.
Figure 2 shows the research model of this study.

5. Study 1: Artificial Intelligence, Fairness, and Outcome Favorability

5.1. Sample

An online scenario experiment was used to evaluate the hypotheses. Study 1 examines the main effect of the different screeners on individual perceptions of fairness, as well as the first moderating variable, namely whether the moderating influence of outcome favorability is significant. The intended participants were people with work experience, recruited online. The sample size was determined with G*Power 3.1 before data collection, setting a medium effect size (0.25), a significance level of 0.05, and statistical power of 90%; the resulting minimum sample size was 171. Of the 220 eligible subjects initially recruited, 189 remained after removing those who failed the attention checks. The participants’ demographic details are shown in Table 1. The gender distribution was fairly balanced, with 49.2% female and 50.8% male. Participants aged 18–30 comprised 70.4% of the sample, followed by those aged 31–40 (18.0%), and 55.6% of participants held a Bachelor’s degree.
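For readers without G*Power, the same a priori calculation can be approximated in Python with statsmodels. This is an illustrative sketch, not the authors’ procedure: it treats the two-level main-effect contrast of the 2 × 2 design as a two-group F test.

```python
# A priori sample-size estimate mirroring the reported G*Power settings:
# medium effect size f = 0.25, alpha = 0.05, power = 0.90.
from statsmodels.stats.power import FTestAnovaPower

analysis = FTestAnovaPower()
# k_groups=2 approximates the df = 1 main-effect contrast of the 2 x 2 design.
n_total = analysis.solve_power(effect_size=0.25, alpha=0.05,
                               power=0.90, k_groups=2)
print(f"Minimum total sample size: {n_total:.0f}")  # ~170, close to the reported 171
```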

5.2. Procedure and Stimuli

Before the study, each participant completed an informed consent form and received information on the main goals, procedures, and requirements. They then completed two attention checks: one asked whether they had ever applied for a job, and the other confirmed that they were participating seriously and answering truthfully. Passing both was required to move on to the main study. The experimental materials were created for a corporate recruitment setting based on earlier research [1]. The scenario material comprised three sections with a unified theme: background information, resume screener, and job application outcome. The background information set up an online recruitment scenario in which participants acted as applicants told they needed to apply for a job. The resume screener was manipulated as artificial intelligence versus human, and outcome favorability as pass versus reject. Combining the two factors yielded four recruitment scenario stories, and participants were randomly assigned to one of the four. After reading the scenario, participants answered questions on distributive fairness, procedural fairness, and manipulation checks, and finally reported their demographic characteristics.

5.3. Measures

On a 1–5 scale (strongly disagree to strongly agree), participants rated their agreement with three statements (Cronbach’s α = 0.877) adapted from Bauer et al. to measure procedural fairness [40], and with three statements (Cronbach’s α = 0.798) adapted from Schinkel et al. to measure distributive fairness [41]. The specific measurement items for each variable are listed in Table 2.
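Cronbach’s α for each three-item scale can be computed directly from the raw item responses; the following is a minimal sketch with hypothetical column names and data, not the study’s dataset.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 1-5 Likert responses to the three procedural-fairness items.
df = pd.DataFrame({"pf1": [4, 5, 2, 3], "pf2": [4, 4, 2, 3], "pf3": [5, 5, 1, 3]})
print(round(cronbach_alpha(df), 3))
```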

5.4. Results

5.4.1. Mean Difference between Resume Screeners on Applicants’ Perceptions of Fairness

An ANOVA in SPSS 25.0 tested whether perceptions of procedural and distributive fairness differ across resume screeners (see Figure 3). The results showed significant mean differences in procedural fairness (Mhuman = 3.92 > MAI = 2.63; p < 0.001) and distributive fairness (Mhuman = 3.64 > MAI = 2.49; p < 0.001): both fairness perceptions are significantly higher when resumes are screened by humans than by AI. Therefore, H1a and H1b are supported.
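The same mean comparison can be reproduced outside SPSS; here is an illustrative sketch with scipy on hypothetical scores, not the study’s data.

```python
import numpy as np
from scipy import stats

# Hypothetical procedural-fairness scores by resume screener (1-5 scale).
human = np.array([4.2, 3.8, 4.0, 3.6, 4.1])
ai = np.array([2.5, 2.9, 2.4, 2.8, 2.6])

f_stat, p_value = stats.f_oneway(human, ai)  # one-way ANOVA across the two groups
print(f"M_human = {human.mean():.2f}, M_AI = {ai.mean():.2f}, "
      f"F = {f_stat:.2f}, p = {p_value:.4f}")
```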

5.4.2. The Interaction Effect of Resume Screeners and Outcome Favorability

This study used both ANOVA and regression analysis in SPSS 25.0 to test for interaction effects. With control variables included, perceptions of procedural and distributive fairness differed significantly by outcome favorability: compared with the rejection condition, participants in the acceptance condition reported higher procedural fairness (Mpass = 3.89 > Mreject = 2.62, p < 0.05) and distributive fairness (Mpass = 3.71 > Mreject = 2.37, p < 0.01).
In addition, regressing perceptions of procedural and distributive fairness on the resume screener, outcome favorability, and their interaction term showed that outcome favorability moderates the relationship between the resume screener and both fairness perceptions. To elucidate this moderating effect, Figures 4 and 5 plot the interaction: they show a significant interaction between the resume screener and outcome favorability, with the screener’s effect on both fairness perceptions stronger under rejection than under acceptance. Therefore, H2a and H2b are supported.
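The moderation test corresponds to an OLS regression with an interaction term. A sketch using the statsmodels formula API follows; the file and variable names are hypothetical, not the study’s.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per participant, with screener
# ("human"/"AI"), outcome ("pass"/"reject"), and a fairness score.
df = pd.read_csv("study1.csv")  # illustrative file name

# "C(screener) * C(outcome)" expands to both main effects plus their interaction;
# a significant interaction coefficient is the moderation effect tested here.
model = smf.ols("procedural_fairness ~ C(screener) * C(outcome)", data=df).fit()
print(model.summary())
```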

6. Study 2: Artificial Intelligence, Fairness, and Expertise of Artificial Intelligence

6.1. Sample

An online scenario experiment was again used to explore the hypotheses. Study 2 retests the main effect of the resume screeners on applicants’ views of fairness, along with the second moderating variable, namely whether the moderating effect of the expertise of AI is substantial. The target participants were again people with work experience, recruited via the Internet. Of the 240 eligible participants initially recruited, 215 remained after removing those who failed the attention checks. The participants’ demographic details are shown in Table 3. The gender distribution was reasonable, with 56.5% male and 43.5% female. Participants aged 18–30 made up the largest share (68.2%), followed by those aged 31–40 (20.6%), and 65.9% of participants held a Bachelor’s degree.

6.2. Procedure and Stimuli

As in Study 1, each participant first learned about the study’s goals, procedures, and requirements and completed an informed consent form. They then completed two attention checks: one asked whether they had ever applied for a job, and the other confirmed that they were participating seriously and answering truthfully; passing both was required before proceeding to the main study. The scenario materials were the same as in Study 1 [1]. The resume screener was manipulated as artificial intelligence versus human, and the expertise of AI as specialist versus general [42]. Participants were randomly allocated to one of the four recruitment scenario stories created by crossing the two factors. After reading the scenario material, participants answered manipulation checks and questions on procedural and distributive fairness, and finally reported their demographic characteristics.

6.3. Measures

All items in Study 2 were the same as in Study 1 (Table 2). All measurement items were assessed on a five-point Likert scale ranging from 1 (strongly disagree) to 5 (strongly agree). The Cronbach’s α values for perceptions of procedural and distributive fairness were 0.797 and 0.796, respectively, indicating acceptable reliability.

6.4. Results

6.4.1. Mean Difference between Resume Screeners on Applicants’ Perceptions of Fairness

An ANOVA in SPSS 25.0 tested whether perceptions of procedural and distributive fairness differ across resume screeners (see Figure 6). The results showed significant mean differences in procedural fairness (Mhuman = 3.39 > MAI = 2.80; p < 0.001) and distributive fairness (Mhuman = 3.53 > MAI = 2.77; p < 0.001): both fairness perceptions are significantly higher when resumes are screened by humans than by AI. Therefore, H1a and H1b are again supported.

6.4.2. The Interaction Effect of Resume Screeners and Expertise of Artificial Intelligence

This study used both ANOVA and regression analysis in SPSS 25.0 to test for interaction effects. With control variables included, perceptions of procedural and distributive fairness differed significantly by the expertise of AI: compared with the general AI condition, participants in the specialist AI condition reported higher procedural fairness (Mspecialist = 3.86 > Mgeneral = 2.37, p < 0.001) and distributive fairness (Mspecialist = 3.86 > Mgeneral = 2.49, p < 0.001).
In addition, regressing perceptions of procedural and distributive fairness on the resume screener, the expertise of AI, and their interaction term showed that the expertise of AI moderates the relationship between the decision-maker and both fairness perceptions. To elucidate this moderating effect, Figures 7 and 8 plot the interaction: they show a significant interaction between the resume screener and the expertise of AI, with the screener’s effect on both fairness perceptions stronger when the AI was general rather than specialist. Therefore, H3a and H3b are supported.
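Plots in the style of Figures 7 and 8 can be produced with statsmodels’ interaction_plot; the sketch below assumes the same hypothetical long-format layout as the Study 1 example.

```python
import matplotlib.pyplot as plt
import pandas as pd
from statsmodels.graphics.factorplots import interaction_plot

# Hypothetical data: columns screener, expertise, distributive_fairness.
df = pd.read_csv("study2.csv")  # illustrative file name

fig, ax = plt.subplots()
interaction_plot(
    x=df["screener"].map({"human": 0, "AI": 1}),  # numeric codes for the x-axis
    trace=df["expertise"],                        # one line per expertise level
    response=df["distributive_fairness"],
    ax=ax,
)
ax.set_xticks([0, 1])
ax.set_xticklabels(["human", "AI"])
ax.set_ylabel("Distributive fairness (1-5)")
plt.show()  # non-parallel lines indicate the screener-by-expertise interaction
```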

7. General Discussion

Unlike the general fairness construct used in previous studies, this study divides fairness into two dimensions. Comparing humans and AI as screeners in a resume-screening scenario, it concludes that AI screening lowers applicants’ perceptions of both procedural and distributive fairness. AI screens resumes primarily through natural language processing (NLP) and traditional machine-learning (ML) techniques [43]. With NLP, AI can understand the semantics of the text and thus read and parse the content of a resume. With traditional ML techniques, a screening model is trained on a large number of historical sample resumes and learns correlations between screening results and sample features; the AI then outputs screening results and recommended resumes. AI is not, however, completely unbiased in this process. For example, the results of an ML-based screening model depend on which dataset and which sample features are chosen for training, and it is difficult to ensure that the historical dataset and features selected are always comprehensive and appropriate. Furthermore, the more complex the AI model, the more likely it is to be a “black box”: if applicants do not understand how the ML model arrives at its results, the credibility of the decision model declines, which in turn heightens their perception of unfairness.
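As a concrete illustration of the NLP-plus-ML pipeline just described, here is a minimal sketch with scikit-learn; the resumes, labels, and model choice are hypothetical, not the systems studied in the paper.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical historical resumes and past screening decisions (1 = pass).
resumes = ["5 years Java backend experience, team lead",
           "recent graduate, marketing internship",
           "10 years project management, PMP certified",
           "retail cashier, part-time, no degree"]
labels = [1, 0, 1, 0]

# NLP step: TF-IDF turns resume text into feature vectors;
# ML step: a classifier learns correlations between features and past outcomes.
screener = make_pipeline(TfidfVectorizer(), LogisticRegression())
screener.fit(resumes, labels)

print(screener.predict_proba(["8 years Java, led a backend team"])[:, 1])
```

Note that any bias embedded in the historical labels is learned and reproduced by the model, which is exactly the dataset-selection concern raised above.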
Recently, the explainability of AI has become a hot topic, and advanced Explainable Artificial Intelligence (XAI) techniques such as SHAP (SHapley Additive exPlanations) are increasingly used. SHAP offers global and local explanations that reveal how ML models use features to reach evaluation or prediction results, opening the “black box” to a certain extent [44]. In resume screening, for example, interpretable machine-learning models can show how each applicant’s features affect the AI’s screening results. However, current XAI still cannot completely solve the black-box problem, so the bias introduced by AI decision-making cannot be entirely eliminated. In summary, at this stage, replacing humans with AI in resume screening will trigger applicants’ perceptions of unfairness.
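Continuing the hypothetical screener sketched above, SHAP can attribute a screening score to individual resume features. The snippet uses the open-source shap library, not the paper’s tooling, and the exact API may vary across shap versions.

```python
import shap  # open-source SHAP library

# Explain the logistic-regression step of the hypothetical screener above.
vectorizer, clf = screener.named_steps.values()
X = vectorizer.transform(resumes).toarray()

explainer = shap.LinearExplainer(clf, X)  # background data sets the global baseline
shap_values = explainer.shap_values(X)    # local, per-resume feature attributions

# Each value shows how strongly a term pushed one resume toward pass or reject.
print(dict(zip(vectorizer.get_feature_names_out(), shap_values[0])))
```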

7.1. Theoretical Contributions

First, this study broadens the fairness literature and adds to the understanding of the relationship between AI and fairness, particularly distributive fairness, deepening our grasp of how individuals perceive fairness in organizations. Most previous studies of the impact of AI-made decisions on fairness discussed fairness only broadly [2,8]. This study focuses on the resume-screening scenario within organizational human resource management and explores, from the applicant’s point of view, the impact of AI screening resumes in place of human resource workers, confirming that AI lowers individuals’ perceptions of fairness. The results are consistent with several earlier studies [1,28], extending theoretical research on procedural fairness, and they further clarify the relationship between AI and procedural fairness, for which answers were previously ambiguous [6].
Meanwhile, by examining the reactions of individuals affected by AI-made decisions, this study contributes to our understanding of the psychology of AI. In a future where artificial intelligence is applied extensively, we need to understand how the technology affects people’s views in order to expand its use. It is crucial to view AI from the standpoint of individual psychology, even though earlier research focused on the relationship between AI and organizational performance, or on whether people accept the recommendations the technology offers [5,45]. In particular, applicants may currently experience negative psychological effects from AI resume screening, because they associate AI with algorithmic reductionism and believe it overlooks the human traits that resume screening specifically requires.

7.2. Managerial Implications

This study offers several new insights for organizational management. First, substituting AI for human resource employees in resume screening may reduce applicants’ perceptions of fairness, so managers should exercise caution when deciding whether to use AI in place of humans for decision-making. While AI may increase efficiency and save costs, it may also harm individual fairness and decrease the likelihood that applicants choose the organization, affecting talent intake and, thus, the organization’s sustainability. At the same time, to reduce the negative impact of algorithmic reductionism on applicants’ perceptions of fairness, managers who implement AI should disclose the information the AI uses to screen resumes and the screening procedure itself.
Second, when employing AI for resume screening, managers can take steps to blunt the unfavorable impact of negative outcomes. Where circumstances permit, they can avoid informing applicants of negative outcomes; when a negative outcome must be delivered, managers should soothe the applicant’s feelings to mitigate its adverse effects. Furthermore, applicants’ perceptions of fairness may be enhanced by high AI expertise. If funding permits, managers should therefore introduce or develop specialized AI where feasible, or train the AI to increase its expertise after introduction. Managers should also notify applicants of the AI’s high expertise, so that they know a specialist AI is making the judgments, thereby encouraging perceptions of fairness.

7.3. Limitations and Future Research

This study has some limitations. First, it focuses on resume-screening scenarios and does not explore other human resource management scenarios, which limits the generalizability of its results. Future research could therefore examine more HRM scenarios to validate the findings. Although human resource management encompasses many different scenarios, they share some commonalities: whether in recruitment, training, or performance evaluation, personnel information must be collected, analyzed, and used. It is therefore worthwhile to refine this study’s model and method, identify the characteristics and needs of different HRM scenarios, conduct cross-scenario application research, and ultimately extend the results to a wider range of settings. Taking performance evaluation as an example, the model in this study includes two evaluators, human and AI; different evaluation results and the expertise of the AI used in the evaluation may interact with the evaluator to affect employees’ perceptions of fairness. Scenario experiments designed along these lines could verify the conclusions drawn here and broaden the model’s applicability.
Second, this study examined the moderating effects of outcome favorability and the expertise of AI separately, without exploring the two variables jointly. To improve the integrity of the study and the generalizability of its results, future research could add an experiment that includes all variables in the model. Third, decision-making may move toward human–AI collaboration in human resource management, which this study does not address. Humans collaborating with AI on HR decisions could exploit AI’s information-processing capabilities while also attending to details that purely human review might overlook, potentially yielding more balanced, fair, and contextually aware outcomes. Future studies could therefore compare human, AI, and human–AI collaborative decision-making to expand this study’s research model.

8. Conclusions

As artificial intelligence (AI) develops rapidly and significantly increases the effectiveness of information processing, many companies have begun using AI in their HRM processes. Because how applicants respond to decisions made by AI remains unclear, this study investigates the impact of different resume screeners (humans and AI) on applicants’ perceptions of procedural and distributive fairness in a resume-screening scenario, and explores outcome favorability and the expertise of AI through two scenario experiments. Study 1 tests the moderating role of outcome favorability together with the main effect of the resume screener on the two fairness perceptions; Study 2 tests the moderating effect of the expertise of AI and confirms the main effect once more. The findings show that applicants’ perceptions of procedural and distributive fairness are lower under AI resume screening than under traditional human methods, supporting Hypotheses 1a and 1b. This relationship is stronger under a negative decision outcome (reject) and mitigated under a positive one (pass), supporting Hypotheses 2a and 2b. Furthermore, when the expertise of AI is high, the effect of the resume screener on procedural and distributive fairness is diminished, supporting Hypotheses 3a and 3b.
This study focuses on situations where AI performs human resource management tasks; the case of resume screening shows how decisions made by AI affect applicants’ procedural and distributive fairness. Whereas previous studies mainly examined fairness in a general sense, this study divides fairness into two dimensions (procedural and distributive fairness) and finds that replacing humans with AI in resume screening harms both fairness judgments. It also builds a relatively complete model by combining the identity of the decision-maker, decision outcomes, and decision-maker traits. In conclusion, this study deepens the understanding of fairness, clarifies earlier contentious findings on AI and fairness, and emphasizes that decision outcomes should not be disregarded when focusing on the fairness of the process. It also offers ideas for variable selection and experimental design in subsequent research on decisions made by AI.

Author Contributions

Conceptualization, F.C.; methodology, F.C. and J.Z.; formal analysis, F.C.; investigation, F.C.; writing—original draft preparation, F.C. and J.Z.; writing—review and editing, J.Z. and L.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

This study was conducted in accordance with the Declaration of Helsinki. Ethical review and approval were waived because the study does not identify, collect, or record any personal information, and respondents’ anonymity and confidentiality were ensured.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available upon request from the first author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Acikgoz, Y.; Davison, K.H.; Compagnone, M.; Laske, M. Justice perceptions of artificial intelligence in selection. Int. J. Sel. Assess. 2020, 28, 399–416. [Google Scholar] [CrossRef]
  2. Newman, D.T.; Fast, N.J.; Harmon, D.J. When eliminating bias isn’t fair: Algorithmic reductionism and procedural justice in human resource decisions. Organ. Behav. Hum. Decis. Process. 2020, 160, 149–167. [Google Scholar] [CrossRef]
  3. Colquitt, J.A.; Zipay, K.P. Justice, fairness, and employee reactions. Annu. Rev. Organ. Psychol. Organ. Behav. 2015, 2, 75–99. [Google Scholar] [CrossRef]
  4. Adamovic, M. Organizational justice research: A review, synthesis, and research agenda. Eur. Manag. Rev. 2023, 20, 762–782. [Google Scholar] [CrossRef]
  5. Logg, J.M.; Minson, J.A.; Moore, D.A. Algorithm appreciation: People prefer algorithmic to human judgment. Organ. Behav. Hum. Decis. Process. 2019, 151, 90–103. [Google Scholar] [CrossRef]
  6. Castelo, N.; Bos, M.W.; Lehmann, D.R. Task-dependent algorithm aversion. J. Mark. Res. 2019, 56, 809–825. [Google Scholar] [CrossRef]
  7. Wang, R.; Harper, F.M.; Zhu, H. Factors influencing perceived fairness in algorithmic decision-making: Algorithm outcomes, development procedures, and individual differences. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; pp. 1–14. [Google Scholar]
  8. Lee, M.K. Understanding perception of algorithmic decisions: Fairness, trust, and emotion in response to algorithmic management. Big Data Soc. 2018, 5, 2053951718756684. [Google Scholar] [CrossRef]
  9. Brockner, J.; Wiesenfeld, B.M.; Martin, C.L. Decision frame, procedural justice, and survivors reactions to job layoffs. Organ. Behav. Hum. Decis. Process. 1995, 63, 59–68. [Google Scholar] [CrossRef]
  10. Sasaki, H.; Hayashi, Y. Moderating the interaction between procedural justice and decision frame: The counterbalancing effect of personality traits. J. Psychol. 2013, 147, 125–151. [Google Scholar] [CrossRef]
  11. McCarthy, J.; Minsky, M.L.; Rochester, N.; Shannon, C.E. A proposal for the dartmouth summer research project on artificial intelligence. AI Mag. 2006, 27, 12. [Google Scholar]
  12. Grove, W.M.; Zald, D.H.; Lebow, B.S.; Snitz, B.E.; Nelson, C. Clinical versus mechanical prediction: A meta-analysis. Psychol. Assess. 2000, 12, 19. [Google Scholar] [CrossRef] [PubMed]
  13. Black, J.S.; van Esch, P. AI-enabled recruiting: What is it and how should a manager use it? Bus. Horiz. 2020, 63, 215–226. [Google Scholar] [CrossRef]
  14. Allal-Chérif, O.; Aránega, A.Y.; Sánchez, R.C. Intelligent recruitment: How to identify, select, and retain talents from around the world using artificial intelligence. Technol. Forecast. Soc. Change 2021, 169, 120822. [Google Scholar] [CrossRef]
  15. Augustyn, M.B.; Ward, J.T. Exploring the sanction–crime relationship through a lens of procedural justice. J. Crim. Justice 2015, 43, 470–479. [Google Scholar] [CrossRef]
  16. Thibaut, J.; Walker, L. A theory of procedure. Calif. Law Rev. 1978, 66, 541. [Google Scholar] [CrossRef]
  17. Geenen, B.; Proost, K.; Schreurs, B.; van Dijke, M.; Derous, E.; De Witte, K.; von Grumbkow, J. The influence of general beliefs on the formation of justice expectations: The moderating role of direct experiences. Career Dev. Int. 2012, 17, 67–82. [Google Scholar] [CrossRef]
  18. Konradt, U.; Garbers, Y.; Böge, M.; Erdogan, B.; Bauer, T.N. Antecedents and consequences of fairness perceptions in personnel selection: A 3-year longitudinal study. Group Organ. Manag. 2017, 42, 113–146. [Google Scholar] [CrossRef]
  19. McCarthy, J.M.; Van Iddekinge, C.H.; Lievens, F.; Kung, M.C.; Sinar, E.F.; Campion, M.A. Do candidate reactions relate to job performance or affect criterion-related validity? A multistudy investigation of relations among reactions, selection test scores, and job performance. J. Appl. Psychol. 2013, 98, 701. [Google Scholar] [CrossRef]
  20. McCarthy, J.M.; Bauer, T.N.; Truxillo, D.M.; Anderson, N.R.; Costa, A.C.; Ahmed, S.M. Applicant perspectives during selection: A review addressing “So what?”, “What’s new?”, and “Where to next?”. J. Manag. 2017, 43, 1693–1725. [Google Scholar] [CrossRef]
  21. Bauer, T.N.; Maertz, C.P., Jr.; Dolen, M.R.; Campion, M.A. Longitudinal assessment of applicant reactions to employment testing and test outcome feedback. J. Appl. Psychol. 1998, 83, 892. [Google Scholar] [CrossRef]
  22. Derous, E.; Born, M.P.; Witte, K.D. How applicants want and expect to be treated: Applicants’ selection treatment beliefs and the development of the social process questionnaire on selection. Int. J. Sel. Assess. 2004, 12, 99–119. [Google Scholar] [CrossRef]
  23. Ryan, A.M.; Ployhart, R.E. Applicants’ perceptions of selection procedures and decisions: A critical review and agenda for the future. J. Manag. 2000, 26, 565–606. [Google Scholar] [CrossRef]
  24. Bell, B.S.; Wiechmann, D.; Ryan, A.M. Consequences of organizational justice expectations in a selection system. J. Appl. Psychol. 2006, 91, 455. [Google Scholar] [CrossRef] [PubMed]
  25. Wojcieszak, M.; Thakur, A.; Ferreira Gonçalves, J.F.; Casas, A.; Menchen-Trevino, E.; Boon, M. Can AI enhance people’s support for online moderation and their openness to dissimilar political views? J. Comput.-Mediat. Commun. 2021, 26, 223–243. [Google Scholar] [CrossRef]
  26. Bai, B.; Dai, H.; Zhang, D.; Zhang, F.; Hu, H. The impacts of algorithmic work assignment on fairness perceptions and productivity. In Proceedings of the Academy of Management Proceedings. Acad. Manag. 2021, 2021, 12335. [Google Scholar]
  27. Gupta, M.; Parra, C.M.; Dennehy, D. Questioning racial and gender bias in AI-based recommendations: Do espoused national cultural values matter? Inf. Syst. Front. 2022, 24, 1465–1481. [Google Scholar] [CrossRef]
  28. Nagtegaal, R. The impact of using algorithms for managerial decisions on public employees’ procedural justice. Gov. Inf. Q. 2021, 38, 101536. [Google Scholar] [CrossRef]
  29. Baron, J.; Hershey, J.C. Outcome bias in decision evaluation. J. Personal. Soc. Psychol. 1988, 54, 569. [Google Scholar] [CrossRef] [PubMed]
  30. Yang, S.J.; Ogata, H.; Matsui, T.; Chen, N.S. Human-centered artificial intelligence in education: Seeing the invisible through the visible. Comput. Educ. Artif. Intell. 2021, 2, 100008. [Google Scholar] [CrossRef]
  31. Rodriguez-Lara, I.; Moreno-Garrido, L. Self-interest and fairness: Self-serving choices of justice principles. Exp. Econ. 2012, 15, 158–175. [Google Scholar] [CrossRef]
  32. Fischhoff, B. Hindsight is not equal to foresight: The effect of outcome knowledge on judgment under uncertainty. J. Exp. Psychol. Hum. Percept. Perform. 1975, 1, 288. [Google Scholar] [CrossRef]
  33. Bankins, S.; Formosa, P.; Griep, Y.; Richards, D. AI decision making with dignity? Contrasting workers’ justice perceptions of human and AI decision making in a human resource management context. Inf. Syst. Front. 2022, 24, 857–875. [Google Scholar] [CrossRef]
  34. Formosa, P.; Rogers, W.; Griep, Y.; Bankins, S.; Richards, D. Medical AI and human dignity: Contrasting perceptions of human and artificially intelligent (AI) decision making in diagnostic and medical resource allocation contexts. Comput. Hum. Behav. 2022, 133, 107296. [Google Scholar] [CrossRef]
  35. Hou, Y.; Lampe, C.; Bulinski, M.; Prescott, J.J. Factors in fairness and emotion in online case resolution systems. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, Denver, CO, USA, 6–11 May 2017; pp. 2511–2522. [Google Scholar]
  36. Sasaki, H.; Hayashi, Y. Justice orientation as a moderator of the framing effect on procedural justice perception. J. Soc. Psychol. 2014, 154, 251–263. [Google Scholar] [CrossRef]
  37. Leshner, G.; Reeves, B.; Nass, C. Switching channels: The effects of television channels on the mental representations of television news. J. Broadcast. Electron. Media 1998, 42, 21–33. [Google Scholar] [CrossRef]
  38. Nass, C.; Moon, Y. Machines and mindlessness: Social responses to computers. J. Soc. Issues 2000, 56, 81–103. [Google Scholar] [CrossRef]
  39. Kim, K.J. Can smartphones be specialists? Effects of specialization in mobile advertising. Telemat. Inform. 2014, 31, 640–647. [Google Scholar] [CrossRef]
  40. Bauer, T.N.; Truxillo, D.M.; Sanchez, R.J.; Craig, J.M.; Ferrara, P.; Campion, M.A. Applicant reactions to selection: Development of the selection procedural justice scale (SPJS). Pers. Psychol. 2001, 54, 387–419. [Google Scholar] [CrossRef]
  41. Schinkel, S.; van Vianen, A.E.; Marie Ryan, A. Applicant reactions to selection events: Four studies into the role of attributional style and fairness perceptions. Int. J. Sel. Assess. 2016, 24, 107–118. [Google Scholar] [CrossRef]
  42. Hong, J.W.; Choi, S.; Williams, D. Sexist AI: An experiment integrating CASA and ELM. Int. J. Hum. –Comput. Interact. 2020, 36, 1928–1941. [Google Scholar] [CrossRef]
  43. Behzadi, F. Natural language processing and machine learning: A review. Int. J. Comput. Sci. Inf. Secur. 2015, 13, 101. [Google Scholar]
  44. Zhang, J.; Yuan, J.; Mahmoudi, A.; Ji, W.; Fang, Q. A data-driven framework for conceptual cost estimation of infrastructure projects using XGBoost and Bayesian optimization. J. Asian Archit. Build. Eng. 2023, 1–24. [Google Scholar] [CrossRef]
  45. Dietvorst, B.J.; Simmons, J.P.; Massey, C. Algorithm aversion: People erroneously avoid algorithms after seeing them err. J. Exp. Psychol. Gen. 2015, 144, 114. [Google Scholar] [CrossRef]
Figure 1. Research framework.
Figure 2. Proposed research model.
Figure 3. Mean difference between resume screeners on applicants’ perceptions of fairness (Study 1).
Figure 4. Moderating effect of outcome favorability on the relationship between resume screener and procedural fairness.
Figure 5. Moderating effect of outcome favorability on the relationship between resume screener and distributive fairness.
Figure 6. Mean difference between resume screeners on applicants’ perceptions of fairness (Study 2).
Figure 7. Moderating effect of the expertise of AI on the relationship between resume screener and procedural fairness.
Figure 8. Moderating effect of the expertise of AI on the relationship between resume screener and distributive fairness.
Table 1. Descriptive characteristics (Study 1).

Demographic Characteristic     Descriptive            Frequency (n = 189)    Percentage (%)
Gender                         Male                   96                     50.8
                               Female                 93                     49.2
Age                            18–30                  133                    70.4
                               31–40                  34                     18.0
                               41–50                  15                     7.9
                               51–60                  7                      3.7
Highest Education Level        High school or less    34                     18.0
                               Some college           45                     23.8
                               College                105                    55.6
                               Graduate school        5                      2.6
Table 2. Measurement items.

Construct                Items
Procedural Fairness      I think applicants have full information about the form of the company’s screening.
                         I believe all applicants’ resumes are screened in the same way.
                         How the company’s screening process determines which applicants can move to subsequent interviews is fair.
Distributive Fairness    It is fairly determined whether each applicant will get an interview.
                         I think the result of this resume screening is fair.
                         All applicants are treated equally in the company’s screening procedure.
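The constructs in Table 2 were presumably rated on Likert-type scales and compared across experimental conditions in SPSS. As a minimal sketch of how that analysis could be reproduced outside SPSS, the Python fragment below scores each construct as the mean of its three items, tests the mean difference between AI and human screeners (cf. Figure 3), and tests moderation as an interaction term (cf. Figures 4 and 5). The file name and column names (screener, favorable, pf1–pf3, df1–df3) are hypothetical stand-ins, not the authors’ variable names.

```python
# Minimal sketch of the reported analysis in Python (the authors used SPSS).
# All file and column names below are hypothetical placeholders.
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

df = pd.read_csv("study1_responses.csv")  # hypothetical data file

# Score each construct as the mean of its three items (Table 2).
df["procedural"] = df[["pf1", "pf2", "pf3"]].mean(axis=1)
df["distributive"] = df[["df1", "df2", "df3"]].mean(axis=1)

# Mean difference between AI and human resume screeners (cf. Figure 3).
ai = df.loc[df["screener"] == "AI", "procedural"]
human = df.loc[df["screener"] == "human", "procedural"]
t, p = stats.ttest_ind(ai, human)
print(f"Procedural fairness: t = {t:.2f}, p = {p:.3f}")

# Moderation by outcome favorability: a significant screener-by-favorability
# interaction means the screener effect depends on the outcome (cf. Figures 4 and 5).
model = smf.ols("procedural ~ C(screener) * C(favorable)", data=df).fit()
print(model.summary())
```

The same template applies to Study 2 by swapping the favorability factor for the expertise-of-AI factor; the interaction coefficient is what Figures 4, 5, 7 and 8 visualize as non-parallel lines.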
Table 3. Descriptive characteristics (Study 2).

Demographic Characteristic     Descriptive            Frequency (n = 214)    Percentage (%)
Gender                         Male                   121                    56.5
                               Female                 93                     43.5
Age                            18–30                  146                    68.2
                               31–40                  44                     20.6
                               41–50                  19                     8.9
                               51–60                  5                      2.3
Highest Education Level        High school or less    12                     5.6
                               Some college           16                     7.5
                               College                141                    65.9
                               Graduate school        45                     21.0