Article

Intention to Work with Social Robots: The Role of Perceived Robot Use Self-Efficacy, Attitudes Towards Robots, and Beliefs in Human Nature Uniqueness

by Jean-Christophe Giger 1,2,*, Nuno Piçarra 3, Grzegorz Pochwatko 4, Nuno Almeida 1,5 and Ana Susana Almeida 1

1 Psychology Research Centre (CIP), University of Algarve, 8005-139 Faro, Portugal
2 University Research Center in Psychology (CUIP), University of Algarve, 8005-139 Faro, Portugal
3 Research Centre for Psychological Family and Social Wellbeing (CRC-W), Portuguese Catholic University, 1649-023 Lisbon, Portugal
4 Laboratory of Virtual Reality and Psychophysiology, Institute of Psychology, Polish Academy of Sciences, 00-378 Warsaw, Poland
5 FAROTESTE—Avaliação Psicológica, 8000-220 Faro, Portugal
* Author to whom correspondence should be addressed.
Multimodal Technol. Interact. 2025, 9(2), 9; https://doi.org/10.3390/mti9020009
Submission received: 12 May 2024 / Revised: 30 December 2024 / Accepted: 16 January 2025 / Published: 21 January 2025

Abstract
Recent studies have highlighted the crucial role of perceived robot use self-efficacy in human–robot interaction. This paper investigates the interplay between perceived robot use self-efficacy, attitudes towards robots, and beliefs in human nature uniqueness (BHNU) on the intention to work with social robots. Participants (N = 117) first filled out a questionnaire measuring their BHNU and attitudes towards robots. Then, they were randomly exposed to a video displaying a humanoid social robot (either humanlike or mechanical). Finally, participants indicated their robot use self-efficacy and their intention to work with the displayed social robot. Regression and serial mediation analyses showed the following: (1) the intention to work with social robots was significantly predicted by robot use self-efficacy and attitudes towards robots; (2) BHNU had a direct influence on attitudes towards robots and an indirect influence on the intention to work with social robots through attitudes towards robots and robot use self-efficacy. Our findings expand the current research on the impact of perceived robot use self-efficacy on the intention to work with social robots. Implications for human–robot interaction and human resource management are discussed.

1. Introduction

The latest technological breakthrough is sustained by artificial intelligence (AI) and social robotics. Social robots (SRs) are embodied AI agents created for peer-to-peer human–machine interaction and for which social and emotional interactions play a critical role. Indeed, SRs are designed to express and perceive emotions, communicate with natural language, recognize and learn from human or robotic agents, exhibit non-verbal cues (e.g., gaze, gestures), and display a distinctive personality and character. In other words, they are made for establishing and maintaining social relationships with their users. The presence of such cutting-edge technologies is skyrocketing across many industries and services, such as military facilities, hospitals, factories, and stores. The appropriate implementation and use of AI and social robotics represent a competitive advantage for organizations, and their embedding across organizational functions, for example as collaborators, will deeply influence workforce development. However, the introduction of SRs as collaborators will increase complexity and pose novel demands on workforces. Indeed, the ongoing introduction of emotional AI such as SRs will require from employees a mindset that favors the quick, easy, and smooth acquisition of the knowledge and skills needed to collaborate with SRs. As has already been suggested, employees will in the future be paid based on their ability to work with robots [1], not on whether they like it. Indeed, using a technology does not mean endorsing or embracing it. Employees can be compelled to use new technologies even if they do not fully support or agree with their implementation. Technology adoption therefore relies on a successful and effective implementation, which in turn depends on individuals' acceptance [2]. Understanding and predicting robot acceptance will be crucial for organizations in the near future. As employees are the greatest asset of organizations, “organizations should enhance employee skills and abilities to work with robots, maximize the effectiveness of human–robot teams, and build managerial systems that foster institutional synergy” [3] (pp. 49–50). Consequently, identifying the factors influencing the intention to work with SRs is crucial for the performance of employees, teams, and organizations. The present study contributes to such a program and aimed to explore, specifically, how psychosocial factors such as the tendency to deny human attributes to SRs (e.g., emotions), attitudes towards robots, and perceived robot use self-efficacy are associated with the intention to work with SRs.

2. Related Work

2.1. Behavioral Intention to Work with Social Robots

Behavioral intention is the most proximal determinant of actual behavior. Indeed, “intentions are assumed to capture the motivational factors that influence behavior; they are indications of how hard people are willing to try, of how much of an effort they are planning to exert, in order to perform the behavior. As a rule, the stronger the intention to engage in a behavior, the more likely should be its performance” [4] (p. 181). In technology adoption research, technology acceptance is generally defined as the intention to use a technology for the tasks it is designed to support and is considered a strong predictor of the overt use of that technology [5]. Identifying the core psychological factors underlying acceptance is therefore essential to avoid or minimize resistance to, or rejection of, a new technology. We argue that the tendency to deny an ontological status to SRs, attitudes towards robots, and the perceived robot use self-efficacy to work with SRs can influence the intention to work with SRs (see Figure 1).

2.2. The Role of Perceived Robot Use Self-Efficacy in the Acceptance to Work with SRs

In human–robot interaction (HRI), the way people think they can deal with a robot is operationalized through the concept of robot use self-efficacy. Self-efficacy refers to the estimates or beliefs about one's own ability or capacity to perform in a specific situation or on a particular task and to produce the desired outcomes [6]. Perceptions of self-efficacy determine engagement in an activity, the level of effort expended on it, and the level of perseverance when facing difficulties [6]. Within technology acceptance research, self-efficacy related to technology use has consistently been shown to be associated with a more effective and enjoyable use of computers, software, and the internet, as well as with less anxiety about using them (see [7,8,9]). The investigation of the relationship between individuals' perceived self-efficacy in interacting with SRs and their acceptance of SRs has been sporadic and quite recent [10]. However, the handful of existing studies shows a significant association. For example, Turja et al. (2019) [11] showed that robot use self-efficacy, reported by healthcare workers, was positively associated with their technological interest and use of technological assistive tools. Moreover, Latikka et al. (2019) [8] found that acceptance of humanoid, pet, lifting, and telepresence robots was also positively associated with perceived robot use self-efficacy among care work staff. Finally, research in behavioral decision making showed that perceived behavioral control (i.e., the perceived ease or difficulty of performing a behavior, which includes the perception of self-efficacy as well as the available material resources and behavioral constraints) was a significant predictor of the intention to work with an SR (see [12,13,14]).
Interestingly, Robinson et al. (2020) [15] showed that direct experience with an SR influences self-efficacy. They asked participants to briefly interact with the SR Pepper. The interaction consisted of a 2 min interactive tutorial delivered by the robot. They found that participants reported higher perception of self-efficacy to operate and apply the SR to a task after the interaction than prior to the interaction. Moreover, the change in self-efficacy uniquely contributed to the prediction of willingness to use the robot. The kind of interaction between user and robot can also influence the user’s perceived self-efficacy to use a robot. Indeed, Zafari et al. (2019) [16] observed that participants reported a higher level of self-efficacy after interacting with a robot displaying a person-oriented interaction style.
In short, the handful of existing studies suggests that perceived robot use self-efficacy plays a crucial role for understanding acceptance or rejection of robots at the workplace. Accordingly, the following is hypothesized:
H1. 
Perceived robot use self-efficacy predicts the intention to work with an SR.

2.3. Attitudes Towards Robots as Antecedents of Robot Use Self-Efficacy and Acceptance to Work with SRs

Users' prior evaluations of robots, and especially attitudes, are important factors in HRI [3,17,18]. Attitudes are classically defined as the positive or negative evaluation of an object, concept, or behavior, and are considered a key factor in predicting behavioral intention or actual behavior [19]. Indeed, positive attitudes are associated with approach behaviors, while negative attitudes are associated with avoidance behaviors [19]. In HRI research, attitudes towards robots have generally been operationalized and measured using the negative attitude towards robots scale (NARS) [20]. The NARS gauges individuals' unwillingness to interact with a robot due to the negative emotions or anxiety triggered by the thought of interacting with humanlike and non-humanlike robots. Empirical research using the NARS showed that general negative attitudes towards robots were associated with a weaker intention to use SRs [21,22]. Moreover, negative attitudes towards robots with human traits and towards interactions with robots (measured with the Portuguese version of the NARS [23]) were found to be associated with both negative attitudes towards working with SRs and a weaker behavioral intention to work with SRs [23]. Finally, more negative attitudes towards robots were also found to be negatively correlated with robot use self-efficacy [9]. Based on these results, the following is hypothesized:
H2. 
Attitudes towards robots predict perceived robot use self-efficacy.
H3. 
Attitudes towards robots predict intention to work with an SR.

2.4. Exploring the Role of Beliefs in Human Nature Uniqueness (BHNU) as a Distal Antecedent of Intention to Work with SRs Through Attitudes Towards Robots and Perceived Robot Use Self-Efficacy

There is a trend in HRI to humanize robots through “the implementation of social (e.g., language, nonverbal behavior, personality, emotions, and empathy), ethical (e.g., moral, values), and spiritual competences (e.g., religion, culture, and tradition)” [24] (p. 111). Humanlike SRs are supposed to reduce uncertainty and misunderstanding during the interaction. Indeed, it is suggested that the display of human attributes will make the interaction easier and more meaningful [24]. However, the use of humanlike SRs also triggers negative reactions [24], such as feelings associated with the uncanny valley effect [25], as well as realistic [26] and identity [27] threats. Recently, some researchers proposed the concept of belief in human nature uniqueness (BHNU) [28,29,30]. BHNU was elaborated from psychological essentialism theory and refers to “the extent to which humans reserve human nature for their own group and deny the possibility of a human essence to robots” [29] (p. 67). BHNU differs from realistic and identity threats. Realistic threat reflects the reaction to a threat to resources (i.e., material resources, safety, or physical well-being), whereas identity threat stems from the feeling that one's identity and distinctiveness are damaged. BHNU refers to a chronic individual tendency to deny an ontological status to SRs based on the endorsement of a set of beliefs that reserves the hallmarks of humanness only for humans. Empirical research showed that BHNU was associated with negative emotions and attitudes towards interacting with robots and humanlike robots [28,29,30]. People characterized by a high level of BHNU are also those who report more nervousness when thinking about meeting a robot [31]. Accordingly, the following is hypothesized:
H4. 
BHNU predicts attitudes towards robots.
H5. 
BHNU predicts perceived robot use self-efficacy.
H6. 
BHNU has a remote indirect effect on acceptance to work with an SR through attitudes towards robots and perceived self-efficacy.

3. Study Design

3.1. Participants

The convenience sample was composed of 117 participants (Mage = 34.11; SD = 12.43), among whom 79 were female (Mage = 37.79; SD = 12.18) and 38 were male (Mage = 39; SD = 12.92). Participants were Brazilian (n = 92) or Portuguese (n = 24), and one participant did not report their nationality. Participants worked in finance (n = 9), education (n = 8), health (n = 8), their own business (n = 8), banking (n = 7), manufacturing (n = 5), the police (n = 5), the food business (n = 5), human resources (n = 5), administration (n = 4), the hotel business (n = 3), civil protection (n = 2), scientific research (n = 2), sales (n = 2), informatics (n = 2), or in diverse activity sectors (n = 10); some participants were still students (n = 21) or did not report their job (n = 11).

3.2. Data Collection and Procedure

All participants volunteered to take part in an online study. Recruitment was carried out through social networks. When clicking on the link, participants were first informed about the voluntary character of their participation, the confidentiality of the data collected, and the possibility of stopping their participation at any time if they felt uncomfortable with the task. They were then asked to complete the consent form. Once this was done, participants gained access to the measures. Figure 2 displays a summary of the procedure. First, participants filled out the negative attitude towards robots scale (NARS) and the beliefs in human nature uniqueness scale (BHNUS). Then, they were asked to watch a video (see description below) presenting either a humanlike SR or a humanoid machinelike SR. The use of two robots with either a human or a humanoid machine appearance was aimed at controlling for the potential effect of robot appearance on the measures under study (see [24]). Finally, all participants were asked to report their perceived robot use self-efficacy (PRUSE) as well as their intention to work with the SR (IWSR) displayed in the video.

3.3. Materials

Belief in human nature uniqueness. Participants responded to the items of the beliefs in human nature uniqueness scale (BHNUS; [28,30]) on a 7-point scale (1 = Totally disagree to 7 = Totally agree). The BHNUS assesses the individual tendency to deny SRs the possibility of having human-specific attributes. Higher values indicate stronger beliefs in the uniqueness of human nature. The BHNUS displayed good internal consistency reliability (see Table 1).
Attitude towards social robots. Participants filled out the Portuguese adaptation (PNARS [23]) of the negative attitude towards robots scale (NARS [20]). The PNARS is a 12-item scale composed of two factors: the negative attitudes towards robots with human traits (NARHT) and the negative attitudes towards interactions with robots (NATIR). Items are rated on a 7-point scale ranging from 1 = I totally disagree to 7 = I totally agree. To facilitate the reading of results, scores were reversed so that higher scores indicated a more positive attitude towards robots. Inversion was indicated by adding (R). The total scale PNARS (R) as well as its subdimensions NARHT (R) and NATIR (R) displayed good internal consistency reliability (see Table 1).
Social Robots. The Snackbot is an assistive social humanoid mechanical-like robot developed at Carnegie Mellon University (see [32] for a detailed description). The Actroid DER (https://www.kokoro-dreams.co.jp/english/rt_rent/actroid/, accessed on 15 January 2025) is a full-sized realistic humanlike social robot. The videos lasted 1 min and 50 s. Participants received the following instructions: “In the future it will be common to interact with robots. This will happen in public spaces (factories, offices, museums) and in our houses. We are going to show you a video with one of those social robots. Your task is to imagine yourself working with this robot in the near future and forming an opinion about it”. During both videos, a female voice narrated the following: “Hello, my name is Snackbot (or Actroid) and I’m a social robot. A social robot is a robot created to interact with people in a natural fashion. To do that, my creators included in my design human characteristics like eyes, mouth, language, and the capacity to understand and perform social behaviors. In the future, I will be performing such jobs as a hotel receptionist, personal trainer, or office clerk. Some even say that in the future I will be responsible for caring for the elders. Goodbye and see you in the future”.
Perceived robot use self-efficacy (PRUSE). Participants reported their PRUSE to work with the SR displayed in the video by responding to seven items on a 7-point scale (1 = Totally disagree to 7 = Totally agree). The items were as follows: “I can’t work with this robot; I am confident that I will be able to work with this robot; I am not able to interact with this robot; I am unable to communicate with this robot; I wouldn’t like to receive work instructions from this robot; This robot would be a good working partner; It would be easy to work with this robot”. The PRUSE scale was developed specifically for this study and was designed to assess individuals' self-efficacy in engaging and interacting with the SR displayed in the video. Items 1, 3, 4, and 5 were reverse-coded and the seven items were aggregated so that higher scores indicated stronger PRUSE. The scale displayed good internal consistency reliability (see Table 1).
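To make the scoring concrete, the following is a minimal Python sketch of reverse-coding and aggregating the PRUSE items; the column names (pruse_1 to pruse_7) are hypothetical placeholders, not the study's actual variable names.

```python
import pandas as pd

# Hypothetical column names for the seven PRUSE items (placeholders, not the study's variables).
PRUSE_ITEMS = [f"pruse_{i}" for i in range(1, 8)]
REVERSED = ["pruse_1", "pruse_3", "pruse_4", "pruse_5"]  # negatively worded items

def score_pruse(responses: pd.DataFrame) -> pd.Series:
    """Reverse-code the negatively worded items and average all seven items per participant."""
    items = responses[PRUSE_ITEMS].astype(float).copy()
    items[REVERSED] = 8 - items[REVERSED]  # on a 1-7 scale, the reverse of a score x is 8 - x
    return items.mean(axis=1)              # higher scores = stronger perceived robot use self-efficacy
```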
Behavioral intention to work with social robots (IWSR). Participants reported their intention to work with the SR displayed in the video by responding to five items on a 7-point scale (1 = I completely disagree to 7 = I completely agree): “I want to work with this robot in the future; To work with this robot, I would be prepared to invest a lot of effort; I would like to work with this robot; I would keep trying to work with this robot, even if it was very difficult; In the future, I want to work with this robot.” The behavioral intention scale was developed specifically for this study. Items were aggregated so that higher scores indicated a stronger intention to collaborate with the SR. The scale displayed good internal consistency reliability (see Table 1).

4. Results

4.1. Data Analysis

Descriptive statistics (i.e., means, standard deviations, skewness, and kurtosis) were calculated and explored for each scale. The reliability of the scales was assessed using two methods: Cronbach's alpha coefficient (α) and McDonald's omega (ωt), calculated with the OMEGA macro [33]. The use of omega is increasingly recommended because it is a better estimator of reliability than Cronbach's alpha [33]. The relations between the variables were investigated using Pearson correlation coefficients. Statistical analyses were conducted using SPSS, version 28.0 (IBM Corp., Armonk, NY, USA).
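As an illustration only, the sketch below shows how such descriptives, a textbook Cronbach's alpha, and the Pearson correlation matrix can be computed in Python; it is not the SPSS/OMEGA workflow used in the study, the data are simulated, and alpha would in practice be computed on each scale's item-level responses rather than on aggregated scores.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Toy stand-in for the N = 117 aggregated scale scores (simulated, for illustration only).
scales = pd.DataFrame(
    rng.integers(1, 8, size=(117, 6)).astype(float),
    columns=["BHNUS", "NARHT", "NATIR", "PNARS", "PRUSE", "IWSR"],
)

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha from item-level data: k/(k-1) * (1 - sum of item variances / variance of the total)."""
    k = items.shape[1]
    return (k / (k - 1)) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

# Descriptives (M, SD, skewness, kurtosis) and the Pearson correlation matrix, as in Tables 1 and 2.
descriptives = scales.agg(["mean", "std", "skew", "kurt"]).T
correlations = scales.corr(method="pearson")
```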
To test the prediction hypotheses, multiple linear regressions and a mediation model were conducted using PROCESS [34]. PROCESS is a regression-based bootstrapping approach. Consequently, PROCESS generates highly accurate confidence intervals and can deal with a small sample and the possibility of non-normality in the sampling distribution. It also avoids the shortcomings of Baron and Kenny's (1986) [35] causal steps method for mediation and of the Sobel test (see [34]). Finally, PROCESS generates results similar to those obtained when conducting structural equation modeling (SEM; see [34]).
Following our conceptual framework (see Figure 1), the effect of BHNU on the intention to work with the SR (IWSR) displayed in the video is mediated by attitudes towards robots (i.e., NARHT, NATIR) and PRUSE. Preliminary analyses revealed multicollinearity between NATIR and PRUSE. Although these factors are measured by different items, both tap into ways of interacting with robots, which can explain the multicollinearity. NATIR was therefore excluded from the model. To test the proposed associations among the variables, a serial mediation analysis (model 6, [34]) was conducted with BHNUS as the predictor of IWSR, and NARHT and PRUSE as serial mediators. Following standard procedures [34], all bootstrapping analyses were based on 5000 samples and 95% confidence intervals.
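For readers without access to the PROCESS macro, the following numpy sketch illustrates the logic of a model 6 serial mediation with percentile bootstrap confidence intervals; it is a simplified stand-in assuming ordinary least squares paths, not a reimplementation of PROCESS itself.

```python
import numpy as np

def ols(y: np.ndarray, X: np.ndarray) -> np.ndarray:
    """Ordinary least squares with an intercept; returns [intercept, slope_1, ..., slope_p]."""
    X1 = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X1, y, rcond=None)[0]

def serial_mediation(x, m1, m2, y, n_boot=5000, seed=0):
    """Bootstrap the three indirect effects of a model 6 serial mediation (X -> M1 -> M2 -> Y)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    boots = np.empty((n_boot, 3))
    for b in range(n_boot):
        i = rng.choice(n, size=n, replace=True)                           # resample participants
        a1 = ols(m1[i], x[i].reshape(-1, 1))[1]                           # X -> M1
        a2, d21 = ols(m2[i], np.column_stack([x[i], m1[i]]))[1:]          # X, M1 -> M2
        b1, b2 = ols(y[i], np.column_stack([x[i], m1[i], m2[i]]))[2:]     # M1, M2 -> Y (c' skipped)
        boots[b] = [a1 * b1,          # indirect effect via M1 only
                    a2 * b2,          # indirect effect via M2 only
                    a1 * d21 * b2]    # serial indirect effect via M1 and then M2
    ci = np.percentile(boots, [2.5, 97.5], axis=0)                        # 95% percentile CIs
    return boots.mean(axis=0), ci
```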

4.2. Descriptives Analyses and Control Check

Table 1 displays the descriptive statistics for the scales under study. Examination of the normality of the data showed that both skewness and kurtosis values were all below the thresholds recommended by Curran et al. [36] (i.e., 2 and 7, respectively; see also [37]). A series of one-sample t-tests revealed that participants reported a mean score significantly above the midpoint of the scales (i.e., 3.5) for all the variables. All scales displayed good reliability indicators (see Table 1).
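The normality and midpoint checks described above can be approximated with scipy as in the sketch below; it reuses the simulated scales frame from the earlier snippet, so the numbers are illustrative only.

```python
from scipy import stats

MIDPOINT = 3.5  # conceptual midpoint of the 1-7 response scales

for name, scores in scales.items():
    t, p = stats.ttest_1samp(scores, MIDPOINT)   # does the mean differ from the scale midpoint?
    sk = stats.skew(scores)                      # |Sk| should stay below 2 (Curran et al. [36])
    ku = stats.kurtosis(scores)                  # excess kurtosis, |K| should stay below 7
    print(f"{name}: M = {scores.mean():.2f}, t = {t:.2f}, p = {p:.3f}, Sk = {sk:.2f}, K = {ku:.2f}")
```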
Moreover, a one-way MANOVA was conducted to determine whether BHNUS, PNARS, NARHT, NATIR, PRUSE, and IWSR scores differed according to the type of robot (humanlike vs. humanoid mechanical-like). No significant difference in scores based on robot type was observed, F(5, 111) = 1.48, p = 0.20; Wilks' lambda = 0.93.
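A comparable robot-type check can be sketched with statsmodels' MANOVA; the robot_type column below is an assumed 0/1 coding of the video condition applied to the simulated scales frame, so the output will not reproduce the reported statistics.

```python
import numpy as np
from statsmodels.multivariate.manova import MANOVA

# Assumed condition coding: 0 = humanoid mechanical-like (Snackbot), 1 = humanlike (Actroid DER).
cond_rng = np.random.default_rng(1)
scales["robot_type"] = cond_rng.integers(0, 2, size=len(scales))

manova = MANOVA.from_formula(
    "BHNUS + PNARS + NARHT + NATIR + PRUSE + IWSR ~ C(robot_type)", data=scales
)
print(manova.mv_test())  # prints Wilks' lambda and the associated multivariate F test per effect
```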

4.3. Hypotheses Testing

Hypothesis 1 (H1) states that perceived robot use self-efficacy (PRUSE) predicts the intention to work with the displayed SR (IWSR). Results showed that PRUSE was positively correlated with IWSR (r = 0.56, p < 0.01; see Table 2) and was a significant predictor (β = 0.56, p < 0.001) of IWSR (see Table 3 and Figure 3). The pattern of results supports H1.
Hypothesis 2 (H2) states that attitudes towards robots predict perceived robot use self-efficacy (PRUSE). Due to multicollinearity between NATIR and PRUSE (see above), only the effect of the negative attitude towards robots with human traits (NARHT) was tested. Results showed that NARHT was positively correlated with PRUSE (r = 0.48, p < 0.01; see Table 2) and was a significant predictor (β = 0.49, p < 0.001) of PRUSE (see Table 3 and Figure 3). The pattern of results supports H2.
Hypothesis 3 (H3) states that attitudes towards robots predict the intention to work with social robots (IWSR). Results showed that NARHT was positively correlated with IWSR (r = 0.44, p < 0.01; see Table 2) and was a significant predictor (β = 0.28, p = 0.03) of IWSR (see Table 3 and Figure 3). The pattern of results supports H3.
Hypothesis 4 (H4) states that beliefs in human nature uniqueness (BHNU) predict attitudes towards robots. Results showed that BHNU was negatively correlated with NARHT (r = −0.54, p < 0.01; see Table 2) and was a significant predictor (β = −0.47, p < 0.001) of NARHT (see Table 3 and Figure 3). In other words, the more participants denied the possibility of a human ontology to robots, the less positive their attitudes towards robots with human traits. The pattern of results supports H4.
Hypothesis 5 (H5) states that beliefs in human nature uniqueness (BHNU) predict perceived robot use self-efficacy (PRUSE). Results showed that BHNU was negatively correlated with PRUSE (r = −0.31, p < 0.01; see Table 2) but was not a significant predictor (β = −0.066; p = 0.48) of PRUSE (see Table 3 and Figure 3). The pattern of results did not support H5.
Hypothesis 6 (H6) states that beliefs in human nature uniqueness (BHNU) have a remote indirect effect on the acceptance to work with a social robot (IWSR) through attitudes towards robots and perceived robot use self-efficacy (PRUSE). To test this, a serial mediation analysis was conducted with PROCESS [34]. Results revealed two significant indirect effects. BHNUS influenced IWSR: (1) via NARHT (effect = −0.13; SE = 0.06; 95% CI = [−0.27, −0.01]); and (2) sequentially via NARHT and PRUSE (effect = −0.13; SE = 0.04; 95% CI = [−0.23, −0.05]). In short, the results showed that NARHT and PRUSE mediated the relationship between BHNU and IWSR. The pattern of results supports H6.

5. Discussion

The introduction of SRs as collaborators will create a complex work environment. The current study investigated how the interplay between BHNU, attitudes towards robots and perceived robot use self-efficacy could influence the intention to work with an SR.
The present study contributes to the literature showing the explanatory power of perceived self-efficacy in robot acceptance. Consistent with previous research, this study shows that the perceived capacity to collaborate with an SR is a significant predictor of the intention to work with it (confirming H1). This suggests that employees who believe that they can easily deal with an SR will be more willing to collaborate with it. Importantly, our findings suggest that higher perceived robot use self-efficacy could help reduce resistance to technological implementations. Indeed, if employees believe they can successfully interact and collaborate with SRs, they will be more likely to accept and use them. For example, self-efficacy towards high-technology tasks was shown to be positively associated with occupational commitment, satisfaction, and work quality and quantity, and negatively associated with absenteeism and tardiness [38].
Moreover, attitudes towards robots were significant predictors of perceived robot use self-efficacy and of the intention to work with SRs. Participants with more positive pre-existing attitudes towards robots with human traits (NARHT) were also those who felt more competent to interact with an SR (confirming H2) and who were more eager to collaborate with one (confirming H3). These results stress the importance of pre-existing evaluations of a new technology for its implementation and acceptance. Our results are also congruent with earlier findings (e.g., [9,23]).
Additionally, the results showed that BHNU was an antecedent of attitudes towards robots with human traits (NARHT). In other words, participants who tend to reserve the hallmarks of humanness for humans were also those who held more negative attitudes towards robots with human traits (confirming H4). This pattern is congruent with the results of previous research [28,29,30]. However, BHNU was not a significant antecedent of perceived robot use self-efficacy (H5 not confirmed), indicating that the denial of an ontology to SRs did not impede a perception of self-efficacy to use robots.
Finally, and unique to our study, the results showed that BHNU was remotely associated with the willingness to collaborate with the displayed SR through attitudes towards robots (specifically NARHT) and perceived robot use self-efficacy (PRUSE). These results stress the importance of considering, in technological implementation, the potential individual tendency of employees to deny SRs an ontological nature. Indeed, individuals holding strong BHNU can be overly intimidated by technologies that challenge their conception of humanness.

5.1. Implications for Practice

Adaptation to a new technology such as SRs can be challenging for employees, and the current study results have implications for organizations, management, and practitioners. Working with interactive AI and SRs in the future will not only require hard skills (e.g., technology and computer skills, digital skills, programming skills) from employees, but also critical thinking skills and soft skills (e.g., communication skills).
First, the present findings suggest that robot use self-efficacy could be a powerful lever to promote the acceptance of SRs and the willingness to work with them. Because self-efficacy beliefs are formed through direct and indirect learning experiences, they can be changed. Human resources development practitioners can provide learning opportunities during vocational preparation and continuing education. Leaders can also promote a culture of self-efficacy by providing mastery experiences, promoting vicarious experiences, providing positive feedback, and offering psychological support (in case of doubt). Employees and teams that are well prepared to face the challenge of SRs will constitute a competitive advantage and an organizational asset.
Second, the current study also showed that robot use self-efficacy is shaped by pre-existing attitudes towards robots. Positive attitudes towards robots will be needed for effective communication and collaboration with SRs. Hence, human resources development practitioners and leaders should gauge these pre-existing attitudes and implement activities to change them.
Third, the current results also reveal that the tendency to deny an ontological status to SRs has an indirect influence on the intention to work with them through, sequentially, attitudes towards robots and perceived robot use self-efficacy. Although humanlike social robots are already present in popular culture, people still have a representation of robots as emotionless machines [39,40]. Accordingly, employees will need to be prepared to work with humanlike emotional robots. BHNU could be modified through training, thus allowing employees to have smoother interactions with SRs.

5.2. Limitations

The findings of this study must be interpreted in light of some limitations. First, the study is based on a convenience sample, and the use of non-probabilistic samples can increase the risk of selection bias and decrease the generalizability of the results. Future investigations should use representative samples. Second, the participants can be considered members of Western, educated, industrialized, rich, and democratic (WEIRD) societies [41]. Accordingly, further studies should be conducted with non-WEIRD participants. In short, future research should incorporate more diverse and representative participant groups to enhance the generalizability of the results. Finally, the experimental stimuli consisted of video excerpts featuring the two SRs, which were selected to provide participants with concrete exemplars. Although participants were instructed to imagine themselves working with these SRs, the video content was limited to isolated robotic demonstrations rather than human–robot interaction sequences. This methodological constraint may have attenuated participants' psychological engagement and their representation of SRs in the workplace.

5.3. Future Research

Because perceived robot use self-efficacy plays a crucial role in the intention to collaborate with SRs, future studies should determine the sociopsychological factors that may favor or inhibit it. One such factor could be emotions and emotional coping. Anticipating a task can trigger emotional states such as anxiety or enthusiasm that can influence the perception of self-efficacy. Self-efficacy has been shown to be associated with emotional coping in organizational [42] and educational [43] contexts. In HRI, the anticipation of working with an SR has been shown to trigger anticipated positive and negative emotions [12]. Interestingly, emotions are not only individual but also collective. Indeed, employees, as a group, can experience collective emotions towards SRs. Accordingly, future research should investigate how individual and collective emotions can influence robot use self-efficacy. Such a line of research could articulate the individual and group levels in human resources considerations for HRI [3].

Further research should also explore how to promote a quick change in attitudes towards robots and BHNU to accelerate the adaptation of employees to the introduction of SRs in organizations. Because organizations are becoming increasingly international, studies should explore intercultural differences in BHNU and attitudes towards robots and their influence on the intention to work with SRs. Although behavioral intention is considered the proximate cause and the strongest predictor of actual behavior, research has also shown that intentions do not always translate into actions [44]. Indeed, various sociocognitive and contextual factors can influence the strength of the intention–behavior relationship, such as direct experience [45]. The present study focused on the effect of an indirect experience with SRs (i.e., exposure to a video) on perceived robot use self-efficacy. Further studies should therefore explore how BHNU could impact perceived robot use self-efficacy after a direct experience with an SR. According to Chen et al. (2023) [46], enhancing the perceived social presence of robots may lead to improved HRI and a more favorable assessment of service quality. Future studies could then test how BHNU could influence perceived robot use self-efficacy and the intention to work with SRs through the perceived social presence of robots.

Finally, the integration of anthropomorphic SRs into workplace environments raises ethical considerations at both the macro- and micro-levels of analysis. At the societal level, the proliferation of these autonomous social agents raises fundamental concerns regarding labor market disruption, data privacy governance, and the ontological understanding of human labor (see [47,48,49] for a review). At the individual level, the deployment of SRs may create tension with personal moral frameworks, particularly among individuals exhibiting elevated levels of BHNU. Recent empirical evidence suggests that individuals scoring higher on BHNU measures demonstrate stronger religious adherence and perceive the development of social robots as ethically problematic [30]. This emerging pattern warrants further investigation into the interrelationships between BHNU, moral reasoning processes, and the psychological mechanisms underlying SR acceptance. Future research should elucidate these interconnections and their implications for successful human–robot integration in organizational settings in order to mitigate potential ethical issues.

6. Conclusions

Research on the technology acceptance of SRs is still in its infancy. The present study contributes to and expands the recent research on the factors that influence the acceptance of working with SRs and the effectiveness of HRI by showing how the tendency to deny an ontological status to SRs, prior attitudes towards robots, and perceived robot use self-efficacy interplay in influencing the intention to embrace the upcoming technological change represented by social robotics.

Author Contributions

Conceptualization, J.-C.G., N.P. and G.P.; methodology, J.-C.G. and N.P.; formal analysis, J.-C.G.; data curation, J.-C.G.; writing—original draft preparation, J.-C.G.; writing—review and editing, J.-C.G., N.P., G.P., N.A. and A.S.A.; funding acquisition, J.-C.G. and A.S.A. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by national funds through FCT—Fundação para a Ciência e a Tecnologia—as part of the project CIP—Refª UIDB/PSI/04345/2020.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki. All procedures performed in this study were in accordance with the American Psychological Association’s (APA) ethical principles and the Portuguese regulations about data protection. The study was a part of a broader project that was approved by the University of Algarve’s Research Ethics Committee (protocol code CEUAlg Pn°101/2024).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

All data are available upon request.

Conflicts of Interest

The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Kelly, K. Better Than Human: Why Robots Will—And Must—Take Our Jobs. Wired, 24 December 2012. Available online: https://www.wired.com/2012/12/ff-robots-will-take-our-jobs/ (accessed on 8 May 2024).
  2. Davis, F.D. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 1989, 13, 319–340. [Google Scholar] [CrossRef]
  3. Kim, S. Working with Robots: Human Resource Development Considerations in Human–Robot Interaction. Hum. Resour. Dev. Rev. 2022, 21, 48–74. [Google Scholar] [CrossRef]
  4. Ajzen, I. The theory of planned behavior. Organ. Behav. Hum. Decis. Process. 1991, 50, 179–211. [Google Scholar] [CrossRef]
  5. Venkatesh, V.; Morris, M.G.; Davis, G.B.; Davis, F.D. User Acceptance of Information Technology: Toward a Unified View. MIS Q. 2003, 27, 425–478. [Google Scholar] [CrossRef]
  6. Bandura, A.; Freeman, W.H.; Lightsey, R. Self-Efficacy: The Exercise of Control; Springer: Berlin/Heidelberg, Germany, 1997. [Google Scholar]
  7. Rahman, M.S.; Ko, M.; Warren, J.; Carpenter, D. Healthcare Technology Self-Efficacy (HTSE) and its influence on individual attitude: An empirical study. Comput. Hum. Behav. 2016, 58, 12–24. [Google Scholar] [CrossRef]
  8. Latikka, R.; Turja, T.; Oksanen, A. Self-Efficacy and Acceptance of Robots. Comput. Hum. Behav. 2019, 93, 157–163. [Google Scholar] [CrossRef]
  9. Rosenthal-von der Pütten, A.; Bock, N. Development and Validation of the Self-Efficacy in Human-Robot-Interaction Scale (SE-HRI). ACM Trans. Hum. Robot. Interact. THRI 2018, 7, 1–30. [Google Scholar] [CrossRef]
  10. Latikka, R.; Savela, N.; Koivula, A.; Oksanen, A. Perceived Robot Attitudes of Other People and Perceived Robot Use Self-efficacy as Determinants of Attitudes Toward Robots. In HCII 2021; Springer Nature: Berlin, Germany, 2021; pp. 262–274. [Google Scholar]
  11. Turja, T.; Rantanen, T.; Oksanen, A. Robot use self-efficacy in healthcare work (RUSH): Development and validation of a new measure. AI Soc. 2019, 34, 137–143. [Google Scholar] [CrossRef]
  12. Piçarra, N.; Giger, J.-C. Predicting intention to work with social robots at anticipation stage: Assessing the role of behavioral desire and anticipated emotions. Comput. Hum. Behav. 2018, 86, 129–146. [Google Scholar] [CrossRef]
  13. Piçarra, N.; Giger, J.-C.; Pochwatko, G.; Możaryn, J. Designing Social Robots for Interaction at Work: Socio-Cognitive Factors Underlying Intention to Work with Social Robots. J. Autom. Mob. Robot. Intell. Syst. 2016, 10, 17–26. [Google Scholar] [CrossRef]
  14. de Graaf, M.M.A.; Ben Allouch, S. Exploring influencing variables for the acceptance of social robots. Robot. Auton. Syst. 2013, 61, 1476–1486. [Google Scholar] [CrossRef]
  15. Robinson, N.L.; Hicks, T.N.; Suddrey, G.; Kavanagh, D.J. The Robot Self-Efficacy Scale: Robot Self-Efficacy, Likability and Willingness to Interact Increases After a Robot-Delivered Tutorial. In Proceedings of the IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Naples, Italy, 31 August–4 September 2020. [Google Scholar]
  16. Zafari, S.; Schwaninger, I.; Hirschmanner, M.; Schmidbauer, C.; Weiss, A.; Koeszegi, S.T. “You Are Doing so Great!”—The Effect of a Robot’s Interaction Style on Self-Efficacy in HRI. In Proceedings of the 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), New Delhi, India, 14–18 October 2019. [Google Scholar]
  17. Spatola, N.; Wudarczyk, O.A.; Nomura, T.; Cherif, E. Attitudes Towards Robots Measure (ARM): A New Measurement Tool Aggregating Previous Scales Assessing Attitudes Toward Robots. Int. J. Soc. Robot. 2023, 15, 1683–1701. [Google Scholar] [CrossRef]
  18. Naneva, S.; Sarda Gou, M.; Webb, T.L. A systematic review of attitudes, anxiety, acceptance, and trust towards social robots. Int. J. Soc. Robot. 2020, 12, 1179–1201. [Google Scholar] [CrossRef]
  19. Ajzen, I.; Fishbein, M. Attitudes and the Attitude-Behavior Relation: Reasoned and Automatic Processes. Eur. Rev. Soc. Psychol. 2011, 11, 1–33. [Google Scholar] [CrossRef]
  20. Nomura, T.; Kanda, T.; Suzuki, T. Experimental investigation into influence of negative attitudes toward robots on human–robot interaction. AI Soc. 2006, 20, 138–150. [Google Scholar] [CrossRef]
  21. Huang, H.-L.; Cheng, L.-K.; Sun, P.-C.; Chou, S.-J. The Effects of Perceived Identity Threat and Realistic Threat on the Negative Attitudes and Usage Intentions Toward Hotel Service Robots: The Moderating Effect of the Robot’s Anthropomorphism. Int. J. Soc. Robot. 2021, 13, 1599–1611. [Google Scholar] [CrossRef]
  22. Rantanen, T.; Lehto, P.; Vuorinen, P.; Coco, K. Attitudes towards care robots among Finnish home care personnel—A comparison of two approaches. Scand. J. Caring Sci. 2018, 32, 772–782. [Google Scholar] [CrossRef] [PubMed]
  23. Piçarra, N.; Giger, J.-C.; Pochwatko, G.; Gonçalves, G. Validation of the Portuguese version of the Negative Attitudes towards Robots Scale. Eur. Rev. Appl. Psychol. 2015, 65, 93–104. [Google Scholar] [CrossRef]
  24. Giger, J.; Piçarra, N.; Alves-Oliveira, P.; Oliveira, R.; Arriaga, P. Humanization of robots: Is it really such a good idea? Hum. Behav. Emerg. Technol. 2019, 1, 111–123. [Google Scholar] [CrossRef]
  25. Mori, M.; MacDorman, K.F.; Kageki, N. The uncanny valley [from the field]. IEEE Robot. Autom. Mag. 2012, 19, 98–100. [Google Scholar] [CrossRef]
  26. Złotowski, J.; Yogeeswaran, K.; Bartneck, C. Can we control it? Autonomous robots threaten human identity, uniqueness, safety, and resources. Int. J. Hum. Comput. Stud. 2017, 100, 48–54. [Google Scholar] [CrossRef]
  27. Ferrari, F.; Paladino, M.P.; Jetten, J. Blurring Human–Machine Distinctions: Anthropomorphic Appearance in Social Robots as a Threat to Human Distinctiveness. Int. J. Soc. Robot. 2016, 8, 287–302. [Google Scholar] [CrossRef]
  28. Giger, J.C.; Moura, D.; Almeida, N.; Piçarra, N. Attitudes towards social robots: The role of gender, belief in human nature uniqueness, religiousness and interest in science fiction. In Proceedings of the II International Congress on Interdisciplinarity in Social and Human Sciences, Faro, Portugal, 11–12 May 2017. [Google Scholar]
  29. Pochwatko, G.; Giger, J.C.; Różańska-Walczuk, M.; Świdrak, J.; Kukiełka, K.; Możaryn, J.; Piçarra, N. Polish Version of the Negative Attitude Toward Robots Scale (NARS-PL). J. Autom. Mob. Robot. Intell. Syst. 2015, 9, 65–72. [Google Scholar]
  30. Giger, J.-C.; Piçarra, N.; Pochwatko, G.; Almeida, N.; Almeida, A.S.; Costa, N. Development of the Beliefs in Human Nature Uniqueness Scale and Its Associations With Perception of Social Robots. Hum. Behav. Emerg. Technol. 2024, 2024, 5569587. [Google Scholar] [CrossRef]
  31. Łupkowski, P.; Wasielewska, A. The Cooperative Board Game THREE. A Test Field for Experimenting with Moral Dilemmas of Human-Robot Interaction. Ethics Prog. 2019, 10, 82–97. [Google Scholar] [CrossRef]
  32. Lee, M.K.; Forlizzi, J.; Rybski, P.; Crabbe, F.; Chung, W.; Finkle, J.; Glaser, E.; Kiesler, S. The Snackbot: Documenting the design of a robot for long-term Human-Robot Interaction. In Proceedings of the 4th ACM/IEEE International Conference on Human-Robot Interaction (HRI), La Jolla, CA, USA, 9–13 March 2009; pp. 7–14. [Google Scholar]
  33. Hayes, A.F.; Coutts, J.J. Use omega rather than Cronbach’s alpha for estimating reliability. But.... Commun. Methods Meas. 2020, 14, 1–24. [Google Scholar] [CrossRef]
  34. Hayes, A.F. Introduction to Mediation, Moderation, and Conditional Process Analysis: A Regression-Based Approach, 2nd ed.; Guilford Press: New York, NY, USA, 2018. [Google Scholar]
  35. Baron, R.M.; Kenny, D.A. The moderator–mediator variable distinction in social psychological research: Conceptual, strategic, and statistical considerations. J. Personal. Soc. Psychol. 1986, 51, 1173–1182. [Google Scholar] [CrossRef] [PubMed]
  36. Curran, J.; West, S.; Finch, J. The Robustness of Test Statistics to Nonnormality and Specification Error in Confirmatory Factor Analysis. Psychol. Methods 1996, 1, 16–29. [Google Scholar] [CrossRef]
  37. Hair, J.F.; Hult, G.T.M.; Ringle, C.M.; Sarstedt, M. A Primer on Partial Least Squares Structural Equation Modeling (PLS-SEM), 3rd ed.; Sage: Thousand Oaks, CA, USA, 2022. [Google Scholar]
  38. McDonald, T.; Siegall, M. The effects of technological self-efficacy and job focus on job performance, attitudes, and withdrawal behaviors. J. Psychol. Interdiscip. Appl. 1992, 126, 465–475. [Google Scholar] [CrossRef]
  39. Piçarra, N.; Giger, J.-C.; Pochwatko, G.; Gonçalves, G. Making sense of social robots: A structural analysis of the layperson’s social representation of robots. Eur. Rev. Appl. Psychol. 2016, 66, 277–289. [Google Scholar] [CrossRef]
  40. Brondi, S.; Pivetti, M.; Battista, S.D.; Sarrica, M. What do we expect from robots? Social representations, attitudes and evaluations of robots in daily life. Technol. Soc. 2021, 66, 1–10. [Google Scholar] [CrossRef]
  41. Henrich, J.; Heine, S.J.; Norenzayan, A. The weirdest people in the world? Behav. Brain Sci. 2010, 33, 61–83. [Google Scholar] [CrossRef] [PubMed]
  42. Stumpf, S.A.; Brief, A.P.; Hartman, K. Self-efficacy expectations and coping with career-related events. J. Vocat. Behav. 1987, 31, 91–108. [Google Scholar] [CrossRef]
  43. Sun, G.; Lyu, B. Relationship between emotional intelligence and self-efficacy among college students: The mediating role of coping styles. Discov. Psychol. 2022, 42, 2–8. [Google Scholar] [CrossRef]
  44. Conner, M.; Norman, P. Understanding the intention-behavior gap: The role of intention strength. Front. Psychol. 2022, 13, 923464. [Google Scholar] [CrossRef] [PubMed]
  45. Fazio, R.H.; Zanna, M.P. Direct experience and attitude-behavior consistency. Adv. Exp. Soc. Psychol. 1981, 14, 161–202. [Google Scholar]
  46. Chen, N.; Liu, X.; Zhai, Y.; Hu, X. Development and validation of a robot social presence measurement dimension scale. Sci. Rep. 2023, 13, 1–15. [Google Scholar] [CrossRef] [PubMed]
  47. Boada, J.P.; Maestre, B.R.; Genís, C.T. The ethical issues of social assistive robotics: A critical literature review. Technol. Soc. 2021, 67, 101726. [Google Scholar] [CrossRef]
  48. Torras, C. Ethics of Social Robotics: Individual and Societal Concerns and Opportunities. Annu. Rev. Control Robot. Auton. Syst. 2024, 7, 1–18. [Google Scholar] [CrossRef]
  49. Smids, J.; Nyholm, S.; Berkers, H. Robots in the Workplace: A Threat to—Or Opportunity for—Meaningful Work? Philos. Technol. 2020, 33, 503–522. [Google Scholar] [CrossRef]
Figure 1. Theoretical model under study.
Figure 2. Graphical display of the procedure.
Figure 3. Results of the serial multiple mediator model. * p < 0.05; ** p < 0.01; *** p < 0.001; ns = p > 0.05.
Table 1. Statistical characteristics of the scales under study.

Scale    Min–Max    M        SD      Sk       K        α       ωt
BHNUS    1–7        5.71 *   1.43    −1.36    1.49     0.87    0.88
NARHT    1–7        3.79 *   1.26    0.01     −0.14    0.86    0.71
NATIR    1–7        4.15 *   1.50    0.00     −0.53    0.86    0.85
PNARS    1–7        4.13 *   1.23    0.06     −0.22    0.86    0.86
PRUSE    1–7        4.68 *   1.39    −0.35    0.14     0.84    0.83
IWSR     1–7        3.78 *   1.71    0.24     −0.71    0.91    0.91
Notes. N = 117; M = mean; SD = standard deviation; Sk = skewness; K = kurtosis; α = Cronbach’s alpha; ωt = McDonald’s omega; BHNUS = beliefs in human nature uniqueness scale; NARHT = negative attitudes towards robots with human traits; NATIR = negative attitudes towards interactions with robots; PNARS = Portuguese negative attitudes towards robots scale; PRUSE = perceived robot use self-efficacy; IWSR = intention to work with the social robot; * = means differ from the middle point of the scale (i.e., 3.5) at p < 0.001 (one sample t-test).
Table 2. Correlations between variables under study.

             BHNUS      PNARS      NARHT      NATIR      PRUSE      IWSR
BHNUS        -          −0.45 **   −0.54 **   −0.38 **   −0.31 **   −0.28 **
PNARS (R)               -          0.87 **    0.92 **    0.60 **    0.39 **
NARHT (R)                          -          0.69 **    0.48 **    0.44 **
NATIR (R)                                     -          0.60 **    0.32 **
PRUSE                                                    -          0.56 **
IWSR                                                                -
Notes. N = 117; ** = p < 0.01; BHNUS = beliefs in human nature uniqueness scale; PNARS = Portuguese negative attitudes towards robots scale; NARHT = negative attitudes towards robots with human traits; NATIR = negative attitudes towards interactions with robots; (R) = to facilitate the reading of results, the scales were inverted, so that higher scores indicated more positive attitude; PRUSE = perceived robot use self-efficacy; IWSR = intention to work with the social robot.
Table 3. Regression coefficients, standard errors, and model summary information for the serial mediation model depicted in Figure 3.

Outcome: NARHT (M1)
  BHNUS (X):       Coeff. = −0.47,  SE = 0.06, p < 0.001, 95% CI [−0.61, −0.33]
  Constant:        Coeff. = 6.51,   SE = 0.40, p < 0.001, 95% CI [5.71, 7.32]
  R² = 0.29; F(1, 115) = 47.62, p < 0.001

Outcome: PRUSE (M2)
  BHNUS (X):       Coeff. = −0.066, SE = 0.09, p = 0.48,  95% CI [−0.25, 0.12]
  NARHT (R) (M1):  Coeff. = 0.49,   SE = 0.10, p < 0.001, 95% CI [0.28, 0.70]
  Constant:        Coeff. = 3.19,   SE = 0.83, p < 0.001, 95% CI [1.52, 4.85]
  R² = 0.23; F(2, 114) = 17.98, p < 0.0001

Outcome: IWSR (Y)
  BHNUS (X):       Coeff. = −0.03,  SE = 0.10, p = 0.72,  95% CI [−0.25, 0.17]
  NARHT (R) (M1):  Coeff. = 0.28,   SE = 0.13, p = 0.03,  95% CI [0.01, 0.54]
  PRUSE (M2):      Coeff. = 0.56,   SE = 0.10, p < 0.001, 95% CI [0.35, 0.77]
  Constant:        Coeff. = 0.30,   SE = 1.01, p = 0.76,  95% CI [−1.70, 2.31]
  R² = 0.35; F(3, 113) = 21.08, p < 0.00001
Notes. N = 117; CI = confidence intervals; Process model 6 with 5000 bootstraps; BHNUS = beliefs in human nature uniqueness scale; NARHT = negative attitudes towards robots with human traits; (R) = NARHT was inverted so that higher scores indicated more positive attitudes. PRUSE = perceived robot use self-efficacy; IWSR = intention to work with the social robot.
