Systematic Review

Trust and Trustworthiness from Human-Centered Perspective in Human–Robot Interaction (HRI)—A Systematic Literature Review

1 Virumaa College, Tallinn University of Technology (Taltech), 19086 Tallinn, Estonia
2 School of Digital Technologies, Tallinn University, 10120 Tallinn, Estonia
3 School of Engineering, Tallinn University of Technology (Taltech), 19086 Tallinn, Estonia
* Authors to whom correspondence should be addressed.
Electronics 2025, 14(8), 1557; https://doi.org/10.3390/electronics14081557
Submission received: 29 January 2025 / Revised: 29 March 2025 / Accepted: 31 March 2025 / Published: 11 April 2025
(This article belongs to the Special Issue Emerging Trends in Multimodal Human-Computer Interaction)

Abstract

The transition from Industry 4.0 to Industry 5.0 highlights recent European efforts to design intelligent devices, systems, and automation that can work alongside human intelligence and enhance human capabilities. In this vision, human–machine interaction (HMI) goes beyond simply deploying machines, such as autonomous robots, for economic advantage. It requires societal and educational shifts toward a human-centric research vision, revising how we perceive technological advancements to improve the benefits and convenience they offer individuals. It also requires determining what priority is given to users’ preferences and their need to feel safe while collaborating with autonomous intelligent systems. This human-centric vision aims to enhance human creativity and problem-solving abilities by leveraging machine precision and data processing, all while protecting human agency. Aligned with this perspective, we conducted a systematic literature review focusing on trust and trustworthiness in relation to the characteristics of humans and systems in human–robot interaction (HRI). Our research explores the aspects that affect the potential for designing and fostering machine trustworthiness from a human-centered standpoint. A systematic analysis of 34 recent HRI-related articles was conducted. Through a standardized screening, we then identified and categorized factors influencing trust in automation that can act as trust barriers and facilitators when implementing autonomous intelligent systems. Our study comments on the application areas in which trust is considered, how it is conceptualized, and how it is evaluated within the field. Our analysis underscores the significance of examining users’ trust, and the factors impacting it, as foundational elements for promoting secure and trustworthy HRI.

1. Introduction

Industry 5.0 relies on deploying intelligent automation systems, prioritizing human–machine interaction (HMI) and optimizing data-driven tools and processes. The envisioned objectives call for technologies that adapt to human needs and the diversity of human nature, empowering workers and improving their process efficiency rather than replacing them [1]. This vision aligns with recent European efforts to design intelligent devices, systems, and automation that complement human capabilities [2,3]. These perspectives on innovation also share a common goal: promoting a societal change in which advanced technologies, such as autonomous robots, “are actively used in everyday life, industry, healthcare and other spheres of activity, not primarily for economic advantage but for the benefit and convenience of each citizen” [1]. In that sense, technology should also provide tools that augment the human ability to be creative and solve problems, benefiting from machine precision and data processing while preserving the human role in critical decision-making processes.
Nevertheless, real-time interaction is necessary across several of the aforementioned domains, bringing significant challenges that affect the study of trust in intelligent technology. Within the HCI community, researchers argue that these difficulties stem “from the need to provide specifications for the perceptual, reasoning, and behavioral processes of robot systems that will need to acquire models of, and deal with, the high variability exhibited in human behavior” [4]. By the same token, current human–robot interaction (HRI) approaches should focus on the human and system qualities that affect the potential of building robust and adaptive industrial ecosystems, safeguarding this human-centric vision of technology development.
Drawing on recent HRI-related studies, we examined the phenomenon of trust, how it has been studied, and the environment-, human-, and design-related factors that restrain and bolster trust in intelligent automation in the contexts of human–machine collaboration. The following sections of this work describe the systematic literature review methodology, including the research objectives and strategies. It also explains how the screening, selection, and data-extraction processes were performed and validated. Next, we present and comment on the analysis and synthesis of the definitions of trust employed by the selected studies, their focus, and the aspects listed by their authors as trust barriers and facilitators. Finally, we discuss a pathway to foster appropriate trust in human–robot relations and support successful collaboration from a human-centered design (HCD) perspective.

1.1. Research Scope

A fundamental challenge lies in designing autonomous technology that promotes seamless human–machine collaboration. Recent advancements in AI-enabled systems often blur the general comprehension of what such systems do and of their impact at the societal and interpersonal levels: “while using these increasingly complex systems, we may not be concerned about trust itself, but the ultimate behaviors that trust is likely to produce” [5].
Current definitions of autonomous robots describe them as embodied systems capable of independently determining their actions without human intervention [4]. With the fast pace of technological development, the range and capabilities of such innovations vary widely, from robots in factories and homes, driverless transport, and autonomous drones to COBOTS deployed as teammates and robot concierges that check orders. A strict separation and role distinction between humans and machines may not be sufficient to mitigate the risks of unintended consequences when aiming to foster effective human–machine collaboration in complex and high-risk application contexts.
Therefore, promoting ethical, socially responsible, and user-trustworthy automation may require alternative design approaches. Adopting a human-centric approach to the problem is referred to as a solution for leveraging effective, safe, and trustworthy human–robot collaboration [3,6,7].

1.1.1. Industry 5.0 Vision

While Industry 5.0 envisions human-centric and sustainable innovations, the rapid adoption of robots and AI raises new challenges. Such fast-paced technological innovation introduces concerns about trust, ethics, safety, and security wherever human–machine collaboration is needed [4,8]. In cooperative settings, rather than validating discrete decisions made by an agent, a user or operator must actively work with agents to make decisions [9]. In such contexts, Just-in-Time (JIT) guidelines, combined with the principles of jidoka (a significant driver of human-centered automation focused on values of balance and empowerment [10]), form a robust foundation for lean manufacturing, ensuring efficiency, flexibility, and quality in production processes. With current technological advances, the concept of combining automated systems with human supervision, often referred to as “Human in the Loop” (HITL) automation, is widely discussed in automation and robotics. This approach emphasizes that human oversight can enhance the efficiency and reliability of automated systems, and it is argued that the combination leads to better machine maintenance and improved work organization. HITL automation ensures that humans can intervene when necessary, providing the critical judgment and decision-making that machines alone might not handle effectively.
The extent to which machines can exercise human judgment is widely debated and depends heavily on the actions and the context in which the machine operates. For simple and routine tasks, following the principle of jidoka to build human intelligence into the machine helps improve safety and product quality and reduce manufacturing time [11,12]. It can also foster a partnership between humans and machines, such as when drivers take over autonomous vehicles and control them during complex traffic scenarios. Regardless, the “Human in the Loop” framework demands new design approaches and methods to ensure that humans supervise, guide, and intervene in system operations when needed, as sketched below. This ensures that critical decisions are aligned with ethical, contextual, and situational requirements and that humans remain ethically accountable for their choices, as in complex and high-risk application contexts like AI-generated clinical diagnoses. Moreover, it will enable effective human–machine partnerships, fostering a future where technological progress aligns with societal and environmental well-being.
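As a concrete illustration of the HITL pattern discussed above, the minimal Python sketch below (all names, thresholds, and actions are hypothetical, not drawn from any reviewed system) shows an automated decision loop that acts autonomously when confident and defers to a human supervisor otherwise.

```python
# Minimal sketch of a "Human in the Loop" (HITL) decision cycle.
# All names, thresholds, and actions are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Proposal:
    action: str
    confidence: float  # 0.0 (no confidence) .. 1.0 (certain)

CONFIDENCE_THRESHOLD = 0.85  # below this, defer to the human supervisor

def automated_policy(observation: dict) -> Proposal:
    # Placeholder for the machine's own decision logic.
    if observation.get("obstacle_detected"):
        return Proposal("stop", 0.95)
    return Proposal("proceed", observation.get("certainty", 0.5))

def ask_human(observation: dict, proposal: Proposal) -> str:
    # Stand-in for a real operator interface; here, a console prompt.
    print(f"Robot proposes '{proposal.action}' "
          f"(confidence {proposal.confidence:.2f}) given {observation}")
    answer = input("Press Enter to approve, or type an override action: ").strip()
    return answer or proposal.action

def hitl_step(observation: dict) -> str:
    proposal = automated_policy(observation)
    if proposal.confidence >= CONFIDENCE_THRESHOLD:
        return proposal.action  # machine acts autonomously
    return ask_human(observation, proposal)  # human supervises and may intervene

if __name__ == "__main__":
    print(hitl_step({"obstacle_detected": False, "certainty": 0.6}))
```

The design choice the sketch highlights is that human intervention is triggered by the machine’s own uncertainty, keeping routine operation automated while reserving human judgment for ambiguous or high-stakes decisions.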

1.1.2. Human-Centric Vision

Interdisciplinary human–computer interaction (HCI) methodologies facilitate the design and evaluation of technologies from a human-centric perspective, adhering to guidelines like ISO 9241-210:2019 [13]. HCI practitioners and researchers have adopted this human-centered approach since the 1970s. They borrow knowledge from computer science, psychology, design, and other areas, focusing on improving system usability to make interactions between humans and machines more efficient and effective.
The field remains interdisciplinary while seeking to improve how people interact with advanced technology [14]. The frameworks guiding HCI research promote investigation into optimizing usability, usefulness, and user experience. They also encompass studies of emotional and social aspects, alongside the principles of cognitive psychology, to provide a deeper understanding of technology-related human behavior. Today, the perspectives of HCI studies also play a crucial role in achieving Industry 5.0’s goals of sustainability, inclusivity, and human-centric innovation. Furthermore, they have become interrelated with subfields such as human–robot interaction (HRI), human–machine interaction (HMI), human–agent interaction (HAI), and human–AI interaction/experience (HAX).

1.1.3. Human-Centric Trust Vision

In this interplay, trust is essential for enabling the successful integration of autonomous robots. As these technologies develop, they become increasingly capable of performing complex cognitive tasks and taking over more activities previously handled by humans. Their acceptance becomes more difficult due to consumers’ growing feelings of losing control and the idea that such innovations necessitate new knowledge, skills, and an understanding of their underlying operations [15]. The “black box” nature of AI systems further complicates this relationship, fueling debates about their benefits and risks [16,17]. Furthermore, trust and trustworthiness in human–robot interaction remain intricate and often misapplied concepts, leading to confusion about their role and definition [18,19]. This lack of clarity has impeded research on fostering acceptance of, and trust in, autonomous technologies such as self-driving cars, drones, and collaborative robots [20,21].
In a general sense, trust refers to an individual’s subjective appraisal when judging a trustee’s characteristic or set of characteristics, which researchers refer to as trustworthiness. Nonetheless, trust is an intricate and dynamic attitude that changes over time. It can be defined as a function of the confidence we place in someone or something to behave in a beneficial manner.
We highlight two approaches in HCI research: one views the phenomenon of trust as a predisposition, and the other views it as behavior. The first, pioneered by [22], describes trust propensity, characterized as the willingness to be vulnerable based on expected actions, reflecting the individual perceptions that shape this disposition. Such an approach considers human perceptions of technology and how they affect the individual’s disposition to trust. On the other hand, HRI researchers supported by the work of [23] often examine trust as behavior. Through these lenses, the phenomenon is described as an attitude that can mediate interactions because it guides reliance and, therefore, supports humans in overcoming potential complexities when interacting with sophisticated autonomous systems. Although definitions vary across disciplines, trust relationships in HRI hold commonalities: “there must be some task for whose execution the trustor needs the trustee. If the trustee’s execution is uncertain, the trustor becomes vulnerable. If the trustor accepts this vulnerability, then he/she trusts the trustee.” [24].
Trust is critical in uncertain or risk-prone environments, and its complexity has limited studies attempting to embed trust models in a decision-theoretic framework to implement adaptive robot behavior and promote fluent long-term collaboration [25]. Hence, it is crucial to investigate how trust unfolds within interactions if we aim to ensure the positive and successful integration of autonomous agents in real-world contexts while harnessing their benefits and mitigating potentially harmful outcomes [4,5].
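To make this decision-theoretic framing concrete, the sketch below (a hypothetical illustration, not a model taken from any reviewed study) treats trust as the expected probability that the robot succeeds, maintained as a Beta posterior that is updated after each observed outcome and consulted when deciding whether to rely on the robot.

```python
# Hypothetical Beta-Bernoulli trust estimator: trust is the expected
# probability that the robot succeeds, updated after each observed outcome.
class TrustEstimator:
    def __init__(self, prior_successes: float = 1.0, prior_failures: float = 1.0):
        # Beta(a, b) prior over the robot's reliability.
        self.a = prior_successes
        self.b = prior_failures

    @property
    def trust(self) -> float:
        return self.a / (self.a + self.b)  # posterior mean reliability

    def update(self, robot_succeeded: bool) -> None:
        if robot_succeeded:
            self.a += 1.0
        else:
            self.b += 1.0

    def rely(self, task_risk: float) -> bool:
        # Rely on the robot only when estimated trust exceeds the task's risk.
        return self.trust > task_risk

estimator = TrustEstimator()
for outcome in (True, True, False, True):  # observed robot task outcomes
    estimator.update(outcome)
print(f"trust = {estimator.trust:.2f}; rely on a low-risk task: {estimator.rely(0.3)}")
```

Real HRI trust models (e.g., the POMDP-style formulations alluded to in [25]) are considerably richer, but even this toy version captures the two ingredients the literature emphasizes: trust evolves with experience, and it mediates the decision to rely.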

2. Research Methodology

This systematic literature review examines the trust-related factors investigated in recent HRI research to understand barriers and facilitators in fostering human–robot trust. The main goal was to explore how trust is assessed and conceptualized within the literature on human–robot interaction. In parallel, our review analyzes the methods employed to evaluate trust in HRI systems, commenting on the study focus, research methods, and applications. This study aims to answer three main questions:
RQ1: 
What are the most common methodologies to study users’ trust in HRI?
RQ2: 
What has been the focus of HRI researchers when investigating trust?
RQ3: 
What are the barriers and facilitators for fostering trustworthy HRI?
We followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines for qualitative synthesis [26] and structured the process in the following steps: (a) establishing the research objectives and defining the search string terms and the inclusion and exclusion criteria; (b) identifying articles from research databases; (c) screening titles and abstracts; (d) reviewing selected articles for data extraction; and (e) analyzing, categorizing, and summarizing the results. All the authors agreed upon the inclusion criteria and screening method, and at least two authors cross-checked the progress of each stage.
The flow chart presented in Figure 1 illustrates the stages of the screening process following [26].

2.1. Literature Search and Strategy

The search strategy captured literature at the intersection of human–computer interaction, human–robot interaction, and trust research, and was guided by similar reviews exploring human–machine trust aspects [5,27]. The query included terms that emphasized the niche of study, such as “user-centered design”, and trust-related aspects, such as “trust assessment”, “trustworthiness”, and “user trust”. Additionally, the search was expanded to include literature on “human-centered computing” and “human-machine interaction”. Terms within similar interest categories were combined with OR (for instance, “UX evaluation” OR “trust assessment”). This plural focus was used to integrate perspectives on user trust from both technological and human-centered design lenses.
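As an illustration, a boolean query of the kind described could be assembled as in the sketch below; the term lists contain only the examples quoted above, not the full search string used in the review.

```python
# Illustrative composition of the boolean search string: terms within a
# category are combined with OR, and categories are combined with AND.
niche_terms = ['"user-centered design"', '"human-centered computing"',
               '"human-machine interaction"']
trust_terms = ['"trust assessment"', '"trustworthiness"', '"user trust"',
               '"UX evaluation"']

def or_group(terms: list) -> str:
    return "(" + " OR ".join(terms) + ")"

query = " AND ".join(or_group(t) for t in (niche_terms, trust_terms))
print(query)
```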
The articles were searched via Tallinn University databases using the libraries of Web of Science (127) and EBSCO (131), with constraints for publication year (i.e., 2014–2024), peer-reviewed articles, and English. The selected articles were assessed for quality using mixed-methods appraisal tools, and data were extracted to address the review aims. Data aggregation and synthesis were conducted and presented as descriptive numerical summaries and a narrative synthesis, respectively, as discussed in the following sections. The procedure presented in the following subsections delineates the stages of the study.

2.1.1. Screening

The search fetched 258 articles. After duplicates were removed (N = 16), the articles were screened by title and abstract. In pairs, we evaluated the papers against the inclusion and exclusion criteria presented in Figure 2 below. Disagreements were resolved via third-party checks and discussions. The resulting list contained 59 articles eligible for inclusion, which the first and second authors thoroughly checked. As illustrated in Figure 1, 25 further articles were removed from the review because (a) they lacked a systematic description of the approach used or (b) the application examined referred not to robots but to personal virtual assistants or recommendation systems.
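The screening arithmetic reported above can be summarized in a trivial sketch that simply reproduces the counts from this subsection and Figure 1:

```python
# Bookkeeping for the screening counts reported above (cf. Figure 1).
identified = 127 + 131            # Web of Science + EBSCO records
after_dedup = identified - 16     # duplicates removed
eligible = 59                     # retained after title/abstract screening
included = eligible - 25          # removed at full-text review
print(identified, after_dedup, eligible, included)  # 258 242 59 34
```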

2.1.2. Data Extraction

A set of aspects to be observed in the article review was listed as relevant information, piloted by all the authors, and later iterated. The goal was to ensure a standard procedure and simplify the data extraction process.
The data extracted included the following information: author(s), reference, the study aims, definition of trust, research focus, study type, methods applied, trust metrics, and target behaviors. Reviewers were also invited to take notes of pertinent information that could help answer the research questions. Additionally, a second round of data extraction was carried out to collect further study-specific data, which was meant to inform the thematic framework built by the first and last authors within the analysis phase.

2.1.3. Analysis and Synthesis

The dataset from the reviewed articles was organized into different categories to facilitate its analysis, including the employed definitions of trust, the origins and focus of the studies, the kind of application examined, the target users, and the methodology deployed. Furthermore, the first and second authors analyzed the aspects listed as facilitators and barriers to trust using the K.J. method [28] for clustering and organizing the extracted knowledge.

3. Analysis

The studies summarized in the upcoming sections cover diverse geographical regions, applications, and user populations, reflecting the broad scope of trust research in HRI. The majority of reviewed articles were published in Western European countries (N = 12), such as Germany, Austria, the UK, and the Netherlands, accounting for over one-third of the HRI trust studies (see Table 1 below). This is followed by North America (N = 7), mainly the USA, and East Asia (N = 6), including China and South Korea.
Other regions are represented by fewer studies, including Northern Europe (N = 2), with studies from Sweden and Norway; Australasia (N = 2), primarily Australia; and Southern Europe (N = 2), including Spain and Portugal. South-East Asia (Singapore), Eastern Europe (Hungary), and Western Asia (Israel) each contributed one study.

3.1. What Are the Most Common Methodologies to Study Users’ Trust in HRI?

We assessed the types of studies carried out in the HRI field focused on users’ trust. We found that 56% of the reviewed papers (N = 19) were experimental studies. This was followed by systematic literature reviews (N = 7), which account for approximately 20% of the reviewed articles. Case studies (N = 5) comprised approximately 15% of the papers, and surveys (N = 2) were the least used methodology, as illustrated in Figure 3 below. Delving further into this disproportion, we examined the overarching domains in which the HRI studies were developed and how these studies defined and assessed users’ trust.
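The proportions above follow directly from the counts over the 34 reviewed papers; a quick arithmetic check, using only the counts given in the text:

```python
# Study-type proportions over the N = 34 reviewed papers, as reported above.
N = 34
counts = {"experimental": 19, "systematic review": 7, "case study": 5, "survey": 2}
for kind, n in counts.items():
    print(f"{kind}: {n}/{N} = {100 * n / N:.1f}%")
# experimental: 55.9%; systematic review: 20.6%; case study: 14.7%; survey: 5.9%
```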

3.1.1. Robot Deployment Domains

We analyzed the applications examined by the selected articles. We observed that service robots (robots that assist humans in professional or personal settings, spanning application areas like social, entertainment, and domestic robots) are investigated in twelve (35.3%) of the reviewed articles, as illustrated in Figure 4. Six studies (17.6%) explored general aspects of HRI, providing broader perspectives that do not focus on a specific domain. Five (14.7%) of the studies examined the context of industrial automation, ranging from manufacturing to construction applications. Four articles (11.8%) investigated autonomous mobile robots (AMRs), exploring aerial and terrestrial vehicles. Equally, four articles (11.8%) explored healthcare applications with robots implemented for rehabilitation. Additionally, three articles (8.8%) delved into the emergent domain of AI-enabled systems.

3.1.2. Conceptualization of User Trust in HRI

Figure 5 illustrates the distribution of definitions of trust in recent HRI studies. Ten (29.4%) of the reviewed articles do not define trust. Likewise, approximately 29.4% of the articles apply custom definitions of trust: ad hoc definitions described by the authors, either following dictionary expressions or coupling widely used definitions of trust (e.g., widely used definitions of trust in automation). Even though it serves as a foundational reference for many of the custom definitions found in our analysis, the definition of trust by [23] is present in only six (17.6%) of the articles, while Mayer et al.’s (1995) definition [22], although recognized as widely used in the field of HCI, is explicitly adopted in only two (5.9%) articles.

3.1.3. How Do the Studies Assess Trust?

Considering the methods and strategies deployed to assess trust, we noticed that approximately a quarter of the reviewed articles (26.5%) used custom questions (e.g., “Overall, how much would you trust an automated system?” and “How trustworthy did the robot appear to you?”) pertinent to the application they investigated. This group also corresponds to half of the experimental studies reviewed, as illustrated in Table 1.
In 23.5% of the reviewed papers, combined measures were used to assess users’ trust in the robots across experimental, survey, and case study contexts, employing pre- and post-test questionnaires alongside behavioral measurements and self-reporting methods. Lim and colleagues, for example, combined questionnaires with user interviews [37].
Also, 20.6% of the studies used validated tools for empirically measuring user trust across experimental settings, surveys, and cross-sectional studies. Among the assessment instruments, scales such as the Interpersonal Trust Questionnaire (ITQ) by Forbes and Roger (1999) [59], the Human–Computer Trust Scale (HCTS) by [17], and the Trust Perception Scale–HRI [60] were mentioned. Furthermore, in 8.8% of the reviewed articles, self-reports (e.g., interviews) were the primary methods used to evaluate users’ trust perceptions of HRI.
This classification did not include the literature reviews and meta-analysis articles [27,53,54,55,56,57,58], which were classified as non-applicable.

3.2. What Has Been the Focus of HRI Researchers When Investigating Trust?

While classifying the study focus of recent HRI studies on human–robot trust, we examined emerging research themes alongside the overarching domains investigated. The reviewed research primarily explores trust-influencing factors and measurements, human–robot communication, and user studies, also delving into perceived safety and the communication of trustworthiness through robot behaviors. As shown in Table 2, 23.5% of the reviewed studies focused on trust indicators and measurements. Some studies, such as those by [2,17,61], provide broader perspectives by addressing trust across emergent AI-enabled systems, while others focus on specific applications, such as the industrial robots in collaborative application areas examined by [2]. In contrast, ref. [58] did not specify a domain or area of application but rather provided general perspectives on the theme of extended control.
Another focus of interest explored was human–robot communication, representing 20.6% of the results. While [53] offered a meta-analysis on transparency in the framework of shared autonomy, ref. [56] investigated robot errors as marks of socio-affective competence. Moreover, refs. [37,45,48,49] focused on human–robot communication in the service robotics domain, observing applications such as a robot barista [37] and a catering robot [48]. Furthermore, 8.8% of the HRI articles focused on user experiences and characteristics.
While investigating industrial automation, ref. [29] observed user preferences for industrial robots’ interfaces, and [46] focused on the application of construction robots. Also, refs. [31,51] focused on human–robot teams in the context of autonomous mobile robots (AMRs). Furthermore, ref. [34] explored the implications of robot behavior styles in the context of service robots for domestic applications.
The systematic review by [55] analyzed human–robot communication and trust indicators and measurements while exploring social cues for general HRI applications. Moreover, ref. [43] focused on user studies and human–robot teaming, analyzing users’ reliance on robot outputs. Perceived safety was a focus area of two of the reviewed studies: the authors of [40] explored robot behaviors and perceived safety in the service domain, investigating soft robots for social and entertainment applications, and ref. [57] proposed a taxonomy focused on human–robot teaming and perceived safety.

3.3. What Are the Barriers and Facilitators for Fostering Trustworthy HRI?

We adopted Hancock et al.’s [62] classification to explore the factors influencing users’ perceptions of trust in HRI. In this classification, Hancock and colleagues identify trust-predicting factors related to human, robot, and environmental characteristics. Although they identified performance-based robot characteristics as predictors of human trust in HRI, subsequent studies have pointed out that other factors, like culture, also greatly impact users’ trust [63,64]. Previous research [51] has also indicated that, due to trust’s dynamic nature, it is essential to understand the variety of potential design cues and functionalities that affect trust and, consequently, support the integration of autonomous robots in diverse scenarios.
With that in mind and intrigued by possible mechanisms to calibrate appropriate levels of trust during interactions, we proposed a breakdown of the factors to clarify potential trust constraints (i.e., barriers) and enablers (i.e., facilitators) related to user needs, design, functionalities, and contextual elements that can be utilized to mediate trust.

Facilitators and Barriers

The diagrams in Figure 6 and Figure 7 present key attributes, human characteristics, and contextual factors relevant to designing trustworthy HRI systems, as identified by our analysis. These categories, derived from the extracted data, were organized by the first and second authors into trust facilitators and barriers (a minimal sketch of this structure follows the list), following the rationale of
  • Environment-related factors, representing contextual considerations that include regulation, safety, and integration;
  • Human-related factors, which refer to user characteristics and their experiences; and
  • Robot-related factors, the attributes concerning robotic systems, divided into transparency, communication, performance, behavior and situated awareness, and appearance and design.
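A minimal sketch of how this classification can be represented as a data structure (the category names come from the list above; the example factors and reference numbers are drawn from the analysis that follows):

```python
# Illustrative representation of the trust-factor classification: each factor
# is tagged with its category and whether the reviewed studies reported it as
# a facilitator or a barrier. Example entries are taken from the analysis below.
from dataclasses import dataclass
from typing import Literal

Category = Literal["environment", "human", "robot"]
Role = Literal["facilitator", "barrier"]

@dataclass(frozen=True)
class TrustFactor:
    name: str
    category: Category
    role: Role
    sources: tuple  # reference numbers in this review

FACTORS = (
    TrustFactor("legal compliance", "environment", "facilitator", (27, 32)),
    TrustFactor("negative reputation", "environment", "barrier", (38,)),
    TrustFactor("familiarity with technology", "human", "facilitator", (35, 36, 40)),
    TrustFactor("anxiety", "human", "barrier", (40, 44)),
    TrustFactor("motion fluency", "robot", "facilitator", (34, 39)),
    TrustFactor("hesitant behavior", "robot", "barrier", (41,)),
)

barriers = [f.name for f in FACTORS if f.role == "barrier"]
print(barriers)  # ['negative reputation', 'anxiety', 'hesitant behavior']
```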
Among the environment-related factors, authors mention that trust can be facilitated by regulatory means such as legal compliance [27,32] and ethical design [38,53], as well as by ensuring structural safety with precise emergency mechanisms [40]. Adami and colleagues indicated interdependency as a trust facilitator for integrating industrial robots in construction applications [46], and it is also pointed out as a facilitator for incorporating robots in military systems [51]. Task complexity is noted to facilitate trust across application areas such as autonomous aerial vehicles [31], COBOTS [43], and domestic robots [25].
Considering human-related facilitators, ref. [57] refers to psychological safety as an aspect of the user experience that might enable trust in human–robot teaming. Tailoring interactions to user preferences and needs is pointed out by [37,44] in the domain of service robots. Similarly, ref. [38] identifies tailored interaction as a trust facilitator in the context of autonomous mobile robots (AMRs).
User characteristics such as familiarity with technology are noted by [35,36,40] as a factor enabling human trust in the robot across varied applications of service robots. Furthermore, ref. [46] describes that familiarity could enhance situational awareness and reduce mental load during task execution alongside construction robots. In addition, authors indicate user gender [42,47,51,52], emotional state [33,35], need for control [36,40,58], and willingness to take risks [2,42] as factors influencing trust in HRI.
Aspects of a robot’s performance and behavior are thought to facilitate trust across different domains. Besides system reliability [25,27], the robot’s motion fluency [34,39] and the simplicity and consistency in the robot’s behavior [2,40] can influence users’ trust. Simplified motions have been shown to alleviate user anxiety, particularly among individuals with limited technology acceptance [40]. Moreover, the robot’s ability to learn from mistakes (recognizing and recovering from them) might impact user engagement and interaction, potentially even turning this into a net positive for the interaction [50].
Corroborating the findings shared by [27], our analysis indicates the gap between regulations and practice as an aspect that hinders user trust in AI-enabled systems. Among the environment-related barriers, ref. [53] identified impunity as a hindering factor and proposed equipping robots with ethical black box mechanisms to address this issue and enhance accountability.
Furthermore, in a socio-technical system, robots are constantly subject to society’s evaluation, and a negative reputation acts as a trust barrier for autonomous technologies. In the context of AMRs, ref. [38] also observed that media highlights regarding accidents involving these systems might fuel adverse reactions toward autonomous vehicles. Our analysis also identified barriers in interactions marked by security threats, like hacking or data leaks, and physical threats, potentially heightened by close proximity in shared environments and other sorts of accidents involving intelligent systems.
Considering human-related factors, barriers are mentioned in studies that portray trust breaches in interactions characterized by misaligned perceptions [2] and user emotions like anxiety [40,44], fear and stress [42], and complacency [58]. Characteristics of user privacy orientation and perceived privacy are emphasized by the works of [17,47], describing aspects that hinder the user’s predisposition to trust and the user’s trust behaviors.
Reflecting on robot-related factors, we noticed that in application areas demanding human–robot collaboration like service and industrial domains, a lack of competency [34,50], hesitant behavior (i.e., trembling motions) [41], and failed hardware or software performance [25,48,57] negatively impact users’ trust. Similarly, ref. [56] highlighted that social norm violations harm the user’s perception of a robot’s socio-affective competence in domains that require social interaction. Furthermore, the severity of errors may also impact trust perception to a different degree [52].

4. Discussion

This study examined the aspects that impact the potential for designing and fostering machine trustworthiness from a human-centered standpoint. A systematic procedure was carried out to explore 34 articles. We relied on the data extracted from the reviewed literature to reflect on users’ trust and the related factors impacting it through the lenses of HRI-related studies, moving away from technical-centric considerations (i.e., robustness and reliance) toward human-centric concerns (i.e., perception of safety, satisfaction, and cognitive ergonomics). In the following sections, we present the implications derived from this review.

4.1. Lack of Conceptual Clarity

Regarding methodologies to study trust, we call attention to the lack of conceptual clarity, manifested in the shortfall of studies presenting or attributing definitions of trust or relying on research tools endorsed by the scientific community. Approximately a third of the reviewed studies did not explicitly define the concept, which can be problematic, as trust is a complex phenomenon interwoven with and affected by other variables of an interaction, like culture [65].
Moreover, the consequences of this conceptual issue might extend to the choice of methods and instruments used to investigate trust in robots and other autonomous systems. Among the experimental studies examined by this review, most assessed trust through custom questions, often a single likelihood rating in which users rated the overall experience instead of reflecting on the different components of the experience that can affect trust.
An alternative for achieving a comprehensive trust assessment is demonstrated by the 23.5% of the studies that predominantly relied on combined measures in their evaluation of experimental settings. Sanders and colleagues [43] combined a wide range of methods, including demographics and personality questions, alongside the Negative Attitudes Toward Robots Scale (NARS) [66], the Interpersonal Trust Questionnaire (ITQ) [59], and the Trust in Automated Systems Survey [67]. Pompe et al. [49] also collected measures across different environments and contexts using validated scales, including the Godspeed Questionnaire [68], to understand the factors affecting user perception of robots. It is essential to consider that although self-reporting methods may introduce biases into the assessment, complementing them with quantitative assessments can offer insights into how users engage with and handle the dynamics of the interactions.
Furthermore, our findings reiterate the notion that the conceptual gap in the trust-in-technology literature persists and that this lack of clarity explains the sometimes contradictory findings in the literature [17,19,51,69,70]. Trust is a phenomenon investigated across diverse disciplines, which brings about a tendency to use competing and often contradictory definitions, models, and frameworks. Bach and colleagues [27] proposed that selecting the most appropriate definition of trust for the context investigated is better than pursuing a unified definition of trust or comparing the existing ones.
Trust is considered “to emerge in an attitude-formation process and then be calibrated along with a comparison of expectations with a robot’s behavior” ([44], p. 3). As a behavior or attitude, trust is continuously influenced and updated by human assessment [71], also referred to by other authors as a trial-and-error experience [58]. Furthermore, our analysis stressed the need for a distinct understanding of trust formation and calibration, calling attention to the different stages and dynamics of trust before and during the interaction with a robot. Other studies define trust as a multidimensional latent variable mediating past events and subsequent reliance in uncertain environments [72]. Such dynamic characteristics make trust complex to measure directly, but researchers highlight that it can be captured via underlying constructs, such as reliance [5,23].
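One common way to formalize this dynamic, latent-variable view (a generic sketch, not a model proposed by any single reviewed study) is an iterative calibration rule in which trust is updated by comparing expectations with observed robot behavior, and reliance follows from the current trust estimate:

```latex
% T_t: latent trust at step t; o_t: observed robot performance at step t
% (e.g., success = 1, failure = 0); alpha: calibration rate; theta: a
% context-dependent reliance threshold.
T_{t+1} = T_t + \alpha\,(o_t - T_t), \qquad
\mathrm{reliance}_t = \mathbb{1}\!\left[\, T_t > \theta \,\right]
```

Under such a rule, repeated successes pull the trust estimate toward the robot’s actual reliability while failures pull it down, matching the trial-and-error characterization cited above; reliance then serves as the observable proxy through which the latent variable can be measured.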

4.2. Diverse Focus

In sum, to realize Industry 5.0’s full potential, it is essential to shift the focus to developing systems that prioritize human-centric design and a deep understanding of psychological, social, and ethical dimensions. Moreover, research indicates that ensuring both physical safety (e.g., through robust sensor-based systems) and psychological safety (e.g., by fostering user trust and well-being) is crucial for the successful integration of automated systems [4,73,74,75,76].
Studies by [32,33,45,50] have emphasized the importance of the human-centric drive for autonomous technology development, underscoring the value of designing systems to communicate trustworthiness through optimized usability and increased transparency. Emerging themes include perceived safety and human–robot teaming in collaborative and service robots, with increased efforts to balance technical innovation with human-centric design principles for better observability and predictability of system behavior, as shown by [40,53].

4.3. Context-Bound Factors

On a higher level, the environment-related barriers and facilitators convey a duality of contrasting aspects that are mentioned to influence trust in HRI. A simplistic interpretation would suggest that weighing up particular facilitators could remediate the presence of one specific barrier and that the problem would thereby be sorted out. Nevertheless, such a rationale is contested, as we observe fewer contrasting and more similar factors listed within the two other categories. The ambiguity of those aspects highlights the inherent complexity mentioned earlier. For instance, anthropomorphism, one aspect of the robot-related factors, is considered a facilitator in service and healthcare domains, where social interaction occurs and adaptive feedback mechanisms are expected [37,54]. On the other hand, studies investigating applications in the industrial and transportation sectors shed light on facilitating trust through factors such as a simple and clear user interface [29], intent display [38], and suitability to the task [42].
Even though human-like features, such as the robot’s use of natural language [52] and awareness of social rules [50], are described as socio-affective competencies impacting the user perception of the robot’s intelligence [41,56] and are also observed in the application of social robots for companionship and rehabilitation [33,48,49], authors have also described complementary perspectives on collaborative robots (COBOTS) in manufacturing contexts and highlighted aspects of social interaction, such as emotion awareness [32] and benevolence [2]. Moreover, ref. [47] points out that gendering a robot’s voice can likewise affect the robot’s perception in terms of stereotypes, preferences, and trust.
As described in our analysis, user characteristics and perceptions of the interaction, including individual preferences and needs, how they take risks, psychological safety, control, emotional states, and familiarity with robots, are pivotal as trust facilitators and barriers in HRI. Hence, tailoring interactions to individual needs might influence trust and affect engagement and adoption rates, as commented on by [35,44]. Providing tailored information and explanations is also essential, as seen in the studies by [2,27,39].
Furthermore, the broader context of interactions, including shared control and ethical considerations, influences user trust, as negative experiences or unmet expectations might significantly affect usability and acceptance, emphasizing the need for engaging and intuitive HRI designs [38,48]. As noted by recent studies with social robots for service and healthcare, physical appearance and human-like attributes also enhance trust and acceptance [30,50,54]. Design must account for socio-ethical concerns, privacy, and security in shared environments [47,51,53].
On the other hand, robot errors, performance failures, and lack of competency might negatively affect trust and reliability in robot actions [35,50,56]. To achieve seamless interactions and enable human empowerment, as proposed by the concept of Industry 5.0, robots should exhibit accountable behaviors and offer adaptive feedback as mechanisms to repair user trust. Additionally, the studies by [31,54,58] also indicate issues in system design as barriers to trust in HRI, demonstrating that complex explanations, poor interaction flows, and unpredictable behavior contribute to user frustration and hinder user trust. Researchers have also highlighted that prolonged interaction over time can redress a lack of trust: refs. [35,36,40,52] demonstrated the role of familiarity in increasing trust in extended interactions, as users become more familiar with the robot’s behaviors and capabilities.

5. Conclusions

This systematic literature review explored how trust has been conceptualized and studied as a pathway to facilitate the successful implementation of autonomous technologies. In doing so, we examined the primary definitions adopted by researchers and the methods used to assess trust in HRI systems. Moreover, our overview offers a classification of trust-related barriers and facilitators examined in recent HRI studies.
Addressing technical and human-centered aspects while integrating AI and robotic systems into human environments has become increasingly demanding. This includes ensuring system security, efficiency, flexibility, and resilience while also addressing human-related qualities such as trust, psychological safety, and ethical considerations [1,58,77,78], as proposed by the European Commission framework. The foundational components of Industry 5.0 for this symbiotic collaboration contemplate a shift in industrial priorities toward human-centric and sustainable approaches to efficient industrial innovation. For society, this means a perspective for dealing with emerging societal trends and needs alongside technological innovation. For research, it represents an opportunity to observe the phenomenon of technology emergence and adoption from a multidisciplinary approach.
Our literature review points to a divergence in the backgrounds forming the basis for various studies on trust in HRI. Therefore, distinct factors may be more influential in specific use scenarios. We highlight that the domain and area of application, among other contextual implications, may impact how humans perceive robot trustworthiness. These factors are deeply interwoven with the context of use and may vary depending on the user’s, task’s, and robot’s specifics. Hence, we argue that in addition to building systems that ensure trustworthiness by attending to safety and security requirements, it is vital to address the non-functional properties of the interaction concerning user perceptions of trust and the factors impacting such perceptions.
Lastly, our review pondered the recommendations for collaboration between humans and robots presented by [58], which indicate that maximizing trust is not the recommended strategy for successfully integrating robots. Designers and developers must instead find a balanced level of trustworthiness conducive to effective and sustainable human–robot trust in practical, real-world contexts [79]. Due to our classification system’s general and abstract nature, mechanisms for calibrating trust cannot be derived from it directly. However, the framework serves as a valuable tool for researchers: it provides a foundation for selecting and investigating specific aspects of trust in HRI, guiding future studies in this evolving field. These findings can inform future technical and design strategies, research, and initiatives that foster and maintain human trust in autonomous robots. The reflections shared in this study provide insights into how human–robot trust is conceptualized and approached in recent HRI studies, and the conclusions described here were crafted to aid designers and developers in identifying and reflecting on critical barriers and facilitators of trust in HRI. We aimed to contribute to the human-centric Industry 5.0 vision by providing a literature overview of the potential trust-influencing factors that can hinder or facilitate trustworthy human–machine collaborations.

5.1. Future Considerations

Evolving collaboration between humans and machines requires enabling rapid and often high-stakes decision-making in the industrial, healthcare, government, and military sectors. It is pivotal to ensure that the design and implementation of robot systems in the industrial sector adhere to sustainable, user-centered practices. Technology must adapt to align with human values, skills, and well-being, ultimately fostering effective and efficient collaboration between humans and machines. Providing users with relevant and reliable information about a system’s workings can increase user trust. This may include explanations of algorithms, model performance, risk factors, contextual information, and actionable guidance. The amount of information should not be overwhelming and should be targeted at specific user groups.
Another critical aspect is systems’ resilience, i.e., their ability to adapt to unexpected disruptions such as pandemics or geopolitical challenges. Identifying the factors that positively or negatively influence trust in autonomous robots is challenging, and quantifying the impact of individual factors, or the cumulative effect of various combinations, proves to be a complex task.
Our research suggests that tailored approaches are necessary to foster trust in HRI. It is crucial to delineate when trust pertains to interaction attributes—such as task appropriateness and machine suitability—and when it reflects user preferences, which, as our findings suggest, significantly influence their perceptions of the information exchange and their assessment of robot capabilities. An open question remains whether a synthesis of attributes from different categories could be examined to predict users’ trust and enhance machine adaptability. This synthesis could potentially preempt events of misuse, accommodate unforeseen changes, and even proactively engage with humans to augment decision-making capabilities and physical interactions as per Industry 5.0 principles.

5.2. Limitations

In this section, we acknowledge the limitations of our work and report the strategies taken to overcome them. In this study, we implemented a standardized approach for literature review and data extraction. This approach provided a framework for the reviews; however, it may have constrained the reviewers’ appreciation of the data. As we undertake a multidisciplinary review of recent HRI studies through the lens of trust in technology, which also impacted the range of sources consulted, we foresee a potential thematic gap that could affect our research findings. To remedy this limitation, we held periodic meetings to align the activity’s progress. Another potential issue refers to biases introduced by the choice of search keywords, the selection criteria, and the researchers’ interpretations of the selected literature. As noted in the previous sections, limitations may also result from the articles’ geographic locations and the underrepresentation of specific regions in the selected studies. We tried to address these by detailing the articles’ affiliations. Furthermore, the quality of the reviewed studies could potentially limit the content extracted and analyzed in this review.

Author Contributions

Conceptualization, D.F.d.S. and S.S.; methodology, D.F.d.S. and S.S.; validation, all authors; formal analysis, D.F.d.S. and S.S.; writing—original draft preparation, D.F.d.S. and S.S.; writing—review and editing, all authors; supervision, S.S.; project administration, S.S. and O.D.; funding acquisition, O.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the project “Increasing the knowledge intensity of Ida-Viru entrepreneurship”, co-funded by the European Union and the European Regional Development Fund, and by the Trust and Influence AFOSR programme (21IOE051).

Data Availability Statement

Dataset available on request from the authors.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results. The first author is a PhD candidate and has led the work with the supervision and contribution of the senior author S.S.

References

  1. Nahavandi, S. Industry 5.0—A human-centric solution. Sustainability 2019, 11, 4371. [Google Scholar] [CrossRef]
  2. Pinto, A.; Sousa, S.; Simões, A.; Santos, J. A Trust Scale for Human-Robot Interaction: Translation, Adaptation, and Validation of a Human Computer Trust Scale. Hum. Behav. Emerg. Technol. 2022, 2022, 6437441. [Google Scholar] [CrossRef]
  3. European Commission; Directorate-General for Research and Innovation; Renda, A.; Schwaag Serger, S.; Tataj, D.; Morlet, A.; Isaksson, D.; Martins, F.; Mir Roca, M.; Morlet, A.; et al. Industry 5.0, a Transformative Vision for Europe–Governing Systemic Transformations Towards a Sustainable Industry; Publications Office of the European Union: Luxembourg, 2021. [Google Scholar] [CrossRef]
  4. Abeywickrama, D.B.; Bennaceur, A.; Chance, G.; Demiris, Y.; Kordoni, A.; Levine, M.; Moffat, L.; Moreau, L.; Mousavi, M.R.; Nuseibeh, B.; et al. On specifying for trustworthiness. Commun. ACM 2023, 67, 98–109. [Google Scholar] [CrossRef]
  5. Gebru, B.; Zeleke, L.; Blankson, D.; Nabil, M.; Nateghi, S.; Homaifar, A.; Tunstel, E. A Review on Human–Machine Trust Evaluation: Human-Centric and Machine-Centric Perspectives. IEEE Trans. Hum.-Mach. Syst. 2022, 52, 952–962. [Google Scholar] [CrossRef]
  6. Commission, E. Artificial Intelligence Act. Available online: https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai (accessed on 20 March 2025).
  7. Sousa, S.; Lamas, D.; Cravino, J.; Martins, P. Human-Centered Trustworthy Framework: A Human–Computer Interaction Perspective. Computer 2024, 57, 46–58. [Google Scholar] [CrossRef]
  8. Naiseh, M.; Bentley, C.; Ramchurn, S.D. Trustworthy autonomous systems (TAS): Engaging TAS experts in curriculum design. In Proceedings of the 2022 IEEE Global Engineering Education Conference (EDUCON), Tunis, Tunisia, 28–31 March 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 901–905. [Google Scholar]
  9. Daronnat, S.; Azzopardi, L.; Halvey, M.; Dubiel, M. Impact of Agent Reliability and Predictability on Trust in Real Time Human-Agent Collaboration. In Proceedings of the 8th International Conference on Human-Agent Interaction, Virtual, 10–13 November 2020; pp. 131–139. [Google Scholar] [CrossRef]
  10. Eban, E. Jidoka: Automation with a human touch. Softw. Syst. Model 2024. [Google Scholar] [CrossRef]
  11. Krijnen, A. The Toyota Way: 14 Management Principles from the World’s Greatest Manufacturer; Taylor & Francis: Abingdon, UK, 2007. [Google Scholar]
  12. Koichi, S.; Fujimoto, T.; Miller, W.; Shook, J. The Birth of Lean; Lean Enterprise Institute, Incorporated: Boston, MA, USA, 2012. [Google Scholar]
  13. Ergonomics of Human-System Interaction ISO 9241-210:2019 Human-Centred Design for Interactive Systems. 2019. Available online: https://www.iso.org/standard/77520.html (accessed on 30 March 2025).
  14. Rogers, E.M.; Singhal, A.; Quinlan, M.M. Diffusion of Innovations; Routledge: Oxfordshire, UK, 2014. [Google Scholar]
  15. Brynjolfsson, E.; Mcafee, A. Artificial intelligence, for real. Harv. Bus. Rev. 2017, 1, 1–31. [Google Scholar]
  16. European Commission: Directorate-General for Communications Networks Ethics Guidelines for Trustworthy AI. 2019. Available online: https://data.europa.eu/doi/10.2759/346720 (accessed on 30 March 2025).
  17. Gulati, S.; Sousa, S.; Lamas, D. Design, development and evaluation of a human–computer trust scale. Behav. Inf. Technol. 2019, 38, 1004–1015. [Google Scholar] [CrossRef]
  18. Gulati, S.; McDonagh, J.; Sousa, S.; Lamas, D. Trust models and theories in human–computer interaction: A systematic literature review. Comput. Hum. Behav. Rep. 2024, 16, 100495. [Google Scholar] [CrossRef]
  19. Sousa, S.; Cravino, J.; Martins, P. Challenges and Trends in User Trust Discourse in AI Popularity. Multimodal Technol. Interact. 2023, 7, 13. [Google Scholar] [CrossRef]
  20. De Visser, E.J.; Peeters, M.M.; Jung, M.F.; Kohn, S.; Shaw, T.H.; Pak, R.; Neerincx, M.A. Towards a theory of longitudinal trust calibration in human–robot teams. Int. J. Soc. Robot. 2020, 12, 459–478. [Google Scholar] [CrossRef]
  21. Pilacinski, A.; Pinto, A.; Oliveira, S.; Araújo, E.; Carvalho, C.; Silva, P.A.; Matias, R.; Menezes, P.; Sousa, S. The robot eyes don’t have it. The presence of eyes on collaborative robots yields marginally higher user trust but lower performance. Heliyon 2023, 9, e18164. [Google Scholar] [CrossRef]
  22. Mayer, R.C.; Davis, J.H.; Schoorman, F.D. An integrative model of organizational trust. Acad. Manag. Rev. 1995, 20, 709–734. [Google Scholar] [CrossRef]
  23. Lee, J.D.; See, K.A. Trust in automation: Designing for appropriate reliance. Hum. Factors 2004, 46, 50–80. [Google Scholar] [CrossRef] [PubMed]
  24. Saßmannshausen, T.; Burggräf, P.; Hassenzahl, M.; Wagner, J. Human trust in otherware—A systematic literature review bringing all antecedents together. Ergonomics 2023, 66, 976–998. [Google Scholar] [CrossRef]
  25. Soh, H.; Xie, Y.; Chen, M.; Hsu, D. Multi-task trust transfer for human–robot interaction. Int. J. Robot. Res. 2020, 39, 233–249. [Google Scholar] [CrossRef]
  26. Moher, D.; Liberati, A.; Tetzlaff, J.; Altman, D.G.; Prisma Group. Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. Ann. Intern. Med. 2009, 151, 264–269. [Google Scholar] [CrossRef] [PubMed]
  27. Bach, T.A.; Khan, A.; Hallock, H.; Beltrão, G.; Sousa, S. A Systematic Literature Review of User Trust in AI-Enabled Systems: An HCI Perspective. Int. J. Hum.–Comput. Interact. 2024, 40, 1251–1266. [Google Scholar] [CrossRef]
  28. Iba, T.; Yoshikawa, A.; Munakata, K. Philosophy and methodology of clustering in pattern mining: Japanese anthropologist Jiro Kawakita’s KJ method. In Proceedings of the 24th Conference on Pattern Languages of Programs, Vancouver, ON, Canada, 22–25 October 2017; pp. 1–11. [Google Scholar]
  29. Daniel, B.; Thomessen, T.; Korondi, P. Simplified Human-Robot Interaction: Modeling and Evaluation. Model. Identif. Control A Nor. Res. Bull. 2013, 34, 199–211. [Google Scholar] [CrossRef]
  30. Jung, Y.; Cho, E.; Kim, S. Users’ Affective and Cognitive Responses to Humanoid Robots in Different Expertise Service Contexts. Cyberpsychol. Behav. Soc. Netw. 2021, 24, 300–306. [Google Scholar] [CrossRef]
  31. Zhu, Y.; Wang, T.; Wang, C.; Quan, W.; Tang, M. Complexity-Driven Trust Dynamics in Human–Robot Interactions: Insights from AI-Enhanced Collaborative Engagements. Appl. Sci. 2023, 13, 12989. [Google Scholar] [CrossRef]
  32. Esterwood, C.; Robert, L.P. The theory of mind and human–robot trust repair. Sci. Rep. 2023, 13, 9877. [Google Scholar] [CrossRef] [PubMed]
  33. Alam, S.; Johnston, B.; Vitale, J.; Williams, M.A. Would you trust a robot with your mental health? The interaction of emotion and logic in persuasive backfiring. In Proceedings of the 2021 30th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Vancouver, ON, Canada, 8–12 August 2021; pp. 384–391. [Google Scholar] [CrossRef]
  34. Brule, R.v.d.; Dotsch, R.; Bijlstra, G.; Wigboldus, D.H.J.; Haselager, P. Do Robot Performance and Behavioral Style affect Human Trust? Int. J. Soc. Robot. 2014, 6, 519–531. [Google Scholar] [CrossRef]
  35. Kraus, J.M.; Merger, J.; Gröner, F.; Pätz, J. “Sorry” Says the Robot. In Proceedings of the Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction, Stockholm, Sweden, 13–16 March 2023; pp. 436–441. [Google Scholar] [CrossRef]
  36. Gaudiello, I.; Zibetti, E.; Lefort, S.; Chetouani, M.; Ivaldi, S. Trust as indicator of robot functional and social acceptance. An experimental study on user conformation to iCub answers. Comput. Hum. Behav. 2016, 61, 633–655. [Google Scholar] [CrossRef]
  37. Lim, M.Y.; Robb, D.A.; Wilson, B.W.; Hastie, H. Feeding the Coffee Habit: A Longitudinal Study of a Robo-Barista. In Proceedings of the 2023 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Busan, Republic of Korea, 28–31 August 2023; pp. 1983–1990. [Google Scholar] [CrossRef]
  38. Yun, H.; Yang, J.H. Configuring user information by considering trust threatening factors associated with automated vehicles. Eur. Transp. Res. Rev. 2022, 14, 9. [Google Scholar] [CrossRef]
  39. Babel, F.; Kraus, J.; Baumann, M. Findings From A Qualitative Field Study with An Autonomous Robot in Public: Exploration of User Reactions and Conflicts. Int. J. Soc. Robot. 2022, 14, 1625–1655. [Google Scholar] [CrossRef]
  40. Wang, Y.; Wang, G.; Ge, W.; Duan, J.; Chen, Z.; Wen, L. Perceived Safety Assessment of Interactive Motions in Human–Soft Robot Interaction. Biomimetics 2024, 9, 58. [Google Scholar] [CrossRef]
  41. Chen, N.; Zhai, Y.; Liu, X. The Effects of Robots’ Altruistic Behaviours and Reciprocity on Human-robot Trust. Int. J. Soc. Robot. 2022, 14, 1913–1931. [Google Scholar] [CrossRef]
  42. Clement, P.; Veledar, O.; Könczöl, C.; Danzinger, H.; Posch, M.; Eichberger, A.; Macher, G. Enhancing Acceptance and Trust in Automated Driving through Virtual Experience on a Driving Simulator. Energies 2022, 15, 781. [Google Scholar] [CrossRef]
43. Sanders, T.; Kaplan, A.; Koch, R.; Schwartz, M.; Hancock, P.A. The Relationship Between Trust and Use Choice in Human-Robot Interaction. Hum. Factors 2018, 61, 614–626. [Google Scholar] [CrossRef]
  44. Miller, L.; Kraus, J.; Babel, F.; Baumann, M. More Than a Feeling—Interrelation of Trust Layers in Human-Robot Interaction and the Role of User Dispositions and State Anxiety. Front. Psychol. 2021, 12, 592711. [Google Scholar] [CrossRef]
  45. Huang, H.; Rau, P.L.P.; Ma, L. Will you listen to a robot? Effects of robot ability, task complexity, and risk on human decision-making. Adv. Robot. 2021, 35, 1156–1166. [Google Scholar] [CrossRef]
  46. Adami, P.; Rodrigues, P.B.; Woods, P.J.; Becerik-Gerber, B.; Soibelman, L.; Copur-Gencturk, Y.; Lucas, G. Impact of VR-Based Training on Human–Robot Interaction for Remote Operating Construction Robots. J. Comput. Civ. Eng. 2022, 36, 04022006. [Google Scholar] [CrossRef]
  47. Ambsdorf, J.; Munir, A.; Wei, Y.; Degkwitz, K.; Harms, H.M.; Stannek, S.; Ahrens, K.; Becker, D.; Strahl, E.; Weber, T.; et al. Explain yourself! Effects of Explanations in Human-Robot Interaction. In Proceedings of the 2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Naples, Italy, 29 August–2 September 2022; pp. 393–400. [Google Scholar] [CrossRef]
  48. Kraus, M.; Wagner, N.; Untereiner, N.; Minker, W. Including Social Expectations for Trustworthy Proactive Human-Robot Dialogue. In Proceedings of the 30th ACM Conference on User Modeling, Adaptation and Personalization, Barcelona, Spain, 4–7 July 2022; pp. 23–33. [Google Scholar] [CrossRef]
49. Pompe, B.L.; Velner, E.; Truong, K.P. The Robot That Showed Remorse: Repairing Trust with a Genuine Apology. In Proceedings of the 2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Naples, Italy, 29 August–2 September 2022; pp. 260–265. [Google Scholar] [CrossRef]
  50. Cameron, D.; Saille, S.d.; Collins, E.C.; Aitken, J.M.; Cheung, H.; Chua, A.; Loh, E.J.; Law, J. The effect of social-cognitive recovery strategies on likability, capability and trust in social robots. Comput. Hum. Behav. 2021, 114, 106561. [Google Scholar] [CrossRef]
  51. Schaefer, K.E.; Straub, E.R.; Chen, J.Y.; Putney, J.; Evans, A. Communicating intent to develop shared situation awareness and engender trust in human-agent teams. Cogn. Syst. Res. 2017, 46, 26–39. [Google Scholar] [CrossRef]
  52. Koren, Y.; Polak, R.F.; Levy-Tzedek, S. Extended Interviews with Stroke Patients Over a Long-Term Rehabilitation Using Human–Robot or Human–Computer Interactions. Int. J. Soc. Robot. 2022, 14, 1893–1911. [Google Scholar] [CrossRef] [PubMed]
  53. Alonso, V.; Puente, P.d.l. System Transparency in Shared Autonomy: A Mini Review. Front. Neurorobot. 2018, 12, 83. [Google Scholar] [CrossRef]
  54. Yuan, F.; Klavon, E.; Liu, Z.; Lopez, R.P.; Zhao, X. A Systematic Review of Robotic Rehabilitation for Cognitive Training. Front. Robot. AI 2021, 8, 605715. [Google Scholar] [CrossRef]
  55. Xu, K.; Chen, M.; You, L. The Hitchhiker’s Guide to a Credible and Socially Present Robot: Two Meta-Analyses of the Power of Social Cues in Human–Robot Interaction. Int. J. Soc. Robot. 2023, 15, 269–295. [Google Scholar] [CrossRef]
  56. Tian, L.; Oviatt, S. A Taxonomy of Social Errors in Human-Robot Interaction. ACM Trans. Hum.-Robot. Interact. (THRI) 2021, 10, 1–32. [Google Scholar] [CrossRef]
  57. Akalin, N.; Kiselev, A.; Kristoffersson, A.; Loutfi, A. A Taxonomy of Factors Influencing Perceived Safety in Human–Robot Interaction. Int. J. Soc. Robot. 2023, 15, 1993–2004. [Google Scholar] [CrossRef]
  58. Schoeller, F.; Miller, M.; Salomon, R.; Friston, K.J. Trust as Extended Control: Human-Machine Interactions as Active Inference. Front. Syst. Neurosci. 2021, 15, 669810. [Google Scholar] [CrossRef]
  59. Forbes, A.; Roger, D. Stress, social support and fear of disclosure. Br. J. Health Psychol. 1999, 4, 165–179. [Google Scholar] [CrossRef]
60. Schaefer, K.E. The Perception and Measurement of Human-Robot Trust. Ph.D. Thesis, University of Central Florida, Orlando, FL, USA, 2013. [Google Scholar]
  61. Sousa, S.; Kalju, T. Modeling trust in COVID-19 contact-tracing apps using the human-computer Trust Scale: Online survey study. JMIR Hum. Factors 2022, 9, e33951. [Google Scholar] [CrossRef] [PubMed]
  62. Hancock, P.A.; Billings, D.R.; Schaefer, K.E. Can you trust your robot? Ergon. Des. 2011, 19, 24–29. [Google Scholar] [CrossRef]
  63. McKnight, D.H.; Chervany, N.L. What trust means in e-commerce customer relationships: An interdisciplinary conceptual typology. Int. J. Electron. Commer. 2001, 6, 35–59. [Google Scholar] [CrossRef]
64. Beltrão, G.; Sousa, S.; Lamas, D. Unmasking Trust: Examining Users’ Perspectives of Facial Recognition Systems in Mozambique. In Proceedings of the 4th African Human Computer Interaction Conference (AfriCHI ’23), New York, NY, USA, 21–23 June 2024; pp. 38–43. [Google Scholar] [CrossRef]
  65. Beltrão, G.; Sousa, S. Factors Influencing Trust in WhatsApp: A Cross-Cultural Study. In Proceedings of the International Conference on Human-Computer Interaction, Bari, Italy, 30 August–3 September 2021; Springer: Berlin/Heidelberg, Germany, 2021; pp. 495–508. [Google Scholar]
  66. Nomura, T.; Suzuki, T.; Kanda, T.; Kato, K. Measurement of negative attitudes toward robots. Interact. Stud. Soc. Behav. Commun. Biol. Artif. Syst. 2006, 7, 437–454. [Google Scholar] [CrossRef]
  67. Jian, J.Y.; Bisantz, A.M.; Drury, C.G. Foundations for an empirically determined scale of trust in automated systems. Int. J. Cogn. Ergon. 2000, 4, 53–71. [Google Scholar] [CrossRef]
  68. Bartneck, C. Godspeed questionnaire series: Translations and usage. In International Handbook of Behavioral Health Assessment; Springer: Berlin/Heidelberg, Germany, 2023; pp. 1–35. [Google Scholar]
  69. Hancock, P.A.; Billings, D.R.; Schaefer, K.E.; Chen, J.Y.; De Visser, E.J.; Parasuraman, R. A meta-analysis of factors affecting trust in human-robot interaction. Hum. Factors 2011, 53, 517–527. [Google Scholar] [CrossRef]
  70. Hoff, K.A.; Bashir, M. Trust in automation: Integrating empirical evidence on factors that influence trust. Hum. Factors 2015, 57, 407–434. [Google Scholar] [CrossRef]
  71. Kohn, S.C.; Visser, E.J.d.; Wiese, E.; Lee, Y.C.; Shaw, T.H. Measurement of Trust in Automation: A Narrative Review and Reference Guide. Front. Psychol. 2021, 12, 604977. [Google Scholar] [CrossRef] [PubMed]
  72. Kok, B.C.; Soh, H. Trust in Robots: Challenges and Opportunities. Curr. Robot. Rep. 2020, 1, 297–309. [Google Scholar] [CrossRef] [PubMed]
  73. Gihleb, R.; Giuntella, O.; Stella, L.; Wang, T. Industrial robots, workers’ safety, and health. Labour Econ. 2022, 78, 102205. [Google Scholar] [CrossRef]
74. Interact Analysis. The Collaborative Robot Market. 2019. Available online: https://www.interactanalysis.com/wp-content/uploads/2019/12/Cobot-Market-to-account-for-30-of-Total-Robot-Market-by-2027-–-Interact-Analysis-PR-Dec-19.pdf (accessed on 20 March 2025).
  75. Laux, J.; Wachter, S.; Mittelstadt, B. Trustworthy artificial intelligence and the European Union AI act: On the conflation of trustworthiness and acceptability of risk. Regul. Gov. 2024, 18, 3–32. [Google Scholar] [CrossRef] [PubMed]
  76. Towers-Clark, C. Keep The Robot In The Cage—How Effective (And Safe) Are Co-Bots, 2019. Forbes. Available online: https://www.forbes.com/sites/charlestowersclark/2019/09/11/keep-the-robot-in-the-cagehow-effective--safe-are-co-bots/ (accessed on 20 March 2025).
77. IBM. EU’s Expert Group Releases Policy and Investment Recommendations for Trustworthy AI. 2021. Available online: https://www.ibm.com/policy/eu-ai-trust/ (accessed on 20 March 2025).
  78. Shneiderman, B. Human-centered artificial intelligence: Reliable, safe & trustworthy. Int. J. Hum.–Comput. Interact. 2020, 36, 495–504. [Google Scholar]
  79. Paramonova, I.; Sousa, S.; Lamas, D. Exploring Factors Affecting User Perception of Trustworthiness in Advanced Technology: Preliminary Results; Springer: Denmark, 2023; pp. 366–383. [Google Scholar] [CrossRef]
Figure 1. The PRISMA flow chart of the study selection process.
Figure 2. The inclusion and exclusion criteria matrix.
Figure 3. Overview of methodologies used to study users’ trust in HRI.
Figure 4. The chart illustrates the variety of deployment contexts explored by the selected studies.
Figure 5. Overview of the definitions of trust adopted by the selected articles [22,23].
Figure 6. The diagram shows the trust facilitators identified by the reviewers: factors that can bolster trust within human–robot interaction contexts.
Figure 7. The diagram shows the trust barriers identified by the reviewers: aspects that can restrain trust within interactions with robots.
Table 1. How trust is assessed in the reviewed HRI studies. Studies are grouped by measurement approach; the reviewed designs span experimental, systematic review, case-study, survey, and cross-sectional study types.

Custom question (9 studies, 26.5%): Daniel et al. (2013) [29]; Jung et al. (2021) [30]; Zhu et al. (2023) [31]; Esterwood et al. (2023) [32]; Alam et al. (2021) [33]; Brule et al. (2014) [34]; Kraus et al. (2023) [35]; Gaudiello et al. (2016) [36]; Soh et al. (2020) [25].

Combined measures (8 studies, 23.5%): Lim et al. (2023) [37]; Yun et al. (2022) [38]; Babel et al. (2022) [39]; Wang et al. (2024) [40]; Chen et al. (2022) [41]; Clement et al. (2022) [42]; Sanders et al. (2018) [43]; Miller et al. (2021) [44].

Validated scale (7 studies, 20.6%): Pinto et al. (2022) [2]; Huang et al. (2021) [45]; Gulati et al. (2019) [17]; Adami et al. (2022) [46]; Ambsdorf et al. (2022) [47]; Kraus et al. (2022) [48]; Pompe et al. (2022) [49].

Self-report (3 studies, 8.8%): Cameron et al. (2021) [50]; Schaefer et al. (2017) [51]; Koren et al. (2022) [52].

Not applicable (7 studies, 20.6%): Alonso et al. (2018) [53]; Yuan et al. (2021) [54]; Bach et al. (2024) [27]; Xu et al. (2023) [55]; Tian et al. (2021) [56]; Akalin et al. (2023) [57]; Schoeller et al. (2021) [58].

Grand total: 34 studies (100%).
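As a transparency check, the percentage column in Tables 1 and 2 is simply each category’s share of the 34 reviewed articles, rounded to one decimal place. The short sketch below is illustrative only (the variable names are ours, and the group sizes are copied from Table 1); the same computation reproduces the Table 2 column, e.g., 1/34 ≈ 2.9%.

```python
# Illustrative check of the percentage column in Table 1: each measurement
# approach's share of the 34 reviewed articles, rounded to one decimal place.
group_sizes = {
    "Custom question": 9,
    "Combined measures": 8,
    "Validated scale": 7,
    "Self-report": 3,
    "Not applicable": 7,
}

total = sum(group_sizes.values())  # 34 reviewed articles in total
for approach, n in group_sizes.items():
    print(f"{approach}: {n}/{total} = {100 * n / total:.1f}%")
# Expected output matches Table 1: 26.5%, 23.5%, 20.6%, 8.8%, 20.6%.
```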
Table 2. What have been the focus and application areas of the reviewed HRI studies? Studies are grouped by study focus; the application areas considered are general, industrial, service, healthcare, autonomous mobile robots (AMR), and AI-enabled systems.

Human–robot communication (7 studies, 20.6%): Alonso et al. (2018) [53]; Lim et al. (2023) [37]; Huang et al. (2021) [45]; Tian et al. (2021) [56]; Ambsdorf et al. (2022) [47]; Kraus et al. (2022) [48]; Pompe et al. (2022) [49].

Robot behaviors (1 study, 2.9%): Brule et al. (2014) [34].

Human–robot teaming (2 studies, 5.9%): Schaefer et al. (2017) [51]; Zhu et al. (2023) [31].

User studies (3 studies, 8.8%): Daniel et al. (2013) [29]; Yuan et al. (2021) [54]; Adami et al. (2022) [46].

Trust indicators & measurements (8 studies, 23.5%): Pinto et al. (2022) [2]; Jung et al. (2021) [30]; Yun et al. (2022) [38]; Bach et al. (2024) [27]; Gulati et al. (2019) [17]; Koren et al. (2022) [52]; Schoeller et al. (2021) [58]; Soh et al. (2020) [25].

User studies & trust measurements (3 studies, 8.8%): Clement et al. (2022) [42]; Cameron et al. (2021) [50]; Miller et al. (2021) [44].

User studies & human–robot teaming (1 study, 2.9%): Sanders et al. (2018) [43].

User studies & human–robot communication (3 studies, 8.8%): Babel et al. (2022) [39]; Alam et al. (2021) [33]; Gaudiello et al. (2016) [36].

Trust measurements & robot behaviors (1 study, 2.9%): Chen et al. (2022) [41].

Trust measurements & human–robot communication (1 study, 2.9%): Xu et al. (2023) [55].

Robot behaviors & human–robot communication (2 studies, 5.9%): Esterwood et al. (2023) [32]; Kraus et al. (2023) [35].

Robot behaviors & perceived safety (1 study, 2.9%): Wang et al. (2024) [40].

Human–robot teaming & perceived safety (1 study, 2.9%): Akalin et al. (2023) [57].

Grand total: 34 studies (100%).