Article

Mortal vs. Machine: A Compact Two-Factor Model for Comparing Trust in Humans and Robots

Andrew Prahl

Wee Kim Wee School of Communication and Information, Nanyang Technological University, Singapore 637718, Singapore
Robotics 2025, 14(8), 112; https://doi.org/10.3390/robotics14080112
Submission received: 6 June 2025 / Revised: 30 July 2025 / Accepted: 12 August 2025 / Published: 16 August 2025
(This article belongs to the Section Humanoid and Human Robotics)

Abstract

Trust in robots is often analyzed with scales built for either humans or automation, making cross-species comparisons imprecise. Addressing that gap, this paper distils decades of trust scholarship, from clinical vs. actuarial judgement to modern human–robot teaming, into a lean two-factor framework: Mortal vs. Machine (MvM). We first surveyed classic technology-acceptance and automation-reliance research and then integrated empirical findings in human–robot interaction to identify diagnostic cues that can be instantiated by both human and machine agents. The model includes (i) ability—perceived task competence and reliability—and (ii) value congruence—alignment of decision weights and trade-off priorities. Benevolence, oft-included in trust studies, was excluded because current robots cannot manifest genuine goodwill and existing items elicit high dropout. The resulting scale travels across contexts, allowing researchers to benchmark a robot against a human co-worker on identical terms and enabling practitioners to pinpoint whether performance deficits or priority clashes drive acceptance. By reconciling anthropocentric and technocentric trust literature in a deployable diagnostic, MvM offers a field-ready tool and a conceptual bridge for future studies of AI-empowered robotics.

1. Introduction

There are few constructs as central to our understanding of human communication as that of trust. Yet the locus of that trust has expanded dramatically. We now entrust not only family, friends, and institutions but also algorithms, databases, and, increasingly, autonomous machines. As socio-technical systems insinuate themselves into everyday life, the question shifts from whether we trust to what kind of entity we are willing to trust and on what grounds. For most of human existence, it was reasonable to think of trust as a purely interpersonal issue; only since the middle of the 20th century have we been confronted with the question of trust in computer technology. Even if the timeless assertion that computers will always lack human qualities, such as compassion and empathy [1], holds, few will dispute that computers—and the robots they now animate—have become indispensable to modern life.
Trust is the state of confident vulnerability: a readiness to rely on another agent with the knowledge that one can be harmed if that agent fails. Trustworthiness comprises the diagnostic cues that observers use to reach that state [2]. Perceptions of trustworthiness are, therefore, antecedents, whose joint configuration yields the trust state. Among the many socio-technical settings now demanding trust of machines, human–robot interaction stands out as both emblematic and consequential. Within workspaces shared by humans and robots, the abstract question of “trust in technology” acquires flesh, torque, and liability [3]. From warehouse cobots that coordinate pick-and-place schedules to surgical robots executing millimeter-scale trajectories, robots are becoming essential partners in countless industries [4,5]. Beyond professional contexts, one visit to a consumer-electronics store reveals a startling trend: household robots are finding a way into personal life as well—from vacuuming and lawn mowing to elder-care companionship [6,7].
The steady march of robotic autonomy into everyday life is due not only to technology improving over time and thus becoming capable of more tasks but also to humans choosing to place some degree of trust in these systems; otherwise, they would not be used. For example, large-sample survey evidence indicates that trust is the pivotal mediator between perceived competence, ease-of-use, and the intention to adopt service robots [8]. This trust extends both to physically present robots and to those operating remotely, far from a human’s tangible grasp. A recent mixed-reality study, for instance, found that baseline trust and intention-to-use levels generalized seamlessly across both virtual-reality and physical collaboration settings, suggesting that the decision to rely on a robot is not bound to its physical presence alone [9]. Robots have become so common in everyday personal and professional contexts that examples are easy to imagine and find in research: a doctor trusts that a robot assistant will safely steer an IV pole, pilots trust an autopilot to choose an efficient descent profile, logistics crews trust that a mobile manipulator will not drop a 40-pound tote, and so forth [10,11]. Undoubtedly, although we might not put our own judgement completely aside, we increasingly trust these robotic aids because they perform well—often outperforming humans—and because distrust could therefore introduce even greater risk [12].
This research paper is primarily concerned with the issue of human trust in robots. We adopt the oft-cited definition of a robot as “a machine that senses, thinks, and acts” [13], a scope that comfortably spans social, service, industrial, and medical platforms. Our goal is to arrive at a model of trust that compares humans (mortals) versus robots (machines); in doing so, we review the long history of trust in other machine technologies, including decision support systems, control automation, and diagnostic aids. We turn to this earlier research because, while a substantial body of HRI scholarship already examines robotic trust (e.g., [14,15,16]), the richer historical record of technology trust remains under-exploited. Our goal is, therefore, to surface that latent value, blend it with contemporary robot-specific findings, and channel both streams into a single narrative.
In doing so, we advance a conceptual and theoretical framework for explaining why robots are trusted vis-à-vis why humans are trusted. Separating trust in machines from trust in humans is intuitive, yet the very notion of trust is derived from interpersonal experience, making it all too easy to conflate the two when evaluating robots [17]. This distinction grows ever more important as robots assume higher-stakes roles in healthcare, logistics, defense, and domestic life. This paper takes the view that, while a long history of scholarship has modelled human-to-human trust (with innumerable sub-models) and substantial prior work describes robotic trust, researchers and practitioners need a useful instrument that works for both trustee types; existing surveys are anthropocentric or focused on technological capabilities. For example, prior questionnaires fall broadly into two camps: (i) robot-focused trust or “acceptance” scales (e.g., the Trust Perception Scale-Human–Robot Interaction (TPS-HRI)) and (ii) warmth-rich scales such as the Multidimensional Model of Trust (MDMT), whose “sincere” or “ethical” facets presume the perception of moral agency [18,19]. Neither lets a user score, for example, a human forklift driver and an autonomous robotic forklift on identical items without risking many items appearing as “not applicable” to one or both trustees. Both of these scales excel in specialized use-cases, but MvM is intentionally bi-referent and, therefore, suited to situations where cross-species benchmarking is needed. We propose that such a bridge will aid robotics researchers, developers, users, and organizational leaders as robots continue to replace or augment human labor in new roles, and that it represents a needed contribution to theory in human–robot interaction.
Our model is also motivated by practical concerns. In practice, managers who implement robots confront a diagnostic void: they can quantify efficiency gains but cannot pinpoint what aspects of trust drive acceptance or resistance by human workers. Existing models tilt either toward human trust cues, such as empathy and honesty, or toward automation metrics, such as reliability and latency, leaving a blind spot. Bridging that gap requires a framework that scores humans and machines on the same scale, surfacing where performance is expected, where priorities clash, and where suspicions lie.
This article does not attempt to advance a new grand theory of trust. Instead, we distil a long line of trust scholarship into a parsimonious, two-factor scale that any lab, field study, or industry audit can deploy in minutes. By design, our model avoids high-level social constructs; they are important, but they lack measurement symmetry across mortals and machines. We, therefore, present MvM as a baseline diagnostic: a lowest-common-denominator tool that researchers can administer today and extend tomorrow.
This paper proceeds as follows. First, we provide a brief historical overview of the manual versus machine debate, tracing a line from clinical vs. actuarial judgement to manual vs. autonomous control in robotics. Building on the Integrated Model of Trust (IMOT) [2] but incorporating nuance from adjacent trust frameworks and the unique considerations that arise when the trustee is a machine, we develop a blended conceptual and measurement model, Mortal vs. Machine (MvM), which focuses exclusively on (i) ability (perceived task competence and reliability) and (ii) value congruence (alignment of decision weights and trade-off priorities) for studying trust in robots and humans on common ground.

2. Manual Versus Autonomy: Historical Roots of an Instinctive Distrust and Their Relevance to Robotics

Robotics inherits a century-long dialogue about human versus mechanical judgment. A concise history is needed to anchor MvM in established trust research while pointing forward. Although the empirical studies that animate MvM originate in clinical psychology and business forecasting, the trust gap they expose is directly germane to contemporary robotics. Yet, despite more than a half-century of data showing how people discount algorithmic advice, this legacy is referenced only sporadically in human–robot interaction research; many of its lessons for design and evaluation remain hidden in adjacent literature. Whether an autonomous warehouse picker, a surgical robot, or a self-driving car proposes an action, the human overseer confronts precisely the same dilemma that once faced the psychiatrist weighing a mathematical diagnosis rule or the planner staring at a sales algorithm: Can I trust the machine? To clarify why modern robots inherit this legacy of skepticism, we first retrace the long arc of manual versus autonomy research.
For most of recorded history, complex judgements (e.g., diagnosing disease, steering ships, forecasting markets, etc.) have been entrusted to mortal experts who consider information and make a judgement based on expertise, past experience, or intuition [20]. The moment machines began to propose alternative judgements, a deep-seated skepticism surfaced, and it has shadowed every wave of automation since, including today’s social, service, and industrial robots. Clinical psychology was one of the first battlegrounds: debates over statistical versus clinical prediction unfolded simultaneously in professional practice and scholarly journals [21], setting the template for later clashes in business, engineering, and now robotics.
Early in the 20th-century clinical–actuarial debate, clinical referred to human judgement, whereas actuarial denoted formulaic, statistical rules executed by machines or simple algorithms. Meehl [22] demonstrated that actuarial judgement consistently outperformed clinical judgement, yet despite the volume of evidence, early researchers were careful to leave the door open to the continued usefulness (and perhaps necessity) of clinical judgement. The empirical superiority of simple mechanical rules, epitomized by diagnosis tools like the Goldberg MMPI (Minnesota Multiphasic Personality Inventory) formula that outperformed the best-performing human judge, did not translate into immediate acceptance [23]. Instead, humans were generally trusted more than statistical models. A parallel pattern emerged in business forecasting. Despite researchers concluding that “clinical judgement does not have an impressive record” ([24], p. 5), surveys two decades later still found that clinical judgement was dominating the forecasting process, and very few principles regarding how to use judgement were being followed in real organizations [24,25].
In summary, the historical trajectory from the first actuarial models to today’s cutting-edge robotics is characterized by a dual contest: performance versus preference. Algorithmic systems have repeatedly proven superior on accuracy, yet humans often feel more at ease—and more valued—when another human, not a robot, occupies the decision seat. Trust for a robot can, therefore, be more elusive than trust for a fellow human—precisely the asymmetry that the MvM framework seeks to explore. This reflexive skepticism supplies the inspiration for constructs in MvM: it explains why the ability dimension alone is insufficient for machines, why integrity must be interpreted through the lens of perceived value congruence, and why benevolence becomes, for technology, an absent quality on which mortal and machine cannot be compared. Having traced these historical currents, we now turn to a systematic explication of the first dimension in the MvM framework, ability, and how perceptions of competence diverge for human and robotic advisors.

3. Components of Trust

3.1. Ability

The long arc of manual versus machine research, therefore, leaves us with a puzzle: if robots are the newest embodiment of the machine, what makes people decide that a robot is—finally—worth following? In MvM, that calculus begins with ability. Conceptually, ability is identical for humans and machines: it is the trustor’s answer to the question, “Can this advisor do the job better than I can?” The difference lies in the cues available to make that judgment and in how fragile those cues can be for automation.
At its core, ability is the trustor’s assessment of the skills, accuracy, and overall performance the trustee can bring to bear on a focal task. Characterizing trust as performance-based has a long pedigree: Researchers have referenced a person’s “expertise” in persuasion [26] or leadership [27], and the popular Technology Acceptance Model [28] established the concept for machines, labelling it as perceived usefulness—“the degree to which a person believes that using a particular system would enhance his or her job performance.” A surgical resident who watches a robot system suture a nephrostomy is making the same inference that a mid-century executive once made about a seasoned human sales analyst: does this advisor, human or machine, improve my performance?
Yet the perceptual evidence that feeds the inference can diverge sharply. Humans often display diplomas, confident body language, and a track record of anecdotes; robots supply cycle-time charts, force–torque graphs, and lab test videos. Notice, too, how the physical presence of robots intensifies the question. A statistical model prints a number on a screen, but a collaborative robot swings heavy parts through a shared workspace. A bug in a spreadsheet prediction prompts a shrug and a manual correction, whereas a wobble in the robot provokes an instinctive flinch. Accordingly, “ability” research in robotics often adds precision, stability, speed, etc., as concrete, observable signals that feed the trust calculus [14]. For example, a pick-and-place arm that maintains minimal positional error and displays metrics on a live dashboard offers unambiguous cues of competence. In short, humans and robots are judged on the same performance dimension, but machines must prove that performance under far closer scrutiny, and the verdict can flip quickly with experience: once the machine falters, trust plummets faster and further than it would for a human who made the same mistake [16,29].
As such, operators who once deferred to the algorithm seize the joystick, mirroring the “trust-reversal” documented in aviation and medical-device studies [30,31]. Recent work replicates that trajectory in modern cobot settings: after three consecutive performance violations, trust dropped to baseline and never fully recovered despite apology- or promise-based repairs [32]. Complementary physiological data show heart-rate variability patterns consistent with reduced operator stress when a robot is reliably accurate and with sharp stress responses once reliability dips [33].
Ability, therefore, evolves over time [9]. Demonstrated reliability drives trust sharply upward; a single unexplained fault can collapse it. Humans and robots share that basic trajectory, yet robots face steeper slopes both upward (because their precision can dazzle) and downward (because their failures feel less forgivable) [16]. Providing explanations or timely apologies after a failure can cushion that collapse. We do acknowledge that ability items in MvM ask respondents to judge what the agent can do now; they do not probe how quickly a robot can adapt when conditions shift. In dynamic domains where algorithms revise plans in real time, trust may also be an effect of the machine’s perceived learning. Capturing that nuance will require repeated administrations of the scale as the system evolves or grafting on items that measure user perceptions of machine adaptability.

Domain-Specificity of Ability

Because ability is domain-specific, robots can earn trust in one context while forfeiting it in another. Commuters relax in automated metros yet blanch at driverless taxis in mixed traffic; warehouse pickers welcome ceiling-guided shuttles yet disable a slower mobile manipulator whose path planning seems hesitant [34]. Humans show the same pattern: an expert archivist is assumed competent at cataloguing books but not at neurosurgery, except that observers routinely generalize a wider halo of competence around humans. Robots are perceived as purpose-built: the bolt-tightening arm is only good at bolts, not at supervising quality or scheduling shifts. For robots, where the environment is narrow, structured, and repeatedly demonstrated, perceived ability soars; where it is abstract, ambiguous, or morally complex, trust comes less easily.
In sum, ability is considered the first pillar of trust for both mortals and machines, but the evidentiary cues, the volatility of impressions, and the breadth of generalization all differ in ways robotics researchers must intentionally design for. With these nuances established, we now turn to the second pillar of MvM—value congruence—to explain why ability alone does not assure trust.

3.2. Value Congruence

Upon an initial reading of IMOT, it is often difficult to separate the benevolence and integrity dimensions. The key distinction is that intentions reside in benevolence, whereas value priorities reside in integrity. I may believe a partner genuinely wants the best for me and, therefore, accord that partner high benevolence yet simultaneously suspect that this same partner organizes and weights information in a way that is not consistent with my own priorities, thereby lowering perceptions of integrity. A college senior, for instance, may feel loved by her parents (high benevolence) while also feeling that their value structure is misaligned with her own career aspirations (low integrity). The same logic stretches easily to human–robot interaction on an assembly line: operators can recognize that a collaborative robot has no malevolent motives whatsoever yet still experience unease if they sense that the robot optimizes exclusively for throughput while giving little weight to the human’s ergonomic fatigue.
The concept of integrity, because it feels more abstract than ability, may intuitively seem like a lesser factor in robot trust, but that intuition would be misguided. A recent PRISMA scoping review of 128 human–robot collaboration (HRC) papers confirms that non-ability factors—cognitive workload, interface transparency, and psychological safety—can outweigh raw performance as levers of operator choice [35,36,37]. Meta-reviews likewise find that mistrust more often stems from integrity-adjacent disagreements (ethics, legality, fairness) than from accuracy shortfalls [38]. Consider a pediatric ward in which nurses overrode a drug-dispensing robot, not because it mis-dosed medication but because its route-planning algorithm prized speed over noise, repeatedly waking sleeping children. The misalignment lay in values, not accuracy. When a robot’s performance appears misaligned with local norms, even good performance or programmed social cues cannot guarantee trust.
In the original IMOT framework, this appraisal was labelled integrity and carried unmistakably human overtones—honesty, honor, morality. Such language sits awkwardly beside contemporary autonomous systems: a robotic manipulator does not “lie,” a routing algorithm does not “break promises,” and a reinforcement-learning agent feels no moral shame. MvM, therefore, relabels the construct value congruence. We define value congruence as the degree to which a decision-maker perceives that an advisor—human or robot—weights evidence, constraints, and objectives in a way that coheres with the trustor’s own hierarchy of values. Concretely, it has three subcomponents: (i) criterion weighting—how much the agent cares about speed versus safety; (ii) constraint priority—which constraints are absolute and which are negotiable; and (iii) trade-off tolerance—when sacrifice of one criterion for another becomes acceptable. This differs from goal alignment, which concerns end-states, and from fairness, which concerns distributional equity. This reconceptualization retains IMOT’s focus on value structures yet strips away intentional and emotional language, allowing for the same measurement items to compare humans and robots on equal footing. Where local ethics or affective norms matter most, researchers can graft supplementary items onto the ten-item core (see Appendix A) rather than rewriting it, preserving comparability while capturing domain-specific values.
Historical parallels reinforce the point [20,39]. Long before robots, supply-chain managers distrusted forecasting software optimized for finance rather than service, foreshadowing modern warehouse systems tuned for cycle time that elevate speed over worker fatigue [40,41,42]. Contemporary studies of medical AI echo the theme: patients resent “being reduced to a percentage” by an algorithm [43], underscoring that what offends is not accuracy but a perceived clash of values: efficiency over dignity. Work elsewhere in human–machine interaction shows that value misalignment hurts trust [9,44]. Yet, a systematic scan of 2022–2025 studies shows that only a minority of papers operationalize constructs similar to value alignment directly [44,45,46], highlighting a critical measurement gap that MvM is designed to fill.
Conceptually, value congruence also explains why long-standing human–technology trust constructs, such as compatibility [47] and facilitating conditions [48], predict trust: a system perceived as “consistent with our workflow” implicitly mirrors local priorities. In disaster-relief drones, speed outranks privacy; in urban surveillance, the priorities reverse. Recent work illustrates the plasticity: by embedding vulnerability weights for cyclists and pedestrians into an autonomous-vehicle planner, researchers halved predicted harm without altering raw sensor accuracy—trust rose because users saw their own moral weighting encoded in the algorithm [49].
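To make the notion of operational values concrete, the sketch below scores candidate trajectories with an explicitly weighted cost function in which predicted risk to pedestrians and cyclists is penalized far more heavily than delay. It is a simplified illustration of the weighting idea discussed above, not the planner used in the cited study; the field names and weight values are assumptions chosen for clarity.

```python
# Illustrative only: a planner cost function whose weights encode value
# priorities (vulnerable road users over travel time). Weights are assumptions.
from dataclasses import dataclass

@dataclass
class Trajectory:
    travel_time_s: float    # expected delay for this candidate path, in seconds
    pedestrian_risk: float  # predicted collision risk to pedestrians, 0-1
    cyclist_risk: float     # predicted collision risk to cyclists, 0-1

# Heavier weights on vulnerable road users express a value structure:
# the planner prioritizes their safety over shaving a few seconds.
WEIGHTS = {"travel_time_s": 1.0, "pedestrian_risk": 500.0, "cyclist_risk": 400.0}

def cost(t: Trajectory) -> float:
    """Weighted sum of delay and risk; lower is better."""
    return (WEIGHTS["travel_time_s"] * t.travel_time_s
            + WEIGHTS["pedestrian_risk"] * t.pedestrian_risk
            + WEIGHTS["cyclist_risk"] * t.cyclist_risk)

# The slightly slower but safer trajectory wins under these weights.
candidates = [Trajectory(32.0, 0.02, 0.01), Trajectory(28.0, 0.06, 0.03)]
print("chosen trajectory:", min(candidates, key=cost))
```

Exposing such weights, or letting stakeholders adjust them, is one concrete route to the perceived value congruence that MvM measures.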
In summary, replacing integrity with value congruence lets the construct travel smoothly from human–human to human–robot contexts. Robots, algorithms, and other automated aids do possess operational values—explicit cost functions or learned reward weights—and trust will rise or fall to the extent that those computational values resonate with their human stakeholders. Having shown why raw performance rarely seals the deal, we turn to the hardest dimension to map onto machines: benevolence.

3.3. Benevolence

The ability and value-congruence sections have argued that robots can be assessed on the same performance and priority criteria we apply to humans [17,50,51]. Yet, how, exactly, could we gauge the “good intentions” of an artefact built from steel, servomotors, and Python code? Canonical survey items—“My advisor is concerned for my welfare,” “The advisor shows interest in me as a person”—feel, at best, like forced metaphors when the “advisor” is a palletizing arm. Recent measurement work underscores the dilemma: across two experiments, nearly half of all benevolence-type survey items (e.g., “This robot is principled”) were flagged by respondents as “non-applicable to robots,” even though aggregate trust scores appeared stable [17]. Such persistent ambiguity demands that we consider whether benevolence belongs in the MvM model at all.
This conceptual discomfort is not new. Previous work reasoned that integrity and benevolence should drop out of technology-trust models because artefacts lack consciousness [52]. Much of the human–automation literature followed suit: once the trustee became a machine, multifactor trust measures sometimes collapsed into a single reliability/competence score [14,15]. More recently, research has highlighted the conceptual complexity of measuring indirect perceptions of benevolence, either requiring a “triad” model [53] or simply finding that benevolence items cannot be answered with robots as evaluation targets [17]. Therefore, instead of relocating benevolence to hidden programmers, we excise it and restrict MvM to the two facets both humans and robots can instantiate. This yields cleaner, empirically tractable constructs and prevents anthropocentric language from distorting machine evaluations.
Although we excise benevolence from the core MvM model, we do not deny its empirical footprint: numerous HRI studies show that simulated politeness, eye-gaze, or empathic utterances reliably inflate perceived warmth and downstream behavioral compliance. We argue, however, that these effects are mediated through users’ inferences about the human value chain behind the artefact rather than through any intrinsic intention of the robot itself; “benevolence”, therefore, remains diagnostically useful only as a second-order construct that points to upstream designers, regulators, and organizational leaders. By isolating ability and value-congruence as first-order machine attributes, MvM yields a lean, ontologically symmetrical core while still allowing benevolence modules to be grafted on when context demands. In practice, excluding benevolence shortens survey instruments, reduces “not applicable” dropout, and focuses diagnosis on the two levers most predictive of robot reliance; indeed, as one recent review of automation-trust measures concludes, asking whether a machine “cares” for us may simply “not make sense” [54]. Future work can revisit the construct if and when robots attain authentic moral agency or when measurement techniques evolve to capture benevolence unambiguously.

4. Discussion

4.1. Practical Applications of MvM

Robotics is advancing fastest in exactly those settings where machines replace or augment human labor, such as surgical suites, fulfillment centers, farm fields, logistics hubs, and elder-care facilities. Implementers in these domains confront a vexing diagnostic question: Is the workforce reluctant because the robot truly underperforms or because its priorities clash with local values? Existing trust scales cannot answer cleanly. Human-centric measures probe empathy and honesty—constructs that robots cannot literally satisfy—while automation-centric surveys focus almost exclusively on reliability and usability, ignoring the social substrates of trust. The risk is not hypothetical; validation studies reveal that extant HRI questionnaires systematically conflate functional and social cues [3], prompting participants to skip or misinterpret items that presume human-like goodwill [17].
MvM fills this measurement gap. By recasting the IMOT pillars into constructs that travel across humans and machines, the framework lets researchers, developers, and organizational leaders place humans and machines on identical footing (Appendix B). A single diagnostic instrument can reveal, for instance, that warehouse pickers judge the new mobile manipulator higher on ability yet lower on value congruence than their veteran co-worker, pinpointing the locus of resistance and guiding targeted redesign (e.g., exposing the planner’s safety margins or incorporating ergonomic constraints).
The utility extends downstream. When reliance behavior does become measurable—e.g., the moment operators decide whether to hand control to a robot forklift—MvM does not become obsolete; it becomes predictive. By pairing the perceptual survey with a binary or graded reliance metric (manual override, intervention latency, monetary bet, etc.), researchers can test the common-sense but rarely quantified claim that “trust drives reliance.” If ability and value-congruence perceptions explain variance in hand-off decisions better than legacy “automation reliability” scores, organizations have empirical grounds to target those levers in training, UI design, or governance policy. Furthermore, because the scale is symmetrical, longitudinal studies can track how trust migrates as a facility transitions from human-led to robot-assisted workflows, quantifying whether gains in ability over time compensate for lagging value congruence or, conversely, whether corporate outreach elevates benevolence perceptions enough to offset occasional performance lapses. Because MvM is deliberately compact, we also acknowledge that high-stakes domains such as healthcare and education may demand extra diagnostic granularity. We, therefore, position the ten-item core as a stem scale and invite domain experts to graft context-specific items onto either pillar (Appendix A).
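As a concrete illustration of the reliance test sketched above, the following snippet fits a logistic regression that predicts a binary hand-off decision from the two MvM subscale scores. It is a minimal sketch under assumed data: the scores, outcomes, and the choice of scikit-learn are illustrative, not a prescribed analysis pipeline from this paper.

```python
# Hedged sketch: do perceived ability and value congruence predict whether an
# operator hands control to the robot? All data and variable names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [ability_score, value_congruence_score] on the 1-7 MvM response scale.
X = np.array([[6.2, 5.8], [5.4, 3.1], [6.8, 6.0], [4.9, 2.7],
              [6.5, 5.5], [5.1, 3.0], [6.9, 6.4], [4.6, 2.9]])
# 1 = operator delegated the task to the robot, 0 = operator kept manual control.
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)
print("coefficients (ability, value congruence):", model.coef_[0])
print("P(hand-off | ability=6.0, value congruence=3.5):",
      round(model.predict_proba([[6.0, 3.5]])[0, 1], 2))
```

Comparing the explanatory power of such a model against one built only on a legacy automation-reliability score would provide the empirical grounds, noted above, for targeting training, UI design, or governance levers.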
In short, MvM is more than a conceptual reconciliation of human- and robot-trust literature; it is a practical toolkit for seeing where, why, and how trust diverges as automation advances. By locating deficits precisely—performance or priorities—organizations can prescribe remedies with surgical focus rather than blanket retraining or costly over-engineering. That actionable clarity, we contend, is the framework’s most immediate contribution to both scholarship and practice.

4.2. Theoretical Contributions of MvM

In many real-world rollouts, the behavioral hallmark of trust—actual reliance under risk—simply cannot be observed. Some scenarios are pre-deployment (the robot is still in prototyping, yet decision-makers must green-light adoption); others are hyper-autonomous (the system operates beyond easy human override, so “take-over frequency” is a poor proxy for trust). In such settings, the only actionable window into future acceptance is the perceived trustworthiness of the machine.
The MvM survey fills this gap in three ways. First, it captures the diagnostic antecedents of trust (ability, value congruence) even when no reliance behavior is yet possible. Second, because the same instrument can be administered to rate a human co-worker, project leaders gain a direct comparison baseline: Do nurses rate the new IV-pole robot higher on ability but lower on value congruence than their human aides? That delta signals where change-management energy should be spent. Third, once the robot is fielded, the identical items let researchers track whether initial perceptions actually forecast subsequent reliance—closing the loop between prospective attitude and observed behavior in a way no existing HRI scale enables.
The table in Appendix C situates MvM alongside two seminal models and two recent human–robot trust instruments to clarify what it adds and what it deliberately leaves out. The comparison shows a clear progression: legacy interpersonal models, such as IMOT/ABI [2], introduced the triad of ability–benevolence–integrity, but their anthropocentric wording travels poorly to artefacts; domain-specific robot scales, like TPS-HRI [18] and MDMT [19], enrich cue coverage yet re-insert moral-agency items that can become not-applicable when the trustee is non-human; acceptance frameworks, such as UTAUT [48], predict uptake but gauge intention rather than trust per se. MvM sits at the intersection, retaining the diagnostic power of a performance factor (ability) and a social-alignment factor (value congruence) while enforcing bi-referent symmetry: every item can be answered for a human or a robot without re-wording. By treating benevolence as a second-order judgement about the human value chain behind the machine, the scale avoids the conceptual contortions that plague earlier tools and, at just ten items, offers a parsimonious yet actionable diagnostic that researchers can extend with warmth or moral-agency modules only when a given context truly demands them.

4.3. Future Research in the Age of AI-Empowered Robotics

The MvM framework now rests on two timeless constructs, ability and value congruence, but the relentless advance of AI (and the embedding of it in robotic technologies) means the empirical terrain MvM investigates is shifting fast. AI-empowered robots stretch each MvM pillar in distinct ways; we highlight three areas in need of future research in the coming age of AI.
First is broadening ability. Early robots (e.g., industrial, assembly line) displayed narrow competencies; users could assess performance in a single, repetitive motion. Yet the latest advances in AI robotics now enable a single platform to constantly learn new skills, assembling furniture in one hour and stocking produce the next [55]. As “generalist” robots proliferate, ability will likely become less about isolated skills and more about adaptive breadth. Longitudinal work is, therefore, needed to track ability trajectories and to map how trust calibrates (or fails to) as perceived performance improves. Repeated MvM administration after each major software update or learning cycle could reveal whether trust lags behind, overshoots, or tracks these performance gains, helping establish dynamic validity for the scale. Such research can untangle whether wider ability portfolios dampen trust swings or raise the stakes of any single failure.
Second, emergent transparency–congruence dynamics deserve investigation. This implication flows from AI’s growing capacity for self-explanation. Large language models may bring improved explanations of the weights and trade-offs that guide robotic actions [56,57]. Research should examine how this conversational transparency reshapes the value-congruence assessment: Does an eloquent but opaque rationale reassure users or drive deeper skepticism? Answers will determine whether explainability tools become valuable additions or potential liabilities for future robots.
Finally, our understanding of benevolence may itself be challenged going forward. Simulated empathy and conversational politeness already inflate perceived warmth and trustworthiness of chatbots [58,59,60]; AI may bring comparable social benefits for physical robots. While MvM opts for conceptual parsimony by focusing on ability and value congruence, we encourage future work that investigates benevolence. There may be an inflection point at which performative social features lead to perceived moral agency. Moments such as this require testing to discover whether benevolence attributions remain indirect assessments of human developers or migrate to the robot itself.
AI-empowered robotics both raises the ceiling of what robots can do and stretches the conceptual complexity of how humans judge what they do. Capturing that evolving judgment will likely require administering MvM at multiple time points or pairing it with objective logs of algorithmic updates so that evolving trust can be modelled alongside evolving performance. Future studies can keep MvM empirically grounded while preserving its theoretical clarity.

5. Limitations

The framework is not without limitations. First, MvM is deliberately lean. Where specific domains involve unique considerations (e.g., HIPAA in healthcare or pedagogical norms in education) or raise distinctive trust cues, researchers may prefer to add domain-specific items rather than alter the core wording. Second, MvM is, at this stage, concept-heavy: while each item inherits psychometric provenance from prior scales, the bi-referent set has not yet completed a full criterion-validation cycle. Third, MvM does not consider social virtues (e.g., affective warmth, fairness) or normative questions of moral agency. We view parsimony as a feature: it maximizes response rates and minimizes anthropomorphic misinterpretation, yielding scores that travel cleanly across trustee types. However, some tasks and robot designs may still evoke those richer perceptions. No single instrument can fit every scenario; in the rare cases where the core MvM pillars prove insufficient, investigators are encouraged to append validated modules from existing scales while reporting their psychometrics separately.

6. Conclusions

This paper set out with two intertwined goals: (i) to provide a conceptual model that allows scholars to compare trust in robots with trust in humans on truly common ground and (ii) to supply practitioners with a diagnostic tool that pinpoints how the facets of trust drive acceptance or resistance as robots are increasingly adopted in professional workplaces and personal life. We achieved the first aim by grafting decades of human-centric trust theory onto the unique affordances and liabilities of embodied machines, yielding the Mortal vs. Machine (MvM) model. MvM preserves the Integrated Model of Trust’s ability and integrity pillars yet reconceptualizes them so they travel across mortals and machines: ability retains its performance focus but incorporates embodied safety cues, and integrity is reframed as value congruence to strip away anthropocentric moral language.
Along the way, we re-examined historical distrust of previous technological systems, empirically grounded the relabeling of integrity, and used recent measurement critiques to show why benevolence measurements only muddy the picture when comparing mortals to machines. A forward-looking agenda then mapped how AI-driven generalist skills, simulated sociability, and conversational transparency may stretch our understanding of human–robot trust in the coming decade.
Taken together, MvM offers a conceptual model that works for both human and machine trust and, for researchers and practitioners alike, a field-ready, two-factor instrument. It invites robotics researchers, designers, and organizational leaders to move beyond binary debates over “trustworthy or not” and toward a fine-grained understanding of why trust diverges and how it can be intentionally engineered, calibrated, and sustained as machines become more capable partners in human endeavors.

Funding

This research is supported by the Ministry of Education, Singapore, under its Academic Research Fund Tier 2 Program (Award MOE-T2EP40122-0007). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the Ministry of Education, Singapore.

Acknowledgments

During manuscript preparation, the author used Microsoft Copilot for automated grammar checking and proofreading. All content and interpretations are the author’s own, and the author accepts full responsibility for the final text.

Conflicts of Interest

The author declares no conflict of interest.

Appendix A. MvM Trust Suggested Measurement Items

There have been decades of scholarship debate on whether trust, trustworthiness, or benevolence belong in technology scales, yet practitioners still need a compact, deployable instrument. Our two-pillar MvM-Scale, therefore, hews to parsimony: it measures only those cues that can be demonstrably instantiated by both mortals and machines—ability (task competence) and value congruence (alignment of priorities). Sacrificing benevolence and other affect-laden constructs trades theoretical breadth for field usability: the resulting ten-item battery can be embedded in time-pressured studies, usability trials, or post-task questionnaires without exhausting respondents. Future investigators remain free to append warmth or benevolence modules should their context warrant, but the core scale offers a “lowest-common-denominator” diagnostic that travels across industries, cultures, and trustee types.
To maximize comparability with prior research, the wording below preserves the syntax of the original IMOT items wherever possible, substituting neutral terms (“this agent”) for anthropocentric language and deleting words with moral or emotional connotations. In other words, wording intentionally eschews mental-state verbs (e.g., “cares,” “intends”) to avoid anthropomorphic leakage. All items describe observable decision patterns or performance signals that can, in principle, be exhibited by either a human or a robot. A minimal scoring sketch for the ten-item core follows the item list below.
  • Suggested scale items: AB: Ability Items; VC: Value-Congruence Items
  • Response format: 1 = Strongly Disagree to 7 = Strongly Agree.
  • AB1: The [human/robot] is very capable of performing their job.
  • AB2: I feel confident about my/the [human/robot]’s abilities.
  • AB3: The [human/robot] has sufficient knowledge about the work that they/it need(s) to do.
  • AB4: The [human/robot] is known to be successful in the things they are supposed to do.
  • AB5: The [human/robot] is reliable.
  • VC1: The [human/robot] applies consistent rules in their/its work.
  • VC2: I never have to worry about whether the [human/robot] will follow agreed procedures.
  • VC3: The [human/robot] is fair in their/its dealings with others.
  • VC4: The [human/robot]’s decisions reflect priorities similar to mine.
  • VC5: The [human/robot]’s decision trade-offs match my priorities.
  • Notes:
  • AB1–AB4 adapted from IMOT ability instrument
  • AB5 added to capture reliability component (applicable to both humans and machines)
  • VC1–VC4 adapted from IMOT integrity instrument
  • VC5 adapted from TPS-HRI [18]
  • Context-Specific Extensions (Optional)
  • When a domain has specific value-laden trust cues of interest, additional items can be added at the researchers’ discretion to the ten-item MvM core. Below are examples for education and healthcare.
  • Healthcare: The [human/robot] prioritizes patient safety over procedural speed.
  • Education: The [human/robot] prioritizes student privacy when storing performance data.
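For teams that administer the scale digitally, the sketch below shows one way to compute the two subscale means and the human-versus-robot gap from raw responses. It is a minimal sketch under the 7-point response format above; the data structure, example ratings, and function names are ours and purely illustrative, not part of the published instrument.

```python
# Minimal scoring sketch for the ten-item MvM core. Item keys mirror the list
# above (AB1-AB5, VC1-VC5); the example ratings below are hypothetical.
from statistics import mean

ABILITY_ITEMS = ["AB1", "AB2", "AB3", "AB4", "AB5"]
VALUE_ITEMS = ["VC1", "VC2", "VC3", "VC4", "VC5"]

def mvm_scores(responses: dict[str, int]) -> dict[str, float]:
    """Return subscale means (1-7) for one trustee rated by one respondent."""
    return {
        "ability": mean(responses[i] for i in ABILITY_ITEMS),
        "value_congruence": mean(responses[i] for i in VALUE_ITEMS),
    }

# Example: the same respondent rates a human co-worker and a robot.
human = {"AB1": 5, "AB2": 5, "AB3": 6, "AB4": 5, "AB5": 4,
         "VC1": 6, "VC2": 5, "VC3": 6, "VC4": 6, "VC5": 5}
robot = {"AB1": 7, "AB2": 6, "AB3": 6, "AB4": 6, "AB5": 6,
         "VC1": 6, "VC2": 6, "VC3": 4, "VC4": 3, "VC5": 3}

h, r = mvm_scores(human), mvm_scores(robot)
for facet in ("ability", "value_congruence"):
    # A positive delta means the robot is rated higher than the human on this facet.
    print(f"{facet}: human={h[facet]:.1f} robot={r[facet]:.1f} "
          f"delta={r[facet] - h[facet]:+.1f}")
```

A mirror-image pattern (robot rated higher on ability yet lower on value congruence) is exactly the kind of diagnostic gap that the main text argues MvM should surface.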

Appendix B. Construct Guide

Table A1. Construct guide.

Construct | One-Sentence Definition | Primary Diagnostic Cues in Practice | Example Survey Items
Ability | Perception of the extent to which the human/robot can successfully execute the focal task. | Capability; knowledge; performance; track record | AB1: “This [agent] is very capable of performing their job.”; AB3: “This [agent] has sufficient knowledge about the work that they/it need(s) to do.”
Value Congruence | Perception that a human/robot weights evidence, constraints, and objectives in a way that coheres with the trustor’s own hierarchy of values. | Decision weights; priorities; values; compliance | VC4: “This [agent]’s decisions reflect priorities similar to mine.”; VC5: “The [agent]’s decision trade-offs match my priorities.”

Appendix C. Trust Models Comparison

Model | Primary Trustee | Typical Item Count | Bi-Referent | Benevolence Items? | Key Strengths | Limitations (for Cross-Species Comparison)
IMOT/ABI [2] | Human colleagues and institutions | 15–18 | No | Yes | Seminal; predicts organizational risk taking | Benevolence and integrity are anthropocentric; does not distinctly capture reliability
TPS-HRI [18] | Robots (military and service) | 40–54 | No | Implicit (warmth wording) | Rich cue coverage; good reliability | Long; many items anthropomorphic (“friendly”, “kind”) but not developed on human targets
MDMT [19] | Social robots (claimed human applicability) | 20 | Partially | Yes | Integrates moral trust; concise | “Sincere/Ethical” still presumes moral agency; many robotics domains elicit high “N/A” responses [17]
UTAUT [48] | IT systems (acceptance) | 16–40 | No | No | Predictive of adoption intention; broad domain use | Measures intention, not trust; anthropocentric constructs; no direct human versus robot comparison
MvM (this work) | Humans and robots | 10 | Yes | No (treated as second-order perception of designers) | Cross-species symmetry; brief | Omits warmth/moral agency; must be extended if those cues are central to the research question

References

  1. Weizenbaum, J. Computer Power and Human Reason: From Judgement to Calculation; W. H. Freeman: San Francisco, CA, USA, 1976; 300p. [Google Scholar]
  2. Mayer, R.C.; Davis, J.H.; Schoorman, F.D. An integrative model of organizational trust. Acad. Manag. Rev. 1995, 20, 709. [Google Scholar] [CrossRef]
  3. Hannum, C.; Li, R.; Wang, W. A Trust-Assist Framework for Human–Robot Co-Carry Tasks. Robotics 2023, 12, 30. [Google Scholar] [CrossRef]
  4. Savela, N.; Turja, T.; Oksanen, A. Social acceptance of robots in different occupational fields: A systematic literature review. Int. J. Soc. Robot. 2018, 10, 493–502. [Google Scholar] [CrossRef]
  5. Intahchomphoo, C.; Millar, J.; Gundersen, O.E.; Tschirhart, C.; Meawasige, K.; Salemi, H. Effects of Artificial Intelligence and Robotics on Human Labour: A Systematic Review. Leg. Inf. Manag. 2024, 24, 109–124. [Google Scholar] [CrossRef]
  6. Hofstede, B.M.; Askari, S.I.; Lukkien, D.; Gosetto, L.; Alberts, J.W.; Tesfay, E.; ter Stal, M.; van Hoesel, T.; Cuijpers, R.H.; Vastenburg, M.H.; et al. A field study to explore user experiences with socially assistive robots for older adults: Emphasizing the need for more interactivity and personalisation. Front. Robot. AI 2025, 12, 1537272. [Google Scholar] [CrossRef]
  7. Kadylak, T.; Bayles, M.A.; Rogers, W.A. Are Friendly Robots Trusted More? An Analysis of Robot Sociability and Trust. Robotics 2023, 12, 162. [Google Scholar] [CrossRef]
  8. Kraus, J.; Miller, L.; Klumpp, M.; Babel, F.; Scholz, D.; Merger, J.; Baumann, M. On the Role of Beliefs and Trust for the Intention to Use Service Robots: An Integrated Trustworthiness Beliefs Model for Robot Acceptance. Int. J. Soc. Robot. 2024, 16, 1223–1246. [Google Scholar] [CrossRef]
  9. Legler, F.; Trezl, J.; Langer, D.; Bernhagen, M.; Dettmann, A.; Bullinger, A.C. Emotional Experience in Human–Robot Collaboration: Suitability of Virtual Reality Scenarios to Study Interactions beyond Safety Restrictions. Robotics 2023, 12, 168. [Google Scholar] [CrossRef]
  10. Cresswell, K.; Cunningham-Burley, S.; Sheikh, A. Health care robotics: Qualitative exploration of key challenges and future directions. J. Med. Internet Res. 2018, 20, e10410. [Google Scholar] [CrossRef] [PubMed]
  11. Billings, C.E. Human-Centered Aviation Automation: Principles and Guidelines; National Aeronautics and Space Administration, Ames Research Center: Moffett Field, CA, USA, 1996.
  12. Sanders, T.L.; Schafer, K.E.; Volante, W.; Reardon, A.; Hancock, P.A. Implicit Attitudes Toward Robots. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2016, 60, 1746–1749. [Google Scholar] [CrossRef]
  13. Bekey, G.A. Autonomous Robots: From Biological Inspiration to Implementation and Control (Intelligent Robotics and Autonomous Agents); The MIT Press: Cambridge, MA, USA, 2005. [Google Scholar]
  14. Hancock, P.A.; Billings, D.R.; Schaefer, K.E.; Chen, J.Y.C.; de Visser, E.J.; Parasuraman, R. A Meta-Analysis of Factors Affecting Trust in Human-Robot Interaction. Hum. Factors J. Hum. Factors Ergon. Soc. 2011, 53, 517–527. [Google Scholar] [CrossRef] [PubMed]
  15. Hoff, K.A.; Bashir, M. Trust in automation integrating empirical evidence on factors that influence trust. Hum. Factors J. Hum. Factors Ergon. Soc. 2015, 57, 407–434. [Google Scholar] [CrossRef]
  16. Madhavan, P.; Wiegmann, D.A. Similarities and differences between human–human and human–automation trust: An integrative review. Theor. Issues Ergon. Sci. 2007, 8, 277–301. [Google Scholar] [CrossRef]
  17. Chita-Tegmark, M.; Law, T.; Rabb, N.; Scheutz, M. Can You Trust Your Trust Measure? In Proceedings of the 2021 ACM/IEEE International Conference on Human-Robot Interaction, Boulder, CO, USA, 8–11 March 2021; Association for Computing Machinery: New York, NY, USA, 2021; pp. 92–100. [Google Scholar] [CrossRef]
  18. Schaefer, K.E. Measuring Trust in Human Robot Interactions: Development of the “Trust Perception Scale-HRI”. In Robust Intelligence and Trust in Autonomous Systems; Mittu, R., Sofge, D., Wagner, A., Lawless, W.F., Eds.; Springer: Boston, MA, USA, 2016; pp. 191–218. [Google Scholar] [CrossRef]
  19. Ullman, D.; Malle, B.F. Measuring Gains and Losses in Human-Robot Trust: Evidence for Differentiable Components of Trust. In Proceedings of the 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Daegu, Republic of Korea, 11–14 March 2019; pp. 618–619. Available online: https://ieeexplore.ieee.org/abstract/document/8673154 (accessed on 7 August 2025).
  20. Dawes, R.M.; Faust, D.; Meehl, P.E. Clinical versus actuarial judgment. Science 1989, 243, 1668–1674. [Google Scholar] [CrossRef]
  21. Meehl, P.E. Causes and Effects of My Disturbing Little Book. J. Pers. Assess. 1986, 50, 370–375. [Google Scholar] [CrossRef]
  22. Meehl, P.E. Clinical Versus Statistical Prediction: A Theoretical Analysis and a Review of the Evidence; University of Minnesota Press: Minneapolis, MN, USA, 1954; Volume x, 149p. [Google Scholar]
  23. Goldberg, L.R. Diagnosticians vs. diagnostic signs: The diagnosis of psychosis vs. neurosis from the MMPI. Psychol. Monogr. Gen. Appl. 1965, 79, 1–28. [Google Scholar] [CrossRef] [PubMed]
  24. Hogarth, R.M.; Makridakis, S. Forecasting and Planning: An Evaluation. 1981. Available online: http://pubsonline.informs.org/doi/abs/10.1287/mnsc.27.2.115 (accessed on 30 November 2013).
  25. Armstrong, J.S. The Seer-Sucker Theory: The Value of Experts in Forecasting. Marketing Papers. 1 June 1980. Available online: http://repository.upenn.edu/marketing_papers/3 (accessed on 10 May 2025).
  26. Hovland, C.I.; Janis, I.L.; Kelley, H.H. Communication and Persuasion; Psychological Studies of Opinion Change; Yale University Press: New Haven, CT, USA, 1953; Volume xii, 315p. [Google Scholar]
  27. Jones, A.P.; James, L.R.; Bruni, J.R. Perceived leadership behavior and employee confidence in the leader as moderated by job involvement. J. Appl. Psychol. 1975, 60, 146–149. [Google Scholar] [CrossRef]
  28. Davis, F.D.; Bagozzi, R.P.; Warshaw, P.R. User Acceptance of Computer Technology: A Comparison of Two Theoretical Models. Manag. Sci. 1989, 35, 982–1003. [Google Scholar] [CrossRef]
  29. Yokoi, R. Trust in self-driving vehicles is lower than in human drivers when both drive almost perfectly. Transp. Res. Part F Traffic Psychol. Behav. 2024, 103, 1–17. [Google Scholar] [CrossRef]
  30. Thomas, G. Wake-Up Call: The Lessons of AF447 and Other Recent High-Automation Aircraft Incidents Have Wide Training Implications; AIR TRANSPORT WORLD; Northwestern University Transportation Library: Evanston, IL, USA, 2011; Volume 48, Available online: http://trid.trb.org/view.aspx?id=1121246 (accessed on 7 December 2014).
  31. Lee, J.E.; See, K.A. Trust in Automation: Designing for Appropriate Reliance. Hum. Factors J. Hum. Factors Ergon. Soc. 2004, 46, 50–80. [Google Scholar] [CrossRef]
  32. Esterwood, C.; Robert, L.J. Do You Still Trust Me? Human-Robot Trust Repair Strategies. In Proceedings of the 30th IEEE International Conference on Robot and Human Interactive Communication, Vancouver, BC, Canada, 8–12 August 2021; Available online: http://deepblue.lib.umich.edu/handle/2027.42/168396 (accessed on 29 July 2025).
  33. Hopko, S.K.; Mehta, R.K.; Pagilla, P.R. Physiological and perceptual consequences of trust in collaborative robots: An empirical investigation of human and robot factors. Appl. Ergon. 2023, 106, 103863. [Google Scholar] [CrossRef] [PubMed]
  34. van Pinxteren, M.M.E.; Wetzels, R.W.H.; Rüger, J.; Pluymaekers, M.; Wetzels, M. Trust in humanoid robots: Implications for services marketing. J. Serv. Mark. 2019, 33, 507–518. [Google Scholar] [CrossRef]
  35. Tatasciore, M.; Bowden, V.; Loft, S. Do concurrent task demands impact the benefit of automation transparency? Appl. Ergon. 2023, 110, 104022. [Google Scholar] [CrossRef]
  36. Wanner, J.; Herm, L.-V.; Heinrich, K.; Janiesch, C. The effect of transparency and trust on intelligent system acceptance: Evidence from a user-based study. Electron. Mark. 2022, 32, 2079–2102. [Google Scholar] [CrossRef]
  37. Kaplan, A.D.; Kessler, T.T.; Brill, J.C.; Hancock, P.A. Trust in Artificial Intelligence: Meta-Analytic Findings. Hum. Factors 2023, 65, 337–359. [Google Scholar] [CrossRef]
  38. Afroogh, S.; Akbari, A.; Malone, E.; Kargar, M.; Alambeigi, H. Trust in AI: Progress, challenges, and future directions. Humanit. Soc. Sci. Commun. 2024, 11, 1568. [Google Scholar] [CrossRef]
  39. Torrent-Sellens, J.; Jiménez-Zarco, A.I.; Saigí-Rubió, F. Do People Trust in Robot-Assisted Surgery? Evidence from Europe. Int. J. Environ. Res. Public Health 2021, 18, 12519. [Google Scholar] [CrossRef]
  40. Fildes, R.; Goodwin, P. Forecasting support systems: What we know, what we need to know. Int. J. Forecast. 2013, 29, 290–294. [Google Scholar] [CrossRef]
  41. Lawrence, M.; Goodwin, P.; O’Connor, M.; Önkal, D. Judgmental forecasting: A review of progress over the last 25 years. Int. J. Forecast. 2006, 22, 493–518. [Google Scholar] [CrossRef]
  42. Alvarado-Valencia, J.A.; Barrero, L.H. Reliance, trust and heuristics in judgmental forecasting. Comput. Hum. Behav. 2014, 36, 102–113. [Google Scholar] [CrossRef]
  43. Formosa, P.; Rogers, W.; Griep, Y.; Bankins, S.; Richards, D. Medical AI and human dignity: Contrasting perceptions of human and artificially intelligent (AI) decision making in diagnostic and medical resource allocation contexts. Comput. Hum. Behav. 2022, 133, 107296. [Google Scholar] [CrossRef]
  44. Bhat, S.; Lyons, J.B.; Shi, C.; Yang, X.J. Evaluating the Impact of Personalized Value Alignment in Human-Robot Interaction: Insights into Trust and Team Performance Outcomes. arXiv 2023, arXiv:2311.16051. [Google Scholar] [CrossRef]
  45. Gideoni, R.; Honig, S.; Oron-Gilad, T. Is It Personal? The Impact of Personally Relevant Robotic Failures (PeRFs) on Humans’ Trust, Likeability, and Willingness to Use the Robot. Int. J. Soc. Robot. 2024, 16, 1049–1067. [Google Scholar] [CrossRef]
  46. Firmino de Souza, D.; Sousa, S.; Kristjuhan-Ling, K.; Dunajeva, O.; Roosileht, M.; Pentel, A.; Mõttus, M.; Can Özdemir, M.; Gratšjova, Ž. Trust and Trustworthiness from Human-Centered Perspective in Human–Robot Interaction (HRI)—A Systematic Literature Review. Electronics 2025, 14, 1557. [Google Scholar] [CrossRef]
  47. Rogers, E.M. Diffusion of Innovations, 4th ed.; Simon and Schuster: New York, NY, USA, 2010; 550p. [Google Scholar]
  48. Venkatesh, V.; Morris, M.G.; Davis, G.B.; Davis, F.D. User Acceptance of Information Technology: Toward a Unified View. MIS Q. 2003, 27, 425–478. [Google Scholar] [CrossRef]
  49. Lu, H.; Zhu, M.; Lu, C.; Feng, S.; Wang, X.; Wang, Y.; Yang, H. Empowering safer socially sensitive autonomous vehicles using human-plausible cognitive encoding. Proc. Natl. Acad. Sci. USA 2025, 122, e2401626122. [Google Scholar] [CrossRef]
  50. Christoforakos, L.; Gallucci, A.; Surmava-Große, T.; Ullrich, D.; Diefenbach, S. Can Robots Earn Our Trust the Same Way Humans Do? A Systematic Exploration of Competence, Warmth, and Anthropomorphism as Determinants of Trust Development in HRI. Front. Robot. AI 2021, 8, 640444. [Google Scholar] [CrossRef]
  51. Stower, R.; Calvo-Barajas, N.; Castellano, G.; Kappas, A. A Meta-analysis on Children’s Trust in Social Robots. Int. J. Soc. Robot. 2021, 13, 1979–2001. [Google Scholar] [CrossRef]
  52. Gefen, D.; Karahanna, E.; Straub, D.W. Trust and TAM in Online Shopping: An Integrated Model. MIS Q. 2003, 27, 51–90. [Google Scholar] [CrossRef]
  53. Cameron, D.; Collins, E.C.; de Saille, S.; Eimontaite, I.; Greenwood, A.; Law, J. The Social Triad Model: Considering the Deployer in a Novel Approach to Trust in Human–Robot Interaction. Int. J. Soc. Robot. 2024, 16, 1405–1418. [Google Scholar] [CrossRef] [PubMed]
  54. Scholz, D.D.; Kraus, J.; Miller, L. Measuring the Propensity to Trust in Automated Technology: Examining Similarities to Dispositional Trust in Other Humans and Validation of the PTT-A Scale. Int. J. Hum.–Comput. Interact. 2025, 41, 970–993. [Google Scholar] [CrossRef]
  55. Suárez-Ruiz, F.; Zhou, X.; Pham, Q.-C. Can robots assemble an IKEA chair? Sci. Robot. 2018, 3, eaat6385. [Google Scholar] [CrossRef] [PubMed]
  56. Ye, Y.; You, H.; Du, J. Improved Trust in Human-Robot Collaboration with ChatGPT. arXiv 2023, arXiv:2304.12529. [Google Scholar] [CrossRef]
  57. Zhu, L.; Williams, T. Effects of Proactive Explanations by Robots on Human-Robot Trust. In Social Robotics, Proceedings of the 12th International Conference, ICSR 2020, Golden, CO, USA, 14–18 November 2020; Wagner, A.R., Feil-Seifer, D., Haring, K.S., Rossi, S., Williams, T., He, H., Sam Ge, S., Eds.; Springer International Publishing: Cham, Switzerland, 2020; pp. 85–95. [Google Scholar]
  58. Seitz, L. Artificial empathy in healthcare chatbots: Does it feel authentic? Comput. Hum. Behav. Artif. Hum. 2024, 2, 100067. [Google Scholar] [CrossRef]
  59. Zhou, M.; Liu, L.; Feng, Y. Building citizen trust to enhance satisfaction in digital public services: The role of empathetic chatbot communication. Behav. Inf. Technol. 2025, 1–20. [Google Scholar] [CrossRef]
  60. Brummernhenrich, B.; Paulus, C.L.; Jucks, R. Applying social cognition to feedback chatbots: Enhancing trustworthiness through politeness. Br. J. Educ. Technol. 2025. Available online: https://onlinelibrary.wiley.com/doi/abs/10.1111/bjet.13569 (accessed on 29 May 2025). [CrossRef]