Mortal vs. Machine: A Compact Two-Factor Model for Comparing Trust in Humans and Robots
Abstract
1. Introduction
2. Manual Versus Autonomy: Historical Roots of an Instinctive Distrust and Their Relevance to Robotics
3. Components of Trust
3.1. Ability
Domain-Specificity of Ability
3.2. Value Congruence
3.3. Benevolence
4. Discussion
4.1. Practical Applications of MvM
4.2. Theoretical Contributions of MvM
4.3. Future Research in the Age of AI-Empowered Robotics
5. Limitations
6. Conclusions
Funding
Acknowledgments
Conflicts of Interest
Appendix A. MvM Trust Suggested Measurement Items
- Suggested scale items: AB = Ability items; VC = Value-Congruence items.
- Response format: 1 = Strongly Disagree to 7 = Strongly Agree. A brief scoring sketch follows the item list at the end of this appendix.
- AB1: The [human/robot] is very capable of performing their job.
- AB2: I feel confident about my/the [human/robot]’s abilities.
- AB3: The [human/robot] has sufficient knowledge about the work that they/it need(s) to do.
- AB4: The [human/robot] is known to be successful in the things they are supposed to do.
- AB5: The [human/robot] is reliable.
- VC1: The [human/robot] applies consistent rules in their/its work.
- VC2: I never have to worry about whether the [human/robot] will follow agreed procedures.
- VC3: The [human/robot] is fair in their/its dealings with others.
- VC4: The [human/robot]’s decisions reflect priorities similar to mine.
- VC5: The [human/robot]’s decision trade-offs match my priorities.
- Notes:
- AB1–AB4 adapted from the IMOT ability instrument [2]
- AB5 added to capture reliability component (applicable to both humans and machines)
- VC1–VC4 adapted from the IMOT integrity instrument [2]
- VC5 adapted from TPS-HRI [18]
- Context-Specific Extensions (Optional)
- When a domain has specific value-laden trust cues of interest, additional items can be appended to the ten-item MvM core at the researchers’ discretion. Below are examples for healthcare and education.
- Healthcare: The [human/robot] prioritizes patient safety over procedural speed.
- Education: The [human/robot] prioritizes student privacy when storing performance data.
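For researchers implementing the scale, the sketch below illustrates one way to score the ten-item core. The article does not prescribe a scoring procedure; the unit-weighted subscale means shown here are simply a common convention for Likert composites, and the function name and example ratings are illustrative assumptions, not part of the published instrument.

```python
# Minimal MvM scoring sketch (assumed convention: unit-weighted item
# means per subscale; the paper itself does not prescribe a scoring rule).
from statistics import mean

ABILITY_ITEMS = ["AB1", "AB2", "AB3", "AB4", "AB5"]
VALUE_ITEMS = ["VC1", "VC2", "VC3", "VC4", "VC5"]

def score_mvm(responses: dict[str, int]) -> dict[str, float]:
    """Compute Ability and Value-Congruence subscale scores.

    `responses` maps item codes (AB1-AB5, VC1-VC5) to 1-7 Likert ratings.
    """
    for code, value in responses.items():
        if not 1 <= value <= 7:
            raise ValueError(f"{code}: rating {value} outside the 1-7 scale")
    return {
        "ability": mean(responses[i] for i in ABILITY_ITEMS),
        "value_congruence": mean(responses[i] for i in VALUE_ITEMS),
    }

# Hypothetical ratings from one respondent evaluating a robot trustee.
robot_ratings = {"AB1": 6, "AB2": 5, "AB3": 6, "AB4": 5, "AB5": 7,
                 "VC1": 6, "VC2": 4, "VC3": 5, "VC4": 3, "VC5": 4}
print(score_mvm(robot_ratings))  # {'ability': 5.8, 'value_congruence': 4.4}
```

Optional context-specific extension items can be scored the same way by appending their codes to the relevant subscale list.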
Appendix B. Construct Guide
| Construct | One-Sentence Definition | Primary Diagnostic Cues in Practice | Example Survey Items |
|---|---|---|---|
| Ability | Perception of the extent to which the human/robot can successfully execute the focal task. | Capability; knowledge; performance; track record | AB1: “This [agent] is very capable of performing their job.”; AB3: “This [agent] has sufficient knowledge about the work that they/it need(s) to do.” |
| Value Congruence | Perception that a human/robot weights evidence, constraints, and objectives in a way that coheres with the trustor’s own hierarchy of values. | Decision weights; priorities; values; compliance | VC4: “This [agent]’s decisions reflect priorities similar to mine.”; VC5: “The [agent]’s decision trade-offs match my priorities.” |
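Because each construct is measured with five items, internal consistency should be checked before interpreting subscale scores. The sketch below computes Cronbach’s alpha for one subscale; this is the standard formula rather than a procedure taken from this article, and the sample ratings are hypothetical placeholders.

```python
# Hedged sketch: Cronbach's alpha for a five-item MvM subscale.
# Standard formula, not the author's pipeline; data are hypothetical.
import numpy as np

def cronbach_alpha(data: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) rating matrix."""
    k = data.shape[1]
    item_vars = data.var(axis=0, ddof=1)      # per-item sample variances
    total_var = data.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical ratings: 5 respondents x 5 Ability items (AB1-AB5).
ability = np.array([
    [6, 5, 6, 5, 7],
    [4, 4, 5, 4, 5],
    [7, 6, 6, 7, 7],
    [3, 4, 3, 4, 4],
    [5, 5, 6, 5, 6],
])
print(f"alpha = {cronbach_alpha(ability):.2f}")
```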
Appendix C. Trust Models Comparison
| Model | Primary Trustee | Typical Item Count | Bi-Referent | Benevolence Items? | Key Strengths | Limitations (for Cross-Species Comparison) |
|---|---|---|---|---|---|---|
| IMOT/ABI [2] | Human colleagues and institutions | 15–18 | No | Yes | Seminal; predicts organizational risk-taking | Benevolence and integrity are anthropocentric; does not distinctly capture reliability |
| TPS-HRI [18] | Robots (military and service) | 40–54 | No | Implicit (warmth wording) | Rich cue coverage; good reliability | Long; many items anthropomorphic (“friendly”, “kind”) but not developed on human targets |
| MDMT [19] | Social robots (claimed human applicability) | 20 | Partially | Yes | Integrates moral trust; concise | “Sincere/Ethical” still presumes moral agency; many robotics domains elicit high “N/A” responses [17] |
| UTAUT [48] | IT systems (acceptance) | 16–40 | No | No | Predictive of adoption intention; broad domain use | Measures intention, not trust; anthropocentric constructs; no direct human-versus-robot comparison |
| MvM (this work) | Humans and robots | 10 | Yes | No (treated as a second-order perception of designers) | Cross-species symmetry; brief | Omits warmth/moral agency; must be extended if those cues are central to the research question |
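As the Bi-Referent column above indicates, MvM’s ten items can be administered verbatim for a human trustee and again for a robot trustee, permitting a direct within-person comparison of trust. The sketch below illustrates one such comparison using a paired t-test on subscale means; the variable names and data are hypothetical placeholders, not results from this article.

```python
# Illustrative bi-referent comparison: each participant rates a human
# and a robot on the same ten MvM items; subscale means are compared
# with a paired t-test. All data below are hypothetical.
from scipy.stats import ttest_rel

# Per-participant Value-Congruence subscale means (n = 8, hypothetical).
human_vc = [5.6, 6.0, 5.2, 5.8, 6.2, 5.4, 5.0, 5.8]
robot_vc = [4.8, 5.4, 5.0, 4.6, 5.6, 4.8, 4.4, 5.2]

t_stat, p_value = ttest_rel(human_vc, robot_vc)
print(f"paired t({len(human_vc) - 1}) = {t_stat:.2f}, p = {p_value:.3f}")
```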
References
1. Weizenbaum, J. Computer Power and Human Reason: From Judgment to Calculation; W. H. Freeman: San Francisco, CA, USA, 1976; 300p.
2. Mayer, R.C.; Davis, J.H.; Schoorman, F.D. An integrative model of organizational trust. Acad. Manag. Rev. 1995, 20, 709–734.
3. Hannum, C.; Li, R.; Wang, W. A Trust-Assist Framework for Human–Robot Co-Carry Tasks. Robotics 2023, 12, 30.
4. Savela, N.; Turja, T.; Oksanen, A. Social acceptance of robots in different occupational fields: A systematic literature review. Int. J. Soc. Robot. 2018, 10, 493–502.
5. Intahchomphoo, C.; Millar, J.; Gundersen, O.E.; Tschirhart, C.; Meawasige, K.; Salemi, H. Effects of Artificial Intelligence and Robotics on Human Labour: A Systematic Review. Leg. Inf. Manag. 2024, 24, 109–124.
6. Hofstede, B.M.; Askari, S.I.; Lukkien, D.; Gosetto, L.; Alberts, J.W.; Tesfay, E.; ter Stal, M.; van Hoesel, T.; Cuijpers, R.H.; Vastenburg, M.H.; et al. A field study to explore user experiences with socially assistive robots for older adults: Emphasizing the need for more interactivity and personalisation. Front. Robot. AI 2025, 12, 1537272.
7. Kadylak, T.; Bayles, M.A.; Rogers, W.A. Are Friendly Robots Trusted More? An Analysis of Robot Sociability and Trust. Robotics 2023, 12, 162.
8. Kraus, J.; Miller, L.; Klumpp, M.; Babel, F.; Scholz, D.; Merger, J.; Baumann, M. On the Role of Beliefs and Trust for the Intention to Use Service Robots: An Integrated Trustworthiness Beliefs Model for Robot Acceptance. Int. J. Soc. Robot. 2024, 16, 1223–1246.
9. Legler, F.; Trezl, J.; Langer, D.; Bernhagen, M.; Dettmann, A.; Bullinger, A.C. Emotional Experience in Human–Robot Collaboration: Suitability of Virtual Reality Scenarios to Study Interactions beyond Safety Restrictions. Robotics 2023, 12, 168.
10. Cresswell, K.; Cunningham-Burley, S.; Sheikh, A. Health care robotics: Qualitative exploration of key challenges and future directions. J. Med. Internet Res. 2018, 20, e10410.
11. Billings, C.E. Human-Centered Aviation Automation: Principles and Guidelines; National Aeronautics and Space Administration, Ames Research Center: Moffett Field, CA, USA, 1996.
12. Sanders, T.L.; Schafer, K.E.; Volante, W.; Reardon, A.; Hancock, P.A. Implicit Attitudes Toward Robots. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2016, 60, 1746–1749.
13. Bekey, G.A. Autonomous Robots: From Biological Inspiration to Implementation and Control (Intelligent Robotics and Autonomous Agents); The MIT Press: Cambridge, MA, USA, 2005.
14. Hancock, P.A.; Billings, D.R.; Schaefer, K.E.; Chen, J.Y.C.; de Visser, E.J.; Parasuraman, R. A Meta-Analysis of Factors Affecting Trust in Human-Robot Interaction. Hum. Factors J. Hum. Factors Ergon. Soc. 2011, 53, 517–527.
15. Hoff, K.A.; Bashir, M. Trust in automation: Integrating empirical evidence on factors that influence trust. Hum. Factors J. Hum. Factors Ergon. Soc. 2015, 57, 407–434.
16. Madhavan, P.; Wiegmann, D.A. Similarities and differences between human–human and human–automation trust: An integrative review. Theor. Issues Ergon. Sci. 2007, 8, 277–301.
17. Chita-Tegmark, M.; Law, T.; Rabb, N.; Scheutz, M. Can You Trust Your Trust Measure? In Proceedings of the 2021 ACM/IEEE International Conference on Human-Robot Interaction, Boulder, CO, USA, 8–11 March 2021; Association for Computing Machinery: New York, NY, USA, 2021; pp. 92–100.
18. Schaefer, K.E. Measuring Trust in Human Robot Interactions: Development of the “Trust Perception Scale-HRI”. In Robust Intelligence and Trust in Autonomous Systems; Mittu, R., Sofge, D., Wagner, A., Lawless, W.F., Eds.; Springer: Boston, MA, USA, 2016; pp. 191–218.
19. Ullman, D.; Malle, B.F. Measuring Gains and Losses in Human-Robot Trust: Evidence for Differentiable Components of Trust. In Proceedings of the 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Daegu, Republic of Korea, 11–14 March 2019; pp. 618–619. Available online: https://ieeexplore.ieee.org/abstract/document/8673154 (accessed on 7 August 2025).
20. Dawes, R.M.; Faust, D.; Meehl, P.E. Clinical versus actuarial judgment. Science 1989, 243, 1668–1674.
21. Meehl, P.E. Causes and Effects of My Disturbing Little Book. J. Pers. Assess. 1986, 50, 370–375.
22. Meehl, P.E. Clinical Versus Statistical Prediction: A Theoretical Analysis and a Review of the Evidence; University of Minnesota Press: Minneapolis, MN, USA, 1954; 149p.
23. Goldberg, L.R. Diagnosticians vs. diagnostic signs: The diagnosis of psychosis vs. neurosis from the MMPI. Psychol. Monogr. Gen. Appl. 1965, 79, 1–28.
24. Hogarth, R.M.; Makridakis, S. Forecasting and Planning: An Evaluation. Manag. Sci. 1981, 27, 115–138. Available online: http://pubsonline.informs.org/doi/abs/10.1287/mnsc.27.2.115 (accessed on 30 November 2013).
25. Armstrong, J.S. The Seer-Sucker Theory: The Value of Experts in Forecasting. Marketing Papers, 1 June 1980. Available online: http://repository.upenn.edu/marketing_papers/3 (accessed on 10 May 2025).
26. Hovland, C.I.; Janis, I.L.; Kelley, H.H. Communication and Persuasion: Psychological Studies of Opinion Change; Yale University Press: New Haven, CT, USA, 1953; 315p.
27. Jones, A.P.; James, L.R.; Bruni, J.R. Perceived leadership behavior and employee confidence in the leader as moderated by job involvement. J. Appl. Psychol. 1975, 60, 146–149.
28. Davis, F.D.; Bagozzi, R.P.; Warshaw, P.R. User Acceptance of Computer Technology: A Comparison of Two Theoretical Models. Manag. Sci. 1989, 35, 982–1003.
29. Yokoi, R. Trust in self-driving vehicles is lower than in human drivers when both drive almost perfectly. Transp. Res. Part F Traffic Psychol. Behav. 2024, 103, 1–17.
30. Thomas, G. Wake-Up Call: The Lessons of AF447 and Other Recent High-Automation Aircraft Incidents Have Wide Training Implications. Air Transport World 2011, 48. Available online: http://trid.trb.org/view.aspx?id=1121246 (accessed on 7 December 2014).
31. Lee, J.D.; See, K.A. Trust in Automation: Designing for Appropriate Reliance. Hum. Factors J. Hum. Factors Ergon. Soc. 2004, 46, 50–80.
32. Esterwood, C.; Robert, L.P. Do You Still Trust Me? Human-Robot Trust Repair Strategies. In Proceedings of the 30th IEEE International Conference on Robot and Human Interactive Communication, Vancouver, BC, Canada, 8–12 August 2021. Available online: http://deepblue.lib.umich.edu/handle/2027.42/168396 (accessed on 29 July 2025).
33. Hopko, S.K.; Mehta, R.K.; Pagilla, P.R. Physiological and perceptual consequences of trust in collaborative robots: An empirical investigation of human and robot factors. Appl. Ergon. 2023, 106, 103863.
34. van Pinxteren, M.M.E.; Wetzels, R.W.H.; Rüger, J.; Pluymaekers, M.; Wetzels, M. Trust in humanoid robots: Implications for services marketing. J. Serv. Mark. 2019, 33, 507–518.
35. Tatasciore, M.; Bowden, V.; Loft, S. Do concurrent task demands impact the benefit of automation transparency? Appl. Ergon. 2023, 110, 104022.
36. Wanner, J.; Herm, L.-V.; Heinrich, K.; Janiesch, C. The effect of transparency and trust on intelligent system acceptance: Evidence from a user-based study. Electron. Mark. 2022, 32, 2079–2102.
37. Kaplan, A.D.; Kessler, T.T.; Brill, J.C.; Hancock, P.A. Trust in Artificial Intelligence: Meta-Analytic Findings. Hum. Factors 2023, 65, 337–359.
38. Afroogh, S.; Akbari, A.; Malone, E.; Kargar, M.; Alambeigi, H. Trust in AI: Progress, challenges, and future directions. Humanit. Soc. Sci. Commun. 2024, 11, 1568.
39. Torrent-Sellens, J.; Jiménez-Zarco, A.I.; Saigí-Rubió, F. Do People Trust in Robot-Assisted Surgery? Evidence from Europe. Int. J. Environ. Res. Public Health 2021, 18, 12519.
40. Fildes, R.; Goodwin, P. Forecasting support systems: What we know, what we need to know. Int. J. Forecast. 2013, 29, 290–294.
41. Lawrence, M.; Goodwin, P.; O’Connor, M.; Önkal, D. Judgmental forecasting: A review of progress over the last 25 years. Int. J. Forecast. 2006, 22, 493–518.
42. Alvarado-Valencia, J.A.; Barrero, L.H. Reliance, trust and heuristics in judgmental forecasting. Comput. Hum. Behav. 2014, 36, 102–113.
43. Formosa, P.; Rogers, W.; Griep, Y.; Bankins, S.; Richards, D. Medical AI and human dignity: Contrasting perceptions of human and artificially intelligent (AI) decision making in diagnostic and medical resource allocation contexts. Comput. Hum. Behav. 2022, 133, 107296.
44. Bhat, S.; Lyons, J.B.; Shi, C.; Yang, X.J. Evaluating the Impact of Personalized Value Alignment in Human-Robot Interaction: Insights into Trust and Team Performance Outcomes. arXiv 2023, arXiv:2311.16051.
45. Gideoni, R.; Honig, S.; Oron-Gilad, T. Is It Personal? The Impact of Personally Relevant Robotic Failures (PeRFs) on Humans’ Trust, Likeability, and Willingness to Use the Robot. Int. J. Soc. Robot. 2024, 16, 1049–1067.
46. Firmino de Souza, D.; Sousa, S.; Kristjuhan-Ling, K.; Dunajeva, O.; Roosileht, M.; Pentel, A.; Mõttus, M.; Can Özdemir, M.; Gratšjova, Ž. Trust and Trustworthiness from Human-Centered Perspective in Human–Robot Interaction (HRI)—A Systematic Literature Review. Electronics 2025, 14, 1557.
47. Rogers, E.M. Diffusion of Innovations, 4th ed.; Simon and Schuster: New York, NY, USA, 2010; 550p.
48. Venkatesh, V.; Morris, M.G.; Davis, G.B.; Davis, F.D. User Acceptance of Information Technology: Toward a Unified View. MIS Q. 2003, 27, 425–478.
49. Lu, H.; Zhu, M.; Lu, C.; Feng, S.; Wang, X.; Wang, Y.; Yang, H. Empowering safer socially sensitive autonomous vehicles using human-plausible cognitive encoding. Proc. Natl. Acad. Sci. USA 2025, 122, e2401626122.
50. Christoforakos, L.; Gallucci, A.; Surmava-Große, T.; Ullrich, D.; Diefenbach, S. Can Robots Earn Our Trust the Same Way Humans Do? A Systematic Exploration of Competence, Warmth, and Anthropomorphism as Determinants of Trust Development in HRI. Front. Robot. AI 2021, 8, 640444.
51. Stower, R.; Calvo-Barajas, N.; Castellano, G.; Kappas, A. A Meta-analysis on Children’s Trust in Social Robots. Int. J. Soc. Robot. 2021, 13, 1979–2001.
52. Gefen, D.; Karahanna, E.; Straub, D.W. Trust and TAM in Online Shopping: An Integrated Model. MIS Q. 2003, 27, 51–90.
53. Cameron, D.; Collins, E.C.; de Saille, S.; Eimontaite, I.; Greenwood, A.; Law, J. The Social Triad Model: Considering the Deployer in a Novel Approach to Trust in Human–Robot Interaction. Int. J. Soc. Robot. 2024, 16, 1405–1418.
54. Scholz, D.D.; Kraus, J.; Miller, L. Measuring the Propensity to Trust in Automated Technology: Examining Similarities to Dispositional Trust in Other Humans and Validation of the PTT-A Scale. Int. J. Hum.–Comput. Interact. 2025, 41, 970–993.
55. Suárez-Ruiz, F.; Zhou, X.; Pham, Q.-C. Can robots assemble an IKEA chair? Sci. Robot. 2018, 3, eaat6385.
56. Ye, Y.; You, H.; Du, J. Improved Trust in Human-Robot Collaboration with ChatGPT. arXiv 2023, arXiv:2304.12529.
57. Zhu, L.; Williams, T. Effects of Proactive Explanations by Robots on Human-Robot Trust. In Social Robotics, Proceedings of the 12th International Conference, ICSR 2020, Golden, CO, USA, 14–18 November 2020; Wagner, A.R., Feil-Seifer, D., Haring, K.S., Rossi, S., Williams, T., He, H., Sam Ge, S., Eds.; Springer International Publishing: Cham, Switzerland, 2020; pp. 85–95.
58. Seitz, L. Artificial empathy in healthcare chatbots: Does it feel authentic? Comput. Hum. Behav. Artif. Hum. 2024, 2, 100067.
59. Zhou, M.; Liu, L.; Feng, Y. Building citizen trust to enhance satisfaction in digital public services: The role of empathetic chatbot communication. Behav. Inf. Technol. 2025, 1–20.
60. Brummernhenrich, B.; Paulus, C.L.; Jucks, R. Applying social cognition to feedback chatbots: Enhancing trustworthiness through politeness. Br. J. Educ. Technol. 2025. Available online: https://onlinelibrary.wiley.com/doi/abs/10.1111/bjet.13569 (accessed on 29 May 2025).
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).