Facial Anthropomorphic Trustworthiness Scale for Social Robots: A Hybrid Approach
Abstract
1. Introduction
2. Literature Review
2.1. Facial Anthropomorphic Trustworthiness
2.2. Trustworthiness Dimensions for Social Robots
3. Methods
3.1. Instrumental Methods
3.1.1. Traditional Methods vs. Crowdsourcing to Explore User Experience
3.1.2. Natural Language Processing (NLP) in Deep Convolution
3.2. Analytical Methods
4. Different Phases in Developing and Validating the Scale
4.1. Phase 1: Item Generation via a Hybrid Method
| Variables | Definition | Methods | Source and Example |
|---|---|---|---|
| Ethics Concern | Ethics concern refers to the extent to which individuals perceive that the robot has been designed and programmed with ethical considerations in mind. It involves the evaluation of whether the robot’s actions, behaviors, and decision-making processes align with ethical principles or values. | Inductive | Schaefer [68], Hancock et al. [69], Tay et al. [70], Wheless and Grotz [71], Colquitt and LePine [72], Yagoda and Gillan [73], Bhattacherjee [39], Büttner and Göritz [74] |
| | | Deductive | “I’d trust the one learned from a compassionate creator in a safe loving environment”; “People can write various codes and programs to make robots do evil things” |
| Capability | Capability refers to the dimension that assesses individuals’ perceptions of a social robot’s competence and ability to perform its designated tasks or functions effectively. | Inductive | Schaefer [68], Hancock et al. [69], Tay et al. [70], Wheless and Grotz [71], Colquitt and LePine [72], Yagoda and Gillan [73], Bhattacherjee [39], Büttner and Göritz [74] |
| | | Deductive | “They are good robots and competent enough to their programmed task”; “I want to see them as robots that will perform their duties in an efficient manner” |
| Positive Affect | Positive affect refers to the dimension that captures individuals’ emotional or affective responses characterized by positive feelings, attitudes, or sentiments toward the robot. | Inductive | Schaefer [68], Hancock et al. [69], Tay et al. [70], Wheless and Grotz [71], Colquitt and LePine [72], Yagoda and Gillan [73], Bhattacherjee [39], Büttner and Göritz [74] |
| | | Deductive | “Overall, the robot should be cute as that makes me feel protective of it and more trustful”; “The robot looks like a robot it feels more honest and open” |
| Anthropomorphism | Anthropomorphism refers to the extent to which a robot has a human-like appearance, behavior, or personality in order to facilitate social interaction with humans. | Inductive | Ho and MacDorman [36], Walters et al. [75] |
| | | Deductive | “If it’s making a poor attempt at looking humanlike I immediately distrust it and am afraid of it”; “Trusting robots that look like classic robots is easier than a robot that is made to look like a human” |
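For readers who want to reproduce the deductive side of this hybrid procedure, the sketch below illustrates the kind of user-generated-content (UGC) mining pipeline suggested by Section 3.1.2 and the NLP references cited in this article: sentence segmentation, Sentence-BERT embeddings, and k-means clustering with the elbow heuristic. It is a minimal illustration only; the pretrained model name, the toy comments, and the parameter choices are assumptions, not the exact configuration used in the study.

```python
# Minimal sketch of a UGC-mining pipeline for deductive item generation:
# segment crowdsourced comments into sentences, embed them with
# Sentence-BERT, and group them with k-means (elbow heuristic for k).
# Model name and example comments are illustrative assumptions.
import nltk
from nltk.tokenize import sent_tokenize
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

nltk.download("punkt", quiet=True)      # Punkt sentence-boundary models
nltk.download("punkt_tab", quiet=True)  # needed by newer NLTK releases

comments = [
    "I'd trust one learned from a compassionate creator. It should also be cute.",
    "Trusting robots that look like classic robots is easier than human-like ones.",
]

# 1. Unsupervised sentence segmentation (Punkt)
sentences = [s for c in comments for s in sent_tokenize(c)]

# 2. Sentence embeddings from a pretrained Sentence-BERT model
model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(sentences)

# 3. Inertia over a range of k supports an elbow plot for choosing k
inertias = {
    k: KMeans(n_clusters=k, n_init=10, random_state=0).fit(embeddings).inertia_
    for k in range(2, min(8, len(sentences)) + 1)
}
print(inertias)

# 4. Fit the chosen k (2 for this toy data) and inspect each cluster
#    as a candidate theme for deductive scale items.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embeddings)
for sentence, label in zip(sentences, labels):
    print(label, sentence)
```

Clusters obtained this way still require expert screening and rewriting into items, which is what the refinement and reduction phases (Sections 4.2 and 4.3) address.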
4.2. Phase 2: Item Refinement and Polish
4.3. Phase 3: Item Reduction and Exploratory Factor Analysis
4.4. Phase 4: Validation Phase
5. Discussion
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Data Availability Statement
Conflicts of Interest
List of Abbreviations
| Abbreviations | Definition |
|---|---|
| A.I. | Artificial Intelligence |
| FATSR-17 | Facial Anthropomorphic Trustworthiness towards Social Robots |
| HRI | Human–robot interaction |
| EFA | Exploratory factor analysis |
| CFA | Confirmatory factor analysis |
| NLP | Natural language processing |
| SOTA | State-of-the-art |
| UGC | User-generated content |
| BERT | Bidirectional encoder representations from transformers |
| MLM | Masked language modeling |
| NSP | Next sentence prediction |
| EC | Ethics concern |
| CAP | Capability |
| AFF | Positive affect |
| AN | Anthropomorphism |
| SEM | Structural equation model |
| GFI | Goodness-of-fit index |
| IFI | Incremental fit index |
| NFI | Normed fit index |
| CFI | Comparative fit index |
| AGFI | Adjusted goodness-of-fit index |
| RMSEA | Root-mean-square error of approximation |
| AVE | Average variance extracted |
| MSV | Maximum shared variance |
| TORC | Theory of robot communication |
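Several of the abbreviations above (BERT, MLM, NSP) concern BERT’s two pretraining objectives: masked language modeling and next sentence prediction. Purely as an illustration of masked language modeling, the sketch below queries a pretrained BERT model through the Hugging Face transformers fill-mask pipeline; the model choice and example sentence are assumptions for demonstration and are not part of the study’s pipeline.

```python
# Illustration of masked language modeling (MLM), one of BERT's two
# pretraining objectives (the other being next sentence prediction, NSP).
# The model and the example sentence are illustrative assumptions.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")
for pred in unmasker("People can trust a robot that looks [MASK]."):
    # Each prediction contains the filled-in token and its softmax score.
    print(f"{pred['token_str']:>12}  score={pred['score']:.3f}")
```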
Appendix A
Ethics Concern | Capability | Positive Affect | Anthropomorphism |
---|---|---|---|
look evil | capable | friendly | uncanny |
sufficient integrity | competent | conscious | an actual human face |
self-awareness | perform duties | kind | natural appearance |
ethics in programing | successful at the things | cute | appropriate features |
sense of justice | sufficient artificial intelligence | happy | too human-identical |
stick to its program | confident | well qualified | too close to humanity |
looks fair | specialized capability | concerned welfare | a minimum of human appearance |
behaviors are consistent | satisfy users’ needs | desires seem important | machine-like features |
ethical concern | expect good advice | anything to hurt me | moderate lifelike |
sound principles | dependable | important to me | weird face |
predictable | reliable | help me | complex detailed features |
standards | autonomous | interested in welfare | neither too plain nor too weird |
operates scrupulously | finish the task | put my interest first | neither too dull nor too freaky |
statements | follow the advice | responsible | too boring nor too shocking |
methods are clear | give me advice | supportive | balance between human and machine |
keeps promises | rely on the advice | pleasant | neither too real nor too synthetic |
protect human | function successfully | join our team | infantile like |
openly communicate | clearly communicate | aggressive | neither too humanoid nor too robotic |
perform as instructed | frequent maintenance | | neither too living nor too inanimate
obey order | better than a novice human user | |
a competitor for job | provide feedback | |
hacked easily | meet the need of the mission | |
| provide appropriate information | |
References
- Zhao, S. Humanoid Social Robots as a Medium of Communication. New Media Soc. 2006, 8, 401–419.
- Fraune, M.R.; Oisted, B.C.; Sembrowski, C.E.; Gates, K.A.; Krupp, M.M.; Šabanović, S. Effects of Robot-Human versus Robot-Robot Behavior and Entitativity on Anthropomorphism and Willingness to Interact. Comput. Hum. Behav. 2020, 105, 106220.
- Song, Y.; Luximon, Y. The Face of Trust: The Effect of Robot Face Ratio on Consumer Preference. Comput. Hum. Behav. 2021, 116, 106620.
- Song, Y.; Luximon, A.; Luximon, Y. The Effect of Facial Features on Facial Anthropomorphic Trustworthiness in Social Robots. Appl. Ergon. 2021, 94, 103420.
- Song, Y.; Tao, D.; Luximon, Y. In Robot We Trust? The Effect of Emotional Expressions and Contextual Cues on Anthropomorphic Trustworthiness. Appl. Ergon. 2023, 109, 103967.
- Xu, K. First Encounter with Robot Alpha: How Individual Differences Interact with Vocal and Kinetic Cues in Users’ Social Responses. New Media Soc. 2019, 21, 2522–2547.
- Bodó, B. Mediated Trust: A Theoretical Framework to Address the Trustworthiness of Technological Trust Mediators. New Media Soc. 2021, 23, 2668–2690.
- Walters, M.L. The Design Space for Robot Appearance and Behaviour for Social Robot Companions. Doctoral Dissertation, University of Hertfordshire, Hatfield, UK, 2008.
- Landwehr, J.R.; McGill, A.L.; Herrmann, A. It’s Got the Look: The Effect of Friendly and Aggressive “Facial” Expressions on Product Liking and Sales. J. Mark. 2011, 75, 132–146.
- Guzman, A.L.; Lewis, S.C. Artificial Intelligence and Communication: A Human–Machine Communication Research Agenda. New Media Soc. 2020, 22, 70–86.
- Hirokawa, E.; Suzuki, K.; Suzuki, K.; Nunez, E.; Hirokawa, M.; Suzuki, K. Design of a Huggable Social Robot with Affective Expressions Using Projected Images. Appl. Sci. 2018, 8, 2298.
- Marzi, T.; Righi, S.; Ottonello, S.; Cincotta, M.; Viggiano, M.P. Trust at First Sight: Evidence from ERPs. Soc. Cogn. Affect. Neurosci. 2014, 9, 63–72.
- Oosterhof, N.N.; Todorov, A. Shared Perceptual Basis of Emotional Expressions and Trustworthiness Impressions from Faces. Emotion 2009, 9, 128–133.
- Montealegre, A.; Jimenez-Leal, W. The Role of Trust in the Social Heuristics Hypothesis. PLoS ONE 2019, 14, e0216329.
- Stroessner, S.J.; Benitez, J. The Social Perception of Humanoid and Non-Humanoid Robots: Effects of Gendered and Machinelike Features. Int. J. Soc. Robot. 2019, 11, 305–315.
- Prakash, A.; Rogers, W.A. Why Some Humanoid Faces Are Perceived More Positively Than Others: Effects of Human-Likeness and Task. Int. J. Soc. Robot. 2015, 7, 309–331.
- Palinko, O.; Rea, F.; Sandini, G.; Sciutti, A. Eye Gaze Tracking for a Humanoid Robot. In Proceedings of the 2015 IEEE-RAS 15th International Conference on Humanoid Robots (Humanoids), Seoul, Republic of Korea, 3–5 November 2015; pp. 318–324.
- Dehn, D.M.; Van Mulken, S. Impact of Animated Interface Agents: A Review of Empirical Research. Int. J. Hum. Comput. Stud. 2000, 52, 1–22.
- Ghazali, A.S.; Ham, J.; Barakova, E.I.; Markopoulos, P. Effects of Robot Facial Characteristics and Gender in Persuasive Human-Robot Interaction. Front. Robot. AI 2018, 5, 73.
- Ghazali, A.S.; Ham, J.; Barakova, E.; Markopoulos, P. Assessing the Effect of Persuasive Robots Interactive Social Cues on Users’ Psychological Reactance, Liking, Trusting Beliefs and Compliance. Adv. Robot. 2019, 33, 325–337.
- Paradeda, R.B.; Hashemian, M.; Rodrigues, R.A.; Paiva, A. How Facial Expressions and Small Talk May Influence Trust in a Robot. In Social Robotics, Proceedings of the 8th International Conference, ICSR 2016, Kansas City, MO, USA, 1–3 November 2016; Lecture Notes in Computer Science Volume 9979; pp. 169–178.
- Mathur, M.B.; Reichling, D.B. Navigating a Social World with Robot Partners: A Quantitative Cartography of the Uncanny Valley. Cognition 2016, 146, 22–32.
- Maeng, A.; Aggarwal, P. Facing Dominance: Anthropomorphism and the Effect of Product Face Ratio on Consumer Preference. J. Consum. Res. 2018, 44, 1104–1122.
- Gunaratnam, Y.; Bell, V. How John Berger Changed Our Ways of Seeing Art. Indep. UK 2017.
- Fortunati, L.; Manganelli, A.M.; Cavallo, F.; Honsell, F. You Need to Show That You Are Not a Robot. New Media Soc. 2019, 21, 1859–1876.
- Decety, J.; Sommerville, J.A. Shared Representations between Self and Other: A Social Cognitive Neuroscience View. Trends Cogn. Sci. 2003, 7, 527–533.
- Atkinson, D.; Hancock, P.; Hoffman, R.R.; Lee, J.D.; Rovira, E.; Stokes, C.; Wagner, A.R. Trust in Computers and Robots: The Uses and Boundaries of the Analogy to Interpersonal Trust. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2012, 56, 303–307.
- Mayer, R.C.; Davis, J.H.; Schoorman, F.D. An Integrative Model of Organizational Trust. Acad. Manag. Rev. 1995, 20, 709–734.
- Song, Y.; Luximon, Y.; Luo, J. A Moderated Mediation Analysis of the Effect of Lettering Case and Color Temperature on Trustworthiness Perceptions and Investment Decisions. Int. J. Bank Mark. 2020, 38, 987–1005.
- Thatcher, J.B.; Harrison, D.; White Baker, E.; Arsal, R.E.; Roberts, N.H. The Role of Trust in Postadoption IT Exploration: An Empirical Examination of Knowledge Management Systems. IEEE Trans. Eng. Manag. 2011, 58, 56–70.
- Fosch-Villaronga, E.; Lutz, C.; Tamò-Larrieux, A. Gathering Expert Opinions for Social Robots’ Ethical, Legal, and Societal Concerns: Findings from Four International Workshops. Int. J. Soc. Robot. 2020, 12, 441–458.
- Chanseau, A.; Dautenhahn, K.; Koay, K.L.; Salem, M. Who Is in Charge? Sense of Control and Robot Anxiety in Human-Robot Interaction. In Proceedings of the 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), New York, NY, USA, 26–31 August 2016; pp. 743–748.
- Stuck, R.E.; Rogers, W.A. Older Adults’ Perceptions of Supporting Factors of Trust in a Robot Care Provider. J. Robot. 2018, 2018, 6519713.
- Brenton, H.; Gillies, M.; Ballin, D.; Chatting, D. The Uncanny Valley: Does It Exist and Is It Related to Presence. Presence Connect 2015, 8.
- Mathur, M.B.; Reichling, D.B. An Uncanny Game of Trust: Social Trustworthiness of Robots Inferred from Subtle Anthropomorphic Facial Cues. In Proceedings of the 2009 4th ACM/IEEE International Conference on Human-Robot Interaction (HRI), La Jolla, CA, USA, 11–13 March 2009; pp. 313–314.
- Ho, C.C.; MacDorman, K.F. Measuring the Uncanny Valley Effect: Refinements to Indices for Perceived Humanness, Attractiveness, and Eeriness. Int. J. Soc. Robot. 2017, 9, 129–139.
- Lee, J.J.; Lee, K.P. Facilitating Dynamics of Focus Group Interviews in East Asia: Evidence and Tools by Cross-Cultural Study. Int. J. Des. 2009, 3, 17–28.
- Jiao, J.; Chen, C.H.; Kerr, C. Customer Requirement Management in Product Development. Concurr. Eng. Res. Appl. 2006, 14, 169–171.
- Bhattacherjee, A. Individual Trust in Online Firms: Scale Development and Initial Test. J. Manag. Inf. Syst. 2002, 19, 211–241.
- Ratislavová, K.; Ratislav, J. Asynchronous Email Interview as a Qualitative Research Method in the Humanities. Hum. Aff. 2014, 24, 452–460.
- Johnson, P.A.; Sieber, R.E.; Magnien, N.; Ariwi, J. Automated Web Harvesting to Collect and Analyse User-Generated Content for Tourism. Curr. Issues Tour. 2012, 15, 293–299.
- Goucher-Lambert, K.; Cagan, J. Crowdsourcing Inspiration: Using Crowd Generated Inspirational Stimuli to Support Designer Ideation. Des. Stud. 2019, 61, 1–29.
- Shank, D.B. Using Crowdsourcing Websites for Sociological Research: The Case of Amazon Mechanical Turk. Am. Sociol. 2016, 47, 47–55.
- Lovett, M.; Bajaba, S.; Lovett, M.; Simmering, M.J. Data Quality from Crowdsourced Surveys: A Mixed Method Inquiry into Perceptions of Amazon’s Mechanical Turk Masters. Appl. Psychol. 2018, 67, 339–366.
- AMT Amazon Mechanical Turk. Available online: https://www.mturk.com/ (accessed on 30 March 2021).
- Song, Y.; Luximon, Y. Design for Sustainability: The Effect of Lettering Case on Environmental Concern from a Green Advertising Perspective. Sustainability 2019, 11, 1333.
- Khare, R.; Burger, J.D.; Aberdeen, J.S.; Tresner-Kirsch, D.W.; Corrales, T.J.; Hirschman, L.; Lu, Z. Scaling Drug Indication Curation through Crowdsourcing. Database 2015, 2015, bav016.
- Lutz, C.; Newlands, G. Consumer Segmentation within the Sharing Economy: The Case of Airbnb. J. Bus. Res. 2018, 88, 187–196.
- Deng, L. Deep Learning: Methods and Applications. Found. Trends® Signal Process. 2014, 7, 197–387.
- Goldberg, Y.; Levy, O. Word2vec Explained: Deriving Mikolov et al.’s Negative-Sampling Word-Embedding Method. arXiv 2014.
- Mikolov, T.; Chen, K.; Corrado, G.; Dean, J. Efficient Estimation of Word Representations in Vector Space. In Proceedings of the 1st International Conference on Learning Representations (ICLR 2013), Workshop Track, Scottsdale, AZ, USA, 2–4 May 2013.
- Verhelst, M.; Moons, B. Embedded Deep Neural Network Processing: Algorithmic and Processor Techniques Bring Deep Learning to IoT and Edge Devices. IEEE Solid-State Circuits Mag. 2017, 9, 55–65.
- Lan, Z.; Chen, M.; Goodman, S.; Gimpel, K.; Sharma, P.; Soricut, R. ALBERT: A Lite BERT for Self-Supervised Learning of Language Representations. arXiv 2019.
- Devlin, J.; Chang, M.W.; Lee, K.; Toutanova, K. BERT: Pre-Training of Deep Bidirectional Transformers for Language Understanding. arXiv 2018.
- Chen, T.; Xu, R.; He, Y.; Wang, X. Improving Sentiment Analysis via Sentence Type Classification Using BiLSTM-CRF and CNN. Expert Syst. Appl. 2017, 72, 221–230.
- Liu, Y.; Ott, M.; Goyal, N.; Du, J.; Joshi, M.; Chen, D.; Levy, O.; Lewis, M.; Zettlemoyer, L.; Stoyanov, V. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv 2019.
- Kaiser, H.F. An Index of Factorial Simplicity. Psychometrika 1974, 39, 31–36.
- Holgado-Tello, F.P.; Chacón-Moscoso, S.; Barbero-García, I.; Vila-Abad, E. Polychoric versus Pearson Correlations in Exploratory and Confirmatory Factor Analysis of Ordinal Variables. Qual. Quant. 2010, 44, 153–166.
- Bollen, K.A. Structural Equations with Latent Variables; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2014; ISBN 9781118619179.
- Norris, M.; Lecavalier, L. Evaluating the Use of Exploratory Factor Analysis in Developmental Disability Psychological Research. J. Autism Dev. Disord. 2010, 40, 8–20.
- Rietjens, S. Qualitative Data Analysis. In Routledge Handbook of Research Methods in Military Studies; Routledge: Abingdon, UK, 2015.
- Kiss, T.; Strunk, J. Unsupervised Multilingual Sentence Boundary Detection. Comput. Linguist. 2006, 32, 485–525.
- Timoshenko, A.; Hauser, J.R. Identifying Customer Needs from User-Generated Content. Mark. Sci. 2019, 38, 1–20.
- Reimers, N.; Gurevych, I. Sentence-BERT: Sentence Embeddings Using Siamese BERT-Networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, Hong Kong, China, 3–7 November 2019; pp. 3973–3983.
- Xiong, C.; Hua, Z.; Lv, K.; Li, X. An Improved K-Means Text Clustering Algorithm by Optimizing Initial Cluster Centers. In Proceedings of the 2016 7th International Conference on Cloud Computing and Big Data, CCBD 2016, Macau, China, 16–18 November 2016; pp. 265–268.
- Marutho, D.; Hendra Handaka, S.; Wijaya, E.; Muljono. The Determination of Cluster Number at K-Mean Using Elbow Method and Purity Evaluation on Headline News. In Proceedings of the 2018 International Seminar on Application for Technology of Information and Communication: Creative Technology for Human Life, iSemantic 2018, Semarang, Indonesia, 21–22 September 2018; pp. 533–538.
- Syakur, M.A.; Khotimah, B.K.; Rochman, E.M.S.; Satoto, B.D. Integration K-Means Clustering Method and Elbow Method for Identification of the Best Customer Profile Cluster. IOP Conf. Ser. Mater. Sci. Eng. 2018, 336, 012017.
- Schaefer, K.E. Measuring Trust in Human Robot Interactions: Development of the “Trust Perception Scale-HRI”. In Robust Intelligence and Trust in Autonomous Systems; Springer: Boston, MA, USA, 2016; pp. 191–218.
- Hancock, P.A.; Billings, D.R.; Schaefer, K.E.; Chen, J.Y.C.; De Visser, E.J.; Parasuraman, R. A Meta-Analysis of Factors Affecting Trust in Human-Robot Interaction. Hum. Factors 2011, 53, 517–527.
- Tay, B.; Jung, Y.; Park, T. When Stereotypes Meet Robots: The Double-Edge Sword of Robot Gender and Personality in Human-Robot Interaction. Comput. Hum. Behav. 2014, 38, 75–84.
- Wheless, L.R.; Grotz, J. The Measurement of Trust and Its Relationship to Self-Disclosure. Hum. Commun. Res. 1977, 3, 250–257.
- Colquitt, J.A.; Scott, B.A.; LePine, J.A. Trust, Trustworthiness, and Trust Propensity: A Meta-Analytic Test of Their Unique Relationships with Risk Taking and Job Performance. J. Appl. Psychol. 2007, 92, 909–927.
- Yagoda, R.E.; Gillan, D.J. You Want Me to Trust a ROBOT? The Development of a Human-Robot Interaction Trust Scale. Int. J. Soc. Robot. 2012, 4, 235–248.
- Büttner, O.B.; Göritz, A.S. Perceived Trustworthiness of Online Shops. J. Consum. Behav. 2008, 7, 35–50.
- Walters, M.L.; Syrdal, D.S.; Dautenhahn, K.; te Boekhorst, R.; Koay, K.L. Avoiding the Uncanny Valley: Robot Appearance, Personality and Consistency of Behavior in an Attention-Seeking Home Scenario for a Robot Companion. Auton. Robot. 2008, 24, 159–178.
- Blijlevens, J.; Hekkert, P.; Leder, H.; Thurgood, C.; Chen, L.L.; Whitfield, T.W.A. The Aesthetic Pleasure in Design Scale: The Development of a Scale to Measure Aesthetic Pleasure for Designed Artifacts. Psychol. Aesthet. Creat. Arts 2017, 11, 86–98.
- Bloch, P.H.; Brunel, F.F.; Arnold, T.J. Individual Differences in the Centrality of Visual Product Aesthetics: Concept and Measurement. J. Consum. Res. 2003, 29, 551–565.
- Xie, Y.; DeVellis, R.F. Scale Development: Theory and Applications, 16th ed.; Sage Publications: Thousand Oaks, CA, USA, 1992; Volume 21, ISBN 9781506341569.
- Zhang, J.; Luximon, Y.; Song, Y. The Role of Consumers’ Perceived Security, Perceived Control, Interface Design Features, and Conscientiousness in Continuous Use of Mobile Payment Services. Sustainability 2019, 11, 6843.
- Fornell, C.; Larcker, D.F. Structural Equation Models with Unobservable Variables and Measurement Error: Algebra and Statistics. J. Mark. Res. 1981, 18, 382–388.
- Hoorn, J.F. The Handbook of the Psychology of Communication Technology; Wiley: Hoboken, NJ, USA, 2015.
- Hoorn, J.F. Theory of Robot Communication: II. Befriending a Robot over Time. Int. J. Humanoid Robot. 2018, 17, 2502572.
- Frey, J.H.; Fontana, A. The Group Interview in Social Research. Soc. Sci. J. 1991, 28, 175–187.
- Mukhamediev, R.I.; Popova, Y.; Kuchin, Y.; Zaitseva, E.; Kalimoldayev, A.; Symagulov, A.; Levashenko, V.; Abdoldina, F.; Gopejenko, V.; Yakunin, K.; et al. Review of Artificial Intelligence and Machine Learning Technologies: Classification, Restrictions, Opportunities and Challenges. Mathematics 2022, 10, 2552.
- Saranya, A.; Subhashini, R. A Systematic Review of Explainable Artificial Intelligence Models and Applications: Recent Developments and Future Trends. Decis. Anal. J. 2023, 7, 100230.
| Ethics Concern (5) | Capability (4) |
|---|---|
| EC1. This robot does not look evil | CAP1. This robot looks competent in its work |
| EC2. This robot looks as if its creator is not intending to harm humanity | CAP2. This robot looks like it can perform its duties in an efficient manner |
| EC3. The designer has ethically programmed this robot | CAP3. This robot looks like it can be successful in the matter it is programmed to do |
| EC4. This robot seems to act following its program | CAP4. This robot looks like it can provide appropriate information |
| EC5. This robot seems reasonable when interacting with a human | |

| Positive Affect (4) | Anthropomorphism (4) |
|---|---|
| AFF1. This robot looks kind | AN1. This robot face looks neither too living nor too inanimate |
| AFF2. This robot looks cute | AN2. This robot face looks neither too humanoid nor too robotic |
| AFF3. This robot looks considerate | AN3. This robot face looks neither too real nor too synthetic |
| AFF4. This robot looks like it cares about my welfare | AN4. This robot face strikes a balance between a human-like face and a machine-like face |
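The 17 items above are intended to be rated on a Likert-type response scale. The sketch below shows one way to score the instrument, assuming a numeric response format with subscale scores computed as item means; the response range, the pandas layout, and the overall composite are illustrative assumptions rather than a prescription from the paper.

```python
# Scoring sketch for the 17 FATSR items, assuming Likert-type responses
# stored one respondent per row with columns named after the item codes.
# Subscale scores are computed as the mean of their items.
import pandas as pd

SUBSCALES = {
    "EC":  [f"EC{i}" for i in range(1, 6)],   # Ethics Concern (5 items)
    "CAP": [f"CAP{i}" for i in range(1, 5)],  # Capability (4 items)
    "AFF": [f"AFF{i}" for i in range(1, 5)],  # Positive Affect (4 items)
    "AN":  [f"AN{i}" for i in range(1, 5)],   # Anthropomorphism (4 items)
}

def score_fatsr(responses: pd.DataFrame) -> pd.DataFrame:
    """Return per-respondent subscale means plus a simple overall mean."""
    scores = pd.DataFrame(index=responses.index)
    for name, items in SUBSCALES.items():
        scores[name] = responses[items].mean(axis=1)
    scores["FATSR_total"] = scores[list(SUBSCALES)].mean(axis=1)
    return scores

# Example: a single respondent answering 5 on every item
example = pd.DataFrame([{item: 5 for items in SUBSCALES.values() for item in items}])
print(score_fatsr(example))
```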
| Item | Capability | Ethics Concern | Anthropomorphism | Positive Affect |
|---|---|---|---|---|
| CAP1 | 0.848 | | | |
| CAP2 | 0.794 | | | |
| CAP3 | 0.766 | | | |
| CAP4 | 0.752 | | | |
| EC2 | | 0.752 | | |
| EC1 | | 0.696 | | |
| EC5 | | 0.653 | | |
| EC3 | | 0.635 | | |
| EC4 | | 0.615 | | |
| AN2 | | | 0.898 | |
| AN3 | | | 0.832 | |
| AN1 | | | 0.725 | |
| AN4 | | | 0.673 | |
| AFF3 | | | | 0.744 |
| AFF2 | | | | 0.743 |
| AFF1 | | | | 0.715 |
| AFF4 | | | | 0.692 |
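The loadings above summarize the exploratory factor analysis. As a rough illustration of how a four-factor solution of this kind can be obtained from raw item responses, the sketch below uses the Python factor_analyzer package with a Kaiser–Meyer–Olkin (KMO) adequacy check and an oblique (promax) rotation; the package, rotation method, and threshold comment are assumptions, not necessarily the software or settings used by the authors.

```python
# Rough EFA sketch: KMO sampling-adequacy check, then a four-factor
# solution with an oblique (promax) rotation, mirroring the four-factor
# structure reported above. Package and rotation choice are assumptions.
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_kmo

def run_efa(item_responses: pd.DataFrame, n_factors: int = 4) -> pd.DataFrame:
    """Return an item-by-factor loading matrix after promax rotation."""
    kmo_per_item, kmo_total = calculate_kmo(item_responses)
    print(f"Overall KMO = {kmo_total:.2f}")  # values above ~0.6 are commonly deemed adequate

    fa = FactorAnalyzer(n_factors=n_factors, rotation="promax")
    fa.fit(item_responses)
    return pd.DataFrame(
        fa.loadings_,
        index=item_responses.columns,
        columns=[f"F{i + 1}" for i in range(n_factors)],
    )

# Usage, assuming a DataFrame `df` holding the 17 item columns:
# loadings = run_efa(df)
# print(loadings.round(2))
```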
| Construct | CR | AVE | MSV | EC | CAP | AN | AFF |
|---|---|---|---|---|---|---|---|
| Ethics Concern (EC) | 0.89 | 0.63 | 0.63 | 0.79 | | | |
| Capability (CAP) | 0.92 | 0.74 | 0.46 | 0.68 *** | 0.86 | | |
| Anthropomorphism (AN) | 0.90 | 0.69 | 0.31 | 0.51 *** | 0.36 *** | 0.83 | |
| Positive Affect (AFF) | 0.93 | 0.76 | 0.63 | 0.79 *** | 0.62 *** | 0.56 *** | 0.87 |
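Note that the diagonal entries in the EC–CAP–AN–AFF columns (0.79, 0.86, 0.83, 0.87) are the square roots of the corresponding AVE values, as expected under the Fornell–Larcker criterion. For illustration only, the sketch below computes composite reliability (CR) and AVE from standardized loadings using the standard formulas; the example loadings are hypothetical, not the paper’s estimates.

```python
# Composite reliability (CR) and average variance extracted (AVE) from
# standardized CFA loadings, using the standard formulas:
#   CR  = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)
#   AVE = mean of the squared loadings
# The loadings below are hypothetical values for demonstration.
from math import sqrt

def cr_ave(loadings: list[float]) -> tuple[float, float]:
    sum_l = sum(loadings)
    sum_err = sum(1 - l ** 2 for l in loadings)  # error variance per indicator
    cr = sum_l ** 2 / (sum_l ** 2 + sum_err)
    ave = sum(l ** 2 for l in loadings) / len(loadings)
    return cr, ave

example_loadings = [0.85, 0.86, 0.87, 0.86]  # hypothetical loadings for one construct
cr, ave = cr_ave(example_loadings)
print(f"CR = {cr:.2f}, AVE = {ave:.2f}, sqrt(AVE) = {sqrt(ave):.2f}")
```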