Affective Design Analysis of Explainable Artificial Intelligence (XAI): A User-Centric Perspective
Abstract
1. Introduction
- Determine how an end-user calibrates trust from XAI.
- Identify the factors that can significantly change how trust can be calibrated.
- Examine possible moderating factors that can affect the calibration process.
- Assess how external factors moderate the effect of XAI in the calibration process.
2. Review of Related Literature
2.1. Trust Calibration
2.1.1. Cognitive Trust Calibration
2.1.2. Affective Trust Calibration
2.1.3. Synthesis for Trust Calibration
2.2. Emotional or Affective Design
2.2.1. Three Levels of Processing
2.2.2. Four Pleasures
2.2.3. Framework for Affective Customer Needs
2.2.4. Synthesis of Emotional or Affective Design
2.3. Proposed Model and Hypothesis
- Being the first study to verify how trust calibration from XAI happens through the lens of the end-user.
- Valuing the importance of a user-centered approach, this study sheds light on the user’s view of XAI design composition and its perceived importance, which helps explain and potentially build theories for XAI trust research.
- Based on the results, the study generates insights into how XAI can be designed to leverage the effects of emotions.
- By clarifying the dynamics of external factors, better situational use of XAI can be supported.
3. Materials and Methods
3.1. Participants
3.2. Measurements
3.2.1. Online Questionnaire
3.2.2. Experiment Testbed
3.3. Procedure
3.3.1. Pre-Experiment Onboarding
3.3.2. Main Experiment
3.3.3. Post-Experiment Analysis
3.4. Technique of Analysis
4. Results
4.1. Data Screening
4.2. Exploratory Factor Analysis
4.3. Confirmatory Factor Analysis
4.4. Structural Equation Modelling
4.4.1. Mediation Effect Analysis
4.4.2. Direct Effect Analysis
- Example-based explanations increase interestingly surprised and trusting emotions as well as perceived usefulness, while feature- and rule-based explanations decrease them.
- Robotic, logic-driven communication increases fearfully dismayed and anxiously suspicious emotions and decreases perceived usefulness, while humanized communication has the opposite effect.
- The presence of supplementary information decreases fearfully dismayed emotions, while its absence increases them.
4.4.3. Moderation Effect Analysis
5. Discussion
5.1. Objective 1: Confirmation of Affective Trust Calibration for XAI
5.2. Objective 2: Effect of Different XAI Designs and Importance of End-User Centric Approach
5.3. Objective 3: External Factors Delimiting XAI Effect
6. Conclusions
6.1. Implications
6.2. Limitations and Direction for Future Research
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Lu, Y. Artificial Intelligence: A Survey on Evolution, Models, Applications and Future Trends. J. Manag. Anal. 2019, 6, 1–29.
- Jordan, M.I.; Mitchell, T.M. Machine Learning: Trends, Perspectives, and Prospects. Science 2015, 349, 255–260.
- Rai, A. Explainable AI: From Black Box to Glass Box. J. Acad. Mark. Sci. 2020, 48, 137–141.
- Doshi-Velez, F.; Kim, B. Towards A Rigorous Science of Interpretable Machine Learning. arXiv 2017, arXiv:1702.08608.
- Castelvecchi, D. Can We Open the Black Box of AI? Nat. News 2016, 538, 4.
- Schmidt, P.; Biessmann, F. Quantifying Interpretability and Trust in Machine Learning Systems. arXiv 2019, arXiv:1901.08558.
- Kliegr, T.; Bahník, Š.; Fürnkranz, J. A Review of Possible Effects of Cognitive Biases on Interpretation of Rule-Based Machine Learning Models. Artif. Intell. 2021, 295, 103458.
- Linardatos, P.; Papastefanopoulos, V.; Kotsiantis, S. Explainable AI: A Review of Machine Learning Interpretability Methods. Entropy 2020, 23, 18.
- Weitz, K.; Hassan, T.; Schmid, U.; Garbas, J.-U. Deep-Learned Faces of Pain and Emotions: Elucidating the Differences of Facial Expressions with the Help of Explainable AI Methods. TM Tech. Mess. 2019, 86, 404–412.
- Preece, A. Asking ‘Why’ in AI: Explainability of Intelligent Systems—Perspectives and Challenges. Intell. Sys. Acc. Fin. Manag. 2018, 25, 63–72.
- Venkatesh, V. Adoption and Use of AI Tools: A Research Agenda Grounded in UTAUT. Ann. Oper. Res. 2022, 308, 641–652.
- Chowdhary, K.R. Fundamentals of Artificial Intelligence; Springer: New Delhi, India, 2020; ISBN 978-81-322-3970-3.
- Lewis, M.; Li, H.; Sycara, K. Deep Learning, Transparency, and Trust in Human Robot Teamwork. In Trust in Human-Robot Interaction; Elsevier: Amsterdam, The Netherlands, 2021; pp. 321–352. ISBN 978-0-12-819472-0.
- Savage, N. Breaking into the Black Box of Artificial Intelligence. Nature 2022.
- Mohseni, S.; Zarei, N.; Ragan, E.D. A Multidisciplinary Survey and Framework for Design and Evaluation of Explainable AI Systems. ACM Trans. Interact. Intell. Syst. 2021, 11, 1–45.
- Barredo Arrieta, A.; Díaz-Rodríguez, N.; Del Ser, J.; Bennetot, A.; Tabik, S.; Barbado, A.; Garcia, S.; Gil-Lopez, S.; Molina, D.; Benjamins, R.; et al. Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI. Inf. Fusion 2020, 58, 82–115.
- Singh, A.; Sengupta, S.; Lakshminarayanan, V. Explainable Deep Learning Models in Medical Image Analysis. J. Imaging 2020, 6, 52.
- Miller, T.; Howe, P.; Sonenberg, L. Explainable AI: Beware of Inmates Running the Asylum Or: How I Learnt to Stop Worrying and Love the Social and Behavioural Sciences. arXiv 2017, arXiv:1712.00547.
- Lopes, P.; Silva, E.; Braga, C.; Oliveira, T.; Rosado, L. XAI Systems Evaluation: A Review of Human and Computer-Centred Methods. Appl. Sci. 2022, 12, 9423.
- Alicioglu, G.; Sun, B. A Survey of Visual Analytics for Explainable Artificial Intelligence Methods. Comput. Graph. 2022, 102, 502–520.
- Zhang, Z.; Zhao, L.; Yang, T. Research on the Application of Artificial Intelligence in Image Recognition Technology. J. Phys. Conf. Ser. 2021, 1992, 032118.
- Arun, N.; Gaw, N.; Singh, P.; Chang, K.; Aggarwal, M.; Chen, B.; Hoebel, K.; Gupta, S.; Patel, J.; Gidwani, M.; et al. Assessing the Trustworthiness of Saliency Maps for Localizing Abnormalities in Medical Imaging. Radiol. Artif. Intell. 2021, 3, e200267.
- Zhang, J.; Chao, H.; Dasegowda, G.; Wang, G.; Kalra, M.K.; Yan, P. Overlooked Trustworthiness of Saliency Maps. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2022; Wang, L., Dou, Q., Fletcher, P.T., Speidel, S., Li, S., Eds.; Lecture Notes in Computer Science; Springer Nature: Cham, Switzerland, 2022; Volume 13433, pp. 451–461. ISBN 978-3-031-16436-1.
- Adadi, A.; Berrada, M. Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI). IEEE Access 2018, 6, 52138–52160.
- Haque, A.B.; Islam, A.K.M.N.; Mikalef, P. Explainable Artificial Intelligence (XAI) from a User Perspective: A Synthesis of Prior Literature and Problematizing Avenues for Future Research. Technol. Forecast. Soc. Chang. 2023, 186, 122120.
- Shin, D. The Effects of Explainability and Causability on Perception, Trust, and Acceptance: Implications for Explainable AI. Int. J. Hum. Comput. Stud. 2021, 146, 102551.
- Rudin, C.; Radin, J. Why Are We Using Black Box Models in AI When We Don’t Need To? A Lesson From An Explainable AI Competition. Harv. Data Sci. Rev. 2019, 1.
- Förster, M.; Hühn, P.; Klier, M.; Kluge, K. User-Centric Explainable AI: Design and Evaluation of an Approach to Generate Coherent Counterfactual Explanations for Structured Data. J. Decis. Syst. 2022, 1–32.
- Ferreira, J.J.; Monteiro, M. Designer-User Communication for XAI: An Epistemological Approach to Discuss XAI Design. arXiv 2021, arXiv:2105.07804.
- Silva, A.; Schrum, M.; Hedlund-Botti, E.; Gopalan, N.; Gombolay, M. Explainable Artificial Intelligence: Evaluating the Objective and Subjective Impacts of XAI on Human-Agent Interaction. Int. J. Hum. Comput. Interact. 2022, 1–15.
- Cirqueira, D.; Helfert, M.; Bezbradica, M. Towards Design Principles for User-Centric Explainable AI in Fraud Detection. In Artificial Intelligence in HCI; Degen, H., Ntoa, S., Eds.; Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2021; Volume 12797, pp. 21–40. ISBN 978-3-030-77771-5.
- Chari, S.; Seneviratne, O.; Gruen, D.M.; Foreman, M.A.; Das, A.K.; McGuinness, D.L. Explanation Ontology: A Model of Explanations for User-Centered AI. In The Semantic Web—ISWC 2020; Pan, J.Z., Tamma, V., d’Amato, C., Janowicz, K., Fu, B., Polleres, A., Seneviratne, O., Kagal, L., Eds.; Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2020; Volume 12507, pp. 228–243. ISBN 978-3-030-62465-1.
- Chromik, M.; Butz, A. Human-XAI Interaction: A Review and Design Principles for Explanation User Interfaces. In Human-Computer Interaction—INTERACT 2021; Ardito, C., Lanzilotti, R., Malizia, A., Petrie, H., Piccinno, A., Desolda, G., Inkpen, K., Eds.; Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2021; Volume 12933, pp. 619–640. ISBN 978-3-030-85615-1.
- Liao, Q.V.; Varshney, K.R. Human-Centered Explainable AI (XAI): From Algorithms to User Experiences. arXiv 2021, arXiv:2110.10790.
- Gan, Y.; Ji, Y.; Jiang, S.; Liu, X.; Feng, Z.; Li, Y.; Liu, Y. Integrating Aesthetic and Emotional Preferences in Social Robot Design: An Affective Design Approach with Kansei Engineering and Deep Convolutional Generative Adversarial Network. Int. J. Ind. Ergon. 2021, 83, 103128.
- Nawaratne, R. Human-Centric Product Design with Kansei Engineering and Artificial Intelligence. Available online: https://towardsdatascience.com/human-centric-product-design-with-kansei-engineering-and-artificial-intelligence-f38cb3c0f26d (accessed on 21 December 2021).
- Wang, D.; Yang, Q.; Abdul, A.; Lim, B.Y. Designing Theory-Driven User-Centric Explainable AI. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, Scotland, UK, 2 May 2019; ACM: New York, NY, USA, 2019; pp. 1–15.
- Lee, J.D.; See, K.A. Trust in Automation: Designing for Appropriate Reliance. Hum. Factors 2004, 46, 50–80.
- Hoff, K.A.; Bashir, M. Trust in Automation: Integrating Empirical Evidence on Factors That Influence Trust. Hum. Factors 2015, 57, 407–434.
- Kramer, R.M. Trust and Distrust in Organizations: Emerging Perspectives, Enduring Questions. Annu. Rev. Psychol. 1999, 50, 569–598.
- Lewis, J.D.; Weigert, A. Trust as a Social Reality. Soc. Forces 1985, 63, 967.
- McAllister, D.J. Affect- and Cognition-Based Trust as Foundations for Interpersonal Cooperation in Organizations. Acad. Manag. J. 1995, 38, 24–59.
- Panksepp, J. Affective Consciousness: Core Emotional Feelings in Animals and Humans. Conscious. Cogn. 2005, 14, 30–80.
- Schwarz, N.; Bless, H.; Bohner, G. Mood and Persuasion: Affective States Influence the Processing of Persuasive Communications. In Advances in Experimental Social Psychology; Elsevier: Amsterdam, The Netherlands, 1991; Volume 24, pp. 161–199. ISBN 978-0-12-015224-7.
- Forlizzi, J.; Battarbee, K. Understanding Experience in Interactive Systems. In Proceedings of the 2004 Conference on Designing Interactive Systems: Processes, Practices, Methods, and Techniques—DIS ’04, Cambridge, MA, USA, 1–4 August 2004; ACM Press: New York, NY, USA, 2004; p. 261.
- Van Gorp, T.; Adams, E. Design for Emotion; Morgan Kaufmann: Waltham, MA, USA, 2012; ISBN 978-0-12-386531-1.
- Madsen, M.; Gregor, S. Measuring Human-Computer Trust; Australasian Association for Information Systems: Wales, Australia, 2000; Volume 53, pp. 6–8.
- Myers, C.D.; Tingley, D. The Influence of Emotion on Trust. Polit. Anal. 2016, 24, 492–500.
- Jin, N.; Merkebu, J. The Role of Employee Attractiveness and Positive Emotion in Upscale Restaurants. Anatolia 2015, 26, 284–297.
- Jensen, T.; Khan, M.M.H.; Albayram, Y.; Fahim, M.A.A.; Buck, R.; Coman, E. Anticipated Emotions in Initial Trust Evaluations of a Drone System Based on Performance and Process Information. Int. J. Hum. Comput. Interact. 2020, 36, 316–325.
- Albayram, Y.; Khan, M.M.H.; Jensen, T.; Buck, R.; Coman, E. The Effects of Risk and Role on Users’ Anticipated Emotions in Safety-Critical Systems. In Engineering Psychology and Cognitive Ergonomics; Harris, D., Ed.; Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2018; Volume 10906, pp. 369–388. ISBN 978-3-319-91121-2.
- Guerdan, L.; Raymond, A.; Gunes, H. Toward Affective XAI: Facial Affect Analysis for Understanding Explainable Human-AI Interactions. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Nashville, TN, USA, 19–25 June 2021; pp. 3796–3805.
- Phillips, R.; Madhavan, P. The Role of Affective Valence and Task Uncertainty in Human-Automation Interaction. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2013, 57, 354–358.
- Gompei, T.; Umemuro, H. Factors and Development of Cognitive and Affective Trust on Social Robots. In Social Robotics; Ge, S.S., Cabibihan, J.-J., Salichs, M.A., Broadbent, E., He, H., Wagner, A.R., Castro-González, Á., Eds.; Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2018; Volume 11357, pp. 45–54. ISBN 978-3-030-05203-4.
- Buck, R.; Khan, M.; Fagan, M.; Coman, E. The User Affective Experience Scale: A Measure of Emotions Anticipated in Response to Pop-Up Computer Warnings. Int. J. Hum. Comput. Interact. 2018, 34, 25–34.
- Bernardo, E.; Tangsoc, J. Explanatory Modelling of Factors Influencing Adoption of Smartphone Shopping Application. IEMS 2019, 18, 647–657.
- Chen, Q.Q.; Park, H.J. How Anthropomorphism Affects Trust in Intelligent Personal Assistants. Ind. Manag. Data Syst. 2021, 121, 2722–2737.
- Helander, M.G.; Khalid, H.M. Affective and Pleasurable Design. In Handbook of Human Factors and Ergonomics; Salvendy, G., Ed.; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2006; pp. 543–572. ISBN 978-0-470-04820-7.
- Khalid, H.M. Embracing Diversity in User Needs for Affective Design. Appl. Ergon. 2006, 37, 409–418.
- Lottridge, D.; Chignell, M.; Jovicic, A. Affective Interaction: Understanding, Evaluating, and Designing for Human Emotion. Rev. Hum. Factors Ergon. 2011, 7, 197–217.
- Gasah, M.; Mat Zain, N.H.; Baharum, A. An Approach in Creating Positive Emotion for Children’s e-Learning Based on User Interface Design. IJEECS 2019, 13, 1267.
- Isbister, K. How Games Move Us: Emotion by Design; Playful Thinking; MIT Press: Cambridge, MA, USA, 2016; ISBN 978-0-262-03426-5.
- Gutierrez, A.M.J.; Chiu, A.S.F.; Seva, R. A Proposed Framework on the Affective Design of Eco-Product Labels. Sustainability 2020, 12, 3234.
- Dy, A.K.; Lazo, M.; Santos, A.G.; Seva, R. Affective Trash Bin Signage to Promote Waste Segregation. In Proceedings of the 21st Congress of the International Ergonomics Association (IEA 2021), Online, 13–18 June 2021; Black, N.L., Neumann, W.P., Noy, I., Eds.; Lecture Notes in Networks and Systems; Springer International Publishing: Cham, Switzerland, 2022; Volume 223, pp. 20–30. ISBN 978-3-030-74613-1.
- Norman, D.A. Emotional Design: Why We Love (or Hate) Everyday Things; Basic Books: New York, NY, USA, 2004; ISBN 978-0-465-05135-9.
- Jordan, P.W. Designing Pleasurable Products; CRC Press: Boca Raton, FL, USA, 2000; ISBN 978-1-135-73411-4.
- Khalid, H.M.; Helander, M.G. A Framework for Affective Customer Needs in Product Design. Theor. Issues Ergon. Sci. 2004, 5, 27–42.
- Bernardo, E.; Seva, R. Explainable Artificial Intelligence (XAI) Emotions Set. Appl. Sci. 2022, submitted.
- Albayram, Y.; Jensen, T.; Khan, M.M.H.; Buck, R.; Coman, E. Investigating the Effect of System Reliability, Risk, and Role on Users’ Emotions and Attitudes toward a Safety-Critical Drone System. Int. J. Hum. Comput. Interact. 2019, 35, 761–772.
- Du, N.; Zhou, F.; Pulver, E.M.; Tilbury, D.M.; Robert, L.P.; Pradhan, A.K.; Yang, X.J. Examining the Effects of Emotional Valence and Arousal on Takeover Performance in Conditionally Automated Driving. Transp. Res. Part C Emerg. Technol. 2020, 112, 78–87.
- Jian, J.-Y.; Bisantz, A.M.; Drury, C.G. Foundations for an Empirically Determined Scale of Trust in Automated Systems. Int. J. Cogn. Ergon. 2000, 4, 53–71.
- Kline, R.B. Principles and Practice of Structural Equation Modeling, 4th ed.; Methodology in the Social Sciences; The Guilford Press: New York, NY, USA, 2016; ISBN 978-1-4625-2335-1.
- Westland, C. Lower Bounds on Sample Size in Structural Equation Modeling. Electron. Commer. Res. Appl. 2010, 9, 476–487.
- Cohen, J. Statistical Power Analysis for the Behavioral Sciences; Routledge: London, UK, 1988; ISBN 978-0-203-77158-7.
- Angold, A.; Costello, E.J. Short Mood and Feelings Questionnaire; APA PsycNet: Washington, DC, USA, 1987.
- Frazier, M.L.; Johnson, P.D.; Fainshmidt, S. Development and Validation of a Propensity to Trust Scale. J. Trust Res. 2013, 3, 76–97.
- Lowry, P.B.; Twyman, N.W.; Pickard, M.; Jenkins, J.L.; Bui, Q. “Neo” Proposing the Affect-Trust Infusion Model (ATIM) to Explain and Predict the Influence of High and Low Affect Infusion on Web Vendor Trust. Inf. Manag. 2014, 51, 579–594.
- Hsu, S.-H.; Chen, W.; Hsieh, M. Robustness Testing of PLS, LISREL, EQS and ANN-Based SEM for Measuring Customer Satisfaction. Total Qual. Manag. Bus. Excell. 2006, 17, 355–372.
- Henseler, J.; Ringle, C.M.; Sinkovics, R.R. The Use of Partial Least Squares Path Modeling in International Marketing. In Advances in International Marketing; Sinkovics, R.R., Ghauri, P.N., Eds.; Emerald Group Publishing Limited: Bingley, UK, 2009; Volume 20, pp. 277–319. ISBN 978-1-84855-468-9.
- Chin, W.W. The Partial Least Squares Approach for Structural Equation Modeling. In Modern Methods for Business Research; Methodology for Business and Management; Lawrence Erlbaum Associates Publishers: Mahwah, NJ, USA, 1998; pp. 295–336. ISBN 0-8058-2677-7.
- Yang, X.J.; Unhelkar, V.V.; Li, K.; Shah, J.A. Evaluating Effects of User Experience and System Transparency on Trust in Automation. In Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, Vienna, Austria, 6 March 2017; ACM: New York, NY, USA, 2017; pp. 408–416.
- Vogt, W.P.; Johnson, R.B. The SAGE Dictionary of Statistics & Methodology: A Nontechnical Guide for the Social Sciences, 5th ed.; SAGE: Los Angeles, CA, USA, 2016; ISBN 978-1-4833-8176-3.
- Hair, J.F. (Ed.) Multivariate Data Analysis; Prentice Hall: Upper Saddle River, NJ, USA, 1998; ISBN 978-0-13-894858-0.
- Taber, K.S. The Use of Cronbach’s Alpha When Developing and Reporting Research Instruments in Science Education. Res. Sci. Educ. 2018, 48, 1273–1296.
- Hu, L.; Bentler, P.M. Cutoff Criteria for Fit Indexes in Covariance Structure Analysis: Conventional Criteria versus New Alternatives. Struct. Equ. Model. A Multidiscip. J. 1999, 6, 1–55.
- Schreiber, J.B.; Nora, A.; Stage, F.K.; Barlow, E.A.; King, J. Reporting Structural Equation Modeling and Confirmatory Factor Analysis Results: A Review. J. Educ. Res. 2006, 99, 323–338.
- Cangur, S.; Ercan, I. Comparison of Model Fit Indices Used in Structural Equation Modeling Under Multivariate Normality. J. Mod. Appl. Stat. Meth. 2015, 14, 152–167.
- Baron, R.M.; Kenny, D.A. The Moderator–Mediator Variable Distinction in Social Psychological Research: Conceptual, Strategic, and Statistical Considerations. J. Personal. Soc. Psychol. 1986, 51, 1173–1182.
- Shi, D.; Lee, T.; Maydeu-Olivares, A. Understanding the Model Size Effect on SEM Fit Indices. Educ. Psychol. Meas. 2019, 79, 310–334.
- Petty, R.E.; Cacioppo, J.T. The Elaboration Likelihood Model of Persuasion. In Advances in Experimental Social Psychology; Elsevier: Amsterdam, The Netherlands, 1986; Volume 19, pp. 123–205. ISBN 978-0-12-015219-3.
Component | Variable | # of Levels | Configurations
---|---|---|---
XAI Design | Explanation Form | 2 | Feature, Example
 | Communication Style | 2 | Humanized, Robotic
 | Supplementary Information | 2 | With, Without
AI Features | AI Reliability | 2 | Low (70%), High (90%)
 | Learning Capability | 2 | Yes, No
 | Brand | 2 | Google, Generic
 | Time Experience | 2 | Day 1, Day 2
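The factorial structure above can be enumerated directly; the minimal Python sketch below does so, with variable names of our own choosing (the paper does not state how the 2^7 crossings were assigned to participants, so this is an illustration of the design space, not the administration scheme).

```python
from itertools import product

# Illustrative enumeration of the seven two-level variables in the table
# above. Keys are our own labels, not the paper's testbed identifiers.
factors = {
    "explanation_form": ["Feature", "Example"],
    "communication_style": ["Humanized", "Robotic"],
    "supplementary_information": ["With", "Without"],
    "ai_reliability": ["Low (70%)", "High (90%)"],
    "learning_capability": ["Yes", "No"],
    "brand": ["Google", "Generic"],
    "time_experience": ["Day 1", "Day 2"],
}

# Full crossing of all seven two-level variables.
conditions = [dict(zip(factors, levels)) for levels in product(*factors.values())]
print(len(conditions))  # 2**7 = 128 candidate configurations
```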
Type | Count | % | Type | Count | %
---|---|---|---|---|---
Age | | | Educational Attainment | |
Gen Z (18 to 23) | 33 | 16.34% | Elementary | 0 | 0.00%
Younger Millennial (24 to 30) | 54 | 26.73% | High School | 12 | 5.94%
Older Millennial (31 to 39) | 64 | 31.68% | College | 139 | 68.81%
Younger GenX (40 to 47) | 36 | 17.82% | Masters | 25 | 12.38%
Older GenX (48 to 55) | 10 | 4.95% | PhD | 7 | 3.47%
Younger Boomer (56 to 65) | 5 | 2.48% | Technical Vocational | 19 | 9.41%
Gender | | | Occupation | |
Male | 82 | 40.59% | Student | 56 | 27.72%
Female | 90 | 44.55% | Employed (Full-Time) | 81 | 40.10%
Prefer not to say | 30 | 14.85% | Employed (Part-Time) | 13 | 6.44%
 | | | Unemployed | 9 | 4.46%
AI Experience | | | Freelance/Contractor | 20 | 9.90%
Laggards (Less than 1 year) | 3 | 1.49% | Self-employed | 22 | 10.89%
Late Majority (1–2 years) | 3 | 1.49% | Retired | 1 | 0.50%
Early Majority (3–4 years) | 15 | 7.43% | | |
Early Adopters (4–5 years) | 36 | 17.82% | | |
Innovators (More than 5 years) | 145 | 71.78% | | |
Consistency Measure | Measurement
---|---
Kaiser–Meyer–Olkin Measure | 0.919
Bartlett’s Test of Sphericity |
Approx. Chi-Square | 7055.358
df | 253
Sig. | <0.001 a
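Both sampling-adequacy statistics can be recomputed from the raw item responses. Below is a minimal Python sketch using the factor_analyzer package; the file name is a placeholder for the participants-by-items response matrix.

```python
import pandas as pd
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity,
    calculate_kmo,
)

# Placeholder file: a participants x 23-item response matrix.
responses = pd.read_csv("questionnaire_items.csv")

chi_square, p_value = calculate_bartlett_sphericity(responses)
kmo_per_item, kmo_overall = calculate_kmo(responses)

# The paper reports KMO = 0.919 and chi-square = 7055.358 with df = 253
# (23 items -> 23 * 22 / 2 = 253 correlations), supporting factorability.
print(f"KMO = {kmo_overall:.3f}, chi2 = {chi_square:.3f}, p = {p_value:.4g}")
```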
Dimension | AI Anxiety | Incidental Emotion | Trust Disposition | Perceived Usefulness | Perceived Trust
---|---|---|---|---|---
AIAnxietySocioTechnicalQ6 | 0.970 | | | |
AIAnxietySocioTechnicalQ5 | 0.960 | | | |
AIAnxietySocioTechnicalQ4 | 0.957 | | | |
AIAnxietyLearningQ3 | 0.956 | | | |
AIAnxietyConfigurationQ7 | 0.954 | | | |
AIAnxietyLearningQ1 | 0.953 | | | |
AIAnxietyConfigurationQ9 | 0.947 | | | |
AIAnxietyLearningQ2 | 0.939 | | | |
AIAnxietyConfigurationQ8 | 0.928 | | | |
IncidentalEmotionQ1 | | 0.937 | | |
IncidentalEmotionQ2 | | 0.936 | | |
IncidentalEmotionQ4 | | 0.929 | | |
IncidentalEmotionQ3 | | 0.916 | | |
TrustDispositionQ4 | | | 0.987 | |
TrustDispositionQ2 | | | 0.979 | |
TrustDispositionQ3 | | | 0.978 | |
TrustDispositionQ1 | | | 0.978 | |
PercUsefulnessQ3 | | | | 0.868 |
PercUsefulnessQ2 | | | | 0.849 |
PercUsefulnessQ1 | | | | 0.759 |
TrustQ1 | | | | | 0.806
TrustQ2 | | | | | 0.761
TrustQ3 | | | | | 0.731
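A five-factor EFA matching this table can be sketched as follows. The promax rotation is our assumption (the extract does not state the rotation used), and the extracted factor order is not guaranteed, so columns must be matched to the construct labels by inspection.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

responses = pd.read_csv("questionnaire_items.csv")  # placeholder file

# Five factors to match the table; promax (oblique) rotation assumed,
# since affective constructs are typically allowed to correlate.
efa = FactorAnalyzer(n_factors=5, rotation="promax")
efa.fit(responses)

# Factor order depends on the solution; label columns after inspecting
# which items load where.
loadings = pd.DataFrame(efa.loadings_, index=responses.columns)
print(loadings.round(3))
```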
Dimension | CR | AVE | MSV | MaxR(H)
---|---|---|---|---
AI Anxiety | 0.988 | 0.904 | 0.101 | 0.989
Incidental Emotion | 0.983 | 0.935 | 0.445 | 0.983
Trust Disposition | 0.989 | 0.957 | 0.017 | 0.992
Perceived Usefulness | 0.832 | 0.630 | 0.106 | 0.944
Perceived Trust | 0.831 | 0.627 | 0.445 | 0.927
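Composite reliability (CR) and average variance extracted (AVE) follow directly from standardized loadings. The sketch below implements the standard formulas, using the Perceived Trust EFA loadings above as stand-ins; the table's values come from the CFA solution, so these only approximate them.

```python
import numpy as np

def composite_reliability(lam: np.ndarray) -> float:
    # CR = (sum lambda)^2 / ((sum lambda)^2 + sum(1 - lambda^2))
    return lam.sum() ** 2 / (lam.sum() ** 2 + (1.0 - lam**2).sum())

def average_variance_extracted(lam: np.ndarray) -> float:
    # AVE = mean of the squared standardized loadings
    return float(np.mean(lam**2))

# Perceived Trust items, taken from the EFA table (CFA loadings differ).
trust = np.array([0.806, 0.761, 0.731])
print(round(composite_reliability(trust), 3))       # ~0.81 vs. reported 0.831
print(round(average_variance_extracted(trust), 3))  # ~0.59 vs. reported 0.627
```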
Type | Indices | Estimate | Threshold
---|---|---|---
Absolute Fit | RMSEA | 0.039 | <0.06 [73]
 | SRMR | 0.045 | <0.08 [85]
Incremental Fit | CFI | 0.991 | >0.95 [86]
 | NFI | 0.961 | >0.95 [85]
Parsimonious Fit | χ2/df | 1.301 | 1 to 3 [85]
Type | Indices | Estimate | Threshold
---|---|---|---
Absolute Fit | RMSEA | 0.051 | <0.06 [73]
 | SRMR | 0.045 | <0.08 [87]
Incremental Fit | CFI | 0.988 | >0.95 [86]
 | NFI | 0.966 | >0.95 [85]
Parsimonious Fit | χ2/df | 1.524 | 1 to 3 [85]
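The threshold checks in these fit tables reduce to simple comparisons. The sketch below screens the SEM estimates from the second table against the cited cutoffs.

```python
# Cutoffs follow Hu & Bentler (1999) and the other sources cited above;
# values are the SEM estimates from the table.
thresholds = {
    "RMSEA":   lambda v: v < 0.06,
    "SRMR":    lambda v: v < 0.08,
    "CFI":     lambda v: v > 0.95,
    "NFI":     lambda v: v > 0.95,
    "chi2/df": lambda v: 1.0 <= v <= 3.0,
}
sem_fit = {"RMSEA": 0.051, "SRMR": 0.045, "CFI": 0.988,
           "NFI": 0.966, "chi2/df": 1.524}

for name, value in sem_fit.items():
    print(f"{name}: {value} -> {'ok' if thresholds[name](value) else 'poor'}")
```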
Group | From | Mediator | To | Std. Est. | p-Value | Mediated? a
---|---|---|---|---|---|---
Affective Trust | Explanation Form | Interestingly Surprised | Perceived Trust | 0.289 | 0.001 | Yes
 | | Trusting | | 0.163 | 0.001 | Yes
 | | Fearfully Dismayed | | −0.020 | 0.159 | No
 | | Anxiously Suspicious | | 0.001 | 0.523 | No
 | Communication Style | Interestingly Surprised | Perceived Trust | −0.015 | 0.489 | No
 | | Trusting | | −0.037 | 0.067 | No
 | | Fearfully Dismayed | | −0.108 | 0.015 | Yes
 | | Anxiously Suspicious | | −0.017 | 0.527 | No
 | Supplementary Information | Interestingly Surprised | Perceived Trust | 0.064 | 0.068 | No
 | | Trusting | | 0.078 | 0.046 | Yes
 | | Fearfully Dismayed | | 0.068 | 0.020 | Yes
 | | Anxiously Suspicious | | 0.005 | 0.475 | No
Affective Reliance | Interestingly Surprised | Perceived Trust | Reliance | 0.239 | 0.001 | Yes
 | Trusting | | | 0.171 | 0.001 | Yes
 | Fearfully Dismayed | | | −0.070 | 0.013 | Yes
 | Anxiously Suspicious | | | −0.010 | 0.509 | No
Cognitive Trust | Explanation Form | Perceived Usefulness | Perceived Trust | 0.023 | 0.009 | Yes
 | Communication Style | | | 0.211 | 0.008 | Yes
 | Supplementary Information | | | 0.010 | 0.036 | Yes
Cognitive Reliance | Perceived Usefulness | Perceived Trust | Reliance | −0.099 | 0.007 | Yes
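Mediation tests of this kind can be approximated outside AMOS with a percentile bootstrap of the indirect effect. The sketch below is a minimal OLS-based stand-in for the paper's covariance-based SEM estimation, not the authors' procedure; mediation is supported when the confidence interval excludes zero.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_indirect(x, m, y, n_boot=5000):
    """Percentile-bootstrap indirect effect a*b for a single mediator.

    x: coded design manipulation, m: mediator score (emotion or perceived
    usefulness), y: perceived trust. All arrays share the same length.
    """
    n = len(x)
    est = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)
        xs, ms, ys = x[idx], m[idx], y[idx]
        a = np.polyfit(xs, ms, 1)[0]                    # x -> m path
        design = np.column_stack([np.ones(n), xs, ms])  # y ~ 1 + x + m
        b = np.linalg.lstsq(design, ys, rcond=None)[0][2]
        est[i] = a * b
    lo, hi = np.percentile(est, [2.5, 97.5])
    return est.mean(), (lo, hi)  # mediated if (lo, hi) excludes 0
```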
Group | From | To | Std. Est. | p-Value | Supported a
---|---|---|---|---|---
Design to Emotions | Explanation Form (+Example, −Feature, Rule) | Interestingly Surprised | 0.530 | 0.001 | Yes
 | | Trusting | 0.419 | 0.002 | Yes
 | | Fearfully Dismayed | 0.126 | 0.339 | No
 | | Anxiously Suspicious | −0.032 | 0.806 | No
 | Communication Style (+Logic, −Human) | Interestingly Surprised | −0.070 | 0.488 | No
 | | Trusting | −0.257 | 0.081 | No
 | | Fearfully Dismayed | 1.822 | 0.001 | Yes
 | | Anxiously Suspicious | 1.783 | 0.001 | Yes b
 | Supplementary Information (+With, −Without) | Interestingly Surprised | 0.117 | 0.068 | No
 | | Trusting | 0.201 | 0.053 | No
 | | Fearfully Dismayed | −0.432 | 0.006 | Yes
 | | Anxiously Suspicious | −0.227 | 0.059 | No
Emotions to Trust | Interestingly Surprised | Perceived Trust | 0.545 | 0.001 | Yes
 | Trusting | | 0.390 | 0.001 | Yes
 | Fearfully Dismayed | | −0.158 | 0.017 | Yes
 | Anxiously Suspicious | | −0.024 | 0.539 | No
Design to Usefulness | Supplementary Information | Perceived Usefulness | 0.045 | 0.224 | No c
 | Communication Style | | −0.940 | 0.002 | Yes
 | Explanation Form | | 0.100 | 0.006 | Yes
Usefulness to Trust | Perceived Usefulness | Perceived Trust | 0.225 | 0.007 | Yes
Trust to Reliance | Perceived Trust | Reliance | 0.439 | 0.001 | Yes
Group | Moderation Model | DF | CMIN | p-Value | Moderated? a
---|---|---|---|---|---
Human Factors | AI Anxiety | 21 | 41.212 | 0.005 | Yes
 | Incidental Emotion | 21 | 63.314 | <0.001 | Yes
 | Trust Disposition | 21 | 21.940 | 0.403 | No
 | XAI Experience | 21 | 22.810 | 0.354 | No
AI Factors | AI Reliability | 21 | 73.497 | <0.001 | Yes
 | Learning Capability | 21 | 16.260 | 0.755 | No
 | Brand | 21 | 13.083 | 0.906 | No
 | Experience | 21 | 34.021 | 0.036 | Yes
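The moderation decision here is a chi-square difference test between the unconstrained and path-constrained multigroup models. The sketch below reproduces the table's p-values from its CMIN and DF columns.

```python
from scipy.stats import chi2

# CMIN is the chi-square difference between the unconstrained and
# constrained multigroup models, with DF = 21 constrained paths.
def moderation_p(cmin_diff: float, df_diff: int) -> float:
    return chi2.sf(cmin_diff, df_diff)

print(round(moderation_p(41.212, 21), 3))  # AI Anxiety: 0.005 -> moderated
print(round(moderation_p(21.940, 21), 3))  # Trust Disposition: 0.403 -> not
```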
Group | Moderation Model | Runs | CFI | SRMR | PClose | Fit? a
---|---|---|---|---|---|---
Human Factors | AI Anxiety | High AI Anxiety | 0.986 | 0.032 | 0.054 | Yes
 | | Low AI Anxiety | 1.000 | 0.021 | 0.754 | Yes
 | Incidental Emotion | Positive Incidental Emotion | 1.000 | 0.008 | 0.892 | Yes
 | | Negative Incidental Emotion | 0.994 | 0.028 | 0.283 | Yes
AI Factors | AI Reliability | High AI Reliability | 1.000 | 0.014 | 0.587 | Yes
 | | Low AI Reliability | 0.952 | 0.051 | 0.050 | Yes
 | Experience | Short Experience | 1.000 | 0.008 | 0.892 | Yes
 | | Long Experience | 0.994 | 0.028 | 0.283 | Yes
From | To | Est. | p-Value | Est. | p-Value | z-Score | p-Value a
---|---|---|---|---|---|---|---
 | | High AI Anxiety | | Low AI Anxiety | | |
Fearfully Dismayed | Perceived Trust | −0.059 | 0.044 | 0.021 | 0.331 | 2.195 | 0.014
Communication Style | Perceived Usefulness | −0.970 | <0.001 | −0.790 | <0.001 | 2.442 | 0.007
Perceived Trust | Reliance | 0.804 | <0.001 | 1.185 | <0.001 | 1.762 | 0.039
 | | Pos. Inci. Emotion | | Neg. Inci. Emotion | | |
Supplementary Information | Interestingly Surprised | −0.979 | 0.026 | 0.457 | 0.209 | 2.513 | 0.006
Supplementary Information | Trusting | −1.001 | 0.032 | 0.137 | 0.725 | 1.868 | 0.031
Trusting | Perceived Trust | −0.017 | 0.681 | 0.174 | <0.001 | 4.081 | <0.001
Fearfully Dismayed | Perceived Trust | −0.045 | 0.148 | 0.023 | 0.035 | 2.071 | 0.019
Communication Style | Perceived Usefulness | −1.058 | <0.001 | −0.875 | <0.001 | 2.096 | 0.018
Perceived Usefulness | Perceived Trust | 0.196 | 0.029 | 0.028 | 0.331 | 1.780 | 0.038
 | | High AI Reliability | | Low AI Reliability | | |
Trusting | Perceived Trust | 0.064 | 0.076 | 0.223 | <0.001 | 3.416 | <0.001
Fearfully Dismayed | Perceived Trust | −0.073 | 0.026 | 0.027 | 0.250 | 2.479 | 0.007
 | | Short Experience | | Long Experience | | |
Trusting | Perceived Trust | 0.184 | <0.001 | 0.095 | 0.012 | 1.859 | 0.032
Fearfully Dismayed | Perceived Trust | 0.018 | 0.177 | −0.046 | 0.117 | 1.987 b | 0.023
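The z-scores above are critical ratios for the difference of a path coefficient across two groups. Since the extract omits the standard errors, the function below shows the formula with placeholder arguments; its one-tailed p-value matches the table's reporting style (e.g., z = 2.195 gives p ≈ 0.014).

```python
import numpy as np
from scipy.stats import norm

def critical_ratio(b1: float, se1: float, b2: float, se2: float):
    """z-test for the difference of a path coefficient across two groups.

    b1, b2: unstandardized estimates per group; se1, se2: their standard
    errors (not reported in the extract, so supply your own).
    """
    z = (b1 - b2) / np.sqrt(se1**2 + se2**2)
    return z, norm.sf(abs(z))  # one-tailed p, as the table's values imply
```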
Design Element | Type | Interestingly Surprised | Trusting | Fearfully Dismayed | Anxiously Suspicious | Perceived Usefulness
---|---|---|---|---|---|---
Explanation Form | Example | + | + | × | × | +
 | Feature and Rule | − | − | × | × | −
Communication Style | Logic | × | × | + | + | −
 | Human | × | × | − | − | +
Supplementary Information | With | × | × | − | × | ×
 | Without | × | × | + | × | ×

(+: increases; −: decreases; ×: no significant effect.)