Interacting with a Chatbot-Based Advising System: Understanding the Effect of Chatbot Personality and User Gender on Behavior
Abstract
1. Introduction
2. Related Work
2.1. Human Personality
2.2. Chatbots in Education
2.3. Personality-Imbued Chatbots
3. Research Goals and Hypotheses
- H1A. There is a difference between students’ trust in the conscientious and extroverted chatbots.
- H1B. There is a difference between students’ trust in the conscientious and agreeable chatbots.
- H1C. There is a difference between students’ trust in the agreeable and extroverted chatbots.
- H2A. There is a difference between students’ perceived authenticity of the conscientious and extroverted chatbots.
- H2B. There is a difference between students’ perceived authenticity of the conscientious and agreeable chatbots.
- H2C. There is a difference between students’ perceived authenticity of the agreeable and extroverted chatbots.
- H3A. There is a difference between students’ intention to use the conscientious and extroverted chatbots.
- H3B. There is a difference between students’ intention to use the conscientious and agreeable chatbots.
- H3C. There is a difference between students’ intention to use the agreeable and extroverted chatbots.
- H4A. There is a difference between students’ intended engagement with the conscientious and extroverted chatbots.
- H4B. There is a difference between students’ intended engagement with the conscientious and agreeable chatbots.
- H4C. There is a difference between students’ intended engagement with the agreeable and extroverted chatbots.
- H5A. There is a difference between male and female students’ trust in the conscientious chatbot.
- H5B. There is a difference between male and female students’ trust in the extroverted chatbot.
- H5C. There is a difference between male and female students’ trust in the agreeable chatbot.
- H6A. There is a difference between male and female students’ perceived authenticity of the conscientious chatbot.
- H6B. There is a difference between male and female students’ perceived authenticity of the extroverted chatbot.
- H6C. There is a difference between male and female students’ perceived authenticity of the agreeable chatbot.
- H7A. There is a difference between male and female students’ intention to use the conscientious chatbot.
- H7B. There is a difference between male and female students’ intention to use the extroverted chatbot.
- H7C. There is a difference between male and female students’ intention to use the agreeable chatbot.
- H8A. There is a difference between male and female students’ intended engagement with the conscientious chatbot.
- H8B. There is a difference between male and female students’ intended engagement with the extroverted chatbot.
- H8C. There is a difference between male and female students’ intended engagement with the agreeable chatbot.
4. Methodology
4.1. Chatbot Design
4.2. Chatbot Personality Design and Validation
4.3. Study Design
4.4. Participants
4.5. Post-Interaction Questionnaire
5. Results
5.1. Effect of Chatbot Personality on Trust, Authenticity, Usage Intention, and Engagement
5.1.1. Testing Hypothesis 1 (Effect of Chatbot Personality on Trust)
5.1.2. Testing Hypothesis 2 (Effect of Chatbot Personality on Perceived Authenticity)
5.1.3. Testing Hypothesis 3 (Effect of Chatbot Personality on Usage Intention)
5.1.4. Testing Hypothesis 4 (Effect of Chatbot Personality on Intended Engagement)
5.2. Effect of User Gender on Behavior
5.3. Thematic Analysis of Qualitative Data
5.3.1. Reasons for Trust, Perceived Authenticity, Usage Intention, and Intended Engagement with the Chatbots
5.3.2. Chatbot Preference
6. Discussion, Implications for Future Research, and Study Limitations
6.1. Effect of Chatbot Personality on Behavior
6.2. Effect of User Gender on Behavior
6.3. Chatbot Personality Preference
6.4. Future Research Considerations
6.5. Study Limitations
7. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Appendix A
References
- Oh, K.-J.; Lee, D.; Ko, B.; Choi, H.-J. A chatbot for psychiatric counseling in mental healthcare service based on emotional dialogue analysis and sentence generation. In Proceedings of the 18th IEEE International Conference on Mobile Data Management (MDM), Daejeon, Korea, 29 May–1 June 2017. [Google Scholar]
- Xu, A.; Liu, Z.; Guo, Y.; Sinha, V.; Akkiraju, R. A new chatbot for customer service on social media. In Proceedings of the CHI Conference on Human Factors in Computing Systems, Denver, CO, USA, 6–11 May 2017. [Google Scholar]
- Kuhail, M.A.; Alturki, N.; Alramlawi, S.; Alhejori, K. Interacting with Educational Chatbots: A Systematic Review. Educ. Inf. Technol. 2022, 1–46. Available online: https://link.springer.com/article/10.1007/s10639-022-11177-3#citeas (accessed on 14 September 2022). [CrossRef]
- Statista. Size of the Chatbot Market Worldwide, in 2016 and 2025. Available online: https://www.statista.com/statistics/656596/worldwide-chatbot-market/ (accessed on 14 September 2022).
- Tsvetkova, M.; García-Gavilanes, R.; Floridi, L.; Yasseri, T. Even good bots fight: The case of Wikipedia. PLoS ONE 2017, 12, e0171774. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Shumanov, M.; Johnson, L. Making conversations with chatbots more personalized. Comput. Hum. Behav. 2021, 117, 106627. [Google Scholar] [CrossRef]
- Nass, C.; Steuer, J.; Tauber, E.R. Computers are social actors. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Boston, MA, USA, 24–28 April 1994. [Google Scholar]
- Nass, C.; Brave, S. Wired for Speech: How Voice Activates and Advances the Human-Computer Relationship; MIT Press: Cambridge, MA, USA, 2005. [Google Scholar]
- Reeves, B.; Nass, C.I. The Media Equation: How People Treat Computers, Television, and New Media Like Real People; Cambridge University Press: Cambridge, UK, 1996. [Google Scholar]
- Völkel, S.T.; Schoedel, R.; Kaya, L.; Mayer, S. User Perceptions of Extraversion in Chatbots after Repeated Use. In Proceedings of the CHI Conference on Human Factors in Computing Systems, New Orleans, LA, USA, 30 April–5 May 2022. [Google Scholar]
- Braun, M.; Mainz, A.; Chadowitz, R.; Pfleging, B.; Alt, F. At your service: Designing voice assistant personalities to improve automotive user interfaces. In Proceedings of the CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 4–9 May 2019. [Google Scholar]
- Zhou, M.X.; Mark, G.; Li, J.; Yang, H. Trusting virtual agents: The effect of personality. ACM Trans. Interact. Intell. Syst. 2019, 9, 10. [Google Scholar] [CrossRef]
- Bickmore, T.; Cassell, J. Social dialogue with embodied conversational agents. In Advances in Natural Multimodal Dialogue System; Springer: Dordrecht, The Netherlands, 2005; pp. 23–54. [Google Scholar]
- Smestad, T.L.; Volden, F. Chatbot personalities matters. In Proceedings of the International Conference on Internet Science, Perpignan, France, 2–5 December 2019; Available online: https://research.com/conference/insci-2019-international-conference-on-internet-science (accessed on 14 September 2022).
- Mekni, M.; Baani, Z.; Sulieman, D. A smart virtual assistant for students. In Proceedings of the 3rd International Conference on Applications of Intelligent Systems, Las Palmas, Spain, 7–9 January 2020. [Google Scholar]
- Ranoliya, B.R.; Raghuwanshi, N.; Singh, S. Chatbot for university related FAQs. In Proceedings of the International Conference on Advances in Computing, Udupi, India, 13–16 September 2017; Available online: http://www.wikicfp.com/cfp/servlet/event.showcfp?eventid=53457 (accessed on 14 September 2022).
- Jin, S.V.; Youn, S. Why do consumers with social phobia prefer anthropomorphic customer service chatbots? Evolutionary explanations of the moderating roles of social phobia. Telemat. Inform. 2021, 62, 101644. [Google Scholar] [CrossRef]
- Völkel, S.T.; Kaya, L. Examining User Preference for Agreeableness in Chatbots. In Proceedings of the 3rd Conference on Conversational User Interfaces (CUI 2021), Bilbao, Spain, 27–29 July 2021. [Google Scholar]
- Lee, M.; Ackermans, S.; van As, N.; Chang, H.; Lucas, E.; IJsselsteijn, W. Caring for Vincent: A chatbot for self-compassion. In Proceedings of the CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 4–9 May 2019. [Google Scholar]
- Bremner, P.; Celiktutan, O.; Gunes, H. Personality perception of robot avatar tele-operators. In Proceedings of the 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Christchurch, New Zealand, 7–10 March 2016. [Google Scholar]
- Krenn, B.; Endrass, B.; Kistler, F.; André, E. Effects of language variety on personality perception in embodied conversational agents. In Proceedings of the International Conference on Human-Computer Interaction, Heraklion, Greece, 22–27 June 2014. [Google Scholar]
- Andrist, S.; Mutlu, B.; Tapus, A. Look like me: Matching robot personality via gaze to increase motivation. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, Seoul, Korea, 18–23 April 2015. [Google Scholar]
- Nass, C.; Lee, K.M. Does computer-synthesized speech manifest personality? Experimental tests of recognition, similarity-attraction, and consistency-attraction. J. Exp. Psychol. Appl. 2001, 7, 171–181. [Google Scholar] [CrossRef]
- Cafaro, A.; Vilhjálmsson, H.H.; Bickmore, T. First Impressions in Human-Agent Virtual Encounters. ACM Trans. Comput.-Hum. Interact. 2016, 23, 24. [Google Scholar] [CrossRef] [Green Version]
- Li, J.; Zhou, M.X.; Yang, H.; Mark, G. Confiding in and listening to virtual agents: The effect of personality. In Proceedings of the 22nd International Conference on Intelligent User Interfaces, Limassol, Cyprus, 13–16 March 2017. [Google Scholar]
- Rothmann, S.; Coetzer, E.P. The big five personality dimensions and job performance. SA J. Ind. Psychol. 2003, 29, 68–74. [Google Scholar] [CrossRef] [Green Version]
- Garbarino, E.; Johnson, M.S. The different roles of satisfaction, trust, and commitment in customer relationships. J. Mark. 1999, 63, 70–87. [Google Scholar] [CrossRef]
- Przegalinska, A.; Ciechanowski, L.; Stroz, A.; Gloor, P.; Mazurek, G. In bot we trust: A new methodology of chatbot performance measures. Bus. Horiz. 2019, 62, 785–797. [Google Scholar] [CrossRef]
- Rese, A.; Ganster, L.; Baier, D. Chatbots in retailers’ customer communication: How to measure their acceptance? J. Retail. Consum. Serv. 2020, 56, 102176. [Google Scholar] [CrossRef]
- Allport, G.W. Pattern and Growth in Personality; Harcourt College Publishers: San Diego, CA, USA, 1961. [Google Scholar]
- McCrae, R.R.; Costa, P.T., Jr. The five factor theory of personality. In Handbook of Personality: Theory and Research; The Guilford Press: New York, NY, USA, 2008; pp. 159–181. [Google Scholar]
- Trouvain, J.; Schmidt, S.; Schröder, M.; Schmitz, M.; Barry, W.J. Modelling personality features by changing prosody in synthetic speech. In Proceedings of the 3rd International Conference on Speech Prosody, Dresden, Germany, 2–5 May 2006. [Google Scholar]
- Goldberg, L.R. The structure of phenotypic personality traits. Am. Psychol. 1993, 48, 26–34. [Google Scholar] [CrossRef]
- Mehra, B. Chatbot personality preferences in Global South urban English speakers. Soc. Sci. Humanit. Open 2021, 3, 100131. [Google Scholar] [CrossRef]
- Norman, W.T. Toward an adequate taxonomy of personality attributes: Replicated factor structure in peer nomination personality ratings. J. Abnorm. Soc. Psychol. 1963, 66, 574–583. [Google Scholar] [CrossRef] [PubMed]
- Danner, D.; Rammstedt, B.; Bluemke, M.; Lechner, C.; Berres, S.; Knopf, T.; Soto, C.; John, O.P. Die Deutsche Version des Big Five Inventory 2 (bfi-2); Leibniz Institute for the Social Sciences: Mannheim, Germany, 2016. [Google Scholar]
- McCrae, R.R.; Costa, P.T. Validation of the five-factor model of personality across instruments and observers. J. Personal. Soc. Psychol. 1987, 52, 81. [Google Scholar] [CrossRef]
- Matz, S.C.; Kosinski, M.; Nave, G.; Stillwell, D.J. Psychological targeting as an effective approach to digital mass persuasion. Proc. Natl. Acad. Sci. USA 2017, 114, 12714–12719. [Google Scholar] [CrossRef] [Green Version]
- Rajaobelina, L.; Bergeron, J. Antecedents and consequences of buyer-seller relationship quality in the financial services industry. Int. J. Bank Mark. 2006, 27, 359–380. [Google Scholar] [CrossRef]
- Desmet, P.; Fokkinga, S. Beyond Maslow’s Pyramid: Introducing a Typology of Thirteen Fundamental Needs for Human-Centered Design. Multimodal Technol. Interact. 2020, 4, 38. [Google Scholar] [CrossRef]
- Hassenzahl, M.; Diefenbach, S.; Göritz, A. Needs, affect, and interactive products: Facets of user experience. Interact. Comput. 2010, 22, 353–362. [Google Scholar] [CrossRef]
- Liu, W.; Lee, K.-P.; Gray, C.; Toombs, A.; Chen, K.-H.; Leifer, L. Transdisciplinary Teaching and Learning in UX Design: A Program Review and AR Case Studies. Appl. Sci. 2021, 11, 10648. [Google Scholar] [CrossRef]
- Komarraju, M.; Karau, S.J. The relationship between the big five personality traits and academic motivation. Personal. Individ. Differ. 2005, 39, 557–567. [Google Scholar] [CrossRef]
- de Feyter, T.; Caers, R.; Vigna, C.; Berings, D. Unraveling the impact of the Big Five personality traits on academic performance: The moderating and mediating effects of self-efficacy and academic motivation. Learn. Individ. Differ. 2012, 22, 439–448. [Google Scholar] [CrossRef]
- Benotti, L.; Martínez, M.C.; Schapachnik, F. A tool for introducing computer science with automatic formative assessment. IEEE Trans. Learn. Technol. 2017, 11, 179–192. [Google Scholar] [CrossRef]
- Haake, M.; Gulz, A. A look at the roles of look & roles in embodied pedagogical agents—A user preference perspective. Int. J. Artif. Intell. Educ. 2009, 19, 39–71. [Google Scholar]
- Feng, D.; Shaw, E.; Kim, J.; Hovy, E. An intelligent discussion-bot for answering student queries in threaded discussions. In Proceedings of the 11th International Conference on Intelligent User Interfaces, Sydney, Australia, 29 January–1 February 2006. [Google Scholar]
- Heffernan, N.T.; Croteau, E.A. Web-based evaluations showing differential learning for tutorial strategies employed by the Ms. Lindquist tutor. In Proceedings of the International Conference on Intelligent Tutoring Systems, Maceió, Brazil, 30 August–3 September 2004. [Google Scholar]
- VanLehn, K.; Graesser, A.; Jackson, G.T.; Jordan, P.; Olney, A.; Rosé, C.P. Natural Language Tutoring: A comparison of human tutors, computer tutors, and text. Cogn. Sci. 2007, 31, 3–52. [Google Scholar] [CrossRef]
- Coronado, M.; Iglesias, C.A.; Carrera, Á.; Mardomingo, A. A cognitive assistant for learning java featuring social dialogue. Int. J. Hum.-Comput. Stud. 2018, 117, 55–67. [Google Scholar] [CrossRef]
- el Janati, S.; Maach, A.; el Ghanami, D. Adaptive e-learning AI-powered chatbot based on multimedia indexing. Int. J. Adv. Comput. Sci. Appl. 2020, 11, 299–308. [Google Scholar] [CrossRef]
- Qin, C.; Huang, W.; Hew, K.F. Using the Community of Inquiry framework to develop an educational chatbot: Lesson learned from a mobile instant messaging learning environment. In Proceedings of the 28th International Conference on Computers in Education, Online, 23–27 November 2020. [Google Scholar]
- Dibitonto, M.; Leszczynska, K.; Tazzi, F.; Medaglia, C.M. Chatbot in a campus environment: Design of LiSA, a virtual assistant to help students in their university life. In Proceedings of the International Conference on Human-Computer Interaction, Las Vegas, NV, USA, 15–20 July 2018. [Google Scholar]
- Kuhail, M.A.; al Katheeri, H.; Negreiros, J.; Seffah, A.; Alfandi, O. Engaging Students with a Chatbot-Based Academic Advising System. Int. J. Hum.–Comput. Interact. 2022, 1–27. [Google Scholar] [CrossRef]
- Mairesse, F.; Walker, M.; Mehl, M.; Moore, R. Using linguistic cues for the automatic recognition of personality in conversation and text. J. Artif. Intell. Res. 2007, 30, 457–500. [Google Scholar] [CrossRef] [Green Version]
- Ruane, E.; Farrell, S.; Ventresque, A. User perception of text-based chatbot personality. In Proceedings of the International Workshop on Chatbot Research and Design, Online, 23–24 November 2020. [Google Scholar]
- Calvo, R.; Vella-Brodrick, D.A.; Desmet, P.M.; Ryan, R. Positive computing: A new partnership between psychology, social sciences and technologists. Psychol. Well-Being Theory Res. Pract. 2016, 6, 10. [Google Scholar] [CrossRef] [Green Version]
- Reinkemeier, F.; Gnewuch, U. Match or mismatch? How matching personality and gender between voice assistants and users affects trust in voice commerce. In Proceedings of the 55th Hawaii International Conference on System Sciences, Lahaina, HI, USA, 4–7 January 2022; Available online: https://dblp.org/db/conf/hicss/index.html (accessed on 14 September 2022).
- Chen, Z.L.Y.; Nieminen, M.P.; Lucero, A. Creating a chatbot for and with migrants: Chatbot personality drives co-design activities. In Proceedings of the ACM Designing Interactive Systems Conference, Eindhoven, The Netherlands, 6–10 July 2020. [Google Scholar]
- John, O.P.; Srivastava, S. The Big Five Trait taxonomy: History, measurement, and theoretical perspectives. In Handbook of Personality: Theory and Research; Guilford Press: New York, NY, USA, 1999; pp. 102–138. [Google Scholar]
- Mari, A.; Algesheimer, R. The role of trusting beliefs in voice assistants during voice shopping. In Proceedings of the Hawaii International Conference on System Sciences (HICSS), Maui, HI, USA, 5–8 January 2021; Available online: https://www.insna.org/events/54th-hawaii-international-conference-on-system-sciences-hicss (accessed on 14 September 2022).
- Chung, H.; Iorga, M.; Voas, J.; Lee, S. Alexa, Can I Trust You? Computer 2017, 50, 100–104. [Google Scholar] [CrossRef] [PubMed]
- Benbasat, I.; Wang, W. Trust in and adoption of online recommendation agents. J. Assoc. Inf. Syst. 2005, 6, 4. [Google Scholar] [CrossRef]
- Gefen, D.; Straub, D. Managing user trust in B2C e-services. E-Service 2003, 2, 7–24. [Google Scholar] [CrossRef]
- Kasilingam, D.L. Understanding the attitude and intention to use smartphone chatbots for shopping. Technol. Soc. 2020, 62, 101280. [Google Scholar] [CrossRef]
- McKnight, D.H.; Choudhury, V.; Kacmar, C. Developing and validating trust measures for e-commerce: An integrative typology. Inf. Syst. Res. 2002, 13, 334–359. [Google Scholar] [CrossRef] [Green Version]
- Qiu, L.; Benbasat, I. Evaluating anthropomorphic product recommendation agents: A social relationship perspective to designing information systems. J. Manag. Inf. Syst. 2009, 25, 145–182. [Google Scholar] [CrossRef]
- Müller, L.; Mattke, J.; Maier, C.; Weitzel, T.; Graser, H. Chatbot acceptance: A latent profile analysis on individuals’ trust in conversational agents. In Proceedings of the Computers and People Research Conference (SIGMIS-CPR ‘19), Nashville, TN, USA, 20–22 June 2019. [Google Scholar]
- Neururer, M.; Schlögl, S.; Brinkschulte, L.; Groth, A. Perceptions on authenticity in chat bots. Multimodal Technol. Interact. 2018, 2, 60. [Google Scholar] [CrossRef] [Green Version]
- Jones, C.L.E.; Hancock, T.; Kazandjian, B.; Voorhees, C.M. Engaging the Avatar: The effects of authenticity signals during chat-based service recoveries. J. Bus. Res. 2022, 144, 703–716. [Google Scholar] [CrossRef]
- Seto, E.; Davis, W.E. Authenticity predicts positive interpersonal relationship quality at low, but not high, levels of psychopathy. Personal. Individ. Differ. 2021, 182, 111072. [Google Scholar] [CrossRef]
- Sutton, A. Distinguishing between authenticity and personality consistency in predicting well-being. Eur. Rev. Appl. Psychol. 2018, 68, 117–130. [Google Scholar] [CrossRef]
- Rodden, K.; Hutchinson, H.; Fu, X. Measuring the user experience on a large scale: User-centered metrics for web applications. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Atlanta, GA, USA, 10–15 April 2010. [Google Scholar]
- von der Pütten, A.M.; Krämer, N.C.; Gratch, J. How our personality shapes our interactions with virtual characters: Implications for research and development. In Proceedings of the International Conference on Intelligent Virtual Agents, Philadelphia, PA, USA, 20–22 September 2010. [Google Scholar]
- Weisberg, Y.J.; DeYoung, C.G.; Hirsh, J.B. Gender differences in personality across the ten aspects of the Big Five. Front. Psychol. 2011, 2, 178. [Google Scholar] [CrossRef] [PubMed]
- Chatbot Conversation Script. Available online: https://www.dropbox.com/s/mn4lcllt027ifhl/chatbot_conversation_script.docx?dl=0 (accessed on 14 September 2022).
- Google. Dialogflow. Available online: https://cloud.google.com/dialogflow/docs (accessed on 14 September 2022).
- Response Manipulation. Available online: https://www.dropbox.com/s/5lkwtc49dtug833/Responses_manipulation.xlsx?dl=0 (accessed on 14 September 2022).
- Kruskal, W.H.; Wallis, W.A. Use of ranks in one-criterion variance analysis. J. Am. Stat. Assoc. 1952, 47, 583–621. [Google Scholar] [CrossRef]
- Ruland, F. The Wilcoxon-Mann-Whitney Test—An Introduction to Nonparametrics with Comments on the R Program wilcox.test; Independently Published: Chicago, IL, USA, 2018. [Google Scholar]
- Hair, J.F.; Hult, G.T.M.; Ringle, C.M.; Sarstedt, M. A Primer on Partial Least Squares Structural Equation Modeling (PLS-SEM); SAGE Publications: Thousand Oaks, CA, USA, 2016. [Google Scholar]
- Braun, V.; Clarke, V. Thematic Analysis: A Practical Guide; Sage Publications: London, UK, 2022. [Google Scholar]
- Han, J.; Ji, X.; Hu, X.; Guo, L.; Liu, T. Arousal recognition using audio-visual features and FMRI-based brain response. IEEE Trans. Affect. Comput. 2015, 6, 337–347. [Google Scholar] [CrossRef]
Reference | Research Goal | Application Area | Findings
---|---|---|---
Völkel & Kaya [18] | Testing users’ perception of chatbot agreeableness | Movie recommendation | Users prefer the highly agreeable chatbot
Völkel et al. [10] | Testing users’ perception of chatbot extraversion | Healthcare | Users prefer the highly extroverted chatbot
Ruane et al. [56] | Testing users’ perception of chatbot agreeableness and extraversion | Course recommendation | Users prefer highly agreeable and extroverted chatbots
Mehra [34] | Testing users’ preference for chatbots of three personalities | Making a food order | Users preferred the extroverted chatbot, followed by the conscientious chatbot
Characteristic | Description |
---|---|
Age | 18–25 (N = 37), 26–35 (N = 6) |
Gender | Female (N = 29), Male (N = 14) |
Campus Location | Saudi Arabia (N = 22), United Arab Emirates (N = 14), United States (N = 7) |
Education Level | Undergraduate student (N = 37), Postgraduate student (N = 6) |
Familiarity with chatbots | All are familiar with chatbots. |
IT Skills | All have at least basic IT skills (familiar with web surfing and document editing tools)
Hypothesis | H-Value | p-Value | Result
---|---|---|---
H1. Chatbot personality affects students’ trust. | 4.339 | 0.114 | n.s.

Sub-Hypothesis | U-Value | p-Value | Result
---|---|---|---
H1A. There is a difference between students’ trust in the conscientious and extroverted chatbots. | 821.5 | 0.375 | n.s.
H1B. There is a difference between students’ trust in the conscientious and agreeable chatbots. | 696.5 | 0.049 | supported
H1C. There is a difference between students’ trust in the agreeable and extroverted chatbots. | 768.5 | 0.178 | n.s.

Hypothesis | H-Value | p-Value | Result
---|---|---|---
H2. Chatbot personality affects students’ perceived authenticity of the chatbot. | 15.059 | 0.001 | supported

Sub-Hypothesis | U-Value | p-Value | Result
---|---|---|---
H2A. There is a difference between students’ perceived authenticity of the conscientious and extroverted chatbots. | 525.5 | 0.001 | supported
H2B. There is a difference between students’ perceived authenticity of the conscientious and agreeable chatbots. | 553 | 0.001 | supported
H2C. There is a difference between students’ perceived authenticity of the agreeable and extroverted chatbots. | 948.5 | 0.838 | n.s.

Hypothesis | H-Value | p-Value | Result
---|---|---|---
H3. Chatbot personality affects students’ intention to use the chatbot. | 4.338 | 0.114 | n.s.

Sub-Hypothesis | U-Value | p-Value | Result
---|---|---|---
H3A. There is a difference between students’ intention to use the conscientious and extroverted chatbots. | 747.5 | 0.118 | n.s.
H3B. There is a difference between students’ intention to use the conscientious and agreeable chatbots. | 707.5 | 0.054 | n.s.
H3C. There is a difference between students’ intention to use the agreeable and extroverted chatbots. | 864 | 0.591 | n.s.

Hypothesis | H-Value | p-Value | Result
---|---|---|---
H4. Chatbot personality affects students’ intended engagement with the chatbot. | 12.413 | 0.002 | supported

Sub-Hypothesis | U-Value | p-Value | Result
---|---|---|---
H4A. There is a difference between students’ intended engagement with the conscientious and extroverted chatbots. | 549 | 0.001 | supported
H4B. There is a difference between students’ intended engagement with the conscientious and agreeable chatbots. | 607 | 0.006 | n.s.
H4C. There is a difference between students’ intended engagement with the agreeable and extroverted chatbots. | 992.5 | 0.557 | n.s.
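The H-values and U-values in the tables above correspond to a Kruskal–Wallis omnibus test across the three chatbot conditions, followed by pairwise Mann–Whitney U comparisons for the sub-hypotheses. A minimal sketch of that pipeline with SciPy is shown below; the rating lists are hypothetical stand-ins for the study's per-participant questionnaire scores, which are not reproduced here.

```python
# Sketch of the omnibus + pairwise nonparametric test pipeline reported
# in the tables above. The rating lists are hypothetical sample data,
# not the study's actual responses.
from scipy.stats import kruskal, mannwhitneyu

conscientious = [4, 5, 4, 3, 5, 4, 4, 5, 3, 4]
extroverted = [3, 4, 3, 3, 4, 2, 3, 4, 3, 3]
agreeable = [3, 3, 4, 2, 3, 3, 4, 3, 2, 3]

# Omnibus test: does chatbot personality affect the ratings at all?
h_stat, p_omnibus = kruskal(conscientious, extroverted, agreeable)
print(f"Kruskal-Wallis: H = {h_stat:.3f}, p = {p_omnibus:.3f}")

# Pairwise follow-up tests (two-sided), one per sub-hypothesis.
pairs = {
    "conscientious vs extroverted": (conscientious, extroverted),
    "conscientious vs agreeable": (conscientious, agreeable),
    "agreeable vs extroverted": (agreeable, extroverted),
}
for label, (a, b) in pairs.items():
    u_stat, p_pair = mannwhitneyu(a, b, alternative="two-sided")
    print(f"{label}: U = {u_stat:.1f}, p = {p_pair:.3f}")
```

Both tests are rank-based, so they suit ordinal Likert-scale responses that may violate the normality assumption of ANOVA.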
Hypothesis | U-Value | p-Value | Result |
---|---|---|---|
H5. Students’ gender affects students’ trust in the chatbot. | | |
H5A. There is a difference between male and female students’ trust in the conscientious chatbot. | 176 | 0.491 | n.s.
H5B. There is a difference between male and female students’ trust in the extroverted chatbot. | 150 | 0.173 | n.s.
H5C. There is a difference between male and female students’ trust in the agreeable chatbot. | 236 | 0.398 | n.s.
H6. Students’ gender affects students’ perception of chatbot authenticity. | | |
H6A. There is a difference between male and female students’ perceived authenticity of the conscientious chatbot. | 173.5 | 0.449 | n.s.
H6B. There is a difference between male and female students’ perceived authenticity of the extroverted chatbot. | 198 | 0.906 | n.s.
H6C. There is a difference between male and female students’ perceived authenticity of the agreeable chatbot. | 204.5 | 0.979 | n.s.
H7. Students’ gender affects students’ intention to use the chatbot. | | |
H7A. There is a difference between male and female students’ intention to use the conscientious chatbot. | 155.5 | 0.213 | n.s.
H7B. There is a difference between male and female students’ intention to use the extroverted chatbot. | 134 | 0.066 | n.s.
H7C. There is a difference between male and female students’ intention to use the agreeable chatbot. | 185 | 0.633 | n.s.
H8. Students’ gender affects students’ intended engagement with the chatbot. | | |
H8A. There is a difference between male and female students’ intended engagement with the conscientious chatbot. | 152 | 0.188 | n.s.
H8B. There is a difference between male and female students’ intended engagement with the extroverted chatbot. | 179.5 | 0.548 | n.s.
H8C. There is a difference between male and female students’ intended engagement with the agreeable chatbot. | 196.5 | 0.876 | n.s.
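Each sub-hypothesis row above is a single two-sided Mann–Whitney U test comparing two independent groups (male vs. female ratings of one chatbot). A minimal sketch, again with hypothetical scores in place of the study's data:

```python
# Two-group comparison as used for the gender sub-hypotheses.
# The score lists are hypothetical sample data, not the study's responses.
from scipy.stats import mannwhitneyu

male_trust = [4, 3, 4, 5, 3, 4, 3]
female_trust = [4, 4, 3, 4, 5, 3, 4, 4, 3]

u_stat, p_value = mannwhitneyu(male_trust, female_trust,
                               alternative="two-sided")

alpha = 0.05  # conventional significance threshold
result = "supported" if p_value < alpha else "n.s."
print(f"U = {u_stat:.1f}, p = {p_value:.3f} -> {result}")
```

The test does not require equal group sizes, which matters here since the sample had 29 female and 14 male participants.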
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Kuhail, M.A.; Thomas, J.; Alramlawi, S.; Shah, S.J.H.; Thornquist, E. Interacting with a Chatbot-Based Advising System: Understanding the Effect of Chatbot Personality and User Gender on Behavior. Informatics 2022, 9, 81. https://doi.org/10.3390/informatics9040081