Optimizing Chatbots to Improve Customer Experience and Satisfaction: Research on Personalization, Empathy, and Feedback Analysis
Abstract
Featured Application
1. Review
1.1. Relevant Theories and Concepts
1.2. Methodologies Used
2. Modeling of Chatbot Personalization, Empathy, and Feedback, Including Hypotheses
2.1. Entity Recognition and Text Classification Model
2.2. Sentiment and Semantic Analysis Model
Operational Definition of Empathy in Chatbot Interactions
2.3. Research Questions and Hypotheses
2.3.1. Research Questions
- RQ1: How do advanced NLP techniques (named entity recognition, text classification, sentiment analysis, and semantic analysis) individually and collectively impact customer satisfaction in chatbot interactions?
- RQ2: What mediating mechanisms explain the relationship between NLP technique effectiveness and customer satisfaction outcomes in empathy-driven chatbot systems?
- RQ3: How do contextual factors and service categories moderate the effectiveness of chatbot optimization strategies in different customer interaction scenarios?
2.3.2. Research Hypotheses
- The accuracy of named entity recognition (NER) in identifying customer intent and complaints is positively associated with improved customer satisfaction.
- The effectiveness of text classification algorithms (e.g., random forest, logistic regression, SVM) in categorizing customer interactions is positively associated with improved customer satisfaction.
- Positive sentiment detected through sentiment analysis tools (VADER, naïve Bayes, TextBlob) in chatbot interactions is positively associated with higher customer satisfaction.
- The depth of semantic understanding, as measured by latent Dirichlet allocation (LDA) in identifying key topics and issues, is positively associated with improved customer satisfaction.
- The combined effectiveness of multiple NLP techniques (NER, text classification, sentiment analysis, and semantic analysis) has a greater impact on customer satisfaction than individual techniques alone, demonstrating synergistic effects in integrated chatbot systems.
- Empathy-driven response appropriateness mediates the relationship between NLP technique accuracy and customer satisfaction, where accurate intent recognition enables more empathetic and contextually appropriate responses.
- Intent recognition precision mediates the relationship between NER effectiveness and overall customer experience, where accurate entity identification leads to better understanding of customer needs and subsequently improved satisfaction.
- Service category complexity moderates the relationship between NLP effectiveness and customer satisfaction, with stronger positive effects observed in complex service categories (e.g., ACCOUNT, REFUND) compared to simpler categories (e.g., NEWSLETTER, ORDER).
- Customer emotional state, as detected through sentiment analysis, moderates the relationship between empathetic chatbot responses and satisfaction outcomes, with greater benefits observed for customers expressing negative emotions.
- The integrated analytical framework combining all NLP techniques (NER, text classification, sentiment analysis, and semantic analysis) with empathy-driven personalization leads to optimal customer experience outcomes across different service contexts, surpassing the effectiveness of individual components or partial integrations.
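As a point of reference, the mediation and moderation hypotheses above can be expressed with the standard regression formulation from the mediation/moderation literature. This is a generic, product-of-coefficients-style sketch rather than the authors' exact model specification; X, M, W, and Y are placeholders for the constructs named in the hypotheses.

```latex
% Mediation: X = NLP technique accuracy, M = mediator (e.g., empathy-driven
% response appropriateness or intent-recognition precision), Y = satisfaction.
\begin{aligned}
M &= a\,X + e_{1}, \\
Y &= c'\,X + b\,M + e_{2}, \qquad \text{indirect effect} = a \cdot b.
\end{aligned}

% Moderation: W = moderator (e.g., service-category complexity or customer
% emotional state); moderation is supported when \beta_{3} differs from zero.
Y = \beta_{0} + \beta_{1} X + \beta_{2} W + \beta_{3}\,(X \times W) + \varepsilon
```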
3. Method
3.1. Data Source and Collection
- Twenty-seven unique intents, each assigned to one of eleven top-level categories (e.g., ACCOUNT, ORDER, PAYMENT, SHIPPING).
- Seven entity/slot types.
- Each utterance is tagged with entities/slots where applicable.
- Additionally, each utterance is enriched with linguistic tags indicating the type of language variation expressed (e.g., COLLOQUIAL, INTERROGATIVE, OFFENSIVE). These tags allow training datasets to be customized for different user language profiles, so the trained bot can handle diverse linguistic phenomena such as spelling mistakes, run-on words, and punctuation errors. (A minimal loading sketch follows this list.)
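A minimal sketch of how such a dataset might be loaded and inspected with pandas. The file name and column names (utterance, intent, category, tags) are assumptions for illustration, not the dataset's actual schema.

```python
import pandas as pd

# Hypothetical file and column names -- the real dataset schema may differ.
df = pd.read_csv("customer_support_utterances.csv")

# Sanity checks against the structure described above:
# 27 unique intents, 11 top-level categories, plus linguistic-variation tags.
print(df["intent"].nunique(), "unique intents")
print(df["category"].nunique(), "top-level categories")
print(df["tags"].str.split(",").explode().value_counts().head(10))
```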
3.2. Tools and Environment
- spaCy (Explosion AI, Berlin, Germany): For named entity recognition (NER) and general natural language processing tasks.
- scikit-learn (scikit-learn developers, worldwide): For implementing machine learning algorithms such as random forest, decision trees, support vector machines (SVMs), and logistic regression, used in text classification and sentiment analysis.
- NLTK (Natural Language Toolkit, University of Pennsylvania, Philadelphia, PA, USA): For text preprocessing steps and lexicon-based sentiment analysis (e.g., VADER).
- pandas (NumFOCUS, Austin, TX, USA) and NumPy (NumFOCUS, Austin, TX, USA): For data manipulation and numerical operations.
- Matplotlib (Matplotlib Development Team, worldwide) and seaborn (Michael Waskom, New York, NY, USA): For data visualization.
- gensim (RARE Technologies, Prague, Czech Republic): For latent Dirichlet allocation (LDA) implementation in deep semantic analysis.
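A quick environment check for the stack listed above (package names as published on PyPI; the exact versions used in the study are not asserted here):

```python
import importlib.metadata as im

# Report installed versions of the libraries listed above (assumes pip-installed packages).
for pkg in ["spacy", "scikit-learn", "nltk", "pandas", "numpy",
            "matplotlib", "seaborn", "gensim"]:
    try:
        print(f"{pkg:>14}: {im.version(pkg)}")
    except im.PackageNotFoundError:
        print(f"{pkg:>14}: not installed")
```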
3.3. Research Procedure
- (A) Entity Recognition and Text Classification: spaCy NER was applied to identify specific entities within customer inquiries, while clustering algorithms (HDBSCAN, K-Means) and supervised classifiers (SVM, logistic regression, random forest) were used to categorize the input data. (Minimal code sketches of components A–C follow this list.)
- (B) Sentiment and Semantic Analysis: Techniques such as VADER, naïve Bayes, TextBlob, and latent Dirichlet allocation (LDA) were employed to assess customer sentiment and extract deeper insights from textual data, including emotional tone and underlying meanings.
- (C) Statistical Analysis: This component involved correlation analysis (Spearman) and multivariate analysis using methods such as PCA, ANOVA, and logistic regression to detect trends and relationships in chatbot interactions, empirically validating the research hypotheses.
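For component (A), a minimal sketch of entity recognition with a pretrained spaCy pipeline and category classification with scikit-learn. The tiny training set and model choices are illustrative, not the study's actual configuration.

```python
import spacy
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Pretrained spaCy model (en_core_web_sm must be downloaded beforehand);
# domain slots such as order_id or invoice_id would need a custom-trained NER component.
nlp = spacy.load("en_core_web_sm")
doc = nlp("I still have not received the order I shipped to Berlin last month.")
print([(ent.text, ent.label_) for ent in doc.ents])

# Toy text classification into service categories (the real model is trained
# on the labelled utterances described in Section 3.1).
texts = ["where is my order", "I want a refund for my payment", "unsubscribe me from the newsletter"]
labels = ["ORDER", "REFUND", "NEWSLETTER"]
clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(texts, labels)
print(clf.predict(["can I get a refund"]))
```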
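For component (B), a minimal sketch of VADER sentiment scoring via NLTK and topic extraction with gensim's LDA; the toy documents and two-topic setting are illustrative only.

```python
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer
from gensim import corpora
from gensim.models import LdaModel

# The VADER lexicon must be available locally (downloaded on first run).
nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()
print(sia.polarity_scores("My parcel is three weeks late and nobody is answering me!"))

# Toy LDA over pre-tokenized utterances; a real run would preprocess the full corpus
# (lower-casing, stop-word removal, lemmatization) before building the dictionary.
docs = [
    ["refund", "payment", "charged", "twice"],
    ["delivery", "late", "shipping", "address"],
    ["newsletter", "unsubscribe", "email"],
]
dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]
lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2, passes=10, random_state=42)
for topic_id, terms in lda.print_topics(num_words=3):
    print(topic_id, terms)
```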
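For component (C), a minimal sketch of the kinds of tests reported in Section 4 (Spearman correlation, chi-square test of independence, one-way ANOVA), run here on synthetic stand-in data with SciPy; the study's own analyses used the annotated chatbot corpus.

```python
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic stand-in data -- placeholders for the annotated interaction records.
df = pd.DataFrame({
    "sentiment_score": rng.normal(0, 1, 300),
    "satisfaction": rng.integers(1, 6, 300),
    "category": rng.choice(["ORDER", "REFUND", "SHIPPING"], 300),
    "cluster": rng.integers(0, 5, 300),
})

# Spearman rank correlation (sentiment vs. satisfaction, as in the Results)
rho, p = stats.spearmanr(df["sentiment_score"], df["satisfaction"])
print(f"Spearman rho={rho:.3f}, p={p:.3f}")

# Chi-square test of independence between cluster assignments and service categories
contingency = pd.crosstab(df["cluster"], df["category"])
chi2, p, dof, _ = stats.chi2_contingency(contingency)
print(f"Chi-square={chi2:.2f}, p={p:.3g}")

# One-way ANOVA of satisfaction across service categories
groups = [g["satisfaction"].to_numpy() for _, g in df.groupby("category")]
f_stat, p = stats.f_oneway(*groups)
print(f"ANOVA F={f_stat:.2f}, p={p:.3f}")
```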
3.4. Ethical Considerations
3.5. Evaluation Metrics
3.5.1. Quantitative Metrics
3.5.2. Qualitative Metrics
3.6. Statistical Analyses
3.6.1. Correlation Analysis
3.6.2. Multivariate Analysis
4. Results
4.1. Overview of Statistical Tests
4.2. Hypothesis Testing Results
4.2.1. Primary NLP Technique Effects
4.2.2. Interaction Effects Analysis
4.2.3. Mediation Analysis
4.2.4. Moderation Analysis
4.2.5. Integration Framework Validation
4.3. Effect Sizes and Further Insights
5. Discussion
5.1. Discussion of Key Findings
5.1.1. Entity Recognition and Text Classification
5.1.2. Sentiment and Semantic Analysis
5.1.3. Integrated Feedback Analysis and Statistical Insights
5.2. Theoretical Implications
5.3. Practical Implications
- Optimize Chatbot Design: Implement advanced NER and text classification for more accurate intent recognition and response generation.
- Enhance Empathy and Personalization: Develop chatbots capable of detecting subtle emotional nuances and adapting their communication style accordingly, particularly in sensitive service categories.
- Improve Feedback Loops: Utilize comprehensive analytical frameworks to continuously monitor and analyze customer interactions, identifying emerging trends and areas for proactive intervention.
- Strategic Resource Allocation: Direct resources to address specific negative emotional triggers identified in categories like shipping and cancellation, thereby improving overall customer satisfaction.
5.4. Limitations
5.5. Future Work
- Cross-Dataset Validation: Conduct extensive validation of the proposed models and findings across diverse, real-world customer service datasets to enhance generalizability and robustness.
- Advanced Sentiment and Emotion Detection: Explore and integrate more sophisticated NLP models, such as fine-tuning BERT or other transformer-based architectures, to capture subtle emotional nuances, sarcasm, and cultural specificities in customer language. This could involve developing hybrid models that combine lexicon-based approaches with machine learning techniques.
- Dynamic Personalization Models: Develop and test adaptive chatbot personalization models that dynamically adjust their communication style and content based on real-time sentiment, historical interaction data, and individual customer profiles. This could involve reinforcement learning approaches.
- Longitudinal Studies: Conduct longitudinal studies to assess the long-term impact of chatbot personalization and empathy on customer loyalty, retention, and overall business outcomes.
- Multimodal Interaction Analysis: Extend the research to include multimodal interactions (e.g., voice, video) to gain a more holistic understanding of customer emotions and intent in chatbot conversations.
- Ethical AI in Chatbots: Further investigate the ethical implications of advanced chatbot capabilities, focusing on transparency, bias detection, and ensuring fair and equitable customer service experiences across diverse user groups.
- Integration of External Factors: Incorporate external factors such as service quality, response time, and specific contextual variables into the analytical models to gain a more comprehensive understanding of their influence on customer satisfaction.
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Cath, C.; Wachter, S.; Mittelstadt, B.; Taddeo, M.; Floridi, L. Artificial Intelligence and the ‘Good Society’: The US, EU, and UK approach. Sci. Eng. Ethics 2017, 24, 505–528. [Google Scholar] [CrossRef]
- Shah, H.; Warwick, K.; Vallverdú, J.; Wu, D. Can machines talk? Comparison of Eliza with modern dialogue systems. Comput. Hum. Behav. 2016, 58, 278–295. [Google Scholar] [CrossRef]
- Ciechanowski, L.; Przegalinska, A.; Magnuski, M.; Gloor, P. In the shades of the uncanny valley: An experimental study of human–chatbot interaction. Future Gener. Comput. Syst. 2019, 92, 539–548. [Google Scholar] [CrossRef]
- Huo, P.; Yang, Y.; Zhou, J.; Chen, C.; He, L. TERG: Topic-Aware Emotional Response Generation for Chatbot. In Proceedings of the 2020 International Joint Conference on Neural Networks (IJCNN), Glasgow, UK, 19–24 July 2020; pp. 1–8. [Google Scholar] [CrossRef]
- Shumanov, M.; Johnson, L. Making conversations with chatbots more personalized. Comput. Hum. Behav. 2021, 117, 106627. [Google Scholar] [CrossRef]
- Crolic, C.; Thomaz, F.; Hadi, R.; Stephen, A.T. Blame the Bot: Anthropomorphism and Anger in Customer–Chatbot Interactions. J. Mark. 2022, 86, 132–148. [Google Scholar] [CrossRef]
- Meyer-Waarden, L.; Pavone, G.; Poocharoentou, T.; Prayatsup, P.; Ratinaud, M.; Tison, A.; Torné, S. How Service Quality Influences Customer Acceptance and Usage of Chatbots? J. Serv. Manag. Res. 2020, 4, 35–51. [Google Scholar] [CrossRef]
- Ashfaq, M.; Yun, J.; Yu, S.; Loureiro, S.M.C. I, Chatbot: Modeling the determinants of users’ satisfaction and continuance intention of AI-powered service agents. Telemat. Inform. 2020, 54, 101473. [Google Scholar] [CrossRef]
- Li, C.-Y.; Zhang, J.-T. Chatbots or me? Consumers’ switching between human agents and conversational agents. J. Retail. Consum. Serv. 2023, 72, 103264. [Google Scholar] [CrossRef]
- Breiman, L. Random Forest. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
- Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297. [Google Scholar] [CrossRef]
- McCallum, A.; Nigam, K. A comparison of event models for naive bayes text classification. In Proceedings of the AAAI Conference on Artificial Intelligence, Madison, WI, USA, 26–30 July 1998; Available online: https://api.semanticscholar.org/CorpusID:7311285 (accessed on 10 September 2023).
- Yen, C.; Chiang, M.-C. Trust me, if you can: A study on the factors that influence consumers’ purchase intention triggered by chatbots based on brain image evidence and self-reported assessments. Behav. Inf. Technol. 2021, 40, 1177–1194. [Google Scholar] [CrossRef]
- Powers, D.M.W. Evaluation: From precision, recall and F-measure to ROC, informedness, markedness and correlation. arXiv 2020. [Google Scholar] [CrossRef]
- Bickmore, T.W.; Picard, R.W. Establishing and maintaining long-term human-computer relationships. ACM Trans. Comput. Hum. Interact. 2005, 12, 293–327. [Google Scholar] [CrossRef]
- Wirtz, J.; Patterson, P.G.; Kunz, W.H.; Gruber, T.; Lu, V.N.; Paluch, S.; Martins, A. Brave new world: Service robots in the frontline. J. Serv. Manag. 2018, 29, 907–931. [Google Scholar] [CrossRef]
- Sivaramakrishnan, S.; Wan, F.; Tang, Z. Giving an “e-human touch” to e-tailing: The moderating roles of static information quantity and consumption motive in the effectiveness of an anthropomorphic information agent. J. Interact. Mark. 2007, 21, 60–75. [Google Scholar] [CrossRef]
- Shawar, B.A.; Atwell, E.S. Using corpora in machine-learning chatbot systems. Int. J. Corpus Linguist. 2005, 10, 489–516. [Google Scholar] [CrossRef]
- Brennan, K. The managed teacher: Emotional labour, education, and technology. Educ. Insights 2006, 10, 55–65. [Google Scholar]
- Chung, M.; Ko, E.; Joung, H.; Kim, S.J. Chatbot e-service and customer satisfaction regarding luxury brands. J. Bus. Res. 2020, 117, 587–595. [Google Scholar] [CrossRef]
- Holzwarth, M.; Janiszewski, C.; Neumann, M.M. The Influence of Avatars on Online Consumer Shopping Behavior. J. Mark. 2006, 70, 19–36. [Google Scholar] [CrossRef]
- Radziwill, N.M.; Benton, M.C. Evaluating Quality of Chatbots and Intelligent Conversational Agents. arXiv 2017. [Google Scholar] [CrossRef]
- Sands, S.; Ferraro, C.; Campbell, C.; Tsao, H.-Y. Managing the human–chatbot divide: How service scripts influence service experience. J. Serv. Manag. 2021, 32, 246–264. [Google Scholar] [CrossRef]
- Jiang, Q.; Zhang, Y.; Pian, W. Chatbot as an emergency exist: Mediated empathy for resilience via human-AI interaction during the COVID-19 pandemic. Inf. Process. Manag. 2022, 59, 103074. [Google Scholar] [CrossRef] [PubMed]
- Wardhana, A.K.; Ferdiana, R.; Hidayah, I. Empathetic chatbot enhancement and development: A literature review. In Proceedings of the 2021 International Conference on Artificial Intelligence and Mechatronics Systems (AIMS), Bandung, Indonesia, 28–30 April 2021; pp. 1–6. [Google Scholar] [CrossRef]
- Park, G.; Yim, M.C.; Chung, J.; Lee, S. Effect of AI chatbot empathy and identity disclosure on willingness to donate: The mediation of humanness and social presence. Behav. Inf. Technol. 2023, 42, 1998–2010. [Google Scholar] [CrossRef]
- Waytz, A.; Heafner, J.; Epley, N. The mind in the machine: Anthropomorphism increases trust in an autonomous vehicle. J. Exp. Soc. Psychol. 2014, 52, 113–117. [Google Scholar] [CrossRef]
- De Visser, E.J.; Monfort, S.S.; McKendrick, R.; Smith, M.A.B.; McKnight, P.E.; Krueger, F.; Parasuraman, R. Almost human: Anthropomorphism increases trust resilience in cognitive agents. J. Exp. Psychol. Appl. 2016, 22, 331. [Google Scholar] [CrossRef]
- Jo Bitner, M. Service and technology: Opportunities and paradoxes. Manag. Serv. Qual. Int. J. 2001, 11, 375–379. [Google Scholar] [CrossRef]
- Giebelhausen, M.; Robinson, S.G.; Sirianni, N.J.; Brady, M.K. Touch versus Tech: When Technology Functions as a Barrier or a Benefit to Service Encounters. J. Mark. 2014, 78, 113–124. [Google Scholar] [CrossRef]
- Valtolina, S.; Barricelli, B.R.; Di Gaetano, S. Communicability of traditional interfaces VS chatbots in healthcare and smart home domains. Behav. Inf. Technol. 2020, 39, 108–132. [Google Scholar] [CrossRef]
- Schuetzler, R.M.; Giboney, J.S.; Grimes, G.M.; Nunamaker, J.F. The influence of conversational agent embodiment and conversational relevance on socially desirable responding. Decis. Support Syst. 2018, 114, 94–102. [Google Scholar] [CrossRef]
- Um, T.; Kim, T.; Chung, N. How does an Intelligence Chatbot Affect Customers Compared with Self-Service Technology for Sustainable Services? Sustainability 2020, 12, 5119. [Google Scholar] [CrossRef]
- Belanche, D.; Casaló, L.V.; Flavián, C. Frontline robots in tourism and hospitality: Service enhancement or cost reduction? Electron. Mark. 2021, 31, 477–492. [Google Scholar] [CrossRef]
- Sheehan, B.; Jin, H.S.; Gottlieb, U. Customer service chatbots: Anthropomorphism and adoption. J. Bus. Res. 2020, 115, 14–24. [Google Scholar] [CrossRef]
- Troshani, I.; Rao Hill, S.; Sherman, C.; Arthur, D. Do We Trust in AI? Role of Anthropomorphism and Intelligence. J. Comput. Inf. Syst. 2021, 61, 481–491. [Google Scholar] [CrossRef]
- De Cicco, R.; Silva, S.C.; Alparone, F.R. Millennials’ attitude toward chatbots: An experimental study in a social relationship perspective. Int. J. Retail. Distrib. Manag. 2020, 48, 1213–1233. [Google Scholar] [CrossRef]
- Tan, S.-M.; Liew, T.W. Designing Embodied Virtual Agents as Product Specialists in a Multi-Product Category E-Commerce: The Roles of Source Credibility and Social Presence. Int. J. Hum. Comput. Interact. 2020, 36, 1136–1149. [Google Scholar] [CrossRef]
- Collier, J.E.; Barnes, D.C.; Abney, A.K.; Pelletier, M.J. Idiosyncratic service experiences: When customers desire the extraordinary in a service encounter. J. Bus. Res. 2018, 84, 150–161. [Google Scholar] [CrossRef]
- Chebat, J.-C.; Kollias, P. The Impact of Empowerment on Customer Contact Employees’ Roles in Service Organizations. J. Serv. Res. 2000, 3, 66–81. [Google Scholar] [CrossRef]
- Schau, H.J.; Dellande, S.; Gilly, M.C. The impact of code switching on service encounters. J. Retail. 2007, 83, 65–78. [Google Scholar] [CrossRef]
- Pavlou, P.A. Institution-based trust in interorganizational exchange relationships: The role of online B2B marketplaces on trust formation. J. Strateg. Inf. Syst. 2002, 11, 215–243. [Google Scholar] [CrossRef]
- Ratnasingam, P. Trust in inter-organizational exchanges: A case study in business to business electronic commerce. Decis. Support Syst. 2005, 39, 525–544. [Google Scholar] [CrossRef]
- Beldad, A.; Hegner, S.; Hoppen, J. The effect of virtual sales agent (VSA) gender—Product gender congruence on product advice credibility, trust in VSA and online vendor, and purchase intention. Comput. Hum. Behav. 2016, 60, 62–72. [Google Scholar] [CrossRef]
- Lee, S.; Choi, J. Enhancing user experience with conversational agent for movie recommendation: Effects of self-disclosure and reciprocity. Int. J. Hum. Comput. Stud. 2017, 103, 95–105. [Google Scholar] [CrossRef]
- Kuipers, B. How can we trust a robot? Commun. ACM 2018, 61, 86–95. [Google Scholar] [CrossRef]
- Nordheim, C.B. Trust in Chatbots for Customer Service–Findings from a Questionnaire Study. Master’s Thesis, Universitetet i Oslo, Oslo, Norway, 2018. [Google Scholar]
- Chattaraman, V.; Kwon, W.-S.; Gilbert, J.E.; Ross, K. Should AI-Based, conversational digital assistants employ social-or task-oriented interaction style? A task-competency and reciprocity perspective for older adults. Comput. Hum. Behav. 2019, 90, 315–330. [Google Scholar] [CrossRef]
- Mimoun, M.S.B.; Poncin, I. A valued agent: How ECAs affect website customers’ satisfaction and behaviors. J. Retail. Consum. Serv. 2015, 26, 70–82. [Google Scholar] [CrossRef]
- Liang, Y.; Lee, S.A. Advancing the strategic messages affecting robot trust effect: The dynamic of user-and robot-generated content on human–robot trust and interaction outcomes. Cyberpsychol. Behav. Soc. Netw. 2016, 19, 538–544. [Google Scholar] [CrossRef]
- Liew, T.W.; Tan, S.-M. Exploring the effects of specialist versus generalist embodied virtual agents in a multi-product category online store. Telemat. Inform. 2018, 35, 122–135. [Google Scholar] [CrossRef]
- Benbasat, I.; Wang, W. Trust in and adoption of online recommendation agents. J. Assoc. Inf. Syst. 2005, 6, 4. [Google Scholar] [CrossRef]
- Araujo, T. Living up to the chatbot hype: The influence of anthropomorphic design cues and communicative agency framing on conversational agent and company perceptions. Comput. Hum. Behav. 2018, 85, 183–189. [Google Scholar] [CrossRef]
- Qiu, L.; Benbasat, I. A study of demographic embodiments of product recommendation agents in electronic commerce. Int. J. Hum.-Comput. Stud. 2010, 68, 669–688. [Google Scholar] [CrossRef]
- Følstad, A.; Brandtzæg, P.B. Chatbots and the new world of HCI. Interactions 2017, 24, 38–42. [Google Scholar] [CrossRef]
- Li, L.; Lee, K.Y.; Emokpae, E.; Yang, S.-B. What makes you continuously use chatbot services? Evidence from Chinese online travel agencies. Electron. Mark. 2021, 31, 575–599. [Google Scholar] [CrossRef] [PubMed]
- Li, C.; Pan, R.; Xin, H.; Deng, Z. Research on artificial intelligence customer service on consumer attitude and its impact during online shopping. J. Phys. Conf. Ser. 2020, 1575, 12192. [Google Scholar] [CrossRef]
- Nadkarni, P.M.; Ohno-Machado, L.; Chapman, W.W. Natural language processing: An introduction. J. Am. Med. Inform. Assoc. 2011, 18, 544–551. [Google Scholar] [CrossRef] [PubMed]
- Nguyen, T.H.; Waizenegger, L.; Techatassanasoontorn, A.A. “Don’t Neglect the User!”—Identifying Types of Human-Chatbot Interactions and their Associated Characteristics. Inf. Syst. Front. 2022, 24, 797–838. [Google Scholar] [CrossRef]
- Kohavi, R. Glossary of terms. Mach. Learn. 1998, 30, 271–274. [Google Scholar] [CrossRef]
- Quinlan, J.R. Induction of decision trees. Mach. Learn. 1986, 1, 81–106. [Google Scholar] [CrossRef]
- Yun, J.; Park, J. The Effects of Chatbot Service Recovery with Emotion Words on Customer Satisfaction, Repurchase Intention, and Positive Word-Of-Mouth. Front. Psychol. 2022, 13, 922503. [Google Scholar] [CrossRef]
- McLean, G.; Wilson, A. Evolving the online customer experience… is there a role for online customer support? Comput. Hum. Behav. 2016, 60, 602–610. [Google Scholar] [CrossRef]
Article | Study Focus | Theoretical Lens | Key Findings |
---|---|---|---|
[8] | Factors influencing user satisfaction with chatbot-based customer service | An integrated framework combining the expectation-confirmation model, information systems success model, and technology acceptance model | Information and service quality positively influence user satisfaction, which predicts continuance intention. Perceived enjoyment, usefulness, and ease of use contribute to continuance intention. |
[6] | Impact of anthropomorphic chatbot design on customer responses | Integration of anthropomorphism theory, expectancy violation theory, and emotion theories | For angry customers, human-like cues lead to inflated expectations, which, when unmet, result in lower satisfaction, poorer company evaluation, and reduced purchase intentions. |
[4] | Development of an emotionally appropriate and topically relevant chatbot response generation model | Encoder–decoder framework with emotion-aware module and topic commonsense-aware module | The proposed model significantly improves emotion expression precision and topic relevance, as demonstrated through automatic metrics and human evaluations. |
[9] | Factors driving consumer switching from human customer service to AI-based conversational agents in banking | Push–pull–mooring framework from migration theory and switching behavior literature | Both push effects (low empathy and adaptability of human agents) and pull effects (anytime/anywhere connectivity, personalization of AI) significantly influence switching behavior. |
[7] | Consumer acceptance and intention to reuse chatbots in the airline industry | Extended technology acceptance models with SERVQUAL framework and trust considerations | Reliability and perceived usefulness are the most critical determinants of reuse intention; tangible elements enhance perceived ease of use, while empathy has no significant effect on acceptance. |
[23] | Managing the human–chatbot divide: how service scripts influence service experience | Theater metaphor, social impact theory, and social exchange theory | When employing an educational script, human service agents significantly positively affect satisfaction and purchase intention compared to chatbots. These effects are fully mediated by emotion and rapport. However, when an entertaining script is used, there are no differences between human agents and chatbots. |
[5] | Effect of aligning chatbot personality with consumer personality | Similarity–attraction theory and personality congruence principles | Matching the chatbot’s personality to the consumer’s (e.g., introversion/extraversion) results in higher engagement and improved sales outcomes, particularly in social gain contexts. |
[24] | Human–AI interaction and digitally mediated empathy during COVID-19 pandemic | Communication theory of resilience and empathy theories | Five types of digitally mediated empathy identified among Replika users: companion buddy, responsive diary, emotion-handling program, electronic pet, and tool for venting. Mediated empathy serves as pathway to psychological resilience during pandemic disruption. |
[25] | Empathetic chatbot development models: A systematic literature review | Dialogue system architecture analysis and empathetic characteristic frameworks | Analysis of 204 works reveals distribution of empathetic chatbot models: 60% generative models, 27% retrieval models, and 13% hybrid models. Generative models demonstrate superior performance in delivering rich empathetic utterances. |
[26] | Effect of AI chatbot empathy and identity disclosure on donation behavior | CASA hypothesis and uncanny valley theory | Interaction effects between chatbot empathy types (cognitive vs. affective) and identity disclosure (human vs. chatbot names) on willingness to donate, mediated through perceived humanness and social presence. Demonstrates importance of careful empathy implementation to avoid uncanny valley effects. |
Test Type | Statistic | p-Value | Conclusion |
---|---|---|---|
Chi-Square (HDBSCAN) | 208,232.43 | 2.2 × 10⁻¹⁶ | Null Hypothesis Rejected
Chi-Square (K-Means) | 1984.51 | 2.2 × 10⁻¹⁶ | Null Hypothesis Rejected
(a)

Model | Hit Rate | False Alarm Rate | Miss Rate | Correct Rejection Rate | TDS-d’
---|---|---|---|---|---
Logistic regression | 0.91 | 0.07 | 0.09 | 0.93 | 2.78
Random forest | 0.89 | 0.08 | 0.11 | 0.92 | 2.65
SVM | 0.93 | 0.05 | 0.07 | 0.95 | 3.02

(b)

Service Category | Hit Rate | False Alarm Rate | Miss Rate | Correct Rejection Rate | TDS-d’
---|---|---|---|---|---
ACCOUNT | 0.91 | 0.07 | 0.09 | 0.93 | 2.78
CANCELLATION_FEE | 0.88 | 0.09 | 0.12 | 0.91 | 2.52
CONTACT | 0.92 | 0.06 | 0.08 | 0.94 | 2.88
DELIVERY | 0.90 | 0.07 | 0.10 | 0.93 | 2.71
FEEDBACK | 0.89 | 0.08 | 0.11 | 0.92 | 2.65
INVOICE | 0.94 | 0.04 | 0.06 | 0.96 | 3.15
NEWSLETTER | 0.95 | 0.03 | 0.05 | 0.97 | 3.28
ORDER | 0.96 | 0.02 | 0.04 | 0.98 | 3.41
PAYMENT | 0.85 | 0.11 | 0.15 | 0.89 | 2.25
REFUND | 0.86 | 0.10 | 0.14 | 0.90 | 2.38
SHIPPING_ADDRESS | 0.87 | 0.09 | 0.13 | 0.91 | 2.45

(c)

Entity/Slot Type | Hit Rate | False Alarm Rate | Miss Rate | Correct Rejection Rate | TDS-d’
---|---|---|---|---|---
account_type | 0.93 | 0.05 | 0.07 | 0.95 | 3.02
delivery_city | 0.89 | 0.08 | 0.11 | 0.92 | 2.65
delivery_country | 0.91 | 0.07 | 0.09 | 0.93 | 2.78
invoice_id | 0.94 | 0.04 | 0.06 | 0.96 | 3.15
order_id | 0.95 | 0.03 | 0.05 | 0.97 | 3.28
person_name | 0.92 | 0.06 | 0.08 | 0.94 | 2.88
refund_amount | 0.88 | 0.09 | 0.12 | 0.91 | 2.52
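For reference, the sensitivity index d′ from signal detection theory is conventionally computed as the difference between the z-transformed hit rate and false-alarm rate. The sketch below assumes that convention; the reported TDS-d’ values may apply additional corrections (e.g., for extreme rates), so small discrepancies against the tables are expected.

```python
from scipy.stats import norm

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """Classical signal-detection sensitivity index: z(hit rate) - z(false-alarm rate)."""
    return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

# Logistic regression row from sub-table (a): hit rate 0.91, false-alarm rate 0.07.
print(round(d_prime(0.91, 0.07), 2))  # ~2.82 under this convention (table reports 2.78)
```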
Test Type | Statistic | p-Value | TDS-d’ Validation | Conclusion |
---|---|---|---|---|
ANOVA (Service Categories) | F = 810.88 | <0.05 | Supported by d’ > 2.8 across components | Significant differences between categories |
Test Type | Statistic | p-Value | Conclusion |
---|---|---|---|
Spearman Correlation (Sentiment vs. Satisfaction) | −0.0013 | 0.4849 | Failed to Reject the Null Hypothesis |
Model Type | Accuracy | TDS-d’ Score | H5 Support |
---|---|---|---|
Logistic Regression | 100% | 4.0+ | Perfect discrimination supports integration |