# A Machine Learning Based Classification Method for Customer Experience Survey Analysis


## Abstract


## 1. Introduction

## 2. Related Work and Paper Contribution

#### 2.1. Works on the Net Promoter Score (NPS)

#### 2.2. Limitations of NPS

#### 2.3. Our Contribution

## 3. Net Promoter Score Survey

- NPS question [5]: “How likely are you to recommend [company x] to your friends or colleagues?” The response is given on a scale from 0 (definitely not) to 10 (definitely yes).
- Satisfaction scores (from 0 or 1 to 10) for a set of CX attributes, such as product experience (service quality, network coverage, tariff plan, billing, etc.), touchpoint experience (e.g., call center, website, mobile app, shops), customer lifecycle milestones (e.g., contract renewal), etc. Some surveys also include brand image-related attributes (e.g., trust, innovation).
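The survey responses above feed the standard NPS computation: each 0–10 response is mapped to a label, and the score is the percentage of Promoters minus the percentage of Detractors. A minimal pure-Python sketch (label bands as in the NPS label table later in the paper):

```python
def nps_label(score: int) -> str:
    """Map a 0-10 survey response to the standard NPS label."""
    if score >= 9:
        return "Promoter"
    if score >= 7:
        return "Passive"
    return "Detractor"


def net_promoter_score(responses: list[int]) -> float:
    """NPS = %Promoters - %Detractors, in the range [-100, 100]."""
    labels = [nps_label(s) for s in responses]
    promoters = labels.count("Promoter")
    detractors = labels.count("Detractor")
    return 100.0 * (promoters - detractors) / len(labels)


# 2 promoters (10, 9), 2 passives (8, 7), 2 detractors (6, 0) -> NPS = 0.0
print(net_promoter_score([10, 9, 8, 7, 6, 0]))
```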

#### NPS Survey Analysis

## 4. NPS Bias Classification

#### 4.1. Datasets Available for NPS Bias Analysis

#### 4.2. Distribution Analysis of the NPS Bias Categories

#### 4.3. Regression Analysis of the NPS Bias Classes

#### 4.4. NPS Drivers’ Analysis Based on NPS Bias

## 5. Machine Learning Algorithms for CX Classification

#### 5.1. Problem Formulation

Consider an input vector **x**, where $\mathit{x}\in X\subseteq {\mathbb{R}}^{K}$, and a label $\mathit{y}\in Y=\{{C}_{1},{C}_{2},\cdots ,{C}_{Q}\}$; i.e., **x** lies in a K-dimensional input space and y is a label taking one of Q different values. The Q labels form categories or groups of patterns, and the objective is to find a classification rule or function $y=r\left(\mathit{x}\right):X\to Y$ that predicts the category of the NPS index, given a training set of N points, $D=\{({\mathit{x}}_{i},{y}_{i})\},i=1,\cdots ,N$.
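As a minimal concrete instance of such a rule $r(\mathit{x}):X\to Y$, consider a 1-nearest-neighbour rule over the training set D (a sketch with illustrative toy data, not the paper's dataset: feature vectors are CX attribute scores, labels are NPS categories):

```python
import math


def one_nn_rule(D, x):
    """Classification rule r(x): return the label of the training point
    in D closest to x (Euclidean distance)."""
    nearest = min(D, key=lambda pair: math.dist(pair[0], x))
    return nearest[1]


# Toy training set D = {(x_i, y_i)}: K = 2 attribute scores, Q = 3 labels.
D = [
    ((9.0, 8.5), "Promoter"),
    ((7.5, 7.0), "Passive"),
    ((3.0, 2.0), "Detractor"),
]

print(one_nn_rule(D, (8.8, 9.0)))  # closest to (9.0, 8.5) -> "Promoter"
```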

#### 5.2. Machine Learning Algorithms

#### 5.2.1. Decision Trees

#### 5.2.2. k-Nearest Neighbors

#### 5.2.3. Support Vector Machines

#### 5.2.4. Random Forest (RF)

#### 5.2.5. Artificial Neural Networks (ANNs)

#### 5.2.6. Convolutional Neural Networks (CNNs)

#### 5.2.7. Naïve Bayes

#### 5.2.8. Logistic Regression

#### 5.3. Applied Dataset

#### 5.4. Experimental Results
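The classifier families of Section 5.2 can be evaluated under a single harness. A hedged sketch using scikit-learn on synthetic placeholder data (not the paper's survey dataset; the CNN of Section 5.2.6 requires a deep-learning framework such as Keras and is omitted here):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for the survey data: 8 CX attributes, 3 NPS classes.
X, y = make_classification(n_samples=300, n_features=8, n_classes=3,
                           n_informative=5, random_state=0)

models = {
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "k-NN": KNeighborsClassifier(),
    "SVM": SVC(),
    "Random Forest": RandomForestClassifier(random_state=0),
    "ANN (MLP)": MLPClassifier(max_iter=1000, random_state=0),
    "Naive Bayes": GaussianNB(),
    "Logistic Regression": LogisticRegression(max_iter=1000),
}

# 5-fold cross-validated accuracy for each model.
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```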

## 6. Personal Data Protection Rules and Ethical Issues

- Personal data of participants are kept strictly confidential at all times during the research;
- No personal data are centrally stored. In addition, data are scrambled where possible and abstracted and/or anonymized in a way that does not affect the final project outcome;
- No collected data are utilized outside the scope of this research or for any other secondary use.

## 7. Discussion

- (a) A set of scenarios was tested by changing the mix of random and real data in the training set. The results indicated that the contribution of randomly generated data led to similar results (in terms of the metrics presented in Figure 7 and Figure 8). Although the incorporation of the randomly generated data eliminated potential overfitting effects, it did not increase the achieved performance of the tested models. To this end, the next research step in this direction will be to further enhance the random data generator through the application of Generative Adversarial Networks (GANs) [39].
- (b) The comparative analysis of all the examined models indicated that, despite the differences observed in the performance metrics, at this stage we cannot identify a single model with dominant performance for NPS classification analysis. Linear and logistic regression appear to perform on par with the other ML algorithms.
- (c) The introduction of the NPS bias label delivers a substantial improvement in the performance metrics of all the tested algorithms. The proposed method provides fertile ground for a better understanding of the NPS key drivers, which in turn will allow targeted actions to be applied based on separate analysis of positively and negatively biased customers, as described in Section 3. The next research step in this case will be to verify whether the statistical results of this paper are associated with causality. This can be achieved by comparing the key drivers’ analysis results with the free-text comments that the surveyed customers are asked to provide (sentiment analysis).

## 8. Conclusions

## Author Contributions

## Funding

## Acknowledgments

## Conflicts of Interest

## Appendix A. Performance Metrics

#### Appendix A.1. Confusion Matrix

|  | Actual: Detractors | Actual: Passives | Actual: Promoters |
|---|---|---|---|
| Predicted: Detractors | ${C}_{1,1}$ | ${C}_{1,2}$ | ${C}_{1,3}$ |
| Predicted: Passives | ${C}_{2,1}$ | ${C}_{2,2}$ | ${C}_{2,3}$ |
| Predicted: Promoters | ${C}_{3,1}$ | ${C}_{3,2}$ | ${C}_{3,3}$ |

#### Appendix A.1.1. Accuracy

#### Appendix A.1.2. Precision and Recall

#### Appendix A.1.3. F1-Score
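The Appendix A metrics can all be computed directly from a Q × Q confusion matrix C, where C[i][j] counts samples predicted as class i whose actual class is j. A pure-Python sketch with illustrative counts (not the paper's results):

```python
def metrics_from_confusion(C):
    """Accuracy plus per-class precision, recall, and F1-score from a
    confusion matrix C (rows = predicted class, columns = actual class)."""
    total = sum(sum(row) for row in C)
    correct = sum(C[i][i] for i in range(len(C)))
    accuracy = correct / total
    per_class = []
    for i in range(len(C)):
        predicted_i = sum(C[i])                  # row sum: predicted as i
        actual_i = sum(row[i] for row in C)      # column sum: actually i
        precision = C[i][i] / predicted_i if predicted_i else 0.0
        recall = C[i][i] / actual_i if actual_i else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        per_class.append({"precision": precision, "recall": recall, "f1": f1})
    return accuracy, per_class


C = [  # rows: predicted Detractor/Passive/Promoter; columns: actual
    [40, 5, 0],
    [5, 70, 10],
    [0, 5, 65],
]
accuracy, per_class = metrics_from_confusion(C)
print(round(accuracy, 3))  # (40 + 70 + 65) / 200 = 0.875
```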

## References

- Bolton, R.N.; Drew, J.H. A Multistage Model of Customers’ Assessments of Service Quality and Value. J. Consum. Res.
**1991**, 17, 375–384. [Google Scholar] [CrossRef] - Aksoy, L.; Buoye, A.; Aksoy, P.; Larivière, B.; Keiningham, T.L. A cross-national investigation of the satisfaction and loyalty linkage for mobile telecommunications services across eight countries. J. Interact. Mark.
**2013**, 27, 74–82. [Google Scholar] [CrossRef] - Ismail, A.R.; Melewar, T.C.; Lim, L.; Woodside, A. Customer experiences with brands: Literature review and research directions. Mark. Rev.
**2011**, 11, 205–225. [Google Scholar] [CrossRef] [Green Version] - Gentile, C.; Spiller, N.; Noci, G. How to Sustain the Customer Experience: An Overview of Experience Components that Co-create Value With the Customer. Eur. Manag. J.
**2007**, 25, 395–410. [Google Scholar] [CrossRef] - Reichheld, F.F. The one number you need to grow. Harv. Bus. Rev.
**2003**, 81, 46–55. [Google Scholar] - Reichheld, F. The Ultimate Question: Driving Good Profits and True Growth, 1st ed.; Harvard Business School Press: Boston, MA, USA, 2006. [Google Scholar]
- Jeske, D.R.; Callanan, T.P.; Guo, L. Identification of Key Drivers of Net Promoter Score Using a Statistical Classification Model. In Efficient Decision Support Systems—Practice and Challenges From Current to Future; IntechOpen: London, UK, 2011. [Google Scholar] [CrossRef] [Green Version]
- Ickin, S.; Ahmed, J.; Johnsson, A.; Gustafsson, J. On Network Performance Indicators for Network Promoter Score Estimation. In Proceedings of the 2019 Eleventh International Conference on Quality of Multimedia Experience (QoMEX), Berlin, Germany, 5–7 June 2019; pp. 1–3. [Google Scholar] [CrossRef]
- Dastane, O.; Fazlin, I. Re-Investigating Key Factors of Customer Satisfaction Affecting Customer Retention for Fast Food Industry. Int. J. Manag. Account. Econ.
**2017**, 4, 379–400. [Google Scholar] - Ban, H.J.; Choi, H.; Choi, E.K.; Lee, S.; Kim, H.S. Investigating Key Attributes in Experience and Satisfaction of Hotel Customer Using Online Review Data. Sustainability
**2019**, 11, 6570. [Google Scholar] [CrossRef] [Green Version] - Raspor Janković, S.; Gligora Marković, M.; Brnad, A. Relationship Between Attribute And Overall Customer Satisfaction: A Case Study Of Online Banking Services. Zb. Veleučilišta Rijeci
**2014**, 2, 1–12. [Google Scholar] - Rallis, I.; Markoulidakis, I.; Georgoulas, I.; Kopsiaftis, G. A novel classification method for customer experience survey analysis. In Proceedings of the 13th ACM International Conference on PErvasive Technologies Related to Assistive Environments, Corfu, Greece, 30 June–3 July 2020; pp. 1–9. [Google Scholar]
- Lalonde, S.M.; Company, E.K. Key Driver Analysis Using Latent Class Regression; Survey Research Methods Section; American Statistical Association: Alexandria, VA, USA, 1996; pp. 474–478. [Google Scholar]
- Larson, A.; Goungetas, B. Modeling the drivers of Net Promoter Score. Quirks Med.
**2013**, 20131008. Available online: https://www.quirks.com/articles/modeling-the-drivers-of-net-promoter-score (accessed on 30 November 2020). - Reno, R.; Tuason, N.; Rayner, B. Multicollinearity and Sparse Data in Key Driver Analysis. 2013. Available online: https://www.predictiveanalyticsworld.com/sanfrancisco/2013/pdf/Day2_1550_Reno_Tuason_Rayner.pdf (accessed on 30 November 2020).
- Rose, S.; Sreejith, R.; Senthil, S. Social Media Data Analytics to Improve the Customer Services: The Case of Fast-Food Companies. Int. J. Recent Technol. Eng.
**2019**, 8, 6359–6366. [Google Scholar] [CrossRef] - Miao, Y. A Machine-Learning Based Store Layout Strategy in Shopping Mall. In Proceedings of the International Conference on Machine Learning and Big Data Analytics for IoT Security and Privacy, Shanghai, China, 6–8 November 2020; pp. 170–176. [Google Scholar]
- Sheoran, A.; Fahmy, S.; Osinski, M.; Peng, C.; Ribeiro, B.; Wang, J. Experience: Towards automated customer issue resolution in cellular networks. In Proceedings of the 26th Annual International Conference on Mobile Computing and Networking, London, UK, 21–25 September 2020; pp. 1–13. [Google Scholar]
- Al-Mashraie, M.; Chung, S.H.; Jeon, H.W. Customer switching behavior analysis in the telecommunication industry via push-pull-mooring framework: A machine learning approach. Comput. Ind. Eng.
**2020**, 106476. [Google Scholar] [CrossRef] - Keiningham, T.L.; Cooil, B.; Andreassen, T.W.; Aksoy, L. A longitudinal examination of net promoter and firm revenue growth. J. Mark.
**2007**, 71, 39–51. [Google Scholar] [CrossRef] [Green Version] - Grisaffe, D.B. Questions about the ultimate question: Conceptual considerations in evaluating Reichheld’s net promoter score (NPS). J. Consum. Satisf. Dissatisf. Complain. Behav.
**2007**, 20, 36. [Google Scholar] - Zaki, M.; Kandeil, D.; Neely, A.; McColl-Kennedy, J.R. The fallacy of the net promoter score: Customer loyalty predictive model. Camb. Serv. Alliance
**2016**, 10, 1–25. [Google Scholar] - Karamolegkos, P.N.; Patrikakis, C.Z.; Doulamis, N.D.; Tragos, E.Z. User—Profile based Communities Assessment using Clustering Methods. In Proceedings of the 2007 IEEE 18th International Symposium on Personal, Indoor and Mobile Radio Communications, Athens, Greece, 3–7 September 2007; pp. 1–6. [Google Scholar] [CrossRef]
- Voulodimos, A.S.; Patrikakis, C.Z.; Karamolegkos, P.N.; Doulamis, A.D.; Sardis, E.S. Employing clustering algorithms to create user groups for personalized context aware services provision. In Proceedings of the 2011 ACM Workshop on Social and Behavioural Networked Media Access, Scottsdale, AZ, USA, 1 December 2011; Association for Computing Machinery: New York, NY, USA, 2011; pp. 33–38. [Google Scholar] [CrossRef]
- Voulodimos, A.; Doulamis, N.; Doulamis, A.; Protopapadakis, E. Deep Learning for Computer Vision: A Brief Review. Comput. Intell. Neurosci.
**2018**, 2018, 1–13. [Google Scholar] [CrossRef] - Taneja, A.; Arora, A. Modeling user preferences using neural networks and tensor factorization model. Int. J. Inf. Manag.
**2019**, 45, 132–148. [Google Scholar] [CrossRef] - Doulamis, A.D.; Doulamis, N.D.; Kollias, S.D. On-line retrainable neural networks: Improving the performance of neural networks in image analysis problems. IEEE Trans. Neural Netw.
**2000**, 11, 137–155. [Google Scholar] [CrossRef] - Doulamis, N.; Dragonas, J.; Doulamis, A.; Miaoulis, G.; Plemenos, D. Machine learning and pattern analysis methods for profiling in a declarative collaorative framework. In Intelligent Computer Graphics 2009; Springer: Berlin/Heidelberg, Germany, 2009; pp. 189–206. [Google Scholar]
- Yiakoumettis, C.; Doulamis, N.; Miaoulis, G.; Ghazanfarpour, D. Active learning of user’s preferences estimation towards a personalized 3D navigation of geo-referenced scenes. GeoInformatica
**2014**, 18, 27–62. [Google Scholar] [CrossRef] - Lad, S.; Parikh, D. Interactively guiding semi-supervised clustering via attribute-based explanations. In Proceedings of the European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014; pp. 333–349. [Google Scholar]
- Doulamis, N.; Yiakoumettis, C.; Miaoulis, G. On-line spectral learning in exploring 3D large scale geo-referred scenes. In Proceedings of the Euro-Mediterranean Conference, Limassol, Cyprus, 29 October–3 November 2012; pp. 109–118. [Google Scholar]
- Izonin, I.; Tkachenko, R.; Kryvinska, N.; Tkachenko, P.; Gregušml, M. Multiple Linear Regression based on Coefficients Identification using Non-Iterative SGTM Neural-Like Structure. In Proceedings of the International Work-Conference on Artificial Neural Networks, Gran Canaria, Spain, 12–14 June 2019; pp. 467–479. [Google Scholar]
- Vitynskyi, P.; Tkachenko, R.; Izonin, I.; Kutucu, H. Hybridization of the SGTM neural-like structure through inputs polynomial extension. In Proceedings of the 2018 IEEE Second International Conference on Data Stream Mining & Processing (DSMP), Lviv, Ukraine, 21–25 August 2018; pp. 386–391. [Google Scholar]
- Tkachenko, R.; Izonin, I.; Kryvinska, N.; Chopyak, V.; Lotoshynska, N.; Danylyuk, D. Piecewise-Linear Approach for Medical Insurance Costs Prediction Using SGTM Neural-Like Structure. In Proceedings of the 1st International Workshop on Informatics & Data-Driven Medicine (IDDM 2018), Lviv, Ukraine, 28–30 November 2018; pp. 170–179. [Google Scholar]
- Karamolegkos, P.N.; Patrikakis, C.Z.; Doulamis, N.D.; Vlacheas, P.T.; Nikolakopoulos, I.G. An evaluation study of clustering algorithms in the scope of user communities assessment. Comput. Math. Appl.
**2009**, 58, 1498–1519. [Google Scholar] [CrossRef] [Green Version] - Conklin, M.; Powaga, K.; Lipovetsky, S. Customer satisfaction analysis: Identification of key drivers. Eur. J. Oper. Res.
**2004**, 154, 819–827. [Google Scholar] [CrossRef] - LaLonde, S. A Demonstration of Various Models Used in a Key Driver Analysis; 2016; p. 17. Available online: https://www.lexjansen.com/mwsug/2016/AA/MWSUG-2016-AA23.pdf (accessed on 30 November 2020).
- Magidson, J. Correlated Component Regression: A Prediction/Classification Methodology for Possibly Many Features; American Statistical Association: Alexandria, VA, USA, 2010; pp. 4372–4386. [Google Scholar]
- Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. In Advances in Neural Information Processing Systems; Ghahramani, Z., Welling, M., Cortes, C., Lawrence, N., Weinberger, K.Q., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2014; Volume 27, pp. 2672–2680. [Google Scholar]
- Lipovetsky, S.; Conklin, M. Analysis of regression in game theory approach. Appl. Stoch. Model. Bus. Ind.
**2001**, 17, 319–330. [Google Scholar] [CrossRef] - Tontini, G.; Picolo, J.D.; Silveira, A. Which incremental innovations should we offer? Comparing importance–performance analysis with improvement-gaps analysis. Total. Qual. Manag. Bus. Excell.
**2014**, 25, 705–719. [Google Scholar] [CrossRef] - Martilla, J.A.; James, J.C. Importance-Performance Analysis. J. Mark.
**1977**, 41, 77–79. [Google Scholar] - Bacon, D.R. A Comparison of Approaches to Importance-Performance Analysis. Int. J. Mark. Res.
**2003**, 45, 1–15. [Google Scholar] [CrossRef] - Slack, N. The Importance-Performance Matrix as a Determinant of Improvement Priority. Int. J. Oper. Prod. Manag.
**1994**, 14, 59–75. [Google Scholar] [CrossRef] - Deng, J.; Pierskalla, C.D. Linking Importance–Performance Analysis, Satisfaction, and Loyalty: A Study of Savannah, GA. Sustainability
**2018**, 10, 704. [Google Scholar] [CrossRef] [Green Version] - European Parliament, Council of the European Union. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data, and Repealing Directive 95/46/EC (General Data Protection Regulation). Technical Report. 2016. Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32016R0679 (accessed on 30 November 2020).
- Quinlan, J.R. Induction of decision trees. Mach. Learn.
**1986**, 1, 81–106. [Google Scholar] [CrossRef] [Green Version] - Rokach, L.; Maimon, O.Z. Data Mining with Decision Trees: Theory and Applications; World Scientific: Singapore, 2008; Volume 69. [Google Scholar]
- Bhatia, N.; Vandana. Survey of Nearest Neighbor Techniques. arXiv
**2010**, arXiv:1007.0085. [Google Scholar] - Murphy, K.P. Machine Learning: A Probabilistic Perspective; MIT Press: Cambridge, MA, USA, 2012. [Google Scholar]
- Protopapadakis, E.; Voulodimos, A.; Doulamis, A.; Camarinopoulos, S.; Doulamis, N.; Miaoulis, G. Dance pose identification from motion capture data: A comparison of classifiers. Technologies
**2018**, 6, 31. [Google Scholar] [CrossRef] [Green Version] - Keller, J.M.; Gray, M.R.; Givens, J.A. A fuzzy k-nearest neighbor algorithm. IEEE Trans. Syst. Man Cybern.
**1985**, 4, 580–585. [Google Scholar] [CrossRef] - Basak, D.; Srimanta, P.; Patranabis, D.C. Support Vector Regression. Neural Inf. Process.-Lett. Rev.
**2007**, 11, 203–224. [Google Scholar] - Abe, S. Support Vector Machines for Pattern Classification, 2nd ed.; Advances in Computer Vision and Pattern Recognition; Springer: London, UK, 2010. [Google Scholar] [CrossRef]
- Kopsiaftis, G.; Protopapadakis, E.; Voulodimos, A.; Doulamis, N.; Mantoglou, A. Gaussian Process Regression Tuned by Bayesian Optimization for Seawater Intrusion Prediction. Comput. Intell. Neurosci.
**2019**, 2019, 2859429. [Google Scholar] [CrossRef] [PubMed] - Pal, M. Random forest classifier for remote sensing classification. Int. J. Remote Sens.
**2005**, 26, 217–222. [Google Scholar] [CrossRef] - Haykin, S. Neural Networks: A Comprehensive Foundation; Prentice-Hall Inc.: Upper Saddle River, NJ, USA, 2007. [Google Scholar]
- Hecht-Nielsen, R. Kolmogorov’s mapping neural network existence theorem. In Proceedings of the International Conference on Neural Networks, San Diego, CA, USA, 21–24 June 1987; IEEE Press: New York, NY, USA, 1987; Volume 3, pp. 11–14. [Google Scholar]
- Doulamis, N.; Doulamis, A.; Varvarigou, T. Adaptable neural networks for modeling recursive non-linear systems. In Proceedings of the 2002 14th International Conference on Digital Signal Processing, DSP 2002 (Cat. No. 02TH8628), Santorini, Greece, 1–3 July 2002; Volume 2, pp. 1191–1194. [Google Scholar]
- Protopapadakis, E.; Voulodimos, A.; Doulamis, A. On the Impact of Labeled Sample Selection in Semisupervised Learning for Complex Visual Recognition Tasks. Complexity
**2018**, 2018, 1–11. [Google Scholar] [CrossRef] - Doulamis, A.; Doulamis, N.; Protopapadakis, E.; Voulodimos, A. Combined Convolutional Neural Networks and Fuzzy Spectral Clustering for Real Time Crack Detection in Tunnels. In Proceedings of the 2018 25th IEEE International Conference on Image Processing (ICIP), Athens, Greece, 7–10 October 2018; pp. 4153–4157. [Google Scholar] [CrossRef]
- Haouari, B.; Amor, N.B.; Elouedi, Z.; Mellouli, K. Naïve possibilistic network classifiers. Fuzzy Sets Syst.
**2009**, 160, 3224–3238. [Google Scholar] [CrossRef] - Hosmer, D.W., Jr.; Lemeshow, S.; Sturdivant, R.X. Applied Logistic Regression; John Wiley & Sons: Hoboken, NJ, USA, 2013; Volume 398. [Google Scholar]
- Freitas, C.O.A.; de Carvalho, J.M.; Oliveira, J.; Aires, S.B.K.; Sabourin, R. Confusion Matrix Disagreement for Multiple Classifiers. In Progress in Pattern Recognition, Image Analysis and Applications; Lecture Notes in Computer Science; Rueda, L., Mery, D., Kittler, J., Eds.; Springer: Berlin/Heidelberg, Germany, 2007; pp. 387–396. [Google Scholar] [CrossRef]
- Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-learn: Machine Learning in Python. J. Mach. Learn. Res.
**2011**, 12, 2825–2830. [Google Scholar]

**Figure 7.** Comparison of the classification metrics for Dataset 1 results (input: CX attributes only).

**Figure 8.** Comparison of the classification metrics for Dataset 2 results (input: CX metrics plus NPS Bias).

| Response Values | NPS Label |
|---|---|
| 9–10 | Promoter |
| 7–8 | Passive |
| 0–6 | Detractor |

| NPS Bias | Bias Category |
|---|---|
| $\mathrm{NPS}\_\mathrm{BIAS}\ge 0$ | Positively Biased |
| $\mathrm{NPS}\_\mathrm{BIAS}<0$ | Negatively Biased |

| Mix | Detractors | Passives | Promoters | Total |
|---|---|---|---|---|
| Negatives | 42 | 84 | 2 | 128 |
| Positives | 3 | 155 | 165 | 323 |
| Total | 45 | 239 | 167 | 451 |

Chi-Square Test (α = 0.05):

| Chi-Square | df | p-Value | Significance | Cramér’s V |
|---|---|---|---|---|
| 197.52 | 4 | $1.28\times {10}^{-41}$ | Yes | 0.467 |
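The chi-square independence statistic and Cramér's V can be computed directly from a contingency table. A pure-Python sketch over the 2×3 bias-mix counts above; note that the paper's reported statistic has df = 4 and therefore comes from its own 3×3 cross-tabulation, so these illustrative numbers will differ:

```python
def chi_square_and_cramers_v(table):
    """Pearson chi-square statistic and Cramer's V for a contingency table."""
    rows, cols = len(table), len(table[0])
    row_sums = [sum(r) for r in table]
    col_sums = [sum(r[j] for r in table) for j in range(cols)]
    n = sum(row_sums)
    chi2 = 0.0
    for i in range(rows):
        for j in range(cols):
            expected = row_sums[i] * col_sums[j] / n  # under independence
            chi2 += (table[i][j] - expected) ** 2 / expected
    # Cramer's V normalises chi-square by n times (smaller dimension - 1).
    v = (chi2 / (n * (min(rows, cols) - 1))) ** 0.5
    return chi2, v


# Bias mix vs. NPS class counts from the table above.
table = [
    [42, 84, 2],    # Negatively biased: Detractors, Passives, Promoters
    [3, 155, 165],  # Positively biased
]
chi2, v = chi_square_and_cramers_v(table)
print(round(chi2, 2), round(v, 3))
```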

| Parameter | Value |
|---|---|
| Function measuring quality of split | entropy |
| Maximum depth of tree | 3 |
| Weights associated with classes | 1 |

| Parameter | Value |
|---|---|
| Number of neighbors | 5 |
| Distance metric | Minkowski |
| Weights function | uniform |

| Parameter | Value |
|---|---|
| Kernel type | linear |
| Degree of polynomial kernel function | 3 |
| Weights associated with classes | 1 |

| Parameter | Value |
|---|---|
| Number of trees | 100 |
| Measurement of the quality of split | Gini index |

| Parameter | Value |
|---|---|
| Number of hidden neurons | 6 |
| Activation function applied for the input and hidden layer | ReLU |
| Activation function applied for the output layer | Softmax |
| Optimizer network function | Adam |
| Calculated loss | sparse categorical cross-entropy |
| Epochs used | 100 |
| Batch size | 10 |

| Parameter | Value |
|---|---|
| Model | Sequential (array of Keras layers) |
| Kernel size | 3 |
| Pool size | 4 |
| Activation function applied | ReLU |
| Calculated loss | categorical cross-entropy |
| Epochs used | 100 |
| Batch size | 128 |

| Parameter | Value |
|---|---|
| Maximum number of iterations | 300 |
| Algorithm used in optimization | L-BFGS |
| Weights associated with classes | 1 |
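For reference, the scikit-learn estimators corresponding to the parameter tables above could be instantiated as sketched below. The parameter names are scikit-learn's (an assumed mapping from the tables, not the authors' code), and the Keras-based ANN/CNN configurations are omitted:

```python
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

# Hyperparameters as listed in the tables above.
models = {
    "decision_tree": DecisionTreeClassifier(criterion="entropy", max_depth=3),
    "knn": KNeighborsClassifier(n_neighbors=5, metric="minkowski",
                                weights="uniform"),
    "svm": SVC(kernel="linear", degree=3),
    "random_forest": RandomForestClassifier(n_estimators=100,
                                            criterion="gini"),
    "logistic_regression": LogisticRegression(max_iter=300, solver="lbfgs"),
}

for name, model in models.items():
    print(name, "->", type(model).__name__)
```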


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Markoulidakis, I.; Rallis, I.; Georgoulas, I.; Kopsiaftis, G.; Doulamis, A.; Doulamis, N.
A Machine Learning Based Classification Method for Customer Experience Survey Analysis. *Technologies* **2020**, *8*, 76.
https://doi.org/10.3390/technologies8040076
