A Novel Model on Reinforce K-Means Using Location Division Model and Outlier of Initial Value for Lowering Data Cost
Abstract
1. Introduction
2. Related Research
2.1. K-Means Algorithm
2.2. Clustering K Value in Non-Labeling Data
2.3. Clustering Initial Approach Algorithm
3. Proposed Reinforcement K-Means Algorithm
3.1. Overview of Proposed Reinforcement K-Means Algorithm
3.2. Selection of Proposed Reinforce K-Means Initial Centroid Approach
3.2.1. Outliers Generating Condition
3.2.2. Algorithm to Determine the Initial Center with Outliers and Space Division
3.3. Proposed Reinforcement K-Means (PKM) Initial Approach Algorithm
Algorithm 1 Reinforcement Clustering using PCA and Initial Centroid Subspace

Data: Non-Labeling Dataset
Output: Data by Cluster
Input: Training set …, where (drop = 1 by convention)
repeat
    for each temporary … do
        calculate the distance from each data point to its cluster centroid (cohesion)
        calculate the distance from each data point to the cluster centroids (separation)
        assign the initial centroid number, S(K)
    end
    if the clustering number K has been selected then
        for each vector data …:
            if … > …, which is the initial centroid, then
                if two objects are distributed then
                    assign the object to …, the centroid of the first cluster
                else
                    assign the object whose two vectors record the biggest length first to …, the centroid of the first cluster
        for each vector data …:
            for each vector data …:
                if … (of the maximum measurements) then …
    end
until each cluster centroid no longer changes
end main
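Because the listing above lost its mathematical symbols during extraction, the Python sketch below illustrates the general idea under stated assumptions: the data space is divided on the first two principal components, candidate outliers are screened with a simple IQR rule, and initial centroids are taken from the densest divided regions before a standard k-means refinement. The helper names (`iqr_outlier_mask`, `pkm_initial_centroids`) and the quadrant-based division are illustrative assumptions, not the authors' exact procedure.

```python
# Illustrative sketch only: PCA-based space division + IQR outlier screening
# for choosing initial centroids, followed by standard k-means refinement.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def iqr_outlier_mask(z, factor=1.5):
    """True for rows flagged as outliers by an IQR rule on each component."""
    q1, q3 = np.percentile(z, [25, 75], axis=0)
    iqr = q3 - q1
    low, high = q1 - factor * iqr, q3 + factor * iqr
    return np.any((z < low) | (z > high), axis=1)

def pkm_initial_centroids(X, k, rng=None):
    """Pick k initial centroids from PCA-divided subspaces, ignoring outliers."""
    rng = np.random.default_rng(rng)
    z = PCA(n_components=2).fit_transform(X)       # 2-D location space
    inlier = ~iqr_outlier_mask(z)                  # screen outlying objects
    med = np.median(z[inlier], axis=0)             # divide the space into quadrants
    quad = (z[:, 0] > med[0]).astype(int) * 2 + (z[:, 1] > med[1]).astype(int)
    centroids = []
    for q in np.argsort(-np.bincount(quad[inlier], minlength=4)):
        members = np.where(inlier & (quad == q))[0]
        if len(members) and len(centroids) < k:
            centroids.append(X[members].mean(axis=0))   # densest regions first
    while len(centroids) < k:                      # fall back to random inliers
        centroids.append(X[rng.choice(np.where(inlier)[0])])
    return np.vstack(centroids)

def pkm_cluster(X, k):
    init = pkm_initial_centroids(X, k)
    return KMeans(n_clusters=k, init=init, n_init=1).fit(X)
```

An initialization of this kind generally needs fewer refinement iterations than a random start, which is the kind of clustering-cost reduction the proposed algorithm targets.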
4. Experiments and Performance Evaluation
4.1. Environments of Experiments and Performance Evaluation
4.2. The Data and Procedure for Algorithm Verification
4.3. Experiments and Performance Evaluation in Small Data
4.3.1. Clustering Classification Ratio
4.3.2. Clustering Time Complexity
4.3.3. Clustering Performance Time (Cost)
4.3.4. Clustering Performance of Outlier Detection
4.4. Experiments and Performance Evaluation in Large Data
4.4.1. Performance Evaluation of CVI
4.4.2. Performance Evaluation
5. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
Abbreviations
IntraCD | Intra-Cluster Distance |
ICD | Inter-Cluster Distance |
AIC | Akaike Information Criterion |
BIC | Bayesian Information Criterion |
MAK | Macqueen Approach K-means |
KAK | Kaufman Approach K-means |
MMK | Max-min Approach K-means |
MLE | Maximum Likelihood Estimate |
Reliability | Dataset | Existing Model (MAK) | PKM (Avg.) | PKM (K = 2) | PKM (K = 4) | PKM (K = 6) | PKM (K = 8)
---|---|---|---|---|---|---|---
90% | Wine | 0.9663 | 0.9874 | 0.9944 | 0.9944 | 0.9831 | 0.9775
90% | Yeast | 0.9461 | 0.9618 | 0.9596 | 0.9677 | 0.9589 | 0.9609
90% | Iris | 0.9667 | 0.9867 | 0.9933 | 0.9933 | 0.9867 | 0.9733
90% | Average | | | 0.9824 | 0.9851 | 0.9762 | 0.9706
85% | Wine | 0.9551 | 0.9635 | 0.9888 | 0.9831 | 0.9382 | 0.9438
85% | Yeast | 0.9124 | 0.9532 | 0.9555 | 0.9319 | 0.9555 | 0.9697
85% | Iris | 0.9467 | 0.9700 | 0.9733 | 0.9867 | 0.9600 | 0.9600
85% | Average | | | 0.9725 | 0.9673 | 0.9512 | 0.9578
80% | Wine | 0.9494 | 0.9438 | 0.9438 | 0.9888 | 0.9663 | 0.9382
80% | Yeast | 0.8484 | 0.9313 | 0.9319 | 0.9340 | 0.9171 | 0.9420
80% | Iris | 0.9333 | 0.9533 | 0.9600 | 0.9533 | 0.9600 | 0.9400
80% | Average | | | 0.9453 | 0.9587 | 0.9478 | 0.9401
75% | Wine | 0.8820 | 0.9185 | 0.9270 | 0.9326 | 0.9213 | 0.8933
75% | Yeast | 0.8248 | 0.8972 | 0.8962 | 0.8747 | 0.8868 | 0.9313
75% | Iris | 0.8800 | 0.9033 | 0.9133 | 0.9200 | 0.9067 | 0.8733
75% | Average | | | 0.9122 | 0.9091 | 0.9049 | 0.8993
70% | Wine | 0.7528 | 0.8708 | 0.8820 | 0.9101 | 0.8315 | 0.8596
70% | Yeast | 0.7951 | 0.8624 | 0.8726 | 0.8315 | 0.8336 | 0.9117
70% | Iris | 0.8400 | 0.8900 | 0.9133 | 0.8800 | 0.8933 | 0.8733
70% | Average | | | 0.8893 | 0.8739 | 0.8528 | 0.8815
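The ratios above compare clusterings against the known labels of the UCI datasets. One common way to compute such a classification ratio (assumed here, not necessarily the authors' exact protocol) is to map each cluster to its majority class and report the fraction of objects matching that class:

```python
# Assumed scoring protocol: majority-class mapping per cluster.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris

def classification_ratio(true_labels, cluster_ids):
    """Fraction of objects whose cluster's majority class equals their own class."""
    true_labels = np.asarray(true_labels)          # expects integer-coded labels
    cluster_ids = np.asarray(cluster_ids)
    correct = 0
    for c in np.unique(cluster_ids):
        members = true_labels[cluster_ids == c]
        correct += np.max(np.bincount(members))    # members carrying the majority class
    return correct / len(true_labels)

X, y = load_iris(return_X_y=True)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(round(classification_ratio(y, labels), 4))   # roughly 0.89 for plain k-means on raw Iris
```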
Cluster | Item | Wine (s) | Yeast (s) | Iris (s) | Avg. (s)
---|---|---|---|---|---
2 | KMP | 0.85 | 1.65 | 0.82 | 1.10
2 | MMK | 1.00 | 1.74 | 1.00 | 1.24
2 | MAK | 0.88 | 2.10 | 0.95 | 1.31
2 | KAK | 0.89 | 1.44 | 0.94 | 1.09
2 | PKM | 0.84 | 1.41 | 0.82 | 1.02
3 | KMP | 0.82 | 1.85 | 0.82 | 1.16
3 | MMK | 0.81 | 1.65 | 0.99 | 1.15
3 | MAK | 0.91 | 1.95 | 0.93 | 1.26
3 | KAK | 0.98 | 1.56 | 0.91 | 1.15
3 | PKM | 0.79 | 1.74 | 0.78 | 1.01
5 | KMP | 0.84 | 1.67 | 0.99 | 1.17
5 | MMK | 0.87 | 1.45 | 1.24 | 1.19
5 | MAK | 1.15 | 2.15 | 1.33 | 1.54
5 | KAK | 1.45 | 1.75 | 1.24 | 1.48
5 | PKM | 0.79 | 1.41 | 1.89 | 1.03
7 | KMP | 0.85 | 1.77 | 1.00 | 1.21
7 | MMK | 0.87 | 1.65 | 1.33 | 1.28
7 | MAK | 1.22 | 2.20 | 1.35 | 1.59
7 | KAK | 1.50 | 1.80 | 1.32 | 1.54
7 | PKM | 0.81 | 1.34 | 0.94 | 1.03
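The per-run times above are wall-clock measurements. A minimal way to collect comparable figures, assuming scikit-learn's KMeans as the baseline implementation (an assumption here), is:

```python
# Hedged sketch: timing a single clustering run; repeat and average in practice.
import time
from sklearn.cluster import KMeans
from sklearn.datasets import load_wine

X, _ = load_wine(return_X_y=True)
start = time.perf_counter()
KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(f"elapsed: {time.perf_counter() - start:.2f} s")
```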
Item | Wine (K = 3) | Yeast (K = 7) | Iris (K = 3) | Total |
---|---|---|---|---|
KMP | 11 | 20 | 13 | 46 |
MMK | 12 | 20 | 9 | 41 |
MAK | 7 | 21 | 9 | 37 |
KAK | 11 | 19 | 10 | 40 |
PKM | 3 | 10 | 3 | 16 |
Item | K = 2 | K = 3 | K = 4 | K = 5 | K = 6 | K = 7 | K = 8 | K = 9 |
---|---|---|---|---|---|---|---|---|
Abi.-DB | 0.6400 | 0.6700 | 0.4000 | 0.5500 | 0.5100 | 0.4900 | 0.5500 | 0.7900
Abi.-Silhouette | 0.1513 | 0.2609 | 0.2683 | 0.2073 | 0.1964 | 0.1931 | 0.1847 | 0.1945
Abi.-SSE | 84.0000 | 87.0000 | 94.0000 | 90.0000 | 92.0000 | 91.0000 | 95.0000 | 96.0000
PKM-DB | 0.4800 | 0.6800 | 0.2500 | 0.5100 | 0.3400 | 0.4200 | 0.6400 | 0.4400
PKM-Silhouette | 0.1654 | 0.1510 | 0.1223 | 0.1744 | 0.1451 | 0.1324 | 0.1247 | 0.1642
PKM-SSE | 88.0000 | 85.0000 | 84.0000 | 97.0000 | 94.0000 | 97.0000 | 99.0000 | 94.0000
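The three cluster validity indices tabulated above (Davies-Bouldin index, silhouette coefficient, and SSE) can be reproduced for any k-means result with scikit-learn; the blob dataset and the range of K below are placeholders, not the experimental setup used in the paper.

```python
# CVI computation sketch with scikit-learn metrics.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import davies_bouldin_score, silhouette_score

X, _ = make_blobs(n_samples=1000, centers=4, random_state=0)   # placeholder data
for k in range(2, 10):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    print(k,
          round(davies_bouldin_score(X, km.labels_), 4),   # DB index, lower is better
          round(silhouette_score(X, km.labels_), 4),       # silhouette, higher is better
          round(km.inertia_, 1))                           # SSE to assigned centroids
```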
Cluster | Objects | TP | TN | FP | Precision (%) | Recall (%) | F-Measure (%) |
---|---|---|---|---|---|---|---|
3 | 50,000 | 30,000 | 914 | 747 | 97.04 | 97.57 | 97.31 |
4 | 50,000 | 30,000 | 874 | 642 | 97.17 | 97.90 | 97.54 |
7 | 50,000 | 30,000 | 3476 | 2415 | 89.62 | 92.06 | 91.06 |
Avg. | 50,000 | 30,000 | 1755 | 1268 | 94.61 | 96.01 | 95.30 |
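Precision, recall, and F-measure in the last three columns follow their usual definitions. The small helper below reproduces the K = 3 row when its two smaller count columns are read as false positives and false negatives, respectively; that reading is an assumption made here so the printed percentages match, since the table's own column labels differ.

```python
def prf(tp, fp, fn):
    """Precision, recall, F-measure in percent from confusion counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    return tuple(round(100 * v, 2) for v in (precision, recall, f_measure))

# Reading the K = 3 row's smaller counts as FP = 914 and FN = 747 (an assumption)
# reproduces its printed percentages:
print(prf(30000, 914, 747))   # -> (97.04, 97.57, 97.31)
```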
Dataset (with Outlier Rate in %) | Mean Square Error (Abi.) | Mean Square Error (PKM)
---|---|---
KDDCUP99 (0%) | 853.21 | 614.45 |
KDDCUP99 (5%) | 1847.79 | 1518.16 |
KDDCUP99 (10%) | 1974.45 | 1644.55 |
KDDCUP99 (15%) | 2674.84 | 1879.68 |
KDDCUP99 (20%) | 3074.58 | 2378.17 |
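One plausible way to produce mean square errors under increasing outlier rates, as in the table above, is to inject a given proportion of uniform-random points before clustering and then report the mean squared distance of the original points to their assigned centroids. The injection and scoring details in this sketch are assumptions for illustration, not the authors' documented protocol.

```python
# Assumed contamination-and-scoring protocol for MSE under outlier rates.
import numpy as np
from sklearn.cluster import KMeans

def mse_with_outliers(X, k, outlier_rate, seed=0):
    """Cluster X plus injected uniform-random outliers; score MSE on the clean points."""
    rng = np.random.default_rng(seed)
    n_out = int(outlier_rate * len(X))
    noise = rng.uniform(X.min(axis=0), X.max(axis=0), size=(n_out, X.shape[1]))
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(np.vstack([X, noise]))
    diff = X - km.cluster_centers_[km.predict(X)]
    return float(np.mean(np.sum(diff * diff, axis=1)))

# Example call (X would be the preprocessed dataset):
# print(mse_with_outliers(X, k=7, outlier_rate=0.10))
```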