Algorithms

28 April 2020

Evolution of SOMs’ Structure and Learning Algorithm: From Visualization of High-Dimensional Data to Clustering of Complex Data

M.B. Gorzałczany and F. Rudziński
Department of Electrical and Computer Engineering, Kielce University of Technology, 25-314 Kielce, Poland
* Author to whom correspondence should be addressed.
This article belongs to the Section Evolutionary Algorithms and Machine Learning

Abstract

In this paper, we briefly present several modifications and generalizations of the concept of self-organizing neural networks—usually referred to as self-organizing maps (SOMs)—to illustrate their advantages in applications that range from high-dimensional data visualization to complex data clustering. Starting from conventional SOMs, Growing SOMs (GSOMs), Growing Grid Networks (GGNs), the Incremental Grid Growing (IGG) approach, the Growing Neural Gas (GNG) method, as well as our two original solutions, i.e., Generalized SOMs with 1-Dimensional Neighborhood (GeSOMs with 1DN, also referred to as Dynamic SOMs (DSOMs)) and Generalized SOMs with Tree-Like Structures (GeSOMs with T-LSs), are discussed. They are characterized in terms of (i) the modification mechanisms used, (ii) the range of network modifications introduced, (iii) the structure regularity, and (iv) the data-visualization/data-clustering effectiveness. The performance of particular solutions is illustrated and compared by means of selected data sets. We also show that the proposed original solutions, i.e., GeSOMs with 1DN (DSOMs) and GeSOMs with T-LSs, outperform alternative approaches in various complex clustering tasks, providing up to a 20% increase in clustering accuracy. The contribution of this work is threefold. First, algorithm-oriented original computer implementations of the particular SOM generalizations are developed. Second, their detailed simulation results are presented and discussed. Third, the advantages of our aforementioned original solutions are demonstrated.

1. Introduction

Self-organizing neural networks—usually referred to as self-organizing maps (henceforward SOMs)—were introduced in the early 1980s by T. Kohonen (see, e.g., [,]), who presented them as “a new, effective software tool for the visualization of high-dimensional data” (the quotation from Kohonen []). The visualization is performed by means of a topology-preserving mapping of the considered data into a low-dimensional display space (most often in the form of a two-dimensional, usually rectangular, grid; three-dimensional SOMs—due to difficulties with their visualization—have achieved limited success []). In turn, according to [], “feature mapping is conceptually different from clustering”, and thus the authors of [] conclude that a SOM “is not a clustering method, but which often lends ideas to clustering algorithms” (see also a discussion in []). However, it is worth stressing that since the introduction of SOMs, their initial concept (including their structure and learning algorithm) has evolved significantly, and thus the range of their effective applications (including complex clustering problems) has broadened substantially. SOMs constitute an active research field; see, e.g., “a varied collection of studies that testify to the vitality of the field of self-organizing systems for data analysis. Most of them relate to the core models in the field, namely self-organizing maps (SOMs)” (the quotation from the Preface of the recently published []).
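To connect the discussion that follows to the underlying learning rule, here is a minimal Python sketch of one WTM (Winner Takes Most) learning step of a conventional SOM. It is our own illustration, not the paper’s implementation: the function name, the Gaussian neighborhood, and the parameters lr and sigma are assumptions; the paper’s exact settings may differ.

import numpy as np

def wtm_step(W, grid_pos, x, lr=0.1, sigma=1.0):
    """One Winner-Takes-Most update of a conventional SOM.

    W: (m, d) neuron weight vectors; grid_pos: (m, 2) fixed positions
    of the neurons on the display grid; x: (d,) input sample.
    """
    # Winner: the neuron whose weight vector is closest to the input.
    winner = np.argmin(np.linalg.norm(W - x, axis=1))
    # Gaussian neighborhood on the grid, centered at the winner.
    g = np.linalg.norm(grid_pos - grid_pos[winner], axis=1)
    h = np.exp(-g**2 / (2 * sigma**2))
    # Move every neuron toward x, scaled by its neighborhood value.
    return W + lr * h[:, None] * (x - W)

Because neighboring grid positions receive similar updates, repeated application of this step yields the topology-preserving mapping described above.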
The objective of this paper is to briefly present several modifications and generalizations of the concept of SOMs (including our two original solutions)—starting from the simplest and ending with the most advanced ones—in order to illustrate their advantages in applications ranging from high-dimensional data visualization to complex data clustering. After a brief presentation of a conventional SOM, two approaches that are able to automatically increase the number of neurons in their networks are outlined: the Growing SOM (GSOM) [] and the Growing Grid Network (GGN) []. Next, a solution additionally equipped with the ability to add or remove some topological connections in the network is briefly presented: the Incremental Grid Growing (IGG) approach []. In turn, the Growing Neural Gas (GNG) approach [], which can also remove some neurons from the network, is outlined. Finally, our two original solutions, which are able to automatically adjust the number of neurons in the network (to “grow”/reduce its structure), to disconnect it into substructures, as well as to reconnect some of them again, are outlined. They include Generalized SOMs with 1-Dimensional Neighborhood (GeSOMs with 1DN []) operating on splitting-merging neuron chains (they are also referred to as Dynamic SOMs (DSOMs) in [,]) and Generalized SOMs with splitting-merging Tree-Like Structures (GeSOMs with T-LSs [,,,,,]). The operation and performance of particular solutions are illustrated and compared by means of selected data sets.
The contribution of this work is threefold. First, algorithm-oriented original computer implementations of the particular SOM generalizations are developed. Second, their detailed simulation results are presented and discussed. Third, the advantages of our aforementioned original solutions are demonstrated. Our simulation-based presentation of particular solutions is fully justified since—according to Kohonen himself [] and Cottrell et al. []—“The SOM algorithm is very astonishing. On the one hand, it is very simple to write down and to simulate, its practical properties are clear and easy to observe. But, on the other hand, its theoretical properties still remain without proof in the general case, despite the great efforts of several authors” as well as “…the Kohonen algorithm is surprisingly resistant to a complete mathematical study. As far as we know, the only case where a complete analysis has been achieved is the one dimensional case (the input space has dimension 1) …”.

3. The Proposed Generalized SOMs with Splitting-Merging Structures

Finally, we would like to briefly present our two original approaches, i.e., (i) Generalized SOMs with 1-Dimensional Neighborhood (GeSOMs with 1DN [], also referred to as Dynamic SOMs (DSOMs) in [,]), which operate on splitting-merging neuron chains (see Figure 15), and (ii) Generalized SOMs with splitting-merging Tree-Like Structures (GeSOMs with T-LSs [,,,,,]; see Figure 16). The main features, main objectives, general concept, and implementation of both approaches are as follows:
Figure 15. Generalized SOMs (GeSOM) with 1DN (DSOM).
Figure 16. GeSOM with T-LSs.
Main features: Our approaches work in a fully unsupervised way, i.e., (i) they do not require the number of clusters to be predefined and (ii) they use unlabeled data.
Main objectives: (i) an automatic determination of the number of clusters in a given data set and (ii) an automatic generation of multi-point prototypes for particular clusters.
General concept: In comparison with conventional SOMs, the proposed approaches are equipped with three additional mechanisms (allowing for data segmentation), namely: (i) automatic adjustment of the number of neurons in the network (removing low-active neurons and adding new neurons in the areas of the existing high-active neurons), (ii) automatic disconnection of the tree-like structure into subnetworks, and (iii) automatic reconnection of some of the subnetworks, preserving—in the case of GeSOMs with T-LSs—the no-loop spanning-tree properties. Such generalized SOMs are able to detect data clusters of virtually any shape and density (including both volumetric and thin, shell-type clusters) by locating a single disconnected subnetwork in the area of the data space occupied by a given cluster. Hence, the number of automatically generated subnetworks is equal to the number of clusters. Moreover, the set of neurons in a given subnetwork is a multi-prototype of the corresponding cluster. Such prototypes can be directly used in clustering/classification tasks using the well-known nearest multi-prototype algorithms [].
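To make the multi-prototype idea concrete, the following is a minimal Python sketch of nearest multi-prototype classification in the spirit of the algorithms cited above; the function name, data layout, and the use of the Euclidean distance are our illustrative assumptions, not the authors’ implementation. Each cluster is represented by the weight vectors of its subnetwork, and a sample is assigned to the cluster owning the closest prototype.

import numpy as np

def assign_by_nearest_prototype(x, multi_prototypes):
    """Assign sample x to the cluster whose closest prototype is nearest.

    x: (d,) sample vector.
    multi_prototypes: list of (m_k, d) arrays, one per subnetwork,
    holding the weight vectors of that subnetwork's neurons.
    """
    best_cluster, best_dist = -1, np.inf
    for k, prototypes in enumerate(multi_prototypes):
        # Distance from x to the closest neuron of subnetwork k.
        d = np.min(np.linalg.norm(prototypes - x, axis=1))
        if d < best_dist:
            best_cluster, best_dist = k, d
    return best_cluster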
Implementation: Figure 2 (the Initialization and WTM-algorithm blocks as well as the blocks concerning GeSOM with 1DN and GeSOM with T-LSs) presents generalizations of the conventional-SOM learning algorithm for both structures considered. Both approaches start with a few neurons (usually two—see the Initialization block of Figure 2). In turn, after each learning epoch, mechanisms (i)–(iii) listed in the General-concept part of this section are implemented by the conditional activation of five successive operations (see also Figure 2 and, for details, the afore-listed references; a code sketch illustrating two of these operations follows the list below):
1. The removal of single, low-active neurons while preserving the network continuity: a given neuron is removed if its activity—measured by the number of its wins—falls below an assumed level.
2. The disconnection of the network (subnetwork) into two subnetworks: two neighboring neurons are disconnected if the Euclidean distance between them exceeds an assumed level.
3. The removal of very small subnetworks with two or three neurons (usually, they represent noise).
4. The insertion of additional neurons into the neighborhood of high-active neurons in order to take over some of their activity (this distributes the activity more evenly across the network).
5. The reconnection of two selected subnetworks:
5.1. The GeSOM with 1DN case: the nearest end-neurons of two neighboring sub-chains are connected if the Euclidean distance between them is below an assumed level.
5.2. The GeSOM with T-LSs case: the nearest single neurons of two neighboring subnetworks are connected if the Euclidean distance between them is below an assumed level (this mechanism supports the growing tree-like structure of the network).
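As an illustration only—a minimal sketch of ours, not the authors’ implementation—operations 2 and 5.1 for the chain-based GeSOM with 1DN can be expressed in Python as follows; the function names, the chain-ordered array layout, and the thresholds split_d and merge_d are our assumptions:

import numpy as np

def split_chain(W, split_d):
    """Operation 2: split a neuron chain into sub-chains wherever the
    Euclidean distance between consecutive neurons exceeds split_d.

    W: (m, d) array of chain-ordered weight vectors.
    Returns a list of index lists, one per sub-chain.
    """
    chains, current = [], [0]
    for j in range(1, len(W)):
        if np.linalg.norm(W[j] - W[j - 1]) > split_d:
            chains.append(current)
            current = []
        current.append(j)
    chains.append(current)
    return chains

def find_mergeable_pair(W, chains, merge_d):
    """Operation 5.1: find two sub-chains whose nearest end-neurons are
    closer than merge_d; they are candidates for reconnection."""
    for a in range(len(chains)):
        for b in range(a + 1, len(chains)):
            ends_a = (chains[a][0], chains[a][-1])
            ends_b = (chains[b][0], chains[b][-1])
            d = min(np.linalg.norm(W[i] - W[j])
                    for i in ends_a for j in ends_b)
            if d < merge_d:
                return a, b
    return None

In the full algorithm, such a candidate pair would be reconnected by linking its two nearest end-neurons, and the remaining operations (low-active-neuron removal, small-subnetwork pruning, neuron insertion) would be applied in the same conditional manner.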
Figure 17a–i present the selected stages of the evolution of GeSOM with 1DN (DSOM) for the more complex data set of Figure 6a. Figure 17j,k illustrate the adjustment of the number of sub-chains (finally equal to 5, i.e., the number of clusters) and the number of neurons (finally equal to 243) in the network as the learning progresses. Figure 17l shows the envelope of the distance histogram $H_j^{\mathrm{dist}}$ between two neighboring neurons Nos. $j$ and $j+1$ along the neuron chain ($j = 1, 2, \ldots, m-1$):

$$H_j^{\mathrm{dist}} = d(\mathbf{w}_j, \mathbf{w}_{j+1}). \qquad (5)$$
Figure 17. The evolution of GeSOM with 1DN (DSOM) in the synthetic data set of Figure 6a in learning epochs: (a) No. 0 (start of learning), (b) No. 5, (c) No. 10, (d) No. 20, (e) No. 50, (f) No. 100, (g) No. 300, (h) No. 500, and (i) No. 1000 (end of learning); moreover, plots of the number of sub-chains (j) and the number of neurons (k) vs. epoch number, (l) the distance histogram (5) for the network of Figure 17i, as well as (m) the network in learning epoch No. 1000 for a different starting point of the learning algorithm.
The lower the histogram bars, the closer the corresponding neurons are situated, and thus the data they represent belong to more compact clusters. The distance histogram illustrates the distribution of the original data in particular clusters and shows distinct borders (high single bars) between clusters.
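As a minimal illustration (our code, not the authors’), the histogram of Equation (5) can be computed directly from the chain-ordered weight vectors:

import numpy as np

def distance_histogram(W):
    """H_j^dist = d(w_j, w_{j+1}) for a chain-ordered weight array W
    of shape (m, d); returns the m-1 consecutive-neuron distances.
    High single bars mark borders between clusters."""
    return np.linalg.norm(np.diff(W, axis=0), axis=1)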
In comparison with the conventional SOM and its previously presented generalizations, our approach demonstrates an interesting and advantageous feature directly shown in Figure 17j,k. Namely, the “unfolding” stage of the network is almost eliminated. The system almost immediately increases its number of neurons to the desired level and concentrates on its essential task, i.e., detecting data clusters by appropriate distribution of neuron sub-chains in the data set.
The splitting-merging neuron chains have, however, some disadvantages regarding multi-prototype generation for volume clusters. The corresponding neuron sub-chain, which “twists” within the volume of the cluster, may assume different shapes at the end of learning for different starting points of the learning algorithm—compare Figure 17i with Figure 17m for clusters 1 and 2. This may result in a slightly distorted image of the data distribution in volume clusters.
The aforementioned disadvantages of GeSOMs with 1DN are eliminated in their extension in the form of Generalized SOMs with splitting-merging Tree-Like Structures (GeSOMs with T-LSs), which—as their name implies—operate on tree-like networks. Figure 18a–i show the selected stages of the evolution of GeSOM with T-LSs for the synthetic data set of Figure 6a. Figure 18j,k illustrate the adjustment of the number of network substructures (finally equal to 5, i.e., the number of clusters) and the number of neurons (finally equal to 247) in the network during the learning process. This time, the multi-prototypes for volume clusters—represented by tree-like structures—are much more evenly distributed within particular clusters (compared with the chain-based approach).
Figure 18. The evolution of GeSOM with T-LSs in the synthetic data set of Figure 6a in learning epochs: (a) No. 0 (start of learning), (b) No. 5, (c) No. 10, (d) No. 20, (e) No. 50, (f) No. 100, (g) No. 300, (h) No. 500, and (i) No. 1000 (end of learning), as well as plots of the number of substructures (j) and the number of neurons (k) vs. epoch number.
Similarly to GeSOM with 1DN, GeSOM with T-LSs also almost eliminates the “unfolding” stage in its operation. Figure 18j,k show that the system almost immediately increases its number of neurons and starts discovering clusters in the data. Already at the 10th learning epoch (out of 1000)—see Figure 18c—the system develops a fairly precise image of the cluster distribution in the considered data set. We can conclude that GeSOM with T-LSs is a volume-cluster-oriented version of GeSOM with 1DN.
We demonstrated that the proposed original GeSOMs with 1DN (DSOMs) and GeSOMs with T-LSs outperform alternative approaches in various complex clustering tasks. In particular:
(a) in [] we applied our GeSOMs with 1DN (DSOMs) to WWW-newsgroup-document clustering (a collection of 19,997 documents was considered); our approach generated 58.41% correct decisions, whereas alternative approaches achieved from 33.98% to 49.12%;
(b) in [] we tested our GeSOMs with 1DN (DSOMs) in terms of their ability to correctly determine the number of clusters in a given data set (8 benchmark data sets available from the University of California, Irvine (UCI) Machine Learning Repository at https://archive.ics.uci.edu/ml were considered); our approach achieved 100% correct decisions for 6 of the 8 considered data sets, whereas an alternative method reached such accuracy for only 1 data set;
(c) in [] we applied our GeSOMs with T-LSs to microarray leukemia gene data clustering (the benchmark leukemia cancer data set containing 7129 genes and 72 samples was considered); our approach achieved 98.6% correct decisions regarding the cluster assignments of particular data samples, whereas an alternative method gave only 93.14% accuracy;
(d) in [] we applied our GeSOMs with T-LSs to WWW-document clustering (a collection of 548 abstracts of technical reports and its 476-element subset, both available from the WWW server of the Department of Computer Science, University of Rochester, USA at https://www.cs.rochester.edu/trs, were considered); our approach obtained 87.23% and 84.87% clustering accuracy for the bigger and the smaller collection, respectively, whereas alternative approaches gave from 36.68% to 65.33% accuracy for the bigger collection and from 38.45% to 69.96% for the smaller one;
(e) in [] our GeSOMs with T-LSs were used to uncover informative genes from colon cancer gene expression data via multi-step clustering (the benchmark colon cancer microarray data set containing 6500 genes and 62 samples was considered); our approach generated 88.71% correct decisions regarding the clustering of samples, whereas alternative methods achieved from 51.61% to 85.73% accuracy;
(f) in [] we applied our GeSOMs with T-LSs to electricity consumption data clustering for load profiling (the benchmark Irish Commission for Energy Regulation data set containing 4066 customer profiles with 25,728 recordings per profile was considered); our approach achieved 94.86% correct decisions, whereas alternative methods generated from 89.77% to 94.76%;
(g) finally, in [] we applied both of our approaches to microarray lymphoma gene data clustering (the benchmark lymphoma cancer data set containing 4026 genes and 62 samples was considered); our approaches achieved 91.9% (GeSOMs with 1DN) and 93.6% (GeSOMs with T-LSs) correct decisions, whereas alternative techniques gave from 61.3% to 75.8%.
Concluding, our approaches provide up to a 20% increase in clustering accuracy in comparison with alternative methods. It is worth emphasizing that the alternative methods require the number of clusters to be predefined in order to perform the clustering. Thus, they are favored in comparison with our approaches, which aim at the automatic detection of both the number of clusters and the cluster multi-point prototypes in a given data set.
There are two limitations of our approaches: (i) the inability to visualize the distribution of data clusters—particularly in the case of GeSOMs with T-LSs—which is the objective of our future work, and (ii) the relatively high computational complexity of both approaches (this is, however, typical for the overwhelming majority of unsupervised-learning-based techniques).

4. Conclusions

In this paper, we briefly present several modifications and generalizations of the concept of SOMs in order to illustrate their advantages in applications that range from high-dimensional data visualization to complex data clustering. Starting from conventional SOMs, Growing SOMs (GSOMs), Growing Grid Networks (GGNs), the Incremental Grid Growing (IGG) approach, the Growing Neural Gas (GNG) method, as well as our two original solutions, i.e., Generalized SOMs with 1-Dimensional Neighborhood (GeSOMs with 1DN, also referred to as Dynamic SOMs (DSOMs)) and Generalized SOMs with Tree-Like Structures (GeSOMs with T-LSs), are presented. Original computer implementations of particular solutions are developed and their detailed simulation results are shown. The performance of particular solutions is illustrated and compared by means of selected data sets. Table 1 summarizes the characteristics of the considered solutions in terms of (i) the modification mechanisms used, (ii) the range of network modifications introduced, (iii) the structure regularity, and (iv) the data-visualization/data-clustering effectiveness.
Table 1. Various modifications and generalizations of SOMs, their functionalities and data-visualization/data-clustering effectiveness.
Our GeSOMs with T-LSs have been effectively used in complex clustering tasks, including the clustering of huge amounts of gene expression data [,,], electricity consumption data [], and WWW documents []. Our future work aims at equipping both of our approaches (in particular, GeSOMs with T-LSs) with data visualization mechanisms while maintaining their full capacity for automatic and precise clustering of data. We hope that such a generalization will contribute to bridging the two SOM-related research areas of dimensionality-reduction-based data visualization and complex data clustering.

Author Contributions

Conceptualization, M.B.G. and F.R.; Formal analysis, M.B.G. and F.R.; Investigation, M.B.G. and F.R.; Methodology, M.B.G. and F.R.; Writing—original draft, M.B.G. and F.R.; Writing—review & editing, M.B.G. and F.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

The authors would like to thank J. Piekoszewski for performing some numerical experiments for GeSOMs with T-LSs.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kohonen, T. Self-Organizing Maps, 3rd ed.; Springer: Berlin/Heidelberg, Germany, 2001. [Google Scholar]
  2. Kohonen, T. Self-organized formation of topologically correct feature maps. Biol. Cybern. 1982, 43, 59–69. [Google Scholar] [CrossRef]
  3. Gorricha, J.; Lobo, V. On the Use of Three-Dimensional Self-Organizing Maps for Visualizing Clusters in Georeferenced Data. In Lecture Notes in Geoinformation and Cartography; Springer: Berlin/Heidelberg, Germany, 2011; pp. 61–75. [Google Scholar]
  4. Pal, N.R.; Bezdek, J.C.; Tsao, E.C.K. Generalized clustering networks and Kohonen’s self-organizing scheme. IEEE Trans. Neural Netw. 1993, 4, 549–557. [Google Scholar] [CrossRef] [PubMed]
  5. Ultsch, A. Clustering with SOM: U*C. In Proceedings of the Workshop on Self-Organizing Maps, Paris, France, 5–8 September 2005; pp. 75–82. [Google Scholar]
  6. Vellido, A.; Gibert, K.; Angulo, C.; Guerrero, M.J.D. Advances in Self-Organizing Maps, Learning Vector Quantization, Clustering and Data Visualization. In Proceedings of the 13th International Workshop, WSOM+ 2019, Barcelona, Spain, 26–28 June 2019; Volume 976. [Google Scholar]
  7. Rodrigues, J.S.; Almeida, L.B. Improving the learning speed in topological maps of patterns. In The International Neural Network Society (INNS), the IEEE Neural Network Council Cooperating Societies, International Neural Network Conference (INNC); Springer: Paris, France, 1990; pp. 813–816. [Google Scholar]
  8. Fritzke, B. Growing grid—A self-organizing network with constant neighborhood range and adaptation strength. Neural Process. Lett. 1995, 2, 9–13. [Google Scholar] [CrossRef]
  9. Blackmore, J.; Miikkulainen, R. Incremental grid growing: Encoding high-dimensional structure into a two-dimensional feature map. In Proceedings of the IEEE International Conference on Neural Networks, San Francisco, CA, USA, 28 March–1 April 1993; Volume 1, pp. 450–455. [Google Scholar]
  10. Fritzke, B. A growing neural gas network learns topologies. In Advances in Neural Information Processing Systems; MIT Press: Cambridge, MA, USA, 1995; Volume 7, pp. 625–632. [Google Scholar]
  11. Gorzałczany, M.B.; Rudziński, F. Generalized Self-Organizing Maps for Automatic Determination of the Number of Clusters and Their Multiprototypes in Cluster Analysis. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 2833–2845. [Google Scholar] [CrossRef] [PubMed]
  12. Gorzałczany, M.B.; Rudziński, F. Cluster analysis via dynamic self-organizing neural networks. In Artificial Intelligence and Soft Computing—ICAISC 2006; Rutkowski, L., Tadeusiewicz, R., Zadeh, L.A., Żurada, J.M., Eds.; Springer: Berlin/Heidelberg, Germany, 2006; Volume 4029, pp. 593–602. [Google Scholar]
  13. Gorzałczany, M.B.; Rudziński, F. WWW-newsgroup-document clustering by means of dynamic self-organizing neural networks. In Artificial Intelligence and Soft Computing—ICAISC 2008; Rutkowski, L., Tadeusiewicz, R., Zadeh, L.A., Żurada, J.M., Eds.; Springer: Berlin/Heidelberg, Germany, 2008; Volume 5097, pp. 40–51. [Google Scholar]
  14. Gorzałczany, M.B.; Piekoszewski, J.; Rudziński, F. Generalized tree-like self-organizing neural networks with dynamically defined neighborhood for cluster analysis. In Artificial Intelligence and Soft Computing—ICAISC 2014; Rutkowski, L., Korytkowski, M., Scherer, R., Tadeusiewicz, R., Zadeh, L.A., Żurada, J.M., Eds.; Springer: Berlin/Heidelberg, Germany, 2014; Volume 8468, pp. 725–737. [Google Scholar]
  15. Gorzałczany, M.B.; Piekoszewski, J.; Rudziński, F. Microarray leukemia gene data clustering by means of generalized self-organizing neural networks with evolving tree-like structures. In Artificial Intelligence and Soft Computing—ICAISC 2015; Rutkowski, L., Korytkowski, M., Scherer, R., Tadeusiewicz, R., Zadeh, L.A., Żurada, J.M., Eds.; Springer: Berlin/Heidelberg, Germany, 2015; Volume 9119, pp. 15–25. [Google Scholar]
  16. Gorzałczany, M.B.; Rudziński, F.; Piekoszewski, J. Generalized SOMs with splitting-merging tree-like structures for WWW-document clustering. In Proceedings of the 2015 Conference of the International Fuzzy Systems Association and the European Society for Fuzzy Logic and Technology (IFSA-EUSFLAT-15), Gijón, Spain, 15–19 June 2015; Alonso, J.M., Bustince, H., Reformat, M., Eds.; Atlantis Press: Gijón, Spain, 2015; Volume 89, pp. 186–193. [Google Scholar]
  17. Gorzałczany, M.B.; Rudziński, F.; Piekoszewski, J. Gene expression data clustering using tree-like SOMs with evolving splitting-merging structures. In Proceedings of the IEEE World Congress on Computational Intelligence (IEEE WCCI 2016), International Joint Conference on Neural Networks (IJCNN), Vancouver, BC, Canada, 24–29 July 2016; pp. 3666–3673. [Google Scholar]
  18. Gorzałczany, M.B.; Piekoszewski, J.; Rudziński, F. Uncovering informative genes from colon cancer gene expression data via multi-step clustering based on generalized SOMs with splitting-merging structures. In Proceedings of the 2019 IEEE Symposium Series on Computational Intelligence (SSCI), Xiamen, China, 6–9 December 2019; pp. 533–539. [Google Scholar]
  19. Gorzałczany, M.B.; Piekoszewski, J.; Rudziński, F. Electricity Consumption Data Clustering for Load Profiling Using Generalized Self-Organizing Neural Networks with Evolving Splitting-Merging Structures. In Proceedings of the 2018 IEEE 27th International Symposium on Industrial Electronics (ISIE), Cairns, Australia, 13–15 June 2018; pp. 747–752. [Google Scholar]
  20. Cottrell, M.; Fort, J.; Pages, G. Theoretical aspects of the SOM algorithm. Neurocomputing 1998, 21, 119–138. [Google Scholar] [CrossRef]
  21. Fritzke, B. Growing cell structures—A self-organizing network for unsupervised and supervised learning. Neural Netw. 1994, 7, 1441–1460. [Google Scholar] [CrossRef]
  22. Martinetz, T. Competitive Hebbian learning rule forms perfectly topology preserving maps. In Proceedings of the ICANN ’93, Amsterdam, The Netherlands, 13–16 September 1993; Gielen, S., Kappen, B., Eds.; Springer: London, UK, 1993; pp. 427–434. [Google Scholar]
  23. Martinetz, T.; Schulten, K. Topology representing networks. Neural Netw. 1994, 7, 507–522. [Google Scholar] [CrossRef]
  24. Ultsch, A.; Thrun, M.C. Credible visualizations for planar projections. In Proceedings of the 2017 12th International Workshop on Self-Organizing Maps and Learning Vector Quantization, Clustering and Data Visualization (WSOM), Nancy, France, 28–30 June 2017; pp. 1–5. [Google Scholar]
  25. Thrun, M. Projection-Based Clustering through Self-Organization and Swarm Intelligence; Springer Vieweg: Berlin, Germany, 2018. [Google Scholar]
  26. Onishi, A. Landmark map: An extension of the self-organizing map for a user-intended nonlinear projection. Neurocomputing 2020, 388, 228–245. [Google Scholar] [CrossRef]
  27. Hu, R.; Ratner, K.; Ratner, E.; Miche, Y.; Björk, K.M.; Lendasse, A. ELM-SOM+: A continuous mapping for visualization. Neurocomputing 2019, 365, 147–156. [Google Scholar] [CrossRef]
  28. Boualem, M.; Chemseddine, R.; Djamel, B.; Belkacem, O.B. A novel gearbox fault feature extraction and classification using Hilbert empirical wavelet transform, singular value decomposition, and SOM neural network. J. Vib. Control 2018, 24, 2512–2531. [Google Scholar]
  29. Rezaei, F.; Ahmadzadeh, M.; Safavi, H. SOM-DRASTIC: Using self-organizing map for evaluating groundwater potential to pollution. Stoch. Environ. Res. Risk Assess. 2017, 31, 1941–1956. [Google Scholar] [CrossRef]
  30. Feng, N.; Yang, L.; XueYong, J.; LiYan, D.; YongJie, C. Application of improved SOM network in gene data cluster analysis. Measurement 2019, 145, 370–378. [Google Scholar]
  31. Delgado, S.; Higuera, C.; Calle-Espinosa, J.; Morán, F.; Montero, F. A SOM prototype-based cluster analysis methodology. Expert Syst. Appl. 2017, 88, 14–28. [Google Scholar] [CrossRef]
  32. Prasad, H. Mixed data clustering using dynamic growing hierarchical self-organizing map with improved LM learning. Int. Res. J. Eng. Technol. 2016, 3, 150–156. [Google Scholar]
  33. Hung, W.L.; Yang, J.H.; Song, I.W.; Chang, Y.C. A modified self-updating clustering algorithm for application to dengue gene expression data. Commun. Stat. Simul. Comput. 2019. [Google Scholar] [CrossRef]
  34. Febrita, R.E.; Mahmudy, W.F.; Wibawa, A.P. High Dimensional Data Clustering using Self-Organized Map. Knowl. Eng. Data Sci. 2019, 2, 31–40. [Google Scholar] [CrossRef]
  35. Vesanto, J.; Alhonierni, E. Clustering of the self-organizing map. IEEE Trans. Neural Netw. 2000, 11, 586–600. [Google Scholar] [CrossRef]
  36. Brugger, D.; Bogdan, M.; Rosenstiel, W. Automatic cluster detection in Kohonen’s SOM. IEEE Trans. Neural Netw. 2008, 19, 442–459. [Google Scholar] [CrossRef]
  37. Tasdemir, K.; Merenyi, E. Exploiting data topology in visualization and clustering of self-organizing maps. IEEE Trans. Neural Netw. 2009, 20, 549–562. [Google Scholar] [CrossRef]
  38. Tasdemir, K.; Milenov, P.; Tapsall, B. Topology-based hierarchical clustering of self-organizing maps. IEEE Trans. Neural Netw. 2011, 22, 474–485. [Google Scholar] [CrossRef]
  39. Cabanes, G.; Bennani, Y. Learning the number of clusters in self organizing map. In Self-Organizing Map; Matsopoulos, G.K., Ed.; Intech: Vienna, Austria, 2010; pp. 15–28. [Google Scholar]
  40. Wu, S.; Chow, T.W.S. Self-organizing-map based clustering using a local clustering validity index. Neural Process. Lett. 2003, 17, 253–271. [Google Scholar] [CrossRef]
  41. Bezdek, J.C.; Reichherzer, T.R.; Lim, G.S.; Attikiouzel, Y. Multiple-prototype classifier design. IEEE Trans. Syst. Man Cybern. Part C 1998, 28, 67–79. [Google Scholar] [CrossRef]
