About Granular Rough Computing—Overview of Decision System Approximation Techniques and Future Perspectives
Faculty of Mathematics and Computer Science, University of Warmia and Mazury in Olsztyn, 10-710 Olsztyn, Poland
Algorithms 2020, 13(4), 79; https://doi.org/10.3390/a13040079
Received: 2 February 2020 / Revised: 26 March 2020 / Accepted: 27 March 2020 / Published: 29 March 2020
(This article belongs to the Special Issue Granular Computing: From Foundations to Applications)
Granular computing is a broad discipline whose core idea is to operate on groups of similar objects formed according to a fixed similarity measure. The first references to granular computing appear in Zadeh's works on fuzzy set theory. Granular computing allows for very natural modelling of the world; it is quite likely that the human brain, while solving problems, performs granular computations on data collected from the senses. Researchers of this paradigm have demonstrated its wide applicability: among other things, granular methods are used for classification, regression, handling of missing values, feature selection, and data approximation. It is impossible to cite all methods based on granular computing, so we discuss only a selected group of techniques. In this article, we present a review of recently developed granulation techniques belonging to the family of approximation algorithms founded by Polkowski within the framework of rough set theory. Starting from Polkowski's basic standard granulation, we describe the concept-dependent, layered, and epsilon variants that we developed subsequently, as well as our recent homogeneous granulation. We present simple numerical examples and samples of research results, evaluating the effectiveness of these methods in terms of decision-system size reduction and preservation of the internal knowledge of the original data. The reduction in the number of objects achieved by our techniques while maintaining classification efficiency reaches 90 percent, for standard granulation with a kNN classifier (we achieve similar efficiency for the concept-dependent technique with the Naive Bayes classifier). The largest reduction in the size of the exhaustive rule set, at an efficiency level comparable to the original data, is 99 percent, obtained for concept-dependent granulation.
In the homogeneous variants, the reduction is below 60 percent, but the advantage of these techniques is that there is no need to search for optimal granulation parameters, as they are selected dynamically. We also describe potential directions for the development of granular computing techniques through the prism of the described methods.
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
MDPI and ACS Style
Artiemjew, P. About Granular Rough Computing—Overview of Decision System Approximation Techniques and Future Perspectives. Algorithms 2020, 13, 79. https://doi.org/10.3390/a13040079