Entropy Weight Methods of Combining Classifiers in Distributed Learning Area

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Multidisciplinary Applications".

Deadline for manuscript submissions: closed (31 January 2024) | Viewed by 8654

Special Issue Editor


Prof. Dr. Małgorzata Przybyła-Kasperek
Guest Editor
Institute of Computer Science, University of Silesia in Katowice, 40-007 Katowice, Poland
Interests: decision-making systems; dispersed data; distributed learning; rough sets; artificial intelligence; data mining; expert systems

Special Issue Information

Dear Colleagues, 

Decision-making processes in which an optimized decision is taken on the basis of observed data are common in every aspect of human activity. Machine learning models can in some cases mimic human decision-making processes, classify objects, and even enable better decisions or classifications than humans can achieve. Making decisions using data from multiple sources is known to be more effective than relying on a single source: the use of multiple data sources makes it possible to gain a comprehensive understanding of the entire case and to avoid bias. It is common that knowledge on a subject is not limited to one source but is collected in fragments by independent units. However, classification based on multiple data sources also comes with challenges, including data security, poor data quality, and data inconsistency, among others.

The main focus of this Special Issue is the application of entropy to classification tasks based on dispersed data. We welcome the submission of papers addressing issues in the field of combining classifiers, the application of classifier ensembles to real-world problems, new methods of combining classifier predictions, the development of complex classifiers, and other related topics. Research in the area of distributed learning, particularly federated learning, is also aligned with the goals of this Issue.

Prof. Dr. Małgorzata Przybyła-Kasperek
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • distributed learning
  • classifier ensembles
  • entropy in fusion methods
  • machine learning
  • classification methods
  • group decision-making
  • conflict analysis
  • dispersed data

Published Papers (6 papers)


Research

26 pages, 618 KiB  
Article
Ensemble Classifier Based on Interval Modeling for Microarray Datasets
by Urszula Bentkowska, Wojciech Gałka, Marcin Mrukowicz and Aleksander Wojtowicz
Entropy 2024, 26(3), 240; https://doi.org/10.3390/e26030240 - 8 Mar 2024
Viewed by 670
Abstract
The purpose of the study is to propose a multi-class ensemble classifier using interval modeling dedicated to microarray datasets. Uncertainty intervals are created for the individual prediction values of the constituent classifiers, and the obtained intervals are then aggregated using interval-valued aggregation functions. The proposed heterogeneous ensemble employs Random Forest, Support Vector Machines, and Multilayer Perceptron as component classifiers and uses cross-entropy to select the optimal classifier. Moreover, orders on intervals are applied to determine the decision class of an object. The applied interval-valued aggregation functions are tested with respect to optimizing the performance of the considered ensemble classifier. Comparative experiments show that the proposed model outperforms other well-known classifiers as well as its component classifiers, demonstrating the efficacy of cross-entropy in ensemble model construction.
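
As an illustration of the interval idea described in the abstract, the sketch below turns single prediction scores into uncertainty intervals and aggregates them per class; the interval width, the arithmetic-mean aggregation, and the midpoint-based order are assumptions for the example, not the authors' exact construction.

```python
# A minimal sketch (not the authors' exact construction) of combining classifier
# outputs via uncertainty intervals and an interval-valued aggregation function.
# The interval width "eps" and the ordering rule are illustrative assumptions.
import numpy as np

def to_interval(score, eps=0.05):
    """Turn a single prediction score into an uncertainty interval, clipped to [0, 1]."""
    return max(0.0, score - eps), min(1.0, score + eps)

def aggregate_intervals(intervals):
    """Interval arithmetic mean: average lower bounds and upper bounds separately."""
    lows, highs = zip(*intervals)
    return float(np.mean(lows)), float(np.mean(highs))

def predict_class(per_classifier_scores, eps=0.05):
    """per_classifier_scores: array of shape (n_classifiers, n_classes) with class scores."""
    scores = np.asarray(per_classifier_scores)
    n_classes = scores.shape[1]
    aggregated = [
        aggregate_intervals([to_interval(s, eps) for s in scores[:, c]])
        for c in range(n_classes)
    ]
    # Order intervals by midpoint, breaking ties by upper bound (one possible admissible order).
    return max(range(n_classes), key=lambda c: ((aggregated[c][0] + aggregated[c][1]) / 2,
                                                aggregated[c][1]))

# Example: scores from three component classifiers for a 3-class problem.
scores = [[0.2, 0.5, 0.3],
          [0.1, 0.6, 0.3],
          [0.3, 0.4, 0.3]]
print(predict_class(scores))  # -> 1
```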

23 pages, 1485 KiB  
Article
Simulation Research on the Relationship between Selected Inconsistency Indices Used in AHP
by Tomasz Starczewski
Entropy 2023, 25(10), 1464; https://doi.org/10.3390/e25101464 - 19 Oct 2023
Viewed by 830
Abstract
The Analytic Hierarchy Process (AHP) is a widely used multi-criteria decision-making (MCDM) method. It is based on pairwise comparisons, which form the so-called Pairwise Comparison Matrix (PCM). PCMs usually contain some errors, which can influence the eventual results. In order to avoid incorrect priority values, Saaty introduced the inconsistency index (ICI) into the AHP. However, users of the AHP encounter many definitions of ICIs, whose values usually differ. Nevertheless, many of these indices are based on a similar idea, and the values of some pairs of indices are characterized by high correlation coefficients. In this work, I present results of a Monte Carlo simulation that allow us to observe these dependencies in the AHP. I select some pairs of ICIs and evaluate the Pearson correlation coefficient for each pair. The results are compared with scatter plots that show the type of dependency between the selected ICIs. The presented research shows that some pairs of indices are so closely correlated that they can be used interchangeably.
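
The following Monte Carlo sketch illustrates the kind of experiment described above: random reciprocal pairwise comparison matrices are generated, two inconsistency indices (Saaty's CI and the geometric consistency index) are computed, and their Pearson correlation is measured. The perturbation model and the chosen pair of indices are illustrative assumptions, not necessarily those studied in the paper.

```python
# A minimal Monte Carlo sketch: perturbed reciprocal PCMs, two inconsistency
# indices, and the Pearson correlation between them.
import numpy as np

rng = np.random.default_rng(0)

def random_pcm(n, noise=0.3):
    """Build a reciprocal PCM from random weights with multiplicative noise as the error model."""
    w = rng.uniform(1, 9, size=n)
    a = np.ones((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            a[i, j] = (w[i] / w[j]) * np.exp(rng.normal(0, noise))
            a[j, i] = 1.0 / a[i, j]
    return a

def saaty_ci(a):
    """Saaty's consistency index CI = (lambda_max - n) / (n - 1)."""
    n = a.shape[0]
    lam_max = np.max(np.linalg.eigvals(a).real)
    return (lam_max - n) / (n - 1)

def gci(a):
    """Geometric consistency index based on the geometric-mean priority vector."""
    n = a.shape[0]
    w = np.exp(np.mean(np.log(a), axis=1))
    e = np.log(a) - np.log(np.outer(w, 1 / w))      # log residuals
    return 2.0 / ((n - 1) * (n - 2)) * np.sum(np.triu(e, 1) ** 2)

ci_vals, gci_vals = zip(*((saaty_ci(m), gci(m)) for m in (random_pcm(5) for _ in range(1000))))
print(np.corrcoef(ci_vals, gci_vals)[0, 1])         # Pearson correlation between the two indices
```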

20 pages, 487 KiB  
Article
Nested Binary Classifier as an Outlier Detection Method in Human Activity Recognition Systems
by Agnieszka Duraj and Daniel Duczymiński
Entropy 2023, 25(8), 1121; https://doi.org/10.3390/e25081121 - 26 Jul 2023
Viewed by 948
Abstract
The present article is devoted to outlier detection in phases of human movement. The aim was to find the most efficient machine learning method for detecting abnormal segments within physical activities, i.e., segments that probably originate from other activities. The problem was reduced to a classification task, and a new method based on a nested binary classifier is proposed. Test experiments were conducted using several of the most popular machine learning algorithms (linear regression, support vector machine, k-nearest neighbors, decision trees). Each method was tested separately on three datasets varying in characteristics and number of records. The models were evaluated using basic classifier evaluation measures and confusion matrices, and the nested binary classifier was compared with deep neural networks. Our research shows that the nested binary classifier method can be considered an effective way of recognizing outlier patterns in HAR systems.
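
A hedged sketch of one possible reading of the nested binary idea: a binary "activity vs. rest" classifier is trained per activity, and a segment is flagged as an outlier when the classifier of its nominal activity rejects it. The data, features, and the k-NN base learner below are placeholders, not the authors' pipeline.

```python
# Illustrative per-activity binary classifiers for flagging suspicious HAR segments.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 6))                 # placeholder feature vectors for movement segments
y = rng.integers(0, 3, size=300)              # nominal activity labels: 0, 1, 2

models = {}
for activity in np.unique(y):
    clf = KNeighborsClassifier(n_neighbors=5)
    clf.fit(X, (y == activity).astype(int))   # binary task: this activity vs. all others
    models[activity] = clf

def is_outlier(segment, nominal_activity):
    """A segment is suspicious if the classifier of its own activity does not recognize it."""
    return models[nominal_activity].predict(segment.reshape(1, -1))[0] == 0

print(is_outlier(X[0], y[0]))
```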

30 pages, 17042 KiB  
Article
Hiding Information in Digital Images Using Ant Algorithms
by Mariusz Boryczka and Grzegorz Kazana
Entropy 2023, 25(7), 963; https://doi.org/10.3390/e25070963 - 21 Jun 2023
Viewed by 1376
Abstract
Steganographic methods are closely related to the security and confidentiality of communications, which have always been essential domains of human life. Steganography is a science dedicated to the process of hiding information in public communication channels; its main idea is to use digital files, or even communication protocols, as a medium inside which data are hidden. The present research aims to investigate the applicability of ant algorithms in steganography and to evaluate the effectiveness of this approach. Ant systems can be employed in both spatial and frequency-based image steganography. One method combines the frequency domain with optimization to increase robustness: an integer wavelet transform is performed on the host image, and ACO is used to find the optimal coefficients in which to hide the data. The other method uses ACO to determine the optimal pixel locations for embedding secret data in the cover image: ACO detects complex regions of the cover image, and least-significant-bit (LSB) substitution is then used to hide the secret information in the pixels of the detected complex regions. Our study focuses on optimizing two conflicting features of steganograms: high capacity and low distortion. An attempt was made to use ant systems to select areas of digital images that allow the greatest amount of information to be hidden with the least loss of image quality. The effect of the ant-system variants and their parameters on the quality of the results was also investigated, and the final effectiveness of the proposed method was evaluated. The results of the experiments were compared with those published in related articles. The proposed procedures proved to be effective and allowed the embedding of large amounts of data with relatively little impact on image quality.
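
The LSB-substitution step mentioned above can be sketched as follows. In the paper the embedding locations come from an ACO search over complex image regions; here they are simply supplied as a list, which is an illustrative simplification.

```python
# Minimal LSB embed/extract sketch; pixel selection (here a fixed list) would
# normally come from the ACO step described in the abstract.
import numpy as np

def embed_lsb(image, bits, locations):
    """Write one secret bit into the least significant bit of each selected pixel."""
    stego = image.copy()
    for bit, (r, c) in zip(bits, locations):
        stego[r, c] = (stego[r, c] & 0xFE) | bit
    return stego

def extract_lsb(stego, locations):
    """Read the least significant bit of each selected pixel."""
    return [int(stego[r, c] & 1) for r, c in locations]

cover = np.random.default_rng(2).integers(0, 256, size=(8, 8), dtype=np.uint8)
secret = [1, 0, 1, 1, 0, 0, 1, 0]
pixels = [(0, 0), (1, 3), (2, 5), (3, 1), (4, 6), (5, 2), (6, 7), (7, 4)]  # e.g. chosen by ACO

stego = embed_lsb(cover, secret, pixels)
assert extract_lsb(stego, pixels) == secret
```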

22 pages, 1962 KiB  
Article
Study on the Use of Artificially Generated Objects in the Process of Training MLP Neural Networks Based on Dispersed Data
by Kwabena Frimpong Marfo and Małgorzata Przybyła-Kasperek
Entropy 2023, 25(5), 703; https://doi.org/10.3390/e25050703 - 24 Apr 2023
Cited by 1 | Viewed by 936
Abstract
This study concerns dispersed data stored in independent local tables with different sets of attributes. The paper proposes a new method for training a single neural network, a multilayer perceptron, based on dispersed data. The idea is to train local models with identical structures based on the local tables; however, because the local tables contain different sets of conditional attributes, some artificial objects must be generated to train the local models. The paper studies the use of varying parameter values in the proposed method of creating artificial objects and presents an exhaustive comparison in terms of the number of artificial objects generated from a single original object, the degree of data dispersion, data balancing, and different network structures (the number of neurons in the hidden layer). It was found that for data sets with a large number of objects, a smaller number of artificial objects is optimal, whereas for smaller data sets a greater number of artificial objects (three or four) produces better results. For large data sets, data balancing and the degree of dispersion have no significant impact on the quality of classification; rather, a greater number of neurons in the hidden layer (three to five times the number of neurons in the input layer) produces better results.
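
A heavily hedged sketch of the general idea of artificial objects: a local table that lacks some global attributes is expanded so that all local MLPs can share one input structure. Filling the missing attributes by uniform sampling from assumed value ranges is an illustrative choice and not necessarily the method used in the paper; all names and ranges below are hypothetical.

```python
# Illustrative generation of k artificial objects over the full attribute set
# from one local object that only knows a subset of the attributes.
import numpy as np

rng = np.random.default_rng(3)

def artificial_objects(local_row, local_attrs, global_attrs, value_ranges, k=3):
    """Create k artificial objects over the full attribute set from one local object."""
    objects = []
    for _ in range(k):
        obj = {}
        for attr in global_attrs:
            if attr in local_attrs:
                obj[attr] = local_row[attr]          # keep the observed value
            else:
                lo, hi = value_ranges[attr]
                obj[attr] = rng.uniform(lo, hi)      # fill the missing attribute (assumed scheme)
        objects.append(obj)
    return objects

global_attrs = ["a1", "a2", "a3", "a4"]
value_ranges = {a: (0.0, 1.0) for a in global_attrs}
row = {"a1": 0.7, "a3": 0.2}                         # this local table knows only a1 and a3
print(artificial_objects(row, {"a1", "a3"}, global_attrs, value_ranges, k=3))
```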

19 pages, 597 KiB  
Article
A Comparative Study of Rank Aggregation Methods in Recommendation Systems
by Michał Bałchanowski and Urszula Boryczka
Entropy 2023, 25(1), 132; https://doi.org/10.3390/e25010132 - 9 Jan 2023
Cited by 7 | Viewed by 2986
Abstract
The aim of a recommender system is to suggest to the user products or services that will most likely interest them. Within the context of personalized recommender systems, a number of algorithms have been proposed to generate a ranking of items tailored to individual user preferences. However, these algorithms do not generate identical recommendations, and for this reason it has been suggested in the literature that their results can be combined using aggregation techniques, in the hope that this will improve the quality of the final recommendation. In order to determine which of these techniques increases the quality of recommendations the most, the authors conducted experiments considering five recommendation algorithms and 20 aggregation methods. The research was carried out on the popular and publicly available MovieLens 100k and MovieLens 1M datasets, and the results were confirmed by statistical tests.
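
As a concrete example of the kind of technique compared in the paper, the sketch below implements the classic Borda count, one of many possible rank aggregation methods; the item identifiers and rankings are hypothetical.

```python
# Borda count: each list awards points by position, and items are re-ranked by total score.
from collections import defaultdict

def borda_aggregate(rankings):
    """Each ranking is a list of item IDs, best first; items missing from a list score 0."""
    scores = defaultdict(float)
    for ranking in rankings:
        n = len(ranking)
        for position, item in enumerate(ranking):
            scores[item] += n - position            # top item gets n points, next n-1, ...
    return sorted(scores, key=scores.get, reverse=True)

# Rankings produced by three hypothetical recommendation algorithms for one user.
lists = [
    ["movie_42", "movie_7", "movie_13"],
    ["movie_7", "movie_42", "movie_99"],
    ["movie_13", "movie_7", "movie_42"],
]
print(borda_aggregate(lists))   # combined ranking, best first
```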
