
Entropy in Machine Learning Applications

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Multidisciplinary Applications".

Deadline for manuscript submissions: closed (15 January 2024) | Viewed by 18688

Special Issue Editor


Prof. Dr. Yanchun Liang
Guest Editor
1. Zhuhai Sub Laboratory of Key Laboratory for Symbol Computation and Knowledge Engineering of National Education Ministry, Zhuhai College of Science and Technology, Zhuhai 519041, China
2. Key Laboratory for Symbol Computation and Knowledge Engineering of National Education Ministry, College of Computer Science and Technology, Jilin University, Changchun 130012, China
Interests: machine learning methods in computational biology; optimization problems solving using evolutionary algorithms; hybrid evolutionary algorithms; deep learning models and algorithms

Special Issue Information

Dear Colleagues,

This Special Issue will include, but is not limited to, applications of machine learning methods, such as: the construction and application of managed pressure drilling knowledge graphs with an extended cross-entropy loss; the computational characterization of undifferentially expressed genes with altered transcription regulation; entropy-weighted water quality prediction based on long short-term memory networks and data correlation analysis; convolutional networks with a cross-entropy loss function, based on transfer learning, for rice disease detection; and the semantic disambiguation of advertising vocabulary based on knowledge graphs.

Prof. Dr. Yanchun Liang
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • knowledge graph
  • data correlation
  • differential expression
  • long short-term memory
  • semantic disambiguation
  • advertising vocabulary
  • entity relationship extraction
  • semi-supervised learning
  • cross-entropy loss
  • semantic entropy

Published Papers (11 papers)


Research


18 pages, 826 KiB  
Article
A Good View for Graph Contrastive Learning
by Xueyuan Chen and Shangzhe Li
Entropy 2024, 26(3), 208; https://doi.org/10.3390/e26030208 - 27 Feb 2024
Viewed by 837
Abstract
Due to the success observed in deep neural networks with contrastive learning, there has been a notable surge in research interest in graph contrastive learning, primarily attributed to its superior performance in graphs with limited labeled data. Within contrastive learning, the selection of a “view” dictates the information captured by the representation, thereby influencing the model’s performance. However, assessing the quality of information in these views poses challenges, and determining what constitutes a good view remains unclear. This paper addresses this issue by establishing the definition of a good view through the application of graph information bottleneck and structural entropy theories. Based on theoretical insights, we introduce CtrlGCL, a novel method for achieving a beneficial view in graph contrastive learning through coding tree representation learning. Extensive experiments were conducted to ascertain the effectiveness of the proposed view in unsupervised and semi-supervised learning. In particular, our approach, via CtrlGCL-H, yields an average accuracy enhancement of 1.06% under unsupervised learning when compared to GCL. This improvement underscores the efficacy of our proposed method. Full article
(This article belongs to the Special Issue Entropy in Machine Learning Applications)

16 pages, 2690 KiB  
Article
Automatic Vertebral Rotation Angle Measurement of 3D Vertebrae Based on an Improved Transformer Network
by Xing Huo, Hao Li and Kun Shao
Entropy 2024, 26(2), 97; https://doi.org/10.3390/e26020097 - 23 Jan 2024
Viewed by 905
Abstract
The measurement of vertebral rotation angles serves as a crucial parameter in spinal assessments, particularly in understanding conditions such as idiopathic scoliosis. Historically, these angles were calculated from 2D CT images. However, such 2D techniques fail to comprehensively capture the intricate three-dimensional deformities inherent in spinal curvatures. To overcome the limitations of manual measurements and 2D imaging, we introduce an entirely automated approach for quantifying vertebral rotation angles using a three-dimensional vertebral model. Our method involves refining a point cloud segmentation network based on a transformer architecture. This enhanced network segments the three-dimensional vertebral point cloud, allowing for accurate measurement of vertebral rotation angles. In contrast to conventional network methodologies, our approach exhibits notable improvements in segmenting vertebral datasets. To validate our approach, we compare our automated measurements with angles derived from prevalent manual labeling techniques. The analysis, conducted through Bland–Altman plots and the corresponding intraclass correlation coefficient results, indicates significant agreement between our automated measurement method and manual measurements. The observed high intraclass correlation coefficients (ranging from 0.980 to 0.993) further underscore the reliability of our automated measurement process. Consequently, our proposed method demonstrates substantial potential for clinical applications, showcasing its capacity to provide accurate and efficient vertebral rotation angle measurements. Full article
(This article belongs to the Special Issue Entropy in Machine Learning Applications)

25 pages, 2843 KiB  
Article
Cross Entropy in Deep Learning of Classifiers Is Unnecessary—ISBE Error Is All You Need
by Władysław Skarbek
Entropy 2024, 26(1), 65; https://doi.org/10.3390/e26010065 - 12 Jan 2024
Viewed by 844
Abstract
In deep learning of classifiers, the cost function usually takes the form of a combination of SoftMax and CrossEntropy functions. The SoftMax unit transforms the scores predicted by the model network into assessments of the degree (probabilities) of an object’s membership to a given class. On the other hand, CrossEntropy measures the divergence of this prediction from the distribution of target scores. This work introduces the ISBE functionality, justifying the thesis about the redundancy of cross-entropy computation in deep learning of classifiers. Not only can we omit the calculation of entropy, but also, during back-propagation, there is no need to direct the error to the normalization unit for its backward transformation. Instead, the error is sent directly to the model’s network. Using examples of perceptron and convolutional networks as classifiers of images from the MNIST collection, it is observed that ISBE does not degrade results, not only with SoftMax but also with other activation functions such as Sigmoid, Tanh, or their hard variants HardSigmoid and HardTanh. Moreover, savings in the total number of operations were observed within the forward and backward stages. The article is addressed to all deep learning enthusiasts but primarily to programmers and students interested in the design of deep models. For example, it illustrates in code snippets possible ways to implement ISBE functionality, but also formally proves that the SoftMax trick only applies to the class of dilated SoftMax functions with relocations. Full article
(This article belongs to the Special Issue Entropy in Machine Learning Applications)
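The redundancy claim rests on a well-known identity that is easy to verify numerically: for a SoftMax + CrossEntropy head with one-hot targets, the gradient at the logits collapses to softmax(z) − y, so the entropy value itself is never needed during back-propagation. The NumPy sketch below is illustrative only, not the paper's ISBE implementation; it checks that the long route through the loss matches the shortcut:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def ce_grad_via_loss(z, y, eps=1e-12):
    # Gradient of CrossEntropy(softmax(z), y) w.r.t. the logits z,
    # computed the long way: dL/dp pushed through the softmax Jacobian.
    p = softmax(z)
    g = -y / (p + eps)                    # dL/dp for L = -sum(y * log p)
    return p * (g - (g * p).sum(axis=-1, keepdims=True))

def shortcut_grad(z, y):
    # The identity the abstract exploits: for one-hot y the whole chain
    # reduces to softmax(z) - y, so the error can be sent straight back
    # to the network without computing any entropy.
    return softmax(z) - y

z = np.array([[2.0, 1.0, 0.1]])
y = np.array([[0.0, 1.0, 0.0]])           # one-hot target
assert np.allclose(ce_grad_via_loss(z, y), shortcut_grad(z, y))
```

The assertion passes because, for one-hot y, the softmax Jacobian term cancels the −y/p factor exactly, leaving p − y.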

20 pages, 1348 KiB  
Article
Water Quality Prediction Based on Machine Learning and Comprehensive Weighting Methods
by Xianhe Wang, Ying Li, Qian Qiao, Adriano Tavares and Yanchun Liang
Entropy 2023, 25(8), 1186; https://doi.org/10.3390/e25081186 - 9 Aug 2023
Cited by 9 | Viewed by 3857
Abstract
In the context of escalating global environmental concerns, the importance of preserving water resources and upholding ecological equilibrium has become increasingly apparent. As a result, the monitoring and prediction of water quality have emerged as vital tasks in achieving these objectives. However, ensuring the accuracy and dependability of water quality prediction has proven to be a challenging endeavor. To address this issue, this study proposes a comprehensive weight-based approach that combines entropy weighting with the Pearson correlation coefficient to select crucial features in water quality prediction. This approach effectively considers both feature correlation and information content, avoiding excessive reliance on a single criterion for feature selection. Through the utilization of this comprehensive approach, a comprehensive evaluation of the contribution and importance of the features was achieved, thereby minimizing subjective bias and uncertainty. By striking a balance among various factors, features with stronger correlation and greater information content can be selected, leading to improved accuracy and robustness in the feature-selection process. Furthermore, this study explored several machine learning models for water quality prediction, including Support Vector Machines (SVMs), Multilayer Perceptron (MLP), Random Forest (RF), XGBoost, and Long Short-Term Memory (LSTM). SVM exhibited commendable performance in predicting Dissolved Oxygen (DO), showcasing excellent generalization capabilities and high prediction accuracy. MLP demonstrated its strength in nonlinear modeling and performed well in predicting multiple water quality parameters. Conversely, the RF and XGBoost models exhibited relatively inferior performance in water quality prediction. In contrast, the LSTM model, a recurrent neural network specialized in processing time series data, demonstrated exceptional abilities in water quality prediction. It effectively captured the dynamic patterns present in time series data, offering stable and accurate predictions for various water quality parameters. Full article
(This article belongs to the Special Issue Entropy in Machine Learning Applications)
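As a rough sketch of the comprehensive weighting step described above — entropy weights blended with absolute Pearson correlation — the following shows one plausible combination rule. The `alpha` blend and the normalizations are assumptions of this sketch; the abstract does not give the authors' exact formula:

```python
import numpy as np

def entropy_weights(X):
    # Entropy weight method (assumes non-negative feature columns).
    # A feature whose values are spread almost uniformly carries little
    # information and therefore receives a small weight.
    P = X / X.sum(axis=0, keepdims=True)
    P = np.clip(P, 1e-12, None)
    e = -(P * np.log(P)).sum(axis=0) / np.log(len(X))  # entropy per feature
    d = 1.0 - e                                        # degree of divergence
    return d / d.sum()

def combined_scores(X, y, alpha=0.5):
    # Blend each feature's entropy weight with its absolute Pearson
    # correlation to the target; a higher score marks a more useful feature.
    w = entropy_weights(X)
    r = np.abs(np.array([np.corrcoef(X[:, j], y)[0, 1]
                         for j in range(X.shape[1])]))
    return alpha * w / w.max() + (1.0 - alpha) * r
```

Features would then be ranked by `combined_scores` and the top-k kept before model training.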

23 pages, 1956 KiB  
Article
IBGJO: Improved Binary Golden Jackal Optimization with Chaotic Tent Map and Cosine Similarity for Feature Selection
by Kunpeng Zhang, Yanheng Liu, Fang Mei, Geng Sun and Jingyi Jin
Entropy 2023, 25(8), 1128; https://doi.org/10.3390/e25081128 - 27 Jul 2023
Cited by 6 | Viewed by 1174
Abstract
Feature selection is a crucial process in machine learning and data mining that identifies the most pertinent and valuable features in a dataset. It enhances the efficacy and precision of predictive models by efficiently reducing the number of features. This reduction improves classification accuracy, lessens the computational burden, and enhances overall performance. This study proposes the improved binary golden jackal optimization (IBGJO) algorithm, an extension of the conventional golden jackal optimization (GJO) algorithm. IBGJO serves as a search strategy for wrapper-based feature selection. It comprises three key factors: a population initialization process with a chaotic tent map (CTM) mechanism that enhances exploitation abilities and guarantees population diversity, an adaptive position update mechanism using cosine similarity to prevent premature convergence, and a binary mechanism well-suited for binary feature selection problems. We evaluated IBGJO on 28 classical datasets from the UC Irvine Machine Learning Repository. The results show that the CTM mechanism and the position update strategy based on cosine similarity proposed in IBGJO can significantly improve the rate of convergence of the conventional GJO algorithm, and the accuracy is also significantly better than that of other algorithms. Additionally, we evaluate the effectiveness and performance of the enhanced factors. Our empirical results show that the proposed CTM mechanism and the position update strategy based on cosine similarity can help the conventional GJO algorithm converge faster. Full article
(This article belongs to the Special Issue Entropy in Machine Learning Applications)
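The chaotic tent map (CTM) initialization and the binary mechanism mentioned above can be sketched in a few lines. The parameter values here (μ = 1.99, seed 0.37, threshold 0.5) are illustrative choices, not the paper's settings; μ is kept just below 2 so the map does not degenerate under floating-point arithmetic:

```python
import numpy as np

def tent_map_population(n_agents, n_features, mu=1.99, x0=0.37):
    # Replace uniform random initialization with a chaotic tent map
    # sequence: successive iterates stay in (0, 1) and spread the
    # initial population more evenly than independent draws.
    pop = np.empty((n_agents, n_features))
    x = x0
    for i in range(n_agents):
        for j in range(n_features):
            x = mu * x if x < 0.5 else mu * (1.0 - x)
            pop[i, j] = x
    return pop

def binarize(pop, threshold=0.5):
    # Binary mechanism for feature selection: position j selects
    # feature j exactly when its continuous value exceeds the threshold.
    return (pop > threshold).astype(int)
```

A wrapper-based selector would then score each binary row by training a classifier on the selected feature subset.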

17 pages, 4931 KiB  
Article
Chinese Few-Shot Named Entity Recognition and Knowledge Graph Construction in Managed Pressure Drilling Domain
by Siqing Wei, Yanchun Liang, Xiaoran Li, Xiaohui Weng, Jiasheng Fu and Xiaosong Han
Entropy 2023, 25(7), 1097; https://doi.org/10.3390/e25071097 - 22 Jul 2023
Cited by 1 | Viewed by 1277
Abstract
Managed pressure drilling (MPD) is the most effective means to ensure drilling safety, and MPD is able to avoid further deterioration of complex working conditions through precise control of the wellhead back pressure. The key to the success of MPD is the well control strategy, which currently relies heavily on manual experience, hindering the automation and intelligence process of well control. In response to this issue, an MPD knowledge graph is constructed in this paper that extracts knowledge from published papers and drilling reports to guide well control. In order to improve the performance of entity extraction in the knowledge graph, a few-shot Chinese entity recognition model, CEntLM-KL, is extended from the EntLM model, in which KL entropy is incorporated to improve the accuracy of entity recognition. Through experiments on benchmark datasets, it has been shown that the proposed model has a significant improvement compared to the state-of-the-art methods. On the few-shot drilling datasets, the F1 score of entity recognition reaches 33%. Finally, the knowledge graph is stored in Neo4J and applied for knowledge inference. Full article
(This article belongs to the Special Issue Entropy in Machine Learning Applications)

15 pages, 2697 KiB  
Article
A Hybrid Recommender System Based on Autoencoder and Latent Feature Analysis
by Shangzhi Guo, Xiaofeng Liao, Gang Li, Kaiyi Xian, Yuhang Li and Cheng Liang
Entropy 2023, 25(7), 1062; https://doi.org/10.3390/e25071062 - 14 Jul 2023
Cited by 2 | Viewed by 1189
Abstract
A recommender system (RS) is highly efficient in extracting valuable information from a deluge of big data. The key issue of implementing an RS lies in uncovering users’ latent preferences on different items. Latent Feature Analysis (LFA) and deep neural networks (DNNs) are two of the most popular and successful approaches to addressing this issue. However, both the LFA-based and the DNNs-based models have their own distinct advantages and disadvantages. Consequently, relying solely on either the LFA or DNN-based models cannot ensure optimal recommendation performance across diverse real-world application scenarios. To address this issue, this paper proposes a novel hybrid recommendation model that combines Autoencoder and LFA techniques, termed AutoLFA. The main idea of AutoLFA is two-fold: (1) It leverages an Autoencoder and an LFA model separately to construct two distinct recommendation models, each residing in a unique metric representation space with its own set of strengths; and (2) it integrates the Autoencoder and LFA model using a customized self-adaptive weighting strategy, thereby capitalizing on the merits of both approaches. To evaluate the proposed AutoLFA model, extensive experiments on five real recommendation datasets are conducted. The results demonstrate that AutoLFA achieves significantly better recommendation performance than the seven related state-of-the-art models. Full article
(This article belongs to the Special Issue Entropy in Machine Learning Applications)
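The abstract leaves the self-adaptive weighting strategy unspecified. One minimal stand-in, shown purely for illustration — the inverse-error rule below is an assumption of this sketch, not the paper's formula — weights each sub-model by how well it has predicted ratings whose true values are known:

```python
import numpy as np

def self_adaptive_blend(pred_a, pred_b, truth, eps=1e-9):
    # Weight two sub-models (e.g. an Autoencoder-based predictor and an
    # LFA-based predictor) by the normalized inverse of their mean
    # squared error on observed ratings, then blend their predictions.
    inv_a = 1.0 / (np.mean((pred_a - truth) ** 2) + eps)
    inv_b = 1.0 / (np.mean((pred_b - truth) ** 2) + eps)
    w_a = inv_a / (inv_a + inv_b)
    return w_a * pred_a + (1.0 - w_a) * pred_b
```

The point of any such rule is that the blend automatically leans toward whichever representation space is currently more accurate.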

18 pages, 7787 KiB  
Article
Use of Composite Multivariate Multiscale Permutation Fuzzy Entropy to Diagnose the Faults of Rolling Bearing
by Qiang Yuan, Mingchen Lv, Ruiping Zhou, Hong Liu, Chongkun Liang and Lijiao Cheng
Entropy 2023, 25(7), 1049; https://doi.org/10.3390/e25071049 - 12 Jul 2023
Cited by 2 | Viewed by 994
Abstract
The study focuses on the fault signals of rolling bearings, which are characterized by nonlinearity, periodic impact, and low signal-to-noise ratio. The advantages of entropy calculation in analyzing time series data were combined with the high calculation accuracy of Multiscale Fuzzy Entropy (MFE) and the strong noise resistance of Multiscale Permutation Entropy (MPE); a multivariate coarse-grained form was introduced, and the coarse-graining process was improved. The Composite Multivariate Multiscale Permutation Fuzzy Entropy (CMvMPFE) method was proposed to solve the problems of low accuracy, large entropy perturbation, and information loss in the calculation process of fault feature parameters. This method extracts the fault characteristics of rolling bearings more comprehensively and accurately. The CMvMPFE method was used to calculate the entropy value of the rolling bearing experimental fault data, and Support Vector Machine (SVM) was used for fault diagnosis analysis. By comparing with the MPFE, Composite Multiscale Permutation Fuzzy Entropy (CMPFE), and Multivariate Multiscale Permutation Fuzzy Entropy (MvMPFE) methods, the calculation results show that the CMvMPFE method can extract rolling bearing fault characteristics more comprehensively and accurately, and it also has good robustness. Full article
(This article belongs to the Special Issue Entropy in Machine Learning Applications)
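Two of the building blocks named above — ordinal-pattern (permutation) entropy and composite coarse-graining — are straightforward to sketch. The fuzzy-membership and multivariate parts of CMvMPFE are omitted here, so this is a deliberately simplified illustration rather than the authors' method:

```python
import numpy as np
from math import factorial

def permutation_entropy(x, m=3, tau=1):
    # Normalized permutation entropy: the Shannon entropy of the
    # distribution of ordinal patterns of length m, sampled with delay tau.
    counts = {}
    for i in range(len(x) - (m - 1) * tau):
        pattern = tuple(np.argsort(x[i:i + m * tau:tau]))
        counts[pattern] = counts.get(pattern, 0) + 1
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return float(-(p * np.log(p)).sum() / np.log(factorial(m)))

def composite_coarse_grain(x, scale):
    # Composite coarse-graining: at scale s there are s shifted
    # coarse-grained series, each averaging non-overlapping windows;
    # averaging entropies over all shifts reduces entropy perturbation.
    return [np.array([x[k + i * scale:k + (i + 1) * scale].mean()
                      for i in range((len(x) - k) // scale)])
            for k in range(scale)]
```

A monotone series yields entropy 0 (one ordinal pattern), while white noise approaches the maximum of 1 — the contrast such bearing-fault features rely on.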

15 pages, 2720 KiB  
Article
PBQ-Enhanced QUIC: QUIC with Deep Reinforcement Learning Congestion Control Mechanism
by Zhifei Zhang, Shuo Li, Yiyang Ge, Ge Xiong, Yu Zhang and Ke Xiong
Entropy 2023, 25(2), 294; https://doi.org/10.3390/e25020294 - 4 Feb 2023
Viewed by 2171
Abstract
Currently, the most widely used protocol for the transportation layer of computer networks for reliable transportation is the Transmission Control Protocol (TCP). However, TCP has some problems such as high handshake delay, head-of-line (HOL) blocking, and so on. To solve these problems, Google proposed the Quick User Datagram Protocol Internet Connection (QUIC) protocol, which supports a 0- or 1-round-trip time (RTT) handshake and congestion control algorithm configuration in user mode. So far, the QUIC protocol has been integrated with traditional congestion control algorithms, which are not efficient in numerous scenarios. To solve this problem, we propose an efficient congestion control mechanism on the basis of deep reinforcement learning (DRL), i.e., proximal bandwidth-delay quick optimization (PBQ) for QUIC, which combines traditional bottleneck bandwidth and round-trip propagation time (BBR) with proximal policy optimization (PPO). In PBQ, the PPO agent outputs the congestion window (CWnd) and improves itself according to network state, and the BBR specifies the pacing rate of the client. Then, we apply the presented PBQ to QUIC and form a new version of QUIC, i.e., PBQ-enhanced QUIC. The experimental results show that the proposed PBQ-enhanced QUIC achieves much better performance in both throughput and RTT than existing popular versions of QUIC, such as QUIC with Cubic and QUIC with BBR. Full article
(This article belongs to the Special Issue Entropy in Machine Learning Applications)

24 pages, 12917 KiB  
Article
A Semantic-Enhancement-Based Social Network User-Alignment Algorithm
by Yuanhao Huang, Pengcheng Zhao, Qi Zhang, Ling Xing, Honghai Wu and Huahong Ma
Entropy 2023, 25(1), 172; https://doi.org/10.3390/e25010172 - 15 Jan 2023
Cited by 4 | Viewed by 1996
Abstract
User alignment can associate multiple social network accounts of the same user and has important research implications. However, the same user exhibits different behaviors and friend sets across social networks, which affects the accuracy of user alignment. In this paper, we aim to improve the accuracy of user alignment by reducing the semantic gap between the same user in different social networks. Therefore, we propose a semantically enhanced social network user alignment algorithm (SENUA). The algorithm performs user alignment based on user attributes, user-generated contents (UGCs), and user check-ins. The interference of local semantic noise can be reduced by mining the user’s semantic features for these three factors. In addition, we improve the algorithm’s adaptability to noise by multi-view graph-data augmentation. Excessive similarity between non-aligned users can have a large negative impact on user alignment. Therefore, we optimize the embedding vectors based on multi-headed graph attention networks and multi-view contrastive learning. This can enhance the similar semantic features of the aligned users. Experimental results show that SENUA has an average improvement of 6.27% over the baseline method at hit-precision30. This shows that semantic enhancement can effectively improve user alignment. Full article
(This article belongs to the Special Issue Entropy in Machine Learning Applications)

Review


44 pages, 1546 KiB  
Review
Deep Learning for 3D Reconstruction, Augmentation, and Registration: A Review Paper
by Prasoon Kumar Vinodkumar, Dogus Karabulut, Egils Avots, Cagri Ozcinar and Gholamreza Anbarjafari
Entropy 2024, 26(3), 235; https://doi.org/10.3390/e26030235 - 7 Mar 2024
Viewed by 2087
Abstract
The research groups in computer vision, graphics, and machine learning have dedicated a substantial amount of attention to the areas of 3D object reconstruction, augmentation, and registration. Deep learning is the predominant method used in artificial intelligence for addressing computer vision challenges. However, deep learning on three-dimensional data presents distinct obstacles and is now in its nascent phase. There have been significant advancements in deep learning specifically for three-dimensional data, offering a range of ways to address these issues. This study offers a comprehensive examination of the latest advancements in deep learning methodologies. We examine many benchmark models for the tasks of 3D object registration, augmentation, and reconstruction. We thoroughly analyse their architectures, advantages, and constraints. In summary, this report provides a comprehensive overview of recent advancements in three-dimensional deep learning and highlights unresolved research areas that will need to be addressed in the future. Full article
(This article belongs to the Special Issue Entropy in Machine Learning Applications)
