Computational Intelligence: Theory and Applications

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "Computational and Applied Mathematics".

Deadline for manuscript submissions: closed (30 September 2022) | Viewed by 17617

Special Issue Editors


Prof. Dr. Zuguo Yu
Guest Editor
School of Mathematics and Computational Science, Xiangtan University, Xiangtan 411105, China
Interests: fractal theory and applications; time series analysis; complex network analysis; biological and environmental data processing

Dr. Xueshuang Xiang
Guest Editor
Qian Xuesen Laboratory of Space Technology, China Academy of Space Technology, Beijing 100094, China
Interests: numerical PDE; deep learning

Prof. Dr. Kai Jiang
Guest Editor
School of Mathematics and Computational Science, Xiangtan University, Xiangtan 411105, China
Interests: materials modeling and computation; artificial intelligence; scientific computing

Special Issue Information

Dear Colleagues,

Computational intelligence (CI) refers to computation at the level of low-level cognition. A system that operates only on numerical (low-level) data, without using knowledge in the artificial-intelligence sense, can be regarded as a CI system. The areas covered by computational intelligence include fuzzy logic, neural networks, evolutionary computation, and probabilistic reasoning. The theories and techniques of CI allow us to find solutions to problems in pattern recognition, control, automated decision-making, optimization, statistical modeling, and many other areas. Research and development in CI reflect the broader interdisciplinary and integrative trend in contemporary science and technology. This Special Issue will focus on new developments and advances in the various areas of computational intelligence, including theory and applications in engineering, scientific computing, computer science, physics, and the life sciences.

Topics include, but are not limited to, the following:

  • pattern recognition
  • prediction systems
  • process and system control
  • bioinformatics
  • cloud computing
  • data mining
  • decision support systems
  • intelligent information retrieval
  • noise analysis
  • real-time systems
  • signal and image processing
  • system modelling and optimization
  • time-series prediction
  • deep learning
  • scientific computing

Prof. Dr. Zuguo Yu
Dr. Xueshuang Xiang
Prof. Dr. Kai Jiang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • neural networks
  • fuzzy systems and control
  • evolutionary computation
  • statistical modeling
  • machine learning
  • optimization algorithms

Published Papers (11 papers)


Research

13 pages, 1770 KiB  
Article
LightGBM-LncLoc: A LightGBM-Based Computational Predictor for Recognizing Long Non-Coding RNA Subcellular Localization
by Jianyi Lyu, Peijie Zheng, Yue Qi and Guohua Huang
Mathematics 2023, 11(3), 602; https://doi.org/10.3390/math11030602 - 25 Jan 2023
Cited by 4 | Viewed by 1820
Abstract
Long non-coding RNAs (lncRNAs) are a class of RNA transcripts with more than 200 nucleotide residues. LncRNAs play versatile roles in cellular processes and are thus becoming a hot topic in the field of biomedicine. The function of lncRNAs has been found to be closely associated with their subcellular localization. Although many methods have been developed to identify the subcellular localization of lncRNAs, there is still much room for improvement. Herein, we present a LightGBM-based computational predictor for recognizing lncRNA subcellular localization, called LightGBM-LncLoc. LightGBM-LncLoc uses reverse complement k-mer and position-specific trinucleotide propensity based on the single strand for multi-class sequences to encode lncRNAs, and employs LightGBM as the learning algorithm. LightGBM-LncLoc achieves state-of-the-art performance in five-fold cross-validation and an independent test over datasets covering five categories of lncRNA subcellular localization. We also implemented LightGBM-LncLoc as a user-friendly web server. Full article
(This article belongs to the Special Issue Computational Intelligence: Theory and Applications)
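The reverse-complement k-mer encoding mentioned in the abstract can be illustrated with a short sketch. This is a generic illustration, not the authors' implementation: the k-mer length, the canonicalization rule, and the frequency normalization below are our own assumptions.

```python
from itertools import product

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def canonical_kmer(kmer: str) -> str:
    """Return the lexicographically smaller of a k-mer and its reverse complement."""
    rc = kmer.translate(COMPLEMENT)[::-1]
    return min(kmer, rc)

def revcomp_kmer_features(seq: str, k: int = 3) -> dict:
    """Count k-mers in a sequence, merging each k-mer with its reverse complement,
    and normalize the counts to frequencies."""
    counts = {canonical_kmer("".join(p)): 0 for p in product("ACGT", repeat=k)}
    for i in range(len(seq) - k + 1):
        counts[canonical_kmer(seq[i : i + k])] += 1
    total = max(sum(counts.values()), 1)
    return {kmer: c / total for kmer, c in counts.items()}

feats = revcomp_kmer_features("ATGCGTACGTTAGC", k=3)
```

Merging a k-mer with its reverse complement halves the feature dimension (32 canonical 3-mers instead of 64) and makes the encoding strand-insensitive.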

27 pages, 3986 KiB  
Article
A Many-Objective Evolutionary Algorithm Based on Indicator and Decomposition
by Yizhang Xia, Jianzun Huang, Xijun Li, Yuan Liu, Jinhua Zheng and Juan Zou
Mathematics 2023, 11(2), 413; https://doi.org/10.3390/math11020413 - 12 Jan 2023
Cited by 2 | Viewed by 1408
Abstract
In the field of many-objective evolutionary optimization algorithms (MaOEAs), how to maintain the balance between convergence and diversity is a significant research problem. As the number of objectives increases, the number of mutually nondominated solutions grows rapidly, and multi-objective evolutionary optimization algorithms based on Pareto dominance become invalid because of the loss of selection pressure in environmental selection. To solve this problem, indicator-based many-objective evolutionary algorithms have been proposed; however, they are not good at maintaining diversity. Decomposition-based methods have achieved promising performance in preserving diversity. In this paper, we propose a MaOEA based on indicator and decomposition (IDEA) to maintain convergence and diversity simultaneously. Moreover, decomposition-based algorithms do not work well on irregular Pareto fronts (PFs). To tackle this problem, this paper develops a reference-point adjustment method based on the learning population. Experimental studies on several well-known benchmark problems show that IDEA is very effective compared with ten state-of-the-art many-objective algorithms. Full article
(This article belongs to the Special Issue Computational Intelligence: Theory and Applications)
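Decomposition-based MaOEAs such as the one described above rely on a set of uniformly distributed reference points (weight vectors) on the unit simplex. A common construction is the Das–Dennis simplex lattice; the sketch below is this generic construction, not necessarily the one used in IDEA, and the divisions parameter is an arbitrary choice for illustration.

```python
from itertools import combinations
import numpy as np

def das_dennis(n_obj: int, n_div: int) -> np.ndarray:
    """Generate uniformly spaced reference points on the unit simplex using the
    Das-Dennis (stars-and-bars) construction: choose bar positions, read off
    the gap sizes, then normalize so each point sums to 1."""
    points = []
    for bars in combinations(range(n_div + n_obj - 1), n_obj - 1):
        prev, point = -1, []
        for idx in bars:
            point.append(idx - prev - 1)  # stars between consecutive bars
            prev = idx
        point.append(n_div + n_obj - 2 - prev)  # stars after the last bar
        points.append(point)
    return np.array(points) / n_div

refs = das_dennis(n_obj=3, n_div=12)  # 91 points for 3 objectives, 12 divisions
```

Each row is a weight vector that defines one subproblem; a population member is typically associated with its nearest reference direction during environmental selection.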

16 pages, 2392 KiB  
Article
Dynamic Constrained Boundary Method for Constrained Multi-Objective Optimization
by Qiuzhen Wang, Zhibing Liang, Juan Zou, Xiangdong Yin, Yuan Liu, Yaru Hu and Yizhang Xia
Mathematics 2022, 10(23), 4459; https://doi.org/10.3390/math10234459 - 26 Nov 2022
Cited by 1 | Viewed by 980
Abstract
When solving complex constrained problems, efficiently utilizing promising infeasible solutions is an essential issue, because these solutions can significantly improve the diversity of an algorithm. However, most existing constrained multi-objective evolutionary algorithms (CMOEAs) do not fully exploit them. To address this, we propose a constrained multi-objective evolutionary algorithm based on the dynamic constraint boundary method (CDCBM). During evolution, the proposed algorithm continuously searches for promising infeasible solutions between the unconstrained Pareto front (UPF) and the constrained Pareto front (CPF) using an auxiliary population whose constraint boundary changes dynamically, which continuously provides supplementary evolutionary directions to the main population and improves its convergence and diversity. Extensive experiments on three well-known test suites and three real-world constrained multi-objective optimization problems demonstrate that CDCBM is more competitive than seven state-of-the-art CMOEAs. Full article
(This article belongs to the Special Issue Computational Intelligence: Theory and Applications)

13 pages, 6617 KiB  
Article
A Multi-Category Inverse Design Neural Network and Its Application to Diblock Copolymers
by Dan Wei, Tiejun Zhou, Yunqing Huang and Kai Jiang
Mathematics 2022, 10(23), 4451; https://doi.org/10.3390/math10234451 - 25 Nov 2022
Cited by 2 | Viewed by 887
Abstract
In this work, we design a multi-category inverse design neural network that maps ordered periodic structures to physical parameters. The neural network model consists of two parts: a classifier and Structure-Parameter-Mapping (SPM) subnets. The classifier is used to identify structures, and the SPM subnets are used to predict physical parameters for desired structures. We also present an extensible reciprocal-space data augmentation method to guarantee the rotation and translation invariance of periodic structures. We apply the proposed network model and data augmentation method to two-dimensional diblock copolymers based on the Landau–Brazovskii model. Results show that the multi-category inverse design neural network predicts physical parameters for desired structures with high accuracy. Moreover, the idea of multi-categorization can be extended to other inverse design problems. Full article
(This article belongs to the Special Issue Computational Intelligence: Theory and Applications)

16 pages, 1661 KiB  
Article
Solving a Class of High-Order Elliptic PDEs Using Deep Neural Networks Based on Its Coupled Scheme
by Xi’an Li, Jinran Wu, Lei Zhang and Xin Tai
Mathematics 2022, 10(22), 4186; https://doi.org/10.3390/math10224186 - 09 Nov 2022
Viewed by 1297
Abstract
Deep learning—in particular, deep neural networks (DNNs)—as a mesh-free and self-adapting method has demonstrated great potential in the field of scientific computation. In this work, inspired by the Deep Ritz method proposed by Weinan E et al. to solve a class of variational problems that generally stem from partial differential equations, we present a coupled deep neural network (CDNN) that solves the fourth-order biharmonic equation by splitting it into two well-posed Poisson problems. We then design a hybrid loss function for this method that makes the optimization of the DNN easier and more efficient and reduces computational cost. In addition, a new activation function based on Fourier theory is introduced for our CDNN method; it significantly reduces the approximation error of the DNN. Finally, numerical experiments demonstrate the feasibility and efficiency of the CDNN method for the biharmonic equation in various cases. Full article
(This article belongs to the Special Issue Computational Intelligence: Theory and Applications)
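The abstract does not give the exact form of the Fourier-theory-based activation. One common Fourier-inspired choice, sketched below purely for illustration, maps each pre-activation through a sine/cosine pair; the layer sizes and weights are arbitrary assumptions, not the paper's architecture.

```python
import numpy as np

def fourier_activation(x: np.ndarray) -> np.ndarray:
    """Concatenate sin and cos of the pre-activations along the last axis,
    so the output width is twice the input width."""
    return np.concatenate([np.sin(x), np.cos(x)], axis=-1)

def dense_layer(x: np.ndarray, W: np.ndarray, b: np.ndarray) -> np.ndarray:
    """A plain dense layer followed by the Fourier-style activation."""
    return fourier_activation(x @ W + b)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))                  # batch of 4 inputs, width 8
W, b = rng.standard_normal((8, 16)), np.zeros(16)
h = dense_layer(x, W, b)                         # output width doubles to 32
```

Periodic activations of this kind give the first layer an explicit frequency interpretation, which is the usual motivation for Fourier-based designs in PDE-solving networks.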

14 pages, 2185 KiB  
Article
Community Detection Fusing Graph Attention Network
by Ruiqiang Guo, Juan Zou, Qianqian Bai, Wei Wang and Xiaomeng Chang
Mathematics 2022, 10(21), 4155; https://doi.org/10.3390/math10214155 - 07 Nov 2022
Cited by 1 | Viewed by 1898
Abstract
It has become common to combine autoencoders and graph neural networks for attribute graph clustering to solve the community detection problem. However, existing methods do not consider the differing influence of node neighborhood information and high-order neighborhood information, and their fusion of structural and attribute features is insufficient. In order to make better use of structural and attribute information, we propose a model named community detection fusing graph attention network (CDFG). Specifically, we first use an autoencoder to learn attribute features. The graph attention network then not only calculates the influence weight of each neighborhood node on the target node but also incorporates high-order neighborhood information to learn structural features. The two features are initially fused via a balance parameter. A feature fusion module extracts the hidden-layer representation of the graph attention layer to calculate a self-correlation matrix, which is multiplied by the node representation obtained from the preliminary fusion to achieve a secondary fusion. Finally, a self-supervision mechanism adapts the model to the community detection task. Experiments are conducted on six real datasets. Using four evaluation metrics, the CDFG model performs better on most datasets, especially for networks with longer average paths and diameters and smaller clustering coefficients. Full article
(This article belongs to the Special Issue Computational Intelligence: Theory and Applications)

26 pages, 5293 KiB  
Article
FAS-UNet: A Novel FAS-Driven UNet to Learn Variational Image Segmentation
by Hui Zhu, Shi Shu and Jianping Zhang
Mathematics 2022, 10(21), 4055; https://doi.org/10.3390/math10214055 - 01 Nov 2022
Cited by 3 | Viewed by 1447
Abstract
Solving variational image segmentation problems with hidden physics is often expensive and requires different algorithms and manually tuned model parameters. Deep learning methods based on the UNet structure have obtained outstanding performance in many different medical image segmentation tasks, but designing such networks requires many parameters and much training data, which are not always available for practical problems. In this paper, inspired by the traditional multiphase convexity Mumford–Shah variational model and the full approximation scheme (FAS) for solving nonlinear systems, we propose a novel variational-model-informed network (FAS-UNet) that exploits model and algorithm priors to extract multiscale features. The proposed model-informed network integrates image data and mathematical models and implements them by learning a few convolution kernels. Based on variational theory and the FAS algorithm, we first design a feature extraction sub-network (FAS-Solution module) to solve the model-driven nonlinear systems, where a skip connection is employed to fuse the multiscale features. Second, we design a convolutional block to fuse the features extracted in the previous stage, yielding the final segmentation probability. Experimental results on three different medical image segmentation tasks show that the proposed FAS-UNet is very competitive with other state-of-the-art methods in qualitative, quantitative, and model-complexity evaluations. Moreover, it may also be possible to train specialized network architectures that automatically satisfy some of the mathematical and physical laws in other image problems for better accuracy, faster training, and improved generalization. Full article
(This article belongs to the Special Issue Computational Intelligence: Theory and Applications)

16 pages, 6885 KiB  
Article
Fourier Neural Solver for Large Sparse Linear Algebraic Systems
by Chen Cui, Kai Jiang, Yun Liu and Shi Shu
Mathematics 2022, 10(21), 4014; https://doi.org/10.3390/math10214014 - 28 Oct 2022
Cited by 1 | Viewed by 1386
Abstract
Large sparse linear algebraic systems arise in a variety of scientific and engineering fields, and many scientists strive to solve them efficiently and robustly. In this paper, we propose an interpretable neural solver, the Fourier neural solver (FNS), to address them. The FNS is based on deep learning and the fast Fourier transform. Because the error between the iterative solution and the ground truth involves a wide range of frequency modes, the FNS combines a stationary iterative method with a frequency-space correction to eliminate different components of the error. Local Fourier analysis shows that the FNS can pick up the error components in frequency space that are challenging to eliminate with stationary methods. Numerical experiments on the anisotropic diffusion equation, the convection–diffusion equation, and the Helmholtz equation show that the FNS is more efficient and more robust than the state-of-the-art neural solver. Full article
(This article belongs to the Special Issue Computational Intelligence: Theory and Applications)
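The division of labor described above — a stationary iteration damping some error modes, plus a frequency-space correction for the rest — can be sketched on a 1D periodic Poisson problem. In this toy version (our construction, not the FNS itself), damped Jacobi smooths the error and a single exact FFT solve of the residual equation plays the role of the frequency-space correction; the actual FNS learns its correction instead.

```python
import numpy as np

def apply_A(u: np.ndarray, h: float) -> np.ndarray:
    """1D periodic discrete negative Laplacian: (2u_i - u_{i-1} - u_{i+1}) / h^2."""
    return (2 * u - np.roll(u, 1) - np.roll(u, -1)) / h**2

def hybrid_solve(f: np.ndarray, n_smooth: int = 5, omega: float = 2 / 3) -> np.ndarray:
    """Damped-Jacobi sweeps followed by one FFT-based residual correction."""
    n = f.size
    h = 1.0 / n
    u = np.zeros(n)
    diag = 2.0 / h**2
    # Stationary stage: damped Jacobi mainly damps high-frequency error.
    for _ in range(n_smooth):
        u = u + omega * (f - apply_A(u, h)) / diag
    # Frequency-space stage: solve A e = r mode by mode with the FFT.
    r = f - apply_A(u, h)
    lam = (2 - 2 * np.cos(2 * np.pi * np.arange(n) / n)) / h**2  # eigenvalues of A
    e_hat = np.zeros(n, dtype=complex)
    r_hat = np.fft.fft(r)
    e_hat[1:] = r_hat[1:] / lam[1:]  # skip the zero mode (singular direction)
    return u + np.fft.ifft(e_hat).real

n = 64
x = np.arange(n) / n
f = np.sin(2 * np.pi * x) * (2 * np.pi) ** 2  # manufactured so u ≈ sin(2πx)
u = hybrid_solve(f)
```

Because the toy problem is linear and periodic, one FFT correction is exact; the FNS abstract's point is that a learned frequency-space operator can target precisely the modes the stationary method leaves behind.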

24 pages, 10860 KiB  
Article
A Multi-Service Composition Model for Tasks in Cloud Manufacturing Based on VS–ABC Algorithm
by Di Liang, Jieyi Wang, Ran Bhamra, Liezhao Lu and Yuting Li
Mathematics 2022, 10(21), 3968; https://doi.org/10.3390/math10213968 - 25 Oct 2022
Cited by 2 | Viewed by 1350
Abstract
This study analyzes the impact of Industry 4.0 and SARS-CoV-2 on the manufacturing industry, in which manufacturing entities face insufficient resources and uncertain services; existing research does not address this situation well. A multi-service composition method for complex manufacturing tasks in a cloud manufacturing environment is proposed to improve the utilization of manufacturing service resources. Combining execution time, cost, energy consumption, service reliability, and availability, a quality of service (QoS) model is constructed as the evaluation standard. A hybrid search algorithm (the VS–ABC algorithm) based on the vortex search algorithm (VS) and the artificial bee colony algorithm (ABC) is introduced, combining the advantages of the two algorithms in search range and computation speed. We take customized automobile production as an example, and the case study shows that the VS–ABC algorithm has better applicability than the traditional vortex search and artificial bee colony algorithms. Full article
(This article belongs to the Special Issue Computational Intelligence: Theory and Applications)

36 pages, 37325 KiB  
Article
A Modified Group Teaching Optimization Algorithm for Solving Constrained Engineering Optimization Problems
by Honghua Rao, Heming Jia, Di Wu, Changsheng Wen, Shanglong Li, Qingxin Liu and Laith Abualigah
Mathematics 2022, 10(20), 3765; https://doi.org/10.3390/math10203765 - 12 Oct 2022
Cited by 13 | Viewed by 1739
Abstract
The group teaching optimization algorithm (GTOA) is a metaheuristic optimization algorithm inspired by the group teaching mechanism. Each student learns the knowledge imparted in the teacher phase, but each student's autonomy is weak. This paper considers that each student has a different learning motivation: elite students have strong self-learning ability, while ordinary students have only general self-learning motivation. To address this, this paper proposes a learning-motivation strategy and adds random opposition-based learning and a restart strategy to enhance the global performance of the algorithm (MGTOA). To verify the optimization effect of MGTOA, 23 standard benchmark functions and the 30 test functions of the IEEE Congress on Evolutionary Computation 2014 (CEC2014) suite are adopted to evaluate the performance of the proposed MGTOA. In addition, MGTOA is applied to six engineering problems for practical testing and achieves good results. Full article
(This article belongs to the Special Issue Computational Intelligence: Theory and Applications)
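Random opposition-based learning (ROBL), one of the strategies added in MGTOA, replaces a candidate with its randomly scaled opposite whenever the opposite is fitter. The sketch below uses one common ROBL formulation, x' = lb + ub − rand·x, with greedy selection on a sphere test function; the paper's exact variant may differ, and the population size and bounds are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

def sphere(x: np.ndarray) -> float:
    """Classic sphere benchmark: minimum 0 at the origin."""
    return float(np.sum(x**2))

def robl_step(pop, fit, lb, ub, f):
    """One round of random opposition-based learning with greedy selection:
    compute randomly scaled opposites, keep whichever of each pair is fitter."""
    opp = np.clip(lb + ub - rng.random(pop.shape) * pop, lb, ub)
    opp_fit = np.array([f(ind) for ind in opp])
    better = opp_fit < fit
    pop[better] = opp[better]
    fit[better] = opp_fit[better]
    return pop, fit

lb, ub = -5.0, 5.0
pop = rng.uniform(lb, ub, size=(20, 10))         # 20 candidates, 10 dimensions
fit = np.array([sphere(ind) for ind in pop])
before = fit.copy()
pop, fit = robl_step(pop, fit, lb, ub, sphere)   # fitness can only improve
```

Because selection is greedy, the step never worsens any candidate, which is why opposition-based variants are a cheap way to inject diversity without losing progress.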

19 pages, 3186 KiB  
Article
SIP-UNet: Sequential Inputs Parallel UNet Architecture for Segmentation of Brain Tissues from Magnetic Resonance Images
by Rukesh Prajapati and Goo-Rak Kwon
Mathematics 2022, 10(15), 2755; https://doi.org/10.3390/math10152755 - 03 Aug 2022
Cited by 1 | Viewed by 2382
Abstract
Proper analysis of changes in brain structure can lead to a more accurate diagnosis of specific brain disorders. The accuracy of segmentation is crucial for quantifying changes in brain structure. In recent studies, UNet-based architectures have outperformed other deep learning architectures in biomedical image segmentation. However, improving segmentation accuracy is challenging due to the low resolution of medical images and insufficient data. In this study, we present a novel architecture that combines three parallel UNets using a residual network. This architecture improves upon the baseline methods in three ways. First, instead of using a single image as input, we use three consecutive images. This gives our model the freedom to learn from neighboring images as well. Additionally, the images are individually compressed and decompressed using three different UNets, which prevents the model from merging the features of the images. Finally, following the residual network architecture, the outputs of the UNets are combined in such a way that the features of the image corresponding to the output are enhanced by a skip connection. The proposed architecture performed better than using a single conventional UNet and other UNet variants. Full article
(This article belongs to the Special Issue Computational Intelligence: Theory and Applications)
