Intelligent Computing and Optimization

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "Mathematics and Computer Science".

Deadline for manuscript submissions: closed (31 March 2024)

Special Issue Editors


Prof. Dr. Lev Kazakovtsev
Guest Editor
Institute of Informatics and Telecommunication, Reshetnev Siberian State University of Science and Technology, 31 Krasnoyarskii rabochii av., 660037 Krasnoyarsk, Russia
Interests: AutoML; self-adjusting algorithms; cluster analysis; facility location; pseudo-Boolean optimization; evolutionary computation

Prof. Dr. Predrag S. Stanimirović
Guest Editor
Faculty of Sciences and Mathematics, University of Niš, Višegradska 33, 18106 Niš, Serbia
Interests: numerical linear algebra; operations research; nonlinear optimization; heuristic optimization; hybrid methods of optimization; gradient neural networks; zeroing neural networks; symbolic computation

Special Issue Information

Dear Colleagues,

Intelligent computing has greatly expanded the scope of computing, extending it from traditional computation on data to increasingly heterogeneous computing paradigms. At the same time, the majority of machine-learning methods are based on optimization theory and optimization algorithms: minimizing intercluster distances or maximizing the Rand index in cluster analysis, minimizing error in regression, minimizing the error rate in classification problems, and so on. Optimization problems arising in machine learning are often large-scale and multimodal, and the efficiency of a specific optimization algorithm strongly depends on the data set. These difficulties call for special computational techniques such as parallelization, hardware implementation of the computation, self-configuring capabilities of the algorithms, and the hybridization of diverse approaches: mathematically provable and heuristic methods, discrete and continuous optimization, and so forth. Self-configuration in turn requires optimizing the efficiency of the algorithm that solves the machine-learning or other optimization problem. Moreover, the need to increase the efficiency of optimization algorithms can lead to machine-learning algorithms being embedded within optimization algorithms.
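As a concrete illustration of this optimization view of machine learning, the short Python sketch below fits a linear regression by explicitly minimizing the mean squared error with gradient descent. The data, learning rate, and iteration count are illustrative assumptions chosen for this sketch, not values taken from any paper in this Special Issue.

```python
import numpy as np

# Illustrative data (assumed): y = 2x + 1 plus a little noise.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=100)
y = 2.0 * x + 1.0 + 0.1 * rng.normal(size=100)

# Model: y_hat = w * x + b; objective: mean squared error (MSE).
w, b = 0.0, 0.0
lr = 0.1  # learning rate (assumed)
for _ in range(500):
    err = w * x + b - y
    # Gradient of the MSE with respect to w and b.
    grad_w = 2.0 * np.mean(err * x)
    grad_b = 2.0 * np.mean(err)
    w -= lr * grad_w
    b -= lr * grad_b

print(f"w = {w:.3f}, b = {b:.3f}")  # close to the true values (2, 1)
```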

This Special Issue aims to provide a platform for sharing recent advances in topics including, but not limited to:

  • AutoML, self-configuring, adaptive and self-adjusting methods for optimization and machine learning;
  • Methods of optimization and machine learning intended for hardware implementation;
  • Mathematical optimization;
  • Multiobjective optimization;
  • Evolutionary computation;
  • Fuzzy systems;
  • Parallel algorithms for optimization and machine learning;
  • Hybrid optimization algorithms and algorithmic combinations;
  • Optimizing artificial neural networks;
  • Optimization software and decision support systems;
  • Applications of intelligent computing.

Prof. Dr. Lev Kazakovtsev
Prof. Dr. Predrag S. Stanimirović
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website; once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • intelligent computing
  • optimization
  • AutoML
  • self-configuring optimization algorithms
  • evolutionary computation
  • hybrid optimization
  • gradient neural networks

Published Papers (2 papers)


Research

23 pages, 2487 KiB  
Article
A Multi-Task Decomposition-Based Evolutionary Algorithm for Tackling High-Dimensional Bi-Objective Feature Selection
by Hang Xu, Chaohui Huang, Jianbing Lin, Min Lin, Huahui Zhang and Rongbin Xu
Mathematics 2024, 12(8), 1178; https://doi.org/10.3390/math12081178 - 14 Apr 2024
Abstract
Evolutionary algorithms have been widely applied for solving multi-objective optimization problems, and feature selection in classification can also be treated as a discrete bi-objective optimization problem if one attempts to minimize both the classification error and the ratio of selected features. However, traditional multi-objective evolutionary algorithms (MOEAs) may have drawbacks for tackling large-scale feature selection, due to the curse of dimensionality in the decision space. Therefore, in this paper, we concentrated on designing a multi-task decomposition-based evolutionary algorithm (abbreviated as MTDEA), especially for handling high-dimensional bi-objective feature selection in classification. To be more specific, multiple subpopulations related to different evolutionary tasks are separately initialized and then adaptively merged into a single integrated population during the evolution. Moreover, the ideal points for these multi-task subpopulations are dynamically adjusted every generation, in order to achieve different search preferences and evolutionary directions. In the experiments, the proposed MTDEA was compared with seven state-of-the-art MOEAs on 20 high-dimensional classification datasets in terms of three performance indicators, along with comprehensive Wilcoxon and Friedman tests. It was found that the MTDEA performed the best on most datasets, with a significantly better search ability and promising efficiency.
(This article belongs to the Special Issue Intelligent Computing and Optimization)
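The abstract describes the method at a high level only. The toy Python sketch below illustrates the general idea of decomposition-based multi-task search with a mid-run merge; it is not the authors' MTDEA, and the synthetic objective, task weights, population sizes, and merge schedule are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(42)
N_FEATURES, POP, GENS, MERGE_AT = 50, 20, 60, 30  # all assumed

def objectives(mask):
    # Toy bi-objective feature selection: (synthetic "error", fraction of selected features).
    # A real implementation would train a classifier; this stand-in keeps the sketch self-contained.
    useful = mask[: N_FEATURES // 5].sum()   # pretend only the first 20% of features help
    error = 1.0 / (1.0 + useful)             # more useful features -> lower "error"
    return error, mask.mean()

def scalarize(f, w):
    # Weighted-sum decomposition: each task optimizes a different trade-off.
    return w * f[0] + (1.0 - w) * f[1]

def evolve(pop, w):
    # One generation: bit-flip mutation plus greedy survivor selection under weight w.
    children = pop ^ (rng.random(pop.shape) < 1.0 / N_FEATURES)
    both = np.vstack([pop, children])
    scores = np.array([scalarize(objectives(m), w) for m in both])
    return both[np.argsort(scores)[: len(pop)]]

tasks = [0.8, 0.2]  # per-task preference weights (assumed)
subpops = [rng.random((POP, N_FEATURES)) < 0.5 for _ in tasks]

# Phase 1: each sub-population evolves toward its own search preference.
for _ in range(MERGE_AT):
    subpops = [evolve(p, w) for p, w in zip(subpops, tasks)]

# Phase 2: merge into a single integrated population and continue with a compromise weight.
pop = np.vstack(subpops)
for _ in range(GENS - MERGE_AT):
    pop = evolve(pop, 0.5)  # compromise weight (assumed)

print("best (error, feature ratio):", min((objectives(m) for m in pop), key=lambda f: f[0]))
```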

24 pages, 1366 KiB  
Article
A Family of Multi-Step Subgradient Minimization Methods
by Elena Tovbis, Vladimir Krutikov, Predrag Stanimirović, Vladimir Meshechkin, Aleksey Popov and Lev Kazakovtsev
Mathematics 2023, 11(10), 2264; https://doi.org/10.3390/math11102264 - 11 May 2023
Abstract
For solving non-smooth multidimensional optimization problems, we present a family of relaxation subgradient methods (RSMs) with a built-in algorithm for finding the descent direction that forms an acute angle with all subgradients in the neighborhood of the current minimum. Minimizing the function along the opposite direction (with a minus sign) enables the algorithm to go beyond the neighborhood of the current minimum. The family of algorithms for finding the descent direction is based on solving systems of inequalities. The finite convergence of the algorithms on separable bounded sets is proved. Algorithms for solving systems of inequalities are used to organize the RSM family. On quadratic functions, the methods of the RSM family are equivalent to the conjugate gradient method (CGM). The methods are intended for solving high-dimensional problems and are studied theoretically and numerically. Examples of solving convex and non-convex smooth and non-smooth problems of large dimensions are given.
(This article belongs to the Special Issue Intelligent Computing and Optimization)
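For background on the basic mechanism that the RSM family generalizes, the Python sketch below runs plain subgradient descent with a diminishing step size on an assumed non-smooth test function. It is a textbook baseline, not the relaxation subgradient family proposed in the paper.

```python
import numpy as np

def f(x):
    # Non-smooth test function (assumed): f(x) = |x0 - 1| + 2|x1 + 0.5|, minimized at (1, -0.5).
    return abs(x[0] - 1.0) + 2.0 * abs(x[1] + 0.5)

def subgradient(x):
    # One valid subgradient of f at x (np.sign returns 0 at the kinks,
    # which is an admissible subgradient there).
    return np.array([np.sign(x[0] - 1.0), 2.0 * np.sign(x[1] + 0.5)])

x = np.zeros(2)
best_x, best_f = x.copy(), f(x)
for k in range(1, 2001):
    g = subgradient(x)
    x = x - (1.0 / k) * g        # diminishing step size (assumed schedule)
    if f(x) < best_f:            # subgradient steps need not decrease f,
        best_x, best_f = x.copy(), f(x)  # so track the best iterate seen
print("best point:", best_x, "f =", best_f)
```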
