Evolutionary and Swarm Computing for Emerging Applications

A special issue of Algorithms (ISSN 1999-4893). This special issue belongs to the section "Evolutionary Algorithms and Machine Learning".

Deadline for manuscript submissions: 31 May 2025 | Viewed by 4209

Special Issue Editor


Dr. Seyed Jalaleddin Mousavirad
Guest Editor
Department of Computer and Electrical Engineering (DET), Mid Sweden University, Holmgatan, 852 30 Sundsvall, Sweden
Interests: machine learning; evolutionary computation; deep learning; neuroevolution; image processing

Special Issue Information

Dear Colleagues,

As we stand on the brink of a technological and AI revolution, the role of evolutionary and swarm computing (EvoSwarm) in shaping the future of emerging technologies cannot be overstated. This Special Issue is dedicated to exploring the cutting edge of EvoSwarm, delving into its transformative effects on modern technological innovations. With its intrinsic capabilities to optimize, adapt, and evolve, EvoSwarm has emerged as a fundamental component in the realm of advanced technology development.

This Special Issue aims to uncover the potential of EvoSwarm in driving advancements in emerging technologies. Our goal is to highlight not only theoretical developments in EvoSwarm but also its practical applications in pioneering technological domains. We seek to provide a platform for researchers and innovators to present their latest findings, discuss open challenges, and propose novel solutions for applying EvoSwarm to the ever-evolving technological landscape.

Topics of interest for this Special Issue include, but are not limited to, the following:

  • The integration of EvoSwarm in artificial intelligence, machine learning, and deep learning for pioneering tech applications;
  • The application of EvoSwarm in advancing blockchain technology and decentralized systems;
  • EvoSwarm approaches to quantum computing;
  • The role of EvoSwarm in enhancing renewable energy systems and smart grid technologies;
  • The application of EvoSwarm in the development of advanced robotics, including humanoid and autonomous systems;
  • EvoSwarm for optimizing 5G and future communication networks;
  • The utilization of EvoSwarm in autonomous vehicles and advanced transportation systems;
  • The impact of EvoSwarm on agriculture, food science, and related fields;
  • The exploration of EvoSwarm in medical informatics and its applications in healthcare technology.

Dr. Seyed Jalaleddin Mousavirad
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form to submit your manuscript. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Algorithms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • evolutionary computation
  • swarm intelligence
  • emerging technologies
  • AI applications
  • evolutionary algorithms
  • swarm optimization
  • artificial intelligence
  • large language models
  • genetic algorithms
  • machine learning
  • quantum computing
  • robotics
  • autonomous systems
  • blockchain technology
  • deep learning
  • smart grids
  • renewable energy
  • 5G networks
  • autonomous vehicles
  • medical informatics
  • computer vision
  • neural networks
  • computational biology
  • computational intelligence

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies is available on the MDPI website.

Published Papers (6 papers)


Research

16 pages, 2075 KiB  
Article
Improved Trimming Ant Colony Optimization Algorithm for Mobile Robot Path Planning
by Junxia Ma, Qilin Liu, Zixu Yang and Bo Wang
Algorithms 2025, 18(5), 240; https://doi.org/10.3390/a18050240 - 23 Apr 2025
Viewed by 172
Abstract
Traditional ant colony algorithms for mobile robot path planning often suffer from slow convergence, susceptibility to local optima, and low search efficiency, limiting their applicability in dynamic and complex environments. To address these challenges, this paper proposes an improved trimming ant colony optimization (ITACO) algorithm. The method introduces a dynamic weighting factor into the state transition probability formula to balance global exploration and local exploitation, effectively avoiding local optima. Additionally, the traditional heuristic function is replaced with an artificial potential field attraction function that dynamically adjusts the potential field strength to enhance search efficiency. A path-length-dependent pheromone increment mechanism is also proposed to accelerate convergence, while a triangular pruning strategy is employed to remove redundant path nodes and shorten the optimal path length. Simulation experiments show that ITACO reduces the path length by up to 62.86% compared with the classical ACO algorithm and by 6.68% compared with the most recent related work. These improvements highlight the ITACO algorithm as an efficient and reliable solution for mobile robot path planning in challenging scenarios.
(This article belongs to the Special Issue Evolutionary and Swarm Computing for Emerging Applications)
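As a rough illustration of the triangular pruning idea summarized in the abstract above, the sketch below removes redundant waypoints from a grid path whenever the straight segment between their neighbours stays collision-free. The grid representation, the sampled line-of-sight test, and all names are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical triangular-pruning sketch: drop an intermediate waypoint whenever
# the straight segment between its neighbours avoids every obstacle cell.
from typing import List, Tuple

Point = Tuple[int, int]

def line_is_free(a: Point, b: Point, obstacles: set) -> bool:
    """Sample the segment a->b and check that no sample falls on an obstacle cell."""
    steps = max(abs(b[0] - a[0]), abs(b[1] - a[1]), 1)
    for i in range(steps + 1):
        t = i / steps
        cell = (round(a[0] + t * (b[0] - a[0])), round(a[1] + t * (b[1] - a[1])))
        if cell in obstacles:
            return False
    return True

def triangular_prune(path: List[Point], obstacles: set) -> List[Point]:
    """Keep a node only if skipping it would make the path cross an obstacle."""
    if len(path) < 3:
        return path[:]
    pruned = [path[0]]
    i = 0
    while i < len(path) - 1:
        # Greedily jump to the farthest node still reachable in a straight line.
        j = len(path) - 1
        while j > i + 1 and not line_is_free(path[i], path[j], obstacles):
            j -= 1
        pruned.append(path[j])
        i = j
    return pruned

if __name__ == "__main__":
    obstacles = {(2, 1), (2, 2)}
    path = [(0, 0), (1, 0), (2, 0), (3, 0), (3, 1), (3, 2), (3, 3)]
    print(triangular_prune(path, obstacles))  # -> [(0, 0), (3, 0), (3, 3)]
```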

18 pages, 1287 KiB  
Article
Guided Particle Swarm Optimization for Feature Selection: Application to Cancer Genome Data
by Simone A. Ludwig
Algorithms 2025, 18(4), 220; https://doi.org/10.3390/a18040220 - 11 Apr 2025
Viewed by 211
Abstract
Feature selection is a crucial step in the data preprocessing stage of machine learning. It involves selecting a subset of relevant features for use in model construction. Feature selection helps improve model performance by reducing overfitting, enhancing generalization, and decreasing computational cost. Techniques for feature selection can be broadly classified into filter methods, wrapper methods, and embedded methods. This paper presents a feature selection method based on Particle Swarm Optimization (PSO). The proposed algorithm makes use of a guided particle scheme in which three filter-based methods are incorporated. The proposed algorithm addresses the premature convergence to local optima observed in other PSO-based feature selection methods. In addition, the algorithm is tested on very-high-dimensional genome data that include up to 44,909 features. Results of an experimental comparison with other state-of-the-art feature selection algorithms show that the proposed algorithm produces overall better results.
(This article belongs to the Special Issue Evolutionary and Swarm Computing for Emerging Applications)
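The following minimal binary-PSO sketch conveys the general flavour of filter-guided feature selection: a single filter score (absolute correlation with the label) biases particle initialization, and a size-penalized relevance score serves as fitness. The paper combines three filter methods; everything here, including names and parameter values, is a simplifying assumption.

```python
# Minimal filter-guided binary PSO for feature selection (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def filter_scores(X, y):
    """Absolute Pearson correlation of each feature with the label (a simple filter)."""
    Xc, yc = X - X.mean(axis=0), y - y.mean()
    denom = np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc) + 1e-12
    return np.abs(Xc.T @ yc) / denom

def fitness(mask, scores, alpha=0.01):
    """Reward the relevance of selected features, penalise subset size."""
    if mask.sum() == 0:
        return -np.inf
    return scores[mask.astype(bool)].mean() - alpha * mask.sum()

def binary_pso(X, y, n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5):
    n_feat = X.shape[1]
    scores = filter_scores(X, y)
    # Guided initialisation: higher-scoring features are more likely to start selected.
    prob = 0.2 + 0.6 * (scores / (scores.max() + 1e-12))
    pos = (rng.random((n_particles, n_feat)) < prob).astype(float)
    vel = rng.normal(0, 0.1, (n_particles, n_feat))
    pbest, pbest_fit = pos.copy(), np.array([fitness(p, scores) for p in pos])
    gbest = pbest[np.argmax(pbest_fit)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = (rng.random(pos.shape) < 1 / (1 + np.exp(-vel))).astype(float)  # sigmoid transfer
        fit = np.array([fitness(p, scores) for p in pos])
        better = fit > pbest_fit
        pbest[better], pbest_fit[better] = pos[better], fit[better]
        gbest = pbest[np.argmax(pbest_fit)].copy()
    return gbest.astype(bool)

if __name__ == "__main__":
    X = rng.normal(size=(100, 30))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)
    print("selected features:", np.flatnonzero(binary_pso(X, y)))
```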

25 pages, 2129 KiB  
Article
An Adaptive Feature-Based Quantum Genetic Algorithm for Dimension Reduction with Applications in Outlier Detection
by Tin H. Pham and Bijan Raahemi
Algorithms 2025, 18(3), 154; https://doi.org/10.3390/a18030154 - 8 Mar 2025
Viewed by 512
Abstract
Dimensionality reduction is essential in machine learning, reducing dataset dimensions while enhancing classification performance. Feature selection, a key subset of dimensionality reduction, identifies the most relevant features. Genetic Algorithms (GAs) are widely used for feature selection due to their robust exploration and efficient convergence. However, GAs often suffer from premature convergence, getting stuck in local optima. The Quantum Genetic Algorithm (QGA) addresses this limitation by introducing quantum representations to enhance the search process. To further improve QGA performance, we propose an Adaptive Feature-Based Quantum Genetic Algorithm (FbQGA), which strengthens exploration and exploitation through quantum representation and adaptive quantum rotation. The rotation angle dynamically adjusts based on feature significance, optimizing feature selection. FbQGA is applied to outlier detection tasks and benchmarked against basic GA and QGA variants on five high-dimensional, imbalanced datasets. Performance is evaluated using metrics such as classification accuracy, F1 score, precision, recall, selected feature count, and computational cost. Results consistently show FbQGA outperforming the other methods, with significant improvements in feature selection efficiency and computational cost. These findings highlight FbQGA's potential as an advanced tool for feature selection in complex datasets.
(This article belongs to the Special Issue Evolutionary and Swarm Computing for Emerging Applications)
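A toy quantum-inspired GA update may help make the abstract concrete: each feature is encoded as a qubit angle whose squared sine gives the selection probability, and a rotation gate nudges each qubit toward the best observed individual, with the step scaled by a per-feature significance weight. This stands in for the paper's adaptive rotation scheme; the fitness, angle bounds, and weights are illustrative assumptions.

```python
# Toy quantum-inspired GA for feature selection with significance-scaled rotation.
import numpy as np

rng = np.random.default_rng(1)

def observe(theta):
    """Collapse qubit angles into a binary chromosome (1 = feature selected)."""
    return (rng.random(theta.shape) < np.sin(theta) ** 2).astype(int)

def rotate(theta, individual, best, significance, base_angle=0.05 * np.pi):
    """Rotate each qubit toward the best individual's bit; larger significance -> larger step."""
    direction = np.where(best > individual, 1.0, np.where(best < individual, -1.0, 0.0))
    return np.clip(theta + direction * base_angle * significance, 0.0, np.pi / 2)

def toy_fitness(bits, significance):
    # Illustrative fitness: total relevance minus a small cost per selected feature.
    return float(bits @ significance - 0.1 * bits.sum())

n_features, pop_size = 12, 8
significance = rng.random(n_features)               # stand-in for filter-derived importance
theta = np.full((pop_size, n_features), np.pi / 4)  # unbiased superposition

best_bits, best_fit = None, -np.inf
for generation in range(30):
    population = observe(theta)
    fits = np.array([toy_fitness(ind, significance) for ind in population])
    if fits.max() > best_fit:
        best_fit, best_bits = fits.max(), population[np.argmax(fits)].copy()
    for k in range(pop_size):
        theta[k] = rotate(theta[k], population[k], best_bits, significance)

print("best subset:", np.flatnonzero(best_bits), "fitness:", round(best_fit, 3))
```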

35 pages, 41798 KiB  
Article
A Multi-Surrogate Assisted Multi-Tasking Optimization Algorithm for High-Dimensional Expensive Problems
by Hongyu Li, Lei Chen, Jian Zhang and Muxi Li
Algorithms 2025, 18(1), 4; https://doi.org/10.3390/a18010004 - 29 Dec 2024
Viewed by 807
Abstract
Surrogate-assisted evolutionary algorithms (SAEAs) are widely used in the field of high-dimensional expensive optimization. However, real-world problems are usually complex and characterized by a variety of features, so choosing the most appropriate surrogate is very challenging. It has been shown that multiple surrogates can characterize the fitness landscape more accurately than a single surrogate. In this work, a multi-surrogate-assisted multi-tasking optimization algorithm (MSAMT) is proposed that solves high-dimensional problems by simultaneously optimizing multiple surrogates as related tasks using the generalized multi-factorial evolutionary algorithm. In the MSAMT, all exactly evaluated samples are first grouped into a collection of clusters. The search space is then divided into several regions based on these clusters, and a surrogate is constructed in each region so that, together, they describe the entire fitness landscape and improve the exploration capability of the algorithm. Near the current optimal solution, a novel ensemble surrogate is adopted to perform local search and speed up convergence. Within the multi-tasking optimization framework, several surrogates are optimized simultaneously as related tasks, so that several optimal solutions spread across disjoint regions can be found for real function evaluation. Fourteen 10- to 100-dimensional test functions and a spatial truss design problem were used to compare the proposed approach with several recently proposed SAEAs. The results show that the proposed MSAMT performs better than the comparison algorithms on most test functions and the real engineering problem.
(This article belongs to the Special Issue Evolutionary and Swarm Computing for Emerging Applications)
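A condensed, hedged illustration of the multi-surrogate workflow described above: cluster the exactly evaluated samples, fit one cheap local surrogate per cluster, and propose one candidate per region for expensive evaluation. The k-means clustering, the nearest-neighbour surrogate, and the random candidate search are simplifications; the paper uses a generalized multi-factorial evolutionary algorithm and an ensemble surrogate for local search.

```python
# One surrogate-assisted cycle: cluster samples, fit local surrogates, propose candidates.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(2)

def expensive_objective(x):
    """Stand-in for a costly simulation (here the Sphere function)."""
    return float(np.sum(x ** 2))

dim, n_init, n_clusters = 10, 60, 3
X = rng.uniform(-5, 5, size=(n_init, dim))
y = np.array([expensive_objective(x) for x in X])

# Divide the search space into regions via clustering of the evaluated samples.
labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)

candidates = []
for c in range(n_clusters):
    Xc, yc = X[labels == c], y[labels == c]
    surrogate = KNeighborsRegressor(n_neighbors=min(5, len(Xc))).fit(Xc, yc)
    # Cheap search on the surrogate: sample around the cluster, keep the best prediction.
    trial = Xc.mean(axis=0) + rng.normal(0, 1.0, size=(200, dim))
    candidates.append(trial[np.argmin(surrogate.predict(trial))])

# Only the per-region winners are sent to the expensive function.
true_values = [expensive_objective(c) for c in candidates]
print("best candidate value after one surrogate-assisted cycle:", round(min(true_values), 3))
```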

15 pages, 2907 KiB  
Article
Parallelization of the Bison Algorithm Applied to Data Classification
by Simone A. Ludwig, Jamil Al-Sawwa and Aaron Mackenzie Misquith
Algorithms 2024, 17(11), 501; https://doi.org/10.3390/a17110501 - 4 Nov 2024
Viewed by 946
Abstract
In data science and machine learning, efficient and scalable algorithms are paramount for handling large datasets and complex tasks. Classification algorithms, in particular, play a crucial role in a wide range of applications, from image recognition and natural language processing to fraud detection and medical diagnosis. Traditional classification methods, while effective, often struggle with scalability and efficiency when applied to massive datasets. This challenge has driven the development of innovative approaches that leverage modern computational frameworks and parallel processing capabilities. This paper presents the Bison Algorithm applied to classification problems. The algorithm, inspired by the social behavior of bison, aims to enhance the accuracy of classification tasks. The Bison Algorithm is implemented using PySpark, leveraging distributed computing to handle large datasets efficiently. This study compares the performance of the Bison Algorithm on several dataset sizes, using speedup and scaleup as the performance measures.
(This article belongs to the Special Issue Evolutionary and Swarm Computing for Emerging Applications)
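The parallelization pattern described in the abstract can be sketched as follows: the expensive step of a population-based classifier (scoring every candidate solution on every record) is pushed into Spark via mapPartitions, while the population update stays on the driver. The linear decision rule, the simplified herd-style update, and all names are assumptions for illustration, not the authors' implementation.

```python
# Sketch of distributing population fitness evaluation with PySpark (illustrative).
import numpy as np
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("swarm-classifier-sketch").getOrCreate()
sc = spark.sparkContext

rng = np.random.default_rng(3)
n_features, pop_size = 5, 10

# Toy dataset: (features, label) rows distributed across partitions.
data = [(rng.normal(size=n_features), int(rng.random() < 0.5)) for _ in range(10_000)]
rdd = sc.parallelize(data, numSlices=8).cache()

def errors_for_population(partition, population):
    """Count misclassifications of every candidate on one data partition."""
    counts = np.zeros(len(population))
    for x, label in partition:
        preds = (population @ x > 0).astype(int)
        counts += preds != label
    yield counts

population = rng.normal(size=(pop_size, n_features))
for iteration in range(20):
    broadcast_pop = sc.broadcast(population)
    totals = rdd.mapPartitions(
        lambda part: errors_for_population(part, broadcast_pop.value)
    ).reduce(lambda a, b: a + b)
    best = population[np.argmin(totals)]
    # Driver-side update: move the herd toward the current best (simplified).
    population += 0.3 * (best - population) + rng.normal(0, 0.05, population.shape)

print("best training error rate:", float(totals.min()) / len(data))
spark.stop()
```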

30 pages, 4135 KiB  
Article
Optimized Accelerated Over-Relaxation Method for Robust Signal Detection: A Metaheuristic Approach
by Muhammad Nauman Irshad, Imran Ali Khoso, Muhammad Muzamil Aslam and Rardchawadee Silapunt
Algorithms 2024, 17(10), 463; https://doi.org/10.3390/a17100463 - 18 Oct 2024
Viewed by 877
Abstract
Massive MIMO technology is recognized as a key enabler for beyond-5G (B5G) and next-generation wireless networks. By utilizing large-scale antenna arrays at the base station (BS), it significantly improves both system capacity and energy efficiency. Despite these advantages, deploying a large number of antennas at the BS presents considerable challenges, particularly in the design of signal detectors that can operate with low computational complexity. While the minimum mean square error (MMSE) detector offers optimal performance in these large-scale systems, its computational burden makes practical implementation challenging. To mitigate this, various iterative methods and their improved versions have been introduced; however, these methods often converge slowly and are less accurate. To address these challenges, this study introduces an improved variant of traditional accelerated over-relaxation (AOR), called optimized AOR (OAOR). AOR is an over-relaxation method whose performance depends strongly on its relaxation parameters. To find the optimal parameters, we developed an approach that integrates a nature-inspired metaheuristic known as Particle Swarm Optimization (PSO). Specifically, we introduce a novel PSO variant that enhances the cognitive coefficients to optimize the relaxation parameters for OAOR. These modifications improve the algorithm's ability to explore candidate solutions efficiently and to find the optimal parameters for signal detection more quickly. As a result, OAOR converges faster toward the optimal solution with a lower error rate, achieving high detection accuracy while reducing computational complexity from O(K³) to O(K²), which makes it suitable for modern wireless communication systems. We conduct extensive simulations across various massive MIMO configurations. The results indicate that the proposed method outperforms existing techniques, particularly in terms of computational complexity and error rate.
(This article belongs to the Special Issue Evolutionary and Swarm Computing for Emerging Applications)
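To make the two ingredients of the abstract concrete, the sketch below combines (i) the classical accelerated over-relaxation (AOR) iteration applied to the MMSE system (H^H H + σ²I)x = H^H y with (ii) a plain PSO that searches for the relaxation pair (ω, r) minimizing the residual after a fixed number of iterations. The PSO variant, parameter ranges, and system sizes are assumptions; the paper's modified cognitive coefficients are not reproduced here.

```python
# AOR detection with PSO-tuned relaxation parameters (illustrative sketch).
import numpy as np

rng = np.random.default_rng(4)

def aor_solve(A, b, omega, r, iters=15):
    """Standard AOR iteration: x <- (D - rL)^{-1} [((1-w)D + (w-r)L + wU) x + w b]."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)
    U = -np.triu(A, 1)
    M = D - r * L
    N = (1 - omega) * D + (omega - r) * L + omega * U
    x = np.zeros(b.shape, dtype=A.dtype)
    for _ in range(iters):
        x = np.linalg.solve(M, N @ x + omega * b)
    return x

def residual(A, b, params):
    omega, r = params
    x = aor_solve(A, b, omega, r)
    return float(np.linalg.norm(A @ x - b))

# Toy uplink: 32 BS antennas, 8 users, QPSK-like symbols.
n_rx, n_tx, sigma2 = 32, 8, 0.1
H = (rng.normal(size=(n_rx, n_tx)) + 1j * rng.normal(size=(n_rx, n_tx))) / np.sqrt(2)
s = np.sign(rng.normal(size=n_tx)) + 1j * np.sign(rng.normal(size=n_tx))
y = H @ s + np.sqrt(sigma2) * (rng.normal(size=n_rx) + 1j * rng.normal(size=n_rx))
A, b = H.conj().T @ H + sigma2 * np.eye(n_tx), H.conj().T @ y

# Plain PSO over (omega, r).
pos = rng.uniform(0.1, 1.9, size=(12, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([residual(A, b, p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()
for _ in range(25):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.05, 1.95)
    vals = np.array([residual(A, b, p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("tuned (omega, r):", np.round(gbest, 3), "residual:", round(pbest_val.min(), 4))
```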
