Optimization Algorithms and Their Applications

A special issue of Information (ISSN 2078-2489). This special issue belongs to the section "Information Applications".

Deadline for manuscript submissions: closed (30 June 2025) | Viewed by 14947

Special Issue Editor


Dr. Christos Gogos
Guest Editor
Department of Informatics and Telecommunications, University of Ioannina, 47100 Arta, Greece
Interests: optimization algorithms; combinatorial optimization; scheduling; timetabling; operations research; machine learning

Special Issue Information

Dear Colleagues,

MDPI’s Information journal is introducing a new Special Issue entitled “Optimization Algorithms and Their Applications”. Original papers related to optimization algorithms and their applications will be considered for publication. This Special Issue aims to bring together researchers from the optimization algorithms and combinatorial optimization communities to present innovative research results and novel applications. We solicit papers on all aspects of optimization algorithms, drawing on fields such as artificial intelligence, machine learning, computer science, and graph theory, as well as novel applications of optimization to scheduling, timetabling, transportation and logistics, robotic path planning, and related areas, in order to promote research activities in these fields.

Optimization problems are ubiquitous, manifesting themselves in various settings, affecting organizations (e.g., hospitals, schools, and universities), companies (e.g., transport companies, call centers, and service industries), and society at large (e.g., resource capacity planning). In this Special Issue, we welcome you to present your findings regarding the latest advances in complex optimization problems. Papers may present optimization algorithms, feature applications, novel approaches, innovative techniques, and theoretical findings on difficult optimization problems. Since such problems are computationally hard, high-performance computing approaches are welcomed and endorsed.

We welcome you to submit your most recent work in the fields of optimization algorithms, scheduling, timetabling, and their applications to this Special Issue, "Optimization Algorithms and Their Applications", in Information. Researchers from both industry and academia are warmly invited to submit either theoretical or practical research.

Dr. Christos Gogos
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Information is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • operations research
  • scheduling
  • timetabling
  • artificial intelligence
  • heuristics
  • metaheuristics
  • machine learning
  • graph algorithms
  • linear programming
  • mixed integer programming
  • constraint programming
  • educational timetabling
  • healthcare timetabling
  • employee rostering
  • social network analysis
  • urban planning and traffic management
  • path planning in autonomous systems
  • portfolio optimization
  • service industries optimization
  • parallel and distributed approaches to scheduling and timetabling

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found on the MDPI website.

Published Papers (12 papers)


Research


21 pages, 1902 KB  
Article
Mobile Platform for Continuous Screening of Clear Water Quality Using Colorimetric Plasmonic Sensing
by Rima Mansour, Caterina Serafinelli, Rui Jesus and Alessandro Fantoni
Information 2025, 16(8), 683; https://doi.org/10.3390/info16080683 - 10 Aug 2025
Viewed by 304
Abstract
Effective water quality monitoring is very important for detecting pollution and protecting public health. However, traditional methods are slow, relying on costly equipment, central laboratories, and expert staffing, which delays real-time measurements. At the same time, significant advancements have been made in the field of plasmonic sensing technologies, making them ideal for environmental monitoring. However, their reliance on large, expensive spectrometers limits accessibility. This work aims to bridge the gap between advanced plasmonic sensing and practical water monitoring needs, by integrating plasmonic sensors with mobile technology. We present BioColor, a mobile platform that consists of a plasmonic sensor setup, mobile application, and cloud services. The platform processes captured colorimetric sensor images in real-time using optimized image processing algorithms, including region-of-interest segmentation, color extraction (mean and dominant), and comparison via the CIEDE2000 metric. The results are visualized within the mobile app, providing instant and automated access to the sensing outcome. In our validation experiments, the system consistently measured color differences in various sensor images captured under media with different refractive indices. A user experience test with 12 participants demonstrated excellent usability, resulting in a System Usability Scale (SUS) score of 93. The BioColor platform brings advanced sensing capabilities from hardware into software, making environmental monitoring more accessible, efficient, and continuous.
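
As an illustration of the colorimetric pipeline described in the abstract (region-of-interest extraction, mean-color computation, CIEDE2000 comparison), the following minimal Python sketch uses scikit-image; the file names and ROI coordinates are hypothetical placeholders, and this is not the BioColor implementation.

```python
# Minimal sketch (not the BioColor code): estimate a colorimetric shift by
# comparing the mean colour of a region of interest (ROI) in two RGB sensor
# images using the CIEDE2000 metric. File names and ROI are placeholders.
import numpy as np
from skimage import io, img_as_float
from skimage.color import rgb2lab, deltaE_ciede2000

def roi_mean_lab(path, roi):
    """Mean CIELAB colour of a rectangular ROI (top, bottom, left, right)."""
    img = img_as_float(io.imread(path)[..., :3])   # assumes RGB(A); drop alpha
    t, b, l, r = roi
    mean_rgb = img[t:b, l:r].reshape(-1, 3).mean(axis=0)
    return rgb2lab(mean_rgb[np.newaxis, np.newaxis, :])[0, 0]

roi = (100, 200, 150, 250)                          # hypothetical ROI
lab_ref = roi_mean_lab("reference.png", roi)        # blank / reference medium
lab_sample = roi_mean_lab("sample.png", roi)        # medium under test
delta_e = deltaE_ciede2000(lab_ref, lab_sample)
print(f"CIEDE2000 colour difference: {float(delta_e):.2f}")
```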

13 pages, 398 KB  
Article
An Approximate Algorithm for Sparse Distributionally Robust Optimization
by Ruyu Wang, Yaozhong Hu, Cong Liu and Quanwei Gao
Information 2025, 16(8), 676; https://doi.org/10.3390/info16080676 - 7 Aug 2025
Viewed by 215
Abstract
In this paper, we propose a sparse distributionally robust optimization (DRO) model incorporating the Conditional Value-at-Risk (CVaR) measure to control tail risks in uncertain environments. The model utilizes sparsity to reduce transaction costs and enhance operational efficiency. We reformulate the problem as a Min-Max-Min optimization and convert it into an equivalent non-smooth minimization problem. To address this computational challenge, we develop an approximate discretization (AD) scheme for the underlying continuous random vector and prove its convergence to the original non-smooth formulation under mild conditions. The resulting problem can be efficiently solved using a subgradient method. While our analysis focuses on the CVaR penalty, this approach is applicable to a broader class of non-smooth convex regularizers. The experimental results on the portfolio selection problem confirm the effectiveness and scalability of the proposed AD algorithm.
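
For readers unfamiliar with the CVaR machinery, the sketch below minimizes the CVaR of portfolio losses over a discretized scenario set with a projected subgradient method, using the Rockafellar–Uryasev reformulation. It is a generic illustration on synthetic data, not the paper's sparse AD algorithm.

```python
# Illustrative sketch: minimise CVaR_a of portfolio losses over discrete
# scenarios via projected subgradient, with CVaR_a = min_t t + E[(loss-t)_+]/(1-a).
import numpy as np

rng = np.random.default_rng(0)
R = rng.normal(0.05, 0.2, size=(1000, 5))       # hypothetical return scenarios
alpha, steps, lr = 0.95, 500, 0.05

def project_simplex(v):
    """Euclidean projection onto {w >= 0, sum w = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / (np.arange(len(v)) + 1) > 0)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

w = np.full(R.shape[1], 1.0 / R.shape[1])       # start from equal weights
t = 0.0
for k in range(steps):
    losses = -R @ w
    excess = losses > t                          # scenarios beyond the VaR level
    g_w = (-R[excess].sum(axis=0)) / ((1 - alpha) * len(losses))  # subgradient in w
    g_t = 1.0 - excess.mean() / (1 - alpha)                        # subgradient in t
    w = project_simplex(w - lr / np.sqrt(k + 1) * g_w)
    t -= lr / np.sqrt(k + 1) * g_t

cvar = t + np.maximum(-R @ w - t, 0).mean() / (1 - alpha)
print("weights:", np.round(w, 3), " CVaR:", round(cvar, 4))
```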

31 pages, 3315 KB  
Article
Searching for the Best Artificial Neural Network Architecture to Estimate Column and Beam Element Dimensions
by Ayla Ocak, Gebrail Bekdaş, Sinan Melih Nigdeli, Umit Işıkdağ and Zong Woo Geem
Information 2025, 16(8), 660; https://doi.org/10.3390/info16080660 - 1 Aug 2025
Viewed by 310
Abstract
The cross-sectional dimensions of structural elements must be designed carefully, as they are directly related to the stiffness of the structure. Various optimization processes are applied to determine the optimum cross-sectional dimensions of beams or columns in structures. By repeating the optimization process for multiple load scenarios, it is possible to create a data set of optimum design section properties; however, this means repeating the same computations for every new case. Artificial intelligence offers a shortcut: by training on previously generated optimum cross-sectional dimensions, an artificial neural network can learn to predict the cross-section of a new structural element. In this study, an optimization process is applied to a simple tubular column and an I-section beam, and the results are compiled into a data set that presents the optimum section dimensions as a class. The harmony search (HS) algorithm, a metaheuristic method, was used for the optimization. An artificial neural network (ANN) was created to predict the cross-sectional dimensions of the sample structural elements. The neural architecture search (NAS) method, which employs metaheuristic algorithms to search for the best artificial neural network architecture, was applied; the best values of parameters such as the activation function, the number of layers, and the number of neurons were searched for with a tool called HyperNetExplorer. Model metrics were calculated to evaluate the prediction success of the developed model, and an effective neural network architecture for column and beam elements was obtained.
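
The sketch below shows the general shape of a harmony search over a toy architecture space (number of layers, neurons per layer, activation). The fitness function is a placeholder for "train the candidate network and return its validation error"; it is not the HyperNetExplorer tool used in the paper.

```python
# Compact harmony search (HS) sketch over a toy neural-architecture space.
import random

LAYERS = [1, 2, 3, 4]
NEURONS = [8, 16, 32, 64, 128]
ACTIVATIONS = ["relu", "tanh", "sigmoid"]

def fitness(arch):
    # Placeholder objective: in practice, build and train an ANN and return
    # its validation loss for this (layers, neurons, activation) combination.
    layers, neurons, act = arch
    return abs(layers - 2) + abs(neurons - 32) / 32 + (0.0 if act == "relu" else 0.3)

HMS, HMCR, PAR, ITERATIONS = 10, 0.9, 0.3, 200
memory = [(random.choice(LAYERS), random.choice(NEURONS), random.choice(ACTIVATIONS))
          for _ in range(HMS)]

def improvise(domain, idx):
    if random.random() < HMCR:                  # memory consideration
        value = random.choice(memory)[idx]
        if random.random() < PAR:               # pitch adjustment: nudge to a neighbour
            j = domain.index(value)
            value = domain[max(0, min(len(domain) - 1, j + random.choice((-1, 1))))]
        return value
    return random.choice(domain)                # random consideration

for _ in range(ITERATIONS):
    new = (improvise(LAYERS, 0), improvise(NEURONS, 1), improvise(ACTIVATIONS, 2))
    worst = max(range(HMS), key=lambda i: fitness(memory[i]))
    if fitness(new) < fitness(memory[worst]):   # replace the worst harmony
        memory[worst] = new

best = min(memory, key=fitness)
print("best architecture (layers, neurons, activation):", best)
```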

13 pages, 718 KB  
Article
Application of Optimization Algorithms in Voter Service Module Allocation
by Edgar Jardón, Marcelo Romero and José-Raymundo Marcial-Romero
Information 2025, 16(6), 506; https://doi.org/10.3390/info16060506 - 18 Jun 2025
Viewed by 438
Abstract
Allocation models are essential tools for optimally distributing client requests across multiple services under defined restrictions and objective functions. This study evaluates several heuristics to address an allocation problem involving young individuals reaching voting age. A five-step methodology was implemented: defining variables, executing heuristics, compiling results, evaluating outcomes, and selecting the most effective heuristic. Using experimental data from the Mexican National Electoral Institute (INE), the study focuses on 88,107 individuals aged 17–18 in the 16 municipalities of the Toluca Valley, who can access any of the 10 INE service modules. Six heuristics were analyzed in sequence: a genetic algorithm, ant colony optimization, local search, tabu search, simulated annealing, and a greedy algorithm. The results indicate that the genetic algorithm significantly reduces processing time when used as the initial heuristic. Furthermore, given the current capacity of the 10 INE modules, serving the entire target population would require nine working days. These findings align with principles of spatial justice and highlight the practical efficiency of heuristic-based solutions in administrative resource allocation. The main contribution of this study is the development and evaluation of a hybrid heuristic framework for allocating INE modules, demonstrating that combining multiple heuristics, with a genetic algorithm as the initial phase, significantly improves solution quality and computational efficiency.
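
A genetic algorithm for this kind of allocation problem can be sketched as follows: each candidate solution assigns every municipality's demand to one module, and fitness combines travel cost with a capacity-overload penalty. All data below are synthetic stand-ins, not INE data, and the operators are generic rather than those tuned in the study.

```python
# Toy GA sketch for a service-module allocation problem.
import numpy as np

rng = np.random.default_rng(1)
n_mun, n_mod = 16, 10
demand = rng.integers(2000, 9000, n_mun)           # people per municipality
capacity = np.full(n_mod, demand.sum() // n_mod + 2000)
dist = rng.uniform(1, 60, (n_mun, n_mod))          # km municipality -> module

def cost(assign):
    travel = (dist[np.arange(n_mun), assign] * demand).sum()
    load = np.bincount(assign, weights=demand, minlength=n_mod)
    overload = np.maximum(load - capacity, 0).sum()
    return travel + 50.0 * overload                # capacity penalty weight

pop = rng.integers(0, n_mod, (60, n_mun))          # initial population
for gen in range(300):
    fit = np.array([cost(ind) for ind in pop])
    parents = pop[np.argsort(fit)[:30]]            # truncation selection
    cut = rng.integers(1, n_mun, 30)
    children = np.array([np.concatenate((parents[i][:c], parents[(i + 1) % 30][c:]))
                         for i, c in enumerate(cut)])          # one-point crossover
    mutate = rng.random(children.shape) < 0.05
    children[mutate] = rng.integers(0, n_mod, mutate.sum())    # random-reset mutation
    pop = np.vstack((parents, children))

best = pop[np.argmin([cost(ind) for ind in pop])]
print("best assignment:", best, " cost:", round(cost(best), 1))
```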

21 pages, 608 KB  
Article
A Machine Learning-Assisted Automation System for Optimizing Session Preparation Time in Digital Audio Workstations
by Bogdan Moroșanu, Marian Negru, Georgian Nicolae, Horia Sebastian Ioniță and Constantin Paleologu
Information 2025, 16(6), 494; https://doi.org/10.3390/info16060494 - 13 Jun 2025
Viewed by 739
Abstract
Modern audio production workflows often require significant manual effort during the initial session preparation phase, including track labeling, format standardization, and gain staging. This paper presents a rule-based and Machine Learning-assisted automation system designed to minimize the time required for these tasks in Digital Audio Workstations (DAWs). The system automatically detects and labels audio tracks, identifies and eliminates redundant fake stereo channels, merges double-tracked instruments into stereo pairs, standardizes sample rate and bit rate across all tracks, and applies initial gain staging using target loudness values derived from a Genetic Algorithm (GA)-based system, which optimizes gain levels for individual track types based on engineer preferences and instrument characteristics. By replacing manual setup processes with automated decision-making methods informed by Machine Learning (ML) and rule-based heuristics, the system reduces session preparation time by up to 70% in typical multitrack audio projects. The proposed approach highlights how practical automation, combined with lightweight Neural Network (NN) models, can optimize workflow efficiency in real-world music production environments.
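
The gain-staging step can be illustrated with a few lines of Python: measure each track's RMS level in dBFS and compute the gain needed to reach a per-instrument target. The target levels below are assumed placeholders rather than the GA-optimized values described in the paper.

```python
# Minimal gain-staging sketch: per-track RMS level and gain offset to target.
import numpy as np

TARGET_DBFS = {"kick": -14.0, "vocal": -18.0, "guitar": -20.0}  # assumed presets

def rms_dbfs(samples):
    """RMS level of a float audio buffer (range [-1, 1]) in dBFS."""
    rms = np.sqrt(np.mean(np.square(samples)))
    return 20.0 * np.log10(max(rms, 1e-9))

def gain_to_target(samples, track_type):
    return TARGET_DBFS[track_type] - rms_dbfs(samples)

# Usage with synthetic audio standing in for imported DAW tracks:
rng = np.random.default_rng(0)
tracks = {"kick": 0.4 * rng.standard_normal(48000),
          "vocal": 0.05 * rng.standard_normal(48000)}
for name, audio in tracks.items():
    print(f"{name}: apply {gain_to_target(audio, name):+.1f} dB")
```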

23 pages, 3846 KB  
Article
Efficient Context-Preserving Encoding and Decoding of Compositional Structures Using Sparse Binary Representations
by Roman Malits and Avi Mendelson
Information 2025, 16(5), 343; https://doi.org/10.3390/info16050343 - 24 Apr 2025
Viewed by 486
Abstract
Despite their unprecedented success, artificial neural networks suffer from extreme opacity and struggle to learn general knowledge from limited experience. Some argue that the key to overcoming these limitations is to efficiently combine the principles of continuity and compositionality. While it is unknown how the brain encodes and decodes information in a way that enables both rapid responses and complex processing, there is evidence that the neocortex employs sparse distributed representations for this task, and this remains an active area of research. This work deals with one of the challenges in this field: encoding and decoding nested compositional structures, which are essential for representing complex real-world concepts. One of the algorithms in this field is context-dependent thinning (CDT). A distinguishing feature of CDT relative to other methods is that the CDT-encoded vector remains similar to each component input and to combinations of similar inputs. In this work, we propose a novel encoding method termed CPSE, based on CDT ideas, and a novel decoding method termed CPSD, based on triadic memory. The proposed algorithms extend CDT by allowing both encoding and decoding of information, including the composition order, and make it possible to optimize the amount of compute and memory needed to achieve the desired encoding/decoding performance.
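
To give a flavor of thinning-based encoding, the sketch below implements a generic context-dependent thinning step over sparse binary vectors: superpose the component codevectors with OR, then AND the result with the union of a few fixed random permutations of itself. It follows the general CDT idea only and is not the CPSE/CPSD algorithms proposed in the paper.

```python
# Generic context-dependent thinning (CDT) sketch over sparse binary vectors.
import numpy as np

N, ACTIVE, K_PERM = 2048, 40, 3
rng = np.random.default_rng(7)
perms = [rng.permutation(N) for _ in range(K_PERM)]   # fixed thinning permutations

def random_codevector():
    v = np.zeros(N, dtype=bool)
    v[rng.choice(N, ACTIVE, replace=False)] = True
    return v

def cdt_encode(components):
    z = np.logical_or.reduce(components)              # superposition (OR)
    permuted_union = np.zeros(N, dtype=bool)
    for p in perms:
        permuted_union |= z[p]                        # union of permuted copies
    return z & permuted_union                         # thinning (AND)

a, b, c = (random_codevector() for _ in range(3))
code = cdt_encode([a, b, c])
print("component density:", ACTIVE / N, " code density:", round(float(code.mean()), 4))
print("overlap with a:", int((code & a).sum()), "of", int(code.sum()), "active bits")
```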

30 pages, 16764 KB  
Article
Design of a Device for Optimizing Burden Distribution in a Blast Furnace Hopper
by Gabriele Degrassi, Lucia Parussini, Marco Boscolo, Elio Padoano, Carlo Poloni, Nicola Petronelli and Vincenzo Dimastromatteo
Information 2025, 16(5), 337; https://doi.org/10.3390/info16050337 - 22 Apr 2025
Viewed by 514
Abstract
The coke and ore are stacked alternately in layers inside the blast furnace. The capability of the charging system to distribute them in the desired manner and with optimum strata thickness is crucial for the efficiency and high-performance operation of the blast furnace itself. The objective of this work is the optimization of the charging equipment of a specific blast furnace. This blast furnace consists of a hopper, a single bell and a deflector inserted in the hopper under the conveyor belt. The focus is the search for a deflector geometry capable of distributing the material as evenly as possible in the hopper in order to ensure the effective disposal of the material released in the blast furnace. This search was performed by coupling the discrete element method with a multi-strategy and self-adapting optimization algorithm. The numerical results were qualitatively validated with a laboratory-scale model. Low cost and the simplicity of operation and maintenance are the strengths of the proposed charging system. Moreover, the methodological approach can be extended to other applications and contexts, such as chemical, pharmaceutical and food processing industries. This is especially true when complex material release conditions necessitate achieving bulk material distribution requirements in containers, silos, hoppers or similar components.
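
One ingredient of such an optimization loop is an objective that scores how evenly a candidate deflector spreads the material. The toy sketch below bins simulated particle landing angles into hopper sectors and uses the coefficient of variation of sector masses as an evenness cost; the particle data are random stand-ins for a DEM simulation output, not the coupling used in the paper.

```python
# Toy evenness objective for burden distribution in a hopper.
import numpy as np

rng = np.random.default_rng(5)

def evenness_cost(landing_angles_rad, masses, n_sectors=12):
    """Coefficient of variation of the mass collected in each angular sector."""
    sector = (landing_angles_rad % (2 * np.pi)) // (2 * np.pi / n_sectors)
    per_sector = np.bincount(sector.astype(int), weights=masses, minlength=n_sectors)
    return per_sector.std() / per_sector.mean()

# One hypothetical DEM run for a candidate deflector geometry:
angles = rng.normal(np.pi, 1.2, size=5000)      # biased landing angles -> uneven
masses = rng.uniform(0.5, 1.5, size=5000)
print("evenness cost:", round(float(evenness_cost(angles, masses)), 3))
```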

13 pages, 850 KB  
Article
Improving Physically Unclonable Functions’ Performance Using Second-Order Compensated Measurement
by Jorge Fernández-Aragón, Guillermo Diez-Señorans, Miguel Garcia-Bosque, Raúl Aparicio-Téllez, Gabriel López-Pinar and Santiago Celma
Information 2025, 16(3), 166; https://doi.org/10.3390/info16030166 - 21 Feb 2025
Viewed by 655
Abstract
In this paper, we study the performance of second-order compensated measurement for generating a multi-bit response in physically unclonable functions (PUFs). The proposed technique is based on a novel second-order compensated measurement that generates multiple bits instead of the single bit provided by the conventional compensated measurement. A PUF based on this technique has been proposed and implemented on 40 Artix-7 FPGAs, and its uniqueness and reproducibility have been compared to those of another PUF using the conventional compensated measurement technique. In addition, we demonstrate that the best trade-off between identifiability and computation time is obtained when using only two bits. The technique improves the identifiability of a ring oscillator PUF (RO-PUF) by between 70% and 90% compared to an RO-PUF that uses conventional compensated measurement. In particular, equal error rates (EER) of the order of 10⁻¹⁶ can be achieved by combining the sign bit with another bit extracted using the proposed technique, and up to 10⁻¹⁹ by using one more extra bit. In addition, the high reliability of the responses generated by this technique against temperature and voltage variations has been demonstrated. These results show how this new technique improves the performance of the PUF in terms of identifiability, so it can be effectively used for device identification purposes.
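
For context, the conventional compensated measurement compares frequency ratios of two ring-oscillator pairs and keeps the sign of that comparison as the response bit. The sketch below illustrates that comparison and adds a toy magnitude bit purely to show multi-bit extraction; it is not the second-order compensated measurement proposed in the paper, and the threshold and mismatch model are assumptions.

```python
# Illustrative compensated-measurement sketch for a ring-oscillator PUF.
import numpy as np

rng = np.random.default_rng(3)

def ro_frequencies(n, nominal=500e6, mismatch=0.01):
    """Hypothetical RO frequencies with device-dependent random mismatch (Hz)."""
    return nominal * (1 + mismatch * rng.standard_normal(n))

f = ro_frequencies(4)                     # one challenge selects four oscillators
ratio_diff = np.log(f[0] / f[1]) - np.log(f[2] / f[3])    # compensated comparison

sign_bit = int(ratio_diff > 0)            # conventional 1-bit response
magnitude_bit = int(abs(ratio_diff) > 0.01)               # toy extra bit (assumed threshold)
print(f"ratio difference = {ratio_diff:+.4f} -> bits = {sign_bit}{magnitude_bit}")
```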

21 pages, 1072 KB  
Article
Community Detection Using Deep Learning: Combining Variational Graph Autoencoders with Leiden and K-Truss Techniques
by Jyotika Hariom Patil, Petros Potikas, William B. Andreopoulos and Katerina Potika
Information 2024, 15(9), 568; https://doi.org/10.3390/info15090568 - 16 Sep 2024
Viewed by 2886
Abstract
Deep learning struggles with unsupervised tasks such as community detection in networks. This work proposes Enhanced Community Detection with Structural Information VGAE (VGAE-ECF), a method that enhances variational graph autoencoders (VGAEs) for community detection in large networks. It incorporates community structure information and edge weights alongside traditional network data, and this combined input leads to improved latent representations for community identification via K-means clustering. Our experiments show that the method outperforms previous community-aware VGAE approaches.
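
The final clustering step can be illustrated directly: given node embeddings produced by a (community-aware) VGAE encoder, communities are obtained with K-means. In the sketch below the embedding matrix is random and stands in for the trained encoder output; it is not the VGAE-ECF model itself.

```python
# Cluster stand-in VGAE node embeddings into communities with K-means.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_nodes, latent_dim, k_communities = 300, 16, 4
Z = rng.standard_normal((n_nodes, latent_dim))      # stand-in for VGAE latent means

labels = KMeans(n_clusters=k_communities, n_init=10, random_state=0).fit_predict(Z)
for c in range(k_communities):
    print(f"community {c}: {int((labels == c).sum())} nodes")
```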

19 pages, 373 KB  
Article
A New Integer Model for Selecting Students at Higher Education Institutions: Preparatory Classes of Engineers as Case Study
by Soufyane Majdoub, Chakir Loqman and Jaouad Boumhidi
Information 2024, 15(9), 529; https://doi.org/10.3390/info15090529 - 2 Sep 2024
Cited by 3 | Viewed by 1513
Abstract
This study addresses the challenge of selecting outstanding students at higher education institutions under multiple constraints. We propose a novel integer programming solution to manage this process, formulating it as a constrained assignment problem with a maximization objective function. This function prioritizes the fair selection of students while respecting criteria such as academic qualifications, required skills, and student preferences. The goal is to develop a decision support system that efficiently selects qualified students at higher education institutions within a reasonable time. The model was tested using real data from Moroccan preparatory classes, achieving high assignment rates across all student categories. The results demonstrate strong performance in terms of execution time, fulfillment of student choices, and prioritization of outstanding students. This approach offers a flexible, efficient solution for managing academic merit-based selections, optimizing resource utilization, and enhancing fairness in the selection process.
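
A stripped-down version of such an assignment model can be written as an integer program, sketched below with PuLP: binary variables assign students to classes, the objective rewards academic score plus a preference bonus, and capacity constraints bound each class. The data, weights, and class names are hypothetical, not the model or data used in the study.

```python
# Toy integer program for merit-based student assignment (PuLP + CBC).
import pulp, random

random.seed(0)
students, classes = range(30), ["MP", "PSI", "TSI"]
capacity = {"MP": 8, "PSI": 8, "TSI": 8}
score = {s: random.uniform(10, 20) for s in students}            # academic merit
pref = {(s, c): random.choice((0, 1, 2)) for s in students for c in classes}

model = pulp.LpProblem("student_selection", pulp.LpMaximize)
x = pulp.LpVariable.dicts("x", (students, classes), cat=pulp.LpBinary)

# Objective: reward merit and stated preferences of the assigned students.
model += pulp.lpSum(x[s][c] * (score[s] + 0.5 * pref[s, c])
                    for s in students for c in classes)
for s in students:                                               # at most one seat per student
    model += pulp.lpSum(x[s][c] for c in classes) <= 1
for c in classes:                                                # class capacity
    model += pulp.lpSum(x[s][c] for s in students) <= capacity[c]

model.solve(pulp.PULP_CBC_CMD(msg=False))
selected = [(s, c) for s in students for c in classes if x[s][c].value() == 1]
print("assigned", len(selected), "of", len(students), "students")
```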

19 pages, 404 KB  
Article
A New Algorithm Framework for the Influence Maximization Problem Using Graph Clustering
by Agostinho Agra and Jose Maria Samuco
Information 2024, 15(2), 112; https://doi.org/10.3390/info15020112 - 14 Feb 2024
Cited by 3 | Viewed by 2637
Abstract
Given a social network modelled by a graph, the goal of the influence maximization problem is to find k vertices that maximize the number of active vertices through a process of diffusion. For this diffusion, the linear threshold model is considered. A new algorithm, called ClusterGreedy, is proposed to solve the influence maximization problem. The ClusterGreedy algorithm creates a partition of the original set of nodes into small subsets (the clusters), applies the SimpleGreedy algorithm to the subgraphs induced by each subset of nodes, and obtains the seed set from a combination of the seed set of each cluster by solving an integer linear program. This algorithm is further improved by exploring the submodularity property of the diffusion function. Experimental results show that the ClusterGreedy algorithm provides, on average, higher influence spread and lower running times than the SimpleGreedy algorithm on Watts–Strogatz random graphs.
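
The sketch below shows the two building blocks named in the abstract, the linear threshold diffusion model and a SimpleGreedy-style seed selection, on a small Watts–Strogatz graph using networkx. It is a toy baseline for illustration, not the ClusterGreedy algorithm.

```python
# Linear threshold (LT) diffusion and greedy seed selection on a small graph.
import random
import networkx as nx

random.seed(42)
G = nx.watts_strogatz_graph(100, 6, 0.1).to_directed()
for v in G.nodes:                                 # uniform incoming edge weights
    indeg = G.in_degree(v)
    for u in G.predecessors(v):
        G[u][v]["w"] = 1.0 / indeg

def lt_spread(G, seeds, trials=10):
    """Average number of active nodes under the LT model with random thresholds."""
    total = 0
    for _ in range(trials):
        theta = {v: random.random() for v in G.nodes}
        active, frontier = set(seeds), set(seeds)
        while frontier:
            frontier = {v for v in G.nodes if v not in active and
                        sum(G[u][v]["w"] for u in G.predecessors(v) if u in active)
                        >= theta[v]}
            active |= frontier
        total += len(active)
    return total / trials

def simple_greedy(G, k):
    seeds = set()
    for _ in range(k):                            # add the best marginal seed each round
        best = max((v for v in G.nodes if v not in seeds),
                   key=lambda v: lt_spread(G, seeds | {v}))
        seeds.add(best)
    return seeds

seeds = simple_greedy(G, k=3)
print("seed set:", seeds, " expected spread:", round(lt_spread(G, seeds), 1))
```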

Review


19 pages, 2302 KB  
Review
Solutions to Address the Low-Capacity Utilization Issue in Singapore’s Precast Industry
by Chen Chen and Robert Tiong
Information 2024, 15(8), 458; https://doi.org/10.3390/info15080458 - 1 Aug 2024
Viewed by 2225
Abstract
Singapore has established six Integrated Construction and Prefabrication Hubs with the goal of meeting ambitious productivity targets and building a resilient precast supply chain by 2024. These factories are equipped with high levels of mechanization and automation. However, they are currently operating far below their designed capacity due to a storage bottleneck: in land-scarce Singapore, finding large spaces for precast storage is a challenge. One possible solution is to implement a just-in-time approach. To achieve this, a systematic approach is required to plan, monitor, and control the entire supply chain effectively, utilizing various strategies, methods, and tools. This paper conducts a comprehensive literature review of related areas, on the premise that knowledge transfer is a faster way to develop solutions to new problems. The main idea of the proposed solution is to implement an integrated supply chain system model with a central decision-maker, with the factories taking a more active role in decision-making. Establishing this integrated system relies on trust and information sharing, which can be facilitated by cutting-edge digital technologies. The results of this paper will provide valuable insights for future research aimed at fully resolving this issue.
