Advances in Machine Learning and Mathematical Modeling for Optimization Problems

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "Mathematics and Computer Science".

Deadline for manuscript submissions: closed (15 December 2022) | Viewed by 18296

Printed Edition Available!
A printed edition of this Special Issue is available.

Special Issue Editors


Guest Editor
1. Mathematics and Computer Science, Royal Military College of Canada, Kingston, ON K7K 7B4, Canada
2. Centre for Neuroscience Studies, School of Computing, Queen’s University, Kingston, ON K7L 2N8, Canada
Interests: artificial intelligence; machine learning; reinforcement learning; animal learning; interval timing; computational neuroscience

Guest Editor
1. Department of Applied Sciences, University of Quebec in Chicoutimi, 555, boul. de l’Université, Chicoutimi, QC G7H 2B1, Canada
2. School of Information Technology and Engineering, University of Ottawa, 800 King Edward Avenue, Ottawa, ON K1N 6N5, Canada
Interests: machine learning; big data analytics; modeling and optimization of vehicular networks; statistical signal processing; wireless communications; Internet of Things; industry 4.0; 5G

Special Issue Information

Dear Colleagues,

Machine learning and deep learning have made tremendous progress over the last decade and have become the de facto standard across a wide range of image, video, text, and sound processing domains, from object recognition to image generation. More recently, deep learning and deep reinforcement learning have begun to be trained end-to-end to solve more complex operations research and combinatorial optimization problems, such as covering problems, vehicle routing problems, traveling salesman problems, scheduling problems, and other complex problems requiring more general simulations. These methods also sometimes incorporate classic search and optimization algorithms into machine learning, such as Monte Carlo Tree Search in AlphaGo.

This Special Issue focuses on recent advances in machine learning and mathematical modeling for optimization problems. Topics include but are not limited to:

  1. Machine learning for optimization problems
  2. Statistical learning
  3. End-to-end machine learning
  4. Graph neural networks
  5. Combining classic optimization algorithms and machine learning
  6. Mathematical models of problems for machine learning
  7. Optimization methods for machine learning
  8. Evolutionary computation and optimization problems
  9. Applications such as scheduling problems, smart cities, etc.

New combinations of algorithms or new deep neural network architectures or loss functions specifically adapted to solve graph and combinatorial optimization problems are particularly welcome.

Prof. Dr. Francois Rivest
Prof. Dr. Abdellah Chehri
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Machine learning for optimization problems
  • Statistical learning
  • End-to-end machine learning
  • Graph neural networks
  • Combining classic optimization algorithms and machine learning
  • Mathematical models of problems for machine learning
  • Optimization methods for machine learning
  • Evolutionary computation and optimization problems
  • Applications such as scheduling problems, smart cities, etc.

Published Papers (10 papers)


Editorial

5 pages, 175 KiB  
Editorial
Editorial for the Special Issue “Advances in Machine Learning and Mathematical Modeling for Optimization Problems”
by Abdellah Chehri and Francois Rivest
Mathematics 2023, 11(8), 1890; https://doi.org/10.3390/math11081890 - 17 Apr 2023
Viewed by 1071
Abstract
Machine learning and deep learning have made tremendous progress over the last decade and have become the de facto standard across a wide range of image, video, text, and sound processing domains, from object recognition to image generation [...] Full article

Research

23 pages, 6566 KiB  
Article
A Multi-Objective Crowding Optimization Solution for Efficient Sensing as a Service in Virtualized Wireless Sensor Networks
by Ramy A. Othman, Saad M. Darwish and Ibrahim A. Abd El-Moghith
Mathematics 2023, 11(5), 1128; https://doi.org/10.3390/math11051128 - 24 Feb 2023
Cited by 6 | Viewed by 1440
Abstract
The Internet of Things (IoT) encompasses a wide range of applications and service domains, from smart cities, autonomous vehicles, surveillance, medical devices, to crop control. Virtualization in wireless sensor networks (WSNs) is widely regarded as the most revolutionary technological technique used in these areas. Due to node failure or communication latency and the regular identification of nodes in WSNs, virtualization in WSNs presents additional hurdles. Previous research on virtual WSNs has focused on issues such as resource maximization, node failure, and link-failure-based survivability, but has neglected to account for the impact of communication latency. Communication connection latency in WSNs has an effect on various virtual networks providing IoT services. There is a lack of research in this field at the present time. In this study, we utilize the Evolutionary Multi-Objective Crowding Algorithm (EMOCA) to maximize fault tolerance and minimize communication delay for virtual network embedding in WSN environments for service-oriented applications focusing on heterogeneous virtual networks in the IoT. Unlike the current wireless virtualization approach, which uses the Non-dominated Sorting Genetic Algorithm-II (NSGA-II), EMOCA uses both domination and diversity criteria in the evolving population for optimization problems. The analysis of the results demonstrates that the proposed framework successfully optimizes fault tolerance and communication delay for virtualization in WSNs. Full article
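As a hypothetical sketch (not the paper's code), the two selection ingredients the abstract attributes to EMOCA — Pareto domination and a crowding-style diversity criterion — can be expressed as follows, assuming all objectives are minimized:

```python
def dominates(a, b):
    """True if solution a Pareto-dominates b: no worse on every objective,
    strictly better on at least one (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def crowding_distance(front):
    """Diversity score per point on a non-dominated front: the sum, over
    objectives, of the normalized gap between each point's two neighbors.
    Boundary points get infinite distance so they are always retained."""
    n, m = len(front), len(front[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: front[i][k])
        lo, hi = front[order[0]][k], front[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float("inf")
        if hi == lo:
            continue
        for j in range(1, n - 1):
            dist[order[j]] += (front[order[j + 1]][k] - front[order[j - 1]][k]) / (hi - lo)
    return dist
```

A selection step would prefer dominating solutions first and, among mutually non-dominated ones, those with larger crowding distance.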

46 pages, 7323 KiB  
Article
S-Type Random k Satisfiability Logic in Discrete Hopfield Neural Network Using Probability Distribution: Performance Optimization and Analysis
by Suad Abdeen, Mohd Shareduwan Mohd Kasihmuddin, Nur Ezlin Zamri, Gaeithry Manoharam, Mohd. Asyraf Mansor and Nada Alshehri
Mathematics 2023, 11(4), 984; https://doi.org/10.3390/math11040984 - 15 Feb 2023
Cited by 6 | Viewed by 1371
Abstract
Recently, a variety of non-systematic satisfiability studies on Discrete Hopfield Neural Networks have been introduced to overcome a lack of interpretation. Although a flexible structure was established to assist in the generation of a wide range of spatial solutions that converge on global minima, the fundamental problem is that the existing logic completely ignores the probability dataset’s distribution and features, as well as the literal status distribution. Thus, this study considers a new type of non-systematic logic termed S-type Random k Satisfiability, which employs a creative layer of a Discrete Hopfield Neural Network, and which plays a significant role in the identification of the prevailing attribute likelihood of a binomial distribution dataset. The goal of the probability logic phase is to establish the logical structure and assign negative literals based on two given statistical parameters. The performance of the proposed logic structure was investigated by comparing a proposed metric to current state-of-the-art logical rules; consequently, it was found that the models have a high value in two parameters that efficiently introduce a logical structure in the probability logic phase. Additionally, by implementing a Discrete Hopfield Neural Network, it has been observed that the cost function experiences a reduction. A new form of synaptic weight assessment via statistical methods was applied to investigate the effect of the two proposed parameters in the logic structure. Overall, the investigation demonstrated that controlling the two proposed parameters has a good effect on synaptic weight management and the generation of global minima solutions. Full article

19 pages, 4113 KiB  
Article
Edge Computing Offloading Method Based on Deep Reinforcement Learning for Gas Pipeline Leak Detection
by Dong Wei, Renjun Wang, Changqing Xia, Tianhao Xia, Xi Jin and Chi Xu
Mathematics 2022, 10(24), 4812; https://doi.org/10.3390/math10244812 - 18 Dec 2022
Cited by 3 | Viewed by 1577
Abstract
Traditional gas pipeline leak detection methods require task offload decisions to be made in the cloud, which yields poor real-time performance. The emergence of edge computing provides a solution by enabling offload decisions directly at the edge server, improving real-time performance; however, energy is the new bottleneck. Therefore, focusing on the real-time gas transmission pipeline leakage detection scenario, a novel detection algorithm that combines the benefits of both a heuristic algorithm and the advantage actor critic (AAC) algorithm is proposed in this paper. It aims at optimization with the goals of guaranteeing the real-time execution of pipeline mapping analysis tasks and maximizing the survival time of portable gas leak detectors. Since the computing power of portable detection devices is limited, as they are powered by batteries, the main problem to be solved in this study is how to take node energy overhead into account while guaranteeing the system performance requirements. By introducing the idea of edge computing and taking the mapping relationship between resource occupation and energy consumption as the starting point, an optimization model is established with the goal of minimizing the total system cost (TSC), which is composed of the node’s transmission energy consumption, local computing energy consumption, and residual electricity weight. To minimize TSC, the algorithm uses the AAC network to make task scheduling decisions and judge whether tasks need to be offloaded, and uses heuristic strategies and the Cauchy–Buniakowsky–Schwarz inequality to determine the allocation of communication resources. The experiments show that the proposed algorithm meets the real-time requirements of the detector and achieves lower energy consumption, saving approximately 56% of the system energy compared to the Deep Q Network (DQN) algorithm. Compared with the artificial gorilla troops optimizer (GTO), the black widow optimization algorithm (BWOA), the exploration-enhanced grey wolf optimizer (EEGWO), the African vultures optimization algorithm (AVOA), and the driving training-based optimization (DTBO), it saves 21%, 38%, 30%, 31%, and 44% of energy consumption, respectively. Compared to the fully local computing and fully offloading algorithms, it saves 50% and 30%, respectively. Meanwhile, the task completion rate of this algorithm reaches 96.3%, the best real-time performance among these algorithms. Full article
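The TSC trade-off described above — transmission energy versus local computing energy, weighted by residual electricity — can be illustrated with a toy decision rule. All names and the weighting scheme here are assumptions for illustration, not the paper's model:

```python
def tsc(e_tx, e_local, offload, battery):
    """Toy total system cost: the energy term of the chosen mode, inflated as
    the residual battery fraction (0..1) shrinks, so a nearly drained node
    treats every joule as more expensive."""
    energy = e_tx if offload else e_local
    return energy * (1.0 + (1.0 - battery))

def best_decision(e_tx, e_local, battery):
    """True if offloading the task yields the lower toy TSC."""
    return tsc(e_tx, e_local, True, battery) < tsc(e_tx, e_local, False, battery)
```

In the paper this decision is learned by the AAC network rather than computed greedily; the sketch only shows the cost structure being traded off.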

21 pages, 387 KiB  
Article
Comparison of Genetic Operators for the Multiobjective Pickup and Delivery Problem
by Connor Little, Salimur Choudhury, Ting Hu and Kai Salomaa
Mathematics 2022, 10(22), 4308; https://doi.org/10.3390/math10224308 - 17 Nov 2022
Cited by 1 | Viewed by 1163
Abstract
The pickup and delivery problem is a pertinent problem in our interconnected world. Being able to move goods and people efficiently can lead to decreases in costs, emissions, and time. In this work, we create a genetic algorithm to solve the multiobjective capacitated pickup and delivery problem, adapting commonly used benchmarks. The objective is to minimize the total distance travelled and the number of vehicles utilized. Based on NSGA-II, we explore how different inter-route and intra-route mutations affect the final solution. We introduce 6 inter-route operations and 16 intra-route operations and calculate the hypervolume measure to directly compare their impact. We also introduce two different crossover operators that are specialized for this problem. Our methodology found optimal results in 23% of the instances in the first benchmark, and in most other instances it generated a Pareto front within at most one vehicle and +20% of the best-known distance. By producing multiple solutions, it allows users to choose the routes that best suit their needs. Full article
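The hypervolume measure used above to compare operators has a simple closed form for two minimization objectives; the sketch below is an illustrative implementation (not the paper's code), assuming the front consists of mutually non-dominated points all dominated by the reference point:

```python
def hypervolume_2d(front, ref):
    """Area dominated by a 2-objective minimization front, relative to a
    reference point ref. Sorting by the first objective makes the second
    objective strictly decreasing, so the area decomposes into rectangles."""
    pts = sorted(front)          # ascending in objective 1
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:
        hv += (ref[0] - x) * (prev_y - y)
        prev_y = y
    return hv
```

A larger hypervolume means the operator produced a front that is closer to the ideal point and/or better spread, which is why it serves as a single scalar for comparing multiobjective runs.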

70 pages, 5987 KiB  
Article
A Multi-Depot Dynamic Vehicle Routing Problem with Stochastic Road Capacity: An MDP Model and Dynamic Policy for Post-Decision State Rollout Algorithm in Reinforcement Learning
by Wadi Khalid Anuar, Lai Soon Lee, Hsin-Vonn Seow and Stefan Pickl
Mathematics 2022, 10(15), 2699; https://doi.org/10.3390/math10152699 - 30 Jul 2022
Cited by 8 | Viewed by 2377
Abstract
In the event of a disaster, the road network is often compromised in terms of its capacity and usability conditions. This is a challenge for humanitarian operations in the context of delivering critical medical supplies. To optimise vehicle routing for such a problem, a Multi-Depot Dynamic Vehicle-Routing Problem with Stochastic Road Capacity (MDDVRPSRC) is formulated as a Markov Decision Process (MDP) model. An Approximate Dynamic Programming (ADP) solution method is adopted in which the Post-Decision State Rollout Algorithm (PDS-RA) is applied as the lookahead approach. To perform the rollout effectively, the PDS-RA is executed for every vehicle assigned to the problem, and a decision is then made by the agent. Five types of constructive base heuristics are proposed for the PDS-RA. First, the Teach Base Insertion Heuristic (TBIH-1) is proposed to study the partial random construction approach for the non-obvious decision. The heuristic is extended by proposing TBIH-2 and TBIH-3 to show how the Sequential Insertion Heuristic (SIH) (I1) and the Clarke and Wright (CW) heuristic, respectively, can be executed in a dynamic setting as modifications to the TBIH-1. Additionally, two further heuristics, TBIH-4 and TBIH-5 (TBIH-1 with the addition of Dynamic Lookahead SIH (DLASIH) and Dynamic Lookahead CW (DLACW), respectively), are proposed to improve the decision rule constructed on the go (the dynamic policy) in the lookahead simulations. The results obtained are compared with the matheuristic approach from previous work based on PDS-RA. Full article

20 pages, 488 KiB  
Article
An Optimized Decision Support Model for COVID-19 Diagnostics Based on Complex Fuzzy Hypersoft Mapping
by Muhammad Saeed, Muhammad Ahsan, Muhammad Haris Saeed, Atiqe Ur Rahman, Asad Mehmood, Mazin Abed Mohammed, Mustafa Musa Jaber and Robertas Damaševičius
Mathematics 2022, 10(14), 2472; https://doi.org/10.3390/math10142472 - 15 Jul 2022
Cited by 18 | Viewed by 1815
Abstract
COVID-19 has shaken the entire world economy and affected millions of people in a brief period. COVID-19 has numerous overlapping symptoms with other upper respiratory conditions, making it hard for diagnosticians to diagnose correctly. Several mathematical models have been presented for its diagnosis and treatment. This article delivers a mathematical framework based on a novel agile fuzzy-like arrangement, namely, the complex fuzzy hypersoft (CFHS) set, which is a combination of the complex fuzzy (CF) set and the hypersoft set (an extension of the soft set). First, the elementary theory of CFHS is developed, which considers the amplitude term (A-term) and the phase term (P-term) of the complex numbers simultaneously to tackle uncertainty, ambivalence, and mediocrity of data. This new fuzzy-like hybrid theory is versatile in two respects. First, it provides access to a broad spectrum of membership function values by broadening them to the unit circle on an Argand plane and incorporating an additional term, the P-term, to accommodate the data’s periodic nature. Second, it categorizes the distinct attributes into corresponding sub-valued sets for better understanding. The CFHS set and CFHS-mapping with its inverse mapping (INM) can manage such issues. Our proposed framework is validated by a study establishing a link between COVID-19 symptoms and medicines. For the COVID-19 types, a table is constructed relying on the fuzzy interval of [0,1]. The computation is based on CFHS-mapping, which identifies the disease and selects the optimum medication correctly. Furthermore, a generalized CFHS-mapping is provided, which can help a specialist extract the patient’s health record and predict how long it will take to overcome the infection. Full article

17 pages, 5685 KiB  
Article
Application of ANN in Induction-Motor Fault-Detection System Established with MRA and CFFS
by Chun-Yao Lee, Meng-Syun Wen, Guang-Lin Zhuo and Truong-An Le
Mathematics 2022, 10(13), 2250; https://doi.org/10.3390/math10132250 - 27 Jun 2022
Cited by 9 | Viewed by 1840
Abstract
This paper proposes a fault-detection system for faulty induction motors (bearing faults, interturn shorts, and broken rotor bars) based on multiresolution analysis (MRA), correlation and fitness values-based feature selection (CFFS), and artificial neural network (ANN). First, this study compares two feature-extraction methods: the MRA and the Hilbert Huang transform (HHT) for induction-motor-current signature analysis. Furthermore, feature-selection methods are compared to reduce the number of features and maintain the best accuracy of the detection system to lower operating costs. Finally, the proposed detection system is tested with additive white Gaussian noise, and the signal-processing method and feature-selection method with good performance are selected to establish the best detection system. According to the results, features extracted from MRA can achieve better performance than HHT using CFFS and ANN. In the proposed detection system, CFFS significantly reduces the operation cost (95% of the number of features) and maintains 93% accuracy using ANN. Full article

24 pages, 5443 KiB  
Article
HIFA-LPR: High-Frequency Augmented License Plate Recognition in Low-Quality Legacy Conditions via Gradual End-to-End Learning
by Sung-Jin Lee, Jun-Seok Yun, Eung Joo Lee and Seok Bong Yoo
Mathematics 2022, 10(9), 1569; https://doi.org/10.3390/math10091569 - 6 May 2022
Cited by 5 | Viewed by 2344
Abstract
Scene text detection and recognition, such as automatic license plate recognition, is a technology utilized in various applications. Although numerous studies have been conducted to improve recognition accuracy, accuracy decreases when low-quality legacy license plate images are input into a recognition module due to low image quality and a lack of resolution. To obtain better recognition accuracy, this study proposes a high-frequency augmented license plate recognition model in which the super-resolution module and the license plate recognition module are integrated and trained collaboratively via a proposed gradual end-to-end learning-based optimization. To optimally train our model, we propose a holistic feature extraction method that effectively prevents generating grid patterns from the super-resolved image during the training process. Moreover, to exploit high-frequency information that affects the performance of license plate recognition, we propose a license plate recognition module based on high-frequency augmentation. Furthermore, we propose a gradual end-to-end learning process based on weight freezing with three steps. Our three-step methodological approach can properly optimize each module to provide robust recognition performance. The experimental results show that our model is superior to existing approaches in low-quality legacy conditions on UFPR and Greek vehicle datasets. Full article

20 pages, 543 KiB  
Article
An Accelerated Convex Optimization Algorithm with Line Search and Applications in Machine Learning
by Dawan Chumpungam, Panitarn Sarnmeta and Suthep Suantai
Mathematics 2022, 10(9), 1491; https://doi.org/10.3390/math10091491 - 30 Apr 2022
Cited by 3 | Viewed by 1848
Abstract
In this paper, we introduce a new line search technique and employ it to construct a novel accelerated forward–backward algorithm for solving convex minimization problems of the form of the sum of two convex functions, one of which is smooth, in a real Hilbert space. We establish weak convergence of the proposed algorithm to a solution without a Lipschitz assumption on the gradient of the objective function. Furthermore, we analyze its performance by applying the proposed algorithm to classification problems on various data sets and comparing it with other line search algorithms. Based on the experiments, the proposed algorithm performs better than the other line search algorithms. Full article
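The problem class described — minimizing f + g with f smooth but without assuming a known Lipschitz constant for its gradient — is commonly handled by forward–backward splitting with a backtracking line search. The sketch below is a generic illustration of that idea for a lasso-type instance (g = λ‖·‖₁), not the authors' algorithm; the sufficient-decrease test replaces knowledge of the Lipschitz constant:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (shrinks each coordinate toward 0)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def forward_backward(A, b, lam, iters=500, beta=0.5):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by forward-backward splitting.
    The step size is found by backtracking: shrink by beta until the quadratic
    upper-bound (sufficient-decrease) condition on the smooth part holds."""
    x = np.zeros(A.shape[1])
    step = 1.0
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                   # forward (gradient) step data
        f_x = 0.5 * np.sum((A @ x - b) ** 2)
        while True:                                # backtracking line search
            z = soft_threshold(x - step * grad, step * lam)   # backward (prox) step
            d = z - x
            f_z = 0.5 * np.sum((A @ z - b) ** 2)
            if f_z <= f_x + grad @ d + np.dot(d, d) / (2 * step):
                break
            step *= beta
        x = z
    return x
```

With A the identity, the minimizer is the soft-thresholded data vector, which gives a quick sanity check of the implementation.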
