Algorithms, Volume 13, Issue 12 (December 2020) – 39 articles

Cover Story: We connected two prominent fields of formal methods, the SAT problem and directed graphs, to introduce the BaW (black-and-white) property. Strongly connected directed graphs are represented by black-and-white SAT problems. Such a SAT problem has exactly two solutions: one where every variable is true, and one where every variable is false. We defined three models: the strong model (SM), the weak model (WM), and the Balatonboglár model (BM). Our main results are: (1) SM and WM have the BaW property; (2) every model that is no weaker than WM and no stronger than SM has the BaW property; (3) BM has the BaW property, because it fulfills the precondition of (2). These results give us a better understanding of how to represent a SAT problem as a directed graph, although this problem has not yet been fully solved.
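The two-constant-solution condition in the cover story can be checked mechanically. The sketch below (an illustration of the stated definition, not the authors' construction; the full black-and-white property additionally requires that no other assignment is a model) tests a DIMACS-style formula against the all-true and all-false assignments:

```python
def accepts_black_and_white(clauses):
    """clauses: list of clauses, each a list of signed ints (DIMACS style).
    Returns True iff both constant assignments (all variables true,
    all variables false) satisfy the formula. This is a necessary
    condition for the black-and-white property; the full property
    also requires that no other assignment is a model."""
    def satisfied(value):
        # a clause holds if some literal agrees with the constant value
        return all(any((lit > 0) == value for lit in cl) for cl in clauses)
    return satisfied(True) and satisfied(False)
```

For example, x1 <-> x2 encoded as [[-1, 2], [1, -2]] passes the check, while [[1], [2]] forces the all-true assignment only and fails.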
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Papers are published in both HTML and PDF formats; the PDF is the official version. To view a paper in PDF format, click its "PDF Full-text" link and open it with the free Adobe Reader.
16 pages, 4459 KiB  
Article
Improved Sliding Mode Finite-Time Synchronization of Chaotic Systems with Unknown Parameters
by Hao Jia, Chen Guo, Lina Zhao and Zhao Xu
Algorithms 2020, 13(12), 346; https://doi.org/10.3390/a13120346 - 20 Dec 2020
Cited by 3 | Viewed by 2371
Abstract
This work uses the sliding mode control method to conduct the finite-time synchronization of chaotic systems. The utilized parameter selection principle differs from conventional methods. The designed controller selects the unknown parameters independently from the system model. These parameters enable tracking and prediction of the additional variables that affect the chaotic motion but are difficult to measure. Consequently, the proposed approach avoids the limitations of selecting the unknown parameters that are challenging to measure or modeling the parameters solely within the relevant system. This paper proposes a novel nonsingular terminal sliding surface and demonstrates its finite-time convergence. Then, the adaptive law of unknown parameters is presented. Next, the adaptive sliding mode controller based on the finite-time control idea is proposed, and its finite-time convergence and stability are discussed. Finally, the paper presents numerical simulations of chaotic systems with either the same or different structures, thus verifying the proposed method’s applicability and effectiveness. Full article
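To make the finite-time claim concrete, a generic nonsingular terminal sliding surface of the kind this literature builds on (a standard textbook form, not necessarily the authors' exact surface) is

```latex
s = \dot{e} + \beta\,|e|^{\gamma}\,\operatorname{sgn}(e), \qquad \beta > 0,\quad 0 < \gamma < 1,
```

where $e$ is the synchronization error. On the surface ($s = 0$) the error obeys $\dot{e} = -\beta|e|^{\gamma}\operatorname{sgn}(e)$, and separating variables gives the finite reaching time

```latex
t_r = \frac{|e(0)|^{1-\gamma}}{\beta\,(1-\gamma)},
```

which is why such surfaces yield finite-time rather than merely asymptotic convergence.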

32 pages, 652 KiB  
Article
Nature-Inspired Optimization Algorithms for Text Document Clustering—A Comprehensive Analysis
by Laith Abualigah, Amir H. Gandomi, Mohamed Abd Elaziz, Abdelazim G. Hussien, Ahmad M. Khasawneh, Mohammad Alshinwan and Essam H. Houssein
Algorithms 2020, 13(12), 345; https://doi.org/10.3390/a13120345 - 18 Dec 2020
Cited by 73 | Viewed by 6933
Abstract
Text clustering is an efficient unsupervised learning technique used to partition a huge number of text documents into a set of clusters, in which each cluster contains similar documents and different clusters contain dissimilar documents. Nature-inspired optimization algorithms have been successfully used to solve various optimization problems, including text document clustering problems. In this paper, a comprehensive review is presented of the nature-inspired algorithms that have been used to solve the text clustering problem. Moreover, comprehensive experiments are conducted and analyzed to show the performance of the common, well-known nature-inspired optimization algorithms in solving text document clustering problems, including the Harmony Search (HS) Algorithm, Genetic Algorithm (GA), Particle Swarm Optimization (PSO) Algorithm, Ant Colony Optimization (ACO), Krill Herd Algorithm (KHA), Cuckoo Search (CS) Algorithm, Gray Wolf Optimizer (GWO), and Bat-inspired Algorithm (BA). Seven text benchmark datasets are used to validate the performance of the tested algorithms. The results showed that the performance of these well-known nature-inspired optimization algorithms is almost the same, with slight differences. For improvement purposes, new modified versions of the tested algorithms can be proposed and tested to tackle text clustering problems. Full article
(This article belongs to the Special Issue Nature-Inspired Algorithms in Machine Learning)
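The family of methods surveyed here shares one shape: encode a candidate clustering, score it, and improve it by stochastic search. A minimal sketch, assuming a (1+1) evolutionary search over cluster-assignment vectors with cosine similarity on term-frequency vectors (an illustration, not the paper's experimental setup):

```python
import random

random.seed(1)

def cosine(a, b):
    num = sum(x * y for x, y in zip(a, b))
    den = (sum(x * x for x in a) * sum(y * y for y in b)) ** 0.5
    return num / den if den else 0.0

def fitness(assign, docs, k):
    """Sum over clusters of the average pairwise similarity inside
    each cluster (to be maximized)."""
    total = 0.0
    for c in range(k):
        members = [d for d, a in zip(docs, assign) if a == c]
        n = len(members)
        if n < 2:
            continue
        sims = [cosine(members[i], members[j])
                for i in range(n) for j in range(i + 1, n)]
        total += sum(sims) / len(sims)
    return total

def evolve(docs, k, iters=500):
    """(1+1) evolutionary search: mutate one assignment, keep if no worse."""
    assign = [random.randrange(k) for _ in docs]
    best = fitness(assign, docs, k)
    for _ in range(iters):
        cand = assign[:]
        cand[random.randrange(len(docs))] = random.randrange(k)
        f = fitness(cand, docs, k)
        if f >= best:
            assign, best = cand, f
    return assign
```

The nature-inspired algorithms in the review replace this naive mutation loop with population-based operators (crossover, pheromones, swarm updates), but the encoding and fitness idea carry over.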

19 pages, 930 KiB  
Article
Hardness of Approximation for Langton’s Ant on a Twisted Torus
by Takeo Hagiwara and Tatsuie Tsukiji
Algorithms 2020, 13(12), 344; https://doi.org/10.3390/a13120344 - 16 Dec 2020
Cited by 2 | Viewed by 2749
Abstract
Langton’s ant is a deterministic cellular automaton studied in many fields, including artificial life, computational complexity, cryptography, emergent dynamics, and the Lorentz lattice gas, motivated by the hardness of predicting the ant’s macroscopic behavior from an initial microscopic configuration. Gajardo, Moreira, and Goles (2002) proved that Langton’s ant is PTIME-hard for reachability. On a twisted torus, we demonstrate that it is PSPACE-hard to determine whether the ant will ever visit almost all vertices or nearly none of them. Full article
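For readers unfamiliar with the automaton, a minimal simulation on a plain (untwisted) torus can be sketched as follows; the paper's hardness results concern a twisted torus, whose shifted wrap-around this sketch does not model:

```python
def langtons_ant(width, height, steps):
    """Langton's ant on a plain torus: on a white cell turn right,
    on a black cell turn left; flip the cell, then step forward.
    Returns the set of black cells."""
    black = set()
    x, y = 0, 0
    dx, dy = 0, 1             # facing "up"
    for _ in range(steps):
        if (x, y) in black:
            black.discard((x, y))
            dx, dy = -dy, dx  # turn left (counterclockwise)
        else:
            black.add((x, y))
            dx, dy = dy, -dx  # turn right (clockwise)
        x, y = (x + dx) % width, (y + dy) % height
    return black
```

After four steps from an all-white grid the ant has blackened a 2x2 loop and returned to its start; the fifth step re-whitens the starting cell.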

26 pages, 4942 KiB  
Article
Person Re-Identification across Data Distributions Based on General Purpose DNN Object Detector
by Roxana-Elena Mihaescu, Mihai Chindea, Constantin Paleologu, Serban Carata and Marian Ghenescu
Algorithms 2020, 13(12), 343; https://doi.org/10.3390/a13120343 - 15 Dec 2020
Cited by 8 | Viewed by 2392
Abstract
Solving the person re-identification problem involves making associations between the same person’s appearances across disjoint camera views. Further, those associations have to be made on multiple surveillance cameras in order to obtain a more efficient and powerful re-identification system. The re-identification problem becomes particularly challenging in very crowded areas. This mainly happens for two reasons. First, visibility is reduced and occlusions of people can occur. Second, due to congestion, as the number of possible matches increases, re-identification becomes harder to achieve. Additional challenges consist of variations in lighting, poses, or viewpoints, and the existence of noise and blurring effects. In this paper, we aim to generalize person re-identification by implementing a first attempt at a general system, which is robust in terms of distribution variations. Our method is based on the YOLO (You Only Look Once) model, which represents a general object detection system. The novelty of the proposed re-identification method consists of using a simple detection model, with minimal additional costs, but with results that are comparable with those of the other existing dedicated methods. Full article
(This article belongs to the Special Issue Feature Papers in Algorithms for Multidisciplinary Applications)

16 pages, 1557 KiB  
Article
A New Click-Through Rates Prediction Model Based on Deep&Cross Network
by Guojing Huang, Qingliang Chen and Congjian Deng
Algorithms 2020, 13(12), 342; https://doi.org/10.3390/a13120342 - 14 Dec 2020
Cited by 11 | Viewed by 3912
Abstract
With the development of E-commerce, online advertising began to thrive and has gradually developed into a new mode of business, of which Click-Through Rates (CTR) prediction is the essential driving technology. Given a user, commodities and scenarios, the CTR model can predict the user’s click probability of an online advertisement. Recently, great progress has been made with the introduction of Deep Neural Networks (DNN) into CTR. In order to further advance the DNN-based CTR prediction models, this paper introduces a new model of FO-FTRL-DCN, based on the prestigious model of Deep&Cross Network (DCN) augmented with the latest optimization technique of Follow The Regularized Leader (FTRL) for DNN. The extensive comparative experiments on the iPinYou datasets show that the proposed model has outperformed other state-of-the-art baselines, with better generalization across different datasets in the benchmark. Full article
(This article belongs to the Special Issue Model Predictive Control: Algorithms and Applications)
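The FTRL ingredient can be sketched per coordinate. This follows the standard FTRL-Proximal formulation (McMahan et al.), with illustrative hyperparameters rather than the settings used in the paper:

```python
import math

class FTRLProximal:
    """Per-coordinate FTRL-Proximal update for a single weight."""

    def __init__(self, alpha=0.1, beta=1.0, l1=0.0, l2=0.0):
        self.alpha, self.beta, self.l1, self.l2 = alpha, beta, l1, l2
        self.z = 0.0  # accumulated (adjusted) gradients
        self.n = 0.0  # accumulated squared gradients

    def weight(self):
        if abs(self.z) <= self.l1:
            return 0.0  # L1 shrinkage keeps the weight exactly zero
        sign = 1.0 if self.z > 0 else -1.0
        return -(self.z - sign * self.l1) / (
            (self.beta + math.sqrt(self.n)) / self.alpha + self.l2)

    def update(self, g):
        """Fold one gradient g into the accumulated state."""
        w = self.weight()
        sigma = (math.sqrt(self.n + g * g) - math.sqrt(self.n)) / self.alpha
        self.z += g - sigma * w
        self.n += g * g
```

The built-in L1 term is what makes FTRL attractive for CTR models: irrelevant sparse features keep exactly-zero weights instead of small noisy ones.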

18 pages, 970 KiB  
Article
Design Limitations, Errors and Hazards in Creating Decision Support Platforms with Large- and Very Large-Scale Data and Program Cores
by Elias Koukoutsis, Constantin Papaodysseus, George Tsavdaridis, Nikolaos V. Karadimas, Athanasios Ballis, Eirini Mamatsi and Athanasios Rafail Mamatsis
Algorithms 2020, 13(12), 341; https://doi.org/10.3390/a13120341 - 14 Dec 2020
Cited by 2 | Viewed by 3279
Abstract
Recently, very large-scale decision support systems (DSSs) have been developed, which tackle very complex problems, associated with very extensive and polymorphic information that may be geographically highly dispersed. The management, updating, modification and upgrading of the data and program core of such an information system is, as a rule, a very difficult task, which encompasses many hazards and risks. The purpose of the present work was (a) to list the more significant of these hazards and risks and (b) to introduce a new general methodology for designing decision support (DS) systems that are robust and circumvent these risks. The core of this new approach is the introduction of a meta-database, called teleological, on the basis of which management, updating, modification, reduction, growth and upgrading of the system may be safely and efficiently achieved. The very same teleological meta-database can be used for the construction of a sound decision support system, incorporating elements of a previous one at a future stage. Full article
(This article belongs to the Special Issue Algorithms in Decision Support Systems)

12 pages, 434 KiB  
Article
Lifting the Performance of a Heuristic for the Time-Dependent Travelling Salesman Problem through Machine Learning
by Gianpaolo Ghiani, Tommaso Adamo, Pierpaolo Greco and Emanuela Guerriero
Algorithms 2020, 13(12), 340; https://doi.org/10.3390/a13120340 - 14 Dec 2020
Cited by 5 | Viewed by 2861
Abstract
In recent years, there have been several attempts to use machine learning techniques to improve the performance of exact and approximate optimization algorithms. Along this line of research, the present paper shows how supervised and unsupervised techniques can be used to improve the quality of the solutions generated by a heuristic for the Time-Dependent Travelling Salesman Problem with no increased computing time. This can be useful in a real-time setting where a speed update (or the arrival of a new customer request) may lead to the reoptimization of the planned route. The main contribution of this work is to show how to reuse the information gained in those settings in which instances with similar features have to be solved over and over again, as it is customary in distribution management. We use a method based on the nearest neighbor procedure (supervised learning) and the K-means algorithm with the Euclidean distance (unsupervised learning). In order to show the effectiveness of this approach, the computational experiments have been carried out for the dataset generated based on the real travel time functions of two European cities: Paris and London. The overall average improvement of our heuristic over the classical nearest neighbor procedure is about 5% for London, and about 4% for Paris. Full article
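The classical nearest neighbor procedure that serves as the baseline can be sketched as follows (with time-independent distances for simplicity, whereas the paper's setting uses time-dependent travel times):

```python
def nearest_neighbour_tour(dist, start=0):
    """Greedy tour construction: from the current city, always visit
    the closest unvisited city next. dist is a square matrix of
    travel times; returns the visiting order."""
    n = len(dist)
    tour, unvisited = [start], set(range(n)) - {start}
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda j: dist[last][j])
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour
```

The learning components described in the abstract steer choices like the start city and tie-breaking using features of previously solved, similar instances.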

12 pages, 325 KiB  
Article
A Performance Study of Some Approximation Algorithms for Computing a Small Dominating Set in a Graph
by Jonathan Li, Rohan Potru and Farhad Shahrokhi
Algorithms 2020, 13(12), 339; https://doi.org/10.3390/a13120339 - 14 Dec 2020
Cited by 6 | Viewed by 3235
Abstract
We implement and test the performance of several approximation algorithms for computing the minimum dominating set of a graph. These algorithms are the standard greedy algorithm, the recent Linear Programming (LP) rounding algorithms, and a hybrid algorithm that we design by combining the greedy and LP rounding algorithms. Over the range of test data, all algorithms perform better than anticipated in theory and have small performance ratios, measured as the size of the output divided by the LP objective lower bound. However, each has advantages over the others. For instance, the LP rounding algorithm normally outperforms the other algorithms on sparse real-world graphs. On a graph with 400,000+ vertices, LP rounding took less than 15 s of CPU time to generate a solution with performance ratio 1.011, while the greedy and hybrid algorithms generated solutions of performance ratio 1.12 in similar time. For synthetic graphs, the hybrid algorithm normally outperforms the others, whereas for hypercubes and k-Queens graphs, greedy outperforms the rest. Another advantage of the hybrid algorithm is its ability to solve very large problems that are suitable for the application of LP rounding (sparse graphs) but whose LP formulations become formidable in practice and crash LP solvers, as we observed on a real-world graph with 7.7 million+ vertices and a planar graph on 1,000,000 vertices. Full article
(This article belongs to the Special Issue Algorithms for Hard Graph Problems)
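The standard greedy algorithm among those tested can be sketched as follows (a textbook version, with an adjacency-set graph representation assumed for illustration):

```python
def greedy_dominating_set(adj):
    """adj: dict mapping each vertex to its set of neighbours.
    Repeatedly pick the vertex that dominates the most currently
    undominated vertices (itself plus its neighbours)."""
    undominated = set(adj)
    dom = set()
    while undominated:
        v = max(adj, key=lambda u: len(({u} | adj[u]) & undominated))
        dom.add(v)
        undominated -= {v} | adj[v]
    return dom
```

This is the classical ln(Delta)-approximation; the LP rounding and hybrid algorithms in the paper trade this simplicity for better ratios on sparse instances.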

16 pages, 556 KiB  
Article
HD-Tree: An Efficient High-Dimensional Virtual Index Structure Using a Half Decomposition Strategy
by Ting Huang, Zhengping Weng, Gang Liu and Zhenwen He
Algorithms 2020, 13(12), 338; https://doi.org/10.3390/a13120338 - 14 Dec 2020
Cited by 1 | Viewed by 3991
Abstract
To manage multidimensional point data more efficiently, this paper presents an improvement, called the HD-tree, of a previous indexing method, called the D-tree. Both structures combine quadtree-like partitioning (using integer shift operations and storing only leaves, not internal nodes) with hash tables (for looking up the stored nodes). However, the HD-tree follows a brand-new decomposition strategy, called the half decomposition strategy. This improvement avoids generating nodes that contain only a small amount of data and avoids sequential searches of the hash table, so it saves storage space while offering faster I/O and better time performance when building the tree and querying data. The results demonstrate convincingly that the time and space performance of the HD-tree is better than that of the D-tree for both uniform and uneven data, and that the HD-tree is less affected by the data distribution. Full article
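The shift-based cell addressing that such leaf-only quadtree schemes rely on can be sketched with Morton codes plus a hash table; this is a generic illustration of the idea, not the HD-tree's exact key scheme:

```python
from collections import defaultdict

def cell_code(point, level, bits=16):
    """Interleave the top `level` bits of each integer coordinate to
    get a quadtree-style cell key (a Morton code prefix). Only shift
    and mask operations are needed, so no internal nodes are stored."""
    code = 0
    for b in range(level):
        shift = bits - 1 - b
        for coord in point:
            code = (code << 1) | ((coord >> shift) & 1)
    return code

def build_index(points, level):
    """Hash table mapping each leaf's cell key to the points it holds."""
    index = defaultdict(list)
    for p in points:
        index[cell_code(p, level)].append(p)
    return index
```

A point query then computes the key with a few shifts and probes the hash table directly, rather than descending a stored tree.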

20 pages, 1098 KiB  
Article
An Algorithm for Efficient Generation of Customized Priority Rules for Production Control in Project Manufacturing with Stochastic Job Processing Times
by Mathias Kühn, Michael Völker and Thorsten Schmidt
Algorithms 2020, 13(12), 337; https://doi.org/10.3390/a13120337 - 13 Dec 2020
Cited by 7 | Viewed by 3183
Abstract
Project Planning and Control (PPC) problems with stochastic job processing times belong to the problem class of Stochastic Resource-Constrained Multi-Project Scheduling Problems (SRCMPSP). A practical example of this problem class is the industrial domain of customer-specific assembly of complex products. PPC approaches have to compensate for stochastic influences and achieve high objective fulfillment. This paper presents an efficient simulation-based optimization approach to generate Combined Priority Rules (CPRs) for determining the next job in short-term production control. The objective is to minimize project-specific objectives such as the average and standard deviation of project delay or makespan. For this, we generate project-specific CPRs and evaluate the results with the Pareto dominance concept. However, generating CPRs considering stochastic influences is computationally intensive. To tackle this problem, we developed a 2-phase algorithm that first learns with deterministic data, generating promising starting solutions for the more computationally intensive stochastic phase. Since a good deterministic solution does not always lead to a good stochastic solution, we introduced the parameter Initial Copy Rate (ICR) to generate an initial population of copied and randomized individuals. Evaluating this approach, we conducted various computer-based experiments. Compared to Standard Priority Rules (SPRs) used in practice, the approach shows higher objective fulfillment. The 2-phase algorithm reduces the computation effort and increases the efficiency of generating CPRs. Full article
(This article belongs to the Special Issue Simulation-Optimization in Logistics, Transportation, and SCM)
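A Combined Priority Rule can be sketched as a weighted sum of standard criteria; the criteria, field names, and weights below are illustrative assumptions, not the rules generated by the paper's algorithm:

```python
def next_job(queue, weights):
    """Pick the next job from the queue by a Combined Priority Rule:
    a weighted sum of standard-rule criteria (here, shortest
    processing time and minimum slack; lower score = more urgent)."""
    def score(job):
        return weights[0] * job["proc_time"] + weights[1] * job["slack"]
    return min(queue, key=score)
```

The simulation-based optimization in the paper searches over such weight vectors, evaluating each candidate rule in a stochastic simulation of the multi-project shop.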

18 pages, 3209 KiB  
Article
Evaluation of Agricultural Investment Climate in CEE Countries: The Application of Back Propagation Neural Network
by Ru Guo, Xiaodong Qiu and Yiyi He
Algorithms 2020, 13(12), 336; https://doi.org/10.3390/a13120336 - 13 Dec 2020
Cited by 1 | Viewed by 2161
Abstract
Evaluation of the agricultural investment climate has essential reference value for the site selection, operation and risk management of agricultural outward foreign direct investment projects. This study builds a back propagation neural network-based agricultural investment climate evaluation model, with 22 indicators across four subsystems (political climate, economic climate, social climate, and technological climate) as the input vector and the agricultural investment climate rating as the output vector, to evaluate the agricultural investment climate in 16 Central and Eastern European (CEE) countries. The overall spatial distribution characteristics demonstrate that the best agricultural investment climate is in the three Baltic countries, followed by the Visegrad Group and Slovenia sector, and then the Balkan littoral countries. The findings may provide insights for entrepreneurs who aim to invest in agriculture abroad and contribute to the improvement of these countries’ investment climate. Full article
(This article belongs to the Special Issue Algorithmic Aspects of Neural Networks)
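The forward pass of a back propagation network of this kind can be sketched with one sigmoid hidden layer; the layer sizes and weights below are illustrative, whereas the paper's model maps 22 climate indicators to a single rating:

```python
import math

def forward(x, W1, b1, W2, b2):
    """One-hidden-layer network: sigmoid hidden units, linear output.
    W1/b1 are hidden-layer weights and biases, W2/b2 the output layer's."""
    h = [1.0 / (1.0 + math.exp(-(sum(w * xi for w, xi in zip(row, x)) + b)))
         for row, b in zip(W1, b1)]
    return [sum(w * hi for w, hi in zip(row, h)) + b for row, b in zip(W2, b2)]
```

Training by back propagation adjusts W1, b1, W2, b2 by gradient descent on the rating error; only the inference step is shown here.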

27 pages, 4581 KiB  
Article
From Trees to Barcodes and Back Again: Theoretical and Statistical Perspectives
by Lida Kanari, Adélie Garin and Kathryn Hess
Algorithms 2020, 13(12), 335; https://doi.org/10.3390/a13120335 - 11 Dec 2020
Cited by 8 | Viewed by 3659
Abstract
Methods of topological data analysis have been successfully applied in a wide range of fields to provide useful summaries of the structure of complex data sets in terms of topological descriptors, such as persistence diagrams. While there are many powerful techniques for computing topological descriptors, the inverse problem, i.e., recovering the input data from topological descriptors, has proved to be challenging. In this article, we study in detail the Topological Morphology Descriptor (TMD), which assigns a persistence diagram to any tree embedded in Euclidean space, and a sort of stochastic inverse to the TMD, the Topological Neuron Synthesis (TNS) algorithm, gaining both theoretical and computational insights into the relation between the two. We propose a new approach to classify barcodes using symmetric groups, which provides a concrete language to formulate our results. We investigate to what extent the TNS recovers a geometric tree from its TMD and describe the effect of different types of noise on the process of tree generation from persistence diagrams. We prove moreover that the TNS algorithm is stable with respect to specific types of noise. Full article
(This article belongs to the Special Issue Topological Data Analysis)

15 pages, 1361 KiB  
Article
Feature Selection from Lyme Disease Patient Survey Using Machine Learning
by Joshua Vendrow, Jamie Haddock, Deanna Needell and Lorraine Johnson
Algorithms 2020, 13(12), 334; https://doi.org/10.3390/a13120334 - 11 Dec 2020
Cited by 7 | Viewed by 4240
Abstract
Lyme disease is a rapidly growing illness that remains poorly understood within the medical community. Critical questions about when and why patients respond to treatment or stay ill, what kinds of treatments are effective, and even how to properly diagnose the disease remain largely unanswered. We investigate these questions by applying machine learning techniques to a large scale Lyme disease patient registry, MyLymeData, developed by the nonprofit LymeDisease.org. We apply various machine learning methods in order to measure the effect of individual features in predicting participants’ answers to the Global Rating of Change (GROC) survey questions that assess the self-reported degree to which their condition improved, worsened, or remained unchanged following antibiotic treatment. We use basic linear regression, support vector machines, neural networks, entropy-based decision tree models, and k-nearest neighbors approaches. We first analyze the general performance of the model and then identify the most important features for predicting participant answers to GROC. After we identify the “key” features, we separate them from the dataset and demonstrate the effectiveness of these features at identifying GROC. In doing so, we highlight possible directions for future study both mathematically and clinically. Full article
(This article belongs to the Special Issue Machine Learning Algorithms for Biomedical Signal Processing)
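The entropy-based ranking behind decision-tree feature selection can be sketched via information gain; this is the textbook criterion, not necessarily the exact measure used in the study:

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    n = len(labels)
    return -sum(c / n * log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    """Drop in label entropy after splitting on one discrete feature:
    higher gain = the feature is more informative about the labels."""
    total = entropy(labels)
    n = len(labels)
    remainder = 0.0
    for v in set(feature_values):
        subset = [y for x, y in zip(feature_values, labels) if x == v]
        remainder += len(subset) / n * entropy(subset)
    return total - remainder
```

Ranking survey questions by such a score (or by model-based importances, as the paper does across several model families) surfaces the "key" features for predicting GROC answers.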

20 pages, 2960 KiB  
Article
Applying Neural Networks in Aerial Vehicle Guidance to Simplify Navigation Systems
by Raúl de Celis, Pablo Solano and Luis Cadarso
Algorithms 2020, 13(12), 333; https://doi.org/10.3390/a13120333 - 11 Dec 2020
Cited by 2 | Viewed by 2733
Abstract
The Guidance, Navigation and Control (GNC) of air and space vehicles has been one of the spearheads of research in the aerospace field in recent times. Using Global Navigation Satellite Systems (GNSS) and inertial navigation systems, accuracy may be detached from range. However, these sensor-based GNC systems may cause significant errors in determining attitude and position. These effects can be ameliorated using additional sensors, independent of cumulative errors. The quadrant photodetector semiactive laser is a good candidate for such a purpose. However, the development and construction costs of GNC systems are high. Reducing costs, while maintaining safety and accuracy standards, is key for development in aerospace engineering. Advanced algorithms that achieve these standards while eliminating sensors are therefore a cornerstone. The development and application of machine learning techniques to GNC poses an innovative path for reducing complexity and costs. Here, a new nonlinear hybridization algorithm based on neural networks is presented to estimate the gravity vector. Using a neural network means that once it is trained, the physical-mathematical foundations of flight are not relevant; it is the network that returns the dynamics to be fed to the GNC algorithm. The gravity vector, which can be accurately predicted, is used to determine vehicle attitude without calling for gyroscopes. Nonlinear simulations based on real flight dynamics are used to train the neural networks. Then, the approach is tested and simulated together with a GNC system. Monte Carlo analysis is conducted to determine performance when uncertainty arises. Simulation results prove that the performance of the presented approach is robust and precise in a six-degree-of-freedom simulation environment. Full article
(This article belongs to the Special Issue Simulation-Optimization in Logistics, Transportation, and SCM)

30 pages, 733 KiB  
Article
An Evaluation Framework and Algorithms for Train Rescheduling
by Sai Prashanth Josyula, Johanna Törnquist Krasemann and Lars Lundberg
Algorithms 2020, 13(12), 332; https://doi.org/10.3390/a13120332 - 11 Dec 2020
Cited by 5 | Viewed by 4036
Abstract
In railway traffic systems, whenever disturbances occur, it is important to effectively reschedule trains while optimizing the goals of various stakeholders. Algorithms can provide significant benefits to support the traffic controllers in train rescheduling, if well integrated into the overall traffic management process. In the railway research literature, many algorithms are proposed to tackle different versions of the train rescheduling problem. However, limited research has been performed to assess the capabilities and performance of alternative approaches, with the purpose of identifying their main strengths and weaknesses. Evaluation of train rescheduling algorithms enables practitioners and decision support systems to select a suitable algorithm based on the properties of the type of disturbance scenario in focus. It also guides researchers and algorithm designers in improving the algorithms. In this paper, we (1) propose an evaluation framework for train rescheduling algorithms, (2) present two train rescheduling algorithms: a heuristic and a MILP-based exact algorithm, and (3) conduct an experiment to compare the two multi-objective algorithms using the proposed framework (a proof-of-concept). It is found that the heuristic algorithm is suitable for solving simpler disturbance scenarios since it is quick in producing decent solutions. For complex disturbances wherein multiple trains experience a primary delay due to an infrastructure failure, the exact algorithm is found to be more appropriate. Full article
(This article belongs to the Special Issue Algorithms in Decision Support Systems)

23 pages, 5689 KiB  
Article
Predicting Intentions of Pedestrians from 2D Skeletal Pose Sequences with a Representation-Focused Multi-Branch Deep Learning Network
by Joseph Gesnouin, Steve Pechberti, Guillaume Bresson, Bogdan Stanciulescu and Fabien Moutarde
Algorithms 2020, 13(12), 331; https://doi.org/10.3390/a13120331 - 10 Dec 2020
Cited by 24 | Viewed by 4340
Abstract
Understanding the behaviors and intentions of humans is still one of the main challenges for vehicle autonomy. More specifically, inferring the intentions and actions of vulnerable actors, namely pedestrians, in complex situations such as urban traffic scenes remains a difficult task and a blocking point towards more automated vehicles. Answering the question “Is the pedestrian going to cross?” is a good starting point in the quest for the fifth level of autonomous driving. In this paper, we address the problem of real-time discrete intention prediction of pedestrians in urban traffic environments by linking the dynamics of a pedestrian’s skeleton to an intention. Hence, we propose SPI-Net (Skeleton-based Pedestrian Intention network): a representation-focused multi-branch network combining features from 2D pedestrian body poses for the prediction of pedestrians’ discrete intentions. Experimental results show that SPI-Net achieved 94.4% accuracy in pedestrian crossing prediction on the JAAD data set while being efficient for real-time scenarios, since SPI-Net can reach around one inference every 0.25 ms on one GPU (i.e., RTX 2080ti), or every 0.67 ms on one CPU (i.e., Intel Core i7 8700K). Full article
(This article belongs to the Special Issue Algorithms for Human Gesture, Activity and Mobility Analysis)

17 pages, 2182 KiB  
Article
Segment-Based Clustering of Hyperspectral Images Using Tree-Based Data Partitioning Structures
by Mohamed Ismail and Milica Orlandić
Algorithms 2020, 13(12), 330; https://doi.org/10.3390/a13120330 - 10 Dec 2020
Cited by 9 | Viewed by 4031
Abstract
Hyperspectral image classification has been increasingly used in the field of remote sensing. In this study, a new clustering framework for large-scale hyperspectral image (HSI) classification is proposed. The proposed four-step classification scheme explores how to effectively use the global spectral information and local spatial structure of hyperspectral data for HSI classification. Initially, multidimensional watershed is used for pre-segmentation. Region-based hierarchical hyperspectral image segmentation is based on the construction of binary partition trees (BPTs). Each segmented region is modeled using first-order parametric modelling, which is then followed by a region-merging stage using HSI regional spectral properties in order to obtain a BPT representation. The tree is then pruned to obtain a more compact representation. In addition, principal component analysis (PCA) is utilized for HSI feature extraction, so that the extracted features are further incorporated into the BPT. Finally, an efficient variant of the k-means clustering algorithm, called the filtering algorithm, is deployed on the created BPT structure, producing the final cluster map. The proposed method is tested on eight publicly available hyperspectral scenes with ground truth data and is further compared with other clustering frameworks. The extensive experimental analysis demonstrates the efficacy of the proposed method. Full article
(This article belongs to the Special Issue Algorithms in Hyperspectral Data Analysis)

17 pages, 468 KiB  
Article
Hard and Soft EM in Bayesian Network Learning from Incomplete Data
by Andrea Ruggieri, Francesco Stranieri, Fabio Stella and Marco Scutari
Algorithms 2020, 13(12), 329; https://doi.org/10.3390/a13120329 - 9 Dec 2020
Cited by 12 | Viewed by 4498
Abstract
Incomplete data are a common feature in many domains, from clinical trials to industrial applications. Bayesian networks (BNs) are often used in these domains because of their graphical and causal interpretations. BN parameter learning from incomplete data is usually implemented with the Expectation-Maximisation algorithm (EM), which computes the relevant sufficient statistics (“soft EM”) using belief propagation. Similarly, the Structural Expectation-Maximisation algorithm (Structural EM) learns the network structure of the BN from those sufficient statistics using algorithms designed for complete data. However, practical implementations of parameter and structure learning often impute missing data (“hard EM”) to compute sufficient statistics instead of using belief propagation, for both ease of implementation and computational speed. In this paper, we investigate the question: what is the impact of using imputation instead of belief propagation on the quality of the resulting BNs? From a simulation study using synthetic data and reference BNs, we find that it is possible to recommend one approach over the other in several scenarios based on the characteristics of the data. We then use this information to build a simple decision tree to guide practitioners in choosing the EM algorithm best suited to their problem. Full article
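The hard/soft distinction above can be seen on a toy problem far simpler than the paper's Bayesian networks: estimating a coin's bias from flips whose outcomes are partly missing. The sketch below is purely illustrative (not the authors' implementation): soft EM folds the expected value of each missing flip into the sufficient statistics and converges to the observed-data estimate, while hard EM's single-value imputation drags the estimate toward an extreme.

```python
def soft_em(k, n_obs, m, theta=0.5, iters=100):
    """Soft EM: each of the m missing flips contributes its expected value theta."""
    for _ in range(iters):
        theta = (k + m * theta) / (n_obs + m)   # E-step folded into the M-step
    return theta

def hard_em(k, n_obs, m, theta=0.5, iters=100):
    """Hard EM: each missing flip is imputed with its single most likely value."""
    for _ in range(iters):
        imputed = 1 if theta >= 0.5 else 0      # single imputation
        theta = (k + m * imputed) / (n_obs + m)
    return theta
```

With 6 heads in 10 observed flips and 10 missing flips, soft EM settles at the observed fraction 0.6, whereas hard EM imputes every missing flip as heads and returns 0.8 — a miniature version of the bias the paper studies.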

16 pages, 1342 KiB  
Article
Kernel Identification of Non-Linear Systems with General Structure
by Grzegorz Mzyk, Zygmunt Hasiewicz and Paweł Mielcarek
Algorithms 2020, 13(12), 328; https://doi.org/10.3390/a13120328 - 6 Dec 2020
Cited by 4 | Viewed by 2297
Abstract
In this paper we deal with the problem of non-linear dynamic system identification in the presence of random noise. The class of considered systems is relatively general, in the sense that it is not limited to block-oriented structures such as Hammerstein or Wiener models. The proposed algorithm is organized as a two-stage strategy. In step 1 (non-parametric), the system is approximated by multi-dimensional regression functions for a given set of excitations, treated as a representative set of points in multi-dimensional space. The ‘curse of dimensionality’ problem is solved by using specific (quantized or periodic) input sequences. Next, in step 2, the non-parametric estimates can be plugged into a least squares criterion to support model selection and estimation of system parameters. The proposed strategy allows decomposition of the identification problem, which can be crucial from the numerical point of view. The “estimation points” in step 1 are selected to ensure good task conditioning in step 2. Moreover, the non-parametric procedure plays the role of data compression. We discuss the problem of selecting the scale of the non-parametric model and analyze the asymptotic properties of the method. The results of a simple simulation are also presented to illustrate the functioning of the method. Finally, the proposed method is successfully applied in differential scanning calorimetry (DSC) to analyze aging processes in chalcogenide glasses. Full article

14 pages, 1695 KiB  
Article
Feasibility of Kd-Trees in Gaussian Process Regression to Partition Test Points in High Resolution Input Space
by Ivan De Boi, Bart Ribbens, Pieter Jorissen and Rudi Penne
Algorithms 2020, 13(12), 327; https://doi.org/10.3390/a13120327 - 5 Dec 2020
Cited by 4 | Viewed by 3054
Abstract
Bayesian inference using Gaussian processes on large datasets has been studied extensively over the past few years. However, little attention has been given to how to apply these methods to a high-resolution input space. By approximating the set of test points (where we want to make predictions, not the set of training points in the dataset) with a kd-tree, a multi-resolution data structure arises that allows for considerable gains in performance and memory usage without a significant loss of accuracy. In this paper, we study the feasibility and efficiency of constructing and using such a kd-tree in Gaussian process regression. We propose a cut-off rule that is easy to interpret and to tune. We show our findings on generated toy data in a 3D point cloud and a simulated 2D vibrometry example. This study is beneficial for researchers who are working on a high-resolution input space. The kd-tree approximation outperforms the naïve Gaussian process implementation in all experiments. Full article
(This article belongs to the Special Issue Algorithms for Sequential Analysis)
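The core idea — partitioning the test points with a kd-tree and paying for one expensive prediction per cell instead of one per point — can be sketched independently of the GP machinery. In the sketch below, `predict` is a hypothetical stand-in for a GP posterior evaluation, and the fixed leaf size and centroid rule are illustrative assumptions, not the authors' tunable cut-off rule.

```python
def kdtree_partition(points, leaf_size=4, depth=0):
    """Recursively split test points into kd-tree leaves via axis-median splits."""
    if len(points) <= leaf_size:
        return [points]
    axis = depth % len(points[0])            # cycle through the dimensions
    pts = sorted(points, key=lambda p: p[axis])
    mid = len(pts) // 2
    return (kdtree_partition(pts[:mid], leaf_size, depth + 1)
            + kdtree_partition(pts[mid:], leaf_size, depth + 1))

def predict_by_leaf(points, predict, leaf_size=4):
    """One expensive prediction per leaf centroid, shared by all leaf members."""
    out = {}
    for leaf in kdtree_partition(points, leaf_size):
        d = len(leaf[0])
        centroid = tuple(sum(p[i] for p in leaf) / len(leaf) for i in range(d))
        y = predict(centroid)                # e.g. a GP posterior mean
        for p in leaf:
            out[p] = y                       # coarse multi-resolution answer
    return out
```

On a 4×4 grid of test points with `leaf_size=4`, the tree produces four leaves, so only four predictions are evaluated for sixteen query points — the accuracy/cost trade-off the paper's cut-off rule controls.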

16 pages, 1038 KiB  
Article
A Simulation-Based Optimization Method for Warehouse Worker Assignment
by Odkhishig Ganbold, Kaustav Kundu, Haobin Li and Wei Zhang
Algorithms 2020, 13(12), 326; https://doi.org/10.3390/a13120326 - 4 Dec 2020
Cited by 16 | Viewed by 5812
Abstract
The general assignment problem is a classical NP-hard (non-deterministic polynomial-time) problem. In a warehouse, the constraints on the equipment and the characteristics of consecutive processes make it even more complicated. To overcome the difficulty in calculating the benefit of an assignment and in finding the optimal assignment plan, a simulation-based optimization method is introduced. We first built a simulation model of the warehouse with the object-oriented discrete-event simulation (O2DES) framework, and then implemented a random neighborhood search method utilizing the simulation output. With this method, the throughput and service level of the warehouse can be improved while keeping the number of workers constant. Numerical results with real data demonstrate the reduction of the discrepancy between inbound and outbound service-level performance. With a less than 10% reduction in the inbound service level, we can achieve an over 30% increase in the outbound service level. The proposed decision support tool assists the warehouse manager in dealing with the warehouse worker allocation problem under conditions of random daily workload. Full article
(This article belongs to the Special Issue Simulation-Optimization in Logistics, Transportation, and SCM)
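A random neighborhood search driven by a simulation output can be sketched in a few lines. Everything here is a toy stand-in: `throughput` mimics one simulation replication with diminishing returns per zone (it is not the authors' O2DES model), and the move operator — shift one worker between zones — is an illustrative assumption.

```python
import random

def throughput(assign):
    """Stand-in for one simulation run: capped linear throughput per zone."""
    demand = [30, 20, 10]                    # hypothetical zone demands
    return sum(min(demand[z], 8 * assign[z]) for z in range(len(assign)))

def neighborhood_search(workers=7, zones=3, steps=200, seed=1):
    """Hill-climb over assignments: move one worker, keep the move if it helps."""
    rng = random.Random(seed)
    assign = [workers // zones] * zones
    assign[0] += workers - sum(assign)       # start from a near-even split
    best = throughput(assign)
    for _ in range(steps):
        src, dst = rng.sample(range(zones), 2)
        if assign[src] == 0:
            continue
        cand = assign[:]
        cand[src] -= 1
        cand[dst] += 1
        val = throughput(cand)               # one "simulation replication"
        if val > best:                       # accept only improving moves
            assign, best = cand, val
    return assign, best
```

The worker count stays constant throughout, mirroring the paper's constraint, while the accepted moves trade service level between zones.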

28 pages, 17627 KiB  
Article
Fuzzy-Based Multivariate Analysis for Input Modeling of Risk Assessment in Wind Farm Projects
by Emad Mohamed, Parinaz Jafari and Simaan AbouRizk
Algorithms 2020, 13(12), 325; https://doi.org/10.3390/a13120325 - 4 Dec 2020
Cited by 5 | Viewed by 3588
Abstract
Currently, input modeling for Monte Carlo simulation (MCS) is performed either by fitting a probability distribution to historical data or by using expert elicitation methods when historical data are limited. These approaches, however, are not suitable for wind farm construction, where—although lacking in historical data—large amounts of subjective knowledge describing the impacts of risk factors are available. Existing approaches are also limited by their inability to consider a risk factor’s impacts on cost and schedule as dependent. This paper proposes a methodology to enhance input modeling in Monte Carlo risk assessment of wind farm projects based on fuzzy set theory and multivariate modeling. In the proposed method, subjective expert knowledge is quantified using fuzzy logic and is used to determine the parameters of a marginal generalized Beta distribution. Then, the correlation between the cost and schedule impacts is determined and the two are fitted jointly into a bivariate distribution using copulas. To evaluate the feasibility of the proposed methodology and to demonstrate its main features, the method was applied to an illustrative case study, and sensitivity analysis and face validation were used to evaluate the method. The results demonstrated that the proposed approach provides a reliable method for enhancing input modeling in MCS. Full article
(This article belongs to the Special Issue Fuzzy Hybrid Systems for Construction Engineering and Management)

10 pages, 749 KiB  
Article
Some Notes on the Omega Distribution and the Pliant Probability Distribution Family
by Maria T. Vasileva
Algorithms 2020, 13(12), 324; https://doi.org/10.3390/a13120324 - 4 Dec 2020
Cited by 13 | Viewed by 2544
Abstract
In 2020, Dombi and Jónás (Acta Polytechnica Hungarica 17:1, 2020) introduced a new four-parameter probability distribution which they named the pliant probability distribution family. One of the special members of this family is the so-called omega probability distribution. This paper deals with an important characteristic of these new cumulative distribution functions: their “saturation” to the horizontal asymptote with respect to the Hausdorff metric. We obtain upper and lower estimates for the value of the Hausdorff distance. A simple dynamic software module using CAS Mathematica and Wolfram Cloud Open Access is developed. Numerical examples are given to illustrate the applicability of the obtained results. Full article

15 pages, 843 KiB  
Article
A Hybrid Metaheuristic Algorithm for the Efficient Placement of UAVs
by Stephanie Alvarez Fernandez, Marcelo M. Carvalho and Daniel G. Silva
Algorithms 2020, 13(12), 323; https://doi.org/10.3390/a13120323 - 3 Dec 2020
Cited by 7 | Viewed by 2780
Abstract
This work addresses the problem of using Unmanned Aerial Vehicles (UAVs) to deploy a wireless aerial relay communications infrastructure for stations scattered on the ground. In our problem, every station in the network must be assigned to a single UAV, which is responsible for handling all data transfer on behalf of the stations that are assigned to it. Consequently, the placement of UAVs is key to achieving both network coverage and the maximization of the aggregate link capacities between UAVs and stations, and among the UAVs themselves. Because the complexity of this problem increases significantly with the number of stations to cover, for a given fixed number p of available UAVs, we model it as a single-allocation p-hub median optimization problem, and we propose a hybrid metaheuristic algorithm to solve it. A series of numerical experiments illustrates the efficiency of the proposed algorithm against traditional optimization tools: it achieves high-quality results in very short time intervals, making it an attractive solution for real-world application scenarios. Full article
(This article belongs to the Special Issue Networks, Communication, and Computing Vol. 2)

33 pages, 3263 KiB  
Article
On a Controlled Se(Is)(Ih)(Iicu)AR Epidemic Model with Output Controllability Issues to Satisfy Hospital Constraints on Hospitalized Patients
by Manuel De la Sen and Asier Ibeas
Algorithms 2020, 13(12), 322; https://doi.org/10.3390/a13120322 - 3 Dec 2020
Cited by 4 | Viewed by 2378
Abstract
An epidemic model, the so-called SE(Is)(Ih)(Iicu)AR epidemic model, is proposed which splits the infectious subpopulation of the classical SEIR (Susceptible-Exposed-Infectious-Recovered) model into four subpopulations, namely asymptomatic infectious and three categories of symptomatic infectious: slight infectious, non-intensive care infectious, and intensive care hospitalized infectious. The exposed subpopulation has four different transitions to each one of the four kinds of infectious subpopulations, governed by possibly different proportionality parameters. The performed research relies on the problem of satisfying prescribed hospitalization constraints related to the number of patients via control interventions. There are four potential available controls which can be manipulated, namely the vaccination of the susceptible individuals, the treatment of the non-intensive care unit hospitalized patients, the treatment of the hospitalized patients at the intensive care unit, and the transmission rate, which can be updated via public interventions such as isolation of the infectious, rules on group meetings, use of face masks, decrees of partial or total quarantines, and others. The patients staying at the non-intensive care unit and those staying at the intensive care unit are possibly, but not necessarily, managed as two different hospitalized subpopulations. The controls are designed based on output controllability issues in the sense that the levels of hospital admissions are constrained via prescribed maximum levels, and the measurable outputs are defined by the hospitalized patients, either under a joint consideration of the sum of both subpopulations or separately. In this second case, it is possible to target either of the two hospitalized subpopulations only, or both of them considered as two different components of the output.
Different algorithms are given to design the controls which guarantee, if possible, that the prescribed hospitalization constraints hold. If this is not possible, because the levels of serious infection are too high relative to the hospital's available means, then the constraints are revised and modified accordingly so that the amended ones can be satisfied by a set of controls. The algorithms are tested through numerically worked examples under disease parameterizations of COVID-19. Full article
(This article belongs to the Special Issue Machine Learning in Healthcare and Biomedical Application)

15 pages, 265 KiB  
Article
Convert a Strongly Connected Directed Graph to a Black-and-White 3-SAT Problem by the Balatonboglár Model
by Gábor Kusper and Csaba Biró
Algorithms 2020, 13(12), 321; https://doi.org/10.3390/a13120321 - 3 Dec 2020
Cited by 3 | Viewed by 2622
Abstract
In a previous paper, we defined the black and white SAT problem, which has exactly two solutions: one where each variable is true, and one where each variable is false. We showed that black and white 2-SAT problems represent strongly connected directed graphs. We also presented the strong model of communication graphs. In this work, we introduce two new models, the weak model and the Balatonboglár model of communication graphs. A communication graph is a directed graph where no self-loops are allowed. In this work we show that the weak model of a strongly connected communication graph is a black and white SAT problem. We prove a powerful theorem, the so-called transitions theorem. This theorem states that any model between the strong and the weak model represents strongly connected communication graphs as black and white SAT problems. We show that the Balatonboglár model lies between the strong and the weak model and generates 3-SAT problems, so it represents strongly connected communication graphs as black and white 3-SAT problems. Our motivation to study these models is the following: the strong model generates a 2-SAT problem from the input directed graph, so it does not give us deep insight into how to convert a general SAT problem into a directed graph. The weak model generates huge formulae, because it represents all cycles, even non-simple cycles, of the input directed graph. We need something between them to gain more experience. From the Balatonboglár model we learned that it is enough to keep a subset of each clause that represents a cycle in the weak model to make the model more compact. We still do not know how to represent a SAT problem as a directed graph, but this work gives a strong link between two prominent fields of formal methods: the SAT problem and directed graphs. Full article
(This article belongs to the Section Combinatorial Optimization, Graph, and Network Algorithms)
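One plausible reading of the edge-to-clause construction — a sketch for intuition, not the paper's exact strong, weak, or Balatonboglár encoding — maps each directed edge (u, v) to the implication clause ¬u ∨ v. Both the all-false and the all-true assignments satisfy every such clause, and when the graph is strongly connected the chained implications force all variables to take the same value, leaving exactly those two solutions:

```python
from itertools import product

def strong_model(edges):
    """Each directed edge (u, v) becomes the implication clause (-u or v)."""
    return [(-u, v) for u, v in edges]

def solutions(clauses, n):
    """Brute-force enumeration: variable i (1-based) is true iff bits[i - 1]."""
    def holds(lit, bits):
        return bits[lit - 1] if lit > 0 else not bits[-lit - 1]
    return [bits for bits in product([False, True], repeat=n)
            if all(any(holds(l, bits) for l in c) for c in clauses)]

# A strongly connected communication graph: the directed cycle 1 -> 2 -> 3 -> 1
cycle = [(1, 2), (2, 3), (3, 1)]
```

For the 3-cycle, enumerating all eight assignments leaves only the all-false and all-true solutions — the black and white property described in the abstract.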

2 pages, 631 KiB  
Editorial
Special Issue on Bio-Inspired Algorithms for Image Processing
by Sándor Szénási and Gábor Kertész
Algorithms 2020, 13(12), 320; https://doi.org/10.3390/a13120320 - 3 Dec 2020
Viewed by 1922
Abstract
In the field of image processing, there are several difficult issues that do not have exact solutions due to incomplete or imperfect information and limited computation capacity [...] Full article
(This article belongs to the Special Issue Bio-Inspired Algorithms for Image Processing)
17 pages, 5790 KiB  
Article
Generative Model for Skeletal Human Movements Based on Conditional DC-GAN Applied to Pseudo-Images
by Wang Xi, Guillaume Devineau, Fabien Moutarde and Jie Yang
Algorithms 2020, 13(12), 319; https://doi.org/10.3390/a13120319 - 3 Dec 2020
Cited by 14 | Viewed by 3826
Abstract
Generative models for images, audio, text, and other low-dimension data have achieved great success in recent years. Generating artificial human movements can also be useful for many applications, including the improvement of data augmentation methods for human gesture recognition. The objective of this research is to develop a generative model for skeletal human movement that allows control of the action type of the generated motion while keeping the authenticity of the result and the natural style variability of gesture execution. We propose to use a conditional Deep Convolutional Generative Adversarial Network (DC-GAN) applied to pseudo-images representing skeletal pose sequences in the tree-structure skeleton image format. We evaluate our approach on the 3D skeletal data provided in the large NTU_RGB+D public dataset. Our generative model can output qualitatively correct skeletal human movements for any of the 60 action classes. We also quantitatively evaluate the performance of our model by computing Fréchet inception distances, which show a strong correlation with human judgement. To the best of our knowledge, our work is the first successful class-conditioned generative model for human skeletal motions based on a pseudo-image representation of skeletal pose sequences. Full article
(This article belongs to the Special Issue Algorithms for Human Gesture, Activity and Mobility Analysis)

16 pages, 801 KiB  
Article
Deep Feature Learning with Manifold Embedding for Robust Image Retrieval
by Xin Chen and Ying Li
Algorithms 2020, 13(12), 318; https://doi.org/10.3390/a13120318 - 2 Dec 2020
Cited by 4 | Viewed by 3450
Abstract
Conventionally, the similarity between two images is measured, for image retrieval, by the easily computed Euclidean distance between their corresponding image feature representations. However, this kind of direct similarity measurement ignores the local geometric structure of the intrinsic data manifold and is not discriminative enough for robust image retrieval. Some works have proposed to tackle this problem by re-ranking with manifold learning. While yielding better performance, algorithms of this category suffer from non-trivial computational complexity, which is unfavorable for application to large-scale retrieval tasks. To address these problems, in this paper we propose to learn a robust feature embedding with the guidance of manifold relationships. Specifically, the manifold relationship is used to guide the automatic selection of training image pairs. A fine-tuning network trained with those selected image pairs transfers such manifold relationships into the fine-tuned feature embedding. With the fine-tuned feature embedding, the Euclidean distance can be directly used to measure the pairwise similarity between images, with the manifold structure implicitly embedded. Thus, we maintain both the efficiency of Euclidean distance-based similarity measurement and the effectiveness of manifold information in the new feature embedding. Extensive experiments on three benchmark datasets demonstrate the robustness of our proposed method: our approach significantly outperforms the baselines and exceeds, or is comparable to, the state-of-the-art methods. Full article

19 pages, 1631 KiB  
Article
Sine Cosine Algorithm Assisted FOPID Controller Design for Interval Systems Using Reduced-Order Modeling Ensuring Stability
by Jagadish Kumar Bokam, Naresh Patnana, Tarun Varshney and Vinay Pratap Singh
Algorithms 2020, 13(12), 317; https://doi.org/10.3390/a13120317 - 1 Dec 2020
Cited by 10 | Viewed by 2919
Abstract
The focus of the present research was to design a robust fractional-order proportional-integral-derivative (FOPID) controller with a specified phase margin (PM) and gain crossover frequency (ωgc) through a reduced-order model for continuous interval systems. The investigation is two-fold: in the first part, a modified Routh approximation technique, along with matching Markov parameters (MPs) and time moments (TMs), is utilized to derive a stable reduced-order continuous interval plant (ROCIP) from a stable high-order continuous interval plant (HOCIP). In the second part, the FOPID controller is designed for the ROCIP by considering PM and ωgc as the performance criteria. The FOPID controller parameters are tuned based on the frequency-domain specifications using an advanced sine-cosine algorithm (SCA). The SCA is used due to its simple implementation and effective performance. The proposed SCA-based FOPID controller is found to be robust and efficient, and is therefore applied to the HOCIP. The proposed controller design technique is elaborated by considering a single-input-single-output (SISO) test case. The validity and efficacy of the proposed technique are established from the simulation results obtained. In addition, the designed FOPID controller retains the desired PM and ωgc when implemented on the HOCIP. The results further demonstrate the merit of the proposed technique by showing that the designed controller works effectively for both the ROCIP and the HOCIP. Full article
(This article belongs to the Special Issue Metaheuristic Algorithms in Engineering Optimization Problems)
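The sine-cosine algorithm itself is compact enough to sketch. The version below follows Mirjalili's standard position update on a toy sphere function rather than the paper's FOPID tuning objective; the linearly decreasing r1 schedule, the population size, and the box clipping are common-practice assumptions, not details taken from this article.

```python
import math
import random

def sca_minimize(f, dim=2, agents=20, iters=200, lo=-10.0, hi=10.0, seed=0):
    """Minimize f over a box with the basic sine-cosine algorithm (SCA)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(agents)]
    best = min(pop, key=f)[:]                 # destination point P
    for t in range(iters):
        r1 = 2.0 * (1.0 - t / iters)          # shrinks: exploration -> exploitation
        for x in pop:
            for j in range(dim):
                r2 = rng.uniform(0.0, 2.0 * math.pi)
                r3 = rng.uniform(0.0, 2.0)
                wave = math.sin(r2) if rng.random() < 0.5 else math.cos(r2)
                x[j] += r1 * wave * abs(r3 * best[j] - x[j])
                x[j] = min(hi, max(lo, x[j])) # clip to the search box
        cand = min(pop, key=f)
        if f(cand) < f(best):                 # keep the best-so-far destination
            best = cand[:]
    return best

def sphere(x):
    return sum(v * v for v in x)
```

For FOPID tuning, `f` would instead score a candidate parameter vector against the desired PM and ωgc; the update rule is unchanged.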
