Editor’s Choice Articles

Editor’s Choice articles are based on recommendations by the scientific editors of MDPI journals from around the world. Editors select a small number of articles recently published in the journal that they believe will be particularly interesting to readers, or important in the respective research area. The aim is to provide a snapshot of some of the most exciting work published in the various research areas of the journal.

20 pages, 3157 KiB  
Article
Verifying Mutual Exclusion Algorithms with Non-Atomic Registers
by Libero Nigro
Algorithms 2024, 17(12), 536; https://doi.org/10.3390/a17120536 - 22 Nov 2024
Cited by 1 | Viewed by 1530
Abstract
The work described in this paper develops a formal method for the modeling and exhaustive verification of mutual exclusion algorithms. The process is based on timed automata and the Uppaal model checker. The technique has previously been applied successfully to several mutual exclusion algorithms, mainly under the atomic memory model, in which read and write operations on memory cells (registers) are atomic, or indivisible. The original contribution of this paper is a generalization of the approach to support modeling mutual exclusion algorithms with non-atomic registers, where multiple read operations can occur on a register simultaneously with a write operation on the same register (giving rise to the flickering phenomenon), or multiple write operations can occur at the same time on the same register (giving rise to the scrambling phenomenon). The paper first clarifies some consistency rules for non-atomic registers. Then, the developed Uppaal-based method for specifying and verifying mutual exclusion algorithms is presented. The method is applied to the correctness assessment of a sample mutual exclusion solution. After that, the non-atomic register consistency rules are rendered in Uppaal so that they can be embedded in the specification methodology. The paper goes on to present different mutual exclusion algorithms that are studied using non-atomic registers. The algorithms are also investigated in the context of a tournament tree organization that can provide standard and efficient mutual exclusion solutions for N>2 processes. Finally, the paper compares the proposed techniques for handling non-atomic registers and reports on their application to many other mutual exclusion solutions. Full article
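For readers unfamiliar with this class of algorithms, the sketch below shows Peterson's classic two-process mutual exclusion protocol in Python. It is purely illustrative (it is not one of the paper's Uppaal models), and it relies on the atomic-register assumption that the paper relaxes: under flickering or scrambling registers, the reasoning behind this protocol no longer holds as stated.

import threading

flag = [False, False]          # flag[i]: process i wants to enter its critical section
turn = 0                       # which process yields when both want to enter
counter = 0                    # shared variable protected by the lock

def process(i, iterations=10_000):
    global turn, counter
    other = 1 - i
    for _ in range(iterations):
        flag[i] = True         # entry protocol
        turn = other
        while flag[other] and turn == other:
            pass               # busy-wait until it is safe to enter
        counter += 1           # critical section
        flag[i] = False        # exit protocol

threads = [threading.Thread(target=process, args=(i,)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                 # 20000 if mutual exclusion held throughout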
(This article belongs to the Section Analysis of Algorithms and Complexity Theory)

75 pages, 1896 KiB  
Article
Complete Subhedge Projection for Stepwise Hedge Automata
by Antonio Al Serhali and Joachim Niehren
Algorithms 2024, 17(8), 339; https://doi.org/10.3390/a17080339 - 2 Aug 2024
Viewed by 1325
Abstract
We demonstrate how to evaluate stepwise hedge automata (SHAs) with subhedge projection while completely projecting irrelevant subhedges. Since this requires passing finite state information top-down, we introduce the notion of downward stepwise hedge automata. We use them to define in-memory and streaming evaluators with complete subhedge projection for SHAs. We then tune the evaluators so that they can decide on membership at the earliest time point. We apply our algorithms to the problem of answering regular XPath queries on XML streams. Our experiments show that complete subhedge projection of SHAs can indeed speed up earliest query answering on XML streams so that it becomes competitive with the best existing streaming tools for XPath queries. Full article
(This article belongs to the Special Issue Selected Algorithmic Papers From FCT 2023)

11 pages, 250 KiB  
Article
Hardness and Approximability of Dimension Reduction on the Probability Simplex
by Roberto Bruno
Algorithms 2024, 17(7), 296; https://doi.org/10.3390/a17070296 - 6 Jul 2024
Viewed by 1796
Abstract
Dimension reduction is a technique used to transform data from a high-dimensional space into a lower-dimensional space, aiming to retain as much of the original information as possible. This approach is crucial in many disciplines like engineering, biology, astronomy, and economics. In this paper, we consider the following dimensionality reduction instance: Given an n-dimensional probability distribution p and an integer m<n, we aim to find the m-dimensional probability distribution q that is the closest to p, using the Kullback–Leibler divergence as the measure of closeness. We prove that the problem is strongly NP-hard, and we present an approximation algorithm for it. Full article
(This article belongs to the Special Issue Selected Algorithmic Papers from IWOCA 2024)
14 pages, 312 KiB  
Article
Parsing Unranked Tree Languages, Folded Once
by Martin Berglund, Henrik Björklund and Johanna Björklund
Algorithms 2024, 17(6), 268; https://doi.org/10.3390/a17060268 - 19 Jun 2024
Viewed by 996
Abstract
A regular unranked tree folding consists of a regular unranked tree language and a folding operation that merges (i.e., folds) selected nodes of a tree to form a graph; the combination is a formal device for representing graph languages. If, in the process of folding, the order among edges is discarded so that the result is an unordered graph, then two applications of a fold operation are enough to make the associated parsing problem NP-complete. However, if the order is kept, then the problem is solvable in non-uniform polynomial time. In this paper, we address the remaining case, where only one fold operation is applied, but the order among the edges is discarded. We show that, under these conditions, the problem is solvable in non-uniform polynomial time. Full article
(This article belongs to the Special Issue Selected Algorithmic Papers From FCT 2023)

13 pages, 346 KiB  
Article
Minimizing Query Frequency to Bound Congestion Potential for Moving Entities at a Fixed Target Time
by William Evans and David Kirkpatrick
Algorithms 2024, 17(6), 246; https://doi.org/10.3390/a17060246 - 6 Jun 2024
Viewed by 1024
Abstract
Consider a collection of entities moving continuously with bounded speed, but otherwise unpredictably, in some low-dimensional space. Two such entities encroach upon one another at a fixed time if their separation is less than some specified threshold. Encroachment, of concern in many settings such as collision avoidance, may be unavoidable. However, the associated difficulties are compounded if there is uncertainty about the precise location of entities, giving rise to potential encroachment and, more generally, potential congestion within the full collection. We adopt a model in which entities can be queried for their current location (at some cost) and the uncertainty region associated with an entity grows in proportion to the time since that entity was last queried. The goal is to maintain low potential congestion, measured in terms of the (dynamic) intersection graph of uncertainty regions, at specified (possibly all) times, using the lowest possible query cost. Previous work in the same uncertainty model addressed the problem of minimizing the congestion potential of point entities using location queries of some bounded frequency. It was shown that it is possible to design query schemes that are O(1)-competitive, in terms of worst-case congestion potential, with other, even clairvoyant query schemes (that exploit knowledge of the trajectories of all entities), subject to the same bound on query frequency. In this paper, we initiate the treatment of a more general problem with the complementary optimization objective: minimizing the query frequency, measured as the reciprocal of the minimum time between queries (granularity), while guaranteeing a fixed bound on congestion potential of entities with positive extent at one specified target time. This complementary objective necessitates quite different schemes and analyses. Nevertheless, our results parallel those of the earlier papers, specifically tight competitive bounds on required query frequency. Full article
(This article belongs to the Special Issue Selected Algorithmic Papers From FCT 2023)

20 pages, 526 KiB  
Systematic Review
Prime Number Sieving—A Systematic Review with Performance Analysis
by Mircea Ghidarcea and Decebal Popescu
Algorithms 2024, 17(4), 157; https://doi.org/10.3390/a17040157 - 14 Apr 2024
Cited by 3 | Viewed by 2443
Abstract
The systematic generation of prime numbers has been almost ignored since the 1990s, when most of the IT research resources related to prime numbers migrated to studies on the use of very large primes for cryptography, and little effort was made to further the knowledge regarding techniques like sieving. At present, sieving techniques are mostly used for didactic purposes, and no real advances seem to be made in this domain. This systematic review analyzes the theoretical advances in sieving that have occurred up to the present. The research followed the PRISMA 2020 guidelines and was conducted using three established databases: Web of Science, IEEE Xplore and Scopus. Our methodical review aims to provide an extensive overview of the progress in prime sieving—unfortunately, no significant advancements in this field were identified in the last 20 years. Full article
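For reference, the baseline that the surveyed sieving techniques refine is the classical Sieve of Eratosthenes; a minimal Python sketch (ours, not code drawn from any reviewed paper) is:

def eratosthenes(limit):
    # Return all primes <= limit by crossing off the multiples of each prime.
    is_prime = bytearray([1]) * (limit + 1)
    is_prime[0:2] = b"\x00\x00"              # 0 and 1 are not prime
    p = 2
    while p * p <= limit:
        if is_prime[p]:
            is_prime[p * p::p] = bytearray(len(range(p * p, limit + 1, p)))
        p += 1
    return [i for i, flag in enumerate(is_prime) if flag]

print(eratosthenes(50))   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]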

22 pages, 1016 KiB  
Article
Multi-Objective BiLevel Optimization by Bayesian Optimization
by Vedat Dogan and Steven Prestwich
Algorithms 2024, 17(4), 146; https://doi.org/10.3390/a17040146 - 30 Mar 2024
Cited by 1 | Viewed by 2891
Abstract
In a multi-objective optimization problem, a decision-maker has more than one objective to optimize. In a bilevel optimization problem, there are two decision-makers in a hierarchy: a leader who makes the first decision and a follower who reacts, each aiming to optimize their own objective. Many real-world decision-making processes have various objectives to optimize at the same time while considering how the decision-makers affect each other. When both features are combined, we have a multi-objective bilevel optimization problem, which arises in manufacturing, logistics, environmental economics, defence applications and many other areas. Many exact and approximation-based techniques have been proposed, but because of the intrinsic nonconvexity and conflicting multiple objectives, their computational cost is high. We propose a hybrid algorithm based on batch Bayesian optimization to approximate the upper-level Pareto-optimal solution set. We also extend our approach to handle uncertainty in the leader’s objectives via a hypervolume improvement-based acquisition function. Experiments show that our algorithm is more efficient than other current methods while successfully approximating Pareto fronts. Full article
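To make the leader/follower structure concrete, here is a deliberately tiny multi-objective bilevel example solved by brute force rather than by the paper's batch Bayesian optimization; the two leader objectives and the analytic follower response are hypothetical toy choices:

import numpy as np

# Leader picks x in [0, 2]; the follower best-responds with y*(x) = argmin_y (y - x)^2 = x.
# The leader then evaluates two conflicting objectives at (x, y*) and keeps the Pareto set.
points = []
for x in np.linspace(0.0, 2.0, 201):
    y = x                                   # follower's analytic best response
    f1 = x ** 2 + y ** 2                    # leader objective 1 (smallest near x = 0)
    f2 = (x - 1.0) ** 2 + (y - 1.0) ** 2    # leader objective 2 (smallest near x = 1)
    points.append((f1, f2, x))

# Keep the non-dominated (upper-level Pareto-optimal) leader decisions.
pareto = [p for p in points
          if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)]
print(len(pareto), "non-dominated leader decisions")   # the x in [0, 1] segment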

16 pages, 314 KiB  
Article
Closest Farthest Widest
by Kenneth Lange
Algorithms 2024, 17(3), 95; https://doi.org/10.3390/a17030095 - 22 Feb 2024
Cited by 1 | Viewed by 1537
Abstract
The current paper proposes and tests algorithms for finding the diameter of a compact convex set and the farthest point in the set from another given point. For these two nonconvex problems, I construct Frank–Wolfe and projected gradient ascent algorithms. Although these algorithms are guaranteed to go uphill, they can become trapped by local maxima. To avoid this defect, I investigate a homotopy method that gradually deforms a ball into the target set. Motivated by the Frank–Wolfe algorithm, I also find the support function of the intersection of a convex cone and a ball centered at the origin and elaborate a known bisection algorithm for calculating the support function of a convex sublevel set. The Frank–Wolfe and projected gradient algorithms are tested on five compact convex sets: (a) the box whose coordinates range between −1 and 1, (b) the intersection of the unit ball and the non-negative orthant, (c) the probability simplex, (d) the Manhattan-norm unit ball, and (e) a sublevel set of the elastic net penalty. Frank–Wolfe and projected gradient ascent are about equally fast on these test problems. Ignoring homotopy, the Frank–Wolfe algorithm is more reliable. However, homotopy allows projected gradient ascent to recover from its failures. Full article
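As a toy illustration of projected gradient ascent on one of the listed test sets, the hedged sketch below finds the farthest point of the box [-1, 1]^d from an external point; the step size, iteration count, and starting point are arbitrary choices of ours, not the paper's settings:

import numpy as np

def farthest_point_in_box(y, step=0.5, iters=200):
    # Projected gradient ascent for max_x 0.5 * ||x - y||^2 subject to x in [-1, 1]^d.
    # The objective is convex, so ascent can stall at a local maximum in general;
    # starting from the box center works for this toy example.
    y = np.asarray(y, dtype=float)
    x = np.zeros_like(y)
    for _ in range(iters):
        grad = x - y                               # gradient of 0.5 * ||x - y||^2
        x = np.clip(x + step * grad, -1.0, 1.0)    # ascent step followed by projection
    return x

y = np.array([2.0, -0.3, 0.7])
x_star = farthest_point_in_box(y)
print(x_star, np.linalg.norm(x_star - y))          # approx [-1.  1. -1.], distance ~3.69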
14 pages, 975 KiB  
Article
What Is a Causal Graph?
by Philip Dawid
Algorithms 2024, 17(3), 93; https://doi.org/10.3390/a17030093 - 21 Feb 2024
Cited by 1 | Viewed by 2011
Abstract
This article surveys the variety of ways in which a directed acyclic graph (DAG) can be used to represent a problem of probabilistic causality. For each of these ways, we describe the relevant formal or informal semantics governing that representation. It is suggested that the cleanest such representation is that embodied in an augmented DAG, which contains nodes for non-stochastic intervention indicators in addition to the usual nodes for domain variables. Full article
(This article belongs to the Special Issue Bayesian Networks and Causal Reasoning)

20 pages, 351 KiB  
Article
A Novel Higher-Order Numerical Scheme for System of Nonlinear Load Flow Equations
by Fiza Zafar, Alicia Cordero, Husna Maryam and Juan R. Torregrosa
Algorithms 2024, 17(2), 86; https://doi.org/10.3390/a17020086 - 18 Feb 2024
Viewed by 1582
Abstract
Power flow problems can be solved in a variety of ways by using the Newton–Raphson approach. The nonlinear power flow equations depend upon the voltages Vi and phase angles δ. The Jacobian of an electrical power system is obtained by taking the partial derivatives of the load flow equations, which contain the active and reactive powers. In this paper, we present an efficient seventh-order iterative scheme for obtaining the solutions of nonlinear systems of equations, with only three steps in its formulation. We then quantify the computational cost of the different operations involved, such as matrix–matrix multiplication, matrix–vector multiplication, and LU decomposition, use it to calculate the cost of the proposed method, and compare it with the cost of existing seventh-order methods. Furthermore, we elucidate the applicability of the newly developed scheme to electrical power systems. Two-bus, three-bus, and four-bus power flow problems are then solved by using load flow equations that demonstrate the applicability of the new scheme. Full article
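For orientation, the classical Newton–Raphson iteration that the seventh-order scheme is benchmarked against can be written as below; the two equations form a made-up toy system in a voltage magnitude and phase angle, not one of the paper's bus test cases:

import numpy as np

def newton_raphson(F, J, x0, tol=1e-10, max_iter=50):
    # Solve F(x) = 0 for a nonlinear system with Jacobian J (classical second-order baseline).
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        dx = np.linalg.solve(J(x), -F(x))   # one LU solve per iteration
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Hypothetical pair of load-flow-style equations in V (magnitude) and d (angle):
F = lambda z: np.array([z[0] * np.cos(z[1]) - 0.9,
                        z[0] * np.sin(z[1]) - 0.2])
J = lambda z: np.array([[np.cos(z[1]), -z[0] * np.sin(z[1])],
                        [np.sin(z[1]),  z[0] * np.cos(z[1])]])
print(newton_raphson(F, J, x0=[1.0, 0.0]))   # V ~ 0.922, d ~ 0.219 rad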

63 pages, 3409 KiB  
Review
Survey of Recent Applications of the Chaotic Lozi Map
by René Lozi
Algorithms 2023, 16(10), 491; https://doi.org/10.3390/a16100491 - 22 Oct 2023
Cited by 11 | Viewed by 6073
Abstract
Since its original publication in 1978, Lozi’s chaotic map has been thoroughly explored and continues to be. Hundreds of publications have analyzed its particular structure and applied its properties in many fields (e.g., improvement of physical devices, electrical components such as memristors, cryptography, optimization, evolutionary algorithms, synchronization, control, secure communications, AI with swarm intelligence, chimeras, solitary states, etc.) through algorithms such as the COLM algorithm (Chaotic Optimization algorithm based on Lozi Map), Particle Swarm Optimization (PSO), and Differential Evolution (DE). In this article, we present a survey based on dozens of articles on the use of this map in algorithms aimed at real applications or applications exploring new directions of dynamical systems such as chimeras and solitary states. Full article
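For readers new to the map, Lozi's map is the two-dimensional piecewise-linear recurrence x' = 1 - a|x| + y, y' = bx; a minimal iteration sketch with the classical chaotic parameter values a = 1.7 and b = 0.5 (our example, not code from any surveyed application) is:

def lozi_orbit(x, y, a=1.7, b=0.5, n=10):
    # Iterate the Lozi map n times and return the visited points.
    orbit = [(x, y)]
    for _ in range(n):
        x, y = 1.0 - a * abs(x) + y, b * x
        orbit.append((x, y))
    return orbit

for point in lozi_orbit(0.1, 0.1, n=5):
    print(point)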
(This article belongs to the Special Issue Surveys in Algorithm Analysis and Complexity Theory, Part II)

20 pages, 1849 KiB  
Review
Artificial Intelligence for Management Information Systems: Opportunities, Challenges, and Future Directions
by Stela Stoykova and Nikola Shakev
Algorithms 2023, 16(8), 357; https://doi.org/10.3390/a16080357 - 26 Jul 2023
Cited by 10 | Viewed by 22400
Abstract
The aim of this paper is to present a systematic literature review of the existing research, published between 2006 and 2023, in the field of artificial intelligence for management information systems. Of the 3946 studies that were considered by the authors, 60 primary studies were selected for analysis. The analysis shows that most research is focused on the application of AI for intelligent process automation, with an increasing number of studies focusing on predictive analytics and natural language processing. With respect to the platforms used by AI researchers, the study finds that cloud-based solutions are preferred over on-premises ones. A new research trend of deploying AI applications at the edge of industrial networks and utilizing federated learning is also identified. The need to focus research efforts on developing guidelines and frameworks in terms of ethics, data privacy, and security for AI adoption in MIS is highlighted. Developing a unified digital business strategy and overcoming barriers to user–AI engagement are some of the identified challenges to obtaining business value from AI integration. Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)

36 pages, 6469 KiB  
Article
Physics-Informed Deep Learning for Traffic State Estimation: A Survey and the Outlook
by Xuan Di, Rongye Shi, Zhaobin Mo and Yongjie Fu
Algorithms 2023, 16(6), 305; https://doi.org/10.3390/a16060305 - 17 Jun 2023
Cited by 26 | Viewed by 6432
Abstract
Owing to its robust predictive power (compared to pure physics-based models) and sample-efficient training (compared to pure deep learning models), physics-informed deep learning (PIDL), a paradigm hybridizing physics-based models and deep neural networks (DNNs), has been booming in science and engineering fields. One key challenge of applying PIDL to various domains and problems lies in the design of a computational graph that integrates physics and DNNs; in other words, in how the physics is encoded into DNNs and how the physics and data components are represented. In this paper, we offer an overview of a variety of architecture designs of PIDL computational graphs and how these structures are customized to traffic state estimation (TSE), a central problem in transportation engineering. When observation data, problem type, and goal vary, we demonstrate potential architectures of PIDL computational graphs and compare these variants using the same real-world dataset. Full article

17 pages, 2718 KiB  
Review
Enhancing Social Media Platforms with Machine Learning Algorithms and Neural Networks
by Hamed Taherdoost
Algorithms 2023, 16(6), 271; https://doi.org/10.3390/a16060271 - 29 May 2023
Cited by 14 | Viewed by 11308
Abstract
Network analysis aids management in reducing overall expenditures and maintenance workload. Social media platforms frequently use neural networks to suggest material that corresponds with user preferences. Machine learning is one of many methods for social network analysis. Machine learning algorithms operate on a collection of observable features that are taken from user data. Machine learning and neural network-based systems represent a topic of study that spans several fields. Computers can now recognize the emotions behind particular content uploaded by users to social media networks thanks to machine learning. This study examines research on machine learning and neural networks, with an emphasis on social analysis in the context of the current literature. Full article
(This article belongs to the Special Issue Machine Learning in Social Network Analytics)

24 pages, 8508 KiB  
Article
From Activity Recognition to Simulation: The Impact of Granularity on Production Models in Heavy Civil Engineering
by Anne Fischer, Alexandre Beiderwellen Bedrikow, Iris D. Tommelein, Konrad Nübel and Johannes Fottner
Algorithms 2023, 16(4), 212; https://doi.org/10.3390/a16040212 - 18 Apr 2023
Cited by 11 | Viewed by 4638
Abstract
As in manufacturing with its Industry 4.0 transformation, the enormous potential of artificial intelligence (AI) is also being recognized in the construction industry. Specifically, the equipment-intensive construction industry can benefit from using AI. AI applications can leverage the data recorded by the numerous sensors on machines and mirror them in a digital twin. Analyzing the digital twin can help optimize processes on the construction site and increase productivity. We present a case from special foundation engineering: the machine production of bored piles. We introduce a hierarchical classification for activity recognition and apply a hybrid deep learning model based on convolutional and recurrent neural networks. Then, based on the results from the activity detection, we use discrete-event simulation to predict construction progress. We highlight the difficulty of defining the appropriate modeling granularity. While activity detection requires equipment movement, simulation requires knowledge of the production flow. Therefore, we present a flow-based production model that can be captured in a modularized process catalog. Overall, this paper aims to illustrate modeling using digital-twin technologies to increase construction process improvement in practice. Full article

21 pages, 558 KiB  
Article
Model-Robust Estimation of Multiple-Group Structural Equation Models
by Alexander Robitzsch
Algorithms 2023, 16(4), 210; https://doi.org/10.3390/a16040210 - 17 Apr 2023
Cited by 8 | Viewed by 2969
Abstract
Structural equation models (SEM) are widely used in the social sciences. They model the relationships between latent variables in structural models, while defining the latent variables by observed variables in measurement models. Frequently, it is of interest to compare particular parameters in an SEM as a function of a discrete grouping variable. Multiple-group SEM is employed to compare structural relationships between groups. In this article, estimation approaches for multiple-group SEM are reviewed. We focus on comparing different estimation strategies in the presence of local model misspecifications (i.e., model errors). In detail, maximum likelihood and weighted least-squares estimation approaches are compared with a newly proposed robust Lp loss function and regularized maximum likelihood estimation. The latter methods are referred to as model-robust estimators because they show some resistance to model errors. In particular, we focus on the performance of the different estimators in the presence of unmodelled residual error correlations and measurement noninvariance (i.e., group-specific item intercepts). The performance of the different estimators is compared in two simulation studies and an empirical example. It turned out that the robust loss function approach is computationally much less demanding than regularized maximum likelihood estimation but resulted in similar statistical performance. Full article
(This article belongs to the Special Issue Statistical learning and Its Applications)

15 pages, 3252 KiB  
Article
An Adversarial DBN-LSTM Method for Detecting and Defending against DDoS Attacks in SDN Environments
by Lei Chen, Zhihao Wang, Ru Huo and Tao Huang
Algorithms 2023, 16(4), 197; https://doi.org/10.3390/a16040197 - 5 Apr 2023
Cited by 17 | Viewed by 3165
Abstract
As an essential piece of infrastructure supporting cyberspace security technology verification, network weapons and equipment testing, attack defense confrontation drills, and network risk assessment, Cyber Range is exceptionally vulnerable to distributed denial of service (DDoS) attacks from three malicious parties. Moreover, some attackers try to fool the classification/prediction mechanism by crafting the input data to create adversarial attacks, which is hard to defend for ML-based Network Intrusion Detection Systems (NIDSs). This paper proposes an adversarial DBN-LSTM method for detecting and defending against DDoS attacks in SDN environments, which applies generative adversarial networks (GAN) as well as deep belief networks and long short-term memory (DBN-LSTM) to make the system less sensitive to adversarial attacks and faster feature extraction. We conducted the experiments using the public dataset CICDDoS 2019. The experimental results demonstrated that our method efficiently detected up-to-date common types of DDoS attacks compared to other approaches. Full article
(This article belongs to the Special Issue Artificial Intelligence in Intrusion Detection Systems)

19 pages, 1087 KiB  
Article
A Deep Analysis of Brain Tumor Detection from MR Images Using Deep Learning Networks
by Md Ishtyaq Mahmud, Muntasir Mamun and Ahmed Abdelgawad
Algorithms 2023, 16(4), 176; https://doi.org/10.3390/a16040176 - 23 Mar 2023
Cited by 156 | Viewed by 26422
Abstract
Creating machines that behave and work in a way similar to humans is the objective of artificial intelligence (AI). Computer activities involving AI include pattern recognition, planning, and problem-solving, among others. Deep learning is a family of machine learning algorithms that, with the aid of magnetic resonance imaging (MRI), can be used to create models for the detection and categorization of brain tumors, allowing for their quick and simple identification. Brain disorders are mostly the result of aberrant brain cell proliferation, which can harm the structure of the brain and ultimately result in malignant brain cancer. The early identification of brain tumors and the subsequent appropriate treatment may lower the death rate. In this study, we propose a convolutional neural network (CNN) architecture for the efficient identification of brain tumors using MR images. This paper also discusses various models such as ResNet-50, VGG16, and Inception V3 and conducts a comparison between the proposed architecture and these models. To analyze the performance of the models, we considered different metrics such as the accuracy, recall, loss, and area under the curve (AUC). As a result of analyzing the different models against our proposed model using these metrics, we concluded that the proposed model performed better than the others. Using a dataset of 3264 MR images, we found that the CNN model had an accuracy of 93.3%, an AUC of 98.43%, a recall of 91.19%, and a loss of 0.25. After comparing it to the other models, we may infer that the proposed model is reliable for the early detection of a variety of brain tumors. Full article
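As a schematic of the kind of CNN classifier discussed (the layer counts, filter sizes, input resolution, and four-class output below are illustrative assumptions, not the architecture or results reported in the paper), a Keras sketch might look like:

from tensorflow import keras

model = keras.Sequential([
    keras.layers.Input(shape=(224, 224, 3)),
    keras.layers.Conv2D(32, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(64, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(4, activation="softmax"),   # hypothetical tumor/no-tumor class count
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()   # training would follow with model.fit on labeled MR images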
(This article belongs to the Special Issue Machine Learning in Healthcare and Biomedical Application II)

13 pages, 1355 KiB  
Article
Model Parallelism Optimization for CNN FPGA Accelerator
by Jinnan Wang, Weiqin Tong and Xiaoli Zhi
Algorithms 2023, 16(2), 110; https://doi.org/10.3390/a16020110 - 14 Feb 2023
Cited by 8 | Viewed by 4278
Abstract
Convolutional neural networks (CNNs) have made impressive achievements in image classification and object detection. For hardware with limited resources, it is not easy to achieve CNN inference with a large number of parameters without external storage. Model parallelism is an effective way to reduce resource usage by distributing CNN inference among several devices. However, parallelizing a CNN model is not easy, because CNN models have an essentially tightly-coupled structure. In this work, we propose a novel model parallelism method to decouple the CNN structure with group convolution and a new channel shuffle procedure. Our method could eliminate inter-device synchronization while reducing the memory footprint of each device. Using the proposed model parallelism method, we designed a parallel FPGA accelerator for the classic CNN model ShuffleNet. This accelerator was further optimized with features such as aggregate read and kernel vectorization to fully exploit the hardware-level parallelism of the FPGA. We conducted experiments with ShuffleNet on two FPGA boards, each of which had an Intel Arria 10 GX1150 and 16GB DDR3 memory. The experimental results showed that when using two devices, ShuffleNet achieved a 1.42× speed increase and reduced its memory footprint by 34%, as compared to its non-parallel counterpart, while maintaining accuracy. Full article
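For context, the standard ShuffleNet-style channel shuffle that group convolutions rely on can be written in a few lines of NumPy; note that the paper proposes its own modified shuffle procedure, which this generic sketch does not reproduce:

import numpy as np

def channel_shuffle(x, groups):
    # Standard channel shuffle on an NCHW tensor: split the channels into groups,
    # transpose the group and per-group axes, then flatten back.
    n, c, h, w = x.shape
    assert c % groups == 0
    return (x.reshape(n, groups, c // groups, h, w)
             .transpose(0, 2, 1, 3, 4)
             .reshape(n, c, h, w))

x = np.arange(2 * 8 * 1 * 1).reshape(2, 8, 1, 1)
print(channel_shuffle(x, groups=2)[0, :, 0, 0])   # channels interleaved: [0 4 1 5 2 6 3 7]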
(This article belongs to the Section Algorithms for Multidisciplinary Applications)

28 pages, 953 KiB  
Article
Enhancing Logistic Regression Using Neural Networks for Classification in Actuarial Learning
by George Tzougas and Konstantin Kutzkov
Algorithms 2023, 16(2), 99; https://doi.org/10.3390/a16020099 - 9 Feb 2023
Cited by 9 | Viewed by 6751
Abstract
We developed a methodology for the neural network boosting of logistic regression aimed at learning an additional model structure from the data. In particular, we constructed two classes of neural network-based models: shallow–dense neural networks with one hidden layer and deep neural networks with multiple hidden layers. Furthermore, several advanced approaches were explored, including the combined actuarial neural network approach, embeddings and transfer learning. The model training was achieved by minimizing either the deviance or the cross-entropy loss functions, leading to fourteen neural network-based models in total. For illustrative purposes, logistic regression and the alternative neural network-based models we propose are employed for a binary classification exercise concerning the occurrence of at least one claim in a French motor third-party insurance portfolio. Finally, the model interpretability issue was addressed via the local interpretable model-agnostic explanations approach. Full article
(This article belongs to the Special Issue Deep Neural Networks and Optimization Algorithms)

30 pages, 3724 KiB  
Review
Defect Detection Methods for Industrial Products Using Deep Learning Techniques: A Review
by Alireza Saberironaghi, Jing Ren and Moustafa El-Gindy
Algorithms 2023, 16(2), 95; https://doi.org/10.3390/a16020095 - 8 Feb 2023
Cited by 116 | Viewed by 29922
Abstract
Over the last few decades, detecting surface defects has attracted significant attention as a challenging task. There are specific classes of problems that can be solved using traditional image processing techniques. However, these techniques struggle with complex textures in backgrounds, noise, and differences in lighting conditions. As a solution to this problem, deep learning has recently emerged, motivated by two main factors: accessibility to computing power and the rapid digitization of society, which enables the creation of large databases of labeled samples. This review paper aims to briefly summarize and analyze the current state of research on detecting defects using machine learning methods. First, deep learning-based detection of surface defects on industrial products is discussed from three perspectives: supervised, semi-supervised, and unsupervised. Secondly, the current research status of deep learning defect detection methods for X-ray images is discussed. Finally, we summarize the most common challenges and their potential solutions in surface defect detection, such as unbalanced sample identification, limited sample size, and real-time processing. Full article
(This article belongs to the Special Issue Deep Learning Architecture and Applications)

14 pages, 2116 KiB  
Article
Effective Heart Disease Prediction Using Machine Learning Techniques
by Chintan M. Bhatt, Parth Patel, Tarang Ghetia and Pier Luigi Mazzeo
Algorithms 2023, 16(2), 88; https://doi.org/10.3390/a16020088 - 6 Feb 2023
Cited by 248 | Viewed by 76794
Abstract
The diagnosis and prognosis of cardiovascular disease are crucial medical tasks to ensure correct classification, which helps cardiologists provide proper treatment to the patient. Machine learning applications in the medical niche have increased as they can recognize patterns from data. Using machine learning to classify cardiovascular disease occurrence can help diagnosticians reduce misdiagnosis. This research develops a model that can correctly predict cardiovascular diseases to reduce the fatality caused by cardiovascular diseases. This paper proposes a method of k-modes clustering with Huang initialization that can improve classification accuracy. Models such as random forest (RF), decision tree classifier (DT), multilayer perceptron (MP), and XGBoost (XGB) are used. GridSearchCV was used to tune the hyperparameters of the applied models to optimize the results. The proposed approach is applied to a real-world dataset of 70,000 instances from Kaggle. Models were trained on data split 80:20 and achieved the following accuracies: decision tree: 86.37% (with cross-validation) and 86.53% (without cross-validation), XGBoost: 86.87% (with cross-validation) and 87.02% (without cross-validation), random forest: 87.05% (with cross-validation) and 86.92% (without cross-validation), multilayer perceptron: 87.28% (with cross-validation) and 86.94% (without cross-validation). The proposed models have the following AUC (area under the curve) values: decision tree: 0.94, XGBoost: 0.95, random forest: 0.95, multilayer perceptron: 0.95. The conclusion drawn from this research is that the multilayer perceptron with cross-validation outperformed all other algorithms in terms of accuracy, achieving the highest accuracy of 87.28%. Full article
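The model-selection step described here follows a standard scikit-learn pattern; the sketch below reproduces that pattern for one of the four models on synthetic data (the parameter grid and the generated data are placeholders, not the paper's 70,000-record Kaggle dataset or its reported settings):

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=2000, n_features=11, random_state=0)        # synthetic stand-in
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)   # 80:20 split

grid = GridSearchCV(RandomForestClassifier(random_state=0),
                    param_grid={"n_estimators": [100, 200], "max_depth": [None, 10]},
                    cv=5, scoring="accuracy")
grid.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, grid.predict(X_te)))
print("AUC:", roc_auc_score(y_te, grid.predict_proba(X_te)[:, 1]))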
(This article belongs to the Special Issue Artificial Intelligence Algorithms for Healthcare)

42 pages, 655 KiB  
Review
Inverse Reinforcement Learning as the Algorithmic Basis for Theory of Mind: Current Methods and Open Problems
by Jaime Ruiz-Serra and Michael S. Harré
Algorithms 2023, 16(2), 68; https://doi.org/10.3390/a16020068 - 19 Jan 2023
Cited by 7 | Viewed by 7221
Abstract
Theory of mind (ToM) is the psychological construct by which we model another’s internal mental states. Through ToM, we adjust our own behaviour to best suit a social context, and therefore it is essential to our everyday interactions with others. In adopting an algorithmic (rather than a psychological or neurological) approach to ToM, we gain insights into cognition that will aid us in building more accurate models for the cognitive and behavioural sciences, as well as enable artificial agents to be more proficient in social interactions as they become more embedded in our everyday lives. Inverse reinforcement learning (IRL) is a class of machine learning methods by which to infer the preferences (rewards as a function of state) of a decision maker from its behaviour (trajectories in a Markov decision process). IRL can provide a computational approach for ToM, as recently outlined by Jara-Ettinger, but this will require a better understanding of the relationship between ToM concepts and existing IRL methods at the algorithmic level. Here, we provide a review of prominent IRL algorithms and their formal descriptions, and discuss the applicability of IRL concepts as the algorithmic basis of a ToM in AI. Full article
(This article belongs to the Special Issue Advancements in Reinforcement Learning Algorithms)

15 pages, 1356 KiB  
Article
A Discrete Partially Observable Markov Decision Process Model for the Maintenance Optimization of Oil and Gas Pipelines
by Ezra Wari, Weihang Zhu and Gino Lim
Algorithms 2023, 16(1), 54; https://doi.org/10.3390/a16010054 - 12 Jan 2023
Cited by 8 | Viewed by 3144
Abstract
Corrosion is one of the major causes of failure in pipelines for transporting oil and gas products. To mitigate the impact of this problem, organizations perform different maintenance operations, including detecting corrosion, determining corrosion growth, and implementing optimal maintenance policies. This paper proposes a partially observable Markov decision process (POMDP) model for optimizing maintenance based on the corrosion progress, which is monitored by an inline inspection to assess the extent of pipeline corrosion. The states are defined by dividing the deterioration range equally, whereas the actions are determined based on the specific states and pipeline attributes. Monte Carlo simulation and a pure birth Markov process method are used for computing the transition matrix. The cost of maintenance and failure are considered when calculating the rewards. The inline inspection methods and tool measurement errors may cause reading distortion, which is used to formulate the observations and the observation function. The model is demonstrated with two numerical examples constructed based on problems and parameters in the literature. The result shows that the proposed model performs well with the added advantage of integrating measurement errors and recommending actions for multiple-state situations. Overall, this discrete model can serve the maintenance decision-making process by better representing the stochastic features. Full article
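To illustrate how a transition matrix over equal-width deterioration states can be estimated by Monte Carlo simulation, here is a hedged sketch in which the corrosion-growth model, state bounds, and sample sizes are entirely hypothetical placeholders rather than values from the paper:

import numpy as np

rng = np.random.default_rng(0)
n_states = 5
bins = np.linspace(0.0, 10.0, n_states + 1)     # equal-width corrosion-depth states (e.g., mm)

def grow(depth):
    # Hypothetical one-inspection-interval corrosion growth (gamma-distributed increment).
    return depth + rng.gamma(shape=2.0, scale=0.5)

counts = np.zeros((n_states, n_states))
for _ in range(10_000):
    d0 = rng.uniform(0.0, 10.0)                 # sampled current depth
    d1 = grow(d0)                               # simulated depth one period later
    s0 = min(np.digitize(d0, bins) - 1, n_states - 1)
    s1 = min(np.digitize(d1, bins) - 1, n_states - 1)
    counts[s0, s1] += 1

transition = counts / counts.sum(axis=1, keepdims=True)   # row-stochastic estimate
print(np.round(transition, 3))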
(This article belongs to the Special Issue Algorithms in Monte Carlo Methods)

13 pages, 378 KiB  
Article
Solving the Parallel Drone Scheduling Traveling Salesman Problem via Constraint Programming
by Roberto Montemanni and Mauro Dell’Amico
Algorithms 2023, 16(1), 40; https://doi.org/10.3390/a16010040 - 8 Jan 2023
Cited by 19 | Viewed by 3685
Abstract
Drones are currently seen as a viable way of improving the distribution of parcels in urban and rural environments, while working in coordination with traditional vehicles, such as trucks. In this paper, we consider the parallel drone scheduling traveling salesman problem, where a set of customers requiring a delivery is split between a truck and a fleet of drones, with the aim of minimizing the total time required to serve all the customers. We propose a constraint programming model for the problem, discuss its implementation and present the results of an experimental program on the instances previously cited in the literature to validate exact and heuristic algorithms. We were able to decrease the cost (the time required to serve customers) for some of the instances and, for the first time, to provide a demonstrated optimal solution for all the instances considered. These results show that constraint programming can be a very effective tool for attacking optimization problems with traveling salesman components, such as the one discussed. Full article

122 pages, 1505 KiB  
Systematic Review
Sybil in the Haystack: A Comprehensive Review of Blockchain Consensus Mechanisms in Search of Strong Sybil Attack Resistance
by Moritz Platt and Peter McBurney
Algorithms 2023, 16(1), 34; https://doi.org/10.3390/a16010034 - 6 Jan 2023
Cited by 37 | Viewed by 17931
Abstract
Consensus algorithms are applied in the context of distributed computer systems to improve their fault tolerance. The explosive development of distributed ledger technology following the proposal of ‘Bitcoin’ led to a sharp increase in research activity in this area. Specifically, public and permissionless networks require robust leader selection strategies resistant to Sybil attacks in which malicious attackers present bogus identities to induce byzantine faults. Our goal is to analyse the entire breadth of works in this area systematically, thereby uncovering trends and research directions regarding Sybil attack resistance in today’s blockchain systems to benefit the designs of the future. Through a systematic literature review, we condense an immense set of research records (N = 21,799) to a relevant subset (N = 483). We categorise these mechanisms by their Sybil attack resistance characteristics, leader selection methodology, and incentive scheme. Mechanisms with strong Sybil attack resistance commonly adopt the principles underlying ‘Proof-of-Work’ or ‘Proof-of-Stake’ while mechanisms with limited resistance often use reputation systems or physical world linking. We find that only a few fundamental paradigms exist that can resist Sybil attacks in a permissionless setting but discover numerous innovative mechanisms that can deliver weaker protection in system scenarios with smaller attack surfaces. Full article
(This article belongs to the Special Issue Blockchain Consensus Algorithms)

18 pages, 2875 KiB  
Article
Image-to-Image Translation-Based Data Augmentation for Improving Crop/Weed Classification Models for Precision Agriculture Applications
by L. G. Divyanth, D. S. Guru, Peeyush Soni, Rajendra Machavaram, Mohammad Nadimi and Jitendra Paliwal
Algorithms 2022, 15(11), 401; https://doi.org/10.3390/a15110401 - 30 Oct 2022
Cited by 35 | Viewed by 8767
Abstract
Applications of deep-learning models in machine visions for crop/weed identification have remarkably upgraded the authenticity of precise weed management. However, compelling data are required to obtain the desired result from this highly data-driven operation. This study aims to curtail the effort needed to prepare very large image datasets by creating artificial images of maize (Zea mays) and four common weeds (i.e., Charlock, Fat Hen, Shepherd’s Purse, and small-flowered Cranesbill) through conditional Generative Adversarial Networks (cGANs). The fidelity of these synthetic images was tested through t-distributed stochastic neighbor embedding (t-SNE) visualization plots of real and artificial images of each class. The reliability of this method as a data augmentation technique was validated through classification results based on the transfer learning of a pre-defined convolutional neural network (CNN) architecture—the AlexNet; the feature extraction method came from the deepest pooling layer of the same network. Machine learning models based on a support vector machine (SVM) and linear discriminant analysis (LDA) were trained using these feature vectors. The F1 scores of the transfer learning model increased from 0.97 to 0.99, when additionally supported by an artificial dataset. Similarly, in the case of the feature extraction technique, the classification F1-scores increased from 0.93 to 0.96 for SVM and from 0.94 to 0.96 for the LDA model. The results show that image augmentation using generative adversarial networks (GANs) can improve the performance of crop/weed classification models with the added advantage of reduced time and manpower. Furthermore, it has demonstrated that generative networks could be a great tool for deep-learning applications in agriculture. Full article
(This article belongs to the Special Issue Deep Learning Architecture and Applications)

24 pages, 2504 KiB  
Review
A Survey on Fault Diagnosis of Rolling Bearings
by Bo Peng, Ying Bi, Bing Xue, Mengjie Zhang and Shuting Wan
Algorithms 2022, 15(10), 347; https://doi.org/10.3390/a15100347 - 26 Sep 2022
Cited by 60 | Viewed by 7196
Abstract
The failure of a rolling bearing may cause the shutdown of mechanical equipment and even induce catastrophic accidents, resulting in tremendous economic losses and a severely negative impact on society. Fault diagnosis of rolling bearings becomes an important topic with much attention from researchers and industrial pioneers. There are an increasing number of publications on this topic. However, there is a lack of a comprehensive survey of existing works from the perspectives of fault detection and fault type recognition in rolling bearings using vibration signals. Therefore, this paper reviews recent fault detection and fault type recognition methods using vibration signals. First, it provides an overview of fault diagnosis of rolling bearings and typical fault types. Then, existing fault diagnosis methods are categorized into fault detection methods and fault type recognition methods, which are separately revised and discussed. Finally, a summary of existing datasets, limitations/challenges of existing methods, and future directions are presented to provide more guidance for researchers who are interested in this field. Overall, this survey paper conducts a review and analysis of the methods used to diagnose rolling bearing faults and provide comprehensive guidance for researchers in this field. Full article
(This article belongs to the Special Issue Artificial Intelligence for Fault Detection and Diagnosis)

19 pages, 2574 KiB  
Article
GA-Reinforced Deep Neural Network for Net Electric Load Forecasting in Microgrids with Renewable Energy Resources for Scheduling Battery Energy Storage Systems
by Chaoran Zheng, Mohsen Eskandari, Ming Li and Zeyue Sun
Algorithms 2022, 15(10), 338; https://doi.org/10.3390/a15100338 - 21 Sep 2022
Cited by 22 | Viewed by 4224
Abstract
The large-scale integration of wind power and PV cells into electric grids alleviates the problem of an energy crisis. However, this is also responsible for technical and management problems in the power grid, such as power fluctuation, scheduling difficulties, and reliability reduction. The microgrid concept has been proposed to locally control and manage a cluster of local distributed energy resources (DERs) and loads. If the net load power can be accurately predicted, it is possible to schedule/optimize the operation of battery energy storage systems (BESSs) through economic dispatch to cover intermittent renewables. However, the load curve of the microgrid is highly affected by various external factors, resulting in large fluctuations, which makes the prediction problematic. This paper predicts the net electric load of the microgrid using a deep neural network to realize a reliable power supply as well as reduce the cost of power generation. Considering that the backpropagation (BP) neural network has a good approximation effect as well as a strong adaptation ability, the load prediction model of the BP deep neural network is established. However, the BP neural network has some defects, such as a prediction effect that is not precise enough and a tendency to fall into locally optimal solutions. Hence, a genetic algorithm (GA)-reinforced deep neural network is introduced. By optimizing the weights and thresholds of the BP network, the deficiency of the BP neural network algorithm is remedied so that the prediction effect is improved. The results reveal that the mean square error (MSE) of the GA–BP neural network prediction is 2.0221, which is significantly smaller than the 30.3493 of the BP neural network prediction, an error reduction of 93.3%. The error reductions of the root mean square error (RMSE) and mean absolute error (MAE) are 74.18% and 51.2%, respectively. Full article

23 pages, 3231 KiB  
Article
Social Media Hate Speech Detection Using Explainable Artificial Intelligence (XAI)
by Harshkumar Mehta and Kalpdrum Passi
Algorithms 2022, 15(8), 291; https://doi.org/10.3390/a15080291 - 17 Aug 2022
Cited by 50 | Viewed by 10399
Abstract
Explainable artificial intelligence (XAI) characteristics have flexible and multifaceted potential in hate speech detection by deep learning models. Interpreting and explaining decisions made by complex artificial intelligence (AI) models to understand the decision-making process of these models were the aims of this research. As a part of this research study, two datasets were taken to demonstrate hate speech detection using XAI. Data preprocessing was performed to clean data of any inconsistencies, clean the text of the tweets, tokenize and lemmatize the text, etc. Categorical variables were also simplified in order to generate a clean dataset for training purposes. Exploratory data analysis was performed on the datasets to uncover various patterns and insights. Various pre-existing models were applied to the Google Jigsaw dataset such as decision trees, k-nearest neighbors, multinomial naïve Bayes, random forest, logistic regression, and long short-term memory (LSTM), among which LSTM achieved an accuracy of 97.6%. Explainable methods such as LIME (local interpretable model-agnostic explanations) were applied to the HateXplain dataset. Variants of the BERT (bidirectional encoder representations from transformers) model such as BERT + ANN (artificial neural network) with an accuracy of 93.55% and BERT + MLP (multilayer perceptron) with an accuracy of 93.67% were created to achieve a good performance in terms of explainability using the ERASER (evaluating rationales and simple English reasoning) benchmark. Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
Show Figures

Figure 1

23 pages, 14110 KiB  
Article
Design of Multi-Objective-Based Artificial Intelligence Controller for Wind/Battery-Connected Shunt Active Power Filter
by Srilakshmi Koganti, Krishna Jyothi Koganti and Surender Reddy Salkuti
Algorithms 2022, 15(8), 256; https://doi.org/10.3390/a15080256 - 25 Jul 2022
Cited by 32 | Viewed by 5882
Abstract
Nowadays, the integration of renewable energy sources such as solar, wind, etc. into the grid is recommended to reduce losses and meet demands. The application of power electronics devices (PED) to control non-linear, unbalanced loads leads to power quality (PQ) issues. This work [...] Read more.
Nowadays, the integration of renewable energy sources such as solar, wind, etc. into the grid is recommended to reduce losses and meet demands. The application of power electronics devices (PED) to control non-linear, unbalanced loads leads to power quality (PQ) issues. This work presents a hybrid controller for the self-tuning filter (STF)-based shunt active power filter (SHAPF), integrated with a wind power generation system (WPGS) and a battery storage system (BS). The SHAPF comprises a three-phase voltage source inverter coupled via a DC-link. The proposed neuro-fuzzy inference hybrid controller (NFIHC) combines the properties of fuzzy logic (FL) and artificial neural network (ANN) controllers and maintains a constant DC-link voltage. Phase synchronization is generated by the STF so that the SHAPF operates effectively under unbalanced and distorted supply voltages. The STF also performs the role of low-pass filters (LPFs) and high-pass filters (HPFs), splitting the fundamental component (FC) and harmonic component (HC) of the current. The control of the SHAPF is based on d-q theory, with the advantage of eliminating LPFs and the phase-locked loop (PLL). The prime objective of the proposed work is to regulate the DC-link voltage during wind uncertainties and load variations and to minimize the total harmonic distortion (THD) in the current waveforms, thereby improving the power factor (PF). Test studies with various combinations of balanced/unbalanced loads, wind velocity variations, and supply voltages were used to evaluate the suggested method's superior performance. In addition, a comparative analysis was carried out against existing controllers such as the conventional proportional-integral (PI), ANN, and FL controllers. Full article
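The d-q theory mentioned above rests on the Park transformation: in the synchronously rotating frame the fundamental current becomes a DC quantity, so the fundamental and harmonic components can be separated by filtering. The sketch below illustrates this with assumed signal amplitudes and a 5th-harmonic distortion; it is not the paper's controller.

```python
# Minimal d-q (Park transform) sketch: the fundamental maps to the DC value of
# i_d, while the harmonic content appears as ripple that can be filtered out.
import numpy as np

w = 2 * np.pi * 50.0                       # 50 Hz grid
t = np.linspace(0, 0.1, 5000)
theta = w * t

def phase_current(shift):
    # fundamental (10 A) plus a 5th harmonic (2 A), a typical non-linear-load profile
    return 10 * np.cos(theta + shift) + 2 * np.cos(5 * (theta + shift))

ia, ib, ic = (phase_current(s) for s in (0.0, -2 * np.pi / 3, 2 * np.pi / 3))

# Amplitude-invariant Park transform to the rotating d-q frame
id_ = (2 / 3) * (ia * np.cos(theta) + ib * np.cos(theta - 2 * np.pi / 3)
                 + ic * np.cos(theta + 2 * np.pi / 3))
iq_ = -(2 / 3) * (ia * np.sin(theta) + ib * np.sin(theta - 2 * np.pi / 3)
                  + ic * np.sin(theta + 2 * np.pi / 3))

# The DC value of i_d is the fundamental component; the ripple is the harmonic part.
print("mean i_d (fundamental):", id_.mean())
print("i_d ripple (harmonics): ", id_.std())
```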
Show Figures

Figure 1

28 pages, 1344 KiB  
Review
Overview of Distributed Machine Learning Techniques for 6G Networks
by Eugenio Muscinelli, Swapnil Sadashiv Shinde and Daniele Tarchi
Algorithms 2022, 15(6), 210; https://doi.org/10.3390/a15060210 - 15 Jun 2022
Cited by 30 | Viewed by 5980
Abstract
The main goal of this paper is to survey the influential research on distributed learning technologies playing a key role in the 6G world. Upcoming 6G technology is expected to create an intelligent, highly scalable, dynamic, and programmable wireless communication network able to [...] Read more.
The main goal of this paper is to survey the influential research on distributed learning technologies playing a key role in the 6G world. Upcoming 6G technology is expected to create an intelligent, highly scalable, dynamic, and programmable wireless communication network able to serve many heterogeneous wireless devices. Various machine learning (ML) techniques are expected to be deployed over the intelligent 6G wireless network to provide solutions to highly complex networking problems. To this end, various 6G nodes and devices are expected to generate massive amounts of data through external sensors, and data analysis will be needed. With such massive and distributed data, and various innovations in computing hardware, distributed ML techniques are expected to play an important role in 6G. Though they have several advantages over centralized ML techniques, implementing distributed ML algorithms over resource-constrained wireless environments can be challenging. Therefore, it is important to select a proper ML algorithm based upon the characteristics of the wireless environment and the resource requirements of the learning process. In this work, we survey the recently introduced distributed ML techniques with their characteristics and possible benefits by focusing our attention on the most influential papers in the area. We finally give our perspective on the main challenges and advantages for telecommunication networks, along with the main scenarios that could arise. Full article
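As one concrete example of the distributed ML techniques surveyed, the sketch below implements a bare-bones federated averaging (FedAvg) round: each node trains locally on private data and only model parameters are exchanged, which is attractive for resource-constrained 6G devices. The node count, model, and data are toy assumptions.

```python
# Minimal FedAvg sketch: local gradient descent on each node, then the server
# averages the resulting model parameters.
import numpy as np

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0, 0.5])

def make_node_data(n=50):
    X = rng.normal(size=(n, 3))
    y = X @ true_w + 0.1 * rng.normal(size=n)
    return X, y

nodes = [make_node_data() for _ in range(5)]        # 5 devices with private data
global_w = np.zeros(3)

for rnd in range(20):                               # communication rounds
    local_ws = []
    for X, y in nodes:
        w = global_w.copy()
        for _ in range(10):                         # local gradient-descent steps
            grad = 2 * X.T @ (X @ w - y) / len(y)
            w -= 0.05 * grad
        local_ws.append(w)
    global_w = np.mean(local_ws, axis=0)            # server averages the models

print("federated estimate:", global_w.round(3), "true:", true_w)
```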
(This article belongs to the Special Issue Algorithms for Communication Networks)
Show Figures

Figure 1

22 pages, 22712 KiB  
Article
Improved JPS Path Optimization for Mobile Robots Based on Angle-Propagation Theta* Algorithm
by Yuan Luo, Jiakai Lu, Qiong Qin and Yanyu Liu
Algorithms 2022, 15(6), 198; https://doi.org/10.3390/a15060198 - 8 Jun 2022
Cited by 12 | Viewed by 4221
Abstract
The Jump Point Search (JPS) algorithm ignores the possibility of any-angle movement, so the paths found by the JPS algorithm on a discrete grid map still deviate from the true shortest paths. To address the above problems, this paper improves the path [...] Read more.
The Jump Point Search (JPS) algorithm ignores the possibility of any-angle movement, so the paths found by the JPS algorithm on a discrete grid map still deviate from the true shortest paths. To address the above problems, this paper improves the path optimization strategy of the JPS algorithm by combining it with the viewable angle of the Angle-Propagation Theta* (AP Theta*) algorithm, and it proposes the AP-JPS algorithm based on an any-angle pathfinding strategy. First, based on the JPS algorithm, this paper proposes a vision triangle judgment method to optimize the generated path by selecting the successor search point. Secondly, the idea of the node viewable angle in the AP Theta* algorithm is introduced to modify the line-of-sight (LOS) reachability detection between two nodes. Finally, the paths are optimized using a seventh-order polynomial based on minimum snap, so that the AP-JPS algorithm generates paths that better match the actual robot motion. The feasibility and effectiveness of this method are proved by simulation experiments and comparison with other algorithms. The results show that the path planning algorithm in this paper obtains paths with good smoothness in environments with different obstacle densities and different map sizes. In the algorithm comparison experiments, it can be seen that the AP-JPS algorithm reduces the path length by 1.61–4.68% and the total turning angle of the path by 58.71–84.67% compared with the JPS algorithm. The AP-JPS algorithm reduces the computing time by 98.59–99.22% compared with the AP Theta* algorithm. Full article
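Any-angle planners such as Theta* (and the AP-JPS variant above) hinge on a line-of-sight (LOS) test between grid nodes. The sketch below shows a Bresenham-style LOS check on an occupancy grid; the grid itself is an illustrative assumption, not the paper's test map.

```python
# Minimal LOS sketch: two grid cells can be joined directly only if no blocked
# cell lies on the straight segment between them.
def line_of_sight(grid, a, b):
    """Return True if the segment from cell a to cell b crosses no obstacle (1)."""
    (x0, y0), (x1, y1) = a, b
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx, sy = (1 if x1 > x0 else -1), (1 if y1 > y0 else -1)
    err = dx - dy
    x, y = x0, y0
    while True:
        if grid[y][x] == 1:                  # obstacle cell on the segment
            return False
        if (x, y) == (x1, y1):
            return True
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x += sx
        if e2 < dx:
            err += dx
            y += sy

grid = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
print(line_of_sight(grid, (0, 0), (3, 0)))   # True: clear along the top row
print(line_of_sight(grid, (0, 2), (3, 0)))   # False: segment crosses the obstacles
```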
Show Figures

Figure 1

22 pages, 848 KiB  
Review
A Survey on Network Optimization Techniques for Blockchain Systems
by Robert Antwi, James Dzisi Gadze, Eric Tutu Tchao, Axel Sikora, Henry Nunoo-Mensah, Andrew Selasi Agbemenu, Kwame Opunie-Boachie Obour Agyekum, Justice Owusu Agyemang, Dominik Welte and Eliel Keelson
Algorithms 2022, 15(6), 193; https://doi.org/10.3390/a15060193 - 4 Jun 2022
Cited by 23 | Viewed by 7236
Abstract
The growth of the Internet of Things (IoT) calls for secure solutions for industrial applications. The security of IoT can be potentially improved by blockchain. However, blockchain technology suffers from scalability issues, which hinder its integration with IoT. Solutions to blockchain’s scalability issues, such as [...] Read more.
The growth of the Internet of Things (IoT) calls for secure solutions for industrial applications. The security of IoT can be potentially improved by blockchain. However, blockchain technology suffers from scalability issues, which hinder its integration with IoT. Solutions to blockchain’s scalability issues, such as minimizing the computational complexity of consensus algorithms or blockchain storage requirements, have received attention. However, to realize the full potential of blockchain in IoT, the inefficiencies of its inter-peer communication must also be addressed. For example, blockchain uses a flooding technique to share blocks, resulting in duplicates and inefficient bandwidth usage. Moreover, blockchain peers use a random neighbor selection (RNS) technique to decide on other peers with whom to exchange blockchain data. As a result, the peer-to-peer (P2P) topology formation limits the effective achievable throughput. This paper provides a survey on the state-of-the-art network structures and communication mechanisms used in blockchain and establishes the need for network-based optimization. Additionally, it discusses the blockchain architecture and its layers, categorizes the existing literature according to these layers, and provides a survey on the state-of-the-art optimization frameworks, analyzing their effectiveness and ability to scale. Finally, this paper presents recommendations for future work. Full article
(This article belongs to the Special Issue Advances in Blockchain Architecture and Consensus)
Show Figures

Figure 1

22 pages, 878 KiB  
Article
Efficient Machine Learning Models for Early Stage Detection of Autism Spectrum Disorder
by Mousumi Bala, Mohammad Hanif Ali, Md. Shahriare Satu, Khondokar Fida Hasan and Mohammad Ali Moni
Algorithms 2022, 15(5), 166; https://doi.org/10.3390/a15050166 - 16 May 2022
Cited by 56 | Viewed by 7802
Abstract
Autism spectrum disorder (ASD) is a neurodevelopmental disorder that severely impairs an individual’s cognitive, linguistic, object recognition, communication, and social abilities. The condition is not curable, although early detection of ASD can assist diagnosis and allow proper steps to be taken to mitigate its effects. [...] Read more.
Autism spectrum disorder (ASD) is a neurodevelopmental disorder that severely impairs an individual’s cognitive, linguistic, object recognition, communication, and social abilities. The condition is not curable, although early detection of ASD can assist diagnosis and allow proper steps to be taken to mitigate its effects. Using various artificial intelligence (AI) techniques, ASD can be detected at an earlier stage than with traditional methods. The aim of this study was to propose a machine learning model that investigates ASD data of different age levels and to identify ASD more accurately. In this work, we gathered ASD datasets of toddlers, children, adolescents, and adults and used several feature selection techniques. Then, different classifiers were applied to these datasets, and we assessed their performance with evaluation metrics including predictive accuracy, kappa statistics, the F1-measure, and AUROC. In addition, we analyzed the performance of individual classifiers using a non-parametric statistical significance test. For the toddler, child, adolescent, and adult datasets, we found that the Support Vector Machine (SVM) performed better than the other classifiers, achieving 97.82% accuracy for the RIPPER-based toddler subset; 99.61% accuracy for the Correlation-based feature selection (CFS) and Boruta CFS intersect (BIC) method-based child subset; 95.87% accuracy for the Boruta-based adolescent subset; and 96.82% accuracy for the CFS-based adult subset. Finally, we applied the Shapley Additive Explanations (SHAP) method to the feature subsets that achieved the highest accuracy and ranked their features based on this analysis. Full article
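The overall pipeline, feature selection followed by an SVM and cross-validated evaluation, can be sketched with scikit-learn as below. The synthetic data and the SelectKBest selector are stand-ins for the ASD screening datasets and the RIPPER/CFS/Boruta selectors used in the study.

```python
# Minimal sketch of a feature-selection + SVM pipeline evaluated with 10-fold CV.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=20, n_informative=8,
                           random_state=0)

model = make_pipeline(
    StandardScaler(),
    SelectKBest(mutual_info_classif, k=10),   # keep the 10 most informative features
    SVC(kernel="rbf"),
)

acc = cross_val_score(model, X, y, cv=10, scoring="accuracy")
f1 = cross_val_score(model, X, y, cv=10, scoring="f1")
print(f"10-fold accuracy: {acc.mean():.3f}, F1: {f1.mean():.3f}")
# A SHAP explainer (e.g., shap.KernelExplainer) could then rank the selected features.
```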
(This article belongs to the Special Issue Interpretability, Accountability and Robustness in Machine Learning)
Show Figures

Figure 1

18 pages, 523 KiB  
Article
Closed-Form Solution of the Bending Two-Phase Integral Model of Euler-Bernoulli Nanobeams
by Efthimios Providas
Algorithms 2022, 15(5), 151; https://doi.org/10.3390/a15050151 - 28 Apr 2022
Cited by 9 | Viewed by 3989
Abstract
Recent developments have shown that the widely used simplified differential model of Eringen’s nonlocal elasticity in nanobeam analysis is not equivalent to the corresponding and initially proposed integral models, the pure integral model and the two-phase integral model, in all cases of loading [...] Read more.
Recent developments have shown that the widely used simplified differential model of Eringen’s nonlocal elasticity in nanobeam analysis is not equivalent to the corresponding and initially proposed integral models, the pure integral model and the two-phase integral model, in all cases of loading and boundary conditions. This has resolved a paradox involving solutions that are not in line with the softening effect of the nonlocal theory expected in all other cases. It has also revived interest in the integral model and the two-phase integral model, which had seen little use owing to the complexity of solving the associated integral and integro-differential equations, respectively. In this article, we use a direct operator method for solving boundary value problems for nth order linear Volterra–Fredholm integro-differential equations of convolution type to construct closed-form solutions to the two-phase integral model of Euler–Bernoulli nanobeams in bending under transverse distributed load and various types of boundary conditions. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
Show Figures

Figure 1

30 pages, 937 KiB  
Review
A Review on the Performance of Linear and Mixed Integer Two-Stage Stochastic Programming Software
by Juan J. Torres, Can Li, Robert M. Apap and Ignacio E. Grossmann
Algorithms 2022, 15(4), 103; https://doi.org/10.3390/a15040103 - 22 Mar 2022
Cited by 19 | Viewed by 5914
Abstract
This paper presents a tutorial on the state-of-the-art software for the solution of two-stage (mixed-integer) linear stochastic programs and provides a list of software designed for this purpose. The methodologies are classified according to the decomposition alternatives and the types of the variables [...] Read more.
This paper presents a tutorial on the state-of-the-art software for the solution of two-stage (mixed-integer) linear stochastic programs and provides a list of software designed for this purpose. The methodologies are classified according to the decomposition alternatives and the types of the variables in the problem. We review the fundamentals of Benders decomposition, dual decomposition and progressive hedging, as well as possible improvements and variants. We also present extensive numerical results to underline the properties and performance of each algorithm using software implementations, including DECIS, FORTSP, PySP, and DSP. Finally, we discuss the strengths and weaknesses of each methodology and propose future research directions. Full article
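As a point of reference for the decomposition methods reviewed, the generic two-stage stochastic linear program they target can be written as follows (standard textbook form; the notation is generic and not tied to any of the listed packages):

```latex
\min_{x \ge 0,\; Ax = b} \; c^{\top}x \;+\; \mathbb{E}_{\xi}\bigl[\,Q(x,\xi)\,\bigr],
\qquad
Q(x,\xi) \;=\; \min_{y \ge 0} \bigl\{\, q(\xi)^{\top}y \;:\; W(\xi)\,y = h(\xi) - T(\xi)\,x \,\bigr\}.
```

With a finite set of scenarios $s = 1,\dots,S$ of probability $p_s$, the expectation becomes the weighted sum $\sum_{s} p_s\,Q(x,\xi_s)$, which is the form exploited by Benders decomposition (cutting planes on $Q$), dual decomposition, and progressive hedging (scenario-wise copies of $x$ with a consensus penalty).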
(This article belongs to the Special Issue Stochastic Algorithms and Their Applications)
Show Figures

Figure 1

18 pages, 576 KiB  
Article
Evolutionary Optimization of Spiking Neural P Systems for Remaining Useful Life Prediction
by Leonardo Lucio Custode, Hyunho Mo, Andrea Ferigo and Giovanni Iacca
Algorithms 2022, 15(3), 98; https://doi.org/10.3390/a15030098 - 19 Mar 2022
Cited by 9 | Viewed by 4936
Abstract
Remaining useful life (RUL) prediction is a key enabler for predictive maintenance. In fact, the possibility of accurately and reliably predicting the RUL of a system, based on a record of its monitoring data, can allow users to schedule maintenance interventions before faults [...] Read more.
Remaining useful life (RUL) prediction is a key enabler for predictive maintenance. In fact, the possibility of accurately and reliably predicting the RUL of a system, based on a record of its monitoring data, can allow users to schedule maintenance interventions before faults occur. In the recent literature, several data-driven methods for RUL prediction have been proposed. However, most of them are based on traditional (connectivist) neural networks, such as convolutional neural networks, and alternative mechanisms have barely been explored. Here, we tackle the RUL prediction problem for the first time by using a membrane computing paradigm, namely that of Spiking Neural P (in short, SN P) systems. First, we show how SN P systems can be adapted to handle the RUL prediction problem. Then, we propose the use of a neuro-evolutionary algorithm to optimize the structure and parameters of the SN P systems. Our results on two datasets, namely the CMAPSS and new CMAPSS benchmarks from NASA, are fairly comparable with those obtained by much more complex deep networks, showing a reasonable compromise between performance and number of trainable parameters, which in turn correlates with memory consumption and computing time. Full article
(This article belongs to the Special Issue Algorithms in Decision Support Systems Vol. 2)
Show Figures

Figure 1

37 pages, 1000 KiB  
Article
Deterministic Approximate EM Algorithm; Application to the Riemann Approximation EM and the Tempered EM
by Thomas Lartigue, Stanley Durrleman and Stéphanie Allassonnière
Algorithms 2022, 15(3), 78; https://doi.org/10.3390/a15030078 - 25 Feb 2022
Cited by 5 | Viewed by 3671
Abstract
The Expectation Maximisation (EM) algorithm is widely used to optimise non-convex likelihood functions with latent variables. Many authors modified its simple design to fit more specific situations. For instance, the Expectation (E) step has been replaced by Monte Carlo (MC), Markov Chain Monte [...] Read more.
The Expectation Maximisation (EM) algorithm is widely used to optimise non-convex likelihood functions with latent variables. Many authors modified its simple design to fit more specific situations. For instance, the Expectation (E) step has been replaced by Monte Carlo (MC), Markov Chain Monte Carlo or tempered approximations, etc. Most of the well-studied approximations belong to the stochastic class. By comparison, the literature is lacking when it comes to deterministic approximations. In this paper, we introduce a theoretical framework, with state-of-the-art convergence guarantees, for any deterministic approximation of the E step. We analyse theoretically and empirically several approximations that fit into this framework. First, for intractable E-steps, we introduce a deterministic version of MC-EM using Riemann sums. This straightforward method requires no hyper-parameter fine-tuning and is useful when the dimensionality is low enough that an MC-EM is not warranted. Then, we consider the tempered approximation, borrowed from the Simulated Annealing literature and used to escape local extrema. We prove that the tempered EM verifies the convergence guarantees for a wider range of temperature profiles than previously considered. We showcase empirically how new non-trivial profiles can more successfully escape adversarial initialisations. Finally, we combine the Riemann and tempered approximations into a method that accomplishes both their purposes. Full article
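One way to picture the tempered approximation is a tempered E-step for a Gaussian mixture: responsibilities are computed from the component likelihoods raised to the power 1/T and renormalised, with the temperature T driven toward 1. The sketch below uses an illustrative exponential temperature profile, not the profiles analysed in the paper.

```python
# Minimal tempered-EM sketch for a two-component 1-D Gaussian mixture.
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 300)])

mu = np.array([-0.5, 0.5])                   # deliberately poor ("adversarial") start
pi, sigma = np.array([0.5, 0.5]), np.array([1.0, 1.0])

def normal_pdf(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

for it in range(50):
    T = 1.0 + 4.0 * np.exp(-it / 10.0)       # temperature profile, T -> 1
    # Tempered E-step: raising to 1/T flattens the responsibilities when T is large.
    lik = pi * normal_pdf(x[:, None], mu, sigma)          # shape (n, 2)
    resp = lik ** (1.0 / T)
    resp /= resp.sum(axis=1, keepdims=True)
    # Standard M-step.
    nk = resp.sum(axis=0)
    pi = nk / len(x)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

print("means:", mu.round(2), "weights:", pi.round(2))
```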
(This article belongs to the Special Issue Stochastic Algorithms and Their Applications)
Show Figures

Figure 1

19 pages, 5851 KiB  
Review
Machine Learning in Cereal Crops Disease Detection: A Review
by Fraol Gelana Waldamichael, Taye Girma Debelee, Friedhelm Schwenker, Yehualashet Megersa Ayano and Samuel Rahimeto Kebede
Algorithms 2022, 15(3), 75; https://doi.org/10.3390/a15030075 - 24 Feb 2022
Cited by 33 | Viewed by 9346
Abstract
Cereals are an important and major source of the human diet. They constitute more than two-thirds of the world’s food source and cover more than 56% of the world’s cultivatable land. These important sources of food are affected by a variety of damaging [...] Read more.
Cereals are an important and major source of the human diet. They constitute more than two-thirds of the world’s food source and cover more than 56% of the world’s cultivatable land. These important sources of food are affected by a variety of damaging diseases, causing significant losses in annual production. In this regard, detection of diseases at an early stage and quantification of their severity have acquired the urgent attention of researchers worldwide. One emerging and popular approach for this task is the utilization of machine learning techniques. In this work, we have identified the most common and damaging diseases affecting cereal crop production, and we also reviewed 45 works performed on the detection and classification of various diseases that occur on six cereal crops within the past five years. In addition, we identified and summarised the numerous publicly available datasets for each cereal crop, the lack of which we identified as the main challenge for research on the application of machine learning to cereal crop disease detection. In this survey, we identified deep convolutional neural networks trained on hyperspectral data as the most effective approach for early detection of diseases, and transfer learning as the most commonly used training method and the one yielding the best results. Full article
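The transfer-learning recipe identified as the most common training method can be sketched as follows: an ImageNet-pretrained CNN is reused with its backbone frozen, and only a new classification head is trained on the (smaller) crop-disease dataset. The class count, backbone choice, and dummy batch below are illustrative assumptions.

```python
# Minimal transfer-learning sketch: freeze a pretrained backbone, retrain the head.
import torch
import torch.nn as nn
from torchvision import models

NUM_DISEASE_CLASSES = 6                      # e.g., healthy + five diseases (assumed)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():             # freeze the pretrained backbone
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, NUM_DISEASE_CLASSES)  # new head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of leaf images (3x224x224).
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_DISEASE_CLASSES, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print("batch loss:", float(loss))
```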
(This article belongs to the Special Issue Mathematical Models and Their Applications III)
Show Figures

Figure 1

14 pages, 1123 KiB  
Article
Approximation of the Riesz–Caputo Derivative by Cubic Splines
by Francesca Pitolli, Chiara Sorgentone and Enza Pellegrino
Algorithms 2022, 15(2), 69; https://doi.org/10.3390/a15020069 - 21 Feb 2022
Cited by 19 | Viewed by 5759
Abstract
Differential problems with the Riesz derivative in space are widely used to model anomalous diffusion. Although the Riesz–Caputo derivative is more suitable for modeling real phenomena, there are few examples in the literature where numerical methods are used to solve such differential problems. In [...] Read more.
Differential problems with the Riesz derivative in space are widely used to model anomalous diffusion. Although the Riesz–Caputo derivative is more suitable for modeling real phenomena, there are few examples in the literature where numerical methods are used to solve such differential problems. In this paper, we propose to approximate the Riesz–Caputo derivative of a given function with a cubic spline. As far as we are aware, this is the first time that cubic splines have been used in the context of the Riesz–Caputo derivative. To show the effectiveness of the proposed numerical method, we present numerical tests in which we compare the analytical solutions of several boundary value problems involving the Riesz–Caputo derivative in space with the numerical solutions obtained by a spline collocation method. The numerical results show that the proposed method is efficient and accurate. Full article
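As background, the Riesz–Caputo derivative is, up to a normalising factor, a symmetric combination of the left- and right-sided Caputo derivatives on [a, b]. One common convention (the factor 1/2 below varies between authors and may differ from the paper's normalisation) is:

```latex
{}^{C}_{a}D^{\alpha}_{x} f(x) = \frac{1}{\Gamma(n-\alpha)}\int_{a}^{x} (x-t)^{\,n-\alpha-1} f^{(n)}(t)\,\mathrm{d}t ,
\qquad
{}^{C}_{x}D^{\alpha}_{b} f(x) = \frac{(-1)^{n}}{\Gamma(n-\alpha)}\int_{x}^{b} (t-x)^{\,n-\alpha-1} f^{(n)}(t)\,\mathrm{d}t ,
```

```latex
{}^{RC}_{\;a}D^{\alpha}_{b} f(x) = \tfrac{1}{2}\Bigl( {}^{C}_{a}D^{\alpha}_{x} f(x) + (-1)^{n}\, {}^{C}_{x}D^{\alpha}_{b} f(x) \Bigr),
\qquad n-1 < \alpha < n .
```

In the proposed method, f is replaced by its cubic-spline interpolant, whose piecewise-polynomial derivatives make these integrals computable in closed form at the collocation points.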
Show Figures

Figure 1

20 pages, 2764 KiB  
Article
A Real-Time Network Traffic Classifier for Online Applications Using Machine Learning
by Ahmed Abdelmoamen Ahmed and Gbenga Agunsoye
Algorithms 2021, 14(8), 250; https://doi.org/10.3390/a14080250 - 21 Aug 2021
Cited by 26 | Viewed by 8436
Abstract
The increasing ubiquity of network traffic and the deployment of new online applications have increased the complexity of traffic analysis. Traditionally, network administrators rely on recognizing well-known static ports to classify the traffic flowing through their networks. However, modern network traffic uses dynamic ports and is transported [...] Read more.
The increasing ubiquity of network traffic and the deployment of new online applications have increased the complexity of traffic analysis. Traditionally, network administrators rely on recognizing well-known static ports to classify the traffic flowing through their networks. However, modern network traffic uses dynamic ports and is transported over secure application-layer protocols (e.g., HTTPS, SSL, and SSH). This makes it a challenging task for network administrators to identify online applications using traditional port-based approaches. One way of classifying modern network traffic is to use machine learning (ML) to distinguish between different traffic attributes such as packet count and size, packet inter-arrival time, packet send–receive ratio, etc. This paper presents the design and implementation of NetScrapper, a flow-based network traffic classifier for online applications. NetScrapper uses three ML models, namely K-Nearest Neighbors (KNN), Random Forest (RF), and Artificial Neural Network (ANN), for classifying the 53 most popular online applications, including Amazon, Youtube, Google, Twitter, and many others. We collected a network traffic dataset containing 3,577,296 packet flows with 87 different features for training, validating, and testing the ML models. A user-friendly web-based interface was developed to enable users to either upload a snapshot of their network traffic to NetScrapper or sniff the network traffic directly from the network interface card in real time. Additionally, we created a middleware pipeline for interfacing the three models with the Flask GUI. Finally, we evaluated NetScrapper using various performance metrics such as classification accuracy and prediction time. Most notably, we found that our ANN model achieves an overall classification accuracy of 99.86% in recognizing the online applications in our dataset. Full article
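Flow-based classification of this kind can be sketched as below: statistical per-flow features feed a classifier that predicts the application label. The synthetic flows, four features, and three classes are illustrative assumptions, far smaller than the paper's 87-feature, 53-class dataset.

```python
# Minimal sketch of flow-feature-based traffic classification with a random forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 3000
# Hypothetical per-flow features: packet count, mean packet size, mean inter-arrival
# time, and send/receive ratio, drawn differently for three toy "applications".
apps = rng.integers(0, 3, n)
X = np.column_stack([
    rng.poisson(20 + 30 * apps),                 # packet count
    rng.normal(400 + 200 * apps, 50),            # mean packet size (bytes)
    rng.exponential(0.05 + 0.02 * apps),         # mean inter-arrival time (s)
    rng.normal(1.0 + 0.5 * apps, 0.2),           # send/receive ratio
])

X_tr, X_te, y_tr, y_te = train_test_split(X, apps, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("flow classification accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```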
Show Figures

Figure 1

22 pages, 2628 KiB  
Article
COVID-19 Prediction Applying Supervised Machine Learning Algorithms with Comparative Analysis Using WEKA
by Charlyn Nayve Villavicencio, Julio Jerison Escudero Macrohon, Xavier Alphonse Inbaraj, Jyh-Horng Jeng and Jer-Guang Hsieh
Algorithms 2021, 14(7), 201; https://doi.org/10.3390/a14070201 - 30 Jun 2021
Cited by 52 | Viewed by 9130
Abstract
Early diagnosis is crucial to prevent the development of a disease that may cause danger to human lives. COVID-19, which is a contagious disease that has mutated into several variants, has become a global pandemic that demands to be diagnosed as soon as [...] Read more.
Early diagnosis is crucial to prevent the development of a disease that may cause danger to human lives. COVID-19, which is a contagious disease that has mutated into several variants, has become a global pandemic that demands to be diagnosed as soon as possible. With the use of technology, available information concerning COVID-19 increases each day, and extracting useful information from such massive data can be done through data mining. In this study, the authors utilized several supervised machine learning algorithms to build a model to analyze and predict the presence of COVID-19 using the COVID-19 Symptoms and Presence dataset from Kaggle. The J48 Decision Tree, Random Forest, Support Vector Machine, K-Nearest Neighbors and Naïve Bayes algorithms were applied through the WEKA machine learning software. Each model’s performance was evaluated using 10-fold cross validation and compared according to major accuracy measures, correctly or incorrectly classified instances, kappa, mean absolute error, and time taken to build the model. The results show that the Support Vector Machine using the Pearson VII universal kernel outperforms the other algorithms, attaining 98.81% accuracy and a mean absolute error of 0.012. Full article
Show Figures

Figure 1

12 pages, 994 KiB  
Article
Digital Twins in Solar Farms: An Approach through Time Series and Deep Learning
by Kamel Arafet and Rafael Berlanga
Algorithms 2021, 14(5), 156; https://doi.org/10.3390/a14050156 - 18 May 2021
Cited by 24 | Viewed by 5548
Abstract
The generation of electricity through renewable energy sources increases every day, with solar energy being one of the fastest-growing. The emergence of information technologies such as Digital Twins (DT) in the field of the Internet of Things and Industry 4.0 allows a substantial [...] Read more.
The generation of electricity through renewable energy sources increases every day, with solar energy being one of the fastest-growing. The emergence of information technologies such as Digital Twins (DT) in the field of the Internet of Things and Industry 4.0 allows a substantial development in automatic diagnostic systems. The objective of this work is to obtain the DT of a Photovoltaic Solar Farm (PVSF) with a deep-learning (DL) approach. To build such a DT, sensor-based time series are properly analyzed and processed. The resulting data are used to train a DL model (e.g., autoencoders) in order to detect anomalies of the physical system in its DT. The results show a reconstruction error of around 0.1, a recall score of 0.92, and an area under the curve (AUC) of 0.97. Therefore, this paper demonstrates that the DT can reproduce the behavior of the physical system as well as efficiently detect its anomalies. Full article
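The anomaly-detection core of such a digital twin can be sketched as an autoencoder trained on normal sensor windows, with the reconstruction error used as the anomaly score. The window length, architecture, and data below are illustrative assumptions, not the solar-farm model of the paper.

```python
# Minimal autoencoder sketch: high reconstruction error flags anomalous windows.
import torch
import torch.nn as nn

torch.manual_seed(0)
WINDOW = 24                                     # e.g., 24 hourly power readings

autoencoder = nn.Sequential(
    nn.Linear(WINDOW, 8), nn.ReLU(),            # encoder
    nn.Linear(8, WINDOW),                       # decoder
)
opt = torch.optim.Adam(autoencoder.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

# Train on smooth "normal" production curves.
t = torch.linspace(0, 1, WINDOW)
normal = torch.stack([torch.sin(3.14 * t) + 0.05 * torch.randn(WINDOW)
                      for _ in range(256)])
for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(autoencoder(normal), normal)
    loss.backward()
    opt.step()

# Score a normal window and a faulty one (sudden drop in output).
faulty = torch.sin(3.14 * t)
faulty[10:14] = 0.0
with torch.no_grad():
    for name, w in [("normal", normal[0]), ("faulty", faulty)]:
        err = loss_fn(autoencoder(w), w).item()
        print(name, "reconstruction error:", round(err, 4))
```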
(This article belongs to the Special Issue Algorithms and Applications of Time Series Analysis)
Show Figures

Figure 1

21 pages, 1182 KiB  
Article
Machine Learning Predicts Outcomes of Phase III Clinical Trials for Prostate Cancer
by Felix D. Beacher, Lilianne R. Mujica-Parodi, Shreyash Gupta and Leonardo A. Ancora
Algorithms 2021, 14(5), 147; https://doi.org/10.3390/a14050147 - 5 May 2021
Cited by 12 | Viewed by 8216
Abstract
The ability to predict the individual outcomes of clinical trials could support the development of tools for precision medicine and improve the efficiency of clinical-stage drug development. However, there are no published attempts to predict individual outcomes of clinical trials for cancer. We [...] Read more.
The ability to predict the individual outcomes of clinical trials could support the development of tools for precision medicine and improve the efficiency of clinical-stage drug development. However, there are no published attempts to predict individual outcomes of clinical trials for cancer. We used machine learning (ML) to predict individual responses to a two-year course of bicalutamide, a standard treatment for prostate cancer, based on data from three Phase III clinical trials (n = 3653). We developed models using a merged dataset from all three studies; the best-performing of these achieved an accuracy of 76%. The performance of these models was confirmed by further modeling using a merged dataset from two of the three studies, and a separate study for testing. Together, our results indicate the feasibility of ML-based tools for predicting cancer treatment outcomes, with implications for precision oncology and improving the efficiency of clinical-stage drug development. Full article
(This article belongs to the Special Issue Machine Learning in Healthcare and Biomedical Application)
Show Figures

Graphical abstract

17 pages, 503 KiB  
Article
Multiple Criteria Decision Making and Prospective Scenarios Model for Selection of Companies to Be Incubated
by Altina S. Oliveira, Carlos F. S. Gomes, Camilla T. Clarkson, Adriana M. Sanseverino, Mara R. S. Barcelos, Igor P. A. Costa and Marcos Santos
Algorithms 2021, 14(4), 111; https://doi.org/10.3390/a14040111 - 30 Mar 2021
Cited by 48 | Viewed by 4355
Abstract
This paper proposes a model to evaluate business projects applying to enter an incubator, allowing them to be ranked in order of selection priority. The model combines the Momentum method to build prospective scenarios and the AHP-TOPSIS-2N Multiple Criteria Decision Making (MCDM) method to [...] Read more.
This paper proposes a model to evaluate business projects applying to enter an incubator, allowing them to be ranked in order of selection priority. The model combines the Momentum method to build prospective scenarios and the AHP-TOPSIS-2N Multiple Criteria Decision Making (MCDM) method to rank the alternatives. Six business projects were evaluated for incubation. The Momentum method made it possible for us to create an initial core of criteria for the evaluation of incubation projects. The AHP-TOPSIS-2N method supported the decision to choose the company to be incubated by ranking the alternatives in order of relevance. Our evaluation model improves on the existing models used by incubators. This model can be used and/or adapted by any incubator to evaluate the business projects to be incubated. The set of criteria for the evaluation of incubation projects is original, and the use of prospective scenarios with an MCDM method to evaluate companies to be incubated has not previously appeared in the literature. Full article
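The TOPSIS ranking step at the heart of AHP-TOPSIS-2N can be sketched as below: alternatives are scored by their relative closeness to an ideal solution. The decision matrix and the criteria weights (which AHP would normally supply) are assumed values, and only a single normalisation is used, whereas the 2N variant applies two.

```python
# Minimal TOPSIS sketch for ranking candidate projects on benefit criteria.
import numpy as np

# Rows = candidate business projects, columns = evaluation criteria (all "benefit" type).
scores = np.array([
    [7.0, 9.0, 6.0],
    [8.0, 6.0, 7.0],
    [5.0, 8.0, 9.0],
])
weights = np.array([0.5, 0.3, 0.2])                    # assumed AHP-derived weights

norm = scores / np.linalg.norm(scores, axis=0)         # vector normalisation
weighted = norm * weights
ideal_best = weighted.max(axis=0)
ideal_worst = weighted.min(axis=0)

d_best = np.linalg.norm(weighted - ideal_best, axis=1)
d_worst = np.linalg.norm(weighted - ideal_worst, axis=1)
closeness = d_worst / (d_best + d_worst)               # relative closeness coefficient

for rank, idx in enumerate(np.argsort(-closeness), start=1):
    print(f"rank {rank}: project {idx}, closeness {closeness[idx]:.3f}")
```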
(This article belongs to the Special Issue Algorithms and Models for Dynamic Multiple Criteria Decision Making)
Show Figures

Figure 1

14 pages, 611 KiB  
Article
An Integrated Neural Network and SEIR Model to Predict COVID-19
by Sharif Noor Zisad, Mohammad Shahadat Hossain, Mohammed Sazzad Hossain and Karl Andersson
Algorithms 2021, 14(3), 94; https://doi.org/10.3390/a14030094 - 19 Mar 2021
Cited by 34 | Viewed by 6633
Abstract
A novel coronavirus (COVID-19), which has become a great concern for the world, was first identified in Wuhan city in China. Its rapid spread throughout the world has been accompanied by an alarming number of infected patients and a gradually increasing number of deaths. If [...] Read more.
A novel coronavirus (COVID-19), which has become a great concern for the world, was first identified in Wuhan city in China. Its rapid spread throughout the world has been accompanied by an alarming number of infected patients and a gradually increasing number of deaths. If the number of infected cases could be predicted in advance, it would contribute greatly to controlling the pandemic in any area. Therefore, this study introduces an integrated model for predicting the number of confirmed cases from the perspective of Bangladesh. Moreover, the number of quarantined patients and the change in the basic reproduction rate (the R0-value) can also be evaluated using this model. This integrated model combines the SEIR (Susceptible, Exposed, Infected, Removed) epidemiological model and neural networks. The model was trained using available data from 250 days. The accuracy of the prediction of confirmed cases is mostly between 90% and 99%. The performance of this integrated model was evaluated by showing the difference in accuracy between the integrated model and the general SEIR model. The results show that the integrated model is more accurate than the general SEIR model when predicting the number of confirmed cases in Bangladesh. Full article
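For reference, the SEIR compartmental dynamics underlying the integrated model can be sketched with a simple Euler integration as below. In the paper's hybrid approach a neural network informs the model; here the transmission, incubation, and recovery rates are fixed illustrative values.

```python
# Minimal SEIR sketch: dS = -beta*S*I/N, dE = beta*S*I/N - sigma*E,
# dI = sigma*E - gamma*I, dR = gamma*I, integrated with daily Euler steps.
N = 1_000_000                                 # population size (assumed)
S, E, I, R = N - 10, 0.0, 10.0, 0.0
beta, sigma, gamma = 0.35, 1 / 5.2, 1 / 10    # transmission, incubation, recovery rates
dt = 1.0                                      # one-day steps

history = []
for day in range(120):
    new_exposed = beta * S * I / N
    dS = -new_exposed
    dE = new_exposed - sigma * E
    dI = sigma * E - gamma * I
    dR = gamma * I
    S, E, I, R = S + dt * dS, E + dt * dE, I + dt * dI, R + dt * dR
    history.append(I)

print("peak active infections:", round(max(history)))
print("basic reproduction number R0 = beta/gamma =", round(beta / gamma, 2))
```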
Show Figures

Figure 1

12 pages, 393 KiB  
Article
UAV Formation Shape Control via Decentralized Markov Decision Processes
by Md Ali Azam, Hans D. Mittelmann and Shankarachary Ragi
Algorithms 2021, 14(3), 91; https://doi.org/10.3390/a14030091 - 17 Mar 2021
Cited by 27 | Viewed by 4674
Abstract
In this paper, we present a decentralized unmanned aerial vehicle (UAV) swarm formation control approach based on decision theory. Specifically, we pose the UAV swarm motion control problem as a decentralized Markov decision process (Dec-MDP). Here, the goal is to drive [...] Read more.
In this paper, we present a decentralized unmanned aerial vehicle (UAV) swarm formation control approach based on decision theory. Specifically, we pose the UAV swarm motion control problem as a decentralized Markov decision process (Dec-MDP). Here, the goal is to drive the UAV swarm from an initial geographical region to another geographical region where the swarm must form a three-dimensional shape (e.g., the surface of a sphere). As most decision-theoretic formulations suffer from the curse of dimensionality, we adapt an existing fast approximate dynamic programming method called nominal belief-state optimization (NBO) to approximately solve the formation control problem. We perform numerical studies in MATLAB to validate the performance of the above control algorithms. Full article
(This article belongs to the Special Issue Algorithms in Stochastic Models)
Show Figures

Figure 1

15 pages, 367 KiB  
Article
An Improved Greedy Heuristic for the Minimum Positive Influence Dominating Set Problem in Social Networks
by Salim Bouamama and Christian Blum
Algorithms 2021, 14(3), 79; https://doi.org/10.3390/a14030079 - 28 Feb 2021
Cited by 17 | Viewed by 6127
Abstract
This paper presents a performance comparison of greedy heuristics for a recent variant of the dominating set problem known as the minimum positive influence dominating set (MPIDS) problem. This APX-hard combinatorial optimization problem has applications in social networks. Its aim is to identify [...] Read more.
This paper presents a performance comparison of greedy heuristics for a recent variant of the dominating set problem known as the minimum positive influence dominating set (MPIDS) problem. This APX-hard combinatorial optimization problem has applications in social networks. Its aim is to identify a small subset of key influential individuals in order to facilitate the spread of positive influence in the whole network. In this paper, we focus on the development of a fast and effective greedy heuristic for the MPIDS problem, because greedy heuristics are an essential component of more sophisticated metaheuristics; the development of well-working greedy heuristics therefore supports the development of efficient metaheuristics. Extensive experiments conducted on a wide range of social networks and complex networks confirm the overall superiority of our greedy algorithm over its competitors, especially when the problem size becomes large. Moreover, we compare our algorithm with the integer linear programming solver CPLEX. While the performance of CPLEX is very strong for small and medium-sized networks, it reaches its limits when applied to the largest networks. However, even in the context of small and medium-sized networks, our greedy algorithm is only 2.53% worse than CPLEX. Full article
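A basic greedy heuristic for the MPIDS problem (not the improved algorithm proposed in the paper) can be sketched as below, using the usual requirement that every vertex ends up with at least half of its neighbours, rounded up, inside the chosen set. The toy graph is an illustrative assumption.

```python
# Minimal greedy MPIDS sketch: repeatedly add the vertex that reduces the most
# remaining "demand", i.e., helps the most vertices still lacking covered neighbours.
import math
from collections import defaultdict

edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 4), (3, 4), (3, 5), (4, 5)]
adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

nodes = sorted(adj)
need = {v: math.ceil(len(adj[v]) / 2) for v in nodes}   # neighbours required in S
covered = {v: 0 for v in nodes}                         # neighbours already in S
S = set()

def gain(u):
    # Adding u helps every neighbour of u that still lacks covered neighbours.
    return sum(1 for w in adj[u] if covered[w] < need[w])

while any(covered[v] < need[v] for v in nodes):
    best = max((u for u in nodes if u not in S), key=gain)
    S.add(best)
    for w in adj[best]:
        covered[w] += 1

print("greedy MPIDS:", sorted(S))
```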
(This article belongs to the Special Issue 2021 Selected Papers from Algorithms Editorial Board Members)
Show Figures

Graphical abstract

31 pages, 2431 KiB  
Article
Solution Merging in Matheuristics for Resource Constrained Job Scheduling
by Dhananjay Thiruvady, Christian Blum and Andreas T. Ernst
Algorithms 2020, 13(10), 256; https://doi.org/10.3390/a13100256 - 9 Oct 2020
Cited by 15 | Viewed by 3996
Abstract
Matheuristics have been gaining in popularity for solving combinatorial optimisation problems in recent years. This new class of hybrid methods combines elements of mathematical programming for intensification with metaheuristic search for diversification. A recent approach in this direction has been to build [...] Read more.
Matheuristics have been gaining in popularity for solving combinatorial optimisation problems in recent years. This new class of hybrid methods combines elements of mathematical programming for intensification with metaheuristic search for diversification. A recent approach in this direction has been to build a neighbourhood for integer programs by merging information from several heuristic solutions, namely construct, solve, merge and adapt (CMSA). In this study, we investigate this method alongside a closely related novel approach, merge search (MS). Both methods rely on a population of solutions, and for the purposes of this study, we examine two options: (a) a constructive heuristic and (b) ant colony optimisation (ACO); that is, a method based on learning. These methods are also implemented in a parallel framework using multi-core shared memory, which improves the overall efficiency. Using a resource constrained job scheduling problem as a test case, different aspects of the algorithms are investigated. We find that both methods, using ACO, are competitive with current state-of-the-art methods, outperforming them for a range of problems. Regarding MS and CMSA, the former seems more effective on medium-sized problems, whereas the latter performs better on large problems. Full article
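The CMSA loop described above can be sketched generically as below, here applied to a toy 0-1 knapsack rather than resource constrained job scheduling, with a brute-force search standing in for the MIP solver. The parameters and instance data are illustrative assumptions.

```python
# Minimal CMSA sketch: construct solutions, merge their components into a
# sub-instance, solve it exactly, and age out components that keep being unused.
import random
from itertools import combinations

random.seed(3)
values = [10, 7, 12, 4, 9, 6, 11, 5]
weights = [5, 3, 7, 2, 4, 3, 6, 2]
CAPACITY, MAX_AGE, N_CONSTRUCT = 12, 3, 4

def construct():
    """Greedy randomized construction: pick feasible items biased by value/weight ratio."""
    items, load = set(), 0
    for i in sorted(range(len(values)), key=lambda i: -values[i] / weights[i]):
        if load + weights[i] <= CAPACITY and random.random() < 0.7:
            items.add(i)
            load += weights[i]
    return items

def solve_restricted(components):
    """Exact solve over the merged components only (brute force stands in for a MIP)."""
    best, best_val = set(), 0
    comps = sorted(components)
    for r in range(len(comps) + 1):
        for subset in combinations(comps, r):
            if sum(weights[i] for i in subset) <= CAPACITY:
                val = sum(values[i] for i in subset)
                if val > best_val:
                    best, best_val = set(subset), val
    return best, best_val

age = {}                               # component -> age in the sub-instance
best_sol, best_val = set(), 0
for iteration in range(10):
    for _ in range(N_CONSTRUCT):       # construct & merge
        for i in construct():
            age.setdefault(i, 0)
    sol, val = solve_restricted(age)   # solve the restricted sub-instance
    if val > best_val:
        best_sol, best_val = sol, val
    for i in list(age):                # adapt: age unused components, reset used ones
        age[i] = 0 if i in sol else age[i] + 1
        if age[i] > MAX_AGE:
            del age[i]

print("best value:", best_val, "items:", sorted(best_sol))
```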
(This article belongs to the Special Issue Algorithms for Graphs and Networks)
Show Figures

Figure 1
