Algorithms, Volume 14, Issue 5 (May 2021) – 31 articles

Cover Story (view full-size image): The USV3 vehicle is designed to perform autonomous orbital and suborbital flight with enhanced flying capability and a conventional landing on a runway, and it must fit within already operational launchers such as VEGA-C. To meet the mission requirements, the vehicle is equipped with deployable wings. For such vehicles, total weight is a key requirement, and all components must therefore be designed for maximum structural efficiency. In the preliminary design phase, the best location for the deployable system’s hinges was investigated by means of optimization procedures based on multi-objective genetic algorithms and a parametric FE model. The results define a set of best designs that minimize the interface loads. View this paper.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF formats; PDF is the official version. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
18 pages, 1786 KiB  
Article
A New Cascade-Correlation Growing Deep Learning Neural Network Algorithm
by Soha Abd El-Moamen Mohamed, Marghany Hassan Mohamed and Mohammed F. Farghally
Algorithms 2021, 14(5), 158; https://doi.org/10.3390/a14050158 - 19 May 2021
Cited by 12 | Viewed by 3178
Abstract
In this paper, an algorithm that dynamically changes the neural network structure is presented. The structure is changed based on features of the cascade correlation algorithm. Cascade correlation is an important supervised learning algorithm and architecture in which the network is built incrementally during training. This process optimizes the architecture of the network, with the aim of accelerating the learning process and producing better generalization performance. Many researchers have to date proposed growing algorithms to optimize feedforward neural network architectures. The proposed algorithm has been tested on various medical data sets. The results show that the proposed algorithm offers better accuracy and flexibility than existing approaches. Full article
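As a rough illustration of the growing idea (a sketch, not code from the paper), the step at the heart of cascade correlation is candidate scoring: pick the hidden unit whose output covaries most with the network's remaining error. Here candidate training is replaced by simple random search over candidate weights; the data and residuals are made up.

```python
import numpy as np

def candidate_correlation(candidate_out, residuals):
    """Fahlman-style score: |covariance| between a candidate unit's
    output and the network's residual errors, summed over outputs."""
    v = candidate_out - candidate_out.mean()
    e = residuals - residuals.mean(axis=0)
    return np.abs(v @ e).sum()

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
residuals = X[:, :1] * X[:, 1:2]  # toy residual the network has not captured

# Score a pool of random candidate units and keep the best one (the
# cascade-correlation "candidate training" phase, here by random search).
best = max(
    (np.tanh(X @ rng.normal(size=3)) for _ in range(50)),
    key=lambda out: candidate_correlation(out, residuals),
)
print(best.shape)  # the winning unit's activations, one per sample
```

In the full algorithm, the winning candidate's input weights are frozen and it becomes a new hidden unit whose output feeds all later units.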
15 pages, 731 KiB  
Article
Parallel Delay Multiply and Sum Algorithm for Microwave Medical Imaging Using Spark Big Data Framework
by Rahmat Ullah and Tughrul Arslan
Algorithms 2021, 14(5), 157; https://doi.org/10.3390/a14050157 - 18 May 2021
Cited by 10 | Viewed by 3057
Abstract
Microwave imaging systems are currently being investigated for breast cancer, brain stroke and neurodegenerative disease detection due to their low cost, portable and wearable nature. At present, commonly used radar-based algorithms for microwave imaging are based on the delay and sum algorithm. These algorithms use ultra-wideband signals to reconstruct a 2D image of the targeted object or region. Delay multiply and sum is an extended version of the delay and sum algorithm. However, it is computationally expensive and time-consuming. In this paper, the delay multiply and sum algorithm is parallelised using a big data framework. The algorithm uses the Spark MapReduce programming model to improve its efficiency. The most computational part of the algorithm is pixel value calculation, where signals need to be multiplied in pairs and summed. The proposed algorithm broadcasts the input data and executes it in parallel in a distributed manner. The Spark-based parallel algorithm is compared with sequential and Python multiprocessing library implementation. The experimental results on both a standalone machine and a high-performance cluster show that Spark significantly accelerates the image reconstruction process without affecting its accuracy. Full article
(This article belongs to the Special Issue High-Performance Computing Algorithms and Their Applications 2021)
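For orientation (a simplified single-pixel sketch, not the paper's Spark implementation), the computation that makes delay multiply and sum expensive is the pairwise coupling of time-aligned channel samples, usually with a signed square root to keep the result in the signal's original dimensionality:

```python
import numpy as np
from itertools import combinations

def dmas_pixel(delayed_samples):
    """Delay-multiply-and-sum value for one pixel: multiply every pair of
    (already time-aligned) channel samples, apply the signed square root,
    and sum over all pairs."""
    total = 0.0
    for a, b in combinations(delayed_samples, 2):
        prod = a * b
        total += np.sign(prod) * np.sqrt(abs(prod))
    return total

print(dmas_pixel(np.array([1.0, 4.0, 9.0])))  # 2 + 3 + 6 = 11.0
```

The pair loop is quadratic in the number of channels and independent across pixels, which is why the paper can broadcast the input data and compute pixel values in parallel with MapReduce.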
12 pages, 994 KiB  
Article
Digital Twins in Solar Farms: An Approach through Time Series and Deep Learning
by Kamel Arafet and Rafael Berlanga
Algorithms 2021, 14(5), 156; https://doi.org/10.3390/a14050156 - 18 May 2021
Cited by 14 | Viewed by 4153
Abstract
The generation of electricity through renewable energy sources increases every day, with solar energy being one of the fastest-growing. The emergence of information technologies such as Digital Twins (DT) in the field of the Internet of Things and Industry 4.0 allows substantial development of automatic diagnostic systems. The objective of this work is to obtain the DT of a Photovoltaic Solar Farm (PVSF) with a deep-learning (DL) approach. To build such a DT, sensor-based time series are properly analyzed and processed. The resulting data are used to train a DL model (e.g., autoencoders) in order to detect anomalies of the physical system in its DT. Results show a reconstruction error around 0.1, a recall score of 0.92 and an Area Under Curve (AUC) of 0.97. Therefore, this paper demonstrates that the DT can reproduce the behavior of the physical system as well as efficiently detect its anomalies. Full article
(This article belongs to the Special Issue Algorithms and Applications of Time Series Analysis)
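The anomaly-detection step described above reduces to thresholding the autoencoder's reconstruction error; a minimal sketch, with made-up numbers standing in for a trained model's output:

```python
import numpy as np

def flag_anomalies(x, x_rec, threshold):
    """Mark samples whose mean squared reconstruction error exceeds
    the chosen threshold."""
    errors = np.mean((x - x_rec) ** 2, axis=1)
    return errors > threshold

x = np.array([[1.0, 2.0], [1.0, 2.0], [10.0, -5.0]])
x_rec = np.array([[1.1, 2.0], [0.9, 2.1], [1.0, 2.0]])  # pretend autoencoder output
print(flag_anomalies(x, x_rec, threshold=0.5))  # [False False  True]
```

The autoencoder reconstructs normal behavior well, so a sample it fails to reconstruct (large error) is flagged as anomalous; the threshold trades recall against false positives.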
17 pages, 2827 KiB  
Article
Energy-Efficient Power Allocation in Non-Linear Energy Harvesting Multiple Relay Systems
by Huifang Pan and Qi Zhu
Algorithms 2021, 14(5), 155; https://doi.org/10.3390/a14050155 - 17 May 2021
Cited by 2 | Viewed by 1798
Abstract
In this paper, to maximize the energy efficiency (EE) of a two-hop multi-relay cooperative decode-and-forward (DF) system with simultaneous wireless information and power transfer (SWIPT), an optimal power allocation algorithm is proposed, in which relay energy harvesting (EH) adopts a nonlinear model. Under constraints including energy causality, the minimum transmission quality of information and the total transmission power at the relays, an optimization problem is constructed to jointly optimize the transmit power and power-splitting (PS) ratios of multiple relays. Although this problem is a nonlinear fractional programming problem, an iterative algorithm is developed to obtain the optimal power allocation. In particular, the joint power allocation at multiple relays is first decoupled into single-relay power allocations, each of which is then solved by the Dinkelbach iteration algorithm; the resulting subproblem can be proven to be a convex programming problem. Closed-form solutions for the different segments of the piecewise-linear EH model are obtained using mathematical tools such as monotonicity, Lagrange multipliers, the KKT conditions and Cardano's formula. The simulation results show the superiority of the proposed power allocation algorithm in terms of EE. Full article
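The Dinkelbach iteration mentioned above solves a fractional program max f(x)/g(x) by repeatedly solving a parametrized subproblem max f(x) - q·g(x) and updating q. A generic sketch on a toy one-dimensional problem (not the paper's relay model; the inner solver is an assumption supplied by the caller):

```python
def dinkelbach(f, g, argmax_aux, q0=0.0, tol=1e-9, iters=100):
    """Dinkelbach's method for max f(x)/g(x) with g > 0: solve the
    auxiliary problem max_x f(x) - q*g(x), then set q to the achieved
    ratio; stop when the auxiliary optimum reaches zero."""
    q = q0
    for _ in range(iters):
        x = argmax_aux(q)            # inner (convex) subproblem solver
        val = f(x) - q * g(x)
        if abs(val) < tol:
            break
        q = f(x) / g(x)
    return x, q

# Toy problem: maximize (2x + 1) / (x + 2) over x in {0, 3}.
f = lambda x: 2 * x + 1
g = lambda x: x + 2
argmax_aux = lambda q: max([0.0, 3.0], key=lambda x: f(x) - q * g(x))
x_opt, ratio = dinkelbach(f, g, argmax_aux)
print(x_opt, round(ratio, 3))  # 3.0 1.4
```

In the paper's setting, the auxiliary problem is the convex single-relay power allocation, which is where the closed-form KKT solutions come in.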
22 pages, 6397 KiB  
Article
Accelerating In-Transit Co-Processing for Scientific Simulations Using Region-Based Data-Driven Analysis
by Marcus Walldén, Masao Okita, Fumihiko Ino, Dimitris Drikakis and Ioannis Kokkinakis
Algorithms 2021, 14(5), 154; https://doi.org/10.3390/a14050154 - 12 May 2021
Viewed by 1951
Abstract
Increasing processing capabilities and input/output constraints of supercomputers have increased the use of co-processing approaches, i.e., visualizing and analyzing data sets of simulations on the fly. We present a method that evaluates the importance of different regions of simulation data and a data-driven approach that uses the proposed method to accelerate in-transit co-processing of large-scale simulations. We use the importance metrics to simultaneously employ multiple compression methods on different data regions to accelerate the in-transit co-processing. Our approach strives to adaptively compress data on the fly and uses load balancing to counteract memory imbalances. We demonstrate the method’s efficiency through a fluid mechanics application, a Richtmyer–Meshkov instability simulation, showing how to accelerate the in-transit co-processing of simulations. The results show that the proposed method can expeditiously identify regions of interest, even when using multiple metrics. Our approach achieved a speedup of 1.29× in a lossless scenario. The data decompression time was sped up by 2× compared to using a single compression method uniformly. Full article
(This article belongs to the Collection Parallel and Distributed Computing: Algorithms and Applications)
13 pages, 424 KiB  
Article
A Set-Theoretic Approach to Modeling Network Structure
by John L. Pfaltz
Algorithms 2021, 14(5), 153; https://doi.org/10.3390/a14050153 - 11 May 2021
Cited by 1 | Viewed by 2077
Abstract
Three computer algorithms are presented. One reduces a network N to its interior, I. Another counts all the triangles in a network, and the last randomly generates networks similar to N given just its interior I. However, these algorithms are not the usual numeric programs that manipulate a matrix representation of the network; they are set-based. Union and meet are essential binary operators; contained_in is the basic relational comparator. The interior I is shown to have desirable formal properties and to provide an effective way of revealing “communities” in social networks. A series of networks randomly generated from I is compared with the original network, N. Full article
(This article belongs to the Special Issue Network Science: Algorithms and Applications)
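As an illustration of the set-based style the abstract describes (an illustrative sketch, not the paper's code), triangle counting can be done purely with set intersections over adjacency sets, with no matrix representation:

```python
def count_triangles(adj):
    """Count triangles using set intersections: for each edge (u, v),
    every common neighbour of u and v closes a triangle.  Each triangle
    is found once per edge, i.e., three times in total."""
    found = 0
    for u, nbrs in adj.items():
        for v in nbrs:
            if u < v:                        # visit each undirected edge once
                found += len(adj[u] & adj[v])
    return found // 3

# Triangle {1, 2, 3} with a pendant vertex 4 attached to 3.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
print(count_triangles(adj))  # 1
```

Union, intersection, and containment on neighbour sets are exactly the operators the paper treats as primitive.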
23 pages, 677 KiB  
Review
Predicting the Evolution of Syntenies—An Algorithmic Review
by Nadia El-Mabrouk
Algorithms 2021, 14(5), 152; https://doi.org/10.3390/a14050152 - 11 May 2021
Cited by 2 | Viewed by 2429
Abstract
Syntenies are genomic segments of consecutive genes identified by a certain conservation in gene content and order. The notion of conservation may vary from one definition to another, the more constrained requiring identical gene contents and gene orders, while more relaxed definitions just require a certain similarity in gene content, and not necessarily in the same order. Regardless of the way they are identified, the goal is to characterize homologous genomic regions, i.e., regions deriving from a common ancestral region, reflecting a certain gene co-evolution that can enlighten important functional properties. In addition to identifying syntenies, it is also necessary to infer the evolutionary history that has led from the ancestral segment to the extant ones. In this field, most algorithmic studies address the problem of inferring rearrangement scenarios explaining the disruption in gene order between segments with the same gene content, some of them extending the evolutionary model to gene insertion and deletion. However, syntenies also evolve through other events modifying their content in genes, such as duplications, losses or horizontal gene transfers, i.e., the movement of genes from one species to another. Although the reconciliation approach between a gene tree and a species tree addresses the problem of inferring such events for single-gene families, little effort has been dedicated to the generalization to segmental events and to syntenies. This paper reviews some of the main algorithmic methods for inferring ancestral syntenies and focuses on those integrating both gene orders and gene trees. Full article
(This article belongs to the Special Issue Algorithms in Computational Biology)
12 pages, 293 KiB  
Article
The Traffic Grooming Problem in Optical Networks with Respect to ADMs and OADMs: Complexity and Approximation
by Michele Flammini, Gianpiero Monaco, Luca Moscardelli, Mordechai Shalom and Shmuel Zaks
Algorithms 2021, 14(5), 151; https://doi.org/10.3390/a14050151 - 11 May 2021
Cited by 2 | Viewed by 1885
Abstract
All-optical networks transmit messages along lightpaths in which the signal is transmitted using the same wavelength in all the relevant links. We consider the problem of switching cost minimization in these networks. Specifically, the input to the problem under consideration is an optical network modeled by a graph G, a set of lightpaths modeled by paths on G, and an integer g termed the grooming factor. One has to assign a wavelength (modeled by a color) to every lightpath, so that every edge of the graph is used by at most g paths of the same color. A lightpath operating at some wavelength λ uses one Add/Drop multiplexer (ADM) at both endpoints and one Optical Add/Drop multiplexer (OADM) at every intermediate node, all operating at a wavelength of λ. Two lightpaths, both operating at the same wavelength λ, share the ADMs and OADMs in their common nodes. Therefore, the total switching cost due to the usage of ADMs and OADMs depends on the wavelength assignment. We consider networks of ring and path topology and a cost function that is a convex combination α·|OADMs| + (1 − α)·|ADMs| of the number of OADMs and the number of ADMs deployed in the network. We showed that the problem of minimizing this cost function is NP-complete for every convex combination, even in a path topology network with g = 2. On the positive side, we present a polynomial-time approximation algorithm for the problem. Full article
(This article belongs to the Special Issue Graph Algorithms and Network Dynamics)
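A minimal sketch of the cost function for a path network (illustrative only, with hypothetical integer node numbering; not the authors' formulation): devices are shared per (node, wavelength) pair, so the cost depends on the coloring.

```python
def switching_cost(lightpaths, colors, alpha):
    """Cost alpha*|OADMs| + (1 - alpha)*|ADMs| on a path network whose
    nodes are 0..n.  Each lightpath (a, b) uses an ADM at its endpoints
    and an OADM at every intermediate node; devices at the same node
    operating at the same wavelength (color) are shared."""
    adms, oadms = set(), set()
    for (a, b), c in zip(lightpaths, colors):
        adms.update({(a, c), (b, c)})
        oadms.update((n, c) for n in range(min(a, b) + 1, max(a, b)))
    return alpha * len(oadms) + (1 - alpha) * len(adms)

# Two lightpaths on one wavelength share the ADM at node 2:
print(switching_cost([(0, 2), (2, 4)], colors=[0, 0], alpha=0.5))
# ADMs {0, 2, 4}, OADMs {1, 3} -> 0.5*2 + 0.5*3 = 2.5
```

Giving the two lightpaths different colors loses the shared ADM at node 2 and raises the cost to 3.0, which is exactly the trade-off the wavelength assignment must optimize.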
3 pages, 179 KiB  
Editorial
Special Issue on “Graph Algorithms and Applications”
by Serafino Cicerone and Gabriele Di Stefano
Algorithms 2021, 14(5), 150; https://doi.org/10.3390/a14050150 - 10 May 2021
Viewed by 1832
Abstract
Much of the data encountered in real life exhibits inherent structure or connectivity. Typical examples include biological data, communication network data, image data, etc. Graphs provide a natural way to represent and analyze these types of data and their relationships. For instance, more recently, graphs have found new applications in solving problems for emerging research fields such as social network analysis, design of robust computer network topologies, frequency allocation in wireless networks, and bioinformatics. Unfortunately, the related algorithms usually suffer from high computational complexity, since some of these problems are NP-hard. Therefore, in recent years, many graph models and optimization algorithms have been proposed to achieve a better balance between efficacy and efficiency. The aim of this Special Issue is to provide an opportunity for researchers and engineers from both academia and the industry to publish their latest and original results on graph models, algorithms, and applications to problems in the real world, with a focus on optimization and computational complexity. Full article
(This article belongs to the Special Issue Graph Algorithms and Applications)
19 pages, 590 KiB  
Article
Query Rewriting for Incremental Continuous Query Evaluation in HIFUN
by Petros Zervoudakis, Haridimos Kondylakis, Nicolas Spyratos and Dimitris Plexousakis
Algorithms 2021, 14(5), 149; https://doi.org/10.3390/a14050149 - 08 May 2021
Cited by 1 | Viewed by 2112
Abstract
HIFUN is a high-level query language for expressing analytic queries of big datasets, offering a clear separation between the conceptual layer, where analytic queries are defined independently of the nature and location of data, and the physical layer, where queries are evaluated. In this paper, we present a methodology based on the HIFUN language, and the corresponding algorithms for the incremental evaluation of continuous queries. In essence, our approach is able to process the most recent data batch by exploiting already computed information, without requiring the evaluation of the query over the complete dataset. We present the generic algorithm which we translated to both SQL and MapReduce using SPARK; it implements various query rewriting methods. We demonstrate the effectiveness of our approach in terms of query answering efficiency. Finally, we show that by exploiting the formal query rewriting methods of HIFUN, we can further reduce the computational cost, adding another layer of query optimization to our implementation. Full article
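The incremental idea, folding the newest batch into previously computed partial results instead of re-evaluating over the full dataset, can be sketched for a simple group-by-sum query (the HIFUN-specific query rewriting is omitted; this is an illustrative assumption about the kind of query involved):

```python
def merge_batch(state, batch):
    """Incrementally evaluate a group-by-sum query: fold the newest
    batch of (key, value) records into previously computed partial
    aggregates, so no earlier data needs to be re-scanned."""
    for key, value in batch:
        state[key] = state.get(key, 0) + value
    return state

state = merge_batch({}, [("a", 3), ("b", 1)])      # first batch
state = merge_batch(state, [("a", 2), ("c", 5)])   # next batch reuses state
print(state)  # {'a': 5, 'b': 1, 'c': 5}
```

This works because sum (like count, min, and max) is a decomposable aggregate: the result over the whole stream equals the merge of per-batch results, which is what makes continuous evaluation cheap.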
25 pages, 562 KiB  
Article
Disjoint Tree Mergers for Large-Scale Maximum Likelihood Tree Estimation
by Minhyuk Park, Paul Zaharias and Tandy Warnow
Algorithms 2021, 14(5), 148; https://doi.org/10.3390/a14050148 - 07 May 2021
Cited by 5 | Viewed by 3077
Abstract
The estimation of phylogenetic trees for individual genes or multi-locus datasets is a basic part of considerable biological research. In order to enable large trees to be computed, Disjoint Tree Mergers (DTMs) have been developed; these methods operate by dividing the input sequence dataset into disjoint sets, constructing trees on each subset, and then combining the subset trees (using auxiliary information) into a tree on the full dataset. DTMs have been used to advantage for multi-locus species tree estimation, enabling highly accurate species trees at reduced computational effort, compared to leading species tree estimation methods. Here, we evaluate the feasibility of using DTMs to improve the scalability of maximum likelihood (ML) gene tree estimation to large numbers of input sequences. Our study shows distinct differences between the three selected ML codes—RAxML-NG, IQ-TREE 2, and FastTree 2—and shows that good DTM pipeline design can provide advantages over these ML codes on large datasets. Full article
(This article belongs to the Special Issue Algorithms in Computational Biology)
21 pages, 1182 KiB  
Article
Machine Learning Predicts Outcomes of Phase III Clinical Trials for Prostate Cancer
by Felix D. Beacher, Lilianne R. Mujica-Parodi, Shreyash Gupta and Leonardo A. Ancora
Algorithms 2021, 14(5), 147; https://doi.org/10.3390/a14050147 - 05 May 2021
Cited by 10 | Viewed by 6122
Abstract
The ability to predict the individual outcomes of clinical trials could support the development of tools for precision medicine and improve the efficiency of clinical-stage drug development. However, there are no published attempts to predict individual outcomes of clinical trials for cancer. We used machine learning (ML) to predict individual responses to a two-year course of bicalutamide, a standard treatment for prostate cancer, based on data from three Phase III clinical trials (n = 3653). The best-performing models trained on the merged dataset from all three studies had an accuracy of 76%. Their performance was confirmed by further modeling using a merged dataset from two of the three studies for training and the remaining study for testing. Together, our results indicate the feasibility of ML-based tools for predicting cancer treatment outcomes, with implications for precision oncology and improving the efficiency of clinical-stage drug development. Full article
(This article belongs to the Special Issue Machine Learning in Healthcare and Biomedical Application)
19 pages, 4925 KiB  
Article
Investigation of Improved Cooperative Coevolution for Large-Scale Global Optimization Problems
by Aleksei Vakhnin and Evgenii Sopov
Algorithms 2021, 14(5), 146; https://doi.org/10.3390/a14050146 - 30 Apr 2021
Cited by 10 | Viewed by 2832
Abstract
Modern real-valued optimization problems are complex and high-dimensional, and they are known as “large-scale global optimization (LSGO)” problems. Classic evolutionary algorithms (EAs) perform poorly on this class of problems because of the curse of dimensionality. Cooperative Coevolution (CC) is a high-performing framework for decomposing large-scale problems into smaller and easier subproblems by grouping objective variables. The efficiency of CC strongly depends on the size of the groups and the grouping approach. In this study, an improved CC (iCC) approach for solving LSGO problems has been proposed and investigated. iCC changes the number of variables in subcomponents dynamically during the optimization process. The SHADE algorithm is used as a subcomponent optimizer. We have investigated the performance of iCC-SHADE and CC-SHADE on fifteen problems from the LSGO CEC’13 benchmark set provided by the IEEE Congress on Evolutionary Computation. The results of numerical experiments have shown that iCC-SHADE, on average, outperforms CC-SHADE with a fixed number of subcomponents. Also, we have compared iCC-SHADE with some state-of-the-art LSGO metaheuristics. The experimental results have shown that the proposed algorithm is competitive with other efficient metaheuristics. Full article
(This article belongs to the Special Issue Mathematical Models and Their Applications II)
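A skeleton of the cooperative coevolution loop itself (illustrative only: naive random search stands in for the SHADE subcomponent optimizer, the group size is fixed rather than adapted as in iCC, and the sphere test function is an assumption):

```python
import random

def cooperative_coevolution(cost, dim, group_size, rounds=30, seed=0):
    """Cooperative coevolution skeleton: split the decision variables
    into groups and improve one group at a time while the remaining
    variables stay frozen at their current values."""
    rng = random.Random(seed)
    x = [rng.uniform(-5, 5) for _ in range(dim)]
    groups = [range(i, min(i + group_size, dim))
              for i in range(0, dim, group_size)]
    for _ in range(rounds):
        for group in groups:
            for _ in range(20):              # subcomponent search budget
                trial = x[:]
                for i in group:              # perturb only this group
                    trial[i] = x[i] + rng.gauss(0, 0.5)
                if cost(trial) < cost(x):    # greedy acceptance
                    x = trial
    return x

sphere = lambda v: sum(t * t for t in v)     # separable test function
x_best = cooperative_coevolution(sphere, dim=10, group_size=2)
print(round(sphere(x_best), 2))
```

The decomposition works best when interacting variables land in the same group, which is why the grouping strategy (and, in iCC, the dynamically changing group size) matters so much.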
13 pages, 807 KiB  
Article
No-Wait Job Shop Scheduling Using a Population-Based Iterated Greedy Algorithm
by Mingming Xu, Shuning Zhang and Guanlong Deng
Algorithms 2021, 14(5), 145; https://doi.org/10.3390/a14050145 - 30 Apr 2021
Cited by 4 | Viewed by 2233
Abstract
When the no-wait constraint holds in job shops, a job has to be processed with no waiting time from the first to the last operation, and the start time of a job is greatly restricted. Using key elements of the iterated greedy algorithm, this paper proposes a population-based iterated greedy (PBIG) algorithm for finding high-quality schedules in no-wait job shops. Firstly, the Nawaz–Enscore–Ham (NEH) heuristic used for flow shops is extended to no-wait job shops, and an initialization scheme based on the NEH heuristic is developed to generate start solutions with a certain quality and diversity. Secondly, the iterated greedy procedure is introduced based on the destruction and construction perturbator and the insert-based local search. Furthermore, a population-based co-evolutionary scheme is presented by applying the iterated greedy procedure in parallel and hybridizing both the left timetabling and inverse left timetabling methods. Computational results based on well-known benchmark instances show that the proposed algorithm outperforms two existing metaheuristics by a significant margin. Full article
(This article belongs to the Section Combinatorial Optimization, Graph, and Network Algorithms)
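A generic skeleton of the destruction and construction loop at the core of iterated greedy (illustrative only: the paper's NEH initialization, timetabling methods, local search, and population scheme are omitted, and the toy cost function is an assumption):

```python
import random

def iterated_greedy(seq, cost, d=2, iters=200, seed=1):
    """Iterated greedy skeleton: remove d random jobs (destruction),
    greedily re-insert each at its best position (construction), and
    keep the new permutation when it improves the cost."""
    rng = random.Random(seed)
    best = seq[:]
    for _ in range(iters):
        partial = best[:]
        removed = [partial.pop(rng.randrange(len(partial)))
                   for _ in range(d)]
        for job in removed:
            pos = min(range(len(partial) + 1),
                      key=lambda i: cost(partial[:i] + [job] + partial[i:]))
            partial.insert(pos, job)
        if cost(partial) < cost(best):
            best = partial
    return best

# Toy cost: number of adjacent out-of-order pairs, so sorted order is optimal.
cost = lambda s: sum(a > b for a, b in zip(s, s[1:]))
best_seq = iterated_greedy([3, 1, 4, 2, 5], cost)
print(best_seq, cost(best_seq))
```

The population-based variant runs several such loops in parallel and lets them exchange solutions, which is the co-evolutionary scheme the abstract describes.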
19 pages, 1736 KiB  
Article
Boundary Loss-Based 2.5D Fully Convolutional Neural Networks Approach for Segmentation: A Case Study of the Liver and Tumor on Computed Tomography
by Yuexing Han, Xiaolong Li, Bing Wang and Lu Wang
Algorithms 2021, 14(5), 144; https://doi.org/10.3390/a14050144 - 30 Apr 2021
Cited by 12 | Viewed by 3303
Abstract
Image segmentation plays an important role in the field of image processing, helping to understand images and recognize objects. However, most existing methods are often unable to effectively explore the spatial information in 3D image segmentation, and they neglect the information from the contours and boundaries of the observed objects. In addition, shape boundaries can help to locate the positions of the observed objects, but most of the existing loss functions neglect the information from the boundaries. To overcome these shortcomings, this paper presents a new cascaded 2.5D fully convolutional networks (FCNs) learning framework to segment 3D medical images. A new boundary loss that incorporates distance, area, and boundary information is also proposed for the cascaded FCNs to learn more boundary and contour features from the 3D medical images. Moreover, an effective post-processing method is developed to further improve the segmentation accuracy. We verified the proposed method on the LITS and 3DIRCADb datasets, which include the liver and tumors. The experimental results show that the proposed method performs better than existing methods, with a Dice Per Case score of 74.5% for tumor segmentation, indicating its effectiveness. Full article
16 pages, 645 KiB  
Article
Overrelaxed Sinkhorn–Knopp Algorithm for Regularized Optimal Transport
by Alexis Thibault, Lénaïc Chizat, Charles Dossal and Nicolas Papadakis
Algorithms 2021, 14(5), 143; https://doi.org/10.3390/a14050143 - 30 Apr 2021
Cited by 7 | Viewed by 3859
Abstract
This article describes a set of methods for quickly computing the solution to the regularized optimal transport problem. It generalizes and improves upon the widely used iterative Bregman projections algorithm (or Sinkhorn–Knopp algorithm). We first proposed to rely on regularized nonlinear acceleration schemes. In practice, such approaches lead to fast algorithms, but their global convergence is not ensured. Hence, we next proposed a new algorithm with convergence guarantees. The idea is to overrelax the Bregman projection operators, allowing for faster convergence. We proposed a simple method for establishing global convergence by ensuring the decrease of a Lyapunov function at each step. An adaptive choice of the overrelaxation parameter based on the Lyapunov function was constructed. We also suggested a heuristic to choose a suitable asymptotic overrelaxation parameter, based on a local convergence analysis. Our numerical experiments showed a gain in convergence speed by an order of magnitude in certain regimes. Full article
(This article belongs to the Special Issue Optimal Transport: Algorithms and Applications)
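The overrelaxation idea can be sketched in a few lines of NumPy (a simplified illustration with a fixed overrelaxation parameter; the paper's adaptive, Lyapunov-guarded choice of the parameter is omitted):

```python
import numpy as np

def overrelaxed_sinkhorn(a, b, C, eps=0.1, omega=1.5, iters=500):
    """Sinkhorn scaling iterations with overrelaxation: each update is
    u <- u**(1 - omega) * (a / (K v))**omega, and likewise for v.
    omega = 1 recovers plain Sinkhorn; 1 < omega < 2 typically
    accelerates convergence."""
    K = np.exp(-C / eps)                 # Gibbs kernel
    u = np.ones_like(a)
    v = np.ones_like(b)
    for _ in range(iters):
        u = u ** (1 - omega) * (a / (K @ v)) ** omega
        v = v ** (1 - omega) * (b / (K.T @ u)) ** omega
    return u[:, None] * K * v[None, :]   # transport plan

a = np.array([0.5, 0.5])
b = np.array([0.3, 0.7])
C = np.array([[0.0, 1.0], [1.0, 0.0]])
P = overrelaxed_sinkhorn(a, b, C)
print(P.sum(axis=1), P.sum(axis=0))  # marginals approach a and b
```

Each plain Sinkhorn update is a Bregman projection onto one marginal constraint; overrelaxing pushes past the projection point, which is why convergence needs the safeguards the paper constructs.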
12 pages, 1342 KiB  
Article
Interval Extended Kalman Filter—Application to Underwater Localization and Control
by Morgan Louédec and Luc Jaulin
Algorithms 2021, 14(5), 142; https://doi.org/10.3390/a14050142 - 29 Apr 2021
Cited by 6 | Viewed by 2360
Abstract
The extended Kalman filter has been shown to be a precise method for nonlinear state estimation and is the de facto standard in navigation systems. However, if the initial estimated state is far from the true one, the filter may diverge, mainly due to an inconsistent linearization. Moreover, interval filters guarantee a robust and reliable, yet imprecise and discontinuous, localization. This paper proposes to choose a point estimated by an interval method as the linearization point of the extended Kalman filter. We will show that this combination allows us to achieve a higher level of integrity for the extended Kalman filter. Full article
(This article belongs to the Special Issue Algorithms for Reliable Estimation, Identification and Control II)
17 pages, 6765 KiB  
Article
Design Optimization of Interfacing Attachments for the Deployable Wing of an Unmanned Re-Entry Vehicle
by Francesco Di Caprio, Roberto Scigliano, Roberto Fauci and Domenico Tescione
Algorithms 2021, 14(5), 141; https://doi.org/10.3390/a14050141 - 28 Apr 2021
Cited by 3 | Viewed by 2204
Abstract
Re-entry winged body vehicles have several advantages with respect to capsules, such as maneuverability and the opportunity for a controlled landing. On the other hand, they show an increase in design complexity, especially from an aerodynamic, aero-thermodynamic, and structural point of view, and in the difficulty of [...] Read more.
Re-entry winged body vehicles have several advantages with respect to capsules, such as maneuverability and the opportunity for a controlled landing. On the other hand, they show an increase in design complexity, especially from an aerodynamic, aero-thermodynamic, and structural point of view, and in the difficulty of housing them in existing operative launchers. In this framework, the idea of designing unmanned vehicles equipped with deployable wings for suborbital flight was born. This work details a preliminary study for identifying the best configuration of the hinge system aimed at the in-orbit deployment of an unmanned re-entry vehicle's wings. In particular, the adopted optimization methodology is described. The approach uses a genetic algorithm available in commercial software in conjunction with fully parametric models created in an FE environment and, in particular, can optimize the hinge position considering both the deployed and folded configurations. The results identify the best hinge configuration that minimizes interface loads, thus realizing a lighter and more efficient deployment system. Indeed, for such a category of vehicle, it is mandatory to reduce the structural mass as much as possible in order to increase the payload and reduce service costs. Full article
(This article belongs to the Special Issue Algorithms and Models for Dynamic Multiple Criteria Decision Making)

26 pages, 2275 KiB  
Article
PROMETHEE-SAPEVO-M1 a Hybrid Approach Based on Ordinal and Cardinal Inputs: Multi-Criteria Evaluation of Helicopters to Support Brazilian Navy Operations
by Miguel Ângelo Lellis Moreira, Igor Pinheiro de Araújo Costa, Maria Teresa Pereira, Marcos dos Santos, Carlos Francisco Simões Gomes and Fernando Martins Muradas
Algorithms 2021, 14(5), 140; https://doi.org/10.3390/a14050140 - 27 Apr 2021
Cited by 40 | Viewed by 3243
Abstract
This paper presents a new approach based on Multi-Criteria Decision Analysis (MCDA), named PROMETHEE-SAPEVO-M1, through its implementation and feasibility in the decision-making process regarding the evaluation of attack helicopters of the Brazilian Navy. The proposed methodology aims to present an integration [...] Read more.
This paper presents a new approach based on Multi-Criteria Decision Analysis (MCDA), named PROMETHEE-SAPEVO-M1, through its implementation and feasibility in the decision-making process regarding the evaluation of attack helicopters of the Brazilian Navy. The proposed methodology integrates ordinal evaluation into the cardinal procedure of the PROMETHEE method, making it possible to handle qualitative and quantitative data and to generate the criteria weights by pairwise evaluation, transparently. The modeling provides three models of preference analysis (partial, complete, and outranking by intervals), along with an intra-criterion analysis by veto threshold, enabling the analysis of the performance of an alternative in a specific criterion. As a demonstration of the application, a case study is carried out on the PROMETHEE-SAPEVO-M1 web platform, addressing a strategic analysis of attack helicopters to be acquired by the Brazilian Navy, given the need to evaluate multiple specifications with different levels of importance within the problem context. The modeling implementation in the case study is described in detail, first evaluating the alternatives in each criterion and then presenting the results of the three models of preference analysis, along with the intra-criterion analysis and a rank reversal procedure. A comparative analysis with the PROMETHEE method is also carried out, exploring the main features of PROMETHEE-SAPEVO-M1. Moreover, a discussion section is presented, exposing the main points of the proposal. Therefore, this paper provides a valuable contribution to academia and society, since it represents the application of a state-of-the-art MCDA method, contributing to the resolution of diverse real decision-making problems. Full article
(This article belongs to the Special Issue Algorithms and Models for Dynamic Multiple Criteria Decision Making)

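For readers unfamiliar with the PROMETHEE backbone onto which the SAPEVO-M1 ordinal weighting is grafted, the net outranking flow of PROMETHEE II can be sketched as follows. This uses the simple "usual" preference function; the hybrid method's ordinal weight elicitation, veto thresholds, and interval outranking are not reproduced here:

```python
def promethee_net_flows(scores, weights):
    """PROMETHEE II net outranking flows with the 'usual' preference
    function (preference 1 if strictly better, else 0).

    scores[i][k]: performance of alternative i on criterion k (higher is
    better); weights: criterion weights summing to 1.
    """
    n = len(scores)
    flows = []
    for i in range(n):
        phi = 0.0
        for j in range(n):
            if i == j:
                continue
            # aggregated preference of i over j, and of j over i
            pij = sum(w for si, sj, w in zip(scores[i], scores[j], weights) if si > sj)
            pji = sum(w for si, sj, w in zip(scores[i], scores[j], weights) if sj > si)
            phi += pij - pji
        flows.append(phi / (n - 1))
    return flows
```

Ranking the alternatives by decreasing net flow gives the complete preorder; the partial and interval-based analyses described in the abstract refine this picture.
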
15 pages, 1400 KiB  
Article
Diagnosing Schizophrenia Using Effective Connectivity of Resting-State EEG Data
by Claudio Ciprian, Kirill Masychev, Maryam Ravan, Akshaya Manimaran and AnkitaAmol Deshmukh
Algorithms 2021, 14(5), 139; https://doi.org/10.3390/a14050139 - 27 Apr 2021
Cited by 11 | Viewed by 3460
Abstract
Schizophrenia is a serious mental illness associated with neurobiological deficits. Even though the brain activities during tasks (i.e., P300 activities) are considered biomarkers to diagnose schizophrenia, brain activities at rest have the potential to show an inherent dysfunctionality in schizophrenia and can [...] Read more.
Schizophrenia is a serious mental illness associated with neurobiological deficits. Even though the brain activities during tasks (i.e., P300 activities) are considered biomarkers to diagnose schizophrenia, brain activities at rest have the potential to show an inherent dysfunctionality in schizophrenia and can be used to understand the cognitive deficits in these patients. In this study, we developed a machine learning algorithm (MLA) based on eyes-closed resting-state electroencephalogram (EEG) datasets, which record the neural activity in the absence of any tasks or external stimuli given to the subjects, aiming to distinguish schizophrenic patients (SCZs) from healthy controls (HCs). The MLA has two steps. In the first step, symbolic transfer entropy (STE), which is a measure of effective connectivity, is applied to resting-state EEG data. In the second step, the MLA uses the STE matrix to find a set of features that can successfully discriminate SCZ from HC. From the results, we found that the MLA could achieve a total accuracy of 96.92%, with a sensitivity of 95%, a specificity of 98.57%, precision of 98.33%, F1-score of 0.97, and Matthews correlation coefficient (MCC) of 0.94 using only 10 out of 1900 STE features, which implies that the STE matrix extracted from resting-state EEG data may be a promising tool for the clinical diagnosis of schizophrenia. Full article
(This article belongs to the Special Issue Machine Learning Algorithms for Biomedical Signal Processing)

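The first step of the pipeline — symbolic transfer entropy between two channels — can be sketched as follows. This is a generic ordinal-pattern STE under assumed conventions (embedding dimension m, unit delays); the paper's exact embedding parameters and the downstream feature selection and classifier are not reproduced:

```python
import math
from collections import Counter
from itertools import permutations

def symbolize(x, m=3):
    """Map a time series to ordinal-pattern symbols of embedding dimension m."""
    pats = {p: k for k, p in enumerate(permutations(range(m)))}
    return [pats[tuple(sorted(range(m), key=lambda i: x[t + i]))]
            for t in range(len(x) - m + 1)]

def symbolic_transfer_entropy(src, dst, m=3):
    """STE(src -> dst) = sum over triples of p(d', d, s) * log[p(d'|d,s) / p(d'|d)],
    where d', d are successive symbols of dst and s is the symbol of src."""
    s, d = symbolize(src, m), symbolize(dst, m)
    n = len(d) - 1
    triples = Counter(zip(d[1:], d[:-1], s[:-1]))
    dd = Counter(zip(d[1:], d[:-1]))
    ds = Counter(zip(d[:-1], s[:-1]))
    singles = Counter(d[:-1])
    te = 0.0
    for (d1, d0, s0), c in triples.items():
        p_full = c / ds[(d0, s0)]            # p(d'|d, s)
        p_marg = dd[(d1, d0)] / singles[d0]  # p(d'|d)
        te += (c / n) * math.log(p_full / p_marg)
    return te
```

Computing this quantity for every ordered pair of EEG channels yields the STE connectivity matrix that the classifier then mines for discriminative entries.
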
2 pages, 156 KiB  
Editorial
Editorial for the Special Issue on “Bayesian Networks: Inference Algorithms, Applications, and Software Tools”
by Daniele Codetta-Raiteri
Algorithms 2021, 14(5), 138; https://doi.org/10.3390/a14050138 - 27 Apr 2021
Cited by 3 | Viewed by 1702
Abstract
In the field of Artificial Intelligence, Bayesian Networks (BN) [...] Full article
13 pages, 366 KiB  
Article
KDAS-ReID: Architecture Search for Person Re-Identification via Distilled Knowledge with Dynamic Temperature
by Zhou Lei, Kangkang Yang, Kai Jiang and Shengbo Chen
Algorithms 2021, 14(5), 137; https://doi.org/10.3390/a14050137 - 26 Apr 2021
Viewed by 2081
Abstract
Person re-identification (Re-ID) based on deep convolutional neural networks (CNNs) achieves remarkable success with its fast speed. However, prevailing Re-ID models are usually built upon backbones manually designed for classification. In order to automatically design an effective Re-ID architecture, we propose a pedestrian [...] Read more.
Person re-identification (Re-ID) based on deep convolutional neural networks (CNNs) achieves remarkable success with its fast speed. However, prevailing Re-ID models are usually built upon backbones manually designed for classification. In order to automatically design an effective Re-ID architecture, we propose a pedestrian re-identification algorithm based on knowledge distillation, called KDAS-ReID. When the knowledge of the teacher model is transferred to the student model, the importance of knowledge in the teacher model gradually decreases as the performance of the student model improves. Therefore, instead of applying the distillation loss function directly, we consider using dynamic temperatures during the search and training stages. Specifically, we start searching and training at a high temperature and gradually reduce the temperature to 1 so that the student model can better learn from the teacher model through soft targets. Extensive experiments demonstrate that KDAS-ReID performs not only better than other state-of-the-art Re-ID models on three benchmarks, but also better than the teacher model based on the ResNet-50 backbone. Full article

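The dynamic-temperature idea — start distillation hot, then anneal the softmax temperature toward 1 so that soft targets dominate early and sharpen later — can be sketched as follows. The linear schedule and the T² scaling are common conventions, assumed here rather than taken from the paper:

```python
import math

def softmax(logits, T):
    """Temperature-softened softmax (numerically stabilized)."""
    mx = max(l / T for l in logits)
    exps = [math.exp(l / T - mx) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distill_loss(student_logits, teacher_logits, T):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 as is conventional in knowledge distillation."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return T * T * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

def temperature(step, total_steps, T0=4.0):
    """Linearly anneal the temperature from T0 down to 1 over training
    (the schedule shape and T0 are illustrative assumptions)."""
    return 1.0 + (T0 - 1.0) * max(0.0, 1.0 - step / total_steps)
```

During both the architecture-search stage and the training stage, the loss at step t would use `temperature(t, total_steps)`, so early updates transfer the teacher's full output distribution and late updates approach hard-target training.
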
17 pages, 2551 KiB  
Article
Multiple Loci Selection with Multi-Way Epistasis in Coalescence with Recombination
by Aritra Bose, Filippo Utro, Daniel E. Platt and Laxmi Parida
Algorithms 2021, 14(5), 136; https://doi.org/10.3390/a14050136 - 25 Apr 2021
Viewed by 2312
Abstract
As studies move into deeper characterization of the impact of selection through non-neutral mutations in whole genome population genetics, modeling for selection becomes crucial. Moreover, epistasis has long been recognized as a significant component in understanding the evolution of complex genetic systems. We [...] Read more.
As studies move into deeper characterization of the impact of selection through non-neutral mutations in whole genome population genetics, modeling for selection becomes crucial. Moreover, epistasis has long been recognized as a significant component in understanding the evolution of complex genetic systems. We present a backward coalescent model, EpiSimRA, that accommodates multiple loci selection, with multi-way (k-way) epistasis for any arbitrary k. Starting from arbitrary extant populations with epistatic sites, we trace the Ancestral Recombination Graph (ARG), sampling relevant recombination and coalescent events. Our framework allows for studying different complex evolutionary scenarios in the presence of selective sweeps, positive and negative selection with multi-way epistasis. We also present a forward counterpart of the coalescent model based on a Wright-Fisher (WF) process, which we use as a validation framework, comparing the hallmarks of the ARG between the two. We provide the first framework that allows a head-to-head comparison of multi-way epistasis in a coalescent simulator with its forward counterpart with respect to the hallmarks of the ARG. We demonstrate, through extensive experiments, that EpiSimRA is consistently superior in terms of performance (seconds vs. hours) in comparison to the forward model without compromising on its accuracy. Full article
(This article belongs to the Special Issue Algorithms in Computational Biology)

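The forward-in-time validation component — a Wright-Fisher step with an epistatic fitness function — can be sketched as below. This is a minimal haploid, two-locus, no-recombination toy with assumed fitness parameters; EpiSimRA itself works backward along the ancestral recombination graph:

```python
import random

def wright_fisher_step(pop, fitness, rng):
    """One Wright-Fisher generation: N offspring drawn with replacement,
    with probability proportional to parental fitness."""
    weights = [fitness(g) for g in pop]
    return rng.choices(pop, weights=weights, k=len(pop))

def epistatic_fitness(genome, s=0.5):
    """Illustrative two-locus positive epistasis: each derived allele
    multiplies fitness by (1+s), and carrying both adds an extra (1+s)."""
    w = (1.0 + s) ** (genome[0] + genome[1])
    if genome[0] and genome[1]:
        w *= 1.0 + s
    return w
```

Running many such generations and comparing genealogical summaries against the backward coalescent simulation is the validation strategy the abstract describes; the coalescent avoids simulating every generation, which is where the seconds-versus-hours speedup comes from.
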
27 pages, 2771 KiB  
Article
Analysis of Data Presented by Multisets Using a Linguistic Approach
by Liliya A. Demidova and Julia S. Sokolova
Algorithms 2021, 14(5), 135; https://doi.org/10.3390/a14050135 - 25 Apr 2021
Viewed by 1669
Abstract
The problem of analyzing datasets formed by the results of group expert assessment of objects over a certain set of features is considered. Such datasets may contain mismatched, and even conflicting, values of object evaluations for the analyzed features. In addition, the [...] Read more.
The problem of analyzing datasets formed by the results of group expert assessment of objects over a certain set of features is considered. Such datasets may contain mismatched, and even conflicting, values of object evaluations for the analyzed features. In addition, the values of the assessments for the features can be not only point-valued but also interval-valued, due to the incompleteness and inaccuracy of the experts' knowledge. Taking into account all the results of group expert assessment of objects for a certain set of features, estimated pointwise, can be carried out using the multiset toolkit. To process interval values of assessments, a linguistic approach is proposed which involves the use of a linguistic scale in order to describe various strategies for evaluating objects (conservative, neutral, and risky) and to implement various decision-making strategies in the problems of clustering, classification, and ordering of objects. The linguistic approach to working with objects assessed by a group of experts with interval values of assessments has been successfully applied to the analysis of a dataset of competitive projects. For the dataset under consideration, using the various assessment strategies, solutions to the clustering, classification, and ordering problems were obtained, with a study of the influence of the chosen assessment strategy on the results of solving the corresponding problem. Full article
(This article belongs to the Special Issue Mathematical Models and Their Applications II)

10 pages, 1283 KiB  
Article
MultiKOC: Multi-One-Class Classifier Based K-Means Clustering
by Loai Abdallah, Murad Badarna, Waleed Khalifa and Malik Yousef
Algorithms 2021, 14(5), 134; https://doi.org/10.3390/a14050134 - 23 Apr 2021
Cited by 4 | Viewed by 2731
Abstract
In the computational biology community there are many biological cases that are considered multi-one-class classification problems. Examples include the classification of multiple tumor types, protein fold recognition and the molecular classification of multiple cancer types. In all of these cases, appropriately characterized real-world [...] Read more.
In the computational biology community there are many biological cases that are considered multi-one-class classification problems. Examples include the classification of multiple tumor types, protein fold recognition, and the molecular classification of multiple cancer types. In all of these cases, appropriately characterized real-world negative cases or outliers are impractical to obtain, and the positive cases might consist of different clusters, which in turn might lead to accuracy degradation. In this paper we present a novel algorithm named MultiKOC, a multi-one-class classifier based on K-means, to deal with this problem. The main idea is to execute a clustering algorithm over the positive samples to capture the hidden sub-data of the given positive data, and then to build a one-class classifier for each cluster's examples separately; in other words, to train the OC classifier on each piece of sub-data. For a given new sample, the generated classifiers are applied. If it is rejected by all of those classifiers, the given sample is considered a negative sample; otherwise, it is a positive sample. The results of MultiKOC are compared with the traditional one-class, multi-one-class, ensemble one-class and two-class methods, yielding a significant improvement over the one-class methods and performance comparable to the two-class methods. Full article
(This article belongs to the Special Issue Biological Knowledge Discovery from Big Data)

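The scheme described — k-means over the positives, one one-class model per cluster, accept if any model accepts — can be sketched with a deliberately simple distance-threshold one-class classifier. The paper pairs the clustering with stronger one-class learners; the farthest-point initialization here is an implementation convenience, not taken from the paper:

```python
import math

def _dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def _mean(cluster):
    return tuple(sum(c) / len(cluster) for c in zip(*cluster))

def fit_multikoc(positives, k, iters=20):
    """K-means over positive samples, then one one-class model per cluster."""
    # deterministic farthest-point initialization spreads the centers
    centers = [positives[0]]
    while len(centers) < k:
        centers.append(max(positives,
                           key=lambda p: min(_dist(p, c) for c in centers)))
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in positives:
            clusters[min(range(k), key=lambda j: _dist(p, centers[j]))].append(p)
        centers = [_mean(cl) if cl else centers[j]
                   for j, cl in enumerate(clusters)]
    # one-class model per cluster: accept anything within the cluster's radius
    radii = [max(_dist(p, c) for p in cl) if cl else 0.0
             for c, cl in zip(centers, clusters)]
    return centers, radii

def predict_positive(model, sample):
    """Positive iff at least one per-cluster one-class model accepts it;
    rejected by all -> negative, as in the abstract."""
    centers, radii = model
    return any(_dist(sample, c) <= r for c, r in zip(centers, radii))
```

A single one-class model around multi-modal positives would have to cover the empty space between clusters; the per-cluster models avoid exactly that, which is the accuracy-degradation issue the abstract mentions.
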
15 pages, 1072 KiB  
Article
Text Indexing for Regular Expression Matching
by Daniel Gibney and Sharma V. Thankachan
Algorithms 2021, 14(5), 133; https://doi.org/10.3390/a14050133 - 23 Apr 2021
Cited by 6 | Viewed by 2443
Abstract
Finding substrings of a text T that match a regular expression p is a fundamental problem. Despite being the subject of extensive research, no solution with a time complexity significantly better than O(|T||p|) has been [...] Read more.
Finding substrings of a text T that match a regular expression p is a fundamental problem. Despite being the subject of extensive research, no solution with a time complexity significantly better than O(|T|·|p|) has been found. Backurs and Indyk (FOCS 2016) established conditional lower bounds for the algorithmic problem based on the Strong Exponential Time Hypothesis that help explain this difficulty. A natural question is whether we can improve the time complexity for matching the regular expression by preprocessing the text T. We show that, conditioned on the Online Matrix–Vector Multiplication (OMv) conjecture, even with arbitrary polynomial preprocessing time, a regular expression query on a text cannot be answered in strongly sublinear time, i.e., O(|T|^(1−ε)) for any ε > 0. Furthermore, if we extend the OMv conjecture to a plausible conjecture regarding Boolean matrix multiplication with polynomial preprocessing time, which we call Online Matrix–Matrix Multiplication (OMM), we can strengthen this hardness result to there being no solution with a query time of O(|T|^(3/2−ε)). These results hold for alphabet sizes three or greater. We then provide data structures that answer queries in O(|T|·|p|/τ) time, where τ ∈ [1, |T|] is fixed at construction. These include a solution that works for all regular expressions with Exp_τ·|T| preprocessing time and space. For patterns containing only 'concatenation' and 'or' operators (the same type used in the hardness result), we provide (1) a deterministic solution which requires Exp_τ·|T|·log²|T| preprocessing time and space, and (2) when |p| ≤ |T|^z for z = 2^(o(log|T|)), a randomized solution with amortized query time which answers queries correctly with high probability, requiring Exp_τ·|T|/2^(Ω(log|T|)) preprocessing time and space. Full article
(This article belongs to the Special Issue Algorithms and Data-Structures for Compressed Computation)

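The O(|T|·|p|) baseline that these hardness results are measured against is classical Thompson NFA simulation. A compact sketch for the "concatenation + or" fragment discussed in the abstract (literals, '|', parentheses; no indexing, so every query pays a full scan of the text):

```python
def compile_nfa(pattern):
    """Thompson-style NFA for regexes built from literals, '|' and '()'.
    Returns (start, accept, eps, trans): eps maps state -> epsilon
    successors, trans maps (state, char) -> successor state."""
    eps, trans, n, pos = {}, {}, [0], [0]

    def new_state():
        n[0] += 1
        return n[0] - 1

    def peek():
        return pattern[pos[0]] if pos[0] < len(pattern) else None

    def parse_alt():
        s, a = parse_cat()
        while peek() == '|':
            pos[0] += 1
            s2, a2 = parse_cat()
            ns, na = new_state(), new_state()
            eps.setdefault(ns, set()).update({s, s2})
            eps.setdefault(a, set()).add(na)
            eps.setdefault(a2, set()).add(na)
            s, a = ns, na
        return s, a

    def parse_cat():
        s = a = new_state()
        while peek() is not None and peek() not in '|)':
            if peek() == '(':
                pos[0] += 1
                s2, a2 = parse_alt()
                pos[0] += 1            # consume ')'
            else:
                s2, a2 = new_state(), new_state()
                trans[(s2, peek())] = a2
                pos[0] += 1
            eps.setdefault(a, set()).add(s2)
            a = a2
        return s, a

    start, accept = parse_alt()
    return start, accept, eps, trans

def regex_search(text, pattern):
    """Does any substring of text match pattern? O(|text| * states) work."""
    start, accept, eps, trans = compile_nfa(pattern)

    def closure(states):
        stack, seen = list(states), set(states)
        while stack:
            for r in eps.get(stack.pop(), ()):
                if r not in seen:
                    seen.add(r)
                    stack.append(r)
        return seen

    active = set()
    for ch in text:
        active = closure(active | {start})   # a match may start here
        active = {trans[(q, ch)] for q in active if (q, ch) in trans}
        if accept in closure(active):
            return True
    return accept in closure({start})        # empty-pattern corner case
```

The paper's point is that, under the OMv/OMM conjectures, no amount of preprocessing of the text can beat this per-query cost by a polynomial factor in the worst case, beyond the τ trade-off its data structures provide.
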
15 pages, 3549 KiB  
Article
Classification of Precursor MicroRNAs from Different Species Based on K-mer Distance Features
by Malik Yousef and Jens Allmer
Algorithms 2021, 14(5), 132; https://doi.org/10.3390/a14050132 - 22 Apr 2021
Viewed by 2311
Abstract
MicroRNAs (miRNAs) are short RNA sequences that are actively involved in gene regulation. These regulators on the post-transcriptional level have been discovered in virtually all eukaryotic organisms. Additionally, miRNAs seem to exist in viruses and might also be produced in microbial pathogens. Initially, [...] Read more.
MicroRNAs (miRNAs) are short RNA sequences that are actively involved in gene regulation. These regulators on the post-transcriptional level have been discovered in virtually all eukaryotic organisms. Additionally, miRNAs seem to exist in viruses and might also be produced in microbial pathogens. Initially, transcribed RNA is cleaved by Drosha, producing precursor miRNAs. We have previously shown that it is possible to distinguish between microRNA precursors of different clades by representing the sequences in a k-mer feature space. The k-mer representation considers the frequency of a k-mer in the given sequence. We further hypothesized that the relationship between k-mers (e.g., distance between k-mers) could be useful for classification. Three different distance-based features were created, tested, and compared. The three feature sets were entitled inter k-mer distance, k-mer location distance, and k-mer first–last distance. Here, we show that classification performance above 80% (depending on the evolutionary distance) is possible with a combination of distance-based and regular k-mer features. With these novel features, classification at closer evolutionary distances is better than using k-mers alone. Combining the features leads to accurate classification for larger evolutionary distances. For example, categorizing Homo sapiens versus Brassicaceae leads to an accuracy of 93%. When considering average accuracy, the novel distance-based features lead to an overall increase in effectiveness. On the contrary, secondary-structure-based features did not lead to any effective separation among clades in this study. With this line of research, we support the differentiation between true and false miRNAs detected from next-generation sequencing data, provide an additional viewpoint for confirming miRNAs when the species of origin is known, and open up a new strategy for analyzing miRNA evolution. Full article
(This article belongs to the Special Issue Biological Knowledge Discovery from Big Data)

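The flavor of the distance-based features is easy to illustrate: instead of recording only how often a k-mer occurs, record where it occurs and derive distances from the positions. The feature names below follow the abstract loosely; the paper's exact definitions may differ:

```python
def kmer_positions(seq, k):
    """Start positions of every k-mer occurring in seq."""
    pos = {}
    for i in range(len(seq) - k + 1):
        pos.setdefault(seq[i:i + k], []).append(i)
    return pos

def kmer_distance_features(seq, k=2):
    """Per k-mer: frequency (the classical feature) plus position-derived
    features: mean gap between consecutive occurrences (inter k-mer
    distance), mean location, and first-to-last occurrence span."""
    feats = {}
    for kmer, p in kmer_positions(seq, k).items():
        gaps = [b - a for a, b in zip(p, p[1:])]
        feats[kmer] = {
            "freq": len(p),
            "mean_gap": sum(gaps) / len(gaps) if gaps else 0.0,
            "mean_pos": sum(p) / len(p),
            "first_last": p[-1] - p[0],
        }
    return feats
```

Concatenating these values over all k-mers yields the feature vector fed to the classifier; two precursors with identical k-mer counts but different k-mer spacing become distinguishable, which is the gain the abstract reports at close evolutionary distances.
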
22 pages, 357 KiB  
Article
Adding Matrix Control: Insertion-Deletion Systems with Substitutions III
by Martin Vu and Henning Fernau
Algorithms 2021, 14(5), 131; https://doi.org/10.3390/a14050131 - 22 Apr 2021
Cited by 3 | Viewed by 1613
Abstract
Insertion-deletion systems have been introduced as a formalism to model operations that find their counterparts in ideas of bio-computing, more specifically, when using DNA or RNA strings and biological mechanisms that work on these strings. So-called matrix control has been introduced to insertion-deletion [...] Read more.
Insertion-deletion systems have been introduced as a formalism to model operations that find their counterparts in ideas of bio-computing, more specifically, when using DNA or RNA strings and biological mechanisms that work on these strings. So-called matrix control has been introduced to insertion-deletion systems in order to enable writing short program fragments. We discuss substitutions as a further type of operation, added to matrix insertion-deletion systems. For such systems, we additionally discuss the effect of appearance checking. This way, we obtain new characterizations of the family of context-sensitive and the family of recursively enumerable languages. Not much context is needed for systems with appearance checking to reach computational completeness. This also suggests that bio-computers may run rather traditionally written programs, as our simulations also show how Turing machines, like any other computational device, can be simulated by certain matrix insertion-deletion-substitution systems. Full article

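The rewriting machinery can be made concrete with a small interpreter: a rule performs an insertion, deletion, or substitution between left/right contexts, and a matrix fires its rules in sequence. Appearance checking and the computational-completeness constructions are beyond this toy, and the rule encoding is an assumption:

```python
def apply_rule(word, rule):
    """All words reachable from word by one application of rule.
    rule = (op, left, x, right): op is 'ins', 'del' or 'sub';
    left/right are context strings; x is the inserted/deleted string,
    or a pair (old, new) for 'sub'."""
    op, left, x, right = rule
    out = set()
    for i in range(len(word) + 1):
        if not word[:i].endswith(left):
            continue
        rest = word[i:]
        if op == "ins" and rest.startswith(right):
            out.add(word[:i] + x + rest)
        elif op == "del" and rest.startswith(x + right):
            out.add(word[:i] + rest[len(x):])
        elif op == "sub":
            old, new = x
            if rest.startswith(old) and rest[len(old):].startswith(right):
                out.add(word[:i] + new + rest[len(old):])
    return out

def apply_matrix(words, matrix):
    """A matrix fires as a unit: each of its rules, in order, must apply
    somewhere; words on which a rule cannot apply are dropped."""
    for rule in matrix:
        words = set().union(*[apply_rule(w, rule) for w in words])
    return words
```

Iterating `apply_matrix` from an axiom enumerates the generated language; the paper's results concern which language families such systems (with substitutions and matrix control) can characterize.
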
30 pages, 2320 KiB  
Article
Self-Configuring (1 + 1)-Evolutionary Algorithm for the Continuous p-Median Problem with Agglomerative Mutation
by Lev Kazakovtsev, Ivan Rozhnov and Guzel Shkaberina
Algorithms 2021, 14(5), 130; https://doi.org/10.3390/a14050130 - 22 Apr 2021
Cited by 3 | Viewed by 2228
Abstract
The continuous p-median problem (CPMP) is one of the most popular and widely used models in location theory that minimizes the sum of distances from known demand points to the sought points called centers or medians. This NP-hard location problem is also useful [...] Read more.
The continuous p-median problem (CPMP) is one of the most popular and widely used models in location theory that minimizes the sum of distances from known demand points to the sought points called centers or medians. This NP-hard location problem is also useful for clustering (automatic grouping). In this case, sought points are considered as cluster centers. Unlike the similar k-means model, p-median clustering is less sensitive to noisy data and to the appearance of outliers (separately located demand points that do not belong to any cluster). Local search algorithms, including Variable Neighborhood Search, as well as evolutionary algorithms demonstrate rather precise results. Various algorithms based on the use of greedy agglomerative procedures are capable of obtaining very accurate results that are difficult to improve on with other methods. The computational complexity of such procedures limits their use for large problems, although computations on massively parallel systems significantly expand their capabilities. In addition, the efficiency of agglomerative procedures is highly dependent on the setting of their parameters. For the majority of practically important p-median problems, one can choose a very efficient algorithm based on agglomerative procedures. However, the parameters of such algorithms, which ensure their high efficiency, are difficult to predict. We introduce the concept of the AGGLr neighborhood based on the application of the agglomerative procedure, and investigate the search efficiency in such a neighborhood depending on its parameter r. Using the similarities between local search algorithms and (1 + 1)-evolutionary algorithms, as well as the ability of the latter to adapt their search parameters, we propose a new algorithm based on a greedy agglomerative procedure with an automatically tuned parameter r. Our new algorithm does not require preliminary tuning of the parameter r of the agglomerative procedure, adjusting this parameter online, and thus represents a more versatile computational tool. The advantages of the new algorithm are shown experimentally on problems with a data volume of up to 2,000,000 demand points. Full article
(This article belongs to the Special Issue Mathematical Models and Their Applications II)

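The greedy agglomerative neighborhood AGGLr can be sketched in one dimension: temporarily add r candidate centers, then greedily remove centers one at a time (cheapest removal first) until p remain. The names and the 1-D setting are illustrative; the paper wraps such a move in a self-configuring (1 + 1)-EA that adapts r online, which this sketch does not do:

```python
import random

def pmedian_cost(points, centers):
    """Sum of distances from each demand point to its nearest center (1-D)."""
    return sum(min(abs(p - c) for c in centers) for p in points)

def agglomerative_step(points, centers, r, rng):
    """AGGLr-style move: add r sampled candidate centers, then greedily
    drop the center whose removal increases the cost least, until p remain."""
    p = len(centers)
    cand = centers + rng.sample(points, r)
    while len(cand) > p:
        i = min(range(len(cand)),
                key=lambda j: pmedian_cost(points, cand[:j] + cand[j + 1:]))
        cand = cand[:i] + cand[i + 1:]
    return cand

def solve_pmedian(points, p, r=2, steps=30, seed=0):
    """(1+1)-style improvement loop around the agglomerative move."""
    rng = random.Random(seed)
    centers = rng.sample(points, p)
    for _ in range(steps):
        trial = agglomerative_step(points, centers, r, rng)
        if pmedian_cost(points, trial) < pmedian_cost(points, centers):
            centers = trial
    return centers
```

Each agglomerative step costs roughly O(r·(p + r)·n) distance evaluations, which is the complexity burden the abstract mentions and the reason the choice of r matters.
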
18 pages, 8856 KiB  
Article
Three-Dimensional Elastodynamic Analysis Employing Partially Discontinuous Boundary Elements
by Yuan Li, Ni Zhang, Yuejiao Gong, Wentao Mao and Shiguang Zhang
Algorithms 2021, 14(5), 129; https://doi.org/10.3390/a14050129 - 21 Apr 2021
Cited by 1 | Viewed by 1668
Abstract
Compared with continuous elements, discontinuous elements are better suited to handling the discontinuity of physical variables at corner points and discretized models with complex boundaries. However, the computational accuracy of discontinuous elements is sensitive to the positions of element nodes. To reduce the side effect [...] Read more.
Compared with continuous elements, discontinuous elements are better suited to handling the discontinuity of physical variables at corner points and discretized models with complex boundaries. However, the computational accuracy of discontinuous elements is sensitive to the positions of element nodes. To reduce the side effect of the node positions on the results, this paper proposes employing partially discontinuous elements to compute the time-domain boundary integral equation of 3D elastodynamics. Using partially discontinuous elements, the nodes located at corner points are shrunk into the element, whereas the nodes at non-corner points remain unchanged. As such, a discrete model that is continuous on surfaces and discontinuous between adjacent surfaces can be generated. First, we present a numerical integration scheme for the partially discontinuous element. For the singular integral, an improved element subdivision method is proposed to reduce the side effect of the time step on the integral accuracy. Then, the effectiveness of the proposed method is verified by two numerical examples. Meanwhile, we study the influence of the node positions on the stability and accuracy of the computation results through case studies. Finally, the recommended value range of the inward shrink ratio of the element nodes is provided. Full article
