
Table of Contents

Algorithms, Volume 13, Issue 4 (April 2020) – 31 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
Cover Story: Finding an optimal and efficient combination of features and classifier is still an open problem in [...]
Open Access Article
Variational Specific Mode Extraction: A Novel Method for Defect Signal Detection of Ferromagnetic Pipeline
Algorithms 2020, 13(4), 105; https://doi.org/10.3390/a13040105 - 24 Apr 2020
Abstract
The non-contact detection of buried ferromagnetic pipelines is a long-standing problem in the field of outside-pipeline inspection, and the extraction of the magnetic anomaly signal is a prerequisite for accurate detection. Pipeline defects can cause fluctuations in magnetic signals, which are easily submerged in wide-band background noise without external excitation sources. Previously, Variational Mode Decomposition (VMD) was used to separate modal components; however, VMD is based on a narrow-band signal processing algorithm and its calculation is complex. In this article, a method based on Variational Specific Mode Extraction (VSME) is employed to extract the signal of a specific central frequency by signal modal decomposition, where the specific mode is the weak magnetic anomaly signal of pipeline defects. VSME is based on the fact that a wide-band signal can be converted into a narrow-band signal by demodulation. Furthermore, the problem of wide-band signal decomposition is expressed as an optimal demodulation problem, which can be solved by the alternating direction method of multipliers. The proposed algorithm is verified on artificially synthesized signals, and its performance is better than that of VMD. The results show that the VSME method can extract the magnetic anomaly signal of pipeline damage from experimental data with better accuracy.
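The core observation, that demodulation maps a band of interest to a narrow band around DC, can be illustrated with a minimal sketch. This is not the authors' VSME, which solves an ADMM-based optimal demodulation problem; the center frequency, sample rate, and crude moving-average low-pass filter here are hypothetical choices for the example:

```python
import numpy as np

def extract_band(signal, fs, f_center, bandwidth):
    """Shift the band around f_center to baseband by complex demodulation,
    low-pass it with a moving average, then re-modulate (illustrative only)."""
    n = len(signal)
    t = np.arange(n) / fs
    # Demodulate: the component near f_center becomes a near-DC signal.
    baseband = signal * np.exp(-2j * np.pi * f_center * t)
    # Crude low-pass: moving average with window ~ fs / bandwidth samples.
    win = max(1, int(fs / bandwidth))
    kernel = np.ones(win) / win
    smoothed = np.convolve(baseband, kernel, mode="same")
    # Re-modulate and take the real part to reconstruct the narrow-band component.
    return 2 * np.real(smoothed * np.exp(2j * np.pi * f_center * t))

fs = 1000.0
t = np.arange(2000) / fs
# Synthetic wide-band mixture: a 50 Hz "anomaly" plus a 200 Hz disturbance.
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 200 * t)
y = extract_band(x, fs, f_center=50.0, bandwidth=10.0)
```

Away from the signal edges, `y` closely tracks the 50 Hz component while the 200 Hz disturbance is averaged out by the window.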

Open Access Article
Decision Support System for Fitting and Mapping Nonlinear Functions with Application to Insect Pest Management in the Biological Control Context
Algorithms 2020, 13(4), 104; https://doi.org/10.3390/a13040104 - 24 Apr 2020
Abstract
The process of moving from experimental data to modeling and characterizing the dynamics and interactions in natural processes is a challenging task. This paper proposes an interactive platform for fitting data derived from experiments to mathematical expressions and carrying out spatial visualization. The platform is designed using a component-based software architectural approach and implemented in the R and Java programming languages. It uses experimental data as input for model fitting, then applies the obtained model at the landscape level via spatial temperature grid data to yield regional and continental maps. Different modules and functionalities of the tool are presented with a case study, in which the tool is used to establish a temperature-dependent virulence model and map the potential zone of efficacy of a fungal-based biopesticide. The decision support system (DSS) was developed in generic form, so it can be used by anyone interested in fitting mathematical equations to experimental data collected following the described protocol and, depending on the type of investigation, it offers the possibility of projecting the model at the landscape level.
(This article belongs to the Special Issue Algorithms in Decision Support Systems)
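As an illustration of the fit-then-map workflow the abstract describes, the sketch below fits a hypothetical quadratic temperature-virulence curve by least squares and evaluates it over a coarse temperature grid. The data, the quadratic form, and the grid are invented for the example and stand in for the platform's R/Java components:

```python
import numpy as np

# Hypothetical lab data: virulence (% mortality) measured at 5 temperatures.
temps = np.array([15.0, 20.0, 25.0, 30.0, 35.0])
virulence = np.array([22.0, 55.0, 80.0, 74.0, 31.0])

# Fit a quadratic thermal-response curve by ordinary least squares.
coeffs = np.polyfit(temps, virulence, deg=2)
model = np.poly1d(coeffs)

# "Map" the fitted model over a temperature grid, a stand-in for the
# spatial temperature grid used to build regional and continental maps.
grid = np.linspace(10, 40, 7)
predicted = model(grid)

# Vertex of the parabola: the temperature of maximal predicted virulence.
optimum = -coeffs[1] / (2 * coeffs[0])
```

In the real platform the fitted model would be evaluated per grid cell of a spatial temperature raster rather than on a 1-D grid.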

Open Access Article
Practical Grammar Compression Based on Maximal Repeats
Algorithms 2020, 13(4), 103; https://doi.org/10.3390/a13040103 - 23 Apr 2020
Abstract
This study presents an analysis of RePair, a grammar compression algorithm known for its simple scheme that is also practically effective. First, we show that the main process of RePair, the step-by-step substitution of the most frequent symbol pairs, works within the corresponding most frequent maximal repeats. Then, we reveal the relation between maximal repeats and the grammars constructed by RePair. On the basis of this analysis, we propose a novel variant of RePair, called MR-RePair, which performs a one-time substitution of the most frequent maximal repeats instead of the consecutive substitution of the most frequent pairs. Experiments comparing the size of the constructed grammars and the execution time of RePair and MR-RePair on several text corpora demonstrate that MR-RePair constructs more compact grammars than RePair does, especially for highly repetitive texts.
(This article belongs to the Special Issue Lossless Data Compression)
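The baseline the paper analyzes, RePair's consecutive substitution of the most frequent pair, can be sketched in a few lines. This is a toy version that ignores the priority-queue machinery of practical implementations; MR-RePair would instead substitute whole maximal repeats in one step:

```python
from collections import Counter

def repair(seq):
    """Toy RePair: repeatedly replace the most frequent adjacent pair
    with a fresh nonterminal until no pair occurs at least twice."""
    rules = {}            # nonterminal -> the pair it derives
    next_sym = 0
    seq = list(seq)
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        pair, freq = pairs.most_common(1)[0]
        if freq < 2:
            break
        nt = ("R", next_sym)
        next_sym += 1
        rules[nt] = pair
        out, i = [], 0
        while i < len(seq):     # left-to-right, non-overlapping replacement
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(nt)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    return seq, rules

def expand(sym, rules):
    """Expand a (non)terminal back to the original symbols."""
    if sym in rules:
        a, b = rules[sym]
        return expand(a, rules) + expand(b, rules)
    return [sym]

start, rules = repair("abababab")
decoded = [s for sym in start for s in expand(sym, rules)]
```

On `"abababab"` the toy grammar replaces `ab` and then `R0 R0`, shrinking the start string to two symbols while remaining losslessly expandable.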

Open Access Article
A Case Study for a Big Data and Machine Learning Platform to Improve Medical Decision Support in Population Health Management
Algorithms 2020, 13(4), 102; https://doi.org/10.3390/a13040102 - 23 Apr 2020
Abstract
Big data and artificial intelligence are currently two of the most important and trending pieces for innovation and predictive analytics in healthcare, leading the digital healthcare transformation. The Keralty organization is already working on developing an intelligent big data analytic platform based on machine learning and data integration principles. We discuss how this platform is the new pillar for the organization to improve population health management, value-based care, and new upcoming challenges in healthcare. The benefits of using this new data platform for community and population health include better healthcare outcomes, improvement of clinical operations, reduced costs of care, and generation of accurate medical information. Several machine learning algorithms implemented by the authors can use the large standardized datasets integrated into the platform to improve the effectiveness of public health interventions, improving diagnosis and clinical decision support. The data integrated into the platform come from Electronic Health Records (EHR), Hospital Information Systems (HIS), Radiology Information Systems (RIS), and Laboratory Information Systems (LIS), as well as data generated by public health platforms, mobile data, social media, and clinical web portals. This massive volume of data is integrated using big data techniques for storage, retrieval, processing, and transformation. This paper presents the design of a digital health platform in a healthcare organization in Colombia to integrate operational, clinical, and business data repositories with advanced analytics to improve the decision-making process for population health management.
(This article belongs to the Special Issue Algorithms in Decision Support Systems)

Open Access Article
A New Way to Store Simple Text Files
Algorithms 2020, 13(4), 101; https://doi.org/10.3390/a13040101 - 22 Apr 2020
Abstract
In the era of ubiquitous digitization and the Internet of Things (IoT), information plays a vital role. All types of data are collected, and some of these data are stored as text files. An important aspect, regardless of the type of data, is file storage, especially the amount of disk space required: the less space used to store data sets, the lower the cost of the service. Another important aspect of storing data warehouses in the form of files is the cost of the data transmission needed for file transfer and processing. Moreover, stored data should be minimally protected against access and reading by other entities. These aspects are particularly important for large data sets like Big Data. Considering the above criteria, i.e., minimizing storage space and data transfer while ensuring minimum security, the main goal of this article is to show a new way of storing text files. The article presents a method that converts data from text files such as txt, json, html, and py into images (image files) in png format. Taking the output size of the file as the criterion, the results obtained for the test files confirm that the presented method reduces the need for disk space and also hides data in an image file. The described method can be used for texts saved in extended ASCII and UTF-8 encoding.
(This article belongs to the Special Issue Big Data Solutions)
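The text-to-image idea can be sketched by packing a text's bytes into the pixels of a minimal 8-bit grayscale PNG, written here with only the standard library. This is an illustrative stand-in, not the paper's encoding scheme, and the 32-pixel row width is an arbitrary choice:

```python
import struct
import zlib

def _chunk(ctype, data):
    """A PNG chunk: 4-byte big-endian length, type, data, CRC-32."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def text_to_png(text, width=32):
    """Pack UTF-8 text bytes into an 8-bit grayscale PNG, one byte per
    pixel, padding the last row with zeros (illustrative sketch)."""
    data = text.encode("utf-8")
    height = -(-len(data) // width)            # ceiling division
    padded = data + b"\x00" * (width * height - len(data))
    # Each scanline is prefixed with filter type 0 (None).
    raw = b"".join(b"\x00" + padded[r * width:(r + 1) * width]
                   for r in range(height))
    # IHDR: width, height, bit depth 8, color type 0 (grayscale),
    # compression 0, filter 0, interlace 0.
    ihdr = struct.pack(">IIBBBBB", width, height, 8, 0, 0, 0, 0)
    return (b"\x89PNG\r\n\x1a\n"
            + _chunk(b"IHDR", ihdr)
            + _chunk(b"IDAT", zlib.compress(raw))
            + _chunk(b"IEND", b""))

png = text_to_png("hello, world " * 10)
```

The zlib compression inside IDAT is where the size reduction for repetitive text comes from; a companion reader would reverse the byte-to-pixel mapping to recover the original file.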

Open Access Article
A Survey of Low-Rank Updates of Preconditioners for Sequences of Symmetric Linear Systems
Algorithms 2020, 13(4), 100; https://doi.org/10.3390/a13040100 - 21 Apr 2020
Abstract
The aim of this survey is to review some recent developments in devising efficient preconditioners for sequences of symmetric positive definite (SPD) linear systems A_k x_k = b_k, k = 1, 2, …, arising in many scientific applications, such as discretization of transient Partial Differential Equations (PDEs), solution of eigenvalue problems, (Inexact) Newton methods applied to nonlinear systems, and rational Krylov methods for computing a function of a matrix. In this paper, we analyze a number of techniques for updating a given initial preconditioner by a low-rank matrix with the aim of improving the clustering of eigenvalues around 1, in order to speed up the convergence of the Preconditioned Conjugate Gradient (PCG) method. We also review some techniques to efficiently approximate the linearly independent vectors which constitute the low-rank corrections and whose choice is crucial for the effectiveness of the approach. Numerical results on real-life applications show that the performance of a given iterative solver can be greatly enhanced by the use of low-rank updates.

Open Access Article
A New Lossless DNA Compression Algorithm Based on A Single-Block Encoding Scheme
Algorithms 2020, 13(4), 99; https://doi.org/10.3390/a13040099 - 20 Apr 2020
Abstract
With the recent evolution in DNA sequencing technology, a massive amount of genomic data is produced every day, mainly DNA sequences, craving more storage and bandwidth. Unfortunately, managing, analyzing, and specifically storing these large amounts of data has become a major scientific challenge for bioinformatics. To overcome these challenges, compression has become necessary. In this paper, we describe a new reference-free DNA compressor, abbreviated DNAC-SBE. DNAC-SBE is a lossless hybrid compressor that consists of three phases. First, starting from the most frequent base (Bi), the positions of each Bi are replaced with ones and the positions of the other, less frequent bases are replaced with zeros. Second, to encode the generated streams, we propose a new single-block encoding scheme (SBE) based on the exploitation of the positions of neighboring bits within the block, using two different techniques. Finally, the proposed algorithm dynamically assigns the shorter length code to each block. Results show that DNAC-SBE outperforms state-of-the-art compressors and proves its efficiency in terms of the special conditions imposed on compressed data, storage space, and data transfer rate, regardless of file format or data size.
(This article belongs to the Special Issue Data Compression Algorithms and their Applications)
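Phase one of the pipeline, marking each base's positions with ones among the positions left over from more frequent bases, can be illustrated with a small round-trip sketch. The block-based SBE coding of phases two and three is omitted:

```python
from collections import Counter

def base_bitstreams(seq):
    """Phase-one sketch: process bases from most to least frequent; for
    each, emit a bitstream with 1 at that base's positions among the
    positions not yet claimed by a more frequent base."""
    order = [b for b, _ in Counter(seq).most_common()]
    streams, remaining = {}, list(range(len(seq)))
    for base in order[:-1]:               # the last base is implied
        bits = [1 if seq[i] == base else 0 for i in remaining]
        streams[base] = bits
        remaining = [i for i in remaining if seq[i] != base]
    return order, streams, remaining

def rebuild(order, streams, length):
    """Invert the bitstreams back to the original sequence."""
    out = [None] * length
    remaining = list(range(length))
    for base in order[:-1]:
        nxt = []
        for pos, bit in zip(remaining, streams[base]):
            if bit:
                out[pos] = base
            else:
                nxt.append(pos)
        remaining = nxt
    for pos in remaining:                 # everything left is the rarest base
        out[pos] = order[-1]
    return "".join(out)

seq = "ACGTACGGAATTCCGG"
order, streams, _ = base_bitstreams(seq)
```

Because each successive stream only covers still-unclaimed positions, the streams get shorter as rarer bases are processed, which is what makes the later block coding effective.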

Open Access Article
Numerical Simulation of Non-Linear Models of Reaction—Diffusion for a DGT Sensor
Algorithms 2020, 13(4), 98; https://doi.org/10.3390/a13040098 - 20 Apr 2020
Abstract
In this work, we present a novel strategy for the numerical solution of a coupled system of partial differential equations that describe reaction–diffusion processes of a mixture of metals and ligands that can be absorbed by a sensor or a microorganism in an aqueous medium. The novelty introduced in this work consists of adequate database management in conjunction with a direct iterative scheme, which allows the construction of simple, fast and efficient algorithms. Except in really adverse conditions, the calculation converges and satisfactory solutions are reached. Computing times proved better than those obtained with some commercial programs. Although we concentrate on the solution for a particular system (Diffusive Gradients in Thin Films [DGT] sensors), the proposed algorithm does not require major modifications to consider new theoretical or experimental configurations. Since the quality of numerical simulations of reaction–diffusion problems often degrades as the values of the reaction rate constants increase, some additional effort has been invested in obtaining proper solutions in those cases.

Open Access Article
Performance Assessment of Predictive Control—A Survey
Algorithms 2020, 13(4), 97; https://doi.org/10.3390/a13040097 - 17 Apr 2020
Abstract
Model Predictive Control constitutes an important element of any modern control system, and there is growing interest in this technology. More and more advanced predictive structures have been implemented. The first applications were in chemical engineering; now Model Predictive Control can be found in almost all kinds of applications, from the process industry to embedded control systems and autonomous objects. Currently, each implementation of a control system requires strict financial justification. Application engineers need tools to measure and quantify the quality of control and the potential for improvement that may be achieved by retrofitting control systems. Furthermore, a successful implementation of predictive control must conform to prior estimations not only during commissioning, but also during regular daily operation; the system must sustain the quality of its control performance. The assessment of Model Predictive Control requires a suitable, often specific, methodology and comparative indicators. These demands establish the rationale of this survey. The paper therefore collects and summarizes control performance assessment methods specifically designed for and utilized in predictive control. These observations present the picture of the assessment technology, and further generalization leads to the formulation of a control assessment procedure to support control application engineers.
(This article belongs to the Special Issue Model Predictive Control: Algorithms and Applications)

Open Access Article
A Hybrid Grasshopper Optimization Algorithm Applied to the Open Vehicle Routing Problem
Algorithms 2020, 13(4), 96; https://doi.org/10.3390/a13040096 - 16 Apr 2020
Abstract
This paper presents a hybrid grasshopper optimization algorithm using a novel decoder and local search to solve instances of the open vehicle routing problem with capacity and distance constraints. The algorithm's decoder first defines the number of vehicles to be used and then partitions the clients, assigning them to the available routes. The algorithm performs a local search in three neighborhoods after decoding. When a new best solution is found, every route is locally optimized by solving a traveling salesman problem, considering the depot and the clients in the route. Three sets containing a total of 30 benchmark problems from the literature were used to test the algorithm. The experiments considered two cases of the problem. In the first, the primary objective is to minimize the total number of vehicles and then the total distance to be traveled. In the second case, the total distance traveled by the vehicles is minimized. The obtained results showed the algorithm's proficient performance. For the first case, the algorithm was able to improve or match the best-known solutions for 21 of the 30 benchmark problems. For the second case, the best-known solutions for 18 of the 30 benchmark problems were found or improved by the algorithm. Finally, a case study from a real-life problem is included.
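The decoding step described above, partitioning clients into routes, can be illustrated with a minimal capacity-only sketch. The paper's decoder additionally fixes the number of vehicles first and respects distance limits; the demands and capacity here are hypothetical:

```python
def decode_routes(clients, demand, capacity, max_stops=None):
    """Sketch of a routing decoder: scan a client permutation and open
    a new route whenever adding the next client would exceed the
    vehicle capacity (distance limits omitted for brevity)."""
    routes, current, load = [], [], 0.0
    for c in clients:
        too_full = load + demand[c] > capacity
        too_long = max_stops is not None and len(current) >= max_stops
        if current and (too_full or too_long):
            routes.append(current)        # close the current route
            current, load = [], 0.0
        current.append(c)
        load += demand[c]
    if current:
        routes.append(current)
    return routes

demand = {1: 4, 2: 3, 3: 5, 4: 2, 5: 6}
routes = decode_routes([1, 2, 3, 4, 5], demand, capacity=8)
```

In a metaheuristic such as the hybrid grasshopper algorithm, the client permutation fed to the decoder is what the population actually evolves.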

Open Access Article
How to Identify Varying Lead–Lag Effects in Time Series Data: Implementation, Validation, and Application of the Generalized Causality Algorithm
Algorithms 2020, 13(4), 95; https://doi.org/10.3390/a13040095 - 16 Apr 2020
Abstract
This paper develops the generalized causality algorithm and applies it to a multitude of data from the fields of economics and finance. Specifically, our parameter-free algorithm efficiently determines the optimal non-linear mapping and identifies varying lead–lag effects between two given time series. This procedure allows an elastic adjustment of the time axis to find similar but phase-shifted sequences; structural breaks in their relationship are also captured. A large-scale simulation study validates the outperformance in the vast majority of parameter constellations in terms of efficiency, robustness, and feasibility. Finally, the presented methodology is applied to real data from the areas of macroeconomics, finance, and metals. The highest similarity is shown by the pairs gross domestic product and consumer price index (macroeconomics), the S&P 500 index and the Deutscher Aktienindex (finance), and gold and silver (metals). In addition, the algorithm makes full use of its flexibility and identifies both various structural breaks and regime patterns over time, which are (partly) well documented in the literature.
(This article belongs to the Special Issue Mathematical Models and Their Applications)
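For contrast with the varying lead-lag structure the algorithm detects, a fixed-lag baseline via maximal cross-correlation looks like this. It is a deliberately simple stand-in, not the generalized causality algorithm; with the sign convention below, a negative lag means the second series trails the first:

```python
def best_lag(x, y, max_lag):
    """Return the shift of y (in steps) that maximizes the Pearson
    correlation with x; lag < 0 means y follows x by |lag| steps."""
    def corr(a, b):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((p - ma) * (q - mb) for p, q in zip(a, b))
        va = sum((p - ma) ** 2 for p in a) ** 0.5
        vb = sum((q - mb) ** 2 for q in b) ** 0.5
        return cov / (va * vb)
    scores = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = x[lag:], y[:len(y) - lag]
        else:
            a, b = x[:lag], y[-lag:]
        scores[lag] = corr(a, b)
    return max(scores, key=scores.get)

x = [0, 1, 4, 9, 4, 1, 0, 0, 0, 0]
y = [0, 0, 0, 1, 4, 9, 4, 1, 0, 0]   # x shifted forward by 2 steps
lag = best_lag(x, y, max_lag=3)
```

A single global lag like this is exactly what the paper's elastic time-axis adjustment generalizes: it lets the lag vary along the series instead of being one constant.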

Open Access Article
Stochastic Models to Qualify Stem Tapers
Algorithms 2020, 13(4), 94; https://doi.org/10.3390/a13040094 - 15 Apr 2020
Abstract
This study examines the performance of 11 tree taper models in predicting the diameter outside bark at any given height and the total stem volume of eight dominant tree species in the boreal forests of Lithuania. Here, we develop eight new models using stochastic differential equations (SDEs). The symmetrical Vasicek model and the asymmetrical Gompertz model are used to describe tree taper evolution, as are geometric-type diffusion processes. These models are compared with four traditionally used tree taper models using performance statistics and residual analysis. The observed dataset consists of longitudinal measurements of 3703 trees, representing the eight dominant tree species in Lithuania (pine, spruce, oak, ash, birch, black alder, white alder, and aspen). Overall, the SDE taper models produced the best goodness-of-fit statistics for diameter predictions. All results have been implemented in the Maple computer algebra system using the "Statistics" and "VectorCalculus" packages.
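The symmetric Vasicek dynamics used above, dX = a(θ − X)dt + σdW, can be simulated with a standard Euler-Maruyama scheme; the parameter values below are hypothetical, chosen only to show mean reversion toward θ:

```python
import math
import random

def vasicek_path(x0, a, theta, sigma, dt, steps, rng):
    """Simulate dX = a*(theta - X)dt + sigma*dW with Euler-Maruyama."""
    x, path = x0, [x0]
    for _ in range(steps):
        dw = rng.gauss(0.0, math.sqrt(dt))   # Brownian increment
        x = x + a * (theta - x) * dt + sigma * dw
        path.append(x)
    return path

rng = random.Random(42)
# Hypothetical taper-like setting: a relative diameter drifting toward theta.
path = vasicek_path(x0=1.0, a=2.0, theta=0.2, sigma=0.05,
                    dt=0.01, steps=500, rng=rng)
```

After a few mean-reversion times (1/a), the path fluctuates around θ with stationary standard deviation σ/√(2a), which is what makes the Vasicek process a convenient symmetric taper model.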

Open Access Article
Path Planning for Laser Cladding Robot on Artificial Joint Surface Based on Topology Reconstruction
Algorithms 2020, 13(4), 93; https://doi.org/10.3390/a13040093 - 15 Apr 2020
Abstract
Artificial joint surface coating is a hot issue in the interdisciplinary fields of manufacturing, materials and biomedicine. Due to the complex surface characteristics of artificial joints, there are some problems with efficiency and precision in automatic cladding path planning for coating fabrication. In this study, a path planning method for a laser cladding robot on artificial joint surfaces is proposed. The key to this method is the topological reconstruction of the artificial joint surface. On the basis of the topological relation, a set of parallel planes is used to intersect the CAD model to generate a set of continuous, directed and equidistant transversals on the artificial joint surface. The arch height error method is used to extract robot interpolation points from the surface transversal lines according to machining accuracy requirements. The coordinates and normal vectors of the interpolation points are used to calculate the position and pose of the robot tool center point (TCP). To ensure that the laser beam is always perpendicular to the artificial joint surface, a novel laser cladding set-up with a robot was designed, in which the joint part, clamped by a six-axis robot, moves while the laser head is fixed on the workbench. The proposed methodology was validated by executing the planned path on the surface of an artificial acetabular cup in simulation and in experiments with an industrial NACHI robot. The results indicate that the path planning method based on topological reconstruction is feasible and more efficient than the traditional robot teaching method.
(This article belongs to the Special Issue Algorithms for Computer-Aided Design)
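The arch (chord) height criterion for extracting interpolation points can be sketched on a 2-D sampled curve: keep extending the chord while every intermediate sample stays within the tolerance. The paper applies this to 3-D surface transversals; the sine curve and tolerance below are illustrative:

```python
import math

def chord_error(p, a, b):
    """Perpendicular distance from point p to the chord a-b."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    length = math.hypot(dx, dy)
    if length == 0:
        return math.hypot(px - ax, py - ay)
    return abs(dx * (ay - py) - dy * (ax - px)) / length

def extract_points(curve, tol):
    """Greedy arch-height selection: extend each chord as far as all
    intermediate samples stay within tol of it, then start a new chord."""
    keep, i = [0], 0
    while i < len(curve) - 1:
        j = i + 1
        while (j + 1 < len(curve)
               and all(chord_error(curve[k], curve[i], curve[j + 1]) <= tol
                       for k in range(i + 1, j + 1))):
            j += 1
        keep.append(j)
        i = j
    return [curve[k] for k in keep]

# Dense samples of a sine arc; the tolerance thins them to a few points.
curve = [(x / 50.0, math.sin(x / 50.0)) for x in range(0, 158)]
pts = extract_points(curve, tol=0.01)
```

Fewer points are kept where the transversal is nearly straight and more where curvature is high, which is exactly the machining-accuracy trade-off the method encodes.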

Open Access Article
Deterministic Coresets for k-Means of Big Sparse Data
Algorithms 2020, 13(4), 92; https://doi.org/10.3390/a13040092 - 14 Apr 2020
Abstract
Let P be a set of n points in R^d, let k ≥ 1 be an integer, and let ε ∈ (0, 1) be a constant. An ε-coreset is a subset C ⊆ P with appropriate non-negative weights (scalars) that approximates any given set Q ⊂ R^d of k centers: the sum of squared distances from every point in P to its closest point in Q is the same, up to a factor of 1 ± ε, as the weighted sum from C to the same k centers. If the coreset is small, we can solve problems such as k-means clustering or its variants (e.g., discrete k-means, where the centers are restricted to be in P, or in other restricted zones) on the small coreset to get faster provable approximations. Moreover, it is known that such coresets support streaming, dynamic and distributed data using the classic merge-and-reduce trees. The fact that the coreset is a subset implies that it preserves the sparsity of the data. However, existing such coresets are randomized and their size has at least linear dependency on the dimension d. We suggest the first such coreset of size independent of d. This is also the first deterministic coreset construction whose resulting size is not exponential in d. Extensive experimental results and benchmarks are provided on public datasets, including the first coreset of the English Wikipedia, computed using Amazon's cloud.
(This article belongs to the Special Issue Big Data Algorithmics)
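The coreset guarantee can be checked numerically with a deliberately naive construction: snap points to a grid and keep one weighted centroid per cell. Note this toy is not the paper's deterministic scheme, and unlike the paper's coreset it is not a subset of P (so it would not preserve sparsity):

```python
def sq_dist(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def cost(points, centers, weights=None):
    """(Weighted) sum of squared distances to each point's nearest center."""
    if weights is None:
        weights = [1.0] * len(points)
    return sum(w * min(sq_dist(p, c) for c in centers)
               for p, w in zip(points, weights))

def grid_coreset(points, cell):
    """Naive compression: one centroid per occupied grid cell,
    weighted by the number of points that fell into the cell."""
    buckets = {}
    for p in points:
        key = tuple(int(x // cell) for x in p)
        buckets.setdefault(key, []).append(p)
    pts, wts = [], []
    for group in buckets.values():
        d = len(group[0])
        centroid = tuple(sum(p[i] for p in group) / len(group)
                         for i in range(d))
        pts.append(centroid)
        wts.append(float(len(group)))
    return pts, wts

points = [(i * 0.1, (i % 7) * 0.1) for i in range(200)]
core, w = grid_coreset(points, cell=0.5)
centers = [(5.0, 0.3), (15.0, 0.3)]
full = cost(points, centers)
approx = cost(core, centers, w)
```

The weighted cost on the compressed set stays close to the full cost for these centers; a true ε-coreset makes that guarantee hold for *every* choice of k centers simultaneously.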

Open Access Article
A Generalized Alternating Linearization Bundle Method for Structured Convex Optimization with Inexact First-Order Oracles
Algorithms 2020, 13(4), 91; https://doi.org/10.3390/a13040091 - 14 Apr 2020
Abstract
In this paper, we consider a class of structured optimization problems whose objective function is the sum of two convex functions, f and h, which are not necessarily differentiable. We focus particularly on the case where the function f is general and its exact first-order information (function value and subgradient) may be difficult to obtain, while the function h is relatively simple. We propose a generalized alternating linearization bundle method for solving this class of problems, which can handle inexact first-order information of on-demand accuracy. The inexact information can be very general and covers various oracles, such as inexact, partially inexact and asymptotically exact oracles. At each iteration, the algorithm solves two interrelated subproblems: one finds the proximal point of the polyhedron model of f plus the linearization of h; the other finds the proximal point of the linearization of f plus h. We establish global convergence of the algorithm under different types of inexactness. Finally, some preliminary numerical results on a set of two-stage stochastic linear programming problems show that our method is very encouraging.
(This article belongs to the Special Issue Optimization Algorithms and Applications)
Open Access Feature Paper Article
Application of Generalized Polynomial Chaos for Quantification of Uncertainties of Time Averages and Their Sensitivities in Chaotic Systems
Algorithms 2020, 13(4), 90; https://doi.org/10.3390/a13040090 - 13 Apr 2020
Abstract
In this paper, we consider the effect of stochastic uncertainties on non-linear systems with chaotic behavior. More specifically, we quantify the effect of parametric uncertainties on time-averaged quantities and their sensitivities. Sampling methods for Uncertainty Quantification (UQ), such as Monte Carlo (MC), are very costly, while traditional methods for sensitivity analysis, such as the adjoint method, fail in chaotic systems. In this work, we employ the non-intrusive generalized Polynomial Chaos (gPC) expansion for UQ, coupled with the Multiple-Shooting Shadowing (MSS) algorithm for the sensitivity analysis of chaotic systems. It is shown that gPC, coupled with MSS, is an appropriate method for conducting UQ in chaotic systems and produces results that match well with those from MC and Finite Differences (FD).

Open Access Article
Success History-Based Position Adaptation in Fuzzy-Controlled Ensemble of Biology-Inspired Algorithms
Algorithms 2020, 13(4), 89; https://doi.org/10.3390/a13040089 - 09 Apr 2020
Abstract
In this study, a new modification of the meta-heuristic approach called Co-Operation of Biology-Related Algorithms (COBRA) is proposed. Originally, the COBRA approach was based on a fuzzy logic controller and used for solving real-parameter optimization problems. The basic idea consists of the cooperative work of six well-known biology-inspired algorithms, referred to as components. However, it was established that the search efficiency of COBRA depends on its ability to keep the balance between exploitation and exploration when solving optimization problems. The new modification of the COBRA approach is based on a different method for generating potential solutions. This method keeps a historical memory of successful positions found by individuals to lead them in different directions and therefore to improve the exploitation and exploration capabilities. The proposed technique was applied to the COBRA components and to its basic steps. The newly proposed meta-heuristic, as well as other modifications of the COBRA approach and its components, was evaluated on three sets of various benchmark problems. The experimental results obtained by all algorithms with the same computational effort are presented and compared. The proposed modification outperformed the other algorithms used in the comparison, demonstrating its usefulness and workability.
(This article belongs to the Special Issue Mathematical Models and Their Applications)
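The success-history idea summarized above can be sketched in a few lines. The following toy minimiser (a hypothetical simplification, not the authors' COBRA implementation) keeps a bounded memory of positions that produced improvements and pulls individuals toward randomly chosen memorised successes:

```python
import random

def success_history_search(f, dim=2, pop_size=20, memory_size=10,
                           iters=200, bounds=(-5.0, 5.0), seed=1):
    """Minimise f with a toy success-history-based position update:
    individuals move toward positions stored in a memory of past successes."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    fit = [f(p) for p in pop]
    memory = [list(pop[min(range(pop_size), key=lambda i: fit[i])])]
    for _ in range(iters):
        for i in range(pop_size):
            guide = rng.choice(memory)           # historical successful position
            step = rng.uniform(0.0, 1.0)
            cand = [min(hi, max(lo, p + step * (g - p) + rng.gauss(0.0, 0.1)))
                    for p, g in zip(pop[i], guide)]
            fc = f(cand)
            if fc < fit[i]:                      # success: keep it and memorise it
                pop[i], fit[i] = cand, fc
                memory.append(list(cand))
                if len(memory) > memory_size:
                    memory.pop(0)                # forget the oldest success
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]

best_x, best_f = success_history_search(lambda v: sum(x * x for x in v))
print(best_x, best_f)
```

The memory biases the search toward regions that recently paid off (exploitation) while the Gaussian perturbation preserves exploration, which is the balance the abstract emphasises.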
Open Access Article
Feasibility Pump Algorithm for Sparse Representation under Gaussian Noise
Algorithms 2020, 13(4), 88; https://doi.org/10.3390/a13040088 - 09 Apr 2020
Viewed by 585
Abstract
In this paper, the Feasibility Pump is adapted for the problem of sparse representations of signals affected by Gaussian noise. This adaptation is tested and then compared to Orthogonal Matching Pursuit (OMP) and the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA). The Feasibility Pump recovers the true support much better than the other two algorithms and, as the SNR decreases and the support size increases, it has a smaller recovery and representation error when compared with its competitors. It is observed that, in order for the algorithm to be efficient, a regularization parameter and a weight term for the error are needed. Full article
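For context, the OMP baseline mentioned above fits in a few lines of NumPy. This is a generic textbook OMP, not the code used in the paper; the dictionary and sparse vector below are synthetic:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily pick k atoms of A to represent y."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        # atom most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares fit of y on the atoms selected so far
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x, sorted(support)

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
A /= np.linalg.norm(A, axis=0)             # unit-norm atoms
x_true = np.zeros(100)
x_true[[5, 17, 63]] = [2.0, -3.0, 2.5]     # 3-sparse signal
y = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat, supp = omp(A, y, 3)
print(supp)
```

At this high SNR, OMP recovers the support; the paper's point is that the Feasibility Pump keeps doing so as the SNR drops and the support grows.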
Open Access Article
The Need for Machine-Processable Agreements in Health Data Management
Algorithms 2020, 13(4), 87; https://doi.org/10.3390/a13040087 - 07 Apr 2020
Viewed by 742
Abstract
Data processing agreements in health data management are laid out by organisations in monolithic “Terms and Conditions” documents written in natural legal language. These top-down policies usually protect the interest of the service providers rather than the data owners. They are coarse-grained and do not allow for more than a few opt-in or opt-out options for individuals to express their consent on personal data processing, and these options often do not transfer to software as they were intended to. In this paper, we study the problem of health data sharing and advocate the need for individuals to describe their personal contract of data usage in a formal, machine-processable language. We develop an application for sharing patient genomic information and test results, and use interactions with patients and clinicians to identify the particular features a privacy/policy/consent language should offer in this complicated domain. We present how Semantic Web technologies can play a central role in this approach by providing the formal tools and features required in such a language. We present our ongoing approach to constructing an ontology-based framework and a policy language that allows patients and clinicians to express fine-grained consent, preferences, or suggestions on sharing medical information. Our language offers unique features such as multi-party ownership of data and data sharing dependencies. We evaluate the landscape of policy languages from different areas and show how they lack major requirements needed in health data management. In addition to empowering patients, our approach helps organisations increase technological capabilities, abide by legal requirements, and save resources. Full article
Open Access Article
Confidence-Based Voting for the Design of Interpretable Ensembles with Fuzzy Systems
Algorithms 2020, 13(4), 86; https://doi.org/10.3390/a13040086 - 06 Apr 2020
Viewed by 734
Abstract
In this study, a new voting procedure for combining fuzzy logic based classifiers with other classifiers, called confidence-based voting, is proposed. This method combines two classifiers: a fuzzy classification system and a second, more accurate but less interpretable classifier. For the cases when the fuzzy system returns a high confidence level, i.e., the returned membership value is large, the fuzzy system is used to perform the classification; otherwise, the second classifier is applied. As a result, most of the samples are classified by the explainable and interpretable fuzzy system, and the second classifier is applied only for the most difficult cases. To show the efficiency of the proposed approach, a set of experiments was performed on test datasets, as well as on two problems of estimating a person’s emotional state from data collected by non-contact vital sensors, which use the Doppler effect. To validate the accuracies of the proposed approach, statistical tests were used for comparison. The obtained results demonstrate the efficiency of the proposed technique, as it allows for both improving the classification accuracy and explaining the decision-making process. Full article
(This article belongs to the Special Issue Mathematical Models and Their Applications)
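The voting rule itself is a one-liner; the sketch below (with hypothetical toy models standing in for the paper's fuzzy system and its stronger partner) shows the dispatch between the interpretable classifier and the fallback:

```python
import numpy as np

def confidence_based_vote(membership, fuzzy_pred, fallback_pred, threshold=0.8):
    """Use the interpretable (fuzzy) prediction when its membership value is
    high; otherwise defer to the more accurate but less interpretable model."""
    return fuzzy_pred if membership >= threshold else fallback_pred

# toy demo: membership derived from a nearest-centroid "fuzzy" rule
centroids = {0: np.array([0.0, 0.0]), 1: np.array([4.0, 4.0])}

def fuzzy_classify(x):
    inv = {c: 1.0 / (1e-9 + np.linalg.norm(x - m)) for c, m in centroids.items()}
    label = max(inv, key=inv.get)
    return label, inv[label] / sum(inv.values())   # (class, membership in [0, 1])

def accurate_classify(x):                          # stand-in for the stronger model
    return int(x.sum() > 4.0)

for point in [np.array([0.2, 0.1]), np.array([2.1, 1.9])]:
    label, mu = fuzzy_classify(point)
    final = confidence_based_vote(mu, label, accurate_classify(point))
    print(point, round(mu, 2), final)
```

The first point is far from the decision boundary, so the fuzzy rule decides; the second is ambiguous (membership near 0.5), so the fallback model is consulted.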
Open Access Article
Research and Study of the Hybrid Algorithms Based on the Collective Behavior of Fish Schools and Classical Optimization Methods
Algorithms 2020, 13(4), 85; https://doi.org/10.3390/a13040085 - 03 Apr 2020
Cited by 1 | Viewed by 848
Abstract
Inspired by biological systems, swarm intelligence algorithms are widely used to solve multimodal optimization problems. In this study, we consider the hybridization problem of an algorithm based on the collective behavior of fish schools. The algorithm is computationally inexpensive compared to other population-based algorithms. The accuracy of fish school search increases with the predefined iteration count, but this also increases the computation time required to find a suboptimal solution. We propose two hybrid approaches that aim to improve the accuracy of the evolutionary-inspired algorithm by using classical optimization methods, namely gradient descent and Newton’s optimization method. The study shows the effectiveness of the proposed hybrid algorithms and the strong advantage of the hybrid algorithm based on fish school search and gradient descent. We provide a solution for the linearly inseparable exclusive disjunction problem using the developed algorithm and a perceptron with one hidden layer. To demonstrate the effectiveness of the algorithms, we visualize high-dimensional loss surfaces near global extreme points. In addition, we apply the distributed version of the most effective hybrid algorithm to the hyperparameter optimization problem of a neural network. Full article
(This article belongs to the Special Issue Mathematical Models and Their Applications)
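The hybridization pattern (global population search followed by local classical refinement) can be illustrated with a minimal sketch. The "school" phase below is a crude random-perturbation stand-in for fish school search, not the actual algorithm, and the objective is a toy quadratic:

```python
import random

def grad(f, x, h=1e-6):
    """Central-difference gradient of f at x."""
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

def hybrid_minimise(f, dim=2, school=30, swarm_iters=50,
                    gd_iters=200, lr=0.05, seed=2):
    """Global phase: crude fish-school-like random perturbation search.
    Local phase: gradient descent refinement from the best fish."""
    rng = random.Random(seed)
    fish = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(school)]
    for _ in range(swarm_iters):
        for i, pos in enumerate(fish):
            cand = [p + rng.gauss(0.0, 0.5) for p in pos]
            if f(cand) < f(pos):            # greedy move, as in simple swarms
                fish[i] = cand
    best = min(fish, key=f)
    for _ in range(gd_iters):               # classical local refinement
        g = grad(f, best)
        best = [b - lr * gi for b, gi in zip(best, g)]
    return best, f(best)

x, fx = hybrid_minimise(lambda v: sum((xi - 1.0) ** 2 for xi in v))
print(x, fx)
```

The population phase supplies a good basin; gradient descent then achieves the high final accuracy that a fixed iteration budget of the swarm alone would not reach.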
Open Access Article
Representation of Traffic Congestion Data for Urban Road Traffic Networks Based on Pooling Operations
Algorithms 2020, 13(4), 84; https://doi.org/10.3390/a13040084 - 02 Apr 2020
Viewed by 741
Abstract
In order to improve the efficiency of transportation networks, it is critical to forecast traffic congestion. Large-scale traffic congestion data have become available and accessible, yet they need to be properly represented in order to avoid overfitting, reduce the requirements of computational resources, and be utilized effectively by various methodologies and models. Inspired by pooling operations in deep learning, we propose a representation framework for traffic congestion data in urban road traffic networks. This framework consists of a grid-based partition of urban road traffic networks and a pooling operation that reduces multiple values into an aggregated one. We propose using the maximum value in each grid cell (MAV) as the pooling operation. Raw snapshots of traffic congestion maps are transformed and represented as a series of matrices, which are used as inputs to a spatiotemporal congestion prediction network (STCN) to evaluate the effectiveness of the representation when predicting traffic congestion. STCN combines convolutional neural networks (CNNs) and long short-term memory networks (LSTMs) for their spatiotemporal capabilities: CNNs can extract spatial features and dependencies of traffic congestion between roads, and LSTMs can learn their temporal evolution patterns and correlations. An empirical experiment on an urban road traffic network shows that, when incorporated into our proposed representation framework, MAV outperforms other pooling operations in the effectiveness of the representation of traffic congestion data for traffic congestion prediction, and that the framework is cost-efficient in terms of computational resources. Full article
(This article belongs to the Special Issue Advanced Image and Video Processing)
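The grid-partition-plus-pooling step described above is essentially max pooling over a congestion map. A minimal NumPy sketch (toy data, not the paper's dataset):

```python
import numpy as np

def mav_pool(congestion, grid):
    """Partition a congestion map into (grid x grid) cells and keep the
    maximum value (MAV) in each cell."""
    h, w = congestion.shape
    assert h % grid == 0 and w % grid == 0
    blocks = congestion.reshape(h // grid, grid, w // grid, grid)
    return blocks.max(axis=(1, 3))

# toy 4x4 congestion snapshot (0 = free, 3 = jammed), pooled into 2x2
snapshot = np.array([[0, 1, 2, 0],
                     [1, 3, 0, 1],
                     [0, 0, 2, 2],
                     [1, 0, 3, 0]])
print(mav_pool(snapshot, 2))
```

Replacing `max` with `mean` in the last line of `mav_pool` gives the average-pooling alternative that MAV is compared against.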
Open Access Article
Ensemble Deep Learning for Multilabel Binary Classification of User-Generated Content
Algorithms 2020, 13(4), 83; https://doi.org/10.3390/a13040083 - 01 Apr 2020
Viewed by 765
Abstract
Sentiment analysis usually refers to the analysis of human-generated content via a polarity filter. Affective computing deals with the exact emotions conveyed through information. Emotional information most frequently cannot be accurately described by a single emotion class. Multilabel classifiers can categorize human-generated content in multiple emotional classes. Ensemble learning can improve the statistical, computational and representation aspects of such classifiers. We present a baseline stacked ensemble and propose a weighted ensemble. Our proposed weighted ensemble can use multiple classifiers to improve classification results without hyperparameter tuning or data overfitting. We evaluate our ensemble models with two datasets. The first dataset is from Semeval2018-Task 1 and contains almost 7000 Tweets, labeled with 11 sentiment classes. The second dataset is the Toxic Comment Dataset with more than 150,000 comments, labeled with six different levels of abuse or harassment. Our results suggest that ensemble learning improves classification results by 1.5% to 5.4%. Full article
(This article belongs to the Special Issue Ensemble Algorithms and Their Applications)
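The weighted-ensemble idea for multilabel outputs can be sketched as a weighted average of per-classifier label probabilities followed by thresholding. The classifiers and weights below are hypothetical stand-ins, not the paper's models:

```python
import numpy as np

def weighted_ensemble(probas, weights, threshold=0.5):
    """Combine per-classifier multilabel probabilities with fixed weights
    (one weight per classifier, e.g. from validation scores) and threshold."""
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()                     # normalise the weights
    scores = np.tensordot(weights, np.asarray(probas), axes=1)
    return (scores >= threshold).astype(int), scores

# two hypothetical classifiers, each predicting 2 samples x 3 labels
p1 = np.array([[0.9, 0.2, 0.6], [0.1, 0.8, 0.4]])
p2 = np.array([[0.7, 0.4, 0.3], [0.2, 0.9, 0.6]])
labels, scores = weighted_ensemble([p1, p2], weights=[0.6, 0.4])
print(labels)
```

Because the weights are fixed rather than learned, this combination needs no extra hyperparameter tuning and cannot overfit the training data, matching the property claimed in the abstract.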
Open Access Article
Algebraic Point Projection for Immersed Boundary Analysis on Low Degree NURBS Curves and Surfaces
Algorithms 2020, 13(4), 82; https://doi.org/10.3390/a13040082 - 31 Mar 2020
Viewed by 659
Abstract
Point projection is an important geometric need when boundaries described by parametric curves and surfaces are immersed in domains. In problems where an immersed parametric boundary evolves with time as in solidification or fracture analysis, the projection from a point in the domain to the boundary is necessary to determine the interaction of the moving boundary with the underlying domain approximation. Furthermore, during analysis, since the driving force behind interface evolution depends on locally computed curvatures and normals, it is ideal if the parametric entity is not approximated as piecewise-linear. To address this challenge, we present in this paper an algebraic procedure to project a point onto Non-uniform rational B-spline (NURBS) curves and surfaces. The developed technique utilizes the resultant theory to construct implicit forms of parametric Bézier patches, level sets of which are termed algebraic level sets (ALS). Boolean compositions of the algebraic level sets are carried out using the theory of R-functions. The algebraic level sets and their gradients at a given point on the domain are then used to project the point onto the immersed boundary. Beginning with a first-order algorithm, sequentially refined procedures culminating in a second-order projection algorithm are described for NURBS curves and surfaces. Examples are presented to illustrate the efficiency and robustness of the developed method. More importantly, the method is shown to be robust and able to generate valid solutions even for curves and surfaces with high local curvature or G0 continuity—problems where the Newton–Raphson method fails due to discontinuity in the projected points or because the numerical iterations fail to converge to a solution, respectively. Full article
(This article belongs to the Special Issue Algorithms for Computer-Aided Design)
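The first-order idea (step from a point toward the zero level set using the implicit value and its gradient) can be sketched on a simple implicit curve. The unit circle below is a hypothetical stand-in for the algebraic level set of a Bézier patch:

```python
def project_first_order(f, grad_f, p, iters=20):
    """First-order projection of point p onto the implicit curve f = 0:
    repeatedly step along -f * grad(f) / |grad(f)|^2."""
    x, y = p
    for _ in range(iters):
        v = f(x, y)
        gx, gy = grad_f(x, y)
        n2 = gx * gx + gy * gy
        if n2 == 0.0:                      # degenerate gradient: stop
            break
        x, y = x - v * gx / n2, y - v * gy / n2
    return x, y

# unit circle as the implicit (algebraic) level set: f(x, y) = x^2 + y^2 - 1
f = lambda x, y: x * x + y * y - 1.0
grad_f = lambda x, y: (2.0 * x, 2.0 * y)

px, py = project_first_order(f, grad_f, (2.0, 1.0))
print(px, py, f(px, py))
```

Each step is a Newton-like correction along the gradient direction; for the circle it converges to the radial (closest-point) projection of the query point.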
Open Access Article
Detection and Monitoring of Bottom-Up Cracks in Road Pavement Using a Machine-Learning Approach
Algorithms 2020, 13(4), 81; https://doi.org/10.3390/a13040081 - 31 Mar 2020
Viewed by 778
Abstract
The current methods that aim at monitoring the structural health status (SHS) of road pavements allow detecting surface defects and failures. Notwithstanding this, there is a lack of methods and systems able to identify concealed cracks (particularly, bottom-up cracks) and monitor their growth over time. For this reason, the objective of this study is to set up a supervised machine learning (ML)-based method for the identification and classification of the SHS of a differently cracked road pavement based on its vibro-acoustic signature. The method collects these signatures (using acoustic sensors located at the roadside) and classifies the pavement’s SHS through ML models. Different ML classifiers (i.e., multilayer perceptron, MLP; convolutional neural network, CNN; random forest classifier, RFC; and support vector classifier, SVC) were used and compared. Results show that a specific vibro-acoustic signature can be associated with a differently cracked road pavement with great accuracy (MLP = 91.8%, CNN = 95.6%, RFC = 91.0%, and SVC = 99.1%). These results are encouraging and represent the basis for the application of the proposed method in real contexts, such as monitoring roads and bridges using wireless sensor networks, which is the target of future studies. Full article
(This article belongs to the Special Issue Models and Technologies for Intelligent Transportation Systems)
Open Access Article
Hierarchical-Matching-Based Online and Real-Time Multi-Object Tracking with Deep Appearance Features
Algorithms 2020, 13(4), 80; https://doi.org/10.3390/a13040080 - 29 Mar 2020
Viewed by 782
Abstract
Based on tracking-by-detection, we propose a hierarchical-matching-based online and real-time multi-object tracking approach with deep appearance features, which can effectively reduce the false positives (FP) in tracking. To increase the accuracy of data association, we define the trajectory confidence using its position information, appearance information, and the information of historical relevant detections, after which we can classify the trajectories into different levels. To obtain discriminative appearance features, we developed a deep convolutional neural network to extract the appearance features of objects and trained it on a large-scale pedestrian re-identification dataset. Finally, we used the proposed diverse and hierarchical matching strategy to associate detection and trajectory sets. Experimental results on the MOT benchmark dataset show that our proposed approach performs well against other online methods, especially for the metrics of FP and frames per second (FPS). Full article
Open Access Review
About Granular Rough Computing—Overview of Decision System Approximation Techniques and Future Perspectives
Algorithms 2020, 13(4), 79; https://doi.org/10.3390/a13040079 - 29 Mar 2020
Viewed by 741
Abstract
Granular computing is a broad discipline in which the basic operation is to work on groups of similar objects according to a fixed similarity measure. The first references to granular computing can be found in Zadeh’s work on fuzzy set theory. Granular computing allows for a very natural modelling of the world; it is very likely that the human brain, while solving problems, performs granular calculations on data collected from the senses. Researchers of this paradigm have demonstrated its broad possibilities: among other things, granular methods are used in classification, regression, missing-value handling, feature selection, and data approximation. It is impossible to cover all methods based on granular computing, so we discuss a selected group of techniques. In this article, we present a review of recently developed granulation techniques belonging to the family of approximation algorithms founded by Polkowski in the framework of rough set theory. Starting from Polkowski’s basic standard granulation, we describe the concept-dependent, layered, and epsilon variants that we developed further, as well as our recent homogeneous granulation. We present simple numerical examples and samples of research results, and assess the effectiveness of these methods in terms of decision-system size reduction and the preservation of the internal knowledge of the original data. The reduction in the number of objects while maintaining classification efficiency reaches 90 percent for standard granulation with a kNN classifier (we achieve similar efficiency for the concept-dependent technique with the Naive Bayes classifier). The largest reduction in the exhaustive rule set, at the efficiency level of the original data, is 99 percent, obtained with concept-dependent granulation. In the homogeneous variants, the reduction is less than 60 percent, but the advantage of these techniques is that it is not necessary to search for optimal granulation parameters, which are selected dynamically. We also describe potential directions for the development of granular computing techniques through the prism of the described methods. Full article
(This article belongs to the Special Issue Granular Computing: From Foundations to Applications)
Open Access Article
Beyond Newton: A New Root-Finding Fixed-Point Iteration for Nonlinear Equations
Algorithms 2020, 13(4), 78; https://doi.org/10.3390/a13040078 - 29 Mar 2020
Viewed by 840
Abstract
Finding roots of equations is at the heart of most computational science. A well-known and widely used iterative algorithm is Newton’s method. However, its convergence depends heavily on the initial guess, with poor choices often leading to slow convergence or even divergence. In this short note, we seek to enlarge the basin of attraction of the classical Newton’s method. The key idea is to develop a relatively simple multiplicative transform of the original equations, which leads to a reduction in nonlinearity, thereby alleviating the limitation of Newton’s method. Based on this idea, we derive a new class of iterative methods and rediscover Halley’s method as the limit case. We present the application of these methods to several mathematical functions (real, complex, and vector equations). Across all examples, our numerical experiments suggest that the new methods converge for a significantly wider range of initial guesses. For scalar equations, the increase in computational cost per iteration is minimal. For vector functions, more extensive analysis is needed to compare the increase in cost per iteration with the improvement in convergence for specific problems. Full article
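Since the abstract singles out Halley's method as the limit case of the new family, a textbook implementation is a useful reference point (the test equation below is a classic example, not taken from the paper):

```python
def halley(f, df, d2f, x0, tol=1e-12, max_iter=100):
    """Halley's method: a cubically convergent scalar root finder that uses
    the first and second derivatives of f."""
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        denom = 2 * dfx * dfx - fx * d2f(x)
        if denom == 0:                      # avoid division by zero
            break
        x_new = x - 2 * fx * dfx / denom
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# root of f(x) = x^3 - 2x - 5, a classic root-finding test equation
root = halley(lambda x: x ** 3 - 2 * x - 5,
              lambda x: 3 * x ** 2 - 2,
              lambda x: 6 * x,
              x0=2.0)
print(root)
```

Compared with Newton's update `x - f/f'`, the extra second-derivative term flattens the effective nonlinearity, which is the same mechanism the paper's multiplicative transform exploits.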
Open Access Article
On Classical Solutions for A Kuramoto–Sinelshchikov–Velarde-Type Equation
Algorithms 2020, 13(4), 77; https://doi.org/10.3390/a13040077 - 28 Mar 2020
Viewed by 733
Abstract
The Kuramoto–Sinelshchikov–Velarde equation describes the evolution of phase turbulence in reaction-diffusion systems or the evolution of plane flame propagation, taking into account the combined influence of diffusion and thermal conduction of the gas on the stability of a plane flame front. In this paper, we prove the well-posedness of the classical solutions for the Cauchy problem. Full article
Open Access Article
Experiments-Based Comparison of Different Power Controllers for a Solid Oxide Fuel Cell Against Model Imperfections and Delay Phenomena
Algorithms 2020, 13(4), 76; https://doi.org/10.3390/a13040076 - 25 Mar 2020
Viewed by 761
Abstract
Solid oxide fuel cell systems such as those presented in this paper are not only applicable to the pure supply of electric energy; they can typically also be used in decentralized power stations, i.e., as micro-cogeneration systems for houses, where both electric and thermal energy are required. In that application, the electric power demand is obviously not constant but changes over time; it essentially depends on the user profiles of the houses, which can refer to, e.g., private households as well as offices, and the power use is, furthermore, not predefined. For an optimal operation of the fuel cell, we want to adjust the power to match the demand with sufficiently small time constants, without the implementation of mid- or long-term electrical storage systems such as battery buffers. To adapt the produced electric power, a simple but sufficiently robust feedback controller regulating the hydrogen mass flow into the cells is necessary. To achieve this goal, four different controllers, namely a PI output-feedback controller combined with a feedforward control, an internal model control (IMC) approach, a sliding-mode (SM) controller, and a state-feedback controller, are developed and compared in this paper. As the challenge is to find a controller ensuring steady-state accuracy and good tracking behavior despite the nonlinearities and uncertainties of the plant, the comparison was performed with regard to these requirements. Simulations and experiments show that the IMC outperforms the alternatives with respect to steady-state accuracy and tracking behavior. Full article
(This article belongs to the Special Issue Algorithms for Reliable Estimation, Identification and Control)
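As a minimal illustration of the PI option in the comparison above, the sketch below closes a discrete PI loop around a first-order lag plant. The plant constants are invented stand-ins, not the identified fuel-cell model:

```python
def simulate_pi(kp=2.0, ki=5.0, dt=0.01, steps=1000,
                tau=0.5, gain=1.2, setpoint=1.0):
    """Discrete PI loop on a first-order lag plant y' = (gain*u - y)/tau,
    a toy stand-in for the power-vs-hydrogen-mass-flow dynamics."""
    y, integral = 0.0, 0.0
    for _ in range(steps):
        error = setpoint - y
        integral += error * dt
        u = kp * error + ki * integral        # PI control law
        y += dt * (gain * u - y) / tau        # plant step (forward Euler)
    return y

print(simulate_pi())
```

The integral term drives the steady-state error to zero even though the plant gain is not exactly one, which is the steady-state accuracy requirement against which the four controllers in the paper are judged.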