
Table of Contents

Algorithms, Volume 12, Issue 8 (August 2019)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
Displaying articles 1-29
Open Access Article
A Distributed Hybrid Community Detection Methodology for Social Networks
Algorithms 2019, 12(8), 175; https://doi.org/10.3390/a12080175
Received: 30 June 2019 / Revised: 26 July 2019 / Accepted: 15 August 2019 / Published: 17 August 2019
Abstract
Nowadays, the amount of digitally available information has grown tremendously, with real-world data graphs reaching millions or even billions of vertices. Hence, community detection, in which groups of vertices are formed according to a well-defined similarity measure, has never been more essential, affecting a vast range of scientific fields such as bio-informatics, sociology, discrete mathematics, nonlinear dynamics, digital marketing, and computer science. Although an impressive amount of research has already been published on this NP-hard class of problem, the existing methods and algorithms have proven largely inefficient and severely unscalable. In this regard, the purpose of this manuscript is to combine the network topology properties expressed by the loose similarity and the local edge betweenness, a newly proposed alternative to Girvan–Newman's edge betweenness measure, with the intrinsic user content information, in order to introduce a novel and highly distributed hybrid community detection methodology. The proposed approach has been thoroughly tested on various real social graphs and compared to classic divisive community detection algorithms that serve as baselines; in practice it proves exceptionally scalable, highly efficient, and adequately accurate in revealing the underlying network hierarchy.
Open Access Article
A Novel Blind Restoration and Reconstruction Approach for CT Images Based on Sparse Representation and Hierarchical Bayesian-MAP
Algorithms 2019, 12(8), 174; https://doi.org/10.3390/a12080174
Received: 29 May 2019 / Revised: 11 August 2019 / Accepted: 14 August 2019 / Published: 16 August 2019
Abstract
Computed tomography (CT) image reconstruction and restoration are very important in medical image processing, and together they form an inverse problem. Iterative image reconstruction is a key tool for increasing the applicability of CT imaging and reducing radiation dose. Nevertheless, traditional iterative reconstruction methods are limited by the sampling theorem, and blurring of the projection data propagates unhampered artifacts into the reconstructed image. To overcome these problems, image restoration techniques should be developed that accurately correct a wide variety of image-degrading effects and thereby effectively improve reconstruction. In this paper, a blind image restoration technique is embedded in compressive-sensing CT image reconstruction, which can yield a high-quality reconstructed image from fewer projection data. Because a small amount of data can be acquired with less radiation in a shorter time, high-quality reconstruction from less data is equivalent to reducing radiation dose. Technically, both the blurring process and the sparse representation of the sharp CT image are first modeled as a series of parameters. The sharp CT image is obtained from the estimated sparse representation. The model parameters are then estimated by a hierarchical Bayesian maximum a posteriori formulation. Finally, the estimated model parameters are optimized to obtain the final reconstruction. We demonstrate the effectiveness of the proposed method in simulation experiments in terms of the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM).
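As a point of reference, the generic maximum a posteriori (MAP) estimate underlying this kind of formulation can be written as follows, with Θ collecting the blur and sparse-representation parameters and y the measured projection data; the paper's hierarchical version additionally places priors on the hyperparameters, so this is only the skeleton of the approach:

\hat{\Theta} = \arg\max_{\Theta} p(\Theta \mid y) = \arg\max_{\Theta} p(y \mid \Theta)\, p(\Theta)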
Open Access Article
Long Short-Term Memory Neural Network Applied to Train Dynamic Model and Speed Prediction
Algorithms 2019, 12(8), 173; https://doi.org/10.3390/a12080173
Received: 26 July 2019 / Revised: 8 August 2019 / Accepted: 14 August 2019 / Published: 16 August 2019
Abstract
The automatic train operation system is a significant component of intelligent railway transportation. As a fundamental problem, the construction of the train dynamic model has been extensively researched using parametric approaches. Parametric models may perform poorly, however, owing to unrealistic assumptions and changing environments. In this paper, a long short-term memory network is carefully developed to build the train dynamic model in a nonparametric way. By optimizing the hyperparameters of the proposed model, more accurate outputs can be obtained with the same inputs as the parametric approaches. The proposed model was compared with two parametric methods using actual data. Experimental results suggest that its performance is better than that of the traditional models, owing to its strong learning ability. Through a detailed feature-engineering process, the proposed long short-term memory network based algorithm was further extended to predict train speed multiple steps ahead.
Open Access Article
Practical Access to Dynamic Programming on Tree Decompositions
Algorithms 2019, 12(8), 172; https://doi.org/10.3390/a12080172
Received: 18 July 2019 / Revised: 5 August 2019 / Accepted: 13 August 2019 / Published: 16 August 2019
Abstract
Parameterized complexity theory has led to a wide range of algorithmic breakthroughs within the last few decades, but the practicability of these methods for real-world problems is still not well understood. We investigate the practicability of one of the fundamental approaches of this field: dynamic programming on tree decompositions. Indisputably, this is a key technique in parameterized algorithms and modern algorithm design. Despite the enormous impact of this approach in theory, it still has very little influence on practical implementations. The reasons for this phenomenon are manifold. One of them is the simple fact that such an implementation requires a long chain of non-trivial tasks (such as computing the decomposition, preparing it, …). We provide an easy way to implement such dynamic programs that only requires the definition of the update rules. With this interface, dynamic programs for various problems, such as 3-coloring, can be implemented easily in about 100 lines of structured Java code. The theoretical foundation of the success of dynamic programming on tree decompositions is well understood due to Courcelle's celebrated theorem, which states that every MSO-definable problem can be efficiently solved if a tree decomposition of small width is given. We seek to provide practical access to this theorem as well, by presenting a lightweight model checker for a small fragment of MSO₁ (that is, we do not consider "edge-set-based" problems). This fragment is powerful enough to describe many natural problems, and our model checker turns out to be very competitive against similar state-of-the-art tools.
(This article belongs to the Special Issue New Frontiers in Parameterized Complexity and Algorithms)
Open Access Article
An Application of Manifold Learning in Global Shape Descriptors
Algorithms 2019, 12(8), 171; https://doi.org/10.3390/a12080171
Received: 28 May 2019 / Revised: 16 July 2019 / Accepted: 14 August 2019 / Published: 16 August 2019
Abstract
With the rapid expansion of applied 3D computational vision, shape descriptors have become increasingly important for a wide variety of applications and objects from molecules to planets. Appropriate shape descriptors are critical for accurate (and efficient) shape retrieval and 3D model classification. Several spectral-based shape descriptors have been introduced by solving various physical equations over a 3D surface model. In this paper, for the first time, we incorporate a specific manifold learning technique, introduced in statistics and machine learning, to develop a global, spectral-based shape descriptor in the computer graphics domain. The proposed descriptor utilizes the Laplacian Eigenmap technique in which the Laplacian eigenvalue problem is discretized using an exponential weighting scheme. As a result, our descriptor eliminates the limitations tied to the existing spectral descriptors, namely dependency on triangular mesh representation and high intra-class quality of 3D models. We also present a straightforward normalization method to obtain a scale-invariant and noise-resistant descriptor. The extensive experiments performed in this study using two standard 3D shape benchmarks—high-resolution TOSCA and McGill datasets—demonstrate that the present contribution provides a highly discriminative and robust shape descriptor under the presence of a high level of noise, random scale variations, and low sampling rate, in addition to the known isometric-invariance property of the Laplace–Beltrami operator. The proposed method significantly outperforms state-of-the-art spectral descriptors in shape retrieval and classification. The proposed descriptor is limited to closed manifolds due to its inherited inability to accurately handle manifolds with boundaries.
(This article belongs to the Special Issue Algorithms for Manifold Learning and Its Applications)
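For readers who want a concrete starting point, the following Python sketch computes a Laplacian-Eigenmap-style spectral descriptor from a point-sampled surface: a kNN graph with exponential (heat-kernel) weights, the smallest Laplacian eigenvalues, and a ratio normalization. The neighborhood size k, kernel width t, descriptor length, and normalization are illustrative assumptions, not the paper's exact choices.

import numpy as np

def spectral_descriptor(points, k=8, t=0.1, n_eigs=16):
    # Pairwise squared distances (dense; fine for a few thousand samples).
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    n = len(points)
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]        # k nearest neighbors of i
        W[i, nbrs] = np.exp(-d2[i, nbrs] / t)    # exponential weighting scheme
    W = np.maximum(W, W.T)                       # symmetrize the kNN graph
    L = np.diag(W.sum(axis=1)) - W               # unnormalized graph Laplacian
    evals = np.linalg.eigvalsh(L)[:n_eigs + 1]   # smallest eigenvalues
    evals = evals[evals > 1e-9]                  # drop the trivial zero mode
    return evals / evals[0]                      # ratio normalization

# e.g. descriptor = spectral_descriptor(np.random.rand(500, 3))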
Open Access Article
Protograph LDPC Code Design for Asynchronous Random Access
Algorithms 2019, 12(8), 170; https://doi.org/10.3390/a12080170
Received: 30 July 2019 / Revised: 8 August 2019 / Accepted: 13 August 2019 / Published: 15 August 2019
Abstract
This work addresses the physical layer channel code design for an uncoordinated, frame- and slot-asynchronous random access protocol. Starting from the observation that collisions between two users yield very specific interference patterns, we define a surrogate channel model and propose different protograph low-density parity-check code designs. The proposed codes are tested both in a setup where the physical layer is abstracted and on a more realistic channel model, where finite-length physical layer simulations of the entire asynchronous random access scheme, including decoding, are carried out. We find that the abstracted physical layer model overestimates the performance when short blocks are considered. Additionally, the optimized codes show gains in supported channel traffic (a measure of the number of terminals that can be concurrently accommodated on the channel) of around 17% at a packet loss rate of 10⁻² with respect to off-the-shelf codes.
(This article belongs to the Special Issue Coding Theory and Its Application)
Open Access Article
Structural Analysis and Application of Non-Standard Components Based on Genetic Algorithm
Algorithms 2019, 12(8), 169; https://doi.org/10.3390/a12080169
Received: 20 July 2019 / Revised: 2 August 2019 / Accepted: 6 August 2019 / Published: 15 August 2019
Abstract
Aiming at the problems of low efficiency, heavy weight, and high cost of traditional components, it is necessary to study a design and analysis method for non-standard components. Taking a non-standard turret loading and unloading device as the carrier, the key parts of the non-standard components are extracted for structural design, and a multi-objective mathematical model and a modal theory model are established. Optimization of the key parts is carried out by a genetic algorithm. Finally, the optimization results are compared and simulated in ANSYS Workbench. The results show that, in this case, comparing the genetic algorithm's optimized data with the other data, the overall mass difference is 4.1%. The first six modal frequencies in the optimized results lie in the range of 68 Hz to 130 Hz, which provides a basis for similar research in the future.
Open Access Article
Cyclotomic Trace Codes
Algorithms 2019, 12(8), 168; https://doi.org/10.3390/a12080168
Received: 20 June 2019 / Revised: 3 August 2019 / Accepted: 10 August 2019 / Published: 13 August 2019
Abstract
A generalization of Ding's construction is proposed that employs as a defining set the collection of the s-th powers (s ≥ 2) of all nonzero elements in GF(p^m), where p ≥ 2 is prime. Some of the resulting codes are optimal or near-optimal and include projective codes over GF(4) that give rise to optimal or near-optimal quantum codes. In addition, the codes yield interesting combinatorial structures, such as strongly regular graphs and block designs.
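As a toy illustration of the defining set (restricted here to a prime field GF(p); the construction itself works over GF(p^m) and needs extension-field arithmetic), the s-th powers of the nonzero elements can be collected in a couple of lines of Python:

def sth_power_defining_set(p, s):
    # Collect the s-th powers of all nonzero elements of GF(p).
    return sorted({pow(x, s, p) for x in range(1, p)})

# Example: the squares (s = 2) of GF(7)* are the quadratic residues mod 7.
print(sth_power_defining_set(7, 2))   # [1, 2, 4]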
Open Access Article
LMI Pole Regions for a Robust Discrete-Time Pole Placement Controller Design
Algorithms 2019, 12(8), 167; https://doi.org/10.3390/a12080167
Received: 1 July 2019 / Revised: 2 August 2019 / Accepted: 8 August 2019 / Published: 13 August 2019
Abstract
Herein, robust pole placement controller design for linear uncertain discrete-time dynamic systems is addressed. The adopted approach uses the so-called "D-regions" in which the closed-loop system poles are required to lie. The discrete-time pole regions corresponding to the prescribed damping of the resulting closed-loop system are studied. The key issue is to determine an appropriate convex approximation to the originally non-convex discrete-time pole region, so that numerically efficient robust controller design algorithms based on Linear Matrix Inequalities (LMI) can be used. Several alternatives for relatively simple inner approximations and their corresponding LMI descriptions are presented. The developed LMI region for the prescribed damping can be arbitrarily combined with other LMI pole limitations (e.g., stability degree). Simple algorithms to calculate the matrices for the LMI representation of the proposed convex pole regions are provided in a concise way. The results and their use in a robust controller design are illustrated in a case study of a laboratory magnetic levitation system.
Open Access Article
MapReduce Algorithm for Variants of Skyline Queries: Skyband and Dominating Queries
Algorithms 2019, 12(8), 166; https://doi.org/10.3390/a12080166
Received: 28 June 2019 / Revised: 24 July 2019 / Accepted: 7 August 2019 / Published: 13 August 2019
Abstract
The skyline query and its variants are useful functions in the early stages of a knowledge-discovery process. They select a set of important objects that are better than the other, common objects in the dataset. In order to handle big data, such knowledge-discovery queries must be computed in parallel distributed environments. In this paper, we consider an efficient parallel algorithm for the "K-skyband query" and the "top-k dominating query", which are popular variants of the skyline query. We propose a method for computing both queries simultaneously in MapReduce, a popular parallel distributed framework for processing "big data" problems. Our extensive evaluation validates the effectiveness and efficiency of the proposed algorithm on both real and synthetic datasets.
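To fix the semantics before parallelizing, a sequential reference for the K-skyband can be sketched in a few lines of Python (minimization is assumed: q dominates p if q is no worse in every dimension and strictly better in at least one); the paper's contribution is computing this, together with the top-k dominating query, in MapReduce:

def dominates(q, p):
    return all(a <= b for a, b in zip(q, p)) and any(a < b for a, b in zip(q, p))

def k_skyband(points, k):
    # Points dominated by fewer than k other points; k = 1 is the skyline.
    return [p for p in points if sum(dominates(q, p) for q in points) < k]

pts = [(1, 5), (2, 2), (3, 1), (4, 4), (5, 5)]
print(k_skyband(pts, 1))   # the skyline: [(1, 5), (2, 2), (3, 1)]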
Open Access Article
Algorithmic Matching Attacks on Optimally Suppressed Tabular Data
Algorithms 2019, 12(8), 165; https://doi.org/10.3390/a12080165
Received: 30 June 2019 / Revised: 6 August 2019 / Accepted: 8 August 2019 / Published: 11 August 2019
Abstract
The objective of the cell suppression problem (CSP) is to protect sensitive cell values in tabular data under the presence of linear relations concerning marginal sums. Previous algorithms for solving CSPs ensure that every sensitive cell has enough uncertainty on its values based on the interval width of all possible values. However, we find that every deterministic CSP algorithm is vulnerable to an adversary who possesses the knowledge of that algorithm. We devise a matching attack scheme that narrows down the ranges of sensitive cell values by matching the suppression pattern of an original table with that of each candidate table. Our experiments show that actual ranges of sensitive cell values are significantly narrower than those assumed by the previous CSP algorithms.
(This article belongs to the Special Issue Statistical Disclosure Control for Microdata)
Open Access Article
Equisum Partitions of Sets of Positive Integers
Algorithms 2019, 12(8), 164; https://doi.org/10.3390/a12080164
Received: 12 July 2019 / Revised: 5 August 2019 / Accepted: 7 August 2019 / Published: 11 August 2019
Abstract
Let V be a finite set of positive integers with sum equal to a multiple of the integer b. When does V have a partition into b parts so that all parts have equal sums? We develop algorithmic constructions which yield positive, albeit incomplete, answers for the following classes of sets V, where n is a given positive integer: (1) an initial interval {a ∈ Z+ : a ≤ n}; (2) an initial interval of primes {p ∈ P : p ≤ n}, where P is the set of primes; (3) a divisor set {d ∈ Z+ : d | n}; (4) an aliquot set {d ∈ Z+ : d | n, d < n}. Open general questions and conjectures are included for each of these classes.
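For small instances, the question is easy to experiment with. The following Python backtracking sketch (exponential in the worst case, unlike the paper's constructive algorithms for the classes above) searches for an equisum partition into b parts:

def equisum_partition(values, b):
    total = sum(values)
    if total % b:
        return None
    target = total // b
    parts, sums = [[] for _ in range(b)], [0] * b
    vals = sorted(values, reverse=True)     # placing large items first prunes faster

    def place(i):
        if i == len(vals):
            return True
        seen = set()
        for j in range(b):
            if sums[j] + vals[i] <= target and sums[j] not in seen:
                seen.add(sums[j])           # skip bins with identical partial sums
                parts[j].append(vals[i]); sums[j] += vals[i]
                if place(i + 1):
                    return True
                parts[j].pop(); sums[j] -= vals[i]
        return False

    return parts if place(0) else None

# Example: the initial interval {1, ..., 8} split into b = 4 equal-sum parts.
print(equisum_partition(range(1, 9), 4))   # [[8, 1], [7, 2], [6, 3], [5, 4]]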
Open Access Article
Idea of Using Blockchain Technique for Choosing the Best Configuration of Weights in Neural Networks
Algorithms 2019, 12(8), 163; https://doi.org/10.3390/a12080163
Received: 4 July 2019 / Revised: 1 August 2019 / Accepted: 6 August 2019 / Published: 10 August 2019
Abstract
The blockchain technique is becoming more and more popular due to its advantages, such as stability and its dispersed nature. Another important field is machine learning, which is increasingly used in practice. Unfortunately, training or retraining artificial neural networks is very time-consuming and requires high computing power. In this paper, we propose using a blockchain technique, based on blockchain activity paradigms, to train neural networks. This type of activity is important because of the possibility of searching for initial network weights that lead to faster training through quicker gradient decrease. We performed tests with much heavier calculations to indicate that such an approach is possible. However, this type of solution can also be used for less demanding calculations, i.e., only a few iterations of training, to find a better configuration of initial weights.
(This article belongs to the Special Issue Algorithms for Pattern Recognition)
Open Access Article
Distributed Balanced Partitioning via Linear Embedding
Algorithms 2019, 12(8), 162; https://doi.org/10.3390/a12080162
Received: 18 July 2019 / Revised: 5 August 2019 / Accepted: 7 August 2019 / Published: 10 August 2019
Abstract
Balanced partitioning is often a crucial first step in solving large-scale graph optimization problems: in some cases, a big graph can be chopped into pieces that fit on one machine to be processed independently before stitching the results together, leading to certain suboptimality from the interaction among different pieces. In other cases, links between different parts may show up in the running time and/or network communication cost, hence the desire for a small cut size. We study a distributed balanced-partitioning problem where the goal is to partition the vertices of a given graph into k pieces so as to minimize the total cut size. Our algorithm is composed of a few steps that are easily implementable in distributed computation frameworks such as MapReduce. The algorithm first embeds nodes of the graph onto a line, and then processes nodes in a distributed manner guided by the linear embedding order. We examine various ways to find the first embedding, for example, via a hierarchical clustering or Hilbert curves. Then we apply four different techniques, including local swaps and minimum cuts on the boundaries of partitions, as well as contraction and dynamic programming. In our empirical study, we compare the above techniques with each other, and also to previous work in distributed graph algorithms, for example, a label-propagation method, FENNEL, and Spinner. We report our results both on a private map graph and several public social networks, and show that our results beat previous distributed algorithms: for instance, compared to the label-propagation algorithm, we report an improvement of 15–25% in the cut value. We also observe that our algorithms admit scalable distributed implementation for any number of partitions. Finally, we explain three applications of this work at Google: (1) Balanced partitioning is used to route multi-term queries to different replicas in the Google Search backend in a way that reduces the cache miss rate by ≈0.5%, which leads to a double-digit gain in the throughput of production clusters. (2) Applied to Google Maps Driving Directions, balanced partitioning minimizes the number of cross-shard queries with the goal of saving CPU usage. This system achieves load balancing by dividing the world graph into several "shards". Live experiments demonstrate an ≈40% drop in the number of cross-shard queries when compared to a standard geography-based method. (3) In a job scheduling problem for our data centers, we use balanced partitioning to evenly distribute the work while minimizing the amount of communication across geographically distant servers. In fact, the hierarchical nature of our solution goes well with the layering of data center servers, where certain machines are closer to each other and have faster links to one another.
(This article belongs to the Special Issue Graph Partitioning: Theory, Engineering, and Applications)
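A minimal sketch of the embed-then-split skeleton appears below, with a plain BFS order standing in for the hierarchical-clustering or Hilbert-curve embeddings and with the local-swap, minimum-cut, and contraction refinements omitted; all names and the toy graph are illustrative:

from collections import deque

def bfs_order(adj):
    order, seen = [], set()
    for s in adj:                           # also covers disconnected graphs
        if s in seen:
            continue
        queue = deque([s]); seen.add(s)
        while queue:
            u = queue.popleft(); order.append(u)
            for v in adj[u]:
                if v not in seen:
                    seen.add(v); queue.append(v)
    return order

def cut_size(adj, part):
    return sum(part[u] != part[v] for u in adj for v in adj[u] if u < v)

def linear_partition(adj, k):
    order = bfs_order(adj)                  # stand-in for the linear embedding
    size = -(-len(order) // k)              # ceiling division: balanced chunks
    part = {u: i // size for i, u in enumerate(order)}
    return part, cut_size(adj, part)

g = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(linear_partition(g, 2))   # two triangles, cut size 1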
Open Access Feature Paper Article
Distributed Centrality Analysis of Social Network Data Using MapReduce
Algorithms 2019, 12(8), 161; https://doi.org/10.3390/a12080161
Received: 1 July 2019 / Revised: 4 August 2019 / Accepted: 7 August 2019 / Published: 9 August 2019
Abstract
Analyzing the structure of a social network helps in gaining insights into interactions and relationships among users while revealing the patterns of their online behavior. Network centrality is a measure of the importance of a node in a network, which allows revealing the structural patterns and morphology of networks. We propose a distributed computing approach for the calculation of the network centrality value of each user, using the MapReduce approach in the Hadoop platform, which allows faster and more efficient computation compared to a conventional implementation. A distributed approach is scalable and helps in the efficient computation of large-scale datasets, such as social network data. The proposed approach improves the calculation performance of degree centrality by 39.8%, closeness centrality by 40.7%, and eigenvalue centrality by 41.1% on a Twitter dataset.
(This article belongs to the Special Issue Algorithms for Pattern Recognition)
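The MapReduce decomposition for the simplest of the three measures, degree centrality, can be sketched in plain Python (the mapper emits (node, 1) per edge endpoint, the reducer sums per node); on Hadoop these two phases would run distributed, and the closeness and eigenvalue variants follow the same pattern but are more involved. Names and data are illustrative:

from collections import defaultdict

edges = [("a", "b"), ("a", "c"), ("b", "c"), ("c", "d")]

def mapper(edge):
    u, v = edge
    yield (u, 1)
    yield (v, 1)

def reducer(pairs):
    counts = defaultdict(int)
    for node, one in pairs:
        counts[node] += one
    return counts

degree = reducer(kv for e in edges for kv in mapper(e))
n = len(degree)
centrality = {u: d / (n - 1) for u, d in degree.items()}   # normalized degree
print(centrality)   # {'a': 0.67, 'b': 0.67, 'c': 1.0, 'd': 0.33} (rounded)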
Open Access Article
A Novel Virtual Sample Generation Method to Overcome the Small Sample Size Problem in Computer Aided Medical Diagnosing
Algorithms 2019, 12(8), 160; https://doi.org/10.3390/a12080160
Received: 14 March 2019 / Revised: 18 July 2019 / Accepted: 2 August 2019 / Published: 9 August 2019
Abstract
Deep neural networks are successful learning tools for building nonlinear models. However, a robust deep learning-based classification model needs a large dataset. Indeed, these models are often unstable when they use small datasets. To solve this issue, which is particularly critical in light of the possible clinical applications of these predictive models, researchers have developed approaches such as virtual sample generation. Virtual sample generation significantly improves learning and classification performance when working with small samples. The main objective of this study is to evaluate the ability of the proposed virtual sample generation to overcome the small-sample-size problem, which is a feature of the automated detection of a neurodevelopmental disorder, namely autism spectrum disorder. Results show that our method enhances diagnostic accuracy from 84% to 95% using virtual samples generated on the basis of five actual clinical samples. The present findings show the feasibility of using the proposed technique to improve classification performance even in cases of clinical samples of limited size. Accounting for concerns in relation to small sample sizes, our technique represents a meaningful step forward in terms of pattern recognition methodology, particularly when applied to diagnostic classification of neurodevelopmental disorders. In addition, the proposed technique has been tested on other available benchmark datasets. The experimental outcomes showed that the accuracy of classification using virtual samples was superior to that obtained using the original training data alone.
(This article belongs to the Special Issue Evolutionary Algorithms in Health Technologies)
Open Access Article
Compaction of Church Numerals
Algorithms 2019, 12(8), 159; https://doi.org/10.3390/a12080159
Received: 28 June 2019 / Revised: 5 August 2019 / Accepted: 5 August 2019 / Published: 8 August 2019
Abstract
In this study, we address the problem of compaction of Church numerals. Church numerals are unary representations of natural numbers on the scheme of lambda terms. We propose a novel decomposition scheme from a given natural number into an arithmetic expression using tetration, which enables us to obtain a compact representation of lambda terms that leads to the Church numeral of the natural number. For a natural number n, we prove that the size of the lambda term obtained by the proposed method is O((slog₂ n)(log n / log log n)), where slog denotes the super-logarithm. Moreover, we experimentally confirmed that the proposed method outperforms the binary representation of Church numerals on average when n is less than approximately 10,000.
(This article belongs to the Special Issue Data Compression Algorithms and their Applications)
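For context, Church numerals and the exponentiation that makes compact arithmetic expressions possible can be demonstrated directly with Python lambdas; the paper's contribution, the tetration-based decomposition of n into such expressions, operates on the lambda terms themselves:

zero = lambda f: lambda x: x                       # applies f zero times
succ = lambda n: (lambda f: lambda x: f(n(f)(x)))  # one more application of f

def to_int(n):
    return n(lambda k: k + 1)(0)                   # evaluate on (+1, 0)

three = succ(succ(succ(zero)))
power = lambda n, m: m(n)                          # Church exponentiation: n ** m

print(to_int(three))                 # 3
print(to_int(power(three, three)))   # 27, from a term far smaller than 27 succs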
Open Access Article
A Fast Randomized Algorithm for the Heterogeneous Vehicle Routing Problem with Simultaneous Pickup and Delivery
Algorithms 2019, 12(8), 158; https://doi.org/10.3390/a12080158
Received: 19 June 2019 / Revised: 31 July 2019 / Accepted: 1 August 2019 / Published: 3 August 2019
Abstract
In the vehicle routing problem with simultaneous pickup and delivery (VRPSPD), customers demanding both delivery and pickup operations have to be visited once by a single vehicle. In this work, we propose a fast randomized algorithm using a nearest neighbor strategy to tackle an extension of the VRPSPD in which the fleet of vehicles is heterogeneous. This variant is an NP-hard problem, which in practice makes it impossible to solve to proven optimality for large instances. To evaluate the proposal, we use benchmark instances from the literature and compare our results to those obtained by a state-of-the-art algorithm. Our approach presents very competitive results, not only improving several of the known solutions, but also running in a shorter time.
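The flavor of the construction can be conveyed by a toy randomized nearest-neighbor sketch for the heterogeneous VRPSPD. The feasibility rule (all deliveries loaded at the depot, pickups accumulated on board, capacity never exceeded) is standard for this problem; the candidate-list size k, the data layout, and the absence of any improvement phase are simplifying assumptions of this sketch:

import math, random

def feasible(route, demands, cap):
    # demands[c] = (delivery, pickup); the vehicle leaves the depot carrying
    # every delivery on the route, then swaps deliveries for pickups en route.
    load = sum(demands[c][0] for c in route)
    if load > cap:
        return False
    for c in route:
        dly, pkp = demands[c]
        load += pkp - dly
        if load > cap:
            return False
    return True

def randomized_nn_routes(coords, demands, fleet, rng, k=3):
    unvisited = set(demands) - {0}                 # node 0 is the depot
    routes = []
    for cap in fleet:                              # heterogeneous capacities
        route, cur = [], 0
        while unvisited:
            cand = sorted(unvisited,
                          key=lambda c: math.dist(coords[cur], coords[c]))
            cand = [c for c in cand if feasible(route + [c], demands, cap)][:k]
            if not cand:
                break
            nxt = rng.choice(cand)                 # randomization step
            route.append(nxt); unvisited.discard(nxt); cur = nxt
        routes.append(route)
    return routes                                  # leftovers mean too few vehicles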
Open Access Article
In Search of the Densest Subgraph
Algorithms 2019, 12(8), 157; https://doi.org/10.3390/a12080157
Received: 23 June 2019 / Revised: 30 July 2019 / Accepted: 30 July 2019 / Published: 2 August 2019
Abstract
In this survey paper, we review various concepts of graph density, as well as associated theorems and algorithms. Our goal is motivated by the fact that, in many applications, it is a key algorithmic task to extract a densest subgraph from an input graph according to some appropriate definition of graph density. While this problem has been the subject of active research for over half a century, with many proposed variants and solutions, new results still continuously emerge in the literature. This shows both the importance and the richness of the subject. We also identify some interesting open problems in the field.
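One classic result covered by such surveys is Charikar's greedy peeling algorithm, a 1/2-approximation for the average-degree density |E(S)|/|S| (one of the several density notions reviewed); a compact Python version:

def densest_subgraph(adj):
    adj = {u: set(vs) for u, vs in adj.items()}   # work on a copy
    m = sum(len(vs) for vs in adj.values()) // 2
    cur = set(adj)
    best, best_density = set(cur), m / len(cur)
    while len(cur) > 1:
        u = min(cur, key=lambda v: len(adj[v]))   # peel a minimum-degree vertex
        for v in adj[u]:
            adj[v].discard(u)
        m -= len(adj[u])
        adj.pop(u); cur.discard(u)
        if m / len(cur) > best_density:
            best, best_density = set(cur), m / len(cur)
    return best, best_density

g = {1: {2, 3, 4}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {1, 2, 3, 5}, 5: {4}}
print(densest_subgraph(g))   # ({1, 2, 3, 4}, 1.5): the K4 beats K4 plus a pendant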
Open Access Article
A Collocation Method for the Numerical Solution of Nonlinear Fractional Dynamical Systems
Algorithms 2019, 12(8), 156; https://doi.org/10.3390/a12080156
Received: 5 July 2019 / Revised: 26 July 2019 / Accepted: 30 July 2019 / Published: 31 July 2019
Abstract
Fractional differential problems are widely used in applied sciences. For this reason, there is a great interest in the construction of efficient numerical methods to approximate their solution. The aim of this paper is to describe in detail a collocation method suitable to approximate the solution of dynamical systems with time derivative of fractional order. We will highlight all the steps necessary to implement the corresponding algorithm and we will use it to solve some test problems. Two Mathematica Notebooks that can be used to solve these test problems are provided.
Open Access Article
A Rigid Motion Artifact Reduction Method for CT Based on Blind Deconvolution
Algorithms 2019, 12(8), 155; https://doi.org/10.3390/a12080155
Received: 15 July 2019 / Accepted: 29 July 2019 / Published: 31 July 2019
Abstract
In computed tomography (CT), artifacts due to rigid patient motion often significantly degrade image quality. This paper proposes a method based on iterative blind deconvolution to eliminate motion artifacts. The proposed method alternately reconstructs the image and reduces motion artifacts in an iterative scheme until the difference measure between two successive iterations is smaller than a threshold. In this iterative process, Richardson–Lucy (RL) deconvolution with spatially adaptive total variation (SATV) regularization is inserted into the iterative process of the ordered-subsets expectation maximization (OSEM) reconstruction algorithm. The proposed method is evaluated on a numerical phantom, a head phantom, and a patient scan. The reconstructed images indicate that the proposed method can reduce motion artifacts and provide high-quality images. Quantitative evaluations also show that the proposed method yields an appreciable improvement on all metrics, reducing root-mean-square error (RMSE) by about 30% and increasing the Pearson correlation coefficient (CC) and mean structural similarity (MSSIM) by about 15% and 20%, respectively, compared to the RL-OSEM method. Furthermore, the proposed method needs only the measured raw data; no additional measurements are required. Compared with previous work, it can be applied to any scanning mode and can reduce motion artifacts in six degrees of freedom, so the artifact reduction is more effective in clinical experiments.
(This article belongs to the Special Issue The Second Symposium on Machine Intelligence and Data Analytics)
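The Richardson–Lucy building block itself is compact; a bare NumPy version (without the SATV regularization or the surrounding OSEM reconstruction that the paper interleaves with it) looks like this:

import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, n_iter=30, eps=1e-12):
    blurred = blurred.astype(float)
    est = np.full(blurred.shape, blurred.mean())    # flat initial estimate
    psf_mirror = psf[::-1, ::-1]                    # adjoint of the blur
    for _ in range(n_iter):
        denom = fftconvolve(est, psf, mode="same")  # current model prediction
        ratio = blurred / np.maximum(denom, eps)
        est *= fftconvolve(ratio, psf_mirror, mode="same")
    return est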
Open Access Feature Paper Review
A Survey of Convolutional Neural Networks on Edge with Reconfigurable Computing
Algorithms 2019, 12(8), 154; https://doi.org/10.3390/a12080154
Received: 24 June 2019 / Revised: 27 July 2019 / Accepted: 30 July 2019 / Published: 31 July 2019
Abstract
The convolutional neural network (CNN) is one of the most used deep learning models for image detection and classification, due to its high accuracy when compared to other machine learning algorithms. CNNs achieve better results at the cost of higher computing and memory requirements. Inference of convolutional neural networks is therefore usually done in centralized high-performance platforms. However, many applications based on CNNs are migrating to edge devices near the source of data due to the unreliability of a transmission channel in exchanging data with a central server, the uncertainty about channel latency not tolerated by many applications, security, data privacy, etc. While advantageous, deep learning on edge is quite challenging because edge devices are usually limited in terms of performance, cost, and energy. Reconfigurable computing is being considered for inference on the edge due to its high performance and energy efficiency, while keeping a high hardware flexibility that allows for the easy adaptation of the target computing platform to the CNN model. In this paper, we describe the features of the most common CNNs, the capabilities of reconfigurable computing for running CNNs, the state of the art of reconfigurable computing implementations proposed to run CNN models, as well as the trends and challenges for future edge reconfigurable platforms.
(This article belongs to the Special Issue High Performance Reconfigurable Computing)
Open Access Article
γ-Graphs of Trees
Algorithms 2019, 12(8), 153; https://doi.org/10.3390/a12080153
Received: 30 June 2019 / Revised: 26 July 2019 / Accepted: 27 July 2019 / Published: 30 July 2019
Abstract
For a graph G = (V, E), the γ-graph of G, denoted G(γ) = (V(γ), E(γ)), is the graph whose vertex set is the collection of minimum dominating sets, or γ-sets, of G, and two γ-sets are adjacent in G(γ) if they differ by a single vertex and the two differing vertices are adjacent in G. In this paper, we consider γ-graphs of trees. We develop an algorithm for determining the γ-graph of a tree, characterize which trees are γ-graphs of trees, and further comment on the structure of γ-graphs of trees and their connections with Cartesian product graphs, the set of graphs which can be obtained from the Cartesian product of graphs of order at least two.
Open Access Feature Paper Article
Bicriteria Vehicle Routing Problem with Preferences and Timing Constraints in Home Health Care Services
Algorithms 2019, 12(8), 152; https://doi.org/10.3390/a12080152
Received: 28 June 2019 / Revised: 22 July 2019 / Accepted: 23 July 2019 / Published: 30 July 2019
Abstract
Home Healthcare (HHC) is an emerging and fast-expanding service sector that gives rise to challenging vehicle routing and scheduling problems. Each day, HHC structures must schedule the visits of caregivers to patients requiring specific medical and paramedical services at home. These operations can prove unsuitable if the visits are not planned correctly, leading to high logistics costs and/or a deteriorated service level. In this article, this issue is modeled as a vehicle routing problem where a set of routes has to be built to visit patients asking for one or more specific services within a given time window and during a fixed service time. Each patient has a preference value associated with each available caregiver. The problem addressed in this paper considers two objectives to optimize simultaneously: minimizing the caregivers' travel costs and maximizing the patients' preferences. Different methods based on the bi-objective non-dominated sorting algorithm are proposed to solve the vehicle routing problem with time windows, preferences, and timing constraints. Numerical results are presented for instances with up to 73 clients. Metrics such as the distance measure, hyper-volume, and the number of non-dominated solutions in the Pareto front are used to assess the quality of the proposed approaches.
(This article belongs to the Special Issue Evolutionary Algorithms in Health Technologies)
Open Access Article
Soft Iterative Decoding Algorithms for Rateless Codes in Satellite Systems
Algorithms 2019, 12(8), 151; https://doi.org/10.3390/a12080151
Received: 30 May 2019 / Revised: 24 July 2019 / Accepted: 27 July 2019 / Published: 29 July 2019
Abstract
The satellite system is one of the most efficient means for broadcasting due to its wide service coverage and the fact that it can provide high data rate services by using high frequency bands. However, there are a number of problems in satellite systems, such as a long round trip delay (RTD) and the heterogeneity of the channel conditions of the earth stations. Even though utilizing adaptive coding and modulation (ACM) is almost mandatory for satellite systems using high frequency bands, due to the serious rain fading, the long RTD makes it difficult to respond quickly to channel quality information, resulting in a decrease in the efficiency of ACM. The high heterogeneity of earth stations caused by the wide service coverage also makes it difficult to apply a uniform transmission mode, and thus satellite systems require receiver-dependent transmission modes. A rateless code can be an effective means to compensate for these disadvantages of satellite systems compared to terrestrial wireless systems. This paper presents soft iterative decoding algorithms for the efficient application of rateless codes in satellite systems and demonstrates that rateless codes can be effectively used for hybrid automatic repeat request schemes.
(This article belongs to the Special Issue Coding Theory and Its Application)
Open Access Article
Defacement Detection with Passive Adversaries
Algorithms 2019, 12(8), 150; https://doi.org/10.3390/a12080150
Received: 28 May 2019 / Revised: 17 July 2019 / Accepted: 25 July 2019 / Published: 29 July 2019
Abstract
A novel approach to defacement detection is proposed in this paper, addressing explicitly the possible presence of a passive adversary. Defacement detection is an important security measure for Web sites and applications, aimed at avoiding unwanted modifications that would result in significant reputational damage. As in many other anomaly detection contexts, the algorithm used to identify possible defacements is obtained via an Adversarial Machine Learning process. We consider an exploratory setting, where the adversary can observe the detector's alarm-generating behaviour, with the purpose of devising and injecting defacements that will pass undetected. It is then necessary to make the learning process unpredictable, so that the adversary will be unable to replicate it and predict the classifier's behaviour. We achieve this goal by introducing a secret key that our adversary does not know. The key influences the learning process in a number of precisely defined ways, including the subset of examples and features that are actually used, the times of learning and testing, and the learning algorithm's hyper-parameters. This learning methodology is successfully applied in this context by using the system with both real and artificially modified Web sites. A year-long experiment is also described, concerning the monitoring of the new Web site of a major manufacturing company.
Open Access Feature Paper Article
Mapping a Guided Image Filter on the HARP Reconfigurable Architecture Using OpenCL
Algorithms 2019, 12(8), 149; https://doi.org/10.3390/a12080149
Received: 27 June 2019 / Revised: 23 July 2019 / Accepted: 25 July 2019 / Published: 27 July 2019
Abstract
Intel recently introduced the Heterogeneous Architecture Research Platform, HARP. In this platform, the Central Processing Unit and a Field-Programmable Gate Array are connected through a high-bandwidth, low-latency interconnect, and both share DRAM memory. For this platform, Open Computing Language (OpenCL), a High-Level Synthesis (HLS) language, is made available. By making use of HLS, a faster design cycle can be achieved compared to programming in a traditional hardware description language. This, however, comes at the cost of having less control over the hardware implementation. We investigate how OpenCL can be applied to implement a real-time guided image filter on the HARP platform. In the first phase, the performance-critical parameters of the OpenCL programming model are determined using several specialized benchmarks. In the second phase, the guided image filter algorithm is implemented using the insights gained in the first phase. Both a floating-point and a fixed-point implementation were developed for this algorithm, based on a sliding-window implementation. This resulted in a maximum floating-point performance of 135 GFLOPS, a maximum fixed-point performance of 430 GOPS, and a throughput of HD color images at 74 frames per second.
(This article belongs to the Special Issue High Performance Reconfigurable Computing)
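The algorithm being mapped is itself short. A NumPy reference of the grayscale guided filter (guidance image I, input p, window radius r, regularization eps) shows the arithmetic that the OpenCL kernels realize with sliding windows; the parameter values are illustrative:

import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r=8, eps=1e-3):
    box = lambda x: uniform_filter(x, size=2 * r + 1)   # mean over the window
    mean_I, mean_p = box(I), box(p)
    cov_Ip = box(I * p) - mean_I * mean_p
    var_I = box(I * I) - mean_I ** 2
    a = cov_Ip / (var_I + eps)          # per-window linear model q = a*I + b
    b = mean_p - a * mean_I
    return box(a) * I + box(b)          # average overlapping window models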
Open Access Article
Variational Calculus Approach to Optimal Interception Task of a Ballistic Missile in 1D and 2D Cases
Algorithms 2019, 12(8), 148; https://doi.org/10.3390/a12080148
Received: 20 June 2019 / Revised: 22 July 2019 / Accepted: 23 July 2019 / Published: 26 July 2019
Abstract
The paper presents the application of variational calculus to achieve the optimal design of an open-loop control law for the anti-ballistic-missile interception task. It presents analytical results in the form of the appropriate Euler–Lagrange equations for three different performance indices, with a simple model of the rocket and the missile based on the conservation of momentum principle. It also presents a software program enabling rapid simulation of the interception process with selected parameters and parametric analysis, as well as easy modification by other researchers, as it is written in open code as an m-function of Matlab.
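For orientation, the stationarity condition behind each performance index is the Euler–Lagrange equation; with a generic index J and integrand L(q, q̇, t) it reads as below, while the paper derives the specialized forms for its three indices and the momentum-conservation rocket/missile model:

J = \int_{t_0}^{t_f} L(q(t), \dot{q}(t), t)\, dt ,
\qquad
\frac{\partial L}{\partial q} - \frac{d}{dt} \frac{\partial L}{\partial \dot{q}} = 0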
Open Access Article
An Optimized Differential Step-Size LMS Algorithm
Algorithms 2019, 12(8), 147; https://doi.org/10.3390/a12080147
Received: 28 June 2019 / Revised: 17 July 2019 / Accepted: 21 July 2019 / Published: 24 July 2019
Abstract
Adaptive algorithms with differential step-sizes (related to the filter coefficients) are well known in the literature, most frequently as "proportionate" algorithms. Usually, they are derived on a heuristic basis. In this paper, we introduce an algorithm resulting from an optimization criterion. Thereby, we obtain a benchmark algorithm and also another version with lower computational complexity, which is rigorously valid for less correlated input signals. Simulation results confirm the theory and outline the performance of the algorithms. Unfortunately, the good performance comes at the cost of an important increase in computational complexity. Nevertheless, the proposed algorithms can represent useful benchmarks in the field.
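As a sketch of the family under discussion, an LMS update in which every coefficient receives its own step size is shown below. Here the per-coefficient gains are magnitude-proportional, as in classic proportionate filters, whereas the paper derives them from an optimization criterion; all parameter choices are illustrative:

import numpy as np

def dss_lms(x, d, L=16, mu=0.1, delta=1e-4):
    w = np.zeros(L)                     # adaptive filter coefficients
    y = np.zeros(len(d))
    for n in range(L, len(d)):
        xn = x[n - L:n][::-1]           # regressor, most recent sample first
        y[n] = w @ xn
        e = d[n] - y[n]                 # a priori error
        g = np.abs(w) + delta
        g = g / g.sum()                 # per-coefficient (differential) gains
        w += mu * L * g * e * xn / (xn @ xn + delta)
    return w, y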
Algorithms EISSN 1999-4893, published by MDPI AG, Basel, Switzerland.