
Table of Contents

Algorithms, Volume 10, Issue 3 (September 2017)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF form. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
Displaying articles 1-37

Research

Jump to: Other

Open Access Article Hierarchical Gradient Similarity Based Video Quality Assessment Metric
Algorithms 2017, 10(3), 72; doi:10.3390/a10030072
Received: 18 April 2017 / Revised: 19 June 2017 / Accepted: 19 June 2017 / Published: 23 June 2017
PDF Full-text (542 KB) | HTML Full-text | XML Full-text
Abstract
Video quality assessment (VQA) plays an important role in video applications for quality evaluation and resource allocation. It aims to evaluate video quality in a way that is consistent with human perception. In this letter, a hierarchical gradient similarity based VQA metric is proposed inspired by the structure of the primate visual cortex, in which visual information is processed through sequential visual areas. These areas are modeled with the corresponding measures to evaluate the overall perceptual quality. Experimental results on the LIVE database show that the proposed VQA metric significantly outperforms most of the state-of-the-art VQA metrics. Full article

Open Access Article Variable Selection Using Adaptive Band Clustering and Physarum Network
Algorithms 2017, 10(3), 73; doi:10.3390/a10030073
Received: 19 April 2017 / Revised: 21 June 2017 / Accepted: 22 June 2017 / Published: 27 June 2017
PDF Full-text (1903 KB) | HTML Full-text | XML Full-text
Abstract
Variable selection is a key step for eliminating redundant information in spectroscopy. Among the various variable selection methods, the Physarum network (PN) is a newly introduced and efficient one. However, in PN the whole spectrum has to be divided equally into sub-spectral bands, and this division criterion limits the selection ability and prediction performance. In this paper, we transform the spectrum division problem into a clustering problem and solve it using the affinity propagation (AP) algorithm, an adaptive clustering method, to find the optimized number of sub-spectral bands and the number of wavelengths in each sub-spectral band. Experimental results show that combining AP and PN achieves prediction accuracy similar to that of PN alone while using far fewer wavelengths. Full article

Open Access Article Thresholds of the Inner Steps in Multi-Step Newton Method
Algorithms 2017, 10(3), 75; doi:10.3390/a10030075
Received: 2 June 2017 / Revised: 23 June 2017 / Accepted: 24 June 2017 / Published: 27 June 2017
PDF Full-text (210 KB) | HTML Full-text | XML Full-text
Abstract
We investigate the efficiency of the multi-step Newton method (the classical Newton method in which the first derivative is re-evaluated periodically after m steps) for solving nonlinear equations, F(x) = 0, F: D ⊆ R^n → R^n. We highlight the following property of the multi-step Newton method with respect to some other Newton-type method: for a given n, there exist thresholds of m, that is, an interval (m_i, m_s), such that for m inside this interval the efficiency index of the multi-step Newton method is better than that of the other Newton-type method. We also search for optimal values of m. Full article
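The frozen-derivative iteration described in this abstract is easy to prototype. The sketch below is an illustrative Python implementation of the general multi-step (frozen-Jacobian) Newton scheme, not the authors' code; the test system and parameter values are invented for demonstration.

```python
import numpy as np

def multistep_newton(F, J, x0, m=3, outer_iters=20, tol=1e-12):
    """Solve F(x) = 0, re-evaluating (and inverting) the Jacobian only every m steps."""
    x = np.asarray(x0, dtype=float)
    for _ in range(outer_iters):
        J_frozen = np.linalg.inv(J(x))   # in practice one would reuse an LU factorization
        for _ in range(m):               # m inner corrections with the frozen Jacobian
            x = x - J_frozen @ F(x)
            if np.linalg.norm(F(x)) < tol:
                return x
    return x

# Toy system with solution (1, 2): x0^2 + x1 = 3 and x0 + x1^2 = 5.
F = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])
J = lambda x: np.array([[2.0 * x[0], 1.0], [1.0, 2.0 * x[1]]])
print(multistep_newton(F, J, x0=[1.0, 1.0], m=3))
```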

Open Access Article A Genetic Algorithm Using Triplet Nucleotide Encoding and DNA Reproduction Operations for Unconstrained Optimization Problems
Algorithms 2017, 10(3), 76; doi:10.3390/a10030076
Received: 31 May 2017 / Revised: 19 June 2017 / Accepted: 19 June 2017 / Published: 30 June 2017
PDF Full-text (3117 KB) | HTML Full-text | XML Full-text
Abstract
As one of the evolutionary heuristics methods, genetic algorithms (GAs) have shown a promising ability to solve complex optimization problems. However, existing GAs still have difficulties in finding the global optimum and avoiding premature convergence. To further improve the search efficiency and convergence rate of evolution algorithms, inspired by the mechanism of biological DNA genetic information and evolution, we present a new genetic algorithm, called GA-TNE+DRO, which uses a novel triplet nucleotide coding scheme to encode potential solutions and a set of new genetic operators to search for globally optimal solutions. The coding scheme represents potential solutions as a sequence of triplet nucleotides and the DNA reproduction operations mimic the DNA reproduction process more vividly than existing DNA-GAs. We compared our algorithm with several existing GA and DNA-based GA algorithms using a benchmark of eight unconstrained optimization functions. Our experimental results show that the proposed algorithm can converge to solutions much closer to the global optimal solutions in a much lower number of iterations than the existing algorithms. A complexity analysis also shows that our algorithm is computationally more efficient than the existing algorithms. Full article

Open Access Article New Methodology to Approximate Type-Reduction Based on a Continuous Root-Finding Karnik Mendel Algorithm
Algorithms 2017, 10(3), 77; doi:10.3390/a10030077
Received: 30 May 2017 / Revised: 29 June 2017 / Accepted: 1 July 2017 / Published: 5 July 2017
PDF Full-text (3785 KB) | HTML Full-text | XML Full-text
Abstract
Interval Type-2 fuzzy systems make it possible to account for uncertainty in models based on fuzzy systems and increase the robustness of solutions to applications, but they also increase the complexity of the fuzzy system design. Several attempts have been made to reduce the computational cost of the type-reduction stage, since this process is essentially a sampling-based numerical approximation whose computational cost grows with the number of samples while its error shrinks with it; much of the previous work therefore focuses on strategies that reduce the number of operations. The first type-reduction method was proposed by Karnik and Mendel (KM) and was followed by its enhanced version, EKM. Continuous versions, called CKM and CEKM, came later, together with further variants, including approaches such as the Nie-Tan versions and similar enhancements that eliminate the type-reduction process altogether and reduce the computational cost to a Type-1 defuzzification. In this work, we analyze and propose a variant of CEKM that views type-reduction as a root-finding problem, thereby taking advantage of existing numerical methods; the main objective is to eliminate the iterative type-reduction process while providing a continuous solution for the defuzzification. Full article
(This article belongs to the Special Issue Extensions to Type-1 Fuzzy Logic: Theory, Algorithms and Applications)
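To make the root-finding view concrete, here is a hedged sketch (not the authors' formulation) of the standard continuous Karnik-Mendel computation of the left centroid endpoint posed as a fixed-point/root-finding problem and handed to a stock solver; the triangular footprint of uncertainty below is invented for illustration.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

a, b = 0.0, 10.0
umf = lambda x: max(0.0, 1.0 - abs(x - 5.0) / 5.0)        # upper membership function
lmf = lambda x: 0.5 * max(0.0, 1.0 - abs(x - 5.0) / 4.0)  # lower membership function

def y_left(xi):
    """Centroid using the upper MF on [a, xi] and the lower MF on [xi, b]."""
    num = quad(lambda x: x * umf(x), a, xi)[0] + quad(lambda x: x * lmf(x), xi, b)[0]
    den = quad(umf, a, xi)[0] + quad(lmf, xi, b)[0]
    return num / den

# The optimal switch point satisfies y_left(xi) = xi, i.e. it is a root of g.
g = lambda xi: y_left(xi) - xi
xi_star = brentq(g, a + 1e-6, b - 1e-6)
print("left endpoint of the type-reduced interval:", y_left(xi_star))
```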

Open Access Article An Efficient Algorithm for the Separable Nonlinear Least Squares Problem
Algorithms 2017, 10(3), 78; doi:10.3390/a10030078
Received: 7 June 2017 / Revised: 23 June 2017 / Accepted: 1 July 2017 / Published: 10 July 2017
PDF Full-text (403 KB) | HTML Full-text | XML Full-text
Abstract
The nonlinear least squares problem min_{y,z} ‖A(y)z + b(y)‖, where A(y) is a full-rank (N + ℓ) × N matrix, y ∈ R^n, z ∈ R^N and b(y) ∈ R^{N+ℓ} with ℓ ≥ n, can be solved by first solving a reduced problem min_y f(y) to find the optimal value y* of y, and then solving the resulting linear least squares problem min_z ‖A(y*)z + b(y*)‖ to find the optimal value z* of z. We have previously justified the use of the reduced function f(y) = C^T(y) b(y), where C(y) is a matrix whose columns form an orthonormal basis for the nullspace of A^T(y), and presented a quadratically convergent Gauss–Newton type method for solving min_y ‖C^T(y) b(y)‖ based on the use of QR factorization. In this note, we show how LU factorization can replace the QR factorization in those computations, halving the associated computational cost while also providing opportunities to exploit sparsity and thus further enhance computational efficiency. Full article
(This article belongs to the Special Issue Numerical Algorithms for Solving Nonlinear Equations and Systems 2017)
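For readers who want to experiment with the two-stage idea, the following is a generic variable-projection style sketch in Python: the linear variable z is eliminated by an inner dense least-squares solve and the reduced residual is minimized over y with a stock solver. It deliberately uses plain lstsq rather than the QR/LU machinery of the paper, and the toy A(y) and b(y) are invented for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

def A(y):                      # (N + l) x N matrix depending nonlinearly on y
    t = np.linspace(0.0, 1.0, 6)
    return np.column_stack([np.exp(-y[0] * t), np.exp(-y[1] * t)])

def b(y):                      # right-hand side; here independent of y for simplicity
    t = np.linspace(0.0, 1.0, 6)
    return -(2.0 * np.exp(-1.3 * t) + 0.5 * np.exp(-3.1 * t))

def reduced_residual(y):
    z, *_ = np.linalg.lstsq(A(y), -b(y), rcond=None)   # inner solve for z given y
    return A(y) @ z + b(y)                             # residual of the full problem

sol = least_squares(reduced_residual, x0=[1.0, 2.0])   # outer solve for y
y_star = sol.x
z_star, *_ = np.linalg.lstsq(A(y_star), -b(y_star), rcond=None)
print("y* =", y_star, " z* =", z_star)
```
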
Open Access Article Design of an Optimized Fuzzy Classifier for the Diagnosis of Blood Pressure with a New Computational Method for Expert Rule Optimization
Algorithms 2017, 10(3), 79; doi:10.3390/a10030079
Received: 25 May 2017 / Revised: 4 July 2017 / Accepted: 7 July 2017 / Published: 14 July 2017
PDF Full-text (8040 KB) | HTML Full-text | XML Full-text
Abstract
A neuro-fuzzy hybrid model (NFHM) is proposed as a new artificial intelligence method to classify blood pressure (BP). The NFHM combines neural networks, fuzzy logic and evolutionary computation, the latter in the form of genetic algorithms (GAs). The main goal is to model the behavior of blood pressure from 24-hour monitoring data per patient and to obtain its trend, which is then classified by a fuzzy system whose expert-provided rules are optimized by a genetic algorithm to find the number of rules that gives the classifier the lowest classification error. Simulation results are presented to show the advantage of the proposed model. Full article
(This article belongs to the Special Issue Extensions to Type-1 Fuzzy Logic: Theory, Algorithms and Applications)

Open Access Article A Hybrid Algorithm for Optimal Wireless Sensor Network Deployment with the Minimum Number of Sensor Nodes
Algorithms 2017, 10(3), 80; doi:10.3390/a10030080
Received: 15 June 2017 / Revised: 7 July 2017 / Accepted: 12 July 2017 / Published: 18 July 2017
PDF Full-text (23357 KB) | HTML Full-text | XML Full-text
Abstract
Wireless sensor network (WSN) applications are growing rapidly and are widely used in various disciplines. Deployment is one of the key issues to be solved in WSNs, since the positioning of the sensor nodes strongly affects system performance. An optimal WSN deployment should maximize the collection of data on the phenomena of interest, guarantee the required coverage and connectivity, extend the network lifetime, and minimize the network cost in terms of energy consumption. Most research efforts in this area aim to solve the deployment issue without minimizing the network cost by reducing unnecessary working nodes. In this paper, we propose a deployment approach based on the gradient method and the Simulated Annealing algorithm to solve the sensor deployment problem with the minimum number of sensor nodes. The proposed algorithm heuristically optimizes the number of sensors and their positions in order to achieve the desired application requirements. Full article

Open Access Article Low-Resource Cross-Domain Product Review Sentiment Classification Based on a CNN with an Auxiliary Large-Scale Corpus
Algorithms 2017, 10(3), 81; doi:10.3390/a10030081
Received: 31 May 2017 / Revised: 12 July 2017 / Accepted: 18 July 2017 / Published: 19 July 2017
Cited by 1 | PDF Full-text (385 KB) | HTML Full-text | XML Full-text
Abstract
The literature contains several reports evaluating the abilities of deep neural networks in text transfer learning. To our knowledge, however, there have been few efforts to fully realize the potential of deep neural networks in cross-domain product review sentiment classification. In this paper, we propose a two-layer convolutional neural network (CNN) for cross-domain product review sentiment classification (LM-CNN-LB). Transfer learning research into product review sentiment classification based on deep neural networks has been limited by the lack of a large-scale corpus; we sought to remedy this problem using a large-scale auxiliary cross-domain dataset collected from Amazon product reviews. Our proposed framework exhibits the dramatic transferability of deep neural networks for cross-domain product review sentiment classification and achieves state-of-the-art performance. The framework also outperforms complex engineered features used with a non-deep neural network method. The experiments demonstrate that introducing large-scale data from similar domains is an effective way to resolve the lack of training data. The LM-CNN-LB trained on the multi-source related domain dataset outperformed the one trained on a single similar domain. Full article

Open Access Article Optimization of Intelligent Controllers Using a Type-1 and Interval Type-2 Fuzzy Harmony Search Algorithm
Algorithms 2017, 10(3), 82; doi:10.3390/a10030082
Received: 15 June 2017 / Revised: 16 July 2017 / Accepted: 16 July 2017 / Published: 20 July 2017
PDF Full-text (4833 KB) | HTML Full-text | XML Full-text
Abstract
This article focuses on the dynamic parameter adaptation in the harmony search algorithm using Type-1 and interval Type-2 fuzzy logic. In particular, this work focuses on the adaptation of the parameters of the original harmony search algorithm. At present there are several types of algorithms that can solve complex real-world problems with uncertainty management. In this case the proposed method is in charge of optimizing the membership functions of three benchmark control problems (water tank, shower, and mobile robot). The main goal is to find the best parameters for the membership functions in the controller to follow a desired trajectory. Noise experiments are performed to test the efficacy of the method. Full article
(This article belongs to the Special Issue Extensions to Type-1 Fuzzy Logic: Theory, Algorithms and Applications)

Open Access Feature Paper Article Fuzzy Fireworks Algorithm Based on a Sparks Dispersion Measure
Algorithms 2017, 10(3), 83; doi:10.3390/a10030083
Received: 30 May 2017 / Revised: 7 July 2017 / Accepted: 18 July 2017 / Published: 21 July 2017
PDF Full-text (4717 KB) | HTML Full-text | XML Full-text
Abstract
The main goal of this paper is to improve the performance of the Fireworks Algorithm (FWA). We propose three modifications: the first changes the stopping criterion, replacing the number of function evaluations previously used with a specified number of iterations; the second and third consist of introducing a dispersion metric (dispersion percent), with the goal of achieving dynamic adaptation of two parameters of the algorithm. The controlled parameters are the explosion amplitude and the number of sparks, and their control is based on a fuzzy logic approach. To measure the impact of these modifications, we perform experiments with 14 benchmark functions, and a comparative study shows the advantage of the proposed approach. We call the proposed algorithms the Iterative Fireworks Algorithm (IFWA) and two variants of the Dispersion Percent Iterative Fuzzy Fireworks Algorithm (DPIFWA-I and DPIFWA-II, respectively). Full article
(This article belongs to the Special Issue Extensions to Type-1 Fuzzy Logic: Theory, Algorithms and Applications)

Open Access Article Auxiliary Model Based Multi-Innovation Stochastic Gradient Identification Algorithm for Periodically Non-Uniformly Sampled-Data Hammerstein Systems
Algorithms 2017, 10(3), 84; doi:10.3390/a10030084
Received: 12 May 2017 / Revised: 7 July 2017 / Accepted: 19 July 2017 / Published: 31 July 2017
PDF Full-text (334 KB) | HTML Full-text | XML Full-text
Abstract
Due to the lack of powerful model description methods, the identification of Hammerstein systems based on the non-uniform input-output dataset remains a challenging problem. This paper introduces a time-varying backward shift operator to describe periodically non-uniformly sampled-data Hammerstein systems, which can simplify the structure of the lifted models using the traditional lifting technique. Furthermore, an auxiliary model-based multi-innovation stochastic gradient algorithm is presented to estimate the parameters involved in the linear and nonlinear blocks. The simulation results confirm that the proposed algorithm is effective and can achieve a high estimation performance. Full article

Open Access Article A New Meta-Heuristics of Optimization with Dynamic Adaptation of Parameters Using Type-2 Fuzzy Logic for Trajectory Control of a Mobile Robot
Algorithms 2017, 10(3), 85; doi:10.3390/a10030085
Received: 4 July 2017 / Revised: 21 July 2017 / Accepted: 22 July 2017 / Published: 26 July 2017
PDF Full-text (2842 KB) | HTML Full-text | XML Full-text
Abstract
Fuzzy logic is a soft computing technique that has been very successful in recent years when used as a complement to improve meta-heuristic optimization. In this paper, we present a new variant of a bio-inspired optimization algorithm based on the self-defense mechanisms of plants in nature. The proposed optimization algorithm is based on the predator-prey model originally presented by Lotka and Volterra, where two populations interact with each other and the objective is to maintain a balance. The system of predator-prey equations uses four variables (α, β, λ, δ), whose values are very important since they are in charge of maintaining the balance between the pair of equations. In this work, we propose the use of Type-2 fuzzy logic for the dynamic adaptation of these variables: a fuzzy controller is in charge of finding the optimal values for the model variables, which allows the algorithm to achieve higher performance and accuracy in the exploration of values. Full article
(This article belongs to the Special Issue Extensions to Type-1 Fuzzy Logic: Theory, Algorithms and Applications)
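For context, the underlying predator-prey dynamics referred to above are the classical Lotka-Volterra equations. The sketch below integrates them with SciPy under the textbook convention for the four parameters; the symbol roles and numeric values are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha, beta, lam, delta = 1.1, 0.4, 0.4, 0.1   # illustrative parameter values

def lotka_volterra(t, state):
    prey, predator = state
    d_prey = alpha * prey - beta * prey * predator       # prey growth vs. predation loss
    d_pred = delta * prey * predator - lam * predator    # predation gain vs. predator death
    return [d_prey, d_pred]

sol = solve_ivp(lotka_volterra, t_span=(0.0, 50.0), y0=[10.0, 5.0], max_step=0.05)
print("final prey/predator populations:", sol.y[:, -1])
```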

Open Access Article An Improved MOEA/D with Optimal DE Schemes for Many-Objective Optimization Problems
Algorithms 2017, 10(3), 86; doi:10.3390/a10030086
Received: 13 June 2017 / Revised: 18 July 2017 / Accepted: 24 July 2017 / Published: 26 July 2017
PDF Full-text (908 KB) | HTML Full-text | XML Full-text
Abstract
MOEA/D is a promising multi-objective evolutionary algorithm based on decomposition, and it has been used to solve many multi-objective optimization problems very well. However, the original MOEA/D cannot handle the class of problems known as many-objective optimization problems well. In this paper, an improved MOEA/D with optimal differential evolution (oDE) schemes, called MOEA/D-oDE, is proposed to solve many-objective optimization problems. Compared with MOEA/D, MOEA/D-oDE has two distinguishing features. On the one hand, it adopts a newly introduced decomposition approach, combining the advantages of the weighted sum approach and the Tchebycheff approach, to decompose the many-objective optimization problem. On the other hand, a combination mechanism for DE operators is designed to find the best child solution for the a posteriori computation. In our experimental study, six continuous test instances with 4–6 objectives are used, with NSGA-II (nondominated sorting genetic algorithm II) and MOEA/D as accompanying comparison algorithms. The final results indicate that MOEA/D-oDE outperforms NSGA-II and MOEA/D in almost all cases, particularly on problems with complicated Pareto shapes and higher-dimensional objectives, where its advantages are more obvious. Full article
(This article belongs to the Special Issue Evolutionary Computation for Multiobjective Optimization)
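As a quick reference, the two classical scalarizations the new decomposition draws on look like this in code; the combination rule of MOEA/D-oDE itself is specific to the paper and is not reproduced here, and the example vectors are arbitrary.

```python
import numpy as np

def weighted_sum(f, w):
    """Weighted-sum scalarization of an objective vector f for weight vector w."""
    return float(np.dot(w, f))

def tchebycheff(f, w, z_star):
    """Tchebycheff scalarization relative to the ideal point z_star."""
    return float(np.max(w * np.abs(np.asarray(f) - np.asarray(z_star))))

f = np.array([0.8, 0.3, 0.5])          # objective values of a candidate solution
w = np.array([0.2, 0.5, 0.3])          # weight vector of one subproblem
z_star = np.array([0.0, 0.0, 0.0])     # ideal point
print(weighted_sum(f, w), tchebycheff(f, w, z_star))
```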

Open Access Article Evolutionary Optimization for Robust Epipolar-Geometry Estimation and Outlier Detection
Algorithms 2017, 10(3), 87; doi:10.3390/a10030087
Received: 1 July 2017 / Revised: 25 July 2017 / Accepted: 25 July 2017 / Published: 27 July 2017
PDF Full-text (44006 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, a robust technique based on a genetic algorithm is proposed for estimating two-view epipolar-geometry of uncalibrated perspective stereo images from putative correspondences containing a high percentage of outliers. The advantages of this technique are three-fold: (i) replacing random search with evolutionary search applying new strategies of encoding and guided sampling; (ii) robust and fast estimation of the epipolar geometry via detecting a more-than-enough set of inliers without making any assumptions about the probability distribution of the residuals; (iii) determining the inlier-outlier threshold based on the uncertainty of the estimated model. The proposed method was evaluated both on synthetic data and real images. The results were compared with the most popular techniques from the state-of-the-art, including RANSAC (random sample consensus), MSAC, MLESAC, Cov-RANSAC, LO-RANSAC, StaRSAC, Multi-GS RANSAC and least median of squares (LMedS). Experimental results showed that the proposed approach performed better than other methods regarding the accuracy of inlier detection and epipolar-geometry estimation, as well as the computational efficiency for datasets majorly contaminated by outliers and noise. Full article
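For readers unfamiliar with the baseline being replaced, here is a generic RANSAC skeleton (illustrated on a toy robust line fit rather than on fundamental-matrix estimation); the paper substitutes the random sampling loop below with guided, evolutionary search and an uncertainty-based inlier threshold.

```python
import numpy as np

def ransac(data, fit_model, residuals, sample_size, threshold, iterations=1000, rng=None):
    rng = np.random.default_rng(rng)
    best_model, best_inliers = None, np.zeros(len(data), dtype=bool)
    for _ in range(iterations):
        sample = rng.choice(len(data), size=sample_size, replace=False)
        model = fit_model(data[sample])                  # minimal-sample hypothesis
        inliers = residuals(model, data) < threshold     # consensus set
        if inliers.sum() > best_inliers.sum():
            best_model, best_inliers = model, inliers
    return fit_model(data[best_inliers]), best_inliers   # final refit on the consensus set

# Toy usage: robust line fit y = a*x + b with 30% gross outliers.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = 2.0 * x + 1.0 + rng.normal(0, 0.1, 200)
y[:60] += rng.uniform(-20, 20, 60)
data = np.column_stack([x, y])
fit = lambda d: np.polyfit(d[:, 0], d[:, 1], 1)
res = lambda m, d: np.abs(np.polyval(m, d[:, 0]) - d[:, 1])
model, inliers = ransac(data, fit, res, sample_size=2, threshold=0.5)
print("line coefficients:", model, "inliers:", inliers.sum())
```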

Open Access Article On the Lagged Diffusivity Method for the Solution of Nonlinear Finite Difference Systems
Algorithms 2017, 10(3), 88; doi:10.3390/a10030088
Received: 31 May 2017 / Revised: 23 July 2017 / Accepted: 26 July 2017 / Published: 2 August 2017
PDF Full-text (4338 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we extend the analysis of the Lagged Diffusivity Method for nonlinear, non-steady reaction-convection-diffusion equations. In particular, we describe how the method can be used to solve the systems arising from different discretization schemes, recalling some results on the convergence of the method itself. Moreover, we also analyze the behavior of the method in case of problems presenting boundary layers or blow-up solutions. Full article
(This article belongs to the Special Issue Numerical Algorithms for Solving Nonlinear Equations and Systems 2017)

Open Access Feature Paper Article On the Existence of Solutions of Nonlinear Fredholm Integral Equations from Kantorovich’s Technique
Algorithms 2017, 10(3), 89; doi:10.3390/a10030089
Received: 16 May 2017 / Revised: 13 July 2017 / Accepted: 30 July 2017 / Published: 2 August 2017
PDF Full-text (282 KB) | HTML Full-text | XML Full-text
Abstract
The well-known Kantorovich technique based on majorizing sequences is used to analyse the convergence of Newton’s method when it is used to solve nonlinear Fredholm integral equations. In addition, we obtain information about the domains of existence and uniqueness of a solution for these equations. Finally, we illustrate the above with two particular Fredholm integral equations. Full article
(This article belongs to the Special Issue Numerical Algorithms for Solving Nonlinear Equations and Systems 2017)
Open Access Article Local Community Detection Based on Small Cliques
Algorithms 2017, 10(3), 90; doi:10.3390/a10030090
Received: 21 June 2017 / Revised: 3 August 2017 / Accepted: 4 August 2017 / Published: 11 August 2017
PDF Full-text (464 KB) | HTML Full-text | XML Full-text
Abstract
Community detection aims to find dense subgraphs in a network. We consider the problem of finding a community locally around a seed node both in unweighted and weighted networks. This is a faster alternative to algorithms that detect communities that cover the whole network when actually only a single community is required. Further, many overlapping community detection algorithms use local community detection algorithms as basic building block. We provide a broad comparison of different existing strategies of expanding a seed node greedily into a community. For this, we conduct an extensive experimental evaluation both on synthetic benchmark graphs as well as real world networks. We show that results both on synthetic as well as real-world networks can be significantly improved by starting from the largest clique in the neighborhood of the seed node. Further, our experiments indicate that algorithms using scores based on triangles outperform other algorithms in most cases. We provide theoretical descriptions as well as open source implementations of all algorithms used. Full article
(This article belongs to the Special Issue Algorithms for Community Detection in Complex Networks)
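A hedged sketch of the general strategy the experiments favour: seed the community with the largest clique around the seed node and grow it greedily. Plain conductance is used as the score here purely for illustration (the paper also examines triangle-based scores), and the NetworkX calls and karate-club example are assumptions of this sketch, not taken from the paper.

```python
import networkx as nx

def local_community(G, seed, max_size=20):
    # Seed community = largest maximal clique containing the seed node.
    community = set(max((c for c in nx.find_cliques(G) if seed in c), key=len))
    while len(community) < max_size:
        frontier = {n for v in community for n in G.neighbors(v)} - community
        if not frontier:
            break
        best = min(frontier, key=lambda n: nx.conductance(G, community | {n}))
        if nx.conductance(G, community | {best}) >= nx.conductance(G, community):
            break                                   # no further improvement
        community.add(best)
    return community

G = nx.karate_club_graph()
print(sorted(local_community(G, seed=0)))
```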

Open Access Article Transformation-Based Fuzzy Rule Interpolation Using Interval Type-2 Fuzzy Sets
Algorithms 2017, 10(3), 91; doi:10.3390/a10030091
Received: 18 June 2017 / Revised: 1 August 2017 / Accepted: 10 August 2017 / Published: 15 August 2017
PDF Full-text (889 KB) | HTML Full-text | XML Full-text
Abstract
In support of reasoning with sparse rule bases, fuzzy rule interpolation (FRI) offers a helpful inference mechanism for deriving an approximate conclusion when a given observation has no overlap with any rule in the existing rule base. One of the recent and popular FRI approaches is the scale and move transformation-based rule interpolation, known as T-FRI in the literature. It supports both interpolation and extrapolation with multiple multi-antecedent rules. However, the difficult problem of defining the precise-valued membership functions required in the representation of fuzzy rules, or of the observations, restricts its applications. Fortunately, this problem can be alleviated through the use of type-2 fuzzy sets, owing to the fact that the membership functions of such fuzzy sets are themselves fuzzy, providing a more flexible means of modelling. This paper therefore, extends the existing T-FRI approach using interval type-2 fuzzy sets, which covers the original T-FRI as its specific instance. The effectiveness of this extension is demonstrated by experimental investigations and, also, by a practical application in comparison to the state-of-the-art alternative approach developed using rough-fuzzy sets. Full article
(This article belongs to the Special Issue Extensions to Type-1 Fuzzy Logic: Theory, Algorithms and Applications)

Open Access Article Post-Processing Partitions to Identify Domains of Modularity Optimization
Algorithms 2017, 10(3), 93; doi:10.3390/a10030093
Received: 5 June 2017 / Revised: 24 July 2017 / Accepted: 15 August 2017 / Published: 19 August 2017
PDF Full-text (6529 KB) | HTML Full-text | XML Full-text
Abstract
We introduce the Convex Hull of Admissible Modularity Partitions (CHAMP) algorithm to prune and prioritize different network community structures identified across multiple runs of possibly various computational heuristics. Given a set of partitions, CHAMP identifies the domain of modularity optimization for each partition—i.e., the parameter-space domain where it has the largest modularity relative to the input set—discarding partitions with empty domains to obtain the subset of partitions that are “admissible” candidate community structures that remain potentially optimal over indicated parameter domains. Importantly, CHAMP can be used for multi-dimensional parameter spaces, such as those for multilayer networks where one includes a resolution parameter and interlayer coupling. Using the results from CHAMP, a user can more appropriately select robust community structures by observing the sizes of domains of optimization and the pairwise comparisons between partitions in the admissible subset. We demonstrate the utility of CHAMP with several example networks. In these examples, CHAMP focuses attention onto pruned subsets of admissible partitions that are 20-to-1785 times smaller than the sets of unique partitions obtained by community detection heuristics that were input into CHAMP. Full article
(This article belongs to the Special Issue Algorithms for Community Detection in Complex Networks)
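To convey the core geometric idea in one parameter dimension: each partition's modularity is affine in the resolution parameter γ, so the admissible partitions are exactly those on the upper envelope of a set of lines. The grid-based sketch below only approximates these domains of optimality; CHAMP itself computes the envelope exactly via a convex-hull/halfspace construction, and the per-partition statistics used here are invented.

```python
import numpy as np

def domains_of_optimality(A, P, gammas):
    """A[i], P[i]: summed intra-community adjacency and null-model weights of partition i."""
    Q = A[:, None] - P[:, None] * gammas[None, :]   # Q[i, g] = A[i] - gamma_g * P[i]
    winner = np.argmax(Q, axis=0)                   # index of the optimal partition at each gamma
    domains = {}
    for w, g in zip(winner, gammas):
        w = int(w)
        lo, hi = domains.get(w, (g, g))
        domains[w] = (min(lo, g), max(hi, g))
    return domains                                  # partition index -> approximate gamma interval

A = np.array([10.0, 9.0, 7.5, 5.5])   # illustrative values only
P = np.array([8.0, 6.0, 4.0, 2.0])
print(domains_of_optimality(A, P, np.linspace(0.0, 3.0, 301)))
```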

Open Access Article NBTI and Power Reduction Using an Input Vector Control and Supply Voltage Assignment Method
Algorithms 2017, 10(3), 94; doi:10.3390/a10030094
Received: 5 June 2017 / Revised: 31 July 2017 / Accepted: 13 August 2017 / Published: 19 August 2017
PDF Full-text (3034 KB) | HTML Full-text | XML Full-text
Abstract
As technology scales, negative bias temperature instability (NBTI) becomes one of the primary failure mechanisms for Very Large Scale Integration (VLSI) circuits. Meanwhile, the leakage power increases dramatically as the supply/threshold voltage continues to scale down. These two issues pose severe reliability problems for complementary metal oxide semiconductor (CMOS) devices. Because both the NBTI and leakage are dependent on the input vector of the circuit, we present an input vector control (IVC) method based on a linear programming algorithm, which can co-optimize circuit aging and power dissipation simultaneously. In addition, our proposed IVC method is combined with the supply voltage assignment technique to further reduce delay degradation and leakage power. Experimental results on various circuits show the effectiveness of the proposed combination method. Full article

Open Access Article A Parallel Two-Stage Iteration Method for Solving Continuous Sylvester Equations
Algorithms 2017, 10(3), 95; doi:10.3390/a10030095
Received: 8 July 2017 / Revised: 8 August 2017 / Accepted: 10 August 2017 / Published: 21 August 2017
PDF Full-text (247 KB) | HTML Full-text | XML Full-text
Abstract
In this paper we propose a parallel two-stage iteration algorithm for solving large-scale continuous Sylvester equations. By splitting the coefficient matrices, the original linear system is transformed into a symmetric linear system which is then solved by using the SYMMLQ algorithm. In order to improve the relative parallel efficiency, an adjusting strategy is explored during the iteration calculation of the SYMMLQ algorithm to decrease the degree of the reduce-operator from two to one communications at each step. Moreover, the convergence of the iteration scheme is discussed, and finally numerical results are reported showing that the proposed method is an efficient and robust algorithm for this class of continuous Sylvester equations on a parallel machine. Full article
Open Access Article Double-Threshold Cooperative Spectrum Sensing Algorithm Based on Sevcik Fractal Dimension
Algorithms 2017, 10(3), 96; doi:10.3390/a10030096
Received: 15 July 2017 / Revised: 14 August 2017 / Accepted: 17 August 2017 / Published: 21 August 2017
PDF Full-text (784 KB) | HTML Full-text | XML Full-text
Abstract
Spectrum sensing is of great importance in the cognitive radio (CR) networks. Compared with individual spectrum sensing, cooperative spectrum sensing (CSS) has been shown to greatly improve the accuracy of the detection. However, the existing CSS algorithms are sensitive to noise uncertainty and are inaccurate in low signal-to-noise ratio (SNR) detection. To address this, we propose a double-threshold CSS algorithm based on Sevcik fractal dimension (SFD) in this paper. The main idea of the presented scheme is to sense the presence of primary users in the local spectrum sensing by analyzing different characteristics of the SFD between signals and noise. Considering the stochastic fluctuation characteristic of the noise SFD in a certain range, we adopt the double-threshold method in the multi-cognitive user CSS so as to improve the detection accuracy, where thresholds are set according to the maximum and minimum values of the noise SFD. After obtaining the detection results, the cognitive user sends local detection results to the fusion center for reliability fusion. Simulation results demonstrate that the proposed method is insensitive to noise uncertainty. Simulations also show that the algorithm presented in this paper can achieve high detection performance at the low SNR region. Full article
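For reference, the local sensing statistic is the Sevcik fractal dimension; a hedged sketch of the commonly cited formula (normalize the waveform into the unit square, then estimate the dimension from the length of the normalized curve) is given below. The thresholds and fusion rule of the proposed scheme are not reproduced, and the test signals are invented.

```python
import numpy as np

def sevcik_fd(x):
    """Sevcik fractal dimension estimate of a 1-D waveform (commonly cited form)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    y = (x - x.min()) / (x.max() - x.min())          # normalize amplitude to [0, 1]
    t = np.arange(n) / (n - 1)                       # normalize time axis to [0, 1]
    L = np.sum(np.hypot(np.diff(t), np.diff(y)))     # length of the normalized curve
    return 1.0 + np.log(L) / np.log(2.0 * (n - 1))

rng = np.random.default_rng(1)
noise = rng.normal(size=2048)
signal_plus_noise = np.sin(2 * np.pi * 0.05 * np.arange(2048)) + 0.5 * noise
print(sevcik_fd(noise), sevcik_fd(signal_plus_noise))   # the SFD separates the two cases
```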

Open Access Article A Simplified Matrix Formulation for Sensitivity Analysis of Hidden Markov Models
Algorithms 2017, 10(3), 97; doi:10.3390/a10030097
Received: 9 June 2017 / Revised: 6 August 2017 / Accepted: 14 August 2017 / Published: 22 August 2017
PDF Full-text (906 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, a new algorithm for sensitivity analysis of discrete hidden Markov models (HMMs) is proposed. Sensitivity analysis is a general technique for investigating the robustness of the output of a system model. Sensitivity analysis of probabilistic networks has recently been studied extensively. This has resulted in the development of mathematical relations between a parameter and an output probability of interest and also methods for establishing the effects of parameter variations on decisions. Sensitivity analysis in HMMs has usually been performed by taking small perturbations in parameter values and re-computing the output probability of interest. As recent studies show, the sensitivity analysis of an HMM can be performed using a functional relationship that describes how an output probability varies as the network’s parameters of interest change. To derive this sensitivity function, existing Bayesian network algorithms have been employed for HMMs. These algorithms are computationally inefficient as the length of the observation sequence and the number of parameters increases. In this study, a simplified efficient matrix-based algorithm for computing the coefficients of the sensitivity function for all hidden states and all time steps is proposed and an example is presented. Full article

Open Access Article Adaptive Virtual RSU Scheduling for Scalable Coverage under Bidirectional Vehicle Traffic Flow
Algorithms 2017, 10(3), 98; doi:10.3390/a10030098
Received: 12 May 2017 / Revised: 17 August 2017 / Accepted: 18 August 2017 / Published: 24 August 2017
PDF Full-text (1767 KB) | HTML Full-text | XML Full-text
Abstract
Over the past decades, vehicular ad hoc networks (VANETs) have been a core networking technology to provide drivers and passengers with safety and convenience. As a new emerging technology, the vehicular cloud computing (VCC) can provide cloud services for various data-intensive applications in VANETs, such as multimedia streaming. However, the vehicle mobility and intermittent connectivity present challenges to the large-scale data dissemination with underlying computing and networking architecture. In this paper, we will explore the service scheduling of virtual RSUs for diverse request demands in the dynamic traffic flow in vehicular cloud environment. Specifically, we formulate the RSU allocation problem as maximum service capacity with multiple-source and multiple-destination, and propose a bidirectional RSU allocation strategy. In addition, we formulate the content replication in distributed RSUs as the minimum replication set coverage problem in a two-layer mapping model, and analyze the solutions in different scenarios. Numerical results further prove the superiority of our proposed solution, as well as the scalability to various traffic condition variations. Full article

Open Access Article Hybrid Learning for General Type-2 TSK Fuzzy Logic Systems
Algorithms 2017, 10(3), 99; doi:10.3390/a10030099
Received: 19 June 2017 / Revised: 22 August 2017 / Accepted: 23 August 2017 / Published: 25 August 2017
PDF Full-text (4758 KB) | HTML Full-text | XML Full-text
Abstract
This work focuses on creating fuzzy granular classification models based on general type-2 fuzzy logic systems whose consequents are represented by interval type-2 TSK linear functions. Due to the complexity of general type-2 TSK fuzzy logic systems, a hybrid learning approach is proposed, in which the principle of justifiable granularity is used heuristically to define the amount of uncertainty in the system, which in turn defines the parameters of the interval type-2 TSK linear functions via a dual LSE algorithm. Multiple classification benchmark datasets were tested in order to assess the quality of the formed granular models, and their performance was compared against other common classification algorithms. The results show that classification performance is generally better than that obtained by the compared techniques, and that the averaged results maintain a better performance rate, demonstrating the stability of the proposed hybrid learning technique. Full article
(This article belongs to the Special Issue Extensions to Type-1 Fuzzy Logic: Theory, Algorithms and Applications)

Open Access Article Biogeography-Based Optimization of the Portfolio Optimization Problem with Second Order Stochastic Dominance Constraints
Algorithms 2017, 10(3), 100; doi:10.3390/a10030100
Received: 7 August 2017 / Revised: 21 August 2017 / Accepted: 23 August 2017 / Published: 25 August 2017
PDF Full-text (463 KB) | HTML Full-text | XML Full-text
Abstract
The portfolio optimization problem is a central problem of modern economics and decision theory; the Mean-Variance Model and the Stochastic Dominance Model are two approaches to solving it. In this paper, based on second order stochastic dominance constraints, we propose an improved biogeography-based optimization algorithm to optimize the portfolio, which we call εBBO. In order to test the computing power of εBBO, we carry out two numerical experiments under several kinds of constraints. In experiment 1, compared with the Stochastic Approximation (SA) method, the Level Function (LF) algorithm and the Genetic Algorithm (GA), εBBO obtains a similar optimal solution under the [0, 0.6] and [0, 1] constraints, with returns of 1.174% and 1.178%. Under the [−1, 2] constraint, εBBO achieves an optimal return of 1.3043%, while the returns of SA and LF are 1.23% and 1.26%. In experiment 2, εBBO achieves optimal returns of 0.1325% and 0.3197% under the [0, 0.1] and [−0.05, 0.15] constraints; as a comparison, the return of the FTSE100 Index portfolio is 0.0937%. The results show that the εBBO algorithm has great potential in the field of financial decision-making and performs well on this optimization problem. Full article

Open Access Article Comparative Study of Type-2 Fuzzy Particle Swarm, Bee Colony and Bat Algorithms in Optimization of Fuzzy Controllers
Algorithms 2017, 10(3), 101; doi:10.3390/a10030101
Received: 16 July 2017 / Revised: 18 August 2017 / Accepted: 23 August 2017 / Published: 28 August 2017
PDF Full-text (4449 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, a comparison among Particle swarm optimization (PSO), Bee Colony Optimization (BCO) and the Bat Algorithm (BA) is presented. In addition, a modification to the main parameters of each algorithm through an interval type-2 fuzzy logic system is presented. The main aim of using interval type-2 fuzzy systems is providing dynamic parameter adaptation to the algorithms. These algorithms (original and modified versions) are compared with the design of fuzzy systems used for controlling the trajectory of an autonomous mobile robot. Simulation results reveal that PSO algorithm outperforms the results of the BCO and BA algorithms. Full article
(This article belongs to the Special Issue Extensions to Type-1 Fuzzy Logic: Theory, Algorithms and Applications)

Open Access Article Local Community Detection in Dynamic Graphs Using Personalized Centrality
Algorithms 2017, 10(3), 102; doi:10.3390/a10030102
Received: 31 May 2017 / Revised: 22 August 2017 / Accepted: 23 August 2017 / Published: 29 August 2017
PDF Full-text (840 KB) | HTML Full-text | XML Full-text
Abstract
Analyzing massive graphs poses challenges due to the vast amount of data available. Extracting smaller relevant subgraphs allows for further visualization and analysis that would otherwise be too computationally intensive. Furthermore, many real data sets are constantly changing, and require algorithms to update as the graph evolves. This work addresses the topic of local community detection, or seed set expansion, using personalized centrality measures, specifically PageRank and Katz centrality. We present a method to efficiently update local communities in dynamic graphs. By updating the personalized ranking vectors, we can incrementally update the corresponding local community. Applying our methods to real-world graphs, we are able to obtain speedups of up to 60× compared to static recomputation while maintaining an average recall of 0.94 of the highly ranked vertices returned. Next, we investigate how approximations of a centrality vector affect the resulting local community. Specifically, our method guarantees that the vertices returned in the community are the highly ranked vertices from a personalized centrality metric. Full article
(This article belongs to the Special Issue Algorithms for Community Detection in Complex Networks)
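A minimal static sketch of seed-set expansion with a personalized centrality measure (here personalized PageRank via NetworkX): rank vertices with all restart mass on the seed and keep the top-k. The paper's contribution, incrementally updating such rankings (PageRank and Katz) as the graph changes, is not shown; the graph and parameter choices below are illustrative assumptions.

```python
import networkx as nx

def ppr_community(G, seed, k=10, alpha=0.85):
    personalization = {n: 0.0 for n in G}
    personalization[seed] = 1.0                        # all restart mass on the seed
    scores = nx.pagerank(G, alpha=alpha, personalization=personalization)
    return sorted(scores, key=scores.get, reverse=True)[:k]

G = nx.karate_club_graph()
print(ppr_community(G, seed=0, k=10))
```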

Open Access Article An Enhanced Dynamic Spectrum Allocation Algorithm Based on Cournot Game in Maritime Cognitive Radio Communication System
Algorithms 2017, 10(3), 103; doi:10.3390/a10030103
Received: 14 July 2017 / Revised: 25 August 2017 / Accepted: 1 September 2017 / Published: 3 September 2017
PDF Full-text (5998 KB) | HTML Full-text | XML Full-text
Abstract
The recent development of maritime transport has made the demand for wider communication bandwidth more intense. Cognitive radios can dynamically manage spectrum resources, so building a new type of maritime cognitive radio communication system (MCRCS) is an effective solution. In this paper, an enhanced dynamic spectrum allocation algorithm (EDSAA) based on the Cournot game model is proposed. In EDSAA, the decision-making center (DC) sets weights according to the detection capability of each secondary user (SU) and adds these weighting coefficients to the price function. Furthermore, an elastic model is added to the SU's revenue function, so that an SU's willingness to lease additional spectrum decreases once its basic communication needs are met. On this basis, the profit function is established. The simulation results show that the EDSAA has a Nash equilibrium and conforms to the actual situation, and that the resulting spectrum allocation is fair, efficient and reasonable. Full article

Open Access Article Contract-Based Incentive Mechanism for Mobile Crowdsourcing Networks
Algorithms 2017, 10(3), 104; doi:10.3390/a10030104
Received: 26 July 2017 / Revised: 21 August 2017 / Accepted: 2 September 2017 / Published: 4 September 2017
PDF Full-text (1087 KB) | HTML Full-text | XML Full-text
Abstract
Mobile crowdsourcing networks (MCNs) are a promising method of data collecting and processing by leveraging the mobile devices’ sensing and computing capabilities. However, because of the selfish characteristics of the service provider (SP) and mobile users (MUs), crowdsourcing participants only aim to maximize their own benefits. This paper investigates the incentive mechanism between the above two parties to create mutual benefits. By modeling MCNs as a labor market, a contract-based crowdsourcing model with moral hazard is proposed under the asymmetric information scenario. In order to incentivize the potential MUs to participate in crowdsourcing tasks, the optimization problem is formulated to maximize the SP’s utility by jointly examining the crowdsourcing participants’ risk preferences. The impact of crowdsourcing participants’ attitudes of risks on the incentive mechanism has been studied analytically and experimentally. Numerical simulation results demonstrate the effectiveness of the proposed contract design scheme for the crowdsourcing incentive. Full article

Open Access Article Comparison of Internal Clustering Validation Indices for Prototype-Based Clustering
Algorithms 2017, 10(3), 105; doi:10.3390/a10030105
Received: 13 July 2017 / Revised: 28 August 2017 / Accepted: 1 September 2017 / Published: 6 September 2017
PDF Full-text (516 KB) | HTML Full-text | XML Full-text
Abstract
Clustering is an unsupervised machine learning and pattern recognition method. In general, in addition to revealing hidden groups of similar observations, i.e., clusters, their number needs to be determined. Internal clustering validation indices estimate this number without any external information. The purpose of this article is to evaluate, empirically, the characteristics of a representative set of internal clustering validation indices on many datasets. The prototype-based clustering framework includes multiple statistical estimates of cluster location, both classical and robust, which makes the overall setting of the paper novel. General observations on the quality of validation indices and on the behavior of different variants of clustering algorithms will be given. Full article
(This article belongs to the Special Issue Clustering Algorithms 2017)
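As a generic illustration of how such indices are used (this is not the paper's prototype-based framework or its index set), the sketch below picks the number of clusters by maximizing one common internal index, the silhouette score, over candidate values of k.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=500, centers=4, random_state=0)
scores = {}
for k in range(2, 9):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = silhouette_score(X, labels)            # internal index: no true labels used
best_k = max(scores, key=scores.get)
print(scores, "-> estimated number of clusters:", best_k)
```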

Open Access Article Type-1 Fuzzy Sets and Intuitionistic Fuzzy Sets
Algorithms 2017, 10(3), 106; doi:10.3390/a10030106
Received: 3 July 2017 / Revised: 18 August 2017 / Accepted: 31 August 2017 / Published: 13 September 2017
PDF Full-text (619 KB) | HTML Full-text | XML Full-text
Abstract
A comparison between type-1 fuzzy sets (T1FSs) and intuitionistic fuzzy sets (IFSs) is made. The operators defined over IFSs that do not have analogues in T1FSs are shown, and such analogues are introduced whenever possible. Full article
(This article belongs to the Special Issue Extensions to Type-1 Fuzzy Logic: Theory, Algorithms and Applications)

Open Access Article A Monarch Butterfly Optimization for the Dynamic Vehicle Routing Problem
Algorithms 2017, 10(3), 107; doi:10.3390/a10030107
Received: 29 June 2017 / Revised: 4 September 2017 / Accepted: 4 September 2017 / Published: 12 September 2017
PDF Full-text (1517 KB) | HTML Full-text | XML Full-text
Abstract
The dynamic vehicle routing problem (DVRP) is a variant of the Vehicle Routing Problem (VRP) in which customers appear dynamically. The objective is to determine a set of routes that minimizes the total travel distance. In this paper, we propose a monarch butterfly optimization (MBO) algorithm to solve DVRPs, utilizing a greedy strategy. Both the migration operator and the butterfly adjusting operator accept an offspring only if it has better fitness than its parent. To improve performance, a later perturbation procedure is implemented to maintain a balance between global diversification and local intensification. The computational results indicate that the proposed technique outperforms the existing approaches in the literature in average performance by at least 9.38%. In addition, 12 new best solutions were found. This shows that the proposed technique consistently produces high-quality solutions and outperforms other published heuristics for the DVRP. Full article

Open Access Article Performance Analysis of Four Decomposition-Ensemble Models for One-Day-Ahead Agricultural Commodity Futures Price Forecasting
Algorithms 2017, 10(3), 108; doi:10.3390/a10030108
Received: 17 July 2017 / Revised: 31 August 2017 / Accepted: 9 September 2017 / Published: 12 September 2017
PDF Full-text (6074 KB) | HTML Full-text | XML Full-text
Abstract
Agricultural commodity futures prices play a significant role in the trends of spot prices and in the supply-demand relationship of global agricultural product markets. Because this kind of time series data is nonlinear and nonstationary, price forecasting research has to take this nature into consideration. To enrich the existing research literature and offer a new way of thinking about forecasting agricultural commodity futures prices, we propose four hybrid models based on the back propagation neural network (BPNN) optimized by the particle swarm optimization (PSO) algorithm and four decomposition methods: empirical mode decomposition (EMD), wavelet packet transform (WPT), intrinsic time-scale decomposition (ITD) and variational mode decomposition (VMD). In order to verify the applicability and validity of these hybrid models, we select the futures prices of wheat, corn and soybean for the experiments. The experimental results show that (1) all the hybrid models combined with a decomposition technique perform better than the single PSO–BPNN model; (2) VMD contributes the most to improving the forecasting ability of the PSO–BPNN model, while WPT ranks second; (3) ITD performs better than EMD for both corn and soybean; and (4) the proposed models perform well in forecasting agricultural commodity futures prices. Full article

Other

Jump to: Research

Open Access Technical Note The Isomorphic Version of Brualdi’s and Sanderson’s Nestedness
Algorithms 2017, 10(3), 74; doi:10.3390/a10030074
Received: 16 January 2017 / Revised: 20 June 2017 / Accepted: 22 June 2017 / Published: 27 June 2017
PDF Full-text (297 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
The discrepancy BR for an m × n 0, 1-matrix from Brualdi and Sanderson in 1998 is defined as the minimum number of 1s that need to be shifted in each row to the left to achieve its Ferrers matrix, i.e., each row consists of consecutive 1s followed by consecutive 0s. For ecological bipartite networks, BR describes a nested set of relationships. Since two different labelled networks can be isomorphic, but possess different discrepancies due to different adjacency matrices, we define a metric determining the minimum discrepancy in an isomorphic class. We give a reduction to k ≤ n minimum weighted perfect matching problems. We show on 289 ecological matrices (given as a benchmark by Atmar and Patterson in 1995) that classical discrepancy can underestimate the nestedness by up to 30%. Full article
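The per-row discrepancy described above is straightforward to compute; the sketch below follows the abstract's definition directly (the 1s lying beyond each row's row-sum prefix are the ones that must be shifted). The isomorphic, relabelling-invariant version proposed in the note additionally requires the weighted-matching step and is not reproduced here; the example matrix is invented.

```python
import numpy as np

def discrepancy_br(M):
    """Number of 1s that must be shifted left, per the row-packing definition above."""
    M = np.asarray(M)
    row_sums = M.sum(axis=1)
    return int(sum(M[i, r:].sum() for i, r in enumerate(row_sums)))

M = np.array([[1, 0, 1, 1],
              [0, 1, 1, 0],
              [1, 0, 0, 0]])
print(discrepancy_br(M))   # row sums 3, 2, 1 -> ones beyond those prefixes: 1 + 1 + 0 = 2
```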

Open Access Letter Automatic Modulation Recognition Using Compressive Cyclic Features
Algorithms 2017, 10(3), 92; doi:10.3390/a10030092
Received: 30 June 2017 / Revised: 10 August 2017 / Accepted: 10 August 2017 / Published: 18 August 2017
PDF Full-text (326 KB) | HTML Full-text | XML Full-text
Abstract
Higher-order cyclic cumulants (CCs) have been widely adopted for automatic modulation recognition (AMR) in cognitive radio. However, the CC-based AMR suffers greatly from the requirement of high-rate sampling. To overcome this limit, we resort to the theory of compressive sensing (CS). By exploiting the sparsity of CCs, recognition features can be extracted from a small amount of compressive measurements via a rough CS reconstruction algorithm. Accordingly, a CS-based AMR scheme is formulated. Simulation results demonstrate the availability and robustness of the proposed approach. Full article

Journal Contact

MDPI AG
Algorithms Editorial Office
St. Alban-Anlage 66, 4052 Basel, Switzerland
Tel. +41 61 683 77 34
Fax: +41 61 302 89 18