
Table of Contents

Algorithms, Volume 11, Issue 10 (October 2018)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
Displaying articles 1-22
Open Access Article: Learning Representations of Natural Language Texts with Generative Adversarial Networks at Document, Sentence, and Aspect Level
Algorithms 2018, 11(10), 164; https://doi.org/10.3390/a11100164
Received: 27 August 2018 / Revised: 30 September 2018 / Accepted: 19 October 2018 / Published: 22 October 2018
Viewed by 331 | PDF Full-text (1468 KB) | HTML Full-text | XML Full-text
Abstract
The ability to learn robust, resizable feature representations from unlabeled data has potential applications in a wide variety of machine learning tasks. One way to create such representations is to train deep generative models that can learn to capture the complex distribution of real-world data. Generative adversarial network (GAN) approaches have shown impressive results in producing generative models of images, but relatively little work has been done on evaluating the performance of these methods for learning representations of natural language, in both supervised and unsupervised settings, at the document, sentence, and aspect level. Extensive validation experiments were performed on the 20 Newsgroups corpus, the Movie Review (MR) dataset, and the Fine-grained Sentiment Dataset (FSD). Our experimental analysis suggests that GANs can successfully learn representations of natural language texts at all three aforementioned levels. Full article
(This article belongs to the Special Issue Humanistic Data Mining: Tools and Applications)
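A minimal sketch of adversarial representation learning on document vectors, assuming bag-of-words inputs, illustrative layer sizes, and a generic generator/discriminator pair (none of this reproduces the paper's architectures; the discriminator's hidden activations serve as the learned features):

```python
import torch
import torch.nn as nn

DOC_DIM, NOISE_DIM, HID = 2000, 64, 256   # assumed sizes, not from the paper

G = nn.Sequential(nn.Linear(NOISE_DIM, HID), nn.ReLU(), nn.Linear(HID, DOC_DIM))
D = nn.Sequential(nn.Linear(DOC_DIM, HID), nn.ReLU())   # feature extractor
head = nn.Linear(HID, 1)                                # real/fake score

opt_d = torch.optim.Adam(list(D.parameters()) + list(head.parameters()), lr=2e-4)
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_docs):                # real_docs: (batch, DOC_DIM) tf-idf rows
    z = torch.randn(real_docs.size(0), NOISE_DIM)
    fake = G(z)
    # discriminator update: real -> 1, generated -> 0
    opt_d.zero_grad()
    loss_d = bce(head(D(real_docs)), torch.ones(real_docs.size(0), 1)) + \
             bce(head(D(fake.detach())), torch.zeros(real_docs.size(0), 1))
    loss_d.backward(); opt_d.step()
    # generator update: fool the discriminator
    opt_g.zero_grad()
    loss_g = bce(head(D(fake)), torch.ones(real_docs.size(0), 1))
    loss_g.backward(); opt_g.step()
    return D(real_docs).detach()          # hidden activations = text representations
```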
Open Access Article: Airfoil Optimization Design Based on the Pivot Element Weighting Iterative Method
Algorithms 2018, 11(10), 163; https://doi.org/10.3390/a11100163
Received: 14 August 2018 / Revised: 14 October 2018 / Accepted: 16 October 2018 / Published: 22 October 2018
Viewed by 368 | PDF Full-text (10341 KB) | HTML Full-text | XML Full-text
Abstract
Class function/shape function transformation (CST) is an advanced geometry representation method employed to generate airfoil coordinates. To address the morbidity of the CST coefficient matrix, the pivot element weighting iterative (PEWI) method is proposed to improve the condition number of the ill-conditioned matrix in the CST. The feasibility of the PEWI method is evaluated using the RAE2822 and S1223 airfoils. The aerodynamic optimization of the S1223 airfoil is conducted on the Isight software platform. First, the S1223 airfoil is parameterized by the CST with the PEWI method. Because confirming the range of the design variables is critical for airfoil optimization design, a normalization method for the design variables is put forward in the paper. Optimal Latin Hypercube sampling is applied to generate the samples, whose aerodynamic performances are calculated by numerical simulation. Then the Radial Basis Function (RBF) neural network model is trained on these aerodynamic performance data. Finally, the multi-island genetic algorithm is performed to achieve the maximum lift-drag ratio of S1223. The results show that the robustness of the CST can be improved. Moreover, the lift-drag ratio of S1223 increases by 2.27% and the drag coefficient decreases by 1.4%. Full article
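For context, the CST representation itself: a class function shapes the nose and trailing edge, while Bernstein-weighted coefficients form the shape function. The coefficients below are illustrative, and the paper's PEWI conditioning fix is not shown:

```python
import numpy as np
from math import comb

def cst_curve(a, x, n1=0.5, n2=1.0, dz_te=0.0):
    """CST: y(x) = C(x) * S(x) + x * dz_te on the unit chord x in [0, 1]."""
    n = len(a) - 1
    c = x**n1 * (1 - x)**n2                # class function: round nose, sharp TE
    s = sum(a[i] * comb(n, i) * x**i * (1 - x)**(n - i)   # Bernstein basis
            for i in range(n + 1))
    return c * s + x * dz_te

x = np.linspace(0.0, 1.0, 101)
y_upper = cst_curve(np.array([0.17, 0.16, 0.14, 0.14]), x)  # illustrative coefficients
```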
Open Access Article: Application of Data Science Technology on Research of Circulatory System Disease Prediction Based on a Prospective Cohort
Algorithms 2018, 11(10), 162; https://doi.org/10.3390/a11100162
Received: 11 September 2018 / Revised: 8 October 2018 / Accepted: 11 October 2018 / Published: 20 October 2018
Viewed by 313 | PDF Full-text (2099 KB) | HTML Full-text | XML Full-text
Abstract
Chronic diseases, represented by circulatory diseases, have gradually become the main types of disease affecting the health of our population. Establishing a circulatory system disease prediction model to predict and control the occurrence of disease is therefore of great significance to public health. This article is based on prospective population cohort data on chronic diseases in China. Building on existing medical cohort studies, the Kaplan–Meier method was used for feature selection, the traditional medical analysis model represented by the Cox proportional hazards model was applied, and support vector machine (SVM) methods from machine learning were introduced to establish circulatory system disease prediction models. The paper also attempts to improve the Cox proportional hazards model by introducing the proportion of explained variation (PEV) and a shrinkage factor, and to optimize the parameters of the SVM model with the Particle Swarm Optimization (PSO) algorithm. Finally, the above prediction models are verified experimentally, using the model training time, accuracy rate (ACC), the area under the Receiver Operating Characteristic curve (AUC of the ROC), and other forecasting indicators. The experimental results show that the PSO-SVM-CSDPC disease prediction model and the S-Cox-CSDPC circulatory system disease prediction model offer fast model solving, accurate predictions, and strong generalization ability, which is helpful for the intervention and control of chronic diseases. Full article
(This article belongs to the Special Issue Algorithms for Decision Making)
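The feature-selection step rests on the Kaplan–Meier product-limit estimator; a compact sketch on made-up survival data:

```python
import numpy as np

def kaplan_meier(time, event):
    """Product-limit estimate: S(t) = prod over event times of (1 - d_i / n_i)."""
    order = np.argsort(time)
    time, event = time[order], event[order]
    at_risk = len(time)
    steps, surv, s = [], [], 1.0
    for t in np.unique(time):
        mask = time == t
        d = int(event[mask].sum())        # events (e.g., disease onset) at time t
        if d:
            s *= 1.0 - d / at_risk
            steps.append(t); surv.append(s)
        at_risk -= int(mask.sum())        # failed and censored both leave the risk set
    return np.array(steps), np.array(surv)

t = np.array([2.0, 3.0, 3.0, 5.0, 8.0, 9.0])   # follow-up times (invented)
e = np.array([1, 1, 0, 1, 0, 1])               # 1 = event, 0 = censored
print(kaplan_meier(t, e))
```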
Open Access Article: Total Coloring Conjecture for Certain Classes of Graphs
Algorithms 2018, 11(10), 161; https://doi.org/10.3390/a11100161
Received: 30 August 2018 / Revised: 25 September 2018 / Accepted: 17 October 2018 / Published: 19 October 2018
Viewed by 263 | PDF Full-text (267 KB) | HTML Full-text | XML Full-text
Abstract
A total coloring of a graph G is an assignment of colors to the elements of the graph G such that no two adjacent or incident elements receive the same color. The total chromatic number of a graph G, denoted by χ″(G), is the minimum number of colors that suffice in a total coloring. Behzad and Vizing conjectured that for any graph G, Δ(G) + 1 ≤ χ″(G) ≤ Δ(G) + 2, where Δ(G) is the maximum degree of G. In this paper, we prove the total coloring conjecture for certain classes of graphs: deleted lexicographic products, line graphs, and double graphs. Full article
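For small graphs the conjecture can be checked directly: build the conflict relation over vertices and edges, then backtrack over Δ(G) + 1 colors, falling back to Δ(G) + 2. A brute-force sketch:

```python
import itertools

def total_chromatic_number(n, edges):
    """chi''(G) for tiny graphs; elements are the vertices and the edges."""
    elems = [('v', i) for i in range(n)] + [('e', u, v) for u, v in edges]
    idx = {x: k for k, x in enumerate(elems)}
    nbrs = [set() for _ in elems]
    def link(a, b):
        nbrs[idx[a]].add(idx[b]); nbrs[idx[b]].add(idx[a])
    for u, v in edges:
        link(('v', u), ('v', v))                    # adjacent vertices
        link(('v', u), ('e', u, v))                 # vertex incident to edge
        link(('v', v), ('e', u, v))
    for e1, e2 in itertools.combinations(edges, 2): # edges sharing an endpoint
        if set(e1) & set(e2):
            link(('e',) + tuple(e1), ('e',) + tuple(e2))
    color = [-1] * len(elems)
    def ok(i, k):
        if i == len(elems):
            return True
        used = {color[j] for j in nbrs[i]}
        for c in range(k):
            if c not in used:
                color[i] = c
                if ok(i + 1, k):
                    return True
        color[i] = -1
        return False
    k = max(sum(v in e for e in edges) for v in range(n)) + 1   # Delta(G) + 1
    return k if ok(0, k) else k + 1   # per the conjecture, Delta(G) + 2 otherwise

print(total_chromatic_number(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]))  # C5 -> 4
```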
Open Access Article: Modeling and Evaluation of Power-Aware Software Rejuvenation in Cloud Systems
Algorithms 2018, 11(10), 160; https://doi.org/10.3390/a11100160
Received: 30 August 2018 / Revised: 13 October 2018 / Accepted: 15 October 2018 / Published: 18 October 2018
Viewed by 241 | PDF Full-text (1954 KB) | HTML Full-text | XML Full-text
Abstract
Long and continuous running of software can cause software aging-induced errors and failures. Cloud data centers suffer from these kinds of failures when Virtual Machine Monitors (VMMs), which control the execution of Virtual Machines (VMs), age. Software rejuvenation is a proactive fault management technique that can prevent the occurrence of future failures by terminating VMMs, cleaning up their internal states, and restarting them. However, the appropriate time and type of VMM rejuvenation can affect performance, availability, and power consumption of a system. In this paper, an analytical model is proposed based on Stochastic Activity Networks for performance evaluation of Infrastructure-as-a-Service cloud systems. Using the proposed model, a two-threshold power-aware software rejuvenation scheme is presented. Many details of real cloud systems, such as VM multiplexing, migration of VMs between VMMs, VM heterogeneity, failure of VMMs, failure of VM migration, and different probabilities for arrival of different VM request types are investigated using the proposed model. The performance of the proposed rejuvenation scheme is compared with two baselines based on diverse performance, availability, and power consumption measures defined on the system. Full article
(This article belongs to the Special Issue Algorithms for the Resource Management of Large Scale Infrastructures)
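The two-threshold scheme suggests a simple decision rule; the sketch below is an illustrative reduction only — threshold names, values, and action semantics are assumptions, not the paper's calibrated policy:

```python
def rejuvenation_decision(vmm_age_hours, hosted_vms,
                          age_threshold=720, vm_threshold=2):
    """Illustrative two-threshold rejuvenation policy (all values assumed)."""
    if vmm_age_hours < age_threshold:
        return "keep running"            # not aged enough to pay the restart cost
    if hosted_vms <= vm_threshold:
        return "migrate VMs, rejuvenate" # cheap to drain: clean state, restart VMM
    return "stop admitting new VMs"      # drain gradually until below vm_threshold
```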
Open Access Article: A Faster Algorithm for Reducing the Computational Complexity of Convolutional Neural Networks
Algorithms 2018, 11(10), 159; https://doi.org/10.3390/a11100159
Received: 10 September 2018 / Revised: 4 October 2018 / Accepted: 16 October 2018 / Published: 18 October 2018
Viewed by 282 | PDF Full-text (1757 KB) | HTML Full-text | XML Full-text
Abstract
Convolutional neural networks have achieved remarkable improvements in image and video recognition but incur a heavy computational burden. To reduce the computational complexity of a convolutional neural network, this paper proposes an algorithm based on the Winograd minimal filtering algorithm and Strassen algorithm. Theoretical assessments of the proposed algorithm show that it can dramatically reduce computational complexity. Furthermore, the Visual Geometry Group (VGG) network is employed to evaluate the algorithm in practice. The results show that the proposed algorithm can provide the optimal performance by combining the savings of these two algorithms. It saves 75% of the runtime compared with the conventional algorithm. Full article
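To give the flavor of the minimal filtering component: the classic 1D Winograd F(2,3) kernel computes two outputs of a 3-tap filter with four multiplications instead of six (the paper applies the 2D analogue inside convolution layers, combined with Strassen's matrix multiplication):

```python
import numpy as np

def winograd_f23(d, g):
    """F(2,3): two correlation outputs from four inputs using 4 multiplies."""
    m1 = (d[0] - d[2]) * g[0]
    m2 = (d[1] + d[2]) * (g[0] + g[1] + g[2]) / 2
    m3 = (d[2] - d[1]) * (g[0] - g[1] + g[2]) / 2
    m4 = (d[1] - d[3]) * g[2]
    return np.array([m1 + m2 + m3, m2 - m3 - m4])

d, g = np.array([1.0, 2.0, 3.0, 4.0]), np.array([0.5, 1.0, -1.0])
assert np.allclose(winograd_f23(d, g),
                   [np.dot(d[0:3], g), np.dot(d[1:4], g)])  # matches direct form
```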
Open Access Article: Incremental Learning for Classification of Unstructured Data Using Extreme Learning Machine
Algorithms 2018, 11(10), 158; https://doi.org/10.3390/a11100158
Received: 27 September 2018 / Revised: 13 October 2018 / Accepted: 16 October 2018 / Published: 17 October 2018
Viewed by 341 | PDF Full-text (1215 KB) | HTML Full-text | XML Full-text
Abstract
Unstructured data are irregular information with no predefined data model. Streaming data, which constantly arrive over time, are unstructured, and classifying these data is a tedious task as they lack class labels and accumulate over time. As the data keep growing, it becomes difficult to train and create a model from scratch each time. Incremental learning, a self-adaptive algorithm, uses the previously learned model information, then learns and accommodates new information from newly arrived data to provide an updated model, which avoids retraining. The incrementally learned knowledge helps to classify the unstructured data. In this paper, we propose a framework, CUIL (Classification of Unstructured data using Incremental Learning), which clusters the metadata, assigns a label to each cluster, and then creates a model using the Extreme Learning Machine (ELM), a feed-forward neural network, incrementally for each batch of data that arrives. The proposed framework trains the batches separately, significantly reducing memory use and training time, and is tested with metadata created for standard image datasets such as MNIST, STL-10, CIFAR-10, Caltech101, and Caltech256. The tabulated results show that our proposed work achieves greater accuracy and efficiency. Full article
(This article belongs to the Special Issue Algorithms for Decision Making)
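A batch ELM core fits in a few lines — the random hidden weights stay fixed and only the output weights are solved in closed form; CUIL's per-batch incremental update is omitted from this sketch:

```python
import numpy as np

def elm_train(X, T, hidden=200, seed=0):
    """ELM: random input weights, least-squares output weights (no backprop)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], hidden))   # random, never trained
    b = rng.normal(size=hidden)
    H = np.tanh(X @ W + b)                      # random feature map
    beta = np.linalg.pinv(H) @ T                # closed-form solve (T: one-hot labels)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta            # class = argmax over columns
```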
Open Access Article: LSTM Accelerator for Convolutional Object Identification
Algorithms 2018, 11(10), 157; https://doi.org/10.3390/a11100157
Received: 4 August 2018 / Revised: 10 October 2018 / Accepted: 12 October 2018 / Published: 17 October 2018
Viewed by 278 | PDF Full-text (507 KB) | HTML Full-text | XML Full-text
Abstract
Deep learning has dramatically advanced the state of the art in vision, speech, and many other areas. Recently, numerous deep learning algorithms have been proposed to solve traditional artificial intelligence problems. In this paper, in order to detect the version that can provide the best trade-off in terms of time and accuracy, convolutional networks of various depths have been implemented. Batch normalization is also considered, since it acts as a regularizer and achieves the same accuracy with fewer training steps. To reduce computational complexity while minimizing the loss of accuracy, LSTM neural network layers are utilized in the process. The LSTM layers are shown to classify the image sequences faster while achieving better precision. Concretely, the more complex the CNN, the higher the accuracy; moreover, beyond the large increase in accuracy, the processing time was significantly decreased, which ultimately rendered the trade-off optimal. The average improvement in performance across all models and both datasets used amounted to 42%. Full article
(This article belongs to the Special Issue Humanistic Data Mining: Tools and Applications)
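A compact sketch of the kind of CNN-feature-to-LSTM pipeline the abstract describes, with batch normalization in the convolutional stages; depths, widths, and class count are illustrative assumptions:

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, n_classes=10, feat=64, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(                 # per-frame feature extractor
            nn.Conv2d(3, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, feat, 3, padding=1), nn.BatchNorm2d(feat), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.lstm = nn.LSTM(feat, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):                         # x: (batch, seq, 3, H, W)
        b, s = x.shape[:2]
        f = self.cnn(x.flatten(0, 1)).flatten(1)  # (batch*seq, feat)
        out, _ = self.lstm(f.view(b, s, -1))      # run features through time
        return self.fc(out[:, -1])                # classify from the last step
```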
Open Access Article: Online Uniformly Inserting Points on the Sphere
Algorithms 2018, 11(10), 156; https://doi.org/10.3390/a11100156
Received: 11 August 2018 / Revised: 25 September 2018 / Accepted: 11 October 2018 / Published: 16 October 2018
Viewed by 271 | PDF Full-text (570 KB) | HTML Full-text | XML Full-text
Abstract
Uniformly inserting points on the sphere has been found useful in many scientific and engineering fields. Different from the offline version, where the number of points is known in advance, we consider the online version of this problem. The requests for point insertion arrive one by one and the target is to insert points as uniformly as possible. To measure the uniformity we use the gap ratio, defined as the ratio of the maximal gap to the minimal gap between two arbitrary inserted points. We propose a two-phase online insertion strategy with a gap ratio of at most 3.69. Moreover, the lower bound on the gap ratio is proved to be at least 1.78. Full article
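A sketch of how such a configuration might be scored numerically, assuming max gap ≈ the covering radius (estimated here by random probing) and min gap = the smallest pairwise geodesic distance; the paper's exact definition may differ in constants:

```python
import numpy as np

def gap_ratio(P, n_probe=20000, seed=0):
    """Estimated gap ratio of unit vectors P (rows) on the sphere."""
    rng = np.random.default_rng(seed)
    probes = rng.normal(size=(n_probe, 3))
    probes /= np.linalg.norm(probes, axis=1, keepdims=True)
    d = np.arccos(np.clip(probes @ P.T, -1.0, 1.0))   # geodesic distances
    max_gap = d.min(axis=1).max()                     # farthest probe from the set
    pair = np.arccos(np.clip(P @ P.T, -1.0, 1.0))
    min_gap = pair[np.triu_indices(len(P), 1)].min()  # closest inserted pair
    return max_gap / min_gap
```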
Open Access Article: Real-Time Tumor Motion Tracking in 3D Using Planning 4D CT Images during Image-Guided Radiation Therapy
Algorithms 2018, 11(10), 155; https://doi.org/10.3390/a11100155
Received: 9 September 2018 / Revised: 5 October 2018 / Accepted: 7 October 2018 / Published: 11 October 2018
Viewed by 289 | PDF Full-text (3231 KB) | HTML Full-text | XML Full-text
Abstract
In this paper we propose a novel method for tracking the respiratory phase and 3D tumor position in real time during treatment. The method uses planning four-dimensional (4D) computed tomography (CT) obtained through the respiratory phase, and a kV projection taken during treatment. First, digitally reconstructed radiographs (DRRs) are generated from the 4DCT, and the structural similarity (SSIM) between the DRRs and the kV projection is computed to determine the current respiratory phase and magnitude. The 3D position of the tumor corresponding to the phase and magnitude is estimated using non-rigid registration by utilizing the tumor path segmented in the 4DCT. This method is evaluated using data from six patients with lung cancer and dynamic diaphragm phantom data. The method performs well irrespective of the gantry angle used, i.e., a respiration phase tracking accuracy of 97.2 ± 2.5% and a 3D tumor tracking error of 0.9 ± 0.4 mm. The phantom study reveals that the DRRs match the actual projections well. The time taken to track the tumor is 400 ± 53 ms. This study demonstrated the feasibility of a technique to track the respiratory phase and 3D tumor position in real time using kV fluoroscopy acquired from arbitrary angles around the freely breathing patient. Full article
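The phase lookup reduces to scoring one kV frame against a bank of DRRs. Below is a sketch using a single-window (global) SSIM; practical pipelines, presumably including the paper's, use windowed SSIM, and the inputs here are hypothetical arrays:

```python
import numpy as np

def ssim_global(x, y, L=255.0):
    """Global SSIM between two images x, y with dynamic range L."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))

def best_phase(drrs, kv):
    """Pick the DRR (one per 4DCT respiratory phase) most similar to the kV frame."""
    scores = [ssim_global(d.astype(float), kv.astype(float)) for d in drrs]
    return int(np.argmax(scores)), max(scores)
```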
Open Access Article: Two Hesitant Multiplicative Decision-Making Algorithms and Their Application to Fog-Haze Factor Assessment Problem
Algorithms 2018, 11(10), 154; https://doi.org/10.3390/a11100154
Received: 29 August 2018 / Revised: 28 September 2018 / Accepted: 2 October 2018 / Published: 10 October 2018
Cited by 1 | Viewed by 280 | PDF Full-text (255 KB) | HTML Full-text | XML Full-text
Abstract
The hesitant multiplicative preference relation (HMPR) is a useful tool for coping with problems in which experts use Saaty's 1–9 scale to express their preference information over paired comparisons of alternatives. It is known that a lack of acceptable consistency easily leads to inconsistent conclusions; therefore, consistency improvement processes and deriving a reliable priority weight vector for alternatives are two significant and challenging issues for hesitant multiplicative information decision-making problems. In this paper, some new concepts are first introduced, including the HMPR, the consistent HMPR, and the consistency index of an HMPR. Then, based on the logarithmic least squares model and a linear optimization model, two novel automatic iterative algorithms are proposed to enhance the consistency of an HMPR and generate its priority weights, and both are proved to be convergent. In the end, the proposed algorithms are applied to the fog-haze factor assessment problem. The comparative analysis shows that the decision-making process with our algorithms is more straightforward and efficient. Full article
(This article belongs to the Special Issue Algorithms for Decision Making)
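The algorithms build on the logarithmic least squares model; for a single-valued multiplicative preference relation that model reduces to the classical row-geometric-mean prioritization sketched below (the hesitant, iterative consistency-repair part of the paper is not reproduced):

```python
import numpy as np

def llsm_weights(A):
    """Row geometric mean = logarithmic least squares priorities for a
    multiplicative preference relation A (a_ij ~ w_i / w_j, Saaty 1-9 scale)."""
    g = np.exp(np.log(A).mean(axis=1))   # geometric mean of each row
    return g / g.sum()                   # normalize to a weight vector

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])
print(llsm_weights(A))                   # priority weights, summing to 1
```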
Open Access Article: Chronotype, Risk and Time Preferences, and Financial Behaviour
Algorithms 2018, 11(10), 153; https://doi.org/10.3390/a11100153
Received: 14 September 2018 / Revised: 5 October 2018 / Accepted: 8 October 2018 / Published: 10 October 2018
Viewed by 281 | PDF Full-text (2218 KB) | HTML Full-text | XML Full-text
Abstract
This paper examines the effect of chronotype on delinquent credit card payments and stock market participation through preference channels. Using an online survey of 455 individuals who have been working for 3 to 8 years in companies in mainland China, the results reveal that morningness is negatively associated with delinquent credit card payments. Morningness also indirectly predicts delinquent credit card payments through time preference, but this relationship only exists when individuals' monthly income is at a low or average level. On the other hand, financial risk preference accounts for the effect of morningness on stock market participation. An additional finding is that morningness is positively associated with financial risk preference, which contradicts previous findings in the literature. Finally, based on the empirical evidence, we discuss the plausible mechanisms that may drive these relationships and the implications for theory and practice. The current study contributes to the literature by examining the links between circadian typology and the particular financial behaviour of experienced workers. Full article
(This article belongs to the Special Issue Algorithms in Computational Finance)
Open Access Article: Accelerated Iterative Learning Control of Speed Ripple Suppression for a Seeker Servo Motor
Algorithms 2018, 11(10), 152; https://doi.org/10.3390/a11100152
Received: 10 September 2018 / Revised: 28 September 2018 / Accepted: 2 October 2018 / Published: 10 October 2018
Viewed by 257 | PDF Full-text (2229 KB) | HTML Full-text | XML Full-text
Abstract
To suppress the speed ripple of a permanent magnet synchronous motor in a seeker servo system, we propose an accelerated iterative learning control with an adjustable learning interval. First, according to the error of the current learning iteration, we determine the next iterative learning interval and correct the learning gain in real time. As the number of iterations increases, the actual interval that needs correction constantly shortens, accelerating the convergence speed. Second, we analyze the specific structure of the controller under reasonable assumptions on its operating conditions. Using the λ-norm, we obtain a strict mathematical proof of convergence for the P-type iterative learning control and derive the controller's convergence condition. Finally, we apply the proposed method to the periodic ripple suppression of the torque and rotation speed of the permanent magnet synchronous motor and establish the system model; we use a periodic load torque to simulate the ripple torque of the synchronous motor. The simulation and experimental results indicate the effectiveness of the method. Full article
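A minimal sketch of the underlying P-type update law u_{k+1}(t) = u_k(t) + Γ e_k(t+1) on a toy first-order plant (the paper's adjustable learning interval and real-time gain correction are omitted); with |1 − Γb| < 1 the trial error contracts at every iteration:

```python
import numpy as np

a, b, T, gam = 0.9, 0.5, 50, 1.2            # plant y[t+1] = a*y[t] + b*u[t]
y_ref = np.sin(np.linspace(0, 2 * np.pi, T + 1))   # one period of the reference

u = np.zeros(T)
for k in range(8):                           # iteration (trial) domain
    y = np.zeros(T + 1)
    for t in range(T):
        y[t + 1] = a * y[t] + b * u[t]
    e = y_ref[1:] - y[1:]                    # tracking error of this trial
    print(f"trial {k}: max|e| = {np.abs(e).max():.4f}")
    u = u + gam * e                          # P-type iterative learning update
```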
Open Access Article: K-Means Cloning: Adaptive Spherical K-Means Clustering
Algorithms 2018, 11(10), 151; https://doi.org/10.3390/a11100151
Received: 17 May 2018 / Revised: 22 September 2018 / Accepted: 25 September 2018 / Published: 6 October 2018
Viewed by 350 | PDF Full-text (998 KB) | HTML Full-text | XML Full-text
Abstract
We propose a novel method for adaptive K-means clustering. The proposed method overcomes the problems of the traditional K-means algorithm. Specifically, the proposed method does not require prior knowledge of the number of clusters. Additionally, the initial identification of the cluster elements has no negative impact on the final generated clusters. Inspired by cell cloning in microorganism cultures, each added data sample causes the existing cluster ‘colonies’ to evaluate, together with the other clusters, various merging or splitting actions in order to reach the optimum cluster set. The proposed algorithm is adequate for clustering data in isolated or overlapping compact spherical clusters. Experimental results support the effectiveness of this clustering algorithm. Full article
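For reference, the fixed-k spherical k-means core that the cloning mechanism adapts; the colony merge/split logic itself is not reproduced here:

```python
import numpy as np

def spherical_kmeans(X, k, iters=50, seed=0):
    """Plain spherical k-means: cosine-similarity assignment on unit vectors."""
    rng = np.random.default_rng(seed)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)   # project onto the sphere
    C = X[rng.choice(len(X), k, replace=False)]        # initial centroids
    for _ in range(iters):
        labels = (X @ C.T).argmax(axis=1)              # nearest centroid by cosine
        for j in range(k):
            if (labels == j).any():
                m = X[labels == j].sum(axis=0)
                C[j] = m / np.linalg.norm(m)           # renormalized mean direction
    return labels, C
```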
Open Access Article: SLoPCloud: An Efficient Solution for Locality Problem in Peer-to-Peer Cloud Systems
Algorithms 2018, 11(10), 150; https://doi.org/10.3390/a11100150
Received: 2 September 2018 / Revised: 30 September 2018 / Accepted: 30 September 2018 / Published: 2 October 2018
Viewed by 289 | PDF Full-text (466 KB) | HTML Full-text | XML Full-text
Abstract
Peer-to-Peer (P2P) cloud systems are becoming more popular due to their high computational capability, scalability, reliability, and efficient data sharing. However, sending and receiving massive amounts of data causes huge network traffic, leading to significant communication delays. In P2P systems, a considerable amount of this traffic and delay is due to the mismatch between the physical layer and the overlay layer, which is referred to as the locality problem. To achieve higher performance, and consequently resilience to failures, each peer has to connect to geographically closer peers. To the best of our knowledge, the locality problem is not considered in any well-known P2P cloud system, yet addressing it could enhance the overall network performance by shortening response times and decreasing overall network traffic. In this paper, we propose a novel, efficient, and general solution to the locality problem in P2P cloud systems based on the round-trip time (RTT). Furthermore, we suggest a flexible topology as the overlay graph to address the locality problem more effectively. Comprehensive simulation experiments demonstrate the applicability of the proposed algorithm to most well-known P2P overlay networks without introducing any serious overhead. Full article
(This article belongs to the Special Issue Algorithms for the Resource Management of Large Scale Infrastructures)
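A toy sketch of RTT-driven neighbor selection, the basic operation behind locality-aware overlays; the peer names and the RTT probe are simulated stand-ins, not the paper's protocol:

```python
import random

def measure_rtt(peer):
    """Stand-in for a real ping; returns a simulated round-trip time in ms."""
    return random.uniform(5, 300)

def closest_peers(candidates, m=8, probes=3):
    """Keep the m candidates with the smallest averaged RTT, so overlay
    neighbors tend to be physically nearby rather than arbitrary."""
    avg = {p: sum(measure_rtt(p) for _ in range(probes)) / probes
           for p in candidates}
    return sorted(candidates, key=avg.get)[:m]

print(closest_peers([f"peer{i}" for i in range(20)]))
```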
Open Access Article: Cover Time in Edge-Uniform Stochastically-Evolving Graphs
Algorithms 2018, 11(10), 149; https://doi.org/10.3390/a11100149
Received: 28 February 2018 / Revised: 18 July 2018 / Accepted: 29 September 2018 / Published: 2 October 2018
Viewed by 285 | PDF Full-text (317 KB) | HTML Full-text | XML Full-text
Abstract
We define a general model of stochastically-evolving graphs, namely the edge-uniform stochastically-evolving graphs. In this model, each possible edge of an underlying general static graph evolves independently, being either alive or dead at each discrete time step of evolution, following a (Markovian) stochastic rule. The stochastic rule is identical for each possible edge and may depend on the past k ≥ 0 observations of the edge’s state. We examine two kinds of random walks for a single agent taking place in such a dynamic graph: (i) the Random Walk with a Delay (RWD), where at each step the agent chooses (uniformly at random) an incident possible edge, i.e., an incident edge in the underlying static graph, and then waits till the edge becomes alive to traverse it; and (ii) the more natural Random Walk on what is Available (RWA), where the agent only looks at alive incident edges at each time step and traverses one of them uniformly at random. Our study is on bounding the cover time, i.e., the expected time until each node is visited at least once by the agent. For RWD, we provide a first upper bound for the cases k = 0, 1 by correlating RWD with a simple random walk on a static graph. Moreover, we present a modified electrical network theory capturing the k = 0 case. For RWA, we derive some first bounds for the case k = 0, by reducing RWA to an RWD-equivalent walk with a modified delay. Further, we also provide a framework that is shown to compute the exact value of the cover time for a general family of stochastically-evolving graphs in exponential time. Finally, we conduct experiments on the cover time of RWA in edge-uniform graphs and compare the experimental findings with our theoretical bounds. Full article
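A Monte Carlo sketch of the RWA cover time for the memoryless case k = 0, where every possible edge is alive independently with a fixed probability at each step (the graph and probability below are illustrative):

```python
import random

def rwa_cover_time(n, edges, p_alive, trials=200):
    """Average cover time of the Random Walk on what is Available (k = 0)."""
    nbrs = {v: [] for v in range(n)}
    for u, v in edges:
        nbrs[u].append(v); nbrs[v].append(u)
    total = 0
    for _ in range(trials):
        pos, seen, t = 0, {0}, 0
        while len(seen) < n:
            t += 1
            alive = [w for w in nbrs[pos] if random.random() < p_alive]
            if alive:                      # no alive incident edge: wait in place
                pos = random.choice(alive)
                seen.add(pos)
        total += t
    return total / trials

ring = [(i, (i + 1) % 8) for i in range(8)]
print(rwa_cover_time(8, ring, p_alive=0.5))
```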
Open Access Article: Fuzzy Q-Learning Agent for Online Tuning of PID Controller for DC Motor Speed Control
Algorithms 2018, 11(10), 148; https://doi.org/10.3390/a11100148
Received: 31 August 2018 / Revised: 20 September 2018 / Accepted: 29 September 2018 / Published: 30 September 2018
Viewed by 386 | PDF Full-text (3899 KB) | HTML Full-text | XML Full-text
Abstract
This paper proposes a hybrid Ziegler–Nichols (Z-N) reinforcement learning approach for online tuning of the parameters of the Proportional Integral Derivative (PID) controller for controlling the speed of a DC motor. The PID gains are set by the Z-N method and are then adapted online through a fuzzy Q-Learning agent. The fuzzy Q-Learning agent is used instead of conventional Q-Learning in order to deal with the continuous state-action space. The fuzzy Q-Learning agent defines its state according to the value of the error. The output signal of the agent consists of three output variables, each of which defines the percentage change of one gain. Each gain can be increased or decreased by 0% to 50% of its initial value. Through this method, the gains of the controller are adjusted online via interaction with the environment. Expert knowledge is not a necessity during the setup process. The simulation results highlight the performance of the proposed control strategy. After the exploration phase, the settling time is reduced in the steady states. In the transient states, the response has lower-amplitude oscillations and reaches the equilibrium point faster than the conventional PID controller. Full article
(This article belongs to the Special Issue Algorithms for PID Controller)
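A crisp-state toy of the idea — Q-learning choosing percentage changes of one gain relative to its initial value. The paper replaces the crisp error bins below with fuzzy memberships to cover the continuous state-action space, and the plant here is a stand-in:

```python
import numpy as np

rng = np.random.default_rng(0)
actions = np.array([-0.5, -0.25, 0.0, 0.25, 0.5])  # +/- up to 50% of initial gain
n_states = 5
Q = np.zeros((n_states, len(actions)))
alpha, gamma, eps = 0.1, 0.9, 0.2
kp0 = 2.0

def bin_error(e):                    # crisp state: binned error magnitude
    return min(int(abs(e) / 0.2), n_states - 1)

def plant_step(kp, e):               # stand-in plant: larger kp damps the error
    return e * (1.0 - 0.1 * kp) + rng.normal(0.0, 0.01)

e = 1.0
s = bin_error(e)
for _ in range(500):
    a = rng.integers(len(actions)) if rng.random() < eps else int(Q[s].argmax())
    kp = kp0 * (1.0 + actions[a])            # gain moved as a % of its initial value
    e_next = plant_step(kp, e)
    s_next = bin_error(e_next)
    reward = -abs(e_next)                    # small error -> high reward
    Q[s, a] += alpha * (reward + gamma * Q[s_next].max() - Q[s, a])
    s, e = s_next, e_next
print("learned greedy action per state:", actions[Q.argmax(axis=1)])
```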
Open Access Article: WSN Nodes Placement Optimization Based on a Weighted Centroid Artificial Fish Swarm Algorithm
Algorithms 2018, 11(10), 147; https://doi.org/10.3390/a11100147
Received: 22 August 2018 / Revised: 23 September 2018 / Accepted: 26 September 2018 / Published: 30 September 2018
Viewed by 397 | PDF Full-text (2038 KB) | HTML Full-text | XML Full-text
Abstract
Aiming at the optimal placement of wireless sensor network (WSN) nodes on a wind turbine blade for health inspection, a weighted centroid artificial fish swarm algorithm (WC-AFSA) is proposed. A weighted centroid algorithm is applied to construct the initial fish population so as to enhance fish diversity and improve search precision. An adaptive step size based on a dynamic parameter is used to jump out of local optima and improve the convergence speed. Optimal sensor placement is realized by minimizing the maximum off-diagonal element of the modal assurance criterion as the objective function. Five typical test functions are applied to verify the effectiveness of the algorithm, and optimal placement of WSN nodes on a wind turbine blade is carried out. The results show that WC-AFSA has a better optimization effect than AFSA and can solve the problem of optimal arrangement of blade WSN nodes. Full article
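A sketch of the weighted-centroid seeding idea, assuming inverse-distance weights and a Gaussian spread around the centroid; this is a reduced, hypothetical form of the paper's initializer:

```python
import numpy as np

def weighted_centroid_population(anchors, d, pop=30, spread=0.5, seed=0):
    """Seed the artificial fish swarm around a weighted centroid of anchor
    points, with weights ~ 1/distance (all parameters illustrative)."""
    rng = np.random.default_rng(seed)
    w = 1.0 / np.asarray(d)
    centroid = (w[:, None] * anchors).sum(axis=0) / w.sum()
    return centroid + spread * rng.standard_normal((pop, anchors.shape[1]))

anchors = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 3.0]])
fish = weighted_centroid_population(anchors, d=[1.0, 2.0, 2.5])
```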
Open Access Article: Fast Tuning of the PID Controller in An HVAC System Using the Big Bang–Big Crunch Algorithm and FPGA Technology
Algorithms 2018, 11(10), 146; https://doi.org/10.3390/a11100146
Received: 1 September 2018 / Revised: 23 September 2018 / Accepted: 26 September 2018 / Published: 28 September 2018
Viewed by 417 | PDF Full-text (2992 KB) | HTML Full-text | XML Full-text
Abstract
This article presents a novel technique for the fast tuning of the parameters of the proportional–integral–derivative (PID) controller of a second-order heat, ventilation, and air conditioning (HVAC) system. HVAC systems vary greatly in size, control functions, and the amount of consumed energy. The optimal design and power efficiency of an HVAC system depend on how fast the integrated controller, e.g., the PID controller, adapts to changes in the environmental conditions. In this paper, to achieve high tuning speed, we rely on a fast-convergence evolutionary algorithm called Big Bang–Big Crunch (BB–BC). The BB–BC algorithm is implemented, along with the PID controller, in an FPGA device in order to further accelerate the optimization process. The FPGA-in-the-loop (FIL) technique is used to connect the FPGA board (i.e., the PID and BB–BC subsystems) with the plant (i.e., MATLAB/Simulink models of the HVAC) in order to emulate and evaluate the entire system. The experimental results demonstrate the efficiency of the proposed technique in terms of optimization accuracy and convergence speed compared with other optimization approaches for tuning the PID parameters: a software implementation of the BB–BC, the genetic algorithm (GA), and particle swarm optimization (PSO). Full article
(This article belongs to the Special Issue Algorithms for PID Controller)
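A generic Big Bang–Big Crunch loop shrinks the explosion radius as 1/k around the fitness-weighted center of mass. The toy objective below merely stands in for a PID performance index, such as the integrated squared error of the closed-loop step response:

```python
import numpy as np

def bb_bc(fitness, dim, lo, hi, pop=40, iters=100, seed=1):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, size=(pop, dim))       # big bang: random candidates
    best_x, best_f = None, np.inf
    for k in range(1, iters + 1):
        f = np.array([fitness(x) for x in X])
        if f.min() < best_f:
            best_f, best_x = f.min(), X[f.argmin()].copy()
        w = 1.0 / (f + 1e-12)                      # lighter cost -> heavier mass
        center = (w[:, None] * X).sum(axis=0) / w.sum()   # big crunch
        X = np.clip(center + (hi - lo) * rng.standard_normal((pop, dim)) / k,
                    lo, hi)                        # new bang, radius shrinks as 1/k
    return best_x, best_f

# toy stand-in cost: distance of (Kp, Ki, Kd) to an assumed optimum (2, 1, 0.5)
best, f = bb_bc(lambda x: ((x - np.array([2.0, 1.0, 0.5])) ** 2).sum(), 3, 0.0, 5.0)
print(best, f)
```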
Open Access Article: Reducing the Operational Cost of Cloud Data Centers through Renewable Energy
Algorithms 2018, 11(10), 145; https://doi.org/10.3390/a11100145
Received: 31 July 2018 / Revised: 31 August 2018 / Accepted: 21 September 2018 / Published: 27 September 2018
Viewed by 337 | PDF Full-text (933 KB) | HTML Full-text | XML Full-text
Abstract
The success of cloud computing services has led to big computing infrastructures that are complex to manage and very costly to operate. In particular, power supply dominates the operational costs of big infrastructures, and several solutions have to be put in place to alleviate these operational costs and make the whole infrastructure more sustainable. In this paper, we investigate the case of a complex infrastructure composed of data centers (DCs) located in different geographical areas in which renewable energy generators are installed, co-located with the data centers, to reduce the amount of energy that must be purchased from the power grid. Since renewable energy generators are intermittent, the load management strategies of the infrastructure have to be adapted to the intermittent nature of the sources. In particular, we consider EcoMultiCloud, a multi-objective load management strategy already proposed in the literature, and we adapt it to the presence of renewable energy sources. Hence, cost reduction is achieved in the load allocation process, when virtual machines (VMs) are assigned to a data center of the considered infrastructure, by considering both energy cost variations and the presence of renewable energy production. Performance is analyzed for a specific infrastructure composed of four data centers. Results show that, despite being intermittent and highly variable, renewable energy can be effectively exploited in geographical data centers when a smart load allocation strategy is implemented. In addition, the results confirm that EcoMultiCloud is very flexible and is well suited to the considered scenario. Full article
(This article belongs to the Special Issue Algorithms for the Resource Management of Large Scale Infrastructures)
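The allocation idea can be reduced to a greedy rule: send each VM where its marginal grid-energy cost, after the renewable offset, is smallest. A toy sketch with invented numbers (EcoMultiCloud's actual multi-objective ranking is richer than this):

```python
def place_vm(vm_load_kw, dcs):
    """Assign a VM to the data center with the lowest marginal grid-energy cost;
    local renewable output offsets grid draw (a reduced form of the idea)."""
    def marginal_cost(dc):
        grid_now = max(dc["load"] - dc["renew"], 0.0)
        grid_new = max(dc["load"] + vm_load_kw - dc["renew"], 0.0)
        return (grid_new - grid_now) * dc["price"]   # added $/h from the grid
    best = min(dcs, key=marginal_cost)
    best["load"] += vm_load_kw
    return best["name"]

dcs = [{"name": "DC1", "load": 800.0, "renew": 900.0, "price": 0.11},
       {"name": "DC2", "load": 600.0, "renew": 100.0, "price": 0.08}]
print(place_vm(50.0, dcs))   # DC1: the VM runs entirely on surplus renewables
```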
Open Access Article: Multi-Branch Deep Residual Network for Single Image Super-Resolution
Algorithms 2018, 11(10), 144; https://doi.org/10.3390/a11100144
Received: 21 August 2018 / Revised: 23 September 2018 / Accepted: 25 September 2018 / Published: 27 September 2018
Viewed by 326 | PDF Full-text (14238 KB) | HTML Full-text | XML Full-text
Abstract
Recently, algorithms based on deep neural networks and residual networks have been applied to super-resolution and have exhibited excellent performance. In this paper, a multi-branch deep residual network for single image super-resolution (MRSR) is proposed. In the network, we adopt a multi-branch framework and further optimize the structure of the residual network. By using residual blocks and filters reasonably, the model size is greatly expanded while stable training is still guaranteed. Besides, a perceptual evaluation function comprising three loss terms is proposed. The experimental results show that the evaluation function provides great support for the quality of reconstruction and competitive performance. The proposed method uses three steps, feature extraction, mapping, and reconstruction, to complete the super-resolution reconstruction, and shows superior performance compared with other state-of-the-art super-resolution methods on benchmark datasets. Full article
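A compact PyTorch sketch of the building blocks the abstract names — identity-skip residual blocks arranged in parallel branches with sub-pixel upsampling; the depths, widths, and branch-fusion rule are illustrative assumptions, not the MRSR specification:

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1))
    def forward(self, x):
        return x + self.body(x)          # identity skip: learn the residual only

class MultiBranchSR(nn.Module):          # hypothetical reduced form, not MRSR
    def __init__(self, ch=64, branches=3, blocks=4, scale=2):
        super().__init__()
        self.head = nn.Conv2d(3, ch, 3, padding=1)        # feature extraction
        self.branches = nn.ModuleList(
            nn.Sequential(*[ResBlock(ch) for _ in range(blocks)])
            for _ in range(branches))                      # mapping
        self.up = nn.Sequential(
            nn.Conv2d(ch, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale))                        # reconstruction
    def forward(self, x):
        f = self.head(x)
        f = sum(b(f) for b in self.branches) / len(self.branches)  # fuse branches
        return self.up(f)
```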
Open Access Article: An Algorithm for Mapping the Asymmetric Multiple Traveling Salesman Problem onto Colored Petri Nets
Algorithms 2018, 11(10), 143; https://doi.org/10.3390/a11100143
Received: 30 July 2018 / Revised: 4 September 2018 / Accepted: 14 September 2018 / Published: 25 September 2018
Cited by 1 | Viewed by 371 | PDF Full-text (4856 KB) | HTML Full-text | XML Full-text
Abstract
The Multiple Traveling Salesman Problem (mTSP) is an extension of the famous Traveling Salesman Problem. Finding an optimal solution to the mTSP is a difficult task, as it belongs to the class of NP-hard problems. The problem becomes more complicated when the cost matrix is not symmetric; in such cases, finding even a feasible solution becomes challenging. In this paper, an algorithm is presented that uses Colored Petri Nets (CPN), a mathematical modeling language, to represent the Multiple Traveling Salesman Problem. The proposed algorithm maps any given mTSP onto a CPN. The transformed model in CPN guarantees a feasible solution to the mTSP with an asymmetric cost matrix. The model is simulated in CPN Tools to measure two optimization objectives: the maximum time a salesman takes in a feasible solution and the collective time taken by all salesmen. The transformed model is also formally verified through reachability analysis to ensure that it is correct and terminating. Full article
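Whatever model produces the feasible routes, the two objectives measured in the paper are easy to state in code; a sketch with an invented asymmetric cost matrix:

```python
def mtsp_objectives(cost, routes, depot=0):
    """Evaluate a feasible mTSP solution on an asymmetric cost matrix:
    returns (makespan = slowest salesman's tour time, collective time)."""
    times = []
    for route in routes:                  # route: cities visited by one salesman
        tour = [depot] + list(route) + [depot]
        times.append(sum(cost[a][b] for a, b in zip(tour, tour[1:])))
    return max(times), sum(times)

cost = [[0, 2, 9, 10],
        [1, 0, 6, 4],
        [15, 7, 0, 8],
        [6, 3, 12, 0]]                    # cost[i][j] != cost[j][i] (asymmetric)
print(mtsp_objectives(cost, routes=[[1, 3], [2]]))   # -> (24, 36)
```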