Algorithms, Volume 12, Issue 10 (October 2019) – 22 articles

Cover Story: Temporal networks are graphs in which edges have temporal labels, specifying their starting and traversal times. Ignoring this information can lead to wrong conclusions about the reachability properties of the graph. On the other hand, computing these properties exactly may turn out to be computationally infeasible due to the huge number of temporal edges. In this paper we show how the probabilistic counting approach can be used to approximately compute the sizes of the temporal reachability cones, and how we can approximate the temporal neighborhood function (i.e., the number of pairs of nodes reachable from one another in a given time interval) of large temporal networks in a few seconds. Finally, we apply our algorithm in order to analyze and compare the behavior of 25 public transportation networks.
24 pages, 8447 KiB  
Article
Image Deblurring under Impulse Noise via Total Generalized Variation and Non-Convex Shrinkage
by Fan Lin, Yingpin Chen, Yuqun Chen and Fei Yu
Algorithms 2019, 12(10), 221; https://doi.org/10.3390/a12100221 - 22 Oct 2019
Cited by 3 | Viewed by 3438
Abstract
Image deblurring in the presence of impulse noise is a typical ill-posed inverse problem that has attracted great attention in the fields of image processing and computer vision. The fast total variation deconvolution (FTVd) algorithm proved to be an effective way to solve this problem. However, it only considers the sparsity of the first-order total variation, resulting in staircase artefacts. The FTVd model adopts the L1 norm to model the sparsity of the impulse noise, but the L1 norm has only a limited capacity to do so. To overcome this limitation, we present a new algorithm based on the Lp-pseudo-norm and total generalized variation (TGV) regularization. The TGV regularization puts sparse constraints on both the first-order and second-order gradients of the image, effectively preserving image edges while relieving undesirable artefacts. The Lp-pseudo-norm constraint replaces the L1 norm constraint to model the sparsity of the impulse noise more precisely. The alternating direction method of multipliers (ADMM) is adopted to solve the proposed model. In the numerical experiments, the proposed algorithm is compared with some state-of-the-art algorithms in terms of peak signal-to-noise ratio (PSNR), structural similarity (SSIM), signal-to-noise ratio (SNR), operation time, and visual effects to verify its superiority.
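The non-convex shrinkage step can be illustrated with a generalized p-shrinkage operator. The sketch below is one common surrogate for the Lp-pseudo-norm proximal step inside ADMM, not necessarily the paper's exact operator; the stabilizing epsilon is our own choice. Note that p = 1 recovers ordinary soft thresholding.

```python
import numpy as np

def p_shrinkage(y, tau, p, eps=1e-12):
    """Generalized p-shrinkage: an approximate proximal operator for the
    Lp pseudo-norm (0 < p <= 1). With p = 1 it reduces to the classical
    soft-thresholding operator used by L1-based models such as FTVd."""
    mag = np.abs(y)
    thresh = tau ** (2.0 - p) * (mag + eps) ** (p - 1.0)
    return np.sign(y) * np.maximum(mag - thresh, 0.0)

# p = 1: the threshold is tau for every entry (ordinary soft thresholding);
# p < 1: large entries are penalized less, small entries are shrunk harder.
```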
23 pages, 3750 KiB  
Article
Freeway Traffic Congestion Reduction and Environment Regulation via Model Predictive Control
by Juan Chen, Yuxuan Yu and Qi Guo
Algorithms 2019, 12(10), 220; https://doi.org/10.3390/a12100220 - 21 Oct 2019
Cited by 12 | Viewed by 3706
Abstract
This paper proposes a model predictive control method based on dynamic multi-objective optimization algorithms (MPC_CPDMO-NSGA-II) for reducing freeway congestion and relieving environmental impact simultaneously. A new dynamic multi-objective optimization algorithm based on clustering and prediction with NSGA-II (CPDMO-NSGA-II) is proposed and used to perform on-line optimization at each control step of the model predictive controller. The performance indicators considered consist of total time spent, total travel distance, total emissions, and total fuel consumption. The TOPSIS method is then adopted to select an optimal solution from the Pareto front obtained by the MPC_CPDMO-NSGA-II algorithm, and the solution is applied in the VISSIM environment. The control strategies are variable speed limit (VSL) and ramp metering (RM). To verify the performance of the proposed algorithm, it is tested in a simulation environment derived from a real freeway network in Shanghai with one on-ramp, and compared with a fixed speed limit strategy and a single-objective optimization method, respectively. Simulation results show that it can effectively alleviate traffic congestion and reduce emissions and fuel consumption compared with the fixed speed limit strategy and a classical model predictive control method based on single-objective optimization.
(This article belongs to the Special Issue Model Predictive Control: Algorithms and Applications)
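The final selection step on the Pareto front can be sketched with a standard TOPSIS implementation. The code below is a generic version; the weights, the objective directions (the abstract minimizes time, emissions, and fuel, while travel distance is typically maximized), and the numbers in the example front are illustrative, not the authors' configuration.

```python
import numpy as np

def topsis(F, weights, benefit):
    """Pick the best compromise from a Pareto front F (rows = solutions,
    columns = objectives). benefit[j] is True if objective j is maximized."""
    V = (F / np.linalg.norm(F, axis=0)) * weights      # normalize and weight
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)          # distance to ideal point
    d_neg = np.linalg.norm(V - anti, axis=1)           # distance to anti-ideal
    return int(np.argmax(d_neg / (d_pos + d_neg)))     # highest closeness wins

# Hypothetical front: [total time, travel distance, emissions, fuel]
F = np.array([[320.0, 5100.0, 12.1, 9.8],
              [340.0, 5400.0, 11.5, 9.2],
              [365.0, 5600.0, 11.0, 9.0]])
best = topsis(F, weights=np.array([0.4, 0.2, 0.2, 0.2]),
              benefit=np.array([False, True, False, False]))
```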
11 pages, 259 KiB  
Article
On Finding Two Posets that Cover Given Linear Orders
by Ivy Ordanel, Proceso Fernandez, Jr. and Henry Adorna
Algorithms 2019, 12(10), 219; https://doi.org/10.3390/a12100219 - 19 Oct 2019
Cited by 2 | Viewed by 3154
Abstract
The Poset Cover Problem is an optimization problem where the goal is to determine a minimum set of posets that covers a given set of linear orders. This problem is relevant in the field of data mining, specifically in determining directed networks or models that explain the ordering of objects in a large sequential dataset. It is already known that the decision version of the problem is NP-hard, while the variation where the goal is to determine only a single poset that covers the input is in P. In this study, we investigate the variation, which we call the 2-Poset Cover Problem, where the goal is to determine two posets, if they exist, that cover the given linear orders. We derive properties of posets, which lead to an exact solution for the 2-Poset Cover Problem. Although the algorithm runs in exponential time, it is still significantly faster than a brute-force solution. Moreover, we show that when the posets being considered are tree-posets, the running time of the algorithm becomes polynomial, which proves that the more restricted variation, which we call the 2-Tree-Poset Cover Problem, is also in P.
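For concreteness, the covering relation can be checked directly: a poset (given as a set of order constraints) covers a linear order when that order is one of its linear extensions, and a candidate pair of posets is a solution when their linear extensions are exactly the input set. The brute-force check below (ours, exponential in the number of elements) is only meant to make the problem statement precise, not to reproduce the paper's algorithm.

```python
from itertools import permutations

def is_linear_extension(poset, order):
    """poset: set of pairs (a, b) meaning a < b; order: tuple of all elements."""
    pos = {x: i for i, x in enumerate(order)}
    return all(pos[a] < pos[b] for a, b in poset)

def covers_exactly(posets, linear_orders):
    """True iff the union of the posets' linear extensions equals the input."""
    elements = linear_orders[0]
    generated = {p for p in permutations(elements)
                 if any(is_linear_extension(P, p) for P in posets)}
    return generated == {tuple(L) for L in linear_orders}

# Two chains a<b<c and c<b<a cover exactly these two linear orders:
print(covers_exactly([{("a", "b"), ("b", "c")}, {("c", "b"), ("b", "a")}],
                     [("a", "b", "c"), ("c", "b", "a")]))   # True
```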
14 pages, 680 KiB  
Article
A New Coding Paradigm for the Primitive Relay Channel
by Marco Mondelli, S. Hamed Hassani and Rüdiger Urbanke
Algorithms 2019, 12(10), 218; https://doi.org/10.3390/a12100218 - 18 Oct 2019
Cited by 4 | Viewed by 2989
Abstract
We consider the primitive relay channel, where the source sends a message to the relay and to the destination, and the relay helps the communication by transmitting an additional message to the destination via a separate channel. Two well-known coding techniques have been introduced for this setting: decode-and-forward and compress-and-forward. In decode-and-forward, the relay completely decodes the message and sends some information to the destination; in compress-and-forward, the relay does not decode, and it sends a compressed version of the received signal to the destination using Wyner–Ziv coding. In this paper, we present a novel coding paradigm that provides an improved achievable rate for the primitive relay channel. The idea is to combine compress-and-forward and decode-and-forward via a chaining construction. We transmit over pairs of blocks: in the first block, we use compress-and-forward; in the second block, we use decode-and-forward. More specifically, in the first block, the relay does not decode; it compresses the received signal via Wyner–Ziv coding and sends only part of the compression to the destination. In the second block, the relay completely decodes the message, sends some information to the destination, and also sends the remaining part of the compression coming from the first block. By doing so, we are able to strictly outperform both compress-and-forward and decode-and-forward. Note that the proposed coding scheme can be implemented with polar codes. As such, it has the typical attractive properties of polar coding schemes, namely quasi-linear encoding and decoding complexity and an error probability that decays at super-polynomial speed. As a running example, we consider the special case of the erasure relay channel and provide a comparison between the rates achievable by our proposed scheme and the existing upper and lower bounds.
(This article belongs to the Special Issue Coding Theory and Its Application)
15 pages, 1249 KiB  
Article
Can People Really Do Nothing? Handling Annotation Gaps in ADL Sensor Data
by Alaa E. Abdel Hakim and Wael Deabes
Algorithms 2019, 12(10), 217; https://doi.org/10.3390/a12100217 - 17 Oct 2019
Cited by 2 | Viewed by 2781
Abstract
In supervised Activities of Daily Living (ADL) recognition systems, annotating collected sensor readings is an essential, yet exhausting, task. Readings are collected from activity-monitoring sensors in a 24/7 manner. The size of the produced dataset is so huge that it is almost impossible for a human annotator to give a certain label to every single instance in the dataset. This results in annotation gaps in the input data to the learning system, and these gaps negatively affect the performance of the recognition system. In this work, we propose and investigate three different paradigms to handle these gaps. In the first paradigm, the gaps are removed by dropping all unlabeled readings. In the second paradigm, a single "Unknown" or "Do-Nothing" label is given to all unlabeled readings. The last paradigm handles the gaps by giving every set of them a unique label derived from the certain labels that enclose it. We also propose a semantic preprocessing method for annotation gaps that constructs a hybrid combination of some of these paradigms for further performance improvement. The performance of the proposed three paradigms and their hybrid combination is evaluated using an ADL benchmark dataset containing more than 2.5 × 10^6 sensor readings collected over more than nine months. The evaluation results emphasize the performance contrast under the operation of each paradigm and support a specific gap-handling approach for better performance.
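The three paradigms are easy to state in code. The sketch below is our reading of them; in particular, constructing each gap label from the certain labels immediately before and after the gap is an assumption on our part, not necessarily the paper's exact rule.

```python
def handle_gaps(readings, paradigm):
    """readings: list of (sensor_reading, label) pairs; label is None in a gap."""
    if paradigm == "drop":        # paradigm 1: discard all unlabeled readings
        return [(r, y) for r, y in readings if y is not None]
    if paradigm == "unknown":     # paradigm 2: one shared catch-all label
        return [(r, y if y is not None else "Do-Nothing") for r, y in readings]
    # paradigm 3: each gap gets a unique label built from its enclosing labels
    out, prev, i = [], "START", 0
    while i < len(readings):
        r, y = readings[i]
        if y is not None:
            prev = y
            out.append((r, y))
            i += 1
        else:
            j = i
            while j < len(readings) and readings[j][1] is None:
                j += 1
            nxt = readings[j][1] if j < len(readings) else "END"
            out.extend((readings[k][0], f"gap({prev}->{nxt})")
                       for k in range(i, j))
            i = j
    return out
```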
15 pages, 1081 KiB  
Article
Adaptive Clustering via Symmetric Nonnegative Matrix Factorization of the Similarity Matrix
by Paola Favati, Grazia Lotti, Ornella Menchi and Francesco Romani
Algorithms 2019, 12(10), 216; https://doi.org/10.3390/a12100216 - 17 Oct 2019
Viewed by 2865
Abstract
The problem of clustering, that is, the partitioning of data into groups of similar objects, is a key step in many data-mining problems. The algorithm we propose for clustering is based on the symmetric nonnegative matrix factorization (SymNMF) of a similarity matrix. The algorithm is first presented for the case of a prescribed number k of clusters and then extended to the case where k is not given a priori. A heuristic approach improving the standard multistart strategy is proposed and validated experimentally.
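As background, SymNMF seeks a nonnegative H with A ≈ HHᵀ, after which row i of H can be read as the cluster membership of instance i. A minimal sketch using the damped multiplicative update of Kuang et al. (one standard SymNMF solver; the paper's own optimization and multistart details differ) follows:

```python
import numpy as np

def symnmf(A, k, iters=500, beta=0.5, seed=0):
    """Factor a symmetric nonnegative similarity matrix A (n x n) as H H^T
    with H >= 0 of shape (n, k), via the damped multiplicative update."""
    rng = np.random.default_rng(seed)
    H = rng.random((A.shape[0], k))
    for _ in range(iters):
        AH = A @ H
        HHtH = H @ (H.T @ H) + 1e-10               # guard against division by 0
        H *= (1.0 - beta) + beta * AH / HHtH       # damped update, beta = 0.5
    return H

# Hard clustering: assign instance i to the largest entry of row i.
# labels = symnmf(A, k=3).argmax(axis=1)
```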
16 pages, 2535 KiB  
Article
Backstepping Adaptive Neural Network Control for Electric Braking Systems of Aircrafts
by Xi Zhang and Hui Lin
Algorithms 2019, 12(10), 215; https://doi.org/10.3390/a12100215 - 15 Oct 2019
Cited by 4 | Viewed by 3417
Abstract
This paper proposes an adaptive backstepping control algorithm for electric braking systems with electromechanical actuators (EMAs). First, the ideal mathematical model of the EMA is established and the nonlinear factors, such as the deformation of the reduction gear, are analyzed. Subsequently, the actual mathematical model of the EMA is rebuilt by combining the ideal model with the nonlinear factors. To realize high-performance braking pressure control, the backstepping control method is adopted to address the mismatched uncertainties in the electric braking system, and a radial basis function (RBF) neural network is established to estimate the nonlinear functions in the control system. The experimental results indicate that the proposed braking pressure control strategy can improve the servo performance of the electric braking system. In addition, the hardware-in-the-loop (HIL) experimental results show that the proposed EMA controller can satisfy the requirements of aircraft antilock braking systems.
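In such designs, the RBF network approximates an unknown nonlinearity f(x) as Wᵀφ(x) with Gaussian basis functions, and the weights are adapted online by a Lyapunov-derived law. The sketch below is illustrative only: the centers, widths, and the simple gradient-style update are our assumptions, not the paper's tuned design.

```python
import numpy as np

def rbf(x, centers, widths):
    """Gaussian basis vector phi(x) for an RBF approximator f_hat = W^T phi.
    centers: (n_basis, dim); widths: scalar or (n_basis,); x: (dim,)."""
    return np.exp(-np.sum((x - centers) ** 2, axis=1) / (2.0 * widths ** 2))

def adapt_step(W, x, error, centers, widths, gamma=0.1):
    """One illustrative adaptation step, W <- W + gamma * phi(x) * error,
    the typical form of weight update laws in adaptive backstepping."""
    return W + gamma * rbf(x, centers, widths) * error
```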
7 pages, 327 KiB  
Article
Exploiting Sparse Statistics for a Sequence-Based Prediction of the Effect of Mutations
by Mihaly Mezei
Algorithms 2019, 12(10), 214; https://doi.org/10.3390/a12100214 - 14 Oct 2019
Cited by 1 | Viewed by 2634
Abstract
Recent work showed that there is a significant difference between the statistics of amino acid triplets and quadruplets in sequences of folded proteins and in randomly generated sequences. These statistics were used to assign a score to each sequence and to predict whether a sequence is likely to fold. The present paper extends the statistics to higher multiplets and suggests a way to handle multiplets that were not found in the set of folded proteins. In particular, foldability predictions were made along the lines of the previous work using pentuplet statistics, and a way was found to combine the quadruplet and pentuplet statistics to improve the foldability predictions. A different, simpler score was defined for hextuplets and heptuplets and was used to predict the direction of stability change of a protein upon mutation. With the best score combination, the accuracy of the prediction was 73.4%.
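The underlying bookkeeping is plain k-mer counting over a reference set of folded-protein sequences. The sketch below uses a pseudocount for unseen multiplets, which is one conventional way (ours, not necessarily the author's) to keep sparse hextuplet and heptuplet statistics usable:

```python
from collections import Counter
from math import log

def ktuple_counts(sequences, k):
    """Count all length-k windows (k-tuples) over the reference sequences."""
    counts = Counter()
    for seq in sequences:
        counts.update(seq[i:i + k] for i in range(len(seq) - k + 1))
    return counts

def ktuple_score(seq, counts, k, pseudo=0.5):
    """Sum of log relative frequencies; unseen k-tuples get a pseudocount."""
    total = sum(counts.values())
    return sum(log((counts.get(seq[i:i + k], 0) + pseudo) / total)
               for i in range(len(seq) - k + 1))
```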
16 pages, 568 KiB  
Article
Multimodal Dynamic Journey-Planning
by Kalliopi Giannakopoulou, Andreas Paraskevopoulos and Christos Zaroliagis
Algorithms 2019, 12(10), 213; https://doi.org/10.3390/a12100213 - 13 Oct 2019
Cited by 19 | Viewed by 4405
Abstract
In this paper, a new model, the multimodal dynamic timetable model (multimodal DTM), is presented for computing optimal multimodal journeys in schedule-based public transport systems. The new model constitutes an extension of the dynamic timetable model (DTM), which was developed originally for a different setting (unimodal journey planning). Multimodal DTM demonstrates a very fast query algorithm that meets the requirement for real-time responses to best-journey queries, and an ultra-fast update algorithm for updating the timetable information in case of delays of schedule-based vehicles. An experimental study on real-world metropolitan networks demonstrates that the query and update algorithms of multimodal DTM compare favorably with other state-of-the-art approaches when public transport is combined with travel modes that are unrestricted with respect to departure time (e.g., walking and electric vehicles).
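To make the query side concrete, the sketch below computes an earliest-arrival journey over a timetable given as a list of elementary connections, in the spirit of timetable query algorithms generally; this is not the DTM data structure itself, and footpaths and transfer times are omitted.

```python
def earliest_arrival(connections, source, target, start_time):
    """connections: (dep_stop, arr_stop, dep_time, arr_time) tuples, sorted
    by dep_time. A single linear scan relaxes every catchable connection."""
    INF = float("inf")
    arrival = {source: start_time}
    for dep_stop, arr_stop, dep_time, arr_time in connections:
        if arrival.get(dep_stop, INF) <= dep_time:      # connection is catchable
            if arr_time < arrival.get(arr_stop, INF):
                arrival[arr_stop] = arr_time
    return arrival.get(target, INF)
```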
2 pages, 156 KiB  
Addendum
Addendum: Mircea-Bogdan Radac and Timotei Lala. Learning Output Reference Model Tracking for Higher-order Nonlinear Systems with Unknown Dynamics. Algorithms 2019, 12, 121
by Mircea-Bogdan Radac and Timotei Lala
Algorithms 2019, 12(10), 212; https://doi.org/10.3390/a12100212 - 10 Oct 2019
Cited by 1 | Viewed by 2599
Abstract
The authors would like to mention that their paper is an extended version of the IEEE conference paper [...]
22 pages, 1427 KiB  
Article
Approximating the Temporal Neighbourhood Function of Large Temporal Graphs
by Pierluigi Crescenzi, Clémence Magnien and Andrea Marino
Algorithms 2019, 12(10), 211; https://doi.org/10.3390/a12100211 - 10 Oct 2019
Cited by 10 | Viewed by 3476
Abstract
Temporal networks are graphs in which edges have temporal labels, specifying their starting times and their traversal times. Several notions of distance between two nodes in a temporal network can be analyzed by referring, for example, to the earliest arrival time or to the latest starting time of a temporal path connecting the two nodes. In this paper, we mostly refer to the notion of temporal reachability based on the earliest arrival time. In particular, we first show how the sketch approach, which has already been used in the case of classical graphs, can be applied to temporal networks in order to approximately compute the sizes of the temporal reachability cones of a temporal network. By making use of this approach, we subsequently show how we can approximate the temporal neighborhood function (that is, the number of pairs of nodes reachable from one another in a given time interval) of large temporal networks in a few seconds. Finally, we apply our algorithm to analyze and compare the behavior of 25 public transportation temporal networks. Our results can easily be adapted to the case in which we want to refer to the notion of distance based on the latest starting time.
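An exact (but unscalable) baseline clarifies what is being approximated: for each source, compute earliest-arrival reachability inside the window, then count the reachable pairs. The sketch below assumes positive traversal times so that a single pass over the edges sorted by starting time suffices; the paper's contribution is to replace these per-source sets with probabilistic counters so that the whole neighborhood function can be estimated without the per-source scans.

```python
def reachable_from(edges_sorted, source, t_start, t_end):
    """edges_sorted: (u, v, start, traversal) sorted by start time. Returns
    the nodes reachable from `source` (itself included) by a time-respecting
    path whose edges start at or after t_start and arrive by t_end."""
    INF = float("inf")
    arrival = {source: t_start}
    for u, v, t, lam in edges_sorted:
        if t < t_start or t + lam > t_end:
            continue
        if arrival.get(u, INF) <= t and t + lam < arrival.get(v, INF):
            arrival[v] = t + lam
    return set(arrival)

def temporal_neighborhood(nodes, edges, t_start, t_end):
    """Exact temporal neighborhood function: O(n * m), the cost the paper's
    probabilistic-counting approach is designed to avoid."""
    edges_sorted = sorted(edges, key=lambda e: e[2])
    return sum(len(reachable_from(edges_sorted, s, t_start, t_end))
               for s in nodes)
```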
14 pages, 426 KiB  
Article
Laplacian Eigenmaps Dimensionality Reduction Based on Clustering-Adjusted Similarity
by Honghu Zhou and Jun Wang
Algorithms 2019, 12(10), 210; https://doi.org/10.3390/a12100210 - 04 Oct 2019
Cited by 3 | Viewed by 3741
Abstract
Euclidean distance between instances is widely used to capture the manifold structure of data and for graph-based dimensionality reduction. However, in some circumstances, the basic Euclidean distance cannot accurately capture the similarity between instances; some instances from different classes but close to the decision boundary may be close to each other, which may mislead graph-based dimensionality reduction and compromise performance. To mitigate this issue, in this paper we propose an approach called Laplacian Eigenmaps based on Clustering-Adjusted Similarity (LE-CAS). LE-CAS first performs clustering on all instances to explore the global structure and discrimination of the instances, and quantifies the similarity between cluster centers. Then, it adjusts the similarity between each pair of instances by multiplying it by the similarity between the centers of the clusters to which the two instances respectively belong. In this way, if two instances are from different clusters, the similarity between them is reduced; otherwise, it is unchanged. Finally, LE-CAS performs graph-based dimensionality reduction (via Laplacian Eigenmaps) on the adjusted similarity. We conducted comprehensive empirical studies on UCI datasets; the results show that LE-CAS not only performs better than other relevant methods, but is also more robust to its input parameters.
(This article belongs to the Special Issue Clustering Algorithms and Their Applications)
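A compact sketch of this pipeline could look like the following; the Gaussian kernels, the shared width, and the use of k-means are illustrative choices on our part, not the paper's exact settings.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans

def le_cas(X, n_clusters, n_components, sigma=1.0):
    """Laplacian Eigenmaps on a clustering-adjusted similarity matrix."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(X)
    labels, centers = km.labels_, km.cluster_centers_
    # Gaussian similarities between instances and between cluster centers.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    c2 = ((centers[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    C = np.exp(-c2 / (2 * sigma ** 2))
    # Cross-cluster pairs are down-weighted by their centers' similarity;
    # same-cluster pairs are multiplied by C[c, c] = 1, i.e., left unchanged.
    W = W * C[labels[:, None], labels[None, :]]
    # Standard Laplacian Eigenmaps embedding: solve L v = lambda D v.
    D = np.diag(W.sum(axis=1))
    _, vecs = eigh(D - W, D)
    return vecs[:, 1:n_components + 1]   # drop the trivial constant eigenvector
```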
34 pages, 549 KiB  
Article
A Finite Regime Analysis of Information Set Decoding Algorithms
by Marco Baldi, Alessandro Barenghi, Franco Chiaraluce, Gerardo Pelosi and Paolo Santini
Algorithms 2019, 12(10), 209; https://doi.org/10.3390/a12100209 - 01 Oct 2019
Cited by 29 | Viewed by 4075
Abstract
Decoding of random linear block codes has long been exploited as a computationally hard problem on which it is possible to build secure asymmetric cryptosystems. In particular, both correcting an error-affected codeword and deriving the error vector corresponding to a given syndrome were proven to be equally difficult tasks. Since the pioneering work of Eugene Prange in the early 1960s, a significant research effort has been put into finding more efficient methods to solve the random code decoding problem through a family of algorithms known as information set decoding. The obtained improvements effectively reduce the overall complexity, which was shown to decrease asymptotically at each optimization while remaining substantially exponential in the number of errors to be either found or corrected. In this work, we provide a comprehensive survey of information set decoding techniques, providing finite-regime time and space complexities for them. We exploit these formulas to assess the effectiveness of the asymptotic speedups obtained by the improved information set decoding techniques when working with code parameters relevant for cryptographic purposes. We also delineate computational complexities taking into account the speedup achievable via quantum computers and similarly assess such speedups in the finite regime. To provide practical grounding for the choice of cryptographically relevant parameters, we employ as our validation suite the parameters chosen by the cryptosystems admitted to the second round of the ongoing standardization initiative promoted by the US National Institute of Standards and Technology.
(This article belongs to the Special Issue Coding Theory and Its Application)
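The flavor of such finite-regime estimates is captured by the original Prange algorithm: an iteration succeeds when the randomly chosen information set avoids all t error positions, and each iteration costs roughly one Gaussian elimination. The sketch below uses a deliberately crude n³ per-iteration cost model of our own, not the paper's refined formulas, and the example parameters are illustrative only.

```python
from math import lgamma, log

def log2_comb(n, k):
    """log2 of binomial(n, k), numerically safe for cryptographic sizes."""
    return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)) / log(2)

def prange_log2_work(n, k, t):
    """Expected iterations = C(n, k) / C(n - t, k); per-iteration cost ~ n^3."""
    log2_iterations = log2_comb(n, k) - log2_comb(n - t, k)
    return log2_iterations + 3 * log(n) / log(2)

# Illustrative code parameters only:
# print(prange_log2_work(n=3488, k=2720, t=64))
```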
14 pages, 1812 KiB  
Article
A Convex Optimization Algorithm for Electricity Pricing of Charging Stations
by Jing Zhang, Xiangpeng Zhan, Taoyong Li, Linru Jiang, Jun Yang, Yuanxing Zhang, Xiaohong Diao and Sining Han
Algorithms 2019, 12(10), 208; https://doi.org/10.3390/a12100208 - 01 Oct 2019
Cited by 3 | Viewed by 3667
Abstract
The problem of electricity pricing for charging stations is a multi-objective mixed-integer nonlinear program, and existing algorithms solve it inefficiently. In this paper, a convex optimization algorithm is proposed to obtain the optimal solution quickly. Firstly, the model is transformed into a convex optimization problem by second-order conic relaxation and the Karush–Kuhn–Tucker optimality conditions. Secondly, a polyhedral approximation method is applied to construct a mixed-integer linear program, which can be solved quickly by the branch-and-bound method. Finally, the model is solved many times to obtain the Pareto front, following the basic scalarization theorem. Based on an IEEE 33-bus distribution network model, simulation results show that the proposed algorithm obtains an exact global optimal solution quickly compared with a heuristic method.
(This article belongs to the Special Issue Algorithms in Convex Optimization and Applications)
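The second-order conic relaxation step has a standard form in distribution networks: the nonconvex branch-flow equality ℓ = (P² + Q²)/v is relaxed to the cone P² + Q² ≤ ℓ·v. Below is a minimal single-branch illustration in cvxpy, with toy voltage bounds and a fixed demand of our own choosing; the paper then approximates such cones by polyhedra to obtain a MILP, a step not shown here.

```python
import cvxpy as cp

P, Q, l, v = cp.Variable(), cp.Variable(), cp.Variable(), cp.Variable()
constraints = [
    # P^2 + Q^2 <= l * v, written as the standard second-order cone inequality
    cp.norm(cp.hstack([2 * P, 2 * Q, l - v]), 2) <= l + v,
    v >= 0.9, v <= 1.1,    # illustrative per-unit voltage bounds
    P == 0.3, Q == 0.1,    # illustrative fixed branch power flow
]
prob = cp.Problem(cp.Minimize(l), constraints)   # losses grow with l
prob.solve()
print(l.value, v.value)
```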
12 pages, 307 KiB  
Article
Recommending Links to Control Elections via Social Influence
by Federico Corò, Gianlorenzo D’Angelo and Yllka Velaj
Algorithms 2019, 12(10), 207; https://doi.org/10.3390/a12100207 - 01 Oct 2019
Cited by 5 | Viewed by 3274
Abstract
Political parties have recently learned that they must use social media campaigns along with advertising on traditional media to defeat their opponents. Before the campaign starts, it is important for a political party to establish and ensure its media presence, for example by enlarging its number of connections in the social network in order to reach a larger portion of users. Indeed, adding new connections between users increases the capability of a social network to spread information, which in turn can increase the retention rate and the number of new voters. In this work, we address the problem of selecting a fixed-size set of new connections to be added to a subset of voters such that, through their influence, the opinion of the network's users about a target candidate changes, maximizing the candidate's chances of winning the election. We provide a constant-factor approximation algorithm for this problem and experimentally show that, with few new links and little computational time, our algorithm is able to maximize the chances of making the target candidate win the election.
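Constant-factor guarantees for problems of this kind typically come from greedy selection under a monotone submodular objective. The sketch below shows that generic pattern; `influence(S)` stands in for a hypothetical oracle estimating the target candidate's expected support after adding the link set S, and is not the paper's procedure.

```python
def greedy_links(candidate_links, k, influence):
    """Greedily add k links, each time taking the link with the largest
    marginal gain; for a monotone submodular `influence` objective this
    achieves the classical (1 - 1/e)-style approximation guarantee."""
    chosen = set()
    for _ in range(k):
        best = max((e for e in candidate_links if e not in chosen),
                   key=lambda e: influence(chosen | {e}))
        chosen.add(best)
    return chosen
```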
17 pages, 1152 KiB  
Article
Self-Improving Generative Artificial Neural Network for Pseudorehearsal Incremental Class Learning
by Diego Mellado, Carolina Saavedra, Steren Chabert, Romina Torres and Rodrigo Salas
Algorithms 2019, 12(10), 206; https://doi.org/10.3390/a12100206 - 01 Oct 2019
Cited by 14 | Viewed by 4424
Abstract
Deep learning models are part of the family of artificial neural networks and, as such, suffer catastrophic interference when learning sequentially. In addition, most of these models have a rigid architecture that prevents the incremental learning of new classes. To overcome these drawbacks, we propose the Self-Improving Generative Artificial Neural Network (SIGANN), an end-to-end deep neural network system that can ease the catastrophic forgetting problem when learning new classes. In this method, we introduce a novel detection model that automatically detects samples of new classes, while an adversarial autoencoder is used to produce samples of previous classes. The system consists of three main modules: a classifier module implemented using a deep convolutional neural network, a generator module based on an adversarial autoencoder, and a novelty-detection module implemented using an OpenMax activation function. Using the EMNIST data set, the model was trained incrementally, starting with a small set of classes. The simulation results show that SIGANN can retain previous knowledge while incurring gradual forgetting of each learning sequence at a rate of about 7% per training step. Moreover, SIGANN can detect new classes that are hidden in the data with a median accuracy of 43% and, therefore, proceed with incremental class learning.
12 pages, 3151 KiB  
Article
Real-Time Conveyor Belt Deviation Detection Algorithm Based on Multi-Scale Feature Fusion Network
by Chan Zeng, Junfeng Zheng and Jiangyun Li
Algorithms 2019, 12(10), 205; https://doi.org/10.3390/a12100205 - 26 Sep 2019
Cited by 21 | Viewed by 5756
Abstract
The conveyor belt is an indispensable piece of conveying equipment in a mine, and belt deviation, caused by sticky material on the rollers and uneven load distribution, is the most common failure during operation. In this paper, a real-time conveyor belt deviation detection algorithm based on a multi-scale feature fusion network is proposed, which mainly includes two parts: the feature extraction module and the deviation detection module. The feature extraction module uses a multi-scale feature fusion network structure to fuse low-level features, rich in position and detail information, with high-level features carrying stronger semantic information, improving network detection performance. Depthwise separable convolutions are used to achieve real-time detection. The deviation detection module identifies and monitors the deviation fault by calculating the offset of the conveyor belt. In particular, a new weighted loss function is designed to optimize the network and to improve the detection of the conveyor belt edge. To evaluate the effectiveness of the proposed method, the Canny algorithm, FCNs, UNet, and Deeplab v3 networks are selected for comparison. The experimental results show that the proposed algorithm achieves 78.92% pixel accuracy (PA) and reaches 13.4 FPS (frames per second) with an error of less than 3.2 mm, outperforming the other four algorithms.
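The real-time ingredient here is the depthwise separable convolution, which factorizes a standard convolution into a per-channel spatial filter followed by a 1×1 channel mixer. A generic PyTorch building block follows; the layer sizes and the BatchNorm/ReLU arrangement are illustrative, not the paper's network definition.

```python
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise + pointwise convolution: the factorization used to keep
    multi-scale fusion networks fast enough for real-time detection."""
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size, stride,
                                   padding=kernel_size // 2, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))
```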
16 pages, 1752 KiB  
Article
Simulation on Cooperative Changeover of Production Team Using Hybrid Modeling Method
by Xiaodong Zhang, Yiqi Wang and Bingcun Xu
Algorithms 2019, 12(10), 204; https://doi.org/10.3390/a12100204 - 24 Sep 2019
Cited by 2 | Viewed by 2782
Abstract
In a multi-variety, small-quantity manufacturing environment, changeover operations occur frequently, and the cooperative changeover method is often used to shorten the changeover time and balance the workload. However, more workers and tasks are affected by cooperative changeover, so its effectiveness depends on other factors, such as the scope of cooperation and the proportion of newly introduced products. For this reason, this paper proposes a hybrid modeling method to support the simulation study of a production team's cooperative changeover strategies under various environments. Firstly, a hybrid simulation modeling method combining multi-agent systems and discrete events is introduced. Secondly, according to the scope of cooperation, this paper puts forward four kinds of cooperative changeover strategies and describes the cooperative line-changing behavior of operators. Finally, based on the changeover strategies, the proposed simulation method is applied to a production cell. Four production scenarios are considered according to the proportion of newly introduced parts. The performance of the various cooperative strategies in the different production scenarios is simulated, and the statistical test results show that an optimal or satisfactory strategy can be determined in each production scenario. Additionally, the effectiveness and practicability of the proposed modeling method are verified.
38 pages, 5169 KiB  
Article
Data-Driven Predictive Modeling of Neuronal Dynamics Using Long Short-Term Memory
by Benjamin Plaster and Gautam Kumar
Algorithms 2019, 12(10), 203; https://doi.org/10.3390/a12100203 - 24 Sep 2019
Cited by 4 | Viewed by 3667
Abstract
Modeling brain dynamics to better understand and control complex behaviors underlying various cognitive brain functions has been of interest to engineers, mathematicians, and physicists over the last several decades. Motivated by the goal of developing computationally efficient models of brain dynamics for use in designing control-theoretic neurostimulation strategies, we have developed a novel data-driven approach based on a long short-term memory (LSTM) neural network architecture to predict the temporal dynamics of complex systems over extended time horizons. In contrast to recent LSTM-based dynamical modeling approaches that make use of multi-layer perceptrons or linear combination layers as output layers, our architecture uses a single fully connected output layer and reversed-order sequence-to-sequence mapping to improve short time-horizon prediction accuracy and to make multi-timestep predictions of dynamical behaviors. We demonstrate the efficacy of our approach in reconstructing the regular spiking to bursting dynamics exhibited by an experimentally validated 9-dimensional Hodgkin–Huxley model of hippocampal CA1 pyramidal neurons. Through simulations, we show that our LSTM neural network can predict the multi-time-scale temporal dynamics underlying various spiking patterns with reasonable accuracy. Moreover, our results show that the predictions improve with increasing predictive time horizon in the multi-timestep deep LSTM neural network.
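The two architectural choices named in the abstract (a single fully connected output layer and reversed-order sequence-to-sequence mapping) can be sketched as follows; the hidden size and the exact placement of the reversal are our illustrative assumptions.

```python
import torch
import torch.nn as nn

class ReversedSeq2SeqLSTM(nn.Module):
    """LSTM followed by a single fully connected output layer; the predicted
    sequence is emitted in reversed temporal order."""
    def __init__(self, state_dim=9, hidden_size=128):
        super().__init__()
        self.lstm = nn.LSTM(state_dim, hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, state_dim)   # single FC output layer

    def forward(self, x):               # x: (batch, T, state_dim)
        h, _ = self.lstm(x)             # hidden states for every timestep
        y = self.fc(h)                  # (batch, T, state_dim)
        return torch.flip(y, dims=[1])  # reversed-order target sequence
```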
3 pages, 149 KiB  
Editorial
Evolutionary Algorithms in Health Technologies
by Sai Ho Ling and Hak Keung Lam
Algorithms 2019, 12(10), 202; https://doi.org/10.3390/a12100202 - 24 Sep 2019
Cited by 3 | Viewed by 3011
Abstract
Health technology research brings together complementary interdisciplinary research skills in the development of innovative health technology applications. Recent research indicates that artificial intelligence can help achieve outstanding performance for particular types of health technology applications. Evolutionary algorithms form one of the subfields of artificial intelligence and are effective global optimization methods inspired by biological evolution. With the rapidly growing complexity of design issues and methodologies, and a higher demand for quality health technology applications, the development of evolutionary computation algorithms for health has become timely and highly relevant. This Special Issue brings together researchers to report recent findings on evolutionary algorithms in health technology.
(This article belongs to the Special Issue Evolutionary Algorithms in Health Technologies)
19 pages, 448 KiB  
Article
GASP: Genetic Algorithms for Service Placement in Fog Computing Systems
by Claudia Canali and Riccardo Lancellotti
Algorithms 2019, 12(10), 201; https://doi.org/10.3390/a12100201 - 21 Sep 2019
Cited by 47 | Viewed by 5328
Abstract
Fog computing is becoming popular as a solution to support applications based on geographically distributed sensors that produce huge volumes of data to be processed and filtered under response-time constraints. In this scenario, typical of a smart city environment, the traditional cloud paradigm, with a few powerful data centers located far away from the data sources, becomes inadequate. The fog computing paradigm, which provides a distributed infrastructure of nodes placed close to the data sources, is a better solution for performing filtering, aggregation, and preprocessing of incoming data streams, reducing the experienced latency and increasing the overall scalability. However, many issues remain regarding the efficient management of a fog computing architecture, such as distributing the data streams coming from the sensors over the fog nodes so as to minimize the experienced latency. The contribution of this paper is two-fold. First, we present an optimization model for the problem of mapping data streams over fog nodes, considering not only the current load of the fog nodes but also the communication latency between sensors and fog nodes. Second, to address the complexity of the problem, we present a scalable heuristic based on genetic algorithms. We carried out a set of experiments based on a realistic smart city scenario: the results show that the performance of the proposed heuristic is comparable with that achieved by solving the optimization model. We then compared different genetic evolution strategies and operators, a comparison that identifies uniform crossover as the best option. Finally, a wide sensitivity analysis shows the stability of the heuristic's performance with respect to its main parameters.
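For reference, uniform crossover on a placement chromosome looks like the sketch below; the encoding (gene i = fog node assigned to data stream i) is assumed here for illustration.

```python
import random

def uniform_crossover(parent_a, parent_b):
    """Uniform crossover: each gene is inherited from either parent with
    probability 1/2, producing two complementary children."""
    child_a, child_b = [], []
    for ga, gb in zip(parent_a, parent_b):
        if random.random() < 0.5:
            child_a.append(ga)
            child_b.append(gb)
        else:
            child_a.append(gb)
            child_b.append(ga)
    return child_a, child_b
```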
20 pages, 516 KiB  
Article
A Machine Learning Approach to Algorithm Selection for Exact Computation of Treewidth
by Borislav Slavchev, Evelina Masliankova and Steven Kelk
Algorithms 2019, 12(10), 200; https://doi.org/10.3390/a12100200 - 20 Sep 2019
Viewed by 4879
Abstract
We present an algorithm selection framework based on machine learning for the exact computation of treewidth, an intensively studied graph parameter that is NP-hard to compute. Specifically, we analyse the comparative performance of three state-of-the-art exact treewidth algorithms on a wide array of graphs and use this information to predict, on a graph-by-graph basis, which of the algorithms will compute the treewidth the quickest. Experimental results show that the proposed meta-algorithm outperforms existing methods on benchmark instances on all three performance metrics we use: in a nutshell, it computes treewidth faster than any single algorithm in isolation. We analyse our results to derive insights about graph feature importance and the strengths and weaknesses of the algorithms we used. Our results are further evidence of the advantages to be gained by strategically blending machine learning and combinatorial optimisation approaches within a hybrid algorithmic framework. The machine learning model we use is intentionally simple, to emphasise that a speedup can already be obtained without engaging in the full complexities of machine learning engineering. We reflect on how future work could extend this simple but effective proof of concept by deploying more sophisticated machine learning models.
(This article belongs to the Special Issue New Frontiers in Parameterized Complexity and Algorithms)
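The skeleton of such a selector is small. In the sketch below, the feature set, file names, and the choice of classifier are placeholders of our own (the paper deliberately uses a simple model, not necessarily this one); the pattern is simply "predict the fastest solver, then run only it".

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Placeholder benchmark data: one feature row per graph (e.g., vertex count,
# edge count, density, degree statistics) and the index of the solver that
# finished first on that graph.
X = np.load("graph_features.npy")     # hypothetical file
y = np.load("fastest_solver.npy")     # hypothetical file, values in {0, 1, 2}

selector = RandomForestClassifier(n_estimators=100).fit(X, y)

def choose_solver(features, solvers):
    """Meta-algorithm: predict the fastest exact solver for this graph."""
    return solvers[int(selector.predict(features.reshape(1, -1))[0])]
```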