
Table of Contents

Algorithms, Volume 12, Issue 2 (February 2019)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • PDF is the official format for papers, which are published in both HTML and PDF forms.
Cover Story: Developments in embedded computing and machine learning can help build advanced wearable devices [...]
Displaying articles 1-23
Open Access Article Randomized Parameterized Algorithms for the Kidney Exchange Problem
Algorithms 2019, 12(2), 50; https://doi.org/10.3390/a12020050
Received: 28 December 2018 / Revised: 20 February 2019 / Accepted: 21 February 2019 / Published: 25 February 2019
Viewed by 609 | PDF Full-text (784 KB) | HTML Full-text | XML Full-text
Abstract
In order to increase the potential kidney transplants between patients and their incompatible donors, kidney exchange programs have been created in many countries. In these programs, designing algorithms for the kidney exchange problem plays a critical role. The graph-theoretic model of the kidney exchange problem is to find a maximum-weight packing of vertex-disjoint cycles and chains in a given weighted digraph. In general, the length of cycles is at most a given constant L (typically 2 ≤ L ≤ 5), and the objective function corresponds to maximizing the number of possible kidney transplants. In this paper, we study the parameterized complexity of, and randomized algorithms for, the kidney exchange problem without chains from a theoretical perspective. We construct two different parameterized models of the kidney exchange problem for the two cases L = 3 and L ≥ 3, and propose two randomized parameterized algorithms based on the random partitioning technique and the randomized algebraic technique, respectively. Full article
(This article belongs to the Special Issue New Frontiers in Parameterized Complexity and Algorithms)
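The random partitioning technique behind the L = 3 case can be illustrated with a toy sketch (the function names and the unweighted setting are illustrative assumptions, not the paper's algorithm):

```python
import itertools
import random

def three_cycles(arcs):
    """Enumerate the vertex sets of directed 3-cycles in a digraph
    given as a set of arcs (u, v)."""
    nodes = {u for arc in arcs for u in arc}
    found = set()
    for a, b, c in itertools.permutations(nodes, 3):
        if (a, b) in arcs and (b, c) in arcs and (c, a) in arcs:
            found.add(frozenset((a, b, c)))
    return found

def colourful_cycles(cycles, rng):
    """One round of random partitioning: colour each vertex with one of
    three colours and keep the cycles receiving all three colours.  A
    fixed 3-cycle survives with probability 3!/3**3 = 2/9, so repeating
    O((9/2)**k) rounds hits k disjoint cycles with high probability."""
    nodes = {v for cycle in cycles for v in cycle}
    colour = {v: rng.randrange(3) for v in nodes}
    return [c for c in cycles if {colour[v] for v in c} == {0, 1, 2}]
```

Packing disjoint colourful cycles greedily in each round gives the flavour of the randomized fixed-parameter behaviour the abstract describes.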
Open Access Article Secrecy Control of Wireless Networks with Finite Encoding Blocklength
Algorithms 2019, 12(2), 49; https://doi.org/10.3390/a12020049
Received: 28 December 2018 / Revised: 15 February 2019 / Accepted: 20 February 2019 / Published: 25 February 2019
Viewed by 534 | PDF Full-text (346 KB) | HTML Full-text | XML Full-text
Abstract
We consider wireless multi-hop networks in which each node aims to securely transmit a message. To guarantee secure transmission, we employ an independent randomization encoding strategy to encode the confidential message, and we aim to maximize the network utility. Based on a finite-blocklength secrecy codeword strategy, we develop an improved control algorithm subject to network stability and secrecy outage requirements. On the basis of the Lyapunov optimization method, we design a control algorithm that decomposes into end-to-end secrecy encoding, flow control and routing scheduling. The simulation results show that the proposed algorithm can achieve a utility arbitrarily close to the optimal value. Finally, the performance of the proposed control policy is validated under various network conditions. Full article
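The Lyapunov drift-plus-penalty decomposition underlying such control algorithms can be sketched for a single queue (a generic textbook form with an assumed logarithmic utility, not the paper's secrecy-aware algorithm):

```python
def admit_rate(Q, V):
    """Drift-plus-penalty flow control for one queue: admit the rate x
    maximizing V*log(1+x) - Q*x, whose closed form is max(0, V/Q - 1).
    A larger V chases utility; a large backlog Q throttles admission."""
    return max(0.0, V / Q - 1.0)

def step_queue(Q, V, service):
    """One slot of the queue dynamics Q <- max(Q + arrivals - service, 0),
    with arrivals chosen by the flow controller above."""
    return max(Q + admit_rate(Q, V) - service, 0.0)
```

Iterating `step_queue` keeps the queue stable while the time-average utility approaches within O(1/V) of optimal, which is the sense in which the abstract's result is "arbitrarily close" to the optimum.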
Open Access Article Selective Offloading by Exploiting ARIMA-BP for Energy Optimization in Mobile Edge Computing Networks
Algorithms 2019, 12(2), 48; https://doi.org/10.3390/a12020048
Received: 21 January 2019 / Revised: 12 February 2019 / Accepted: 19 February 2019 / Published: 25 February 2019
Viewed by 583 | PDF Full-text (962 KB) | HTML Full-text | XML Full-text
Abstract
Mobile Edge Computing (MEC) is an innovative technique that provides cloud computing close to mobile devices at the edge of the network. Based on the MEC architecture, this paper proposes an ARIMA-BP-based Selective Offloading (ABSO) strategy, which minimizes the energy consumption of mobile devices while meeting their delay requirements. In ABSO, we exploit an ARIMA-BP model to estimate the computation capacity of the edge cloud, and then design a Selective Offloading Algorithm to obtain the offloading strategy. Simulation results reveal that ABSO noticeably decreases the energy consumption of mobile devices in comparison with other offloading methods. Full article
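A minimal sketch of a selective-offloading decision in this spirit (the energy/timing models, parameter names and constants are illustrative assumptions, not the ABSO formulation):

```python
def should_offload(cycles, f_local, f_edge, bits, rate, p_tx,
                   kappa=1e-26, deadline=0.1):
    """Offload iff transmitting costs less energy than executing locally
    AND the offloaded task still meets its delay requirement."""
    e_local = kappa * cycles * f_local ** 2    # CMOS dynamic-energy model
    t_offload = bits / rate + cycles / f_edge  # upload time + edge execution
    e_offload = p_tx * bits / rate             # radio energy while uploading
    return e_offload < e_local and t_offload <= deadline
```

The role of the ARIMA-BP predictor in the paper is precisely to supply a forecast of `f_edge` (the edge cloud's available capacity) before this kind of decision is made.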
Open Access Article Real-time Conflict Resolution Algorithm for Multi-UAV Based on Model Predict Control
Algorithms 2019, 12(2), 47; https://doi.org/10.3390/a12020047
Received: 16 January 2019 / Revised: 18 February 2019 / Accepted: 20 February 2019 / Published: 22 February 2019
Viewed by 693 | PDF Full-text (3518 KB) | HTML Full-text | XML Full-text
Abstract
A real-time conflict resolution algorithm based on model predictive control (MPC) is introduced to address the flight conflict resolution problem in multi-UAV scenarios. Using a low-level controller, the UAV dynamic equations are abstracted into simpler unicycle kinematic equations. Neighboring UAVs exchange their predicted trajectories at each sample time to predict conflicts. Then, under the predesignated resolution rule and strategy, decentralized coordination and cooperation are performed to resolve the predicted conflicts. The controller structure of the distributed nonlinear model predictive control (DNMPC) is designed to predict potential conflicts and calculate control variables for each UAV. Numerical simulations of multi-UAV coordination verify the performance of the proposed algorithm. The results demonstrate that the proposed algorithm can resolve conflicts in real time without causing further conflicts. Full article
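The abstracted unicycle kinematics and the trajectory-exchange conflict check can be sketched as follows (function names and the constant-input prediction are simplifying assumptions; the paper's DNMPC optimizes the inputs over the horizon):

```python
import math

def predict_unicycle(x, y, theta, v, omega, dt, horizon):
    """Forward-simulate the unicycle model (x' = v cos th, y' = v sin th,
    th' = omega) over an MPC-style horizon with constant inputs."""
    traj = []
    for _ in range(horizon):
        x += v * math.cos(theta) * dt
        y += v * math.sin(theta) * dt
        theta += omega * dt
        traj.append((x, y, theta))
    return traj

def first_conflict(traj_a, traj_b, d_safe):
    """Return the first prediction step at which two exchanged trajectories
    come within the safety distance, or None if no conflict is predicted."""
    for k, ((xa, ya, _), (xb, yb, _)) in enumerate(zip(traj_a, traj_b)):
        if math.hypot(xa - xb, ya - yb) < d_safe:
            return k
    return None
```

Two head-on UAVs at unit speed, 10 m apart, are predicted to conflict at the fifth step, at which point the resolution rule would adjust their control variables.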
Open Access Article The Prediction of Intrinsically Disordered Proteins Based on Feature Selection
Algorithms 2019, 12(2), 46; https://doi.org/10.3390/a12020046
Received: 28 December 2018 / Revised: 14 February 2019 / Accepted: 18 February 2019 / Published: 20 February 2019
Viewed by 583 | PDF Full-text (908 KB) | HTML Full-text | XML Full-text
Abstract
Intrinsically disordered proteins perform a variety of important biological functions, which makes their accurate prediction useful for a wide range of applications. We develop a scheme for predicting intrinsically disordered proteins employing 35 features: eight structural properties, seven physicochemical properties and 20 items of evolutionary information. In particular, the scheme includes a preprocessing procedure that greatly reduces the input features. Using two different windows, the preprocessed data, containing not only the properties of the surroundings of the target residue but also the properties of the target residue itself, are fed into a multi-layer perceptron neural network. The Adam algorithm for back propagation, together with dropout to avoid overfitting, is used during training. Training and testing of our procedure are performed on the dataset DIS803 from the DisProt database. The simulation results show that the performance of our scheme is competitive with ESpritz and IsUnstruct. Full article
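The windowed input construction around a target residue can be sketched as follows (a hypothetical helper; the actual scheme combines two window sizes over 35 preprocessed features):

```python
def window_features(props, i, w):
    """Collect per-residue property vectors in a window of half-width w
    around target residue i, zero-padding past the sequence ends, then
    append the target residue's own vector.  The flattened result is
    the input vector fed to the multi-layer perceptron."""
    n, d = len(props), len(props[0])
    window = []
    for j in range(i - w, i + w + 1):
        window.extend(props[j] if 0 <= j < n else [0.0] * d)
    return window + list(props[i])
```

For a window of half-width w and d properties per residue, the network input has (2w + 2) * d components per target residue.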
Open Access Article A Heuristic Approach for a Real-World Electric Vehicle Routing Problem
Algorithms 2019, 12(2), 45; https://doi.org/10.3390/a12020045
Received: 8 January 2019 / Revised: 12 February 2019 / Accepted: 13 February 2019 / Published: 20 February 2019
Viewed by 722 | PDF Full-text (996 KB) | HTML Full-text | XML Full-text
Abstract
To develop a non-polluting and sustainable city, urban administrators encourage logistics companies to use electric vehicles instead of conventional (i.e., fuel-based) vehicles for transportation services. However, electric energy-based limitations pose a new challenge in designing reasonable visiting routes that are essential for the daily operations of companies. Therefore, this paper investigates a real-world electric vehicle routing problem (VRP) raised by a logistics company. The problem combines the features of the capacitated VRP, the VRP with time windows, the heterogeneous fleet VRP, the multi-trip VRP, and the electric VRP with charging stations. To solve such a complicated problem, a heuristic approach based on the adaptive large neighborhood search (ALNS) and integer programming is proposed in this paper. Specifically, a charging station adjustment heuristic and a departure time adjustment heuristic are devised to decrease the total operational cost. Furthermore, the best solution obtained by the ALNS is improved by integer programming. Twenty instances generated from real-world data were used to validate the effectiveness of the proposed algorithm. The results demonstrate that using our algorithm can save 7.52% of operational cost. Full article
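The adaptive operator selection at the heart of ALNS can be sketched as follows (generic ALNS machinery under assumed names and a simple exponential-smoothing score, not the paper's exact scheme):

```python
import random

def select_operator(weights, rng):
    """Roulette-wheel selection of a destroy/repair operator in
    proportion to its adaptive weight."""
    total = sum(weights.values())
    r = rng.uniform(0, total)
    for op, w in weights.items():
        r -= w
        if r <= 0:
            return op
    return op  # numerical edge case: return the last operator

def update_weight(weights, op, score, decay=0.8):
    """Blend the operator's past weight with its recent success score,
    so operators that improved the solution get picked more often."""
    weights[op] = decay * weights[op] + (1 - decay) * score
```

In each ALNS iteration a destroy and a repair operator are drawn this way, the solution is perturbed and repaired, and the chosen operators' weights are updated by how much they improved the incumbent.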
Open Access Article Integrated Speed Planning and Friction Coefficient Estimation Algorithm for Intelligent Electric Vehicles
Algorithms 2019, 12(2), 44; https://doi.org/10.3390/a12020044
Received: 16 December 2018 / Revised: 31 January 2019 / Accepted: 12 February 2019 / Published: 20 February 2019
Viewed by 618 | PDF Full-text (822 KB) | HTML Full-text | XML Full-text
Abstract
To improve the safety of intelligent electric vehicles and avoid side slipping on curved roads with changing friction coefficients, an integrated speed planning and friction coefficient estimation algorithm is proposed. With this algorithm, the speeds of intelligent electric vehicles can be planned online using estimated road friction coefficients to avoid lane departures. When a decrease in the friction coefficient is detected on a curved road with a large curvature, the algorithm will plan a low and safe speed to avoid side slipping. When a normal friction coefficient is detected, the algorithm will plan a higher speed for normal driving. Simulations using MATLAB and CarSim have been performed to demonstrate the effectiveness of the designed algorithm. The simulation results suggest that the proposed algorithm is applicable to speed planning on curved roads with changing friction coefficients. Full article
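The no-slip bound that such speed planning enforces on a curve can be sketched with a standard point-mass approximation (the margin value and function name are assumptions, not the paper's planner):

```python
import math

def safe_speed(mu, curvature, g=9.81, margin=0.9):
    """Upper speed bound from the no-side-slip condition
    v**2 * curvature <= mu * g, scaled by a safety margin.
    Re-evaluated online as the friction estimate mu changes."""
    if curvature <= 0:
        return float("inf")  # straight road: no lateral-slip limit
    return margin * math.sqrt(mu * g / curvature)
```

When the estimated friction coefficient drops (say from 0.8 to 0.4 on a wet curve), the planned speed falls by a factor of sqrt(2), which is the "low and safe speed" behaviour the abstract describes.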
Open Access Article An Improved Genetic Algorithm for Emergency Decision Making under Resource Constraints Based on Prospect Theory
Algorithms 2019, 12(2), 43; https://doi.org/10.3390/a12020043
Received: 10 January 2019 / Revised: 12 February 2019 / Accepted: 14 February 2019 / Published: 18 February 2019
Viewed by 722 | PDF Full-text (483 KB) | HTML Full-text | XML Full-text
Abstract
The study of emergency decision making (EDM) is helpful to reduce the difficulty of decision making and improve the efficiency of decision makers (DMs). The purpose of this paper is to propose an innovative genetic algorithm for emergency decision making under resource constraints. Firstly, this paper analyzes the emergency situation under resource constraints; then, according to prospect theory (PT), we propose an improved value measurement function and an emergency loss-level weighting algorithm. Secondly, we assign weights to all emergency locations using the best–worst method (BWM). Then, an improved genetic algorithm (GA) based on PT is established to solve the problem of emergency resource allocation between multiple emergency locations under resource constraints. Finally, the analysis of an example shows that the algorithm can shorten the decision-making time and provide a better decision scheme, which has practical significance. Full article
(This article belongs to the Special Issue Algorithms for Decision Making)
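For reference, the standard prospect-theory value function that the paper's improved variant builds on has a simple closed form (Tversky–Kahneman parameter defaults assumed):

```python
def pt_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect-theory value of an outcome x relative to a reference
    point: concave for gains, convex for losses, and loss-averse
    (losses weigh lam times more than equal gains)."""
    return x ** alpha if x >= 0 else -lam * (-x) ** beta
```

In an EDM setting, allocations to each emergency location are scored through this value function rather than by raw outcomes, so the GA's fitness reflects decision makers' asymmetric sensitivity to losses.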
Open Access Article Design Optimization of a VX Gasket Structure for a Subsea Connector Based on the Kriging Surrogate Model-NSGA-II Algorithm Considering the Load Randomness
Algorithms 2019, 12(2), 42; https://doi.org/10.3390/a12020042
Received: 5 January 2019 / Revised: 12 February 2019 / Accepted: 15 February 2019 / Published: 18 February 2019
Viewed by 656 | PDF Full-text (5450 KB) | HTML Full-text | XML Full-text
Abstract
The VX gasket is an important part of the wellhead connector for a subsea Christmas tree. Optimization of the gasket’s structure can improve the connector’s sealing performance. In this paper, we develop an optimization approach for the VX gasket structure, taking working load randomness into consideration, based on the Kriging surrogate model-NSGA-II algorithm. To guarantee the simulation accuracy, a random finite element (R-FE) model of the connector’s sealing structure was constructed to calculate the gasket’s sealing performance under random working load conditions. The working load’s randomness was simulated using the Gaussian distribution function. To improve the efficiency of calculating the sealing performance for individuals within the initial populations, Kriging surrogate models were constructed; these accelerated the optimization, with the training samples obtained using a design-of-experiments method and the constructed R-FE model. The effectiveness of the presented approach was verified on a subsea Christmas tree wellhead connector matching the 20'' casing head. The results indicated that the proposed method is effective for VX gasket structure optimization in subsea connectors, and that efficiency was significantly enhanced compared to the traditional FE method. Full article
Open Access Article Computation of Compact Distributions of Discrete Elements
Algorithms 2019, 12(2), 41; https://doi.org/10.3390/a12020041
Received: 29 December 2018 / Revised: 7 February 2019 / Accepted: 13 February 2019 / Published: 18 February 2019
Viewed by 633 | PDF Full-text (20199 KB) | HTML Full-text | XML Full-text
Abstract
In our daily lives, many plane patterns can be regarded as a compact distribution of a number of elements with certain shapes, like the classic mosaic pattern. To synthesize this kind of pattern, the basic problem is, given graphical elements of certain shapes, to distribute a large number of these elements within a plane region in a random yet compact way. This is not easy to achieve because it not only involves complicated adjacency calculations, but is also closely related to the shapes of the elements. This paper proposes an approach that can effectively and quickly synthesize compact distributions of elements of a variety of shapes. The primary idea is that, with the seed points and distribution region given, a Centroidal Voronoi Tessellation (CVT) of the region is generated by iterative relaxation; the CVT partitions the distribution area into small Voronoi regions, each representing the space of one element, to achieve a compact distribution of all the elements. During the generation of the Voronoi diagram, we adopt various distance metrics to control the shapes of the generated Voronoi regions, and thereby achieve compact distributions of elements of different shapes. Additionally, approaches are introduced to control the sizes and directions of the Voronoi regions, generating element distributions with size and direction variations that enrich the effect of the compact distributions. Moreover, to increase the synthesis efficiency, the time-consuming Voronoi diagram generation was converted into a graphical rendering process, increasing the speed of synthesis. This paper is an exploration of compact element distribution and also has application value in fields such as mosaic pattern synthesis. Full article
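The iterative relaxation that produces a CVT is Lloyd's algorithm; one step can be sketched as follows (a plain point-sampled Euclidean version, without the paper's shape-controlling distance metrics):

```python
def lloyd_step(seeds, samples):
    """One Lloyd relaxation step toward a centroidal Voronoi
    tessellation (CVT): assign each sample point to its nearest seed,
    then move every seed to the centroid of the samples it captured.
    Iterating converges to the CVT whose cells host the elements."""
    regions = [[] for _ in seeds]
    for p in samples:
        nearest = min(range(len(seeds)),
                      key=lambda i: sum((a - b) ** 2
                                        for a, b in zip(seeds[i], p)))
        regions[nearest].append(p)
    return [tuple(sum(coord) / len(pts) for coord in zip(*pts)) if pts else s
            for s, pts in zip(seeds, regions)]
```

Swapping the squared Euclidean distance for another metric (e.g. Manhattan or anisotropic distances) changes the cell shapes, which is how the approach controls the shapes of the distributed elements.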
Open Access Article An INS-UWB Based Collision Avoidance System for AGV
Algorithms 2019, 12(2), 40; https://doi.org/10.3390/a12020040
Received: 20 December 2018 / Revised: 9 February 2019 / Accepted: 13 February 2019 / Published: 18 February 2019
Viewed by 636 | PDF Full-text (1157 KB) | HTML Full-text | XML Full-text
Abstract
As a highly automated carrying vehicle, the automated guided vehicle (AGV) has been widely applied in various industrial areas. Collision avoidance of AGVs remains a problem in factories, and current solutions such as inertial and laser guiding have low flexibility and high environmental requirements. An INS (inertial navigation system)-UWB (ultra-wide band) based AGV collision avoidance system is introduced to improve the safety and flexibility of AGVs in factories. An electronic map of the factory is established and UWB anchor nodes are deployed to realize accurate positioning. An extended Kalman filter (EKF) scheme that combines UWB with INS data is used to improve the localization accuracy. The current location of an AGV and its motion state are used to predict its next position, decrease the effect of the AGV's control delay and avoid collisions among AGVs. Finally, experiments show that the EKF scheme yields accurate position estimates and that collisions among AGVs can be detected and avoided in time. Full article
(This article belongs to the Special Issue Modeling Computing and Data Handling for Marine Transportation)
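The INS/UWB fusion idea can be sketched with a scalar Kalman update (a 1-D, position-only toy with assumed noise values, not the paper's full EKF state model):

```python
def kf_fuse(x, P, a, dt, z, q=0.01, r=0.04):
    """Minimal 1-D Kalman fusion in the spirit of INS/UWB integration:
    predict position from the INS acceleration, then correct with a
    UWB-derived position fix z."""
    # predict: dead-reckon with the inertial measurement
    x_pred = x + 0.5 * a * dt * dt
    P_pred = P + q                    # process noise inflates uncertainty
    # update: blend in the UWB fix by the Kalman gain
    K = P_pred / (P_pred + r)
    x_new = x_pred + K * (z - x_pred)
    P_new = (1.0 - K) * P_pred
    return x_new, P_new
```

The INS prediction bridges the gap between UWB fixes (and absorbs the control delay the abstract mentions), while each UWB update pulls the drifting inertial estimate back and shrinks the covariance.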
Open Access Article A Hybrid Adaptive Large Neighborhood Heuristic for a Real-Life Dial-a-Ride Problem
Algorithms 2019, 12(2), 39; https://doi.org/10.3390/a12020039
Received: 15 December 2018 / Revised: 13 February 2019 / Accepted: 14 February 2019 / Published: 16 February 2019
Viewed by 686 | PDF Full-text (2274 KB) | HTML Full-text | XML Full-text
Abstract
The transportation of elderly and impaired people is commonly solved as a Dial-A-Ride Problem (DARP). The DARP aims to design pick-up and delivery vehicle routing schedules. Its main objective is to accommodate as many users as possible with a minimum operation cost. It adds realistic precedence and transit time constraints on the pairing of vehicles and customers. This paper tackles the DARP with time windows (DARPTW) from a new and innovative angle as it combines hybridization techniques with an adaptive large neighborhood search heuristic algorithm. The main objective is to improve the overall real-life performance of vehicle routing operations. Real-life data are refined and fed to a hybrid adaptive large neighborhood search (Hybrid-ALNS) algorithm which provides a near-optimal routing solution. The computational results on real-life instances, in the Canadian city of Vancouver and its region, and DARPTW benchmark instances show the potential improvements achieved by the proposed heuristic and its adaptability. Full article
(This article belongs to the Special Issue Algorithms for Decision Making)
Open Access Article A Two-Level Rolling Optimization Model for Real-time Adaptive Signal Control
Algorithms 2019, 12(2), 38; https://doi.org/10.3390/a12020038
Received: 4 January 2019 / Revised: 12 February 2019 / Accepted: 12 February 2019 / Published: 15 February 2019
Viewed by 661 | PDF Full-text (2068 KB) | HTML Full-text | XML Full-text
Abstract
Recently, dynamic traffic flow prediction models have increasingly been developed in a connected vehicle environment, which will be conducive to the development of more advanced traffic signal control systems. This paper proposes a rolling optimization model for real-time adaptive signal control based on a dynamic traffic flow model. The proposed method consists of two levels, i.e., barrier group and phase. The upper level optimizes the length of the barrier group based on dynamic programming. The lower level optimizes the signal phase lengths with the objective of minimizing vehicle delay. Then, to capture the dynamic traffic flow, a rolling strategy is developed based on a real-time traffic flow prediction model. Finally, the proposed method is compared to the Controlled Optimization of Phases (COP) algorithm in a simulation experiment. The results show that the average vehicle delay is significantly reduced, by as much as 17.95%, using the proposed method. Full article
Open Access Article Stream Data Load Prediction for Resource Scaling Using Online Support Vector Regression
Algorithms 2019, 12(2), 37; https://doi.org/10.3390/a12020037
Received: 29 December 2018 / Revised: 2 February 2019 / Accepted: 9 February 2019 / Published: 14 February 2019
Viewed by 656 | PDF Full-text (1345 KB) | HTML Full-text | XML Full-text
Abstract
A distributed data stream processing system handles real-time, changeable and sudden streaming data loads. Its elastic resource allocation has become a fundamental and challenging problem, as a fixed strategy will result in wasted resources or a reduction in QoS (quality of service). Spark Streaming, an emerging system, processes real-time stream analytics using a micro-batch approach. In this paper, we first propose an improved SVR (support vector regression) based stream data load prediction scheme. Then, we design a Spark-based maximum sustainable throughput of time window (MSTW) performance model to find the optimized number of virtual machines. Finally, we present a resource scaling algorithm, TWRES (time window resource elasticity scaling algorithm), with the MSTW constraint and streaming data load prediction. The evaluation results show that TWRES improves resource utilization and mitigates SLA (service level agreement) violations. Full article
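A scaling rule in the spirit of the MSTW model can be sketched as follows (the function name, headroom factor and per-VM throughput abstraction are assumptions, not the TWRES algorithm):

```python
import math

def extra_vms(predicted_load, mstw_per_vm, current_vms, headroom=1.2):
    """Provision enough VMs that the predicted load (with headroom)
    stays under the cluster's maximum sustainable throughput for the
    time window; returns how many VMs to add (0 if none needed)."""
    needed = math.ceil(headroom * predicted_load / mstw_per_vm)
    return max(needed - current_vms, 0)
```

Here `predicted_load` would come from the online SVR predictor, so scaling happens ahead of a load surge instead of after an SLA violation is observed.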
Open Access Article Conjugate Gradient Hard Thresholding Pursuit Algorithm for Sparse Signal Recovery
Algorithms 2019, 12(2), 36; https://doi.org/10.3390/a12020036
Received: 25 December 2018 / Revised: 29 January 2019 / Accepted: 30 January 2019 / Published: 13 February 2019
Viewed by 726 | PDF Full-text (7539 KB) | HTML Full-text | XML Full-text
Abstract
We propose a new iterative greedy algorithm to reconstruct sparse signals in Compressed Sensing. The algorithm, called Conjugate Gradient Hard Thresholding Pursuit (CGHTP), is a simple combination of Hard Thresholding Pursuit (HTP) and Conjugate Gradient Iterative Hard Thresholding (CGIHT). The conjugate gradient method, with its fast asymptotic convergence rate, is integrated into the HTP scheme, which only uses a simple line search, thereby accelerating the convergence of the iterative process. Moreover, an adaptive step size selection strategy, which shrinks the step size until a convergence criterion is met, ensures that the algorithm has a stable and fast convergence rate without manual step-size tuning. Finally, experiments on both Gaussian signals and real-world images demonstrate the advantages of the proposed algorithm in convergence rate and reconstruction performance. Full article
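The hard thresholding operator shared by HTP, CGIHT and CGHTP keeps only the k largest-magnitude entries of a vector; a plain sketch:

```python
def hard_threshold(x, k):
    """The H_k operator of HTP-family greedy recovery: keep the k
    largest-magnitude entries of x and zero out the rest, enforcing
    k-sparsity after each gradient/conjugate-gradient step."""
    idx = sorted(range(len(x)), key=lambda i: abs(x[i]), reverse=True)[:k]
    keep = set(idx)
    return [v if i in keep else 0.0 for i, v in enumerate(x)]
```

Each CGHTP iteration alternates a descent step on the residual with this projection onto the set of k-sparse vectors, and the support `keep` determines the subspace for the subsequent least-squares refinement.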
Open Access Article Research on Quantitative Investment Strategies Based on Deep Learning
Algorithms 2019, 12(2), 35; https://doi.org/10.3390/a12020035
Received: 5 January 2019 / Revised: 2 February 2019 / Accepted: 9 February 2019 / Published: 12 February 2019
Viewed by 1016 | PDF Full-text (11688 KB) | HTML Full-text | XML Full-text
Abstract
This paper takes 50 ETF options in the options market, which have high transaction complexity, as the research object. The Random Forest (RF) model, the Long Short-Term Memory network (LSTM) model, and the Support Vector Regression (SVR) model are used to predict the 50 ETF price. Firstly, the original quantitative investment strategy is taken as the research object, using a 15 min trading frequency that is more in line with the actual trading situation, and the Delta hedging concept of options is introduced to control the risk of the quantitative investment strategy, achieving a 15 min hedging strategy. Secondly, the final transaction price, buy price, highest price, lowest price, volume, historical volatility, and implied volatility of the time segment marked with the 50 ETF are taken as the seven key factors affecting the price of the 50 ETF. Then, two different types of LSTM-SVR models, LSTM-SVR I and LSTM-SVR II, are used to predict the final transaction price of the 50 ETF in the next time segment. In the LSTM-SVR I model, the output of the LSTM and the seven key factors are combined as the input of the SVR model. In the LSTM-SVR II model, the hidden state vectors of the LSTM and the seven key factors are combined as the inputs of the SVR model. The results of the two LSTM-SVR models are compared, and the better one is applied to the trading strategy. Finally, the return of the deep learning-based quantitative investment strategy, its resilience, and its maximum drawdown are used as indicators to judge the pros and cons of the research results. The accuracy and deviations of the LSTM-SVR prediction models are compared with those of the LSTM model and the RF model. The experimental results show that the quantitative investment strategy based on deep learning has higher returns than the traditional quantitative investment strategy, a more stable yield curve, and better anti-fall performance. Full article
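Maximum drawdown, one of the evaluation indicators named above, measures the worst peak-to-trough loss of an equity curve and can be computed in one pass:

```python
def max_drawdown(equity):
    """Maximum drawdown of an equity curve: the largest relative drop
    from a running peak to a subsequent trough."""
    peak, mdd = float("-inf"), 0.0
    for v in equity:
        peak = max(peak, v)
        mdd = max(mdd, (peak - v) / peak)
    return mdd
```

A strategy with "better anti-fall performance" is, in these terms, one whose maximum drawdown stays small even when the underlying 50 ETF falls sharply.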
Open Access Article From the Quantum Approximate Optimization Algorithm to a Quantum Alternating Operator Ansatz
Algorithms 2019, 12(2), 34; https://doi.org/10.3390/a12020034
Received: 31 December 2018 / Revised: 29 January 2019 / Accepted: 4 February 2019 / Published: 12 February 2019
Viewed by 996 | PDF Full-text (601 KB) | HTML Full-text | XML Full-text
Abstract
The next few years will be exciting as prototype universal quantum processors emerge, enabling the implementation of a wider variety of algorithms. Of particular interest are quantum heuristics, which require experimentation on quantum hardware for their evaluation and which have the potential to significantly expand the breadth of applications for which quantum computers have an established advantage. A leading candidate is Farhi et al.’s quantum approximate optimization algorithm, which alternates between applying a cost function based Hamiltonian and a mixing Hamiltonian. Here, we extend this framework to allow alternation between more general families of operators. The essence of this extension, the quantum alternating operator ansatz, is the consideration of general parameterized families of unitaries rather than only those corresponding to the time evolution under a fixed local Hamiltonian for a time specified by the parameter. This ansatz supports the representation of a larger, and potentially more useful, set of states than the original formulation, with potential long-term impact on a broad array of application areas. For cases that call for mixing only within a desired subspace, refocusing on unitaries rather than Hamiltonians enables more efficiently implementable mixers than was possible in the original framework. Such mixers are particularly useful for optimization problems with hard constraints that must always be satisfied, defining a feasible subspace, and soft constraints whose violation we wish to minimize. More efficient implementation enables earlier experimental exploration of an alternating operator approach, in the spirit of the quantum approximate optimization algorithm, to a wide variety of approximate optimization, exact optimization, and sampling problems. 
In addition to introducing the quantum alternating operator ansatz, we lay out design criteria for mixing operators, detail mappings for eight problems, and provide a compendium with brief descriptions of mappings for a diverse array of problems. Full article
(This article belongs to the Special Issue Quantum Optimization Theory, Algorithms, and Applications)
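As a toy illustration of the alternating-operator idea, the sketch below (not from the article) applies one layer of a diagonal phase-separation operator followed by an RX-based mixing unitary to a two-qubit, single-edge MaxCut instance. The problem, the mixer choice, and the angles are all illustrative assumptions.

```python
import numpy as np

# Single-qubit RX rotation, used here as the building block of the mixer.
def rx(beta):
    return np.array([[np.cos(beta), -1j * np.sin(beta)],
                     [-1j * np.sin(beta), np.cos(beta)]])

cost = np.array([0.0, 1.0, 1.0, 0.0])    # cut value for basis states 00, 01, 10, 11
state = np.full(4, 0.5, dtype=complex)   # uniform superposition |+>|+>

for gamma, beta in [(0.8, 0.4)]:         # one layer with arbitrary angles
    state = np.exp(-1j * gamma * cost) * state   # phase separator (diagonal unitary)
    mixer = np.kron(rx(beta), rx(beta))          # mixing unitary on both qubits
    state = mixer @ state

expected_cut = float(np.real(np.vdot(state, cost * state)))
```

In the generalized ansatz, the mixer need not be the exponential of a fixed Hamiltonian; any parameterized family of unitaries (e.g., one that preserves a feasible subspace) can replace the `mixer` line above.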
Open AccessArticle Optimized Sonar Broadband Focused Beamforming Algorithm
Algorithms 2019, 12(2), 33; https://doi.org/10.3390/a12020033
Received: 27 November 2018 / Revised: 25 January 2019 / Accepted: 31 January 2019 / Published: 5 February 2019
Viewed by 752 | PDF Full-text (3277 KB) | HTML Full-text | XML Full-text
Abstract
Biases in the initial direction estimation and focusing frequency selection affect the final focusing effect and may even cause the algorithm to fail when determining the focusing matrix in the coherent signal-subspace method. An optimized sonar broadband focused beamforming algorithm is proposed to address these defects. First, the robust Capon beamforming algorithm is used to correct the focusing matrix; the broadband signals are then focused on the optimal focusing frequency by the corrected focusing matrix, transforming the wideband beamforming into a narrowband problem. Finally, the focused narrowband signals are beamformed by the second-order cone programming algorithm. Computer simulation results and water pool experiments verify that the proposed algorithm provides good performance. Full article
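The focusing step at the heart of the coherent signal-subspace method can be sketched as follows: a focusing matrix T maps steering vectors at each frequency bin onto those at the focusing frequency, so the wideband bins can be combined as one narrowband problem. The array geometry, frequencies, and plain least-squares construction below are illustrative assumptions; the paper's robust Capon correction is omitted.

```python
import numpy as np

# Steering matrix of a uniform linear array (spacing d in reference
# wavelengths, normalized propagation speed) at frequency f.
def steering(f, angles_deg, n_sensors=8, d=0.5):
    theta = np.deg2rad(angles_deg)
    n = np.arange(n_sensors)[:, None]
    return np.exp(-2j * np.pi * f * d * n * np.sin(theta)[None, :])

angles = [-20.0, 0.0, 15.0]      # assumed preliminary direction estimates
A0 = steering(1.00, angles)      # steering matrix at the focusing frequency f0
Aj = steering(0.85, angles)      # steering matrix at some other bin frequency fj

# Least-squares focusing matrix: T @ Aj ~= A0.
T = A0 @ np.linalg.pinv(Aj)
focus_err = np.linalg.norm(T @ Aj - A0)
```

After applying each bin's T to its snapshots, the focused covariance matrices can be averaged and passed to a single narrowband beamformer.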
Open AccessArticle Fog-Computing-Based Heartbeat Detection and Arrhythmia Classification Using Machine Learning
Algorithms 2019, 12(2), 32; https://doi.org/10.3390/a12020032
Received: 10 January 2019 / Revised: 25 January 2019 / Accepted: 30 January 2019 / Published: 2 February 2019
Viewed by 960 | PDF Full-text (2450 KB) | HTML Full-text | XML Full-text
Abstract
Designing advanced health monitoring systems is still an active research topic. Wearable and remote monitoring devices enable monitoring of physiological and clinical parameters (heart rate, respiration rate, temperature, etc.) and analysis using cloud-centric machine-learning applications and decision-support systems to predict critical clinical states. This paper moves from a totally cloud-centric concept to a more distributed one, by transferring sensor data processing and analysis tasks to the edges of the network. The resulting solution enables the analysis and interpretation of sensor-data traces within the wearable device to provide actionable alerts without any dependence on cloud services. In this paper, we use a supervised-learning approach to detect heartbeats and classify arrhythmias. The system uses a window-based feature definition that is suitable for execution within an asymmetric multicore embedded processor that provides a dedicated core for hardware-assisted pattern matching. We evaluate the performance of the system in comparison with various existing approaches, in terms of achieved accuracy in the detection of abnormal events. The results show that the proposed embedded system achieves a high detection rate that in some cases matches the accuracy of the state-of-the-art algorithms executed in standard processors. Full article
(This article belongs to the Special Issue Algorithm Engineering for Collective Ambient Intelligence)
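The window-based feature idea can be sketched as follows: slide a fixed-length window over the signal and emit simple per-window statistics, which a supervised classifier then labels. The window length and the specific features below are assumptions for illustration, not the paper's exact definition.

```python
import numpy as np

# Slide a window of length `win` with stride `step` over the signal and
# compute simple statistics (mean, standard deviation, peak-to-peak range)
# for each window position.
def window_features(signal, win=32, step=16):
    feats = []
    for start in range(0, len(signal) - win + 1, step):
        w = signal[start:start + win]
        feats.append([w.mean(), w.std(), w.max() - w.min()])
    return np.array(feats)

# Synthetic ECG-like trace: a sinusoid plus a little noise.
rng = np.random.default_rng(0)
ecg_like = np.sin(np.linspace(0, 20 * np.pi, 640)) + 0.05 * rng.standard_normal(640)
X = window_features(ecg_like)   # one feature row per window position
```

Keeping the features this cheap is what makes execution feasible on an embedded core, with the pattern-matching core handling the classification step.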
Open AccessArticle Particle Probability Hypothesis Density Filter Based on Pairwise Markov Chains
Algorithms 2019, 12(2), 31; https://doi.org/10.3390/a12020031
Received: 2 January 2019 / Revised: 20 January 2019 / Accepted: 24 January 2019 / Published: 31 January 2019
Viewed by 922 | PDF Full-text (3365 KB) | HTML Full-text | XML Full-text
Abstract
Most multi-target tracking filters assume that a target and its observation follow a Hidden Markov Chain (HMC) model. However, the implicit independence assumption of the HMC model is invalid in many practical applications, and a Pairwise Markov Chain (PMC) model is more universally applicable than the traditional HMC model. A set of weighted particles is used to approximate the probability hypothesis density of multiple targets in the framework of the PMC model, and a particle probability hypothesis density filter based on the PMC model (PF-PMC-PHD) is proposed for nonlinear multi-target tracking systems. Simulation results show the effectiveness of the PF-PMC-PHD filter, and that its tracking performance is superior to that of the particle PHD filter based on the HMC model in a scenario that keeps the local physical properties of nonlinear and Gaussian HMC models while relaxing their independence assumption. Full article
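What a PMC model buys over an HMC can be shown with a minimal simulation sketch: the pair (x_n, y_n) is jointly Markovian, so both the state and the observation may depend on the previous observation, a coupling an HMC cannot express. All coefficients below are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 200
x = np.zeros(T)   # hidden state
y = np.zeros(T)   # observation

for n in range(1, T):
    # State transition depends on the previous state AND the previous
    # observation -- forbidden under the HMC independence assumption.
    x[n] = 0.8 * x[n - 1] + 0.1 * y[n - 1] + rng.standard_normal()
    # Observation depends on the current state and the previous observation.
    y[n] = x[n] + 0.2 * y[n - 1] + 0.5 * rng.standard_normal()
```

Setting both cross-coupling coefficients (0.1 and 0.2) to zero recovers an ordinary linear-Gaussian HMC, which is the sense in which the PMC model strictly generalizes it.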
Open AccessArticle An Exploration of a Balanced Up-Downwind Scheme for Solving Heston Volatility Model Equations on Variable Grids
Algorithms 2019, 12(2), 30; https://doi.org/10.3390/a12020030
Received: 26 October 2018 / Revised: 4 January 2019 / Accepted: 6 January 2019 / Published: 22 January 2019
Viewed by 885 | PDF Full-text (32566 KB) | HTML Full-text | XML Full-text
Abstract
This paper studies an effective finite difference scheme for solving two-dimensional Heston stochastic volatility option-pricing model problems. A dynamically balanced up-downwind strategy for approximating the cross-derivative is implemented and analyzed. Semi-discretized and spatially nonuniform platforms are utilized. The resulting numerical method is simple and straightforward, with reliable first-order overall accuracy. The spectral norm is used throughout the investigation, and numerical stability is proven. Simulation experiments are given to illustrate our results. Full article
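The balancing principle behind up-downwind differencing can be illustrated on a simpler one-dimensional convection term: choose a forward or backward one-sided difference according to the sign of the local coefficient. This sketch mirrors that idea on a variable grid; it is not the paper's actual cross-derivative stencil for the Heston equation.

```python
import numpy as np

# First derivative on a nonuniform grid with sign-dependent one-sided
# differences: backward when the coefficient a(x) >= 0, forward otherwise.
def upwind_derivative(u, x, a):
    du = np.zeros_like(u)
    for i in range(1, len(u) - 1):
        if a[i] >= 0:   # information arrives from the left: backward difference
            du[i] = (u[i] - u[i - 1]) / (x[i] - x[i - 1])
        else:           # information arrives from the right: forward difference
            du[i] = (u[i + 1] - u[i]) / (x[i + 1] - x[i])
    return du

x = np.array([0.0, 0.1, 0.25, 0.45, 0.7, 1.0])   # variable grid spacing
u = x ** 2
a = np.where(x < 0.5, 1.0, -1.0)                 # coefficient changes sign mid-grid
du = upwind_derivative(u, x, a)
```

The one-sided choice trades accuracy (first order, matching the scheme's stated overall order) for the sign structure needed in stability arguments.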
Open AccessArticle A Distributed Execution Pipeline for Clustering Trajectories Based on a Fuzzy Similarity Relation
Algorithms 2019, 12(2), 29; https://doi.org/10.3390/a12020029
Received: 2 December 2018 / Revised: 16 January 2019 / Accepted: 17 January 2019 / Published: 22 January 2019
Viewed by 883 | PDF Full-text (629 KB) | HTML Full-text | XML Full-text
Abstract
The proliferation of indoor and outdoor tracking devices has led to a vast amount of spatial data. Each object can be described by several trajectories that, once analysed, can yield significant knowledge. In particular, pattern analysis by clustering generic trajectories can give insight into objects sharing the same patterns. However, sequential clustering approaches fail to handle large volumes of data, hence the need for distributed systems that can infer knowledge within a reasonable time. In this paper, we detail an efficient, scalable and distributed execution pipeline for clustering raw trajectories. The clustering is achieved via a fuzzy similarity relation obtained by the transitive closure of a proximity relation. Moreover, the pipeline is integrated into Spark, implemented in Scala, and leverages the Core and GraphX libraries, making use of Resilient Distributed Datasets (RDDs) and graph processing. Furthermore, a new simple, but very efficient, partitioning logic has been deployed in Spark and integrated into the execution process. The objective behind this logic is to distribute the load equally among all executors by considering the complexity of the data. In particular, resolving the load-balancing issue reduces the conventional execution time considerably. The performance of the whole distributed process was evaluated on the Geolife project's GPS trajectory dataset. Full article
(This article belongs to the Special Issue Algorithms for Decision Making)
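The clustering core can be sketched in a few lines: a reflexive, symmetric proximity relation R is made transitive by iterating the max-min composition to a fixed point, and thresholding the resulting fuzzy similarity relation yields the clusters. This is a small sequential illustration of the mathematics only; the toy matrix is an assumption, and the distributed Spark/GraphX execution is the paper's contribution.

```python
import numpy as np

# Max-min transitive closure of a fuzzy proximity relation R:
# repeatedly take S = max(R, R o R) with (R o R)[i, j] = max_k min(R[i, k], R[k, j])
# until nothing changes.
def transitive_closure(R):
    while True:
        RR = np.max(np.minimum(R[:, :, None], R[None, :, :]), axis=1)
        S = np.maximum(R, RR)
        if np.array_equal(S, R):
            return S
        R = S

R = np.array([[1.0, 0.8, 0.0],
              [0.8, 1.0, 0.4],
              [0.0, 0.4, 1.0]])
S = transitive_closure(R)
# The closure links objects 0 and 2 through object 1 with strength
# min(0.8, 0.4) = 0.4; cutting S at some alpha then yields the clusters.
```

Cutting S at alpha = 0.5 here leaves {0, 1} and {2}, while alpha = 0.3 merges all three objects into one cluster.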
Open AccessArticle FPGA Implementation of ECT Digital System for Imaging Conductive Materials
Algorithms 2019, 12(2), 28; https://doi.org/10.3390/a12020028
Received: 3 December 2018 / Revised: 14 January 2019 / Accepted: 16 January 2019 / Published: 22 January 2019
Viewed by 814 | PDF Full-text (24483 KB) | HTML Full-text | XML Full-text
Abstract
This paper presents the hardware implementation of a stand-alone Electrical Capacitance Tomography (ECT) system employing a Field Programmable Gate Array (FPGA). The image reconstruction algorithms of the ECT system demand intensive computation and fast processing of a large number of measurements. The inner product of large vectors is the core of the majority of these algorithms. Therefore, a reconfigurable segmented parallel inner-product architecture for parallel matrix multiplication is proposed. In addition, hardware-software codesign targeting an FPGA System-on-Chip (SoC) is applied to achieve high performance. The development of the hardware-software codesign is carried out via commercial tools to adjust the software algorithms and parameters of the system. The ECT system is used in this work to monitor the characteristics of the molten metal in the Lost Foam Casting (LFC) process. The hardware system consists of capacitive sensors, wireless nodes and an FPGA module. The experimental results reveal high stability and accuracy when building the ECT system on the FPGA architecture. The proposed system achieves high speed with a compact design. Full article
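The segmented inner-product idea behind such an architecture can be sketched in software: split the vectors into fixed-size segments, compute each segment's partial product independently (in hardware, by parallel multiply-accumulate units), then add the partial sums in a reduction stage. The segment size below is an arbitrary assumption.

```python
# Segmented inner product: each segment's partial sum is independent of
# the others, which is what the FPGA exploits by computing them in parallel
# before a final reduction. This pure-Python version runs sequentially.
def segmented_inner_product(a, b, seg=4):
    assert len(a) == len(b)
    partials = [sum(x * y for x, y in zip(a[s:s + seg], b[s:s + seg]))
                for s in range(0, len(a), seg)]
    return sum(partials)   # reduction stage

a = list(range(16))
b = [1.0] * 16
dot = segmented_inner_product(a, b)   # equals sum(range(16)) = 120.0
```

In a reconfigurable design, the segment size becomes a synthesis parameter trading DSP-block usage against the depth of the reduction tree.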
Algorithms EISSN 1999-4893 Published by MDPI AG, Basel, Switzerland RSS E-Mail Table of Contents Alert