
Table of Contents

Algorithms, Volume 12, Issue 7 (July 2019)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF form. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
Cover Story: In this study, we present Optimus, which implements a self-adaptive differential evolution [...]
Displaying articles 1-21
Open Access Article
Hybrid MU-MIMO Precoding Based on K-Means User Clustering
Algorithms 2019, 12(7), 146; https://doi.org/10.3390/a12070146
Received: 31 May 2019 / Revised: 15 July 2019 / Accepted: 19 July 2019 / Published: 23 July 2019
Abstract
Multi-User (MU) Multiple-Input-Multiple-Output (MIMO) systems have been extensively investigated over the last few years from both theoretical and practical perspectives. The low complexity Linear Precoding (LP) schemes for MU-MIMO are already deployed in Long-Term Evolution (LTE) networks; however, they do not work well for users with strongly-correlated channels. Alternatives to those schemes, like Non-Linear Precoding (NLP), and hybrid precoding schemes were proposed in the standardization phase for the Third-Generation Partnership Project (3GPP) 5G New Radio (NR). NLP schemes have better performance, but their complexity is prohibitively high. Hybrid schemes, which combine LP schemes to serve users with separable channels and NLP schemes for users with strongly-correlated channels, can help reduce the computational burden, while limiting the performance degradation. Finding the optimum set of users that can be co-scheduled through LP schemes could require an exhaustive search and, thus, may not be affordable for practical systems. The purpose of this paper is to present a new semi-orthogonal user selection algorithm based on the statistical K-means clustering and to assess its performance in MU-MIMO systems employing hybrid precoding schemes. Full article
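The user-grouping step this abstract describes can be illustrated with a small sketch (not the paper's actual algorithm): plain k-means over normalized channel magnitudes, so users with strongly-correlated (nearly collinear) channels land in the same cluster. The channel matrix `H` and the greedy farthest-point initialization are illustrative assumptions.

```python
import numpy as np

def kmeans_user_clusters(H, k, iters=50):
    """Cluster users by channel-vector direction with plain k-means.

    H: (n_users, n_antennas) complex channel matrix.  Users whose rows are
    nearly collinear (strongly-correlated channels) should share a cluster.
    """
    # Unit-norm magnitude features, so distance reflects channel direction.
    X = np.abs(H) / np.linalg.norm(H, axis=1, keepdims=True)
    # Greedy farthest-point initialization (deterministic, avoids empty clusters).
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels
```

Users in the same cluster would then be served by the NLP branch of a hybrid precoder, while well-separated clusters can share an LP resource.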

Open Access Article
A Study on Sensitive Bands of EEG Data under Different Mental Workloads
Algorithms 2019, 12(7), 145; https://doi.org/10.3390/a12070145
Received: 29 May 2019 / Revised: 10 July 2019 / Accepted: 15 July 2019 / Published: 22 July 2019
Abstract
Electroencephalogram (EEG) signals contain a wealth of information about human performance. With the development of brain–computer interface (BCI) technology, many researchers have applied feature extraction and classification algorithms from various fields to the feature extraction and classification of EEG signals. In this paper, the sensitive bands of EEG data under different mental workloads are studied: by selecting characteristics of the EEG signals, the bands most sensitive to mental workload are identified. EEG signals were measured in flight experiments under different workloads. First, the EEG signals are preprocessed by independent component analysis (ICA) to remove interference from electrooculogram (EOG) signals; then the power spectral density and energy are calculated for feature extraction. Finally, feature importance is ranked based on Gini impurity. The classification accuracy of a support vector machine (SVM) classifier is verified by comparing the features of the full band with the features of the β band. The results show that the characteristics of the β band are the most sensitive in EEG data under different mental workloads. Full article
(This article belongs to the Special Issue The Second Symposium on Machine Intelligence and Data Analytics)
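As a rough illustration of the band-power feature mentioned above (a plain FFT periodogram sketch, not the authors' exact pipeline), the power inside a band such as β, here taken as 13–30 Hz, can be computed as:

```python
import numpy as np

def band_power(x, fs, f_lo, f_hi):
    """Signal power inside [f_lo, f_hi) Hz, from the FFT periodogram."""
    spec = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return spec[(freqs >= f_lo) & (freqs < f_hi)].sum()

# A 20 Hz tone should put nearly all of its power in the beta band (13-30 Hz).
fs = 256
t = np.arange(512) / fs
eeg_like = np.sin(2 * np.pi * 20 * t)
beta = band_power(eeg_like, fs, 13, 30)
alpha = band_power(eeg_like, fs, 8, 13)
```

Relative powers of the classical bands (δ, θ, α, β) computed this way would form the per-channel feature vector fed to the classifier.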

Open Access Article
Simulation Tool for Tuning and Performance Analysis of Robust, Tracking, Disturbance Rejection and Aggressiveness Controller
Algorithms 2019, 12(7), 144; https://doi.org/10.3390/a12070144
Received: 13 June 2019 / Revised: 16 July 2019 / Accepted: 17 July 2019 / Published: 20 July 2019
Abstract
The RTD-A (robust, tracking, disturbance rejection and aggressiveness) controller is a novel control scheme that substitutes for the classical proportional integral derivative (PID) controller. This controller’s performance depends on four tuning parameters (θR, θT, θD and θA). Tuning an RTD-A controller is more transparent than tuning a classic PID controller, and the values of its tuning parameters lie between zero and one. A tool for designing optimal parameters for this controller and evaluating its performance on a given system is therefore valuable to researchers. In this paper, a new simulation tool for the RTD-A control scheme is presented. The proposed tool includes four graphical user interface tools, and the workings of each are explained in detail. To demonstrate the proposed tool, two examples, involving a liquid-level control application and an air-pressure control application, are presented in this work. The performance of the RTD-A controller is compared with a PID controller. The RTD-A controllers are tuned using optimization algorithms, and their performance is observed and analyzed in both cases under deterministic and uncertain conditions. Full article

Open Access Article
Bi-Level Multi-Objective Production Planning Problem with Multi-Choice Parameters: A Fuzzy Goal Programming Algorithm
Algorithms 2019, 12(7), 143; https://doi.org/10.3390/a12070143
Received: 30 June 2019 / Revised: 15 July 2019 / Accepted: 16 July 2019 / Published: 19 July 2019
Abstract
This paper deals with the modeling and optimization of a bi-level multi-objective production planning problem, where some of the coefficients of the objective functions and parameters of the constraints are multi-choice. A general transformation technique based on binary variables has been used to transform the multi-choice parameters of the problem into their equivalent deterministic form. Finally, two different types of scalarization technique have been used to achieve the maximum degree of each individual membership goal by minimizing the deviational variables, thereby obtaining the most satisfactory solution of the formulated problem. An illustrative real case study of production planning has been discussed and compared to validate the efficiency and usefulness of the proposed work. Full article
(This article belongs to the Special Issue Algorithms for Multi-Criteria Decision-Making)

Open Access Article
New Bipartite Graph Techniques for Irregular Data Redistribution Scheduling
Algorithms 2019, 12(7), 142; https://doi.org/10.3390/a12070142
Received: 26 May 2019 / Revised: 10 July 2019 / Accepted: 14 July 2019 / Published: 16 July 2019
Abstract
For many parallel and distributed systems, automatic data redistribution improves data locality and increases system performance for various computer problems and applications. In general, an array can be distributed across multiple processing systems using regular or irregular distributions. Some data distributions adopt BLOCK, CYCLIC, or BLOCK-CYCLIC patterns to specify data array decomposition and distribution. Irregular distributions, on the other hand, specify a different-size data array distribution according to user-defined commands or procedures. In this work, we propose three bipartite graph problems, namely the “maximum edge coloring problem”, the “maximum degree edge coloring problem”, and the “cost-sharing maximum edge coloring problem”, to formulate these kinds of distribution problems. Next, we propose an approximation algorithm with a ratio bound of two for the maximum edge coloring problem when the input graph is biplanar. Moreover, we prove that the “cost-sharing maximum edge coloring problem” is NP-complete even when the input graph is biplanar. Full article

Open Access Article
OPTIMUS: Self-Adaptive Differential Evolution with Ensemble of Mutation Strategies for Grasshopper Algorithmic Modeling
Algorithms 2019, 12(7), 141; https://doi.org/10.3390/a12070141
Received: 7 May 2019 / Revised: 4 July 2019 / Accepted: 8 July 2019 / Published: 12 July 2019
Abstract
Most architectural design problems are essentially real-parameter optimization problems, so any type of evolutionary or swarm algorithm can be used in this field. However, little attention has been paid to using optimization methods within computer-aided design (CAD) programs. In this paper, we present Optimus, a new optimization tool for Grasshopper algorithmic modeling in the Rhinoceros CAD software. Optimus implements a self-adaptive differential evolution algorithm with an ensemble of mutation strategies (jEDE). We conducted an experiment using standard test problems from the literature and some of the test problems proposed in IEEE CEC 2005, reporting the minimum, maximum, average, standard deviation, and number of function evaluations over five replications for each function. Experimental results on the benchmark suite showed that Optimus (jEDE) outperforms other optimization tools, namely Galapagos (genetic algorithm), SilverEye (particle swarm optimization), and Opossum (RbfOpt), by finding better results for 19 out of 20 problems; for only one function did Galapagos present a slightly better result than Optimus. Finally, we presented an architectural design problem and compared the tools to test Optimus in the design domain, reporting the minimum, maximum, average, and number of function evaluations of one replication for each tool. Galapagos and SilverEye produced infeasible results, whereas Optimus and Opossum found feasible solutions; however, Optimus discovered a much better fitness result than Opossum. In conclusion, we discuss the advantages and limitations of Optimus in comparison to the other tools. The target audience of this paper is frequent users of parametric design modeling, e.g., architects, engineers, and designers. The main contribution of this paper is summarized as follows: Optimus showed that near-optimal solutions of architectural design problems can be improved by testing different types of algorithms with respect to the no-free-lunch theorem. Moreover, Optimus facilitates implementing different types of algorithms owing to its modular system. Full article
(This article belongs to the Special Issue Metaheuristic Algorithms in Optimization and Applications (volume 2))
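The self-adaptive rule of jDE, which the jEDE scheme above builds on, is simple to sketch: each individual carries its own F and CR, which are occasionally resampled before mutation (parameter ranges follow the standard jDE formulation; this is an illustrative sketch, not the Optimus source):

```python
import random

def jde_update(F, CR, tau1=0.1, tau2=0.1, F_l=0.1, F_u=0.9):
    """jDE self-adaptation: with small probability tau1 (resp. tau2),
    regenerate an individual's F in [F_l, F_l + F_u] or CR in [0, 1];
    otherwise the individual keeps its inherited control parameters."""
    if random.random() < tau1:
        F = F_l + random.random() * F_u
    if random.random() < tau2:
        CR = random.random()
    return F, CR
```

Because the control parameters evolve with the population, individuals carrying good (F, CR) pairs survive selection, so the algorithm tunes itself per problem.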

Open Access Article
Projected Augmented Reality Intelligent Model of a City Area with Path Optimization
Algorithms 2019, 12(7), 140; https://doi.org/10.3390/a12070140
Received: 3 June 2019 / Revised: 9 July 2019 / Accepted: 9 July 2019 / Published: 12 July 2019
Abstract
Augmented reality is increasingly used to enhance user experiences in different tasks. The present paper describes a model combining augmented reality and artificial intelligence algorithms in a 3D model of an area of the city of Coimbra, based on information extracted from OpenStreetMap. The augmented reality effect is achieved using a video projection over a 3D-printed map. Users can interact with the model using a smartphone or similar device and simulate itineraries, which are optimized using a genetic algorithm and A*. Among other applications, the model can be used by tourists or travelers to simulate journeys realistically, as well as for virtual reconstructions of historical places or remote areas. Full article
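The A* component of the itinerary search can be illustrated with a minimal sketch on a 4-connected grid with a Manhattan heuristic (the actual model presumably searches the OpenStreetMap street graph; the grid here is an assumption for illustration):

```python
import heapq

def astar(grid, start, goal):
    """A* shortest path on a 4-connected grid; cells equal to 1 are obstacles.
    Heuristic: Manhattan distance, admissible for unit-cost moves."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start, [start])]   # (f = g + h, g, node, path)
    seen = set()
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(frontier, (g + 1 + h((nr, nc)), g + 1,
                                          (nr, nc), path + [(nr, nc)]))
    return None   # goal unreachable
```

A genetic algorithm can then optimize the *order* in which several points of interest are visited, with A* supplying the point-to-point distances.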

Open Access Article
A Credit Rating Model in a Fuzzy Inference System Environment
Algorithms 2019, 12(7), 139; https://doi.org/10.3390/a12070139
Received: 21 May 2019 / Revised: 29 June 2019 / Accepted: 30 June 2019 / Published: 9 July 2019
Abstract
One of the most important functions of an export credit agency (ECA) is to act as an intermediary between national governments and exporters. These organizations provide financing to reduce the political and commercial risks in international trade. The agents assess the buyers based on financial and non-financial indicators to determine whether it is advisable to grant them credit. Because many of these indicators are qualitative and inherently linguistically ambiguous, the agents must make decisions in uncertain environments. Therefore, to make the most accurate decision possible, they often utilize fuzzy inference systems. The purpose of this research was to design a credit rating model in an uncertain environment using the fuzzy inference system (FIS). In this research, we used suitable variables of agency ratings from previous studies and then screened them via the Delphi method. Finally, we created a credit rating model using these variables and FIS including related IF-THEN rules which can be applied in a practical setting. Full article

Open Access Article
A Quantum-Behaved Neurodynamic Approach for Nonconvex Optimization with Constraints
Algorithms 2019, 12(7), 138; https://doi.org/10.3390/a12070138
Received: 29 May 2019 / Revised: 27 June 2019 / Accepted: 3 July 2019 / Published: 5 July 2019
Abstract
This paper presents a quantum-behaved neurodynamic swarm optimization approach to solving nonconvex optimization problems with inequality constraints. First, the general constrained optimization problem is addressed and a high-performance feedback neural network for solving convex nonlinear programming problems is introduced; the convergence of the proposed neural network is also proved. Then, combined with the quantum-behaved particle swarm method, a quantum-behaved neurodynamic swarm optimization (QNSO) approach is presented. Finally, the performance of the proposed QNSO algorithm is evaluated through two function tests and three applications: a hollow transmission shaft, heat exchangers, and a crank–rocker mechanism. Numerical simulations are also provided to verify the advantages of our method. Full article

Open Access Article
Money Neutrality, Monetary Aggregates and Machine Learning
Algorithms 2019, 12(7), 137; https://doi.org/10.3390/a12070137
Received: 2 May 2019 / Revised: 1 July 2019 / Accepted: 3 July 2019 / Published: 5 July 2019
Abstract
The issue of whether or not money affects real economic activity (money neutrality) has attracted significant empirical attention over the last five decades. If money is neutral even in the short run, then monetary policy is ineffective and its role limited; if money matters, it will be able to forecast real economic activity. In this study, we test the traditional simple-sum monetary aggregates that are commonly used by central banks all over the world and also the theoretically correct Divisia monetary aggregates proposed by the Barnett Critique (Chrystal and MacDonald, 1994; Belongia and Ireland, 2014), both at three levels of aggregation: M1, M2, and M3. We use them to directionally forecast the Eurocoin index, a monthly index that measures the growth rate of the euro area GDP. The data span from January 2001 to June 2018. The forecasting methodology we employ is support vector machines (SVM) from the area of machine learning. The empirical results show that: (a) the Divisia monetary aggregates outperform the simple-sum ones and (b) both monetary aggregates can directionally forecast the Eurocoin index, reaching a highest accuracy of 82.05%, providing evidence against money neutrality even in the short term. Full article

Open Access Editorial
Editorial: Special Issue on Efficient Data Structures
Algorithms 2019, 12(7), 136; https://doi.org/10.3390/a12070136
Received: 2 July 2019 / Accepted: 3 July 2019 / Published: 5 July 2019
Abstract
This Special Issue of Algorithms is focused on the design, formal analysis, implementation, and experimental evaluation of efficient data structures for various computational problems. Full article
(This article belongs to the Special Issue Efficient Data Structures)
Open Access Article
Breast Microcalcification Detection Algorithm Based on Contourlet and ASVM
Algorithms 2019, 12(7), 135; https://doi.org/10.3390/a12070135
Received: 26 April 2019 / Revised: 15 June 2019 / Accepted: 27 June 2019 / Published: 30 June 2019
Abstract
Microcalcification is the most important landmark information for early breast cancer. At present, morphological observation by clinicians is the main method for clinical diagnosis of such diseases, but it easily leads to misdiagnosis and missed diagnosis. The present study proposes an algorithm for detecting microcalcifications on mammography for early breast cancer. Firstly, the contrast characteristics of mammograms are enhanced by Contourlet transformation and morphology (CTM). Secondly, the region of interest (ROI) is segmented by an improved K-means algorithm. Thirdly, grayscale, shape, and Histogram of Oriented Gradients (HOG) features are calculated for the ROI. An adaptive support vector machine (ASVM) is used to separate true calcification points from false ones. Under the guidance of a professional doctor, 280 normal images and 120 calcification images were selected for experimentation, of which 210 normal images and 90 calcification images were used to train the classifier; the remaining 100 were used to test the algorithm. The automatic classification accuracy of the ASVM algorithm reaches 94%, and the experimental results are superior to those of similar algorithms. The algorithm overcomes various difficulties in microcalcification detection and has great clinical application value. Full article
(This article belongs to the Special Issue Algorithms for Computer-Aided Design)

Open Access Article
An Enhanced Lightning Attachment Procedure Optimization Algorithm
Algorithms 2019, 12(7), 134; https://doi.org/10.3390/a12070134
Received: 23 May 2019 / Revised: 27 June 2019 / Accepted: 28 June 2019 / Published: 29 June 2019
Abstract
To overcome the shortcomings of the lightning attachment procedure optimization (LAPO) algorithm, such as premature convergence and slow convergence speed, an enhanced lightning attachment procedure optimization (ELAPO) algorithm is proposed in this paper. In the downward leader movement, the idea of differential evolution is introduced to speed up population convergence; in the upward leader movement, the individual updating mode is modified by superimposing vectors pointing to the average individual, which changes the direction of individual evolution, avoids falling into local optima, and carries out a finer local search; in the performance enhancement stage, opposition-based learning (OBL) is used to replace the worst individuals, improving the convergence rate of the population and increasing the global exploration capability. Finally, 16 typical benchmark functions from CEC2005 are used to carry out simulation experiments with the LAPO algorithm, four improved algorithms, and ELAPO. Experimental results showed that ELAPO obtained better convergence speed and optimization accuracy. Full article
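The opposition-based learning step described above is easy to sketch: the opposite of x in [lo, hi] is lo + hi - x, and the worst individuals are replaced when their opposites score better (minimization assumed; the population shape and replacement fraction below are illustrative):

```python
import numpy as np

def opposition(x, lo, hi):
    """Opposition-based learning: the opposite of x in [lo, hi] is lo + hi - x."""
    return lo + hi - x

def obl_replace_worst(pop, fitness_fn, lo, hi, frac=0.2):
    """Replace the worst `frac` of the population by their opposite points
    whenever the opposite has a better (lower) fitness."""
    n_replace = max(1, int(frac * len(pop)))
    order = np.argsort([fitness_fn(x) for x in pop])
    for i in order[-n_replace:]:             # indices of the worst individuals
        opp = opposition(pop[i], lo, hi)
        if fitness_fn(opp) < fitness_fn(pop[i]):
            pop[i] = opp
    return pop
```

Note that over a symmetric box centered on the optimum, a point and its opposite score identically; OBL pays off when the optimum is off-center, which is the usual case.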

Open Access Article
A New Method for Markovian Adaptation of the Non-Markovian Queueing System Using the Hidden Markov Model
Algorithms 2019, 12(7), 133; https://doi.org/10.3390/a12070133
Received: 27 May 2019 / Revised: 21 June 2019 / Accepted: 25 June 2019 / Published: 28 June 2019
Abstract
This manuscript starts with a detailed analysis of the current solution for the queueing system M/Er/1/∞. In the existing solution, Erlang’s service is caused by Poisson’s arrival process of groups, but not individual clients. The service of individual clients is still exponentially distributed, contrary to the declaration in Kendall’s notation. The idea of “hidden Markov states” (HMS) was borrowed from the related theory of the Hidden Markov Model (HMM) for the advancement of queueing theory. In this paper, the basic principles of the application of HMS are first established. The abstract HMS states have a catalytic role in the standard procedure of solving non-Markovian queueing systems. The proposed solution based on HMS overcomes the problem of accessing identical client groups in the current solution of the M/Er/r queueing system. A detailed procedure for the new solution of the queueing system M/Er/1/∞ is implemented. Additionally, a new solution to the queueing system M/N/1/∞ with a normal service time N(μ, σ) based on HMS is also implemented. Full article
(This article belongs to the Special Issue Algorithms for Multi-Criteria Decision-Making)

Open Access Article
Drum Water Level Control Based on Improved ADRC
Algorithms 2019, 12(7), 132; https://doi.org/10.3390/a12070132
Received: 26 May 2019 / Revised: 20 June 2019 / Accepted: 25 June 2019 / Published: 28 June 2019
Abstract
Drum water level systems exhibit strong disturbance, large inertia, large time delay, and non-linearity characteristics. In order to improve the antidisturbance performance and robustness of the traditional active disturbance rejection controller (ADRC), an improved linear active disturbance rejection controller (ILADRC) for the drum water level is designed. On the basis of the linear active disturbance rejection controller (LADRC) structure, an identical linear extended state observer (ESO) is added with the same parameters as the original one. The estimation error of the total disturbance is introduced and compensated, which improves the control system’s ability to suppress unknown disturbances and thus its antidisturbance performance and robustness. The antidisturbance performance and robustness of LADRC and ILADRC for the drum water level are simulated and analyzed under the influence of external disturbance and model parameter variation. Results show that the proposed ILADRC control system has a shorter settling time, smaller overshoot, and stronger anti-interference ability and robustness. It performs better than LADRC and has practical application value in engineering. Full article
(This article belongs to the Special Issue The Second Symposium on Machine Intelligence and Data Analytics)
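The core of any LADRC scheme is the linear ESO; a minimal Euler-discretized sketch for a first-order plant with bandwidth parameterization follows (the plant, gains, and disturbance here are illustrative assumptions, not the paper's drum model):

```python
def eso_step(z1, z2, y, u, b0, wo, dt):
    """One Euler step of a second-order linear extended state observer for
    the plant y' = f + b0*u: z1 tracks the output y, z2 tracks the total
    disturbance f.  Both observer poles are placed at the bandwidth -wo,
    giving gains 2*wo and wo**2."""
    e = z1 - y
    z1 += dt * (z2 + b0 * u - 2.0 * wo * e)
    z2 += dt * (-wo ** 2 * e)
    return z1, z2

# Constant unknown disturbance f = 1.5 with zero input: the observer should
# recover f in z2 while z1 tracks y.
z1 = z2 = y = 0.0
dt = 1e-3
for _ in range(2000):
    y += dt * 1.5                                   # plant: y' = f, f = 1.5
    z1, z2 = eso_step(z1, z2, y, 0.0, b0=1.0, wo=50.0, dt=dt)
```

The ILADRC idea in the abstract amounts to running a second, identical ESO and using the discrepancy between the two disturbance estimates as an extra compensation term.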

Open Access Article
Aiding Dictionary Learning Through Multi-Parametric Sparse Representation
Algorithms 2019, 12(7), 131; https://doi.org/10.3390/a12070131
Received: 20 May 2019 / Revised: 21 June 2019 / Accepted: 25 June 2019 / Published: 28 June 2019
Abstract
The ℓ1 relaxations of the sparse and cosparse representation problems which appear in the dictionary learning procedure are usually solved repeatedly (varying only the parameter vector), thus making them well suited to a multi-parametric interpretation. The associated constrained optimization problems differ only through an affine term from one iteration to the next (i.e., the problem’s structure remains the same while only the current vector, which is to be (co)sparsely represented, changes). We exploit this fact by providing an explicit piecewise affine representation of the solution, with polyhedral support. Consequently, at runtime, the optimal solution (the (co)sparse representation) is obtained through a simple enumeration over the non-overlapping regions of the polyhedral partition and the application of an affine law. We show that, for a suitably large number of parameter instances, the explicit approach outperforms the classical implementation. Full article
(This article belongs to the Special Issue Dictionary Learning Algorithms and Applications)

Open Access Article
A Novel Consistent Quality Driven for JEM Based Distributed Video Coding
Algorithms 2019, 12(7), 130; https://doi.org/10.3390/a12070130
Received: 20 May 2019 / Revised: 2 June 2019 / Accepted: 25 June 2019 / Published: 28 June 2019
Abstract
Distributed video coding (DVC) is an attractive and promising solution for low-complexity constrained video applications, such as wireless sensor networks or wireless surveillance systems. In DVC, visual quality consistency is one of the most important criteria for evaluating the performance of a DVC codec; however, in most recent DVC codecs the quality of the decoded frames is not consistent and fluctuates considerably. In this paper, we propose a novel DVC solution named Joint exploration model based DVC (JEM-DVC) to solve this problem, which provides not only higher performance compared to traditional DVC solutions, but also an effective scheme for quality consistency control. We first employ several advanced techniques provided in the Joint exploration model (JEM) of the future video coding standard (FVC) in the proposed JEM-DVC solution to effectively improve the performance of the JEM-DVC codec. Subsequently, for consistent quality control, we propose two novel methods, named key frame quantization (KF-Q) and Wyner-Ziv frame quantization (WZF-Q), which determine the optimal values of the quantization parameter (QP) and quantization matrix (QM) applied to key frame and WZ frame coding, respectively. The optimal values of QP and QM are adaptively controlled and updated for every key and WZ frame to guarantee consistent video quality for the proposed codec, unlike the conventional approaches. Our proposed JEM-DVC is the first DVC codec in the literature that employs the JEM coding technique, and all of the results presented in this paper are new. The experimental results show that the proposed JEM-DVC significantly outperforms the relevant DVC benchmarks, notably the DISCOVER DVC and the recent H.265/HEVC-based DVC, in terms of both peak signal-to-noise ratio (PSNR) performance and consistent visual quality. Full article
Open AccessArticle
A Hyper Heuristic Algorithm to Solve the Low-Carbon Location Routing Problem
Algorithms 2019, 12(7), 129; https://doi.org/10.3390/a12070129
Received: 25 May 2019 / Revised: 14 June 2019 / Accepted: 24 June 2019 / Published: 27 June 2019
Viewed by 416 | PDF Full-text (1682 KB) | HTML Full-text | XML Full-text
Abstract
This paper proposes a low-carbon location routing problem (LCLRP) model with simultaneous delivery and pickup, time windows, and heterogeneous fleets to reduce logistics cost and carbon emissions and improve customer satisfaction. The correctness of the model is verified on a simple example with CPLEX (optimization software for mathematical programming). To solve the problem, a hyper-heuristic algorithm is designed based on a secondary exponential smoothing strategy and an adaptive acceptance mechanism. The algorithm achieves fast convergence and is highly robust. A case study analyzes the impact of depot distribution and cost, heterogeneous fleets (HF), and customer distribution and time windows on logistics costs, carbon emissions, and customer satisfaction. The experimental results show that the proposed model can reduce logistics costs by 1.72%, carbon emissions by 11.23%, and vehicle travel distance by 9.69%, demonstrating that the model offers practical guidance for reducing logistics costs. Full article
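The hyper-heuristic above scores its low-level heuristics with secondary (double) exponential smoothing. A minimal sketch of such a scorer, assuming Brown's double-smoothing forecast as the score (the class, interface, and smoothing constant are illustrative assumptions; the paper's exact strategy may differ):

```python
# Hypothetical sketch: track a low-level heuristic's recent performance with
# double exponential smoothing, so both the level and the trend of its
# rewards influence the selection score.

class SmoothedScore:
    def __init__(self, alpha=0.3):
        self.alpha = alpha
        self.s1 = None   # first smoothing (level of the reward series)
        self.s2 = None   # second smoothing, applied to s1

    def update(self, reward):
        if self.s1 is None:
            self.s1 = self.s2 = reward
        else:
            self.s1 = self.alpha * reward + (1 - self.alpha) * self.s1
            self.s2 = self.alpha * self.s1 + (1 - self.alpha) * self.s2
        return self.score()

    def score(self):
        # Brown's double-smoothing forecast 2*S1 - S2 extrapolates the trend,
        # favoring heuristics whose performance is improving.
        return 2 * self.s1 - self.s2
```

At each step the hyper-heuristic would apply the low-level heuristic with the highest current score and feed the observed improvement back through `update`.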
Open AccessArticle
Refinement of Background-Subtraction Methods Based on Convolutional Neural Network Features for Dynamic Background
Algorithms 2019, 12(7), 128; https://doi.org/10.3390/a12070128
Received: 28 April 2019 / Revised: 20 June 2019 / Accepted: 25 June 2019 / Published: 27 June 2019
Viewed by 417 | PDF Full-text (4317 KB) | HTML Full-text | XML Full-text
Abstract
Advancing background-subtraction methods in dynamic scenes is an ongoing and timely goal for many researchers. Recently, background subtraction methods have been developed with deep convolutional features, which have improved their performance. However, most of these deep methods are supervised, apply only to a specific scene, and have a high computational cost. In contrast, traditional background subtraction methods have low computational costs and can be applied to general scenes. Therefore, in this paper, we propose an unsupervised and concise method based on the features learned from a deep convolutional neural network to refine traditional background subtraction methods. In the proposed method, the low-level features of an input image are extracted from the lower layers of a pretrained convolutional neural network, and the main features are retained to further establish the dynamic background model. Experimental evaluation on dynamic scenes demonstrates that the proposed method significantly improves the performance of traditional background subtraction methods. Full article
(This article belongs to the Special Issue Deep Learning for Image and Video Understanding)
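The refinement idea above can be sketched as a per-pixel check of learned features against a background feature model, keeping a traditional method's foreground pixels only where the features also disagree with the background. This is a minimal illustrative sketch assuming the CNN features are already extracted; the function name, threshold, and distance measure are assumptions, not the paper's exact procedure:

```python
# Hypothetical sketch: suppress dynamic-background false positives in a raw
# foreground mask by requiring the pixel's feature vector (e.g. from a lower
# layer of a pretrained CNN) to be far from the background feature model.
import numpy as np

def refine_mask(raw_mask, feat, bg_feat, tau=0.5):
    """raw_mask: H x W bool mask from a traditional method.
    feat, bg_feat: H x W x C feature maps (current frame / background model).
    Returns the refined H x W bool mask."""
    dist = np.linalg.norm(feat - bg_feat, axis=-1)   # per-pixel feature distance
    return raw_mask & (dist > tau)
```

Pixels that the traditional method flags but whose features match the background model (e.g. swaying leaves, rippling water) are removed from the mask.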
Open AccessArticle
Guidelines for Experimental Algorithmics: A Case Study in Network Analysis
Algorithms 2019, 12(7), 127; https://doi.org/10.3390/a12070127
Received: 1 June 2019 / Revised: 19 June 2019 / Accepted: 19 June 2019 / Published: 26 June 2019
Viewed by 511 | PDF Full-text (711 KB) | HTML Full-text | XML Full-text
Abstract
The field of network science is a highly interdisciplinary area; for the empirical analysis of network data, it draws algorithmic methodologies from several research fields. Hence, research procedures and descriptions of the technical results often differ, sometimes widely. In this paper we focus on methodologies for the experimental part of algorithm engineering for network analysis—an important ingredient for a research area with empirical focus. More precisely, we unify and adapt existing recommendations from different fields and propose universal guidelines—including statistical analyses—for the systematic evaluation of network analysis algorithms. This way, the behavior of newly proposed algorithms can be properly assessed and comparisons to existing solutions become meaningful. Moreover, as the main technical contribution, we provide SimexPal, a highly automated tool to perform and analyze experiments following our guidelines. To illustrate the merits of SimexPal and our guidelines, we apply them in a case study: we design, perform, visualize and evaluate experiments of a recent algorithm for approximating betweenness centrality, an important problem in network analysis. In summary, both our guidelines and SimexPal shall modernize and complement previous efforts in experimental algorithmics; they are not only useful for network analysis, but also in related contexts. Full article
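The guidelines above call for statistical analyses when comparing algorithms across benchmark instances. A minimal sketch of one such analysis, a paired two-sided sign test on per-instance running times (this is a generic illustration using only the standard library, not SimexPal's actual API or the paper's prescribed test):

```python
# Hypothetical sketch: paired sign test for "algorithm A is faster than B".
# Under the null hypothesis of no difference, the number of instances on
# which A wins is Binomial(n, 0.5); ties are discarded.
import math

def sign_test(a_times, b_times):
    """Return the two-sided p-value for paired per-instance running times."""
    wins = sum(a < b for a, b in zip(a_times, b_times))   # instances A is faster
    n = sum(a != b for a, b in zip(a_times, b_times))     # non-tied instances
    k = min(wins, n - wins)
    tail = sum(math.comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)
```

A small p-value indicates the observed win/loss pattern is unlikely under "no difference"; with 10 instances all won by A, the two-sided p-value is 2/1024.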
Open AccessArticle
A New Regularized Reconstruction Algorithm Based on Compressed Sensing for the Sparse Underdetermined Problem and Applications of One-Dimensional and Two-Dimensional Signal Recovery
Algorithms 2019, 12(7), 126; https://doi.org/10.3390/a12070126
Received: 27 May 2019 / Revised: 23 June 2019 / Accepted: 24 June 2019 / Published: 26 June 2019
Viewed by 404 | PDF Full-text (1681 KB) | HTML Full-text | XML Full-text
Abstract
Compressed sensing theory has been widely used to solve underdetermined equations in various fields and has achieved remarkable results. The regularized smooth L0 (ReSL0) reconstruction algorithm adds an error regularization term to the smooth L0 (SL0) algorithm, enabling good signal reconstruction in the presence of noise. However, the ReSL0 reconstruction algorithm still has some flaws: it retains the original optimization method of SL0 and the Gauss approximation function, but this method suffers from a sawtooth effect in the later optimization stage, and its convergence is not ideal. Therefore, we make two adjustments on the basis of the ReSL0 reconstruction algorithm: first, we introduce another function, the CIPF function, which approximates the L0 norm better than the Gauss function; second, we combine the steepest descent method and Newton's method in the algorithm optimization. A novel regularized recovery algorithm named combined regularized smooth L0 (CReSL0) is then proposed. Under the same experimental conditions, the CReSL0 algorithm is compared with other popular reconstruction algorithms. Overall, the CReSL0 algorithm achieves excellent reconstruction performance in terms of peak signal-to-noise ratio (PSNR) and run time for both one-dimensional Gauss signals and two-dimensional image reconstruction tasks. Full article
(This article belongs to the Special Issue Data Compression Algorithms and their Applications)
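The SL0 baseline that ReSL0 and CReSL0 build on replaces the L0 norm with a smooth Gauss approximation and anneals its width while projecting back onto the constraint. A minimal sketch of the classic SL0 iteration (with the standard defaults from the SL0 literature; this is the baseline, not the paper's CReSL0 with its CIPF function and Newton steps):

```python
# Sketch of smoothed-L0 (SL0): maximize sum_i exp(-x_i^2 / (2 sigma^2)) over
# the feasible set {x : A x = y} by a gradient step followed by projection,
# while annealing sigma toward zero so the smooth objective approaches -||x||_0.
import numpy as np

def sl0(A, y, sigma_decay=0.5, mu=2.0, inner=3, sigma_min=1e-4):
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ y                        # minimum-L2-norm feasible start
    sigma = 2.0 * np.max(np.abs(x))       # start with a wide Gauss kernel
    while sigma > sigma_min:
        for _ in range(inner):
            delta = x * np.exp(-x**2 / (2 * sigma**2))  # (negated) gradient
            x = x - mu * delta                          # push toward sparsity
            x = x - A_pinv @ (A @ x - y)                # project onto A x = y
        sigma *= sigma_decay
    return x
```

The Gauss kernel's "sawtooth" behavior at small sigma is exactly what the abstract criticizes; CReSL0 swaps in the CIPF function and a steepest-descent/Newton combination at this step.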
Algorithms EISSN 1999-4893, Published by MDPI AG, Basel, Switzerland