
Table of Contents

Algorithms, Volume 11, Issue 5 (May 2018)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF form. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader (external link).
Displaying articles 1-22
Open Access Article A Novel Design of Sparse Prototype Filter for Nearly Perfect Reconstruction Cosine-Modulated Filter Banks
Algorithms 2018, 11(5), 77; https://doi.org/10.3390/a11050077
Received: 15 April 2018 / Revised: 8 May 2018 / Accepted: 21 May 2018 / Published: 22 May 2018
PDF Full-text (504 KB) | HTML Full-text | XML Full-text
Abstract
Cosine-modulated filter banks play a major role in digital signal processing. Sparse FIR filter banks have lower implementation complexity than full filter banks, while keeping a good performance level. This paper presents a fast design paradigm for sparse nearly perfect-reconstruction (NPR) cosine-modulated filter banks. First, an approximation function is introduced to reduce the non-convex quadratically constrained optimization problem to a linearly constrained optimization problem. Then, the desired sparse linear phase FIR prototype filter is derived through the orthogonal matching pursuit (OMP) performed under the weighted l2 norm. The simulation results demonstrate that the proposed scheme is an effective paradigm to design sparse NPR cosine-modulated filter banks. Full article
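The central idea of a sparse linear-phase prototype filter can be illustrated with a much simpler stand-in for the paper's OMP procedure: design a windowed-sinc prototype, then zero all but the largest-magnitude taps while preserving linear-phase symmetry. This hard-thresholding rule and the function names are illustrative assumptions, not the authors' method.

```python
import math

def lowpass_prototype(num_taps, cutoff):
    """Windowed-sinc linear-phase FIR lowpass prototype (Hamming window)."""
    m = num_taps - 1
    h = []
    for n in range(num_taps):
        x = n - m / 2
        ideal = 2 * cutoff if x == 0 else math.sin(2 * math.pi * cutoff * x) / (math.pi * x)
        w = 0.54 - 0.46 * math.cos(2 * math.pi * n / m)  # Hamming window
        h.append(ideal * w)
    return h

def sparsify(h, keep):
    """Zero all but the `keep` largest-magnitude taps; also keep each tap's
    mirror so the filter stays linear-phase (symmetric)."""
    idx = sorted(range(len(h)), key=lambda i: -abs(h[i]))[:keep]
    kept = set(idx) | {len(h) - 1 - i for i in idx}
    return [c if i in kept else 0.0 for i, c in enumerate(h)]
```

Unlike this magnitude threshold, the paper's OMP selects taps greedily under a weighted l2 error criterion, but the resulting object is the same: a symmetric prototype with few nonzero taps.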

Open Access Article PHEFT: Pessimistic Image Processing Workflow Scheduling for DSP Clusters
Algorithms 2018, 11(5), 76; https://doi.org/10.3390/a11050076
Received: 27 February 2018 / Revised: 9 April 2018 / Accepted: 9 April 2018 / Published: 22 May 2018
Cited by 1 | PDF Full-text (2000 KB) | HTML Full-text | XML Full-text
Abstract
We address image processing workflow scheduling problems on a multicore digital signal processor cluster. We present an experimental study of scheduling strategies that include task labeling, prioritization, resource selection, and digital signal processor scheduling. We apply these strategies in the context of executing the Ligo and Montage applications. To provide effective guidance in choosing a good strategy, we present a joint analysis of three conflicting goals based on performance degradation. A case study is given, and experimental results demonstrate that a pessimistic scheduling approach provides the best optimization criteria trade-offs. The Pessimistic Heterogeneous Earliest Finish Time scheduling algorithm performs well in different scenarios with a variety of workloads and cluster configurations. Full article
(This article belongs to the Special Issue Algorithms for Scheduling Problems) Printed Edition available
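PHEFT builds on Heterogeneous Earliest Finish Time (HEFT), whose prioritization stage assigns every task an "upward rank": its own cost plus the heaviest path of downstream work. A minimal sketch of that ranking step, assuming a single average cost per task and omitting communication costs:

```python
def upward_ranks(succ, cost):
    """rank_u(t) = cost(t) + max over successors of rank_u(s).
    HEFT schedules tasks in decreasing rank order."""
    ranks = {}
    def rank(t):
        if t not in ranks:
            ranks[t] = cost[t] + max((rank(s) for s in succ.get(t, ())), default=0)
        return ranks[t]
    for t in cost:
        rank(t)
    return ranks
```

For a diamond DAG A→{B,C}→D with costs A=2, B=3, C=1, D=4, the ranks come out 9, 7, 5, 4, so HEFT would dispatch tasks in the order A, B, C, D.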

Open Access Article A New Oren–Nayar Shape-from-Shading Approach for 3D Reconstruction Using High-Order Godunov-Based Scheme
Algorithms 2018, 11(5), 75; https://doi.org/10.3390/a11050075
Received: 17 April 2018 / Revised: 13 May 2018 / Accepted: 16 May 2018 / Published: 18 May 2018
PDF Full-text (1342 KB) | HTML Full-text | XML Full-text
Abstract
3D shape reconstruction from images has been an important topic in the field of robot vision. Shape-From-Shading (SFS) is a classical method for determining the shape of a 3D surface from a single intensity image. The Lambertian reflectance is a fundamental assumption in conventional SFS approaches. Unfortunately, when used to characterize diffuse reflection, the Lambertian model has been shown to be inaccurate. In this paper, we present a new SFS approach for 3D reconstruction of diffuse surfaces whose reflection attribute is approximated by the Oren–Nayar reflection model. The partial differential Image Irradiance Equation (IIR) is set up with this model under a single distant point light source and an orthographic camera projection whose direction coincides with the light source. Then, the IIR is converted into an eikonal equation by solving a quadratic equation that includes the 3D surface shape. The viscosity solution of the resulting eikonal equation is approximated by using the high-order Godunov-based scheme that is accelerated by means of an alternating sweeping strategy. We conduct experiments on synthetic and real-world images, and the experimental results illustrate the effectiveness of the presented approach. Full article
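The eikonal solver the abstract describes can be sketched in its simplest form: a first-order Godunov upwind update swept alternately over the grid (the paper uses a high-order variant of the same machinery). This sketch solves |grad T| = 1 from a point source; grid size and sweep count are illustrative.

```python
import math

def fast_sweep_eikonal(n, source, h=1.0, sweeps=8):
    """First-order Godunov fast sweeping for |grad T| = 1 on an n x n grid."""
    INF = float("inf")
    T = [[INF] * n for _ in range(n)]
    si, sj = source
    T[si][sj] = 0.0
    orders = [(1, 1), (-1, 1), (1, -1), (-1, -1)]   # alternating sweep directions
    for s in range(sweeps):
        di, dj = orders[s % 4]
        irange = range(n) if di == 1 else range(n - 1, -1, -1)
        jrange = range(n) if dj == 1 else range(n - 1, -1, -1)
        for i in irange:
            for j in jrange:
                if (i, j) == (si, sj):
                    continue
                a = min(T[i - 1][j] if i > 0 else INF, T[i + 1][j] if i < n - 1 else INF)
                b = min(T[i][j - 1] if j > 0 else INF, T[i][j + 1] if j < n - 1 else INF)
                if a == INF and b == INF:
                    continue
                if abs(a - b) >= h:      # one-sided (Godunov upwind) update
                    t = min(a, b) + h
                else:                    # two-sided quadratic update
                    t = 0.5 * (a + b + math.sqrt(2 * h * h - (a - b) ** 2))
                T[i][j] = min(T[i][j], t)
    return T
```

In the SFS setting the right-hand side is not 1 but a function of the image irradiance; the sweeping structure is unchanged.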

Open Access Article Using Metaheuristics on the Multi-Depot Vehicle Routing Problem with Modified Optimization Criterion
Algorithms 2018, 11(5), 74; https://doi.org/10.3390/a11050074
Received: 8 March 2018 / Revised: 16 May 2018 / Accepted: 17 May 2018 / Published: 18 May 2018
Cited by 1 | PDF Full-text (2663 KB) | HTML Full-text | XML Full-text
Abstract
This article deals with the modified Multi-Depot Vehicle Routing Problem (MDVRP). The modification consists of altering the optimization criterion. The optimization criterion of the standard MDVRP is to minimize the total sum of routes of all vehicles, whereas the criterion of modified MDVRP (M-MDVRP) is to minimize the longest route of all vehicles, i.e., the time to conduct the routing operation is as short as possible. For this problem, a metaheuristic algorithm—based on the Ant Colony Optimization (ACO) theory and developed by the author for solving the classic MDVRP instances—has been modified and adapted for M-MDVRP. In this article, an additional deterministic optimization process which further enhances the original ACO algorithm has been proposed. For evaluation of results, Cordeau’s benchmark instances are used. Full article
(This article belongs to the Special Issue Metaheuristics for Rich Vehicle Routing Problems)
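The modification the abstract describes fits in a few lines: standard MDVRP minimizes the sum of all route lengths, while M-MDVRP minimizes the longest single route (the makespan of the routing operation). A minimal sketch of both objective evaluations; the distance-matrix shape is an assumption for illustration:

```python
def total_distance(route, dist):
    """Length of one route given as a node sequence, e.g. [depot, c1, c2, depot]."""
    return sum(dist[a][b] for a, b in zip(route, route[1:]))

def mdvrp_objectives(routes, dist):
    """Return (sum of route lengths, longest route): the standard MDVRP
    criterion and the modified M-MDVRP criterion, respectively."""
    lengths = [total_distance(r, dist) for r in routes]
    return sum(lengths), max(lengths)
```

A solution that is optimal for the sum can be badly unbalanced under the max, which is why the author's ACO variant and its deterministic post-optimization target the second value.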

Open Access Article The NIRS Brain AnalyzIR Toolbox
Algorithms 2018, 11(5), 73; https://doi.org/10.3390/a11050073
Received: 30 March 2018 / Revised: 5 May 2018 / Accepted: 12 May 2018 / Published: 16 May 2018
Cited by 2 | PDF Full-text (3182 KB) | HTML Full-text | XML Full-text
Abstract
Functional near-infrared spectroscopy (fNIRS) is a noninvasive neuroimaging technique that uses low-levels of light (650–900 nm) to measure changes in cerebral blood volume and oxygenation. Over the last several decades, this technique has been utilized in a growing number of functional and resting-state brain studies. The lower operation cost, portability, and versatility of this method make it an alternative to methods such as functional magnetic resonance imaging for studies in pediatric and special populations and for studies without the confining limitations of a supine and motionless acquisition setup. However, the analysis of fNIRS data poses several challenges stemming from the unique physics of the technique, the unique statistical properties of data, and the growing diversity of non-traditional experimental designs being utilized in studies due to the flexibility of this technology. For these reasons, specific analysis methods for this technology must be developed. In this paper, we introduce the NIRS Brain AnalyzIR toolbox as an open-source Matlab-based analysis package for fNIRS data management, pre-processing, and first- and second-level (i.e., single subject and group-level) statistical analysis. Here, we describe the basic architectural format of this toolbox, which is based on the object-oriented programming paradigm. We also detail the algorithms for several of the major components of the toolbox including statistical analysis, probe registration, image reconstruction, and region-of-interest based statistics. Full article

Open Access Article Gray Wolf Optimization Algorithm for Multi-Constraints Second-Order Stochastic Dominance Portfolio Optimization
Algorithms 2018, 11(5), 72; https://doi.org/10.3390/a11050072
Received: 11 April 2018 / Revised: 10 May 2018 / Accepted: 11 May 2018 / Published: 15 May 2018
PDF Full-text (417 KB) | HTML Full-text | XML Full-text
Abstract
In the field of investment, how to construct a suitable portfolio based on historical data is still an important issue. The second-order stochastic dominance constraint is a branch of stochastic dominance constraint theory. However, considering only second-order stochastic dominance constraints does not conform to the investment environment under realistic conditions. Therefore, we added a series of constraints that reflect the realistic investment environment, such as skewness and kurtosis, to the basic portfolio optimization model. In addition, we consider two kinds of risk measures: conditional value at risk and value at risk. Most importantly, in this paper, we introduce the Gray Wolf Optimization (GWO) algorithm, which simulates the gray wolf's social hierarchy and predatory behavior, into the portfolio optimization model. In the numerical experiments, we compare the GWO algorithm with the Particle Swarm Optimization (PSO) algorithm and the Genetic Algorithm (GA). The experimental results show that the GWO algorithm not only has better optimization ability and efficiency, but also that the portfolio optimized by the GWO algorithm outperforms the FTSE100 index, which proves that the GWO algorithm has great potential in portfolio optimization. Full article
(This article belongs to the Special Issue Algorithms in Computational Finance)
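The GWO update referred to in the abstract can be sketched compactly: every wolf moves toward the three current best solutions (alpha, beta, delta) under an exploration coefficient a that decays linearly over the iterations. This is a minimal sketch on a toy objective, not the authors' constrained portfolio model; population size, bounds, and iteration count are illustrative.

```python
import random

def gwo(f, dim, n_wolves=20, iters=200, lo=-5.0, hi=5.0, seed=1):
    """Minimal Grey Wolf Optimizer: minimize f over [lo, hi]^dim."""
    rng = random.Random(seed)
    wolves = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_wolves)]
    for it in range(iters):
        wolves.sort(key=f)
        leaders = [w[:] for w in wolves[:3]]   # alpha, beta, delta (copies)
        a = 2.0 * (1 - it / iters)             # decays linearly from 2 to 0
        for w in wolves:
            for d in range(dim):
                x = 0.0
                for leader in leaders:
                    A = a * (2 * rng.random() - 1)
                    C = 2 * rng.random()
                    D = abs(C * leader[d] - w[d])
                    x += leader[d] - A * D     # pull toward this leader
                w[d] = min(hi, max(lo, x / 3)) # average of the three pulls
    return min(wolves, key=f)

best = gwo(lambda x: sum(v * v for v in x), dim=2)
```

In the paper, f would be the portfolio objective with the stochastic dominance, skewness, kurtosis, and risk-measure constraints folded in.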

Open Access Article Improving Monarch Butterfly Optimization Algorithm with Self-Adaptive Population
Algorithms 2018, 11(5), 71; https://doi.org/10.3390/a11050071
Received: 21 March 2018 / Revised: 27 April 2018 / Accepted: 27 April 2018 / Published: 14 May 2018
PDF Full-text (382 KB) | HTML Full-text | XML Full-text
Abstract
Inspired by the migration behavior of monarch butterflies in nature, Wang et al. proposed a novel, promising, intelligent swarm-based algorithm, monarch butterfly optimization (MBO), for tackling global optimization problems. In the basic MBO algorithm, the butterflies in land 1 (subpopulation 1) and land 2 (subpopulation 2) are determined by the parameter p, which is unchanged during the entire optimization process. In our present work, a self-adaptive strategy is introduced to dynamically adjust the butterflies in land 1 and land 2. Accordingly, the population sizes of subpopulations 1 and 2 change dynamically in a linear way as the algorithm evolves. After introducing this self-adaptive strategy, an improved MBO algorithm, called monarch butterfly optimization with self-adaptive population (SPMBO), is put forward. In SPMBO, only newly generated individuals that are better than their predecessors are accepted as new individuals for the next generation in the migration operation. Finally, the proposed SPMBO algorithm is benchmarked on thirteen standard test functions with dimensions of 30 and 60. The experimental results indicate that the search ability of the proposed SPMBO approach significantly outperforms the basic MBO algorithm on most test functions. This also implies that the self-adaptive strategy is an effective way to improve the performance of the basic MBO algorithm. Full article
(This article belongs to the Special Issue Algorithms for Decision Making)
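The self-adaptive split can be sketched in a few lines: the fraction p of butterflies assigned to land 1 varies linearly with the generation instead of staying fixed (p = 5/12 in basic MBO). The schedule endpoints below are illustrative assumptions; the abstract only states that the change is linear.

```python
def subpopulation_sizes(pop_size, gen, max_gen, p_start=5/12, p_end=7/12):
    """Linearly interpolate the land-1 fraction p over the generations and
    split the population accordingly. p_start/p_end are illustrative."""
    p = p_start + (p_end - p_start) * gen / max_gen
    n1 = round(pop_size * p)
    return n1, pop_size - n1
```

The two sizes always sum to the full population, so the migration and adjusting operators of MBO apply unchanged to the resized subpopulations.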

Open Access Article Estimating Functional Connectivity Symmetry between Oxy- and Deoxy-Haemoglobin: Implications for fNIRS Connectivity Analysis
Algorithms 2018, 11(5), 70; https://doi.org/10.3390/a11050070
Received: 30 March 2018 / Revised: 3 May 2018 / Accepted: 9 May 2018 / Published: 11 May 2018
PDF Full-text (14836 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Functional Near InfraRed Spectroscopy (fNIRS) connectivity analysis is often performed using the measured oxy-haemoglobin (HbO2) signal, while the deoxy-haemoglobin (HHb) is largely ignored. The in-common information of the connectivity networks of both HbO2 and HHb is not regularly reported, or worse, assumed to be similar. Here we describe a methodology that allows the estimation of the symmetry between the functional connectivity (FC) networks of HbO2 and HHb and propose a differential symmetry index (DSI) indicative of the in-common physiological information. Our hypothesis is that the symmetry between FC networks associated with HbO2 and HHb is above what should be expected from random networks. FC analysis was done on fNIRS data collected from six freely-moving healthy volunteers over 16 locations on the prefrontal cortex during a real-world task in an out-of-the-lab environment. In addition, systemic data including breathing rate (BR) and heart rate (HR) were also synchronously collected and used within the FC analysis. FC networks for HbO2 and HHb were established independently using a Bayesian networks analysis. The DSI between both haemoglobin (Hb) networks with and without systemic influence was calculated. The relationship between the symmetry of HbO2 and HHb networks, including the segregational and integrational characteristics of the networks (modularity and global efficiency, respectively), was further described. Consideration of systemic information increases the path lengths of the connectivity networks by 3%. Sparse networks exhibited higher asymmetry than dense networks. Importantly, the symmetry between our experimental HbO2 and HHb connectivity networks departs from random (t-test: t(509) = 26.39, p < 0.0001). The DSI distribution suggests a threshold of 0.2 for deciding whether both HbO2 and HHb FC networks ought to be studied. For sparse FC networks, analysis of both haemoglobin species is strongly recommended. Our DSI can provide a quantifiable guideline for deciding whether to proceed with single or both Hb networks in FC analysis. Full article
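The general notion of symmetry between two FC networks can be illustrated with a simple edge-overlap (Jaccard) measure between their adjacency matrices. This is only an illustration of the comparison being made; the paper's DSI is defined differently and is not reproduced here.

```python
def edge_set(adj):
    """Undirected edge set of a 0/1 adjacency matrix."""
    n = len(adj)
    return {(i, j) for i in range(n) for j in range(i + 1, n) if adj[i][j]}

def symmetry_index(adj_hbo2, adj_hhb):
    """Fraction of edges common to both networks (Jaccard overlap).
    Illustrative stand-in, not the authors' DSI."""
    e1, e2 = edge_set(adj_hbo2), edge_set(adj_hhb)
    union = e1 | e2
    return len(e1 & e2) / len(union) if union else 1.0
```

A value near 1 would mean the HbO2 and HHb networks carry largely the same connections; a value near 0, that analysing only one species discards information.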

Open Access Article A Multi-Stage Algorithm for a Capacitated Vehicle Routing Problem with Time Constraints
Algorithms 2018, 11(5), 69; https://doi.org/10.3390/a11050069
Received: 28 February 2018 / Revised: 16 April 2018 / Accepted: 16 April 2018 / Published: 10 May 2018
PDF Full-text (1556 KB) | HTML Full-text | XML Full-text
Abstract
The Vehicle Routing Problem (VRP) is one of the most studied optimization problems and is implemented in a huge variety of industrial applications. The objective is to design a set of minimum-cost paths for each vehicle in order to serve a given set of customers. Our attention is focused on a variant of the VRP, the capacitated vehicle routing problem, applied to natural gas distribution networks. Managing natural gas distribution networks includes facing a variety of decisions ranging from human resources and material resources to facilities, infrastructures, and carriers. Despite the numerous papers available on the vehicle routing problem, there are only a few that study and analyze the problems occurring in capillary distribution operations such as those found in a metropolitan area. Therefore, this work introduces a new algorithm based on the Saving Algorithm heuristic approach which aims to solve a Capacitated Vehicle Routing Problem with time and distance constraints. This joint algorithm minimizes the transportation costs and maximizes the workload according to customer demand within the constraints of a time window. Results from a real case study in a natural gas distribution network demonstrate the effectiveness of the approach. Full article
(This article belongs to the Special Issue Metaheuristics for Rich Vehicle Routing Problems)
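The Saving Algorithm the abstract builds on starts from Clarke and Wright's savings value s(i, j) = d(0, i) + d(0, j) - d(i, j): the distance saved by serving customers i and j on one route instead of two separate out-and-back trips from the depot. A minimal sketch of that computation; the paper's capacity, time, and distance checks are omitted here.

```python
def clarke_wright_savings(dist, depot=0):
    """Return (saving, i, j) triples for all customer pairs, best first.
    `dist` is a symmetric distance matrix with the depot at index `depot`."""
    customers = [c for c in range(len(dist)) if c != depot]
    savings = [(dist[depot][i] + dist[depot][j] - dist[i][j], i, j)
               for i in customers for j in customers if i < j]
    return sorted(savings, reverse=True)
```

A route builder then walks this sorted list, merging routes whenever the merge respects the capacity and time-window constraints.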

Open Access Article Hybrid Flow Shop with Unrelated Machines, Setup Time, and Work in Progress Buffers for Bi-Objective Optimization of Tortilla Manufacturing
Algorithms 2018, 11(5), 68; https://doi.org/10.3390/a11050068
Received: 27 February 2018 / Revised: 30 April 2018 / Accepted: 1 May 2018 / Published: 9 May 2018
Cited by 1 | PDF Full-text (13706 KB) | HTML Full-text | XML Full-text
Abstract
We address a scheduling problem in an actual environment of the tortilla industry. Since the problem is NP hard, we focus on suboptimal scheduling solutions. We concentrate on a complex multistage, multiproduct, multimachine, and batch production environment considering completion time and energy consumption optimization criteria. The production of wheat-based and corn-based tortillas of different styles is considered. The proposed bi-objective algorithm is based on the known Nondominated Sorting Genetic Algorithm II (NSGA-II). To tune it up, we apply statistical analysis of multifactorial variance. A branch and bound algorithm is used to assert obtained performance. We show that the proposed algorithms can be efficiently used in a real production environment. The mono-objective and bi-objective analyses provide a good compromise between saving energy and efficiency. To demonstrate the practical relevance of the results, we examine our solution on real data. We find that it can save 48% of production time and 47% of electricity consumption over the actual production. Full article
(This article belongs to the Special Issue Algorithms for Scheduling Problems) Printed Edition available

Open Access Article Automated Processing of fNIRS Data—A Visual Guide to the Pitfalls and Consequences
Algorithms 2018, 11(5), 67; https://doi.org/10.3390/a11050067
Received: 6 April 2018 / Revised: 1 May 2018 / Accepted: 2 May 2018 / Published: 8 May 2018
Cited by 1 | PDF Full-text (24989 KB) | HTML Full-text | XML Full-text
Abstract
With the rapid increase in new fNIRS users employing commercial software, there is a concern that many studies are biased by suboptimal processing methods. The purpose of this study is to provide a visual reference showing the effects of different processing methods, to help inform researchers in setting up and evaluating a processing pipeline. We show the significant impact of pre- and post-processing choices and stress again how important it is to combine data from both hemoglobin species in order to make accurate inferences about the activation site. Full article

Open Access Article Single Machine Scheduling Problem with Interval Processing Times and Total Completion Time Objective
Algorithms 2018, 11(5), 66; https://doi.org/10.3390/a11050066
Received: 2 March 2018 / Revised: 14 April 2018 / Accepted: 23 April 2018 / Published: 7 May 2018
Cited by 1 | PDF Full-text (358 KB) | HTML Full-text | XML Full-text
Abstract
We consider a single machine scheduling problem with uncertain durations of the given jobs. The objective function is minimizing the sum of the job completion times. We apply the stability approach to the considered uncertain scheduling problem using a relative perimeter of the optimality box as a stability measure of the optimal job permutation. We investigated properties of the optimality box and developed algorithms for constructing job permutations that have the largest relative perimeters of the optimality box. Computational results for constructing such permutations showed that they provided an average error of less than 0.74% for the solved uncertain problems. Full article
(This article belongs to the Special Issue Algorithms for Scheduling Problems) Printed Edition available

Open Access Article Control Strategy of Speed Servo Systems Based on Deep Reinforcement Learning
Algorithms 2018, 11(5), 65; https://doi.org/10.3390/a11050065
Received: 8 March 2018 / Revised: 28 April 2018 / Accepted: 3 May 2018 / Published: 5 May 2018
PDF Full-text (2071 KB) | HTML Full-text | XML Full-text
Abstract
We developed a novel control strategy for speed servo systems based on deep reinforcement learning. The control parameters of speed servo systems are difficult to regulate in practical applications, and problems of torque disturbance and inertia mutation occur during the operation process. A class of reinforcement learning agents for speed servo systems is designed based on the deep deterministic policy gradient algorithm. The agents are trained on a large amount of system data. After learning is complete, they can automatically adjust the control parameters of servo systems and compensate for the current online. Consequently, a servo system can always maintain good control performance. Numerous experiments are conducted to verify the proposed control strategy. Results show that the proposed method can achieve proportional–integral–derivative automatic tuning and effectively overcome the effects of inertia mutation and torque disturbance. Full article
(This article belongs to the Special Issue Algorithms for PID Controller)
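The quantity such an agent optimizes can be illustrated with a toy speed loop: simulate a first-order plant under PID control and return the negative integrated squared error, the kind of scalar reward a DDPG agent could maximize while adjusting (kp, ki, kd). The plant model, gains, and reward here are illustrative assumptions, not the authors' servo model.

```python
def evaluate_pid(kp, ki, kd, setpoint=1.0, dt=0.01, steps=500, tau=0.1, gain=1.0):
    """Simulate a first-order speed loop dy/dt = (gain*u - y)/tau under PID
    control and return -integral(e^2 dt) as a reward (higher is better)."""
    y, integ, prev_err, cost = 0.0, 0.0, setpoint, 0.0
    for _ in range(steps):
        err = setpoint - y
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv   # PID control law
        y += dt * (gain * u - y) / tau           # Euler step of the plant
        prev_err = err
        cost += err * err * dt
    return -cost
```

A tuning agent (or any optimizer) simply searches the gain space for the highest reward; well-chosen PI gains should clearly beat a weak proportional-only controller on this loop.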

Open Access Article Utility Distribution Strategy of the Task Agents in Coalition Skill Games
Algorithms 2018, 11(5), 64; https://doi.org/10.3390/a11050064
Received: 27 March 2018 / Revised: 24 April 2018 / Accepted: 2 May 2018 / Published: 5 May 2018
PDF Full-text (1982 KB) | HTML Full-text | XML Full-text
Abstract
This paper focuses on the rational distribution of task utilities in coalition skill games, a restricted form of coalition game in which each service agent has a set of skills and each task agent needs a set of skills in order to be completed. These two types of agents are assumed to be self-interested. Given the task selection strategy of service agents, the utility distribution strategies of task agents play an important role in improving their individual revenues and the system's total revenue. The problem that needs to be resolved is how to design the task selection strategies of the service agents and the utility distribution strategies of the task agents so that self-interested decisions improve overall system performance. However, to the best of our knowledge, this problem has been the topic of very few studies and has not been properly addressed. To address it, a task allocation algorithm for self-interested agents in a coalition skill game is proposed; it distributes the utilities of tasks to the needed skills according to the powers of the service agents that possess the corresponding skills. The final simulation results verify the effectiveness of the algorithm. Full article

Open Access Article The Supplier Selection of the Marine Rescue Equipment Based on the Analytic Hierarchy Process (AHP)-Limited Diversity Factors Method
Algorithms 2018, 11(5), 63; https://doi.org/10.3390/a11050063
Received: 23 February 2018 / Revised: 18 April 2018 / Accepted: 3 May 2018 / Published: 4 May 2018
Cited by 1 | PDF Full-text (1201 KB) | HTML Full-text | XML Full-text
Abstract
Supplier selection is an important decision-making link in bidding activity. When the overall scores of several suppliers are similar, it is hard to obtain an accurate ranking of these suppliers. Applying the Diversity Factors Method (DFM) may lead to over-correction of weights, which would degrade the capability of indexes to reflect their importance. A Limited Diversity Factors Method (LDFM) based on entropy is presented in this paper to adjust the weights, relieving the over-correction in DFM and improving the discriminating capability of indexes in supplier selection. An example of salvage ship bidding demonstrates the advantages of the LDFM, in which the ranking of the overall scores of suppliers is more accurate. Full article
(This article belongs to the Special Issue Algorithms for Decision Making)
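The entropy idea underlying such weight adjustment is standard: indexes whose scores differ more across suppliers carry more discriminating information and receive larger weights. A minimal sketch of plain entropy weighting (the paper's LDFM additionally limits how far the correction may go, which is not modeled here):

```python
import math

def entropy_weights(matrix):
    """Entropy weighting for an m-suppliers x n-indexes score matrix.
    Uniform columns get entropy 1 and hence weight 0."""
    m, n = len(matrix), len(matrix[0])
    weights = []
    for j in range(n):
        col = [matrix[i][j] for i in range(m)]
        total = sum(col)
        probs = [v / total for v in col]
        e = -sum(p * math.log(p) for p in probs if p > 0) / math.log(m)
        weights.append(1 - e)
    s = sum(weights)
    return [w / s for w in weights]
```

An index on which all suppliers score identically cannot separate them, so its weight collapses to zero; LDFM's contribution is to bound this effect so that expert-assigned AHP importance is not washed out.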

Open Access Article A Feature-Weighted SVR Method Based on Kernel Space Feature
Algorithms 2018, 11(5), 62; https://doi.org/10.3390/a11050062
Received: 24 March 2018 / Revised: 23 April 2018 / Accepted: 2 May 2018 / Published: 4 May 2018
PDF Full-text (873 KB) | HTML Full-text | XML Full-text
Abstract
Support Vector Regression (SVR), which converts the original low-dimensional problem to a high-dimensional kernel space linear problem by introducing kernel functions, has been successfully applied in system modeling. Regarding the classical SVR algorithm, the value of the features has been taken into account, while its contribution to the model output is omitted. Therefore, the construction of the kernel space may not be reasonable. In the paper, a Feature-Weighted SVR (FW-SVR) is presented. The range of the feature is matched with its contribution by properly assigning the weight of the input features in data pre-processing. FW-SVR optimizes the distribution of the sample points in the kernel space to make the minimizing of the structural risk more reasonable. Four synthetic datasets and seven real datasets are applied. A superior generalization ability is obtained by the proposed method. Full article
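The mechanism in the abstract, weighting input features before the kernel is applied, can be shown in one function: scaling each feature by its weight reshapes the kernel-space distance so that influential features dominate it. A minimal sketch with an RBF kernel; the weighting scheme here is generic, not the paper's contribution-based assignment.

```python
import math

def weighted_rbf(x, z, weights, gamma=1.0):
    """RBF kernel on feature-weighted inputs: k(x, z) = exp(-gamma * ||w*(x-z)||^2).
    A zero weight removes that feature from the kernel-space distance."""
    d2 = sum((w * (a - b)) ** 2 for w, a, b in zip(weights, x, z))
    return math.exp(-gamma * d2)
```

Plugging such a kernel into any standard SVR solver yields the feature-weighted behavior: points that differ only in low-weight features stay close in kernel space.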

Open Access Article Relaxed Data Types as Consistency Conditions
Algorithms 2018, 11(5), 61; https://doi.org/10.3390/a11050061
Received: 27 February 2018 / Revised: 25 April 2018 / Accepted: 2 May 2018 / Published: 4 May 2018
PDF Full-text (293 KB) | HTML Full-text | XML Full-text
Abstract
In the quest for higher-performance shared data structures, weakening consistency conditions and relaxing the sequential specifications of data types are two of the primary tools available in the literature today. In this paper, we show that these two approaches are in many cases different ways to specify the same sets of allowed concurrent behaviors of a given shared data object. This equivalence allows us to use whichever description is clearer, simpler, or easier to achieve equivalent guarantees. Specifically, for three common data type relaxations, we define consistency conditions such that the combination of the new consistency condition and an unrelaxed type allows the same behaviors as Linearizability and the relaxed version of the data type. Conversely, for the consistency condition k-Atomicity, we define a new data type relaxation such that the behaviors allowed by the relaxed version of a data type, combined with Linearizability, are the same as those allowed by k-Atomicity and the original type. As an example of the possibilities opened by our new equivalence, we use standard techniques from the literature on consistency conditions to prove that the three data type relaxations we consider are not comparable to one another or to several similar known conditions. Finally, we show a particular class of data types where one of our newly-defined consistency conditions is comparable to, and stronger than, one of the known consistency conditions we consider. Full article
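To make "relaxing the sequential specification" concrete, here is a sketch of one relaxation style from the literature: a k-out queue whose dequeue may return any of the first k elements, trading strict FIFO order for (in concurrent implementations) reduced contention. This sequential sketch is illustrative and is not claimed to be one of the paper's three specific relaxations.

```python
import random

class KOutQueue:
    """Relaxed FIFO queue: dequeue returns some element among the first k.
    With k = 1 it degenerates to an ordinary FIFO queue."""
    def __init__(self, k, seed=0):
        self.k = k
        self.items = []
        self.rng = random.Random(seed)

    def enqueue(self, x):
        self.items.append(x)

    def dequeue(self):
        if not self.items:
            return None
        i = self.rng.randrange(min(self.k, len(self.items)))
        return self.items.pop(i)
```

The paper's equivalence says the same set of concurrent behaviors can instead be described as an *unrelaxed* queue under a suitably weakened consistency condition, which is often the easier formulation to reason about.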
Open Access Article Vessel Traffic Risk Assessment Based on Uncertainty Analysis in the Risk Matrix
Algorithms 2018, 11(5), 60; https://doi.org/10.3390/a11050060
Received: 5 April 2018 / Revised: 28 April 2018 / Accepted: 2 May 2018 / Published: 3 May 2018
PDF Full-text (1446 KB) | HTML Full-text | XML Full-text
Abstract
Uncertainty analysis is considered to be a necessary step in the process of vessel traffic risk assessment. The purpose of this study is to propose an uncertainty analysis algorithm that can be used to investigate the reliability of the risk assessment result. Probability and possibility distributions are used to quantify the two types of uncertainty identified in the risk assessment process. In addition, an appropriate time window is selected by considering the uncertainty of vessel traffic accident occurrence and the variation trend of the vessel traffic risk caused by maritime rules becoming operative. Vessel traffic accident data from the United Kingdom’s Marine Accident Investigation Branch are used for the case study. Based on a comparison with the common method of estimating vessel traffic risk and with uncertainty quantification that does not consider time window selection, the effectiveness of the proposed algorithms is verified; they can thus provide guidance for vessel traffic risk management. Full article
(This article belongs to the Special Issue Algorithms for Decision Making)
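As a minimal illustration of propagating probabilistic uncertainty through a risk matrix, here is a Monte Carlo sketch. The 3×3 matrix, frequency bands, and accident-rate parameters are invented for the example and are not the paper's model, which additionally uses possibility distributions for the second uncertainty type.

```python
import random

RISK_MATRIX = [  # rows: frequency band, columns: severity band -> risk level
    [1, 2, 3],
    [2, 3, 4],
    [3, 4, 5],
]

def freq_band(rate):
    # illustrative banding of accidents per year into matrix rows
    return 0 if rate < 1.0 else 1 if rate < 5.0 else 2

def risk_distribution(rate_mean, rate_sd, severity_band, n=10000, seed=42):
    """Sample the uncertain accident rate and return P(risk level)."""
    rng = random.Random(seed)
    counts = {}
    for _ in range(n):
        rate = max(0.0, rng.gauss(rate_mean, rate_sd))  # uncertain rate
        level = RISK_MATRIX[freq_band(rate)][severity_band]
        counts[level] = counts.get(level, 0) + 1
    return {lvl: c / n for lvl, c in sorted(counts.items())}

dist = risk_distribution(rate_mean=4.0, rate_sd=2.0, severity_band=1)
print(dist)  # a spread over risk levels instead of a single point value
```

The point of the exercise is that the assessment yields a distribution over risk levels, whose spread indicates how reliable a single-cell risk matrix reading would be.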
Open AccessArticle Decision-Making Approach Based on Neutrosophic Rough Information
Algorithms 2018, 11(5), 59; https://doi.org/10.3390/a11050059
Received: 13 February 2018 / Revised: 20 April 2018 / Accepted: 1 May 2018 / Published: 3 May 2018
Cited by 3 | PDF Full-text (525 KB) | HTML Full-text | XML Full-text
Abstract
Rough set theory and neutrosophic set theory are mathematical models for dealing with incomplete and vague information. These two theories can be combined into a framework for modeling and processing incomplete information in information systems. The neutrosophic rough set hybrid model thus gives the system more precision, flexibility, and compatibility than the classical and fuzzy models. In this research study, we develop neutrosophic rough digraphs based on the neutrosophic rough hybrid model. Moreover, we discuss regular neutrosophic rough digraphs, and we solve decision-making problems using our proposed hybrid model. Finally, we give a comparative analysis of two hybrid models, namely, neutrosophic rough digraphs and rough neutrosophic digraphs. Full article
(This article belongs to the Special Issue Algorithms for Decision Making)
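As background for the rough component of the hybrid model, here is a sketch of the classical (non-neutrosophic) lower and upper rough approximations of a set under an equivalence relation. The universe, partition, and target set are toy data for illustration only.

```python
def rough_approximations(universe, eq_class, X):
    """Classical rough approximations of X given each element's equivalence class."""
    lower = {x for x in universe if eq_class[x] <= X}  # [x]_R contained in X
    upper = {x for x in universe if eq_class[x] & X}   # [x]_R intersects X
    return lower, upper

U = {1, 2, 3, 4}
# partition {1, 2}, {3}, {4} of the universe
classes = {1: frozenset({1, 2}), 2: frozenset({1, 2}),
           3: frozenset({3}), 4: frozenset({4})}
lower, upper = rough_approximations(U, classes, {1, 3})
print(sorted(lower), sorted(upper))  # [3] [1, 2, 3]
```

The gap between the two approximations (the boundary region) is what the neutrosophic extension further grades with truth, indeterminacy, and falsity memberships.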
Open AccessFeature PaperArticle Computing Fault-Containment Times of Self-Stabilizing Algorithms Using Lumped Markov Chains
Algorithms 2018, 11(5), 58; https://doi.org/10.3390/a11050058
Received: 10 February 2018 / Revised: 28 March 2018 / Accepted: 2 May 2018 / Published: 3 May 2018
PDF Full-text (947 KB) | HTML Full-text | XML Full-text
Abstract
The analysis of self-stabilizing algorithms is often limited to the worst-case stabilization time starting from an arbitrary state, i.e., a state resulting from a sequence of faults. Considering that these algorithms are intended to provide fault tolerance in the long run, this is not the most relevant metric. A common situation is that a running system is in a legitimate state when hit by a single fault. This event has a much higher probability than multiple concurrent faults. Therefore, the worst-case time to recover from a single fault is more relevant than the recovery time from a large number of faults. This paper presents techniques to derive upper bounds for the mean time to recover from a single fault for self-stabilizing algorithms, based on Markov chains in combination with lumping. To illustrate the applicability of the techniques, they are applied to a new self-stabilizing coloring algorithm. Full article
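The mean time to recover can be read off an absorbing Markov chain: lump all legitimate states into one absorbing state, keep the faulty states as transient, and solve the standard absorption-time system (I − Q)t = 1. The following is an illustrative sketch, not the paper's technique in full; the two transient states and their transition probabilities are invented for the example.

```python
def mean_recovery_times(Q):
    """Solve (I - Q) t = 1 for a 2x2 substochastic Q by Cramer's rule.

    Q holds transition probabilities among the transient (faulty) states;
    the lumped legitimate state is absorbing. t[i] is the expected number
    of steps to reach it from transient state i.
    """
    a, b = 1 - Q[0][0], -Q[0][1]
    c, d = -Q[1][0], 1 - Q[1][1]
    det = a * d - b * c
    t1 = (d - b) / det  # expected steps to absorption from state 1
    t2 = (a - c) / det  # expected steps to absorption from state 2
    return t1, t2

# Invented example: states 1 and 2 lump configurations at distance 1 and 2
# from the legitimate set; remaining probability mass flows to legitimacy.
Q = [[0.2, 0.1],   # P(1->1), P(1->2)
     [0.3, 0.4]]   # P(2->1), P(2->2)

t1, t2 = mean_recovery_times(Q)
print(round(t1, 4), round(t2, 4))  # 1.5556 2.4444
```

Lumping matters here because it collapses the (often exponential) legitimate-state space before the linear system is solved; for larger transient spaces one would replace Cramer's rule with a general linear solver.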
Open AccessArticle Optimal Control Algorithms and Their Analysis for Short-Term Scheduling in Manufacturing Systems
Algorithms 2018, 11(5), 57; https://doi.org/10.3390/a11050057
Received: 18 February 2018 / Revised: 15 April 2018 / Accepted: 16 April 2018 / Published: 3 May 2018
Cited by 1 | PDF Full-text (3954 KB) | HTML Full-text | XML Full-text
Abstract
Current literature presents optimal control computational algorithms with regard to state, control, and conjugate variable spaces. This paper first analyses the advantages and limitations of different optimal control computational methods and algorithms which can be used for short-term scheduling. Second, it develops an optimal control computational algorithm that allows for the solution of short-term scheduling problems in an optimal manner. Moreover, a qualitative and quantitative analysis of the manufacturing system scheduling problem is presented. Results highlight computer experiments with a scheduling software prototype as well as potential future research avenues. Full article
(This article belongs to the Special Issue Algorithms for Scheduling Problems) Printed Edition available
Open AccessArticle BELMKN: Bayesian Extreme Learning Machines Kohonen Network
Algorithms 2018, 11(5), 56; https://doi.org/10.3390/a11050056
Received: 1 April 2018 / Revised: 22 April 2018 / Accepted: 24 April 2018 / Published: 27 April 2018
Cited by 2 | PDF Full-text (18203 KB) | HTML Full-text | XML Full-text
Abstract
This paper proposes the Bayesian Extreme Learning Machine Kohonen Network (BELMKN) framework to solve the clustering problem. The BELMKN framework uses three levels in processing nonlinearly separable datasets to obtain efficient clustering in terms of accuracy. In the first level, the Extreme Learning Machine (ELM)-based feature learning approach captures the nonlinearity in the data distribution by mapping it onto a d-dimensional space. In the second level, the ELM-based feature-extracted data is used as input for the Bayesian Information Criterion (BIC) to predict the number of clusters, termed the cluster prediction. In the final level, the feature-extracted data along with the cluster prediction is passed to the Kohonen Network to obtain improved clustering accuracy. The main advantage of the proposed method is that it does not require a priori identifiers or class labels for the data, which are difficult to obtain for most real-world datasets. The BELMKN framework is applied to 3 synthetic datasets and 10 benchmark datasets from the UCI machine learning repository and compared with state-of-the-art clustering methods. The experimental results show that the proposed BELMKN-based clustering outperforms other clustering algorithms for the majority of the datasets. Hence, the BELMKN framework can be used to improve the clustering accuracy of nonlinearly separable datasets. Full article
(This article belongs to the Special Issue Advanced Artificial Neural Networks)
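The final level of such a pipeline applies Kohonen network (self-organizing map) training; a generic single training step is sketched below. The grid size, learning rate, and neighborhood width are illustrative choices, not the settings used in the paper.

```python
import math

def som_step(weights, x, lr=0.5, sigma=1.0):
    """One Kohonen (SOM) update: pull the best-matching unit and its
    grid neighbors toward the input vector x."""
    dist2 = lambda w: sum((wi - xi) ** 2 for wi, xi in zip(w, x))
    bmu = min(range(len(weights)), key=lambda i: dist2(weights[i]))
    for i, w in enumerate(weights):
        # Gaussian neighborhood on a 1-D grid of units
        h = math.exp(-((i - bmu) ** 2) / (2 * sigma ** 2))
        weights[i] = [wi + lr * h * (xi - wi) for wi, xi in zip(w, x)]
    return bmu

weights = [[1.0, 1.0], [0.5, 0.5], [0.0, 1.0]]
bmu = som_step(weights, [0.0, 0.0])
print(bmu, weights[bmu])  # 1 [0.25, 0.25]
```

In a BELMKN-style pipeline, the inputs x would be the ELM-extracted features and the number of units would follow the BIC cluster prediction; in practice the learning rate and neighborhood width also decay over training iterations.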