
Table of Contents

Algorithms, Volume 12, Issue 12 (December 2019)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Papers are published in both HTML and PDF forms, with PDF as the official format. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
Open Access Article
Graph Theory Approach to the Vulnerability of Transportation Networks
Algorithms 2019, 12(12), 270; https://doi.org/10.3390/a12120270 - 12 Dec 2019
Abstract
Nowadays, transport is the basis for the functioning of national, continental, and global economies. Thus, many governments recognize it as a critical element in ensuring the daily existence of societies in their countries. Those responsible for the proper operation of the transport sector must have the right tools to model, analyze, and optimize its elements. One of the most critical problems is the need to prevent bottlenecks in transport networks. Thus, the main aim of the article was to define the parameters characterizing transportation network vulnerability and to select algorithms that support their search. The proposed parameters are based on characteristics related to domination in graph theory. The concepts of domination and edge-domination, together with related topics such as bondage-connected and weighted bondage-connected numbers, were applied as tools for searching for and identifying bottlenecks in transportation networks. Furthermore, algorithms for finding minimal dominating sets and minimal (maximal) weighted dominating sets are proposed. An exemplary academic transportation network was then analyzed in two cases: stationary and dynamic. The main conclusion is that the methods given in this article are universal and applicable to both small- and large-scale networks. Moreover, the approach can support the dynamic analysis of bottlenecks in transport networks. Full article
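The abstract centers on dominating sets as markers of network bottlenecks. As a rough, generic illustration (a standard greedy baseline, not the authors' algorithms), a small dominating set of an undirected graph can be found as follows:

```python
# Greedy heuristic for a small dominating set of an undirected graph.
# Illustrative baseline only; the article's own algorithms are not reproduced here.

def greedy_dominating_set(adj):
    """adj: dict mapping each node to the set of its neighbours."""
    undominated = set(adj)          # nodes not yet dominated
    dom = set()                     # chosen dominating set
    while undominated:
        # pick the node whose closed neighbourhood covers the most undominated nodes
        best = max(adj, key=lambda v: len(({v} | adj[v]) & undominated))
        dom.add(best)
        undominated -= {best} | adj[best]
    return dom

# A tiny "transport network": a star with hub 0 (the obvious bottleneck)
adj = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
print(greedy_dominating_set(adj))   # {0}
```

The hub of the star is returned alone, matching the intuition that dominating nodes flag critical points of the network.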
Open Access Article
Application and Evaluation of Surrogate Models for Radiation Source Search
Algorithms 2019, 12(12), 269; https://doi.org/10.3390/a12120269 - 12 Dec 2019
Abstract
Surrogate models are increasingly required for applications in which first-principles simulation models are prohibitively expensive to employ for uncertainty analysis, design, or control. They can also be used to approximate models whose discontinuous derivatives preclude the use of gradient-based optimization or data assimilation algorithms. We consider the problem of inferring the 2D location and intensity of a radiation source in an urban environment using a ray-tracing model based on Boltzmann transport theory. Whereas the code implementing this model is relatively efficient, extension to 3D Monte Carlo transport simulations precludes subsequent Bayesian inference to infer source locations, which typically requires thousands to millions of simulations. Additionally, the resulting likelihood exhibits discontinuous derivatives due to the presence of buildings. To address these issues, we discuss the construction of surrogate models for optimization, Bayesian inference, and uncertainty propagation. Specifically, we consider surrogate models based on Legendre polynomials, multivariate adaptive regression splines, radial basis functions, Gaussian processes, and neural networks. We detail strategies for computing training points and discuss the merits and deficits of each method. Full article
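One of the surrogate families compared in the abstract is radial basis functions. As a minimal sketch (with a hypothetical 1D stand-in for the expensive model and an assumed shape parameter, not the article's radiation-transport setting), a Gaussian RBF interpolant can be built like this:

```python
import numpy as np

# Gaussian RBF interpolation: solve for weights so the surrogate
# matches the "expensive" model exactly at the training points.
def rbf_fit(x_train, y_train, eps=1.0):
    r = np.abs(x_train[:, None] - x_train[None, :])
    phi = np.exp(-(eps * r) ** 2)          # Gaussian kernel matrix
    return np.linalg.solve(phi, y_train)   # interpolation weights

def rbf_eval(x, x_train, w, eps=1.0):
    r = np.abs(x[:, None] - x_train[None, :])
    return np.exp(-(eps * r) ** 2) @ w

f = lambda x: np.sin(3 * x)                # stand-in for a costly simulation
x_train = np.linspace(0, 2, 9)
w = rbf_fit(x_train, f(x_train))

x_test = np.linspace(0, 2, 50)
err = np.max(np.abs(rbf_eval(x_test, x_train, w) - f(x_test)))
print(f"max surrogate error: {err:.4f}")
```

Once fitted, evaluating the surrogate is cheap, which is exactly what makes Bayesian inference with thousands of model evaluations tractable.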
Open Access Article
Construction Method of Probabilistic Boolean Networks Based on Imperfect Information
Algorithms 2019, 12(12), 268; https://doi.org/10.3390/a12120268 - 12 Dec 2019
Abstract
A probabilistic Boolean network (PBN) is well known as one of the mathematical models of gene regulatory networks. In a Boolean network, the expression of a gene is approximated by a binary value, and its time evolution is expressed by Boolean functions. In a PBN, a Boolean function is probabilistically chosen from a set of candidate Boolean functions. One of the authors has proposed a method to construct a PBN from imperfect information. However, that method has a weakness: the number of candidate Boolean functions may be redundant. In this paper, the construction method is improved to utilize the given information efficiently. To derive the Boolean functions and their selection probabilities, a linear programming problem is solved, with an objective function introduced to reduce the number of candidates. The proposed method is demonstrated by a numerical example. Full article
(This article belongs to the Special Issue Biological Networks II)
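To make the PBN model concrete, a two-gene network can be simulated by sampling one candidate Boolean function per gene at each step. The candidate functions and probabilities below are toy values chosen for illustration, not taken from the article:

```python
import random

# Toy PBN: for each gene, a list of (probability, boolean_function) candidates.
# Each function maps the full state (x0, x1) to the gene's next value.
candidates = {
    0: [(0.7, lambda x: x[1]),            # gene 0 copies gene 1 ...
        (0.3, lambda x: x[0] and x[1])],  # ... or is the AND of both genes
    1: [(1.0, lambda x: not x[0])],       # gene 1 negates gene 0
}

def pbn_step(state, rng=random):
    nxt = []
    for gene in sorted(candidates):
        r, acc = rng.random(), 0.0
        for p, f in candidates[gene]:
            acc += p
            if r <= acc:                  # sample a candidate by its probability
                nxt.append(f(state))
                break
        else:                             # guard against floating-point round-off
            nxt.append(candidates[gene][-1][1](state))
    return tuple(nxt)

random.seed(0)
state = (True, False)
for _ in range(5):
    state = pbn_step(state)
print(state)
```

Each run traces one stochastic trajectory of the network; repeated runs approximate the state-transition probabilities that a PBN defines.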
Open Access Article
Finding Patterns in Signals Using Lossy Text Compression
Algorithms 2019, 12(12), 267; https://doi.org/10.3390/a12120267 - 11 Dec 2019
Abstract
Whether the source is an autonomous car, a robotic vacuum cleaner, or a quadcopter, signals from sensors tend to have hidden patterns that repeat themselves. For example, typical GPS traces from a smartphone contain periodic trajectories such as “home, work, home, work, ⋯”. Our goal in this study was to automatically reverse engineer such signals, identify their periodicity, and then use it to compress and de-noise them. To do so, we present a novel method that uses algorithms from the fields of pattern matching and text compression to represent the “language” in such signals. Common text compression algorithms are not tailored to handle such strings; moreover, they are lossless and cannot be used to recover noisy signals. To this end, we define the recursive run-length encoding (RRLE) method, which is a generalization of the well known run-length encoding (RLE) method. We then suggest lossy and lossless algorithms to compress and de-noise such signals. Unlike previous results, running time and optimality guarantees are proved for each algorithm. Experimental results on synthetic and real data sets are provided. We demonstrate our system by showing how it can be used to turn commercial micro air-vehicles into autonomous robots, by reverse engineering their unpublished communication protocols and using a laptop or on-board micro-computer to control them. Our open source code may be useful both for the community of millions of toy robot users and for researchers who may extend it to further protocols. Full article
(This article belongs to the Special Issue Data Compression Algorithms and their Applications)
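The recursive run-length idea can be illustrated on plain token sequences: one RLE pass compresses immediate repeats, and applying RLE again to grouped tokens captures repeating patterns of tokens. This is a simplified sketch of the principle, not the authors' full RRLE with noise handling:

```python
def rle(tokens):
    """Ordinary run-length encoding: [(token, count), ...]."""
    out = []
    for t in tokens:
        if out and out[-1][0] == t:
            out[-1] = (t, out[-1][1] + 1)
        else:
            out.append((t, 1))
    return out

def rle_decode(pairs):
    out = []
    for t, n in pairs:
        out.extend([t] * n)
    return out

# A periodic GPS-like trace: home, work, home, work, ...
trace = ["home", "work"] * 4

# One RLE pass cannot compress alternating symbols (8 pairs, each count 1) ...
print(rle(trace))
# ... but RLE over pairs of tokens — a second, "recursive" level — can:
pairs = list(zip(trace[0::2], trace[1::2]))
print(rle(pairs))                        # [(('home', 'work'), 4)]
assert rle_decode(rle(trace)) == trace   # plain RLE is lossless
```

The round-trip assertion shows the lossless case; the paper's lossy variant additionally tolerates noisy symbols when matching runs.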
Open Access Article
Storage Efficient Trajectory Clustering and k-NN for Robust Privacy Preserving Spatio-Temporal Databases
Algorithms 2019, 12(12), 266; https://doi.org/10.3390/a12120266 - 11 Dec 2019
Abstract
The need to store massive volumes of spatio-temporal data has become a difficult task as GPS capabilities and wireless communication technologies have become prevalent in modern mobile devices. As a result, massive trajectory data are produced, incurring high costs for storage, transmission, and query processing. A number of algorithms for compressing trajectory data have been proposed in order to overcome these difficulties. These algorithms try to reduce the size of trajectory data while preserving the quality of the information. In the context of this research work, we focus on both the privacy preservation and the storage problem of spatio-temporal databases. To alleviate this issue, we propose an efficient framework for trajectory representation, entitled DUST (DUal-based Spatio-temporal Trajectory), by which a raw trajectory is split into a number of linear sub-trajectories, each subjected to a dual transformation that formulates the representatives of each linear component of the initial trajectory; thus, the compressed trajectory achieves a compression ratio of M : 1. To our knowledge, we are the first to study and address k-NN queries on nonlinear moving object trajectories that are represented in dual dimensional space. Additionally, the proposed approach is expected to reinforce the privacy protection of such data. Specifically, even if an intruder gains access to the dual points of the trajectory data and tries to reproduce the native points that fit a specific component of the initial trajectory, the identity of the mobile object will remain secure with high probability. In this way, the privacy of the k-anonymity method is reinforced. Through experiments on real spatial datasets, we evaluate the robustness of the new approach and compare it with the one studied in our previous work. Full article
(This article belongs to the Special Issue Mining Humanistic Data 2019)
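The dual transformation maps each linear sub-trajectory to a single point: fitting x(t) ≈ a·t + b turns many raw samples into one dual point (a, b). The sketch below is a heavily simplified 1D illustration of that idea (equal-length segments and a least-squares fit are simplifying assumptions, not the DUST framework itself):

```python
import numpy as np

def to_dual(t, x, n_segments):
    """Split a 1D trajectory into equal segments and fit a line to each,
    returning one (slope, intercept) dual point per segment."""
    duals = []
    for ts, xs in zip(np.array_split(t, n_segments), np.array_split(x, n_segments)):
        a, b = np.polyfit(ts, xs, 1)   # least-squares line x ≈ a*t + b
        duals.append((a, b))
    return duals

# 100 raw samples compressed to 4 dual points: a 25:1 ratio
t = np.linspace(0, 10, 100)
x = 2.0 * t + 1.0                      # a perfectly linear toy trajectory
duals = to_dual(t, x, 4)
print(duals[0])                        # ≈ (2.0, 1.0)
```

Storing only the dual points (plus segment boundaries) is what yields the M : 1 compression ratio mentioned in the abstract.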
Open Access Article
Enhanced Knowledge Graph Embedding by Jointly Learning Soft Rules and Facts
Algorithms 2019, 12(12), 265; https://doi.org/10.3390/a12120265 - 10 Dec 2019
Abstract
Combining first-order logic rules with a Knowledge Graph (KG) embedding model has recently gained increasing attention, as rules introduce rich background information. Among such studies, models equipped with soft rules, which are extracted with certain confidences, achieve state-of-the-art performance. However, the existing methods either cannot support the transitivity and composition rules or take soft rules as regularization terms to constrain derived facts, which is incapable of encoding the logical background knowledge about facts contained in soft rules. In addition, previous works performed one-time logical inference over rules to generate valid groundings for modeling rules, ignoring forward chaining inference, which can further generate more valid groundings to better model rules. To these ends, this paper proposes Soft Logical rules enhanced Embedding (SoLE), a novel KG embedding model equipped with a joint training algorithm over soft rules and KG facts to inject the logical background knowledge of rules into embeddings, as well as forward chaining inference over rules. Evaluations on Freebase and DBpedia show that SoLE not only achieves improvements of 11.6%/5.9% in Mean Reciprocal Rank (MRR) and 18.4%/15.9% in Hits@10 compared to the model on which SoLE is based, but also significantly and consistently outperforms the state-of-the-art baselines in the link prediction task. Full article
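Forward chaining over rules, as used above to generate extra groundings, can be sketched for a single transitivity rule: keep applying the rule to the known facts until no new triples appear. This is a generic illustration with made-up triples; SoLE additionally weights rules by their confidences:

```python
def forward_chain(facts, relation):
    """Close a set of (head, relation, tail) triples under transitivity:
    (a, r, b) and (b, r, c)  =>  (a, r, c)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        new = {(a, relation, d)
               for (a, r1, b) in facts if r1 == relation
               for (c, r2, d) in facts if r2 == relation and b == c}
        if not new <= facts:             # stop at the fixpoint
            facts |= new
            changed = True
    return facts

facts = {("A", "locatedIn", "B"), ("B", "locatedIn", "C"), ("C", "locatedIn", "D")}
closed = forward_chain(facts, "locatedIn")
print(sorted(closed))   # adds (A,C), (B,D), and then (A,D)
```

The second-round grounding (A, locatedIn, D) is exactly the kind of triple that one-time inference misses.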
Open Access Article
Exploring Travelers’ Characteristics Affecting their Intention to Shift to Bike-Sharing Systems due to a Sophisticated Mobile App
Algorithms 2019, 12(12), 264; https://doi.org/10.3390/a12120264 - 07 Dec 2019
Abstract
Many cities installed bike-sharing systems several years ago, but especially in recent years, with the rise of micro-mobility, many efforts are being made worldwide to improve the operation of these systems. Technology has an essential role to play in the success of micro-mobility schemes, including bike-sharing systems. In this paper, we examine whether a state-of-the-art mobile application (app) can contribute to increasing the usage levels of such a system. We also seek to identify groups of travelers who are more likely to be affected by a sophisticated app. With this aim, a questionnaire survey was designed and addressed to the users of the bike-sharing system of the city of Thessaloniki, Greece, as well as to other residents of the city. Through a descriptive analysis, the most useful services that an app can provide are identified. Most importantly, two different types of predictive models (i.e., a classification tree and a binary logit model) were applied in order to identify groups of users who are more likely to shift to, or to use, the bike-sharing system due to the sophisticated app. The results of the two predictive models confirm that people of younger ages and those who are not currently users of the system are the most likely to be attracted to the system by such an app. Other factors, such as car usage frequency, education, and income, also appeared to have a slight impact on travelers’ intention to use the system more often due to the app. Full article
(This article belongs to the Special Issue Models and Technologies for Intelligent Transportation Systems)
Open Access Feature Paper Article
Linking Scheduling Criteria to Shop Floor Performance in Permutation Flowshops
Algorithms 2019, 12(12), 263; https://doi.org/10.3390/a12120263 - 07 Dec 2019
Abstract
The goal of manufacturing scheduling is to allocate a set of jobs to the machines in the shop so that these jobs are processed according to a given criterion (or set of criteria). Such criteria are based on properties of the jobs to be scheduled (e.g., their completion times or due dates), so it is not clear how these (short-term) criteria impact (long-term) shop floor performance measures. In this paper, we analyse the connection between the usual scheduling criteria employed as objectives in flowshop scheduling (e.g., makespan or idle time) and customary shop floor performance measures (e.g., work-in-process and throughput). Two of these linkages can be theoretically predicted (i.e., makespan with throughput, and completion time with average cycle time), while the other relationships must be discovered on a numerical/empirical basis. To do so, we set up an experimental analysis consisting of finding optimal (or good) schedules under several scheduling criteria and then computing how these schedules perform in terms of the different shop floor performance measures, for several instance sizes and different structures of processing times. Results indicate that makespan only performs well with respect to throughput, and that one formulation of idle time obtains nearly as good results as makespan while outperforming it in terms of average cycle time and work-in-process. Similarly, minimisation of completion time seems to be quite balanced in terms of shop floor performance, although it does not aim exactly at work-in-process minimisation, as some literature suggests. Finally, the experiments show that some of the existing scheduling criteria are poorly related to the shop floor performance measures under consideration.
These results may help to better understand the impact of scheduling on flowshop performance, so that scheduling research may be geared more towards shop floor performance, which is sometimes suggested as a cause for the lack of applicability of some scheduling models in manufacturing. Full article
(This article belongs to the Special Issue Exact and Heuristic Scheduling Algorithms)
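The scheduling criteria compared above are easy to state in code. For a permutation flowshop, completion times follow the textbook recursion C[j][m] = max(C[j-1][m], C[j][m-1]) + p[j][m], from which makespan and total completion time are read off directly (a standard computation shown for context, not the paper's experimental code):

```python
def completion_times(p):
    """p[j][m]: processing time of job j (in sequence order) on machine m.
    Returns the matrix of completion times for a permutation flowshop."""
    n, m = len(p), len(p[0])
    C = [[0] * m for _ in range(n)]
    for j in range(n):
        for k in range(m):
            prev_job = C[j - 1][k] if j > 0 else 0    # same machine, previous job
            prev_mach = C[j][k - 1] if k > 0 else 0   # same job, previous machine
            C[j][k] = max(prev_job, prev_mach) + p[j][k]
    return C

p = [[3, 2], [1, 4]]                  # two jobs, two machines
C = completion_times(p)
makespan = C[-1][-1]                  # scheduling criterion linked to throughput
total_completion = sum(row[-1] for row in C)
print(makespan, total_completion)     # 9 14
```

Evaluating the same schedule under several criteria like this is exactly the kind of comparison the experimental analysis performs at scale.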
Open Access Article
Using Interval Analysis to Compute the Invariant Set of a Nonlinear Closed-Loop Control System
Algorithms 2019, 12(12), 262; https://doi.org/10.3390/a12120262 - 06 Dec 2019
Abstract
In recent years, many applications, as well as theoretical properties, of interval analysis have been investigated. Without any claim to completeness, such applications and methodologies range from enclosing the effect of round-off errors in highly accurate numerical computations, through simulating guaranteed enclosures of all reachable states of a dynamic system model with bounded uncertainty in parameters and initial conditions, to the solution of global optimization tasks. By exploiting the fundamental enclosure properties of interval analysis, this paper aims at computing invariant sets of nonlinear closed-loop control systems. For that purpose, Lyapunov-like functions and interval analysis are combined in a novel manner. To demonstrate the proposed techniques for enclosing invariant sets, the systems examined in this paper are controlled via sliding mode techniques, with the invariant sets subsequently enclosed by an interval-based set inversion technique. The applied methods for the control synthesis make use of a suitably chosen Gröbner basis, which is employed to solve Bézout’s identity. Illustrating simulation results conclude this paper and visualize the novel combination of sliding mode control with the interval-based computation of invariant sets. Full article
(This article belongs to the Special Issue Algorithms for Reliable Estimation, Identification and Control)
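The guaranteed-enclosure property that the paper relies on can be demonstrated with elementary interval arithmetic: every operation returns an interval that provably contains all pointwise results. This is a minimal sketch; real verified implementations additionally use outward rounding:

```python
class Interval:
    """A closed interval [lo, hi] with enclosure-preserving arithmetic."""

    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # the product's range is bounded by the four endpoint products
        prods = [self.lo * other.lo, self.lo * other.hi,
                 self.hi * other.lo, self.hi * other.hi]
        return Interval(min(prods), max(prods))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

x = Interval(-1, 2)
y = Interval(3, 4)
print(x + y)   # [2, 6]
print(x * y)   # [-4, 8]
```

Composing such operations over a closed-loop system model yields the verified state enclosures on which set-inversion techniques like the one above operate.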
Open Access Article
A Pareto-Based Hybrid Whale Optimization Algorithm with Tabu Search for Multi-Objective Optimization
Algorithms 2019, 12(12), 261; https://doi.org/10.3390/a12120261 - 04 Dec 2019
Abstract
Multi-Objective Problems (MOPs) are common real-life problems found in different fields, such as bioinformatics and scheduling. Pareto Optimization (PO) is a popular method for solving MOPs, which optimizes all objectives simultaneously and provides an effective way to evaluate the quality of multi-objective solutions. Swarm Intelligence (SI) methods are population-based methods that generate multiple solutions to a problem, making them suitable for solving MOPs. However, SI methods have certain drawbacks when applied to MOPs, such as swarm leader selection and obtaining solutions that are evenly distributed over the solution space. The Whale Optimization Algorithm (WOA) is a recent SI method. In this paper, we propose combining WOA with Tabu Search (TS) for MOPs (MOWOATS). MOWOATS uses TS to store non-dominated solutions in elite lists that guide swarm members, which overcomes the swarm leader selection problem. MOWOATS employs crossover in both the intensification and diversification phases to improve the diversity of the population, and it introduces a new diversification step that eliminates the need for local search methods. MOWOATS was tested on different benchmark multi-objective test functions, such as CEC2009, ZDT, and DTLZ. The results demonstrate the efficiency of MOWOATS in finding solutions near the Pareto front that are evenly distributed over the solution space. Full article
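The elite lists at the heart of MOWOATS store non-dominated solutions. For minimization objectives, filtering a set of objective vectors down to its Pareto front can be sketched with generic helpers like these (illustrative only, not the authors' implementation):

```python
def dominates(u, v):
    """u dominates v (minimization): no worse in every objective, better in one."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def pareto_front(points):
    """Keep only the non-dominated objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

pts = [(1, 5), (2, 3), (4, 1), (3, 4), (5, 5)]
print(pareto_front(pts))   # [(1, 5), (2, 3), (4, 1)]
```

Keeping the elite list closed under this filter is what lets the tabu structure guide swarm members toward the Pareto front without a single designated leader.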
Open Access Article
Damage Diagnosis of Reactive Powder Concrete Under Fatigue Loading Using 3D Laser Scanning Technology
Algorithms 2019, 12(12), 260; https://doi.org/10.3390/a12120260 - 04 Dec 2019
Abstract
Damage mechanisms of Reactive Powder Concrete (RPC) under fatigue loading are investigated using 3D laser scanning technology. An independently configured 3D laser scanning system is used to monitor the damage process, and texture analysis is applied to enhance the understanding of the damage mechanisms of RPC under fatigue loading. In order to obtain the characteristic parameters of the point cloud data, a point cloud projection algorithm is proposed. Damage evolution is described by the change of the point cloud data of the damage in the 2D plane and 3D space during fatigue loading. The Gray Level Co-occurrence Matrix (GLCM) method is used to extract characteristic parameters to evaluate the state of the structure. Angular Second Moment and Cluster Shadow are screened as typical sensitive characteristic indexes using digital feature screening. The reliability of the damage indexes was verified by image texture analysis and data expansion. The indexes extracted in this paper can be used as new structural health monitoring indicators to assess health condition. Full article
(This article belongs to the Special Issue Algorithms for Diagnostics and Nondestructive Testing)
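The GLCM features named above have compact definitions. As a hedged sketch (the offset, toy image, and number of grey levels are made-up values), the co-occurrence matrix for a one-pixel horizontal offset and its Angular Second Moment can be computed with numpy:

```python
import numpy as np

def glcm(img, levels):
    """Grey Level Co-occurrence Matrix for offset (0, 1): right neighbour."""
    M = np.zeros((levels, levels))
    for i, j in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        M[i, j] += 1                   # count grey-level pair (i, j)
    return M / M.sum()                 # normalize to joint probabilities

def angular_second_moment(P):
    return float(np.sum(P ** 2))       # high for homogeneous textures

img = np.array([[0, 0, 1],
                [0, 0, 1],
                [2, 2, 2]])
P = glcm(img, levels=3)
print(angular_second_moment(P))        # ≈ 0.333 for this regular texture
```

The article tracks how such indexes drift as fatigue damage accumulates; the Cluster Shadow index is computed from the same matrix P with a different weighting.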
Open Access Article
The Research of Improved Active Disturbance Rejection Control Algorithm for Particleboard Glue System Based on Neural Network State Observer
Algorithms 2019, 12(12), 259; https://doi.org/10.3390/a12120259 - 03 Dec 2019
Abstract
To achieve high-performance control of a particleboard glue mixing and dosing control system, which is a time-delay system under low-frequency working conditions, an improved active disturbance rejection controller is proposed. In order to reduce the overshoot caused by a large change between the actual output and the expected value of the controlled object, a tracking differentiator (TD) is used to arrange an appropriate transition process. Through a first-order approximation of the time-delay link, the time-delay system is transformed into an output feedback problem with an unknown function. Using a neural network state observer (NNSO), a sliding mode control law is designed to achieve accurate and fast tracking of the output signal. Finally, numerical simulation results verify the effectiveness and feasibility of the proposed method. Full article
Open Access Article
Improved Bilateral Filtering for a Gaussian Pyramid Structure-Based Image Enhancement Algorithm
Algorithms 2019, 12(12), 258; https://doi.org/10.3390/a12120258 - 01 Dec 2019
Abstract
To address the problem of unclear images affected by occlusion from fog, we propose an improved Retinex image enhancement algorithm based on the Gaussian pyramid transformation. Our algorithm features bilateral filtering as a replacement for the Gaussian function used in the original Retinex algorithm. Operation of the technique is as follows. To begin, we deduced the mathematical model for an improved bilateral filtering function based on the spatial domain kernel function and the pixel difference parameter. The input RGB image was subsequently converted into the Hue Saturation Intensity (HSI) color space, where the reflection component of the intensity channel was extracted to obtain an image whose edges were retained and are not affected by changes in brightness. Following reconversion to the RGB color space, color images of this reflection component were obtained at different resolutions using Gaussian pyramid down-sampling. Each of these images was then processed using the improved Retinex algorithm to improve the contrast of the final image, which was reconstructed using the Laplace algorithm. Results from experiments show that the proposed algorithm can enhance image contrast effectively, and the color of the processed image is in line with what would be perceived by a human observer. Full article
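The defining property used above — bilateral filtering smooths noise while preserving edges by weighting neighbours with both a spatial kernel and an intensity (range) kernel — can be sketched in 1D. This is a naive textbook version with assumed kernel widths, not the paper's improved filter:

```python
import numpy as np

def bilateral_1d(signal, radius=2, sigma_s=1.0, sigma_r=0.2):
    out = np.empty_like(signal)
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        window = signal[lo:hi]
        # spatial kernel: nearer samples weigh more
        spatial = np.exp(-((np.arange(lo, hi) - i) ** 2) / (2 * sigma_s ** 2))
        # range kernel: similar intensities weigh more (this preserves edges)
        rng = np.exp(-((window - signal[i]) ** 2) / (2 * sigma_r ** 2))
        w = spatial * rng
        out[i] = np.sum(w * window) / np.sum(w)
    return out

# A slightly noisy step edge: smoothing flattens the noise but keeps the jump
step = np.concatenate([np.zeros(10), np.ones(10)]) + 0.01 * np.sin(np.arange(20))
filtered = bilateral_1d(step)
print(abs(filtered[9] - filtered[10]))   # the edge survives, close to 1
```

Samples on the far side of the step get a near-zero range weight, so the edge is not blurred away as it would be by a plain Gaussian filter.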
Open Access Article
A General Integrated Method for Design Analysis and Optimization of Missile Structure
Algorithms 2019, 12(12), 257; https://doi.org/10.3390/a12120257 - 01 Dec 2019
Abstract
In the demonstration phase of a missile scheme, to obtain the optimum proposal, designers need to modify the parameters of the overall structure frequently and significantly, and to perform the structural analysis repeatedly. In order to reduce the manual workload and improve the efficiency of research and development, a general integrated method for missile structure modeling, analysis, and optimization is proposed. First, the CST (Class and Shape Transformation functions) parametric method was used to describe the general structure of the missile. The corresponding software for geometric modeling and FEM (Finite Element Method) analysis of the missile was developed in C/C++ on the basis of the CST parametric method and UG (Unigraphics) secondary development technology. Subsequently, a novel surrogate model-based optimization strategy was proposed to obtain a relatively lightweight missile structure within the existing shape size. Eventually, different missile models were used to verify the validity of the method. After executing the structure modeling, analysis, and optimization modules, satisfactory results were obtained that demonstrate the stability and adaptability of the proposed method. The presented method saves plenty of time compared to the traditional manual modeling and analysis method, providing a valuable technique for improving the efficiency of research and development. Full article
Open Access Article
Parameterized Algorithms in Bioinformatics: An Overview
Algorithms 2019, 12(12), 256; https://doi.org/10.3390/a12120256 - 01 Dec 2019
Abstract
Bioinformatics regularly poses new challenges to algorithm engineers and theoretical computer scientists. This work surveys recent developments of parameterized algorithms and complexity for important NP-hard problems in bioinformatics. We cover sequence assembly and analysis, genome comparison and completion, and haplotyping and phylogenetics. Aside from reporting the state of the art, we give challenges and open problems for each topic. Full article
(This article belongs to the Special Issue New Frontiers in Parameterized Complexity and Algorithms)
Open Access Article
Pre and Postprocessing for JPEG to Handle Large Monochrome Images
Algorithms 2019, 12(12), 255; https://doi.org/10.3390/a12120255 - 01 Dec 2019
Abstract
Image compression is one of the most important fields of image processing. Because of the rapid development of image acquisition, image sizes are increasing, which in turn requires more storage space. JPEG has been considered the most famous and widely applied algorithm for image compression; however, it has shortfalls for some image types. Hence, new techniques are required to improve the quality of reconstructed images as well as to increase the compression ratio. The work in this paper introduces a scheme to enhance the JPEG algorithm. The proposed scheme is a new method which shrinks and stretches images using a smooth filter. In order to remove the blurring artifact that arises from shrinking and stretching the image, a hyperbolic function (tanh) is used to enhance the quality of the reconstructed image. Furthermore, the new approach achieves a higher compression ratio for the same image quality, and/or better image quality for the same compression ratio, than ordinary JPEG, particularly for large and more complex images. In effect, it is an optimization that enhances the quality (PSNR and SSIM) of the reconstructed image and reduces the size of the compressed image, especially for large images. Full article
Open Access Article
FPT Algorithms for Diverse Collections of Hitting Sets
Algorithms 2019, 12(12), 254; https://doi.org/10.3390/a12120254 - 27 Nov 2019
Abstract
In this work, we study the d-Hitting Set and Feedback Vertex Set problems through the paradigm of finding diverse collections of r solutions of size at most k each, which has recently been introduced to the field of parameterized complexity. This paradigm is aimed at addressing the loss of important side information which typically occurs during the abstraction process that models real-world problems as computational problems. We use two measures for the diversity of such a collection: the sum of all pairwise Hamming distances, and the minimum pairwise Hamming distance. We show that both problems are fixed-parameter tractable in k + r for both diversity measures. A key ingredient in our algorithms is a (problem independent) network flow formulation that, given a set of ‘base’ solutions, computes a maximally diverse collection of solutions. We believe that this could be of independent interest. Full article
(This article belongs to the Special Issue New Frontiers in Parameterized Complexity and Algorithms)
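The two diversity measures used in the paper are straightforward to compute once solutions are viewed as vertex sets, with Hamming distance equal to the size of the symmetric difference. A minimal sketch:

```python
from itertools import combinations

def hamming(a, b):
    # Hamming distance between two solutions viewed as vertex sets:
    # the size of their symmetric difference.
    return len(set(a) ^ set(b))

def diversity_sum(solutions):
    # Sum of all pairwise Hamming distances.
    return sum(hamming(a, b) for a, b in combinations(solutions, 2))

def diversity_min(solutions):
    # Minimum pairwise Hamming distance.
    return min(hamming(a, b) for a, b in combinations(solutions, 2))

sols = [{1, 2}, {1, 3}, {4, 5}]  # a toy collection of r = 3 solutions
```

The hard part addressed by the FPT algorithms is finding a collection maximizing these measures, not evaluating them.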
Open AccessArticle
Walking Gait Phase Detection Based on Acceleration Signals Using LSTM-DNN Algorithm
Algorithms 2019, 12(12), 253; https://doi.org/10.3390/a12120253 - 26 Nov 2019
Viewed by 306
Abstract
Gait phase detection is a new biometric method which is of great significance in gait correction, disease diagnosis, and exoskeleton-assisted robots. For the development of exoskeleton-assisted robots in particular, gait phase recognition is an indispensable key technology. In this study, the main characteristics of the gait phases were determined to identify each gait phase. A long short-term memory-deep neural network (LSTM-DNN) algorithm is proposed for gait phase detection. Compared with the traditional threshold algorithm and the LSTM, the proposed algorithm has higher detection accuracy for different walking speeds and different test subjects. During the identification process, the acceleration signals obtained from the acceleration sensors were normalized to ensure that the different features had the same scale. Principal components analysis (PCA) was used to reduce the data dimensionality, and the processed data were used to create the input feature vector of the LSTM-DNN algorithm. Finally, the data set was classified using the Softmax classifier in the fully connected layer. Different algorithms were applied to the gait phase detection of multiple male and female subjects. The experimental results showed that the gait-phase recognition accuracy and F-score of the LSTM-DNN algorithm are over 91.8% and 92%, respectively, which are better than those of the other three algorithms and also verify the effectiveness of the LSTM-DNN algorithm in practice. Full article
(This article belongs to the Special Issue Algorithms for Pattern Recognition)
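The preprocessing steps named in the abstract — per-feature normalization followed by PCA — can be sketched with NumPy alone; this illustrates only the feature preparation, not the LSTM-DNN classifier itself, and the data here are synthetic placeholders:

```python
import numpy as np

def normalize(X):
    # Zero-mean, unit-variance scaling so all acceleration channels
    # share the same scale.
    mu, sigma = X.mean(axis=0), X.std(axis=0)
    return (X - mu) / np.where(sigma == 0, 1.0, sigma)

def pca(X, n_components):
    # Project onto the top principal directions found by SVD of the
    # centered data matrix.
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))   # stand-in for 6-channel acceleration windows
Z = pca(normalize(X), 3)        # reduced feature vectors fed to the network
```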
Open AccessFeature PaperArticle
Planning the Schedule for the Disposal of the Spent Nuclear Fuel with Interactive Multiobjective Optimization
Algorithms 2019, 12(12), 252; https://doi.org/10.3390/a12120252 - 25 Nov 2019
Viewed by 346
Abstract
Several countries utilize nuclear power and face the problem of what to do with the spent nuclear fuel. One possibility, the one considered in this paper, is to dispose of the fuel assemblies in a disposal facility. Before the assemblies can be disposed of, their decay heat power must cool down in interim storage. Next, they are loaded into canisters in the encapsulation facility, and finally, the canisters are placed in the disposal facility. In this paper, we model this process as a nonsmooth multiobjective mixed-integer nonlinear optimization problem with the minimization of nine objectives: the maximum number of assemblies in storage, maximum storage time, average storage time, total number of canisters, end time of the encapsulation, operation time of the encapsulation facility, the lengths of disposal and central tunnels, and total costs. As a result, we obtain the disposal schedule, i.e., the number of canisters disposed of in each period. We introduce an interactive multiobjective optimization method using two-slope parameterized achievement scalarizing functions, which enables us to systematically obtain several different Pareto optimal solutions from the same preference information. Finally, a case study based on the disposal process in Finland is given. The results obtained are analyzed in terms of the objective values and disposal schedules. Full article
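An achievement scalarizing function turns a reference point supplied by the decision maker into a single scalar to minimize. The sketch below is one plausible reading of a "two-slope" variant — deviations worse than the reference get one slope, improvements another — and is not the paper's exact parameterization; the weights and the augmentation term `rho` are illustrative assumptions.

```python
def two_slope_asf(f, ref, w_worse, w_better, rho=1e-6):
    # Two-slope ASF (minimization): objective values above the reference
    # point are penalized with slope w_worse[i], values below it are
    # credited with the (typically smaller) slope w_better[i].
    terms = [
        (w_worse[i] if f[i] >= ref[i] else w_better[i]) * (f[i] - ref[i])
        for i in range(len(f))
    ]
    # Augmented max-term; rho * sum breaks ties toward Pareto optimality.
    return max(terms) + rho * sum(terms)

# Comparing two candidate schedules (two objectives) against a reference:
a = two_slope_asf([3.0, 5.0], [2.0, 4.0], [1.0, 1.0], [0.5, 0.5])
b = two_slope_asf([1.0, 4.0], [2.0, 4.0], [1.0, 1.0], [0.5, 0.5])
```

Candidate `b` meets or beats the reference in both objectives, so it scores lower (better) than `a`.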
Open AccessArticle
Target Image Mask Correction Based on Skeleton Divergence
Algorithms 2019, 12(12), 251; https://doi.org/10.3390/a12120251 - 25 Nov 2019
Viewed by 277
Abstract
Traditional approaches to modeling and processing discrete pixels are mainly based on image features or model optimization. These methods often result in excessive shrinkage or expansion of the restored pixel region, inhibiting accurate recovery of the target pixel region shape. This paper proposes a simultaneous source- and mask-image optimization model based on skeleton divergence that overcomes these problems. In the proposed model, first, the edge of the entire discrete pixel region is extracted through bilateral filtering. Then, edge information and Delaunay triangulation are used to optimize the entire discrete pixel region, with the skeleton as the local optimization center, and the source and mask images are optimized simultaneously through edge guidance. The technique for order of preference by similarity to ideal solution (TOPSIS) and point-cloud regularization verification are subsequently employed to provide the optimal merging strategy and reduce cumulative error. In the regularization-verification stage, the model is iteratively simplified via incremental and hierarchical clustering, so that point-cloud sampling is concentrated in the high-curvature region. The results of experiments conducted using the moving-target region in the RGB-depth (RGB-D) data (Technical University of Munich, Germany) indicate that the proposed algorithm is more accurate and suitable for image processing than existing high-performance algorithms. Full article
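TOPSIS, which the abstract uses to choose a merging strategy, is a standard multi-criteria ranking method. A minimal textbook implementation (the decision matrix, weights, and benefit/cost labels below are toy assumptions, not the paper's data):

```python
import numpy as np

def topsis(matrix, weights, benefit):
    # matrix: alternatives x criteria; benefit[j] is True if larger is better.
    M = np.asarray(matrix, dtype=float)
    # Vector-normalize each criterion column, then apply the weights.
    V = weights * M / np.linalg.norm(M, axis=0)
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)  # relative closeness to the ideal

# Three candidate strategies scored on (cost: lower better, quality: higher better):
scores = topsis([[250, 6], [200, 8], [300, 4]],
                np.array([0.5, 0.5]), np.array([False, True]))
```

The alternative that dominates on both criteria receives closeness 1, the dominated one closeness 0.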
Open AccessFeature PaperArticle
Some Results on Shop Scheduling with S-Precedence Constraints among Job Tasks
Algorithms 2019, 12(12), 250; https://doi.org/10.3390/a12120250 - 25 Nov 2019
Viewed by 310
Abstract
We address some special cases of job shop and flow shop scheduling problems with s-precedence constraints. Unlike the classical setting, in which precedence constraints among the tasks of a job are finish–start, here the task of a job cannot start before the task preceding it has started. We give polynomial exact algorithms for the following problems: a two-machine job shop with two jobs when recirculation is allowed (i.e., jobs can visit the same machine many times), a two-machine flow shop, and an m-machine flow shop with two jobs. We also point out some special cases whose complexity status is open. Full article
(This article belongs to the Special Issue Exact and Heuristic Scheduling Algorithms)
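The defining feature above — a task may start as soon as its predecessor has *started*, not finished — is easy to see in a toy start-time recursion. The sketch below propagates start times through a two-machine flow shop for a fixed job sequence; the update rule is a direct reading of the s-precedence constraint, not the paper's exact algorithm.

```python
def flow_shop_s_precedence(jobs):
    # jobs: list of (p1, p2) processing times on machines M1 and M2, in
    # sequence order. Under s-precedence, a job's M2 task may start once
    # its M1 task has *started*, subject to machine availability.
    t1 = t2 = 0.0  # times at which M1 / M2 next become free
    starts = []
    for p1, p2 in jobs:
        s1 = t1               # M1 start: when M1 is free
        s2 = max(t2, s1)      # M2 start: M2 free AND the M1 task has started
        t1, t2 = s1 + p1, s2 + p2
        starts.append((s1, s2))
    return starts, max(t1, t2)

starts, makespan = flow_shop_s_precedence([(3, 2), (1, 4)])
```

Note that the first job's two tasks start simultaneously — impossible under classical finish–start precedence.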
Open AccessFeature PaperArticle
SVM-Based Multiple Instance Classification via DC Optimization
Algorithms 2019, 12(12), 249; https://doi.org/10.3390/a12120249 - 23 Nov 2019
Viewed by 434
Abstract
A multiple instance learning problem consists of categorizing objects, each represented as a set (bag) of points. Unlike the supervised classification paradigm, where each point of the training set is labeled, the labels are only associated with bags, while the labels of the points inside the bags are unknown. We focus on the binary classification case, where the objective is to discriminate between positive and negative bags using a separating surface. Adopting a support vector machine setting at the training level, the problem of minimizing the classification-error function can be formulated as a nonconvex nonsmooth unconstrained program. We propose a difference-of-convex (DC) decomposition of the nonconvex function, which we tackle using an appropriate nonsmooth DC algorithm. Numerical results on benchmark data sets are reported. Full article
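The generic DC algorithm (DCA) scheme behind such approaches is simple: write the objective as f = g − h with g, h convex, then repeatedly linearize h at the current iterate and minimize the resulting convex majorizer. The sketch below runs DCA on a toy univariate instance, not on the MIL classifier itself:

```python
def dca(x0, subgrad_h, argmin_g_linear, iters=20):
    # Generic DCA for f = g - h: at each step take s_k in the
    # subdifferential of h at x_k, then minimize g(x) - s_k * x.
    x = x0
    for _ in range(iters):
        s = subgrad_h(x)
        x = argmin_g_linear(s)
    return x

# Toy instance f(x) = x^2 - |x|, i.e. g = x^2 (convex), h = |x| (convex);
# the minimizers are x = +/- 0.5.
sign = lambda v: (v > 0) - (v < 0)          # a subgradient of |x|
x_star = dca(3.0, subgrad_h=sign,
             argmin_g_linear=lambda s: s / 2.0)  # argmin of x^2 - s*x
```

Starting from x0 = 3.0, the iteration reaches the critical point x = 0.5 immediately and stays there.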
Open AccessArticle
Solving Integer Linear Programs by Exploiting Variable-Constraint Interactions: A Survey
Algorithms 2019, 12(12), 248; https://doi.org/10.3390/a12120248 - 22 Nov 2019
Viewed by 385
Abstract
Integer Linear Programming (ILP) is among the most successful and general paradigms for solving computationally intractable optimization problems in computer science. ILP is NP-complete, and until recently we have lacked a systematic study of the complexity of ILP through the lens of variable-constraint interactions. This changed drastically in recent years thanks to a series of results that together lay out a detailed complexity landscape for the problem centered around the structure of graphical representations of instances. The aim of this survey is to summarize these recent developments, put them into context and a unified format, and make them more approachable for experts from many diverse backgrounds. Full article
(This article belongs to the Special Issue New Frontiers in Parameterized Complexity and Algorithms)
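One common graphical representation of an ILP instance in this line of work is the primal graph: a vertex per variable, with an edge between two variables whenever they co-occur in a constraint. A minimal sketch (the tiny constraint matrix is a made-up example):

```python
from itertools import combinations

def primal_graph(rows):
    # Primal graph of an ILP: one vertex per variable; two variables are
    # adjacent iff both have a non-zero coefficient in some constraint row.
    edges = set()
    for row in rows:
        support = [j for j, a in enumerate(row) if a != 0]
        edges.update(combinations(support, 2))
    return edges

# Constraint matrix: x0 + x1 <= b1,  2*x1 - x2 <= b2
A = [[1, 1, 0], [0, 2, -1]]
E = primal_graph(A)
```

Structural parameters of this graph (e.g., treewidth) then drive the complexity results the survey covers.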
Open AccessArticle
Modeling and Solving Scheduling Problem with m Uniform Parallel Machines Subject to Unavailability Constraints
Algorithms 2019, 12(12), 247; https://doi.org/10.3390/a12120247 - 21 Nov 2019
Viewed by 340
Abstract
The problem investigated in this paper is scheduling on uniform parallel machines, taking into account that machines can be periodically unavailable during the planning horizon. The objective is to determine a plan for job processing such that the makespan is minimal. The problem is known to be NP-hard. A new quadratic model was developed. Because of the limitation of this model in terms of problem sizes, a novel algorithm was developed to tackle big-sized instances. It consists mainly of two phases. The first phase generates schedules using a modified Largest Processing Time (LPT)-based procedure. Then, these schedules are subject to further improvement during the second phase. This improvement is obtained by simultaneously applying pairwise job interchanges between machines. The proposed algorithm and the quadratic model were implemented and tested on variously sized problems. Computational results showed that the quadratic model could optimally solve small- to medium-sized problem instances, while the proposed algorithm was able to solve large-sized problems optimally in a reasonable time. Full article
(This article belongs to the Special Issue Exact and Heuristic Scheduling Algorithms)
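The flavor of an LPT-style first phase on *uniform* machines (machines differ only in speed) can be sketched as a list rule: sort jobs longest-first and give each to the machine that would finish it earliest. This sketch ignores the unavailability periods that the paper's modified procedure handles, so it is only a simplified illustration.

```python
def lpt_uniform(jobs, speeds):
    # Speed-aware LPT: jobs sorted by processing requirement (longest
    # first); each job goes to the machine minimizing its completion time,
    # where a job of size p takes p / speed time units on that machine.
    loads = [0.0] * len(speeds)          # current finish time per machine
    schedule = [[] for _ in speeds]
    for p in sorted(jobs, reverse=True):
        i = min(range(len(speeds)), key=lambda m: loads[m] + p / speeds[m])
        loads[i] += p / speeds[i]
        schedule[i].append(p)
    return schedule, max(loads)

schedule, makespan = lpt_uniform([4, 3, 2, 1], [2.0, 1.0])
```

A second phase would then try pairwise job interchanges between machines to shave the makespan further.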
Open AccessArticle
Estimation of Reliability in a Multicomponent Stress–Strength System for the Exponentiated Moment-Based Exponential Distribution
Algorithms 2019, 12(12), 246; https://doi.org/10.3390/a12120246 - 21 Nov 2019
Viewed by 365
Abstract
A multicomponent system of k components with independent and identically distributed random strengths X1, X2, …, Xk, with each component undergoing random stress, is in working condition if and only if at least s out of k strengths exceed the subjected stress. Reliability is measured while strength and stress are obtained through a process following an exponentiated moment-based exponential distribution with different shape parameters. Reliability is gauged from the samples using maximum likelihood (ML) on the computed distributions of strength and stress. Asymptotic estimates of reliability are compared using Monte Carlo simulation. Application to forest data and to breaking strengths of jute fiber shows the usefulness of the model. Full article
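The s-out-of-k reliability P(at least s of the k strengths exceed the common stress) is easy to estimate by Monte Carlo. The sketch below uses plain exponential draws as placeholders for the exponentiated moment-based exponential distribution studied in the paper:

```python
import random

def reliability_mc(s, k, draw_strength, draw_stress, n=20000, seed=1):
    # Monte Carlo estimate of P(at least s of k i.i.d. component
    # strengths exceed a single common stress draw).
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        y = draw_stress(rng)
        if sum(draw_strength(rng) > y for _ in range(k)) >= s:
            hits += 1
    return hits / n

# Toy instance: exp(1) strengths and stress, s = 1 of k = 3.
# Analytically this probability is 3/4.
r = reliability_mc(1, 3, lambda rng: rng.expovariate(1.0),
                   lambda rng: rng.expovariate(1.0))
```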
Open AccessArticle
Observations on the Computation of Eigenvalue and Eigenvector Jacobians
Algorithms 2019, 12(12), 245; https://doi.org/10.3390/a12120245 - 20 Nov 2019
Viewed by 405
Abstract
Many scientific and engineering problems benefit from analytic expressions for eigenvalue and eigenvector derivatives with respect to the elements of the parent matrix. While there exists extensive literature on the calculation of these derivatives, which take the form of Jacobian matrices, there are a variety of deficiencies that have yet to be addressed—including the need for both left and right eigenvectors, limitations on the matrix structure, and issues with complex eigenvalues and eigenvectors. This work addresses these deficiencies by proposing a new analytic solution for the eigenvalue and eigenvector derivatives. The resulting analytic Jacobian matrices are numerically efficient to compute and are valid for the general complex case. It is further shown that this new general result collapses to previously known relations for the special cases of real symmetric matrices and real diagonal matrices. Finally, the new Jacobian expressions are validated using forward finite differencing and performance is compared with another technique. Full article
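For a diagonalizable matrix with simple eigenvalues, the eigenvalue part of such a Jacobian has a compact closed form, dλₖ/dAᵢⱼ = Wₖᵢ Vⱼₖ with V the right-eigenvector matrix and W = V⁻¹ (whose rows are left eigenvectors). The sketch below is a generic illustration of this classical relation, validated by forward finite differencing as the abstract describes; it is not the paper's new general result.

```python
import numpy as np

def eigval_jacobian(A):
    # J[k, i, j] = d(lambda_k) / dA_ij = W[k, i] * V[j, k],
    # where columns of V are right eigenvectors and W = inv(V).
    lam, V = np.linalg.eig(A)
    W = np.linalg.inv(V)
    return np.einsum('ki,jk->kij', W, V), lam

A = np.array([[2.0, 1.0], [0.5, 3.0]])
J, lam = eigval_jacobian(A)

# Forward finite-difference check on entry (0, 1):
eps = 1e-7
Ap = A.copy(); Ap[0, 1] += eps
lam_p = np.linalg.eig(Ap)[0]
# Match each perturbed eigenvalue to its original by proximity.
fd = np.array([(lam_p[np.argmin(np.abs(lam_p - l))] - l) / eps for l in lam])
```

A useful sanity check: since the trace equals the sum of eigenvalues, the eigenvalue derivatives with respect to any diagonal entry must sum to exactly 1.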