
Table of Contents

Algorithms, Volume 13, Issue 1 (January 2020) – 29 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; PDF is the official format. To view a paper in PDF format, click its "PDF Full-text" link and open it with the free Adobe Reader.
Open Access Editorial
Acknowledgement to Reviewers of Algorithms in 2019
Algorithms 2020, 13(1), 29; https://doi.org/10.3390/a13010029 - 20 Jan 2020
Abstract
The editorial team greatly appreciates the reviewers who have dedicated their considerable time and expertise to the journal’s rigorous editorial process over the past 12 months, regardless of whether the papers are finally published or not [...] Full article
Open Access Editorial
Editorial: Special Issue on Data Compression Algorithms and their Applications
Algorithms 2020, 13(1), 28; https://doi.org/10.3390/a13010028 - 17 Jan 2020
Viewed by 115
Abstract
This Special Issue of Algorithms is focused on data compression algorithms and their applications. Full article
(This article belongs to the Special Issue Data Compression Algorithms and their Applications)
Open Access Article
A Matheuristic for Joint Optimal Power and Scheduling Assignment in DVB-T2 Networks
Algorithms 2020, 13(1), 27; https://doi.org/10.3390/a13010027 - 16 Jan 2020
Viewed by 172
Abstract
Because of the introduction and spread of the second generation of the Digital Video Broadcasting—Terrestrial standard (DVB-T2), already active television broadcasters and new broadcasters that have entered the market will be required to (re)design their networks. This is generating new interest in effective and efficient DVB optimization software tools. In this work, we propose a strengthened binary linear programming model for representing the optimal DVB design problem, including power and scheduling configuration, and propose a new matheuristic for its solution. The matheuristic combines a genetic algorithm, adopted to efficiently explore the solution space of power emissions of DVB stations, with relaxation-guided variable fixing and exact large neighborhood searches formulated as integer linear programming (ILP) problems. Computational tests on realistic instances show that the new matheuristic performs much better than a state-of-the-art optimization solver, identifying solutions with much higher user coverage. Full article
Open Access Article
A Soft-Voting Ensemble Based Co-Training Scheme Using Static Selection for Binary Classification Problems
Algorithms 2020, 13(1), 26; https://doi.org/10.3390/a13010026 - 16 Jan 2020
Viewed by 166
Abstract
In recent years, a forward-looking subfield of machine learning has emerged with important applications in a variety of scientific fields. Semi-supervised learning is increasingly being recognized as a burgeoning area embracing a plethora of efficient methods and algorithms that seek to exploit a small pool of labeled examples together with a large pool of unlabeled ones in the most efficient way. Co-training is a representative semi-supervised classification algorithm originally based on the assumption that each example can be described by two distinct feature sets, usually referred to as views. Since such an assumption can hardly be met in real-world problems, several variants of the co-training algorithm have been proposed to deal with the absence or existence of a natural two-view feature split. In this context, a Static Selection Ensemble-based co-training scheme operating under a random feature split strategy is outlined for binary classification problems, where the base ensemble learner is a soft-voting ensemble composed of two participants. Ensemble methods are commonly used to boost the predictive performance of learning models by using a set of different classifiers, while the Static Ensemble Selection approach seeks the most suitable structure of an ensemble classifier, based on a specific criterion, from a pool of candidate classifiers. The efficacy of the proposed scheme is verified through several experiments on a plethora of benchmark datasets, as statistically confirmed by the Friedman Aligned Ranks non-parametric test on classification accuracy, F1-score, and Area Under Curve metrics. Full article
(This article belongs to the Special Issue Ensemble Algorithms and Their Applications)
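The co-training loop sketched in the abstract above can be illustrated in a few lines. This is a minimal sketch, not the authors' implementation: the nearest-centroid base learner, the confidence threshold, the number of rounds, and the toy data are all assumptions made for brevity.

```python
import math
import random

def centroid_fit(X, y):
    # Per-class feature means: a stand-in for any probabilistic base learner.
    sums, counts = {}, {}
    for xi, yi in zip(X, y):
        counts[yi] = counts.get(yi, 0) + 1
        s = sums.setdefault(yi, [0.0] * len(xi))
        for j, v in enumerate(xi):
            s[j] += v
    return {c: [v / counts[c] for v in s] for c, s in sums.items()}

def centroid_proba(model, x):
    # Softmax over negative squared distances to the class centroids.
    d = {c: -sum((a - b) ** 2 for a, b in zip(x, m)) for c, m in model.items()}
    z = sum(math.exp(v) for v in d.values())
    return {c: math.exp(v) / z for c, v in d.items()}

def cotrain(X_lab, y_lab, X_unlab, rounds=3, threshold=0.8, seed=0):
    rng = random.Random(seed)
    idx = list(range(len(X_lab[0])))
    rng.shuffle(idx)
    half = len(idx) // 2
    views = (idx[:half], idx[half:])               # random feature split
    proj = lambda x, v: [x[j] for j in v]
    X, y, U = list(X_lab), list(y_lab), list(X_unlab)
    for _ in range(rounds):
        models = [centroid_fit([proj(x, v) for x in X], y) for v in views]
        remaining = []
        for x in U:
            # Soft vote: average the two view-specific probability estimates.
            ps = [centroid_proba(m, proj(x, v)) for m, v in zip(models, views)]
            avg = {c: (ps[0][c] + ps[1][c]) / 2 for c in ps[0]}
            label, p = max(avg.items(), key=lambda kv: kv[1])
            if p >= threshold:
                X.append(x); y.append(label)       # promote confident pseudo-label
            else:
                remaining.append(x)
        U = remaining
    return centroid_fit(X, y)                      # final model on all features

# Toy usage with two labeled points and two unlabeled ones (invented data):
model = cotrain([[0, 0, 0, 0], [10, 10, 10, 10]], [0, 1],
                [[0.4, 0.1, 0.2, 0.3], [9.6, 9.9, 10.1, 9.8]])
pred = max(centroid_proba(model, [0.5, 0.5, 0.5, 0.5]).items(),
           key=lambda kv: kv[1])[0]
```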
Open Access Feature Paper Article
Local Convergence of an Efficient Multipoint Iterative Method in Banach Space
Algorithms 2020, 13(1), 25; https://doi.org/10.3390/a13010025 - 15 Jan 2020
Viewed by 204
Abstract
We discuss the local convergence of a derivative-free eighth-order method in a Banach space setting. The present study provides the radius of convergence and bounds on errors under a hypothesis based on the first Fréchet derivative only. Approaches using Taylor expansions, which contain higher-order derivatives, do not provide such estimates, since the derivatives may be nonexistent or costly to compute. By using only the first derivative, the method can be applied to a wider class of functions, and hence its range of applications is expanded. Numerical experiments show that the present results are applicable to cases where previous results cannot be applied. Full article
Open Access Article
Unstructured Uncertainty Based Modeling and Robust Stability Analysis of Textile-Reinforced Composites with Embedded Shape Memory Alloys
Algorithms 2020, 13(1), 24; https://doi.org/10.3390/a13010024 - 15 Jan 2020
Viewed by 190
Abstract
This paper develops the mathematical modeling and deflection control of a textile-reinforced composite integrated with shape memory actuators. The mathematical model of the system is derived using an identification method and an unstructured uncertainty approach. Based on this model and a robust stability analysis, a robust proportional–integral controller is designed for controlling the deflection of the composite. We show that the robust controller depends significantly on the modeling of the uncertainty. The performance of the proposed controller is compared with a classical one through experimental analysis. Experimental results show that the proposed controller performs better, as it reduces the overshoot and provides robustness to uncertainty. Due to the robust design, the controller also has a wide operating range, which is advantageous for practical applications. Full article
(This article belongs to the Special Issue Algorithms for Reliable Estimation, Identification and Control)
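The PI loop at the heart of the controller discussed above can be sketched generically. This is an illustrative sketch only: the first-order plant, the gains, and the sample time below are invented and are not the paper's identified model.

```python
def pi_controller(kp, ki, dt):
    # Discrete PI law u[k] = kp*e[k] + ki*sum(e)*dt; the closure keeps the
    # integrator state between calls.
    state = {"i": 0.0}
    def step(error):
        state["i"] += error * dt
        return kp * error + ki * state["i"]
    return step

# First-order plant x_dot = -x + u tracking a unit step reference: the
# integral term removes the steady-state offset a pure P controller leaves.
dt = 0.01
ctrl = pi_controller(kp=2.0, ki=1.0, dt=dt)
x = 0.0
for _ in range(2000):                  # 20 s of simulated time, Euler steps
    u = ctrl(1.0 - x)
    x += (-x + u) * dt
```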
Open Access Article
Optimal Learning and Self-Awareness Versus PDI
Algorithms 2020, 13(1), 23; https://doi.org/10.3390/a13010023 - 11 Jan 2020
Viewed by 313
Abstract
This manuscript explores and analyzes the effects of different paradigms for the control of rigid body motion mechanics. The experimental setup includes deterministic artificial intelligence composed of optimal self-awareness statements together with a novel, optimal learning algorithm, re-parameterized as ideal nonlinear feedforward and feedback and evaluated within a Simulink simulation. Comparison is made to a custom proportional, derivative, integral controller (a modified version of classical proportional-integral-derivative control) implemented as feedback control with a specific term to account for the nonlinear coupled motion. Consistent proportional, derivative, and integral gains were used throughout the experiments. The simulation results show that, akin to feedforward control, deterministic self-awareness statements lack an error correction mechanism and rely on learning (which stands in place of feedback control), and that the proposed combination of optimal self-awareness statements and a newly demonstrated analytically optimal learning yielded the highest accuracy with the lowest execution time. This highlights the potential effectiveness of a learning control system. Full article
(This article belongs to the Special Issue Algorithms for PID Controller 2019)
Open Access Article
Non Data-Aided SNR Estimation for UAV OFDM Systems
Algorithms 2020, 13(1), 22; https://doi.org/10.3390/a13010022 - 10 Jan 2020
Viewed by 262
Abstract
Signal-to-noise ratio (SNR) estimation is essential in unmanned aerial vehicle (UAV) orthogonal frequency division multiplexing (OFDM) systems for obtaining accurate channel estimates. In this paper, we propose a novel non-data-aided (NDA) SNR estimation method for UAV OFDM systems that overcomes the carrier interference caused by frequency offset. First, an absolute-value series is obtained from the sampled received sequence, where each sampling point is validated one data length apart. Second, by dividing the absolute-value series into different series according to the total symbol length, we obtain an output series by stacking each part. Third, the root mean squares of the noise power and total power are estimated using the maximum and minimum platforms in the characteristic curve of the output series after wavelet denoising. Simulation results show that the proposed method performs better than other methods, especially under low synchronization precision, and it has low computational complexity. Full article
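The paper's wavelet-based estimator is not reproduced here, but the flavor of non-data-aided SNR estimation can be conveyed with the classical M2/M4 moments estimator for a constant-modulus signal in AWGN. This is a swapped-in textbook technique, not the proposed method, and the QPSK setup and sample count are illustrative assumptions.

```python
import cmath
import math
import random

def m2m4_snr_db(samples):
    # Classical non-data-aided M2/M4 moments estimator for a complex
    # constant-modulus signal in additive white Gaussian noise:
    # with M2 = S + N and M4 = S^2 + 4SN + 2N^2, we get S = sqrt(2*M2^2 - M4).
    n = len(samples)
    m2 = sum(abs(x) ** 2 for x in samples) / n
    m4 = sum(abs(x) ** 4 for x in samples) / n
    s = math.sqrt(max(2 * m2 ** 2 - m4, 0.0))   # signal power estimate
    noise = max(m2 - s, 1e-12)                  # noise power estimate
    return 10 * math.log10(s / noise)

# Quick check with QPSK symbols at a known SNR (illustrative parameters).
rng = random.Random(1)
true_snr_db = 10.0
sigma = math.sqrt(10 ** (-true_snr_db / 10) / 2)  # per-dimension noise std
rx = []
for _ in range(50000):
    sym = cmath.exp(1j * (math.pi / 4 + math.pi / 2 * rng.randrange(4)))
    rx.append(sym + complex(rng.gauss(0, sigma), rng.gauss(0, sigma)))
est = m2m4_snr_db(rx)
```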
Open Access Article
Markov Chain Monte Carlo Based Energy Use Behaviors Prediction of Office Occupants
Algorithms 2020, 13(1), 21; https://doi.org/10.3390/a13010021 - 09 Jan 2020
Viewed by 251
Abstract
Prediction of energy use behaviors is a necessary prerequisite for designing personalized and scalable energy efficiency programs. The energy use behaviors of office occupants are different from those of residential occupants and have not yet been studied as intensively. This paper proposes a method based on Markov chain Monte Carlo (MCMC) to predict the energy use behaviors of office occupants. Firstly, an indoor electrical Internet of Things system (IEIoTS) for the office scenario is developed to collect the switching-state time series data of selected electrical equipment (desktop computer, water dispenser, light) and the historical environment parameters. Then, the Metropolis–Hastings (MH) algorithm is used to sample and obtain the optimal parameters of the office occupants’ behavior model, which includes an energy action model, an energy working-hours model, and an air-conditioner energy use behavior model. Finally, comparative experiments are carried out to evaluate the performance of the proposed method. The experimental results show that while the mean value performs similarly in estimating the energy use model, the proposed method outperforms the Maximum Likelihood Estimation (MLE) method on uncertainty quantification, with relatively narrower confidence intervals. Full article
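A minimal random-walk Metropolis–Hastings sampler conveys the core of the MH step mentioned above. The lamp-usage posterior, the Beta(1,1) prior, and the counts below are made-up illustrations, not the paper's occupant model.

```python
import math
import random

def metropolis_hastings(log_post, x0, n_iter=20000, step=0.1, seed=42):
    # Random-walk Metropolis-Hastings: the Gaussian proposal is symmetric,
    # so the acceptance ratio reduces to the posterior density ratio.
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_iter):
        cand = x + rng.gauss(0, step)
        lp_cand = log_post(cand)
        if math.log(rng.random() + 1e-300) < lp_cand - lp:
            x, lp = cand, lp_cand        # accept the move
        samples.append(x)                # otherwise keep the current state
    return samples

# Toy behavior model: probability p that a desk lamp is on in a given hour,
# flat Beta(1,1) prior, 18 "on" observations out of 24 (invented numbers),
# so the posterior is Beta(19, 7) with mean 19/26 ~ 0.73.
def log_post(p):
    if not 0 < p < 1:
        return float("-inf")
    return 18 * math.log(p) + 6 * math.log(1 - p)

draws = metropolis_hastings(log_post, 0.5)[5000:]   # drop burn-in
p_hat = sum(draws) / len(draws)
```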
Open Access Article
Citywide Cellular Traffic Prediction Based on a Hybrid Spatiotemporal Network
Algorithms 2020, 13(1), 20; https://doi.org/10.3390/a13010020 - 08 Jan 2020
Viewed by 303
Abstract
With the arrival of 5G networks, cellular networks are moving in the direction of diversified, broadband, integrated, and intelligent networks. At the same time, the popularity of various smart terminals has led to an explosive growth in cellular traffic. Accurate network traffic prediction has become an important part of cellular network intelligence. In this context, this paper proposes a deep learning method for space-time modeling and prediction of cellular network communication traffic. First, we analyze the temporal and spatial characteristics of cellular network traffic from Telecom Italia. On this basis, we propose a hybrid spatiotemporal network (HSTNet), a deep learning method that uses convolutional neural networks to capture the spatiotemporal characteristics of communication traffic. This work adds deformable convolution to the convolution model to improve predictive performance. The time attribute is introduced as auxiliary information, and an attention mechanism based on historical data for weight adjustment is proposed to improve the robustness of the module. We use the Telecom Italia dataset to evaluate the performance of the proposed model. Experimental results show that, compared with existing statistical methods and machine learning algorithms, HSTNet significantly improves the prediction accuracy in terms of MAE and RMSE. Full article
(This article belongs to the Special Issue Networks, Communication, and Computing vol. 2)
Open Access Article
Computing Persistent Homology of Directed Flag Complexes
Algorithms 2020, 13(1), 19; https://doi.org/10.3390/a13010019 - 07 Jan 2020
Viewed by 358
Abstract
We present a new computing package, Flagser, designed to construct the directed flag complex of a finite directed graph and compute persistent homology for flexibly defined filtrations on the graph and the resulting complex. The persistent homology computation part of Flagser is based on the program Ripser by U. Bauer, but is optimised specifically for large computations. The construction of the directed flag complex is done in a way that allows easy parallelisation over arbitrarily many cores. Flagser also has the option of working with undirected graphs. For homology computations, Flagser has an approximate option, which shortens compute time while retaining remarkable accuracy. We demonstrate the power of Flagser by applying it to the construction of the directed flag complex of digital reconstructions of brain microcircuitry by the Blue Brain Project and several other examples, in some instances also computing homology. For a more complete performance analysis, we also apply Flagser to other data collections. In all cases the hardware used, the memory consumption, and the compute time are recorded. Full article
(This article belongs to the Special Issue Topological Data Analysis)
Open Access Article
Top Position Sensitive Ordinal Relation Preserving Bitwise Weight for Image Retrieval
Algorithms 2020, 13(1), 18; https://doi.org/10.3390/a13010018 - 06 Jan 2020
Viewed by 310
Abstract
In recent years, binary coding methods have become increasingly popular for approximate nearest neighbor (ANN) search tasks. High-dimensional data can be quantized into binary codes to give an efficient similarity approximation via a Hamming distance. However, most existing schemes treat every binary bit as equally important and treat training samples at different positions equally, which causes many data pairs to share the same Hamming distance and a larger retrieval loss at the top position. To handle these problems, we propose a novel method dubbed the top-position-sensitive ordinal-relation-preserving bitwise weight (TORBW) method. The core idea is to penalize data points that do not preserve the ordinal relation at the top of a ranking list more than those at the bottom, and to assign different weight values to their binary bits according to the distribution of query data. Specifically, we design an iterative optimization mechanism to simultaneously learn binary codes and bitwise weights, which makes their learning processes related to each other. When the iterative procedure converges, the binary codes and bitwise weights are effectively adapted to each other. To reduce the training complexity, we relax the discrete constraints of both the binary codes and the indicator function. Furthermore, we pretrain a tensor ordinal graph to decrease the time consumed computing relative similarity relationships among data points. Experimental results on three large-scale ANN search benchmark datasets, i.e., SIFT1M, GIST1M, and Cifar10, show that the proposed TORBW method achieves superior performance over state-of-the-art approaches. Full article
(This article belongs to the Special Issue Algorithms for Pattern Recognition)
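The bitwise-weighting idea can be illustrated independently of the learning procedure: a hypothetical 4-bit example where two database codes tie under plain Hamming distance but are separated once bits carry learned weights. The codes and weights below are invented for illustration; the paper learns them jointly with the binary codes.

```python
def weighted_hamming(a, b, weights):
    # Each differing bit contributes its learned weight instead of a flat 1,
    # so disagreement on "important" bits pushes codes farther apart.
    return sum(w for x, y, w in zip(a, b, weights) if x != y)

def rank(query, database, weights):
    # Ranking list used for retrieval: nearest codes first.
    return sorted(range(len(database)),
                  key=lambda i: weighted_hamming(query, database[i], weights))

db = [(0, 0, 1, 1), (0, 1, 1, 0), (1, 0, 0, 1)]
q = (0, 0, 1, 0)
# Plain Hamming ties db[0] and db[1] at distance 1; weighting bit 1 more
# heavily than bit 3 breaks the tie at the top of the ranking.
order = rank(q, db, weights=(1.0, 2.0, 1.0, 0.5))
```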
Open Access Article
A Grey-Box Ensemble Model Exploiting Black-Box Accuracy and White-Box Intrinsic Interpretability
Algorithms 2020, 13(1), 17; https://doi.org/10.3390/a13010017 - 05 Jan 2020
Viewed by 377
Abstract
Machine learning has emerged as a key factor in many technological and scientific advances and applications. Much research has been devoted to developing high-performance machine learning models, which are able to make very accurate predictions and decisions on a wide range of applications. Nevertheless, we still seek to understand and explain how these models work and make decisions. Explainability and interpretability in machine learning are significant issues, since in most real-world problems it is considered essential to understand and explain the model’s prediction mechanism in order to trust it and make decisions on critical issues. In this study, we developed a Grey-Box model based on a semi-supervised methodology utilizing a self-training framework. The main objective of this work is the development of a machine learning model that is both interpretable and accurate, although this is a complex and challenging task. The proposed model was evaluated on a variety of real-world datasets from the crucial application domains of education, finance, and medicine. Our results demonstrate the efficiency of the proposed model: it performs comparably to a Black-Box model and considerably outperforms single White-Box models, while remaining as interpretable as a White-Box model. Full article
(This article belongs to the Special Issue Ensemble Algorithms and Their Applications)
Open Access Article
A Comparative Study of Four Metaheuristic Algorithms, AMOSA, MOABC, MSPSO, and NSGA-II for Evacuation Planning
Algorithms 2020, 13(1), 16; https://doi.org/10.3390/a13010016 - 03 Jan 2020
Viewed by 415
Abstract
Evacuation planning is an important activity in disaster management to reduce the effects of disasters on urban communities. It is regarded as a multi-objective optimization problem that involves conflicting spatial objectives and constraints in a decision-making process. Such problems are difficult to solve by traditional methods, but metaheuristic methods have been shown to be suitable solutions. Well-known classical metaheuristic algorithms—such as simulated annealing (SA), artificial bee colony (ABC), standard particle swarm optimization (SPSO), genetic algorithm (GA), and multi-objective versions of them—have been used in the spatial optimization domain. However, few studies have applied these classical methods, and their performance has not always been well evaluated, specifically not on evacuation planning problems. This research applies the multi-objective versions of four classical metaheuristic algorithms (AMOSA, MOABC, NSGA-II, and MSPSO) to an urban evacuation problem in Rwanda in order to compare the performance of the four algorithms. The algorithms have been evaluated based on the effectiveness, efficiency, repeatability, and computational time of each. The results showed that, in terms of effectiveness, AMOSA and MOABC achieve good-quality solutions that satisfy the objective functions, while NSGA-II and MSPSO ranked third and fourth. For efficiency, NSGA-II is the fastest algorithm in terms of execution time and convergence speed, followed by AMOSA, MOABC, and MSPSO. AMOSA, MOABC, and MSPSO showed a high level of repeatability compared to NSGA-II. It seems that by modifying MOABC and increasing its effectiveness, it could become a suitable algorithm for evacuation planning. Full article
Open Access Article
A Visual Object Tracking Algorithm Based on Improved TLD
Algorithms 2020, 13(1), 15; https://doi.org/10.3390/a13010015 - 01 Jan 2020
Viewed by 370
Abstract
Visual object tracking is an important research topic in the field of computer vision. Tracking–learning–detection (TLD) decomposes the tracking problem into three modules—tracking, learning, and detection—which provides effective ideas for solving the tracking problem. In order to improve the performance of the TLD tracker, three improvements are proposed in this paper. The built-in tracker is replaced with a kernelized correlation filter (KCF) algorithm based on the histogram of oriented gradients (HOG) descriptor in the tracking module. Failure detection is added to the response of KCF to identify whether KCF loses the target. A more specific detection area for the detection module is obtained from the estimated location provided by the tracking module. With these operations, the scanning area of object detection is reduced, and a full-frame search is required in the detection module only if the object fails to be tracked in the tracking module. Comparative experiments were conducted on the object tracking benchmark (OTB), and the results showed that the tracking speed and accuracy were improved. Further, with the proposed method, the TLD tracker performed better in challenging scenarios such as motion blur, occlusion, and environmental changes. Moreover, the improved TLD achieved outstanding tracking performance compared with common tracking algorithms. Full article
Open Access Article
Image Completion with Large or Edge-Missing Areas
Algorithms 2020, 13(1), 14; https://doi.org/10.3390/a13010014 - 31 Dec 2019
Viewed by 430
Abstract
Existing image completion methods mostly assume missing regions that are small or located in the middle of the images. When regions to be completed are large or near the edge of the images, due to the lack of context information, the completion results tend to be blurred or distorted, and there will be a large blank area in the final results. In addition, the unstable training of the generative adversarial network is also prone to causing pseudo-color in the completion results. Aiming at the two above-mentioned problems, a method of image completion with large or edge-missing areas is proposed, and the network structures have been improved accordingly. On the one hand, it overcomes the problem of lacking context information, which ensures that the generated texture details are realistic; on the other hand, it suppresses the generation of pseudo-color, which guarantees the consistency of the whole image in both vision and content. The experimental results show that the proposed method achieves better results in completing large or edge-missing areas. Full article
Open Access Article
An Effective and Efficient Genetic-Fuzzy Algorithm for Supporting Advanced Human-Machine Interfaces in Big Data Settings
Algorithms 2020, 13(1), 13; https://doi.org/10.3390/a13010013 - 31 Dec 2019
Viewed by 441
Abstract
In this paper we describe a novel algorithm, inspired by the mirror neuron discovery, to support automatic learning oriented to advanced man-machine interfaces. The algorithm introduces several points of innovation, based on complex metrics of similarity that involve different characteristics of the entire learning process. In more detail, the proposed approach deals with a humanoid robot algorithm suited for automatic vocalization acquisition from a human tutor. The learned vocalization can be used for multi-modal reproduction of speech, as the articulatory and acoustic parameters that compose the vocalization database can be used to synthesize unrestricted speech utterances and reproduce the articulatory and facial movements of the humanoid talking face, automatically synchronized. The algorithm uses fuzzy articulatory rules, which describe transitions between phonemes derived from the International Phonetic Alphabet (IPA), to allow simpler adaptation to different languages, and genetic optimization of the membership degrees. Extensive experimental evaluation and analysis of the proposed algorithm on synthetic and real data sets confirm the benefits of our proposal. Indeed, experimental results show that the acquired vocalization respects the basic phonetic rules of the Italian language, and subjective results show the effectiveness of multi-modal speech production with automatic synchronization between facial movements and speech emissions. The algorithm has been applied to a virtual speaking face but may also be used in mechanical vocalization systems. Full article
Open Access Article
Optimal Prefix Free Codes with Partial Sorting
Algorithms 2020, 13(1), 12; https://doi.org/10.3390/a13010012 - 31 Dec 2019
Viewed by 327
Abstract
We describe an algorithm computing an optimal prefix free code for n unsorted positive weights in time within O(n(1 + lg α)) ⊆ O(n lg n), where the alternation α ∈ [1..n−1] approximates the minimal amount of sorting required by the computation. This asymptotic complexity is within a constant factor of the optimal in the algebraic decision tree computational model, in the worst case over all instances of size n and alternation α. Such results refine the state-of-the-art complexity of Θ(n lg n) in the worst case over instances of size n in the same computational model, a landmark in compression and coding since 1952. Besides the new analysis technique, the improvement is obtained by combining a new algorithm, inspired by van Leeuwen’s algorithm to compute optimal prefix free codes from sorted weights (known since 1976), with a relatively minor extension of Karp et al.’s deferred data structure to partially sort a multiset according to the queries performed on it (known since 1988). Preliminary experimental results on text compression by words show α to be polynomially smaller than n, which suggests improvements by at most a constant multiplicative factor in the running time for such applications. Full article
(This article belongs to the Special Issue Data Compression Algorithms and their Applications)
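Van Leeuwen's sorted-weights construction that the abstract above builds on is short enough to sketch: with the weights already sorted, two monotone queues always hold the next-smallest node at one of their fronts, so the cost of an optimal prefix-free code is computed in linear time after sorting. This sketch covers only the 1976 ingredient, not the paper's partial-sorting algorithm.

```python
from collections import deque

def huffman_cost_sorted(weights):
    # Two-queue construction of an optimal prefix-free code from weights
    # sorted in nondecreasing order. Merged (internal) nodes are produced in
    # nondecreasing weight order, so both queues stay sorted and each pop is
    # O(1): O(n) total after the initial sort.
    leaves = deque(weights)          # assumed sorted ascending
    internal = deque()
    total = 0
    def pop_min():
        if not internal or (leaves and leaves[0] <= internal[0]):
            return leaves.popleft()
        return internal.popleft()
    while len(leaves) + len(internal) > 1:
        merged = pop_min() + pop_min()
        total += merged              # sum of internal-node weights
        internal.append(merged)      # appended in nondecreasing order
    return total                     # = weighted external path length

# Classic textbook instance: optimal cost is 224.
cost = huffman_cost_sorted(sorted([5, 9, 12, 13, 16, 45]))
```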
Open Access Article
A Numerical Approach for the Filtered Generalized Čech Complex
Algorithms 2020, 13(1), 11; https://doi.org/10.3390/a13010011 - 30 Dec 2019
Viewed by 414
Abstract
In this paper, we present an algorithm to compute the filtered generalized Čech complex for a finite collection of disks in the plane which do not necessarily have the same radius. The key step behind the algorithm is to calculate, through a numerical approach, the minimum scale factor needed to ensure that the rescaled disks have a nonempty intersection; the convergence of this approach is guaranteed by a generalization of the well-known Vietoris–Rips Lemma, which we also prove in an alternative way, using elementary geometric arguments. We give an algorithm for computing the 2-dimensional filtered generalized Čech complex of a finite collection of d-dimensional disks in R^d, and we show the performance of our algorithm. Full article
(This article belongs to the Special Issue Topological Data Analysis)
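To illustrate the kind of numerical search the abstract refers to, the simplest case of two disks already shows the pattern: bisection on the scale factor λ against an intersection test. For two disks the answer also has the closed form λ* = dist(c₁, c₂)/(r₁ + r₂), which makes the sketch easy to check (a toy example, not the paper’s algorithm for k-wise intersections):

```python
import math

def rescaled_disks_intersect(c1, r1, c2, r2, lam):
    """Do the two disks, with radii scaled by lam, share a point?"""
    return math.dist(c1, c2) <= lam * (r1 + r2)

def min_scale_two_disks(c1, r1, c2, r2, tol=1e-9):
    """Bisection for the smallest lam at which the rescaled disks meet."""
    lo, hi = 0.0, 1.0
    while not rescaled_disks_intersect(c1, r1, c2, r2, hi):
        hi *= 2.0                       # grow until the scale is feasible
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if rescaled_disks_intersect(c1, r1, c2, r2, mid):
            hi = mid
        else:
            lo = mid
    return hi

# Closed form for the two-disk case: lam* = dist(c1, c2) / (r1 + r2).
lam = min_scale_two_disks((0.0, 0.0), 1.0, (3.0, 0.0), 2.0)  # -> 1.0
```

The monotonicity of the intersection predicate in λ is what guarantees that bisection converges; the paper’s generalized Vietoris–Rips Lemma plays the analogous role for higher-order intersections.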
Open Access Article
Comparative Analysis of Different Model-Based Controllers Using Active Vehicle Suspension System
Algorithms 2020, 13(1), 10; https://doi.org/10.3390/a13010010 - 26 Dec 2019
Viewed by 506
Abstract
This paper deals with the active vibration control of a quarter-vehicle suspension system. The damping control methods investigated in this paper are: higher-order sliding mode control (HOSMC) based on the super-twisting algorithm (STA), first-order sliding mode control (FOSMC), integral sliding mode control (ISMC), proportional integral derivative (PID) control, the linear quadratic regulator (LQR), and a passive suspension system. The performance of the different active controllers is compared in terms of vertical displacement, suspension travel, and wheel deflection. The theoretical, quantitative, and qualitative analyses verify that the STA-based HOSMC exhibits better performance and better rejection of undesired disturbances than FOSMC, ISMC, PID, LQR, and the passive suspension system. Furthermore, it is also robust to intrinsic bounded uncertain dynamics of the model. Full article
Open Access Editorial
Special Issue on Exact and Heuristic Scheduling Algorithms
Algorithms 2020, 13(1), 9; https://doi.org/10.3390/a13010009 - 25 Dec 2019
Viewed by 506
Abstract
This special issue of Algorithms is a follow-up to an earlier one, entitled ‘Algorithms for Scheduling Problems’. In particular, the new issue is devoted to the development of exact and heuristic scheduling algorithms. Submissions were welcome both for traditional scheduling problems and for new practical applications. In the Call for Papers, we mentioned topics such as single-criterion and multi-criteria scheduling problems with additional constraints, including setup times (costs), precedence constraints, batching (lot sizing), and resource constraints, as well as scheduling problems arising in emerging applications. Full article
(This article belongs to the Special Issue Exact and Heuristic Scheduling Algorithms)
Open Access Article
On the Use of Biased-Randomized Algorithms for Solving Non-Smooth Optimization Problems
Algorithms 2020, 13(1), 8; https://doi.org/10.3390/a13010008 - 25 Dec 2019
Viewed by 467
Abstract
Soft constraints are quite common in real-life applications. For example, in freight transportation, the fleet size can be enlarged by outsourcing part of the distribution service, and some deliveries to customers can be postponed as well; in inventory management, it is possible to consider stock-outs generated by unexpected demands; and in manufacturing processes and project management, some deadlines frequently cannot be met due to delays in critical steps of the supply chain. However, capacity-, size-, and time-related limitations are included in many optimization problems as hard constraints, while it would usually be more realistic to consider them as soft ones, i.e., constraints that can be violated to some extent by incurring a penalty cost. Most of the time, this penalty cost is nonlinear and even discontinuous, which might transform the objective function into a non-smooth one. Despite their many practical applications, non-smooth optimization problems are quite challenging, especially when the underlying optimization problem is NP-hard in nature. In this paper, we propose the use of biased-randomized algorithms as an effective methodology to cope with NP-hard and non-smooth optimization problems in many practical applications. Biased-randomized algorithms extend constructive heuristics by introducing a nonuniform randomization pattern into them. Hence, they can be used to explore promising areas of the solution space without the limitations of gradient-based approaches, which assume the existence of smooth objective functions. Moreover, biased-randomized algorithms can be easily parallelized, thus employing short computing times while exploring a large number of promising regions. This paper discusses these concepts in detail, reviews existing work in different application areas, and highlights current trends and open research lines. Full article
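The biased-randomization pattern described above can be made concrete in a few lines: sort the candidate list greedily, then draw the next element with a truncated geometric distribution so that better candidates remain more likely but are not always chosen. The following sketch applies it to a toy knapsack construction (hypothetical helper names; the parameter beta controls the bias strength):

```python
import math
import random

def biased_index(n, beta, rng):
    """Truncated geometric index in [0, n): 0 is most likely,
    and the probability decays by a factor of (1 - beta)."""
    return int(math.log(1.0 - rng.random()) / math.log(1.0 - beta)) % n

def biased_greedy_knapsack(items, capacity, beta, rng):
    """Constructive heuristic: sort items by value density, but draw
    the next item with a geometric bias instead of always taking the best."""
    pool = sorted(items, key=lambda it: it[1] / it[0], reverse=True)
    total_w = total_v = 0
    while pool:
        w, v = pool.pop(biased_index(len(pool), beta, rng))
        if total_w + w <= capacity:
            total_w += w
            total_v += v
    return total_v

rng = random.Random(42)
items = [(4, 10), (3, 7), (5, 11), (2, 3), (6, 8)]  # (weight, value)
# Multi-start: many cheap biased-randomized constructions, keep the best.
best = max(biased_greedy_knapsack(items, 10, 0.3, rng) for _ in range(200))
```

Each construction is independent, which is what makes the multi-start loop trivially parallelizable, as the abstract notes.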
Open Access Article
Detection of Suicide Ideation in Social Media Forums Using Deep Learning
Algorithms 2020, 13(1), 7; https://doi.org/10.3390/a13010007 - 24 Dec 2019
Viewed by 497
Abstract
Suicide ideation expressed in social media has an impact on language usage. Many at-risk individuals use social forum platforms to discuss their problems or to access information on similar topics. The key objective of our study is to present ongoing work on the automatic recognition of suicidal posts. We address the early detection of suicide ideation through deep learning and machine learning-based classification approaches applied to Reddit social media. For this purpose, we employ a combined LSTM-CNN model, which we evaluate and compare to other classification models. Our experiments show that the combined neural network architecture with word embedding techniques achieves the best relevance classification results. Additionally, our results support the strength and ability of deep learning architectures to build effective models for suicide risk assessment in various text classification tasks. Full article
Open Access Article
A Generalized MILP Formulation for the Period-Aggregated Resource Leveling Problem with Variable Job Duration
Algorithms 2020, 13(1), 6; https://doi.org/10.3390/a13010006 - 23 Dec 2019
Cited by 1 | Viewed by 467
Abstract
We study a resource leveling problem with variable job duration. The considered problem includes both scheduling and resource management decisions. The planning horizon is fixed and separated into a set of time periods of equal length. There are several types of resources and their amount varies from one period to another. There is a set of jobs. For each job, a fixed volume of work has to be completed without any preemption while using different resources. If necessary, extra resources can be used at additional costs during each time period. The optimization goal is to minimize the total overload costs required for the execution of all jobs by the given deadline. The decision variables specify the starting time of each job, the duration of the job and the resource amount assigned to the job during each period (it may vary over periods). We propose a new generalized mathematical formulation for this optimization problem. The formulation is compared with existing approaches from the literature. Theoretical study and computational experiments show that our approach provides more flexible resource allocation resulting in better final solutions. Full article
(This article belongs to the Special Issue Exact and Heuristic Scheduling Algorithms)
Open Access Article
Simple Constructive, Insertion, and Improvement Heuristics Based on the Girding Polygon for the Euclidean Traveling Salesman Problem
Algorithms 2020, 13(1), 5; https://doi.org/10.3390/a13010005 - 21 Dec 2019
Cited by 1 | Viewed by 513
Abstract
The Traveling Salesman Problem (TSP) aims at finding the shortest trip for a salesman, who has to visit each of the locations from a given set exactly once, starting and ending at the same location. Here, we consider the Euclidean version of the problem, in which the locations are points in the two-dimensional Euclidean plane and the distances are correspondingly Euclidean distances. We propose simple, fast, and easily implementable heuristics that work well in practice for large real-life problem instances. The algorithm works in three phases: the constructive, the insertion, and the improvement phases. The first two phases run in time O(n²), and the number of repetitions in the improvement phase is, in practice, bounded by a small constant. We have tested the practical behavior of our heuristics on the available benchmark problem instances. The approximation provided by our algorithm for the tested benchmark problem instances did not beat the best known results. At the same time, comparing the CPU time used by our algorithm with that of the earlier known ones, in about 92% of the cases our algorithm required less computational time. Our algorithm is also memory efficient: for the largest tested problem instance with 744,710 cities, it used about 50 MiB, whereas the average memory usage for the remaining 217 instances was 1.6 MiB. Full article
(This article belongs to the Special Issue Exact and Heuristic Scheduling Algorithms)
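The improvement phase of such heuristics is typically a local-search pass in the spirit of 2-opt, which reverses a segment of the tour whenever doing so shortens it. A generic (and deliberately naive) sketch, not the authors’ girding-polygon-based procedure:

```python
import math

def tour_length(pts, tour):
    """Length of the closed tour visiting pts in the given order."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(pts, tour):
    """Keep reversing tour segments while any reversal shortens the tour."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour) + 1):
                cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(pts, cand) < tour_length(pts, tour) - 1e-12:
                    tour, improved = cand, True
    return tour

pts = [(0, 0), (0, 1), (1, 0), (1, 1)]   # corners of a unit square
best = two_opt(pts, [0, 1, 2, 3])        # start from a self-crossing tour
print(tour_length(pts, best))            # 4.0
```

Each accepted move strictly decreases the tour length, so the loop terminates; production implementations use incremental length updates and neighbor lists instead of recomputing the full tour length per candidate.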
Open Access Article
Two-Machine Job-Shop Scheduling Problem to Minimize the Makespan with Uncertain Job Durations
Algorithms 2020, 13(1), 4; https://doi.org/10.3390/a13010004 - 20 Dec 2019
Cited by 1 | Viewed by 510
Abstract
We study two-machine shop-scheduling problems in which lower and upper bounds on the durations of the n jobs are given before scheduling; the exact value of a job duration remains unknown until the job is completed. The objective is to minimize the makespan (schedule length). We address the issue of how to best execute a schedule if the job duration may take any real value from the given segment. Scheduling decisions may consist of two phases: an off-line phase and an on-line phase. Using the information on the lower and upper bounds for each job duration available in the off-line phase, a scheduler can determine a minimal dominant set of schedules (DS) based on sufficient conditions for schedule domination. The DS optimally covers all possible realizations (scenarios) of the uncertain job durations in the sense that, for each possible scenario, there exists at least one schedule in the DS which is optimal. The DS enables a scheduler to quickly make an on-line scheduling decision whenever additional information on completed jobs becomes available. A scheduler can choose a schedule which is optimal for the largest number of possible scenarios. We developed algorithms for testing a set of conditions for schedule dominance. These algorithms are polynomial in the number of jobs; their time complexity does not exceed O(n²). Computational experiments have shown the effectiveness of the developed algorithms. If there were no more than 600 jobs, then all 1000 instances in each tested series were solved in at most one second. An instance with 10,000 jobs was solved in 0.4 s on average. Most instances from the nine tested classes were solved optimally. If the maximum relative error of the job duration was not greater than 20%, then more than 80% of the tested instances were solved optimally. If the maximum relative error was equal to 50%, then 45% of the tested instances from the nine classes were solved optimally. Full article
(This article belongs to the Special Issue Exact and Heuristic Scheduling Algorithms)
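For orientation, the deterministic counterpart of this setting with a fixed machine route (the two-machine flow shop) is solved exactly by Johnson’s classical rule, which is also the building block of Jackson’s algorithm for the deterministic two-machine job shop. A minimal sketch (a textbook illustration, not the paper’s dominance-testing algorithms):

```python
def johnson_sequence(jobs):
    """Johnson's rule for the deterministic two-machine flow shop.
    jobs: list of (a, b) processing times on machines 1 and 2."""
    first = sorted((j for j in jobs if j[0] <= j[1]), key=lambda j: j[0])
    last = sorted((j for j in jobs if j[0] > j[1]), key=lambda j: j[1],
                  reverse=True)
    return first + last

def makespan(seq):
    """Makespan of a permutation schedule on two machines."""
    t1 = t2 = 0
    for a, b in seq:
        t1 += a                  # machine 1 processes jobs back to back
        t2 = max(t1, t2) + b     # machine 2 waits for machine 1 if needed
    return t2

jobs = [(3, 2), (1, 4), (5, 1)]
print(makespan(johnson_sequence(jobs)))  # 10
```

Under interval uncertainty, no single such sequence is optimal for every scenario, which is precisely what motivates the dominant set of schedules studied in the paper.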
Open Access Article
Parameterized Optimization in Uncertain Graphs—A Survey and Some Results
Algorithms 2020, 13(1), 3; https://doi.org/10.3390/a13010003 - 19 Dec 2019
Viewed by 441
Abstract
We present a detailed survey of results, together with two new results, on graphical models of uncertainty and associated optimization problems. We focus on two well-studied models, namely, the Random Failure (RF) model and the Linear Reliability Ordering (LRO) model. We present an FPT algorithm, parameterized by the product of treewidth and maximum degree, for maximizing expected coverage in an uncertain graph under the RF model. We then consider the problem of finding the maximal core in a graph, which is known to be polynomial-time solvable. We show that the Probabilistic-Core problem is polynomial-time solvable in uncertain graphs under the LRO model. On the other hand, under the RF model, we show that the Probabilistic-Core problem is W[1]-hard for the parameter d, where d is the minimum degree of the core. We then design an FPT algorithm for the parameter treewidth. Full article
(This article belongs to the Special Issue New Frontiers in Parameterized Complexity and Algorithms)
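In the RF model, every edge survives independently with its own probability, so on toy instances the expected coverage can be computed exactly by enumerating all 2^m edge states. This brute-force baseline (not the paper’s FPT algorithm) makes the model concrete:

```python
from itertools import product

def expected_reachable(n, edges, source):
    """Exact expected number of vertices reachable from source under
    the RF model. edges: list of (u, v, p) with survival probability p."""
    expected = 0.0
    for state in product([0, 1], repeat=len(edges)):
        prob = 1.0
        adj = {v: [] for v in range(n)}
        for bit, (u, v, p) in zip(state, edges):
            prob *= p if bit else 1.0 - p
            if bit:
                adj[u].append(v)
                adj[v].append(u)
        seen, stack = {source}, [source]       # DFS over surviving edges
        while stack:
            for w in adj[stack.pop()]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        expected += prob * len(seen)
    return expected

# Path 0-1-2, each edge surviving with probability 0.5:
# E = 1 + P(edge 01 up) + P(both up) = 1 + 0.5 + 0.25 = 1.75
print(expected_reachable(3, [(0, 1, 0.5), (1, 2, 0.5)], 0))  # 1.75
```

The exponential blow-up in the number of edges is exactly what parameterized algorithms such as the treewidth-based FPT result avoid.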
Open Access Article
Journey Planning Algorithms for Massive Delay-Prone Transit Networks
Algorithms 2020, 13(1), 2; https://doi.org/10.3390/a13010002 - 18 Dec 2019
Viewed by 448
Abstract
This paper studies the journey planning problem in the context of transit networks. Given the timetable of a schedule-based transportation system (consisting, e.g., of trains, buses, etc.), the problem seeks journeys optimizing some criteria; specifically, it answers natural queries such as “find a journey starting from a source stop and arriving at a target stop as early as possible”. The fastest approach for answering these queries, yielding the smallest average query time even on very large networks, is the Public Transit Labeling (PTL) framework, first proposed in Delling et al., SEA 2015. This method combines three main ingredients: (i) a graph-based representation of the schedule of the transit network; (ii) a labeling of this graph encoding its transitive closure (computed via a time-consuming pre-processing step); and (iii) an efficient query algorithm exploiting both (i) and (ii) to quickly answer queries of interest at runtime. Unfortunately, while transit networks’ timetables are inherently dynamic (they are often subject to delays or disruptions), PTL is not natively designed to handle updates in the schedule: even after a single change, precomputed data may become outdated and queries can return incorrect results. This is a major limitation, especially when dealing with massively sized inputs (e.g., metropolitan- or continental-sized networks), as recomputing the labeling from scratch after each change yields unsustainable time overheads that are incompatible with interactive applications. In this work, we introduce a new framework that extends PTL to function in delay-prone transit networks. In particular, we provide a new set of algorithms able to update both the graph and the precomputed labeling whenever a delay affects the network, without any recomputation from scratch. We demonstrate the effectiveness of our solution through an extensive experimental evaluation conducted on real-world networks. Our experiments show that: (i) the update time required by the new algorithms is, on average, orders of magnitude smaller than that required by recomputation from scratch via PTL; (ii) the updated graph and labeling yield query-time performance and space overhead equivalent to those obtained by recomputation from scratch via PTL. This suggests that our new solution is an effective approach to handling the journey planning problem in delay-prone transit networks. Full article
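Ingredient (ii) above is a 2-hop (hub) labeling: every vertex stores a small set of (hub, distance) pairs such that any shortest path is witnessed by a common hub, so a query reduces to intersecting two label sets. A schematic query routine with hand-built labels for a three-stop path (illustrative only, not PTL itself):

```python
def hub_query(label_u, label_v):
    """2-hop labeling distance query: minimum over common hubs of the
    stored distances d(u, h) + d(h, v)."""
    best = float("inf")
    for hub, du in label_u.items():
        dv = label_v.get(hub)
        if dv is not None:
            best = min(best, du + dv)
    return best

# Hand-built labels for the unit-weight path a - b - c, with b chosen as
# the hub covering every shortest path (each vertex also stores itself).
labels = {
    "a": {"a": 0, "b": 1},
    "b": {"b": 0},
    "c": {"c": 0, "b": 1},
}
print(hub_query(labels["a"], labels["c"]))  # 2
```

Because query time depends only on label sizes, keeping labels small and valid after a delay, rather than rebuilding them, is what the update algorithms described above target.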
Open Access Article
Image Restoration Using a Fixed-Point Method for a TVL2 Regularization Problem
Algorithms 2020, 13(1), 1; https://doi.org/10.3390/a13010001 - 18 Dec 2019
Viewed by 512
Abstract
In this paper, we first propose a new TVL2 regularization model for image restoration, and then we propose two iterative methods, a fixed-point and a fixed-point-like method, using CGLS (the Conjugate Gradient Least Squares method) for solving the newly proposed TVL2 problem. We also provide a convergence analysis for the fixed-point method. Lastly, numerical experiments on several test problems are provided to evaluate the effectiveness of the two proposed iterative methods. The numerical results show that the new TVL2 model is preferred over an existing TVL2 model, and that the proposed fixed-point-like method is well suited to the new TVL2 model. Full article
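The CGLS building block mentioned above applies conjugate gradients to the normal equations AᵀAx = Aᵀb while only ever multiplying by A and Aᵀ, so AᵀA is never formed explicitly. A textbook pure-Python sketch on a tiny dense system (not the authors’ implementation, which operates on image-sized problems):

```python
def matvec(A, x):
    """y = A x for a dense matrix stored as a list of rows."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def rmatvec(A, y):
    """z = A^T y without forming A^T."""
    return [sum(A[i][j] * y[i] for i in range(len(A)))
            for j in range(len(A[0]))]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cgls(A, b, iters=50, tol=1e-14):
    """Solve min ||Ax - b||_2 via conjugate gradients on the normal
    equations A^T A x = A^T b, using only products with A and A^T."""
    x = [0.0] * len(A[0])
    r = list(b)               # residual b - A x (x starts at 0)
    s = rmatvec(A, r)         # normal-equations residual
    p = list(s)
    gamma = dot(s, s)
    for _ in range(iters):
        if gamma <= tol:
            break
        q = matvec(A, p)
        alpha = gamma / dot(q, q)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * qi for ri, qi in zip(r, q)]
        s = rmatvec(A, r)
        gamma_new = dot(s, s)
        p = [si + (gamma_new / gamma) * pi for si, pi in zip(s, p)]
        gamma = gamma_new
    return x

A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [1.0, 2.0, 2.0]
x = cgls(A, b)  # least-squares solution is (2/3, 5/3)
```

In a fixed-point scheme of the kind the paper studies, a CGLS solve of this form is performed at every outer iteration with the regularization terms folded into A and b.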