
Table of Contents

Algorithms, Volume 13, Issue 7 (July 2020) – 14 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
Open Access Article
On the Relationship between Self-Admitted Technical Debt Removals and Technical Debt Measures
Algorithms 2020, 13(7), 168; https://doi.org/10.3390/a13070168 - 11 Jul 2020
Viewed by 170
Open Access Article
Biologically Inspired Visual System Architecture for Object Recognition in Autonomous Systems
Algorithms 2020, 13(7), 167; https://doi.org/10.3390/a13070167 - 11 Jul 2020
Viewed by 206
Abstract
Computer vision is currently one of the most exciting and rapidly evolving fields of science, affecting numerous industries. Research and development breakthroughs, mainly in the field of convolutional neural networks (CNNs), opened the way to unprecedented sensitivity and precision in object detection and recognition tasks. Nevertheless, findings in recent years on the sensitivity of neural networks to additive noise, light conditions, and to the wholeness of the training dataset indicate that this technology still lacks the robustness needed for the autonomous robotic industry. In an attempt to bring computer vision algorithms closer to the capabilities of a human operator, the mechanisms of the human visual system were analyzed in this work. Recent studies show that the mechanisms behind the recognition process in the human brain include continuous generation of predictions based on prior knowledge of the world. These predictions enable rapid generation of contextual hypotheses that bias the outcome of the recognition process. This mechanism is especially advantageous in situations of uncertainty, when visual input is ambiguous. In addition, the human visual system continuously updates its knowledge about the world based on the gaps between its predictions and the visual feedback. CNNs are feed-forward in nature and lack such top-down contextual attenuation mechanisms. As a result, although they process massive amounts of visual information during their operation, the information is not transformed into knowledge that can be used to generate contextual predictions and improve their performance. In this work, an architecture was designed that aims to integrate the concepts behind the top-down prediction and learning processes of the human visual system with state-of-the-art bottom-up object recognition models, e.g., deep CNNs. The work focuses on two mechanisms of the human visual system: anticipation-driven perception and reinforcement-driven learning. Imitating these top-down mechanisms, together with state-of-the-art bottom-up feed-forward algorithms, resulted in an accurate, robust, and continuously improving target recognition model. Full article
(This article belongs to the Special Issue Bio-Inspired Algorithms for Image Processing)
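As a rough illustration of the anticipation-driven idea in this abstract (not the authors' architecture), the sketch below blends bottom-up CNN class probabilities with a top-down contextual prior; the function name, the alpha weighting, and all numbers are hypothetical.

```python
import numpy as np

def contextual_rescoring(cnn_probs, context_prior, alpha=0.5):
    # Geometric blend of bottom-up evidence and top-down expectation;
    # alpha = 0 ignores context, alpha = 1 trusts context alone.
    scores = cnn_probs ** (1 - alpha) * context_prior ** alpha
    return scores / scores.sum()

# Ambiguous detection: a kitchen-scene prior biases "cup" over the
# visually similar "bowl" (class names and numbers are invented).
cnn_probs = np.array([0.40, 0.45, 0.15])       # cup, bowl, other
context_prior = np.array([0.70, 0.10, 0.20])   # learned scene expectations
print(contextual_rescoring(cnn_probs, context_prior))
```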
Open Access Article
Polyhedral DC Decomposition and DCA Optimization of Piecewise Linear Functions
Algorithms 2020, 13(7), 166; https://doi.org/10.3390/a13070166 - 11 Jul 2020
Viewed by 168
Abstract
For piecewise linear functions $f:\mathbb{R}^n\to\mathbb{R}$ we show how their abs-linear representation can be extended to yield simultaneously their decomposition into a convex part $\check{f}$ and a concave part $\hat{f}$, including a pair of generalized gradients $\check{g}\in\mathbb{R}^n\ni\hat{g}$. The latter satisfy strict chain rules and can be computed in the reverse mode of algorithmic differentiation, at a small multiple of the cost of evaluating $f$ itself. It is shown how $\check{f}$ and $\hat{f}$ can be expressed as a single maximum and a single minimum of affine functions, respectively. The two subgradients $\check{g}$ and $\hat{g}$ are then used to drive DCA algorithms, where the (convex) inner problem can be solved in finitely many steps, e.g., by a Simplex variant or the true steepest descent method. Using a reflection technique to update the gradients of the concave part, one can ensure finite convergence to a local minimizer of $f$, provided the Linear Independence Kink Qualification holds. For piecewise smooth objectives the approach can be used as an inner method for successive piecewise linearization. Full article
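To make the DCA driving loop concrete, here is a minimal sketch for a polyhedral DC objective f(x) = max_i (Ax + b)_i - max_j (Cx + d)_j: the subgradient of the subtracted part selects a linearization, and the convex inner problem becomes a small LP. The instance, the box bounds, and the iteration count are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical polyhedral DC instance: f(x) = max_i(Ax + b)_i - max_j(Cx + d)_j.
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
b = np.zeros(4)                              # convex part: max of four affines
C = np.array([[0.5, 0.5], [-0.5, -0.5]])
d = np.array([-1.0, -1.0])                   # concave part (subtracted maximum)

def dca(x, iters=20):
    for _ in range(iters):
        g = C[np.argmax(C @ x + d)]          # subgradient of the subtracted part
        # Inner convex problem: min_{t,x} t - g.x  s.t.  t >= (Ax + b)_i, |x| <= 10.
        res = linprog(np.concatenate(([1.0], -g)),
                      A_ub=np.hstack([-np.ones((len(b), 1)), A]), b_ub=-b,
                      bounds=[(None, None)] + [(-10, 10)] * x.size)
        x = res.x[1:]
    return x

print(dca(np.array([3.0, -2.0])))            # iterates toward a local minimizer
```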
Open Access Article
Image Edge Detector with Gabor Type Filters Using a Spiking Neural Network of Biologically Inspired Neurons
Algorithms 2020, 13(7), 165; https://doi.org/10.3390/a13070165 - 09 Jul 2020
Viewed by 226
Abstract
We report the design of a Spiking Neural Network (SNN) edge detector with biologically inspired neurons that has a conceptual similarity with both Hodgkin-Huxley (HH) model neurons and Leaky Integrate-and-Fire (LIF) neurons. The computation of the membrane potential, which is used to determine the occurrence or absence of spike events at each time step, is carried out using the analytical solution to a simplified version of the HH neuron model. We find that the SNN-based edge detector detects more edge pixels in images than a Sobel edge detector does. We designed a pipeline for image classification with a low-exposure frame simulation layer, SNN edge detection layers as pre-processing layers, and a Convolutional Neural Network (CNN) as the classification module. We tested this pipeline on the classification task with the Digits dataset, which is available in MATLAB. We find that the SNN-based edge detection layer increases the image classification accuracy at lower exposure times, that is, for 1 < t < T/4, where t is the number of milliseconds in a simulated exposure frame and T is the total exposure time, with reference to a Sobel or Canny edge detection layer in the pipeline. These results pave the way for developing novel cognitive neuromorphic computing architectures for millisecond-timescale detection and object classification applications using event or spike cameras. Full article
(This article belongs to the Special Issue Bio-Inspired Algorithms for Image Processing)
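For flavour, the following toy leaky integrate-and-fire layer uses the exact one-step solution of the membrane equation, in the spirit of (but much simpler than) the HH-derived update the abstract describes; the time constant, threshold, and constant-current drive are illustrative assumptions.

```python
import numpy as np

def lif_spikes(current, t_steps=100, dt=1.0, tau=10.0, v_th=1.0):
    """Leaky integrate-and-fire response to a constant per-pixel input current.

    Uses the exact solution of tau * dV/dt = -V + I between spikes; all
    parameters here are illustrative, not the authors' values."""
    v = np.zeros_like(current, dtype=float)
    spikes = np.zeros_like(current, dtype=int)
    decay = np.exp(-dt / tau)
    for _ in range(t_steps):
        v = v * decay + current * (1.0 - decay)   # analytical one-step update
        fired = v >= v_th
        spikes += fired
        v[fired] = 0.0                            # reset after a spike
    return spikes

# Per-pixel spike counts can then play the role of edge responses after
# Gabor filtering, analogous to the pipeline the abstract outlines.
print(lif_spikes(np.array([0.5, 1.2, 3.0])))
```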
Open Access Article
Nonparametric Estimation of Continuously Parametrized Families of Probability Density Functions—Computational Aspects
Algorithms 2020, 13(7), 164; https://doi.org/10.3390/a13070164 - 08 Jul 2020
Viewed by 235
Abstract
We consider a rather general problem of nonparametric estimation of an uncountable set of probability density functions (p.d.f.'s) of the form $f(x;r)$, where $r$ is a non-random real variable that ranges from $R_1$ to $R_2$. We put emphasis on the algorithmic aspects of this problem, since they are crucial for the exploratory analysis of the big data needed for the estimation. A specialized learning algorithm, based on the 2D FFT, is proposed and tested on observations that allow for estimating the p.d.f.'s of a jet engine's temperature as a function of its rotation speed. We also derive theoretical results concerning the convergence of the estimation procedure that contain hints on selecting the parameters of the estimation algorithm. Full article
(This article belongs to the Special Issue Algorithms for Nonparametric Estimation)
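A minimal sketch of the FFT-based smoothing ingredient the abstract mentions: bin the sample pairs on a 2D grid, then smooth with a separable Gaussian kernel via FFT convolution. The data generator, grid sizes, and bandwidth below are hypothetical, and this is a generic kernel estimate, not the paper's specialized learning algorithm.

```python
import numpy as np
from scipy.signal import fftconvolve

# Hypothetical observations (x_k, r_k): e.g., temperature vs. rotation speed.
rng = np.random.default_rng(0)
r = rng.uniform(0.0, 1.0, 5000)
x = np.sin(2 * np.pi * r) + 0.3 * rng.standard_normal(5000)

# Bin the pairs on a grid, then smooth with a Gaussian kernel using
# FFT-based convolution -- the 2D-FFT ingredient of the approach.
H, x_edges, r_edges = np.histogram2d(x, r, bins=(128, 64), density=True)
g = np.exp(-0.5 * (np.arange(-15, 16) / 3.0) ** 2)
kernel = np.outer(g, g)
kernel /= kernel.sum()
F = fftconvolve(H, kernel, mode="same")  # smoothed estimate of f(x; r) on the grid
```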
Open Access Article
An Interval Type-2 Fuzzy Risk Analysis Model (IT2FRAM) for Determining Construction Project Contingency Reserve
Algorithms 2020, 13(7), 163; https://doi.org/10.3390/a13070163 - 07 Jul 2020
Viewed by 230
Abstract
Determining contingency reserve is critical to project risk management. Classic methods of determining contingency reserve rely heavily on historical data and fail to effectively incorporate certain types of uncertainty, such as vagueness, ambiguity, and subjectivity. In this paper, an interval type-2 fuzzy risk analysis model (IT2FRAM) is introduced in order to determine the contingency reserve. In IT2FRAM, the membership functions for the linguistic terms used to describe the probability and impact of risk and opportunity events are developed, optimized, and aggregated using interval type-2 fuzzy sets and the principle of justifiable granularity. IT2FRAM is an extension of a fuzzy arithmetic-based risk analysis method that considers such uncertainties and addresses the limitations of probabilistic and deterministic contingency determination methods. The contribution of IT2FRAM is that it considers the opinions of several subject matter experts to develop the membership functions of linguistic terms. Moreover, the effect of outlier opinions on the developed membership functions is reduced. IT2FRAM also enables the aggregation of non-linear membership functions into trapezoidal membership functions. A hypothetical case study is presented in order to illustrate the application of IT2FRAM in Fuzzy Risk Analyzer© (FRA©), a risk analysis software package. Full article
(This article belongs to the Special Issue Fuzzy Hybrid Systems for Construction Engineering and Management)
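As a toy illustration of one ingredient only (not IT2FRAM's actual aggregation, which relies on the principle of justifiable granularity), the sketch below forms the footprint of uncertainty of an interval type-2 set as the envelope of several experts' trapezoidal membership functions; all numbers are hypothetical.

```python
import numpy as np

# Hypothetical expert opinions: "high probability" as trapezoids (a, b, c, d).
experts = np.array([
    [0.55, 0.65, 0.80, 0.90],
    [0.50, 0.70, 0.85, 0.95],
    [0.60, 0.68, 0.78, 0.88],
])

# Envelope of the expert trapezoids: upper and lower membership functions
# bounding the footprint of uncertainty of an interval type-2 fuzzy set.
upper = (experts[:, 0].min(), experts[:, 1].min(),
         experts[:, 2].max(), experts[:, 3].max())
lower = (experts[:, 0].max(), experts[:, 1].max(),
         experts[:, 2].min(), experts[:, 3].min())

def trap(u, a, b, c, d):
    """Membership value of u under a trapezoid (a, b, c, d)."""
    if u <= a or u >= d:
        return 0.0
    if b <= u <= c:
        return 1.0
    return (u - a) / (b - a) if u < b else (d - u) / (d - c)

print(trap(0.55, *upper), trap(0.55, *lower))  # upper bounds lower everywhere
```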
Open Access Article
Sensitivity Analysis for Microscopic Crowd Simulation
Algorithms 2020, 13(7), 162; https://doi.org/10.3390/a13070162 - 05 Jul 2020
Viewed by 391
Abstract
Microscopic crowd simulation can help to enhance the safety of pedestrians in situations that range from museum visits to music festivals. To obtain a useful prediction, the input parameters must be chosen carefully. In many cases, a lack of knowledge or limited measurement accuracy adds uncertainty to the input. In addition, for meaningful parameter studies, we first need to identify the most influential parameters of our parametric computer models. The field of uncertainty quantification offers standardized and fully automated methods that we believe to be beneficial for pedestrian dynamics. In addition, many methods come at a comparatively low cost, even for computationally expensive problems. This allows for their application to larger scenarios. We aim to identify and adapt fitting methods to microscopic crowd simulation in order to explore their potential in pedestrian dynamics. In this work, we first perform a variance-based sensitivity analysis using Sobol' indices and then crosscheck the results by a derivative-based measure, the activity scores. We apply both methods to a typical scenario in crowd simulation, a bottleneck. Because constrictions can lead to high crowd densities and delays in evacuations, several experiments and simulation studies have been conducted for this setting. We show qualitative agreement between the results of both methods. Additionally, we identify a one-dimensional subspace in the input parameter space and discuss its impact on the simulation. Moreover, we analyze and interpret the sensitivity indices with respect to the bottleneck scenario. Full article
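A generic pick-freeze Monte Carlo estimator of first-order Sobol' indices gives the flavour of the variance-based analysis described above; this is not the study's code, and the one-line toy model standing in for the crowd simulation is hypothetical.

```python
import numpy as np

def first_order_sobol(model, n_inputs, n_samples=2**12, rng=None):
    """Pick-freeze Monte Carlo estimate of first-order Sobol' indices.

    `model` maps an (n, n_inputs) array of uniform [0, 1] inputs to n outputs."""
    rng = rng or np.random.default_rng(0)
    A = rng.random((n_samples, n_inputs))
    B = rng.random((n_samples, n_inputs))
    yA, yB = model(A), model(B)
    var = np.var(np.concatenate([yA, yB]))
    s = np.empty(n_inputs)
    for i in range(n_inputs):
        ABi = A.copy()
        ABi[:, i] = B[:, i]          # vary only the i-th input
        s[i] = np.mean(yB * (model(ABi) - yA)) / var
    return s

# Toy stand-in for a crowd simulation: the output hinges mostly on input 0.
print(first_order_sobol(lambda u: 5 * u[:, 0] + u[:, 1] + 0.1 * u[:, 2], 3))
```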
Open Access Article
CONDA-PM—A Systematic Review and Framework for Concept Drift Analysis in Process Mining
Algorithms 2020, 13(7), 161; https://doi.org/10.3390/a13070161 - 03 Jul 2020
Viewed by 288
Abstract
Business processes evolve over time to adapt to changing business environments. This requires continuous monitoring of business processes to gain insights into whether they conform to the intended design or deviate from it. The situation when a business process changes while being analysed is denoted as concept drift. Its analysis is concerned with studying how a business process changes, in terms of detecting and localising changes and studying their effects. Concept drift analysis is crucial to enable early detection and management of changes, that is, whether to promote a change to become part of an improved process, or to reject the change and make decisions to mitigate its effects. Despite its importance, there exists no comprehensive framework for analysing concept drift types, affected process perspectives, and granularity levels of a business process. This article proposes the CONcept Drift Analysis in Process Mining (CONDA-PM) framework, a four-staged framework describing the phases and requirements of a concept drift analysis approach. CONDA-PM was derived from a Systematic Literature Review (SLR) of current approaches to analysing concept drift. We apply the CONDA-PM framework to current approaches to concept drift analysis and evaluate their maturity. Applying the CONDA-PM framework highlights areas where research is needed to complement existing efforts. Full article
(This article belongs to the Special Issue Process Mining and Emerging Applications)
Open Access Article
Text Semantic Annotation: A Distributed Methodology Based on Community Coherence
Algorithms 2020, 13(7), 160; https://doi.org/10.3390/a13070160 - 01 Jul 2020
Viewed by 294
Abstract
Text annotation is the process of identifying the sense of a textual segment within a given context and linking it to a corresponding entity in a concept ontology. As the limitations of the bag-of-words paradigm become increasingly discernible in modern applications, several information retrieval and artificial intelligence tasks are shifting to semantic representations for addressing the inherent natural language polysemy and homonymy challenges. With extensive application in a broad range of scientific fields, such as digital marketing, bioinformatics, chemical engineering, neuroscience, and social sciences, community detection has attracted great scientific interest. Focusing on linguistics, where it aims to identify densely interconnected subgroups of semantic ontologies, the application of community detection has proven beneficial in terms of disambiguation improvement and ontology enhancement. In this paper we introduce a novel distributed supervised knowledge-based methodology employing community detection algorithms for text annotation with Wikipedia entities, establishing the concept of community coherence as a metric for local contextual coherence compatibility. Our experimental evaluation revealed that deeper inference of relatedness and local entity community coherence in the Wikipedia graph yields substantial improvements overall, with a focus on improving the accuracy of less common annotations. The proposed methodology attains robust disambiguation performance and is thus propitious for wider adoption. Full article
(This article belongs to the Special Issue Algorithmic Data Management)
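To give the community-coherence idea some shape, here is a toy sketch using networkx community detection over a small candidate-entity graph; the graph, entity names, and scoring function are hypothetical stand-ins for the paper's Wikipedia-graph machinery.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy relatedness graph over candidate Wikipedia entities (all hypothetical).
G = nx.Graph()
G.add_weighted_edges_from([
    ("Java_(programming_language)", "Python_(programming_language)", 0.9),
    ("Java_(programming_language)", "Compiler", 0.7),
    ("Python_(programming_language)", "Compiler", 0.6),
    ("Java_(island)", "Indonesia", 0.9),
])
communities = list(greedy_modularity_communities(G, weight="weight"))

def community_coherence(candidate, context_entities):
    # Fraction of context entities sharing the candidate's community --
    # a simple stand-in for the paper's coherence metric.
    for com in communities:
        if candidate in com:
            return sum(e in com for e in context_entities) / len(context_entities)
    return 0.0

# Programming-language context favours the software sense of "Java".
print(community_coherence("Java_(programming_language)",
                          ["Python_(programming_language)", "Compiler"]))
```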
Open Access Article
Stream-Based Lossless Data Compression Applying Adaptive Entropy Coding for Hardware-Based Implementation
Algorithms 2020, 13(7), 159; https://doi.org/10.3390/a13070159 - 30 Jun 2020
Viewed by 327
Abstract
Driven by the strong demand for very high-speed processor I/O, the physical performance of hardware I/O has grown drastically in the past decade. However, recent Big Data applications still demand larger I/O bandwidth and lower latency. Because current I/O performance no longer improves so drastically, it is time to consider another way to increase it. To overcome this challenge, we focus on lossless data compression technology to decrease the amount of data itself in the data communication path. Recent Big Data applications treat data streams that flow continuously and never allow processing to stall, due to the high speed. Therefore, an elegant hardware-based data compression technology is demanded. This paper proposes a novel lossless data compression method, called ASE coding. It encodes streaming data by applying an entropy coding approach. ASE coding instantly assigns the fewest bits to the corresponding compressed data according to the number of occupied entries in a look-up table. This paper describes the detailed mechanism of ASE coding. Furthermore, performance evaluations demonstrate that ASE coding adaptively shrinks streaming data and works with a small amount of hardware resources, without stalling or buffering any part of the data stream. Full article
(This article belongs to the Special Issue Lossless Data Compression)
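The table-driven idea can be caricatured in a few lines: a hit in a small look-up table costs a flag plus an index whose width tracks the number of occupied entries, while a miss costs a flag plus the raw symbol. This is only a cost model in the spirit of the abstract; the real ASE bit layout, table policy, and sizes differ.

```python
import math

def ase_like_cost(data: bytes, table_size=16):
    """Bit cost of a toy table-driven coder: index width grows with the
    number of occupied table entries; the table adapts move-to-front style."""
    table, bits = [], 0
    for sym in data:
        if sym in table:
            bits += 1 + max(1, math.ceil(math.log2(len(table))))  # flag + index
            table.remove(sym)
        else:
            bits += 1 + 8                                         # flag + raw byte
        table.insert(0, sym)                 # most recent symbol goes up front
        if len(table) > table_size:
            table.pop()                      # bounded table, as in hardware
    return bits

print(ase_like_cost(b"abababababcdcdcd"), "bits vs", 8 * 16, "bits raw")
```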
Open Access Article
Fuzzy C-Means Clustering Algorithm with Multiple Fuzzification Coefficients
Algorithms 2020, 13(7), 158; https://doi.org/10.3390/a13070158 - 30 Jun 2020
Viewed by 356
Abstract
Clustering is an unsupervised machine learning technique with many practical applications that has gathered extensive research interest. Aside from deterministic or probabilistic techniques, fuzzy C-means clustering (FCM) is also a common clustering technique. Since the advent of the FCM method, many improvements have been made to increase clustering efficiency. These improvements focus on adjusting the membership representation of elements in the clusters, or on fuzzifying and defuzzifying techniques, as well as the distance function between elements. This study proposes a novel fuzzy clustering algorithm using multiple different fuzzification coefficients depending on the characteristics of each data sample. The proposed fuzzy clustering method has similar calculation steps to FCM with some modifications. The formulas are derived to ensure convergence. The main contribution of this approach is the utilization of multiple fuzzification coefficients as opposed to only one coefficient in the original FCM algorithm. The new algorithm is then evaluated with experiments on several common datasets and the results show that the proposed algorithm is more efficient compared to the original FCM as well as other clustering methods. Full article
(This article belongs to the Special Issue Supervised and Unsupervised Classification Algorithms)
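A minimal sketch of the core idea, per-sample fuzzification coefficients inside otherwise standard FCM updates, is given below; replacing the single fuzzifier m by a vector in the classic formulas is an assumption about the paper's derivation, and the data and coefficient values are toy choices.

```python
import numpy as np

def fcm_multi_m(X, n_clusters, m, n_iter=100, seed=0):
    """Fuzzy C-means with a per-sample fuzzifier m[k] (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    U = rng.random((n_clusters, len(X)))
    U /= U.sum(axis=0)                                # columns sum to one
    for _ in range(n_iter):
        Um = U ** m                                   # m broadcasts per sample
        V = (Um @ X) / Um.sum(axis=1, keepdims=True)  # cluster centres
        D = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2) + 1e-12
        W = D ** (-2.0 / (m - 1.0))                   # per-sample exponent
        U = W / W.sum(axis=0, keepdims=True)          # membership update
    return U, V

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
m = np.full(len(X), 2.0); m[:50] = 1.5                # two fuzzifier levels
U, V = fcm_multi_m(X, 2, m)
```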
Open Access Technical Note
The RONO (Rank-Order-Normalization) Procedure for Power-Spectrum Analysis of Datasets with Non-Normal Distributions
Algorithms 2020, 13(7), 157; https://doi.org/10.3390/a13070157 - 30 Jun 2020
Viewed by 312
Abstract
Standard (Lomb-Scargle, likelihood, etc.) procedures for power-spectrum analysis provide convenient estimates of the significance of any peak in a power spectrum, based—typically—on the assumption that the measurements being analyzed have a normal (i.e., Gaussian) distribution. However, the measurement sequence provided by a real experiment or a real observational program may not meet this requirement. The RONO (rank-order normalization) procedure generates a proxy distribution that retains the rank-order of the original measurements but has a strictly normal distribution. The proxy distribution may then be analyzed by standard power-spectrum analysis. We show by an example that the resulting power spectrum may prove to be quite close to the power spectrum obtained from the original data by a standard procedure, even if the distribution of the original measurements is far from normal. Such a comparison would tend to validate the original analysis. Full article
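The procedure as described maps directly to a few lines of scipy; this sketch assumes average ranks for ties and the common r/(n+1) quantile convention, which may differ in detail from the authors' implementation.

```python
import numpy as np
from scipy.stats import norm, rankdata

def rono(x):
    """Map each value to the normal quantile of its rank: order is preserved,
    and the proxy sample has a strictly normal distribution."""
    ranks = rankdata(x)                  # 1 .. n, ties get averaged ranks
    return norm.ppf(ranks / (len(x) + 1.0))

# Example: a strongly skewed sample becomes Gaussian while keeping rank order.
x = np.random.default_rng(0).exponential(size=1000)
z = rono(x)                              # analyze z with Lomb-Scargle etc.
```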
Open Access Article
Generalized Polynomial Chaos Expansion for Fast and Accurate Uncertainty Quantification in Geomechanical Modelling
Algorithms 2020, 13(7), 156; https://doi.org/10.3390/a13070156 - 30 Jun 2020
Viewed by 297
Abstract
Geomechanical modelling of the processes associated with the exploitation of subsurface resources, such as land subsidence or triggered/induced seismicity, is a common practice of major interest. The prediction reliability depends on different sources of uncertainty, such as the parameterization of the constitutive model characterizing the deep rock behaviour. In this study, we focus on a Sobol'-based sensitivity analysis and uncertainty reduction via assimilation of land deformations. A synthetic test case application on a deep hydrocarbon reservoir is considered, where land settlements are predicted with the aid of a 3-D Finite Element (FE) model. Data assimilation is performed via the Ensemble Smoother (ES) technique and its variation in the form of Multiple Data Assimilation (ES-MDA). However, ES convergence is guaranteed only with a large number of Monte Carlo (MC) simulations, which may be computationally infeasible for large-scale and complex systems. For this reason, a surrogate model based on the generalized Polynomial Chaos Expansion (gPCE) is proposed as an approximation of the forward problem. This approach allows efficient computation of the Sobol' indices for the sensitivity analysis and greatly reduces the computational cost of the original ES and MDA formulations, also enhancing the accuracy of the overall prediction process. Full article
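As a one-dimensional caricature of a gPCE surrogate (the study uses a multi-dimensional expansion for a 3-D FE model), one can fit probabilists' Hermite polynomials to model evaluations and read the mean and variance straight off the coefficients; the toy model response and the degree are assumptions.

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

# Training data: a cheap stand-in for the expensive FE model, xi ~ N(0, 1).
rng = np.random.default_rng(0)
xi = rng.standard_normal(200)
y = np.exp(0.3 * xi) + 0.05 * xi ** 2        # hypothetical model response

deg = 4
coeffs = He.hermefit(xi, y, deg)             # least-squares PCE coefficients

# Orthogonality gives statistics for free: E[He_k * He_l] = k! * delta_kl,
# so the mean is the 0th coefficient and the variance a weighted sum.
norms = np.array([math.factorial(k) for k in range(deg + 1)])
mean = coeffs[0]
var = np.sum(coeffs[1:] ** 2 * norms[1:])
y_hat = He.hermeval(0.5, coeffs)             # cheap surrogate evaluation
print(mean, var, y_hat)
```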
Open Access Article
Embedded Bayesian Network Contribution for a Safe Mission Planning of Autonomous Vehicles
Algorithms 2020, 13(7), 155; https://doi.org/10.3390/a13070155 - 28 Jun 2020
Viewed by 389
Abstract
Bayesian Networks (BN) are probabilistic models that are commonly used for diagnosis in numerous domains (medicine, finance, transport, robotics, …). In the case of autonomous vehicles, they can contribute to the elaboration of intelligent monitors that take the environmental context into account. We show in this paper some of the main abilities of BN that can help in the elaboration of fault detection, isolation, and recovery (FDIR) modules. One of the main difficulties with the BN model is generally to elaborate it for the case under study. We therefore propose some automatic generation techniques from failure mode and effects analysis (FMEA)-like tables using the design pattern approach. Once defined, these modules have to operate online in autonomous vehicles. In a second part, we propose a design methodology for real-time and non-intrusive implementations of the BN modules on FPGA-SoC supports. We show that the FPGA implementation can offer an interesting speed-up at a very limited energy cost. Lastly, we show how these BN modules can be incorporated into the decision-making model for the mission planning of unmanned aerial vehicles (UAVs). We illustrate the integration by means of two models: the Decision Network model, a straightforward extension of the BN model, and the BFM model, an extension of the Markov Decision Process (MDP) decision-making model incorporating a BN. We illustrate the different proposals with realistic examples and show that the hybrid implementation on FPGA-SoC can offer some benefits. Full article
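To make the monitoring idea tangible, here is a two-node toy diagnosis network with hand-rolled exact inference; the node names and CPT values are invented, and a real FDIR monitor of the kind described would be far richer.

```python
import numpy as np

# Toy diagnosis network Fault -> SensorAlarm (hypothetical CPTs), sketching
# the kind of BN monitor the paper embeds on an FPGA-SoC.
p_fault = np.array([0.95, 0.05])                # P(Fault = no, yes)
p_alarm_given_fault = np.array([[0.99, 0.01],   # P(Alarm | Fault = no)
                                [0.10, 0.90]])  # P(Alarm | Fault = yes)

def posterior_fault(alarm_observed: int):
    """P(Fault | Alarm) by direct enumeration (Bayes' rule)."""
    joint = p_fault * p_alarm_given_fault[:, alarm_observed]
    return joint / joint.sum()

print(posterior_fault(1))   # diagnosis given that the alarm fired
```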