- Guaranteed Diversity and Optimality for Computational Protein Design
- Using Machine Learning for Quantum Annealing Accuracy Prediction
- Efficient and Scalable Initialization of Partitioned Coupled Simulations with preCICE
- Iterative Solution of Linear Matrix Inequalities for the Combined Control and Observer Design of Systems with Polytopic Parameter Uncertainty and Stochastic Noise
Journal Description
Algorithms
Algorithms is a peer-reviewed, open access journal which provides an advanced forum for studies related to algorithms and their applications. Algorithms is published monthly online by MDPI. The European Society for Fuzzy Logic and Technology (EUSFLAT) is affiliated with Algorithms, and its members receive discounts on the article processing charges.
- Open Access — free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), Ei Compendex, MathSciNet and many other databases.
- Journal Rank: CiteScore - Q2 (Numerical Analysis)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 14.6 days after submission; acceptance to publication takes 2.6 days (median values for papers published in this journal in the first half of 2021).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
Latest Articles
A Novel Semi-Supervised Fuzzy C-Means Clustering Algorithm Using Multiple Fuzzification Coefficients
Algorithms 2021, 14(9), 258; https://doi.org/10.3390/a14090258 - 29 Aug 2021
Abstract
Clustering is an unsupervised machine learning method with many practical applications that has gathered extensive research interest. It is a technique for dividing data elements into clusters such that elements in the same cluster are similar. Clustering belongs to the group of unsupervised machine learning techniques, meaning that there is no information about the labels of the elements. However, when some knowledge about the data points is available in advance, it is beneficial to use a semi-supervised algorithm. Among the many clustering techniques available, fuzzy C-means clustering (FCM) is a common one. To make the FCM algorithm a semi-supervised method, it was proposed in the literature to use an auxiliary matrix to adjust the membership grades of the elements and force them into certain clusters during the computation. In this study, instead of using the auxiliary matrix, we proposed using multiple fuzzification coefficients to implement the semi-supervision component. After deriving the proposed semi-supervised fuzzy C-means clustering algorithm with multiple fuzzification coefficients (sSMC-FCM), we demonstrated the convergence of the algorithm and validated the efficiency of the method through a numerical example.
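The per-point fuzzifier idea can be sketched in a few lines. The snippet below is an illustrative toy version, not the authors' exact sSMC-FCM derivation: a 1-D fuzzy C-means loop in which each data point k carries its own fuzzification coefficient m_k, so a point with prior label knowledge can be given a coefficient close to 1 to sharpen its membership toward one cluster. All data values and coefficients here are made up for illustration.

```python
def update_memberships(points, centers, m):
    """u[k][i] = 1 / sum_j (d_ki / d_kj)^(2 / (m_k - 1)), per-point fuzzifier m_k."""
    u = []
    for k, x in enumerate(points):
        d = [abs(x - c) + 1e-12 for c in centers]  # 1-D distances, zero-guarded
        row = []
        for i in range(len(centers)):
            s = sum((d[i] / dj) ** (2.0 / (m[k] - 1.0)) for dj in d)
            row.append(1.0 / s)
        u.append(row)
    return u

def update_centers(points, u, m, n_clusters):
    """Weighted means with per-point exponent m_k on the memberships."""
    centers = []
    for i in range(n_clusters):
        num = sum((u[k][i] ** m[k]) * x for k, x in enumerate(points))
        den = sum(u[k][i] ** m[k] for k in range(len(points)))
        centers.append(num / den)
    return centers

points = [0.0, 0.2, 0.4, 5.0, 5.2, 5.4]
m = [2.0] * len(points)   # ordinary (unsupervised) points
m[0] = 1.1                # "supervised" point: sharper, near-crisp membership
centers = [0.1, 5.1]
for _ in range(20):
    u = update_memberships(points, centers, m)
    centers = update_centers(points, u, m, 2)
```

After a few iterations the two centers settle near the two groups, and the point with the small coefficient has an almost crisp membership in its cluster.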
Open Access Article
Metal Surface Defect Detection Using Modified YOLO
Algorithms 2021, 14(9), 257; https://doi.org/10.3390/a14090257 - 28 Aug 2021
Abstract
Aiming at the problems of inefficient detection caused by traditional manual inspection and unclear features in metal surface defect detection, an improved metal surface defect detection technology based on the You Only Look Once (YOLO) model is presented. Building on the network structure of YOLOv3, the shallow features of the 11th layer in Darknet-53 are combined with the deep features of the neural network to generate a new scale feature layer, with the goal of extracting more features of small defects. Then, K-Means++ is used to reduce the sensitivity to the initial cluster centers when analyzing the size information of the anchor boxes, and the optimal anchor boxes are selected to make the positioning more accurate. The performance of the modified metal surface defect detection technology is compared with other detection methods on the Tianchi dataset. The results show that the average detection accuracy of the modified YOLO model is 75.1%, which is higher than that of YOLOv3. It also has a clear detection speed advantage compared with the faster region-based convolutional neural network (Faster R-CNN) and other detection algorithms. The improved YOLO model can provide highly accurate location information for small defect targets and has strong real-time performance.
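The anchor-selection step can be made concrete with a small sketch. This is a hedged illustration, not the paper's exact procedure: K-Means++ seeding over bounding-box sizes, using d(box, anchor) = 1 - IoU as the distance so that the initial anchors are spread across the box-size distribution. The box sizes below are invented for the example.

```python
import random

def iou_wh(a, b):
    """IoU of two boxes given as (w, h), both anchored at the origin."""
    inter = min(a[0], b[0]) * min(a[1], b[1])
    union = a[0] * a[1] + b[0] * b[1] - inter
    return inter / union

def kmeanspp_anchors(boxes, k, rng):
    """K-Means++ seeding: pick each new anchor with probability proportional
    to its squared (1 - IoU) distance to the nearest already-chosen anchor."""
    anchors = [rng.choice(boxes)]
    while len(anchors) < k:
        d2 = [min((1.0 - iou_wh(b, a)) ** 2 for a in anchors) for b in boxes]
        r = rng.random() * sum(d2)
        acc = 0.0
        for b, w in zip(boxes, d2):
            acc += w
            if acc >= r:
                anchors.append(b)
                break
    return anchors

rng = random.Random(0)
boxes = [(10, 12), (11, 13), (50, 60), (48, 58), (200, 220), (190, 210)]
anchors = kmeanspp_anchors(boxes, 3, rng)
```

The seeds would then be refined by ordinary K-Means iterations under the same IoU distance.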
Open Access Article
Summarisation, Simulation and Comparison of Nine Control Algorithms for an Active Control Mount with an Oscillating Coil Actuator
Algorithms 2021, 14(9), 256; https://doi.org/10.3390/a14090256 - 27 Aug 2021
Abstract
With the further development of the automotive industry, traditional vibration isolation methods struggle to meet the requirements for wide frequency bands under multiple operating conditions, so the active control mount (ACM) has gradually attracted attention, and its control algorithm plays a decisive role. In this paper, an ACM with an oscillating coil actuator (OCA) is taken as the object, and a comparative study of control algorithms is performed to select the optimal one for the ACM. Through the modelling of the ACM, the design of the controllers and system simulations, the force transmission rate is used to compare the vibration isolation performance of nine control algorithms: least mean square (LMS) adaptive feedforward control, recursive least squares (RLS) adaptive feedforward control, filtered-reference LMS (FxLMS) adaptive control, linear quadratic regulator (LQR) optimal control, H2 control, H∞ control, proportional integral derivative (PID) feedback control, fuzzy control, and fuzzy PID control. In summary, the FxLMS adaptive control algorithm offers better performance and the advantage of easier hardware implementation, and it can be applied in ACMs.
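The FxLMS idea singled out above can be sketched in a self-contained toy loop. This is an illustrative assumption, not the paper's controller: an adaptive FIR filter whose output, passed through a secondary path, cancels a sinusoidal disturbance at the error sensor; the reference is filtered through the secondary-path estimate s_hat before the weight update (the "filtered-x" step). The paths, tone frequency, and step size below are made-up toy values.

```python
import math

def fxlms(x, d, s_hat, n_taps, mu):
    """Filtered-x LMS: returns the residual error at each step."""
    w = [0.0] * n_taps            # adaptive FIR controller weights
    x_hist = [0.0] * n_taps       # reference history, newest first
    xf_hist = [0.0] * n_taps      # filtered-reference history, newest first
    y_hist = [0.0] * len(s_hat)   # controller-output history, newest first
    errors = []
    for n in range(len(x)):
        x_hist = [x[n]] + x_hist[:-1]
        y = sum(wi * xi for wi, xi in zip(w, x_hist))         # anti-vibration signal
        y_hist = [y] + y_hist[:-1]
        anti = sum(si * yi for si, yi in zip(s_hat, y_hist))  # through secondary path
        e = d[n] - anti                                       # residual at sensor
        xf = sum(si * xi for si, xi in zip(s_hat, x_hist))    # filtered reference
        xf_hist = [xf] + xf_hist[:-1]
        w = [wi + mu * e * xfi for wi, xfi in zip(w, xf_hist)]
        errors.append(e)
    return errors

N = 2000
x = [math.sin(2 * math.pi * 0.05 * n) for n in range(N)]   # reference tone
d = [0.8 * x[n - 2] if n >= 2 else 0.0 for n in range(N)]  # primary path: delay 2, gain 0.8
s_hat = [0.0, 1.0]                                         # secondary path: one-sample delay
errors = fxlms(x, d, s_hat, n_taps=8, mu=0.05)
```

With a matching secondary-path estimate, the residual decays toward zero, which is the behavior the force-transmission-rate comparison measures.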
Open Access Article
An Algebraic Approach to Identifiability
Algorithms 2021, 14(9), 255; https://doi.org/10.3390/a14090255 - 27 Aug 2021
Abstract
This paper addresses the problem of identifiability of nonlinear polynomial state-space systems. Such systems have already been studied via the input-output equations, a description that, in general, requires differential algebra. The authors use a different algebraic approach, which is based on distinguishability and observability. Employing techniques from algebraic geometry such as polynomial ideals and Gröbner bases, local as well as global results are derived. The methods are illustrated on some example systems.
(This article belongs to the Special Issue Algorithms for Reliable Estimation, Identification and Control II)
Open Access Article
Prioritizing Construction Labor Productivity Improvement Strategies Using Fuzzy Multi-Criteria Decision Making and Fuzzy Cognitive Maps
Algorithms 2021, 14(9), 254; https://doi.org/10.3390/a14090254 - 24 Aug 2021
Abstract
Construction labor productivity (CLP) is affected by various interconnected factors, such as crew motivation and working conditions. Improved CLP can benefit a construction project in many ways, such as a shortened project life cycle and lowered project costs. However, budget, time, and resource restrictions force companies to select and implement only a limited number of CLP improvement strategies. Therefore, a research gap exists regarding methods for supporting the selection of CLP improvement strategies for a given project by quantifying the impact of strategies on CLP with respect to interrelationships among CLP factors. This paper proposes a decision support model that integrates fuzzy multi-criteria decision making with fuzzy cognitive maps to prioritize CLP improvement strategies based on their impact on CLP, causal relationships among CLP factors, and project characteristics. The proposed model was applied to determine CLP improvement strategies for concrete-pouring activities in building projects as an illustrative example. This study contributes to the body of knowledge by providing a systematic approach for selecting appropriate CLP improvement strategies based on interrelationships among the factors affecting CLP and the impact of such strategies on CLP. The results are expected to support construction practitioners in identifying effective improvement strategies to enhance CLP in their projects.
Open Access Article
The Power of Human–Algorithm Collaboration in Solving Combinatorial Optimization Problems
Algorithms 2021, 14(9), 253; https://doi.org/10.3390/a14090253 - 24 Aug 2021
Abstract
Many combinatorial optimization problems are often considered intractable to solve exactly or by approximation. An example of such a problem is maximum clique, which, under standard assumptions in complexity theory, cannot be solved in sub-exponential time or be approximated within a polynomial factor efficiently. However, we show that if a polynomial time algorithm can query informative Gaussian priors from an expert a bounded number of times, then a class of combinatorial optimization problems can be solved efficiently up to a multiplicative factor given by an arbitrary constant. In this paper, we present proofs of our claims and show numerical results to support them. Our methods can cast new light on how to approach optimization problems in domains where even approximation of the problem is not feasible. Furthermore, the results can help researchers to understand the structures of these problems (or whether these problems have any structure at all!). While the proposed methods can be used to approximate combinatorial problems in NPO, we note that the scope of the problems solvable might well include problems that are provably intractable (problems in EXPTIME).
(This article belongs to the Special Issue Metaheuristics)
Open Access Article
Constrained Dynamic Mean-Variance Portfolio Selection in Continuous-Time
Algorithms 2021, 14(8), 252; https://doi.org/10.3390/a14080252 - 23 Aug 2021
Abstract
This paper revisits the dynamic MV portfolio selection problem with cone constraints in continuous time. We first reformulate our constrained MV portfolio selection model into a special constrained LQ optimal control model and develop the optimal portfolio policy of our model. In addition, we provide an alternative method to solve this dynamic MV portfolio selection problem with cone constraints. More specifically, instead of solving the corresponding HJB equation directly, we develop the optimal solution for this problem by using special properties of the value function induced by the model structure, such as its monotonicity and convexity. Finally, we provide an example to illustrate how to use our solution in a real application. The illustrative example demonstrates that our dynamic MV portfolio policy dominates the static MV portfolio policy.
Open Access Article
Comparative Analysis of Recurrent Neural Networks in Stock Price Prediction for Different Frequency Domains
Algorithms 2021, 14(8), 251; https://doi.org/10.3390/a14080251 - 22 Aug 2021
Abstract
Investors in the stock market have always been in search of novel and unique techniques so that they can successfully predict stock price movement and make a big profit. However, investors continue to look for improved and new techniques to beat the market instead of old and traditional ones. Therefore, researchers are continuously working to build novel techniques to supply the demand of investors. Different types of recurrent neural networks (RNN) are used in time series analyses, especially in stock price prediction. However, since not all stocks' prices follow the same trend, a single model cannot be used to predict the movement of all types of stock prices. Therefore, in this research we conducted a comparative analysis of three commonly used RNNs, namely the simple RNN, Long Short-Term Memory (LSTM), and the Gated Recurrent Unit (GRU), and analyzed their efficiency for stocks having different trends and various price ranges and for different time frequencies. We considered three companies' datasets from 30 June 2000 to 21 July 2020. The stocks follow different trends of price movements, with price ranges of $30, $50, and $290 during this period. We also analyzed the performance for one-day, three-day, and five-day time intervals. We compared the performance of RNN, LSTM, and GRU in terms of value, MAE, MAPE, and RMSE metrics. The results show that the simple RNN is outperformed by LSTM and GRU because the RNN is susceptible to vanishing gradient problems, while the other two models are not. Moreover, GRU produces smaller errors than LSTM. It is also evident from the results that as the time intervals get smaller, the models produce lower errors and higher reliability.
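For reference, the error metrics named in the comparison have one-line definitions. The snippet below is a minimal illustration of those standard formulas (the paper's own evaluation code is not shown); the price values are invented toy data.

```python
def mae(y_true, y_pred):
    """Mean absolute error."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def mape(y_true, y_pred):
    """Mean absolute percentage error (in percent); y_true must be nonzero."""
    return 100.0 * sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    """Root mean squared error."""
    return (sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)) ** 0.5

y_true = [30.0, 31.0, 29.5, 30.5]   # toy closing prices
y_pred = [29.0, 31.5, 29.0, 31.0]   # toy model outputs
```

MAPE being scale-free is why it is useful when comparing stocks whose price ranges differ by an order of magnitude, as in the $30 versus $290 stocks above.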
Open Access Article
A Real-Time Network Traffic Classifier for Online Applications Using Machine Learning
Algorithms 2021, 14(8), 250; https://doi.org/10.3390/a14080250 - 21 Aug 2021
Abstract
The increasing ubiquity of network traffic and the deployment of new online applications have increased traffic analysis complexity. Traditionally, network administrators rely on recognizing well-known static ports to classify the traffic flowing through their networks. However, modern network traffic uses dynamic ports and is transported over secure application-layer protocols (e.g., HTTPS, SSL, and SSH). This makes it a challenging task for network administrators to identify online applications using traditional port-based approaches. One way of classifying modern network traffic is to use machine learning (ML) to distinguish between different traffic attributes such as packet count and size, packet inter-arrival time, packet send–receive ratio, etc. This paper presents the design and implementation of NetScrapper, a flow-based network traffic classifier for online applications. NetScrapper uses three ML models, namely K-Nearest Neighbors (KNN), Random Forest (RF), and Artificial Neural Network (ANN), to classify the 53 most popular online applications, including Amazon, YouTube, Google, Twitter, and many others. We collected a network traffic dataset containing 3,577,296 packet flows with 87 different features for training, validating, and testing the ML models. A web-based user-friendly interface was developed to enable users to either upload a snapshot of their network traffic to NetScrapper or sniff the network traffic directly from the network interface card in real time. Additionally, we created a middleware pipeline for interfacing the three models with the Flask GUI. Finally, we evaluated NetScrapper using various performance metrics such as classification accuracy and prediction time. Most notably, we found that our ANN model achieves an overall classification accuracy of 99.86% in recognizing the online applications in our dataset.
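The flow-feature idea can be illustrated with a toy version of the simplest of the three models. This sketch is an assumption for illustration only, not NetScrapper itself: K-Nearest Neighbors over a few hand-picked flow features (packet count, mean packet size, mean inter-arrival time), with made-up feature values and just two application classes.

```python
def knn_classify(train, query, k=3):
    """train: list of (feature_vector, label). Returns the majority label of
    the k nearest neighbours by squared Euclidean distance."""
    dist = sorted(
        (sum((a - b) ** 2 for a, b in zip(x, query)), label)
        for x, label in train
    )
    votes = {}
    for _, label in dist[:k]:
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

# Assumed feature order: (packet_count, mean_packet_size, mean_inter_arrival_ms)
train = [
    ((1200, 1400.0, 2.0), "video"),
    ((1100, 1350.0, 3.0), "video"),
    ((40, 200.0, 150.0), "chat"),
    ((55, 180.0, 120.0), "chat"),
]
label = knn_classify(train, (1000, 1300.0, 4.0))
```

A real classifier over 87 features would normalize each feature first, since raw packet counts would otherwise dominate the distance.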
(This article belongs to the Special Issue Intelligent Optimization for Transportation, Logistics and Vehicle Routing)
Open Access Article
Myocardial Infarction Quantification from Late Gadolinium Enhancement MRI Using Top-Hat Transforms and Neural Networks
Algorithms 2021, 14(8), 249; https://doi.org/10.3390/a14080249 - 20 Aug 2021
Abstract
Late gadolinium enhancement (LGE) MRI is the gold standard technique for myocardial viability assessment. Although the technique accurately reflects the damaged tissue, there is no clinical standard for quantifying myocardial infarction (MI). Moreover, commercial software used in clinical practice is mostly semi-automatic, and hence requires direct intervention of experts. In this work, a new automatic method for MI quantification from LGE-MRI is proposed. Our novel segmentation approach is devised for accurately detecting not only hyper-enhanced lesions, but also microvascular obstruction areas. Moreover, it includes a myocardial disease detection step which extends the algorithm to work on healthy scans. The method is based on a cascade approach where, firstly, diseased slices are identified by a convolutional neural network (CNN). Secondly, a fast coarse scar segmentation is obtained by means of morphological operations. Thirdly, the segmentation is refined by a boundary-voxel reclassification strategy using an ensemble of very light CNNs. We tested the method on an LGE-MRI database with healthy (n = 20) and diseased (n = 80) cases following a 5-fold cross-validation scheme. Our approach segmented myocardial scars with an average Dice coefficient of 77.22 ± 14.3% and a volumetric error of 1.0 ± 6.9 cm³. In a comparison against nine reference algorithms, the proposed method achieved the highest agreement in volumetric scar quantification with the expert delineations (p < 0.001 when compared to the other approaches). Moreover, it was able to reproduce the intra- and inter-rater variability of the scar segmentation. Our approach is a good first attempt towards automatic and accurate myocardial scar segmentation, although validation over larger LGE-MRI databases is needed.
(This article belongs to the Special Issue Advances in Artificial Intelligence Algorithms Applied to Medical Imaging)
Open Access Article
Experimental Validation of a Guaranteed Nonlinear Model Predictive Control
Algorithms 2021, 14(8), 248; https://doi.org/10.3390/a14080248 - 20 Aug 2021
Abstract
This paper combines interval analysis tools with nonlinear model predictive control (NMPC). The NMPC strategy is formulated based on an uncertain dynamic model expressed as nonlinear ordinary differential equations (ODEs). All the dynamic parameters are identified in a guaranteed way, considering the various uncertainties on the embedded sensors and the system's design. The NMPC problem is solved at each time step using validated simulation and interval analysis methods to compute the optimal and safe control inputs over a finite prediction horizon. This approach considers several constraints which are crucial for the system's safety and stability, namely the state and control limits. The proposed controller consists of two steps: filtering and branching procedures that find the input intervals fulfilling the state constraints and ensuring convergence to the reference set. Then, the optimization procedure computes the optimal, punctual control input that must be sent to the system's actuators for pendulum stabilization. The validated NMPC capabilities are illustrated through several simulations under the DynIbex library and experiments using an inverted pendulum.
(This article belongs to the Special Issue Algorithms for Reliable Estimation, Identification and Control II)
Open Access Article
Numerical Algorithm for Dynamic Impedance of Bridge Pile-Group Foundation and Its Validation
Algorithms 2021, 14(8), 247; https://doi.org/10.3390/a14080247 - 20 Aug 2021
Abstract
The characteristics of a bridge pile-group foundation have a significant influence on the dynamic performance of the superstructure. Most of the existing analysis methods for pile-group foundation impedance are highly specialized and cannot be generalized to practical projects. Therefore, a project-oriented numerical solution algorithm is proposed to compute the dynamic impedance of bridge pile-group foundations. Based on the theory of the viscous-spring artificial boundary, the derivation and solution of the impedance function are transferred to numerical modeling and harmonic analysis, which can be carried out through the finite element method. Taking a typical pile-group foundation as a case study, the results based on the algorithm are compared with those from the existing literature. Moreover, an impact experiment on a real pile-group foundation was conducted, the results of which are also compared with those of the proposed numerical algorithm. Both comparisons show that the proposed numerical algorithm satisfies engineering precision, thus showing good effectiveness in application.
Open Access Article
Scheduling Multiprocessor Tasks with Equal Processing Times as a Mixed Graph Coloring Problem
Algorithms 2021, 14(8), 246; https://doi.org/10.3390/a14080246 - 19 Aug 2021
Abstract
This article extends the scheduling problem with dedicated processors, unit-time tasks, and minimization of maximal lateness for integer due dates to a scheduling problem in which, along with precedence constraints given on the set of multiprocessor tasks, a subset of tasks must be processed simultaneously. Contrary to a classical shop-scheduling problem, several processors must fulfill a multiprocessor task. Furthermore, two types of precedence constraints may be given on the task set. We prove that the extended scheduling problem with integer release times of the jobs to minimize schedule length may be solved as an optimal mixed graph coloring problem, which consists of assigning a minimal number of colors (positive integers) to the vertices of the mixed graph such that if two vertices are joined by an edge, their colors have to be different, and if two vertices are joined by an arc, the color of the start vertex has to be no greater than the color of the end vertex. We prove two theorems, which imply that most analytical results proved so far for optimal colorings of mixed graphs have analogous results valid for the extended scheduling problems to minimize the schedule length or maximal lateness, and vice versa.
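The two coloring constraints can be made concrete with a tiny solver. The sketch below is illustrative only, not the paper's construction: a brute-force search for the minimal number of colors on a mixed graph in which each edge {u, v} forces different colors and each arc (u, v) forces the color of u to be no greater than that of v. The example graphs are made up.

```python
def feasible(colors, edges, arcs):
    """Check both mixed-graph constraints on a complete coloring."""
    return (all(colors[u] != colors[v] for u, v in edges) and
            all(colors[u] <= colors[v] for u, v in arcs))

def chromatic_number(n, edges, arcs):
    """Smallest k such that vertices 0..n-1 admit a feasible coloring
    with colors 1..k (exhaustive search; fine for tiny graphs)."""
    def search(colors, k):
        if len(colors) == n:
            return feasible(colors, edges, arcs)
        return any(search(colors + [c], k) for c in range(1, k + 1))
    k = 1
    while not search([], k):
        k += 1
    return k

# Vertices 0 and 1 joined by an edge (colors must differ);
# arc from 1 to 2 (color of 1 must not exceed color of 2).
edges = [(0, 1)]
arcs = [(1, 2)]
k = chromatic_number(3, edges, arcs)
```

In the scheduling reading, a vertex's color is the unit-time slot of its task: edges encode tasks sharing a processor, arcs encode precedence.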
(This article belongs to the Special Issue 2021 Selected Papers from Algorithms Editorial Board Members)
Open Access Article
SVSL: A Human Activity Recognition Method Using Soft-Voting and Self-Learning
Algorithms 2021, 14(8), 245; https://doi.org/10.3390/a14080245 - 19 Aug 2021
Abstract
Many smart city and society applications such as smart health (elderly care, medical applications), smart surveillance, sports, and robotics require the recognition of user activities, an important class of problems known as human activity recognition (HAR). Several issues have hindered progress in HAR research, particularly due to the emergence of fog and edge computing, which brings many new opportunities (low latency, dynamic and real-time decision making, etc.) but also comes with its own challenges. This paper focuses on addressing two important research gaps in HAR research: (i) improving the HAR prediction accuracy and (ii) managing the frequent changes in the environment and data related to user activities. To address this, we propose an HAR method based on Soft-Voting and Self-Learning (SVSL). SVSL uses two strategies. First, to enhance accuracy, it combines the capabilities of Deep Learning (DL), Generalized Linear Model (GLM), Random Forest (RF), and AdaBoost classifiers using soft-voting. Second, to classify the most challenging data instances, the SVSL method is equipped with a self-training mechanism that generates training data and retrains itself. We investigate the performance of our proposed SVSL method using two publicly available datasets on six human activities related to lying, sitting, and walking positions. The first dataset consists of 562 features and the second dataset consists of five features. The data were collected using the accelerometer and gyroscope smartphone sensors. The results show that the proposed method provides 6.26%, 1.75%, 1.51%, and 4.40% better prediction accuracy (averaged over the two datasets) compared to GLM, DL, RF, and AdaBoost, respectively. We also analyze and compare the class-wise performance of the SVSL method with that of DL, GLM, RF, and AdaBoost.
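The soft-voting step is simple to state precisely: average the classifiers' predicted class probabilities and take the argmax. A minimal sketch follows; the probability vectors are made-up numbers for illustration, not outputs of the paper's four classifiers.

```python
def soft_vote(prob_rows):
    """prob_rows: one probability vector per classifier, same class order.
    Returns (index of winning class, averaged probability vector)."""
    n = len(prob_rows)
    avg = [sum(p[i] for p in prob_rows) / n for i in range(len(prob_rows[0]))]
    return max(range(len(avg)), key=avg.__getitem__), avg

# Hypothetical probabilities for classes (lying, sitting, walking) from
# four classifiers, e.g. DL, GLM, RF, and AdaBoost:
probs = [
    [0.6, 0.3, 0.1],
    [0.4, 0.5, 0.1],
    [0.7, 0.2, 0.1],
    [0.5, 0.4, 0.1],
]
winner, avg = soft_vote(probs)
```

Note that the second classifier alone would have predicted "sitting"; averaging over all four overrides it, which is exactly how soft voting smooths out individual classifier errors.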
Open Access Article
An Efficient Geometric Search Algorithm of Pandemic Boundary Detection
Algorithms 2021, 14(8), 244; https://doi.org/10.3390/a14080244 - 18 Aug 2021
Abstract
We consider a scenario where the pandemic infection rate is inversely proportional to a power of the distance between the infected region and the non-infected region. In our study, we analyze the case where the exponent of the distance is 2, which is in accordance with Reilly's law of retail gravitation. One can test for infection, but such tests are costly, so one seeks to determine the region of infection while performing few tests. Our goal is to find a boundary region of minimal size that contains all infected areas. We discuss efficient algorithms and provide an asymptotic bound on the testing cost, together with simulation results for this problem.
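The cost model charges one unit per infection test. As a one-dimensional toy version of boundary detection (an illustrative sketch, not the paper's algorithm), the snippet below locates the rightmost infected cell of an infected prefix with O(log n) tests via binary search, counting the tests spent.

```python
def find_boundary(infected, n):
    """infected(i) -> bool over cells 0..n-1, where cells [0, b] are infected
    (cell 0 is assumed infected). Returns (b, number of tests used)."""
    tests = 0
    lo, hi = 0, n - 1
    while lo < hi:
        mid = (lo + hi + 1) // 2   # bias up so the loop always shrinks [lo, hi]
        tests += 1
        if infected(mid):
            lo = mid               # boundary is at mid or beyond
        else:
            hi = mid - 1           # boundary is strictly left of mid
    return lo, tests

boundary, cost = find_boundary(lambda i: i <= 37, 1024)
```

The two-dimensional problem in the paper is harder because the boundary is a curve rather than a single index, but the same test-counting viewpoint applies.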
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
Open Access Article
Tourism Demand Forecasting Based on an LSTM Network and Its Variants
Algorithms 2021, 14(8), 243; https://doi.org/10.3390/a14080243 - 18 Aug 2021
Abstract
The need for accurate tourism demand forecasting is widely recognized, yet the unreliability of traditional methods keeps it challenging. Using deep learning approaches, this study aims to adapt Long Short-Term Memory (LSTM), Bidirectional LSTM (Bi-LSTM), and Gated Recurrent Unit (GRU) networks, which are straightforward and efficient, to improve Taiwan's tourism demand forecasting. These networks are able to capture the dependencies in visitor arrival time series data. The Adam optimization algorithm with an adaptive learning rate is used to optimize the basic setup of the models. The results show that the proposed models outperform previous studies undertaken during the Severe Acute Respiratory Syndrome (SARS) events of 2002–2003. This article also examines the effects of the current COVID-19 outbreak on tourist arrivals to Taiwan. The results show that the LSTM network and its variants can perform satisfactorily for tourism demand forecasting.
Open Access Review
Data Mining Algorithms for Smart Cities: A Bibliometric Analysis
Algorithms 2021, 14(8), 242; https://doi.org/10.3390/a14080242 - 17 Aug 2021
Abstract
Smart cities connect people and places using innovative technologies such as Data Mining (DM), Machine Learning (ML), big data, and the Internet of Things (IoT). This paper presents a bibliometric analysis to provide a comprehensive overview of studies associated with DM technologies used in smart city applications. The study aims to identify the main DM techniques used in the context of smart cities and how the research field of DM for smart cities has evolved over time. We adopted both qualitative and quantitative methods to explore the topic. We used the Scopus database to find relevant articles published in scientific journals. This study covers 197 articles published over the period from 2013 to 2021. For the bibliometric analysis, we used the Bibliometrix library, developed in R. Our findings show that there is a wide range of DM technologies used in every layer of a smart city project. Several ML algorithms, supervised or unsupervised, are adopted for operating the instrumentation, middleware, and application layers. The bibliometric analysis shows that DM for smart cities is a fast-growing scientific field. Scientists from all over the world show great interest in researching and collaborating on this interdisciplinary scientific field.
Full article
(This article belongs to the Special Issue New Algorithms for Visual Data Mining)
Open Access Article
Property-Based Semantic Similarity Criteria to Evaluate the Overlaps of Schemas
Algorithms 2021, 14(8), 241; https://doi.org/10.3390/a14080241 - 17 Aug 2021
Abstract
Knowledge graph-based data integration is a practical methodology for building integrated services over heterogeneous legacy databases. However, it is neither efficient nor economical to build a new cross-domain knowledge graph on top of the schemas of each legacy database for a specific integration application rather than reusing existing high-quality knowledge graphs. Consequently, a question arises as to whether an existing knowledge graph is compatible with cross-domain queries and with the heterogeneous schemas of the legacy systems. An effective criterion is urgently needed to evaluate such compatibility, as it limits the upper bound on the quality of the integration. This research studies the semantic similarity of schemas from the perspective of their properties. It provides a set of in-depth criteria, namely coverage and flexibility, to evaluate the pairwise compatibility between schemas. It takes advantage of the properties of knowledge graphs to evaluate the overlaps between schemas and defines weights for entity types in order to perform precise compatibility computation. The effectiveness of the criteria in evaluating the compatibility between knowledge graphs and cross-domain queries is demonstrated using a case study.
Full article
(This article belongs to the Special Issue Ontologies, Ontology Development and Evaluation)
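A property-overlap criterion like coverage can be read as a weighted set-intersection computation. The sketch below is a hypothetical illustration of that idea — the schema representation, entity types, and weighting scheme are assumptions, not the paper's formal definitions:

```python
def coverage(query_schema, kg_schema, weights=None):
    """Weighted fraction of the query's properties that the knowledge
    graph's schema also offers, computed per entity type."""
    weights = weights or {t: 1.0 for t in query_schema}
    covered = total = 0.0
    for etype, props in query_schema.items():
        w = weights.get(etype, 1.0)
        covered += w * len(props & kg_schema.get(etype, set()))
        total += w * len(props)
    return covered / total if total else 0.0

# Hypothetical schemas: each maps entity types to their property sets.
query = {"Person": {"name", "birthDate"}, "Place": {"name"}}
kg = {"Person": {"name"}, "Place": {"name", "population"}}
score = coverage(query, kg)  # 2 of 3 query properties are covered
```

With equal weights, the knowledge graph covers two of the query's three properties, so the score is 2/3; raising the weight of an entity type makes its missing properties penalize the score more heavily.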
Open Access Article
Adaptive Supply Chain: Demand–Supply Synchronization Using Deep Reinforcement Learning
Algorithms 2021, 14(8), 240; https://doi.org/10.3390/a14080240 - 15 Aug 2021
Abstract
Adaptive and highly synchronized supply chains can avoid a cascading rise-and-fall inventory dynamic and mitigate ripple effects caused by operational failures. This paper aims to demonstrate how a deep reinforcement learning agent based on the Proximal Policy Optimization (PPO) algorithm can synchronize inbound and outbound flows and support business continuity in a stochastic and nonstationary environment, provided that end-to-end visibility is available. PPO does not require a hardcoded action space or exhaustive hyperparameter tuning. These features, complemented by a straightforward supply chain environment, give rise to a general, task-agnostic approach to adaptive control in multi-echelon supply chains. The proposed approach is compared with the base-stock policy, a well-known method in classical operations research and inventory control theory that is prevalent in continuous-review inventory systems. The paper concludes that the proposed solution can perform adaptive control in complex supply chains, and postulates fully fledged supply chain digital twins as a necessary infrastructural condition for scalable real-world applications.
Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
Open Access Article
Adaptive Self-Scaling Brain-Storm Optimization via a Chaotic Search Mechanism
Algorithms 2021, 14(8), 239; https://doi.org/10.3390/a14080239 - 13 Aug 2021
Abstract
Brain-storm optimization (BSO), a population-based optimization algorithm, exhibits poor search performance, premature convergence, and a high probability of falling into local optima. To address these problems, we developed the adaptive mechanism-based BSO (ABSO) algorithm, which incorporates a chaotic local search. Adjusting the search space via a local search method based on an adaptive self-scaling mechanism balances the global exploration and local exploitation of the ABSO algorithm, effectively preventing it from falling into local optima and improving its convergence accuracy. To verify the stability and effectiveness of the proposed ABSO algorithm, its performance was tested on 29 benchmark test functions, and the mean and standard deviation were compared with those of five other optimization algorithms. The results show that ABSO outperforms the other algorithms in terms of stability and convergence accuracy. In addition, the performance of ABSO was further verified through a nonparametric statistical test.
Full article
(This article belongs to the Special Issue Evolutionary Algorithms and Applications)
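A chaotic local search typically drives perturbations with a chaotic map such as the logistic map, whose non-repeating trajectory helps the search escape local optima. The following is a generic sketch of the idea, not the paper's exact update rule — the objective, bounds, search radius, and acceptance rule are all illustrative assumptions:

```python
def logistic_map(x, mu=4.0):
    """Chaotic sequence generator on (0, 1) for mu = 4."""
    return mu * x * (1.0 - x)

def chaotic_local_search(f, start, lo, hi, radius, steps=50, x0=0.7):
    """Perturb the incumbent with a chaotic sequence; keep improvements."""
    x, best, f_best = x0, start, f(start)
    for _ in range(steps):
        x = logistic_map(x)
        # map the chaotic value from [0, 1] to [-radius, radius], then clamp
        cand = min(max(best + radius * (2.0 * x - 1.0), lo), hi)
        if f(cand) < f_best:                   # greedy acceptance
            best, f_best = cand, f(cand)
    return best, f_best

# Toy objective with its minimum at x = 1, started away from it.
best, f_best = chaotic_local_search(lambda v: (v - 1.0) ** 2,
                                    start=0.5, lo=-2.0, hi=2.0, radius=0.6)
```

An adaptive self-scaling mechanism, as in ABSO, would additionally shrink or grow `radius` over the run depending on recent progress.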
Topics
Topic in Applied Sciences, Algorithms, J. Imaging: Image Processing Techniques for Biomedical Applications
Editor-in-Chief: Cecilia Di Ruberto; Deadline: 31 March 2022
Topic in ASI, Sensors, Algorithms, AI, JSAN: Data Analytics and Machine Learning in Artificial Emotional Intelligence
Editor-in-Chief: Friedhelm Schwenker; Deadline: 30 April 2022
Conferences
25–26 September 2021
2nd International Conference on Data Mining and Software Engineering (DMSE 2021)

Special Issues
Special Issue in Algorithms: Algorithms in Stochastic Models
Guest Editors: Shankarachary Ragi, Edwin K. P. Chong; Deadline: 31 August 2021
Special Issue in Algorithms: Evolutionary Algorithms and Applications
Guest Editors: Lorenzo Salas-Morera, Laura Garcia-Hernandez; Deadline: 12 September 2021
Special Issue in Algorithms: Interpretability, Accountability and Robustness in Machine Learning
Guest Editor: Laurent Risser; Deadline: 3 October 2021
Special Issue in Algorithms: Stochastic Algorithms and Their Applications
Guest Editor: Stephanie Allassonniere; Deadline: 17 October 2021


