Algorithms, Volume 14, Issue 9 (September 2021) – 22 articles

Cover Story: Mathematical models of dynamical systems often depend on physical parameters. In order to use such models in simulations as well as for observer or controller design, these parameters must be known. While it may be possible to measure some of them directly, other parameters have to be determined from experiments with the real system. This is only possible if these parameters are structurally identifiable. In order to test this property of the mathematical model, an algebraic approach is discussed, which is based on the indistinguishability of system states. This method differs from existing approaches that use the system’s input–output equations.
8 pages, 860 KiB  
Article
Dynamical Recovery of Complex Networks under a Localised Attack
by Fan Wang, Gaogao Dong and Lixin Tian
Algorithms 2021, 14(9), 274; https://doi.org/10.3390/a14090274 - 21 Sep 2021
Cited by 3 | Viewed by 2349
Abstract
In real systems, damaged nodes can spontaneously become active again, recovering either on their own or through their active neighbours. However, the spontaneous dynamical recovery of complex networks that suffer a localised failure has not yet been taken into consideration. To model this recovery process, we develop a framework to study the resilience behaviours of a network under a localised attack (LA). Since the state of the nodes within the network affects the subsequent dynamic evolution, we study the dynamic behaviours of local failure propagation and node recovery based on this memory characteristic. We find that the fraction of active nodes switches back and forth between high and low network activity, leading to the spontaneous emergence of phase-flipping phenomena. These behaviours appear in random regular, Erdős–Rényi and scale-free networks, revealing where these three network types behave alike, and where they differ, under an LA as compared with a random attack. These results will be helpful for studying the spontaneous recovery of real systems under an LA. Our work provides insight into the recovery process and protection strategies for various complex systems from the perspective of damage memory.
(This article belongs to the Special Issue Advances in Complex Network Models and Random Graphs)
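The failure-and-recovery dynamics the abstract describes can be sketched in a few lines. Everything below (graph size, probabilities, and the rule that a failed node revives either spontaneously or once enough neighbours are active) is an illustrative assumption, not the authors' model:

```python
import random

def er_graph(n, p, seed=0):
    """Adjacency sets of an Erdős–Rényi G(n, p) graph."""
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    return adj

def localized_attack(adj, root, size):
    """Fail up to `size` nodes forming a connected ball around `root`."""
    failed, frontier = set(), [root]
    while frontier and len(failed) < size:
        v = frontier.pop(0)
        if v in failed:
            continue
        failed.add(v)
        frontier.extend(sorted(adj[v] - failed))
    return failed

def recover_step(adj, failed, p_self, k, rng):
    """One synchronous step: a failed node becomes active if it recovers
    spontaneously (prob. p_self) or has at least k active neighbours."""
    revived = set()
    for v in failed:
        active_nb = len(adj[v] - failed)
        if rng.random() < p_self or active_nb >= k:
            revived.add(v)
    return failed - revived

rng = random.Random(1)
adj = er_graph(60, 0.1, seed=2)
failed = localized_attack(adj, root=0, size=20)
history = [len(failed)]
for _ in range(30):
    failed = recover_step(adj, failed, p_self=0.05, k=3, rng=rng)
    history.append(len(failed))
```

`history` tracks the number of failed nodes over time; with a damage rule added back into the loop, this count can oscillate between high and low activity, which is the phase-flipping behaviour the paper studies.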
12 pages, 3953 KiB  
Article
Intelligent Search of Values for a Controller Using the Artificial Bee Colony Algorithm to Control the Velocity of Displacement of a Robot
by José M. Villegas, Camilo Caraveo, David A. Mejía, José L. Rodríguez, Yuridia Vega, Leticia Cervantes and Alejandro Medina-Santiago
Algorithms 2021, 14(9), 273; https://doi.org/10.3390/a14090273 - 18 Sep 2021
Cited by 3 | Viewed by 2012
Abstract
Optimization is essential in engineering and, in conjunction with meta-heuristics, has had a great impact in recent years because of its precision in searching for optimal parameters for the solution of problems. In this work, the Artificial Bee Colony (ABC) algorithm is used to optimize the gains of a proportional–integral (PI) controller, so that the behavior of the controller with the optimized Ti and Kp values can be observed. The approach is evaluated on a robot built with the MINDSTORMS EV3 kit. The objective of this work is to demonstrate the improvement and efficiency of controllers tuned by optimization meta-heuristics. In the results section, we observe that the results improve considerably compared with traditional tuning methods. The main contribution of this work is the implementation of an optimization algorithm (ABC) applied to a PI controller, tested by controlling the movement of a robot. Many papers have used this kit in domains such as education, science and technology research, and real-world engineering problems, with acceptable results.
(This article belongs to the Section Combinatorial Optimization, Graph, and Network Algorithms)
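The tuning loop the abstract describes (ABC searching Kp and Ti for a PI controller) can be sketched as follows. The first-order plant, the parameter bounds, and the colony settings are illustrative assumptions, not the authors' robot model:

```python
import random

def pi_cost(kp, ti, dt=0.05, steps=200, tau=0.5, setpoint=1.0):
    """Integral of |error| when a PI controller drives a first-order plant
    dx/dt = (u - x)/tau toward the setpoint (a stand-in for a velocity loop)."""
    x = integ = cost = 0.0
    for _ in range(steps):
        e = setpoint - x
        integ += e * dt
        u = kp * (e + integ / ti)        # PI law: u = Kp*(e + (1/Ti)*int(e dt))
        x += dt * (u - x) / tau
        cost += abs(e) * dt
    return cost

def abc_optimize(n_food=10, iters=40, limit=8, seed=3):
    """Simplified Artificial Bee Colony search over (Kp, Ti)."""
    rng = random.Random(seed)
    lo, hi = [0.1, 0.05], [10.0, 5.0]
    foods = [[rng.uniform(lo[d], hi[d]) for d in range(2)] for _ in range(n_food)]
    costs = [pi_cost(*f) for f in foods]
    trials = [0] * n_food
    for _ in range(iters):
        for phase in range(2):
            for s in range(n_food):
                if phase == 0:
                    i = s                # employed bees: one per food source
                else:                    # onlookers: tournament toward good sources
                    a, b = rng.randrange(n_food), rng.randrange(n_food)
                    i = a if costs[a] < costs[b] else b
                k = rng.randrange(n_food)
                d = rng.randrange(2)
                cand = foods[i][:]
                cand[d] += rng.uniform(-1, 1) * (foods[i][d] - foods[k][d])
                cand[d] = min(max(cand[d], lo[d]), hi[d])
                c = pi_cost(*cand)
                if c < costs[i]:
                    foods[i], costs[i], trials[i] = cand, c, 0
                else:
                    trials[i] += 1
        for i in range(n_food):          # scouts abandon exhausted sources
            if trials[i] > limit:
                foods[i] = [rng.uniform(lo[d], hi[d]) for d in range(2)]
                costs[i], trials[i] = pi_cost(*foods[i]), 0
    best = min(range(n_food), key=costs.__getitem__)
    return foods[best], costs[best]

(best_kp, best_ti), best_cost = abc_optimize()
```

The scout phase (re-seeding exhausted food sources) is what distinguishes ABC from plain local search: it keeps the colony exploring when a source stops improving.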
15 pages, 279 KiB  
Article
How Neurons in Deep Models Relate with Neurons in the Brain
by Arianna Pavone and Alessio Plebe
Algorithms 2021, 14(9), 272; https://doi.org/10.3390/a14090272 - 17 Sep 2021
Cited by 3 | Viewed by 2509
Abstract
In dealing with the algorithmic aspects of intelligent systems, the analogy with the biological brain has always been attractive, and has often had a dual function. On the one hand, it has been an effective source of inspiration for their design, while, on the other hand, it has been used as the justification for their success, especially in the case of Deep Learning (DL) models. However, in recent years, inspiration from the brain has lost its grip on its first role, yet it continues to be proposed in its second role, although we believe it is also becoming less and less defensible. Outside this chorus, there are theoretical proposals that instead identify important demarcation lines between DL and human cognition, to the point of being incommensurable. In this article we argue that, paradoxically, the partial indifference of the developers of deep neural models to the functioning of biological neurons is one of the reasons for their success, having promoted a pragmatically opportunistic attitude. We believe that it is even possible to glimpse a biological analogy of a different kind, in that the essentially heuristic way of proceeding in modern DL development bears intriguing similarities to natural evolution.
(This article belongs to the Special Issue Algorithmic Aspects of Neural Networks)
17 pages, 5105 KiB  
Article
Long-Term EEG Component Analysis Method Based on Lasso Regression
by Hongjian Bo, Haifeng Li, Boying Wu, Hongwei Li and Lin Ma
Algorithms 2021, 14(9), 271; https://doi.org/10.3390/a14090271 - 17 Sep 2021
Cited by 1 | Viewed by 2871
Abstract
At present, there are very few analysis methods for long-term electroencephalogram (EEG) components, and temporal information is ignored by most existing techniques in cognitive studies. Therefore, a new analysis method based on time-varying characteristics is proposed. First, a regression model based on Lasso is proposed to reveal the relationship between acoustics and physiology. Then, permutation tests and Gaussian fitting are applied to find the highest correlation. A cognitive experiment based on 93 emotional sounds was designed, and the EEG data of 10 volunteers were collected to verify the model. The 48-dimensional acoustic features and 428 EEG components were extracted and analyzed together. Through this method, the relationship between the EEG components and the acoustic features can be measured. Moreover, according to the temporal relations, an optimal offset of the acoustic features was found, which yields better alignment with the EEG features. After the regression analysis, significant EEG components were found that are in good agreement with cognitive laws. This provides a new approach to long-term EEG component analysis that could be applied in other related subjects.
(This article belongs to the Special Issue Machine Learning in Medical Signal and Image Processing)
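The Lasso step at the core of the method can be sketched with plain coordinate descent and soft-thresholding. The synthetic data below are illustrative (three random features, only two of which drive the response), not the paper's EEG or acoustic features:

```python
import random

def lasso_cd(X, y, lam, iters=200):
    """Lasso by cyclic coordinate descent with soft-thresholding:
    minimizes (1/2n)*||y - Xw||^2 + lam * ||w||_1."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(iters):
        for j in range(p):
            # partial residual excluding feature j
            r = [y[i] - sum(w[k] * X[i][k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n)) / n
            z = sum(X[i][j] ** 2 for i in range(n)) / n
            # soft-threshold keeps weak predictors at exactly zero
            if rho > lam:
                w[j] = (rho - lam) / z
            elif rho < -lam:
                w[j] = (rho + lam) / z
            else:
                w[j] = 0.0
    return w

rng = random.Random(0)
n = 80
X = [[rng.gauss(0, 1) for _ in range(3)] for _ in range(n)]
# y depends on features 0 and 1 only; feature 2 is irrelevant noise
y = [2.0 * x[0] - 1.0 * x[1] + rng.gauss(0, 0.05) for x in X]
w = lasso_cd(X, y, lam=0.1)
```

The L1 penalty is what makes the approach suitable for screening many EEG components at once: coefficients of uninformative components are driven exactly to zero rather than merely made small.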
7 pages, 214 KiB  
Article
Use of the Codon Table to Quantify the Evolutionary Role of Random Mutations
by Mihaly Mezei
Algorithms 2021, 14(9), 270; https://doi.org/10.3390/a14090270 - 17 Sep 2021
Cited by 1 | Viewed by 2043
Abstract
The various biases affecting RNA mutations during evolution are the subject of intense research, leaving the extent of the role of random mutations undefined. To remedy this lacuna, using the codon table, the number of codons encoding each amino acid was correlated with the amino acid frequencies in different branches of the evolutionary tree. The correlations were seen to increase as evolution progressed. Furthermore, the number of RNA mutations that result in a given amino acid mutation was found to be correlated with several widely used amino acid similarity tables (used in sequence alignments). These correlations increase further when the observed codon usage is factored in.
(This article belongs to the Special Issue 2021 Selected Papers from Algorithms Editorial Board Members)
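The central calculation (correlating codon degeneracy with amino acid frequencies) is simple to reproduce. The degeneracy counts below follow the standard genetic code; the frequency values are rough illustrative numbers, not the branch-specific frequencies used in the paper:

```python
import math

# Codon degeneracy from the standard genetic code (61 sense codons in total).
codons_per_aa = {
    "A": 4, "R": 6, "N": 2, "D": 2, "C": 2, "Q": 2, "E": 2, "G": 4,
    "H": 2, "I": 3, "L": 6, "K": 2, "M": 1, "F": 2, "P": 4, "S": 6,
    "T": 4, "W": 1, "Y": 2, "V": 4,
}

# Rough illustrative amino acid frequencies (%); NOT taken from the paper.
freq = {
    "A": 7.4, "R": 4.2, "N": 4.4, "D": 5.9, "C": 3.3, "Q": 3.7, "E": 5.8,
    "G": 7.4, "H": 2.9, "I": 3.8, "L": 7.6, "K": 7.2, "M": 1.8, "F": 4.0,
    "P": 5.0, "S": 8.1, "T": 6.2, "W": 1.3, "Y": 3.3, "V": 6.8,
}

def pearson(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

aas = sorted(codons_per_aa)
r = pearson([codons_per_aa[a] for a in aas], [freq[a] for a in aas])
```

Repeating this with frequencies measured in different branches of the evolutionary tree, as the paper does, turns `r` into a per-branch signal of how strongly random (degeneracy-driven) mutation shapes composition.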
17 pages, 1595 KiB  
Article
Algorithms for Bidding Strategies in Local Energy Markets: Exhaustive Search through Parallel Computing and Metaheuristic Optimization
by Andrés Angulo, Diego Rodríguez, Wilmer Garzón, Diego F. Gómez, Ameena Al Sumaiti and Sergio Rivera
Algorithms 2021, 14(9), 269; https://doi.org/10.3390/a14090269 - 16 Sep 2021
Cited by 8 | Viewed by 3702
Abstract
The integration of energy resources beyond those of traditional power systems presents new challenges for real-time implementation and operation. In the last decade, ways have been sought to optimize the operation of small microgrids (SMGs) that have a great variety of energy sources (PV (photovoltaic) prosumers, genset CHP (combined heat and power), etc.) with uncertainty in energy production, which results in different market prices. For this reason, metaheuristic methods have been used to optimize the decision-making process for multiple players in local and external markets. The players in this network comprise nine agents: three consumers, three prosumers (consumers with PV capabilities), and three CHP generators. This article deploys metaheuristic algorithms with the objective of maximizing power market transactions and the clearing price. Since metaheuristic optimization algorithms do not guarantee global optima, an exhaustive search is deployed to find the global optimum. The exhaustive search algorithm is implemented on a parallel computing architecture to reach feasible results in a short amount of time. The global optimal result is used as an indicator to evaluate the performance of the different metaheuristic algorithms. The paper presents results, discussion, comparisons, and recommendations regarding the proposed set of algorithms and performance tests.
(This article belongs to the Special Issue Metaheuristic Algorithms in Optimization and Applications 2021)
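The benchmarking idea (an exhaustive search evaluated in parallel to certify the global optimum, against which a metaheuristic is scored) can be sketched on a toy objective. The welfare function, the three-agent setup, and the price grid are illustrative assumptions, not the paper's market model:

```python
import itertools
import random
from concurrent.futures import ThreadPoolExecutor

def welfare(bid_prices):
    """Toy stand-in for the market objective: price times residual demand
    for three agents with illustrative linear demand curves."""
    demand = [10.0, 8.0, 6.0]
    return sum(p * max(0.0, d - p) for p, d in zip(bid_prices, demand))

def exhaustive_search(grid, workers=4):
    """Evaluate every combination of bid prices in parallel; the best score
    is the certified global optimum on the grid."""
    combos = list(itertools.product(grid, repeat=3))
    with ThreadPoolExecutor(max_workers=workers) as ex:
        scores = list(ex.map(welfare, combos))
    best = max(range(len(combos)), key=scores.__getitem__)
    return combos[best], scores[best]

def random_search(grid, evals=200, seed=0):
    """A minimal metaheuristic baseline: uniform random sampling of the grid."""
    rng = random.Random(seed)
    best, best_s = None, float("-inf")
    for _ in range(evals):
        cand = tuple(rng.choice(grid) for _ in range(3))
        s = welfare(cand)
        if s > best_s:
            best, best_s = cand, s
    return best, best_s

grid = [i * 0.5 for i in range(0, 21)]          # prices 0.0 .. 10.0
global_best, global_score = exhaustive_search(grid)
meta_best, meta_score = random_search(grid)
```

The gap `global_score - meta_score` is exactly the performance indicator the paper uses: how close a metaheuristic gets to the certified optimum.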
18 pages, 2880 KiB  
Article
UFaceNet: Research on Multi-Task Face Recognition Algorithm Based on CNN
by Huoyou Li, Jianshiun Hu, Jingwen Yu, Ning Yu and Qingqiang Wu
Algorithms 2021, 14(9), 268; https://doi.org/10.3390/a14090268 - 15 Sep 2021
Cited by 6 | Viewed by 4288
Abstract
With the application of deep convolutional neural networks, the performance of computer vision tasks has been raised to a new level. Deeper and more complex networks allow face recognition algorithms to obtain higher accuracy; however, the large computation and storage costs of such networks limit their further deployment. To address this problem, we studied a unified and efficient neural-network face recognition algorithm for the single-camera setting. We propose that the complete face recognition process consists of four tasks: face detection, liveness detection, keypoint detection, and face verification. Combining the key algorithms of these four tasks, we propose a unified network model based on a depthwise separable convolutional structure, UFaceNet. The model uses multi-source data for multi-task joint training and uses the keypoint detection results to aid the learning of the other tasks. It further introduces an attention mechanism through feature-level cropping and alignment to ensure accuracy, and shares convolutional layers among tasks to reduce the amount of computation and accelerate the network. The multi-task learning objective implicitly increases the amount of training data and the diversity of data distributions, making it easier to learn features that generalize. The experimental results show that UFaceNet outperforms other models in terms of computation and parameter count, with higher efficiency and potential for practical use.
25 pages, 1354 KiB  
Article
A New Constructive Heuristic Driven by Machine Learning for the Traveling Salesman Problem
by Umberto Junior Mele, Luca Maria Gambardella and Roberto Montemanni
Algorithms 2021, 14(9), 267; https://doi.org/10.3390/a14090267 - 14 Sep 2021
Cited by 11 | Viewed by 4804
Abstract
Recent systems applying Machine Learning (ML) to solve the Traveling Salesman Problem (TSP) exhibit issues when they try to scale up to realistic scenarios with several hundred vertices. The use of Candidate Lists (CLs) has been proposed to cope with these issues. A CL is defined as the subset of all the edges linked to a given vertex that contains mainly edges believed to be found in the optimal tour. The initialization procedure that identifies a CL for each vertex in the TSP aids the solver by restricting the search space during solution creation. It also reduces the computational burden, which is highly desirable when solving large TSPs. So far, ML has been engaged to create CLs and to assign values to their elements by expressing ML preferences at solution insertion. Although promising, these systems do not restrict what the ML learns and does to create solutions, which brings generalization issues. Therefore, motivated by exploratory and statistical studies of CL behavior across multiple TSP solutions, in this work we rethink the usage of ML by purposely employing it only on a task that avoids well-known ML weaknesses, such as training in the presence of frequent outliers and the detection of under-represented events. The task is to confirm inclusion in a solution only for edges that are most likely optimal. The CLs of the edge considered for inclusion are employed as input to the neural network, and the ML is in charge of distinguishing when such an edge belongs to the optimal solution and when it does not. The proposed approach enables reasonable generalization and unveils an efficient balance between ML and optimization techniques. Our ML-Constructive heuristic is trained on small instances and is then able to produce solutions, without losing quality, for large problems as well. We compare our method against classic constructive heuristics, showing that the new approach performs well on TSPLIB instances with up to 1748 cities. Although ML-Constructive exhibits an expensive constant computation time due to training, we prove that the worst-case computational complexity of solution construction after training is O(n² log n²), with n being the number of vertices in the TSP instance.
(This article belongs to the Special Issue 2021 Selected Papers from Algorithms Editorial Board Members)
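The CL machinery the abstract describes can be sketched as follows: build a k-nearest-neighbour candidate list per vertex, then construct a partial tour by accepting cheap CL edges. Here a plain degree-and-subtour filter stands in for the learned accept/reject decision of ML-Constructive; the instance and k are illustrative:

```python
import math
import random

def candidate_lists(pts, k=5):
    """k-nearest-neighbour Candidate List for each vertex: the edges most
    likely to appear in the optimal tour."""
    n = len(pts)
    dist = lambda a, b: math.dist(pts[a], pts[b])
    return {v: sorted((u for u in range(n) if u != v),
                      key=lambda u: dist(v, u))[:k]
            for v in range(n)}

def greedy_from_cls(pts, cls):
    """Constructive pass: scan CL edges cheapest-first, accept an edge when
    both endpoints have degree < 2 and no premature subtour closes
    (checked with union-find)."""
    n = len(pts)
    dist = lambda a, b: math.dist(pts[a], pts[b])
    edges = sorted({tuple(sorted((v, u))) for v in cls for u in cls[v]},
                   key=lambda e: dist(*e))
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    deg = [0] * n
    tour_edges = []
    for a, b in edges:
        if deg[a] < 2 and deg[b] < 2 and find(a) != find(b):
            parent[find(a)] = find(b)
            deg[a] += 1
            deg[b] += 1
            tour_edges.append((a, b))
    return tour_edges

rng = random.Random(4)
pts = [(rng.random(), rng.random()) for _ in range(40)]
cls = candidate_lists(pts, k=5)
partial = greedy_from_cls(pts, cls)
```

In the paper's system, the accept/reject decision for each candidate edge is made by a trained classifier rather than this blanket greedy rule, and the remaining unconnected vertices are completed by a conventional heuristic.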
22 pages, 4366 KiB  
Article
Multi-Class Freeway Congestion and Emission Based on Robust Dynamic Multi-Objective Optimization
by Juan Chen, Qinxuan Feng and Qi Guo
Algorithms 2021, 14(9), 266; https://doi.org/10.3390/a14090266 - 13 Sep 2021
Cited by 4 | Viewed by 1977
Abstract
To solve the problem of traffic congestion and emission optimization on urban multi-class expressways, a robust dynamic non-dominated sorting multi-objective genetic algorithm based on density fuzzy c-means clustering, DFCM-RDNSGA-III, is proposed in this paper. Considering the three performance indicators of travel time, ramp queue length and traffic emissions, the ramp metering and variable speed limit control schemes of an expressway are optimized to relieve congestion on the main road and ramps, thereby achieving energy conservation and emission reduction. In the VISSIM simulation environment, a road network with multiple on-ramps and off-ramps is built to verify the performance of the algorithm. The results show that, compared with the existing NSGA-III algorithm, the proposed DFCM-RDNSGA-III can provide better ramp metering and variable speed limit control schemes during the formation and dissipation of network peaks. In addition, expressway congestion can be relieved, and energy conservation as well as emission reduction can be realized.
(This article belongs to the Special Issue Metaheuristic Algorithms and Applications)
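The non-dominated sorting at the heart of the NSGA family splits a population into Pareto fronts. A minimal version, with toy three-objective vectors standing in for (travel time, ramp queue, emissions), all minimized:

```python
def nondominated_sort(points):
    """Fast non-dominated sorting (the core of NSGA-II/III): partition
    objective vectors into Pareto fronts, all objectives minimized."""
    def dominates(p, q):
        return (all(a <= b for a, b in zip(p, q))
                and any(a < b for a, b in zip(p, q)))
    n = len(points)
    dominated_by = [[] for _ in range(n)]   # solutions that i dominates
    counts = [0] * n                        # how many solutions dominate i
    for i in range(n):
        for j in range(n):
            if i != j and dominates(points[i], points[j]):
                dominated_by[i].append(j)
            elif i != j and dominates(points[j], points[i]):
                counts[i] += 1
    fronts, current = [], [i for i in range(n) if counts[i] == 0]
    while current:
        fronts.append(current)
        nxt = []
        for i in current:
            for j in dominated_by[i]:
                counts[j] -= 1
                if counts[j] == 0:          # all dominators are in earlier fronts
                    nxt.append(j)
        current = nxt
    return fronts

# Illustrative objective vectors: (travel time, ramp queue, emissions).
pts = [(1, 5, 2), (2, 4, 3), (3, 3, 3), (2, 6, 3), (4, 5, 4), (1, 5, 2)]
fronts = nondominated_sort(pts)
```

DFCM-RDNSGA-III layers density-based fuzzy clustering and robustness handling on top of this sorting step; the sketch shows only the shared foundation.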
26 pages, 2786 KiB  
Article
QB4MobOLAP: A Vocabulary Extension for Mobility OLAP on the Semantic Web
by Irya Wisnubhadra, Safiza Kamal Baharin, Nurul A. Emran and Djoko Budiyanto Setyohadi
Algorithms 2021, 14(9), 265; https://doi.org/10.3390/a14090265 - 13 Sep 2021
Cited by 2 | Viewed by 2064
Abstract
The accessibility of devices that track the positions of moving objects has attracted many researchers to Mobility Online Analytical Processing (Mobility OLAP). Mobility OLAP makes use of trajectory data warehousing techniques, which typically include the path of a moving object at particular points in time. Semantic Web (SW) users have published a large number of moving object datasets that include spatial and non-spatial data. These data are available as open data and require advanced analysis to aid in decision making. However, current SW technologies support advanced analysis only for multidimensional data warehouses and Online Analytical Processing (OLAP) over static spatial and non-spatial SW data. The existing technology does not support the modeling of moving object facts, the creation of basic mobility analytical queries, or the definition of fundamental operators and functions for moving object types. This article introduces the QB4MobOLAP vocabulary, which enables the analysis of mobility data stored in RDF cubes, and defines Mobility OLAP operators and SPARQL user-defined functions. QB4MobOLAP and the Mobility OLAP operators are evaluated by applying them to a practical use case of transportation analysis involving 8826 triples, consisting of approximately 7000 fact triples in which each triple contains nearly 1000 temporal data points (equivalent to 7 million records in conventional databases). The execution of six pertinent spatiotemporal analytics queries results in a practical, simple model with expressive performance, enabling executive decisions on transportation analysis.
19 pages, 2020 KiB  
Article
Fully Automatic Operation Algorithm of Urban Rail Train Based on RBFNN Position Output Constrained Robust Adaptive Control
by Junxia Yang, Youpeng Zhang and Yuxiang Jin
Algorithms 2021, 14(9), 264; https://doi.org/10.3390/a14090264 - 9 Sep 2021
Cited by 3 | Viewed by 2132
Abstract
High parking accuracy, comfort and stability, and fast response speed are important indicators of the control performance of a fully automatic train operation system. In this paper, aiming at the low control accuracy of the fully automatic operation of urban rail trains, a radial basis function neural network (RBFNN) position output-constrained robust adaptive control algorithm based on train operation curve tracking is proposed. Firstly, on the basis of motion mechanics, a nonlinear dynamic model of train motion is established. Then, an RBFNN is used to adaptively approximate and compensate for the additional resistance and unknown disturbances of the train model, and a basic-resistance parameter adaptation mechanism is introduced to enhance the anti-interference ability and adaptability of the control system. Lastly, using the RBFNN position output-constrained robust adaptive control technology, the train tracks the desired operation curve, thereby achieving smooth operation between stations and accurate stopping. The simulation results show that the algorithm has good robustness and adaptability: under parameter uncertainty and external disturbance, the control system maintains high-precision control and improves ride comfort.
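The approximation role the RBFNN plays here (learning an unknown resistance term online so the controller can cancel it) can be sketched with a scalar example. The disturbance shape, the input range, and the learning rate are illustrative assumptions, not the paper's train dynamics:

```python
import math
import random

def rbf_features(x, centers, width=0.5):
    """Gaussian radial basis functions evaluated at a scalar input x."""
    return [math.exp(-((x - c) / width) ** 2) for c in centers]

def train_rbf_online(target, centers, lr=0.2, epochs=30, seed=0):
    """Adapt the RBF output weights with a gradient update on each sample,
    the same mechanism an adaptive controller uses to cancel an unknown
    resistance term online."""
    rng = random.Random(seed)
    w = [0.0] * len(centers)
    for _ in range(epochs):
        for _ in range(50):
            x = rng.uniform(0.0, 10.0)               # e.g. train speed in m/s
            phi = rbf_features(x, centers)
            err = target(x) - sum(wi * p for wi, p in zip(w, phi))
            for i, p in enumerate(phi):              # w <- w + lr * err * phi
                w[i] += lr * err * p
    return w

# Unknown "additional resistance" to approximate: an illustrative quadratic
# Davis-like term plus a localized bump, not the paper's model.
target = lambda v: 0.02 * v * v + 0.5 * math.exp(-((v - 5.0) ** 2))
centers = [0.5 * i for i in range(21)]               # centers spread over 0..10
w = train_rbf_online(target, centers)
approx_err = max(
    abs(target(x) - sum(wi * p for wi, p in zip(w, rbf_features(x, centers))))
    for x in [v / 10 for v in range(101)]
)
```

In the closed loop of the paper, the same weight-update rule runs while the train is moving, so the compensation improves as the network observes more of the actual resistance.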
16 pages, 484 KiB  
Article
Sequential Recommendation through Graph Neural Networks and Transformer Encoder with Degree Encoding
by Shuli Wang, Xuewen Li, Xiaomeng Kou, Jin Zhang, Shaojie Zheng, Jinlong Wang and Jibing Gong
Algorithms 2021, 14(9), 263; https://doi.org/10.3390/a14090263 - 31 Aug 2021
Cited by 5 | Viewed by 3559
Abstract
Predicting users’ next behavior by learning their preferences from their historical behaviors is known as sequential recommendation. In this task, learning sequence representations by modeling the pairwise relationships between items in the sequence to capture their long-range dependencies is crucial. In this paper, we propose a novel deep neural network named the graph convolutional network transformer recommender (GCNTRec). GCNTRec learns effective item representations in a user’s historical behavior sequence by extracting the correlation between the target node and multi-layer neighbor nodes on graphs constructed under heterogeneous information networks in an end-to-end fashion through a graph convolutional network (GCN) with degree encoding, while capturing the long-range dependencies of items in the sequence through a transformer encoder model. Using this multi-dimensional vector representation, items related to a user’s historical behavior sequence can be easily predicted. We empirically evaluated GCNTRec on multiple public datasets. The experimental results show that our approach can effectively predict subsequent relevant items and outperforms previous techniques.
(This article belongs to the Special Issue Algorithms for Sequential Analysis)
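Two of the ingredients named above (degree encoding and GCN neighbor aggregation) can be sketched without a deep learning framework. The toy session graph, feature dimension, and mixing weights are illustrative assumptions; a real GCN layer would also apply a learned weight matrix and nonlinearity:

```python
def degree_encoding(adj, dim=4):
    """Degree encoding: a clipped one-hot of the node degree, making each
    node's structural role visible to downstream layers."""
    enc = {}
    for v in adj:
        one_hot = [0.0] * dim
        one_hot[min(len(adj[v]), dim - 1)] = 1.0
        enc[v] = one_hot
    return enc

def gcn_layer(adj, features, w_self=0.5, w_nb=0.5):
    """One message-passing step: each node mixes its own feature vector with
    the mean of its neighbours' vectors (degree-normalized aggregation)."""
    out = {}
    for v, feats in features.items():
        nbs = adj[v]
        if nbs:
            mean_nb = [sum(features[u][d] for u in nbs) / len(nbs)
                       for d in range(len(feats))]
        else:
            mean_nb = [0.0] * len(feats)
        out[v] = [w_self * f + w_nb * m for f, m in zip(feats, mean_nb)]
    return out

# Tiny illustrative item-interaction graph: edges 0-1, 1-2, 2-0, 2-3.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
feats = degree_encoding(adj)
h1 = gcn_layer(adj, feats)
```

Stacking several such layers lets a node's representation absorb multi-layer neighbor information, which the transformer encoder then reads as a sequence.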
13 pages, 2190 KiB  
Article
Parallel Hybrid Particle Swarm Algorithm for Workshop Scheduling Based on Spark
by Tianhua Zheng, Jiabin Wang and Yuxiang Cai
Algorithms 2021, 14(9), 262; https://doi.org/10.3390/a14090262 - 30 Aug 2021
Cited by 3 | Viewed by 2538
Abstract
Hybrid mixed-flow workshop scheduling involves problems such as the mass production, manufacturing, assembly and synthesis of products. To solve these problems, a parallelized hybrid particle swarm algorithm built on the Spark platform is proposed. Compared with existing intelligent algorithms, the parallel hybrid particle swarm algorithm is more likely to reach the global optimal solution. In a loader manufacturing workshop, with minimizing the maximum completion time as the optimization goal, the parallelized hybrid particle swarm algorithm is applied. The results show that, for relatively large batches, the parallel hybrid particle swarm algorithm can effectively obtain a scheduling plan and avoid falling into local optima. Compared with the serial version, parallelization improves algorithm efficiency by a factor of 2–4, and the larger the batches, the more pronounced the improvement in computational efficiency.
(This article belongs to the Special Issue Scheduling: Algorithms and Applications)
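The combination described above (particle swarm search with fitness evaluated in parallel) can be sketched on a toy two-machine flow shop. The job data, the swap-based "velocity" for permutations, and the thread pool standing in for Spark are all illustrative assumptions:

```python
import random
from concurrent.futures import ThreadPoolExecutor

def makespan(perm, jobs):
    """Two-machine flow-shop makespan for a job permutation (an illustrative
    stand-in for the workshop objective of minimizing maximum completion time)."""
    t1 = t2 = 0.0
    for j in perm:
        a, b = jobs[j]
        t1 += a                      # machine 1 finishes job j
        t2 = max(t2, t1) + b         # machine 2 starts when both are free
    return t2

def pso_permutations(jobs, n_particles=12, iters=40, seed=5, workers=4):
    """Discrete PSO sketch: particles are permutations; the 'velocity' step
    swaps elements toward the particle best and the global best. Swarm
    fitness is evaluated in parallel, mimicking the Spark setup."""
    rng = random.Random(seed)
    n = len(jobs)
    def move_toward(p, guide):
        p = p[:]
        i = rng.randrange(n)
        j = p.index(guide[i])
        p[i], p[j] = p[j], p[i]      # one swap pulls p closer to guide
        return p
    swarm = [rng.sample(range(n), n) for _ in range(n_particles)]
    with ThreadPoolExecutor(max_workers=workers) as ex:
        scores = list(ex.map(lambda p: makespan(p, jobs), swarm))
        pbest, pbest_s = [p[:] for p in swarm], scores[:]
        g = min(range(n_particles), key=scores.__getitem__)
        gbest, gbest_s = swarm[g][:], scores[g]
        for _ in range(iters):
            swarm = [move_toward(move_toward(p, pbest[i]), gbest)
                     for i, p in enumerate(swarm)]
            scores = list(ex.map(lambda p: makespan(p, jobs), swarm))
            for i, s in enumerate(scores):
                if s < pbest_s[i]:
                    pbest[i], pbest_s[i] = swarm[i][:], s
                    if s < gbest_s:
                        gbest, gbest_s = swarm[i][:], s
    return gbest, gbest_s

jobs = [(3, 6), (7, 2), (4, 4), (5, 5), (2, 7), (6, 3)]
best_perm, best_ms = pso_permutations(jobs)
```

For this instance a simple lower bound is min(a) + sum(b) = 2 + 27 = 29, so any returned makespan is at least 29; on a cluster, the `ex.map` call is what Spark distributes across executors.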
20 pages, 5419 KiB  
Article
PFSegIris: Precise and Fast Segmentation Algorithm for Multi-Source Heterogeneous Iris
by Lin Dong, Yuanning Liu and Xiaodong Zhu
Algorithms 2021, 14(9), 261; https://doi.org/10.3390/a14090261 - 30 Aug 2021
Cited by 7 | Viewed by 2621
Abstract
Current segmentation methods have limitations for multi-source heterogeneous iris segmentation, since differences in acquisition devices and acquisition environments lead to images of greatly varying quality across iris datasets. Thus, different segmentation algorithms are generally applied to distinct datasets. Meanwhile, deep-learning-based iris segmentation models occupy considerable space and take a long time. Therefore, a lightweight, precise, and fast segmentation network model aimed at multi-source heterogeneous iris images, PFSegIris, is proposed. First, purpose-designed iris feature extraction modules are used to fully extract heterogeneous iris feature information, reducing the number of parameters, the computation, and the loss of information. Then, an efficient parallel attention mechanism is introduced only once, between the encoder and the decoder, to capture semantic information, suppress noise interference, and enhance the discriminability of iris-region pixels. Finally, a skip connection from low-level features is added to capture more detailed information. Experiments on four near-infrared datasets and three visible-light datasets show that the segmentation precision is better than that of existing algorithms, while the parameter count and storage space are only 1.86 M and 0.007 GB, respectively, and the average prediction time is less than 0.10 s. The proposed algorithm segments multi-source heterogeneous iris images more precisely and more quickly than other algorithms.
(This article belongs to the Special Issue Information Fusion in Medical Image Computing)
14 pages, 2953 KiB  
Article
Comparison of Profit-Based Multi-Objective Approaches for Feature Selection in Credit Scoring
by Naomi Simumba, Suguru Okami, Akira Kodaka and Naohiko Kohtake
Algorithms 2021, 14(9), 260; https://doi.org/10.3390/a14090260 - 30 Aug 2021
Cited by 4 | Viewed by 2484
Abstract
Feature selection is crucial to the credit-scoring process, allowing for the removal of irrelevant variables with low predictive power. Conventional credit-scoring techniques treat this as a separate process wherein features are selected based on improving a single statistical measure, such as accuracy; however, recent research has focused on meaningful business parameters such as profit. More than one factor may be important to the selection process, making multi-objective optimization methods a necessity. However, the comparative performance of multi-objective methods is known to vary with the test problem and the specific implementation. This research employed a recent hybrid non-dominated sorting binary Grasshopper Optimization Algorithm (NSBGOA) and compared its performance on multi-objective feature selection for credit scoring to that of two popular benchmark algorithms in this space. A further comparison was made to determine the impact of changing the profit-maximizing base classifier on algorithm performance. Experiments demonstrate that, of the base classifiers used, the neural network classifier improved the profit-based measure and minimized the mean number of features in the population the most. Additionally, the NSBGOA algorithm gave relatively smaller hypervolumes and longer computational times across all base classifiers, while giving the highest mean objective values for the solutions. The base classifier clearly has a significant impact on the results of multi-objective optimization; therefore, careful consideration should be given to the choice of base classifier in such scenarios.
(This article belongs to the Special Issue Algorithms in Multi-Objective Optimization)
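The non-dominated sorting step central to NSGA-II-style algorithms such as NSBGOA can be sketched in a few lines. The objective pairs below are hypothetical, standing in for a profit-based loss and a feature count, both minimised:

```python
def dominates(a, b):
    """True if solution a Pareto-dominates b (all objectives minimised)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_front(points):
    """Keep only the points not dominated by any other point."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical objective pairs: (profit-based loss, number of features)
solutions = [(0.20, 5), (0.25, 3), (0.18, 8), (0.25, 6), (0.30, 2)]
front = non_dominated_front(solutions)   # (0.25, 6) is dominated by (0.25, 3)
```

The surviving front trades off the two objectives; a full algorithm would rank the remaining solutions into successive fronts and apply a diversity measure such as hypervolume.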
17 pages, 2105 KiB  
Article
Solving the Two Echelon Vehicle Routing Problem Using Simulated Annealing Algorithm Considering Drop Box Facilities and Emission Cost: A Case Study of Reverse Logistics Application in Indonesia
by Marco Reinaldi, Anak Agung Ngurah Perwira Redi, Dio Fawwaz Prakoso, Arrie Wicaksono Widodo, Mochammad Rizal Wibisono, Agus Supranartha, Rahmad Inca Liperda, Reny Nadlifatin, Yogi Tri Prasetyo and Sekar Sakti
Algorithms 2021, 14(9), 259; https://doi.org/10.3390/a14090259 - 30 Aug 2021
Cited by 6 | Viewed by 3639
Abstract
A two-echelon distribution system is often used to solve logistics problems. This study considers a two-echelon distribution system in a reverse logistics context, with a drop box facility serving as an intermediary facility. An integer linear programming optimization model is proposed, representing a two-echelon vehicle routing problem with a drop box facility (2EVRP-DF). The aim is to find the minimum total cost, consisting of vehicle transportation costs and the costs of compensating customers who have to travel to access these intermediary facilities. The results are then compared to those of common practice in reverse logistics, in which customers are assumed to go directly to the depot to drop off their goods. In addition, this study analyzes the environmental impact by adding a component for the carbon emissions emitted by the vehicles. A set of comprehensive computational experiments is conducted. The results indicate that the 2EVRP-DF model can provide optimal costs and lower carbon emissions than the common practice. Full article
(This article belongs to the Special Issue Metaheuristics and Applications in Operations Research)
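The simulated annealing loop used for routing problems of this kind can be sketched for a single depot-to-depot route. This is a generic 2-opt neighbourhood sketch, not the paper's 2EVRP-DF model; the distance matrix and cooling parameters are illustrative:

```python
import math
import random

def route_cost(route, dist):
    """Total length of a route given a distance matrix."""
    return sum(dist[route[i]][route[i + 1]] for i in range(len(route) - 1))

def simulated_annealing(route, dist, t0=10.0, cooling=0.995, steps=5000, seed=0):
    """Improve a depot-to-depot route with segment reversals under a cooling schedule."""
    rng = random.Random(seed)
    best = cur = route[:]
    t = t0
    for _ in range(steps):
        i, j = sorted(rng.sample(range(1, len(route) - 1), 2))   # keep depot endpoints fixed
        cand = cur[:i] + cur[i:j + 1][::-1] + cur[j + 1:]        # 2-opt style reversal
        delta = route_cost(cand, dist) - route_cost(cur, dist)
        if delta < 0 or rng.random() < math.exp(-delta / t):     # accept worse moves early on
            cur = cand
            if route_cost(cur, dist) < route_cost(best, dist):
                best = cur
        t *= cooling
    return best
```

Accepting some cost-increasing moves at high temperature lets the search escape local optima before the schedule cools and the walk settles into a good route.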
12 pages, 294 KiB  
Article
A Novel Semi-Supervised Fuzzy C-Means Clustering Algorithm Using Multiple Fuzzification Coefficients
by Tran Dinh Khang, Manh-Kien Tran and Michael Fowler
Algorithms 2021, 14(9), 258; https://doi.org/10.3390/a14090258 - 29 Aug 2021
Cited by 8 | Viewed by 3715
Abstract
Clustering is an unsupervised machine learning method with many practical applications that has gathered extensive research interest. It is a technique of dividing data elements into clusters such that elements in the same cluster are similar. Clustering belongs to the group of unsupervised machine learning techniques, meaning that there is no information about the labels of the elements. However, when some knowledge about the data points is available in advance, it is beneficial to use a semi-supervised algorithm. Among the many clustering techniques available, fuzzy C-means clustering (FCM) is a common one. To make the FCM algorithm a semi-supervised method, it was proposed in the literature to use an auxiliary matrix to adjust the membership grades of the elements, forcing them into certain clusters during the computation. In this study, instead of using the auxiliary matrix, we propose using multiple fuzzification coefficients to implement the semi-supervision component. After deriving the proposed semi-supervised fuzzy C-means clustering algorithm with multiple fuzzification coefficients (sSMC-FCM), we demonstrate the convergence of the algorithm and validate the efficiency of the method through a numerical example. Full article
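For context, the membership update of standard (unsupervised) FCM can be sketched as follows; the sSMC-FCM algorithm modifies this step by assigning different fuzzification coefficients to individual elements. One-dimensional data and a single coefficient m are assumed here for brevity:

```python
def fcm_memberships(x, centers, m=2.0):
    """Membership grades of a point x in each cluster (standard FCM update).

    u_i = 1 / sum_j (d_i / d_j)^(2/(m-1)), where d_i is the distance to centre i.
    """
    d = [abs(x - c) for c in centers]
    if any(di == 0.0 for di in d):               # point sits exactly on a centre
        return [1.0 if di == 0.0 else 0.0 for di in d]
    p = 2.0 / (m - 1.0)
    return [1.0 / sum((d[i] / d[j]) ** p for j in range(len(centers)))
            for i in range(len(centers))]
```

The grades always sum to one; a larger m softens the assignment, which is exactly the knob the paper turns per element to encode supervision.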
14 pages, 1163 KiB  
Article
Metal Surface Defect Detection Using Modified YOLO
by Yiming Xu, Kai Zhang and Li Wang
Algorithms 2021, 14(9), 257; https://doi.org/10.3390/a14090257 - 28 Aug 2021
Cited by 50 | Viewed by 7253
Abstract
Aiming at the problems of inefficient detection caused by traditional manual inspection and of unclear features in metal surface defect detection, an improved metal surface defect detection technology based on the You Only Look Once (YOLO) model is presented. Building on the YOLOv3 network structure, the shallow features of the 11th layer of Darknet-53 are combined with the deep features of the neural network to generate a new scale feature layer, with the goal of extracting more features of small defects. K-Means++ is then used when analyzing the size information of the anchor boxes, reducing the sensitivity to the initial cluster centers; the optimal anchor boxes are selected to make the positioning more accurate. The performance of the modified metal surface defect detection technology is compared with that of other detection methods on the Tianchi dataset. The results show that the average detection accuracy of the modified YOLO model is 75.1%, which is higher than that of YOLOv3. It also has a clear detection-speed advantage over the faster region-based convolutional neural network (Faster R-CNN) and other detection algorithms. The improved YOLO model can locate small defect targets with high accuracy and has strong real-time performance. Full article
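The K-Means++ seeding used here to choose anchor-box sizes can be sketched as follows; the 2-D points stand in for box (width, height) pairs and are purely illustrative:

```python
import random

def kmeanspp_init(points, k, seed=0):
    """K-Means++ seeding: each new centre is drawn with probability
    proportional to its squared distance from the nearest existing centre."""
    rng = random.Random(seed)
    centers = [rng.choice(points)]
    while len(centers) < k:
        d2 = [min((px - cx) ** 2 + (py - cy) ** 2 for cx, cy in centers)
              for px, py in points]
        r, acc = rng.random() * sum(d2), 0.0
        for p, w in zip(points, d2):             # weighted roulette-wheel draw
            acc += w
            if acc >= r:
                centers.append(p)
                break
    return centers
```

Because far-away points are more likely to be picked as the next centre, the seeds spread across the data, which is what reduces the sensitivity to initialisation compared with plain K-Means.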
19 pages, 3866 KiB  
Article
Summarisation, Simulation and Comparison of Nine Control Algorithms for an Active Control Mount with an Oscillating Coil Actuator
by Rang-Lin Fan, Pu Wang, Chen Han, Li-Jun Wei, Zi-Jian Liu and Pei-Ju Yuan
Algorithms 2021, 14(9), 256; https://doi.org/10.3390/a14090256 - 27 Aug 2021
Cited by 3 | Viewed by 2583
Abstract
With the further development of the automotive industry, traditional vibration isolation methods have difficulty meeting the requirements for wide frequency bands under multiple operating conditions, so the active control mount (ACM) has gradually attracted attention, with the control algorithm playing a decisive role. In this paper, an ACM with an oscillating coil actuator (OCA) is taken as the object, and a comparative study of control algorithms is performed to select the optimal one for the ACM. Through the modelling of the ACM, the design of the controllers and system simulations, the force transmission rate is used to compare the vibration isolation performance of nine control algorithms: least mean square (LMS) adaptive feedforward control, recursive least squares (RLS) adaptive feedforward control, filtered-reference LMS (FxLMS) adaptive control, linear quadratic regulator (LQR) optimal control, H2 control, H∞ control, proportional integral derivative (PID) feedback control, fuzzy control and fuzzy PID control. In summary, the FxLMS adaptive control algorithm has the best performance and the advantage of easier hardware implementation, and it can be applied in ACMs. Full article
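The FxLMS update singled out above can be sketched for a single-channel case. An identity secondary path and a sinusoidal disturbance are assumed purely for illustration; a real ACM controller would use an estimated secondary-path model:

```python
import math

def fxlms(x, d, s, n_taps=8, mu=0.01):
    """Single-channel FxLMS: adapt FIR weights w so that the control signal,
    after passing through the secondary path s, cancels the disturbance d."""
    w = [0.0] * n_taps
    x_hist = [0.0] * n_taps       # raw reference history
    xf_hist = [0.0] * n_taps      # reference filtered through s ("filtered-x")
    y_hist = [0.0] * len(s)       # controller outputs fed to the secondary path
    errors = []
    for n in range(len(x)):
        x_hist = [x[n]] + x_hist[:-1]
        y = sum(wi * xi for wi, xi in zip(w, x_hist))            # controller output
        y_hist = [y] + y_hist[:-1]
        e = d[n] + sum(si * yi for si, yi in zip(s, y_hist))     # residual at sensor
        xf = sum(si * xi for si, xi in zip(s, x_hist))           # filtered reference
        xf_hist = [xf] + xf_hist[:-1]
        w = [wi - mu * e * xi for wi, xi in zip(w, xf_hist)]     # LMS gradient step
        errors.append(e)
    return w, errors
```

Filtering the reference through the secondary-path model before the LMS step is what distinguishes FxLMS from plain LMS: it compensates for the actuator-to-sensor dynamics so the gradient points the right way.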
14 pages, 310 KiB  
Article
An Algebraic Approach to Identifiability
by Daniel Gerbet and Klaus Röbenack
Algorithms 2021, 14(9), 255; https://doi.org/10.3390/a14090255 - 27 Aug 2021
Cited by 4 | Viewed by 2375
Abstract
This paper addresses the problem of identifiability of nonlinear polynomial state-space systems. Such systems have already been studied via the input-output equations, a description that, in general, requires differential algebra. The authors use a different algebraic approach, which is based on distinguishability and observability. Employing techniques from algebraic geometry such as polynomial ideals and Gröbner bases, local as well as global results are derived. The methods are illustrated on some example systems. Full article
(This article belongs to the Special Issue Algorithms for Reliable Estimation, Identification and Control II)
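The distinguishability idea underlying the approach can be illustrated numerically with a scalar system x′ = −ax and output y = x. This is a simulation-based sketch, not the paper's algebraic method, which works symbolically with polynomial ideals and Gröbner bases:

```python
def output(a, x0=1.0, dt=0.01, steps=200):
    """Euler simulation of x' = -a*x with measured output y = x."""
    x, ys = x0, []
    for _ in range(steps):
        ys.append(x)
        x += dt * (-a * x)
    return ys

def distinguishable(a1, a2, x0=1.0, tol=1e-9):
    """Two parameter values are distinguishable if the same initial state
    produces different output trajectories under them."""
    return any(abs(u - v) > tol for u, v in zip(output(a1, x0), output(a2, x0)))
```

From a generic initial state, a = 1 and a = 2 produce different outputs, so the parameter is identifiable; from x0 = 0 every value of a yields the identically zero output and cannot be identified. Statements of exactly this state-dependent kind are what the algebraic approach makes precise.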
22 pages, 2256 KiB  
Article
Prioritizing Construction Labor Productivity Improvement Strategies Using Fuzzy Multi-Criteria Decision Making and Fuzzy Cognitive Maps
by Matin Kazerooni, Phuong Nguyen and Aminah Robinson Fayek
Algorithms 2021, 14(9), 254; https://doi.org/10.3390/a14090254 - 24 Aug 2021
Cited by 8 | Viewed by 2814
Abstract
Construction labor productivity (CLP) is affected by various interconnected factors, such as crew motivation and working conditions. Improved CLP can benefit a construction project in many ways, such as a shortened project life cycle and lower project costs. However, budget, time, and resource restrictions force companies to select and implement only a limited number of CLP improvement strategies. Therefore, a research gap exists regarding methods for supporting the selection of CLP improvement strategies for a given project by quantifying the impact of strategies on CLP with respect to the interrelationships among CLP factors. This paper proposes a decision support model that integrates fuzzy multi-criteria decision making with fuzzy cognitive maps to prioritize CLP improvement strategies based on their impact on CLP, causal relationships among CLP factors, and project characteristics. The proposed model was applied to determine CLP improvement strategies for concrete-pouring activities in building projects as an illustrative example. This study contributes to the body of knowledge by providing a systematic approach for selecting appropriate CLP improvement strategies based on the interrelationships among the factors affecting CLP and the impact of such strategies on CLP. The results are expected to support construction practitioners in identifying effective improvement strategies to enhance CLP in their projects. Full article
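A fuzzy cognitive map propagates concept activations through a squashed weighted sum until a fixed point is reached. A minimal sketch follows; the concepts and weights below are hypothetical illustrations, not values from the paper:

```python
import math

def fcm_step(state, W, lam=1.0):
    """One fuzzy-cognitive-map update: each concept's next activation is a
    sigmoid of its current value plus the weighted influence of its causes."""
    n = len(state)
    return [1.0 / (1.0 + math.exp(-lam * (state[j] +
                sum(state[i] * W[i][j] for i in range(n) if i != j))))
            for j in range(n)]

def fcm_converge(state, W, iters=50):
    """Iterate the map to (approximate) steady state."""
    for _ in range(iters):
        state = fcm_step(state, W)
    return state

# Hypothetical concepts: 0 = crew motivation, 1 = working conditions, 2 = CLP
W = [[0.0, 0.0, 0.7],    # motivation -> CLP
     [0.4, 0.0, 0.6],    # conditions -> motivation, conditions -> CLP
     [0.0, 0.0, 0.0]]    # CLP is a pure effect concept here
```

Comparing the steady-state CLP activation with and without a strategy's effect on the causal weights is one way such a map can rank improvement strategies.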
19 pages, 460 KiB  
Article
The Power of Human–Algorithm Collaboration in Solving Combinatorial Optimization Problems
by Tapani Toivonen and Markku Tukiainen
Algorithms 2021, 14(9), 253; https://doi.org/10.3390/a14090253 - 24 Aug 2021
Cited by 1 | Viewed by 2919
Abstract
Many combinatorial optimization problems are often considered intractable to solve exactly or by approximation. An example of such a problem is maximum clique, which—under standard assumptions in complexity theory—cannot be solved in sub-exponential time or be approximated within a polynomial factor efficiently. However, we show that if a polynomial time algorithm can query informative Gaussian priors from an expert poly(n) times, then a class of combinatorial optimization problems can be solved efficiently up to a multiplicative factor ϵ, where ϵ is an arbitrary constant. In this paper, we present proofs of our claims and show numerical results to support them. Our methods can cast new light on how to approach optimization problems in domains where even approximation of the problem is not feasible. Furthermore, the results can help researchers to understand the structure of these problems (or whether these problems have any structure at all!). While the proposed methods can be used to approximate combinatorial problems in NPO, we note that the scope of the problems solvable might well include problems that are provably intractable (problems in EXPTIME). Full article
(This article belongs to the Special Issue Metaheuristics)
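The idea of letting expert priors steer a combinatorial search can be illustrated with a greedy clique heuristic that tries vertices in order of a prior score. This is a toy sketch, not the authors' algorithm, which queries Gaussian priors from an expert during the run:

```python
def greedy_clique(adj, scores):
    """Grow a clique greedily, trying vertices in decreasing order of an
    expert-supplied prior score for belonging to a large clique."""
    clique = []
    for v in sorted(adj, key=lambda u: -scores[u]):
        if all(v in adj[u] for u in clique):     # v must be adjacent to the whole clique
            clique.append(v)
    return clique
```

With an informative prior the greedy pass recovers the maximum clique of a small graph, while a misleading prior steers it into a smaller one, which is the intuition behind why the quality of the expert's answers governs the approximation guarantee.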