Algorithms, Volume 16, Issue 4 (April 2023) – 41 articles

Cover Story: The complexity of products has increased considerably, and key functions can often only be realized by using high-precision components. Microgears have a particularly complex geometry, and thus the manufacturing requirements often reach technological limits. Furthermore, there are still no readily available production-integrated measuring methods. Thus, manufacturers only measure samples, if at all, as this is only possible by means of specialized, sensitive, and cost-intensive tactile or optical measurement technologies. In a novel approach, this paper examines the integration of an acoustic emission sensor into the hobbing process in order to predict process parameters as well as geometric and functional features of the produced microgears by means of supervised machine learning.
Article
Solving an Industrial-Scale Warehouse Delivery Problem with Answer Set Programming Modulo Difference Constraints
Algorithms 2023, 16(4), 216; https://doi.org/10.3390/a16040216 - 21 Apr 2023
Abstract
A warehouse delivery problem consists of a set of robots that undertake delivery jobs within a warehouse. Items are moved around the warehouse in response to events. A solution to a warehouse delivery problem is a collision-free schedule of robot movements and actions that ensures that all delivery jobs are completed and each robot is returned to its docking station. While the warehouse delivery problem is related to existing research, such as the study of multi-agent path finding (MAPF), the specific industrial requirements necessitated a novel approach that diverges from these other approaches. For example, our problem description was more suited to formalizing the warehouse in terms of a weighted directed graph rather than the more common grid-based formalization. We formalize and encode the warehouse delivery problem in Answer Set Programming (ASP) extended with difference constraints. We systematically develop and study different encoding variants, with a view to computing good-quality solutions in near real-time. In particular, application-specific criteria are contrasted against the traditional notion of makespan minimization as a measure of solution quality. The encoding was tested against both crafted and industry data, and experiments were run using the hybrid ASP solver clingo[dl].
(This article belongs to the Special Issue Hybrid Answer Set Programming Systems and Applications)
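The weighted-directed-graph formalization mentioned in the abstract can be pictured with a small sketch. The warehouse layout, node names, and edge weights below are hypothetical, not taken from the paper; the point is only that travel times between locations follow from shortest paths in a weighted digraph rather than a grid.

```python
import heapq

def dijkstra(graph, start):
    """Shortest travel times from start to every reachable node.
    graph: dict mapping node -> list of (neighbor, edge_weight) pairs."""
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nbr, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return dist

# Hypothetical warehouse: a docking station, two pick stations, a drop-off.
warehouse = {
    "dock":  [("pick1", 4), ("pick2", 6)],
    "pick1": [("drop", 3), ("pick2", 1)],
    "pick2": [("drop", 2)],
    "drop":  [("dock", 5)],
}
travel = dijkstra(warehouse, "dock")  # travel times from the docking station
```

A scheduler of the kind described would use such travel times as the arc weights feeding its difference constraints.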

Article
A Novel Hybrid Recommender System for the Tourism Domain
Algorithms 2023, 16(4), 215; https://doi.org/10.3390/a16040215 - 21 Apr 2023
Abstract
In this paper, we develop a novel hybrid recommender system for the tourism domain, which combines (a) a Bayesian preference elicitation component, which operates by asking the user to rate generic images (corresponding to generic types of POIs) in order to build a user model, and (b) a novel content-based (CB) recommendation component. The second component can itself be considered a hybrid of two different CB algorithms, each exploiting one of two semantic similarity measures: a hierarchy-based one and a non-hierarchy-based one. The latter is the recently introduced Weighted Extended Jaccard Similarity (WEJS), which is employed here for the first time within a recommender algorithm. We incorporate our algorithm within a real tour-planning mobile application, already available on Google Play, for short-term visitors to the popular tourist destination of Agios Nikolaos, Crete, Greece, and evaluate our approach via extensive simulations conducted on a real-world dataset constructed for the needs of the aforementioned mobile application. Our experiments verify that our algorithms produce effective personalized recommendations of tourist points of interest, while our final hybrid algorithm outperforms our exclusively content-based recommender algorithms in terms of recommendation accuracy. Specifically, when comparing the performance of several hybrid recommender system variants, we are able to come up with a “winner”: the most preferable variant of our hybrid recommender algorithm is one using a ⟨four elicitation slates, six shown images per slate⟩ pair as input to its Bayesian elicitation component. This variant combines increased precision with a lightweight preference elicitation process.
(This article belongs to the Special Issue New Trends in Algorithms for Intelligent Recommendation Systems)
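The paper's WEJS measure is not defined in the abstract. As background only, the classical weighted Jaccard similarity that it extends can be sketched as follows; the POI feature weights below are invented for illustration and the sketch is not the authors' WEJS.

```python
def weighted_jaccard(a, b):
    """Classical weighted Jaccard similarity between two non-negative
    feature-weight dicts: sum of element-wise minima over sum of maxima."""
    keys = set(a) | set(b)
    num = sum(min(a.get(k, 0.0), b.get(k, 0.0)) for k in keys)
    den = sum(max(a.get(k, 0.0), b.get(k, 0.0)) for k in keys)
    return num / den if den else 0.0

# Hypothetical semantic profiles of two points of interest.
poi_a = {"beach": 0.9, "museum": 0.2}
poi_b = {"beach": 0.5, "museum": 0.6, "cafe": 0.3}
score = weighted_jaccard(poi_a, poi_b)  # in [0, 1]; 1.0 for identical profiles
```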

Article
Ising-Based Kernel Clustering
Algorithms 2023, 16(4), 214; https://doi.org/10.3390/a16040214 - 19 Apr 2023
Abstract
Combinatorial clustering based on the Ising model is drawing attention as a high-quality clustering method. However, conventional Ising-based clustering methods using the Euclidean distance cannot handle irregular data. To overcome this problem, this paper proposes an Ising-based kernel clustering method, designed around two critical ideas. One is to cluster irregular data by mapping the data onto a high-dimensional feature space using a kernel trick. The other is the use of matrix–matrix calculations in numerical libraries to accelerate the preprocessing for annealing. While conventional Ising-based clustering is not designed to accept data transformed by the kernel trick, this paper extends the applicability of Ising-based clustering to a distance matrix defined in a high-dimensional data space. The proposed method can handle the Gram matrix determined by the kernel method as a high-dimensional distance matrix in order to handle irregular data. Comparing the proposed Ising-based kernel clustering method with conventional Euclidean distance-based combinatorial clustering clarifies that the quality of the clustering results of the proposed method on irregular data is significantly better than that of the conventional method. Furthermore, the preprocessing for annealing by the proposed method using numerical libraries is faster by a factor of up to 12.4 million compared with a conventional naive Python implementation. Comparisons between Ising-based kernel clustering and kernel K-means reveal that the proposed method has the potential to obtain higher-quality clustering results than kernel K-means, a representative of the state-of-the-art kernel clustering methods.
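The two ideas in the abstract — a kernel-trick mapping and matrix–matrix preprocessing — can be sketched roughly as follows. The RBF kernel and the toy data are illustrative assumptions, not the paper's setup: a Gram matrix is built with vectorized NumPy operations and converted into feature-space distances usable as annealing input.

```python
import numpy as np

def rbf_gram(X, gamma=1.0):
    """Gram matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2), computed with
    matrix-matrix operations rather than explicit Python loops."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))  # clamp tiny negatives

def feature_space_distances(K):
    """Squared distances in the implicit feature space:
    ||phi(x_i) - phi(x_j)||^2 = K_ii + K_jj - 2 K_ij."""
    diag = np.diag(K)
    return diag[:, None] + diag[None, :] - 2.0 * K

X = np.array([[0.0, 0.0], [0.0, 1.0], [3.0, 3.0]])
K = rbf_gram(X, gamma=0.5)
D = feature_space_distances(K)  # candidate input for Ising-based clustering
```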

Article
Synchronization, Control and Data Assimilation of the Lorenz System
Algorithms 2023, 16(4), 213; https://doi.org/10.3390/a16040213 - 19 Apr 2023
Abstract
We explore several aspects of replica synchronization with the goal of retrieving the values of parameters applied to the Lorenz system. The idea is to establish a computer replica (slave) of a natural system (master, simulated in this paper), and exploit the fact that the slave synchronizes with the master only if they evolve with the same parameters. As a byproduct, in the synchronized phase, the state variables of the slave and those of the master are the same, thus allowing us to perform measurements that would be impossible in the real system. We review some aspects of master–slave synchronization using a subset of variables with intermittent coupling. We show how synchronization can be achieved when some of the state variables are available for direct measurement using a simulated annealing approach, and also when they are accessible only through a scalar function, using a pruned-enriching ensemble approach, similar to genetic algorithms without cross-over. We also explore the case of exploiting the “gene exchange” option among members of the ensemble.
(This article belongs to the Special Issue Algorithms for Natural Computing Models)
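Master–slave synchronization through a single measured variable, as described above, can be sketched for the Lorenz system. The coupling scheme (complete replacement of the slave's x by the master's x), step size, and initial states are illustrative choices, not the paper's exact setup.

```python
import numpy as np

def lorenz_step(state, dt=0.002, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One explicit-Euler step of the Lorenz system."""
    x, y, z = state
    return state + dt * np.array([sigma * (y - x),
                                  x * (rho - z) - y,
                                  x * y - beta * z])

# Master evolves freely; the slave's x is overwritten by the master's x at
# every step, so only y and z are free to (and do) converge.
master = np.array([1.0, 1.0, 1.0])
slave = np.array([-5.0, 7.0, 20.0])
for _ in range(20000):
    master = lorenz_step(master)
    slave = lorenz_step(slave)
    slave[0] = master[0]          # coupling through the measured variable

err = np.abs(master[1:] - slave[1:]).sum()  # y/z mismatch after coupling
```

With identical parameters in both replicas the mismatch decays to numerical noise; a parameter mismatch keeps it finite, which is exactly the signal the retrieval schemes exploit.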

Article
From Activity Recognition to Simulation: The Impact of Granularity on Production Models in Heavy Civil Engineering
Algorithms 2023, 16(4), 212; https://doi.org/10.3390/a16040212 - 18 Apr 2023
Abstract
As in manufacturing with its Industry 4.0 transformation, the enormous potential of artificial intelligence (AI) is also being recognized in the construction industry. Specifically, the equipment-intensive construction industry can benefit from using AI. AI applications can leverage the data recorded by the numerous sensors on machines and mirror them in a digital twin. Analyzing the digital twin can help optimize processes on the construction site and increase productivity. We present a case from special foundation engineering: the machine production of bored piles. We introduce a hierarchical classification for activity recognition and apply a hybrid deep learning model based on convolutional and recurrent neural networks. Then, based on the results of the activity detection, we use discrete-event simulation to predict construction progress. We highlight the difficulty of defining the appropriate modeling granularity: while activity detection requires equipment movement, simulation requires knowledge of the production flow. Therefore, we present a flow-based production model that can be captured in a modularized process catalog. Overall, this paper aims to illustrate how modeling with digital-twin technologies can support construction process improvement in practice.

Review
Impact of Digital Transformation on the Energy Sector: A Review
Algorithms 2023, 16(4), 211; https://doi.org/10.3390/a16040211 - 18 Apr 2023
Abstract
Digital transformation is a phenomenon introduced by the transformative power of digital technologies, and it has become a key driver for the energy sector, with advancements in technology leading to significant changes in the way energy is produced, transmitted, and consumed. The impact of digital transformation on the energy sector is profound, with benefits such as improved efficiency, cost reduction, and enhanced customer experience. This article provides a review of the impact of digital transformation on the energy sector, highlighting key trends and emerging technologies that are transforming the sector. The article begins by defining the concept of digital transformation, describing its scope, and explaining two conceptual frameworks to provide a deep understanding of the concept. This article then explores the benefits of digital transformation, examines its impact, and identifies its enablers and barriers. Each source examined was analyzed to extract qualitative results and assess its contribution to the researched topic. This paper also acknowledges the challenges posed by digital transformation, including concerns about cybersecurity, data privacy, and workforce displacement. Finally, we discuss the potential developments that are expected in the future of digital transformation in the power sector and conclude that digital transformation has the potential to significantly improve the energy sector’s efficiency, sustainability, and resiliency.
(This article belongs to the Collection Feature Papers in Algorithms)

Article
Model-Robust Estimation of Multiple-Group Structural Equation Models
Algorithms 2023, 16(4), 210; https://doi.org/10.3390/a16040210 - 17 Apr 2023
Abstract
Structural equation models (SEM) are widely used in the social sciences. They model the relationships between latent variables in structural models, while defining the latent variables through observed variables in measurement models. Frequently, it is of interest to compare particular parameters in an SEM as a function of a discrete grouping variable. Multiple-group SEM is employed to compare structural relationships between groups. In this article, estimation approaches for multiple-group SEM are reviewed. We focus on comparing different estimation strategies in the presence of local model misspecifications (i.e., model errors). In detail, maximum likelihood and weighted least-squares estimation approaches are compared with a newly proposed robust Lp loss function and regularized maximum likelihood estimation. The latter methods are referred to as model-robust estimators because they show some resistance to model errors. In particular, we focus on the performance of the different estimators in the presence of unmodeled residual error correlations and measurement noninvariance (i.e., group-specific item intercepts). The performance of the different estimators is compared in two simulation studies and an empirical example. It turned out that the robust loss function approach is computationally much less demanding than regularized maximum likelihood estimation while yielding similar statistical performance.
(This article belongs to the Special Issue Statistical learning and Its Applications)

Article
Detection of Plausibility and Error Reasons in Finite Element Simulations with Deep Learning Networks
Algorithms 2023, 16(4), 209; https://doi.org/10.3390/a16040209 - 13 Apr 2023
Abstract
The field of application of data-driven product development is diverse and ranges from requirements through the early phases to the detailed design of the product. The goal is to consistently analyze data to support and improve individual steps in the development process. In the context of this work, the focus is on the design and detailing phase, represented by the virtual testing of products through Finite Element (FE) simulations. However, due to the heterogeneous data of a simulation model, automated analysis is a major challenge. A method is therefore presented that utilizes the entire stock of previously calculated simulations to predict the plausibility of new simulations. Correspondingly, a large amount of data is utilized to support less experienced users of FE software. Thus, obvious errors in the simulation can be detected immediately with this procedure, and unnecessary iterations are avoided. Previous solutions were only able to perform a general plausibility classification, whereas the approach presented in this paper predicts specific error sources in FE simulations.
(This article belongs to the Special Issue Deep Learning Architecture and Applications)

Article
A Brain Storm and Chaotic Accelerated Particle Swarm Optimization Hybridization
Algorithms 2023, 16(4), 208; https://doi.org/10.3390/a16040208 - 13 Apr 2023
Abstract
Brain storm optimization (BSO) and particle swarm optimization (PSO) are two popular nature-inspired optimization algorithms, with BSO being the more recently developed one. It has been observed that BSO has an advantage over PSO regarding exploration with a random initialization, while PSO is more capable at local exploitation if given a predetermined initialization. The two algorithms have also been examined as a hybrid. In this work, the BSO algorithm was hybridized with the chaotic accelerated particle swarm optimization (CAPSO) algorithm in order to investigate how such an approach could serve as an improvement to the stand-alone algorithms. CAPSO is an advantageous variant of APSO, an accelerated, exploitative and minimalistic PSO algorithm. We initialized CAPSO with BSO in order to study the potential benefits from BSO’s initial exploration as well as CAPSO’s exploitation and speed. Seven benchmarking functions were used to compare the algorithms’ behavior. The chosen functions included both unimodal and multimodal benchmarking functions of various complexities and sizes of search areas. The functions were tested for different numbers of dimensions. The results showed that a properly tuned BSO–CAPSO hybrid could be significantly more beneficial over stand-alone BSO, especially with respect to computational time, while it heavily outperformed stand-alone CAPSO in the vast majority of cases.
(This article belongs to the Collection Feature Paper in Metaheuristic Algorithms and Applications)
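The CAPSO component described above can be sketched in a few lines. All coefficients, the logistic-map choice, and the test function below are illustrative assumptions, not the tuned settings from the paper: particles are drawn toward the global best, with a perturbation whose amplitude is modulated by a chaotic map instead of a plain decay schedule.

```python
import numpy as np

rng = np.random.default_rng(0)

def capso(f, dim=5, n=20, iters=200, beta=0.5):
    """Minimal chaotic accelerated PSO sketch (minimizes f)."""
    X = rng.uniform(-5, 5, size=(n, dim))
    g = X[np.argmin([f(x) for x in X])].copy()  # global best
    c = 0.7                                     # logistic-map state
    for t in range(iters):
        c = 4.0 * c * (1.0 - c)                 # chaotic sequence (r = 4)
        alpha = 0.5 * (0.97 ** t) * c           # shrinking, chaotic step size
        X = (1 - beta) * X + beta * g + alpha * rng.standard_normal(X.shape)
        fx = np.array([f(x) for x in X])
        i = int(np.argmin(fx))
        if fx[i] < f(g):
            g = X[i].copy()
    return g

sphere = lambda x: float(np.sum(x * x))
best = capso(sphere)
```

In the hybrid studied by the authors, the initial population `X` would come from a BSO exploration phase rather than a uniform draw.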

Article
A Trajectory-Based Immigration Strategy Genetic Algorithm to Solve a Single-Machine Scheduling Problem with Job Release Times and Flexible Preventive Maintenance
Algorithms 2023, 16(4), 207; https://doi.org/10.3390/a16040207 - 12 Apr 2023
Cited by 1
Abstract
This paper considers the single-machine scheduling problem with job release times and flexible preventive maintenance activities to minimize total weighted tardiness, a complicated scheduling problem for which many algorithms have been proposed in the literature. However, such problems are rarely solved by genetic algorithms (GAs), even though GAs have successfully solved various complicated combinatorial optimization problems. For this problem, we propose a trajectory-based immigration strategy, where immigrant generation is based on the information given by solution extraction knowledge matrices. We embed the immigration strategy into the GA to improve the population's diversification process. To examine the performance of the proposed GA, two other versions of the GA (one without immigration and one with random immigration) and a mixed integer programming (MIP) model are also developed. Comprehensive experiments demonstrate the effectiveness of the proposed GA by comparing it with the MIP model and the two other GA versions. Overall, the proposed GA significantly outperforms the other GA versions in terms of solution quality, owing to the trajectory-based immigration strategy.

Article
Conditional Temporal Aggregation for Time Series Forecasting Using Feature-Based Meta-Learning
Algorithms 2023, 16(4), 206; https://doi.org/10.3390/a16040206 - 12 Apr 2023
Abstract
We present a machine learning approach for applying (multiple) temporal aggregation in time series forecasting settings. The method utilizes a classification model that can be used either to select the most appropriate temporal aggregation level for producing forecasts or to derive weights to properly combine the forecasts generated at the various levels. The classifier consists of a meta-learner that correlates key time series features with forecasting accuracy, thus enabling a dynamic, data-driven selection or combination. Our experiments, conducted on two large data sets of slow- and fast-moving series, indicate that the proposed meta-learner can outperform standard forecasting approaches.
(This article belongs to the Special Issue Algorithms and Optimization Models for Forecasting and Prediction)
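The mechanics of multiple temporal aggregation can be sketched as follows. The aggregation levels, the naive per-period forecasts, and the combination weights below are hypothetical; in the paper, the feature-based meta-learner is what supplies the weights.

```python
import numpy as np

def aggregate(series, m):
    """Non-overlapping temporal aggregation: sum consecutive blocks of m
    observations (e.g. monthly -> quarterly for m = 3), dropping any
    partial block at the start so the most recent data is kept."""
    n = (len(series) // m) * m
    return series[len(series) - n:].reshape(-1, m).sum(axis=1)

monthly = np.arange(1.0, 25.0)            # 24 toy monthly observations
levels = {m: aggregate(monthly, m) for m in (1, 3, 6, 12)}

# A meta-learner would map series features to weights over these levels;
# here the per-level naive forecasts are combined with made-up weights.
weights = {1: 0.4, 3: 0.3, 6: 0.2, 12: 0.1}
naive_forecasts = {m: levels[m][-1] / m for m in weights}  # per-period scale
combined = sum(weights[m] * naive_forecasts[m] for m in weights)
```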

Article
Unsupervised Cyclic Siamese Networks Automating Cell Imagery Analysis
Algorithms 2023, 16(4), 205; https://doi.org/10.3390/a16040205 - 12 Apr 2023
Abstract
Novel neural network models that can handle complex tasks with fewer examples than before are being developed for a wide range of applications. In some fields, even the creation of a few labels is laborious and impractical, especially for data that require more than a few seconds to generate each label. In the biotechnological domain, cell cultivation experiments are usually performed under varying experimental conditions, with the result that hand-labeled data from one experiment can seldom be used in others. In this field, exact cell counts are required for analysis, and even by modern standards, semi-supervised models typically need hundreds of labels to achieve acceptable accuracy on this task, while classical image processing yields unsatisfactory results. We investigate whether an unsupervised learning scheme is able to accomplish this task without manual labeling of the given data. We present a VAE-based Siamese architecture that is expanded in a cyclic fashion to allow the use of labeled synthetic data. In particular, we focus on generating pseudo-natural images from synthetic images for which the target variable is known, to mimic the existence of labeled natural data. We show that this learning scheme provides reliable estimates for multiple microscopy technologies and for unseen data sets without manual labeling. We provide the source code as well as the data we use. The code package is open source and free to use (MIT licensed).
(This article belongs to the Special Issue Deep Learning Architecture and Applications)

Article
Implementation of Novel Evolutional Algorithm for 3-Dimensional Radiation Mapping and Gamma-Field Reconstruction within the Chornobyl Sarcophagus
Algorithms 2023, 16(4), 204; https://doi.org/10.3390/a16040204 - 11 Apr 2023
Abstract
This work presents the application of a novel evolutionary algorithmic approach to determine and reconstruct the specific 3-dimensional source locations of gamma-ray emissions within the shelter object, the sarcophagus of reactor Unit 4 of the Chornobyl Nuclear Power Plant. Despite over 30 years having passed since the catastrophic accident, the high radiation levels combined with strict safety and operational restrictions continue to preclude many modern radiation detection and mapping systems from being extensively or successfully deployed within the shelter object. Hence, methods for reconstructing the intense and evolving gamma fields based on the limited inventory of available data are crucially needed. Such data are particularly important in planning the demolition of the unstable structures that comprise the facility, as well as during the prior operations to remove fuel-containing materials from inside the sarcophagus and reactor Unit 4. In this approach, a simplified model of gamma emissions within the shelter object is represented by a series of point sources, each regularly spaced on the shelter object's exterior surface, whereby the calculated activity values of these discrete sources are treated as a population in terms of evolutionary algorithms. To assess the numerical reconstruction, a fitness function is defined, comprising the variation between the known activity values (obtained during the commissioning of the New Safe Confinement at the end of 2019 at the level of the main crane system, located just below the arch above the shelter object) and the calculated values at these known locations for each new population. The final algorithm's performance was subsequently verified using newly obtained information on the gamma dose rate on the roof of the shelter object during radiation survey works at the end of 2021. With only 7000 iterations, the algorithm attained a mean absolute percentage error (MAPE) of less than 23%, which the authors consider satisfactory, given that the relative error of the measurements is ±17%. While a simple initial application is presented in this work, it is demonstrated that evolutionary algorithms could be used for radiation mapping with an existing network of radiation sensors or, as in this instance, based on historic gamma-field data.
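The scheme described above — a population of candidate point-source activities scored by the mismatch to known measurements — can be sketched in miniature. Everything below is hypothetical (geometry, source counts, an inverse-square dose model, a tiny elitist evolutionary loop); it only illustrates the fitness-driven reconstruction idea, not the authors' actual model or data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical geometry: 4 point sources and 6 dose-rate measurement points.
sources = rng.uniform(0.0, 10.0, size=(4, 3))
sensors = rng.uniform(0.0, 10.0, size=(6, 3))
r2 = ((sensors[:, None, :] - sources[None, :, :]) ** 2).sum(axis=2)

def predict(act):
    """Inverse-square point-source model: dose rate at each sensor."""
    return (1.0 / r2) @ act

def mape(pred, meas):
    return float(np.mean(np.abs(pred - meas) / meas)) * 100.0

true_activity = np.array([5.0, 1.0, 3.0, 0.5])
measured = predict(true_activity)           # stands in for survey data

# Evolutionary loop: keep an elite, mutate it with a shrinking step size.
pop = rng.uniform(0.0, 10.0, size=(30, 4))
for gen in range(400):
    fit = np.array([mape(predict(a), measured) for a in pop])
    elite = pop[np.argsort(fit)[:5]]
    sigma = max(0.5 * 0.98 ** gen, 0.02)
    children = elite[rng.integers(0, 5, size=25)] + rng.normal(0.0, sigma, (25, 4))
    pop = np.vstack([elite, np.clip(children, 0.0, None)])
best = pop[np.argmin([mape(predict(a), measured) for a in pop])]
```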

Article
Feasibility of Low Latency, Single-Sample Delay Resampling: A New Kriging Based Method
Algorithms 2023, 16(4), 203; https://doi.org/10.3390/a16040203 - 11 Apr 2023
Abstract
Wireless sensor systems often fail to provide measurements with uniform time spacing. Measurements can be delayed or even go missing entirely. Resampling to uniform intervals is necessary to satisfy the requirements of subsequent signal processing. Common resampling algorithms, based on symmetric finite impulse response (FIR) filters, entail a group delay of tens of samples, which is not acceptable given the typical measurement interval of wireless sensors of seconds or minutes. The purpose of this paper is to verify the feasibility of single-delay resampling, i.e., resampling the data without waiting for future samples. A new method to parametrize Kriging interpolation is presented and compared with two variants of Lagrange interpolation in detailed simulations of the resulting prediction error. Kriging provided the most accurate resampling in the group-delay scenario. The single-delay scenario required almost double the oversampling ratio (OSR) to achieve the same signal-to-noise ratio (SNR). An OSR between 1.8 and 3.1 was necessary for single-delay resampling, depending on the required SNR and signal distortions in terms of jitter, missing samples, and noise. Kriging was the least noise-sensitive method, and especially for signals with missing samples, it provided the best accuracy. The simulations showed that single-delay resampling is feasible, but at the expense of a higher OSR and limited SNR.
(This article belongs to the Special Issue Computational Intelligence in Wireless Sensor Networks and IoT)
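The core of Kriging-based resampling can be sketched as follows. This is generic simple kriging with an assumed Gaussian covariance model and invented sample times; the paper's contribution — the new parametrization and the single-sample-delay operation — is not reproduced here.

```python
import numpy as np

def kriging_resample(t_obs, y_obs, t_new, length=1.5, nugget=1e-8):
    """Simple-kriging (zero-mean Gaussian-process) resampling of
    nonuniformly spaced samples onto new time instants."""
    def cov(a, b):  # Gaussian covariance model
        return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2.0 * length ** 2))
    K = cov(t_obs, t_obs) + nugget * np.eye(len(t_obs))
    weights = np.linalg.solve(K, cov(t_obs, t_new))  # kriging weights
    return weights.T @ y_obs

t_obs = np.array([0.0, 0.9, 2.1, 3.0, 4.2, 5.0])  # jittered sample times
y_obs = np.sin(t_obs)
t_new = np.arange(0.0, 5.1, 1.0)                  # uniform resampling grid
y_new = kriging_resample(t_obs, y_obs, t_new)
```

A single-delay variant would apply the same weight computation to a short sliding window of past samples only, instead of the full record.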

Article
Deep Learning Stranded Neural Network Model for the Detection of Sensory Triggered Events
Algorithms 2023, 16(4), 202; https://doi.org/10.3390/a16040202 - 10 Apr 2023
Abstract
Maintenance processes are of high importance for industrial plants. They have to be performed regularly and uninterruptedly. To assist maintenance personnel, industrial sensors monitored by distributed control systems observe and collect several machinery parameters in the cloud. Then, machine learning algorithms try to match patterns and classify abnormal behaviors. This paper presents a new deep learning model called stranded-NN. This model uses a set of NN models of variable layer depths depending on the input. This way, the proposed model can classify different types of emergencies occurring in different time intervals: real-time, close-to-real-time, or periodic. The proposed stranded-NN model has been compared against existing fixed-depth MLPs and LSTM networks used by the industry. Experimentation has shown that the stranded-NN model can outperform fixed-depth MLPs by 15–21% in terms of accuracy for real-time events and by at least 10–14% for close-to-real-time events. Regarding LSTMs of the same memory depth as the NN strand input, the stranded-NN presents similar accuracy for a specific number of strands. Nevertheless, the stranded-NN model's ability to maintain multiple trained strands makes it a superior and more flexible classification and prediction solution than its LSTM counterpart, as well as being faster at training and classification.

Article
Line Clipping in 3D: Overview, Techniques and Algorithms
Algorithms 2023, 16(4), 201; https://doi.org/10.3390/a16040201 - 09 Apr 2023
Abstract
Clipping algorithms essentially compute the intersection of the clipping object and the subject, so to go from two to three dimensions we replace the two-dimensional clipping object by the three-dimensional one (the view frustum). In three-dimensional graphics, the terminology of clipping can be used to describe many related features. Typically, “clipping” refers to operations in the plane that work with rectangular shapes, and “culling” refers to more general methods to selectively process scene model elements. The aim of this article is to survey important techniques and algorithms for line clipping in 3D, but it also includes some of the latest research performed by the authors. Full article
(This article belongs to the Special Issue Machine Learning in Computational Geometry)
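As a concrete taste of 3D line clipping, here is a parametric (Liang–Barsky-style) clip of a segment against an axis-aligned box, the simplest 3D clipping object; clipping against a view frustum works the same way, with one parametric test per bounding plane. This is a standard textbook technique offered for illustration, not an algorithm from the surveyed paper.

```python
def clip_segment_aabb(p0, p1, lo, hi):
    """Clip segment p0->p1 against the axis-aligned box [lo, hi] in 3D.
    Returns the clipped endpoints, or None if the segment lies outside."""
    t0, t1 = 0.0, 1.0                    # parametric extent kept so far
    for axis in range(3):
        d = p1[axis] - p0[axis]
        if d == 0.0:                     # segment parallel to this slab
            if not (lo[axis] <= p0[axis] <= hi[axis]):
                return None
            continue
        tA = (lo[axis] - p0[axis]) / d   # entry/exit parameters for the slab
        tB = (hi[axis] - p0[axis]) / d
        if tA > tB:
            tA, tB = tB, tA
        t0, t1 = max(t0, tA), min(t1, tB)
        if t0 > t1:                      # interval became empty: no overlap
            return None
    lerp = lambda t: tuple(a + t * (b - a) for a, b in zip(p0, p1))
    return lerp(t0), lerp(t1)
```

Each of the three slabs shrinks the kept parameter interval [t0, t1]; a frustum clipper simply substitutes six half-space tests for the six slab bounds.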
Article
On Modeling Antennas Using MoM-Based Algorithms: Wire-Grid versus Surface Triangulation
Algorithms 2023, 16(4), 200; https://doi.org/10.3390/a16040200 - 07 Apr 2023
Viewed by 1563
Abstract
This paper focuses on antenna modeling using wire-grid and surface triangulation as two of the most commonly used MoM-based approaches in this field. A comprehensive overview is provided for each of them, including their history, applications, and limitations. The mathematical background of these approaches is briefly presented. Two working algorithms were developed and described in detail, along with their implementations using acceleration techniques. The wire-grid-based algorithm enables modeling of arbitrary solid antenna structures using their equivalent grid of wires according to a specific modeling recommendation proposed in earlier work. On the other hand, the surface-triangulation-based algorithm enables calculation of antenna characteristics using a novel excitation source model. Additionally, a new mesh generator based on the combined use of the considered algorithms is developed. These algorithms were used to estimate the characteristics of several antenna types with different levels of complexity, and their computational complexities were also obtained. The results obtained using these algorithms were compared with those obtained using the finite-difference time-domain numerical method, as well as those calculated analytically and measured. The analysis and comparisons were performed on the examples of rectangular spiral, spiral, and rounded bow-tie planar antennas, as well as biconical and horn antennas. Furthermore, the validity of the proposed algorithms is verified using the Monte Carlo methodology. Full article
(This article belongs to the Special Issue Computational Methods and Optimization for Numerical Analysis)
Article
Can Self-Similarity Processes Be Reflected by the Power-Law Dependencies?
Algorithms 2023, 16(4), 199; https://doi.org/10.3390/a16040199 - 07 Apr 2023
Viewed by 570
Abstract
This work was greatly influenced by the opinions of one of the authors (JS), who demonstrated in a recent book that it is important to distinguish between “fractal models” and “fractal” (power-law) behaviors. According to the self-similarity principle (SSP), the authors of this study completely distinguish between independent “fractal” (power-law) behavior and the “fractal models”, which result from the solution of equations incorporating non-integer differentiation/integration operators. It is feasible to demonstrate how many random curves resemble one another and how they can be predicted by functions with real and complex-conjugated power-law exponents. Bellman’s inequality can be used to demonstrate that the generalized geometric mean, not the arithmetic mean, which is typically recognized as the fundamental criterion in the signal processing field, corresponds to the global fitting minimum. To highlight the efficiency of the proposed algorithms, they are applied to two sets of data: one without clearly expressed power-law behavior and the other containing a clear power-law dependence. Full article
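The most basic diagnostic for power-law behavior is a straight line on log-log axes, sketched below. This is the standard log-log least-squares check, not the authors' algorithm (which involves complex-conjugated exponents and geometric-mean-based fitting).

```python
import numpy as np

# synthetic data with exact power-law dependence y = A * x^beta
xs = np.linspace(1.0, 100.0, 200)
ys = 3.0 * xs ** -1.5                     # A = 3.0, beta = -1.5

# on log-log axes a power law is a straight line, so a linear fit of
# log(y) against log(x) recovers the exponent (slope) and log-amplitude
beta, log_a = np.polyfit(np.log(xs), np.log(ys), 1)
```

For data that only approximately follow a power law, the residuals of this fit indicate how far the observed behavior deviates from a pure power-law dependence.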
Article
Model of Lexico-Semantic Bonds between Texts for Creating Their Similarity Metrics and Developing Statistical Clustering Algorithm
Algorithms 2023, 16(4), 198; https://doi.org/10.3390/a16040198 - 05 Apr 2023
Viewed by 674
Abstract
To solve the problem of text clustering according to semantic groups, we suggest using a model of a unified lexico-semantic bond between texts and a similarity matrix based on it. Using lexico-semantic analysis methods, we can create “term–document” matrices based both on the occurrence frequencies of words and n-grams and the determination of the degrees of nodes in their semantic network, followed by calculating the cosine metrics of text similarity. In the process of the construction of the text similarity matrix using lexical or semantic analysis methods, the cosine of the angle for a vector pair describing such texts will determine the degree of similarity in the lexical or semantic presentation, respectively. Based on the averaging procedure described in this paper, we can obtain a matrix of cosine metric values that describes the lexico-semantic bonds between texts. We propose an algorithm for solving text clustering problems that uses the statistical characteristics of the distribution functions of element values in the rows of the cosine metric value matrix in the model of the lexico-semantic bond between documents. The algorithm can also be applied separately to the cosine metric value matrices obtained from the lexical or semantic properties of texts alone. Our research has shown that the developed model for the lexico-semantic presentation of texts allows one to slightly increase the accuracy of their subsequent clustering. The statistical text clustering algorithm based on this model shows excellent results that are comparable to those of the widely used affinity propagation algorithm. Additionally, our algorithm does not require specification of the degree of similarity for combining vectors into a common cluster and other configuration parameters. The suggested model and algorithm significantly expand the list of known approaches for determining text similarity metrics and their clustering.
Full article
(This article belongs to the Special Issue Nature-Inspired Algorithms in Machine Learning)
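The core construction, a cosine similarity matrix per representation followed by element-wise averaging, can be sketched as below. The toy matrices and the plain arithmetic average of the two cosine matrices are stand-ins; the paper's actual averaging procedure may differ.

```python
import numpy as np

def cosine_matrix(td):
    """Pairwise cosine similarities between the document columns of a
    term-document matrix (rows = terms, columns = documents)."""
    norms = np.linalg.norm(td, axis=0, keepdims=True)
    unit = td / np.where(norms == 0, 1, norms)   # guard zero columns
    return unit.T @ unit

# toy lexical representation (word/n-gram counts) and toy semantic
# representation (semantic-network node degrees) for three documents
lexical  = np.array([[2, 0, 1],
                     [1, 1, 0],
                     [0, 3, 0]], dtype=float)
semantic = np.array([[1, 0, 2],
                     [2, 1, 0],
                     [0, 2, 1]], dtype=float)

# element-wise average of the two cosine matrices gives a single
# lexico-semantic similarity matrix over the documents
S = (cosine_matrix(lexical) + cosine_matrix(semantic)) / 2
```

Row i of S is then the similarity profile of document i, whose value distribution the clustering algorithm analyzes statistically.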
Article
An Adversarial DBN-LSTM Method for Detecting and Defending against DDoS Attacks in SDN Environments
Algorithms 2023, 16(4), 197; https://doi.org/10.3390/a16040197 - 05 Apr 2023
Viewed by 723
Abstract
As an essential piece of infrastructure supporting cyberspace security technology verification, network weapons and equipment testing, attack–defense confrontation drills, and network risk assessment, a Cyber Range is exceptionally vulnerable to distributed denial-of-service (DDoS) attacks from malicious third parties. Moreover, some attackers try to fool the classification/prediction mechanism by crafting the input data to create adversarial attacks, which are hard for ML-based Network Intrusion Detection Systems (NIDSs) to defend against. This paper proposes an adversarial DBN-LSTM method for detecting and defending against DDoS attacks in SDN environments, which applies generative adversarial networks (GANs) as well as deep belief networks and long short-term memory (DBN-LSTM) to make the system less sensitive to adversarial attacks and to speed up feature extraction. We conducted experiments using the public dataset CICDDoS 2019. The experimental results demonstrated that our method efficiently detected up-to-date common types of DDoS attacks compared to other approaches. Full article
(This article belongs to the Special Issue Artificial Intelligence in Intrusion Detection Systems)
Article
Attention–Survival Score: A Metric to Choose Better Keywords and Improve Visibility of Information
Algorithms 2023, 16(4), 196; https://doi.org/10.3390/a16040196 - 03 Apr 2023
Viewed by 610
Abstract
In this paper, we propose a method to aid authors in choosing alternative keywords that help their papers gain visibility. These alternative keywords must have a certain level of popularity in the scientific community and, simultaneously, be keywords with fewer competitors. The competitors are derived from other papers containing the same keywords. Having fewer competitors would allow an author’s paper to have a higher consult frequency. In order to recommend keywords, we must first determine an attention–survival score. The attention score is obtained using the popularity of a keyword. The survival score is derived from the number of manuscripts using the same keyword. With these two scores, we created a new algorithm that finds alternative keywords with a high attention–survival score. We used ontologies to ensure that alternative keywords proposed by our method are semantically related to the original authors’ keywords that they wish to refine. The hierarchical structure in an ontology supports the relationship between the alternative and input keywords. To test the sensibility of the ontology, we used two sources: WordNet and the Computer Science Ontology (CSO). Finally, we launched a survey for the human validation of our algorithm using keywords from Web of Science papers and three ontologies: WordNet, CSO, and DBpedia. We obtained good results from all our tests. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
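A hypothetical scoring sketch follows. The paper does not disclose its formulas here, so everything below is an assumption made for illustration: attention is taken as popularity normalized to [0, 1], survival as the reciprocal of one plus the competitor count, and the combined score as their product; the keyword counts are invented.

```python
def attention_survival(popularity, competitors, popularity_max):
    """Hypothetical combined score: popular keywords score higher
    (attention), keywords with many competing papers score lower
    (survival)."""
    attention = popularity / popularity_max       # in [0, 1]
    survival = 1.0 / (1.0 + competitors)          # fewer rivals -> higher
    return attention * survival

keywords = {                  # keyword: (popularity, competing papers)
    "deep learning": (9500, 120000),
    "stranded neural networks": (40, 15),
    "neural network ensembles": (2100, 9000),
}
pmax = max(p for p, _ in keywords.values())
ranked = sorted(keywords, key=lambda k: -attention_survival(*keywords[k], pmax))
```

Under this toy scoring the niche keyword with few competitors outranks the extremely popular but saturated one, which is the trade-off the metric is designed to expose.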
Article
Entropy-Based Anomaly Detection for Gaussian Mixture Modeling
Algorithms 2023, 16(4), 195; https://doi.org/10.3390/a16040195 - 03 Apr 2023
Viewed by 742
Abstract
Gaussian mixture modeling is a generative probabilistic model that assumes that the observed data are generated from a mixture of multiple Gaussian distributions. This mixture model provides a flexible approach to model complex distributions that may not be easily represented by a single Gaussian distribution. The Gaussian mixture model with a noise component refers to a finite mixture that includes an additional noise component to model the background noise or outliers in the data. This additional noise component helps to take into account the presence of anomalies or outliers in the data. This latter aspect is crucial for anomaly detection in situations where a clear, early warning of an abnormal condition is required. This paper proposes a novel entropy-based procedure for initializing the noise component in Gaussian mixture models. Our approach is shown to be easy to implement and effective for anomaly detection. We successfully identify anomalies in both simulated and real-world datasets, even in the presence of significant levels of noise and outliers. We provide a step-by-step description of the proposed data analysis process, along with the corresponding R code, which is publicly available in a GitHub repository. Full article
(This article belongs to the Special Issue Machine Learning Algorithms for Bioinformatics Problems)
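The noise-component idea can be sketched with a deliberately minimal stand-in: one robust Gaussian plus a uniform "noise" density over the data range, with points assigned to whichever component gives them higher posterior probability. The noise weight and the robust location/scale estimates below are assumptions for this sketch, not the paper's entropy-based initialization.

```python
import numpy as np

rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(0.0, 1.0, 500), [12.0, -15.0, 20.0]])

mu = np.median(data)                                 # robust location
sigma = 1.4826 * np.median(np.abs(data - mu))        # robust scale (MAD)
span = data.max() - data.min()
w_noise = 0.02                                       # assumed noise weight

gauss = (1 - w_noise) * np.exp(-0.5 * ((data - mu) / sigma) ** 2) \
        / (sigma * np.sqrt(2 * np.pi))
noise = w_noise / span                               # uniform noise density
resp_noise = noise / (gauss + noise)                 # posterior P(noise | x)
anomalies = np.where(resp_noise > 0.5)[0]            # points assigned to noise
```

In a full Gaussian mixture model these responsibilities would be refined by EM; here they already separate the three planted outliers from the bulk of the data.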
Article
NSGA-PINN: A Multi-Objective Optimization Method for Physics-Informed Neural Network Training
Algorithms 2023, 16(4), 194; https://doi.org/10.3390/a16040194 - 03 Apr 2023
Cited by 1 | Viewed by 943
Abstract
This paper presents NSGA-PINN, a multi-objective optimization framework for the effective training of physics-informed neural networks (PINNs). The proposed framework uses the non-dominated sorting genetic algorithm (NSGA-II) to enable traditional stochastic gradient optimization algorithms (e.g., ADAM) to escape local minima effectively. Additionally, the NSGA-II algorithm enables precisely satisfying the initial and boundary conditions encoded into the loss function during physics-informed training. We demonstrate the effectiveness of our framework by applying NSGA-PINN to several ordinary and partial differential equation problems. In particular, we show that the proposed framework can handle challenging inverse problems with noisy data. Full article
(This article belongs to the Topic Advances in Artificial Neural Networks)
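At the core of NSGA-II is fast non-dominated sorting, which ranks candidate solutions by their objective vectors (for a PINN: e.g., data loss versus physics/boundary loss). The sketch below shows only that sorting step on toy loss pairs, not the full NSGA-PINN training loop.

```python
def non_dominated_sort(points):
    """NSGA-II fast non-dominated sorting for minimization.
    Returns fronts as lists of indices; front 0 is the Pareto front."""
    n = len(points)
    dominates = [[] for _ in range(n)]   # indices dominated by i
    dom_count = [0] * n                  # number of points dominating i
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if all(a <= b for a, b in zip(points[i], points[j])) and \
               any(a < b for a, b in zip(points[i], points[j])):
                dominates[i].append(j)
                dom_count[j] += 1
    fronts = [[i for i in range(n) if dom_count[i] == 0]]
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:
            for j in dominates[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
    return fronts[:-1]                   # drop trailing empty front

# toy loss vectors (data loss, physics/boundary loss) for five candidates
losses = [(1, 5), (2, 2), (5, 1), (3, 3), (6, 6)]
fronts = non_dominated_sort(losses)      # -> [[0, 1, 2], [3], [4]]
```

Candidates in earlier fronts are preferred when selecting the next generation, which is how the genetic layer steers gradient-trained networks toward solutions that balance both loss terms.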
Article
Polychrony as Chinampas
Algorithms 2023, 16(4), 193; https://doi.org/10.3390/a16040193 - 03 Apr 2023
Viewed by 946
Abstract
In this paper, we study the flow of signals through linear paths with the nonlinear condition that a node emits a signal when it receives external stimuli or when two incoming signals from other nodes arrive coincidentally with a combined amplitude above a fixed threshold. Sets of such nodes form a polychrony group and can sometimes lead to cascades. In the context of this work, cascades are polychrony groups in which the number of nodes activated as a consequence of other nodes is greater than the number of externally activated nodes. The difference between these two numbers is the so-called profit. Given the initial conditions, we predict the conditions for a vertex to activate at a prescribed time and provide an algorithm to efficiently reconstruct a cascade. We develop a dictionary between polychrony groups and graph theory. We call the graph corresponding to a cascade a chinampa. This link leads to a topological classification of chinampas. We enumerate the chinampas of profits zero and one and the description of a family of chinampas isomorphic to a family of partially ordered sets, which implies that the enumeration problem of this family is equivalent to computing the Stanley-order polynomials of those partially ordered sets. Full article
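The coincidence rule can be illustrated with a toy simulation under assumed dynamics: nodes sit on a line, a firing node sends a unit signal to both neighbours that arrives one time step later, and a node fires when it is externally stimulated or when arriving signals sum to the threshold. This illustrates the firing rule only, not the paper's formal model of polychrony groups.

```python
def simulate(n, external, steps, threshold=2):
    """Nodes 0..n-1 on a line. external: {time: set of stimulated nodes}.
    Returns the list of (time, node) firing events."""
    fired = []
    arriving = {}                         # (time, node) -> incoming signal count
    for t in range(steps):
        active = set(external.get(t, ()))
        active |= {v for (tt, v), k in arriving.items()
                   if tt == t and k >= threshold}
        for v in active:
            fired.append((t, v))
            for nb in (v - 1, v + 1):     # signal reaches neighbours at t + 1
                if 0 <= nb < n:
                    key = (t + 1, nb)
                    arriving[key] = arriving.get(key, 0) + 1
    return fired

# stimulating nodes 0 and 2 at time 0 makes their signals coincide at
# node 1 one step later, so node 1 fires without external stimulation
events = simulate(4, {0: {0, 2}}, 3)
```

This non-externally-driven firing is exactly the kind of event the paper counts when measuring a cascade's profit.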
Article
A Novel Short-Memory Sequence-Based Model for Variable-Length Reading Recognition of Multi-Type Digital Instruments in Industrial Scenarios
Algorithms 2023, 16(4), 192; https://doi.org/10.3390/a16040192 - 31 Mar 2023
Viewed by 619
Abstract
As a practical application of Optical Character Recognition (OCR) to digital readouts, digital instrument recognition is significant for achieving automatic information management in real industrial scenarios. However, unlike common digit recognition tasks such as license plate recognition, CAPTCHA recognition, and handwritten digit recognition, the recognition of multi-type digital instruments faces greater challenges because the reading strings are variable-length, with different fonts, spacings, and aspect ratios. To overcome these challenges, we propose a novel short-memory sequence-based model for variable-length reading recognition. First, we incorporate a shortcut-connection strategy into a traditional convolutional structure to form a feature extractor that captures effective features from characters with different fonts in multi-type digital instrument images. Then, we apply an RNN-based sequence module, which strengthens short-distance dependencies while reducing the long-distance trending memory of the reading string, to greatly improve the robustness and generalization of the model on unseen data. Finally, a novel short-memory sequence-based model consisting of the feature extractor, the RNN-based sequence module, and the CTC is proposed for variable-length reading recognition of multi-type digital instruments. Experimental results show that this method is effective on the variable-length instrument reading recognition task, especially for unseen data, which proves that our method has outstanding generalization and robustness in real industrial applications. Full article
(This article belongs to the Special Issue Machine Learning and Deep Learning in Pattern Recognition)
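The CTC component is what allows a single model to emit readings of varying length. The standard best-path (greedy) CTC decoding rule, not the authors' specific decoder, can be sketched as follows; the toy per-frame scores are invented.

```python
def ctc_greedy_decode(frame_scores, blank=0):
    """Best-path CTC decoding: take the per-frame argmax label, merge
    consecutive repeats, and drop blanks. The output length varies with
    the input, which is what permits variable-length readings."""
    best = [max(range(len(f)), key=f.__getitem__) for f in frame_scores]
    decoded, prev = [], None
    for s in best:
        if s != blank and s != prev:
            decoded.append(s)
        prev = s
    return decoded

# toy per-frame scores over classes {0: blank, 1: '1', 2: '2', 3: '3'}
def one_hot(i, n=4):
    return [1.0 if j == i else 0.0 for j in range(n)]

frames = [one_hot(s) for s in (1, 1, 0, 2, 2, 0, 0, 3)]
reading = ctc_greedy_decode(frames)   # -> [1, 2, 3]
```

Eight input frames collapse to the three-character reading "123"; a longer display string would simply yield more surviving labels from the same decoder.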
Editorial
Overview on the Special Issue on “Simulation-Based Optimization: Methods and Applications in Engineering Design”
Algorithms 2023, 16(4), 191; https://doi.org/10.3390/a16040191 - 30 Mar 2023
Viewed by 523
Abstract
The simulation-based design optimization (SBDO) paradigm is a well-known approach that has assisted, assists, and will continue to assist designers to develop ever-improving systems [...] Full article
Article
JointContrast: Skeleton-Based Interaction Recognition with New Representation and Contrastive Learning
Algorithms 2023, 16(4), 190; https://doi.org/10.3390/a16040190 - 30 Mar 2023
Viewed by 538
Abstract
Skeleton-based action recognition depends on skeleton sequences to detect categories of human actions. In skeleton-based action recognition, the recognition of action scenes with more than one subject is termed interaction recognition. Unlike single-subject action recognition methods, interaction recognition requires an explicit representation of the interaction information between subjects. Recalling the success of skeletal graph representation and graph convolution in modeling the spatial structural information of skeletal data, we consider whether we can embed the inter-subject interaction information into the skeletal graph and use graph convolution for a unified feature representation. In this paper, we propose the interaction information embedding skeleton graph representation (IE-Graph) and use the graph convolution operation to represent the intra-subject spatial structure information and inter-subject interaction information in a uniform manner. Inspired by recent pre-training methods in 2D vision, we also propose unsupervised pre-training methods for skeletal data, together with a contrastive loss. On the SBU dataset, JointContrast achieves 98.2% recognition accuracy; on the NTU60 dataset, it achieves 94.1% and 96.8% recognition accuracy under the Cross-Subject and Cross-View evaluation protocols, respectively. Full article
(This article belongs to the Special Issue Machine Learning in Pattern Recognition)
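The idea of embedding interaction information into one skeletal graph can be sketched as a block adjacency matrix: one block per subject holding that subject's skeleton edges, plus extra edges linking the two subjects. The tiny 4-joint skeleton and the choice of joint-to-joint interaction edges below are illustrative assumptions, not the IE-Graph construction itself.

```python
import numpy as np

skeleton_edges = [(0, 1), (1, 2), (1, 3)]     # tiny 4-joint "skeleton"
n_joints, n_subjects = 4, 2
N = n_joints * n_subjects
adj = np.zeros((N, N))

# intra-subject edges: one copy of the skeleton per subject
for s in range(n_subjects):
    off = s * n_joints
    for a, b in skeleton_edges:
        adj[off + a, off + b] = adj[off + b, off + a] = 1

# inter-subject interaction edges: link corresponding joints of the subjects
for j in range(n_joints):
    adj[j, n_joints + j] = adj[n_joints + j, j] = 1
```

A graph convolution over this single adjacency matrix then propagates features along both intra-subject structure and inter-subject interaction in one uniform operation, which is the unification the paper argues for.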
Article
Application of Search Algorithms in Determining Fault Location on Overhead Power Lines According to the Emergency Mode Parameters
Algorithms 2023, 16(4), 189; https://doi.org/10.3390/a16040189 - 30 Mar 2023
Viewed by 695
Abstract
The identification of fault locations (FL) on overhead power lines (OHPLs) in the shortest possible time allows for a reduction in the time needed to shut down OHPLs in case of damage, which helps to improve the reliability of power systems. FL devices on OHPLs based on the emergency mode parameters (EMPs) are widely used, as they have a lower cost; however, they have a larger error than FL devices that record traveling-wave processes. Most well-known algorithms for FL on OHPLs by EMPs assume a uniform distribution of resistivity along the OHPL, which is not the case in real conditions. The application of these algorithms in FL devices on OHPLs with inhomogeneities leads to significant errors in calculating the distance to the fault location. The use of search algorithms for unconstrained one-dimensional optimization is proposed to increase the speed of the iterative procedures in FL devices on OHPLs by EMPs. Recommendations have been developed for choosing optimization criteria, as well as options for implementing computational procedures. Using the example of a two-sided FL on an OHPL, it is shown that search algorithms can significantly (from tens to hundreds of times) reduce the number of steps of the computational iterative procedure. The implementation of search algorithms is possible in the software of typical relay protection and automation terminals, without upgrading their hardware. Full article
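Golden-section search is a classic one-dimensional search algorithm of the kind the abstract refers to: it brackets the minimum of a unimodal criterion with a fixed number of function evaluations per step. The residual function below is a hypothetical stand-in for the mismatch between measured and computed emergency mode parameters as a function of assumed fault distance; it is not the paper's criterion.

```python
import math

def golden_section_min(f, a, b, tol=1e-9):
    """Unconstrained 1-D minimization of a unimodal f on [a, b]."""
    inv_phi = (math.sqrt(5) - 1) / 2          # 1/phi ~ 0.618
    c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):                       # minimum lies in [a, d]
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:                                 # minimum lies in [c, b]
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2

# hypothetical residual: EMP mismatch vs. assumed fault distance x (km)
line_len = 120.0
residual = lambda x: (x - 73.4) ** 2 + 0.5
x_fault = golden_section_min(residual, 0.0, line_len)
```

Each iteration shrinks the search interval by a constant factor, so locating the fault to millimeter precision on a 120 km line takes only a few dozen residual evaluations, far fewer than stepping along the line at a fixed increment.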
Editorial
Editorial: Surveys in Algorithm Analysis and Complexity Theory (Special Issue)
Algorithms 2023, 16(4), 188; https://doi.org/10.3390/a16040188 - 30 Mar 2023
Viewed by 408
Abstract
This is a Special Issue of the open-access journal Algorithms consisting of surveys in theoretical computer science [...] Full article
(This article belongs to the Special Issue Surveys in Algorithm Analysis and Complexity Theory)
Article
Continuous Semi-Supervised Nonnegative Matrix Factorization
Algorithms 2023, 16(4), 187; https://doi.org/10.3390/a16040187 - 30 Mar 2023
Viewed by 575
Abstract
Nonnegative matrix factorization can be used to automatically detect topics within a corpus in an unsupervised fashion. The technique amounts to an approximation of a nonnegative matrix as the product of two nonnegative matrices of lower rank. In certain applications it is desirable to extract topics and use them to predict quantitative outcomes. In this paper, we show that nonnegative matrix factorization can be combined with regression on a continuous response variable by minimizing a penalty function that adds a weighted regression error to a matrix factorization error. We show theoretically that as the weighting increases, the regression error in training decreases weakly. We test our method on synthetic data and real data coming from Rate My Professors reviews to predict an instructor’s rating from the text in their reviews. In practice, when used as a dimensionality reduction method (when the number of topics chosen in the model is fewer than the true number of topics), the method performs better than doing regression after topics are identified—both during training and testing—and it retains interpretability. Full article
(This article belongs to the Special Issue Algorithms for Non-negative Matrix Factorisation)
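A toy version of such a joint penalty can be sketched as follows. The assumed form is a Frobenius factorization error plus a weighted least-squares regression term, minimized by projected gradient steps on the factors with a closed-form refit of the regression coefficients; the paper's actual update scheme may differ.

```python
import numpy as np

rng = np.random.default_rng(2)
A = np.abs(rng.standard_normal((20, 30)))    # term-document matrix
y = rng.standard_normal(30)                  # continuous response per document
k, lam, lr = 4, 1.0, 5e-4                    # topics, regression weight, step

W = np.abs(rng.standard_normal((20, k)))     # term-topic factor
H = np.abs(rng.standard_normal((k, 30)))     # topic-document factor
beta = np.linalg.lstsq(H.T, y, rcond=None)[0]

def penalty(W, H, beta):
    """Factorization error plus weighted regression error."""
    return np.sum((A - W @ H) ** 2) + lam * np.sum((H.T @ beta - y) ** 2)

losses = [penalty(W, H, beta)]
for _ in range(300):
    W = np.maximum(W - lr * 2 * (W @ H - A) @ H.T, 0)        # keep W >= 0
    grad_H = 2 * W.T @ (W @ H - A) + 2 * lam * np.outer(beta, H.T @ beta - y)
    H = np.maximum(H - lr * grad_H, 0)                       # keep H >= 0
    beta = np.linalg.lstsq(H.T, y, rcond=None)[0]            # refit regression
    losses.append(penalty(W, H, beta))
```

Because the topics H are learned while the regression on y is being fit, the factorization is pulled toward topics that are predictive of the response, rather than topics chosen first and regressed on afterward.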