Algorithms, Volume 17, Issue 1 (January 2024) – 49 articles

Cover Story: We present a general version of the polygonal fitting problem called Unconstrained Polygonal Fitting (UPF). Our goal is to represent a given 2D shape S with an N-vertex polygonal curve P, with a known number of vertices, so that the Intersection over Union (IoU) metric between S and P is maximized; the N vertices of P may be placed anywhere in the 2D space, with no constraint on their location. The resulting UPF solutions may approximate the given curve better than solutions of the classical polygonal approximation problem, in which the vertices are constrained to lie on the boundary of the given 2D shape. For a given number of vertices N, a Particle Swarm Optimization method is used to maximize the IoU metric, which yields near-optimal solutions.
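The Particle Swarm Optimization step mentioned in the cover story can be sketched as follows. This is a minimal, generic global-best PSO, not the authors' implementation; the stand-in objective below replaces the actual IoU between the shape S and the candidate polygon, which would require polygon geometry.

```python
import random

def pso_maximize(objective, dim, n_particles=30, iters=200,
                 lo=-1.0, hi=1.0, w=0.7, c1=1.5, c2=1.5):
    """Plain global-best PSO that maximizes `objective` over [lo, hi]^dim."""
    rng = random.Random(0)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # per-particle best position
    pbest_val = [objective(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = objective(pos[i])
            if val > pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val > gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Stand-in objective: in the UPF setting, x would hold the 2N free vertex
# coordinates and the objective would be IoU(S, polygon(x)).
best, val = pso_maximize(lambda x: 1.0 - sum(v * v for v in x), dim=4)
```
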
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
36 pages, 4214 KiB  
Article
Correntropy-Based Constructive One Hidden Layer Neural Network
by Mojtaba Nayyeri, Modjtaba Rouhani, Hadi Sadoghi Yazdi, Marko M. Mäkelä, Alaleh Maskooki and Yury Nikulin
Algorithms 2024, 17(1), 49; https://doi.org/10.3390/a17010049 - 22 Jan 2024
Viewed by 1772
Abstract
One of the main disadvantages of traditional mean square error (MSE)-based constructive networks is their poor performance in the presence of non-Gaussian noise. In this paper, we propose a new incremental constructive network based on the correntropy objective function (the correntropy-based constructive neural network, C2N2), which is robust to non-Gaussian noise. In the proposed learning method, the input-side and output-side optimizations are separated. It is proved theoretically that the new hidden node, which is obtained from the input-side optimization problem, is not orthogonal to the residual error function; based on this fact, it is proved that the correntropy of the residual error converges to its optimum value. During the training process, a weighted linear least squares problem is iteratively solved to update the parameters of the newly added node. Experiments on both synthetic and benchmark datasets demonstrate the robustness of the proposed method in comparison with the MSE-based constructive network and the radial basis function (RBF) network. Moreover, the proposed method outperforms other robust learning methods, including the cascade correntropy network (CCOEN), the Multi-Layer Perceptron based on the Minimum Error Entropy objective function (MLPMEE), the Multi-Layer Perceptron based on the correntropy objective function (MLPMCC) and the Robust Least Square Support Vector Machine (RLS-SVM). Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
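The correntropy objective that the abstract contrasts with MSE can be illustrated with a small sketch (not the paper's C2N2 training procedure): a Gaussian kernel averaged over the residuals, which saturates for large errors and is therefore far less sensitive to a single gross outlier than MSE.

```python
import math

def correntropy(errors, sigma=1.0):
    """Empirical correntropy of a residual sample under a Gaussian kernel.
    Large (outlier) errors saturate the kernel and contribute almost
    nothing, which is the source of robustness to non-Gaussian noise."""
    return sum(math.exp(-e * e / (2.0 * sigma * sigma)) for e in errors) / len(errors)

def mse(errors):
    return sum(e * e for e in errors) / len(errors)

clean = [0.1, -0.2, 0.05, 0.0]
with_outlier = clean + [50.0]   # one gross non-Gaussian error
# mse(with_outlier) explodes; correntropy(with_outlier) degrades only mildly.
```
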

23 pages, 1212 KiB  
Article
Activation-Based Pruning of Neural Networks
by Tushar Ganguli and Edwin K. P. Chong
Algorithms 2024, 17(1), 48; https://doi.org/10.3390/a17010048 - 21 Jan 2024
Cited by 1 | Viewed by 2471
Abstract
We present a novel technique for pruning called activation-based pruning to effectively prune fully connected feedforward neural networks for multi-object classification. Our technique is based on the number of times each neuron is activated during model training. We compare the performance of activation-based pruning with a popular pruning method: magnitude-based pruning. Further analysis demonstrated that activation-based pruning can be considered a dimensionality reduction technique, as it leads to a sparse low-rank matrix approximation for each hidden layer of the neural network. We also demonstrate that the rank-reduced neural network generated using activation-based pruning has better accuracy than a rank-reduced network using principal component analysis. We provide empirical results to show that, after each successive pruning, the amount of reduction in the magnitude of singular values of each matrix representing the hidden layers of the network is equivalent to introducing the sum of singular values of the hidden layers as a regularization parameter to the objective function. Full article
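The core idea of the abstract above, counting how often each neuron fires during training and zeroing the weights of the least-activated ones, can be sketched as follows. This is an illustrative sketch under assumed conventions (one weight-matrix row per hidden neuron), not the authors' implementation.

```python
import numpy as np

def activation_counts(acts):
    """Per-neuron count of how often it fired (output > 0) across
    training samples; `acts` has shape (samples, neurons)."""
    return (acts > 0).sum(axis=0)

def prune_by_activation(W, counts, frac=0.25):
    """Zero the outgoing weights of the least-activated fraction of
    neurons, leaving a sparse, effectively rank-reduced layer.
    Hypothetical layout: one row of W per hidden neuron."""
    k = int(len(counts) * frac)
    idx = np.argsort(counts)[:k]      # indices of least-activated neurons
    Wp = W.copy()
    Wp[idx, :] = 0.0
    return Wp

rng = np.random.default_rng(0)
acts = rng.random((100, 8)) - np.linspace(0.0, 0.9, 8)  # later neurons fire rarely
W = rng.standard_normal((8, 4))
Wp = prune_by_activation(W, activation_counts(acts), frac=0.25)
```

Zeroing whole rows is what makes the pruned layer a sparse low-rank approximation, the property the abstract compares against PCA-based rank reduction.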

15 pages, 438 KiB  
Article
Framework Based on Simulation of Real-World Message Streams to Evaluate Classification Solutions
by Wenny Hojas-Mazo, Francisco Maciá-Pérez, José Vicente Berná Martínez, Mailyn Moreno-Espino, Iren Lorenzo Fonseca and Juan Pavón
Algorithms 2024, 17(1), 47; https://doi.org/10.3390/a17010047 - 21 Jan 2024
Viewed by 1787
Abstract
Analysing message streams in a dynamic environment is challenging. Various methods and metrics are used to evaluate message classification solutions, but they often fail to realistically simulate the actual environment. As a result, the evaluation can produce overly optimistic results, rendering current solution evaluations inadequate for real-world environments. This paper proposes a framework based on the simulation of real-world message streams to evaluate classification solutions. The framework consists of four modules: message stream simulation, processing, classification and evaluation. The simulation module uses queueing theory to replicate a real-world message stream. The processing module refines the input messages for optimal classification. The classification module categorises the generated message stream using existing solutions. The evaluation module measures the performance of the classification solutions in terms of accuracy, precision and recall. The framework can model different behaviours from different sources, such as spammers with different attack strategies, press media or social network sources. Each profile generates a message stream that is combined into the main stream for greater realism. A spam detection case study demonstrates the implementation of the proposed framework and identifies latency and message body obfuscation as critical classification quality parameters. Full article
(This article belongs to the Special Issue Artificial Intelligence in Modeling and Simulation)
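The simulation module described above, per-source message streams merged into one main stream, can be sketched with elementary queueing theory (Poisson arrivals per profile). The profile names and rates below are hypothetical; a real profile would also carry a content generator and attack strategy.

```python
import random

def simulate_stream(profiles, horizon=100.0, seed=0):
    """Merge per-source Poisson message streams into one chronological
    stream. `profiles` maps a hypothetical source name ('spammer',
    'press', ...) to its mean arrival rate (messages per unit time)."""
    rng = random.Random(seed)
    events = []
    for name, rate in profiles.items():
        t = rng.expovariate(rate)            # exponential inter-arrival times
        while t < horizon:
            events.append((t, name))
            t += rng.expovariate(rate)
    events.sort()                            # combine into the main stream
    return events

stream = simulate_stream({"spammer": 2.0, "press": 0.5})
```
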

20 pages, 2119 KiB  
Article
A Biased-Randomized Discrete Event Algorithm to Improve the Productivity of Automated Storage and Retrieval Systems in the Steel Industry
by Mattia Neroni, Massimo Bertolini and Angel A. Juan
Algorithms 2024, 17(1), 46; https://doi.org/10.3390/a17010046 - 19 Jan 2024
Viewed by 1855
Abstract
In automated storage and retrieval systems (AS/RSs), the utilization of intelligent algorithms can reduce the makespan required to complete a series of input/output operations. This paper introduces a simulation optimization algorithm designed to minimize the makespan in a realistic AS/RS commonly found in the steel sector. This system includes weight and quality constraints for the selected items. Our hybrid approach combines discrete event simulation with biased-randomized heuristics. This combination enables us to efficiently address the complex time dependencies inherent in such dynamic scenarios. Simultaneously, it allows for intelligent decision making, resulting in feasible and high-quality solutions within seconds. A series of computational experiments illustrates the potential of our approach, which surpasses an alternative method based on traditional simulated annealing. Full article
(This article belongs to the Special Issue Artificial Intelligence in Modeling and Simulation)
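The biased-randomized ingredient of the hybrid approach above is commonly implemented with a skewed (e.g. geometric) distribution over a greedily sorted candidate list, so the greedy choice is most likely but alternatives remain reachable. The sketch below shows only that selection step, under assumed parameters, not the paper's full discrete-event algorithm.

```python
import math
import random

def biased_pick(candidates, beta=0.3, rng=random):
    """Biased-randomized selection: from a list already sorted best-first
    by a greedy criterion, draw an index from a geometric distribution,
    so the best candidate is most likely but any candidate can be chosen."""
    n = len(candidates)
    u = 1.0 - rng.random()                       # u in (0, 1]
    i = min(int(math.log(u) / math.log(1.0 - beta)), n - 1)
    return candidates[i]

rng = random.Random(1)
jobs = ["j1", "j2", "j3", "j4", "j5"]            # hypothetical items, sorted best-first
picks = [biased_pick(jobs, beta=0.3, rng=rng) for _ in range(10000)]
```

With beta = 0.3, the top candidate is chosen about 30% of the time, which keeps solutions close to greedy quality while still exploring.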

19 pages, 773 KiB  
Article
Distributed Data-Driven Learning-Based Optimal Dynamic Resource Allocation for Multi-RIS-Assisted Multi-User Ad-Hoc Network
by Yuzhu Zhang and Hao Xu
Algorithms 2024, 17(1), 45; https://doi.org/10.3390/a17010045 - 19 Jan 2024
Cited by 2 | Viewed by 1857
Abstract
This study investigates the problem of decentralized dynamic resource allocation optimization for ad-hoc network communication with the support of reconfigurable intelligent surfaces (RIS), leveraging a reinforcement learning framework. In the present context of cellular networks, device-to-device (D2D) communication stands out as a promising technique to enhance the spectrum efficiency. Simultaneously, RIS have gained considerable attention due to their ability to enhance the quality of dynamic wireless networks by maximizing the spectrum efficiency without increasing the power consumption. However, prevalent centralized D2D transmission schemes require global information, leading to a significant signaling overhead. Conversely, existing distributed schemes, while avoiding the need for global information, often demand frequent information exchange among D2D users, falling short of achieving global optimization. This paper introduces a framework comprising an outer loop and inner loop. In the outer loop, decentralized dynamic resource allocation optimization has been developed for self-organizing network communication aided by RIS. This is accomplished through the application of a multi-player multi-armed bandit approach, completing strategies for RIS and resource block selection. Notably, these strategies operate without requiring signal interaction during execution. Meanwhile, in the inner loop, the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm has been adopted for cooperative learning with neural networks (NNs) to obtain optimal transmit power control and RIS phase shift control for multiple users, with a specified RIS and resource block selection policy from the outer loop. Through the utilization of optimization theory, distributed optimal resource allocation can be attained as the outer and inner reinforcement learning algorithms converge over time. 
Finally, a series of numerical simulations are presented to validate and illustrate the effectiveness of the proposed scheme. Full article
(This article belongs to the Collection Parallel and Distributed Computing: Algorithms and Applications)
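The outer loop above is a multi-player multi-armed bandit over RIS and resource-block choices. As a single-agent stand-in (not the paper's multi-player algorithm), classic UCB1 illustrates how such a loop learns a good selection without any signal exchange; the reward functions below are hypothetical.

```python
import math
import random

def ucb1(reward_fns, rounds=5000, seed=0):
    """UCB1 bandit: each arm stands for one (RIS, resource block) choice;
    `reward_fns[a](rng)` samples that choice's stochastic reward."""
    rng = random.Random(seed)
    n = len(reward_fns)
    counts, sums = [0] * n, [0.0] * n
    for t in range(1, rounds + 1):
        if t <= n:
            a = t - 1                     # play each arm once first
        else:
            a = max(range(n), key=lambda i: sums[i] / counts[i]
                    + math.sqrt(2.0 * math.log(t) / counts[i]))
        sums[a] += reward_fns[a](rng)
        counts[a] += 1
    return counts

arms = [lambda r: r.random() * 0.4,        # poor pairing, mean 0.2
        lambda r: r.random() * 0.4 + 0.5,  # good pairing, mean 0.7
        lambda r: r.random() * 0.4 + 0.2]  # middling pairing, mean 0.4
counts = ucb1(arms, rounds=5000)
```
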

11 pages, 374 KiB  
Communication
Numerical Algorithms in III–V Semiconductor Heterostructures
by Ioannis G. Tsoulos and V. N. Stavrou
Algorithms 2024, 17(1), 44; https://doi.org/10.3390/a17010044 - 19 Jan 2024
Viewed by 1510
Abstract
In the current research, we consider the solution of dispersion relations addressed to solid state physics by using artificial neural networks (ANNs). Most specifically, in a double semiconductor heterostructure, we theoretically investigate the dispersion relations of the interface polariton (IP) modes and describe the reststrahlen frequency bands between the frequencies of the transverse and longitudinal optical phonons. The numerical results obtained by the aforementioned methods are in agreement with the results obtained by the recently published literature. Two methods were used to train the neural network: a hybrid genetic algorithm and a modified version of the well-known particle swarm optimization method. Full article
(This article belongs to the Special Issue Algorithms in Data Classification (2nd Edition))

23 pages, 4215 KiB  
Article
Frequent Errors in Modeling by Machine Learning: A Prototype Case of Predicting the Timely Evolution of COVID-19 Pandemic
by Károly Héberger
Algorithms 2024, 17(1), 43; https://doi.org/10.3390/a17010043 - 19 Jan 2024
Cited by 2 | Viewed by 2077
Abstract
Background: The development and application of machine learning (ML) methods have become so fast that almost nobody can follow their developments in every detail. It is no wonder that numerous errors and inconsistencies in their usage have also spread with similar speed, independently of the task: regression or classification. This work summarizes frequent errors committed by certain authors with the aim of helping scientists avoid them. Methods: The principle of parsimony governs the train of thought. Fair method comparison can be completed with multicriteria decision-making techniques, preferably by the sum of ranking differences (SRD). Its coupling with analysis of variance (ANOVA) decomposes the effects of several factors. Earlier findings are summarized in a review-like manner: the abuse of the correlation coefficient and proper practices for model discrimination are also outlined. Results: Using an illustrative example, the correct practice and the methodology are summarized as guidelines for model discrimination and for minimizing prediction errors. The following factors are all prerequisites for successful modeling: proper data preprocessing, statistical tests, suitable performance parameters, appropriate degrees of freedom, fair comparison of models, and outlier detection, just to name a few. A checklist is provided, in a tutorial manner, on how to present ML modeling properly. The advocated practices are reviewed briefly in the discussion. Conclusions: Many of the errors can easily be filtered out with careful reviewing. It is every author's responsibility to adhere to the rules of modeling and validation. A representative sampling of recent literature outlines correct practices and emphasizes that no error-free publication exists. Full article
(This article belongs to the Special Issue Supervised and Unsupervised Classification Algorithms (2nd Edition))
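The sum of ranking differences (SRD) measure mentioned above admits a compact sketch: rank the objects by a reference column and by the method under comparison, then sum the absolute rank differences. This is a minimal illustration of the idea, not the full SRD methodology (which also validates the statistic against random rankings).

```python
def srd(reference, method):
    """Sum of ranking differences: 0 means the method orders the objects
    exactly as the reference does; larger values mean greater disagreement.
    Ties, which SRD proper handles with average ranks, are ignored here."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    return sum(abs(a - b) for a, b in zip(ranks(reference), ranks(method)))

reference = [1.0, 2.0, 3.0, 4.0]                  # e.g. row averages
s_same = srd(reference, [1.1, 2.2, 2.9, 4.4])     # same ordering as reference
s_reversed = srd(reference, [4.0, 3.0, 2.0, 1.0]) # fully reversed ordering
```
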

16 pages, 533 KiB  
Article
Dictionary Encoding Based on Tagged Sentential Decision Diagrams
by Deyuan Zhong, Liangda Fang and Quanlong Guan
Algorithms 2024, 17(1), 42; https://doi.org/10.3390/a17010042 - 18 Jan 2024
Viewed by 1322
Abstract
Encoding a dictionary into another representation stores all of its words in a more efficient way, so that common dictionary operations, such as (1) searching for a word, (2) adding some words, and (3) removing some words, can be completed in a shorter time. Binary decision diagrams (BDDs) are among the most famous such representations and are widely popular due to their excellent properties. Recent work has shown that encoding dictionaries into BDDs and some of their variants is feasible, so we further investigate the topic of encoding dictionaries into decision diagrams. Tagged sentential decision diagrams (TSDDs), one of these variants based on structured decomposition, exploit both the standard and zero-suppressed trimming rules. In this paper, we first describe how to represent dictionary files with Boolean functions, and then design an algorithm that encodes dictionaries into TSDDs with the help of tries, which greatly accelerates the encoding process, together with a decoding algorithm that restores dictionaries from TSDDs. Since TSDDs integrate two trimming rules, we expect them to represent dictionaries more effectively, and the experiments confirm this. Full article
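The trie that assists the encoding algorithm can be sketched as a nested dictionary; common prefixes are shared between words, which is what the encoder exploits when walking the trie. This shows only the trie, not the TSDD construction itself.

```python
def trie_insert(root, word):
    """Insert a word into a nested-dict trie; '$' marks end-of-word."""
    node = root
    for ch in word:
        node = node.setdefault(ch, {})  # shared prefixes reuse existing nodes
    node["$"] = True

def trie_search(root, word):
    node = root
    for ch in word:
        if ch not in node:
            return False
        node = node[ch]
    return "$" in node

trie = {}
for w in ["car", "card", "care", "dog"]:
    trie_insert(trie, w)
```
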

27 pages, 721 KiB  
Article
Efficient Multi-Objective Simulation Metamodeling for Researchers
by Ken Jom Ho, Ender Özcan and Peer-Olaf Siebers
Algorithms 2024, 17(1), 41; https://doi.org/10.3390/a17010041 - 18 Jan 2024
Viewed by 1762
Abstract
Solving multiple objective optimization problems can be computationally intensive even when experiments can be performed with the help of a simulation model. There are many methodologies that can achieve good tradeoffs between solution quality and resource use. One possibility is using an intermediate “model of a model” (metamodel) built on experimental responses from the underlying simulation model and an optimization heuristic that leverages the metamodel to explore the input space more efficiently. However, determining the best metamodel and optimizer pairing for a specific problem is not directly obvious from the problem itself, and not all domains have experimental answers to this conundrum. This paper introduces a discrete multiple objective simulation metamodeling and optimization methodology that allows algorithmic testing and evaluation of four Metamodel-Optimizer (MO) pairs for different problems. For running our experiments, we have implemented a test environment in R and tested four different MO pairs on four different problem scenarios in the Operations Research domain. The results of our experiments suggest that patterns of relative performance between the four MO pairs tested differ in terms of computational time costs for the four problems studied. With additional integration of problems, metamodels and optimizers, the opportunity to identify ex ante the best MO pair to employ for a general problem can lead to a more profitable use of metamodel optimization. Full article
(This article belongs to the Special Issue Artificial Intelligence in Modeling and Simulation)

21 pages, 1388 KiB  
Article
BENK: The Beran Estimator with Neural Kernels for Estimating the Heterogeneous Treatment Effect
by Stanislav Kirpichenko, Lev Utkin, Andrei Konstantinov and Vladimir Muliukha
Algorithms 2024, 17(1), 40; https://doi.org/10.3390/a17010040 - 18 Jan 2024
Viewed by 1500
Abstract
A method for estimating the conditional average treatment effect under the condition of censored time-to-event data, called BENK (the Beran Estimator with Neural Kernels), is proposed. The main idea behind the method is to apply the Beran estimator for estimating the survival functions of controls and treatments. Instead of typical kernel functions in the Beran estimator, it is proposed to implement kernels in the form of neural networks of a specific form, called neural kernels. The conditional average treatment effect is estimated by using the survival functions as outcomes of the control and treatment neural networks, which consist of a set of neural kernels with shared parameters. The neural kernels are more flexible and can accurately model a complex location structure of feature vectors. BENK does not require a large dataset for training due to its special way for training networks by means of pairs of examples from the control and treatment groups. The proposed method extends a set of models that estimate the conditional average treatment effect. Various numerical simulation experiments illustrate BENK and compare it with the well-known T-learner, S-learner and X-learner for several types of control and treatment outcome functions based on the Cox models, the random survival forest and the Beran estimator with Gaussian kernels. The code of the proposed algorithms implementing BENK is publicly available. Full article
(This article belongs to the Special Issue Artificial Intelligence for Fault Detection and Diagnosis)
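The Beran estimator at the heart of BENK is a kernel-weighted Kaplan-Meier estimator of the conditional survival function. The sketch below uses a fixed Gaussian kernel (the baseline the abstract compares against); BENK's contribution is to replace that kernel with a trainable neural kernel. The toy data are hypothetical.

```python
import math

def beran_survival(t, x, data, h=1.0):
    """Beran (conditionally weighted Kaplan-Meier) estimate of S(t|x).
    `data` holds (time, event, xi) triples; event=1 means the time was
    observed (uncensored). Weights come from a Gaussian kernel in x."""
    data = sorted(data)                       # ascending event times
    k = [math.exp(-(x - xi) ** 2 / (2 * h * h)) for _, _, xi in data]
    total = sum(k)
    w = [ki / total for ki in k]              # normalized kernel weights
    s, cum = 1.0, 0.0
    for (ti, event, _), wi in zip(data, w):
        if ti > t:
            break
        if event and cum < 1.0:               # only events shrink survival
            s *= 1.0 - wi / (1.0 - cum)
        cum += wi                             # censored obs. shrink the risk set
    return s

data = [(1.0, 1, 0.0), (2.0, 0, 0.5), (3.0, 1, 1.0), (4.0, 0, 0.2)]
s_early = beran_survival(0.5, 0.3, data)      # before any event
s_late = beran_survival(5.0, 0.3, data)       # after all events
```
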

17 pages, 10090 KiB  
Article
Pedestrian Detection Based on Feature Enhancement in Complex Scenes
by Jiao Su, Yi An, Jialin Wu and Kai Zhang
Algorithms 2024, 17(1), 39; https://doi.org/10.3390/a17010039 - 18 Jan 2024
Cited by 1 | Viewed by 1612
Abstract
Pedestrian detection has long been a difficult and active topic in computer vision research. At the same time, pedestrian detection technology plays an important role in many applications, such as intelligent transportation and security monitoring. In complex scenes, pedestrian detection often faces challenges such as low detection accuracy and misdetection caused by small target sizes and scale variations. To solve these problems, this paper proposes a pedestrian detection network, PT-YOLO, based on YOLOv5. The PT-YOLO network consists of the YOLOv5 network, the squeeze-and-excitation (SE) module, the weighted bi-directional feature pyramid (BiFPN) module, the coordinate convolution (coordconv) module and the wise intersection over union (WIoU) loss function. The SE module in the backbone allows the network to focus on the important features of pedestrians and improves accuracy. The weighted BiFPN module enhances the fusion of multi-scale pedestrian features and information transfer, which improves fusion efficiency. The prediction head uses the WIoU loss function to reduce regression error. The coordconv module allows the network to better perceive location information in the feature map. The experimental results show that PT-YOLO is more accurate than other target detection methods in pedestrian detection and can effectively accomplish pedestrian detection in complex scenes. Full article

11 pages, 2644 KiB  
Article
Atom Filtering Algorithm and GPU-Accelerated Calculation of Simulation Atomic Force Microscopy Images
by Romain Amyot, Noriyuki Kodera and Holger Flechsig
Algorithms 2024, 17(1), 38; https://doi.org/10.3390/a17010038 - 17 Jan 2024
Viewed by 1787
Abstract
Simulation of atomic force microscopy (AFM) computationally emulates experimental scanning of a biomolecular structure to produce topographic images that can be correlated with measured images. Its application to the enormous amount of available high-resolution structures, as well as to molecular dynamics modelling data, facilitates the quantitative interpretation of experimental observations by inferring atomistic information from resolution-limited measured topographies. The computation required to generate a simulated AFM image generally includes the calculation of contacts between the scanning tip and all atoms from the biomolecular structure. However, since only contacts with surface atoms are relevant, a filtering method shall highly improve the efficiency of simulated AFM computations. In this report, we address this issue and present an elegant solution based on graphics processing unit (GPU) computations that significantly accelerates the computation of simulated AFM images. This method not only allows for the visualization of biomolecular structures combined with ultra-fast synchronized calculation and graphical representation of corresponding simulated AFM images (live simulation AFM), but, as we demonstrate, it can also reduce the computational effort during the automatized fitting of atomistic structures into measured AFM topographies by orders of magnitude. Hence, the developed method will play an important role in post-experimental computational analysis involving simulated AFM, including expected applications in machine learning approaches. The implementation is realized in our BioAFMviewer software (ver. 3) package for simulated AFM of biomolecular structures and dynamics. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
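The filtering idea above, that only surface atoms can contact the scanning tip, can be illustrated with a crude top-surface filter: bin atoms on an (x, y) grid and keep only the highest-z atom per cell. This is a simplified stand-in for the paper's GPU-accelerated filtering, with hypothetical coordinates and cell size.

```python
def surface_filter(atoms, cell=2.0):
    """Keep, for each (x, y) grid cell, only the highest-z atom: a crude
    top-surface filter. `atoms` is a list of (x, y, z) coordinates;
    `cell` is the grid spacing in the same units."""
    top = {}
    for x, y, z in atoms:
        key = (int(x // cell), int(y // cell))
        if key not in top or z > top[key][2]:
            top[key] = (x, y, z)
    return list(top.values())

atoms = [(0.1, 0.1, 5.0), (0.3, 0.2, 1.0),   # same cell: keep z = 5.0
         (2.5, 0.4, 2.0)]                     # different cell: kept
surface = surface_filter(atoms)
```

Discarding buried atoms before tip-contact calculation is what shrinks the per-image workload; the paper additionally moves the remaining contact computation to the GPU.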

22 pages, 3492 KiB  
Article
Optimizing Reinforcement Learning Using a Generative Action-Translator Transformer
by Jiaming Li, Ning Xie and Tingting Zhao
Algorithms 2024, 17(1), 37; https://doi.org/10.3390/a17010037 - 16 Jan 2024
Viewed by 2746
Abstract
In recent years, with the rapid advancements in Natural Language Processing (NLP) technologies, large models have become widespread. Traditional reinforcement learning algorithms have also started experimenting with language models to optimize training. However, they still fundamentally rely on the Markov Decision Process (MDP) for reinforcement learning, and do not fully exploit the advantages of language models for dealing with long sequences of problems. The Decision Transformer (DT) introduced in 2021 is the initial effort to completely transform the reinforcement learning problem into a challenge within the NLP domain. It attempts to use text generation techniques to create reinforcement learning trajectories, addressing the issue of finding optimal trajectories. However, the article places the training trajectory data of reinforcement learning directly into a basic language model for training. Its aim is to predict the entire trajectory, encompassing state and reward information. This approach deviates from the reinforcement learning training objective of finding the optimal action. Furthermore, it generates redundant information in the output, impacting the final training effectiveness of the agent. This paper proposes a more reasonable network model structure, the Action-Translator Transformer (ATT), to predict only the next action of the agent. This makes the language model more interpretable for the reinforcement learning problem. We test our model in simulated gaming scenarios and compare it with current mainstream methods in the offline reinforcement learning field. Based on the presented experimental results, our model demonstrates superior performance. We hope that introducing this model will inspire new ideas and solutions for combining language models and reinforcement learning, providing fresh perspectives for offline reinforcement learning research. Full article
(This article belongs to the Special Issue Algorithms for Games AI)

18 pages, 967 KiB  
Article
Reducing Q-Value Estimation Bias via Mutual Estimation and Softmax Operation in MADRL
by Zheng Li, Xinkai Chen, Jiaqing Fu, Ning Xie and Tingting Zhao
Algorithms 2024, 17(1), 36; https://doi.org/10.3390/a17010036 - 16 Jan 2024
Viewed by 1530
Abstract
With the development of electronic game technology, the content of electronic games presents a larger number of units, richer unit attributes, more complex game mechanisms, and more diverse team strategies. Multi-agent deep reinforcement learning shines brightly in this type of team electronic game, achieving results that surpass professional human players. Reinforcement learning algorithms based on Q-value estimation often suffer from Q-value overestimation, which may seriously affect the performance of AI in multi-agent scenarios. We propose a multi-agent mutual evaluation method and a multi-agent softmax method to reduce the estimation bias of Q values in multi-agent scenarios, and have tested them in both the particle multi-agent environment and the multi-agent tank environment we constructed. The multi-agent tank environment we have built has achieved a good balance between experimental verification efficiency and multi-agent game task simulation. It can be easily extended for different multi-agent cooperation or competition tasks. We hope that it can be promoted in the research of multi-agent deep reinforcement learning. Full article
(This article belongs to the Special Issue Algorithms for Games AI)
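The softmax operation mentioned in the abstract can be sketched in isolation: replacing the hard max over Q-values with a softmax-weighted average shrinks the upward bias that comes from maximizing over noisy estimates. This shows the operator only, not the paper's multi-agent algorithms; the temperature and Q-values are hypothetical.

```python
import math

def softmax_value(q_values, beta=5.0):
    """Softmax-weighted aggregate of Q-values. Lower beta averages more
    (less overestimation bias); beta -> infinity recovers the hard max.
    Shifting by the max keeps exp() numerically stable."""
    m = max(q_values)
    exps = [math.exp(beta * (q - m)) for q in q_values]
    z = sum(exps)
    return sum(e * q for e, q in zip(exps, q_values)) / z

qs = [1.0, 1.2, 0.9]       # hypothetical Q-estimates for three actions
v = softmax_value(qs)      # strictly below max(qs), above min(qs)
```
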

18 pages, 7433 KiB  
Article
An Interactive Digital-Twin Model for Virtual Reality Environments to Train in the Use of a Sensorized Upper-Limb Prosthesis
by Alessio Cellupica, Marco Cirelli, Giovanni Saggio, Emanuele Gruppioni and Pier Paolo Valentini
Algorithms 2024, 17(1), 35; https://doi.org/10.3390/a17010035 - 14 Jan 2024
Cited by 2 | Viewed by 1966
Abstract
In recent years, the boost in the development of hardware and software resources for building virtual reality environments has fuelled the development of tools to support training in different disciplines. The purpose of this work is to discuss a complete methodology and the supporting algorithms for developing a virtual reality environment to train amputees in the use of a sensorized upper-limb prosthesis. The environment is based on the definition of a digital twin of a virtual prosthesis, able to communicate with the sensors worn by the user and to reproduce the prosthesis's dynamic behaviour and its interaction with virtual objects. Several training tasks are developed according to standards, including the Southampton Hand Assessment Procedure, and the usability of the entire system is evaluated as well. Full article
(This article belongs to the Special Issue Algorithms for Virtual and Augmented Environments)
23 pages, 7475 KiB  
Article
Ensemble Heuristic–Metaheuristic Feature Fusion Learning for Heart Disease Diagnosis Using Tabular Data
by Mohammad Shokouhifar, Mohamad Hasanvand, Elaheh Moharamkhani and Frank Werner
Algorithms 2024, 17(1), 34; https://doi.org/10.3390/a17010034 - 14 Jan 2024
Cited by 6 | Viewed by 2039
Abstract
Heart disease is a global health concern of paramount importance, causing a significant number of fatalities and disabilities. Precise and timely diagnosis of heart disease is pivotal in preventing adverse outcomes and improving patient well-being, thereby creating a growing demand for intelligent approaches to predict heart disease effectively. This paper introduces an ensemble heuristic–metaheuristic feature fusion learning (EHMFFL) algorithm for heart disease diagnosis using tabular data. Within the EHMFFL algorithm, a diverse ensemble learning model is crafted, featuring different feature subsets for each heterogeneous base learner, including support vector machine, K-nearest neighbors, logistic regression, random forest, naive Bayes, decision tree, and XGBoost techniques. The primary objective is to identify the most pertinent features for each base learner, leveraging a combined heuristic–metaheuristic approach that integrates the heuristic knowledge of the Pearson correlation coefficient with the metaheuristic-driven grey wolf optimizer. The second objective is to aggregate the decision outcomes of the various base learners through ensemble learning. The performance of the EHMFFL algorithm is rigorously assessed using the Cleveland and Statlog datasets, yielding remarkable results with accuracies of 91.8% and 88.9%, respectively, surpassing state-of-the-art techniques in heart disease diagnosis. These findings underscore the potential of the EHMFFL algorithm in enhancing diagnostic accuracy for heart disease and providing valuable support to clinicians in making more informed decisions regarding patient care. Full article
(This article belongs to the Special Issue 2022 and 2023 Selected Papers from Algorithms Editorial Board Members)
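The heuristic half of the feature-selection step — ranking features by absolute Pearson correlation with the label before a metaheuristic (the grey wolf optimizer in the paper) refines per-learner subsets — can be sketched on synthetic data. The dataset, coefficients, and seed below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 300, 8
X = rng.normal(size=(n, p))
# Only features 0 and 3 actually drive the (binary) label.
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0.0, 0.5, size=n) > 0).astype(float)

# Heuristic knowledge: absolute Pearson correlation of each feature with y.
r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(p)])
ranking = np.argsort(-np.abs(r))  # strongest candidate features first
print(ranking[:3])
```

A metaheuristic search would then start from such a ranking and optimize the exact subset per base learner against validation accuracy.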
20 pages, 891 KiB  
Article
A Quantum-Inspired Predator–Prey Algorithm for Real-Parameter Optimization
by Azal Ahmad Khan, Salman Hussain and Rohitash Chandra
Algorithms 2024, 17(1), 33; https://doi.org/10.3390/a17010033 - 12 Jan 2024
Cited by 1 | Viewed by 1661
Abstract
Quantum computing has opened up various opportunities for the enhancement of computational power in the coming decades. We can design algorithms inspired by the principles of quantum computing without implementing them on quantum computing infrastructure. In this paper, we present the quantum predator–prey algorithm (QPPA), which fuses the fundamentals of quantum computing and swarm optimization based on a predator–prey algorithm. Our results demonstrate the efficacy of QPPA in solving complex real-parameter optimization problems with better accuracy when compared to related algorithms in the literature. QPPA achieves highly rapid convergence for relatively low- and high-dimensional optimization problems and outperforms selected traditional and advanced algorithms. This motivates the application of QPPA to real-world problems. Full article
(This article belongs to the Special Issue Recent Advances in Algorithms for Swarm Systems)
20 pages, 1159 KiB  
Article
Image Watermarking Using Discrete Wavelet Transform and Singular Value Decomposition for Enhanced Imperceptibility and Robustness
by Mahbuba Begum, Sumaita Binte Shorif, Mohammad Shorif Uddin, Jannatul Ferdush, Tony Jan, Alistair Barros and Md Whaiduzzaman
Algorithms 2024, 17(1), 32; https://doi.org/10.3390/a17010032 - 12 Jan 2024
Cited by 1 | Viewed by 2578
Abstract
Digital multimedia elements such as text, image, audio, and video can be easily manipulated because of the rapid rise of multimedia technology, making data protection a prime concern. Hence, copyright protection, content authentication, and integrity verification are today’s new challenging issues. To address these issues, digital image watermarking techniques have been proposed by several researchers. Image watermarking can be conducted through several transformations, such as discrete wavelet transform (DWT), singular value decomposition (SVD), orthogonal matrix Q and upper triangular matrix R (QR) decomposition, and non-subsampled contourlet transform (NSCT). However, a single transformation cannot simultaneously satisfy all the design requirements of image watermarking, which motivates the hybrid invisible image watermarking technique designed in this work. The proposed work combines four-level (4L) DWT and two-level (2L) SVD. The Arnold map initially encrypts the watermark image, and 2L SVD is applied to it to extract the singular value (S) components of the watermark image. A 4L DWT is applied to the host image to extract the LL sub-band, and then 2L SVD is applied to extract the S components that are embedded into the host image to generate the watermarked image. The dynamically sized watermark maintains a balanced visual impact, and non-blind watermarking preserves the quality and integrity of the host image. We have evaluated the performance after applying several intentional and unintentional attacks and found higher imperceptibility and robustness, with enhanced security, than in existing state-of-the-art methods. Full article
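A minimal numpy-only sketch of the embedding idea follows: one Haar DWT level stands in for the 4L DWT, and a singular-value perturbation stands in for the 2L SVD. The extra levels, the Arnold-map encryption, and the scaling factor used in the paper are omitted or simplified here:

```python
import numpy as np

ALPHA = 0.05  # embedding strength: smaller = more imperceptible, less robust

def haar_ll(x):
    # One-level 2D Haar DWT, keeping only the LL (approximation) sub-band.
    a = (x[:, ::2] + x[:, 1::2]) / 2.0
    return (a[::2, :] + a[1::2, :]) / 2.0

def embed(ll_host, s_wm):
    # Perturb the host sub-band's singular values with the watermark's.
    u, s, vt = np.linalg.svd(ll_host, full_matrices=False)
    s_mod = s.copy()
    s_mod[: len(s_wm)] += ALPHA * s_wm
    return u @ np.diag(s_mod) @ vt, s  # keep s for non-blind extraction

rng = np.random.default_rng(1)
host = rng.uniform(0, 255, size=(64, 64))
wm = rng.uniform(0, 255, size=(16, 16))

s_wm = np.linalg.svd(wm, compute_uv=False)    # the watermark's S components
ll = haar_ll(host)
ll_marked, s_host = embed(ll, s_wm)

# Non-blind extraction: recover the watermark singular values using s_host.
s_marked = np.linalg.svd(ll_marked, compute_uv=False)
s_rec = (s_marked[: len(s_wm)] - s_host[: len(s_wm)]) / ALPHA
```

The recovered singular values match the embedded ones, while the relative change to the LL band stays small, which is the imperceptibility/robustness trade-off ALPHA controls.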
18 pages, 492 KiB  
Article
GPU Algorithms for Structured Sparse Matrix Multiplication with Diagonal Storage Schemes
by Sardar Anisul Haque, Mohammad Tanvir Parvez and Shahadat Hossain
Algorithms 2024, 17(1), 31; https://doi.org/10.3390/a17010031 - 12 Jan 2024
Viewed by 1882
Abstract
Matrix–matrix multiplication is of singular importance in linear algebra operations with a multitude of applications in scientific and engineering computing. Data structures for storing matrix elements are designed to minimize overhead information as well as to optimize the operation count. In this study, we utilize the notion of the compact diagonal storage method (CDM), which builds upon the previously developed diagonal storage—an orientation-independent uniform scheme to store the nonzero elements of a range of matrices. This study exploits both these storage schemes and presents efficient GPU-accelerated parallel implementations of matrix multiplication when the input matrices are banded and/or structured sparse. We exploit the data layouts in the diagonal storage schemes to expose a substantial amount of fine-grained parallelism and effectively utilize the GPU shared memory to improve the locality of data access for numerical calculations. Results from an extensive set of numerical experiments with the aforementioned types of matrices demonstrate orders-of-magnitude speedups compared with the sequential performance. Full article
(This article belongs to the Collection Parallel and Distributed Computing: Algorithms and Applications)
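The diagonal data layout can be illustrated with a minimal sequential matrix-vector product. This is a sketch of one common diagonal-storage convention (`data[j, i] = A[i, i + offsets[j]]`), not the paper's CDM format or its GPU kernels:

```python
import numpy as np

def dia_matvec(data, offsets, x):
    # y = A @ x with A stored by diagonals: data[j, i] = A[i, i + offsets[j]].
    # Each diagonal contributes one vectorized multiply-add, which is also the
    # fine-grained parallelism a GPU implementation would exploit.
    n = len(x)
    y = np.zeros(n)
    for d, off in zip(data, offsets):
        if off >= 0:
            y[: n - off] += d[: n - off] * x[off:]
        else:
            y[-off:] += d[-off:] * x[: n + off]
    return y

# Tridiagonal example: offsets -1 (sub), 0 (main), +1 (super) diagonal.
rng = np.random.default_rng(3)
n = 6
lower, main, upper = rng.uniform(1, 2, n), rng.uniform(1, 2, n), rng.uniform(1, 2, n)
data = np.vstack([lower, main, upper])
offsets = [-1, 0, 1]

# Equivalent dense matrix, for checking the storage convention.
A = np.diag(main) + np.diag(lower[1:], -1) + np.diag(upper[:-1], 1)
x = rng.uniform(size=n)
```

Storing only the three diagonals uses O(3n) memory instead of O(n²), and the product never touches (or tests for) zero entries.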
16 pages, 4615 KiB  
Article
Quantum-Inspired Neural Network Model of Optical Illusions
by Ivan S. Maksymov
Algorithms 2024, 17(1), 30; https://doi.org/10.3390/a17010030 - 10 Jan 2024
Cited by 6 | Viewed by 2474
Abstract
Ambiguous optical illusions have been a paradigmatic object of fascination, research and inspiration in arts, psychology and video games. However, accurate computational models of perception of ambiguous figures have been elusive. In this paper, we design and train a deep neural network model to simulate human perception of the Necker cube, an ambiguous drawing with several alternating possible interpretations. Defining the weights of the neural network connections using a quantum generator of truly random numbers, in agreement with the emerging concepts of quantum artificial intelligence and quantum cognition, we reveal that the actual perceptual state of the Necker cube is a qubit-like superposition of the two fundamental perceptual states predicted by classical theories. Our results find applications in video games and virtual reality systems employed for training of astronauts and operators of unmanned aerial vehicles. They are also useful for researchers working in the fields of machine learning and vision, psychology of perception and quantum-mechanical models of the human mind and decision making. Full article
(This article belongs to the Special Issue Applications of AI and Data Engineering in Science)
26 pages, 21938 KiB  
Article
Navigating the Maps: Euclidean vs. Road Network Distances in Spatial Queries
by Pornrawee Tatit, Kiki Adhinugraha and David Taniar
Algorithms 2024, 17(1), 29; https://doi.org/10.3390/a17010029 - 10 Jan 2024
Cited by 2 | Viewed by 2111
Abstract
Using spatial data in mobile applications has grown significantly, empowering users to explore locations, navigate unfamiliar areas, find transportation routes, employ geomarketing strategies, and model environmental factors. Spatial databases are pivotal in efficiently storing, retrieving, and manipulating spatial data to fulfill users’ needs. Two fundamental spatial query types, k-nearest neighbors (kNN) and range search, enable users to access specific points of interest (POIs) based on their location, measured by actual road distance. However, retrieving the nearest POIs using actual road distance can be computationally intensive due to the need to find the shortest distance. Using straight-line measurements could expedite the process but might compromise accuracy. Consequently, this study aims to evaluate the accuracy of the Euclidean distance method in POI retrieval by comparing it with the road network distance method. The primary focus is determining whether the trade-off between computational time and accuracy is justified, employing the Open Source Routing Machine (OSRM) for distance extraction. The assessment encompasses diverse scenarios and analyses of the factors influencing the accuracy of the Euclidean distance method. The methodology employs a quantitative approach, categorizing query points based on density and analyzing them using kNN and range query methods. Accuracy of the Euclidean distance method is evaluated against the road network distance method. The results demonstrate peak accuracy for kNN queries at k=1, exceeding 85% across classes but declining as k increases. Range queries show varied accuracy based on POI density, with higher-density classes exhibiting earlier accuracy increases. Notably, datasets with fewer POIs exhibit unexpectedly higher accuracy, providing valuable insights into spatial query processing. Full article
(This article belongs to the Special Issue Recent Advances in Computational Intelligence for Path Planning)
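The core trade-off can be sketched with a toy comparison, using Manhattan distance as a stand-in for a grid-like road network (the paper uses OSRM road distances; the data, query point, and k values below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
query = np.array([0.5, 0.5])
pois = rng.uniform(0.0, 1.0, size=(200, 2))

# Straight-line vs. "road" distance (Manhattan, as a grid-network proxy).
d_euc = np.linalg.norm(pois - query, axis=1)
d_road = np.abs(pois - query).sum(axis=1)

def knn(dists, k):
    return set(np.argsort(dists)[:k])

# Accuracy proxy: overlap between the Euclidean kNN set and the road-network
# kNN set, per k. Euclidean retrieval is cheap but only approximates the
# road-network answer.
accuracy = {k: len(knn(d_euc, k) & knn(d_road, k)) / k for k in (1, 5, 20)}
print(accuracy)
```

Since road distance can never be shorter than the straight line, the Euclidean distance is a valid lower bound, which is also what makes it useful for pruning candidates before exact road-distance computation.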
18 pages, 1954 KiB  
Article
Specification Mining Based on the Ordering Points to Identify the Clustering Structure Clustering Algorithm and Model Checking
by Yiming Fan and Meng Wang
Algorithms 2024, 17(1), 28; https://doi.org/10.3390/a17010028 - 10 Jan 2024
Viewed by 1354
Abstract
Software specifications are of great importance to improve the quality of software. To automatically mine specifications from software systems, some specification mining approaches based on finite-state automatons have been proposed. However, these approaches are inaccurate when dealing with large-scale systems. In order to improve the accuracy of mined specifications, we propose a specification mining approach based on the OPTICS (ordering points to identify the clustering structure) clustering algorithm and model checking. In the approach, a neural network model is first used to produce the feature values of states in the traces of the program. Then, according to the feature values, finite-state automatons are generated based on the OPTICS clustering algorithm. Further, the finite-state automaton with the highest F-measure is selected. To improve the quality of the finite-state automatons, we refine them based on model checking. The proposed approach was implemented in a tool named MCLSM, and experiments on 13 target classes were conducted to evaluate its effectiveness. The experimental results show that the average F-measure of finite-state automatons generated by our method reaches 92.19%, which is higher than that of most related tools. Full article
(This article belongs to the Special Issue Algorithms in Software Engineering)
17 pages, 1904 KiB  
Article
Personalized Advertising in E-Commerce: Using Clickstream Data to Target High-Value Customers
by Virgilijus Sakalauskas and Dalia Kriksciuniene
Algorithms 2024, 17(1), 27; https://doi.org/10.3390/a17010027 - 10 Jan 2024
Cited by 2 | Viewed by 3207
Abstract
The growing popularity of e-commerce has prompted researchers to take a greater interest in a deeper understanding of online shopping behavior, consumer interest patterns, and the effectiveness of advertising campaigns. This paper presents a fresh approach for targeting high-value e-shop clients by utilizing clickstream data. We propose a new algorithm to measure customer engagement and recognize high-value customers. Clickstream data are employed in the algorithm to compute a Customer Merit (CM) index that measures the customer’s level of engagement and anticipates their purchase intent. The CM index is evaluated dynamically by the algorithm, which examines the customer’s activity level, efficiency in selecting items, and time spent browsing, combining the tracking of customers’ browsing and purchasing behaviors with other relevant factors: time spent on the website and frequency of visits to the e-shop. This strategy proves highly beneficial for e-commerce enterprises, enabling them to pinpoint potential buyers and design targeted advertising campaigns exclusively for high-value customers. It not only boosts e-shop sales but also minimizes advertising expenses effectively. The proposed method was tested on actual clickstream data from two e-commerce websites and showed that the personalized advertising campaign outperformed the non-personalized campaign in terms of click-through and conversion rates. In general, the findings suggest that personalized advertising scenarios can be a useful tool for boosting e-commerce sales while reducing advertising costs. By utilizing clickstream data and adopting a targeted approach, e-commerce businesses can attract and retain high-value customers, leading to higher revenue and profitability. Full article
(This article belongs to the Collection Feature Papers in Algorithms for Multidisciplinary Applications)
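The abstract names the ingredients of the CM index but not its exact form. Purely as an illustration, an engagement score combining those named factors might look like the following; the weights, the log transform, and the field names are all hypothetical, not the published index:

```python
import numpy as np

def customer_merit(clicks, items_viewed, purchases, session_minutes, visits):
    # HYPOTHETICAL CM-style score combining the factors the abstract names:
    # activity level, efficiency in selecting items, browsing time, and visit
    # frequency. The paper's actual formula and weights are not given here.
    activity = np.log1p(clicks)                      # diminishing returns on raw clicks
    efficiency = purchases / max(items_viewed, 1)    # selectivity of browsing
    tempo = session_minutes / max(visits, 1)         # average time per visit
    return activity * (1.0 + efficiency) + 0.1 * tempo

engaged = customer_merit(clicks=120, items_viewed=30, purchases=3,
                         session_minutes=45, visits=5)
casual = customer_merit(clicks=8, items_viewed=6, purchases=0,
                        session_minutes=4, visits=2)
```

Customers above a chosen CM threshold would then be the ones targeted by the personalized campaign.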
34 pages, 4687 KiB  
Article
Hybrid Sparrow Search-Exponential Distribution Optimization with Differential Evolution for Parameter Prediction of Solar Photovoltaic Models
by Amr A. Abd El-Mageed, Ayoub Al-Hamadi, Samy Bakheet and Asmaa H. Abd El-Rahiem
Algorithms 2024, 17(1), 26; https://doi.org/10.3390/a17010026 - 9 Jan 2024
Cited by 5 | Viewed by 1820
Abstract
It is difficult to determine unknown solar cell and photovoltaic (PV) module parameters owing to the nonlinearity of the characteristic current–voltage (I-V) curve. Despite this, precise parameter estimation is necessary due to the substantial effect the parameters have on the efficacy of the PV system with respect to current and energy output. These characteristics make optimization algorithms susceptible to local optima and resource-intensive processing. To effectively extract PV model parameter values, an improved hybrid Sparrow Search Algorithm (SSA) with Exponential Distribution Optimization (EDO), based on the Differential Evolution (DE) technique and the bound-constraint modification procedure and called ISSAEDO, is presented in this article. The hybrid strategy utilizes EDO to improve global exploration and SSA to effectively explore the solution space, while DE facilitates local search to improve parameter estimates. The proposed method is compared to standard optimization methods using solar PV system data to demonstrate its effectiveness and speed in obtaining PV model parameters for the single diode model (SDM) and the double diode model (DDM). The results indicate that the hybrid technique is a viable instrument for enhancing solar PV system design and performance analysis because it can predict PV model parameters accurately. Full article
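For reference, the single diode model whose parameters are being extracted can be evaluated directly, and the fitness a metaheuristic like ISSAEDO minimizes is typically the RMSE between modeled and measured currents. The parameter values and the simple fixed-point solver below are illustrative, not the paper's solver:

```python
import numpy as np

def sdm_current(v, iph, i0, rs, rsh, n, vt=0.02585, iters=60):
    # Single diode model, implicit in I:
    #   I = Iph - I0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rsh
    # Solved here by plain fixed-point iteration (adequate for small Rs).
    i = np.zeros_like(v)
    for _ in range(iters):
        i = iph - i0 * (np.exp((v + i * rs) / (n * vt)) - 1.0) - (v + i * rs) / rsh
    return i

v = np.linspace(0.0, 0.6, 7)
i_model = sdm_current(v, iph=8.0, i0=1e-9, rs=0.01, rsh=100.0, n=1.5)

def rmse(params, v_meas, i_meas):
    # Objective an optimizer would minimize over (Iph, I0, Rs, Rsh, n).
    return np.sqrt(np.mean((sdm_current(v_meas, *params) - i_meas) ** 2))
```

Near short circuit the modeled current approaches Iph, and it falls off as the voltage rises toward open circuit, reproducing the nonlinear I-V shape that makes the extraction problem hard.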
14 pages, 2547 KiB  
Article
Particle Swarm Optimization-Based Unconstrained Polygonal Fitting of 2D Shapes
by Costas Panagiotakis
Algorithms 2024, 17(1), 25; https://doi.org/10.3390/a17010025 - 7 Jan 2024
Viewed by 1845
Abstract
In this paper, we present a general version of the polygonal fitting problem called Unconstrained Polygonal Fitting (UPF). Our goal is to represent a given 2D shape S with an N-vertex polygonal curve P with a known number of vertices, so that the Intersection over Union (IoU) metric between S and P is maximized without any assumption or prior knowledge of the object structure and the location of the N-vertices of P that can be placed anywhere in the 2D space. The search space of the UPF problem is a superset of the classical polygonal approximation (PA) problem, where the vertices are constrained to lie on the boundary of the given 2D shape. Therefore, the resulting solutions of the UPF may better approximate the given curve than the solutions of the PA problem. For a given number of vertices N, a Particle Swarm Optimization (PSO) method is used to maximize the IoU metric, which yields almost optimal solutions. Furthermore, the proposed method has also been implemented under the equal area principle, so that the total area covered by P is equal to the area of the original 2D shape, to measure how this constraint affects the IoU metric. The quantitative results obtained on more than 2800 2D shapes included in two standard datasets quantify the performance of the proposed methods and illustrate that their solutions outperform baselines from the literature. Full article
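The fitness a PSO particle receives can be sketched as a rasterized IoU between the shape S and a candidate polygon P whose vertices are free to lie anywhere in the plane. The grid size, the disk shape, and the candidate triangle below are illustrative:

```python
import numpy as np

def polygon_mask(vertices, h, w):
    # Rasterize a polygon onto an h x w grid with the even-odd ray-casting
    # rule; vertices are (x, y) pairs and may lie anywhere in the plane.
    ys, xs = np.mgrid[0:h, 0:w]
    px, py = xs + 0.5, ys + 0.5        # pixel centers
    inside = np.zeros((h, w), dtype=bool)
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        crosses = (y1 <= py) != (y2 <= py)
        x_int = x1 + (py - y1) * (x2 - x1) / (y2 - y1 + 1e-12)
        inside ^= crosses & (px < x_int)
    return inside

def iou(a, b):
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

# Shape S: a filled disk; P: one candidate triangle a PSO particle might propose.
h = w = 64
ys, xs = np.mgrid[0:h, 0:w]
S = (xs - 32) ** 2 + (ys - 32) ** 2 <= 20 ** 2
P = polygon_mask([(32, 8), (8, 52), (56, 52)], h, w)
score = iou(S, P)   # the fitness PSO maximizes over the 2N vertex coordinates
```

PSO would treat the 2N vertex coordinates as a particle's position and move the swarm toward polygons with higher IoU, with no constraint tying vertices to the shape boundary.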
32 pages, 703 KiB  
Article
Entropy and the Kullback–Leibler Divergence for Bayesian Networks: Computational Complexity and Efficient Implementation
by Marco Scutari
Algorithms 2024, 17(1), 24; https://doi.org/10.3390/a17010024 - 6 Jan 2024
Viewed by 1463
Abstract
Bayesian networks (BNs) are a foundational model in machine learning and causal inference. Their graphical structure can handle high-dimensional problems, divide them into a sparse collection of smaller ones, underlies Judea Pearl’s causality, and determines their explainability and interpretability. Despite their popularity, there are almost no resources in the literature on how to compute Shannon’s entropy and the Kullback–Leibler (KL) divergence for BNs under their most common distributional assumptions. In this paper, we provide computationally efficient algorithms for both by leveraging BNs’ graphical structure, and we illustrate them with a complete set of numerical examples. In the process, we show it is possible to reduce the computational complexity of KL from cubic to quadratic for Gaussian BNs. Full article
(This article belongs to the Special Issue Bayesian Networks and Causal Reasoning)
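As an illustration of how the graphical structure helps, the Shannon entropy of a Gaussian BN decomposes into per-node conditional entropies, so the joint covariance matrix never needs to be formed. This is a textbook identity sketched on a two-node chain, not the paper's full algorithms:

```python
import math
import numpy as np

def gbn_entropy(cond_vars):
    # H(X) = sum_i 0.5 * log(2*pi*e * sigma_i^2), where sigma_i^2 is the
    # conditional variance of node i given its parents: one term per node,
    # linear in the network size.
    return sum(0.5 * math.log(2.0 * math.pi * math.e * v) for v in cond_vars)

# Two-node chain A -> B with A ~ N(0, 1) and B = 2A + eps, eps ~ N(0, 0.5).
# The implied joint covariance is [[1, 2], [2, 4.5]].
sigma = np.array([[1.0, 2.0], [2.0, 4.5]])
h_joint = 0.5 * math.log(((2.0 * math.pi * math.e) ** 2) * np.linalg.det(sigma))
h_local = gbn_entropy([1.0, 0.5])
```

The two computations agree, and the local form only ever touches the conditional variances stored in the BN's local distributions.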
15 pages, 3185 KiB  
Article
A New Approach to Identifying Sorghum Hybrids Using UAV Imagery Using Multispectral Signature and Machine Learning
by Dthenifer Cordeiro Santana, Gustavo de Faria Theodoro, Ricardo Gava, João Lucas Gouveia de Oliveira, Larissa Pereira Ribeiro Teodoro, Izabela Cristina de Oliveira, Fábio Henrique Rojo Baio, Carlos Antonio da Silva Junior, Job Teixeira de Oliveira and Paulo Eduardo Teodoro
Algorithms 2024, 17(1), 23; https://doi.org/10.3390/a17010023 - 5 Jan 2024
Cited by 3 | Viewed by 1667
Abstract
Using multispectral sensors attached to unmanned aerial vehicles (UAVs) can assist in the collection of morphological and physiological information from several crops. This approach, also known as high-throughput phenotyping, combined with data processing by machine learning (ML) algorithms, can provide fast, accurate, and large-scale discrimination of genotypes in the field, which is crucial for improving the efficiency of breeding programs. Despite their importance, studies aimed at accurately classifying sorghum hybrids using spectral variables as input sets in ML models are still scarce in the literature. Against this backdrop, this study aimed: (I) to discriminate sorghum hybrids based on canopy reflectance in different spectral bands (SB) and vegetation indices (VIs); (II) to evaluate the performance of ML algorithms in classifying sorghum hybrids; (III) to evaluate the best dataset input for the algorithms. A field experiment was carried out in the 2022 crop season in a randomized block design with three replications and six sorghum hybrids. At 60 days after crop emergence, a flight was carried out over the experimental area using the Sensefly eBee real-time kinematic. The spectral bands (SB) acquired by the sensor were: blue (475 nm, B_475), green (550 nm, G_550), red (660 nm, R_660), red edge (735 nm, RE_735), and NIR (790 nm, NIR_790). From the SB acquired, vegetation indices (VIs) were calculated. Data were submitted to ML classification analysis, in which three input settings (using only SB, using only VIs, and using SB + VIs) and six algorithms were tested: artificial neural networks (ANN), support vector machine (SVM), J48 decision trees (J48), random forest (RF), REPTree (DT), and logistic regression (LR, a conventional technique used as a control). There were differences in the spectral signature of each sorghum hybrid, which made it possible to differentiate them using SBs and VIs. The ANN algorithm performed best for the three accuracy metrics tested, regardless of the input used; in this case, using SB alone is feasible due to the speed and practicality of analyzing the data, as it does not require calculating the VIs. RF showed better accuracy when VIs were used as an input. The use of VIs provided the best performance for all the algorithms, as did the use of SB + VIs, which provided good performance for all the algorithms except RF. Using ML algorithms thus provides accurate identification of the hybrids (above 55 for CC, above 0.4 for kappa, and around 0.6 for F-score), with emphasis on artificial neural networks using only spectral bands as inputs and random forest using vegetation indices as inputs. Full article
18 pages, 5560 KiB  
Article
Towards Full Forward On-Tiny-Device Learning: A Guided Search for a Randomly Initialized Neural Network
by Danilo Pau, Andrea Pisani and Antonio Candelieri
Algorithms 2024, 17(1), 22; https://doi.org/10.3390/a17010022 - 5 Jan 2024
Viewed by 1829
Abstract
In the context of TinyML, many research efforts have been devoted to designing forward topologies to support On-Device Learning. Reaching this target would bring numerous advantages, including reductions in latency and computational complexity, stronger privacy, data safety and robustness to adversarial attacks, higher resilience against concept drift, etc. However, On-Device Learning on resource-constrained devices faces severe limitations in computational power and memory. Therefore, deploying Neural Networks on tiny devices appears to be prohibitive, since their backpropagation-based training is too memory-demanding for their embedded assets. Using Extreme Learning Machines based on Convolutional Neural Networks might be feasible and very convenient, especially for Feature Extraction tasks. However, it requires searching for a randomly initialized topology that achieves results as good as those achieved by the backpropagated model. This work proposes a novel approach for automatically composing an Extreme Convolutional Feature Extractor, based on Neural Architecture Search and Bayesian Optimization. It was applied to the CIFAR-10 and MNIST datasets for evaluation. Two search spaces have been defined, as well as a search strategy that has been tested with two surrogate models, Gaussian Process and Random Forest. A performance estimation strategy was defined, keeping the feature set computed by the MLCommons-Tiny benchmark ResNet as a reference model. In as few as 1200 search iterations, the proposed strategy was able to achieve a topology whose extracted features scored a mean square error equal to 0.64 compared to the reference set. Further improvements are required, with a target of at least one order of magnitude decrease in mean square error for improved classification accuracy. The code is made available via GitHub to allow for the reproducibility of the results reported in this paper. Full article
38 pages, 10361 KiB  
Article
Exploring the Use of Artificial Intelligence in Agent-Based Modeling Applications: A Bibliometric Study
by Ștefan Ionescu, Camelia Delcea, Nora Chiriță and Ionuț Nica
Algorithms 2024, 17(1), 21; https://doi.org/10.3390/a17010021 - 3 Jan 2024
Cited by 9 | Viewed by 4242
Abstract
This research provides a comprehensive analysis of the dynamic interplay between agent-based modeling (ABM) and artificial intelligence (AI) through a meticulous bibliometric study. This study reveals a substantial increase in scholarly interest, particularly post-2006, peaking in 2021 and 2022, indicating a contemporary surge in research on the synergy between AI and ABM. Temporal trends and fluctuations prompt questions about influencing factors, potentially linked to technological advancements or shifts in research focus. The sustained increase in citations per document per year underscores the field’s impact, with the 2021 peak suggesting cumulative influence. Reference Publication Year Spectroscopy (RPYS) reveals historical patterns, and the recent decline prompts exploration into shifts in research focus. Lotka’s law is reflected in the authors’ contributions, supported by a Pareto analysis. Journal diversity signals extensive exploration of AI applications in ABM. Identifying impactful journals and clustering them per Bradford’s Law provides insights for researchers. Global scientific production dominance and regional collaboration maps emphasize the worldwide landscape. Despite acknowledging limitations, such as citation lag and interdisciplinary challenges, our study offers a global perspective with implications for future research and as a resource in the evolving AI and ABM landscape. Full article
(This article belongs to the Special Issue Artificial Intelligence in Modeling and Simulation)
14 pages, 2779 KiB  
Communication
Algorithms for Fractional Dynamical Behaviors Modelling Using Non-Singular Rational Kernels
by Jocelyn Sabatier and Christophe Farges
Algorithms 2024, 17(1), 20; https://doi.org/10.3390/a17010020 - 31 Dec 2023
Cited by 1 | Viewed by 1286
Abstract
This paper proposes algorithms to model fractional (dynamical) behaviors using non-singular rational kernels, whose usefulness is first demonstrated on a pure power-law function. Two algorithms are then proposed to find a non-singular rational kernel that allows the input-output data to be fitted. The first one derives the impulse response of the modeled system from the data. The second one finds the interlaced poles and zeros of the rational function that fits the impulse response found using the first algorithm. Several applications show the efficiency of the proposed approach. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
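The motivating idea — that a non-singular kernel built from exponential modes can mimic a singular power-law memory kernel over a finite time range — can be sketched with a least-squares fit. The mode count, rate spacing, and time range are illustrative; the paper's algorithms fit interlaced poles and zeros from input-output data rather than using a fixed rate grid:

```python
import numpy as np

# Approximate the singular power-law kernel t^(-0.5) on [1e-2, 1e2] by a
# non-singular sum of exponential modes: sum_k a_k * exp(-b_k * t).
t = np.logspace(-2, 2, 200)
target = t ** -0.5

b = np.logspace(-2, 2, 12)            # geometrically spaced decay rates
M = np.exp(-np.outer(t, b))           # design matrix M[i, k] = exp(-b_k * t_i)

# Weighted least squares (rows scaled by 1/target) to control relative error.
a, *_ = np.linalg.lstsq(M / target[:, None], np.ones_like(t), rcond=None)
rel_err = np.abs(M @ a - target) / target
```

Unlike the power law, the exponential sum stays finite at t = 0, which is exactly the non-singularity the rational kernels provide.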