Algorithms, Volume 18, Issue 9 (September 2025) – 41 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
28 pages, 16146 KB  
Article
A Smooth-Delayed Phase-Type Mixture Model for Human-Driven Process Duration Modeling
by Dongwei Wang, Sally McClean, Lingkai Yang, Ian McChesney and Zeeshan Tariq
Algorithms 2025, 18(9), 575; https://doi.org/10.3390/a18090575 - 11 Sep 2025
Abstract
Activities in business processes primarily depend on human behavior for completion. Due to human agency, the behavior underlying individual activities may occur in multiple phases and can vary in execution. As a result, the execution duration and nature of such activities may exhibit complex multimodal characteristics. Phase-type distributions are useful for analyzing the underlying behavioral structure, which may consist of multiple sub-activities. The phenomenon of delayed start is also common in such activities, possibly due to the minimum task completion time or prerequisite tasks. As a result, the distribution of durations or certain components does not start at zero but has a minimum value, and the probability below this value is zero. When using phase-type models to fit such distributions, a large number of phases is often required, exceeding the actual number of sub-activities. This reduces the interpretability of the parameters and may also lead to optimization difficulties due to overparameterization. In this paper, we propose a smooth-delayed phase-type mixture model that introduces delay parameters to address the difficulty of fitting this kind of distribution. Since durations shorter than the delay should have zero probability, such hard truncation renders the parameter not estimable under the Expectation–Maximization (EM) framework. To overcome this, we design a soft-truncation mechanism to improve model convergence. We further develop an inference framework that combines the EM algorithm, Bayesian inference, and Sequential Least Squares Programming for comprehensive and efficient parameter estimation. The method is validated on a synthetic dataset and two real-world datasets. Results demonstrate that the proposed approach achieves performance comparable to purely data-driven methods while providing good interpretability, revealing the potential underlying structure behind human-driven activities. Full article
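To make the soft-truncation idea concrete, the following minimal sketch (not the authors' code) gates a delay-shifted exponential component, the simplest phase-type distribution, with a sigmoid instead of a hard indicator, so the delay parameter stays differentiable for EM-style updates; all function and parameter names are illustrative.

```python
# Illustrative sketch: soft truncation of a delayed density component.
# A hard delay d would multiply the density by 1[t >= d], which is not
# differentiable in d; a sigmoid gate smooths that cutoff.
import numpy as np

def soft_truncated_exp_pdf(t, rate, delay, sharpness=50.0):
    """Exponential density shifted by `delay`, gated by a smooth step.

    `sharpness` controls how closely the sigmoid approximates the hard
    indicator 1[t >= delay]; the parameterization is illustrative only.
    """
    t = np.asarray(t, dtype=float)
    gate = 1.0 / (1.0 + np.exp(-sharpness * (t - delay)))   # soft 1[t >= delay]
    shifted = rate * np.exp(-rate * np.clip(t - delay, 0.0, None))
    return gate * shifted

# Example: durations below the ~2.0 time-unit delay get (near-)zero density.
t = np.linspace(0.0, 10.0, 6)
print(soft_truncated_exp_pdf(t, rate=0.8, delay=2.0))
```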
27 pages, 1844 KB  
Article
A Quantum Frequency-Domain Framework for Image Transmission with Three-Qubit Error Correction
by Udara Jayasinghe, Thanuj Fernando and Anil Fernando
Algorithms 2025, 18(9), 574; https://doi.org/10.3390/a18090574 - 11 Sep 2025
Abstract
Quantum communication enables high-fidelity image transmission but is vulnerable to channel noise, and while advanced quantum error correction (QEC) can reduce such effects, its complexity and time-domain dependence limit practical efficiency. This paper presents a novel, low-complexity, and noise-resilient quantum image transmission framework that operates in the frequency domain using the quantum Fourier transform (QFT) combined with the three-qubit QEC code. In the proposed system, input images are first source-encoded (JPEG/HEIF) and mapped to quantum states using single-qubit superposition encoding. Three-qubit QEC is then applied for channel protection, effectively safeguarding the encoded data against quantum errors. The channel-encoded quantum data are subsequently transformed via QFT for transmission over noisy quantum channels. At the receiver, the inverse QFT recovers the frequency-domain representation, after which three-qubit error correction, quantum decoding, and corresponding source decoding are performed to reconstruct the image. Results are analyzed using bit error rate (BER), peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and universal quality index (UQI). Experimental results show that the proposed quantum frequency-domain approach achieves up to 4 dB channel SNR gain over equivalent quantum time-domain methods and up to 10 dB over an equivalent-bandwidth classical communication system, regardless of the image format. These findings highlight the practical advantages of integrating QFT-based transmission with lightweight QEC, offering an efficient, scalable, and noise-tolerant solution for future quantum communication networks. Full article
(This article belongs to the Section Combinatorial Optimization, Graph, and Network Algorithms)
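The two building blocks named in the abstract can be illustrated in a few lines; the toy sketch below constructs the QFT as a unitary matrix and decodes the three-qubit repetition (bit-flip) code by majority vote. It is illustrative only, not the paper's transmission pipeline.

```python
# Toy sketch of two ingredients named in the abstract: the quantum Fourier
# transform as a unitary matrix, and majority-vote decoding of the 3-qubit
# repetition (bit-flip) code.
import numpy as np

def qft_matrix(n_qubits):
    """Unitary QFT matrix of size 2^n x 2^n: F[j, k] = exp(2*pi*i*j*k/N)/sqrt(N)."""
    N = 2 ** n_qubits
    j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return np.exp(2j * np.pi * j * k / N) / np.sqrt(N)

def majority_decode(bits):
    """Decode one logical bit from a 3-bit repetition codeword by majority vote."""
    return int(sum(bits) >= 2)

F = qft_matrix(2)
print(np.allclose(F.conj().T @ F, np.eye(4)))   # True: the QFT is unitary
print(majority_decode([1, 0, 1]))               # 1: a single bit flip is corrected
```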

32 pages, 1828 KB  
Review
Recent Trends in the Optimization of Logistics Systems Through Discrete-Event Simulation and Deep Learning
by Róbert Skapinyecz
Algorithms 2025, 18(9), 573; https://doi.org/10.3390/a18090573 - 11 Sep 2025
Abstract
The main objective of the study is to present the latest trends and research directions in the field of optimization of logistics systems with Discrete-Event Simulation (DES) and Deep Learning (DL). This research area is highly relevant from several aspects: on the one hand, in the modern Industry 4.0 concept, simulation tools, especially Discrete-Event Simulations, are increasingly used for the modelling of material flow processes; on the other hand, the use of Artificial Intelligence (AI)—especially Deep Neural Networks (DNNs)—to evaluate the results of the former significantly enhances the potential applicability and effectiveness of such simulations. At the same time, the results obtained from Discrete-Event Simulations can also be used as synthetic datasets for the training of DNNs, which creates entirely new opportunities for both scientific research and practical applications. As a result, the interest in the combination of Discrete-Event Simulation with Deep Learning in the field of logistics has significantly increased in the recent period, giving rise to multiple different approaches. The main contribution of the current paper is that, through a review of the relevant literature, it provides an overview and systematization of the state-of-the-art methods and approaches in this developing field. Based on the results of the literature review, the study also presents the evolution of the research trends and identifies the most important research gaps in the field. Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)

34 pages, 21994 KB  
Article
Multimodal Video Summarization Using Machine Learning: A Comprehensive Benchmark of Feature Selection and Classifier Performance
by Elmin Marevac, Esad Kadušić, Nataša Živić, Nevzudin Buzađija, Edin Tabak and Safet Velić
Algorithms 2025, 18(9), 572; https://doi.org/10.3390/a18090572 - 10 Sep 2025
Abstract
The exponential growth of user-generated video content necessitates efficient summarization systems for improved accessibility, retrieval, and analysis. This study presents and benchmarks a multimodal video summarization framework that classifies segments as informative or non-informative using audio, visual, and fused features. Sixty hours of annotated video across ten diverse categories were analyzed. Audio features were extracted with pyAudioAnalysis, while visual features (colour histograms, optical flow, object detection, facial recognition) were derived using OpenCV. Six supervised classifiers—Naive Bayes, K-Nearest Neighbors, Logistic Regression, Decision Tree, Random Forest, and XGBoost—were evaluated, with hyperparameters optimized via grid search. Temporal coherence was enhanced using median filtering. Random Forest achieved the best performance, with 74% AUC on fused features and a 3% F1-score gain after post-processing. Spectral flux, grayscale histograms, and optical flow emerged as key discriminative features. The best model was deployed as a practical web service using TensorFlow and Flask, integrating informative segment detection with subtitle generation via beam search to ensure coherence and coverage. System-level evaluation demonstrated low latency and efficient resource utilization under load. Overall, the results confirm the strength of multimodal fusion and ensemble learning for video summarization and highlight their potential for real-world applications in surveillance, digital archiving, and online education. Full article
(This article belongs to the Special Issue Visual Attributes in Computer Vision Applications)
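A minimal sketch of the classification and temporal-smoothing stage described above, using synthetic stand-ins for the fused audio-visual features (the pyAudioAnalysis/OpenCV feature extraction itself is omitted):

```python
# A Random Forest labels each segment's fused feature vector as informative (1)
# or not (0), and median filtering removes isolated flips to enforce temporal
# coherence. Data here are synthetic placeholders.
import numpy as np
from scipy.signal import medfilt
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))                 # stand-in for fused audio+visual features
y = (X[:, 0] + 0.3 * rng.normal(size=200) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[:150], y[:150])
raw = clf.predict(X[150:])                     # per-segment labels in temporal order
smoothed = medfilt(raw, kernel_size=5)         # post-processing step from the abstract
print(raw[:10], smoothed[:10])
```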

12 pages, 1553 KB  
Article
Enhancing Wireless Sensor Networks with Bluetooth Low-Energy Mesh and Ant Colony Optimization Algorithm
by Hussein S. Mohammed, Hayam K. Mustafa and Omar A. Abdulkareem
Algorithms 2025, 18(9), 571; https://doi.org/10.3390/a18090571 - 10 Sep 2025
Abstract
Wireless Sensor Networks (WSNs) face persistent challenges of uneven energy depletion, limited scalability, and reduced network lifetime, all of which hinder their effectiveness in Internet of Things (IoT) applications. This paper introduces a hybrid framework that integrates Bluetooth Low-Energy (BLE) mesh networking with Ant Colony Optimization (ACO) to deliver energy-aware, adaptive routing over a standards-compliant mesh fabric. BLE mesh contributes a resilient many-to-many topology with Friend/Low-Power Node roles that minimize idle listening, while ACO dynamically selects next hops based on residual energy, distance, and link quality to balance load and prevent hot spots. Using large-scale simulations with 1000 nodes over a 1000 × 1000 m field, the proposed BLE-ACO system reduced overall energy consumption by approximately 35%, extended network lifetime by 40%, and improved throughput by 25% compared with conventional BLE forwarding, while also surpassing a LEACH-like clustering baseline. Confidence interval analysis confirmed the statistical robustness of these results. The findings demonstrate that BLE-ACO is a scalable, sustainable, and standards-aligned solution for energy-constrained IoT deployments, particularly in smart cities, industrial automation, and environmental monitoring, where long-term performance and adaptability are critical. Full article
(This article belongs to the Section Combinatorial Optimization, Graph, and Network Algorithms)
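The energy-aware next-hop rule can be sketched as a standard ACO selection probability; the particular weighting of residual energy, distance, and link quality below is an illustrative assumption, not the paper's exact formula.

```python
# Illustrative ACO-style next-hop selection: the probability of choosing a
# neighbour combines pheromone with a heuristic built from residual energy,
# distance, and link quality. Field names and weights are assumptions.
import random

def choose_next_hop(neighbours, alpha=1.0, beta=2.0):
    """neighbours: list of dicts with 'pheromone', 'energy', 'distance', 'link_quality'."""
    scores = []
    for nb in neighbours:
        heuristic = nb["energy"] * nb["link_quality"] / max(nb["distance"], 1e-9)
        scores.append((nb["pheromone"] ** alpha) * (heuristic ** beta))
    total = sum(scores)
    probs = [s / total for s in scores]
    return random.choices(range(len(neighbours)), weights=probs, k=1)[0]

neighbours = [
    {"pheromone": 0.6, "energy": 0.9, "distance": 40.0, "link_quality": 0.8},
    {"pheromone": 0.9, "energy": 0.4, "distance": 25.0, "link_quality": 0.7},
]
print(choose_next_hop(neighbours))
```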

18 pages, 9177 KB  
Article
Understanding Physiological Responses for Intelligent Posture and Autonomic Response Detection Using Wearable Technology
by Chaitanya Vardhini Anumula, Tanvi Banerjee and William Lee Romine
Algorithms 2025, 18(9), 570; https://doi.org/10.3390/a18090570 - 10 Sep 2025
Abstract
This study investigates how Iyengar yoga postures influence autonomic nervous system (ANS) activity by analyzing multimodal physiological signals collected via wearable sensors. The goal was to explore whether subtle postural variations elicit measurable autonomic responses and to identify which sensor features most effectively capture these changes. Participants performed a sequence of yoga poses while wearing synchronized sensors measuring electrodermal activity (EDA), heart rate variability, skin temperature, and motion. Interpretable machine learning models, including linear classifiers, were trained to distinguish physiological states and rank feature relevance. The results revealed that even minor postural adjustments led to significant shifts in ANS markers, with phasic EDA and RR interval features showing heightened sensitivity. Surprisingly, micro-movements captured via accelerometry and transient electrodermal reactivity, specifically EDA peak-to-RMS ratios, emerged as dominant contributors to classification performance. These findings suggest that small-scale kinematic and autonomic shifts, which are often overlooked, play a central role in the physiological effects of yoga. The study demonstrates that wearable sensor analytics can decode a more nuanced and granular physiological profile of mind–body practices than traditionally appreciated, offering a foundation for precision-tailored biofeedback systems and advancing objective approaches to yoga-based interventions. Full article
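A minimal sketch of the EDA peak-to-RMS ratio feature highlighted above (windowing and phasic decomposition are omitted):

```python
# Peak-to-RMS ratio of a phasic EDA window: the maximum absolute amplitude
# divided by the root-mean-square value. The window below is a toy example.
import numpy as np

def peak_to_rms(signal):
    signal = np.asarray(signal, dtype=float)
    rms = np.sqrt(np.mean(signal ** 2))
    return np.max(np.abs(signal)) / rms if rms > 0 else 0.0

window = np.array([0.01, 0.02, 0.15, 0.40, 0.22, 0.05, 0.02])  # toy phasic EDA window
print(round(peak_to_rms(window), 3))
```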

18 pages, 2596 KB  
Article
Research on CNC Machine Tool Spindle Fault Diagnosis Method Based on Deep Residual Shrinkage Network with Dynamic Convolution and Selective Kernel Attention Model
by Xiaoxu Li, Jixuan Wang, Jianqiang Wang, Jiahao Wang, Jiamin Liu, Jiaming Chen and Xuelian Yu
Algorithms 2025, 18(9), 569; https://doi.org/10.3390/a18090569 - 9 Sep 2025
Abstract
Rolling bearing vibration signals are often severely affected by strong external noise, which can obscure fault-related features and hinder accurate diagnosis. To address this challenge, this paper proposes an enhanced Deep Residual Shrinkage Network with Dynamic Convolution and Selective Kernel Attention (DDRSN-SKA). First, one-dimensional vibration signals are converted into two-dimensional time–frequency images using the Continuous Wavelet Transform (CWT), providing richer input representations. Then, a dynamic convolution module is introduced to adaptively adjust kernel weights based on the input, enabling the network to better extract salient features. To improve feature discrimination, a Selective Kernel Attention (SKAttention) module is incorporated into the intermediate layers of the network. By applying a multi-receptive field channel attention mechanism, the network can emphasize critical information and suppress irrelevant features. The final classification layer determines the fault types. Experiments conducted on both the Case Western Reserve University (CWRU) dataset and a laboratory-collected bearing dataset demonstrate that DDRSN-SKA achieves diagnostic accuracies of 98.44% and 94.44% under −8 dB Gaussian and Laplace noise, respectively. These results confirm the model’s strong noise robustness and its suitability for fault diagnosis in noisy industrial environments. Full article
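The first preprocessing step, converting a vibration signal into a time-frequency image with the CWT, might look roughly as follows; this assumes the PyWavelets package, and the wavelet and scale choices are illustrative rather than the authors' settings.

```python
# Sketch: 1-D vibration signal -> 2-D time-frequency image via the continuous
# wavelet transform. Wavelet ('morl'), scales, and sampling rate are illustrative.
import numpy as np
import pywt

fs = 12_000                                    # sampling rate (Hz), illustrative
t = np.arange(0, 0.1, 1 / fs)
signal = np.sin(2 * np.pi * 500 * t) + 0.5 * np.random.randn(t.size)

scales = np.arange(1, 65)
coeffs, freqs = pywt.cwt(signal, scales, "morl", sampling_period=1 / fs)
scalogram = np.abs(coeffs)                     # 2-D array: scales x time samples
print(scalogram.shape)                         # e.g. (64, 1200), fed to the CNN as an image
```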

11 pages, 474 KB  
Article
Secant-Type Iterative Classes for Nonlinear Equations with Multiple Roots
by Francisco I. Chicharro, Neus Garrido-Saez and Julissa H. Jerezano
Algorithms 2025, 18(9), 568; https://doi.org/10.3390/a18090568 - 9 Sep 2025
Abstract
General-purpose iterative methods for solving nonlinear equations provide approximate solutions to problems that lack closed-form solutions. However, these methods lose some properties when the problems have multiple roots or are not differentiable, in which case specific methods are used. Moreover, in most problems the multiplicity of the root is unknown, which reduces the range of applicable methods. In this work we propose two iterative classes with memory for solving multiple-root nonlinear equations without knowing the multiplicity. One of the proposals includes derivatives, while the other is derivative-free, obtained from the first using divided differences and a parameter in its iterative expression. The order of convergence of the proposed schemes is analyzed. The stability of the methods is studied using real dynamics, showing their good behavior. A numerical benchmark confirms the theoretical study. Full article
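For orientation, the classical secant iteration that such derivative-free classes build on replaces the derivative with a divided difference and reuses the previous iterate, i.e., a method with memory; the sketch below shows that baseline scheme, not the authors' multiple-root variants.

```python
# Textbook secant iteration: the derivative is replaced by the divided
# difference (f(x_k) - f(x_{k-1})) / (x_k - x_{k-1}), so each step reuses the
# previous iterate ("memory"). Shown only as the classical baseline.
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 - f0 == 0:
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)   # divided-difference Newton-like step
        if abs(x2 - x1) < tol:
            return x2
        x0, f0, x1, f1 = x1, f1, x2, f(x2)
    return x1

print(secant(lambda x: x**3 - 2*x - 5, 2.0, 3.0))   # root near 2.0946
```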

27 pages, 1902 KB  
Article
Few-Shot Breast Cancer Diagnosis Using a Siamese Neural Network Framework and Triplet-Based Loss
by Tea Marasović and Vladan Papić
Algorithms 2025, 18(9), 567; https://doi.org/10.3390/a18090567 - 8 Sep 2025
Abstract
Breast cancer is one of the leading causes of death among women of all ages and backgrounds globally. In recent years, the growing deficit of expert radiologists—particularly in underdeveloped countries—alongside a surge in the number of images for analysis, has negatively affected the ability to secure timely and precise diagnostic results in breast cancer screening. AI technologies offer powerful tools that allow for the effective diagnosis and survival forecasting, reducing the dependency on human cognitive input. Towards this aim, this research introduces a deep meta-learning framework for swift analysis of mammography images—combining a Siamese network model with a triplet-based loss function—to facilitate automatic screening (recognition) of potentially suspicious breast cancer cases. Three pre-trained deep CNN architectures, namely GoogLeNet, ResNet50, and MobileNetV3, are fine-tuned and scrutinized for their effectiveness in transforming input mammograms to a suitable embedding space. The proposed framework undergoes a comprehensive evaluation through a rigorous series of experiments, utilizing two different, publicly accessible, and widely used datasets of digital X-ray mammograms: INbreast and CBIS-DDSM. The experimental results demonstrate the framework’s strong performance in differentiating between tumorous and normal images, even with a very limited number of training samples, on both datasets. Full article
(This article belongs to the Special Issue Machine Learning for Pattern Recognition (3rd Edition))
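A minimal sketch of the triplet objective used to train the Siamese embedding, with random vectors standing in for the CNN outputs (GoogLeNet/ResNet50/MobileNetV3) mentioned in the abstract:

```python
# Triplet loss: the anchor-positive distance should be smaller than the
# anchor-negative distance by at least a margin. Embeddings here are random
# stand-ins for the fine-tuned CNN outputs.
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, d_pos - d_neg + margin)

rng = np.random.default_rng(1)
a, p, n = rng.normal(size=(3, 128))            # three 128-D embedding vectors
print(triplet_loss(a, p, n))
```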

17 pages, 2625 KB  
Article
Improved Active Disturbance Rejection Speed Tracking Control for High-Speed Trains Based on SBWO Algorithm
by Chuanfang Xu, Chengyu Zhang, Mingxia Xu, Jiaqing Chen, Longda Wang and Zhaoyu Han
Algorithms 2025, 18(9), 566; https://doi.org/10.3390/a18090566 - 8 Sep 2025
Abstract
To address the problems of random noise interference, inadequate disturbance estimation and compensation, and the difficulty in controller parameter tuning in speed tracking control of high-speed trains, an improved Active Disturbance Rejection Control (ADRC) strategy combined with a Sobol-based Black Widow Optimization (SBWO) algorithm is proposed. An improved Tracking Differentiator (TD) is adopted by integrating a novel optimal control synthesis function with a phase compensator to suppress input noise and ensure a smooth transition process. A novel Extended State Observer (ESO) using a nonlinear saturation function is designed to improve the observation accuracy and decrease chattering. An enhanced Nonlinear State Error Feedback (NLSEF) law that incorporates an error integral and adaptive parameter update laws is developed to reduce steady-state error and achieve self-tuned proportional and derivative gains. A feedforward compensation term is added to provide real-time dynamic compensation for ESO estimation errors. Finally, an enhanced Black Widow Optimization (BWO) algorithm, which initializes its population with Sobol sequences to improve its global search capability, is employed for parameter optimization. The simulation results demonstrate that compared with the control methods based on Proportional–Integral–Derivative (PID) control and conventional ADRC, the proposed strategy achieves higher steady-state tracking accuracy, better adaptability to dynamic operating conditions, stronger anti-disturbance ability, and higher stopping precision. Full article
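The Sobol-sequence population initialization that distinguishes SBWO from a standard random start can be sketched with SciPy's quasi-Monte Carlo module (SciPy 1.7 or later); the bounds and population size below are illustrative.

```python
# Sobol-sequence population initialization: a low-discrepancy sample is scaled
# into the search bounds instead of uniform random points.
import numpy as np
from scipy.stats import qmc

def sobol_population(n_individuals, lower, upper, seed=0):
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    sampler = qmc.Sobol(d=lower.size, scramble=True, seed=seed)
    unit = sampler.random(n_individuals)        # points in [0, 1)^d, low discrepancy
    return qmc.scale(unit, lower, upper)        # map into the controller parameter bounds

pop = sobol_population(16, lower=[0.1, 1.0, 10.0], upper=[10.0, 500.0, 5000.0])
print(pop.shape)                                # (16, 3)
```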

15 pages, 843 KB  
Article
Extended von Bertalanffy Equation in Solow Growth Modelling
by Antonio E. Bargellini, Daniele Ritelli and Giulia Spaletta
Algorithms 2025, 18(9), 565; https://doi.org/10.3390/a18090565 - 7 Sep 2025
Abstract
The aim of this work is to model the growth of an economic system and, in particular, the evolution of capital accumulation over time, analysing the feasibility of a closed-form solution to the initial value problem that governs the capital-per-capita dynamics. The latter are related to the labour-force dynamics, which are assumed to follow a von Bertalanffy model, studied in the literature in its simplest form and for which the existence of an exact solution, in terms of hypergeometric functions, is known. Here, we consider an extended form of the von Bertalanffy equation, which we make dependent on two parameters, rather than the single-parameter model known in the literature, to better capture the features that a reliable economic growth model should possess. Furthermore, we allow one of the two parameters to vary over time, making it dependent on a periodic function to account for seasonality. We prove that the two-parameter model admits an exact solution, in terms of hypergeometric functions, when both parameters are constant. In the time-varying case, although it is not possible to obtain a closed-form solution, we are able to find two exact solutions that closely bound, from below and from above, the desired one, as well as its numerical approximation. The presented models are implemented in the Mathematica environment, where simulations, parameter sensitivity analyses and comparisons with the known single-parameter model are also performed, validating our findings. Full article
(This article belongs to the Section Analysis of Algorithms and Complexity Theory)
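For orientation, one common statement of the single-parameter von Bertalanffy growth law and the Solow capital-per-capita equation it feeds into is given below; the paper's two-parameter, time-varying extension is not reproduced here.

```latex
% Standard forms from the growth literature, for orientation only; the paper's
% two-parameter extension with a periodic, time-varying coefficient is not shown.
\begin{align}
  \dot{L}(t) &= a\,L(t)^{2/3} - b\,L(t)
    && \text{(von Bertalanffy-type labour-force growth)}\\
  \dot{k}(t) &= s\,f\bigl(k(t)\bigr) - \Bigl(\delta + \tfrac{\dot{L}(t)}{L(t)}\Bigr)\,k(t)
    && \text{(Solow capital per capita, } k = K/L \text{)}
\end{align}
```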

30 pages, 6483 KB  
Article
The Generative Adversarial Approach: A Cautionary Tale of Finite Samples
by Marcos Escobar-Anel and Yiyao Jiao
Algorithms 2025, 18(9), 564; https://doi.org/10.3390/a18090564 - 5 Sep 2025
Abstract
Given the relevance and wide use of the Generative Adversarial (GA) methodology, this paper focuses on finite samples to better understand its benefits and pitfalls. We focus on its finite-sample properties from both statistical and numerical perspectives. We set up a simple and ideal “controlled experiment” where the input data are an i.i.d. Gaussian series where the mean is to be learned, and the discriminant and generator are in the same distributional family, not a neural network (NN), as in the popular GAN. We show that, even with the ideal discriminant, the classical GA methodology delivers a biased estimator while producing multiple local optima, confusing numerical methods. The situation worsens when the discriminator is in the correct parametric family but is not the oracle, leading to the absence of a saddle point. To improve the quality of the estimators within the GA method, we propose an alternative loss function, the alternative GA method, that leads to a unique saddle point with better statistical properties. Our findings are intended to start a conversation on the potential pitfalls of GA and GAN methods. In this spirit, the ideas presented here should be explored in other distributional cases and will be extended to the actual use of an NN for discriminators and generators. Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
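The controlled experiment can be written down as the classical finite-sample generative-adversarial objective restricted to a parametric family; the notation below is illustrative, and the paper's exact parameterization may differ.

```latex
% Finite-sample GA objective specialized to the abstract's setting: i.i.d.
% Gaussian data, a generator G_theta in the same Gaussian family, and a
% discriminator D_phi restricted to a parametric family (not a neural network).
\begin{equation}
  \min_{\theta}\,\max_{\phi}\;
  \frac{1}{n}\sum_{i=1}^{n}\log D_{\phi}(x_i)
  \;+\;
  \frac{1}{m}\sum_{j=1}^{m}\log\bigl(1 - D_{\phi}(G_{\theta}(z_j))\bigr),
  \qquad x_i \overset{\text{i.i.d.}}{\sim} \mathcal{N}(\mu,\sigma^2).
\end{equation}
```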

28 pages, 8417 KB  
Article
Democratizing IoT for Smart Irrigation: A Cost-Effective DIY Solution Proposal Evaluated in an Actinidia Orchard
by David Pascoal, Telmo Adão, Agnieszka Chojka, Nuno Silva, Sandra Rodrigues, Emanuel Peres and Raul Morais
Algorithms 2025, 18(9), 563; https://doi.org/10.3390/a18090563 - 5 Sep 2025
Abstract
Proper management of water resources in agriculture is of utmost importance for sustainable productivity, especially under the current context of climate change. However, many smart agriculture systems, including for managing irrigation, involve costly, complex tools for most farmers, especially small/medium-scale producers, despite the availability of user-friendly and community-accessible tools supported by well-established providers (e.g., Google). Hence, this paper proposes an irrigation management system integrating low-cost Internet of Things (IoT) sensors with community-accessible cloud-based data management tools. Specifically, it resorts to sensors managed by an ESP32 development board to monitor several agroclimatic parameters and employs Google Sheets for data handling, visualization, and decision support, assisting operators in carrying out proper irrigation procedures. To ensure reproducibility for both digital experts but mainly non-technical professionals, a comprehensive set of guidelines is provided for the assembly and configuration of the proposed irrigation management system, aiming to promote a democratized dissemination of key technical knowledge within a do-it-yourself (DIY) paradigm. As part of this contribution, a market survey identified numerous e-commerce platforms that offer the required components at competitive prices, enabling the system to be affordably replicated. Furthermore, an irrigation management prototype was tested in a real production environment, consisting of a 2.4-hectare yellow kiwi orchard managed by an association of producers from July to September 2021. Significant resource reductions were achieved by using low-cost IoT devices for data acquisition and the capabilities of accessible online tools like Google Sheets. Specifically, for this study, irrigation periods were reduced by 62.50% without causing water deficits detrimental to the crops’ development. Full article

28 pages, 2702 KB  
Article
An Overview of the Euler-Type Universal Numerical Integrator (E-TUNI): Applications in Non-Linear Dynamics and Predictive Control
by Paulo M. Tasinaffo, Gildárcio S. Gonçalves, Johnny C. Marques, Luiz A. V. Dias and Adilson M. da Cunha
Algorithms 2025, 18(9), 562; https://doi.org/10.3390/a18090562 - 4 Sep 2025
Abstract
A Universal Numerical Integrator (UNI) is a computational framework that combines a classical numerical integration method, such as Euler, Runge–Kutta, or Adams–Bashforth, with a universal approximator of functions, such as a feed-forward neural network (including MLP, SVM, RBF, among others) or a fuzzy inference system. The Euler-Type Universal Numerical Integrator (E–TUNI) is a particular case of UNI based on the first-order Euler integrator and is designed to model non-linear dynamic systems observed in real-world scenarios accurately. The UNI framework can be organized into three primary methodologies: the NARMAX model (Non-linear AutoRegressive Moving Average with eXogenous input), the mean derivatives approach (which characterizes E–TUNI), and the instantaneous derivatives approach. The E–TUNI methodology relies exclusively on mean derivative functions, distinguishing it from techniques that employ instantaneous derivatives. Although it is based on a first-order scheme, the E–TUNI achieves an accuracy level comparable to that of higher-order integrators. This performance is made possible by the incorporation of a neural network acting as a universal approximator, which significantly reduces the approximation error. This article provides a comprehensive overview of the E–TUNI methodology, focusing on its application to the modeling of non-linear autonomous dynamic systems and its use in predictive control. Several computational experiments are presented to illustrate and validate the effectiveness of the proposed method. Full article
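The defining update of E-TUNI, as described in the abstract, is a first-order Euler step driven by a learned mean derivative:

```latex
% Euler-type update at the core of E-TUNI: a universal approximator supplies
% the mean derivative over each step, so a first-order scheme can approach the
% accuracy of higher-order integrators. The exogenous input u_k covers the
% predictive-control case; notation is illustrative.
\begin{equation}
  x_{k+1} = x_k + \Delta t\,\bar{f}_{\mathrm{NN}}(x_k, u_k),
  \qquad
  \bar{f}_{\mathrm{NN}}(x_k, u_k) \approx
  \frac{1}{\Delta t}\int_{t_k}^{t_k+\Delta t} f\bigl(x(\tau), u(\tau)\bigr)\,\mathrm{d}\tau .
\end{equation}
```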

12 pages, 786 KB  
Article
A SHAP-Guided Grouped L1 Regularization Method for CRISPR-Cas9 Off-Target Predictions
by Evmorfia Tentsidou and Haridimos Kondylakis
Algorithms 2025, 18(9), 561; https://doi.org/10.3390/a18090561 - 4 Sep 2025
Abstract
CRISPR-Cas9 has emerged as a remarkably powerful gene editing tool and has advanced both research and gene therapy applications. Machine learning models have been developed to predict off-target cleavages. Despite progress, accuracy, stability, and interpretability remain open challenges. Combining predictive modeling with interpretability can provide valuable insights into model behavior and increase its trustworthiness. This study proposes a group-wise L1 regularization method guided by SHAP values. For the implementation of this method, the CRISPR-M model was used, and SHAP-informed regularization strengths were calculated and applied to features grouped by relevance. Models were trained on HEK293T and evaluated on K562. In addition to the CRISPR-M baseline, three variants were developed: L1-Grouped-Epigenetics, L1-Grouped-Complete, and L1-Uniform-Epigenetics (control). L1-Grouped-Epigenetics, using penalties split by on- and off-target epigenetic factors, moderately improved mean precision, AUPRC, and AUROC relative to the baseline, as well as showing reduced variability in precision and AUPRC across seeds, although its mean recall and F-metrics were slightly lower than those of CRISPR-M. L1-Grouped-Complete achieved the highest mean AUROC and Spearman correlation and presented lower variability than CRISPR-M for recall, F1, and F-beta, despite reduced recall and F-metrics relative to CRISPR-M. Overall, this approach required only minor architectural adjustments, making it adaptable to other models and domains. While results demonstrate potential for enhancing interpretability and robustness without sacrificing predictive performance, further validation across additional datasets is required. Full article
(This article belongs to the Collection Feature Papers in Evolutionary Algorithms and Machine Learning)
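The core idea, scaling per-group L1 strengths by mean absolute SHAP values, might be sketched as follows; the group layout, scaling rule, and base strength are assumptions for illustration, not the paper's exact recipe.

```python
# Illustrative SHAP-guided grouped L1: feature groups the model relies on less
# (lower mean |SHAP|) receive a stronger L1 penalty.
import numpy as np

def group_l1_strengths(shap_values, group_index, base_strength=1e-3):
    """shap_values: (n_samples, n_features); group_index: group id per feature."""
    mean_abs = np.abs(shap_values).mean(axis=0)
    strengths = {}
    for g in np.unique(group_index):
        importance = mean_abs[group_index == g].mean()
        strengths[int(g)] = base_strength / (importance + 1e-8)   # less important => stronger penalty
    return strengths

def grouped_l1_penalty(weights, group_index, strengths):
    return sum(strengths[int(g)] * np.abs(weights[group_index == g]).sum()
               for g in np.unique(group_index))

rng = np.random.default_rng(0)
shap = rng.normal(size=(100, 6))
groups = np.array([0, 0, 0, 1, 1, 1])          # e.g. on-target vs off-target epigenetic features
lam = group_l1_strengths(shap, groups)
print(lam, grouped_l1_penalty(rng.normal(size=6), groups, lam))
```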

19 pages, 439 KB  
Article
Speeding Up Floyd–Warshall’s Algorithm to Compute All-Pairs Shortest Paths and the Transitive Closure of a Graph
by Giuseppe Lancia and Marcello Dalpasso
Algorithms 2025, 18(9), 560; https://doi.org/10.3390/a18090560 - 4 Sep 2025
Abstract
Floyd–Warshall’s algorithm is a widely-known procedure for computing all-pairs shortest paths in a graph of n vertices in Θ(n³) time complexity. A simplified version of the same algorithm computes the transitive closure of the graph with the same time complexity. The algorithm operates on an n×n matrix, performing n inspections and no more than n updates of each matrix cell, until the final matrix is computed. In this paper, we apply a technique called SmartForce, originally devised as a performance enhancement for solving the traveling salesman problem, to avoid the inspection and checking of cells that do not need to be updated, thus reducing the overall computation time when the number, u, of cell updates is substantially smaller than n³. When the ratio u/n³ is not small enough, the performance of the proposed procedure might be worse than that of the Floyd–Warshall algorithm. To speed up the algorithm independently of the input instance type, we introduce an effective hybrid approach. Finally, a similar procedure, which exploits suitable fast data structures, can be used to achieve a speedup over the Floyd–Warshall simplified algorithm that computes the transitive closure of a graph. Full article
(This article belongs to the Special Issue Graph and Hypergraph Algorithms and Applications)
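For reference, the plain Θ(n³) Floyd–Warshall recurrence that the SmartForce variant accelerates is shown below; the speedup techniques themselves are not reproduced.

```python
# Baseline Floyd-Warshall: for every intermediate vertex k, relax d[i][j]
# against d[i][k] + d[k][j]. Shown only as the O(n^3) procedure being sped up.
INF = float("inf")

def floyd_warshall(dist):
    """dist: n x n matrix of edge weights (INF where no edge, 0 on the diagonal)."""
    n = len(dist)
    d = [row[:] for row in dist]
    for k in range(n):
        dk = d[k]
        for i in range(n):
            dik = d[i][k]
            if dik == INF:
                continue                        # row i cannot improve through k
            di = d[i]
            for j in range(n):
                alt = dik + dk[j]
                if alt < di[j]:
                    di[j] = alt
    return d

graph = [[0, 3, INF, 7], [8, 0, 2, INF], [5, INF, 0, 1], [2, INF, INF, 0]]
print(floyd_warshall(graph))
```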

24 pages, 1074 KB  
Article
Research on Dual-Loop ADRC for PMSM Based on Opposition-Based Learning Hybrid Optimization Algorithm
by Longda Wang, Zhang Wu, Yang Liu and Yan Chen
Algorithms 2025, 18(9), 559; https://doi.org/10.3390/a18090559 - 4 Sep 2025
Abstract
To enhance the speed regulation accuracy and robustness of permanent magnet synchronous motor (PMSM) drives under complex operating conditions, this paper proposes a dual-loop active disturbance rejection control strategy optimized by an opposition-based learning hybrid optimization algorithm (DLADRC-OBLHOA). First, the vector control system and ADRC model of the PMSM are established. Then, a nonlinear function, ifal, is introduced to improve the performance of the speed-loop ADRC. Meanwhile, an active disturbance rejection controller is also introduced into the current loop to suppress current disturbances. To address the challenge of tuning multiple ADRC parameters, an opposition-based learning hybrid optimization algorithm (OBLHOA) is developed. This algorithm integrates chaotic mapping for population initialization and employs opposition-based learning to enhance global search capability. The proposed OBLHOA is utilized to optimize the speed-loop ADRC parameters, thereby achieving high-precision speed control of the PMSM system. Its optimization performance is validated on 12 benchmark functions from the IEEE CEC2022 test suite, demonstrating superior convergence speed and solution accuracy compared to conventional heuristic algorithms. The proposed strategy achieves superior speed regulation accuracy and reliability under complex operating conditions when deployed on high-performance processors, but its effectiveness may diminish on resource-limited hardware. Moreover, simulation results show that the DLADRC-OBLHOA control strategy outperforms PI control, traditional ADRC, and ADRC-ifal in terms of tracking accuracy and disturbance rejection capability. Full article
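The opposition-based learning step inside OBLHOA can be sketched as keeping the better of each candidate and its bound-reflected opposite; the bounds and cost function below are illustrative.

```python
# Opposition-based learning: for each candidate x in [lb, ub], its "opposite"
# is lb + ub - x, and the better of the pair (per the cost) is kept.
import numpy as np

def opposition_based_selection(pop, lb, ub, cost):
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    opposite = lb + ub - pop                    # element-wise opposite candidates
    keep = []
    for x, x_opp in zip(pop, opposite):
        keep.append(x if cost(x) <= cost(x_opp) else x_opp)
    return np.array(keep)

rng = np.random.default_rng(0)
lb, ub = np.array([0.0, 0.0]), np.array([10.0, 10.0])
pop = lb + (ub - lb) * rng.random((8, 2))
sphere = lambda x: float(np.sum((x - 3.0) ** 2))   # toy objective
print(opposition_based_selection(pop, lb, ub, sphere))
```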

12 pages, 1451 KB  
Article
Machine Learning Models for Predicting Postoperative Complications and Hospitalization After Percutaneous Nephrolithotomy
by Laura Shalabayeva, Pilar Bahílo Mateu, Marc Romeu Ferras, Javier Díaz-Carnicero, Alberto Budía and David Vivas-Consuelo
Algorithms 2025, 18(9), 558; https://doi.org/10.3390/a18090558 - 4 Sep 2025
Abstract
PCNL treatment is often associated with complications of hemorrhagic or infectious origin, which can result in prolonged hospitalization. This study aims to develop predictive models using machine learning (ML) techniques to anticipate these outcomes. Multiple ML algorithms—including Logistic Regression, Decision Tree, Random Forest, and Extreme Gradient Boosting—were evaluated on separate validation and test datasets. The Random Forest model achieved the highest predictive performance for hospitalization need (AUC 0.726/0.736) and infectious complications (AUC 0.799/0.735). Threshold adjustment was applied to increase sensitivity, reducing false negatives. The interpretability of the models was ensured through SHAP analysis, identifying clinically meaningful variables. Risk factors in both the hospitalization and infectious complications models included nephrostomy drainage, a neutrophil percentage higher than 80%, a Guy's score of grade 4, a leukocyte level higher than 15 or lower than 4.5, and balloon dilation, while protective features included tubeless intervention, easy localization of the stone, and negative culture and microorganism results. However, no model achieved acceptable performance for predicting hemorrhagic complications, likely due to limited data. These results suggest that AI-based models can contribute to risk stratification after PCNL. Further experiments with larger, multi-center datasets are needed to confirm these findings and improve the generalizability of the models. Full article
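The threshold-adjustment step mentioned above can be sketched as lowering the probability cut-off until a target sensitivity is reached on validation data; the data and target value below are synthetic and illustrative.

```python
# Threshold adjustment: instead of the default 0.5 cut-off, keep the highest
# threshold that still reaches a target sensitivity (recall), trading some
# false positives for fewer missed complications.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 8))
y = (X[:, 0] + 0.5 * rng.normal(size=400) > 0.8).astype(int)   # imbalanced outcome
X_tr, y_tr, X_val, y_val = X[:300], y[:300], X[300:], y[300:]

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
proba = model.predict_proba(X_val)[:, 1]

target_recall = 0.90
threshold = 0.5
for t in np.linspace(0.5, 0.05, 46):            # lower the cut-off until recall is reached
    if recall_score(y_val, (proba >= t).astype(int)) >= target_recall:
        threshold = t
        break
print(threshold, recall_score(y_val, (proba >= threshold).astype(int)))
```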

21 pages, 1814 KB  
Article
Data-Driven Prior Construction in Hilbert Spaces for Bayesian Optimization
by Carol Santos Almonte, Oscar Sanchez Jimenez, Eduardo Souza de Cursi and Emmanuel Pagnacco
Algorithms 2025, 18(9), 557; https://doi.org/10.3390/a18090557 - 3 Sep 2025
Abstract
We propose a variant of Bayesian optimization in which probability distributions are constructed using uncertainty quantification (UQ) techniques. In this context, UQ techniques rely on a Hilbert basis expansion to infer probability distributions from limited experimental data. These distributions act as prior knowledge of the search space and are incorporated into the acquisition function to guide the selection of enrichment points more effectively. Several variants of the method are examined, depending on the distribution type (normal, log-normal, etc.), and benchmarked against traditional Bayesian optimization on test functions. The results show competitive performance, with selective improvements depending on the problem structure, and faster convergence in specific cases. As a practical application, we address a structural shape optimization problem. The initial geometry is an L-shaped plate, where the goal is to minimize the volume under a horizontal displacement constraint expressed as a penalty. Our approach first identifies a promising region while efficiently training the surrogate model. A subsequent gradient-based optimization step then refines the design using the trained surrogate, achieving a volume reduction of more than 30% while satisfying the displacement constraint, without requiring any additional evaluations of the objective function. Full article

36 pages, 576 KB  
Review
A Review of Explainable Artificial Intelligence from the Perspectives of Challenges and Opportunities
by Sami Kabir, Mohammad Shahadat Hossain and Karl Andersson
Algorithms 2025, 18(9), 556; https://doi.org/10.3390/a18090556 - 3 Sep 2025
Abstract
The widespread adoption of Artificial Intelligence (AI) in critical domains, such as healthcare, finance, law, and autonomous systems, has brought unprecedented societal benefits. Its black-box (sub-symbolic) nature allows AI to compute prediction without explaining the rationale to the end user, resulting in lack of transparency between human and machine. Concerns are growing over the opacity of such complex AI models, particularly deep learning architectures. To address this concern, explainability is of paramount importance, which has triggered the emergence of Explainable Artificial Intelligence (XAI) as a vital research area. XAI is aimed at enhancing transparency, trust, and accountability of AI models. This survey presents a comprehensive overview of XAI from the dual perspectives of challenges and opportunities. We analyze the foundational concepts, definitions, terminologies, and taxonomy of XAI methods. We then review several application domains of XAI. Special attention is given to various challenges of XAI, such as no universal definition, trade-off between accuracy and interpretability, and lack of standardized evaluation metrics. We conclude by outlining the future research directions of human-centric design, interactive explanation, and standardized evaluation frameworks. This survey serves as a resource for researchers, practitioners, and policymakers to navigate the evolving landscape of interpretable and responsible AI. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)

22 pages, 3458 KB  
Article
Output Voltage Control of a Synchronous Buck DC/DC Converter Using Artificial Neural Networks
by Juraj Šimko, Michal Praženica, Roman Koňarik, Slavomír Kaščák and Peter Klčo
Algorithms 2025, 18(9), 555; https://doi.org/10.3390/a18090555 - 2 Sep 2025
Abstract
This article presents a neural network-based control method for maintaining the required output voltage of a synchronous buck converter. The goal was to replace a traditional PID controller with a neural network that calculates the duty cycle based on real-time data. Several versions of the neural network were tested. The final version, which included the input voltage, reference, and output current as inputs and compensated for dead time, was successfully validated on real hardware. It was able to respond to changes in load and input voltage within a limited operating range. Full article
(This article belongs to the Collection Feature Papers in Evolutionary Algorithms and Machine Learning)

34 pages, 1992 KB  
Article
Future Skills in the GenAI Era: A Labor Market Classification System Using Kolmogorov–Arnold Networks and Explainable AI
by Dimitrios Christos Kavargyris, Konstantinos Georgiou, Eleanna Papaioannou, Theodoros Moysiadis, Nikolaos Mittas and Lefteris Angelis
Algorithms 2025, 18(9), 554; https://doi.org/10.3390/a18090554 - 2 Sep 2025
Abstract
Generative Artificial Intelligence (GenAI) is widely recognized for its profound impact on labor market demand, supply, and skill dynamics. However, due to its transformative nature, GenAI increasingly overlaps with traditional AI roles, blurring boundaries and intensifying the need to reassess workforce competencies. To address this challenge, this paper introduces KANVAS (Kolmogorov–Arnold Network Versatile Algorithmic Solution)—a framework based on Kolmogorov–Arnold Networks (KANs), which utilize B-spline-based, compact, and interpretable neural units—to distinguish between traditional AI roles and emerging GenAI-related positions. The aim of the study is to develop a reliable and interpretable labor market classification system that differentiates these roles using explainable machine learning. Unlike prior studies that emphasize predictive performance, our work is the first to employ KANs as an explanatory tool for labor classification, to reveal how GenAI-related and European Skills, Competences, Qualifications, and Occupations (ESCO)-aligned skills differentially contribute to distinguishing modern from traditional AI job roles. Using raw job vacancy data from two labor market platforms, KANVAS implements a hybrid pipeline combining a state-of-the-art Large Language Model (LLM) with Explainable AI (XAI) techniques, including Shapley Additive Explanations (SHAP), to enhance model transparency. The framework achieves approximately 80% classification consistency between traditional and GenAI-aligned roles, while also identifying the most influential skills contributing to each category. Our findings indicate that GenAI positions prioritize competencies such as prompt engineering and LLM integration, whereas traditional roles emphasize statistical modeling and legacy toolkits. By surfacing these distinctions, the framework offers actionable insights for curriculum design, targeted reskilling programs, and workforce policy development. Overall, KANVAS contributes a novel, interpretable approach to understanding how GenAI reshapes job roles and skill requirements in a rapidly evolving labor market. Finally, the open-source implementation of KANVAS is flexible and well-suited for HR managers and relevant stakeholders. Full article

17 pages, 569 KB  
Article
AI-Driven Optimization of Functional Feature Placement in Automotive CAD
by Ardian Kelmendi and George Pappas
Algorithms 2025, 18(9), 553; https://doi.org/10.3390/a18090553 - 2 Sep 2025
Abstract
The automotive industry increasingly relies on 3D modeling technologies to design and manufacture vehicle components with high precision. One critical challenge is optimizing the placement of latches that secure the dashboard side paneling, as these placements vary between models and must maintain minimal tolerance for movement to ensure durability. While generative artificial intelligence (AI) has advanced rapidly in generating text, images, and video, its application to creating accurate 3D CAD models remains limited. This paper proposes a novel framework that integrates a PointNet deep learning model with Python-based CAD automation to predict optimal clip placements and surface thickness for dashboard side panels. Unlike prior studies that focus on general-purpose CAD generation, this work specifically targets automotive interior components and demonstrates a practical method for automating part design. The approach involves generating placement data—potentially via generative AI—and importing it into the CAD environment to produce fully parameterized 3D models. Experimental results show that the prototype achieved a 75% success rate across six of eight test surfaces, indicating strong potential despite the limited sample size. This research highlights a clear pathway for applying generative AI to part design automation in the automotive sector and offers a foundation for scaling to broader design applications. Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)

22 pages, 8021 KB  
Article
Multi-Task Semi-Supervised Approach for Counting Cones in Adaptive Optics Images
by Vidya Bommanapally, Amir Akhavanrezayat, Parvathi Chundi, Quan Dong Nguyen and Mahadevan Subramaniam
Algorithms 2025, 18(9), 552; https://doi.org/10.3390/a18090552 - 2 Sep 2025
Abstract
Counting and density estimation of cone cells using adaptive optics (AO) imaging plays an important role in the clinical management of retinal diseases. A novel deep learning approach for the cone counting task with minimal manual labeling of cone cells in AO images is described in this paper. We propose a hybrid multi-task semi-supervised learning (MTSSL) framework that simultaneously trains on unlabeled and labeled data. On the unlabeled images, the model learns structural and relational features by employing two self-supervised pretext tasks—image inpainting (IP) and learning-to-rank (L2R). At the same time, it leverages a small set of labeled examples to supervise a density estimation head for cone counting. By jointly minimizing the image reconstruction loss, the ranking loss, and the supervised density-map loss, our approach harnesses the rich information in unlabeled data to learn feature representations and directly incorporates ground-truth annotations to guide accurate density prediction and counts. Experiments were conducted on a dataset of AO images of 120 subjects captured using a device with a retinal camera (rtx1) with a wide field-of-view. MTSSL gains strengths from hybrid self-supervised pretext tasks of generative and predictive pretraining that aid in learning global and local context required for counting cones. The results show that the proposed MTSSL approach significantly outperforms the individual self-supervised pipelines with an RMSE score improved by a factor of 2 for cone counting. Full article
(This article belongs to the Special Issue Advanced Machine Learning Algorithms for Image Processing)

19 pages, 17084 KB  
Article
SPADE: Superpixel Adjacency Driven Embedding for Three-Class Melanoma Segmentation
by Pablo Ordóñez, Ying Xie, Xinyue Zhang, Chloe Yixin Xie, Santiago Acosta and Issac Guitierrez
Algorithms 2025, 18(9), 551; https://doi.org/10.3390/a18090551 - 2 Sep 2025
Abstract
The accurate segmentation of pigmented skin lesions is a critical prerequisite for reliable melanoma detection, yet approximately 30% of lesions exhibit fuzzy or poorly defined borders. This ambiguity makes the definition of a single contour unreliable and limits the effectiveness of computer-assisted diagnosis (CAD) systems. While clinical assessment based on the ABCDE criteria (asymmetry, border, color, diameter, and evolution), dermoscopic imaging, and scoring systems remains the standard, these methods are inherently subjective and vary with clinician experience. We address this challenge by reframing segmentation into three distinct regions: background, border, and lesion core. These regions are delineated using superpixels generated via the Simple Linear Iterative Clustering (SLIC) algorithm, which provides meaningful structural units for analysis. Our contributions are fourfold: (1) redefining lesion borders as regions, rather than sharp lines; (2) generating superpixel-level embeddings with a transformer-based autoencoder; (3) incorporating these embeddings as features for superpixel classification; and (4) integrating neighborhood information to construct enhanced feature vectors. Unlike pixel-level algorithms that often overlook boundary context, our pipeline fuses global class information with local spatial relationships, significantly improving precision and recall in challenging border regions. An evaluation on the HAM10000 melanoma dataset demonstrates that our superpixel–RAG–transformer (region adjacency graph) pipeline achieves exceptional performance (100% F1 score, accuracy, and precision) in classifying background, border, and lesion core superpixels. By transforming raw dermoscopic images into region-based structured representations, the proposed method generates more informative inputs for downstream deep learning models. This strategy not only advances melanoma analysis but also provides a generalizable framework for other medical image segmentation and classification tasks. Full article
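The first two stages, SLIC superpixels and a region-adjacency structure over the label map, might be sketched as follows with scikit-image; parameter values are illustrative, and the superpixel classification stage is not shown.

```python
# SLIC superpixels plus a simple region-adjacency list built from the label
# map: two superpixels are neighbours if their labels touch horizontally or
# vertically. Uses a bundled sample image as a stand-in for a dermoscopic one.
import numpy as np
from skimage import data
from skimage.segmentation import slic

image = data.astronaut()                        # stand-in RGB image
labels = slic(image, n_segments=150, compactness=10, start_label=0)

h = np.stack([labels[:, :-1].ravel(), labels[:, 1:].ravel()], axis=1)
v = np.stack([labels[:-1, :].ravel(), labels[1:, :].ravel()], axis=1)
pairs = np.vstack([h, v])
pairs = pairs[pairs[:, 0] != pairs[:, 1]]       # keep only boundary-crossing pixel pairs
pairs = np.unique(np.sort(pairs, axis=1), axis=0)
print(int(labels.max()) + 1, "superpixels,", len(pairs), "adjacent pairs")
```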

19 pages, 641 KB  
Article
Lightweight Hash Function Design for the Internet of Things: Structure and SAT-Based Cryptanalysis
by Kairat Sakan, Kunbolat Algazy, Nursulu Kapalova and Andrey Varennikov
Algorithms 2025, 18(9), 550; https://doi.org/10.3390/a18090550 - 1 Sep 2025
Abstract
This paper introduces a lightweight cryptographic hash algorithm, LWH-128, developed using a sponge-based construction and specifically adapted for operation under the constrained computational and energy conditions typical of embedded systems and Internet of Things devices. The algorithm employs a two-layer processing structure based on simple logical operations (XOR, cyclic shifts, and S-boxes) and incorporates a preliminary diffusion transformation function G, along with the Davies–Meyer compression scheme, to enhance irreversibility and improve cryptographic robustness. A comparative analysis of hardware implementation demonstrates that LWH-128 exhibits balanced characteristics in terms of circuit complexity, memory usage, and processing speed, making it competitive with existing lightweight hash algorithms. As part of the cryptanalytic evaluation, a Boolean SATisfiability (SAT) Problem-based model of the compression function is constructed as a conjunctive normal form over Boolean variables. Experimental results using the Parkissat SAT solver show an exponential increase in computational time as the number of unknown input bits increases. These findings support the conclusion that the LWH-128 algorithm exhibits strong resistance to preimage attacks based on SAT-solving techniques. Full article
(This article belongs to the Section Combinatorial Optimization, Graph, and Network Algorithms)
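To make the structural ingredients named above concrete (a sponge construction driven by XOR, cyclic shifts, and S-boxes), here is a toy sponge-style hash. It is emphatically not LWH-128: the state size, round function, rotation amounts, and S-box (the PRESENT S-box used as a stand-in) are all assumptions chosen only to show the absorb, permute, and squeeze flow.

```python
# Toy sponge hash sketch (illustrative only, NOT LWH-128).
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]   # PRESENT S-box as a stand-in

def rotl32(x, r):
    return ((x << r) | (x >> (32 - r))) & 0xFFFFFFFF

def permute(state, rounds=8):
    # Simple round: substitute nibbles, rotate each word, mix neighbors with XOR.
    for rnd in range(rounds):
        state = [int(''.join(f'{SBOX[(w >> s) & 0xF]:X}'
                             for s in range(28, -4, -4)), 16) for w in state]
        state = [rotl32(w, (7 * i + rnd) % 31 + 1) for i, w in enumerate(state)]
        state = [w ^ state[(i + 1) % len(state)] ^ rnd for i, w in enumerate(state)]
    return state

def toy_sponge_hash(msg: bytes, rate_words=2, cap_words=2):
    state = [0] * (rate_words + cap_words)          # 128-bit state of 32-bit words
    msg += b'\x80' + b'\x00' * (-(len(msg) + 1) % (4 * rate_words))  # pad
    for off in range(0, len(msg), 4 * rate_words):  # absorb
        for i in range(rate_words):
            state[i] ^= int.from_bytes(msg[off + 4 * i: off + 4 * i + 4], 'big')
        state = permute(state)
    out = b''
    while len(out) < 16:                            # squeeze a 128-bit digest
        out += b''.join(w.to_bytes(4, 'big') for w in state[:rate_words])
        state = permute(state)
    return out[:16]

print(toy_sponge_hash(b'hello IoT').hex())
```
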
20 pages, 3333 KB  
Article
A New Hybrid Intelligent System for Predicting Bottom-Hole Pressure in Vertical Oil Wells: A Case Study
by Kheireddine Redouane and Ashkan Jahanbani Ghahfarokhi
Algorithms 2025, 18(9), 549; https://doi.org/10.3390/a18090549 - 1 Sep 2025
Viewed by 361
Abstract
The evaluation of pressure drops across the length of production wells is a crucial task, as it influences both the cost-effective selection of tubing and the development of an efficient production strategy, both of which are vital for maximizing oil recovery while minimizing [...] Read more.
The evaluation of pressure drops across the length of production wells is a crucial task, as it influences both the cost-effective selection of tubing and the development of an efficient production strategy, both of which are vital for maximizing oil recovery while minimizing operational expenses. To address this, our study proposes an innovative hybrid intelligent system designed to predict bottom-hole flowing pressure in vertical multiphase conditions with superior accuracy compared to existing methods, using a dataset of 150 field measurements collected from Algerian fields. In this work, the applied hybrid framework is the Adaptive Neuro-Fuzzy Inference System (ANFIS), which integrates artificial neural networks (ANN) with fuzzy logic (FL). The ANFIS model was constructed using a subtractive clustering technique after data filtering, and its outcomes were then evaluated against the most widely utilized correlations and mechanistic models. Graphical inspection and error statistics confirmed that ANFIS consistently outperformed all other approaches in terms of precision, reliability, and effectiveness. To further improve the ANFIS performance, a particle swarm optimization (PSO) algorithm is employed to refine the model and optimize the antecedent Gaussian membership functions along with the consequent linear coefficient vector. The results achieved by the hybrid ANFIS-PSO model demonstrated greater accuracy in bottom-hole pressure estimation than the conventional hybrid approach. Full article
(This article belongs to the Special Issue AI and Computational Methods in Engineering and Science)
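A first-order Sugeno (TSK) inference step with Gaussian memberships, the model family ANFIS builds on, can be written in a few lines. The sketch below is illustrative only: the number of rules, the three inputs, and every numeric value are assumptions; in the paper these parameters would be initialized by subtractive clustering and then refined, with PSO searching over the Gaussian centres and widths and the linear consequent coefficients to minimize the bottom-hole pressure prediction error.

```python
# Minimal first-order Sugeno (TSK) inference sketch with Gaussian memberships.
# All sizes and numbers are illustrative assumptions, not the paper's model.
import numpy as np

def gaussian_mf(x, c, s):
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def tsk_predict(x, centres, widths, coeffs):
    """x: (n_features,) input; one rule per row of `centres`/`widths`;
    coeffs: (n_rules, n_features + 1) linear consequents [w, b]."""
    # Rule firing strength = product of per-feature Gaussian memberships.
    w = np.prod(gaussian_mf(x, centres, widths), axis=1)
    w = w / (w.sum() + 1e-12)                        # normalized firing strengths
    y_rule = coeffs[:, :-1] @ x + coeffs[:, -1]      # per-rule linear output
    return float(w @ y_rule)                         # weighted average of rule outputs

rng = np.random.default_rng(0)
centres = rng.uniform(0, 1, size=(4, 3))   # 4 rules, 3 normalized inputs (e.g. rate, GOR, depth)
widths = np.full((4, 3), 0.3)
coeffs = rng.normal(size=(4, 4))
print(tsk_predict(np.array([0.2, 0.5, 0.8]), centres, widths, coeffs))
```
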
13 pages, 685 KB  
Article
Comparison of Linear MPC and Explicit MPC for Battery Cell Balancing Control
by Wanqun Yang and Jun Chen
Algorithms 2025, 18(9), 548; https://doi.org/10.3390/a18090548 - 1 Sep 2025
Viewed by 346
Abstract
This paper presents and compares two model predictive control (MPC) approaches for battery cell state-of-charge (SOC) balancing. In both approaches, a linearized discrete-time model that takes into account individual cell capacities is used. The first approach is a linear MPC controller that effectively [...] Read more.
This paper presents and compares two model predictive control (MPC) approaches for battery cell state-of-charge (SOC) balancing. In both approaches, a linearized discrete-time model that takes into account individual cell capacities is used. The first approach is a linear MPC controller that effectively regulates multiple cells to track a target SOC level while satisfying physical constraints. The second approach is based on an explicit MPC implementation that reduces online computation while achieving comparable performance. The simulation results suggest that explicit MPC can deliver the same balancing performance as linear MPC while achieving faster online execution. Specifically, explicit MPC reduces the computation time by 37.3% in a five-cell battery example, at the cost of higher offline computation. However, the simulation results also reveal a significant limitation of explicit MPC for battery systems with a larger number of cells. As the number of cells and/or the length of the prediction horizon increases, the computational requirements grow exponentially, making its application to SOC balancing for large battery systems impractical. To the best of the authors' knowledge, this is the first study that compares linear MPC and explicit MPC algorithms in the context of battery cell balancing. Full article
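The linear MPC formulation compared in this entry can be sketched as a small quadratic program. The snippet below is a hedged illustration, not the paper's controller: the per-cell SOC integrator model, capacities, current limit, horizon, and weights are assumed values, and the problem is solved online with cvxpy rather than via the explicit (multi-parametric) solution the paper also studies.

```python
# Sketch of a linear MPC quadratic program for SOC balancing (assumed data).
import numpy as np
import cvxpy as cp

n_cells, N, dt = 5, 10, 1.0                           # cells, horizon steps, step length [s]
cap = np.array([3.0, 3.1, 2.9, 3.05, 2.95]) * 3600    # assumed cell capacities [A*s]
soc0 = np.array([0.62, 0.58, 0.65, 0.60, 0.55])       # assumed initial SOCs
soc_ref = soc0.mean()                                 # balance every cell to the mean SOC
u_max = 0.5                                           # balancing current limit [A]

u = cp.Variable((n_cells, N))                         # balancing currents
x = cp.Variable((n_cells, N + 1))                     # predicted SOC trajectories
cost, constr = 0, [x[:, 0] == soc0]
for k in range(N):
    # SOC integrator scaled by each cell's capacity, plus current bounds.
    constr += [x[:, k + 1] == x[:, k] + cp.multiply(dt / cap, u[:, k]),
               cp.abs(u[:, k]) <= u_max]
    cost += cp.sum_squares(x[:, k + 1] - soc_ref) + 1e-3 * cp.sum_squares(u[:, k])

cp.Problem(cp.Minimize(cost), constr).solve()
print(np.round(u.value[:, 0], 4))                     # first move of the receding-horizon plan
```

An explicit-MPC variant would precompute the piecewise-affine control law offline over regions of the state space, which is where the exponential growth with the number of cells and the horizon length arises.
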
16 pages, 1007 KB  
Article
Learning SMILES Semantics: Word2Vec and Transformer Embeddings for Molecular Property Prediction
by Saya Hashemian, Zak Khan, Pulkit Kalhan and Yang Liu
Algorithms 2025, 18(9), 547; https://doi.org/10.3390/a18090547 - 1 Sep 2025
Viewed by 327
Abstract
This paper investigates the effectiveness of Word2Vec-based molecular representation learning on SMILES (Simplified Molecular Input Line Entry System) strings for a downstream prediction task related to the market approvability of chemical compounds. Here, market approvability is treated as a proxy classification label derived [...] Read more.
This paper investigates the effectiveness of Word2Vec-based molecular representation learning on SMILES (Simplified Molecular Input Line Entry System) strings for a downstream prediction task related to the market approvability of chemical compounds. Here, market approvability is treated as a proxy classification label derived from approval status, with only the molecular structure being analyzed. We train character-level embeddings using Continuous Bag of Words (CBOW) and Skip-Gram with Negative Sampling architectures and apply the resulting embeddings in a downstream classification task using a multi-layer perceptron (MLP). To evaluate the utility of these lightweight embedding techniques, we conduct experiments on a curated SMILES dataset labeled by approval status under both imbalanced and SMOTE-balanced training conditions. In addition to our Word2Vec-based models, we include a ChemBERTa-based baseline using the pretrained ChemBERTa-77M model. Our findings show that while ChemBERTa achieves higher performance, the Word2Vec-based models offer a favorable trade-off between accuracy and computational efficiency. This efficiency is especially relevant in large-scale compound screening, where rapid exploration of the chemical space can support early-stage cheminformatics workflows. These results suggest that traditional embedding models can serve as viable alternatives for scalable and interpretable cheminformatics pipelines, particularly in resource-constrained environments. Full article
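The embedding-then-classify pipeline described above can be mimicked with a tiny example. The snippet below is a sketch under stated assumptions: the SMILES strings and "approved" labels are made up, the hyperparameters are arbitrary, and mean pooling of character vectors stands in for whatever aggregation the paper uses; only the overall shape (character-level Skip-Gram with negative sampling feeding an MLP) mirrors the entry.

```python
# Sketch: character-level Word2Vec on SMILES, mean-pooled, then an MLP classifier.
# The data below are toy examples, not the paper's dataset.
from gensim.models import Word2Vec
from sklearn.neural_network import MLPClassifier
import numpy as np

smiles = ["CCO", "CC(=O)O", "c1ccccc1", "CCN(CC)CC",
          "CC(C)Cc1ccc(cc1)C(C)C(=O)O", "O=C(C)Oc1ccccc1C(=O)O"]
labels = [1, 1, 0, 0, 1, 1]                 # toy "approved" flags

corpus = [list(s) for s in smiles]          # each SMILES treated as a character sequence
w2v = Word2Vec(corpus, vector_size=32, window=5, min_count=1,
               sg=1, negative=5, epochs=50, seed=0)

def embed(s):
    # Mean of character embeddings; characters unseen in training are skipped.
    vecs = [w2v.wv[c] for c in s if c in w2v.wv]
    return np.mean(vecs, axis=0)

X = np.vstack([embed(s) for s in smiles])
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000, random_state=0).fit(X, labels)
print(clf.predict(X))
```
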
25 pages, 11853 KB  
Article
Mixed 1D/2D Simplicial Approximation of Volumetric Medial Axis by Direct Palpation of Shape Diameter Function
by Andres F. Puentes-Atencio, Daniel Mejia-Parra, Ander Arbelaiz, Carlos Cadavid and Oscar Ruiz-Salguero
Algorithms 2025, 18(9), 546; https://doi.org/10.3390/a18090546 - 31 Aug 2025
Viewed by 400
Abstract
In the domain of Shape Encoding, the approximation of the Medial Axis of a solid region in R^3 with Boundary Representation M is relevant because the Medial Axis is an efficient encoding for M in Design, Manufacturing, and Shape Learning. Existing [...] Read more.
In the domain of Shape Encoding, the approximation of the Medial Axis of a solid region in R^3 with Boundary Representation M is relevant because the Medial Axis is an efficient encoding for M in Design, Manufacturing, and Shape Learning. Existing Medial Axis approximations include (a) full Voronoi-based ones and (b) partial Shape Diameter Function (SDF)-based ones. Methods (a) produce large amounts of high-frequency data, which must then be pruned. Methods (b) reduce computing expenses at the price of not handling some shapes (e.g., prismatic), and they currently synthesize only 1D Medial Axes. To partially overcome these limitations, this investigation performs a direct synthesis of a 1D and 2D simplex-based Medial Axis approximation through a combination of stochastic geometric reasoning and graph operations on the SDF-originated point cloud. Our method covers one- and two-dimensional Simplicial Complex Medial Axes, thus improving on 1D Medial Axis approximation methods. Our approach avoids the expensive full computation plus pruning required by Voronoi-based Medial Axis methods. Future work is needed in the synthesis of Medial Axis approximations for high-frequency neighborhoods of the mesh M. Full article
(This article belongs to the Section Analysis of Algorithms and Complexity Theory)
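The Shape Diameter Function step that SDF-based Medial Axis methods start from can be roughed out as follows. This is a sketch under assumptions, not the authors' algorithm: a library icosphere stands in for the solid, a single inward ray per face replaces the usual cone of rays, the nudge constant is arbitrary, and the paper's stochastic geometric reasoning and graph operations on the resulting point cloud are not reproduced.

```python
# Sketch: per-face shape diameter via an inward ray, with a candidate medial
# point dropped at half the diameter. Mesh and constants are assumptions.
import numpy as np
import trimesh

mesh = trimesh.creation.icosphere(subdivisions=2, radius=1.0)   # stand-in solid
origins = mesh.triangles_center - 1e-4 * mesh.face_normals      # nudge inside to avoid self-hits
directions = -mesh.face_normals                                  # cast rays inward

locations, index_ray, _ = mesh.ray.intersects_location(
    ray_origins=origins, ray_directions=directions, multiple_hits=False)

# Shape diameter per face and candidate medial points at half that distance.
diam = np.linalg.norm(locations - origins[index_ray], axis=1)
medial_candidates = origins[index_ray] + 0.5 * diam[:, None] * directions[index_ray]
print(np.abs(medial_candidates).max())
```

For a sphere the candidate points collapse toward the centre, the degenerate case in which the Medial Axis is a single point.
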