Computation, Volume 14, Issue 1 (January 2026) – 27 articles

Cover Story: Thermal–fluid topology optimization is often limited by the high cost of Navier–Stokes simulations. We propose a multifidelity pipeline that performs rapid optimization with a Darcy–convection low-fidelity model, while SEMDOT decouples analysis and geometry to deliver smooth, CAD-ready channel boundaries. By varying inlet pressure and P-norm aggregation, diverse candidates are seeded and then screened using high-fidelity Navier–Stokes–convection simulations in COMSOL. In representative cases, the selected designs reduce peak temperature (from ~337 K to ~323 K) and pressure drop (from ~18.7 Pa to ~12.6 Pa) versus conventional straight channels, and achieve a stronger Pareto distribution than a RAMP-based workflow.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • PDF is the official format for papers, which are published in both HTML and PDF forms.
12 pages, 5353 KB  
Review
State-of-the-Art Overview of Smooth-Edged Material Distribution for Optimizing Topology (SEMDOT) Algorithm
by Minyan Liu, Wanghua Hu, Xuhui Gong, Hao Zhou and Baolin Zhao
Computation 2026, 14(1), 27; https://doi.org/10.3390/computation14010027 - 21 Jan 2026
Abstract
Topology optimization is a powerful and efficient design tool, but the structures obtained by element-based topology optimization methods are often limited by fuzzy or jagged boundaries. The smooth-edged material distribution for optimizing topology (SEMDOT) algorithm can effectively address this problem and promote the practical application of topology-optimized structures. This review outlines the theoretical evolution of SEMDOT, covering both penalty-based and non-penalty-based formulations, and provides access to open-access codes. SEMDOT’s applications span diverse areas, including self-supporting structures, energy-efficient manufacturing, bone tissue scaffolds, heat transfer systems, and building parts, demonstrating its versatility. While SEMDOT addresses boundary issues in topology-optimized structures, further theoretical refinement is needed to develop it into a comprehensive platform. This work consolidates the advances in SEMDOT, highlights its interdisciplinary impact, and identifies future research and implementation directions.
(This article belongs to the Special Issue Advanced Topology Optimization: Methods and Applications)

26 pages, 766 KB  
Article
Regression Extensions of the New Polynomial Exponential Distribution: NPED-GLM and Poisson–NPED Count Models with Applications in Engineering and Insurance
by Halim Zeghdoudi, Sandra S. Ferreira, Vinoth Raman and Dário Ferreira
Computation 2026, 14(1), 26; https://doi.org/10.3390/computation14010026 - 21 Jan 2026
Abstract
The New Polynomial Exponential Distribution (NPED), introduced by Beghriche et al. (2022), provides a flexible one-parameter family capable of representing diverse hazard shapes and heavy-tailed behavior. Regression frameworks based on the NPED, however, have not yet been established. This paper introduces two methodological extensions: (i) a generalized linear model (NPED-GLM) in which the distribution parameter depends on covariates, and (ii) a Poisson–NPED count regression model suitable for overdispersed and heavy-tailed count data. Likelihood-based inference, asymptotic properties, and simulation studies are developed to investigate the performance of the estimators. Applications to engineering failure-count data and insurance claim frequencies illustrate the advantages of the proposed models relative to classical Poisson, negative binomial, and Poisson–Lindley regressions. These developments substantially broaden the applicability of the NPED in actuarial science, reliability engineering, and applied statistics.
(This article belongs to the Section Computational Engineering)

13 pages, 2336 KB  
Article
Embedding-Based Alignments Capture Structural and Sequence Domains of Distantly Related Multifunctional Human Proteins
by Gabriele Vazzana, Matteo Manfredi, Castrense Savojardo, Pier Luigi Martelli and Rita Casadio
Computation 2026, 14(1), 25; https://doi.org/10.3390/computation14010025 - 20 Jan 2026
Abstract
A protein embedding is a representation that carries information distilled from the large volumes of sequences stored in large archives. Routinely, the protein is represented by a matrix in which each residue is a context-specific vector whose dimensionality reflects the size of the large neural-network architectures (transformers) trained with deep learning algorithms on large volumes of sequences. A recently introduced method (Embedding-Based Alignment, EBA) is particularly suited for pairwise embedding comparisons and, as we report here, allows for remote homolog detection under specific constraints, including protein sequence length similarity. Multifunctional proteins are present in different species. In humans, however, the problem of their structural and functional annotation is pressing since, according to recent statistics, they comprise up to 50% of the human reference proteome. In this paper we show that when EBA is applied to a set of randomly selected multifunctional human proteins, it retrieves, after a clustering procedure and rigorous validation on the reference Swiss-Prot database, proteins that are remote homologs of each other and carry structural and functional features similar to those of the query protein.
(This article belongs to the Section Computational Biology)
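The pairwise embedding comparison described in this abstract can be illustrated with a minimal sketch. This is a generic cosine-similarity scoring of per-residue embedding matrices, not the actual EBA algorithm; the helper names are ours and the toy "proteins" are random matrices:

```python
import numpy as np

def residue_cosine_matrix(emb_a, emb_b):
    """Pairwise cosine similarities between residue embeddings.

    emb_a: (La, d) matrix, one context-specific vector per residue.
    emb_b: (Lb, d) matrix for the second protein.
    Returns an (La, Lb) similarity matrix.
    """
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    return a @ b.T

def global_similarity(emb_a, emb_b):
    """Crude whole-protein score: mean of the best per-residue matches."""
    sim = residue_cosine_matrix(emb_a, emb_b)
    return 0.5 * (sim.max(axis=1).mean() + sim.max(axis=0).mean())

rng = np.random.default_rng(0)
p1 = rng.normal(size=(120, 64))                    # toy "protein", 120 residues
p2 = p1[10:90] + 0.05 * rng.normal(size=(80, 64))  # noisy fragment of p1
p3 = rng.normal(size=(100, 64))                    # unrelated toy protein
assert global_similarity(p1, p2) > global_similarity(p1, p3)
```

A real pipeline would add the alignment step and length-similarity constraints the abstract mentions; this sketch only shows why per-residue cosine geometry can separate related from unrelated sequences.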

19 pages, 2826 KB  
Article
Development and Assessment of Simplified Conductance Models for the Particle Exhaust in Wendelstein 7-X
by Foteini Litovoli, Christos Tantos, Volker Hauer, Victoria Haak, Dirk Naujoks, Chandra-Prakash Dhard and W7-X Team
Computation 2026, 14(1), 24; https://doi.org/10.3390/computation14010024 - 19 Jan 2026
Abstract
The particle exhaust system plays a pivotal role in fusion reactors and is essential for ensuring both the feasibility and sustained operation of the fusion reaction. For the successful development of such a system, density control is of great importance, and key design parameters include the neutral gas pressure and the resulting particle fluxes. This study presents a simplified conductance-based model for estimating neutral gas pressure distributions in the particle exhaust system of fusion reactors, focusing specifically on the sub-divertor region. In the proposed model, the pumping region is represented as an interconnected set of reservoirs and channels, to which mass conservation and conductance relations appropriate for all flow regimes are applied. The model was benchmarked against complex 3D DIVGAS simulations across representative operating scenarios of the Wendelstein 7-X (W7-X) stellarator. Despite its geometric simplifications, the model predicts pressure values at several key locations inside the particle exhaust area of W7-X, as well as various types of particle fluxes. It is computationally efficient for large-scale parametric studies, exhibiting an average deviation of approximately 20%, which indicates reasonable predictive accuracy given the model simplifications and the complexity of the flow problem. Its application may assist early-stage engineering design, pumping performance improvement, and operational planning for W7-X and other future fusion reactors.
(This article belongs to the Special Issue Advances in Computational Methods for Fluid Flow)
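The reservoir-and-channel idea in this abstract can be sketched as a small linear network: with pressure-independent conductances C (a free-molecular assumption), each channel carries Q = C·Δp, and steady-state mass balance at every node yields a linear system. The three-node layout, conductance values, and pumping speed below are invented for illustration and are not taken from the paper:

```python
import numpy as np

# Toy sub-divertor network: 3 reservoirs in a line. Node 0 receives the
# incoming gas throughput Q_in, node 2 is pumped with effective pumping
# speed S (Q_pump = S * p2). Channel conductances C01, C12 are constants.
C01, C12 = 2.0, 1.0   # channel conductances, m^3/s (illustrative)
S = 0.5               # effective pumping speed, m^3/s
Q_in = 1.0            # injected throughput, Pa m^3/s

# Steady-state mass balance at each node:
# Node 0:  C01*(p0 - p1)                  = Q_in
# Node 1: -C01*(p0 - p1) + C12*(p1 - p2)  = 0
# Node 2: -C12*(p1 - p2) + S*p2           = 0
A = np.array([
    [ C01,      -C01,      0.0    ],
    [-C01, C01 + C12,     -C12    ],
    [ 0.0,      -C12,  C12 + S    ],
])
b = np.array([Q_in, 0.0, 0.0])
p = np.linalg.solve(A, b)   # node pressures, Pa

# In steady state, all of Q_in must leave through the pump.
assert abs(S * p[2] - Q_in) < 1e-12
```

Real sub-divertor models use flow-regime-dependent conductances and many more nodes, but the solve step has this same structure.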

34 pages, 23520 KB  
Article
Topology Optimisation of Heat Sinks Embedded with Phase-Change Material for Minimising Temperature Oscillations
by Mark Bjerre Müller Christensen and Joe Alexandersen
Computation 2026, 14(1), 23; https://doi.org/10.3390/computation14010023 - 16 Jan 2026
Abstract
This study presents a gradient-based topology optimisation framework for heat sinks embedded with phase-change material (PCM) that targets the mitigation of temperature oscillations under cyclic thermal loads. The approach couples transient thermal diffusion modelling in FEniCS with automatic adjoint sensitivities and GCMMA, and uses a simple analytical homogenisation to parametrise a composite of PCM and conductive material. With latent-heat buffering using PCM, the optimised layouts reduce the temperature variance by 41% when the full time history is used and by 32% when only the quasi-steady-state cycle is used. To improve physical manufacturability, explicit penalisation yields near-discrete designs with only ∼10% performance loss, preserving most oscillation reduction benefits. The results demonstrate that adjoint-driven PCM topology optimisation can systematically suppress thermal oscillations.
(This article belongs to the Special Issue Advanced Topology Optimization: Methods and Applications)

21 pages, 1337 KB  
Article
The Health-Wealth Gradient in Labor Markets: Integrating Health, Insurance, and Social Metrics to Predict Employment Density
by Dingyuan Liu, Qiannan Shen and Jiaci Liu
Computation 2026, 14(1), 22; https://doi.org/10.3390/computation14010022 - 15 Jan 2026
Abstract
Labor market forecasting relies heavily on economic time-series data, often overlooking the “health–wealth” gradient that links population health to workforce participation. This study develops a machine learning framework integrating non-traditional health and social metrics to predict state-level employment density. Methods: We constructed a multi-source longitudinal dataset (2014–2024) by aggregating county-level Quarterly Census of Employment and Wages (QCEW) data with County Health Rankings to the state level. Using a time-aware split to evaluate performance across the COVID-19 structural break, we compared LASSO, Random Forest, and regularized XGBoost models, employing SHAP values for interpretability. Results: The tuned, regularized XGBoost model achieved strong out-of-sample performance (Test R2 = 0.800). A leakage-safe stacked Ridge ensemble yielded comparable performance (Test R2 = 0.827), while preserving the interpretability of the underlying tree model used for SHAP analysis.
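The time-aware split this abstract mentions (train strictly before the structural break, evaluate after) can be sketched on synthetic panel data. An ordinary least-squares fit stands in for the paper's regularized XGBoost, and every number below is illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy panel: 50 "states" x 11 years (2014-2024), four covariates, and a
# synthetic target (invented data, not the QCEW/County Health Rankings set).
years = np.repeat(np.arange(2014, 2025), 50)
X = rng.normal(size=(years.size, 4))
beta = np.array([1.5, -0.8, 0.3, 0.0])
y = X @ beta + 0.1 * rng.normal(size=years.size)

# Time-aware split: fit only on pre-break years, test on later years,
# so no future information leaks into training.
train, test = years <= 2019, years >= 2020
w = np.linalg.lstsq(X[train], y[train], rcond=None)[0]  # OLS stand-in model

pred = X[test] @ w
ss_res = ((y[test] - pred) ** 2).sum()
ss_tot = ((y[test] - y[test].mean()) ** 2).sum()
r2 = 1.0 - ss_res / ss_tot
assert r2 > 0.9   # the linear signal carries across the split
```

The point of the boolean-mask split (rather than a random shuffle) is that the held-out R2 then measures genuine out-of-time generalization.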

20 pages, 1129 KB  
Article
Solving the Synthesis Problem of a Self-Organizing Control System in the Class of Elliptic Catastrophes for Objects with One Input and One Output
by Maxot Rakhmetov, Ainagul Adiyeva, Balaussa Orazbayeva, Shynar Yelezhanova, Raigul Tuleuova and Raushan Moldasheva
Computation 2026, 14(1), 21; https://doi.org/10.3390/computation14010021 - 14 Jan 2026
Abstract
Nonlinear single-input single-output (SISO) systems operating under parametric uncertainty often exhibit bifurcations, multistability, and deterministic chaos, which significantly limit the effectiveness of classical linear, adaptive, and switching control methods. This paper proposes a novel synthesis framework for self-organizing control systems based on catastrophe theory, specifically within the class of elliptic catastrophes. Unlike conventional approaches that stabilize a predefined system structure, the proposed method embeds the control law directly into a structurally stable catastrophe model, enabling autonomous bifurcation-driven transitions between stable equilibria. The synthesis procedure is formulated using a Lyapunov vector-function gradient–velocity method, which guarantees aperiodic robust stability under parametric uncertainty. The definiteness of the Lyapunov functions is established using Morse’s lemma, providing a rigorous stability foundation. To support practical implementation, a data-driven parameter tuning mechanism based on self-organizing maps (SOM) is integrated, allowing adaptive adjustment of controller coefficients while preserving Lyapunov stability conditions. Simulation results demonstrate suppression of chaotic regimes, smooth bifurcation-induced transitions between stable operating modes, and improved transient performance compared to benchmark adaptive control schemes. The proposed framework provides a structurally robust alternative for controlling nonlinear systems in uncertain and dynamically changing environments.
(This article belongs to the Topic A Real-World Application of Chaos Theory)

30 pages, 6201 KB  
Article
AFAD-MSA: Dataset and Models for Arabic Fake Audio Detection
by Elsayed Issa
Computation 2026, 14(1), 20; https://doi.org/10.3390/computation14010020 - 14 Jan 2026
Abstract
As generative speech synthesis produces near-human synthetic voices and reliance on online media grows, robust audio-deepfake detection is essential to fight misuse and misinformation. In this study, we introduce the Arabic Fake Audio Dataset for Modern Standard Arabic (AFAD-MSA), a curated corpus of authentic and synthetic Arabic speech designed to advance research on Arabic deepfake and spoofed-speech detection. The synthetic subset is generated with four state-of-the-art proprietary text-to-speech and voice-conversion models. Rich metadata—covering speaker attributes and generation information—is provided to support reproducibility and benchmarking. To establish reference performance, we trained three AASIST models and compared their performance to two baseline transformer detectors (Wav2Vec 2.0 and Whisper). On the AFAD-MSA test split, AASIST-2 achieved perfect accuracy, surpassing the baseline models. However, its performance declined under cross-dataset evaluation. These results underscore the importance of data construction. Detectors generalize best when exposed to diverse attack types. In addition, continual or contrastive training that interleaves bona fide speech with large, heterogeneous spoofed corpora will further improve detectors’ robustness.

21 pages, 5472 KB  
Article
Multifidelity Topology Design for Thermal–Fluid Devices via SEMDOT Algorithm
by Yiding Sun, Yun-Fei Fu, Shuzhi Xu and Yifan Guo
Computation 2026, 14(1), 19; https://doi.org/10.3390/computation14010019 - 12 Jan 2026
Abstract
Designing thermal–fluid devices that reduce peak temperature while limiting pressure loss is challenging because high-fidelity (HF) Navier–Stokes–convection simulations make direct HF-driven topology optimization computationally expensive. This study presents a two-dimensional, steady, laminar multifidelity topology design framework for thermal–fluid devices operating in a low-to-moderate Reynolds number regime. A computationally efficient low-fidelity (LF) Darcy–convection model is used for topology optimization, where SEMDOT decouples geometric smoothness from the analysis field to produce CAD-ready boundaries. The LF optimization minimizes a P-norm aggregated temperature subject to a prescribed volume fraction constraint; the inlet–outlet pressure difference and the P-norm parameter are varied to generate a diverse candidate set. All candidates are then evaluated using a steady incompressible HF Navier–Stokes–convection model in COMSOL 6.3 under a consistent operating condition (fixed flow; pressure drop reported as an output). In representative single- and multi-channel case studies, SEMDOT designs reduce the HF peak temperature (e.g., ~337 K to ~323 K) while also reducing the pressure drop (e.g., ~18.7 Pa to ~12.6 Pa) relative to conventional straight-channel layouts under the same operating point. Compared with a conventional RAMP-based pipeline under the tested settings, the proposed approach yields a more favorable Pareto distribution (normalized hypervolume 1.000 vs. 0.923).
(This article belongs to the Special Issue Advanced Topology Optimization: Methods and Applications)
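The P-norm temperature aggregation that the LF optimization minimizes can be sketched as follows. The formula is the standard P-norm surrogate for a maximum; the helper name, temperature values, and exponents are illustrative, not taken from the paper:

```python
import numpy as np

def pnorm_aggregate(T, P):
    """Smooth, differentiable surrogate for max(T): (sum T_i^P)^(1/P).

    Larger P tracks the true peak more closely but worsens conditioning,
    which is one reason varying P is a cheap way to seed diverse
    candidate designs.
    """
    T = np.asarray(T, dtype=float)
    return np.sum(T ** P) ** (1.0 / P)

T = np.array([300.0, 310.0, 325.0, 337.0])  # nodal temperatures, K
for P in (4, 8, 16, 32):
    assert pnorm_aggregate(T, P) >= T.max()   # P-norm bounds the max above...
# ...and the overestimate tightens as P grows:
assert pnorm_aggregate(T, 32) < pnorm_aggregate(T, 4)
```

In an optimization loop the aggregate replaces the non-differentiable max so that adjoint sensitivities remain well defined.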

19 pages, 28388 KB  
Article
Finite Element Analysis of Stress and Displacement in the Distal Femur: A Comparative Study of Normal and Osteoarthritic Bone Under Knee Flexion
by Kamonchat Trachoo, Inthira Chaiya and Din Prathumwan
Computation 2026, 14(1), 18; https://doi.org/10.3390/computation14010018 - 12 Jan 2026
Abstract
Osteoarthritis (OA) is a progressive degenerative joint disease that fundamentally alters the mechanical environment of the knee. This study utilizes a finite element framework to evaluate the biomechanical response of the distal femur in healthy and osteoarthritic conditions across critical functional postures. To isolate the bone’s inherent structural stiffness and avoid numerical artifacts, a free-body computational approach was implemented, omitting external surface fixations. The distal femur was modeled as a linearly elastic domain with material properties representing healthy tissue and OA-induced degradation. Simulations were performed under passive gravitational loading at knee flexion angles of 0°, 60°, and 90°. The results demonstrate that the mechanical response is highly sensitive to postural orientation, with peak von Mises stress consistently occurring at 60° of flexion for both models. Quantitative analysis revealed that the stiffer normal bone attracted significantly higher internal stress, with a reduction of over 30% in peak stress magnitude observed in the OA model at the most critical flexion angle. Total displacement magnitudes remained relatively stable across conditions, suggesting that OA-induced material softening primarily influences internal stress redistribution rather than global structural sag under passive loads. These findings provide a quantitative index of skeletal vulnerability, supporting the development of patient-specific orthopedic treatments and rehabilitation strategies.
(This article belongs to the Section Computational Biology)
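For readers unfamiliar with the von Mises measure reported above, a minimal sketch of the standard formula (equivalent stress from the deviatoric part of the Cauchy stress tensor; the load values below are illustrative only):

```python
import numpy as np

def von_mises(s):
    """Von Mises equivalent stress from a 3x3 Cauchy stress tensor."""
    dev = s - np.trace(s) / 3.0 * np.eye(3)   # deviatoric (shape-changing) part
    return np.sqrt(1.5 * np.sum(dev * dev))

# Uniaxial tension of magnitude sigma gives von Mises stress = sigma exactly.
sigma = 120.0e6  # Pa, illustrative
assert np.isclose(von_mises(np.diag([sigma, 0.0, 0.0])), sigma)
# Pure hydrostatic pressure produces zero equivalent stress.
assert np.isclose(von_mises(-50e6 * np.eye(3)), 0.0)
```

This is why von Mises stress is the usual scalar for comparing internal loading between the normal and OA bone models: it ignores the hydrostatic part and measures only distortion.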

11 pages, 732 KB  
Article
Approximate Analytical Solutions of Nonlinear Jerk Equations Using the Parameter Expansion Method
by Gamal M. Ismail, Galal M. Moatimid and Stylianos V. Kontomaris
Computation 2026, 14(1), 17; https://doi.org/10.3390/computation14010017 - 12 Jan 2026
Abstract
The Parameter Expansion Method (PEM) is employed to study nonlinear jerk equations, which are often difficult to solve because of their strong nonlinearity. The method provides high accuracy and broad applicability, enabling analytical insights and closed-form approximations. This study explores the use of He’s PEM to derive approximate analytical solutions of the nonlinear third-order jerk equation, a model commonly encountered in the analysis of complex dynamical systems across physics and engineering. Owing to the strong nonlinearity inherent in jerk equations, exact solutions are often unattainable. The PEM provides a simple, effective framework by expanding the solution with respect to an embedding parameter, allowing accurate approximations without the need for small parameters or linearization. The method’s reliability and precision are validated through comparisons with numerical simulations, demonstrating its practicality and robustness in tackling nonlinear problems. The results indicate that the PEM yields highly accurate approximations of the nonlinear jerk equation that closely match numerical simulations, while offering greater simplicity and efficiency than several contemporary analytical techniques.
(This article belongs to the Special Issue Nonlinear System Modelling and Control)
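The bookkeeping behind the parameter expansion can be sketched on an illustrative cubic-nonlinearity jerk equation. The particular equation and first-order frequency below are a textbook-style example of the technique, not necessarily one of the cases treated in the paper:

```latex
% Assumed illustrative jerk equation: x''' + c x' + eps (x')^3 = 0, c = 1.
% Embed a parameter p and expand both the solution and the coefficient:
\dddot{x} + c\,\dot{x} + \varepsilon p\,\dot{x}^{3} = 0,
\qquad x = x_0 + p\,x_1 + \cdots,
\qquad c = \omega^{2} + p\,c_1 + \cdots
% Order p^0:  x_0''' + omega^2 x_0' = 0, so x_0 = A cos(omega t).
% Order p^1:  x_1''' + omega^2 x_1' = -c_1 x_0' - eps (x_0')^3;
% eliminating the secular sin(omega t) term gives
c_1 = -\tfrac{3}{4}\,\varepsilon A^{2}\omega^{2}.
% Setting p = 1 and c = 1 then yields the amplitude-dependent frequency:
\omega^{2}\bigl(1-\tfrac{3}{4}\,\varepsilon A^{2}\bigr) = 1
\quad\Longrightarrow\quad
\omega = \bigl(1-\tfrac{3}{4}\,\varepsilon A^{2}\bigr)^{-1/2}.
```

The key PEM move is expanding the coefficient c alongside the solution, so the frequency correction falls out of the secularity condition without any small-parameter assumption.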

12 pages, 1115 KB  
Communication
Linguistic Influence on Multidimensional Word Embeddings: Analysis of Ten Languages
by Anna V. Aleshina, Andrey L. Bulgakov, Yanliang Xin and Larisa S. Skrebkova
Computation 2026, 14(1), 16; https://doi.org/10.3390/computation14010016 - 9 Jan 2026
Abstract
Understanding how linguistic typology shapes multilingual embeddings is important for cross-lingual NLP. We examine static MUSE word embeddings for ten diverse languages (English, Russian, Chinese, Arabic, Indonesian, German, Lithuanian, Hindi, Tajik, and Persian). Using pairwise cosine distances, Random Forest classification, and UMAP visualization, we find that language identity and script type largely determine embedding clusters, with morphological complexity affecting cluster compactness and lexical overlap connecting clusters. The Random Forest model predicts language labels with high accuracy (≈98%), indicating strong language-specific patterns in embedding space. These results highlight script, morphology, and lexicon as key factors influencing multilingual embedding structures, informing linguistically aware design of cross-lingual models.
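The core observation that language identity dominates embedding geometry can be illustrated with a minimal sketch. The data here is synthetic (invented centroids, not real MUSE vectors), and a nearest-centroid rule stands in for the paper's Random Forest classifier:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for MUSE vectors: three "languages", each a cloud of
# 300-dim word vectors around a language-specific centroid.
langs = {name: rng.normal(loc=mu, scale=0.5, size=(200, 300))
         for name, mu in [("en", 0.0), ("ru", 1.0), ("zh", -1.0)]}
cents = {name: X.mean(axis=0) for name, X in langs.items()}

def nearest(x, cents):
    """Label of the closest language centroid in Euclidean distance."""
    return min(cents, key=lambda k: np.linalg.norm(x - cents[k]))

correct = total = 0
for name, X in langs.items():
    for x in X:
        total += 1
        correct += (nearest(x, cents) == name)
acc = correct / total
assert acc > 0.95   # language identity separates the clouds cleanly
```

Even this trivial classifier reaches near-perfect accuracy when clusters are language-dominated, which is the same qualitative effect the ≈98% Random Forest result reflects.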

21 pages, 1858 KB  
Article
Numerical Simulation of Diffusion in Cylindrical Pores: The Influence of Pore Radius on Particle Capture Kinetics
by Valeriy E. Arkhincheev, Bair V. Khabituev, Daniil F. Deriugin and Stanislav P. Maltsev
Computation 2026, 14(1), 15; https://doi.org/10.3390/computation14010015 - 8 Jan 2026
Abstract
The diffusion and trapping of particles in complex porous media are fundamental processes in materials science and bioengineering. This study systematically investigates the influence of pore radius on particle capture kinetics within a three-dimensional cylindrical pore containing randomly distributed absorbing traps. Numerical simulations were performed for a wide range of pore radii (from 3a to 81a, where a is the minimal length scale of the problem, in arbitrary units) and trap counts M (from 100 to 5090, bounds set by the pore geometry) using a random walk algorithm. The particle lifetime (τ), characterizing the capture rate, was calculated and analyzed. Results reveal three distinct capture regimes dependent on trap concentration: a diffusion-limited regime at low M (<1000), a transition regime at medium M (1000 < M < 2000), and a trap-density-dominated saturation regime at high M (>2000). For each regime, optimal approximating functions for τ(M) were identified. Furthermore, empirical relationships between the approximating coefficients and the pore radius were derived, enabling the prediction of particle lifetimes. The findings demonstrate that while the pore radius significantly impacts capture kinetics at low trap densities, its influence diminishes as trap concentration increases, converging towards universal behavior dominated by trap density.
(This article belongs to the Section Computational Engineering)
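The random walk setup this abstract describes can be sketched in a few lines. This is a toy reimplementation of the idea only (the helper name, lattice size, and parameters are ours, not the authors'): walkers on a cubic lattice inside a cylinder with rejecting walls and a periodic axis, absorbed on randomly placed trap sites.

```python
import numpy as np

rng = np.random.default_rng(7)

def mean_lifetime(radius, n_traps, n_walkers=200, max_steps=5000):
    """Mean number of lattice steps before a walker lands on a trap."""
    L = 40  # periodic length along the pore axis
    traps = set()
    while len(traps) < n_traps:
        x = int(rng.integers(-radius, radius + 1))
        y = int(rng.integers(-radius, radius + 1))
        if x * x + y * y <= radius * radius:       # trap must lie in the pore
            traps.add((x, y, int(rng.integers(L))))
    moves = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
    lifetimes = []
    for _ in range(n_walkers):
        px = py = pz = 0                           # start on the pore axis
        for t in range(1, max_steps + 1):
            dx, dy, dz = moves[rng.integers(6)]
            nx, ny = px + dx, py + dy
            if nx * nx + ny * ny <= radius * radius:   # reject wall crossings
                px, py, pz = nx, ny, (pz + dz) % L
            if (px, py, pz) in traps:
                lifetimes.append(t)
                break
        else:
            lifetimes.append(max_steps)            # censored at the step cap
    return float(np.mean(lifetimes))

# Denser traps capture walkers faster, as in the low-M diffusion regime.
assert mean_lifetime(5, 400) < mean_lifetime(5, 20)
```

Sweeping `n_traps` and `radius` in such a loop and fitting τ(M) per regime is the shape of the study's numerical experiment, at vastly reduced scale.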

20 pages, 10445 KB  
Article
Ab Initio Computational Investigations of Low-Lying Electronic States of Yttrium Lithide and Scandium Lithide
by Jean Tabet, Nancy Zgheib, Sylvie Magnier and Fadia Taher
Computation 2026, 14(1), 14; https://doi.org/10.3390/computation14010014 - 8 Jan 2026
Abstract
Ab initio studies using CASSCF/MRCI calculations have been performed to investigate the spectroscopic properties of the YLi and ScLi molecules. Our calculations have computed 25 singlet and triplet states for YLi and 37 electronic states for ScLi. The lowest-lying states, including the ground state ¹Σ⁺ of YLi, have been investigated for the first time. The spin–orbit coupling in YLi has also been assessed from the splitting between Ω components generated from the lowest-lying triplet Λ–S states. Regarding ScLi, the ground state is found to be the (1)³Δ state. Spectroscopic constants, energy levels at equilibrium, permanent dipole moments, and transition dipole moments have also been calculated. The potential energy curves for all calculated states have been displayed out to large internuclear distances. In both ScLi and YLi, the potential energy curves show a small dissociation energy for the lowest states (1)¹,³Δ, (1)¹,³Π, and (1)¹,³Σ⁺.
(This article belongs to the Special Issue Feature Papers in Computational Chemistry)

33 pages, 8095 KB  
Article
Numerical Error Analysis of the Poisson Equation Under RHS Inaccuracies in Particle-in-Cell Simulations
by Kai Zhang, Tao Xiao, Weizong Wang and Bijiao He
Computation 2026, 14(1), 13; https://doi.org/10.3390/computation14010013 - 7 Jan 2026
Abstract
Particle-in-Cell (PIC) simulations require accurate solutions of the electrostatic Poisson equation, yet accuracy often degrades near irregular Dirichlet boundaries on Cartesian meshes. While prior work has focused on left-hand-side (LHS) discretization errors, the impact of right-hand-side (RHS) inaccuracies arising from charge deposition near boundaries remains largely unexplored. This study analyzes numerical errors induced by underestimated RHS values at near-boundary nodes when using embedded finite difference schemes with linear and quadratic boundary treatments. Analytical results in one dimension and truncation error analyses in two dimensions show that RHS inaccuracies affect the two schemes in fundamentally different ways: they reduce boundary-induced errors in the linear scheme but introduce zeroth-order truncation errors in the quadratic scheme, leading to larger global errors. Numerical experiments in one, two, and three dimensions confirm these predictions. In two-dimensional tests, RHS inaccuracies reduce the L∞ error of the linear scheme by a factor of 2–3, while increasing the quadratic-scheme error several-fold, and in some cases by nearly an order of magnitude, with both schemes retaining second-order global convergence. A simple δ̄-based RHS calibration is proposed and shown to effectively restore the accuracy of the quadratic scheme.
(This article belongs to the Section Computational Engineering)
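The effect of an underestimated RHS at a near-boundary node can be sketched on a one-dimensional Poisson problem. This is a generic second-order finite-difference setup of ours (standard scheme, not the paper's embedded-boundary treatments), with the RHS at the first interior node deliberately scaled down to mimic charge lost to deposition next to a Dirichlet wall:

```python
import numpy as np

# -u'' = f on (0,1), u(0) = u(1) = 0, exact solution u = sin(pi x).
def solve(n, rhs_scale_at_wall=1.0):
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)[1:-1]       # interior nodes only
    f = np.pi ** 2 * np.sin(np.pi * x)
    f[0] *= rhs_scale_at_wall                    # near-boundary RHS inaccuracy
    A = (np.diag(2.0 * np.ones(n - 1))
         - np.diag(np.ones(n - 2), 1)
         - np.diag(np.ones(n - 2), -1)) / h ** 2
    return np.linalg.solve(A, f)

# Perturbation induced by a 50% RHS underestimate at the wall node:
pert = np.abs(solve(64, 0.5) - solve(64, 1.0))
assert pert.argmax() == 0      # largest right next to the boundary
# ...and it shrinks under mesh refinement (the scheme stays convergent):
pert32 = np.abs(solve(32, 0.5) - solve(32, 1.0)).max()
assert pert.max() < pert32
```

Comparing the perturbed and unperturbed discrete solutions (rather than the exact solution) isolates the RHS-induced error from the ordinary discretization error, which is the kind of decomposition the paper's analysis makes precise.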
Show Figures

Graphical abstract
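The paper's embedded boundary schemes are more elaborate, but the basic setting — a finite-difference Poisson solve in which both the deposited charge and the Dirichlet boundary data enter through the right-hand side — can be sketched in a few lines. This is a minimal 1D illustration with a manufactured solution, not the authors' scheme:

```python
import numpy as np

def solve_poisson_1d(f, n, u0=0.0, u1=0.0):
    """Solve -u'' = f on (0, 1) with Dirichlet BCs u(0)=u0, u(1)=u1,
    using second-order central differences on a uniform mesh."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)           # interior nodes
    # Tridiagonal system: (-u[i-1] + 2u[i] - u[i+1]) / h^2 = f(x[i])
    A = (np.diag(2.0 * np.ones(n))
         - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    b = f(x)
    b[0] += u0 / h**2                        # boundary data enters the RHS
    b[-1] += u1 / h**2
    return x, np.linalg.solve(A, b)

# Manufactured solution: u = sin(pi x), so f = pi^2 sin(pi x).
x, u = solve_poisson_1d(lambda x: np.pi**2 * np.sin(np.pi * x), n=100)
err = np.max(np.abs(u - np.sin(np.pi * x)))  # L-infinity error
```

Perturbing `b` at the nodes adjacent to the boundary mimics the deposition-induced RHS inaccuracy the paper analyzes; halving `h` shrinks `err` roughly fourfold, consistent with second-order convergence.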

22 pages, 8949 KB  
Article
A Physics-Informed Neural Network Aided Venturi–Microwave Co-Sensing Method for Three-Phase Metering
by Jinhua Tan, Yuxiao Yuan, Ying Xu, Jingya Wang, Zirui Song, Rongji Zuo, Zhengyang Chen and Chao Yuan
Computation 2026, 14(1), 12; https://doi.org/10.3390/computation14010012 - 5 Jan 2026
Viewed by 370
Abstract
Addressing the challenges of online measurement of oil-gas-water three-phase flow under high gas–liquid ratio (GVF > 90%) conditions (fire-driven mining, gas injection mining, natural gas mining), which rely heavily on radioactive sources, this study proposes an integrated, radiation-source-free three-phase measurement scheme utilizing a [...] Read more.
Addressing the challenges of online measurement of oil–gas–water three-phase flow under high gas–liquid ratio (GVF > 90%) conditions (fire-driven mining, gas injection mining, natural gas mining), which rely heavily on radioactive sources, this study proposes an integrated, radiation-source-free three-phase measurement scheme utilizing a “Venturi tube–microwave resonator”. Additionally, a physics-informed neural network (PINN) is introduced to predict the volumetric flow rate of oil–gas–water three-phase flow. Methodologically, the main features are the Venturi differential pressure signal (ΔP) and the microwave resonance amplitude (V). A PINN model is constructed by embedding an improved L-M model, a cross-sectional water content model, and physical constraint equations into the loss function, thereby maintaining physical consistency and generalization ability under small sample sizes and across different operating conditions. Through experiments on oil–gas–water three-phase flow, the PINN model is compared with an artificial neural network (ANN) and a support vector machine (SVM). Under high gas–liquid ratio conditions (GVF > 90%), the relative errors (REL) of the PINN in predicting the volumetric flow rates of oil, gas, and water were 0.1865, 0.0397, and 0.0619, respectively, outperforming both ANN and SVM while satisfying the physical constraints. These results indicate that, under the present laboratory conditions, the PINN model predicts oil–gas–water three-phase flow rates well. Field deployment, however, will require experiments spanning a wider range of operating conditions together with long-term stability testing. This study provides a new technological solution for developing three-phase measurement and machine learning models that are radiation-free, real-time, and engineering-feasible. Full article
Show Figures

Figure 1
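The paper embeds an improved L-M model, a water-content model, and physical constraint equations into the loss function; the general idea can be sketched as a data misfit plus a weighted physics-residual penalty. The total-flow balance below is a hypothetical stand-in for those constraints, not the authors' formulation:

```python
import numpy as np

def pinn_loss(pred, target, lam=1.0):
    """Composite physics-informed loss: data misfit plus a
    physics-residual penalty weighted by lam.

    pred/target: arrays of shape (n, 3) holding the [oil, gas, water]
    volumetric flow rates. The (hypothetical) constraint used here is
    that the predicted phase flows must sum to the measured total."""
    mse_data = np.mean((pred - target) ** 2)
    # Physics residual: predicted total flow vs. measured total flow.
    residual = pred.sum(axis=1) - target.sum(axis=1)
    mse_phys = np.mean(residual ** 2)
    return mse_data + lam * mse_phys
```

During training, the residual term penalizes predictions that fit the data pointwise but violate the balance, which is what keeps outputs physically consistent under small sample sizes.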

45 pages, 1557 KB  
Article
A Hybrid Gradient-Based Optimiser for Solving Complex Engineering Design Problems
by Jamal Zraqou, Riyad Alrousan, Zaid Khrisat, Faten Hamad, Niveen Halalsheh and Hussam Fakhouri
Computation 2026, 14(1), 11; https://doi.org/10.3390/computation14010011 - 4 Jan 2026
Cited by 1 | Viewed by 487
Abstract
This paper proposes JADEGBO, a hybrid gradient-based metaheuristic for solving complex single- and multi-constraint engineering design problems as well as cost-sensitive security optimisation tasks. The method combines Adaptive Differential Evolution with Optional External Archive (JADE), which provides self-adaptive exploration through p-best mutation, [...] Read more.
This paper proposes JADEGBO, a hybrid gradient-based metaheuristic for solving complex single- and multi-constraint engineering design problems as well as cost-sensitive security optimisation tasks. The method combines Adaptive Differential Evolution with Optional External Archive (JADE), which provides self-adaptive exploration through p-best mutation, an external archive, and success-based parameter learning, with the Gradient-Based Optimiser (GBO), which contributes Newton-inspired gradient search rules and a local escaping operator. In the proposed scheme, JADE is first employed to discover promising regions of the search space, after which GBO performs an intensified local refinement of the best individuals inherited from JADE. The performance of JADEGBO is assessed on the CEC2017 single-objective benchmark suite and compared against a broad set of classical and recent metaheuristics. Statistical indicators, convergence curves, box plots, histograms, sensitivity analyses, and scatter plots show that the hybrid typically attains the best or near-best mean fitness, exhibits low run-to-run variance, and maintains a favourable balance between exploration and exploitation across rotated, shifted, and composite landscapes. To demonstrate practical relevance, JADEGBO is further applied to the following four well-known constrained engineering design problems: welded beam, pressure vessel, speed reducer, and three-bar truss design. The algorithm consistently produces feasible high-quality designs and closely matches or improves upon the best reported results while keeping computation time competitive. Full article
(This article belongs to the Section Computational Engineering)
Show Figures

Figure 1
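JADE's signature variation operator, DE/current-to-pbest/1, is compact enough to sketch. The version below omits the external archive and the per-individual success-based adaptation of F for brevity, so it is an illustration of the mutation alone, not the full JADEGBO exploration stage:

```python
import numpy as np

rng = np.random.default_rng(0)

def current_to_pbest_mutation(pop, fitness, F=0.5, p=0.2):
    """DE/current-to-pbest/1 mutation as used in JADE: each individual
    moves toward a random member of the best 100p% of the population,
    plus a scaled difference of two random individuals."""
    n, d = pop.shape
    k = max(1, int(np.ceil(p * n)))
    pbest_idx = np.argsort(fitness)[:k]      # indices of the top-p fraction
    mutants = np.empty_like(pop)
    for i in range(n):
        pb = pop[rng.choice(pbest_idx)]
        r1, r2 = rng.choice(n, size=2, replace=False)
        mutants[i] = pop[i] + F * (pb - pop[i]) + F * (pop[r1] - pop[r2])
    return mutants
```

In the full algorithm each mutant is crossed over with its parent and accepted greedily; GBO's gradient-style refinement is then applied to the best individuals inherited from this stage.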

19 pages, 1801 KB  
Article
Do LLMs Speak BPMN? An Evaluation of Their Process Modeling Capabilities Based on Quality Measures
by Panagiotis Drakopoulos, Panagiotis Malousoudis, Nikolaos Nousias, George Tsakalidis and Kostas Vergidis
Computation 2026, 14(1), 10; https://doi.org/10.3390/computation14010010 - 4 Jan 2026
Viewed by 856
Abstract
Large Language Models (LLMs) are emerging as powerful tools for automating business process modeling, promising to streamline the translation of textual process descriptions into Business Process Model and Notation (BPMN) diagrams. However, the extent to which these Al systems can produce high-quality BPMN [...] Read more.
Large Language Models (LLMs) are emerging as powerful tools for automating business process modeling, promising to streamline the translation of textual process descriptions into Business Process Model and Notation (BPMN) diagrams. However, the extent to which these AI systems can produce high-quality BPMN models has not yet been rigorously evaluated. This paper presents an early evaluation of five LLM-powered BPMN generation tools that automatically convert textual process descriptions into BPMN models. To assess the external quality of these AI-generated models, we introduce a novel structured evaluation framework that scores each BPMN diagram across three key process model quality dimensions: clarity, correctness, and completeness, covering both accuracy and diagram understandability. Using this framework, we conducted experiments where each tool was tasked with modeling the same set of textual process scenarios, and the resulting diagrams were systematically scored against these criteria. This approach provides a consistent and repeatable evaluation procedure and offers a new lens for comparing LLM-based modeling capabilities. Given the focused scope of the study, the results should be interpreted as an exploratory benchmark that surfaces initial observations about tool performance rather than definitive conclusions. Our findings reveal that while current LLM-based tools can produce BPMN diagrams that capture the main elements of a process description, they often exhibit errors such as missing steps, inconsistent logic, or modeling rule violations, highlighting limitations in achieving fully correct and complete models. The clarity and readability of the generated diagrams also vary, indicating that these AI models are still maturing in generating easily interpretable process flows. We conclude that although LLMs show promise in automating BPMN modeling, significant improvements are needed for them to consistently generate both syntactically and semantically valid process models. Full article
Show Figures

Figure 1

14 pages, 2141 KB  
Communication
A Consumer Digital Twin for Energy Demand Prediction: Development and Implementation Under the SENDER Project (HORIZON 2020)
by Dimitra Douvi, Eleni Douvi, Jason Tsahalis and Haralabos-Theodoros Tsahalis
Computation 2026, 14(1), 9; https://doi.org/10.3390/computation14010009 - 3 Jan 2026
Cited by 1 | Viewed by 399
Abstract
This paper presents the development and implementation of a consumer Digital Twin (DT) for energy demand prediction under the SENDER (Sustainable Consumer Engagement and Demand Response) project, funded by HORIZON 2020. This project aims to engage consumers in the energy sector with innovative [...] Read more.
This paper presents the development and implementation of a consumer Digital Twin (DT) for energy demand prediction under the SENDER (Sustainable Consumer Engagement and Demand Response) project, funded by HORIZON 2020. This project aims to engage consumers in the energy sector with innovative energy service applications to achieve proactive Demand Response (DR) and optimized usage of Renewable Energy Sources (RES). The proposed DT model is designed to digitally represent occupant behaviors and energy consumption patterns using Artificial Neural Networks (ANN), which enable continuous learning by processing real-time and historical data in different pilot sites and seasons. The DT development incorporates the International Energy Agency (IEA)—Energy in Buildings and Communities (EBC) Annex 66 and Drivers-Needs-Actions-Systems (DNAS) framework to standardize occupant behavior modeling. The research methodology consists of the following steps: (i) a mock-up simulation environment for three pilot sites was created, (ii) the DT was trained and calibrated using the artificial data from the previous step, and (iii) the DT model was validated with real data from the Alginet pilot site in Spain. Results showed a strong correlation between DT predictions and mock-up data, with a maximum deviation of ±2%. Finally, a set of selected Key Performance Indicators (KPIs) was defined and categorized in order to evaluate the system’s technical effectiveness. Full article
Show Figures

Graphical abstract

24 pages, 2362 KB  
Article
Attention Bidirectional Recurrent Neural Zero-Shot Semantic Classifier for Emotional Footprint Identification
by Karthikeyan Jagadeesan and Annapurani Kumarappan
Computation 2026, 14(1), 8; https://doi.org/10.3390/computation14010008 - 2 Jan 2026
Viewed by 335
Abstract
Exploring emotions in organization settings, particularly in feedback on organizational welfare programs, is critical for understanding employee experiences and enhancing organizational policies. Recognizing emotions from a conversation (i.e., leaving an emotional footprint) is a predominant task for a machine to comprehend the full [...] Read more.
Exploring emotions in organizational settings, particularly in feedback on organizational welfare programs, is critical for understanding employee experiences and enhancing organizational policies. Recognizing emotions from a conversation (i.e., its emotional footprint) is a key task for a machine to comprehend the full context of the conversation. While fine-tuning of pre-trained models has invariably provided state-of-the-art results in emotion footprint recognition tasks, the prospect of a zero-shot learned model in this sphere is, on the whole, unexplored. The objective is to identify the emotional footprint of the members participating in a conversation after it has ended, with improved accuracy, shorter training time, and a minimal error rate. To address these gaps, this work proposes a method called Attention Bidirectional Recurrent Neural Zero-Shot Semantic Classifier (ABRN-ZSSC) for emotional footprint identification. The method comprises two stages. First, raw data from a Two-Party Conversation with Emotional Footprint and Emotional Intensity are passed through an Attention Bidirectional Recurrent Neural Network to identify the emotional footprint of each party near the conclusion of the conversation; second, a Zero-Shot Learning-based classifier is trained on the identified footprints to classify emotions accurately and precisely. We verify the utility of these approaches (i.e., emotional footprint identification and classification) through an extensive experimental evaluation on two corpora across four aspects: training time, accuracy, precision, and error rate, for varying sample sizes. Experimental results demonstrate that the ABRN-ZSSC method outperforms two existing baseline models in emotion inference tasks across the dataset, improving precision by 10%, accuracy by 17%, and recall by 8%, while reducing training time by 19% and error rate by 18% relative to conventional methods. Full article
(This article belongs to the Section Computational Social Science)
Show Figures

Figure 1

20 pages, 6827 KB  
Article
Multiphysics Modelling and Experimental Validation of Road Tanker Dynamics: Stress Analysis and Material Characterization
by Conor Robb, Gasser Abdelal, Pearse McKeefry and Conor Quinn
Computation 2026, 14(1), 7; https://doi.org/10.3390/computation14010007 - 2 Jan 2026
Viewed by 438
Abstract
Crossland Tankers is a leading manufacturer of bulk-load road tankers in Northern Ireland. These tankers transport up to forty thousand litres of liquid over long distances across diverse road conditions. Liquid sloshing within the tank has a significant impact on driveability and the [...] Read more.
Crossland Tankers is a leading manufacturer of bulk-load road tankers in Northern Ireland. These tankers transport up to forty thousand litres of liquid over long distances across diverse road conditions. Liquid sloshing within the tank has a significant impact on driveability and the tanker’s lifespan. This study introduces a novel Multiphysics model combining Smooth Particle Hydrodynamics (SPH) and Finite Element Analysis (FEA) to simulate fluid–structure interactions in a full-scale road tanker, validated with real-world road test data. The model reveals high-stress zones under braking and turning, with peak stresses at critical chassis locations, offering design insights for weight reduction and enhanced safety. Results demonstrate the approach’s effectiveness in optimising tanker design, reducing prototyping costs, and improving longevity, providing a valuable computational tool for industry applications. Full article
(This article belongs to the Section Computational Engineering)
Show Figures

Figure 1

18 pages, 12862 KB  
Review
Advances in Single-Cell Sequencing for Understanding and Treating Kidney Disease
by Jose L. Agraz, Amit Verma and Claudia M. Agraz
Computation 2026, 14(1), 6; https://doi.org/10.3390/computation14010006 - 2 Jan 2026
Cited by 1 | Viewed by 1154
Abstract
The fields of medical diagnostics, nephrology, and the sequencing of cellular genetic material are pivotal for precise quantification of kidney diseases. Single-cell sequencing, enhanced by automation and software tools, enables efficient examination of biopsies at the individual cell level. This approach shows the [...] Read more.
The fields of medical diagnostics, nephrology, and the sequencing of cellular genetic material are pivotal for precise quantification of kidney diseases. Single-cell sequencing, enhanced by automation and software tools, enables efficient examination of biopsies at the individual cell level. This approach reveals the complex cellular mosaic that shapes organ function. By quantifying gene expression following injury, single-cell analysis provides insight into disease progression. In this review, new developments in single-cell analysis methods, spatial integration of single-cell analysis, single-nucleus RNA sequencing, and emerging methods, including expression quantitative trait loci, whole-genome sequencing, and whole-exome sequencing in nephrology, are discussed. These advancements are poised to enhance kidney disease diagnostic processes, therapeutic strategies, and patient prognosis. Full article
Show Figures

Figure 1

29 pages, 4367 KB  
Article
SARIMA vs. Prophet: Comparative Efficacy in Forecasting Traffic Accidents Across Ecuadorian Provinces
by Wilson Chango, Ana Salguero, Tatiana Landivar, Roberto Vásconez, Geovanny Silva, Pedro Peñafiel-Arcos, Lucía Núñez and Homero Velasteguí-Izurieta
Computation 2026, 14(1), 5; https://doi.org/10.3390/computation14010005 - 31 Dec 2025
Viewed by 783
Abstract
This study aimed to evaluate the comparative predictive efficacy of the SARIMA statistical model and the Prophet machine learning model for forecasting monthly traffic accidents across the 24 provinces of Ecuador, addressing a critical research gap in model selection for geographically and socioeconomically [...] Read more.
This study aimed to evaluate the comparative predictive efficacy of the SARIMA statistical model and the Prophet machine learning model for forecasting monthly traffic accidents across the 24 provinces of Ecuador, addressing a critical research gap in model selection for geographically and socioeconomically heterogeneous regions. By integrating classical time series modeling with algorithmic decomposition techniques, the research sought to determine whether a universally superior model exists or if predictive performance is inherently context-dependent. Monthly accident data from January 2013 to June 2025 were analyzed using a rolling-window evaluation framework. Model accuracy was assessed through Mean Absolute Percentage Error (MAPE) and Root Mean Square Error (RMSE) metrics to ensure consistency and comparability across provinces. The results revealed a global tie, with 12 provinces favoring SARIMA and 12 favoring Prophet, indicating the absence of a single dominant model. However, regional patterns of superiority emerged: Prophet achieved exceptional precision in coastal and urban provinces with stationary and high-volume time series—such as Guayas, which recorded the lowest MAPE (4.91%)—while SARIMA outperformed Prophet in the Andean highlands, particularly in non-stationary, medium-to-high-volume provinces such as Tungurahua (MAPE 6.07%) and Pichincha (MAPE 13.38%). Computational instability in MAPE was noted for provinces with extremely low accident counts (e.g., Galápagos, Carchi), though RMSE values remained low, indicating a metric rather than model limitation. Overall, the findings invalidate the notion of a universally optimal model and underscore the necessity of adopting adaptive, region-specific modeling frameworks that account for local geographic, demographic, and structural factors in predictive road safety analytics. Full article
Show Figures

Graphical abstract
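The evaluation protocol above — rolling-window refits scored by MAPE and RMSE — is model-agnostic and easy to reproduce. The sketch below uses a placeholder `fit_forecast` callable standing in for a SARIMA or Prophet fit-and-predict step; it also makes the reported MAPE instability visible, since the percentage error divides by the actual count and blows up for near-zero provinces while RMSE stays bounded:

```python
import numpy as np

def mape(actual, forecast):
    """Mean Absolute Percentage Error (%). Unstable when actual values
    approach zero -- a metric limitation, not a model limitation."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

def rmse(actual, forecast):
    """Root Mean Square Error, in the units of the series."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return float(np.sqrt(np.mean((actual - forecast) ** 2)))

def rolling_origin_eval(series, fit_forecast, window, horizon=1):
    """Rolling-origin evaluation: refit on each trailing window and
    score the next `horizon` points. `fit_forecast(train, horizon)`
    returns the model's forecast for the held-out points."""
    errors = []
    for start in range(len(series) - window - horizon + 1):
        train = series[start:start + window]
        actual = series[start + window:start + window + horizon]
        fc = fit_forecast(train, horizon)
        errors.append((mape(actual, fc), rmse(actual, fc)))
    return errors
```

Averaging the per-window pairs for each province and each model reproduces the kind of province-level MAPE/RMSE comparison the study reports.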

17 pages, 5230 KB  
Article
Experimental and Numerical Investigation of Hydrodynamic Characteristics of Aquaculture Nets: The Critical Role of Solidity Ratio in Biofouling Assessment
by Wei Liu, Lei Wang, Yongli Liu, Yuyan Li, Guangrui Qi and Dawen Mao
Computation 2026, 14(1), 4; https://doi.org/10.3390/computation14010004 - 30 Dec 2025
Viewed by 401
Abstract
Biofouling on aquaculture netting increases hydrodynamic drag and restricts water exchange across net cages. The solidity ratio is introduced as a quantitative parameter to characterize fouling severity. Towing tank experiments and computational fluid dynamics (CFD) simulations were used to assess the hydrodynamic behavior [...] Read more.
Biofouling on aquaculture netting increases hydrodynamic drag and restricts water exchange across net cages. The solidity ratio is introduced as a quantitative parameter to characterize fouling severity. Towing tank experiments and computational fluid dynamics (CFD) simulations were used to assess the hydrodynamic behavior of netting under different fouling conditions. Experimental results indicated a nonlinear increase in drag force with increasing solidity. At a flow velocity of 0.90 m/s, the drag force increased by 112.2%, 195.1%, and 295.7% for netting with solidity ratios of 0.445, 0.733, and 0.787, respectively, compared to clean netting (Sn = 0.211). The drag coefficient remained stable within 1.445–1.573 across Reynolds numbers (Re) of 995–2189. Numerical simulations demonstrated the evolution of flow fields around netting, including jet flow formation in mesh openings and reverse flow regions and vortex structures behind knots. Under high solidity (Sn = 0.733–0.787), complex wake patterns such as dual-peak vortex streets appeared. This study therefore confirmed that the solidity ratio is an effective comprehensive parameter for evaluating biofouling effects, providing a theoretical basis for antifouling design and cleaning strategy development for aquaculture cages. Full article
Show Figures

Figure 1
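The drag coefficient reported above follows the standard definition C_d = F / (½ρAv²), where the solidity ratio enters through the projected twine area A. The sketch below uses illustrative values, not the study's measurements:

```python
def drag_coefficient(F, v, A, rho=1000.0):
    """C_d = F / (0.5 * rho * A * v**2): drag force F in N, towing
    speed v in m/s, projected twine area A in m^2 (outline area times
    the solidity ratio Sn), water density rho in kg/m^3."""
    return F / (0.5 * rho * A * v**2)

def fouling_drag_increase(F_fouled, F_clean):
    """Percentage increase in drag relative to clean netting, as
    quoted for the fouled solidity ratios in the abstract."""
    return 100.0 * (F_fouled - F_clean) / F_clean
```

Because A scales with Sn, a roughly stable C_d across the tested Re range means the nonlinear drag growth with fouling is carried mostly by the increase in projected area.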

29 pages, 8003 KB  
Article
Reaction-Diffusion Model of CAR-T Cell Therapy in Solid Tumours with Antigen Escape
by Maxim V. Polyakov and Elena I. Tuchina
Computation 2026, 14(1), 3; https://doi.org/10.3390/computation14010003 - 30 Dec 2025
Viewed by 655
Abstract
Developing effective CAR-T cell therapy for solid tumours remains challenging because of biological barriers such as antigen escape and an immunosuppressive microenvironment. The aim of this study is to develop a mathematical model of the spatio-temporal dynamics of tumour processes in order to [...] Read more.
Developing effective CAR-T cell therapy for solid tumours remains challenging because of biological barriers such as antigen escape and an immunosuppressive microenvironment. The aim of this study is to develop a mathematical model of the spatio-temporal dynamics of tumour processes in order to assess key factors that limit treatment efficacy. We propose a reaction–diffusion model described by a system of partial differential equations for the densities of tumour cells and CAR-T cells, the concentration of immune inhibitors, and the degree of antigen escape. The methods of investigation include stability analysis and numerical solution of the model using a finite-difference scheme. The simulations show that antigen escape produces a resistant tumour core and relapse after an initial regression; increasing the escape rate from γ = 0.001 to 0.1 increases the final tumour volume at t = 100 days from approximately 35.3 a.u. to 36.2 a.u. Parameter mapping further indicates that for γ ≲ 0.01 tumour control can be achieved at moderate killing rates (k_CT ≈ 1 day⁻¹), whereas for γ ≳ 0.05 comparable control requires k_CT ≳ 2.5 day⁻¹. Repeated CAR-T administration improves durability: the residual normalised tumour volume at t = 100 days decreases from approximately 4.5 after a single infusion to approximately 0.9 (double) and approximately 0.5 (triple), with a saturating benefit for further intensification. We conclude that the proposed model is a valuable tool for analysing and optimising CAR-T therapy protocols, and that our results highlight the need for combined strategies aimed at overcoming antigen escape. Full article
(This article belongs to the Section Computational Biology)
Show Figures

Figure 1
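The full model couples several PDEs, but the finite-difference machinery can be illustrated on a single-species caricature, du/dt = D u_xx + r u(1 − u) − k u, stepped with explicit Euler and zero-flux boundaries. This simplified equation is an assumption for illustration, not the authors' system:

```python
import numpy as np

def rd_step(u, dt, dx, D=0.01, r=0.1, k=0.0):
    """One explicit Euler step of a single-species reaction-diffusion
    caricature with logistic growth (rate r), linear killing (rate k),
    and zero-flux (Neumann) boundaries on a 1D grid of spacing dx."""
    lap = np.empty_like(u)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    lap[0] = (u[1] - u[0]) / dx**2           # zero-flux left end
    lap[-1] = (u[-2] - u[-1]) / dx**2        # zero-flux right end
    return u + dt * (D * lap + r * u * (1 - u) - k * u)
```

Explicit stepping is only stable for dt ≲ dx²/(2D); in the paper's system, the escape rate γ and the CAR-T killing rate k_CT would enter as additional coupled reaction terms rather than the single constant k used here.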

37 pages, 11472 KB  
Article
An Interpretable Artificial Intelligence Approach for Reliability and Regulation-Aware Decision Support in Power Systems
by Diego Armando Pérez-Rosero, Santiago Pineda-Quintero, Juan Carlos Álvarez-Barreto, Andrés Marino Álvarez-Meza and German Castellanos-Dominguez
Computation 2026, 14(1), 2; https://doi.org/10.3390/computation14010002 - 21 Dec 2025
Viewed by 722
Abstract
Modern medium-voltage (MV) distribution networks face increasing reliability challenges driven by aging assets, climate variability, and evolving operational demands. In Colombia and across Latin America, reliability metrics, such as the System Average Interruption Frequency Index (SAIFI), standardized under IEEE 1366, serve as key [...] Read more.
Modern medium-voltage (MV) distribution networks face increasing reliability challenges driven by aging assets, climate variability, and evolving operational demands. In Colombia and across Latin America, reliability metrics, such as the System Average Interruption Frequency Index (SAIFI), standardized under IEEE 1366, serve as key indicators for regulatory compliance and service quality. However, existing analytical approaches struggle to jointly deliver the predictive accuracy, interpretability, and traceability required for regulated environments. Here, we introduce CRITAIR (Criticality Analysis through Interpretable Artificial Intelligence-based Recommendations), an integrated framework that combines predictive modeling, explainable analytics, and regulation-aware reasoning to enhance reliability management in MV networks. CRITAIR unifies three components: (i) a TabNet-based predictive module that estimates SAIFI using outage, asset, and meteorological data while producing global and local attributions; (ii) an agentic retrieval-and-reasoning stage that grounds recommendations in regulatory evidence from RETIE and NTC 2050; and (iii) interpretable reasoning graphs that map decision pathways. Evaluations conducted on real operational data demonstrate that CRITAIR achieves competitive predictive performance, comparable to Random Forest and XGBoost, while maintaining transparency through sparse attention and sequential feature explainability. In addition, the regulation-aware reasoning module produces coherent and verifiable recommendations, achieving high semantic alignment scores (BERTScore) and expert-rated interpretability. Overall, CRITAIR bridges the gap between predictive analytics and regulatory governance, offering a transparent, auditable, and deployment-ready solution for digital transformation in electric distribution systems. Full article
(This article belongs to the Special Issue Smart Analytics for Future Energy Systems)
Show Figures

Figure 1

48 pages, 5409 KB  
Article
Enhanced Chimp Algorithm and Its Application in Optimizing Real-World Data and Engineering Design Problems
by Hussam N. Fakhouri, Riyad Alrousan, Hasan Rashaideh, Faten Hamad and Zaid Khrisat
Computation 2026, 14(1), 1; https://doi.org/10.3390/computation14010001 - 20 Dec 2025
Viewed by 610
Abstract
This work proposes an Enhanced Chimp Optimization Algorithm (EChOA) for solving continuous and constrained data science and engineering optimization problems. The EChOA integrates a self-adaptive DE/current-to-pbest/1 (with jDE-style parameter control) variation stage with the canonical four-leader ChOA guidance and augments the search with [...] Read more.
This work proposes an Enhanced Chimp Optimization Algorithm (EChOA) for solving continuous and constrained data science and engineering optimization problems. The EChOA integrates a self-adaptive DE/current-to-pbest/1 (with jDE-style parameter control) variation stage with the canonical four-leader ChOA guidance and augments the search with three lightweight modules: (i) Lévy flight refinement around the incumbent best, (ii) periodic elite opposition-based learning, and (iii) stagnation-aware partial restarts. The EChOA is compared with more than 35 optimizers on the CEC2022 single-objective suite (12 functions). The results show that the EChOA attains state-of-the-art results at both D=10 and D=20. At D=10, it ranks first on all functions (average rank 1.00; 12/12 wins) with the lowest mean objective and the smallest dispersion relative to the strongest competitor (OMA). At D=20, the EChOA retains the best overall rank and achieves top scores on most functions, indicating stable scalability with problem dimension. Pairwise Wilcoxon signed-rank tests (α=0.05) against the full competitor set corroborate statistical superiority on the majority of functions at both dimensions, aligning with the aggregate rank outcomes. Population size studies indicate that larger populations primarily enhance reliability and time to improvement while yielding similar terminal accuracy under a fixed iteration budget. Four constrained engineering case studies (including welded beam, helical spring, pressure vessel, and cantilever stepped beam) further confirm practical effectiveness, with consistently low cost/weight/volume and tight dispersion. Full article
Show Figures

Figure 1
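Of the three auxiliary modules, the Lévy flight refinement is the most self-contained. A sketch using Mantegna's algorithm — a common construction for Lévy-stable steps; the paper may parameterize it differently — looks like this:

```python
import numpy as np
from math import gamma, sin, pi

rng = np.random.default_rng(1)

def levy_step(d, beta=1.5):
    """Mantegna's algorithm for a d-dimensional Levy-stable step:
    mostly small moves with occasional long jumps, which help the
    incumbent best escape local optima."""
    sigma = ((gamma(1 + beta) * sin(pi * beta / 2))
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, d)
    v = rng.normal(0.0, 1.0, d)
    return u / np.abs(v) ** (1 / beta)

def refine_best(best, lb, ub, scale=0.01):
    """Levy-flight refinement around the incumbent best solution,
    clipped to the box bounds [lb, ub]."""
    cand = best + scale * levy_step(best.size) * (ub - lb)
    return np.clip(cand, lb, ub)
```

In the full algorithm the refined candidate replaces the incumbent only if it improves the objective, so the operator adds exploration around the best point at no risk of regression.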
