Journal Description
Computation
is a peer-reviewed journal of computational science and engineering published monthly online by MDPI.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), CAPlus / SciFinder, Inspec, dblp, and other databases.
- Journal Rank: JCR - Q2 (Mathematics, Interdisciplinary Applications) / CiteScore - Q1 (Applied Mathematics)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 14.8 days after submission; acceptance to publication takes 5.6 days (median values for papers published in this journal in the second half of 2025).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
- Journal Cluster of Mathematics and Its Applications: AppliedMath, Axioms, Computation, Fractal and Fractional, Geometry, International Journal of Topology, Logics, Mathematics and Symmetry.
Impact Factor: 1.9 (2024); 5-Year Impact Factor: 1.9 (2024)
Latest Articles
Artificial Intelligence Applications in Public Health: 2nd Edition
Computation 2026, 14(5), 106; https://doi.org/10.3390/computation14050106 - 4 May 2026
Abstract
Artificial intelligence (AI) is assuming an increasingly important role in public health, where the scale, heterogeneity, and temporal dynamics of health-related data often exceed the capacity of conventional analytic approaches [...]
Full article
(This article belongs to the Special Issue Artificial Intelligence Applications in Public Health: 2nd Edition)
Open Access Article
Risk-Aware Downlink Throughput Prediction in High-Density 5G Networks
by
Najem N. Sirhan, Riyad Alrousan, Samar Al-Saqqa, Faten Hamad and Zaid Khrisat
Computation 2026, 14(5), 105; https://doi.org/10.3390/computation14050105 - 2 May 2026
Abstract
Accurate short-horizon downlink throughput prediction is essential for automation in high-density 5G deployments (e.g., stadiums and events), where user load, scheduling decisions, and interference conditions change rapidly and produce highly variable user-perceived rates. This paper benchmarks lightweight regression models for per-user throughput prediction from readily available radio access network (RAN) key performance indicators (KPIs) and studies a risk-aware extension that augments point forecasts with calibrated uncertainty and an abstention (deferral) rule. Experiments use a strictly time-ordered train/calibration/test protocol on the Liverpool 5G High-Density Demand (L5GHDD) dataset. The target is strongly zero-inflated (about 62% of samples at 0 Mbps) and heavy-tailed, creating regimes where average-error optimization can mask rare but operationally important bursts. In the point-prediction benchmark, the best model is a tuned two-stage support vector regressor with a mean absolute error (MAE) of Mbps, while the strongest single-stage model attains a weighted mean absolute percentage error (WMAPE) of 56.200%. For uncertainty quantification, we compare standard split conformal prediction against two input-adaptive alternatives. Constant-width split conformal attains 88.900% marginal coverage for a nominal 90% target with an average interval width of Mbps, but width-based deferral is degenerate because all intervals have the same size. Variable-length conformal intervals preserve near-nominal coverage (91.100%) while producing informative width variation: normalized conformal reduces the average width to Mbps, and conformalized quantile regression reduces it to Mbps. At a deferral threshold of Mbps, constant-width conformal defers all samples, whereas normalized conformal still acts on 61.200% of samples with selective MAE Mbps. 
These results show that input-adaptive uncertainty is necessary for meaningful selective prediction in heteroscedastic 5G throughput dynamics.
Full article
(This article belongs to the Section Computational Engineering)
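To make the abstract's comparison concrete, here is a minimal sketch of constant-width split conformal prediction with a width-based deferral rule; function names and the deferral threshold are illustrative, not taken from the paper:

```python
import numpy as np

def split_conformal_interval(cal_residuals, y_pred, alpha=0.1):
    """Constant-width split conformal interval around point forecasts.

    cal_residuals: |y - y_hat| on a held-out calibration set.
    Returns (lower, upper) arrays with ~(1 - alpha) marginal coverage.
    """
    n = len(cal_residuals)
    # Finite-sample-corrected empirical quantile of calibration residuals.
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q = np.quantile(cal_residuals, level)
    return y_pred - q, y_pred + q

def defer_by_width(lower, upper, max_width):
    """Width-based abstention: defer samples whose interval is too wide.

    With constant-width intervals this rule is all-or-nothing, which is
    exactly the degeneracy the abstract describes; adaptive intervals
    (normalized or quantile-based conformal) make it informative.
    """
    return (upper - lower) > max_width
```

The same `defer_by_width` rule becomes selective once interval widths vary per sample, which is the paper's motivation for input-adaptive conformal methods.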
Open Access Article
Optimization of Convolutional Neural Networks Using Genetic Algorithms for the Classification of Arrhythmias in Skeletonized ECG Images
by
Álvaro Gabriel Vega-De la Garza, Ervin Jesús Alvarez-Sánchez, Julio Fernando Zaballa-Contreras, Rosario Aldana-Franco, Fernando Aldana-Franco, José Gustavo Leyva-Retureta and Andrés López-Velázquez
Computation 2026, 14(5), 104; https://doi.org/10.3390/computation14050104 - 1 May 2026
Abstract
Class imbalance among arrhythmia types and electrocardiogram (ECG) signal complexity present significant challenges for automated ECG-based arrhythmia detection. This research proposes an innovative approach that combines Genetic Algorithm (GA) optimization of Convolutional Neural Network (CNN) hyperparameters with morphological skeletonization of ECG images. The MIT-BIH Arrhythmia Database served as the primary data source, with the ECG signal converted to skeletonized representations emphasizing QRS complex geometry. A GA-optimized model was compared against a heuristic (manual design) baseline to determine optimal kernel and filter configurations. Evaluation emphasized not only overall accuracy but also robust metrics for minority classes. The optimized model achieved 97.26% accuracy, with macro recall improving substantially from 77.36% to 83.10% (+5.74%). These results demonstrate that evolutionary optimization enhances detection sensitivity to subtle geometric patterns, effectively mitigating class imbalance without artificial oversampling techniques.
Full article
(This article belongs to the Section Computational Biology)
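The evolutionary search the abstract describes can be sketched as a toy genetic algorithm over kernel/filter choices. The fitness function below is a stand-in (in the paper it would be the validation performance, e.g. macro recall, of the trained CNN), and all ranges are illustrative:

```python
import random

# Candidate search space (illustrative ranges, not the paper's).
KERNELS = [3, 5, 7]
FILTERS = [16, 32, 64, 128]

def random_genome():
    return (random.choice(KERNELS), random.choice(FILTERS))

def fitness(genome):
    # Stand-in objective; in the paper this would be the macro recall
    # of a CNN trained with these hyperparameters.
    k, f = genome
    return -abs(k - 5) - abs(f - 64) / 64

def evolve(pop_size=8, generations=10, mut_rate=0.3, seed=0):
    random.seed(seed)
    pop = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]           # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = (a[0], b[1])                 # one-point crossover
            if random.random() < mut_rate:       # mutation: resample
                child = random_genome()
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)
```

Because training a CNN per candidate is expensive, real GA hyperparameter searches keep populations small and reuse partial training, as the modest search spaces in this kind of study suggest.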
Open Access Article
Admissible Reconstruction of Reaction-Channel Levels on Fixed Subgroup Support and Probabilities in Algebraic Probability Table Construction
by
Beichen Zheng and Lili Wen
Computation 2026, 14(5), 103; https://doi.org/10.3390/computation14050103 - 30 Apr 2026
Abstract
This work considers admissibility-enforcing reconstruction of reaction-channel subgroup levels on prescribed total-subgroup support and probabilities, a setting in which conventional exact reconstruction may produce negative reaction-channel levels. The proposed reconstruction relaxes conventional full matching by retaining selected low-order channel quantities associated with limiting dilution responses exactly, while fitting the remaining matching conditions in a constrained least-squares sense under nonnegativity. The exact-retention constraints are embedded through a null-space parametrization, which reduces the reconstruction to a convex optimization problem over the remaining degrees of freedom. Two variants are examined: a single-retention formulation, which is automatically feasible for nonnegative retained data, and a two-retention formulation, which is more restrictive and depends on compatibility with the fixed total-subgroup rule. Numerical tests for capture data show that the proposed reconstruction removes the negative reaction-channel levels observed in the violating groups. Restoring admissibility entails deterioration in response accuracy relative to the unconstrained full-matching baseline, reflecting the trade-off between exact matching and nonnegativity on the fixed rule. Of the two variants considered, the single-retention formulation shows more stable overall behavior in the present comparison. In particular, for all violating cases at orders , it restores nonnegativity, with the reported 95th-percentile relative errors in the folded effective cross section not exceeding .
Full article
(This article belongs to the Section Computational Engineering)
Open Access Article
A Hybrid Multi-Model Framework for Personalized User-Level Anomaly Detection with Data-Driven Threshold Optimization
by
Amit Kumar, Wakar Ahmad, Om Pal and Sunil
Computation 2026, 14(5), 102; https://doi.org/10.3390/computation14050102 - 30 Apr 2026
Abstract
Modern user authentication systems increasingly need user and device-behavior-aware adaptive mechanisms to detect evolving threats beyond the traditional authentication framework of static credential verification. This paper proposes a hybrid multi-model framework for personalized user-level anomaly detection using a data-driven Hybrid Anomaly Score (HAS). The primary contribution lies in deriving the HAS using the joint integration of three adaptive attributes: dynamically computed per-user deviation thresholds conditioned on individual behavioral history, profile-age-aware baseline weights reflecting user cohort maturity, and criticality-scaled aggregation with the security impact of each detection methodology. The framework is evaluated on a large-scale real-world dataset and demonstrates strong detection performance, while achieving low inference latency suitable for real-time enterprise deployment. The ablation analysis of the framework confirms that dynamic weighting and personalized threshold substantially improve detection stability and convergence with an effective and deployable solution for large-scale authentication environments.
Full article
(This article belongs to the Section Computational Engineering)
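A minimal sketch of the kind of aggregation the abstract describes (per-user thresholds from behavioral history, plus criticality-scaled and profile-age-aware scoring); the exact formulas and constants here are illustrative assumptions, not the paper's HAS definition:

```python
import statistics

def per_user_threshold(history, k=2.0):
    """Dynamic per-user deviation threshold: mean + k * std of the
    user's historical scores (k = 2.0 is an illustrative choice)."""
    return statistics.fmean(history) + k * statistics.pstdev(history)

def hybrid_anomaly_score(scores, criticality, profile_age_days,
                         maturity_halflife=30.0):
    """Criticality-weighted aggregation of per-detector scores,
    damped for young (immature) user profiles."""
    total = sum(criticality.values())
    agg = sum(criticality[d] * s for d, s in scores.items()) / total
    # Profile-age-aware weight: trust in the baseline grows with age.
    maturity = profile_age_days / (profile_age_days + maturity_halflife)
    return agg * maturity
```

An event would then be flagged when `hybrid_anomaly_score(...)` exceeds that user's `per_user_threshold(...)`, rather than a single global cutoff.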
Open Access Article
What Is an Oval, Officially and Overall? Old and New Mathematical Descriptions
by
Valeriy G. Narushin, Stefan T. Orszulik, Michael N. Romanov and Darren K. Griffin
Computation 2026, 14(5), 101; https://doi.org/10.3390/computation14050101 - 27 Apr 2026
Abstract
Deriving from the Latin “ovum” (egg), the oval is a commonly used term, but does not have the status of a standard geometric figure like a circle or ellipse. Consequently, the oval lacks both a mathematical descriptive basis to attribute a set of key geometric parameters and an elegant formula to describe its contours. Herein, we consider the basis for deriving the formula of an oval for typical egg profiles. Specifically, these are round, ellipsoid, classic oval, pyriform (conical) and biconical shapes. To do this, we adhered to four basic postulates: (i) the ability to describe all possible egg shapes; (ii) a minimum set of measurable geometric parameters; (iii) the application of some universal indices (ratios of key geometric dimensions) to describe mathematical models; (iv) conformity with the “Main Axiom of the Mathematical Formula of the Bird’s Egg.” Additionally, we sought to comply with the principles of mathematical elegance. Following these theoretical assumptions and practical verification, we obtained a mathematically supported, elegant formula for this well-known but non-standardized geometric figure. The derived oval geometry equation will find use in applied problems of biology, construction, engineering and school curricula, alongside the classical figures of the circle and ellipse.
Full article
(This article belongs to the Section Computational Biology)
Open Access Article
Micro-Macro Modeling of Inherent Cognitive Biases in 5-Point Likert Scales: Uncovering the Non-Linearity of Critical Sample Sizes for Capturing Identical Statistical Populations
by
Yasuko Kawahata
Computation 2026, 14(5), 100; https://doi.org/10.3390/computation14050100 - 27 Apr 2026
Abstract
As social infrastructure intensively developed during the high economic growth period of the 1970s faces simultaneous aging, there is an urgent need to transition from conventional reactive maintenance to preventive maintenance utilizing various data (data-driven asset management). However, the greatest barrier in practice is that inspection data is unevenly distributed in analog formats such as paper and unstructured files, and heavily relies on the subjective visual evaluation of expert engineers (e.g., discrete graded evaluations from A to D). The intervention of this “Assessor Bias” makes it difficult to ensure the robustness required for direct statistical analysis. This paper serves as a bridge between this analog expert knowledge and quantitative data science. It formulates human cognitive conflicts (true state, peer pressure, avoidance of cognitive load) using the distance-decay model of the Analytic Hierarchy Process (AHP) and the Softmax function, constructing a micro-macro link model accompanied by stochastic variations. Through large-scale multi-agent simulations ( ) validating the model’s convergence, it was demonstrated that in long-tail distributions formed under peer pressure, macroscopic statistical distance metrics such as the Kullback-Leibler (KL) divergence ignore the fact that a small number of true signals are non-linearly suppressed, causing a statistical misinterpretation that “the error is within an acceptable range”. This implies that as long as macroscopic statistical indicators are over-trusted, signs of critical deterioration (minorities) will be structurally marginalized.
Returning to the debate on “Homogeneity (Homogenität)” in German social statistics, this paper advocates that in order to realize objective “Micro-segmentation of Homogeneous Statistical Populations,” a paradigm shift from qualitative methods relying on human intuition to quantitative methods incorporating multi-criteria decision making is essential, rather than simply expanding the sample size.
Full article
(This article belongs to the Section Computational Social Science)
Open Access Article
A Spectrum-Driven Hierarchical Learning Network for Aero-Engine Defect Segmentation
by
Yining Xie, Aoqi Shen, Haochen Qi, Jing Zhao, Jianpeng Li, Xichun Pan and Anlong Zhang
Computation 2026, 14(5), 99; https://doi.org/10.3390/computation14050099 - 25 Apr 2026
Abstract
Aero-engine defects often exhibit micro-scale and high-frequency characteristics under complex metallic textures, which makes precise segmentation difficult. Most existing pixel-level methods rely on spatial-domain modeling and lack frequency-domain decoupling. As a result, high-frequency details are easily hidden by low-frequency background information. In addition, repeated downsampling weakens the representation of fine-grained structures, leading to inaccurate boundary localization and limited robustness. To address these issues, a spectrum-driven hierarchical learning network is proposed for aero-engine defect segmentation. First, a dual-band spectral module is constructed using the discrete cosine transform to separate high-frequency and low-frequency components, providing stable and physically meaningful frequency-domain priors for the network. Second, a detail-guided module is designed where high-frequency features adaptively guide skip connections, compensating information loss during encoding and improving boundary recovery. Furthermore, a low-frequency-driven region-aware modeling module is developed. The internal defect regions, boundary areas, and background regions are modeled hierarchically. A dynamic hyper-kernel generation mechanism performs region-sensitive convolutional modeling, improving adaptation to complex structural variations. Extensive experiments on the Turbo19 and NEU-Seg datasets demonstrate that the proposed method produces accurate defect boundaries and achieves mIoU scores of 89.82% and 91.44%, improving over the second-best method by 5.22% and 4.42%, respectively.
Full article
(This article belongs to the Section Computational Engineering)
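The dual-band idea can be illustrated in a few lines of NumPy. Note that this sketch uses an FFT low-pass mask in place of the paper's discrete cosine transform, and the cutoff value is an arbitrary illustrative choice:

```python
import numpy as np

def dual_band_split(img, cutoff=0.25):
    """Split an image into low- and high-frequency components.

    Sketch using an FFT low-pass mask in place of the paper's DCT;
    cutoff is the fraction of the (normalized) spectrum radius kept
    as 'low frequency'.
    """
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    # Normalized radial distance from the spectrum center.
    r = np.hypot((yy - h / 2) / (h / 2), (xx - w / 2) / (w / 2))
    low = np.fft.ifft2(np.fft.ifftshift(F * (r <= cutoff))).real
    high = img - low   # residual carries edges and fine defect detail
    return low, high
```

In a segmentation network of this kind, `high` would feed the detail-guided skip connections while `low` drives the region-aware branch.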
Open Access Article
Securing Tool-Using AI Agents Against Injection and Authority Misuse
by
Hasan Kanaker, Hussam Fakhouri, Nader Abdel Karim, Maher Abuhamdeh, Nurul Halimatul Asmak Ismail and Sandi Fakhouri
Computation 2026, 14(5), 98; https://doi.org/10.3390/computation14050098 - 25 Apr 2026
Abstract
Tool-using AI agents couple a language model with controller logic, memory, and external tools such as browsers, email, calendars, file systems, and transaction APIs. This architecture expands capability, but it also enlarges the security boundary: agents routinely ingest untrusted content while holding privileges that can reveal private data and trigger external side effects. The resulting failures are not limited to poor text generation; they include prompt injection, indirect injection through tool outputs, confused-deputy behavior, unauthorized actions, and misleading claims about the tool state. Because large-scale testing on deployed products is difficult, vendor-specific, and ethically sensitive, we present a transparent, theoretical simulation-based framework for evaluating user-facing risk in tool-using agents. The methodological contribution is a formal threat model that separates compromise, harm, and severity, and a Monte Carlo evaluation pipeline that maps architectural choices (permissions, retrieval, memory exposure, and approvals) and defensive controls to comparable outcome metrics. We instantiate the framework for six representative threat scenarios and nine defense configurations, reporting attack success rate (ASR), benign task success, latency overhead, and severity-weighted harm. Across scenarios, the least-privilege tool design is the strongest single broad control, human-in-the-loop approvals sharply reduce high-impact actions and exports but degrade under user error and habituation, retrieval allowlisting nearly eliminates indirect injection while leaving other channels largely unaffected, and rate limiting reduces tail severity more than ASR. These results position agent safety as an architectural and operational problem and, because they arise from an assumption-explicit simulator rather than field measurements, should be read as comparative design guidance rather than incident-rate estimates for any deployed product.
Full article
(This article belongs to the Section Computational Engineering)
Open Access Article
AI-Enabled Governance: Board Gender Diversity and Corporate Tax Avoidance
by
Marwan Mansour, Mo’taz Al Zobi, Ahmad Marei, Luay Daoud and Nour Ibrahim Kurdi
Computation 2026, 14(5), 97; https://doi.org/10.3390/computation14050097 - 23 Apr 2026
Abstract
Corporate tax avoidance has become a major governance and fiscal sustainability concern, particularly in developing economies where corporate tax revenues constitute a critical source of public financing. While prior research suggests that board gender diversity (BGD) enhances ethical oversight and monitoring, its effectiveness in constraining aggressive tax planning may depend on firms’ informational and technological environments. This study examines whether artificial intelligence (AI) capability strengthens the governance role of BGD in reducing corporate tax avoidance. Using a balanced panel of 1586 non-financial firms from developing economies over the period 2009–2023, the analysis employs firm FE models and dynamic two-step System GMM estimations to address unobserved heterogeneity, endogeneity, and the persistence of corporate tax behavior. The results indicate that BGD is positively associated with effective tax rates, implying lower levels of corporate tax avoidance. Furthermore, AI capability—measured using a lagged specification—significantly strengthens this relationship, suggesting that firms with higher AI adoption exhibit a stronger governance effect of gender-diverse boards on tax compliance. Additional robustness tests—including alternative tax avoidance measures, alternative BGD specifications, heterogeneity analysis, and selection-bias corrections using Heckman, propensity score matching (PSM), and instrumental variable (2SLS) approaches—confirm the stability of the findings. Overall, the results highlight the complementary role of technological capability and board diversity in strengthening corporate governance (CG) and fiscal discipline in developing economies.
Full article
(This article belongs to the Special Issue Sentiment-Driven Modelling in Business, Economics, and Social Sciences)
Open Access Article
Object Re-Identification Method for Air-to-Ground Targets Based on Neighborhood Feature Centralization Attention
by
Tian Yao, Yong Xu, Yue Ma, Hongtao Yan, Haihang Xu and An Wang
Computation 2026, 14(5), 96; https://doi.org/10.3390/computation14050096 - 22 Apr 2026
Abstract
To address the core challenges in air-to-ground target re-identification (ReID), including network focus on invalid background information, poor adaptability to nonlinear feature distribution, and insufficient cross-domain generalization, this paper proposes a novel air-to-ground ReID framework based on Neighborhood Feature Centralization Attention (NFCA). On the basis of Coordinate Attention, the framework introduces a parameter-free Neighborhood Feature Centralization mechanism to build a lightweight attention module, which enhances cross-feature semantic interaction and suppresses background noise while retaining precise position encoding. It achieves end-to-end direct optimization of sample pair similarity through binary cross-entropy loss, eliminating the proxy task bias of traditional classification loss and adapting to the nonlinear structure of feature space. A multi-source data-driven training strategy is constructed by fusing ReID datasets and general classification datasets, which expands the coverage of feature space and narrows the distribution gap between training data and real air-to-ground scenarios without additional manual annotation. Experiments show that the proposed method achieves leading mAP values on the self-developed UAV air-to-ground dataset JC-1, the public person ReID dataset Market-1501, and the public vehicle ReID dataset VehicleID. Sufficient statistical validation, ablation experiments and cross-domain tests verify the advancement, reliability and generalization of the proposed method in complex air-to-ground scenarios.
Full article
(This article belongs to the Topic Intelligent Optimization Algorithm: Theory and Applications)
Open Access Article
SOC-Dependent Soft Current Limiting for Second-Life Lithium-Ion Batteries in Off-Grid Photovoltaic Battery Energy Storage Systems
by
Hongyan Wang, Pathomthat Chiradeja, Atthapol Ngaopitakkul and Suntiti Yoomak
Computation 2026, 14(4), 95; https://doi.org/10.3390/computation14040095 - 19 Apr 2026
Abstract
The increasing deployment of off-grid photovoltaic–battery energy storage systems (PV–BESSs) has intensified operational demands on battery energy storage, particularly when second-life lithium-ion batteries are employed. Due to aging-induced increases in internal resistance and reduced thermal margins, second-life batteries are more vulnerable to high-current operation at a low state-of-charge (SOC), which aggravates heat generation and accelerates degradation. In this study, an SOC-dependent soft current limiting strategy is proposed that reshapes the discharge current reference under low-SOC conditions while maintaining fixed SOC limits, thereby targeting current-domain protection rather than SOC-boundary adaptation for reliable off-grid operation. The proposed method introduces two SOC thresholds to gradually derate the allowable discharge current, preventing abrupt current changes near the lower SOC bound. A unified MATLAB/Simulink-based framework is developed for a 24 h representative off-grid PV–BESS scenario using a second-order equivalent circuit model coupled with a lumped thermal model. Simulation results show that the proposed current shaping reduces low-SOC current stress and associated Joule heating, leading to moderated temperature rise, while only slightly affecting the unmet load under the tested conditions. These findings indicate that SOC-dependent current shaping can provide a control-oriented means to reduce low-SOC electro-thermal stress in second-life batteries within the studied off-grid PV–BESS framework.
Full article
(This article belongs to the Section Computational Engineering)
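The two-threshold soft derating described above admits a very small sketch; the threshold values below are illustrative, not the paper's:

```python
def derate_factor(soc, soc_low=0.2, soc_min=0.1):
    """SOC-dependent soft current limit: full current above soc_low,
    linear ramp between the two thresholds, zero at or below soc_min.
    Threshold values are illustrative, not the paper's."""
    if soc >= soc_low:
        return 1.0
    if soc <= soc_min:
        return 0.0
    return (soc - soc_min) / (soc_low - soc_min)

def limited_current(i_ref, soc, **kwargs):
    """Reshape the discharge current reference by the derating factor."""
    return i_ref * derate_factor(soc, **kwargs)
```

The continuous ramp between the two thresholds is what avoids the abrupt current steps near the lower SOC bound that a single hard cutoff would produce.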
Open Access Article
Sequential H2 Adsorption on the Aromatic Li6 Superatom: Field-Activated Physisorption and Thermodynamic Limits
by
Karen Ochoa Lara, Jancarlo Gomez-Vega, Rafael Pacheco-Contreras and Octavio Juárez-Sánchez
Computation 2026, 14(4), 94; https://doi.org/10.3390/computation14040094 - 17 Apr 2026
Abstract
Understanding the intrinsic Li–H2 interaction, decoupled from substrate effects, is essential to rationalize the performance of lithium-decorated hydrogen storage materials. To address the current lack of a clean theoretical baseline, we characterized the sequential H2 adsorption on the gas-phase Li6 superatomic cluster using high-level density functional theory (DFT), complemented by Energy Decomposition Analysis (EDA), QTAIM, and NICS(0) calculations. Li6 acts as a structurally rigid platform (RMSD < 0.032 Å) where ligand-induced polarization progressively strengthens its σ-aromaticity (NICS(0) from −2.917 to −13.98 ppm) and increases the HOMO–LUMO gap up to 5.05 eV. EDA identifies the binding as field-activated physisorption, electrostatically dominated (65–67%) and mechanistically distinct from Kubas coordination, as confirmed by QTAIM closed-shell interaction parameters. Negative cooperativity governs an effective loading capacity of n = 2 molecules under cryogenic conditions (Teq = 143.76 and 114.64 K), while an entropic bottleneck renders higher loading non-spontaneous at all temperatures. These results establish Li6(H2)n as a foundational gas-phase reference, providing a systematic, contamination-free descriptor set for the intrinsic Li–H2 interaction. This framework is essential for isolating the electronic role of the lithium superatom and unambiguously identifying substrate-induced modulations in supported hydrogen storage materials.
Full article
(This article belongs to the Special Issue Feature Papers in Computational Chemistry)
Open Access Article
Attention-Based Transformer Framework with Predictive Uncertainty Quantification for Multi-Crop Yield Forecasting
by
Bharat Lal, Abhinav Shukla, Ayush Kumar Agrawal, R Kanesaraj Ramasamy and Parul Dubey
Computation 2026, 14(4), 93; https://doi.org/10.3390/computation14040093 - 15 Apr 2026
Abstract
Accurate crop yield forecasting is essential for ensuring food security, optimizing agricultural resource allocation, and supporting climate-resilient farming systems. Recent advances in deep learning have improved yield prediction accuracy; however, most existing models provide deterministic estimates without quantifying predictive uncertainty. This limitation restricts their reliability under climatic variability, missing data, and real-world decision-making scenarios where risk awareness is critical. This study utilizes two publicly available multi-crop datasets comprising historical yield records integrated with weather and soil attributes across multiple growing seasons. An attention-based Transformer framework is proposed, augmented with uncertainty quantification through Monte Carlo Dropout, Quantile Regression, and Bayesian Attention mechanisms. The proposed approach represents an integrated uncertainty-aware Transformer framework that combines temporal self-attention with complementary uncertainty estimation strategies. The contribution of this work lies in the systematic integration and comparative evaluation of multiple uncertainty quantification mechanisms within a unified deep learning framework for multi-crop yield forecasting. Experimental results demonstrate improved predictive accuracy and calibration compared to deterministic baselines. However, these findings are bounded by the scope of the datasets, which consist of coarse tabular climatic and soil variables, and should be interpreted accordingly.
Full article
(This article belongs to the Section Computational Engineering)
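The Monte Carlo Dropout component described above can be illustrated with a minimal, framework-free sketch (the one-layer toy model and all names below are illustrative assumptions, not the authors' implementation): keep dropout active at inference, run several stochastic forward passes, and take the mean and standard deviation of the predictions as point estimate and uncertainty.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression model: one hidden layer with dropout (illustrative only).
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(16, 1))

def forward(x, p_drop=0.2):
    """One stochastic forward pass with dropout kept ON at inference."""
    h = np.maximum(x @ W1, 0.0)              # ReLU hidden layer
    mask = rng.random(h.shape) > p_drop      # random dropout mask
    h = h * mask / (1.0 - p_drop)            # inverted-dropout scaling
    return h @ W2

def mc_dropout_predict(x, T=100):
    """Mean prediction and uncertainty estimate over T stochastic passes."""
    samples = np.stack([forward(x) for _ in range(T)])
    return samples.mean(axis=0), samples.std(axis=0)

x = rng.normal(size=(4, 8))                  # 4 inputs, 8 features
mean, std = mc_dropout_predict(x)
print(mean.shape, std.shape)                 # (4, 1) (4, 1)
```

The spread of the T passes approximates epistemic uncertainty; in the paper this is combined with quantile regression and Bayesian attention, which this sketch does not cover.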
Open AccessArticle
Comparative Analysis of Supervised and Unsupervised Learning for Intrusion Detection in Network Logs
by
Paulo Castro, Fernando Santos and Pedro Lopes
Computation 2026, 14(4), 92; https://doi.org/10.3390/computation14040092 - 15 Apr 2026
Abstract
The escalating complexity of network infrastructures and the increasing sophistication of cyber threats require increasingly robust and automated Intrusion Detection Systems (IDS). This article presents a comparative investigation of the effectiveness of various Machine Learning and Deep Learning architectures in detecting network anomalies in network logs. The methodology ranged from classic supervised and ensemble algorithms, such as Random Forest and XGBoost, to sequential Deep Learning approaches (LSTM, GRU) and unsupervised models based on latent reconstruction (VAE, DeepLog). The results demonstrate that supervised approaches significantly outperformed unsupervised methods in the analyzed context. The optimized XGBoost model established a performance benchmark, achieving a Recall of 0.96 and a Precision of 0.85, thereby offering an optimal balance between detecting rare threats and minimizing false alarms. In contrast, unsupervised models revealed critical limitations, suggesting that statistical mimicry between normal and anomalous traffic hinders detection based solely on reconstruction error. Additionally, the study documents the technical interoperability challenges when attempting to integrate state-of-the-art language models, such as BERT. In conclusion, this work validates the effectiveness of Gradient Boosting algorithms and recurrent networks as viable and scalable solutions for critical network security, providing guidelines for model selection in real monitoring environments.
Full article
(This article belongs to the Section Computational Engineering)
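The reported trade-off (Recall 0.96 vs. Precision 0.85) comes down to how true and false positives are counted; a minimal sketch of the two metrics (the toy labels below are made-up illustration, not the paper's data):

```python
def precision_recall(y_true, y_pred, positive=1):
    """Precision and recall for a binary intrusion/normal labelling."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0  # alarms that were real
    recall = tp / (tp + fn) if tp + fn else 0.0     # real attacks caught
    return precision, recall

# Toy labels: 1 = intrusion, 0 = normal traffic (illustrative only).
y_true = [1, 1, 1, 0, 0, 0, 0, 1]
y_pred = [1, 1, 0, 0, 1, 0, 0, 1]
print(precision_recall(y_true, y_pred))  # (0.75, 0.75)
```

High recall with lower precision, as in the paper's XGBoost benchmark, means few missed attacks at the cost of some false alarms.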
Open AccessArticle
Reinforcement Learning-Based Inverse Design of Multilayer Particles
by
Zhaohui Li, Fang Gao and Delian Liu
Computation 2026, 14(4), 91; https://doi.org/10.3390/computation14040091 - 10 Apr 2026
Abstract
Multilayered particles possess exceptional optical properties and hold significant potential for applications in chemical analysis, life sciences, optical sensing, and photonic integration. In practical applications, however, it is often necessary to perform inverse design of multilayered particles with given optical characteristics to meet specific requirements, a process that remains time-consuming. To overcome this challenge, we propose a reinforcement learning-based method for the automated design of multilayered particles. Leveraging the self-learning capacity of reinforcement learning models in combination with an optical characteristics calculation model, the method iteratively determines particle parameters that fulfill the desired optical responses. This method effectively addresses the many-to-one parameter mapping problem in inverse design, eliminates the need for extensive pre-computations, and provides an innovative approach to the automated design of complex nanostructures.
Full article
(This article belongs to the Section Computational Engineering)
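The iterative design loop the abstract describes can be illustrated with a minimal sketch, in which a toy forward model and a greedy random-search update stand in for the paper's optical-characteristics calculation and reinforcement-learning agent (all names, coefficients, and the surrogate response are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

target = np.array([0.6, 0.3, 0.8])       # desired optical response (toy)

def optical_response(thicknesses):
    """Stand-in forward model; the paper uses a real optical calculation."""
    return np.sin(thicknesses) ** 2       # toy many-to-one mapping

def reward(thicknesses):
    """Negative L1 mismatch between achieved and desired response."""
    return -np.abs(optical_response(thicknesses) - target).sum()

# Propose a perturbation (action), keep it if the reward improves.
state = rng.uniform(0.0, np.pi, size=3)   # layer parameters (toy units)
best = reward(state)
for _ in range(2000):
    candidate = state + rng.normal(scale=0.1, size=3)
    r = reward(candidate)
    if r > best:
        state, best = candidate, r
print(round(best, 3))
```

Because `sin**2` is many-to-one, distinct `state` vectors can achieve the same response, mirroring the many-to-one mapping problem the paper's method addresses with a learned policy rather than this greedy search.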
Open AccessArticle
Two-Dimensional Anomalous Solute Transport in a Two-Zone Fractal Porous Medium
by
B. Kh. Khuzhayorov, F. B. Kholliev, A. I. Usmonov, B. Rushi Kumar and K. K. Viswanathan
Computation 2026, 14(4), 90; https://doi.org/10.3390/computation14040090 - 9 Apr 2026
Abstract
This study addresses a two-dimensional anomalous solute transport process within a two-zone fractal porous medium. A mathematical formulation is developed to characterise transport phenomena in a non-homogeneous porous domain. The medium consists of two interacting regions: one containing mobile fluid and the other containing immobile fluid, between which mass transfer occurs. In the mobile-fluid region, solute transport is governed by the convection–diffusion equation. In contrast, the immobile-fluid region is described using a first-order kinetic model. The problem of solute injection through a designated boundary point is formulated and numerically implemented. The effects of anomalous transport behaviour on solute migration and filtration characteristics are examined. The study further evaluates the pressure field, filtration velocity distribution, and solute concentration in both zones.
Full article
(This article belongs to the Section Computational Engineering)
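For orientation, the classical (integer-order) mobile–immobile two-zone equations underlying such models read as follows; the paper's anomalous formulation generalises the time derivatives with fractal/fractional operators, so this is a baseline sketch of the standard notation, not the authors' exact system:

```latex
% Mobile zone: convection–diffusion with exchange to the immobile zone
\theta_m \frac{\partial c_m}{\partial t}
  + \theta_{im} \frac{\partial c_{im}}{\partial t}
  = \theta_m D\, \nabla^2 c_m - \theta_m\, \mathbf{v} \cdot \nabla c_m ,
% Immobile zone: first-order kinetic mass transfer
\theta_{im} \frac{\partial c_{im}}{\partial t} = \omega \,(c_m - c_{im}) .
```

Here $c_m$, $c_{im}$ are solute concentrations in the mobile and immobile zones, $\theta_m$, $\theta_{im}$ the corresponding volumetric fractions, $\mathbf{v}$ the filtration velocity, $D$ the dispersion coefficient, and $\omega$ the first-order mass-transfer coefficient.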
Open AccessArticle
Feature-Based Population Initialization for Evolutionary Optimization of Machine Learning Models in Short-Term Solar Power Forecasting
by
Aleksei Vakhnin, Harri Niska, Anders V. Lindfors and Mikko Kolehmainen
Computation 2026, 14(4), 89; https://doi.org/10.3390/computation14040089 - 8 Apr 2026
Abstract
Nowadays, solar energy is becoming one of the most popular sources of renewable energy worldwide. Traditional fossil fuels cause pollution and climate change, while solar power offers a clean and sustainable alternative. However, effective planning requires accurate prediction of the amount of solar energy that can be produced. Prediction accuracy directly depends on two factors: the model’s hyperparameters and the feature set. In this study, we use boosting models, such as LightGBM, XGBoost, and CatBoost, to forecast solar power production. The prediction horizon is 60 min, which corresponds to short-term forecasting. Model tuning is performed using the NSGA-II multi-objective optimization algorithm. In this study, NSGA-II simultaneously tunes hyperparameters and a feature set of boosting models. We aim to enhance the performance of the NSGA-II algorithm in the early stages using the proposed method to generate the initial population. The initialization is based on an ensemble of filtering methods. The proposed approach promotes faster convergence in the early stages of the algorithm compared to the traditional initialization method. The statistical significance of the numerical results is confirmed by the Wilcoxon test.
Full article
(This article belongs to the Section Computational Engineering)
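The filter-ensemble initialization idea can be sketched as follows (the two filter scores, the biasing rule, and all names are illustrative assumptions, not the authors' exact method): score features with an ensemble of cheap filters, then bias the initial population's binary feature masks toward the top-scoring features.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: 200 samples, 6 features; features 0 and 1 drive the target.
X = rng.normal(size=(200, 6))
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=200)

def filter_scores(X, y):
    """Ensemble of two cheap filters: |Pearson r| and |Spearman r|."""
    pearson = np.array([abs(np.corrcoef(X[:, j], y)[0, 1])
                        for j in range(X.shape[1])])
    ry = y.argsort().argsort()                       # ranks of the target
    spearman = np.array([abs(np.corrcoef(X[:, j].argsort().argsort(), ry)[0, 1])
                         for j in range(X.shape[1])])
    return (pearson + spearman) / 2.0                # average the filters

def init_population(scores, pop_size=10):
    """Bernoulli feature masks whose inclusion probability follows the scores."""
    p = 0.2 + 0.7 * scores / scores.max()            # strong features ~0.9, weak ~0.2
    return (rng.random((pop_size, scores.size)) < p).astype(int)

pop = init_population(filter_scores(X, y))
print(pop.shape)  # (10, 6)
```

Seeding NSGA-II with such masks concentrates the first generations on promising feature subsets instead of uniform random ones, which is the convergence advantage the abstract reports.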
Open AccessArticle
A Comparative Study of Imbalance-Handling Methods in Multiclass Predictive Maintenance
by
Mohammed Alnahhal, Mosab I. Tabash, Samir K. Safi, Mujeeb Saif Mohsen Al-Absy and Zokir Mamadiyarov
Computation 2026, 14(4), 88; https://doi.org/10.3390/computation14040088 - 7 Apr 2026
Abstract
Predictive maintenance plays a key role in digitalization initiatives; however, in real settings, issues related to failure prediction occur when failure instances are rare compared to normal instances, leading to class imbalance. In this study, we systematically compare five machine learning (ML) models—random forest, XGBoost, support vector machine, k-nearest neighbors, and multinomial logistic regression (MLR)—to detect multiclass rare failures using four imbalance-handling approaches (i.e., no handling, manual oversampling, selective manual oversampling, and class weighting), forming 20 configurations. Using the AI4I 2020 predictive maintenance dataset, which contains five failure types, we determined that XGBoost with no handling achieved the highest macro-averaged F1 (macro-F1) score (0.842) but obtained 0% recall for tool wear failure (TWF). MLR with selective manual oversampling achieved approximately 50% TWF recall with lower overall performance (0.636 macro-F1) than top-performing models such as XGBoost. We also found that very rare classes remain difficult to detect. Even high-performing models fail to consistently detect all five failure types. Overall, no single strategy can achieve a high detection rate across all performance measures.
Full article
(This article belongs to the Section Computational Engineering)
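Two of the four imbalance-handling approaches compared above, class weighting and manual oversampling, can be sketched in a few lines (the toy label counts and helper names are illustrative, not drawn from the AI4I 2020 dataset):

```python
from collections import Counter
import random

random.seed(0)

# Toy multiclass labels: 'normal' dominates, failure modes are rare.
labels = ['normal'] * 90 + ['HDF'] * 6 + ['TWF'] * 4

def class_weights(labels):
    """Inverse-frequency class weights (rare classes get large weights)."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * cnt) for c, cnt in counts.items()}

def manual_oversample(samples, labels, target_count):
    """Duplicate minority-class samples until each class reaches target_count."""
    by_class = {}
    for s, c in zip(samples, labels):
        by_class.setdefault(c, []).append(s)
    out = []
    for c, group in by_class.items():
        while len(group) < target_count:
            group.append(random.choice(group))   # sample with replacement
        out.extend((s, c) for s in group)
    return out

print(class_weights(labels))
```

With these counts, TWF receives roughly 8.3x the weight of an average class, which raises its recall but, as the paper finds, still cannot guarantee detection of the rarest failure types.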
Open AccessArticle
Spatiotemporal Modelling of CAR-T Cell Therapy in Solid Tumours: Mechanisms of Antigen Escape and Immunosuppression
by
Maxim Polyakov
Computation 2026, 14(4), 87; https://doi.org/10.3390/computation14040087 - 7 Apr 2026
Abstract
CAR-T cell therapy has shown substantial efficacy in haematological malignancies, but its application to solid tumours remains limited by poor effector-cell infiltration, functional exhaustion, antigenic heterogeneity, and an immunosuppressive microenvironment. In this study, we develop a new spatiotemporal mathematical model of CAR-T therapy for solid tumours that integrates these resistance mechanisms within a single reaction–diffusion framework. The model is formulated as a system of partial differential equations describing functional and exhausted CAR-T cells, antigen-positive and antigen-low tumour subpopulations, and chemokine, immunosuppressive, and hypoxic fields. Steady-state analysis and finite-difference simulations showed that therapeutic outcome is governed by the interplay between CAR-T cell infiltration, exhaustion, and antigen escape. The model reproduces partial tumour regression followed by residual tumour persistence, therapy-driven enrichment of antigen-low cells, and reduced efficacy under stronger immunosuppressive and hypoxic conditions. In the combination therapy scenario considered here, repeated simulated CAR-T cell administration together with attenuation of the suppressive microenvironment improves tumour control. The proposed model provides a mechanistic basis for analysing resistance and for future optimisation studies of CAR-T therapy in solid tumours.
Full article
(This article belongs to the Section Computational Biology)
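The reaction–diffusion backbone of such models can be sketched with a one-dimensional explicit finite-difference step for a diffusing effector-cell density E and a tumour density T with a mass-action kill term (all coefficients and the reduced two-equation system are illustrative, not the paper's full PDE model):

```python
import numpy as np

# Grid and (illustrative) coefficients; explicit stability needs D*dt/dx**2 <= 0.5.
nx, dx, dt = 100, 0.1, 0.004
D, kill, growth = 0.5, 1.0, 0.1

E = np.zeros(nx); E[:5] = 1.0      # CAR-T cells infiltrate from the left edge
T = np.ones(nx) * 0.8              # initial tumour density

def step(E, T):
    """One explicit Euler step: E diffuses; T grows logistically and is killed."""
    lap = np.zeros_like(E)
    lap[1:-1] = (E[2:] - 2 * E[1:-1] + E[:-2]) / dx**2   # 1D Laplacian
    lap[0] = 2 * (E[1] - E[0]) / dx**2                   # no-flux boundaries
    lap[-1] = 2 * (E[-2] - E[-1]) / dx**2
    E_new = E + dt * D * lap                             # CAR-T diffusion
    T_new = T + dt * (growth * T * (1 - T) - kill * E * T)
    return E_new, np.maximum(T_new, 0.0)

for _ in range(500):
    E, T = step(E, T)
print(round(float(T[0]), 3), round(float(T[-1]), 3))
```

Even this reduced sketch reproduces the qualitative spatial pattern the abstract describes: regression near the infiltration front and tumour persistence where effector cells have not yet diffused.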
Topics
Topic in
Axioms, Computation, Fractal Fract, Mathematics, Symmetry
Fractional Calculus: Theory and Applications, 2nd Edition
Topic Editors: António Lopes, Liping Chen, Sergio Adriani David, Alireza Alfi
Deadline: 30 May 2026
Topic in
Brain Sciences, NeuroSci, Applied Sciences, Mathematics, Computation
The Computational Brain
Topic Editors: William Winlow, Andrew Johnson
Deadline: 31 July 2026
Topic in
Sustainability, Remote Sensing, Forests, Applied Sciences, Computation
Artificial Intelligence, Remote Sensing and Digital Twin Driving Innovation in Sustainable Natural Resources and Ecology
Topic Editors: Huaiqing Zhang, Ting Yun
Deadline: 31 January 2027
Topic in
Algorithms, AppliedMath, Computation, Mathematics, Symmetry, Sci, Applied Sciences
Intelligent Optimization Algorithm: Theory and Applications, 2nd Edition
Topic Editors: Shi Cheng, Chaomin Luo, Shangce Gao
Deadline: 31 March 2027
Special Issues
Special Issue in
Computation
Advances in Computational Methods for Soil Stability Analysis and Slope Engineering
Guest Editor: Florin Dumitru Popescu
Deadline: 31 May 2026
Special Issue in
Computation
Nonlinear System Modelling and Control—2nd Edition
Guest Editor: Chathura Wanigasekara
Deadline: 1 June 2026
Special Issue in
Computation
Object Detection Models for Transportation Systems
Guest Editors: Taqwa AlHadidi, Shadi Jaradat, Ahmed Jaber
Deadline: 30 June 2026
Special Issue in
Computation
Multi-Omics for Diagnosing Diseases: Bioinformatics Approaches and Integrative Data Analyses
Guest Editors: Emanuel Maldonado, Imran Khan
Deadline: 30 June 2026